WorldWideScience

Sample records for local instrumental errors

  1. Error Control in Distributed Node Self-Localization

    Directory of Open Access Journals (Sweden)

    Ying Zhang

    2008-03-01

Full Text Available Location information of nodes in an ad hoc sensor network is essential to many tasks such as routing, cooperative sensing, and service delivery. Distributed node self-localization is lightweight and requires little communication overhead, but often suffers from the adverse effects of error propagation. Unlike most localization papers, which focus on designing elaborate localization algorithms, this paper focuses on the error propagation problem itself, addressing questions such as where localization error comes from and how it propagates from node to node. To prevent error from propagating and accumulating, we develop an error-control mechanism based on the characterization of node uncertainties and discrimination between neighboring nodes. The mechanism uses only local knowledge and is fully decentralized. Simulation results show that the resulting active selection strategy significantly mitigates the effect of error propagation for both range and directional sensors, greatly improving localization accuracy and robustness.
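The core idea of the record above — tracking each node's positional uncertainty and preferring low-uncertainty neighbors so that error does not accumulate — can be sketched generically. This is an illustrative model, not the authors' algorithm; the uncertainty-combination rule below is an assumption:

```python
import math

def propagated_sigma(neighbor_sigmas, range_sigma):
    """First-order combination of ranging noise with neighbor position
    uncertainty (illustrative model: variances add, neighbors averaged)."""
    avg_var = sum(s * s for s in neighbor_sigmas) / len(neighbor_sigmas)
    return math.sqrt(range_sigma ** 2 + avg_var)

def select_neighbors(candidates, k):
    """Active selection: keep the k localized neighbors with the smallest
    position uncertainty, to limit error propagation."""
    return sorted(candidates, key=lambda c: c[1])[:k]

# candidates: (node_id, position sigma in meters)
candidates = [("a", 0.50), ("b", 0.10), ("c", 0.90), ("d", 0.20)]
best = select_neighbors(candidates, k=2)
naive = candidates[:2]  # e.g. first-heard neighbors, regardless of quality

sigma_best = propagated_sigma([s for _, s in best], range_sigma=0.05)
sigma_naive = propagated_sigma([s for _, s in naive], range_sigma=0.05)
# Active selection yields a smaller propagated uncertainty than naive choice.
```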

  2. Quantification and handling of sampling errors in instrumental measurements: a case study

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.

    2004-01-01

    in certain situations, the effect of systematic errors is also considerable. The relevant errors contributing to the prediction error are: error in instrumental measurements (x-error), error in reference measurements (y-error), error in the estimated calibration model (regression coefficient error) and model...

  3. Local high precision 3D measurement based on line laser measuring instrument

    Science.gov (United States)

    Zhang, Renwei; Liu, Wei; Lu, Yongkang; Zhang, Yang; Ma, Jianwei; Jia, Zhenyuan

    2018-03-01

To achieve precision machining and assembly of parts, the geometric dimensions of local assembly surfaces must be strictly guaranteed. In this paper, a local high-precision three-dimensional measurement method based on a line laser measuring instrument is proposed to achieve highly accurate three-dimensional reconstruction of a surface. To address the fact that a two-dimensional line laser measuring instrument lacks high-precision information along one axis, a local three-dimensional profile measuring system based on an accurate single-axis controller is developed. First, a three-dimensional data compensation method based on a spatial multi-angle line laser measuring instrument is proposed to achieve high-precision measurement along the missing axis. Through preprocessing of the 3D point cloud, the measurement points can be restored accurately. Finally, a target spherical surface is scanned locally in three dimensions for accuracy verification. The experimental results show that this scheme obtains local three-dimensional information of the target quickly and accurately, compensates the errors in the laser scanner data, and improves local measurement accuracy.

  4. Optimal Inference for Instrumental Variables Regression with non-Gaussian Errors

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper is concerned with inference on the coefficient on the endogenous regressor in a linear instrumental variables model with a single endogenous regressor, nonrandom exogenous regressors and instruments, and i.i.d. errors whose distribution is unknown. It is shown that under mild smoothness...

  5. Wavefront-Error Performance Characterization for the James Webb Space Telescope (JWST) Integrated Science Instrument Module (ISIM) Science Instruments

    Science.gov (United States)

    Aronstein, David L.; Smith, J. Scott; Zielinski, Thomas P.; Telfer, Randal; Tournois, Severine C.; Moore, Dustin B.; Fienup, James R.

    2016-01-01

The science instruments (SIs) comprising the James Webb Space Telescope (JWST) Integrated Science Instrument Module (ISIM) were tested in three cryogenic-vacuum test campaigns in the NASA Goddard Space Flight Center (GSFC)'s Space Environment Simulator (SES). In this paper, we describe the results of optical wavefront-error performance characterization of the SIs. The wavefront error is determined using image-based wavefront sensing (also known as phase retrieval), and the primary data used by this process are focus sweeps, a series of images recorded by the instrument under test in its as-used configuration, in which the focal plane is systematically changed from one image to the next. High-precision determination of the wavefront error also requires several sources of secondary data, including 1) spectrum, apodization, and wavefront-error characterization of the optical ground-support equipment (OGSE) illumination module, called the OTE Simulator (OSIM), 2) plate scale measurements made using a Pseudo-Nonredundant Mask (PNRM), and 3) pupil geometry predictions as a function of SI and field point, which are complicated because of a tricontagon-shaped outer perimeter and small holes that appear in the exit pupil due to the way that different light sources are injected into the optical path by the OGSE. One set of wavefront-error tests, for the coronagraphic channel of the Near-Infrared Camera (NIRCam) Longwave instruments, was performed using data from transverse translation diversity sweeps instead of focus sweeps, in which a sub-aperture is translated and/or rotated across the exit pupil of the system. Several optical-performance requirements that were verified during this ISIM-level testing are levied on the uncertainties of various wavefront-error-related quantities rather than on the wavefront errors themselves.
This paper also describes the methodology, based on Monte Carlo simulations of the wavefront-sensing analysis of focus-sweep data, used to establish the

  6. Optimizer convergence and local minima errors and their clinical importance

    International Nuclear Information System (INIS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-01-01

Two of the errors common in inverse treatment planning optimization have been investigated. The first is the optimizer convergence error, which appears because of imperfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors, and their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing method and a deterministic gradient method, were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., those due to inaccuracy of current dose calculation algorithms. This indicates that stopping criteria could often be relaxed, leading to optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained, the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2), indicating the clinical importance of the local minima produced by physical optimization.
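The local-minima effect described in this record is easy to reproduce on a toy non-convex objective (a generic sketch, unrelated to the clinical planning system): gradient descent converges to whichever basin the starting point lies in, and the two "solutions" have different objective values.

```python
def f(x):           # non-convex objective with two unequal minima
    return x ** 4 - 3 * x ** 2 + x

def df(x):          # its derivative
    return 4 * x ** 3 - 6 * x + 1

def gradient_descent(x, lr=0.01, steps=5000):
    for _ in range(steps):
        x -= lr * df(x)
    return x

x_left = gradient_descent(-2.0)   # converges to the deeper left minimum
x_right = gradient_descent(+2.0)  # trapped in the shallower right minimum
# f(x_left) < f(x_right): the "local minima error" is the gap between them.
```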

  7. Metrological Array of Cyber-Physical Systems. Part 7. Additive Error Correction for Measuring Instrument

    Directory of Open Access Journals (Sweden)

    Yuriy YATSUK

    2015-06-01

Full Text Available During design, the uncertainty approach cannot be used because measurement results are not yet available; the error approach, however, can be applied successfully by taking the nominal value of the instrument's transformation function as true. The limiting possibilities of additive error correction in measuring instruments for Cyber-Physical Systems are studied on the basis of general and special methods of measurement. Principles of maximal symmetry of the measuring circuit and of its minimal reconfiguration between measurement and calibration are proposed. It is theoretically justified, for a variety of correction methods, that a minimum additive error of measuring instruments exists when the real equivalent parameters of the input electronic switches are taken into account. Conditions for self-calibration and in-place verification of the measuring instruments are also studied.

  8. Limit of detection in the presence of instrumental and non-instrumental errors: study of the possible sources of error and application to the analysis of 41 elements at trace levels by inductively coupled plasma-mass spectrometry technique

    International Nuclear Information System (INIS)

    Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Tapparo, Andrea; Pastore, Paolo

    2015-01-01

In this paper the detection limit was estimated for signals affected by two error contributions, namely instrumental errors and operational (non-instrumental) errors. The detection limit was obtained theoretically following the hypothesis-testing scheme implemented with the calibration-curve methodology. The experimental calibration design was based on J standards measured I times, with non-instrumental errors affecting each standard systematically but randomly among the J levels. A two-component variance regression was performed to determine the calibration curve and to define the detection limit under these conditions. The detection limit values obtained from the calibration of 41 elements at trace levels by ICP-MS were larger than those obtainable from a one-component variance regression. The role of reagent impurities in the instrumental errors was ascertained and taken into account. Environmental pollution was studied as a source of non-instrumental errors; its role was evaluated by Principal Component Analysis (PCA) applied to a series of nine calibrations performed over fourteen months. The influence of the seasonality of environmental pollution on the detection limit was evident for many elements usually present in urban airborne particulate. The results clearly indicate the need for the two-component variance regression approach when calibrating for elements present in the environment at significant concentration levels. - Highlights: • Limit of detection was obtained considering a two-variance-component regression. • Calibration data may be affected by instrumental and operational-condition errors. • Calibration model was applied to determine 41 elements at trace level by ICP-MS. • Non-instrumental errors were evidenced by PCA analysis
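As a generic illustration of the calibration-curve approach to detection limits (not the paper's two-component regression itself, which additionally models between-level operational variance in the fit), one can fit an ordinary calibration line and inflate the blank standard deviation with an operational component; all numbers below are invented:

```python
import math

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def detection_limit(slope, s_instr, s_oper=0.0, k=3.3):
    """LOD = k * s0 / slope, where s0 combines instrumental and
    operational (non-instrumental) error contributions in quadrature."""
    s0 = math.sqrt(s_instr ** 2 + s_oper ** 2)
    return k * s0 / slope

x = [0.0, 1.0, 2.0, 4.0, 8.0]       # standard concentrations
y = [2.0 * xi + 0.5 for xi in x]    # idealized noise-free signals
slope, intercept = fit_line(x, y)

lod_one = detection_limit(slope, s_instr=0.10)               # instrumental only
lod_two = detection_limit(slope, s_instr=0.10, s_oper=0.15)  # both components
# lod_two > lod_one: ignoring operational errors understates the LOD.
```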

  9. On the Interpretation of Instrumental Variables in the Presence of Specification Errors

    Directory of Open Access Journals (Sweden)

    P.A.V.B. Swamy

    2015-01-01

Full Text Available The method of instrumental variables (IV) and the generalized method of moments (GMM), and their applications to the estimation of errors-in-variables and simultaneous equations models in econometrics, require data on a sufficient number of instrumental variables that are both exogenous and relevant. We argue that, in general, such instruments (weak or strong) cannot exist.
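For context, the basic IV estimator these records refer to reduces, in the single-regressor just-identified case, to a ratio of sample covariances. The toy data below are constructed purely for illustration so that the regressor is correlated with the error term in-sample while the instrument is not:

```python
def cov(a, b):
    """Sample covariance (population normalization; the constant cancels)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n

z = [1.0, 2.0, 3.0, 4.0]        # instrument: exogenous and relevant
u = [1.0, -1.0, -1.0, 1.0]      # error term, orthogonal to z in-sample
x = [zi + ui for zi, ui in zip(z, u)]        # endogenous regressor
y = [2.0 * xi + ui for xi, ui in zip(x, u)]  # true coefficient is 2.0

beta_ols = cov(x, y) / cov(x, x)  # biased upward by the endogeneity
beta_iv = cov(z, y) / cov(z, x)   # consistent: recovers 2.0 here
```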

  10. Magnitude of pseudopotential localization errors in fixed node diffusion quantum Monte Carlo.

    Science.gov (United States)

    Krogel, Jaron T; Kent, P R C

    2017-06-28

Growth in computational resources has led to the application of real-space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed-node energy and estimate the localization error in both the locality approximation and the T-moves scheme for the Ce atom in charge states 3+ and 4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves, although the locality approximation energy approaches the localization-free limit from above/below for the 3+/4+ charge state. We find that energy-minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance-minimized Jastrows are generally less effective. Our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium-core pseudopotentials are applied to heavy elements such as Ce.

  11. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    Directory of Open Access Journals (Sweden)

    Francisco J. Casas

    2015-08-01

    Full Text Available This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.
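As background to the Stokes-parameter measurements described above: for a fully linearly polarized input at angle ψ the parameters reduce to Q = I·cos 2ψ and U = I·sin 2ψ. A minimal sketch of that relation (generic polarization optics, not the QUIJOTE processing pipeline):

```python
import math

def stokes_linear(amplitude, psi):
    """Stokes I, Q, U for a fully linearly polarized wave at angle psi."""
    ex = amplitude * math.cos(psi)   # x field component
    ey = amplitude * math.sin(psi)   # y field component
    i = ex * ex + ey * ey
    q = ex * ex - ey * ey            # = I * cos(2 psi)
    u = 2.0 * ex * ey                # = I * sin(2 psi) for in-phase components
    return i, q, u

i0, q0, u0 = stokes_linear(1.0, 0.0)             # horizontal: Q = I, U = 0
i45, q45, u45 = stokes_linear(1.0, math.pi / 4)  # 45 degrees: Q = 0, U = I
```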

  12. Assessment of the uncertainty associated with systematic errors in digital instruments: an experimental study on offset errors

    International Nuclear Information System (INIS)

    Attivissimo, F; Giaquinto, N; Savino, M; Cataldo, A

    2012-01-01

This paper deals with the assessment of the uncertainty due to systematic errors, particularly in A/D conversion-based instruments. The problem of defining and assessing systematic errors is briefly discussed, and the conceptual scheme of gauge repeatability and reproducibility is adopted. A practical example regarding the evaluation of the uncertainty caused by the systematic offset error is presented. The experimental results, obtained under various ambient conditions, show that modelling the variability of systematic errors is more problematic than the ISO 5725 norm suggests. Additionally, the paper demonstrates the substantial difference between the type B uncertainty evaluation, obtained via the maximum entropy principle applied to the manufacturer's specifications, and the type A (experimental) uncertainty evaluation, which reflects what is actually observable. Although it is reasonable to assume a uniform distribution of the offset error, experiments demonstrate that the distribution is not centred and that a correction must be applied. In such a context, this work motivates a more pragmatic and experimental approach to uncertainty with respect to the directions of Supplement 1 of the GUM. (paper)
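The type B versus type A contrast discussed above can be made concrete. Under the usual uniform-distribution assumption over a spec limit ±a, the type B standard uncertainty is a/√3, while the type A value comes from repeated observations; a non-centred offset distribution additionally calls for a correction equal to minus the mean offset. The numbers here are invented for illustration:

```python
import math
import statistics

def type_b_uniform(a):
    """Type B standard uncertainty for a uniform distribution over [-a, +a]."""
    return a / math.sqrt(3)

def type_a(observations):
    """Type A: experimental standard deviation of the mean."""
    return statistics.stdev(observations) / math.sqrt(len(observations))

offsets = [0.12, 0.15, 0.11, 0.14, 0.13]  # hypothetical repeated offset readings (mV)
u_b = type_b_uniform(0.30)                # from a hypothetical +/-0.30 mV spec
u_a = type_a(offsets)
correction = -statistics.mean(offsets)    # non-centred error -> apply a correction
```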

  13. Error analysis of marker-based object localization using a single-plane XRII

    International Nuclear Information System (INIS)

    Habets, Damiaan F.; Pollmann, Steven I.; Yuan, Xunhua; Peters, Terry M.; Holdsworth, David W.

    2009-01-01

The role of imaging and image guidance is increasing in surgery and therapy, including treatment planning and follow-up. Fluoroscopy is used for two-dimensional (2D) guidance or localization; however, many procedures would benefit from three-dimensional (3D) guidance or localization. Three-dimensional computed tomography (CT) using a C-arm mounted x-ray image intensifier (XRII) can provide high-quality 3D images; however, patient dose and the required acquisition time restrict the number of 3D images that can be obtained. C-arm based 3D CT is therefore limited in applications for x-ray based image guidance or dynamic evaluations. 2D-3D model-based registration, using a single-plane 2D digital radiographic system, does allow for rapid 3D localization. Our goal is to investigate - over a clinically practical range - the impact of x-ray exposure on the resulting range of 3D localization precision. In this paper it is assumed that the tracked instrument incorporates a rigidly attached 3D object with a known configuration of markers. A 2D image is obtained by a digital fluoroscopic x-ray system and corrected for XRII distortions (±0.035 mm) and mechanical C-arm shift (±0.080 mm). A least-squares projection-Procrustes analysis is then used to calculate the 3D position from the measured 2D marker locations. The effect of x-ray exposure on the precision of 2D marker localization and on 3D object localization was investigated using numerical simulations and x-ray experiments. The results show a nearly linear relationship between 2D marker localization precision and 3D localization precision; however, they also demonstrate a significant amplification of error, distributed nonuniformly among the three major axes. To obtain a 3D localization error of less than ±1.0 mm for an object with 20 mm marker spacing, the 2D localization precision must be better than ±0.07 mm. This requirement was met for all investigated nominal x-ray exposures at 28 cm FOV, and
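The projection-Procrustes step in the record above solves a rigid alignment between measured and model marker positions. A minimal 2-D analogue of that alignment (the full method is 3-D and includes the projection model, which is omitted here) recovers the rotation in closed form after removing centroids:

```python
import math

def rigid_rotation_2d(p, q):
    """Closed-form least-squares rotation angle mapping point set p onto q
    (2-D Procrustes without scaling), after removing the centroids."""
    n = len(p)
    cpx, cpy = sum(x for x, _ in p) / n, sum(y for _, y in p) / n
    cqx, cqy = sum(x for x, _ in q) / n, sum(y for _, y in q) / n
    cross = dot = 0.0
    for (px, py), (qx, qy) in zip(p, q):
        px, py, qx, qy = px - cpx, py - cpy, qx - cqx, qy - cqy
        cross += px * qy - py * qx   # sums sin(theta) * |p|^2 terms
        dot += px * qx + py * qy     # sums cos(theta) * |p|^2 terms
    return math.atan2(cross, dot)

theta = 0.3  # ground-truth rotation
p = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.5, -0.5)]
q = [(x * math.cos(theta) - y * math.sin(theta),
      x * math.sin(theta) + y * math.cos(theta)) for x, y in p]
theta_hat = rigid_rotation_2d(p, q)  # recovers theta in the noise-free case
```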

  14. Error Estimation for the Linearized Auto-Localization Algorithm

    Directory of Open Access Journals (Sweden)

    Fernando Seco

    2012-02-01

Full Text Available The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons' positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first-order Taylor approximation of the equations. Since the method depends on this approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weight-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method.
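The first-order Taylor propagation used by the method above has a familiar scalar form: for a range d(x, y) = ‖(x, y) − b‖ to a beacon b, the variance is σ_d² ≈ (∂d/∂x)²σ_x² + (∂d/∂y)²σ_y². A generic sketch of that propagation step (not the LAL equations themselves):

```python
import math

def range_sigma(pos, beacon, sigma_x, sigma_y):
    """First-order propagation of position uncertainty into a range
    measurement d = ||pos - beacon|| (uncorrelated x/y errors assumed)."""
    dx, dy = pos[0] - beacon[0], pos[1] - beacon[1]
    d = math.hypot(dx, dy)
    ddx, ddy = dx / d, dy / d   # partial derivatives of d w.r.t. x and y
    return math.sqrt((ddx * sigma_x) ** 2 + (ddy * sigma_y) ** 2)

# Beacon at (3, 4), mobile node at the origin, 0.1 m sigma on each axis:
sigma_d = range_sigma((0.0, 0.0), (3.0, 4.0), 0.1, 0.1)
# Equal per-axis errors give sigma_d = 0.1 regardless of geometry,
# since the gradient of d is a unit vector.
```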

  15. Human dorsal striatum encodes prediction errors during observational learning of instrumental actions.

    Science.gov (United States)

    Cooper, Jeffrey C; Dunne, Simon; Furey, Teresa; O'Doherty, John P

    2012-01-01

The dorsal striatum plays a key role in the learning and expression of instrumental reward associations that are acquired through direct experience. However, not all learning about instrumental actions requires direct experience. Instead, humans and other animals are also capable of acquiring instrumental actions by observing the experiences of others. In this study, we investigated the extent to which the human dorsal striatum is involved in observational as well as experiential instrumental reward learning. Human participants were scanned with fMRI while they observed a confederate over a live video performing an instrumental conditioning task to obtain liquid juice rewards. Participants also performed a similar instrumental task for their own rewards. Using a computational model-based analysis, we found reward prediction errors in the dorsal striatum not only during the experiential learning condition but also during observational learning. These results suggest a key role for the dorsal striatum in learning instrumental associations, even when those associations are acquired purely by observing others.
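The reward prediction error signal referred to in this and the following record is conventionally modelled with a Rescorla-Wagner/temporal-difference update, δ = r − V, V ← V + αδ. A minimal sketch (illustrative only, not the paper's fitted model; the learning rate is an arbitrary choice):

```python
def learn(rewards, alpha=0.5):
    """Track the value estimate V and the prediction errors across trials."""
    v, deltas = 0.0, []
    for r in rewards:
        delta = r - v          # reward prediction error
        deltas.append(delta)
        v += alpha * delta     # value update
    return v, deltas

v, deltas = learn([1.0, 1.0, 1.0])
# V converges toward 1.0 (0.5, 0.75, 0.875) and the errors shrink trial by trial.
```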

  16. MEASURING LOCAL GRADIENT AND SKEW QUADRUPOLE ERRORS IN RHIC IRS

    International Nuclear Information System (INIS)

    CARDONA, J.; PEGGS, S.; PILAT, R.; PTITSYN, V.

    2004-01-01

    The measurement of local linear errors at RHIC interaction regions using an ''action and phase'' analysis of difference orbits has already been presented [2]. This paper evaluates the accuracy of this technique using difference orbits that were taken when known gradient errors and skew quadrupole errors were intentionally introduced. It also presents action and phase analysis of simulated orbits when controlled errors are intentionally placed in a RHIC simulation model

  17. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    Science.gov (United States)

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  18. Interaction of Instrumental and Goal-Directed Learning Modulates Prediction Error Representations in the Ventral Striatum.

    Science.gov (United States)

    Guo, Rong; Böhmer, Wendelin; Hebart, Martin; Chien, Samson; Sommer, Tobias; Obermayer, Klaus; Gläscher, Jan

    2016-12-14

Goal-directed and instrumental learning are both important controllers of human behavior. Learning which stimulus events occur in the environment and the rewards associated with them allows humans to seek out the most valuable stimulus and move through the environment in a goal-directed manner. Stimulus-response associations are characteristic of instrumental learning, whereas response-outcome associations are the hallmark of goal-directed learning. Here we provide behavioral, computational, and neuroimaging results from a novel task in which stimulus-response and response-outcome associations are learned simultaneously but dominate behavior at different stages of the experiment. We found that prediction error representations in the ventral striatum depend on which type of learning dominates. Furthermore, the amygdala tracks the time-dependent weighting of stimulus-response versus response-outcome learning. Our findings suggest that the goal-directed and instrumental controllers dynamically engage the ventral striatum in representing prediction errors whenever one of them is dominating choice behavior. Converging evidence in human neuroimaging studies has shown that reward prediction errors are correlated with activity in the ventral striatum. Our results demonstrate that this region is simultaneously correlated with a stimulus prediction error. Furthermore, the learning system that is currently dominating behavioral choice dynamically engages the ventral striatum for computing its prediction errors. This demonstrates that prediction error representations are highly dynamic and influenced by various experimental contexts. This finding points to a general role of the ventral striatum in detecting expectancy violations and encoding error signals regardless of the specific nature of the reinforcer itself. Copyright © 2016 the authors 0270-6474/16/3612650-11$15.00/0.

  19. A high-precision instrument for mapping of rotational errors in rotary stages

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Weihe; Lauer, Kenneth; Chu, Yong; Nazaretski, Evgeny

    2014-10-02

A rotational stage is a key component of every X-ray instrument capable of providing tomographic or diffraction measurements. To perform accurate three-dimensional reconstructions, runout errors due to imperfect rotation (e.g., circle of confusion) must be quantified and corrected. A dedicated instrument capable of full characterization and circle-of-confusion mapping in rotary stages down to the sub-10 nm level has been developed. A high-stability design, with an array of five capacitive sensors, allows simultaneous measurements of wobble, radial and axial displacements. The developed instrument has been used for characterization of two mechanical stages which are part of an X-ray microscope.

  20. Software for the Local Control and Instrumentation System for MFTF

    International Nuclear Information System (INIS)

    Labiak, W.G.

    1979-01-01

There are nine different systems requiring over fifty computers in the Local Control and Instrumentation System for the Mirror Fusion Test Facility. Each computer system consists of an LSI-11/2 processor with 32,000 words of memory and a serial driver that implements the CAMAC serial highway protocol. With this large number of systems, it is important that as much software as possible be common to all systems. A serial communications system has been developed for data transfers between the LSI-11/2's and the supervisory computers. This system is based on the RS-232-C interface with modem control lines. Six modem control lines are used for hardware handshaking, which allows totally independent full-duplex communications. Odd parity on each byte and a 16-bit checksum are used to detect errors in transmission.
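The error-detection scheme described above — odd parity per byte plus a 16-bit checksum over the message — can be sketched as follows. This illustrates the general technique only; it is not the MFTF source code, and the message content is invented:

```python
def odd_parity_bit(byte):
    """Parity bit that makes the total number of ones (data + parity) odd."""
    ones = bin(byte & 0xFF).count("1")
    return 0 if ones % 2 == 1 else 1

def checksum16(data):
    """16-bit arithmetic checksum of a byte string."""
    return sum(data) & 0xFFFF

message = b"MFTF local control"
frame = (message, checksum16(message))

def verify(message, checksum):
    return checksum16(message) == checksum

ok = verify(*frame)                                 # intact frame passes
corrupted = verify(message[:-1] + b"X", frame[1])   # single-byte change detected
```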

  21. A Design-Adaptive Local Polynomial Estimator for the Errors-in-Variables Problem

    KAUST Repository

    Delaigle, Aurore

    2009-03-01

Local polynomial estimators are popular techniques for nonparametric regression estimation and have received great attention in the literature. Their simplest version, the local constant estimator, can be easily extended to the errors-in-variables context by exploiting its similarity with the deconvolution kernel density estimator. The generalization of the higher-order versions of the estimator, however, is not straightforward and has remained an open problem for the last 15 years. We propose an innovative local polynomial estimator of any order in the errors-in-variables context, derive its design-adaptive asymptotic properties and study its finite sample performance on simulated examples. We not only provide a solution to a long-standing open problem, but also make methodological contributions to errors-in-variables regression, including local polynomial estimation of derivative functions.
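The "simplest version" this abstract mentions, the local constant (Nadaraya-Watson) estimator, is a kernel-weighted average m̂(x) = Σ K_h(x − X_i)Y_i / Σ K_h(x − X_i). Below is a sketch of the error-free version; the errors-in-variables extension discussed in the paper replaces K with a deconvolution kernel, which is not shown here:

```python
import math

def gaussian_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def local_constant(x0, xs, ys, h):
    """Nadaraya-Watson estimator: kernel-weighted local average of Y at x0."""
    weights = [gaussian_kernel((x0 - xi) / h) for xi in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [2.0 * x for x in xs]               # noise-free linear response
m0 = local_constant(0.0, xs, ys, h=1.0)  # symmetric design -> estimate 0 at x = 0
```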

  22. Characterization of identification errors and uses in localization of poor modal correlation

    Science.gov (United States)

    Martin, Guillaume; Balmes, Etienne; Chancelier, Thierry

    2017-05-01

While modal identification is a mature subject, very few studies address the characterization of errors associated with the components of a mode shape. This is particularly important in test/analysis correlation procedures, where the Modal Assurance Criterion (MAC) is used to pair modes and to localize at which sensors discrepancies occur. Poor correlation is usually attributed to modeling errors, but identification errors clearly occur as well. In particular, with 3D Scanning Laser Doppler Vibrometer measurement many transfer functions are measured. As a result, individual validation of each measurement cannot be performed manually in a reasonable time frame, and a notable fraction of measurements is expected to be fairly noisy, leading to poor identification of the associated mode shape components. The paper first addresses measurements and introduces multiple criteria. The error measures the difference between test and synthesized transfer functions around each resonance and can be used to localize poorly identified modal components. For intermediate error values, a diagnostic of the origin of the error is needed. The level evaluates the transfer function amplitude in the vicinity of a given mode and can be used to eliminate sensors with low responses. A Noise Over Signal indicator, the product of error and level, is then shown to be relevant for detecting poorly excited modes and errors due to modal property shifts between test batches. Finally, a contribution is introduced to evaluate the visibility of a mode in each transfer. Using tests on a drum brake component, these indicators are shown to provide relevant insight into the quality of measurements. In a second part, test/analysis correlation is addressed with a focus on the localization of sources of poor mode shape correlation. The MACCo algorithm, which sorts sensors by the impact of their removal on a MAC computation, is shown to be particularly relevant. Combined with the error indicator, it avoids keeping erroneous modal components
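The Modal Assurance Criterion central to the correlation procedure above is a normalized squared inner product between two mode shapes, MAC = |φ₁ᴴφ₂|² / ((φ₁ᴴφ₁)(φ₂ᴴφ₂)). A minimal real-valued sketch (complex shapes would use conjugate transposes):

```python
def mac(phi1, phi2):
    """Modal Assurance Criterion between two real mode shape vectors:
    1.0 for perfectly correlated shapes, 0.0 for orthogonal ones."""
    num = sum(a * b for a, b in zip(phi1, phi2)) ** 2
    den = sum(a * a for a in phi1) * sum(b * b for b in phi2)
    return num / den

same = mac([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])   # scaled copy  -> 1.0
ortho = mac([1.0, 0.0], [0.0, 1.0])            # orthogonal   -> 0.0
```

The MAC is scale-invariant, which is why a scaled copy of a shape still scores 1.0; removing a sensor and recomputing the MAC (as MACCo does) reveals which components drive poor correlation.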

  23. Local Use-Dependent Sleep in Wakefulness Links Performance Errors to Learning.

    Science.gov (United States)

    Quercia, Angelica; Zappasodi, Filippo; Committeri, Giorgia; Ferrara, Michele

    2018-01-01

Sleep and wakefulness are no longer to be considered discrete states. During wakefulness, brain regions can enter a sleep-like state (off-periods) in response to a prolonged period of activity (local use-dependent sleep). Similarly, during non-REM sleep the slow-wave activity, the hallmark of sleep plasticity, increases locally in brain regions previously involved in a learning task. Recent studies have demonstrated that behavioral performance may be impaired by off-periods in wake in task-related regions. However, the relation between off-periods in wake, related performance errors and learning is still untested in humans. Here, by employing high-density electroencephalographic (hd-EEG) recordings, we investigated local use-dependent sleep in wake, asking participants to repeat continuously two intensive spatial navigation tasks. Critically, one task relied on previous map learning (Wayfinding) while the other did not (Control). Behaviorally awake participants, who were not sleep deprived, showed progressive increments of delta activity only during the learning-based spatial navigation task. As shown by source localization, delta activity was mainly localized in the left parietal and bilateral frontal cortices, all regions known to be engaged in spatial navigation tasks. Moreover, during the Wayfinding task, these increments of delta power were specifically associated with errors, whose probability of occurrence was significantly higher compared to the Control task. Unlike the Wayfinding task, during the Control task neither delta activity nor the number of errors increased progressively. Furthermore, during the Wayfinding task, both the number and the amplitude of individual delta waves, as indexes of neuronal silence in wake (off-periods), were significantly higher during errors than hits. Finally, a path analysis linked the learning-related engagement of spatial navigation circuits to off-periods in wake. In conclusion, local sleep regulation in

  4. Error Analysis for RADAR Neighbor Matching Localization in Linear Logarithmic Strength Varying Wi-Fi Environment

    Directory of Open Access Journals (Sweden)

    Mu Zhou

    2014-01-01

    Full Text Available This paper studies the statistical errors of the fingerprint-based RADAR neighbor matching localization with linearly calibrated reference points (RPs) in a logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve efficient and reliable location-based services (LBSs) as well as ubiquitous context-awareness in Wi-Fi environments, much attention has to be paid to highly accurate and cost-efficient localization systems. To this end, the statistical errors of the widely used neighbor matching localization are analyzed in detail in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs, using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions for the statistical errors of RADAR neighbor matching localization can be an effective tool for exploring alternative deployments of fingerprint-based neighbor matching localization systems in the future.
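As a concrete illustration of the setting this abstract analyzes (linearly calibrated RPs under a logarithmic RSS model, matched by nearest neighbor), the following sketch simulates fingerprint localization; all positions, path-loss parameters, and noise levels are assumed for illustration only:

```python
import numpy as np

# Hypothetical sketch: nearest-neighbor fingerprint localization with RPs on a
# line, under a linear logarithmic (log-distance) RSS model. All positions and
# path-loss parameters are assumed for illustration.
def rss(d, p0=-40.0, n=3.0):
    """RSS (dBm) at distance d metres: p0 at 1 m, path-loss exponent n."""
    return p0 - 10.0 * n * np.log10(d)

rp_x = np.linspace(1.0, 10.0, 10)   # linearly calibrated reference points (m)
fingerprints = rss(rp_x)            # stored RSS fingerprint of each RP

def locate(measured_rss):
    """Return the RP whose stored fingerprint is nearest to the measurement."""
    return rp_x[np.argmin(np.abs(fingerprints - measured_rss))]

# The localization error depends on the RP spacing plus the noise-induced
# mismatch, which is what a closed-form analysis characterizes statistically.
true_x = 4.3
noisy_rss = rss(true_x) + np.random.default_rng(0).normal(0.0, 2.0)
err = abs(locate(noisy_rss) - true_x)
```

Because the estimate snaps to the nearest RP, the error is driven jointly by RP placement and RSS noise, the two quantities whose relation the paper derives in closed form.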

  5. Error Analysis for RADAR Neighbor Matching Localization in Linear Logarithmic Strength Varying Wi-Fi Environment

    Science.gov (United States)

    Tian, Zengshan; Xu, Kunjie; Yu, Xiang

    2014-01-01

    This paper studies the statistical errors of the fingerprint-based RADAR neighbor matching localization with linearly calibrated reference points (RPs) in a logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve efficient and reliable location-based services (LBSs) as well as ubiquitous context-awareness in Wi-Fi environments, much attention has to be paid to highly accurate and cost-efficient localization systems. To this end, the statistical errors of the widely used neighbor matching localization are analyzed in detail in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs, using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions for the statistical errors of RADAR neighbor matching localization can be an effective tool for exploring alternative deployments of fingerprint-based neighbor matching localization systems in the future. PMID:24683349

  6. Local tax interaction with multiple tax instruments: evidence from Flemish municipalities

    OpenAIRE

    S. VAN PARYS; B. MERLEVEDE; T. VERBEKE

    2010-01-01

    We investigate the long-run result of strategic interaction among local jurisdictions using multiple tax instruments. Most studies of local policy interaction consider only a single policy instrument. With multiple tax instruments, however, tax interaction is more complex. We construct a simple theoretical framework based on a basic spillover model, with two tax rates and immobile resources. We show that the signs of within and cross tax interaction crucially depend on the extent to which ...

  7. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan; Hart, Jeffrey D.; Janicki, Ryan; Carroll, Raymond J.

    2010-01-01

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal

  8. Weak instruments and the first stage F-statistic in IV models with a nonscalar error covariance structure

    NARCIS (Netherlands)

    Bun, M.; de Haan, M.

    2010-01-01

    We analyze the usefulness of the first stage F-statistic for detecting weak instruments in the IV model with a nonscalar error covariance structure. In particular, we question the validity of the rule of thumb of a first stage F-statistic of 10 or higher for models with correlated errors.

  9. Clinical and Radiographic Evaluation of Procedural Errors during Preparation of Curved Root Canals with Hand and Rotary Instruments: A Randomized Clinical Study

    Science.gov (United States)

    Khanna, Rajesh; Handa, Aashish; Virk, Rupam Kaur; Ghai, Deepika; Handa, Rajni Sharma; Goel, Asim

    2017-01-01

    Background: Cleaning and shaping the canal is not an easy goal to achieve, as canal curvature plays a significant role during the instrumentation of curved canals. Aim: The present in vivo study was conducted to evaluate procedural errors during the preparation of curved root canals using hand Nitiflex and rotary K3XF instruments. Materials and Methods: Procedural errors such as ledge formation, instrument separation, and perforation (apical, furcal, strip) were determined in sixty patients, divided into two groups. In Group I, thirty teeth in thirty patients were prepared using the hand Nitiflex system, and in Group II, thirty teeth in thirty patients were prepared using the rotary K3XF system. The evaluation was done clinically as well as radiographically. The results recorded from both groups were compiled and subjected to statistical analysis. Statistical Analysis: The Chi-square test was used to compare the procedural errors (instrument separation, ledge formation, and perforation). Results: Both hand Nitiflex and rotary K3XF showed ledge formation and instrument separation, although both errors were less frequent with the rotary K3XF file system than with hand Nitiflex. No perforation was seen in either instrument group. Conclusion: Canal curvature plays a significant role during the instrumentation of curved canals. Procedural errors such as ledge formation and instrument separation were less frequent with the rotary K3XF file system than with hand Nitiflex. PMID:29042727

  10. Automatic localization of the da Vinci surgical instrument tips in 3-D transrectal ultrasound.

    Science.gov (United States)

    Mohareri, Omid; Ramezani, Mahdi; Adebar, Troy K; Abolmaesumi, Purang; Salcudean, Septimiu E

    2013-09-01

    Robot-assisted laparoscopic radical prostatectomy (RALRP) using the da Vinci surgical system is the current state-of-the-art treatment option for clinically confined prostate cancer. Given the limited field of view of the surgical site in RALRP, several groups have proposed the integration of transrectal ultrasound (TRUS) imaging in the surgical workflow to assist with accurate resection of the prostate and the sparing of the neurovascular bundles (NVBs). We previously introduced a robotic TRUS manipulator and a method for automatically tracking da Vinci surgical instruments with the TRUS imaging plane, in order to facilitate the integration of intraoperative TRUS in RALRP. Rapid and automatic registration of the kinematic frames of the da Vinci surgical system and the robotic TRUS probe manipulator is a critical component of the instrument tracking system. In this paper, we propose a fully automatic registration technique based on automatic 3-D TRUS localization of robot instrument tips pressed against the air-tissue boundary anterior to the prostate. The detection approach uses a multiscale filtering technique to identify and localize surgical instrument tips in the TRUS volume, and could also be used to detect other surface fiducials in 3-D ultrasound. Experiments have been performed using a tissue phantom and two ex vivo tissue samples to show the feasibility of the proposed methods. Also, an initial in vivo evaluation of the system has been carried out on a live anaesthetized dog with a da Vinci Si surgical system and a target registration error (defined as the root mean square distance of corresponding points after registration) of 2.68 mm has been achieved. Results show this method's accuracy and consistency for automatic registration of TRUS images to the da Vinci surgical system.

  11. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; i.e., all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  12. The behaviour of the local error in splitting methods applied to stiff problems

    International Nuclear Information System (INIS)

    Kozlov, Roman; Kvaernoe, Anne; Owren, Brynjulf

    2004-01-01

    Splitting methods are frequently used in solving stiff differential equations and it is common to split the system of equations into a stiff and a nonstiff part. The classical theory for the local order of consistency is valid only for stepsizes which are smaller than what one would typically prefer to use in the integration. Error control and stepsize selection devices based on classical local order theory may lead to unstable error behaviour and inefficient stepsize sequences. Here, the behaviour of the local error in the Strang and Godunov splitting methods is explained by using two different tools, Lie series and singular perturbation theory. The two approaches provide an understanding of the phenomena from different points of view, but both are consistent with what is observed in numerical experiments
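The classical local order the abstract refers to can be checked numerically in the non-stiff regime. The sketch below measures the one-step error of Strang splitting for a pair of arbitrary non-commuting matrices (chosen only for illustration) and recovers the expected third-order behavior, which, as the abstract notes, need not persist at the larger stepsizes used for stiff problems:

```python
import numpy as np

# Numerical check of the classical third-order local error of Strang splitting
# in the non-stiff regime. The matrices are arbitrary non-commuting examples.
def expm(M, terms=40):
    """Matrix exponential via a truncated Taylor series (fine for small ||M||)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
y0 = np.array([1.0, 1.0])

def local_error(h):
    """One-step error of Strang splitting exp(hA/2) exp(hB) exp(hA/2)."""
    exact = expm(h * (A + B)) @ y0
    strang = expm(h / 2 * A) @ expm(h * B) @ expm(h / 2 * A) @ y0
    return np.linalg.norm(exact - strang)

# Halving the stepsize should shrink the local error by about 2**3 = 8.
ratio = local_error(0.2) / local_error(0.1)
```

For a genuinely stiff splitting the observed ratio can deviate strongly from 8 at practical stepsizes, which is exactly the failure of classical local order theory the paper explains.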

  13. Design, calibration and error analysis of instrumentation for heat transfer measurements in internal combustion engines

    Science.gov (United States)

    Ferguson, C. R.; Tree, D. R.; Dewitt, D. P.; Wahiduzzaman, S. A. H.

    1987-01-01

    The paper reports the methodology and uncertainty analyses of instrumentation for heat transfer measurements in internal combustion engines. Results are presented for determining the local wall heat flux in an internal combustion engine (using a surface thermocouple-type heat flux gage) and the apparent flame temperature and soot volume fraction-path length product in a diesel engine (using two-color pyrometry). It is shown that a surface thermocouple heat transfer gage, suitably constructed and calibrated, will have an accuracy of 5 to 10 percent. It is also shown that, when applying two-color pyrometry to measure the apparent flame temperature and soot volume fraction-path length product, it is important to choose at least one of the two wavelengths in the range of 1.3 to 2.3 micrometers. A carefully calibrated two-color pyrometer can ensure that random errors in the apparent flame temperature and in the soot volume fraction-path length product remain small (within about 1 percent and 10 percent, respectively).

  14. Improving UWB-Based Localization in IoT Scenarios with Statistical Models of Distance Error.

    Science.gov (United States)

    Monica, Stefania; Ferrari, Gianluigi

    2018-05-17

    Interest in the Internet of Things (IoT) is rapidly increasing, as the number of connected devices is growing exponentially. One of the application scenarios envisaged for IoT technologies involves indoor localization and context awareness. In this paper, we focus on a localization approach that relies on a particular type of communication technology, namely Ultra Wide Band (UWB). UWB technology is an attractive choice for indoor localization, owing to its high accuracy. Since localization algorithms typically rely on estimated inter-node distances, the goal of this paper is to evaluate the improvement brought by a simple (linear) statistical model of the distance error. On the basis of an extensive experimental measurement campaign, we propose a general analytical framework, based on a Least Square (LS) method, to derive a novel statistical model for the range estimation error between a pair of UWB nodes. The proposed statistical model is then applied to improve the performance of a few illustrative localization algorithms in various realistic scenarios. The obtained experimental results show that the use of the proposed statistical model improves the accuracy of the considered localization algorithms, with a reduction of the localization error of up to 66%.
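The core idea described above, fitting a linear statistical model of the range error by least squares and inverting it to correct raw UWB range estimates, can be sketched as follows; the error coefficients and noise level are invented for illustration:

```python
import numpy as np

# Hedged sketch: fit a linear model of the UWB range error, e(d) = a*d + b,
# by least squares on calibration data, then invert it to de-bias raw range
# estimates. Coefficients and noise are invented for illustration.
rng = np.random.default_rng(1)
true_d = np.linspace(1.0, 20.0, 50)                    # calibration distances (m)
meas_d = true_d * 1.03 + 0.10 + rng.normal(0.0, 0.02, true_d.size)

# LS fit of the observed range error against the true distance
A = np.vstack([true_d, np.ones_like(true_d)]).T
a, b = np.linalg.lstsq(A, meas_d - true_d, rcond=None)[0]

def correct(d_hat):
    """Invert the fitted linear error model to de-bias a raw range estimate."""
    return (d_hat - b) / (1.0 + a)

raw = 10.0 * 1.03 + 0.10    # a noiseless 10 m measurement under the model
corrected = correct(raw)     # close to 10.0
```

Feeding such corrected ranges into any range-based localization algorithm is the mechanism by which the paper's model reduces the final position error.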

  15. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

    KAUST Repository

    Pan, B.

    2016-03-22

    Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances of these different DIC approaches have been experimentally investigated using numerical and real-world experimental tests. The results have shown that in typical cases, where the subset (element) size is no less than a few pixels and the local deformation within a subset (element) can be well approximated by the adopted shape functions, the subset-based local DIC outperforms FE-based global DIC approaches because the former provides slightly smaller root-mean-square errors and offers much higher computation efficiency. Here we investigate the theoretical origin and lay a solid theoretical basis for the previous comparison. We assume that systematic errors due to imperfect intensity interpolation and undermatched shape functions are negligibly small, and perform a theoretical analysis of the random errors or standard deviation (SD) errors in the displacements measured by two local DIC approaches (i.e., a subset-based local DIC and an element-based local DIC) and two FE-based global DIC approaches (i.e., Q4-DIC and Q8-DIC). The equations that govern the random errors in the displacements measured by these local and global DIC approaches are theoretically derived. The correctness of the theoretically predicted SD errors is validated through numerical translation tests under various noise levels. We demonstrate that the SD errors induced by the Q4-element-based local DIC, the global Q4-DIC and the global Q8-DIC are 4, 1.8-2.2 and 1.2-1.6 times greater, respectively, than that associated with the subset-based local DIC, which is consistent with our conclusions from previous work. © 2016 Elsevier Ltd. All rights reserved.

  16. The Effects of Lever Arm (Instrument Offset) Error on GRAV-D Airborne Gravity Data

    Science.gov (United States)

    Johnson, J. A.; Youngman, M.; Damiani, T.

    2017-12-01

    High quality airborne gravity collection with a 2-axis, stabilized platform gravity instrument, such as with a Micro-g LaCoste Turnkey Airborne Gravity System (TAGS), is dependent on the aircraft's ability to maintain "straight and level" flight. However, during flight there is constant rotation about the aircraft's center of gravity. Standard practice is to install the scientific equipment close to the aircraft's estimated center of gravity to minimize the relative rotations with aircraft motion. However, there remain small offsets between the instruments. These distance offsets, the lever arm, are used to define the rigid-body, spatial relationship between the IMU, GPS antenna, and airborne gravimeter within the aircraft body frame. The Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, which is collecting airborne gravity data across the U.S., uses a commercial software package for coupled IMU-GNSS aircraft positioning. This software incorporates a lever arm correction to calculate a precise position for the airborne gravimeter. The positioning software must do a coordinate transformation to relate each epoch of the coupled GNSS-IMU derived position to the position of the gravimeter within the constantly-rotating aircraft. This transformation requires three inputs: accurate IMU-measured aircraft rotations, GNSS positions, and lever arm distances between instruments. Previous studies show that correcting for the lever arm distances improves gravity results, but no sensitivity tests have been done to investigate how error in the lever arm distances affects the final airborne gravity products. This research investigates the effects of lever arm measurement error on airborne gravity data. GRAV-D lever arms are nominally measured to the cm-level using surveying equipment. "Truth" data sets will be created by processing GRAV-D flight lines with both relatively small lever arms and large lever arms. Then negative and positive incremental
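The lever arm correction described above amounts to rotating the body-frame instrument offset into the navigation frame at each epoch. A minimal sketch (frames, values, and a yaw-only attitude are assumed for illustration) also shows why a lever-arm measurement error matters: it maps one-to-one into instrument position error, since rotations preserve length:

```python
import numpy as np

# Minimal sketch of the lever arm correction: the gravimeter position is the
# GNSS antenna position plus the body-frame lever arm rotated into the
# navigation frame by the attitude matrix (yaw-only here, for illustration).
def rot_z(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

r_gnss = np.array([100.0, 50.0, -10.0])   # antenna position (m, nav frame)
lever = np.array([2.0, 0.5, 1.0])         # measured lever arm (m, body frame)
yaw = np.deg2rad(30.0)

r_grav = r_gnss + rot_z(yaw) @ lever

# A 5 cm error in one lever-arm component produces a 5 cm instrument position
# error, independent of the aircraft attitude, because rotation preserves length.
r_grav_bad = r_gnss + rot_z(yaw) @ (lever + np.array([0.05, 0.0, 0.0]))
pos_err = np.linalg.norm(r_grav_bad - r_grav)   # 0.05 m
```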

  17. Multi-isocenter stereotactic radiotherapy: implications for target dose distributions of systematic and random localization errors

    International Nuclear Information System (INIS)

    Ebert, M.A.; Zavgorodni, S.F.; Kendrick, L.A.; Weston, S.; Harper, C.S.

    2001-01-01

    Purpose: This investigation examined the effect of alignment and localization errors on dose distributions in stereotactic radiotherapy (SRT) with arced circular fields. In particular, it was desired to determine the effect of systematic and random localization errors on multi-isocenter treatments. Methods and Materials: A research version of the FastPlan system from Surgical Navigation Technologies was used to generate a series of SRT plans of varying complexity. These plans were used to examine the influence of random setup errors by recalculating dose distributions with successive setup errors convolved into the off-axis ratio data tables used in the dose calculation. The influence of systematic errors was investigated by displacing isocenters from their planned positions. Results: For single-isocenter plans, it is found that the influence of setup error is strongly dependent on the size of the target volume, with minimum doses decreasing most significantly with increasing random and systematic alignment error. For multi-isocenter plans, similar variations in target dose are encountered, with this result benefiting from the conventional method of prescribing to a lower isodose value for multi-isocenter treatments relative to single-isocenter treatments. Conclusions: It is recommended that the systematic errors associated with target localization in SRT be tracked via a thorough quality assurance program, and that random setup errors be minimized by use of a sufficiently robust relocation system. These errors should also be accounted for by incorporating corrections into the treatment planning algorithm or, alternatively, by inclusion of sufficient margins in target definition.
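The effect of random setup error on target dose can be illustrated by convolving an idealized one-dimensional dose profile with a Gaussian, in the spirit of the recalculation described above; the profile, margin, and error magnitude below are invented for illustration:

```python
import numpy as np

# Illustrative sketch: convolving an idealized 1-D dose profile with a
# Gaussian random setup error lowers the minimum dose over the target,
# most strongly near the field edges (hence the dependence on target size).
x = np.linspace(-30.0, 30.0, 601)                 # position (mm), 0.1 mm grid
dose = (np.abs(x) <= 10.0).astype(float)          # flat field over +/-10 mm

sigma = 3.0                                       # 1 SD random setup error (mm)
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()
blurred = np.convolve(dose, kernel, mode="same")  # expected dose with setup error

target = np.abs(x) <= 8.0                         # target inside a 2 mm margin
min_dose_ideal = dose[target].min()               # 1.0 without setup error
min_dose_blurred = blurred[target].min()          # reduced minimum target dose
```

Shrinking the target (or enlarging the margin) raises the blurred minimum back toward 1.0, which mirrors the paper's finding that setup-error sensitivity depends on target volume size.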

  18. MITS instrumentation error analysis report

    International Nuclear Information System (INIS)

    Nelson, D.W.; Hillon, D.D.

    1980-01-01

    The MITS (Machine Interface Test System) installation consists of three types of process monitoring and control instrumentation: flow, pressure, and temperature. An effort has been made to assess the various instruments used and assign a value to the accuracy that can be expected. Efforts were also made to analyze the calibration and installation procedures to be used and to determine how these might affect the system accuracy.

  19. The Accuracy of GBM GRB Localizations

    Science.gov (United States)

    Briggs, Michael Stephen; Connaughton, V.; Meegan, C.; Hurley, K.

    2010-03-01

    We report a study of the accuracy of GBM GRB localizations, analyzing three types of localizations: those produced automatically by the GBM Flight Software on board GBM, those produced automatically with ground software in near real time, and localizations produced with human guidance. The two types of automatic locations are distributed in near real time via GCN Notices; the human-guided locations are distributed on a timescale of many minutes or hours using GCN Circulars. This work uses a Bayesian analysis that models the distribution of the GBM total location error by comparing GBM locations to more accurate locations obtained with other instruments. Reference locations are obtained from Swift, Super-AGILE, the LAT, and the IPN. We model the GBM total location errors as having systematic errors in addition to the statistical errors and use the Bayesian analysis to constrain the systematic errors.
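The error model described above, statistical and systematic errors combined in quadrature with the systematic component constrained from reference locations, can be sketched with a toy one-dimensional maximum-likelihood fit; all numbers are invented, and the real analysis is Bayesian and works with locations on the sphere:

```python
import numpy as np

# Toy 1-D sketch of the error model: total offsets from reference locations
# are drawn with statistical and systematic errors added in quadrature, and
# the systematic component is recovered by maximizing a Gaussian likelihood.
rng = np.random.default_rng(2)
stat = rng.uniform(1.0, 3.0, 200)      # per-burst statistical sigma (deg)
sys_true = 3.5                         # systematic sigma to recover (deg)
offsets = rng.normal(0.0, np.sqrt(stat**2 + sys_true**2))

def loglike(s):
    var = stat**2 + s**2
    return -0.5 * np.sum(offsets**2 / var + np.log(var))

grid = np.linspace(0.1, 10.0, 1000)
sys_hat = grid[np.argmax([loglike(s) for s in grid])]   # close to sys_true
```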

  20. Quantitative analysis of residual protein contamination of podiatry instruments reprocessed through local and central decontamination units.

    Science.gov (United States)

    Smith, Gordon Wg; Goldie, Frank; Long, Steven; Lappin, David F; Ramage, Gordon; Smith, Andrew J

    2011-01-10

    The cleaning stage of the instrument decontamination process has come under increased scrutiny due to the increasing complexity of surgical instruments and the adverse effects of residual protein contamination on surgical instruments. Instruments used in the podiatry field have a complex surface topography and are exposed to a wide range of biological contamination. Currently, podiatry instruments are reprocessed locally within surgeries, while national strategies favour a move toward reprocessing in central facilities. The aim of this study was to determine the efficacy of local and central reprocessing of podiatry instruments by measuring residual protein contamination of instruments reprocessed by both methods. The residual protein of 189 instruments reprocessed centrally and 189 instruments reprocessed locally was determined using a fluorescent assay based on the reaction of proteins with o-phthaldialdehyde/sodium 2-mercaptoethanesulfonate. Residual protein was detected on 72% (n = 136) of instruments reprocessed centrally and 90% (n = 170) of instruments reprocessed locally, with significantly less protein detected on instruments reprocessed centrally. Overall, the results show the superiority of central reprocessing for complex podiatry instruments when protein contamination is considered, though no significant difference was found in residual protein between local decontamination unit and central decontamination unit processes for Blacks files. Further research is needed to undertake qualitative identification of protein contamination to identify any cross-contamination risks, and a standard for acceptable residual protein contamination applicable to different instruments and specialities should be considered as a matter of urgency.

  1. GPS/DR Error Estimation for Autonomous Vehicle Localization.

    Science.gov (United States)

    Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In

    2015-08-21

    Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level.

  2. Quantitative analysis of residual protein contamination of podiatry instruments reprocessed through local and central decontamination units

    Directory of Open Access Journals (Sweden)

    Ramage Gordon

    2011-01-01

    Full Text Available Background: The cleaning stage of the instrument decontamination process has come under increased scrutiny due to the increasing complexity of surgical instruments and the adverse effects of residual protein contamination on surgical instruments. Instruments used in the podiatry field have a complex surface topography and are exposed to a wide range of biological contamination. Currently, podiatry instruments are reprocessed locally within surgeries, while national strategies favour a move toward reprocessing in central facilities. The aim of this study was to determine the efficacy of local and central reprocessing of podiatry instruments by measuring residual protein contamination of instruments reprocessed by both methods. Methods: The residual protein of 189 instruments reprocessed centrally and 189 instruments reprocessed locally was determined using a fluorescent assay based on the reaction of proteins with o-phthaldialdehyde/sodium 2-mercaptoethanesulfonate. Results: Residual protein was detected on 72% (n = 136) of instruments reprocessed centrally and 90% (n = 170) of instruments reprocessed locally, with significantly less protein detected on instruments reprocessed centrally. Conclusions: Overall, the results show the superiority of central reprocessing for complex podiatry instruments when protein contamination is considered, though no significant difference was found in residual protein between local decontamination unit and central decontamination unit processes for Blacks files. Further research is needed to undertake qualitative identification of protein contamination to identify any cross-contamination risks, and a standard for acceptable residual protein contamination applicable to different instruments and specialities should be considered as a matter of urgency.

  3. Multipole error analysis using local 3-bump orbit data in Fermilab Recycler

    International Nuclear Information System (INIS)

    Yang, M.J.; Xiao, M.

    2005-01-01

    The magnetic harmonic errors of the Fermilab Recycler ring were examined using circulating beam data taken with closed local orbit bumps. The data were first parsed into harmonic orbits of first, second, and third order, each of which was analyzed for sources of magnetic errors of the corresponding order. This study was made possible only by the incredible resolution of a new BPM system that was commissioned after June of 2003.

  4. Fourier decomposition of spatial localization errors reveals an idiotropic dominance of an internal model of gravity.

    Science.gov (United States)

    De Sá Teixeira, Nuno Alexandre

    2014-12-01

    Given its conspicuous nature, gravity has been acknowledged by several research lines as a prime factor in structuring the spatial perception of one's environment. One such line of enquiry has focused on errors in spatial localization aimed at the vanishing location of moving objects: it has been systematically reported that humans mislocalize spatial positions forward, in the direction of motion (representational momentum), and downward in the direction of gravity (representational gravity). Moreover, spatial localization errors were found to evolve dynamically with time in a pattern congruent with an anticipated trajectory (representational trajectory). The present study attempts to ascertain the degree to which vestibular information plays a role in these phenomena. Human observers performed a spatial localization task while tilted to varying degrees and referring to the vanishing locations of targets moving along several directions. A Fourier decomposition of the obtained spatial localization errors revealed that although spatial errors were increased "downward" mainly along the body's longitudinal axis (idiotropic dominance), the degree of misalignment between the latter and physical gravity modulated the time course of the localization responses. This pattern is surmised to reflect increased uncertainty about the internal model when faced with conflicting cues regarding the perceived "downward" direction.
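A Fourier decomposition of localization errors over motion directions, of the kind used above, can be sketched as follows: the constant (DC) term captures a uniform shift such as a downward bias, while the first harmonic captures direction-dependent bias. The toy error pattern below is invented for illustration:

```python
import numpy as np

# Sketch of a Fourier decomposition of localization errors measured for
# motion directions around the clock. DC term = uniform shift; first
# harmonic = direction-dependent bias. The error pattern is invented.
theta = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)     # motion directions
err = -1.0 + 0.6 * np.cos(theta) + 0.1 * np.sin(2.0 * theta)  # toy errors (deg)

coeffs = np.fft.rfft(err) / theta.size
dc = coeffs[0].real               # uniform component: -1.0
first = 2.0 * np.abs(coeffs[1])   # first-harmonic amplitude: 0.6
```

Comparing such components across body-tilt conditions is how one can separate an idiotropic (body-axis) bias from one aligned with physical gravity.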

  5. Eliminating the domain error in local explicitly correlated second-order Møller-Plesset perturbation theory.

    Science.gov (United States)

    Werner, Hans-Joachim

    2008-09-14

    A new explicitly correlated local MP2-F12 method is proposed in which the error caused by truncating the virtual orbital space to pair-specific local domains is almost entirely removed. This is achieved by a simple modification of the ansatz for the explicitly correlated wave function, which makes it possible that the explicitly correlated terms correct both for the basis set incompleteness error as well as for the domain error in the LMP2. Benchmark calculations are presented for 21 molecules and 16 chemical reactions. The results demonstrate that the local approximations have hardly any effect on the accuracy of the computed correlation energies and reaction energies, and the LMP2-F12 reaction energies agree within 0.1-0.2 kcal/mol with estimated MP2 basis set limits.

  6. GPS/DR Error Estimation for Autonomous Vehicle Localization

    Directory of Open Access Journals (Sweden)

    Byung-Hyun Lee

    2015-08-01

    Full Text Available Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level.

  7. Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice

    International Nuclear Information System (INIS)

    Kim, Isaac H.

    2011-01-01

    We present a family of non-Calderbank-Shor-Steane quantum error-correcting codes consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from ferromagnetic interactions of an overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits is equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survives at finite temperature; and (iv) behave as classical memory at finite temperature.

  8. Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice

    Science.gov (United States)

    Kim, Isaac H.

    2011-05-01

    We present a family of non-Calderbank-Shor-Steane quantum error-correcting codes consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from the ferromagnetic interaction of an overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits is equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survives at finite temperature; and (iv) behave as classical memory at finite temperature.

  9. Benefits and Impact of Joint Metric of AOA/RSS/TOF on Indoor Localization Error

    Directory of Open Access Journals (Sweden)

    Qing Jiang

    2016-10-01

    Full Text Available The emerging techniques in the Fifth Generation (5G) communication system, like millimeter-Wave (mmWave) and massive Multiple Input Multiple Output (MIMO), make it possible to measure the Angle-Of-Arrival (AOA), Received Signal Strength (RSS) and Time-Of-Flight (TOF) using various types of mobile devices. At the same time, there is always significant interest in high-precision localization techniques based on the joint metric of AOA/RSS/TOF, which make it possible to overcome the drawbacks of single-metric localization. Motivated by this concern, we rely on the Cramer–Rao Lower Bound (CRLB) to analyze the localization errors of RSS/AOA, RSS/TOF, AOA/TOF and the Joint Metric of AOA/RSS/TOF (JMART)-based localization. The error bounds derived in this paper can be selected as benchmarking results to evaluate indoor localization performance. Finally, extensive simulations are conducted to support our claim.
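The CRLB analysis described above can be sketched for the simplest case: range-only (TOF) measurements with independent Gaussian noise. The anchor layout, target position, and noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative geometry: four anchors at the corners of a 10 m square.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([4.0, 3.0])
sigma = 0.1  # assumed range-measurement noise std (metres)

# Fisher information for independent Gaussian range (TOF) measurements:
# each anchor contributes (1/sigma^2) * u u^T, where u is the unit vector
# from the target to the anchor.
fim = np.zeros((2, 2))
for a in anchors:
    u = (a - target) / np.linalg.norm(a - target)
    fim += np.outer(u, u) / sigma**2

# The CRLB bounds the error covariance of any unbiased position estimator;
# the trace of the inverse FIM bounds the mean-squared localization error.
rmse_bound = np.sqrt(np.trace(np.linalg.inv(fim)))
print(f"RMSE lower bound: {rmse_bound:.4f} m")
```

Adding AOA or RSS measurements contributes further gradient-based terms to the same Fisher information matrix, which is how joint-metric bounds such as JMART tighten the single-metric ones.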

  10. Quantifying and handling errors in instrumental measurements using the measurement error theory

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.; Brockhoff, P.B.

    2003-01-01

    . This is a new way of using the measurement error theory. Reliability ratios illustrate that the models for the two fish species are influenced differently by the error. However, the error seems to influence the predictions of the two reference measures in the same way. The effect of using replicated x...... measurements. A new general formula is given for how to correct the least squares regression coefficient when a different number of replicated x-measurements is used for prediction than for calibration. It is shown that the correction should be applied when the number of replicates in prediction is less than...

  11. Indoor footstep localization from structural dynamics instrumentation

    Science.gov (United States)

    Poston, Jeffrey D.; Buehrer, R. Michael; Tarazaga, Pablo A.

    2017-05-01

    Measurements from accelerometers originally deployed to measure a building's structural dynamics can serve a new role: locating individuals moving within a building. Specifically, this paper proposes measurements of footstep-generated vibrations as a novel source of information for localization. The complexity of wave propagation in a building (e.g., dispersion and reflection) limits the utility of existing algorithms designed to locate, for example, the source of sound in a room or radio waves in free space. This paper develops enhancements for arrival time determination and time-difference-of-arrival localization in order to address the complexities posed by wave propagation within a building's structure. Experiments with actual measurements from an instrumented public building demonstrate the potential of locating footsteps to sub-meter accuracy. Furthermore, this paper explains how to forecast performance in other buildings with different sensor configurations. This localization capability holds the potential to assist public safety agencies in building evacuation and incident response, to facilitate occupancy-based optimization of heating or cooling, and to inform facility security.
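The time-difference-of-arrival (TDOA) step can be sketched with a Gauss-Newton solver; the sensor layout, wave speed, and noise-free arrival differences below are illustrative assumptions, not measured values from the instrumented building.

```python
import numpy as np

# Hypothetical sensor layout (accelerometer positions, metres).
sensors = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 6.0], [8.0, 6.0]])
c = 500.0  # assumed effective wave speed in the floor structure, m/s
true_pos = np.array([3.0, 2.0])

# Simulated arrival-time differences relative to sensor 0 (noise-free here).
ranges = np.linalg.norm(sensors - true_pos, axis=1)
tdoa = (ranges[1:] - ranges[0]) / c  # seconds

# Gauss-Newton refinement of the source position from TDOA measurements.
x = np.array([4.0, 3.0])  # initial guess
for _ in range(50):
    d = np.linalg.norm(sensors - x, axis=1)
    resid = (d[1:] - d[0]) - tdoa * c      # range-difference residuals
    units = (x - sensors) / d[:, None]     # gradient of each range w.r.t. x
    jac = units[1:] - units[0]             # Jacobian of the differences
    step, *_ = np.linalg.lstsq(jac, resid, rcond=None)
    x = x - step
    if np.linalg.norm(step) < 1e-9:
        break

print("estimated source:", x)
```

In practice dispersion makes the effective wave speed frequency dependent, which is one reason the paper develops enhanced arrival-time determination rather than applying this textbook solver directly.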

  12. Systematic instrumental errors between oxygen saturation analysers in fetal blood during deep hypoxemia.

    Science.gov (United States)

    Porath, M; Sinha, P; Dudenhausen, J W; Luttkus, A K

    2001-05-01

    During a study of artificially produced deep hypoxemia in fetal cord blood, the systematic errors of three different oxygen saturation analysers were evaluated against a reference CO oximeter. The oxygen tensions (PO2) of 83 pre-heparinized fetal blood samples from umbilical veins were reduced by tonometry to 1.3 kPa (10 mm Hg) and 2.7 kPa (20 mm Hg). The oxygen saturation (SO2) was determined (n=1328) on a reference CO oximeter (ABL625, Radiometer Copenhagen) and on three tested instruments (two CO oximeters: Chiron865, Bayer Diagnostics; ABL700, Radiometer Copenhagen; and a portable blood gas analyser, i-STAT, Abbott). The CO oximeters measure the oxyhemoglobin and reduced hemoglobin fractions by absorption spectrophotometry. The i-STAT system calculates the oxygen saturation from the measured pH, PO2, and PCO2. The measurements were performed in duplicate. Statistical evaluation focused on the differences between duplicate measurements and on systematic instrumental errors in oxygen saturation analysis compared with the reference CO oximeter. After tonometry, the median saturation dropped to 32.9% SO2 at PO2=2.7 kPa (20 mm Hg), defined as saturation range 1, and to 10% SO2 at PO2=1.3 kPa (10 mm Hg), defined as range 2. With decreasing SO2, all devices showed an increased difference between duplicate measurements. ABL625 and ABL700 showed the closest agreement between instruments (0.25% SO2 bias at saturation range 1 and -0.33% SO2 bias at saturation range 2). Chiron865 indicated higher saturation values than ABL625 (3.07% SO2 bias at saturation range 1 and 2.28% SO2 bias at saturation range 2). The saturation values calculated by i-STAT were more than 30% lower than the values measured by ABL625. The disagreement among the CO oximeters was small but increased under deep hypoxemia; the calculated saturation values were unacceptably low.
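The two statistics the study focuses on, within-instrument duplicate spread and systematic bias against the reference CO oximeter, can be sketched as follows; the SO2 readings are made-up illustrative numbers, not data from the study.

```python
import numpy as np

# Hypothetical duplicate SO2 readings (%) from a test instrument and a
# reference CO oximeter -- illustrative values only.
test = np.array([[33.1, 33.4], [10.2, 9.6], [32.5, 32.9], [9.8, 10.5]])
ref  = np.array([[32.9, 33.0], [10.0, 9.9], [32.2, 32.6], [10.1, 10.3]])

# Within-instrument repeatability: spread between duplicate measurements.
dup_spread = np.abs(test[:, 0] - test[:, 1])

# Systematic instrumental error: mean offset of the test instrument's
# duplicate means from the reference instrument's duplicate means.
bias = (test.mean(axis=1) - ref.mean(axis=1)).mean()

print(f"mean duplicate spread {dup_spread.mean():.2f} %SO2, bias {bias:+.2f} %SO2")
```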

  13. Error Budget for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    Science.gov (United States)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-01-01

    A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach the on-orbit absolute accuracy that allows climate change observations to survive data gaps exist at NIST in the laboratory, but it still needs to be demonstrated that these advances can be transferred successfully to NASA and/or instrument-vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, nonlinearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference, together with methods for laboratory-based absolute calibration suitable for climate-quality data collections, is given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise. Methods for demonstrating this error budget are also presented.
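The way the listed components combine into an overall budget can be sketched with the usual root-sum-square rule for independent error sources; the term labels and magnitudes below are illustrative stand-ins, not the published SOLARIS budget.

```python
import numpy as np

# Hypothetical uncertainty terms (%, k=1) for a reflectance calibration --
# the labels and magnitudes are illustrative only.
terms = {
    "solar/earth view geometry": 0.10,
    "attenuator knowledge":      0.08,
    "detector nonlinearity":     0.05,
    "detector noise":            0.03,
}

# Independent error sources combine in quadrature (root-sum-square),
# so the total is dominated by the largest terms.
total = np.sqrt(sum(v**2 for v in terms.values()))
print(f"combined standard uncertainty: {total:.3f} %")
```

The quadrature sum is always below the arithmetic sum but above the largest single term, which is why reducing the dominant component (here, view geometry) pays off first.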

  14. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    Science.gov (United States)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion.
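The projection-based idea, building a local model-error basis from the K-nearest dictionary entries and stripping that component from the residual, can be sketched with a synthetic low-rank error structure; all sizes and values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dictionary of (parameter vector, model-error realization)
# pairs, i.e. detailed-model minus approximate-model outputs; the sizes and
# hidden low-rank structure are illustrative.
n_dict, n_param, n_data = 200, 5, 40
params = rng.normal(size=(n_dict, n_param))
latent = rng.normal(size=(2, n_data))            # hidden error structure
errors = params[:, :2] @ latent + 0.01 * rng.normal(size=(n_dict, n_data))

def model_error_projector(theta, k=8):
    """Build a local model-error basis from the K nearest dictionary entries
    and return the projector onto its orthogonal complement."""
    order = np.argsort(np.linalg.norm(params - theta, axis=1))[:k]
    _, _, vt = np.linalg.svd(errors[order], full_matrices=False)
    basis = vt.T                                  # orthonormal columns
    return np.eye(n_data) - basis @ basis.T

# Stripping the model-error component from a residual before the likelihood
# is evaluated: the projected model error should be near zero.
theta = rng.normal(size=n_param)
model_err = theta[:2] @ latent
P = model_error_projector(theta)
removed = P @ model_err
print(np.linalg.norm(removed), "vs", np.linalg.norm(model_err))
```

Because the nearby dictionary errors span (approximately) the same low-dimensional subspace as the test point's model error, projecting onto the complement leaves mostly measurement noise behind.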

  15. Technical Note: Error metrics for estimating the accuracy of needle/instrument placement during transperineal magnetic resonance/ultrasound-guided prostate interventions.

    Science.gov (United States)

    Bonmati, Ester; Hu, Yipeng; Villarini, Barbara; Rodell, Rachael; Martin, Paul; Han, Lianghao; Donaldson, Ian; Ahmed, Hashim U; Moore, Caroline M; Emberton, Mark; Barratt, Dean C

    2018-04-01

    Image-guided systems that fuse magnetic resonance imaging (MRI) with three-dimensional (3D) ultrasound (US) images for performing targeted prostate needle biopsy and minimally invasive treatments for prostate cancer are of increasing clinical interest. To date, a wide range of different accuracy estimation procedures and error metrics have been reported, which makes comparing the performance of different systems difficult. A set of nine measures is presented to assess the accuracy of MRI-US image registration, needle positioning, needle guidance, and overall system error, with the aim of providing a methodology for estimating the accuracy of instrument placement using an MR/US-guided transperineal approach. Using the SmartTarget fusion system, the MRI-US image alignment error was determined to be 2.0 ± 1.0 mm (mean ± SD), and the overall system instrument targeting error 3.0 ± 1.2 mm. Three needle deployments for each target phantom lesion were found to result in a 100% lesion hit rate and a median predicted cancer core length of 5.2 mm. The application of a comprehensive, unbiased validation assessment for MR/US-guided systems can provide useful information on system performance for quality assurance and system comparison. Furthermore, such an analysis can be helpful in identifying relationships between these errors, providing insight into the technical behavior of these systems. © 2018 American Association of Physicists in Medicine.
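A minimal sketch of one such metric, per-target instrument targeting error summarized as mean ± SD, using made-up planned and achieved needle-tip coordinates rather than the SmartTarget data:

```python
import numpy as np

# Hypothetical planned targets vs. achieved needle-tip positions (mm) --
# illustrative numbers, not the study's measurements.
planned  = np.array([[10.0, 20.0, 30.0], [12.0, 18.0, 33.0], [15.0, 22.0, 28.0]])
achieved = np.array([[11.5, 20.5, 31.0], [13.0, 19.5, 32.0], [14.0, 23.5, 29.5]])

# Per-target Euclidean targeting error, then the mean +/- SD summary in
# which such system errors are usually reported.
err = np.linalg.norm(achieved - planned, axis=1)
print(f"targeting error: {err.mean():.1f} +/- {err.std(ddof=1):.1f} mm")
```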

  16. Fit between Conservation Instruments and Local Social Systems: Cases of Co-management and Payments for Ecosystem Services

    Directory of Open Access Journals (Sweden)

    Sarkki Simo

    2015-01-01

    Full Text Available We draw on the concept of ‘fit’ to understand how co-management and Payments for Ecosystem Services (PES) as governance instruments could better acknowledge local social complexities. Achieving ‘participatory fit’ requires well-designed and fair processes, which enhance local acceptance of the implemented rules. Thus, such fit can contribute to establishing new institutions in conservation governance. However, the previous literature on participation has focused strongly on the properties of decision-making processes, often neglecting the question of how local realities affect local people’s ability and willingness to participate in the work of governance instruments. We approach ‘participatory fit’ by identifying six properties of heterogeneous local social systems that governance instruments need to acknowledge in order to nurture balanced bottom-up participation: (1) economic resources and structures, (2) relationships to land, (3) level of education, (4) relationships between diverse actors, (5) divergent problem definitions, and (6) local identities. We discuss related sources of misfits and develop proposals on how conservation instruments could function as bridging organizations facilitating polycentric institutional structures that fit better with the social systems they are intended to govern. Such hybridization of governance could avoid the pitfalls of considering one particular instrument (e.g., co-management or PES) as a panacea able to create win-win solutions.

  17. The KNK II instrumentation for global and local supervision of the reactor core

    International Nuclear Information System (INIS)

    Steiger, W.O.

    1991-01-01

    After an introduction to the KNK plant itself, its historical development and its present situation, the instrumentation for the global and local supervision of the KNK II core as well as the main safety-related instrumentation and control systems are described. Special emphasis is laid on the instrumentation of the reactor protection systems and the shutdown systems. Some experience is then reported on instrumentation behavior and lessons learned from the operation and maintenance of the above-mentioned systems. Finally, a short description is given of the special instrumentation for the detection of failed fuel subassemblies and of the plant data processing system. (author). 4 refs, 18 tabs

  18. Density dependence and climate effects in Rocky Mountain elk: an application of regression with instrumental variables for population time series with sampling error.

    Science.gov (United States)

    Creel, Scott; Creel, Michael

    2009-11-01

    1. Sampling error in annual estimates of population size creates two widely recognized problems for the analysis of population growth. First, if sampling error is mistakenly treated as process error, one obtains inflated estimates of the variation in true population trajectories (Staples, Taper & Dennis 2004). Second, treating sampling error as process error is thought to overestimate the importance of density dependence in population growth (Viljugrein et al. 2005; Dennis et al. 2006). 2. In ecology, state-space models are used to account for sampling error when estimating the effects of density and other variables on population growth (Staples et al. 2004; Dennis et al. 2006). In econometrics, regression with instrumental variables is a well-established method that addresses the problem of correlation between regressors and the error term, but requires fewer assumptions than state-space models (Davidson & MacKinnon 1993; Cameron & Trivedi 2005). 3. We used instrumental variables to account for sampling error and fit a generalized linear model to 472 annual observations of population size for 35 Elk Management Units in Montana, from 1928 to 2004. We compared this model with state-space models fit with the likelihood function of Dennis et al. (2006). We discuss the general advantages and disadvantages of each method. Briefly, regression with instrumental variables is valid with fewer distributional assumptions, but state-space models are more efficient when their distributional assumptions are met. 4. Both methods found that population growth was negatively related to population density and winter snow accumulation. Summer rainfall and wolf (Canis lupus) presence had much weaker effects on elk (Cervus elaphus) dynamics [though limitation by wolves is strong in some elk populations with well-established wolf populations (Creel et al. 2007; Creel & Christianson 2008)]. 5. Coupled with predictions for Montana from global and regional climate models, our results
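Point 3, regression with an instrumental variable to undo the bias from sampling error, can be sketched with a simulated example; a second, independently errored count plays the role of the instrument, and all numbers are illustrative rather than taken from the elk data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated example: true log-density x drives growth y, but x is only
# observed with sampling error. A second, independently errored count z
# (e.g. a replicate survey) serves as the instrument. Values illustrative.
n = 20000
beta = -0.3                                  # true density-dependence effect
x = rng.normal(size=n)
y = beta * x + 0.05 * rng.normal(size=n)
x_obs = x + 0.5 * rng.normal(size=n)         # regressor with sampling error
z = x + 0.5 * rng.normal(size=n)             # instrument: correlated with x,
                                             # independent of x_obs's error

# OLS on the noisy regressor is attenuated towards zero; the IV (2SLS)
# estimator cov(z, y) / cov(z, x_obs) is consistent for beta.
ols = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
iv = np.cov(z, y)[0, 1] / np.cov(z, x_obs)[0, 1]
print(f"OLS {ols:.3f}  IV {iv:.3f}  true {beta}")
```

With these settings the OLS slope is attenuated toward roughly -0.24 while the IV estimate recovers the true -0.3, mirroring why treating sampling error as process error distorts density-dependence estimates.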

  19. Robust Estimator for Non-Line-of-Sight Error Mitigation in Indoor Localization

    Directory of Open Access Journals (Sweden)

    Marco A

    2006-01-01

    Full Text Available Indoor localization systems are undoubtedly of interest in many application fields. Like outdoor systems, they suffer from non-line-of-sight (NLOS) errors which hinder their robustness and accuracy. Though many ad hoc techniques have been developed to deal with this problem, unfortunately most of them are not applicable indoors due to the high variability of the environment (movement of furniture and of people, etc.). In this paper, we describe the use of robust regression techniques to detect and reject NLOS measures in a location estimation using multilateration. We show how the least-median-of-squares technique can be used to overcome the effects of NLOS errors, even in environments with little infrastructure, and validate its suitability by comparing it to other methods described in the bibliography. We obtained remarkable results when using it in a real indoor positioning system that works with Bluetooth and ultrasound (BLUPS), even when nearly half the measures suffered from NLOS or other coarse errors.

  20. Robust Estimator for Non-Line-of-Sight Error Mitigation in Indoor Localization

    Science.gov (United States)

    Casas, R.; Marco, A.; Guerrero, J. J.; Falcó, J.

    2006-12-01

    Indoor localization systems are undoubtedly of interest in many application fields. Like outdoor systems, they suffer from non-line-of-sight (NLOS) errors which hinder their robustness and accuracy. Though many ad hoc techniques have been developed to deal with this problem, unfortunately most of them are not applicable indoors due to the high variability of the environment (movement of furniture and of people, etc.). In this paper, we describe the use of robust regression techniques to detect and reject NLOS measures in a location estimation using multilateration. We show how the least-median-of-squares technique can be used to overcome the effects of NLOS errors, even in environments with little infrastructure, and validate its suitability by comparing it to other methods described in the bibliography. We obtained remarkable results when using it in a real indoor positioning system that works with Bluetooth and ultrasound (BLUPS), even when nearly half the measures suffered from NLOS or other coarse errors.
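A minimal sketch of least-median-of-squares multilateration under NLOS contamination: solve every minimal anchor subset, then keep the candidate whose median squared range residual over all anchors is smallest. Anchor positions, noise level, and NLOS biases below are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Illustrative setup: 8 anchors, a true tag position, small range noise,
# and three NLOS ranges inflated by a large positive bias.
anchors = rng.uniform(0, 10, size=(8, 2))
true_pos = np.array([4.0, 6.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + 0.02 * rng.normal(size=8)
ranges[[1, 4, 6]] += 3.0   # NLOS outliers

def trilaterate(idx):
    """Linearized position solve from a minimal subset of three anchors
    (difference the squared-range equations against the first anchor)."""
    a, r = anchors[idx], ranges[idx]
    A = 2.0 * (a[1:] - a[0])
    b = (np.sum(a[1:]**2, axis=1) - np.sum(a[0]**2)) - (r[1:]**2 - r[0]**2)
    return np.linalg.solve(A, b)

# Least-median-of-squares: the median residual ignores the outliers as long
# as a majority of the ranges are line-of-sight.
best, best_med = None, np.inf
for idx in combinations(range(8), 3):
    cand = trilaterate(list(idx))
    res2 = (np.linalg.norm(anchors - cand, axis=1) - ranges) ** 2
    med = np.median(res2)
    if med < best_med:
        best, best_med = cand, med

print("LMedS estimate:", best, " true:", true_pos)
```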

  1. The KNK II instrumentation for global and local supervision of the reactor core

    International Nuclear Information System (INIS)

    Steiger, W.O.

    1990-01-01

    After an introduction to the KNK plant itself, its historical development and its present situation, the instrumentation for the global and local supervision of the KNK II core as well as the main safety-related instrumentation and control systems are described. Special emphasis is laid on the instrumentation of the reactor protection systems and the shutdown systems. Some experience is then reported on instrumentation behavior and lessons learned from the operation and maintenance of the above-mentioned systems. Finally, a short description is given of the special instrumentation for the detection of failed fuel subassemblies and of the plant data processing system. (orig.)

  2. Extension to HiRLoc Algorithm for Localization Error Computation in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Swati Saxena

    2013-09-01

    Full Text Available Wireless sensor networks (WSNs) have gained importance in recent years as they support a large spectrum of applications such as automotive, health, military, environmental, home and office. Various algorithms have been proposed for making this technology more adaptive; the existing algorithms address issues such as safety, security, power consumption, lifetime and localization. This paper presents an extension to the HiRLoc algorithm and highlights its benefits. Extended HiRLoc significantly reduces the average localization error by introducing a new directional-antenna-based scheme.

  3. Local and accumulated truncation errors in a class of perturbative numerical methods

    International Nuclear Information System (INIS)

    Adam, G.; Adam, S.; Corciovei, A.

    1980-01-01

    The approach to the solution of the radial Schroedinger equation using piecewise perturbation theory with a step-function reference potential leads to a class of powerful numerical methods, conveniently abridged as SF-PNM(K), where K denotes the order at which the perturbation series was truncated. In the present paper rigorous results are given for the local truncation errors, and bounds are derived for the accumulated truncation errors associated with SF-PNM(K), K = 0, 1, 2. They allow us to establish the smoothness conditions which have to be fulfilled by the potential in order to ensure safe use of SF-PNM(K), and to understand the experimentally observed behaviour of the numerical results with the step size h. (author)

  4. Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata

    Science.gov (United States)

    Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.

    2012-01-01

    Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
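Evaluating a Laplace-domain pole-zero model at real frequencies, the forward computation underlying such a fit, can be sketched as follows; the poles, zeros, and gain are illustrative values loosely shaped like a broadband velocity response, not a real published instrument response.

```python
import numpy as np

# Illustrative Laplace-domain pole-zero model (angular frequency, rad/s):
# two zeros at the origin plus a low-frequency conjugate pole pair give a
# velocity-flat passband; the real pole sets the high-frequency corner.
zeros = np.array([0.0, 0.0])
poles = np.array([-0.037 + 0.037j, -0.037 - 0.037j, -251.3])
gain = 2.0e3

def response(freq_hz):
    """Evaluate the pole-zero-gain model at frequencies in Hz (s = j*2*pi*f)."""
    s = 2j * np.pi * np.asarray(freq_hz, dtype=float)
    num = np.prod([s - z for z in zeros], axis=0)
    den = np.prod([s - p for p in poles], axis=0)
    return gain * num / den

f = np.logspace(-3, 1, 200)
amp = np.abs(response(f))
# Between the low-frequency corner (~0.008 Hz) and the high-frequency pole
# (~40 Hz) the amplitude response is flat.
```

A calibration fit inverts this forward evaluation: the poles and zeros are perturbed to minimize the misfit between the modeled and recorded calibration responses, which is the nonlinear least-squares step the paper wraps in a grid search and per-frequency error estimation.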

  5. Automated systems help prevent operator error during [reactor] I and C [instrumentation and control] testing

    International Nuclear Information System (INIS)

    Courcoux, R.

    1989-01-01

    On a nuclear steam supply system, even a minor failure can involve actuation of the whole reactor protection system (RPS). To reduce the likelihood of human error leading to unwanted trips during the maintenance of instrumentation and control systems, Framatome has been developing and installing various automated testing systems. Such automated systems are particularly helpful when periodic tests with a potential for RPS actuation have to be carried out, or when the test is on the critical path for the refuelling outage. The Sensitive Channel Programme described is an example of the sort of work that has been done. (author)

  6. A Comprehensive Radial Velocity Error Budget for Next Generation Doppler Spectrometers

    Science.gov (United States)

    Halverson, Samuel; Ryan, Terrien; Mahadevan, Suvrath; Roy, Arpita; Bender, Chad; Stefansson, Guomundur Kari; Monson, Andrew; Levi, Eric; Hearty, Fred; Blake, Cullen

    2016-01-01

    We describe a detailed radial velocity error budget for the NASA-NSF Extreme Precision Doppler Spectrometer instrument concept NEID (NN-explore Exoplanet Investigations with Doppler spectroscopy). Such an instrument performance budget is a necessity for both identifying the variety of noise sources currently limiting Doppler measurements, and estimating the achievable performance of next generation exoplanet hunting Doppler spectrometers. For these instruments, no single source of instrumental error is expected to set the overall measurement floor. Rather, the overall instrumental measurement precision is set by the contribution of many individual error sources. We use a combination of numerical simulations, educated estimates based on published materials, extrapolations of physical models, results from laboratory measurements of spectroscopic subsystems, and informed upper limits for a variety of error sources to identify likely sources of systematic error and construct our global instrument performance error budget. While natively focused on the performance of the NEID instrument, this modular performance budget is immediately adaptable to a number of current and future instruments. Such an approach is an important step in charting a path towards improving Doppler measurement precisions to the levels necessary for discovering Earth-like planets.

  7. Runtime Detection of C-Style Errors in UPC Code

    Energy Technology Data Exchange (ETDEWEB)

    Pirkelbauer, P; Liao, C; Panas, T; Quinlan, D

    2011-09-29

    Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions for each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.

  8. Local setup errors in image-guided radiotherapy for head and neck cancer patients immobilized with a custom-made device.

    Science.gov (United States)

    Giske, Kristina; Stoiber, Eva M; Schwarz, Michael; Stoll, Armin; Muenter, Marc W; Timke, Carmen; Roeder, Falk; Debus, Juergen; Huber, Peter E; Thieke, Christian; Bendl, Rolf

    2011-06-01

    To evaluate the local positioning uncertainties during fractionated radiotherapy of head-and-neck cancer patients immobilized using a custom-made fixation device, and to discuss the effect of possible patient correction strategies for these uncertainties. A total of 45 head-and-neck patients underwent regular control computed tomography scanning using an in-room computed tomography scanner. The local and global positioning variations of all patients were evaluated by applying a rigid registration algorithm. One bounding box around the complete target volume and nine local registration boxes containing relevant anatomic structures were introduced. The resulting uncertainties for a stereotactic setup and the deformations referenced to one anatomic local registration box were determined. Local deformations of the patients immobilized using our custom-made device were compared with previously published results. Several patient positioning correction strategies were simulated, and the residual local uncertainties were calculated. The patient anatomy in the stereotactic setup showed local systematic positioning deviations of 1-4 mm. The deformations referenced to a particular anatomic local registration box were similar to the reported deformations assessed from patients immobilized with commercially available Aquaplast masks. A global correction, including rotational error compensation, decreased the remaining local translational errors. Depending on the chosen patient positioning strategy, the remaining local uncertainties varied considerably. Local deformations in head-and-neck patients occur even if an elaborate, custom-made patient fixation method is used. A rotational error correction decreased the required margins considerably. None of the considered correction strategies achieved perfect alignment. Therefore, weighting of anatomic subregions to obtain the optimal correction vector should be investigated in the future. Copyright © 2011 Elsevier Inc. All rights reserved.

  9. Neutron-multiplication measurement instrument

    Energy Technology Data Exchange (ETDEWEB)

    Nixon, K.V.; Dowdy, E.J.; France, S.W.; Millegan, D.R.; Robba, A.A.

    1982-01-01

    The Advanced Nuclear Technology Group of the Los Alamos National Laboratory is now using intelligent data-acquisition and analysis instrumentation for determining the multiplication of nuclear material. Earlier instrumentation, such as the large NIM-crate systems, depended on house power and required additional computation to determine multiplication or to estimate error. The portable, battery-powered multiplication measurement unit, with advanced computational power, acquires data, calculates multiplication, and completes error analysis automatically. Thus, the multiplication is determined easily and an available error estimate enables the user to judge the significance of results.

  10. Neutron multiplication measurement instrument

    International Nuclear Information System (INIS)

    Nixon, K.V.; Dowdy, E.J.; France, S.W.; Millegan, D.R.; Robba, A.A.

    1983-01-01

    The Advanced Nuclear Technology Group of the Los Alamos National Laboratory is now using intelligent data-acquisition and analysis instrumentation for determining the multiplication of nuclear material. Earlier instrumentation, such as the large NIM-crate systems, depended on house power and required additional computation to determine multiplication or to estimate error. The portable, battery-powered multiplication measurement unit, with advanced computational power, acquires data, calculates multiplication, and completes error analysis automatically. Thus, the multiplication is determined easily and an available error estimate enables the user to judge the significance of results

  11. Neutron-multiplication measurement instrument

    International Nuclear Information System (INIS)

    Nixon, K.V.; Dowdy, E.J.; France, S.W.; Millegan, D.R.; Robba, A.A.

    1982-01-01

    The Advanced Nuclear Technology Group of the Los Alamos National Laboratory is now using intelligent data-acquisition and analysis instrumentation for determining the multiplication of nuclear material. Earlier instrumentation, such as the large NIM-crate systems, depended on house power and required additional computation to determine multiplication or to estimate error. The portable, battery-powered multiplication measurement unit, with advanced computational power, acquires data, calculates multiplication, and completes error analysis automatically. Thus, the multiplication is determined easily and an available error estimate enables the user to judge the significance of results

  12. The Neuroelectromagnetic Inverse Problem and the Zero Dipole Localization Error

    Directory of Open Access Journals (Sweden)

    Rolando Grave de Peralta

    2009-01-01

    A tomography of neural sources could be constructed from EEG/MEG recordings once the neuroelectromagnetic inverse problem (NIP) is solved. Unfortunately, the NIP lacks a unique solution, and therefore additional constraints are needed to achieve uniqueness. Researchers are then confronted with the dilemma of choosing one solution on the basis of the advantages publicized by their authors. This study aims to help researchers better guide their choices by clarifying what is hidden behind inverse solutions oversold on their apparently optimal properties for localizing single sources. Here, we introduce an inverse solution (ANA) attaining perfect localization of single sources to illustrate how spurious sources emerge and destroy the reconstruction of simultaneously active sources. Although ANA is probably the simplest and most robust alternative for data generated by a single dominant source plus noise, the main contribution of this manuscript is to show that zero localization error for single sources is a trivial and largely uninformative property, unable to predict the performance of an inverse solution in the presence of simultaneously active sources. We recommend, as the most logical strategy for solving the NIP, the incorporation of sound additional a priori information about neural generators that supplements the information contained in the data.

  13. POTENTIAL DEFICIENCIES IN EDUCATION, INSTRUMENTATION, AND WARNINGS FOR LOCALLY GENERATED TSUNAMIS

    Directory of Open Access Journals (Sweden)

    Daniel A. Walker

    2010-01-01

    A review of historical data for Hawaii reveals that significant tsunamis have been reported for only four of twenty-six potentially tsunamigenic earthquakes from 1868 through 2009 with magnitudes of 6.0 or greater. During the same time period, three significant tsunamis have been reported for substantially smaller earthquakes. This historical perspective, the fact that the last significant local tsunami occurred in 1975, and an understandable preoccupation with tsunamis generated around the margins of the Pacific, all combine to suggest apparent deficiencies in: (1) personal awareness of what to do in the event of a possible local tsunami; (2) the distribution of instrumentation capable of providing rapid confirmation that a local tsunami has been generated; and (3) the subsequent issuance of timely warnings for local tsunamis. With these deficiencies, far more lives may be lost in Hawaii due to local tsunamis than will result from tsunamis that have originated along the margins of the Pacific. Similar deficiencies may exist in other areas of the world threatened by local tsunamis.

  14. Film techniques in radiotherapy for treatment verification, determination of patient exit dose, and detection of localization error

    International Nuclear Information System (INIS)

    Haus, A.G.; Marks, J.E.

    1974-01-01

    In patient radiation therapy, it is important to know that the diseased area is included in the treatment field and that normal anatomy is properly shielded or excluded. Since 1969, a film technique developed for imaging of the complete patient radiation exposure has been applied for treatment verification and for the detection and evaluation of localization errors that may occur during treatment. The technique basically consists of placing a film under the patient during the entire radiation exposure. This film should have proper sensitivity and contrast in the exit dose exposure range encountered in radiotherapy. In this communication, we describe how various exit doses fit the characteristic curve of the film; examples of films exposed to various exit doses; the technique for using the film to determine the spatial distribution of the absorbed exit dose; and types of errors commonly detected. Results are presented illustrating that, as the frequency of use of this film technique is increased, localization error is reduced significantly

  15. Improvement of least-squares collocation error estimates using local GOCE Tzz signal standard deviations

    DEFF Research Database (Denmark)

    Tscherning, Carl Christian

    2015-01-01

    outside the data area. On the other hand, a comparison of predicted quantities with observed values show that the error also varies depending on the local data standard deviation. This quantity may be (and has been) estimated using the GOCE second order vertical derivative, Tzz, in the area covered...... by the satellite. The ratio between the nearly constant standard deviations of a predicted quantity (e.g. in a 25° × 25° area) and the standard deviations of Tzz in smaller cells (e.g., 1° × 1°) have been used as a scale factor in order to obtain more realistic error estimates. This procedure has been applied...

  16. Local hybrid functionals with orbital-free mixing functions and balanced elimination of self-interaction error

    International Nuclear Information System (INIS)

    Silva, Piotr de; Corminboeuf, Clémence

    2015-01-01

    The recently introduced density overlap regions indicator (DORI) [P. de Silva and C. Corminboeuf, J. Chem. Theory Comput. 10(9), 3745–3756 (2014)] is a density-dependent scalar field revealing regions of high density overlap between shells, atoms, and molecules. In this work, we exploit its properties to construct local hybrid exchange-correlation functionals aiming at balanced reduction of the self-interaction error. We show that DORI can successfully replace the ratio of the von Weizsäcker and exact positive-definite kinetic energy densities, which is commonly used in mixing functions of local hybrids. Additionally, we introduce several semi-empirical parameters to control the local and global admixture of exact exchange. The most promising of our local hybrids clearly outperforms the underlying semi-local functionals as well as their global hybrids

  17. Errors in Sounding of the Atmosphere Using Broadband Emission Radiometry (SABER) Kinetic Temperature Caused by Non-Local Thermodynamic Equilibrium Model Parameters

    Science.gov (United States)

    Garcia-Comas, Maya; Lopez-Puertas, M.; Funke, B.; Bermejo-Pantaleon, D.; Marshall, Benjamin T.; Mertens, Christopher J.; Remsberg, Ellis E.; Mlynczak, Martin G.; Gordley, L. L.; Russell, James M.

    2008-01-01

    The vast set of near-global and continuous atmospheric measurements made by the SABER instrument since 2002, including daytime and nighttime kinetic temperature (T(sub k)) from 20 to 105 km, is available to the scientific community. The temperature is retrieved from SABER measurements of the atmospheric 15 micron CO2 limb emission. This emission departs from local thermodynamic equilibrium (LTE) conditions in the rarefied mesosphere and thermosphere, making it necessary to consider the CO2 vibrational state non-LTE populations in the retrieval algorithm above 70 km. Those populations depend on kinetic parameters describing the rate at which energy exchange between atmospheric molecules takes place, but some of these collisional rates are not well known. We consider current uncertainties in the rates of quenching of CO2(v2) by N2, O2 and O, and the CO2(v2) vibrational-vibrational exchange to estimate their impact on SABER T(sub k) for different atmospheric conditions. The T(sub k) is more sensitive to the uncertainty in the latter two, and their effects depend on altitude. The T(sub k) combined systematic error due to non-LTE kinetic parameters does not exceed +/- 1.5 K below 95 km and +/- 4-5 K at 100 km for most latitudes and seasons (except for polar summer) if the T(sub k) profile does not have pronounced vertical structure. The error is +/- 3 K at 80 km, +/- 6 K at 84 km and +/- 18 K at 100 km under the less favourable polar summer conditions. For strong temperature inversion layers, the errors reach +/- 3 K at 82 km and +/- 8 K at 90 km. This particularly affects tide amplitude estimates, with errors of up to +/- 3 K.

  18. A method for local transport analysis in tokamaks with error calculation

    International Nuclear Information System (INIS)

    Hogeweij, G.M.D.; Hordosy, G.; Lopes Cardozo, N.J.

    1989-01-01

    Global transport studies have revealed that heat transport in a tokamak is anomalous, but cannot provide information about the nature of the anomaly. Therefore, local transport analysis is essential for the study of anomalous transport. However, the determination of local transport coefficients is not a trivial affair. Generally speaking one can either directly measure the heat diffusivity, χ, by means of heat pulse propagation analysis, or deduce the profile of χ from measurements of the profiles of the temperature, T, and the power deposition. Here we are concerned only with the latter method, the local power balance analysis. For the sake of clarity heat diffusion only is considered: ρ = -grad T/q (1), where ρ = κ⁻¹ = (nχ)⁻¹ is the heat resistivity and q is the heat flux per unit area. It is assumed that the profiles T(r) and q(r) are given with some experimental error. In practice T(r) is measured directly, e.g. from ECE spectroscopy, while q(r) is deduced from the power deposition and loss profiles. The latter cannot be measured directly and is partly determined on the basis of models. This complication will not be considered here. Since the gradient of T appears in eq. (1), noise on T can severely affect the solution ρ. This means that in general some form of smoothing must be applied. A criterion is needed to select the optimal smoothing. Too much smoothing will wipe out the details, whereas with too little smoothing the noise will distort the reconstructed profile of ρ. Here a new method to solve eq. (1) is presented which expresses ρ(r) as a cosine series. The coefficients of this series are given as linear combinations of the Fourier coefficients of the measured T- and q-profiles. This formulation allows 1) the stable and accurate calculation of the ρ-profile, and 2) the analytical calculation of the error in this profile. (author) 5 refs., 3 figs
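The power-balance scheme in eq. (1) can be sketched numerically. The following is a minimal, hypothetical illustration, not the authors' exact formulation: the raw resistivity ρ(r) = -dT/dr / q(r) is computed from the (noisy) profiles and then smoothed by projecting onto a truncated cosine series, with the number of retained terms playing the role of the smoothing criterion discussed above.

```python
import numpy as np

# Hypothetical sketch of local power-balance analysis: recover the heat
# resistivity rho(r) = -dT/dr / q(r) from profile data, smoothing by
# projecting onto a truncated cosine series (n_terms acts as the
# smoothing criterion discussed in the abstract).
def resistivity_profile(r, T, q, n_terms=4):
    dT_dr = np.gradient(T, r)        # numerical derivative of T(r)
    rho_raw = -dT_dr / q             # pointwise, noise-sensitive estimate
    x = (r - r[0]) / (r[-1] - r[0])  # map r onto [0, 1]
    basis = np.array([np.cos(k * np.pi * x) for k in range(n_terms)]).T
    coeffs, *_ = np.linalg.lstsq(basis, rho_raw, rcond=None)
    return basis @ coeffs            # smoothed rho(r)

# Synthetic check: T is chosen so that the true resistivity is constant (2.0).
r = np.linspace(0.05, 1.0, 200)
q = r                                # assumed heat-flux profile
T = 10.0 - 2.0 * r**2 / 2.0          # -dT/dr = 2.0 * r, hence rho = 2.0
rho_est = resistivity_profile(r, T, q)
```

In the paper the cosine coefficients are obtained analytically from the Fourier coefficients of T and q, which also yields analytical error bars; the least-squares projection above is only a stand-in for that step.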

  19. Aeroacoustics of Musical Instruments

    NARCIS (Netherlands)

    Fabre, B.; Gilbert, J.; Hirschberg, Abraham; Pelorson, X.

    2012-01-01

    We are interested in the quality of sound produced by musical instruments and their playability. In wind instruments, a hydrodynamic source of sound is coupled to an acoustic resonator. Linear acoustics can predict the pitch of an instrument. This can significantly reduce the trial-and-error process

  20. Detection and Localization of Tooth Breakage Fault on Wind Turbine Planetary Gear System considering Gear Manufacturing Errors

    Directory of Open Access Journals (Sweden)

    Y. Gui

    2014-01-01

    Sidebands of the vibration spectrum are sensitive to the fault degree and have been proved useful for tooth-fault detection and localization. However, the amplitude and frequency modulation due to manufacturing errors (which are inevitable in an actual planetary gear system) lead to much more complex sidebands. Thus, in this paper, a lumped-parameter model for a typical planetary gear system with various types of errors is established. In the model, the influences of tooth faults on time-varying mesh stiffness and tooth impact force are derived analytically. Numerical methods are then utilized to obtain the response spectra of the system with tooth faults, with and without errors. Three system components with tooth faults (sun, planet, and ring gears) are considered in the discussion, respectively. Through detailed comparisons of spectral sidebands, fault characteristic frequencies of the system are acquired. Dynamic experiments on a planetary gearbox test rig are carried out to verify the simulation results; these results are of great significance for the detection and localization of tooth faults in wind turbines.

  1. On Calibrating the Sensor Errors of a PDR-Based Indoor Localization System

    Directory of Open Access Journals (Sweden)

    Wen-Yuah Shih

    2013-04-01

    Many studies utilize the signal strength of short-range radio systems (such as WiFi, ultrasound, and infrared) to build a radio map for indoor localization, by deploying a large number of beacon nodes within a building. The drawback of such an infrastructure-based approach is that the deployment and calibration of the system are costly and labor-intensive. Some prior studies proposed the use of Pedestrian Dead Reckoning (PDR) for indoor localization, which does not require the deployment of beacon nodes. In a PDR system, a small number of sensors are put on the pedestrian. These sensors (such as a G-sensor and gyroscope) are used to estimate the distance and direction that a user travels. The effectiveness of a PDR system lies in its success in accurately estimating the user's moving distance and direction. In this work, we propose a novel waist-mounted PDR that can measure the user's step lengths with high accuracy. We utilize the vertical acceleration of the body to calculate the user's change in height during walking. Based on the Pythagorean theorem, we can then estimate each step length using this data. Furthermore, we design a map-matching algorithm to calibrate the direction errors from the gyro using building floor plans. The results of our experiment show that we can achieve about 98.26% accuracy in estimating the user's walking distance, with an overall location error of about 0.48 m.
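The step-length idea described above can be sketched as follows. This is a hedged illustration assuming a simple inverted-pendulum model with an effective leg length L; the paper's exact formulas are not reproduced in the abstract, so the model and the factor of 2 are assumptions.

```python
import math

# Hedged sketch of waist-mounted PDR step-length estimation: the change in
# body height h during a step (from double-integrating vertical acceleration)
# plus an assumed effective leg length L give the step length via the
# Pythagorean theorem. Model details are illustrative, not from the paper.
def height_change(accel_z, dt):
    # Double integration of gravity-compensated vertical acceleration over
    # one step window; drift makes this usable only on a per-step basis.
    v, h, h_max = 0.0, 0.0, 0.0
    for a in accel_z:
        v += a * dt
        h += v * dt
        h_max = max(h_max, abs(h))
    return h_max

def step_length(height_drop_m, leg_length_m=0.9):
    # Leg is the hypotenuse; the horizontal half-step is the base.
    half = math.sqrt(leg_length_m**2 - (leg_length_m - height_drop_m)**2)
    return 2.0 * half
```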

  2. Analysis on detection accuracy of binocular photoelectric instrument optical axis parallelism digital calibration instrument

    Science.gov (United States)

    Ying, Jia-ju; Yin, Jian-ling; Wu, Dong-sheng; Liu, Jie; Chen, Yu-dan

    2017-11-01

    Low-light-level night vision devices and thermal infrared imaging binocular photoelectric instruments are widely used. Misalignment of the parallelism of a binocular instrument's ocular axes causes symptoms such as dizziness and nausea in the observer when the instrument is used for a long time. A digital calibration instrument for binocular photoelectric equipment was developed to detect ocular-axis parallelism, so that the optical-axis deviation can be measured quantitatively. As a testing instrument, its precision must be much higher than that of the instrument under test. This paper analyzes the factors that influence detection accuracy. Such factors exist at each link of the testing process and can be divided into two categories: those that directly affect the position of the reticle image, and those that affect the calculation of the center of the reticle image. The synthetic error is calculated, and the error budget is then distributed reasonably to ensure the accuracy of the calibration instrument.

  3. LARF: Instrumental Variable Estimation of Causal Effects through Local Average Response Functions

    Directory of Open Access Journals (Sweden)

    Weihua An

    2016-07-01

    LARF is an R package that provides instrumental variable estimation of treatment effects when both the endogenous treatment and its instrument (i.e., the treatment inducement) are binary. The method (Abadie 2003) involves two steps. First, pseudo-weights are constructed from the probability of receiving the treatment inducement. By default, LARF estimates this probability by a probit regression. It also provides semiparametric power-series estimation of the probability and allows users to employ other external methods to estimate it. Second, the pseudo-weights are used to estimate the local average response function conditional on treatment and covariates. LARF provides both least squares and maximum likelihood estimates of the conditional treatment effects.

  4. High-Performance Operational and Instrumentation Amplifiers

    NARCIS (Netherlands)

    Shahi, B.

    2015-01-01

    This thesis describes techniques to reduce the offset error in precision instrumentation and operational amplifiers. The offset error which is considered a major error source associated with gain blocks, together with other errors are reviewed. Conventional and newer approaches to remove offset and

  5. Local density measurement of additive manufactured copper parts by instrumented indentation

    Science.gov (United States)

    Santo, Loredana; Quadrini, Fabrizio; Bellisario, Denise; Tedde, Giovanni Matteo; Zarcone, Mariano; Di Domenico, Gildo; D'Angelo, Pierpaolo; Corona, Diego

    2018-05-01

    Instrumented flat indentation has been used to evaluate the local density of additive manufactured (AM) copper samples with different relative densities. Indentations were made using tungsten carbide (WC) flat pins with a 1 mm diameter. Pure copper powders were used in a selective laser melting (SLM) machine to produce the test samples. By changing process parameters, the relative density of the samples was varied from 63% to 71%. Indentation tests were performed on the xy surface of the AM samples. In order to correlate indentation test results with sample density, the indentation pressure at fixed displacement was selected. Results show that instrumented indentation is a valid technique for measuring the density distribution along the geometry of an SLM part. In fact, a linear trend between indentation pressure and sample density was found for the selected density range.

  6. Design of a Channel Error Simulator using Virtual Instrument Techniques for the Initial Testing of TCP/IP and SCPS Protocols

    Science.gov (United States)

    Horan, Stephen; Wang, Ru-Hai

    1999-01-01

    There exists a need for designers and developers to have a method to conveniently test a variety of communications parameters for an overall system design. This is as true when testing network protocols as when testing modulation formats. In this report, we discuss a means of providing a networking test device specifically designed to be used for space communications. This test device is a PC-based Virtual Instrument (VI) programmed using the LabVIEW™ version 5 software suite developed by National Instruments™. This instrument was designed to be portable and usable by others without special, additional equipment. The programming was designed to replicate a VME-based hardware module developed earlier at New Mexico State University (NMSU) and to provide expanded capabilities exceeding the baseline configuration existing in that module. This report describes the design goals for the VI module in the next section and follows that with a description of the design of the VI instrument. This is followed by a description of the validation tests run on the VI. An application of the error-generating VI to networking protocols is then given.
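The core function of such a channel-error simulator can be sketched in a few lines. This is a generic illustration, not the NMSU VI's actual implementation (which was built in LabVIEW): it corrupts a byte stream with independent bit errors at a given bit-error rate.

```python
import random

# Generic sketch of the core of a channel-error simulator: corrupt a byte
# stream with independent bit errors at a given bit-error rate (BER).
# Burst-error models, delay, and protocol hooks, which the full VI would
# provide, are omitted here.
def corrupt(data: bytes, ber: float, seed: int = 0) -> bytes:
    rng = random.Random(seed)        # seeded for reproducible test runs
    out = bytearray(data)
    for i in range(len(out)):
        for bit in range(8):
            if rng.random() < ber:
                out[i] ^= 1 << bit   # flip this bit
    return bytes(out)
```

With ber=0 the stream passes through untouched; with ber=1 every bit is inverted, the two end points a protocol test harness can sanity-check against.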

  7. Impact of instrument response variations on health physics measurements

    International Nuclear Information System (INIS)

    Armantrout, G.A.

    1984-10-01

    Uncertainties in estimating the potential health impact of a given radiation exposure include instrument measurement error in determining exposure and difficulty in relating this exposure to an effective dose value. Instrument error can be due to design or manufacturing deficiencies, limitations of the sensing element used, and calibration and maintenance of the instrument. This paper evaluates the errors which can be introduced by design deficiencies and limitations of the sensing element for a wide variety of commonly used survey instruments. The results indicate little difference among sensing element choice for general survey work, with variations among specific instrument designs being the major factor. Ion chamber instruments tend to be the best for all around use, while scintillator-based units should not be used where accurate measurements are required. The need to properly calibrate and maintain an instrument appears to be the most important factor in instrument accuracy. 8 references, 6 tables

  8. Comparison between calorimeter and HLNC errors

    International Nuclear Information System (INIS)

    Goldman, A.S.; De Ridder, P.; Laszlo, G.

    1991-01-01

    This paper summarizes an error analysis that compares systematic and random errors of total plutonium mass estimated for high-level neutron coincidence counter (HLNC) and calorimeter measurements. This task was part of an International Atomic Energy Agency (IAEA) study on the comparison of the two instruments to determine if HLNC measurement errors met IAEA standards and if the calorimeter gave "significantly" better precision. Our analysis was based on propagation-of-error models that contained all known sources of error, including uncertainties associated with plutonium isotopic measurements. 5 refs., 2 tabs

  9. Definition of the limit of quantification in the presence of instrumental and non-instrumental errors. Comparison among various definitions applied to the calibration of zinc by inductively coupled plasma-mass spectrometry

    Science.gov (United States)

    Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Favaro, Gabriella; Pastore, Paolo

    2015-12-01

    A definition of the limit of quantification (LOQ) in the presence of instrumental and non-instrumental errors is proposed. It is defined theoretically by combining the two-component variance regression and LOQ schemas already present in the literature, and it is applied to the calibration of zinc by the ICP-MS technique. At low concentration levels, the two-component variance LOQ definition should always be used, especially when a clean room is not available. Three LOQ definitions were considered: one in the concentration domain and two in the signal domain. The LOQ computed in the concentration domain, proposed by Currie, was completed by adding the third-order terms in the Taylor expansion, because they are of the same order of magnitude as the second-order ones and therefore cannot be neglected. In this context, the error propagation was simplified by eliminating the correlation contributions through the use of independent random variables. Among the signal-domain definitions, particular attention was devoted to the recently proposed approach based on requiring at least one significant digit in the measurement; the resulting relative LOQ values were very large, preventing quantitative analysis. It was found that the Currie schemas in the signal and concentration domains gave similar LOQ values, but the former formulation is to be preferred as it is more easily computable.
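A Currie-style LOQ with a two-component variance model can be sketched as follows. The factor of 10 and the variance model s(c)² = s0² + (k·c)² are common textbook choices and are assumptions here, not the paper's exact formulation.

```python
import math

# Hedged sketch of a Currie-style LOQ with a two-component variance model
# s(c)^2 = s0^2 + (k*c)^2: s0 is the instrumental (blank) standard
# deviation, k the relative non-instrumental component, and slope the
# calibration sensitivity. Factor 10 and the model form are assumptions.
def loq_concentration(slope, s0, k=0.0, factor=10.0, tol=1e-12):
    # Solve c = factor * s(c) / slope by fixed-point iteration, since the
    # standard deviation itself depends on concentration.
    c = factor * s0 / slope          # one-component starting point
    for _ in range(100):
        c_new = factor * math.sqrt(s0**2 + (k * c)**2) / slope
        if abs(c_new - c) < tol:
            break
        c = c_new
    return c
```

With k = 0 this reduces to the familiar one-component LOQ = 10·s0/slope; a nonzero k raises the LOQ, which is the practical point of the two-component definition at low concentrations.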

  10. Interactions and Localization of Escherichia coli Error-Prone DNA Polymerase IV after DNA Damage.

    Science.gov (United States)

    Mallik, Sarita; Popodi, Ellen M; Hanson, Andrew J; Foster, Patricia L

    2015-09-01

    Escherichia coli's DNA polymerase IV (Pol IV/DinB), a member of the Y family of error-prone polymerases, is induced during the SOS response to DNA damage and is responsible for translesion bypass and adaptive (stress-induced) mutation. In this study, the localization of Pol IV after DNA damage was followed using fluorescent fusions. After exposure of E. coli to DNA-damaging agents, fluorescently tagged Pol IV localized to the nucleoid as foci. Stepwise photobleaching indicated ∼60% of the foci consisted of three Pol IV molecules, while ∼40% consisted of six Pol IV molecules. Fluorescently tagged Rep, a replication accessory DNA helicase, was recruited to the Pol IV foci after DNA damage, suggesting that the in vitro interaction between Rep and Pol IV reported previously also occurs in vivo. Fluorescently tagged RecA also formed foci after DNA damage, and Pol IV localized to them. To investigate if Pol IV localizes to double-strand breaks (DSBs), an I-SceI endonuclease-mediated DSB was introduced close to a fluorescently labeled LacO array on the chromosome. After DSB induction, Pol IV localized to the DSB site in ∼70% of SOS-induced cells. RecA also formed foci at the DSB sites, and Pol IV localized to the RecA foci. These results suggest that Pol IV interacts with RecA in vivo and is recruited to sites of DSBs to aid in the restoration of DNA replication. DNA polymerase IV (Pol IV/DinB) is an error-prone DNA polymerase capable of bypassing DNA lesions and aiding in the restart of stalled replication forks. In this work, we demonstrate in vivo localization of fluorescently tagged Pol IV to the nucleoid after DNA damage and to DNA double-strand breaks. We show colocalization of Pol IV with two proteins: Rep DNA helicase, which participates in replication, and RecA, which catalyzes recombinational repair of stalled replication forks. Time course experiments suggest that Pol IV recruits Rep and that RecA recruits Pol IV. These findings provide in vivo evidence

  11. Metrological Array of Cyber-Physical Systems. Part 11. Remote Error Correction of Measuring Channel

    Directory of Open Access Journals (Sweden)

    Yuriy YATSUK

    2015-09-01

    The major error factors of multi-channel measuring instruments, with both the classical structure and the isolated one, are identified on the basis of an analysis of their general metrological properties. The limiting possibilities of the remote automatic method for correcting the additive and multiplicative errors of measuring instruments with the help of code-controlled measures are studied. For on-site calibration of multi-channel measuring instruments, portable voltage calibrator structures are suggested, and their metrological properties during automatic error adjustment are analysed. It was experimentally established that the unadjusted error value does not exceed ±1 mV, which satisfies most industrial applications. This confirms the main claim concerning the possibility of remote error self-adjustment of multi-channel measuring instruments, as well as the suitability of the calibrators as tools for their proper verification.

  12. Detection of random alterations to time-varying musical instrument spectra.

    Science.gov (United States)

    Horner, Andrew; Beauchamp, James; So, Richard

    2004-09-01

    The time-varying spectra of eight musical instrument sounds were randomly altered by a time-invariant process to determine how detection of spectral alteration varies with degree of alteration, instrument, musical experience, and spectral variation. Sounds were resynthesized with centroids equalized to the original sounds, with frequencies harmonically flattened, and with average spectral error levels of 8%, 16%, 24%, 32%, and 48%. Listeners were asked to discriminate the randomly altered sounds from reference sounds resynthesized from the original data. For all eight instruments, discrimination was very good for the 32% and 48% error levels, moderate for the 16% and 24% error levels, and poor for the 8% error levels. When the error levels were 16%, 24%, and 32%, the scores of musically experienced listeners were found to be significantly better than the scores of listeners with no musical experience. Also, in this same error level range, discrimination was significantly affected by the instrument tested. For error levels of 16% and 24%, discrimination scores were significantly but negatively correlated with measures of spectral incoherence and normalized centroid deviation on unaltered instrument spectra, suggesting that the presence of dynamic spectral variations tends to increase the difficulty of detecting spectral alterations. Correlation between discrimination and a measure of spectral irregularity was comparatively low.
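The kind of frame-averaged relative spectral error used to grade the altered sounds can be sketched as below; the exact normalization used in the listening study may differ in detail.

```python
import numpy as np

# Hedged sketch of a frame-averaged relative spectral error between the
# original and altered harmonic amplitudes: an RMS difference per analysis
# frame, normalized by the frame's spectral energy, averaged over frames.
def relative_spectral_error(orig, altered):
    # orig, altered: arrays of shape (frames, harmonics)
    num = np.sum((orig - altered) ** 2, axis=1)
    den = np.sum(orig ** 2, axis=1)
    return float(np.mean(np.sqrt(num / den)))
```

On this scale, the study's 8%-48% alteration levels would correspond to values of 0.08-0.48.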

  13. An Enhanced Intelligent Handheld Instrument with Visual Servo Control for 2-DOF Hand Motion Error Compensation

    Directory of Open Access Journals (Sweden)

    Yan Naing Aye

    2013-10-01

    The intelligent handheld instrument, ITrem2, enhances manual positioning accuracy by cancelling erroneous hand movements and, at the same time, provides automatic micromanipulation functions. Visual data are acquired from a high-speed monovision camera attached to the optical surgical microscope, and acceleration measurements are acquired from the inertial measurement unit (IMU) on board ITrem2. Tremor estimation and cancelling is implemented via a Band-limited Multiple Fourier Linear Combiner (BMFLC) filter. The piezoelectric-actuated micromanipulator in ITrem2 generates the 3D motion to compensate erroneous hand motion. Preliminary bench-top 2-DOF experiments have been conducted. The error motions simulated by a motion stage are reduced by 67% for multiple-frequency oscillatory motions and by 56.16% for pre-conditioned recorded physiological tremor.

  14. Analysis of alpha spectrum instrumental errors accounting for the low energy part of semiconductor detector response function

    International Nuclear Information System (INIS)

    Gurbich, A.F.

    1981-01-01

    A technique for processing instrumental charged-particle spectra is presented, using the 226 Ra alpha spectrum as an example; it permits the low-energy part of the spectrometer line shape to be taken into account, improves accuracy, and provides an estimate of detection efficiency. The results obtained show that the relative intensities of the alpha lines coincide, within statistical errors, with the known values, with line "tails" constituting up to 3% of the total line area. Taking the line tail into account shifts the peak centers of gravity by 10-20 keV. Thus the low-energy part of the alpha-spectrometer line, which is usually not taken into account during spectrum processing, markedly affects the results [ru

  15. Unit of measurement used and parent medication dosing errors.

    Science.gov (United States)

    Yin, H Shonna; Dreyer, Benard P; Ugboaja, Donna C; Sanchez, Dayana C; Paul, Ian M; Moreira, Hannah A; Rodriguez, Luis; Mendelsohn, Alan L

    2014-08-01

    Adopting the milliliter as the preferred unit of measurement has been suggested as a strategy to improve the clarity of medication instructions; teaspoon and tablespoon units may inadvertently endorse nonstandard kitchen spoon use. We examined the association between unit used and parent medication errors and whether nonstandard instruments mediate this relationship. Cross-sectional analysis of baseline data from a larger study of provider communication and medication errors. English- or Spanish-speaking parents (n = 287) whose children were prescribed liquid medications in 2 emergency departments were enrolled. Medication error defined as: error in knowledge of prescribed dose, error in observed dose measurement (compared to intended or prescribed dose); >20% deviation threshold for error. Multiple logistic regression performed adjusting for parent age, language, country, race/ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease; site. Medication errors were common: 39.4% of parents made an error in measurement of the intended dose, 41.1% made an error in the prescribed dose. Furthermore, 16.7% used a nonstandard instrument. Compared with parents who used milliliter-only, parents who used teaspoon or tablespoon units had twice the odds of making an error with the intended (42.5% vs 27.6%, P = .02; adjusted odds ratio=2.3; 95% confidence interval, 1.2-4.4) and prescribed (45.1% vs 31.4%, P = .04; adjusted odds ratio=1.9; 95% confidence interval, 1.03-3.5) dose; associations greater for parents with low health literacy and non-English speakers. Nonstandard instrument use partially mediated teaspoon and tablespoon-associated measurement errors. Findings support a milliliter-only standard to reduce medication errors. Copyright © 2014 by the American Academy of Pediatrics.

  16. Stochastic goal-oriented error estimation with memory

    Science.gov (United States)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
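The abstract's notion of memory — local truncation error as a sum of time-correlated random variables — can be illustrated with a first-order autoregressive (AR(1)) process. This is a generic sketch of temporal correlation in a noise sequence, not the authors' actual estimator; `phi` and `sigma` are hypothetical parameters:

```python
import random

def ar1_series(n, phi, sigma, seed=0):
    """Generate an AR(1) series e_t = phi*e_{t-1} + sigma*w_t,
    a simple stand-in for time-correlated local truncation errors."""
    rng = random.Random(seed)
    e, series = 0.0, []
    for _ in range(n):
        e = phi * e + sigma * rng.gauss(0.0, 1.0)
        series.append(e)
    return series

errors = ar1_series(1000, phi=0.9, sigma=0.1)
accumulated = sum(errors)  # correlated increments accumulate faster than white noise
```

With `phi = 0` the series degenerates to white noise; increasing `phi` lengthens the memory and inflates the variance of the accumulated error.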

  17. Development of a Computerized Multifunctional Form and Position Measurement Instrument

    International Nuclear Information System (INIS)

    Liu, P; Tian, W Y

    2006-01-01

A model machine of a multifunctional form and position measurement instrument controlled by a personal computer has been successfully developed. The instrument is designed as a rotary-table type with a high precision air bearing, and the radial rotation error of the rotary table is 0.08 μm. Since a high precision vertical sliding carriage supported by an air bearing is used for the instrument, the straightness error of the carriage motion is 0.3 μm/200 mm and the parallelism error of the motion of the carriage relative to the rotation axis of the rotary table is 0.4 μm/200 mm. Mathematical models have been established for assessing planar and spatial straightness, flatness, roundness, cylindricity, and coaxiality errors. By radial deviation measurement, the instrument can accurately measure form and position errors of such workpieces as shafts, round plates and sleeves of medium or small dimensions with the tolerance grades mostly used in industry.
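Roundness assessment from radial deviation data is commonly done by removing the eccentricity (first-harmonic) terms from the measured profile and taking the peak-to-valley of the residuals — the limacon approximation to a least-squares circle. A minimal sketch of that general idea, not the instrument's actual evaluation software:

```python
import math

def roundness_lsc(radii, angles):
    """Least-squares-circle roundness via the limacon approximation:
    estimate the eccentricity terms a*cos(t) + b*sin(t) from the
    profile, subtract them, and return the peak-to-valley of the
    residual radial deviations."""
    n = len(radii)
    a = 2.0 / n * sum(r * math.cos(t) for r, t in zip(radii, angles))
    b = 2.0 / n * sum(r * math.sin(t) for r, t in zip(radii, angles))
    resid = [r - a * math.cos(t) - b * math.sin(t)
             for r, t in zip(radii, angles)]
    return max(resid) - min(resid)

# synthetic profile: nominal radius 10 with a 0.2 two-lobe (oval) term
angles = [2.0 * math.pi * k / 360 for k in range(360)]
oval = [10.0 + 0.2 * math.cos(2.0 * t) for t in angles]
p2v = roundness_lsc(oval, angles)  # ~0.4: twice the two-lobe amplitude
```

A purely eccentric profile (first harmonic only) yields a roundness of essentially zero, since eccentricity is a set-up artifact, not a form error.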

  18. Conjugate descent formulation of backpropagation error in feedforward neural networks

    Directory of Open Access Journals (Sweden)

    NK Sharma

    2009-06-01

The feedforward neural network architecture uses backpropagation learning to determine optimal weights between different interconnected layers. This learning procedure uses a gradient descent technique applied to a sum-of-squares error function for the given input-output pattern. It employs an iterative procedure to minimise the error function for a given set of patterns, by adjusting the weights of the network. The first derivatives of the error with respect to the weights identify the local error surface in the descent direction. Hence the network exhibits a different local error surface for every different pattern presented to it, and weights are iteratively modified in order to minimise the current local error. The determination of an optimal weight vector is possible only when the total minimum error (mean of the minimum local errors for all patterns from the training set) may be minimised. In this paper, we present a general mathematical formulation for the second derivative of the error function with respect to the weights (which represents a conjugate descent) for arbitrary feedforward neural network topologies, and we use this derivative information to obtain the optimal weight vector. The local error is backpropagated among the units of hidden layers via the second order derivative of the error with respect to the weights of the hidden and output layers independently and also in combination. The new total minimum error point may be evaluated with the help of the current total minimum error and the current minimised local error. The weight modification process is performed twice: once with respect to the present local error and once more with respect to the current total or mean error. We present some numerical evidence that our proposed method yields better network weights than those determined via a conventional gradient descent approach.
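The first-order (gradient descent) part of the procedure can be sketched with a single linear unit trained by per-pattern updates on a sum-of-squares error. This toy deliberately omits hidden layers and the paper's second-derivative extension; the training data are hypothetical:

```python
def train_sse(patterns, lr=0.1, epochs=200):
    """Gradient descent on a sum-of-squares error for a single linear
    unit y = w*x + b. Each pattern contributes its own local error;
    weights are nudged down the local error surface per presentation."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in patterns:      # per-pattern (local) update
            y = w * x + b
            err = y - target            # dE/dy for E = 0.5 * err**2
            w -= lr * err * x           # dE/dw = err * x
            b -= lr * err               # dE/db = err
    return w, b

w, b = train_sse([(0.0, 1.0), (1.0, 3.0)])  # learns approximately y = 2x + 1
```

Because the targets are exactly realizable, the per-pattern updates converge to the zero-error weight vector; with inconsistent patterns they would instead cycle near the total-minimum-error point the abstract discusses.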

  19. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  20. Residual rotational set-up errors after daily cone-beam CT image guided radiotherapy of locally advanced cervical cancer

    International Nuclear Information System (INIS)

    Laursen, Louise Vagner; Elstrøm, Ulrik Vindelev; Vestergaard, Anne; Muren, Ludvig P.; Petersen, Jørgen Baltzer; Lindegaard, Jacob Christian; Grau, Cai; Tanderup, Kari

    2012-01-01

Purpose: Due to the often quite extended treatment fields in cervical cancer radiotherapy, uncorrected rotational set-up errors result in a potential risk of target miss. This study reports on the residual rotational set-up error after using daily cone beam computed tomography (CBCT) to position cervical cancer patients for radiotherapy treatment. Methods and materials: Twenty-five patients with locally advanced cervical cancer had daily CBCT scans (650 CBCTs in total) prior to treatment delivery. We retrospectively analyzed the translational shifts made in the clinic prior to each treatment fraction as well as the residual rotational errors remaining after translational correction. Results: The CBCT-guided couch movement resulted in a mean translational 3D vector correction of 7.4 mm. Residual rotational error resulted in a target shift exceeding 5 mm in 57 of the 650 treatment fractions. Three patients alone accounted for 30 of these fractions. Nine patients had no shifts exceeding 5 mm and 13 patients had 5 or fewer treatment fractions with such shifts. Conclusion: Twenty-two of the 25 patients had no or only a few treatment fractions with target shifts larger than 5 mm due to residual rotational error. However, three patients displayed a significant number of shifts, suggesting a more systematic set-up error.
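The geometry behind a 5 mm criterion is simple: a point at distance r from the rotation axis is displaced along a chord of length 2r·sin(θ/2) ≈ rθ. A small sketch with hypothetical numbers (not values taken from the study):

```python
import math

def rotational_shift(r_mm, angle_deg):
    """Chord-length displacement of a point r_mm from the rotation
    axis after an uncorrected rotation of angle_deg degrees."""
    theta = math.radians(angle_deg)
    return 2.0 * r_mm * math.sin(theta / 2.0)

# e.g. a field edge 100 mm from the isocenter with a 3 degree residual
# rotation shifts by about 5.2 mm -- already past a 5 mm tolerance
shift = rotational_shift(100.0, 3.0)
```

This makes clear why extended fields are the concern: the same residual rotation produces a larger target shift the farther the field edge sits from the rotation axis.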

  1. Rectifying calibration error of Goldmann applanation tonometer is easy!

    Directory of Open Access Journals (Sweden)

    Nikhil S Choudhari

    2014-01-01

Purpose: The Goldmann applanation tonometer (GAT) is the current Gold standard tonometer. However, its calibration error is common and can go unnoticed in clinics. Its company repair has limitations. The purpose of this report is to describe a self-taught technique of rectifying calibration error of GAT. Materials and Methods: Twenty-nine slit-lamp-mounted Haag-Streit Goldmann tonometers (Model AT 900 C/M; Haag-Streit, Switzerland) were included in this cross-sectional interventional pilot study. The technique of rectification of calibration error of the tonometer involved cleaning and lubrication of the instrument followed by alignment of weights when lubrication alone didn't suffice. We followed the South East Asia Glaucoma Interest Group's definition of calibration error tolerance (acceptable GAT calibration error within ±2, ±3 and ±4 mm Hg at the 0, 20 and 60-mm Hg testing levels, respectively). Results: Twelve out of 29 (41.3%) GATs were out of calibration. The range of positive and negative calibration error at the clinically most important 20-mm Hg testing level was 0.5 to 20 mm Hg and -0.5 to -18 mm Hg, respectively. Cleaning and lubrication alone sufficed to rectify calibration error of 11 (91.6%) faulty instruments. Only one (8.3%) faulty GAT required alignment of the counter-weight. Conclusions: Rectification of calibration error of GAT is possible in-house. Cleaning and lubrication of GAT can be carried out even by eye care professionals and may suffice to rectify calibration error in the majority of faulty instruments. Such an exercise may drastically reduce the downtime of the Gold standard tonometer.

  2. Error evaluation method for material accountancy measurement. Evaluation of random and systematic errors based on material accountancy data

    International Nuclear Information System (INIS)

    Nidaira, Kazuo

    2008-01-01

International Target Values (ITV) show random and systematic measurement uncertainty components as a reference for routinely achievable measurement quality in the accountancy measurement. The measurement uncertainty, called error henceforth, needs to be periodically evaluated and checked against ITV for consistency, as the error varies according to measurement methods, instruments, operators, certified reference samples, frequency of calibration, and so on. In the paper an error evaluation method was developed, focusing on (1) specifying the error calculation model clearly, (2) always obtaining positive random and systematic error variances, (3) obtaining the probability density distribution of an error variance, and (4) confirming the evaluation method by simulation. In addition, the method was demonstrated by applying it to real data. (author)
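One common way to separate random and systematic components is from paired operator-inspector differences: the mean difference estimates the systematic part and the scatter of the differences the random part. This is a simplified illustration of that general idea, not the paper's exact calculation model:

```python
import statistics

def error_components(operator, inspector):
    """Split paired operator-inspector differences into a systematic
    component (mean difference) and a random component (sample standard
    deviation of the differences)."""
    d = [o - i for o, i in zip(operator, inspector)]
    systematic = statistics.mean(d)
    random_sd = statistics.stdev(d)
    return systematic, random_sd

# hypothetical paired measurements of the same items
sys_err, rand_err = error_components(
    [10.2, 10.1, 10.3, 10.2], [10.0, 10.0, 10.0, 10.0])
```

In this toy data the operator reads consistently high by 0.2 (systematic) with a scatter of about 0.08 (random); a full accountancy evaluation would additionally model calibration periods and propagate the variances, as the abstract describes.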

  3. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    Science.gov (United States)

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling
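The contrast between the two error types can be reproduced in a stripped-down linear setting — additive Gaussian errors and ordinary least squares rather than the paper's multiplicative, log-scale Poisson analysis: classical error attenuates the slope toward the null, while Berkson error leaves it essentially unbiased:

```python
import random

def simulate_attenuation(n=20000, beta=1.0, sigma=0.5, seed=42):
    """Compare classical vs Berkson measurement error in a linear model
    y = beta*x + noise, recovering the slope by OLS. All parameters are
    illustrative."""
    rng = random.Random(seed)

    def ols_slope(z, y):
        mz, my = sum(z) / len(z), sum(y) / len(y)
        num = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
        den = sum((zi - mz) ** 2 for zi in z)
        return num / den

    # classical: observed z = true x + error -> regression on z attenuates
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    y = [beta * xi + rng.gauss(0.0, 0.1) for xi in x]
    z_classical = [xi + rng.gauss(0.0, sigma) for xi in x]

    # Berkson: true exposure = observed z + error -> slope stays unbiased
    z_berkson = [rng.gauss(0.0, 1.0) for _ in range(n)]
    y_berkson = [beta * (zi + rng.gauss(0.0, sigma)) + rng.gauss(0.0, 0.1)
                 for zi in z_berkson]
    return ols_slope(z_classical, y), ols_slope(z_berkson, y_berkson)

classical_slope, berkson_slope = simulate_attenuation()  # ~0.8 vs ~1.0
```

With unit-variance truth and `sigma = 0.5`, the classical slope shrinks toward beta·1/(1+sigma²) = 0.8, mirroring the attenuation the study reports for classical-type error.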

  4. Quality of local authority occupational therapy services: developing an instrument to measure the user's perspective.

    NARCIS (Netherlands)

    Calnan, S.; Sixma, H.J.; Calnan, M.W.; Groenewegen, P.P.

    2000-01-01

    The aims of this paper are threefold: (1) to describe the development of an instrument measuring quality of care from the specific perspective of the users of local authority occupational therapy services; (2) to present the results from a survey of users' views about the quality of services offered

  5. Instrument uncertainty predictions

    International Nuclear Information System (INIS)

    Coutts, D.A.

    1991-07-01

    The accuracy of measurements and correlations should normally be provided for most experimental activities. The uncertainty is a measure of the accuracy of a stated value or equation. The uncertainty term reflects a combination of instrument errors, modeling limitations, and phenomena understanding deficiencies. This report provides several methodologies to estimate an instrument's uncertainty when used in experimental work. Methods are shown to predict both the pretest and post-test uncertainty
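When the individual error contributions — instrument, modeling, phenomena understanding — are treated as independent, they are typically rolled up by root-sum-square. A minimal sketch of that combination (an illustration, not the report's specific methodology):

```python
import math

def combined_uncertainty(components):
    """Root-sum-square combination of independent error components
    into a single uncertainty estimate."""
    return math.sqrt(sum(c * c for c in components))

# hypothetical components: instrument error 0.3, model limitation 0.4
u = combined_uncertainty([0.3, 0.4])  # ~0.5
```

The same formula applies pretest (with predicted component magnitudes) and post-test (with observed ones), which is the distinction the report draws.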

  6. Problems with radiological surveillance instrumentation

    International Nuclear Information System (INIS)

    Swinth, K.L.; Tanner, J.E.; Fleming, D.M.

    1984-09-01

    Many radiological surveillance instruments are in use at DOE facilities throughout the country. These instruments are an essential part of all health physics programs, and poor instrument performance can increase program costs or compromise program effectiveness. Generic data from simple tests on newly purchased instruments shows that many instruments will not meet requirements due to manufacturing defects. In other cases, lack of consideration of instrument use has resulted in poor acceptance of instruments and poor reliability. The performance of instruments is highly variable for electronic and mechanical performance, radiation response, susceptibility to interferences and response to environmental factors. Poor instrument performance in these areas can lead to errors or poor accuracy in measurements

  7. Problems with radiological surveillance instrumentation

    International Nuclear Information System (INIS)

    Swinth, K.L.; Tanner, J.E.; Fleming, D.M.

    1985-01-01

    Many radiological surveillance instruments are in use at DOE facilities throughout the country. These instruments are an essential part of all health physics programs, and poor instrument performance can increase program costs or compromise program effectiveness. Generic data from simple tests on newly purchased instruments shows that many instruments will not meet requirements due to manufacturing defects. In other cases, lack of consideration of instrument use has resulted in poor acceptance of instruments and poor reliability. The performance of instruments is highly variable for electronic and mechanical performance, radiation response, susceptibility to interferences and response to environmental factors. Poor instrument performance in these areas can lead to errors or poor accuracy in measurements

  8. Error management for musicians: an interdisciplinary conceptual framework.

    Science.gov (United States)

    Kruse-Weber, Silke; Parncutt, Richard

    2014-01-01

    Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians' generally negative attitude toward errors and the tendency to aim for flawless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey-relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further music education and

  9. Error management for musicians: an interdisciplinary conceptual framework

    Directory of Open Access Journals (Sweden)

    Silke eKruse-Weber

    2014-07-01

Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians’ generally negative attitude toward errors and the tendency to aim for errorless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of these abilities. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further

  10. Strictly local one-dimensional topological quantum error correction with symmetry-constrained cellular automata

    Directory of Open Access Journals (Sweden)

    Nicolai Lang, Hans Peter Büchler

    2018-01-01

Active quantum error correction on topological codes is one of the most promising routes to long-term qubit storage. In view of future applications, the scalability of the used decoding algorithms in physical implementations is crucial. In this work, we focus on the one-dimensional Majorana chain and construct a strictly local decoder based on a self-dual cellular automaton. We study numerically and analytically its performance and exploit these results to contrive a scalable decoder with exponentially growing decoherence times in the presence of noise. Our results pave the way for scalable and modular designs of actively corrected one-dimensional topological quantum memories.

  11. European integration and the supervision of local and regional authorities: experiences in the Netherlands with requirements of European Community law

    Directory of Open Access Journals (Sweden)

    Bart Hessel

    2006-06-01

As a result of increasing European integration, local and regional authorities are having to deal with European law more and more intensively. As Member States (read: central government) are responsible vis-à-vis the Community for the errors of local and regional authorities, the question arises within Member States whether the central government possesses sufficient supervisory instruments for complying with their obligations under Community law: they must ensure that the errors of local and regional authorities are rectified in time, and national law must provide for sufficient possibilities to do so. Although Community law is neutral towards the internal relations between the various tiers of government within the Member States, this responsibility of the central government may, as a result of European integration, lead to a need for more powerful supervisory instruments in relation to local and regional authorities. In the past five years there has been some debate on this subject within the Netherlands and after a long delay the Dutch cabinet in 2004 decided that the existing supervisory instruments in the decentralized unitary state of the Netherlands should be expanded. The legislation intended to realize this expansion is being prepared. This discussion and its results would seem of interest to other Member States of the Community facing similar problems.

  12. Angular discretization errors in transport theory

    International Nuclear Information System (INIS)

    Nelson, P.; Yu, F.

    1992-01-01

Elements of the information-based complexity theory are computed for several types of information and associated algorithms for angular approximations in the setting of a one-dimensional model problem. For point-evaluation information, the local and global radii of information are computed, a (trivial) optimal algorithm is determined, and the local and global error of a discrete ordinates algorithm are shown to be infinite. For average cone-integral information, the local and global radii of information are computed, and the local and global error tends to zero as the underlying partition is indefinitely refined. A central algorithm for such information and an optimal partition (of given cardinality) are described. It is further shown that the analytic first-collision source method has zero error (for the purely absorbing model problem). Implications of the restricted problem domains suitable for the various types of information are discussed.

  13. Assessment of Multiple Scattering Errors of Laser Diffraction Instruments

    National Research Council Canada - National Science Library

    Strakey, Peter

    2003-01-01

    The accuracy of two commercial laser diffraction instruments was compared under conditions of multiple scattering designed to simulate the high droplet number densities encountered in liquid propellant rocket combustors...

  14. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
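The difference between the two assumptions is easiest to see in the update rules themselves. Below is a hedged sketch: TER uses a Rescorla-Wagner-style compound error shared by all present cues, while LER gives each cue its own error term. Cue names and learning rate are illustrative, and the models compared in the paper are richer than this:

```python
def ter_update(weights, cues, outcome, lr=0.1):
    """Total error reduction: every present cue learns from the same
    discrepancy between the outcome and the compound prediction."""
    err = outcome - sum(weights[c] for c in cues)
    for c in cues:
        weights[c] += lr * err

def ler_update(weights, cues, outcome, lr=0.1):
    """Local error reduction: each cue learns from the discrepancy
    between the outcome and that cue's own prediction."""
    for c in cues:
        weights[c] += lr * (outcome - weights[c])

# reinforce two cues in compound and compare asymptotic weights
w_ter = {"A": 0.0, "B": 0.0}
w_ler = {"A": 0.0, "B": 0.0}
for _ in range(500):
    ter_update(w_ter, ["A", "B"], 1.0)
    ler_update(w_ler, ["A", "B"], 1.0)
```

With two cues reinforced in compound, TER settles at weights summing to the outcome (0.5 each here), whereas LER drives every cue's weight to the full outcome — the kind of divergent prediction that model comparisons of this sort exploit.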

  15. Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error

    Science.gov (United States)

    Miller, Austin

    In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.

  16. Developing Learning Model Based on Local Culture and Instrument for Mathematical Higher Order Thinking Ability

    Science.gov (United States)

    Saragih, Sahat; Napitupulu, E. Elvis; Fauzi, Amin

    2017-01-01

This research aims to develop a student-centered learning model based on local culture and an instrument of mathematical higher order thinking of junior high school students in the frame of the 2013-Curriculum in North Sumatra, Indonesia. The subjects of the research are seventh graders, sampled proportionally at random from three public…

  17. Fit between Conservation Instruments and Local Social Systems: Cases of Co-management and Payments for Ecosystem Services

    OpenAIRE

    Sarkki Simo; Rantala Lauri; Karjalainen Timo P.

    2015-01-01

    We draw on the concept of ‘fit’ to understand how co-management and Payments for Ecosystem Services (PES) as governance instruments could better acknowledge local social complexities. Achieving ‘participatory fit’ requires well-designed and fair processes, which enhance local acceptance towards the implemented rules. Thus, such fit can contribute to establishing new institutions in conservation governance. However, previous literature on participation has had strong focus on properties of dec...

  18. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-09-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
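Given a code's weight distribution A_i, the probability of an undetected error on a binary symmetric channel with bit-error rate p is Σ_i A_i·p^i·(1−p)^(n−i), summed over the nonzero-weight codewords. A sketch using the classic [7,4] Hamming code's weight distribution for illustration — the shortened codes in IEEE 802.3 are far longer, so their weight distributions are not reproduced here:

```python
def undetected_error_prob(weights, n, p):
    """Probability of undetected error on a binary symmetric channel
    with bit-error rate p, given the weight distribution {i: A_i} of the
    nonzero codewords of a length-n linear code. An error pattern goes
    undetected exactly when it equals a nonzero codeword."""
    return sum(a * p**i * (1.0 - p)**(n - i) for i, a in weights.items())

# weight distribution of the [7,4] Hamming code: A_3 = A_4 = 7, A_7 = 1
hamming74 = {3: 7, 4: 7, 7: 1}
pud = undetected_error_prob(hamming74, 7, 0.01)  # on the order of 1e-5
```

At p = 1/2 the expression reduces to (2^k − 1)/2^n, the worst-case value the cited analysis sweeps toward as it evaluates bit-error rates up to 1/2.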

  19. Autonomous optical navigation using nanosatellite-class instruments: a Mars approach case study

    Science.gov (United States)

    Enright, John; Jovanovic, Ilija; Kazemi, Laila; Zhang, Harry; Dzamba, Tom

    2018-02-01

This paper examines the effectiveness of small star trackers for orbital estimation. Autonomous optical navigation has been used for some time to provide local estimates of orbital parameters during close approach to celestial bodies. These techniques have been used extensively on spacecraft dating back to the Voyager missions, but often rely on long exposures and large instrument apertures. Using a hyperbolic Mars approach as a reference mission, we present an EKF-based navigation filter suitable for nanosatellite missions. Observations of Mars and its moons allow the estimator to correct initial errors in both position and velocity. Our results show that nanosatellite-class star trackers can produce good quality navigation solutions with low position (<300 m) and velocity (<0.15 m/s) errors as the spacecraft approaches periapse.

  20. Monitoring Instrument Performance in Regional Broadband Seismic Network Using Ambient Seismic Noise

    Science.gov (United States)

    Ye, F.; Lyu, S.; Lin, J.

    2017-12-01

In the past ten years, the number of seismic stations has increased significantly, and regional seismic networks with advanced technology have been gradually developed all over the world. The resulting broadband data help to improve seismological research. It is important to monitor the performance of broadband instruments in a new network over a long period of time to ensure the accuracy of seismic records. Here, we propose a method that uses ambient noise data in the period range 5-25 s to monitor instrument performance and check data quality in situ. The method is based on an analysis of amplitude and phase index parameters calculated from pairwise cross-correlations of three stations, which provides multiple references for reliable error estimates. Index parameters calculated daily during a two-year observation period are evaluated to identify stations with instrument response errors in near real time. During data processing, initial instrument responses are used in place of available instrument responses to simulate instrument response errors, which are then used to verify our results. We also examine the feasibility of the method using tailing-noise data from USArray stations at different locations, and analyze the possible instrumental errors that result in time-shifts, which are used to verify the method. Additionally, we show in an application that instrument response errors caused by pole-zero variations can, when monitoring temporal variations in crustal properties, appear as statistically significant velocity perturbations larger than the standard deviation. The results indicate that monitoring seismic instrument performance helps eliminate data pollution before analysis begins.
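A toy version of one such pairwise check is to locate the peak of the cross-correlation between two stations' records: a peak away from the expected lag suggests a timing or response problem. This brute-force sketch is illustrative only, not the index parameters used in the paper:

```python
def xcorr_lag(a, b):
    """Lag (in samples) at which the cross-correlation of two
    equal-length records peaks, using the convention
    c(l) = sum_i a[i] * b[i - l]. A drift in this lag between station
    pairs over time hints at an instrument timing error."""
    n = len(a)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-n + 1, n):
        val = sum(a[i] * b[i - lag]
                  for i in range(max(0, lag), min(n, n + lag)))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

a = [0, 0, 0, 1, 2, 1, 0, 0, 0, 0]
b = [0, 0, 0, 0, 0, 1, 2, 1, 0, 0]  # a delayed by two samples
lag = xcorr_lag(a, b)
```

Under the convention above, a copy of `a` delayed by two samples peaks at lag −2; a real implementation would work with long noise windows, normalize the correlation, and interpolate the peak.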

  1. Analysis of strain error sources in micro-beam Laue diffraction

    International Nuclear Information System (INIS)

    Hofmann, Felix; Eve, Sophie; Belnoue, Jonathan; Micha, Jean-Sébastien; Korsunsky, Alexander M.

    2011-01-01

    Micro-beam Laue diffraction is an experimental method that allows the measurement of local lattice orientation and elastic strain within individual grains of engineering alloys, ceramics, and other polycrystalline materials. Unlike other analytical techniques, e.g. based on electron microscopy, it is not limited to surface characterisation or thin sections, but rather allows non-destructive measurements in the material bulk. This is of particular importance for in situ loading experiments where the mechanical response of a material volume (rather than just surface) is studied and it is vital that no perturbation/disturbance is introduced by the measurement technique. Whilst the technique allows lattice orientation to be determined to a high level of precision, accurate measurement of elastic strains and estimating the errors involved is a significant challenge. We propose a simulation-based approach to assess the elastic strain errors that arise from geometrical perturbations of the experimental setup. Using an empirical combination rule, the contributions of different geometrical uncertainties to the overall experimental strain error are estimated. This approach was applied to the micro-beam Laue diffraction setup at beamline BM32 at the European Synchrotron Radiation Facility (ESRF). Using a highly perfect germanium single crystal, the mechanical stability of the instrument was determined and hence the expected strain errors predicted. Comparison with the actual strain errors found in a silicon four-point beam bending test showed good agreement. The simulation-based error analysis approach makes it possible to understand the origins of the experimental strain errors and thus allows a directed improvement of the experimental geometry to maximise the benefit in terms of strain accuracy.

  2. Developing Evidence for Action on the Postgraduate Experience: An Effective Local Instrument to Move beyond Benchmarking

    Science.gov (United States)

    Sampson, K. A.; Johnston, L.; Comer, K.; Brogt, E.

    2016-01-01

    Summative and benchmarking surveys to measure the postgraduate student research experience are well reported in the literature. While useful, we argue that local instruments that provide formative resources with an academic development focus are also required. If higher education institutions are to move beyond the identification of issues and…

  3. Automated Patient Identification and Localization Error Detection Using 2-Dimensional to 3-Dimensional Registration of Kilovoltage X-Ray Setup Images

    International Nuclear Information System (INIS)

    Lamb, James M.; Agazaryan, Nzhde; Low, Daniel A.

    2013-01-01

    Purpose: To determine whether kilovoltage x-ray projection radiation therapy setup images could be used to perform patient identification and detect gross errors in patient setup using a computer algorithm. Methods and Materials: Three patient cohorts treated using a commercially available image guided radiation therapy (IGRT) system that uses 2-dimensional to 3-dimensional (2D-3D) image registration were retrospectively analyzed: a group of 100 cranial radiation therapy patients, a group of 100 prostate cancer patients, and a group of 83 patients treated for spinal lesions. The setup images were acquired using fixed in-room kilovoltage imaging systems. In the prostate and cranial patient groups, localizations using image registration were performed between computed tomography (CT) simulation images from radiation therapy planning and setup x-ray images corresponding both to the same patient and to different patients. For the spinal patients, localizations were performed to the correct vertebral body, and to an adjacent vertebral body, using planning CTs and setup x-ray images from the same patient. An image similarity measure used by the IGRT system image registration algorithm was extracted from the IGRT system log files and evaluated as a discriminant for error detection. Results: A threshold value of the similarity measure could be chosen to separate correct and incorrect patient matches and correct and incorrect vertebral body localizations with excellent accuracy for these patient cohorts. A 10-fold cross-validation using linear discriminant analysis yielded misclassification probabilities of 0.000, 0.0045, and 0.014 for the cranial, prostate, and spinal cases, respectively. Conclusions: An automated measure of the image similarity between x-ray setup images and corresponding planning CT images could be used to perform automated patient identification and detection of localization errors in radiation therapy treatments.

  4. Automated Patient Identification and Localization Error Detection Using 2-Dimensional to 3-Dimensional Registration of Kilovoltage X-Ray Setup Images

    Energy Technology Data Exchange (ETDEWEB)

    Lamb, James M., E-mail: jlamb@mednet.ucla.edu; Agazaryan, Nzhde; Low, Daniel A.

    2013-10-01

    Purpose: To determine whether kilovoltage x-ray projection radiation therapy setup images could be used to perform patient identification and detect gross errors in patient setup using a computer algorithm. Methods and Materials: Three patient cohorts treated using a commercially available image guided radiation therapy (IGRT) system that uses 2-dimensional to 3-dimensional (2D-3D) image registration were retrospectively analyzed: a group of 100 cranial radiation therapy patients, a group of 100 prostate cancer patients, and a group of 83 patients treated for spinal lesions. The setup images were acquired using fixed in-room kilovoltage imaging systems. In the prostate and cranial patient groups, localizations using image registration were performed between computed tomography (CT) simulation images from radiation therapy planning and setup x-ray images corresponding both to the same patient and to different patients. For the spinal patients, localizations were performed to the correct vertebral body, and to an adjacent vertebral body, using planning CTs and setup x-ray images from the same patient. An image similarity measure used by the IGRT system image registration algorithm was extracted from the IGRT system log files and evaluated as a discriminant for error detection. Results: A threshold value of the similarity measure could be chosen to separate correct and incorrect patient matches and correct and incorrect vertebral body localizations with excellent accuracy for these patient cohorts. A 10-fold cross-validation using linear discriminant analysis yielded misclassification probabilities of 0.000, 0.0045, and 0.014 for the cranial, prostate, and spinal cases, respectively. Conclusions: An automated measure of the image similarity between x-ray setup images and corresponding planning CT images could be used to perform automated patient identification and detection of localization errors in radiation therapy treatments.

  5. Automated patient identification and localization error detection using 2-dimensional to 3-dimensional registration of kilovoltage x-ray setup images.

    Science.gov (United States)

    Lamb, James M; Agazaryan, Nzhde; Low, Daniel A

    2013-10-01

    To determine whether kilovoltage x-ray projection radiation therapy setup images could be used to perform patient identification and detect gross errors in patient setup using a computer algorithm. Three patient cohorts treated using a commercially available image guided radiation therapy (IGRT) system that uses 2-dimensional to 3-dimensional (2D-3D) image registration were retrospectively analyzed: a group of 100 cranial radiation therapy patients, a group of 100 prostate cancer patients, and a group of 83 patients treated for spinal lesions. The setup images were acquired using fixed in-room kilovoltage imaging systems. In the prostate and cranial patient groups, localizations using image registration were performed between computed tomography (CT) simulation images from radiation therapy planning and setup x-ray images corresponding both to the same patient and to different patients. For the spinal patients, localizations were performed to the correct vertebral body, and to an adjacent vertebral body, using planning CTs and setup x-ray images from the same patient. An image similarity measure used by the IGRT system image registration algorithm was extracted from the IGRT system log files and evaluated as a discriminant for error detection. A threshold value of the similarity measure could be chosen to separate correct and incorrect patient matches and correct and incorrect vertebral body localizations with excellent accuracy for these patient cohorts. A 10-fold cross-validation using linear discriminant analysis yielded misclassification probabilities of 0.000, 0.0045, and 0.014 for the cranial, prostate, and spinal cases, respectively. An automated measure of the image similarity between x-ray setup images and corresponding planning CT images could be used to perform automated patient identification and detection of localization errors in radiation therapy treatments. Copyright © 2013 Elsevier Inc. All rights reserved.
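    The decision rule amounts to thresholding a scalar similarity score; a sketch with synthetic scores (the real study used the IGRT system's logged similarity measure and linear discriminant analysis, not these invented distributions) might look like:

```python
import numpy as np

def threshold_classifier(scores_correct, scores_wrong):
    """Pick the midpoint of the class means as decision threshold
    (equivalent to 1-D LDA with equal variances and priors)."""
    return 0.5 * (scores_correct.mean() + scores_wrong.mean())

def misclassification_rate(scores_correct, scores_wrong, thr):
    # correct matches should score above thr, wrong matches below
    errors = (scores_correct < thr).sum() + (scores_wrong >= thr).sum()
    return errors / (len(scores_correct) + len(scores_wrong))

rng = np.random.default_rng(0)
good = rng.normal(0.9, 0.05, 100)   # synthetic scores, correct matches
bad = rng.normal(0.5, 0.05, 100)    # synthetic scores, wrong matches
thr = threshold_classifier(good, bad)
rate = misclassification_rate(good, bad, thr)
```

    With well-separated score distributions, as reported for these cohorts, the misclassification rate of such a gate is near zero.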

  6. Measurement, instrumentation, and sensors handbook

    CERN Document Server

    Eren, Halit

    2014-01-01

    The Second Edition of the bestselling Measurement, Instrumentation, and Sensors Handbook brings together all aspects of the design and implementation of measurement, instrumentation, and sensors. Reflecting the current state of the art, it describes the use of instruments and techniques for performing practical measurements in engineering, physics, chemistry, and the life sciences and discusses processing systems, automatic data acquisition, reduction and analysis, operation characteristics, accuracy, errors, calibrations, and the incorporation of standards for control purposes. Organized acco

  7. Self-Interaction Error in Density Functional Theory: An Appraisal.

    Science.gov (United States)

    Bao, Junwei Lucas; Gagliardi, Laura; Truhlar, Donald G

    2018-05-03

    Self-interaction error (SIE) is considered to be one of the major sources of error in most approximate exchange-correlation functionals for Kohn-Sham density-functional theory (KS-DFT), and it is large with all local exchange-correlation functionals and with some hybrid functionals. In this work, we consider systems conventionally considered to be dominated by SIE. For these systems, we demonstrate that by using multiconfiguration pair-density functional theory (MC-PDFT), the error of a translated local density-functional approximation is significantly reduced (by a factor of 3) when using an MCSCF density and on-top density, as compared to using KS-DFT with the parent functional; the error in MC-PDFT with local on-top functionals is even lower than the error in some popular KS-DFT hybrid functionals. Density-functional theory, either in MC-PDFT form with local on-top functionals or in KS-DFT form with some functionals having 50% or more nonlocal exchange, has smaller errors for SIE-prone systems than does CASSCF, which has no SIE.

  8. On the relation between orbital-localization and self-interaction errors in the density functional theory treatment of organic semiconductors.

    Science.gov (United States)

    Körzdörfer, T

    2011-03-07

    It is commonly argued that the self-interaction error (SIE) inherent in semilocal density functionals is related to the degree of the electronic localization. Yet at the same time there exists a latent ambiguity in the definitions of the terms "localization" and "self-interaction," which ultimately prevents a clear and readily accessible quantification of this relationship. This problem is particularly pressing for organic semiconductor molecules, in which delocalized molecular orbitals typically alternate with localized ones, thus leading to major distortions in the eigenvalue spectra. This paper discusses the relation between localization and SIEs in organic semiconductors in detail. Its findings provide further insights into the SIE in the orbital energies and yield a new perspective on the failure of self-interaction corrections that identify delocalized orbital densities with electrons. © 2011 American Institute of Physics.

  9. Estimating and localizing the algebraic and total numerical errors using flux reconstructions

    Czech Academy of Sciences Publication Activity Database

    Papež, Jan; Strakoš, Z.; Vohralík, M.

    2018-01-01

    Roč. 138, č. 3 (2018), s. 681-721 ISSN 0029-599X R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : numerical solution of partial differential equations * finite element method * a posteriori error estimation * algebraic error * discretization error * stopping criteria * spatial distribution of the error Subject RIV: BA - General Mathematics Impact factor: 2.152, year: 2016

  10. The QUIET Instrument

    Energy Technology Data Exchange (ETDEWEB)

    Bischoff, C.; et al.

    2012-07-01

    The Q/U Imaging ExperimenT (QUIET) is designed to measure polarization in the Cosmic Microwave Background, targeting the imprint of inflationary gravitational waves at large angular scales (~1°). Between 2008 October and 2010 December, two independent receiver arrays were deployed sequentially on a 1.4 m side-fed Dragonian telescope. The polarimeters which form the focal planes use a highly compact design based on High Electron Mobility Transistors (HEMTs) that provides simultaneous measurements of the Stokes parameters Q, U, and I in a single module. The 17-element Q-band polarimeter array, with a central frequency of 43.1 GHz, has the best sensitivity (69 μK s^(1/2)) and the lowest instrumental systematic errors ever achieved in this band, contributing to the tensor-to-scalar ratio at r < 0.1. The 84-element W-band polarimeter array has a sensitivity of 87 μK s^(1/2) at a central frequency of 94.5 GHz. It has the lowest systematic errors to date, contributing at r < 0.01. The two arrays together cover multipoles in the range ℓ ≈ 25-975. These are the largest HEMT-based arrays deployed to date. This article describes the design, calibration, performance of, and sources of systematic error for the instrument.

  11. Local-metrics error-based Shepard interpolation as surrogate for highly non-linear material models in high dimensions

    Science.gov (United States)

    Lorenzi, Juan M.; Stecher, Thomas; Reuter, Karsten; Matera, Sebastian

    2017-10-01

    Many problems in computational materials science and chemistry require the evaluation of expensive functions with locally rapid changes, such as the turn-over frequency of first principles kinetic Monte Carlo models for heterogeneous catalysis. Because of the high computational cost, it is often desirable to replace the original with a surrogate model, e.g., for use in coupled multiscale simulations. The construction of surrogates becomes particularly challenging in high dimensions. Here, we present a novel version of the modified Shepard interpolation method which can overcome the curse of dimensionality for such functions to give faithful reconstructions even from very modest numbers of function evaluations. The introduction of local metrics allows us to take advantage of the fact that, on a local scale, rapid variation often occurs only across a small number of directions. Furthermore, we use local error estimates to weigh different local approximations, which helps avoid artificial oscillations. Finally, we test our approach on a number of challenging analytic functions as well as a realistic kinetic Monte Carlo model. Our method not only outperforms existing isotropic metric Shepard methods but also state-of-the-art Gaussian process regression.
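    For reference, the classical Shepard (inverse-distance-weighted) baseline that the paper extends with local metrics and error-based weights can be sketched as follows (the extensions themselves are omitted here):

```python
import numpy as np

def shepard(x_query, x_data, y_data, p=2, eps=1e-12):
    """Basic Shepard interpolation: weight each sample by the inverse
    p-th power of its distance to the query point."""
    d = np.linalg.norm(x_data - x_query, axis=1)
    if d.min() < eps:                 # query coincides with a data point
        return y_data[d.argmin()]
    w = 1.0 / d**p
    return np.sum(w * y_data) / np.sum(w)

# interpolate f(x, y) = x + y from four corner samples of the unit square
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = pts.sum(axis=1)               # 0, 1, 1, 2
center = shepard(np.array([0.5, 0.5]), pts, vals)
```

    The isotropic Euclidean distance above is exactly what the paper's local metrics replace, so that directions of rapid variation are weighted more strongly.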

  12. Calibration of solar radiation measuring instruments. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Bahm, R J; Nakos, J C

    1979-11-01

    A review of solar radiation measuring instruments and some types of errors is given, and procedures for calibrating solar radiation measuring instruments are detailed. An appendix contains a description of the various agencies that perform calibration of solar instruments and of the methods they used at the time this report was prepared. (WHK)

  13. Endoscopic localization of colorectal cancer: Study of its accuracy and possible error factors Localización endoscópica del cáncer colorrectal: estudio de su precisión y posibles factores de error

    Directory of Open Access Journals (Sweden)

    Fernando Borda

    2012-11-01

    Full Text Available Introduction: accurate preoperative localization of colorectal cancer (CRC) is very important, with a wide range of published error rates. Aim: to determine the accuracy of endoscopic localization of CRC in comparison with preoperative computed tomography (CT), and to analyse variables that could be associated with a wrong endoscopic localization. Patients and methods: endoscopic and CT localizations of a series of CRC without previous surgery were reviewed. We studied the concordance between endoscopic and radiologic localization against operative findings, comparing the accuracy of endoscopy and CT. We analysed the frequency of wrong endoscopic diagnoses with regard to a series of patient, endoscopy and tumor variables. Results: two hundred thirty-seven CRC in 223 patients were studied. Concordance with surgical localization was: colonoscopy = 0.87 and CT = 0.69. Endoscopic localization accuracy was 91.1%; CT: 76.2%; p = 0.00001; OR = 3.22 (1.82-5.72). Obstructive cancer presented a higher rate of wrong localization: 18 vs. 5.7% in non-obstructive tumors (p = 0.0034; OR = 3.65 (1.35-9.96)). Endoscopic localization mistakes varied depending on tumor location, being more frequent in the descending colon: 36.3%, p = 0.014; OR = 6.23 (1.38-26.87), and cecum: 23.1%, p = 0.007; OR = 3.92 (1.20-12.43). Conclusions: endoscopic accuracy for CRC localization was very high and significantly better than CT accuracy. Obstructive tumors and those located in the descending colon or cecum were associated with a significant increase in the error risk of CRC endoscopic localization.
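    The odds ratios with 95% confidence intervals quoted in such analyses follow the standard 2×2-table formula with a Woolf (log-normal) interval; a generic sketch (the counts below are illustrative, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for the 2x2 table [[a, b], [c, d]] with a Woolf
    (log-normal) 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# illustrative counts: wrong vs. correct localizations in two tumor groups
or_, lo, hi = odds_ratio_ci(18, 82, 6, 94)
```

    An interval whose lower bound stays above 1 is what marks a factor (such as obstruction) as a significant error risk.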

  14. A statistical approach to instrument calibration

    Science.gov (United States)

    Robert R. Ziemer; David Strauss

    1978-01-01

    Summary - It has been found that two instruments will yield different numerical values when used to measure identical points. A statistical approach is presented that can be used to approximate the error associated with the calibration of instruments. Included are standard statistical tests that can be used to determine if a number of successive calibrations of the...

  15. Music Abilities and Experiences as Predictors of Error-Detection Skill.

    Science.gov (United States)

    Brand, Manny; Burnsed, Vernon

    1981-01-01

    This study examined the predictive validity of previous music abilities and experiences for skill in music error detection among undergraduate instrumental music education majors. Results indicated no statistically significant relationships, suggesting that the ability to detect music errors may exist independently of other music abilities.…

  16. The error analysis of coke moisture measured by neutron moisture gauge

    International Nuclear Information System (INIS)

    Tian Huixing

    1995-01-01

    The error in coke moisture measured by the neutron method in the iron and steel industry is analyzed. The errors are caused by inaccurate sampling locations in the on-site calibration procedure. By comparison, the instrument error and the statistical fluctuation error are smaller. The sampling proportion should therefore be made as large as possible in the on-site calibration procedure, so that a satisfactory calibration result can be obtained with a suitably sized hopper.

  17. Entanglement renormalization, quantum error correction, and bulk causality

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Isaac H. [IBM T.J. Watson Research Center,1101 Kitchawan Rd., Yorktown Heights, NY (United States); Kastoryano, Michael J. [NBIA, Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2017-04-07

    Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error correcting codes. The logical information becomes progressively more well-protected against erasure errors at larger length scales. In particular, an approximate variant of holographic quantum error correcting code emerges at low energy for critical systems. This implies that two operators that are largely separated in scales behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.

  18. Instrumental dead-time and its relationship with matrix corrections in X-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Thomas, I.L.; Haukka, M.T.; Anderson, D.H.

    1979-01-01

    The relationship between instrumental dead-time and the self-absorption coefficients α(ii) in XRF matrix correction by means of influence coefficients is not generally recognized but has important analytical consequences. Systematic errors of the order of 1% (relative) for any analyte result from experimental uncertainties in instrumental dead-time. Such errors are applied unevenly across a given range of concentration because the error depends on the calibration standards and on the instrumental conditions used. Refinement of the instrumental dead-time value and other calibration parameters to conform with influence coefficients determined elsewhere assumes exact knowledge of the dead-time of the instrument used originally, and quite similar excitation conditions and spectrometer geometry for the two instruments. Though these qualifications may not be met, adjustment of any of the parameters (dead-time, reference concentration, background concentration, self-absorption and other influence coefficients) can be easily achieved. (Auth.)
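    To see how a small dead-time uncertainty propagates into a relative error of the order quoted above, consider the standard non-paralyzable correction (the count rate and dead-times below are illustrative, not taken from the paper):

```python
def true_rate(measured_rate, dead_time):
    """Non-paralyzable dead-time correction: n = m / (1 - m * tau)."""
    return measured_rate / (1.0 - measured_rate * dead_time)

# a 1-microsecond error in an assumed 3-microsecond dead time, at a
# measured rate of 10^4 counts/s, shifts the corrected rate by ~1%
m = 1.0e4                                   # measured counts per second
n_nominal = true_rate(m, 3.0e-6)
n_perturbed = true_rate(m, 4.0e-6)
rel_error = abs(n_perturbed - n_nominal) / n_nominal
```

    Because the shift scales with the measured rate, the same dead-time uncertainty biases high-intensity calibration standards more than low-intensity ones, which is why the error is applied unevenly across a concentration range.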

  19. Evaluation of inter-fraction error during prostate radiotherapy

    International Nuclear Information System (INIS)

    Komiyama, Takafumi; Nakamura, Koji; Motoyama, Tsuyoshi; Onishi, Hiroshi; Sano, Naoki

    2008-01-01

    The purpose of this study was to evaluate the inter-fraction error (inter-fraction set-up error + inter-fraction internal organ motion) between treatment planning and delivery during radiotherapy for localized prostate cancer. Twenty-three prostate cancer patients underwent image-guided radical irradiation with a CT-linac system. All patients were treated in the supine position. After set-up with external skin markers, pretherapy CT images were obtained using the CT-linac system and isocenter displacement was measured. The mean displacement of the isocenter was 1.8 mm, 3.3 mm, and 1.7 mm in the left-right, ventral-dorsal, and cranial-caudal directions, respectively. The maximum displacement of the isocenter was 7 mm, 12 mm, and 9 mm in the left-right, ventral-dorsal, and cranial-caudal directions, respectively. The mean interquartile range of displacement of the isocenter was 1.8 mm, 3.7 mm, and 2.0 mm in the left-right, ventral-dorsal, and cranial-caudal directions, respectively. In radiotherapy for localized prostate cancer, the inter-fraction error was largest in the ventral-dorsal direction. Errors in the ventral-dorsal direction influence both local control and late adverse effects. Our study suggested that set-up with external skin markers alone is not sufficient for radical radiotherapy of localized prostate cancer, and that a system such as a CT-linac is required for correction of the inter-fraction error. (author)

  20. Computerized Design and Generation of Gear Drives With a Localized Bearing Contact and a Low Level of Transmission Errors

    Science.gov (United States)

    Litvin, F.; Chen, J.; Seol, I.; Kim, D.; Lu, J.; Zhao, X.; Handschuh, R.

    1996-01-01

    A general approach developed for the computerized simulation of loaded gear drives is presented. In this paper the methodology used to localize the bearing contact, provide a parabolic function of transmission errors, and simulate meshing and contact of unloaded gear drives is developed. The approach developed is applied to spur and helical gears, spiral bevel gears, face-gear drives, and worm-gear drives with cylindrical worms.

  1. The regression-calibration method for fitting generalized linear models with additive measurement error

    OpenAIRE

    James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll

    2003-01-01

    This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
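    The core idea of regression calibration — replace the error-prone covariate with its conditional expectation given the observed proxy, then fit the outcome model — can be sketched for a simple linear case (synthetic data; the measurement-error variance is assumed known, which in practice would come from replicates or instrumental variables):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(0, 1, n)              # true covariate (unobserved)
w = x + rng.normal(0, 1, n)          # error-prone proxy, error variance 1
y = 2.0 * x + rng.normal(0, 0.5, n)  # outcome depends on the true x

# naive regression of y on w is attenuated toward zero
naive = np.cov(w, y)[0, 1] / np.var(w)

# regression calibration: replace w with E[x | w] = lambda * w
lam = (np.var(w) - 1.0) / np.var(w)  # reliability, known error variance 1
x_hat = lam * w
calibrated = np.cov(x_hat, y)[0, 1] / np.var(x_hat)
```

    Here the naive slope recovers roughly half the true coefficient of 2, while the calibrated slope is approximately unbiased.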

  2. Application of a New Statistical Model for Measurement Error to the Evaluation of Dietary Self-report Instruments.

    Science.gov (United States)

    Freedman, Laurence S; Midthune, Douglas; Carroll, Raymond J; Commins, John M; Arab, Lenore; Baer, David J; Moler, James E; Moshfegh, Alanna J; Neuhouser, Marian L; Prentice, Ross L; Rhodes, Donna; Spiegelman, Donna; Subar, Amy F; Tinker, Lesley F; Willett, Walter; Kipnis, Victor

    2015-11-01

    Most statistical methods that adjust analyses for dietary measurement error treat an individual's usual intake as a fixed quantity. However, usual intake, if defined as average intake over a few months, varies over time. We describe a model that accounts for such variation and for the proximity of biomarker measurements to self-reports within the framework of a meta-analysis, and apply it to the analysis of data on energy, protein, potassium, and sodium from a set of five large validation studies of dietary self-report instruments using recovery biomarkers as reference instruments. We show that this time-varying usual intake model fits the data better than the fixed usual intake assumption. Using this model, we estimated attenuation factors and correlations with true longer-term usual intake for single and multiple 24-hour dietary recalls (24HRs) and food frequency questionnaires (FFQs) and compared them with those obtained under the "fixed" method. Compared with the fixed method, the estimates using the time-varying model showed slightly larger values of the attenuation factor and correlation coefficient for FFQs and smaller values for 24HRs. In some cases, the difference between the fixed method estimate and the new estimate for multiple 24HRs was substantial. With the new method, while four 24HRs had higher estimated correlations with truth than a single FFQ for absolute intakes of protein, potassium, and sodium, for densities the correlations were approximately equal. Accounting for the time element in dietary validation is potentially important, and points toward the need for longer-term validation studies.
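    An attenuation factor and a correlation with truth of the kind estimated here can be computed directly from paired reference and self-report measurements; a sketch with synthetic intakes (the bias and noise parameters are invented, not taken from the validation studies):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
truth = rng.normal(7.5, 0.3, n)                   # log usual intake (synthetic)
report = 0.6 * truth + rng.normal(2.9, 0.4, n)    # biased, noisy self-report

# attenuation factor: slope of true intake regressed on the self-report
atten = np.cov(truth, report)[0, 1] / np.var(report)
# correlation between self-report and truth
rho = np.corrcoef(truth, report)[0, 1]
```

    An attenuation factor well below 1 means diet-disease associations estimated from the self-report alone are biased toward the null; the time-varying model in the paper refines how these quantities are estimated, not what they mean.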

  3. Seismic Station Installation Orientation Errors at ANSS and IRIS/USGS Stations

    Science.gov (United States)

    Ringler, Adam T.; Hutt, Charles R.; Persfield, K.; Gee, Lind S.

    2013-01-01

    Many seismological studies depend on the published orientations of sensitive axes of seismic instruments relative to north (e.g., Li et al., 2011). For example, studies of the anisotropic structure of the Earth’s mantle through SKS‐splitting measurements (Long et al., 2009), constraints on core–mantle electromagnetic coupling from torsional normal‐mode measurements (Dumberry and Mound, 2008), and models of three‐dimensional (3D) velocity variations from surface waves (Ekström et al., 1997) rely on accurate sensor orientation. Unfortunately, numerous results indicate that this critical parameter is often subject to significant error (Laske, 1995; Laske and Masters, 1996; Yoshizawa et al., 1999; Schulte‐Pelkum et al., 2001; Larson and Ekström, 2002). For the Advanced National Seismic System (ANSS; ANSS Technical Integration Committee, 2002), the Global Seismographic Network (GSN; Butler et al., 2004), and many other networks, sensor orientation is typically determined by a field engineer during installation. Successful emplacement of a seismic instrument requires identifying true north, transferring a reference line, and measuring the orientation of the instrument relative to the reference line. Such an exercise is simple in theory, but there are many complications in practice. There are four commonly used methods for determining true north at the ANSS and GSN stations operated by the USGS Albuquerque Seismological Laboratory (ASL), including gyroscopic, astronomical, Global Positioning System (GPS), and magnetic field techniques. A particular method is selected based on site conditions (above ground, below ground, availability of astronomical observations, and so on) and in the case of gyroscopic methods, export restrictions. Once a north line has been determined, it must be translated to the sensor location. For installations in mines or deep vaults, this step can include tracking angles through the one or more turns in the access tunnel leading to

  4. A Robust Vehicle Localization Approach Based on GNSS/IMU/DMI/LiDAR Sensor Fusion for Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    Xiaoli Meng

    2017-09-01

    Full Text Available Precise and robust localization in a large-scale outdoor environment is essential for an autonomous vehicle. In order to improve the performance of the fusion of GNSS (Global Navigation Satellite System), IMU (Inertial Measurement Unit), and DMI (Distance-Measuring Instrument) data, a multi-constraint fault detection approach is proposed to smooth the vehicle locations in spite of GNSS jumps. Furthermore, the lateral localization error is compensated by the point cloud-based lateral localization method proposed in this paper. Experimental results verify the proposed algorithms, showing that they are capable of providing precise and robust vehicle localization.

  5. A Robust Vehicle Localization Approach Based on GNSS/IMU/DMI/LiDAR Sensor Fusion for Autonomous Vehicles.

    Science.gov (United States)

    Meng, Xiaoli; Wang, Heng; Liu, Bingbing

    2017-09-18

    Precise and robust localization in a large-scale outdoor environment is essential for an autonomous vehicle. In order to improve the performance of the fusion of GNSS (Global Navigation Satellite System)/IMU (Inertial Measurement Unit)/DMI (Distance-Measuring Instruments), a multi-constraint fault detection approach is proposed to smooth the vehicle locations in spite of GNSS jumps. Furthermore, the lateral localization error is compensated by the point cloud-based lateral localization method proposed in this paper. Experimental results verify the proposed algorithms, showing that they are capable of providing precise and robust vehicle localization.
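    One common form of such a fault-detection constraint is a chi-square gate on the innovation between the GNSS fix and the dead-reckoned (IMU/DMI) prediction; a sketch (the threshold and covariances are illustrative — the paper's multi-constraint scheme is more elaborate):

```python
import numpy as np

def gnss_jump_gate(predicted, gnss_fix, cov, chi2_thresh=7.81):
    """Accept a GNSS fix only if its innovation against the dead-reckoned
    prediction passes a chi-square gate (3 dof, 95% quantile ~ 7.81)."""
    innov = gnss_fix - predicted
    d2 = innov @ np.linalg.inv(cov) @ innov    # squared Mahalanobis distance
    return d2 <= chi2_thresh

pred = np.array([100.0, 200.0, 10.0])          # predicted position (m)
cov = np.diag([4.0, 4.0, 9.0])                 # innovation covariance (m^2)
ok = gnss_jump_gate(pred, np.array([101.0, 201.0, 11.0]), cov)
bad = gnss_jump_gate(pred, np.array([120.0, 200.0, 10.0]), cov)
```

    Fixes rejected by the gate are bridged by the inertial/odometric solution, which is what smooths the trajectory through GNSS jumps.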

  6. Medical Errors in Cyprus: The 2005 Eurobarometer Survey

    Directory of Open Access Journals (Sweden)

    Andreas Pavlakis

    2012-01-01

    Full Text Available Background: Medical errors have been highlighted in recent years by different agencies, scientific bodies and research teams alike. We sought to explore the issue of medical errors in Cyprus using data from the Eurobarometer survey. Methods: Data from the special Eurobarometer survey conducted in 2005 across all European Union countries (EU-25) and the acceding countries were obtained from the corresponding EU office. Statistical analyses including logistic regression models were performed using SPSS. Results: A total of 502 individuals participated in the Cyprus survey. About 90% reported that they had often or sometimes heard about medical errors, while 22% reported that they or a family member had suffered a serious medical error in a local hospital. In addition, 9.4% reported a serious problem from a prescribed medicine. We also found statistically significant differences across ages and gender and in rural versus urban residents. Finally, using multivariable-adjusted logistic regression models, we found that residents in rural areas were more likely to have suffered a serious medical error in a local hospital or from a prescribed medicine. Conclusion: Our study shows that the vast majority of residents in Cyprus, like other Europeans, worry about medical errors, and a significant percentage report having suffered a serious medical error at a local hospital or from a prescribed medicine. The results of our study could help the medical community in Cyprus and society at large to enhance its vigilance with respect to medical errors in order to improve medical care.

  7. Optimization of sample absorbance for quantitative analysis in the presence of pathlength error in the IR and NIR regions

    International Nuclear Information System (INIS)

    Hirschfeld, T.; Honigs, D.; Hieftje, G.

    1985-01-01

    Optimal absorbance levels for quantitative analysis in the presence of photometric error have been described in the past. In newer instrumentation, such as FT-IR and NIRA spectrometers, the photometric error is no longer limiting. In these instruments, pathlength error due to cell or sampling irreproducibility is often a major concern. One can derive the optimal absorbance by taking both pathlength and photometric errors into account. This paper analyzes the case of pathlength error >> photometric error (trivial) and various cases in which the pathlength and photometric errors are of the same order: adjustable concentration (trivial until dilution errors are considered), constant relative pathlength error (trivial), and constant absolute pathlength error. The latter, in particular, is analyzed in detail to give the behavior of the error, the behavior of the optimal absorbance in its presence, and the total error levels attainable
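    For the constant absolute photometric error case the abstract alludes to, the textbook result is that the relative concentration error is proportional to ΔT/(T·A·ln 10) and is minimized at T = 1/e, i.e. A = log₁₀e ≈ 0.4343. A quick numerical check (a sketch, not the paper's derivation):

```python
import numpy as np

# Relative concentration error for a constant absolute transmittance error dT:
# c is proportional to A = -log10(T), so dc/c = dT / (T * A * ln 10).
A = np.linspace(0.05, 2.0, 40000)
T = 10.0 ** (-A)
rel_err = 1.0 / (T * A * np.log(10.0))    # per unit dT
A_opt = A[np.argmin(rel_err)]             # minimum at T = 1/e, A ≈ 0.4343
```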

  8. Temperature-dependent errors in nuclear lattice simulations

    International Nuclear Information System (INIS)

    Lee, Dean; Thomson, Richard

    2007-01-01

    We study the temperature dependence of discretization errors in nuclear lattice simulations. We find that for systems with strong attractive interactions the predominant error arises from the breaking of Galilean invariance. We propose a local 'well-tempered' lattice action which eliminates much of this error. The well-tempered action can be readily implemented in lattice simulations for nuclear systems as well as cold atomic Fermi systems

  9. Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Young, Kevin C

    2013-01-01

    While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC. (paper)

  10. Radon measurements-discussion of error estimates for selected methods

    International Nuclear Information System (INIS)

    Zhukovsky, Michael; Onischenko, Alexandra; Bastrikov, Vladislav

    2010-01-01

    The main sources of uncertainties for grab sampling, short-term (charcoal canisters) and long-term (track detectors) measurements are: systematic bias of reference equipment; random Poisson and non-Poisson errors during calibration; and random Poisson and non-Poisson errors during measurements. The origins of non-Poisson random errors during calibration are different for different kinds of instrumental measurements. The main sources of uncertainties for retrospective measurements conducted by surface trap techniques can be divided into two groups: errors of surface ²¹⁰Pb (²¹⁰Po) activity measurements and uncertainties of the transfer from ²¹⁰Pb surface activity in glass objects to the average radon concentration during the object's exposure. It is shown that the total measurement error of the surface trap retrospective technique can be decreased to 35%.
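    The 35% figure suggests independent relative uncertainties combined in quadrature. The component values below are illustrative placeholders, not the paper's actual budget:

```python
import math

def combined_relative_error(components):
    """Total relative (1-sigma) error of independent components, in quadrature."""
    return math.sqrt(sum(c * c for c in components))

# Illustrative (not the paper's) budget for a surface-trap measurement:
# 210Pb/210Po activity counting, calibration bias, activity-to-radon transfer.
budget = [0.20, 0.20, 0.20]
total = combined_relative_error(budget)   # sqrt(0.12) ≈ 0.35, i.e. about 35%
```

Three independent 20% components already combine to about 35%, which is why reducing the single dominant term gains the most.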

  11. Instrument for measuring metal-thermoelectric semiconductor contact resistance

    International Nuclear Information System (INIS)

    Lanxner, M.; Nechmadi, M.; Meiri, B.; Schildkraut, I.

    1979-02-01

    An instrument for measuring electrical, metal-thermoelectric semiconductor contact resistance is described. The expected errors of measurement are indicated. The operation of the instrument which is based on potential traversing perpendicularly to the contact plane is illustrated for the case of contacts of palladium and bismuth telluride-based thermoelectric material

  12. The accuracy of webcams in 2D motion analysis: sources of error and their control

    International Nuclear Information System (INIS)

    Page, A; Candelas, P; Belmar, F; Moreno, R

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented. Finally, an experiment with controlled movement is performed to experimentally measure the errors described above and to assess the effectiveness of the proposed corrective measures. It will be shown that when these aspects are considered, it is possible to obtain errors lower than 0.1%. This level of accuracy demonstrates that webcams should be considered as very precise and accurate measuring instruments at a remarkably low cost

  13. The accuracy of webcams in 2D motion analysis: sources of error and their control

    Energy Technology Data Exchange (ETDEWEB)

    Page, A; Candelas, P; Belmar, F [Departamento de Fisica Aplicada, Universidad Politecnica de Valencia, Valencia (Spain); Moreno, R [Instituto de Biomecanica de Valencia, Valencia (Spain)], E-mail: alvaro.page@ibv.upv.es

    2008-07-15

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented. Finally, an experiment with controlled movement is performed to experimentally measure the errors described above and to assess the effectiveness of the proposed corrective measures. It will be shown that when these aspects are considered, it is possible to obtain errors lower than 0.1%. This level of accuracy demonstrates that webcams should be considered as very precise and accurate measuring instruments at a remarkably low cost.

  14. Supervised local error estimation for nonlinear image registration using convolutional neural networks

    NARCIS (Netherlands)

    Eppenhof, Koen A.J.; Pluim, Josien P.W.; Styner, M.A.; Angelini, E.D.

    2017-01-01

    Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation

  15. A Comparison of seismic instrument noise coherence analysis techniques

    Science.gov (United States)

    Ringler, A.T.; Hutt, C.R.; Evans, J.R.; Sandoval, L.D.

    2011-01-01

    The self-noise of a seismic instrument is a fundamental characteristic used to evaluate the quality of the instrument. It is important to be able to measure this self-noise robustly, to understand how differences among test configurations affect the tests, and to understand how different processing techniques and isolation methods (from nonseismic sources) can contribute to differences in results. We compare two popular coherence methods used for calculating incoherent noise, which is widely used as an estimate of instrument self-noise (incoherent noise and self-noise are not strictly identical but in observatory practice are approximately equivalent; Holcomb, 1989; Sleeman et al., 2006). Beyond directly comparing these two coherence methods on similar models of seismometers, we compare how small changes in test conditions can contribute to incoherent-noise estimates. These conditions include timing errors, signal-to-noise ratio changes (ratios between background noise and instrument incoherent noise), relative sensor locations, misalignment errors, processing techniques, and different configurations of sensor types.
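    A Holcomb-style two-sensor estimate treats the incoherent part of one channel as its self-noise: N̂₁₁ ≈ P₁₁(1 − γ), with γ the magnitude coherence between two co-located sensors. The sketch below demonstrates the idea on synthetic data; it is not the paper's processing chain, and this simple two-sensor estimate is biased when the coherent signal strongly dominates:

```python
import numpy as np
from scipy.signal import coherence, welch

# Two co-located sensors see the same ground motion plus independent
# self-noise; the incoherent part of channel 1 estimates its self-noise.
rng = np.random.default_rng(0)
fs, n = 100.0, 1 << 16
ground = 3.0 * rng.standard_normal(n)        # coherent "seismic" signal
x1 = ground + rng.standard_normal(n)         # sensor 1: signal + unit-variance noise
x2 = ground + rng.standard_normal(n)         # sensor 2: independent noise

f, g2 = coherence(x1, x2, fs=fs, nperseg=1024)   # magnitude-squared coherence
_, p11 = welch(x1, fs=fs, nperseg=1024)
noise_psd = p11 * (1.0 - np.sqrt(g2))        # Holcomb-style incoherent-noise PSD
# Unit-variance white noise has a one-sided PSD of 2/fs = 0.02 here.
```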

  16. Subject-verb agreement: Error production by Tourism undergraduate students

    Directory of Open Access Journals (Sweden)

    Ana Paula Correia

    2014-11-01

    Full Text Available The aim of this paper, which is part of a more extensive research on verb tense errors, is to investigate the subject-verb agreement errors in the simple present in the texts of a group of Tourism undergraduate students. Based on the concept of interlanguage and following the error analysis model, this descriptive non-experimental study applies qualitative and quantitative procedures. Three types of instruments were used to collect data: a sociolinguistic questionnaire (to define the learners’ profile; the Dialang test (to establish their proficiency level in English; and our own learner corpus (140 texts. Errors were identified and classified by an expert panel in accordance with a verb error taxonomy developed for this study based on the taxonomy established by the Cambridge Learner Corpus. The Markin software was used to code errors in the corpus and the Wordsmith Tools software to analyze the data. Subject-verb agreement errors and their relation with the learners’ proficiency levels are described.

  17. Validation and Error Characterization for the Global Precipitation Measurement

    Science.gov (United States)

    Bidwell, Steven W.; Adams, W. J.; Everett, D. F.; Smith, E. A.; Yuter, S. E.

    2003-01-01

    The Global Precipitation Measurement (GPM) is an international effort to increase scientific knowledge on the global water cycle with specific goals of improving the understanding and the predictions of climate, weather, and hydrology. These goals will be achieved through several satellites specifically dedicated to GPM along with the integration of numerous meteorological satellite data streams from international and domestic partners. The GPM effort is led by the National Aeronautics and Space Administration (NASA) of the United States and the National Space Development Agency (NASDA) of Japan. In addition to the spaceborne assets, international and domestic partners will provide ground-based resources for validating the satellite observations and retrievals. This paper describes the validation effort of Global Precipitation Measurement to provide quantitative estimates on the errors of the GPM satellite retrievals. The GPM validation approach will build upon the research experience of the Tropical Rainfall Measuring Mission (TRMM) retrieval comparisons and its validation program. The GPM ground validation program will employ instrumentation, physical infrastructure, and research capabilities at Supersites located in important meteorological regimes of the globe. NASA will provide two Supersites, one in a tropical oceanic and the other in a mid-latitude continental regime. GPM international partners will provide Supersites for other important regimes. Those objectives or regimes not addressed by Supersites will be covered through focused field experiments. This paper describes the specific errors that GPM ground validation will address, quantify, and relate to the GPM satellite physical retrievals. GPM will attempt to identify the source of errors within retrievals including those of instrument calibration, retrieval physical assumptions, and algorithm applicability. 
With the identification of error sources, improvements will be made to the respective calibration

  18. Age-Related Changes in Bimanual Instrument Playing with Rhythmic Cueing

    Directory of Open Access Journals (Sweden)

    Soo Ji Kim

    2017-09-01

    Full Text Available Deficits in bimanual coordination of older adults have been demonstrated to significantly limit their functioning in daily life. As a bimanual sensorimotor task, instrument playing has great potential for motor and cognitive training in advanced age. While the process of matching a person’s repetitive movements to auditory rhythmic cueing during instrument playing was documented to involve motor and attentional control, investigation into whether the level of cognitive functioning influences the ability to rhythmically coordinate movement to an external beat in older populations is relatively limited. Therefore, the current study aimed to examine how timing accuracy during bimanual instrument playing with rhythmic cueing differed depending on the degree of participants’ cognitive aging. Twenty-one young adults, 20 healthy older adults, and 17 older adults with mild dementia participated in this study. Each participant tapped an electronic drum in time to the rhythmic cueing provided, using both hands simultaneously and in alternation. During bimanual instrument playing with rhythmic cueing, the mean and variability of synchronization errors were measured and compared across the groups and the tempo of cueing during each type of tapping task. Correlations of these timing parameters with cognitive measures were also analyzed. The results showed that the group factor resulted in significant differences in the synchronization-error parameters. During bimanual tapping tasks, cognitive decline resulted in differences in synchronization errors between younger adults and older adults with mild dementia. Also, in terms of variability of synchronization errors, younger adults showed significant differences in maintaining timing performance from older adults with and without mild dementia, which may be attributed to decreased processing speed for bimanual coordination due to aging. Significant correlations were observed between variability of
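    The timing parameters reported (mean and variability of synchronization errors) are typically computed as signed asynchronies between each tap and the nearest cue onset. A minimal sketch with made-up tap data, not the study's recordings:

```python
import numpy as np

def synchronization_stats(tap_times, cue_times):
    """Mean and SD of signed asynchronies (tap time minus nearest cue onset).
    A negative mean indicates anticipation; a larger SD, less stable timing."""
    taps = np.asarray(tap_times, dtype=float)
    cues = np.asarray(cue_times, dtype=float)
    nearest = np.abs(taps[:, None] - cues[None, :]).argmin(axis=1)
    async_ = taps - cues[nearest]
    return async_.mean(), async_.std(ddof=1)

cues = np.arange(20) * 0.600                         # metronome onsets, 100 bpm
taps = cues - 0.030 + 0.010 * np.sin(np.arange(20))  # anticipatory, jittered taps
mean_async, sd_async = synchronization_stats(taps, cues)
```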

  19. Analysis of localizer and glide slope Flight Technical Error

    Science.gov (United States)

    2008-12-09

    A new wake turbulence procedure has been developed that permits two dependent arrival traffic streams during instrument meteorological conditions : to runways with centerline separations less than 2500 ft. For the proposed procedure, aircraft approac...

  20. refractive errors among secondary school students in Isuikwuato

    African Journals Online (AJOL)

    Eyamba

    STUDENTS IN ISUIKWUATO LOCAL GOVERNMENT AREA OF ... the prevalence and types of refractive errors among secondary school students ... KEYWORDS: Refractive error, Secondary School students, ametropia, .... interviews of the teachers as regards the general performance of those students with obvious visual.

  1. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    Energy Technology Data Exchange (ETDEWEB)

    Li, T. S. [et al.

    2016-05-27

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
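    The chromatic error being described can be illustrated with synthetic photometry, m = −2.5·log₁₀∫F(λ)S(λ)dλ: a color-dependent throughput change shifts a red star's magnitude differently from a blue star's, so no gray (color-independent) zeropoint can absorb it. The bandpass, tilt, and blackbody spectra below are toy assumptions, not DES values:

```python
import numpy as np

def blackbody(lam_nm, T):
    """Planck curve (arbitrary units) versus wavelength in nm."""
    lam = lam_nm * 1e-9
    return 1.0 / (lam**5 * (np.exp(1.4388e-2 / (lam * T)) - 1.0))

def synth_mag(flux, throughput):
    """Synthetic instrumental magnitude; on a uniform grid the constant
    wavelength step shifts all magnitudes equally and cancels below."""
    return -2.5 * np.log10(np.sum(flux * throughput))

lam = np.linspace(400.0, 550.0, 500)             # toy g-like bandpass (nm)
flat = np.ones_like(lam)                         # "natural system" throughput
tilted = 1.0 + 0.1 * (lam - lam.mean()) / 75.0   # +/-10% linear throughput drift

blue, red = blackbody(lam, 8000.0), blackbody(lam, 4000.0)
shift_blue = synth_mag(blue, tilted) - synth_mag(blue, flat)
shift_red = synth_mag(red, tilted) - synth_mag(red, flat)
chromatic = shift_red - shift_blue   # residual no gray zeropoint can remove
```

A tilt of this size leaves a color-dependent residual of a few hundredths of a magnitude, the same order as the up-to-2% effects quoted in the abstract.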

  2. A SHARIA RETURN AS AN ALTERNATIVE INSTRUMENT FOR MONETARY POLICY

    Directory of Open Access Journals (Sweden)

    Ashief Hamam

    2011-09-01

    Full Text Available Rapid development in the Islamic financial industry has not been supported by sharia monetary policy instruments. This study looks at the possibility of sharia returns as the instrument. Using both an error correction model and a vector error correction model to estimate the data from 2002(1) to 2010(12), this paper finds that the sharia return has the same effect as the interest rate on the demand for money. The shock effect of the sharia return on the broad money supply, Gross Domestic Product, and Consumer Price Index is greater than that of the interest rate. In addition, these three variables more quickly become stable following a shock to the sharia return. Keywords: Sharia return, Islamic financial system, vector error correction model. JEL classification numbers: E52, G15
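    The error-correction machinery the record relies on can be sketched with a two-step Engle–Granger error correction model on simulated series; the series below are hypothetical stand-ins (the Indonesian data are not reproduced here):

```python
import numpy as np

# Two-step Engle-Granger error-correction sketch: (1) estimate the long-run
# relation y = a + b*x, (2) regress dy on the lagged residual and dx.
rng = np.random.default_rng(42)
n = 2000
x = np.cumsum(rng.standard_normal(n))        # I(1) driver, e.g. a return index
u = np.zeros(n)
for t in range(1, n):                        # stationary AR(1) deviation
    u[t] = 0.5 * u[t - 1] + rng.standard_normal()
y = 2.0 * x + u                              # cointegrated: y - 2x is stationary

# Step 1: long-run (cointegrating) regression.
X1 = np.column_stack([np.ones(n), x])
b = np.linalg.lstsq(X1, y, rcond=None)[0]
ecm_resid = y - X1 @ b                       # error-correction term

# Step 2: short-run dynamics with the lagged equilibrium error.
dy, dx = np.diff(y), np.diff(x)
X2 = np.column_stack([np.ones(n - 1), ecm_resid[:-1], dx])
g = np.linalg.lstsq(X2, dy, rcond=None)[0]
adjustment = g[1]                            # speed of adjustment, about -0.5 here
```

A negative adjustment coefficient means deviations from the long-run relation are corrected, which is what "more quickly become stable" refers to.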

  3. Black Holes, Holography, and Quantum Error Correction

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    How can it be that a local quantum field theory in some number of spacetime dimensions can "fake" a local gravitational theory in a higher number of dimensions?  How can the Ryu-Takayanagi Formula say that an entropy is equal to the expectation value of a local operator?  Why do such things happen only in gravitational theories?  In this talk I will explain how a new interpretation of the AdS/CFT correspondence as a quantum error correcting code provides satisfying answers to these questions, and more generally gives a natural way of generating simple models of the correspondence.  No familiarity with AdS/CFT or quantum error correction is assumed, but the former would still be helpful.  

  4. Design of the Local Instrument for KSTAR Gravity Support

    International Nuclear Information System (INIS)

    Her, N. I.; Kim, H. K.; Kim, Y. O.; Sa, J. W.; Kim, G. H.; Kim, K. M.; Park, Y. M.; Hong, G. H.; Choi, C. H.; Bak, J. S.

    2005-01-01

    should be operated without malfunction in a cryogenic environment at temperatures down to 4.5 K and high magnetic fields up to 2 T (tesla). In this study, the design of the local instrumentation system for the gravity support and its prototype testing results are summarized

  5. The Michelson Stellar Interferometer Error Budget for Triple Triple-Satellite Configuration

    Science.gov (United States)

    Marathay, Arvind S.; Shiefman, Joe

    1996-01-01

    This report presents the results of a study of the instrumentation tolerances for a conventional style Michelson stellar interferometer (MSI). The method used to determine the tolerances was to determine the change, due to the instrument errors, in the measured fringe visibility and phase relative to the ideal values. The ideal values are those values of fringe visibility and phase that would be measured by a perfect MSI and are attributable solely to the object being detected. Once the functional relationship for changes in visibility and phase as a function of various instrument errors is understood, it is then possible to set limits on the instrument errors in order to ensure that the measured visibility and phase differ from the ideal values by no more than some specified amount. This was done as part of this study. The limits we obtained are based on a visibility error of no more than 1% and a phase error of no more than 0.063 radians (this comes from 1% of 2(pi) radians). The choice of these 1% limits is supported in the literature. The approach employed in the study involved the use of ASAP (Advanced System Analysis Program) software provided by Breault Research Organization, Inc., in conjunction with parallel analytical calculations. The interferometer accepts object radiation into two separate arms, each consisting of an outer mirror, an inner mirror, a delay line (made up of two moveable mirrors and two static mirrors), and a 10:1 afocal reduction telescope. The radiation coming out of both arms is incident on a slit plane which is opaque with two openings (slits). One of the two slits is centered directly under one of the two arms of the interferometer and the other slit is centered directly under the other arm. The slit plane is followed immediately by an ideal combining lens which images the radiation in the fringe plane (also referred to subsequently as the detector plane).

  6. Error estimation for variational nodal calculations

    International Nuclear Information System (INIS)

    Zhang, H.; Lewis, E.E.

    1998-01-01

    Adaptive grid methods are widely employed in finite element solutions to both solid and fluid mechanics problems. Either the size of the element is reduced (h refinement) or the order of the trial function is increased (p refinement) locally to improve the accuracy of the solution without a commensurate increase in computational effort. Success of these methods requires effective local error estimates to determine those parts of the problem domain where the solution should be refined. Adaptive methods have recently been applied to the spatial variables of the discrete ordinates equations. As a first step in the development of adaptive methods that are compatible with the variational nodal method, the authors examine error estimates for use in conjunction with spatial variables. The variational nodal method lends itself well to p refinement because the space-angle trial functions are hierarchical. Here they examine an error estimator for use with spatial p refinement for the diffusion approximation. Eventually, angular refinement will also be considered using spherical harmonics approximations

  7. Instrumental systematics and weak gravitational lensing

    International Nuclear Information System (INIS)

    Mandelbaum, R.

    2015-01-01

    We present a pedagogical review of the weak gravitational lensing measurement process and its connection to major scientific questions such as dark matter and dark energy. Then we describe common ways of parametrizing systematic errors and understanding how they affect weak lensing measurements. Finally, we discuss several instrumental systematics and how they fit into this context, and conclude with some future perspective on how progress can be made in understanding the impact of instrumental systematics on weak lensing measurements

  8. Daily CT localization for correcting portal errors in the treatment of prostate cancer

    International Nuclear Information System (INIS)

    Lattanzi, Joseph; McNeely, Shawn; Hanlon, Alexandra; Das, Indra; Schultheiss, Timothy E.; Hanks, Gerald E.

    1998-01-01

    Introduction: Improved prostate localization techniques should allow the reduction of margins around the target to facilitate dose escalation in high-risk patients while minimizing the risk of normal tissue morbidity. A daily CT simulation technique is presented to assess setup variations in portal placement and organ motion for the treatment of localized prostate cancer. Methods and Materials: Six patients who consented to this study underwent supine position CT simulation with an alpha cradle cast, intravenous contrast, and urethrogram. Patients received 46 Gy to the initial Planning Treatment Volume (PTV1) in a four-field conformal technique that included the prostate, seminal vesicles, and lymph nodes as the Gross Tumor Volume (GTV1). The prostate or prostate and seminal vesicles (GTV2) then received 56 Gy to PTV2. All doses were delivered in 2-Gy fractions. After 5 weeks of treatment (50 Gy), a second CT simulation was performed. The alpha cradle was secured to a specially designed rigid sliding board. The prostate was contoured and a new isocenter was generated with appropriate surface markers. Prostate-only treatment portals for the final conedown (GTV3) were created with a 0.25-cm margin from the GTV to PTV. On each subsequent treatment day, the patient was placed in his cast on the sliding board for a repeat CT simulation. The daily isocenter was recalculated in the anterior/posterior (A/P) and lateral dimension and compared to the 50-Gy CT simulation isocenter. Couch and surface marker shifts were calculated to produce portal alignment. To maintain proper positioning, the patients were transferred to a stretcher while on the sliding board in the cast and transported to the treatment room where they were then transferred to the treatment couch. The patients were then treated to the corrected isocenter. Portal films and electronic portal images were obtained for each field. 
Results: Utilizing CT-CT image registration (fusion) of the daily and 50

  9. Evaluation and Error Analysis for a Solar thermal Receiver

    Energy Technology Data Exchange (ETDEWEB)

    Pfander, M.

    2001-07-01

    In the following study a complete balance over the REFOS receiver module, mounted on the tower power plant CESA-1 at the Plataforma Solar de Almeria (PSA), is carried out. Additionally an error inspection of the various measurement techniques used in the REFOS project is made. Especially the flux measurement system Prohermes that is used to determine the total entry power of the receiver module and known as a major error source is analysed in detail. Simulations and experiments on the particular instruments are used to determine and quantify possible error sources. After discovering the origin of the errors they are reduced and included in the error calculation. the ultimate result is presented as an overall efficiency of the receiver module in dependence on the flux density at the receiver module's entry plane and the receiver operating temperature. (Author) 26 refs.

  10. Evaluation and Error Analysis for a Solar Thermal Receiver

    International Nuclear Information System (INIS)

    Pfander, M.

    2001-01-01

    In the following study a complete balance over the REFOS receiver module, mounted on the tower power plant CESA-1 at the Plataforma Solar de Almeria (PSA), is carried out. Additionally an error inspection of the various measurement techniques used in the REFOS project is made. Especially the flux measurement system Prohermes, which is used to determine the total entry power of the receiver module and is known as a major error source, is analysed in detail. Simulations and experiments on the particular instruments are used to determine and quantify possible error sources. After discovering the origin of the errors they are reduced and included in the error calculation. The ultimate result is presented as an overall efficiency of the receiver module in dependence on the flux density at the receiver module's entry plane and the receiver operating temperature. (Author) 26 refs

  11. The relationship between automation complexity and operator error

    International Nuclear Information System (INIS)

    Ogle, Russell A.; Morrison, Delmar 'Trey'; Carpenter, Andrew R.

    2008-01-01

    One of the objectives of process automation is to improve the safety of plant operations. Manual operation, it is often argued, provides too many opportunities for operator error. By this argument, process automation should decrease the risk of accidents caused by operator error. However, some accident theorists have argued that while automation may eliminate some types of operator error, it may create new varieties of error. In this paper we present six case studies of explosions involving operator error in an automated process facility. Taken together, these accidents resulted in six fatalities, 30 injuries and hundreds of millions of dollars in property damage. The case studies are divided into two categories: low and high automation complexity (three case studies each). The nature of the operator error was dependent on the level of automation complexity. For each case study, we also consider the contribution of the existing engineering controls such as safety instrumented systems (SIS) or safety critical devices (SCD) and explore why they were insufficient to prevent, or mitigate, the severity of the explosion

  12. Role of memory errors in quantum repeaters

    International Nuclear Information System (INIS)

    Hartmann, L.; Kraus, B.; Briegel, H.-J.; Duer, W.

    2007-01-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication

  13. Locked modes and magnetic field errors in MST

    International Nuclear Information System (INIS)

    Almagri, A.F.; Assadi, S.; Prager, S.C.; Sarff, J.S.; Kerst, D.W.

    1992-06-01

    In the MST reversed field pinch magnetic oscillations become stationary (locked) in the lab frame as a result of a process involving interactions between the modes, sawteeth, and field errors. Several helical modes become phase locked to each other to form a rotating localized disturbance, the disturbance locks to an impulsive field error generated at a sawtooth crash, the error fields grow monotonically after locking (perhaps due to an unstable interaction between the modes and field error), and over the tens of milliseconds of growth confinement degrades and the discharge eventually terminates. Field error control has been partially successful in eliminating locking

  14. Design aspects of safety critical instrumentation of nuclear installations

    Energy Technology Data Exchange (ETDEWEB)

    Swaminathan, P. [Electronics Group, Indira Gandhi Centre for Atomic Research, Kalpakkam 603 102, Tamil Nadu (India)]. E-mail: swamy@igcar.ernet.in

    2005-07-01

    Safety critical instrumentation systems ensure safe shutdown/configuration of the nuclear installation when process status exceeds the safety threshold limits. Design requirements for safety critical instrumentation such as functional and electrical independence, fail-safe design, and architecture to ensure the specified unsafe failure rate and safe failure rate, human machine interface (HMI), etc., are explained with examples. Different fault tolerant architectures like 1/2, 2/2, 2/3 hot stand-by are compared for safety critical instrumentation. For embedded systems, software quality assurance is detailed both during design phase and O and M phase. Different software development models such as waterfall model and spiral model are explained with examples. The error distribution in embedded system is detailed. The usage of formal method is outlined to reduce the specification error. The guidelines for coding of application software are outlined. The interface problems of safety critical instrumentation with sensors, actuators, other computer systems, etc., are detailed with examples. Testability and maintainability shall be taken into account during design phase. Online diagnostics for safety critical instrumentation is detailed with examples. Salient details of design guides from Atomic Energy Regulatory Board, International Atomic Energy Agency and standards from IEEE, BIS are given towards the design of safety critical instrumentation systems. (author)

  15. Design aspects of safety critical instrumentation of nuclear installations

    International Nuclear Information System (INIS)

    Swaminathan, P.

    2005-01-01

    Safety critical instrumentation systems ensure safe shutdown/configuration of the nuclear installation when process status exceeds the safety threshold limits. Design requirements for safety critical instrumentation such as functional and electrical independence, fail-safe design, and architecture to ensure the specified unsafe failure rate and safe failure rate, human machine interface (HMI), etc., are explained with examples. Different fault tolerant architectures like 1/2, 2/2, 2/3 hot stand-by are compared for safety critical instrumentation. For embedded systems, software quality assurance is detailed both during design phase and O and M phase. Different software development models such as waterfall model and spiral model are explained with examples. The error distribution in embedded system is detailed. The usage of formal method is outlined to reduce the specification error. The guidelines for coding of application software are outlined. The interface problems of safety critical instrumentation with sensors, actuators, other computer systems, etc., are detailed with examples. Testability and maintainability shall be taken into account during design phase. Online diagnostics for safety critical instrumentation is detailed with examples. Salient details of design guides from Atomic Energy Regulatory Board, International Atomic Energy Agency and standards from IEEE, BIS are given towards the design of safety critical instrumentation systems. (author)
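
The fault-tolerant voting architectures this record compares (1/2, 2/2, 2/3 hot stand-by) can be contrasted with a short probability sketch. The per-channel reliability figure below is a made-up illustration; only the k-out-of-n voting logic is implied by the abstract:

```python
from math import comb

def p_trips_on_demand(n, k, p_channel):
    """Probability that at least k of n independent channels respond
    correctly to a demand, each working with probability p_channel
    (a k-out-of-n voted trip system)."""
    return sum(comb(n, i) * p_channel**i * (1 - p_channel)**(n - i)
               for i in range(k, n + 1))

p = 0.99  # hypothetical per-channel probability of working on demand
pfd = {name: 1 - p_trips_on_demand(n, k, p)
       for name, (n, k) in {"1oo2": (2, 1), "2oo2": (2, 2), "2oo3": (3, 2)}.items()}
# 1oo2 gives the lowest probability of failure on demand but the most
# spurious trips; 2oo3 balances safe and unsafe failure rates, which is
# one reason it is a common choice in practice.
```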

  16. Models and error analyses of measuring instruments in accountability systems in safeguards control

    International Nuclear Information System (INIS)

    Dattatreya, E.S.

    1977-05-01

    Essentially three types of measuring instruments are used in plutonium accountability systems: (1) the bubblers, for measuring the total volume of liquid in the holding tanks, (2) coulometers, titration apparatus and calorimeters, for measuring the concentration of plutonium; and (3) spectrometers, for measuring isotopic composition. These three classes of instruments are modeled and analyzed. Finally, the uncertainty in the estimation of total plutonium in the holding tank is determined
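
The final step in the abstract above, combining a volume measurement (bubblers) with a concentration measurement into an uncertainty on total plutonium, follows first-order propagation for a product of independent quantities. A minimal sketch with hypothetical tank values:

```python
import math

def total_pu_and_uncertainty(volume_l, u_volume_l, conc_g_per_l, u_conc_g_per_l):
    """Propagate independent volume and concentration uncertainties into
    the uncertainty of total plutonium mass m = V * c:
    (u_m / m)^2 = (u_V / V)^2 + (u_c / c)^2  (first order, uncorrelated)."""
    m = volume_l * conc_g_per_l
    rel = math.sqrt((u_volume_l / volume_l) ** 2 + (u_conc_g_per_l / conc_g_per_l) ** 2)
    return m, m * rel

# hypothetical holding-tank numbers: 1500 +/- 3 L at 2.000 +/- 0.010 g/L
m, u = total_pu_and_uncertainty(1500.0, 3.0, 2.0, 0.01)
```

Note that the relative uncertainties add in quadrature, so the larger of the two terms (here the concentration measurement) dominates the result.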

  17. Measurement Error in Income and Schooling and the Bias of Linear Estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    2017-01-01

We propose a general framework for determining the extent of measurement error bias in ordinary least squares and instrumental variable (IV) estimators of linear models while allowing for measurement error in the validation source. We apply this method by validating Survey of Health, Ageing and Retirement in Europe data with Danish administrative registers. Contrary to most validation studies, we find that measurement error in income is classical once we account for imperfect validation data. We find nonclassical measurement error in schooling, causing a 38% amplification bias in IV estimators.
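
The bias mechanics this record discusses can be illustrated by simulating classical measurement error in a regressor: OLS is attenuated by the reliability ratio lambda = var(x) / (var(x) + var(e)). This is a generic textbook sketch, not the authors' framework:

```python
import random

random.seed(0)
n = 200_000
beta = 1.0
sigma_x, sigma_e = 1.0, 0.5  # true-signal and measurement-noise std devs

x  = [random.gauss(0, sigma_x) for _ in range(n)]
y  = [beta * xi + random.gauss(0, 0.1) for xi in x]
xm = [xi + random.gauss(0, sigma_e) for xi in x]  # x observed with classical error

# OLS slope of y on the mismeasured regressor: cov(xm, y) / var(xm)
mean = lambda v: sum(v) / len(v)
mx, my = mean(xm), mean(y)
b_hat = sum((a - mx) * (b - my) for a, b in zip(xm, y)) / sum((a - mx) ** 2 for a in xm)

# classical error attenuates the slope toward zero by the reliability ratio
lam = sigma_x**2 / (sigma_x**2 + sigma_e**2)  # = 0.8 with these numbers
```

With nonclassical error (noise correlated with the true value), the bias need not be attenuation, which is consistent with the amplification the abstract reports for schooling.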

  18. Efforts onto electricity and instrumentation technology for nuclear power generation

    International Nuclear Information System (INIS)

    Hayakawa, Toshifumi

    2000-01-01

Nuclear power generation currently supplies more than one third of domestic electricity generation and will remain a source of stable electric energy beyond 2000. Recent examples of work on electrical and instrumentation technology for nuclear power generation include the following. In instrumentation and control systems, a new central control board applies advanced microprocessor and high-speed data-transfer techniques to central monitoring, operation, and plant control and protection, aiming to reduce operator workload, prevent human error, and improve system reliability and economics. In reactor instrumentation, a new digital control rod position indicator improves on the conventional system on the basis of operating experience and recent technology. In radiation instrumentation, a new radiation monitoring system with a proven record in a wide range of applications has been designed for nuclear power plants around an in-situ distributed processing concept using local network techniques. In operation, maintenance and management, a maintenance management system for nuclear power plants applies operating experience and recent data processing and communication technologies to make plant operation and maintenance more effective. Among large electrical apparatus, a model generator of actual size in the lengthwise dimension was produced and verified in preparation for future large-capacity nuclear power plants; this verification proved that even a large-capacity generator of the 1800 MVA class can be manufactured. (G.K.)

  19. Proposal for a Universal Test Mirror for Characterization of Slope Measuring Instruments

    International Nuclear Information System (INIS)

    Yashchuk, Valeriy V.; McKinney, Wayne R.; Warwick, Tony; Noll, Tino; Siewert, Frank; Zeschke, Thomas; Geckeler, Ralf D.

    2007-01-01

The development of third generation light sources like the Advanced Light Source (ALS) or BESSY II brought to a focus the need for high performance synchrotron optics with unprecedented tolerances for slope error and micro roughness. Proposed beam lines at Free Electron Lasers (FEL) require optical elements up to a length of one meter, characterized by a residual slope error in the range of 0.1 µrad (rms), and rms values of 0.1 nm for micro roughness. These optical elements must be inspected by highly accurate measuring instruments, providing a measurement uncertainty lower than the specified accuracy of the surface under test. It is essential that metrology devices in use at synchrotron laboratories be precisely characterized and calibrated to achieve this target. In this paper we discuss a proposal for a Universal Test Mirror (UTM) as a realization of a high performance calibration instrument. The instrument would provide an ideal calibration surface to replicate a redundant surface under test of redundant figure. The application of a sophisticated calibration instrument will allow the elimination of the majority of the systematic error from the error budget of an individual measurement of a particular optical element. We present the limitations of existing methods, initial UTM design considerations, possible calibration algorithms, and an estimation of the expected accuracy

  20. A Proposal to Localize Fermi GBM GRBs Through Coordinated Scanning of the GBM Error Circle via Optical Telescopes

    Science.gov (United States)

    Ukwatta, T. N.; Linnemann, J. T.; Tollefson, K.; Abeysekara, A. U.; Bhat, P. N.; Sonbas, E.; Gehrels, N.

    2011-01-01

    We investigate the feasibility of implementing a system that will coordinate ground-based optical telescopes to cover the Fermi GBM Error Circle (EC). The aim of the system is to localize GBM detected GRBs and facilitate multi-wavelength follow-up from space and ground. This system will optimize the observing locations in the GBM EC based on individual telescope location, Field of View (FoV) and sensitivity. The proposed system will coordinate GBM EC scanning by professional as well as amateur astronomers around the world. The results of a Monte Carlo simulation to investigate the feasibility of the project are presented.
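
A minimal version of the feasibility Monte Carlo described above: estimate how much of a Gaussian error circle a given set of telescope pointings covers. Flat-sky geometry and all numbers are assumptions chosen only for illustration:

```python
import math, random

random.seed(1)

def coverage_fraction(ec_sigma_deg, fov_centers, fov_radius_deg, n=50_000):
    """Monte Carlo estimate of the probability that a true GRB position
    (drawn from a symmetric 2D Gaussian error circle) lands inside at
    least one telescope field of view. Flat-sky approximation."""
    hits = 0
    for _ in range(n):
        x, y = random.gauss(0, ec_sigma_deg), random.gauss(0, ec_sigma_deg)
        if any(math.hypot(x - cx, y - cy) <= fov_radius_deg for cx, cy in fov_centers):
            hits += 1
    return hits / n

# a hypothetical 3x3 grid of 2-degree-radius FoV pointings tiling a
# 3-degree error circle
tiles = [(i * 3.0, j * 3.0) for i in (-1, 0, 1) for j in (-1, 0, 1)]
frac = coverage_fraction(3.0, tiles, 2.0)
```

A real scheduler would additionally weight pointings by the GBM localization probability map and by each telescope's sensitivity, as the abstract suggests.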

  1. Estimation of heading gyrocompass error using a GPS 3DF system: Impact on ADCP measurements

    Directory of Open Access Journals (Sweden)

    Simón Ruiz

    2002-12-01

Traditionally the horizontal orientation in a ship (heading) has been obtained from a gyrocompass. This instrument is still used on research vessels but has an estimated error of about 2-3 degrees, inducing a systematic error in the cross-track velocity measured by an Acoustic Doppler Current Profiler (ADCP). The three-dimensional positioning system (GPS 3DF) provides an independent heading measurement with accuracy better than 0.1 degree. The Spanish research vessel BIO Hespérides has been operating with this new system since 1996. For the first time on this vessel, the data from this new instrument are used to estimate the gyrocompass error. The methodology we use follows the scheme developed by Griffiths (1994), which compares data from the gyrocompass and the GPS system in order to obtain an interpolated error function. In the present work we apply this methodology to mesoscale surveys performed during the observational phase of the OMEGA project, in the Alboran Sea. The heading-dependent gyrocompass error dominated. Errors in gyrocompass heading of 1.4-3.4 degrees have been found, which give a maximum error in measured cross-track ADCP velocity of 24 cm/s.
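
The Griffiths-style comparison described above — differencing gyrocompass and GPS headings to build a heading-dependent error function, then translating heading error into cross-track ADCP velocity error — can be sketched as follows. Binning instead of interpolation, and all numeric inputs, are simplifying assumptions:

```python
import math

def wrapped_diff_deg(a, b):
    """Signed difference a - b wrapped into (-180, 180] degrees."""
    return (a - b + 180.0) % 360.0 - 180.0

def heading_error_function(gyro_deg, gps_deg, bin_width=10.0):
    """Average the gyro-minus-GPS heading differences in bins of gyro
    heading -- a crude stand-in for the interpolated, heading-dependent
    error function of the paper."""
    bins = {}
    for g, r in zip(gyro_deg, gps_deg):
        bins.setdefault(int(g // bin_width), []).append(wrapped_diff_deg(g, r))
    return {k: sum(v) / len(v) for k, v in bins.items()}

def cross_track_error_cm_s(ship_speed_m_s, heading_error_deg):
    """A heading error rotates the measured velocity vector; the induced
    cross-track component is approximately U * sin(delta)."""
    return 100.0 * ship_speed_m_s * math.sin(math.radians(heading_error_deg))
```

For example, a 3.4-degree heading error at an assumed ship speed of 4 m/s gives roughly 24 cm/s of cross-track velocity error, the same order as the maximum quoted in the abstract.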

  2. SU-D-BRA-03: Analysis of Systematic Errors with 2D/3D Image Registration for Target Localization and Treatment Delivery in Stereotactic Radiosurgery

    International Nuclear Information System (INIS)

    Xu, H; Chetty, I; Wen, N

    2016-01-01

Purpose: Determine systematic deviations between 2D/3D and 3D/3D image registrations with six degrees of freedom (6DOF) for various imaging modalities and registration algorithms on the Varian Edge Linac. Methods: The 6DOF systematic errors were assessed by comparing automated 2D/3D (kV/MV vs. CT) with 3D/3D (CBCT vs. CT) image registrations from different imaging pairs, CT slice thicknesses, couch angles, similarity measures, etc., using a Rando head and a pelvic phantom. The 2D/3D image registration accuracy was evaluated at different treatment sites (intra-cranial and extra-cranial) by statistically analyzing 2D/3D pre-treatment verification against 3D/3D localization of 192 Stereotactic Radiosurgery/Stereotactic Body Radiation Therapy treatment fractions for 88 patients. Results: The systematic errors of 2D/3D image registration using kV-kV, MV-kV and MV-MV image pairs using 0.8 mm slice thickness CT images were within 0.3 mm and 0.3° for translations and rotations with a 95% confidence interval (CI). No significant difference between 2D/3D and 3D/3D image registrations (P>0.05) was observed for target localization at various CT slice thicknesses ranging from 0.8 to 3 mm. Couch angles (30, 45, 60 degree) did not impact the accuracy of 2D/3D image registration. Using pattern intensity with content image filtering was recommended for 2D/3D image registration to achieve the best accuracy. For the patient study, translational error was within 2 mm and rotational error was within 0.6 degrees in terms of 95% CI for 2D/3D image registration. For intra-cranial sites, means and std. deviations of translational errors were −0.2±0.7, 0.04±0.5, 0.1±0.4 mm for LNG, LAT, VRT directions, respectively. For extra-cranial sites, means and std. deviations of translational errors were −0.04±1, 0.2±1, 0.1±1 mm for LNG, LAT, VRT directions, respectively. 2D/3D image registration uncertainties for intra-cranial and extra-cranial sites were comparable. Conclusion: The Varian

  3. SU-D-BRA-03: Analysis of Systematic Errors with 2D/3D Image Registration for Target Localization and Treatment Delivery in Stereotactic Radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Xu, H [Wayne State University, Detroit, MI (United States); Chetty, I; Wen, N [Henry Ford Health System, Detroit, MI (United States)

    2016-06-15

Purpose: Determine systematic deviations between 2D/3D and 3D/3D image registrations with six degrees of freedom (6DOF) for various imaging modalities and registration algorithms on the Varian Edge Linac. Methods: The 6DOF systematic errors were assessed by comparing automated 2D/3D (kV/MV vs. CT) with 3D/3D (CBCT vs. CT) image registrations from different imaging pairs, CT slice thicknesses, couch angles, similarity measures, etc., using a Rando head and a pelvic phantom. The 2D/3D image registration accuracy was evaluated at different treatment sites (intra-cranial and extra-cranial) by statistically analyzing 2D/3D pre-treatment verification against 3D/3D localization of 192 Stereotactic Radiosurgery/Stereotactic Body Radiation Therapy treatment fractions for 88 patients. Results: The systematic errors of 2D/3D image registration using kV-kV, MV-kV and MV-MV image pairs using 0.8 mm slice thickness CT images were within 0.3 mm and 0.3° for translations and rotations with a 95% confidence interval (CI). No significant difference between 2D/3D and 3D/3D image registrations (P>0.05) was observed for target localization at various CT slice thicknesses ranging from 0.8 to 3 mm. Couch angles (30, 45, 60 degree) did not impact the accuracy of 2D/3D image registration. Using pattern intensity with content image filtering was recommended for 2D/3D image registration to achieve the best accuracy. For the patient study, translational error was within 2 mm and rotational error was within 0.6 degrees in terms of 95% CI for 2D/3D image registration. For intra-cranial sites, means and std. deviations of translational errors were −0.2±0.7, 0.04±0.5, 0.1±0.4 mm for LNG, LAT, VRT directions, respectively. For extra-cranial sites, means and std. deviations of translational errors were −0.04±1, 0.2±1, 0.1±1 mm for LNG, LAT, VRT directions, respectively. 2D/3D image registration uncertainties for intra-cranial and extra-cranial sites were comparable. Conclusion: The Varian
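
The summary statistics reported above (mean ± standard deviation with a 95% interval) can be reproduced for any error sample in a few lines. The sample below is synthetic, drawn to mimic the reported intra-cranial LNG numbers, purely for illustration:

```python
import math, random

random.seed(2)

def mean_sd_ci95(samples):
    """Mean, sample standard deviation, and a mean +/- 1.96*sd interval
    describing where ~95% of individual errors fall, assuming normality."""
    n = len(samples)
    m = sum(samples) / n
    sd = math.sqrt(sum((s - m) ** 2 for s in samples) / (n - 1))
    return m, sd, (m - 1.96 * sd, m + 1.96 * sd)

# hypothetical per-fraction translational differences (mm), 192 fractions
errors_mm = [random.gauss(-0.2, 0.7) for _ in range(192)]
m, sd, ci = mean_sd_ci95(errors_mm)
```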

  4. Measurement Errors and Uncertainties Theory and Practice

    CERN Document Server

    Rabinovich, Semyon G

    2006-01-01

Measurement Errors and Uncertainties addresses the most important problems that physicists and engineers encounter when estimating errors and uncertainty. Building from the fundamentals of measurement theory, the author develops the theory of accuracy of measurements and offers a wealth of practical recommendations and examples of applications. This new edition covers a wide range of subjects, including: - Basic concepts of metrology - Measuring instruments characterization, standardization and calibration - Estimation of errors and uncertainty of single and multiple measurements - Modern probability-based methods of estimating measurement uncertainty With this new edition, the author completes the development of the new theory of indirect measurements. This theory provides more accurate and efficient methods for processing indirect measurement data. It eliminates the need to calculate the correlation coefficient - a stumbling block in measurement data processing - and offers for the first time a way to obtain...

  5. Error mapping of high-speed AFM systems

    Science.gov (United States)

    Klapetek, Petr; Picco, Loren; Payton, Oliver; Yacoot, Andrew; Miles, Mervyn

    2013-02-01

    In recent years, there have been several advances in the development of high-speed atomic force microscopes (HSAFMs) to obtain images with nanometre vertical and lateral resolution at frame rates in excess of 1 fps. To date, these instruments are lacking in metrology for their lateral scan axes; however, by imaging a series of two-dimensional lateral calibration standards, it has been possible to obtain information about the errors associated with these HSAFM scan axes. Results from initial measurements are presented in this paper and show that the scan speed needs to be taken into account when performing a calibration as it can lead to positioning errors of up to 3%.

  6. In-core failure of the instrumented BWR rod by locally induced high coolant temperature

    International Nuclear Information System (INIS)

    Yanagisawa, Kazuaki

    1985-12-01

In a BWR-type light water loop installed in the HBWR, a current BWR-type fuel rod pre-irradiated up to 5.6 MWd/kgU was power ramped to 50 kW/m. During the ramp, the diameter of the rod expanded significantly at the bottom end; this behaviour differed from that caused by pellet-cladding interaction (PCI). In the post-irradiation examination, the rod was found to have failed. In this paper, the cause of the failure was studied, with the following findings. (1) The significant expansion of the rod diameter was attributed to marked oxidation of the cladding outer surface, which appeared in the 0°-180° direction with a nodular shape. (2) The cladding at this location was softened by high coolant temperature, and the coolant pressure of 7 MPa pressed the cladding into the chamfer void at the pellet interface. (3) At the location of the significant oxidation, an instrumentation transformer was present and the coolant flow area was very small. The reduction of the coolant flow was aggravated by bending of the cladding that had occurred during the pre-irradiation stage. These factors are considered to be the principal cause of the local blockage of coolant flow and the resulting high temperature at this location. (author)

  7. Fusing Bluetooth Beacon Data with Wi-Fi Radiomaps for Improved Indoor Localization

    Science.gov (United States)

    Kanaris, Loizos; Kokkinis, Akis; Liotta, Antonio; Stavrou, Stavros

    2017-01-01

    Indoor user localization and tracking are instrumental to a broad range of services and applications in the Internet of Things (IoT) and particularly in Body Sensor Networks (BSN) and Ambient Assisted Living (AAL) scenarios. Due to the widespread availability of IEEE 802.11, many localization platforms have been proposed, based on the Wi-Fi Received Signal Strength (RSS) indicator, using algorithms such as K-Nearest Neighbour (KNN), Maximum A Posteriori (MAP) and Minimum Mean Square Error (MMSE). In this paper, we introduce a hybrid method that combines the simplicity (and low cost) of Bluetooth Low Energy (BLE) and the popular 802.11 infrastructure, to improve the accuracy of indoor localization platforms. Building on KNN, we propose a new positioning algorithm (dubbed i-KNN) which is able to filter the initial fingerprint dataset (i.e., the radiomap), after considering the proximity of RSS fingerprints with respect to the BLE devices. In this way, i-KNN provides an optimised small subset of possible user locations, based on which it finally estimates the user position. The proposed methodology achieves fast positioning estimation due to the utilization of a fragment of the initial fingerprint dataset, while at the same time improves positioning accuracy by minimizing any calculation errors. PMID:28394268

  8. Fusing Bluetooth Beacon Data with Wi-Fi Radiomaps for Improved Indoor Localization.

    Science.gov (United States)

    Kanaris, Loizos; Kokkinis, Akis; Liotta, Antonio; Stavrou, Stavros

    2017-04-10

    Indoor user localization and tracking are instrumental to a broad range of services and applications in the Internet of Things (IoT) and particularly in Body Sensor Networks (BSN) and Ambient Assisted Living (AAL) scenarios. Due to the widespread availability of IEEE 802.11, many localization platforms have been proposed, based on the Wi-Fi Received Signal Strength (RSS) indicator, using algorithms such as K -Nearest Neighbour (KNN), Maximum A Posteriori (MAP) and Minimum Mean Square Error (MMSE). In this paper, we introduce a hybrid method that combines the simplicity (and low cost) of Bluetooth Low Energy (BLE) and the popular 802.11 infrastructure, to improve the accuracy of indoor localization platforms. Building on KNN, we propose a new positioning algorithm (dubbed i-KNN) which is able to filter the initial fingerprint dataset (i.e., the radiomap), after considering the proximity of RSS fingerprints with respect to the BLE devices. In this way, i-KNN provides an optimised small subset of possible user locations, based on which it finally estimates the user position. The proposed methodology achieves fast positioning estimation due to the utilization of a fragment of the initial fingerprint dataset, while at the same time improves positioning accuracy by minimizing any calculation errors.
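
A toy sketch of the i-KNN idea described above: the BLE proximity information prunes the Wi-Fi radiomap before the usual KNN position estimate runs. The data layout and all numbers are invented for illustration, and the paper's actual filtering criterion may differ:

```python
import math

def iknn_position(wifi_obs, radiomap, ble_hits, k=3):
    """Keep only radiomap fingerprints recorded near the BLE beacons
    currently heard, then run plain KNN in Wi-Fi RSS space over that
    reduced set. Radiomap entries are assumed to be tuples of
    (location, rss_vector, nearby_ble_ids)."""
    candidates = [e for e in radiomap if ble_hits & e[2]] or radiomap  # fall back to full map
    nearest = sorted(candidates, key=lambda e: math.dist(wifi_obs, e[1]))[:k]
    xs = [loc[0] for loc, _, _ in nearest]
    ys = [loc[1] for loc, _, _ in nearest]
    return (sum(xs) / len(xs), sum(ys) / len(ys))  # centroid of k nearest fingerprints

radiomap = [
    ((0.0, 0.0),  [-40, -70], {"ble1"}),
    ((5.0, 0.0),  [-55, -60], {"ble1", "ble2"}),
    ((10.0, 0.0), [-75, -45], {"ble2"}),
]
pos = iknn_position([-42, -68], radiomap, ble_hits={"ble1"}, k=2)
```

Pruning the fingerprint set both shrinks the KNN search (faster) and removes far-away fingerprints that happen to look similar in RSS space (fewer gross errors), which matches the two benefits claimed in the abstract.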

  9. Fusing Bluetooth Beacon Data with Wi-Fi Radiomaps for Improved Indoor Localization

    Directory of Open Access Journals (Sweden)

    Loizos Kanaris

    2017-04-01

Indoor user localization and tracking are instrumental to a broad range of services and applications in the Internet of Things (IoT) and particularly in Body Sensor Networks (BSN) and Ambient Assisted Living (AAL) scenarios. Due to the widespread availability of IEEE 802.11, many localization platforms have been proposed, based on the Wi-Fi Received Signal Strength (RSS) indicator, using algorithms such as K-Nearest Neighbour (KNN), Maximum A Posteriori (MAP) and Minimum Mean Square Error (MMSE). In this paper, we introduce a hybrid method that combines the simplicity (and low cost) of Bluetooth Low Energy (BLE) and the popular 802.11 infrastructure, to improve the accuracy of indoor localization platforms. Building on KNN, we propose a new positioning algorithm (dubbed i-KNN) which is able to filter the initial fingerprint dataset (i.e., the radiomap), after considering the proximity of RSS fingerprints with respect to the BLE devices. In this way, i-KNN provides an optimised small subset of possible user locations, based on which it finally estimates the user position. The proposed methodology achieves fast positioning estimation due to the utilization of a fragment of the initial fingerprint dataset, while at the same time improves positioning accuracy by minimizing any calculation errors.

  10. Instrumentation

    International Nuclear Information System (INIS)

    Umminger, K.

    2008-01-01

    A proper measurement of the relevant single and two-phase flow parameters is the basis for the understanding of many complex thermal-hydraulic processes. Reliable instrumentation is therefore necessary for the interaction between analysis and experiment especially in the field of nuclear safety research where postulated accident scenarios have to be simulated in experimental facilities and predicted by complex computer code systems. The so-called conventional instrumentation for the measurement of e. g. pressures, temperatures, pressure differences and single phase flow velocities is still a solid basis for the investigation and interpretation of many phenomena and especially for the understanding of the overall system behavior. Measurement data from such instrumentation still serves in many cases as a database for thermal-hydraulic system codes. However some special instrumentation such as online concentration measurement for boric acid in the water phase or for non-condensibles in steam atmosphere as well as flow visualization techniques were further developed and successfully applied during the recent years. Concerning the modeling needs for advanced thermal-hydraulic codes, significant advances have been accomplished in the last few years in the local instrumentation technology for two-phase flow by the application of new sensor techniques, optical or beam methods and electronic technology. This paper will give insight into the current state of instrumentation technology for safety-related thermohydraulic experiments. Advantages and limitations of some measurement processes and systems will be indicated as well as trends and possibilities for further development. Aspects of instrumentation in operating reactors will also be mentioned.

  11. Teacher knowledge of error analysis in differential calculus

    Directory of Open Access Journals (Sweden)

    Eunice K. Moru

    2014-12-01

The study investigated teacher knowledge of error analysis in differential calculus. Two teachers were the sample of the study: one a subject specialist and the other a mathematics education specialist. Questionnaires and interviews were used for data collection. The findings of the study reflect that the teachers’ knowledge of error analysis was characterised by the following assertions, which are backed up with some evidence: (1) teachers identified the errors correctly, (2) the generalised error identification resulted in opaque analysis, (3) some of the identified errors were not interpreted from multiple perspectives, (4) teachers’ evaluation of errors was either local or global and (5) in remedying errors accuracy and efficiency were emphasised more than conceptual understanding. The implications of the findings of the study for teaching include engaging in error analysis continuously as this is one way of improving knowledge for teaching.

  12. Student Self-Assessment and Faculty Assessment of Performance in an Interprofessional Error Disclosure Simulation Training Program.

    Science.gov (United States)

    Poirier, Therese I; Pailden, Junvie; Jhala, Ray; Ronald, Katie; Wilhelm, Miranda; Fan, Jingyang

    2017-04-01

    Objectives. To conduct a prospective evaluation for effectiveness of an error disclosure assessment tool and video recordings to enhance student learning and metacognitive skills while assessing the IPEC competencies. Design. The instruments for assessing performance (planning, communication, process, and team dynamics) in interprofessional error disclosure were developed. Student self-assessment of performance before and after viewing the recordings of their encounters were obtained. Faculty used a similar instrument to conduct real-time assessments. An instrument to assess achievement of the Interprofessional Education Collaborative (IPEC) core competencies was developed. Qualitative data was reviewed to determine student and faculty perceptions of the simulation. Assessment. The interprofessional simulation training involved a total of 233 students (50 dental, 109 nursing and 74 pharmacy). Use of video recordings made a significant difference in student self-assessment for communication and process categories of error disclosure. No differences in student self-assessments were noted among the different professions. There were differences among the family member affects for planning and communication for both pre-video and post-video data. There were significant differences between student self-assessment and faculty assessment for all paired comparisons, except communication in student post-video self-assessment. Students' perceptions of achievement of the IPEC core competencies were positive. Conclusion. The use of assessment instruments and video recordings may have enhanced students' metacognitive skills for assessing performance in interprofessional error disclosure. The simulation training was effective in enhancing perceptions on achievement of IPEC core competencies. This enhanced assessment process appeared to enhance learning about the skills needed for interprofessional error disclosure.

  13. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad

    Energy Technology Data Exchange (ETDEWEB)

    Alcock, Simon G., E-mail: simon.alcock@diamond.ac.uk; Nistea, Ioana; Sawhney, Kawal [Diamond Light Source Ltd., Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom)

    2016-05-15

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM’s autocollimator adds into the overall measured value of the mirror’s slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.

  14. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad

    International Nuclear Information System (INIS)

    Alcock, Simon G.; Nistea, Ioana; Sawhney, Kawal

    2016-01-01

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM’s autocollimator adds into the overall measured value of the mirror’s slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.

  15. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad.

    Science.gov (United States)

    Alcock, Simon G; Nistea, Ioana; Sawhney, Kawal

    2016-05-01

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM's autocollimator adds into the overall measured value of the mirror's slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.
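
    The averaging behaviour described in this record can be sketched with the usual noise model: uncorrelated random noise adds in quadrature with the mirror's true slope error and is beaten down by the square root of the number of averaged scans. The noise floor and target figures below are illustrative assumptions, not Diamond-NOM specifications.

```python
import math

def measured_slope_error(true_nrad, noise_rms_nrad, n_scans):
    """Random autocollimator noise adds in quadrature with the mirror's
    true slope error; averaging n_scans reduces it by sqrt(n_scans)."""
    return math.sqrt(true_nrad ** 2 + noise_rms_nrad ** 2 / n_scans)

def required_scans(noise_rms_nrad, target_noise_nrad):
    """Scans needed so the residual noise term drops below a target."""
    return math.ceil((noise_rms_nrad / target_noise_nrad) ** 2)
```

    Under these assumed numbers, reaching a 50 nrad residual noise term from a 200 nrad RMS noise floor takes 16 averaged scans, at which point a true 100 nrad mirror would read roughly 112 nrad.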

  16. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    Directory of Open Access Journals (Sweden)

    Wei He

    Full Text Available Evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments, and a model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 error/(particle/cm^2), while the MTTF is approximately 110.7 h.
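
    The relation between the two headline figures can be illustrated with the standard cross-section reading of accelerated test data: SFER is errors per unit particle fluence, and dividing an assumed operational flux into it yields an MTTF. All numbers below are hypothetical, not taken from the paper.

```python
def sfer(n_errors, fluence_per_cm2):
    """System functional error rate: observed errors per unit particle
    fluence, i.e. error/(particle/cm^2), from an accelerated test."""
    return n_errors / fluence_per_cm2

def mttf_hours(sfer_value, flux_per_cm2_per_hour):
    """Mean time to failure under an assumed operational particle flux."""
    return 1.0 / (sfer_value * flux_per_cm2_per_hour)
```

    For example, 50 observed errors over a hypothetical fluence of 5e4 particles/cm^2 give an SFER of 1e-3; at an assumed flux of 9 particles/cm^2/h that corresponds to an MTTF of about 111 h.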

  17. Laser tracker error determination using a network measurement

    International Nuclear Information System (INIS)

    Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim

    2011-01-01

    We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies
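
    A deliberately minimal sketch of the network-fit idea: here the tracker model is reduced to a single geometric parameter (a constant angle-encoder offset) estimated by least squares from observations of fixed targets whose true directions are fixed by the network geometry. The real method fits many coupled geometric parameters simultaneously; all numbers below are invented for illustration.

```python
import random

random.seed(1)

# Hypothetical encoder offset (rad) and fixed target directions (rad).
true_offset = 1.5e-4
true_dirs = [0.1, 0.5, 1.0, 1.7, 2.3]

# Simulated observations: each direction is shifted by the offset plus
# random measurement noise.
observed = [d + true_offset + random.gauss(0.0, 1e-5) for d in true_dirs]

# For a model linear in a single parameter, the least-squares estimate
# is simply the mean residual; its spread also gives the parameter
# uncertainty, mirroring the uncertainty analysis in the record above.
residuals = [o - d for o, d in zip(observed, true_dirs)]
offset_hat = sum(residuals) / len(residuals)
```

    The estimated offset recovers the injected value to well within the noise level, which is the essence of the network adjustment.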

  18. ASSESSMENT OF SYSTEMATIC CHROMATIC ERRORS THAT IMPACT SUB-1% PHOTOMETRIC PRECISION IN LARGE-AREA SKY SURVEYS

    Energy Technology Data Exchange (ETDEWEB)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; Boada, S.; Mondrik, N.; Nagasawa, D. [George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, and Department of Physics and Astronomy, Texas A and M University, College Station, TX 77843 (United States); Tucker, D.; Annis, J.; Finley, D. A.; Kent, S.; Lin, H.; Marriner, J.; Wester, W. [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Kessler, R.; Scolnic, D. [Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637 (United States); Bernstein, G. M. [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104 (United States); Burke, D. L.; Rykoff, E. S. [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); James, D. J.; Walker, A. R. [Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, Casilla 603, La Serena (Chile); Collaboration: DES Collaboration; and others

    2016-06-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey’s stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for
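
    The mechanism behind such chromatic errors can be sketched with a toy synthetic-photometry calculation: a magnitude is a logarithm of the throughput-weighted band integral of the source spectrum, so a change in throughput shape shifts red and blue sources by different amounts. The three-point "spectra" and throughput curves below are invented for illustration, not DES bandpasses.

```python
import math

def synthetic_mag(wavelengths, flux, throughput):
    """Synthetic broadband magnitude (arbitrary zeropoint) of a source
    spectrum seen through a total system throughput, approximating the
    photon-weighted bandpass integral with a discrete sum."""
    num = sum(f * s * w for f, s, w in zip(flux, throughput, wavelengths))
    den = sum(s * w for s, w in zip(throughput, wavelengths))
    return -2.5 * math.log10(num / den)

wl = [400.0, 500.0, 600.0]          # nm, illustrative sampling
natural = [0.5, 0.6, 0.7]           # assumed natural-system throughput
perturbed = [0.5, 0.6, 0.9]         # e.g. extra red-end throughput
flat = [1.0, 1.0, 1.0]              # color-neutral source
red = [0.5, 1.0, 2.0]               # red-sloped source

# Chromatic error = magnitude shift between perturbed and natural
# systems; it vanishes for the flat source but not for the red one.
sce_flat = synthetic_mag(wl, flat, perturbed) - synthetic_mag(wl, flat, natural)
sce_red = synthetic_mag(wl, red, perturbed) - synthetic_mag(wl, red, natural)
```

    The flat source is unaffected because numerator and denominator rescale together, while the red source picks up a color-dependent magnitude shift, which is exactly why these errors cannot be removed by a gray zeropoint.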

  19. Adaptation of WHOQOL as health-related quality of life instrument to develop a vision-specific instrument.

    Directory of Open Access Journals (Sweden)

    Dandona Lalit

    2000-01-01

    Full Text Available The WHOQOL instrument was adapted as a health-related QOL instrument for a population-based epidemiologic study of eye diseases in southern India, the Andhra Pradesh Eye Disease Study (APEDS. A follow-up question was added to each item in WHOQOL to determine whether the decrease in QOL was due to any health reasons including eye-related reasons. Modifications in WHOQOL and translation in local language were done through the use of the focus groups including health professionals and people not related to health care. The modified instrument has 28 items across 6 domains of the WHOQOL and was translated into the local language, Telugu, using the pragmatic approach. It takes 10-20 minutes to be administered by a trained interviewer. Reliability was within acceptable range. This health-related QOL instrument is being used in the population-based study APEDS to develop a vision-specific QOL instrument which could potentially be used to assess the impact of visual impairment on QOL across different cultures and for use in evaluating eye-care interventions. This health-related QOL instrument could also be used to develop other disease-specific instruments as it allows assessment of the extent to which various aspects of QOL are affected by a variety of health problems.

  20. Assessment of the measurement control program for solution assay instruments at the Los Alamos National Laboratory Plutonium Facility

    International Nuclear Information System (INIS)

    Goldman, A.S.

    1985-05-01

    This report documents and reviews the measurement control program (MCP) over a 27-month period for four solution assay instruments (SAIs) at the Los Alamos National Laboratory Plutonium Facility. SAI measurement data collected during the period January 1982 through March 1984 were analyzed. The sources of these data included computer listings of measurements emanating from operator entries on computer terminals, logbook entries of measurements transcribed by operators, and computer listings of measurements recorded internally in the instruments. Data were also obtained from control charts that are available as part of the MCP. As a result of our analyses, we observed agreement between propagated and historical variances and concluded that the instruments were functioning properly from a precision standpoint. We noticed small, persistent biases indicating slight instrument inaccuracies. We suggest that statistical tests for bias be incorporated in the MCP on a monthly basis, and if the instrument bias is significantly greater than zero, the instrument should undergo maintenance. We propose that the weekly precision test be replaced by a daily test to provide more timely detection of possible problems. We observed that one instrument showed a trend of increasing bias during the past six months and recommend that a randomness test be incorporated to detect trends in a more timely fashion. We detected operator transcription errors during data transmissions and advise direct instrument transmission to the MCP to eliminate these errors. A transmission error rate, based on those errors that affected decisions in the MCP, was estimated as 1%. 11 refs., 10 figs., 4 tabs
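
    The suggested monthly bias test can be sketched as a one-sample t-test on the differences between instrument readings and reference values. The control data and critical value below are hypothetical.

```python
import math
import statistics

def bias_check(differences, t_crit):
    """One-sample t-test that the mean instrument bias is zero.
    `differences` are (measured - reference) control measurements;
    the instrument is flagged for maintenance when |t| > t_crit."""
    n = len(differences)
    mean = statistics.mean(differences)
    sem = statistics.stdev(differences) / math.sqrt(n)
    t = mean / sem
    return t, abs(t) > t_crit

# Hypothetical month of six control runs with a small positive bias;
# 2.571 is the two-sided 5% critical value for 5 degrees of freedom.
t, flag = bias_check([0.2, 0.3, 0.25, 0.35, 0.3, 0.28], 2.571)
```

    With the sample data above, t is about 13.5, far beyond the critical value, so this instrument would be flagged for maintenance.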

  1. Measurements on pointing error and field of view of Cimel-318 Sun photometers in the scope of AERONET

    Directory of Open Access Journals (Sweden)

    B. Torres

    2013-08-01

    Full Text Available Sensitivity studies indicate that among the diverse error sources of ground-based sky radiometer observations, the pointing error plays an important role in the correct retrieval of aerosol properties. Accurate pointing is especially critical for the characterization of desert dust aerosol. The present work relies on the analysis of two new measurement procedures (cross and matrix) specifically designed for the evaluation of the pointing error in the standard instrument of the Aerosol Robotic Network (AERONET), the Cimel CE-318 Sun photometer. The first part of the analysis contains a preliminary study which concludes that a correction for the Sun's movement is needed for an accurate evaluation of the pointing error from both new measurements. Once this correction is applied, both measurements show equivalent results, with differences under 0.01° in the pointing error estimations. The second part of the analysis covers the incorporation of the cross procedure into the AERONET routine measurement protocol in order to monitor the pointing error in field instruments. The pointing error was evaluated using the data collected over more than a year by 7 Sun photometers belonging to AERONET sites. The registered pointing error values were generally smaller than 0.1°, though in some instruments values up to 0.3° have been observed. Moreover, the pointing error analysis shows that this measurement can be useful to detect mechanical problems in the robots or dirtiness in the 4-quadrant detector used to track the Sun. Specifically, these mechanical faults can be detected thanks to the stable behavior of the values over time and vs. the solar zenith angle. Finally, the matrix procedure can be used to derive the value of the solid view angle of the instruments. The methodology has been implemented and applied for the characterization of 5 Sun photometers. To validate the method, a comparison with solid angles obtained from the vicarious calibration method was

  2. Real-time detection and elimination of nonorthogonality error in interference fringe processing

    International Nuclear Information System (INIS)

    Hu Haijiang; Zhang Fengdeng

    2011-01-01

    In the measurement of interference fringes, nonorthogonality error is a main error source influencing the precision and accuracy of the measurement system, and its detection and elimination has long been an important goal. A novel method that uses only zero-crossing detection and counting is proposed to detect and eliminate the nonorthogonality error in real time. The method can be realized simply by means of digital logic devices, because it invokes neither trigonometric nor inverse trigonometric functions, and it can be widely used in the bidirectional subdivision systems of Moiré fringes and other optical instruments.
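
    A sketch of how nonorthogonality can be estimated from zero crossings and counting alone, with no trigonometric or inverse-trigonometric functions in the detection path (the final scaling to radians is only for display). The signal lengths and the injected error are illustrative.

```python
import math

def rising_crossings(samples):
    """Sample indices where the signal crosses zero going upward."""
    return [i for i in range(1, len(samples))
            if samples[i - 1] < 0 <= samples[i]]

# Two quadrature fringe signals; the second deviates from the ideal
# 90-degree separation by a nonorthogonality error `phi` (rad).
n, cycles, phi = 10000, 5, 0.05
a = [math.sin(2 * math.pi * cycles * i / n) for i in range(n)]
b = [math.sin(2 * math.pi * cycles * i / n - math.pi / 2 + phi)
     for i in range(n)]

# Only crossing detection and counting are needed: the lag between
# corresponding rising crossings, wrapped into one period, deviates
# from a quarter period by exactly the nonorthogonality error.
za, zb = rising_crossings(a), rising_crossings(b)
period = za[1] - za[0]
lag = (zb[0] - za[0]) % period
phi_est = 2 * math.pi * (period / 4 - lag) / period
```

    The estimate is quantized at the sample level, so its resolution improves with the sampling rate, mirroring how a counter-based digital implementation would behave.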

  3. Efficient detection of dangling pointer error for C/C++ programs

    Science.gov (United States)

    Zhang, Wenzhe

    2017-08-01

    Dangling pointer errors are pervasive in C/C++ programs and very hard to detect. This paper introduces an efficient detector for dangling pointer errors in C/C++ programs. By selectively leaving some memory accesses unmonitored, our method reduces the memory monitoring overhead and thus achieves better performance than previous methods. Experiments show that our method achieves an average speedup of 9% over a previous compiler-instrumentation-based method and more than 50% over a previous page-protection-based method.

  4. Data evaluation for operator-inspector differences for a specific NDA instrument

    International Nuclear Information System (INIS)

    Franklin, M.

    1984-01-01

    The Joint Research Centre (JRC) of the European Commission is developing a number of NDA instruments for safeguards use. In particular, the JRC has developed a photo-neutron active interrogation device (Phonid) for the assay of U-235 in bulk quantities. This report describes new statistical results for the D statistic in the context of data evaluation algorithms for the Phonid instrument. The Phonid instrument is useful for this purpose because its error propagation structure is well characterised and yet not trivially simple. The data evaluation for Phonid data is derived from its error propagation modelling plus new results for the sampling distribution of the D statistic. Assigning an uncertainty to the D statistic value without any diversion-strategy assumptions has long been an unresolved problem. The results described in this report solve it by considering the sampling distribution of the D statistic given the population of discrepancies, where discrepancy is defined as the difference between operator-declared values and the true values measured by the inspector. This approach provides estimable expressions for the sampling moments of the D statistic without making any assumption about the cause (diversion, clerical error, measurement error) of the discrepancy. The report also provides a general discussion of the distinction between planning a verification and performing the data analysis after the verification has been carried out

  5. Error reduction techniques for measuring long synchrotron mirrors

    International Nuclear Information System (INIS)

    Irick, S.

    1998-07-01

    Many instruments and techniques are used for measuring long mirror surfaces. A Fizeau interferometer may be used to measure mirrors much longer than the interferometer aperture size by using grazing incidence at the mirror surface and analyzing the light reflected from a flat end mirror. Advantages of this technique are data acquisition speed and use of a common instrument. Disadvantages are reduced sampling interval, uncertainty of tangential position, and sagittal/tangential aspect ratio other than unity. Also, deep aspheric surfaces cannot be measured on a Fizeau interferometer without a specially made fringe nulling holographic plate. Other scanning instruments have been developed for measuring height, slope, or curvature profiles of the surface, but lack accuracy for very long scans required for X-ray synchrotron mirrors. The Long Trace Profiler (LTP) was developed specifically for long x-ray mirror measurement, and still outperforms other instruments, especially for aspheres. Thus, this paper focuses on error reduction techniques for the LTP

  6. Error prevention at a radon measurement service laboratory

    International Nuclear Information System (INIS)

    Cohen, B.L.; Cohen, F.

    1989-01-01

    This article describes the steps taken at a high volume counting laboratory to avoid human, instrument, and computer errors. The laboratory analyzes diffusion barrier charcoal adsorption canisters which have been used to test homes and commercial buildings. A series of computer and human cross-checks are utilized to assure that accurate results are reported to the correct client

  7. Errors and Correction of Precipitation Measurements in China

    Institute of Scientific and Technical Information of China (English)

    REN Zhihua; LI Mingqin

    2007-01-01

    In order to discover the range of various errors in Chinese precipitation measurements and seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the "horizontal precipitation gauge" was devised beforehand. Field intercomparison observations covering 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of the intercomparison measurement results, and the distributions of random and systematic errors in precipitation measurements are studied in this paper. A correction method, especially for wind-induced errors, is developed. The results prove that a power-function correlation exists between the precipitation amount caught by the horizontal gauge and the absolute difference of observations made by the operational gauge and the pit gauge; the correlation coefficient is 0.99. For operational observations, precipitation correction can be carried out simply by parallel observation with a horizontal precipitation gauge. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.
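
    A power-function correlation of the kind reported above can be fitted by ordinary least squares in log-log space. The synthetic data in the example follow an exact power law, so the fit recovers the assumed coefficients; real gauge data would of course scatter around the curve.

```python
import math

def fit_power(x, y):
    """Fit y = a * x**b by ordinary least squares in log-log space."""
    lx, ly = [math.log(v) for v in x], [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic data following y = 2 * x**0.8 exactly (illustrative only).
x = [1.0, 2.0, 4.0, 8.0, 16.0]
y = [2.0 * v ** 0.8 for v in x]
a, b = fit_power(x, y)
```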

  8. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. 
Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  9. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    Science.gov (United States)

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however, prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate), we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  10. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Nano-level instrumentation for analyzing the dynamic accuracy of a rolling element bearing

    International Nuclear Information System (INIS)

    Yang, Z.; Hong, J.; Zhang, J.; Wang, M. Y.; Zhu, Y.

    2013-01-01

    The rotational performance of high-precision rolling bearings is fundamental to the overall accuracy of complex mechanical systems. A nano-level instrument for analyzing the rotational accuracy of high-precision machine-tool bearings under working conditions was developed. In this instrument, a high-precision (error motion < 0.15 μm) and high-stiffness (2600 N axial loading capacity) aerostatic spindle was applied to spin the test bearing; operating conditions could be simulated effectively because of the large axial loading capacity. An air cylinder, controlled by a proportional pressure regulator, drove an air bearing to apply precise, non-contact axial forces, so that apart from the axial loading and the rotation constraint the five remaining degrees of freedom were completely unconstrained and uninfluenced by the instrument's structure. Dual capacitive displacement sensors with 10 nm resolution were applied to measure the error motion of the spindle using a double-probe error separation method. This enabled the separation of the spindle's error motion from the measurement results of the test bearing, which were measured using two orthogonal laser displacement sensors with 5 nm resolution. Finally, a Lissajous figure was used to evaluate the non-repetitive run-out (NRRO) of the bearing at different axial forces and speeds. The measurement results at various axial loadings and speeds showed that the standard deviations of the measurements' repeatability and accuracy were less than 1% and 2%, respectively. Future studies will analyze the relationship between geometrical errors and NRRO, such as ball diameter differences and geometrical errors in the grooves of the rings

  12. Design and application of location error teaching aids in measuring and visualization

    Directory of Open Access Journals (Sweden)

    Yu Fengning

    2015-01-01

    Full Text Available As an abstract concept, ‘location error’ is considered an important element that is difficult to understand and apply. This paper designs and develops an instrument to measure location error. Location error is affected by the positioning method and the choice of reference, so the positioning element is selected by rotating the disk. The tiny movement is transferred by a grating ruler and, through PLC programming, the error is shown on a text display, which also helps students understand the positioning principle and related concepts of location error. After comparing measurement results with theoretical calculations and analyzing the measurement accuracy, the paper concludes that the teaching aid is reliable and well worth promoting.

  13. LESSER KNOWN MUSICAL INSTRUMENTS IN KOSOVO

    Directory of Open Access Journals (Sweden)

    Rešad Fazli

    2012-04-01

    Full Text Available In this paper the author presents the instruments that originated in this region, as well as instruments brought from other regions that became so deeply embedded in the tradition and culture of the local people that they are felt to be their own. Some of these instruments are preserved only in this region and are no longer used in the areas they came from. The paper also covers instruments that are rarely used or have been completely lost in this region.

  14. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Science.gov (United States)

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on the CMM as a rigid body, and this requires a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement, with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052

  15. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    Full Text Available The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on the CMM as a rigid body, and this requires a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement, with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.
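
    One plausible reading of "vectorial composition of length error by axis" is sketched below: each axis contributes a length error proportional to the displacement along that axis, and the contributions are composed as a vector norm. The coefficients are illustrative assumptions, not values from the paper.

```python
import math

def length_error(displacement_mm, axis_error_per_mm):
    """Compose a length measurement error from per-axis coefficients:
    each axis's length error per mm of travel is scaled by the
    displacement along that axis, then the components are combined
    as a vector norm."""
    comps = [d * e for d, e in zip(displacement_mm, axis_error_per_mm)]
    return math.sqrt(sum(c * c for c in comps))

# Hypothetical 500 mm diagonal move in the XY plane with different
# per-axis error coefficients (mm of error per mm of travel).
err = length_error((300.0, 400.0, 0.0), (1e-5, 2e-5, 1.5e-5))
```

    Here the X axis contributes 0.003 mm and the Y axis 0.008 mm, composing to roughly 0.0085 mm for the measured length.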

  16. Measurement-based analysis of error latency. [in computer operating system

    Science.gov (United States)

    Chillarege, Ram; Iyer, Ravishankar K.

    1987-01-01

    This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.
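
    The cumulative detection percentages reported above (70%, 82%, and 91% within one, two, and three days) are simple empirical fractions of latency samples. A sketch with invented latency data:

```python
def detected_within(latencies_hours, horizons_days):
    """Fraction of errors whose latency falls within each horizon."""
    n = len(latencies_hours)
    return [sum(1 for t in latencies_hours if t <= h * 24) / n
            for h in horizons_days]

# Hypothetical error latencies (hours) and one/two/three-day horizons.
fractions = detected_within([1, 5, 20, 30, 50, 80], [1, 2, 3])
```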

  17. Assessing the local windfield with instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Zambrano, T.G.

    1980-10-01

    This report concerns the development and testing of a technique for the initial screening and evaluation of potential sites for wind-energy conversion systems (WECS). The methodology was developed through a realistic siting exercise, which involved measurements of surface winds and winds aloft using a relatively new instrument system, the Tethered Aerodynamic Lifting Anemometer (TALA) kite; notation of ecological factors such as vegetation flagging, soil erosion and site exposure; and verification of the area best suited for wind-energy development by establishing and maintaining a wind-monitoring network. The siting exercise was carried out in an approximately 100-square-mile region of the Tehachapi Mountains of Southern California. The results showed that a comprehensive site survey involving field measurements, an ecological survey, and wind monitoring can be an effective tool for preliminary evaluation of WECS sites.

  18. Analysis of Students' Error in Learning of Quadratic Equations

    Science.gov (United States)

    Zakaria, Effandi; Ibrahim; Maat, Siti Mistima

    2010-01-01

    The purpose of the study was to determine students' errors in learning quadratic equations. The samples were 30 Form Three students from a secondary school in Jambi, Indonesia. A diagnostic test was used as the instrument of this study; it included three components: factorization, completing the square and the quadratic formula. Diagnostic interview…

  19. Instrument evaluation no. 10. Scanray radiation meter type 751

    CERN Document Server

    Burgess, P H; White, D F

    1978-01-01

    The various radiations encountered in radiological protection cover a wide range of energies, and radiation measurements have to be carried out under an equally broad spectrum of environmental conditions. This report is one of a series intended to give information on the performance characteristics of radiological protection instruments, to assist in the selection of appropriate instruments for a given purpose, to help interpret the results obtained with such instruments, and, in particular, to identify the likely sources and magnitude of errors that might be associated with measurements in the field. The radiation, electrical and environmental characteristics of radiation protection instruments are considered together with those aspects of the construction which make an instrument convenient for routine use. To provide consistent criteria for instrument performance, the range of tests performed on any particular class of instrument, the test methods and the criteria of acceptable performance are based broadly on the a...

  20. Human error theory: relevance to nurse management.

    Science.gov (United States)

    Armitage, Gerry

    2009-03-01

    Describe, discuss and critically appraise human error theory and consider its relevance for nurse managers. Healthcare errors are a persistent threat to patient safety. Effective risk management and clinical governance depend on understanding the nature of error. This paper draws upon a wide literature from published works, largely from the field of cognitive psychology and human factors. Although the content of this paper is pertinent to any healthcare professional, it is written primarily for nurse managers. Error is inevitable. Causation is often attributed to individuals, yet causation in complex environments such as healthcare is predominantly multi-factorial. Individual performance is affected by the tendency to develop prepacked solutions and attention deficits, which can in turn be related to local conditions and systems or latent failures. Blame is often inappropriate. Defences should be constructed in the light of these considerations and to promote error wisdom and organizational resilience. Managing and learning from error is seen as a priority in the British National Health Service (NHS); this can be better achieved with an understanding of the roots, nature and consequences of error. Such an understanding can provide a helpful framework for a range of risk management activities.

  1. Micro-computed tomographic comparison of nickel-titanium rotary versus traditional instruments in C-shaped root canal system.

    Science.gov (United States)

    Yin, Xingzhe; Cheung, Gary Shun-Pan; Zhang, Chengfei; Masuda, Yoshiko Murakami; Kimura, Yuichi; Matsumoto, Koukichi

    2010-04-01

    The purpose of this study was to assess the efficacy of instrumentation of C-shaped canals with the ProTaper rotary system and traditional instruments by using micro-computed tomography (micro-CT). Twenty-four mandibular molars with C-shaped canals were selected in pairs and sorted equally into 2 groups, assigned for instrumentation by the ProTaper rotary system (ProTaper group) or by K-files and Gates-Glidden burs (Hand Instrument group). Three-dimensional images were constructed by micro-CT. The volume of dentin removed, uninstrumented canal area, time taken for instrumentation, and iatrogenic errors of instrumentation were investigated. The Hand Instrument group showed a greater volume of dentin removal and left less uninstrumented canal area than the ProTaper group. Instrumentation time was shorter for the ProTaper group than for the Hand Instrument group, and procedural errors were more frequent in the Hand Instrument group than in the ProTaper group. It was concluded that the ProTaper rotary system maintained the canal curvature with speediness and few procedural errors, whereas traditional instrumentation can clean more canal surface. Copyright (c) 2010 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  2. Psychometric properties of the national eye institute refractive error correction quality-of-life questionnaire among Iranian patients

    Directory of Open Access Journals (Sweden)

    Amir H Pakpour

    2013-01-01

    Conclusions: The Iranian version of the NEI-RQL-42 is a valid and reliable instrument to assess refractive error correction quality-of-life in Iranian patients. Moreover, this questionnaire can be used to evaluate the effectiveness of interventions in patients with refractive errors.

  3. Mass measurement errors of Fourier-transform mass spectrometry (FTMS): distribution, recalibration, and application.

    Science.gov (United States)

    Zhang, Jiyang; Ma, Jie; Dou, Lei; Wu, Songfeng; Qian, Xiaohong; Xie, Hongwei; Zhu, Yunping; He, Fuchu

    2009-02-01

    The hybrid linear trap quadrupole Fourier-transform (LTQ-FT) ion cyclotron resonance mass spectrometer, an instrument with high accuracy and resolution, is widely used in the identification and quantification of peptides and proteins. However, time-dependent errors in the system may lead to deterioration of the accuracy of these instruments, negatively influencing the determination of the mass error tolerance (MET) in database searches. Here, a comprehensive discussion of LTQ/FT precursor ion mass error is provided. On the basis of an investigation of the mass error distribution, we propose an improved recalibration formula and introduce a new tool, FTDR (Fourier-transform data recalibration), that employs a graphic user interface (GUI) for automatic calibration. It was found that the calibration could adjust the mass error distribution to more closely approximate a normal distribution and reduce the standard deviation (SD). Consequently, we present a new strategy, LDSF (Large MET database search and small MET filtration), for database search MET specification and validation of database search results. As the name implies, a large-MET database search is conducted and the search results are then filtered using the statistical MET estimated from high-confidence results. By applying this strategy to a standard protein data set and a complex data set, we demonstrate that LDSF can significantly improve the sensitivity of the result validation procedure.

  4. A digital, constant-frequency pulsed phase-locked-loop instrument for real-time, absolute ultrasonic phase measurements

    Science.gov (United States)

    Haldren, H. A.; Perey, D. F.; Yost, W. T.; Cramer, K. E.; Gupta, M. C.

    2018-05-01

    A digitally controlled instrument for conducting single-frequency and swept-frequency ultrasonic phase measurements has been developed based on a constant-frequency pulsed phase-locked-loop (CFPPLL) design. This instrument uses a pair of direct digital synthesizers to generate an ultrasonically transceived tone-burst and an internal reference wave for phase comparison. Real-time, constant-frequency phase tracking in an interrogated specimen is possible with a resolution of 0.000 38 rad (0.022°), and swept-frequency phase measurements can be obtained. Using phase measurements, an absolute thickness in borosilicate glass is presented to show the instrument's efficacy, and these results are compared to conventional ultrasonic pulse-echo time-of-flight (ToF) measurements. The newly developed instrument predicted the thickness with a mean error of -0.04 μm and a standard deviation of error of 1.35 μm. Additionally, the CFPPLL instrument shows a lower measured phase error in the absence of changing temperature and couplant thickness than high-resolution cross-correlation ToF measurements at a similar signal-to-noise ratio. By showing higher accuracy and precision than conventional pulse-echo ToF measurements and lower phase errors than cross-correlation ToF measurements, the new digitally controlled CFPPLL instrument provides high-resolution absolute ultrasonic velocity or path-length measurements in solids or liquids, as well as tracking of material property changes with high sensitivity. The ability to obtain absolute phase measurements allows for many new applications not possible with previous ultrasonic pulsed phase-locked loop instruments. In addition to improved resolution, swept-frequency phase measurements add useful capability in measuring properties of layered structures, such as bonded joints, or materials which exhibit non-linear frequency-dependent behavior, such as dispersive media.
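
    The phase-to-thickness relation behind such measurements can be sketched as follows. The pulse-echo relation and the velocity value are textbook assumptions for illustration, not taken from the instrument's internals:

    ```python
    import math

    def thickness_from_phase(phase_rad, frequency_hz, velocity_m_s):
        """One-way thickness from the unwrapped round-trip phase.

        In pulse-echo the tone burst travels a path of 2*d, so the
        accumulated phase is phi = 2*pi*f*(2*d/v); inverting gives
        d = phi*v / (4*pi*f).
        """
        return phase_rad * velocity_m_s / (4.0 * math.pi * frequency_hz)

    # Round-trip check with assumed values (velocity roughly that of glass).
    v, f, d = 5500.0, 5.0e6, 3.0e-3          # m/s, Hz, m
    phi = 4.0 * math.pi * f * d / v          # phase the instrument would track
    recovered = thickness_from_phase(phi, f, v)
    ```

    At these assumed values, the quoted 0.000 38 rad phase resolution would correspond to a thickness resolution on the order of tens of nanometres, which is consistent with the micrometre-level errors reported once couplant and temperature effects are included.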

  5. The Error Reporting in the ATLAS TDAQ system

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Papaevgeniou, L

    2014-01-01

    The ATLAS Error Reporting feature, which is used in the TDAQ environment, provides a service that allows experts and shift crew to track and address errors relating to the data taking components and applications. This service, called the Error Reporting Service(ERS), gives software applications the opportunity to collect and send comprehensive data about errors, happening at run-time, to a place where it can be intercepted in real-time by any other system component. Other ATLAS online control and monitoring tools use the Error Reporting service as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment, in which the online applications are operating. When applications send information to ERS, depending on the actual configuration the information may end up in a local file, in a database, in distributed middle-ware, which can transport it to an expert system or dis...

  6. The Error Reporting in the ATLAS TDAQ System

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Papaevgeniou, L

    2015-01-01

    The ATLAS Error Reporting feature, which is used in the TDAQ environment, provides a service that allows experts and shift crew to track and address errors relating to the data taking components and applications. This service, called the Error Reporting Service(ERS), gives software applications the opportunity to collect and send comprehensive data about errors, happening at run-time, to a place where it can be intercepted in real-time by any other system component. Other ATLAS online control and monitoring tools use the Error Reporting service as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment, in which the online applications are operating. When applications send information to ERS, depending on the actual configuration the information may end up in a local file, in a database, in distributed middle-ware, which can transport it to an expert system or dis...

  7. Accurately Localize and Recognize Instruments with Substation Inspection Robot in Complex Environments

    Directory of Open Access Journals (Sweden)

    Hui Song

    2014-07-01

    This paper designs and develops an automatic detection system for the substation environment, where complex and varied inspection objects exist. The inspection robot is able to locate and identify the objects quickly using a visual servo control system. This paper focuses on the analysis of a fast lock-on and recognition method for substation instruments based on an improved Adaboost algorithm. The robot adjusts its position in real time to the best viewpoint and best resolution for the instrument. The dial and pointer of the instruments are detected with an improved Hough algorithm, and the angle of the pointer is converted to the corresponding reading. The experimental results indicate that the inspection robot can locate and identify the substation instruments quickly, and has a wide range of practical applications.
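
    The final step the abstract describes — converting the detected pointer angle into a reading — is a linear mapping between the dial's end stops. A minimal sketch with hypothetical dial parameters (the angles and scale are invented for illustration):

    ```python
    def pointer_reading(angle_deg, angle_min, angle_max, scale_min, scale_max):
        """Linearly map a detected pointer angle onto the dial scale.

        angle_min / angle_max : pointer angles at the two scale end stops
        scale_min / scale_max : readings printed at those end stops
        """
        frac = (angle_deg - angle_min) / (angle_max - angle_min)
        return scale_min + frac * (scale_max - scale_min)

    # Hypothetical 0-10 bar gauge whose needle sweeps from 45 to 315 degrees;
    # a Hough-detected needle at 135 degrees is one third of the way up scale.
    reading = pointer_reading(135.0, 45.0, 315.0, 0.0, 10.0)
    ```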

  8. Design and validation of portable optical instrument for crop diagnose

    Science.gov (United States)

    Sun, Gang; Zheng, Wengang; Huang, Wengjiang; Wan, Huawei; Liu, Liangyun

    2005-12-01

    In this paper, a portable diagnostic instrument was designed and tested that can measure the normalized difference vegetation index (NDVI) and structure-insensitive pigment index (SIPI) of a crop canopy in the field. The instrument has a valid survey area of 1 m × 1 m when mounted 1.3 m above the ground. Crop growth condition can be assessed from NDVI and SIPI values, so obtaining these values is very important for crop management. The instrument uses sunlight as its light source. Six photoelectric detectors in the red, blue and near-infrared bands detect incident sunlight and the light reflected from the crop canopy. The instrument comprises a photoelectric detector module, a signal-processing and A/D conversion module, a data storage and transmission module, and a human-machine interface module. The detector, which measures the spectrum in the selected bands, is the core of the instrument. The microprocessor calculates the NDVI and SIPI values from the A/D readings; the values can be displayed on the instrument's LCD, stored in its flash memory, and uploaded to a PC through an RS232 serial interface. The prototype was tested in a crop field at different view directions. This paper also describes the calibration method; the results showed that the average measurement error of the instrument was 5.25% for SIPI and 6.40% for NDVI in vegetation-covered regions. This demonstrates an on-site, non-sampling mode of crop growth monitoring, with the instrument fixed to an agricultural machine travelling in the field.
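
    The two indices the instrument reports have standard published definitions in terms of band reflectances (reflected over incident intensity per band, which is what the paired detectors provide). A sketch assuming the usual NDVI and SIPI forms, not any instrument-specific calibration:

    ```python
    def ndvi(nir, red):
        """Normalized difference vegetation index from band reflectances."""
        return (nir - red) / (nir + red)

    def sipi(nir, red, blue):
        """Structure-insensitive pigment index (common published form)."""
        return (nir - blue) / (nir - red)

    # Hypothetical reflectances for a healthy canopy: strong NIR, low red.
    ndvi_value = ndvi(0.50, 0.10)
    sipi_value = sipi(0.50, 0.10, 0.05)
    ```

    High NDVI indicates dense green vegetation; SIPI near 1 indicates a low carotenoid-to-chlorophyll ratio, which is why the two together support growth diagnosis.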

  9. An improved estimator for the hydration of fat-free mass from in vivo measurements subject to additive technical errors

    International Nuclear Information System (INIS)

    Kinnamon, Daniel D; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L; Lipsitz, Stuart R

    2010-01-01

    The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not.
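
    The contrast between the mean-of-ratios estimator and an instrumental variables estimator can be reproduced in a small simulation. The instrument used here — a second, independently erroneous FFM measurement — is a textbook choice for illustration and not necessarily the one used in the paper; all distributions are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, hf_true = 100_000, 0.73                     # true hydration fraction
    ffm = rng.normal(50.0, 8.0, n)                 # true fat-free mass (kg)
    x = ffm + rng.normal(0.0, 4.0, n)              # measured FFM, additive error
    y = hf_true * ffm + rng.normal(0.0, 4.0, n)    # measured TBW, additive error
    z = ffm + rng.normal(0.0, 4.0, n)              # instrument: replicate FFM measurement

    mean_of_ratios = np.mean(y / x)                # conventional estimator (biased here)
    iv_estimate = np.sum(z * y) / np.sum(z * x)    # instrumental variables estimator
    ```

    The IV estimator is consistent because the instrument correlates with true FFM but with neither measurement error, so the error terms vanish in expectation from both numerator and denominator; E[y/x] has no such cancellation when x carries additive error.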

  10. Wireless Sensor Network Localization Research

    OpenAIRE

    Liang Xin

    2014-01-01

    The DV-Hop algorithm is one of the important range-free localization algorithms. It performs well in isotropic-density sensor networks, but produces larger location errors in randomly distributed networks. Following the localization principle of the DV-Hop algorithm, this paper improves the estimation of the average single-hop distance by using a least-square-error criterion, and revises the estimated distance between the unknown node and the anchor node with a compensation coefficient considering…
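
    The classic DV-Hop pipeline the paper builds on — per-anchor average hop size, hop-count-times-hop-size distance estimates, then least-squares multilateration — can be sketched as follows. This is the baseline algorithm, not the paper's improved variant:

    ```python
    import numpy as np

    def avg_hop_sizes(anchors, hop_counts):
        """Classic DV-Hop: each anchor's average single-hop distance is the
        sum of its distances to the other anchors divided by the sum of the
        corresponding hop counts."""
        a = np.asarray(anchors, dtype=float)
        h = np.asarray(hop_counts, dtype=float)
        sizes = []
        for i in range(len(a)):
            mask = np.arange(len(a)) != i
            dists = np.linalg.norm(a[mask] - a[i], axis=1)
            sizes.append(dists.sum() / h[i, mask].sum())
        return np.array(sizes)

    def multilaterate(anchors, dists):
        """Least-squares position from anchor positions and estimated
        distances, linearized against the first anchor."""
        a = np.asarray(anchors, dtype=float)
        d = np.asarray(dists, dtype=float)
        A = 2.0 * (a[1:] - a[0])
        b = (a[1:] ** 2).sum(axis=1) - (a[0] ** 2).sum() - d[1:] ** 2 + d[0] ** 2
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Three anchors and exact distances to a node at (3, 4): the linear
    # system recovers the position; hop-size errors would perturb `dists`.
    anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
    pos = multilaterate(anchors, [5.0, 65.0 ** 0.5, 45.0 ** 0.5])
    ```

    The paper's improvement targets the weakest link in this pipeline: the average hop-size estimate, which is what degrades in randomly distributed networks.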

  11. Instruments for implementation of Local Agenda 21 in Borsod-Abauj-Zemplen County in Hungary

    Energy Technology Data Exchange (ETDEWEB)

    Avar, B.; Domaradzka, M.; Goettinger, P.; Loncsar, K.; Majlathova, L.; Monteiro, S.C.; Solti, E. [Environmental Management and Policy Workgroup, Kossuth Lajos University, Debrecen (Hungary)

    1999-04-01

    The aim of this project is to contribute to the Local Agenda 21 concept in regional development. The overall objective of the report is to build a bridge between actors playing their roles at different levels, through vertical and horizontal integration of European Union and national development strategies. It can also be described as drawing a many-sided picture and giving recommendations on implementing the principles of Local Agenda 21 in regional development processes in Borsod-Abauj-Zemplen County in Hungary. To reach this goal, the report aims to serve as a tool for better understanding and to provide better information and communication between the different levels of actors in this field. To fulfil the task described above, the main objectives of the research are the following: the report gives an overview and evaluation of the present state of regional development activities in terms of county and sub-regional strategies, programmes and their implementation. Identification of gaps in the implementation phase serves as a basis for better understanding the reasons behind them. On the basis of conclusions from the case studies surveyed, a set of tools and instruments is recommended in order to find possible solutions for the gaps and to improve the quality of implementation not only at the local level but also for those responsible at national and county level. The report aims to define ways of improving regional strategies to provide better interaction, mainly between the different levels of the playing field. For the county level, it elaborates ways of bringing real needs and possibilities closer to the local level, especially in the fields of communication and co-operation. For the local level, building the potential to fulfil its needs through better use of resources and improvement of its strategic capacity is a first priority. In order to achieve the improvement of quality of life, economic situation, chance of employment…

  12. PRINCÍPIO DA PARTICIPAÇÃO E INSTRUMENTOS DE DEMOCRACIA PARTICIPATIVA EM ÂMBITO LOCAL / PARTICIPATION PRINCIPLE AND LOCAL PARTICIPATORY DEMOCRACY INSTRUMENTS

    Directory of Open Access Journals (Sweden)

    Janaína Rigo Santin

    2017-04-01

    Brazil’s 1988 Constitution created the legal basis for the development of some of the world’s most progressive democratic practices, consecrated as fundamental rights. The Brazilian Democratic State of Law must be based on popular sovereignty, in which the people are the holders of political power and are sometimes also called to exercise it. However, despite this formal recognition, there is, in practical terms, great resistance from both civil society and political society to turning participation into an effective practice in Brazilian public management; this is the problem addressed by this research. Thus, based on the dialectical method, the objective is to analyze the local instruments of participatory democracy in Brazil, especially the institutes of municipal democratic management, participatory budgetary management, public hearings and municipal management councils. There is a need to create, through participation, a sense of belonging in every citizen to their local space. This can be done with the aim of optimizing the application of public money and increasing social control over historical political practices such as clientelism, corruption and the diversion of funds for private purposes. These practices are still present in the country and undermine the effectiveness of the public machine and the legitimacy of institutions.

  13. Nurses' attitude and intention of medication administration error reporting.

    Science.gov (United States)

    Hung, Chang-Chiao; Chu, Tsui-Ping; Lee, Bih-O; Hsiao, Chia-Chi

    2016-02-01

    The aims of this study were to explore the effects of nurses' attitudes and intentions regarding medication administration error reporting on actual reporting behaviours. Underreporting of medication errors is still a common occurrence. Whether attitudes and intentions towards medication administration error reporting connect to actual reporting behaviours remains unclear. This study used a cross-sectional design with self-administered questionnaires, and the theory of planned behaviour was used as the framework for this study. A total of 596 staff nurses who worked in general wards and intensive care units in a hospital were invited to participate in this study. The researchers used instruments measuring nurses' attitudes, nurse managers' and co-workers' attitudes, report control, and nurses' intention to predict nurses' actual reporting behaviours. Data were collected from September-November 2013. Path analyses were used to examine the hypothesized model. Of the 596 nurses invited to participate, 548 (92%) completed and returned a valid questionnaire. The findings indicated that nurse managers' and co-workers' attitudes are predictors of nurses' attitudes towards medication administration error reporting. Nurses' attitudes also influenced their intention to report medication administration errors; however, no connection was found between intention and actual reporting behaviour. The findings reflect links among colleague perspectives, nurses' attitudes, and intention to report medication administration errors. The researchers suggest that hospitals should increase nurses' awareness and recognition of error occurrence. Regardless of nurse managers' and co-workers' attitudes towards medication administration error reporting, nurses are likely to report medication administration errors if they detect them. Management of medication administration errors should focus on increasing nurses' awareness and recognition of error occurrence.
© 2015 John Wiley & Sons Ltd.

  14. Technical errors in MR arthrography

    International Nuclear Information System (INIS)

    Hodler, Juerg

    2008-01-01

    This article discusses potential technical problems of MR arthrography. It starts with contraindications, followed by problems relating to injection technique, contrast material and MR imaging technique. For some of the aspects discussed, there is only little published evidence. Therefore, the article is based on the personal experience of the author and on local standards of procedures. Such standards, as well as medico-legal considerations, may vary from country to country. Contraindications for MR arthrography include pre-existing infection, reflex sympathetic dystrophy and possibly bleeding disorders, avascular necrosis and known allergy to contrast media. Errors in injection technique may lead to extra-articular collection of contrast agent or to contrast agent leaking from the joint space, which may cause diagnostic difficulties. Incorrect concentrations of contrast material influence image quality and may also lead to non-diagnostic examinations. Errors relating to MR imaging include delays between injection and imaging and inadequate choice of sequences. Potential solutions to the various possible errors are presented. (orig.)

  15. Technical errors in MR arthrography

    Energy Technology Data Exchange (ETDEWEB)

    Hodler, Juerg [Orthopaedic University Hospital of Balgrist, Radiology, Zurich (Switzerland)

    2008-01-15

    This article discusses potential technical problems of MR arthrography. It starts with contraindications, followed by problems relating to injection technique, contrast material and MR imaging technique. For some of the aspects discussed, there is only little published evidence. Therefore, the article is based on the personal experience of the author and on local standards of procedures. Such standards, as well as medico-legal considerations, may vary from country to country. Contraindications for MR arthrography include pre-existing infection, reflex sympathetic dystrophy and possibly bleeding disorders, avascular necrosis and known allergy to contrast media. Errors in injection technique may lead to extra-articular collection of contrast agent or to contrast agent leaking from the joint space, which may cause diagnostic difficulties. Incorrect concentrations of contrast material influence image quality and may also lead to non-diagnostic examinations. Errors relating to MR imaging include delays between injection and imaging and inadequate choice of sequences. Potential solutions to the various possible errors are presented. (orig.)

  16. Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation

    Directory of Open Access Journals (Sweden)

    Laura Ruotsalainen

    2018-02-01

    The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease position accuracy; sophisticated error modelling and well-implemented integration algorithms are therefore key to a viable result. Algorithms used for multi-sensor fusion have traditionally been different versions of Kalman filters. However, Kalman filters are based on the assumptions that the state propagation and measurement models are linear with additive Gaussian noise. Neither assumption is correct for tactical applications, especially for dismounted soldiers or rescue personnel. Our approach is to use particle filtering (PF), which is a sophisticated option for integrating measurements emerging from pedestrian motion having non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and vision-based heading and translation measurements, so as to include the correct error probability density functions (pdfs) in the particle filter implementation. Model fitting is then used to verify the pdfs of the measurement errors. Based on the deduced error models of the measurements, a particle filtering method is developed to fuse all this information, where the weight of each particle is computed from the specific models derived. The performance of the developed method is…
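
    A minimal particle filter along these lines — non-Gaussian motion noise, likelihood weighting, resampling — is sketched below on a 1-D toy problem. All noise models and magnitudes are invented for illustration; the paper fits its pdfs to real sensor data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_particles, steps = 4000, 30
    true_x, step_u, meas_sd = 0.0, 1.0, 0.5
    particles = rng.normal(0.0, 1.0, n_particles)

    for _ in range(steps):
        true_x += step_u
        z = true_x + rng.normal(0.0, meas_sd)            # noisy position fix
        # Propagate with heavy-tailed Laplace motion noise (non-Gaussian),
        # the situation where Kalman filter assumptions break down.
        particles = particles + step_u + rng.laplace(0.0, 0.3, n_particles)
        # Weight each particle by the measurement likelihood.
        w = np.exp(-0.5 * ((z - particles) / meas_sd) ** 2)
        w /= w.sum()
        # Systematic resampling keeps the particle set from degenerating.
        u0 = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.minimum(np.searchsorted(np.cumsum(w), u0), n_particles - 1)
        particles = particles[idx]

    estimate = particles.mean()   # fused position estimate after all steps
    ```

    Because the weights come directly from the measurement pdf, swapping in an empirically fitted, non-Gaussian error model only changes the likelihood line, which is the flexibility the paper exploits.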

  17. Refractive error assessment: influence of different optical elements and current limits of biometric techniques.

    Science.gov (United States)

    Ribeiro, Filomena; Castanheira-Dinis, Antonio; Dias, Joao Mendanha

    2013-03-01

    To identify and quantify sources of error on refractive assessment using exact ray tracing. The Liou-Brennan eye model was used as a starting point and its parameters were varied individually within a physiological range. The contribution of each parameter to refractive error was assessed using linear regression curve fits and Gaussian error propagation analysis. A Monte Carlo analysis quantified the limits of refractive assessment given by current biometric measurements. Vitreous and aqueous refractive indices are the elements that influence refractive error the most, with a 1% change of each parameter contributing to a refractive error variation of +1.60 and -1.30 diopters (D), respectively. In the phakic eye, axial length measurements taken by ultrasound (vitreous chamber depth, lens thickness, and anterior chamber depth [ACD]) were the most sensitive to biometric errors, with a contribution to the refractive error of 62.7%, 14.2%, and 10.7%, respectively. In the pseudophakic eye, vitreous chamber depth showed the highest contribution at 53.7%, followed by postoperative ACD at 35.7%. When optic measurements were considered, postoperative ACD was the most important contributor, followed by anterior corneal surface and its asphericity. A Monte Carlo simulation showed that current limits of refractive assessment are 0.26 and 0.28 D for the phakic and pseudophakic eye, respectively. The most relevant optical elements either do not have available measurement instruments or the existing instruments still need to improve their accuracy. Ray tracing can be used as an optical assessment technique, and may be the correct path for future personalized refractive assessment. Copyright 2013, SLACK Incorporated.

  18. Intelligent type sodium instrumentations for LMFBR

    International Nuclear Information System (INIS)

    Chen Daolong

    1996-07-01

    The construction and performance of several newly developed intelligent sodium instruments are described. The graduation characteristic equations for the corresponding transducers, using the medium temperature as a parameter, are given. These intelligent sodium instruments exhibit good linearity. Accurate measurement data for the sodium process parameters (flow rate, pressure and level) can be obtained by means of their on-line compensation of the temperature effect. Moreover, these instruments provide self-inspection, electric shutoff protection, setting of full scale, setting of alarm limits (two upper-limit and two lower-limit alarms), a thermocouple-break alarm, a mutually isolated 0-10 V direct-current analogue output, a CENTRONICS-standard digital output, and an alarm relay contact output. These microprocessor-based functions make the intelligent sodium instruments particularly suitable for the instrumentation, control and protective systems of an LMFBR. The basic errors of the intelligent sodium flowmeter, immersed sodium flowmeter, sodium manometer and sodium level gauge are ±2%, ±2.3%, ±0.3% and ±1.9% of measuring range, respectively. (9 figs.)

  19. Absorption coefficient instrument for turbid natural waters

    Science.gov (United States)

    Friedman, E.; Cherdak, A.; Poole, L.; Houghton, W.

    1980-01-01

    The paper presents an instrument that directly measures the multispectral absorption coefficient of turbid natural water. Attention is given to the design, which is shown to incorporate methods for the compensation of variation in the internal light source intensity, correction of the spectrally dependent nature of the optical elements, and correction for variation in the background light level. In addition, when used in conjunction with a spectrally matched total attenuation instrument, the spectrally dependent scattering coefficient can also be derived. Finally, it is reported that systematic errors associated with multiple scattering have been estimated using Monte Carlo techniques.

  20. Error modeling for surrogates of dynamical systems using machine learning

    Science.gov (United States)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-12-01

    A machine-learning-based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests, LASSO) to map a large set of inexpensively computed 'error indicators' (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering), and subsequently constructs a 'local' regression model to predict the time-instantaneous error within each identified region of feature space. We consider two uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance, and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations. The reduced-order models used in this work entail application of trajectory piecewise linearization with proper orthogonal decomposition. When the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
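
    The core regression step can be sketched as follows, using ordinary least squares as a simple stand-in for the random forest / LASSO regressors named above; the indicator features and training targets here are synthetic stand-ins for the offline high-fidelity and surrogate runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data (hypothetical): each row holds inexpensively
# computed error indicators produced by the surrogate at one time instance.
n_samples, n_features = 200, 5
indicators = rng.normal(size=(n_samples, n_features))
true_coeffs = np.array([0.5, -1.0, 0.0, 2.0, 0.1])
# "True" surrogate-model error in the QoI, obtained offline by running
# both the high-fidelity and the surrogate model on the training set.
qoi_error = indicators @ true_coeffs + 0.01 * rng.normal(size=n_samples)

# Fit the error model (linear stand-in for random forests / LASSO).
coeffs, *_ = np.linalg.lstsq(indicators, qoi_error, rcond=None)

def predict_error(features):
    """Predicted surrogate-model error from cheap indicators."""
    return features @ coeffs

def corrected_qoi(surrogate_qoi, features):
    """Use (1) from the abstract: correct the surrogate QoI prediction."""
    return surrogate_qoi + predict_error(features)
```

    In the paper's setting a separate local regressor is fit per region of feature space; the sketch above fits a single global model for brevity.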

  1. Compensation of kinematic geometric parameters error and comparative study of accuracy testing for robot

    Science.gov (United States)

    Du, Liang; Shi, Guangming; Guan, Weibin; Zhong, Yuansheng; Li, Jin

    2014-12-01

    Geometric error is the main error source of an industrial robot, and it plays a more significant role than other error sources. A compensation model for kinematic error is proposed in this article. Since many methods can be used to test robot accuracy, the question is how to determine which method is the better one. In this article, two methods are compared: a Laser Tracker System (LTS) and a Three Coordinate Measuring instrument (TCM) are used to test the robot accuracy according to the standard. According to the compensation results, the better method, which improves the robot accuracy noticeably, is identified.

  2. Seismic instrumentation for nuclear power plants

    International Nuclear Information System (INIS)

    Senne Junior, M.

    1983-01-01

    A seismic instrumentation system used in Nuclear Power Plants to monitor the design parameters of systems, structures and components, needed to provide safety to those Plants, against the action of earthquakes is described. The instrumentation described is based on the nuclear standards in force. The minimum amount of sensors and other components used, as well as their general localization, is indicated. The operation of the instrumentation system as a whole and the handling of the recovered data are dealt with accordingly. The various devices used are not covered in detail, except for the accelerometer, which is the seismic instrumentation basic component. (Author) [pt

  3. Measurements of Gun Tube Motion and Muzzle Pointing Error of Main Battle Tanks

    Directory of Open Access Journals (Sweden)

    Peter L. McCall

    2001-01-01

    Full Text Available Beginning in 1990, the US Army Aberdeen Test Center (ATC began testing a prototype cannon mounted in a non-armored turret fitted to an M1A1 Abrams tank chassis. The cannon design incorporated a longer gun tube as a means to increase projectile velocity. A significant increase in projectile impact dispersion was measured early in the test program. Through investigative efforts, the cause of the error was linked to the increased dynamic bending or flexure of the longer tube observed while the vehicle was moving. Research and investigative work was conducted through a collaborative effort with the US Army Research Laboratory, Benet Laboratory, Project Manager – Tank Main Armament Systems, US Army Research and Engineering Center, and Cadillac Gage Textron Inc. New test methods, instrumentation, data analysis procedures, and stabilization control design resulted through this series of investigations into the dynamic tube flexure error source. Through this joint research, improvements in tank fire control design have been developed to improve delivery accuracy. This paper discusses the instrumentation implemented, methods applied, and analysis procedures used to characterize the tube flexure during dynamic tests of a main battle tank and the relationship between gun pointing error and muzzle pointing error.

  4. The Importance of Relying on the Manual: Scoring Error Variance in the WISC-IV Vocabulary Subtest

    Science.gov (United States)

    Erdodi, Laszlo A.; Richard, David C. S.; Hopwood, Christopher

    2009-01-01

    Classical test theory assumes that ability level has no effect on measurement error. Newer test theories, however, argue that the precision of a measurement instrument changes as a function of the examinee's true score. Research has shown that administration errors are common in the Wechsler scales and that subtests requiring subjective scoring…

  5. Towards Sustainable Flow Management: Local Agenda 21 - Conclusions

    DEFF Research Database (Denmark)

    Moss, Timothy; Elle, Morten

    1998-01-01

    Concluding on the case studies of Local Agenda 21 as an instrument of sustainable flow management.

  6. Human errors related to maintenance and modifications

    International Nuclear Information System (INIS)

    Laakso, K.; Pyy, P.; Reiman, L.

    1998-01-01

    about weakness in audits made by the operating organisation and in tests relating to plant operation. The number of plant-specific maintenance records used as input material was high and the findings were discussed thoroughly with the plant maintenance personnel. The results indicated that instrumentation is more prone to human error than the rest of maintenance. Most errors stem from refuelling outage periods and about a half of them were identified during the same outage they were committed. Plant modifications are a significant source of common cause failures. The number of dependent errors could be reduced by improved co-ordination and auditing, post-installation checking, training and start-up testing programmes. (orig.)

  7. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  8. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    Science.gov (United States)

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parameter bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutants setting.

  9. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^(-(dn-1)) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  10. Injecting Errors for Testing Built-In Test Software

    Science.gov (United States)

    Gender, Thomas K.; Chow, James

    2010-01-01

    Two algorithms have been conceived to enable automated, thorough testing of Built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and the data mask specific to the device. This operation yields values not expected by the BIT routine. This algorithm entails very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to users' application programs via commands or callable interfaces and requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, then temporarily replaces the code at those addresses with small test code sequences to inject latent severe errors, then determines whether, as desired, the SUT detects the errors and recovers.
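
    The first algorithm's AND-mask technique can be sketched in a few lines; the mask value, register width, and pass/fail criterion below are hypothetical stand-ins for the device-specific details.

```python
def inject_error(device_word, fault_mask):
    """Simulate a device fault by ANDing the data read from the device
    with a device-specific mask, yielding a value the BIT routine
    does not expect (step 2 of the first algorithm)."""
    return device_word & fault_mask

def bit_check(word, expected):
    """Toy BIT pass/fail criterion: the read-back word must match."""
    return word == expected

expected = 0xFFFF
mask = 0xFF7F            # hypothetical mask clearing bit 7
faulty = inject_error(expected, mask)   # intercepted, masked read-back
```

    Because the mask only ever clears bits, an all-ones mask is a no-op, which is how the permanent instrumentation stays inert when no fault is being injected.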

  11. Seismic instrumentation for nuclear power plants

    International Nuclear Information System (INIS)

    Senne Junior, M.

    1983-07-01

    A seismic instrumentation system used in Nuclear Power Plants to monitor the design parameters of systems, structures and components, needed to provide safety to those plants against the action of earthquakes, is described. The instrumentation is based on the nuclear standards in force. The minimum number of sensors and other components used, as well as their general localization, is indicated. The operation of the instrumentation system as a whole and the handling of the recovered data are dealt with accordingly. The accelerometer is described in detail. (Author) [pt

  12. The VTTVIS line imaging spectrometer - principles, error sources, and calibration

    DEFF Research Database (Denmark)

    Jørgensen, R.N.

    2002-01-01

    work describing the basic principles, potential error sources, and/or adjustment and calibration procedures. This report fulfils the need for such documentation with special focus on the system at KVL. The PGP based system has several severe error sources, which should be removed prior to any analysis......Hyperspectral imaging with a spatial resolution of a few mm2 has proved to have a great potential within crop and weed classification and also within nutrient diagnostics. A commonly used hyperspectral imaging system is based on the Prism-Grating-Prism (PGP) principles produced by Specim Ltd...... in off-axis transmission efficiencies, diffraction efficiencies, and image distortion have a significant impact on the instrument performance. Procedures removing or minimising these systematic error sources are developed and described for the system built at KVL but can be generalised to other PGP

  13. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Radiological error: analysis, standard setting, targeted instruction and teamworking

    International Nuclear Information System (INIS)

    FitzGerald, Richard

    2005-01-01

    Diagnostic radiology does not have objective benchmarks for acceptable levels of missed diagnoses [1]. Until now, data collection of radiological discrepancies has been very time consuming. The culture within the specialty did not encourage it. However, public concern about patient safety is increasing. There have been recent innovations in compiling radiological interpretive discrepancy rates which may facilitate radiological standard setting. However standard setting alone will not optimise radiologists' performance or patient safety. We must use these new techniques in radiological discrepancy detection to stimulate greater knowledge sharing, targeted instruction and teamworking among radiologists. Not all radiological discrepancies are errors. Radiological discrepancy programmes must not be abused as an instrument for discrediting individual radiologists. Discrepancy rates must not be distorted as a weapon in turf battles. Radiological errors may be due to many causes and are often multifactorial. A systems approach to radiological error is required. Meaningful analysis of radiological discrepancies and errors is challenging. Valid standard setting will take time. Meanwhile, we need to develop top-up training, mentoring and rehabilitation programmes. (orig.)

  15. Errors associated with IOLMaster biometry as a function of internal ocular dimensions.

    Science.gov (United States)

    Faria-Ribeiro, Miguel; Lopes-Ferreira, Daniela; López-Gil, Norberto; Jorge, Jorge; González-Méijome, José Manuel

    2014-01-01

    To evaluate the error in the estimation of axial length (AL) with the IOLMaster partial coherence interferometry (PCI) biometer and obtain a correction factor that varies as a function of AL and crystalline lens thickness (LT). Optical simulations were produced for theoretical eyes using Zemax-EE software. Thirty-three combinations including eleven different AL (from 20mm to 30mm in 1mm steps) and three different LT (3.6mm, 4.2mm and 4.8mm) were used. Errors were obtained comparing the AL measured for a constant equivalent refractive index of 1.3549 and for the actual combinations of indices and intra-ocular dimensions of LT and AL in each model eye. In the range from 20mm to 30mm AL and 3.6-4.8mm LT, the instrument measurements yielded an error between -0.043mm and +0.089mm. Regression analyses for the three LT conditions were combined in order to derive a correction factor as a function of the instrument measured AL for each combination of AL and LT in the theoretical eye. The assumption of a single "average" refractive index in the estimation of AL by the IOLMaster PCI biometer only induces very small errors in a wide range of combinations of ocular dimensions. Even so, the accurate estimation of those errors may help to improve accuracy of intra-ocular lens calculations through exact ray tracing, particularly in longer eyes and eyes with thicker or thinner crystalline lenses. Copyright © 2013 Spanish General Council of Optometry. Published by Elsevier España. All rights reserved.
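
    Deriving a correction factor by regression can be sketched as follows. The quoted error range (-0.043 mm to +0.089 mm across 20-30 mm AL) comes from the abstract, but the individual (AL, error) sample points below are invented for illustration of the fitting step, not the study's tabulated values.

```python
import numpy as np

# Hypothetical (measured AL [mm], AL error [mm]) pairs for one lens
# thickness; the real study obtained such pairs from Zemax ray tracing.
al = np.array([20.0, 22.0, 24.0, 26.0, 28.0, 30.0])
err = np.array([-0.043, -0.020, 0.005, 0.032, 0.060, 0.089])

# Linear correction factor: predicted error as a function of measured AL.
slope, intercept = np.polyfit(al, err, 1)

def corrected_axial_length(measured_al):
    """Subtract the regression-predicted error from the instrument reading."""
    return measured_al - (slope * measured_al + intercept)
```

    A full implementation would fit one such regression per LT condition and interpolate between them, as the abstract describes.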

  16. Precision Instrumentation Amplifiers and Read-Out Integrated Circuits

    CERN Document Server

    Wu, Rong; Makinwa, Kofi A A

    2013-01-01

    This book presents innovative solutions in the design of precision instrumentation amplifier and read-out ICs, which can be used to boost millivolt-level signals transmitted by modern sensors, to levels compatible with the input ranges of typical Analog-to-Digital Converters (ADCs).  The discussion includes the theory, design and realization of interface electronics for bridge transducers and thermocouples. It describes the use of power efficient techniques to mitigate low frequency errors, resulting in interface electronics with high accuracy, low noise and low drift. Since this book is mainly about techniques for eliminating low frequency errors, it describes the nature of these errors and the associated dynamic offset cancellation techniques used to mitigate them.  Surveys comprehensively offset cancellation and accuracy improvement techniques applied in precision amplifier designs; Presents techniques in precision circuit design to mitigate low frequency errors in millivolt-level signals transmitted by ...

  17. Applications of Calendar Instruments in Social Surveys: a review

    NARCIS (Netherlands)

    Glasner, T.J.; Vaart, van der W.

    2009-01-01

    Retrospective reports in survey interviews and questionnaires are subject to many types of recall error, which affect completeness, consistency, and dating accuracy. Concerns about this problem have led to the development of so-called calendar instruments, or timeline techniques. These aided recall

  18. Lens design and local minima

    International Nuclear Information System (INIS)

    Brixner, B.

    1981-01-01

    The widespread belief that local minima exist in the least squares lens-design error function is not confirmed by the Los Alamos Scientific Laboratory (LASL) optimization program. LASL finds the optimum-minimum region, which is characterized by small parameter gradients of similar size, small performance improvement per iteration, and many designs that give similar performance. Local minima and unique prescriptions have not been found in many-parameter problems. The reason for these absences is that image errors caused by a change in one parameter can be compensated by changes in the remaining parameters. False local minima have been found, and four cases are discussed

  19. Risk Assessment Stability: A Revalidation Study of the Arizona Risk/Needs Assessment Instrument

    Science.gov (United States)

    Schwalbe, Craig S.

    2009-01-01

    The actuarial method is the gold standard for risk assessment in child welfare, juvenile justice, and criminal justice. It produces risk classifications that are highly predictive and that may be robust to sampling error. This article reports a revalidation study of the Arizona Risk/Needs Assessment instrument, an actuarial instrument for juvenile…

  20. Two-UAV Intersection Localization System Based on the Airborne Optoelectronic Platform.

    Science.gov (United States)

    Bai, Guanbing; Liu, Jinghong; Song, Yueming; Zuo, Yujia

    2017-01-06

    To address the limitation of the existing UAV (unmanned aerial vehicles) photoelectric localization method used for moving objects, this paper proposes an improved two-UAV intersection localization system based on airborne optoelectronic platforms by using the crossed-angle localization method of photoelectric theodolites for reference. This paper introduces the makeup and operating principle of the intersection localization system, creates auxiliary coordinate systems, transforms the LOS (line of sight, from the UAV to the target) vectors into homogeneous coordinates, and establishes a two-UAV intersection localization model. In this paper, the influence of the positional relationship between UAVs and the target on localization accuracy has been studied in detail to obtain an ideal measuring position and the optimal localization position, where the optimal intersection angle is 72.6318°. The result shows that, given the optimal position, the localization root mean square (RMS) error will be 25.0235 m when the target is 5 km away from the UAV baselines. The influence of modified adaptive Kalman filtering on localization results is then analyzed, and an appropriate filtering model is established to reduce the localization RMS error to 15.7983 m. Finally, an outfield experiment was carried out, obtaining the results σ_B = 1.63 × 10^-4 (°), σ_L = 1.35 × 10^-4 (°), σ_H = 15.8 (m), and σ_sum = 27.6 (m), where σ_B is the longitude error, σ_L the latitude error, σ_H the altitude error, and σ_sum the error radius.
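
    The intersection step itself reduces to finding the point closest, in a least-squares sense, to the two LOS rays. A minimal sketch with hypothetical UAV and target geometry (noise-free, so the estimate recovers the target exactly):

```python
import numpy as np

def intersect_lines(points, dirs):
    """Least-squares point closest to a set of 3D lines, each given by a
    point on the line (a UAV position) and a direction (its LOS vector).
    Solves sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) p_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, dirs):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projector orthogonal to the LOS
        A += M
        b += M @ p
    return np.linalg.solve(A, b)

# Hypothetical geometry: two UAVs observing a ground target.
target = np.array([1000.0, 2000.0, 0.0])
uavs = [np.array([0.0, 0.0, 500.0]), np.array([3000.0, 0.0, 500.0])]
los = [target - u for u in uavs]       # noise-free LOS vectors
estimate = intersect_lines(uavs, los)
```

    With noisy LOS measurements the two rays become skew and the solver returns their mutually closest point, which is where the intersection-angle geometry studied in the paper determines the error radius.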

  1. Initial Results of Using Daily CT Localization to Correct Portal Error in Prostate Cancer

    International Nuclear Information System (INIS)

    Lattanzi, Joseph; McNeely, Shawn; Barnes, Scott; Das, Indra; Schultheiss, Timothy E; Hanks, Gerald E.

    1997-01-01

    Purpose: To evaluate the use of daily CT simulation in prostate cancer to correct errors in portal placement and organ motion. Improved localization with this technique should allow the reduction of target margins and facilitate dose escalation in high risk patients while minimizing the risk of normal tissue morbidity. Methods and Materials: Five patients underwent standard CT simulation with the alpha cradle cast, IV contrast, and urethrogram. All were initially treated to 46 Gy in a four field conformal technique which included the prostate, seminal vesicles and pelvic lymph nodes (GTV1). The prostate or prostate and seminal vesicles (GTV2) then received 56 Gy with a 1.0 cm margin to the PTV. At 50 Gy a second CT simulation was performed with IV contrast, urethrogram and the alpha cradle secured to a rigid sliding board. The prostate was contoured, a new isocenter generated, and surface markers placed. Prostate only treatment portals for the final conedown (GTV3) were created with 0.25 cm isodose margins to the PTV. The final six fractions in 2 patients with favorable disease and eight fractions in 3 patients with unfavorable disease were delivered using the daily CT technique. On each treatment day the patient was placed in his cast on the sliding board and a CT scan performed. The daily isocenter was calculated in the A/P and lateral dimension and compared to the 50 Gy CT simulation isocenter. Couch and surface marker shifts were calculated to produce perfect portal alignment. To maintain positioning, the patient was transferred to a gurney while on the sliding board in his cast, transported to the treatment room and then transferred to the treatment couch. The patient was then treated to the corrected isocenter. Portal films and real time images were obtained for each portal. Results: Utilizing CT-CT image registration (fusion) of the daily and 50 Gy baseline CT scans the isocenter changes were quantified to reflect the contribution of positional

  2. A New Automated Instrument Calibration Facility at the Savannah River Site

    International Nuclear Information System (INIS)

    Polz, E.; Rushton, R.O.; Wilkie, W.H.; Hancock, R.C.

    1998-01-01

    The Health Physics Instrument Calibration Facility at the Savannah River Site in Aiken, SC was expressly designed and built to calibrate portable radiation survey instruments. The facility incorporates recent advances in automation technology, building layout and construction, and computer software to improve the calibration process. Nine new calibration systems automate instrument calibration and data collection. The building is laid out so that instruments are moved from one area to another in a logical, efficient manner. New software and hardware integrate all functions such as shipping/receiving, work flow, calibration, testing, and report generation. Benefits include a streamlined and integrated program, improved efficiency, reduced errors, and better accuracy

  3. Computable Error Estimates for Finite Element Approximations of Elliptic Partial Differential Equations with Rough Stochastic Data

    KAUST Repository

    Hall, Eric Joseph; Hoel, Hå kon; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2016-01-01

    posteriori error estimates fail to capture. We propose goal-oriented estimates, based on local error indicators, for the pathwise Galerkin and expected quadrature errors committed in standard, continuous, piecewise linear finite element approximations

  4. Injecting Artificial Memory Errors Into a Running Computer Program

    Science.gov (United States)

    Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.

    2008-01-01

    Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of effect of the SEU on a floating-point value or other program variable.
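
    The fault-probability mode of operation can be sketched in a few lines. This stand-alone sketch flips bits in a plain byte buffer and logs each upset, rather than hooking a running process through Valgrind as BITFLIPS does.

```python
import random

def inject_seus(memory: bytearray, fault_prob: float, seed=None):
    """Flip each bit of `memory` independently with probability fault_prob,
    mimicking SEU injection at a user-specified fault probability.
    Returns a log of (byte_index, bit_index) for every injected upset."""
    rng = random.Random(seed)
    log = []
    for i in range(len(memory)):
        for bit in range(8):
            if rng.random() < fault_prob:
                memory[i] ^= 1 << bit    # inject the bitflip
                log.append((i, bit))
    return log

mem = bytearray(b"\x00" * 1024)
log = inject_seus(mem, fault_prob=0.01, seed=42)
```

    The fault-rate mode (SEUs per byte per second) follows from the same loop by converting rate times exposure time into a per-bit probability.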

  5. Lane Level Localization; Using Images and HD Maps to Mitigate the Lateral Error

    Science.gov (United States)

    Hosseinyalamdary, S.; Peter, M.

    2017-05-01

    In urban canyon where the GNSS signals are blocked by buildings, the accuracy of measured position significantly deteriorates. GIS databases have been frequently utilized to improve the accuracy of measured position using map matching approaches. In map matching, the measured position is projected to the road links (centerlines) in this approach and the lateral error of measured position is reduced. By the advancement in data acquisition approaches, high definition maps which contain extra information, such as road lanes, are generated. These road lanes can be utilized to mitigate the positional error and improve the accuracy in position. In this paper, the image content of a camera mounted on the platform is utilized to detect the road boundaries in the image. We apply color masks to detect the road marks, apply the Hough transform to fit lines to the left and right road boundaries, find the corresponding road segment in the GIS database, estimate the homography transformation between the global and image coordinates of the road boundaries, and estimate the camera pose with respect to the global coordinate system. The proposed approach is evaluated on a benchmark. The position is measured by a smartphone's GPS receiver, images are taken from the smartphone's camera and the ground truth is provided by using the Real-Time Kinematic (RTK) technique. Results show the proposed approach significantly improves the accuracy of measured GPS position. The error in measured GPS position with average and standard deviation of 11.323 and 11.418 meters is reduced to the error in estimated position with average and standard deviation of 6.725 and 5.899 meters.
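
    The classical map-matching step described above (projecting the measured position onto a road centerline to remove the lateral error component) can be sketched as follows, with a hypothetical two-segment road network:

```python
import math

def project_to_segment(p, a, b):
    """Orthogonally project 2D point p onto segment a-b, clamping to the
    segment ends; this removes the lateral component of the GPS error."""
    ax, ay = a
    bx, by = b
    px, py = p
    vx, vy = bx - ax, by - ay
    t = ((px - ax) * vx + (py - ay) * vy) / (vx * vx + vy * vy)
    t = max(0.0, min(1.0, t))
    return (ax + t * vx, ay + t * vy)

def map_match(p, segments):
    """Snap the measured position to the closest point on any road link."""
    return min((project_to_segment(p, a, b) for a, b in segments),
               key=lambda q: math.dist(p, q))

# Hypothetical road network: two centerline segments, coordinates in metres.
roads = [((0.0, 0.0), (100.0, 0.0)), ((100.0, 0.0), (100.0, 100.0))]
measured = (40.0, 7.5)                 # GPS fix with a 7.5 m lateral error
snapped = map_match(measured, roads)
```

    The lane-level approach in the paper refines this by snapping to individual lane boundaries from the HD map instead of a single centerline per link.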

  6. LANE LEVEL LOCALIZATION; USING IMAGES AND HD MAPS TO MITIGATE THE LATERAL ERROR

    Directory of Open Access Journals (Sweden)

    S. Hosseinyalamdary

    2017-05-01

    Full Text Available In urban canyons where GNSS signals are blocked by buildings, the accuracy of the measured position deteriorates significantly. GIS databases have frequently been utilized to improve the accuracy of the measured position using map matching approaches, in which the measured position is projected onto the road links (centerlines) and the lateral error of the measured position is reduced. With advances in data acquisition, high definition maps which contain extra information, such as road lanes, are generated. These road lanes can be utilized to mitigate the positional error and improve the accuracy in position. In this paper, the image content of a camera mounted on the platform is utilized to detect the road boundaries in the image. We apply color masks to detect the road marks, apply the Hough transform to fit lines to the left and right road boundaries, find the corresponding road segment in the GIS database, estimate the homography transformation between the global and image coordinates of the road boundaries, and estimate the camera pose with respect to the global coordinate system. The proposed approach is evaluated on a benchmark. The position is measured by a smartphone's GPS receiver, images are taken with the smartphone's camera, and the ground truth is provided by using the Real-Time Kinematic (RTK) technique. Results show the proposed approach significantly improves the accuracy of the measured GPS position. The error in the measured GPS position, with average and standard deviation of 11.323 and 11.418 meters, is reduced to an error in the estimated position with average and standard deviation of 6.725 and 5.899 meters.

  7. Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy.

    Science.gov (United States)

    Cohen, E A K; Ober, R J

    2013-12-15

    We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise; a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs, this is an errors-in-variables problem and linear least squares is inappropriate; the correct method is generalized least squares. To allow for point-dependent errors, the equivalence of a generalized maximum likelihood and heteroscedastic generalized least squares model is achieved, allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity) we provide closed form solutions to estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE) believed to be useful, especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distributions for the TRE and LRE are themselves Gaussian and the parameterized distributions are derived. Results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data.
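
    For the special case the abstract highlights — CP covariance matrices that are scalar multiples of the identity — fitting the affine transform reduces to a weighted least-squares problem with per-point weights 1/σᵢ². The sketch below is a generic illustration of that reduction (it ignores the errors-in-variables refinement the paper actually develops; names are illustrative).

```python
import numpy as np

def fit_affine_wls(src, dst, sigma2):
    """Weighted least-squares affine fit dst ≈ A @ src + t,
    weighting control point i by 1 / sigma2[i]."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    w = np.sqrt(1.0 / np.asarray(sigma2, float))[:, None]
    X = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    # scale rows by sqrt-weights, then solve ordinary least squares
    P, *_ = np.linalg.lstsq(w * X, w * dst, rcond=None)
    A, t = P[:2].T, P[2]                           # P stacks [A.T; t]
    return A, t
```

    With noise-free correspondences the true affine transform is recovered exactly regardless of the weights; with heteroscedastic noise, down-weighting poorly localized CPs (low photon counts) is what distinguishes this from ordinary least squares.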

  8. Error Ratio Analysis: Alternate Mathematics Assessment for General and Special Educators.

    Science.gov (United States)

    Miller, James H.; Carr, Sonya C.

    1997-01-01

    Eighty-seven elementary students in grades four, five, and six were administered a 30-item multiplication instrument to assess performance in computation across grade levels. An interpretation of student performance using error ratio analysis is provided and the use of this method with groups of students for instructional decision making is…

  9. US-LHC IR magnet error analysis and compensation

    International Nuclear Information System (INIS)

    Wei, J.; Ptitsin, V.; Pilat, F.; Tepikian, S.; Gelfand, N.; Wan, W.; Holt, J.

    1998-01-01

    This paper studies the impact of the insertion-region (IR) magnet field errors on LHC collision performance. Compensation schemes including magnet orientation optimization, body-end compensation, tuning shims, and local nonlinear correction are shown to be highly effective

  10. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  11. Instrument evaluation no. 11. ESI nuclear model 271 C contamination monitor

    CERN Document Server

    Burgess, P H

    1978-01-01

    The various radiations encountered in radiological protection cover a wide range of energies and radiation measurements have to be carried out under an equally broad spectrum of environmental conditions. This report is one of a series intended to give information on the performance characteristics of radiological protection instruments, to assist in the selection of appropriate instruments for a given purpose, to interpret the results obtained with such instruments, and, in particular, to know the likely sources and magnitude of errors that might be associated with measurements in the field. The radiation, electrical and environmental characteristics of radiation protection instruments are considered together with those aspects of the construction which make an instrument convenient for routine use. To provide consistent criteria for instrument performance, the range of tests performed on any particular class of instrument, the test methods and the criteria of acceptable performance are based broadly on the a...

  12. Instrument evaluation no. 17. Wallac automatic alarm dosimeter type RAD21

    CERN Document Server

    Burgess, P H

    1980-01-01

    The various radiations encountered in radiological protection cover a wide range of energies and radiation measurements have to be carried out under an equally broad spectrum of environmental conditions. This report is one of a series intended to give information on the performance characteristics of radiological protection instruments, to assist in the selection of appropriate instruments for a given purpose, to interpret the results obtained with such instruments, and, in particular, to know the likely sources and magnitude of errors that might be associated with measurements in the field. The radiation, electrical and environmental characteristics of radiation protection instruments are considered together with those aspects of the construction which make an instrument convenient for routine use. To provide consistent criteria for instrument performance, the range of tests performed on any particular class of instrument, the test methods and the criteria of acceptable performance are based broadly on the a...

  13. Error Estimation and Accuracy Improvements in Nodal Transport Methods; Estimacion de Errores y Aumento de la Precision en Metodos Nodales de Transporte

    Energy Technology Data Exchange (ETDEWEB)

    Zamonsky, O M [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and provide a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by the decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.
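
    The abstract gives no formulas, but the idea of local a posteriori indicators that decompose a global estimate can be illustrated on a much simpler setting: comparing each cell's coarse estimate with a half-step re-estimate of a quadrature rule. This is a generic stand-in for the concept, not the nodal-transport estimator itself.

```python
import numpy as np

def local_error_indicators(f, a, b, n):
    """Per-cell a posteriori indicators for the composite trapezoid
    rule on [a, b] with n cells: each indicator is the difference
    between the cell's estimate and a half-step re-estimate.
    Large entries flag cells that adaptive refinement should split."""
    xs = np.linspace(a, b, n + 1)
    inds = []
    for x0, x1 in zip(xs[:-1], xs[1:]):
        h = x1 - x0
        m = 0.5 * (x0 + x1)
        coarse = 0.5 * h * (f(x0) + f(x1))            # one trapezoid
        fine = 0.25 * h * (f(x0) + 2 * f(m) + f(x1))  # two half-width trapezoids
        inds.append(abs(coarse - fine))
    return np.array(inds)
```

    Summing the local indicators gives a global estimate, and their spatial distribution plays the role of the "local indicators" the abstract describes as the basis for adaptive calculations.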

  14. Reduction of determinate errors in mass bias-corrected isotope ratios measured using a multi-collector plasma mass spectrometer

    International Nuclear Information System (INIS)

    Doherty, W.

    2015-01-01

    A nebulizer-centric instrument response function model of the plasma mass spectrometer was combined with a signal drift model, and the result was used to identify the causes of the non-spectroscopic determinate errors remaining in mass bias-corrected Pb isotope ratios (Tl as internal standard) measured using a multi-collector plasma mass spectrometer. Model calculations, confirmed by measurement, show that the detectable time-dependent errors are a result of the combined effect of signal drift and differences in the coordinates of the Pb and Tl response function maxima (horizontal offset effect). If there are no horizontal offsets, then the mass bias-corrected isotope ratios are approximately constant in time. In the absence of signal drift, the response surface curvature and horizontal offset effects are responsible for proportional errors in the mass bias-corrected isotope ratios. The proportional errors will be different for different analyte isotope ratios and different at every instrument operating point. Consequently, mass bias coefficients calculated using different isotope ratios are not necessarily equal. The error analysis based on the combined model provides strong justification for recommending a three step correction procedure (mass bias correction, drift correction and a proportional error correction, in that order) for isotope ratio measurements using a multi-collector plasma mass spectrometer
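
    The first step of the recommended three-step procedure, mass bias correction against an internal standard, is commonly done with the exponential law: the coefficient β is derived from the measured versus certified internal-standard ratio (e.g. 205Tl/203Tl) and then applied to the analyte ratios. A minimal sketch, with illustrative input values:

```python
import math

def mass_bias_beta(r_meas, r_cert, m_num, m_den):
    """Exponential-law mass bias coefficient from an internal
    standard: r_cert / r_meas = (m_num / m_den) ** beta."""
    return math.log(r_cert / r_meas) / math.log(m_num / m_den)

def correct_ratio(r_meas, m_num, m_den, beta):
    """Apply the exponential law to an analyte ratio (e.g. 208Pb/206Pb)."""
    return r_meas * (m_num / m_den) ** beta
```

    The abstract's point is that this step alone is insufficient: drift and proportional-error corrections must follow, and β values computed from different isotope pairs need not agree because of the horizontal offset and curvature effects.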

  15. NLO error propagation exercise data collection system

    International Nuclear Information System (INIS)

    Keisch, B.; Bieber, A.M. Jr.

    1983-01-01

    A combined automated and manual system for data collection is described. The system is suitable for collecting, storing, and retrieving data related to nuclear material control at a bulk processing facility. The system, which was applied to the NLO operated Feed Materials Production Center, was successfully demonstrated for a selected portion of the facility. The instrumentation consisted of off-the-shelf commercial equipment and provided timeliness, convenience, and efficiency in providing information for generating a material balance and performing error propagation on a sound statistical basis

  16. Evaluation of localization errors for craniospinal axis irradiation delivery using volume modulated arc therapy and proposal of a technique to minimize such errors

    International Nuclear Information System (INIS)

    Myers, Pamela; Stathakis, Sotirios; Mavroidis, Panayiotis; Esquivel, Carlos; Papanikolaou, Niko

    2013-01-01

    Purpose: To dosimetrically evaluate the effects of improper patient positioning in the junction area of a VMAT cranio-spinal axis irradiation technique consisting of one superior and one inferior arc and propose a solution to minimize these patient setup errors. Methods: Five (n = 5) cranio-spinal axis irradiation patients were planned with 2 arcs: one superior and one inferior. In order to mimic patient setup errors, the plans were recalculated with the inferior isocenter shifted by: 1, 2, 5, and 10 mm superiorly, and 1, 2, 5, and 10 mm inferiorly. The plans were then compared with the corresponding original, non-shifted arc plans on the grounds of target metrics such as conformity number and homogeneity index, as well as several normal tissue dose descriptors. “Gradient-optimized” plans were then created for each patient in an effort to reduce dose discrepancies due to setup errors. Results: Percent differences were calculated in order to compare each of the eight shifted plans with the original non-shifted arc plan, which corresponds to the ideal patient setup. The conformity number was on average lower by 0.9%, 2.7%, 5.8%, and 9.1% for the 1, 2, 5, and 10 mm inferiorly-shifted plans and 0.4%, 0.8%, 2.8%, and 6.0% for the respective superiorly-shifted plans. The homogeneity indices were averaged among the five patients and indicated less homogeneous dose distributions by 0.03%, 0.3%, 1.0%, and 2.8% for the inferior shifts and 0.2%, 1.2%, 6.3%, and 15.3% for the superior shifts. Overall, the mean doses to the organs at risk deviate by less than 2% for the 1, 2, and 5 mm shifted plans. The 10 mm shifted plans, however, showed average percent differences, over all studied organs, from the original plan of up to 5.6%. Using “gradient-optimized” plans, the average dose differences were reduced by 0.2%, 0.5%, 1.2%, and 2.1% for 1, 2, 5, and 10 mm shifts, respectively compared to the originally optimized plans, and the maximum dose differences were

  17. Local rollback for fault-tolerance in parallel computing systems

    Science.gov (United States)

    Blumrich, Matthias A [Yorktown Heights, NY; Chen, Dong [Yorktown Heights, NY; Gara, Alan [Yorktown Heights, NY; Giampapa, Mark E [Yorktown Heights, NY; Heidelberger, Philip [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Steinmacher-Burow, Burkhard [Boeblingen, DE; Sugavanam, Krishnan [Yorktown Heights, NY

    2012-01-24

    A control logic device performs a local rollback in a parallel supercomputing system. The supercomputing system includes at least one cache memory device. The control logic device determines a local rollback interval. The control logic device runs at least one instruction in the local rollback interval. The control logic device evaluates whether an unrecoverable condition occurs while running the at least one instruction during the local rollback interval. The control logic device checks whether an error occurs during the local rollback. The control logic device restarts the local rollback interval if the error occurs and the unrecoverable condition does not occur during the local rollback interval.
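
    The patented scheme snapshots state via the cache hierarchy; as a toy software analogue of the same control flow, the sketch below snapshots a state dictionary, runs an interval of steps, and restores the snapshot and retries when a recoverable error is detected (names and the deepcopy snapshot are illustrative, not the hardware mechanism).

```python
import copy

class RecoverableError(Exception):
    """A detected error that local rollback can handle."""

def run_interval_with_rollback(state: dict, steps, max_retries: int = 3):
    """Execute one rollback interval: snapshot the state, run each
    step, and on a recoverable error restore the snapshot and retry.
    If retries are exhausted, a full checkpoint restart is needed."""
    snapshot = copy.deepcopy(state)
    for _ in range(max_retries):
        try:
            for step in steps:
                step(state)
            return state                            # interval completed: commit
        except RecoverableError:
            state.clear()
            state.update(copy.deepcopy(snapshot))   # local rollback
    raise RuntimeError("retries exhausted: full restart required")
```

    The point of the local rollback is exactly what the toy shows: a transient error costs only a re-run of the current interval, not a global restart from the last full checkpoint.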

  18. Sensitivity analysis of periodic errors in heterodyne interferometry

    International Nuclear Information System (INIS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-01-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors
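
    The first- and second-order periodic errors described above — cyclical, non-cumulative variations at one and two cycles per fringe period — can be modeled directly as sinusoidal terms added to the true displacement. Amplitudes, phases, and the period below are illustrative, not values from the paper:

```python
import numpy as np

def reported_displacement(d, lam, a1, a2, phi1=0.0, phi2=0.0):
    """True displacement d plus first- and second-order periodic
    errors: one and two cycles per fringe period `lam`.
    The error is bounded (non-cumulative) and averages to zero
    over whole numbers of periods."""
    return (d
            + a1 * np.sin(2 * np.pi * d / lam + phi1)
            + a2 * np.sin(4 * np.pi * d / lam + phi2))
```

    In a sensitivity study such as the one in the paper, a1, a2, phi1, and phi2 would be functions of the optical setup parameters (beam splitter misalignment, frequency non-orthogonality, polarization ellipticity), and the Sobol' indices quantify which parameter dominates each order.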

  19. Sensitivity analysis of periodic errors in heterodyne interferometry

    Science.gov (United States)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.

  20. Localization of the solar flare SF900610 in X-rays with the WATCH instrument of the GRANAT observatory

    DEFF Research Database (Denmark)

    Terekhov, O.V.; Kuzmin, A.G.; Shevchenko, A.V.

    2002-01-01

    During the solar flare of June 10, 1990, the WATCH instrument of the GRANAT space observatory obtained 110 localizations of the X-ray source in the X-ray range 8-20 keV. Its coordinates were measured with an accuracy of ~2 arcmin at a 3σ confidence level. The coordinates of the X-ray source do not coincide with the coordinates of the Hα-line flare. The X-ray source moved over the solar disk during the flare. This probably implies that, as the X-ray emission was generated, different parts of one loop or a system of magnetic loops dominated at different flare times.

  1. SLC beam line error analysis using a model-based expert system

    International Nuclear Information System (INIS)

    Lee, M.; Kleban, S.

    1988-02-01

    Commissioning a particle beam line is usually a very time-consuming and labor-intensive task for accelerator physicists. To aid in commissioning, we developed a model-based expert system that identifies error-free regions and localizes beam line errors. This paper will give examples of the use of our system for the SLC commissioning. 8 refs., 5 figs

  2. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    Science.gov (United States)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present

  3. Social Networking of Instrumentation - a Case Study in Telematics

    OpenAIRE

    ROBU, D.; SANDU, F.; PETREUS, D.; NEDELCU, A.; BALICA, A.

    2014-01-01

    The research work contributes to the design and implementation of the communication part for integrating remote instruments and drives via social networks (SN) into instrumentation communities. Virtual instrumentation (VI) is used to manage objects that tweet on popular SN platforms, applying the concept of the Internet of Things (IoT). Local and remote resource aggregation is based on National Instruments (NI) data acquisition and distribution hardware in a NI software ...

  4. Practical considerations in developing an instrument-maintenance plan--

    International Nuclear Information System (INIS)

    Guth, M.A.S.

    1989-01-01

    The author develops a general set of considerations to explain how a consistent, well-organized, prioritized, and adequate time-allowance program plan for routine maintenance can be constructed. The analysis is supplemented with experience from the high flux isotope reactor (HFIR) at US Oak Ridge National Laboratory (ORNL). After the preventive maintenance (PM) problem was defined, the instruments on the schedule were selected based on the manufacturer's design specifications, quality-assurance requirements, prior classifications, experiences with the incidence of breakdowns or calibration, and dependencies among instruments. The effects of repair error in PM should also be studied. The HFIR requires three full-time technicians to perform both PM and unscheduled maintenance. A review is presented of concepts from queuing theory to determine anticipated breakdown patterns. In practice, the pneumatic instruments have a much longer lifetime than the electric/electronic instruments on various reactors at ORNL. Some special considerations and risk aversion in choosing a maintenance schedule are also discussed.

  5. Local government financial autonomy in Nigeria: The State Joint Local Government Account

    Directory of Open Access Journals (Sweden)

    Jude Okafor

    2010-07-01

    Full Text Available This paper addresses the statutory financial relations and financial autonomy of local government in Nigeria, and the freedom of local government to generate revenue from its assigned sources without external interference. It focuses particularly on a financial instrument called the State Joint Local Government Account (SJLGA) and how its operations have positively or negatively affected the financial autonomy of local government councils and the inter-relations between state and local government in Nigeria.

  6. Robust a Posteriori Error Control and Adaptivity for Multiscale, Multinumerics, and Mortar Coupling

    KAUST Repository

    Pencheva, Gergina V.; Vohralí k, Martin; Wheeler, Mary F.; Wildey, Tim

    2013-01-01

    -order polynomials are used on the mortar interface mesh. We derive several fully computable a posteriori error estimates which deliver a guaranteed upper bound on the error measured in the energy norm. Our estimates are also locally efficient and one of them

  7. Local Observed-Score Kernel Equating

    Science.gov (United States)

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  8. Bias Correction and Random Error Characterization for the Assimilation of HRDI Line-of-Sight Wind Measurements

    Science.gov (United States)

    Tangborn, Andrew; Menard, Richard; Ortland, David; Einaudi, Franco (Technical Monitor)

    2001-01-01

    A new approach to the analysis of systematic and random observation errors is presented in which the error statistics are obtained using forecast data rather than observations from a different instrument type. The analysis is carried out at an intermediate retrieval level, instead of the more typical state variable space. This method is carried out on measurements made by the High Resolution Doppler Imager (HRDI) on board the Upper Atmosphere Research Satellite (UARS). HRDI, a limb sounder, is the only satellite instrument measuring winds in the stratosphere, and the only instrument of any kind making global wind measurements in the upper atmosphere. HRDI measures Doppler shifts in the two different O2 absorption bands (gamma and B) and the retrieved products are tangent point line-of-sight wind component (level 2 retrieval) and UV winds (level 3 retrieval). This analysis is carried out on a level 1.9 retrieval, in which the contributions from different points along the line-of-sight have not been removed. Biases are calculated from O-F (observed minus forecast) LOS wind components and are separated into a measurement parameter space consisting of 16 different values. The bias dependence on these parameters (plus an altitude dependence) is used to create a bias correction scheme carried out on the level 1.9 retrieval. The random error component is analyzed by separating the gamma and B band observations and locating observation pairs where both bands are very nearly looking at the same location at the same time. It is shown that the two observation streams are uncorrelated and that this allows the forecast error variance to be estimated. The bias correction is found to cut the effective observation error variance in half.
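
    The bias-correction idea — binning O-F (observed minus forecast) statistics by measurement parameters and subtracting the per-bin mean bias from the observations — can be sketched generically. The paper's 16-value parameter space (plus altitude dependence) is collapsed here to a single precomputed bin index; everything else is illustrative.

```python
import numpy as np

def binned_bias_correction(obs, fcst, bin_ids, n_bins):
    """Estimate a per-bin systematic bias as the mean O-F in each
    measurement-parameter bin, then return bias-corrected
    observations along with the bias estimates themselves."""
    omf = obs - fcst
    bias = np.zeros(n_bins)
    for b in range(n_bins):
        sel = bin_ids == b
        if sel.any():
            bias[b] = omf[sel].mean()
    return obs - bias[bin_ids], bias
```

    With the systematic part removed, the residual O-F statistics estimate the random error component, which is what allows the paper to separate observation and forecast error variances from paired band observations.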

  9. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and provide a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by the decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  10. The role of errors in the measurements performed at the reprocessing plant head-end for material accountancy purposes

    International Nuclear Information System (INIS)

    Foggi, C.; Liebetrau, A.M.; Petraglia, E.

    1999-01-01

    One of the most common procedures used in determining the amount of nuclear material contained in solutions consists of first measuring the volume and the density of the solution, and then determining the concentrations of this material. This presentation will focus on errors generated at the process line in the measurement of volume and density. These errors and their associated uncertainties can be grouped into distinct categories depending on their origin: those attributable to measuring instruments; those attributable to operational procedures; variability in measurement conditions; errors in the analysis and interpretation of results. Possible error sources, their relative magnitudes, and an error propagation rationale are discussed, with emphasis placed on biases and errors of the last three types, called systematic errors.
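
    For the measurement chain described above — material mass obtained as the product of volume, density, and concentration — first-order propagation of independent errors adds the relative uncertainties in quadrature. A minimal sketch of that standard rule (variable names and values are illustrative):

```python
import math

def mass_and_uncertainty(V, sV, rho, srho, c, sc):
    """Nuclear material mass m = V * rho * c with first-order
    propagation of independent relative uncertainties:
    (s_m / m)^2 = (s_V / V)^2 + (s_rho / rho)^2 + (s_c / c)^2."""
    m = V * rho * c
    rel = math.sqrt((sV / V) ** 2 + (srho / rho) ** 2 + (sc / c) ** 2)
    return m, m * rel
```

    Note this quadrature rule covers only independent random components; the systematic (bias-like) errors the abstract emphasizes would instead combine linearly and must be tracked separately in a material-balance error propagation.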

  11. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. A probabilistic framework for acoustic emission source localization in plate-like structures

    International Nuclear Information System (INIS)

    Dehghan Niri, E; Salamone, S

    2012-01-01

    This paper proposes a probabilistic approach for acoustic emission (AE) source localization in isotropic plate-like structures based on an extended Kalman filter (EKF). The proposed approach consists of two main stages. During the first stage, time-of-flight (TOF) measurements of Lamb waves are carried out by a continuous wavelet transform (CWT), accounting for systematic errors due to the Heisenberg uncertainty; the second stage uses an EKF to iteratively estimate the AE source location and the wave velocity. The advantages of the proposed algorithm over the traditional methods include the capability of: (1) taking into account uncertainties in TOF measurements and wave velocity and (2) efficiently fusing multi-sensor data to perform AE source localization. The performance of the proposed approach is validated through pencil-lead breaks performed on an aluminum plate at systematic grid locations. The plate was instrumented with an array of four piezoelectric transducers in two different configurations. (paper)
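The second stage described above, iteratively refining the source location and wave velocity from time-of-flight (TOF) data with an EKF, can be sketched as follows. The sensor layout, plate size, wave speed, and noise values are illustrative assumptions, not the paper's setup; the state vector [x, y, v] and the measurement model tof_i = ||(x, y) − sensor_i|| / v follow the generic EKF formulation.

```python
import numpy as np

def ekf_ae_update(state, cov, sensors, tofs, r_tof):
    """One EKF pass over all sensors for an AE source state [x, y, v].

    Measurement model: tof_i = ||(x, y) - sensor_i|| / v.
    """
    x, P = state.copy(), cov.copy()
    for s, t in zip(sensors, tofs):
        d = np.hypot(x[0] - s[0], x[1] - s[1])
        h = d / x[2]                                  # predicted TOF
        H = np.array([(x[0] - s[0]) / (x[2] * d),     # Jacobian of h
                      (x[1] - s[1]) / (x[2] * d),
                      -d / x[2] ** 2])
        S = H @ P @ H + r_tof                         # innovation variance
        K = P @ H / S                                 # Kalman gain
        x = x + K * (t - h)
        P = (np.eye(3) - np.outer(K, H)) @ P
    return x, P

# Illustrative setup: four corner sensors on a 1 m plate, noise-free TOFs.
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_src, true_v = np.array([0.3, 0.4]), 5000.0       # assumed source and speed
tofs = np.hypot(*(true_src - sensors).T) / true_v

state = np.array([0.5, 0.5, 4800.0])                  # rough initial guess
P0 = np.diag([0.05, 0.05, 300.0 ** 2])
for _ in range(100):                                  # iterated refinement
    state, _ = ekf_ae_update(state, P0, sensors, tofs, 1e-12)
```

Restarting the covariance at each pass turns the filter into an iterated refinement that, with consistent measurements, settles on the source position and velocity jointly.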

  13. Shariah Bond as Financial Instrument For Local Government

    Directory of Open Access Journals (Sweden)

    Anim Rahmayati

    2016-06-01

Full Text Available This study aims to analyse the potential of regional sharia bonds as an alternative source of local financing. It is a qualitative, descriptive literature study using SWOT analysis. The results indicate that regional sharia bonds are an alternative to other sources of regional funding that is well worth considering. Policy support, the very large financing needs of regional infrastructure development, and the market potential of regional sharia bonds together present an opportunity for local governments in Indonesia to issue sharia bonds promptly. DOI: 10.15408/sjie.v5i1.3126

  14. Psychological safety and error reporting within Veterans Health Administration hospitals.

    Science.gov (United States)

    Derickson, Ryan; Fishman, Jonathan; Osatuke, Katerine; Teclaw, Robert; Ramsel, Dee

    2015-03-01

In psychologically safe workplaces, employees feel comfortable taking interpersonal risks, such as pointing out errors. Previous research suggested that a psychologically safe climate optimizes organizational outcomes. We evaluated psychological safety levels in Veterans Health Administration (VHA) hospitals and assessed their relationship to employees' willingness to report medical errors. We conducted an ANOVA on psychological safety scores from a VHA employee census survey (n = 185,879), assessing variability of means across racial and supervisory levels. We examined organizational climate assessment interviews (n = 374) evaluating how many employees asserted willingness to report errors (or not) and their stated reasons. Finally, based on survey data, we identified 2 (psychologically safe versus unsafe) hospitals and compared their number of employees who would be willing/unwilling to report an error. Psychological safety increased with supervisory level. Employees at the psychologically unsafe hospital (71% would report, 13% would not) were less willing to report an error than at the psychologically safe hospital (91% would, 0% would not). A substantial minority would not report an error and were willing to admit so in a private interview setting. Their stated reasons, as well as the higher psychological safety means for supervisory employees, both suggest power as an important determinant. Intentions to report were associated with psychological safety, strongly suggesting this climate aspect is instrumental to improving patient safety and reducing costs.
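As a back-of-the-envelope check on comparisons like the 91% versus 71% willingness figures above, a two-proportion z-test can be sketched. The per-hospital group sizes below are invented for illustration (the abstract does not report interview counts per hospital).

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                 # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    pval = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    return z, pval

# Hypothetical counts: 91/100 willing at the safe hospital, 71/100 at the unsafe one.
z, p = two_proportion_z(91, 100, 71, 100)
```

With these assumed counts the gap is large relative to its standard error (z ≈ 3.6), so a difference of this size would be unlikely to arise by chance.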

  15. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    Science.gov (United States)

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
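A minimal sketch of the linear TSRI estimator with a bootstrap standard error, on simulated data. The data-generating process and all parameter values are invented for illustration; this is not the authors' code. Resampling individuals and re-running both stages is what lets the standard error reflect the stage-1 uncertainty that unadjusted second-stage standard errors ignore.

```python
import numpy as np

def tsri_linear(z, x, y):
    """Linear two-stage residual inclusion (TSRI) causal-effect estimate."""
    Z1 = np.column_stack([np.ones_like(z), z])
    r = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]   # stage-1 residuals
    X2 = np.column_stack([np.ones_like(x), x, r])        # include residuals
    return np.linalg.lstsq(X2, y, rcond=None)[0][1]      # coeff on exposure

def bootstrap_se(z, x, y, n_boot=200, seed=1):
    """Resample individuals and re-run both stages to get the SE."""
    rng = np.random.default_rng(seed)
    idx = (rng.integers(0, len(y), len(y)) for _ in range(n_boot))
    return np.std([tsri_linear(z[i], x[i], y[i]) for i in idx], ddof=1)

# Toy Mendelian-randomization data: genotype z, unobserved confounder u.
rng = np.random.default_rng(0)
n = 5000
z = rng.binomial(2, 0.3, n).astype(float)
u = rng.normal(size=n)
x = 0.5 * z + u + rng.normal(size=n)        # exposure
y = 0.3 * x + u + rng.normal(size=n)        # outcome; true causal effect 0.3
```

A naive regression of y on x would be biased by u, while the TSRI estimate recovers roughly 0.3; the bootstrap spread then serves as a corrected standard error.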

  16. Assessment of local GNSS baselines at co-location sites

    Science.gov (United States)

    Herrera Pinzón, Iván; Rothacher, Markus

    2018-01-01

As one of the major contributors to the realisation of the International Terrestrial Reference System (ITRS), the Global Navigation Satellite Systems (GNSS) are prone to suffer from irregularities and discontinuities in time series. While often associated with hardware/software changes and the influence of the local environment, these discrepancies constitute a major threat for ITRS realisations. Co-located GNSS at fundamental sites, with two or more available instruments, provide the opportunity to mitigate their influence while improving the accuracy of estimated positions by examining data breaks, local biases, deformations, time-dependent variations and the comparison of GNSS baselines with existing local tie measurements. Using co-located GNSS data from a subset of sites of the International GNSS Service network, this paper discusses a global multi-year analysis with the aim of delivering homogeneous time series of coordinates to analyse system-specific error sources in the local baselines. Results based on the comparison of different GNSS-based solutions with the local survey ties show discrepancies of up to 10 mm despite GNSS coordinate repeatabilities at the sub-mm level. The discrepancies are especially large for the solutions using the ionosphere-free linear combination and estimating tropospheric zenith delays, thus corresponding to the processing strategy used for global solutions. Snow on the antennas causes further problems and seasonal variations of the station coordinates. These results demonstrate the need for permanent high-quality monitoring of the effects present in the short GNSS baselines at fundamental sites.

  17. Electronic instrument for radon daughter dosimetry. Report of investigations

    International Nuclear Information System (INIS)

    Durkin, J.

    1977-01-01

    Due to the daily exposures of uranium mining personnel to 222Rn daughters, a device is needed which will continually monitor the individual's exposure. Such a device has been built and tested and is known as the Radon Daughter Dosimeter. This is an electronic instrument using a solid-state detector and circuitry. The system permits the evaluation of cumulative exposures to airborne radon progeny, expressed in units of Working Level Hours (WL-HRS). The instrument is a personal device worn by the individual throughout the working shift. Since the instrument is in close proximity to the miner and measures continual exposure, it provides an accurate account of total cumulative exposure, thus avoiding the errors caused by the present technique of spot checking of the environment
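The cumulative quantity the dosimeter reports is, in essence, a running time-integral of the working-level concentration. A minimal sketch of that accumulation (the sampling interval and concentration values are illustrative assumptions, not the instrument's actual calibration):

```python
def cumulative_wlh(wl_samples, dt_hours):
    """Accumulate working-level samples into cumulative exposure (WL-hours)."""
    total = 0.0
    for wl in wl_samples:
        total += wl * dt_hours            # rectangle-rule time integral
    return total

# A full 8-hour shift sampled hourly at a steady 0.3 WL gives 2.4 WL-hours.
shift_exposure = cumulative_wlh([0.3] * 8, dt_hours=1.0)
```

Because the running total follows the wearer through the shift, it captures exposure variations that periodic area spot checks would miss.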

  18. NDE errors and their propagation in sizing and growth estimates

    International Nuclear Information System (INIS)

    Horn, D.; Obrutsky, L.; Lakhan, R.

    2009-01-01

    The accuracy attributed to eddy current flaw sizing determines the amount of conservativism required in setting tube-plugging limits. Several sources of error contribute to the uncertainty of the measurements, and the way in which these errors propagate and interact affects the overall accuracy of the flaw size and flaw growth estimates. An example of this calculation is the determination of an upper limit on flaw growth over one operating period, based on the difference between two measurements. Signal-to-signal comparison involves a variety of human, instrumental, and environmental error sources; of these, some propagate additively and some multiplicatively. In a difference calculation, specific errors in the first measurement may be correlated with the corresponding errors in the second; others may be independent. Each of the error sources needs to be identified and quantified individually, as does its distribution in the field data. A mathematical framework for the propagation of the errors can then be used to assess the sensitivity of the overall uncertainty to each individual error component. This paper quantifies error sources affecting eddy current sizing estimates and presents analytical expressions developed for their effect on depth estimates. A simple case study is used to model the analysis process. For each error source, the distribution of the field data was assessed and propagated through the analytical expressions. While the sizing error obtained was consistent with earlier estimates and with deviations from ultrasonic depth measurements, the error on growth was calculated as significantly smaller than that obtained assuming uncorrelated errors. An interesting result of the sensitivity analysis in the present case study is the quantification of the error reduction available from post-measurement compensation of magnetite effects. With the absolute and difference error equations, variance-covariance matrices, and partial derivatives developed in
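The key quantitative point above, that correlated sizing errors partially cancel in a growth (difference) estimate, follows from Var(m2 − m1) = σ1² + σ2² − 2ρσ1σ2. A sketch with illustrative numbers (not the paper's field data), checked by Monte Carlo:

```python
import math
import random

def growth_sigma(s1, s2, rho):
    """Std dev of a growth estimate (m2 - m1) with error correlation rho."""
    return math.sqrt(s1 ** 2 + s2 ** 2 - 2 * rho * s1 * s2)

def mc_growth_sigma(s1, s2, rho, n=100_000, seed=7):
    """Monte Carlo check: spread of differences of correlated error pairs."""
    rng = random.Random(seed)
    total = total2 = 0.0
    for _ in range(n):
        a = rng.gauss(0, 1)
        b = rho * a + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        d = s2 * b - s1 * a               # difference of the two errors
        total += d
        total2 += d * d
    return math.sqrt(total2 / n - (total / n) ** 2)
```

With assumed sizing errors of 5 (in whatever depth units apply) on each inspection and ρ = 0.8, the growth uncertainty drops from √50 ≈ 7.1 for uncorrelated errors to √10 ≈ 3.2, which is the qualitative effect the paper reports.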

  19. Radiology errors: are we learning from our mistakes?

    International Nuclear Information System (INIS)

    Mankad, K.; Hoey, E.T.D.; Jones, J.B.; Tirukonda, P.; Smith, J.T.

    2009-01-01

Aim: To question practising radiologists and radiology trainees at a large international meeting in an attempt to survey individuals about error reporting. Materials and methods: Radiologists attending the 2007 Radiological Society of North America (RSNA) annual meeting were approached to fill in a written questionnaire. Participants were questioned as to their grade, country in which they practised, and subspecialty interest. They were asked whether they kept a personal log of their errors (with an error defined as 'a mistake that has management implications for the patient'), how many errors they had made in the preceding 12 months, and the types of errors that had occurred. They were also asked whether their local department held regular discrepancy/errors meetings, how many they had attended in the preceding 12 months, and the perceived atmosphere at these meetings (on a qualitative scale). Results: A total of 301 radiologists with a wide range of specialty interests from 32 countries agreed to take part. One hundred and sixty-six of 301 (55%) of responders were consultant/attending grade. One hundred and thirty-five of 301 (45%) were residents/fellows. Fifty-nine of 301 (20%) of responders kept a personal record of their errors. The number of errors made per person per year ranged from none (2%) to 16 or more (7%). The majority (91%) reported making between one and 15 errors/year. Overcalls (40%), under-calls (25%), and interpretation error (15%) were the predominant error types. One hundred and seventy-eight of 301 (59%) of participants stated that their department held regular errors meetings. One hundred and twenty-seven of 301 (42%) had attended three or more meetings in the preceding year. The majority (55%) who had attended errors meetings described the atmosphere as 'educational.' Only a small minority (2%) described the atmosphere as 'poor', meaning non-educational and/or blameful. Conclusion: Despite the undeniable importance of learning from errors

  20. Minimum-error discrimination of entangled quantum states

    International Nuclear Information System (INIS)

    Lu, Y.; Coish, N.; Kaltenbaek, R.; Hamel, D. R.; Resch, K. J.; Croke, S.

    2010-01-01

    Strategies to optimally discriminate between quantum states are critical in quantum technologies. We present an experimental demonstration of minimum-error discrimination between entangled states, encoded in the polarization of pairs of photons. Although the optimal measurement involves projection onto entangled states, we use a result of J. Walgate et al. [Phys. Rev. Lett. 85, 4972 (2000)] to design an optical implementation employing only local polarization measurements and feed-forward, which performs at the Helstrom bound. Our scheme can achieve perfect discrimination of orthogonal states and minimum-error discrimination of nonorthogonal states. Our experimental results show a definite advantage over schemes not using feed-forward.
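The Helstrom bound referenced above has a compact closed form: P_err = ½(1 − ‖p1ρ1 − p2ρ2‖₁), where ‖·‖₁ is the trace norm. A numerical sketch (a generic qubit check, not the experiment's two-photon polarization encoding):

```python
import numpy as np

def ket_to_rho(v):
    """Density matrix |v><v| for a (possibly unnormalized) state vector."""
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

def helstrom_error(rho1, rho2, p1=0.5):
    """Minimum error probability for discriminating rho1 from rho2."""
    gamma = p1 * rho1 - (1 - p1) * rho2          # Hermitian by construction
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()
    return 0.5 * (1.0 - trace_norm)
```

Orthogonal states give zero error, while |0> versus |+> (overlap 1/√2) gives ½(1 − √½) ≈ 0.146, matching the pure-state formula ½(1 − √(1 − |⟨ψ1|ψ2⟩|²)).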

  1. IAEA programme on maintenance of nuclear instruments

    International Nuclear Information System (INIS)

    Vuister, P.H.

    1986-01-01

The Medical Applications Section in the Division of Life Sciences of the International Atomic Energy Agency has been engaged since 1975 in activities aimed at the more effective use of nuclear instruments. Activities and achievements are described concerning the conditioning of laboratories, preventive maintenance and repair of instruments, the management thereof, spare parts, and the promotion of local training in these subjects. (author)

  2. Calibration Errors in Interferometric Radio Polarimetry

    Science.gov (United States)

    Hales, Christopher A.

    2017-08-01

    Residual calibration errors are difficult to predict in interferometric radio polarimetry because they depend on the observational calibration strategy employed, encompassing the Stokes vector of the calibrator and parallactic angle coverage. This work presents analytic derivations and simulations that enable examination of residual on-axis instrumental leakage and position-angle errors for a suite of calibration strategies. The focus is on arrays comprising alt-azimuth antennas with common feeds over which parallactic angle is approximately uniform. The results indicate that calibration schemes requiring parallactic angle coverage in the linear feed basis (e.g., the Atacama Large Millimeter/submillimeter Array) need only observe over 30°, beyond which no significant improvements in calibration accuracy are obtained. In the circular feed basis (e.g., the Very Large Array above 1 GHz), 30° is also appropriate when the Stokes vector of the leakage calibrator is known a priori, but this rises to 90° when the Stokes vector is unknown. These findings illustrate and quantify concepts that were previously obscure rules of thumb.

  3. Measuring Error Identification and Recovery Skills in Surgical Residents.

    Science.gov (United States)

    Sternbach, Joel M; Wang, Kevin; El Khoury, Rym; Teitelbaum, Ezra N; Meyerson, Shari L

    2017-02-01

    Although error identification and recovery skills are essential for the safe practice of surgery, they have not traditionally been taught or evaluated in residency training. This study validates a method for assessing error identification and recovery skills in surgical residents using a thoracoscopic lobectomy simulator. We developed a 5-station, simulator-based examination containing the most commonly encountered cognitive and technical errors occurring during division of the superior pulmonary vein for left upper lobectomy. Successful completion of each station requires identification and correction of these errors. Examinations were video recorded and scored in a blinded fashion using an examination-specific rating instrument evaluating task performance as well as error identification and recovery skills. Evidence of validity was collected in the categories of content, response process, internal structure, and relationship to other variables. Fifteen general surgical residents (9 interns and 6 third-year residents) completed the examination. Interrater reliability was high, with an intraclass correlation coefficient of 0.78 between 4 trained raters. Station scores ranged from 64% to 84% correct. All stations adequately discriminated between high- and low-performing residents, with discrimination ranging from 0.35 to 0.65. The overall examination score was significantly higher for intermediate residents than for interns (mean, 74 versus 64 of 90 possible; p = 0.03). The described simulator-based examination with embedded errors and its accompanying assessment tool can be used to measure error identification and recovery skills in surgical residents. This examination provides a valid method for comparing teaching strategies designed to improve error recognition and recovery to enhance patient safety. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  4. Death Certification Errors and the Effect on Mortality Statistics.

    Science.gov (United States)

    McGivern, Lauri; Shulman, Leanne; Carney, Jan K; Shapiro, Steven; Bundock, Elizabeth

Errors in cause and manner of death on death certificates are common and affect families, mortality statistics, and public health research. The primary objective of this study was to characterize errors in the cause and manner of death on death certificates completed by non-Medical Examiners. A secondary objective was to determine the effects of errors on national mortality statistics. We retrospectively compared 601 death certificates completed between July 1, 2015, and January 31, 2016, from the Vermont Electronic Death Registration System with clinical summaries from medical records. Medical Examiners, blinded to the original certificates, reviewed the summaries, generated mock certificates, and compared the mock certificates with the originals. They then graded errors using a scale from 1 to 4 (higher numbers indicating greater impact on interpretation of the cause) to determine the prevalence of minor and major errors. They also compared International Classification of Diseases, 10th Revision (ICD-10) codes on original certificates with those on mock certificates. Of 601 original death certificates, 319 (53%) had errors; 305 (51%) had major errors; and 59 (10%) had minor errors. We found no significant differences by certifier type (physician vs nonphysician), but we did find significant differences in major errors by place of death. Errors of this prevalence distort national mortality statistics. Surveillance and certifier education must expand beyond local and state efforts. Simplifying and standardizing the underlying literal text for cause of death may improve accuracy, decrease coding errors, and improve national mortality statistics.

  5. Low-Dimensional Feature Representation for Instrument Identification

    Science.gov (United States)

    Ihara, Mizuki; Maeda, Shin-Ichi; Ikeda, Kazushi; Ishii, Shin

For monophonic music instrument identification, various feature extraction and selection methods have been proposed. One issue in instrument identification is that the same spectrum is not always observed even for the same instrument, owing to differences in recording conditions. It is therefore important to find non-redundant, instrument-specific features that retain the information essential for high-quality instrument identification, so that they can be applied to various instrumental music analyses. As such a dimensionality reduction method, the authors propose the use of linear projection methods: local Fisher discriminant analysis (LFDA) and LFDA combined with principal component analysis (PCA). After experimentally confirming that raw power spectra are actually good for instrument classification, the authors reduced the feature dimensionality by LFDA or by PCA followed by LFDA (PCA-LFDA). The reduced features achieved reasonably high identification performance, comparable to or higher than that of the raw power spectra and of other existing studies. These results demonstrate that LFDA and PCA-LFDA can successfully extract low-dimensional instrument features that retain the characteristic information of the instruments.

  6. LOWER BOUNDS ON PHOTOMETRIC REDSHIFT ERRORS FROM TYPE Ia SUPERNOVA TEMPLATES

    International Nuclear Information System (INIS)

    Asztalos, S.; Nikolaev, S.; De Vries, W.; Olivier, S.; Cook, K.; Wang, L.

    2010-01-01

Cosmology with Type Ia supernovae has heretofore required extensive spectroscopic follow-up to establish an accurate redshift. Though this resource-intensive approach is tolerable at the present discovery rate, the next generation of ground-based all-sky survey instruments will render it unsustainable. Photometry-based redshift determination may be a viable alternative, though the technique introduces non-negligible errors that ultimately degrade the ability to discriminate between competing cosmologies. We present a strictly template-based photometric redshift estimator and compute redshift reconstruction errors in the presence of statistical errors. Under highly degraded photometric conditions corresponding to a statistical error σ of 0.5, the residual redshift error is found to be 0.236 when assuming a nightly observing cadence and a single Large Synoptic Survey Telescope (LSST) u-band filter. Utilizing all six LSST bandpass filters reduces the residual redshift error to 9.1 × 10⁻³. Assuming a more optimistic statistical error σ of 0.05, we derive residual redshift errors of 4.2 × 10⁻⁴, 5.2 × 10⁻⁴, 9.2 × 10⁻⁴, and 1.8 × 10⁻³ for observations occurring nightly, every 5th, 20th, and 45th night, respectively, in each of the six LSST bandpass filters. Adopting an observing cadence in which photometry is acquired with all six filters every 5th night and a realistic supernova distribution, binned redshift errors are combined with photometric errors with a σ of 0.17 and systematic errors with a σ of ∼0.003 to derive joint errors (σ_w, σ_w′) of (0.012, 0.066), respectively, in (w, w′) with 68% confidence using the Fisher matrix formalism. Though highly idealized in the present context, the methodology is nonetheless quite relevant for the next generation of ground-based all-sky surveys.
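The joint (w, w′) errors above come from inverting a Fisher matrix: the marginalized 1σ error on parameter i is √((F⁻¹)ᵢᵢ). A generic sketch with an invented 2×2 matrix (not the paper's actual Fisher matrix):

```python
import numpy as np

def marginalized_sigmas(fisher):
    """Marginalized 1-sigma errors: sqrt of the diagonal of F^-1."""
    return np.sqrt(np.diag(np.linalg.inv(np.asarray(fisher, float))))

# Invented Fisher matrix for two correlated parameters (w, w'):
F = np.array([[100.0, 30.0],
              [30.0, 25.0]])
sigma_w, sigma_wp = marginalized_sigmas(F)   # 0.125 and 0.25
```

Note that the off-diagonal (correlation) term inflates the marginalized errors relative to the unmarginalized values 1/√Fᵢᵢ (here 0.1 and 0.2), which is why parameter degeneracies matter for survey design.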

  7. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. Any adaptation of the quantum error correction code or its implementation circuit is not required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. surface code. A Gaussian processes algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
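A minimal numpy sketch of the Gaussian-process step: smoothing past error-correction data and extrapolating the rate forward, with a posterior standard deviation that widens away from the data. The kernel choice, length scale, noise level, and the synthetic drifting rate below are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, length=5.0, var=1.0):
    """Squared-exponential kernel between two 1-D time arrays."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(t_obs, y_obs, t_new, noise=1e-4, length=5.0):
    """GP posterior mean and std dev for an error rate observed over time."""
    K = rbf(t_obs, t_obs, length) + noise * np.eye(len(t_obs))
    Ks = rbf(t_new, t_obs, length)
    Kss = rbf(t_new, t_new, length)
    alpha = np.linalg.solve(K, y_obs)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Synthetic slowly drifting error rate observed over 30 rounds:
t_obs = np.arange(30.0)
y_obs = 0.01 + 0.002 * np.sin(t_obs / 10.0)
mean, sd = gp_predict(t_obs, y_obs, np.array([30.0, 100.0]))  # one step ahead vs far future
```

Near the data the prediction tracks the drifting rate with small uncertainty; far beyond it, the posterior reverts to the prior, signalling that the estimated rates should no longer be trusted.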

  8. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    International Nuclear Information System (INIS)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-01-01

    A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (eg, random forests, and LASSO) to map a large set of inexpensively computed “error indicators” (ie, features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (eg, time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well
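The locality idea above, partition the error-indicator feature space and then fit a regression per region, can be sketched with plain k-means and least squares. The one-dimensional piecewise-linear "error surface" below is invented for illustration; the paper itself uses richer regressors such as random forests and LASSO over many indicators.

```python
import numpy as np

def fit_local_error_model(F, e, k=2, iters=20, seed=0):
    """Cluster feature space, then fit a linear error model per cluster.

    F : (n, d) error-indicator features from the surrogate model
    e : (n,) observed surrogate errors (high-fidelity minus surrogate QoI)
    """
    rng = np.random.default_rng(seed)
    centers = F[rng.choice(len(F), k, replace=False)].astype(float)
    for _ in range(iters):                       # plain k-means
        labels = np.argmin(((F[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = F[labels == j].mean(axis=0)
    models = []
    for j in range(k):                           # local least-squares fit
        Xj = np.column_stack([np.ones((labels == j).sum()), F[labels == j]])
        models.append(np.linalg.lstsq(Xj, e[labels == j], rcond=None)[0])
    return centers, models

def predict_error(F_new, centers, models):
    """Route each point to its nearest cluster's local model."""
    labels = np.argmin(((F_new[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    out = np.empty(len(F_new))
    for i, (f, j) in enumerate(zip(F_new, labels)):
        out[i] = models[j] @ np.concatenate([[1.0], f])
    return out

# Toy data: one error indicator, piecewise-linear true surrogate error.
F = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
e = np.where(F[:, 0] < 0.5, 2 * F[:, 0], 1 - F[:, 0])
centers, models = fit_local_error_model(F, e, k=2)
pred = predict_error(F, centers, models)
```

Adding `pred` back onto the surrogate's QoI prediction corresponds to the first use of the error model described above, while its residual spread could feed the statistical second use.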

  9. Approach of simultaneous localization and mapping based on local maps for robot

    Institute of Scientific and Technical Information of China (English)

    CHEN Bai-fan; CAI Zi-xing; HU De-wen

    2006-01-01

An extended Kalman filter approach to simultaneous localization and mapping (SLAM) based on local maps was proposed. A local frame of reference is established periodically at the position of the robot, and the observations of the robot and landmarks are then fused into the global frame of reference. Because each local map is independent, the approach does not accumulate the estimation and computation errors that arise when SLAM applies a Kalman filter directly in the global frame. At the same time, it reduces the computational complexity. The method is shown to be correct and feasible in simulation experiments.
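The fusion step, expressing a landmark estimated in the local frame in the global frame together with a first-order covariance, can be sketched as a 2D pose composition. This is a generic sketch of that transformation, not the authors' exact formulation.

```python
import numpy as np

def to_global(pose, pose_cov, lm_local, lm_cov):
    """Fuse a landmark estimated in a local frame into the global frame.

    pose     : [x, y, theta] of the local frame origin in global coordinates
    pose_cov : (3, 3) covariance of that pose
    lm_local : [lx, ly] landmark position in the local frame
    lm_cov   : (2, 2) landmark covariance in the local frame
    """
    x, y, th = pose
    lx, ly = lm_local
    c, s = np.cos(th), np.sin(th)
    g = np.array([x + c * lx - s * ly, y + s * lx + c * ly])
    R = np.array([[c, -s], [s, c]])
    Jp = np.array([[1.0, 0.0, -s * lx - c * ly],    # Jacobian w.r.t. the pose
                   [0.0, 1.0,  c * lx - s * ly]])
    g_cov = Jp @ pose_cov @ Jp.T + R @ lm_cov @ R.T
    return g, g_cov

# Example: robot at (1, 2) heading 90 deg; a landmark 1 m ahead in the local
# frame lands at (1, 3) in the global frame.
g, g_cov = to_global(np.array([1.0, 2.0, np.pi / 2]),
                     np.diag([1e-4, 1e-4, 1e-4]),
                     np.array([1.0, 0.0]), 0.005 * np.eye(2))
```

The pose-Jacobian term is what carries the local frame's own uncertainty into the global estimate, so errors made inside one local map do not silently leak into the next.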

  10. DEVELOPING EVALUATION INSTRUMENT FOR MATHEMATICS EDUCATIONAL SOFTWARE

    Directory of Open Access Journals (Sweden)

    Wahyu Setyaningrum

    2012-02-01

Full Text Available The rapid increase in and availability of mathematics software, whether for classroom or individual learning activities, presents a challenge for teachers. It has been argued that many products are of limited quality. Some of the more commonly used software products have been criticized for poor content, activities that fail to address some learning issues, poor graphics presentation, inadequate documentation, and other technical problems. The challenge for schools is to ensure that the educational software used in classrooms is appropriate and effective in supporting intended outcomes and goals. This paper aims to develop an instrument for evaluating mathematics educational software in order to help teachers select appropriate software. The instrument covers educational aspects, including content, teaching and learning skills, interaction, and feedback and error correction; and technical aspects, including design, clarity, assessment and documentation, cost, and hardware and software interdependence. The instrument uses a checklist approach, one of the easiest and most effective methods for assessing the quality of educational software: the user simply places a tick against each criterion. The criteria in this instrument are adapted and extended from standard evaluation instruments in several references. Keywords: mathematics educational software, educational aspect, technical aspect.

  11. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  12. Inflammatory bowel disease-specific health-related quality of life instruments: a systematic review of measurement properties.

    Science.gov (United States)

    Chen, Xin-Lin; Zhong, Liang-Huan; Wen, Yi; Liu, Tian-Wen; Li, Xiao-Ying; Hou, Zheng-Kun; Hu, Yue; Mo, Chuan-Wei; Liu, Feng-Bin

    2017-09-15

This review aims to critically appraise and compare the measurement properties of inflammatory bowel disease (IBD)-specific health-related quality of life instruments. Medline, EMBASE and ISI Web of Knowledge were searched from their inception to May 2016. IBD-specific instruments for patients with Crohn's disease, ulcerative colitis or IBD were enrolled. The basic characteristics and domains of the instruments were collected. The methodological quality of the measurement properties, and the measurement properties themselves, were assessed. Fifteen IBD-specific instruments were included: twelve for adult IBD patients and three for paediatric IBD patients. All of the instruments were developed in North American and European countries. The following common domains were identified: IBD-related symptoms and the physical, emotional and social domains. The methodological quality was satisfactory for content validity; fair for internal consistency, reliability, structural validity, hypotheses testing and criterion validity; and poor for measurement error, cross-cultural validity and responsiveness. For adult IBD patients, the IBDQ-32 and its short version (SIBDQ) had good measurement properties and were the most widely used worldwide. For paediatric IBD patients, the IMPACT-III had good measurement properties and had more translated versions. Methodological quality should be improved in most respects, especially measurement error, cross-cultural validity and responsiveness. The IBDQ-32 was the most widely used instrument, with good reliability and validity, followed by the SIBDQ and IMPACT-III. Further validation studies are necessary to support the use of the other instruments.

  13. Portable neutron and gamma-radiation instruments

    International Nuclear Information System (INIS)

    Murray, W.S.; Butterfield, K.B.

    1990-01-01

    This paper reports on the design and building of smart neutron and gamma-radiation detection systems with embedded microprocessors programmed in the FORTH language. These portable instruments can be battery-powered and can provide many analysis functions not available in most radiation detectors. Local operation of the instruments is menu-driven through a graphics liquid crystal display and hex keypad; remote operation is through a serial communications link. While some instruments simply count particles, others determine the energy of the radiation as well as the intensity. The functions the authors have provided include absolute source-strength determination, Feynman variance analysis, sequential-probability ratio test, and time-history recording.

  14. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong; Sun, Shuyu; Xie, Xiaoping

    2015-01-01

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  15. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong

    2015-10-26

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  16. Operator quantum error-correcting subsystems for self-correcting quantum memories

    International Nuclear Information System (INIS)

    Bacon, Dave

    2006-01-01

    The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. Recently this notion has led to a more general notion of quantum error correction known as operator quantum error correction. In standard quantum error-correcting codes, one requires the ability to apply a procedure which exactly reverses on the error-correcting subspace any correctable error. In contrast, for operator error-correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform corrections only modulo the subsystem structure. This does not lead to codes which differ from subspace codes, but does lead to recovery routines which explicitly make use of the subsystem structure. Here we present two examples of such operator error-correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form a simple mean field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature quantum memory, robust to noise without external intervening quantum error-correction procedures

  17. Proceedings of the OECD/CSNI specialist meeting on advanced instrumentation and measurement techniques

    Energy Technology Data Exchange (ETDEWEB)

    Lehner, J [comp.

    1998-09-01

    In the last few years, tremendous advances in the local instrumentation technology for two-phase flow have been accomplished by the applications of new sensor techniques, optical or beam methods and electronic technology. The detailed measurements gave new insight to the true nature of local mechanisms of interfacial transfer between phases, interfacial structure and two-phase flow turbulent transfers. These new developments indicate that more accurate and reliable two-phase flow models can be obtained, if focused experiments are designed and performed by utilizing this advanced instrumentation. The purpose of this Specialist Meeting on Advanced Instrumentation and Measurement Techniques was to review the recent instrumentation developments and the relation between thermal-hydraulic codes and instrumentation capabilities. Four specific objectives were identified for this meeting: bring together international experts on instrumentation, experiments, and modeling; review recent developments in multiphase flow instrumentation; discuss the relation between modeling needs and instrumentation capabilities, and discuss future directions for instrumentation development, modeling, and experiments.

  18. Proceedings of the OECD/CSNI specialist meeting on advanced instrumentation and measurement techniques

    International Nuclear Information System (INIS)

    Lehner, J.

    1998-09-01

    In the last few years, tremendous advances in the local instrumentation technology for two-phase flow have been accomplished by the applications of new sensor techniques, optical or beam methods and electronic technology. The detailed measurements gave new insight to the true nature of local mechanisms of interfacial transfer between phases, interfacial structure and two-phase flow turbulent transfers. These new developments indicate that more accurate and reliable two-phase flow models can be obtained, if focused experiments are designed and performed by utilizing this advanced instrumentation. The purpose of this Specialist Meeting on Advanced Instrumentation and Measurement Techniques was to review the recent instrumentation developments and the relation between thermal-hydraulic codes and instrumentation capabilities. Four specific objectives were identified for this meeting: bring together international experts on instrumentation, experiments, and modeling; review recent developments in multiphase flow instrumentation; discuss the relation between modeling needs and instrumentation capabilities, and discuss future directions for instrumentation development, modeling, and experiments

  19. The PoET (Prevention of Error-Based Transfers) Project.

    Science.gov (United States)

    Oliver, Jill; Chidwick, Paula

    2017-01-01

    The PoET (Prevention of Error-based Transfers) Project is one of the Ethics Quality Improvement Projects (EQIPs) taking place at William Osler Health System. This specific project is designed to reduce transfers from long-term care to hospital that are caused by legal and ethical errors related to consent, capacity and substitute decision-making. The project is currently operating in eight long-term care homes in the Central West Local Health Integration Network and has seen a 56% reduction in multiple transfers before death in hospital.

  20. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  1. Performing T-tests to Compare Autocorrelated Time Series Data Collected from Direct-Reading Instruments.

    Science.gov (United States)

    O'Shaughnessy, Patrick; Cavanaugh, Joseph E

    2015-01-01

    Industrial hygienists now commonly use direct-reading instruments to evaluate hazards in the workplace. The stored values over time from these instruments constitute a time series of measurements that are often autocorrelated. Given the need to statistically compare two occupational scenarios using values from a direct-reading instrument, a t-test must consider measurement autocorrelation or the resulting test will have a largely inflated type-1 error probability (false rejection of the null hypothesis). A method is described for both the one-sample and two-sample cases which properly adjusts for autocorrelation. This method involves the computation of an "equivalent sample size" that effectively decreases the actual sample size when determining the standard error of the mean for the time series. An example is provided for the one-sample case, and an example is given where a two-sample t-test is conducted for two autocorrelated time series comprised of lognormally distributed measurements.
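
    The "equivalent sample size" adjustment described in this record can be sketched as follows. The AR(1) form n(1 − ρ)/(1 + ρ) is a common approximation for the effective number of independent observations and may differ in detail from the authors' exact procedure, so treat this as an illustrative sketch rather than their method:

```python
import math
from statistics import mean, stdev

def equivalent_sample_size(n, rho):
    """Effective number of independent observations for a series with
    lag-1 autocorrelation rho, under an AR(1) assumption."""
    return n * (1 - rho) / (1 + rho)

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a time series."""
    m = mean(x)
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(len(x) - 1))
    den = sum((xi - m) ** 2 for xi in x)
    return num / den

def one_sample_t(x, mu0):
    """One-sample t statistic whose standard error uses the equivalent
    sample size instead of the raw sample size."""
    n_eff = equivalent_sample_size(len(x), lag1_autocorr(x))
    se = stdev(x) / math.sqrt(n_eff)
    return (mean(x) - mu0) / se
```

For positively autocorrelated direct-reading instrument data, n_eff is smaller than n, so the standard error grows and the inflated type-1 error probability of a naive t-test is reduced.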

  2. Scalable error correction in distributed ion trap computers

    International Nuclear Information System (INIS)

    Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.

    2006-01-01

    A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiments.

  3. Cellular telephone-based radiation detection instrument

    Science.gov (United States)

    Craig, William W [Pittsburg, CA; Labov, Simon E [Berkeley, CA

    2011-06-14

    A network of radiation detection instruments, each having a small solid state radiation sensor module integrated into a cellular phone for providing radiation detection data and analysis directly to a user. The sensor module includes a solid-state crystal bonded to an ASIC readout providing a low cost, low power, light weight compact instrument to detect and measure radiation energies in the local ambient radiation field. In particular, the photon energy, time of event, and location of the detection instrument at the time of detection is recorded for real time transmission to a central data collection/analysis system. The collected data from the entire network of radiation detection instruments are combined by intelligent correlation/analysis algorithms which map the background radiation and detect, identify and track radiation anomalies in the region.

  4. Robust frameless stereotactic localization in extra-cranial radiotherapy

    International Nuclear Information System (INIS)

    Riboldi, Marco; Baroni, Guido; Spadea, Maria Francesca; Bassanini, Fabio; Tagaste, Barbara; Garibaldi, Cristina; Orecchia, Roberto; Pedotti, Antonio

    2006-01-01

    In the field of extra-cranial radiotherapy, several inaccuracies can make the application of frameless stereotactic localization techniques error-prone. When optical tracking systems based on surface fiducials are used, inter- and intra-fractional uncertainties in marker three-dimensional (3D) detection may lead to inexact tumor position estimation, resulting in erroneous patient setup. This is due to the fact that external fiducials misdetection results in deformation effects that are poorly handled in a rigid-body approach. In this work, the performance of two frameless stereotactic localization algorithms for 3D tumor position reconstruction in extra-cranial radiotherapy has been specifically tested. Two strategies, unweighted versus weighted, for stereotactic tumor localization were examined by exploiting data coming from 46 patients treated for extra-cranial lesions. Measured isocenter displacements and rotations were combined to define isocentric procedures, featuring 6 degrees of freedom, for correcting patient alignment (isocentric positioning correction). The sensitivity of the algorithms to uncertainties in the 3D localization of fiducials was investigated by means of 184 numerical simulations. The performance of the implemented isocentric positioning correction was compared to conventional point-based registration. The isocentric positioning correction algorithm was tested on a clinical dataset of inter-fractional and intra-fractional setup errors, which was collected by means of an optical tracker on the same group of patients. The weighted strategy exhibited a lower sensitivity to fiducial localization errors in simulated misalignments than the unweighted strategy. Isocenter 3D displacements provided by the weighted strategy were consistently smaller than those featured by the unweighted strategy. The peak decreases in median and quartile values of isocenter 3D displacements were 1.4 and 2.7 mm, respectively. Concerning clinical data, the […]

  5. Spectral and Wavefront Error Performance of WFIRST/AFTA Prototype Filters

    Science.gov (United States)

    Quijada, Manuel; Seide, Laurie; Marx, Cathy; Pasquale, Bert; McMann, Joseph; Hagopian, John; Dominguez, Margaret; Gong, Qian; Morey, Peter

    2016-01-01

    The Cycle 5 design baseline for the Wide-Field Infrared Survey Telescope Astrophysics Focused Telescope Assets (WFIRST/AFTA) instrument includes a single wide-field channel (WFC) instrument for both imaging and slit-less spectroscopy. The only routinely moving part during scientific observations for this wide-field channel is the element wheel (EW) assembly. This filter-wheel assembly will have 8 positions that will be populated with 6 bandpass filters, a blank position, and a Grism that will consist of a three-element assembly to disperse the full field with an undeviated central wavelength for galaxy redshift surveys. All filter elements in the EW assembly will be made out of fused silica substrates (110 mm diameter) that will have the appropriate bandpass coatings according to the filter designations (Z087, Y106, J129, H158, F184, W149 and Grism). This paper presents and discusses the performance (including spectral transmission and reflected/transmitted wavefront error measurements) of a subset of bandpass filter coating prototypes that are based on the WFC instrument filter complement. The bandpass coating prototypes that are tested in this effort correspond to the Z087, W149, and Grism filter elements. These filter coatings have been procured from three different vendors to assess the most challenging aspects in terms of the in-band throughput, out-of-band rejection (including the cut-on and cutoff slopes), and the impact the wavefront error distortions of these filter coatings will have on the imaging performance of the wide-field channel in the WFIRST/AFTA observatory.

  6. A practicable signal processing algorithm for industrial nuclear instrument

    International Nuclear Information System (INIS)

    Tang Yaogeng; Gao Song; Yang Wujiao

    2006-01-01

    In order to reduce the statistical error and to improve dynamic performances of the industrial nuclear instrument, a practicable method of nuclear measurement signal processing is developed according to industrial nuclear measurement features. The algorithm designed is implemented with a single-chip microcomputer. The results of application in a radiation level gauge have proved the effectiveness of this method. (authors)

  7. Error quantification of osteometric data in forensic anthropology.

    Science.gov (United States)

    Langley, Natalie R; Meadows Jantz, Lee; McNulty, Shauna; Maijanen, Heli; Ousley, Stephen D; Jantz, Richard L

    2018-04-10

    This study evaluates the reliability of osteometric data commonly used in forensic case analyses, with specific reference to the measurements in Data Collection Procedures 2.0 (DCP 2.0). Four observers took a set of 99 measurements four times on a sample of 50 skeletons (each measurement was taken 200 times by each observer). Two-way mixed ANOVAs and repeated measures ANOVAs with pairwise comparisons were used to examine interobserver (between-subjects) and intraobserver (within-subjects) variability. Relative technical error of measurement (TEM) was calculated for measurements with significant ANOVA results to examine the error among a single observer repeating a measurement multiple times (e.g. repeatability or intraobserver error), as well as the variability between multiple observers (interobserver error). Two general trends emerged from these analyses: (1) maximum lengths and breadths have the lowest error across the board (TEM […] Forensic Skeletal Material, 3rd edition. Each measurement was examined carefully to determine the likely source of the error (e.g. data input, instrumentation, observer's method, or measurement definition). For several measurements (e.g. anterior sacral breadth, distal epiphyseal breadth of the tibia) only one observer differed significantly from the remaining observers, indicating a likely problem with the measurement definition as interpreted by that observer; these definitions were clarified in DCP 2.0 to eliminate this confusion. Other measurements were taken from landmarks that are difficult to locate consistently (e.g. pubis length, ischium length); these measurements were omitted from DCP 2.0. This manual is available for free download online (https://fac.utk.edu/wp-content/uploads/2016/03/DCP20_webversion.pdf), along with an accompanying instructional video (https://www.youtube.com/watch?v=BtkLFl3vim4). Copyright © 2018 Elsevier B.V. All rights reserved.
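
    The TEM statistic used in this record is conventionally computed from paired repeat measurements. A minimal sketch of the standard two-trial formula follows; the study itself used four trials per observer, for which a generalized multi-trial form exists, so this is illustrative only:

```python
def technical_error_of_measurement(pairs):
    """Absolute TEM for two repeat trials per subject:
    TEM = sqrt(sum(d_i^2) / (2 * n)), where d_i is the difference
    between the two trials for subject i."""
    n = len(pairs)
    return (sum((a - b) ** 2 for a, b in pairs) / (2 * n)) ** 0.5

def relative_tem(pairs):
    """Relative TEM (%): absolute TEM divided by the grand mean of all
    measurements, times 100, allowing comparison across measurements
    of different magnitudes."""
    tem = technical_error_of_measurement(pairs)
    grand_mean = sum(a + b for a, b in pairs) / (2 * len(pairs))
    return 100 * tem / grand_mean
```

Relative TEM is the intraobserver case when both trials come from the same observer, and the interobserver case when each trial comes from a different observer.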

  8. An error reduction algorithm to improve lidar turbulence estimates for wind energy

    Directory of Open Access Journals (Sweden)

    J. F. Newman

    2017-02-01

    Full Text Available Remote-sensing devices such as lidars are currently being investigated as alternatives to cup anemometers on meteorological towers for the measurement of wind speed and direction. Although lidars can measure mean wind speeds at heights spanning an entire turbine rotor disk and can be easily moved from one location to another, they measure different values of turbulence than an instrument on a tower. Current methods for improving lidar turbulence estimates include the use of analytical turbulence models and expensive scanning lidars. While these methods provide accurate results in a research setting, they cannot be easily applied to smaller, vertically profiling lidars in locations where high-resolution sonic anemometer data are not available. Thus, there is clearly a need for a turbulence error reduction model that is simpler and more easily applicable to lidars that are used in the wind energy industry. In this work, a new turbulence error reduction algorithm for lidars is described. The Lidar Turbulence Error Reduction Algorithm, L-TERRA, can be applied using only data from a stand-alone vertically profiling lidar and requires minimal training with meteorological tower data. The basis of L-TERRA is a series of physics-based corrections that are applied to the lidar data to mitigate errors from instrument noise, volume averaging, and variance contamination. These corrections are applied in conjunction with a trained machine-learning model to improve turbulence estimates from a vertically profiling WINDCUBE v2 lidar. The lessons learned from creating the L-TERRA model for a WINDCUBE v2 lidar can also be applied to other lidar devices. L-TERRA was tested on data from two sites in the Southern Plains region of the United States. The physics-based corrections in L-TERRA brought regression line slopes much closer to 1 at both sites and significantly reduced the sensitivity of lidar turbulence errors to atmospheric stability. The accuracy of machine […]

  9. Three-dimensional patient setup errors at different treatment sites measured by the Tomotherapy megavoltage CT

    Energy Technology Data Exchange (ETDEWEB)

    Hui, S.K.; Lusczek, E.; Dusenbery, K. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; DeFor, T. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Biostatistics and Informatics Core; Levitt, S. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; Karolinska Institutet, Stockholm (Sweden). Dept. of Onkol-Patol

    2012-04-15

    Reduction of interfraction setup uncertainty is vital for assuring the accuracy of conformal radiotherapy. We report a systematic study of setup error to assess patients' three-dimensional (3D) localization at various treatment sites. Tomotherapy megavoltage CT (MVCT) images were scanned daily in 259 patients from 2005-2008. We analyzed 6,465 MVCT images to measure setup error for head and neck (H and N), chest/thorax, abdomen, prostate, legs, and total marrow irradiation (TMI). Statistical comparisons of the absolute displacements across sites and time were performed in rotation (R), lateral (x), craniocaudal (y), and vertical (z) directions. The global systematic errors were measured to be less than 3 mm in each direction with increasing order of errors for different sites: H and N, prostate, chest, pelvis, spine, legs, and TMI. The differences in displacements in the x, y, and z directions, and 3D average displacement between treatment sites were significant (p < 0.01). Overall improvement in patient localization with time (after 3-4 treatment fractions) was observed. Large displacement (> 5 mm) was observed in the 75th percentile of the patient groups for chest, pelvis, legs, and spine in the x and y direction in the second week of the treatment. MVCT imaging is essential for determining 3D setup error and to reduce uncertainty in localization at all anatomical locations. Setup error evaluation should be performed daily for all treatment regions, preferably for all treatment fractions. (orig.)
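
    The 3D displacement and percentile summaries reported in this record can be sketched as below. This is a generic sketch of the standard vector-magnitude and percentile computations, not the authors' analysis code:

```python
import math

def displacement_3d(dx, dy, dz):
    """3D setup displacement from lateral (x), craniocaudal (y), and
    vertical (z) shifts (e.g. in mm): the Euclidean magnitude."""
    return math.sqrt(dx * dx + dy * dy + dz * dz)

def percentile(values, p):
    """Percentile with linear interpolation between order statistics
    (0 <= p <= 100), e.g. p=75 for the 75th percentile."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100
    f = math.floor(k)
    c = math.ceil(k)
    if f == c:
        return s[f]
    return s[f] + (s[c] - s[f]) * (k - f)
```

With daily MVCT shifts collected per patient, the 75th percentile of the per-fraction 3D displacements flags sites (chest, pelvis, legs, spine) where large (> 5 mm) displacements persist.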

  10. Fingerprinting Localization Method Based on TOA and Particle Filtering for Mines

    Directory of Open Access Journals (Sweden)

    Boming Song

    2017-01-01

    Full Text Available Accurate target localization technology plays a very important role in ensuring mine safety production and higher production efficiency. The localization accuracy of a mine localization system is influenced by many factors. The most significant factor is the non-line-of-sight (NLOS) propagation error of the localization signal between the access point (AP) and the target node (Tag). In order to improve positioning accuracy, the NLOS error must be suppressed by an optimization algorithm. However, the traditional optimization algorithms are complex and exhibit poor optimization performance. To solve this problem, this paper proposes a new method for mine time of arrival (TOA) localization based on the idea of comprehensive optimization. The proposed method utilizes particle filtering to reduce the TOA data error, and the positioning results are further optimized with fingerprinting based on the Manhattan distance. This proposed method combines the advantages of particle filtering and fingerprinting localization. It reduces algorithm complexity and has better error suppression performance. The experimental results demonstrate that, as compared to the symmetric double-sided two-way ranging (SDS-TWR) method or the received signal strength indication (RSSI)-based fingerprinting method, the proposed method has a significantly improved localization performance, and the environment adaptability is enhanced.
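
    The fingerprint-matching stage described in this record can be sketched as a nearest-neighbor search under the Manhattan distance. The particle-filter smoothing of the raw TOA data is omitted here, and the database layout (position → fingerprint vector) is an assumption for illustration:

```python
def manhattan(a, b):
    """Manhattan (L1) distance between two fingerprint vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def fingerprint_match(measured, database):
    """Return the reference position whose stored fingerprint vector is
    closest in Manhattan distance to the measured vector.
    `database` maps a position tuple to its fingerprint vector."""
    return min(database, key=lambda pos: manhattan(database[pos], measured))
```

In the paper's pipeline, `measured` would be the particle-filtered TOA vector for the Tag, and the returned grid position is the final localization estimate.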

  11. Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation.

    Science.gov (United States)

    Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard

    2011-01-01

    Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused by instrumentation and environmental issues, rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth's magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth's magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environment.

  12. CRLBs for WSNs localization in NLOS environment

    Directory of Open Access Journals (Sweden)

    Wang Peng

    2011-01-01

    Full Text Available Determination of the Cramer-Rao lower bound (CRLB) as an optimality criterion for the problem of localization in wireless sensor networks (WSNs) is a very important issue. Currently, CRLBs have been derived for the line-of-sight (LOS) situation in WSNs. However, one of the major problems for accurate localization in WSNs is non-line-of-sight (NLOS) propagation. This article proposes two CRLBs for WSN localization in the NLOS environment. The proposed CRLBs consider both the cases that positions of reference devices (RDs) are perfectly or imperfectly known. Since a non-parametric kernel method is used to build the probability density function of NLOS errors, the proposed CRLBs are suitable for various distributions of NLOS errors. Moreover, the proposed CRLBs provide a unified presentation for both LOS and NLOS environments. Theoretical analysis also proves that the proposed CRLB for the NLOS situation becomes the CRLB for the LOS situation when NLOS errors go to 0, which gives a robust check for the proposed CRLB.
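
    For reference, the familiar LOS benchmark that the article's NLOS bounds reduce to (when NLOS errors vanish) has the standard range-measurement form below; this is the generic Gaussian-noise LOS case, while the article's NLOS bounds additionally require the kernel-estimated density of the NLOS errors:

\[
\mathbf{J}(\mathbf{p}) \;=\; \frac{1}{\sigma^2}\sum_{i=1}^{N}\mathbf{u}_i\,\mathbf{u}_i^{\mathsf{T}},
\qquad
\mathbf{u}_i \;=\; \frac{\mathbf{p}-\mathbf{p}_i}{\lVert \mathbf{p}-\mathbf{p}_i\rVert},
\qquad
\mathrm{CRLB} \;=\; \operatorname{tr}\!\left(\mathbf{J}^{-1}(\mathbf{p})\right),
\]

    where \(\mathbf{p}\) is the unknown node position, \(\mathbf{p}_i\) the position of the \(i\)-th RD, \(\sigma^2\) the range-noise variance, and \(\mathbf{u}_i\) the unit vector from RD \(i\) toward the node.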

  13. Position Localization with Impulse Ultra Wide Band

    National Research Council Canada - National Science Library

    Zhang, Guoping; Rao, S. V

    2005-01-01

    ...) bias and clock jittering error of TDOA measurement. In our prototype design, we exploit impulse UWB techniques to implement a very low cost localization system that can achieve centimeters localization for indoor applications...

  14. Using wide area differential GPS to improve total system error for precision flight operations

    Science.gov (United States)

    Alter, Keith Warren

    Total System Error (TSE) refers to an aircraft's total deviation from the desired flight path. TSE can be divided into Navigational System Error (NSE), the error attributable to the aircraft's navigation system, and Flight Technical Error (FTE), the error attributable to pilot or autopilot control. Improvement in either NSE or FTE reduces TSE and leads to the capability to fly more precise flight trajectories. The Federal Aviation Administration's Wide Area Augmentation System (WAAS) became operational for non-safety critical applications in 2000 and will become operational for safety critical applications in 2002. This navigation service will provide precise 3-D positioning (demonstrated to better than 5 meters horizontal and vertical accuracy) for civil aircraft in the United States. Perhaps more importantly, this navigation system, which provides continuous operation across large regions, enables new flight instrumentation concepts which allow pilots to fly aircraft significantly more precisely, both for straight and curved flight paths. This research investigates the capabilities of some of these new concepts, including the Highway-In-The Sky (HITS) display, which not only improves FTE but also reduces pilot workload when compared to conventional flight instrumentation. Augmentation to the HITS display, including perspective terrain and terrain alerting, improves pilot situational awareness. Flight test results from demonstrations in Juneau, AK, and Lake Tahoe, CA, provide evidence of the overall feasibility of integrated, low-cost flight navigation systems based on these concepts. These systems, requiring no more computational power than current-generation low-end desktop computers, have immediate applicability to general aviation flight from Cessnas to business jets and can support safer and ultimately more economical flight operations. Commercial airlines may also, over time, benefit from these new technologies.
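
    The TSE decomposition in this record is often modeled, for independent error sources, by a root-sum-square combination; a minimal sketch under that independence assumption (the dissertation may define and combine the terms differently):

```python
import math

def total_system_error(nse, fte):
    """Combine Navigational System Error and Flight Technical Error
    under the assumption that they are independent, zero-mean errors,
    so their magnitudes add in root-sum-square fashion."""
    return math.sqrt(nse ** 2 + fte ** 2)
```

Under this model, reducing either component reduces TSE, which is why both the WAAS navigation improvements (NSE) and the HITS display (FTE) narrow the total deviation from the desired flight path.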

  15. Ex vivo study on root canal instrumentation of two rotary nickel-titanium systems in comparison to stainless steel hand instruments.

    Science.gov (United States)

    Vaudt, J; Bitter, K; Neumann, K; Kielbassa, A M

    2009-01-01

    To investigate instrumentation time, working safety and the shaping ability of two rotary nickel-titanium (NiTi) systems (Alpha System and ProTaper Universal) in comparison to stainless steel hand instruments. A total of 45 mesial root canals of extracted human mandibular molars were selected. On the basis of the degree of curvature the matched teeth were allocated randomly into three groups of 15 teeth each. In group 1 root canals were prepared to size 30 using a standardized manual preparation technique; in group 2 and 3 rotary NiTi instruments were used following the manufacturers' instructions. Instrumentation time and procedural errors were recorded. With the aid of pre- and postoperative radiographs, apical straightening of the canal curvature was determined. Photographs of the coronal, middle and apical cross-sections of the pre- and postoperative canals were taken, and superimposed using a standard software. Based on these composite images the portion of uninstrumented canal walls was evaluated. Active instrumentation time of the Alpha System was significantly reduced compared with ProTaper Universal and hand instrumentation (P < 0.05; anova). No instrument fractures occurred in any of the groups. The Alpha System revealed significantly less apical straightening compared with the other instruments (P < 0.05; Mann-Whitney U test). In the apical cross-sections Alpha System resulted in significantly less uninstrumented canal walls compared with stainless steel files (P < 0.05; chi-squared test). Despite the demonstrated differences between the systems, an apical straightening effect could not be prevented; areas of uninstrumented root canal wall were left in all regions using the various systems.

  16. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    Science.gov (United States)

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machines (SVMs) are often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization algorithm (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to escape local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and the glowworm swarm optimization algorithm (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method achieves better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
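The abstract does not spell out the NAPSO update rules (it adds natural selection and simulated annealing on top of PSO). The canonical PSO loop that such hyperparameter tuning builds on can be sketched as follows, minimizing a stand-in loss rather than an actual SVM cross-validation error; all coefficient values are illustrative.

```python
import random

def pso(loss, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    """Canonical particle swarm optimization minimizing `loss` over
    [lo, hi]^dim. Inertia/cognitive/social weights are illustrative."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = loss(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In an SVM-tuning setting the two coordinates would typically be the penalty C and kernel width gamma, and `loss` would be a cross-validation error; here it is a plain quadratic.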

  17. A statistical model for measurement error that incorporates variation over time in the target measure, with application to nutritional epidemiology.

    Science.gov (United States)

    Freedman, Laurence S; Midthune, Douglas; Dodd, Kevin W; Carroll, Raymond J; Kipnis, Victor

    2015-11-30

    Most statistical methods that adjust analyses for measurement error assume that the target exposure T is a fixed quantity for each individual. However, in many applications, the value of T for an individual varies with time. We develop a model that accounts for such variation, describing the model within the framework of a meta-analysis of validation studies of dietary self-report instruments, where the reference instruments are biomarkers. We demonstrate that in this application, the estimates of the attenuation factor and correlation with true intake, key parameters quantifying the accuracy of the self-report instrument, are sometimes substantially modified under the time-varying exposure model compared with estimates obtained under a traditional fixed-exposure model. We conclude that accounting for the time element in measurement error problems is potentially important. Copyright © 2015 John Wiley & Sons, Ltd.
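For readers unfamiliar with the attenuation factor mentioned above: under the classical fixed-exposure model Q = T + U, with self-report Q, true exposure T, and independent error U, it is the slope from regressing T on Q. A minimal sketch on simulated data (not the paper's time-varying model):

```python
def attenuation_factor(true_intake, reported):
    """Slope from regressing true intake T on the self-report Q:
    lambda = cov(T, Q) / var(Q). Under the classical model Q = T + U
    with independent error U, lambda = var(T) / (var(T) + var(U)) < 1,
    so associations estimated from Q alone are attenuated."""
    n = len(reported)
    mt = sum(true_intake) / n
    mq = sum(reported) / n
    cov = sum((t - mt) * (q - mq)
              for t, q in zip(true_intake, reported)) / (n - 1)
    var_q = sum((q - mq) ** 2 for q in reported) / (n - 1)
    return cov / var_q
```

With var(T) = var(U) = 1 the attenuation factor is 0.5; the paper's point is that letting T vary over time can change such estimates substantially.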

  18. Evolutionary programming for neutron instrument optimisation

    Energy Technology Data Exchange (ETDEWEB)

    Bentley, Phillip M. [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany)]. E-mail: phillip.bentley@hmi.de; Pappas, Catherine [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany); Habicht, Klaus [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany); Lelievre-Berna, Eddy [Institut Laue-Langevin, 6 rue Jules Horowitz, BP 156, 38042 Grenoble Cedex 9 (France)

    2006-11-15

    Virtual instruments based on Monte-Carlo techniques are now an integral part of novel instrumentation development, and the existing codes (McSTAS and Vitess) are extensively used to define and optimise novel instrumental concepts. Neutron spectrometers, however, involve a large number of parameters, and their optimisation is often a complex and tedious procedure. Artificial intelligence algorithms are proving increasingly useful in such situations. Here, we present an automatic, reliable and scalable numerical optimisation concept based on the canonical genetic algorithm (GA). The algorithm was used to optimise the 3D magnetic field profile of the NSE spectrometer SPAN at the HMI. We discuss the potential of the GA which, combined with the existing Monte-Carlo codes (Vitess, McSTAS, etc.), leads to a very powerful tool for automated global optimisation of a general neutron scattering instrument, avoiding local optimum configurations.
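The canonical GA named above can be sketched on a toy objective (OneMax) standing in for the SPAN field-profile merit function, which the abstract does not specify; the operator choices here (tournament selection, one-point crossover, bit-flip mutation, elitism) are illustrative, not the paper's exact configuration.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=30, generations=80,
                      p_mut=0.05, seed=1):
    """Canonical GA: tournament selection, one-point crossover, bit-flip
    mutation, with elitism. `fitness` maps a bit list to a score."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        elite = max(pop, key=fitness)
        children = [elite[:]]                     # elitism: keep the best
        while len(children) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]           # one-point crossover
            for i in range(n_bits):               # bit-flip mutation
                if rng.random() < p_mut:
                    child[i] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=fitness)
```

In the instrument-optimisation setting, evaluating `fitness` would mean running a Monte-Carlo simulation (Vitess, McSTAS) of the candidate configuration, which is why a global, gradient-free search is attractive.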

  19. Instrument for track linear element recognition

    International Nuclear Information System (INIS)

    Krupnov, V.E.; Fedotov, O.P.

    1977-01-01

    The paper describes the construction of an instrument for recognizing linear track elements. The instrument design is based on an algorithm that converts point data into a set of linear elements. The flowsheet of the instrument shows its major units: the data converter, the data representation register unit, local computers, and the interface with the central computer. The data representation register unit comprises sixteen registers and can present data from sixteen lines during raster scanning of a picture taken from a track chamber. The coordinate code of a point recorded on a picture is up to 16 digits wide. The inner operating cycle time of the instrument is 1.3 μs. The average time required to process data from sixteen scanning lines is 250 μs.

  20. Evolutionary programming for neutron instrument optimisation

    International Nuclear Information System (INIS)

    Bentley, Phillip M.; Pappas, Catherine; Habicht, Klaus; Lelievre-Berna, Eddy

    2006-01-01

    Virtual instruments based on Monte-Carlo techniques are now an integral part of novel instrumentation development, and the existing codes (McSTAS and Vitess) are extensively used to define and optimise novel instrumental concepts. Neutron spectrometers, however, involve a large number of parameters, and their optimisation is often a complex and tedious procedure. Artificial intelligence algorithms are proving increasingly useful in such situations. Here, we present an automatic, reliable and scalable numerical optimisation concept based on the canonical genetic algorithm (GA). The algorithm was used to optimise the 3D magnetic field profile of the NSE spectrometer SPAN at the HMI. We discuss the potential of the GA which, combined with the existing Monte-Carlo codes (Vitess, McSTAS, etc.), leads to a very powerful tool for automated global optimisation of a general neutron scattering instrument, avoiding local optimum configurations.

  1. System for simultaneously measuring 6DOF geometric motion errors using a polarization maintaining fiber-coupled dual-frequency laser.

    Science.gov (United States)

    Cui, Cunxing; Feng, Qibo; Zhang, Bin; Zhao, Yuqiong

    2016-03-21

    A novel method for simultaneously measuring six degree-of-freedom (6DOF) geometric motion errors is proposed in this paper, and the corresponding measurement instrument is developed. Simultaneous measurement of 6DOF geometric motion errors using a polarization maintaining fiber-coupled dual-frequency laser is accomplished for the first time to the best of the authors' knowledge. Dual-frequency laser beams that are orthogonally linearly polarized were adopted as the measuring datum. Positioning error measurement was achieved by heterodyne interferometry, and the other 5DOF geometric motion errors were obtained by fiber collimation measurement. A series of experiments was performed to verify the effectiveness of the developed instrument. The experimental results showed that the stability and accuracy of the positioning error measurement are 31.1 nm and 0.5 μm, respectively. For the straightness error measurements, the stability and resolution are 60 and 40 nm, respectively, and the maximum deviation of repeatability is ±0.15 μm in the x direction and ±0.1 μm in the y direction. For pitch and yaw measurements, the stabilities are 0.03″ and 0.04″, the maximum deviations of repeatability are ±0.18″ and ±0.24″, and the accuracies are 0.4″ and 0.35″, respectively. The stability and resolution of roll measurement are 0.29″ and 0.2″, respectively, and the accuracy is 0.6″.

  2. Authentication of nuclear-material assays made with in-plant instruments

    International Nuclear Information System (INIS)

    Hatcher, C.R.; Hsue, S.T.; Russo, P.A.

    1982-01-01

    This paper develops a general approach for International Atomic Energy Agency (IAEA) authentication of nuclear material assays made with in-plant instruments under facility operator control. The IAEA is evaluating the use of in-plant instruments as a part of international safeguards at large bulk-handling facilities, such as reprocessing plants, fuel fabrication plants, and enrichment plants. One of the major technical problems associated with IAEA use of data from in-plant instruments is the need to show that there has been no tampering with the measurements. Two fundamentally different methods are discussed that can be used by IAEA inspectors to independently verify (or authenticate) measurements made with in-plant instruments. Method 1, called external authentication, uses a protected IAEA measurement technique to compare in-plant instrument results with IAEA results. Method 2, called internal authentication, uses protected IAEA standards, known physical constants, and special test procedures to determine the performance characteristics of the in-plant instrument. The importance of measurement control programs to detect normally expected instrument failures and procedural errors is also addressed. The paper concludes with a brief discussion of factors that should be considered by the designers of new in-plant instruments in order to facilitate IAEA authentication procedures

  3. Comparing surgical trays with redundant instruments with trays with reduced instruments: a cost analysis.

    Science.gov (United States)

    John-Baptiste, A; Sowerby, L J; Chin, C J; Martin, J; Rotenberg, B W

    2016-01-01

    When prearranged standard surgical trays contain instruments that are repeatedly unused, the redundancy can result in unnecessary health care costs. Our objective was to estimate potential savings by performing an economic evaluation comparing the cost of surgical trays with redundant instruments with surgical trays with reduced instruments ("reduced trays"). We performed a cost-analysis from the hospital perspective over a 1-year period. Using a mathematical model, we compared the direct costs of trays containing redundant instruments to reduced trays for 5 otolaryngology procedures. We incorporated data from several sources including local hospital data on surgical volume, the number of instruments on redundant and reduced trays, wages of personnel and time required to pack instruments. From the literature, we incorporated instrument depreciation costs and the time required to decontaminate an instrument. We performed 1-way sensitivity analyses on all variables, including surgical volume. Costs were estimated in 2013 Canadian dollars. The cost of redundant trays was $21 806 and the cost of reduced trays was $8803, for a 1-year cost saving of $13 003. In sensitivity analyses, cost savings ranged from $3262 to $21 395, based on the surgical volume at the institution. Variation in surgical volume resulted in a wider range of estimates, with a minimum of $3253 for low-volume to a maximum of $52 012 for high-volume institutions. Our study suggests moderate savings may be achieved by reducing surgical tray redundancy and, if applied to other surgical specialties, may result in savings to Canadian health care systems.
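The arithmetic behind such a tray cost model is simple enough to sketch. The function below is a hypothetical reconstruction (parameter names and the values used in the usage note are invented for illustration), not the authors' actual model, which also drew wage, volume, and depreciation data from hospital records and the literature.

```python
def annual_tray_cost(surgical_volume, n_instruments, pack_seconds,
                     decon_seconds, hourly_wage, depreciation_per_use):
    """Yearly direct cost of one tray configuration: labour to pack and
    decontaminate every instrument after each case, plus per-use
    instrument depreciation. All parameters are illustrative."""
    labour_per_case = (n_instruments * (pack_seconds + decon_seconds)
                       / 3600.0 * hourly_wage)
    depreciation_per_case = n_instruments * depreciation_per_use
    return surgical_volume * (labour_per_case + depreciation_per_case)
```

Because both terms scale linearly with instrument count and case volume, removing redundant instruments yields savings that grow with surgical volume, matching the pattern of the sensitivity analysis.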

  4. Error handling for the CDF online silicon vertex tracker

    CERN Document Server

    Bari, M; Cerri, A; Dell'Orso, Mauro; Donati, S; Galeotti, S; Giannetti, P; Morsani, F; Punzi, G; Ristori, L; Spinella, F; Zanetti, A M

    2001-01-01

    The online silicon vertex tracker (SVT) is composed of 104 VME 9U digital boards (of eight different types). Since the data output from the SVT (a few MB/s) are a small fraction of the input data (200 MB/s), it is extremely difficult to track possible internal errors by using only the output stream. For this reason, several diagnostic tools have been implemented: local error registers, error bits propagated through the data streams, and the Spy Buffer system. Data flowing through each input and output stream of every board are continuously copied to memory banks named spy buffers, which act as built-in logic state analyzers hooked continuously to internal data streams. The contents of all buffers can be frozen at any time (e.g., on error detection) to take a snapshot of all data flowing through each SVT board. The spy buffers are coordinated at system level by the Spy Control Board. The architecture, design, and implementation of this system are described.

  5. Quantum entanglement in non-local games, graph parameters and zero-error information theory

    NARCIS (Netherlands)

    Scarpa, G.

    2013-01-01

    We study quantum entanglement and some of its applications in graph theory and zero-error information theory. In Chapter 1 we introduce entanglement and other fundamental concepts of quantum theory. In Chapter 2 we address the question of how much quantum correlations generated by entanglement can

  6. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse; Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  7. Emission inventory: An urban public policy instrument and benchmark

    International Nuclear Information System (INIS)

    D'Avignon, Alexander; Azevedo Carloni, Flavia; Lebre La Rovere, Emilio; Burle Schmidt Dubeux, Carolina

    2010-01-01

    Global concern with climate change has led to the development of a variety of solutions to monitor and reduce emissions on both local and global scales. Under the United Nations Framework Convention on Climate Change (UNFCCC), both developed and emerging countries have assumed responsibility for developing and updating national inventories of greenhouse gas emissions from anthropic sources. This creates opportunities and incentives for cities to carry out their own local inventories and, thereby, to develop air quality management plans involving the essential key players and stakeholders at the local level. The aim of this paper is to discuss the role of local inventories as an urban public policy instrument and how this type of local instrument may bring countrywide advantages and enhance a country's global position. Local inventories have been carried out in many cities of the world; their main advantage is that they provide an overview of the emissions produced by different municipal activities, thereby helping decision makers elaborate efficient air quality management plans. In that way, measures aimed at reducing fossil fuel consumption to lower local atmospheric pollution levels can also reduce GHG emissions.

  8. EPR™ reactor neutron instrumentation

    International Nuclear Information System (INIS)

    Pfeiffer, Maxime; SALA, Stephanie

    2013-06-01

    The core safety during operation is linked, in particular, to the respect of criteria related to the heat generated in fuel rods and to the heat exchange between the rods and the coolant. This local power information is linked to the power distribution in the core. In order to evaluate the core power distribution, the EPR™ reactor relies on several types of neutron detectors: ionization chambers located outside the vessel, used for protection and monitoring; a fixed in-core instrumentation based on cobalt self-powered neutron detectors, used for protection and monitoring; and a mobile reference in-core instrumentation based on vanadium aero-balls. This document provides a description of this instrumentation and its use in core protection, limitation, monitoring and control functions. In particular, a description of the detectors and the principles of their signal generation is supplied, as well as a description of the treatments related to these detectors in the EPR™ reactor I&C systems (including periodical calibration). (authors)

  9. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently, Chakraborty proposed a simple technique called the packet combining (PC) scheme, in which errors are corrected at the receiver from the erroneous copies. The PC scheme fails (i) when the bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. These cases have been addressed recently by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)
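The basic packet combining idea can be sketched as follows: bit positions where two received copies disagree are candidate error locations, and the receiver searches over flips of those positions until the checksum passes. This is an illustrative reconstruction, not the authors' exact algorithm; note that it exhibits failure mode (i), since a bit corrupted identically in both copies never appears in the candidate set.

```python
from itertools import combinations

def combine_packets(copy_a, copy_b, crc_ok):
    """Packet combining sketch: positions where two received copies of
    the same packet disagree are candidate error locations. Try flipping
    subsets of those positions in copy_a until the checksum `crc_ok`
    passes. Returns the corrected bit list, or None on failure (e.g.
    when both copies err in the same bit, the case PRPC addresses)."""
    diff = [i for i, (a, b) in enumerate(zip(copy_a, copy_b)) if a != b]
    for r in range(len(diff) + 1):
        for subset in combinations(diff, r):
            cand = list(copy_a)
            for i in subset:
                cand[i] ^= 1
            if crc_ok(cand):
                return cand
    return None
```

The search is exponential only in the number of disagreeing bits, which is small for realistic bit error rates.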

  10. Structural damage detection robust against time synchronization errors

    International Nuclear Information System (INIS)

    Yan, Guirong; Dyke, Shirley J

    2010-01-01

    Structural damage detection based on wireless sensor networks can be affected significantly by time synchronization errors among sensors. Precise time synchronization of sensor nodes has been viewed as crucial for addressing this issue. However, precise time synchronization over a long period of time is often impractical in large wireless sensor networks due to two inherent challenges. First, time synchronization needs to be performed periodically, requiring frequent wireless communication among sensors at significant energy cost. Second, significant time synchronization errors may result from node failures which are likely to occur during long-term deployment over civil infrastructures. In this paper, a damage detection approach is proposed that is robust against time synchronization errors in wireless sensor networks. The paper first examines the ways in which time synchronization errors distort identified mode shapes, and then proposes a strategy for reducing distortion in the identified mode shapes. Modified values for these identified mode shapes are then used in conjunction with flexibility-based damage detection methods to localize damage. This alternative approach relaxes the need for frequent sensor synchronization and can tolerate significant time synchronization errors caused by node failures. The proposed approach is successfully demonstrated through numerical simulations and experimental tests in a lab

  11. Regional compensation for statistical maximum likelihood reconstruction error of PET image pixels

    International Nuclear Information System (INIS)

    Forma, J; Ruotsalainen, U; Niemi, J A

    2013-01-01

    In positron emission tomography (PET), there is an increasing interest in studying not only the regional mean tracer concentration, but also its variation arising from local differences in physiology, i.e. the tissue heterogeneity. However, in reconstructed images this physiological variation is shadowed by a large reconstruction error, which is caused by noisy data and the inversion of the tomographic problem. We present a new procedure which can quantify the error variation in regional reconstructed values for a given PET measurement, and reveal the remaining tissue heterogeneity. The error quantification is made by creating and reconstructing noise realizations of virtual sinograms, which are statistically similar to the measured sinogram. Tests with physical phantom data show that the characterization of error variation and the true heterogeneity are possible, despite the model error present when a real measurement is considered. (paper)

  12. The instrumentation of fast reactor

    International Nuclear Information System (INIS)

    Endo, Akira

    2003-03-01

    The author has been engaged in the development of fast reactors for the last 30 years, from the early technology development on the experimental breeder reactor Joyo to later work on the prototype breeder reactor Monju. This paper outlines that experience in order to pass it on to younger engineers, in the sincere hope that the information given will be utilised in future educational training material. The paper discusses the wide diversity of instrument technology that the fast breeder reactor requires. The first chapter outlines the fast reactor system; reactor instrumentation, measurement principles, temperature dependencies, and response characteristics are then discussed from various viewpoints in chapters two and three. The important issues of failed fuel location detection and sodium leak detection from steam generators are discussed in chapters four and five, respectively. Appended to the report is an explanation of methods for measuring the response characteristics of instrumentation systems using error analysis, random signal theory, and autoregressive (AR) models, a topic that is becoming indispensable for engineers involved with this technology. (author)

  13. Digital instrumentation system for nuclear research reactors

    International Nuclear Information System (INIS)

    Aghina, Mauricio A.C.; Carvalho, Paulo Vitor R.

    2002-01-01

    This work describes a proposal for a fully digital nuclear instrumentation and safety system for the Argonauta reactor. The system is divided into subsystems: a pulse channel, a current channel, conventional instrumentation, and a safety system. The subsystems are connected through a redundant dual local network using the Modbus/RTU protocol. The pulse channel, the current channel, and the safety system all use modules operating in triple redundancy. (author)

  14. Eight year experience in open ended instrumentation laboratory

    Science.gov (United States)

    Marques, Manuel B.; Rosa, Carla C.; Marques, Paulo V. S.

    2015-10-01

    When designing laboratory courses in a Physics major we consider a range of objectives: teaching Physics; developing lab competencies; instrument control and data acquisition; learning about measurement errors and error propagation; an introduction to project management; team work skills; and scientific writing. But nowadays we face pressure to decrease laboratory hours due to the cost involved. Many universities are replacing lab classes with simulation activities, hiring PhD and masters students to teach first-year lab classes, and reducing lab hours. This leads to formatted lab scripts and poor student autonomy, and fails to foster creativity. In this paper we present our eight-year experience with a laboratory course that is mandatory in the third year of the Physics and Physical Engineering degrees. Since the students have previously taken two standard laboratory courses, we focus on teaching instrumentation and giving students autonomy. The course is divided in two parts: one third is dedicated to learning computer-controlled instrumentation and data acquisition (based on LabVIEW); the remaining two thirds are dedicated to a group project. In this project, the team (2 or 3 students) must develop a project and present it in a typical conference format at the end of the semester. The project assignments are usually not very detailed (about two or three lines long), giving only general guidelines pointing to a successful project (students often recycle objectives, putting forward a very personal project); all of them require assembling some hardware. Due to our background, about one third of the projects are related to Optics.

  15. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    In spectacular events, a combination of component failure and human error is often found. In particular, the Rasmussen Report and the German Risk Assessment Study show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  16. Adjoint-Based a Posteriori Error Estimation for Coupled Time-Dependent Systems

    KAUST Repository

    Asner, Liya; Tavener, Simon; Kay, David

    2012-01-01

    We consider time-dependent parabolic problems coupled across a common interface which we formulate using a Lagrange multiplier construction and solve by applying a monolithic solution technique. We derive an adjoint-based a posteriori error representation for a quantity of interest given by a linear functional of the solution. We establish the accuracy of our error representation formula through numerical experimentation and investigate the effect of error in the adjoint solution. Crucially, the error representation affords a distinction between temporal and spatial errors and can be used as a basis for a blockwise time-space refinement strategy. Numerical tests illustrate the efficacy of the refinement strategy by capturing the distinctive behavior of a localized traveling wave solution. The saddle point systems considered here are equivalent to those arising in the mortar finite element technique for parabolic problems. © 2012 Society for Industrial and Applied Mathematics.
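The generic shape of such an adjoint-based (dual-weighted-residual) error representation can be sketched for a linear stationary model problem; this is a schematic, not the paper's exact formula, which additionally involves the interface Lagrange multiplier and time dependence.

```latex
% Primal problem:   find u        with  a(u, v)       = \ell(v)  for all v
% Adjoint problem:  find \varphi  with  a(v, \varphi) = J(v)     for all v
% For a Galerkin approximation u_h, linearity of a and J gives
%   J(u) - J(u_h) = a(u - u_h, \varphi) = \ell(\varphi) - a(u_h, \varphi),
% i.e. the error in the quantity of interest equals the primal residual
% weighted by the adjoint solution:
J(u) - J(u_h) \;=\; \ell(\varphi) - a(u_h,\varphi) \;=:\; \rho(u_h;\varphi)
```

Splitting the residual term by element and time slab is what permits the blockwise time-space refinement strategy described in the abstract.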

  17. WE-AB-BRA-01: 3D-2D Image Registration for Target Localization in Spine Surgery: Comparison of Similarity Metrics Against Robustness to Content Mismatch

    International Nuclear Information System (INIS)

    De Silva, T; Ketcha, M; Siewerdsen, J H; Uneri, A; Reaungamornrat, S; Vogt, S; Kleinszig, G; Lo, S F; Wolinsky, J P; Gokaslan, Z L; Aygun, N

    2015-01-01

    Purpose: In image-guided spine surgery, mapping 3D preoperative images to 2D intraoperative images via 3D-2D registration can provide valuable assistance in target localization. However, the presence of surgical instrumentation, hardware implants, and soft-tissue resection/displacement causes mismatches in image content, confounding existing registration methods. Manual/semi-automatic methods to mask such extraneous content are time consuming, user-dependent, error prone, and disruptive to clinical workflow. We developed and evaluated 2 novel similarity metrics within a robust registration framework to overcome such challenges in target localization. Methods: An IRB-approved retrospective study in 19 spine surgery patients included 19 preoperative 3D CT images and 50 intraoperative mobile radiographs in cervical, thoracic, and lumbar spine regions. A neuroradiologist provided truth definition of vertebral positions in CT and radiography. 3D-2D registration was performed using the CMA-ES optimizer with 4 gradient-based image similarity metrics: (1) gradient information (GI); (2) gradient correlation (GC); (3) a novel variant referred to as gradient orientation (GO); and (4) a second variant referred to as truncated gradient correlation (TGC). Registration accuracy was evaluated in terms of the projection distance error (PDE) of the vertebral levels. Results: Conventional similarity metrics were susceptible to gross registration error and failure modes associated with the presence of surgical instrumentation: for GI, the median PDE and interquartile range was 33.0±43.6 mm; similarly for GC, PDE = 23.0±92.6 mm. The robust metrics GO and TGC, on the other hand, demonstrated major improvement in PDE (7.6±9.4 mm and 8.1±18.1 mm, respectively) and elimination of gross failure modes. Conclusion: The proposed GO and TGC similarity measures improve registration accuracy and robustness to gross failure in the presence of strong image content mismatch. Such

  18. WE-AB-BRA-01: 3D-2D Image Registration for Target Localization in Spine Surgery: Comparison of Similarity Metrics Against Robustness to Content Mismatch

    Energy Technology Data Exchange (ETDEWEB)

    De Silva, T; Ketcha, M; Siewerdsen, J H [Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD (United States); Uneri, A; Reaungamornrat, S [Department of Computer Science, Johns Hopkins University, Baltimore, MD (United States); Vogt, S; Kleinszig, G [Siemens Healthcare XP Division, Erlangen, DE (Germany); Lo, S F; Wolinsky, J P; Gokaslan, Z L [Department of Neurosurgery, The Johns Hopkins Hospital, Baltimore, MD (United States); Aygun, N [Department of Radiology and Radiological Sciences, The Johns Hopkins Hospital, Baltimore, MD (United States)]

    2015-06-15

    Purpose: In image-guided spine surgery, mapping 3D preoperative images to 2D intraoperative images via 3D-2D registration can provide valuable assistance in target localization. However, the presence of surgical instrumentation, hardware implants, and soft-tissue resection/displacement causes mismatches in image content, confounding existing registration methods. Manual/semi-automatic methods to mask such extraneous content are time consuming, user-dependent, error prone, and disruptive to clinical workflow. We developed and evaluated 2 novel similarity metrics within a robust registration framework to overcome such challenges in target localization. Methods: An IRB-approved retrospective study in 19 spine surgery patients included 19 preoperative 3D CT images and 50 intraoperative mobile radiographs in cervical, thoracic, and lumbar spine regions. A neuroradiologist provided truth definition of vertebral positions in CT and radiography. 3D-2D registration was performed using the CMA-ES optimizer with 4 gradient-based image similarity metrics: (1) gradient information (GI); (2) gradient correlation (GC); (3) a novel variant referred to as gradient orientation (GO); and (4) a second variant referred to as truncated gradient correlation (TGC). Registration accuracy was evaluated in terms of the projection distance error (PDE) of the vertebral levels. Results: Conventional similarity metrics were susceptible to gross registration error and failure modes associated with the presence of surgical instrumentation: for GI, the median PDE and interquartile range was 33.0±43.6 mm; similarly for GC, PDE = 23.0±92.6 mm. The robust metrics GO and TGC, on the other hand, demonstrated major improvement in PDE (7.6±9.4 mm and 8.1±18.1 mm, respectively) and elimination of gross failure modes. Conclusion: The proposed GO and TGC similarity measures improve registration accuracy and robustness to gross failure in the presence of strong image content mismatch. Such
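Of the four metrics, gradient correlation (GC) is the easiest to illustrate: the normalized cross-correlation of the gradient images, averaged over the two gradient directions. The sketch below is a simplified 2D version, not the study's implementation, which compares a digitally reconstructed radiograph of the CT against the intraoperative radiograph.

```python
import numpy as np

def gradient_correlation(fixed, moving, eps=1e-12):
    """Gradient correlation (GC): normalized cross-correlation of the
    gradient images of `fixed` and `moving`, averaged over the two
    gradient directions. Values near 1 indicate well-aligned edges."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum()
                     / (np.sqrt((a ** 2).sum() * (b ** 2).sum()) + eps))
    gy_f, gx_f = np.gradient(fixed)
    gy_m, gx_m = np.gradient(moving)
    return 0.5 * (ncc(gx_f, gx_m) + ncc(gy_f, gy_m))
```

Because the metric pools all gradients globally, strong spurious edges from instrumentation can dominate it, which is the failure mode the GO and TGC variants are designed to suppress.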

  19. Simultaneous treatment of unspecified heteroskedastic model error distribution and mismeasured covariates for restricted moment models.

    Science.gov (United States)

    Garcia, Tanya P; Ma, Yanyuan

    2017-10-01

    We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root- n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.

  20. Planck 2013 results. IV. Low Frequency Instrument beams and window functions

    DEFF Research Database (Denmark)

    Planck Collaboration,; Aghanim, N.; Armitage-Caplan, C.

    2014-01-01

    This paper presents the characterization of the in-flight beams, the beam window functions and the associated errors for the Planck Low Frequency Instrument (LFI). Knowledge of the beam profiles is the key to determining their imprint on the transfer function from the observed to the actual sky a...

  1. Planck 2013 results. IV. Low Frequency Instrument beams and window functions

    DEFF Research Database (Denmark)

    Planck Collaboration,; Aghanim, N.; Armitage-Caplan, C.

    2013-01-01

    This paper presents the characterization of the in-flight beams, the beam window functions and the associated errors for the Planck Low Frequency Instrument (LFI). Knowledge of the beam profiles is the key to determining their imprint on the transfer function from the observed to the actual sky a...

  2. Performance Analysis of Local Ensemble Kalman Filter

    Science.gov (United States)

    Tong, Xin T.

    2018-03-01

    Ensemble Kalman filter (EnKF) is an important data assimilation method for high-dimensional geophysical systems. Efficient implementation of EnKF in practice often involves the localization technique, which updates each component using only information within a local radius. This paper rigorously analyzes the local EnKF (LEnKF) for linear systems and shows that the filter error can be dominated by the ensemble covariance, as long as (1) the sample size exceeds the logarithm of the state dimension times a constant that depends only on the local radius; (2) the forecast covariance matrix admits a stable localized structure. In particular, this indicates that with small system and observation noises, the filter will remain accurate over long times even if the initialization is not. The analysis also reveals an intrinsic inconsistency caused by the localization technique, and a stable localized structure is necessary to control this inconsistency. While this structure is usually taken for granted for the operation of LEnKF, it can also be rigorously proved for linear systems with sparse local observations and weak local interactions. These theoretical results are also validated by numerical implementation of LEnKF on a simple stochastic turbulence model in two dynamical regimes.
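The localization technique referred to above is commonly implemented as a Schur (elementwise) product of the sample covariance with a taper. The sketch below uses a hard cutoff taper and an identity true covariance purely for illustration; practical LEnKF implementations typically use smooth tapers such as Gaspari-Cohn:

```python
import math, random

random.seed(2)

dim, n_ens = 20, 10
# Synthetic ensemble: independent unit-variance components, so the
# true forecast covariance is the identity matrix.
ensemble = [[random.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n_ens)]

def sample_cov(ens):
    n, d = len(ens), len(ens[0])
    mean = [sum(m[i] for m in ens) / n for i in range(d)]
    return [[sum((m[i] - mean[i]) * (m[j] - mean[j]) for m in ens) / (n - 1)
             for j in range(d)] for i in range(d)]

def localize(cov, radius):
    # Schur-product taper: keep entries within the local radius, zero the rest
    d = len(cov)
    return [[cov[i][j] if abs(i - j) <= radius else 0.0 for j in range(d)]
            for i in range(d)]

def frob_err(cov):
    # Frobenius distance from the true (identity) covariance
    return math.sqrt(sum((cov[i][j] - (1.0 if i == j else 0.0)) ** 2
                         for i in range(len(cov)) for j in range(len(cov))))

raw = sample_cov(ensemble)
raw_err, loc_err = frob_err(raw), frob_err(localize(raw, 2))
print(raw_err, loc_err)   # the taper suppresses spurious long-range correlations
```

With a small ensemble, most off-diagonal entries of the raw sample covariance are pure sampling noise; zeroing entries outside the local radius removes that noise at the cost of the inconsistency analyzed in the paper.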

  3. Quantization error of CCD cameras and their influence on phase calculation in fringe pattern analysis.

    Science.gov (United States)

    Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R

    2003-09-10

    We present an investigation into the phase errors that occur in fringe pattern analysis and that are caused by quantization effects. When acquisition devices with a limited camera bit depth are used, there are a limited number of quantization levels available to record the signal. This may adversely affect the recorded signal and adds a potential source of instrumental error to the measurement system. Quantization effects also determine the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique. However, the principles can be applied equally well to other phase-measuring techniques to yield the phase error distribution that is caused by the camera bit depth.
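The basic effect can be reproduced with a small numerical experiment: build a synthetic fringe pattern, quantize it to a given bit depth, and extract the phase by Fourier fringe analysis (isolate the carrier lobe, invert, take the argument). All signal parameters below are arbitrary choices for illustration:

```python
import cmath, math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(spec):
    n = len(spec)
    return [sum(spec[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def quantize(sig, bits):
    # signal assumed normalized to [0, 1]; round to 2**bits - 1 levels
    levels = 2 ** bits - 1
    return [round(v * levels) / levels for v in sig]

n, f0, band = 256, 32, 8          # samples, carrier bin, half-width of carrier lobe
true_phase = [0.3 * math.sin(2 * math.pi * t / n) for t in range(n)]
fringe = [0.5 + 0.5 * math.cos(2 * math.pi * f0 * t / n + true_phase[t])
          for t in range(n)]

def phase_rms_error(bits):
    spec = dft(quantize(fringe, bits))
    # Fourier fringe analysis: keep only the positive carrier lobe
    filtered = [spec[k] if f0 - band <= k <= f0 + band else 0.0 for k in range(n)]
    analytic = idft(filtered)
    total = 0.0
    for t in range(n):
        ramp = 2 * math.pi * f0 * t / n
        # residual phase error, wrapped into (-pi, pi]
        err = cmath.phase(analytic[t] * cmath.exp(-1j * (ramp + true_phase[t])))
        total += err * err
    return math.sqrt(total / n)

rms_3bit, rms_10bit = phase_rms_error(3), phase_rms_error(10)
print(rms_3bit, rms_10bit)   # coarser quantization gives a larger phase error
```

Fewer bits leave more quantization noise inside the carrier band, which appears directly as RMS phase error — the instrumental error source discussed above.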

  4. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  5. Visual tracking of da Vinci instruments for laparoscopic surgery

    Science.gov (United States)

    Speidel, S.; Kuhn, E.; Bodenstedt, S.; Röhl, S.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.

    2014-03-01

    Intraoperative tracking of laparoscopic instruments is a prerequisite for realizing further assistance functions. Since endoscopic images are always available, this sensor input can be used to localize the instruments without special devices or robot kinematics. In this paper, we present an image-based markerless 3D tracking of different da Vinci instruments in near real-time without an explicit model. The method is based on different visual cues to segment the instrument tip, calculates a tip point and uses a multiple object particle filter for tracking. The accuracy and robustness are evaluated with in vivo data.

  6. Working Memory Load Strengthens Reward Prediction Errors.

    Science.gov (United States)

    Collins, Anne G E; Ciullo, Brittany; Frank, Michael J; Badre, David

    2017-04-19

    Reinforcement learning (RL) in simple instrumental tasks is usually modeled as a monolithic process in which reward prediction errors (RPEs) are used to update expected values of choice options. This modeling ignores the different contributions of different memory and decision-making systems thought to contribute even to simple learning. In an fMRI experiment, we investigated how working memory (WM) and incremental RL processes interact to guide human learning. WM load was manipulated by varying the number of stimuli to be learned across blocks. Behavioral results and computational modeling confirmed that learning was best explained as a mixture of two mechanisms: a fast, capacity-limited, and delay-sensitive WM process together with slower RL. Model-based analysis of fMRI data showed that striatum and lateral prefrontal cortex were sensitive to RPE, as shown previously, but, critically, these signals were reduced when the learning problem was within capacity of WM. The degree of this neural interaction related to individual differences in the use of WM to guide behavioral learning. These results indicate that the two systems do not process information independently, but rather interact during learning. SIGNIFICANCE STATEMENT Reinforcement learning (RL) theory has been remarkably productive at improving our understanding of instrumental learning as well as dopaminergic and striatal network function across many mammalian species. However, this neural network is only one contributor to human learning and other mechanisms such as prefrontal cortex working memory also play a key role. Our results also show that these other players interact with the dopaminergic RL system, interfering with its key computation of reward prediction errors.
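The reward prediction error at the heart of the RL account above is the classic delta-rule update. A minimal single-option sketch (reward probability and learning rate are arbitrary illustrative values):

```python
import random

random.seed(3)

alpha = 0.1          # learning rate (illustrative)
p_reward = 0.8       # true reward probability of the single option (illustrative)
value = 0.0          # learned expected value

history = []
for t in range(3000):
    reward = 1.0 if random.random() < p_reward else 0.0
    rpe = reward - value      # reward prediction error
    value += alpha * rpe      # delta-rule update driven by the RPE
    history.append(value)

estimate = sum(history[-1000:]) / 1000
print(estimate)   # settles near p_reward
```

The paper's point is that this incremental signal is attenuated when a fast working-memory process can solve the problem outright; the sketch shows only the slow RL component.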

  7. Assessment of the Local Residual Stresses of 7050-T7452 Aluminum Alloy in Microzones by the Instrumented Indentation with the Berkovich Indenter

    Science.gov (United States)

    He, M.; Huang, C. H.; Wang, X. X.; Yang, F.; Zhang, N.; Li, F. G.

    2017-10-01

    The local residual stresses in microzones are investigated by the instrumented indentation method with the Berkovich indenter. The parameters required for determination of residual stresses are obtained from indentation load-penetration depth curves constructed during instrumented indentation tests on flat square 7050-T7452 aluminum alloy specimens with a central hole containing the compressive residual stresses generated by the cold extrusion process. The force balance system accounting for the tensile and compressive residual stresses is used to explain the phenomenon of different contact areas produced by the same indentation load. The effect of the strain-hardening exponent on the residual stress is eliminated by application of the representative stress σ_{0.033} in the average contact pressure assessment using the Π theorem, while the yield stress value is obtained from the constitutive function. Finally, the residual stresses are calculated according to the proposed equations of the force balance system, and their validity is corroborated by the XRD measurements.

  8. Instrument workstation for the EGSE of the Near Infrared Spectro-Photometer instrument (NISP) of the EUCLID mission

    Science.gov (United States)

    Trifoglio, M.; Gianotti, F.; Conforti, V.; Franceschi, E.; Stephen, J. B.; Bulgarelli, A.; Fioretti, V.; Maiorano, E.; Nicastro, L.; Valenziano, L.; Zoli, A.; Auricchio, N.; Balestra, A.; Bonino, D.; Bonoli, C.; Bortoletto, F.; Capobianco, V.; Chiarusi, T.; Corcione, L.; Debei, S.; De Rosa, A.; Dusini, S.; Fornari, F.; Giacomini, F.; Guizzo, G. P.; Ligori, S.; Margiotta, A.; Mauri, N.; Medinaceli, E.; Morgante, G.; Patrizii, L.; Sirignano, C.; Sirri, G.; Sortino, F.; Stanco, L.; Tenti, M.

    2016-07-01

    The NISP instrument on board the Euclid ESA mission will be developed and tested at different levels of integration using various test equipment which shall be designed and procured through a collaborative and coordinated effort. The NISP Instrument Workstation (NI-IWS) will be part of the EGSE configuration that will support the NISP AIV/AIT activities from the NISP Warm Electronics level up to the launch of Euclid. One workstation is required for the NISP EQM/AVM, and a second one for the NISP FM. Each workstation will follow the respective NISP model after delivery to ESA for Payload and Satellite AIV/AIT and launch. At these levels the NI-IWS shall be configured as part of the Payload EGSE, the System EGSE, and the Launch EGSE, respectively. After launch, the NI-IWS will be also re-used in the Euclid Ground Segment in order to support the Commissioning and Performance Verification (CPV) phase, and for troubleshooting purposes during the operational phase. The NI-IWS is mainly aimed at the local storage in a suitable format of the NISP instrument data and metadata, at local retrieval, processing and display of the stored data for on-line instrument assessment, and at the remote retrieval of the stored data for off-line analysis on other computers. We describe the design of the IWS software that will create a suitable interface to the external systems in each of the various configurations envisaged at the different levels, and provide the capabilities required to monitor and verify the instrument functionalities and performance throughout all phases of the NISP lifetime.

  9. Error Analysis of Variations on Larsen's Benchmark Problem

    International Nuclear Information System (INIS)

    Azmy, YY

    2001-01-01

    Error norms for three variants of Larsen's benchmark problem are evaluated using three numerical methods for solving the discrete ordinates approximation of the neutron transport equation in multidimensional Cartesian geometry. The three variants of Larsen's test problem are concerned with the incoming flux boundary conditions: unit incoming flux on the left and bottom edges (Larsen's configuration); unit incoming flux only on the left edge; unit incoming flux only on the bottom edge. The three methods considered are the Diamond Difference (DD) method and the constant-approximation versions of the Arbitrarily High Order Transport method of the Nodal type (AHOT-N) and of the Characteristic type (AHOT-C). The cell-wise error is computed as the difference between the cell-averaged flux computed by each method and the exact value, and the L1, L2, and L∞ error norms are then calculated. The results of this study demonstrate that while the integral error norms, i.e. L1 and L2, converge to zero with mesh refinement, the pointwise L∞ norm does not, due to solution discontinuity across the singular characteristic. Little difference is observed between the error norm behavior of the three methods in spite of the fact that AHOT-C is locally exact, suggesting numerical diffusion across the singular characteristic as the major source of error on the global scale. However, AHOT-C possesses a given accuracy in a larger fraction of computational cells than DD.

  10. Some emergency instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Burgess, P H

    1986-10-01

    The widespread release of activity and the resultant spread of contamination after the Chernobyl accident resulted in requests to NRPB to provide instruments for, and expertise in, the measurement of radiation. The most common request was for advice on the usefulness of existing instruments, but Board staff were also involved in their adaptation or in the development of new instruments specially to meet the circumstances of the accident. The accident occurred on 26 April. On 1 May, NRPB was involved at Heathrow Airport in the monitoring of the British students who had returned from Kiev and Minsk. The main purpose was to reassure the students by checking that their persons and belongings did not have significant surface contamination. Additional measurements were also made of iodine activity in thyroid using hand-held detectors or a mobile body monitor. This operation was arranged with the Foreign and Commonwealth Office, which had also received numerous requests for instruments from embassies and consulates in countries close to the scene of the accident. There was concern for the well-being of staff and other United Kingdom nationals who resided in or intended to visit the most affected countries. The board supplied suitable instruments, and the FCO distributed them to embassies. The frequency of environmental monitoring was increased from 29 April in anticipation of contamination and appropriate Board instrumentation was deployed. After the Chernobyl cloud arrived in the UK on 2 May, there were numerous requests from local government, public authorities, private companies and members of the public for information and advice on monitoring equipment and procedures. Some of these requirements could be met with existing equipment but members of the public were usually advised not to proceed. 
At a later stage, the contamination of foodstuffs and livestock required the development of an instrument capable of detecting low levels of ¹³⁷Cs and ¹³⁴Cs in food

  11. A portable luminescence dating instrument

    DEFF Research Database (Denmark)

    Kook, M.H.; Murray, A.S.; Lapp, Torben

    2011-01-01

    We describe a portable luminescence reader suitable for use in remote localities in the field. The instrument weighs about 8kg and is based around a 30mm bialkali photomultiplier detecting signals through a glass filter centered on 340nm. Stimulation is by 470nm blue LEDs (24W in total) operating...

  12. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
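For a purely linear observable, the two approaches can be compared directly in a few lines; the toy model below (weights, sigmas, sample count) is illustrative only:

```python
import math, random

random.seed(1)

# Toy linear model: one observable depending on three systematic parameters.
nominal = [1.0, -0.5, 2.0]     # nominal parameter values (illustrative)
sigma = [0.1, 0.2, 0.05]       # 1-sigma systematic uncertainties (illustrative)
weights = [3.0, 1.0, -2.0]     # linear response of the observable (illustrative)

def observable(params):
    return sum(w * p for w, p in zip(weights, params))

# Unisim: one variation per parameter, shifted by +1 sigma.
shifts = []
for i in range(len(nominal)):
    varied = list(nominal)
    varied[i] += sigma[i]
    shifts.append(observable(varied) - observable(nominal))
unisim_err = math.sqrt(sum(s * s for s in shifts))

# Multisim: every run draws all parameters from their assumed normal distributions.
draws = [observable([random.gauss(m, s) for m, s in zip(nominal, sigma)])
         for _ in range(20000)]
mean = sum(draws) / len(draws)
multisim_err = math.sqrt(sum((d - mean) ** 2 for d in draws) / (len(draws) - 1))

print(unisim_err, multisim_err)   # the two estimates agree for a linear model
```

In the linear regime both estimates converge to the same value; the differences analyzed in the paper concern their variances at finite MC statistics.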

  13. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  14. The importance of local measurements for cosmology

    CERN Document Server

    Verde, Licia; Jimenez, Raul

    2013-01-01

    We explore how local, cosmology-independent measurements of the Hubble constant and the age of the Universe help to provide a powerful consistency check of the currently favored cosmological model (flat ΛCDM) and model-independent constraints on cosmology. We use cosmic microwave background (CMB) data to define the model-dependent cosmological parameters, and add local measurements to assess consistency and determine whether extensions to the model are justified. At current precision, there is no significant tension between the locally measured Hubble constant and age of the Universe (with errors of 3% and 5% respectively) and the corresponding parameters derived from the CMB. However, if errors on the local measurements could be decreased by a factor of two, one could decisively conclude if there is tension or not. We also compare the local and CMB data assuming simple extensions of the flat ΛCDM model (including curvature, dark energy with a constant equation of state parameter not equal to -1...

  15. Intelligent type sodium instrumentations for LMFR

    International Nuclear Information System (INIS)

    Daolong Chen

    1996-01-01

    The construction and performance of several newly developed intelligent sodium instruments — an intelligent sodium flowmeter, an intelligent immersed sodium flowmeter, an intelligent sodium manometer and an intelligent sodium level gauge — are described. The calibration characteristic equations for the corresponding transducers, using the medium temperature as a parameter, are given. Because the operating temperature range of the measured medium (sodium) is wide, on-line compensation of the temperature effect on the calibration characteristics must be considered. Tests show that these intelligent sodium instruments possess good linearity. Accurate sodium process parameter (flowrate, pressure and level) measurements can be obtained by means of their on-line temperature-compensation function. Moreover, these intelligent sodium instruments provide self-inspection, power-failure protection, full-scale setting, alarm-limit setting (two upper-limit and two lower-limit alarms), thermocouple-break alarm, mutually isolated 0-10 V DC analogue outputs, a CENTRONICS-standard digital output, and alarm relay contact outputs. These excellent microprocessor-based functions make the instruments particularly suitable for the instrumentation, control and protection systems of an LMFR. The basic errors of the intelligent sodium flowmeter, immersed sodium flowmeter, sodium manometer and sodium level gauge are ±2%, ±2.3%, ±0.3% and ±1.9% of measuring range, respectively. (author). 4 refs, 9 figs
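The on-line temperature compensation described above amounts to dividing the raw transducer reading by a temperature-dependent sensitivity. The linear sensitivity model and every coefficient below are hypothetical, not taken from the instrument:

```python
def compensated_flow(voltage_mv, temp_c, k0=2.0, alpha=-1.2e-3, t_ref=400.0):
    """Hypothetical flowmeter transfer function: the sensitivity
    K(T) = k0 * (1 + alpha * (T - t_ref)) drifts linearly with sodium
    temperature, so the raw reading is corrected using the measured
    medium temperature. All coefficients are illustrative."""
    k = k0 * (1.0 + alpha * (temp_c - t_ref))
    return voltage_mv / k
```

At the reference temperature the correction is a no-op; away from it, dividing by the drifted sensitivity rather than the nominal one keeps the reported flow accurate.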

  16. Warped Linear Prediction of Physical Model Excitations with Applications in Audio Compression and Instrument Synthesis

    Science.gov (United States)

    Glass, Alexis; Fukudome, Kimitoshi

    2004-12-01

    A sound recording of a plucked string instrument is encoded and resynthesized using two stages of prediction. In the first stage of prediction, a simple physical model of a plucked string is estimated and the instrument excitation is obtained. The second stage of prediction compensates for the simplicity of the model in the first stage by encoding either the instrument excitation or the model error using warped linear prediction. These two methods of compensation are compared with each other, and to the case of single-stage warped linear prediction, adjustments are introduced, and their applications to instrument synthesis and MPEG4's audio compression within the structured audio format are discussed.

  17. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  18. Climatologies from satellite measurements: the impact of orbital sampling on the standard error of the mean

    Directory of Open Access Journals (Sweden)

    M. Toohey

    2013-04-01

    Full Text Available Climatologies of atmospheric observations are often produced by binning measurements according to latitude and calculating zonal means. The uncertainty in these climatological means is characterised by the standard error of the mean (SEM. However, the usual estimator of the SEM, i.e., the sample standard deviation divided by the square root of the sample size, holds only for uncorrelated randomly sampled measurements. Measurements of the atmospheric state along a satellite orbit cannot always be considered as independent because (a the time-space interval between two nearest observations is often smaller than the typical scale of variations in the atmospheric state, and (b the regular time-space sampling pattern of a satellite instrument strongly deviates from random sampling. We have developed a numerical experiment where global chemical fields from a chemistry climate model are sampled according to real sampling patterns of satellite-borne instruments. As case studies, the model fields are sampled using sampling patterns of the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS and Atmospheric Chemistry Experiment Fourier-Transform Spectrometer (ACE-FTS satellite instruments. Through an iterative subsampling technique, and by incorporating information on the random errors of the MIPAS and ACE-FTS measurements, we produce empirical estimates of the standard error of monthly mean zonal mean model O3 in 5° latitude bins. We find that generally the classic SEM estimator is a conservative estimate of the SEM, i.e., the empirical SEM is often less than or approximately equal to the classic estimate. 
Exceptions occur only when natural variability is larger than the random measurement error, and specifically in instances where the zonal sampling distribution shows non-uniformity with a similar zonal structure as variations in the sampled field, leading to maximum sensitivity to arbitrary phase shifts between the sample distribution and
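The failure of the classic SEM estimator for correlated samples is easy to demonstrate: for a positively autocorrelated series the classic formula underestimates the true SEM (the conservative behavior reported above arises from the particular satellite sampling patterns). The AR(1) parameters below are illustrative:

```python
import math, random

random.seed(0)

def ar1_series(n, phi):
    """Stationary AR(1) series with unit innovation variance."""
    x = random.gauss(0.0, 1.0 / math.sqrt(1.0 - phi * phi))
    out = []
    for _ in range(n):
        out.append(x)
        x = phi * x + random.gauss(0.0, 1.0)
    return out

n, phi, reps = 200, 0.8, 2000
means, classic = [], []
for _ in range(reps):
    s = ar1_series(n, phi)
    m = sum(s) / n
    var = sum((v - m) ** 2 for v in s) / (n - 1)
    means.append(m)
    classic.append(math.sqrt(var / n))   # classic SEM: s / sqrt(n)

grand = sum(means) / reps
# Empirical SEM: the actual spread of the replicate means
empirical_sem = math.sqrt(sum((m - grand) ** 2 for m in means) / (reps - 1))
classic_sem = sum(classic) / reps
print(empirical_sem, classic_sem)   # classic formula underestimates here
```

This mirrors the paper's iterative-subsampling idea: the empirical SEM is obtained from the spread of many replicate means, against which the classic per-sample estimate can be judged.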

  19. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. 
In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  20. A summary of the performance of exposure rate and dose rate instruments contained in instrument evaluation reports NRPB-IE1 to NRPB-IE13

    International Nuclear Information System (INIS)

    Burgess, P.H.; Iles, W.J.

    1979-06-01

    The various radiations encountered in radiological protection cover a wide range of energies and radiation measurements have to be carried out under an equally broad spectrum of environmental conditions. This report is one of a series intended to give information on the performance characteristics of radiological protection instruments, to assist in the selection of appropriate instruments for a given purpose, to interpret the results obtained with such instruments, and, in particular, to know the likely sources and magnitude of errors that might be associated with measurements in the field. The radiation, electrical and environmental characteristics of radiation protection instruments are considered together with those aspects of the construction which make an instrument convenient for routine use. To provide consistent criteria for instrument performance, the range of tests performed on any particular class of instrument, the test methods and the criteria of acceptable performance are based broadly on the appropriate Recommendations of the International Electrotechnical Commission. The radiations in the tests are, in general, selected from the range of reference radiations for instrument calibration being drawn up by the International Standards Organisation. Normally, each report deals with the capabilities and limitations of one model of instrument and no direct comparison with other instruments intended for similar purposes is made, since the significance of particular performance characteristics largely depends on the radiations and environmental conditions in which the instrument is to be used. The results quoted here have all been obtained from tests on instruments in routine production, with the appropriate measurements being made by the NRPB. 
This report provides a concise summary of measurements of the more important performance characteristics of radiation protection dose rate or exposure rate survey instruments which have been assessed by NRPB as part

  1. A multi-functional testing instrument for heat assisted magnetic recording media

    International Nuclear Information System (INIS)

    Yang, H. Z.; Chen, Y. J.; Leong, S. H.; An, C. W.; Ye, K. D.; Hu, J. F.; Yin, M. J.

    2014-01-01

    With recent developments in heat assisted magnetic recording (HAMR), characterization of HAMR media is becoming very important. We present a multi-functional instrument for testing HAMR media, which integrates HAMR writing, reading, and a micro-magneto-optic Kerr effect (μ-MOKE) testing function. A potential application of the present instrument is to make temperature-dependent magnetic property measurements using a pump-probe configuration. In the measurement, the media is heated up by a heating (intense) beam while a testing (weak) beam is overlapped with the heating beam for MOKE measurement. By heating the media with different heating beam powers, magnetic measurements by MOKE at different temperatures can be performed. Compared to traditional tools such as the vibrating sample magnetometer, the present instrument provides localized and efficient heating at the measurement spot. The integration of HAMR writing and the μ-MOKE system can also facilitate a localized full investigation of the magnetic media by potential correlation of HAMR head-independent write/read performance to localized magnetic properties.

  2. Triaxial Accelerometer Error Coefficients Identification with a Novel Artificial Fish Swarm Algorithm

    Directory of Open Access Journals (Sweden)

    Yanbin Gao

    2015-01-01

    Full Text Available Artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques, which is widely utilized for optimization purposes. Triaxial accelerometer error coefficients are relatively unstable under environmental disturbances and aging of the instrument. Therefore, identifying triaxial accelerometer error coefficients accurately and at low cost is of great importance for improving the overall performance of a triaxial accelerometer-based strapdown inertial navigation system (SINS). In this study, a novel artificial fish swarm algorithm (NAFSA) is first introduced that eliminates the demerits of AFSA (failure to use the artificial fishes’ previous experiences, lack of balance between exploration and exploitation, and high computational cost). In NAFSA, the functional behaviors and overall procedure of AFSA have been improved through several parameter variations. Second, a hybrid accelerometer error coefficient identification algorithm is proposed based on NAFSA and Monte Carlo simulation (MCS) approaches. This combination leads to maximum utilization of the involved approaches for triaxial accelerometer error coefficient identification. Furthermore, the NAFSA-identified coefficients are verified with a 24-position verification experiment and a triaxial accelerometer-based SINS navigation experiment. The merits of MCS-NAFSA are compared with those of the conventional calibration method and optimal AFSA. Finally, both experiments demonstrate the high efficiency of MCS-NAFSA for triaxial accelerometer error coefficient identification.
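
A much simpler stand-in for the identification step can make the problem concrete: if the accelerometer error model is linear (a bias vector plus a scale-factor/misalignment matrix), multi-position measurements let ordinary least squares recover all twelve coefficients. This is a minimal sketch with invented coefficient values, not the NAFSA/MCS algorithm of the paper:

```python
import numpy as np

# Hypothetical linear error model: measured = (I + M) @ true + b,
# where M holds scale-factor/misalignment terms and b is the bias vector.
rng = np.random.default_rng(0)
M_true = np.array([[0.01, 0.002, -0.001],
                   [0.001, -0.02, 0.003],
                   [-0.002, 0.001, 0.015]])
b_true = np.array([0.05, -0.03, 0.02])

# Multi-position calibration: known specific-force vectors in 24 orientations.
g_true = rng.normal(size=(24, 3))
g_meas = g_true @ (np.eye(3) + M_true).T + b_true

# Recover the 12 coefficients by ordinary least squares:
# g_meas = g_true @ A.T + b  ->  stack [g_true, 1] as the design matrix.
X = np.hstack([g_true, np.ones((24, 1))])
coef, *_ = np.linalg.lstsq(X, g_meas, rcond=None)
A_est, b_est = coef[:3].T, coef[3]
```

With noiseless data the fit is exact; the point of swarm-based methods such as NAFSA is to stay robust when the model is nonlinear or the data are noisy.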

  3. Instrument evaluation no. 33. Automess Szintomat 6134 radiation survey meter

    International Nuclear Information System (INIS)

    McClure, D.R.

    1986-04-01

    The various radiations encountered in radiological protection cover a wide range of energies and radiation measurements have to be carried out under an equally broad spectrum of environmental conditions. This report is one of a series intended to give information on the performance characteristics of radiological protection instruments, to assist in the selection of appropriate instruments for a given purpose, to interpret the results obtained with such instruments, and, in particular, to know the likely sources and magnitude of errors that might be associated with measurements in the field. The radiation, electrical and environmental characteristics of radiation protection instruments are considered together with those aspects of the construction which make an instrument convenient for routine use. To provide consistent criteria for instrument performance, the range of tests performed on any particular class of instrument, the test methods and the criteria of acceptable performance are based broadly on the appropriate Recommendations of the International Electrotechnical Commission. The radiations in the tests are, in general, selected from the range of reference radiations for instrument calibration being drawn up by the International Standards Organisation. Normally, each report deals with the capabilities and limitations of one model of instrument and no direct comparison with other instruments intended for similar purposes is made, since the significance of particular performance characteristics largely depends on the radiations and environmental conditions in which the instrument is to be used. The results quoted here have all been obtained from tests on instruments in routine production, with the appropriate measurements being made by the NRPB. This instrument evaluation report deals with the Automess Szintomat 6134 Radiation Survey Meter

  4. Instrumentation between science, state and industry

    CERN Document Server

    Shinn, Terry

    2001-01-01

    these. In this book, we appropriate their conception of research-technology, and extend it to many other phenomena which are less stable and less localized in time and space than the Zeeman/Cotton situation. In the following pages, we use the concept for instances where research activities are orientated primarily toward technologies which facilitate both the production of scientific knowledge and the production of other goods. In particular, we use the term for instances where instruments and methods traverse numerous geographic and institutional boundaries; that is, fields distinctly different and distant from the instruments' and methods' initial focus. We suggest that instruments such as the ultra-centrifuge, and the trajectories of the men who devise such artefacts, diverge in an interesting way from other forms of artefacts and careers in science, metrology and engineering with which students of science and technology are more familiar. The instrument systems developed by research-technolo...

  5. A block matching-based registration algorithm for localization of locally advanced lung tumors

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D., E-mail: gdhugo@vcu.edu [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia, 23298 (United States)

    2014-04-15

    Purpose: To implement and evaluate a block matching-based registration (BMR) algorithm for locally advanced lung tumor localization during image-guided radiotherapy. Methods: Small (1 cm³), nonoverlapping image subvolumes (“blocks”) were automatically identified on the planning image to cover the tumor surface using a measure of the local intensity gradient. Blocks were independently and automatically registered to the on-treatment image using a rigid transform. To improve speed and robustness, registrations were performed iteratively from coarse to fine image resolution. At each resolution, all block displacements having a near-maximum similarity score were stored. From this list, a single displacement vector for each block was iteratively selected which maximized the consistency of displacement vectors across immediately neighboring blocks. These selected displacements were regularized using a median filter before proceeding to registrations at finer image resolutions. After evaluating all image resolutions, the global rigid transform of the on-treatment image was computed using a Procrustes analysis, providing the couch shift for patient setup correction. This algorithm was evaluated for 18 locally advanced lung cancer patients, each with 4–7 weekly on-treatment computed tomography scans having physician-delineated gross tumor volumes. Volume overlap (VO) and border displacement errors (BDE) were calculated relative to the nominal physician-identified targets to establish residual error after registration. Results: Implementation of multiresolution registration improved block matching accuracy by 39% compared to registration using only the full resolution images. By also considering multiple potential displacements per block, initial errors were reduced by 65%. Using the final implementation of the BMR algorithm, VO was significantly improved from 77% ± 21% (range: 0%–100%) in the initial bony alignment to 91% ± 8% (range: 56%–100%; p < 0

  6. A block matching-based registration algorithm for localization of locally advanced lung tumors

    International Nuclear Information System (INIS)

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D.

    2014-01-01

    Purpose: To implement and evaluate a block matching-based registration (BMR) algorithm for locally advanced lung tumor localization during image-guided radiotherapy. Methods: Small (1 cm³), nonoverlapping image subvolumes (“blocks”) were automatically identified on the planning image to cover the tumor surface using a measure of the local intensity gradient. Blocks were independently and automatically registered to the on-treatment image using a rigid transform. To improve speed and robustness, registrations were performed iteratively from coarse to fine image resolution. At each resolution, all block displacements having a near-maximum similarity score were stored. From this list, a single displacement vector for each block was iteratively selected which maximized the consistency of displacement vectors across immediately neighboring blocks. These selected displacements were regularized using a median filter before proceeding to registrations at finer image resolutions. After evaluating all image resolutions, the global rigid transform of the on-treatment image was computed using a Procrustes analysis, providing the couch shift for patient setup correction. This algorithm was evaluated for 18 locally advanced lung cancer patients, each with 4–7 weekly on-treatment computed tomography scans having physician-delineated gross tumor volumes. Volume overlap (VO) and border displacement errors (BDE) were calculated relative to the nominal physician-identified targets to establish residual error after registration. Results: Implementation of multiresolution registration improved block matching accuracy by 39% compared to registration using only the full resolution images. By also considering multiple potential displacements per block, initial errors were reduced by 65%. Using the final implementation of the BMR algorithm, VO was significantly improved from 77% ± 21% (range: 0%–100%) in the initial bony alignment to 91% ± 8% (range: 56%–100%; p < 0.001). Left
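
As a toy illustration of the core block-matching step (not the paper's full multiresolution, consistency-regularized pipeline), the sketch below slides one block over a search window and returns the integer displacement minimizing the sum of squared differences; image sizes, the block location, and the shift are all invented:

```python
import numpy as np

def match_block(block, search_region):
    """Exhaustively slide `block` over `search_region` and return the
    integer displacement (dy, dx) with the lowest SSD similarity score."""
    bh, bw = block.shape
    sh, sw = search_region.shape
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(sh - bh + 1):
        for dx in range(sw - bw + 1):
            ssd = np.sum((search_region[dy:dy+bh, dx:dx+bw] - block) ** 2)
            if ssd < best:
                best, best_dy, best_dx = ssd, dy, dx
    return best_dy, best_dx

# Toy example: a block of the "planning" image shifted by (2, 3) pixels
# in the "on-treatment" image.
rng = np.random.default_rng(1)
planning = rng.normal(size=(32, 32))
block = planning[10:18, 10:18]
on_treatment = np.roll(np.roll(planning, 2, axis=0), 3, axis=1)
region = on_treatment[10:24, 10:24]   # search window around the block

dy, dx = match_block(block, region)
```

In the paper's algorithm, many such per-block displacements are then median-filtered and combined into one global rigid transform via a Procrustes analysis.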

  7. Two-phase flow measurements with advanced instrumented spool pieces and local conductivity probes

    International Nuclear Information System (INIS)

    Turnage, K.G.; Davis, C.E.

    1979-01-01

    A series of two-phase, air-water and steam-water tests performed with instrumented spool pieces and with conductivity probes obtained from Atomic Energy of Canada, Ltd. is described. The behavior of the three-beam densitometer, turbine meter, and drag flowmeter is discussed in terms of two-phase models. Application of some two-phase mass flow models to the recorded spool piece data is made and preliminary results are shown. Velocity and void fraction information derived from the conductivity probes is presented and compared to velocities and void fractions obtained using the spool piece instrumentation

  8. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

    1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, the choice depending on the range of x (the last applying when x .GE. 4.0). In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function by using the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x
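
The caution in the final point is easy to demonstrate: for moderately large arguments, erf(x) rounds to exactly 1.0 in double precision, so the identity erfc(x) = 1.0 - erf(x) loses all significance, while direct evaluation stays accurate. A short illustration using Python's standard library:

```python
import math

x = 6.0
direct = math.erfc(x)             # computed directly: about 2.15e-17
via_identity = 1.0 - math.erf(x)  # erf(6) rounds to 1.0, so this is 0.0

# The direct evaluation keeps full relative accuracy; the identity loses
# every significant digit once erf(x) is within one ulp of 1.
```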

  9. Decodoku: Quantum error correction as a simple puzzle game

    Science.gov (United States)

    Wootton, James

    To build quantum computers, we need to detect and manage any noise that occurs. This will be done using quantum error correction (QEC). At the hardware level, QEC is a multipartite system that stores information non-locally. Certain measurements are made which do not disturb the stored information, but which do allow signatures of errors to be detected. Then there is a software problem: how to take these measurement outcomes and determine (a) the errors that caused them, and (b) how to remove their effects. For qubit error correction, the algorithms required to do this are well known. For qudits, however, current methods are far from optimal. We consider the error correction problem of qudit surface codes. At the most basic level, this is a problem that can be expressed in terms of a grid of numbers. Using this fact, we take the inherent problem at the heart of quantum error correction, remove it from its quantum context, and present it in terms of simple grid-based puzzle games. We have developed three versions of these puzzle games, focussing on different aspects of the required algorithms. These have been released as iOS and Android apps, allowing the public to try their hand at developing good algorithms to solve the puzzles. For more information, see www.decodoku.com. Funding from the NCCR QSIT.
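
The two-part software problem described above (mapping measured syndromes back to the errors that caused them) can be illustrated with the simplest classical analogue, a three-bit repetition code. This is a minimal sketch, far simpler than the surface codes the abstract discusses:

```python
# Toy syndrome decoding with a 3-bit repetition code: the parity checks
# b0^b1 and b1^b2 flag errors without reading the encoded bit itself.

def syndrome(bits):
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Lookup table: each syndrome points at the single bit flip that explains it.
DECODE = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(bits):
    flip = DECODE[syndrome(bits)]
    if flip is not None:
        bits = bits[:flip] + (1 - bits[flip],) + bits[flip + 1:]
    return bits

corrected = correct((0, 1, 0))   # single error on the middle bit
```

For surface codes the lookup table is replaced by a genuine inference problem over the whole syndrome grid, which is exactly what the puzzle games ask players to solve.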

  10. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    Directory of Open Access Journals (Sweden)

    Nazelie Kassabian

    2014-06-01

    Full Text Available Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lowering railway track equipment and maintenance costs; this is a priority for sustaining the investments needed to modernize the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DCs, and the estimation error is compared to the noise added during simulation. The results show that for sufficiently large values of the ratio of correlation distance to Reference Station (RS) separation, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
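
The estimation scheme can be sketched numerically: simulate DCs with an exponential (Gauss-Markov) spatial covariance, add measurement noise, and apply the LMMSE filter x_hat = C (C + sigma_n^2 I)^(-1) y. All station positions, distances, and noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical reference-station positions along a 1-D track (km).
pos = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
d_corr = 50.0                    # assumed correlation distance (km)
sigma_dc, sigma_n = 2.0, 1.0     # DC field std dev, measurement noise std
n_sta = len(pos)

# Exponential (Gauss-Markov) spatial covariance of the true DC field.
D = np.abs(pos[:, None] - pos[None, :])
C = sigma_dc**2 * np.exp(-D / d_corr)
L = np.linalg.cholesky(C)
W = C @ np.linalg.inv(C + sigma_n**2 * np.eye(n_sta))   # LMMSE filter

# Average estimation error over many simulated realizations.
err_raw = err_lmmse = 0.0
n_trials = 300
for _ in range(n_trials):
    dc_true = L @ rng.normal(size=n_sta)
    y = dc_true + sigma_n * rng.normal(size=n_sta)
    err_raw += np.mean((y - dc_true) ** 2) / n_trials
    err_lmmse += np.mean((W @ y - dc_true) ** 2) / n_trials
```

With a correlation distance much larger than the station spacing, as here, the filtered estimate has a clearly smaller mean squared error than the raw measurements; shrinking d_corr toward the station spacing erodes that gain, which is the sensitivity the paper studies.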

  11. Opto-mechanical design for transmission optics in cryogenic space instrumentation

    Science.gov (United States)

    Kroes, Gabby; Venema, Lars; Navarro, Ramón

    2017-11-01

    NOVA is involved in the development and realization of various optical astronomical instruments for ground-based as well as space telescopes, with a focus on near- and mid-infrared instrumentation. NOVA has developed a suite of scientific instruments with cryogenic optics for the ESO VLT and VLTI instruments: VISIR, MIDI, the SPIFFI 2K camera for SINFONI, X-shooter and MATISSE. Other projects include the cryogenic optics for MIRI for the James Webb Space Telescope and several E-ELT instruments. Mounting optics is always a compromise between firmly fixing the optics and preventing stresses within the optics. The fixing should ensure mechanical stability and thus accurate positioning in various gravity orientations and temperature ranges, and during launch, transport or earthquake. On the other hand, the fixings can induce deformations and sometimes birefringence in the optics and thus cause optical errors. Even cracking or breaking of the optics is a risk, especially when using brittle infrared optical materials at the cryogenic temperatures required in instruments for infrared astronomy, where differential expansion of various materials amounts easily to several millimeters per meter. Special kinematic mounts are therefore needed to ensure both accurate positioning and low stress. This paper concentrates on the opto-mechanical design of optics mountings, especially for large transmission optics in cryogenic circumstances in space instruments. It describes the development of temperature-invariant ("a-thermal") kinematic designs, their implementation in ground-based instrumentation and ways to make them suitable for space instruments.

  12. A Virtual Instrument System for Determining Sugar Degree of Honey

    Directory of Open Access Journals (Sweden)

    Qijun Wu

    2015-01-01

    Full Text Available This study established a LabVIEW-based virtual instrument system to measure optical activity through the communication of conventional optical instrument with computer via RS232 port. This system realized the functions for automatic acquisition, real-time display, data processing, results playback, and so forth. Therefore, it improved accuracy of the measurement results by avoiding the artificial operation, cumbersome data processing, and the artificial error in optical activity measurement. The system was applied to the analysis of the batch inspection on the sugar degree of honey. The results obtained were satisfying. Moreover, it showed advantages such as friendly man-machine dialogue, simple operation, and easily expanded functions.

  13. A Virtual Instrument System for Determining Sugar Degree of Honey.

    Science.gov (United States)

    Wu, Qijun; Gong, Xun

    2015-01-01

    This study established a LabVIEW-based virtual instrument system to measure optical activity through the communication of conventional optical instrument with computer via RS232 port. This system realized the functions for automatic acquisition, real-time display, data processing, results playback, and so forth. Therefore, it improved accuracy of the measurement results by avoiding the artificial operation, cumbersome data processing, and the artificial error in optical activity measurement. The system was applied to the analysis of the batch inspection on the sugar degree of honey. The results obtained were satisfying. Moreover, it showed advantages such as friendly man-machine dialogue, simple operation, and easily expanded functions.
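
A sketch of the kind of processing such a system performs, using the standard polarimetry relation alpha = [a] * l * c / 100 (specific rotation [a] of roughly +66.5 deg for sucrose); the serial frame format, tube length, and readings below are all hypothetical, and the real system would read the frames over the RS232 port:

```python
# Hypothetical ASCII frame format from the polarimeter: b"+13.30\r\n" (degrees).
def parse_rotation(frame: bytes) -> float:
    """Decode one optical-rotation reading from the instrument's reply."""
    return float(frame.strip().decode("ascii"))

def sugar_degree(rotation_deg, tube_length_dm=2.0, specific_rotation=66.5):
    """Concentration (g/100 mL) from observed rotation: alpha = [a]*l*c/100."""
    return 100.0 * rotation_deg / (specific_rotation * tube_length_dm)

# Batch inspection: average several readings, then convert to sugar degree.
readings = [b"+13.30\r\n", b"+13.28\r\n", b"+13.32\r\n"]
rotations = [parse_rotation(r) for r in readings]
mean_rotation = sum(rotations) / len(rotations)
concentration = sugar_degree(mean_rotation)
```

Automating exactly this acquisition-and-conversion chain is what removes the manual readout and arithmetic errors the abstract mentions.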

  14. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  15. An FPGA-based instrumentation platform for use at deep cryogenic temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Conway Lamb, I. D.; Colless, J. I.; Hornibrook, J. M.; Pauka, S. J.; Waddy, S. J.; Reilly, D. J., E-mail: david.reilly@sydney.edu.au [ARC Centre of Excellence for Engineered Quantum Systems, School of Physics, The University of Sydney, Sydney NSW 2006 (Australia); Microsoft Station Q Sydney, The University of Sydney, Sydney NSW 2006 (Australia); Frechtling, M. K. [Microsoft Station Q Sydney, The University of Sydney, Sydney NSW 2006 (Australia); School of Electrical Engineering, The University of Sydney, Sydney NSW 2006 (Australia)

    2016-01-15

    We describe the operation of a cryogenic instrumentation platform incorporating commercially available field-programmable gate arrays (FPGAs). The functionality of the FPGAs at temperatures approaching 4 K enables signal routing, multiplexing, and complex digital signal processing in close proximity to cooled devices or detectors within the cryostat. The performance of the FPGAs in a cryogenic environment is evaluated, including clock speed, error rates, and power consumption. Although constructed for the purpose of controlling and reading out quantum computing devices with low latency, the instrument is generic enough to be of broad use in a range of cryogenic applications.

  16. Accounting for measurement error: a critical but often overlooked process.

    Science.gov (United States)

    Harris, Edward F; Smith, Richard N

    2009-12-01

    Due to instrument imprecision and human inconsistencies, measurements are not free of error. Technical error of measurement (TEM) is the variability encountered between dimensions when the same specimens are measured at multiple sessions. A goal of a data collection regimen is to minimise TEM. The few studies that actually quantify TEM, regardless of discipline, report that it is substantial and can affect results and inferences. This paper reviews some statistical approaches for identifying and controlling TEM. Statistically, TEM is part of the residual ('unexplained') variance in a statistical test, so accounting for TEM, which requires repeated measurements, enhances the chances of finding a statistically significant difference if one exists. The aim of this paper was to review and discuss common statistical designs relating to types of error and statistical approaches to error accountability. This paper addresses issues of landmark location, validity, technical and systematic error, analysis of variance, scaled measures and correlation coefficients in order to guide the reader towards correct identification of true experimental differences. Researchers commonly infer characteristics about populations from comparatively restricted study samples. Most inferences are statistical and, aside from concerns about adequate accounting for known sources of variation with the research design, an important source of variability is measurement error. Variability in locating landmarks that define variables is obvious in odontometrics, cephalometrics and anthropometry, but the same concerns about measurement accuracy and precision extend to all disciplines. With increasing accessibility to computer-assisted methods of data collection, the ease of incorporating repeated measures into statistical designs has improved. Accounting for this technical source of variation increases the chance of finding biologically true differences when they exist.
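
The TEM discussed above is conventionally computed with Dahlberg's formula for double determinations, TEM = sqrt(sum(d_i^2) / 2n), where d_i is the between-session difference for specimen i. A minimal sketch with made-up measurements:

```python
import math

def technical_error_of_measurement(session1, session2):
    """Dahlberg's TEM for double determinations:
    TEM = sqrt( sum(d_i^2) / (2n) ), d_i = difference between sessions."""
    diffs = [a - b for a, b in zip(session1, session2)]
    n = len(diffs)
    return math.sqrt(sum(d * d for d in diffs) / (2 * n))

# Repeated measurements (mm, invented) of the same four specimens
# taken at two sessions.
s1 = [10.1, 12.4, 9.8, 11.0]
s2 = [10.3, 12.2, 9.9, 11.4]
tem = technical_error_of_measurement(s1, s2)
```

TEM is in the units of the measurement itself, which is why scaled (relative) versions are used when comparing variables of different size.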

  17. Nuclear instrument maintenance - problems, solutions, and obstacles

    International Nuclear Information System (INIS)

    Vuister, P.H.

    1983-01-01

    In 200 laboratories of South-East Asia, Latin America and Africa a survey was made of the state of instrumentation for nuclear medicine. The principal cause of failures and defects was inadequate quality control and preventive maintenance. On the basis of the survey, coordinated research programs were compiled for the maintenance of nuclear instruments. The four principal points of the programs are: to ensure stable, good-quality electric power supplies for the instruments; to maintain constant temperature and humidity in the environment in which the equipment is operated; effective maintenance; and training of personnel. In the years 1981 and 1982, 14 local training courses were run in which emphasis was put on practicals and tests in mechanics and electronics

  18. Instrument for measuring flow velocities

    International Nuclear Information System (INIS)

    Griffo, J.

    1977-01-01

    The design described here aims to produce a 'more satisfying instrument with less cost' than comparable instruments known up to now. Instead of one single turbine rotor, two similar ones, but with opposite blade inclination and sense of rotation, are used. A cylindrical measuring body carries on its axis two bearing blocks whose shape offers little flow resistance. On the shaft supported by them, the two rotors run in opposite directions a relatively small axial distance apart. The speed of each rotor is picked up as a pulse recurrence frequency by a transmitter and fed to an electronic measuring unit. Measuring errors that, for single rotors, are caused by turbulent flow, distortion of the velocity profile, or viscous flow are eliminated by means of the contra-rotating turbines and the downstream electronic unit, because in these cases the adulterating increase in the angular velocity of one rotor is compensated by a corresponding deceleration of the other rotor. The mean value then indicated by the electronic unit has high measurement accuracy. (RW) [de]

  19. Organizational Climate, Stress, and Error in Primary Care: The MEMO Study

    Science.gov (United States)

    2005-05-01

    quality, and errors. This model was derived from our earlier work, the Physician Worklife Study [14,15], as well as the pioneering work of Lazarus and... Worklife Study instrument [14,15], and included our five-item global job satisfaction measure and a newly implemented four-item job stress measure [21]... measures of practice emphasis with respect to issues such as work–home balance, professionalism, and diversity in office staff, as well as single

  20. Bias correction by use of errors-in-variables regression models in studies with K-X-ray fluorescence bone lead measurements.

    Science.gov (United States)

    Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard

    2011-01-01

    In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables regression (EIV) allows for correction of bias caused by measurement error in predictor variables, based on knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by standard procedures. Results of the simulations show that Ordinary Least Squares (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS for estimating the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
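
The attenuation bias and its correction can be reproduced in a few lines: with classical measurement error, the OLS slope shrinks by the reliability coefficient lambda = var(x_true) / var(x_obs), and dividing by lambda recovers the true effect. The true slope and all variances below are invented for the simulation:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
beta_true = 2.0

# True exposure values and KXRF-style classical measurement error.
x_true = rng.normal(0.0, 3.0, n)     # true variance 9
u = rng.normal(0.0, 2.0, n)          # measurement-error variance 4
x_obs = x_true + u
y = beta_true * x_true + rng.normal(0.0, 1.0, n)

# Naive OLS slope on the observed exposure is attenuated toward zero.
beta_ols = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

# Reliability coefficient lambda = var(x_true) / var(x_obs); here it is
# derived from the known error variance, as one might from the
# uncertainties reported by the KXRF instrument.
lam = 9.0 / (9.0 + 4.0)
beta_corrected = beta_ols / lam
```

The naive slope converges to beta_true * lambda (about 1.38 here), while the corrected estimate converges to the true value of 2.0, at the cost of a larger variance, which is the bias/precision trade-off the abstract describes.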

  1. Instrument evaluation no. 13. Nuclear enterprises portable meter type PDR

    International Nuclear Information System (INIS)

    Burgess, P.H.; Iles, W.J.

    1978-06-01

    The various radiations encountered in radiological protection cover a wide range of energies and radiation measurements have to be carried out under an equally broad spectrum of environmental conditions. This report is one of a series intended to give information on the performance characteristics of radiological protection instruments, to assist in the selection of appropriate instruments for a given purpose, to interpret the results obtained with such instruments, and, in particular, to know the likely sources and magnitude of errors that might be associated with measurements in the field. The radiation, electrical and environmental characteristics of radiation protection instruments are considered together with those aspects of the construction which make an instrument convenient for routine use. To provide consistent criteria for instrument performance, the range of tests performed on any particular class of instrument, the test methods and the criteria of acceptable performance are based broadly on the appropriate Recommendations of the International Electrotechnical Commission. The radiations in the tests are, in general, selected from the range of reference radiations for instrument calibration being drawn up by the International Standards Organisation. Normally, each report deals with the capabilities and limitations of one model of instrument and no direct comparison with other instruments intended for similar purposes is made, since the significance of particular performance characteristics largely depends on the radiations and environmental conditions in which the instrument is to be used. The results quoted here have all been obtained from tests on instruments in routine production, with the appropriate measurements being made by the NRPB. This report deals with the evaluation of Nuclear Enterprises Portable Dose Rate Meter Type PDR 2

  2. Arc-Second Pointer for Balloon-Borne Astronomical Instrument

    Science.gov (United States)

    Ward, Philip R.; DeWeese, Keith

    2004-01-01

    A control system has been designed to keep a balloon-borne scientific instrument pointed toward a celestial object within an angular error of the order of an arc second. The design is intended to be adaptable to a large range of instrument payloads. The initial payload to which the design nominally applies is considered to be a telescope, modeled as a simple thin-walled cylinder 24 ft (≈7.3 m) long, 3 ft (≈0.91 m) in diameter, weighing 1,500 lb (having a mass of ≈680 kg). The instrument would be mounted on a set of motor-driven gimbals in pitch-yaw configuration. The motors on the gimbals would apply the control torques needed for fine adjustments of the instrument in pitch and yaw. The pitch-yaw mount would, in turn, be suspended from a motor mount at the lower end of a pair of cables hanging down from the balloon (see figure). The motor in this mount would be used to effect coarse azimuth control of the pitch-yaw mount. A notable innovation incorporated in the design is a provision for keeping the gimbal bearings in constant motion. This innovation would eliminate the deleterious effects of static friction, something that must be done in order to achieve the desired arc-second precision. Another notable innovation is the use of linear accelerometers to provide feedback that would facilitate the early detection and counteraction of disturbance torques before they could integrate into significant angular-velocity and angular-position errors. The control software processing the sensor data would be capable of distinguishing between translational and rotational accelerations. The output of the accelerometers is combined with that of angular-position and angular-velocity sensors into a proportional + integral + derivative + acceleration control law for the pitch and yaw torque motors. Preliminary calculations have shown that with appropriate gains, the power demand of the control system would be low enough to be satisfiable by means of storage
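
The control law named at the end (proportional + integral + derivative + acceleration) can be sketched for a single axis with a rigid-body model. The inertia, gains, and time step below are all invented, and real hardware would add sensor noise, actuator limits, and the translational/rotational discrimination described above:

```python
# Single-axis rigid-body pointing loop with a P+I+D+acceleration control law.
# All numbers are illustrative, not the flight values.
dt = 0.01                 # control period (s)
inertia = 500.0           # effective inertia about the axis (kg*m^2)
kp, ki, kd, ka = 400.0, 40.0, 600.0, 50.0   # hypothetical gains

theta, omega, alpha = 0.01, 0.0, 0.0   # start 0.01 rad off target
integral = 0.0
for _ in range(int(60.0 / dt)):        # simulate 60 s
    error = -theta                     # target angle is 0
    integral += error * dt
    # Feedback on the angle, its integral, the rate, and the measured
    # angular acceleration (here: the previous step's value).
    torque = kp * error + ki * integral - kd * omega - ka * alpha
    alpha = torque / inertia
    omega += alpha * dt
    theta += omega * dt
```

The acceleration term acts like extra inertia in the loop, damping the response to disturbance torques before they integrate into rate and angle errors, which is the role the accelerometer feedback plays in the design.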

  3. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k2. The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
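The unisim/multisim distinction above can be illustrated with a small numerical sketch. The toy model below is hypothetical (a single data bin whose observable depends linearly on a few systematic parameters, not any real experiment): in the linear regime the unisim sum of squared one-sigma shifts reproduces the exact variance, while the multisim estimate converges to it as the number of MC runs grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: the observable in one bin is a linear function of k
# systematic parameters (all quantities here are illustrative).
k = 5
grads = rng.normal(0.0, 1.0, k)   # sensitivity to each systematic
sigma = np.ones(k)                # 1-sigma size of each systematic

def observable(params):
    return grads @ params

# Unisim: one MC variation per parameter, shifted by +1 sigma.
unisim_shifts = np.array([observable(sigma[i] * np.eye(k)[i]) for i in range(k)])
var_unisim = np.sum(unisim_shifts ** 2)

# Multisim: many MC runs with all parameters drawn from their priors.
n_runs = 20000
draws = rng.normal(0.0, sigma, size=(n_runs, k))
var_multisim = np.var([observable(d) for d in draws])

var_exact = np.sum((grads * sigma) ** 2)
print(var_unisim, var_multisim, var_exact)
```

For a linear model both estimates agree with the exact variance; the statistical-error comparison in the abstract concerns how quickly each converges when the MC samples themselves are finite.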

  4. Ergonomic investigation of weight distribution of laparoscopic instruments.

    Science.gov (United States)

    Lin, Chiuhsiang Joe; Chen, Hung-Jen; Lo, Ying-Chu

    2011-06-01

    Laparoscopic surgery procedures require highly specialized visually controlled movements. Investigations of industrial applications indicate that the length as well as the weight of hand-held tools substantially affects movement time (MT). Different weight distributions may have similar effects on long-shafted laparoscopic instruments when performing surgical procedures. For this reason, the current experiment aimed at finding direct evidence of the weight distribution effect in an accurate task. Ten right-handed subjects performed continuous Fitts' pointing tasks using a long laparoscopic instrument. The factors and levels were target width (2.5, 3, 3.5, and 4 cm), target distance (14, 23, and 37 cm), and weight distribution (uniform, front, middle, and rear). Weight distribution was varied by attaching lead chips to the laparoscopic instrument. MT, error rate, and throughput (TP) were recorded as dependent variables. There were significant differences between the weight distributions in MT and in TP. The middle position was found to require the least time to manipulate the laparoscopic instrument in pointing tasks and also obtained the highest TP. These analyses and findings pointed to a design direction for the ergonomics and usability of long hand-held tools such as the laparoscopic instrument in this study. To optimize efficiency in using these tools, the consideration of a better weight design is important and should not be neglected.
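For readers unfamiliar with the dependent variables, movement time in Fitts' pointing tasks is usually summarized through the index of difficulty and throughput. A minimal sketch using the common Shannon formulation (the movement-time value below is illustrative, not taken from the experiment):

```python
import math

def index_of_difficulty(distance_cm, width_cm):
    # Shannon formulation of Fitts' index of difficulty, in bits.
    return math.log2(distance_cm / width_cm + 1.0)

def throughput(distance_cm, width_cm, movement_time_s):
    # Throughput (bits/s) = ID / MT, as commonly used in pointing studies.
    return index_of_difficulty(distance_cm, width_cm) / movement_time_s

# Hardest condition in the experiment: D = 37 cm, W = 2.5 cm.
mt = 1.2  # seconds, illustrative
print(round(index_of_difficulty(37, 2.5), 3), round(throughput(37, 2.5, mt), 3))
```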

  5. Propagation of resist heating mask error to wafer level

    Science.gov (United States)

    Babin, S. V.; Karklin, Linard

    2006-10-01

    As technology is approaching 45 nm and below the IC industry is experiencing a severe product yield hit due to rapidly shrinking process windows and unavoidable manufacturing process variations. Current EDA tools are unable by their nature to deliver optimized and process-centered designs that call for 'post design' localized layout optimization DFM tools. To evaluate the impact of different manufacturing process variations on the final product it is important to trace and evaluate all errors through the design-to-manufacturing flow. The photo mask is one of the critical parts of this flow, and special attention should be paid to the photo mask manufacturing process and especially to tight mask CD control. Electron beam lithography (EBL) is a major technique which is used for fabrication of high-end photo masks. During the writing process, resist heating is one of the sources of mask CD variations. Electron energy is released in the mask body mainly as heat, leading to significant temperature fluctuations in local areas. The temperature fluctuations cause changes in resist sensitivity, which in turn leads to CD variations. These CD variations depend on mask writing speed, order of exposure, pattern density and its distribution. Recent measurements revealed up to 45 nm CD variation on the mask when using ZEP resist. The resist heating problem with CAR resists is significantly smaller compared to other types of resists. This is partially due to higher resist sensitivity and the lower exposure dose required. However, there is no data yet showing CD errors on the wafer induced by CAR resist heating on the mask. This effect can be amplified by high MEEF values and should be carefully evaluated at 45 nm and below technology nodes where tight CD control is required. In this paper, we simulated CD variation on the mask due to resist heating; then a mask pattern with the heating error was transferred onto the wafer. So, a CD error on the wafer was evaluated subject to only one term of the

  6. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    In this review article, the definition of medication errors, the scope of the medication error problem, the types of medication errors, their common causes, the monitoring and consequences of medication errors, and the prevention and management of medication errors are explained clearly, with tables that are easy to follow.

  7. The Calibration and error analysis of Shallow water (less than 100m) Multibeam Echo-Sounding System

    Science.gov (United States)

    Lin, M.

    2016-12-01

    Multibeam echo-sounders (MBES) have been developed to gather bathymetric and acoustic data for more efficient and more exact mapping of the oceans. This gain in efficiency does not come without drawbacks. Indeed, the finer the resolution of remote sensing instruments, the harder they are to calibrate. This is the case for multibeam echo-sounding systems (MBES). We are no longer dealing with sounding lines between which the bathymetry must be interpolated to produce consistent representations of the seafloor. We now need to match together strips (swaths) of totally ensonified seabed. As a consequence, misalignment and time-lag problems emerge as artifacts in the bathymetry from adjacent or overlapping swaths, particularly when operating in shallow water. More importantly, one must still verify that bathymetric data meet the accuracy requirements. This paper aims to summarize the system integration involved with MBES and identify the various sources of error pertaining to shallow-water surveys (100 m and less). A systematic method for the calibration of shallow-water MBES is proposed and presented as a set of field procedures. The procedures aim at detecting, quantifying and correcting systematic instrumental and installation errors. Hence, calibrating for variations of the speed of sound in the water column, which is natural in origin, is not addressed in this document. The calibration data are compared against International Hydrographic Organization (IHO) and other related standards. This paper aims to establish a model for the specific area that can calibrate the errors due to instruments. We construct a patch-test procedure, identify the possibilities that may introduce errors into the sounding data, and calculate the error values to compensate for them. In general, the problems to be solved are the four patch-test corrections in the Hypack system: 1. Roll, 2. GPS latency, 3. Pitch, 4. Yaw. Because these four corrections affect each other, we run each survey line

  8. Reducing systematic errors in measurements made by a SQUID magnetometer

    International Nuclear Information System (INIS)

    Kiss, L.F.; Kaptás, D.; Balogh, J.

    2014-01-01

    A simple method is described which reduces those systematic errors of a superconducting quantum interference device (SQUID) magnetometer that arise from possible radial displacements of the sample in the second-order gradiometer superconducting pickup coil. By rotating the sample rod (and hence the sample) around its axis into a position where the best fit is obtained to the output voltage of the SQUID as the sample is moved through the pickup coil, the accuracy of measuring magnetic moments can be increased significantly. In the cases of an examined Co1.9Fe1.1Si Heusler alloy, pure iron and nickel samples, the accuracy could be increased over the value given in the specification of the device. The suggested method is only meaningful if the measurement uncertainty is dominated by systematic errors – radial displacement in particular – and not by instrumental or environmental noise. - Highlights: • A simple method is described which reduces systematic errors of a SQUID. • The errors arise from a radial displacement of the sample in the gradiometer coil. • The procedure is to rotate the sample rod (with the sample) around its axis. • The best fit to the SQUID voltage has to be attained moving the sample through the coil. • The accuracy of measuring magnetic moment can be increased significantly
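The rotate-to-best-fit procedure can be sketched numerically. The model below is a deliberately simplified toy (an idealized -1/+2/-1 second-order gradiometer response and an arbitrary angle-dependent distortion, not the real instrument geometry): scans are simulated at several rod rotation angles and the angle whose scan best fits the ideal profile is selected.

```python
import numpy as np

# Toy second-order gradiometer: -1, +2, -1 turns separated by a baseline.
def response(z, baseline=1.5):
    def coil(zc):  # axial response factor of one turn at position zc
        return (1.0 + (z - zc) ** 2) ** -1.5
    return -coil(-baseline) + 2.0 * coil(0.0) - coil(baseline)

z = np.linspace(-3.0, 3.0, 121)
ideal = response(z)

# Simulated scans at several rod rotation angles: a radial sample
# displacement distorts the profile by an angle-dependent amount
# (the distortion model here is arbitrary, for illustration only).
def scan(angle_deg):
    distortion = 0.1 * abs(np.sin(np.radians(angle_deg)))
    return (1.0 - distortion) * ideal + distortion * np.roll(ideal, 3)

residuals = {a: float(np.sum((scan(a) - ideal) ** 2)) for a in range(0, 360, 30)}
best = min(residuals, key=residuals.get)
print(best)  # rotation whose scan best fits the ideal profile
```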

  9. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,

  10. Estimation of chromatic errors from broadband images for high contrast imaging

    Science.gov (United States)

    Sirbu, Dan; Belikov, Ruslan

    2015-09-01

    Usage of an internal coronagraph with an adaptive optical system for wavefront correction for direct imaging of exoplanets is currently being considered for many mission concepts, including as an instrument addition to the WFIRST-AFTA mission to follow the James Webb Space Telescope. The main technical challenge associated with direct imaging of exoplanets with an internal coronagraph is to effectively control both the diffraction and scattered light from the star so that the dim planetary companion can be seen. For the deformable mirror (DM) to recover a dark hole region with sufficiently high contrast in the image plane, wavefront errors are usually estimated using probes on the DM. To date, most broadband lab demonstrations use narrowband filters to estimate the chromaticity of the wavefront error, but this reduces the photon flux per filter and requires a filter system. Here, we propose a method to estimate the chromaticity of wavefront errors using only a broadband image. This is achieved by using special DM probes that have sufficient chromatic diversity. As a case example, we simulate the retrieval of the spectrum of the central wavelength from broadband images for a simple shaped-pupil coronagraph with a conjugate DM and compute the resulting estimation error.
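The core idea, estimating per-wavelength quantities from broadband measurements using chromatically diverse probes, reduces in the simplest case to a linear inversion. The sketch below is a toy stand-in (a random linear response matrix, not a coronagraph propagation model): if each probe's wavelength-dependent response is known and the probes are diverse enough, the spectrum is recoverable by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear sketch: probe k has a known wavelength-dependent response
# A[k, j]; a broadband measurement sums per-wavelength contributions
# weighted by the unknown spectral terms x[j].  All values synthetic.
n_wavelengths, n_probes = 6, 12
A = rng.normal(size=(n_probes, n_wavelengths))  # chromatic probe responses
x_true = rng.uniform(0.5, 1.5, n_wavelengths)   # per-wavelength unknowns
y = A @ x_true                                  # broadband probe measurements

# With more chromatically diverse probes than wavelengths, the system
# is overdetermined and least squares recovers the spectrum.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.max(np.abs(x_hat - x_true)))
```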

  11. Diabetes and quality of life: Comparing results from utility instruments and Diabetes-39.

    Science.gov (United States)

    Chen, Gang; Iezzi, Angelo; McKie, John; Khan, Munir A; Richardson, Jeff

    2015-08-01

    To compare the Diabetes-39 (D-39) with six multi-attribute utility (MAU) instruments (15D, AQoL-8D, EQ-5D, HUI3, QWB, and SF-6D), and to develop mapping algorithms which could be used to transform the D-39 scores into the MAU scores. Self-reported diabetes sufferers (N=924) and members of the healthy public (N=1760), aged 18 years and over, were recruited from 6 countries (Australia 18%, USA 18%, UK 17%, Canada 16%, Norway 16%, and Germany 15%). Apart from the QWB, which was distributed normally, non-parametric rank tests were used to compare subgroup utilities and D-39 scores. Mapping algorithms were estimated using ordinary least squares (OLS) and generalised linear models (GLM). MAU instruments discriminated between diabetes patients and the healthy public; however, utilities varied between instruments. The 15D, SF-6D, and AQoL-8D had the strongest correlations with the D-39. Except for the HUI3, there were significant differences by gender. Mapping algorithms based on the OLS estimator consistently gave better goodness-of-fit results. The mean absolute error (MAE) values ranged from 0.061 to 0.147, the root mean square error (RMSE) values from 0.083 to 0.198, and the R-square statistics from 0.428 to 0.610. Based on MAE and RMSE values the preferred mapping is D-39 into 15D. R-square statistics and the range of predicted utilities indicate the preferred mapping is D-39 into AQoL-8D. Utilities estimated from different MAU instruments differ significantly and the outcome of a study could depend upon the instrument used. The algorithms reported in this paper enable D-39 data to be mapped into utilities predicted from any of six instruments. This provides choice for those conducting cost-utility analyses. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
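A mapping algorithm of the kind reported (OLS regression of utility on a condition-specific score, judged by MAE and RMSE) can be sketched as follows. The data here are synthetic and the coefficients bear no relation to the published D-39 algorithms.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic illustration: map a condition-specific score onto a utility
# score via OLS, then report MAE/RMSE goodness of fit.
n = 500
score = rng.uniform(0, 100, n)                            # e.g. a D-39-like score
utility = 0.95 - 0.004 * score + rng.normal(0, 0.05, n)   # assumed true relation

X = np.column_stack([np.ones(n), score])                  # intercept + slope
beta, *_ = np.linalg.lstsq(X, utility, rcond=None)
pred = X @ beta

mae = np.mean(np.abs(pred - utility))
rmse = np.sqrt(np.mean((pred - utility) ** 2))
print(beta, round(mae, 3), round(rmse, 3))
```

In practice one would also check the range of predicted utilities, since a linear map can predict values outside the instrument's feasible range.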

  12. Spectral and Wavefront Error Performance of WFIRST-AFTA Bandpass Filter Coating Prototypes

    Science.gov (United States)

    Quijada, Manuel A.; Seide, Laurie; Pasquale, Bert A.; McMann, Joseph C.; Hagopian, John G.; Dominguez, Margaret Z.; Gong, Quian; Marx, Catherine T.

    2016-01-01

    The Cycle 5 design baseline for the Wide-Field Infrared Survey Telescope Astrophysics Focused Telescope Assets (WFIRST/AFTA) instrument includes a single wide-field channel (WFC) instrument for both imaging and slit-less spectroscopy. The only routinely moving part during scientific observations for this wide-field channel is the element wheel (EW) assembly. This filter-wheel assembly will have 8 positions that will be populated with 6 bandpass filters, a blank position, and a Grism that will consist of a three-element assembly to disperse the full field with an undeviated central wavelength for galaxy redshift surveys. All filter elements in the EW assembly will be made out of fused silica substrates (110 mm diameter) that will have the appropriate bandpass coatings according to the filter designations (Z087, Y106, J129, H158, F184, W149 and Grism). This paper presents and discusses the performance (including spectral transmission and reflected/transmitted wavefront error measurements) of a subset of bandpass filter coating prototypes that are based on the WFC instrument filter complement. The bandpass coating prototypes that are tested in this effort correspond to the Z087, W149, and Grism filter elements. These filter coatings have been procured from three different vendors to assess the most challenging aspects in terms of the in-band throughput, out of band rejection (including the cut-on and cutoff slopes), and the impact the wavefront error distortions of these filter coatings will have on the imaging performance of the wide-field channel in the WFIRST/AFTA observatory.

  13. Designing communication and remote controlling of virtual instrument network system

    Science.gov (United States)

    Lei, Lin; Wang, Houjun; Zhou, Xue; Zhou, Wenjian

    2005-01-01

    In this paper, a virtual instrument network over a LAN, and ultimately remote control of virtual instruments, is realized based on virtual instrument technology and the LabWindows/CVI software platform. The virtual instrument network system is made up of three subsystems: a server subsystem, a telnet client subsystem and a local instrument control subsystem. This paper introduces the virtual instrument network structure in detail based on LabWindows. Essential techniques are introduced, including the application-level design of virtual instrument network communication, the client/server programming model, communication between a remote PC and the server, transfer of control of the workstation, and the server program. The virtual instrument network can also connect to the Internet. These techniques have been verified through their application in an electronic-measurement virtual instrument network that is already built up, confirming their practical value. Experiments and applications validate the effectiveness of this design.

  14. Designing communication and remote controlling of virtual instrument network system

    International Nuclear Information System (INIS)

    Lei Lin; Wang Houjun; Zhou Xue; Zhou Wenjian

    2005-01-01

    In this paper, a virtual instrument network over a LAN, and ultimately remote control of virtual instruments, is realized based on virtual instrument technology and the LabWindows/CVI software platform. The virtual instrument network system is made up of three subsystems: a server subsystem, a telnet client subsystem and a local instrument control subsystem. This paper introduces the virtual instrument network structure in detail based on LabWindows. Essential techniques are introduced, including the application-level design of virtual instrument network communication, the client/server programming model, communication between a remote PC and the server, transfer of control of the workstation, and the server program. The virtual instrument network can also connect to the Internet. These techniques have been verified through their application in an electronic-measurement virtual instrument network that is already built up, confirming their practical value. Experiments and applications validate the effectiveness of this design.

  15. Netherlands Hydrological Modeling Instrument

    Science.gov (United States)

    Hoogewoud, J. C.; de Lange, W. J.; Veldhuizen, A.; Prinsen, G.

    2012-04-01

    Netherlands Hydrological Modeling Instrument: a decision support system for water basin management. J.C. Hoogewoud, W.J. de Lange, A. Veldhuizen, G. Prinsen. The Netherlands Hydrological Modeling Instrument (NHI) is the center point of a framework of models, to coherently model the hydrological system and the multitude of functions it supports. Dutch hydrological institutes Deltares, Alterra, Netherlands Environmental Assessment Agency, RWS Waterdienst, STOWA and Vewin are cooperating in enhancing the NHI for adequate decision support. The instrument is used by three different ministries involved in national water policy matters, for instance the WFD, drought management, manure policy and climate change issues. The basis of the modeling instrument is a state-of-the-art on-line coupling of the groundwater system (MODFLOW), the unsaturated zone (metaSWAP) and the surface water system (MOZART-DM). It brings together hydro(geo)logical processes from the column to the basin scale, ranging from 250x250 m plots to the river Rhine, and includes salt water flow. The NHI is validated with an eight-year run (1998-2006) with dry and wet periods. For this run different parts of the hydrology have been compared with measurements, for instance water demands in dry periods (e.g. for irrigation), discharges at outlets, groundwater levels and evaporation. A validation alone is not enough to get support from stakeholders; involvement of stakeholders in the modeling process is needed. Therefore, to gain sufficient support and trust in the instrument at different (policy) levels, a couple of actions have been taken: 1. a transparent evaluation of modeling results has been set up; 2. an extensive program is running to cooperate with regional water boards and suppliers of drinking water in improving the NHI; 3. (hydrological) data are shared via a newly set up Modeling Database for local and national models; 4. the NHI is enhanced with "local" information. The NHI is and has been used for many

  16. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2014-01-01

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV...
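The attenuation problem described above, and the textbook IV remedy of instrumenting with a further lag, can be illustrated on a simulated AR(1) process observed with white measurement noise (this is the generic IV logic, not necessarily the paper's exact estimator):

```python
import numpy as np

rng = np.random.default_rng(3)

# Latent AR(1) process observed with white measurement noise.  OLS on
# the noisy series understates the persistence phi; instrumenting the
# lag with the second lag removes the attenuation, because y_{t-2} is
# correlated with x_{t-1} but not with the period-(t-1) noise.
T, phi = 200_000, 0.8
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal()
y = x + rng.normal(0.0, 1.0, T)       # noisy proxy for x

y0, y1, y2 = y[2:], y[1:-1], y[:-2]
phi_ols = (y1 @ y0) / (y1 @ y1)       # attenuated toward zero
phi_iv = (y2 @ y0) / (y2 @ y1)        # instrumented with the second lag
print(round(phi_ols, 3), round(phi_iv, 3))
```

With these noise levels the OLS estimate is attenuated to roughly 0.74 of the true value, while the IV estimate is consistent for phi.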

  17. Selective impairment of living things and musical instruments on a verbal 'Semantic Knowledge Questionnaire' in a case of apperceptive visual agnosia.

    Science.gov (United States)

    Masullo, Carlo; Piccininni, Chiara; Quaranta, Davide; Vita, Maria Gabriella; Gaudino, Simona; Gainotti, Guido

    2012-10-01

    Semantic memory was investigated in a patient (MR) affected by a severe apperceptive visual agnosia, due to an ischemic cerebral lesion, bilaterally affecting the infero-mesial parts of the temporo-occipital cortices. The study was made by means of a Semantic Knowledge Questionnaire (Laiacona, Barbarotto, Trivelli, & Capitani, 1993), which takes separately into account four categories of living beings (animals, fruits, vegetables and body parts) and of artefacts (furniture, tools, vehicles and musical instruments), does not require a visual analysis and allows one to distinguish errors concerning super-ordinate categorization, perceptual features and functional/encyclopedic knowledge. When the total number of errors obtained on all the categories of living and non-living beings was considered, a non-significant trend toward a higher number of errors in living stimuli was observed. This difference, however, became significant when body parts and musical instruments were excluded from the analysis. Furthermore, the number of errors obtained on the musical instruments was similar to that obtained on the living categories of animals, fruits and vegetables and significantly higher than that obtained in the other artefact categories. This difference was still significant when familiarity, frequency of use and prototypicality of each stimulus entered into a logistic regression analysis. On the other hand, a separate analysis of errors obtained on questions exploring super-ordinate categorization, perceptual features and functional/encyclopedic attributes showed that the differences between living and non-living stimuli and between musical instruments and other artefact categories were mainly due to errors obtained on questions exploring perceptual features. All these data are at variance with the 'domains of knowledge' hypothesis, which assumes that the breakdown of different categories of living and non-living things respects the distinction between biological entities and

  18. Exploring the effect of diffuse reflection on indoor localization systems based on RSSI-VLC.

    Science.gov (United States)

    Mohammed, Nazmi A; Elkarim, Mohammed Abd

    2015-08-10

    This work explores and evaluates the effect of diffuse light reflection on the accuracy of indoor localization systems based on visible light communication (VLC) in a high reflectivity environment using a received signal strength indication (RSSI) technique. The effect of the essential receiver (Rx) and transmitter (Tx) parameters on the localization error with different transmitted LED power and wall reflectivity factors is investigated at the worst Rx coordinates for a directed/overall link. Since this work assumes harsh operating conditions (i.e., a multipath model, high reflectivity surfaces, worst Rx position), an error of ≥ 1.46 m is found. To achieve a localization error in the range of 30 cm under these conditions with moderate LED power (i.e., P = 0.45 W), low reflectivity walls (i.e., ρ = 0.1) should be used, which would enable a localization error of approximately 7 mm at the room's center.
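The mechanism behind the reported errors is easy to see in a generic RSSI ranging model: diffuse reflections add received power, so the inverted distance is biased short. The sketch below uses a simple log-distance path-loss inversion with illustrative parameters (the paper's VLC channel model is more detailed and includes the reflection geometry explicitly):

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    # Log-distance path-loss inversion.  tx_power_dbm is the RSSI
    # expected at 1 m; both parameters are illustrative assumptions.
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

d_true = 2.0
rssi_los = -40.0 - 20.0 * math.log10(d_true)   # line-of-sight only
rssi_with_reflection = rssi_los + 1.0          # +1 dB of diffuse light

# The reflected component raises RSSI, so the receiver appears closer:
print(rssi_to_distance(rssi_los), rssi_to_distance(rssi_with_reflection))
```

Lowering wall reflectivity shrinks the added power term, which is why the abstract reports much smaller errors for low-reflectivity walls.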

  19. Residual sweeping errors in turbulent particle pair diffusion in a Lagrangian diffusion model.

    Science.gov (United States)

    Malik, Nadeem A

    2017-01-01

    Thomson, D. J. & Devenish, B. J. [J. Fluid Mech. 526, 277 (2005)] and others have suggested that sweeping effects make Lagrangian properties in Kinematic Simulations (KS), Fung et al [Fung J. C. H., Hunt J. C. R., Malik N. A. & Perkins R. J. J. Fluid Mech. 236, 281 (1992)], unreliable. However, such a conclusion can only be drawn under the assumption of locality. The major aim here is to quantify the sweeping errors in KS without assuming locality. Through a novel analysis based upon analysing pairs of particle trajectories in a frame of reference moving with the large energy containing scales of motion it is shown that the normalized integrated error [Formula: see text] in the turbulent pair diffusivity (K) due to the sweeping effect decreases with increasing pair separation (σl), such that [Formula: see text] as σl/η → ∞; and [Formula: see text] as σl/η → 0. η is the Kolmogorov turbulence microscale. There is an intermediate range of separations 1 < σl/η < ∞ in which the error [Formula: see text] remains negligible. Simulations using KS show that in the swept frame of reference, this intermediate range is large covering almost the entire inertial subrange simulated, 1 < σl/η < 105, implying that the deviation from locality observed in KS cannot be attributed to sweeping errors. This is important for pair diffusion theory and modeling. PACS numbers: 47.27.E?, 47.27.Gs, 47.27.jv, 47.27.Ak, 47.27.tb, 47.27.eb, 47.11.-j.

  20. The problem of assessing landmark error in geometric morphometrics: theory, methods, and modifications.

    Science.gov (United States)

    von Cramon-Taubadel, Noreen; Frazier, Brenda C; Lahr, Marta Mirazón

    2007-09-01

    Geometric morphometric methods rely on the accurate identification and quantification of landmarks on biological specimens. As in any empirical analysis, the assessment of inter- and intra-observer error is desirable. A review of methods currently being employed to assess measurement error in geometric morphometrics was conducted and three general approaches to the problem were identified. One such approach employs Generalized Procrustes Analysis to superimpose repeatedly digitized landmark configurations, thereby establishing whether repeat measures fall within an acceptable range of variation. The potential problem of this error assessment method (the "Pinocchio effect") is demonstrated and its effect on error studies discussed. An alternative approach involves employing Euclidean distances between the configuration centroid and repeat measures of a landmark to assess the relative repeatability of individual landmarks. This method is also potentially problematic as the inherent geometric properties of the specimen can result in misleading estimates of measurement error. A third approach involved the repeated digitization of landmarks with the specimen held in a constant orientation to assess individual landmark precision. This latter approach is an ideal method for assessing individual landmark precision, but is restrictive in that it does not allow for the incorporation of instrumentally defined or Type III landmarks. Hence, a revised method for assessing landmark error is proposed and described with the aid of worked empirical examples. (c) 2007 Wiley-Liss, Inc.
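The third approach, repeated digitization of a landmark with the specimen held in a constant orientation, amounts to measuring the spread of repeats around their own mean. A minimal sketch with synthetic repeats (the coordinates and error magnitude are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# One landmark digitized 20 times in a fixed orientation; digitizing
# error is simulated as isotropic Gaussian noise (illustrative values).
true_landmark = np.array([12.0, 7.5, 3.2])
repeats = true_landmark + rng.normal(0.0, 0.05, size=(20, 3))

# Precision: mean Euclidean deviation of repeats from their mean position.
mean_pos = repeats.mean(axis=0)
deviations = np.linalg.norm(repeats - mean_pos, axis=1)
precision = deviations.mean()
print(round(precision, 4))
```

Unlike the centroid-distance approach criticized in the abstract, this statistic does not depend on where the landmark sits relative to the configuration centroid, which is why the authors favor it for assessing individual landmark precision.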

  1. Vertical price leadership on local maize markets in Benin

    NARCIS (Netherlands)

    Kuiper, WE; Lutz, C; van Tilburg, A

    This paper considers vertical price relationships between wholesalers and retailers on five local maize markets in Benin. We show that the common stochastic trend and the long-run disequilibrium error must explicitly be considered to correctly interpret the restrictions on the error-correction

  2. Accuracy of x-ray image-based 3D localization from two C-arm views: a comparison between an ideal system and a real device

    Science.gov (United States)

    Brost, Alexander; Strobel, Norbert; Yatziv, Liron; Gilson, Wesley; Meyer, Bernhard; Hornegger, Joachim; Lewin, Jonathan; Wacker, Frank

    2009-02-01

    C-arm X-ray imaging devices are commonly used for minimally invasive cardiovascular or other interventional procedures. Calibrated state-of-the-art systems can, however, not only be used for 2D imaging but also for three-dimensional reconstruction either using tomographic techniques or even stereotactic approaches. To evaluate the accuracy of X-ray object localization from two views, a simulation study assuming an ideal imaging geometry was carried out first. This was backed up with a phantom experiment involving a real C-arm angiography system. Both studies were based on a phantom comprising five point objects. These point objects were projected onto a flat-panel detector under different C-arm view positions. The resulting 2D positions were perturbed by adding Gaussian noise to simulate 2D point localization errors. In the next step, 3D point positions were triangulated from two views. A 3D error was computed by taking differences between the reconstructed 3D positions using the perturbed 2D positions and the initial 3D positions of the five points. This experiment was repeated for various C-arm angulations involving angular differences ranging from 15° to 165°. The smallest 3D reconstruction error was achieved, as expected, by views that were 90° apart. In this case, the simulation study yielded a 3D error of 0.82 mm +/- 0.24 mm (mean +/- standard deviation) for 2D noise with a standard deviation of 1.232 mm (4 detector pixels). The experimental result for this view configuration obtained on an AXIOM Artis C-arm (Siemens AG, Healthcare Sector, Forchheim, Germany) system was 0.98 mm +/- 0.29 mm, respectively. These results show that state-of-the-art C-arm systems can localize instruments with millimeter accuracy, and that they can accomplish this almost as well as an idealized theoretical counterpart. High stereotactic localization accuracy, good patient access, and CT-like 3D imaging capabilities render state-of-the-art C-arm systems ideal devices for X
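The two-view reconstruction step can be sketched with idealized geometry: each view defines a ray from a known source position through the detected point, and the 3D estimate is the point minimizing the summed squared distance to the rays. The setup below is a simplified stand-in for the C-arm geometry (coordinates and noise levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def closest_point_to_rays(origins, directions):
    # Least-squares point nearest a set of rays: solve
    # sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projection complement
        A += M
        b += M @ o
    return np.linalg.solve(A, b)

p_true = np.array([10.0, -5.0, 30.0])
# Two source positions roughly 90 degrees apart as seen from the point:
origins = [np.array([0.0, 0.0, 0.0]), np.array([100.0, 0.0, 0.0])]
# Rays toward the point, perturbed to mimic 2D detection noise:
dirs = [p_true - o + rng.normal(0.0, 0.01, 3) for o in origins]

p_hat = closest_point_to_rays(origins, dirs)
print(np.linalg.norm(p_hat - p_true))
```

As in the study, the error grows when the views are nearly parallel, because the rays then intersect at a shallow angle and small angular noise translates into large depth errors.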

  3. Nonlinear error-field penetration in low density ohmically heated tokamak plasmas

    International Nuclear Information System (INIS)

    Fitzpatrick, R

    2012-01-01

    A theory is developed to predict the error-field penetration threshold in low density, ohmically heated, tokamak plasmas. The novel feature of the theory is that the response of the plasma in the vicinity of the resonant surface to the applied error-field is calculated from nonlinear drift-MHD (magnetohydrodynamical) magnetic island theory, rather than linear layer theory. Error-field penetration, and subsequent locked mode formation, is triggered once the destabilizing effect of the resonant harmonic of the error-field overcomes the stabilizing effect of the ion polarization current (caused by the propagation of the error-field-induced island chain in the local ion fluid frame). The predicted scaling of the error-field penetration threshold with engineering parameters is (b_r/B_T)_crit ∼ n_e B_T^(-1.8) R_0^(-0.25), where b_r is the resonant harmonic of the vacuum radial error-field at the resonant surface, B_T the toroidal magnetic field-strength, n_e the electron number density at the resonant surface and R_0 the major radius of the plasma. This scaling—in particular, the linear dependence of the threshold with density—is consistent with experimental observations. When the scaling is used to extrapolate from JET to ITER, the predicted ITER error-field penetration threshold is (b_r/B_T)_crit ∼ 5 × 10^(-5), which just lies within the expected capabilities of the ITER error-field correction system. (paper)
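The quoted scaling can be wrapped in a small helper to compare thresholds across machines. The prefactor is undetermined by the scaling alone (it must be fixed empirically on one device), and the machine parameters used below are purely illustrative, not actual JET or ITER values:

```python
def penetration_threshold(n_e, B_T, R_0, prefactor=1.0):
    # (b_r/B_T)_crit ~ n_e * B_T**-1.8 * R_0**-0.25; scaling only,
    # so only ratios between machines are meaningful here.
    return prefactor * n_e * B_T ** -1.8 * R_0 ** -0.25

# At fixed density, a larger, higher-field machine (illustrative numbers)
# has a lower penetration threshold, i.e. is more sensitive to error fields:
ratio = penetration_threshold(1.0, 5.3, 6.2) / penetration_threshold(1.0, 3.0, 3.0)
print(ratio)
```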

  4. Design and operation of dust measuring instrumentation based on the beta-radiation method

    International Nuclear Information System (INIS)

    Lilienfeld, P.

    1975-01-01

    The theory, instrument design aspects, and applications of beta-radiation attenuation for the measurement of the mass concentration of airborne particulates are reviewed. Applicable methods of particle collection, beta sensing configurations, source (63Ni, 14C, 147Pr, 85Kr) and detector design criteria, electronic signal processing, digital control, and instrument programming techniques are treated. Advantages, limitations, and error sources of beta-attenuation instrumentation are analyzed. Applications to industrial dust measurements, source testing, ambient monitoring, and particle size analysis are the major areas of practical utilization of this technique, and its inherent capability for automated and unattended operation provides compatibility with process control synchronization and alarm, telemetry, and incorporation into pollution monitoring network sensing stations. (orig.)

  5. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications given to pediatric patients. Of these error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  6. Uncertainty quantification in a chemical system using error estimate-based mesh adaption

    International Nuclear Information System (INIS)

    Mathelin, Lionel; Le Maitre, Olivier P.

    2012-01-01

    This paper describes a rigorous a posteriori error analysis for the stochastic solution of non-linear uncertain chemical models. The dual-based a posteriori stochastic error analysis extends the methodology developed in the deterministic finite elements context to stochastic discretization frameworks. It requires the resolution of two additional (dual) problems to yield the local error estimate. The stochastic error estimate can then be used to adapt the stochastic discretization. Different anisotropic refinement strategies are proposed, leading to a cost-efficient tool suitable for multi-dimensional problems of moderate stochastic dimension. The adaptive strategies allow both for refinement and coarsening of the stochastic discretization, as needed to satisfy a prescribed error tolerance. The adaptive strategies were successfully tested on a model for the hydrogen oxidation in supercritical conditions having 8 random parameters. The proposed methodologies are however general enough to be also applicable for a wide class of models such as uncertain fluid flows. (authors)

  7. Nonparametric instrumental regression with non-convex constraints

    International Nuclear Information System (INIS)

    Grasmair, M; Scherzer, O; Vanhems, A

    2013-01-01

    This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition. (paper)

  8. Nonparametric instrumental regression with non-convex constraints

    Science.gov (United States)

    Grasmair, M.; Scherzer, O.; Vanhems, A.

    2013-03-01

    This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition.

  9. Error-in-variables models in calibration

    Science.gov (United States)

    Lira, I.; Grientschnig, D.

    2017-12-01

    In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
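
    As a concrete illustration of why ignoring errors in the stimuli matters, the sketch below compares ordinary least squares with the classical Deming (errors-in-variables) estimator on simulated calibration data. This is a generic frequentist EIV example, not the Bayesian flow-rate analysis of the paper, and all numbers are invented.

```python
import numpy as np

def deming(x, y, delta=1.0):
    # Classical Deming estimator for a straight line when both x and y
    # carry measurement error; delta is the y-to-x error-variance ratio.
    mx, my = x.mean(), y.mean()
    sxx = ((x - mx) ** 2).mean()
    syy = ((y - my) ** 2).mean()
    sxy = ((x - mx) * (y - my)).mean()
    b = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
                                     + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
    return b, my - b * mx

rng = np.random.default_rng(0)
stim_true = rng.uniform(0.0, 10.0, 2000)                 # true stimuli
x = stim_true + rng.normal(0.0, 1.0, 2000)               # imperfectly known stimuli
y = 2.0 * stim_true + 1.0 + rng.normal(0.0, 1.0, 2000)   # measured responses

slope_ols = np.polyfit(x, y, 1)[0]      # ignores x-error: attenuated toward 0
slope_dem, _ = deming(x, y, delta=1.0)  # accounts for x-error
print(slope_ols, slope_dem)             # OLS biased low; Deming close to 2
```

    The OLS slope is attenuated by the factor var(true stimulus) / (var(true stimulus) + var(x-error)), which is exactly the bias an EIV treatment removes.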

  10. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    Science.gov (United States)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    This paper designs a roundness measurement model with multiple systematic errors, which takes eccentricity, probe offset, the radius of the probe tip, and tilt error into account for roundness measurement of cylindrical components. The effects of these systematic errors and of the component radius on the roundness measurement are analysed. The proposed method is built on an instrument with a high-precision rotating spindle. The effectiveness of the proposed method is verified by an experiment with a standard cylindrical component measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed roundness measurement model for an object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
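
    The traditional limacon model that the paper improves upon fits only the first harmonic (eccentricity) of the sampled radial profile. A minimal sketch of that baseline step follows, with invented profile values; probe offset and tilt, the paper's additional systematic errors, are not modelled here.

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
# Synthetic profile: a radius-37 mm part with eccentricity (ex, ey) and a
# small 5-lobe form error (all values invented for illustration).
R, ex, ey = 37.0, 0.015, -0.010
r = R + ex * np.cos(theta) + ey * np.sin(theta) + 0.002 * np.cos(5 * theta)

# Limacon least-squares fit: design matrix [1, cos(theta), sin(theta)]
A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
coef, *_ = np.linalg.lstsq(A, r, rcond=None)
roundness_profile = r - A @ coef        # profile with eccentricity removed
print(coef)                             # recovers [R, ex, ey]
```

    Over a uniform full-circle grid the higher harmonics are orthogonal to the limacon terms, so the fit separates eccentricity exactly; the residual is the form (roundness) error.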

  11. Multi point of care instrument evaluation for use in anti-retroviral clinics in South Africa.

    Science.gov (United States)

    Gounden, Verena; George, Jaya

    2012-01-01

    South Africa has the largest prevalence of HIV infected individuals in the world. The introduction of point of care testing to anti-retroviral (ARV) clinic sites is hoped to fast track initiation of patients on ARVs and to allow for earlier recognition of adverse effects such as dyslipidaemia, renal and hepatic dysfunction. We evaluated six instruments for the following analytes: glucose, lactate, creatinine, cholesterol, triglycerides, HDL-cholesterol, alanine transaminase (ALT), and glycated haemoglobin. Comparisons with the central laboratory analyser were performed, as well as precision studies. A scoring system was developed by the authors to evaluate the instruments in terms of analytical performance, cost, ease of use, and other operational characteristics. Because one of the goals of placing these instruments was that their operation be simple enough for non-laboratory staff, ease of use contributed a large proportion of the final score. Analytical performance of the POC analysers was generally similar; however, there were significant differences in operational characteristics and ease of use. Bias for the different analytes, when compared to the laboratory analyser, ranged from -27% to 14%. Calculated total errors for all analytes except HDL cholesterol were within total allowable error recommendations. The two instruments (Roche Reflotron and Cholestech LDX) with the highest overall total points also achieved the highest scores for ease of use. This pilot study has led to the development of a scoring system for the evaluation of POC instruments.
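
    The abstract does not state which total-error formula was used; a common convention (Westgard-style) combines bias and imprecision as TE = |bias| + 1.65·CV and compares the result against the allowable total error. A hypothetical check, with invented numbers:

```python
def total_error(bias_pct, cv_pct, z=1.65):
    # Common convention (assumed, not necessarily the authors'):
    # TE = |bias| + z * CV, with z = 1.65 for one-sided 95% coverage.
    return abs(bias_pct) + z * cv_pct

# Hypothetical POC glucose result checked against a 10% allowable total error:
te = total_error(bias_pct=-3.0, cv_pct=2.5)
print(round(te, 3), te <= 10.0)
```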

  12. Real-Time Multi-Target Localization from Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Xuan Wang

    2016-12-01

    Full Text Available In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on a UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of multiple targets are calculated using the homogeneous coordinate transformation. On the basis of this, two methods which can improve the accuracy of the multi-target localization are proposed: (1) a real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. In an actual flight, the UAV flight altitude was 1140 m. The multi-target localization results are within the range of allowable error. After applying the lens distortion correction method to a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data based on multiple images. Compared with multi-target localization based on a single image, the CEP of the multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board to operate in real time. This research is expected to significantly benefit small UAVs which need multi-target geo-location functions.
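
    The RLS refinement over multiple images can be sketched generically: for a static target observed with per-image noise, each new fix shrinks the estimate's variance. This is a simplified scalar-gain version with invented numbers, not the authors' implementation.

```python
import numpy as np

def rls_update(x_hat, P, z, R_meas):
    # One recursive-least-squares step for a static parameter observed
    # directly with measurement noise variance R_meas (per-axis scalar).
    K = P / (P + R_meas)            # gain
    x_new = x_hat + K * (z - x_hat) # blend estimate with the new fix
    P_new = (1.0 - K) * P           # variance shrinks with every image
    return x_new, P_new

rng = np.random.default_rng(1)
truth = np.array([120.0, -40.0])          # hypothetical target offsets (m)
x_hat, P = np.zeros(2), 1e6               # diffuse prior
for _ in range(50):                       # 50 image frames
    z = truth + rng.normal(0.0, 15.0, 2)  # single-image localization error
    x_hat, P = rls_update(x_hat, P, z, 15.0 ** 2)
print(x_hat)                              # converges toward `truth`
```

    With a diffuse prior this reduces to a running mean, so the residual error falls roughly as the square root of the number of images, which is the mechanism behind the reported CEP reduction.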

  13. LabVIEW 2010 Computer Vision Platform Based Virtual Instrument and Its Application for Pitting Corrosion Study.

    Science.gov (United States)

    Ramos, Rogelio; Zlatev, Roumen; Valdez, Benjamin; Stoytcheva, Margarita; Carrillo, Mónica; García, Juan-Francisco

    2013-01-01

    A virtual instrumentation (VI) system called VI localized corrosion image analyzer (LCIA), based on LabVIEW 2010, was developed, allowing rapid, automatic determination, free of subjective error, of the number of pits on large corroded specimens. The VI LCIA synchronously controls the digital microscope image acquisition and its analysis, finally producing a map file containing the coordinates of the detected zones on the investigated specimen that probably contain pits. The pit area, traverse length, and density are also determined by the VI using binary large object (blob) analysis. The resulting map file can be used further by a scanning vibrating electrode technique (SVET) system for a rapid (one pass) "true/false" SVET check of the probable zones only, passing through the pit centers and thus avoiding a scan of the entire specimen. A complete SVET scan over the zones already proved "true" can then determine the corrosion rate in any of the zones.

  14. Use of the mathematical modelling method for the investigation of dynamic characteristics of acoustical measuring instruments

    Science.gov (United States)

    Vasilyev, Y. M.; Lagunov, L. F.

    1973-01-01

    The schematic diagram of a noise measuring device is presented that uses pulse-expansion modeling of the peak or any other measured value to obtain instrument readings with a very low noise error.

  15. Assessment of instruments in facilitating investment in off-grid renewable energy projects

    International Nuclear Information System (INIS)

    Shi, Xunpeng; Liu, Xiying; Yao, Lixia

    2016-01-01

    Renewable off-grid solutions play a critical role in supporting rural electrification. However, off-grid Renewable Energy (OGRE) project financing faces significant challenges due to limited financing access, low affordability of consumers, high transaction costs, and so on. Various supporting instruments have been implemented to facilitate OGRE investment. This study assesses the effectiveness of those instruments with a framework consisting of three dimensions: feasibility, sustainability, and replicability. The weights of each dimension in the framework and the scores of each instrument are assessed by expert surveys based on the Delphi method. It is suggested that all three dimensions should be taken into consideration while assessing the instruments, among which feasibility and sustainability are considered the most important dimensions in the assessment framework. Furthermore, the top-5 most effective instruments in facilitating OGRE investment are local engagement in operation and maintenance, loan guarantee, start-up grant, end-user financing, and concessional finance. Developing countries that need to increase electrification, such as most of the ASEAN member states, could use these top-scored instruments despite their limited amount of public finance. - Highlights: •Assess the effectiveness of instruments for promoting financing for OGRE projects. •A three-dimension assessment framework: feasibility, sustainability, replicability. •Use online surveys and the Delphi method to collect experts’ assessment. •The most effective instruments: local engagement, loan guarantee, and start-up grant.

  16. Learning from sensory and reward prediction errors during motor adaptation.

    Science.gov (United States)

    Izawa, Jun; Shadmehr, Reza

    2011-03-01

    Voluntary motor commands produce two kinds of consequences. Initially, a sensory consequence is observed in terms of activity in our primary sensory organs (e.g., vision, proprioception). Subsequently, the brain evaluates the sensory feedback and produces a subjective measure of utility or usefulness of the motor commands (e.g., reward). As a result, comparisons between predicted and observed consequences of motor commands produce two forms of prediction error. How do these errors contribute to changes in motor commands? Here, we considered a reach adaptation protocol and found that when high quality sensory feedback was available, adaptation of motor commands was driven almost exclusively by sensory prediction errors. This form of learning had a distinct signature: as motor commands adapted, the subjects altered their predictions regarding sensory consequences of motor commands, and generalized this learning broadly to neighboring motor commands. In contrast, as the quality of the sensory feedback degraded, adaptation of motor commands became more dependent on reward prediction errors. Reward prediction errors produced comparable changes in the motor commands, but produced no change in the predicted sensory consequences of motor commands, and generalized only locally. Because we found that there was a within subject correlation between generalization patterns and sensory remapping, it is plausible that during adaptation an individual's relative reliance on sensory vs. reward prediction errors could be inferred. We suggest that while motor commands change because of sensory and reward prediction errors, only sensory prediction errors produce a change in the neural system that predicts sensory consequences of motor commands.
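
    A minimal error-driven update rule illustrates the idea that feedback quality modulates the rate of adaptation. This is a toy delta rule with invented parameters, not the state-space model used in such studies.

```python
import numpy as np

def adapt(perturbation, eta, n_trials=80):
    # Toy error-driven learning rule (not the authors' model): the motor
    # command u is corrected by a fraction eta of the prediction error
    # between the intended and observed outcome on each trial.
    u, history = 0.0, []
    for _ in range(n_trials):
        error = perturbation - u      # observed minus predicted outcome
        u += eta * error
        history.append(u)
    return np.array(history)

fast = adapt(30.0, eta=0.20)  # crisp sensory feedback: rapid adaptation
slow = adapt(30.0, eta=0.05)  # degraded feedback: slower learning
print(fast[-1], slow[-1])     # both approach the 30-unit perturbation
```

    Mapping feedback quality onto a single learning rate is a deliberate simplification; the abstract's key point is that the two error signals drive different internal changes (sensory remapping vs. none), which this sketch does not capture.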

  17. Localization and registration accuracy in image guided neurosurgery: a clinical study

    Energy Technology Data Exchange (ETDEWEB)

    Shamir, Reuben R.; Joskowicz, Leo [Hebrew University of Jerusalem, School of Engineering and Computer Science, Jerusalem (Israel); Spektor, Sergey; Shoshan, Yigal [Hadassah University Hospital, Department of Neurosurgery, School of Medicine, Jerusalem (Israel)

    2009-01-15

    The aim was to measure and compare the clinical localization and registration errors in image-guided neurosurgery, with the purpose of revising current assumptions. Twelve patients who underwent brain surgeries with a navigation system were randomly selected. A neurosurgeon localized and correlated the landmarks on preoperative MRI images and on the intraoperative physical anatomy with a tracked pointer. In the laboratory, we generated 612 scenarios in which one landmark pair was defined as the target and the remaining ones were used to compute the registration transformation. Four errors were measured: (1) fiducial localization error (FLE); (2) target registration error (TRE); (3) fiducial registration error (FRE); (4) Fitzpatrick's target registration error estimation (F-TRE). We compared the different errors and computed their correlation. The image and physical FLE ranges were 0.5-2.0 and 1.6-3.0 mm, respectively. The measured TRE, FRE and F-TRE were 4.1 ± 1.6, 3.9 ± 1.2, and 3.7 ± 2.2 mm, respectively. Low correlations of 0.19 and 0.37 were observed between the FRE and TRE and between the F-TRE and the TRE, respectively. The differences of the FRE and F-TRE from the TRE were 1.3 ± 1.0 mm (max = 5.5 mm) and 1.3 ± 1.2 mm (max = 7.3 mm), respectively. Contrary to common belief, the FLE presents significant variations. Moreover, both the FRE and the F-TRE are poor indicators of the TRE in image-to-patient registration. (orig.)
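
    The relation between FLE, FRE, and TRE can be reproduced numerically: register two point sets with a rigid (Kabsch) fit, compute the FRE on the fiducials used for registration, and the TRE on a left-out target. All coordinates and noise levels below are invented.

```python
import numpy as np

def rigid_register(A, B):
    # Kabsch/Procrustes fit: rotation R and translation t minimizing
    # ||R @ A + t - B|| for 3xN point sets A and B.
    cA, cB = A.mean(1, keepdims=True), B.mean(1, keepdims=True)
    U, _, Vt = np.linalg.svd((B - cB) @ (A - cA).T)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflection
    R = U @ D @ Vt
    return R, cB - R @ cA

rng = np.random.default_rng(2)
image_pts = rng.uniform(-50.0, 50.0, (3, 6))           # 6 landmarks (mm)
# Physical-space points: a known pose plus localization noise (the FLE).
t_true = np.array([[5.0], [2.0], [-8.0]])
physical = image_pts + t_true + rng.normal(0.0, 1.5, (3, 6))

fid, tgt = list(range(5)), 5       # register on 5 fiducials, test on 1 target
R, t = rigid_register(image_pts[:, fid], physical[:, fid])
mapped = R @ image_pts + t
fre = np.sqrt(np.mean(np.sum((mapped[:, fid] - physical[:, fid]) ** 2, 0)))
tre = np.linalg.norm(mapped[:, tgt] - physical[:, tgt])
print(f"FRE = {fre:.2f} mm, TRE = {tre:.2f} mm")
```

    Repeating this with many noise draws shows the study's point: the FRE of a given registration is only weakly correlated with the TRE at a particular target.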

  18. Error analysis for mesospheric temperature profiling by absorptive occultation sensors

    Directory of Open Access Journals (Sweden)

    M. J. Rieder

    unprecedented accuracy and vertical resolution. A major part of the error analysis also applies to refractive (e.g., Global Navigation Satellite System based) occultations, as well as to any temperature profile retrieval based on air density or major species density measurements (e.g., from Rayleigh lidar or falling sphere techniques).

    Key words. Atmospheric composition and structure (pressure, density, and temperature; instruments and techniques) – Radio science (remote sensing)

  19. Error analysis for mesospheric temperature profiling by absorptive occultation sensors

    Directory of Open Access Journals (Sweden)

    M. J. Rieder

    2001-01-01

    unprecedented accuracy and vertical resolution. A major part of the error analysis also applies to refractive (e.g., Global Navigation Satellite System based) occultations, as well as to any temperature profile retrieval based on air density or major species density measurements (e.g., from Rayleigh lidar or falling sphere techniques). Key words. Atmospheric composition and structure (pressure, density, and temperature; instruments and techniques) – Radio science (remote sensing)

  20. Using the electron localization function to correct for confinement physics in semi-local density functional theory

    International Nuclear Information System (INIS)

    Hao, Feng; Mattsson, Ann E.; Armiento, Rickard

    2014-01-01

    We have previously proposed that further improved functionals for density functional theory can be constructed based on the Armiento-Mattsson subsystem functional scheme if, in addition to the uniform electron gas and surface models used in the Armiento-Mattsson 2005 functional, a model for the strongly confined electron gas is also added. However, of central importance for this scheme is an index that identifies regions in space where the correction provided by the confined electron gas should be applied. The electron localization function (ELF) is a well-known indicator of strongly localized electrons. We use a model of a confined electron gas based on the harmonic oscillator to show that regions with high ELF directly coincide with regions where common exchange energy functionals have large errors. This suggests that the harmonic oscillator model together with an index based on the ELF provides the crucial ingredients for future improved semi-local functionals. For a practical illustration of how the proposed scheme is intended to work for a physical system we discuss monoclinic cupric oxide, CuO. A thorough discussion of this system leads us to promote the cell geometry of CuO as a useful benchmark for future semi-local functionals. Very high ELF values are found in a shell around the O ions, taking their maximum along the Cu–O directions. An estimate of the exchange functional error from the effect of electron confinement in these regions suggests a magnitude and sign that could account for the error in cell geometry.

  1. Error handling for the CDF Silicon Vertex Tracker

    CERN Document Server

    Belforte, S; Dell'Orso, Mauro; Donati, S; Galeotti, S; Giannetti, P; Morsani, F; Punzi, G; Ristori, L; Spinella, F; Zanetti, A M

    2000-01-01

    The SVT online tracker for the CDF upgrade reconstructs two-dimensional tracks using information from the Silicon Vertex detector (SVXII) and the Central Outer Tracker (COT). The SVT has an event rate of 100 kHz and a latency time of 10 μs. The system is composed of 104 VME 9U digital boards (of 8 different types) and it is implemented as a data driven architecture. Each board runs on its own 30 MHz clock. Since the data output from the SVT (a few Mbytes/sec) are a small fraction of the input data (200 Mbytes/sec), it is extremely difficult to track possible internal errors by using only the output stream. For this reason several diagnostic tools have been implemented: local error registers, error bits propagated through the data streams and the Spy Buffer system. Data flowing through each input and output stream of every board are continuously copied to memory banks named Spy Buffers which act as built in logic state analyzers hooked continuously to internal data streams. The contents of all buffers can be ...

  2. Force sensing of multiple-DOF cable-driven instruments for minimally invasive robotic surgery.

    Science.gov (United States)

    He, Chao; Wang, Shuxin; Sang, Hongqiang; Li, Jinhua; Zhang, Linan

    2014-09-01

    Force sensing for robotic surgery is limited by the size of the instrument, friction and sterilization requirements. This paper presents a force-sensing instrument to avoid these restrictions. Operating forces were calculated according to cable tension. Mathematical models of the force-sensing system were established. A force-sensing instrument was designed and fabricated. A signal collection and processing system was constructed. The presented approach can avoid the constraints of space limits, sterilization requirements and friction introduced by the transmission parts behind the instrument wrist. Test results showed that the developed instrument has a 0.03 N signal noise, a 0.05 N drift, a 0.04 N resolution and a maximum error of 0.4 N. The validation experiment indicated that the operating and grasping forces can be effectively sensed. The developed force-sensing system can be used in minimally invasive robotic surgery to construct a force-feedback system. Copyright © 2013 John Wiley & Sons, Ltd.

  3. Error sensitivity to refinement: a criterion for optimal grid adaptation

    Science.gov (United States)

    Luchini, Paolo; Giannetti, Flavio; Citro, Vincenzo

    2017-12-01

    Most indicators used for automatic grid refinement are suboptimal, in the sense that they do not really minimize the global solution error. This paper concerns a new indicator, related to the sensitivity map of global stability problems, suitable for an optimal grid refinement that minimizes the global solution error. The new criterion is derived from the properties of the adjoint operator and provides a map of the sensitivity of the global error (or its estimate) to a local mesh refinement. Examples are presented both for a scalar partial differential equation and for the system of Navier-Stokes equations. In the latter case, we also present a grid-adaptation algorithm, based on the new estimator and on the FreeFem++ software, that improves the accuracy of the solution by almost two orders of magnitude by redistributing the nodes of the initial computational mesh.

  4. Ultrahigh Error Threshold for Surface Codes with Biased Noise

    Science.gov (United States)

    Tuckett, David K.; Bartlett, Stephen D.; Flammia, Steven T.

    2018-02-01

    We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.

  5. Holistic approach for overlay and edge placement error to meet the 5nm technology node requirements

    Science.gov (United States)

    Mulkens, Jan; Slachter, Bram; Kubis, Michael; Tel, Wim; Hinnen, Paul; Maslow, Mark; Dillen, Harm; Ma, Eric; Chou, Kevin; Liu, Xuedong; Ren, Weiming; Hu, Xuerang; Wang, Fei; Liu, Kevin

    2018-03-01

    In this paper, we discuss the metrology methods and error budget that describe the edge placement error (EPE). EPE quantifies the pattern fidelity of a device structure made in a multi-patterning scheme. Here the pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources from the different process steps. EPE is computed by combining optical and e-beam metrology data. We show that a high-NA optical scatterometer can be used to densely measure in-device CD and overlay errors. A large-field e-beam system enables massive CD metrology, which is used to characterize the local CD error. The local CD distribution needs to be characterized beyond 6 sigma, which requires a high-throughput e-beam system. We present in this paper the first images of a multi-beam e-beam inspection system. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As a use case, we evaluated a 5-nm logic patterning process based on Self-Aligned Quadruple Patterning (SAQP) using ArF lithography, combined with line-cut exposures using EUV lithography.

  6. The development of state/region owned goods management’s monitoring instrument design

    Directory of Open Access Journals (Sweden)

    Ikhwanto Yogy

    2017-01-01

    Full Text Available According to many studies, the problems with state/region owned goods in Indonesian state and local governments are suspected to occur because of weak monitoring programs. A tool or instrument for implementing such a monitoring program is expected to address this problem, but no such tool currently exists. This research aims to fill that gap by developing a monitoring instrument design for state/region owned goods, using the Daerah Istimewa Yogyakarta (DIY) Local Government as a research context in order to gather valuable inputs for the design. This research uses the developmental research method. Government Regulations were used as the normative reference, and Friedman's results-based accountability quadrants were used in developing good indicators for the instrument. This research succeeded in formulating the indicators that make up the instrument. The indicators compiled are divided into compliance-based indicators and results-based indicators. The indicators were formulated based on validation and inputs from employees of DIY's Assets Management Agency and experts from academia. The instrument still has some limitations that need improvement through further research.

  7. SU-E-T-377: Inaccurate Positioning Might Introduce Significant MapCheck Calibration Error in Flatten Filter Free Beams

    International Nuclear Information System (INIS)

    Wang, S; Chao, C; Chang, J

    2014-01-01

    Purpose: This study investigates the calibration error of detector sensitivity for MapCheck due to inaccurate positioning of the device, which is not taken into account by the current commercial iterative calibration algorithm. We hypothesize that the calibration is more vulnerable to positioning error for flatten filter free (FFF) beams than for conventional flatten-filter-flattened beams. Methods: MapCheck2 was calibrated with 10 MV conventional and FFF beams, with careful alignment and with 1 cm positioning error during calibration, respectively. Open fields of 37 cm x 37 cm were delivered to gauge the impact of the resultant calibration errors. The local calibration error was modeled as a detector-independent multiplication factor, with which the propagated error was estimated for positioning errors from 1 mm to 1 cm. The calibrated sensitivities, without positioning error, were compared between the conventional and FFF beams to evaluate the dependence on beam type. Results: The 1 cm positioning error leads to 0.39% and 5.24% local calibration error in the conventional and FFF beams, respectively. After propagating to the edges of MapCheck, the calibration errors become 6.5% and 57.7%, respectively. The propagation error increases almost linearly with respect to the positioning error. The difference of sensitivities between the conventional and FFF beams was small (0.11 ± 0.49%). Conclusion: The results demonstrate that the positioning error is not handled by the current commercial calibration algorithm of MapCheck. In particular, the calibration errors for the FFF beams are ~9 times greater than those for the conventional beams with identical positioning error, and a small 1 mm positioning error might lead to up to 8% calibration error. Since the sensitivities are only slightly dependent on the beam type and the conventional beam is less affected by the positioning error, it is advisable to cross-check the sensitivities between the conventional and FFF beams to detect

  8. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll)

  9. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    Science.gov (United States)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
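    The sensitivity of the QR-based least-squares step to double-precision rounding can be illustrated with a toy ill-conditioned problem (a hypothetical random matrix, not GRACE data):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 10
A = rng.standard_normal((m, n))
A[:, -1] = A[:, 0] + 1e-8 * rng.standard_normal(m)  # nearly dependent columns
x_true = np.ones(n)
b = A @ x_true

# Solve the linearized least-squares system with a QR factorization
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# Double-precision rounding error is amplified by the condition number
print("cond(A) =", np.linalg.cond(A))
print("max coefficient error =", np.abs(x_qr - x_true).max())
```

Even with a backward-stable QR solve, the recovered coefficients lose roughly as many digits as the condition number has, which is the motivation for considering extended-precision arithmetic.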

  10. Polychromatic X-ray Micro- and Nano-Beam Science and Instrumentation

    Science.gov (United States)

    Ice, G. E.; Larson, B. C.; Liu, W.; Barabash, R. I.; Specht, E. D.; Pang, J. W. L.; Budai, J. D.; Tischler, J. Z.; Khounsary, A.; Liu, C.; Macrander, A. T.; Assoufid, L.

    2007-01-01

    Polychromatic x-ray micro- and nano-beam diffraction is an emerging nondestructive tool for the study of local crystalline structure and defect distributions. Both long-standing fundamental materials science issues, and technologically important questions about specific materials systems can be uniquely addressed. Spatial resolution is determined by the beam size at the sample and by a knife-edge technique called differential aperture microscopy that decodes the origin of scattering from along the penetrating x-ray beam. First-generation instrumentation on station 34-ID-E at the Advanced Photon Source (APS) allows for nondestructive automated recovery of the three-dimensional (3D) local crystal phase and orientation. Also recovered are the local elastic-strain and the dislocation tensor distributions. New instrumentation now under development will further extend the applications of polychromatic microdiffraction and will revolutionize materials characterization.

  11. Polychromatic X-ray Micro- and Nano-Beam Science and Instrumentation

    International Nuclear Information System (INIS)

    Ice, G.E.; Larson, Ben C.; Liu, Wenjun; Barabash, Rozaliya; Specht, Eliot D; Pang, Judy; Budai, John D.; Tischler, Jonathan Zachary; Khounsary, Ali; Liu, Chian; Macrander, Albert T.; Assoufid, Lahsen

    2007-01-01

    Polychromatic x-ray micro- and nano-beam diffraction is an emerging nondestructive tool for the study of local crystalline structure and defect distributions. Both long-standing fundamental materials science issues, and technologically important questions about specific materials systems can be uniquely addressed. Spatial resolution is determined by the beam size at the sample and by a knife-edge technique called differential aperture microscopy that decodes the origin of scattering from along the penetrating x-ray beam. First-generation instrumentation on station 34-ID-E at the Advanced Photon Source (APS) allows for nondestructive automated recovery of the three-dimensional (3D) local crystal phase and orientation. Also recovered are the local elastic-strain and the dislocation tensor distributions. New instrumentation now under development will further extend the applications of polychromatic microdiffraction and will revolutionize materials characterization

  12. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
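    A minimal sketch of the probability-ellipse computation described above, assuming a bivariate normal error model (the covariance values are made up):

```python
import numpy as np

def error_ellipse(cov, p=0.95):
    """Semi-axes and major-axis orientation of the probability ellipse
    containing a fraction p of a bivariate normal error distribution."""
    k2 = -2.0 * np.log(1.0 - p)        # chi-square quantile, 2 d.o.f.
    eigvals, eigvecs = np.linalg.eigh(cov)
    semi_axes = np.sqrt(k2 * eigvals)  # ascending: minor, then major
    angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])
    return semi_axes, angle

# Hypothetical position-error covariance (units of m^2)
cov = np.array([[4.0, 1.5],
                [1.5, 1.0]])
axes, theta = error_ellipse(cov)
print(f"95% ellipse semi-axes: {axes[1]:.2f} x {axes[0]:.2f} m, "
      f"orientation {np.degrees(theta):.1f} deg")
```

The probability circle is the special case of an isotropic covariance, for which both semi-axes coincide.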

  13. Topographical optimization of structures for use in musical instruments and other applications

    Science.gov (United States)

    Kirkland, William Brandon

    Mallet percussion instruments such as the xylophone, marimba, and vibraphone have been produced and tuned since their inception by arduously grinding the keys to achieve harmonic ratios between their 1st, 2nd, and 3rd transverse modes. In light of this, it would be preferable to have defined mathematical models so that the keys of these instruments can be produced quickly and reliably. Additionally, physical modeling of these keys, or beams, provides a useful application of non-uniform beam vibrations as studied by Euler-Bernoulli and Timoshenko beam theories. This thesis presents a literature review of previous studies of mallet percussion instrument design and the optimization of non-uniform keys. The progression of previous research from strictly mathematical approaches to finite element methods is shown, ultimately arriving at the most current optimization techniques used by other authors. However, previous research varies in the degree of accuracy to which a non-uniform beam can be modeled. Accuracies are typically reported in the literature as 1% to 2% error. While this seems attractive, musical tolerances require 0.25% error; beams outside this tolerance are unsuitable. This research seeks to build on previous work in the field by optimizing beam topology and machining keys to tolerances such that no further tuning is required. The optimization methods relied on finite element analysis and used harmonic modal frequencies as constraints rather than as arguments of an error function to be minimized. Instead, the beam mass was minimized while the modal frequency constraints were required to be satisfied within a 0.25% tolerance. The final optimized and machined keys of an A4 vibraphone were shown to be accurate within the required musical tolerances, with strong resonance at the designed frequencies. The findings establish a systematic method for designing musical structures for accuracy and repeatability upon manufacture.
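    As a sketch of the tuning criterion, the following checks measured modal frequencies against target harmonic ratios within the 0.25% musical tolerance. The 1:4:10 ratio (common for marimba-family bars) and the sample frequencies are illustrative assumptions, not values from the thesis:

```python
def within_musical_tolerance(modal_freqs, fundamental, ratios=(1, 4, 10), tol=0.0025):
    """Check that measured transverse modal frequencies hit the target
    harmonic ratios within the 0.25% musical tolerance."""
    targets = [fundamental * r for r in ratios]
    return all(abs(f - t) / t <= tol for f, t in zip(modal_freqs, targets))

# Hypothetical A4 bar: fundamental 440 Hz, modes near 1:4:10
print(within_musical_tolerance([440.0, 1760.5, 4398.0], 440.0))  # → True
```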

  14. Error-related anterior cingulate cortex activity and the prediction of conscious error awareness

    Directory of Open Access Journals (Sweden)

    Catherine eOrr

    2012-06-01

    Full Text Available Research examining the neural mechanisms associated with error awareness has consistently identified dorsal anterior cingulate cortex (ACC) activity as necessary but not predictive of conscious error detection. Two recent studies (Steinhauser and Yeung, 2010; Wessel et al., 2011) have found a contrary pattern of greater dorsal ACC activity (in the form of the error-related negativity) during detected errors, but suggested that the greater activity may instead reflect task influences (e.g., response conflict, error probability) and/or individual variability (e.g., statistical power). We re-analyzed fMRI BOLD data from 56 healthy participants who had previously been administered the Error Awareness Task, a motor Go/No-go response inhibition task in which subjects make errors of commission of which they are aware (Aware errors) or unaware (Unaware errors). Consistent with previous data, the activity in a number of cortical regions was predictive of error awareness, including bilateral inferior parietal and insula cortices; however, in contrast to previous studies, including our own smaller-sample studies using the same task, error-related dorsal ACC activity was significantly greater during aware errors when compared to unaware errors. While the significantly faster RT for aware errors (compared to unaware errors) was consistent with the hypothesis of higher response conflict increasing ACC activity, we could find no relationship between dorsal ACC activity and the error RT difference. The data suggest that individual variability in error awareness is associated with error-related dorsal ACC activity, and therefore this region may be important to conscious error detection, but it remains unclear what task and individual factors influence error awareness.

  15. A Note on NCOM Temperature Forecast Error Calibration Using the Ensemble Transform

    Science.gov (United States)

    2009-01-01

    …problem, local unbiased (correlation) and persistent (bias) errors of the Navy Coastal Ocean Modeling (NCOM) System nested in global ocean domains are… system were made available in real-time without performing local data assimilation, though remote sensing and global data were assimilated on the

  16. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  17. Invited Article: Deep Impact instrument calibration.

    Science.gov (United States)

    Klaasen, Kenneth P; A'Hearn, Michael F; Baca, Michael; Delamere, Alan; Desnoyer, Mark; Farnham, Tony; Groussin, Olivier; Hampton, Donald; Ipatov, Sergei; Li, Jianyang; Lisse, Carey; Mastrodemos, Nickolaos; McLaughlin, Stephanie; Sunshine, Jessica; Thomas, Peter; Wellnitz, Dennis

    2008-09-01

    Calibration of NASA's Deep Impact spacecraft instruments allows reliable scientific interpretation of the images and spectra returned from comet Tempel 1. Calibrations of the four onboard remote sensing imaging instruments have been performed in the areas of geometric calibration, spatial resolution, spectral resolution, and radiometric response. Error sources such as noise (random, coherent, encoding, data compression), detector readout artifacts, scattered light, and radiation interactions have been quantified. The point spread functions (PSFs) of the medium resolution instrument and its twin impactor targeting sensor are near the theoretical minimum [ approximately 1.7 pixels full width at half maximum (FWHM)]. However, the high resolution instrument camera was found to be out of focus with a PSF FWHM of approximately 9 pixels. The charge coupled device (CCD) read noise is approximately 1 DN. Electrical cross-talk between the CCD detector quadrants is correctable to <2 DN. The IR spectrometer response nonlinearity is correctable to approximately 1%. Spectrometer read noise is approximately 2 DN. The variation in zero-exposure signal level with time and spectrometer temperature is not fully characterized; currently corrections are good to approximately 10 DN at best. Wavelength mapping onto the detector is known within 1 pixel; spectral lines have a FWHM of approximately 2 pixels. About 1% of the IR detector pixels behave badly and remain uncalibrated. The spectrometer exhibits a faint ghost image from reflection off a beamsplitter. Instrument absolute radiometric calibration accuracies were determined generally to <10% using star imaging. Flat-field calibration reduces pixel-to-pixel response differences to approximately 0.5% for the cameras and <2% for the spectrometer. A standard calibration image processing pipeline is used to produce archival image files for analysis by researchers.
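    The standard calibration pipeline mentioned above performs radiometric reduction steps of the kind sketched below. This is a generic CCD reduction (bias, dark, and flat-field correction), not the Deep Impact pipeline itself, and all values are synthetic:

```python
import numpy as np

def calibrate_frame(raw, bias, dark_rate, flat, exptime):
    """Generic CCD radiometric reduction of the kind a calibration
    pipeline performs: remove bias and dark signal, then flat-field."""
    dark_signal = dark_rate * exptime    # dark rate (DN/s) times exposure (s)
    flat_norm = flat / flat.mean()       # normalize the flat field
    return (raw - bias - dark_signal) / flat_norm

rng = np.random.default_rng(0)
shape = (64, 64)
truth = np.full(shape, 1000.0)                      # uniform scene (DN)
flat = 1.0 + 0.005 * rng.standard_normal(shape)     # ~0.5% pixel response spread
raw = truth * (flat / flat.mean()) + 100.0 + 0.5 * 10.0   # + bias + dark
cal = calibrate_frame(raw, 100.0, 0.5, flat, 10.0)
print("residual pixel-to-pixel spread (DN):", cal.std())
```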

  18. Invited Article: Deep Impact instrument calibration

    International Nuclear Information System (INIS)

    Klaasen, Kenneth P.; Mastrodemos, Nickolaos; A'Hearn, Michael F.; Farnham, Tony; Groussin, Olivier; Ipatov, Sergei; Li Jianyang; McLaughlin, Stephanie; Sunshine, Jessica; Wellnitz, Dennis; Baca, Michael; Delamere, Alan; Desnoyer, Mark; Thomas, Peter; Hampton, Donald; Lisse, Carey

    2008-01-01

    Calibration of NASA's Deep Impact spacecraft instruments allows reliable scientific interpretation of the images and spectra returned from comet Tempel 1. Calibrations of the four onboard remote sensing imaging instruments have been performed in the areas of geometric calibration, spatial resolution, spectral resolution, and radiometric response. Error sources such as noise (random, coherent, encoding, data compression), detector readout artifacts, scattered light, and radiation interactions have been quantified. The point spread functions (PSFs) of the medium resolution instrument and its twin impactor targeting sensor are near the theoretical minimum [∼1.7 pixels full width at half maximum (FWHM)]. However, the high resolution instrument camera was found to be out of focus with a PSF FWHM of ∼9 pixels. The charge coupled device (CCD) read noise is ∼1 DN. Electrical cross-talk between the CCD detector quadrants is correctable to <2 DN. The IR spectrometer response nonlinearity is correctable to ∼1%. Spectrometer read noise is ∼2 DN. The variation in zero-exposure signal level with time and spectrometer temperature is not fully characterized; currently corrections are good to ∼10 DN at best. Wavelength mapping onto the detector is known within 1 pixel; spectral lines have a FWHM of ∼2 pixels. About 1% of the IR detector pixels behave badly and remain uncalibrated. The spectrometer exhibits a faint ghost image from reflection off a beamsplitter. Instrument absolute radiometric calibration accuracies were determined generally to <10% using star imaging. Flat-field calibration reduces pixel-to-pixel response differences to ∼0.5% for the cameras and <2% for the spectrometer. A standard calibration image processing pipeline is used to produce archival image files for analysis by researchers.

  19. The Impact of Error-Management Climate, Error Type and Error Originator on Auditors’ Reporting Errors Discovered on Audit Work Papers

    NARCIS (Netherlands)

    A.H. Gold-Nöteberg (Anna); U. Gronewold (Ulfert); S. Salterio (Steve)

    2010-01-01

    textabstractWe examine factors affecting the auditor’s willingness to report their own or their peers’ self-discovered errors in working papers subsequent to detailed working paper review. Prior research has shown that errors in working papers are detected in the review process; however, such

  20. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  1. Who Gets to Play? Investigating Equity in Musical Instrument Instruction in Scottish Primary Schools

    Science.gov (United States)

    Moscardini, Lio; Barron, David S.; Wilson, Alastair

    2013-01-01

    There is a widely held view that learning to play a musical instrument is a valuable experience for all children in terms of their personal growth and development. Although there is no statutory obligation for instrumental music provision in Scottish primary schools, there are well-established Instrumental Music Services in Local Education…

  2. Minimizing pulling geometry errors in atomic force microscope single molecule force spectroscopy.

    Science.gov (United States)

    Rivera, Monica; Lee, Whasil; Ke, Changhong; Marszalek, Piotr E; Cole, Daniel G; Clark, Robert L

    2008-10-01

    In atomic force microscopy-based single molecule force spectroscopy (AFM-SMFS), it is assumed that the pulling angle is negligible and that the force applied to the molecule is equivalent to the force measured by the instrument. Recent studies, however, have indicated that the pulling geometry errors can drastically alter the measured force-extension relationship of molecules. Here we describe a software-based alignment method that repositions the cantilever such that it is located directly above the molecule's substrate attachment site. By aligning the applied force with the measurement axis, the molecule is no longer undergoing combined loading, and the full force can be measured by the cantilever. Simulations and experimental results verify the ability of the alignment program to minimize pulling geometry errors in AFM-SMFS studies.
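    The combined-loading effect of a nonzero pulling angle can be sketched with the simple cosine projection below. This is a rigid-geometry idealization for illustration, not the authors' full model:

```python
import numpy as np

def apparent_force(true_force, pulling_angle_deg):
    """Force registered along the cantilever's measurement axis when the
    molecule is pulled at an off-axis angle (simple projection sketch)."""
    return true_force * np.cos(np.radians(pulling_angle_deg))

# A 10 degree misalignment already hides ~1.5% of the applied force
error_fraction = 1.0 - apparent_force(1.0, 10.0)
print(f"unmeasured force fraction at 10 deg: {error_fraction:.1%}")
```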

  3. A study of the modifications of nuclear instrumentation systems for JRR-2

    International Nuclear Information System (INIS)

    Azim, Mohammad; Horiki, Ooichiro; Sato, Mitsugu

    1978-04-01

    In this report, a comparative study has been carried out between the original A.M.F. design and the modified design of the nuclear instrumentation systems of the Research Reactor JRR-2 at the Tokai Research Establishment of JAERI. Due to a fire accident in the control room in July 1968, the originally designed nuclear instrumentation systems, using conventional vacuum-tube circuits, were destroyed and were replaced by the modified design, incorporating solid-state linear integrated circuits as basic circuit components. The results of the reactor instrumentation systems modification at JRR-2 are very encouraging, as the operating efficiency of the reactor registered an improvement of 43%. Moreover, the safety aspects have been fully taken care of in the new design, and the reactor is well guarded against all possible instrument failures and human errors. This report presents the basic theory of operation of the two designs along with a comparative safety analysis. (auth.)

  4. Setup error in radiotherapy: on-line correction using electronic kilovoltage and megavoltage radiographs

    International Nuclear Information System (INIS)

    Pisani, Laura; Lockman, David; Jaffray, David; Yan Di; Martinez, Alvaro; Wong, John

    2000-01-01

    Purpose: We hypothesize that the difference in image quality between the traditional kilovoltage (kV) prescription radiographs and megavoltage (MV) treatment radiographs is a major factor hindering our ability to accurately measure, thus correct, setup error in radiation therapy. The objective of this work is to study the accuracy of on-line correction of setup errors achievable using either kV- or MV-localization (i.e., open-field) radiographs. Methods and Materials: Using a gantry mounted kV and MV dual-beam imaging system, the accuracy of on-line measurement and correction of setup error using electronic kV- and MV-localization images was examined based on anthropomorphic phantom and patient imaging studies. For the phantom study, the user's ability to accurately detect known translational shifts was analyzed. The clinical study included 14 patients with disease in the head and neck, thoracic, and pelvic regions. For each patient, 4 orthogonal kV radiographs acquired during treatment simulation from the right lateral, anterior-to-posterior, left lateral, and posterior-to-anterior directions were employed as reference prescription images. Two-dimensional (2D) anatomic templates were defined on each of the 4 reference images. On each treatment day, after positioning the patient for treatment, 4 orthogonal electronic localization images were acquired with both kV and 6-MV photon beams. On alternate weeks, setup errors were determined from either the kV- or MV-localization images but not both. Setup error was determined by aligning each 2D template with the anatomic information on the corresponding localization image, ignoring rotational and nonrigid variations. For each set of 4 orthogonal images, the results from template alignments were averaged. Based on the results from the phantom study and a parallel study of the inter- and intraobserver template alignment variability, a threshold for minimum correction was set at 2 mm in any direction. 
Setup correction was
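    The correction logic described in the record — average the template-alignment results from the four orthogonal images, then apply a minimum-correction threshold — can be sketched as follows; the shift values are hypothetical:

```python
import numpy as np

def setup_correction(shifts_mm, threshold_mm=2.0):
    """Average the 2D template-alignment shifts from the four orthogonal
    localization images and apply a minimum-correction threshold."""
    mean_shift = np.mean(np.asarray(shifts_mm), axis=0)
    if np.all(np.abs(mean_shift) < threshold_mm):
        return np.zeros_like(mean_shift)   # below threshold: no correction
    return mean_shift

# Hypothetical shifts (mm) measured on the four orthogonal images
shifts = [(3.1, 0.4), (2.8, 0.2), (3.3, 0.5), (2.9, 0.1)]
print(setup_correction(shifts))
```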

  5. The Effect of Random Error on Diagnostic Accuracy Illustrated with the Anthropometric Diagnosis of Malnutrition

    Science.gov (United States)

    2016-01-01

    Background It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will lead to an inflated prevalence of that condition if there are measurement errors. Methods and results A Monte Carlo simulation was conducted of anthropometric assessment of children with malnutrition. Random errors of increasing magnitude were imposed upon the populations and showed an increase in the standard deviation that became exponentially greater with the magnitude of the error. The potential magnitude of the resulting errors in reported malnutrition prevalence was compared with published international data and found to be of sufficient magnitude to make a number of surveys, and the numerous reports and analyses that used these data, unreliable. Conclusions The effect of random error in public health surveys, and in the data from which diagnostic cut-off points are derived to define "health", has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments; measurer selection, training and supervision; routine estimation of the likely magnitude of errors using standardization tests; use of statistical likelihood of error to exclude data from analysis; and full reporting of these procedures in order to judge the reliability of survey reports. PMID:28030627
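    A minimal Monte Carlo in the spirit of the study, with hypothetical standard-normal z-scores and a cutoff of z < -2, shows the prevalence inflation directly:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
# Hypothetical anthropometric z-scores; "malnutrition" defined as z < -2
true_z = rng.standard_normal(n)
print(f"true prevalence: {np.mean(true_z < -2):.2%}")

# Impose random measurement error of increasing magnitude (expressed as a
# fraction of the population SD) and recompute the apparent prevalence
for error_sd in (0.2, 0.4, 0.6):
    observed = true_z + rng.normal(0.0, error_sd, n)
    print(f"error SD {error_sd:.1f}: apparent prevalence {np.mean(observed < -2):.2%}")
```

Because the error widens the distribution, more observations spill past the fixed cutoff, and a larger sample only estimates the inflated prevalence more precisely.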

  6. Inter-laboratory evaluation of instrument platforms and experimental workflows for quantitative accuracy and reproducibility assessment

    Directory of Open Access Journals (Sweden)

    Andrew J. Percy

    2015-09-01

    Full Text Available The reproducibility of plasma protein quantitation between laboratories and between instrument types was examined in a large-scale international study involving 16 laboratories and 19 LC–MS/MS platforms, using two kits designed to evaluate instrument performance and one kit designed to evaluate the entire bottom-up workflow. There was little effect of instrument type on the quality of the results, demonstrating the robustness of LC/MRM-MS with isotopically labeled standards. Technician skill was a factor, as errors in sample preparation and sub-optimal LC–MS performance were evident. This highlights the importance of proper training and routine quality control before quantitation is done on patient samples.

  7. Understanding the nature of errors in nursing: using a model to analyse critical incident reports of errors which had resulted in an adverse or potentially adverse event.

    Science.gov (United States)

    Meurier, C E

    2000-07-01

    Human errors are common in clinical practice, but they are under-reported. As a result, very little is known of the types, antecedents and consequences of errors in nursing practice. This limits the potential to learn from errors and to make improvement in the quality and safety of nursing care. The aim of this study was to use an Organizational Accident Model to analyse critical incidents of errors in nursing. Twenty registered nurses were invited to produce a critical incident report of an error (which had led to an adverse event or potentially could have led to an adverse event) they had made in their professional practice and to write down their responses to the error using a structured format. Using Reason's Organizational Accident Model, supplemental information was then collected from five of the participants by means of an individual in-depth interview to explore further issues relating to the incidents they had reported. The detailed analysis of one of the incidents is discussed in this paper, demonstrating the effectiveness of this approach in providing insight into the chain of events which may lead to an adverse event. The case study approach using critical incidents of clinical errors was shown to provide relevant information regarding the interaction of organizational factors, local circumstances and active failures (errors) in producing an adverse or potentially adverse event. It is suggested that more use should be made of this approach to understand how errors are made in practice and to take appropriate preventative measures.

  8. Multiple-copy state discrimination: Thinking globally, acting locally

    International Nuclear Information System (INIS)

    Higgins, B. L.; Pryde, G. J.; Wiseman, H. M.; Doherty, A. C.; Bartlett, S. D.

    2011-01-01

    We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies N, and in the asymptotic limit as N→∞. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here an adaptive measurement is one in which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite N) and the scaling of this error in the asymptotic limit. In the asymptotic limit, it is known analytically (and we verify numerically) that adaptive schemes have no advantage over the optimal fixed local scheme. Here we show moreover that, in this limit, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme except for states with less than 2% mixture. For finite N, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme for any degree of mixture.
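    The benchmark for all five schemes, the optimal collective measurement, has a closed-form (Helstrom) error probability that can be evaluated numerically. The example states below are invented for illustration:

```python
import numpy as np
from functools import reduce

def helstrom_error(rho0, rho1, N):
    """Minimum error probability for discriminating two equiprobable states
    given N copies, via the Helstrom bound (1 - ||rho0^N - rho1^N||_1 / 2) / 2."""
    R0 = reduce(np.kron, [rho0] * N)
    R1 = reduce(np.kron, [rho1] * N)
    trace_norm = np.abs(np.linalg.eigvalsh(0.5 * (R0 - R1))).sum()
    return 0.5 * (1.0 - trace_norm)

# Two slightly mixed, nonorthogonal qubit states (hypothetical example)
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
mix = 0.05
rho0 = (1 - mix) * np.outer(ket0, ket0) + mix * np.eye(2) / 2
rho1 = (1 - mix) * np.outer(ketp, ketp) + mix * np.eye(2) / 2
for N in (1, 2, 4):
    print(f"N={N}: optimal collective error = {helstrom_error(rho0, rho1, N):.4f}")
```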

  9. Sound Localization in Patients With Congenital Unilateral Conductive Hearing Loss With a Transcutaneous Bone Conduction Implant.

    Science.gov (United States)

    Vyskocil, Erich; Liepins, Rudolfs; Kaider, Alexandra; Blineder, Michaela; Hamzavi, Sasan

    2017-03-01

    There is no consensus regarding the benefit of implantable hearing aids in congenital unilateral conductive hearing loss (UCHL). This study aimed to measure sound source localization performance in patients with congenital UCHL and contralateral normal hearing who received a new bone conduction implant. Evaluation of within-subject performance differences for sound source localization in a horizontal plane. Tertiary referral center. Five patients with atresia of the external auditory canal and contralateral normal hearing, implanted with a transcutaneous bone conduction implant at the Medical University of Vienna, were tested. Activated/deactivated implant. Sound source localization test; localization performance quantified using the root mean square (RMS) error. Sound source localization ability was highly variable among individual subjects, with RMS errors ranging from 21 to 40 degrees. Horizontal plane localization performance in aided conditions showed statistically significant improvement compared with the unaided conditions, with RMS errors ranging from 17 to 27 degrees. The mean RMS error decreased by a factor of 0.71 with the transcutaneous bone conduction implant. Some patients with congenital UCHL might be capable of developing improved horizontal plane localization abilities with the binaural cues provided by this device.
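    The RMS error metric used here can be computed as below; the trial angles are invented for illustration:

```python
import numpy as np

def rms_error_deg(target_az, response_az):
    """Root-mean-square error between presented and reported source
    azimuths (degrees), as used to quantify localization performance."""
    diff = np.asarray(response_az, float) - np.asarray(target_az, float)
    diff = (diff + 180.0) % 360.0 - 180.0   # wrap differences into [-180, 180)
    return float(np.sqrt(np.mean(diff ** 2)))

# Hypothetical trial data (degrees)
print(rms_error_deg([-60, -30, 0, 30, 60], [-40, -25, 5, 45, 80]))
```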

  10. Improvement of the physically-based groundwater model simulations through complementary correction of its errors

    Directory of Open Access Journals (Sweden)

    Jorge Mauricio Reyes Alcalde

    2017-04-01

    Full Text Available Physically-Based groundwater Models (PBMs), such as MODFLOW, are used as groundwater resource evaluation tools under the assumption that the produced differences (residuals or errors) are white noise. In fact, however, these numerical simulations usually show not only random errors but also systematic errors. In this work, a numerical procedure has been developed to deal with PBM systematic errors by studying their structure in order to model their behavior and correct the results by external and complementary means, through a framework called the Complementary Correction Model (CCM). The application of CCM to a PBM shows a decrease in local biases, a better distribution of errors and reductions in their temporal and spatial correlations, with a 73% reduction in global RMSN over the original PBM. This methodology appears to be an attractive way to update a PBM while avoiding the effort and cost of modifying its internal structure.
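    A minimal sketch of the complementary-correction idea, assuming the PBM residuals carry a simple linear systematic component (all values are synthetic, and the real CCM is more elaborate):

```python
import numpy as np

# Hypothetical heads along an aquifer transect: observations vs. a PBM run
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)                 # distance coordinate
observed = 50.0 - 0.8 * x + rng.normal(0, 0.1, x.size)
simulated = 50.0 - 0.8 * x + (0.5 + 0.05 * x)   # PBM with a systematic bias

# Complementary correction: fit a simple model to the residual structure
residuals = observed - simulated
coef = np.polyfit(x, residuals, 1)              # linear trend in the errors
corrected = simulated + np.polyval(coef, x)

rms_before = np.sqrt(np.mean(residuals ** 2))
rms_after = np.sqrt(np.mean((observed - corrected) ** 2))
print("RMS before:", rms_before, " RMS after:", rms_after)
```

The correction is applied externally, leaving the physically-based model untouched, which is the point the abstract emphasizes.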

  11. ac driving amplitude dependent systematic error in scanning Kelvin probe microscope measurements: Detection and correction

    International Nuclear Information System (INIS)

    Wu Yan; Shannon, Mark A.

    2006-01-01

    The dependence of the contact potential difference (CPD) reading on the ac driving amplitude in scanning Kelvin probe microscope (SKPM) hinders researchers from quantifying true material properties. We show theoretically and demonstrate experimentally that an ac driving amplitude dependence in the SKPM measurement can come from a systematic error, and it is common for all tip sample systems as long as there is a nonzero tracking error in the feedback control loop of the instrument. We further propose a methodology to detect and to correct the ac driving amplitude dependent systematic error in SKPM measurements. The true contact potential difference can be found by applying a linear regression to the measured CPD versus one over ac driving amplitude data. Two scenarios are studied: (a) when the surface being scanned by SKPM is not semiconducting and there is an ac driving amplitude dependent systematic error; (b) when a semiconductor surface is probed and asymmetric band bending occurs when the systematic error is present. Experiments are conducted using a commercial SKPM and CPD measurement results of two systems: platinum-iridium/gap/gold and platinum-iridium/gap/thermal oxide/silicon are discussed
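    The proposed regression of measured CPD against the inverse ac driving amplitude can be sketched as follows; the readings are synthetic, constructed to be exactly linear in 1/V_ac:

```python
import numpy as np

# Synthetic SKPM readings: the measured CPD drifts with 1/V_ac because of
# the tracking-error artifact; the intercept recovers the true CPD.
v_ac = np.array([0.5, 1.0, 2.0, 4.0, 8.0])                 # driving amplitudes (V)
cpd_measured = np.array([0.90, 0.70, 0.60, 0.55, 0.525])   # measured CPD (V)

slope, intercept = np.polyfit(1.0 / v_ac, cpd_measured, 1)
print(f"true CPD ≈ {intercept:.3f} V")   # extrapolation to infinite amplitude
```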

  12. Localization and registration accuracy in image guided neurosurgery: a clinical study

    International Nuclear Information System (INIS)

    Shamir, Reuben R.; Joskowicz, Leo; Spektor, Sergey; Shoshan, Yigal

    2009-01-01

    To measure and compare the clinical localization and registration errors in image-guided neurosurgery, with the purpose of revising current assumptions. Twelve patients who underwent brain surgeries with a navigation system were randomly selected. A neurosurgeon localized and correlated the landmarks on preoperative MRI images and on the intraoperative physical anatomy with a tracked pointer. In the laboratory, we generated 612 scenarios in which one landmark pair was defined as the target and the remaining ones were used to compute the registration transformation. Four errors were measured: (1) fiducial localization error (FLE); (2) target registration error (TRE); (3) fiducial registration error (FRE); (4) Fitzpatrick's target registration error estimation (F-TRE). We compared the different errors and computed their correlation. The image and physical FLE ranges were 0.5-2.0 and 1.6-3.0 mm, respectively. The measured TRE, FRE and F-TRE were 4.1±1.6, 3.9±1.2, and 3.7±2.2 mm, respectively. Low correlations of 0.19 and 0.37 were observed between the FRE and TRE and between the F-TRE and the TRE, respectively. The differences of the FRE and F-TRE from the TRE were 1.3±1.0 mm (max=5.5 mm) and 1.3±1.2 mm (max=7.3 mm), respectively. Contrary to common belief, the FLE presents significant variations. Moreover, both the FRE and the F-TRE are poor indicators of the TRE in image-to-patient registration. (orig.)
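The error measures compared in the study can be reproduced on synthetic data: register fiducials rigidly (a least-squares Kabsch fit, an assumed method), compute the FRE on the fiducials used for registration, and the TRE on a held-out target. All coordinates and the 1 mm fiducial localization noise are hypothetical.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t mapping src points onto dst."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = U @ D @ Vt
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

rng = np.random.default_rng(0)
image_pts = rng.uniform(0.0, 100.0, (5, 3))        # fiducials in image space (mm)
t_true = np.array([10.0, -5.0, 2.0])
# Physical-space positions: shifted and perturbed by ~1 mm localization noise (FLE).
physical_pts = image_pts + t_true + rng.normal(0.0, 1.0, (5, 3))

# Register on four fiducials; hold the fifth out as the "target".
R, t = rigid_register(image_pts[:4], physical_pts[:4])
fre = np.sqrt(np.mean(np.sum((image_pts[:4] @ R.T + t - physical_pts[:4]) ** 2, 1)))
tre = np.linalg.norm(image_pts[4] @ R.T + t - physical_pts[4])
print(f"FRE = {fre:.2f} mm, TRE = {tre:.2f} mm")
```

Repeating this over many noise draws shows the study's point empirically: FRE and TRE fluctuate largely independently, so a low FRE is a poor guarantee of a low TRE.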

  13. Modeling systematic errors: polychromatic sources of Beer-Lambert deviations in HPLC/UV and nonchromatographic spectrophotometric assays.

    Science.gov (United States)

    Galli, C

    2001-07-01

    It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results; a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorption levels generally considered 'safe' (i.e. absorption <1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of molecular extinction coefficient is required to ensure robust analytical methods.
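A numerical model of the deviation discussed in the Note: the detector integrates transmitted intensity over a finite bandpass, so the apparent absorbance is A_app = -log10( Σ I(λ)·10^(-ε(λ)·c·l) / Σ I(λ) ), which falls below the monochromatic Beer-Lambert value whenever ε(λ) varies across the band. The triangular slit function and the linear ε(λ) below are assumptions for illustration.

```python
import numpy as np

wl = np.linspace(-5.0, 5.0, 201)          # nm offset from the band center
intensity = 1.0 - np.abs(wl) / 5.0        # triangular slit (bandpass) profile
eps = 1.0e4 + 300.0 * wl                  # L/(mol·cm): sloped extinction coefficient
c, path = 1.0e-4, 1.0                     # analyte concentration (mol/L), path (cm)

a_mono = 1.0e4 * c * path                 # monochromatic absorbance at band center
# Intensity-weighted average transmittance over the bandpass, then apparent A.
t_frac = np.sum(intensity * 10.0 ** (-eps * c * path)) / np.sum(intensity)
a_app = -np.log10(t_frac)
print(f"monochromatic A = {a_mono:.4f}, apparent A = {a_app:.4f}")
```

With these assumed numbers the apparent absorbance sits a fraction of a percent below the Beer-Lambert value; steeper ε(λ) slopes, wider bandpasses, or higher concentrations push the deviation toward the few-percent errors quoted in the Note.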

  14. Comparison of the Retrieval of Sea Surface Salinity Using Different Instrument Configurations of MICAP

    Directory of Open Access Journals (Sweden)

    Lanjie Zhang

    2018-04-01

    Full Text Available The Microwave Imager Combined Active/Passive (MICAP) has been designed to simultaneously retrieve sea surface salinity (SSS), sea surface temperature (SST), and wind speed (WS), and its performance has been preliminarily analyzed. To determine the influence of uncertainties in the first-guess values on the retrieved parameters of MICAP, the retrieval accuracies of SSS, SST, and WS are estimated at various noise levels. The results suggest that the errors on the retrieved SSS do not increase due to poorly known initial values of SST and WS, since MICAP can simultaneously acquire SST information and correct for ocean surface roughness. The main objective of this paper is to obtain a simplified instrument configuration of MICAP without loss of SSS, SST, and WS retrieval accuracy. Comparisons are conducted between three different instrument configurations in retrieval mode, based on simulated MICAP measurements. The retrieval results suggest that, without the 23.8 GHz channel, the errors on the retrieved SSS, SST, and WS for MICAP can still satisfy the accuracy requirements globally during a single satellite pass. By contrast, without the 1.26 GHz scatterometer, there are relatively large increases in the SSS, SST, and WS errors at middle/low latitudes.

  15. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

    Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures, which are technically untraceable by users. Error messages are not logged efficiently and are sometimes not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.

  16. Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks.

    Science.gov (United States)

    Eppenhof, Koen A J; Pluim, Josien P W

    2018-04-01

    Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
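The gold-standard evaluation reduces to a root-mean-square deviation between the estimated and reference registration-error maps. A minimal sketch on synthetic voxelwise errors (the arrays are hypothetical stand-ins for one scan, not the paper's CNN output):

```python
import numpy as np

# Reference (gold standard) voxelwise registration errors in mm, and a
# "network estimate" formed by perturbing them with 0.5 mm noise.
rng = np.random.default_rng(1)
gold = rng.uniform(0.0, 4.0, size=(32, 32, 32))
estimated = gold + rng.normal(0.0, 0.5, size=gold.shape)

# Root-mean-square deviation between estimated and reference error maps.
rmsd = np.sqrt(np.mean((estimated - gold) ** 2))
print(f"RMS deviation from gold standard: {rmsd:.2f} mm")
```

The paper's figures of 0.51 mm (gold standard) and 0.66 mm (landmark ground truth) are exactly this statistic computed over real scans.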

  17. Processing graded feedback: electrophysiological correlates of learning from small and large errors.

    Science.gov (United States)

    Luft, Caroline Di Bernardi; Takase, Emilio; Bhattacharya, Joydeep

    2014-05-01

    Feedback processing is important for learning and therefore may affect the consolidation of skills. Considerable research demonstrates electrophysiological differences between correct and incorrect feedback, but how we learn from small versus large errors is usually overlooked. This study investigated electrophysiological differences when processing small or large error feedback during a time estimation task. Data from high-learners and low-learners were analyzed separately. In both high- and low-learners, large error feedback was associated with higher feedback-related negativity (FRN) and small error feedback was associated with a larger P300 and increased amplitude over the motor related areas of the left hemisphere. In addition, small error feedback induced larger desynchronization in the alpha and beta bands with distinctly different topographies between the two learning groups: The high-learners showed a more localized decrease in beta power over the left frontocentral areas, and the low-learners showed a widespread reduction in the alpha power following small error feedback. Furthermore, only the high-learners showed an increase in phase synchronization between the midfrontal and left central areas. Importantly, this synchronization was correlated to how well the participants consolidated the estimation of the time interval. Thus, although large errors were associated with higher FRN, small errors were associated with larger oscillatory responses, which was more evident in the high-learners. Altogether, our results suggest an important role of the motor areas in the processing of error feedback for skill consolidation.

  18. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    Science.gov (United States)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
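Solving the global error problem approximately with a few symmetric Gauss-Seidel sweeps (a forward sweep followed by a backward sweep) can be sketched on a small symmetric positive definite system standing in for the hierarchical-basis error equation; the matrix and right-hand side are illustrative.

```python
import numpy as np

def symmetric_gauss_seidel(A, b, x, sweeps=3):
    """A few symmetric Gauss-Seidel sweeps: forward then backward ordering."""
    n = len(b)
    for _ in range(sweeps):
        for order in (range(n), reversed(range(n))):
            for i in order:
                # Update x[i] using the latest values of all other unknowns.
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = symmetric_gauss_seidel(A, b, np.zeros(3))
print("after 3 symmetric sweeps:", x, "exact:", np.linalg.solve(A, b))
```

As the abstract notes, a handful of sweeps already gives an error approximation good enough to drive the metric tensor, at a fraction of the cost of an exact solve.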

  19. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone have turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach has the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...
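The building-kit idea can be illustrated with a minimal sketch: the overall uncertainty is the sum of a confidence interval for the random scatter and a bound on the unknown systematic error. The readings, the systematic-error bound, and the 95% Student t factor below are all assumed values, not the book's worked example.

```python
import math

# Hypothetical repeated readings of one quantity.
readings = [10.12, 10.08, 10.15, 10.11, 10.09]
f_sys = 0.05       # assumed bound on the unknown systematic error
t_factor = 2.776   # Student t, 95% confidence, n - 1 = 4 degrees of freedom

n = len(readings)
mean = sum(readings) / n
s = math.sqrt(sum((r - mean) ** 2 for r in readings) / (n - 1))  # sample std

# Overall uncertainty = confidence interval (random part) + systematic bound.
u_total = t_factor * s / math.sqrt(n) + f_sys
print(f"result: {mean:.3f} ± {u_total:.3f}")
```

Adding the systematic bound linearly, rather than in quadrature, reflects the calculus's treatment of unknown systematic errors as biases rather than random variables.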

  20. An instrumentation and control philosophy for high-level nuclear waste processing facilities

    International Nuclear Information System (INIS)

    Weigle, D.H.

    1990-01-01

    The purpose of this paper is to present an instrumentation and control philosophy which may be applied to high-level nuclear waste processing facilities. This philosophy describes the recommended criteria for automatic/manual control, remote/local control, remote/local display, diagnostic instrumentation, interlocks, alarm levels, and redundancy. Due to the hazardous nature of the process constituents of a high-level nuclear waste processing facility, it is imperative that the safety and control features required for accident-free operation and maintenance be incorporated. A well-instrumented and controlled process, while initially more expensive in capital and design costs, is generally safer and less expensive to operate. When the long-term cost savings of a well-designed process are coupled with the high savings enjoyed by accident avoidance, the benefits far outweigh the initial capital and design costs.

  1. Malaysian Preparation for Nuclear Power Plant Instrumentation and Control System

    International Nuclear Information System (INIS)

    Mohd Idris Taib; Nurfarhana Ayuni Joha; Kamarudin Sulaiman; Izhar Abu Hussin

    2011-01-01

    Instrumentation and control systems are required in nuclear power plants for safe and effective operation. Such a system combines and integrates detectors, actuators, and analog as well as digital systems. Current system designs closely follow electronics and computer technology, while strictly complying with regulations and guidelines from the local regulator as well as the International Atomic Energy Agency. Commercial off-the-shelf products are used extensively together with specific nucleonic instrumentation. Malaysian experience rests on the Reactor TRIGA PUSPATI instrumentation and control, power plant instrumentation and control, and process control systems. However, Malaysians have the capability to upgrade themselves from electronics, computer, electrical and mechanical backgrounds. A proposal for Malaysian preparation is presented. (author)

  2. The effects of local street network characteristics on the positional accuracy of automated geocoding for geographic health studies

    Directory of Open Access Journals (Sweden)

    Zimmerman Dale L

    2010-02-01

    Full Text Available Abstract Background Automated geocoding of patient addresses for the purpose of conducting spatial epidemiologic studies results in positional errors. It is well documented that errors tend to be larger in rural areas than in cities, but possible effects of local characteristics of the street network, such as street intersection density and street length, on errors have not yet been documented. Our study quantifies effects of these local street network characteristics on the means and the entire probability distributions of positional errors, using regression methods and tolerance intervals/regions, for more than 6000 geocoded patient addresses from an Iowa county. Results Positional errors were determined for 6376 addresses in Carroll County, Iowa, as the vector difference between each 100%-matched automated geocode and its ground-truthed location. Mean positional error magnitude was inversely related to proximate street intersection density. This effect was statistically significant for both rural and municipal addresses, but more so for the former. Also, the effect of street segment length on geocoding accuracy was statistically significant for municipal, but not rural, addresses; for municipal addresses mean error magnitude increased with length. Conclusion Local street network characteristics may have statistically significant effects on geocoding accuracy in some places, but not others. Even in those locales where their effects are statistically significant, street network characteristics may explain a relatively small portion of the variability among geocoding errors. It appears that additional factors besides rurality and local street network characteristics affect accuracy in general.
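The study's basic quantity, the positional error as the vector difference between an automated geocode and its ground-truthed location, is straightforward to compute; the projected coordinates below are hypothetical values in meters.

```python
import math

def positional_error(geocode, ground_truth):
    """Magnitude of the vector difference between geocode and ground truth."""
    dx = geocode[0] - ground_truth[0]
    dy = geocode[1] - ground_truth[1]
    return math.hypot(dx, dy)

# Hypothetical projected (easting, northing) coordinates in meters.
err = positional_error((482103.5, 4652310.2), (482131.0, 4652298.7))
print(f"positional error magnitude: {err:.1f} m")
```

Regressing such magnitudes on covariates like street intersection density and segment length is what yields the effects reported above.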

  3. Varying coefficients model with measurement error.

    Science.gov (United States)

    Li, Liang; Greene, Tom

    2008-06-01

    We propose a semiparametric partially varying coefficient model to study the relationship between serum creatinine concentration and the glomerular filtration rate (GFR) among kidney donors and patients with chronic kidney disease. A regression model is used to relate serum creatinine to GFR and demographic factors, in which the coefficient of GFR is expressed as a function of age to allow its effect to be age dependent. GFR measurements obtained from the clearance of a radioactively labeled isotope are assumed to be a surrogate for the true GFR, with the relationship between measured and true GFR expressed using an additive error model. We use locally corrected score equations to estimate parameters and coefficient functions, and propose an expected generalized cross-validation (EGCV) method to select the kernel bandwidth. The performance of the proposed methods, which avoid distributional assumptions on the true GFR and residuals, is investigated by simulation. Accounting for measurement error using the proposed model reduced apparent inconsistencies in the relationship between serum creatinine and GFR among different clinical data sets derived from kidney donor and chronic kidney disease source populations.
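The varying-coefficient idea can be sketched with simple kernel-weighted least squares: solve a local regression at each age of interest so the GFR coefficient is allowed to change with age. The synthetic data, Gaussian kernel, and bandwidth are all assumptions, and the paper's corrected-score treatment of measurement error is deliberately not reproduced here.

```python
import numpy as np

# Synthetic cohort: creatinine depends on GFR through an age-varying
# coefficient beta(age) = 0.5 + 0.01 * age (an assumed functional form).
rng = np.random.default_rng(4)
n = 500
age = rng.uniform(20.0, 70.0, n)
gfr = rng.uniform(30.0, 120.0, n)
beta_true = 0.5 + 0.01 * age
creatinine = 2.0 + beta_true * gfr + rng.normal(0.0, 3.0, n)

def local_beta(age0, bandwidth=8.0):
    """Kernel-weighted least-squares slope of creatinine on GFR near age0."""
    w = np.exp(-0.5 * ((age - age0) / bandwidth) ** 2)   # Gaussian kernel weights
    X = np.column_stack([np.ones(n), gfr])
    coef, *_ = np.linalg.lstsq(X * w[:, None] ** 0.5,
                               creatinine * w ** 0.5, rcond=None)
    return coef[1]

print(f"beta(30) ≈ {local_beta(30):.3f}, beta(60) ≈ {local_beta(60):.3f}")
```

The estimated coefficient rises with age, tracking the assumed beta(age); the bandwidth plays the role the paper's EGCV criterion is designed to select.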

  4. Quality appraisal of generic self-reported instruments measuring health-related productivity changes: a systematic review

    Science.gov (United States)

    2014-01-01

    Background Health impairments can result in disability and changed work productivity, imposing considerable costs for the employee, employer and society as a whole. A large number of instruments exist to measure health-related productivity changes; however, their methodological quality remains unclear. This systematic review critically appraised the measurement properties of generic self-reported instruments that measure health-related productivity changes to recommend appropriate instruments for use in occupational and economic health practice. Methods PubMed, PsycINFO, Econlit and Embase were systematically searched for studies in which: (i) instruments measured health-related productivity changes; (ii) the aim was to evaluate instrument measurement properties; (iii) instruments were generic; (iv) ratings were self-reported; (v) full texts were available. Next, methodological quality appraisal was based on COSMIN elements: (i) internal consistency; (ii) reliability; (iii) measurement error; (iv) content validity; (v) structural validity; (vi) hypotheses testing; (vii) cross-cultural validity; (viii) criterion validity; and (ix) responsiveness. Recommendations are based on evidence syntheses. Results This review included 25 articles assessing the reliability, validity and responsiveness of 15 different generic self-reported instruments measuring health-related productivity changes. Most studies evaluated criterion validity, none evaluated cross-cultural validity, and information on measurement error is lacking. The Work Limitation Questionnaire (WLQ) was most frequently evaluated, with moderate and strong positive evidence, respectively, for content and structural validity and negative evidence for reliability, hypothesis testing and responsiveness. Less frequently evaluated, the Stanford Presenteeism Scale (SPS) showed strong positive evidence for internal consistency and structural validity, and moderate positive evidence for hypotheses testing and criterion validity.

  5. Indoor localization using unsupervised manifold alignment with geometry perturbation

    KAUST Repository

    Majeed, Khaqan

    2014-04-01

    The main limitation of deploying/updating Received Signal Strength (RSS) based indoor localization is the construction of fingerprinted radio map, which is quite a hectic and time-consuming process especially when the indoor area is enormous and/or dynamic. Different approaches have been undertaken to reduce such deployment/update efforts, but the performance degrades when the fingerprinting load is reduced below a certain level. In this paper, we propose an indoor localization scheme that requires as low as 1% fingerprinting load. This scheme employs unsupervised manifold alignment that takes crowd sourced RSS readings and localization requests as source data set and the environment's plan coordinates as destination data set. The 1% fingerprinting load is only used to perturb the local geometries in the destination data set. Our proposed algorithm was shown to achieve less than 5 m mean localization error with 1% fingerprinting load and a limited number of crowd sourced readings, when other learning based localization schemes pass the 10 m mean error with the same information.

  6. Indoor localization using unsupervised manifold alignment with geometry perturbation

    KAUST Repository

    Majeed, Khaqan; Sorour, Sameh; Al-Naffouri, Tareq Y.; Valaee, Shahrokh

    2014-01-01

    The main limitation of deploying/updating Received Signal Strength (RSS) based indoor localization is the construction of fingerprinted radio map, which is quite a hectic and time-consuming process especially when the indoor area is enormous and/or dynamic. Different approaches have been undertaken to reduce such deployment/update efforts, but the performance degrades when the fingerprinting load is reduced below a certain level. In this paper, we propose an indoor localization scheme that requires as low as 1% fingerprinting load. This scheme employs unsupervised manifold alignment that takes crowd sourced RSS readings and localization requests as source data set and the environment's plan coordinates as destination data set. The 1% fingerprinting load is only used to perturb the local geometries in the destination data set. Our proposed algorithm was shown to achieve less than 5 m mean localization error with 1% fingerprinting load and a limited number of crowd sourced readings, when other learning based localization schemes pass the 10 m mean error with the same information.

  7. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that, Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  8. Development and implementation of an electronic interface for complex clinical laboratory instruments without a vendor-provided data transfer interface

    Directory of Open Access Journals (Sweden)

    Gary E Blank

    2011-01-01

    Full Text Available Background: Clinical pathology laboratories increasingly use complex instruments that incorporate chromatographic separation, e.g. liquid chromatography, with mass detection for rapid identification and quantification of biochemicals, biomolecules, or pharmaceuticals. Electronic data management for these instruments through interfaces with laboratory information systems (LIS) is not generally available from the instrument manufacturers or LIS vendors. Unavailability of a data management interface is a limiting factor in the use of these instruments in clinical laboratories, where there is a demand for high-throughput assays with turn-around times that meet patient care needs. Materials and Methods: Professional society guidelines for the design and transfer of data between instruments and LIS were used in the development and implementation of the interface. File transfer protocols and support utilities were written to facilitate the transfer of information between the instruments and the LIS. An interface was created for liquid chromatography-tandem mass spectrometry and inductively coupled plasma-mass spectrometry instruments to manage data in the Sunquest® LIS. Results: Interface validation, implementation and data transfer fidelity, as well as training of technologists for use of the interface, were performed by the LIS group. The technologists were familiarized with the data verification process as a part of the data management protocol. The total time for the technologists for patient/control sample data entry, assay results data transfer, and results verification was reduced from approximately 20 s per sample to <1 s per sample. Sample identification, results data entry errors, and omissions were eliminated. There was an electronic record of the technologist performing the assay runs and data management. 
Conclusions: Development of a data management interface for complex chromatography instruments in clinical laboratories has resulted in rapid, accurate

  9. Multivariate analysis for the estimation of target localization errors in fiducial marker-based radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Takamiya, Masanori [Department of Nuclear Engineering, Graduate School of Engineering, Kyoto University, Kyoto 606-8501, Japan and Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto 606-8507 (Japan); Nakamura, Mitsuhiro, E-mail: m-nkmr@kuhp.kyoto-u.ac.jp; Akimoto, Mami; Ueki, Nami; Yamada, Masahiro; Matsuo, Yukinori; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto 606-8507 (Japan); Tanabe, Hiroaki [Division of Radiation Oncology, Institute of Biomedical Research and Innovation, Kobe 650-0047 (Japan); Kokubo, Masaki [Division of Radiation Oncology, Institute of Biomedical Research and Innovation, Kobe 650-0047, Japan and Department of Radiation Oncology, Kobe City Medical Center General Hospital, Kobe 650-0047 (Japan); Itoh, Akio [Department of Nuclear Engineering, Graduate School of Engineering, Kyoto University, Kyoto 606-8501 (Japan)

    2016-04-15

    Purpose: To assess the target localization error (TLE) in terms of the distance between the target and the localization point estimated from the surrogates (|TMD|), the average respiratory motion of the surrogates and the target (|aRM|), and the number of fiducial markers used for estimating the target (n). Methods: This study enrolled 17 lung cancer patients who subsequently underwent four fractions of real-time tumor tracking irradiation. Four or five fiducial markers were implanted around the lung tumor. The three-dimensional (3D) distance between the tumor and markers was at most 58.7 mm. One of the markers was used as the target (P_t), and those markers with a 3D |TMD_n| ≤ 58.7 mm at end-exhalation were then selected. The estimated target position (P_e) was calculated from a localization point consisting of one to three markers other than P_t. Respiratory motion for P_t and P_e was defined as the root mean square of each displacement, and |aRM| was calculated from the mean value. TLE was defined as the root mean square of each difference between P_t and P_e during the monitoring of each fraction. These procedures were performed repeatedly using the remaining markers. To provide the best guidance on the choice of n and |TMD|, fiducial markers with a 3D |aRM| ≥ 10 mm were selected. Finally, a total of 205, 282, and 76 TLEs that fulfilled the 3D |TMD| and 3D |aRM| criteria were obtained for n = 1, 2, and 3, respectively. Multiple regression analysis (MRA) was used to evaluate TLE as a function of |TMD| and |aRM| for each n. Results: |TMD| for n = 1 was larger than that for n = 3. Moreover, |aRM| was almost constant for all n, indicating a similar scale of marker motion near the lung tumor. MRA showed that |aRM| in the left–right direction was the major cause of TLE; however, this contribution made little difference to the 3D TLE because of the small amount of motion in the left–right direction. The TLE
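The TLE definition, the root mean square over monitoring samples of the distance between the actual target position P_t and the surrogate-estimated position P_e, can be sketched on hypothetical trajectories:

```python
import numpy as np

# Hypothetical target trajectory: 5 mm sinusoidal respiratory motion along one
# axis, sampled over a monitoring window; the surrogate-based estimate P_e is
# the target plus an assumed 0.8 mm estimation noise per axis.
rng = np.random.default_rng(3)
samples = 120
p_t = np.column_stack([5.0 * np.sin(np.linspace(0.0, 4.0 * np.pi, samples)),
                       np.zeros(samples), np.zeros(samples)])   # target (mm)
p_e = p_t + rng.normal(0.0, 0.8, size=p_t.shape)                # estimate (mm)

# TLE: root mean square of the 3D distance between P_t and P_e over monitoring.
tle = np.sqrt(np.mean(np.sum((p_t - p_e) ** 2, axis=1)))
print(f"3D TLE = {tle:.2f} mm")
```

In the study this statistic is then regressed on |TMD| and |aRM| for each n to identify which geometric factors drive the localization error.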

  10. Instrument evaluation no. 11. ESI nuclear model 271 C contamination monitor

    International Nuclear Information System (INIS)

    Burgess, P.H.; Iles, W.J.

    1978-06-01

    The various radiations encountered in radiological protection cover a wide range of energies, and radiation measurements have to be carried out under an equally broad spectrum of environmental conditions. This report is one of a series intended to give information on the performance characteristics of radiological protection instruments, to assist in the selection of appropriate instruments for a given purpose, to aid in interpreting the results obtained with such instruments, and, in particular, to indicate the likely sources and magnitudes of errors that might be associated with measurements in the field. The radiation, electrical and environmental characteristics of radiation protection instruments are considered, together with those aspects of the construction which make an instrument convenient for routine use. To provide consistent criteria for instrument performance, the range of tests performed on any particular class of instrument, the test methods and the criteria of acceptable performance are based broadly on the appropriate Recommendations of the International Electrotechnical Commission. The radiations used in the tests are, in general, selected from the range of reference radiations for instrument calibration being drawn up by the International Standards Organisation. Normally, each report deals with the capabilities and limitations of one model of instrument and no direct comparison with other