WorldWideScience

Sample records for sandwich-type standard error

  1. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    Science.gov (United States)

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
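
    The sandwich idea referenced in the title is general enough to sketch outside of SEM. Below is a minimal Python illustration of a sandwich covariance A⁻¹BA⁻¹ whose "meat" uses Newey-West (Bartlett) weights to allow for serially dependent scores; this is a generic stand-in for the time-series adjustment the abstract describes, not the authors' exact estimator. The score matrix and Hessian are assumed to come from the user's own model fit.

      import numpy as np

      def sandwich_se(scores, hessian, n_lags=5):
          """Sandwich standard errors A^{-1} B A^{-1} with a Newey-West
          'meat' to accommodate serially dependent scores.
          scores: (T, p) per-observation score vectors
          hessian: (p, p) average negative Hessian (the 'bread' A)
          """
          T, p = scores.shape
          # Meat: long-run covariance of the scores.
          B = scores.T @ scores / T
          for lag in range(1, n_lags + 1):
              w = 1.0 - lag / (n_lags + 1.0)          # Bartlett weight
              G = scores[lag:].T @ scores[:-lag] / T  # lag-autocovariance
              B += w * (G + G.T)
          A_inv = np.linalg.inv(hessian)
          cov = A_inv @ B @ A_inv / T
          return np.sqrt(np.diag(cov))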

  3. The standard error of the Pearson skew

    Directory of Open Access Journals (Sweden)

    Bradley Harding

    2015-02-01

    Full Text Available The Pearson skew is a measure of the asymmetry of a distribution, based on the difference between its mean and median. Here we show how to calculate the Pearson skew and how to estimate its standard error and confidence interval. The derivation assumes a population following a normal distribution. Simulations explored the validity of this expression both when the normality assumption is met and when it is not. The standard error of the Pearson skew proved very robust for non-normal populations, compared to the Fisher skew as presented in Harding, Tremblay and Cousineau (2014).
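
    As a quick numerical companion to this record, Pearson's second skewness coefficient and a bootstrap standard error can be computed in a few lines. The bootstrap is a generic stand-in here; the article derives a closed-form standard error under normality.

      import numpy as np

      def pearson_skew(x):
          # Pearson's second skewness coefficient: 3*(mean - median)/sd
          return 3.0 * (np.mean(x) - np.median(x)) / np.std(x, ddof=1)

      rng = np.random.default_rng(0)
      x = rng.normal(size=200)
      boot = [pearson_skew(rng.choice(x, size=x.size, replace=True))
              for _ in range(2000)]
      print(pearson_skew(x), np.std(boot, ddof=1))  # estimate and bootstrap SE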

  4. Laser-induced fluorescence reader with a turbidimetric system for sandwich-type immunoassay using nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y.H.; Lim, H.B., E-mail: plasma@dankook.ac.kr

    2015-07-09

    Graphical abstract: Laser-induced fluorescence reader with ratiometric correction for sandwich-type immunoassay using nanoparticles. - Highlights: • A laser-induced fluorescence system with ratiometric correction was developed. • The system reduced experimental error caused by particle loss and aggregation. • A detection limit of about 39 pg mL⁻¹ for salinomycin was obtained. • Calibration linearity and sensitivity were also significantly improved. • The system has the potential for bioanalysis using various nanoparticles. - Abstract: A unique laser-induced fluorescence (LIF) reader equipped with a turbidimetric system was developed for a sandwich-type immunoassay using nanoparticles. The system was specifically designed to reduce experimental error caused by particle loss, aggregation and sinking, and to improve analytical performance through ratiometric measurement of the fluorescence with respect to the turbidimetric absorbance. For application to the determination of salinomycin, antibody-immobilized magnetic nanoparticles (MNPs) and FITC-doped silica nanoparticles (colored balls) were synthesized for magnetic extraction and for tagging as a fluorescence probe, respectively. A detection limit of about 39 pg mL⁻¹ was obtained, an improvement of about 2-fold over that obtained without the turbidimetric system. Calibration linearity and sensitivity were also improved: the R² coefficient increased from 0.8601 to 0.9905, and the calibration slope increased 1.92-fold. The developed LIF reader has the potential to be used for fluorescence measurements with various nanomaterials, such as quantum dots.

  5. Analytic standard errors for exploratory process factor analysis.

    Science.gov (United States)

    Zhang, Guangjian; Browne, Michael W; Ong, Anthony D; Chow, Sy Miin

    2014-07-01

    Exploratory process factor analysis (EPFA) is a data-driven latent variable model for multivariate time series. This article presents analytic standard errors for EPFA. Unlike standard errors for exploratory factor analysis with independent data, the analytic standard errors for EPFA take into account the time dependency in time series data. In addition, factor rotation is treated as the imposition of equality constraints on model parameters. Properties of the analytic standard errors are demonstrated using empirical and simulated data.

  6. A Novel Sandwich-type Dinuclear Complex for High-capacity Hydrogen Storage

    Institute of Scientific and Technical Information of China (English)

    朱海燕; 陈元振; 李赛; 曹秀贞; 柳永宁

    2012-01-01

    From density functional theory (DFT) calculations, we predicted that the sandwich-type dinuclear organometallic compounds Cp2Ti2 and Cp2Sc2 can each adsorb up to eight hydrogen molecules, corresponding to high gravimetric storage capacities of 6.7% and 6.8% (w/w), respectively. The sandwich-type organometallocenes proposed in this work are favorable for reversible adsorption and desorption of hydrogen at ambient conditions.

  7. Sandwich-type theorems for a class of integral operators with special properties

    Directory of Open Access Journals (Sweden)

    Parisa Hariri

    2014-02-01

    Full Text Available In the present paper, we prove subordination, superordination and sandwich-type properties of certain integral operators for univalent functions on the open unit disc; moreover, the special behavior of this class is investigated.

  8. Factor Rotation and Standard Errors in Exploratory Factor Analysis

    Science.gov (United States)

    Zhang, Guangjian; Preacher, Kristopher J.

    2015-01-01

    In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…

  9. 75 FR 15371 - Time Error Correction Reliability Standard

    Science.gov (United States)

    2010-03-29

    ...Pursuant to section 215 of the Federal Power Act, the Commission proposes to remand the proposed revised Time Error Correction Reliability Standard developed by the North American Electric Reliability Corporation (NERC) in order for NERC to develop several modifications to the proposed Reliability Standard. The proposed action ensures that any modifications to Reliability Standards will be…

  10. Standard Errors of Prediction for the Vineland Adaptive Behavior Scales.

    Science.gov (United States)

    Atkinson, Leslie

    1990-01-01

    Offers standard errors of prediction and confidence intervals for the Vineland Adaptive Behavior Scales (VABS) that help in deciding whether variation in the scores obtained when the scale is administered to the same person more than once is a result of measurement error or reflects actual change in the examinee's functional level. Presented values were…

  11. Influence of Sandwich-Type Constrained Layer Damper Design Parameters on Damping Strength

    Directory of Open Access Journals (Sweden)

    Inaki Merideno

    2016-01-01

    Full Text Available This paper presents a theoretical study of the parameters that influence sandwich-type constrained layer damper design. Although there are different ways to reduce the noise generated by a railway wheel, most devices are based on the mechanism of increasing wheel damping. Sandwich-type constrained layer dampers can be designed so their resonance frequencies coincide with the wheel’s resonant vibration frequencies, and thus the damping effect can be concentrated within the frequency ranges of interest. However, the influence of design parameters has not yet been studied. Based on a number of numerical simulations, this paper provides recommendations for the design stages of sandwich-type constrained layer dampers.

  12. SANDWICH-TYPE THEOREMS FOR MEROMORPHIC MULTIVALENT FUNCTIONS ASSOCIATED WITH THE LIU-SRIVASTAVA OPERATOR

    Institute of Scientific and Technical Information of China (English)

    Nak Eun Cho

    2012-01-01

    The purpose of this article is to obtain some subordination and superordination preserving properties of meromorphic multivalent functions in the punctured open unit disk associated with the Liu-Srivastava operator. The sandwich-type results for these meromorphic multivalent functions are also considered.

  13. A Novel Borophosphate Coordination Polymer with Sandwich-type Supramolecular Architecture

    Institute of Scientific and Technical Information of China (English)

    Mao Feng LI; Heng Zhen SHI; Yong Kui SHAN; Ming Yuan HE

    2004-01-01

    A novel borophosphate, (Hmel)3{Co2[(mel)2(HPO4)2(PO4)](H3BO3·H2O)} (mel = melamine), has been synthesized under mild solvothermal conditions. The compound exhibits a highly ordered organic-inorganic sandwich-type supramolecular architecture assembled via metal coordination, hydrogen bonds and π-π stacking interactions.

  14. Sandwich-type tetrakis(phthalocyaninato) dysprosium-cadmium quadruple-decker SMM.

    Science.gov (United States)

    Wang, Hailong; Qian, Kang; Wang, Kang; Bian, Yongzhong; Jiang, Jianzhuang; Gao, Song

    2011-09-14

    Homoleptic tetrakis[2,3,9,10,16,17,23,24-octa(butyloxy)phthalocyaninato] dysprosium-cadmium quadruple-decker complex 1 was isolated in a relatively good yield of 43% from a simple one-pot reaction. This compound represents the first sandwich-type tetrakis(phthalocyaninato) rare earth-cadmium quadruple-decker SMM to be structurally characterized.

  15. Sandwich-Type Theorems for a Class of Multiplier Transformations Associated with the Noor Integral Operators

    Directory of Open Access Journals (Sweden)

    Nak Eun Cho

    2012-01-01

    Full Text Available We obtain some subordination- and superordination-preserving properties for a class of multiplier transformations associated with Noor integral operators defined on the space of normalized analytic functions in the open unit disk. The sandwich-type theorems for these transformations are also considered.

  16. Highly efficient recycling of a sandwich type polyoxometalate oxidation catalyst using solvent resistant nanofiltration

    NARCIS (Netherlands)

    Witte, Peter T.; Chowdhury, Sankhanilay Roy; Elshof, ten Johan E.; Sloboda-Rozner, Dorit; Neumann, Ronny; Alsters, Paul L.

    2005-01-01

    A sandwich type polyoxometalate catalyst ([MeN(n-C8H17)3]12[WZn3(ZnW9O34)2]) was very efficiently recycled by nanofiltration with almost quantitative retention, using an α-alumina supported mesoporous γ-alumina membrane.

  17. Robust Computation of Error Vector Magnitude for Wireless Standards

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm; Larsen, Torben

    2013-01-01

    The modulation accuracy described by an error vector magnitude is a critical parameter in modern communication systems — defined originally as a performance metric for transmitters but now also used in receiver design and for more general signal analysis. The modulation accuracy is a measure of how far a test signal is from a reference signal at the symbol values when some parameters in a reconstruction model are optimized for best agreement. This paper provides an approach to computing the error vector magnitude, as described in several standards, from measured or simulated data. It is shown that the error vector magnitude optimization problem is generally non-convex. Robust estimation of the initial conditions for the optimizer is suggested, which is particularly important for a non-convex problem. A Benders decomposition approach is used to separate the convex and non-convex parts of the problem…
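
    Stripped of the standard-specific reconstruction parameters, the core EVM computation compares measured and reference symbols after a best-fit adjustment. A minimal sketch with a single complex gain fitted in closed form; real standards add offset, timing and frequency terms, which is where the non-convexity noted in the abstract arises.

      import numpy as np

      def evm_percent(measured, reference):
          # Fit one complex gain g minimizing ||measured - g*reference||^2.
          g = np.vdot(reference, measured) / np.vdot(reference, reference)
          err = measured - g * reference
          return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2)
                                 / np.mean(np.abs(reference) ** 2))

      # Example with QPSK-like symbols (invented data):
      ref = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
      meas = ref * 1.05 + 0.02 * np.array([1 + 0j, -1j, 1j, -1 + 0j])
      print(evm_percent(meas, ref))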

  18. Target registration and target positioning errors in computer-assisted neurosurgery: proposal for a standardized reporting of error assessment.

    Science.gov (United States)

    Widmann, Gerlig; Stoffner, Rudolf; Sieb, Michael; Bale, Reto

    2009-12-01

    Assessment of errors is essential in the development, testing and clinical application of computer-assisted neurosurgery. Our aim was to provide a comprehensive overview of the different methods of assessing target registration error (TRE) and target positioning error (TPE) and to develop a proposal for standardized reporting of error assessment. A PubMed search for phantom, cadaver or clinical studies on TRE and TPE was performed. Reporting standards were defined according to (a) study design and evaluation methods and (b) specifications of the navigation technology. The proposed standardized reporting includes (a) study design (controlled, non-controlled), study type (non-anthropomorphic phantom, anthropomorphic phantom, cadaver, patient), target design, error type and subtypes, space of TPE measurement, and statistics, and (b) image modality, scan parameters, tracking technology, registration procedure and targeting technique. Adoption of the proposed standardized reporting may help in the understanding and comparability of different accuracy reports. Copyright (c) 2009 John Wiley & Sons, Ltd.

  19. Estimating the Standard Error of the Judging in a modified-Angoff Standards Setting Procedure

    Directory of Open Access Journals (Sweden)

    Robert G. MacCann

    2004-03-01

    Full Text Available For a modified Angoff standards setting procedure, two methods of calculating the standard error of the judging were compared. The Central Limit Theorem (CLT) method is easy to calculate and uses readily available data. It estimates the variance of mean cut scores as a function of the variance of cut scores within a judging group, based on the independent judgements at Stage 1 of the process. Its theoretical drawback is that it is unable to take account of the effects of collaboration among the judges at Stages 2 and 3. The second method, an application of equipercentile (EQP) equating, relies on the selection of very large stable candidatures and the standardisation of the raw score distributions to remove effects associated with test difficulty. The standard error estimates were then empirically obtained from the mean cut score variation observed over a five year period. For practical purposes, the two methods gave reasonable agreement, with the CLT method working well for the top band, the band that attracts most public attention. For some bands in English and Mathematics, the CLT standard error was smaller than the EQP estimate, suggesting the CLT method be used with caution as an approximate guide only.
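
    The CLT method summarized above amounts to the ordinary standard error of a mean applied to the judges' independent Stage 1 cut scores. A minimal sketch with invented judge data:

      import numpy as np

      def clt_se(cut_scores):
          # SE of the mean cut score from the spread across judges.
          cuts = np.asarray(cut_scores, dtype=float)
          return np.std(cuts, ddof=1) / np.sqrt(cuts.size)

      print(clt_se([61.0, 58.5, 64.0, 60.0, 62.5]))  # illustrative values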

  20. Theoretical prediction of the damping of a railway wheel with sandwich-type dampers

    Science.gov (United States)

    Merideno, Inaki; Nieto, Javier; Gil-Negrete, Nere; Giménez Ortiz, José Germán; Landaberea, Aitor; Iartza, Jon

    2014-09-01

    This paper presents a procedure for predicting the damping added to a railway wheel when sandwich-type dampers are installed. Although there are different ways to reduce the noise generated by a railway wheel, most devices are based on the mechanism of increasing wheel damping. This is why modal damping ratios are a clear indicator of the efficiency of the damping device and essential when a vibro-acoustic study of a railway wheel is carried out. Based on a number of output variables extracted from the wheel and damper models, the strategy explained herein provides the final damping ratios of the damped wheel. Several different configurations are designed and experimentally tested. Theoretical and experimental results agree adequately, and it is demonstrated that this procedure is a good tool for qualitative comparison between different solutions in the design stages.

  1. Glass-sandwich-type organic solar cells utilizing liquid crystalline phthalocyanine

    Science.gov (United States)

    Usui, Toshiki; Nakata, Yuya; De Romeo Banoukepa, Gilles; Fujita, Kento; Nishikawa, Yuki; Shimizu, Yo; Fujii, Akihiko; Ozaki, Masanori

    2017-02-01

    Glass-sandwich-type organic solar cells utilizing a liquid crystalline phthalocyanine, 1,4,8,11,15,18,22,25-octahexylphthalocyanine (C6PcH2), have been fabricated and their photovoltaic properties have been studied. The short-circuit current density (Jsc) and power conversion efficiency (PCE) depend on the C6PcH2 layer thickness, and the maximum performance, namely a Jsc of 7.1 mA/cm² and a PCE of 1.64%, was demonstrated for a device having a 420-nm-thick C6PcH2 layer. We examined the photovoltaic properties from the viewpoint of the electrical conductance of the C6PcH2 layer, based on the distribution of the column-axis direction.

  2. Conditional Standard Errors of Measurement for Composite Scores Using IRT

    Science.gov (United States)

    Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan

    2012-01-01

    Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…

  3. Relation between Streaming Potential and Streaming Electrification Generated by Streaming of Water through a Sandwich-type Cell

    OpenAIRE

    Maruyama, Kazunori; NIKAIDO, Mitsuru; Hara, Yoshinori; Tanizaki, Yoshie

    2012-01-01

    Both the streaming potential and the accumulated charge of the outflowing water were measured simultaneously using a sandwich-type cell. The voltages generated in divided sections along the flow direction satisfied additivity. The sign of the streaming potential agreed with that of the streaming electrification. The relation between streaming potential and streaming electrification was explained from the viewpoint of the electrical double layer at the glass-water interface.

  5. Flexible freestanding sandwich type ZnO/rGO/ZnO electrode for wearable supercapacitor

    Science.gov (United States)

    Ghorbani, Mina; Golobostanfard, Mohammad Reza; Abdizadeh, Hossein

    2017-10-01

    The development of flexible supercapacitors with high energy and power density, as one of the main components of wearable electronics, is of enormous interest. In this report, a unique flexible electrode based on freestanding sandwich-type ZnO/rGO/ZnO paper is fabricated by a simple, low-cost sol-gel method for use in flexible supercapacitors. ZnO layers are deposited on both sides of rGO paper, which is prepared by a modified Hummers' method and evaporation-induced assembly. Uniform and densely packed ZnO layers are formed on the graphene oxide paper, and the paper is simultaneously reduced. Structural analysis reveals the formation of ZnO thin films on both sides of the rGO nanosheets, which leads to the sandwich architecture. The effect of the ZnO sol-gel process parameters on the microstructure of the sandwich paper is also investigated; the most suitable conditions for the highest supercapacitor performance are 1-PrOH as solvent, TeA as stabilizer, a sol concentration of 0.2 M, a deposition speed of 30 mm min⁻¹, and 10 deposited layers. The results of electrochemical impedance spectroscopy, galvanostatic charge-discharge, and cyclic voltammetry confirm that the incorporation of ZnO improves the capacitive performance of the rGO electrode. Moreover, the ZnO/rGO/ZnO flexible electrode exhibits a suitable capacitance of 60.63 F g⁻¹ at a scan rate of 5 mV s⁻¹.

  6. A novel sandwich-type traveling wave piezoelectric tracked mobile system.

    Science.gov (United States)

    Wang, Liang; Shu, Chengyou; Zhang, Quan; Jin, Jiamei

    2017-03-01

    In this paper, a novel sandwich-type traveling wave piezoelectric tracked mobile system is proposed, designed, fabricated and experimentally investigated. The proposed system has the advantages of simple structure, high mechanical integration, freedom from electromagnetic interference, and no need for lubrication, and hence shows potential for application in robotic rovers for planetary exploration. The tracked mobile system comprises a sandwich actuating mechanism and a metal track. The actuating mechanism includes a sandwich piezoelectric transducer and two annular parts symmetrically placed at either end of the transducer, while the metal track is tensioned along the outer surfaces of the annular parts. Traveling waves with the same rotational direction are generated in the two annular parts, producing microscopic elliptical motions of the surface particles on the annular parts. If the pre-load is applied properly, the metal track can then be driven by friction to achieve bidirectional movement. First, the finite element method was adopted to conduct modal and harmonic response analyses of the actuating mechanism, and the vibration characteristics were measured to confirm the operating principle. The optimal driving frequency of the system prototype, namely 35.1 kHz, was then determined by frequency sensitivity experiments. Finally, the mechanical motion characteristics of the prototype were investigated experimentally. The results show that the average motion speeds of the prototype in the two directions were 72 mm/s and 61.5 mm/s under an excitation voltage of 500 Vrms. The optimal loading weights in the two directions were 0.32 kg and 0.24 kg, with maximum speeds of 59.5 mm/s and 61.67 mm/s at a driving voltage of 300 Vrms.

  7. SANDWICH-TYPE RESULTS FOR A CLASS OF CONVEX INTEGRAL OPERATORS

    Institute of Scientific and Technical Information of China (English)

    Teodor Bulboacǎ

    2012-01-01

    Let H(U) be the space of analytic functions in the unit disk U. For the integral operator A_{α,β,γ}^{φ,ψ} : K → H(U), with K ⊂ H(U), defined by

        A_{α,β,γ}^{φ,ψ}[f](z) = [ (β+γ) / (z^γ φ(z)) ∫₀^z f^α(t) ψ(t) t^{δ-1} dt ]^{1/β},

    where α, β, γ, δ ∈ ℂ and φ, ψ ∈ H(U), we will determine sufficient conditions on g1, g2, α, β and γ such that

        z ψ(z) [g1(z)/z]^β ≺ z ψ(z) [f(z)/z]^β ≺ z ψ(z) [g2(z)/z]^β

    implies

        z φ(z) [A_{α,β,γ}^{φ,ψ}[g1](z)/z]^β ≺ z φ(z) [A_{α,β,γ}^{φ,ψ}[f](z)/z]^β ≺ z φ(z) [A_{α,β,γ}^{φ,ψ}[g2](z)/z]^β.

    The symbol "≺" stands for subordination, and we call such a result a sandwich-type theorem. In addition, z φ(z) [A_{α,β,γ}^{φ,ψ}[g1](z)/z]^β is the largest function and z φ(z) [A_{α,β,γ}^{φ,ψ}[g2](z)/z]^β the smallest function such that the left-hand side, respectively the right-hand side, of the above implication holds for all functions f satisfying the assumption. We give a particular case of the main result, obtained for appropriate choices of the functions φ and ψ, that also generalizes classic results of the theory of differential subordination and superordination.

  8. Research of error structure of standard time signal synchronization system via digital television channels

    OpenAIRE

    Троцько, Максим Леонідович; Тріщ, Роман Михайлович

    2014-01-01

    The error structure of the standard time signal synchronization system via digital television channels was investigated. The relevance of this research stems from the change of television broadcasting format in Ukraine from analog to digital, which has necessitated the creation of a new standard time signal transmission system adapted to the current format. An estimate of the basic permissible error of the system of standard time signal transmission via digital television channels, wh…

  9. Standard errors as weights in multilateral price indexes

    NARCIS (Netherlands)

    Hill, R.; Timmer, M.P.

    2006-01-01

    Various multilateral methods for computing price indexes use bilateral comparisons as their basic building blocks. Some give greater weight to those bilateral comparisons deemed more reliable. However, none of the existing reliability measures adjusts for gaps in the data. We show how the standard e…

  10. Sandwich-type PLLA-nanosheets loaded with BMP-2 induce bone regeneration in critical-sized mouse calvarial defects.

    Science.gov (United States)

    Huang, Kuo-Chin; Yano, Fumiko; Murahashi, Yasutaka; Takano, Shuta; Kitaura, Yoshiaki; Chang, Song Ho; Soma, Kazuhito; Ueng, Steve W N; Tanaka, Sakae; Ishihara, Kazuhiko; Okamura, Yosuke; Moro, Toru; Saito, Taku

    2017-09-01

    To overcome serious clinical problems caused by large bone defects, various approaches to bone regeneration have been researched, including tissue engineering, biomaterials, stem cells and drug screening. Previously, we developed a free-standing biodegradable polymer nanosheet composed of poly(L-lactic acid) (PLLA) using a simple fabrication process consisting of spin-coating and peeling techniques. Here, we loaded recombinant human bone morphogenetic protein-2 (rhBMP-2) between two 60-nm-thick PLLA nanosheets, and investigated these sandwich-type nanosheets in bone regeneration applications. The PLLA nanosheets displayed constant and sustained release of the loaded rhBMP-2 for over 2 months in vitro. Moreover, we implanted the sandwich-type nanosheets with or without rhBMP-2 into critical-sized defects in mouse calvariae. Bone regeneration was evident 4 weeks after implantation, and the size and robustness of the regenerated bone had increased by 8 weeks after implantation in mice implanted with the rhBMP-2-loaded nanosheets, whereas no significant bone formation occurred over a period of 20 weeks in mice implanted with blank nanosheets. The PLLA nanosheets loaded with rhBMP-2 may be useful in bone regenerative medicine; furthermore, the sandwich-type PLLA nanosheet structure may potentially be applied as a potent prolonged sustained-release carrier of other molecules or drugs. Here we describe sandwich-type poly(L-lactic acid) (PLLA) nanosheets loaded with recombinant human bone morphogenetic protein-2 (rhBMP-2) as a novel method for bone regeneration. Biodegradable 60-nm-thick PLLA nanosheets display strong adhesion without any adhesive agent. The sandwich-type PLLA nanosheets displayed constant and sustained release of the loaded rhBMP-2 for over 2 months in vitro. The nanosheets with rhBMP-2 markedly enhanced bone regeneration when they were implanted into critical-sized defects in mouse calvariae. In addition to their application for bone regeneration, PLLA…

  11. Characteristics of sandwich-type structural elements built of advanced composite materials from three dimensional fabrics

    Directory of Open Access Journals (Sweden)

    Castejón, L.

    1997-12-01

    Full Text Available Sandwich-type structures have proved to be alternatives of great success in several fields of application, and especially in the building sector. This is due to their outstanding specific rigidity and strength against bending loads, and to a range of other advantages such as fatigue and impact resistance, flat and smooth surfaces, high electrical and thermal insulation, and design versatility. However, traditional sandwich structures present problems such as their tendency towards delamination, stress concentrations at bores or screwed joints, and fire resistance. These problems are alleviated by new sandwich structures built from three-dimensional fabrics of advanced composite materials, which retain the advantages of the more traditional sandwich structures. These new structures can thus be applied in areas where conventional sandwich structures are already used, such as walls, partitions, floor and ceiling structures, domes, vaults and dwellings, but with greater success.


  12. The Asymptotic Standard Errors of Some Estimates of Uncertainty in the Two-Way Contingency Table

    Science.gov (United States)

    Brown, Morton B.

    1975-01-01

    Estimates of conditional uncertainty, contingent uncertainty, and normed modifications of contingent uncertainty have been proposed for the two-way contingency table. The asymptotic standard errors of the estimates are derived. (Author)

  13. Analytic Tools for Evaluating Variability of Standard Errors in Large-Scale Establishment Surveys

    Directory of Open Access Journals (Sweden)

    Cho MoonJung

    2014-12-01

    Full Text Available Large-scale establishment surveys often exhibit substantial temporal or cross-sectional variability in their published standard errors. This article uses a framework defined by survey generalized variance functions to develop three sets of analytic tools for the evaluation of these patterns of variability. These tools are for (1) identification of predictor variables that explain some of the observed temporal and cross-sectional variability in published standard errors; (2) evaluation of the proportion of variability attributable to the abovementioned predictors, equation error and estimation error, respectively; and (3) comparison of equation error variances across groups defined by observable predictor variables. The primary ideas are motivated and illustrated by an application to the U.S. Current Employment Statistics program.

  14. Research on Effective Electric-Mechanical Coupling Coefficient of Sandwich Type Piezoelectric Ultrasonic Transducer Using Bending Vibration Mode

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2015-01-01

    Full Text Available An analytical model of the electromechanical coupling coefficient and of the length optimization of a bending piezoelectric ultrasonic transducer is proposed. The piezoelectric transducer consists of 8 PZT elements sandwiched between four thin electrodes, and the PZT elements are clamped by a screwed connection between the fore beam and back beam. First, a bending vibration model of the piezoelectric transducer is built based on the Timoshenko beam theory. Second, an analytical model of the effective electromechanical coupling coefficient is built on top of the bending vibration model; the energy method and the electromechanical equivalent circuit method are involved in the modelling process. To validate the analytical model, a sandwich-type piezoelectric transducer example in the second-order bending vibration mode is analysed. The effective electromechanical coupling coefficient of the transducer is optimized with the simplex reflection technique, and the optimized length ratio of the transducer is obtained. Finally, experimental prototypes of the sandwich-type piezoelectric transducers were fabricated. The bending vibration mode and impedance of the experimental prototypes were tested, and the electromechanical coupling coefficient was obtained from the test results. Results show that the analytical model is in good agreement with the experimental model.
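
    For readers reproducing the experimental part of such studies, the effective coupling coefficient is commonly obtained from the resonance and antiresonance frequencies of the measured impedance curve. A sketch of that standard relation; the article's analytical model is more detailed, and the frequencies below are invented.

      import numpy as np

      def k_eff(f_resonance, f_antiresonance):
          # k_eff^2 = (f_a^2 - f_r^2) / f_a^2
          return np.sqrt((f_antiresonance**2 - f_resonance**2)
                         / f_antiresonance**2)

      print(k_eff(35100.0, 36400.0))  # hypothetical frequencies in Hz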

  15. Estimation of standard error of the parameter of change using simulations

    Directory of Open Access Journals (Sweden)

    Djordje Petkovic

    2015-06-01

    Full Text Available The main objective of this paper is to present a procedure for estimating the standard error of the parameter of change (index of turnover) in the R software (R Core Team, 2014) when samples are coordinated. The problem of estimating this standard error is dealt with in the statistical literature by various types of approximation. In this paper I start from the method presented at the Consultation on Survey Methodology between Statistics Sweden and the Statistical Office of the Republic of Serbia (SERSTAT 2013:22), run simulations, and calculate estimates of the correlation and the true value of the standard error of the change between turnovers from two years. I use two consecutive sampling frames of the quarterly Structural Business Survey (SBS). These frames are updated with turnover from the corresponding balance sheets. An important assumption is that annual turnover is highly correlated with quarterly turnover, so that the computed correlation can be referred to when comparing methods of estimating the correlation on the sample data.
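
    The simulation procedure described can be reproduced compactly. The sketch below uses Python rather than R for consistency with the other examples here, and assumes lognormal turnovers with a chosen correlation between the two occasions; the empirical spread of the index of change across replications plays the role of its true standard error.

      import numpy as np

      rng = np.random.default_rng(1)
      rho, n, reps = 0.9, 200, 5000
      cov = [[1.0, rho], [rho, 1.0]]
      ratios = []
      for _ in range(reps):
          z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
          t1, t2 = np.exp(z[:, 0]), np.exp(z[:, 1])  # turnovers, years 1 and 2
          ratios.append(t2.sum() / t1.sum())         # index of change
      print(np.std(ratios, ddof=1))  # simulated SE of the parameter of change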

  16. The Standard Error of a Proportion for Different Scores and Test Length.

    Directory of Open Access Journals (Sweden)

    David A. Walker

    2005-06-01

    Full Text Available This paper examines Smith's (2003) proposed standard error of a proportion index associated with the idea of reliability as sufficiency of information. A detailed table indexing all of the standard error values affiliated with assessments that range from 5 to 100 items, where students scored as low as 50% correct and 50% incorrect to as high as 95% correct and 5% incorrect, calculated in increments of 1 percentage point, is presented, along with distributional qualities. Examples using this measure for classroom teachers and higher education instructors of assessment are provided.
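
    The tabulated values are driven by the familiar binomial standard error of a proportion. A sketch that regenerates a slice of such a table, assuming the index takes the form sqrt(p(1-p)/k) for a k-item test; Smith's index may differ in detail.

      import numpy as np

      def se_proportion(p_correct, n_items):
          # Binomial standard error of an observed proportion correct.
          return np.sqrt(p_correct * (1.0 - p_correct) / n_items)

      for k in (5, 25, 50, 100):
          print(k, round(se_proportion(0.75, k), 4))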

  17. Error analysis for duct leakage tests in ASHRAE standard 152P

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, J.W.

    1997-06-01

    This report presents an analysis of random uncertainties in the two methods of testing for duct leakage in Standard 152P of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). The test method is titled Standard Method of Test for Determining Steady-State and Seasonal Efficiency of Residential Thermal Distribution Systems. Equations have been derived for the uncertainties in duct leakage for given levels of uncertainty in the measured quantities used as inputs to the calculations. Tables of allowed errors in each of these independent variables, consistent with fixed criteria of overall allowed error, have been developed.
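
    The derivation pattern described, uncertainty in duct leakage as a function of uncertainties in the measured inputs, follows the usual first-order error-propagation rule. A generic numerical sketch, not the Standard 152P equations themselves; the leakage function below is a toy placeholder.

      import numpy as np

      def propagated_sigma(f, x, sigmas, h=1e-6):
          # First-order propagation: sigma_f^2 = sum_i (df/dx_i * sigma_i)^2
          x = np.asarray(x, dtype=float)
          total = 0.0
          for i, s in enumerate(sigmas):
              dx = np.zeros_like(x)
              dx[i] = h
              dfdx = (f(x + dx) - f(x - dx)) / (2 * h)  # central difference
              total += (dfdx * s) ** 2
          return np.sqrt(total)

      # Example: leakage fraction as a toy function of two measured flows.
      leak = lambda v: (v[0] - v[1]) / v[0]
      print(propagated_sigma(leak, [1000.0, 900.0], [20.0, 20.0]))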

  18. Analytic Estimation of Standard Error and Confidence Interval for Scale Reliability.

    Science.gov (United States)

    Raykov, Tenko

    2002-01-01

    Proposes an analytic approach to standard error and confidence interval estimation of scale reliability with fixed congeneric measures. The method is based on a generally applicable estimator stability evaluation procedure, the delta method. The approach, which combines widespread point estimation of composite reliability in behavioral scale…

  19. Round-Robin Analysis of Social Interaction: Exact and Estimated Standard Errors.

    Science.gov (United States)

    Bond, Charles F., Jr.; Lashley, Brian R.

    1996-01-01

    The Social Relations model of D. A. Kenny estimates variances and covariances from a round-robin of two-person interactions. This paper presents a matrix formulation of the Social Relations model, using the formulation to derive exact and estimated standard errors for round-robin estimates of Social Relations parameters. (SLD)

  20. Encouraging the Flight of Error: Ethical Standards, Evidence Standards, and Randomized Trials

    Science.gov (United States)

    Boruch, Robert

    2007-01-01

    Thomas Jefferson recognized the value of reason and scientific experimentation in the eighteenth century. This chapter extends the idea in contemporary ways to standards that may be used to judge the ethical propriety of randomized trials and the dependability of evidence on effects of social interventions.

  1. Catalytic application of two novel sandwich-type polyoxometalates in synthesis of 14-substituted-14H-dibenzo[a,j]xanthenes

    Indian Academy of Sciences (India)

    Shabnam Sheshmani

    2013-03-01

    Two sandwich-type polyoxometalates, K12[As2W18Cu3O68]·30H2O and K12[As2W18U3O74]·21H2O, were found to be novel, efficient catalysts for the one-pot synthesis of various 14-aryl- and 14-alkyl-14H-dibenzo[a,j]xanthenes. Three-component condensation reactions of β-naphthol with aromatic or aliphatic aldehydes in the presence of these catalysts were investigated. These reactions were studied under different conditions, such as solvent-free media using both conventional heating and microwave irradiation, and also in several solvents. Results showed that the optimum reaction time and yield were obtained when reactions were carried out under solvent-free conditions. Furthermore, the catalysts could be recovered conveniently and reused efficiently.

  2. A colorimetric sandwich-type assay for sensitive thrombin detection based on enzyme-linked aptamer assay.

    Science.gov (United States)

    Park, Jun Hee; Cho, Yea Seul; Kang, Sungmuk; Lee, Eun Jeong; Lee, Gwan-Ho; Hah, Sang Soo

    2014-10-01

    A colorimetric sandwich-type assay based on an enzyme-linked aptamer assay has been developed for the fast and sensitive detection of as little as 25 fM thrombin with high linearity. Aptamer-immobilized glass was used to capture the target analyte, whereas a second aptamer, functionalized with horseradish peroxidase (HRP), was employed for conventional 3,3',5,5'-tetramethylbenzidine (TMB)-based colorimetric detection. Without the troublesome antibody requirement of a conventional enzyme-linked immunosorbent assay (ELISA), as little as 25 fM thrombin could be rapidly and reproducibly detected. This assay has recovery and accuracy superior, or at least equal, to those of conventional antibody-based ELISA.

  3. Two New Sandwich-Type Manganese {Mn5}-Substituted Polyoxotungstates: Syntheses, Crystal Structures, Electrochemistry, and Magnetic Properties.

    Science.gov (United States)

    Gupta, Rakesh; Khan, Imran; Hussain, Firasat; Bossoh, A Martin; Mbomekallé, Israël M; de Oliveira, Pedro; Sadakane, Masahiro; Kato, Chisato; Ichihashi, Katsuya; Inoue, Katsuya; Nishihara, Sadafumi

    2017-08-07

    Herein we report two pentanuclear Mn(II)-substituted sandwich-type polyoxotungstate complexes, [{Mn(bpy)}2Na(H2O)2(MnCl)2{Mn(H2O)}(AsW9O33)2](9-) and [{Mn(bpy)}2Na(H2O)2(MnCl){Mn(H2O)}2(SbW9O33)2](8-) (bpy = 2,2'-bipyridine), whose structures have been obtained by single-crystal X-ray diffraction (SCXRD), complemented by results obtained from elemental analysis, electrospray ionization mass spectrometry, Fourier transform infrared spectroscopy, and thermogravimetric analysis. They consist of two [B-α-XW9O33](9-) subunits sandwiching a cyclic assembly of the hexagonal [{Mn(bpy)}2Na(H2O)2(MnCl)2{Mn(H2O)}](9+) and [{Mn(bpy)}2Na(H2O)2(MnCl){Mn(H2O)}2](10+) moieties, respectively, and represent the first pentanuclear Mn(II)-substituted sandwich-type polyoxometalates (POMs). Both compounds have been synthesized by reacting MnCl2·4H2O with trilacunary Na9[XW9O33]·27H2O (X = As(III) and Sb(III)) POM precursors in the presence of bpy in a 1 M aqueous sodium chloride solution under mild reaction conditions. SCXRD showed that the alternate arrangement of three five-coordinated Mn(II) ions and two six-coordinated Mn(II) ions with an internal Na cation formed a coplanar six-membered ring that was sandwiched between two [B-α-XW9O33](9-) (X = As(III) and Sb(III)) subunits. The results of temperature-dependent direct-current (dc) magnetic susceptibility data indicated ferromagnetic interactions between Mn ions in the cluster. Moreover, alternating-current magnetic susceptibility measurements with a dc-biased magnetic field showed the existence of a ferromagnetic order for both samples. Electrochemistry studies revealed the presence of redox processes assigned to the Mn centers. They are associated with the deposition of material on the working electrode surface, possibly MnxOy, as demonstrated by electrochemical quartz crystal microbalance experiments.

  4. Standardized sign-out reduces intern perception of medical errors on the general internal medicine ward.

    Science.gov (United States)

    Salerno, Stephen M; Arnett, Michael V; Domanski, Jeremy P

    2009-01-01

    Prior research on reducing variation in housestaff handoff procedures has depended on proprietary checkout software. The use of low-technology standardization techniques has not been widely studied. We wished to determine whether standardizing the process of intern sign-out using low-technology sign-out tools could reduce the perception of errors and missing handoff data. We conducted a pre-post prospective study of a cohort of 34 interns on a general internal medicine ward. Night interns coming off duty and day interns reassuming care were surveyed on their perception of erroneous sign-out data, mistakes made by the night intern overnight, and occurrences unanticipated by sign-out. Trainee satisfaction with the sign-out process was assessed with a 5-point Likert survey. There were 399 intern surveys performed 8 weeks before and 6 weeks after the introduction of a standardized sign-out form. The response rate was 95% for the night interns and 70% for the interns reassuming care in the morning. After the standardized form was introduced, night interns were significantly (p …). However, the day teams perceived significantly fewer errors on the part of the night intern (p = .001) after introduction of the standardized sign-out sheet. There was no difference in mean Likert scores of resident satisfaction with sign-out before and after the intervention. Standardized written sign-out sheets significantly improve the completeness and effectiveness of handoffs between night and day interns. Further research is needed to determine whether these process improvements are related to better patient outcomes.

  5. Inference of nonlinear state-space models for sandwich-type lateral flow immunoassay using extended Kalman filtering.

    Science.gov (United States)

    Zeng, Nianyin; Wang, Zidong; Li, Yurong; Du, Min; Liu, Xiaohui

    2011-07-01

    In this paper, a mathematical model for a sandwich-type lateral flow immunoassay is developed from short available time series. A nonlinear dynamic stochastic model is considered that consists of the biochemical reaction system equations and the observation equation. After specifying the model structure, we apply the extended Kalman filter (EKF) algorithm to identify both the states and the parameters of the nonlinear state-space model. It is shown that the EKF algorithm can accurately identify the parameters and also predict the system states in the nonlinear dynamic stochastic model through an iterative procedure using a small number of observations. The identified mathematical model provides a powerful tool for testing system hypotheses and for inspecting the effects of various design parameters in a rapid and inexpensive way. Furthermore, by means of the established model, the dynamic changes in the concentration of antigens and antibodies can be predicted, thereby making it possible to analyze, optimize, and design the properties of lateral flow immunoassay devices. © 2011 IEEE
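
    The EKF recursion applied in this record is standard and compact enough to sketch. A minimal, model-agnostic single step is shown below; the immunoassay reaction dynamics f, observation h and their Jacobians are placeholders to be supplied by the user, and parameters would be estimated by augmenting them into the state, as the abstract describes.

      import numpy as np

      def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
          # Predict.
          x_pred = f(x)
          F = F_jac(x)
          P_pred = F @ P @ F.T + Q
          # Update with observation z.
          H = H_jac(x_pred)
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)
          x_new = x_pred + K @ (z - h(x_pred))
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      # Toy use: logistic-growth state with a noisy direct observation.
      f = lambda x: x + 0.3 * x * (1 - x)                 # placeholder dynamics
      F = lambda x: np.array([[1 + 0.3 * (1 - 2 * x[0])]])
      h = lambda x: x
      H = lambda x: np.array([[1.0]])
      x, P = np.array([0.1]), np.eye(1)
      for z in [0.12, 0.2, 0.31, 0.45]:                   # invented data
          x, P = ekf_step(x, P, np.array([z]), f, F, h, H,
                          1e-4 * np.eye(1), 1e-2 * np.eye(1))
      print(x)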

  6. A sandwich-type immunosensor using Pd-Pt nanocrystals as labels for sensitive detection of human tissue polypeptide antigen

    Science.gov (United States)

    Wang, Yaoguang; Wei, Qin; Zhang, Yong; Wu, Dan; Ma, Hongmin; Guo, Aiping; Du, Bin

    2014-02-01

    A sandwich-type immunosensor was developed for the detection of human tissue polypeptide antigen (hTPA). In this work, a graphene sheet (GS) was synthesized to modify the surface of a glassy carbon electrode (GCE), and Pd-Pt bimetallic nanocrystals were used as secondary-antibody (Ab2) labels for the fabrication of the immunosensor. The amperometric response of the immunosensor in catalyzing hydrogen peroxide (H2O2) was recorded, and electrochemical impedance spectroscopy was used to characterize the fabrication process. The anti-hTPA primary antibody (Ab1) was immobilized onto the GS-modified GCE via cross-linking with 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide hydrochloride and N-hydroxysuccinimide (EDC/NHS). With Ab1 immobilized onto the GS-modified GCE and Ab2 linked to the Pd-Pt bimetallic nanocrystals, the immunosensor demonstrated a wide linear range (0.0050-15 ng mL⁻¹), a low detection limit (1.2 pg mL⁻¹), good reproducibility, good selectivity and acceptable stability. This design strategy may find many potential applications in the detection of other cancer biomarkers.

  7. A novel carboxyethyltin functionalized sandwich-type germanotungstate: synthesis, crystal structure, photosensitivity, and application in dye-sensitized solar cells.

    Science.gov (United States)

    Sang, Xiaojing; Li, Jiansheng; Zhang, Lancui; Wang, Zanjiao; Chen, Weilin; Zhu, Zaiming; Su, Zhongmin; Wang, Enbo

    2014-05-28

    A novel sandwich-type germanotungstate, [C(NH2)3]10[Mn2{Sn(CH2)2COOH}2(B-α-GeW9O34)2]·8H2O (1), represents the first single-crystalline polyoxometalate (POM) functionalized by open-chain carboxyethyltin; it was designed and synthesized in aqueous solution and applied to a dye-sensitized solar cell (DSSC) for the first time. Its photosensitivity was explored through fluorescence spectroscopy (FL), surface photovoltage spectroscopy (SPV), electrochemical methods, and solid-state diffuse reflectance spectroscopy. Compound 1 displays the primary features of sensitizers in DSSCs, and the efficiency of the solar cell is 0.22%. Encouragingly, when 1 was employed to assemble a cosensitized solar cell by preparing a 1-doped TiO2 electrode and additionally adsorbing N719 dyes, a considerably improved efficiency was achieved through increased spectral absorption and accelerated electron transport, 19.4% higher than that of single N719 sensitization. This result opens up a new way to position different dyes on a single TiO2 film for cosensitization.

  8. Graphene-based three-dimensional hierarchical sandwich-type architecture for high-performance Li/S batteries.

    Science.gov (United States)

    Chen, Renjie; Zhao, Teng; Lu, Jun; Wu, Feng; Li, Li; Chen, Junzheng; Tan, Guoqiang; Ye, Yusheng; Amine, Khalil

    2013-10-09

    A multiwalled carbon nanotube/sulfur (MWCNT@S) composite with core-shell structure was successfully embedded into the interlayer galleries of graphene sheets (GS) through a facile two-step assembly process. Scanning and transmission electron microscopy images reveal a 3D hierarchical sandwich-type architecture of the composite GS-MWCNT@S. The thickness of the S layer on the MWCNTs is ~20 nm. Raman spectroscopy, X-ray diffraction, thermogravimetric analysis, and energy-dispersive X-ray analysis confirm that the sulfur in the composite is highly crystalline with a mass loading up to 70% of the composite. This composite is evaluated as a cathode material for Li/S batteries. The GS-MWCNT@S composite exhibits a high initial capacity of 1396 mAh/g at a current density of 0.2C (1C = 1672 mA/g), corresponding to 83% usage of the sulfur active material. Much improved cycling stability and rate capability are achieved for the GS-MWCNT@S composite cathode compared with the composite lacking GS or MWCNT. The superior electrochemical performance of the GS-MWCNT@S composite is mainly attributed to the synergistic effects of GS and MWCNTs, which provide a 3D conductive network for electron transfer, open channels for ion diffusion, strong confinement of soluble polysulfides, and effective buffer for volume expansion of the S cathode during discharge.

  9. Optical codeword demodulation with error rates below standard quantum limit using a conditional nulling receiver

    CERN Document Server

    Chen, Jian; Dutton, Zachary; Lazarus, Richard; Guha, Saikat

    2011-01-01

    The quantum states of two laser pulses---coherent states---are never mutually orthogonal, making perfect discrimination impossible. Even so, coherent states can achieve the ultimate quantum limit for capacity of a classical channel, the Holevo capacity. Attaining this requires the receiver to make joint-detection measurements on long codeword blocks, optical implementations of which remain unknown. We report the first experimental demonstration of a joint-detection receiver, demodulating quaternary pulse-position-modulation (PPM) codewords at a word error rate of up to 40% (2.2 dB) below that attained with direct-detection, the largest error-rate improvement over the standard quantum limit reported to date. This is accomplished with a conditional nulling receiver, which uses optimized-amplitude coherent pulse nulling, single photon detection and quantum feedforward. We further show how this translates into coding complexity improvements for practical PPM systems, such as in deep-space communication. We antici...

  10. Standard error of inverse prediction for dose-response relationship: approximate and exact statistical inference.

    Science.gov (United States)

    Demidenko, Eugene; Williams, Benjamin B; Flood, Ann Barry; Swartz, Harold M

    2013-05-30

    This paper develops a new metric, the standard error of inverse prediction (SEIP), for a dose-response relationship (calibration curve) when dose is estimated from response via inverse regression. SEIP can be viewed as a generalization of the coefficient of variation to the regression problem where x is predicted from the y-value. We employ nonstandard statistical methods to treat the inverse prediction, which has an infinite mean and variance due to the presence of a normally distributed variable in the denominator. We develop confidence intervals and hypothesis tests for SEIP on the basis of the normal approximation and on exact statistical inference based on the noncentral t-distribution. We derive the power functions for both approaches and test them via statistical simulations. The theoretical SEIP, the ratio of the regression standard error to the slope, can be viewed as the reciprocal of the signal-to-noise ratio, a popular measure in signal processing. The SEIP, as a figure of merit for inverse prediction, can be used for comparison of calibration curves with different dependent variables and slopes. We illustrate our theory with electron paramagnetic resonance tooth dosimetry for rapid estimation of the radiation dose received in the event of nuclear terrorism.
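
    The headline quantity is straightforward for a fitted straight-line calibration: the regression standard error divided by the absolute slope. A sketch under a simple linear dose-response model with invented data; the paper's exact inference uses the noncentral t-distribution.

      import numpy as np

      def seip(dose, response):
          # Fit response = a + b*dose by least squares; return s / |b|.
          X = np.column_stack([np.ones_like(dose), dose])
          beta, res, *_ = np.linalg.lstsq(X, response, rcond=None)
          resid = response - X @ beta
          s = np.sqrt(resid @ resid / (len(dose) - 2))  # regression SE
          return s / abs(beta[1])

      dose = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
      resp = np.array([0.1, 1.2, 1.9, 4.2, 7.8])  # made-up calibration data
      print(seip(dose, resp))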

  11. A comment on sampling error in the standardized mean difference with unequal sample sizes: avoiding potential errors in meta-analytic and primary research.

    Science.gov (United States)

    Laczo, Roxanne M; Sackett, Paul R; Bobko, Philip; Cortina, José M

    2005-07-01

    The authors discuss potential confusion in conducting primary studies and meta-analyses on the basis of differences between groups. First, the authors show that a formula for the sampling error of the standardized mean difference (d) that is based on equal group sample sizes can produce substantially biased results if applied with markedly unequal group sizes. Second, the authors show that the same concerns are present when primary analyses or meta-analyses are conducted with point-biserial correlations, as the point-biserial correlation (r) is a transformation of d. Third, the authors examine the practice of correcting a point-biserial r for unequal sample sizes and note that such correction would also increase the sampling error of the corrected r. Correcting rs for unequal sample sizes, but using the standard formula for sampling error in uncorrected r, can result in bias. The authors offer a set of recommendations for conducting meta-analyses of group differences.
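
    The point at issue can be made concrete with the usual large-sample approximation to the sampling variance of d. A short check, assuming the common Hedges-Olkin-style form; it shows how far the unequal-split variance drifts from the equal-split value at the same total N.

      def var_d(d, n1, n2):
          # Large-sample approximation to the sampling variance of d.
          return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

      # The equal-n formula applied to unequal groups understates the
      # sampling error here:
      print(var_d(0.5, 50, 50))  # 0.04125 (equal groups, N = 100)
      print(var_d(0.5, 90, 10))  # ~0.11236 (same N, markedly unequal groups)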

  12. Standard Positioning Performance Evaluation of a Single-Frequency GPS Receiver Implementing Ionospheric and Tropospheric Error Corrections

    Directory of Open Access Journals (Sweden)

    Alban Rakipi

    2015-03-01

    Full Text Available This paper evaluates the positioning performance of a single-frequency software GPS receiver using ionospheric and tropospheric corrections. While a dual-frequency user has the ability to eliminate the ionospheric error by taking a linear combination of observables, a single-frequency user must remove or calibrate this error by other means. To remove the ionospheric error we take advantage of the Klobuchar correction model, while for tropospheric error mitigation the Hopfield correction model is used. Real GPS measurements were gathered using a single-frequency receiver and post-processed by our proposed adaptive positioning algorithm. The integrated Klobuchar and Hopfield error correction models yield a considerable reduction of the vertical error. The positioning algorithm automatically combines all available GPS pseudorange measurements when more than four satellites are in use. Experimental results show that improved standard positioning is achieved after error mitigation.
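
    Once the Klobuchar and Hopfield terms have been subtracted from each raw pseudorange, the position solve itself is an iterative least-squares problem. A sketch assuming satellite ECEF positions and already-corrected pseudoranges are in hand; the correction models themselves are omitted, and this is a generic solver, not the paper's adaptive algorithm.

      import numpy as np

      def solve_position(sat_pos, pr_corrected, iters=6):
          # Unknowns: receiver ECEF x, y, z and clock bias (all in metres).
          est = np.zeros(4)
          for _ in range(iters):
              rho = np.linalg.norm(sat_pos - est[:3], axis=1)
              pred = rho + est[3]                      # predicted pseudoranges
              # Jacobian: line-of-sight unit vectors plus clock column.
              H = np.column_stack([(est[:3] - sat_pos) / rho[:, None],
                                   np.ones(len(sat_pos))])
              dx, *_ = np.linalg.lstsq(H, pr_corrected - pred, rcond=None)
              est += dx
          return est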

  13. A Comparison of Item Parameter Standard Error Estimation Procedures for Unidimensional and Multidimensional Item Response Theory Modeling

    Science.gov (United States)

    Paek, Insu; Cai, Li

    2014-01-01

    The present study was motivated by the recognition that standard errors (SEs) of item response theory (IRT) model parameters are often of immediate interest to practitioners and that there is currently a lack of comparative research on different SE (or error variance-covariance matrix) estimation procedures. The present study investigated item…

  15. How personal standards perfectionism and evaluative concerns perfectionism affect the error positivity and post-error behavior with varying stimulus visibility.

    Science.gov (United States)

    Drizinsky, Jessica; Zülch, Joachim; Gibbons, Henning; Stahl, Jutta

    2016-10-01

    Error detection is required in order to correct or avoid imperfect behavior. Although error detection is beneficial for some people, for others it might be disturbing. We investigated Gaudreau and Thompson's (Personality and Individual Differences, 48, 532-537, 2010) model, which combines personal standards perfectionism (PSP) and evaluative concerns perfectionism (ECP). In our electrophysiological study, 43 participants performed a combination of a modified Simon task, an error awareness paradigm, and a masking task with a variation of stimulus onset asynchrony (SOA; 33, 67, and 100 ms). Interestingly, relative to low-ECP participants, high-ECP participants showed a better post-error accuracy (despite a worse classification accuracy) in the high-visibility SOA 100 condition than in the two low-visibility conditions (SOA 33 and SOA 67). Regarding the electrophysiological results, first, we found a positive correlation between ECP and the amplitude of the error positivity (Pe) under conditions of low stimulus visibility. Second, under the condition of high stimulus visibility, we observed a higher Pe amplitude for high-ECP-low-PSP participants than for high-ECP-high-PSP participants. These findings are discussed within the framework of the error-processing avoidance hypothesis of perfectionism (Stahl, Acharki, Kresimon, Völler, & Gibbons, International Journal of Psychophysiology, 97, 153-162, 2015).

  16. The neutral emergence of error minimized genetic codes superior to the standard genetic code.

    Science.gov (United States)

    Massey, Steven E

    2016-11-07

    The standard genetic code (SGC) assigns amino acids to codons in such a way that the impact of point mutations is reduced; this is termed 'error minimization' (EM). The occurrence of EM has been attributed to the direct action of selection; however, it is difficult to explain how the searching of alternative codes for an error-minimized code can occur via codon reassignments, given that these are likely to be disruptive to the proteome. An alternative scenario is that EM has arisen via the process of genetic code expansion, facilitated by the duplication of genes encoding charging enzymes and adaptor molecules. This is likely to have led to similar amino acids being assigned to similar codons. Strikingly, we show that if during code expansion the most similar amino acid to the parent amino acid, out of the set of unassigned amino acids, is assigned to codons related to those of the parent amino acid, then genetic codes with EM superior to the SGC easily arise. This scheme mimics code expansion via the gene duplication of charging enzymes and adaptors. The result is obtained for a variety of different schemes of genetic code expansion and provides a mechanistically realistic manner in which EM has arisen in the SGC. These observations might be taken as evidence for self-organization in the earliest stages of life. Copyright © 2016 Elsevier Ltd. All rights reserved.
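
The error-minimization measure at the heart of this record can be made concrete with a short script. The sketch below is our own illustration, not the paper's code: it scores a genetic code by the mean squared change in an amino-acid property across all single point mutations, with hypothetical property values and a random toy code standing in for real data such as polar requirement.

```python
import random
from itertools import product

BASES = "UCAG"
CODONS = ["".join(c) for c in product(BASES, repeat=3)]

def neighbors(codon):
    """All codons reachable from `codon` by a single point mutation."""
    for i, base in enumerate(codon):
        for alt in BASES:
            if alt != base:
                yield codon[:i] + alt + codon[i + 1:]

def em_cost(code, prop):
    """Mean squared change in an amino-acid property over point mutations.

    code: dict codon -> amino acid (or "stop"); prop: dict amino acid ->
    numeric property. Lower cost means better error minimization.
    """
    diffs = [(prop[code[c]] - prop[code[n]]) ** 2
             for c in CODONS for n in neighbors(c)
             if code[c] != "stop" and code[n] != "stop"]
    return sum(diffs) / len(diffs)

# Toy demonstration with hypothetical amino acids and property values.
rng = random.Random(0)
amino_acids = [f"aa{i}" for i in range(20)]
prop = {aa: rng.uniform(4.0, 13.0) for aa in amino_acids}
code = {c: rng.choice(amino_acids) for c in CODONS}
for stop_codon in ("UAA", "UAG", "UGA"):
    code[stop_codon] = "stop"
print(f"EM cost of a random code: {em_cost(code, prop):.3f}")
```

The expansion scheme the authors describe would, at each step, assign the unassigned amino acid whose property value is closest to the parent's to codons neighbouring the parent's codons, and then compare the resulting code's cost with that of the SGC.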

  17. Revisiting Case IV: a reassessment of bias and standard errors of Case IV under range restriction.

    Science.gov (United States)

    Fife, Dustin A; Mendoza, Jorge L; Terry, Robert

    2013-11-01

    In 2004, Hunter and Schmidt proposed a correction (called Case IV) that seeks to estimate disattenuated correlations when selection is made on an unmeasured variable. Although Case IV is an important theoretical development in the range restriction literature, it makes an untestable assumption, namely that the partial correlation between the unobserved selection variable and the performance measure is zero. We show in this paper why this assumption may be difficult to meet and why previous simulations have failed to detect the full extent of bias. We use meta-analytic literature to investigate the plausible range of bias. We also show how Case IV performs in terms of standard errors. Finally, we give practical recommendations about how the contributions of Hunter and Schmidt (2004) can be extended without making such stringent assumptions.
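
The mechanism at issue, selection on an unmeasured variable, is easy to demonstrate by simulation. In this illustrative sketch (coefficients are arbitrary, not from the paper), the partial correlation between the selection variable Z and the criterion Y given X is deliberately nonzero, the very condition the authors argue is difficult to rule out:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Unmeasured selection variable Z, predictor X, criterion Y.
z = rng.standard_normal(n)
x = 0.6 * z + np.sqrt(1 - 0.6**2) * rng.standard_normal(n)
y = 0.5 * z + 0.3 * x + rng.standard_normal(n)  # Z affects Y beyond X

# Range restriction: only the top 30% on the unmeasured Z is observed.
kept = z > np.quantile(z, 0.7)

r_full = np.corrcoef(x, y)[0, 1]
r_restricted = np.corrcoef(x[kept], y[kept])[0, 1]
print(f"unrestricted r = {r_full:.3f}, restricted r = {r_restricted:.3f}")
```

Comparing the restricted correlation with what a Case IV-style correction recovers, across a grid of such data-generating models, is essentially the kind of bias assessment the abstract describes.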

  18. Prediction and standard error estimation for a finite universe total when a stratum is not sampled

    Energy Technology Data Exchange (ETDEWEB)

    Wright, T.

    1994-01-01

    In the context of a universe of trucks operating in the United States in 1990, this paper presents statistical methodology for estimating a finite universe total on a second occasion when a part of the universe is sampled and the remainder of the universe is not sampled. Prediction is used to compensate for the lack of data from the unsampled portion of the universe. The sample is assumed to be a subsample of an earlier sample where stratification is used on both occasions before sample selection. Accounting for births and deaths in the universe between the two points in time, the detailed sampling plan, estimator, standard error, and optimal sample allocation are presented with a focus on the second occasion. If prior auxiliary information is available, the methodology is also applicable to a first occasion.
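
A highly simplified numerical sketch of the idea (our own construction with made-up numbers, not the paper's estimator): second-occasion totals in the sampled strata are estimated by expansion, while the unsampled stratum's total is predicted by applying the growth ratio observed in the sampled strata to its known first-occasion total.

```python
import numpy as np

# Known first-occasion stratum totals.
t1 = {"A": 5000.0, "B": 3000.0, "C": 2000.0}

# Second occasion: strata A and B are (sub)sampled; stratum C is not.
# y1/y2 are first- and second-occasion values for the same sampled units.
samples = {
    "A": {"N": 400, "y1": np.array([12.0, 10.0, 11.5, 12.1]),
          "y2": np.array([14.0, 11.5, 12.8, 13.1])},
    "B": {"N": 250, "y1": np.array([11.0, 11.5, 10.2]),
          "y2": np.array([13.0, 12.2, 11.8])},
}

# Expansion estimates of the second-occasion totals in sampled strata.
t2_hat = {h: s["N"] * s["y2"].mean() for h, s in samples.items()}

# Growth ratio pooled over the sampled strata, used to predict stratum C.
growth = (sum(s["N"] * s["y2"].mean() for s in samples.values())
          / sum(s["N"] * s["y1"].mean() for s in samples.values()))
t2_hat["C"] = growth * t1["C"]

print(f"predicted universe total on occasion 2: {sum(t2_hat.values()):,.0f}")
```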

  19. Manganese(III)-containing Wells-Dawson sandwich-type polyoxometalates: comparison with their manganese(II) counterparts.

    Science.gov (United States)

    Lebrini, Mounim; Mbomekallé, Israël M; Dolbecq, Anne; Marrot, Jérôme; Berthet, Patrick; Ntienoue, Joseline; Sécheresse, Francis; Vigneron, Jacky; Etcheberry, Arnaud

    2011-07-18

    We present the synthesis and structural characterization of five manganese-containing Wells-Dawson sandwich-type (WDST) complexes, assessed by various techniques (FTIR, TGA, UV-vis, elemental analysis, single-crystal X-ray diffraction for three compounds, magnetic susceptibility, and electrochemistry). The dimanganese(II)-containing complex, [Na(2)(H(2)O)(2)Mn(II)(2)(As(2)W(15)O(56))(2)](18-) (1), was obtained by reaction of MnCl(2) with 1 equiv of [As(2)W(15)O(56)](12-) in acetate medium (pH 4.7). Oxidation of 1 by Na(2)S(2)O(8) in aqueous solution led to the dimanganese(III) complex [Na(2)(H(2)O)(2)Mn(III)(2)(As(2)W(15)O(56))(2)](16-) (2), while its trimanganese(II) homologue, [Na(H(2)O)(2)Mn(II)(H(2)O)Mn(II)(2)(As(2)W(15)O(56))(2)](17-) (3), was obtained by addition of ca. 1 equiv of MnCl(2) to a solution of 1 in 1 M NaCl. The trimanganese(III) and tetramanganese(III) counterparts, [Mn(III)(H(2)O)Mn(III)(2)(As(2)W(15)O(56))(2)](15-) (4) and [Mn(III)(2)(H(2)O)(2)Mn(III)(2)(As(2)W(15)O(56))(2)](12-) (6), are, respectively, obtained by oxidation of aqueous solutions of 3 and [Mn(II)(2)(H(2)O)(2)Mn(II)(2)(As(2)W(15)O(56))(2)](16-) (5) by Na(2)S(2)O(8). Single-crystal X-ray analyses were carried out on 2, 3, and 4. BVS calculations and XPS confirmed that the oxidation state of the Mn centers is +II for complexes 1, 3, and 5 and +III for 2, 4, and 6. A complete comparative electrochemical study was carried out on the six compounds cited above, and it was possible to observe the distinct redox steps Mn(IV/III) and Mn(III/II). Magnetization measurements, as a function of temperature, confirm the presence of antiferromagnetic interactions between the Mn ions in these compounds in all cases with the exception of compound 2.

  20. Synthesis and characterization of a 1D chain-like Cu{sub 6} substituted sandwich-type phosphotungstate with pendant dinuclear Cu–azido complexes

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yan-Ying [MOE Key Laboratory of Cluster Science, School of Chemistry, Beijing Institute of Technology, Beijing 100081 (China); Zhao, Jun-Wei, E-mail: zhaojunwei@henu.edu.cn [Henan Key Laboratory of Polyoxometalate Chemistry, College of Chemistry and Chemical Engineering, Henan University, Kaifeng, Henan 475004 (China); Wei, Qi [State Key Laboratory of Structural Chemistry, Fujian Institute of Research on the Structure of Matter, Chinese Academy of Sciences, Fuzhou, Fujian 350002 (China); Yang, Bai-Feng [MOE Key Laboratory of Cluster Science, School of Chemistry, Beijing Institute of Technology, Beijing 100081 (China); Yang, Guo-Yu, E-mail: ygy@bit.edu.cn [MOE Key Laboratory of Cluster Science, School of Chemistry, Beijing Institute of Technology, Beijing 100081 (China); State Key Laboratory of Structural Chemistry, Fujian Institute of Research on the Structure of Matter, Chinese Academy of Sciences, Fuzhou, Fujian 350002 (China)

    2014-02-15

    A novel Cu–azido complex modified hexa-Cu{sup II} substituted sandwich-type phosphotungstate [Cu(en){sub 2}]([Cu{sub 2}(en){sub 2}(μ-1,1-N{sub 3}){sub 2}(H{sub 2}O)]{sub 2}[Cu{sub 6}(en){sub 2}(H{sub 2}O){sub 2}(B-α-PW{sub 9}O{sub 34}){sub 2}])·6H{sub 2}O (1) (en=ethylene-diamine) has been prepared under hydrothermal conditions and structurally characterized by elemental analyses, IR spectra, powder X-ray diffraction (PXRD) and single-crystal X-ray diffraction. 1 displays a beautiful 1-D chain architecture constructed from sandwich-type [Cu{sub 2}(en){sub 2}(μ-1,1-N{sub 3}){sub 2}(H{sub 2}O)]{sub 2}[Cu{sub 6}(en){sub 2}(H{sub 2}O){sub 2}(B-α-PW{sub 9}O{sub 34}){sub 2}]{sup 2−} units and [Cu(en){sub 2}]{sup 2+} linkers. To our knowledge, 1 represents the first hexa-Cu{sup II} sandwiched phosphotungstate with supporting Cu–azido complexes. - Graphical abstract: The first hexa-Cu{sup II} sandwiched phosphotungstate with supporting Cu–azido complexes has been prepared and characterized. - Highlights: • Hexa-copper-substituted phosphotungstate. • Cu–azido complexes modified hexa-Cu{sup II} substituted sandwich-type polyoxometalate. • 1-D chain architecture built by hexa-copper-substituted polyoxotungstate units.

  1. On the use of robust estimators for standard errors in the presence of clustering when clustering membership is misspecified.

    Science.gov (United States)

    Desai, Manisha; Bryson, Susan W; Robinson, Thomas

    2013-03-01

    This paper examines the implications of using robust estimators (REs) of standard errors in the presence of clustering when cluster membership is unclear, as may commonly occur in clustered randomized trials. For example, in such trials, cluster membership may not be recorded for one or more treatment arms and/or cluster membership may be dynamic. When clusters are well defined, REs have properties that are robust to misspecification of the correlation structure. To examine whether results were sensitive to assumptions about cluster membership, we conducted simulation studies for a two-arm clinical trial, where the number of clusters, the intracluster correlation (ICC), and the sample size varied. REs of standard errors that incorrectly assumed clustering of data that were truly independent yielded type I error rates of up to 40%. Partial and complete misspecification of membership (where some and no knowledge of true membership, respectively, were incorporated into assumptions) for data generated from a large number of clusters (50) with a moderate ICC (0.20) yielded type I error rates that ranged from 7.2% to 9.1% and from 10.5% to 45.6%, respectively; incorrectly assuming independence gave a type I error rate of 10.5%. REs of standard errors can be useful when the ICC and knowledge of cluster membership are high. When the ICC is weak, a number of factors must be considered. Our findings suggest guidelines for making sensible analytic choices in the presence of clustering.
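
The type of simulation this record reports can be reproduced in outline with standard tools. The sketch below uses illustrative parameters, with statsmodels' cluster-robust (sandwich) covariance standing in for the paper's REs; it checks the type I error rate under correctly specified membership, and replacing `groups=g` with a scrambled grouping vector mimics membership misspecification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_clusters, m, icc = 50, 10, 0.20
sigma_u = np.sqrt(icc)          # between-cluster SD
sigma_e = np.sqrt(1 - icc)      # within-cluster SD

rejections, n_sims = 0, 500
for _ in range(n_sims):
    g = np.repeat(np.arange(n_clusters), m)      # true cluster membership
    u = rng.normal(0, sigma_u, n_clusters)[g]    # cluster random effects
    x = rng.binomial(1, 0.5, n_clusters)[g]      # cluster-level treatment
    y = u + rng.normal(0, sigma_e, g.size)       # no true treatment effect
    X = sm.add_constant(x.astype(float))
    # Cluster-robust (sandwich) standard errors with correct membership;
    # pass a scrambled `groups` vector to study misspecification instead.
    fit = sm.OLS(y, X).fit(cov_type="cluster", cov_kwds={"groups": g})
    rejections += fit.pvalues[1] < 0.05

print(f"type I error with correct clustering: {rejections / n_sims:.3f}")
```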

  2. Efficient excitation of photoluminescence in a two-dimensional waveguide consisting of a quantum dot-polymer sandwich-type structure.

    Science.gov (United States)

    Suárez, I; Larrue, A; Rodríguez-Cantó, P J; Almuneau, G; Abargues, R; Chirvony, V S; Martínez-Pastor, J P

    2014-08-15

    In this Letter, we study numerically and experimentally a new kind of organic polymer waveguide, in which an ultrathin (10-50 nm) layer of compactly packed CdSe/ZnS core/shell colloidal quantum dots (QDs) is sandwiched between two cladding poly(methyl methacrylate) (PMMA) layers. When a pumping laser beam is coupled into the waveguide edge, light is mostly confined around the QD layer, improving the efficiency of excitation. Moreover, the absence of losses in the claddings allows the propagation of the pumping laser beam along the entire waveguide length; hence, a high-intensity photoluminescence (PL) is produced. Furthermore, a novel fabrication technology is developed to pattern the PMMA into ridge structures by UV lithography in order to provide additional light confinement. The sandwich-type waveguide is analyzed in comparison to a similar one formed by a PMMA film homogeneously doped with the same QDs. A 100-fold enhancement in the waveguided PL is found for the sandwich-type case due to the higher concentration of QDs inside the waveguide.

  3. Inverted-sandwich-type and open-lantern-type dinuclear transition metal complexes: theoretical study of chemical bonds by electronic stress tensor

    CERN Document Server

    Ichikawa, Kazuhide; Kurokawa, Yusaku I; Sakaki, Shigeyoshi; Tachibana, Akitomo

    2011-01-01

    We study the electronic structure of two types of transition metal complexes, the inverted-sandwich-type and open-lantern-type, by means of the electronic stress tensor. In particular, the bond order b_e, measured by the energy density defined from the electronic stress tensor, is studied and compared with the conventional MO-based bond order. We also examine the patterns found in the largest eigenvalue of the stress tensor and the corresponding eigenvector field, the "spindle structure" and "pseudo-spindle structure". As for the inverted-sandwich-type complex, our bond order b_e calculation shows that the relative strength of the metal-benzene bond among the V, Cr and Mn complexes is V > Cr > Mn, which is consistent with the MO-based bond order. As for the open-lantern-type complex, we find that our energy-density-based bond order can properly describe the relative strength of the Cr--Cr and Mo--Mo bonds by surface integration of the energy density over the "Lagrange surface" which can take into account the spatial extent ...

  4. A novel sandwich-type electrochemical aptasensor based on GR-3D Au and aptamer-AuNPs-HRP for sensitive detection of oxytetracycline.

    Science.gov (United States)

    Liu, Su; Wang, Yu; Xu, Wei; Leng, Xueqi; Wang, Hongzhi; Guo, Yuna; Huang, Jiadong

    2017-02-15

    In this paper, a novel sandwich-type electrochemical aptasensor has been fabricated and applied for sensitive and selective detection of the antibiotic oxytetracycline (OTC). This sensor was based on a graphene-three dimensional nanostructure gold nanocomposite (GR-3D Au) and aptamer-AuNPs-horseradish peroxidase (aptamer-AuNPs-HRP) nanoprobes for signal amplification. Firstly, the GR-3D Au film was modified on a glassy carbon electrode by one-step electrochemical coreduction of graphite oxide (GO) and HAuCl4 at cathodic potentials, which enhanced the electron transfer and the loading capacity for biomolecules. Then the aptamer- and HRP-modified Au nanoparticles provided a high-affinity, ultrasensitive electrochemical probe with excellent specificity for OTC. Under the optimized conditions, the peak current was linearly related to the concentration of OTC in the range of 5×10(-10)-2×10(-3) g L(-1), with a detection limit of 4.98×10(-10) g L(-1). Additionally, this aptasensor had the advantages of high sensitivity and superb specificity, and showed good recovery in synthetic samples. Hence, the developed sandwich-type electrochemical aptasensor might provide a useful and practical tool for OTC determination and related food safety analysis and clinical diagnosis.
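
For sensors of this kind, the reported linear range typically refers to a response that is linear in the logarithm of concentration. A generic calibration sketch (hypothetical currents, not data from this paper) shows how such a curve is fitted and inverted to quantify an unknown:

```python
import numpy as np

# Hypothetical calibration: OTC concentration (g/L) vs peak current (µA).
conc = np.array([5e-10, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 2e-3])
current = np.array([1.1, 2.0, 2.9, 3.8, 4.9, 5.8, 7.0])

# Aptasensors of this type are typically linear in log10(concentration).
slope, intercept = np.polyfit(np.log10(conc), current, 1)

def concentration_from_current(i):
    """Invert the calibration line to recover an unknown concentration."""
    return 10 ** ((i - intercept) / slope)

print(f"sensitivity (slope): {slope:.2f} µA per decade")
print(f"unknown at 4.2 µA -> {concentration_from_current(4.2):.2e} g/L")
```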

  5. Asymptotic and Sampling-Based Standard Errors for Two Population Invariance Measures in the Linear Equating Case

    Science.gov (United States)

    Rijmen, Frank; Manalo, Jonathan R.; von Davier, Alina A.

    2009-01-01

    This article describes two methods for obtaining the standard errors of two commonly used population invariance measures of equating functions: the root mean square difference of the subpopulation equating functions from the overall equating function and the root expected mean square difference. The delta method relies on an analytical…

  6. Standardizing electrophoresis conditions: how to eliminate a major source of error in the comet assay.

    Directory of Open Access Journals (Sweden)

    Gunnar Brunborg

    2015-06-01

    Full Text Available In the alkaline comet assay, cells are embedded in agarose, lysed, and then subjected to further processing including electrophoresis at high pH (>13). We observed very large variations of mean comet tail lengths of cell samples from the same population when spread on a glass or plastic substrate and subjected to electrophoresis. These variations might be cancelled out if comets are scored randomly over a large surface, or if all the comets are scored. The mean tail length may then be representative of the population, although its standard error is large. However, the scoring process often involves selection of 50-100 comets in areas selected in an unsystematic way from a large gel on a glass slide. When using our 96-sample minigel format (1), neighbouring sample variations are easily detected. We have used this system to study the cause of the comet assay variations during electrophoresis and we have defined experimental conditions which reduce the variations to a minimum. We studied the importance of various physical parameters during electrophoresis: (i) voltage; (ii) duration of electrophoresis; (iii) electric current; (iv) temperature; and (v) agarose concentration. We observed that the voltage (V/cm) varied substantially during electrophoresis, even within a few millimetres of distance between gel samples. Not unexpectedly, both the potential (V/cm) and the time were linearly related to the mean comet tail, whereas the current was not. By measuring the local voltage with microelectrodes a few millimetres apart, we observed substantial local variations in V/cm, and they increased with time. This explains the large variations in neighbouring sample comet tails of 25% or more. By introducing simple technology (circulation of the solution during electrophoresis, and temperature control), these variations in mean comet tail were largely abolished, as were the V/cm variations. Circulation was shown to be particularly important and optimal conditions

  7. Laterally Sandwich-type Hydrogel Columns with Linear Poly(N-isopropylacrylamide) Layer: Preparation, Swelling/Deswelling Kinetics and Drug Delivery Characteristics

    Institute of Scientific and Technical Information of China (English)

    LI Ying; XIAO Xincai

    2012-01-01

    A novel thermo-responsive hydrogel column, in which both ends of linear poly(N-isopropylacrylamide) (PNIPAM) chains are grafted onto cross-linked PNIPAM chains, was reported. The laterally sandwich-type hydrogel columns were fabricated by radical polymerization in a three-step process using an ice-melting synthesis method. The initiation path, morphology, and thermo-responsive characteristics of the prepared hydrogel columns were studied experimentally. The results show that the hydrogel column initiated from the inner part exhibits faster swelling and deswelling in response to temperature cycling than the other hydrogels, owing to the linear PNIPAM chains forming a supermacroporous structure. The proposed hydrogel structure provides a new mode of phase-transition behavior for thermo-sensitive "smart" or "intelligent" monodisperse micro-actuators, which is highly attractive for targeted drug delivery systems, chemical separations, sensors, and so on.

  8. Standardizing Medication Error Event Reporting in the U.S. Department of Defense

    Science.gov (United States)

    2005-01-01

    ... the annual number of deaths in the United States due to medical errors is between 44,000 and 98,000. This number far exceeds the annual number of deaths resulting from AIDS, breast cancer, or... [Residue of a table of high-alert medications: potassium chloride, furosemide, diazepam, fentanyl, ketorolac, meperidine, metoprolol, ipratropium, hydromorphone, vancomycin.]

  9. Fabrication of sandwich-type MgB{sub 2}/Boron/MgB{sub 2} Josephson junctions with rapid annealing method

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Song; Wang, Xu; Ma, Junli; Cui, Ruirui; Deng, Chaoyong, E-mail: cydeng@gzu.edu.cn

    2015-11-15

    Sandwich-type MgB{sub 2}/Boron/MgB{sub 2} Josephson junctions were fabricated using a magnetron sputtering system. A rapid-anneal process was adopted in place of traditional annealing, to address the problem of interdiffusion and oxidation in multilayer films. A boron film was used as the barrier layer to avoid the introduction of impurities and to improve the reproducibility of the junctions. The bottom MgB{sub 2} thin film deposited on a c-plane sapphire substrate exhibits a critical temperature T{sub C} of 37.5 K and a critical current density J{sub C} at 5 K of 8.7 × 10{sup 6} A cm{sup −2}. From the XRD pattern, the bottom MgB{sub 2} thin film shows c-axis orientation, whereas the top MgB{sub 2} became polycrystalline as the boron barrier layer grew thicker. Therefore, all junction samples show a lower T{sub C} than a single MgB{sub 2} thin film. The junctions exhibit excellent quasiparticle characteristics with ideal dependence on temperature and boron barrier thickness. A subharmonic gap structure appeared in the conductance characteristics, which was attributed to multiple Andreev reflections (MAR). The result demonstrates the great promise of this new fabrication technology for MgB{sub 2} Josephson junctions. - Highlights: • Sandwich-type MgB{sub 2}/Boron/MgB{sub 2} Josephson junctions were fabricated. • The junctions were annealed after deposition with the rapid-anneal process. • The highest critical current is 25.3 mA at 5 K and remains non-zero near 25 K. • Subharmonic gap features can be observed in the dI/dV – V curves.

  10. Sandwich-Type NbS2@S@I-Doped Graphene for High-Sulfur-Loaded, Ultrahigh-Rate, and Long-Life Lithium-Sulfur Batteries.

    Science.gov (United States)

    Xiao, Zhubing; Yang, Zhi; Zhang, Linjie; Pan, Hui; Wang, Ruihu

    2017-08-22

    Lithium-sulfur batteries suffer in practice from short cycling life, low sulfur utilization, and safety concerns, particularly at ultrahigh rates and high sulfur loading. To address these problems, we have designed and synthesized a ternary NbS2@S@IG composite consisting of sandwich-type NbS2@S enveloped by iodine-doped graphene (IG). The sandwich-type structure provides an interconnected conductive network and plane-to-point intimate contact between layered NbS2 (or IG) and sulfur particles, enabling sulfur species to be efficiently entrapped and utilized at ultrahigh rates while the structural integrity is well maintained. NbS2@S@IG exhibits prominent high-power charge/discharge performance. Reversible capacities of 195, 107, and 74 mA h g(-1) (1.05 mg cm(-2)) have been achieved after 2000 cycles at ultrahigh rates of 20, 30, and 40 C, respectively, and the corresponding average decay rates per cycle are 0.022%, 0.031%, and 0.033%, respectively. When the areal sulfur loading is increased to 3.25 mg cm(-2), the electrode still maintains a high discharge capacity of 405 mA h g(-1) after 600 cycles at 1 C. Three half-cells in series assembled with NbS2@S@IG can drive 60 indicators of LED modules after only 18 s of charging. The instantaneous current and power of the device reach 196.9 A g(-1) and 1369.7 W g(-1), respectively.

  11. [Error prevention through management of complications in urology: standard operating procedures from commercial aviation as a model].

    Science.gov (United States)

    Kranz, J; Sommer, K-J; Steffens, J

    2014-05-01

    Patient safety and risk/complication management rank among the current megatrends in modern medicine, which has undoubtedly become more complex. In time-critical, error-prone and difficult situations, which often occur repeatedly in everyday clinical practice, guidelines are inappropriate for acting rapidly and intelligently. With the establishment and consistent use of standard operating procedures, as in commercial aviation, a possible strategic approach is available. These medical aids to decision-making - quick reference cards - are short, optimized instructions that enable a standardized procedure in case of medical claims.

  12. Improvement of least-squares collocation error estimates using local GOCE Tzz signal standard deviations

    DEFF Research Database (Denmark)

    Tscherning, Carl Christian

    2015-01-01

    The method of Least-Squares Collocation (LSC) may be used for the modeling of the anomalous gravity potential (T) and for the computation (prediction) of quantities related to T by a linear functional. Errors may also be estimated. However, when using an isotropic covariance function or equivalent... outside the data area. On the other hand, a comparison of predicted quantities with observed values shows that the error also varies depending on the local data standard deviation. This quantity may be (and has been) estimated using the GOCE second-order vertical derivative, Tzz, in the area covered... on gravity anomalies (at 10 km altitude) predicted from GOCE Tzz. This has given an improved agreement between errors based on the differences between values derived from EGM2008 (to degree 512) and predicted gravity anomalies.
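
A compact sketch of LSC prediction with formal error estimates may help fix ideas. This is our illustration with an assumed isotropic Gaussian covariance; the paper's covariance model and functionals are more elaborate. Note that the formal error depends on the data only through the covariance matrices, i.e. through the data geometry, which is exactly why rescaling the signal variance from local data (e.g. GOCE Tzz standard deviations) improves the estimates:

```python
import numpy as np

def gauss_cov(p, q, variance, corr_len):
    """Isotropic Gaussian covariance between point sets p (n,2) and q (m,2)."""
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-d2 / (2.0 * corr_len**2))

def lsc_predict(x_obs, obs, x_new, variance, corr_len, noise_var):
    """Least-squares collocation: prediction plus formal error estimate."""
    Cxx = gauss_cov(x_obs, x_obs, variance, corr_len) \
        + noise_var * np.eye(len(obs))
    Csx = gauss_cov(x_new, x_obs, variance, corr_len)
    pred = Csx @ np.linalg.solve(Cxx, obs)
    # Formal error: prior signal variance minus variance explained by data.
    explained = np.einsum("ij,ji->i", Csx, np.linalg.solve(Cxx, Csx.T))
    return pred, np.sqrt(np.maximum(variance - explained, 0.0))

# Synthetic demo: 30 observations on a 100 x 100 km patch, two targets.
rng = np.random.default_rng(4)
x_obs = rng.uniform(0.0, 100.0, (30, 2))
obs = rng.normal(0.0, 10.0, 30)
x_new = np.array([[50.0, 50.0], [95.0, 95.0]])
pred, err = lsc_predict(x_obs, obs, x_new,
                        variance=100.0, corr_len=20.0, noise_var=4.0)
print(pred, err)  # the point far from the data gets the larger error
```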

  13. Error Estimates for Finite-Element Navier-Stokes Solvers without Standard Inf-Sup Conditions

    Institute of Scientific and Technical Information of China (English)

    JianGuo LIU; Jie LIU; Robert L.PEGO

    2009-01-01

    The authors establish error estimates for recently developed finite-element methods for incompressible viscous flow in domains with no-slip boundary conditions. The methods arise by discretization of a well-posed extended Navier-Stokes dynamics for which pressure is determined from current velocity and force fields. The methods use C1 elements for velocity and C0 elements for pressure. A stability estimate is proved for a related finite-element projection method close to classical time-splitting methods of Orszag, Israeli, DeVille and Karniadakis.

  14. A Renewable and Ultrasensitive Electrochemiluminescence Immunosensor Based on Magnetic RuL@SiO2-Au~RuL-Ab2 Sandwich-Type Nano-Immunocomplexes

    Directory of Open Access Journals (Sweden)

    Ning Gan

    2011-08-01

    Full Text Available An ultrasensitive and renewable electrochemiluminescence (ECL) immunosensor was developed for the detection of tumor markers by combining a newly designed trace tag and streptavidin-coated magnetic particles (SCMPs). The trace tag (RuL@SiO2-Au~RuL-Ab2) was prepared by loading Ru(bpy)3(2+) (RuL)-conjugated secondary antibodies (RuL-Ab2) on RuL-doped SiO2 (RuL@SiO2) doped with Au (RuL@SiO2-Au). To fabricate the immunosensor, SCMPs were mixed with biotinylated AFP primary antibody (Biotin-Ab1), AFP, and RuL@SiO2-Au~RuL-Ab2 complexes; the resulting SCMP/Biotin-Ab1/AFP/RuL@SiO2-Au~RuL-Ab2 (SBAR) sandwich-type immunocomplexes were then adsorbed on a screen-printed carbon electrode (SPCE) for detection. The immunocomplexes can be easily washed away from the surface of the SPCE when the magnetic field is removed, which makes the immunosensor reusable. The present immunosensor showed a wide linear range of 0.05–100 ng mL–1 for detecting AFP, with a low detection limit of 0.02 ng mL–1 (defined as S/N = 3). The method takes advantage of three properties of the immunosensor. Firstly, the RuL@SiO2-Au~RuL-Ab2 composite exhibited dual amplification, since SiO2 could load a large amount of reporter molecules (RuL) for signal amplification, and the gold particles could provide a large active surface to load more reporter molecules (RuL-Ab2). Accordingly, through the ECL response of RuL and tripropylamine (TPA), a strong ECL signal was obtained and an amplified analysis of protein interaction was achieved. Secondly, the sensor is renewable because the sandwich-type immunocomplexes can be readily adsorbed on or removed from the SPCE's surface by means of a magnetic field. Thirdly, the SCMP-modified probes can perform rapid separation and purification of signal antibodies in a magnetic field. Thus, the present immunosensor can simultaneously realize separation, enrichment and determination. It showed potential application for the detection of AFP in human sera.

  15. Sandwich-type electrochemical immunosensor for the detection of AFP based on Pd octahedral and APTES-M-CeO₂-GS as signal labels.

    Science.gov (United States)

    Wei, Yicheng; Li, Yan; Li, Na; Zhang, Yong; Yan, Tao; Ma, Hongmin; Wei, Qin

    2016-05-15

    In the present work, an ultrasensitive sandwich-type electrochemical immunosensor based on a novel signal amplification strategy was designed for quantitative detection of alpha fetoprotein (AFP). Au nanoparticles with good biocompatibility were electrodeposited on the surface of a glassy carbon electrode (GCE), which can effectively capture and immobilize primary anti-AFP (Ab1) to significantly amplify the electrochemical signal. A graphene oxide and mesoporous CeO2 nanocomposite, functionalized with 3-aminopropyltriethoxysilane and supporting Pd octahedral nanoparticles (Pd/APTES-M-CeO2-GS), was utilized as the label of the detection anti-AFP (Ab2). Pd octahedral nanoparticles presented good catalytic activity towards the reduction of H2O2. Due to the large specific surface area and good adsorption properties of the APTES-CeO2-GS nanocomposite, a large amount of Pd octahedral nanoparticles could be immobilized, which could amplify the electrochemical signal and improve the sensitivity of the immunosensor. Under optimal conditions, the immunosensor exhibited a wide linear range from 0.1 pg/mL to 50 ng/mL with a low detection limit of 0.033 pg/mL (S/N=3) for AFP detection. In addition, high sensitivity, excellent selectivity, good reproducibility and stability were obtained for the immunosensor, which has a promising application for quantitative detection of other tumor markers in clinical diagnosis.

  16. Facile fabrication of an ultrasensitive sandwich-type electrochemical immunosensor for the quantitative detection of alpha fetoprotein using multifunctional mesoporous silica as platform and label for signal amplification.

    Science.gov (United States)

    Wang, Yulan; Li, Xiaojian; Cao, Wei; Li, Yueyun; Li, He; Du, Bin; Wei, Qin

    2014-11-01

    A novel and ultrasensitive sandwich-type electrochemical immunosensor was designed for the quantitative detection of alpha fetoprotein (AFP) using multifunctional mesoporous silica (MCM-41) as platform and label for signal amplification. MCM-41 has high specific surface area, high pore volume, large density of surface silanol groups (SiOH) and good biocompatibility. MCM-41 functionalized with 3-aminopropyltriethoxysilane (APTES), gold nanoparticles (Au NPs) and toluidine blue (TB) could enhance electrochemical signals. Moreover, primary antibodies (Ab1) and secondary antibodies (Ab2) could be effectively immobilized onto the multifunctional MCM-41 by the interaction between Au NPs and amino groups (-NH2) on antibodies. Using multifunctional MCM-41 as a platform and label could greatly simplify the fabrication process and result in a high sensitivity of the designed immunosensor. Under optimal conditions, the designed immunosensor exhibited a wide linear range from 10(-4) ng/mL to 10(3) ng/mL with a low detection limit of 0.05 pg/mL for AFP. The designed immunosensor showed acceptable selectivity, reproducibility and stability, which could provide potential applications in clinical monitoring of AFP.

  17. Heterodyne mixing with a sandwich-type Josephson junction using a Bi-based high-T{sub c} oxide superconductor

    Energy Technology Data Exchange (ETDEWEB)

    Mizuno, K.; Higashino, H.; Setsune, K. [Central Research Laboratories, Matsushita Electric Industrial Co. Ltd, Seika, Soraku, Kyoto 619-02 (Japan)

    1996-04-01

    A sandwich-type high-T{sub c} Josephson junction coupled with a coplanar-type transmission line was fabricated and its heterodyne mixing characteristics were investigated. The junction was fabricated from a stacked film structure of Bi{sub 2}Sr{sub 2}CaCu{sub 2}O{sub 8+{delta}}/Bi{sub 2}Sr{sub 2}NdCu{sub 2}O{sub 8+{delta}}/Bi{sub 2}Sr{sub 2}CaCu{sub 2}O{sub 8+{delta}} (BSCCO/BSNCO/BSCCO) and the transmission line was made of sputter-deposited Pt film. The junction had a rectangular shape of 20{mu}mx7{mu}m. Current-voltage (I-V) curves of the junction showed weak-link-type characteristics. Two microwave sources, a frequency synthesizer and a sweep oscillator, were used as the local oscillator (LO: 20 GHz) and the radio-frequency signal source (RF: 19 GHz) for the heterodyne mixing experiments. The intermediate-frequency signal (IF: 1 GHz) was transmitted through the transmission line and detected by a power meter. A conversion efficiency of -44 dB was estimated for an LO oscillator level of -23 dB m at 5.7 K when the junction was biased at the point below the first Shapiro step. (author)

  18. Metal-doped inorganic nanoparticles for multiplex detection of biomarkers by a sandwich-type ICP-MS immunoassay.

    Science.gov (United States)

    Ko, Jung Aa; Lim, H B

    2016-09-28

    Metal-doped inorganic nanoparticles were synthesized for the multiplex detection of biomarkers by a sandwich-type inductively coupled plasma mass spectrometry (ICP-MS) immunoassay. The synthesized Cs-doped multicore magnetic nanoparticles (MMNPs) were used not only for magnetic extraction of targets but also for ratiometric measurement in ICP-MS. In addition, three different metal/dye-doped silica nanoparticles (SNPs) were synthesized as probes for multiplex detection: Y/RhBITC (rhodamine B isothiocyanate)-doped SNPs for CRP (cardiovascular disease), Cd/RhBITC-doped SNPs for AFP (tumor), and Au/5(6)-XRITC (X-rhodamine-5-(and-6)-isothiocyanate)-doped SNPs for NSE (heart disease). For quantification, the doped metals of the SNPs were measured by ICP-MS and the signal ratio to the Cs of the MMNPs was plotted with respect to the concentration of targets by ratiometry. Limits of detection (LOD) of 0.35 ng mL(-1) to 77 ng mL(-1) and recoveries of 83%-125% were obtained for serum samples spiked with the biomarkers. Since no sample treatment was necessary prior to the extraction, the proposed method provided short analysis time and convenience for the multiplex determination of biomarkers, which will be valuable for clinical application. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Enhancement in the microstructure and neutron shielding efficiency of sandwich type of 6061Al–B{sub 4}C composite material via hot isostatic pressing

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jin-Ju, E-mail: jinjupark@kaeri.re.kr [Nuclear Materials Development Division, Korea Atomic Energy Research Institute (KAERI), 1045 Daedeokdaero, Yuseong, Daejon 305-353 (Korea, Republic of); Hong, Sung-Mo [Nuclear Materials Development Division, Korea Atomic Energy Research Institute (KAERI), 1045 Daedeokdaero, Yuseong, Daejon 305-353 (Korea, Republic of); Division of Advanced Materials Engineering, Kongju National University, Cheonan 330-717 (Korea, Republic of); Lee, Min-Ku; Rhee, Chang-Kyu [Nuclear Materials Development Division, Korea Atomic Energy Research Institute (KAERI), 1045 Daedeokdaero, Yuseong, Daejon 305-353 (Korea, Republic of); Rhee, Won-Hyuk [Daewha Alloytech, Dangjin 343-882 (Korea, Republic of)

    2015-02-15

    Highlights: • 6061Al–B{sub 4}C neutron shielding composites are fabricated by sintering and HIP. • The HIP process improves the wettability of B{sub 4}C particles in the 6061Al matrix. • Neutron attenuation performance can be enhanced by application of the HIP process. - Abstract: Sandwich-type 6061Al–B{sub 4}C composite plates, which are used as thermal neutron absorbers for spent nuclear fuel pool storage racks, were fabricated using two different consolidation routes, sintering and hot isostatic pressing (HIP), and their thermal neutron shielding efficiency was investigated as a function of B{sub 4}C concentration ranging from 0 to 40 wt.%. For this purpose, the respective inner-core compacts of sintered and HIPped neutron-absorbing composite material were first produced and then clad between two outer plates by the HIP process. The application of the HIP process provided not only excellent interfacial adhesion, due to the improved wettability, but also an enhancement of the thermal neutron shielding efficiency, owing to the more uniform dispersion of B{sub 4}C particles.

  20. Modeling and analysis of the three-dimensional current density in sandwich-type single-carrier devices of disordered organic semiconductors

    Science.gov (United States)

    van der Holst, J. J. M.; Uijttewaal, M. A.; Balasubramanian, R.; Coehoorn, R.; Bobbert, P. A.; de Wijs, G. A.; de Groot, R. A.

    2009-02-01

    We present the results of a modeling study of the three-dimensional current density in single-carrier sandwich-type devices of disordered organic semiconductors. The calculations are based on a master-equation approach, assuming a Gaussian distribution of site energies without spatial correlations. The injection-barrier lowering due to the image potential is taken into account, so that the model provides a comprehensive treatment of the space-charge-limited current as well as the injection-limited current (ILC) regimes. We show that the current distribution can be highly filamentary for voltages, layer thicknesses, and disorder strengths that are realistic for organic light-emitting diodes and that, as a result, the current density in both regimes can be significantly larger than as obtained from a one-dimensional continuum drift-diffusion device model. For devices with large injection barriers and strong disorder, in the ILC transport regime, good agreement is obtained with the average current density predicted from a model assuming injection and transport via one-dimensional filaments [A. L. Burin and M. A. Ratner, J. Chem. Phys. 113, 3941 (2000)].

  1. Lexico-Semantic Errors of the Learners of English: A Survey of Standard Seven Keiyo-Speaking Primary School Pupils in Keiyo District, Kenya

    Science.gov (United States)

    Jeptarus, Kipsamo E.; Ngene, Patrick K.

    2016-01-01

    The purpose of this research was to study the Lexico-semantic errors of the Keiyo-speaking standard seven primary school learners of English as a Second Language (ESL) in Keiyo District, Kenya. This study was guided by two related theories: Error Analysis Theory/Approach by Corder (1971) which approaches L2 learning through a detailed analysis of…

  2. Ultrasensitive sandwich-type electrochemical immunosensor based on a novel signal amplification strategy using highly loaded palladium nanoparticles/carbon decorated magnetic microspheres as signal labels.

    Science.gov (United States)

    Ji, Lei; Guo, Zhankui; Yan, Tao; Ma, Hongmin; Du, Bin; Li, Yueyun; Wei, Qin

    2015-06-15

    An ultrasensitive sandwich-type electrochemical immunosensor for quantitative detection of alpha fetoprotein (AFP) was proposed based on a novel signal amplification strategy in this work. Carbon-decorated Fe3O4 magnetic microspheres (Fe3O4@C) with large specific surface area and good adsorption properties were used as labels to anchor palladium nanoparticles (Pd NPs) and the secondary antibodies (Ab2). Pd NPs were loaded on Fe3O4@C by electrostatic attraction to obtain core-shell Fe3O4@C@Pd, which was further used to immobilize Ab2 through Pd-NH2 bonding. The signal amplification strategy exploited the high electrocatalytic activity of noble metal nanoparticles, such as Pd NPs, toward hydrogen peroxide (H2O2) reduction. This signal amplification was novel not only because of the great loading capacity, but also because of the ease of magnetic separation from the sample solution based on the magnetic properties of the labels. Moreover, carboxyl-functionalized multi-walled carbon nanotubes (MWCNTs-COOH) were used for the immobilization of primary antibodies (Ab1). Therefore, high sensitivity could be realized by the designed immunosensor based on this novel signal amplification strategy. Under optimal conditions, the immunosensor exhibited a wide linear range of 0.5 pg/mL to 10 ng/mL toward AFP with a detection limit of 0.16 pg/mL (S/N=3). Moreover, it revealed good selectivity, acceptable reproducibility and stability, indicating a potential application in clinical monitoring of tumor biomarkers. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Detection of oligonucleotide hybridization on a single microparticle by time-resolved fluorometry: quantitation and optimization of a sandwich type assay.

    Science.gov (United States)

    Hakala, H; Mäki, E; Lönnberg, H

    1998-01-01

    Uniformly sized (50 µm) porous glycidyl methacrylate/ethylene dimethacrylate particles (SINTEF) were used as the solid phase in a sandwich-type mixed-phase hybridization assay based on time-resolved fluorescence detection on a single particle. These particles were coated with oligodeoxyribonucleotide probes by conventional phosphoramidite chain assembly. An oligodeoxyribonucleotide bearing a photoluminescent europium(III) chelate, {2,2',2",2"'-{{4'-{4"'-[(4,6-dichloro-1,3,5-triazin-2-yl)amino]phenyl}-2,2':6',2"-terpyridine-6,6"-diyl}bis(methylenenitrilo)}tetrakis(acetato)}europium(III), was hybridized to a complementary sequence of the target oligonucleotide, and the resulting duplex was further hybridized to the particle-bound probes. The latter binding was quantified by time-resolved measurement of the emission signal of a single particle. The kinetics of hybridization and the effect of the concentration of the target oligomer and the fluorescently tagged probe on the efficiency of hybridization were studied. The intensity of the emission signal was linearly related to the concentration of the target oligomer over a range of 5 orders of magnitude. The length of the complementary region between the target oligomer and the particle-bound probe was varied, and the effect of point mutations and deletions on the hybridization efficiency was determined in each case. The maximal selectivity was observed with 10-16-base-pair complementary sequences, the optimal length depending on the oligonucleotide loading on the particle. Discrimination between complete matches and point mismatches was unequivocal, a single point mutation and/or deletion decreasing the efficiency of hybridization by more than 2 orders of magnitude.

  4. Simultaneous detection of several oligonucleotides by time-resolved fluorometry: the use of a mixture of categorized microparticles in a sandwich type mixed-phase hybridization assay.

    Science.gov (United States)

    Hakala, H; Virta, P; Salo, H; Lönnberg, H

    1998-12-15

    Porous, uniformly sized (50 micrometer) glycidyl methacrylate/ethylene dimethacrylate particles (SINTEF) were used as a solid phase to construct a sandwich type hybridization assay that allowed simultaneous detection of up to six oligonucleotides from a single sample. The assay was based on categorization of the particles by two organic prompt fluorophores, viz. fluorescein and dansyl, and quantification of the oligonucleotide hybridization by time-resolved fluorometry. Accordingly, allele-specific oligodeoxyribonucleotide probes were assembled on the particles by conventional phosphoramidite strategy using a non-cleavable linker, and the category defining fluorescein and/or dansyl tagged building blocks were inserted in the 3'-terminal sequence. An oligonucleotide bearing a photoluminescent europium(III) chelate was hybridized to the complementary 3'-terminal sequence of the target oligonucleotide, and the resulting duplex was further hybridized to the particle-bound allele-specific probes via the 5'-terminal sequence of the target. After hybridization each individual particle was subjected to three different fluorescence intensity measurements. The intensity of the prompt fluorescence signals of fluorescein and dansyl defined the particle category, while the europium(III) chelate emission quantified the hybridization. The length of the complementary region between the target oligonucleotide and the particle-bound probe was optimized to achieve maximal selectivity. Furthermore, the kinetics of hybridization and the effect of the concentration of the target oligomer on the efficiency of hybridization were evaluated. By this approach the possible presence of a three base deletion (DeltaF508), point mutation (G542X) and point deletion (1078delT) related to cystic fibrosis could unequivocally be detected from a single sample.

  5. Ultrasensitive sandwich-type photoelectrochemical immunosensor based on CdSe sensitized La-TiO2 matrix and signal amplification of polystyrene@Ab2 composites.

    Science.gov (United States)

    Fan, Dawei; Ren, Xiang; Wang, Haoyuan; Wu, Dan; Zhao, Di; Chen, Yucheng; Wei, Qin; Du, Bin

    2017-01-15

    A novel and sensitive sandwich-type photoelectrochemical (PEC) sensor was fabricated using a signal amplification strategy for the quantitative detection of prostate specific antigen (PSA). CdSe nanoparticle (NP)-sensitized lanthanum-doped titanium dioxide (La-TiO2) composites were used to bind the primary antibodies (Ab1). The doping of lanthanum promoted the visible light absorption of TiO2 and remarkably enhanced the photocurrent. Moreover, 0.3%La-TiO2 displayed the highest photocurrent among the La-TiO2 composites, which was twice as much as that of undoped TiO2. Carboxyl-modified CdSe NPs were assembled onto the La-TiO2 composites via the dentate binding between -COOH and the Ti atoms in the TiO2 NPs, which dramatically promoted the photocurrent intensity by approximately 2.1 times. Carboxyl-functionalized polystyrene (PS) microspheres were coated with the secondary antibodies (Ab2). Owing to the good insulation property and steric hindrance of the prepared polystyrene@Ab2 (PS@Ab2) composites, a significant reduction of the photocurrent signal was achieved after the specific immune recognition. Under the optimum experimental conditions, the fabricated PEC sensor realized ultrasensitive detection of PSA in the range of 0.05-100 pg mL(-1) with a detection limit of 17 fg mL(-1). Moreover, this well-designed PEC immunoassay exhibited ideal reproducibility, stability, and selectivity, and is a promising platform for the detection of other important tumor targets.

  6. Theoretical study of inverted sandwich type complexes of 4d transition metal elements: interesting similarities to and differences from 3d transition metal complexes.

    Science.gov (United States)

    Kurokawa, Yusaku I; Nakao, Yoshihide; Sakaki, Shigeyoshi

    2012-03-08

    Inverted-sandwich-type complexes (ISTCs) of 4d metals, (μ-η(6):η(6)-C(6)H(6))[M(DDP)](2) (DDPH = 2-{(2,6-diisopropylphenyl)amino}-4-{(2,6-diisopropylphenyl)imino}pent-2-ene; M = Y, Zr, Nb, Mo, and Tc), were investigated with density functional theory (DFT) and MRMP2 methods, where a model ligand AIP (AIPH = (Z)-1-amino-3-imino-prop-1-ene) was mainly employed. When going from Y (group III) to Nb (group V) in the periodic table, the spin multiplicity of the ground state increases in the order singlet, triplet, and quintet for M = Y, Zr, and Nb, respectively, like the 3d ISTCs reported recently. This is interpreted in terms of the orbital diagram and the number of d electrons. However, the spin multiplicity decreases to either singlet or triplet in the ISTC of Mo (group VI) and to triplet in the ISTC of Tc (group VII), where the MRMP2 method is employed because the DFT method is not useful here. These spin multiplicities are much lower than the septet of the ISTC of Cr and the nonet of that of Mn. When going from 3d to 4d, the position providing the maximum spin multiplicity shifts from group VII to group V. These differences arise from the size of the 4d orbital. Because of the larger size of the 4d orbital, the energy splitting between the two d(δ) orbitals of M(AIP) and that between the d(δ) and d(π) orbitals are larger in the 4d complex than in the 3d complex. Thus, when occupation of the d(δ) orbital starts, the low-spin state becomes the ground state, which occurs at group VI. Hence, the ISTC of Nb (group V) exhibits the maximum spin multiplicity.

  7. A comparison of registration errors with imageless computer navigation during MIS total knee arthroplasty versus standard incision total knee arthroplasty: a cadaveric study.

    Science.gov (United States)

    Davis, Edward T; Pagkalos, Joseph; Gallie, Price A M; Macgroarty, Kelly; Waddell, James P; Schemitsch, Emil H

    2015-01-01

    Optimal component alignment in total knee arthroplasty has been associated with better functional outcome as well as improved implant longevity. The ability to align components optimally during minimally invasive (MIS) total knee replacement (TKR) has been a cause of concern. Computer navigation is a useful aid in achieving the desired alignment, although it is limited by the error introduced during the manual registration of landmarks. Our study aims to compare the registration process error between a standard and an MIS surgical approach. We hypothesized that performing the registration process via an MIS approach would increase the registration process error. Five fresh-frozen lower limbs were routinely prepared and draped. The registration process was performed through an MIS approach. This was then extended to the standard approach and the registration was performed again. Two surgeons performed the registration process five times with each approach. Performing the registration process through the MIS approach was not associated with higher error compared to the standard approach in the alignment parameters of interest. This rejects our hypothesis. Image-free navigated MIS TKR does not appear to carry a higher risk of component malalignment due to registration process error. Navigation can be used during MIS TKR to improve alignment without reduced accuracy due to the approach.

  8. Developing Calibration Weights and Standard-Error Estimates for a Survey of Drug-Related Emergency-Department Visits

    Directory of Open Access Journals (Sweden)

    Kott Phillip S.

    2014-09-01

    Full Text Available This article describes a two-step calibration-weighting scheme for a stratified simple random sample of hospital emergency departments. The first step adjusts for unit nonresponse. The second increases the statistical efficiency of most estimators of interest. Both use a measure of emergency-department size and other useful auxiliary variables contained in the sampling frame. Although many survey variables are roughly a linear function of the measure of size, response is better modeled as a function of the log of that measure. Consequently, the log of size is a calibration variable in the nonresponse-adjustment step, while the measure of size itself is a calibration variable in the second calibration step. Nonlinear calibration procedures are employed in both steps. We show with 2010 DAWN data that estimating variances as if a one-step calibration-weighting routine had been used when there were in fact two steps can, after appropriately adjusting the finite-population correction in some sense, produce standard-error estimates that tend to be slightly conservative.
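
The calibration step described here can be illustrated with its linear (GREG-type) analogue: design weights are minimally adjusted so that the weighted auxiliary totals reproduce known controls. The DAWN weighting is nonlinear and two-step; this linear sketch with made-up numbers only shows the mechanics.

```python
import numpy as np

def linear_calibrate(d, X, t):
    """GREG-type linear calibration.

    d: (n,) design weights; X: (n, p) auxiliary variables;
    t: (p,) known population totals. Returns weights w with X.T @ w == t.
    """
    T = X.T @ (d[:, None] * X)                 # p x p weighted cross-products
    lam = np.linalg.solve(T, t - X.T @ d)      # calibration multipliers
    return d * (1 + X @ lam)

# Toy example: calibrate on an intercept and one size measure.
rng = np.random.default_rng(2)
n = 100
size = rng.lognormal(2.0, 0.5, n)
X = np.column_stack([np.ones(n), size])
d = np.full(n, 50.0)                           # equal design weights
t = np.array([5000.0, 5000 * size.mean() * 1.1])  # hypothetical controls
w = linear_calibrate(d, X, t)
print(np.allclose(X.T @ w, t))                 # True: controls are met
```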

  9. A Comparison of Kernel Equating and Traditional Equipercentile Equating Methods and the Parametric Bootstrap Methods for Estimating Standard Errors in Equipercentile Equating

    Science.gov (United States)

    Choi, Sae Il

    2009-01-01

    This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…
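
The parametric-bootstrap idea for equating standard errors is straightforward to sketch: fit a parametric model to each form's score distribution, repeatedly resample from the fitted models, re-equate, and take the standard deviation of the equated scores across replications. The toy below uses linear equating and normal score models purely for brevity; the study itself concerns kernel and equipercentile equating, and all data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def linear_equate(x, mu_x, sd_x, mu_y, sd_y):
    """Map form-X scores onto the form-Y scale by linear equating."""
    return mu_y + (sd_y / sd_x) * (x - mu_x)

# Hypothetical equivalent-groups samples on forms X and Y.
x = rng.normal(50.0, 10.0, 2000)
y = rng.normal(52.0, 9.0, 2000)

score_points = np.arange(20, 81, 10, dtype=float)
reps = []
for _ in range(1000):
    # Parametric bootstrap: resample from the fitted normal score models.
    xb = rng.normal(x.mean(), x.std(), x.size)
    yb = rng.normal(y.mean(), y.std(), y.size)
    reps.append(linear_equate(score_points,
                              xb.mean(), xb.std(), yb.mean(), yb.std()))

se = np.std(reps, axis=0, ddof=1)   # bootstrap SE at each score point
print(dict(zip(score_points.astype(int), se.round(3))))
```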

  10. Curriculum-Based Measurement of Oral Reading: Standard Errors Associated with Progress Monitoring Outcomes from DIBELS, AIMSweb, and an Experimental Passage Set

    Science.gov (United States)

    Ardoin, Scott P.; Christ, Theodore J.

    2009-01-01

    There are relatively few studies that evaluate the quality of progress monitoring estimates derived from curriculum-based measurement of reading. Those studies that are published provide initial evidence for relatively large magnitudes of standard error relative to the expected magnitude of weekly growth. A major contributor to the observed…
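
The standard error in question is the OLS standard error of a student's weekly growth slope. A minimal sketch with hypothetical words-correct-per-minute (WCPM) scores shows how it is computed and why it can be large relative to the expected weekly growth of roughly one to two WCPM:

```python
import numpy as np

weeks = np.arange(10, dtype=float)
# Hypothetical WCPM scores over ten weekly progress-monitoring probes.
wcpm = np.array([61, 58, 66, 63, 70, 64, 72, 69, 75, 71], dtype=float)

X = np.column_stack([np.ones_like(weeks), weeks])
beta, *_ = np.linalg.lstsq(X, wcpm, rcond=None)
resid = wcpm - X @ beta
s2 = resid @ resid / (len(weeks) - 2)          # residual variance
cov = s2 * np.linalg.inv(X.T @ X)              # OLS covariance of [b0, b1]
slope, se_slope = beta[1], np.sqrt(cov[1, 1])
print(f"growth = {slope:.2f} ± {se_slope:.2f} WCPM/week")
```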

  11. Decision Making for Borderline Cases in Pass/Fail Clinical Anatomy Courses: The Practical Value of the Standard Error of Measurement and Likelihood Ratio in a Diagnostic Test

    Science.gov (United States)

    Severo, Milton; Silva-Pereira, Fernanda; Ferreira, Maria Amelia

    2013-01-01

    Several studies have shown that the standard error of measurement (SEM) can be used as an additional “safety net” to reduce the frequency of false-positive or false-negative student grading classifications. Practical examinations in clinical anatomy are often used as diagnostic tests to admit students to course final examinations. The aim of this…
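
The safety-net use of the SEM can be sketched directly: compute the SEM from the score SD and a reliability estimate, then flag scores within a chosen band of the cut score as borderline for further review. All numbers below are hypothetical:

```python
import numpy as np

scores = np.array([48, 52, 55, 60, 41, 67, 58, 49, 53, 62], dtype=float)
reliability = 0.85   # e.g. an internal-consistency estimate for the exam
cut = 50.0           # pass/fail cut score

# Classical test theory: SEM = SD * sqrt(1 - reliability).
sem = scores.std(ddof=1) * np.sqrt(1 - reliability)
z = 1.28             # width of the borderline band, an illustrative choice
borderline = np.abs(scores - cut) <= z * sem

print(f"SEM = {sem:.2f}; borderline band = {cut - z*sem:.1f} to {cut + z*sem:.1f}")
print("borderline candidates:", scores[borderline])
```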

  12. How Much Confidence Can We Have in EU-SILC? Complex Sample Designs and the Standard Error of the Europe 2020 Poverty Indicators

    Science.gov (United States)

    Goedeme, Tim

    2013-01-01

    If estimates are based on samples, they should be accompanied by appropriate standard errors and confidence intervals. This is true for scientific research in general, and is even more important if estimates are used to inform and evaluate policy measures such as those aimed at attaining the Europe 2020 poverty reduction target. In this article I…

  13. Establishment and application of medication error classification standards in nursing care based on the International Classification of Patient Safety

    Directory of Open Access Journals (Sweden)

    Xiao-Ping Zhu

    2014-09-01

    Conclusion: Application of this classification system will help nursing administrators to accurately detect system- and process-related defects leading to medication errors, and enable these factors to be targeted to improve the level of patient safety management.

  14. An ultrasensitive sandwich-type electrochemical immunosensor based on the signal amplification strategy of echinoidea-shaped Au@Ag-Cu2O nanoparticles for prostate specific antigen detection.

    Science.gov (United States)

    Yang, Yuying; Yan, Qin; Liu, Qing; Li, Yongpeng; Liu, Hui; Wang, Ping; Chen, Lei; Zhang, Daopeng; Li, Yueyun; Dong, Yunhui

    2018-01-15

    Highly sensitive determination of tumor markers plays an important role in the early diagnosis of cancer. Herein, a novel and ultrasensitive sandwich-type electrochemical immunosensor was fabricated for quantitative detection of prostate specific antigen (PSA). In this process, gold-nanoparticle-functionalized nitrogen-doped graphene quantum dots (Au@N-GQDs) were synthesized through a simple and green hydrothermal procedure to enhance the conductivity, the specific electrode surface area, and the quantity of immobilized primary antibodies (Ab1). Subsequently, echinoidea-shaped nanocomposites (Au@Ag-Cu2O), composed of Au@Ag core-shell nanoparticles and disordered cuprous oxide, were successfully prepared to label the secondary antibodies (Ab2), combining the advantages of good biocompatibility and high specific surface area. Because of the synergistic effect among Au, Ag and Cu2O, the novel nanocomposites exhibited excellent electrocatalytic activity towards the reduction of hydrogen peroxide (H2O2) for the amplified detection of PSA. Therefore, the as-proposed immunosensor for the detection of PSA possessed a wide dynamic range from 0.01 pg/mL to 100 ng/mL with a low detection limit of 0.003 pg/mL (S/N = 3). Furthermore, this sandwich-type immunosensor revealed high sensitivity, high selectivity and long-term stability, and has promising application in bioassay analysis. Copyright © 2017. Published by Elsevier B.V.

  15. The sandwich-type electrochemiluminescence immunosensor for α-fetoprotein based on enrichment by Fe3O4-Au magnetic nano probes and signal amplification by CdS-Au composite nanoparticles labeled anti-AFP.

    Science.gov (United States)

    Zhou, Hankun; Gan, Ning; Li, Tianhua; Cao, Yuting; Zeng, Saolin; Zheng, Lei; Guo, Zhiyong

    2012-10-09

    A novel and sensitive sandwich-type electrochemiluminescence (ECL) immunosensor was fabricated on a glassy carbon electrode (GCE) for ultra-trace levels of α-fetoprotein (AFP), based on a sandwich immunoreaction strategy with enrichment by magnetic capture probes and quantum dots coated with an Au shell (CdS-Au) as the signal tag. The capture probe was prepared by immobilizing the primary antibody of AFP (Ab1) on core/shell Fe(3)O(4)-Au nanoparticles, which were first employed to capture AFP antigens from the serum after incubation, forming the Fe(3)O(4)-Au/Ab1/AFP complex. The product can be separated from the background solution through magnetic separation. Then the CdS-Au-labeled secondary antibody (Ab2), as the signal tag (CdS-Au/Ab2), was successfully conjugated with the Fe(3)O(4)-Au/Ab1/AFP complex to form a sandwich-type immunocomplex (Fe(3)O(4)-Au/Ab1/AFP/Ab2/CdS-Au), which can be further separated by an external magnetic field and produces ECL signals at a fixed voltage. The signal was proportional to the AFP concentration over a defined range, enabling quantification. Thus, an easy-to-use immunosensor with magnetic probes and a quantum-dot signal tag was obtained. The immunosensor exhibited high sensitivity over a broad AFP concentration range of 0.0005-5.0 ng mL(-1), with a detection limit of 0.2 pg mL(-1). The magnetic probes combined pre-concentration and separation of trace levels of tumor markers in the serum. Due to the amplification of the signal tag, the immunosensor is highly sensitive, offering great promise for rapid, simple, selective and cost-effective biomonitoring in clinical applications.

  16. Achieving Extreme Utilization of Excitons by an Efficient Sandwich-Type Emissive Layer Architecture for Reduced Efficiency Roll-Off and Improved Operational Stability in Organic Light-Emitting Diodes.

    Science.gov (United States)

    Wu, Zhongbin; Sun, Ning; Zhu, Liping; Sun, Hengda; Wang, Jiaxiu; Yang, Dezhi; Qiao, Xianfeng; Chen, Jiangshan; Alshehri, Saad M; Ahamad, Tansir; Ma, Dongge

    2016-02-10

    It has been demonstrated that efficiency roll-off is generally caused by the accumulation of excitons or charge carriers, which is intimately related to the emissive-layer (EML) architecture in organic light-emitting diodes (OLEDs). In this article, an efficient sandwich-type EML structure, with a mixed-host EML sandwiched between two single-host EMLs, was designed to eliminate this accumulation, simultaneously achieving high efficiency, low efficiency roll-off and good operational stability in the resulting OLEDs. The devices show excellent electroluminescence performance, realizing a maximum external quantum efficiency (EQE) of 24.6% with a maximum power efficiency of 105.6 lm W-1 and a maximum current efficiency of 93.5 cd A-1. At the high brightness of 5,000 cd m-2, these values remain as high as 23.3%, 71.1 lm W-1 and 88.3 cd A-1, respectively. Moreover, the device lifetime is up to 2000 h at an initial luminance of 1000 cd m-2, significantly longer than that of comparable devices with conventional EML structures. The improvement mechanism is systematically studied through the exciton distribution in the EML and the exciton-quenching processes. The efficient sandwich-type EML broadens the recombination-zone width, greatly reducing exciton quenching and increasing the probability of exciton recombination. This design concept is believed to provide a new avenue for achieving high-performance OLEDs.

  17. Flow injection amperometric sandwich-type aptasensor for the determination of human leukemic lymphoblast cancer cells using MWCNTs-Pdnano/PTCA/aptamer as labeled aptamer for the signal amplification.

    Science.gov (United States)

    Amouzadeh Tabrizi, Mahmoud; Shamsipur, Mojtaba; Saber, Reza; Sarkar, Saeed

    2017-09-08

    In this research, we demonstrate a flow-injection amperometric sandwich-type aptasensor for the determination of human leukemic lymphoblasts (CCRF-CEM) based on poly(3,4-ethylenedioxythiophene) decorated with gold nanoparticles (PEDOT-Aunano) as a nano-platform to immobilize the thiolated sgc8c aptamer, and on multiwall carbon nanotubes decorated with palladium nanoparticles/3,4,9,10-perylenetetracarboxylic acid (MWCNTs-Pdnano/PTCA) to fabricate the catalytic labeled aptamer. In the proposed sensing strategy, CCRF-CEM cancer cells were sandwiched between the sgc8c aptamer immobilized on the PEDOT-Aunano-modified electrode surface and the catalytic labeled sgc8c aptamer (MWCNTs-Pdnano/PTCA/aptamer). The concentration of CCRF-CEM cancer cells was then determined in the presence of 0.1 mM hydrogen peroxide (H2O2) as the electroactive component. The MWCNTs-Pdnano nanocomposites attached to the CCRF-CEM cancer cells amplified the electrocatalytic reduction of H2O2 and improved the sensitivity of the sensor to CCRF-CEM cancer cells. The MWCNTs-Pdnano nanocomposite was characterized by transmission electron microscopy (TEM) and energy-dispersive X-ray spectroscopy (EDX). Electrochemical impedance spectroscopy (EIS) and cyclic voltammetry (CV) were used to confirm the stepwise changes in the electrochemical surface properties of the electrode. The proposed sandwich-type electrochemical aptasensor exhibited excellent analytical performance for the detection of CCRF-CEM cancer cells ranging from 1.0 × 10^1 to 5.0 × 10^5 cells mL-1, with a limit of detection of 8 cells mL-1. The aptasensor showed high selectivity toward CCRF-CEM cancer cells and was also applied to their determination in human serum samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Synthesis, Crystal Structure and Magnetic Properties of a Sandwich-type Polyoxometalate [H2N(CH2CH2)2O]10[Mn2(AsVMo9O33)2]·12H2O

    Institute of Scientific and Technical Information of China (English)

    YANG Yan-Yan; XU Lin; QU Xiao-Shu; LOU Da-Wei

    2012-01-01

    A new heteropolymolybdoarsenate, [H2N(CH2CH2)2O]10[Mn2(AsVMo9O33)2]·12H2O (1), has been synthesized in aqueous solution and characterized by elemental analysis, IR spectroscopy, and single-crystal X-ray diffraction. The crystal is made up of sandwich-type [Mn2(AsVMo9O33)2]10- anions, [H2N(CH2CH2)2O]+ cations and water molecules of crystallization. The magnetic properties of 1 have been studied by measuring its magnetic susceptibility in the temperature range of 2.0-300.0 K, indicating the existence of antiferromagnetic interactions.

  19. The Psychological Effect of Errors in Standardized Language Test Items on EFL Students' Responses to the Following Item

    Science.gov (United States)

    Khaksefidi, Saman

    2017-01-01

    This study investigates the psychological effect of an erroneous question with flawed items on students' answers to the next question in a test of structure. Forty students, selected through stratified random sampling, were given 15 questions from a standardized test, namely a TOEFL structure test, in which questions number 7 and number 11 are wrong and their answers…

  20. Field evaluation of the error arising from inadequate time averaging in the standard use of depth-integrating suspended-sediment samplers

    Science.gov (United States)

    Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.

    2011-01-01

    Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. Averaging over time scales >1 minute is the likely minimum duration required

  1. The sandwich-type electrochemiluminescence immunosensor for α-fetoprotein based on enrichment by Fe3O4-Au magnetic nano probes and signal amplification by CdS-Au composite nanoparticles labeled anti-AFP

    Energy Technology Data Exchange (ETDEWEB)

    Zhou Hankun [State Key Laboratory Base of Novel Functional Materials and Preparation Science, Faculty of Material Science and Chemical Engineering of Ningbo University, Ningbo 315211 (China); Gan Ning, E-mail: ganning@nbu.edu.cn [State Key Laboratory Base of Novel Functional Materials and Preparation Science, Faculty of Material Science and Chemical Engineering of Ningbo University, Ningbo 315211 (China); Li Tianhua; Cao Yuting; Zeng Saolin [State Key Laboratory Base of Novel Functional Materials and Preparation Science, Faculty of Material Science and Chemical Engineering of Ningbo University, Ningbo 315211 (China); Zheng Lei, E-mail: nfyyzl@163.com [Department of Laboratory Medicine, Nanfang Hospital, Southern Medical University, Guangzhou 510515 (China); Guo Zhiyong [State Key Laboratory Base of Novel Functional Materials and Preparation Science, Faculty of Material Science and Chemical Engineering of Ningbo University, Ningbo 315211 (China)

    2012-10-09

    Highlights: • Sandwich immunoreaction, allowing a large number of samples to be tested simultaneously. • Magnetic separation and enrichment by Fe3O4-Au magnetic nano probes. • Amplification of the detection signal by CdS-Au composite nanoparticles labeled with anti-AFP. • Almost no background signal, which greatly improves the sensitivity of detection. - Abstract: A novel and sensitive sandwich-type electrochemiluminescence (ECL) immunosensor was fabricated on a glassy carbon electrode (GCE) for ultra-trace levels of α-fetoprotein (AFP), based on a sandwich immunoreaction strategy with enrichment by magnetic capture probes and quantum dots coated with an Au shell (CdS-Au) as the signal tag. The capture probe was prepared by immobilizing the primary antibody of AFP (Ab1) on core/shell Fe3O4-Au nanoparticles, which were first employed to capture AFP antigens from serum, forming an Fe3O4-Au/Ab1/AFP complex after incubation. The product can be separated from the background solution by magnetic separation. The CdS-Au-labeled secondary antibody (Ab2) serving as the signal tag (CdS-Au/Ab2) was then conjugated with the Fe3O4-Au/Ab1/AFP complex to form a sandwich-type immunocomplex (Fe3O4-Au/Ab1/AFP/Ab2/CdS-Au), which can be further separated by an external magnetic field and produces ECL signals at a fixed voltage. The signal was proportional to the AFP concentration over a certain range, allowing quantification. Thus, an easy-to-use immunosensor with magnetic probes and a quantum-dot signal tag was obtained. The immunosensor combined high sensitivity with a broad concentration range for AFP, from 0.0005 to 5.0 ng mL-1, with a detection limit of 0.2 pg mL-1. The use of magnetic probes combined pre-concentration and separation for trace levels of tumor markers in the serum. Due to the

  2. Novel signal amplification strategy for ultrasensitive sandwich-type electrochemical immunosensor employing Pd-Fe3O4-GS as the matrix and SiO2 as the label.

    Science.gov (United States)

    Wang, Yulan; Ma, Hongmin; Wang, Xiaodong; Pang, Xuehui; Wu, Dan; Du, Bin; Wei, Qin

    2015-12-15

    An ultrasensitive sandwich-type electrochemical immunosensor based on a novel signal-amplification strategy was developed for the quantitative determination of human immunoglobulin G (IgG). A Pd-nanocube-functionalized magnetic graphene sheet (Pd-Fe3O4-GS) was employed as the matrix to immobilize the primary antibodies (Ab1). Owing to the synergetic effect between the Pd nanocubes and the magnetic graphene sheet (Fe3O4-GS), Pd-Fe3O4-GS provides a markedly increased electrochemical signal through electrochemical catalysis of hydrogen peroxide (H2O2). Silicon dioxide (SiO2) was functionalized as the label and conjugated with the secondary antibodies (Ab2). Owing to the large steric hindrance of the resulting conjugate (SiO2@Ab2), a sensitive decrease in the electrochemical signal is achieved after the specific recognition between antibodies and antigens. In this sense, the proposed immunosensor can achieve high sensitivity, especially at low concentrations of IgG. Under optimum conditions, the proposed immunosensor offered ultrasensitive and specific determination of IgG down to 3.2 fg/mL. This immunoassay method opens up a promising new platform for detecting various tumor markers at ultralow levels for the early diagnosis of different cancers.

  3. Sensitivity improvement of a sandwich-type ELISA immunosensor for the detection of different prostate-specific antigen isoforms in human serum using electrochemical impedance spectroscopy and an ordered and hierarchically organized interfacial supramolecular architecture.

    Science.gov (United States)

    Gutiérrez-Zúñiga, Gabriela Guadalupe; Hernández-López, José Luis

    2016-01-01

    A gold millielectrode (GME) functionalized with a mixed (16-MHA + EG3SH) self-assembled monolayer (SAM) was used to fabricate an indirect enzyme-linked immunosorbent assay (ELISA) immunosensor for the sensitive detection of prostate-specific antigen (PSA), a prostate cancer (PCa) biomarker, in human serum samples. To address and minimize the issue of non-specific protein adsorption, an organic matrix (amine-PEG3-biotin/avidin) was assembled on the previously functionalized electrode surface to build up an ordered and hierarchically organized interfacial supramolecular architecture: Au/16-MHA/EG3SH/amine-PEG3-biotin/avidin. The electrode was then exposed to serum samples at different concentrations of a sandwich-type immunocomplex molecule ((Btn)Ab-AgPSA-(HRP)Ab), and its interfacial properties were characterized using electrochemical impedance spectroscopy (EIS). Calibration curves for polarization resistance (RP) and capacitance (1/C) vs. total and free PSA concentrations were obtained and their analytical quality parameters were determined. This approach was compared with results obtained from a commercially available ELISA immunosensor. The results obtained in this work showed that the proposed immunosensor can be successfully applied to analyze serum samples of patients representative of the Mexican population.

  4. Error bars in experimental biology

    OpenAIRE

    2007-01-01

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what er...

  5. Characterization of XR-RV3 GafChromic® films in standard laboratory and in clinical conditions and means to evaluate uncertainties and reduce errors

    Energy Technology Data Exchange (ETDEWEB)

    Farah, J., E-mail: jad.farah@irsn.fr; Clairand, I.; Huet, C. [External Dosimetry Department, Institut de Radioprotection et de Sûreté Nucléaire (IRSN), BP-17, 92260 Fontenay-aux-Roses (France); Trianni, A. [Medical Physics Department, Udine University Hospital S. Maria della Misericordia (AOUD), p.le S. Maria della Misericordia, 15, 33100 Udine (Italy); Ciraj-Bjelac, O. [Vinca Institute of Nuclear Sciences (VINCA), P.O. Box 522, 11001 Belgrade (Serbia); De Angelis, C. [Department of Technology and Health, Istituto Superiore di Sanità (ISS), Viale Regina Elena 299, 00161 Rome (Italy); Delle Canne, S. [Fatebenefratelli San Giovanni Calibita Hospital (FBF), UOC Medical Physics - Isola Tiberina, 00186 Rome (Italy); Hadid, L.; Waryn, M. J. [Radiology Department, Hôpital Jean Verdier (HJV), Avenue du 14 Juillet, 93140 Bondy Cedex (France); Jarvinen, H.; Siiskonen, T. [Radiation and Nuclear Safety Authority (STUK), P.O. Box 14, 00881 Helsinki (Finland); Negri, A. [Veneto Institute of Oncology (IOV), Via Gattamelata 64, 35124 Padova (Italy); Novák, L. [National Radiation Protection Institute (NRPI), Bartoškova 28, 140 00 Prague 4 (Czech Republic); Pinto, M. [Istituto Nazionale di Metrologia delle Radiazioni Ionizzanti (ENEA-INMRI), C.R. Casaccia, Via Anguillarese 301, I-00123 Santa Maria di Galeria (RM) (Italy); Knežević, Ž. [Ruđer Bošković Institute (RBI), Bijenička c. 54, 10000 Zagreb (Croatia)

    2015-07-15

    Purpose: To investigate the optimal use of XR-RV3 GafChromic® films to assess patient skin dose in interventional radiology while addressing the means to reduce uncertainties in dose assessment. Methods: XR-Type R GafChromic films have been shown to represent the most efficient and suitable solution to determine patient skin dose in interventional procedures. As film dosimetry can be associated with high uncertainty, this paper presents the EURADOS WG 12 initiative to carry out a comprehensive study of film characteristics with a multisite approach. The considered sources of uncertainty include scanner-, film-, and fitting-related errors. The work focused on studying film behavior with clinical high-dose-rate pulsed beams (previously unavailable in the literature) together with reference standard laboratory beams. Results: First, the performance analysis of six different scanner models showed that scan uniformity perpendicular to the lamp motion axis and long-term stability are the main sources of scanner-related uncertainty. These could induce errors of up to 7% on the film readings unless regularly checked and corrected. Typically, scan-uniformity correction matrices should be applied and readings normalized to the scanner-specific daily background reading. In addition, the analysis of multiple film batches showed that XR-RV3 films generally have good uniformity within one batch (<1.5%), require 24 h to stabilize after irradiation, and respond roughly independently of dose rate (<5%). However, XR-RV3 films showed large variations (up to 15%) with radiation quality both in standard laboratory and in clinical conditions. As such, and prior to conducting patient skin dose measurements, it is mandatory to choose the appropriate calibration beam quality depending on the characteristics of the x-ray systems that will be used clinically. In addition, yellow-side film irradiations should be preferred since they showed a lower

  6. Longitudinal Errors and Their Extension in Terminology Standardization

    Institute of Scientific and Technical Information of China (English)

    刘富铀; 蔡晓晴; 张榕; 孟洁; 白杨; 周庆伟; 汪小勇; 杜敏; 丁杰; 石勇

    2015-01-01

    Researching and solving problems in standardization is a basic guarantee for sound management and service of standards-literature knowledge. Taking the terminology standardization of ocean energy resources as an example, this paper compares and analyzes the use of related terms, and the differences among them, in the relevant fields at home and abroad. It points out the current conceptual confusion and errors in the English translation and use of China's ocean-energy-resource terminology, which is inconsistent with the customary usage of common terms in the field. The untimely promulgation of the relevant terminology standards is only one reason for this confusion; a more important reason is the shift in the meaning and extension of terms caused by translators' rough, literal "free translation". The standardization and correct use of terms and their definitions have important theoretical significance and practical value for guaranteeing the quality of standards-literature knowledge management and service, and for promoting technical exchange at home and abroad.

  7. Estimation of the limit of detection with a bootstrap-derived standard error by a partly non-parametric approach. Application to HPLC drug assays

    DEFF Research Database (Denmark)

    Linnet, Kristian

    2005-01-01

    Keywords: bootstrap, HPLC, limit of blank, limit of detection, non-parametric statistics, type I and II errors.
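
    The record above survives only as keywords, but the approach named in its title — a limit-of-detection estimate with a bootstrap-derived standard error — can be illustrated generically. The sketch below is a hypothetical, partly non-parametric illustration rather than the paper's actual procedure: the limit of blank is taken non-parametrically from the blank distribution, the detection limit adds a parametric term from a low-level sample, and resampling both groups yields a standard error:

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical HPLC responses: blanks and a low-concentration sample.
        blanks = np.array([0.8, 1.1, 0.9, 1.3, 0.7, 1.0, 1.2, 0.9])
        low = np.array([2.9, 3.4, 3.1, 2.7, 3.6, 3.0, 3.3, 2.8])

        def lod(blank, low_level):
            # Limit of blank from the 95th percentile of blanks (non-parametric),
            # then LOD = LOB + 1.645 * SD of the low-level sample (parametric part).
            lob = np.percentile(blank, 95)
            return lob + 1.645 * low_level.std(ddof=1)

        point = lod(blanks, low)

        # Bootstrap: resample both groups with replacement and recompute the LOD.
        boot = np.array([
            lod(rng.choice(blanks, size=blanks.size, replace=True),
                rng.choice(low, size=low.size, replace=True))
            for _ in range(2000)
        ])
        print(f"LOD = {point:.2f}, bootstrap SE = {boot.std(ddof=1):.2f}")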

  8. Error bars in experimental biology.

    Science.gov (United States)

    Cumming, Geoff; Fidler, Fiona; Vaux, David L

    2007-04-09

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.
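
    As a concrete companion to the rules summarized above, the sketch below computes the three quantities most often plotted as error bars — standard deviation, standard error of the mean, and a 95% confidence interval — for one hypothetical sample, so that a figure legend can state unambiguously which one is shown:

        import numpy as np
        from scipy import stats

        data = np.array([4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 4.4, 5.0])  # hypothetical replicates
        n = data.size

        sd = data.std(ddof=1)        # descriptive: spread of the observations
        sem = sd / np.sqrt(n)        # inferential: precision of the sample mean
        # The 95% CI uses the t distribution because the SD is estimated from the data.
        half_width = stats.t.ppf(0.975, df=n - 1) * sem

        print(f"mean   = {data.mean():.2f}")
        print(f"SD     = {sd:.2f}  (descriptive error bar)")
        print(f"SEM    = {sem:.2f}  (inferential error bar)")
        print(f"95% CI = ±{half_width:.2f}")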

  9. Selective staining of CdS on ZnO biolabel for ultrasensitive sandwich-type amperometric immunoassay of human heart-type fatty-acid-binding protein and immunoglobulin G.

    Science.gov (United States)

    Qin, Xiaoli; Xu, Aigui; Liu, Ling; Sui, Yuyun; Li, Yunlong; Tan, Yueming; Chen, Chao; Xie, Qingji

    2017-05-15

    We report on an ultrasensitive metal-labeled amperometric immunoassay of proteins, based on the selective staining of nanocrystalline cadmium sulfide (CdS) on ZnO nanocrystals and in-situ microliter-droplet anodic stripping voltammetry (ASV) detection on the immunoelectrode. Briefly, antibody 1 (Ab1), bovine serum albumin (BSA), antigen and ZnO-multiwalled carbon nanotube (MWCNT)-labeled antibody 2 (Ab2-ZnO-MWCNTs) were successively anchored on a β-cyclodextrin-graphene sheets (CD-GS) nanocomposite-modified glassy carbon electrode (GCE), forming a sandwich-type immunoelectrode (Ab2-ZnO-MWCNTs/antigen/BSA/Ab1/CD-GS/GCE). CdS was selectively grown on the catalytic ZnO surfaces through the chemical reaction of Cd(NO3)2 and thioacetamide (ZnO-label/CdS-staining), owing to the presence of an activated cadmium hydroxide complex on ZnO surfaces that can decompose thioacetamide. A prior cathodic "potential control" in air, followed by injection of 7 μL of 0.1 M aqueous HNO3 onto the immunoelectrode, allows dissolution of the stained CdS and simultaneous cathodic preconcentration of atomic Cd onto the electrode surface, so that the subsequent in-situ ASV detection can be used for immunoassay with enhanced sensitivity. Under optimized conditions, human immunoglobulin G (IgG) and human heart-type fatty-acid-binding protein (FABP) are analyzed by this method with ultrahigh sensitivity, excellent selectivity and low reagent consumption; the limits of detection (LODs, S/N = 3) are 0.4 fg mL-1 for IgG and 0.3 fg mL-1 for FABP (equivalent to 73 FABP molecules in the 6 μL sample employed).

  10. Facile preparation of ZIF-8@Pd-CCS sandwich-type microspheres via in situ growth of ZIF-8 shells over Pd-loaded colloidal carbon spheres with aggregation-resistant and leach-proof properties for the Pd nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Tong; Lin, Lu [State Key Laboratory of Fine Chemicals, School of Chemical Engineering, Dalian University of Technology, Dalian, 116024 (China); Zhang, Xiongfu, E-mail: xfzhang@dlut.edu.cn [State Key Laboratory of Fine Chemicals, School of Chemical Engineering, Dalian University of Technology, Dalian, 116024 (China); Liu, Haiou; Yan, Xinjuan [State Key Laboratory of Fine Chemicals, School of Chemical Engineering, Dalian University of Technology, Dalian, 116024 (China); Liu, Zhang; Yeung, King Lun [Department of Chemical and Biomolecular Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong SAR (China)

    2015-10-01

    Highlights: • Uniform-sized colloidal carbon spheres were synthesized from low-cost glucose. • Pd nanoparticles were loaded onto the carbon spheres via a self-reduction method. • A layer of ZIF-8 shell was grown in situ over the Pd-loaded carbon spheres. • The ZIF-8@Pd-CCS showed leach-proof and aggregation-resistant properties for Pd. - Abstract: Aiming to enhance the stability of noble metal nanoparticles anchored on the surface of colloidal carbon spheres (CCSs), we designed and prepared a new kind of sandwich-structured ZIF-8@Pd-CCS microsphere. Typically, uniform CCSs were first synthesized by the aromatization and carbonization of glucose under hydrothermal conditions. Subsequently, noble metal nanoparticles, herein Pd nanoparticles, were attached to the surface of the CCSs via a self-reduction route, followed by the in situ assembly of a thin layer of ZIF-8 over the Pd nanoparticles to form the sandwich-type ZIF-8@Pd-CCS microspheres. X-ray diffraction (XRD) patterns and Fourier transform infrared spectroscopy (FTIR) spectra confirmed the presence of crystalline ZIF-8, while TEM analysis revealed that the ZIF-8 shells were closely bound to the Pd-loaded CCSs. The shell thickness could be tuned by varying the number of ZIF-8 assembly cycles. Further, the liquid-phase hydrogenation of 1-hexene as a probe reaction was carried out over the ZIF-8@Pd-CCS microspheres, and the results showed that the prepared microspheres exhibited excellent agglomeration-resistant and leach-proof properties for the Pd nanoparticles, leading to good reusability of the ZIF-8@Pd-CCS microspheres.

  11. Synthesis and Crystal Structure of an Infinite Sandwich-type Cu(I) Coordination Polymer: {[Cu(abpy)2](H3bptc)·(H2O)}n Constructed by a Tetracarboxylic Acid

    Institute of Scientific and Technical Information of China (English)

    MEI Chong-Zhen; WANG Jian-Xu; SHAN Wen-Wen

    2011-01-01

    The title compound {[Cu(abpy)2](H3bptc)·(H2O)}n, an ion-pair complex of [Cu(abpy)2]+ with [H3bptc]- (abpy = 3,3'-dimethyl-2,2'-bipyridine and H4bptc = 1,1'-biphenyl-2,2',3,3'-tetracarboxylic acid), has been synthesized by a hydrothermal reaction, and its structure was determined by X-ray diffraction and characterized by elemental analysis and IR spectroscopy. The crystal is triclinic, space group P1, with a = 8.4955(12), b = 15.164(2), c = 15.303(2) Å, α = 105.704(3), β = 97.374(3), γ = 96.764(3)°, CuC40H35N4O9, Mr = 779.26, V = 1857.9(4) Å3, Dc = 1.393 g/cm3, F(000) = 808, μ = 0.649 mm-1, S = 1.026 and Z = 2. The final R = 0.0493 and wR = 0.1034 for 4026 observed reflections with I > 2σ(I). The copper(I) coordination polymer exhibits a 3-D sandwich-type structure containing 2-D double H3bptc-chain layers intercalated with [Cu(abpy)2]+ layers through extensive hydrogen-bonding interactions.

  12. Automatic Error Analysis Using Intervals

    Science.gov (United States)

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
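
    INTLAB is a MATLAB toolbox and the article's examples are not reproduced here; as a language-neutral illustration of the idea, the hypothetical sketch below propagates a measurement uncertainty through a formula with a minimal interval type and compares the result with standard first-order error propagation:

        class Interval:
            """Minimal closed-interval arithmetic: [lo, hi]."""
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi
            def __add__(self, other):
                return Interval(self.lo + other.lo, self.hi + other.hi)
            def __mul__(self, other):
                products = [self.lo * other.lo, self.lo * other.hi,
                            self.hi * other.lo, self.hi * other.hi]
                return Interval(min(products), max(products))
            def __repr__(self):
                return f"[{self.lo:.4f}, {self.hi:.4f}]"

        # Measured value x = 2.00 ± 0.05, propagated through f(x) = x * (x + 1).
        # Note: because x appears twice, the interval result is slightly wider
        # than the true range (the well-known dependency effect).
        x = Interval(1.95, 2.05)
        one = Interval(1.0, 1.0)
        print("interval result:", x * (x + one))

        # Standard first-order propagation for comparison: f'(x) = 2x + 1.
        x0, dx = 2.0, 0.05
        df = abs(2 * x0 + 1) * dx
        print(f"linearized result: [{x0*(x0+1)-df:.4f}, {x0*(x0+1)+df:.4f}]")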

  13. Inappropriate use of standard error of the mean when reporting variability of study samples: a critical evaluation of four selected journals of obstetrics and gynecology.

    Science.gov (United States)

    Ko, Wen-Ru; Hung, Wei-Te; Chang, Hui-Chin; Lin, Long-Yau

    2014-03-01

    The study was designed to investigate the frequency of misusing standard error of the mean (SEM) in place of standard deviation (SD) to describe study samples in four selected journals published in 2011. Citation counts of articles and the relationship between the misuse rate and impact factor, immediacy index, or cited half-life were also evaluated. All original articles in the four selected journals published in 2011 were searched for descriptive statistics reporting with either mean ± SD or mean ± SEM. The impact factor, immediacy index, and cited half-life of the journals were gathered from Journal Citation Reports Science edition 2011. Scopus was used to search for citations of individual articles. The difference in citation counts between the SD group and SEM group was tested by the Mann-Whitney U test. The relationship between the misuse rate and impact factor, immediacy index, or cited half-life was also evaluated. The frequency of inappropriate reporting of SEM was 13.60% for all four journals. For individual journals, the misuse rate was from 2.9% in Acta Obstetricia et Gynecologica Scandinavica to 22.68% in American Journal of Obstetrics & Gynecology. Articles using SEM were cited more frequently than those using SD (p = 0.025). An approximate positive correlation between the misuse rate and cited half-life was observed. Inappropriate reporting of SEM is common in medical journals. Authors of biomedical papers should be responsible for maintaining an integrated statistical presentation because valuable articles are in danger of being wasted through the misuse of statistics. Copyright © 2014. Published by Elsevier B.V.
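
    The distinction at issue in this survey (and in the similar survey of cardiovascular journals in record 16 below) reduces to one relation: the SEM shrinks as the sample grows, while the SD estimates a fixed population quantity, so only the SD describes the variability of a study sample. In LaTeX notation:

        \mathrm{SEM} = \frac{\mathrm{SD}}{\sqrt{n}}, \qquad
        \mathrm{SD} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}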

  14. Systematic error revisited

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod, M.C.

    1996-08-05

    The American National Standards Institute (ANSI) defines systematic error as "an error which remains constant over replicative measurements." It would seem from the ANSI definition that a systematic error is not really an error at all; it is merely a failure to calibrate the measurement system properly, because if the error is constant, why not simply correct for it? Yet systematic errors undoubtedly exist, and they differ in some fundamental way from the kind of errors we call random. Early papers by Eisenhart and by Youden discussed systematic versus random error with regard to measurements in the physical sciences, but not in a fundamental way, and the distinction remains clouded by controversy. The lack of general agreement on definitions has led to a plethora of different and often confusing methods for quantifying the total uncertainty of a measurement that incorporates both its systematic and random errors. Some assert that systematic error should be treated by non-statistical methods. We disagree with this approach, and we provide basic definitions based on entropy concepts, and a statistical methodology for combining errors and making statements of total measurement uncertainty. We illustrate our methods with radiometric assay data.

  15. Comparison of intraclass correlation coefficient estimates and standard errors between using cross-sectional and repeated measurement data: the Safety Check cluster randomized trial.

    Science.gov (United States)

    Ip, Edward H; Wasserman, Richard; Barkin, Shari

    2011-03-01

    Designing cluster randomized trials in clinical studies often requires accurate estimates of intraclass correlation (ICC), which quantifies the strength of correlation between units, such as participants, within a cluster, such as a practice. Published ICC estimates, even when available, often suffer from the problem of wide confidence intervals. Using data from a national, randomized, controlled study concerning violence prevention for children--the Safety Check--we compare the ICC values derived from two approaches: using only baseline data, and using both baseline and follow-up data. Using a variance-component decomposition approach, the latter method allows flexibility in handling complex data sets. For example, it allows for shifts in the outcome variable over time and for an unbalanced cluster design. Furthermore, we evaluate the large-sample formula for ICC estimates and standard errors using the bootstrap method. Our findings suggest that ICC estimates range from 0.012 to 0.11 for providers within practice and from 0.018 to 0.11 for families within provider. The estimates derived from the baseline-only and repeated-measurements approaches agree quite well except in cases in which variation over repeated measurements is large. The reductions in the widths of ICC confidence limits from using repeated measurements over baseline only are, respectively, 62% and 42% at the practice and provider levels. The contribution of this paper is therefore twofold: a methodology for improving the accuracy of ICC estimates, and the reporting of such quantities for pediatric and other researchers who are interested in designing cluster randomized trials similar to the current study.
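
    A one-way variance-components decomposition is the textbook route to an intraclass correlation of this kind. The sketch below is a generic, hypothetical illustration (not the Safety Check analysis): it simulates a balanced clustered design, estimates the between- and within-cluster variance components from the one-way ANOVA mean squares, and forms ICC = σ²_between / (σ²_between + σ²_within):

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical balanced design: 20 practices (clusters), 15 families each.
        k, m = 20, 15
        cluster_effects = rng.normal(0, 0.4, size=k)                     # sigma_b = 0.4
        y = cluster_effects[:, None] + rng.normal(0, 1.2, size=(k, m))   # sigma_w = 1.2

        # One-way ANOVA variance components (balanced case).
        grand_mean = y.mean()
        cluster_means = y.mean(axis=1)
        msb = m * ((cluster_means - grand_mean) ** 2).sum() / (k - 1)
        msw = ((y - cluster_means[:, None]) ** 2).sum() / (k * (m - 1))

        sigma2_w = msw
        sigma2_b = max((msb - msw) / m, 0.0)   # truncate at zero if negative
        icc = sigma2_b / (sigma2_b + sigma2_w)
        print(f"ICC estimate: {icc:.3f} (simulated truth: 0.100)")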

  16. High incorrect use of the standard error of the mean (SEM) in original articles in three cardiovascular journals evaluated for 2012.

    Science.gov (United States)

    Wullschleger, Marcel; Aghlmandi, Soheila; Egger, Marcel; Zwahlen, Marcel

    2014-01-01

    In biomedical journals authors sometimes use the standard error of the mean (SEM) for data description, which has been called inappropriate or incorrect. To assess the frequency of incorrect use of SEM in articles in three selected cardiovascular journals. All original journal articles published in 2012 in Cardiovascular Research, Circulation: Heart Failure and Circulation Research were assessed by two assessors for inappropriate use of SEM when providing descriptive information of empirical data. We also assessed whether the authors state in the methods section that the SEM will be used for data description. Of 441 articles included in this survey, 64% (282 articles) contained at least one instance of incorrect use of the SEM, with two journals having a prevalence above 70% and "Circulation: Heart Failure" having the lowest value (27%). In 81% of articles with incorrect use of SEM, the authors had explicitly stated that they use the SEM for data description and in 89% SEM bars were also used instead of 95% confidence intervals. Basic science studies had a 7.4-fold higher level of inappropriate SEM use (74%) than clinical studies (10%). The selection of the three cardiovascular journals was based on a subjective initial impression of observing inappropriate SEM use. The observed results are not representative for all cardiovascular journals. In three selected cardiovascular journals we found a high level of inappropriate SEM use and explicit methods statements to use it for data description, especially in basic science studies. To improve on this situation, these and other journals should provide clear instructions to authors on how to report descriptive information of empirical data.

  17. Short-Term Estimates of Growth Using Curriculum-Based Measurement of Oral Reading Fluency: Estimating Standard Error of the Slope to Construct Confidence Intervals

    Science.gov (United States)

    Christ, Theodore J.

    2006-01-01

    Curriculum-based measurement of oral reading fluency (CBM-R) is an established procedure used to index the level and trend of student growth. A substantial literature base exists regarding best practices in the administration and interpretation of CBM-R; however, research has yet to adequately address the potential influence of measurement error.…
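
    The confidence intervals discussed above are built from the standard error of an ordinary least-squares slope. A generic sketch with hypothetical weekly CBM-R scores (words correct per minute, wcpm; not data from the study):

        import numpy as np
        from scipy import stats

        weeks = np.arange(10, dtype=float)     # measurement occasions
        wcpm = np.array([62, 65, 63, 68, 71, 70, 74, 77, 75, 80], dtype=float)

        n = weeks.size
        slope, intercept = np.polyfit(weeks, wcpm, 1)
        resid = wcpm - (slope * weeks + intercept)

        # Standard error of the slope from the residual variance.
        s2 = (resid ** 2).sum() / (n - 2)
        se_slope = np.sqrt(s2 / ((weeks - weeks.mean()) ** 2).sum())

        # 95% confidence interval for weekly growth.
        t_crit = stats.t.ppf(0.975, df=n - 2)
        print(f"growth = {slope:.2f} wcpm/week, "
              f"95% CI = [{slope - t_crit*se_slope:.2f}, {slope + t_crit*se_slope:.2f}]")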

  18. Refractive Errors

    Science.gov (United States)

    How does the eye focus light? In order to see clearly, light rays from an object must focus onto the retina. The refractive errors are myopia, hyperopia and astigmatism. What is hyperopia (farsightedness)? Hyperopia occurs when light rays focus behind the retina (because the eye ...

  19. Medication Errors

    Science.gov (United States)

  20. Development and Monte Carlo Study of a Procedure for Correcting the Standardized Mean Difference for Measurement Error in the Independent Variable

    Science.gov (United States)

    Nugent, William Robert; Moore, Matthew; Story, Erin

    2015-01-01

    The standardized mean difference (SMD) is perhaps the most important meta-analytic effect size. It is typically used to represent the difference between treatment and control population means in treatment efficacy research. It is also used to represent differences between populations with different characteristics, such as persons who are…

  1. Errors on errors - Estimating cosmological parameter covariance

    CERN Document Server

    Joachimi, Benjamin

    2014-01-01

    Current and forthcoming cosmological data analyses share the challenge of huge datasets alongside increasingly tight requirements on the precision and accuracy of extracted cosmological parameters. The community is becoming increasingly aware that these requirements not only apply to the central values of parameters but, equally important, also to the error bars. Due to non-linear effects in the astrophysics, the instrument, and the analysis pipeline, data covariance matrices are usually not well known a priori and need to be estimated from the data itself, or from suites of large simulations. In either case, the finite number of realisations available to determine data covariances introduces significant biases and additional variance in the errors on cosmological parameters in a standard likelihood analysis. Here, we review recent work on quantifying these biases and additional variances and discuss approaches to remedy these effects.
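
    A well-known result in this literature is that the inverse of a covariance matrix estimated from a finite number of realisations is biased; for Gaussian data the standard correction is the Hartlap factor (n - p - 2)/(n - 1) for n realisations and p data points. The hypothetical sketch below exposes the bias numerically and removes it:

        import numpy as np

        rng = np.random.default_rng(1)

        p = 10          # size of the data vector
        n = 50          # number of simulated realisations
        true_cov = np.eye(p)

        # Average the inverse of many sample covariances to expose the bias.
        inv_traces = []
        for _ in range(500):
            sims = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
            sample_cov = np.cov(sims, rowvar=False)
            inv_traces.append(np.trace(np.linalg.inv(sample_cov)) / p)

        print(f"mean diagonal of estimated inverse: {np.mean(inv_traces):.3f} (truth: 1)")
        hartlap = (n - p - 2) / (n - 1)
        print(f"after Hartlap debiasing: {np.mean(inv_traces) * hartlap:.3f}")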

  2. How does pharmacogenetic testing alter the treatment course and patient response for chronic-pain patients in comparison with the current "trial-and-error" standard of care?

    Science.gov (United States)

    DeFeo, Kelly; Sykora, Kristen; Eley, Susan; Vincent, Debra

    2014-10-01

    To evaluate whether pharmacogenetic testing (PT) holds value for pain-management practitioners by identifying the potential applications of pharmacogenetic research as well as applications in practice. A review of the literature was conducted utilizing the databases EBSCOhost, Biomedical Reference Collection, CINAHL, Health Business: Full Text, Health Source: Nursing/Academic Edition, and MEDLINE with the keywords personalized medicine, cytochrome P450, and pharmacogenetics. Chronic-pain patients present some of the most challenging patients to manage medically. Often burdened with persistent, life-altering pain, they might also have oncologic and psychological comorbidities that can further complicate their management. One-step in-office PT is now widely available to optimize the management of complicated patients and effectively remove the "trial-and-error" process of medication therapy. Practitioners must be familiar with the genetic determinants that affect a patient's response to medications in order to decrease the preventable morbidity and mortality associated with drug-drug and patient-drug interactions, and to provide cost-effective care through the avoidance of inappropriate medications. Improved pain management will improve patient outcomes and satisfaction. ©2014 American Association of Nurse Practitioners.

  3. WAIS-IV administration errors: effects of altered response requirements on Symbol Search and violation of standard surface-variety patterns on Block Design.

    Science.gov (United States)

    Ryan, Joseph J; Swopes-Willhite, Nicole; Franklin, Cassi; Kreiner, David S

    2015-01-01

    This study utilized a sample of 50 college students to assess the possibility that responding to the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) Symbol Search subtest items with an "x" instead of a "single slash mark" would affect performance. A second sample of 50 college students was used to assess the impact on WAIS-IV Block Design performance of presenting all the items with only red surfaces facing up. The modified Symbol Search and Block Design administrations yielded mean scaled scores and raw scores that did not differ significantly from mean scores obtained with standard administrations. Findings should not be generalized beyond healthy, well-educated young adults.

  4. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review article explains medication errors: their definition, the scope of the medication-error problem, the types of medication errors, their common causes, their monitoring and consequences, and their prevention and management, neatly and legibly with clear tables that are easy to understand.

  6. Error analysis and passage dependency of test items from a standardized test of multiple-sentence reading comprehension for aphasic and non-brain-damaged adults.

    Science.gov (United States)

    Nicholas, L E; Brookshire, R H

    1987-11-01

    Aphasic and non-brain-damaged adults were tested with two forms of the Nelson Reading Skills Test (NRST; Hanna, Schell, & Schreiner, 1977). The NRST is a standardized measure of silent reading for students in Grades 3 through 9 and assesses comprehension of information at three levels of inference (literal, translational, and higher level). Subjects' responses to NRST test items were evaluated to determine whether their performance differed on literal, translational, and higher-level items. Subjects' performance was also evaluated to determine the passage dependency of NRST test items--the extent to which readers had to rely on information in the NRST reading passages to answer test items. Higher-level NRST test items (requiring complex inferences) were significantly more difficult for both non-brain-damaged and aphasic adults than literal items (not requiring inferences) or translational items (requiring simple inferences). The passage dependency of NRST test items for aphasic readers was higher than that reported by Nicholas, MacLennan, and Brookshire (1986) for multiple-sentence reading tests designed for aphasic adults. This suggests that the NRST is a more valid measure of the multiple-sentence reading comprehension of aphasic adults than the other tests evaluated by Nicholas et al. (1986).

  7. Regression calibration with heteroscedastic error variance.

    Science.gov (United States)

    Spiegelman, Donna; Logan, Roger; Grove, Douglas

    2011-01-01

    The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses' Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice.
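
    In its simplest homoscedastic form, regression calibration divides the naive coefficient by the reliability ratio of the error-prone covariate. The sketch below illustrates that basic version with invented data (it is not the heteroscedastic estimator proposed in the record):

        import numpy as np

        rng = np.random.default_rng(7)

        n = 5000
        beta_true = 0.8
        x = rng.normal(0, 1, n)                  # true exposure
        w = x + rng.normal(0, 0.6, n)            # mismeasured exposure, sigma_u = 0.6
        y = beta_true * x + rng.normal(0, 1, n)  # outcome

        # Naive regression of y on w is attenuated toward zero.
        beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)

        # Reliability ratio, here using a known error variance (in practice it
        # would come from a validation study).
        sigma_u2 = 0.6 ** 2
        lam = (np.var(w, ddof=1) - sigma_u2) / np.var(w, ddof=1)

        beta_corrected = beta_naive / lam
        print(f"naive: {beta_naive:.3f}, corrected: {beta_corrected:.3f}, true: {beta_true}")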

  8. Error in Monte Carlo, quasi-error in Quasi-Monte Carlo

    OpenAIRE

    Kleiss, R. H. P.; Lazopoulos, A.

    2006-01-01

    While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction o...
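
    One practical remedy along the lines advocated above is randomized quasi-Monte Carlo: several independently scrambled low-discrepancy sequences give i.i.d. estimates of the integral, so their spread is a valid error estimate, unlike the spread within a single deterministic QMC point set. A hypothetical sketch using SciPy's scrambled Sobol generator (scipy.stats.qmc):

        import numpy as np
        from scipy.stats import qmc

        d, n_points, n_replicates = 4, 1024, 16
        rng = np.random.default_rng(3)

        def f(u):
            # Integrand on [0,1]^4 whose exact integral is 1.
            return np.prod(2.0 * u, axis=1)

        # Randomized QMC: independent scrambles give i.i.d. integral estimates.
        estimates = []
        for seed in range(n_replicates):
            sobol = qmc.Sobol(d=d, scramble=True, seed=seed)
            estimates.append(f(sobol.random(n_points)).mean())
        estimates = np.array(estimates)

        print(f"RQMC estimate: {estimates.mean():.6f}")
        print(f"RQMC standard error: {estimates.std(ddof=1) / np.sqrt(n_replicates):.2e}")

        # Plain Monte Carlo with the same total budget, for comparison.
        vals = f(rng.random((n_points * n_replicates, d)))
        print(f"MC estimate: {vals.mean():.6f} ± {vals.std(ddof=1)/np.sqrt(vals.size):.2e}")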

  9. Onorbit IMU alignment error budget

    Science.gov (United States)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
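
    Combining independent one-sigma error sources "in a rational way" usually means a root-sum-square under an independence assumption. The sketch below is purely illustrative — the listed sources and their magnitudes are invented, not the STS-1 budget:

        import math

        # Hypothetical independent per-axis alignment error sources, in arc seconds.
        error_sources = {
            "star tracker measurement": 40.0,
            "IMU gyro drift over alignment": 35.0,
            "navigation base flexure": 30.0,
            "mounting calibration residual": 25.0,
        }

        # Independent zero-mean errors combine as the root of the sum of squares.
        total = math.sqrt(sum(v ** 2 for v in error_sources.values()))
        print(f"combined 1-sigma alignment error: {total:.0f} arc seconds per axis")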

  10. The Standard Error of Equipercentile Equating

    Science.gov (United States)

    1981-11-01

  11. [Survey in hospitals. Nursing errors, error culture and error management].

    Science.gov (United States)

    Habermann, Monika; Cramer, Henning

    2010-09-01

    Knowledge on errors is important to design safe nursing practice and its framework. This article presents results of a survey on this topic, including data of a representative sample of 724 nurses from 30 German hospitals. Participants predominantly remembered medication errors. Structural and organizational factors were rated as most important causes of errors. Reporting rates were considered low; this was explained by organizational barriers. Nurses in large part expressed having suffered from mental problems after error events. Nurses' perception focussing on medication errors seems to be influenced by current discussions which are mainly medication-related. This priority should be revised. Hospitals' risk management should concentrate on organizational deficits and positive error cultures. Decision makers are requested to tackle structural problems such as staff shortage.

  12. Error image aware content restoration

    Science.gov (United States)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard in quality demanded by consumers has posed a new challenge in today's context, where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error-restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality-check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin of a well-known NLE (non-linear editing system), which is a familiar tool for quality-control agents.
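
    A minimal version of "restoring content from adjacent images while preserving the undamaged parts" is temporal median filling: pixels flagged as erroneous are replaced by the per-pixel median of neighbouring frames, and all other pixels pass through untouched. The sketch below is a hypothetical illustration, not the algorithm deployed at KBS:

        import numpy as np

        def restore_frame(frame, neighbors, error_mask):
            """Replace only the masked (damaged) pixels with the per-pixel
            median of temporally adjacent frames."""
            candidate = np.median(np.stack(neighbors), axis=0)
            restored = frame.copy()
            restored[error_mask] = candidate[error_mask]
            return restored

        # Toy example: an 8x8 grayscale frame with a damaged 2x2 pixel block.
        rng = np.random.default_rng(5)
        prev_frame = rng.integers(0, 256, (8, 8)).astype(np.uint8)
        next_frame = prev_frame.copy()
        frame = prev_frame.copy()
        frame[2:4, 2:4] = 255                      # disordered pixel block
        mask = np.zeros_like(frame, dtype=bool)
        mask[2:4, 2:4] = True                      # assume the QC system flagged it

        fixed = restore_frame(frame, [prev_frame, next_frame], mask)
        print("block restored:", np.array_equal(fixed, prev_frame))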

  13. Explorations in Statistics: Standard Deviations and Standard Errors

    Science.gov (United States)

    Curran-Everett, Douglas

    2008-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This series in "Advances in Physiology Education" provides an opportunity to do just that: we will investigate basic concepts in statistics using the free software package R. Because this series uses R solely as a vehicle…

  14. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
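
    The attenuation problem that motivates the method is easy to exhibit. The sketch below is a hypothetical illustration (not the paper's joint-estimating-equations estimator): it fits a median regression with statsmodels on a correctly measured and on an error-prone covariate and shows the slope shrinking toward zero:

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.regression.quantile_regression import QuantReg

        rng = np.random.default_rng(11)

        n = 4000
        x = rng.normal(0, 1, n)                   # true covariate
        w = x + rng.normal(0, 0.8, n)             # covariate measured with error
        y = 1.0 + 2.0 * x + rng.normal(0, 1, n)   # true slope = 2

        def median_slope(regressor):
            X = sm.add_constant(regressor)
            return QuantReg(y, X).fit(q=0.5).params[1]

        print(f"median-regression slope on true x: {median_slope(x):.3f}")
        print(f"median-regression slope on noisy w: {median_slope(w):.3f}  (attenuated)")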

  15. Quantifying and handling errors in instrumental measurements using the measurement error theory

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.; Brockhoff, P.B.

    2003-01-01

    Measurement error modelling is used for investigating the influence of measurement/sampling error on univariate predictions of water content and water-holding capacity (reference measurements) from nuclear magnetic resonance (NMR) relaxations (instrumental) measured on two gadoid fish species. This is a new way of using measurement error theory. Reliability ratios illustrate that the models for the two fish species are influenced differently by the error. However, the error seems to influence the predictions of the two reference measures in the same way. The effect of using replicated x-measurements is illustrated by simulated data and by NMR relaxations measured several times on each fish. The standard error of the physical determination of the reference values is lower than the standard error of the NMR measurements. In this case, lower prediction error is obtained by replicating the instrumental...
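
    In the classical additive measurement error model that underlies reliability ratios of this kind, an observed value X is the true value T plus an independent error E; the reliability ratio is the fraction of observed variance that is signal, and averaging k replicates shrinks the error variance. In standard notation (the record's specific NMR model may differ):

        X = T + E, \qquad
        \lambda = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2}, \qquad
        \operatorname{Var}(\bar{E}_k) = \frac{\sigma_E^2}{k}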

  16. Self-assembly of the [B-SbW9O33]9- subunit with transition metal ions (Mn2+, Cu2+, Co2+) in aqueous solution: syntheses, structures and magnetic properties of sandwich-type polyoxometalates with a subvalent Sb(III) heteroatom.

    Science.gov (United States)

    Wang, Jing-Ping; Ma, Peng-Tao; Li, Jie; Niu, Hong-Yu; Niu, Jing-Yang

    2008-05-05

    Rational self-assembly of Sb2O3 and Na2WO4, or (NH4)18[NaSb9W21O86], with transition-metal ions (Mn2+, Cu2+, Co2+) in aqueous solution under controlled conditions yields a series of sandwich-type complexes, namely Na2H2[Mn2.5W1.5(H2O)8(B-β-SbW9O33)2]·32H2O (1), Na4H7[Na3(H2O)6Mn3(μ-OAc)2(B-α-SbW9O33)2]·20H2O (OAc = acetate anion) (2), NaH8[Na2Cu4Cl(B-α-SbW9O33)2]·21H2O (3), Na8K[Na2K(H2O)2{Co(H2O)}3(B-α-SbW9O33)2]·10H2O (4), and Na5H[{Co(H2O)2}3W(H2O)2(B-β-SbW9O33)2]·11.5H2O (5). The structures were determined by X-ray diffraction and further characterized by IR spectroscopy and elemental analysis. Structure analysis reveals that the polyoxoanions in 1 and 5 comprise two [B-β-SbW9O33]9- building units, whereas 2, 3, and 4 consist of two isomeric [B-α-SbW9O33]9- building blocks, all linked by different transition-metal ions (Mn2+, Cu2+, or Co2+) with different nuclearity. Notably, compound 2 represents the first one-dimensional sinusoidal chain based on sandwich-like tungstoantimonate building blocks bridged through carboxylate ligands. Additionally, 3 is constructed from sandwich anions [Na2Cu4Cl(B-α-SbW9O33)2]9- linked to each other to form an infinitely extended 2D network, whereas 5 shows an interesting 3D framework built up from offset sandwich-type polyoxoanions [{Co(H2O)2}3W(H2O)2(B-β-SbW9O33)2]6- linked by Co2+ and Na+ ions. EPR studies performed at 110 K and at room temperature reveal that the metal cations (Mn2+, Cu2+, Co2+) reside in a square-pyramidal geometry in 2, 3, and 4. The magnetic behavior of 1-4 suggests the presence of weak antiferromagnetic coupling interactions between the magnetic metal centers, with the exchange integral J = -0.552 cm-1 in 2.

  17. Relative Effects of Trajectory Prediction Errors on the AAC Autoresolver

    Science.gov (United States)

    Lauderdale, Todd

    2011-01-01

    Trajectory prediction is fundamental to automated separation assurance. Every missed alert, false alert and loss of separation can be traced to one or more errors in trajectory prediction. These errors are a product of many different sources including wind prediction errors, inferred pilot intent errors, surveillance errors, navigation errors and aircraft weight estimation errors. This study analyzes the impact of six different types of errors on the performance of an automated separation assurance system composed of a geometric conflict detection algorithm and the Advanced Airspace Concept Autoresolver resolution algorithm. Results show that, of the error sources considered in this study, top-of-descent errors were the leading contributor to missed alerts and failed resolution maneuvers. Descent-speed errors were another significant contributor, as were cruise-speed errors in certain situations. The results further suggest that increasing horizontal detection and resolution standards are not effective strategies for mitigating these types of error sources.

  18. Imidazole Coordinated Sandwich-type Antimony Polyoxotungstates Na9[{Na(H2O)2}3{M(C3H4N2)}3(SbW9O33)2]·xH2O (M = NiⅡ, CoⅡ, ZnⅡ, MnⅡ)

    Institute of Scientific and Technical Information of China (English)

    CUI Rui-Rui; WANG Hu-Lin; YANG Xin-Yu; REN Shu-Hu; HU Huai-Ming; FU Feng; WANG Ji-Wu; XUE Gang-Lin

    2007-01-01

    The imidazole covalently coordinated sandwich-type heteropolytungstates Na9[{Na(H2O)2}3{M(C3H4N2)}3(SbW9O33)2]·xH2O (M = NiⅡ, x = 32; M = CoⅡ, x = 32; M = ZnⅡ, x = 33; M = MnⅡ, x = 34) were obtained by the reaction of Na2WO4·2H2O, SbCl3·6H2O, and NiCl2·6H2O [MnSO4·H2O, Co(NO3)2·6H2O, ZnSO4·7H2O] with imidazole at pH ≈ 7.5. The structure of Na9[{Na(H2O)2}3{Ni(C3H4N2)}3(SbW9O33)2]·32H2O was determined by single-crystal X-ray diffraction. The polyanion [{Na(H2O)2}3{Ni(C3H4N2)}3(SbW9O33)2]9- has approximate C3v symmetry; the imidazole-coordinated six-nuclear cluster [{Na(H2O)2}3{Ni(C3H4N2)}3]9+ is encapsulated between two (α-SbW9O33)9- units, the three imidazole rings in the polyanion are perpendicular to the horizontal plane formed by the six metals (Na-Ni-Na-Ni-Na-Ni) in the central belt, and π-stacking interactions exist between imidazoles of neighboring polyanions with a dihedral angle of 60°. The compounds were also characterized by IR and UV-Vis spectra, TG and DSC, and a thermal decomposition mechanism for the four compounds was suggested from the TG curves.

  19. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005, General requirements for the competence of testing and calibration laboratories, are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, Federal Rule of Evidence 702 mandates that judges consider factors such as peer review to ensure the reliability of expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher possibility of unfair decision-making, they should receive more attention than false-negative errors.
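    The statistical taxonomy above (systematic, random, gross) can be made concrete with a small simulation. The sketch below is illustrative only, not from the paper: repeated measurements are modeled as a true value plus a constant bias (systematic error) and zero-mean noise (random error), one blunder (gross error) is injected, and repetition by different experts exposes the outlier; all numbers and thresholds are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    true_value = 10.0   # the unknown quantity being measured
    bias = 0.3          # systematic error: shifts every measurement the same way
    sigma = 0.1         # random error: scatter around the biased mean

    # Six repeated tests by different experts, plus one gross error (a blunder).
    measurements = true_value + bias + rng.normal(0.0, sigma, size=6)
    measurements[3] += 2.0  # gross error, e.g. a transcription mistake

    median = np.median(measurements)
    mad = np.median(np.abs(measurements - median))   # robust spread estimate
    is_gross = np.abs(measurements - median) > 5 * mad
    print("flagged as gross:", measurements[is_gross])

    clean = measurements[~is_gross]
    print(f"mean of clean runs: {clean.mean():.3f} (true value {true_value})")
    # Repetition removes the gross error and averages down the random error,
    # but the systematic bias (+0.3) survives: it needs calibration, not repeats.
    ```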

  20. Sandwich type plasmonic platform for MEF using silver fractals

    DEFF Research Database (Denmark)

    Raut, Sangram L.; Rich, Ryan; Shtoyko, Tanya

    2015-01-01

    In this report, we describe a plasmonic platform with silver fractals for metal-enhanced fluorescence (MEF) measurements. When a dye-containing surface was brought into contact with silver fractals, a significantly enhanced fluorescence signal from the dye was observed. Fluorescence enhancement was studied with the N-methyl-azadioxatriangulenium chloride salt (Me-ADOTA·Cl) in PVA films made from 0.2% PVA (w/v) solution spin-coated on a clean glass coverslip. The plasmonic platforms (PP) were assembled by pressing together silver fractals on one glass slide and a separate glass coverslip spin-coated with a uniform Me-ADOTA·Cl in PVA film. In addition, we also tested ADOTA-labeled human serum albumin (HSA) deposited on a glass slide for potential PP bioassay applications. Using the new PP, we could achieve more than a 20-fold fluorescence enhancement (bright spots) accompanied by a decrease in the fluorescence lifetime.

  1. Sandwich type plasmonic platform for MEF using silver fractals.

    Science.gov (United States)

    Raut, Sangram L; Rich, Ryan; Shtoyko, Tanya; Bora, Ilkay; Laursen, Bo W; Sørensen, Thomas Just; Borejdo, Julian; Gryczynski, Zygmunt; Gryczynski, Ignacy

    2015-11-14

    In this report, we describe a plasmonic platform with silver fractals for metal-enhanced fluorescence (MEF) measurements. When a dye-containing surface was brought into contact with silver fractals, a significantly enhanced fluorescence signal from the dye was observed. Fluorescence enhancement was studied with the N-methyl-azadioxatriangulenium chloride salt (Me-ADOTA·Cl) in PVA films made from 0.2% PVA (w/v) solution spin-coated on a clean glass coverslip. The plasmonic platforms (PP) were assembled by pressing together silver fractals on one glass slide and a separate glass coverslip spin-coated with a uniform Me-ADOTA·Cl in PVA film. In addition, we also tested ADOTA-labeled human serum albumin (HSA) deposited on a glass slide for potential PP bioassay applications. Using the new PP, we could achieve more than a 20-fold fluorescence enhancement (bright spots) accompanied by a decrease in the fluorescence lifetime. The experimental results were used to calculate the extinction (excitation) enhancement factor (GA) and the fluorescence radiative rate enhancement factor (GF). No change in the emission spectrum was observed for the dye with or without contact with the fractals. Our studies indicate that this type of PP can be a convenient approach for constructing assays utilizing metal-enhanced fluorescence (MEF) without the need for depositing the material directly on metal structures.

  2. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone have turned out to be obsolete. As a matter of course, the error calculus to be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence interval...
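    As a rough numerical illustration of the building-kit idea sketched above (a sketch of the general recipe, not of the book's exact formalism): the random contribution is taken from a Student-t confidence interval of the sample mean, and a worst-case bound for the unknown systematic error is added on top rather than combined in quadrature. The readings and the bias bound f_s below are invented.

    ```python
    from math import sqrt
    from statistics import mean, stdev
    from scipy.stats import t

    readings = [9.98, 10.02, 10.05, 9.97, 10.01, 10.03]  # repeated measurements
    f_s = 0.04        # assumed worst-case bound on the unknown systematic error
    confidence = 0.95

    n = len(readings)
    x_bar = mean(readings)
    s = stdev(readings)
    t_factor = t.ppf((1 + confidence) / 2, df=n - 1)

    u_random = t_factor * s / sqrt(n)   # half-width of the confidence interval
    u_total = u_random + f_s            # random and systematic parts added, not RSS'd
    print(f"x = {x_bar:.3f} +/- {u_total:.3f} "
          f"(random {u_random:.3f} + systematic {f_s})")
    ```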

  3. Classification of Spreadsheet Errors

    OpenAIRE

    Rajalingham, Kamalasen; Chadwick, David R.; Knight, Brian

    2008-01-01

    This paper describes a framework for a systematic classification of spreadsheet errors. This classification, or taxonomy, of errors is aimed at facilitating analysis and comprehension of the different types of spreadsheet errors. The taxonomy is an outcome of an investigation of the widespread problem of spreadsheet errors and an analysis of specific types of these errors. This paper contains a description of the various elements and categories of the classification and is supported by appropriate...

  4. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in the design, implementation, and optimization of hardware/software systems for error correction. The book's chapters are written by internationally recognized experts in this field. Topics include the evolution of error correction techniques, industrial user needs, and architectures and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc.). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs for advanced error correcting techniques.

  5. Optimal correction of independent and correlated errors

    OpenAIRE

    Jacobsen, Sol H.; Mintert, Florian

    2013-01-01

    We identify optimal quantum error correction codes for situations that do not admit perfect correction. We provide analytic n-qubit results for standard cases with correlated errors on multiple qubits and demonstrate significant improvements to the fidelity bounds and optimal entanglement decay profiles.

  6. Error tracking in a clinical biochemistry laboratory

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Ødum, Lars

    2009-01-01

    BACKGROUND: We report our results for the systematic recording of all errors in a standard clinical laboratory over a 1-year period. METHODS: Recording was performed using a commercial database program. All individuals in the laboratory were allowed to report errors. The testing processes were classified...

  7. Quantum Error Correction Beyond Completely Positive Maps

    OpenAIRE

    Shabani, A.; Lidar, D. A.

    2006-01-01

    By introducing an operator sum representation for arbitrary linear maps, we develop a generalized theory of quantum error correction (QEC) that applies to any linear map, in particular maps that are not completely positive (CP). This theory of "linear quantum error correction" is applicable in cases where the standard and restrictive assumption of a factorized initial system-bath state does not apply.

  8. Orbit IMU alignment: Error analysis

    Science.gov (United States)

    Corson, R. W.

    1980-01-01

    A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.

  9. Error in Monte Carlo, quasi-error in Quasi-Monte Carlo

    CERN Document Server

    Kleiss, R H

    2006-01-01

    While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
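    For reference, the standard Monte Carlo error estimator referred to above is just the sample standard deviation of the integrand values divided by √N. The toy sketch below (not from the paper) shows it on a one-dimensional integral; it is exactly this estimator that loses its justification when the points are not independent, as with quasi-random point sets.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def f(x):
        return np.sin(np.pi * x)   # integral over [0, 1] is 2/pi

    N = 10_000
    x = rng.random(N)              # independent, uniformly distributed points
    fx = f(x)

    estimate = fx.mean()
    # Standard MC error estimate: valid only because the x are i.i.d.
    std_error = fx.std(ddof=1) / np.sqrt(N)
    print(f"MC: {estimate:.5f} +/- {std_error:.5f} (exact {2/np.pi:.5f})")
    # Feeding a low-discrepancy (quasi-random) sequence through the same formula
    # would report an error bar that ignores the QMC convergence improvement.
    ```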

  10. Reducing medication errors.

    Science.gov (United States)

    Nute, Christine

    2014-11-25

    Most nurses are involved in medicines management, which is integral to promoting patient safety. Medicines management is prone to errors, which, depending on the error, can cause patient injury, increased hospital stay and significant legal expenses. This article describes a new approach to help minimise drug errors within healthcare settings where medications are prescribed, dispensed or administered. The acronym DRAINS, which considers all aspects of medicines management before administration, was devised to reduce medication errors on a cardiothoracic intensive care unit.

  11. Measurement Error with Different Computer Vision Techniques

    Science.gov (United States)

    Icasio-Hernández, O.; Curiel-Razo, Y. I.; Almaraz-Cabral, C. C.; Rojas-Ramirez, S. R.; González-Barbosa, J. J.

    2017-09-01

    The goal of this work is to offer a comparison of measurement error across different computer vision techniques for 3D reconstruction, and to allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques: passive stereoscopy, active stereoscopy, shape from contour and fringe profilometry, and determines the measurement error and its uncertainty using different gauges. We measured several dimensional and geometric known standards. We compared the average errors, standard deviations, and uncertainties across the techniques, obtaining a guide to identify the tolerances that each technique can achieve and to choose the best one.
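    The per-technique comparison described above reduces to simple statistics over repeated measurements of a known gauge; a minimal sketch follows (the technique names are from the paper, but the numbers are invented).

    ```python
    import numpy as np

    gauge_length = 50.000  # certified value of the standard gauge, in mm

    # Hypothetical repeated measurements of the same gauge by two techniques.
    techniques = {
        "passive stereoscopy": np.array([50.12, 49.95, 50.08, 50.21, 49.90, 50.15]),
        "fringe profilometry": np.array([50.02, 50.01, 49.99, 50.03, 50.00, 50.02]),
    }

    for name, data in techniques.items():
        errors = data - gauge_length
        mean_err = errors.mean()               # average error (bias)
        std_err = errors.std(ddof=1)           # spread of the errors
        u_mean = std_err / np.sqrt(data.size)  # standard uncertainty of the mean
        print(f"{name:20s} bias={mean_err:+.3f} mm "
              f"std={std_err:.3f} mm u={u_mean:.3f} mm")
    ```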

  12. MEASUREMENT ERROR WITH DIFFERENT COMPUTER VISION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    O. Icasio-Hernández

    2017-09-01

    The goal of this work is to offer a comparison of measurement error across different computer vision techniques for 3D reconstruction, and to allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques: passive stereoscopy, active stereoscopy, shape from contour and fringe profilometry, and determines the measurement error and its uncertainty using different gauges. We measured several dimensional and geometric known standards. We compared the average errors, standard deviations, and uncertainties across the techniques, obtaining a guide to identify the tolerances that each technique can achieve and to choose the best one.

  13. Demand Forecasting Errors

    OpenAIRE

    Mackie, Peter; Nellthorp, John; Laird, James

    2005-01-01

    Demand forecasts form a key input to the economic appraisal. As such, any errors present within the demand forecasts will undermine the reliability of the economic appraisal. The minimization of demand forecasting errors is therefore important in the delivery of a robust appraisal. This issue is addressed in this note by introducing the key issues and error types present within demand forecasts...

  14. When errors are rewarding

    NARCIS (Netherlands)

    Bruijn, E.R.A. de; Lange, F.P. de; Cramon, D.Y. von; Ullsperger, M.

    2009-01-01

    For social beings like humans, detecting one's own and others' errors is essential for efficient goal-directed behavior. Although one's own errors are always negative events, errors from other persons may be negative or positive depending on the social context. We used neuroimaging to disentangle brain...

  15. Assigning error to an M2 measurement

    Science.gov (United States)

    Ross, T. Sean

    2006-02-01

    The ISO 11146:1999 standard has been published for 6 years and sets forth the proper way to measure the M2 parameter. In spite of the strong experimental guidance given by this standard and the many commercial devices based upon ISO 11146, it is still the custom to quote M2 measurements without any reference to significant figures or error estimation. To the author's knowledge, no commercial M2 measurement device includes error estimation. There exists, perhaps, a false belief that M2 numbers are high precision and of insignificant error. This paradigm causes program managers and purchasers to over-specify a beam quality parameter, and researchers not to question the accuracy and precision of their M2 measurements. This paper examines the experimental sources of error in an M2 measurement, including discretization error, CCD noise, discrete filter sets, noise-equivalent-aperture estimation, laser fluctuation and curve-fitting error. These sources of error are explained in their experimental context, and convenient formulas are given to properly estimate the error in a given M2 measurement. This work grew out of the author's inability to find error estimation or disclosure of methods in commercial beam quality measurement devices, and out of the lessons learned and concepts developed while building an ISO 11146-compliant, computer-automated M2 measurement device.
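    To make the curve-fitting part concrete: in one standard parameterization of the ISO 11146 hyperbolic fit, squared beam diameters are fitted against propagation distance, d²(z) = a + bz + cz², from which M² = (π/8λ)·√(4ac − b²). The sketch below generates a synthetic caustic (wavelength, widths and noise level are invented), fits it, and bootstraps the fit to attach an error bar of the kind the paper argues every M2 number should carry.

    ```python
    import numpy as np

    lam = 1.064e-6                 # wavelength in metres (assumed)
    rng = np.random.default_rng(1)

    # Synthetic second-moment beam diameters d(z) around a focus.
    z = np.linspace(-0.2, 0.2, 11)                        # metres
    d_true = 2 * np.sqrt((100e-6)**2 + (5e-3 * z)**2)     # ideal caustic
    d_meas = d_true * (1 + rng.normal(0, 0.01, z.size))   # 1% width noise

    def m2_from_fit(z, d):
        # Hyperbolic fit d^2(z) = a + b z + c z^2.
        c, b, a = np.polyfit(z, d**2, 2)
        return (np.pi / (8 * lam)) * np.sqrt(4 * a * c - b**2)

    m2 = m2_from_fit(z, d_meas)

    # Bootstrap the measurement set to estimate the fitting error on M^2.
    boots = []
    for _ in range(1000):
        idx = rng.integers(0, z.size, z.size)
        boots.append(m2_from_fit(z[idx], d_meas[idx]))
    print(f"M^2 = {m2:.3f} +/- {np.std(boots, ddof=1):.3f}")
    ```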

  16. Application of an Error Statistics Estimation Method to the PSAS Forecast Error Covariance Model

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In atmospheric data assimilation systems, the forecast error covariance model is an important component. However, the parameters required by a forecast error covariance model are difficult to obtain due to the absence of the truth. This study applies an error statistics estimation method to the Physical-space Statistical Analysis System (PSAS) height-wind forecast error covariance model. This method consists of two components: the first computes the error statistics by using the National Meteorological Center (NMC) method, a lagged-forecast difference approach, within the framework of the PSAS height-wind forecast error covariance model; the second obtains a calibration formula to rescale the error standard deviations provided by the NMC method. The calibration is against the error statistics estimated by using a maximum-likelihood estimation (MLE) with rawinsonde height observed-minus-forecast residuals. A complete set of formulas for estimating the error statistics and for the calibration is applied to a one-month-long dataset generated by a general circulation model of the Global Modeling and Assimilation Office (GMAO), NASA. There is a clear constant relationship between the error statistics estimated by the NMC method and by MLE. The final product provides a full set of 6-hour error statistics required by the PSAS height-wind forecast error covariance model over the globe. The features of these error statistics are examined and discussed.
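    The NMC method used in the first component is easy to state: difference pairs of forecasts that verify at the same time but start from different analyses (e.g., 48 h minus 24 h), and take the spread of those differences as a proxy for forecast error statistics, then rescale. The sketch below is a toy illustration with synthetic fields; the calibration constant alpha stands in for the paper's MLE-based calibration formula.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic archive: 30 verification times of a height field on a 10 x 10 grid,
    # with a 24 h and a 48 h forecast valid at each time.
    n_times, ny, nx = 30, 10, 10
    truth = rng.normal(5500.0, 50.0, (n_times, ny, nx))
    fcst_24h = truth + rng.normal(0.0, 8.0, truth.shape)
    fcst_48h = truth + rng.normal(0.0, 12.0, truth.shape)

    # NMC method: statistics of lagged-forecast differences valid at the same time.
    diffs = fcst_48h - fcst_24h
    nmc_std = diffs.std(axis=0, ddof=1)      # per-gridpoint error std proxy

    # Calibration: rescale toward an independent (e.g., MLE-based) estimate;
    # a single constant reflects the "clear constant relationship" noted above.
    alpha = 0.7                              # hypothetical calibration factor
    calibrated_std = alpha * nmc_std
    print("domain-mean NMC std:", round(float(nmc_std.mean()), 2),
          "-> calibrated:", round(float(calibrated_std.mean()), 2))
    ```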

  17. Error handling strategies in multiphase inverse modeling

    Energy Technology Data Exchange (ETDEWEB)

    Finsterle, S.; Zhang, Y.

    2010-12-01

    Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
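    A generic illustration of one such strategy (not iTOUGH2's specific machinery): when residuals contain outliers or heavy tails, replacing the pure least-squares objective with a robust loss such as Huber's down-weights the offending residuals. scipy's least_squares supports this directly; the model and numbers below are invented.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(3)

    # Toy "model": exponential pressure decline with two parameters.
    t = np.linspace(0.0, 10.0, 50)
    p_true = (2.0, 0.4)                               # amplitude, decay rate
    data = p_true[0] * np.exp(-p_true[1] * t) + rng.normal(0, 0.02, t.size)
    data[::10] += 0.5                                 # spikes: systematic, non-normal errors

    def residuals(p):
        return p[0] * np.exp(-p[1] * t) - data

    plain = least_squares(residuals, x0=[1.0, 1.0])   # standard least squares
    robust = least_squares(residuals, x0=[1.0, 1.0], loss="huber", f_scale=0.05)

    print("true      :", p_true)
    print("least sq  :", plain.x.round(3))            # dragged by the outliers
    print("huber loss:", robust.x.round(3))           # much closer to the truth
    ```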

  18. Wrong drug administration errors amongst anaesthetists in a South ...

    African Journals Online (AJOL)

    Adele

    Key words: Anesthesiology; Safety; Standards; Drug labeling; Medication errors. Addendum: Audit on incidence of wrong drug administration by anaesthetists in UCT... "...safety really does matter." Anaesthesia 2002...

  19. FREIGHT CONTAINER LIFTING STANDARD

    Energy Technology Data Exchange (ETDEWEB)

    POWERS DJ; SCOTT MA; MACKEY TC

    2010-01-13

    This standard details the correct methods of lifting and handling Series 1 freight containers in accordance with ISO 3874 and ISO 1496. The changes within RPP-40736 will allow better reading comprehension, as well as correct editorial errors.

  20. Measurement error analysis of taxi meter

    Science.gov (United States)

    He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu

    2011-12-01

    The error test of a taximeter covers two aspects: (1) a test of the time error of the taximeter, and (2) a distance test of the usage error of the machine. The paper first gives the working principle of the meter and the principle of the error verification device. Based on JJG 517-2009, "Taximeter Verification Regulation", the paper focuses on analyzing the machine error and test error of the taximeter, and the detection methods for time error and distance error are discussed as well. Standard uncertainty components are evaluated both by statistical analysis of repeated measurements under the same conditions (Type A) and by other means under different conditions (Type B). Comparison and analysis of the results show that the meter complies with JJG 517-2009, thereby improving accuracy and efficiency considerably. In practice, the meter not only makes up for any lack of accuracy but also ensures fair transactions between drivers and passengers, enriching the value of the taxi as a mode of transportation.
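    The Type A evaluation mentioned above is simply the statistics of repeated observations: the standard uncertainty of the mean is s/√n. A minimal sketch with invented readings:

    ```python
    import numpy as np

    # Hypothetical repeated distance-error readings of one taximeter, in percent.
    readings = np.array([0.42, 0.38, 0.45, 0.40, 0.43, 0.39, 0.41, 0.44])

    n = readings.size
    mean = readings.mean()
    s = readings.std(ddof=1)     # experimental standard deviation
    u_a = s / np.sqrt(n)         # Type A standard uncertainty of the mean

    print(f"mean error {mean:.3f} %  u_A = {u_a:.4f} %  (n = {n})")
    # A Type B component u_b (e.g. from the verification device's certificate)
    # would be combined in quadrature: u_c = np.sqrt(u_a**2 + u_b**2).
    ```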

  1. Probabilistic quantum error correction

    CERN Document Server

    Fern, J; Fern, Jesse; Terilla, John

    2002-01-01

    There are well known necessary and sufficient conditions for a quantum code to correct a set of errors. We study weaker conditions under which a quantum code may correct errors with probabilities that may be less than one. We work with stabilizer codes and as an application study how the nine qubit code, the seven qubit code, and the five qubit code perform when there are errors on more than one qubit. As a second application, we discuss the concept of syndrome quality and use it to suggest a way that quantum error correction can be practically improved.

  2. Coordinate Standard Measurement Development

    Energy Technology Data Exchange (ETDEWEB)

    Hanshaw, R.A.

    2000-02-18

    A Shelton Precision Interferometer Base, which is used for the calibration of coordinate standards, was improved through hardware replacement, software geometry error correction, and reduction of vibration effects. Substantial increases in resolution and reliability, as well as a reduction in sampling time, were achieved through hardware replacement; vibration effects were reduced substantially through modification of the machine's component damping and software routines; and the majority of the machine's geometry error was corrected through software geometry error correction. Because of these modifications, the uncertainty of coordinate standards calibrated on this device has been reduced dramatically.
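    Software geometry error correction of the kind mentioned above typically means mapping the machine's positioning error against a reference and subtracting an interpolated correction from each raw reading. The one-axis sketch below illustrates the idea; it is generic, not the Shelton machine's actual routine, and all values are invented.

    ```python
    import numpy as np

    # Calibration map: positioning error measured at a few axis positions,
    # e.g. against a laser interferometer.
    cal_positions = np.array([0.0, 100.0, 200.0, 300.0, 400.0])   # mm
    cal_errors_um = np.array([0.0, 1.2, 2.1, 1.8, 0.9])           # micrometres

    def corrected(raw_mm):
        """Subtract the interpolated geometry error from a raw axis reading."""
        err_um = np.interp(raw_mm, cal_positions, cal_errors_um)
        return raw_mm - err_um * 1e-3   # convert um -> mm before subtracting

    print(corrected(250.0))   # raw reading of 250 mm, corrected for mapped error
    ```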

  3. Hydrothermal synthesis and structural characterization of an organic–inorganic hybrid sandwich-type tungstoantimonate [Cu(en){sub 2}(H{sub 2}O)]{sub 4}[Cu(en){sub 2}(H{sub 2}O){sub 2}][Cu{sub 2}Na{sub 4}(α-SbW{sub 9}O{sub 33}){sub 2}]·6H{sub 2}O

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yingjie [Institute of Molecular and Crystal Engineering, Henan Key Lab of Polyoxometalate Chemistry, College of Chemistry and Chemical Engineering, Henan University, Kaifeng, Henan 475004 (China); College of Medicine, Henan University, Kaifeng, Henan 475004 (China); Cao, Jing; Wang, Yujie; Li, Yanzhou [Institute of Molecular and Crystal Engineering, Henan Key Lab of Polyoxometalate Chemistry, College of Chemistry and Chemical Engineering, Henan University, Kaifeng, Henan 475004 (China); Zhao, Junwei, E-mail: zhaojunwei@henu.edu.cn [Institute of Molecular and Crystal Engineering, Henan Key Lab of Polyoxometalate Chemistry, College of Chemistry and Chemical Engineering, Henan University, Kaifeng, Henan 475004 (China); State Key Laboratory of Structural Chemistry, Fujian Institute of Research on the Structure of Matter, Chinese Academy of Sciences, Fuzhou, Fujian 350002 (China); Chen, Lijuan, E-mail: ljchen@henu.edu.cn [Institute of Molecular and Crystal Engineering, Henan Key Lab of Polyoxometalate Chemistry, College of Chemistry and Chemical Engineering, Henan University, Kaifeng, Henan 475004 (China); Ma, Pengtao; Niu, Jingyang [Institute of Molecular and Crystal Engineering, Henan Key Lab of Polyoxometalate Chemistry, College of Chemistry and Chemical Engineering, Henan University, Kaifeng, Henan 475004 (China)

    2014-01-15

    An organic–inorganic hybrid sandwich-type tungstoantimonate [Cu(en){sub 2}(H{sub 2}O)]{sub 4}[Cu(en){sub 2}(H{sub 2}O){sub 2}][Cu{sub 2}Na{sub 4}(α-SbW{sub 9}O{sub 33}){sub 2}]·6H{sub 2}O (1) has been synthesized by the reaction of Sb{sub 2}O{sub 3}, Na{sub 2}WO{sub 4}·2H{sub 2}O and CuCl{sub 2}·2H{sub 2}O with en (en=ethanediamine) under hydrothermal conditions, and structurally characterized by elemental analysis, inductively coupled plasma atomic emission spectrometry, IR spectroscopy and single-crystal X-ray diffraction. 1 displays a centric dimeric structure formed by two equivalent trivacant Keggin [α-SbW{sub 9}O{sub 33}]{sup 9−} subunits sandwiching a hexagonal (Cu{sub 2}Na{sub 4}) cluster. Moreover, related tungstoantimonates sandwiching hexagonal hexa-metal clusters are also summarized and compared. Variable-temperature magnetic measurements of 1 reveal weak ferromagnetic exchange interactions within the hexagonal (Cu{sub 2}Na{sub 4}) cluster, mediated by the oxygen bridges. - Graphical abstract: An organic–inorganic hybrid (Cu{sub 2}Na{sub 4})-sandwiched tungstoantimonate [Cu(en){sub 2}(H{sub 2}O)]{sub 4}[Cu(en){sub 2}(H{sub 2}O){sub 2}][Cu{sub 2}Na{sub 4}(α-SbW{sub 9}O{sub 33}){sub 2}]·6H{sub 2}O was synthesized and its magnetic properties were investigated. - Highlights: • Organic–inorganic hybrid sandwich-type tungstoantimonate. • (Cu{sub 2}Na{sub 4})-sandwiched tungstoantimonate [Cu{sub 2}Na{sub 4}(α-SbW{sub 9}O{sub 33}){sub 2}]{sup 10−}. • Ferromagnetic tungstoantimonate.

  4. Correction for quadrature errors

    DEFF Research Database (Denmark)

    Netterstrøm, A.; Christensen, Erik Lintz

    1994-01-01

    In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signals...

  5. ERRORS AND CORRECTION

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    To err is human. Since the 1960s, most second language teachers and language theorists have regarded errors as natural and inevitable in the language learning process. Instead of regarding them as terrible and disappointing, teachers have come to realize their value. This paper will consider these values, analyze some errors and propose some effective correction techniques.

  6. ERROR AND ERROR CORRECTION AT ELEMENTARY LEVEL

    Institute of Scientific and Technical Information of China (English)

    1994-01-01

    Introduction: Errors are unavoidable in language learning; however, to a great extent, teachers in most middle schools in China regard errors as undesirable, a sign of failure in language learning. Most middle schools are still using the grammar-translation method, which aims at encouraging students to read scientific works and enjoy literary works. The other goals of this method are to gain a greater understanding of the first language and to improve the students' ability to cope with difficult subjects and materials, i.e. to develop the students' minds. The practical purpose of using this method is to help learners pass the annual entrance examination. "To achieve these goals, the students must first learn grammar and vocabulary,... Grammar is taught deductively by means of long and elaborate explanations... students learn the rules of the language rather than its use." (Tang Lixing, 1983: 11-12)

  7. Quantifying truncation errors in effective field theory

    CERN Document Server

    Furnstahl, R J; Phillips, D R; Wesolowski, S

    2015-01-01

    Bayesian procedures designed to quantify truncation errors in perturbative calculations of quantum chromodynamics observables are adapted to expansions in effective field theory (EFT). In the Bayesian approach, such truncation errors are derived from degree-of-belief (DOB) intervals for EFT predictions. Computation of these intervals requires specification of prior probability distributions ("priors") for the expansion coefficients. By encoding expectations about the naturalness of these coefficients, this framework provides a statistical interpretation of the standard EFT procedure where truncation errors are estimated using the order-by-order convergence of the expansion. It also permits exploration of the ways in which such error bars are, and are not, sensitive to assumptions about EFT-coefficient naturalness. We first demonstrate the calculation of Bayesian probability distributions for the EFT truncation error in some representative examples, and then focus on the application of chiral EFT to neutron-proton...
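    The "standard EFT procedure" that this framework formalizes can be sketched numerically: given the extracted expansion coefficients and the expansion parameter Q, estimate the truncation error from the size of the first omitted term, assuming the omitted coefficient is as natural as the ones already seen. The toy numbers below are invented, and the sketch is the informal estimate, not the paper's full Bayesian machinery.

    ```python
    Q = 0.33                    # expansion parameter (assumed)
    coeffs = [1.0, -0.6, 1.4]   # dimensionless c_0, c_1, c_2 from order-by-order fits

    k = len(coeffs) - 1                    # highest included order
    c_bar = max(abs(c) for c in coeffs)    # naturalness scale from observed coefficients

    # First-omitted-term estimate of the truncation error: delta_k ~ c_bar * Q**(k+1)
    delta_k = c_bar * Q ** (k + 1)
    prediction = sum(c * Q**n for n, c in enumerate(coeffs))

    print(f"prediction {prediction:.4f} +/- {delta_k:.4f} (truncation estimate)")
    # The Bayesian treatment replaces c_bar with a prior over omitted coefficients
    # and turns this point estimate into degree-of-belief (DOB) intervals.
    ```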

  8. Proofreading for word errors.

    Science.gov (United States)

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  9. Experimental research on English vowel errors analysis

    Directory of Open Access Journals (Sweden)

    Huang Qiuhua

    2016-01-01

    Full Text Available Our paper analyzed relevant acoustic parameters of people’s speech samples and the results that compared with English standard pronunciation with methods of experimental phonetics by phonetic analysis software and statistical analysis software. Then we summarized phonetic pronunciation errors of college students through the analysis of English pronunciation of vowels, we found that college students’ English pronunciation are easy occur tongue position and lip shape errors during pronounce vowels. Based on analysis of pronunciation errors, we put forward targeted voice training for college students’ English pronunciation, eventually increased the students learning interest, and improved the teaching of English phonetics.

  10. An error resilient scheme for H.264 video coding based on distortion estimated mode decision and nearest neighbor error concealment

    Institute of Scientific and Technical Information of China (English)

    LEE Tien-hsu; WANG Jong-tzy; CHEN Jhih-bin; CHANG Pao-chi

    2006-01-01

    Although the H.264 video coding standard provides several error resilience tools, the damage caused by error propagation may still be tremendous. This work is aimed at developing a robust and standard-compliant error resilient coding scheme for H.264 and uses techniques of mode decision, data hiding, and error concealment to reduce the damage from error propagation. This paper proposes a system with two error resilience techniques that can improve the robustness of H.264 in noisy channels. The first technique is Nearest Neighbor motion compensated Error Concealment (NNEC), which chooses the nearest neighbors in the reference frames for error concealment. The second technique is Distortion Estimated Mode Decision (DEMD), which selects an optimal mode based on stochastically distorted frames. Observed simulation results showed that the rate-distortion performances of the proposed algorithms are better than those of the compared algorithms.
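    To make the concealment side concrete, here is a minimal sketch in the spirit of motion-compensated error concealment (a generic boundary-matching variant with invented details, not the authors' exact NNEC algorithm): a lost block borrows candidate motion vectors from its decoded neighbours and keeps the reference-frame patch whose border best continues the surviving pixels.

    ```python
    import numpy as np

    def conceal_block(cur, ref, y, x, B, candidate_mvs):
        """Fill the lost BxB block at (y, x) in `cur` from reference frame `ref`.

        Candidate motion vectors (borrowed from decoded neighbours) are scored by
        how well the motion-compensated patch continues the one-pixel border
        around the hole; the best-scoring patch is copied in.
        """
        best_cost, best_patch = np.inf, None
        top = cur[y - 1, x:x + B]        # surviving row just above the hole
        left = cur[y:y + B, x - 1]       # surviving column just left of the hole
        for dy, dx in candidate_mvs:
            if not (0 <= y + dy <= ref.shape[0] - B and 0 <= x + dx <= ref.shape[1] - B):
                continue                 # candidate points outside the frame
            patch = ref[y + dy:y + dy + B, x + dx:x + dx + B]
            cost = np.abs(patch[0] - top).sum() + np.abs(patch[:, 0] - left).sum()
            if cost < best_cost:
                best_cost, best_patch = cost, patch
        cur[y:y + B, x:x + B] = best_patch
        return cur

    # Toy frames: a bright vertical bar moving 2 px to the right between frames.
    ref = np.zeros((32, 32)); ref[4:24, 8:16] = 200.0
    cur = np.zeros((32, 32)); cur[4:24, 10:18] = 200.0
    cur[8:16, 12:20] = -1.0              # pretend this 8x8 block was lost
    conceal_block(cur, ref, y=8, x=12, B=8,
                  candidate_mvs=[(0, 0), (0, -2), (-2, 0), (0, 2)])
    print(cur[8, 12:20])                 # recovered row: bar edge restored at col 18
    ```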

  11. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars for addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  12. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  13. Errors in Radiologic Reporting

    Directory of Open Access Journals (Sweden)

    Esmaeel Shokrollahi

    2010-05-01

    Given that the report is a professional document and bears the associated responsibilities, all of the radiologist's errors appear in it, either directly or indirectly. It is not easy to distinguish and classify the mistakes made when a report is prepared, because in most cases the errors are complex and attributable to more than one cause, and because many errors depend on the individual radiologist's professional, behavioral and psychological traits. In fact, anyone can make a mistake, but some radiologists make more mistakes, and some types of mistakes are predictable to some extent. Reporting errors can be categorized differently: universal vs. individual; human-related vs. system-related; perceptive vs. cognitive errors (1. descriptive, 2. interpretative, 3. decision-related). Perceptive errors: 1. false positive, 2. false negative (non-identification, erroneous identification). Cognitive errors: knowledge-based, psychological.

  14. Errors in neuroradiology.

    Science.gov (United States)

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4% of radiologic interpretations in daily practice contain errors, and discrepancies occur in 2-20% of reports. Fortunately, most of them are minor-degree errors or, if serious, are found and corrected with sufficient promptness; obviously, diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. The misdiagnosis/misinterpretation percentage rises in the emergency setting and in the early stages of the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcifications and brain stones, pseudofractures, enlargement of the subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and finally neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately and in a timely fashion directly to the treatment team.

  15. Frame loss error concealment for SVC

    Institute of Scientific and Technical Information of China (English)

    CHEN Ying; XIE Kai; ZHANG Feng; PANDIT Purvin; BOYCE Jill

    2006-01-01

    Scalable video coding (SVC), the scalable extension of H.264/AVC, is an ongoing international video coding standard designed for network-adaptive and device-adaptive applications that also offers high coding efficiency. However, packet losses often occur over unreliable networks, even for the base layer of SVC, and have a severe impact on the playback quality of compressed video. Until now, no literature has discussed error concealment support for standard SVC bit-streams. In this paper, we provide robust and effective error concealment techniques for SVC with spatial scalability. Experimental results showed that the proposed methods provide substantial improvement, both subjectively and objectively, without a significant complexity overhead.

  16. Error bounds from extra precise iterative refinement

    Energy Technology Data Exchange (ETDEWEB)

    Demmel, James; Hida, Yozo; Kahan, William; Li, Xiaoye S.; Mukherjee, Soni; Riedy, E. Jason

    2005-02-07

    We present the design and testing of an algorithm for iterative refinement of the solution of linear equations, where the residual is computed with extra precision. This algorithm was originally proposed in the 1960s [6, 22] as a means to compute very accurate solutions to all but the most ill-conditioned linear systems of equations. However, two obstacles have until now prevented its adoption in standard subroutine libraries like LAPACK: (1) there was no standard way to access the higher precision arithmetic needed to compute residuals, and (2) it was unclear how to compute a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard [5] has recently removed the first obstacle. To overcome the second obstacle, we show how a single application of iterative refinement can be used to compute an error bound in any norm at small cost, and use this to compute both an error bound in the usual infinity norm, and a componentwise relative error bound. We report extensive test results on over 6.2 million matrices of dimension 5, 10, 100, and 1000. As long as a normwise (resp. componentwise) condition number computed by the algorithm is less than 1/(max{10, √n}·ε_w), the computed normwise (resp. componentwise) error bound is at most 2·max{10, √n}·ε_w, and indeed bounds the true error. Here, n is the matrix dimension and ε_w is the working (single) precision roundoff error. For worse conditioned problems, we get similarly small correct error bounds in over 89.4% of cases.
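    A bare-bones version of the scheme (a sketch of the classical algorithm, not the LAPACK routine the paper develops): factor and solve in working (single) precision, compute the residual in double precision, and iterate; the size of the last correction yields a cheap normwise error estimate.

    ```python
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    rng = np.random.default_rng(5)
    n = 100
    A = rng.standard_normal((n, n)).astype(np.float32)
    b = rng.standard_normal(n).astype(np.float32)

    lu, piv = lu_factor(A)            # factorization in working (single) precision
    x = lu_solve((lu, piv), b)

    for _ in range(5):
        # The crucial step: residual computed in extra (double) precision.
        r = b.astype(np.float64) - A.astype(np.float64) @ x.astype(np.float64)
        d = lu_solve((lu, piv), r.astype(np.float32))   # correction in single
        x = x + d
        if np.linalg.norm(d) <= 1e-6 * np.linalg.norm(x):
            break

    # The last correction bounds (roughly) the remaining normwise error.
    print("estimated relative error:", np.linalg.norm(d) / np.linalg.norm(x))
    ```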

  17. Inpatients’ medical prescription errors

    Directory of Open Access Journals (Sweden)

    Aline Melo Santos Silva

    2009-09-01

    Objective: To identify and quantify the most frequent errors in inpatients' medical prescriptions. Methods: A survey of prescription errors was performed on inpatients' medical prescriptions from July 2008 to May 2009, for eight hours a day. Results: A total of 3,931 prescriptions were analyzed and 362 (9.2%) prescription errors were found, involving the healthcare team as a whole. Among the 16 types of errors detected, the most frequent were missing information, such as dose (66 cases, 18.2%) and administration route (26 cases, 7.2%); 45 cases (12.4%) of wrong transcriptions to the information system; 30 cases (8.3%) of duplicate drugs; doses higher than recommended (24 events, 6.6%); and 29 cases (8.0%) of prescriptions indicating an allergy without specifying it. Conclusion: Medication errors are a reality in hospitals. All healthcare professionals are responsible for the identification and prevention of these errors, each one in his/her own area. The pharmacist is an essential professional in the drug therapy process. All hospital organizations need a pharmacist team responsible for analyzing medical prescriptions before the preparation, dispensation and administration of drugs to inpatients. This study showed that the pharmacist improves inpatient safety and the success of the prescribed therapy.

  18. Improved synthesis of glycine, taurine and sulfate conjugated bile acids as reference compounds and internal standards for ESI-MS/MS urinary profiling of inborn errors of bile acid synthesis.

    Science.gov (United States)

    Donazzolo, Elena; Gucciardi, Antonina; Mazzier, Daniela; Peggion, Cristina; Pirillo, Paola; Naturale, Mauro; Moretto, Alessandro; Giordano, Giuseppe

    2017-04-01

    Bile acid synthesis defects are rare genetic disorders characterized by a failure to produce normal bile acids (BAs) and by an accumulation of unusual and intermediary cholanoids. Measurement of cholanoids in urine samples by mass spectrometry is the gold standard for the diagnosis of these diseases. In this work, improved methods for the chemical synthesis of 30 BAs conjugated with glycine, taurine and sulfate were developed. Diethyl phosphorocyanidate (DEPC) and diphenyl phosphoryl azide (DPPA) were used as coupling reagents for glycine and taurine conjugation. Sulfated BAs were obtained with the sulfur trioxide-triethylamine complex (SO3-TEA) as sulfating agent and thereafter conjugated with glycine and taurine. All products were characterized by NMR, IR spectroscopy and high-resolution mass spectrometry (HRMS). The use of these compounds as internal standards allows improved accuracy in both the identification and the quantification of urinary bile acids.

  19. Error monitoring in musicians

    Directory of Open Access Journals (Sweden)

    Clemens eMaidhof

    2013-07-01

    To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e. the processes, and their neural correlates, associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. EEG studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e. attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions in the processing of auditory information. Furthermore, recent methodological advances such as the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types, such as proprioceptive and auditory feedback, and in general lead to a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context will be discussed.

  20. Smoothing error pitfalls

    Science.gov (United States)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the

  1. PERCEPTION OF RADIOLOGISTS ABOUT DIAGNOSTIC ERRORS IN RADIOLOGY IN YEMEN

    Directory of Open Access Journals (Sweden)

    Hameed M Aklan

    2014-12-01

    Background: Diagnostic errors in radiology are common, affecting patient care and management. Several types of radiological errors, such as misperception, miscommunication, and procedural misconduct, have been reported, highlighting the importance of radiologists' awareness of their own errors. However, no data are available from Yemen. The aim of this study is to assess radiological errors in Yemen. Method: A standard questionnaire on radiological errors was distributed, by convenience sampling, to radiologists in the main public and private hospitals in Sana'a city, Yemen. Results: Of 80 questionnaires distributed, 58 were returned (a response rate of 72.5%). About 88% of participants had made diagnostic errors in 2013. The radiology errors were classified as under-call (false negative; 29.3%), communication errors (27.6%), over-call (false positive; 25.9%), procedural complications (24.1%) and interpretation errors (15.5%). Unavailability of previous studies and inadequate clinical information were mentioned as causes of errors (37.9% and 36.2%, respectively). The majority of radiologists (70.7%) did not keep records of their own errors, and only 24.1% of radiologists had errors meetings in their departments. Conclusion: Errors in radiology remain a significant problem affecting patient safety. Collaborative efforts must be established to reduce diagnostic errors in radiology by organizing regular meetings to educate radiologists about such matters and by creating a good environment for learning and improvement rather than blaming and embarrassing.

  2. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    “Errare humanum est” is a well-known and widespread Latin proverb stating that to err is human and that people make mistakes all the time. However, what counts is that people learn from their mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reasons why they are made, improve and move on. The significance of studying errors is described by Corder: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the aim of this paper is to analyze errors in the process of second language acquisition and the ways we teachers can benefit from mistakes to help students improve themselves while giving proper feedback.

  3. Error Correction in Classroom

    Institute of Scientific and Technical Information of China (English)

    Dr. Grace Zhang

    2000-01-01

    Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on large-scale empirical data. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect correction, or direct correction only. The former choice indicates that students would be happy to take either, so long as the correction gets done. Most students didn't mind peer correction provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correction, use a combination of correction strategies (direct only if suitable) and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in Chinese language classrooms and that it may also have wider implications for other languages.

  4. Theoretical analysis of reflected ray error from surface slope error and their application to the solar concentrated collector

    CERN Document Server

    Huang, Weidong

    2011-01-01

    The surface slope error of a concentrator is one of the main factors influencing the performance of solar concentrated collectors: it deviates the reflected ray and reduces the intercepted radiation. This paper presents the general equation for calculating the standard deviation of the reflected-ray error from the slope error through geometric optics, applies the equation to five kinds of solar concentrating reflectors, and provides typical results. The results indicate that the slope error is amplified by more than a factor of two in the reflected ray when the incidence angle is greater than zero. The equation for the reflected-ray error is generally applicable to all reflecting surfaces, and can also be applied to control the error when designing an abaxial optical system.
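    The baseline factor of two follows from the law of reflection: tilting the mirror normal by δ in the plane of incidence rotates the reflected ray by 2δ; the paper's result that the amplification exceeds two concerns the general three-dimensional geometry at non-zero incidence. A quick 2D numerical check with assumed angles:

    ```python
    import numpy as np

    def reflect(d, n):
        """Reflect direction d about unit normal n (law of reflection)."""
        return d - 2 * np.dot(d, n) * n

    def unit(theta):
        return np.array([np.sin(theta), np.cos(theta)])

    incoming = unit(np.deg2rad(30.0))     # assumed incidence direction
    delta = np.deg2rad(0.1)               # slope error of 0.1 degrees

    r_ideal = reflect(incoming, unit(0.0))      # nominal surface normal
    r_tilted = reflect(incoming, unit(delta))   # normal tilted by the slope error

    dev = np.rad2deg(np.arccos(np.clip(np.dot(r_ideal, r_tilted), -1.0, 1.0)))
    print(f"slope error 0.100 deg -> reflected-ray deviation {dev:.3f} deg")
    # Prints ~0.200 deg: the slope error is doubled in the reflected ray.
    ```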

  5. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients. The mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in neonatology in particular. We compared the results from the literature with our specific experience at the Nina Simulation Centre (Pisa, Italy). Results: In neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that induce fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk of errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  6. Textbook Error: Short Circuiting on Electrochemical Cell

    Science.gov (United States)

    Bonicamp, Judith M.; Clark, Roy W.

    2007-01-01

    Short circuiting an electrochemical cell is an unreported but persistent error in electrochemistry textbooks. It is suggested that diagrams depicting a cell delivering usable current to a load be postponed, that the theory of open-circuit galvanic cells be explained, and that the voltages be calculated from the tables of standard reduction potentials and…

  7. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  8. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    This article offers a defense of libertarianism against two accusations according to which it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons underlying these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis for the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of committing them.

  9. Orwell's Instructive Errors

    Science.gov (United States)

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  10. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse; Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  11. Error Model of Curves in GIS and Digitization Experiment

    Institute of Scientific and Technical Information of China (English)

    GUO Tongde; WANG Jiayao; WANG Guangxia

    2006-01-01

    A stochastic error process of curves is proposed as the error model to describe the errors of curves in GIS. In terms of the stochastic process, four characteristics concerning the local error of curves, namely, the mean error function, standard error function, absolute error function, and the correlation function of errors, are put forward. The total error of a curve is expressed by a mean square integral of the stochastic error process. The probabilistic meanings and geometric meanings of the characteristics mentioned above are also discussed. A scan digitization experiment is designed to check the efficiency of the model. In the experiment, a piece of contour line is digitized more than 100 times and many sample functions are derived from the experiment. Finally, all the error characteristics are estimated on the basis of the sample functions. The experiment results show that the systematic error in digitized map data is not negligible, and the errors of points on curves chiefly depend on the curvature and the concavity of the curves.
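    A minimal Python sketch (not from the record) of how the four pointwise error characteristics named above could be estimated from repeated digitizations; the synthetic offsets, the alignment to a common parameterization and all names are illustrative assumptions rather than the authors' procedure.

        import numpy as np

        # Assume R repeated digitizations of one curve, resampled to N common
        # parameter values; 'offsets' holds signed perpendicular offsets from
        # the reference curve (synthetic stand-in data with a systematic bias).
        rng = np.random.default_rng(1)
        offsets = 0.2 + 0.5 * rng.standard_normal((100, 50))      # shape (R, N)

        mean_error = offsets.mean(axis=0)         # mean error function
        std_error = offsets.std(axis=0, ddof=1)   # standard error function
        abs_error = np.abs(offsets).mean(axis=0)  # absolute error function
        corr = np.corrcoef(offsets.T)             # correlation function of errors

        # Total error: mean-square integral of the error process, approximated
        # by the average second moment along the curve parameter.
        total_error = np.mean(offsets**2)
        print(mean_error.mean(), total_error)

    The nonzero mean offset in the synthetic data mimics the experiment's finding that systematic error in digitized map data is not negligible.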

  12. Cosine tuning minimizes motor errors.

    Science.gov (United States)

    Todorov, Emanuel

    2002-06-01

    Cosine tuning is ubiquitous in the motor system, yet a satisfying explanation of its origin is lacking. Here we argue that cosine tuning minimizes expected errors in force production, which makes it a natural choice for activating muscles and neurons in the final stages of motor processing. Our results are based on the empirically observed scaling of neuromotor noise, whose standard deviation is a linear function of the mean. Such scaling predicts a reduction of net force errors when redundant actuators pull in the same direction. We confirm this prediction by comparing forces produced with one versus two hands and generalize it across directions. Under the resulting neuromotor noise model, we prove that the optimal activation profile is a (possibly truncated) cosine--for arbitrary dimensionality of the workspace, distribution of force directions, correlated or uncorrelated noise, with or without a separate cocontraction command. The model predicts a negative force bias, truncated cosine tuning at low muscle cocontraction levels, and misalignment of preferred directions and lines of action for nonuniform muscle distributions. All predictions are supported by experimental data.
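    The scaling argument in this abstract is easy to check numerically. The hedged sketch below (not the authors' code) assumes neuromotor noise whose standard deviation is a linear function of the mean command and compares the net force error of one actuator against two redundant actuators pulling in the same direction; the noise constant and target force are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        k = 0.1            # assumed noise scaling: std = k * mean command
        F = 10.0           # target net force
        n = 100_000        # Monte Carlo samples

        # One actuator produces the full force F.
        one = rng.normal(F, k * F, n)

        # Two actuators each produce F/2 with independent signal-dependent noise.
        two = rng.normal(F / 2, k * F / 2, n) + rng.normal(F / 2, k * F / 2, n)

        # Splitting the command reduces the net error by a factor of sqrt(2).
        print(one.std(ddof=1), two.std(ddof=1), one.std(ddof=1) / two.std(ddof=1))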

  13. Patient error: a preliminary taxonomy.

    NARCIS (Netherlands)

    Buetow, S.; Kiata, L.; Liew, T.; Kenealy, T.; Dovey, S.; Elwyn, G.

    2009-01-01

    PURPOSE: Current research on errors in health care focuses almost exclusively on system and clinician error. It tends to exclude how patients may create errors that influence their health. We aimed to identify the types of errors that patients can contribute and help manage, especially in primary care.

  14. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  15. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2-compliant codec uses data hiding to transmit error correction information, together with several error concealment techniques in the decoder. The decoder resynchronizes more quickly, with fewer errors, than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  16. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suite for automatically developing ultra-reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production-quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer-aided software engineering product in the industry to concentrate on automatically supporting the development of an ultra-reliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  17. Error estimation in the direct state tomography

    Science.gov (United States)

    Sainz, I.; Klimov, A. B.

    2016-10-01

    We show that reformulating the Direct State Tomography (DST) protocol in terms of projections into a set of non-orthogonal bases one can perform an accuracy analysis of DST in a similar way as in the standard projection-based reconstruction schemes, i.e., in terms of the Hilbert-Schmidt distance between estimated and true states. This allows us to determine the estimation error for any measurement strength, including the weak measurement case, and to obtain an explicit analytic form for the average minimum square errors.
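    As a point of reference, the accuracy measure used above, the Hilbert-Schmidt distance between the estimated and true states, is straightforward to compute; the sketch below uses invented single-qubit density matrices and illustrates only the metric, not the DST protocol itself.

        import numpy as np

        def hs_distance(rho, sigma):
            """Hilbert-Schmidt distance sqrt(Tr[(rho-sigma)^dagger (rho-sigma)])."""
            d = rho - sigma
            return np.sqrt(np.trace(d.conj().T @ d).real)

        # Illustrative one-qubit states: a true state and a slightly
        # depolarized estimate of it.
        rho = np.array([[0.9, 0.1], [0.1, 0.1]], dtype=complex)
        eps = 0.05
        sigma = (1 - eps) * rho + eps * np.eye(2) / 2

        print(hs_distance(rho, sigma))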

  18. Upper Bounds on Numerical Approximation Errors

    DEFF Research Database (Denmark)

    Raahauge, Peter

    2004-01-01

    This paper suggests a method for determining rigorous upper bounds on approximation errors of numerical solutions to infinite horizon dynamic programming models. Bounds are provided for approximations of the value function and the policy function as well as the derivatives of the value function. The bounds apply to more general problems than existing bounding methods do. For instance, since strict concavity is not required, linear models and piecewise linear approximations can be dealt with. Despite the generality, the bounds perform well in comparison with existing methods even when applied to approximations of a standard (strictly concave) growth model. KEYWORDS: Numerical approximation errors, Bellman contractions, Error bounds

  19. A Characterization of Prediction Errors

    OpenAIRE

    Meek, Christopher

    2016-01-01

    Understanding prediction errors and determining how to fix them is critical to building effective predictive systems. In this paper, we delineate four types of prediction errors and demonstrate that these four types characterize all prediction errors. In addition, we describe potential remedies and tools that can be used to reduce the uncertainty when trying to determine the source of a prediction error and when trying to take action to remove a prediction error.

  20. Error Analysis and Its Implication

    Institute of Scientific and Technical Information of China (English)

    崔蕾

    2007-01-01

    Error analysis is an important theory and approach for exploring the mental processes of language learners in SLA. Its major contribution is pointing out that intralingual errors are the main source of errors during language learning. Researchers' exploration and description of these errors will not only promote the bidirectional study of error analysis as both theory and approach, but also offer implications for second language learning.

  1. Unitary Application of the Quantum Error Correction Codes

    Institute of Scientific and Technical Information of China (English)

    游波; 许可; 吴小华

    2012-01-01

    For applying the perfect code to transmit quantum information over a noise channel, the standard protocol contains four steps: the encoding, the noise channel, the error-correction operation, and the decoding. In the present work, we show that this protocol can be simplified: the error-correction operation is not necessary if the decoding is realized by the so-called complete unitary transformation. We also offer a quantum circuit, which can correct arbitrary single-qubit errors.

  2. Data Extraction Errors in Meta-analyses That Use Standardized Mean Differences

    Institute of Scientific and Technical Information of China (English)

    Peter C. Gøtzsche; Asbjørn Hróbjartsson; Katja Maric; Britta Tendal; 彭斌

    2008-01-01

    Background: Meta-analyses that pool trials recording similar outcomes on different standards or rating scales require complex data handling and conversion to a common metric, the standardized mean difference (SMD). The reliability of such meta-analyses is unclear. Objective: To study whether SMDs in meta-analyses are accurate. Data sources: We systematically reviewed meta-analyses published in 2004 that reported a result as an SMD, with no language restrictions. Two trials were randomly selected from each meta-analysis, and we attempted to replicate the result of the original meta-analysis by independently calculating the SMD using Hedges' adjusted g. Data extraction: Our primary outcome was the proportion of meta-analyses in which our result (point estimate or confidence interval) differed from the original authors' by more than 0.1 for at least one of the two selected trials. We chose 0.1 as the cut-off because the effects of many commonly used treatments, relative to placebo, lie between 0.1 and 0.5. Results: Of the 27 meta-analyses included, 10 (37%) could not be replicated within 0.1 for at least one of the two selected trials. Four meta-analyses differed in their estimates by 0.6 or more. Common problems were errors in the number of patients, the means, the standard deviations, and the sign of the effect. Overall, 17 meta-analyses (63%) contained errors in at least one of the two trials examined. For the meta-analyses that differed by at least 0.1, we checked the data for all trials and conducted our own meta-analysis using the original authors' methods: 7 of these 10 meta-analyses proved erroneous (70%); one was subsequently retracted, and in two the significant difference disappeared or appeared. Conclusions: The high error rate in meta-analyses based on SMDs shows that although the statistical procedure looks simple, data extraction is particularly prone to mistakes that can negate or even reverse the findings. This is a caution not only to the researchers involved but to all readers, including journal reviewers and policy-makers, to remain alert when reading such meta-analyses.
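    The quantity being recomputed in this review, the standardized mean difference with Hedges' small-sample correction, takes only a few lines; the summary statistics below are invented, and the second call shows how a simple sign error in one arm's mean reverses the direction of the SMD.

        import math

        def hedges_g(m1, sd1, n1, m2, sd2, n2):
            """Standardized mean difference with Hedges' small-sample correction."""
            df = n1 + n2 - 2
            s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
            j = 1 - 3 / (4 * df - 1)        # Hedges' correction factor
            return j * (m1 - m2) / s_pooled

        print(hedges_g(12.0, 4.0, 30, 10.0, 5.0, 32))   # treatment vs. control
        print(hedges_g(10.0, 4.0, 30, 12.0, 5.0, 32))   # means swapped: sign flips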

  3. Sampling errors of quantile estimations from finite samples of data

    CERN Document Server

    Roy, Philippe; Gachon, Philippe

    2016-01-01

    Empirical relationships are derived for the expected sampling error of quantile estimations using Monte Carlo experiments for two frequency distributions frequently encountered in climate sciences. The relationships found are expressed as a scaling factor times the standard error of the mean; these give a quick tool to estimate the uncertainty of quantiles for a given finite sample size.
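    A compact sketch of the Monte Carlo procedure the abstract describes: draw many finite samples, measure the spread of the quantile estimate, and express it as a multiple of the standard error of the mean. The normal reference distribution, sample size and quantile level are illustrative choices, not the authors' settings.

        import numpy as np

        rng = np.random.default_rng(2)
        n, reps, q = 30, 20_000, 0.95     # sample size, replicates, quantile level

        samples = rng.standard_normal((reps, n))
        q_hat = np.quantile(samples, q, axis=1)

        se_quantile = q_hat.std(ddof=1)   # sampling error of the quantile estimate
        se_mean = 1.0 / np.sqrt(n)        # standard error of the mean (sigma = 1)
        print(se_quantile, se_quantile / se_mean)   # the empirical scaling factor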

  4. Moderating Argos location errors in animal tracking data

    Science.gov (United States)

    Douglas, David C.; Weinzierl, Rolf; Davidson, Sarah C.; Kays, Roland; Wikelski, Martin; Bohrer, Gil

    2012-01-01

    1. The Argos System is used worldwide to satellite-track free-ranging animals, but location errors can range from tens of metres to hundreds of kilometres. Low-quality locations (Argos classes A, 0, B and Z) dominate animal tracking data. Standard-quality animal tracking locations (Argos classes 3, 2 and 1) have larger errors than those reported in Argos manuals.

  5. Drug Administration Errors in Hospital Inpatients: A Systematic Review

    Science.gov (United States)

    Berdot, Sarah; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre; Sabatier, Brigitte

    2013-01-01

    Context Drug administration in the hospital setting is the last barrier before a possible error reaches the patient. Objectives We aimed to analyze the prevalence and nature of administration error rate detected by the observation method. Data Sources Embase, MEDLINE, Cochrane Library from 1966 to December 2011 and reference lists of included studies. Study Selection Observational studies, cross-sectional studies, before-and-after studies, and randomized controlled trials that measured the rate of administration errors in inpatients were included. Data Extraction Two reviewers (senior pharmacists) independently identified studies for inclusion. One reviewer extracted the data; the second reviewer checked the data. The main outcome was the error rate calculated as being the number of errors without wrong time errors divided by the Total Opportunity for Errors (TOE, sum of the total number of doses ordered plus the unordered doses given), and multiplied by 100. For studies that reported it, clinical impact was reclassified into four categories from fatal to minor or no impact. Due to a large heterogeneity, results were expressed as median values (interquartile range, IQR), according to their study design. Results Among 2088 studies, a total of 52 reported TOE. Most of the studies were cross-sectional studies (N=46). The median error rate without wrong time errors for the cross-sectional studies using TOE was 10.5% [IQR: 7.3%-21.7%]. No fatal error was observed and most errors were classified as minor in the 18 studies in which clinical impact was analyzed. We did not find any evidence of publication bias. Conclusions Administration errors are frequent among inpatients. The median error rate without wrong time errors for the cross-sectional studies using TOE was about 10%. A standardization of administration error rate using the same denominator (TOE), numerator and types of errors is essential for further publications. PMID:23818992
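    The review's error-rate definition is explicit enough to state as code. Below is a sketch with invented counts; the function assumes the observed error tally includes wrong-time errors, which are then subtracted per the definition above.

        def administration_error_rate(n_errors, n_wrong_time, doses_ordered,
                                      unordered_doses_given):
            """Error rate (%) excluding wrong-time errors, with the Total
            Opportunity for Errors (TOE) as the denominator."""
            toe = doses_ordered + unordered_doses_given
            return 100.0 * (n_errors - n_wrong_time) / toe

        # Illustrative numbers only.
        print(administration_error_rate(57, 12, 400, 25))   # about 10.6%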

  6. Composite Gauss-Legendre Quadrature with Error Control

    Science.gov (United States)

    Prentice, J. S. C.

    2011-01-01

    We describe composite Gauss-Legendre quadrature for determining definite integrals, including a means of controlling the approximation error. We compare the form and performance of the algorithm with standard Newton-Cotes quadrature. (Contains 1 table.)
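    One plausible realization (not necessarily the authors' algorithm) of composite Gauss-Legendre quadrature with error control: the interval is split into equal subintervals, and the subdivision is doubled until successive estimates agree to a tolerance, which serves as the error estimate.

        import numpy as np

        def composite_gauss_legendre(f, a, b, n_sub, n_pts=5):
            """Composite Gauss-Legendre rule with n_sub equal subintervals."""
            x, w = np.polynomial.legendre.leggauss(n_pts)  # nodes/weights on [-1, 1]
            edges = np.linspace(a, b, n_sub + 1)
            total = 0.0
            for lo, hi in zip(edges[:-1], edges[1:]):
                half = 0.5 * (hi - lo)                     # map [-1, 1] -> [lo, hi]
                total += half * np.sum(w * f(half * (x + 1) + lo))
            return total

        def integrate(f, a, b, tol=1e-10):
            """Double the subdivision until successive estimates agree to tol."""
            n_sub, prev = 1, composite_gauss_legendre(f, a, b, 1)
            while True:
                n_sub *= 2
                cur = composite_gauss_legendre(f, a, b, n_sub)
                if abs(cur - prev) < tol:
                    return cur
                prev = cur

        print(integrate(np.sin, 0.0, np.pi))   # ~2.0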

  8. The role of teamworking in error reduction during vascular procedures.

    Science.gov (United States)

    Soane, Emma; Bicknell, Colin; Mason, Sarah; Godard, Kathleen; Cheshire, Nick

    2014-07-01

    To examine the associations between teamworking processes and error rates during vascular surgical procedures and then make informed recommendations for future studies and practices in this area. This is a single-center observational pilot study. Twelve procedures were observed over a 3-week period by a trained observer. Errors were categorized using a standardized error capture tool. Leadership and teamworking processes were categorized based on the Malakis et al. (2010) framework. Data are expressed as frequencies, means, standard deviations and percentages. Error rates (per hour) were likely to be reduced when there were effective prebriefing measures to ensure that members were aware of their roles and responsibilities (4.50 vs. 5.39 errors/hr), communications were kept to a practical and effective minimum (4.64 vs. 5.56 errors/hr), the progress of surgery was communicated throughout (3.14 vs. 8.33 errors/hr), and team roles changed during the procedure (3.17 vs. 5.97 errors/hr). Reduction of error rates is a critical goal for surgical teams. The present study of teamworking processes in this environment shows that there is variation that should be further examined. More effective teamworking could prevent or mitigate a range of errors. The development of vascular surgical team members should incorporate principles of teamworking and appropriate communication. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Diagnostic errors in pediatric radiology

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, George A.; Voss, Stephan D. [Children's Hospital Boston, Department of Radiology, Harvard Medical School, Boston, MA (United States)]; Melvin, Patrice R. [Children's Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States)]; Graham, Dionne A. [Children's Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Harvard Medical School, The Department of Pediatrics, Boston, MA (United States)]

    2011-03-15

    Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean = 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case), all of which were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  10. Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting

    Science.gov (United States)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1987-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256 K bit DRAM's are organized in 32 K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.

  11. Measurement Errors and Uncertainties Theory and Practice

    CERN Document Server

    Rabinovich, Semyon G

    2006-01-01

    Measurement Errors and Uncertainties addresses the most important problems that physicists and engineers encounter when estimating errors and uncertainty. Building from the fundamentals of measurement theory, the author develops the theory of accuracy of measurements and offers a wealth of practical recommendations and examples of applications. This new edition covers a wide range of subjects, including: - Basic concepts of metrology - Measuring instruments characterization, standardization and calibration -Estimation of errors and uncertainty of single and multiple measurements - Modern probability-based methods of estimating measurement uncertainty With this new edition, the author completes the development of the new theory of indirect measurements. This theory provides more accurate and efficient methods for processing indirect measurement data. It eliminates the need to calculate the correlation coefficient - a stumbling block in measurement data processing - and offers for the first time a way to obtain...

  12. Errors from Image Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wood, William Monford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This report presents a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and makes suggestions for improving the methodology of obtaining quantitative information from radiographed objects.

  13. Learning from error: identification and analysis of causative factors leading to medication error in an inpatient hospital setting

    Directory of Open Access Journals (Sweden)

    Subodh Kumar

    2016-06-01

    Conclusions: This study was an initial step in recognising error-prone areas of medication management. It can be used to develop standard procedures and formulate guidelines for the prevention of such errors. [Int J Basic Clin Pharmacol 2016; 5(3): 999-1005]

  14. Transient Error Data Analysis.

    Science.gov (United States)

    1979-05-01

    [Abstract not recoverable: the scanned record preserves only fragments of the report's table of contents (graphical data analysis; general statistics and confidence intervals; goodness-of-fit test; conclusions) and of its tables of transient-error measurements (MTTF per system and technology, counts of logged errors, and the time span of the input files).]

  15. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using an MEE-like concept is also presented. Examples, tests, evaluation experiments and comparison with similar machines using classic approaches complement the descriptions.
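    A minimal sketch of the central MEE quantity: the Renyi quadratic entropy of a sample of classification errors, estimated with a Gaussian Parzen window. The kernel width and the synthetic error samples are assumptions; MEE training would minimize this value over the classifier's parameters.

        import numpy as np

        def quadratic_error_entropy(errors, sigma=0.5):
            """Renyi quadratic entropy H2 = -log V, where the information
            potential V is the mean pairwise Gaussian kernel over the errors
            (Parzen width sigma; pairwise kernels convolve to width sigma*sqrt(2))."""
            e = np.asarray(errors, dtype=float)
            diff = e[:, None] - e[None, :]
            v = np.exp(-diff**2 / (4 * sigma**2)).mean() / np.sqrt(4 * np.pi * sigma**2)
            return -np.log(v)

        # Tightly concentrated errors yield lower entropy than dispersed ones.
        rng = np.random.default_rng(3)
        print(quadratic_error_entropy(0.2 * rng.standard_normal(200)))
        print(quadratic_error_entropy(1.0 * rng.standard_normal(200)))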

  16. Perfect error processing: Perfectionism-related variations in action monitoring and error processing mechanisms.

    Science.gov (United States)

    Stahl, Jutta; Acharki, Manuela; Kresimon, Miriam; Völler, Frederike; Gibbons, Henning

    2015-08-01

    Showing excellent performance and avoiding poor performance are the main characteristics of perfectionists. Perfectionism-related variations (N=94) in neural correlates of performance monitoring were investigated in a flanker task by assessing two perfectionism-related trait dimensions: personal standard perfectionism (PSP), reflecting the intrinsic motivation to show error-free performance, and evaluative concern perfectionism (ECP), representing the worry of being evaluated poorly on the basis of bad performance. A moderating effect of ECP and PSP on error processing, an important performance-monitoring system, was investigated by examining the error(-related) negativity (Ne/ERN) and the error positivity (Pe). The smallest Ne/ERN difference (error minus correct) was obtained for pure-ECP participants (high-ECP-low-PSP), whereas the largest difference was shown for those with high ECP and high PSP (i.e., mixed perfectionists). Pe was positively correlated with PSP only. Our results support the cognitive-bias hypothesis, which suggests that pure-ECP participants reduce response-related attention to avoid intense error processing, thereby minimising the subjective threat of negative evaluations. The PSP-related variations in late error processing are consistent with the goal-oriented tendency of participants high in PSP to optimise their behaviour.

  17. Abnormal error monitoring in math-anxious individuals: evidence from error-related brain potentials.

    Science.gov (United States)

    Suárez-Pellicioni, Macarena; Núñez-Peña, María Isabel; Colomé, Angels

    2013-01-01

    This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN.

  18. Abnormal error monitoring in math-anxious individuals: evidence from error-related brain potentials.

    Directory of Open Access Journals (Sweden)

    Macarena Suárez-Pellicioni

    Full Text Available This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN.

  19. Errors in CT colonography.

    Science.gov (United States)

    Trilisky, Igor; Ward, Emily; Dachman, Abraham H

    2015-10-01

    CT colonography (CTC) is a colorectal cancer screening modality which is becoming more widely implemented and has shown polyp detection rates comparable to those of optical colonoscopy. CTC has the potential to improve population screening rates due to its minimal invasiveness, no sedation requirement, potential for reduced cathartic examination, faster patient throughput, and cost-effectiveness. Proper implementation of a CTC screening program requires careful attention to numerous factors, including patient preparation prior to the examination, the technical aspects of image acquisition, and post-processing of the acquired data. A CTC workstation with dedicated software is required with integrated CTC-specific display features. Many workstations include computer-aided detection software which is designed to decrease errors of detection by detecting and displaying polyp-candidates to the reader for evaluation. There are several pitfalls which may result in false-negative and false-positive reader interpretation. We present an overview of the potential errors in CTC and a systematic approach to avoid them.

  20. Human error in daily intensive nursing care

    Directory of Open Access Journals (Sweden)

    Sabrina da Costa Machado Duarte

    2015-12-01

    Full Text Available Objectives: to identify the errors in daily intensive nursing care and analyze them according to the theory of human error. Method: quantitative, descriptive and exploratory study, undertaken at the Intensive Care Center of a hospital in the Brazilian Sentinel Hospital Network. The participants were 36 professionals from the nursing team. The data were collected through semistructured interviews, observation and lexical analysis in the software ALCESTE®. Results: human error in nursing care can be related to the systems approach, through active failures and latent conditions. The active failures are represented by errors in medication administration and failure to raise the bedside rails. The latent conditions can be related to communication difficulties in the multiprofessional team, lack of standards and institutional routines, and absence of material resources. Conclusion: the errors identified interfere in nursing care and the clients' recovery and can cause damage. Nevertheless, they are treated as common events inherent in daily practice. The need to acknowledge these events is emphasized, stimulating a safety culture at the institution.

  1. Error Analysis in Mathematics Education.

    Science.gov (United States)

    Rittner, Max

    1982-01-01

    The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)

  2. Payment Error Rate Measurement (PERM)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...

  3. Measurement Error in Maximal Oxygen Uptake Tests

    Science.gov (United States)

    2003-11-14

    Alternative models were defined by imposing constraints on the standard errors. Every model imposed the constraint that the SEM was the same for both tests within each sample. Different models were obtained by varying whether equality constraints were imposed across samples. [The remainder of the scanned abstract is not recoverable; it is interleaved with fragments of the report's reference list.]

  4. Survey of Radar Refraction Error Corrections

    Science.gov (United States)

    2016-11-01

    [Abstract not recoverable: the scanned record preserves only fragments of the report's references and section headings (atmospheric modeling parameters; Earth model). The recoverable text indicates that refraction correction models use atmospheric models that reflect the different meteorological layers within the troposphere (Survey of Radar Refraction Error Corrections, RCC 266).]

  5. Error bounds for set inclusions

    Institute of Scientific and Technical Information of China (English)

    ZHENG; Xiyin(郑喜印)

    2003-01-01

    A variant of Robinson-Ursescu Theorem is given in normed spaces. Several error bound theorems for convex inclusions are proved and in particular a positive answer to Li and Singer's conjecture is given under weaker assumption than the assumption required in their conjecture. Perturbation error bounds are also studied. As applications, we study error bounds for convex inequality systems.

  6. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory]; Anderson, Mark C [Los Alamos National Laboratory]; Habib, Salman [Los Alamos National Laboratory]; Klein, Richard [Los Alamos National Laboratory]; Berliner, Mark [OHIO STATE UNIV.]; Covey, Curt [LLNL]; Ghattas, Omar [UNIV OF TEXAS]; Graziani, Carlo [UNIV OF CHICAGO]; Seager, Mark [LLNL]; Sefcik, Joseph [LLNL]; Stark, Philip [UC/BERKELEY]; Stewart, James [SNL]

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  7. Feature Referenced Error Correction Apparatus.

    Science.gov (United States)

    A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)

  8. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Medication error prevention in the school setting: a closer look.

    Science.gov (United States)

    Richmond, Sandra L

    2011-09-01

    Empirical evidence has identified that medication errors occur in the school setting; however, there is little research that identifies medication error prevention strategies specific to the school environment. This article reviews common medication errors that occur in the school setting and presents potential medication prevention strategies, such as developing medication error reporting systems, using technology, reviewing systems and processes that support current medication administration practices, and limiting distractions. The Standards of Professional Performance developed by the National Association of School Nurses identifies the need for school nurses to enhance the quality and effectiveness of their practice. Improving the safety of medication administration and preventing medication errors are examples of how nurses can demonstrate meeting this standard.

  10. Do calculation errors by nurses cause medication errors in clinical practice? A literature review.

    Science.gov (United States)

    Wright, Kerri

    2010-01-01

    This review aims to examine the literature available to ascertain whether medication errors in clinical practice are the result of nurses' miscalculating drug dosages. The research studies highlighting poor calculation skills of nurses and student nurses have been tested using written drug calculation tests in formal classroom settings [Kapborg, I., 1994. Calculation and administration of drug dosage by Swedish nurses, student nurses and physicians. International Journal for Quality in Health Care 6(4): 389-395; Hutton, M., 1998. Nursing mathematics: the importance of application. Nursing Standard 13(11): 35-38; Weeks, K., Lynne, P., Torrance, C., 2000. Written drug dosage errors made by students: the threat to clinical effectiveness and the need for a new approach. Clinical Effectiveness in Nursing 4, 20-29; Wright, K., 2004. Investigation to find strategies to improve student nurses' maths skills. British Journal of Nursing 13(21): 1280-1287; Wright, K., 2005. An exploration into the most effective way to teach drug calculation skills to nursing students. Nurse Education Today 25, 430-436], but there have been no reviews of the literature on medication errors in practice that specifically look to see whether the medication errors are caused by nurses' poor calculation skills. The databases Medline, CINAHL, British Nursing Index (BNI), Journal of the American Medical Association (JAMA) and Archives and Cochrane reviews were searched for research studies or systematic reviews which reported on the incidence or causes of drug errors in clinical practice. In total 33 articles met the criteria for this review. There were no studies that examined nurses' drug calculation errors in practice. As a result, studies and systematic reviews that investigated the types and causes of drug errors were examined to establish whether miscalculations by nurses were the causes of errors. The review found insufficient evidence to suggest that medication errors are caused by nurses' poor calculation skills.

  11. Calculation of error bars for laser damage observations

    Science.gov (United States)

    Arenberg, Jonathan W.

    2008-10-01

    The use of the error bar is a critical means of communicating the quality of individual data points and of a processed result. Understanding the error bar for a processed measurement depends on the measurement technique being used and is the subject of many recent works; as such, this paper confines its scope to the determination of the error bar on a single data point. Many investigators either ignore the error bar altogether or use a one-size-fits-all error bar; both approaches are poor procedure and misleading. It is the goal of this work to lift the veil of mysticism surrounding error bars for damage observations and make their description, calculation and use easy and commonplace. This paper rigorously derives the error bar size as a function of the experimental parameters and observed data, concentrating on the dependent variable, the cumulative probability of damage. The paper begins with a discussion of the error bar as a measure of data quality or reliability. The expression for the variance in the parameters is derived via standard methods and converted to a standard deviation. The concept of the coverage factor is introduced to scale the error bar to the desired confidence level, completing the derivation.
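    A hedged sketch of the single-point calculation the abstract outlines, under the common assumption that each of n nominally identical test sites damages independently with probability p: the binomial variance gives the standard deviation of the observed cumulative damage probability, and a coverage factor scales it to the desired confidence level. Names and counts are illustrative.

        import math

        def damage_error_bar(n_damaged, n_sites, coverage_factor=2.0):
            """Error bar on the cumulative probability of damage at one fluence,
            via the normal approximation to the binomial: sd = sqrt(p(1-p)/n),
            scaled by a coverage factor k for the desired confidence level."""
            p = n_damaged / n_sites
            sd = math.sqrt(p * (1.0 - p) / n_sites)
            return p, coverage_factor * sd

        p, err = damage_error_bar(7, 20)     # 7 of 20 test sites damaged
        print(f"P(damage) = {p:.2f} +/- {err:.2f}")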

  12. Firewall Configuration Errors Revisited

    CERN Document Server

    Wool, Avishai

    2009-01-01

    The first quantitative evaluation of the quality of corporate firewall configurations appeared in 2004, based on Check Point FireWall-1 rule-sets. In general, that survey indicated that corporate firewalls were often enforcing poorly written rule-sets containing many mistakes. The goal of this work is to revisit the first survey. The current study is much larger and, for the first time, includes configurations from two major vendors. The study also introduces a novel "Firewall Complexity" (FC) measure that applies to both types of firewalls. The findings of the current study indeed validate the 2004 study's main observations: firewalls are (still) poorly configured, and a rule-set's complexity is (still) positively correlated with the number of detected risk items. Thus we can conclude that, for well-configured firewalls, "small is (still) beautiful". However, unlike the 2004 study, we see no significant indication that later software versions have fewer errors (for both vendors).

  13. Beta systems error analysis

    Science.gov (United States)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of the two methods differed by slightly less than an order of magnitude. The measurement uncertainties and other errors in the results of the two methods are examined.

  14. Catalytic quantum error correction

    CERN Document Server

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-01-01

    We develop the theory of entanglement-assisted quantum error correcting (EAQEC) codes, a generalization of the stabilizer formalism to the setting in which the sender and receiver have access to pre-shared entanglement. Conventional stabilizer codes are equivalent to dual-containing symplectic codes. In contrast, EAQEC codes do not require the dual-containing condition, which greatly simplifies their construction. We show how any quaternary classical code can be made into a EAQEC code. In particular, efficient modern codes, like LDPC codes, which attain the Shannon capacity, can be made into EAQEC codes attaining the hashing bound. In a quantum computation setting, EAQEC codes give rise to catalytic quantum codes which maintain a region of inherited noiseless qubits. We also give an alternative construction of EAQEC codes by making classical entanglement assisted codes coherent.

  15. Experimental repetitive quantum error correction.

    Science.gov (United States)

    Schindler, Philipp; Barreiro, Julio T; Monz, Thomas; Nebendahl, Volckmar; Nigg, Daniel; Chwalla, Michael; Hennrich, Markus; Blatt, Rainer

    2011-05-27

    The computational potential of a quantum processor can only be unleashed if errors during a quantum computation can be controlled and corrected for. Quantum error correction works if imperfections of quantum gate operations and measurements are below a certain threshold and corrections can be applied repeatedly. We implement multiple quantum error correction cycles for phase-flip errors on qubits encoded with trapped ions. Errors are corrected by a quantum-feedback algorithm using high-fidelity gate operations and a reset technique for the auxiliary qubits. Up to three consecutive correction cycles are realized, and the behavior of the algorithm for different noise environments is analyzed.

  16. Register file soft error recovery

    Science.gov (United States)

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  17. Deterministic treatment of model error in geophysical data assimilation

    CERN Document Server

    Carrassi, Alberto

    2015-01-01

    This chapter describes a novel approach for the treatment of model error in geophysical data assimilation. In this method, model error is treated as a deterministic process fully correlated in time. This allows for the derivation of the evolution equations for the relevant moments of the model error statistics required in data assimilation procedures, along with an approximation suitable for application to large numerical models typical of environmental science. In this contribution we first derive the equations for the model error dynamics in the general case, and then for the particular situation of parametric error. We show how this deterministic description of the model error can be incorporated in sequential and variational data assimilation procedures. A numerical comparison with standard methods is given using low-order dynamical systems, prototypes of atmospheric circulation, and a realistic soil model. The deterministic approach proves to be very competitive with only minor additional computational c...

  18. Boundary Integral Equations and A Posteriori Error Estimates

    Institute of Scientific and Technical Information of China (English)

    YU Dehao; ZHAO Longhua

    2005-01-01

    Adaptive methods have been rapidly developed and applied in many fields of scientific and engineering computing. Reliable and efficient a posteriori error estimates play key roles for both adaptive finite element and boundary element methods. The aim of this paper is to develop a posteriori error estimates for boundary element methods. The standard a posteriori error estimates for boundary element methods are obtained from the classical boundary integral equations. This paper presents hyper-singular a posteriori error estimates based on the hyper-singular integral equations. Three kinds of residuals are used as the estimates for boundary element errors. The theoretical analysis and numerical examples show that the hyper-singular residuals are good a posteriori error indicators in many adaptive boundary element computations.

  19. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service checked medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Unchecked unidosis carts showed 0.9% medication errors (264) versus 0.6% (154) in carts previously checked. In unchecked carts, 70.83% of the errors arose when the carts were set up; the rest were due to lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%), or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not given by nurses (14.09%), medication withdrawn from the unit's stock (14.62%), and errors of the pharmacy service (17.56%). Conclusions: The results point to the need to check unidosis carts and to use a computerized prescription system to avoid transcription errors. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to the hospitalization units, the error rate falls to 0.3%.

  20. Prediction of discretization error using the error transport equation

    Science.gov (United States)

    Celik, Ismail B.; Parsons, Don Roscoe

    2017-06-01

    This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.

  1. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    2011-01-01

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J. Title: Prioritising interventions against medication errors – the importance of a definition. Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark. Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors, including an index of error types for each stage in the medication process, was developed from existing terminology and through a modified Delphi process in 2008. The Delphi panel consisted of 25 interdisciplinary…

  2. Neutron multiplication error in TRU waste measurements

    Energy Technology Data Exchange (ETDEWEB)

    Veilleux, John [Los Alamos National Laboratory; Stanfield, Sean B [CCP; Wachter, Joe [CCP; Ceo, Bob [CCP

    2009-01-01

    more realistic and accurate. To do so, measurements of standards and waste drums were performed with High Efficiency Neutron Counters (HENC) located at Los Alamos National Laboratory (LANL). The data were analyzed for multiplication effects and new estimates of the multiplication error were computed. A concluding section will present alternatives for reducing the number of rejections of TRU waste containers due to neutron multiplication error.

  3. Improved Error Thresholds for Measurement-Free Error Correction

    Science.gov (United States)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10⁻³ to 10⁻⁴, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  4. PREVENTABLE ERRORS: NEVER EVENTS

    Directory of Open Access Journals (Sweden)

    Narra Gopal

    2014-07-01

    Full Text Available Operation, or any invasive procedure, is a stressful event involving risks and complications. We should be able to offer a guarantee that the right procedure will be done on the right person in the right place on their body. "Never events" are definable: they are avoidable and preventable events. Of the people affected by the consequences of surgical mistakes, 60% suffered temporary injury, 33% permanent injury and 7% death. The World Health Organization (WHO) [1] has earlier said that over seven million people across the globe suffer from preventable surgical injuries every year, a million of them even dying during or immediately after surgery. The UN body quantified the number of surgeries taking place every year globally at 234 million, noting that surgery has become common, with one in every 25 people undergoing it at any given time. Fifty percent of never events are preventable. Evidence suggests up to one in ten hospital admissions results in an adverse incident, a rate that would not be acceptable in other industries. In order to move towards a more acceptable level of safety, we need to understand how and why things go wrong and build a reliable system of working. With this system, complete prevention may not be possible, but we can reduce the error percentage [2]. To change the present concept of the patient, we first have to replace the word patient with medical customer; then our outlook also changes, and we will be more careful towards our customers.

  5. Comparison of analytical error and sampling error for contaminated soil.

    Science.gov (United States)

    Gustavsson, Björn; Luthbom, Karin; Lagerkvist, Anders

    2006-11-16

    Investigation of soil from contaminated sites requires several sample handling steps that, most likely, will induce uncertainties in the sample. The theory of sampling describes seven sampling errors that can be calculated, estimated or discussed in order to get an idea of the size of the sampling uncertainties. With the aim of comparing the size of the analytical error to the total sampling error, these seven errors were applied, estimated and discussed in a case study of a contaminated site. The manageable errors were summarized, showing a range of three orders of magnitude between the examples. The comparisons show that the quotient between the total sampling error and the analytical error is larger than 20 in most calculation examples. Exceptions were samples taken in hot spots, where some components of the total sampling error become small and the analytical error becomes large in comparison. Low concentration of contaminant, small extracted sample size and large particles in the sample contribute to the extent of uncertainty.

  6. The Usability-Error Ontology

    DEFF Research Database (Denmark)

    2013-01-01

    …in patients coming to harm. Often the root cause analysis of these adverse events can be traced back to Usability Errors in the Health Information Technology (HIT) or its interaction with users. Interoperable documentation of HIT-related Usability Errors in a consistent fashion can improve our ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems.

  7. Nested Quantum Error Correction Codes

    CERN Document Server

    Wang, Zhuo; Fan, Hen; Vedral, Vlatko

    2009-01-01

    The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, few methods are available for constructing new quantum error correction codes from old ones. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching for several short quantum codes with certain properties. Our method works for codes of all lengths and distances, and is quite efficient for constructing optimal or near-optimal codes. The two main known methods of constructing new codes from old codes in quantum error-correction theory, concatenating and pasting, can be understood in the framework of nested quantum error correction codes.

  8. Processor register error correction management

    Science.gov (United States)

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.

  9. The Usability-Error Ontology

    DEFF Research Database (Denmark)

    2013-01-01

    Interoperable data capture can improve our ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems.

  10. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    Science.gov (United States)

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  11. Measurement Error and Equating Error in Power Analysis

    Science.gov (United States)

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  12. Error quantification of the normalized right graph symbol for an errors-in-variables system

    Institute of Scientific and Technical Information of China (English)

    Lihui GENG; Shigang CUI; Zeyu XIA

    2015-01-01

    This paper proposes a novel method to quantify the error of a nominal normalized right graph symbol (NRGS) for an errors-in-variables (EIV) system corrupted with bounded noise. Following an identification framework for estimation of a perturbation model set, a worst-case v-gap error bound for the estimated nominal NRGS can be first determined from a priori and a posteriori information on the underlying EIV system. Then, an NRGS perturbation model set can be derived from a close relation between the v-gap metric of two models and H∞-norm of their NRGSs’ difference. The obtained NRGS perturbation model set paves the way for robust controller design using an H∞ loop-shaping method because it is a standard form of the well-known NCF (normalized coprime factor) perturbation model set. Finally, a numerical simulation is used to demonstrate the effectiveness of the proposed identification method.

  13. Filtered kriging for spatial data with heterogeneous measurement error variances.

    Science.gov (United States)

    Christensen, William F

    2011-09-01

    When predicting values for the measurement-error-free component of an observed spatial process, it is generally assumed that the process has a common measurement error variance. However, it is often the case that each measurement in a spatial data set has a known, site-specific measurement error variance, rendering the observed process nonstationary. We present a simple approach for estimating the semivariogram of the unobservable measurement-error-free process using a bias adjustment of the classical semivariogram formula. We then develop a new kriging predictor that filters the measurement errors. For scenarios where each site's measurement error variance is a function of the process of interest, we recommend an approach that also uses a variance-stabilizing transformation. The properties of the heterogeneous variance measurement-error-filtered kriging (HFK) predictor and variance-stabilized HFK predictor, and the improvement of these approaches over standard measurement-error-filtered kriging are demonstrated using simulation. The approach is illustrated with climate model output from the Hudson Strait area in northern Canada. In the illustration, locations with high or low measurement error variances are appropriately down- or upweighted in the prediction of the underlying process, yielding a realistically smooth picture of the phenomenon of interest.

  14. Precomparator and postcomparator errors in monopulse.

    Energy Technology Data Exchange (ETDEWEB)

    Bickel, Douglas Lloyd

    2013-02-01

    Monopulse radar is a well-established technique for extracting accurate target location information in the presence of target scintillation. It relies on the comparison of at least two patterns being received simultaneously by the antenna. These two patterns are designed to differ in the direction in which we wish to obtain the target angle information. The two patterns are compared to each other through a standard method, typically by forming the ratio of the difference of the patterns to the sum of the patterns. The key to accurate angle information using monopulse is that the mapping function from the target angle to this ratio is well-behaved and well-known. Errors in the amplitude and phase of the signals prior and subsequent to the comparison operation affect the mapping function. The purpose of this report is to provide some intuition into these error effects upon the mapping function.
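
    The standard comparison the abstract refers to, the ratio of the difference of the patterns to their sum, is easy to sketch numerically. The Python fragment below assumes Gaussian beam shapes and an assumed 5% precomparator gain imbalance, and shows how such an amplitude error shifts the zero crossing of the monopulse ratio, i.e. the apparent boresight; all values are illustrative.

      import numpy as np

      theta = np.linspace(-2.0, 2.0, 4001)    # off-boresight angle, in beamwidths
      squint = 0.5                            # assumed squint between the two beams

      def pattern(t, offset):
          # Gaussian approximation of an antenna voltage pattern
          return np.exp(-2.0 * (t - offset) ** 2)

      a = pattern(theta, +squint / 2)
      b = pattern(theta, -squint / 2)
      ratio = (a - b) / (a + b)               # monotone mapping near boresight

      # A 5% precomparator gain error on channel A distorts the mapping and
      # moves the apparent boresight (zero of the ratio) off the true one.
      ratio_err = (1.05 * a - b) / (1.05 * a + b)
      shift = theta[np.argmin(np.abs(ratio_err))]
      print(f"apparent boresight shift: {shift:.3f} beamwidths")   # ~ -0.024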

  15. Random error in cardiovascular meta-analyses

    DEFF Research Database (Denmark)

    Albalawi, Zaina; McAlister, Finlay A; Thorlund, Kristian;

    2013-01-01

    BACKGROUND: Cochrane reviews are viewed as the gold standard in meta-analyses given their efforts to identify and limit systematic error which could cause spurious conclusions. The potential for random error to cause spurious conclusions in meta-analyses is less well appreciated. METHODS: We examined all reviews approved and published by the Cochrane Heart Group in the 2012 Cochrane Library that included at least one meta-analysis with 5 or more randomized trials. We used trial sequential analysis to classify statistically significant meta-analyses as true positives if their pooled sample size … but infrequently recognized, even among methodologically robust reviews published by the Cochrane Heart Group. Meta-analysts and readers should incorporate trial sequential analysis when interpreting results.

  16. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospital, education, and law-and-order type buildings). A systemic model of error causation is proposed and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated.

  17. Mimicking Aphasic Semantic Errors in Normal Speech Production: Evidence from a Novel Experimental Paradigm

    Science.gov (United States)

    Hodgson, Catherine; Lambon Ralph, Matthew A.

    2008-01-01

    Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study…

  18. Spatial frequency domain error budget

    Energy Technology Data Exchange (ETDEWEB)

    Hauschildt, H; Krulewich, D

    1998-08-27

    The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output from the current error budgeting procedure is a single number estimating the net worst-case or RMS error on the workpiece. This procedure has limited ability to differentiate between low-spatial-frequency form errors and high-frequency surface finish errors. Therefore the current error budgeting procedure can lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper will describe a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output from this new procedure is the continuous spatial frequency content of errors that result on a machined part. If the machine …

  19. Reducing errors in emergency surgery.

    Science.gov (United States)

    Watters, David A K; Truskett, Philip G

    2013-06-01

    Errors are to be expected in health care. Adverse events occur in around 10% of surgical patients and may be even more common in emergency surgery. There is little formal teaching on surgical error in surgical education and training programmes despite their frequency. This paper reviews surgical error and provides a classification system, to facilitate learning. The approach and language used to enable teaching about surgical error was developed through a review of key literature and consensus by the founding faculty of the Management of Surgical Emergencies course, currently delivered by General Surgeons Australia. Errors may be classified as being the result of commission, omission or inition. An error of inition is a failure of effort or will and is a failure of professionalism. The risk of error can be minimized by good situational awareness, matching perception to reality, and, during treatment, reassessing the patient, team and plan. It is important to recognize and acknowledge an error when it occurs and then to respond appropriately. The response will involve rectifying the error where possible but also disclosing, reporting and reviewing at a system level all the root causes. This should be done without shaming or blaming. However, the individual surgeon still needs to reflect on their own contribution and performance. A classification of surgical error has been developed that promotes understanding of how the error was generated, and utilizes a language that encourages reflection, reporting and response by surgeons and their teams. © 2013 The Authors. ANZ Journal of Surgery © 2013 Royal Australasian College of Surgeons.

  20. On the role of memory errors in quantum repeaters

    CERN Document Server

    Hartmann, L; Dür, W; Kraus, B

    2006-01-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two new operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e. without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an o...

  1. Error Analysis in English Language Learning

    Institute of Scientific and Technical Information of China (English)

    杜文婷

    2009-01-01

    Errors in English language learning are usually classified into interlingual errors and intralingual errors; a clear knowledge of the causes of these errors will help students learn English better.

  2. Error Analysis And Second Language Acquisition

    Institute of Scientific and Technical Information of China (English)

    王惠丽

    2016-01-01

    Based on the theories of error and error analysis, this article explores the effects of error and error analysis on second language acquisition (SLA) and offers advice to language teachers and language learners.

  3. Quantifying error distributions in crowding.

    Science.gov (United States)

    Hanus, Deborah; Vul, Edward

    2013-03-22

    When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogenous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance.

  4. Discretization error of Stochastic Integrals

    CERN Document Server

    Fukasawa, Masaaki

    2010-01-01

    Asymptotic error distribution for approximation of a stochastic integral with respect to continuous semimartingale by Riemann sum with general stochastic partition is studied. Effective discretization schemes of which asymptotic conditional mean-squared error attains a lower bound are constructed. Two applications are given; efficient delta hedging strategies with transaction costs and effective discretization schemes for the Euler-Maruyama approximation are constructed.

  5. Dual Processing and Diagnostic Errors

    Science.gov (United States)

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  6. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences in Hamadan, Iran, were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), in the 40-50 year age group (67.6%), among less-experienced personnel (58.7%), at the MSc educational level (87.5%), and among staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  7. Unit of measurement used and parent medication dosing errors.

    Science.gov (United States)

    Yin, H Shonna; Dreyer, Benard P; Ugboaja, Donna C; Sanchez, Dayana C; Paul, Ian M; Moreira, Hannah A; Rodriguez, Luis; Mendelsohn, Alan L

    2014-08-01

    Adopting the milliliter as the preferred unit of measurement has been suggested as a strategy to improve the clarity of medication instructions; teaspoon and tablespoon units may inadvertently endorse nonstandard kitchen spoon use. We examined the association between unit used and parent medication errors and whether nonstandard instruments mediate this relationship. Cross-sectional analysis of baseline data from a larger study of provider communication and medication errors. English- or Spanish-speaking parents (n = 287) whose children were prescribed liquid medications in 2 emergency departments were enrolled. Medication error defined as: error in knowledge of prescribed dose, error in observed dose measurement (compared to intended or prescribed dose); >20% deviation threshold for error. Multiple logistic regression performed adjusting for parent age, language, country, race/ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease; site. Medication errors were common: 39.4% of parents made an error in measurement of the intended dose, 41.1% made an error in the prescribed dose. Furthermore, 16.7% used a nonstandard instrument. Compared with parents who used milliliter-only, parents who used teaspoon or tablespoon units had twice the odds of making an error with the intended (42.5% vs 27.6%, P = .02; adjusted odds ratio=2.3; 95% confidence interval, 1.2-4.4) and prescribed (45.1% vs 31.4%, P = .04; adjusted odds ratio=1.9; 95% confidence interval, 1.03-3.5) dose; associations greater for parents with low health literacy and non-English speakers. Nonstandard instrument use partially mediated teaspoon and tablespoon-associated measurement errors. Findings support a milliliter-only standard to reduce medication errors. Copyright © 2014 by the American Academy of Pediatrics.
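
    The 20% deviation threshold used above to define a dosing error translates directly into code. A minimal Python sketch (the helper name is hypothetical, not from the study):

      def is_dosing_error(measured_ml, reference_ml, threshold=0.20):
          """True if the measured dose deviates from the reference dose
          (intended or prescribed) by more than the 20% threshold."""
          return abs(measured_ml - reference_ml) / reference_ml > threshold

      # e.g. a parent draws 6.5 mL when 5 mL (one teaspoon) was prescribed:
      print(is_dosing_error(6.5, 5.0))   # True -- a 30% deviation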

  8. Systematic Errors in Dimensional X-ray Computed Tomography

    DEFF Research Database (Denmark)

    A result of a measurement is usually affected by a measurement error which cannot be avoided in practice. Particularly in production metrology, systematic errors from, e.g., temperature influences, drift effects, etc., may lead to high measurement deviation with respect to the true value of the measurement. As a consequence, there is a high risk that, on the basis of defective measurement results, a manufactured product is wrongly rejected in conformance testing. According to guidelines and standards, systematic error influences on a final measurand must be avoided or at least known so … Using practical examples, the speaker wants to emphasize the importance of optimal scanning and evaluation strategies in CT metrology.

  9. Backward-gazing method for measuring solar concentrators shape errors.

    Science.gov (United States)

    Coquand, Mathieu; Henault, François; Caliot, Cyril

    2017-03-01

    This paper describes a backward-gazing method for measuring the optomechanical errors of solar concentrating surfaces. It makes use of four cameras placed near the solar receiver that simultaneously record images of the sun reflected by the optical surfaces. Simple data processing then allows reconstruction of the slope and shape errors of the surfaces. The originality of the method lies in the use of generalized quad-cell formulas and approximate mathematical relations between the slope errors of the mirrors and their reflected wavefront in the case of sun-tracking heliostats at high incidence angles. Numerical simulations demonstrate that the measurement accuracy is compliant with standard requirements of solar concentrating optics in the presence of noise or calibration errors. The method is suited to fine characterization of the optical and mechanical errors of heliostats and their facets, or to providing better control for real-time sun tracking.

  10. Error Consistency Analysis Scheme for Infrared Ultraspectral Sounding Retrieval Error Budget Estimation

    Science.gov (United States)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry, L.

    2013-01-01

    Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).

  11. Correcting the optimal resampling-based error rate by estimating the error rate of wrapper algorithms.

    Science.gov (United States)

    Bernau, Christoph; Augustin, Thomas; Boulesteix, Anne-Laure

    2013-09-01

    High-dimensional binary classification tasks, for example, the classification of microarray samples into normal and cancer tissues, usually involve a tuning parameter. By reporting the performance of the best tuning parameter value only, over-optimistic prediction errors are obtained. For correcting this tuning bias, we develop a new method which is based on a decomposition of the unconditional error rate involving the tuning procedure, that is, we estimate the error rate of wrapper algorithms as introduced in the context of internal cross-validation (ICV) by Varma and Simon (2006, BMC Bioinformatics 7, 91). Our subsampling-based estimator can be written as a weighted mean of the errors obtained using the different tuning parameter values, and thus can be interpreted as a smooth version of ICV, which is the standard approach for avoiding tuning bias. In contrast to ICV, our method guarantees intuitive bounds for the corrected error. Additionally, we suggest to use bias correction methods also to address the conceptually similar method selection bias that results from the optimal choice of the classification method itself when evaluating several methods successively. We demonstrate the performance of our method on microarray and simulated data and compare it to ICV. This study suggests that our approach yields competitive estimates at a much lower computational price.
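
    The corrected estimator described above is, in essence, a weighted mean of subsampling errors over tuning values, weighted by how often each value wins the internal tuning. The Python sketch below is a loose illustration of that idea with placeholder error estimates, not the authors' exact estimator.

      import numpy as np

      rng = np.random.default_rng(0)
      B, K = 50, 5                                     # subsampling iterations, tuning values
      errors = rng.uniform(0.15, 0.35, size=(B, K))    # placeholder error estimates

      winners = errors.argmin(axis=1)                  # tuning value selected per iteration
      weights = np.bincount(winners, minlength=K) / B  # selection frequencies

      corrected = float(weights @ errors.mean(axis=0))  # weighted mean over tuning values
      naive = float(errors.mean(axis=0).min())          # optimistic "best value" error
      print(naive, corrected)                           # corrected >= naive by construction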

  12. Elimination of Abbe error method of large-scale laser comparator

    Science.gov (United States)

    Li, Jianshuang; Zhang, Manshan; He, Mingzhao; Miao, Dongjing; Deng, Xiangrui; Li, Lianfu

    2015-02-01

    Abbe error is the inherent systematic error in all large-scale laser comparators, arising because the standard laser axis is not in line with the measured optical axis: any angular error of the moving platform results in an offset between the measured optical axis and the standard laser axis. This paper describes an algorithm that can be used to calculate the displacement of an equivalent standard laser interferometer and to eliminate the Abbe error; the algorithm can also be used to reduce the Abbe error of a large-scale laser comparator. Experimental results indicated that the uncertainty of displacement measurement due to Abbe error can be effectively reduced when the position error of the measured optical axis is taken into account.
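
    For intuition, the first-order Abbe error is the product of the Abbe offset and the tangent of the platform's angular error. A minimal Python sketch with assumed values (this is the textbook first-order relation, not the paper's algorithm):

      import math

      def abbe_error(offset_m, angle_rad):
          """First-order Abbe error: the displacement reading picks up
          offset * tan(angle) when the measured optical axis is offset
          from the standard laser axis and the platform tilts by `angle`."""
          return offset_m * math.tan(angle_rad)

      # e.g. a 50 mm Abbe offset and a 2 arcsecond pitch error:
      offset = 0.050
      angle = math.radians(2.0 / 3600.0)
      print(f"Abbe error: {abbe_error(offset, angle) * 1e9:.0f} nm")   # ~485 nm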

  13. Measurement Error Models in Astronomy

    CERN Document Server

    Kelly, Brandon C

    2011-01-01

    I discuss the effects of measurement error on regression and density estimation. I review the statistical methods that have been developed to correct for measurement error that are most popular in astronomical data analysis, discussing their advantages and disadvantages. I describe functional models for accounting for measurement error in regression, with emphasis on the methods of moments approach and the modified loss function approach. I then describe structural models for accounting for measurement error in regression and density estimation, with emphasis on maximum-likelihood and Bayesian methods. As an example of a Bayesian application, I analyze an astronomical data set subject to large measurement errors and a non-linear dependence between the response and covariate. I conclude with some directions for future research.

  14. Binary Error Correcting Network Codes

    CERN Document Server

    Wang, Qiwen; Li, Shuo-Yen Robert

    2011-01-01

    We consider network coding for networks experiencing worst-case bit-flip errors, and argue that this is a reasonable model for highly dynamic wireless network transmissions. We demonstrate that in this setup prior network error-correcting schemes can be arbitrarily far from achieving the optimal network throughput. We propose a new metric for errors under this model. Using this metric, we prove a new Hamming-type upper bound on the network capacity. We also show a commensurate lower bound based on GV-type codes that can be used for error-correction. The codes used to attain the lower bound are non-coherent (do not require prior knowledge of network topology). The end-to-end nature of our design enables our codes to be overlaid on classical distributed random linear network codes. Further, we free internal nodes from having to implement potentially computationally intensive link-by-link error-correction.

  15. Error Propagation in the Hypercycle

    CERN Document Server

    Campos, P R A; Stadler, P F

    1999-01-01

    We study analytically the steady-state regime of a network of n error-prone self-replicating templates forming an asymmetric hypercycle and its error tail. We show that the existence of a master template with a higher non-catalyzed self-replicative productivity, a, than the error tail ensures the stability of chains in which m < n - 1 templates coexist with the master species. The stability of these chains against the error tail is guaranteed for catalytic coupling strengths (K) of order of a. We find that the hypercycle becomes more stable than the chains only for K of order of a^2. Furthermore, we show that the minimal replication accuracy per template needed to maintain the hypercycle, the so-called error threshold, vanishes like sqrt(n/K) for large K and n <= 4.

  16. FPU-Supported Running Error Analysis

    OpenAIRE

    T. Zahradnický; R. Lórencz

    2010-01-01

    A-posteriori forward rounding error analyses tend to give sharper error estimates than a-priori ones, as they use actual data quantities. One such a-posteriori analysis – running error analysis – uses expressions consisting of two parts: one generates the error and the other propagates input errors to the output. This paper suggests replacing the error-generating term with an FPU-extracted rounding error estimate, which produces a sharper error bound.
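
    The error-generating term can also be obtained in software by an error-free transformation; Knuth's TwoSum, sketched below in Python (IEEE-754 doubles), recovers the exact rounding error of an addition and is a software analogue of the FPU-extracted estimate proposed in the paper, not the paper's own mechanism.

      def two_sum(a, b):
          """Knuth's TwoSum: returns (s, e) with s = fl(a + b) and
          a + b = s + e exactly in IEEE-754 arithmetic."""
          s = a + b
          a_virtual = s - b
          b_virtual = s - a_virtual
          e = (a - a_virtual) + (b - b_virtual)
          return s, e

      s, e = two_sum(1.0, 1e-16)
      print(s, e)   # 1.0 1e-16 -- the rounding error is recovered exactly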

  17. Specification and Measurement of Mid-Frequency Wavefront Errors

    Institute of Scientific and Technical Information of China (English)

    XUAN Bin; XIE Jing-jiang

    2006-01-01

    Mid-frequency wavefront errors can be of the greatest importance for some optical components, but they are not explicitly covered by the corresponding international standards such as ISO 10110, and the testing methods for these errors also have many aspects to be improved. This paper gives an overview of the specifications, especially of the PSD specification. NIF, developed in the United States, and XMM, developed in Europe, have both introduced new testing methods.

  18. Variability and errors when applying the BIRADS mammography classification.

    Science.gov (United States)

    Boyer, Bruno; Canale, Sandra; Arfi-Rouche, Julia; Monzani, Quentin; Khaled, Wassef; Balleyguier, Corinne

    2013-03-01

    To standardize mammographic reporting, the American College of Radiology developed the Breast Imaging Reporting and Data System (BIRADS) lexicon. However, wide variability is observed in practice in the application of the BIRADS terminology, and this leads to classification errors. This review analyses the reasons for variations in BIRADS mammography, describes the types of errors made by readers with illustrated examples, and details BIRADS category 3, which is the most difficult category to use in practice.

  19. Reducing Clinical Errors in Cancer Education: Interpreter Training

    OpenAIRE

    2010-01-01

    Over 22 million US residents are limited English proficient. Hospitals often call upon untrained persons to interpret. There is a dearth of information on errors in medical interpreting and their impact upon cancer education. We conducted an experimental study of standardized medical interpreting training on interpreting errors in the cancer encounter, by comparing trained and untrained interpreters, using identical content. Nine interpreted cancer encounters with identical scripts were recor...

  20. On error distance of Reed-Solomon codes

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The complexity of decoding the standard Reed-Solomon code is a well-known open problem in coding theory. The main problem is to compute the error distance of a received word. Using the Weil bound for character sums, we show that the error distance can be determined precisely when the degree of the received word is small. As an application of our method, we give a significant improvement of the recent bound of Cheng-Murray on the non-existence of deep holes (words with maximal error distance).

  1. On error distance of Reed-Solomon codes

    Institute of Scientific and Technical Information of China (English)

    LI YuJuan; WAN DaQing

    2008-01-01

    The complexity of decoding the standard Reed-Solomon code is a well-known open problem in coding theory. The main problem is to compute the error distance of a received word. Using the Weil bound for character sums, we show that the error distance can be determined precisely when the degree of the received word is small. As an application of our method, we give a significant improvement of the recent bound of Cheng-Murray on the non-existence of deep holes (words with maximal error distance).

  2. OPTIMAL ERROR ESTIMATES OF THE PARTITION OF UNITY METHOD WITH LOCAL POLYNOMIAL APPROXIMATION SPACES

    Institute of Scientific and Technical Information of China (English)

    Yun-qing Huang; Wei Li; Fang Su

    2006-01-01

    In this paper, we provide a theoretical analysis of the partition of unity finite element method (PUFEM), which belongs to the family of meshfree methods. The usual error analysis only shows the order of the error estimate to be the same as that of the local approximations [12]. Using standard linear finite element basis functions as the partition of unity and polynomials as the local approximation space, in the 1-d case we derive optimal-order error estimates for PUFEM interpolants. Our analysis shows that the error estimate is of one order higher than the local approximations. The interpolation error estimates yield optimal error estimates for PUFEM solutions of elliptic boundary value problems.

  3. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    Science.gov (United States)

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  4. An Empirical State Error Covariance Matrix Orbit Determination Example

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2015-01-01

    … is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses will look at the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.

  5. Perancangan Fasilitas Kerja untuk Mereduksi Human Error

    Directory of Open Access Journals (Sweden)

    Harmein Nasution

    2012-01-01

    Full Text Available Work equipment and environments that are not designed ergonomically can cause physical exhaustion in workers. As a result of that physical exhaustion, many defects can occur in the production lines due to human error, along with musculoskeletal complaints. To overcome those effects, we applied methods for analyzing the workers' posture based on the SNQ (Standard Nordic Questionnaire), PLIBEL, QEC (Quick Exposure Check) and biomechanics. Moreover, we applied those methods to design rolling machines and egrek grips ergonomically, so that the defects on those production lines can be minimized.

  6. ['Gold standard', not 'golden standard'

    NARCIS (Netherlands)

    Claassen, J.A.H.R.

    2005-01-01

    In medical literature, both 'gold standard' and 'golden standard' are employed to describe a reference test used for comparison with a novel method. The term 'gold standard' in its current sense in medical research was coined by Rudd in 1979, in reference to the monetary gold standard. In the same w

  7. ['Gold standard', not 'golden standard'

    NARCIS (Netherlands)

    Claassen, J.A.H.R.

    2005-01-01

    In medical literature, both 'gold standard' and 'golden standard' are employed to describe a reference test used for comparison with a novel method. The term 'gold standard' in its current sense in medical research was coined by Rudd in 1979, in reference to the monetary gold standard. In the same

  8. An Empirical State Error Covariance Matrix for Batch State Estimation

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next, it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the …
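
    A standard way to realize this idea, though not necessarily the author's exact construction, is to rescale the theoretical least-squares covariance by the average weighted residual variance, so that unmodeled errors showing up in the residuals inflate the covariance. A NumPy sketch with simulated data:

      import numpy as np

      rng = np.random.default_rng(1)

      # Linear measurement model y = H x + v with weights W = R^{-1}.
      n, m = 3, 40
      H = rng.normal(size=(m, n))
      x_true = np.array([1.0, -2.0, 0.5])
      sigma = 0.1
      y = H @ x_true + rng.normal(scale=sigma, size=m)
      W = np.eye(m) / sigma ** 2

      normal_matrix = H.T @ W @ H
      x_hat = np.linalg.solve(normal_matrix, H.T @ W @ y)

      # Theoretical covariance: maps only the assumed observation errors.
      P_theory = np.linalg.inv(normal_matrix)

      # Empirical variant: scale by the average weighted residual variance,
      # so errors from any source present in the residuals are accounted for.
      r = y - H @ x_hat
      s2 = float(r @ W @ r) / (m - n)
      P_empirical = s2 * P_theory
      print(np.diag(P_theory), np.diag(P_empirical), sep="\n")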

  9. Triphasic MRI of pelvic organ descent: sources of measurement error

    Energy Technology Data Exchange (ETDEWEB)

    Morren, Geert L. [Bowel and Digestion Centre, The Oxford Clinic, 38 Oxford Terrace, Christchurch (New Zealand)]. E-mail: geert_morren@hotmail.com; Balasingam, Adrian G. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Wells, J. Elisabeth [Department of Public Health and General Medicine, Christchurch School of Medicine, St. Elmo Courts, Christchurch (New Zealand); Hunter, Anne M. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Coates, Richard H. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Perry, Richard E. [Bowel and Digestion Centre, The Oxford Clinic, 38 Oxford Terrace, Christchurch (New Zealand)

    2005-05-01

    Purpose: To identify sources of error when measuring pelvic organ displacement during straining using triphasic dynamic magnetic resonance imaging (MRI). Materials and methods: Ten healthy nulliparous woman underwent triphasic dynamic 1.5 T pelvic MRI twice with 1 week between studies. The bladder was filled with 200 ml of a saline solution, the vagina and rectum were opacified with ultrasound gel. T2 weighted images in the sagittal plane were analysed twice by each of the two observers in a blinded fashion. Horizontal and vertical displacement of the bladder neck, bladder base, introitus vaginae, posterior fornix, cul-de sac, pouch of Douglas, anterior rectal wall, anorectal junction and change of the vaginal axis were measured eight times in each volunteer (two images, each read twice by two observers). Variance components were calculated for subject, observer, week, interactions of these three factors, and pure error. An overall standard error of measurement was calculated for a single observation by one observer on a film from one woman at one visit. Results: For the majority of anatomical reference points, the range of displacements measured was wide and the overall measurement error was large. Intra-observer error and week-to-week variation within a subject were important sources of measurement error. Conclusion: Important sources of measurement error when using triphasic dynamic MRI to measure pelvic organ displacement during straining were identified. Recommendations to minimize those errors are made.

  10. The effect of model errors in variational assimilation

    Science.gov (United States)

    Wergen, Werner

    1992-08-01

    A linearized, one-dimensional shallow water model is used to investigate the effect of model errors in four-dimensional variational assimilation. A suitable initialization scheme for variational assimilation is proposed. Introducing deliberate phase speed errors in the model, the results from variational assimilation are compared to standard analysis/forecast cycle experiments. While the latter draws to the data and reflects the model errors only in the data-void areas, variational assimilation with the model used as a strong constraint is shown to distribute the model errors over the entire analysis domain. The implications for verification and diagnostics are discussed. Temporal weighting of the observations can reduce the errors towards the end of the assimilation period, but may deteriorate the subsequent forecasts. An extension to variational assimilation is proposed, which seeks to determine from the observations not only the initial state but also some of the tunable parameters of the model. The potential usefulness of this approach for parameterization studies and for a separation of forecast errors into model and analysis errors is discussed. Finally, variational assimilations with the model used as a weak constraint are presented. While showing a good performance in the assimilation, forecasts can suffer severely if the extra terms in the equations, up to which the model is enforced, are unable to compensate for the real model error. In the discussion, an overall appraisal of both assimilation methods is given.
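
    As a toy illustration of strong-constraint variational assimilation (a two-dimensional rotation model, not the paper's shallow water model), the Python sketch below fits an initial state to a window of observations; giving the assimilating model a deliberately wrong phase speed shows the model error being absorbed into the analysed initial state, as described above. All parameter values are invented.

      import numpy as np
      from scipy.optimize import minimize

      def rotation(phi):
          return np.array([[np.cos(phi), -np.sin(phi)],
                           [np.sin(phi),  np.cos(phi)]])

      M_true, M_model = rotation(0.30), rotation(0.35)   # deliberate phase-speed error
      rng = np.random.default_rng(2)
      x0_true, steps = np.array([1.0, 0.0]), 10

      obs, x = [], x0_true
      for _ in range(steps):
          x = M_true @ x
          obs.append(x + rng.normal(scale=0.01, size=2))

      def cost(x0):
          # strong constraint: the (erroneous) model is enforced exactly
          x, j = x0, 0.0
          for y in obs:
              x = M_model @ x
              j += np.sum((x - y) ** 2)
          return j

      x0_analysed = minimize(cost, np.zeros(2)).x
      print("analysed:", x0_analysed, "truth:", x0_true)  # offset absorbs model error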

  11. The uncorrected refractive error challenge

    Directory of Open Access Journals (Sweden)

    Kovin Naidoo

    2016-11-01

    Full Text Available Refractive error affects people of all ages, socio-economic statuses and ethnic groups. The most recent statistics estimate that, worldwide, 32.4 million people are blind and 191 million people have vision impairment. Vision impairment has been defined based on distance visual acuity only, and uncorrected distance refractive error (mainly myopia) is the single biggest cause of worldwide vision impairment. However, when we also consider near visual impairment, it is clear that even more people are affected. Research estimated that the number of people with vision impairment due to uncorrected distance refractive error was 107.8 million [1], and the number of people affected by uncorrected near refractive error was 517 million, giving a total of 624.8 million people.

  12. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton's meth...

  13. Error Analysis in Mathematics Education.

    Science.gov (United States)

    Radatz, Hendrik

    1979-01-01

    Five types of errors in an information-processing classification are discussed: language difficulties; difficulties in obtaining spatial information; deficient mastery of prerequisite skills, facts, and concepts; incorrect associations; and application of irrelevant rules. (MP)

  14. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  15. Aging transition by random errors

    Science.gov (United States)

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-02-01

    In this paper, the effects of random errors on oscillating behaviors are studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise are employed to represent measurement errors in the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It is demonstrated that when the random errors are uniform random noise, a change of the noise intensity can effectively increase the robustness of the system. When the random errors are normal random noise, increasing the variance can also enhance the robustness of the system, provided that the probability that aging transition occurs reaches a certain threshold; the opposite conclusion is obtained when the probability is below the threshold. These findings provide an alternative candidate for controlling the critical value of aging transition in a coupled oscillator system composed of active and inactive oscillators in practice.
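
    A minimal numerical sketch of this setting, assuming the usual mean-field-coupled Stuart-Landau model and invented parameter values, simulates a population in which a fraction p of oscillators is inactive and uniform random errors perturb the bifurcation parameter:

      import numpy as np

      rng = np.random.default_rng(3)
      N, K, omega, dt, steps = 100, 8.0, 3.0, 0.01, 2000
      p = 0.7                                            # fraction of inactive oscillators

      alpha = np.where(np.arange(N) < p * N, -2.0, 2.0)  # inactive: alpha < 0
      alpha = alpha + rng.uniform(-0.5, 0.5, N)          # uniform random errors on alpha

      z = rng.normal(size=N) + 1j * rng.normal(size=N)
      for _ in range(steps):                             # forward Euler integration
          mean_field = z.mean()
          dz = (alpha + 1j * omega - np.abs(z) ** 2) * z + K * (mean_field - z)
          z = z + dt * dz

      # An order parameter |<z>| near zero signals the aged (quenched) state.
      print("order parameter:", abs(z.mean()))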

  16. Aging transition by random errors

    Science.gov (United States)

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-01-01

    In this paper, the effects of random errors on oscillating behaviors are studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise are employed to represent measurement errors in the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It is demonstrated that when the random errors are uniform random noise, a change of the noise intensity can effectively increase the robustness of the system. When the random errors are normal random noise, increasing the variance can also enhance the robustness of the system, provided that the probability that aging transition occurs reaches a certain threshold; the opposite conclusion is obtained when the probability is below the threshold. These findings provide an alternative candidate for controlling the critical value of aging transition in a coupled oscillator system composed of active and inactive oscillators in practice. PMID:28198430

  17. Accounting standards

    NARCIS (Netherlands)

    Stellinga, B.; Mügge, D.

    2014-01-01

    The European and global regulation of accounting standards have witnessed remarkable changes over the past twenty years. In the early 1990s, EU accounting practices were fragmented along national lines and US accounting standards were the de facto global standards. Since 2005, all EU listed companie

  18. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number of such frames. In particular, we argue that a three-error-correcting BCH code is the best choice for the component code in such systems.

  19. Errors in Chemical Sensor Measurements

    Directory of Open Access Journals (Sweden)

    Artur Dybko

    2001-06-01

    Full Text Available Various types of errors during measurements with ion-selective electrodes, ion-sensitive field effect transistors, and fibre optic chemical sensors are described. The errors are divided according to their nature and place of origin into chemical, instrumental and non-chemical. The influence of interfering ions, leakage of membrane components, and liquid junction potential, as well as of sensor wiring, ambient light and temperature, is presented.

  20. Disentangling timing and amplitude errors in streamflow simulations

    Science.gov (United States)

    Seibert, Simon Paul; Ehret, Uwe; Zehe, Erwin

    2016-09-01

    This article introduces an improvement in the Series Distance (SD) approach for the improved discrimination and visualization of timing and magnitude uncertainties in streamflow simulations. SD emulates visual hydrograph comparison by distinguishing periods of low flow and periods of rise and recession in hydrological events. Within these periods, it determines the distance of two hydrographs not between points of equal time but between points that are hydrologically similar. The improvement comprises an automated procedure to emulate visual pattern matching, i.e. the determination of an optimal level of generalization when comparing two hydrographs, a scaled error model which is better applicable across large discharge ranges than its non-scaled counterpart, and "error dressing", a concept to construct uncertainty ranges around deterministic simulations or forecasts. Error dressing includes an approach to sample empirical error distributions by increasing variance contribution, which can be extended from standard one-dimensional distributions to the two-dimensional distributions of combined time and magnitude errors provided by SD. In a case study we apply both the SD concept and a benchmark model (BM) based on standard magnitude errors to a 6-year time series of observations and simulations from a small alpine catchment. Time-magnitude error characteristics for low flow and rising and falling limbs of events were substantially different. Their separate treatment within SD therefore preserves useful information which can be used for differentiated model diagnostics, and which is not contained in standard criteria like the Nash-Sutcliffe efficiency. Construction of uncertainty ranges based on the magnitude of errors of the BM approach and the combined time and magnitude errors of the SD approach revealed that the BM-derived ranges were visually narrower and statistically superior to the SD ranges. This suggests that the combined use of time and magnitude errors to

  1. Rectifying calibration error of Goldmann applanation tonometer is easy!

    Directory of Open Access Journals (Sweden)

    Nikhil S Choudhari

    2014-01-01

    Full Text Available Purpose: The Goldmann applanation tonometer (GAT) is the current gold standard tonometer. However, its calibration error is common and can go unnoticed in clinics, and repair by the manufacturer has limitations. The purpose of this report is to describe a self-taught technique for rectifying the calibration error of GAT. Materials and Methods: Twenty-nine slit-lamp-mounted Haag-Streit Goldmann tonometers (Model AT 900 C/M; Haag-Streit, Switzerland) were included in this cross-sectional interventional pilot study. The technique for rectifying the calibration error of the tonometer involved cleaning and lubrication of the instrument, followed by alignment of the weights when lubrication alone didn't suffice. We followed the South East Asia Glaucoma Interest Group's definition of calibration error tolerance (acceptable GAT calibration error within ±2, ±3 and ±4 mm Hg at the 0, 20 and 60-mm Hg testing levels, respectively). Results: Twelve of 29 (41.3%) GATs were out of calibration. The range of positive and negative calibration error at the clinically most important 20-mm Hg testing level was 0.5 to 20 mm Hg and -0.5 to -18 mm Hg, respectively. Cleaning and lubrication alone sufficed to rectify the calibration error of 11 (91.6%) faulty instruments. Only one (8.3%) faulty GAT required alignment of the counterweight. Conclusions: Rectification of the calibration error of GAT is possible in-house. Cleaning and lubrication of GAT can be carried out even by eye care professionals and may suffice to rectify calibration error in the majority of faulty instruments. Such an exercise may drastically reduce the downtime of the gold standard tonometer.
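
    The tolerance definition cited above maps to a one-line check. A small Python sketch (the helper name is hypothetical):

      # Acceptable GAT calibration error: within +/-2, +/-3 and +/-4 mm Hg
      # at the 0, 20 and 60 mm Hg testing levels, respectively.
      TOLERANCE_MMHG = {0: 2.0, 20: 3.0, 60: 4.0}

      def within_calibration(errors_mmhg):
          """errors_mmhg maps testing level (mm Hg) -> measured calibration error."""
          return all(abs(errors_mmhg[level]) <= tol
                     for level, tol in TOLERANCE_MMHG.items())

      print(within_calibration({0: 0.5, 20: -2.0, 60: 3.5}))   # True
      print(within_calibration({0: 0.5, 20: 4.0, 60: 3.5}))    # False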

  2. Quantum error correction for beginners.

    Science.gov (United States)

    Devitt, Simon J; Munro, William J; Nemoto, Kae

    2013-07-01

Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation now constitute a much larger field, and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future.

  3. Dominant modes via model error

    Science.gov (United States)

    Yousuff, A.; Breida, M.

    1992-01-01

    Obtaining a reduced model of a stable mechanical system with proportional damping is considered. Such systems can be conveniently represented in modal coordinates. Two popular schemes, the modal cost analysis and the balancing method, offer simple means of identifying dominant modes for retention in the reduced model. The dominance is measured via the modal costs in the case of modal cost analysis and via the singular values of the Gramian-product in the case of balancing. Though these measures do not exactly reflect the more appropriate model error, which is the H2 norm of the output-error between the full and the reduced models, they do lead to simple computations. Normally, the model error is computed after the reduced model is obtained, since it is believed that, in general, the model error cannot be easily computed a priori. The authors point out that the model error can also be calculated a priori, just as easily as the above measures. Hence, the model error itself can be used to determine the dominant modes. Moreover, the simplicity of the computations does not presume any special properties of the system, such as small damping, orthogonal symmetry, etc.

  4. Assessing the impact of differential genotyping errors on rare variant tests of association.

    Science.gov (United States)

    Mayer-Jochimsen, Morgan; Fast, Shannon; Tintle, Nathan L

    2013-01-01

Genotyping errors are well-known to impact the power and type I error rate in single marker tests of association. Genotyping errors that happen according to the same process in cases and controls are known as non-differential genotyping errors, whereas genotyping errors that occur with different processes in the cases and controls are known as differential genotyping errors. For single marker tests, non-differential genotyping errors reduce power, while differential genotyping errors increase the type I error rate. However, little is known about the behavior of the new generation of rare variant tests of association in the presence of genotyping errors. In this manuscript we use a comprehensive simulation study to explore the effects of numerous factors on the type I error rate of rare variant tests of association in the presence of differential genotyping error. We find that increased sample size, decreased minor allele frequency, and an increased number of single nucleotide variants (SNVs) included in the test all increase the type I error rate in the presence of differential genotyping errors. We also find that the greater the relative difference in case-control genotyping error rates the larger the type I error rate. Lastly, as is the case for single marker tests, genotyping errors classifying the common homozygote as the heterozygote inflate the type I error rate significantly more than errors classifying the heterozygote as the common homozygote. In general, our findings are in line with results from single marker tests. To ensure that type I error inflation does not occur when analyzing next-generation sequencing data, careful consideration of study design (e.g. use of randomization), caution in meta-analysis and in using publicly available controls, and the use of standard quality control metrics are critical.
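
    A minimal simulation sketch of the mechanism this record describes, assuming invented error rates, sample sizes and a naive burden-style test (none of these values or design choices come from the study itself):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n, n_snv, maf, reps = 1000, 20, 0.01, 500
        hits = 0
        for _ in range(reps):
            # null model: no true association between variants and status
            g_cases = rng.binomial(2, maf, (n, n_snv)).astype(float)
            g_ctrls = rng.binomial(2, maf, (n, n_snv)).astype(float)
            # differential error in cases only: common homozygote read as heterozygote
            flip = (g_cases == 0) & (rng.random((n, n_snv)) < 0.005)
            g_cases[flip] = 1.0
            # naive burden test on per-person minor-allele counts
            _, p = stats.ttest_ind(g_cases.sum(axis=1), g_ctrls.sum(axis=1))
            hits += p < 0.05
        print("empirical type I error at alpha = 0.05:", hits / reps)

    With the same flip rate applied to both groups (non-differential error), the case and control distributions remain identical under the null and the empirical rejection rate returns to roughly the nominal 5%.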

  5. Ethics, standards, and TQM.

    Science.gov (United States)

    Botticelli, M G

    1995-04-01

The most important ethical issue for our profession is the responsibility to assure that the care delivered by our colleagues and ourselves meets a self-imposed standard of excellence. There is anecdotal and experimental evidence that we have not fulfilled this obligation. Peer review has proven, for a number of reasons, to be ineffective; however, improvements in the epidemiologic sciences should provide better standards, and total quality management (TQM) might prove to be of value in monitoring, comparing and improving the decisions made by physicians. Its promise lies in its emphasis on statistical analysis, its focus on systematic rather than human error, and its use of outcomes as standards. These methods, however, should not diminish our other professional responsibilities: altruism, peer review, and, in Hippocrates' words, "to prescribe regimens for the good of our patients, and never do harm to anyone."

  6. Harmless error analysis: How do judges respond to confession errors?

    Science.gov (United States)

    Wallace, D Brian; Kassin, Saul M

    2012-04-01

    In Arizona v. Fulminante (1991), the U.S. Supreme Court opened the door for appellate judges to conduct a harmless error analysis of erroneously admitted, coerced confessions. In this study, 132 judges from three states read a murder case summary, evaluated the defendant's guilt, assessed the voluntariness of his confession, and responded to implicit and explicit measures of harmless error. Results indicated that judges found a high-pressure confession to be coerced and hence improperly admitted into evidence. As in studies with mock jurors, however, the improper confession significantly increased their conviction rate in the absence of other evidence. On the harmless error measures, judges successfully overruled the confession when required to do so, indicating that they are capable of this analysis.

  7. Explaining errors in children's questions.

    Science.gov (United States)

    Rowland, Caroline F

    2007-07-01

The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust. B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813-842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children's speech, and that errors occur when children resort to other operations to produce questions [e.g. Dabrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83-102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157-181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.

  8. Pauli Exchange Errors in Quantum Computation

    CERN Document Server

    Ruskai, M B

    2000-01-01

    We argue that a physically reasonable model of fault-tolerant computation requires the ability to correct a type of two-qubit error which we call Pauli exchange errors as well as one qubit errors. We give an explicit 9-qubit code which can handle both Pauli exchange errors and all one-bit errors.

  9. Error-associated behaviors and error rates for robotic geology

    Science.gov (United States)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  11. Least squares evaluations for form and profile errors of ellipse using coordinate data

    Science.gov (United States)

    Liu, Fei; Xu, Guanghua; Liang, Lin; Zhang, Qing; Liu, Dan

    2016-09-01

To improve the measurement and evaluation of the form error of an elliptic section, an evaluation method based on least squares fitting is investigated to analyze the form and profile errors of an ellipse using coordinate data. Two error indicators for defining ellipticity are discussed, namely the form error and the profile error, and the difference between the two is considered the main parameter for evaluating the machining quality of surface and profile. Because the form error and the profile error rely on different evaluation benchmarks, the major axis and the foci rather than the centre of the ellipse are used as the evaluation benchmarks, which allows a tolerance range to be evaluated accurately with the form error and profile error of the workpiece separated. Additionally, an evaluation program based on the LS model is developed to extract the form error and the profile error of the elliptic section, and it is well suited for separating the two errors in a standard program. Finally, the evaluation method for the form and profile errors of the ellipse is applied to the measurement of the skirt line of a piston, and the results indicate the effectiveness of the evaluation. This approach provides new evaluation indicators for the measurement of form and profile errors of an ellipse, has better accuracy, and can thus be used to resolve the difficulty of measuring and evaluating the piston in industrial production.
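
    The record's evaluation program is not available here; the sketch below shows only the algebraic least-squares conic fit that such an evaluation can start from. The function name and the simulated data are assumptions, and the major-axis/foci benchmarking described above is not implemented:

        import numpy as np

        def fit_conic_ls(x, y):
            # Least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
            D = np.column_stack([x * x, x * y, y * y, x, y])
            coef, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)
            return coef

        # hypothetical measured points on a nominal ellipse with small noise
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 2.0 * np.pi, 200)
        x = 5.0 * np.cos(t) + 1.0 + rng.normal(0.0, 0.01, t.size)
        y = 3.0 * np.sin(t) - 2.0 + rng.normal(0.0, 0.01, t.size)
        coef = fit_conic_ls(x, y)
        resid = np.column_stack([x * x, x * y, y * y, x, y]) @ coef - 1.0
        print("peak-to-valley algebraic residual:", resid.max() - resid.min())

    The algebraic residual is a crude stand-in for the separated form and profile errors, which the paper evaluates against the fitted major axis and foci rather than against this conic equation.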

  12. Detecting errors in micro and trace analysis by using statistics

    DEFF Research Database (Denmark)

    Heydorn, K.

    1993-01-01

    By assigning a standard deviation to each step in an analytical method it is possible to predict the standard deviation of each analytical result obtained by this method. If the actual variability of replicate analytical results agrees with the expected, the analytical method is said...... to results for chlorine in freshwater from BCR certification analyses by highly competent analytical laboratories in the EC. Titration showed systematic errors of several percent, while radiochemical neutron activation analysis produced results without detectable bias....

  13. Error-disturbance uncertainty relations studied in neutron optics

    Science.gov (United States)

    Sponar, Stephan; Sulyok, Georg; Demirel, Bulent; Hasegawa, Yuji

    2016-09-01

Heisenberg's uncertainty principle is probably the most famous statement of quantum physics, and its essential aspects are well described by a formulation in terms of standard deviations. However, a naive Heisenberg-type error-disturbance relation is not valid. An alternative universally valid relation was derived by Ozawa in 2003. Though universally valid, Ozawa's relation is not optimal. Recently, Branciard derived a tight error-disturbance uncertainty relation (EDUR), describing the optimal trade-off between error and disturbance. Here, we report a neutron-optical experiment that records the error of a spin-component measurement, as well as the disturbance caused on another spin-component, to test EDURs. We demonstrate that Heisenberg's original EDUR is violated, and that Ozawa's and Branciard's EDURs are valid, in a wide range of experimental parameters, applying a new measurement procedure referred to as the two-state method.

  14. Error Detection And Correction Systems For Optical Disk: Issues Of Media Defect Distribution, Defect Growth, Error Management, And Disk Longevity

    Science.gov (United States)

    Nugent, William R.

    1987-01-01

We examine the principal systems of Error Detection and Correction (EDAC) which have been recently proposed as U.S. standards for optical disks, discuss the two principal methodologies employed, Reed-Solomon codes and product codes, and describe the variations in their operating characteristics and their overhead in disk space. We then present current knowledge of the nature of defect distributions on optical media, including bit error rates, the incidence and extents of clustered errors and burst errors, and the controversial aspects of correlation between these forms of error. We show that if such forms are correlated then stronger EDAC systems are needed than if they are not. We discuss the nature of defect growth over time and its likely causes, and present the differing views on the growth of burst errors, including nucleation and incubation effects which are not detectable in new media. We exhibit a mathematical model of a currently proposed end-of-life defect distribution for write-once media and discuss its implications for EDAC selection. We show that standardization of an EDAC system unifies the data recording process and permits data interchange, but that enhancements in EDAC computation during reading can achieve higher than normal EDAC performance, though sometimes at the expense of decoding time. Finally we examine vendor estimates of disk longevity and possible means of life extension where archival recording is desired.

  15. Strategies for reducing medication errors in the emergency department

    Directory of Open Access Journals (Sweden)

    Weant KA

    2014-07-01

    Full Text Available Kyle A Weant,1 Abby M Bailey,2 Stephanie N Baker2 1North Carolina Public Health Preparedness and Response, North Carolina Department of Health and Human Services, Raleigh, NC, 2University of Kentucky HealthCare, Department of Pharmacy Services, Department of Pharmacy Practice and Science, University of Kentucky College of Pharmacy, Lexington, KY, USA Abstract: Medication errors are an all-too-common occurrence in emergency departments across the nation. This is largely secondary to a multitude of factors that create an almost ideal environment for medication errors to thrive. To limit and mitigate these errors, it is necessary to have a thorough knowledge of the medication-use process in the emergency department and develop strategies targeted at each individual step. Some of these strategies include medication-error analysis, computerized provider-order entry systems, automated dispensing cabinets, bar-coding systems, medication reconciliation, standardizing medication-use processes, education, and emergency-medicine clinical pharmacists. Special consideration also needs to be given to the development of strategies for the pediatric population, as they can be at an elevated risk of harm. Regardless of the strategies implemented, the prevention of medication errors begins and ends with the development of a culture that promotes the reporting of medication errors, and a systematic, nonpunitive approach to their elimination. Keywords: emergency medicine, pharmacy, medication errors, pharmacists, pediatrics

  16. Eccentric error and compensation in rotationally symmetric laser triangulation

    Institute of Scientific and Technical Information of China (English)

    Wang Lei; Gao Jun; Wang Xiaojia; Johannes Eckstein; Peter Ott

    2007-01-01

Rotationally symmetric triangulation (RST) sensors have more flexibility and lower uncertainty limits because of the abaxial rotationally symmetric optical system. But if the incident laser is eccentric, the symmetry of the image will degrade, resulting in an eccentric error, especially when some part of the imaged ring is blocked. The model of rotationally symmetric triangulation that meets the Scheimpflug condition is presented in this paper, and the error from an eccentric incident laser is analysed. It is pointed out that the eccentric error is composed of two parts: one is a cosine in circumference, proportional to the eccentric departure factor, and the other is a much smaller quadratic factor of the departure. When the ring is complete, the first error factor is zero because it is integrated over the whole ring, but if some part of the ring is blocked, the first factor becomes the main error. Simulation verifies the result of the analysis. Finally, a compensation method for the error when some part of the ring is lost is presented, based on a neural network. The results of experiments show that the compensation reduces the absolute maximum error to half and the standard deviation of the error to 1/3.
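
    A small numeric sketch of the stated error structure, with an invented departure factor and only the first-order cosine term modeled: integrating that term over a complete ring cancels it, while a partially blocked ring leaves a residual bias:

        import numpy as np

        eps = 0.05                                # hypothetical eccentric departure factor
        phi = np.linspace(0.0, 2.0 * np.pi, 2000)
        err = eps * np.cos(phi)                   # first-order circumferential error term
        print(np.trapz(err, phi))                 # ~0: complete ring integrates out
        mask = phi <= 1.5 * np.pi                 # a quarter of the ring blocked
        print(np.trapz(err[mask], phi[mask]))     # ~ -eps: blocked ring leaves a bias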

  17. Measuring worst-case errors in a robot workcell

    Energy Technology Data Exchange (ETDEWEB)

    Simon, R.W.; Brost, R.C.; Kholwadwala, D.K. [Sandia National Labs., Albuquerque, NM (United States). Intelligent Systems and Robotics Center

    1997-10-01

Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.

  18. Communications standards

    CERN Document Server

    Stokes, A V

    1986-01-01

Communications Standards deals with the standardization of computer communication networks. This book examines the types of local area networks (LANs) that have been developed and looks at some of the relevant protocols in more detail. The work of Project 802 is briefly discussed, along with a protocol which has developed from one of the LAN standards and is now a de facto standard in one particular area, namely the Manufacturing Automation Protocol (MAP). Factors that affect the usage of networks, such as network management and security, are also considered. This book is divided into three sections.

  19. POSITION ERROR IN STATION-KEEPING SATELLITE

    Science.gov (United States)

    of an error in satellite orientation and the sun being in a plane other than the equatorial plane may result in errors in position determination. The nature of the errors involved is described and their magnitudes estimated.

  20. Negligence, genuine error, and litigation

    Directory of Open Access Journals (Sweden)

    Sohn DH

    2013-02-01

Full Text Available David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or of system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment in more rational oversight systems, such as health courts or no-fault systems, may reap both quantitative and qualitative benefits for a less costly and safer health system. Keywords: medical malpractice, tort reform, no fault compensation, alternative dispute resolution, system errors

  1. Large errors and severe conditions

    CERN Document Server

    Smith, D L; Van Wormer, L A

    2002-01-01

    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probabil...

  2. Redundant measurements for controlling errors

    Energy Technology Data Exchange (ETDEWEB)

    Ehinger, M. H.; Crawford, J. M.; Madeen, M. L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program.

  3. Accounting for model error due to unresolved scales within ensemble Kalman filtering

    CERN Document Server

    Mitchell, Lewis

    2014-01-01

    We propose a method to account for model error due to unresolved scales in the context of the ensemble transform Kalman filter (ETKF). The approach extends to this class of algorithms the deterministic model error formulation recently explored for variational schemes and extended Kalman filter. The model error statistic required in the analysis update is estimated using historical reanalysis increments and a suitable model error evolution law. Two different versions of the method are described; a time-constant model error treatment where the same model error statistical description is time-invariant, and a time-varying treatment where the assumed model error statistics is randomly sampled at each analysis step. We compare both methods with the standard method of dealing with model error through inflation and localization, and illustrate our results with numerical simulations on a low order nonlinear system exhibiting chaotic dynamics. The results show that the filter skill is significantly improved through th...

  4. Toward a cognitive taxonomy of medical errors.

    OpenAIRE

    Zhang, Jiajie; Patel, Vimla L.; Johnson, Todd R.; Shortliffe, Edward H.

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of e...

  5. Robust Quantum Error Correction via Convex Optimization

    CERN Document Server

    Kosut, R L; Lidar, D A

    2007-01-01

    Quantum error correction procedures have traditionally been developed for specific error models, and are not robust against uncertainty in the errors. Using a semidefinite program optimization approach we find high fidelity quantum error correction procedures which present robust encoding and recovery effective against significant uncertainty in the error system. We present numerical examples for 3, 5, and 7-qubit codes. Our approach requires as input a description of the error channel, which can be provided via quantum process tomography.

  6. Errors depending on costs in sample surveys

    OpenAIRE

    Marella, Daniela

    2007-01-01

    "This paper presents a total survey error model that simultaneously treats sampling error, nonresponse error and measurement error. The main aim for developing the model is to determine the optimal allocation of the available resources for the total survey error reduction. More precisely, the paper is concerned with obtaining the best possible accuracy in survey estimate through an overall economic balance between sampling and nonsampling error." (author's abstract)

  7. Estimation of the linear relationship between the measurements of two methods with proportional errors.

    Science.gov (United States)

    Linnet, K

    1990-12-01

The linear relationship between the measurements of two methods is estimated on the basis of a weighted errors-in-variables regression model that takes into account a proportional relationship between the standard deviations of the error distributions and the true variable levels. Weights are estimated by an iterative procedure. As shown by simulations, the regression procedure yields practically unbiased slope estimates in realistic situations. Standard errors of slope and location difference estimates are derived by the jackknife principle. For illustration, the linear relationship is estimated between the measurements of two albumin methods with proportional errors.
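
    Linnet's exact procedure, including the jackknife standard errors, is not reproduced here; the sketch below is a generic iteratively reweighted Deming regression under the same proportional-error assumption, with an invented simulated comparison of two methods:

        import numpy as np

        def weighted_deming(x, y, delta=1.0, n_iter=20):
            # delta: assumed ratio of y-error variance to x-error variance
            # weights 1/level^2 encode SDs proportional to the (positive) true level
            x, y = np.asarray(x, float), np.asarray(y, float)
            w = np.ones_like(x)
            a, b = 0.0, 1.0
            for _ in range(n_iter):
                xm, ym = np.average(x, weights=w), np.average(y, weights=w)
                sxx = np.sum(w * (x - xm) ** 2)
                syy = np.sum(w * (y - ym) ** 2)
                sxy = np.sum(w * (x - xm) * (y - ym))
                b = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
                                                 + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
                a = ym - b * xm
                level = (x + (y - a) / b) / 2.0   # proxy for the true concentration
                w = 1.0 / np.maximum(level, 1e-12) ** 2
            return a, b

        rng = np.random.default_rng(0)
        true = rng.uniform(10.0, 300.0, 100)
        x_obs = true * (1.0 + rng.normal(0.0, 0.05, true.size))
        y_obs = (2.0 + 1.1 * true) * (1.0 + rng.normal(0.0, 0.05, true.size))
        print(weighted_deming(x_obs, y_obs))       # roughly (2.0, 1.1)

    Reweighting by the estimated level keeps high-concentration samples from dominating the fit when error standard deviations grow in proportion to the level.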

  8. Error-tolerant Tree Matching

    CERN Document Server

    Oflazer, K

    1996-01-01

This paper presents an efficient algorithm for retrieving, from a database of trees, all trees that match a given query tree approximately, that is, within a certain error tolerance. It has natural language processing applications in searching for matches in example-based translation systems, and in retrieval from lexical databases containing entries of complex feature structures. The algorithm has been implemented on SparcStations, and for large randomly generated synthetic tree databases (some having tens of thousands of trees) it can associatively search for trees with a small error in a matter of tenths of a second to a few seconds.

  9. How to Avoid Errors in Error Propagation: Prediction Intervals and Confidence Intervals in Forest Biomass

    Science.gov (United States)

    Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.

    2016-12-01

    Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
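
    A hedged numeric sketch of the scaling argument, with invented standard errors rather than the sugar maple values: regression-mean error is shared by every tree on a plot and so grows linearly with tree count, while independent individual errors grow only with its square root:

        import numpy as np

        rng = np.random.default_rng(0)
        se_mean = 2.0    # hypothetical SE of the allometric mean prediction per tree (kg)
        sd_indiv = 10.0  # hypothetical residual SD of individual trees (kg)

        for n_trees in (5, 30, 300):
            # confidence-interval error: one draw per plot, applied to all trees
            shared = rng.normal(0.0, se_mean, 20_000) * n_trees
            # prediction-interval error: independent draw per tree
            indiv = rng.normal(0.0, sd_indiv, (20_000, n_trees)).sum(axis=1)
            print(n_trees, round(shared.std(), 1), round(indiv.std(), 1))

    With these invented numbers the individual term dominates for small plots and the shared mean term dominates for plots of roughly 30 trees and more, mirroring the pattern reported above.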

  10. Achieving Standardization

    DEFF Research Database (Denmark)

    Henningsson, Stefan

    2014-01-01

    competitive, national customs and regional economic organizations are seeking to establish a standardized solution for digital reporting of customs data. However, standardization has proven hard to achieve in the socio-technical e-Customs solution. In this chapter, the authors identify and describe what has...

  11. Achieving Standardization

    DEFF Research Database (Denmark)

    Henningsson, Stefan

    2016-01-01

    competitive, national customs and regional economic organizations are seeking to establish a standardized solution for digital reporting of customs data. However, standardization has proven hard to achieve in the socio-technical e-Customs solution. In this chapter, the authors identify and describe what has...

  12. Immediate error correction process following sleep deprivation

    National Research Council Canada - National Science Library

HSIEH, SHULAN; CHENG, I-CHEN; TSAI, LING-LING

    2007-01-01

...) participated in this study. Participants performed a modified letter flanker task and were instructed to make immediate error corrections on detecting performance errors. Event-related potentials (ERPs...

  13. The error of our ways

    Science.gov (United States)

    Swartz, Clifford E.

    1999-10-01

    In Victorian literature it was usually some poor female who came to see the error of her ways. How prescient of her! How I wish that all writers of manuscripts for The Physics Teacher would come to similar recognition of this centerpiece of measurement. For, Brothers and Sisters, we all err.

  14. Measurement error in geometric morphometrics.

    Science.gov (United States)

    Fruciano, Carmelo

    2016-06-01

Geometric morphometrics, a set of methods for the statistical analysis of shape once saluted as a revolutionary advancement in the analysis of morphology, is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and the extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e. variation due to bias is incorporated in the analysis and treated as biologically meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset.
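
    One common way to quantify random measurement error from repeated digitizations is a one-way ANOVA repeatability; below is a minimal sketch with made-up numbers, not the full Procrustes ANOVA used in geometric morphometrics:

        import numpy as np

        # replicate measurements: rows = specimens, columns = repeated digitizations
        x = np.array([[10.1, 10.3],
                      [12.0, 11.8],
                      [ 9.5,  9.6],
                      [11.2, 11.1],
                      [10.7, 10.9]])
        ms_within = x.var(axis=1, ddof=1).mean()              # within-specimen variance
        s2_among = x.mean(axis=1).var(ddof=1) - ms_within / x.shape[1]
        repeatability = s2_among / (s2_among + ms_within)
        print(f"repeatability (ICC-style): {repeatability:.3f}")

    Values near 1 indicate that among-specimen variation dwarfs digitizing error; low values warn that random measurement error will eat into statistical power, as the abstract cautions.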

  15. Finding errors in big data

    NARCIS (Netherlands)

    Puts, Marco; Daas, Piet; de Waal, A.G.

    No data source is perfect. Mistakes inevitably creep in. Spotting errors is hard enough when dealing with survey responses from several thousand people, but the difficulty is multiplied hugely when that mysterious beast Big Data comes into play. Statistics Netherlands is about to publish its first

  16. Having Fun with Error Analysis

    Science.gov (United States)

    Siegel, Peter

    2007-01-01

    We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…

  17. Typical errors of ESP users

    Science.gov (United States)

    Eremina, Svetlana V.; Korneva, Anna A.

    2004-07-01

The paper presents an analysis of errors made by ESP (English for specific purposes) users that are considered typical. They occur as a result of misuse of the resources of English grammar and tend to be persistent. Their origin and places of occurrence are also discussed.

  18. Theory of Test Translation Error

    Science.gov (United States)

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  19. A brief history of error.

    Science.gov (United States)

    Murray, Andrew W

    2011-10-03

    The spindle checkpoint monitors chromosome alignment on the mitotic and meiotic spindle. When the checkpoint detects errors, it arrests progress of the cell cycle while it attempts to correct the mistakes. This perspective will present a brief history summarizing what we know about the checkpoint, and a list of questions we must answer before we understand it.

  20. Error processing in Huntington's disease.

    Directory of Open Access Journals (Sweden)

    Christian Beste

Full Text Available BACKGROUND: Huntington's disease (HD) is a genetic disorder expressed by a degeneration of the basal ganglia, accompanied inter alia by dopaminergic alterations. These dopaminergic alterations are related to genetic factors, i.e., CAG-repeat expansion. The error-related negativity (Ne/ERN), a cognitive event-related potential related to performance monitoring, is generated in the anterior cingulate cortex (ACC) and supposed to depend on the dopaminergic system. The Ne is reduced in Parkinson's disease (PD). Due to a dopaminergic deficit in HD, a reduction of the Ne is also likely. Furthermore, it is assumed that movement dysfunction emerges as a consequence of dysfunctional error-feedback processing. Since dopaminergic alterations are related to the CAG-repeat, a Ne reduction may furthermore also be related to the genetic disease load. METHODOLOGY/PRINCIPAL FINDINGS: We assessed the error negativity (Ne) in a speeded reaction task under consideration of the underlying genetic abnormalities. HD patients showed a specific reduction in the Ne, which suggests impaired error processing in these patients. Furthermore, the Ne was closely related to CAG-repeat expansion. CONCLUSIONS/SIGNIFICANCE: The reduction of the Ne is likely to be an effect of the dopaminergic pathology. The result resembles findings in Parkinson's disease. As such, the Ne might be a measure for the integrity of striatal dopaminergic output function. The relation to the CAG-repeat expansion indicates that the Ne could serve as a gene-associated "cognitive" biomarker in HD.

  1. Learner Corpora without Error Tagging

    Directory of Open Access Journals (Sweden)

    Rastelli, Stefano

    2009-01-01

Full Text Available The article explores the possibility of adopting a form-to-function perspective when annotating learner corpora in order to get deeper insights about systematic features of interlanguage. A split between forms and functions (or categories) is desirable in order to avoid the "comparative fallacy" and because, especially in basic varieties, forms may precede functions (e.g., what resembles a "noun" might have a different function, or a function may show up in unexpected forms). In the computer-aided error analysis tradition, all items produced by learners are traced to a grid of error tags which is based on the categories of the target language. Differently, we believe it is possible to record and make retrievable both words and sequences of characters independently from their functional-grammatical label in the target language. For this purpose, at the University of Pavia we adapted a probabilistic POS tagger designed for L1 to L2 data. Despite the criticism that this operation can raise, we found that it is better to work with "virtual categories" rather than with errors. The article outlines the theoretical background of the project and shows some examples in which some potential of SLA-oriented (non error-based) tagging will possibly be made clearer.

  2. Input/output error analyzer

    Science.gov (United States)

    Vaughan, E. T.

    1977-01-01

Program aids in equipment assessment. Independent assembly-language utility program is designed to operate under level 27 or 31 of EXEC 8 Operating System. It scans user-selected portions of system log file, whether located on tape or mass storage, and searches for and processes I/O error (type 6) entries.

  3. Amplify Errors to Minimize Them

    Science.gov (United States)

    Stewart, Maria Shine

    2009-01-01

    In this article, the author offers her experience of modeling mistakes and writing spontaneously in the computer classroom to get students' attention and elicit their editorial response. She describes how she taught her class about major sentence errors--comma splices, run-ons, and fragments--through her Sentence Meditation exercise, a rendition…

  4. Toward a cognitive taxonomy of medical errors.

    Science.gov (United States)

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of error. Based on Reason's (1992) definition of human errors and Norman's (1986) cognitive theory of human action, we have developed a preliminary action-based cognitive taxonomy of errors that largely satisfies these four criteria in the domain of medicine. We discuss initial steps for applying this taxonomy to develop an online medical error reporting system that not only categorizes errors but also identifies problems and generates solutions.

  5. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes.

  6. Using Si-doped diamond plate of sandwich type for spatial profiling of laser beam

    Science.gov (United States)

    Shershulin, V. A.; Samoylenko, S. R.; Sedov, V. S.; Kudryavtsev, O. S.; Ralchenko, V. G.; Nozhkina, A. V.; Vlasov, I. I.; Konov, V. I.

    2017-02-01

    We demonstrated a laser beam profiling method based on imaging of the laser induced photoluminescence of a transparent single-crystal diamond plate. The luminescence at 738 nm is caused by silicon-vacancy color centers formed in the epitaxial diamond film by its doping with Si during CVD growth of the film. The on-line beam monitor was tested for a cw laser emitting at 660 nm wavelength.

  7. A brilliant sandwich type fluorescent nanostructure incorporating a compact quantum dot layer and versatile silica substrates.

    Science.gov (United States)

    Huang, Liang; Wu, Qiong; Wang, Jing; Foda, Mohamed; Liu, Jiawei; Cai, Kai; Han, Heyou

    2014-03-18

    A "hydrophobic layer in silica" structure was designed to integrate a compact quantum dot (QD) layer with high quantum yield into scalable silica hosts containing desired functionality. This was based on metal affinity driven assembly of hydrophobic QDs with versatile silica substrates and homogeneous encapsulation of organosilica/silica layers.

  8. Analytical method for coupled transmission error of helical gear system with machining errors, assembly errors and tooth modifications

    Science.gov (United States)

    Lin, Tengjiao; He, Zeyin

    2017-07-01

We present a method for analyzing the transmission error of a helical gear system with errors. First, a finite element method is used to model the gear transmission system with machining errors, assembly errors and tooth modifications, and the static transmission error is obtained. Then the bending-torsional-axial coupling dynamic model of the transmission system is established based on the lumped mass method, and the dynamic transmission error of the gear transmission system is calculated, which provides error excitation data for the analysis and control of vibration and noise of the gear system.
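
    For reference, the quantity under analysis reduces to a simple kinematic definition; the one-function sketch below uses sign and ratio conventions that are assumptions, not taken from the paper:

        def transmission_error(theta_in, theta_out, gear_ratio):
            # TE: deviation of the actual output angle from its kinematically
            # ideal value; here gear_ratio = input speed / output speed
            return theta_out - theta_in / gear_ratio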

  9. The legal understanding of intentional medical error

    Directory of Open Access Journals (Sweden)

    Totić Mirza

    2017-01-01

Full Text Available The paper is devoted to the doctor, a professional and humanist who has dedicated himself to medicine and is committed to lifelong learning, ethics and assistance to victims, even against their express consent. The theme is focused on the problem of intentional medical error, in order to negate it, in the sense that conscientious doctors should be protected from tort and free of moral burden. This paper seeks to answer the question: if the error represents a doctor's failure to the detriment of the user (patient), how should we treat his attempt, made professionally and with the best intentions, regardless of the fatal outcome? In addition, medico-legal theory and practice mention, besides the intentional medical error, also the unintentional one, whose occurrence does not entail any kind of responsibility because the doctor's behavior in that case was not inconsistent with medical ethics, standards and rules. In this regard, the author's research was based on the following questions: is there a deliberate medical error; who is ready to knowingly endanger the patient by performing medical procedures contrary to the rules (neglect, avoidance of assistance, misdiagnosis, improper treatment, indifference, discrimination); who is competent to qualify the action taken as an error (intentional or unintentional); and what evidence is required for the brutal attack on the integrity of top experts who will be charged and prosecuted? Literature abounds with assertions that medical errors are as old as medicine, which is not true. It is likewise incorrect to say that they appeared for the first time in the middle of the nineteenth century; that would roughly cancel the record of ancient medicine, bearing in mind that even before the mentioned period there had been a very successful medicine with high-quality doctors and brilliant achievements, but also with illnesses and dead persons. As far as the data on the exact occurrence of medical errors are concerned, the

  10. FMEA: a model for reducing medical errors.

    Science.gov (United States)

    Chiozza, Maria Laura; Ponzetti, Clemente

    2009-06-01

    Patient safety is a management issue, in view of the fact that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, firstly introduced within the aerospace industry in the 1960s. Early applications in the health care industry dating back to the 1990s included critical systems in the development and manufacture of drugs and in the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO), licensed a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review data available on the application of this technique to laboratory medicine. A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, as well as to point-of-care testing (POCT).
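
    The RPN referred to above is the product of severity, occurrence and detectability scores; here is a toy sketch with invented failure modes and scores, not values from the study:

        # hypothetical failure modes: (name, severity, occurrence, detection), 1-10 scales
        modes = [
            ("sample mislabeled at collection", 9, 3, 4),
            ("wrong reagent lot used",          7, 2, 3),
            ("result transcribed incorrectly",  6, 4, 2),
        ]
        for name, sev, occ, det in modes:
            rpn = sev * occ * det  # Risk Priority Number
            print(f"{name}: RPN = {rpn}")

    Ranking modes by RPN directs corrective effort to the highest-risk steps; recomputing after an intervention shows the kind of RPN reduction the record reports for cross-matching and POCT.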

  11. Noise, errors and information in quantum amplification

    CERN Document Server

    D'Ariano, G M; Maccone, L

    1997-01-01

We analyze and compare the characterization of a quantum device in terms of noise, transmitted bit-error rate (BER) and mutual information, showing how the noise description is meaningful only for Gaussian channels. After reviewing the description of a quantum communication channel, we study the insertion of an amplifier. We focus attention on the case of direct detection, where the linear amplifier has a 3 decibel noise figure, which is usually considered an unsurpassable limit, referred to as the standard quantum limit (SQL). Both noise and BER could be reduced using an ideal amplifier, which is feasible in principle. However, a reduction of noise beyond the SQL does not generally correspond to an improvement of the BER or of the mutual information. This is the case for a laser amplifier, where saturation can greatly reduce the noise figure, although there is no corresponding improvement of the BER. Such a mechanism is illustrated on the basis of Monte Carlo simulations.

  12. The check and error analysis of the BRDF experiment bench

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

In this paper, a detailed introduction is given to the checking method for a self-built BRDF experiment bench. The BRDF of a standard white plate made of polytetrafluoroethylene (PTFE) is measured with reference to an existing standard white plate whose surface reflectance is known, using theoretical approximation and relative comparison. On this basis, the BRDF value of the standard white plate at the 0.6328 μm waveband is given and the experiment bench is checked, with the relative error of the experiment bench being within 20%.

  13. (Terminology standardization)

    Energy Technology Data Exchange (ETDEWEB)

    Strehlow, R.A.

    1990-10-19

Terminological requirements in information management was but one of the principal themes of the 2nd Congress on Terminology and Knowledge Engineering. The traveler represented the American Society for Testing and Materials' Committee on Terminology, of which he is the Chair. The traveler's invited workshop emphasized terminology standardization requirements in databases of material properties as well as practical terminology standardizing methods. The congress included six workshops in addition to approximately 82 lectures and papers from terminologists, artificial intelligence practitioners, and subject specialists from 18 countries. There were approximately 292 registrants from 33 countries who participated in the congress. The congress topics were broad. Examples were the increasing use of International Standards Organization (ISO) Standards in legislated systems such as the USSR Automated Data Bank of Standardized Terminology, the enhanced Physics Training Program based on terminology standardization in Physics in the Chinese province of Inner Mongolia, and the technical concept dictionary being developed at the Japan Electronic Dictionary Research Institute, which is considered to be the key to advanced artificial intelligence applications. The more usual roles of terminology work in the areas of machine translation, indexing protocols, knowledge theory, and data transfer in several subject specialties were also addressed, along with numerous special language terminology areas.

  14. Space Saving Statistics: An Introduction to Constant Error, Variable Error, and Absolute Error.

    Science.gov (United States)

    Guth, David

    1990-01-01

    Article discusses research on orientation and mobility (O&M) for individuals with visual impairments, examining constant, variable, and absolute error (descriptive statistics that quantify fundamentally different characteristics of distributions of spatially directed behavior). It illustrates the statistics with examples, noting their…

  15. Discretization vs. Rounding Error in Euler's Method

    Science.gov (United States)

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…

  17. Correction of errors in power measurements

    DEFF Research Database (Denmark)

    Pedersen, Knud Ole Helgesen

    1998-01-01

Small errors in voltage and current measuring transformers cause inaccuracies in power measurements. In this report correction factors are derived to compensate for such errors.

  18. Error Analysis of Band Matrix Method

    OpenAIRE

    Taniguchi, Takeo; Soga, Akira

    1984-01-01

    Numerical error in the solution of the band matrix method based on the elimination method in single precision is investigated theoretically and experimentally, and the behaviour of the truncation error and the roundoff error is clarified. Some important suggestions for the useful application of the band solver are proposed by using the results of above error analysis.

  19. Error Correction in Oral Classroom English Teaching

    Science.gov (United States)

    Jing, Huang; Xiaodong, Hao; Yu, Liu

    2016-01-01

    As is known to all, errors are inevitable in the process of language learning for Chinese students. Should we ignore students' errors in learning English? In common with other questions, different people hold different opinions. All teachers agree that errors students make in written English are not allowed. For the errors students make in oral…

  20. 5 CFR 1601.34 - Error correction.

    Science.gov (United States)

    2010-01-01

5 Administrative Personnel (2010-01-01). Section 1601.34, Contribution Allocations and Interfund Transfer Requests: Error correction. Errors in processing... in the wrong investment fund, will be corrected in accordance with the error correction...

  1. STRUCTURED BACKWARD ERRORS FOR STRUCTURED KKT SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Xin-xiu Li; Xin-guo Liu

    2004-01-01

In this paper we study structured backward errors for some structured KKT systems. Normwise structured backward errors for structured KKT systems are defined, and computable formulae of the structured backward errors are obtained. Simple numerical examples show that the structured backward errors may be much larger than the unstructured ones in some cases.

  2. New Approach for Error Reduction in the Volume Penalization Method

    CERN Document Server

    Iwakami-Nakano, Wakana; Hatakeyama, Nozomu; Hattori, Yuji

    2012-01-01

The volume penalization method offers an efficient way to numerically simulate flows around complex-shaped bodies which move and/or deform in general. In this method a penalization term, which has permeability eta and a mask function, is added to a governing equation as a forcing term in order to impose different dynamics in the solid and fluid regions. In this paper we investigate the accuracy of the volume penalization method in detail. We choose the one-dimensional Burgers' equation as the governing equation since it enables extensive study and has a nonlinear term similar to that of the Navier-Stokes equations. It is confirmed that the error, which consists of the discretization/truncation error, the penalization error, the round-off error, and others, has the same features as in previous results when we use the standard definition of the mask function. As the number of grid points increases, the error converges to a non-zero constant which is equal to the penalization error. We propose a new approach for reduc...

  3. Experimental investigation of observation error in anuran call surveys

    Science.gov (United States)

    McClintock, B.T.; Bailey, L.L.; Pollock, K.H.; Simons, T.R.

    2010-01-01

    Occupancy models that account for imperfect detection are often used to monitor anuran and songbird species occurrence. However, presence-absence data arising from auditory detections may be more prone to observation error (e.g., false-positive detections) than are sampling approaches utilizing physical captures or sightings of individuals. We conducted realistic, replicated field experiments using a remote broadcasting system to simulate simple anuran call surveys and to investigate potential factors affecting observation error in these studies. Distance, time, ambient noise, and observer abilities were the most important factors explaining false-negative detections. Distance and observer ability were the best overall predictors of false-positive errors, but ambient noise and competing species also affected error rates for some species. False-positive errors made up 5% of all positive detections, with individual observers exhibiting false-positive rates between 0.5% and 14%. Previous research suggests false-positive errors of these magnitudes would induce substantial positive biases in standard estimators of species occurrence, and we recommend practices to mitigate for false positives when developing occupancy monitoring protocols that rely on auditory detections. These recommendations include additional observer training, limiting the number of target species, and establishing distance and ambient noise thresholds during surveys. © 2010 The Wildlife Society.

  4. Non-Gaussian error distribution of 7Li abundance measurements

    Science.gov (United States)

    Crandall, Sara; Houston, Stephen; Ratra, Bharat

    2015-07-01

    We construct the error distribution of 7Li abundance measurements for 66 observations (with error bars) used by Spite et al. (2012) that give A(Li) = 2.21 ± 0.065 (median and 1σ symmetrized error). This error distribution is somewhat non-Gaussian, with larger probability in the tails than is predicted by a Gaussian distribution. The 95.4% confidence limits are 3.0σ in terms of the quoted errors. We fit the data to four commonly used distributions: Gaussian, Cauchy, Student’s t and double exponential with the center of the distribution found with both weighted mean and median statistics. It is reasonably well described by a widened n = 8 Student’s t distribution. Assuming Gaussianity, the observed A(Li) is 6.5σ away from that expected from standard Big Bang Nucleosynthesis (BBN) given the Planck observations. Accounting for the non-Gaussianity of the observed A(Li) error distribution reduces the discrepancy to 4.9σ, which is still significant.
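
    A hedged sketch of the distribution-fitting step, using synthetic stand-in data in place of the 66 scaled abundance errors (which are not reproduced here); SciPy's maximum-likelihood fit returns the widened degrees of freedom and scale directly.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for the scaled abundance errors (NOT the real data):
# heavier-tailed than Gaussian, centred on zero.
errors = rng.standard_t(df=8, size=66)

# Fit the four candidate distributions and compare log-likelihoods.
for dist, name in [(stats.norm, "Gaussian"), (stats.cauchy, "Cauchy"),
                   (stats.t, "Student's t"), (stats.laplace, "double exponential")]:
    params = dist.fit(errors)
    loglike = np.sum(dist.logpdf(errors, *params))
    print(f"{name:18s} log-likelihood = {loglike:.1f}")
```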

  5. Error Analyses of the North Alabama Lightning Mapping Array (LMA)

    Science.gov (United States)

    Koshak, W. J.; Solakiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.

    2003-01-01

    Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA-MSFC and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.

  6. Managing human error in aviation.

    Science.gov (United States)

    Helmreich, R L

    1997-05-01

    Crew resource management (CRM) programs were developed to address team and leadership aspects of piloting modern airplanes. The goal is to reduce errors through team work. Human factors research and social, cognitive, and organizational psychology are used to develop programs tailored for individual airlines. Flight crews study accident case histories, group dynamics, and human error. Simulators provide pilots with the opportunity to solve complex flight problems. CRM in the simulator is called line-oriented flight training (LOFT). In automated cockpits CRM promotes the idea of automation as a crew member. Cultural aspects of aviation include professional, business, and national culture. The aviation CRM model has been adapted for training surgeons and operating room staff in human factors.

  7. Robot learning and error correction

    Science.gov (United States)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process and learning may be applied to avoiding the errors.

  8. Manson’s triple error

    Directory of Open Access Journals (Sweden)

    Delaporte F.

    2008-09-01

    Full Text Available The author discusses the significance, implications and limitations of Manson’s work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error.

  9. Escalation of error catastrophe for enzymatic self-replicators

    Science.gov (United States)

    Obermayer, B.; Frey, E.

    2009-11-01

    It is a long-standing question in origin-of-life research whether the information content of replicating molecules can be maintained in the presence of replication errors. Extending standard quasispecies models of non-enzymatic replication, we analyze highly specific enzymatic self-replication mediated through an otherwise neutral recognition region, which leads to frequency-dependent replication rates. We find a significant reduction of the maximally tolerable error rate, because the replication rate of the fittest molecules decreases with the fraction of functional enzymes. Our analysis is extended to hypercyclic couplings as an example for catalytic networks.
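
    For background, the classical quasispecies error threshold that this model extends can be stated as a worked relation (a standard textbook result, not the paper's new bound): with per-digit copying fidelity $q$, sequence length $L$ and selective advantage $\sigma$ of the master sequence, the master is maintained only if

$$ q^{L} > \frac{1}{\sigma} \quad\Longleftrightarrow\quad 1-q \;\lesssim\; \frac{\ln\sigma}{L}, $$

    so the maximally tolerable per-digit error rate falls off as $1/L$; the frequency-dependent replication rates analyzed above push this threshold lower still.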

  10. Bayesian analysis of truncation errors in chiral effective field theory

    Science.gov (United States)

    Melendez, J.; Furnstahl, R. J.; Klco, N.; Phillips, D. R.; Wesolowski, S.

    2016-09-01

    In the Bayesian approach to effective field theory (EFT) expansions, truncation errors are derived from degree-of-belief (DOB) intervals for EFT predictions. By encoding expectations about the naturalness of EFT expansion coefficients for observables, this framework provides a statistical interpretation of the standard EFT procedure where truncation errors are estimated using the order-by-order convergence of the expansion. We extend and test previous calculations of DOB intervals for chiral EFT observables, examine correlations between contributions at different orders and energies, and explore methods to validate the statistical consistency of the EFT expansion parameter. Supported in part by the NSF and the DOE.
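
    The order-by-order convergence estimate that the DOB intervals formalize can be written, in a common schematic EFT convention (an assumption for illustration, not necessarily the exact estimator of the abstract), as

$$ X_{k} = X_{\mathrm{ref}} \sum_{n=0}^{k} c_{n} Q^{n}, \qquad \Delta X_{k} \sim X_{\mathrm{ref}}\, |c_{\max}|\, Q^{k+1}, $$

    where $Q$ is the expansion parameter and $|c_{\max}| = \max_{n \le k} |c_{n}|$; the Bayesian framework replaces this point estimate with a posterior for the omitted terms under a naturalness prior on the $c_{n}$.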

  11. Offset Error Compensation in Roundness Measurement

    Institute of Scientific and Technical Information of China (English)

    朱喜林; 史俊; 李晓梅

    2004-01-01

    This paper analyses three causes of offset error in roundness measurement and presents corresponding compensation methods. The causes of offset error include excursion error resulting from the deflection of the sensor's line of measurement from the rotational center in measurement (datum center), eccentricity error resulting from the variance between the workpiece's geometrical center and the rotational center, and tilt error resulting from the tilt between the workpiece's geometrical axes and the rotational centerline.

  12. Influence of X-ray Powder Diffraction Instrument Error on Crystalline Structure Analysis

    Institute of Scientific and Technical Information of China (English)

    HUANG Qing-Ming; YU Jian-Chang; WANG Yun-Min; WU Wan-Guo

    2005-01-01

    Standard mica was used to correct the X-ray powder diffraction instrument error, and mathematical methods were employed to find the correction equation. By analyzing a mullite sample and comparing the corrected and uncorrected analysis results, we found the former to be obviously more reasonable. The conclusion is that X-ray powder diffraction instrument error greatly affects crystalline structure analysis, and the above method is convenient and effective for the correction of instrument error.

  13. FAKTOR PENYEBAB MEDICATION ERROR DI INSTALASI RAWAT DARURAT FACTORS AFFECTING MEDICATION ERRORS AT EMERGENCY UNIT

    OpenAIRE

    2014-01-01

    Background: The incidence of medication errors is an important indicator in patient safety, and medication errors are the most common medical errors. However, most medication errors can be prevented, and efforts to reduce such errors are available. Due to the high number of medication errors in the emergency unit, understanding their causes is important for designing successful interventions. This research aims to identify the types and causes of medication errors. Method: A qualitative study was used and data were col...

  14. Error-resilient DNA computation

    Energy Technology Data Exchange (ETDEWEB)

    Karp, R.M.; Kenyon, C.; Waarts, O. [Univ. of California, Berkeley, CA (United States)

    1996-12-31

    The DNA model of computation, with test tubes of DNA molecules encoding bit sequences, is based on three primitives, Extract-A-Bit, which splits a test tube into two test tubes according to the value of a particular bit x, Merge-Two-Tubes and Detect-Emptiness. Perfect operations can test the satisfiability of any boolean formula in linear time. However, in reality the Extract operation is faulty; it misclassifies a certain proportion of the strands. We consider the following problem: given an algorithm based on perfect Extract, Merge and Detect operations, convert it to one that works correctly with high probability when the Extract operation is faulty. The fundamental problem in such a conversion is to construct a sequence of faulty Extracts and perfect Merges that simulates a highly reliable Extract operation. We first determine (up to a small constant factor) the minimum number of faulty Extract operations inherently required to simulate a highly reliable Extract operation. We then go on to derive a general method for converting any algorithm based on error-free operations to an error-resilient one, and give optimal error-resilient algorithms for realizing simple n-variable boolean functions such as Conjunction, Disjunction and Parity.
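
    A toy Monte Carlo of the basic amplification idea (a hedged sketch, not the paper's optimal construction, which also recycles misclassified strands via Merge): repeatedly re-extracting the positive tube drives the false-positive rate down geometrically, at the cost of losing a fraction of true positives.

```python
import random

def faulty_extract(tube, eps):
    """Split a tube of (id, bit) strands on the bit, misclassifying w.p. eps."""
    pos, neg = [], []
    for strand in tube:
        correct = random.random() > eps
        (pos if (strand[1] == 1) == correct else neg).append(strand)
    return pos, neg

random.seed(1)
eps, k = 0.05, 4
tube = [(i, i % 2) for i in range(100_000)]   # half the strands carry bit 1
pos = tube
for _ in range(k):                            # re-extract the positive tube
    pos, _ = faulty_extract(pos, eps)

false_pos = sum(1 for _, b in pos if b == 0) / sum(1 for _, b in tube if b == 0)
retention = sum(1 for _, b in pos if b == 1) / sum(1 for _, b in tube if b == 1)
print(f"after {k} rounds: false-positive rate {false_pos:.2e}, retention {retention:.3f}")
```

    After k rounds the false-positive rate scales like eps**k while the retention of true positives decays like (1 - eps)**k, which is exactly the trade-off the merge steps in the paper's construction are designed to repair.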

  15. Theoretical analysis of error transfer from surface slope to refractive ray and their application to the solar concentrated collector

    CERN Document Server

    Huang, Weidong

    2011-01-01

    This paper presents a general equation for calculating the standard deviation of the reflected-ray error from the optical error through geometric optics, applies the equation to eight kinds of concentrated solar reflector, and provides typical results. The results indicate that the slope errors in the two directions are transferred to any one direction of the focused ray when the incidence angle is greater than 0 for solar trough and heliostat reflectors; for the point-focus Fresnel lens, the point-focus parabolic glass mirror and the line-focus parabolic glass mirror, the error transfer coefficient from the optical error to the focused ray increases as the rim angle increases; for the TIR-R concentrator it decreases; for the glass heliostat it depends on the incidence angle and azimuth of the reflecting point. Keywords: optic error, standard deviation, refractive ray error, concentrated solar collector
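
    The geometric core of such transfer equations is the angle-doubling law of reflection: a surface slope error $\delta$ tilts the normal by $\delta$ and the reflected ray by $2\delta$. Under the common assumption of independent Gaussian error sources (a hedged, simplified budget; the paper derives the general geometry-dependent coefficients), the reflected-ray error combines as

$$ \sigma_{\mathrm{ray}}^{2} \;\approx\; 4\,\sigma_{\mathrm{slope}}^{2} \;+\; \sigma_{\mathrm{specularity}}^{2} \;+\; \sigma_{\mathrm{tracking}}^{2}, $$

    which is why the slope-error transfer coefficient is at least 2 and then varies with rim angle and incidence geometry for the concentrators discussed above.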

  16. Algorithm-supported visual error correction (AVEC) of heart rate measurements in dogs, Canis lupus familiaris.

    Science.gov (United States)

    Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt

    2015-12-01

    Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, which makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction. AVEC is more suitable for dog HR and HR variability than the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data.

  17. Allowing for model error in strong constraint 4D-Var

    Science.gov (United States)

    Howes, Katherine; Lawless, Amos; Fowler, Alison

    2016-04-01

    Four dimensional variational data assimilation (4D-Var) can be used to obtain the best estimate of the initial conditions of an environmental forecasting model, namely the analysis. In practice, when the forecasting model contains errors, the analysis from the 4D-Var algorithm will be degraded to allow for errors later in the forecast window. This work focusses on improving the analysis at the initial time by allowing for the fact that the model contains error, within the context of strong constraint 4D-Var. The 4D-Var method developed acknowledges the presence of random error in the model at each time step by replacing the observation error covariance matrix with an error covariance matrix that includes both observation error and model error statistics. It is shown that this new matrix represents the correct error statistics of the innovations in the presence of model error. A method for estimating this matrix using innovation statistics, without requiring prior knowledge of the model error statistics, is presented. The method is demonstrated numerically using a non-linear chaotic system with erroneous parameter values. We show that the new method works to reduce the analysis error covariance when compared with a standard strong constraint 4D-Var scheme. We discuss the fact that an improved analysis will not necessarily provide a better forecast.
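
    The key substitution can be written compactly. With linearized observation and model operators $H_i$ and $M_{0\to i}$, innovation $d_i = y_i - \mathcal{H}_i(\mathcal{M}_{0\to i}(x_0))$, and accumulated model error at time $t_i$ with covariance $Q_i$ (a schematic sketch consistent with the abstract, not the paper's exact notation), the matrix replacing the observation error covariance $R_i$ in the strong-constraint cost function is

$$ \tilde{R}_i \;=\; R_i \;+\; H_i\, Q_i\, H_i^{\mathsf{T}}, $$

    so that innovations are weighted by their full (observation plus model) error statistics rather than by observation error alone.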

  18. Frequency standards

    CERN Document Server

    Riehle, Fritz

    2006-01-01

    Of all measurement units, frequency is the one that may be determined with the highest degree of accuracy. It equally allows precise measurements of other physical and technical quantities, whenever they can be measured in terms of frequency. This volume covers the central methods and techniques relevant for frequency standards developed in physics, electronics, quantum electronics, and statistics. After a review of the basic principles, the book looks at the realisation of commonly used components. It then continues with the description and characterisation of important frequency standards

  19. Fast affine projections and the regularized modified filtered-error algorithm in multichannel active noise control

    NARCIS (Netherlands)

    Wesselink, J.M.; Berkhoff, A.P.

    2008-01-01

    In this paper, real-time results are given for broadband multichannel active noise control using the regularized modified filtered-error algorithm. As compared to the standard filtered-error algorithm, the improved convergence rate and stability of the algorithm are obtained by using an inner-outer

  20. Sampling of Common Items: An Unrecognized Source of Error in Test Equating. CSE Report 636

    Science.gov (United States)

    Michaelides, Michalis P.; Haertel, Edward H.

    2004-01-01

    There is variability in the estimation of an equating transformation because common-item parameters are obtained from responses of samples of examinees. The most commonly used standard error of equating quantifies this source of sampling error, which decreases as the sample size of examinees used to derive the transformation increases. In a…

  1. Using the Sampling Margin of Error to Assess the Interpretative Validity of Student Evaluations of Teaching

    Science.gov (United States)

    James, David E.; Schraw, Gregory; Kuch, Fred

    2015-01-01

    We present an equation, derived from standard statistical theory, that can be used to estimate sampling margin of error for student evaluations of teaching (SETs). We use the equation to examine the effect of sample size, response rates and sample variability on the estimated sampling margin of error, and present results in four tables that allow…
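
    The abstract does not reproduce the equation itself; a standard finite-population form consistent with the description (an assumption, not necessarily the authors' exact expression) is

$$ \mathrm{MOE} \;=\; z_{\alpha/2}\,\sqrt{\frac{s^{2}}{n}\cdot\frac{N-n}{N-1}}, $$

    where $N$ is the class size, $n$ the number of respondents, and $s$ the sample standard deviation of the ratings; low response rates (small $n/N$) visibly inflate the margin of error.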

  2. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation.

    Science.gov (United States)

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor

    2016-07-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.

  4. Righting errors in writing errors: the Wing and Baddeley (1980) spelling error corpus revisited.

    Science.gov (United States)

    Wing, Alan M; Baddeley, Alan D

    2009-03-01

    We present a new analysis of our previously published corpus of handwriting errors (slips) using the proportional allocation algorithm of Machtynger and Shallice (2009). As before, the proportion of slips is greater in the middle of the word than at the ends; however, in contrast to the earlier analysis, the proportion is greater at the end than at the beginning of the word. The findings are consistent with the hypothesis of memory effects in a graphemic output buffer.

  5. Effects of Listening Conditions, Error Types, and Ensemble Textures on Error Detection Skills

    Science.gov (United States)

    Waggoner, Dori T.

    2011-01-01

    This study was designed with three main purposes: (a) to investigate the effects of two listening conditions on error detection accuracy, (b) to compare error detection responses for rhythm errors and pitch errors, and (c) to examine the influences of texture on error detection accuracy. Undergraduate music education students (N = 18) listened to…

  6. Stereochemical errors and their implications for molecular dynamics simulations

    Directory of Open Access Journals (Sweden)

    Freddolino Peter L

    2011-05-01

    Full Text Available Abstract. Background: Biological molecules are often asymmetric with respect to stereochemistry, and correct stereochemistry is essential to their function. Molecular dynamics simulations of biomolecules have increasingly become an integral part of biophysical research. However, stereochemical errors in biomolecular structures can have a dramatic impact on the results of simulations. Results: Here we illustrate the effects that chirality and peptide bond configuration flips may have on the secondary structure of proteins throughout a simulation. We also analyze the most common sources of stereochemical errors in biomolecular structures and present software tools to identify, correct, and prevent stereochemical errors in molecular dynamics simulations of biomolecules. Conclusions: Use of the tools presented here should become a standard step in the preparation of biomolecular simulations and in the generation of predicted structural models for proteins and nucleic acids.

  7. Error Transmission in Video Coding with Gaussian Noise

    Directory of Open Access Journals (Sweden)

    A Purwadi

    2015-06-01

    Full Text Available In video transmission, there is a possibility of packet loss and a large load variation in the bandwidth. These are sources of network congestion, which can interfere with the communication data rate. The coding system used is a video coding standard, either MPEG-2 or H.263 with SNR scalability. The algorithms used for motion compensation, temporal redundancy and spatial redundancy are the Discrete Cosine Transform (DCT) and quantization. The transmission error is simulated by adding Gaussian noise (error) to the motion vectors. From the simulation results, the SNR and Peak Signal-to-Noise Ratio (PSNR) of the noisy video frames decline by an average of 3 dB, and the Mean Square Error (MSE) of the received video frames increases.
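
    The two fidelity measures quoted above are straightforward to compute; a minimal sketch with a hypothetical 8-bit frame (not the paper's codec or test sequences):

```python
import numpy as np

def mse(ref: np.ndarray, test: np.ndarray) -> float:
    """Mean squared error between two frames."""
    return float(np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2))

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit frames."""
    m = mse(ref, test)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 640)).astype(np.uint8)      # hypothetical frame
noisy = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
print(f"MSE = {mse(frame, noisy):.2f}, PSNR = {psnr(frame, noisy):.2f} dB")
```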

  8. Installation errors in calculating large-panel buildings (rus

    Directory of Open Access Journals (Sweden)

    Nedviga E.S.

    2011-10-01

    Full Text Available Every year the problem of the quality of civil and erection work becomes more acute in Russia. The article addresses this problem not from the organizational and technological aspects of building but from the point of view of design and calculation. The paper considers the influence of offsets and axis fractures of wall panels during their installation in a large-panel building. A comparative analysis of design schemes that take different types of installation errors into account is performed. A structural calculation taking the errors of panel installation into account was made. The resulting forces in the structural elements exceeded the allowable values prescribed in the standard documentation. It is concluded that installation errors (caused by deviations of vertical structures from the design) need to be considered in the design model, including calculations in CAD software.

  9. Formal Analysis of Soft Errors using Theorem Proving

    Directory of Open Access Journals (Sweden)

    Sofiène Tahar

    2013-07-01

    Full Text Available Modeling and analysis of soft errors in electronic circuits has traditionally been done using computer simulations. Computer simulations cannot guarantee correctness of analysis because they utilize approximate real number representations and pseudo random numbers in the analysis and thus are not well suited for analyzing safety-critical applications. In this paper, we present a higher-order logic theorem proving based method for modeling and analysis of soft errors in electronic circuits. Our developed infrastructure includes formalized continuous random variable pairs, their Cumulative Distribution Function (CDF) properties and independent standard uniform and Gaussian random variables. We illustrate the usefulness of our approach by modeling and analyzing soft errors in commonly used dynamic random access memory sense amplifier circuits.

  10. The 95% confidence intervals of error rates and discriminant coefficients

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2015-02-01

    Full Text Available Fisher proposed a linear discriminant function (Fisher's LDF). From 1971, we analysed electrocardiogram (ECG) data in order to develop the diagnostic logic between normal and abnormal symptoms using Fisher's LDF and a quadratic discriminant function (QDF). Our four years of research were inferior to the decision tree logic developed by the medical doctor. After this experience, we discriminated many data sets and found four problems of discriminant analysis. A revised Optimal LDF by Integer Programming (Revised IP-OLDF) based on the minimum number of misclassifications (minimum NM criterion) resolves three problems entirely [13, 18]. In this research, we discuss the fourth problem of discriminant analysis: there are no standard errors (SEs) of the error rate and discriminant coefficient. We propose a k-fold cross-validation method. This method offers a model selection technique and 95% confidence intervals (C.I.) of error rates and discriminant coefficients.
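
    A hedged sketch of the proposed procedure, attaching a standard error and a normal-approximation 95% C.I. to a discriminant error rate via k-fold cross-validation (synthetic data and scikit-learn's LDA stand in for the ECG data and Revised IP-OLDF):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic two-class data (NOT the ECG data of the abstract).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=10)
err = 1.0 - scores                         # per-fold error rates
se = err.std(ddof=1) / np.sqrt(len(err))   # standard error across folds
lo, hi = err.mean() - 1.96 * se, err.mean() + 1.96 * se
print(f"error rate = {err.mean():.3f}, 95% C.I. = [{lo:.3f}, {hi:.3f}]")
```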

  11. SENSITIVE ERROR ANALYSIS OF CHAOS SYNCHRONIZATION

    Institute of Scientific and Technical Information of China (English)

    HUANG XIAN-GAO; XU JIAN-XUE; HUANG WEI; L(U) ZE-JUN

    2001-01-01

    We study the synchronizing sensitive errors of chaotic systems when other signals are added to the synchronizing signal. Based on the model of Henon map masking, we examine the cause of the sensitive errors of chaos synchronization. The modulation ratio and the mean square error are defined to quantify the synchronizing sensitive errors. Numerical simulation results of the synchronizing sensitive errors are given for masking direct current, sinusoidal and speech signals, separately. Finally, we give the mean square error curves of chaos synchronizing sensitivity and three-dimensional phase plots of the drive system and the response system for masking the three kinds of signals.

  12. High-speed parallel forward error correction for optical transport networks

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert;

    2010-01-01

    This paper presents a highly parallelized hardware implementation of the standard OTN Reed-Solomon Forward Error Correction algorithm. The proposed circuit is designed to meet the immense throughput required by OTN4, using commercially available FPGA technology.

  13. Error signals driving locomotor adaptation

    DEFF Research Database (Denmark)

    Choi, Julia T; Jensen, Peter; Nielsen, Jens Bo

    2016-01-01

    perturbations. Forces were applied to the ankle joint during the early swing phase using an electrohydraulic ankle-foot orthosis. Repetitive 80 Hz electrical stimulation was applied to disrupt cutaneous feedback from the superficial peroneal nerve (foot dorsum) and medial plantar nerve (foot sole) during...... anaesthesia (n = 5) instead of repetitive nerve stimulation. Foot anaesthesia reduced ankle adaptation to external force perturbations during walking. Our results suggest that cutaneous input plays a role in force perception, and may contribute to the 'error' signal involved in driving walking adaptation when...

  14. (Errors in statistical tests)³

    Directory of Open Access Journals (Sweden)

    Kaufman Jay S

    2008-07-01

    Full Text Available Abstract In 2004, Garcia-Berthou and Alcaraz published "Incongruence between test statistics and P values in medical papers," a critique of statistical errors that received a tremendous amount of attention. One of their observations was that the final reported digit of p-values in articles published in the journal Nature departed substantially from the uniform distribution that they suggested should be expected. In 2006, Jeng critiqued that critique, observing that the statistical analysis of those terminal digits had been based on comparing the actual distribution to a uniform continuous distribution, when digits obviously are discretely distributed. Jeng corrected the calculation and reported statistics that did not so clearly support the claim of a digit preference. However delightful it may be to read a critique of statistical errors in a critique of statistical errors, we nevertheless found several aspects of the whole exchange to be quite troubling, prompting our own meta-critique of the analysis. The previous discussion emphasized statistical significance testing. But there are various reasons to expect departure from the uniform distribution in terminal digits of p-values, so that simply rejecting the null hypothesis is not terribly informative. Much more importantly, Jeng found that the original p-value of 0.043 should have been 0.086, and suggested this represented an important difference because it was on the other side of 0.05. Among the most widely reiterated (though often ignored) tenets of modern quantitative research methods is that we should not treat statistical significance as a bright-line test of whether we have observed a phenomenon. Moreover, it sends the wrong message about the role of statistics to suggest that a result should be dismissed because of limited statistical precision when it is so easy to gather more data. In response to these limitations, we gathered more data to improve the statistical precision, and

  15. Errors associated with outpatient computerized prescribing systems

    Science.gov (United States)

    Rothschild, Jeffrey M; Salzberg, Claudia; Keohane, Carol A; Zigmont, Katherine; Devita, Jim; Gandhi, Tejal K; Dalal, Anuj K; Bates, David W; Poon, Eric G

    2011-01-01

    Objective To report the frequency, types, and causes of errors associated with outpatient computer-generated prescriptions, and to develop a framework to classify these errors to determine which strategies have greatest potential for preventing them. Materials and methods This is a retrospective cohort study of 3850 computer-generated prescriptions received by a commercial outpatient pharmacy chain across three states over 4 weeks in 2008. A clinician panel reviewed the prescriptions using a previously described method to identify and classify medication errors. Primary outcomes were the incidence of medication errors; potential adverse drug events, defined as errors with potential for harm; and rate of prescribing errors by error type and by prescribing system. Results Of 3850 prescriptions, 452 (11.7%) contained 466 total errors, of which 163 (35.0%) were considered potential adverse drug events. Error rates varied by computerized prescribing system, from 5.1% to 37.5%. The most common error was omitted information (60.7% of all errors). Discussion About one in 10 computer-generated prescriptions included at least one error, of which a third had potential for harm. This is consistent with the literature on manual handwritten prescription error rates. The number, type, and severity of errors varied by computerized prescribing system, suggesting that some systems may be better at preventing errors than others. Conclusions Implementing a computerized prescribing system without comprehensive functionality and processes in place to ensure meaningful system use does not decrease medication errors. The authors offer targeted recommendations on improving computerized prescribing systems to prevent errors. PMID:21715428

  16. Error detection and reduction in blood banking.

    Science.gov (United States)

    Motschman, T L; Moore, S B

    1996-12-01

    Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as active quality monitoring. To ensure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keeps employees practiced and confident, and diminishes fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition for reportable errors. Reportable errors should include those errors with potentially harmful outcomes as well as those errors that are "upstream," and thus further away from the outcome. A well-written error report consists of who, what, when, where, why/how, and follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility a functional error classification method and a quality system-based classification have been useful. An active method to search for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest for improving processes, an error management program is itself a process that needs improvement, and we must strive to always close the circle

  17. Antenna motion errors in bistatic SAR imagery

    Science.gov (United States)

    Wang, Ling; Yazıcı, Birsen; Cagri Yanik, H.

    2015-06-01

    Antenna trajectory or motion errors are pervasive in synthetic aperture radar (SAR) imaging. Motion errors typically result in smearing and positioning errors in SAR images. Understanding the relationship between the trajectory errors and position errors in reconstructed images is essential in forming focused SAR images. Existing studies on the effect of antenna motion errors are limited to certain geometries, trajectory error models or monostatic SAR configuration. In this paper, we present an analysis of position errors in bistatic SAR imagery due to antenna motion errors. Bistatic SAR imagery is becoming increasingly important in the context of passive imaging and multi-sensor imaging. Our analysis provides an explicit quantitative relationship between the trajectory errors and the positioning errors in bistatic SAR images. The analysis is applicable to arbitrary trajectory errors and arbitrary imaging geometries including wide apertures and large scenes. We present extensive numerical simulations to validate the analysis and to illustrate the results in commonly used bistatic configurations and certain trajectory error models.

  18. Shape Error Analysis of Functional Surface Based on Isogeometrical Approach

    Science.gov (United States)

    YUAN, Pei; LIU, Zhenyu; TAN, Jianrong

    2017-05-01

    The construction of traditional finite element geometry (i.e., the meshing procedure) is time consuming and creates geometric errors. These drawbacks can be overcome by Isogeometric Analysis (IGA), which integrates computer-aided design and structural analysis in a unified way. A new IGA beam element is developed by integrating the displacement field of the element, which is approximated by the NURBS basis, with the internal work formula of Euler-Bernoulli beam theory under the small-deformation and elastic assumptions. Two cases of the strong coupling of IGA elements, "beam to beam" and "beam to shell", are also discussed. The maximum relative errors of the deformation in the three directions of the cantilever beam benchmark problem between analytical solutions and IGA solutions are less than 0.1%, which illustrates the good performance of the developed IGA beam element. In addition, the application of the developed IGA beam element to the Root Mean Square (RMS) error analysis of a reflector antenna surface, which is a typical functional surface whose precision is closely related to the product's performance, indicates that no matter how coarse the discretization is, the IGA method is able to achieve an accurate solution with fewer degrees of freedom than standard Finite Element Analysis (FEA). The proposed research provides an effective alternative to standard FEA for shape error analysis of functional surfaces.
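
    At the heart of an IGA element is the spline basis that replaces the usual finite element shape functions; a self-contained Cox-de Boor evaluation of B-spline basis functions (a generic sketch, not the authors' element code; with unit weights, NURBS reduce to B-splines):

```python
def bspline_basis(i: int, p: int, t: float, knots: list[float]) -> float:
    """Cox-de Boor recursion: value of the i-th degree-p B-spline at t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, t, knots)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

# Quadratic basis on an open knot vector; the four values sum to 1 (partition of unity).
knots = [0, 0, 0, 0.5, 1, 1, 1]
print([round(bspline_basis(i, 2, 0.25, knots), 4) for i in range(4)])
```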

  19. Repeated measurement sampling in genetic association analysis with genotyping errors.

    Science.gov (United States)

    Lai, Renzhen; Zhang, Hong; Yang, Yaning

    2007-02-01

    Genotype misclassification occurs frequently in human genetic association studies. When cases and controls are subject to the same misclassification model, Pearson's chi-square test has the correct type I error but may lose power. Most current methods adjusting for genotyping errors assume that the misclassification model is known a priori or can be assessed by a gold standard instrument. But in practical applications, the misclassification probabilities may not be completely known or the gold standard method can be too costly to be available. The repeated measurement design provides an alternative approach for identifying misclassification probabilities. With this design, a proportion of the subjects are measured repeatedly (five or more repeats) for the genotypes when the error model is completely unknown. We investigate the applications of the repeated measurement method in genetic association analysis. Cost-effectiveness study shows that if the phenotyping-to-genotyping cost ratio or the misclassification rates are relatively large, the repeat sampling can gain power over the regular case-control design. We also show that the power gain is not sensitive to the genetic model, genetic relative risk and the population high-risk allele frequency, all of which are typically important ingredients in association studies. An important implication of this result is that whatever the genetic factors are, the repeated measurement method can be applied if the genotyping errors must be accounted for or the phenotyping cost is high.

  20. Medication errors: hospital pharmacist perspective.

    Science.gov (United States)

    Guchelaar, Henk-Jan; Colen, Hadewig B B; Kalmeijer, Mathijs D; Hudson, Patrick T W; Teepe-Twiss, Irene M

    2005-01-01

    In recent years medication error has justly received considerable attention, as it causes substantial mortality, morbidity and additional healthcare costs. Risk assessment models, adapted from commercial aviation and the oil and gas industries, are currently being developed for use in clinical pharmacy. The hospital pharmacist is best placed to oversee the quality of the entire drug distribution chain, from prescribing, drug choice, dispensing and preparation to the administration of drugs, and can fulfil a vital role in improving medication safety. Most elements of the drug distribution chain can be optimised; however, because comparative intervention studies are scarce, there is little scientific evidence available demonstrating improvements in medication safety through such interventions. Possible interventions aimed at reducing medication errors, such as developing methods for detection of patients with increased risk of adverse drug events, performing risk assessment in clinical pharmacy and optimising the drug distribution chain are discussed. Moreover, the specific role of the clinical pharmacist in improving medication safety is highlighted, both at an organisational level and in individual patient care.

  1. Search, Memory, and Choice Error: An Experiment.

    Directory of Open Access Journals (Sweden)

    Adam Sanjurjo

    Full Text Available Multiple attribute search is a central feature of economic life: we consider much more than price when purchasing a home, and more than wage when choosing a job. An experiment is conducted in order to explore the effects of cognitive limitations on choice in these rich settings, in accordance with the predictions of a new model of search memory load. In each task, subjects are made to search the same information in one of two orders, which differ in predicted memory load. Despite standard models of choice treating such variations in order of acquisition as irrelevant, lower predicted memory load search orders are found to lead to substantially fewer choice errors. An implication of the result for search behavior, more generally, is that in order to reduce memory load (and thus choice error) a limited-memory searcher ought to deviate from the search path of an unlimited-memory searcher in predictable ways, a mechanism that can explain the systematic deviations from optimal sequential search that have recently been discovered in people's behavior. Further, as cognitive load is induced endogenously (within the task) and found to affect choice behavior, this result contributes to the cognitive load literature (in which load is induced exogenously), as well as the cognitive ability literature (in which cognitive ability is measured in a separate task). In addition, while the information overload literature has focused on the detrimental effects of the quantity of information on choice, this result suggests that, holding quantity constant, the order in which information is observed is an essential determinant of choice failure.

  2. Analysis and Compensation for Gear Accuracy with Setting Error in Form Grinding

    Directory of Open Access Journals (Sweden)

    Chenggang Fang

    2015-01-01

    Full Text Available In the process of form grinding, gear setting error is the main factor influencing form grinding accuracy; we propose an effective method to improve form grinding accuracy by correcting this error through control of the machine operations. Based on establishing the geometric model of form grinding and representing the gear setting errors in homogeneous coordinates, the tooth mathematical model was obtained and simplified under the gear setting error. Then, according to the gear standards ISO 1328-1:1997 and ANSI/AGMA 2015-1-A01:2002, the relationship was investigated by changing the gear setting errors with respect to tooth profile deviation, helix deviation, and cumulative pitch deviation, respectively, under the conditions of gear eccentricity error, gear inclination error, and gear resultant error. An error compensation method was proposed based on solving the sensitivity coefficient matrix of the setting error in a five-axis CNC form grinding machine; simulation and experimental results demonstrated that the method can effectively correct the gear setting error and further improve form grinding accuracy.

  3. Analysis of offset error for segmented micro-structure optical element based on optical diffraction theory

    Science.gov (United States)

    Su, Jinyan; Wu, Shibin; Yang, Wei; Wang, Lihua

    2016-10-01

    Micro-structure optical elements are gradually being applied in modern optical systems due to characteristics such as light weight, easy replication, high diffraction efficiency and many design variables. The Fresnel lens is a typical micro-structure optical element, so in this paper we take the Fresnel lens as the basis of our research. An analytic solution to the Point Spread Function (PSF) of a segmented Fresnel lens is derived based on the theory of optical diffraction, and a mathematical simulation model is established. We then take a segmented Fresnel lens with 5 sub-mirrors as an example. In order to analyze the influence of different offset errors on the system's far-field image quality, we obtain the analytic solution to the PSF of the system under different offset errors by using the Fourier transform. The results show that translation errors along the XYZ axes and tilt errors around the XY axes introduce phase errors which affect the imaging quality of the system. The translation errors along the XYZ axes have a linear relationship with the corresponding phase errors, and the tilt errors around the XY axes have a trigonometric-function relationship with the corresponding phase errors. In addition, the standard deviations of the translation errors along the XY axes have a quadratic nonlinear relationship with the system's Strehl ratio. Finally, the tolerances of the different offset errors are obtained according to the Strehl criterion.

  4. Panel positioning error and support mechanism for a 30-m THz radio telescope

    Institute of Scientific and Technical Information of China (English)

    De-Hua Yang; Daniel Okoh; Guo-Hua Zhou; Ai-Hua Li; Guo-Ping Li; Jing-Quan Cheng

    2011-01-01

    A 30-m TeraHertz (THz) radio telescope is proposed to operate at 200 μm with an active primary surface. This paper presents a sensitivity analysis of active surface panel positioning errors with respect to optical performance in terms of the Strehl ratio. Based on Ruze's surface error theory and using a Monte Carlo simulation, the effects of six rigid panel positioning errors, namely piston, tip, tilt, radial, azimuthal and twist displacements, were directly derived. The optical performance of the telescope was then evaluated using the standard Strehl ratio. We graphically illustrate the various panel error effects by presenting simulations of complete ensembles of full reflector surface errors for the six different rigid panel positioning errors. The panel error sensitivity analysis revealed that the piston and tilt/tip errors are dominant, while the other rigid errors are much less important. Furthermore, guided by these results, we conceived an alternative Master-Slave Concept-based (MSC-based) active surface by implementing a special Series-Parallel Concept-based (SPC-based) hexapod as the active panel support mechanism. A new 30-m active reflector based on the two concepts was demonstrated to achieve correction of all six rigid panel positioning errors in an economically feasible way.
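
    For reference, the Ruze relation underlying the Strehl-ratio evaluation: a random surface error of rms $\delta$ on a reflector operating at wavelength $\lambda$ reduces the on-axis performance by approximately

$$ S \;\approx\; \exp\!\left[-\left(\frac{4\pi\delta}{\lambda}\right)^{2}\right], $$

    so at $\lambda = 200\,\mu\mathrm{m}$ a Strehl ratio of 0.8 (the Maréchal criterion) already requires $\delta \lesssim 7.5\,\mu\mathrm{m}$ rms over the whole 30-m aperture, which is why all six rigid panel positioning errors must be actively controlled.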

  5. Impact of error management culture on knowledge performance in professional service firms

    Directory of Open Access Journals (Sweden)

    Tabea Scheel

    2014-01-01

    Full Text Available Knowledge is the most crucial resource of the 21st century. For professional service firms (PSFs), knowledge represents the input as well as the output, and thus the fundamental basis for performance. Like every organization, PSFs have to deal with errors, and how they do so indicates their error culture. Considering the positive potential of errors (e.g., innovation), error management culture is positively related to organizational performance. This longitudinal quantitative study investigates the impact of error management culture on knowledge performance in four waves. The study was conducted in 131 PSFs, i.e. tax accounting offices. As a standard quality management system (QMS) was assumed to moderate the relationship between error management culture and knowledge performance, the offices' ISO 9000 certification was assessed. Error management culture correlated positively and significantly with knowledge performance and predicted knowledge performance one year later. While ISO 9000 certification correlated positively with knowledge performance, its assumed moderation of the relationship between error management culture and knowledge performance was not consistent. The process-oriented QMS seems to function as a facilitator for the more behavior-oriented error management culture. However, the benefit of ISO 9000 certification for tax accounting remains to be proven. Given the impact of error management culture on knowledge performance, PSFs should focus on actively promoting positive attitudes towards errors.

  6. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing for linearity is of particular interest as parameters of non-linear components vanish under the null. To solve the latter type of testing, we use the so-called sup tests, which here require development of new (uniform) weak convergence results. These results are potentially useful in general for analysis of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated

  7. INFLUENCE OF MECHANICAL ERRORS IN A ZOOM CAMERA

    Directory of Open Access Journals (Sweden)

    Alfredo Gardel

    2011-05-01

    Full Text Available As is well known, varying the focus and zoom of a camera lens system changes the alignment of the lens components, resulting in a displacement of the image centre and field of view. Thus, knowledge of how the image centre shifts may be important for some aspects of camera calibration. As shown in other papers, the pinhole model is not adequate for zoom lenses. To ensure a valid calibration model for these lenses, the calibration parameters must be adjusted. The geometrical modelling of a zoom lens is realized from its lens specifications. The influence on the calibration parameters is calculated by introducing mechanical errors in the mobile lenses. Figures are given describing the errors obtained in the principal point coordinates and also in their standard deviations. A comparison is then made with the errors that come from the incorrect detection of the calibration points. It is concluded that the mechanical errors of actual zoom lenses can be neglected in the calibration process because detection errors have more influence on the camera parameters.

  8. Field errors in hybrid insertion devices

    Energy Technology Data Exchange (ETDEWEB)

    Schlueter, R.D. [Lawrence Berkeley Lab., CA (United States)

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.

  9. Medical errors: legal and ethical responses.

    Science.gov (United States)

    Dickens, B M

    2003-04-01

    Liability to err is a human, often unavoidable, characteristic. Errors can be classified as skill-based, rule-based, knowledge-based and other errors, such as of judgment. In law, a key distinction is between negligent and non-negligent errors. To describe a mistake as an error of clinical judgment is legally ambiguous, since an error that a physician might have made when acting with ordinary care and the professional skill the physician claims, is not deemed negligent in law. If errors prejudice patients' recovery from treatment and/or future care, in physical or psychological ways, it is legally and ethically required that they be informed of them in appropriate time. Senior colleagues, facility administrators and others such as medical licensing authorities should be informed of serious forms of error, so that preventive education and strategies can be designed. Errors for which clinicians may be legally liable may originate in systemically defective institutional administration.

  10. Experimental demonstration of topological error correction.

    Science.gov (United States)

    Yao, Xing-Can; Wang, Tian-Xiong; Chen, Hao-Ze; Gao, Wei-Bo; Fowler, Austin G; Raussendorf, Robert; Chen, Zeng-Bing; Liu, Nai-Le; Lu, Chao-Yang; Deng, You-Jin; Chen, Yu-Ao; Pan, Jian-Wei

    2012-02-22

    Scalable quantum computing can be achieved only if quantum bits are manipulated in a fault-tolerant fashion. Topological error correction--a method that combines topological quantum computation with quantum error correction--has the highest known tolerable error rate for a local architecture. The technique makes use of cluster states with topological properties and requires only nearest-neighbour interactions. Here we report the experimental demonstration of topological error correction with an eight-photon cluster state. We show that a correlation can be protected against a single error on any quantum bit. Also, when all quantum bits are simultaneously subjected to errors with equal probability, the effective error rate can be significantly reduced. Our work demonstrates the viability of topological error correction for fault-tolerant quantum information processing.

  11. Game Design Principles based on Human Error

    Directory of Open Access Journals (Sweden)

    Guilherme Zaffari

    2016-03-01

    Full Text Available This paper presents the results of the authors’ research on incorporating Human Error, through design principles, into video game design. In general, designers must consider Human Error factors throughout video game interface development; however, when the core design is concerned, adaptations are needed, since challenge is an important factor for fun, and under the perspective of Human Error, challenge can be considered a flaw in the system. The research used Human Error classifications, data triangulation via predictive human error analysis, and the expanded flow theory to design a set of principles that match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that the application of Human Error in game design has a positive effect on player experience, allowing the player to interact only with errors associated with the intended aesthetics of the game.

  12. L’errore nel laboratorio di Microbiologia

    Directory of Open Access Journals (Sweden)

    Paolo Lanzafame

    2006-03-01

    Full Text Available Error management plays one of the most important roles in facility process improvement efforts. By detecting and reducing errors, quality and patient care improve. The records of errors were analysed over a period of 6 months, and a second register was used to study potential bias in the registrations. The percentage of errors detected was 0.17% (normalised, 1720 ppm), and errors in the pre-analytical phase made up the largest part. The highest rate of errors was generated by the peripheral centres, which send microbiology tests only occasionally and are not familiar with the specific procedures for collecting and storing biological samples. Errors in the management of laboratory supplies were reported too. The conclusion is that improving operator training, in particular concerning sample collection and storage, is very important, and that an effective system of error detection should be employed to determine the causes so that the best corrective action can be applied.

  13. An Error Analysis on TFL Learners’ Writings

    Directory of Open Access Journals (Sweden)

    Arif ÇERÇİ

    2016-12-01

    Full Text Available The main purpose of the present study is to identify and represent TFL learners’ writing errors through error analysis. All the learners started learning Turkish as a foreign language at the A1 (beginner) level and completed the process by obtaining the C1 (advanced) certificate at TÖMER at Gaziantep University. The data of the present study were collected from 14 students’ writings in proficiency exams for each level. The data were grouped as grammatical, syntactic, spelling, punctuation, and word choice errors. The ratio and categorical distributions of the identified errors were analyzed through error analysis. The data were analyzed through statistical procedures in an effort to determine whether error types differ according to the levels of the students. The errors in this study are limited to linguistic and intralingual developmental errors.

  14. Error Propagation in a System Model

    Science.gov (United States)

    Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)

    2015-01-01

    Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.

  15. Experimental demonstration of topological error correction

    OpenAIRE

    2012-01-01

    Scalable quantum computing can only be achieved if qubits are manipulated fault-tolerantly. Topological error correction - a novel method which combines topological quantum computing and quantum error correction - possesses the highest known tolerable error rate for a local architecture. This scheme makes use of cluster states with topological properties and requires only nearest-neighbour interactions. Here we report the first experimental demonstration of topological error correction with a...

  16. Sampling error of observation impact statistics

    OpenAIRE

    Kim, Sung-Min; Kim, Hyun Mee

    2014-01-01

    An observation impact is an estimate of the forecast error reduction by assimilating observations with numerical model forecasts. This study compares the sampling errors of the observation impact statistics (OBIS) of July 2011 and January 2012 using two methods. One method uses the random error under the assumption that the samples are independent, and the other method uses the error with lag correlation under the assumption that the samples are correlated with each other. The OBIS are obtain...

  17. Acoustic Evidence for Phonologically Mismatched Speech Errors

    Science.gov (United States)

    Gormley, Andrea

    2015-01-01

    Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of…

  18. Medication errors: the importance of safe dispensing.

    NARCIS (Netherlands)

    Cheung, K.C.; Bouvy, M.L.; Smet, P.A.G.M. de

    2009-01-01

    1. Although rates of dispensing errors are generally low, further improvements in pharmacy distribution systems are still important because pharmacies dispense such high volumes of medications that even a low error rate can translate into a large number of errors. 2. From the perspective of pharmacy

  19. Understanding EFL Students' Errors in Writing

    Science.gov (United States)

    Phuket, Pimpisa Rattanadilok Na; Othman, Normah Binti

    2015-01-01

    Writing is the most difficult skill in English, so most EFL students tend to make errors in writing. In assisting the learners to successfully acquire writing skill, the analysis of errors and the understanding of their sources are necessary. This study attempts to explore the major sources of errors occurred in the writing of EFL students. It…

  20. Error Analysis of Quadrature Rules. Classroom Notes

    Science.gov (United States)

    Glaister, P.

    2004-01-01

    Approaches to the determination of the error in numerical quadrature rules are discussed and compared. This article considers the problem of the determination of errors in numerical quadrature rules, taking Simpson's rule as the principal example. It suggests an approach based on truncation error analysis of numerical schemes for differential…
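
    The truncation-error result this note builds on can be stated concretely. As a reminder of the standard form (a textbook statement, not a quotation from the article), the simple Simpson's rule on [a, b] with half-width h = (b - a)/2 carries a fourth-derivative error term:

```latex
% Truncation error of simple Simpson's rule on [a,b], with h = (b-a)/2:
\int_a^b f(x)\,dx
  = \frac{h}{3}\Bigl[f(a) + 4f(a+h) + f(b)\Bigr]
    - \frac{h^5}{90}\,f^{(4)}(\xi), \qquad \xi \in (a,b).
```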

  1. Error Analysis in Mathematics. Technical Report #1012

    Science.gov (United States)

    Lai, Cheng-Fei

    2012-01-01

    Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…

  2. Error Analysis and the EFL Classroom Teaching

    Science.gov (United States)

    Xie, Fang; Jiang, Xue-mei

    2007-01-01

    This paper makes a study of error analysis and its implementation in the EFL (English as Foreign Language) classroom teaching. It starts by giving a systematic review of the concepts and theories concerning EA (Error Analysis), the various reasons causing errors are comprehensively explored. The author proposes that teachers should employ…

  3. Human Error Mechanisms in Complex Work Environments

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1988-01-01

    will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations...

  4. Errors and Uncertainty in Physics Measurement.

    Science.gov (United States)

    Blasiak, Wladyslaw

    1983-01-01

    Classifies errors as either systematic or blunder and uncertainties as either systematic or random. Discusses use of error/uncertainty analysis in direct/indirect measurement, describing the process of planning experiments to ensure lowest possible uncertainty. Also considers appropriate level of error analysis for high school physics students'…

  5. Measurement error in a single regressor

    NARCIS (Netherlands)

    Meijer, H.J.; Wansbeek, T.J.

    2000-01-01

    For the setting of multiple regression with measurement error in a single regressor, we present some very simple formulas to assess the result that one may expect when correcting for measurement error. It is shown where the corrected estimated regression coefficients and the error variance may lie,

  6. Jonas Olson's Evidence for Moral Error Theory

    NARCIS (Netherlands)

    Evers, Daan

    2016-01-01

    Jonas Olson defends a moral error theory in (2014). I first argue that Olson is not justified in believing the error theory as opposed to moral nonnaturalism in his own opinion. I then argue that Olson is not justified in believing the error theory as opposed to moral contextualism either (although

  7. AWARENESS OF DENTISTS ABOUT MEDICATION ERRORS

    Directory of Open Access Journals (Sweden)

    Sangeetha

    2014-01-01

    Full Text Available OBJECTIVE: To assess the awareness of medication errors among dentists. METHODS: Medication errors are the most common single preventable cause of adverse events in medication practice. We conducted a survey with a sample of sixty dentists. Among them, 30 were general dentists (BDS) and 30 were dental specialists (MDS). Questionnaires with questions regarding medication errors were distributed to them, and they were asked to fill in the questionnaire. Data were collected and subjected to statistical analysis using Fisher exact and Chi square tests. RESULTS: In our study, sixty percent of general dentists and 76.7% of dental specialists were aware of the components of medication error. Overall, 66.7% of the respondents in each group marked wrong duration as the dispensing error. Almost thirty percent of the general dentists and 56.7% of the dental specialists felt that technologic advances could accomplish diverse tasks in reducing medication errors. This was of suggestive statistical significance, with a P value of 0.069. CONCLUSION: Medication errors compromise patient confidence in the health-care system and increase health-care costs. Overall, the dental specialists were more knowledgeable than the general dentists about medication errors. KEY WORDS: Medication errors; Dosing error; Prevention of errors; Adverse drug events; Prescribing errors; Medical errors.

  8. Error-Compensated Integrate and Hold

    Science.gov (United States)

    Matlin, M.

    1984-01-01

    Differencing circuit cancels error caused by switching-transistor capacitance. In an integrate-and-hold circuit using a JFET switch, gate-to-source capacitance causes an error in the output voltage. A differential connection cancels out the error. Applications include systems where very low voltages are sampled or where many integrate-and-hold cycles occur before the circuit is reset.

  9. Jonas Olson's Evidence for Moral Error Theory

    NARCIS (Netherlands)

    Evers, Daan

    2016-01-01

    Jonas Olson defends a moral error theory in (2014). I first argue that Olson is not justified in believing the error theory as opposed to moral nonnaturalism in his own opinion. I then argue that Olson is not justified in believing the error theory as opposed to moral contextualism either (although

  10. Human Errors and Bridge Management Systems

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, A. S.

    Human errors are divided into two groups. The first group contains human errors which affect the reliability directly. The second group contains human errors which do not directly affect the reliability of the structure. The methodology used to estimate so-called reliability distributions on ba...

  11. The Problematic of Second Language Errors

    Science.gov (United States)

    Hamid, M. Obaidul; Doan, Linh Dieu

    2014-01-01

    The significance of errors in explicating Second Language Acquisition (SLA) processes led to the growth of error analysis in the 1970s which has since maintained its prominence in English as a second/foreign language (L2) research. However, one problem with this research is errors are often taken for granted, without problematising them and their…

  12. Error estimate for Doo-Sabin surfaces

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Based on a general bound on the distance error between a uniform Doo-Sabin surface and its control polyhedron, an exponential error bound independent of the subdivision process is presented in this paper. Using the exponential bound, one can predict the depth of recursive subdivision of the Doo-Sabin surface within any user-specified error tolerance.

  13. Medication errors: the importance of safe dispensing.

    NARCIS (Netherlands)

    Cheung, K.C.; Bouvy, M.L.; Smet, P.A.G.M. de

    2009-01-01

    1. Although rates of dispensing errors are generally low, further improvements in pharmacy distribution systems are still important because pharmacies dispense such high volumes of medications that even a low error rate can translate into a large number of errors. 2. From the perspective of pharmacy

  14. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error pre

  15. Fast motion-including dose error reconstruction for VMAT with and without MLC tracking

    DEFF Research Database (Denmark)

    Ravkilde, Thomas; Keall, Paul J.; Grau, Cai

    2014-01-01

    ... validate a simple model for fast motion-including dose error reconstruction applicable to intrafractional QA of MLC tracking treatments of moving targets. MLC tracking experiments were performed on a standard linear accelerator with prototype MLC tracking software guided by an electromagnetic transponder... The accuracy of the algorithm for reconstruction of dose and motion-induced dose errors throughout the tracking and non-tracking beam deliveries was quantified. Doses were reconstructed with a mean dose difference relative to the measurements of -0.5% (5.5% standard deviation) for cumulative dose. More importantly, the root-mean-square deviation between reconstructed and measured motion-induced 3%/3 mm γ failure rates (dose error) was 2.6%. The mean computation time for each calculation of dose and dose error was 295 ms. The motion-including dose reconstruction allows accurate temporal and spatial pinpointing of errors in absorbed dose...

  16. Standard deviations

    CERN Document Server

    Smith, Gary

    2015-01-01

    Did you know that having a messy room will make you racist? Or that human beings possess the ability to postpone death until after important ceremonial occasions? Or that people live three to five years longer if they have positive initials, like ACE? All of these ‘facts' have been argued with a straight face by researchers and backed up with reams of data and convincing statistics.As Nobel Prize-winning economist Ronald Coase once cynically observed, ‘If you torture data long enough, it will confess.' Lying with statistics is a time-honoured con. In Standard Deviations, ec

  17. Quantum error-correction failure distributions: Comparison of coherent and stochastic error models

    Science.gov (United States)

    Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.

    2017-06-01

    We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for a d =3 Steane and surface code. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.

  18. Correlated measurement error hampers association network inference.

    Science.gov (United States)

    Kaduk, Mateusz; Hoefsloot, Huub C J; Vis, Daniel J; Reijmers, Theo; van der Greef, Jan; Smilde, Age K; Hendriks, Margriet M W B

    2014-09-01

    Modern chromatography-based metabolomics measurements generate large amounts of data in the form of abundances of metabolites. An increasingly popular way of representing and analyzing such data is by means of association networks. Ideally, such a network can be interpreted in terms of the underlying biology. A property of chromatography-based metabolomics data is that the measurement error structure is complex: apart from the usual (random) instrumental error there is also correlated measurement error. This is intrinsic to the way the samples are prepared and the analyses are performed and cannot be avoided. The impact of correlated measurement errors on (partial) correlation networks can be large and is not always predictable. The interplay between relative amounts of uncorrelated measurement error, correlated measurement error and biological variation defines this impact. Using chromatography-based time-resolved lipidomics data obtained from a human intervention study we show how partial correlation based association networks are influenced by correlated measurement error. We show how the effect of correlated measurement error on partial correlations is different for direct and indirect associations. For direct associations the correlated measurement error usually has no negative effect on the results, while for indirect associations, depending on the relative size of the correlated measurement error, results can become unreliable. The aim of this paper is to generate awareness of the existence of correlated measurement errors and their influence on association networks. Time series lipidomics data is used for this purpose, as it makes it possible to visually distinguish the correlated measurement error from a biological response. Underestimating the phenomenon of correlated measurement error will result in the suggestion of biologically meaningful results that in reality rest solely on complicated error structures. Using proper experimental designs that allow
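
    As a rough illustration of the phenomenon this abstract describes (not the authors' code), the following sketch simulates a shared, per-sample "batch" error on top of a known association structure and shows how it leaks into partial correlations. All dimensions, variances, and the two printed pairs are assumptions chosen for the demo.

```python
# Sketch: how correlated measurement error distorts a partial-correlation
# ("association") network. Synthetic data; numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_mets = 500, 5

# "Biology": metabolites with one known direct association (0 <-> 1).
true_cov = np.eye(n_mets)
true_cov[0, 1] = true_cov[1, 0] = 0.6
biology = rng.multivariate_normal(np.zeros(n_mets), true_cov, n_samples)

# Correlated measurement error: a per-sample effect hitting all metabolites
# at once (e.g., sample preparation), plus uncorrelated instrumental noise.
batch = rng.normal(0, 0.5, (n_samples, 1))
noise = rng.normal(0, 0.1, (n_samples, n_mets))
observed = biology + batch + noise

def partial_corr(data):
    """Partial correlations from the inverse covariance (precision) matrix."""
    prec = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pc = -prec / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

print(partial_corr(biology)[0, 2])   # indirect pair: ~0 without error
print(partial_corr(observed)[0, 2])  # same pair, biased by correlated error
```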

  19. Non-intercepted dose errors in prescribing anti-neoplastic treatment

    DEFF Research Database (Denmark)

    Mattsson, T O; Holm, B; Michelsen, H;

    2015-01-01

    BACKGROUND: The incidence of non-intercepted prescription errors and the risk factors involved, including the impact of computerised order entry (CPOE) systems on such errors, are unknown. Our objective was to determine the incidence, type, severity, and related risk factors of non-intercepted prescription dose errors. PATIENTS AND METHODS: A prospective, comparative cohort study in two clinical oncology units. One institution used a CPOE system with no connection to the electronic patient record system, while the other used paper-based prescription forms. All standard prescriptions were included... 100 prescriptions. CPOE resulted in 1.60 and paper-based prescription forms in 1.84 errors per 100 prescriptions, i.e. odds ratio (OR) = 0.87 [95% confidence interval (CI) 0.59-1.29, P = 0.49]. Fifteen different types of errors and four potential risk factors were identified. None of the dose errors...

  20. Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes

    Science.gov (United States)

    Costello, D. J., Jr.; Deng, H.; Lin, S.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high-speed decoding techniques are presented for extended single- and double-error-correcting RS codes. These techniques are designed to find the error locations and the error values directly from the syndrome, without having to form the error locator polynomial and solve for its roots.
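
    The idea of reading the error location and value directly off the syndromes can be made concrete for the single-error case. The sketch below illustrates the principle (it is not the paper's decoder): in GF(16) with primitive polynomial x^4 + x + 1, a single error e at position j gives syndromes S1 = e·α^j and S2 = e·α^(2j), so α^j = S2/S1 and e = S1²/S2 with no locator polynomial needed.

```python
# Single-error correction for a Reed-Solomon-style code over GF(16),
# using log/antilog tables for the field generated by x^4 + x + 1.
EXP, LOG = [0] * 15, [0] * 16
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13            # reduce modulo x^4 + x + 1

def syndromes(word):
    # S1 = sum c_i * alpha^i,  S2 = sum c_i * alpha^(2*i)
    s1 = s2 = 0
    for i, c in enumerate(word):
        if c:
            s1 ^= EXP[(LOG[c] + i) % 15]
            s2 ^= EXP[(LOG[c] + 2 * i) % 15]
    return s1, s2

word = [0] * 15              # the all-zero word is a valid codeword
word[6] ^= 9                 # inject a single error: value 9 at position 6
s1, s2 = syndromes(word)
loc = (LOG[s2] - LOG[s1]) % 15            # alpha^loc = S2 / S1
val = EXP[(2 * LOG[s1] - LOG[s2]) % 15]   # e = S1^2 / S2
print(loc, val)                           # -> 6 9
```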

  1. H.264/AVC error resilience tools suitable for 3G mobile video services

    Institute of Scientific and Technical Information of China (English)

    LIU Lin; YE Xiu-zi; ZHANG San-yuan; ZHANG Yin

    2005-01-01

    The emergence of the third generation mobile system (3G) makes video transmission in wireless environments possible, and the latest 3GPP/3GPP2 standards require 3G terminals to support H.264/AVC. Due to the high packet loss rate in wireless environments, error resilience for 3G terminals is necessary. Moreover, because of hardware restrictions, 3G mobile terminals support only part of the H.264/AVC error resilience tools. This paper analyzes various error resilience tools and their functions, and presents two error resilience strategies for 3G mobile streaming video services and mobile conversational services. The performance of the proposed error resilience strategies was tested using off-line common test conditions. Experiments showed that the proposed error resilience strategies can yield reasonably satisfactory results.

  2. The problem with total error models in establishing performance specifications and a simple remedy.

    Science.gov (United States)

    Krouwer, Jan S

    2016-08-01

    A recent issue in this journal revisited performance specifications since the Stockholm conference. Of the three recommended methods, two use total error models to establish performance specifications. It is shown that the most commonly used total error model - the Westgard model - is deficient, yet even more complete models fail to capture all errors that comprise total error. Moreover, total error models are often set at 95% of results, which leaves 5% of results unspecified. Glucose meter performance standards are used to illustrate these problems. The Westgard model is useful to assess assay performance but not to set performance specifications. Total error can be used to set performance specifications if the specifications include 100% of the results.
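
    For reference, a minimal sketch of the Westgard total error model the abstract criticizes. The 1.65 multiplier corresponds to the usual one-sided 95% limit, which is exactly the 95% coverage the abstract flags as leaving 5% of results unspecified.

```python
# Westgard-style total error: |bias| plus a z-multiple of the imprecision.
def westgard_total_error(bias_pct: float, cv_pct: float, z: float = 1.65) -> float:
    """Total analytical error (%), from bias (%) and CV (%)."""
    return abs(bias_pct) + z * cv_pct

# Example: a glucose meter with 2% bias and 3% CV.
print(westgard_total_error(2.0, 3.0))  # -> 6.95 (%)
```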

  3. Measurement accuracy of articulated arm CMMs with circular grating eccentricity errors

    Science.gov (United States)

    Zheng, Dateng; Yin, Sanfeng; Luo, Zhiyang; Zhang, Jing; Zhou, Taiping

    2016-11-01

    The 6 circular grating eccentricity errors model attempts to improve the measurement accuracy of an articulated arm coordinate measuring machine (AACMM) without increasing the corresponding hardware cost. We analyzed the AACMM’s circular grating eccentricity and obtained the 6 joints’ circular grating eccentricity error model parameters by conducting circular grating eccentricity error experiments. We completed the calibration operations for the measurement models by using home-made standard bar components. Our results show that the measurement errors from the AACMM’s measurement model without and with circular grating eccentricity errors are 0.0834 mm and 0.0462 mm, respectively. Significantly, we determined that measurement accuracy increased by about 44.6% when the circular grating eccentricity errors were corrected. This study is significant because it promotes wider applications of AACMMs both in theory and in practice.

  4. Effect of random surface errors on radiation characteristics of the side-fed offset Cassegrain antenna

    Institute of Scientific and Technical Information of China (English)

    LIU Shao-dong; JIAO Yong-chang; ZHANG Fu-shun

    2006-01-01

    In this paper the average power pattern of the side-fed offset Cassegrain (SFOC) dual reflector antenna is analyzed, and the effect of the random surface error on the radiation characteristics of the antenna is introduced. Here, the random surface error is defined as the error of the standard reflector in its normal direction, and the errors in a small zone of the reflector are considered equal. We also assume that the phase error on the aperture caused by the random surface error obeys a Gaussian distribution with zero mean, under which the expression of the average power pattern is deduced. Finally, the data related to the radiation characteristics of the antenna are calculated and the corresponding curves are presented. The obtained results can be used to determine the manufacturing accuracy of the reflector of SFOC antennas.

  5. Influences of observation errors in eddy flux data on inverse model parameter estimation

    Directory of Open Access Journals (Sweden)

    G. Lasslop

    2008-09-01

    Full Text Available Eddy covariance data are increasingly used to estimate parameters of ecosystem models. For proper maximum likelihood parameter estimates the error structure in the observed data has to be fully characterized. In this study we propose a method to characterize the random error of the eddy covariance flux data, and analyse error distribution, standard deviation, cross- and autocorrelation of CO2 and H2O flux errors at four different European eddy covariance flux sites. Moreover, we examine how the treatment of those errors and additional systematic errors influence statistical estimates of parameters and their associated uncertainties with three models of increasing complexity – a hyperbolic light response curve, a light response curve coupled to water fluxes and the SVAT scheme BETHY. In agreement with previous studies we find that the error standard deviation scales with the flux magnitude. The previously found strongly leptokurtic error distribution is revealed to be largely due to a superposition of almost Gaussian distributions with standard deviations varying by flux magnitude. The cross-correlations of CO2 and H2O fluxes were in all cases negligible (R2 below 0.2), while the autocorrelation is usually below 0.6 at a lag of 0.5 h and decays rapidly at larger time lags. This implies that in these cases the weighted least squares criterion yields maximum likelihood estimates. To study the influence of the observation errors on model parameter estimates we used synthetic datasets, based on observations of two different sites. We first fitted the respective models to observations and then added the random error estimates described above and the systematic error, respectively, to the model output. This strategy enables us to compare the estimated parameters with the true parameters. We illustrate that the correct implementation of the random error standard deviation scaling with flux
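
    A hedged sketch of the estimation setup this abstract motivates: when the random flux error's standard deviation scales with flux magnitude, weighting each residual by 1/σ_i turns least squares into a maximum likelihood fit. The hyperbolic light response curve is one of the models named above, but its coefficients and the error-scaling numbers here are illustrative assumptions.

```python
# Weighted least squares with magnitude-scaled errors (synthetic demo).
import numpy as np
from scipy.optimize import curve_fit

def light_response(ppfd, alpha, beta):
    """Hyperbolic light response curve."""
    return alpha * beta * ppfd / (alpha * ppfd + beta)

rng = np.random.default_rng(1)
ppfd = np.linspace(50, 2000, 200)
flux = light_response(ppfd, 0.05, 30.0)
sigma = 0.5 + 0.1 * np.abs(flux)          # error SD scales with flux magnitude
obs = flux + rng.normal(0, sigma)

# curve_fit with sigma performs weighted least squares (chi-square fit).
popt, pcov = curve_fit(light_response, ppfd, obs, p0=[0.1, 20.0],
                       sigma=sigma, absolute_sigma=True)
print(popt, np.sqrt(np.diag(pcov)))       # estimates and their 1-sigma errors
```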

  6. Model error estimation in ensemble data assimilation

    Directory of Open Access Journals (Sweden)

    S. Gillijns

    2007-01-01

    Full Text Available A new methodology is proposed to estimate and account for systematic model error in linear filtering as well as in nonlinear ensemble based filtering. Our results extend the work of Dee and Todling (2000) on constant bias errors to time-varying model errors. In contrast to existing methodologies, the new filter can also deal with the case where no dynamical model for the systematic error is available. In the latter case, the applicability is limited by a matrix rank condition which has to be satisfied in order for the filter to exist. The performance of the filter developed in this paper is limited by the availability and the accuracy of observations and by the variance of the stochastic model error component. The effect of these aspects on the estimation accuracy is investigated in several numerical experiments using the Lorenz (1996) model. Experimental results indicate that the availability of a dynamical model for the systematic error significantly reduces the variance of the model error estimates, but has only minor effect on the estimates of the system state. The filter is able to estimate additive model error of any type, provided that the rank condition is satisfied and that the stochastic errors and measurement errors are significantly smaller than the systematic errors. The results of this study are encouraging. However, it remains to be seen how the filter performs in more realistic applications.

  7. Errors in quantum tomography: diagnosing systematic versus statistical errors

    Science.gov (United States)

    Langford, Nathan K.

    2013-03-01

    A prime goal of quantum tomography is to provide quantitatively rigorous characterization of quantum systems, be they states, processes or measurements, particularly for the purposes of trouble-shooting and benchmarking experiments in quantum information science. A range of techniques exist to enable the calculation of errors, such as Monte-Carlo simulations, but their quantitative value is arguably fundamentally flawed without an equally rigorous way of authenticating the quality of a reconstruction to ensure it provides a reasonable representation of the data, given the known noise sources. A key motivation for developing such a tool is to enable experimentalists to rigorously diagnose the presence of technical noise in their tomographic data. In this work, I explore the performance of the chi-squared goodness-of-fit test statistic as a measure of reconstruction quality. I show that its behaviour deviates noticeably from expectations for states lying near the boundaries of physical state space, severely undermining its usefulness as a quantitative tool precisely in the region which is of most interest in quantum information processing tasks. I suggest a simple, heuristic approach to compensate for these effects and present numerical simulations showing that this approach provides substantially improved performance.
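
    A generic sketch of the chi-squared goodness-of-fit check discussed in this abstract, applied to counted measurement outcomes. The counts, model prediction, and number of fitted parameters are made up; a small p-value would flag a reconstruction that does not represent the data given the assumed noise.

```python
# Chi-squared goodness-of-fit between observed counts and a model prediction.
import numpy as np
from scipy.stats import chi2

observed = np.array([480, 520, 260, 240])   # measured counts (illustrative)
expected = np.array([500, 500, 250, 250])   # model prediction (illustrative)
n_params = 3                                # fitted parameters (assumed)

chi2_stat = np.sum((observed - expected) ** 2 / expected)
dof = observed.size - n_params
p_value = chi2.sf(chi2_stat, dof)           # survival function = 1 - CDF
print(chi2_stat, p_value)                   # small p => poor fit / excess noise
```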

  8. Impact of Measurement Error on Synchrophasor Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  9. Adjoint Error Estimation for Linear Advection

    Energy Technology Data Exchange (ETDEWEB)

    Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S

    2011-03-30

    An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.
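
    The flavor of such adjoint-based estimates can be sketched for the simplest linear case (a generic statement, not the report's exact derivation): for a linear problem Lu = f and a functional J(u) = (g, u), the functional error of an approximation u_h is an inner product of the adjoint solution with the residual.

```latex
% Generic adjoint error identity: with L^{*}\phi = g,
J(u) - J(u_h) = (g,\, u - u_h) = (L^{*}\phi,\, u - u_h)
              = (\phi,\, L(u - u_h)) = (\phi,\, f - L u_h).
```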

  10. On the Combination Procedure of Correlated Errors

    CERN Document Server

    Erler, Jens

    2015-01-01

    When averages of different experimental determinations of the same quantity are computed, each with statistical and systematic error components, then frequently the statistical and systematic components of the combined error are quoted explicitly. These are important pieces of information since statistical errors scale differently and often more favorably with the sample size than most systematical or theoretical errors. In this communication we describe a transparent procedure by which the statistical and systematic error components of the combination uncertainty can be obtained. We develop a general method and derive a general formula for the case of Gaussian errors with or without correlations. The method can easily be applied to other error distributions, as well. For the case of two measurements, we also define disparity and misalignment angles, and discuss their relation to the combination weight factors.

  11. On the combination procedure of correlated errors

    Energy Technology Data Exchange (ETDEWEB)

    Erler, Jens [Universidad Nacional Autonoma de Mexico, Instituto de Fisica, Mexico D.F. (Mexico)

    2015-09-15

    When averages of different experimental determinations of the same quantity are computed, each with statistical and systematic error components, then frequently the statistical and systematic components of the combined error are quoted explicitly. These are important pieces of information since statistical errors scale differently and often more favorably with the sample size than most systematical or theoretical errors. In this communication we describe a transparent procedure by which the statistical and systematic error components of the combination uncertainty can be obtained. We develop a general method and derive a general formula for the case of Gaussian errors with or without correlations. The method can easily be applied to other error distributions, as well. For the case of two measurements, we also define disparity and misalignment angles, and discuss their relation to the combination weight factors. (orig.)
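
    Both records describe the same procedure. A minimal numerical sketch of the underlying best linear unbiased (BLUE) combination is given below, with the statistical/systematic split obtained by propagating each covariance component through the combination weights; the two measurements and their error components are illustrative assumptions.

```python
# BLUE combination of two correlated measurements of one quantity,
# with the combined error split into statistical and systematic parts.
import numpy as np

x = np.array([10.2, 9.8])                  # two measurements
stat = np.array([0.3, 0.4])                # statistical errors (uncorrelated)
syst = np.array([0.2, 0.2])                # fully correlated systematic error

cov = np.diag(stat**2) + np.outer(syst, syst)   # total covariance matrix
w = np.linalg.solve(cov, np.ones(2))
w /= w.sum()                               # weights: C^-1 1 / (1' C^-1 1)

mean = w @ x
sigma_tot = np.sqrt(w @ cov @ w)                # total combined uncertainty
sigma_stat = np.sqrt(w @ np.diag(stat**2) @ w)  # statistical component
sigma_syst = np.sqrt(sigma_tot**2 - sigma_stat**2)  # systematic component
print(mean, sigma_tot, sigma_stat, sigma_syst)
```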

  12. Human error: A significant information security issue

    Energy Technology Data Exchange (ETDEWEB)

    Banks, W.W.

    1994-12-31

    One of the major threats to information security, human error, is often ignored or dismissed with statements such as "There is not much we can do about it." This type of thinking runs counter to reality because studies have shown that, of all systems threats, human error has the highest probability of occurring and that, with professional assistance, human errors can be prevented or significantly reduced. Security analysts often overlook human error as a major threat; however, other professionals such as human factors engineers are trained to deal with these probabilistic occurrences and mitigate them. In a recent study, 55% of the respondents surveyed considered human error the most important security threat. Documentation exists to show that human error was a major cause of the consequences suffered at Three Mile Island, Chernobyl, Bhopal, and the Exxon tanker, Valdez. Ironically, causes of human error can usually be quickly and easily eliminated.

  13. Radar error statistics for the space shuttle

    Science.gov (United States)

    Lear, W. M.

    1979-01-01

    Radar error statistics of C-band and S-band that are recommended for use with the groundtracking programs to process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high frequency error statistics, using the subscript q. Bias errors may be slowly varying to constant. High frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correction for atmospheric refraction effects. High frequency noise was mainly due to hardware and due to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line of sight scintillations were identified.

  14. Empirical analysis and modeling of errors of atmospheric profiles from GPS radio occultation

    Directory of Open Access Journals (Sweden)

    B. Scherllin-Pirscher

    2011-05-01

    Full Text Available The utilization of radio occultation (RO) data in atmospheric studies requires precise knowledge of error characteristics. We present results of an empirical error analysis of GPS radio occultation (RO) bending angle, refractivity, dry pressure, dry geopotential height, and dry temperature. We find very good agreement between data characteristics of different missions (CHAMP, GRACE-A, and Formosat-3/COSMIC (F3C)). In the global mean, observational errors (standard deviation from "true" profiles at mean tangent point location) agree within 0.3% in bending angle, 0.1% in refractivity, and 0.2 K in dry temperature at all altitude levels between 4 km and 35 km. Above ≈20 km, the observational errors show a strong seasonal dependence at high latitudes. Larger errors occur in hemispheric wintertime and are associated mainly with background data used in the retrieval process. The comparison between UCAR and WEGC results (both data centers have independent inversion processing chains) reveals different magnitudes of observational errors in atmospheric parameters, which are attributable to different background fields used. Based on the empirical error estimates, we provide a simple analytical error model for GPS RO atmospheric parameters and account for vertical, latitudinal, and seasonal variations. In the model, which spans the altitude range from 4 km to 35 km, a constant error is adopted around the tropopause region amounting to 0.8% for bending angle, 0.35% for refractivity, 0.15% for dry pressure, 10 m for dry geopotential height, and 0.7 K for dry temperature. Below this region the observational error increases following an inverse height power-law and above it increases exponentially. The observational error model is the same for UCAR and WEGC data but due to somewhat different error characteristics below about 10 km and above about 20 km some parameters have to be adjusted. Overall, the observational error model is easily applicable and
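
    A sketch of the piecewise analytical error model described above, for dry temperature (constant 0.7 K around the tropopause, inverse height power-law growth below, exponential growth above). The region bounds z1 and z2, the power-law exponent p, and the scale height H are illustrative assumptions; the paper tunes such parameters per variable and data center.

```python
# Piecewise RO observational error model for dry temperature (illustrative).
import numpy as np

def dry_temp_error(z_km, s0=0.7, z1=10.0, z2=20.0, p=1.5, H=10.0):
    z = np.asarray(z_km, dtype=float)
    below = s0 * (z1 / z) ** p              # inverse height power-law, z < z1
    above = s0 * np.exp((z - z2) / H)       # exponential increase,     z > z2
    return np.where(z < z1, below, np.where(z > z2, above, s0))

print(dry_temp_error([5, 15, 30]))          # error in K at 5/15/30 km altitude
```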

  15. Report on errors in pretransfusion testing from a tertiary care center: A step toward transfusion safety

    Directory of Open Access Journals (Sweden)

    Meena Sidhu

    2016-01-01

    Full Text Available Introduction: Errors in the process of pretransfusion testing for blood transfusion can occur at any stage, from collection of the sample to administration of the blood component. The present study was conducted to analyze the errors that threaten patients' transfusion safety and the actual harm/serious adverse events that occurred to patients due to these errors. Materials and Methods: The prospective study was conducted in the Department of Transfusion Medicine, Shri Maharaja Gulab Singh Hospital, Government Medical College, Jammu, India from January 2014 to December 2014, a period of 1 year. Errors were defined as any deviation from established policies and standard operating procedures. A near-miss event was defined as an error that did not reach the patient. Location and time of occurrence of the events/errors were also noted. Results: A total of 32,672 requisitions for the transfusion of blood and blood components were received for typing and cross-matching. Out of these, 26,683 products were issued to the various clinical departments. A total of 2,229 errors were detected over a period of 1 year. Near-miss events constituted 53% of the errors, and actual harmful events due to errors occurred in 0.26% of the patients. The most frequent errors in clinical services were sample labeling errors (2.4%), inappropriate requests for blood components (2%), and information on requisition forms not matching that on the sample (1.5% of all requisitions received). In transfusion services, the most common event was accepting a sample in error, with a frequency of 0.5% of all requisitions. ABO-incompatible hemolytic reactions were the most frequent harmful event, with a frequency of 2.2/10,000 transfusions. Conclusion: Sample labeling errors, inappropriate requests, and samples received in error were the most frequent high-risk errors.

  16. The Error Reporting in the ATLAS TDAQ System

    Science.gov (United States)

    Kolos, Serguei; Kazarov, Andrei; Papaevgeniou, Lykourgos

    2015-05-01

    of the available class constructors and send this instance to ERS. This paper presents the original design solutions exploited for the ERS implementation and describes how it was used during the first ATLAS run period. The cross-system error reporting standardization introduced by ERS was one of the key points for the successful implementation of automated mechanisms for online error recovery.

  17. Error analysis for mesospheric temperature profiling by absorptive occultation sensors

    Directory of Open Access Journals (Sweden)

    M. J. Rieder

    Full Text Available An error analysis for mesospheric profiles retrieved from absorptive occultation data has been performed, starting with realistic error assumptions as would apply to intensity data collected by available high-precision UV photodiode sensors. Propagation of statistical errors was investigated through the complete retrieval chain from measured intensity profiles to atmospheric density, pressure, and temperature profiles. We assumed unbiased errors as the occultation method is essentially self-calibrating, and straight-line propagation of occulted signals as we focus on heights of 50–100 km, where refractive bending of the sensed radiation is negligible. Throughout the analysis the errors were characterized at each retrieval step by their mean profile, their covariance matrix and their probability density function (pdf). This furnishes, compared to a variance-only estimation, a much improved insight into the error propagation mechanism. We applied the procedure to a baseline analysis of the performance of a recently proposed solar UV occultation sensor (SMAS – Sun Monitor and Atmospheric Sounder) and provide, using a reasonable exponential atmospheric model as background, results on error standard deviations and error correlation functions of density, pressure, and temperature profiles. Two different sensor photodiode assumptions are discussed: diamond diodes (DD) with 0.03% and silicon diodes (SD) with 0.1% (unattenuated) intensity measurement noise at 10 Hz sampling rate. A factor-of-2 margin was applied to these noise values in order to roughly account for unmodeled cross section uncertainties. Within the entire height domain (50–100 km) we find temperature to be retrieved to better than 0.3 K (DD) / 1 K (SD) accuracy, respectively, at 2 km height resolution. The results indicate that absorptive occultations acquired by a SMAS-type sensor could provide mesospheric profiles of fundamental variables such as temperature with

  18. Relationships of Measurement Error and Prediction Error in Observed-Score Regression

    Science.gov (United States)

    Moses, Tim

    2012-01-01

    The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…

  19. CHANGES IN PRIOR PERIOD ERRORS

    OpenAIRE

    Alina Pietraru; Dorina Luţă

    2007-01-01

    In 2007, Romania continued the gradual implementation of the International Financial Reporting Standards, which include the IFRS, the IAS and their Interpretations as approved by the European Union, translated and published in Romanian. The financial statements must achieve one major purpose: to provide information about the financial position, financial performance and changes in the financial position of the entity. In this context, the accounting policy elaborated and assumed by the m...

  20. Error propagation in open respirometric assays

    Directory of Open Access Journals (Sweden)

    C. C. Lobo

    2014-06-01

    Full Text Available This work deals with the calculation of the uncertainty of the exogenous respiration rate (Rex) and the total oxygen consumed (OCT) derived from a single open respirometric profile. Uncertainties were evaluated by applying a linear error propagation method. Results show that the standard deviations (SD) of Rex and OCT depend not only on the SD of the dissolved oxygen (σC) and kLa (σkLa), but also on the SD of the derivative term (dC/dt) of the oxygen mass balance equation (σb). A Monte Carlo technique was employed to assess σb; a power law expression was proposed for the dependence of σb as a function of σC, the time window (tw) and the sampling rate (Δt). The equations obtained in the present work are useful to calculate suitable conditions (e.g., biomass concentration, kLa) that minimize the coefficient of variation corresponding to Rex and OCT.
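
    A general sketch of the linear error propagation approach used above: Var(f) ≈ J Σ Jᵀ, with J the Jacobian of f in its inputs. The respirometric balance written here (Rex = kLa·(Cs − C) − dC/dt, the usual rearrangement of the open oxygen mass balance) and all numbers are assumptions for the demo.

```python
# Linear error propagation with a numerical Jacobian (illustrative).
import numpy as np

def rex(kla, cs, c, dcdt):
    """Exogenous respiration rate from the open oxygen mass balance."""
    return kla * (cs - c) - dcdt

def propagate(f, x, cov, eps=1e-6):
    """SD of f(x) by first-order propagation: sqrt(J @ cov @ J)."""
    x = np.asarray(x, dtype=float)
    f0 = f(*x)
    jac = np.empty_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps * max(1.0, abs(x[i]))
        jac[i] = (f(*(x + dx)) - f0) / dx[i]
    return float(np.sqrt(jac @ cov @ jac))

x = [20.0, 8.0, 4.0, -0.5]               # kLa [1/h], Cs, C [mg/L], dC/dt
cov = np.diag([1.0, 0.01, 0.01, 0.04])   # variances, incl. the dC/dt term
print(rex(*x), propagate(rex, x, cov))   # Rex and its standard deviation
```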

  1. Orthogonality of inductosyn angle-measuring system error and error-separating technology

    Institute of Scientific and Technical Information of China (English)

    任顺清; 曾庆双; 王常虹

    2003-01-01

    Round inductosyn is widely used in inertial navigation test equipment, and its accuracy has a significant effect on the overall accuracy of the equipment. Four main errors of the round inductosyn, i.e. the first-order long-period (360°) harmonic error, the second-order long-period harmonic error, the first-order short-period harmonic error and the second-order short-period harmonic error, are described, and the orthogonality of these four kinds of errors is studied. An error-separating technology is proposed to separate these four kinds of errors, and in the process of separating the short-period harmonic errors, the arrangement in the order of the decimal part of the angle pitch number can be omitted. The effectiveness of the proposed technology is proved through measuring and adjusting the angular errors.
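
    The orthogonality the abstract relies on can be illustrated with a least-squares separation of the long-period harmonics: over a full revolution the first- and second-order terms are mutually orthogonal, so each coefficient falls out of a linear fit independently. This is a synthetic illustration, not the paper's procedure; short-period terms tied to the pitch number would add analogous higher-order columns.

```python
# Separating first- and second-order harmonic angle errors by least squares.
import numpy as np

rng = np.random.default_rng(2)
theta = np.deg2rad(np.arange(0, 360, 5))          # measured positions
err = (30 * np.sin(theta) + 10 * np.cos(theta)          # first-order harmonic
       + 5 * np.sin(2 * theta) - 2 * np.cos(2 * theta)  # second-order harmonic
       + rng.normal(0, 0.5, theta.size))          # arcsec, with noise

# Design matrix of harmonic terms, orthogonal over a full revolution.
A = np.column_stack([np.sin(theta), np.cos(theta),
                     np.sin(2 * theta), np.cos(2 * theta)])
coef, *_ = np.linalg.lstsq(A, err, rcond=None)
print(coef)   # ~[30, 10, 5, -2]: recovered harmonic error amplitudes
```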

  2. Error processing network dynamics in schizophrenia.

    Science.gov (United States)

    Becerril, Karla E; Repovs, Grega; Barch, Deanna M

    2011-01-15

    Current theories of cognitive dysfunction in schizophrenia emphasize an impairment in the ability of individuals suffering from this disorder to monitor their own performance, and adjust their behavior to changing demands. Detecting an error in performance is a critical component of evaluative functions that allow the flexible adjustment of behavior to optimize outcomes. The dorsal anterior cingulate cortex (dACC) has been repeatedly implicated in error-detection and implementation of error-based behavioral adjustments. However, accurate error-detection and subsequent behavioral adjustments are unlikely to rely on a single brain region. Recent research demonstrates that regions in the anterior insula, inferior parietal lobule, anterior prefrontal cortex, thalamus, and cerebellum also show robust error-related activity, and integrate into a functional network. Despite the relevance of examining brain activity related to the processing of error information and supporting behavioral adjustments in terms of a distributed network, the contribution of regions outside the dACC to error processing remains poorly understood. To address this question, we used functional magnetic resonance imaging to examine error-related responses in 37 individuals with schizophrenia and 32 healthy controls in regions identified in the basic science literature as being involved in error processing, and determined whether their activity was related to behavioral adjustments. Our imaging results support previous findings showing that regions outside the dACC are sensitive to error commission, and demonstrated that abnormalities in brain responses to errors among individuals with schizophrenia extend beyond the dACC to almost all of the regions involved in error-related processing in controls. However, error related responses in the dACC were most predictive of behavioral adjustments in both groups. Moreover, the integration of this network of regions differed between groups, with the

  3. Embedded wavelet video coding with error concealment

    Science.gov (United States)

    Chang, Pao-Chi; Chen, Hsiao-Ching; Lu, Ta-Te

    2000-04-01

    We present an error-concealed embedded wavelet (ECEW) video coding system for transmission over the Internet or wireless networks. This system consists of two types of frames: intra (I) frames and inter, or predicted (P), frames. Inter frames are constructed from the residual frames formed by variable block-size multiresolution motion estimation (MRME). Motion vectors are compressed by arithmetic coding. The image data of intra frames and residual frames are coded by error-resilient embedded zerotree wavelet (ER-EZW) coding. The ER-EZW coding partitions the wavelet coefficients into several groups and each group is coded independently. Therefore, the error propagation effect resulting from an error is confined to a single group. In EZW coding any single error may result in a totally undecodable bitstream. To further reduce the error damage, we use error concealment at the decoding end. In intra frames, the erroneous wavelet coefficients are replaced by neighbors. In inter frames, erroneous blocks of wavelet coefficients are replaced by data from the previous frame. Simulations show that the performance of ECEW is superior to ECEW without error concealment by approximately 7 to 8 dB at an error rate of 10^-3 in intra frames. The improvement is still approximately 2 to 3 dB at a higher error rate of 10^-2 in inter frames.

  4. Medical errors recovered by critical care nurses.

    Science.gov (United States)

    Dykes, Patricia C; Rothschild, Jeffrey M; Hurley, Ann C

    2010-05-01

    The frequency and types of medical errors are well documented, but less is known about potential errors that were intercepted by nurses. We studied the type, frequency, and potential harm of recovered medical errors reported by critical care registered nurses (CCRNs) during the previous year. Nurses are known to protect patients from harm. Several studies on medical errors found that there would have been more medical errors reaching the patient had not potential errors been caught earlier by nurses. The Recovered Medical Error Inventory, a 25-item empirically derived and internally consistent (alpha = .90) list of medical errors, was posted on the Internet. Participants were recruited via e-mail and healthcare-related listservs using a nonprobability snowball sampling technique. Investigators e-mailed contacts working in hospitals or who managed healthcare-related listservs and asked the contacts to pass the link on to others with contacts in acute care settings. During 1 year, 345 CCRNs reported that they recovered 18,578 medical errors, of which they rated 4,183 as potentially lethal. Surveillance, clinical judgment, and interventions by CCRNs to identify, interrupt, and correct medical errors protected seriously ill patients from harm.

  5. Standardized Reference Evapotranspiration Equation

    Directory of Open Access Journals (Sweden)

    M.D. Mundo–Molina

    2009-04-01

    Full Text Available This paper presents a discussion of the need to standardize the Penman–Monteith equations used to estimate ETo. The proposal is to define an accurate, standardized equation based on Penman–Monteith. The automated weather station named CIANO (27° 22' 144 north latitude and 109° 55' west longitude) was selected to make comparisons. The compared equations were: (a) CIANO weather station, (b) Penman–Monteith ASCE (PMA), Penman–Monteith FAO 56 (PM FAO 56), and standardized Penman–Monteith ASCE (PM Std. ASCE). The results were: (a) there are important differences between PMA and the CIANO weather station, attributed to the non-standardization of the CIANO weather station equation; (b) the correlation coefficient between both methods was 0.92, with a standard deviation of 1.63 mm, an average quadratic error of 0.60 mm, and an efficiency of 87% in the estimation of ETo with respect to the reference method.
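
    A sketch of the comparison statistics quoted above (correlation, standard deviation of the differences, average quadratic error, and an efficiency score relative to the reference method, here taken as the Nash–Sutcliffe form). The two short ETo series are made-up stand-ins for the station and Penman–Monteith estimates.

```python
# Agreement statistics between an estimated and a reference ETo series.
import numpy as np

eto_ref = np.array([4.1, 5.3, 6.0, 5.5, 4.8, 6.2])   # reference method [mm/day]
eto_est = np.array([4.4, 5.0, 6.3, 5.9, 4.6, 6.0])   # compared equation

diff = eto_est - eto_ref
r = np.corrcoef(eto_ref, eto_est)[0, 1]              # correlation coefficient
sd = diff.std(ddof=1)                                # SD of the differences
rmse = np.sqrt(np.mean(diff**2))                     # average quadratic error
eff = 1 - np.sum(diff**2) / np.sum((eto_ref - eto_ref.mean())**2)  # efficiency
print(r, sd, rmse, eff)
```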

  6. Common errors in disease mapping

    Directory of Open Access Journals (Sweden)

    Ricardo Ocaña-Riola

    2010-05-01

    Full Text Available Many morbidity-mortality atlases and small-area studies have been carried out over the last decade. However, the methods used to draw up such research, the interpretation of results and the conclusions published are often inaccurate. Often, the proliferation of this practice has led to inefficient decision-making, implementation of inappropriate health policies and negative impact on the advancement of scientific knowledge. This paper reviews the most frequent errors in the design, analysis and interpretation of small-area epidemiological studies and proposes a diagnostic evaluation test that should enable the scientific quality of published papers to be ascertained. Nine common mistakes in disease mapping methods are discussed. From this framework, and following the theory of diagnostic evaluation, a standardised test to evaluate the scientific quality of a small-area epidemiology study has been developed. Optimal quality is achieved with the maximum score (16 points), average with a score between 8 and 15 points, and low with a score of 7 or below. A systematic evaluation of scientific papers, together with an enhanced quality in future research, will contribute towards increased efficacy in epidemiological surveillance and in health planning based on the spatio-temporal analysis of ecological information.

  7. On Nautical Observation Errors Evaluation

    Directory of Open Access Journals (Sweden)

    Wlodzimierz Filipowicz

    2015-12-01

    Full Text Available Mathematical Theory of Evidence (MTE) enables upgrading models and solving crucial problems in many disciplines. MTE delivers a unique new opportunity once one engages the possibilistic concept. Since fuzziness is widely perceived as something that enables encoding knowledge, models built upon fuzzy platforms can accept one's skill within a given field. At the same time, the evidence combining scheme is a mechanism enabling enrichment of the informative context of the initial data. Therefore it can be exploited in many cases where uncertainty and lack of precision prevail. In nautical applications, for example, it can be used to handle data featuring systematic and random deflections. The theoretical background is discussed, and a computer application was successfully implemented in order to cope with erroneous and uncertain data. The output of the application resulted in making a fix and a posteriori evaluating its quality. It was also proven that the approach can be useful for calibrating measurement appliances. A unique feature of the combination scheme, proven by the author in his previous paper, enables identifying the systematic deflection of a measurement. Based on that theorem, the paper aims at further exploration of practical aspects of the problem. It concentrates on reduction of the hypothesis frame and on identification of random and systematic errors.

  8. Medication errors in anesthesia: unacceptable or unavoidable?

    Directory of Open Access Journals (Sweden)

    Ira Dhawan

    Full Text Available Abstract Medication errors are common causes of patient morbidity and mortality, and they add a financial burden to the institution as well. Though the impact varies from no harm to serious adverse effects including death, medication errors need attention on a priority basis since they are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone might not be successful until a change in the existing protocols and system is incorporated. Often, drug errors that occur cannot be reversed. The best way to ‘treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse, and dilution error), incorrect administration route, underdosing, and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration, or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes, and develop a safe and ‘just' culture in order to prevent medication errors. Newly devised systems like VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Such developments, along with vigilant doctors, a safe workplace culture, and organizational support, can together help prevent these errors.

  11. Error correction maintains post-error adjustments after one night of total sleep deprivation.

    Science.gov (United States)

    Hsieh, Shulan; Tsai, Cheng-Yin; Tsai, Ling-Ling

    2009-06-01

    Previous behavioral and electrophysiologic evidence indicates that one night of total sleep deprivation (TSD) impairs error monitoring, including error detection, error correction, and posterror adjustments (PEAs). This study examined the hypothesis that error correction, manifesting as an overtly expressed self-generated performance feedback to errors, can effectively prevent TSD-induced impairment in the PEAs. Sixteen healthy right-handed adults (seven women and nine men) aged 19-23 years were instructed to respond to a target arrow flanked by four distractor arrows and to correct their errors immediately after committing them. Task performance and electroencephalogram (EEG) data were collected after normal sleep (NS) and after one night of TSD in a counterbalanced repeated-measures design. With the demand of error correction, the participants maintained the same level of PEAs in reducing the error rate for trial N + 1 after TSD as after NS. Corrective behavior further affected the PEAs for trial N + 1 in the omission rate and response speed, which decreased and sped up, respectively, following corrected errors, particularly after TSD. These results show that error correction effectively maintains posterror reduction in both committed and omitted errors after TSD. A cerebral mechanism might be involved in the effect of error correction, as EEG beta (17-24 Hz) activity was increased after erroneous responses compared to after correct responses. The practical application of error correction to increasing work safety, which can be jeopardized by repeated errors, is suggested for workers who are involved in monotonous but attention-demanding monitoring tasks.

  12. Phase Error Modeling and Its Impact on Precise Orbit Determination of GRACE Satellites

    Directory of Open Access Journals (Sweden)

    Jia Tu

    2012-01-01

    Full Text Available The main limiting factor for precise orbit determination (POD) of low-Earth-orbit (LEO) satellites using dual-frequency GPS is nowadays in-flight phase error modeling. The phase error is modeled as a systematic and a random component, each depending on the direction of GPS signal reception. The systematic part and the standard deviation of the random part of the phase error model are estimated, respectively, by bin-wise mean and standard deviation values of the phase postfit residuals computed during orbit determination. By removing the systematic component and adjusting the weight of the phase observation data according to the standard deviation of the random component, the orbit can be further improved by the POD approach. The GRACE data of 1–31 January 2006 are processed, and three types of orbit solutions are obtained: POD without phase error model correction, POD with mean-value correction of the phase error model, and POD with full phase error model correction. The three-dimensional (3D) orbit improvements derived from the phase error model correction are 0.0153 m for GRACE A and 0.0131 m for GRACE B, and the 3D influences arising from the random part of the phase error model are 0.0068 m and 0.0075 m for GRACE A and GRACE B, respectively. Thus the random part of the phase error model cannot be neglected for POD. Phase postfit residual analysis, orbit comparison with the JPL precise science orbit, and orbit validation with KBR data also demonstrate that the results derived from POD with phase error model correction are better than the other two types of orbit solutions generated in this paper.
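
    A minimal sketch of the bin-wise procedure described above, assuming each residual is tagged with the azimuth and elevation of signal reception; the bin size, function names, and weighting scheme are assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def build_phase_error_map(az, el, residuals, bin_deg=5.0):
        """Bin-wise mean and standard deviation of phase postfit residuals
        over azimuth/elevation of GPS signal reception (degrees)."""
        az_b = (np.asarray(az) // bin_deg).astype(int)
        el_b = (np.asarray(el) // bin_deg).astype(int)
        res = np.asarray(residuals)
        mean_map, std_map = {}, {}
        for key in set(zip(az_b, el_b)):
            sel = (az_b == key[0]) & (el_b == key[1])
            mean_map[key] = res[sel].mean()   # systematic component of the bin
            std_map[key] = res[sel].std()     # random component of the bin
        return mean_map, std_map

    def correct_and_weight(az, el, phase, mean_map, std_map, bin_deg=5.0):
        """Subtract the systematic component from one phase observation and
        return a weight inversely proportional to the bin's variance."""
        key = (int(az // bin_deg), int(el // bin_deg))
        corrected = phase - mean_map.get(key, 0.0)
        sigma = std_map.get(key, 1.0)
        return corrected, 1.0 / sigma**2
    ```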

  13. Stochastic and sensitivity analysis of shape error of inflatable antenna reflectors

    Science.gov (United States)

    San, Bingbing; Yang, Qingshan; Yin, Liwei

    2017-03-01

    Inflatable antennas are promising candidates for future satellite communications and space observations since they are lightweight, low-cost, and small in packaged volume. However, due to their high flexibility, inflatable reflectors are difficult to manufacture accurately, which may result in undesirable shape errors and thus affect their performance negatively. In this paper, the stochastic characteristics of shape errors induced during the manufacturing process are investigated using Latin hypercube sampling coupled with manufacture simulations. Four main random error sources are involved: errors in membrane thickness, errors in the elastic modulus of the membrane, boundary deviations, and pressure variations. Using regression and correlation analysis, a global sensitivity study is conducted to rank the importance of these error sources. This global sensitivity analysis is novel in that it can take into account both the random variation of and the interaction between error sources. Analyses are carried out parametrically with various focal-length-to-diameter ratios (F/D) and aperture sizes (D) to investigate their effects on the significance ranking of error sources. The research reveals that the RMS (root mean square) of the shape error is a random quantity with an exponential probability distribution and great dispersion; with increasing F/D and D, both the mean value and the standard deviation of the shape errors increase; within the proposed range, the significance ranking of error sources is independent of F/D and D; boundary deviation imposes the greatest effect, with a much higher weight than the others; pressure variation ranks second; and errors in membrane thickness and elastic modulus rank last, with sensitivities very close to that of pressure variation. Finally, suggestions are given for controlling the shape accuracy of reflectors, and allowable values of the error sources are proposed from the perspective of reliability.
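
    A sketch of the sampling-and-regression workflow, with a stand-in analytic function in place of the finite-element manufacture simulation and standardized regression coefficients (SRCs) as the global sensitivity measure; the coefficients inside `shape_rms` are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import qmc

    def shape_rms(x):
        """Stand-in for the manufacture simulation: RMS shape error from
        thickness, modulus, boundary and pressure deviations, all
        normalized to [0, 1]; the real study uses FE manufacture models."""
        t, e, b, p = x
        return 0.05 * t + 0.04 * e + 0.60 * b + 0.30 * p + 0.02 * b * p

    # Latin hypercube sample of the four random error sources
    sampler = qmc.LatinHypercube(d=4, seed=1)
    X = sampler.random(n=500)
    y = np.array([shape_rms(x) for x in X])

    # Standardized regression coefficients rank the error sources
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    for name, b in zip(["thickness", "modulus", "boundary", "pressure"], beta):
        print(f"{name:10s} SRC = {b:+.3f}")
    ```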

  14. Identification errors in pathology and laboratory medicine.

    Science.gov (United States)

    Valenstein, Paul N; Sirota, Ronald L

    2004-12-01

    Identification errors involve misidentification of a patient or a specimen. Either has the potential to cause patients harm. Identification errors can occur during any part of the test cycle; however, most occur in the preanalytic phase. Patient identification errors in transfusion medicine occur in 0.05% of specimens; for general laboratory specimens the rate is much higher, around 1%. Anatomic pathology, which involves multiple specimen transfers and hand-offs, may have the highest identification error rate. Certain unavoidable cognitive failures lead to identification errors. Technology, ranging from bar-coded specimen labels to radio frequency identification tags, can be incorporated into protective systems that have the potential to detect and correct human error and reduce the frequency with which patients and specimens are misidentified.

  15. Beyond the answer: post-error processes.

    Science.gov (United States)

    Kleiter, G D; Schwarzenbacher, K

    1989-08-01

    When you suspect that you just gave an erroneous answer to a question you stop and rethink. Suspected errors lead to a shift in the control and content of cognitive processes. In the present experiment we investigated the influence of errors upon heart rates and response latencies. Sixty-four subjects participated in an experiment in which each subject solved a sequence of 60 verbal analogies. The results demonstrated increased latencies after errors and decelerated heart rates during the post-error period. The results were explained by a psychophysiological model in which the septo-hippocampal system functions as a control system which coordinates the priority and selection of cognitive processes. Error detection suppresses strategies which otherwise prevent looping and iterative reanalyses of old material. The inhibition is also responsible for the cardiac slowing during the post-error period.

  16. Quantum Error-Correcting Codes over Mixed Alphabets

    CERN Document Server

    Wang, Zhuo; Fan, Heng; Oh, C H

    2012-01-01

    Errors are inevitable during all kinds of quantum information tasks, and quantum error-correcting codes (QECCs) are powerful tools for fighting various quantum noises. Standard QECCs use physical systems that all have the same number of energy levels. Here we propose QECCs over mixed alphabets, i.e., over physical systems of different dimensions, and investigate their constructions as well as their quantum Singleton bound. We propose two kinds of constructions: a graphical construction based on a graph-theoretical object called a composite coding clique, and a projection-based construction. We illustrate our ideas using two alphabets by finding some 1-error-correcting or -detecting codes over mixed alphabets, e.g., the optimal $((6,8,3))_{4^52^1}$, $((6,4,3))_{4^42^2}$ and $((5,16,2))_{4^32^2}$ codes and the suboptimal $((5,9,2))_{3^42^1}$ code. Our methods also shed light on the constructions of standard QECCs, e.g., the construction of the optimal $((6,16,3))_4$ code as well as the optimal $((2n+3,p^{2n+1},2))_{p}$ codes with $p=4k$.

  17. Meteorological Error Budget Using Open Source Data

    Science.gov (United States)

    2016-09-01

    …(VBA) script was created that would read the model-based output and corresponding sounding data for each message type (METCM or METB3), output type… produce artillery MET error budget tables that account for expected errors when using MET model-based systems. Representatives of the US and other… nations within the North Atlantic Treaty Organization expressed a need for shareable model-based MET error budgets. Use of an openly available civilian…

  18. Soft errors in modern electronic systems

    CERN Document Server

    Nicolaidis, Michael

    2010-01-01

    This book provides a comprehensive presentation of the most advanced research results and technological developments enabling understanding, qualifying and mitigating the soft-error effect in advanced electronics, including the fundamental physical mechanisms of radiation-induced soft errors, the various steps that lead to a system failure, the modelling and simulation of soft errors at various levels (including physical, electrical, netlist, event-driven, RTL, and system-level modelling and simulation), hardware fault injection, accelerated radiation testing and natural environment testing…

  19. Analysis of Errors Encountered in Simultaneous Interpreting

    Institute of Scientific and Technical Information of China (English)

    方峥

    2015-01-01

    I. Introduction. 1.1 Definition of an error. An error happens when the interpreter's delivery affects the communicative impact of the speaker's message, including semantic inaccuracies and inaccuracies of presentation. Along with the development of simultaneous interpreting, a number of professional interpreters and linguists have presented their definitions of, and points of view on, such errors.

  1. Group representations, error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E

    1996-01-01

    This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.

  2. ERROR CORRECTION IN HIGH SPEED ARITHMETIC,

    Science.gov (United States)

    The errors due to a faulty high speed multiplier are shown to be iterative in nature. These errors are analyzed in various aspects. The arithmetic coding technique is suggested for the improvement of high speed multiplier reliability. Through a number theoretic investigation, a large class of arithmetic codes for single iterative error correction are developed. The codes are shown to have near-optimal rates and to render a simple decoding method. The implementation of these codes seems highly practical. (Author)
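
    Arithmetic codes of the AN type are the textbook instance of the technique: operands are multiplied by a check constant A, arithmetic on encoded operands stays encoded, and any result not divisible by A reveals an error. A minimal detection-only sketch (full single-error correction would additionally map residues to error magnitudes; the constant A = 19 is an arbitrary choice, not the paper's construction):

    ```python
    A = 19  # check constant; a valid codeword is always a multiple of A

    def encode(n):
        """AN encoding: represent the operand n as A*n."""
        return A * n

    def check(word):
        """A valid result of encoded arithmetic must be divisible by A."""
        return word % A == 0

    x = encode(42)
    product = x * 7           # multiplying an encoded operand stays encoded
    assert check(product)

    faulty = product + 2**5   # an error injected by a faulty multiplier stage
    print(check(faulty))      # False -> the error is detected
    ```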

  3. Medication errors recovered by emergency department pharmacists.

    Science.gov (United States)

    Rothschild, Jeffrey M; Churchill, William; Erickson, Abbie; Munz, Kristin; Schuur, Jeremiah D; Salzberg, Claudia A; Lewinski, Daniel; Shane, Rita; Aazami, Roshanak; Patka, John; Jaggers, Rondell; Steffenhagen, Aaron; Rough, Steve; Bates, David W

    2010-06-01

    We assess the impact of emergency department (ED) pharmacists on reducing potentially harmful medication errors. We conducted this observational study in 4 academic EDs. Trained pharmacy residents observed a convenience sample of ED pharmacists' activities. The primary outcome was medication errors recovered by pharmacists, including errors intercepted before reaching the patient (near miss or potential adverse drug event), caught after reaching the patient but before causing harm (mitigated adverse drug event), or caught after some harm but before further or worsening harm (ameliorated adverse drug event). Pairs of physician and pharmacist reviewers confirmed recovered medication errors and assessed their potential for harm. Observers were unblinded and clinical outcomes were not evaluated. We conducted 226 observation sessions spanning 787 hours and observed pharmacists reviewing 17,320 medications ordered or administered to 6,471 patients. We identified 504 recovered medication errors, or 7.8 per 100 patients and 2.9 per 100 medications. Most of the recovered medication errors were intercepted potential adverse drug events (90.3%), with fewer mitigated adverse drug events (3.9%) and ameliorated adverse drug events (0.2%). The potential severities of the recovered errors were most often serious (47.8%) or significant (36.2%). The most common medication classes associated with recovered medication errors were antimicrobial agents (32.1%), central nervous system agents (16.2%), and anticoagulant and thrombolytic agents (14.1%). The most common error types were dosing errors, drug omission, and wrong frequency errors. ED pharmacists can identify and prevent potentially harmful medication errors. Controlled trials are necessary to determine the net costs and benefits of ED pharmacist staffing on safety, quality, and costs, especially important considerations for smaller EDs and pharmacy departments. Copyright (c) 2009 American College of Emergency Physicians

  4. Error Estimates of Theoretical Models: a Guide

    CERN Document Server

    Dobaczewski, J; Reinhard, P -G

    2014-01-01

    This guide offers suggestions/insights on uncertainty quantification of nuclear structure models. We discuss a simple approach to statistical error estimates, strategies to assess systematic errors, and show how to uncover inter-dependencies by correlation analysis. The basic concepts are illustrated through simple examples. By providing theoretical error bars on predicted quantities and using statistical methods to study correlations between observables, theory can significantly enhance the feedback between experiment and nuclear modeling.
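
    A compact illustration of the statistical part of such a recipe: fit a model to data, read parameter inter-dependencies off the correlation matrix, and propagate the parameter covariance to a theoretical error bar on a predicted quantity. The model and data below are synthetic, not from the guide.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, a, b):
        return a * np.exp(-b * x)

    rng = np.random.default_rng(0)
    x = np.linspace(0, 4, 25)
    y = model(x, 2.0, 0.7) + rng.normal(0, 0.05, x.size)  # synthetic "data"

    popt, pcov = curve_fit(model, x, y, sigma=0.05 * np.ones_like(y),
                           absolute_sigma=True)

    # Correlation matrix reveals inter-dependencies between parameters
    d = np.sqrt(np.diag(pcov))
    print("parameter correlations:\n", pcov / np.outer(d, d))

    # Propagate the covariance to an error bar on an extrapolated observable
    x0 = 5.0
    g = np.array([np.exp(-popt[1] * x0),                      # d(model)/da
                  -popt[0] * x0 * np.exp(-popt[1] * x0)])     # d(model)/db
    sigma_pred = np.sqrt(g @ pcov @ g)
    print(f"prediction {model(x0, *popt):.3f} +/- {sigma_pred:.3f}")
    ```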

  5. Maximum privacy without coherence, zero-error

    Science.gov (United States)

    Leung, Debbie; Yu, Nengkun

    2016-09-01

    We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.

  6. How social is error observation? The neural mechanisms underlying the observation of human and machine errors.

    Science.gov (United States)

    Desmet, Charlotte; Deschrijver, Eliane; Brass, Marcel

    2014-04-01

    Recently, it has been shown that the medial prefrontal cortex (MPFC) is involved in error execution as well as error observation. Based on this finding, it has been argued that recognizing each other's mistakes might rely on motor simulation. In the current functional magnetic resonance imaging (fMRI) study, we directly tested this hypothesis by investigating whether medial prefrontal activity in error observation is restricted to situations that enable simulation. To this aim, we compared brain activity related to the observation of errors that can be simulated (human errors) with brain activity related to errors that cannot be simulated (machine errors). We show that medial prefrontal activity is not only restricted to the observation of human errors but also occurs when observing errors of a machine. In addition, our data indicate that the MPFC reflects a domain general mechanism of monitoring violations of expectancies.

  7. Exceptional error minimization in putative primordial genetic codes

    Directory of Open Access Journals (Sweden)

    Koonin Eugene V

    2009-11-01

    Full Text Available Abstract Background The standard genetic code is redundant and has a highly non-random structure. Codons for the same amino acids typically differ only by the nucleotide in the third position, whereas similar amino acids are encoded, mostly, by codon series that differ by a single base substitution in the third or the first position. As a result, the code is highly albeit not optimally robust to errors of translation, a property that has been interpreted either as a product of selection directed at the minimization of errors or as a non-adaptive by-product of evolution of the code driven by other forces. Results We investigated the error-minimization properties of putative primordial codes that consisted of 16 supercodons, with the third base being completely redundant, using a previously derived cost function and the error minimization percentage as the measure of a code's robustness to mistranslation. It is shown that, when the 16-supercodon table is populated with 10 putative primordial amino acids, inferred from the results of abiotic synthesis experiments and other evidence independent of the code's evolution, and with minimal assumptions used to assign the remaining supercodons, the resulting 2-letter codes are nearly optimal in terms of the error minimization level. Conclusion The results of the computational experiments with putative primordial genetic codes that contained only two meaningful letters in all codons and encoded 10 to 16 amino acids indicate that such codes are likely to have been nearly optimal with respect to the minimization of translation errors. This near-optimality could be the outcome of extensive early selection during the co-evolution of the code with the primordial, error-prone translation system, or a result of a unique, accidental event. Under this hypothesis, the subsequent expansion of the code resulted in a decrease of the error minimization level that became sustainable owing to the evolution of a high
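
    The flavor of such a cost calculation can be sketched for the standard code, using Kyte–Doolittle hydropathy as a stand-in for the amino acid property (the paper uses its own, previously derived cost function). Comparing this cost against the costs of randomly shuffled codes would yield the error minimization percentage used as the robustness measure.

    ```python
    from itertools import product

    BASES = "TCAG"
    AA = ("FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRR"
          "IIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG")
    CODE = {b1 + b2 + b3: AA[16 * i + 4 * j + k]
            for i, b1 in enumerate(BASES)
            for j, b2 in enumerate(BASES)
            for k, b3 in enumerate(BASES)}

    # Kyte-Doolittle hydropathy as a stand-in amino acid property
    HYDRO = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
             "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
             "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
             "Y": -1.3, "V": 4.2}

    def code_cost(code):
        """Mean squared property change over all single-base substitutions
        between sense codons: a simple mistranslation cost function."""
        total, count = 0.0, 0
        for codon, aa in code.items():
            if aa == "*":
                continue
            for pos, alt in product(range(3), BASES):
                if alt == codon[pos]:
                    continue
                aa2 = code[codon[:pos] + alt + codon[pos + 1:]]
                if aa2 == "*":
                    continue
                total += (HYDRO[aa] - HYDRO[aa2]) ** 2
                count += 1
        return total / count

    print(f"standard code cost: {code_cost(CODE):.3f}")
    ```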

  8. Research on the technology for processing errors of photoelectric theodolite based on error design idea

    Science.gov (United States)

    Guo, Xiaosong; Pu, Pengcheng; Zhou, Zhaofa; Wang, Kunming

    2012-10-01

    The errors existing in a photoelectric theodolite were studied according to the error design idea, that is, correction of theodolite errors is achieved by actively analyzing the effect of the errors instead of passively processing the error-contaminated data. For the shafting error, the relationship between the different errors was analyzed with an error model based on coordinate transformation, and a real-time error compensation method based on the normal-reversed measuring method and automatic levelness detection is proposed. For the eccentric error of the dial, the idea of an eccentric residual error is presented and its influence on measuring precision is studied; a dynamic compensation model is then built so that the influence of the dial's eccentric error on measuring precision can be eliminated. For the centering deviation in the process of measuring angles, a compensation method based on the error model is proposed, in which the centering deviation is detected automatically using computer vision. The above methods, based on the error design idea, effectively reduce the influence of errors on the measuring result by software compensation and improve the degree of automation of azimuth angle measurement with the theodolite without degrading precision.
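
    Dial (circular-scale) eccentricity classically produces a first-harmonic reading error of the form A·sin(θ) + B·cos(θ). A sketch, under that assumption, of fitting the harmonic from calibration residuals and compensating readings; this illustrates the principle only, not the authors' dynamic compensation model.

    ```python
    import numpy as np

    def fit_eccentricity(theta, residual):
        """Fit residual(theta) ~ A*sin(theta) + B*cos(theta), the
        first-harmonic signature of dial eccentricity
        (theta in radians, residual in arcsec)."""
        M = np.column_stack([np.sin(theta), np.cos(theta)])
        (A, B), *_ = np.linalg.lstsq(M, residual, rcond=None)
        return A, B

    def compensate(theta, reading, A, B):
        """Subtract the modeled eccentric error from a raw angle reading."""
        return reading - (A * np.sin(theta) + B * np.cos(theta))

    # Synthetic calibration: 3 arcsec eccentric error at phase 0.5 rad
    theta = np.linspace(0, 2 * np.pi, 36, endpoint=False)
    residual = (3.0 * np.sin(theta - 0.5)
                + np.random.default_rng(0).normal(0, 0.2, 36))
    A, B = fit_eccentricity(theta, residual)
    print(f"amplitude = {np.hypot(A, B):.2f} arcsec, "
          f"phase = {np.arctan2(-B, A):.2f} rad")
    ```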

  9. Spelling Errors in University Students’ English Writing

    Institute of Scientific and Technical Information of China (English)

    王祥德; 邓兆红

    2012-01-01

    This paper investigated the spelling errors made by university students in Hong Kong. By analyzing the spelling errors in untimed essays and exam scripts, we found that students are prone to make more spelling mistakes in exam scripts, that the same types of errors occur in both kinds of texts, and that the frequency rankings of the errors are also the same. [Reference: Wyatt, V. An Analysis of Errors in Composition Writing. ELT Journal, 1973(2): 177-188.]

  10. Error measuring system of rotary Inductosyn

    Science.gov (United States)

    Liu, Chengjun; Zou, Jibin; Fu, Xinghe

    2008-10-01

    The inductosyn is a kind of high-precision angle-position sensor with important applications in servo tables, precision machine tools, and other products. The accuracy of an inductosyn is characterized by its error, so error measurement is an important problem in the production and application of inductosyns. At present, the error of an inductosyn is mainly obtained by manual measurement, whose disadvantages cannot be ignored: high labour intensity for the operator, easily introduced measurement errors, poor repeatability, and so on. In order to solve these problems, a new automatic measurement method is put forward in this paper, based on a high-precision optical dividing head. The error signal can be obtained by precisely processing the output signals of the inductosyn and the optical dividing head. While the inductosyn rotates continuously, its zero-position error can be measured dynamically and zero-error curves can be output automatically. The measuring and calculating errors caused by human factors are overcome by this method, and it makes the measuring process quicker, more exact, and more reliable. Experiment proves that the accuracy of the error measuring system is 1.1 arc-seconds (peak-to-peak value).
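
    A minimal sketch of the comparison underlying such a system: difference the inductosyn readings against the optical dividing head reference and report the peak-to-peak error. The data below are synthetic, and the signal-processing front end (demodulation of the raw sensor outputs) is assumed to have already produced angle readings.

    ```python
    import numpy as np

    def zero_position_error(inductosyn_deg, reference_deg):
        """Error curve of an inductosyn against an optical dividing head:
        pointwise difference in arcsec, wrapped to [-180, 180) degrees."""
        diff = (np.asarray(inductosyn_deg) - np.asarray(reference_deg)
                + 180.0) % 360.0 - 180.0
        return diff * 3600.0

    # Hypothetical readings over one revolution of continuous rotation
    ref = np.linspace(0, 360, 720, endpoint=False)
    ind = ref + np.random.default_rng(2).normal(0, 0.3 / 3600, ref.size)
    err = zero_position_error(ind, ref)
    print(f"peak-to-peak error: {err.max() - err.min():.2f} arcsec")
    ```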

  11. Estimating IMU heading error from SAR images.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin Walter

    2009-03-01

    Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.
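
    One way to make the gradient-to-pointing step concrete: assuming a Gaussian two-way azimuth antenna pattern, image log-intensity over a uniform scene is a parabola in azimuth angle whose apex sits at the pointing offset. The sketch below fits that parabola; it illustrates the idea only and is not the report's algorithm.

    ```python
    import numpy as np

    def pointing_error_from_gradient(az_rad, intensity_db):
        """For a Gaussian two-way pattern, I_dB(theta) = C - k*(theta - delta)**2
        over a uniform scene; the fitted apex gives delta = -c1 / (2*c2)."""
        c2, c1, _c0 = np.polyfit(az_rad, intensity_db, 2)
        return -c1 / (2.0 * c2)

    # Synthetic uniform scene over 20 mrad of azimuth, true offset 3 mrad;
    # the beam constant k is arbitrary and drops out of the estimate
    theta = np.linspace(-0.010, 0.010, 200)
    k, delta_true = 2.4e5, 0.003
    i_db = (-k * (theta - delta_true) ** 2
            + np.random.default_rng(3).normal(0, 0.1, 200))
    est = pointing_error_from_gradient(theta, i_db)
    print(f"estimated pointing error: {est * 1e3:.2f} mrad (true 3.00)")
    ```

    An estimate obtained this way could then be fed to the navigation Kalman filter as a heading-error measurement, as the report describes.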

  12. Beam positioning error budget in ICF driver

    CERN Document Server

    Shi Zhi Quan; Su Jing Qin

    2002-01-01

    The author presents a linear weighted-sum method for the beam positioning error budget, based on the ICF targeting requirements, together with equal- and unequal-probability approaches to allocating errors to each optical element. Based on the relationship between the motion of the optical components and the beam position on target, the position error of each optical component was evaluated, referred to as its maximum range. Extensive ray tracing was performed, and the position error budget was modified according to the normal distribution law. An overview of the position error budget of the components is provided.
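
    A sketch of equal-probability allocation under the two composition rules: a linear weighted sum versus a root-sum-square (RSS) combination, the latter corresponding to the normal-distribution modification. The weights and total budget are hypothetical.

    ```python
    import numpy as np

    def allocate_budget(total, weights, rule="rss"):
        """Allocate a total beam-position tolerance equally among elements.
        weights: sensitivity of on-target position to each element's motion
        (from the ray-trace transfer relations); returns per-element
        motion tolerances e_i."""
        w = np.asarray(weights, dtype=float)
        n = w.size
        if rule == "linear":   # sum(w_i * e_i) = total
            return total / (n * w)
        if rule == "rss":      # sqrt(sum((w_i * e_i)**2)) = total
            return total / (np.sqrt(n) * w)
        raise ValueError(rule)

    # Hypothetical: 30 um total budget, four elements of different leverage
    weights = [2.0, 1.0, 0.5, 0.25]
    print("linear:", allocate_budget(30.0, weights, "linear"))
    print("rss:   ", allocate_budget(30.0, weights, "rss"))
    ```

    The RSS rule relaxes each element's tolerance by a factor of sqrt(n) relative to the worst-case linear sum, which is why statistical (normal-law) budgeting is attractive when many elements contribute.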

  13. Study of Errors among Nursing Students

    Directory of Open Access Journals (Sweden)

    Ella Koren

    2007-09-01

    Full Text Available The study of errors in the health system today is a topic of considerable interest aimed at reducing errors through analysis of the phenomenon and the conclusions reached. Errors that occur frequently among health professionals have also been observed among nursing students. True, in most cases they are actually “near errors,” but these could be a future indicator of therapeutic reality and the effect of nurses' work environment on their personal performance. There are two different approaches to such errors: (a) The EPP (error-prone person) approach lays full responsibility at the door of the individual involved in the error, whether a student, nurse, doctor, or pharmacist. According to this approach, handling consists purely in identifying and penalizing the guilty party. (b) The EPE (error-prone environment) approach emphasizes the environment as a primary contributory factor to errors. The environment as an abstract concept includes components and processes of interpersonal communications, work relations, human engineering, workload, pressures, technical apparatus, and new technologies. The objective of the present study was to examine the role played by factors in and components of personal performance as compared to elements and features of the environment. The study was based on both of the aforementioned approaches, which, when combined, enable a comprehensive understanding of the phenomenon of errors among the student population as well as a comparison of factors contributing to human error and to error deriving from the environment. The theoretical basis of the study was a model that combined both approaches: one focusing on the individual and his or her personal performance and the other focusing on the work environment. The findings emphasize the work environment of health professionals as an EPE. However, errors could have been avoided by means of strict adherence to practical procedures. The authors examined error events in the

  14. Error Analysis and English Language Teaching

    Institute of Scientific and Technical Information of China (English)

    Ma; Jinling

    2015-01-01

    The theory of Error Analysis is a crucial part of research on second language acquisition, and it has significantly influenced the exploration of patterns in English teaching. Although error analysis has some limitations in both theory and practice, its significant role has been proved and recognized. It is inevitable that the scientific treatment of errors will become more and more prominent in modern English teaching. The aim of this paper is to show the importance of error analysis in English teaching and to present how well it can function in English language teaching.

  15. Error computation for adaptive finite element analysis

    CERN Document Server

    Khan, A A; Memon, I R; Ming, X Y

    2002-01-01

    The paper gives a simple numerical procedure for computing the errors generated by the discretisation process of the finite element method. The procedure is based on the ZZ error estimator, which is believed to be reasonably accurate and thus can be readily implemented in any existing finite element code. The devised procedure not only estimates the global energy-norm error but also evaluates the local errors in individual elements. In the example, the given procedure is combined with an adaptive refinement procedure, which provides guidance for optimal mesh design and allows the user to obtain a desired accuracy with a limited number of iterations. (author)
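
    A self-contained 1-D illustration of the ZZ (Zienkiewicz–Zhu) recovery idea, under simplifying assumptions (linear elements, a model Poisson problem): solve −u″ = f, recover a smoothed nodal gradient by averaging adjacent element gradients, and integrate the difference per element as the local error indicator.

    ```python
    import numpy as np

    def solve_poisson_1d(n, f=1.0):
        """Linear FEM for -u'' = f on (0,1), u(0) = u(1) = 0, n elements."""
        h = 1.0 / n
        K = np.zeros((n + 1, n + 1))
        F = np.zeros(n + 1)
        for e in range(n):
            K[e:e + 2, e:e + 2] += np.array([[1, -1], [-1, 1]]) / h
            F[e:e + 2] += f * h / 2
        K[0, :], K[-1, :] = 0, 0        # enforce Dirichlet conditions
        K[0, 0] = K[-1, -1] = 1
        F[0] = F[-1] = 0
        return np.linalg.solve(K, F), h

    def zz_error_indicators(u, h):
        """ZZ recovery: average element gradients to the nodes, then
        integrate (recovered - raw)^2 over each element."""
        ge = np.diff(u) / h              # constant gradient per element
        gn = np.empty(u.size)            # recovered nodal gradient
        gn[1:-1] = 0.5 * (ge[:-1] + ge[1:])
        gn[0], gn[-1] = ge[0], ge[-1]
        # closed-form integral of (linear - constant)^2 on each element
        da, db = gn[:-1] - ge, gn[1:] - ge
        return np.sqrt(h / 3.0 * (da**2 + da * db + db**2))

    u, h = solve_poisson_1d(16)
    eta = zz_error_indicators(u, h)
    print(f"estimated global energy-norm error: {np.linalg.norm(eta):.4f}")
    # elements with the largest eta would be flagged for refinement
    ```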

  16. The NASTRAN Error Correction Information System (ECIS)

    Science.gov (United States)

    Rosser, D. C., Jr.; Rogers, J. L., Jr.

    1975-01-01

    A data management procedure, called Error Correction Information System (ECIS), is described. The purpose of this system is to implement the rapid transmittal of error information between the NASTRAN Systems Management Office (NSMO) and the NASTRAN user community. The features of ECIS and its operational status are summarized. The mode of operation for ECIS is compared to the previous error correction procedures. It is shown how the user community can have access to error information much more rapidly when using ECIS. Flow charts and time tables characterize the convenience and time saving features of ECIS.

  17. Journal standards.

    Science.gov (United States)

    Jackson, R

    2003-08-01

    Despite its many imperfections, the peer review process is a firmly established quality control system for scientific literature. It gives readers some assurance that the work and views that are reported meet standards that are acceptable to a journal. Maureen Revington's editorial in a recent issue of the Australian Veterinary Journal (Revington 2002) gives a good, concise, warts-and-all overview of the process and is well worth reading. I have some concerns about several articles in the December 2002 issue of the New Zealand Veterinary Journal (Volume 50, Number 6), devoted to the health and welfare of farmed deer, that relate to extensive citing of non-peer reviewed papers. I can understand the need for information to flow from researchers to the wider community but that need is already satisfied by publications such as the proceedings of the Deer Branch of the New Zealand Veterinary Association and Proceedings of the New Zealand Society of Animal Production. Non-peer reviewed papers have been cited in the Journal in the past but never to the extent displayed in this particular issue. It degrades the peer-review process and creates an added burden for reviewers who are forced to grapple with the uncertainties of the science in non-peer reviewed citations. One of my fears is that this process allows science from non peer reviewed articles to be legitimised by its inclusion in a peer reviewed journal and perhaps go on to be accepted as dogma. This is a real danger given the difficulties associated with tracing back to original citations and the increasing volume of scientific literature. It also affords opportunities for agencies to pick up questionable and doubtful science and tout it as support for their products or particular points of view. If deer researchers choose to publish most of their work in proceedings then so be it. However this approach, which seems to be becoming increasingly prevalent in the deer sector, is questionable from an established science point

  18. Empirical analysis and modeling of errors of atmospheric profiles from GPS radio occultation

    Directory of Open Access Journals (Sweden)

    U. Foelsche

    2011-09-01

    Full Text Available The utilization of radio occultation (RO data in atmospheric studies requires precise knowledge of error characteristics. We present results of an empirical error analysis of GPS RO bending angle, refractivity, dry pressure, dry geopotential height, and dry temperature. We find very good agreement between data characteristics of different missions (CHAMP, GRACE-A, and Formosat-3/COSMIC (F3C. In the global mean, observational errors (standard deviation from "true" profiles at mean tangent point location agree within 0.3% in bending angle, 0.1% in refractivity, and 0.2 K in dry temperature at all altitude levels between 4 km and 35 km. Above 35 km the increase of the CHAMP raw bending angle observational error is more pronounced than that of GRACE-A and F3C leading to a larger observational error of about 1% at 42 km. Above ≈20 km, the observational errors show a strong seasonal dependence at high latitudes. Larger errors occur in hemispheric wintertime and are associated mainly with background data used in the retrieval process particularly under conditions when ionospheric residual is large. The comparison between UCAR and WEGC results (both data centers have independent inversion processing chains reveals different magnitudes of observational errors in atmospheric parameters, which are attributable to different background fields used. Based on the empirical error estimates, we provide a simple analytical error model for GPS RO atmospheric parameters for the altitude range of 4 km to 35 km and up to 50 km for UCAR raw bending angle and refractivity. In the model, which accounts for vertical, latitudinal, and seasonal variations, a constant error is adopted around the tropopause region amounting to 0.8% for bending angle, 0.35% for refractivity, 0.15% for dry pressure, 10 m for dry geopotential height, and 0.7 K for dry temperature. Below this region the observational error increases following an inverse height power-law and above it increases
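
    The record is truncated before the model's full functional form, but its stated shape (constant between about 4 km and 35 km, inverse height power-law below, growth above) can be sketched as follows; the power-law exponent and the scale height of the upper branch are assumptions, not the paper's fitted values.

    ```python
    import numpy as np

    def ro_obs_error(z_km, s0=0.7, z_bot=4.0, z_top=35.0, p=1.0, h_scale=7.0):
        """Observational error vs. altitude for an RO parameter (here dry
        temperature, s0 = 0.7 K in the 4-35 km range): constant between
        z_bot and z_top, inverse height power-law below, exponential
        growth above; p and h_scale are illustrative assumptions."""
        z = np.atleast_1d(np.asarray(z_km, dtype=float))
        s = np.full_like(z, s0)
        below, above = z < z_bot, z > z_top
        s[below] = s0 * (z_bot / z[below]) ** p
        s[above] = s0 * np.exp((z[above] - z_top) / h_scale)
        return s

    z = np.array([2.0, 10.0, 30.0, 42.0])
    print(dict(zip(z.tolist(), np.round(ro_obs_error(z), 2).tolist())))
    ```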

  19. Exploring errors in paleoclimate proxy reconstructions using Monte Carlo simulations: paleotemperature from mollusk and coral geochemistry

    Directory of Open Access Journals (Sweden)

    M. Carré

    2012-03-01

    Full Text Available Quantitative reconstructions of the past climate statistics from geochemical coral or mollusk records require quantified error bars in order to properly interpret the amplitude of the climate change and to perform meaningful comparisons with climate model outputs. We introduce here a more precise categorization of reconstruction errors, differentiating the error bar due to the proxy calibration uncertainty from the standard error due to sampling and variability in the proxy formation process. Then, we propose a numerical approach based on Monte Carlo simulations with surrogate proxy-derived climate records. These are produced by perturbing a known time series in a way that mimics the uncertainty sources in the proxy climate reconstruction. A freely available algorithm, MoCo, was designed to be parameterized by the user and to calculate realistic systematic and standard errors of the mean and the variance of the annual temperature, and of the mean and the variance of the temperature seasonality reconstructed from marine accretionary archive geochemistry. In this study, the algorithm is used for sensitivity experiments in a case study to characterize and quantitatively evaluate the sensitivity of systematic and standard errors to sampling size, stochastic uncertainty sources, archive-specific biological limitations, and climate non-stationarity. The results of the experiments yield an illustrative example of the range of variations of the standard error and the systematic error in the reconstruction of climate statistics in the Eastern Tropical Pacific. Thus, we show that the sample size and the climate variability are the main sources of the standard error. The experiments allowed the identification and estimation of systematic bias that would not otherwise be detected because of limited modern datasets. Our study demonstrates that numerical simulations based on Monte Carlo analyses are a simple and powerful approach to improve the understanding
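
    A toy version of the Monte Carlo approach (not the MoCo code itself): perturb a known monthly temperature series the way sampling, calibration, and analytical noise would, repeat many times, and separate the systematic error (bias of the estimator) from the standard error (its spread). All parameter values are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    months = np.arange(12 * 30)                 # 30-year "true" climate
    true_t = (24 + 3 * np.sin(2 * np.pi * months / 12)
              + rng.normal(0, 0.5, months.size))

    def surrogate_reconstruction(n_shells=8, months_per_shell=18,
                                 calib_sd=0.25, analytic_sd=0.15):
        """One synthetic proxy study: a few shells, each archiving a short
        window of months, with calibration bias and analytical noise."""
        sampled = []
        for _ in range(n_shells):
            start = rng.integers(0, true_t.size - months_per_shell)
            window = true_t[start:start + months_per_shell]
            bias = rng.normal(0, calib_sd)      # shared calibration error
            sampled.append(window + bias
                           + rng.normal(0, analytic_sd, window.size))
        return np.concatenate(sampled).mean()   # reconstructed mean temp.

    estimates = np.array([surrogate_reconstruction() for _ in range(2000)])
    systematic = estimates.mean() - true_t.mean()   # bias of the estimator
    standard = estimates.std()                      # sampling/variability error
    print(f"systematic error {systematic:+.3f} K, "
          f"standard error {standard:.3f} K")
    ```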

  20. Correcting image placement errors using registration control (RegC®) technology in the photomask periphery

    Science.gov (United States)

    Cohen, Avi; Lange, Falk; Ben-Zvi, Guy; Graitzer, Erez; Vladimir, Dmitriev

    2012-11-01

    The ITRS roadmap specifies wafer overlay control as one of the major tasks for the sub-40 nm nodes, in addition to CD control and defect control. Wafer overlay is strongly dependent on mask image placement error (registration error, or Reg error) [1]. The specifications for registration, or mask placement accuracy, are significantly tighter in some of the double patterning techniques (DPT). This puts a heavy challenge on mask manufacturers (mask shops) to comply with advanced-node registration specifications. The conventional methods of feeding back the systematic registration error to the e-beam writer and re-writing the mask are becoming difficult, expensive, and insufficient for the advanced nodes, especially for double patterning technologies. Six production masks were measured on a standard registration metrology tool and the registration errors were calculated and plotted. A specially developed algorithm, along with the RegC Wizard (dedicated software), was used to compute a corrective lateral strain field that minimizes the registration errors. This strain field was then implemented in the photomask bulk material using an ultra-short-pulse laser based system. Finally, the post-process registration error maps were measured and the resulting residual registration error field, with and without removal of scale and orthogonality errors, was calculated. In this paper we present a robust process flow in the mask shop which leads to up to 32% improvement in registration 3 sigma, bringing some out-of-spec masks into spec, utilizing the RegC® process in the photomask periphery while leaving the exposure field optically unaffected.
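
    The "with and without removal of scale and orthogonality errors" step corresponds to subtracting the best-fit affine part of the registration error field, since translation, scale, rotation, and orthogonality are all linear in the mark positions. A sketch with synthetic mark data; positions and error magnitudes are arbitrary.

    ```python
    import numpy as np

    def remove_linear_terms(x, y, u, v):
        """Least-squares removal of the linear part of a registration error
        field (u, v) at mark positions (x, y); translation, scale, rotation
        and orthogonality are all contained in the affine terms."""
        A = np.column_stack([np.ones_like(x), x, y])
        cu, *_ = np.linalg.lstsq(A, u, rcond=None)
        cv, *_ = np.linalg.lstsq(A, v, rcond=None)
        return u - A @ cu, v - A @ cv

    def three_sigma(u, v):
        return 3.0 * np.std(np.concatenate([u, v]))

    rng = np.random.default_rng(5)
    x, y = rng.uniform(-60, 60, (2, 200))         # mark positions, mm
    u = 0.05 * x + 2.0 + rng.normal(0, 1.0, 200)  # scale + shift + random, nm
    v = -0.03 * y + rng.normal(0, 1.0, 200)
    print("3sigma before:", round(three_sigma(u, v), 2), "nm")
    ru, rv = remove_linear_terms(x, y, u, v)
    print("3sigma after: ", round(three_sigma(ru, rv), 2), "nm")
    ```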