WorldWideScience

Sample records for including test errors

  1. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  2. Theory of Test Translation Error

    Science.gov (United States)

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  3. Errors in statistical tests (3)

    Directory of Open Access Journals (Sweden)

    Kaufman Jay S

    2008-07-01

    Full Text Available In 2004, Garcia-Berthou and Alcaraz published "Incongruence between test statistics and P values in medical papers," a critique of statistical errors that received a tremendous amount of attention. One of their observations was that the final reported digit of p-values in articles published in the journal Nature departed substantially from the uniform distribution that they suggested should be expected. In 2006, Jeng critiqued that critique, observing that the statistical analysis of those terminal digits had been based on comparing the actual distribution to a uniform continuous distribution, when digits obviously are discretely distributed. Jeng corrected the calculation and reported statistics that did not so clearly support the claim of a digit preference. However delightful it may be to read a critique of statistical errors in a critique of statistical errors, we nevertheless found several aspects of the whole exchange to be quite troubling, prompting our own meta-critique of the analysis. The previous discussion emphasized statistical significance testing. But there are various reasons to expect departure from the uniform distribution in terminal digits of p-values, so that simply rejecting the null hypothesis is not terribly informative. Much more importantly, Jeng found that the original p-value of 0.043 should have been 0.086, and suggested this represented an important difference because it was on the other side of 0.05. Among the most widely reiterated (though often ignored) tenets of modern quantitative research methods is that we should not treat statistical significance as a bright line test of whether we have observed a phenomenon. Moreover, it sends the wrong message about the role of statistics to suggest that a result should be dismissed because of limited statistical precision when it is so easy to gather more data. In response to these limitations, we gathered more data to improve the statistical precision, and
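
    The digit-preference analysis at issue reduces to comparing observed counts of terminal p-value digits against a discrete uniform expectation. A minimal sketch of Jeng's corrected comparison, using a chi-square test on synthetic counts (illustrative numbers, not the Nature data):

        import numpy as np
        from scipy import stats

        # Synthetic counts of the final reported digit (0-9) of p-values;
        # illustrative only, not Garcia-Berthou and Alcaraz's data.
        observed = np.array([28, 21, 19, 24, 17, 23, 18, 25, 20, 22])

        # Under the null, terminal digits follow a *discrete* uniform
        # distribution, i.e. an expected count of N/10 per digit -- Jeng's
        # correction to the original continuous-uniform comparison.
        expected = np.full(10, observed.sum() / 10)

        chi2, p = stats.chisquare(observed, expected)
        print(f"chi-square = {chi2:.2f}, p = {p:.3f}")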

  4. Measuring Test Measurement Error: A General Approach

    Science.gov (United States)

    Boyd, Donald; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James

    2013-01-01

    Test-based accountability as well as value-added assessments and much experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet, we know little regarding fundamental properties of these tests, an important example being the extent of measurement error and its implications for…

  5. Less Truth Than Error: Massachusetts Teacher Tests

    Directory of Open Access Journals (Sweden)

    Walt Haney

    1999-02-01

    Full Text Available Scores on the Massachusetts Teacher Tests of reading and writing are highly unreliable. The tests' margin of error is close to double to triple the range found on well-developed tests. A person retaking the MTT several times could have huge fluctuations in their scores even if their skill level did not change significantly. In fact, the 9 to 17 point margin of error calculated for the tests represents more than 10 percent of the grading scale (assumed to be 0 to 100). The large margin of error means there is both a high false-pass rate and a high false-failure rate. For example, a person who received a score of 72 on the writing test could have scored an 89 or a 55 simply because of the unreliability of the test. Since adults' reading and writing skills do not change a great deal over several months, this range of scores on the same test should not be possible. While this test is being touted as an accurate assessment of a person's fitness to be a teacher, one would expect the scores to accurately reflect a test-taker's verbal ability level. In addition to the large margin of error, the MTT contain questionable content that makes them poor tools for measuring test-takers' reading and writing skills. The content and lack of correlation between the reading and writing scores reduce the meaningfulness, or validity, of the tests. The validity is affected not just by the content, but by a host of factors, such as the conditions under which tests were administered and how they were scored. Interviews with a small sample of test-takers confirmed published reports concerning problems with the content and administration.
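
    The quoted score band follows from the usual reliability arithmetic: an observed score X brackets the true score roughly as X ± z·SEM. A minimal sketch under assumed values (the SEM is chosen so that 1.96·SEM is about 17 points, reproducing the 72 to 55/89 example above):

        # Confidence band around an observed test score given the standard
        # error of measurement (SEM). The SEM value is an assumption chosen
        # to reproduce the 72 -> (55, 89) example quoted above.
        z = 1.96          # ~95% confidence multiplier
        sem = 8.7         # assumed SEM, in score points
        observed = 72

        low, high = observed - z * sem, observed + z * sem
        print(f"score {observed}: plausible true-score range {low:.0f} to {high:.0f}")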

  6. Error probabilities in default Bayesian hypothesis testing

    NARCIS (Netherlands)

    Gu, Xin; Hoijtink, Herbert; Mulder, J.

    2016-01-01

    This paper investigates the classical type I and type II error probabilities of default Bayes factors for a Bayesian t test. Default Bayes factors quantify the relative evidence between the null hypothesis and the unrestricted alternative hypothesis without needing to specify prior distributions for

  7. Subaperture test of wavefront error of large telescopes: error sources and stitching performance simulations

    Science.gov (United States)

    Chen, Shanyong; Li, Shengyi; Wang, Guilin

    2014-11-01

    The wavefront error of large telescopes needs to be measured to check the system quality and also to estimate the misalignment of the telescope optics, including the primary, the secondary and so on. It is usually realized by a focal plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, this is challenging for meter class telescopes due to the high cost and technological challenges of producing the large ACF. Subaperture testing with a smaller ACF is hence proposed in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF and measurement noise. Different error sources have different impacts on the wavefront error. Basically, the surface error of the ACF behaves like a systematic error, and the astigmatism will be accumulated and enlarged if the azimuth of the subapertures remains fixed. It is difficult to accurately calibrate the ACF because it suffers considerable deformation induced by gravity or mechanical clamping force. Therefore a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error. We suggest the ACF be rotated around the optical axis of the telescope for the subaperture test. The algorithm is also able to correct the subaperture tip-tilt based on the overlapping consistency. Since all subaperture measurements are obtained in the same imaging plane, the lateral shift of the subapertures is always known and the real overlapping points can be recognized in this plane. Therefore the lateral positioning error of subapertures has no impact on the stitched wavefront. In contrast, the angular positioning error changes the azimuth of the ACF and hence changes the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which is contrary to common intuition. Finally, measurement noise can never be corrected but can be suppressed by means of averaging and

  8. Measurement Error in Maximal Oxygen Uptake Tests

    Science.gov (United States)

    2003-11-14

    Journal of Applied Physiology, 64, 434-436. De Vito, G., Bernardi, Sproviero, E., & Figura, F. (1995). Decrease of endurance performance during...would be unchanged. Alternative models were defined by imposing constraints on the standard errors. Every model imposed the constraint that SEM was...the same for both tests within each sample. Different models were obtained by varying whether equality constraints were imposed across samples

  9. Robust Principal Component Test in Gross Error Detection and Identification

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Principal component analysis (PCA) based chi-square testing is more sensitive to subtle gross errors and has greater power to correctly detect gross errors than the classical chi-square test. However, the classical principal component test (PCT) is non-robust and can be very sensitive to one or more outliers. In this paper, a Huber-function-like robust weight factor was added to the collective chi-square test to eliminate the influence of gross errors on the PCT. Meanwhile, the robust chi-square test was applied to the modified simultaneous estimation of gross error (MSEGE) strategy to detect and identify multiple gross errors. Simulation results show that the proposed robust test can reduce the possibility of type II errors effectively. Although adding the robust chi-square test to MSEGE does not obviously improve the power of multiple gross error identification, the proposed approach accounts for the influence of outliers on the hypothesis test statistic and is therefore more reasonable.
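
    The robustification amounts to down-weighting large standardized residuals with a Huber-type function before forming the collective chi-square statistic. A minimal numpy sketch of that idea (the threshold c, the approximate chi-square reference distribution, and the example data are illustrative, not the paper's exact MSEGE formulation):

        import numpy as np
        from scipy import stats

        def huber_weight(r, c=1.345):
            """Huber-type weight: 1 for small residuals, c/|r| beyond c."""
            r = np.abs(r)
            return np.where(r <= c, 1.0, c / r)

        def robust_chi2(residuals):
            """Collective chi-square on Huber-weighted residuals; the null
            distribution is treated as approximately chi-square here."""
            w = huber_weight(residuals)
            stat = np.sum((w * residuals) ** 2)
            return stat, stats.chi2.sf(stat, len(residuals))

        rng = np.random.default_rng(0)
        r = rng.standard_normal(20)
        r[3] += 8.0  # one gross error (outlier)
        print("plain  chi2 p = %.4g" % stats.chi2.sf(np.sum(r**2), 20))
        print("robust chi2 p = %.4g" % robust_chi2(r)[1])  # outlier down-weighted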

  10. Fast motion-including dose error reconstruction for VMAT with and without MLC tracking

    DEFF Research Database (Denmark)

    Ravkilde, Thomas; Keall, Paul J.; Grau, Cai

    2014-01-01

    of the algorithm for reconstruction of dose and motion-induced dose errors throughout the tracking and non-tracking beam deliveries was quantified. Doses were reconstructed with a mean dose difference relative to the measurements of -0.5% (5.5% standard deviation) for cumulative dose. More importantly, the root......-mean-square deviation between reconstructed and measured motion-induced 3%/3 mm γ failure rates (dose error) was 2.6%. The mean computation time for each calculation of dose and dose error was 295 ms. The motion-including dose reconstruction allows accurate temporal and spatial pinpointing of errors in absorbed dose...... validate a simple model for fast motion-including dose error reconstruction applicable to intrafractional QA of MLC tracking treatments of moving targets. MLC tracking experiments were performed on a standard linear accelerator with prototype MLC tracking software guided by an electromagnetic transponder...

  11. Cumulative Measurement Errors for Dynamic Testing of Space Flight Hardware

    Science.gov (United States)

    Winnitoy, Susan

    2012-01-01

    Located at the NASA Johnson Space Center in Houston, TX, the Six-Degree-of-Freedom Dynamic Test System (SDTS) is a real-time, six degree-of-freedom, short range motion base simulator originally designed to simulate the relative dynamics of two bodies in space mating together (i.e., docking or berthing). The SDTS has the capability to test full scale docking and berthing systems utilizing a two body dynamic docking simulation for docking operations and a Space Station Remote Manipulator System (SSRMS) simulation for berthing operations. The SDTS can also be used for non-mating applications such as sensor and instrument evaluations requiring proximity or short range motion operations. The motion base is a hydraulically powered Stewart platform, capable of supporting a 3,500 lb payload with a positional accuracy of 0.03 inches. The SDTS is currently being used for NASA Docking System testing and has also been used by other government agencies. The SDTS is also under consideration for use by commercial companies. Examples of tests include the verification of on-orbit robotic inspection systems, space vehicle assembly procedures and docking/berthing systems. The facility integrates a dynamic simulation of on-orbit spacecraft mating or de-mating using flight-like mechanical interface hardware. A force moment sensor is used for input during the contact phase, thus simulating the contact dynamics. While the verification of flight hardware presents unique challenges, one particular area of interest involves the use of external measurement systems to ensure accurate feedback of dynamic contact. The measurement systems for the test facility have two separate functions. The first is to take static measurements of facility and test hardware to determine both the static and moving frames used in the simulation and control system. The test hardware must be measured after each configuration change to determine both sets of reference frames. The second function is to take dynamic

  12. Testing and error analysis of a real-time controller

    Science.gov (United States)

    Savolaine, C. G.

    1983-01-01

    Inexpensive ways to organize and conduct system testing that were used on a real-time satellite network control system are outlined. This system contains roughly 50,000 lines of executable source code developed by a team of eight people. For a small investment of staff, the system was thoroughly tested, including automated regression testing, before field release. Detailed records were kept for fourteen months, during which several versions of the system were written. A separate testing group was not established, but testing itself was structured apart from the development process. The errors found during testing are examined by frequency per subsystem, by size and complexity, as well as by type. The code was released to the user in March, 1983. To date, only a few minor problems have been found with the system during its pre-service testing, and user acceptance has been good.

  13. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....

  14. Assessing the impact of differential genotyping errors on rare variant tests of association.

    Science.gov (United States)

    Mayer-Jochimsen, Morgan; Fast, Shannon; Tintle, Nathan L

    2013-01-01

    Genotyping errors are well-known to impact the power and type I error rate in single marker tests of association. Genotyping errors that happen according to the same process in cases and controls are known as non-differential genotyping errors, whereas genotyping errors that occur with different processes in the cases and controls are known as differential genotyping errors. For single marker tests, non-differential genotyping errors reduce power, while differential genotyping errors increase the type I error rate. However, little is known about the behavior of the new generation of rare variant tests of association in the presence of genotyping errors. In this manuscript we use a comprehensive simulation study to explore the effects of numerous factors on the type I error rate of rare variant tests of association in the presence of differential genotyping error. We find that increased sample size, decreased minor allele frequency, and an increased number of single nucleotide variants (SNVs) included in the test all increase the type I error rate in the presence of differential genotyping errors. We also find that the greater the relative difference in case-control genotyping error rates, the larger the type I error rate. Lastly, as is the case for single marker tests, genotyping errors classifying the common homozygote as the heterozygote inflate the type I error rate significantly more than errors classifying the heterozygote as the common homozygote. In general, our findings are in line with results from single marker tests. To ensure that type I error inflation does not occur when analyzing next-generation sequencing data, careful consideration of study design (e.g. use of randomization), caution in meta-analysis and in using publicly available controls, and the use of standard quality control metrics are critical.
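
    The inflation mechanism is easy to reproduce by simulation: generate case and control genotypes under the null, miscall non-carriers as carriers at a higher rate in cases than in controls, and apply a simple carrier-count test. A minimal sketch (the error rates and the chi-square carrier test are illustrative choices, not the paper's exact procedure):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n, maf, alpha = 1000, 0.01, 0.05
        e_case, e_ctrl = 0.01, 0.001  # differential homozygote->het error rates

        def simulate_pvalue():
            # Null model: cases and controls share the same carrier frequency.
            carrier_p = 1 - (1 - maf) ** 2
            cases = rng.random(n) < carrier_p
            ctrls = rng.random(n) < carrier_p
            # Differential error: miscalls create spurious carriers, more
            # often in cases than in controls.
            cases |= rng.random(n) < e_case
            ctrls |= rng.random(n) < e_ctrl
            table = [[cases.sum(), n - cases.sum()],
                     [ctrls.sum(), n - ctrls.sum()]]
            return stats.chi2_contingency(table)[1]

        pvals = np.array([simulate_pvalue() for _ in range(2000)])
        print("empirical type I error rate:", (pvals < alpha).mean())  # > alpha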

  15. The Dynamic Modeling of Multiple Pairs of Spur Gears in Mesh, Including Friction and Geometrical Errors

    Directory of Open Access Journals (Sweden)

    Shengxiang Jia

    2003-01-01

    Full Text Available This article presents a dynamic model of three shafts and two pairs of gears in mesh, with 26 degrees of freedom, including the effects of variable tooth stiffness, pitch and profile errors, friction, and a localized tooth crack on one of the gears. The article also details how geometrical errors in teeth can be included in a model. The model incorporates the effects of variations in torsional mesh stiffness in gear teeth by using a common formula to describe the stiffness that occurs as the gears mesh together. The comparison between the presence and absence of geometrical errors in teeth was made by using Matlab and Simulink models, which were developed from the equations of motion. The effects of pitch and profile errors on the coherent-signal average of the input pinion's angular velocity are discussed by investigating some of the common diagnostic functions and changes to the frequency spectra results.

  16. Testing Error Correcting Codes by Multicanonical Sampling of Rare Events

    OpenAIRE

    Iba, Yukito; Hukushima, Koji

    2007-01-01

    The idea of rare event sampling is applied to the estimation of the performance of error-correcting codes. The essence of the idea is importance sampling of the pattern of noises in the channel by Multicanonical Monte Carlo, which enables efficient estimation of tails of the distribution of bit error rate. The idea is successfully tested with a convolutional code.
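
    Multicanonical Monte Carlo itself is more involved, but the underlying idea, importance sampling of the channel noise so that rare decoding failures become common and are then reweighted, can be sketched on a trivial repetition code (not the paper's convolutional code):

        import numpy as np

        rng = np.random.default_rng(2)
        n_rep, sigma, shift, n_mc = 15, 0.6, -1.0, 100_000

        # BPSK all-(+1) codeword over AWGN; a sum (majority) decoder fails
        # when the summed received samples are negative -- roughly a 1e-10
        # event here, far beyond the reach of plain Monte Carlo.
        def failure(noise):
            return (1.0 + noise).sum(axis=1) < 0.0

        # Draw noise from a shifted (biased) distribution that makes failures
        # common, then reweight each sample by the likelihood ratio
        # N(0, sigma) / N(shift, sigma), taken as a product over dimensions.
        noise = rng.normal(shift, sigma, size=(n_mc, n_rep))
        w = np.exp(((noise - shift) ** 2 - noise**2) / (2 * sigma**2))
        w = np.prod(w, axis=1)
        est = np.mean(failure(noise) * w)
        print(f"estimated word-error probability ~ {est:.3e}")

    Multicanonical sampling generalizes this by learning the biasing weights adaptively instead of fixing a shift of the noise distribution in advance.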

  17. Framed bit error rate testing for 100G ethernet equipment

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2010-01-01

    Internet users' behavioural patterns are migrating towards bandwidth-intensive applications, which require a corresponding capacity extension. The emerging 100 Gigabit Ethernet (GE) technology is a promising candidate for providing a ten-fold increase of today's available Internet transmission...... rate. As the need for 100 Gigabit Ethernet equipment rises, so does the need for equipment that can properly test these systems during development, deployment and use. This paper presents early results from a work-in-progress academia-industry collaboration project and elaborates on the challenges...... of performing bit error rate testing at 100Gbps. In particular, we show how Bit Error Rate Testing (BERT) can be performed over an aggregated 100G Attachment Unit Interface (CAUI) by encapsulating the test data in Ethernet frames at line speed. Our results show that framed bit error rate testing can
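
    In software terms, framed BER testing boils down to generating a known pseudo-random payload, wrapping it in frames, and counting mismatches at the receiver. A minimal sketch with a PRBS-7 payload and a toy frame format (real CAUI/100GE framing is far more involved):

        import numpy as np

        def prbs7(nbits, seed=0x7F):
            """PRBS-7 bit generator, polynomial x^7 + x^6 + 1."""
            state, out = seed, np.empty(nbits, dtype=np.uint8)
            for i in range(nbits):
                newbit = ((state >> 6) ^ (state >> 5)) & 1
                state = ((state << 1) | newbit) & 0x7F
                out[i] = newbit
            return out

        payload = prbs7(10_000)
        frame = np.concatenate([np.ones(16, np.uint8), payload])  # toy preamble

        # Channel: flip bits with probability 1e-3 (illustrative).
        rng = np.random.default_rng(3)
        received = frame ^ (rng.random(frame.size) < 1e-3).astype(np.uint8)

        # Receiver: extract the payload (known offset here), count bit errors.
        errors = int(np.count_nonzero(received[16:] ^ payload))
        print(f"BER = {errors}/{payload.size} = {errors / payload.size:.2e}")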

  18. Evaluating the impact of genotype errors on rare variant tests of association.

    Science.gov (United States)

    Cook, Kaitlyn; Benitez, Alejandra; Fu, Casey; Tintle, Nathan

    2014-01-01

    The new class of rare variant tests has usually been evaluated assuming perfect genotype information. In reality, rare variant genotypes may be incorrect, and so rare variant tests should be robust to imperfect data. Errors and uncertainty in SNP genotyping are already known to dramatically impact statistical power for single marker tests on common variants and, in some cases, inflate the type I error rate. Recent results show that uncertainty in genotype calls derived from sequencing reads are dependent on several factors, including read depth, calling algorithm, number of alleles present in the sample, and the frequency at which an allele segregates in the population. We have recently proposed a general framework for the evaluation and investigation of rare variant tests of association, classifying most rare variant tests into one of two broad categories (length or joint tests). We use this framework to relate factors affecting genotype uncertainty to the power and type I error rate of rare variant tests. We find that non-differential genotype errors (an error process that occurs independent of phenotype) decrease power, with larger decreases for extremely rare variants, and for the common homozygote to heterozygote error. Differential genotype errors (an error process that is associated with phenotype status), lead to inflated type I error rates which are more likely to occur at sites with more common homozygote to heterozygote errors than vice versa. Finally, our work suggests that certain rare variant tests and study designs may be more robust to the inclusion of genotype errors. Further work is needed to directly integrate genotype calling algorithm decisions, study costs and test statistic choices to provide comprehensive design and analysis advice which appropriately accounts for the impact of genotype errors.

  19. Measurement of Transmission Error Including Backlash in Angle Transmission Mechanisms for Mechatronic Systems

    Science.gov (United States)

    Ming, Aiguo; Kajitani, Makoto; Kanamori, Chisato; Ishikawa, Jiro

    The characteristics of angle transmission mechanisms exert a great influence on the servo performance of robotic or mechatronic mechanisms. In particular, the backlash of an angle transmission mechanism should be small. Recently, some new types of gear reducers without backlash have been developed for robots. However, apart from older methods that can measure statically at only a few meshing points of the gears, measurement and evaluation methods for the backlash of gear trains have received little attention. This paper proposes an overall performance testing method for angle transmission mechanisms in mechatronic systems. This method can measure the angle transmission error both clockwise and counterclockwise. In addition, the backlash can be continuously and automatically measured at all meshing positions. This system has been applied to the testing process in the production line of gear reducers for robots, and it has been effective in reducing the backlash of the gear trains.
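
    The quantity being measured is the transmission error, the difference between the measured output angle and the input angle divided by the gear ratio; sweeping in both directions at every meshing position gives a continuous backlash estimate as the gap between the two curves. A minimal sketch on synthetic data (all values are assumed):

        import numpy as np

        ratio = 50.0                                 # assumed reduction ratio
        theta_in = np.linspace(0, 4 * np.pi, 2000)   # input angle sweep, rad

        # Synthetic measured output: ideal angle plus a periodic tooth-mesh
        # error, with a constant backlash offset in the reverse sweep.
        rng = np.random.default_rng(4)
        te = 1e-4 * np.sin(8 * theta_in)             # mesh-error component, rad
        backlash = 2e-4                              # rad, illustrative
        noise = lambda: rng.normal(0, 1e-6, theta_in.size)
        out_cw  = theta_in / ratio + te + noise()
        out_ccw = theta_in / ratio + te - backlash + noise()

        # Transmission error per direction; backlash at every mesh position.
        te_cw  = out_cw  - theta_in / ratio
        te_ccw = out_ccw - theta_in / ratio
        print(f"mean backlash estimate: {(te_cw - te_ccw).mean():.2e} rad")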

  20. Some effects of experimental error in fracture testing.

    Science.gov (United States)

    Orange, T. W.

    1973-01-01

    The purpose of this paper is to show the effects of experimental imprecision on the stress intensity factors calculated for various practical specimen types. A general form equation for the stress intensity factor is presented, and a general error equation is derived. The expected error in the stress intensity factor is given in terms of the precision levels of the basic experimental measurements and derivatives of the stress intensity calibration factor. Nine common fracture specimen types are considered, and the sensitivity of the various types to experimental error is illustrated. Some implications for fracture toughness testing and crack growth rate testing are discussed, and methods of analysis are proposed to compensate for the effects of experimental error.
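
    For a stress-intensity expression of the generic form K = Y(a/W)·P/(B·√W), the general error equation is just first-order propagation through the measured quantities. A minimal numeric sketch (the calibration polynomial Y and all measurement values and precisions are assumptions, not taken from the paper):

        import numpy as np

        def K(P, B, W, a):
            """K = Y(a/W) * P / (B * sqrt(W)); Y is an illustrative
            calibration polynomial, not a specific standard specimen's."""
            x = a / W
            Y = 1.99 - 0.41 * x + 18.7 * x**2   # assumed calibration curve
            return Y * P / (B * np.sqrt(W))

        vals  = dict(P=10e3, B=0.01, W=0.05, a=0.02)   # N, m, m, m (nominal)
        sigma = dict(P=50.0, B=1e-4, W=1e-4, a=2e-4)   # measurement precisions

        # var(K) = sum_i (dK/dx_i)^2 var(x_i), derivatives taken numerically.
        var = 0.0
        for name in vals:
            h = 1e-6 * vals[name]
            up, dn = dict(vals), dict(vals)
            up[name] += h
            dn[name] -= h
            var += (((K(**up) - K(**dn)) / (2 * h)) * sigma[name]) ** 2

        k0 = K(**vals)
        print(f"K = {k0:.3e} Pa*sqrt(m), relative error = {np.sqrt(var)/k0:.2%}")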

  1. Fast motion-including dose error reconstruction for VMAT with and without MLC tracking

    DEFF Research Database (Denmark)

    Ravkilde, Thomas; Keall, Paul J.; Grau, Cai

    2014-01-01

    validate a simple model for fast motion-including dose error reconstruction applicable to intrafractional QA of MLC tracking treatments of moving targets. MLC tracking experiments were performed on a standard linear accelerator with prototype MLC tracking software guided by an electromagnetic transponder...... system. A three-axis motion stage reproduced eight representative tumour trajectories; four lung and four prostate. Low and high modulation 6 MV single-arc volumetric modulated arc therapy treatment plans were delivered for each trajectory with and without MLC tracking, as well as without motion...... for reference. Temporally resolved doses were measured during all treatments using a biplanar dosimeter. Offline, the dose delivered to each of 1069 diodes in the dosimeter was reconstructed with 500 ms temporal resolution by a motion-including pencil beam convolution algorithm developed in-house. The accuracy...

  2. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated-i.e. all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  3. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... for linearity is of particular interest as parameters of non-linear components vanish under the null. To solve the latter type of testing, we use the so-called sup tests, which here requires development of new (uniform) weak convergence results. These results are potentially useful in general for analysis...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...

  4. AXAF Alignment Test System Autocollimating Flat Error Correction

    Science.gov (United States)

    Lewis, Timothy S.

    1995-01-01

    The alignment test system for the Advanced X-ray Astrophysics Facility (AXAF) High-Resolution Mirror Assembly (HRMA) determines the misalignment of the HRMA by measuring the displacement of a beam of light reflected by the HRMA mirrors and an autocollimating flat (ACF). This report shows how to calibrate the system to compensate for errors introduced by the ACF, using measurements taken with the ACF in different positions. It also shows what information can be obtained from alignment test data regarding errors in the shapes of the HRMA mirrors. Simulated results based on measured ACF surface data are presented.

  5. Fast optical 3D form measurement of aspheres including determination of thickness and wedge and decenter errors

    Science.gov (United States)

    Stover, E.; Berger, G.; Wendel, M.; Petter, J.

    2015-10-01

    A method for non-contact 3D form testing of aspheric surfaces, including determination of decenter and wedge errors and lens thickness, is presented. The principle is based on the absolute measurement capability of multi-wavelength interferometry (MWLI). The approach produces high-density 3D shape information and geometric parameters at high accuracy in short measurement times. The system allows inspection of aspheres without restrictions in terms of spherical departure, including segmented and discontinuous optics. The optics can be polished or ground and made of opaque or transparent materials.

  6. Direct cointegration testing in error-correction models

    NARCIS (Netherlands)

    F.R. Kleibergen (Frank); H.K. van Dijk (Herman)

    1994-01-01

    An error correction model is specified having only exactly identified parameters, some of which reflect a possible departure from a cointegration model. Wald, likelihood ratio, and Lagrange multiplier statistics are derived to test for the significance of these parameters. The con

  7. Uncertainty of rotating shadowband irradiometers and Si-pyranometers including the spectral irradiance error

    Science.gov (United States)

    Wilbert, Stefan; Kleindiek, Stefan; Nouri, Bijan; Geuder, Norbert; Habte, Aron; Schwandt, Marko; Vignola, Frank

    2016-05-01

    Concentrating solar power projects require accurate direct normal irradiance (DNI) data including uncertainty specifications for plant layout and cost calculations. Ground measured data are necessary to obtain the required level of accuracy and are often obtained with Rotating Shadowband Irradiometers (RSI) that use photodiode pyranometers and correction functions to account for systematic effects. The uncertainty of Si-pyranometers has been investigated, but so far mostly empirical studies have been published, or decisive uncertainty influences had to be estimated based on experience in analytical studies. One of the most crucial estimated influences is the spectral irradiance error, because Si-photodiode-pyranometers only detect visible and color infrared radiation and have a spectral response that varies strongly within this wavelength interval. Furthermore, analytic studies did not discuss the role of correction functions and the uncertainty introduced by imperfect shading. In order to further improve the bankability of RSI and Si-pyranometer data, a detailed uncertainty analysis following the Guide to the Expression of Uncertainty in Measurement (GUM) has been carried out. The study defines a method for the derivation of the spectral error and spectral uncertainties and presents quantitative values of the spectral and overall uncertainties. Data from the PSA station in southern Spain was selected for the analysis. Average standard uncertainties for corrected 10 min data of 2 % for global horizontal irradiance (GHI), and 2.9 % for DNI (for GHI and DNI over 300 W/m²) were found for the 2012 yearly dataset when separate GHI and DHI calibration constants were used. The uncertainty at 1 min resolution was also analyzed. The effect of correction functions is significant. The uncertainties found in this study are consistent with results of previous empirical studies.
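
    The GUM machinery in such an analysis ultimately combines the individual standard uncertainty components in quadrature. A minimal sketch of that final step (the component names and magnitudes are illustrative assumptions, not the PSA numbers):

        import numpy as np

        # Standard uncertainty components for a corrected irradiance value,
        # in percent; names and magnitudes are assumptions for illustration.
        components = {
            "calibration":          1.5,
            "spectral_error":       1.0,
            "correction_residual":  0.8,
            "shading_imperfection": 0.5,
            "logging_resolution":   0.3,
        }

        # GUM combination for uncorrelated inputs with unit sensitivity
        # coefficients: u_c = sqrt(sum u_i^2); expanded uncertainty U = k*u_c.
        u_c = np.sqrt(sum(u**2 for u in components.values()))
        print(f"combined standard uncertainty u_c = {u_c:.2f} %")
        print(f"expanded uncertainty (k=2)       U = {2 * u_c:.2f} %")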

  8. Uncertainty of Rotating Shadowband Irradiometers and Si-Pyranometers Including the Spectral Irradiance Error

    Energy Technology Data Exchange (ETDEWEB)

    Wilbert, Stefan; Kleindiek, Stefan; Nouri, Bijan; Geuder, Norbert; Habte, Aron; Schwandt, Marko; Vignola, Frank

    2016-05-31

    Concentrating solar power projects require accurate direct normal irradiance (DNI) data including uncertainty specifications for plant layout and cost calculations. Ground measured data are necessary to obtain the required level of accuracy and are often obtained with Rotating Shadowband Irradiometers (RSI) that use photodiode pyranometers and correction functions to account for systematic effects. The uncertainty of Si-pyranometers has been investigated, but so far mostly empirical studies have been published, or decisive uncertainty influences had to be estimated based on experience in analytical studies. One of the most crucial estimated influences is the spectral irradiance error, because Si-photodiode-pyranometers only detect visible and color infrared radiation and have a spectral response that varies strongly within this wavelength interval. Furthermore, analytic studies did not discuss the role of correction functions and the uncertainty introduced by imperfect shading. In order to further improve the bankability of RSI and Si-pyranometer data, a detailed uncertainty analysis following the Guide to the Expression of Uncertainty in Measurement (GUM) has been carried out. The study defines a method for the derivation of the spectral error and spectral uncertainties and presents quantitative values of the spectral and overall uncertainties. Data from the PSA station in southern Spain was selected for the analysis. Average standard uncertainties for corrected 10 min data of 2% for global horizontal irradiance (GHI), and 2.9% for DNI (for GHI and DNI over 300 W/m2) were found for the 2012 yearly dataset when separate GHI and DHI calibration constants were used. The uncertainty at 1 min resolution was also analyzed. The effect of correction functions is significant. The uncertainties found in this study are consistent with results of previous empirical studies.

  9. Cosmological parameters from weak lensing power spectrum and bispectrum tomography: including the non-Gaussian errors

    CERN Document Server

    Kayo, Issha

    2013-01-01

    We re-examine a genuine power of weak lensing bispectrum tomography for constraining cosmological parameters, when combined with the power spectrum tomography, based on the Fisher information matrix formalism. To account for the full information at two- and three-point levels, we include all the power spectrum and bispectrum information built from all-available combinations of tomographic redshift bins, multipole bins and different triangle configurations over a range of angular scales (up to lmax=2000 as our fiducial choice). For the parameter forecast, we use the halo model approach in Kayo, Takada & Jain (2013) to model the non-Gaussian error covariances as well as the cross-covariance between the power spectrum and the bispectrum, including the halo sample variance or the nonlinear version of beat-coupling. We find that adding the bispectrum information leads to about 60% improvement in the dark energy figure-of-merit compared to the lensing power spectrum tomography alone, for three redshift-bin tomo...

  10. Should Listening and Speaking be included in a Language Test?

    Institute of Scientific and Technical Information of China (English)

    李宗文

    2013-01-01

    Language tests are widely used in our daily life for various purposes, while many language tests fail to include listening or speaking for one reason or another. The question of whether this type of test is still valid enough to serve test purposes and have a positive washback on English learning has come up.

  11. BEAM DYNAMICS SIMULATIONS OF SARAF ACCELERATOR INCLUDING ERROR PROPAGATION AND IMPLICATIONS FOR THE EURISOL DRIVER

    CERN Document Server

    J. Rodnizki, D. Berkovits, K. Lavie, I. Mardor, A. Shor and Y. Yanay (Soreq NRC, Yavne), K. Dunkel, C. Piel (ACCEL, Bergisch Gladbach), A. Facco (INFN/LNL, Legnaro, Padova), V. Zviagintsev (TRIUMF, Vancouver)

    Beam dynamics simulations of the SARAF (Soreq Applied Research Accelerator Facility) superconducting RF linear accelerator have been performed in order to establish the accelerator design. The multi-particle simulation includes 3D realistic electromagnetic field distributions, space charge forces and fabrication, misalignment and operation errors. A 4 mA proton or deuteron beam is accelerated up to 40 MeV with a moderate rms emittance growth and a high real-estate gradient of 2 MeV/m. An envelope of 40,000 macro-particles is kept under a radius of 1.1 cm, well below the beam pipe bore radius. The accelerator design of SARAF is proposed as an injector for the EURISOL driver accelerator. The ACCEL 176 MHz β0=0.09 and β0=0.15 HWR lattice was extended to 90 MeV based on the LNL 352 MHz β0=0.31 HWR. The matching between both lattices ensures a smooth transition and the possibility of extending the accelerator to the required EURISOL ion energy.

  12. Revised error propagation of 40Ar/39Ar data, including covariances

    Science.gov (United States)

    Vermeesch, Pieter

    2015-12-01

    The main advantage of the 40Ar/39Ar method over conventional K-Ar dating is that it does not depend on any absolute abundance or concentration measurements, but only uses the relative ratios between five isotopes of the same element (argon), which can be measured with great precision on a noble gas mass spectrometer. The relative abundances of the argon isotopes are subject to a constant sum constraint, which imposes a covariant structure on the data: the relative amount of any of the five isotopes can always be obtained from that of the other four. Thus, the 40Ar/39Ar method is a classic example of a 'compositional data problem'. In addition to the constant sum constraint, covariances are introduced by a host of other processes, including data acquisition, blank correction, detector calibration, mass fractionation, decay correction, interference correction, atmospheric argon correction, interpolation of the irradiation parameter, and age calculation. The myriad of correlated errors arising during the data reduction are best handled by casting the 40Ar/39Ar data reduction protocol in a matrix form. The completely revised workflow presented in this paper is implemented in a new software platform, Ar-Ar_Redux, which takes raw mass spectrometer data as input and generates accurate 40Ar/39Ar ages and their (co-)variances as output. Ar-Ar_Redux accounts for all sources of analytical uncertainty, including those associated with decay constants and the air ratio. Knowing the covariance matrix of the ages removes the need to consider 'internal' and 'external' uncertainties separately when calculating (weighted) mean ages. Ar-Ar_Redux is built on the same principles as its sibling program in the U-Pb community (U-Pb_Redux), thus improving the intercomparability of the two methods with tangible benefits to the accuracy of the geologic time scale. The program can be downloaded free of charge from

  13. Testing accelerometer rectification error caused by multidimensional composite inputs with double turntable centrifuge.

    Science.gov (United States)

    Guan, W; Meng, X F; Dong, X M

    2014-12-01

    Rectification error is a critical characteristic of inertial accelerometers. Accelerometers working in operational situations are stimulated by composite inputs, including constant acceleration and vibration, from multiple directions. However, traditional methods for evaluating rectification error use only one-dimensional vibration. In this paper, a double turntable centrifuge (DTC) was utilized to produce constant acceleration and vibration simultaneously, and we tested the rectification error due to the composite accelerations. First, we deduced the expression of the rectification error from the output of the DTC and a static model of the single-axis pendulous accelerometer under test. Theoretical investigation and analysis were carried out in accordance with the rectification error model. Then the detailed experimental procedure and testing results are described. We measured the rectification error with various constant accelerations at different frequencies and amplitudes of the vibration. The experimental results showed the distinctive characteristics of the rectification error caused by the composite accelerations. The linear relation between the constant acceleration and the rectification error was proved. The experimental procedure and results presented in this context can be referenced for the investigation of the characteristics of accelerometers with multiple inputs.
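
    The effect itself is easy to illustrate: with a quadratic term k2·a² in the static model, a sinusoidal vibration of amplitude A rectifies into a mean output shift of k2·A²/2 on top of the constant-acceleration response. A minimal sketch with assumed model coefficients:

        import numpy as np

        # Assumed static model of a single-axis pendulous accelerometer:
        # output = k0 + k1*a + k2*a^2, with acceleration a in g.
        k0, k1, k2 = 0.0, 1.0, 5e-4

        t = np.linspace(0, 1.0, 100_000)
        a_const, A, f = 10.0, 5.0, 100.0  # constant accel; vibration ampl, freq
        a = a_const + A * np.sin(2 * np.pi * f * t)

        indicated = k0 + k1 * a + k2 * a**2
        rect_error = indicated.mean() - (k0 + k1 * a_const + k2 * a_const**2)
        print(f"simulated mean shift: {rect_error:.4e} g")
        print(f"theory k2*A^2/2     : {k2 * A**2 / 2:.4e} g")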

  14. Analysis of sensitivity and errors in Maglev vibration test system

    Institute of Scientific and Technical Information of China (English)

    JIANG; Dong; LIU; Xukun; WANG; Deyu; YANG; Jiaxiang

    2016-01-01

    In order to improve the performance of Maglev vibration test systems, the relationships of operating parameters between the different components and the system were researched. The working principle of the photoelectric displacement sensor was analyzed. The relationship between the displacement of the transducer and the infrared light area received by the sensor was given. A method of expanding the dynamic range of the vibrator was proposed, which doubles the dynamic range of the Maglev vibrator. By increasing the amplification of the amplifier, the sensitivity of the photoelectric displacement sensor can be maintained. Two modes of operation of the controller were analyzed. Bilateral operation of the designed vibration test system can further improve the stability of the system. An object's vibration was measured by the designed Maglev vibration test system under different vibration exciter frequencies. Experiments show that the frequency measured by the Maglev vibration test system and the loaded frequency are the same. Finally, the errors of the test system were analyzed; these errors can meet the requirements of the application. The results lay the foundation for the practical application of Maglev vibration test systems.

  15. DEVELOPMENT AND TESTING OF ERRORS CORRECTION ALGORITHM IN ELECTRONIC DESIGN AUTOMATION

    Directory of Open Access Journals (Sweden)

    E. B. Romanova

    2016-03-01

    Full Text Available Subject of Research. We have developed and present a method of design error correction for printed circuit boards (PCB) in electronic design automation (EDA). Control of the process parameters of a PCB in EDA is carried out by means of the Design Rule Check (DRC) program. The DRC program monitors compliance with the design rules (minimum width of the conductors and gaps, the parameters of pads and via-holes, the parameters of polygons, etc.) and also checks the route tracing, short circuits, the presence of objects outside the PCB edge and other design errors. The result of running the DRC program is a generated error report. For quality production of circuit boards, DRC errors should be corrected, which is ensured by the creation of an error-free DRC report. Method. A problem of correction repeatability of DRC errors was identified as a result of trial operation of the P-CAD, Altium Designer and KiCAD programs. For its solution, an analysis of DRC errors was carried out and the methods of their correction were studied. We proposed clustering the DRC errors: groups of errors comprise those error types whose correction sequence has no impact on the correction time. An algorithm for the correction of DRC errors is proposed. Main Results. The best correction sequence of DRC errors has been determined. The algorithm has been tested in the following EDA: P-CAD, Altium Designer and KiCAD. Testing has been carried out on two- and four-layer test PCBs (digital and analog). The DRC error correction time with the algorithm applied has been compared with the same time without it. It has been shown that the time saved on DRC error correction increases with the number of error types, by up to 3.7 times. Practical Relevance. Application of the proposed algorithm will reduce PCB design time and improve the quality of the PCB design. We recommend using the developed algorithm when the number of error types is equal to four or more. The proposed algorithm can be used in different

  16. Stepwise multiple test procedures and control of directional errors

    OpenAIRE

    1999-01-01

    One of the most difficult problems occurring with stepwise multiple test procedures for a set of two-sided hypotheses is the control of directional errors if rejection of a hypothesis is accomplished with a directional decision. In this paper we generalize a result for so-called step-down procedures derived by Shaffer to a large class of stepwise or closed multiple test procedures. In a unifying way we obtain results for a large class of order statistics procedures includin...

  17. Extensions of nonlinear error propagation analysis for explicit pseudodynamic testing

    Institute of Scientific and Technical Information of China (English)

    Shuenn-Yih Chang

    2009-01-01

    Two important extensions of a technique to perform a nonlinear error propagation analysis for an explicit pseudodynamic algorithm (Chang, 2003) are presented. One extends the stability study from a given time step to a complete step-by-step integration procedure. It is analytically proven that ensuring stability conditions in each time step leads to a stable computation of the entire step-by-step integration procedure. The other extension shows that the nonlinear error propagation results, which are derived for a nonlinear single degree of freedom (SDOF) system, can be applied to a nonlinear multiple degree of freedom (MDOF) system. This application is dependent upon the determination of the natural frequencies of the system in each time step, since all the numerical properties and error propagation properties in the time step are closely related to these frequencies. The results are derived from the step degree of nonlinearity. An instantaneous degree of nonlinearity is introduced to replace the step degree of nonlinearity and is shown to be easier to use in practice. The extensions can be also applied to the results derived from a SDOF system based on the instantaneous degree of nonlinearity, and hence a time step might be appropriately chosen to perform a pseudodynamic test prior to testing.

  18. Performance of muon reconstruction including Alignment Position Errors for 2016 Collision Data

    CERN Document Server

    CMS Collaboration

    2016-01-01

    Starting with the 2016 run, muon reconstruction uses non-zero Alignment Position Errors to account for the residual uncertainties of the muon chambers' positions. Significant improvements are obtained, in particular for the startup phase after opening/closing the muon detector. Performance results are presented for real data and MC simulations, related to both the offline reconstruction and the High-Level Trigger.

  19. Blood glucose testing in the hospital: error sources and risk management.

    Science.gov (United States)

    Nichols, James H

    2011-01-01

    Glucose testing in the hospital with point-of-care devices presents multiple opportunities for error. Any device can fail under the right conditions. For glucose monitoring in the hospital, with thousands of operators, hundreds of devices, and dozens of locations involved, there is ample opportunity for errors that can impact the quality of test results. Errors can occur in any phase of the testing process: preanalytic, analytic, or postanalytic. Common sources of meter error include patient or methodology interferences, operator mistakes, environmental exposure, and device malfunction. Early models of glucose meters had few internal checks or capability to warn the operator of meter problems. The latest generation of glucose monitors has a number of internal checks and controls engineered into the testing process to prevent serious errors or warn the operator by suppressing test results. Some of these control processes are built into the software and data management system of the meters, others require the hospital to do something, such as regularly clean the meter or analyze control samples of known glucose concentration, to verify meter performance. Hospitals need to be aware of the potential for errors by understanding weaknesses in the testing process that could lead to erroneous results and take steps to prevent errors from occurring or to minimize the harm to patients when errors do occur. The reliability of a glucose result will depend on the balance of internal control features available from manufacturers in conjunction with the liquid control analysis and other control processes (operator training, device validation, and maintenance) utilized by the hospitals.

  20. Spatial autocorrelation among automated geocoding errors and its effects on testing for disease clustering.

    Science.gov (United States)

    Zimmerman, Dale L; Li, Jie; Fang, Xiangming

    2010-04-30

    Automated geocoding of patient addresses is an important data assimilation component of many spatial epidemiologic studies. Inevitably, the geocoding process results in positional errors. Positional errors incurred by automated geocoding tend to reduce the power of tests for disease clustering and otherwise affect spatial analytic methods. However, there are reasons to believe that the errors may often be positively spatially correlated and that this may mitigate their deleterious effects on spatial analyses. In this article, we demonstrate explicitly that the positional errors associated with automated geocoding of a data set of more than 6000 addresses in Carroll County, Iowa, are spatially autocorrelated. Furthermore, through two simulation studies of disease processes, including one in which the disease process is overlain upon the Carroll County addresses, we show that spatial autocorrelation among geocoding errors maintains the power of two tests for disease clustering at a level higher than that which would occur if the errors were independent. Implications of these results for cluster detection, privacy protection, and measurement error modeling of geographic health data are discussed.

  1. Ar-Ar_Redux: rigorous error propagation of 40Ar/39Ar data, including covariances

    Science.gov (United States)

    Vermeesch, P.

    2015-12-01

    Rigorous data reduction and error propagation algorithms are needed to realise Earthtime's objective to improve the interlaboratory accuracy of 40Ar/39Ar dating to better than 1% and thereby facilitate the comparison and combination of the K-Ar and U-Pb chronometers. Ar-Ar_Redux is a new data reduction protocol and software program for 40Ar/39Ar geochronology which takes into account two previously underappreciated aspects of the method: 1. 40Ar/39Ar measurements are compositional data. In its simplest form, the 40Ar/39Ar age equation can be written as: t = log(1 + J [40Ar/39Ar − 298.56 × 36Ar/39Ar])/λ = log(1 + JR)/λ, where λ is the 40K decay constant and J is the irradiation parameter. The age t does not depend on the absolute abundances of the three argon isotopes but only on their relative ratios. Thus, the 36Ar, 39Ar and 40Ar abundances can be normalised to unity and plotted on a ternary diagram or 'simplex'. Argon isotopic data are therefore subject to the peculiar mathematics of 'compositional data', sensu Aitchison (1986, The Statistical Analysis of Compositional Data, Chapman & Hall). 2. Correlated errors are pervasive throughout the 40Ar/39Ar method. Current data reduction protocols for 40Ar/39Ar geochronology propagate the age uncertainty as follows: σ²(t) = [J² σ²(R) + R² σ²(J)] / [λ (1 + RJ)]², which implies zero covariance between R and J. In reality, however, significant error correlations are found in every step of the 40Ar/39Ar data acquisition and processing, in both single and multi collector instruments, during blank, interference and decay corrections, age calculation etc. Ar-Ar_Redux revisits every aspect of the 40Ar/39Ar method by casting the raw mass spectrometer data into a contingency table of logratios, which automatically keeps track of all covariances in a compositional context. Application of the method to real data reveals strong correlations (r² of up to 0.9) between age measurements within a single irradiation batch. Properly taking
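
    Including the covariance simply adds a cross term to the first-order propagation quoted above. A minimal sketch of the corrected formula (the numerical inputs are invented, not from a real irradiation batch):

        import numpy as np

        lam = 5.543e-10  # total 40K decay constant, 1/yr (Steiger & Jaeger 1977)

        def age_and_variance(R, J, var_R, var_J, cov_RJ=0.0):
            """t = log(1 + J*R)/lam, with first-order error propagation
            including the R-J covariance term."""
            t = np.log(1.0 + J * R) / lam
            dtdR = J / (lam * (1.0 + J * R))
            dtdJ = R / (lam * (1.0 + J * R))
            var_t = dtdR**2 * var_R + dtdJ**2 * var_J + 2 * dtdR * dtdJ * cov_RJ
            return t, var_t

        t0, v0 = age_and_variance(R=8.0, J=0.01, var_R=1e-4, var_J=1e-10)
        t1, v1 = age_and_variance(R=8.0, J=0.01, var_R=1e-4, var_J=1e-10,
                                  cov_RJ=5e-8)  # correlation of 0.5
        print(f"no covariance : t = {t0/1e6:.2f} +/- {np.sqrt(v0)/1e6:.2f} Ma")
        print(f"with cov(R,J) : t = {t1/1e6:.2f} +/- {np.sqrt(v1)/1e6:.2f} Ma")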

  2. Robust and Adaptive OMR System Including Fuzzy Modeling, Fusion of Musical Rules, and Possible Error Detection

    Directory of Open Access Journals (Sweden)

    Isabelle Bloch

    2007-01-01

    Full Text Available This paper describes a system for optical music recognition (OMR) in the case of monophonic typeset scores. After clarifying the difficulties specific to this domain, we propose appropriate solutions at both the image analysis level and the high-level interpretation level. Thus, a recognition and segmentation method is designed that allows dealing with common printing defects and numerous symbol interconnections. Then, musical rules are modeled and integrated in order to make a consistent decision. This high-level interpretation step relies on the fuzzy sets and possibility framework, since it allows dealing with symbol variability, flexibility, and imprecision of music rules, and merging all these heterogeneous pieces of information. Other innovative features are the indication of potential errors and the possibility of applying learning procedures, in order to gain in robustness. Experiments conducted on a large database show that the proposed method constitutes an interesting contribution to OMR.

  3. Robust and Adaptive OMR System Including Fuzzy Modeling, Fusion of Musical Rules, and Possible Error Detection

    Science.gov (United States)

    Rossant, Florence; Bloch, Isabelle

    2006-12-01

    This paper describes a system for optical music recognition (OMR) in the case of monophonic typeset scores. After clarifying the difficulties specific to this domain, we propose appropriate solutions at both the image analysis level and the high-level interpretation level. Thus, a recognition and segmentation method is designed that allows dealing with common printing defects and numerous symbol interconnections. Then, musical rules are modeled and integrated in order to make a consistent decision. This high-level interpretation step relies on the fuzzy sets and possibility framework, since it allows dealing with symbol variability, flexibility, and imprecision of music rules, and merging all these heterogeneous pieces of information. Other innovative features are the indication of potential errors and the possibility of applying learning procedures, in order to gain in robustness. Experiments conducted on a large database show that the proposed method constitutes an interesting contribution to OMR.

  4. Neutrino masses and cosmological parameters from a Euclid-like survey: Markov Chain Monte Carlo forecasts including theoretical errors

    Energy Technology Data Exchange (ETDEWEB)

    Audren, Benjamin; Lesgourgues, Julien [Institut de Théorie des Phénomènes Physiques, École Polytechnique Fédérale de Lausanne, CH-1015, Lausanne (Switzerland); Bird, Simeon [Institute for Advanced Study, 1 Einstein Drive, Princeton, NJ, 08540 (United States); Haehnelt, Martin G. [Kavli Institute for Cosmology and Institute of Astronomy, Madingley Road, Cambridge, CB3 0HA (United Kingdom); Viel, Matteo, E-mail: benjamin.audren@epfl.ch, E-mail: julien.lesgourgues@cern.ch, E-mail: spb@ias.edu, E-mail: haehnelt@ast.cam.ac.uk, E-mail: viel@oats.inaf.it [INAF/Osservatorio Astronomico di Trieste, Via Tiepolo 11, 34143, Trieste (Italy)

    2013-01-01

    We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Markov Chain Monte Carlo (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors, and assuming further conservatively that the uncorrelated error rises above 2% at k = 0.4 h/Mpc and z = 0.5, we find that a future Euclid-like cosmic shear/galaxy survey achieves a 1-σ error on Mν close to 32 meV/25 meV, sufficient for detecting the total neutrino mass with good significance. If the residual uncorrelated error indeed rises rapidly towards smaller scales in the non-linear regime as we have assumed here, then the data on non-linear scales does not increase the sensitivity to the total neutrino mass. Assuming instead a ten times smaller theoretical error with the same scale dependence, the error on the total neutrino mass decreases moderately from σ(Mν) = 18 meV to 14 meV when mildly non-linear scales with 0.1 h/Mpc < k < 0.6 h/Mpc are included in the analysis of the galaxy survey data.
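
    The forecasting machinery behind such numbers reduces, in the Fisher-matrix limit, to inverting a parameter information matrix: the marginalized 1-σ error on a parameter is the square root of the corresponding diagonal element of F⁻¹. A minimal sketch with an invented 3-parameter matrix (MCMC forecasts like the paper's relax this Gaussian approximation):

        import numpy as np

        # Invented 3-parameter Fisher matrix, e.g. (Omega_m, sigma_8, M_nu);
        # the numbers are for demonstration only, not survey forecasts.
        F = np.array([[4.0e4, 1.2e4, 3.0e3],
                      [1.2e4, 9.0e3, 1.5e3],
                      [3.0e3, 1.5e3, 1.0e3]])

        cov = np.linalg.inv(F)
        marginalized = np.sqrt(np.diag(cov))     # all other parameters free
        conditional = 1.0 / np.sqrt(np.diag(F))  # all other parameters fixed
        for name, m, c in zip(["Omega_m", "sigma_8", "M_nu"],
                              marginalized, conditional):
            print(f"{name}: marginalized {m:.2e}, conditional {c:.2e}")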

  5. SITE project. Phase 1: Continuous data bit-error-rate testing

    Science.gov (United States)

    Fujikawa, Gene; Kerczewski, Robert J.

    1992-01-01

    The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.

  6. Research and implementation of the burst-mode optical signal bit-error test

    Science.gov (United States)

    Huang, Qiu-yuan; Ma, Chao; Shi, Wei; Chen, Wei

    2009-08-01

    On the basis of the characteristics of the TDMA uplink optical signal in a PON system, this article puts forward an FPGA-based method for high-speed burst-mode optical bit-error-rate testing. The article proposes a new way of generating the burst signal pattern, including user-defined and pseudo-random patterns; realizes slip synchronization and self-synchronization of error detection using a data-decomposition technique together with traditional synchronization-code technology; achieves clock synchronization for the high-speed burst signal using rapid phase-locked-loop delay synchronization in the external circuit; and completes the bit-error-rate test of the high-speed burst optical signal.
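
    As a side note on the pseudo-random pattern mentioned above, the sketch below shows a conventional PRBS-7 generator (x^7 + x^6 + 1 LFSR) of the kind commonly used in BER testing, together with a naive bit-error count. This is an illustrative software toy in Python, not the paper's FPGA implementation.

        def prbs7(n_bits, seed=0x7F):
            # 7-bit LFSR with taps at stages 7 and 6 (x^7 + x^6 + 1)
            state = seed & 0x7F
            out = []
            for _ in range(n_bits):
                out.append(state & 1)
                feedback = ((state >> 6) ^ (state >> 5)) & 1
                state = ((state << 1) | feedback) & 0x7F
            return out

        def bit_errors(sent, received):
            # comparison stage of a BER test
            return sum(s != r for s, r in zip(sent, received))

        tx = prbs7(1000)
        rx = list(tx)
        rx[10] ^= 1                             # inject a single bit error
        print(bit_errors(tx, rx) / len(tx))     # BER = 1e-3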

  7. Impact of Probe Placement Error on MIMO OTA Test Zone Performance

    DEFF Research Database (Denmark)

    Fan, Wei; Nielsen, Jesper Ødum; Carreño, Xavier;

    2012-01-01

    Standardization work for MIMO OTA testing methods is currently ongoing, where a multi-probe anechoic chamber based solution is an important candidate. In this paper, the probes located on an OTA ring are used to synthesize a plane wave field in the center of the OTA ring, and the EM field for each probe is obtained using FDTD simulation. This paper investigates the extent to which we can control the field structure inside the test zone where the device under test is located. The focus is on performance deterioration introduced by probe placement error, including OTA probe orientation error and location mismatch, which are general non-idealities in practical MIMO OTA test systems.

  8. The pseudonoise test set: Communication system's performance evaluation based upon RMS error testing

    Science.gov (United States)

    Wallace, G. R.; Gussow, S. S.; Salter, W. E.; Weathers, G. D.

    1972-01-01

    A pseudonoise (PN) test set was built to provide a relatively easy means of accurately determining the end-to-end rms error introduced by a communication system when subjected to wideband data. It utilizes a filtered pseudorandom sequence generator as a wideband data source, providing a convenient means for digitally delaying the input reference signal for comparison with the distorted output of the communication system under test. In addition to providing a means to measure the end-to-end rms error and the average delay of a communication system, the PN test set also provides a means to determine the tested system's impulse response and correlation function. The theory of PN testing is discussed in detail along with the most difficult aspect of implementation, the building of matched filter pairs. Both analytical and empirical results are reported which support the contention that this is an accurate and practical way to acquire figures of merit for complete communication systems.
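
    The principle behind the delay and rms-error measurement can be sketched numerically. This is an illustrative digital analogue of the idea (signal length and noise level are my own choices, not the test set's hardware implementation): cross-correlate a PN-like input with the system output to find the average delay, then compute the residual rms error after alignment.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.choice([-1.0, 1.0], size=4096)      # PN-like wideband input
        true_delay = 37
        y = np.roll(x, true_delay) + 0.05 * rng.standard_normal(x.size)

        corr = np.array([np.dot(np.roll(x, d), y) for d in range(100)])
        d_hat = int(np.argmax(corr))                # estimated average delay
        rms = np.sqrt(np.mean((y - np.roll(x, d_hat)) ** 2))
        print(d_hat, rms)                           # -> 37 and roughly 0.05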

  9. OOK power model based dynamic error testing for smart electricity meter

    Science.gov (United States)

    Wang, Xuewei; Chen, Jingxia; Yuan, Ruiming; Jia, Xiaolu; Zhu, Meng; Jiang, Zhenyu

    2017-02-01

    This paper formulates the dynamic error testing problem for a smart meter, with consideration and investigation of both the testing signal and the dynamic error testing method. To solve the dynamic error testing problems, the paper establishes an on-off-keying (OOK) testing dynamic current model and an OOK testing dynamic load energy (TDLE) model. Two types of TDLE sequences and three modes of OOK testing dynamic power are then proposed. In addition, a novel algorithm, which helps to solve the traceability problem of dynamic electric energy measurement, is derived for dynamic errors. Based on the above research, OOK TDLE sequence generation equipment was developed and a dynamic error testing system was constructed. Using this testing system, five kinds of meters were tested in the three dynamic power modes. The test results show that the dynamic error is closely related to the dynamic power mode and that the measurement uncertainty is 0.38%.

  10. Financial errors in dementia: Testing a neuroeconomic conceptual framework

    Science.gov (United States)

    Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L.; Rosen, Howard J.

    2013-01-01

    Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer's disease (AD, n=100) and behavioral variant frontotemporal dementia (bvFTD, n=50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors, while other error types were more frequent in bvFTD than in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention. PMID:23550884

  11. Information content of weak lensing bispectrum: including the non-Gaussian error covariance matrix

    CERN Document Server

    Kayo, Issha; Jain, Bhuvnesh

    2013-01-01

    We address a long-standing problem, namely how to extract information in the non-Gaussian regime of weak lensing surveys, by accurately modeling all relevant covariances between the power spectra and bispectra. We use 1000 ray-tracing simulation realizations for a Lambda-CDM model and an analytical halo model. We develop a formalism to describe the covariance matrices of power spectra and bispectra of all triangle configurations, which extend to 6-point correlation functions. We include a new contribution arising from coupling of the lensing Fourier modes with large-scale mass fluctuations on scales comparable with the survey region via halo bias theory, which we call the halo sample variance (HSV) effect. We show that the model predictions are in excellent agreement with the simulation results for the power spectrum and bispectrum covariances. The HSV effect gives a dominant contribution to the covariances at multipoles l > 10^3, which arise from massive halos with masses of about 10^14 solar masses and at rel...

  12. Testing Theories of Transfer Using Error Rate Learning Curves.

    Science.gov (United States)

    Koedinger, Kenneth R; Yudelson, Michael V; Pavlik, Philip I

    2016-07-01

    We analyze naturally occurring datasets from student use of educational technologies to explore a long-standing question of the scope of transfer of learning. We contrast a faculty theory of broad transfer with a component theory of more constrained transfer. To test these theories, we develop statistical models of them. These models use latent variables to represent mental functions that are changed while learning to cause a reduction in error rates for new tasks. Strong versions of these models provide a common explanation for the variance in task difficulty and transfer. Weak versions decouple difficulty and transfer explanations by describing task difficulty with parameters for each unique task. We evaluate these models in terms of both their prediction accuracy on held-out data and their power in explaining task difficulty and learning transfer. In comparisons across eight datasets, we find that the component models provide both better predictions and better explanations than the faculty models. Weak model variations tend to improve generalization across students, but hurt generalization across items and make a sacrifice to explanatory power. More generally, the approach could be used to identify malleable components of cognitive functions, such as spatial reasoning or executive functions. Copyright © 2016 Cognitive Science Society, Inc.
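
    As a toy illustration of an error-rate learning curve (my own example, not the authors' latent-variable models), one can fit a power law relating error rate to the number of practice opportunities credited to a skill component:

        import numpy as np
        from scipy.optimize import curve_fit

        def power_law(t, a, b):
            # error rate after t practice opportunities
            return a * t ** (-b)

        opportunities = np.arange(1, 21)
        observed_error = 0.5 * opportunities ** (-0.3) \
            + 0.02 * np.random.default_rng(1).standard_normal(20)

        (a, b), _ = curve_fit(power_law, opportunities, observed_error, p0=(0.5, 0.3))
        print(f"initial error = {a:.2f}, learning rate = {b:.2f}")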

  13. Content Coverage of Single-Word Tests Used to Assess Common Phonological Error Patterns

    Science.gov (United States)

    Kirk, Cecilia; Vigeland, Laura

    2015-01-01

    Purpose: This review evaluated whether 9 single-word tests of phonological error patterns provide adequate content coverage to accurately identify error patterns that are active in a child's speech. Method: Tests in the current study were considered to display sufficient opportunities to assess common phonological error patterns if they…

  15. Experimental research of error restraint for dynamic interferometer in optical testing

    Science.gov (United States)

    Wu, Xin; Zhang, Xiaoqiang; Wu, Xiaoyan; Zhang, Linna; Yu, Yingjie

    2015-02-01

    Phase-shifting interferometry is commonly used in precision optical surface measurement, but it is limited by its sensitivity to the environment and is therefore rarely usable for optical testing in a workshop environment. Instantaneous interferometry is a good alternative because of its insensitivity to vibration. This paper describes an instantaneous interferometer utilizing a spatial carrier and the Fourier transform, and discusses the accuracy of the interferometer for optical testing when phase-shifting interferometry is unable to realize a precision measurement. Through extensive experiments, several problems were analyzed, including the relationships between the measurement accuracy and systematic error, vibration, temperature, test-surface cleanliness, and so on. The discussed work on error restraint can provide a reference for instantaneous interferometry applications.

  16. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    KAUST Repository

    Carroll, Raymond J.

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  17. How to reduce errors in applying impairment tests

    OpenAIRE

    Petersen, Christian; Plenborg, Thomas

    2007-01-01

    Fair value accounting has become predominant in accounting as a vast number of IAS/IFRS standards are based on fair value accounting, including IAS 36 Impairment of assets. Fair value accounting for goodwill is technically challenging, since market prices are not observable. Thus, valuation technologies must be applied in order to test goodwill for impairment. While prior research on goodwill has concentrated on either the (dis)advantages for each accounting procedure for goodwill or exami...

  18. Testing and assessment strategies, including alternative and new approaches

    DEFF Research Database (Denmark)

    Meyer, Otto A.

    2003-01-01

    The object of toxicological testing is to predict possible adverse effects in humans when exposed to chemicals, whether used as industrial chemicals, pharmaceuticals or pesticides. Animal models are predominantly used in identifying potential hazards of chemicals. The use of laboratory animals raises ethical concern. However, irrespective of animal welfare, it is an important aspect of the discipline of toxicology that the primary object is human health. The ideal testing and assessment strategy would simply be to use all the available test methods, and preferably more, in laboratory animal species from which we get as many data as possible in order to obtain the most extensive database for the toxicological evaluation of a chemical. Consequently, society has decided that certain groups of chemicals should be tested accordingly. However, realising that this idea is not obtainable in practice because...

  19. Identification of Kinematic Errors of Five-axis Machine Tool Trunnion Axis from Finished Test Piece

    Institute of Scientific and Technical Information of China (English)

    ZHANG Ya; FU Jianzhong; CHEN Zichen

    2014-01-01

    Compared with traditional non-cutting measurement, machining tests can more accurately reflect the kinematic errors of five-axis machine tools in the actual machining process for the users. However, the measurement and calculation involved in the machining tests reported in the literature are quite difficult and time-consuming. A new machining-test method for the trunnion axis of five-axis machine tools is proposed. Firstly, a simple mathematical model of the cradle-type five-axis machine tool was established by optimizing the coordinate system settings based on robot kinematics. Then, machining tests based on error-sensitive directions were proposed to identify the kinematic errors of the trunnion axis of a cradle-type five-axis machine tool. By adopting the error-sensitive vectors in the matrix calculation, the functional relationship equations between the machining errors of the test piece in the error-sensitive directions and the kinematic errors of the C-axis and A-axis of the five-axis machine tool rotary table were established based on the model of the kinematic errors. According to our previous work, the kinematic errors of the C-axis can be treated as known quantities, and the kinematic errors of the A-axis can be obtained from the equations. This method was tested on a Mikron UCP600 vertical machining center. The machining errors in the error-sensitive directions can be obtained by CMM inspection of the finished test piece to identify the kinematic errors of the five-axis machine tool trunnion axis. Experimental results demonstrated that the proposed method substantially reduces complexity, cost, and time consumed, and has wider applicability.

  20. Phoneme Error Pattern by Heritage Speakers of Spanish on an English Word Recognition Test.

    Science.gov (United States)

    Shi, Lu-Feng

    2017-04-01

    Heritage speakers acquire their native language from home use in their early childhood. As the native language is typically a minority language in the society, these individuals receive their formal education in the majority language and eventually develop greater competency with the majority than their native language. To date, there have not been specific research attempts to understand word recognition by heritage speakers. It is not clear if and to what degree we may infer from evidence based on bilingual listeners in general. This preliminary study investigated how heritage speakers of Spanish perform on an English word recognition test and analyzed their phoneme errors. A prospective, cross-sectional, observational design was employed. Twelve normal-hearing adult Spanish heritage speakers (four men, eight women, 20-38 yr old) participated in the study. Their language background was obtained through the Language Experience and Proficiency Questionnaire. Nine English monolingual listeners (three men, six women, 20-41 yr old) were also included for comparison purposes. Listeners were presented with 200 Northwestern University Auditory Test No. 6 words in quiet. They repeated each word orally and in writing. Their responses were scored by word, word-initial consonant, vowel, and word-final consonant. Performance was compared between groups with Student's t test or analysis of variance. Group-specific error patterns were primarily descriptive, but intergroup comparisons were made using 95% or 99% confidence intervals for proportional data. The two groups of listeners yielded comparable scores when their responses were examined by word, vowel, and final consonant. However, heritage speakers of Spanish misidentified significantly more word-initial consonants and had significantly more difficulty with initial /p, b, h/ than their monolingual peers. The two groups yielded similar patterns for vowel and word-final consonants, but heritage speakers made significantly

  1. Error analysis and system optimization of non-null aspheric testing system

    Science.gov (United States)

    Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo

    2010-10-01

    A non-null aspheric testing system, which employs a partial null lens (PNL for short) and a reverse iterative optimization reconstruction (ROR for short) technique, is proposed in this paper. Based on system modeling in ray-tracing software, the parameters of each optical element are optimized, which makes the system model more precise. The systematic error of the non-null aspheric testing system is analyzed and can be categorized into two types: the error due to the surface parameters of the PNL in the system modeling, and the remainder originating from the non-null interferometer, which is handled by the approach of error-storage subtraction. Experimental results show that, after the systematic error is removed from the result of the non-null aspheric testing system, the aspheric surface is precisely reconstructed by the ROR technique, and the consideration of systematic error greatly increases the test accuracy of the non-null aspheric testing system.

  2. Robust testing for normality of error terms with presence of autocorrelation and conditional heteroscedasticity

    Science.gov (United States)

    Střelec, Luboš; Stehlík, Milan

    2017-01-01

    Normality of the error terms in regression models is one of the basic assumptions in applied regression analysis. Therefore, testing for normality of the error terms constitutes one of the most important steps of regression model verification and validation. Failure to assess non-normality of the error terms may lead to incorrect results of the usual statistical inference techniques such as the t-test or F-test. Within applied regression analysis there is a frequent problem of the presence of autocorrelation and conditional heteroscedasticity of the error terms. Under both autocorrelation and heteroscedasticity, the usual OLS estimators are still unbiased, linear and asymptotically normally distributed; however, they no longer have the minimum variance property among all linear unbiased estimators. Therefore, the aim of this paper is to present and discuss normality testing of the error terms in the presence of autocorrelation and conditional heteroscedasticity. To explore the power of selected classical and robust tests for normality, we perform a simulation study.
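
    A toy version of such a power study is easy to sketch. The design below is illustrative only (sample size, error distribution and AR(1) coefficient are my own choices, not the paper's): it estimates the rejection rate of two common normality tests on OLS residuals when the true errors are heavy-tailed and autocorrelated.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        n, reps, alpha, rho = 200, 500, 0.05, 0.5
        rejections = {"shapiro": 0, "jarque_bera": 0}

        for _ in range(reps):
            e = np.empty(n)
            e[0] = rng.standard_t(df=4)
            for t in range(1, n):                  # AR(1), t-distributed shocks
                e[t] = rho * e[t - 1] + rng.standard_t(df=4)
            x = rng.standard_normal(n)
            y = 1.0 + 2.0 * x + e
            resid = y - np.polyval(np.polyfit(x, y, 1), x)   # OLS residuals
            if stats.shapiro(resid).pvalue < alpha:
                rejections["shapiro"] += 1
            if stats.jarque_bera(resid).pvalue < alpha:
                rejections["jarque_bera"] += 1

        print({k: v / reps for k, v in rejections.items()})  # empirical power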

  3. Non-parametric tests of productive efficiency with errors-in-variables

    NARCIS (Netherlands)

    Kuosmanen, T.K.; Post, T.; Scholtes, S.

    2007-01-01

    We develop a non-parametric test of productive efficiency that accounts for errors-in-variables, following the approach of Varian [1985, Nonparametric analysis of optimizing behavior with measurement error, Journal of Econometrics 30(1/2), 445-458]. The test is based on the general Pareto-Koopmans

  4. Zernike test. I - Analytical aspects. II - Experimental aspects. [interferometric phase error test

    Science.gov (United States)

    Golden, L. J.

    1977-01-01

    The Zernike phenomenon is interpreted in general interferometric terms to gain insight into the optimum design of disks suitable for a particular experimental situation. The design of Zernike disks for measuring small low-order aberrations is considered and evaluated; optimum parameters for disks 2, 3, 4, and 5 microns in radius are determined for an f/12 large-space-telescope system with an obscuration ratio of 0.4 at 0.6 micron. It is shown that optimization in this case provides sensitivities of better than one hundredth of a wavelength for the measurement of low-order aberrations. The procedure for manufacturing a Zernike disk is then described in detail, and results are reported for tests of a laboratory Zernike figure sensor containing a disk manufactured according to this procedure. In the tests, a laboratory wavefront-error simulator was used to introduce small aberration ranges, measurements of the changes in reimaged pupil intensity introduced by the disk were made for several aberration settings, and the measured changes were compared with the values predicted by the interferometric theory of Zernike tests. The results are found to agree within an error of one two-hundredth of a wavelength.

  5. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    Testing for linearity is of particular interest, as parameters of non-linear components vanish under the null. To solve the latter type of testing problem, we use the so-called sup tests, which here require the development of new (uniform) weak convergence results. These results are potentially useful in general for analysis...

  6. Testing and inference in nonlinear cointegrating vector error correction models

    DEFF Research Database (Denmark)

    Kristensen, D.; Rahbek, A.

    2013-01-01

    Under the null of linearity, parameters of nonlinear components vanish, leading to a nonstandard testing problem. We apply so-called sup-tests to resolve this issue, which requires development of new (uniform) functional central limit theory and results for convergence of stochastic integrals. We provide a full...

  7. Wavefront-error evaluation by mathematical analysis of experimental Foucault-test data

    Science.gov (United States)

    Wilson, R. G.

    1975-01-01

    The diffraction theory of the Foucault test provides an integral formula expressing the complex amplitude and irradiance distribution in the Foucault pattern of a test mirror (lens) as a function of wavefront error. Recent literature presents methods of inverting this formula to express wavefront error in terms of irradiance in the Foucault pattern. The present paper describes a study in which the inversion formulation was applied to photometric Foucault-test measurements on a nearly diffraction-limited mirror to determine wavefront errors for direct comparison with ones determined from scatter-plate interferometer measurements. The results affirm the practicability of the Foucault test for quantitative wavefront analysis of very small errors, and they reveal the fallacy of the prevalent belief that the test is limited to qualitative use only. Implications of the results with regard to optical testing and the potential use of the Foucault test for wavefront analysis in orbital space telescopes are discussed.

  8. How allele frequency and study design affect association test statistics with misrepresentation errors.

    Science.gov (United States)

    Escott-Price, Valentina; Ghodsi, Mansoureh; Schmidt, Karl Michael

    2014-04-01

    We evaluate the effect of genotyping errors on the type-I error of a general association test based on genotypes, showing that, in the presence of errors in the case and control samples, the test statistic asymptotically follows a scaled non-central χ² distribution. We give explicit formulae for the scaling factor and non-centrality parameter for the symmetric allele-based genotyping error model and for additive and recessive disease models. They show how genotyping errors can lead to a significantly higher false-positive rate, growing with sample size, compared with the nominal significance levels. The strength of this effect depends very strongly on the population distribution of the genotype, with a pronounced effect in the case of rare alleles, and a great robustness against error in the case of large minor allele frequency. We also show how these results can be used to correct p-values.
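
    The consequence of such a distributional shift is easy to compute numerically. The sketch below is generic (the scale factor c and non-centrality lam are placeholder inputs, not the paper's explicit formulae): it returns the realized type-I error at a nominal level when the statistic is actually c times a non-central chi-square variable.

        from scipy.stats import chi2, ncx2

        def true_type1_error(alpha, c, lam, df=1):
            crit = chi2.ppf(1 - alpha, df)       # nominal critical value
            return ncx2.sf(crit / c, df, lam)    # P(c * X > crit), X ~ ncx2(df, lam)

        # toy numbers: mild scaling and small non-centrality already
        # inflate the nominal 5% level noticeably
        print(true_type1_error(0.05, c=1.05, lam=0.5))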

  9. An Extended Quadratic Frobenius Primality Test with Average- and Worst-Case Error Estimate

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2006-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has a much smaller error probability, namely 256/331776^t for t iterations. We also give bounds for the error in case a prime is sought by incremental search from a random starting point.

  10. An Extended Quadratic Frobenius Primality Test with Average and Worst Case Error Estimates

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2003-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has a much smaller error probability, namely 256/331776^t for t iterations. We also give bounds for the error in case a prime is sought by incremental search from a random starting point.

  11. An Extended Quadratic Frobenius Primality Test with Average Case Error Estimates

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2001-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has a much smaller error probability, namely 256/331776^t for t iterations. We also give bounds for the error in case a prime is sought by incremental search from a random starting point.
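
    For intuition, the bound quoted in these abstracts can be compared directly with the standard 4^-k worst-case bound for k rounds of Miller-Rabin, matching running time by giving Miller-Rabin 2t rounds per t EQFT iterations (a rough equivalence, per the abstract):

        # EQFT bound from the abstract vs. the classical Miller-Rabin bound
        for t in (1, 2, 3):
            eqft = 256 / 331776**t
            mr = 4.0 ** (-2 * t)    # 2t Miller-Rabin rounds, error <= 4^-(2t)
            print(f"t={t}: EQFT {eqft:.3e}  vs  Miller-Rabin {mr:.3e}")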

  12. Report on errors in pretransfusion testing from a tertiary care center: A step toward transfusion safety

    Directory of Open Access Journals (Sweden)

    Meena Sidhu

    2016-01-01

    Full Text Available Introduction: Errors in the process of pretransfusion testing for blood transfusion can occur at any stage, from collection of the sample to administration of the blood component. The present study was conducted to analyze the errors that threaten patients' transfusion safety and the actual harm/serious adverse events that occurred to patients due to these errors. Materials and Methods: The prospective study was conducted in the Department of Transfusion Medicine, Shri Maharaja Gulab Singh Hospital, Government Medical College, Jammu, India from January 2014 to December 2014, a period of 1 year. Errors were defined as any deviation from established policies and standard operating procedures. A near-miss event was defined as an error that did not reach the patient. The location and time of occurrence of the events/errors were also noted. Results: A total of 32,672 requisitions for the transfusion of blood and blood components were received for typing and cross-matching. Out of these, 26,683 products were issued to the various clinical departments. A total of 2,229 errors were detected over the period of 1 year. Near-miss events constituted 53% of the errors, and actual harmful events due to errors occurred in 0.26% of the patients. The most frequent errors in clinical services were sample labeling errors (2.4%), inappropriate requests for blood components (2%), and information on requisition forms not matching that on the sample (1.5% of all requisitions received). In transfusion services, the most common event was accepting a sample in error, with a frequency of 0.5% of all requisitions. ABO-incompatible hemolytic reactions were the most frequent harmful event, with a frequency of 2.2/10,000 transfusions. Conclusion: Sample labeling errors, inappropriate requests, and samples received in error were the most frequent high-risk errors.

  13. Comparing Graphical and Verbal Representations of Measurement Error in Test Score Reports

    Science.gov (United States)

    Zwick, Rebecca; Zapata-Rivera, Diego; Hegarty, Mary

    2014-01-01

    Research has shown that many educators do not understand the terminology or displays used in test score reports and that measurement error is a particularly challenging concept. We investigated graphical and verbal methods of representing measurement error associated with individual student scores. We created four alternative score reports, each…

  15. Reducing patient identification errors related to glucose point-of-care testing

    Directory of Open Access Journals (Sweden)

    Gaurav Alreja

    2011-01-01

    Full Text Available Background: Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Materials and Methods: Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning in wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server, which then transmits data to any medical record matching the financial number of the test result. With the new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds, and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. Results: When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%) in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Conclusion: Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT.
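
    The gatekeeping step described above can be sketched in a few lines. This is an illustrative toy (the registry contents and field names are hypothetical, not the hospital's interface manager): a result is posted only when the scanned ID matches current ADT registration data.

        # hypothetical ADT registration snapshot, keyed by account number
        adt_registry = {"123456789": "ward 5A", "987654321": "ICU"}

        def route_result(patient_id: str, glucose_mg_dl: float) -> str:
            # forward matched results; hold mismatches for operator resolution
            if patient_id in adt_registry:
                return f"posted {glucose_mg_dl} mg/dL to chart {patient_id}"
            return f"held: unknown patient ID {patient_id}"

        print(route_result("123456789", 104.0))   # matched -> posted
        print(route_result("000000000", 98.0))    # mismatch -> held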

  16. Linking Errors between Two Populations and Tests: A Case Study in International Surveys in Education

    Directory of Open Access Journals (Sweden)

    Dirk Hastedt

    2015-06-01

    Full Text Available This simulation study was prompted by the current increased interest in linking national studies to international large-scale assessments (ILSAs) such as IEA's TIMSS, IEA's PIRLS, and OECD's PISA. Linkage in this scenario is achieved by including items from the international assessments in the national assessments, on the premise that the average achievement scores from the latter can be linked to the international metric. In addition to raising issues associated with different testing conditions, administrative procedures, and the like, this approach also poses psychometric challenges. This paper endeavors to shed some light on the effects that can be expected, the linkage errors in particular, for countries using this practice. The ILSA selected for this simulation study was IEA TIMSS 2011, and the three countries used as the national assessment cases were Botswana, Honduras, and Tunisia, all of which participated in TIMSS 2011. The items selected as items common to the simulated national tests and the international test came from the Grade 4 TIMSS 2011 mathematics items that IEA released into the public domain after completion of this assessment. The findings of the current study show that linkage errors reached acceptable levels if 30 or more items were used for the linkage, although the errors were still significantly higher compared to the TIMSS cutoffs. Comparison of the estimated country averages based on the simulated national surveys and the averages based on the international TIMSS assessment revealed only one instance across the three countries of the estimates approaching parity. Also, the percentages of students in these countries who actually reached the defined benchmarks on the TIMSS achievement scale differed significantly between the results based on TIMSS and the results for the simulated national assessments. As a conclusion, we advise against using groups of released items from international assessments in national assessments.

  17. Measurement and analysis of typical motion error traces from a circular test

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The circular test provides a rapid and efficient way of measuring the contouring accuracy of a machine tool. To get the actual point coordinates in the work plane, an improved measurement instrument - a new ball bar test system - is presented in this paper to identify both the radial error and the rotation angle error when the machine is manipulated to move in circular traces. Based on the measured circular error, a combination of Fourier components is chosen to represent the systematic form error that fluctuates in the radial direction. The typical motion errors represented by the corresponding Fourier components can thus be identified. The values for machine compensation can be calculated and adjusted until the desired results are achieved.
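
    The Fourier decomposition step can be illustrated with a short sketch (my own toy trace, not the paper's data): low harmonics of the radial error r(θ) map onto typical machine error motions, e.g. a once-per-revolution component for eccentricity-type errors.

        import numpy as np

        theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
        rng = np.random.default_rng(0)
        # synthetic radial error trace (micrometers): offset + 1/rev + 2/rev + noise
        r = 2.0 + 5.0 * np.cos(theta - 0.3) + 1.5 * np.cos(2 * theta + 0.7) \
            + 0.2 * rng.standard_normal(theta.size)

        c = np.fft.rfft(r) / theta.size
        for k in range(4):
            amp = (2.0 if k else 1.0) * np.abs(c[k])
            print(f"harmonic {k}: amplitude {amp:.2f} um")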

  18. Neutrino masses and cosmological parameters from a Euclid-like survey: Markov Chain Monte Carlo forecasts including theoretical errors

    CERN Document Server

    Audren, Benjamin; Bird, Simeon; Haehnelt, Martin G.; Viel, Matteo

    2013-01-01

    We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Monte Carlo Markov Chains (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors and assuming further conservat...

  19. Development of a simple test device for spindle error measurement using a position sensitive detector

    Science.gov (United States)

    Liu, Chien-Hung; Jywe, Wen-Yuh; Lee, Hau-Wei

    2004-09-01

    A new spindle error measurement system has been developed in this paper. It employs a purpose-built rotational fixture with a built-in laser diode and four batteries to replace the precision reference master ball or cylinder used in the traditional method. Two measuring devices with two position sensitive detectors (one designed for the measurement of the compound X-axis and Y-axis errors, and the other designed with a lens for the measurement of the tilt angular errors) are fixed on the machine table to detect the laser spot position from the laser diode in the rotational fixture. When the spindle rotates, the spindle error changes the direction of the laser beam. The laser beam is then divided into two separate beams by a beam splitter. The two separated beams are projected onto the two measuring devices and are detected by the two position sensitive detectors, respectively. Thus, the compound motion errors and the tilt angular errors of the spindle can be obtained. Theoretical analysis and experimental tests are presented in this paper to separate the compound errors into radial errors and tilt angular errors. This system is proposed as a new instrument and method for spindle metrology.
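
    The separation of radial and tilt errors follows from simple beam geometry. The sketch below is a schematic model under assumed dimensions (focal length, path length and error values are mine, not the paper's): a lens-focused PSD converts tilt alone into spot displacement, while the direct-view PSD sees the radial offset plus the tilt lever arm, so two readings separate the two errors per axis.

        # direct-view PSD:  spot = radial + L * tilt
        # lens-focused PSD: spot = f * tilt  (angle-to-position mapping of a lens)
        f, L = 0.10, 0.25                       # focal length, path length (m)
        tilt_true, radial_true = 2e-4, 5e-6     # synthetic spindle errors (rad, m)

        direct_psd = radial_true + L * tilt_true
        lens_psd = f * tilt_true

        tilt = lens_psd / f                     # recover tilt from lens channel
        radial = direct_psd - L * tilt          # then radial from direct channel
        print(f"tilt = {tilt:.1e} rad, radial = {radial:.1e} m")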

  20. TESTING SERIAL CORRELATION IN SEMIPARAMETRIC VARYING COEFFICIENT PARTIALLY LINEAR ERRORS-IN-VARIABLES MODEL

    Institute of Scientific and Technical Information of China (English)

    Xuemei HU; Feng LIU; Zhizhong WANG

    2009-01-01

    The authors propose a V_(N,P) test statistic for testing finite-order serial correlation in a semiparametric varying coefficient partially linear errors-in-variables model. The test statistic is shown to have an asymptotically normal distribution under the null hypothesis of no serial correlation. Some Monte Carlo experiments are conducted to examine the finite-sample performance of the proposed V_(N,P) test statistic. Simulation results confirm that the proposed test performs satisfactorily in terms of estimated size and power.

  1. Measurement of implicit associations between emotional states and computer errors using the implicit association test

    Directory of Open Access Journals (Sweden)

    Maricutoiu, Laurentiu P.

    2011-12-01

    Full Text Available Previous research identified two main emotional outcomes of computer errors: anxiety and frustration. These emotions have been associated with low levels of performance in using a computer. The present research used innovative methodology to study the relations between computer error messages, user anxiety, and user frustration. We used the Implicit Association Test (IAT) to measure automatic associations between error messages and these two emotional outcomes. A sample of 80 participants completed two questionnaires and two IAT designs. Results indicated that error messages are more strongly associated with anxiety than with frustration. Personal characteristics such as emotional stability and English proficiency were significantly associated with the implicit anxiety measure, but not with the frustration measure. No significant relations were found between two measures of computer experience and the emotional measures. These results indicate that error-related anxiety is associated with personal characteristics.

  2. Notes on power of normality tests of error terms in regression models

    Energy Technology Data Exchange (ETDEWEB)

    Střelec, Luboš [Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemědělská 1, Brno, 61300 (Czech Republic)

    2015-03-10

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results of usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary in order to draw inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.

  3. Efficient Time-Domain Imaging Processing for One-Stationary Bistatic Forward-Looking SAR Including Motion Errors

    Directory of Open Access Journals (Sweden)

    Hongtu Xie

    2016-11-01

    Full Text Available With the rapid development of the one-stationary bistatic forward-looking synthetic aperture radar (OS-BFSAR) technology, the huge amount of remote sensing data presents challenges for real-time imaging processing. In this paper, an efficient time-domain algorithm (ETDA) considering the motion errors for the OS-BFSAR imaging processing is presented. This method can not only precisely handle the large spatial variances, serious range-azimuth coupling and motion errors, but can also greatly improve the imaging efficiency compared with the direct time-domain algorithm (DTDA). Besides, it represents the subimages on polar grids in the ground plane instead of the slant-range plane, and derives the sampling requirements considering motion errors for the polar grids to offer a near-optimum tradeoff between the imaging precision and efficiency. First, the OS-BFSAR imaging geometry is built, and the DTDA for OS-BFSAR imaging is provided. Second, the polar grids of subimages are defined, and the subaperture imaging in the ETDA is derived. The sampling requirements for polar grids are derived from the point of view of the bandwidth. Finally, the implementation and computational load of the proposed ETDA are analyzed. Experimental results based on simulated and measured data validate that the proposed ETDA outperforms the DTDA in terms of the efficiency improvement.

  4. Efficient Time-Domain Imaging Processing for One-Stationary Bistatic Forward-Looking SAR Including Motion Errors.

    Science.gov (United States)

    Xie, Hongtu; Shi, Shaoying; Xiao, Hui; Xie, Chao; Wang, Feng; Fang, Qunle

    2016-11-12

    With the rapid development of the one-stationary bistatic forward-looking synthetic aperture radar (OS-BFSAR) technology, the huge amount of remote sensing data presents challenges for real-time imaging processing. In this paper, an efficient time-domain algorithm (ETDA) considering the motion errors for the OS-BFSAR imaging processing is presented. This method can not only precisely handle the large spatial variances, serious range-azimuth coupling and motion errors, but can also greatly improve the imaging efficiency compared with the direct time-domain algorithm (DTDA). Besides, it represents the subimages on polar grids in the ground plane instead of the slant-range plane, and derives the sampling requirements considering motion errors for the polar grids to offer a near-optimum tradeoff between the imaging precision and efficiency. First, the OS-BFSAR imaging geometry is built, and the DTDA for OS-BFSAR imaging is provided. Second, the polar grids of subimages are defined, and the subaperture imaging in the ETDA is derived. The sampling requirements for polar grids are derived from the point of view of the bandwidth. Finally, the implementation and computational load of the proposed ETDA are analyzed. Experimental results based on simulated and measured data validate that the proposed ETDA outperforms the DTDA in terms of the efficiency improvement.

  5. [Preliminary Study on Error Control of Medical Devices Test Reports Based on the Analytic Hierarchy Process].

    Science.gov (United States)

    Huang, Yanhong; Xu, Honglei; Tu, Rong; Zhang, Xu; Huang, Min

    2016-01-01

    In this paper, the common errors in medical device test reports are classified and analyzed, and the 11 main factors influencing these report errors are summarized. A hierarchy model was developed and verified with sample data using MATLAB. The feasibility of a quantitative comparison of comprehensive weights was analyzed using the analytic hierarchy process. Finally, the paper outlines directions for further research.
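
    The core AHP computation is compact. Below is a minimal sketch with a hypothetical 3-factor comparison matrix (the paper's model has 11 factors; the Saaty-scale judgments, eigenvector weighting and consistency check shown are standard AHP, written here in Python rather than MATLAB):

        import numpy as np

        # pairwise comparison matrix on the Saaty 1-9 scale (illustrative)
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                            # normalized priority weights

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)    # consistency index
        cr = ci / 0.58                          # random index RI = 0.58 for n = 3
        print(f"weights = {np.round(w, 3)}, CR = {cr:.3f}")  # CR < 0.1 acceptable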

  6. Error analysis for duct leakage tests in ASHRAE standard 152P

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, J.W.

    1997-06-01

    This report presents an analysis of random uncertainties in the two methods of testing for duct leakage in Standard 152P of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). The test method is titled Standard Method of Test for Determining Steady-State and Seasonal Efficiency of Residential Thermal Distribution Systems. Equations have been derived for the uncertainties in duct leakage for given levels of uncertainty in the measured quantities used as inputs to the calculations. Tables of allowed errors in each of these independent variables, consistent with fixed criteria of overall allowed error, have been developed.
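
    Although the report's specific equations are not reproduced here, the underlying idea is standard propagation of independent random uncertainties. The sketch below is a generic root-sum-square propagation helper applied to a made-up leakage-style quantity (the function and numbers are illustrative, not Standard 152P's equations):

        import numpy as np

        def propagate(f, x, sigma, h=1e-6):
            # RSS uncertainty of f(x) from independent input uncertainties,
            # using central-difference partial derivatives
            x = np.asarray(x, dtype=float)
            var = 0.0
            for i in range(x.size):
                dx = np.zeros_like(x)
                dx[i] = h
                grad = (f(x + dx) - f(x - dx)) / (2 * h)
                var += (grad * sigma[i]) ** 2
            return np.sqrt(var)

        # toy example: leakage fraction = 1 - flow_out / flow_in
        leak = lambda v: 1.0 - v[1] / v[0]
        print(propagate(leak, x=[1000.0, 930.0], sigma=[20.0, 20.0]))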

  7. A Psychometric Review of Norm-Referenced Tests Used to Assess Phonological Error Patterns

    Science.gov (United States)

    Kirk, Celia; Vigeland, Laura

    2014-01-01

    Purpose: The authors provide a review of the psychometric properties of 6 norm-referenced tests designed to measure children's phonological error patterns. Three aspects of the tests' psychometric adequacy were evaluated: the normative sample, reliability, and validity. Method: The specific criteria used for determining the psychometric…

  8. Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models

    NARCIS (Netherlands)

    Hallin, M.; van den Akker, R.; Werker, B.J.M.

    2012-01-01

    This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the

  10. Empirical Bayes Test for the Parameter of Rayleigh Distribution with Error of Measurement

    Institute of Scientific and Technical Information of China (English)

    HUANG JUAN

    2011-01-01

    For data with measurement error in historical samples, the empirical Bayes test rule for the parameter of the Rayleigh distribution is constructed, and the asymptotically optimal property is obtained. It is shown that the convergence rate of the proposed EB test rule can be arbitrarily close to O(n^(-1/2)) under suitable conditions.

  11. Nine Loci for Ocular Axial Length Identified through Genome-wide Association Studies, Including Shared Loci with Refractive Error

    OpenAIRE

    Cheng, Ching-Yu; Schache, Maria; Ikram, M. Kamran; Young, Terri L.; Guggenheim, Jeremy A.; Vitart, Veronique; MacGregor, Stuart; Verhoeven, Virginie J.M.; Barathi, Veluchamy A.; Liao, Jiemin; Hysi, Pirro G.; Bailey-Wilson, Joan E.; St. Pourcain, Beate; Kemp, John P.; McMahon, George

    2013-01-01

    Refractive errors are common eye disorders of public health importance worldwide. Ocular axial length (AL) is the major determinant of refraction and thus of myopia and hyperopia. We conducted a meta-analysis of genome-wide association studies for AL, combining 12,531 Europeans and 8,216 Asians. We identified eight genome-wide significant loci for AL (RSPO1, C3orf26, LAMA2, GJD2, ZNRF3, CD55, MIP, and ALPPL2) and confirmed one previously reported AL locus (ZC3H11B). Of the nine loci, five (LA...

  12. A family-based likelihood ratio test for general pedigree structures that allows for genotyping error and missing data.

    Science.gov (United States)

    Yang, Yang; Wise, Carol A; Gordon, Derek; Finch, Stephen J

    2008-01-01

    The purpose of this work is the development of a family-based association test that allows for random genotyping errors and missing data and makes use of information on affected and unaffected pedigree members. We derive the conditional likelihood functions of the general nuclear family for the following scenarios: complete parental genotype data and no genotyping errors; only one genotyped parent and no genotyping errors; no parental genotype data and no genotyping errors; and no parental genotype data with genotyping errors. We find maximum likelihood estimates of the marker locus parameters, including the penetrances and population genotype frequencies under the null hypothesis that all penetrance values are equal and under the alternative hypothesis. We then compute the likelihood ratio test. We perform simulations to assess the adequacy of the central chi-square distribution approximation when the null hypothesis is true. We also perform simulations to compare the power of the TDT and this likelihood-based method. Finally, we apply our method to 23 SNPs genotyped in nuclear families from a recently published study of idiopathic scoliosis (IS). Our simulations suggest that this likelihood ratio test statistic follows a central chi-square distribution with 1 degree of freedom under the null hypothesis, even in the presence of missing data and genotyping errors. The power comparison shows that this likelihood ratio test is more powerful than the original TDT for the simulations considered. For the IS data, the marker rs7843033 shows the most significant evidence for our method (p = 0.0003), which is consistent with a previous report, which found rs7843033 to be the 2nd most significant TDTae p value among a set of 23 SNPs.
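
    The final inferential step the abstract describes, a likelihood ratio test referred to a central chi-square with 1 degree of freedom, is simple to compute once the two maximized log-likelihoods are available. A bare-bones sketch (the log-likelihood values are placeholders, not results from the paper's pedigree likelihoods):

        from scipy.stats import chi2

        def lrt_pvalue(loglik_null, loglik_alt, df=1):
            # LRT statistic = 2 * (log L_alt - log L_null), chi-square reference
            stat = 2.0 * (loglik_alt - loglik_null)
            return stat, chi2.sf(stat, df)

        stat, p = lrt_pvalue(-1523.4, -1516.8)   # illustrative numbers
        print(f"LRT statistic = {stat:.2f}, p = {p:.4f}")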

  13. Data Quality in Linear Regression Models: Effect of Errors in Test Data and Errors in Training Data on Predictive Accuracy

    Directory of Open Access Journals (Sweden)

    Barbara D. Klein

    1999-01-01

    Full Text Available Although databases used in many organizations have been found to contain errors, little is known about the effect of these errors on predictions made by linear regression models. The paper uses a real-world example, the prediction of the net asset values of mutual funds, to investigate the effect of data quality on linear regression models. The results of two experiments are reported. The first experiment shows that the error rate and magnitude of error in data used in model prediction negatively affect the predictive accuracy of linear regression models. The second experiment shows that the error rate and the magnitude of error in data used to build the model positively affect the predictive accuracy of linear regression models. All findings are statistically significant. The findings have managerial implications for users and builders of linear regression models.
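
    The logic of the two experiments is easy to reproduce on synthetic data. The toy below (my own setup, not the paper's mutual-fund data) fits an OLS model and compares test-set MSE when noise is injected into the test inputs versus the training inputs:

        import numpy as np

        rng = np.random.default_rng(7)
        x = rng.standard_normal((2000, 3))
        y = x @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.standard_normal(2000)
        x_tr, y_tr, x_te, y_te = x[:1000], y[:1000], x[1000:], y[1000:]

        def fit(X, Y):
            # OLS with intercept via least squares
            return np.linalg.lstsq(np.c_[np.ones(len(X)), X], Y, rcond=None)[0]

        def mse(b, X, Y):
            return np.mean((Y - np.c_[np.ones(len(X)), X] @ b) ** 2)

        noise = rng.standard_normal((1000, 3))
        b = fit(x_tr, y_tr)
        print("clean data:             ", mse(b, x_te, y_te))
        print("errors in test data:    ", mse(b, x_te + 0.5 * noise, y_te))
        print("errors in training data:", mse(fit(x_tr + 0.5 * noise, y_tr), x_te, y_te))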

  14. Detection and correct handling of prescribing errors in Dutch hospital pharmacies using test patients.

    Science.gov (United States)

    Beex-Oosterhuis, Marieke M; de Vogel, Ed M; van der Sijs, Heleen; Dieleman, Hetty G; van den Bemt, Patricia M L A

    2013-12-01

    Hospital pharmacists and pharmacy technicians play a major role in detecting prescribing errors by medication surveillance. At present, the frequency of detected and correctly handled prescribing errors is unclear, as are the factors associated with correct handling. The aim was to examine the frequency of detection of prescribing errors and the frequency of correct handling, as well as factors associated with correct handling of prescribing errors by hospital pharmacists and pharmacy technicians. This study was conducted in 57 Dutch hospital pharmacies as a prospective observational study with test patients, using a case-control design to identify factors associated with correct handling. A questionnaire was used to collect the potential factors. Test patients containing prescribing errors were developed by an expert panel of hospital pharmacists (a total of 40 errors in nine medication records divided among three test patients; each test patient was used in 3 rounds; on average 4.5 prescribing errors per patient per round). Prescribing errors were defined as dosing errors or therapeutic errors (contra-indication, drug-drug interaction, (pseudo)duplicate medication). The errors were selected for relevance and unequivocalness. The panel also defined how the errors should be handled in practice using national guidelines, and this was defined as 'correct handling'. The test patients had to be treated as real patients while conducting medication surveillance. The pharmacists and technicians were asked to report detected errors to the investigator. The percentages of detected and correctly handled prescribing errors were the main outcome measures. Factors associated with correct handling were determined using multivariate logistic regression analysis. Fifty-nine percent of the total number of intentionally added prescribing errors were detected, and 57% were handled correctly by the hospital pharmacists and technicians. The use of a computer system for medication surveillance compared to no

  15. Some Properties of A Lack-of-Fit Test for a Linear Errors in Variables Model

    Institute of Scientific and Technical Information of China (English)

    Li-xing Zhu; Heng-jian Cui; K.W.Ng

    2004-01-01

    The relationship between the linear errors-in-variables model and the corresponding ordinary linear model in statistical inference is studied. It is shown that normality of the distribution of the covariate is a necessary and sufficient condition for the equivalence. Therefore, testing for lack-of-fit in a linear errors-in-variables model can be converted into testing for it in the corresponding ordinary linear model under the normality assumption. A test of score type is constructed and the limiting chi-squared distribution is derived under the null hypothesis. Furthermore, we discuss the power of the test and the choice of the weight function involved in the test statistic.

  16. Error Budget for SR-POEM, a Test of the Weak Equivalence Principle

    Science.gov (United States)

    Phillips, James D.; Patla, Bijunath R.; Reasenberg, Robert D.

    2014-03-01

    SR-POEM is a test of the weak equivalence principle (WEP) using free fall provided by a sounding rocket. The differential motion of two test masses (TMs) will be measured during eight drops of 120 s each to reach the planned accuracy σ(η), an advantage of SR-POEM over other planned missions. The TFG will measure the length of an SR-POEM resonant cavity to 0.1 pm in 1 s. The rapid measurement allows superior thermal control by inexpensive, passive means. It also allows the TMs to be unconstrained, eliminating both systematic error and noise due to constraints or springs. The sounding rocket reduces mission cost and has a near-vertical trajectory, which reduces Coriolis error. We discuss the errors due to distance measurement, Coriolis and related pseudo-accelerations, gravity, electric fields, magnetic fields, gas, and radiation pressure. Supported in part by NASA grant NNX08AO04G.

  17. Nine Loci for Ocular Axial Length Identified through Genome-wide Association Studies, Including Shared Loci with Refractive Error

    Science.gov (United States)

    Cheng, Ching-Yu; Schache, Maria; Ikram, M. Kamran; Young, Terri L.; Guggenheim, Jeremy A.; Vitart, Veronique; MacGregor, Stuart; Verhoeven, Virginie J.M.; Barathi, Veluchamy A.; Liao, Jiemin; Hysi, Pirro G.; Bailey-Wilson, Joan E.; St. Pourcain, Beate; Kemp, John P.; McMahon, George; Timpson, Nicholas J.; Evans, David M.; Montgomery, Grant W.; Mishra, Aniket; Wang, Ya Xing; Wang, Jie Jin; Rochtchina, Elena; Polasek, Ozren; Wright, Alan F.; Amin, Najaf; van Leeuwen, Elisabeth M.; Wilson, James F.; Pennell, Craig E.; van Duijn, Cornelia M.; de Jong, Paulus T.V.M.; Vingerling, Johannes R.; Zhou, Xin; Chen, Peng; Li, Ruoying; Tay, Wan-Ting; Zheng, Yingfeng; Chew, Merwyn; Burdon, Kathryn P.; Craig, Jamie E.; Iyengar, Sudha K.; Igo, Robert P.; Lass, Jonathan H.; Chew, Emily Y.; Haller, Toomas; Mihailov, Evelin; Metspalu, Andres; Wedenoja, Juho; Simpson, Claire L.; Wojciechowski, Robert; Höhn, René; Mirshahi, Alireza; Zeller, Tanja; Pfeiffer, Norbert; Lackner, Karl J.; Bettecken, Thomas; Meitinger, Thomas; Oexle, Konrad; Pirastu, Mario; Portas, Laura; Nag, Abhishek; Williams, Katie M.; Yonova-Doing, Ekaterina; Klein, Ronald; Klein, Barbara E.; Hosseini, S. Mohsen; Paterson, Andrew D.; Makela, Kari-Matti; Lehtimaki, Terho; Kahonen, Mika; Raitakari, Olli; Yoshimura, Nagahisa; Matsuda, Fumihiko; Chen, Li Jia; Pang, Chi Pui; Yip, Shea Ping; Yap, Maurice K.H.; Meguro, Akira; Mizuki, Nobuhisa; Inoko, Hidetoshi; Foster, Paul J.; Zhao, Jing Hua; Vithana, Eranga; Tai, E-Shyong; Fan, Qiao; Xu, Liang; Campbell, Harry; Fleck, Brian; Rudan, Igor; Aung, Tin; Hofman, Albert; Uitterlinden, André G.; Bencic, Goran; Khor, Chiea-Chuen; Forward, Hannah; Pärssinen, Olavi; Mitchell, Paul; Rivadeneira, Fernando; Hewitt, Alex W.; Williams, Cathy; Oostra, Ben A.; Teo, Yik-Ying; Hammond, Christopher J.; Stambolian, Dwight; Mackey, David A.; Klaver, Caroline C.W.; Wong, Tien-Yin; Saw, Seang-Mei; Baird, Paul N.

    2013-01-01

    Refractive errors are common eye disorders of public health importance worldwide. Ocular axial length (AL) is the major determinant of refraction and thus of myopia and hyperopia. We conducted a meta-analysis of genome-wide association studies for AL, combining 12,531 Europeans and 8,216 Asians. We identified eight genome-wide significant loci for AL (RSPO1, C3orf26, LAMA2, GJD2, ZNRF3, CD55, MIP, and ALPPL2) and confirmed one previously reported AL locus (ZC3H11B). Of the nine loci, five (LAMA2, GJD2, CD55, ALPPL2, and ZC3H11B) were associated with refraction in 18 independent cohorts (n = 23,591). Differential gene expression was observed for these loci in minus-lens-induced myopia mouse experiments and human ocular tissues. Two of the AL genes, RSPO1 and ZNRF3, are involved in Wnt signaling, a pathway playing a major role in the regulation of eyeball size. This study provides evidence of shared genes between AL and refraction, but importantly also suggests that these traits may have unique pathways. PMID:24144296

  18. Write error rate of spin-transfer-torque random access memory including micromagnetic effects using rare event enhancement

    CERN Document Server

    Roy, Urmimala; Register, Leonard F; Banerjee, Sanjay K

    2016-01-01

    Spin-transfer-torque random access memory (STT-RAM) is a promising candidate for the next generation of random-access memory due to improved scalability, read-write speeds and endurance. However, the write pulse duration must be long enough to ensure a low write error rate (WER), the probability that a bit will remain unswitched after the write pulse is turned off, in the presence of stochastic thermal effects. WERs on the scale of 10$^{-9}$ or lower are desired. Within a macrospin approximation, WERs can be calculated analytically using the Fokker-Planck method to this point and beyond. However, dynamic micromagnetic effects within the bit can affect switching and lead to faster switching. Such micromagnetic effects can be addressed via numerical solution of the stochastic Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation. However, determining WERs approaching 10$^{-9}$ would require well over 10$^{9}$ such independent simulations, which is infeasible. In this work, we explore calculation of WER using "rare event en...
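    A back-of-envelope calculation of why brute-force LLGS sampling is infeasible at these error rates: for independent Bernoulli switching trials, the relative standard error of the estimated WER is sqrt((1-p)/(Np)), so the required trial count grows like 1/p.

```python
# Trials needed to estimate a rare error rate p with a target relative
# standard error, assuming independent Bernoulli switching trials.
def trials_needed(p, rel_err):
    return (1.0 - p) / (p * rel_err ** 2)

print(f"{trials_needed(1e-9, 0.1):.1e}")  # ~1e11 simulations for 10% precision
```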

  19. Nine loci for ocular axial length identified through genome-wide association studies, including shared loci with refractive error.

    Science.gov (United States)

    Cheng, Ching-Yu; Schache, Maria; Ikram, M Kamran; Young, Terri L; Guggenheim, Jeremy A; Vitart, Veronique; MacGregor, Stuart; Verhoeven, Virginie J M; Barathi, Veluchamy A; Liao, Jiemin; Hysi, Pirro G; Bailey-Wilson, Joan E; St Pourcain, Beate; Kemp, John P; McMahon, George; Timpson, Nicholas J; Evans, David M; Montgomery, Grant W; Mishra, Aniket; Wang, Ya Xing; Wang, Jie Jin; Rochtchina, Elena; Polasek, Ozren; Wright, Alan F; Amin, Najaf; van Leeuwen, Elisabeth M; Wilson, James F; Pennell, Craig E; van Duijn, Cornelia M; de Jong, Paulus T V M; Vingerling, Johannes R; Zhou, Xin; Chen, Peng; Li, Ruoying; Tay, Wan-Ting; Zheng, Yingfeng; Chew, Merwyn; Burdon, Kathryn P; Craig, Jamie E; Iyengar, Sudha K; Igo, Robert P; Lass, Jonathan H; Chew, Emily Y; Haller, Toomas; Mihailov, Evelin; Metspalu, Andres; Wedenoja, Juho; Simpson, Claire L; Wojciechowski, Robert; Höhn, René; Mirshahi, Alireza; Zeller, Tanja; Pfeiffer, Norbert; Lackner, Karl J; Bettecken, Thomas; Meitinger, Thomas; Oexle, Konrad; Pirastu, Mario; Portas, Laura; Nag, Abhishek; Williams, Katie M; Yonova-Doing, Ekaterina; Klein, Ronald; Klein, Barbara E; Hosseini, S Mohsen; Paterson, Andrew D; Makela, Kari-Matti; Lehtimaki, Terho; Kahonen, Mika; Raitakari, Olli; Yoshimura, Nagahisa; Matsuda, Fumihiko; Chen, Li Jia; Pang, Chi Pui; Yip, Shea Ping; Yap, Maurice K H; Meguro, Akira; Mizuki, Nobuhisa; Inoko, Hidetoshi; Foster, Paul J; Zhao, Jing Hua; Vithana, Eranga; Tai, E-Shyong; Fan, Qiao; Xu, Liang; Campbell, Harry; Fleck, Brian; Rudan, Igor; Aung, Tin; Hofman, Albert; Uitterlinden, André G; Bencic, Goran; Khor, Chiea-Chuen; Forward, Hannah; Pärssinen, Olavi; Mitchell, Paul; Rivadeneira, Fernando; Hewitt, Alex W; Williams, Cathy; Oostra, Ben A; Teo, Yik-Ying; Hammond, Christopher J; Stambolian, Dwight; Mackey, David A; Klaver, Caroline C W; Wong, Tien-Yin; Saw, Seang-Mei; Baird, Paul N

    2013-08-08

    Refractive errors are common eye disorders of public health importance worldwide. Ocular axial length (AL) is the major determinant of refraction and thus of myopia and hyperopia. We conducted a meta-analysis of genome-wide association studies for AL, combining 12,531 Europeans and 8,216 Asians. We identified eight genome-wide significant loci for AL (RSPO1, C3orf26, LAMA2, GJD2, ZNRF3, CD55, MIP, and ALPPL2) and confirmed one previously reported AL locus (ZC3H11B). Of the nine loci, five (LAMA2, GJD2, CD55, ALPPL2, and ZC3H11B) were associated with refraction in 18 independent cohorts (n = 23,591). Differential gene expression was observed for these loci in minus-lens-induced myopia mouse experiments and human ocular tissues. Two of the AL genes, RSPO1 and ZNRF3, are involved in Wnt signaling, a pathway playing a major role in the regulation of eyeball size. This study provides evidence of shared genes between AL and refraction, but importantly also suggests that these traits may have unique pathways. Copyright © 2013 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  20. Practical retrace error correction in non-null aspheric testing: A comparison

    Science.gov (United States)

    Shi, Tu; Liu, Dong; Zhou, Yuhao; Yan, Tianliang; Yang, Yongying; Zhang, Lei; Bai, Jian; Shen, Yibing; Miao, Liang; Huang, Wei

    2017-01-01

    In non-null aspheric testing, retrace error is the primary error source, making it hard to recover the desired figure error from the aliased interferograms. Careful retrace error correction is therefore essential, as it bears directly on the testing results. The performance of three commonly employed methods in practice, i.e. the GDI (geometrical deviation based on interferometry) method, the TRW (theoretical reference wavefront) method and the ROR (reverse optimization reconstruction) method, is compared with numerical simulations and experiments. The dynamic range of each method is determined and suitable applications are recommended. It is proposed that with an aspherical reference wavefront, the dynamic range can be further enlarged. Results show that the dynamic range of the GDI method is small, while that of the TRW method can be enlarged with an aspherical reference wavefront, and the ROR method achieves the largest dynamic range with the highest accuracy. It is recommended that the GDI and TRW methods be applied to apertures with small figure error and small asphericity, and the ROR method to commercial and research applications calling for high accuracy and large dynamic range.

  1. Many Tests of Significance: New Methods for Controlling Type I Errors

    Science.gov (United States)

    Keselman, H. J.; Miller, Charles W.; Holland, Burt

    2011-01-01

    There have been many discussions of how Type I errors should be controlled when many hypotheses are tested (e.g., all possible comparisons of means, correlations, proportions, the coefficients in hierarchical models, etc.). By and large, researchers have adopted familywise (FWER) control, though this practice certainly is not universal. Familywise…

  2. Experimental test of error-disturbance uncertainty relations by weak measurement.

    Science.gov (United States)

    Kaneda, Fumihiro; Baek, So-Young; Ozawa, Masanao; Edamatsu, Keiichi

    2014-01-17

    We experimentally test the error-disturbance uncertainty relation (EDR) in generalized, strength-variable measurement of a single photon polarization qubit, making use of weak measurement that keeps the initial signal state practically unchanged. We demonstrate that the Heisenberg EDR is violated, yet the Ozawa and Branciard EDRs are valid throughout the range of our measurement strength.
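    For reference, the two relations being contrasted, in the commonly quoted form, where ε(A) is the measurement error of observable A, η(B) the disturbance to B, and σ the initial-state standard deviation (Branciard's relation is a tightened version of Ozawa's and is not reproduced here):

```latex
\varepsilon(A)\,\eta(B) \;\ge\; \tfrac{1}{2}\bigl|\langle[A,B]\rangle\bigr|
\quad \text{(Heisenberg EDR, violated)}
\qquad
\varepsilon(A)\,\eta(B) + \varepsilon(A)\,\sigma(B) + \sigma(A)\,\eta(B) \;\ge\; \tfrac{1}{2}\bigl|\langle[A,B]\rangle\bigr|
\quad \text{(Ozawa EDR, valid)}
```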

  3. A Percentile Regression Model for the Number of Errors in Group Conversation Tests.

    Science.gov (United States)

    Liski, Erkki P.; Puntanen, Simo

    A statistical model is presented for analyzing the results of group conversation tests in English, developed in a Finnish university study from 1977 to 1981. The model is illustrated with the findings from the study. In this study, estimates of percentile curves for the number of errors are of greater interest than the mean regression line. It was…

  4. Testing Error Management Theory: Exploring the Commitment Skepticism Bias and the Sexual Overperception Bias

    Science.gov (United States)

    Henningsen, David Dryden; Henningsen, Mary Lynn Miller

    2010-01-01

    Research on error management theory indicates that men tend to overestimate women's sexual interest and women underestimate men's interest in committed relationships (Haselton & Buss, 2000). We test the assumptions of the theory in face-to-face, stranger interactions with 111 man-woman dyads. Support for the theory emerges, but potential boundary…

  5. TESTING SPHERICITY IN A GMANOVA-MANOVA MODEL WITH NORMAL ERROR

    Institute of Scientific and Technical Information of China (English)

    Bai Peng; Shi Lei

    2008-01-01

    This article presents a statistic for testing sphericity in a GMANOVA-MANOVA model with normal errors. It is shown that the null distribution of this statistic is beta, and its non-null distribution is given as a series of beta distributions.

  6. ITER test blanket module error field simulation experiments at DIII-D

    NARCIS (Netherlands)

    Schaffer, M. J.; Snipes, J. A.; Gohil, P.; P. de Vries,; Evans, T. E.; Fenstermacher, M.E.; Gao, X.; Garofalo, A. M.; Gates, D. A.; Greenfield, C.M.; Heidbrink, W. W.; Kramer, G. J.; La Haye, R. J.; Liu, S.; Loarte, A.; Nave, M. F. F.; Osborne, T. H.; Oyama, N.; Park, J. K.; Ramasubramanian, N.; Reimerdes, H.; Saibene, G.; Salmi, A.; Shinohara, K.; Spong, D. A.; Solomon, W. M.; Tala, T.; Zhu, Y. B.; Boedo, J. A.; Chuyanov, V.; Doyle, E. J.; Jakubowski, M.; Jhang, H.; Nazikian, R. M.; Pustovitov, V. D.; Schmitz, O.; Srinivasan, R.; Taylor, T. S.; Wade, M. R.; You, K. I.; Zeng, L.

    2011-01-01

    Experiments at DIII-D investigated the effects of magnetic error fields similar to those expected from proposed ITER test blanket modules (TBMs) containing ferromagnetic material. Studied were effects on: plasma rotation and locking, confinement, L-H transition, the H-mode pedestal, edge localized modes (ELMs) and ELM suppression by resonant magnetic perturbations, energetic particle losses, and more.

  7. Lotka's Law and the Kolmogorov-Smirnov Test: An Error in Calculation.

    Science.gov (United States)

    Loughner, William

    1992-01-01

    Corrects an error in the calculation of the Kolmogorov-Smirnov (KS) statistic when it is used to empirically confirm or deny the generalized Lotka's law. Examples from the literature are given of both correct and incorrect uses of the KS test and Lotka equations with cumulative distribution functions (CDFs). (six references) (LRW)
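    A minimal sketch of the corrected comparison, assuming a Lotka-type discrete distribution f(x) ∝ 1/x²: the KS distance is computed against the discrete CDF at its support points rather than against a continuous model (all values are illustrative).

```python
import numpy as np

# Lotka's law: f(x) proportional to 1/x^2 over x = 1..x_max.
x_max = 20
x = np.arange(1, x_max + 1)
pmf = (1.0 / x**2) / np.sum(1.0 / x**2)
cdf = np.cumsum(pmf)                          # discrete CDF at support points

rng = np.random.default_rng(3)
sample = rng.choice(x, size=500, p=pmf)
ecdf = np.array([(sample <= k).mean() for k in x])
D = np.max(np.abs(ecdf - cdf))                # KS distance for the discrete case
print(D)
```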

  9. Testing the protein error theory of ageing: a reply to Baird, Samis, Massie and Zimmerman.

    Science.gov (United States)

    Holliday, R

    1975-01-01

    A major prediction of Orgel's theory is that the misincorporation of amino acids into proteins will increase with age. This has not yet been tested experimentally. Indirect methods have been used to search for the presence of altered proteins in ageing cells or organisms, but these would not necessarily detect a low level of mistakes, nor do they distinguish between errors in synthesis and post-synthetic changes. Nevertheless, some experimental results have been obtained from genetic and biochemical studies with fungi and fibroblasts which confirm certain predictions of the protein error theory.

  10. A Study on Measurement Error during Alternating Current Induced Voltage Tests on Large Transformers

    Institute of Scientific and Technical Information of China (English)

    WANG Xuan; LI Yun-ge; CAO Xiao-long; LIU Ying

    2006-01-01

    The large transformer is pivotal equipment in an electric power supply system; its partial discharge test and the induced voltage withstand test are carried out at a frequency about twice the working frequency. If the magnetizing inductance cannot compensate for the stray capacitance, the test sample turns into a capacitive load and a capacitive rise appears in the testing circuit. For self-restoring insulation, a method has been recommended in IEC60-1 whereby an unapproved measuring system is calibrated against an approved system at a voltage not less than 50% of the rated testing voltage, and the result is then extrapolated linearly. It has been found that this method leads to great error due to the capacitive rise if it is not correctly used during a withstand voltage test under certain testing conditions, especially for tests on high voltage transformers with large capacity. Since the withstand voltage test is the most important means of examining the operational reliability of a transformer, and it can be destructive to the insulation, precise measurement must be guaranteed. In this paper a factor, termed the capacitive rise factor, is introduced to assess the rise. The voltage measurement error during calibration is determined by the parameters of the test sample and the testing facilities, as well as the measuring point. Based on theoretical analysis, a novel method is suggested and demonstrated for estimating the error by using the capacitive rise factor and other known parameters of the testing circuit.
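    One simplified way to picture the capacitive rise, under the assumption of a lossless series inductance L feeding a capacitive test object C: the voltage across the sample exceeds the source voltage by a frequency-dependent factor,

```latex
U_{\text{sample}} \;=\; \frac{U_{\text{source}}}{1-\omega^{2}LC},
```

    so measuring systems tapped at different points in the circuit see systematically different voltages, and a calibration transferred linearly from the 50% level can misstate the voltage actually applied to the insulation if the effective L and C differ between calibration and test conditions.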

  11. Modified likelihood ratio tests in heteroskedastic multivariate regression models with measurement error

    CERN Document Server

    Melo, Tatiane F N; Patriota, Alexandre G

    2012-01-01

    In this paper, we develop a modified version of the likelihood ratio test for multivariate heteroskedastic errors-in-variables regression models. The error terms are allowed to follow a multivariate distribution in the elliptical class of distributions, which has the normal distribution as a special case. We derive the Skovgaard adjusted likelihood ratio statistic, which follows a chi-squared distribution with a high degree of accuracy. We conduct a simulation study and show that the proposed test displays superior finite sample behavior as compared to the standard likelihood ratio test. We illustrate the usefulness of our results in applied settings using a data set from the WHO MONICA Project on cardiovascular disease.

  12. Testing alternative uses of electromagnetic data to reduce the prediction error of groundwater models

    Science.gov (United States)

    Kruse Christensen, Nikolaj; Christensen, Steen; Ferre, Ty Paul A.

    2016-05-01

    In spite of geophysics being used increasingly, it is often unclear how and when the integration of geophysical data and models can best improve the construction and predictive capability of groundwater models. This paper uses a newly developed HYdrogeophysical TEst-Bench (HYTEB), a collection of geological, groundwater and geophysical modeling and inversion software, to demonstrate alternative uses of electromagnetic (EM) data for groundwater modeling in a hydrogeological environment consisting of various types of glacial deposits, with typical hydraulic conductivities and electrical resistivities, covering impermeable bedrock with low resistivity (clay). The synthetic 3-D reference system is designed so that there is a perfect relationship between hydraulic conductivity and electrical resistivity. For this system it is investigated to what extent groundwater model calibration and, often more importantly, model predictions can be improved by including in the calibration process electrical resistivity estimates obtained from TEM data. In all calibration cases, the hydraulic conductivity field is highly parameterized and the estimation is stabilized by (in most cases) geophysics-based regularization. For the studied system and inversion approaches it is found that resistivities estimated by sequential hydrogeophysical inversion (SHI) or joint hydrogeophysical inversion (JHI) should be used with caution as estimators of hydraulic conductivity or as regularization means for subsequent hydrological inversion. The limited groundwater model improvement obtained by using the geophysical data probably arises mainly from the way these data are used here: the alternative inversion approaches propagate geophysical estimation errors into the hydrologic model parameters. It was expected that JHI would compensate for this, but the hydrologic data were apparently insufficient to secure such compensation. With respect to reducing model prediction error, it depends on the type…

  13. Bit error rate testing of fiber optic data links for MMIC-based phased array antennas

    Science.gov (United States)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-06-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial minimum-shift-keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.
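    The underlying measurement is conceptually simple; below is a toy BER computation, assuming the transmitted and received bit streams can be captured as equal-length arrays (the names and injected error rate are illustrative, not from the NASA test setup).

```python
import numpy as np

def bit_error_rate(tx_bits, rx_bits):
    """Fraction of positions where the received bits differ from the sent bits."""
    tx = np.asarray(tx_bits)
    rx = np.asarray(rx_bits)
    return np.count_nonzero(tx != rx) / tx.size

rng = np.random.default_rng(1)
tx = rng.integers(0, 2, size=1_000_000)
rx = tx.copy()
flip = rng.random(tx.size) < 1e-4          # inject a 1e-4 channel error rate
rx[flip] ^= 1
print(bit_error_rate(tx, rx))              # ~1e-4
```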

  14. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains

    Science.gov (United States)

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-01-01

    Background Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially

  15. The critical influence of the intermediate category on interpretation errors in revised EUCAST and CLSI antimicrobial susceptibility testing guidelines.

    Science.gov (United States)

    Hombach, M; Böttger, E C; Roos, M

    2013-02-01

    Erroneous assignment of clinical isolates to the interpretative categories susceptible, intermediate and resistant can deprive a patient of successful antimicrobial therapy. The rate of major errors (ME) and very major errors (vME) depends on: (i) the precision/standard deviation (σ) of the antibiotic susceptibility testing (AST) method, (ii) the diameter distributions, (iii) clinical breakpoints, and (iv) the width of the intermediate zone. The European Committee on AST (EUCAST) has abandoned or decreased the intermediate zone for several drug/species combinations. This study focused on the effects of discontinuing the intermediate category on the rate of interpretation errors. In total, 10,341 non-duplicate clinical isolates were included in the study. The disc diffusion method was used for susceptibility testing. Error probabilities were calculated separately for diameter values flanking the interpretative category borders. Error probabilities were then applied to the actual numbers of clinical isolates investigated, and expected rates of ME and vME were calculated. When EUCAST AST guidelines were applied, significant rates of ME/vME were demonstrated for all drug/species combinations without an intermediate range. Virtually all expected ME/vME were eliminated in CLSI guidelines that retained an intermediate zone. If wild-type and resistant isolates are not clearly separated in susceptibility distributions, retaining an intermediate zone will decrease the number of ME and vME. An intermediate zone of 2-3 mm avoids almost all ME/vME for most species/drug combinations, depending on diameter distributions. Laboratories should know their epidemiological settings to be able to detect problems with individual species/drug/clinical breakpoint combinations and take measures to improve the precision of diameter measurements.
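    To illustrate the error-probability idea, a sketch under simple assumptions: if measured diameters are normally distributed about the true value with standard deviation σ, an isolate whose true diameter lies d mm inside a category border is misclassified with probability Φ(−d/σ). The σ value below is an assumption, not EUCAST or CLSI data; it shows how a 2-3 mm buffer absorbs most measurement scatter.

```python
from scipy.stats import norm

sigma = 2.0          # assumed diameter standard deviation in mm
for d in (0, 1, 2, 3):
    p_err = norm.cdf(-d / sigma)   # probability the measurement crosses the border
    print(f"true diameter {d} mm inside the border: P(error) = {p_err:.3f}")
```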

  16. Analytical Tests for Ray Effect Errors in Discrete Ordinate Methods for Solving the Neutron Transport Equation

    Energy Technology Data Exchange (ETDEWEB)

    Chang, B

    2004-03-22

    This paper contains three analytical solutions of transport problems which can be used to test ray-effect errors in the numerical solutions of the Boltzmann Transport Equation (BTE). We derived the first two solutions and the third was shown to us by M. Prasad. Since this paper is intended to be an internal LLNL report, no attempt was made to find the original derivations of the solutions in the literature in order to cite the authors for their work.

  17. Testing Lack-of-fit for a Polynomial Errors-in-variables Model

    Institute of Scientific and Technical Information of China (English)

    Li-xing Zhu; Wei-xing Song; Heng-jian Gui

    2003-01-01

    When a regression model is applied as an approximation to the underlying model of the data, model checking is important and relevant. In this paper, we investigate the lack-of-fit test for a polynomial errors-in-variables model. As the ordinary residuals are biased when there exist measurement errors in covariables, we correct them and then construct a residual-based test of score type. The constructed test is asymptotically chi-squared under null hypotheses. A simulation study shows that the test can maintain the significance level well. The choice of weight functions involved in the test statistic and the related power study are also investigated. The application to two examples is illustrated. The approach can be readily extended to handle more general models.

  18. Polymorphisms in the CAG repeat--a source of error in Huntington disease DNA testing.

    Science.gov (United States)

    Yu, S; Fimmel, A; Fung, D; Trent, R J

    2000-12-01

    Five of 400 patients (1.3%), referred for Huntington disease DNA testing, demonstrated a single allele when the CAG repeat alone was measured, but two alleles when the CAG + CCG repeats were measured. The PCR assay failed to detect one allele in the CAG-alone assay because of single-base silent polymorphisms in the penultimate or last CAG repeat. The region around and within the CAG repeat sequence in the Huntington disease gene is a hot-spot for DNA polymorphisms, which can occur in up to 1% of subjects tested for Huntington disease. These polymorphisms may interfere with amplification by PCR, and so have the potential to produce a diagnostic error.

  19. ERROR REDUCTION IN DUCT LEAKAGE TESTING THROUGH DATA CROSS-CHECKS

    Energy Technology Data Exchange (ETDEWEB)

    ANDREWS, J.W.

    1998-12-31

    One way to reduce uncertainty in scientific measurement is to devise a protocol in which more quantities are measured than are absolutely required, so that the result is overconstrained. This report develops a method for combining, in this way, data from two different tests for air leakage in residential duct systems. An algorithm, which depends on the uncertainty estimates for the measured quantities, optimizes the use of the excess data. In many cases it can significantly reduce the error bar on at least one of the two measured duct leakage rates (supply or return), and it provides a rational method of reconciling any conflicting results from the two leakage tests.
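    The flavor of such an overconstrained combination can be seen in standard inverse-variance weighting, used here as a simplified stand-in for the report's algorithm (the leakage numbers are made up).

```python
# Combine two estimates of the same quantity with independent uncertainties;
# the combined error bar is smaller than either input error bar.
def combine(x1, u1, x2, u2):
    w1, w2 = 1.0 / u1**2, 1.0 / u2**2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    u = (w1 + w2) ** -0.5
    return x, u

# e.g. two supply-leakage estimates, 120 +/- 30 and 150 +/- 20 cfm
print(combine(120, 30, 150, 20))   # combined value with a smaller error bar
```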

  20. Error compensation in computer generated hologram-based form testing of aspheres.

    Science.gov (United States)

    Stuerwald, Stephan

    2014-12-10

    Computer-generated holograms (CGHs) are used relatively often to test aspheric surfaces in the case of medium and high lot sizes. Until now differently modified measurement setups for optical form testing interferometry have been presented, like subaperture stitching interferometry and scanning interferometry. In contrast, for testing low to medium lot sizes in research and development, a variety of other tactile and nontactile measurement methods have been developed. In the case of CGH-based interferometric form testing, measurement deviations in the region of several tens of nanometers typically occur. Deviations arise especially due to a nonperfect alignment of the asphere relative to the testing wavefront. Therefore, the null test is user- and adjustment-dependent, which results in insufficient repeatability and reproducibility of the form errors. When adjusting a CGH, an operator usually performs a minimization of the spatial frequency of the fringe pattern. An adjustment to the ideal position, however, often cannot be performed with sufficient precision by the operator as the position of minimum spatial fringe density is often not unique, which also depends on the asphere. Thus, the scientific and technical objectives of this paper comprise the development of a simulation-based approach to explain and quantify typical experimental errors due to misalignment of the specimen toward a CGH in an optical form testing measurement system. A further step is the programming of an iterative method to realize a virtual optimized realignment of the system on the basis of Zernike polynomial decomposition, which should allow for the calculation of the measured form for an ideal alignment and thus a careful subtraction of a typical alignment-based form error. To validate the simulation-based findings, a series of systematic experiments is performed with a recently developed hexapod positioning system in order to allow an exact and reproducible positioning of the optical CGH

  1. Expected linking error resulting from item parameter drift among the common Items on Rasch calibrated tests.

    Science.gov (United States)

    Miller, G Edward; Gesn, Paul Randall; Rotou, Jourania

    2005-01-01

    In state assessment programs that employ Rasch-based common item linking procedures, the linking constant is usually estimated with only those common items not identified as exhibiting item difficulty parameter drift. Since state assessments typically contain a fixed number of items, an item classified as exhibiting parameter drift during the linking process remains on the exam as a scorable item even if it is removed from the common item set. Under the assumption that item parameter drift has occurred for one or more of the common items, the expected effect of including or excluding the "affected" item(s) in the estimation of the linking constant is derived in this article. If the item parameter drift is due solely to factors not associated with a change in examinee achievement, no linking error will (be expected to) occur given that the linking constant is estimated only with the items not identified as "affected"; linking error will (be expected to) occur if the linking constant is estimated with all common items. However, if the item parameter drift is due solely to change in examinee achievement, the opposite is true: no linking error will (be expected to) occur if the linking constant is estimated with all common items; linking error will (be expected to) occur if the linking constant is estimated only with the items not identified as "affected".
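    In Rasch common-item linking, the linking constant is typically the mean shift in common-item difficulties between the two calibrations; below is a toy sketch of estimating it with and without screening out drifted items (all values and the drift-flagging rule are illustrative, not the article's derivation).

```python
import numpy as np

b_old = np.array([-1.2, -0.4, 0.3, 0.9, 1.6])   # old-form item difficulties
b_new = np.array([-1.0, -0.2, 0.5, 1.8, 1.8])   # new-form; 4th item looks drifted

shift = b_new - b_old
drifted = np.abs(shift - np.median(shift)) > 0.5  # crude drift flag

link_all = shift.mean()               # constant from all common items
link_screened = shift[~drifted].mean()  # constant excluding flagged items
print(link_all, link_screened)
```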

  2. Testing Constancy of the Error Covariance Matrix in Vector Models against Parametric Alternatives using a Spectral Decomposition

    DEFF Research Database (Denmark)

    Yang, Yukay

    I consider multivariate (vector) time series models in which the error covariance matrix may be time-varying. I derive a test of constancy of the error covariance matrix against the alternative that the covariance matrix changes over time. I design a new family of Lagrange-multiplier tests against...

  3. The Propagation of Errors in Experimental Data Analysis: A Comparison of Pre-and Post-Test Designs

    Science.gov (United States)

    Gorard, Stephen

    2013-01-01

    Experimental designs involving the randomization of cases to treatment and control groups are powerful and under-used in many areas of social science and social policy. This paper reminds readers of the pre-and post-test, and the post-test only, designs, before explaining briefly how measurement errors propagate according to error theory. The…
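    The propagation rule at issue is the standard one for independent errors: the variance of a difference is the sum of the variances, so gain scores in a pre- and post-test design carry more measurement error than either score alone:

```latex
\operatorname{Var}(\text{post}-\text{pre})
  \;=\; \operatorname{Var}(\text{post}) + \operatorname{Var}(\text{pre}),
\qquad
\mathrm{SE}_{\text{diff}} \;=\; \sqrt{\mathrm{SE}_{\text{post}}^{2} + \mathrm{SE}_{\text{pre}}^{2}}.
```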

  4. Development of Items for a Pedagogical Content Knowledge Test Based on Empirical Analysis of Pupils' Errors

    Science.gov (United States)

    Jüttner, Melanie; Neuhaus, Birgit J.

    2012-05-01

    In view of the lack of instruments for measuring biology teachers' pedagogical content knowledge (PCK), this article reports on a study about the development of PCK items for measuring teachers' knowledge of pupils' errors and ways of dealing with them. The study investigated 9th and 10th grade German pupils' (n = 461) drawings in an achievement test about the knee-jerk in biology, which were analysed by inductive qualitative content analysis. The empirical data were used for the development of the items in the PCK test. The validity of the items was examined with think-aloud interviews of German secondary school teachers (n = 5). Once the items were finalized, their reliability was tested with the results of German secondary school biology teachers (n = 65) who took the PCK test. The results indicated that these items are satisfactorily reliable (Cronbach's alpha values ranged from 0.60 to 0.65). We suggest that a larger sample size and American biology teachers be used in further studies. The findings of this study about teachers' professional knowledge from the PCK test could provide new information about the influence of teachers' knowledge on their pupils' understanding of biology and their possible errors in learning biology.
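    For reference, the reliability coefficient quoted above can be computed as follows; a minimal sketch with random placeholder scores rather than the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(4)
scores = rng.integers(0, 2, size=(65, 12))   # 65 teachers, 12 dichotomous items
print(cronbach_alpha(scores))
```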

  5. Balancing Type One and Two Errors in Multiple Testing for Differential Expression of Genes.

    Science.gov (United States)

    Gordon, Alexander; Chen, Linlin; Glazko, Galina; Yakovlev, Andrei

    2009-03-15

    A new procedure is proposed to balance type I and II errors in significance testing for differential expression of individual genes. Suppose that a collection, F(k), of k lists of selected genes is available, each of them approximating by their content the true set of differentially expressed genes. For example, such sets can be generated by a subsampling counterpart of the delete-d-jackknife method controlling the per-comparison error rate for each subsample. A final list of candidate genes, denoted by S(*), is composed in such a way that its contents be closest in some sense to all the sets thus generated. To measure "closeness" of gene lists, we introduce an asymmetric distance between sets with its asymmetry arising from a generally unequal assignment of the relative costs of type I and type II errors committed in the course of gene selection. The optimal set S(*) is defined as a minimizer of the average asymmetric distance from an arbitrary set S to all sets in the collection F(k). The minimization problem can be solved explicitly, leading to a frequency criterion for the inclusion of each gene in the final set. The proposed method is tested by resampling from real microarray gene expression data with artificially introduced shifts in expression levels of pre-defined genes, thereby mimicking their differential expression.
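    A minimal sketch of the asymmetric set distance and its average over a collection of gene lists, with illustrative costs and lists; as the abstract notes, the minimizer of this average reduces to a frequency criterion for including each gene.

```python
# Asymmetric distance between gene sets: unequal costs for false inclusions
# (type I) and false omissions (type II). Costs and lists are illustrative.
def asym_distance(s, t, c_type1=1.0, c_type2=2.0):
    return c_type1 * len(s - t) + c_type2 * len(t - s)

lists = [{"g1", "g2", "g3"}, {"g1", "g2"}, {"g2", "g3", "g4"}]  # the collection F(k)
candidate = {"g2", "g3"}                                        # a candidate set S
avg = sum(asym_distance(candidate, t) for t in lists) / len(lists)
print(avg)
```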

  6. Testing capability indices for one-sided processes with measurement errors

    Directory of Open Access Journals (Sweden)

    Grau D.

    2013-01-01

    Full Text Available In the manufacturing industry, many product characteristics have one-sided tolerances. The process capability indices Cpu(u, v) and Cpl(u, v) can be used to measure process performance. Most research work related to capability indices assumes no gauge measurement errors. This assumption insufficiently reflects real situations even when advanced measuring instruments are used. In this paper we show that using a critical value without taking these errors into account severely underestimates the α-risk, which makes the capability test less accurate. In order to improve the results we suggest the use of an adjusted critical value, and we give a Maple program to obtain it. An example from a polymer granulate factory is presented to illustrate this approach.
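    A sketch of why gauge error biases such a test, using the basic one-sided index (the simplest member of the Cpu(u, v) family mentioned above): measurement noise inflates the observed spread, so the index computed from gauge-contaminated data understates true capability. All values are illustrative.

```python
import numpy as np

def cpu(usl, mu, sigma):
    """Basic upper-specification capability index (USL - mu) / (3 sigma)."""
    return (usl - mu) / (3.0 * sigma)

usl, mu, sigma_process, sigma_gauge = 10.0, 7.0, 0.8, 0.3
sigma_observed = np.hypot(sigma_process, sigma_gauge)  # variances add

print(cpu(usl, mu, sigma_process))    # true capability
print(cpu(usl, mu, sigma_observed))   # what the contaminated data suggest
```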

  7. Mathematical errors made by high performing candidates writing the National Benchmark Tests

    Directory of Open Access Journals (Sweden)

    Carol A. Bohlmann

    2017-04-01

    Full Text Available When the National Benchmark Tests (NBTs) were first considered, it was suggested that the results would assess entry-level students' academic and quantitative literacy and mathematical competence, assess the relationships between higher education entry-level requirements and school-level exit outcomes, provide a service to higher education institutions with regard to selection and placement, and assist with curriculum development, particularly in relation to foundation and augmented courses. We recognise there is a need for better communication of the findings arising from analysis of test data, in order to inform teaching and learning and thus attempt to narrow the gap between basic education outcomes and higher education requirements. Specifically, we focus on identification of mathematical errors made by those who have performed in the upper third of the cohort of test candidates. This information may help practitioners in basic and higher education. The NBTs became operational in 2009. Data have been systematically accumulated and analysed. Here, we provide some background to the data, discuss some of the issues relevant to mathematics, present some of the common errors and problems in conceptual understanding identified from data collected from the Mathematics (MAT) tests in 2012 and 2013, and suggest how this could be used to inform mathematics teaching and learning. While teachers may anticipate some of these issues, it is important to note that the identified problems are exhibited by the top third of those who wrote the Mathematics NBTs. This group will constitute a large proportion of first-year students in mathematically demanding programmes. Our aim here is to raise awareness in higher education and at school level of the extent of the common errors and problems in conceptual understanding of mathematics. We cannot analyse all possible interventions that could be put in place to remediate the identified mathematical problems, but we do…

  8. Common errors and clinical guidelines for manual muscle testing: "the arm test" and other inaccurate procedures

    Science.gov (United States)

    Schmitt, Walter H; Cuthbert, Scott C

    2008-01-01

    Background The manual muscle test (MMT) has been offered as a chiropractic assessment tool that may help diagnose neuromusculoskeletal dysfunction. We contend that due to the number of manipulative practitioners using this test as part of the assessment of patients, clinical guidelines for the MMT are required to heighten the accuracy in the use of this tool. Objective To present essential operational definitions of the MMT for chiropractors and other clinicians that should improve the reliability of the MMT as a diagnostic test. Controversy about the usefulness and reliability of the MMT for chiropractic diagnosis is ongoing, and clinical guidelines about the MMT are needed to resolve confusion regarding the MMT as used in clinical practice as well as the evaluation of experimental evidence concerning its use. Discussion We expect that the resistance to accept the MMT as a reliable and valid diagnostic tool will continue within some portions of the manipulative professions if clinical guidelines for the use of MMT methods are not established and accepted. Unreliable assessments of this method of diagnosis will continue when non-standard MMT research papers are considered representative of the methods used by properly trained clinicians. Conclusion Practitioners who employ the MMT should use these clinical guidelines for improving their use of the MMT in their assessments of muscle dysfunction in patients with musculoskeletal pain. PMID:19099575

  9. Comparative test on several forms of background error covariance in 3DVar

    Science.gov (United States)

    Shao, Aimei

    2013-04-01

    The background error covariance matrix (hereinafter referred to as the B matrix) plays an important role in the three-dimensional variational (3DVar) data assimilation method. However, it is difficult to obtain the B matrix accurately because the true atmospheric state is unknown. Therefore, several methods have been developed to estimate the B matrix (e.g. the NMC method, the innovation analysis method, recursive filters, and ensemble methods such as EnKF). Prior to further development and application of these methods, the behaviour in 3DVar of the B matrices they produce is worth studying and evaluating. For this reason, NCEP reanalysis and forecast data are used to test the effectiveness of several B matrices with the VAF method (Huang, 1999). Here the NCEP analysis is treated as the truth, in which case the forecast error is known. Data from 2006 to 2007 are used as the samples to estimate the B matrix, and data from 2008 are used to verify the assimilation effects. The 48 h and 24 h forecasts valid at the same time are used to estimate the B matrix with the NMC method. The B matrix can be represented by a correlation part (a non-diagonal matrix) and a variance part (a diagonal matrix of variances). In numerous 3DVar systems a Gaussian filter function is used as an approximation to the variation of correlation coefficients with distance. On the basis of this assumption, the following forms of the B matrix are designed and tested with VAF in the comparative experiments: (1) the error variance and the characteristic lengths are fixed, set to their mean values averaged over the analysis domain; (2) as in (1), but the mean characteristic lengths are reduced to 50% of the original for height and 60% for temperature; (3) as in (2), but the error variance, calculated directly from the historical data, is space-dependent; (4) the error variance and characteristic lengths are all calculated directly from the historical data; (5) the B matrix is estimated directly by the…
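    A toy construction of the decomposition described here, with the Gaussian correlation assumption made explicit; the grid, variances and characteristic length are made-up values, not those estimated from the NCEP data.

```python
import numpy as np

n = 50
z = np.linspace(0.0, 10.0, n)        # vertical levels, arbitrary units
L = 1.5                               # characteristic length
sigma = np.full(n, 2.0)               # error standard deviations (variance part)

dist = np.abs(z[:, None] - z[None, :])
C = np.exp(-dist**2 / (2.0 * L**2))   # Gaussian correlation part
B = np.outer(sigma, sigma) * C        # full background error covariance
print(B.shape, np.allclose(B, B.T))   # symmetric, as a covariance must be
```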

  10. An experimental test of the accumulated copying error model of cultural mutation for Acheulean handaxe size.

    Science.gov (United States)

    Kempe, Marius; Lycett, Stephen; Mesoudi, Alex

    2012-01-01

    Archaeologists interested in explaining changes in artifact morphology over long time periods have found it useful to create models in which the only source of change is random and unintentional copying error, or 'cultural mutation'. These models can be used as null hypotheses against which to detect non-random processes such as cultural selection or biased transmission. One proposed cultural mutation model is the accumulated copying error model, where individuals attempt to copy the size of another individual's artifact exactly but make small random errors due to physiological limits on the accuracy of their perception. Here, we first derive the model within an explicit mathematical framework, generating the predictions that multiple independently-evolving artifact chains should diverge over time such that their between-chain variance increases while the mean artifact size remains constant. We then present the first experimental test of this model in which 200 participants, split into 20 transmission chains, were asked to faithfully copy the size of the previous participant's handaxe image on an iPad. The experimental findings supported the model's prediction that between-chain variance should increase over time and did so in a manner quantitatively in line with the model. However, when the initial size of the image that the participants resized was larger than the size of the image they were copying, subjects tended to increase the size of the image, resulting in the mean size increasing rather than staying constant. This suggests that items of material culture formed by reductive vs. additive processes may mutate differently when individuals attempt to replicate faithfully the size of previously-produced artifacts. Finally, we show that a dataset of 2601 Acheulean handaxes shows less variation than predicted given our empirically measured copying error variance, suggesting that other processes counteracted the variation in handaxe size generated by perceptual
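    A minimal simulation in the spirit of the accumulated copying error model, assuming small unbiased proportional copying errors: the chain means stay near the starting size while between-chain variance grows with each generation (parameters are illustrative, not fitted to the handaxe data).

```python
import numpy as np

rng = np.random.default_rng(2)

chains, generations, err_sd = 20, 10, 0.05
size = np.full(chains, 100.0)                 # starting artifact size per chain
for _ in range(generations):
    size *= 1.0 + rng.normal(0.0, err_sd, chains)  # each copy adds a small error

print(size.mean())   # stays near 100: mean roughly constant
print(size.var())    # grows with the number of generations
```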

  11. ITER test blanket module error field simulation experiments at DIII-D

    Science.gov (United States)

    Schaffer, M. J.; Snipes, J. A.; Gohil, P.; de Vries, P.; Evans, T. E.; Fenstermacher, M. E.; Gao, X.; Garofalo, A. M.; Gates, D. A.; Greenfield, C. M.; Heidbrink, W. W.; Kramer, G. J.; La Haye, R. J.; Liu, S.; Loarte, A.; Nave, M. F. F.; Osborne, T. H.; Oyama, N.; Park, J.-K.; Ramasubramanian, N.; Reimerdes, H.; Saibene, G.; Salmi, A.; Shinohara, K.; Spong, D. A.; Solomon, W. M.; Tala, T.; Zhu, Y. B.; Boedo, J. A.; Chuyanov, V.; Doyle, E. J.; Jakubowski, M.; Jhang, H.; Nazikian, R. M.; Pustovitov, V. D.; Schmitz, O.; Srinivasan, R.; Taylor, T. S.; Wade, M. R.; You, K.-I.; Zeng, L.; DIII-D Team

    2011-10-01

    Experiments at DIII-D investigated the effects of magnetic error fields similar to those expected from proposed ITER test blanket modules (TBMs) containing ferromagnetic material. Studied were effects on: plasma rotation and locking, confinement, L-H transition, the H-mode pedestal, edge localized modes (ELMs) and ELM suppression by resonant magnetic perturbations, energetic particle losses, and more. The experiments used a purpose-built three-coil mock-up of two magnetized ITER TBMs in one ITER equatorial port. The largest effect was a reduction in plasma toroidal rotation velocity v across the entire radial profile by as much as Δv/v ~ 60% via non-resonant braking. Changes to global Δn/n, Δβ/β and ΔH98/H98 were ~3 times smaller. These effects are stronger at higher β. Other effects were smaller. The TBM field increased sensitivity to locking by an applied known n = 1 test field in both L- and H-mode plasmas. Locked mode tolerance was completely restored in L-mode by re-adjusting the DIII-D n = 1 error field compensation system. Numerical modelling by IPEC reproduces the rotation braking and locking semi-quantitatively, and identifies plasma amplification of a few n = 1 Fourier harmonics as the main cause of braking. IPEC predicts that TBM braking in H-mode may be reduced by n = 1 control. Although extrapolation from DIII-D to ITER is still an open issue, these experiments suggest that a TBM-like error field will produce only a few potentially troublesome problems, and that they might be made acceptably small.

  12. ITER Test Blanket Module Error Field Simulation Experiments at DIII-D

    Energy Technology Data Exchange (ETDEWEB)

    Schaffer, M. J. [General Atomics, San Diego; Testa, D. [CRPP, Switzerland; Snipes, J. A. [ITER Organization, Cadarache, France; Gohil, P. [General Atomics; De Vries, P. [Culham Centre for Fusion Energy, Culham, UK; Evans, T. E. [General Atomics, San Diego; Fenstermacher, M. E. [Lawrence Livermore National Laboratory (LLNL); Gao, X. [Academia Sinica, Institute of Plasma Physics, Hefei, China; Garofalo, A. [General Atomics, San Diego; Gates, D.A. [Princeton Plasma Physics Laboratory (PPPL); Greenfield, C. M. [General Atomics; Heidbrink, W. [University of California, Irvine; La Haye, R. [General Atomics, San Diego; Liu, S. [ASIPP, Hefei, China; Loarte, A. [ITER Organization, Cadarache, France; Nave, M. F. F. [Association EURATOM/IST, Lisbon, Portugal; Osborne, T.H. [General Atomics, San Diego; Oyama, N. [Japan Atomic Energy Agency (JAEA); Osakabe, M. [National Institute for Fusion Science, Toki, Japan; Park, J. K. [Princeton Plasma Physics Laboratory (PPPL); Ramasubramanian, N. [Institute for Plasma Research, Gandhinagar, India; Reimerdes, H. [Columbia University; Saibene, G. [Fusion for Energy (F4E), Barcelona, Spain; Saimi, A. [Aalto University, Finland; Shinohara, K. [Japan Atomic Energy Agency (JAEA), Naka; Spong, Donald A [ORNL; Solomon, W. M. [Princeton Plasma Physics Laboratory (PPPL); Tala, T. [Association Euratom-Tekes, Finland; Zhu, Y. B. [University of California, Irvine; Zhai, K. [University of Wisconsin, Madison; Boedo, J. [University of California, San Diego; Chuyanov, V. [ITER Organization, Cadarache, France; Doyle, E. J. [University of California, Los Angeles; Jakubowski, M. W. [Max-Planck-Institute for Plasmaphysik, EURATOM-Association, Greifswald, Germany; Jhang, H. [National Fusion Research Institute, Daejon, South Korea; Nazikian, Raffi [Princeton Plasma Physics Laboratory (PPPL); Pustovitov, V. D. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Schmitz, O. [Forschungszentrum Julich, Julich, Germany; Sanchez, Raul [ORNL; Srinivasan, R. [Institute for Plasma Research, Gandhinagar, India; Taylor, T. S. [General Atomics, San Diego; Wade, M. [General Atomics, San Diego; You, K. I. [National Fusion Research Institute, Daejon, South Korea; Zeng, L. [University of California, Los Angeles

    2011-01-01

    Experiments at DIII-D investigated the effects of magnetic error fields similar to those expected from proposed ITER test blanket modules (TBMs) containing ferromagnetic material. Studied were effects on: plasma rotation and locking, confinement, L-H transition, the H-mode pedestal, edge localized modes (ELMs) and ELM suppression by resonant magnetic perturbations, energetic particle losses, and more. The experiments used a purpose-built three-coil mock-up of two magnetized ITER TBMs in one ITER equatorial port. The largest effect was a reduction in plasma toroidal rotation velocity v across the entire radial profile by as much as Δv/v ~ 60% via non-resonant braking. Changes to global Δn/n, Δβ/β and ΔH98/H98 were ~3 times smaller. These effects are stronger at higher β. Other effects were smaller. The TBM field increased sensitivity to locking by an applied known n = 1 test field in both L- and H-mode plasmas. Locked mode tolerance was completely restored in L-mode by re-adjusting the DIII-D n = 1 error field compensation system. Numerical modelling by IPEC reproduces the rotation braking and locking semi-quantitatively, and identifies plasma amplification of a few n = 1 Fourier harmonics as the main cause of braking. IPEC predicts that TBM braking in H-mode may be reduced by n = 1 control. Although extrapolation from DIII-D to ITER is still an open issue, these experiments suggest that a TBM-like error field will produce only a few potentially troublesome problems, and that they might be made acceptably small.

  13. Application of the HWVP measurement error model and feed test algorithms to pilot scale feed testing

    Energy Technology Data Exchange (ETDEWEB)

    Adams, T.L.

    1996-03-01

    The purpose of the feed preparation subsystem in the Hanford Waste Vitrification Plant (HWVP) is to provide control of the properties of the slurry sent to the melter. The slurry properties are adjusted so that two classes of constraints are satisfied. Processability constraints guarantee that the process conditions required by the melter can be obtained. For example, there are processability constraints associated with electrical conductivity and viscosity. Acceptability constraints guarantee that the processed glass can be safely stored in a repository. An example of an acceptability constraint is the durability of the product glass. The primary control focus for satisfying both processability and acceptability constraints is the composition of the slurry. The primary mechanism for adjusting the composition of the slurry is mixing the waste slurry with frit of known composition. Spent frit from canister decontamination is also recycled by adding it to the melter feed. A number of processes in addition to mixing are used to condition the waste slurry prior to melting, including evaporation and the addition of formic acid. These processes also have an effect on the feed composition.

  14. TESTING OF CORRELATION AND HETEROSCEDASTICITY IN NONLINEAR REGRESSION MODELS WITH DBL(p,q,1) RANDOM ERRORS

    Institute of Scientific and Technical Information of China (English)

    Liu Yingan; Wei Bocheng

    2008-01-01

    Chaos theory has taught us that a system which has both nonlinearity and random input will most likely produce irregular data. If random errors are irregular data, then the random error process will give rise to nonlinearity (Kantz and Schreiber (1997)). Tsai (1986) introduced a composite test for autocorrelation and heteroscedasticity in linear models with AR(1) errors. Liu (2003) introduced a composite test for correlation and heteroscedasticity in nonlinear models with DBL(p, 0, 1) errors. The important problems in regression models are therefore the detection of bilinearity, correlation and heteroscedasticity. In this article, the authors discuss the more general case of nonlinear models with DBL(p, q, 1) random errors using the score test. Several statistics for testing bilinearity, correlation, and heteroscedasticity are obtained and expressed in simple matrix formulas. The results for regression models with linear errors are extended to those with bilinear errors. A simulation study is carried out to investigate the powers of the test statistics. All results of this article extend and develop the results of Tsai (1986), Wei, et al (1995), and Liu, et al (2003).

  15. Automatic detection of MLC relative position errors for VMAT using the EPID-based picket fence test

    Science.gov (United States)

    Christophides, Damianos; Davies, Alex; Fleckney, Mark

    2016-12-01

    Multi-leaf collimators (MLCs) ensure the accurate delivery of treatments requiring complex beam fluences, such as intensity modulated radiotherapy and volumetric modulated arc therapy. The purpose of this work is to automate the detection of MLC relative position errors ⩾0.5 mm using electronic portal imaging device-based picket fence tests and to compare the results to the qualitative assessment currently in use. Picket fence tests with and without intentional MLC errors were measured weekly on three Varian linacs. The picket fence images analysed covered a time period ranging between 14 and 20 months depending on the linac. An algorithm was developed that calculated the MLC error for each leaf-pair present in the picket fence images. The baseline error distributions of each linac were characterised for an initial period of 6 months and compared with the intentional MLC errors using statistical metrics. The distributions of the median and the one-sample Kolmogorov-Smirnov test p-value exhibited no overlap between baseline and intentional errors, and were used retrospectively to automatically detect MLC errors in routine clinical practice. Agreement was found between the MLC errors detected by the automatic method and the fault reports during clinical use, as well as interventions for MLC repair and calibration. In conclusion, the method presented provides for full automation of MLC quality assurance, based on individual linac performance characteristics. The use of the automatic method has been shown to provide early warning of MLC errors that resulted in clinical downtime.
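
    The published metrics are the per-image median leaf-pair error and a one-sample Kolmogorov-Smirnov p-value. The sketch below is a hedged approximation of that decision rule; the thresholds and the normal baseline model are invented stand-ins for the linac-specific baselines characterized in the paper.

        import numpy as np
        from scipy import stats

        def flag_mlc_image(leaf_errors_mm, baseline_mm, median_tol=0.25, p_tol=1e-3):
            # Compare a new picket fence image against the linac's baseline error
            # distribution; thresholds here are illustrative, not calibrated.
            median_shift = abs(np.median(leaf_errors_mm) - np.median(baseline_mm))
            mu, sd = baseline_mm.mean(), baseline_mm.std(ddof=1)
            p_value = stats.kstest(leaf_errors_mm, 'norm', args=(mu, sd)).pvalue
            return median_shift > median_tol or p_value < p_tol

        baseline = np.random.default_rng(1).normal(0.0, 0.1, 60)   # 6 months of QA images
        new_image = baseline + 0.5                                 # a 0.5 mm systematic shift
        print(flag_mlc_image(new_image, baseline))                 # True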

  16. Standardising the lactulose mannitol test of gut permeability to minimise error and promote comparability.

    Directory of Open Access Journals (Sweden)

    Ivana R Sequeira

    Full Text Available BACKGROUND: Lactulose mannitol ratio tests are clinically useful for assessing disorders characterised by changes in gut permeability and for assessing mixing in the intestinal lumen. Variations between currently used test protocols preclude meaningful comparisons between studies. We determined the optimal sampling period and related this to intestinal residence. METHODS: Half-hourly lactulose and mannitol urinary excretions were determined over 6 hours in 40 healthy female volunteers after administration of either 600 mg aspirin or placebo, in randomised order at weekly intervals. Gastric and small intestinal transit times were assessed by the SmartPill in 6 subjects from the same population. Half-hourly percentage recoveries of lactulose and mannitol were grouped on the basis of compartment transit time. The rate of increase or decrease of each sugar within each group was explored by simple linear regression to assess the optimal period of sampling. KEY RESULTS: The between subject standard errors for each half-hourly lactulose and mannitol excretion were lowest, the correlation of the quantity of each sugar excreted with time was optimal, and the difference between the two sugars in this temporal relationship was maximal during the period from 2½-4 h after ingestion. Half-hourly lactulose excretions were generally increased after dosage with aspirin, whilst those of mannitol were unchanged, as were the temporal pattern and period of lowest between subject standard error for both sugars. CONCLUSION: The results indicate that between subject variation in the percentage excretion of the two sugars would be minimised and the differences in the temporal patterns of excretion would be maximised if the period of collection of urine used in clinical tests of small intestinal permeability were restricted to 2½-4 h post dosage. This period corresponds to the time when the column of digesta containing the probes is passing from the small to the large

  17. Dynamic error research and application of an angular measuring system belonging to a high precision excursion test turntable

    Institute of Scientific and Technical Information of China (English)

    DENG Hui-yu; WANG Xin-li; MA Pei-sun

    2006-01-01

    The angular measuring system is the most important component of a servo turntable in inertial test apparatus. Its function and precision determine the turntable's function and precision, so it deserves attention in research on inertial test equipment. This paper introduces the principle of an angular measuring system using the amplitude discrimination mode. The dynamic errors are analyzed from the aspects of the inductosyn, the amplitude and function errors of the double-phase voltage, and waveform distortion. Through detailed calculation, theory is provided for practical application; system errors are allocated and the angular measuring system meets the accuracy requirement. As a result, the scheme of the angular measuring system can be used in practice.

  18. A testing-coverage software reliability model considering fault removal efficiency and error generation.

    Science.gov (United States)

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) it is a common phenomenon that the fault detection rate changes during the testing phase; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In the practical software development process, fault removal efficiency cannot always be perfect: the failures detected might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to model the fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data, based on five criteria. The results show that the model gives better fitting and predictive performance.
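
    For readers unfamiliar with NHPP SRGMs, the sketch below fits the classic Goel-Okumoto mean value function m(t) = a(1 - exp(-b t)) to hypothetical cumulative failure counts; the proposed model extends this basic form with testing coverage, fault removal efficiency and error generation terms, and the data here are invented for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        # Goel-Okumoto mean value function, a standard NHPP SRGM used here as a
        # simple stand-in for the paper's model: m(t) = a * (1 - exp(-b t)).
        def mean_value(t, a, b):
            return a * (1.0 - np.exp(-b * t))

        # Hypothetical failure data: cumulative faults observed at weekly intervals.
        t = np.arange(1, 13, dtype=float)
        cum_faults = np.array([5, 9, 14, 17, 21, 23, 26, 27, 29, 30, 31, 31], dtype=float)

        (a_hat, b_hat), _ = curve_fit(mean_value, t, cum_faults, p0=(40.0, 0.1))
        # a_hat estimates the total fault content; the failure intensity is
        # lambda(t) = a_hat * b_hat * exp(-b_hat * t).
        print(a_hat, b_hat)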

  19. The New Tapered Fiber Connector and the Test of Its Error Rate and Coupling Characteristics

    Directory of Open Access Journals (Sweden)

    Qinggui Hu

    2017-01-01

    Full Text Available Since the fiber core is very small, communication fiber connectors require high precision. In this paper, the effect of lateral deviation on the coupling efficiency of fiber connectors is analyzed. Then, considering the fact that optical fiber is generally used in pairs, one for transmitting data and the other for receiving, a novel directional tapered communication optical fiber connector is designed. In the new connector, the structure of the fiber head is tapered according to the signal transmission direction. In order to study the performance of the new connector, several samples were made in the laboratory of the CDSEI corporation and two test experiments were carried out. The experimental results show that, compared with the traditional connector, for the same lateral deviation the coupling efficiency of the tapered connector is higher and the error rate is lower.
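
    A common first-order way to quantify the lateral-deviation effect discussed above is the Gaussian-mode overlap approximation for two identical single-mode fibers; the mode field radius below is a typical single-mode figure, not a value measured from the CDSEI samples.

        import numpy as np

        def coupling_efficiency(offset_um, mode_field_radius_um=4.6):
            # Gaussian-mode overlap for identical single-mode fibers with lateral
            # offset d: eta = exp(-(d / w) ** 2), where w is the mode field radius.
            return np.exp(-(offset_um / mode_field_radius_um) ** 2)

        for d in (0.0, 0.5, 1.0, 2.0):
            loss_db = -10.0 * np.log10(coupling_efficiency(d))
            print(f"offset {d:.1f} um -> coupling loss {loss_db:.2f} dB")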

  1. Testing for Productive Efficiency with Errors-in-Variables: with an application to the Dutch electricity sector

    NARCIS (Netherlands)

    T. Kuosmanen (Timo); G.T. Post (Thierry)

    2001-01-01

    We develop a nonparametric test of productive efficiency that accounts for the possibility of errors-in-variables. The test allows for statistical inference based on the extreme value distribution of the L∞ norm. In contrast to the test proposed by Varian, H (1985): 'Nonparametric Analy

  2. A note on self-normalized Dickey-Fuller test for unit root in autoregressive time series with GARCH errors

    Institute of Scientific and Technical Information of China (English)

    YANG Xiao-rong; ZHANG Li-xin

    2008-01-01

    In this article, the unit root test for AR(p) models with GARCH errors is considered. The Dickey-Fuller test statistics are rewritten in the form of self-normalized sums, and the asymptotic distribution of the test statistics is derived under weak conditions.

  3. Interpretation of Errors Made by Mandarin-Speaking Children on the Preschool Language Scales--5th Edition Screening Test

    Science.gov (United States)

    Ren, Yonggang; Rattanasone, Nan Xu; Wyver, Shirley; Hinton, Amber; Demuth, Katherine

    2016-01-01

    We investigated typical errors made by Mandarin-speaking children when assessed with the Preschool Language Scales-Fifth Edition Screening Test (PLS-5 Screening Test). The intention was to provide preliminary data for the development of a guideline for early childhood educators and psychologists who use the test with Mandarin-speaking children.…

  4. Computational Analysis of Arc-Jet Wedge Tests Including Ablation and Shape Change

    Science.gov (United States)

    Goekcen, Tahir; Chen, Yih-Kanq; Skokova, Kristina A.; Milos, Frank S.

    2010-01-01

    Coupled fluid-material response analyses of arc-jet wedge ablation tests conducted in a NASA Ames arc-jet facility are considered. These tests were conducted using blunt wedge models placed in a free jet downstream of the 6-inch diameter conical nozzle in the Ames 60-MW Interaction Heating Facility. The fluid analysis includes computational Navier-Stokes simulations of the nonequilibrium flowfield in the facility nozzle and test box as well as the flowfield over the models. The material response analysis includes simulation of two-dimensional surface ablation and internal heat conduction, thermal decomposition, and pyrolysis gas flow. For ablating test articles undergoing shape change, the material response and fluid analyses are coupled in order to calculate the time dependent surface heating and pressure distributions that result from shape change. The ablating material used in these arc-jet tests was Phenolic Impregnated Carbon Ablator. Effects of the test article shape change on fluid and material response simulations are demonstrated, and computational predictions of surface recession, shape change, and in-depth temperatures are compared with the experimental measurements.

  5. Predicting pilot error: testing a new methodology and a multi-methods and analysts approach.

    Science.gov (United States)

    Stanton, Neville A; Salmon, Paul; Harris, Don; Marshall, Andrew; Demagalski, Jason; Young, Mark S; Waldmann, Thomas; Dekker, Sidney

    2009-05-01

    The Human Error Template (HET) is a recently developed methodology for predicting design-induced pilot error. This article describes a validation study undertaken to compare the performance of HET against three contemporary Human Error Identification (HEI) approaches when used to predict pilot errors for an approach and landing task, and also to compare single-analyst predictions with an approach designed to enhance error prediction sensitivity: the multiple analysts and methods approach, whereby predictions from multiple analysts using a range of HEI techniques are pooled. The findings indicate that, of the four methodologies used in isolation, analysts using the HET methodology offered the most accurate error predictions, and that the multiple analysts and methods approach was more successful overall in terms of error prediction sensitivity than the three other methods, but not than the HET approach. The results suggest that when predicting design-induced error, it is appropriate to use a toolkit of different HEI approaches and multiple analysts in order to heighten error prediction sensitivity.

  6. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error-based weighting and one objective function

    Science.gov (United States)

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall-runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error-based weighting of observation and prior information data, local sensitivity analysis, and single-objective function nonlinear regression provides quantitative evaluation of the sensitivity of the 35 model parameters to the data, identification of the data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest that the predictive ability of the calibrated model is typical of hydrologic models.
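
    The single objective function referred to above is, in essence, a weighted sum of squared residuals in which each observation is weighted by the inverse of its assumed error variance. A minimal sketch with an invented linear stand-in for a TOPKAPI run; the data, weights and parameter values are illustrative only.

        import numpy as np

        def model(params, x):
            # Stand-in for a TOPKAPI run: a simple linear response.
            return params[0] + params[1] * x

        def weighted_sse(params, x, observations, weights):
            # Single objective: residuals weighted by 1/sigma_i^2, the inverse
            # of each observation's assumed error variance.
            residuals = observations - model(params, x)
            return np.sum(weights * residuals ** 2)

        x = np.arange(5, dtype=float)
        obs = np.array([3.1, 4.9, 7.2, 8.8, 11.3])
        sigma = np.array([0.5, 0.5, 1.0, 2.0, 2.0])   # assumed observation errors
        print(weighted_sse(np.array([3.0, 2.0]), x, obs, 1.0 / sigma ** 2))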

  7. Hematology point of care testing and laboratory errors: an example of multidisciplinary management at a children's hospital in northeast Italy

    Directory of Open Access Journals (Sweden)

    Parco S

    2014-01-01

    Full Text Available Sergio Parco, Patrizia Visconti, Fulvia Vascotto Institute for Maternal and Child Health, Trieste, Italy Abstract: Involvement of health personnel in a medical audit can reduce the number of errors in laboratory medicine. Supervised control of point of care testing (POCT) can help develop a better medical service in the emergency department and decrease test reporting times. The performance of sanitary personnel from different disciplines was studied over an 18-month period in a children's hospital. Clinical errors in the emergency and laboratory departments were monitored by: nursing instruction using specific courses, POCT, and external quality control; improvement of test results and procedural accuracy; and reduction of hemolyzed and nonprotocol-conforming samples sent to the laboratory department. In January 2012, POCT was instituted in three medical units (neonatology, resuscitation, delivery room) at the Children's Hospital in Trieste, northeast Italy, for analysis of hematochemical samples. In January 2012 and June 2013, 1,600 samples sent to the central laboratory were examined for related preanalytical errors. External quality control for POCT was also monitored in the emergency department; three meetings were held with physicians, nurses, and laboratory technicians to highlight problems, i.e., preanalytical errors and analytical methodologies associated with POCT. During the study, external quality control for POCT improved from −3 or −2 standard deviations or more to within one standard deviation for all parameters. Of 800 samples examined in the laboratory in January 2012, we identified 64 preanalytical errors (8.0%); in June 2013, there were 17 preanalytical errors (2.1%), a significant decrease (P<0.05, χ2 test). Multidisciplinary management and clinical audit can be used as tools to detect errors caused by

  8. Using Randomization Tests to Preserve Type I Error With Response-Adaptive and Covariate-Adaptive Randomization.

    Science.gov (United States)

    Simon, Richard; Simon, Noah Robin

    2011-07-01

    We demonstrate that clinical trials using response adaptive randomized treatment assignment rules are subject to substantial bias if there are time trends in unknown prognostic factors and standard methods of analysis are used. We develop a general class of randomization tests based on generating the null distribution of a general test statistic by repeating the adaptive randomized treatment assignment rule holding fixed the sequence of outcome values and covariate vectors actually observed in the trial. We develop broad conditions on the adaptive randomization method and the stochastic mechanism by which outcomes and covariate vectors are sampled that ensure that the type I error is controlled at the level of the randomization test. These conditions ensure that the use of the randomization test protects the type I error against time trends that are independent of the treatment assignments. Under some conditions in which the prognosis of future patients is determined by knowledge of the current randomization weights, the type I error is not strictly protected. We show that response-adaptive randomization can result in substantial reduction in statistical power when the type I error is preserved. Our results also ensure that type I error is controlled at the level of the randomization test for adaptive stratification designs used for balancing covariates.
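
    A toy sketch of the general recipe described above: hold the observed outcome sequence fixed, re-run the adaptive assignment rule many times, and locate the observed statistic in the resulting null distribution. The adaptive rule and its update constants below are invented for illustration and are not the paper's designs.

        import numpy as np

        rng = np.random.default_rng(1)

        def adaptive_assignments(outcomes, rng):
            # Toy response-adaptive rule: the probability of arm 1 drifts toward
            # whichever arm currently has the better running mean outcome.
            n = len(outcomes)
            arms = np.zeros(n, dtype=int)
            p, sums, counts = 0.5, np.zeros(2), np.zeros(2)
            for i in range(n):
                arms[i] = int(rng.random() < p)
                sums[arms[i]] += outcomes[i]
                counts[arms[i]] += 1
                if counts.min() > 0:
                    means = sums / counts
                    p = 0.7 if means[1] > means[0] else 0.3   # illustrative update
            return arms

        def randomization_test(outcomes, observed_arms, n_perm=2000):
            # Null distribution of the mean difference, generated by re-running
            # the adaptive rule with the outcome sequence held fixed.
            def stat(arms):
                return outcomes[arms == 1].mean() - outcomes[arms == 0].mean()
            null = []
            while len(null) < n_perm:
                arms = adaptive_assignments(outcomes, rng)
                if 0 < arms.sum() < len(arms):   # keep draws with both arms present
                    null.append(stat(arms))
            return float(np.mean(np.abs(null) >= abs(stat(observed_arms))))

        outcomes = rng.normal(0.0, 1.0, 60)             # H0: no treatment effect
        observed = adaptive_assignments(outcomes, rng)
        print(randomization_test(outcomes, observed))   # two-sided p-value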

  9. Language, Dialect, and Register: Sociolinguistics and the Estimation of Measurement Error in the Testing of English Language Learners

    Science.gov (United States)

    Solano-Flores, Guillermo

    2006-01-01

    This article examines the intersection of psychometrics and sociolinguistics in the testing of English language learners (ELLs); it discusses language, dialect, and register as sources of measurement error. Research findings show that the dialect of the language in which students are tested (e.g., local or standard English) is as important as…

  10. Testing in a Random Effects Panel Data Model with Spatially Correlated Error Components and Spatially Lagged Dependent Variables

    Directory of Open Access Journals (Sweden)

    Ming He

    2015-11-01

    Full Text Available We propose a random effects panel data model with both spatially correlated error components and spatially lagged dependent variables. We focus on diagnostic testing procedures and derive Lagrange multiplier (LM) test statistics for a variety of hypotheses within this model. We first construct the joint LM test for both the individual random effects and the two spatial effects (spatial error correlation and spatial lag dependence). We then provide LM tests for the individual random effects and for the two spatial effects separately. In addition, in order to guard against local model misspecification, we derive locally adjusted (robust) LM tests based on the Bera and Yoon principle (Bera and Yoon, 1993). We conduct a small Monte Carlo simulation to show the good finite sample performance of these LM test statistics and revisit the cigarette demand example in Baltagi and Levin (1992) to illustrate our testing procedures.

  11. Sampling of Common Items: An Unrecognized Source of Error in Test Equating. CSE Report 636

    Science.gov (United States)

    Michaelides, Michalis P.; Haertel, Edward H.

    2004-01-01

    There is variability in the estimation of an equating transformation because common-item parameters are obtained from responses of samples of examinees. The most commonly used standard error of equating quantifies this source of sampling error, which decreases as the sample size of examinees used to derive the transformation increases. In a…

  12. Learners’ errors in secondary algebra: Insights from tracking a cohort from Grade 9 to Grade 11 on a diagnostic algebra test

    Directory of Open Access Journals (Sweden)

    Craig Pournara

    2016-05-01

    Full Text Available It is well known that learner performance in mathematics in South Africa is poor. However, less is known about what learners actually do and the extent to which this changes as they move through secondary school mathematics. In this study a cohort of 250 learners was tracked from Grade 9 to Grade 11 to investigate changes in their performance on a diagnostic algebra test drawn from the well-known Concepts in Secondary Maths and Science (CSMS) tests. Although the CSMS tests were initially developed for Year 8 and Year 9 learners in the UK, a Rasch analysis of the Grade 11 results showed that the test performed adequately for older learners in South Africa. Error analysis revealed that learners make a wide variety of errors even on simple algebra items. Typical errors include conjoining, difficulties with negatives and brackets, and a tendency to evaluate expressions rather than leaving them in the required open form. There is substantial evidence of curriculum impact in learners' responses, such as the inappropriate application of the addition law of exponents and the distributive law. Although such errors dissipate in the higher grades, this happens later than expected. While many learner responses do not appear sensible at first, interview data reveal that there is frequently an underlying logic related to mathematics that has been previously learned.

  13. More Than Just Accuracy: A Novel Method to Incorporate Multiple Test Attributes in Evaluating Diagnostic Tests Including Point of Care Tests.

    Science.gov (United States)

    Thompson, Matthew; Weigl, Bernhard; Fitzpatrick, Annette; Ide, Nicole

    2016-01-01

    Current frameworks for evaluating diagnostic tests are constrained by a focus on diagnostic accuracy, and assume that all aspects of the testing process and test attributes are discrete and equally important. Determining the balance between the benefits and harms associated with new or existing tests has been overlooked. Yet, this is critically important information for stakeholders involved in developing, testing, and implementing tests. This is particularly important for point of care tests (POCTs) where tradeoffs exist between numerous aspects of the testing process and test attributes. We developed a new model that multiple stakeholders (e.g., clinicians, patients, researchers, test developers, industry, regulators, and health care funders) can use to visualize the multiple attributes of tests, the interactions that occur between these attributes, and their impacts on health outcomes. We use multiple examples to illustrate interactions between test attributes (test availability, test experience, and test results) and outcomes, including several POCTs. The model could be used to prioritize research and development efforts, and inform regulatory submissions for new diagnostics. It could potentially provide a way to incorporate the relative weights that various subgroups or clinical settings might place on different test attributes. Our model provides a novel way that multiple stakeholders can use to visualize test attributes, their interactions, and impacts on individual and population outcomes. We anticipate that this will facilitate more informed decision making around diagnostic tests.

  14. Single Event Test Methodologies and System Error Rate Analysis for Triple Modular Redundant Field Programmable Gate Arrays

    Science.gov (United States)

    Allen, Gregory; Edmonds, Larry D.; Swift, Gary; Carmichael, Carl; Tseng, Chen Wei; Heldt, Kevin; Anderson, Scott Arlo; Coe, Michael

    2010-01-01

    We present a test methodology for estimating system error rates of Field Programmable Gate Arrays (FPGAs) mitigated with Triple Modular Redundancy (TMR). The test methodology is founded in a mathematical model, which is also presented. Accelerator data from a 90 nm Xilinx Military/Aerospace grade FPGA are shown to fit the model. Fault injection (FI) results are discussed and related to the test data. Design implementation and the corresponding impact of multiple bit upsets (MBUs) are also discussed.
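
    For context, the headline quantity in such a model is the probability that at least two of the three TMR domains are corrupted before repair. A minimal sketch of that combinatorial core, with an illustrative per-module upset probability; the paper's full model additionally covers MBUs, scrubbing, and the accelerator and fault injection data.

        import numpy as np

        def tmr_failure_prob(p_module):
            # A TMR system with a perfect voter fails only when at least two of
            # the three redundant domains are upset before repair.
            p = np.asarray(p_module, dtype=float)
            return 3 * p**2 * (1 - p) + p**3

        # Illustrative per-module upset probability per scrub interval; this is
        # not the paper's measured accelerator data.
        print(tmr_failure_prob(1e-4))   # ~3e-8: the classic quadratic suppression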

  15. Clinical features of congenital adrenal insufficiency including growth patterns and significance of ACTH stimulation test.

    Science.gov (United States)

    Koh, Ji Won; Kim, Gu Hwan; Yoo, Han Wook; Yu, Jeesuk

    2013-11-01

    Congenital adrenal insufficiency is caused by specific genetic mutations. Early suspicion and definite diagnosis are crucial because the disease can precipitate a life-threatening hypovolemic shock without prompt treatment. This study was designed to describe the clinical manifestations, including growth patterns, and to assess the usefulness of the ACTH stimulation test. Sixteen patients with confirmed genotyping were subdivided into three groups according to the genetic study results: congenital adrenal hyperplasia due to 21-hydroxylase deficiency (CAH, n=11), congenital lipoid adrenal hyperplasia (n=3) and X-linked adrenal hypoplasia congenita (n=2). Bone age advancement was prominent in patients with CAH, especially after 60 months of chronologic age (n=6, 67%), and these patients were diagnosed at older ages. The ACTH stimulation test played an important role in supporting the diagnosis, and serum 17-hydroxyprogesterone levels were significantly elevated in all of the CAH patients. The test will be important for monitoring growth and puberty during follow-up of patients with congenital adrenal insufficiency.

  16. FY 2016 Status Report: Documentation of All CIRFT Data including Hydride Reorientation Tests (Draft M2)

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jy-An John [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Materials Science and Technology Division; Wang, Hong [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Materials Science and Technology Division; Jiang, Hao [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Materials Science and Technology Division; Yan, Yong [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Materials Science and Technology Division; Bevard, Bruce B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Materials Science and Technology Division; Scaglione, John M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Materials Science and Technology Division

    2016-09-04

    The first portion of this report provides a detailed description of fiscal year (FY) 2015 test result corrections and analysis updates based on FY 2016 updates to the Cyclic Integrated Reversible-Bending Fatigue Tester (CIRFT) program methodology, which is used to evaluate the vibration integrity of spent nuclear fuel (SNF) under normal conditions of transport (NCT). The CIRFT consists of a U-frame test setup and a real-time curvature measurement method. The three-component U-frame setup of the CIRFT has two rigid arms and linkages connecting to a universal testing machine. The curvature of SNF rod bending is obtained through a three-point deflection measurement method. Three linear variable differential transformers (LVDTs) are clamped to the side connecting plates of the U-frame and used to capture deformation of the rod. The second portion of this report provides the latest CIRFT data, including data for the hydride reorientation test. The variations in fatigue life are provided in terms of moment, equivalent stress, curvature, and equivalent strain for the tested SNFs. The equivalent stress plot collapsed the data points from all of the SNF samples into a single zone. A detailed examination revealed that, at the same stress level, fatigue lives display a descending order as follows: H. B. Robinson Nuclear Power Station (HBR), LMK, and mixed uranium-plutonium oxide (MOX). Just looking at the strain, LMK fuel has a slightly longer fatigue life than HBR fuel, but the difference is subtle. The third portion of this report provides finite element analysis (FEA) dynamic deformation simulations of SNF assemblies. In a horizontal layout under NCT, the fuel assembly's skeleton, which is formed by guide tubes and spacer grids, is the primary load-bearing apparatus carrying and transferring vibration loads within an SNF assembly. These vibration loads include interaction forces between the SNF assembly and the canister basket walls. Therefore, the integrity of the guide

  18. Conjunction error rates on a continuous recognition memory test: little evidence for recollection.

    Science.gov (United States)

    Jones, Todd C; Atchley, Paul

    2002-03-01

    Two experiments examined conjunction memory errors on a continuous recognition task where the lag between parent words (e.g., blackmail, jailbird) and later conjunction lures (blackbird) was manipulated. In Experiment 1, contrary to expectations, the conjunction error rate was highest at the shortest lag (1 word) and decreased as the lag increased. In Experiment 2 the conjunction error rate increased significantly from a 0- to a 1-word lag, then decreased slightly from a 1- to a 5-word lag. The results provide mixed support for simple familiarity and dual-process accounts of recognition. Paradoxically, searching for an item in memory does not appear to be a good encoding task.

  19. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    Science.gov (United States)

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  1. Bit-error-rate testing of high-power 30-GHz traveling-wave tubes for ground-terminal applications

    Science.gov (United States)

    Shalkhauser, Kurt A.

    1987-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30-GHz 200-W coupled-cavity traveling-wave tubes (TWTs). The transmission effects of each TWT on a band-limited 220-Mbit/s SMSK signal were investigated. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20-GHz technology development program. This paper describes the approach taken to test the 30-GHz tubes and discusses the test data. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.
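
    At its core, a bit-error-rate measurement compares a known transmitted pattern with the received bits. A minimal sketch, with an invented per-bit flip probability standing in for the TWT link; the pattern length and error rate are illustrative, not the 220 Mbit/s test data.

        import numpy as np

        rng = np.random.default_rng(2)

        def bit_error_rate(tx_bits, rx_bits):
            # Fraction of received bits that differ from the transmitted pattern.
            return np.mean(np.asarray(tx_bits) != np.asarray(rx_bits))

        # Hypothetical run: 1e6 bits of a pseudorandom pattern, with an assumed
        # channel that flips each bit with probability 1e-5.
        tx = rng.integers(0, 2, 1_000_000)
        rx = np.where(rng.random(1_000_000) < 1e-5, 1 - tx, tx)
        print(f"measured BER = {bit_error_rate(tx, rx):.2e}")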

  2. On a Test of Hypothesis to Verify the Operating Risk Due to Accountancy Errors

    Directory of Open Access Journals (Sweden)

    Paola Maddalena Chiodini

    2014-12-01

    Full Text Available According to the Statement on Auditing Standards (SAS) No. 39 (AU 350.01), audit sampling is defined as “the application of an audit procedure to less than 100% of the items within an account balance or class of transactions for the purpose of evaluating some characteristic of the balance or class”. The audit proceeds in different steps: some are not susceptible to sampling procedures, while others may be carried out using sampling techniques. The auditor may also be interested in two types of accounting error: the number of incorrect records in the sample that exceed a given threshold (the natural error rate), which may be indicative of possible fraud, and the mean amount of monetary errors found in incorrect records. The aim of this study is to monitor both types of error jointly through an appropriate system of hypotheses, with particular attention to the Type II error, which indicates the risk of failing to report errors that exceed the upper precision limits.

  3. Characterization of waferstepper and process related front- to backwafer overlay errors in bulk micro machining using electrical overlay test structures

    NARCIS (Netherlands)

    Van Zeijl, H.W.; Bijnen, F.G.C.; Slabbekoorn, J.

    2004-01-01

    To validate the Front-To-Backwafer Alignment (FTBA) calibration and to investigate process-related overlay errors, electrical overlay test structures that require FTBA are used [1]. Anisotropic KOH etching through the wafer is applied to transfer the backwafer pattern to the frontwafer. Consequently,

  4. The Effects of Grammar Testing on the Writing Quality and Reduction of Errors in College Freshmen's Essays

    Science.gov (United States)

    Davis, Wes; Mahoney, Kelley

    2005-01-01

    This experimental, statistical study investigated the effects that the testing of grammar and writing mechanics would have on the overall quality and reduction of errors in college students' essays for freshman composition. In the experimental group of 42 students, the professor assigned several exercises in grammar and mechanics as a review…

  5. Analysis of Round Off Errors with Reversibility Test as a Dynamical Indicator

    CERN Document Server

    Faranda, Davide; Turchetti, Giorgio

    2012-01-01

    We compare the divergence of orbits and the reversibility error for discrete time dynamical systems. These two quantities are used to explore the behavior of the global error induced by round off in the computation of orbits. The similarity of results found for any system we have analysed suggests the use of the reversibility error, whose computation is straightforward since it does not require the knowledge of the exact orbit, as a dynamical indicator. The statistics of fluctuations induced by round off for an ensemble of initial conditions has been compared with the results obtained in the case of random perturbations. Significant differences are observed in the case of regular orbits due to the correlations of round off error, whereas the results obtained for the chaotic case are nearly the same. Both the reversibility error and the orbit divergence computed for the same number of iterations on the whole phase space provide an insight on the local dynamical properties with a detail comparable with other dy...
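
    The reversibility error is straightforward to compute for any invertible map: iterate forward n steps, invert n steps, and measure the distance from the initial point, which in exact arithmetic would be zero. The sketch below uses the Chirikov standard map as a stand-in test system; the abstract does not name the systems analysed, so the map and its parameters are illustrative.

        import numpy as np

        def standard_map(x, p, k):
            # One step of the Chirikov standard map.
            p_new = p + k * np.sin(x)
            x_new = (x + p_new) % (2 * np.pi)
            return x_new, p_new

        def standard_map_inverse(x, p, k):
            x_old = (x - p) % (2 * np.pi)
            p_old = p - k * np.sin(x_old)
            return x_old, p_old

        def reversibility_error(x0, p0, k, n):
            # Forward n steps, then back n steps, in floating point; any distance
            # from the initial condition is due purely to round off.
            x, p = x0, p0
            for _ in range(n):
                x, p = standard_map(x, p, k)
            for _ in range(n):
                x, p = standard_map_inverse(x, p, k)
            dx = (x - x0 + np.pi) % (2 * np.pi) - np.pi   # distance on the torus
            return np.hypot(dx, p - p0)

        print(reversibility_error(1.0, 0.5, k=0.9, n=1000))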

  6. Economics of resynchronization strategies including chemical tests to identify nonpregnant cows.

    Science.gov (United States)

    Giordano, J O; Fricke, P M; Cabrera, V E

    2013-02-01

    Our objectives were to assess (1) the economic value of decreasing the interval between timed artificial insemination (TAI) services when using a pregnancy test that allows earlier identification of nonpregnant cows; and (2) the effect of pregnancy loss and inaccuracy of a chemical test (CT) on the economic value of a pregnancy test for dairy farms. Simulation experiments were performed using a spreadsheet-based decision support tool. In experiment 1, we assessed the effect of changing the interbreeding interval (IBI) for cows receiving TAI on the value of reproductive programs by simulating a 1,000-cow dairy herd using a combination of detection of estrus (30 to 80% of cows detected in estrus) and TAI. The IBI was incremented by 7 d from 28 to 56 d to reflect intervals either observed (35 to 56 d) or potentially observed (28 d) in dairy operations. In experiment 2, we evaluated the effect of accuracy of the CT and additional pregnancy loss due to earlier testing on the value of reproductive programs. The first scenario compared the use of a CT 31 ± 3 d after a previous AI with rectal palpation (RP) 39 ± 3 d after AI. The second scenario used a CT 24 ± 3 d after AI or transrectal ultrasound (TU) 32 d after AI. Parameters evaluated included sensitivity (Se), specificity (Sp), questionable diagnosis (Qd), cost of the CT, and expected pregnancy loss. Sensitivity analysis was performed for all possible combinations of parameter values to determine their relative importance on the value of the CT. In experiment 1, programs with a shorter IBI had greater economic net returns at all levels of detection of estrus, and use of chemical tests available on the market today might be beneficial compared with RP. In experiment 2, the economic value of programs using a CT could be either greater or less than that of RP and TU, depending on the value for each of the parameters related to the CT evaluated. The value of the program using the CT was affected (in order) by (1) Se, (2

  7. Cost-effectiveness analysis of chemical testing for decision-support: How to include animal welfare?

    NARCIS (Netherlands)

    Gabbert, S.G.M.; Ierland, van E.C.

    2010-01-01

    Toxicity testing for regulatory purposes raises the question of test selection for a particular endpoint. Given the public's concern for animal welfare, test selection is a multi-objective decision problem that requires balancing information outcome, animal welfare loss, and monetary testing costs.

  8. The Standard Error of a Proportion for Different Scores and Test Length.

    Directory of Open Access Journals (Sweden)

    David A. Walker

    2005-06-01

    Full Text Available This paper examines Smith's (2003) proposed standard error of a proportion index associated with the idea of reliability as sufficiency of information. A detailed table indexing all of the standard error values affiliated with assessments that range from 5 to 100 items, where students scored as low as 50% correct and 50% incorrect to as high as 95% correct and 5% incorrect, calculated in increments of 1 percentage point, is presented, along with distributional qualities. Examples using this measure for classroom teachers and higher education instructors of assessment are provided.
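
    Assuming the conventional binomial formula SE = sqrt(p(1 - p)/n) (Smith's index may differ in detail), a few rows of the kind of table the paper describes can be generated directly:

        import math

        def se_proportion(p, n):
            # Standard error of a proportion: sqrt(p * (1 - p) / n).
            return math.sqrt(p * (1.0 - p) / n)

        # Illustrative rows spanning the paper's ranges of test length and score.
        for n in (5, 25, 50, 100):
            for p in (0.50, 0.75, 0.95):
                print(f"n={n:3d}  p={p:.2f}  SE={se_proportion(p, n):.4f}")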

  9. [Measures to prevent patient identification errors in blood collection/physiological function testing utilizing a laboratory information system].

    Science.gov (United States)

    Shimazu, Chisato; Hoshino, Satoshi; Furukawa, Taiji

    2013-08-01

    We constructed an integrated personal identification workflow chart using both bar code reading and an all-in-one laboratory information system. The information system handles not only test data but also the information needed for patient guidance in the laboratory department. The reception terminals at the entrance, the displays for patient guidance, and the patient identification tools at the blood-sampling booths are all controlled by the information system. The number of patient identification errors was greatly reduced by the system. However, identification errors have not been abolished in the ultrasound department. After re-evaluating the patient identification process in this department, we recognized that the major reason for the errors was an excessively complicated identification workflow. Ordinarily, an ultrasound test requires patient identification 3 times, because 3 different systems are required during the entire test process, i.e. the ultrasound modality system, the laboratory information system, and a system for producing reports. We are trying to connect the 3 different systems to develop a one-time identification workflow, but it is not a simple task and has not been completed yet. Utilization of the laboratory information system is effective, but is not yet perfect for patient identification. The most fundamental procedure for patient identification is still to ask the person's name, even today. Everyday checks in the ordinary workflow and everyone's participation in safety-management activities are important for the prevention of patient identification errors.

  10. Analysis of the factors affecting blood test specimen errors

    Institute of Scientific and Technical Information of China (English)

    叶云

    2015-01-01

    AIM: To investigate the factors that cause errors in blood test specimens. METHODS: 100 blood test specimens with errors from our hospital were selected as the research subjects; the causes of the errors were analyzed and tallied, and corresponding preventive measures were developed for each factor. RESULTS: The causes of error in the 100 blood test specimens included collection factors, the patients' own factors, specimen handling factors, specimen delivery factors, and specimen testing factors. CONCLUSION: Many factors can introduce errors into blood test specimens; hospitals need to standardize the blood testing workflow and strengthen its control, thereby reducing the error rate in clinical testing.

  11. Thermal Testing for Resin Structure Including Artificial Cavity Made by Three-Dimensional Printer

    OpenAIRE

    倉橋, 貴彦; 丸岡, 宏太郎; 井山, 徹郎; Kurahashi, Takahiko; Maruoka, Kotaro; Iyama, Tetsuro

    2015-01-01

    In this study, thermal testing is carried out using a resin structure with an artificial cavity made by a three-dimensional printer. There are several non-destructive testing methods, such as the ultrasonic testing method, the thermal testing method, and so on. It is known that the size of a defect at the target point can be found by using the ultrasonic testing method. On the other hand, the thermal testing method has the characteristic that the location of the defect, the cavity, the corrosion an...

  12. Equivalency of Spanish language versions of the trail making test part B including or excluding "CH".

    Science.gov (United States)

    Cherner, Mariana; Suarez, Paola; Posada, Carolina; Fortuny, Lidia Artiola I; Marcotte, Thomas; Grant, Igor; Heaton, Robert

    2008-07-01

    Spanish speakers commonly use two versions of the alphabet, one that includes the sound "Ch" between C and D and another that goes directly to D, as in English. Versions of the Trail Making Test Part B (TMT-B) have been created accordingly to accommodate this preference. The pattern and total number of circles to be connected are identical between versions. However, the equivalency of these alternate forms has not been reported. We compared the performance of 35 healthy Spanish speakers who completed the "Ch" form (CH group) to that of 96 individuals who received the standard form (D group), based on whether they mentioned "Ch" in their oral recitation of the alphabet. The groups had comparable demographic characteristics and overall neuropsychological performance. There were no significant differences in TMT-B scores between the CH and D groups, and relationships with demographic variables were comparable. The findings suggest that both versions are equivalent and can be administered to Spanish speakers based on their preference without sacrificing comparability.

  13. Flight Tests of Pilotage Error in Area Navigation with Vertical Guidance: Effects of Navigation Procedural Complexity

    Science.gov (United States)

    1974-08-01

    effectiveness in air transport operations. Savoy, Ill.: University of Illinois at Urbana-Champaign, Institute of Aviation, Aviation Research Laboratory...National Technical Information Service, Springfield, Virginia 22151. Prepared for U.S. DEPARTMENT OF TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION...differences between airline transport pilots and commercial pilots with instrument ratings were found for horizontal steering error only. Differences

  14. Impact of measurement error on testing genetic association with quantitative traits.

    Directory of Open Access Journals (Sweden)

    Jiemin Liao

    Full Text Available Measurement error of a phenotypic trait reduces the power to detect genetic associations. We examined the impact of sample size, allele frequency and effect size in the presence of measurement error for quantitative traits. The statistical power to detect genetic association with phenotype mean and variability was investigated analytically. The non-centrality parameter for a non-central F distribution was derived and verified using computer simulations. We obtained equivalent formulas for the cost of phenotype measurement error. Effects of differences in measurements were examined in a genome-wide association study (GWAS) of two grading scales for cataract and a replication study of genetic variants influencing blood pressure. The mean absolute difference between the analytic power and simulation power for comparison of phenotypic means and variances was less than 0.005, and the absolute difference did not exceed 0.02. To maintain the same power, one standard deviation (SD) of measurement error in a standard normal distributed trait required a one-fold increase in sample size for comparison of means, and a three-fold increase in sample size for comparison of variances. GWAS results revealed almost no overlap in the significant SNPs (p<10^-5) for the two cataract grading scales, while replication results for genetic variants of blood pressure displayed no significant differences between averaged blood pressure measurements and single blood pressure measurements. We have developed a framework for researchers to quantify power in the presence of measurement error, which will be applicable to studies of phenotypes in which the measurement is highly variable.
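
    The sample-size penalty for comparing means follows from the inflation of the observed variance by independent additive measurement error. A minimal sketch reproducing the abstract's headline figure, under that standard additive-error assumption:

        def inflation_factor(sd_error, sd_trait=1.0):
            # With independent additive measurement error, the observed variance
            # grows from sigma^2 to sigma^2 + sigma_e^2; the sample size must grow
            # by the same factor to keep the non-centrality parameter fixed.
            return 1.0 + (sd_error / sd_trait) ** 2

        # One SD of measurement error on a standard normal trait doubles the
        # required n for comparing means (the abstract's 'one-fold increase').
        print(inflation_factor(1.0))   # 2.0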

  15. Inflation of Type I error rates by unequal variances associated with parametric, nonparametric, and rank-transformation tests

    Directory of Open Access Journals (Sweden)

    Donald W. Zimmerman

    2004-01-01

    It is well known that the two-sample Student t test fails to maintain its significance level when the variances of treatment groups are unequal, and, at the same time, sample sizes are unequal. However, introductory textbooks in psychology and education often maintain that the test is robust to variance heterogeneity when sample sizes are equal. The present study discloses that, for a wide variety of non-normal distributions, especially skewed distributions, the Type I error probabilities of both the t test and the Wilcoxon-Mann-Whitney test are substantially inflated by heterogeneous variances, even when sample sizes are equal. The Type I error rate of the t test performed on ranks replacing the scores (rank-transformed data) is inflated in the same way and always corresponds closely to that of the Wilcoxon-Mann-Whitney test. For many probability densities, the distortion of the significance level is far greater after transformation to ranks and, contrary to known asymptotic properties, the magnitude of the inflation is an increasing function of sample size. Although nonparametric tests of location also can be sensitive to differences in the shape of distributions apart from location, the Wilcoxon-Mann-Whitney test and rank-transformation tests apparently are influenced mainly by skewness that is accompanied by specious differences in the means of ranks.
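
    The inflation the abstract reports is easy to reproduce with a minimal Monte Carlo check; the sketch below (distributions, variance ratio and replication count are illustrative, not the study's design) estimates Type I error for both tests under a true null for means with unequal variances and equal sample sizes:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n, reps, alpha = 30, 20000, 0.05  # equal sample sizes
        rej_t = rej_w = 0
        for _ in range(reps):
            # Same zero-mean skewed parent distribution, unequal variances.
            x = rng.exponential(1.0, n) - 1.0
            y = 3.0 * (rng.exponential(1.0, n) - 1.0)
            rej_t += stats.ttest_ind(x, y).pvalue < alpha
            rej_w += stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha
        print("t test Type I error:", rej_t / reps)
        print("Wilcoxon-Mann-Whitney Type I error:", rej_w / reps)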

  16. Errors on the Trail Making Test Are Associated with Right Hemispheric Frontal Lobe Damage in Stroke Patients

    Directory of Open Access Journals (Sweden)

    Bruno Kopp

    2015-01-01

    Measures of performance on the Trail Making Test (TMT) are among the most popular neuropsychological assessment techniques. Completion time on TMT-A is considered to provide a measure of processing speed, whereas completion time on TMT-B is considered to constitute a behavioral measure of the ability to shift between cognitive sets (cognitive flexibility), commonly attributed to the frontal lobes. However, empirical evidence linking performance on the TMT-B to localized frontal lesions is mostly lacking. Here, we examined the association of frontal lesions following stroke with TMT-B performance measures (i.e., completion time and completion accuracy) using voxel-based lesion-behavior mapping, with a focus on right hemispheric frontal lobe lesions. Our results suggest that the number of errors, but not completion time on the TMT-B, is associated with right hemispheric frontal lesions. This finding contradicts common clinical practice (the use of completion time on the TMT-B to measure cognitive flexibility) and underscores the need for additional research on the association between cognitive flexibility and the frontal lobes. Further work in a larger sample, including left frontal lobe damage and with more power to detect effects of right posterior brain injury, is necessary to determine whether our observation is specific to right frontal lesions.

  17. Analysis of the Largest Normalized Residual Test Robustness for Measurements Gross Errors Processing in the WLS State Estimator

    Directory of Open Access Journals (Sweden)

    Breno Carvalho

    2013-10-01

    This paper's purpose is to implement a computational program to estimate the states (complex nodal voltages) of a power system and to show that the largest normalized residual (LNR) test fails in many cases. The chosen solution method was Weighted Least Squares (WLS). Once the states are estimated, a gross error analysis is performed to detect and identify measurements that may contain gross errors (GEs), which can interfere with the estimated states and lead the process to an erroneous state estimate. If a measurement is identified as containing an error, it is discarded from the measurement set and the whole process is repeated until all measurements are within an acceptable error threshold. To validate the implemented software, several computer simulations were performed on the IEEE 6-bus and 14-bus systems, where satisfactory results were obtained. Another purpose is to show that even a widespread method such as the LNR test is subject to serious conceptual flaws, probably due to a lack of mathematical rigor in the methodology. The paper highlights the need for continuous improvement of the employed techniques and for a critical view, on the part of researchers, of these types of failures.
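
    For readers unfamiliar with the LNR test, a compact numerical sketch of the underlying computation on a linearized WLS estimator follows; the two-state, four-measurement model and all numbers are invented for illustration:

        import numpy as np

        # Linearized measurement model z = H x + e, with weights W = R^{-1}.
        H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
        R = np.diag([1e-4, 1e-4, 4e-4, 4e-4])        # measurement variances
        x_true = np.array([1.02, 0.98])
        z = H @ x_true
        z[2] += 0.12                                  # inject a gross error

        W = np.linalg.inv(R)
        G = H.T @ W @ H                               # gain matrix
        x_hat = np.linalg.solve(G, H.T @ W @ z)
        r = z - H @ x_hat                             # residuals
        S = np.eye(len(z)) - H @ np.linalg.solve(G, H.T @ W)  # sensitivity
        Omega = S @ R                                 # residual covariance
        r_N = np.abs(r) / np.sqrt(np.diag(Omega))     # normalized residuals
        # The largest r_N above a threshold (commonly 3) flags the suspect
        # measurement; in practice it is removed and estimation repeated.
        print(r_N, "-> flagged measurement:", int(np.argmax(r_N)))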

  18. Effect of measurement error on tests of density dependence of catchability for walleyes in northern Wisconsin angling and spearing fisheries

    Science.gov (United States)

    Hansen, M.J.; Beard, T.D.; Hewett, S.W.

    2005-01-01

    We sought to determine how much measurement errors affected tests of density dependence of spearing and angling catchability for walleye Sander vitreus by quantifying relationships between spearing and angling catch rates (catch/h) and walleye population density (number/acre) in northern Wisconsin lakes. The mean measurement error of spearing catch rates was 43.5 times greater than the mean measurement error of adult walleye population densities, whereas the mean measurement error of angling catch rates was only 5.6 times greater than the mean measurement error of adult walleye population densities. The bias-corrected estimate of the relationship between spearing catch rate and adult walleye population density was similar to the ordinary-least-squares regression estimate but differed significantly from the geometric mean (GM) functional regression estimate. In contrast, the bias-corrected estimate of the relationship between angling catch rate and total walleye population density was intermediate between ordinary-least-squares and GM functional regression estimates. Catch rates of walleyes in both spearing and angling fisheries were not linearly related to walleye population density, which indicated that catch rates in both fisheries were hyperstable in relation to walleye population density. For both fisheries, GM functional regression overestimated the degree of hyperdepletion in catch rates and ordinary-least-squares regression overestimated the degree of hyperstability in catch rates. However, ordinary-least-squares regression induced significantly less bias in tests of density dependence than GM functional regression, so it may be suitable for testing the degree of density dependence in fisheries for which fish population density is estimated with mark-recapture methods similar to those used in our study. © Copyright by the American Fisheries Society 2005.
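
    The competing estimators compared above can be illustrated on synthetic data; in the sketch below the log-log slope, error magnitudes and reliability ratio are all hypothetical stand-ins for the study's field estimates:

        import numpy as np

        rng = np.random.default_rng(7)
        n = 200
        density_true = rng.lognormal(1.0, 0.5, n)   # hypothetical densities
        catch_true = 0.4 * density_true**0.6        # hyperstable relationship
        # Measurement error in both variables (log scale), smaller in x:
        x = np.log(density_true) + rng.normal(0, 0.10, n)
        y = np.log(catch_true) + rng.normal(0, 0.45, n)

        b_ols = np.cov(x, y, bias=True)[0, 1] / np.var(x)  # biased toward 0
        b_gm = np.sign(b_ols) * np.std(y) / np.std(x)      # GM functional slope
        reliability = np.var(np.log(density_true)) / np.var(x)
        b_corrected = b_ols / reliability                  # attenuation fix
        # True slope is 0.6: OLS errs toward hyperstability (too shallow),
        # GM toward hyperdepletion (too steep), as the abstract describes.
        print(b_ols, b_gm, b_corrected)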

  19. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    Science.gov (United States)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.
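
    BER measurements of this kind reduce to counting errored bits; a minimal sketch of the point estimate with a normal-approximation confidence interval (the counts are hypothetical):

        import math

        def ber_with_ci(bit_errors, bits, z=1.96):
            """Point estimate and approximate 95% CI for a measured BER."""
            p = bit_errors / bits
            half = z * math.sqrt(p * (1 - p) / bits)
            return p, (max(p - half, 0.0), p + half)

        # e.g. 42 errored bits observed in 10^9 transmitted bits
        print(ber_with_ci(42, 10**9))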

  20. Measurement Error Variance of Test-Day Observations from Automatic Milking Systems

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik S

    2012-01-01

    Automated milking systems (AMS) are becoming more popular on dairy farms. In this paper we present an approach for estimating residual error covariance matrices for AMS and conventional milking system (CMS) observations. The variances for other random effects are kept as defined … in the evaluation model. AMS residual variances were found to be 16 to 37 percent smaller for milk and protein yield and 42 to 47 percent larger for fat yield compared to CMS …

  1. The Langley thermal protection system test facility: A description including design operating boundaries

    Science.gov (United States)

    Klich, G. F.

    1976-01-01

    A description of the Langley thermal protection system test facility is presented. This facility was designed to provide realistic environments and times for testing thermal protection systems proposed for use on high speed vehicles such as the space shuttle. Products from the combustion of methane-air-oxygen mixtures, having a maximum total enthalpy of 10.3 MJ/kg, are used as a test medium. Test panels with maximum dimensions of 61 cm x 91.4 cm are mounted in the side wall of the test region. Static pressures in the test region can range from .005 to .1 atm and calculated equilibrium temperatures of test panels range from 700 K to 1700 K. Test times can be as long as 1800 sec. Some experimental data obtained while using combustion products of methane-air mixtures are compared with theory, and calibration of the facility is being continued to verify calculated values of parameters which are within the design operating boundaries.

  2. Recommendation to include methyldibromo glutaronitrile in the European standard patch test series.

    Science.gov (United States)

    Bruze, Magnus; Goossens, An; Gruvberger, Birgitta

    2005-01-01

    The preservative methyldibromo glutaronitrile (MDBGN) is used non-occupationally and occupationally. High contact allergy rates have been reported when it is tested in consecutive dermatitis patients, as well as in clinical cases with allergic contact dermatitis. Until now there has been no agreement on which patch test preparation to use to trace contact allergy to MDBGN. From the year 2005 on, MDBGN at 0.5% w/w in petrolatum is recommended for the European standard patch test series. The choice of 0.5% is based on consideration of rates of contact allergy, doubtful and irritant reactions, as well as on information on clinical relevance represented by results of a repeated open application test, and on patch test concentrations used to diagnose allergic contact dermatitis from MDBGN in individual cases.

  3. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    2011-01-01

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J Title: Prioritising interventions against medication errors – the importance of a definition Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark Methods: Medication … errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication … errors are therefore needed. Development of definition: A definition of medication errors including an index of error types for each stage in the medication process was developed from existing terminology and through a modified Delphi process in 2008. The Delphi panel consisted of 25 interdisciplinary …

  4. Post-test thermal-hydraulic analysis of two intermediate LOCA tests at the ROSA facility including uncertainty evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Freixa, J., E-mail: jordi@freixa.net [Paul Scherrer Institut (PSI) 5232 Villigen PSI (Switzerland); Kim, T.-W. [Paul Scherrer Institut (PSI) 5232 Villigen PSI (Switzerland); Manera, A. [University of Michigan, Ann Arbor, MI 48109 (United States)

    2013-11-15

    The OECD/NEA ROSA-2 project aims at addressing thermal-hydraulic safety issues relevant for light water reactors by building up an experimental database at the ROSA Large Scale Test Facility (LSTF). The ROSA facility simulates a PWR Westinghouse design with a four-loop configuration and a nominal power of 3423 MWth. Two intermediate break loss-of-coolant-accident (LOCA) experiments (Tests 1 and 2) have been carried out during 2010. The two tests were analyzed by using the US-NRC TRACE best estimate code, employing the same nodalization previously used for the simulation of small-break LOCA experiments of the ROSA-1 programme. A post-test calculation was performed for each test along with uncertainty analysis providing uncertainty bands for each relevant time trend. Uncertainties in the code modelling capabilities as well as in the initial and boundary conditions were taken into account, following the guidelines and lessons learnt through participation in the OECD/NEA BEMUSE programme. Two different versions of the TRACE code were used in the analysis, providing a qualitatively good prediction of the tests. However, the uncertainty analysis revealed differences between the performances of some models in the two versions. The most relevant parameters of the two experimental tests were falling within the computed uncertainty bands.
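
    Uncertainty bands of the BEMUSE type are commonly derived from order statistics; as background (the exact procedure used in these analyses is not stated in the record, so this is an assumption), the standard Wilks sample-size computation looks like this:

        def wilks_runs(beta=0.95, gamma=0.95):
            """Smallest number of code runs n such that the sample maximum
            is a one-sided beta/gamma tolerance bound, i.e. 1 - beta**n >= gamma."""
            n = 1
            while 1 - beta**n < gamma:
                n += 1
            return n

        print(wilks_runs())  # -> 59 runs for a one-sided 95/95 bound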

  5. Post-test thermal-hydraulic analysis of two intermediate LOCA tests at the ROSA facility including uncertainty evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Freixa, J.; Kim, T-W.; Manera, A. [Paul Scherrer Inst., Villigen (Switzerland)

    2011-07-01

    The OECD/NEA ROSA-2 project aims at addressing thermal-hydraulic safety issues relevant for light water reactors by building up an experimental database at the ROSA Large Scale Test Facility (LSTF). The ROSA facility simulates a PWR Westinghouse design with a four-loop configuration and a nominal power of 3423 MWth. Two intermediate break loss-of-coolant-accident (LOCA) experiments (Test 1 and 2) have been carried out during 2010. The two tests were analyzed by using the US-NRC TRACE best estimate code, employing the same nodalization previously used for the simulation of small-break LOCA experiments of the ROSA-1 program. A post-test calculation was performed for each test along with uncertainty analysis providing uncertainty bands for each relevant time trend. Uncertainties in the code modeling capabilities as well as in the initial and boundary conditions were taken into account, following the guidelines and lessons learnt through participation in the OECD/NEA BEMUSE program. Two different versions of the TRACE code were used in the analysis, providing a qualitatively good prediction of the tests. However, both versions showed deficiencies that need to be addressed. The most relevant parameters of the two experimental tests were falling within the computed uncertainty bands. (author)

  6. Seasonal performance of air conditioners - an analysis of the DOE test procedures: the thermostat and measurement errors. Report No. 2

    Energy Technology Data Exchange (ETDEWEB)

    Lamb, G.D.; Tree, D.R.

    1981-01-01

    Two aspects of the DOE test procedures are analyzed. First, the role of the thermostat in controlling the cycling of conditioning equipment is investigated. The test procedures call for a cycling scheme of 6 minutes on, 24 minutes off for Test D. To justify this cycling scheme as being representative of cycling in the field, it is assumed that the thermostat is the major factor in controlling the cycle rate. This assumption is examined by studying a closed-loop feedback model consisting of a thermostat, a heating/cooling plant and a conditioned space. Important parameters of this model are individually studied to determine their influence on the system. It is found that the switch differential and the anticipator gain are the major parameters in controlling the cycle rate. This confirms the thermostat's dominant role in the cycling of a system. The second aspect of the test procedures concerns transient errors or differences in the measurement of cyclic capacity. In particular, errors due to thermocouple response, thermocouple grid placement, dampers and nonuniform velocity and temperature distributions are considered. Problems in these four areas are mathematically modeled and the basic assumptions are stated. Results from these models help to clarify the problem areas and give an indication of the magnitude of the errors involved. It is found that major disagreement in measured capacity can arise in these four areas and can be mainly attributed to test set-up differences even though such differences are allowable in the test procedures. An understanding of such differences will aid in minimizing many problems in the measurement of cyclic capacity.
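
    The closed-loop thermostat model described above can be sketched as a simple on/off simulation; every parameter value below (switch differential, anticipator gain, thermal rates) is invented, and the plant is a crude first-order cooling model rather than the report's:

        dt, hours = 1.0, 3.0                   # time step [s], duration [h]
        T, setpoint, diff = 24.0, 25.0, 0.5    # room temp, setpoint, differential [C]
        anticip_gain = 0.3                     # extra "heat" seen by the sensor when on
        on, cycles = False, 0
        for _ in range(int(hours * 3600 / dt)):
            sensed = T + (anticip_gain if on else 0.0)
            if on and sensed < setpoint - diff / 2:      # cooled enough: switch off
                on, cycles = False, cycles + 1
            elif not on and sensed > setpoint + diff / 2:  # too warm: switch on
                on = True
            # First-order conditioned space: ambient heat gain vs. cooling plant.
            T += (0.0008 * (30.0 - T) - (0.008 if on else 0.0)) * dt
        # Larger differential or smaller anticipator gain -> fewer cycles/hour.
        print("cycles per hour:", cycles / hours)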

  7. Navigation strategies as revealed by error patterns on the Magic Carpet test in children with cerebral palsy

    Directory of Open Access Journals (Sweden)

    Vittorio Belmonti

    2015-07-01

    Introduction: Short-term memory develops differently in navigation vs. manual space. The Magic Carpet (MC) is a novel navigation test derived from the Walking Corsi Test and the manual Corsi Block-tapping Task (CBT). The MC requires mental rotations and executive function. In Cerebral Palsy (CP), CBT and MC scores relate differently to clinical and lesional factors. The hypotheses of this study are: that frontal lesions specifically affect navigation in CP; and that brain lesions affect MC cognitive strategies. Material and methods: Twenty-two children with spastic CP, aged 5 to 14 years, 14 with a unilateral and 8 with a bilateral form, underwent the CBT and the MC. Errors were classified into 7 patterns by a recently described algorithm. Brain lesions were quantified according to a novel semi-quantitative MRI scale. Control data were partially drawn from a previous study on 91 typically developing children. Results: Children with CP performed worse than controls on both tests. Right hemispheric impairment correlated with spatial memory. MC span was reduced less than CBT span and was more selectively related to right middle white-matter and frontal lesions. Error patterns were distributed differently in CP than in typical development and depended on right brain impairment: children with more extensive right lesions made more positional than sequential errors. Discussion: In CP, navigation is affected only by extensive lesions involving the right frontal lobe. In addition, these are associated with abnormal cognitive strategies. Whereas in typical development positional errors, preserving serial order, increase with age and performance, in CP they are associated with poorer performance and more extensive right-brain lesions. The explanation may lie in lesion side: the right brain is crucial for mental rotations, necessary for spatial updating. Left-lateralized spatial memory strategies, relying on serial order, are not efficient if not accompanied by right-brain spatial…

  8. Should sperm DNA fragmentation testing be included in the male infertility work-up?

    Science.gov (United States)

    Lewis, Sheena E M

    2015-08-01

    A response to the editorial 'Are we ready to incorporate sperm DNA fragmentation testing into our male infertility work-up? A plea for more robust studies' by Erma Drobnis and Martin Johnson. Copyright © 2015 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  9. Impact of Accumulated Error on Item Response Theory Pre-Equating with Mixed Format Tests

    Science.gov (United States)

    Keller, Lisa A.; Keller, Robert; Cook, Robert J.; Colvin, Kimberly F.

    2016-01-01

    The equating of tests is an essential process in high-stakes, large-scale testing conducted over multiple forms or administrations. By adjusting for differences in difficulty and placing scores from different administrations of a test on a common scale, equating allows scores from these different forms and administrations to be directly compared…

  10. [Long QT syndrome: a brief review of the electrocardiographical diagnosis including Viskin's test].

    Science.gov (United States)

    Márquez, Manlio F

    2012-01-01

    The QT interval measures both repolarization and depolarization. Learning to measure the QT interval and knowing how to correct it (QTc) for heart rate (HR) are essential for the diagnosis of long QT syndrome (LQTS). The QTc interval changes in duration and even morphology depending on the time of day and on a day-to-day basis. A diminished adaptive response of the QTc interval to changes in HR is known as QT hysteresis. Viskin has introduced a very simple clinical test to confirm the diagnosis of LQTS based on the "hypoadaptation" of the QT when standing. This phenomenon gives the appearance of a "stretching of the QT" on the surface ECG. Likewise, he has coined the term "QT stunning" to refer to the phenomenon whereby the QTc interval does not return to baseline despite recovery of the baseline HR after standing. This article shows some examples of Viskin's test.
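
    The record does not say which rate-correction formula is meant, but the most common choice is Bazett's; a one-function sketch:

        def qtc_bazett(qt_ms, hr_bpm):
            """Bazett's correction: QTc = QT / sqrt(RR), with RR in seconds."""
            rr_s = 60.0 / hr_bpm
            return qt_ms / rr_s ** 0.5

        # e.g. QT = 400 ms at HR = 75 bpm -> RR = 0.8 s -> QTc ~ 447 ms
        print(round(qtc_bazett(400, 75)))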

  11. Screened test-charge - electron interaction including many-body effects in two and three dimensions

    Science.gov (United States)

    Gold, A.; Ghazali, A.

    1997-05-01

    Bound states of a negatively charged test particle and an electron are studied by incorporating many-body effects (exchange and correlation) in the screening function of an interacting electron gas via the local-field correction. Using a variational method and a matrix-diagonalization method we determine the energies and the wave functions of the ground state and the excited states as functions of the electron density for three-dimensional and two-dimensional systems. For high electron density no bound states are found. Below a critical density the number and the energy of the bound states increase with decreasing electron density. We also present results for bound-state energies of a positively charged test particle with an electron, and compare them with results obtained within the random-phase approximation where the local-field correction is ignored.

  12. Solar Energy Education. Industrial arts: teacher's guide. Field test edition. [Includes glossary

    Energy Technology Data Exchange (ETDEWEB)

    1981-05-01

    An instructional aid is presented which integrates the subject of solar energy into the classroom study of industrial arts. This guide for teachers was produced in addition to the student activities book for industrial arts by the USDOE Solar Energy Education. A glossary of solar energy terms is included. (BCS)

  13. Pilot's Guide to an Airline Career, Including Sample Pre-Employment Tests.

    Science.gov (United States)

    Traylor, W.L.

    Occupational information for persons considering a career as an airline pilot includes a detailed description of the pilot's duties and material concerning preparation for occupational entry and determining the relative merits of available jobs. The book consists of four parts: Part I, The Job, provides an overview of a pilot's duties in his daily…

  14. "Enzyme Test Bench," a high-throughput enzyme characterization technique including the long-term stability.

    Science.gov (United States)

    Rachinskiy, Kirill; Schultze, Hergen; Boy, Matthias; Bornscheuer, Uwe; Büchs, Jochen

    2009-06-01

    A new high-throughput technique for enzyme characterization with specific attention to long-term stability, called the "Enzyme Test Bench," is presented. The concept of the Enzyme Test Bench consists of short-term enzyme tests in 96-well microtiter plates under partly extreme conditions to predict the enzyme's long-term stability under moderate conditions. The technique is based on the mathematical modeling of temperature-dependent enzyme activation and deactivation. By adapting the temperature profiles in sequential experiments through optimal non-linear experimental design, the long-term deactivation effects can be purposefully accelerated and detected within hours. During the experiment the enzyme activity is measured online to estimate the model parameters from the obtained data. Thus, the enzyme activity and long-term stability can be calculated as functions of temperature. The engineered instrumentation provides simultaneous automated assaying by fluorescence measurements, mixing, and homogeneous temperature control in the range of 10-85 +/- 0.5 degrees C. A universal fluorescence assay for online acquisition of ester hydrolysis reactions by pH shift is developed and established. The developed instrumentation and assay are applied to characterize two esterases. The results of the characterization, carried out in microtiter plates in short-term experiments lasting hours, are in good agreement with the results of week-long experiments at different temperatures in 1 L stirred tank reactors. Thus, the new technique allows for both enzyme screening with regard to long-term stability and the choice of the optimal process temperature with regard to such process parameters as turnover number, space-time yield or optimal process duration. The comparison of the temperature-dependent behavior of the two characterized enzymes clearly demonstrates that the frequently applied estimation of long-term stability at moderate temperatures by simple activity measurements…
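
    The modeling idea, first-order thermal deactivation with Arrhenius temperature dependence, can be sketched in a few lines; the rate parameters below are hypothetical, not the paper's fitted values:

        import numpy as np

        R = 8.314                                   # gas constant, J/(mol K)

        def k_deact(T_kelvin, A=2e27, Ea=2.0e5):    # hypothetical Arrhenius values
            """First-order deactivation rate constant [1/s]."""
            return A * np.exp(-Ea / (R * T_kelvin))

        def residual_activity(t_hours, T_celsius):
            """Fraction of initial activity left after t at temperature T."""
            k = k_deact(T_celsius + 273.15)
            return np.exp(-k * t_hours * 3600.0)

        # Hours of accelerated testing at 60 C predict days at 30 C:
        print(residual_activity(2.0, 60.0))     # ~50% lost within 2 h at 60 C
        print(residual_activity(7 * 24.0, 30.0))  # only a few % lost in a week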

  15. Soft errors in modern electronic systems

    CERN Document Server

    Nicolaidis, Michael

    2010-01-01

    This book provides a comprehensive presentation of the most advanced research results and technological developments enabling understanding, qualifying and mitigating soft-error effects in advanced electronics, including the fundamental physical mechanisms of radiation-induced soft errors, the various steps that lead to a system failure, the modelling and simulation of soft errors at various levels (including physical, electrical, netlist, event-driven, RTL, and system-level modelling and simulation), hardware fault injection, accelerated radiation testing and natural environment testing…

  16. The Social Explanatory Styles Questionnaire: assessing moderators of basic social-cognitive phenomena including spontaneous trait inference, the fundamental attribution error, and moral blame.

    Science.gov (United States)

    Gill, Michael J; Andreychik, Michael R

    2014-01-01

    Why is he poor? Why is she failing academically? Why is he so generous? Why is she so conscientious? Answers to such everyday questions--social explanations--have powerful effects on relationships at the interpersonal and societal levels. How do people select an explanation in particular cases? We suggest that, often, explanations are selected based on the individual's pre-existing general theories of social causality. More specifically, we suggest that over time individuals develop general beliefs regarding the causes of social events. We refer to these beliefs as social explanatory styles. Our goal in the present article is to offer and validate a measure of individual differences in social explanatory styles. Accordingly, we offer the Social Explanatory Styles Questionnaire (SESQ), which measures three independent dimensions of social explanatory style: Dispositionism, historicism, and controllability. Studies 1-3 examine basic psychometric properties of the SESQ and provide positive evidence regarding internal consistency, factor structure, and both convergent and divergent validity. Studies 4-6 examine predictive validity for each subscale: Does each explanatory dimension moderate an important phenomenon of social cognition? Results suggest that they do. In Study 4, we show that SESQ dispositionism moderates the tendency to make spontaneous trait inferences. In Study 5, we show that SESQ historicism moderates the tendency to commit the Fundamental Attribution Error. Finally, in Study 6 we show that SESQ controllability predicts polarization of moral blame judgments: Heightened blaming toward controllable stigmas (assimilation), and attenuated blaming toward uncontrollable stigmas (contrast). Decades of research suggest that explanatory style regarding the self is a powerful predictor of self-functioning. We think it is likely that social explanatory styles--perhaps comprising interactive combinations of the basic dimensions tapped by the SESQ--will be

  17. The Social Explanatory Styles Questionnaire: Assessing Moderators of Basic Social-Cognitive Phenomena Including Spontaneous Trait Inference, the Fundamental Attribution Error, and Moral Blame

    Science.gov (United States)

    Gill, Michael J.; Andreychik, Michael R.

    2014-01-01

    Why is he poor? Why is she failing academically? Why is he so generous? Why is she so conscientious? Answers to such everyday questions—social explanations—have powerful effects on relationships at the interpersonal and societal levels. How do people select an explanation in particular cases? We suggest that, often, explanations are selected based on the individual's pre-existing general theories of social causality. More specifically, we suggest that over time individuals develop general beliefs regarding the causes of social events. We refer to these beliefs as social explanatory styles. Our goal in the present article is to offer and validate a measure of individual differences in social explanatory styles. Accordingly, we offer the Social Explanatory Styles Questionnaire (SESQ), which measures three independent dimensions of social explanatory style: Dispositionism, historicism, and controllability. Studies 1–3 examine basic psychometric properties of the SESQ and provide positive evidence regarding internal consistency, factor structure, and both convergent and divergent validity. Studies 4–6 examine predictive validity for each subscale: Does each explanatory dimension moderate an important phenomenon of social cognition? Results suggest that they do. In Study 4, we show that SESQ dispositionism moderates the tendency to make spontaneous trait inferences. In Study 5, we show that SESQ historicism moderates the tendency to commit the Fundamental Attribution Error. Finally, in Study 6 we show that SESQ controllability predicts polarization of moral blame judgments: Heightened blaming toward controllable stigmas (assimilation), and attenuated blaming toward uncontrollable stigmas (contrast). Decades of research suggest that explanatory style regarding the self is a powerful predictor of self-functioning. We think it is likely that social explanatory styles—perhaps comprising interactive combinations of the basic dimensions tapped by the SESQ—will be

  18. Correcting for Test Score Measurement Error in ANCOVA Models for Estimating Treatment Effects

    Science.gov (United States)

    Lockwood, J. R.; McCaffrey, Daniel F.

    2014-01-01

    A common strategy for estimating treatment effects in observational studies using individual student-level data is analysis of covariance (ANCOVA) or hierarchical variants of it, in which outcomes (often standardized test scores) are regressed on pretreatment test scores, other student characteristics, and treatment group indicators. Measurement…

  19. Effectiveness of Barcoding for Reducing Patient Specimen and Laboratory Testing Identification Errors: A Laboratory Medicine Best Practices Systematic Review and Meta-Analysis

    Science.gov (United States)

    Snyder, Susan R.; Favoretto, Alessandra M.; Derzon, James H.; Christenson, Robert; Kahn, Stephen; Shaw, Colleen; Baetz, Rich Ann; Mass, Diana; Fantz, Corrine; Raab, Stephen; Tanasijevic, Milenko; Liebow, Edward B.

    2015-01-01

    Objectives This is the first systematic review of the effectiveness of barcoding practices for reducing patient specimen and laboratory testing identification errors. Design and Methods The CDC-funded Laboratory Medicine Best Practices Initiative systematic review methods for quality improvement practices were used. Results A total of 17 observational studies reporting on barcoding systems are included in the body of evidence; 10 for patient specimens and 7 for point-of-care testing. All 17 studies favored barcoding, with meta-analysis mean odds ratios for barcoding systems of 4.39 (95% CI: 3.05 – 6.32) and for point-of-care testing of 5.93 (95% CI: 5.28 – 6.67). Conclusions Barcoding is effective for reducing patient specimen and laboratory testing identification errors in diverse hospital settings and is recommended as an evidence-based “best practice.” The overall strength of evidence rating is high and the effect size rating is substantial. Unpublished studies made an important contribution comprising almost half of the body of evidence. PMID:22750145

  20. Results of a Saxitoxin Proficiency Test Including Characterization of Reference Material and Stability Studies

    Directory of Open Access Journals (Sweden)

    Kirsi Harju

    2015-11-01

    A saxitoxin (STX) proficiency test (PT) was organized as part of the Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk (EQuATox) project. The aim of this PT was to provide an evaluation of existing methods and the European laboratories' capabilities for the analysis of STX and some of its analogues in real samples. Homogenized mussel material and algal cell materials containing paralytic shellfish poisoning (PSP) toxins were produced as reference sample matrices. The reference material was characterized using various analytical methods. Acidified algal extract samples at two concentration levels were prepared from a bulk culture of the PSP-toxin-producing dinoflagellate Alexandrium ostenfeldii. The homogeneity and stability of the prepared PT samples were studied and found to be fit for purpose. Thereafter, eight STX PT samples were sent to ten participating laboratories from eight countries. The PT offered the participating laboratories the possibility to assess their performance regarding the qualitative and quantitative detection of PSP toxins. Various techniques such as official Association of Official Analytical Chemists (AOAC) methods, immunoassays, and liquid chromatography-mass spectrometry were used for sample analyses.

  1. Flex Power Grid Lab, an electronic equipment test laboratory for emerging MV applications, including grid inverters

    Energy Technology Data Exchange (ETDEWEB)

    Jong, Erik C.W. de [Flex Power Grid Lab, Arnheim (Netherlands); Vaessen, Peter T.M. [KEMA Nederland B.V., Arnheim (Netherlands)

    2008-07-01

    The success of a sustainable energy supply in a free energy market depends on proper management of the energy flows. For control and management, power electronics are indispensable. The knowledge about electromagnetic power technology and the development of components are about to undergo explosive growth. Due to the emergence of decentralised energy sources and the liberalisation of the energy market, the control and management of electrical flows is gaining in importance. Hierarchically controlled one-way traffic continues to decline in favour of (autonomous) networks supplied by large and small generating stations that provide varying electric currents in all directions within a power grid. The complexity of constantly balancing supply and demand is therefore increasing while the assets are simultaneously being utilised to their limits. Information and communication technology has given power technology a strong impulse. Power electronic technology allows network managers and operators to better guide the energy flow. It therefore also contributes to a more rapid transition to a durable energy supply. For example, the application of power electronics in network-integrated decentralised generators such as micro-CHP, wind turbines and solar cells increases the ability to intervene on an extremely local level in an intelligent manner. This increased application and penetration of grid connected power electronics also inevitably increases the demand for research, knowledge and testing of the behaviour of the equipment when integrated into the grid. (orig.)

  2. A flexible and robust soft-error testing system for microelectronic devices and integrated circuits

    Institute of Scientific and Technical Information of China (English)

    王晓辉; 杨振雷; 童腾; 苏弘; 刘杰; 张战刚; 古松; 刘天奇; 孔洁; 赵兴文

    2015-01-01

    Single event effects (SEEs) induced by radiation have become a significant reliability challenge for modern electronic systems. To evaluate SEE susceptibility of microelectronic devices and integrated circuits (ICs), an SEE testing system offering flexibility and robustness was developed at the Heavy Ion Research Facility in Lanzhou (HIRFL). The system is compatible with various types of microelectronic devices and ICs, and supports a wide range of complex, high-speed test schemes and plans for the irradiated devices under test (DUTs). Thanks to the combination of meticulous circuit design and hardened logic design, the system also avoids overheating and irradiation by stray radiation. The system has been tested and verified in device irradiation experiments at HIRFL.

  3. Distance associated heterophoria measured with polarized Cross test of MKH method and its relationship to refractive error and age

    Directory of Open Access Journals (Sweden)

    Kříž P

    2017-03-01

    Purpose: With the expansion of modern liquid crystal display optotypes using positive polarization, measurement of heterophorias (HTF) by means of polarization, and thus partial dissociation of percepts, has become more and more accessible. Our aims were to establish the prevalence of distance associated HTF measured with the polarized Cross test of the MKH [measuring and correcting methodology after H-J Haase] method and its association with age and refractive error in a clinical population of wide age range. Methods: A cross-sectional study was carried out with 170 clinical subjects aged 15-78 years, with an average age of 40.7±16.62 years. All participants had best-corrected visual acuity better than 20/25, stereopsis ≤60 seconds of arc, no heterotropia, had not undergone vision therapy, and had no eye disease. The distance associated HTF was measured with the Cross test of the MKH methodology. The quantification of associated HTF was acquired by means of a Risley rotary prism. Results: Distance associated HTF was found in 71.2% of participants. Of the total, 36.5% of the cases had esophoria (EP), 9.4% EP and hyperphoria, 10.6% exophoria (XP), 7.1% XP and hyperphoria, 7.6% hyperphoria, and 28.8% orthophoria. The mean distance horizontal associated HTF was +0.76±2.38 ∆. With EP, the mean value was +2.47±2.18 ∆, and with XP, −2.1±1.72 ∆. No correlation was observed between the amount of distance associated HTF and age. There was no effect of the type or amount of refractive error on the amount of distance associated HTF. Conclusion: A high prevalence of distance associated HTF was revealed by the polarized Cross test of the MKH method. The relationship between the degree of associated HTF and refractive error and age…

  4. The terminator "toy" chemistry test: a simple tool to assess errors in transport schemes

    Directory of Open Access Journals (Sweden)

    P. H. Lauritzen

    2015-05-01

    This test extends the evaluation of transport schemes from prescribed advection of inert scalars to reactive species. The test consists of transporting two interacting chemical species in the Nair and Lauritzen 2-D idealized flow field. The sources and sinks for these two species are given by a simple, but non-linear, "toy" chemistry that represents combination (X + X → X2) and dissociation (X2 → X + X). This chemistry mimics photolysis-driven conditions near the solar terminator, where strong gradients in the spatial distribution of the species develop near its edge. Despite the large spatial variations in each species, the weighted sum XT = X + 2X2 should always be preserved at spatial scales at which molecular diffusion is excluded. The terminator test demonstrates how well the advection-transport scheme preserves linear correlations. Chemistry-transport (physics-dynamics) coupling can also be studied with this test. Example results are shown for illustration.
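
    The toy chemistry itself is easy to reproduce at a single grid point; the sketch below integrates the combination/dissociation kinetics with forward Euler and checks conservation of XT = X + 2X2 (the rate constants are illustrative):

        k1, k2 = 1.0, 0.1           # combination and dissociation rates (illustrative)
        dt, steps = 1e-3, 20000
        X, X2 = 1.0, 0.0
        for _ in range(steps):
            comb = k1 * X * X       # X + X -> X2
            diss = k2 * X2          # X2 -> X + X
            dX = (-2.0 * comb + 2.0 * diss) * dt
            dX2 = (comb - diss) * dt
            X, X2 = X + dX, X2 + dX2
        # dX + 2*dX2 = 0 at every step, so XT is conserved exactly here;
        # a transport scheme should preserve this linear correlation too.
        print(X, X2, "XT =", X + 2.0 * X2)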

  5. Research and development report. Eureka 147: Tests of the error performance of the DAB system

    Science.gov (United States)

    Gilchrist, N. H. C.

    This Report describes tests carried out by the BBC and other members of Eureka 147 Working Group 2-A, to assess the performance of the DAB (Digital Audio Broadcasting) system with a low carrier-to-noise ratio at the receiver. The tests were conducted using a number of listeners to judge the audio quality, making collaborative decisions with the knowledge of the conditions under which the system was operating at all times. The primary purpose of the work described in this Report was to inform the Eureka 147 project about the failure characteristics of the DAB system as the carrier-to-noise ratio is reduced.

  6. 78 FR 20345 - Modification and Expansion of CBP Centers of Excellence and Expertise Test To Include Six...

    Science.gov (United States)

    2013-04-04

    ... Expertise Test To Include Six Additional Centers AGENCY: U.S. Customs and Border Protection, Department of... Protection's (CBP's) plan to modify and expand its test for the Centers of Excellence and Expertise (CEEs... Border Protection (CBP) established two Centers of Excellence and Expertise (CEEs): The Electronics...

  7. Should We Stop Looking for a Better Scoring Algorithm for Handling Implicit Association Test Data? Test of the Role of Errors, Extreme Latencies Treatment, Scoring Formula, and Practice Trials on Reliability and Validity.

    Directory of Open Access Journals (Sweden)

    Juliette Richetin

    Since the development of D scores for the Implicit Association Test (IAT), few studies have examined whether there is a better scoring method. In this contribution, we tested the effect of four relevant parameters for IAT data: the treatment of extreme latencies, the treatment of errors, the method for computing the IAT difference, and the distinction between practice and test critical trials. For some options of these parameters, we included robust statistical methods that can provide viable alternative metrics to existing scoring algorithms, especially given the specific nature of reaction time data. We thus elaborated the 420 algorithms that result from the combination of all the different options and tested the main effects of the four parameters with robust statistical analyses, as well as their interaction with the type of IAT (i.e., with or without a built-in penalty included in the IAT procedure). From the results, we can draw some recommendations. A treatment of extreme latencies is preferable, but only if it consists of replacing rather than eliminating them. Errors contain important information and should not be discarded. The D score still seems to be a good way to compute the difference, although the G score could be a good alternative, and finally it seems better not to compute the IAT difference separately for practice and test critical trials. Based on these recommendations, we propose to improve the traditional D score with small yet effective modifications.
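
    For orientation, a bare-bones version of the conventional D score that the 420 algorithm variants build on; the latency window shown is one common treatment option, not the authors' recommendation:

        import numpy as np

        def d_score(compat_ms, incompat_ms, lo=400, hi=10000):
            """IAT D: difference of condition means over the pooled SD of
            all latencies. Latencies outside [lo, hi] ms are discarded here
            (one common extreme-latency treatment among several)."""
            c = np.asarray([t for t in compat_ms if lo <= t <= hi], float)
            i = np.asarray([t for t in incompat_ms if lo <= t <= hi], float)
            pooled_sd = np.sqrt(np.concatenate([c, i]).var(ddof=1))
            return (i.mean() - c.mean()) / pooled_sd

        print(d_score([650, 700, 720, 800], [900, 950, 1020, 880]))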

  8. Non-gaussian Test Models for Prediction and State Estimation with Model Errors

    Institute of Scientific and Technical Information of China (English)

    Michal BRANICKI; Nan CHEN; Andrew J. MAJDA

    2013-01-01

    Turbulent dynamical systems involve dynamics with both a large-dimensional phase space and a large number of positive Lyapunov exponents. Such systems are ubiquitous in applications in contemporary science and engineering, where statistical ensemble prediction and real-time filtering/state estimation are needed despite the underlying complexity of the system. Statistically exactly solvable test models have a crucial role in providing firm mathematical underpinnings or new algorithms for vastly more complex scientific phenomena. Here, a class of statistically exactly solvable non-Gaussian test models is introduced, where a generalized Feynman-Kac formulation reduces the exact behavior of conditional statistical moments to the solution of inhomogeneous Fokker-Planck equations modified by linear lower-order coupling and source terms. This procedure is applied to a test model with hidden instabilities and is combined with information theory to address two important issues in the contemporary statistical prediction of turbulent dynamical systems: coarse-grained ensemble prediction in a perfect model and improved long-range forecasting in imperfect models. The models discussed here should be useful for many other applications and algorithms for real-time prediction and state estimation.

  9. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    Science.gov (United States)

    Kory, Carol L.

    2001-01-01

    prohibitively expensive, as it would require manufacturing numerous amplifiers, in addition to acquiring the required digital hardware. As an alternative, the time-domain TWT interaction model developed here provides the capability to establish a computational test bench where ISI or bit error rate can be simulated as a function of TWT operating parameters and component geometries. Intermodulation products, harmonic generation, and backward waves can also be monitored with the model for similar correlations. The advancements in computational capabilities and corresponding potential improvements in TWT performance may prove to be the enabling technologies for realizing unprecedented data rates for near real time transmission of the increasingly larger volumes of data demanded by planned commercial and Government satellite communications applications. This work is in support of the Cross Enterprise Technology Development Program in Headquarters' Advanced Technology & Mission Studies Division and the Air Force Office of Scientific Research Small Business Technology Transfer programs.

  10. Frequent major errors in antimicrobial susceptibility testing of bacterial strains distributed under the Deutsches Krebsforschungszentrum Quality Assurance Program.

    Science.gov (United States)

    Boot, R

    2012-07-01

    The Quality Assurance Program (QAP) of the Deutsches Krebsforschungszentrum (DKFZ) was a proficiency testing system developed to serve the laboratory animal discipline. The QAP comprised the distribution of bacterial strains from various species of animals for identification to species level and antibiotic susceptibility testing (AST). Identification capabilities were below acceptable standards. This study evaluated AST results using the DKFZ compilations of test results for all bacterial strains, showing the number of participants reporting the strain as resistant (R), sensitive (S) or intermediate susceptible (I) to each antibiotic substance used. Due to a lack of information about the methods used, it was assumed that what the majority of the participants reported (R or S) was the correct test result and that an opposite result was a major error (ME). MEs occurred in 1375 of 14,258 (9.7%) test results, and the ME percentage ranged from 0% to 23.2% per bacterial group-agent group combination. Considerable variation in MEs was found within groups of bacteria and within groups of agents. In addition to poor performance in proper species classification, the quality of AST in laboratory animal diagnostic laboratories seems far below the standards considered acceptable in human diagnostic microbiology.

  11. Including Bioconcentration Kinetics for the Prioritization and Interpretation of Regulatory Aquatic Toxicity Tests of Highly Hydrophobic Chemicals.

    Science.gov (United States)

    Kwon, Jung-Hwan; Lee, So-Young; Kang, Hyun-Joong; Mayer, Philipp; Escher, Beate I

    2016-11-01

    Worldwide, regulations of chemicals require short-term toxicity data for evaluating hazards and risks of the chemicals. Current data requirements on the registration of chemicals are primarily based on tonnage and do not yet consider properties of chemicals. For example, short-term ecotoxicity data are required for chemicals with production volume greater than 1 or 10 ton/y according to REACH, without considering chemical properties. Highly hydrophobic chemicals are characterized by low water solubility and slow bioconcentration kinetics, which may hamper the interpretation of short-term toxicity experiments. In this work, internal concentrations of highly hydrophobic chemicals were predicted for standard acute ecotoxicity tests at three trophic levels, algae, invertebrate, and fish. As demonstrated by comparison with maximum aqueous concentrations at water solubility, chemicals with an octanol-water partition coefficient (Kow) greater than 10^6 are not expected to reach sufficiently high internal concentrations for exerting effects within the test duration of acute tests with fish and invertebrates, even though they might be intrinsically toxic. This toxicity cutoff was explained by the slow uptake, i.e., by kinetics, not by thermodynamic limitations. Predictions were confirmed by data entries of the OECD's screening information data set (SIDS) (n = 746), apart from a few exceptions concerning mainly organometallic substances and those with inconsistency between water solubility and Kow. Taking error propagation and model assumptions into account, we thus propose a revision of data requirements for highly hydrophobic chemicals with log Kow > 7.4: Short-term toxicity tests can be limited to algae that generally have the highest uptake rate constants, whereas the primary focus of the assessment should be on persistence, bioaccumulation, and long-term effects.
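
    The kinetic argument can be made concrete with a one-compartment uptake model; the relationship between the elimination rate constant and Kow below is a deliberately crude assumption for illustration only:

        import numpy as np

        def internal_conc_fraction(t_hours, log_kow, k_u=1e4):
            """Fraction of the steady-state internal concentration reached
            after t, assuming first-order uptake/elimination with
            k_e = k_u / Kow (elimination slows as hydrophobicity rises;
            k_u and the scaling are hypothetical)."""
            k_e = k_u / 10.0**log_kow          # per hour
            return 1.0 - np.exp(-k_e * t_hours)

        # A 48-hour acute invertebrate test: near steady state at log Kow 5,
        # far from it at log Kow 8, hence the kinetic toxicity cutoff.
        for lk in (5, 6, 7, 8):
            print(lk, internal_conc_fraction(48.0, lk))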

  12. Novel considerations about the error budget of the LAGEOS-based tests of frame-dragging with GRACE geopotential models

    CERN Document Server

    Iorio, Lorenzo; Corda, Christian

    2013-01-01

    A realistic assessment of the uncertainties in the even zonals of a given geopotential model must be made by directly comparing its coefficients with those of a wholly independent solution of superior formal accuracy. Otherwise, a favorable selective bias is introduced in the evaluation of the total error budget of the LAGEOS-based Lense-Thirring tests, yielding likely too optimistic figures for it. By applying a novel approach which recently appeared in the literature, the second (L = 4) and the third (L = 6) even zonals turn out to be uncertain at the 2-3 x 10^-11 (L = 4) and 3-4 x 10^-11 (L = 6) level, respectively, yielding a total gravitational error of about 27-28%, with an upper bound of 37-39%. The results by Ries et al. themselves yield an upper bound for it of about 33%. The low-degree even zonals are not exclusively determined from the GRACE Satellite-to-Satellite Tracking (SST) range since they affect it with long-period, secular-like signatures over orbital arcs longer than one orbital period: GRACE SST...

  13. Resimulation of noise: a precision estimator for least square error curve-fitting tested for axial strain time constant imaging.

    Science.gov (United States)

    Nair, S P; Righetti, R

    2015-05-07

    Recent elastography techniques focus on imaging information on properties of materials which can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain-versus-time behavior of tissues undergoing creep compression is non-linear. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method, which we call Resimulation of Noise (RoN), to provide a measure of the reliability of non-linear LSE parameter estimates. RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experimental realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
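
    A generic version of the RoN idea (fit once, estimate the noise level from the residuals, resimulate and refit many synthetic realizations, report the spread); the exponential-decay model below stands in for the paper's strain time-constant model:

        import numpy as np
        from scipy.optimize import curve_fit

        def model(t, a, tau):
            return a * np.exp(-t / tau)

        rng = np.random.default_rng(0)
        t = np.linspace(0, 5, 60)
        y = model(t, 1.0, 1.5) + rng.normal(0, 0.05, t.size)  # one "experiment"

        p_hat, _ = curve_fit(model, t, y, p0=(1.0, 1.0))
        sigma = np.std(y - model(t, *p_hat), ddof=2)          # residual noise level

        taus = []
        for _ in range(500):                                  # resimulate the noise
            y_syn = model(t, *p_hat) + rng.normal(0, sigma, t.size)
            p_syn, _ = curve_fit(model, t, y_syn, p0=p_hat)
            taus.append(p_syn[1])
        print("tau estimate:", p_hat[1], "+/-", np.std(taus))  # RoN-style precision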

  14. Identifying subassemblies by ultrasound to prevent fuel handling error in sodium fast reactors: First test performed in water

    Energy Technology Data Exchange (ETDEWEB)

    Paumel, Kevin; Lhuillier, Christian [CEA, DEN, Nuclear Technology Department, F-13108 Saint-Paul-lez-Durance, (France)

    2015-07-01

    Identifying subassemblies by ultrasound is a method being considered to prevent handling errors in sodium fast reactors. It is based on reading a code (aligned notches) engraved on the subassembly head with an emitting/receiving ultrasonic sensor. This reading is carried out in sodium with high-temperature transducers. The resulting one-dimensional C-scan can be likened to a binary code expressing the subassembly type and number. The first test, performed in water, investigated two parameters: the width and depth of the notches. The code remained legible for notches as thin as 1.6 mm wide. The impact of the depth seems minor in the range under investigation. (authors)

  15. Minimizing errors during in vitro testing of multisegmental spine specimens: considerations for component selection and kinematic measurement.

    Science.gov (United States)

    Gédet, Philippe; Thistlethwaite, Paul A; Ferguson, Stephen J

    2007-01-01

    Apparatus-induced artifacts may invalidate standard spine testing protocols. Kinematic measurements may be compromised by the configuration of motion capture equipment. This study determined: (1) the influence of machine design (component friction) on in vitro spinal kinetics; (2) the sensitivity of kinematic measurements to variations in the placement of motion capture markers. A spinal loading simulator has been developed to dynamically apply pure bending moments (three axes) with or without a simultaneous compressive preload. Two linear slider types with different friction coefficients, one with caged ball bearings and one with high-precision roller bearings on rails, were mounted and specimen response compared in sequential tests. Three different optoelectronic marker cluster configurations were mounted on the specimen and motion data were captured simultaneously from all clusters during testing. A polymer tube with a uniform bending stiffness approximately equivalent to a polysegmental lumbar spine specimen was selected to allow reproducible behavior over multiple tests. The selection of sliders for linear degrees of freedom had a marked influence on parasitic shear forces. Higher shear forces were recorded with the caged-bearing design than with the high-precision rollers, and consequently a higher moment was required to achieve a given rotation. Kinematic accuracy varied with each marker configuration, but in general higher accuracy was achieved with larger marker spacings and in situations where markers moved predominantly parallel to the camera's imaging plane. Relatively common alternatives in the mechanical components used in an apparatus for in vitro spine testing can have a significant influence on the measured kinematics and kinetics. Low-magnitude parasitic shear forces due to friction in sliders induce a linearly increasing moment along the length of the specimen, precluding the ideal of pure moment application. This effect is compounded in…

  16. Causes of errors in clinical blood testing: discussion and analysis

    Institute of Scientific and Technical Information of China (English)

    李生明

    2015-01-01

    Objective: To analyze the manifestations and inducing factors of errors in clinical blood testing, and to discuss strategies for preventing blood test errors. Methods: Fifty blood test samples exhibiting errors were collected, and the error manifestations and causes of the selected samples were analyzed retrospectively. Results: From February 2014 to February 2015 our hospital performed blood tests on 540 samples, of which 50 showed errors during testing (manifestations included sample coagulation, contamination and dilution), an error rate of 9.3%. The survey found that the causes of error included improper blood sample collection, problems during sample delivery and test operation, and subjects not following the required preparation before blood sampling. Errors caused by faulty blood sample collection accounted for 52.0% (26/50) of blood test errors, significantly more than the proportion attributable to the other causes (P<0.05). Conclusion: In view of the causes of blood test errors, standardizing the blood sampling procedure, storing and processing blood samples properly, regularly checking and maintaining test equipment, and strengthening staff training in blood sample handling are measures worth adopting in clinical testing.

  17. Evaluation and comparison of multiple test methods, including real-time PCR, for Legionella detection in clinical specimens.

    Directory of Open Access Journals (Sweden)

    Adriana Peci

    2016-08-01

    Full Text Available Legionella is a gram-negative bacterium that can cause Pontiac fever, a mild upper respiratory infection, and Legionnaires' disease, a more severe illness. We aimed to compare the performance of the urine antigen, culture and PCR test methods, and to determine whether sputum is an alternative to the more invasive bronchoalveolar lavage (BAL). Data for this study included specimens tested for Legionella at PHOL from January 1, 2010 to April 30, 2014, as part of routine clinical testing. We found the sensitivity of UAT compared to culture to be 87%, specificity 94.7%, positive predictive value (PPV) 63.8% and negative predictive value (NPV) 98.5%. Sensitivity of UAT compared to PCR was 74.7%, specificity 98.3%, PPV 77.7% and NPV 98.1%. Of 146 patients who had a Legionella-positive result by PCR, only 66 (45.2%) also had a positive result by culture. Sensitivity for culture was the same using either sputum or BAL (13.6%); sensitivity for PCR was 10.3% for sputum and 12.8% for BAL. Both sputum and BAL yielded similar results regardless of testing method (Fisher exact p-value = 1.0 for each test). In summary, all test methods have inherent weaknesses in identifying Legionella; therefore, more than one testing method should be used. Obtaining a single specimen type from patients with pneumonia limits the ability to diagnose Legionella, particularly when urine is the specimen type submitted. Given the ease of collection and similar sensitivity to BAL, clinicians are encouraged to submit sputum in addition to urine when BAL submission is not practical from patients being tested for Legionella.
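
    As a worked illustration of the accuracy figures quoted in this record, the following sketch computes sensitivity, specificity, PPV and NPV from a 2x2 table of index-test results against a reference method. It is not from the paper; the counts are hypothetical, chosen only to roughly reproduce the reported UAT-versus-culture values.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return sensitivity, specificity, PPV and NPV (in percent) from a 2x2 table."""
    return {
        "sensitivity": 100 * tp / (tp + fn),  # true positives among all diseased
        "specificity": 100 * tn / (tn + fp),  # true negatives among all non-diseased
        "ppv": 100 * tp / (tp + fp),          # P(disease | positive test)
        "npv": 100 * tn / (tn + fn),          # P(no disease | negative test)
    }

# Hypothetical counts that approximately reproduce the UAT-vs-culture figures
# reported above (sensitivity 87%, specificity 94.7%, PPV 63.8%, NPV 98.5%).
print(diagnostic_metrics(tp=60, fp=34, fn=9, tn=600))
```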

  18. Testing and research of spindle dynamic errors of a vertical machining center

    Institute of Scientific and Technical Information of China (English)

    刘阔; 王冠明; 马晓波

    2013-01-01

    The causes of spindle dynamic errors are analysed first. The radial average and asynchronous errors, the axial average and asynchronous errors, and the minimum radial separation of the spindle of a vertical machining center were then measured with an API SPN-300 spindle dynamic error analyser. Three sets of test results for the spindle dynamic errors at different rotating speeds are presented, and the test results are analysed.

  19. Young Children Who Abandon Error Behaviourally Still Have to Free Themselves Mentally: A Retrospective Test for Inhibition in Intuitive Physics

    Science.gov (United States)

    Freeman, Norman H.; Hood, Bruce M.; Meehan, Caroline

    2004-01-01

    When preschoolers overcome persistent error, subsequent patterns of correct choices may identify how the error had been overcome. Children who no longer misrepresented a ball rolling down a bent tube as though it could only fall vertically were asked sometimes to approach and sometimes to avoid where the ball landed. All children showed requisite…

  20. [Survey in hospitals. Nursing errors, error culture and error management].

    Science.gov (United States)

    Habermann, Monika; Cramer, Henning

    2010-09-01

    Knowledge on errors is important to design safe nursing practice and its framework. This article presents results of a survey on this topic, including data of a representative sample of 724 nurses from 30 German hospitals. Participants predominantly remembered medication errors. Structural and organizational factors were rated as most important causes of errors. Reporting rates were considered low; this was explained by organizational barriers. Nurses in large part expressed having suffered from mental problems after error events. Nurses' perception focussing on medication errors seems to be influenced by current discussions which are mainly medication-related. This priority should be revised. Hospitals' risk management should concentrate on organizational deficits and positive error cultures. Decision makers are requested to tackle structural problems such as staff shortage.

  1. The analysis of the performance test results including correlation between the traits of this evaluation in crossbred gilts

    Directory of Open Access Journals (Sweden)

    Jerzy NOWACHOWICZ

    2012-12-01

    Full Text Available The aim of the paper was to analyse performance test results, including correlations between the traits of this evaluation, in crossbred gilts of Polish Large White (PLW) and Polish Landrace (PL), conducted in 2004-2008 in Poland in the Bydgoszcz Breeding Region. The research covered 51,802 crossbred gilts from two crossing variants (with the sows' breed given in the first position): PLW x PL and PL x PLW. The PLW x PL crossbred gilts in 2004-2007, and in the total summary of results from 2004-2008, obtained a higher performance test selection index value, and thus a higher breeding value for growth and slaughter traits, than animals from the PL x PLW crossing variant. Over the 5 analysed years (2004-2008), the performance test selection index increased in the PLW x PL and PL x PLW crossbred gilts by 3.6 and 5.8 points, respectively; thus the breeding value of the evaluated animals improved. In all analysed years, negative and highly statistically significant correlations were observed in the tested groups of crossbred gilts between growth rate and standardised body meat content, which may indicate an unfavourable impact of high growth rate on the meat content of pigs.

  2. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Full Text Available Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and adherence to the standard ISO/IEC 17025:2005, general requirements for the competence of testing and calibration laboratories, are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In the application of forensic principles and methods, Federal Rule of Evidence 702 mandates that judges consider factors such as peer review to ensure the reliability of expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher possibility of unfair decision-making, they should receive more attention than false-negative errors.

  3. Synthetic Source Inversion Tests with the Full Complexity of Earthquake Source Processes, Including Both Supershear Rupture and Slip Reactivation

    Science.gov (United States)

    Song, Seok Goo; Dalguer, Luis A.

    2017-03-01

    Recent studies in dynamic source modeling and kinematic source inversion show that earthquake rupture may contain greater complexity than we previously anticipated, including multiple slipping at a given point on a fault. Finite source inversion methods suffer from the nonuniqueness of solutions, and it may become more serious if we aim to resolve more complex rupture models. In this study, we perform synthetic inversion tests with dynamically generated complex rupture models, including both supershear rupture and slip reactivation, to understand the possibility of resolving complex rupture processes by inverting seismic waveform data. We adopt a linear source inversion method with multiple windows, allowing for slipping from the nucleation of rupture to the termination at all locations along a fault. We regularize the model space effectively in the Bayesian framework and perform multiple inversion tests by considering the effect of inaccurate Green's functions and station distributions. We also perform a spectral stability analysis. Our results show that it may be possible to resolve both a supershear rupture front and reactivated secondary slipping using the linear inversion method if those complex features are well separated from the main rupture and produce a fair amount of seismic energy. It may be desirable to assume the full complexity of an earthquake rupture when we first develop finite source models after a major event occurs and then assume a simple rupture model for stability if the estimated models do not show a clear pattern of complex rupture processes.

  4. Experimental test of a hot water storage system including a macro-encapsulated phase change material (PCM)

    Science.gov (United States)

    Mongibello, L.; Atrigna, M.; Bianco, N.; Di Somma, M.; Graditi, G.; Risi, N.

    2017-01-01

    Thermal energy storage systems (TESs) are of fundamental importance for many energy systems, essentially because they permit a certain degree of decoupling between the production of heat or cold and its use. In recent years, many works have analysed the addition of a PCM inside a hot water storage tank, as storing thermal energy as latent heat can allow a reduction in the size of the storage tank and, as a consequence, in its cost and footprint. The present work focuses on experimental tests performed with an indoor facility to analyse the dynamic behaviour of a hot water storage tank including PCM modules during a charging phase. A commercial bio-based PCM with a melting temperature of 58°C was used for this purpose. The experimental results for the hot water tank including the PCM modules are presented in terms of the temporal evolution of the axial temperature profile, heat transfer and stored energy, and are compared with those obtained using only water as the energy storage material. Insights into the estimation of the percentage of melted PCM at the end of the experimental test are presented and discussed.

  5. Improved error estimates of a discharge algorithm for remotely sensed river measurements: Test cases on Sacramento and Garonne Rivers

    Science.gov (United States)

    Yoon, Yeosang; Garambois, Pierre-André; Paiva, Rodrigo C. D.; Durand, Michael; Roux, Hélène; Beighley, Edward

    2016-01-01

    We present an improvement to a previously presented algorithm that used a Bayesian Markov chain Monte Carlo method for estimating river discharge from remotely sensed observations of river height, width, and slope. We also present an error budget for discharge calculations from the algorithm. The algorithm may be utilized by the upcoming Surface Water and Ocean Topography (SWOT) mission. We present a detailed evaluation of the method using synthetic SWOT-like observations (i.e., SWOT and AirSWOT, an airborne version of SWOT). The algorithm is evaluated using simulated AirSWOT observations over the Sacramento and Garonne Rivers, which have differing hydraulic characteristics. The algorithm is also explored using SWOT observations over the Sacramento River. SWOT and AirSWOT height, width, and slope observations are simulated by corrupting the "true" hydraulic modeling results with instrument error. Algorithm discharge root mean square error (RMSE) was 9% for the Sacramento River and 15% for the Garonne River for the AirSWOT case using expected observation error. The discharge uncertainty calculated from Manning's equation was 16.2% and 17.1%, respectively. For the SWOT scenario, the RMSE and uncertainty of the discharge estimate for the Sacramento River were 15% and 16.2%, respectively. A method based on the Kalman filter to correct errors of discharge estimates was shown to improve algorithm performance. From the error budget, the primary source of uncertainty was the a priori uncertainty of bathymetry and roughness parameters. Sensitivity to measurement errors was found to be a function of river characteristics. For example, the steeper Garonne River is less sensitive to slope errors than the flatter Sacramento River.
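
    The record quotes a discharge uncertainty "calculated from Manning's equation". As background, a minimal sketch of the standard Manning relation is given below; it is not the paper's Bayesian MCMC algorithm, and the channel values are hypothetical.

```python
import math

def manning_discharge(n: float, area_m2: float, hydraulic_radius_m: float, slope: float) -> float:
    """Steady uniform-flow discharge Q = (1/n) * A * R^(2/3) * sqrt(S) (SI units)."""
    return (1.0 / n) * area_m2 * hydraulic_radius_m ** (2.0 / 3.0) * math.sqrt(slope)

# Hypothetical channel: roughness n = 0.03, flow area 300 m^2,
# hydraulic radius 3 m, water-surface slope 1e-4.
print(manning_discharge(0.03, 300.0, 3.0, 1e-4))  # ~208 m^3/s
```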

  6. Do schizophrenia patients make more perseverative than non-perseverative errors on the Wisconsin Card Sorting Test? A meta-analytic study.

    Science.gov (United States)

    Li, Chiang-Shan Ray

    2004-12-15

    The Wisconsin Card Sorting Test (WCST) is widely used to explore executive functions in patients with schizophrenia. Among other findings, a higher number of perseverative errors has been suggested to implicate a deficit in task switching and inhibitory functions in schizophrenia. Many studies of patients with schizophrenia have focused on perseverative errors as the primary performance index in the WCST. However, do schizophrenia patients characteristically make more perseverative than non-perseverative errors compared with healthy controls? We reviewed the literature where schizophrenia patients were engaged in the WCST irrespective of the primary goal of the study. The results showed that while both schizophrenia patients and healthy participants made more perseverative than non-perseverative errors, the contrast between perseverative and non-perseverative errors is higher in schizophrenia patients only at a marginal level of significance. This result suggests that schizophrenia patients do make a comparable number of non-perseverative errors and cautions against simplistic interpretation of poor performance of schizophrenia patients in WCST as entirely resulting from impairment in set-shifting or inhibitory functions.

  7. Types and frequency of errors during different phases of testing at a clinical medical laboratory of a teaching hospital in Tehran, Iran

    Directory of Open Access Journals (Sweden)

    Alireza Abdollahi

    2014-01-01

    Full Text Available Background: According to official data, 60-70% of clinical decisions about hospitalization and discharge are based on laboratory results. Aims: The objective of this study is to examine the frequency of errors before, during, and after analysis in a major medical laboratory. Materials and Methods: This descriptive, cross-sectional study was conducted throughout 2012 (January-December 2012). Errors were recorded by the Quality Control Committee in a specially designed record. Results: A total of 303,866 samples, comprising 2,430,928 tests, were received for analysis. The total number of errors was 153,148 (6.3%; 116,392 for inpatients and 36,756 for outpatients). Analysis of the results revealed that about 65.09% of the errors occurred in the preanalytical phase, whereas 23.2% and 11.68% were related to the analytical and postanalytical phases, respectively. Conclusion: More than half of the laboratory errors are related to the preanalytical phase; therefore, proper training and knowledge of the intervening factors are essential for reducing errors and optimizing quality.

  8. Decision Making for Borderline Cases in Pass/Fail Clinical Anatomy Courses: The Practical Value of the Standard Error of Measurement and Likelihood Ratio in a Diagnostic Test

    Science.gov (United States)

    Severo, Milton; Silva-Pereira, Fernanda; Ferreira, Maria Amelia

    2013-01-01

    Several studies have shown that the standard error of measurement (SEM) can be used as an additional “safety net” to reduce the frequency of false-positive or false-negative student grading classifications. Practical examinations in clinical anatomy are often used as diagnostic tests to admit students to course final examinations. The aim of this…
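
    The record's abstract is truncated, but the standard definition of the SEM it refers to is SEM = SD * sqrt(1 - reliability), and a borderline band around a pass/fail cut score is typically cut ± z*SEM. A minimal sketch under that assumption, with hypothetical exam numbers:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SEM = SD * sqrt(1 - r_xx)."""
    return sd * math.sqrt(1.0 - reliability)

def borderline_band(cut: float, sd: float, reliability: float, z: float = 1.0) -> tuple:
    """Scores within cut +/- z*SEM are 'borderline' and may warrant extra review."""
    e = sem(sd, reliability)
    return (cut - z * e, cut + z * e)

# Hypothetical exam: pass mark 50, score SD 10, reliability 0.84 -> SEM = 4.0
print(borderline_band(50.0, 10.0, 0.84))  # (46.0, 54.0)
```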

  9. Doppler imaging of chemical spots on magnetic Ap/Bp stars. Numerical tests and assessment of systematic errors

    CERN Document Server

    Kochukhov, O

    2016-01-01

    Doppler imaging (DI) is a powerful spectroscopic inversion technique that enables conversion of a line profile time series into a two-dimensional map of the stellar surface inhomogeneities. In this paper we investigate the accuracy of chemical abundance DI of Ap/Bp stars and assess the impact of several different systematic errors on the reconstructed spot maps. We simulate spectroscopic observational data for different spot distributions in the presence of a moderately strong dipolar magnetic field. We then reconstruct chemical maps using different sets of spectral lines and making different assumptions about line formation in the inversion calculations. Our numerical experiments demonstrate that a modern DI code successfully recovers the input chemical spot distributions comprised of multiple circular spots at different latitudes or an element overabundance belt at the magnetic equator. For the optimal reconstruction the average reconstruction errors do not exceed ~0.10 dex. The errors increase to about 0.1...

  10. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners will also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using an MEE-like concept is also presented. Examples, tests, evaluation experiments and comparisons with similar machines using classic approaches complement the descriptions.
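
    As an illustration of the MEE concept the book describes, the sketch below estimates the information potential of a set of model errors with a Parzen (Gaussian-kernel) window; minimizing Renyi's quadratic error entropy H2 = -log V is equivalent to maximizing V. This is a generic sketch, not code from the book, and the bandwidth and error values are hypothetical.

```python
import numpy as np

def information_potential(errors: np.ndarray, sigma: float = 0.5) -> float:
    """Parzen estimate V(e) = (1/N^2) * sum_ij G(e_i - e_j), with a Gaussian
    kernel of variance 2*sigma^2 (the convolution of two sigma-kernels)."""
    diffs = errors[:, None] - errors[None, :]
    var = 2.0 * sigma ** 2
    gauss = np.exp(-diffs ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    return float(gauss.mean())

errors = np.array([0.1, -0.2, 0.05, 0.0, 0.15])  # hypothetical classifier errors
V = information_potential(errors)
print(V, -np.log(V))  # information potential and quadratic error entropy
```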

  11. Analysis of the Causes of Errors in Blood Test Specimens

    Institute of Scientific and Technical Information of China (English)

    吴云

    2015-01-01

    Objective: To investigate the causes of errors in blood test specimens and to propose appropriate handling measures. Methods: Eighty blood test samples with errors, collected in our hospital in 2013-2014, were analysed; the causes of error were gathered by questionnaire and targeted handling measures were formulated. Results: The causes of error were: (1) patient factors; (2) collection factors; (3) testing factors; (4) delivery factors. Conclusion: Many factors can cause errors in blood test specimens; paying attention to these factors in clinical practice and actively preventing them can reduce errors and improve test quality.

  12. Doppler imaging of chemical spots on magnetic Ap/Bp stars. Numerical tests and assessment of systematic errors

    Science.gov (United States)

    Kochukhov, O.

    2017-01-01

    Context. Doppler imaging (DI) is a powerful spectroscopic inversion technique that enables conversion of a line profile time series into a two-dimensional map of the stellar surface inhomogeneities. DI has been repeatedly applied to reconstruct chemical spot topologies of magnetic Ap/Bp stars with the goal of understanding variability of these objects and gaining an insight into the physical processes responsible for spot formation. Aims: In this paper we investigate the accuracy of chemical abundance DI and assess the impact of several different systematic errors on the reconstructed spot maps. Methods: We have simulated spectroscopic observational data for two different Fe spot distributions with a surface abundance contrast of 1.5 dex in the presence of a moderately strong dipolar magnetic field. We then reconstructed chemical maps using different sets of spectral lines and making different assumptions about line formation in the inversion calculations. Results: Our numerical experiments demonstrate that a modern DI code successfully recovers the input chemical spot distributions comprised of multiple circular spots at different latitudes or an element overabundance belt at the magnetic equator. For the optimal reconstruction based on half a dozen spectral intervals, the average reconstruction errors do not exceed 0.10 dex. The errors increase to about 0.15 dex when abundance distributions are recovered from a few and/or blended spectral lines. Ignoring a 2.5 kG dipolar magnetic field in chemical abundance DI leads to an average relative error of 0.2 dex and maximum errors of 0.3 dex. Similar errors are encountered if a DI inversion is carried out neglecting a non-uniform continuum brightness distribution and variation of the local atmospheric structure. None of the considered systematic effects lead to major spurious features in the recovered abundance maps. Conclusions: This series of numerical DI simulations proves that inversions based on one or two spectral

  13. Statistical Diagnosis and Gross Error Testing for the Semiparametric Linear Model

    Institute of Scientific and Technical Information of China (English)

    丁士俊; 张松林; 姜卫平; 王守春

    2009-01-01

    This paper systematically studies statistical diagnosis and hypothesis testing for the semiparametric linear regression model, following the theories and methods of statistical diagnosis and hypothesis testing for the parametric regression model. Several diagnostic measures and methods for gross error testing are derived. In particular, the global and local influence of gross errors on the parametric component X and the nonparametric component s is discussed in detail; at the same time, the paper proves that the data-point deletion model is equivalent to the mean shift model for the semiparametric regression model. Finally, with one simulated computing example, some helpful conclusions are drawn.

  14. Statistical tests against systematic errors in data sets based on the equality of residual means and variances from control samples: theory and applications.

    Science.gov (United States)

    Henn, Julian; Meindl, Kathrin

    2015-03-01

    Statistical tests are applied for the detection of systematic errors in data sets from least-squares refinements or other residual-based reconstruction processes. Samples of the residuals of the data are tested against the hypothesis that they belong to the same distribution. For this it is necessary that they show the same mean values and variances within the limits given by statistical fluctuations. When the samples differ significantly from each other, they are not from the same distribution within the limits set by the significance level. Therefore they cannot originate from a single Gaussian function in this case. It is shown that a significance cutoff results in exactly this case. Significance cutoffs are still frequently used in charge-density studies. The tests are applied to artificial data with and without systematic errors and to experimental data from the literature.
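
    A minimal sketch of the kind of test the record describes: two residual samples are compared for equal means (Welch's t-test) and equal variances (Levene's test); a significant difference in either flags a possible systematic error. The sample values and the significance level are hypothetical, and the specific tests chosen here are standard stand-ins rather than the authors' exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample_a = rng.normal(0.0, 1.0, 500)   # control residual sample
sample_b = rng.normal(0.3, 1.0, 500)   # residual sample with a hypothetical shift

_, p_mean = stats.ttest_ind(sample_a, sample_b, equal_var=False)  # equal means?
_, p_var = stats.levene(sample_a, sample_b)                       # equal variances?

alpha = 0.05
same_distribution = (p_mean >= alpha) and (p_var >= alpha)
print(p_mean, p_var, same_distribution)  # False here: the shift is detected
```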

  15. Friction Reduction Tested for a Downsized Diesel Engine with Low-Viscosity Lubricants Including a Novel Polyalkylene Glycol

    Directory of Open Access Journals (Sweden)

    David E. Sander

    2017-04-01

    Full Text Available With the increasing pressure to reduce emissions, friction reduction is always an up-to-date topic in the automotive industry. Among the various possibilities to reduce mechanical friction, the usage of a low-viscosity lubricant in the engine is one of the most effective and most economic options. Therefore, lubricants of continuously lower viscosity are being developed and offered on the market that promise to reduce engine friction while avoiding deleterious mixed lubrication and wear. In this work, a 1.6 L downsized Diesel engine is used on a highly accurate engine friction test-rig to determine the potential for friction reduction using low-viscosity lubricants under realistic operating conditions including high engine loads. In particular, two hydrocarbon-based lubricants, 0W30 and 0W20, are investigated as well as a novel experimental lubricant, which is based on a polyalkylene glycol base stock. Total engine friction is measured for all three lubricants, which show a general 5% advantage for the 0W20 in comparison to the 0W30 lubricant. The polyalkylene glycol-based lubricant, however, shows strongly reduced friction losses, which are about 25% smaller than for the 0W20 lubricant. As the 0W20 and the polyalkylene glycol-based lubricant have the same HTHS-viscosity, the findings contradict the common understanding that the HTHS-viscosity is the dominant driver of friction losses.

  16. Constraints and tensions in testing general relativity from Planck and CFHTLenS including intrinsic alignment systematics

    CERN Document Server

    Dossett, Jason N; Parkinson, David; Davis, Tamara

    2015-01-01

    We present constraints on testing general relativity (GR) at cosmological scales using recent data sets and the impact of galaxy intrinsic alignment (IA) in the CFHTLenS lensing data on those constraints. We consider CMB temperature data from Planck, the galaxy power spectrum from WiggleZ, weak lensing tomography from the CFHTLenS, ISW-galaxy cross correlations, and BAO data from 6dF, SDSS DR7, and BOSS DR9. We use a parameterization of the modified gravity (MG) that is binned in redshift and scale, a parameterization that evolves monotonically in scale but is binned in redshift, and a functional parameterization that evolves only in redshift. We present the results in terms of the MG parameters $Q$ and $\Sigma$. We employ an IA model with an amplitude $A_{CFHTLenS}$ that is included in the parameter analysis. We find an improvement in the constraints on the MG parameters corresponding to $40-53\%$ increase on the figure of merit compared to previous studies, and GR is found consistent with the data at the $9...

  17. Validation of a dynamic model for unglazed collectors including condensation. Application for standardized testing and simulation in TRNSYS and IDA

    DEFF Research Database (Denmark)

    Perers, Bengt; Kovacs, Peter; Pettersson, Ulrik

    2011-01-01

    An improved unglazed collector model has been validated for use in TRNSYS and IDA and also for a future extension of the EN12975 collector test standard. The basic model is the same as used in the EN12975 test standard in the quasi-dynamic performance test method (QDT), in this case with the addition of a condensation term that can handle the operation of unglazed collectors below the dew point of the air. This is very desirable for simulation of recharging of ground source energy systems and direct operation of unglazed collectors together with a heat pump. The basic idea is to have a direct connection between collector testing and system simulation by using the same dynamic model and parameters during testing and simulation. The model together with the parameters will be validated in each test in this way. This work describes the method applied to an unglazed collector operating partly below the dew point under

  18. Error Analysis of Hydraulic CAT System Testing

    Institute of Scientific and Technical Information of China (English)

    胡森; 胡晓波

    2011-01-01

    The development status and trends of hydraulic CAT technology are discussed, the error sources of hydraulic component CAT systems are analysed, and some approaches to handling errors in hydraulic CAT systems are introduced.

  19. Understanding native Russian listeners' errors on an English word recognition test: model-based analysis of phoneme confusion.

    Science.gov (United States)

    Shi, Lu-Feng; Morozova, Natalia

    2012-08-01

    Word recognition is a basic component in a comprehensive hearing evaluation, but data are lacking for listeners speaking two languages. This study obtained such data for Russian natives in the US and analysed the data using the perceptual assimilation model (PAM) and speech learning model (SLM). Listeners were randomly presented 200 NU-6 words in quiet. Listeners responded verbally and in writing. Performance was scored on words and phonemes (word-initial consonants, vowels, and word-final consonants). Seven normal-hearing, adult monolingual English natives (NM), 16 English-dominant (ED), and 15 Russian-dominant (RD) Russian natives participated. ED and RD listeners differed significantly in their language background. Consistent with the SLM, NM outperformed ED listeners and ED outperformed RD listeners, whether responses were scored on words or phonemes. NM and ED listeners shared similar phoneme error patterns, whereas RD listeners' errors had unique patterns that could be largely understood via the PAM. RD listeners had particular difficulty differentiating vowel contrasts /i-I/, /æ-ε/, and /ɑ-Λ/, word-initial consonant contrasts /p-h/ and /b-f/, and word-final contrasts /f-v/. Both first-language phonology and second-language learning history affect word and phoneme recognition. Current findings may help clinicians differentiate word recognition errors due to language background from hearing pathologies.

  20. Indici di Frequenza di Errori nella Prova di Comprensione dell'Italiano Come L2 (Frequency of Error Indexes in Testing Comprehension in Italian as a Second Language).

    Science.gov (United States)

    Pintori, Adriana

    1995-01-01

    The author discusses the preparation of a test of reading comprehension in Italian as a Second Language for Spanish university students and analyzes the results of the test. The article includes the reading passage and the test. (CFM)

  2. Simulation analysis of the EUSAMA Plus suspension testing method including the impact of the vehicle untested side

    Science.gov (United States)

    Dobaj, K.

    2016-09-01

    The work deals with a simulation analysis of the influence of half-car vehicle model parameters on suspension testing results, using Matlab. The model parameters considered are the shock absorber damping coefficient, the tire radial stiffness, the car width, and the rocker arm length. Consistent vibration of both test plates was considered: both wheels of the car were subjected to identical vibration, with the frequency varied in a manner similar to the EUSAMA Plus principle. The shock absorber damping coefficient (for several values of the car width and rocker arm length) was changed on one side and on both sides of the vehicle. The results obtained are essential for a new suspension testing algorithm (based on the EUSAMA Plus principle), which will be the aim of the author's further work.

  3. Recommendation to include fragrance mix 2 and hydroxyisohexyl 3-cyclohexene carboxaldehyde (Lyral) in the European baseline patch test series

    DEFF Research Database (Denmark)

    Bruze, Magnus; Andersen, Klaus Ejner; Goossens, An

    2008-01-01

    various European centres when tested in consecutive dermatitis patients. CONCLUSIONS: From 2008, pet. preparations of fragrance mix 2 at 14% w/w (5.6 mg/cm(2)) and hydroxyisohexyl 3-cyclohexene carboxaldehyde at 5% w/w (2.0 mg/cm(2)) are recommended for inclusion in the baseline series. With the Finn Chamber technique, a dose of 20 mg of the pet. preparation is recommended. Whenever there is a positive reaction to fragrance mix 2, additional patch testing with the 6 ingredients (5 if there are simultaneous positive reactions to hydroxyisohexyl 3-cyclohexene carboxaldehyde and fragrance mix 2) is recommended.

  4. The Psychological Effect of Errors in Standardized Language Test Items on EFL Students' Responses to the Following Item

    Science.gov (United States)

    Khaksefidi, Saman

    2017-01-01

    This study investigates the psychological effect of a flawed question with incorrect items on responses to the following question in a test of structure. Forty students selected through stratified random sampling were given 15 questions from a standardized test, namely a TOEFL structure test, in which questions 7 and 11 are flawed, and their answers…

  5. Postprandial effects of test meals including concentrated arabinoxylan and whole grain rye in subjects with the metabolic syndrome

    DEFF Research Database (Denmark)

    Hartvigsen, M L; Lærke, H N; Overgaard, A;

    2014-01-01

    grain rye kernels on postprandial glucose, insulin, free fatty acids (FFA), gut hormones, SCFA and appetite in subjects with the metabolic syndrome (MetS). SUBJECTS/METHODS: Fifteen subjects with MetS participated in this acute, randomised, cross-over study. The test meals each providing 50 g...

  6. Combining empirical approaches and error modelling to enhance predictive uncertainty estimation in extrapolation for operational flood forecasting. Tests on flood events on the Loire basin, France.

    Science.gov (United States)

    Berthet, Lionel; Marty, Renaud; Bourgin, François; Viatgé, Julie; Piotte, Olivier; Perrin, Charles

    2017-04-01

    An increasing number of operational flood forecasting centres assess the predictive uncertainty associated with their forecasts and communicate it to the end users. This information can match the end users' needs (i.e., prove useful for efficient crisis management) only if it is reliable: reliability is therefore a key quality for operational flood forecasts. In 2015, the French national and regional flood forecasting services (Vigicrues network; www.vigicrues.gouv.fr) implemented a framework to compute quantitative discharge and water level forecasts and to assess the predictive uncertainty. Among the possible technical options to achieve this goal, a statistical analysis of the past forecasting errors of deterministic models was selected (QUOIQUE method, Bourgin, 2014). It is a data-based and non-parametric approach resting on as few assumptions as possible about the mathematical structure of the forecasting errors. In particular, a very simple assumption is made regarding the predictive uncertainty distributions for large events outside the range of the calibration data: the multiplicative error distribution is assumed to be constant, whatever the magnitude of the flood. Indeed, the predictive distributions may not be reliable in extrapolation. However, estimating the predictive uncertainty for these rare events is crucial when major floods are of concern. In order to improve forecast reliability for major floods, an attempt is made to combine the operational strength of the empirical statistical analysis with simple error modelling. Since the heteroscedasticity of forecast errors can considerably weaken the predictive reliability for large floods, this error modelling is based on the log-sinh transformation, which has been shown to significantly reduce the heteroscedasticity of the transformed error in a simulation context, even for flood peaks (Wang et al., 2012). Exploratory tests on some operational forecasts issued during the recent floods experienced in
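
    The log-sinh transformation mentioned above (Wang et al., 2012) maps a flow q to z = ln(sinh(a + b*q))/b so that the transformed errors are closer to homoscedastic. A minimal sketch of the transform and its inverse; the parameter and discharge values are hypothetical.

```python
import numpy as np

def log_sinh(q: np.ndarray, a: float, b: float) -> np.ndarray:
    """Log-sinh transform z = ln(sinh(a + b*q)) / b (Wang et al., 2012)."""
    return np.log(np.sinh(a + b * q)) / b

def log_sinh_inverse(z: np.ndarray, a: float, b: float) -> np.ndarray:
    """Back-transform: q = (arcsinh(exp(b*z)) - a) / b."""
    return (np.arcsinh(np.exp(b * z)) - a) / b

q = np.array([10.0, 100.0, 1000.0])  # hypothetical discharges (m^3/s)
a, b = 0.1, 1e-3                     # hypothetical transform parameters
z = log_sinh(q, a, b)
print(np.allclose(log_sinh_inverse(z, a, b), q))  # True: exact round trip
```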

  7. An acceptance test method for error test units in an automatic verification system for current transformers

    Institute of Scientific and Technical Information of China (English)

    周杭军; 李建

    2015-01-01

    A fully automatic verification test method for busbar-type and multi-turn current transformers is introduced. Through an analysis of the error test units, combined with the relevant requirements of JJF 1033-2014, "Rule for the Examination of Measurement Standards", the acceptance method and technical route for the error test units are discussed and investigated in depth.

  8. Statistical Analysis of a Large Sample Size Pyroshock Test Data Set Including Post Flight Data Assessment. Revision 1

    Science.gov (United States)

    Hughes, William O.; McNelis, Anne M.

    2010-01-01

    The Earth Observing System (EOS) Terra spacecraft was launched on an Atlas IIAS launch vehicle on its mission to observe planet Earth in late 1999. Prior to launch, the new design of the spacecraft's pyroshock separation system was characterized by a series of 13 separation ground tests. The analysis methods used to evaluate this unusually large amount of shock data will be discussed in this paper, with particular emphasis on population distributions and finding statistically significant families of data, leading to an overall shock separation interface level. The wealth of ground test data also allowed a derivation of a Mission Assurance level for the flight. All of the flight shock measurements were below the EOS Terra Mission Assurance level thus contributing to the overall success of the EOS Terra mission. The effectiveness of the statistical methodology for characterizing the shock interface level and for developing a flight Mission Assurance level from a large sample size of shock data is demonstrated in this paper.

  9. An Asset Pricing Approach to Testing General Term Structure Models including Heath-Jarrow-Morton Specifications and Affine Subclasses

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; van der Wel, Michel

    …is tested, but in addition to the standard bilinear term in factor loadings and market prices of risk, the relevant mean restriction in the term structure case involves an additional nonlinear (quadratic) term in factor loadings. We estimate our general model using likelihood-based dynamic factor model techniques for a variety of volatility factors, and implement the relevant likelihood ratio tests. Our factor model estimates are similar across a general state space implementation and an alternative robust two-step principal components approach. The evidence favors time-varying market prices of risk. Most of the risk premium is associated with the slope factor, and individual risk prices depend on own past values, factor realizations, and past values of other risk prices, and are significantly related to the output gap, consumption, and the equity risk price. The absence of arbitrage opportunities is strongly…

  10. Effective detection of toxigenic Clostridium difficile by a two-step algorithm including tests for antigen and cytotoxin.

    Science.gov (United States)

    Ticehurst, John R; Aird, Deborah Z; Dam, Lisa M; Borek, Anita P; Hargrove, John T; Carroll, Karen C

    2006-03-01

    We evaluated a two-step algorithm for detecting toxigenic Clostridium difficile: an enzyme immunoassay for glutamate dehydrogenase antigen (Ag-EIA) and then, for antigen-positive specimens, a concurrent cell culture cytotoxicity neutralization assay (CCNA). Antigen-negative results were ≥99% predictive of CCNA negativity. Because the Ag-EIA reduced cell culture workload by approximately 75 to 80% and two-step testing was complete in ≤3 days, we decided that this algorithm would be effective; the overall effort was considerably less than if CCNA alone had been performed on all 5,887 specimens.
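
    The decision logic of the two-step algorithm reads naturally as code. The sketch below is an illustrative paraphrase of the record; the function name and report strings are invented, not from the paper.

```python
from typing import Optional

def two_step_c_difficile(ag_eia_positive: bool,
                         ccna_positive: Optional[bool] = None) -> str:
    """Step 1: GDH antigen EIA screen on stool; step 2: cell culture
    cytotoxicity neutralization assay (CCNA) only if the screen is positive."""
    if not ag_eia_positive:
        # Antigen-negative results were >=99% predictive of CCNA negativity.
        return "report negative; no CCNA needed"
    if ccna_positive is None:
        return "antigen-positive: reflex specimen to CCNA"
    return ("toxigenic C. difficile detected" if ccna_positive
            else "antigen-positive but toxin-negative")

print(two_step_c_difficile(False))                     # most specimens stop here
print(two_step_c_difficile(True, ccna_positive=True))  # confirmed toxigenic
```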

  11. Yield of Stool Culture with Isolate Toxin Testing versus a Two-Step Algorithm Including Stool Toxin Testing for Detection of Toxigenic Clostridium difficile

    OpenAIRE

    Reller, Megan E.; Lema, Clara A.; Perl, Trish M.; Cai, Mian; Ross, Tracy L.; Speck, Kathleen A.; Carroll, Karen C.

    2007-01-01

    We examined the incremental yield of stool culture (with toxin testing on isolates) versus our two-step algorithm for optimal detection of toxigenic Clostridium difficile. Per the two-step algorithm, stools were screened for C. difficile-associated glutamate dehydrogenase (GDH) antigen and, if positive, tested for toxin by a direct (stool) cell culture cytotoxicity neutralization assay (CCNA). In parallel, stools were cultured for C. difficile and tested for toxin by both indirect (isolate) C...

  12. Probabilistic seismic safety assessment of a CANDU 6 nuclear power plant including ambient vibration tests: Case study

    Energy Technology Data Exchange (ETDEWEB)

    Nour, Ali [Hydro Québec, Montréal, Québec H2L4P5 (Canada); École Polytechnique de Montréal, Montréal, Québec H3C3A7 (Canada); Cherfaoui, Abdelhalim; Gocevski, Vladimir [Hydro Québec, Montréal, Québec H2L4P5 (Canada); Léger, Pierre [École Polytechnique de Montréal, Montréal, Québec H3C3A7 (Canada)

    2016-08-01

    Highlights: • In this case study, the seismic PSA methodology adopted for a CANDU 6 is presented. • Ambient vibration testing to calibrate a 3D FEM and reduce uncertainties is performed. • A procedure for the development of FRS for the RB considering the wave incoherency effect is proposed. • A seismic fragility analysis for the RB is presented. - Abstract: Following the 2011 Fukushima Daiichi nuclear accident in Japan, there is worldwide interest in reducing uncertainties in the seismic safety assessment of existing nuclear power plants (NPPs). Within the scope of a Canadian refurbishment project of a CANDU 6 NPP put into service in 1983, structures and equipment must sustain a new seismic demand characterised by the uniform hazard spectrum (UHS) obtained from a site-specific study defined for a return period of 1/10,000 years. This UHS exhibits larger spectral ordinates in the high-frequency range than those used in design. To reduce modelling uncertainties as part of a seismic probabilistic safety assessment (PSA), Hydro-Québec developed a procedure using ambient vibration testing to calibrate a detailed 3D finite element model (FEM) of the containment and reactor building (RB). This calibrated FE model is then used for generating floor response spectra (FRS) based on ground motion time histories compatible with the UHS. Seismic fragility analyses of the reactor building (RB) and structural components are also performed in the context of a case study. Because the RB is founded on a large circular raft, it is possible to consider the effect of seismic wave incoherency to filter out the high-frequency content, mainly above 10 Hz, using the incoherency transfer function (ITF) method. This allows a significant reduction of the unnecessary conservatism in the resulting FRS, an important issue for an existing NPP. The proposed case study, and the related methodology using ambient vibration testing, is particularly useful to engineers involved in seismic re-evaluation of

  13. Effective Detection of Toxigenic Clostridium difficile by a Two-Step Algorithm Including Tests for Antigen and Cytotoxin

    OpenAIRE

    Ticehurst, John R.; Aird, Deborah Z.; Dam, Lisa M.; Borek, Anita P.; Hargrove, John T.; Carroll, Karen C.

    2006-01-01

    We evaluated a two-step algorithm for detecting toxigenic Clostridium difficile: an enzyme immunoassay for glutamate dehydrogenase antigen (Ag-EIA) and then, for antigen-positive specimens, a concurrent cell culture cytotoxicity neutralization assay (CCNA). Antigen-negative results were ≥99% predictive of CCNA negativity. Because the Ag-EIA reduced cell culture workload by ≈75 to 80% and two-step testing was complete in ≤3 days, we decided that this algorithm would be effective. Over 6 months...

  14. Register file soft error recovery

    Science.gov (United States)

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  15. Proofreading for word errors.

    Science.gov (United States)

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  16. On the systematic errors of cosmological-scale gravity tests using redshift-space distortion: non-linear effects and the halo bias

    Science.gov (United States)

    Ishikawa, Takashi; Totani, Tomonori; Nishimichi, Takahiro; Takahashi, Ryuichi; Yoshida, Naoki; Tonegawa, Motonari

    2014-10-01

    Redshift-space distortion (RSD) observed in galaxy redshift surveys is a powerful tool to test gravity theories on cosmological scales, but the systematic uncertainties must carefully be examined for future surveys with large statistics. Here we employ various analytic models of RSD and estimate the systematic errors on measurements of the structure growth-rate parameter, fσ8, induced by non-linear effects and the halo bias with respect to the dark matter distribution, by using halo catalogues from 40 realizations of 3.4 × 10^8 comoving h^-3 Mpc^3 cosmological N-body simulations. We consider hypothetical redshift surveys at redshifts z = 0.5, 1.35 and 2, and different minimum halo mass thresholds in the range of 5.0 × 10^11-2.0 × 10^13 h^-1 M⊙. We find that the systematic error of fσ8 is greatly reduced to ~5 per cent level, when a recently proposed analytical formula of RSD that takes into account the higher order coupling between the density and velocity fields is adopted, with a scale-dependent parametric bias model. Dependence of the systematic error on the halo mass, the redshift and the maximum wavenumber used in the analysis is discussed. We also find that the Wilson-Hilferty transformation is useful to improve the accuracy of likelihood analysis when only a small number of modes are available in power spectrum measurements.

  17. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Directory of Open Access Journals (Sweden)

    Sharmila Vaz

    Full Text Available The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States (US) samples are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187) from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study findings support the idea of using multiple informants (e.g. teacher and parent reports, not just the student), as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective minimum clinically important difference (MCID).

  18. Yield of stool culture with isolate toxin testing versus a two-step algorithm including stool toxin testing for detection of toxigenic Clostridium difficile.

    Science.gov (United States)

    Reller, Megan E; Lema, Clara A; Perl, Trish M; Cai, Mian; Ross, Tracy L; Speck, Kathleen A; Carroll, Karen C

    2007-11-01

    We examined the incremental yield of stool culture (with toxin testing on isolates) versus our two-step algorithm for optimal detection of toxigenic Clostridium difficile. Per the two-step algorithm, stools were screened for C. difficile-associated glutamate dehydrogenase (GDH) antigen and, if positive, tested for toxin by a direct (stool) cell culture cytotoxicity neutralization assay (CCNA). In parallel, stools were cultured for C. difficile and tested for toxin by both indirect (isolate) CCNA and conventional PCR if the direct CCNA was negative. The "gold standard" for toxigenic C. difficile was detection of C. difficile by the GDH screen or by culture and toxin production by direct or indirect CCNA. We tested 439 specimens from 439 patients. GDH screening detected all culture-positive specimens. The sensitivity of the two-step algorithm was 77% (95% confidence interval [CI], 70 to 84%), and that of culture was 87% (95% CI, 80 to 92%). PCR results correlated completely with those of CCNA testing on isolates (29/29 positive and 32/32 negative, respectively). We conclude that GDH is an excellent screening test and that culture with isolate CCNA testing detects an additional 23% of toxigenic C. difficile missed by direct CCNA. Since culture is tedious and also detects nontoxigenic C. difficile, we conclude that culture is most useful (i) when the direct CCNA is negative but a high clinical suspicion of toxigenic C. difficile remains, (ii) in the evaluation of new diagnostic tests for toxigenic C. difficile (where the best reference standard is essential), and (iii) in epidemiologic studies (where the availability of an isolate allows for strain typing and antimicrobial susceptibility testing).

  19. A new test statistic for climate models that includes field and spatial dependencies using Gaussian Markov random fields

    Science.gov (United States)

    Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel

    2016-07-01

    A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not alter significantly the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.
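
    A minimal sketch of the first-order neighbourhood structure the record describes: a conditional autoregressive (CAR-type) precision matrix Q = kappa * (D - rho * W) on a regular grid, with W the rook adjacency and D its row sums; rho < 1 keeps Q positive definite. The parameter values are hypothetical and this is a generic construction, not the authors' code.

```python
import numpy as np

def gmrf_precision(nrows: int, ncols: int, kappa: float = 1.0, rho: float = 0.95) -> np.ndarray:
    """Precision matrix Q = kappa * (D - rho * W) for a first-order (rook)
    neighbourhood on an nrows x ncols grid of field values."""
    n = nrows * ncols
    W = np.zeros((n, n))
    for r in range(nrows):
        for c in range(ncols):
            i = r * ncols + c
            for dr, dc in ((1, 0), (0, 1)):  # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < nrows and cc < ncols:
                    j = rr * ncols + cc
                    W[i, j] = W[j, i] = 1.0
    D = np.diag(W.sum(axis=1))
    return kappa * (D - rho * W)

Q = gmrf_precision(4, 5)
print(np.all(np.linalg.eigvalsh(Q) > 0))  # True: a valid Gaussian precision
```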

  20. Inventory of forest and rangeland resources, including forest stress. [Atlanta, Georgia, Black Hills, and Manitou, Colorado test sites

    Science.gov (United States)

    Heller, R. C.; Aldrich, R. C.; Weber, F. P.; Driscoll, R. S. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. Some current beetle-killed ponderosa pine can be detected on S190-B photography imaged over the Bear Lodge mountains in the Black Hills National Forest. Detections were made on SL-3 imagery (September 13, 1973) using a zoom lens microscope to view the photography. At this time correlations have not been made to all of the known infestation spots in the Bear Lodge mountains; rather, known infestations have been located on the SL-3 imagery. It was determined that the beetle-killed trees were current kills by stereo viewing of SL-3 imagery on one side and SL-2 on the other. A successful technique was developed for mapping current beetle-killed pine using MSS imagery from mission 247 flown by the C-130 over the Black Hills test site in September 1973. Color enhancement processing on the NASA/JSC, DAS system using three MSS channels produced an excellent quality detection map for current kill pine. More importantly it provides a way to inventory the dead trees by relating PCM counts to actual numbers of dead trees.

  1. Corrective Action Decision Document for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada: Revision 0, Including Errata Sheet

    Energy Technology Data Exchange (ETDEWEB)

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2004-04-01

    This Corrective Action Decision Document identifies the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's corrective action alternative recommendation for each of the corrective action sites (CASs) within Corrective Action Unit (CAU) 204: Storage Bunkers, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. An evaluation of analytical data from the corrective action investigation, review of current and future operations at each CAS, and a detailed comparative analysis of potential corrective action alternatives were used to determine the appropriate corrective action for each CAS. There are six CASs in CAU 204, which are all located between Areas 1, 2, 3, and 5 on the NTS. The No Further Action alternative was recommended for CASs 01-34-01, 02-34-01, 03-34-01, and 05-99-02; and a Closure in Place with Administrative Controls recommendation was the preferred corrective action for CASs 05-18-02 and 05-33-01. These alternatives were judged to meet all requirements for the technical components evaluated as well as applicable state and federal regulations for closure of the sites and will eliminate potential future exposure pathways to the contaminated media at CAU 204.

  2. Dynamic Testing and Research of Spindle Rotation Error

    Institute of Scientific and Technical Information of China (English)

    孙军; 黄圆; 秦显军; 钱彬彬

    2015-01-01

    To measure spindle rotation error and achieve dynamic separation of spindle rotation accuracy, a spindle rotation error dynamic-separation simulation program was developed in LabVIEW using mathematical statistics, and a LabVIEW-based spindle rotation error test system was designed on the same statistical basis. The system combines a CompactRIO embedded controller with LabVIEW programming and provides data acquisition, data processing and data display. Applied to the actual measurement of machine tool spindle rotation accuracy, the system yielded high-precision spindle rotation error data, providing technical support for the dynamic measurement of spindle rotation error.

  3. The value of a formula including haematocrit, blood urea and gender (HUGE) as a screening test for chronic renal insufficiency.

    Science.gov (United States)

    Alvarez-Gregori, J A; Robles, N R; Mena, C; Ardanuy, R; Jauregui, R; Macías-Núñez, J F

    2011-06-01

    Despite its increasing use in clinical practice, an estimated glomerular filtration rate (eGFR) below 60 ml/min/1.73 m2 does not by itself reliably identify chronic renal insufficiency (CRI); this motivated the HUGE formula. A formula including haematocrit, blood urea, and gender (HUGE) diagnoses CRI regardless of the variables of age, blood creatinine, creatinine clearance, or other eGFR estimates. The HUGE formula is: L = 2.505458 - (0.264418 x Hematocrit) + (0.118100 x Urea) [+ 1.383960 if male]. If L is a negative number the individual does not have CRI; if L is a positive number, CRI is present. Our data demonstrate that the HUGE formula is more reliable than MDRD and CKD-EPI, particularly in persons aged over 70. Our HUGE screening formula offers a straightforward, easily available and inexpensive method for differentiating between CRI and eGFR < 60 ml/min/1.73 m2, preventing a considerable number of healthy aged persons, as many as 1,700,000 in Spain and 2,600,000 in the U.K., from being excluded from clinical trials or given treatments contraindicated in CRI.
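
    The published formula is straightforward to run as a screening check. Below is a minimal sketch in Python; the units (haematocrit in percent, blood urea in mg/dL) and the helper names are illustrative assumptions to be verified against the original publication before any real use.

        def huge_score(hematocrit_pct: float, urea_mg_dl: float, male: bool) -> float:
            """HUGE score L as quoted in the abstract (units assumed)."""
            L = 2.505458 - 0.264418 * hematocrit_pct + 0.118100 * urea_mg_dl
            if male:
                L += 1.383960
            return L

        def has_cri(hematocrit_pct: float, urea_mg_dl: float, male: bool) -> bool:
            # Positive L -> CRI suspected; negative L -> no CRI.
            return huge_score(hematocrit_pct, urea_mg_dl, male) > 0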

  4. Effect of yoga practices on pulmonary function tests including transfer factor of lung for carbon monoxide (TLCO) in asthma patients.

    Science.gov (United States)

    Singh, Savita; Soni, Ritu; Singh, K P; Tandon, O P

    2012-01-01

    Prana is the energy; when this self-energizing force embraces the body with extension, expansion and control, it is pranayama. It may affect the milieu at the bronchioles and the alveoli, particularly at the alveolo-capillary membrane, to facilitate diffusion and transport of gases. It may also increase oxygenation at the tissue level. The aim of our study was to compare pulmonary functions and diffusion capacity in patients of bronchial asthma before and after a two-month yogic intervention. Sixty stable asthmatic patients were randomized into two groups, i.e., group 1 (yoga training group) and group 2 (control group). Each group included thirty patients. Lung functions were recorded for all patients at baseline and after two months. Group 1 subjects showed a statistically significant improvement in lung functions after yoga practice. Quality of life also increased significantly. It was concluded that pranayama and yoga breathing and stretching postures are used to increase respiratory stamina, relax the chest muscles, expand the lungs, raise energy levels, and calm the body.

  5. Impact of Sample Size and Variability on the Power and Type I Error Rates of Equivalence Tests: A Simulation Study

    Science.gov (United States)

    Rusticus, Shayna A.; Lovato, Chris Y.

    2014-01-01

    The question of equivalence between two or more groups is frequently of interest to many applied researchers. Equivalence testing is a statistical method designed to provide evidence that groups are comparable by demonstrating that the mean differences found between groups are small enough that they are considered practically unimportant. Few…
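
    Equivalence testing is commonly operationalized as two one-sided tests (TOST). The sketch below illustrates that logic generically; it is not the simulation code of the study, and the equal-variance pooling and the (low, high) equivalence bounds are assumptions.

        import numpy as np
        from scipy import stats

        def tost_independent(x, y, low, high, alpha=0.05):
            """Two one-sided t-tests: equivalent if the mean difference is
            significantly above `low` AND significantly below `high`."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            nx, ny = len(x), len(y)
            diff = x.mean() - y.mean()
            sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
            se = np.sqrt(sp2 * (1 / nx + 1 / ny))
            df = nx + ny - 2
            p_lower = stats.t.sf((diff - low) / se, df)    # H0: diff <= low
            p_upper = stats.t.cdf((diff - high) / se, df)  # H0: diff >= high
            return max(p_lower, p_upper) < alpha, (p_lower, p_upper)

        rng = np.random.default_rng(0)
        a, b = rng.normal(50, 8, 120), rng.normal(50.5, 8, 120)
        print(tost_independent(a, b, low=-2.0, high=2.0))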

  6. Influence of indexing errors on dynamic response of spur gear pairs

    Science.gov (United States)

    Inalpolat, M.; Handschuh, M.; Kahraman, A.

    2015-08-01

    In this study, a dynamic model of a spur gear pair is employed to investigate the influence of gear tooth indexing errors on the dynamic response. This transverse-torsional dynamic model includes periodically-time varying gear mesh stiffness and nonlinearities caused by tooth separations in resonance regions. With quasi-static transmission error time traces as the primary excitation, the model predicts frequency-domain dynamic mesh force and dynamic transmission error spectra. These long-period quasi-static transmission error time traces are measured using unity-ratio spur gear pairs having certain intentional indexing errors. A special test setup with dedicated instrumentation for the measurement of quasi-static transmission error is employed to perform a number of experiments with gears having deterministic spacing errors at one or two teeth of the test gear only and random spacing errors where all of the test gear teeth have a random distribution of errors as in a typical production gear.

  7. 40 CFR 1048.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Science.gov (United States)

    2010-07-01

    ...) For engines from an engine family that will be used only in variable-speed applications, use one of... you will not restrict an engine family to constant-speed or variable-speed applications. (4) Use a... [The excerpt ends with the section's C2 duty-cycle table: columns for mode number, engine speed, torque (percent), and weighting factor; mode 1 runs at maximum test speed, 25 percent torque, 0.06 weighting; further rows are cut off.]

  8. Refractive Errors

    Science.gov (United States)

    ... does the eye focus light? In order to see clearly, light rays from an object must focus onto the ... The refractive errors are: myopia, hyperopia and astigmatism [See figures 2 and 3]. What is hyperopia (farsightedness)? Hyperopia occurs when light rays focus behind the retina (because the eye ...

  9. Medication Errors

    Science.gov (United States)

    FDA resources on medication errors, including the Draft Guidance for Industry: Best Practices in Developing Proprietary Names for Drugs (PDF - 279KB), Division of Drug Information contact details ((301) 796-3400, druginfo@fda.hhs.gov), and related resources from the Agency for Healthcare Research and Quality.

  10. Space shuttle orbiter avionics software: Post review report for the entry FACI (First Article Configuration Inspection). [including orbital flight tests integrated system

    Science.gov (United States)

    Markos, H.

    1978-01-01

    Status of the computer programs dealing with space shuttle orbiter avionics is reported. Specific topics covered include: delivery status; SSW software; SM software; DL software; GNC software; level 3/4 testing; level 5 testing; performance analysis; SDL readiness for the entry first article configuration inspection; and verification assessment.

  11. Design of a Channel Error Simulator using Virtual Instrument Techniques for the Initial Testing of TCP/IP and SCPS Protocols

    Science.gov (United States)

    Horan, Stephen; Wang, Ru-Hai

    1999-01-01

    There exists a need for designers and developers to have a method to conveniently test a variety of communications parameters for an overall system design. This is as true when testing network protocols as when testing modulation formats. In this report, we discuss a means of providing a networking test device specifically designed to be used for space communications. This test device is a PC-based Virtual Instrument (VI) programmed using the LabVIEW(TM) version 5 software suite developed by National Instruments(TM). The instrument was designed to be portable and usable by others without special, additional equipment. The programming was designed to replicate a VME-based hardware module developed earlier at New Mexico State University (NMSU) and to provide expanded capabilities exceeding the baseline configuration existing in that module. This report describes the design goals for the VI module, follows with a description of the design of the VI instrument, and then describes the validation tests run on the VI. An application of the error-generating VI to networking protocols is then given.
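
    The VI's core function, corrupting a data stream at a configurable rate, is easy to mimic in software. A minimal stand-in assuming independent bit errors at a fixed bit error rate; the hardware module's actual error models (e.g. burst errors) are not represented here.

        import random

        def corrupt(data: bytes, ber: float, seed=None) -> bytes:
            """Flip each bit independently with probability `ber`."""
            rng = random.Random(seed)
            out = bytearray(data)
            for i in range(len(out)):
                for bit in range(8):
                    if rng.random() < ber:
                        out[i] ^= 1 << bit
            return bytes(out)

        # Placed between two protocol endpoints, this emulates a noisy space
        # link; a BER of 1e-3 corrupts a few bits per kilobyte on average.
        noisy = corrupt(b"TCP/IP test frame" * 10, ber=1e-3, seed=42)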

  12. Error analysis and passage dependency of test items from a standardized test of multiple-sentence reading comprehension for aphasic and non-brain-damaged adults.

    Science.gov (United States)

    Nicholas, L E; Brookshire, R H

    1987-11-01

    Aphasic and non-brain-damaged adults were tested with two forms of the Nelson Reading Skills Test (NRST; Hanna, Schell, & Schreiner, 1977). The NRST is a standardized measure of silent reading for students in Grades 3 through 9 and assesses comprehension of information at three levels of inference (literal, translational, and higher level). Subjects' responses to NRST test items were evaluated to determine if their performance differed on literal, translational, and higher level items. Subjects' performance was also evaluated to determine the passage dependency of NRST test items--the extent to which readers had to rely on information in the NRST reading passages to answer test items. Higher level NRST test items (requiring complex inferences) were significantly more difficult for both non-brain-damaged and aphasic adults than literal items (not requiring inferences) or translational items (requiring simple inferences). The passage dependency of NRST test items for aphasic readers was higher than those reported by Nicholas, MacLennan, and Brookshire (1986) for multiple-sentence reading tests designed for aphasic adults. This suggests that the NRST is a more valid measure of the multiple-sentence reading comprehension of aphasic adults than the other tests evaluated by Nicholas et al. (1986).

  13. Controlling Type I Error Rate in Evaluating Differential Item Functioning for Four DIF Methods: Use of Three Procedures for Adjustment of Multiple Item Testing

    Science.gov (United States)

    Kim, Jihye

    2010-01-01

    In DIF studies, a Type I error refers to the mistake of identifying non-DIF items as DIF items, and a Type I error rate refers to the proportion of Type I errors in a simulation study. The possibility of making a Type I error in DIF studies is always present and high possibility of making such an error can weaken the validity of the assessment.…
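
    Procedures for adjusting multiple item tests are typically of the Bonferroni/Holm/Benjamini-Hochberg family. The sketch below shows generic versions of such adjustments applied to per-item DIF p-values; these illustrate the idea and are not necessarily the three procedures compared in the study.

        import numpy as np

        def adjust_pvalues(pvals, method="bonferroni"):
            """Adjust per-item p-values for multiple testing."""
            p = np.asarray(pvals, dtype=float)
            m = len(p)
            if method == "bonferroni":
                return np.minimum(p * m, 1.0)
            if method == "holm":                      # step-down FWER control
                adj, running = np.empty(m), 0.0
                for rank, idx in enumerate(np.argsort(p)):
                    running = max(running, (m - rank) * p[idx])
                    adj[idx] = min(running, 1.0)
                return adj
            if method == "bh":                        # controls FDR, not FWER
                adj, running = np.empty(m), 1.0
                for rank, idx in enumerate(np.argsort(p)[::-1]):
                    running = min(running, p[idx] * m / (m - rank))
                    adj[idx] = running
                return adj
            raise ValueError(method)

        print(adjust_pvalues([0.001, 0.012, 0.03, 0.4], "holm"))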

  14. A statistical model for point-based target registration error with anisotropic fiducial localizer error.

    Science.gov (United States)

    Wiles, Andrew D; Likholyot, Alexander; Frantz, Donald D; Peters, Terry M

    2008-03-01

    Error models associated with point-based medical image registration problems were first introduced in the late 1990s. The concepts of fiducial localizer error, fiducial registration error, and target registration error are commonly used in the literature. The model for estimating the target registration error at a position r in a coordinate frame defined by a set of fiducial markers rigidly fixed relative to one another is ubiquitous in the medical imaging literature. The model has also been extended to simulate the target registration error at the point of interest in optically tracked tools. However, the model is limited to describing the error in situations where the fiducial localizer error is assumed to have an isotropic normal distribution in R^3. In this work, the model is generalized to include a fiducial localizer error that has an anisotropic normal distribution. Similar to the previous models, the root-mean-square statistic rms_TRE is provided along with an extension that provides the covariance Sigma_TRE. The new model is verified using a Monte Carlo simulation and a set of statistical hypothesis tests. Finally, the differences between the two assumptions, isotropic and anisotropic, are discussed within the context of their use in (1) optical tool tracking simulation and (2) image registration.
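
    The anisotropic model can be checked, as the authors do, against Monte Carlo simulation. A compact sketch of such a check, with an illustrative fiducial geometry and an FLE assumed larger along z (all numbers invented for the example):

        import numpy as np

        def register(src, dst):
            """Least-squares rigid registration (Kabsch/Umeyama)."""
            cs, cd = src.mean(0), dst.mean(0)
            U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            return R, cd - R @ cs

        rng = np.random.default_rng(0)
        fid = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100.0]])
        target = np.array([50.0, 50.0, 50.0])
        fle_sigma = np.array([0.1, 0.1, 0.4])     # anisotropic FLE (mm)

        tre = []
        for _ in range(10_000):
            R, t = register(fid, fid + rng.normal(0.0, fle_sigma, fid.shape))
            tre.append(np.linalg.norm(R @ target + t - target))
        print("rms TRE at target: %.3f mm" % np.sqrt(np.mean(np.square(tre))))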

  15. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory]; Anderson, Mark C [Los Alamos National Laboratory]; Habib, Salman [Los Alamos National Laboratory]; Klein, Richard [Los Alamos National Laboratory]; Berliner, Mark [OHIO STATE UNIV.]; Covey, Curt [LLNL]; Ghattas, Omar [UNIV OF TEXAS]; Graziani, Carlo [UNIV OF CHICAGO]; Seager, Mark [LLNL]; Sefcik, Joseph [LLNL]; Stark, Philip [UC/BERKELEY]; Stewart, James [SNL]

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  16. Error analysis for a laser differential confocal radius measurement system.

    Science.gov (United States)

    Wang, Xu; Qiu, Lirong; Zhao, Weiqian; Xiao, Yang; Wang, Zhongyu

    2015-02-10

    In order to further improve the measurement accuracy of the laser differential confocal radius measurement system (DCRMS) developed previously, a DCRMS error compensation model is established for the error sources, including laser source offset, test sphere position adjustment offset, test sphere figure, and motion error, based on analyzing the influences of these errors on the measurement accuracy of radius of curvature. Theoretical analyses and experiments indicate that the expanded uncertainty of the DCRMS is reduced to U = 0.13 μm + 0.9 ppm·R (k = 2) through the error compensation model. The error analysis and compensation model established in this study can provide the theoretical foundation for improving the measurement accuracy of the DCRMS.
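
    The quoted budget makes the expanded uncertainty a one-line computation. A small helper, assuming R is supplied in millimetres and U is wanted in micrometres:

        def expanded_uncertainty_um(radius_mm: float) -> float:
            """U = 0.13 um + 0.9 ppm * R, at coverage factor k = 2."""
            return 0.13 + 0.9e-6 * (radius_mm * 1e3)

        # A 500 mm radius of curvature: 0.13 um + 0.45 um = 0.58 um (k = 2).
        print(expanded_uncertainty_um(500.0))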

  17. Tests for gene-environment interaction from case-control data: a novel study of type I error, power and designs.

    Science.gov (United States)

    Mukherjee, Bhramar; Ahn, Jaeil; Gruber, Stephen B; Rennert, Gad; Moreno, Victor; Chatterjee, Nilanjan

    2008-11-01

    To evaluate the risk of a disease associated with the joint effects of genetic susceptibility and environmental exposures, epidemiologic researchers often test for non-multiplicative gene-environment effects from case-control studies. In this article, we present a comparative study of four alternative tests for interactions: (i) the standard case-control method; (ii) the case-only method, which requires an assumption of gene-environment independence for the underlying population; (iii) a two-step method that decides between the case-only and case-control estimators depending on a statistical test for the gene-environment independence assumption and (iv) a novel empirical-Bayes (EB) method that combines the case-control and case-only estimators depending on the sample size and strength of the gene-environment association in the data. We evaluate the methods in terms of integrated Type I error and power, averaged with respect to varying scenarios for gene-environment association that are likely to appear in practice. These studies suggest that the novel EB procedure overall is a promising approach for detection of gene-environment interactions from case-control studies. In particular, the EB procedure, unlike the case-only or two-step methods, can closely maintain a desired Type I error under realistic scenarios of gene-environment dependence and yet can be substantially more powerful than the traditional case-control analysis when the gene-environment independence assumption is satisfied, exactly or approximately. Our studies also reveal potential utility of some non-traditional case-control designs that sample controls at a smaller rate than cases. Apart from the simulation studies, we also illustrate the different methods by analyzing interactions of two commonly studied genes, N-acetyl transferase type 2 and glutathione s-transferase M1, with smoking and dietary exposures, in a large case-control study of colorectal cancer.
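
    The EB idea is a data-adaptive compromise between the two estimators. The sketch below shows one simple shrinkage form in that spirit; the paper's exact weights differ, so the weighting and the inputs `var_cc` and `delta_ge` here are illustrative assumptions.

        def eb_interaction(beta_cc, var_cc, beta_co, delta_ge):
            """Shrink toward the efficient case-only estimate when the
            control-group gene-environment log-odds-ratio `delta_ge` is
            near zero; fall back to case-control when it is large."""
            w = var_cc / (var_cc + delta_ge ** 2)
            return w * beta_co + (1.0 - w) * beta_cc

        # Independence plausible (delta_ge ~ 0): essentially case-only.
        print(eb_interaction(beta_cc=0.40, var_cc=0.04, beta_co=0.30, delta_ge=0.02))
        # Strong G-E dependence: essentially case-control.
        print(eb_interaction(beta_cc=0.40, var_cc=0.04, beta_co=0.30, delta_ge=0.80))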

  18. The two types of error in hypothesis testing and their Monte Carlo simulation

    Institute of Scientific and Technical Information of China (English)

    刘堂勇

    2013-01-01

    Objective: Mastering the two types of error is the key to applying hypothesis-testing methods properly, but their high level of abstraction makes them hard to fully grasp; this paper uses an intuitive approach to deepen that understanding. Methods: The widely used Monte Carlo simulation method was employed, with results obtained directly from simulated data. Results: The simulations show that the two types of error in hypothesis testing trade off against each other, and that both types of error vary inversely with sample size. Conclusion: The simulations indicate that current hypothesis-testing methods still have shortcomings in distinguishing small differences and need further improvement.
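
    The relationships the paper simulates are easy to reproduce. A minimal Monte Carlo sketch with a two-sample t-test; the effect size, alpha and replication count are arbitrary choices for illustration.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def rejection_rate(n, true_delta, n_sim=5_000, alpha=0.05):
            """true_delta = 0 -> empirical Type I error rate;
            true_delta != 0 -> power, i.e. 1 - Type II error rate."""
            hits = 0
            for _ in range(n_sim):
                x = rng.normal(0.0, 1.0, n)
                y = rng.normal(true_delta, 1.0, n)
                hits += stats.ttest_ind(x, y).pvalue < alpha
            return hits / n_sim

        for n in (20, 80, 320):
            print(n, rejection_rate(n, 0.0), rejection_rate(n, 0.3))
        # Type I stays near alpha while Type II shrinks as n grows; lowering
        # alpha trades Type I for Type II, the see-saw described above.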

  19. On the Systematic Errors of Cosmological-Scale Gravity Tests using Redshift Space Distortion: Non-linear Effects and the Halo Bias

    CERN Document Server

    Ishikawa, Takashi; Nishimichi, Takahiro; Takahashi, Ryuichi; Yoshida, Naoki; Tonegawa, Motonari

    2013-01-01

    Redshift space distortion (RSD) observed in galaxy redshift surveys is a powerful tool to test gravity theories on cosmological scales, but the systematic uncertainties must carefully be examined for future surveys with large statistics. Here we employ various analytic models of RSD and estimate the systematic errors on measurements of the structure growth-rate parameter, fσ_8, induced by non-linear effects and the halo bias with respect to the dark matter distribution, by using halo catalogues from 40 realisations of cosmological N-body simulations of comoving volume 3.4 × 10^8 h^-3 Mpc^3. We consider hypothetical redshift surveys at redshifts z = 0.5, 1.35 and 2, and different minimum halo mass thresholds in the range of 5.0 × 10^11 to 2.0 × 10^13 h^-1 M_sun. We find that the systematic error of fσ_8 is greatly reduced to the ~4 per cent level when a recently proposed analytical formula of RSD that takes into account the higher-order coupling between the density and velocity fields is ado...

  20. Errors in neuroradiology.

    Science.gov (United States)

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4 % of radiologic interpretations in daily practice contain errors, and discrepancy rates of 2-20 % of reports have been reported. Fortunately, most of them are minor errors or, if serious, are found and corrected promptly; diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors; errors in interpretation; failure to suggest the next appropriate procedure; and failure to communicate in a timely and clinically appropriate manner. Misdiagnosis/misinterpretation rates rise in the emergency setting and in the first stages of the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcifications and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and, finally, neuroradiological emergencies. To minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review each diagnostic study after interpreting it, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately and in a timely fashion directly to the treatment team.

  1. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review article explains the definition of medication errors, the scope of the medication-error problem, the types of medication errors, their common causes, and the monitoring, consequences, prevention and management of medication errors, supported by clear tables that are easy to understand.

  2. System-related factors contributing to diagnostic errors.

    Science.gov (United States)

    Thammasitboon, Satid; Thammasitboon, Supat; Singhal, Geeta

    2013-10-01

    Several studies in primary care, internal medicine, and emergency departments show that rates of errors in test requests and result interpretations are unacceptably high and translate into missed, delayed, or erroneous diagnoses. Ineffective follow-up of diagnostic test results could lead to patient harm if appropriate therapeutic interventions are not delivered in a timely manner. The frequency of system-related factors that contribute directly to diagnostic errors depends on the types and sources of errors involved. Recent studies reveal that the errors and patient harm in the diagnostic testing loop have occurred mainly at the pre- and post-analytic phases, which are directed primarily by clinicians who may have limited expertise in the rapidly expanding field of clinical pathology. These errors may include inappropriate test requests, failure or delay in receiving results, and erroneous interpretation and application of test results to patient care. Efforts to address system-related factors often focus on technical errors in laboratory testing or failures in delivery of intended treatment. System-improvement strategies related to diagnostic errors tend to focus on technical aspects of laboratory medicine or delivery of treatment after completion of the diagnostic process. System failures and cognitive errors, more often than not, coexist and together contribute to errors in the diagnostic process and in laboratory testing. The use of highly structured hand-off procedures and pre-planned follow-up for any diagnostic test could improve the efficiency and reliability of the follow-up process. Many feedback pathways should be established so that providers can learn if or when a diagnosis is changed. Patients can participate in the effort to reduce diagnostic errors. Providers should educate their patients about diagnostic probabilities and uncertainties. The patient-safety strategies focusing on the interface between diagnostic system and therapeutic

  3. Evaluation of the Repeatability of the Delta Q Duct Leakage Testing Technique Including Investigation of Robust Analysis Techniques and Estimates of Weather Induced Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Dickerhoff, Darryl; Walker, Iain

    2008-08-01

    found for the pressure station approach. Walker and Dickerhoff also included estimates of DeltaQ test repeatability based on the results of field tests where two houses were tested multiple times. The two houses were quite leaky (20-25 Air Changes per Hour at 50 Pa (0.2 in. water) (ACH50)) and were located in the San Francisco Bay area. One house was tested on a calm day and the other on a very windy day. Results were also presented for two additional houses that were tested by other researchers in Minneapolis, MN and Madison, WI, that had very tight envelopes (1.8 and 2.5 ACH50). These tight houses had internal duct systems and were tested without operating the central blower--sometimes referred to as control tests. The standard deviations between the multiple tests for all four houses were found to be about 1% of the envelope air flow at 50 Pa (0.2 in. water) (Q50), which led to the suggestion of this as a rule of thumb for estimating DeltaQ uncertainty. Because DeltaQ is based on measuring envelope air flows, it makes sense for uncertainty to scale with envelope leakage. However, these tests were on a limited data set, and one of the objectives of the current study is to increase the number of tested houses. This study focuses on answering two questions: (1) What is the uncertainty associated with changes in weather (primarily wind) conditions during DeltaQ testing? (2) How can these uncertainties be reduced? The first question addresses issues of repeatability. To study this, five houses were tested as many times as possible over a day. Weather data were recorded on-site--including the local windspeed. The results from these five houses were combined with those from the two Bay Area homes in the previous studies. The variability of the tests (represented by the standard deviation) is the repeatability of the test method for that house under the prevailing weather conditions. Because the testing was performed over a day, a wide range of wind speeds was achieved following

  4. Modification of the BAX Salmonella test kit to include a hot start functionality (modification of AOAC Official Method 2003.09).

    Science.gov (United States)

    Wallace, F Morgan; DiCosimo, Deana; Farnum, Andrew; Tice, George; Andaloro, Bridget; Davis, Eugene; Burns, Frank R

    2011-01-01

    In 2010, the BAX System PCR assay for Salmonella was modified to include a hot start functionality designed to keep the reaction enzyme inactive until PCR begins. To extend the assay's Official Methods of Analysis status to cover this procedural modification, an evaluation was conducted on four food types that were simultaneously analyzed with the BAX System and either the U.S. Food and Drug Administration's Bacteriological Analytical Manual or the U.S. Department of Agriculture-Food Safety and Inspection Service Microbiology Laboratory Guidebook reference method for detecting Salmonella. Identical performance between the BAX System method and the reference methods was observed. Additionally, lysates were analyzed using both the BAX System Classic and BAX System Q7 instruments, with identical results on both platforms for all samples tested. Of the 100 samples analyzed, 34 samples were positive by both the BAX System and reference methods, and 66 samples were negative by both the BAX System and reference methods, demonstrating 100% correlation. No instrument platform variation was observed. Additional inclusivity and exclusivity testing demonstrated the modified test kit to be 100% accurate in evaluating test panels of 352 Salmonella strains and 46 non-Salmonella strains.

  5. Error Control for Network-on-Chip Links

    CERN Document Server

    Fu, Bo

    2012-01-01

    As technology scales into the nanoscale regime, it is impossible to guarantee a perfect hardware design. Moreover, if the requirement of 100% correctness in hardware can be relaxed, the cost of manufacturing, verification, and testing will be significantly reduced. Many approaches have been proposed to address the reliability problem of on-chip communications. This book focuses on the use of error control codes (ECCs) to improve on-chip interconnect reliability. Coverage includes a detailed description of key issues in NOC error control faced by circuit and system designers, as well as practical error control techniques to minimize the impact of these errors on system performance. Provides a detailed background on the state of error control methods for on-chip interconnects; Describes the use of more complex concatenated codes such as Hamming Product Codes with Type-II HARQ, while emphasizing integration techniques for on-chip interconnect links; Examines energy-efficient techniques for integrating multiple error...
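
    The lightest-weight ECC usually considered for on-chip links is a single-error-correcting Hamming code. A self-contained Hamming(7,4) sketch for illustration; the concatenated Hamming-product/Type-II HARQ schemes covered by the book are more elaborate.

        import numpy as np

        G = np.array([[1, 0, 0, 0, 0, 1, 1],      # generator, systematic form
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 1, 1, 0],
                      [0, 0, 0, 1, 1, 1, 1]])
        H = np.array([[0, 1, 1, 1, 1, 0, 0],      # parity-check matrix
                      [1, 0, 1, 1, 0, 1, 0],
                      [1, 1, 0, 1, 0, 0, 1]])

        def encode(nibble):
            return np.mod(nibble @ G, 2)

        def decode(word):
            syndrome = np.mod(H @ word, 2)
            if syndrome.any():                     # nonzero syndrome: locate flip
                col = int(np.where((H.T == syndrome).all(axis=1))[0][0])
                word = word.copy()
                word[col] ^= 1
            return word[:4]                        # data bits (systematic code)

        data = np.array([1, 0, 1, 1])
        cw = encode(data)
        cw[5] ^= 1                                 # inject a single-bit link error
        assert (decode(cw) == data).all()          # corrected transparently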

  6. Simulation testing the robustness of stock assessment models to error: some results from the ICES strategic initiative on stock assessment methods

    DEFF Research Database (Denmark)

    Deroba, J. J.; Butterworth, D. S.; Methot, R. D.

    2015-01-01

    The World Conference on Stock Assessment Methods (July 2013) included a workshop on testing assessment methods through simulations. The exercise was made up of two steps applied to datasets from 14 representative fish stocks from around the world. Step 1 involved applying stock assessments... -testing and cross-testing of models are a useful diagnostic approach, and suggested that estimates in the most recent years of time-series were the least robust. Results from the simulation exercise provide a basis for guidance on future large-scale simulation experiments and demonstrate the need for strategic... investments in the evaluation and development of stock assessment methods...

  7. Political violence and child adjustment in Northern Ireland: Testing pathways in a social ecological model including single and two-parent families

    OpenAIRE

    Cummings, E. Mark; Schermerhorn, Alice C.; Merrilees, Christine E.; Goeke-Morey, Marcie C.; Shirlow, Peter; Cairns, Ed

    2010-01-01

    Moving beyond simply documenting that political violence negatively impacts children, a social ecological hypothesis for relations between political violence and child outcomes was tested. Participants were 700 mother-child (M = 12.1 years, SD = 1.8) dyads from 18 working class, socially deprived areas in Belfast, Northern Ireland, including single- and two-parent families. Sectarian community violence was associated with elevated family conflict and children’s reduced security about multiple aspe...

  8. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    Science.gov (United States)

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  9. How does pharmacogenetic testing alter the treatment course and patient response for chronic-pain patients in comparison with the current "trial-and-error" standard of care?

    Science.gov (United States)

    DeFeo, Kelly; Sykora, Kristen; Eley, Susan; Vincent, Debra

    2014-10-01

    To evaluate whether pharmacogenetic testing (PT) holds value for pain-management practitioners by identifying the potential applications of pharmacogenetic research as well as applications in practice. A review of the literature was conducted utilizing the databases EBSCOhost, Biomedical Reference Collection, CINAHL, Health Business: Full Text, Health Source: Nursing/Academic Edition, and MEDLINE with the keywords personalized medicine, cytochrome P450, and pharmacogenetics. Chronic-pain patients are among the most challenging to manage medically. Often experiencing persistent, life-altering pain, they might also have oncologic and psychological comorbidities that can further complicate their management. One-step in-office PT is now widely available to optimize management of complicated patients and effectively remove the "trial-and-error" process of medication therapy. Practitioners must be familiar with the genetic determinants that affect a patient's response to medications in order to decrease preventable morbidity and mortality associated with drug-drug and patient-drug interactions, and to provide cost-effective care through avoidance of inappropriate medications. Improved pain management will improve patient outcomes and satisfaction. ©2014 American Association of Nurse Practitioners.

  10. Processor register error correction management

    Science.gov (United States)

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.

  11. Smoothing error pitfalls

    Science.gov (United States)

    von Clarmann, T.

    2014-09-01

    ... the effect of smoothing implied by the finite grid on which the measurements are compared cancels out when the difference is calculated. If the effect of a retrieval constraint is to be diagnosed on a grid finer than the native grid of the retrieval by means of the smoothing error, the latter must be evaluated directly on the fine grid, using an ensemble covariance matrix which includes all variability on the fine grid. Ideally, the averaging kernels needed should be calculated directly on the finer grid, but if the grid of the original averaging kernels allows for representation of all the structures the instrument is sensitive to, then their interpolation can be an adequate approximation.
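
    The quantity at issue is the smoothing-error covariance S = (A - I) Sigma (A - I)^T, which the text says must be evaluated on the fine grid. A toy numerical sketch, with an assumed exponential ensemble covariance and an assumed Gaussian averaging kernel:

        import numpy as np

        n = 50
        levels = np.linspace(0, 50, n)             # fine retrieval grid (km)
        # Ensemble covariance including all fine-grid variability (assumed).
        Sigma = np.exp(-np.abs(levels[:, None] - levels[None, :]) / 5.0)

        # Toy averaging kernel: a row-normalised Gaussian smoother, ~8 km wide.
        A = np.exp(-0.5 * ((levels[:, None] - levels[None, :]) / 8.0) ** 2)
        A /= A.sum(axis=1, keepdims=True)

        S = (A - np.eye(n)) @ Sigma @ (A - np.eye(n)).T
        print("smoothing-error 1-sigma per level:", np.sqrt(np.diag(S))[:5])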

  12. Traces of dissolved particles, including coccoliths, in the tests of agglutinated foraminifera from the Challenger Deep (10,897 m water depth, western equatorial Pacific)

    Science.gov (United States)

    Gooday, A. J.; Uematsu, K.; Kitazato, H.; Toyofuku, T.; Young, J. R.

    2010-02-01

    We examined four multilocular agglutinated foraminiferan tests from the Challenger Deep, the deepest point in the world's oceans and well below the depth at which biogenic and most detrital minerals disappear from the sediment. The specimens represent undescribed species. Three are trochamminaceans in which imprints and other traces of dissolved agglutinated particles are visible in the orange or yellowish organic test lining. In Trochamminacean sp. A, a delicate meshwork of organic cement forms ridges between the grain impressions. The remnants of test particles include organic structures identifiable as moulds of coccoliths produced by the genus Helicosphaera. Their random alignment suggests that they were agglutinated individually rather than as fragments of a coccosphere. Trochamminacean sp. C incorporates discoidal structures with a central hole; these probably represent the proximal sides of isolated distal shields of another coccolith species, possibly Hayaster perplexus. Imprints of planktonic foraminiferan test fragments are also present in both these trochamminaceans. In Trochamminacean sp. B, the test surface is densely pitted with deep, often angular imprints ranging from roughly equidimensional to rod-shaped. The surfaces are either smooth, or have prominent longitudinal striations, probably made by cleavage traces. We presume these imprints represent mineral grains of various types that subsequently dissolved. X-ray microanalyses reveal strong peaks for Ca associated with grain impressions and coccolith remains in Trochamminacean sp. C. Minor peaks for this element are associated with coccolith remains and planktonic foraminiferan imprints in Trochamminacean sp. A. These Ca peaks possibly originate from traces of calcite remaining on the test surfaces. Agglutinated particles, presumably clay minerals, survive only in the fourth specimen ('Textularia' sp.). Here, the final 4-5 chambers comprise a pavement of small, irregularly shaped grains with flat

  13. Engaging with learners’ errors when teaching mathematics

    Directory of Open Access Journals (Sweden)

    Ingrid Sapire

    2016-05-01

    Full Text Available Teachers come across errors not only in tests but also in their mathematics classrooms virtually every day. When they respond to learners’ errors in their classrooms, during or after teaching, teachers are actively carrying out formative assessment. In South Africa the Annual National Assessment, a written test under the auspices of the Department of Basic Education, requires that teachers use learner data diagnostically. This places a new and complex cognitive demand on teachers’ pedagogical content knowledge. We argue that teachers’ involvement in, and application of, error analysis is an integral aspect of teacher knowledge. The Data Informed Practice Improvement Project was one of the first attempts in South Africa to include teachers in a systematic process of interpretation of learners’ performance data. In this article we analyse video data of teachers’ engagement with errors during interactions with learners in their classrooms and in one-on-one interviews with learners (17 lessons and 13 interviews. The schema of teachers’ knowledge of error analysis and the complexity of its application are discussed in relation to Ball’s domains of knowledge and Hugo’s explanation of the relation between cognitive and pedagogical loads. The analysis suggests that diagnostic assessment requires teachers to focus their attention on the germane load of the task and this in turn requires awareness of error and the use of specific probing questions in relation to learners’ diagnostic reasoning. Quantitative and qualitative data findings show the difficulty of this activity. For the 62 teachers who took part in this project, the demands made by diagnostic assessment exceeded their capacity, resulting in many instances (mainly in the classroom where teachers ignored learners’ errors or dealt with them partially.

  14. Sex typing through the Amelogenin test: Methods and possible pitfalls

    Directory of Open Access Journals (Sweden)

    F. Francès

    2008-04-01

    Full Text Available Sex typing of biological evidence is crucial in forensic science in general, and in criminal investigation in particular. The amelogenin, a protein encoded on the sex chromosomes, has been used for this purpose since the last decade of the past century. There are sequence and size divergences between the X- and Y-encoded alleles of this gene (AMELX and AMELY). This is the basis of its wide use in forensic science as a genetic sex-typing test. However, cases have been published in which the amelogenin test outcome does not correspond with the legal (official) sex of the individual. The present work offers a review of the published protocols for the amelogenin sex-typing test, locating the most commonly amplified regions of the gene, as well as the techniques used for detecting the amplified AMELX and AMELY fragments. Finally, it analyses the conditions under which the amelogenin test can disagree with the phenotypic sex of the individual, which must be taken into account to avoid potentially serious errors in the course of forensic investigation.

  15. High-Speed Wind-Tunnel Tests of a Model of the Lockheed YP-80A Airplane Including Correlation with Flight Tests and Tests of Dive-Recovery Flaps

    Science.gov (United States)

    Cleary, Joseph W.; Gray, Lyle J.

    1947-01-01

    This report contains the results of tests of a 1/3-scale model of the Lockheed YP-80A "Shooting Star" airplane and a comparison of drag, maximum lift coefficient, and elevator angle required for level flight as measured in the wind tunnel and in flight. Included in the report are the general aerodynamic characteristics of the model and of two types of dive-recovery flaps, one at several positions along the chord on the lower surface of the wing and the other on the lower surface of the fuselage. The results show good agreement between the flight and wind-tunnel measurements at all Mach numbers. The results indicate that the YP-80A is controllable in pitch by the elevators to a Mach number of at least 0.85. The fuselage dive-recovery flaps are effective for producing a climbing moment and increasing the drag at Mach numbers up to at least 0.8. The wing dive-recovery flaps are most effective for producing a climbing moment at 0.75 Mach number. At 0.85 Mach number, their effectiveness is approximately 50 percent of the maximum. The optimum position for the wing dive-recovery flaps to produce a climbing moment is at approximately 35 percent of the chord.

  16. Estimating implementation and operational costs of an integrated tiered CD4 service including laboratory and point of care testing in a remote health district in South Africa.

    Directory of Open Access Journals (Sweden)

    Naseem Cassim

    Full Text Available An integrated tiered service delivery model (ITSDM) has been proposed to provide 'full-coverage' of CD4 services throughout South Africa. Five tiers are described, defined by testing volumes and number of referring health-facilities. These include (1) a Tier-1/decentralized point-of-care (POC) service in a single site and (2) a Tier-2/POC-hub servicing several sites; ... the largest laboratory tiers processing > 600 samples/day and serving > 100 or > 200 health-clinics, respectively. The objective of this study was to establish costs of existing and ITSDM-tiers 1, 2 and 3 in a remote, under-serviced district in South Africa. Historical health-facility workload volumes from the Pixley-ka-Seme district, and the total volumes of CD4 tests performed by the adjacent district referral CD4 laboratories, linked to locations of all referring clinics and related laboratory-to-result turn-around time (LTR-TAT) data, were extracted from the NHLS Corporate-Data-Warehouse for the period April-2012 to March-2013. Tiers were costed separately (as a cost-per-result) including equipment, staffing, reagents and test consumable costs. A one-way sensitivity analysis provided for changes in reagent price, test volumes and personnel time. The lowest cost-per-result was noted for the existing laboratory-based Tiers 4 and 5 ($6.24 and $5.37, respectively), but with a related increased LTR-TAT of > 24-48 hours. Full service coverage with TAT < 6 hours could be achieved with placement of twenty-seven Tier-1/POC or eight Tier-2/POC-hubs, at a cost-per-result of $32.32 and $15.88, respectively. A single district Tier-3 laboratory also ensured 'full service coverage' and < 24-hour LTR-TAT for the district at $7.42 per test. Implementing a single Tier-3/community laboratory to extend and improve delivery of services in Pixley-ka-Seme, with an estimated local ~12-24-hour LTR-TAT, is ~$2 more per test than existing referred services, but 2-4-fold cheaper than implementing eight Tier-2/POC-hubs or providing twenty-seven Tier-1/POCT CD4

  17. The effect of uncertainty and systematic errors in hydrological modelling

    Science.gov (United States)

    Steinsland, I.; Engeland, K.; Johansen, S. S.; Øverleir-Petersen, A.; Kolberg, S. A.

    2014-12-01

    The aims of hydrological model identification and calibration are to find the best possible set of process parametrizations and parameter values that transform inputs (e.g. precipitation and temperature) to outputs (e.g. streamflow). These models enable us to make predictions of streamflow. Several sources of uncertainty have the potential to hamper robust model calibration and identification. In order to grasp the interaction between model parameters, inputs and streamflow, it is important to account for both systematic and random errors in inputs (e.g. precipitation and temperature) and streamflow. By random errors we mean errors that are independent from time step to time step, whereas by systematic errors we mean errors that persist for a longer period. Both random and systematic errors are important in the observation and interpolation of precipitation and temperature inputs. Important random errors come from the measurements themselves and from the network of gauges. Important systematic errors originate from the under-catch in precipitation gauges and from unknown spatial trends that are approximated in the interpolation. For streamflow observations, the water level recordings might give random errors, whereas the rating curve contributes mainly a systematic error. In this study we want to answer the question "What is the effect of random and systematic errors in inputs and observed streamflow on estimated model parameters and streamflow predictions?". To answer it, we systematically test the effect of including uncertainties in inputs and streamflow during model calibration and simulation in the distributed HBV model, operating on daily time steps, for the Osali catchment in Norway. The case study is based on observations whose uncertainty is carefully quantified; increased uncertainties and systematic errors are introduced realistically, for example by removing a precipitation gauge from the network. We find that the systematic errors in
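
    The random/systematic distinction is easy to state in code. An illustrative perturbation of a synthetic precipitation series follows; the magnitudes and distributions are invented, not the study's values.

        import numpy as np

        rng = np.random.default_rng(7)
        precip = rng.gamma(0.6, 5.0, 365)          # synthetic daily precipitation (mm)

        # Random error: independent from time step to time step (e.g. gauge noise).
        random_err = precip * rng.normal(0.0, 0.10, precip.size)

        # Systematic error: persists over a long period (e.g. ~10% under-catch).
        systematic_err = -0.10 * precip

        perturbed = np.clip(precip + random_err + systematic_err, 0.0, None)
        # Feeding `perturbed` instead of `precip` into a rainfall-runoff model
        # and recalibrating shows how each error type propagates into the
        # estimated parameters and the streamflow predictions.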

  18. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients, and the mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience at the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that promote fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is likely possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation centre that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk of errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  19. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes.

  1. Influence of sampling errors on ELISA test results

    Institute of Scientific and Technical Information of China (English)

    袁红; 毛旖; 黄文芳

    2009-01-01

    Objective: To evaluate the influence of different degrees of sampling error on ELISA test results. Methods: Samples were added at volumes 1, 2, 3 and 4 μL below, and 1, 2 and 3 μL above, the standard sample volume, and the effects of these differences on HBsAg, HCV and TP ELISA results were compared. Results: S/CO values showed a rising trend as sample volume increased. For HBsAg and TP (except the +3 group), mean S/CO values did not differ significantly from the control group (P > 0.05); for HCV, the -2, -1 and +1 groups did not differ significantly from the control (P > 0.05), whereas the -4 and -3 groups and the +2 and +3 groups did (P < 0.05). The difference in mean positive rate between experimental groups and the control group showed an increasing tendency with the reduction of sample volume, and significant differences in HBsAg, HCV and TP results were also found between the volume-increase and volume-reduction groups (P < 0.05). Conclusion: Sampling errors influence ELISA test results to differing degrees, and the effect increases as the sample volume falls below the standard volume.

  2. Neighboring extremal optimal control design including model mismatch errors

    Energy Technology Data Exchange (ETDEWEB)

    Kim, T.J. [Sandia National Labs., Albuquerque, NM (United States); Hull, D.G. [Texas Univ., Austin, TX (United States). Dept. of Aerospace Engineering and Engineering Mechanics

    1994-11-01

    The mismatch control technique that is used to simplify model equations of motion in order to determine analytic optimal control laws is extended using neighboring extremal theory. The first variation optimal control equations are linearized about the extremal path to account for perturbations in the initial state and the final constraint manifold. A numerical example demonstrates that the tuning procedure inherent in the mismatch control method increases the performance of the controls to the level of a numerically-determined piecewise-linear controller.

  3. Development and Field-Testing of a Study Protocol, including a Web-Based Occupant Survey Tool, for Use in Intervention Studies of Indoor Environmental Quality

    Energy Technology Data Exchange (ETDEWEB)

    Mendell, Mark; Eliseeva, Ekaterina; Spears, Michael; Fisk, William J.

    2009-06-01

    We developed and pilot-tested an overall protocol for intervention studies to evaluate the effects of indoor environmental changes in office buildings on the health symptoms and comfort of occupants. The protocol includes a web-based survey to assess the occupants' responses, as well as specific features of study design and analysis. The pilot study, carried out on two similar floors in a single building, compared two types of ventilation system filter media. With support from the building's Facilities staff, the implementation of the filter change intervention went well. While the web-based survey tool also worked well, low overall response rates (21-34 percent among the three work groups included) limited our ability to evaluate the filter intervention. The total number of questionnaires returned was low even though we extended the study from eight to ten weeks. Because another simultaneous study we conducted elsewhere using the same survey had a high response rate (> 70 percent), we conclude that the low response here resulted from issues specific to this pilot, including unexpected restrictions by some employing agencies on communication with occupants.

  4. Offset Error Compensation in Roundness Measurement

    Institute of Scientific and Technical Information of China (English)

    朱喜林; 史俊; 李晓梅

    2004-01-01

    This paper analyses three causes of offset error in roundness measurement and presents corresponding compensation methods. The causes of offset error include excursion error, resulting from the deflection of the sensor's line of measurement from the rotational center (datum center); eccentricity error, resulting from the variance between the workpiece's geometrical center and the rotational center; and tilt error, resulting from the tilt between the workpiece's geometrical axis and the rotational centerline.
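
    Of the three causes, eccentricity is the textbook case: it appears as a once-per-revolution (first-harmonic, limacon) component that can be fitted and subtracted. A generic sketch of that compensation, not the paper's specific method:

        import numpy as np

        def remove_eccentricity(theta, r):
            """Least-squares fit and removal of the first-harmonic component
            r0 + a*cos(theta) + b*sin(theta) from a roundness trace."""
            X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
            coeffs, *_ = np.linalg.lstsq(X, r, rcond=None)
            return r - X @ coeffs + coeffs[0]   # keep mean radius, drop 1st harmonic

        theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
        # Synthetic trace: mean radius + eccentricity + a 5-lobe form error.
        r = 10.0 + 0.05 * np.cos(theta - 0.7) + 0.002 * np.cos(5 * theta)
        clean = remove_eccentricity(theta, r)    # only the 5-lobe form remains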

  5. Spindle rotary error online test system of high precision machine tool

    Institute of Scientific and Technical Information of China (English)

    周继昆; 张荣; 凌明祥; 张毅

    2016-01-01

    To realize real-time online measurement of the spindle rotary error of a high precision machine tool, and in view of the machine's mechanical structure, a standard ball and the external-reference measurement method are used to measure the radial displacement of the spindle indirectly, and the three-point method (an error separation algorithm) is used to separate the spindle roundness error from the rotary error so as to obtain the pure rotary error. Following this test method, the hardware part of the test system was built on PXI-bus data acquisition technology, with a PXI external clock synchronization card providing equiangular, synchronous sampling of the spindle radial displacement. Online test software was developed in the LabVIEW environment to separate the rotary error online. A machine tool was tested online with the developed system, and the results show that it can separate the pure spindle rotary error from the radial displacement information online with high accuracy.
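
    The three-point method named above has a standard Fourier-domain form. A generic sketch follows; the probe angles, synthetic signals and harmonic handling are illustrative, and the paper's LabVIEW/CompactRIO system wraps the same mathematics in real-time acquisition.

        import numpy as np

        def three_point_separation(s1, s2, s3, phi1, phi2):
            """Separate roundness from spindle motion using probes at angles
            0, phi1, phi2. Weights a, b cancel the motion terms; roundness
            harmonics are then recovered by Fourier deconvolution."""
            M = np.array([[np.cos(phi1), np.cos(phi2)],
                          [np.sin(phi1), np.sin(phi2)]])
            a, b = np.linalg.solve(M, [-1.0, 0.0])
            n = len(s1)
            k = np.fft.fftfreq(n) * n                  # harmonic numbers
            w = 1 + a * np.exp(1j * k * phi1) + b * np.exp(1j * k * phi2)
            S = np.fft.fft(s1 + a * s2 + b * s3)
            R = np.zeros_like(S)
            ok = np.abs(w) > 1e-9                      # suppressed harmonics are lost
            R[ok] = S[ok] / w[ok]
            r = np.real(np.fft.ifft(R))                # roundness profile
            return r, s1 - r                           # rotary error at probe 1

        theta = np.linspace(0, 2 * np.pi, 512, endpoint=False)
        phi1, phi2 = np.deg2rad(99.0), np.deg2rad(202.5)   # typical probe angles
        r_true = lambda t: 0.003 * np.cos(5 * t)       # synthetic roundness (mm)
        x = 0.010 * np.cos(theta + 0.4)                # synthetic spindle motion
        s1 = r_true(theta) + x
        s2 = r_true(theta + phi1) + x * np.cos(phi1)
        s3 = r_true(theta + phi2) + x * np.cos(phi2)
        roundness, rotary = three_point_separation(s1, s2, s3, phi1, phi2)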

  6. HPTN 071 (PopART): a cluster-randomized trial of the population impact of an HIV combination prevention intervention including universal testing and treatment: mathematical model.

    Directory of Open Access Journals (Sweden)

    Anne Cori

    Full Text Available BACKGROUND: The HPTN 052 trial confirmed that antiretroviral therapy (ART) can nearly eliminate HIV transmission from successfully treated HIV-infected individuals within couples. Here, we present the mathematical modeling used to inform the design and monitoring of a new trial aiming to test whether widespread provision of ART is feasible and can substantially reduce population-level HIV incidence. METHODS AND FINDINGS: The HPTN 071 (PopART) trial is a three-arm cluster-randomized trial of 21 large population clusters in Zambia and South Africa, starting in 2013. A combination prevention package including home-based voluntary testing and counseling, and ART for HIV-positive individuals, will be delivered in arms A and B, with ART offered universally in arm A and according to national guidelines in arm B. Arm C will be the control arm. The primary endpoint is the cumulative three-year HIV incidence. We developed a mathematical model of heterosexual HIV transmission, informed by recent data on HIV-1 natural history. We focused on realistically modeling the intervention package. Parameters were calibrated to data previously collected in these communities and national surveillance data. We predict that, if targets are reached, HIV incidence over three years will drop by >60% in arm A and >25% in arm B, relative to arm C. The considerable uncertainty in the predicted reduction in incidence justifies the need for a trial. The main drivers of this uncertainty are possible community-level behavioral changes associated with the intervention, uptake of testing and treatment, as well as ART retention and adherence. CONCLUSIONS: The HPTN 071 (PopART) trial intervention could reduce HIV population-level incidence by >60% over three years. This intervention could serve as a paradigm for national or supra-national implementation. Our analysis highlights the role mathematical modeling can play in trial development and monitoring, and more widely in evaluating the

  8. Performance of multi level error correction in binary holographic memory

    Science.gov (United States)

    Hanan, Jay C.; Chao, Tien-Hsin; Reyes, George F.

    2004-01-01

    At the Optical Computing Lab in the Jet Propulsion Laboratory (JPL) a binary holographic data storage system was designed and tested with methods of recording and retrieving the binary information. Levels of error correction were introduced into the system, including pixel averaging, thresholding, and parity checks. Errors were artificially introduced into the binary holographic data storage system and were monitored as a function of the defect area fraction, which showed a strong influence on data integrity. Average area fractions exceeding one quarter of the bit area caused unrecoverable errors. Efficient use of the available data density was discussed.
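
    As a rough sketch of the parity-check layer mentioned in this record, the following adds row and column parity to a binary data page and uses parity mismatches to locate and flip a single corrupted pixel. The page size and encoding are hypothetical; the abstract does not describe the JPL system's actual formats.

```python
import numpy as np

def add_parity(page):
    """Return row and column parity bits for a binary data page."""
    return page.sum(axis=1) % 2, page.sum(axis=0) % 2

def correct_single_error(page, row_par, col_par):
    """Locate and flip one corrupted bit using row/column parity mismatches."""
    bad_rows = np.flatnonzero(page.sum(axis=1) % 2 != row_par)
    bad_cols = np.flatnonzero(page.sum(axis=0) % 2 != col_par)
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        page = page.copy()
        page[bad_rows[0], bad_cols[0]] ^= 1   # flip the corrupted pixel
    return page

rng = np.random.default_rng(1)
page = rng.integers(0, 2, size=(8, 8))        # hypothetical 8x8 binary data page
row_par, col_par = add_parity(page)

corrupted = page.copy()
corrupted[3, 5] ^= 1                          # simulate a single readout error
restored = correct_single_error(corrupted, row_par, col_par)
assert np.array_equal(restored, page)
```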

  9. Corrective Action Investigation Plan for Corrective Action Unit 410: Waste Disposal Trenches, Tonopah Test Range, Nevada, Revision 0 (includes ROTCs 1, 2, and 3)

    Energy Technology Data Exchange (ETDEWEB)

    NNSA/NV

    2002-07-16

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 410 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 410 is located on the Tonopah Test Range (TTR), which is included in the Nevada Test and Training Range (formerly the Nellis Air Force Range) approximately 140 miles northwest of Las Vegas, Nevada. This CAU is comprised of five Corrective Action Sites (CASs): TA-19-002-TAB2, Debris Mound; TA-21-003-TANL, Disposal Trench; TA-21-002-TAAL, Disposal Trench; 09-21-001-TA09, Disposal Trenches; 03-19-001, Waste Disposal Site. This CAU is being investigated because contaminants may be present in concentrations that could potentially pose a threat to human health and/or the environment, and waste may have been disposed of without appropriate controls. Four out of five of these CASs are the result of weapons testing and disposal activities at the TTR, and they are grouped together for site closure based on the similarity of the sites (waste disposal sites and trenches). The fifth CAS, CAS 03-19-001, is a hydrocarbon spill related to activities in the area. This site is grouped with this CAU because of the location (TTR). Based on historical documentation and process knowledge, vertical and lateral migration routes are possible for all CASs. Migration of contaminants may have occurred through transport by infiltration of precipitation through surface soil, which serves as a driving force for downward migration of contaminants. Land-use scenarios limit future use of these CASs to industrial activities. The suspected contaminants of potential concern which have been identified are volatile organic compounds; semivolatile organic compounds; high explosives; radiological constituents including depleted

  10. Target registration and target positioning errors in computer-assisted neurosurgery: proposal for a standardized reporting of error assessment.

    Science.gov (United States)

    Widmann, Gerlig; Stoffner, Rudolf; Sieb, Michael; Bale, Reto

    2009-12-01

    Assessment of errors is essential in the development, testing and clinical application of computer-assisted neurosurgery. Our aim was to provide a comprehensive overview of the different methods to assess target registration error (TRE) and target positioning error (TPE) and to develop a proposal for a standardized reporting of error assessment. A PubMed search for phantom, cadaver or clinical studies on TRE and TPE was performed. Reporting standards have been defined according to (a) study design and evaluation methods and (b) specifications of the navigation technology. The proposed standardized reporting includes (a) study design (controlled, non-controlled), study type (non-anthropomorphic phantom, anthropomorphic phantom, cadaver, patient), target design, error type and subtypes, space of TPE measurement, statistics, and (b) image modality, scan parameters, tracking technology, registration procedure and targeting technique. Adoption of the proposed standardized reporting may help in the understanding and comparability of different accuracy reports. Copyright (c) 2009 John Wiley & Sons, Ltd.

  11. Corrective Action Investigation Plan for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada (December 2002, Revision No.: 0), Including Record of Technical Change No. 1

    Energy Technology Data Exchange (ETDEWEB)

    NNSA/NSO

    2002-12-12

    The Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 204 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 204 is located on the Nevada Test Site approximately 65 miles northwest of Las Vegas, Nevada. This CAU is comprised of six Corrective Action Sites (CASs) which include: 01-34-01, Underground Instrument House Bunker; 02-34-01, Instrument Bunker; 03-34-01, Underground Bunker; 05-18-02, Chemical Explosives Storage; 05-33-01, Kay Blockhouse; 05-99-02, Explosive Storage Bunker. Based on site history, process knowledge, and previous field efforts, contaminants of potential concern for Corrective Action Unit 204 collectively include radionuclides, beryllium, high explosives, lead, polychlorinated biphenyls, total petroleum hydrocarbons, silver, warfarin, and zinc phosphide. The primary question for the investigation is: "Are existing data sufficient to evaluate appropriate corrective actions?" To address this question, resolution of two decision statements is required. Decision I is to "Define the nature of contamination" by identifying any contamination above preliminary action levels (PALs); Decision II is to "Determine the extent of contamination identified above PALs." If PALs are not exceeded, the investigation is completed. If PALs are exceeded, then Decision II must be resolved. In addition, data will be obtained to support waste management decisions. Field activities will include radiological land area surveys, geophysical surveys to identify any subsurface metallic and nonmetallic debris, field screening for applicable contaminants of potential concern, collection and analysis of surface and subsurface soil samples from biased locations

  12. Political violence and child adjustment in Northern Ireland: Testing pathways in a social ecological model including single and two-parent families

    Science.gov (United States)

    Cummings, E. Mark; Schermerhorn, Alice C.; Merrilees, Christine E.; Goeke-Morey, Marcie C.; Shirlow, Peter; Cairns, Ed

    2013-01-01

    Moving beyond simply documenting that political violence negatively impacts children, a social ecological hypothesis for relations between political violence and child outcomes was tested. Participants were 700 mother-child (M = 12.1 years, SD = 1.8) dyads from 18 working-class, socially deprived areas in Belfast, Northern Ireland, including single- and two-parent families. Sectarian community violence was associated with elevated family conflict and children’s reduced security about multiple aspects of their social environment (i.e., family, parent-child relations, and community), with links to child adjustment problems and reductions in prosocial behavior. By comparison, and consistent with expectations, links with negative family processes, child regulatory problems and child outcomes were less consistent for nonsectarian community violence. Support was found for a social ecological model for relations between political violence and child outcomes among both single- and two-parent families, with evidence that emotional security and adjustment problems were more negatively affected in single-parent families. The implications for understanding social ecologies of political violence and children’s functioning are discussed. PMID:20604605

  13. Transient Error Data Analysis.

    Science.gov (United States)

    1979-05-01

    [Abstract not recoverable: this record contains only extraction fragments of the report's table of contents (graphical data analysis; general statistics and confidence intervals; goodness-of-fit tests) and of a transient-error data table giving MTTF per system (e.g., CMUA PDP-10, ECL, parity; Cm* LSI-11, NMOS, diagnostics), drawn from an error log of 18,445 entries spanning 1,542 hours beginning 17-Feb-79.]

  14. Evaluation of Analytical Errors in a Clinical Chemistry Laboratory: A ...

    African Journals Online (AJOL)

    Evaluation of analytical errors in a clinical chemistry laboratory: a 3-year study. [Only fragments of this abstract survive extraction: the number of tests fell significantly over the 3-year period, but this did not correspond to the number of errors classified as analytical errors.]

  15. Data Analysis & Statistical Methods for Command File Errors

    Science.gov (United States)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these variables. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
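
    A toy version of the kind of error-rate model described here can be written with ordinary least squares. The variables and numbers below are invented, and the simple linear rate model is an assumption for illustration, not the authors' actual fitted model.

```python
import numpy as np

# Hypothetical weekly records: files radiated, workload (1-5), novelty (1-5), errors.
files    = np.array([12, 30, 25, 8, 40, 22, 15, 33])
workload = np.array([ 2,  4,  3, 1,  5,  3,  2,  4])
novelty  = np.array([ 1,  3,  2, 1,  4,  2,  1,  5])
errors   = np.array([ 0,  3,  1, 0,  5,  1,  0,  4])

# Model the error *rate* (errors per file radiated) as a linear function of the
# candidate drivers; least squares gives the coefficient estimates.
rate = errors / files
X = np.column_stack([np.ones_like(rate), workload, novelty])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
print("intercept, workload, novelty coefficients:", beta)

# Expected error rate for a future week with workload 4 and novelty 3.
print("expected error rate:", np.array([1.0, 4.0, 3.0]) @ beta)
```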

  16. Exploring the Effectiveness of a Measurement Error Tutorial in Helping Teachers Understand Score Report Results

    Science.gov (United States)

    Zapata-Rivera, Diego; Zwick, Rebecca; Vezzu, Margaret

    2016-01-01

    The goal of this study was to explore the effectiveness of a short web-based tutorial in helping teachers to better understand the portrayal of measurement error in test score reports. The short video tutorial included both verbal and graphical representations of measurement error. Results showed a significant difference in comprehension scores…

  17. Administration and Scoring Errors of Graduate Students Learning the WISC-IV: Issues and Controversies

    Science.gov (United States)

    Mrazik, Martin; Janzen, Troy M.; Dombrowski, Stefan C.; Barford, Sean W.; Krawchuk, Lindsey L.

    2012-01-01

    A total of 19 graduate students enrolled in a graduate course conducted 6 consecutive administrations of the Wechsler Intelligence Scale for Children, 4th edition (WISC-IV, Canadian version). Test protocols were examined to obtain data describing the frequency of examiner errors, including administration and scoring errors. Results identified 511…

  20. Residual Cusum Test for Parameters Change in ARCH Errors Models with Deterministic Trend

    Institute of Scientific and Technical Information of China (English)

    金浩; 田铮

    2009-01-01

    This paper analyzes the problem of testing for parameter changes in ARCH errors models with deterministic trend, based on a residual CUSUM test. It is shown that the asymptotic limiting distribution of the residual CUSUM test statistic under the null hypothesis is still the supremum of a standard Brownian bridge. To check this, we carry out a Monte Carlo simulation and examine IBM return data. The results from both the simulation and the real-data analysis support our claim. We can also explain this phenomenon from a theoretical viewpoint: the variance in an ARCH model is mainly determined by its parameters.
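
    The residual CUSUM idea can be conveyed with a short simulation. The statistic below is the standard CUSUM-of-squares form, whose null limit is the supremum of a standard Brownian bridge (5% critical value approximately 1.358); the ARCH(1) recursion, change point, and parameter values are illustrative assumptions rather than the paper's exact setting.

```python
import numpy as np

def cusum_of_squares(e):
    """Residual CUSUM test statistic; under the null its limit is sup|B(t)|
    for a standard Brownian bridge B (5% critical value ~ 1.358)."""
    n = len(e)
    centered = e ** 2 - np.mean(e ** 2)
    tau = np.sqrt(np.mean(centered ** 2))   # variance estimate of e^2
    return np.max(np.abs(np.cumsum(centered))) / (np.sqrt(n) * tau)

# Simulate ARCH(1) residuals with a parameter change halfway through the sample.
rng = np.random.default_rng(2)
n, e = 2000, np.zeros(2000)
for t in range(1, n):
    omega, alpha = (0.1, 0.3) if t < n // 2 else (0.4, 0.6)  # change point
    h = omega + alpha * e[t - 1] ** 2       # conditional variance recursion
    e[t] = np.sqrt(h) * rng.standard_normal()

stat = cusum_of_squares(e)
print(f"CUSUM statistic = {stat:.2f}  (reject parameter constancy if > 1.358)")
```

    For dependent squared residuals a long-run variance estimator would normally replace the simple variance used here.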

  1. Cognitive Deficits Underlying Error Behavior on a Naturalistic Task after Severe Traumatic Brain Injury

    Directory of Open Access Journals (Sweden)

    Kathryn Mary Hendry

    2016-10-01

    Full Text Available People with severe traumatic brain injury (TBI) often make errors on everyday tasks that compromise their safety and independence. Such errors potentially arise from the breakdown or failure of multiple cognitive processes. This study aimed to investigate cognitive deficits underlying error behavior on a home-based version of the Cooking Task (HBCT) following TBI. Participants included 45 adults (9 females, 36 males) with severe TBI aged 18-64 years (M = 37.91, SD = 13.43). Participants were administered the HBCT in their home kitchens, with audiovisual recordings taken to enable scoring of total errors and error subtypes (Omissions, Additions, Estimations, Substitutions, Commentary/Questions, Dangerous Behavior, Goal Achievement). Participants also completed a battery of neuropsychological tests, including the Trail Making Test, Hopkins Verbal Learning Test-Revised, Digit Span, Zoo Map test, Modified Stroop Test and Hayling Sentence Completion Test. After controlling for cooking experience, greater Omissions and Estimation errors, lack of goal achievement and longer completion time were significantly associated with poorer attention, memory and executive functioning. These findings indicate that errors on naturalistic tasks arise from deficits in multiple cognitive domains. Assessment of error behavior in a real life setting provides insight into individuals’ functional abilities which can guide rehabilitation planning and lifestyle support.

  2. Cognitive Deficits Underlying Error Behavior on a Naturalistic Task after Severe Traumatic Brain Injury

    Science.gov (United States)

    Hendry, Kathryn; Ownsworth, Tamara; Beadle, Elizabeth; Chevignard, Mathilde P.; Fleming, Jennifer; Griffin, Janelle; Shum, David H. K.

    2016-01-01

    People with severe traumatic brain injury (TBI) often make errors on everyday tasks that compromise their safety and independence. Such errors potentially arise from the breakdown or failure of multiple cognitive processes. This study aimed to investigate cognitive deficits underlying error behavior on a home-based version of the Cooking Task (HBCT) following TBI. Participants included 45 adults (9 females, 36 males) with severe TBI aged 18–64 years (M = 37.91, SD = 13.43). Participants were administered the HBCT in their home kitchens, with audiovisual recordings taken to enable scoring of total errors and error subtypes (Omissions, Additions, Estimations, Substitutions, Commentary/Questions, Dangerous Behavior, Goal Achievement). Participants also completed a battery of neuropsychological tests, including the Trail Making Test, Hopkins Verbal Learning Test-Revised, Digit Span, Zoo Map test, Modified Stroop Test, and Hayling Sentence Completion Test. After controlling for cooking experience, greater Omissions and Estimation errors, lack of goal achievement, and longer completion time were significantly associated with poorer attention, memory, and executive functioning. These findings indicate that errors on naturalistic tasks arise from deficits in multiple cognitive domains. Assessment of error behavior in a real life setting provides insight into individuals' functional abilities which can guide rehabilitation planning and lifestyle support. PMID:27790099

  3. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    Science.gov (United States)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  4. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  5. Classification of Spreadsheet Errors

    OpenAIRE

    Rajalingham, Kamalasen; Chadwick, David R.; Knight, Brian

    2008-01-01

    This paper describes a framework for a systematic classification of spreadsheet errors. This classification or taxonomy of errors is aimed at facilitating analysis and comprehension of the different types of spreadsheet errors. The taxonomy is an outcome of an investigation of the widespread problem of spreadsheet errors and an analysis of specific types of these errors. This paper contains a description of the various elements and categories of the classification and is supported by appropri...

  6. Research on Spindle Rotary Error Test Simulation Technique of Precision Centrifuge

    Institute of Scientific and Technical Information of China (English)

    张荣; 牛宝良; 凌明祥; 王珏; 周继昆

    2016-01-01

    The rotary error of a precision centrifuge directly affects the dynamic radius measurement, the acceleration precision output by the centrifuge, and the safety of spindle operation, so the spindle rotary error parameters must be measured accurately. A method is introduced that applies three capacitance micrometers with the three-point method to measure and separate the spindle rotary error from the roundness error. The influences of the installation angle errors of the three capacitance micrometers, the number of samples per spindle revolution, and the background noise of the test system on the rotary error measurement are simulated in MATLAB. The effects on error separation of the sampling number N, the installation angle errors δα and δβ, and the test-system noise are obtained, and several engineering parameters for rotary error testing of a 10^-6-level high-precision centrifuge are thereby determined. The technique has been applied to the precision spindle rotary error test of a high-precision centrifuge; the measurements show that the pure rotary error of the spindle is 0.25 μm at speeds within 300 rpm, satisfying the requirements of the centrifuge's technical specifications.

  7. Inversion of levelling data: how important is error treatment?

    Science.gov (United States)

    Amoruso, A.; Crescentini, L.

    2007-12-01

    Although proper treatment of error statistics is potentially essential for the reliability of experimental-data inversion, a critical evaluation of its effects on levelling data inversion is still lacking. In this paper, we consider the complete covariance matrix for levelling measurements, obtained by combining the covariance matrix due to measurement errors and the covariance matrix due to non-measurement errors, under the simple hypothesis of uncorrelated non-measurement errors on bench mark vertical displacements. The complete covariance matrix is reduced to diagonal form by means of a rotation matrix; the same rotation transforms the data to independent form. The eigenvalues of the complete covariance matrix give the uncertainties of the transformed independent data. This procedure can also be used with non-normal distributions of errors, in which case misfit functions other than χ2 (e.g. the mean absolute deviation) are minimized. Here we focus on two test cases (the 1989 Loma Prieta earthquake and the 1908 Messina earthquake), inverting both real data and synthetics. The inversion of synthetic data sets does not show any systematic dependence of retrieved parameter values on the covariance matrix. Most retrieved fault parameter values are close to those used in the forward model, whatever covariance matrix is used. As a consequence, large discrepancies among results obtained using covariance matrices including different combinations of measurement and non-measurement errors when inverting measured and synthetic data sets would possibly indicate the need for further investigation. While measurement errors can be evaluated a priori, it is difficult to estimate non-measurement errors. Our synthetic tests using a uniform-slip rectangular fault in a homogeneous elastic half-space show that, if measurement errors have been correctly evaluated, average non-measurement errors can be estimated by choosing their weight inside the covariance matrix so that the ratio
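
    The covariance-handling step described here (rotating the complete covariance matrix to diagonal form and transforming the data to independent components) corresponds to a standard eigendecomposition, sketched below with invented numbers.

```python
import numpy as np

# Hypothetical complete covariance: a correlated measurement part plus
# uncorrelated non-measurement errors on each bench mark (all values invented).
C_meas = np.array([[4.0, 2.0, 1.0],
                   [2.0, 5.0, 2.0],
                   [1.0, 2.0, 3.0]])        # mm^2
C_nonmeas = np.diag([1.5, 1.5, 1.5])        # mm^2
C = C_meas + C_nonmeas

d = np.array([12.0, -3.0, 7.0])             # observed displacements (mm)

# Eigendecomposition: the columns of V rotate the data to independent form,
# and the eigenvalues are the variances of the transformed, independent data.
eigvals, V = np.linalg.eigh(C)
d_indep = V.T @ d                           # independent data
sigma = np.sqrt(eigvals)                    # their uncertainties

# Misfit of a hypothetical model prediction m, evaluated in the independent basis.
m = np.array([11.0, -2.5, 6.0])
chi2 = np.sum(((d_indep - V.T @ m) / sigma) ** 2)
print("chi-square misfit:", chi2)
```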

  8. Orthogonality of inductosyn angle-measuring system error and error-separating technology

    Institute of Scientific and Technical Information of China (English)

    任顺清; 曾庆双; 王常虹

    2003-01-01

    The round inductosyn is widely used in inertial navigation test equipment, and its accuracy has a significant effect on the overall accuracy of the equipment. Four main errors of the round inductosyn, i.e., the first-order long-period (360°) harmonic error, the second-order long-period harmonic error, the first-order short-period harmonic error and the second-order short-period harmonic error, are described, and the orthogonality of these four kinds of errors is studied. An error-separation technology is proposed to separate these four kinds of errors, and in the process of separating the short-period harmonic errors, the arrangement in the order of the decimal part of the angle pitch number can be omitted. The effectiveness of the proposed technology is proved through measuring and adjusting the angular errors.
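
    Because the four harmonic error components live at distinct cycles-per-revolution frequencies, Fourier projections onto those frequencies are mutually orthogonal, which is the orthogonality property the abstract refers to. The sketch below extracts the four amplitudes from a synthetic angle-error curve; the pole count and amplitudes are invented for illustration.

```python
import numpy as np

N, P = 3600, 360                  # samples per revolution; pole (pitch) count, invented
theta = 2 * np.pi * np.arange(N) / N

# Synthetic angle-error curve (arc-sec) containing the four harmonic error terms.
err = (2.0 * np.sin(theta + 0.3)          # 1st-order long-period (1 cycle/rev)
       + 1.0 * np.sin(2 * theta - 0.5)    # 2nd-order long-period (2 cycles/rev)
       + 0.5 * np.sin(P * theta + 0.1)    # 1st-order short-period (P cycles/rev)
       + 0.2 * np.sin(2 * P * theta))     # 2nd-order short-period (2P cycles/rev)

F = np.fft.rfft(err) / N
for k, label in [(1, "1st long"), (2, "2nd long"), (P, "1st short"), (2 * P, "2nd short")]:
    amp = 2 * np.abs(F[k])        # amplitude of the k-cycles-per-rev component
    print(f"{label:>9}-period harmonic amplitude: {amp:.3f} arc-sec")
```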

  9. Error Propagation in a System Model

    Science.gov (United States)

    Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)

    2015-01-01

    Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.
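
    One minimal way to realize the propagation idea in this record is interval arithmetic over a small block graph; the blocks, bounds, and topology below are hypothetical, not the patent's actual mechanism.

```python
# Propagate worst-case signal-value error bounds through a small dataflow
# of functional blocks (all names and bounds are invented for illustration).

def add(a, b):
    # Error bounds add through a summing block.
    return (a[0] + b[0], a[1] + b[1])

def gain(a, k):
    # A constant-gain block scales the error bounds.
    lo, hi = a[0] * k, a[1] * k
    return (min(lo, hi), max(lo, hi))

# Incident signal-value errors at the model inputs, as (low, high) intervals.
sensor_a = (-0.02, 0.02)
sensor_b = (-0.01, 0.03)

# System model: out = 4.0 * (sensor_a + sensor_b), e.g. a sum-then-gain stage.
summed = add(sensor_a, sensor_b)
out = gain(summed, 4.0)
print("worst-case output error interval:", out)   # (-0.12, 0.2)
```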

  10. L’errore nel laboratorio di Microbiologia [Error in the microbiology laboratory]

    Directory of Open Access Journals (Sweden)

    Paolo Lanzafame

    2006-03-01

    Full Text Available Error management plays one of the most important roles in facility process-improvement efforts. By detecting and reducing errors, quality and patient care improve. The records of errors were analysed over a period of 6 months, and another period was used to study potential bias in the registrations. The percentage of errors detected was 0.17% (normalised: 1720 ppm), and errors in the pre-analytical phase accounted for the largest part. The highest rate of errors was generated by the peripheral centres, which send microbiology tests only occasionally and are not well acquainted with the specific procedures for collecting and storing biological samples. Errors in the management of laboratory supplies were reported too. The conclusion is that improving operator training, in particular concerning sample collection and storage, is very important, and that an effective system of error detection should be employed to determine the causes, so that the best corrective action can be applied.

  11. A New Extension of the Binomial Error Model for Responses to Items of Varying Difficulty in Educational Testing and Attitude Surveys.

    Directory of Open Access Journals (Sweden)

    James A Wiley

    Full Text Available We put forward a new item response model which is an extension of the binomial error model first introduced by Keats and Lord. Like the binomial error model, the basic latent variable can be interpreted as a probability of responding in a certain way to an arbitrarily specified item. For a set of dichotomous items, this model gives predictions that are similar to other single-parameter IRT models (such as the Rasch model) but has certain advantages in more complex cases. The first is that in specifying a flexible two-parameter Beta distribution for the latent variable, it is easy to formulate models for randomized experiments in which there is no reason to believe that either the latent variable or its distribution vary over randomly composed experimental groups. Second, the elementary response function is such that extensions to more complex cases (e.g., polychotomous responses, unfolding scales) are straightforward. Third, the probability metric of the latent trait allows tractable extensions to cover a wide variety of stochastic response processes.

  12. A New Extension of the Binomial Error Model for Responses to Items of Varying Difficulty in Educational Testing and Attitude Surveys.

    Science.gov (United States)

    Wiley, James A; Martin, John Levi; Herschkorn, Stephen J; Bond, Jason

    2015-01-01

    We put forward a new item response model which is an extension of the binomial error model first introduced by Keats and Lord. Like the binomial error model, the basic latent variable can be interpreted as a probability of responding in a certain way to an arbitrarily specified item. For a set of dichotomous items, this model gives predictions that are similar to other single parameter IRT models (such as the Rasch model) but has certain advantages in more complex cases. The first is that in specifying a flexible two-parameter Beta distribution for the latent variable, it is easy to formulate models for randomized experiments in which there is no reason to believe that either the latent variable or its distribution vary over randomly composed experimental groups. Second, the elementary response function is such that extensions to more complex cases (e.g., polychotomous responses, unfolding scales) are straightforward. Third, the probability metric of the latent trait allows tractable extensions to cover a wide variety of stochastic response processes.
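
    The structure of the model (a latent response probability with a two-parameter Beta distribution, marginalized over to get score probabilities) can be sketched as a beta-binomial likelihood. The test length and Beta parameters below are arbitrary, and this collapses the varying-difficulty extension to the equal-difficulty case.

```python
import numpy as np
from scipy.special import betaln, comb

def score_probability(k, n, a, b):
    """P(k correct out of n) when the latent response probability p follows
    a Beta(a, b) distribution: the beta-binomial marginal likelihood."""
    return comb(n, k, exact=True) * np.exp(betaln(k + a, n - k + b) - betaln(a, b))

n, a, b = 20, 3.0, 2.0            # hypothetical test length and Beta parameters
pmf = np.array([score_probability(k, n, a, b) for k in range(n + 1)])
print("distribution sums to:", pmf.sum())                   # ~1.0
print("expected score:", (np.arange(n + 1) * pmf).sum())    # n * a / (a + b) = 12
```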

  13. Designing and testing the coronagraphic Modal Wavefront Sensor: a fast non-common path error sensor for high-contrast imaging

    Science.gov (United States)

    Wilby, M. J.; Keller, C. U.; Haffert, S.; Korkiakoski, V.; Snik, F.; Pietrow, A. G. M.

    2016-07-01

    Non-Common Path Errors (NCPEs) are the dominant factor limiting the performance of current astronomical high-contrast imaging instruments. If uncorrected, the resulting quasi-static speckle noise floor limits coronagraph performance to a raw contrast of typically 10-4, a value which does not improve with increasing integration time. The coronagraphic Modal Wavefront Sensor (cMWS) is a hybrid phase optic which uses holographic PSF copies to supply focal-plane wavefront sensing information directly from the science camera, whilst maintaining a bias-free coronagraphic PSF. This concept has already been successfully implemented on-sky at the William Herschel Telescope (WHT), La Palma, demonstrating both real-time wavefront sensing capability and successful extraction of slowly varying wavefront errors under a dominant and rapidly changing atmospheric speckle foreground. In this work we present an overview of the development of the cMWS and recent first light results obtained using the Leiden EXoplanet Instrument (LEXI), a high-contrast imager and high-dispersion spectrograph pathfinder instrument for the WHT.

  14. Errors in CT colonography.

    Science.gov (United States)

    Trilisky, Igor; Ward, Emily; Dachman, Abraham H

    2015-10-01

    CT colonography (CTC) is a colorectal cancer screening modality which is becoming more widely implemented and has shown polyp detection rates comparable to those of optical colonoscopy. CTC has the potential to improve population screening rates due to its minimal invasiveness, no sedation requirement, potential for reduced cathartic examination, faster patient throughput, and cost-effectiveness. Proper implementation of a CTC screening program requires careful attention to numerous factors, including patient preparation prior to the examination, the technical aspects of image acquisition, and post-processing of the acquired data. A CTC workstation with dedicated software is required with integrated CTC-specific display features. Many workstations include computer-aided detection software which is designed to decrease errors of detection by detecting and displaying polyp-candidates to the reader for evaluation. There are several pitfalls which may result in false-negative and false-positive reader interpretation. We present an overview of the potential errors in CTC and a systematic approach to avoid them.

  15. Rectifying calibration error of Goldmann applanation tonometer is easy!

    Directory of Open Access Journals (Sweden)

    Nikhil S Choudhari

    2014-01-01

    Full Text Available Purpose: The Goldmann applanation tonometer (GAT) is the current gold-standard tonometer. However, its calibration error is common and can go unnoticed in clinics, and repair by the manufacturer has limitations. The purpose of this report is to describe a self-taught technique for rectifying the calibration error of the GAT. Materials and Methods: Twenty-nine slit-lamp-mounted Haag-Streit Goldmann tonometers (Model AT 900 C/M; Haag-Streit, Switzerland) were included in this cross-sectional interventional pilot study. The technique for rectifying the calibration error of the tonometer involved cleaning and lubrication of the instrument, followed by alignment of the weights when lubrication alone did not suffice. We followed the South East Asia Glaucoma Interest Group's definition of calibration error tolerance (acceptable GAT calibration error within ±2, ±3 and ±4 mm Hg at the 0, 20 and 60-mm Hg testing levels, respectively). Results: Twelve out of 29 (41.3%) GATs were out of calibration. The range of positive and negative calibration error at the clinically most important 20-mm Hg testing level was 0.5 to 20 mm Hg and -0.5 to -18 mm Hg, respectively. Cleaning and lubrication alone sufficed to rectify the calibration error of 11 (91.6%) faulty instruments. Only one (8.3%) faulty GAT required alignment of the counter-weight. Conclusions: Rectification of the calibration error of the GAT is possible in-house. Cleaning and lubrication of the GAT can be carried out even by eye care professionals and may suffice to rectify the calibration error in the majority of faulty instruments. Such an exercise may drastically reduce the downtime of the gold-standard tonometer.

  16. A causal link between prediction errors, dopamine neurons and learning.

    Science.gov (United States)

    Steinberg, Elizabeth E; Keiflin, Ronald; Boivin, Josiah R; Witten, Ilana B; Deisseroth, Karl; Janak, Patricia H

    2013-07-01

    Situations in which rewards are unexpectedly obtained or withheld represent opportunities for new learning. Often, this learning includes identifying cues that predict reward availability. Unexpected rewards strongly activate midbrain dopamine neurons. This phasic signal is proposed to support learning about antecedent cues by signaling discrepancies between actual and expected outcomes, termed a reward prediction error. However, it is unknown whether dopamine neuron prediction error signaling and cue-reward learning are causally linked. To test this hypothesis, we manipulated dopamine neuron activity in rats in two behavioral procedures, associative blocking and extinction, that illustrate the essential function of prediction errors in learning. We observed that optogenetic activation of dopamine neurons concurrent with reward delivery, mimicking a prediction error, was sufficient to cause long-lasting increases in cue-elicited reward-seeking behavior. Our findings establish a causal role for temporally precise dopamine neuron signaling in cue-reward learning, bridging a critical gap between experimental evidence and influential theoretical frameworks.
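
    The prediction-error account tested here can be caricatured with a Rescorla-Wagner-style update, in which learning is driven entirely by the discrepancy between received and expected reward. The learning rate and trial structure are arbitrary illustrations, not the experimental design.

```python
alpha = 0.2                 # learning rate (arbitrary)
V = 0.0                     # learned value of the cue
history = []

# A cue is followed by reward (1.0) for 30 trials, then reward is withheld.
for trial in range(60):
    reward = 1.0 if trial < 30 else 0.0
    delta = reward - V      # reward prediction error (the putative dopamine signal)
    V += alpha * delta      # learning is proportional to the prediction error
    history.append((trial, round(delta, 3), round(V, 3)))

# Early acquisition trials show large positive errors; once V ~ 1 the errors
# vanish (no further learning), and omission yields negative errors (extinction).
print(history[0], history[29], history[30], history[59])
```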

  17. Outlier removal, sum scores, and the inflation of the type I error rate in independent samples t tests : The power of alternatives and recommendations

    NARCIS (Netherlands)

    Bakker, M.; Wicherts, J.M.

    2014-01-01

    In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal
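
    The mechanism studied in this record is easy to reproduce in a small simulation: removing "outliers" within each group before an independent-samples t test can push the empirical type I error rate above the nominal 5% even when both groups come from the same (nonnormal) distribution. The threshold and sample sizes below are arbitrary choices for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, reps, rejections = 25, 5000, 0

for _ in range(reps):
    # Both groups from the same skewed distribution: the null is true.
    g1, g2 = rng.exponential(1.0, n), rng.exponential(1.0, n)
    # Common (problematic) practice: drop values beyond 2 SD within each group.
    g1 = g1[np.abs(g1 - g1.mean()) < 2 * g1.std()]
    g2 = g2[np.abs(g2 - g2.mean()) < 2 * g2.std()]
    if stats.ttest_ind(g1, g2).pvalue < 0.05:
        rejections += 1

print("empirical type I error rate:", rejections / reps)   # above the nominal 0.05
```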

  18. Relating Complexity and Error Rates of Ontology Concepts. More Complex NCIt Concepts Have More Errors.

    Science.gov (United States)

    Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher

    2017-05-18

    Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.
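
    The group comparison reported here has the shape of a simple two-by-two contingency analysis; the counts below are invented, not the NCIt audit numbers.

```python
from scipy.stats import chi2_contingency

# Hypothetical QA outcomes: [erroneous, correct] concepts in each group.
complex_concepts = [18, 82]    # 18% error rate among laterally complex concepts
simple_concepts  = [ 7, 93]    #  7% error rate among simpler concepts

chi2, p, dof, expected = chi2_contingency([complex_concepts, simple_concepts])
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")   # a small p suggests the rates differ
```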

  19. Biomechanical evaluation of bending strength of spinal pedicle screws, including cylindrical, conical, dual core and double dual core designs using numerical simulations and mechanical tests.

    Science.gov (United States)

    Amaritsakul, Yongyut; Chao, Ching-Kong; Lin, Jinn

    2014-09-01

    Pedicle screws are used for treating several types of spinal injuries. Although several commercial versions are presently available, they are mostly either fully cylindrical or fully conical. In this study, the bending strengths of seven types of commercial pedicle screws and a newly designed double dual core screw were evaluated by finite element analyses and biomechanical tests. All the screws had an outer diameter of 7 mm, and the biomechanical test consisted of a cantilever bending test in which a vertical point load was applied using a lever arm of 45 mm. The boundary and loading conditions of the biomechanical tests were applied to the model used for the finite element analyses. The results showed that only the conical screws with fixed outer diameter and the new double dual core screw could withstand 1,000,000 cycles of a 50-500 N cyclic load. The new screw, however, exhibited lower stiffness than the conical screw, indicating that it could afford patients more flexible movements. Moreover, the new screw produced a level of stability comparable to that of the conical screw, and it was also significantly stronger than the other screws. The finite element analysis further revealed that the point of maximum tensile stress in the screw model was comparable to the point at which fracture occurred during the fatigue test. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  20. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking computational errors into account. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton’s meth...

  1. Analgesic medication errors in North Carolina nursing homes.

    Science.gov (United States)

    Desai, Rishi J; Williams, Charrlotte E; Greene, Sandra B; Pierson, Stephanie; Caprio, Anthony J; Hansen, Richard A

    2013-06-01

    The objective of this study was to characterize analgesic medication errors and to evaluate their association with patient harm. The authors conducted a cross-sectional analysis of individual medication error incidents reported by North Carolina nursing homes to the Medication Error Quality Initiative (MEQI) during fiscal years 2010-2011. Bivariate associations of analgesic medication errors with patient factors, error-related factors, and impact on patients were tested with chi-square tests. A multivariate logistic regression model explored the relationship between type of analgesic medication error and patient harm, controlling for patient- and error-related factors. A total of 32,176 individual medication error incidents were reported over a 2-year period in North Carolina nursing homes, 12.3% (n = 3949) of which were analgesic medication errors. Of these analgesic medication errors, opioid and nonopioid analgesics were involved in 3105 and 844 errors, respectively. Opioid errors were more likely to be wrong drug errors, wrong dose errors, and administration errors compared with nonopioid errors. Opioid errors were also found to have higher odds of patient harm compared with nonopioid errors (odds ratio [OR] = 3, 95% confidence interval [CI]: 1.1-7.8). The authors conclude that opioid analgesics represent the majority of analgesic error reports, and these error reports reflect an increased likelihood of patient harm compared with nonopioid analgesics.

  2. Political Violence and Child Adjustment in Northern Ireland: Testing Pathways in a Social-Ecological Model Including Single- and Two-Parent Families

    Science.gov (United States)

    Cummings, E. Mark; Schermerhorn, Alice C.; Merrilees, Christine E.; Goeke-Morey, Marcie C.; Shirlow, Peter; Cairns, Ed

    2010-01-01

    Moving beyond simply documenting that political violence negatively impacts children, we tested a social-ecological hypothesis for relations between political violence and child outcomes. Participants were 700 mother-child (M = 12.1 years, SD = 1.8) dyads from 18 working-class, socially deprived areas in Belfast, Northern Ireland, including…

  3. Use of an Aptitude Test in University Entrance--A Validity Study: Updated Analyses of Higher Education Destinations, Including 2007 Entrants

    Science.gov (United States)

    Kirkup, Catherine; Wheater, Rebecca; Morrison, Jo; Durbin, Ben

    2010-01-01

    In 2005, the National Foundation for Educational Research (NFER) was commissioned to evaluate the potential value of using an aptitude test as an additional tool in the selection of candidates for admission to higher education (HE). This five-year study is co-funded by the National Foundation for Educational Research (NFER), the Department for…

  4. Steering Organoids Toward Discovery: Self-Driving Stem Cells Are Opening a World of Possibilities, Including Drug Testing and Tissue Sourcing.

    Science.gov (United States)

    Solis, Michele

    2016-01-01

    Since the 1980s, stem cells' shape-shifting abilities have wowed scientists. With proper handling, a few growth factors, and some time, stem cells can be cooked up into specific cell types, including neurons, muscle, and skin.

  5. Error image aware content restoration

    Science.gov (United States)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard of quality demanded by consumers has posed a new challenge in today's context, where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error-restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality-check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), which is a familiar tool for quality-control agents.

  6. Antenna motion errors in bistatic SAR imagery

    Science.gov (United States)

    Wang, Ling; Yazıcı, Birsen; Cagri Yanik, H.

    2015-06-01

    Antenna trajectory or motion errors are pervasive in synthetic aperture radar (SAR) imaging. Motion errors typically result in smearing and positioning errors in SAR images. Understanding the relationship between the trajectory errors and position errors in reconstructed images is essential in forming focused SAR images. Existing studies on the effect of antenna motion errors are limited to certain geometries, trajectory error models or monostatic SAR configuration. In this paper, we present an analysis of position errors in bistatic SAR imagery due to antenna motion errors. Bistatic SAR imagery is becoming increasingly important in the context of passive imaging and multi-sensor imaging. Our analysis provides an explicit quantitative relationship between the trajectory errors and the positioning errors in bistatic SAR images. The analysis is applicable to arbitrary trajectory errors and arbitrary imaging geometries including wide apertures and large scenes. We present extensive numerical simulations to validate the analysis and to illustrate the results in commonly used bistatic configurations and certain trajectory error models.

  7. Reducing medication errors.

    Science.gov (United States)

    Nute, Christine

    2014-11-25

    Most nurses are involved in medicines management, which is integral to promoting patient safety. Medicines management is prone to errors, which, depending on the error, can cause patient injury, increased hospital stay and significant legal expenses. This article describes a new approach to help minimise drug errors within healthcare settings where medications are prescribed, dispensed or administered. The acronym DRAINS, which considers all aspects of medicines management before administration, was devised to reduce medication errors on a cardiothoracic intensive care unit.

  8. Propagation of angular errors in two-axis rotation systems

    Science.gov (United States)

    Torrington, Geoffrey K.

    2003-10-01

    Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
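
    The weighted root-sum-of-squares budget described in this abstract can be written out directly; the error sources, tolerances, and sensitivity weights below are invented stand-ins for values that would come from system drawings.

```python
import numpy as np

# Hypothetical goniometer error sources: (tolerance in arc-sec, sensitivity weight).
sources = {
    "azimuth bearing wobble":    (5.0, 1.00),
    "elevation axis skew":       (8.0, 0.71),
    "home-switch repeatability": (3.0, 1.00),
    "stage orthogonality":       (6.0, 0.50),
}

# Root-sum-of-squares combination of the weighted contributions (k = 1).
rss = np.sqrt(sum((tol * w) ** 2 for tol, w in sources.values()))
print(f"pointing error budget: {rss:.1f} arc-sec (67%, k=1); "
      f"{2 * rss:.1f} arc-sec (95%, k=2)")
```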

  9. Demand Forecasting Errors

    OpenAIRE

    Mackie, Peter; Nellthorp, John; Laird, James

    2005-01-01

    Demand forecasts form a key input to the economic appraisal. As such any errors present within the demand forecasts will undermine the reliability of the economic appraisal. The minimization of demand forecasting errors is therefore important in the delivery of a robust appraisal. This issue is addressed in this note by introducing the key issues, and error types present within demand fore...

  10. When errors are rewarding

    NARCIS (Netherlands)

    Bruijn, E.R.A. de; Lange, F.P. de; Cramon, D.Y. von; Ullsperger, M.

    2009-01-01

    For social beings like humans, detecting one's own and others' errors is essential for efficient goal-directed behavior. Although one's own errors are always negative events, errors from other persons may be negative or positive depending on the social context. We used neuroimaging to disentangle br

  11. Errors associated with outpatient computerized prescribing systems

    Science.gov (United States)

    Rothschild, Jeffrey M; Salzberg, Claudia; Keohane, Carol A; Zigmont, Katherine; Devita, Jim; Gandhi, Tejal K; Dalal, Anuj K; Bates, David W; Poon, Eric G

    2011-01-01

    Objective To report the frequency, types, and causes of errors associated with outpatient computer-generated prescriptions, and to develop a framework to classify these errors to determine which strategies have greatest potential for preventing them. Materials and methods This is a retrospective cohort study of 3850 computer-generated prescriptions received by a commercial outpatient pharmacy chain across three states over 4 weeks in 2008. A clinician panel reviewed the prescriptions using a previously described method to identify and classify medication errors. Primary outcomes were the incidence of medication errors; potential adverse drug events, defined as errors with potential for harm; and rate of prescribing errors by error type and by prescribing system. Results Of 3850 prescriptions, 452 (11.7%) contained 466 total errors, of which 163 (35.0%) were considered potential adverse drug events. Error rates varied by computerized prescribing system, from 5.1% to 37.5%. The most common error was omitted information (60.7% of all errors). Discussion About one in 10 computer-generated prescriptions included at least one error, of which a third had potential for harm. This is consistent with the literature on manual handwritten prescription error rates. The number, type, and severity of errors varied by computerized prescribing system, suggesting that some systems may be better at preventing errors than others. Conclusions Implementing a computerized prescribing system without comprehensive functionality and processes in place to ensure meaningful system use does not decrease medication errors. The authors offer targeted recommendations on improving computerized prescribing systems to prevent errors. PMID:21715428

  12. Error detection and reduction in blood banking.

    Science.gov (United States)

    Motschman, T L; Moore, S B

    1996-12-01

    Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as active quality monitoring. To assure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keeps employees practiced and confident, and diminishes fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition for reportable errors. Reportable errors should include those errors with potentially harmful outcomes as well as those errors that are "upstream," and thus further away from the outcome. A well-written error report consists of who, what, when, where, why/how, and follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility a functional error classification method and a quality system-based classification have been useful. An active method to search for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest for improving processes, an error management program is itself a process that needs improvement, and we must strive to always close the circle.

  13. The Empirical Power and Type I Error Rates of the GBT and [omega] Indices in Detecting Answer Copying on Multiple-Choice Tests

    Science.gov (United States)

    Zopluoglu, Cengiz; Davenport, Ernest C., Jr.

    2012-01-01

    The generalized binomial test (GBT) and [omega] indices are the most recent methods suggested in the literature to detect answer copying behavior on multiple-choice tests. The [omega] index is one of the most studied indices, but there has not yet been a systematic simulation study for the GBT index. In addition, the effect of the ability levels…
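
    The core of a generalized-binomial copying index is a tail probability: given per-item chances that two examinees match by coincidence, how surprising is the observed number of identical answers? The sketch below computes the Poisson-binomial upper tail by dynamic programming; the per-item probabilities and counts are invented, and real indices estimate them from a response model.

```python
import numpy as np

def generalized_binomial_tail(p_match, observed):
    """P(at least `observed` matches) when item j matches with probability
    p_match[j] independently (Poisson-binomial upper tail, computed by DP)."""
    dist = np.array([1.0])                      # distribution of 0 matches so far
    for p in p_match:
        dist = np.convolve(dist, [1.0 - p, p])  # add one item's match/no-match
    return dist[observed:].sum()

# Hypothetical per-item probabilities that two examinees agree by chance
# (e.g., from an IRT model), and an observed count of identical answers.
rng = np.random.default_rng(4)
p_match = rng.uniform(0.2, 0.5, size=40)        # 40-item test, invented values
observed = 31

p_value = generalized_binomial_tail(p_match, observed)
print(f"probability of >= {observed} chance matches: {p_value:.2e}")
```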

  14. Carbapenem Susceptibility Testing Errors Using Three Automated Systems, Disk Diffusion, Etest, and Broth Microdilution and Carbapenem Resistance Genes in Isolates of Acinetobacter baumannii-calcoaceticus Complex

    Science.gov (United States)

    2016-06-07

    2006 and 2008 were studied. Frozen cultures were passed twice on blood agar plates (Remel, Lenexa, KS) before testing. Quality control strains used for... (Solna, Sweden) according to the manufacturer's instructions (3, 18, 23). Antimicrobial solutions for broth microdilution testing were prepared from

  15. How social is error observation? The neural mechanisms underlying the observation of human and machine errors.

    Science.gov (United States)

    Desmet, Charlotte; Deschrijver, Eliane; Brass, Marcel

    2014-04-01

    Recently, it has been shown that the medial prefrontal cortex (MPFC) is involved in error execution as well as error observation. Based on this finding, it has been argued that recognizing each other's mistakes might rely on motor simulation. In the current functional magnetic resonance imaging (fMRI) study, we directly tested this hypothesis by investigating whether medial prefrontal activity in error observation is restricted to situations that enable simulation. To this aim, we compared brain activity related to the observation of errors that can be simulated (human errors) with brain activity related to errors that cannot be simulated (machine errors). We show that medial prefrontal activity is not only restricted to the observation of human errors but also occurs when observing errors of a machine. In addition, our data indicate that the MPFC reflects a domain general mechanism of monitoring violations of expectancies.

  16. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in the design, implementation, and optimization of hardware/software systems for error correction. The book's chapters are written by internationally recognized experts in this field. Topics include the evolution of error correction techniques, industrial user needs, and architectures and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc.). The book provides access to recent results and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for current and next-generation standards; • Provides coverage of industrial user needs for advanced error correcting techniques.

  17. AWARENESS OF DENTISTS ABOUT MEDICATION ERRORS

    Directory of Open Access Journals (Sweden)

    Sangeetha

    2014-01-01

    Full Text Available OBJECTIVE: To assess the awareness of medication errors among dentists. METHODS: Medication errors are the most common single preventable cause of adverse events in medication practice. We conducted a survey with a sample of sixty dentists. Among them, 30 were general dentists (BDS) and 30 were dental specialists (MDS). Questionnaires with questions regarding medication errors were distributed to them, and they were asked to fill in the questionnaire. Data were collected and subjected to statistical analysis using the Fisher exact and chi-square tests. RESULTS: In our study, sixty percent of general dentists and 76.7% of dental specialists were aware of the components of medication error. Overall, 66.7% of the respondents in each group marked wrong duration as the dispensing error. Almost thirty percent of the general dentists and 56.7% of the dental specialists felt that technologic advances could accomplish diverse tasks in reducing medication errors. This was of suggestive statistical significance, with a P value of 0.069. CONCLUSION: Medication errors compromise patient confidence in the health-care system and increase health-care costs. Overall, the dental specialists were more knowledgeable than the general dentists about medication errors. KEY WORDS: Medication errors; Dosing error; Prevention of errors; Adverse drug events; Prescribing errors; Medical errors.

  18. A general tank test of a model of the hull of the Pem-1 flying boat including a special working chart for the determination of hull performance

    Science.gov (United States)

    Dawson, John R

    1938-01-01

    The results of a general tank test of a 1/6 full-size model of the hull of the Pem-1 flying boat (N.A.C.A. model 18) are given in non-dimensional form. In addition to the usual curves, the results are presented in a new form that makes it possible to apply them more conveniently than in the forms previously used. The resistance was compared with that of N.A.C.A. models 11-C and 26 (Sikorsky S-40) and was found to be generally less than the resistance of either.

  19. Differentiation between Staphylococcus aureus and coagulase-negative Staphylococcus species by real-time PCR including detection of methicillin resistants in comparison to conventional microbiology testing.

    Science.gov (United States)

    Klaschik, Sven; Lehmann, Lutz E; Steinhagen, Folkert; Book, Malte; Molitor, Ernst; Hoeft, Andreas; Stueber, Frank

    2015-03-01

    Staphylococcus aureus has long been recognized as a major pathogen. Methicillin-resistant strains of S. aureus (MRSA) and methicillin-resistant strains of S. epidermidis (MRSE) are among the most prevalent multiresistant pathogens worldwide, frequently causing nosocomial and community-acquired infections. In the present pilot study, we tested a polymerase chain reaction (PCR) method to quickly differentiate Staphylococci and identify the mecA gene in a clinical setting. Compared to the conventional microbiology testing the real-time PCR assay had a higher detection rate for both S. aureus and coagulase-negative Staphylococci (CoNS; 55 vs. 32 for S. aureus and 63 vs. 24 for CoNS). Hands-on time preparing DNA, carrying out the PCR, and evaluating results was less than 5 h. The assay is largely automated, easy to adapt, and has been shown to be rapid and reliable. Fast detection and differentiation of S. aureus, CoNS, and the mecA gene by means of this real-time PCR protocol may help expedite therapeutic decision-making and enable earlier adequate antibiotic treatment. © 2014 Wiley Periodicals, Inc.

  20. Relative Effects of Trajectory Prediction Errors on the AAC Autoresolver

    Science.gov (United States)

    Lauderdale, Todd

    2011-01-01

    Trajectory prediction is fundamental to automated separation assurance. Every missed alert, false alert and loss of separation can be traced to one or more errors in trajectory prediction. These errors are a product of many different sources including wind prediction errors, inferred pilot intent errors, surveillance errors, navigation errors and aircraft weight estimation errors. This study analyzes the impact of six different types of errors on the performance of an automated separation assurance system composed of a geometric conflict detection algorithm and the Advanced Airspace Concept Autoresolver resolution algorithm. Results show that, of the error sources considered in this study, top-of-descent errors were the leading contributor to missed alerts and failed resolution maneuvers. Descent-speed errors were another significant contributor, as were cruise-speed errors in certain situations. The results further suggest that increasing horizontal detection and resolution standards is not an effective strategy for mitigating these types of error sources.

  1. Error correction maintains post-error adjustments after one night of total sleep deprivation.

    Science.gov (United States)

    Hsieh, Shulan; Tsai, Cheng-Yin; Tsai, Ling-Ling

    2009-06-01

    Previous behavioral and electrophysiologic evidence indicates that one night of total sleep deprivation (TSD) impairs error monitoring, including error detection, error correction, and posterror adjustments (PEAs). This study examined the hypothesis that error correction, manifesting as an overtly expressed self-generated performance feedback to errors, can effectively prevent TSD-induced impairment in the PEAs. Sixteen healthy right-handed adults (seven women and nine men) aged 19-23 years were instructed to respond to a target arrow flanked by four distractor arrows and to correct their errors immediately after committing errors. Task performance and electroencephalogram (EEG) data were collected after normal sleep (NS) and after one night of TSD in a counterbalanced repeated-measures design. With the demand of error correction, the participants maintained the same level of PEAs in reducing the error rate for trial N + 1 after TSD as after NS. Corrective behavior further affected the PEAs for trial N + 1 in the omission rate and response speed, which decreased and speeded up following corrected errors, particularly after TSD. These results show that error correction effectively maintains posterror reduction in both committed and omitted errors after TSD. A cerebral mechanism might be involved in the effect of error correction, as EEG beta (17-24 Hz) activity was increased after erroneous responses compared to after correct responses. The practical application of error correction to increasing work safety, which can be jeopardized by repeated errors, is suggested for workers who are involved in monotonous but attention-demanding monitoring tasks.

  2. Systematic error revisited

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod, M.C.

    1996-08-05

    The American National Standards Institute (ANSI) defines systematic error as "an error which remains constant over replicative measurements." It would seem from the ANSI definition that a systematic error is not really an error at all; it is merely a failure to calibrate the measurement system properly, because if the error is constant, why not simply correct for it? Yet systematic errors undoubtedly exist, and they differ in some fundamental way from the kind of errors we call random. Early papers by Eisenhart and by Youden discussed systematic versus random error with regard to measurements in the physical sciences, but not in a fundamental way, and the distinction remains clouded by controversy. The lack of general agreement on definitions has led to a plethora of different and often confusing methods for quantifying the total uncertainty of a measurement that incorporates both its systematic and random errors. Some assert that systematic error should be treated by non-statistical methods. We disagree with this approach; we provide basic definitions based on entropy concepts and a statistical methodology for combining errors and making statements of total measurement uncertainty. We illustrate our methods with radiometric assay data.
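
    The abstract does not reproduce the authors' entropy-based definitions or their combination rule; purely as context, a common convention in measurement practice combines the random and systematic standard-uncertainty components in quadrature, as sketched below with hypothetical values.

```python
import math

def combined_standard_uncertainty(u_random, u_systematic):
    """Quadrature combination of random and systematic standard uncertainties.
    A common convention, offered as context; not necessarily the entropy-based
    method the authors propose."""
    return math.sqrt(u_random**2 + u_systematic**2)

# Hypothetical radiometric assay: 1.2% random, 0.8% systematic component.
print(combined_standard_uncertainty(1.2, 0.8))   # ~1.44%
```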

  3. The Effect of the Basis-Set Superposition Error on the Calculation of Dispersion Interactions:  A Test Study on the Neon Dimer.

    Science.gov (United States)

    Monari, Antonio; Bendazzoli, Gian Luigi; Evangelisti, Stefano; Angeli, Celestino; Ben Amor, Nadia; Borini, Stefano; Maynau, Daniel; Rossi, Elda

    2007-03-01

    The dispersion interactions of the Ne2 dimer were studied using both the long-range perturbative and supramolecular approaches: for the long-range approach, full CI or string-truncated CI methods were used, while for the supramolecular treatments, the energy curves were computed by using configuration interaction with single and double excitations (CISD), coupled cluster with single and double excitations, and coupled cluster with single, double, and perturbative triple excitations. From the interatomic potential-energy curves obtained by the supramolecular approach, the C6 and C8 dispersion coefficients were computed via an interpolation scheme, and they were compared with the corresponding values obtained within the long-range perturbative treatment. We found that the lack of size consistency of the CISD approach makes this method completely useless for computing dispersion coefficients, even when the effect of the basis-set superposition error on the dimer curves is considered. The largest full-CI space we were able to use contains more than 1 billion symmetry-adapted Slater determinants, and it is, to our knowledge, the largest calculation of second-order properties ever done at the full-CI level so far. Finally, a new data format and libraries (Q5Cost) have been used in order to interface the different codes used in the present study.

  4. Research on the test-fire precision analysis method of single-tube naval guns by measuring distance and direction error

    Institute of Scientific and Technical Information of China (English)

    向宏志

    2012-01-01

    The test-fire method of measuring distance and direction is approximately the same for single-tube and twin-tube naval guns: a single-tube gun generally fires k rounds in succession at its maximum rate of fire as one group, so as to approximate the effect of a twin-tube salvo. This paper fully accounts for the error caused by the time intervals between the impacts of the several rounds within one group, and systematically proposes the method and procedure for analyzing the test-fire precision of single-tube naval guns by measuring distance and direction.

  5. Heteroscedasticity and/or Autocorrelation Checks in Longitudinal Nonlinear Models with Elliptical and AR(1) Errors

    Institute of Scientific and Technical Information of China (English)

    Chun-Zheng CAO; Jin-Guan LIN

    2012-01-01

    The aim of this paper is to study tests for variance heterogeneity and/or autocorrelation in nonlinear regression models with elliptical and AR(1) errors. The elliptical class includes several symmetric multivariate distributions such as the normal, Student-t, and power exponential, among others. Several diagnostic tests using score statistics and their adjustments are constructed. The asymptotic properties, including the asymptotic chi-square distribution and approximate powers under local alternatives of the score statistics, are studied. The properties of the test statistics are investigated through Monte Carlo simulations. A data set previously analyzed under normal errors is reanalyzed under elliptical models to illustrate our test methods.

  6. Firewall Configuration Errors Revisited

    CERN Document Server

    Wool, Avishai

    2009-01-01

    The first quantitative evaluation of the quality of corporate firewall configurations appeared in 2004, based on Check Point FireWall-1 rule-sets. In general, that survey indicated that corporate firewalls were often enforcing poorly written rule-sets containing many mistakes. The goal of this work is to revisit the first survey. The current study is much larger. Moreover, for the first time, the study includes configurations from two major vendors. The study also introduces a novel "Firewall Complexity" (FC) measure that applies to both types of firewalls. The findings of the current study indeed validate the 2004 study's main observations: firewalls are (still) poorly configured, and a rule-set's complexity is (still) positively correlated with the number of detected risk items. Thus we can conclude that, for well-configured firewalls, "small is (still) beautiful." However, unlike the 2004 study, we see no significant indication that later software versions have fewer errors (for both vendors).

  7. Corrective Action Investigation Plan for Corrective Action Unit 529: Area 25 Contaminated Materials, Nevada Test Site, Nevada, Rev. 0, Including Record of Technical Change No. 1

    Energy Technology Data Exchange (ETDEWEB)

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2003-02-26

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 529, Area 25 Contaminated Materials, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. CAU 529 consists of one Corrective Action Site (25-23-17). For the purpose of this investigation, the Corrective Action Site has been divided into nine parcels based on the separate and distinct releases. A conceptual site model was developed for each parcel to address the translocation of contaminants from each release. The results of this investigation will be used to support a defensible evaluation of corrective action alternatives in the corrective action decision document.

  8. An aerial radiological survey of the Tonopah Test Range, including the Clean Slate 1, 2, and 3 sites, Roller Coaster, the decontamination area, and the Cactus Springs Ranch target areas, central Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Proctor, A.E.; Hendricks, T.J.

    1995-08-01

    An aerial radiological survey was conducted of major sections of the Tonopah Test Range (TTR) in central Nevada from August through October 1993. The survey consisted of aerial measurements of both natural and man-made gamma radiation emanating from the terrestrial surface. The initial purpose of the survey was to locate depleted uranium (detecting {sup 238}U) from projectiles which had impacted on the TTR. The examination of areas near Cactus Springs Ranch (located near the western boundary of the TTR) and an animal burial area near the Double Track site were secondary objectives. When more widespread than expected {sup 241}Am contamination was found around the Clean Slates sites, the survey was expanded to cover the area surrounding the Clean Slates and also the Double Track site. Results are reported as radiation isopleths superimposed on aerial photographs of the area.

  9. Imperfect practice makes perfect: error management training improves transfer of learning.

    Science.gov (United States)

    Dyre, Liv; Tabor, Ann; Ringsted, Charlotte; Tolsgaard, Martin G

    2017-02-01

    Traditionally, trainees are instructed to practise with as few errors as possible during simulation-based training. However, transfer of learning may improve if trainees are encouraged to commit errors. The aim of this study was to assess the effects of error management instructions compared with error avoidance instructions during simulation-based ultrasound training. Medical students (n = 60) with no prior ultrasound experience were randomised to error management training (EMT) (n = 32) or error avoidance training (EAT) (n = 28). The EMT group was instructed to deliberately make errors during training. The EAT group was instructed to follow the simulator instructions and to commit as few errors as possible. Training consisted of 3 hours of simulation-based ultrasound training focusing on fetal weight estimation. Simulation-based tests were administered before and after training. Transfer tests were performed on real patients 7-10 days after the completion of training. Primary outcomes were transfer test performance scores and diagnostic accuracy. Secondary outcomes included performance scores and diagnostic accuracy during the simulation-based pre- and post-tests. A total of 56 participants completed the study. On the transfer test, EMT group participants attained higher performance scores (mean score: 67.7%, 95% confidence interval [CI]: 62.4-72.9%) than EAT group members (mean score: 51.7%, 95% CI: 45.8-57.6%). Error management training thus improves the transfer of learning to the clinical setting compared with error avoidance instructions. Rather than teaching to avoid errors, the use of errors for learning should be explored further in medical education theory and practice. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  10. Tolerance for error and computational estimation ability.

    Science.gov (United States)

    Hogan, Thomas P; Wyckoff, Laurie A; Krebs, Paul; Jones, William; Fitzgerald, Mark P

    2004-06-01

    Previous investigators have suggested that the personality variable tolerance for error is related to success in computational estimation. However, this suggestion has not been tested directly. This study examined the relationship between performance on a computational estimation test and scores on the NEO-Five Factor Inventory, a measure of the Big Five personality traits, including Openness, an index of tolerance for ambiguity. Other variables included SAT-I Verbal and Mathematics scores and self-rated mathematics ability. Participants were 65 college students. There was no significant relationship between the tolerance variable and computational estimation performance. There was a modest negative relationship between Agreeableness and estimation performance. The skepticism associated with the negative pole of the Agreeableness dimension may be important to pursue in further understanding of estimation ability.

  11. Corrective Action Investigation Plan for Corrective Action Unit 516: Septic Systems and Discharge Points, Nevada Test Site, Nevada, Rev. 0, Including Record of Technical Change No. 1

    Energy Technology Data Exchange (ETDEWEB)

    None

    2003-04-28

    This Corrective Action Investigation Plan (CAIP) contains the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office's (NNSA/NSO's) approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 516, Septic Systems and Discharge Points, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. CAU 516 consists of six Corrective Action Sites: 03-59-01, Building 3C-36 Septic System; 03-59-02, Building 3C-45 Septic System; 06-51-01, Sump Piping; 06-51-02, Clay Pipe and Debris; 06-51-03, Clean Out Box and Piping; and 22-19-04, Vehicle Decontamination Area. Located in Areas 3, 6, and 22 of the NTS, CAU 516 is being investigated because disposed waste may be present without appropriate controls, and hazardous and/or radioactive constituents may be present or migrating at concentrations and locations that could potentially pose a threat to human health and the environment. Existing information and process knowledge on the expected nature and extent of contamination of CAU 516 are insufficient to select preferred corrective action alternatives; therefore, additional information will be obtained by conducting a corrective action investigation. The results of this field investigation will support a defensible evaluation of corrective action alternatives in the corrective action decision document. Record of Technical Change No. 1 is dated 3/2004.

  12. A depth-averaged debris-flow model that includes the effects of evolving dilatancy: II. Numerical predictions and experimental tests.

    Science.gov (United States)

    George, David L.; Iverson, Richard M.

    2014-01-01

    We evaluate a new depth-averaged mathematical model that is designed to simulate all stages of debris-flow motion, from initiation to deposition. A companion paper shows how the model’s five governing equations describe simultaneous evolution of flow thickness, solid volume fraction, basal pore-fluid pressure, and two components of flow momentum. Each equation contains a source term that represents the influence of state-dependent granular dilatancy. Here we recapitulate the equations and analyze their eigenstructure to show that they form a hyperbolic system with desirable stability properties. To solve the equations we use a shock-capturing numerical scheme with adaptive mesh refinement, implemented in an open-source software package we call D-Claw. As tests of D-Claw, we compare model output with results from two sets of large-scale debris-flow experiments. One set focuses on flow initiation from landslides triggered by rising pore-water pressures, and the other focuses on downstream flow dynamics, runout, and deposition. D-Claw performs well in predicting evolution of flow speeds, thicknesses, and basal pore-fluid pressures measured in each type of experiment. Computational results illustrate the critical role of dilatancy in linking coevolution of the solid volume fraction and pore-fluid pressure, which mediates basal Coulomb friction and thereby regulates debris-flow dynamics.

  13. Corrective Action Investigation Plan for Corrective Action Unit 536: Area 3 Release Site, Nevada Test Site, Nevada (Rev. 0 / June 2003), Including Record of Technical Change No. 1

    Energy Technology Data Exchange (ETDEWEB)

    None

    2003-06-27

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's approach to collect the data necessary to evaluate corrective action alternatives (CAAs) appropriate for the closure of Corrective Action Unit (CAU) 536: Area 3 Release Site, Nevada Test Site, Nevada, under the Federal Facility Agreement and Consent Order. Corrective Action Unit 536 consists of a single Corrective Action Site (CAS): 03-44-02, Steam Jenny Discharge. The CAU 536 site is being investigated because existing information on the nature and extent of possible contamination is insufficient to evaluate and recommend corrective action alternatives for CAS 03-44-02. The additional information will be obtained by conducting a corrective action investigation (CAI) prior to evaluating CAAs and selecting the appropriate corrective action for this CAS. The results of this field investigation are to be used to support a defensible evaluation of corrective action alternatives in the corrective action decision document. Record of Technical Change No. 1 is dated 3-2004.

  14. Explaining errors in children's questions.

    Science.gov (United States)

    Rowland, Caroline F

    2007-07-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813-842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children's speech, and that errors occur when children resort to other operations to produce questions [e.g. Dabrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83-102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: what children do know? Journal of Child Language, 27, 157-181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.

  15. Mapping of Schistosomiasis and Soil-Transmitted Helminths in Namibia: The First Large-Scale Protocol to Formally Include Rapid Diagnostic Tests.

    Directory of Open Access Journals (Sweden)

    José Carlos Sousa-Figueiredo

    Full Text Available Namibia is now ready to begin mass drug administration of praziquantel and albendazole against schistosomiasis and soil-transmitted helminths, respectively. Although historical data identifies areas of transmission of these neglected tropical diseases (NTDs), there is a need to update epidemiological data. For this reason, Namibia adopted a new protocol for mapping of schistosomiasis and geohelminths, formally integrating rapid diagnostic tests (RDTs) for infections and morbidity. In this article, we explain the protocol in detail, and introduce the concept of 'mapping resolution', as well as present results and treatment recommendations for northern Namibia. This new protocol allowed a large sample to be surveyed (N = 17,896 children from 299 schools) at relatively low cost (7 USD per person mapped) and very quickly (28 working days). All children were analysed by RDTs, but only a sub-sample was also diagnosed by light microscopy. Overall prevalence of schistosomiasis in the surveyed areas was 9.0%, highly associated with poorer access to potable water (OR = 1.5, P<0.001) and defective (OR = 1.2, P<0.001) or absent sanitation infrastructure (OR = 2.0, P<0.001). Overall prevalence of geohelminths, more particularly hookworm infection, was 12.2%, highly associated with presence of faecal occult blood (OR = 1.9, P<0.001). Prevalence maps were produced and hot spots identified to better guide the national programme in drug administration, as well as targeted improvements in water, sanitation and hygiene. The RDTs employed (circulating cathodic antigen and microhaematuria for Schistosoma mansoni and S. haematobium, respectively) performed well, with sensitivities above 80% and specificities above 95%. This protocol is cost-effective and sensitive to budget limitations and the potential economic and logistical strains placed on the national Ministries of Health. Here we present a high resolution map of disease prevalence levels, and treatment regimens are

  16. Mapping of Schistosomiasis and Soil-Transmitted Helminths in Namibia: The First Large-Scale Protocol to Formally Include Rapid Diagnostic Tests

    Science.gov (United States)

    Sousa-Figueiredo, José Carlos; Stanton, Michelle C.; Katokele, Stark; Arinaitwe, Moses; Adriko, Moses; Balfour, Lexi; Reiff, Mark; Lancaster, Warren; Noden, Bruce H.; Bock, Ronnie; Stothard, J. Russell

    2015-01-01

    Background Namibia is now ready to begin mass drug administration of praziquantel and albendazole against schistosomiasis and soil-transmitted helminths, respectively. Although historical data identifies areas of transmission of these neglected tropical diseases (NTDs), there is a need to update epidemiological data. For this reason, Namibia adopted a new protocol for mapping of schistosomiasis and geohelminths, formally integrating rapid diagnostic tests (RDTs) for infections and morbidity. In this article, we explain the protocol in detail, and introduce the concept of ‘mapping resolution’, as well as present results and treatment recommendations for northern Namibia. Methods/Findings/Interpretation This new protocol allowed a large sample to be surveyed (N = 17 896 children from 299 schools) at relatively low cost (7 USD per person mapped) and very quickly (28 working days). All children were analysed by RDTs, but only a sub-sample was also diagnosed by light microscopy. Overall prevalence of schistosomiasis in the surveyed areas was 9.0%, highly associated with poorer access to potable water (OR = 1.5, P<0.001) and defective (OR = 1.2, P<0.001) or absent sanitation infrastructure (OR = 2.0, P<0.001). Overall prevalence of geohelminths, more particularly hookworm infection, was 12.2%, highly associated with presence of faecal occult blood (OR = 1.9, P<0.001). Prevalence maps were produced and hot spots identified to better guide the national programme in drug administration, as well as targeted improvements in water, sanitation and hygiene. The RDTs employed (circulating cathodic antigen and microhaematuria for Schistosoma mansoni and S. haematobium, respectively) performed well, with sensitivities above 80% and specificities above 95%. Conclusion/Significance This protocol is cost-effective and sensitive to budget limitations and the potential economic and logistical strains placed on the national Ministries of Health. Here we present a high resolution map

  17. CUSUM Statistics for Large Item Banks: Computation of Standard Errors. Law School Admission Council Computerized Testing Report. LSAC Research Report Series.

    Science.gov (United States)

    Glas, C. A. W.

    A previous study (Glas, 1998) examined how to evaluate whether adaptive testing data used for online calibration sufficiently fit the item response model. Three approaches were suggested, based on a Lagrange multiplier (LM) statistic, a Wald statistic, and a cumulative sum (CUSUM) statistic, respectively. For all these methods,…
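
    For context, the generic one-sided CUSUM recursion that such monitoring statistics build on accumulates standardized residuals in excess of a reference value and signals when the sum crosses a threshold. The parameter choices below are illustrative defaults, not those of the LSAC report.

```python
def cusum_flags(residuals, k=0.5, h=5.0):
    """One-sided CUSUM: c_t = max(0, c_{t-1} + z_t - k); flag when c_t > h.
    k is the reference value (drift tolerated per step), h the decision
    threshold; both are illustrative."""
    c, flags = 0.0, []
    for z in residuals:
        c = max(0.0, c + z - k)
        flags.append(c > h)
    return flags

# Hypothetical standardized item-fit residuals drifting upward over time:
print(cusum_flags([0.1, 0.4, 1.2, 1.5, 1.8, 2.0, 2.2]))  # flags the last step
```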

  18. Impact of Measurement Error on Synchrophasor Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)

    2015-07-01

    Phasor measurement units (PMUs), which produce synchrophasor measurements, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is the application most likely to be influenced by measurement error. Other findings include the possibility of error causing nonoscillatory activity to be reported as an oscillation, the possibility of failing to detect oscillations submerged by error, and the conclusion that error is unlikely to affect event location and islanding detection.

  19. Output Error Method for Tiltrotor Unstable in Hover

    Directory of Open Access Journals (Sweden)

    Lichota Piotr

    2017-03-01

    Full Text Available This article investigates system identification of a tiltrotor that is unstable in hover, using flight test data. The aircraft dynamics were described by a linear model defined in the body-fixed coordinate system. The Output Error Method was selected in order to obtain stability and control derivatives in lateral motion. For estimating model parameters, both time- and frequency-domain formulations were applied. To improve the system identification performed in the time domain, a stabilization matrix was included for evaluating the states. In the end, estimates obtained from the various Output Error Method formulations were compared in terms of parameter accuracy and time histories. Evaluations were performed in the MATLAB R2009b environment.
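
    As an illustration of the output-error idea only (a stable first-order toy, not the authors' tiltrotor model, which is unstable in hover and needs the stabilization matrix mentioned above), parameters are estimated by minimizing the discrepancy between measured and model-predicted outputs.

```python
import numpy as np
from scipy.optimize import least_squares

dt, n = 0.02, 500
u = np.sin(0.5 * dt * np.arange(n))        # test input signal
a_true, b_true = -1.5, 2.0                 # hypothetical true parameters

def simulate(params):
    """Integrate dx/dt = a*x + b*u with explicit Euler and return the output."""
    a, b = params
    x = np.zeros(n)
    for k in range(n - 1):
        x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
    return x

rng = np.random.default_rng(1)
y_meas = simulate([a_true, b_true]) + 0.01 * rng.standard_normal(n)

# Output Error Method, time-domain flavor: least-squares fit of the output.
fit = least_squares(lambda p: simulate(p) - y_meas, x0=[-0.5, 0.5])
print(fit.x)   # should be close to (-1.5, 2.0)
```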

  20. Nonlinear error testing method based on sine wave click rate technology

    Institute of Scientific and Technical Information of China (English)

    杨景阳; 刘路扬; 吕兵

    2016-01-01

    To meet the needs of ADC applications in communication and multimedia technology, this article studies an ADC nonlinear-error testing method based on the sine-wave click-rate technique and realizes the application of that technique to ADC nonlinearity testing. A sine-wave signal is applied to the ADC input, and the output digital codes are normalized to compensate for the uneven voltage distribution of the sine waveform; the ADC's differential nonlinearity error is then derived by a click-rate algorithm, and the reliability and accuracy of the algorithm were verified on the large-scale mixed-signal test system Catalyst-200. The experimental results show that the algorithm can accurately estimate the ADC's nonlinearity error and completely characterize the ADC's linearity and missing-code rate, providing important parameters for ADC applications in communication and multimedia technology; the method has strong engineering practicality and good market prospects.
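
    The abstract gives only the outline of the algorithm; the procedure as described matches the classic sine-wave histogram test, in which the relative frequency ("click rate") of each output code is compared with the arcsine distribution an ideal sine input would produce. A minimal sketch under that reading, with an ideal software quantizer standing in for real hardware.

```python
import numpy as np

def dnl_from_sine_histogram(codes, n_bits):
    """Estimate differential nonlinearity (DNL) from the code histogram of a
    full-scale sine input, using the arcsine reference distribution. Edge
    codes are dropped because clipping distorts them. A sketch of the general
    histogram technique, not the exact algorithm of the paper."""
    n_codes = 2 ** n_bits
    hist = np.bincount(codes, minlength=n_codes).astype(float)
    edges = 2.0 * np.arange(n_codes + 1) / n_codes - 1.0   # bin edges in [-1, 1]
    ideal = (np.arcsin(edges[1:]) - np.arcsin(edges[:-1])) / np.pi
    dnl = hist / (hist.sum() * ideal) - 1.0
    return dnl[1:-1]

# Hypothetical 8-bit ADC: ideal quantizer, so the estimated DNL is near zero.
t = np.arange(2 ** 20)
x = np.sin(2 * np.pi * 0.1234567 * t)                   # non-coherent frequency
codes = np.clip(((x + 1.0) * 128).astype(int), 0, 255)  # ideal 8-bit quantizer
print(np.abs(dnl_from_sine_histogram(codes, 8)).max())
```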

  1. Quantum rms error and Heisenberg’s error-disturbance relation

    Directory of Open Access Journals (Sweden)

    Busch Paul

    2014-01-01

    Full Text Available Reports on experiments recently performed in Vienna [Erhard et al., Nature Phys. 8, 185 (2012)] and Toronto [Rozema et al., Phys. Rev. Lett. 109, 100404 (2012)] include claims of a violation of Heisenberg's error-disturbance relation. In contrast, a Heisenberg-type tradeoff relation for joint measurements of position and momentum has been formulated and proven in [Phys. Rev. Lett. 111, 160405 (2013)]. Here I show how the apparent conflict is resolved by a careful consideration of the quantum generalization of the notion of root-mean-square error. The claim of a violation of Heisenberg's principle is untenable, as it is based on a historically wrong attribution of an incorrect relation to Heisenberg, which is in fact trivially violated. We review a new general trade-off relation for the necessary errors in approximate joint measurements of incompatible qubit observables that is in the spirit of Heisenberg's intuitions. The experiments mentioned may directly be used to test this new error inequality.

  2. Self-Regulation in ADHD: The Role of Error Processing

    Science.gov (United States)

    Shiels, Keri; Hawk, Larry W.

    2010-01-01

    Attention-deficit hyperactivity disorder (ADHD) is characterized by persistent and impairing developmentally inappropriate levels of inattention, hyperactivity, and impulsivity. Such behavioral dysregulation may be a consequence of deficits in self-monitoring or adaptive control, both of which are required for adaptive behavior. Processing of contextual demands, ongoing monitoring of one's behavior to evaluate whether it is appropriate for a particular situation, and adjusting behavior when it is suboptimal are components of self-regulation. This review examines and integrates the emerging literature on error-processing and adaptive control as components of self-regulation into the prominent etiological theories of ADHD. Available data on error-processing, as reflected in event-related potentials (ERN and Pe) and behavioral performance, suggest that both early error detection and later error evaluation may be diminished in ADHD, thereby interfering with adaptive control processes. However, variability in results limits broad conclusions, particularly for early error detection. A range of methodological issues, including ERP parameters and sample and task characteristics, likely contribute to this variability, and recommendations for future work are presented. PMID:20659781

  3. Probabilistic quantum error correction

    CERN Document Server

    Fern, Jesse; Terilla, John

    2002-01-01

    There are well known necessary and sufficient conditions for a quantum code to correct a set of errors. We study weaker conditions under which a quantum code may correct errors with probabilities that may be less than one. We work with stabilizer codes and as an application study how the nine qubit code, the seven qubit code, and the five qubit code perform when there are errors on more than one qubit. As a second application, we discuss the concept of syndrome quality and use it to suggest a way that quantum error correction can be practically improved.

  4. A prospective three-step intervention study to prevent medication errors in drug handling in paediatric care.

    Science.gov (United States)

    Niemann, Dorothee; Bertsche, Astrid; Meyrath, David; Koepf, Ellen D; Traiser, Carolin; Seebald, Katja; Schmitt, Claus P; Hoffmann, Georg F; Haefeli, Walter E; Bertsche, Thilo

    2015-01-01

    To prevent medication errors in drug handling in a paediatric ward. One in five preventable adverse drug events in hospitalised children is caused by medication errors. Errors in drug prescription have been studied frequently, but data regarding drug handling, including drug preparation and administration, are scarce. A three-step intervention study including a monitoring procedure was used to detect and prevent medication errors in drug handling. After approval by the ethics committee, pharmacists monitored drug handling by nurses on an 18-bed paediatric ward in a university hospital prior to and following each intervention step. They also conducted a questionnaire survey aimed at identifying knowledge deficits. Each intervention step targeted different causes of errors. The handout mainly addressed knowledge deficits, the training course addressed errors caused by rule violations and slips, and the reference book addressed knowledge-, memory- and rule-based errors. The number of patients who were subjected to at least one medication error in drug handling decreased from 38/43 (88%) to 25/51 (49%) following the third intervention, and the overall frequency of errors decreased from 527 errors in 581 processes (91%) to 116/441 (26%). The issue of the handout reduced medication errors caused by knowledge deficits regarding, for instance, the correct 'volume of solvent for IV drugs' from 49% to 25%. Paediatric drug handling is prone to errors. A three-step intervention effectively decreased the high frequency of medication errors by addressing the diversity of their causes. Worldwide, nurses are in charge of drug handling, which constitutes an error-prone but often-neglected step in drug therapy. Detection and prevention of errors in daily routine is necessary for a safe and effective drug therapy. Our three-step intervention reduced errors and is suitable to be tested in other wards and settings. © 2014 John Wiley & Sons Ltd.

  5. Fibre-reinforced plastic composites - Determination of the in-plane shear stress/shear strain response, including the in-plane shear modulus and strength, by the plus or minus 45 degree tension test method

    CERN Document Server

    International Organization for Standardization. Geneva

    1997-01-01

    Fibre-reinforced plastic composites - Determination of the in-plane shear stress/shear strain response, including the in-plane shear modulus and strength, by the plus or minus 45 degree tension test method
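
    A sketch of the usual data reduction for this test, under the standard laminate relations for a ±45° specimen: the in-plane shear stress is half the applied axial stress, and the shear strain is the difference between the axial and transverse strains. The 0.1%-0.5% shear-strain window for the chord modulus follows the ISO 14129 convention; consult the standard itself for the normative details. All numbers below are hypothetical.

```python
import numpy as np

def shear_response_from_pm45_tension(sigma_x, eps_x, eps_y):
    """tau12 = sigma_x / 2 and gamma12 = eps_x - eps_y for a ±45° laminate
    tension test (standard reduction; data assumed monotonically increasing)."""
    tau12 = np.asarray(sigma_x, dtype=float) / 2.0
    gamma12 = np.asarray(eps_x, dtype=float) - np.asarray(eps_y, dtype=float)
    return tau12, gamma12

def chord_shear_modulus(tau12, gamma12, g_lo=0.001, g_hi=0.005):
    """Chord modulus between two shear-strain levels (ISO 14129 convention)."""
    t_lo, t_hi = np.interp([g_lo, g_hi], gamma12, tau12)
    return (t_hi - t_lo) / (g_hi - g_lo)

# Hypothetical test points: axial stress in MPa, axial/transverse strains.
tau, gam = shear_response_from_pm45_tension(
    [8.0, 30.0, 55.0, 80.0],
    [0.0004, 0.0015, 0.0030, 0.0050],
    [-0.0002, -0.0008, -0.0016, -0.0030],
)
print(chord_shear_modulus(tau, gam))   # in-plane shear modulus G12, MPa
```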

  6. Simultaneous control of error rates in fMRI data analysis.

    Science.gov (United States)

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-12-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to "cleaner"-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain.
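
    As a deliberately simplified, single-voxel illustration of the likelihood paradigm described above (a normal model with known variance, not the authors' full analysis), the likelihood ratio compares "activation at the sample mean" against "no activation".

```python
import numpy as np

def voxel_likelihood_ratio(samples, sigma=1.0):
    """LR of H1 (mean = sample mean) versus H0 (mean = 0) for i.i.d. normal
    data with known sigma; simplifies to exp(n * xbar^2 / (2 sigma^2))."""
    x = np.asarray(samples, dtype=float)
    n, xbar = x.size, x.mean()
    return float(np.exp(n * xbar**2 / (2.0 * sigma**2)))

# Hypothetical voxel time series with a small positive activation:
rng = np.random.default_rng(2)
print(voxel_likelihood_ratio(rng.normal(0.3, 1.0, size=200)))
```

    As the abstract argues, thresholds on the LR can be chosen so that both the false-positive and false-negative rates shrink as the amount of data grows.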

  7. Negligence, genuine error, and litigation

    Directory of Open Access Journals (Sweden)

    Sohn DH

    2013-02-01

    Full Text Available David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine or of system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment in more rational oversight systems, such as health courts or no-fault systems, may reap both quantitative and qualitative benefits for a less costly and safer health system. Keywords: medical malpractice, tort reform, no fault compensation, alternative dispute resolution, system errors

  8. Measurement error analysis of taxi meter

    Science.gov (United States)

    He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu

    2011-12-01

    The error test of a taximeter covers two aspects: (1) a test of the time error of the taximeter, and (2) a test of the distance error in use. The paper first gives the working principle of the meter and the principle of the error verification device. Based on JJG 517-2009, "Taximeter Verification Regulation," the paper focuses on analyzing the machine error and test error of the taximeter, and the detection methods for time error and distance error are discussed as well. Under identical conditions, standard uncertainty components (Type A) are evaluated from repeated measurements, while under differing conditions, standard uncertainty components (Type B) are evaluated. Comparison and analysis of the results show that the meter conforms to JJG 517-2009, which improves accuracy and efficiency considerably. In practice, the meter not only makes up for limited accuracy but also ensures that the transaction between drivers and passengers is fair; in this way it enriches the value of the taxi as a mode of transportation.
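
    For the Type A (statistical) evaluation mentioned above, the standard uncertainty of the mean of repeated readings is the sample standard deviation divided by the square root of the number of repeats. A minimal sketch with hypothetical numbers, not data from the regulation.

```python
import statistics

def type_a_uncertainty(repeats):
    """Type A standard uncertainty of the mean: s / sqrt(n)."""
    return statistics.stdev(repeats) / len(repeats) ** 0.5

# Hypothetical repeated distance-error readings (%) for one taximeter:
print(type_a_uncertainty([0.41, 0.38, 0.44, 0.40, 0.39, 0.42]))
```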

  9. Error tracking in a clinical biochemistry laboratory

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Ødum, Lars

    2009-01-01

    BACKGROUND: We report our results for the systematic recording of all errors in a standard clinical laboratory over a 1-year period. METHODS: Recording was performed using a commercial database program. All individuals in the laboratory were allowed to report errors. The testing processes were cl...

  10. Error Analysis: Past, Present, and Future

    Science.gov (United States)

    McCloskey, George

    2017-01-01

    This commentary will take an historical perspective on the Kaufman Test of Educational Achievement (KTEA) error analysis, discussing where it started, where it is today, and where it may be headed in the future. In addition, the commentary will compare and contrast the KTEA error analysis procedures that are rooted in psychometric methodology and…

  12. Error-resilient DNA computation

    Energy Technology Data Exchange (ETDEWEB)

    Karp, R.M.; Kenyon, C.; Waarts, O. [Univ. of California, Berkeley, CA (United States)

    1996-12-31

    The DNA model of computation, with test tubes of DNA molecules encoding bit sequences, is based on three primitives, Extract-A-Bit, which splits a test tube into two test tubes according to the value of a particular bit x, Merge-Two-Tubes and Detect-Emptiness. Perfect operations can test the satisfiability of any boolean formula in linear time. However, in reality the Extract operation is faulty; it misclassifies a certain proportion of the strands. We consider the following problem: given an algorithm based on perfect Extract, Merge and Detect operations, convert it to one that works correctly with high probability when the Extract operation is faulty. The fundamental problem in such a conversion is to construct a sequence of faulty Extracts and perfect Merges that simulates a highly reliable Extract operation. We first determine (up to a small constant factor) the minimum number of faulty Extract operations inherently required to simulate a highly reliable Extract operation. We then go on to derive a general method for converting any algorithm based on error-free operations to an error-resilient one, and give optimal error-resilient algorithms for realizing simple n-variable boolean functions such as Conjunction, Disjunction and Parity.
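
    A toy Monte Carlo of the boosting idea (a simple majority-vote cascade, not the paper's optimal construction): each strand passes through k independent faulty Extracts, and perfect Merges recombine the tubes in which it was classified "yes" a majority of times, driving the per-strand misclassification rate down roughly exponentially in k.

```python
import random

def faulty_extract(strands, bit, eps):
    """Single faulty Extract: route each strand by the value of `bit`,
    misclassifying it with probability eps."""
    yes, no = [], []
    for s in strands:
        is_one = (s >> bit) & 1 == 1
        (yes if is_one != (random.random() < eps) else no).append(s)
    return yes, no

def boosted_extract(strands, bit, eps, k=5):
    """Cascade k faulty Extracts and merge the majority-'yes' tubes with
    perfect Merges (simulated here as a per-strand majority vote)."""
    yes, no = [], []
    for s in strands:
        is_one = (s >> bit) & 1 == 1
        votes = sum(is_one != (random.random() < eps) for _ in range(k))
        (yes if votes > k // 2 else no).append(s)
    return yes, no

random.seed(0)
pool = [random.getrandbits(4) for _ in range(100_000)]
for extract in (faulty_extract, boosted_extract):
    yes, _ = extract(pool, 0, 0.1)
    impurity = sum(1 for s in yes if s & 1 == 0) / len(yes)
    print(extract.__name__, round(impurity, 4))   # boosted rate is far lower
```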

  13. Identifying afterloading PDR and HDR brachytherapy errors using real-time fiber-coupled Al2O3:C dosimetry and a novel statistical error decision criterion

    DEFF Research Database (Denmark)

    Kertzscher, Gustavo; Andersen, Claus Erik; Siebert, Frank-André

    2011-01-01

    …conditions, and (2) test a new statistical error decision concept based on full uncertainty analysis. Materials and methods: Phantom studies of two gynecological cancer PDR and one prostate cancer HDR patient treatment plans were performed using tandem ring applicators or interstitial needles. Imposed treatment errors, including interchanged pairs of afterloader guide tubes and 2–20 mm source displacements, were monitored using a real-time fiber-coupled carbon-doped aluminum oxide (Al2O3:C) crystal dosimeter that was positioned in the reconstructed tumor region. The error detection capacity was evaluated…

  14. On Non-Phonetic Errors in “Topic Talk” Items in the Mandarin Proficiency Test and Test-Training Strategies

    Institute of Scientific and Technical Information of China (English)

    贾淑云

    2015-01-01

    “Topic Talk” is the key part of the Mandarin Proficiency Test. It is quite difficult but carries a high score, and it is the part in which examinees' weaknesses are most easily exposed, so it has become the most important element of test-training tutorials. This paper mainly analyses examinees' non-phonetic errors in the “topic talk” item and their causes, and points out that these errors can be avoided in the Mandarin Proficiency Test: so long as examinees prepare fully, they will perform better and obtain a satisfactory result. Therefore, the paper suggests that, in tutorials before the examination, teachers not only lay emphasis on speech training but also strengthen test-taking guidance. Based on the marking criteria for “topic talk”, teachers should give examinees more training on the mistakes they often make, so as to help them reduce non-phonetic errors, raise the average score on “topic talk”, and improve examinees' overall scores in the Mandarin Proficiency Test.

  15. Analysis of Errors Encountered in Simultaneous Interpreting

    Institute of Scientific and Technical Information of China (English)

    方峥

    2015-01-01

    I. Introduction. 1.1 Definition of an error. An error happens when the interpreter's delivery affects the communicative impact of the speaker's message, including semantic inaccuracies and inaccuracies of presentation. Along with the development of simultaneous interpreting, a number of professional interpreters and linguists have presented their definitions of, and points of view about, such errors.

  16. Medication errors in anesthesia: unacceptable or unavoidable?

    OpenAIRE

    Ira Dhawan; Anurag Tewari; Sankalp Sehgal; Ashish Chandra Sinha

    2017-01-01

    Abstract Medication errors are common causes of patient morbidity and mortality, and they add a financial burden to the institution as well. Though the impact varies from no harm to serious adverse effects, including death, the issue needs attention on a priority basis, since medication errors are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication error alone might not be succes...

  17. Correction for quadrature errors

    DEFF Research Database (Denmark)

    Netterstrøm, A.; Christensen, Erik Lintz

    1994-01-01

    In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signal...

  18. ERRORS AND CORRECTION

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    To err is human. Since the 1960s, most second language teachers and language theorists have regarded errors as natural and inevitable in the language learning process. Instead of regarding them as terrible and disappointing, teachers have come to realize their value. This paper will consider these values, analyze some errors and propose some effective correction techniques.

  19. ERROR AND ERROR CORRECTION AT ELEMENTARY LEVEL

    Institute of Scientific and Technical Information of China (English)

    1994-01-01

    Introduction Errors are unavoidable in language learning, however, to a great extent, teachers in most middle schools in China regard errors as undesirable, a sign of failure in language learning. Most middle schools are still using the grammar-translation method which aims at encouraging students to read scientific works and enjoy literary works. The other goals of this method are to gain a greater understanding of the first language and to improve the students’ ability to cope with difficult subjects and materials, i.e. to develop the students’ minds. The practical purpose of using this method is to help learners pass the annual entrance examination. "To achieve these goals, the students must first learn grammar and vocabulary,... Grammar is taught deductively by means of long and elaborate explanations... students learn the rules of the language rather than its use." (Tang Lixing, 1983:11-12)

  20. Errors on errors - Estimating cosmological parameter covariance

    CERN Document Server

    Joachimi, Benjamin

    2014-01-01

    Current and forthcoming cosmological data analyses share the challenge of huge datasets alongside increasingly tight requirements on the precision and accuracy of extracted cosmological parameters. The community is becoming increasingly aware that these requirements not only apply to the central values of parameters but, equally important, also to the error bars. Due to non-linear effects in the astrophysics, the instrument, and the analysis pipeline, data covariance matrices are usually not well known a priori and need to be estimated from the data itself, or from suites of large simulations. In either case, the finite number of realisations available to determine data covariances introduces significant biases and additional variance in the errors on cosmological parameters in a standard likelihood analysis. Here, we review recent work on quantifying these biases and additional variances and discuss approaches to remedy these effects.
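
    One widely used remedy from this literature (not named in the abstract itself) is the debiasing factor of Hartlap et al. applied when inverting a covariance matrix estimated from a finite number of realisations; without it, the inverse covariance, and hence the parameter error bars, are biased.

```python
import numpy as np

def debiased_precision_matrix(realisations):
    """Invert a sample covariance estimated from n realisations of a
    p-dimensional data vector, applying the (n - p - 2) / (n - 1) Hartlap
    factor; requires n > p + 2."""
    x = np.asarray(realisations, dtype=float)   # shape (n, p)
    n, p = x.shape
    cov = np.cov(x, rowvar=False)
    return (n - p - 2) / (n - 1) * np.linalg.inv(cov)

# Hypothetical: 500 simulated realisations of a 20-bin data vector.
rng = np.random.default_rng(3)
prec = debiased_precision_matrix(rng.standard_normal((500, 20)))
print(prec.shape)
```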

  1. Identification errors in pathology and laboratory medicine.

    Science.gov (United States)

    Valenstein, Paul N; Sirota, Ronald L

    2004-12-01

    Identification errors involve misidentification of a patient or a specimen. Either has the potential to cause patients harm. Identification errors can occur during any part of the test cycle; however, most occur in the preanalytic phase. Patient identification errors in transfusion medicine occur in 0.05% of specimens; for general laboratory specimens the rate is much higher, around 1%. Anatomic pathology, which involves multiple specimen transfers and hand-offs, may have the highest identification error rate. Certain unavoidable cognitive failures lead to identification errors. Technology, ranging from bar-coded specimen labels to radio frequency identification tags, can be incorporated into protective systems that have the potential to detect and correct human error and reduce the frequency with which patients and specimens are misidentified.

  2. Error Models of the Analog to Digital Converters

    OpenAIRE

    Michaeli Linus; Šaliga Ján

    2014-01-01

    Error models of analog-to-digital converters describe the metrological properties of the signal conversion from the analog to the digital domain in a concise form, using a few dominant error parameters. Knowledge of the error models allows the end user to perform fast testing at the crucial points of the full input signal range and to use the identified error models for post-correction in the digital domain. The imperfections of the internal ADC structure determine the error characteristics represented by t...

  3. The Role of a Computerized System of Medical Order Registration on the Reduction of Medical Errors

    Directory of Open Access Journals (Sweden)

    Shahverdi

    2016-04-01

    Full Text Available Background Medication errors are the most common medical errors, and are one of the major challenges threatening the healthcare system, which is inherently susceptible to error. Objectives In this study, we aimed to compare the occurrence of errors between two methods of entering orders: manual and digital. Patients and Methods In this prospective study, 350 files in the Baqiyatallah hospital in Tehran, Iran, were evaluated in 2014. The files were divided into two groups, manual and digital, with 175 files each. In both groups, the presence of errors in the administration, registration, and execution of orders was compared. Results Overall, 350 cases underwent analysis; 175 files were evaluated manually and 175 were evaluated digitally. Of the 69 errors (19.7%) that occurred, 65 errors (18.6%) were in the manual files versus 4 (1.1%) in the digital files (P < 0.001). The mean age of the nurses making errors was 32.42 ± 7.13 years old, and for the others it was 35.15 ± 7.76 years old (P = 0.008). Additionally, the mean age of the physicians with errors was 37.52 ± 7.97 years old versus 34.48 ± 6.82 years old in the others. Moreover, significant differences were observed between the two groups in terms of age (P = 0.002). Of the 69 errors, 80% were because of bad handwriting (P < 0.001); 50 errors (14.3%) were pharmaceutical, 2 errors (0.6%) were related to the procedure, and 17 (4.9%) were related to the tests. Conclusions It can be concluded that electronic health records lead to a reduction in medication errors and increase patient safety.

  4. L'analyse des erreurs: etat actuel de la recherche (Error Analysis: Present State of Research). Errors: A New Perspective.

    Science.gov (United States)

    Lange, Michel

    This paper raises questions about the significance of errors made by language learners. The discussion is divided into four parts: (1) definition of error analysis, (2) the present status of error analysis research, including an overview of the theories of Lado, Skinner, Chomsky, Corder, Nemser, and Selinker; (3) the subdivisions of error analysis…

  5. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.

    Science.gov (United States)

    Bruno, Michael A; Walker, Eric A; Abujudeh, Hani H

    2015-10-01

    Arriving at a medical diagnosis is a highly complex process that is extremely error prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice.

  6. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  7. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  8. Errors in Radiologic Reporting

    Directory of Open Access Journals (Sweden)

    Esmaeel Shokrollahi

    2010-05-01

    Full Text Available Given that the report is a professional document and bears the associated responsibilities, all of the radiologist's errors appear in it, either directly or indirectly. It is not easy to distinguish and classify the mistakes made when a report is prepared, because in most cases the errors are complex and attributable to more than one cause, and because many errors depend on the individual radiologist's professional, behavioral and psychological traits. In fact, anyone can make a mistake, but some radiologists make more mistakes, and some types of mistakes are predictable to some extent. Reporting errors can be categorized differently:
    - Universal vs. individual
    - Human related vs. system related
    - Perceptive vs. cognitive errors (1. descriptive, 2. interpretative, 3. decision related)
    Perceptive errors: 1. false positive, 2. false negative (nonidentification or erroneous identification)
    Cognitive errors: knowledge-based, psychological

  9. Regression calibration with heteroscedastic error variance.

    Science.gov (United States)

    Spiegelman, Donna; Logan, Roger; Grove, Douglas

    2011-01-01

    The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses' Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice.
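
    To make the regression-calibration idea concrete, here is a minimal sketch under simplifying assumptions (a linear primary model and a validation substudy containing the gold standard X alongside the error-prone surrogate W); the paper's closed-form heteroscedastic correction is more involved. All variable names and the synthetic data are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_main, n_val, beta = 2000, 300, 0.5

    # Main study: outcome Y and surrogate W (X unobserved);
    # validation study: both the gold standard X and surrogate W.
    x_main = rng.normal(size=n_main)
    w_main = x_main + rng.normal(scale=0.8, size=n_main)   # measurement error
    y_main = beta * x_main + rng.normal(scale=0.3, size=n_main)
    x_val = rng.normal(size=n_val)
    w_val = x_val + rng.normal(scale=0.8, size=n_val)

    # Step 1: calibration model E[X|W] fitted in the validation data.
    b, a = np.polyfit(w_val, x_val, 1)

    # Step 2: replace W by its calibrated value and refit the primary model.
    x_hat = a + b * w_main
    beta_naive = np.polyfit(w_main, y_main, 1)[0]
    beta_cal = np.polyfit(x_hat, y_main, 1)[0]
    print(f"naive {beta_naive:.3f} vs calibrated {beta_cal:.3f} (true {beta})")
    ```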

  10. Engaging with learners’ errors when teaching mathematics

    OpenAIRE

    Ingrid Sapire; Yael Shalem; Bronwen Wilson-Thompson; Ronél Paulsen

    2016-01-01

    Teachers come across errors not only in tests but also in their mathematics classrooms virtually every day. When they respond to learners’ errors in their classrooms, during or after teaching, teachers are actively carrying out formative assessment. In South Africa the Annual National Assessment, a written test under the auspices of the Department of Basic Education, requires that teachers use learner data diagnostically. This places a new and complex cognitive demand on teachers’ pedagogical...

  11. Errors in transfusion medicine: have we learned our lesson?

    Science.gov (United States)

    Fastman, Barbara Rabin; Kaplan, Harold S

    2011-01-01

    The phrase "patient safety" represents freedom from accidental or preventable harm due to events occurring in the healthcare setting. Practitioners aim to reduce, if not prevent, medical errors and adverse outcomes. Yet studies performed from many perspectives show that medical error constitutes a serious worldwide problem. Transfusion medicine, with its interdisciplinary intricacies and the danger of fatal outcomes, serves as an exemplar of lessons learned. Opportunity for error in complex systems is vast, and although errors are traditionally blamed on humans, they are often set up by preexisting factors. Transfusion has inherent hazards such as clinical vulnerabilities (eg, contracting an infectious agent or experiencing a transfusion reaction), but there also exists the possibility of hazards associated with process errors. Sample collection errors, or preanalytic errors, may occur when samples are drawn from donors during blood donation, as well as when drawn from patients prior to transfusion-related testing, and account for approximately one-third of events in transfusion. Errors in the analytic phase of the transfusion chain, slips and errors in the laboratory, comprise close to one-third of patient safety-related transfusion events. As many as 40% of mistransfusions are due to errors in the postanalytic phase: often failures in the final check of the right blood and the right patient at the bedside. Bar-code labels, radiofrequency identification tags, and even palm vein-scanning technology are increasingly being utilized in patient identification. The last phase of transfusion, careful monitoring of the recipient for adverse signs or symptoms, when performed diligently can help prevent or manage a potentially fatal reaction caused by an earlier process error or an unavoidable physiologic condition. Ways in which we can and do deal with potential hazards of transfusion are discussed, including a method of hazard reduction termed inherently safer design

  12. Assigning error to an M2 measurement

    Science.gov (United States)

    Ross, T. Sean

    2006-02-01

    The ISO 11146:1999 standard has been published for 6 years and set forth the proper way to measure the M2 parameter. In spite of the strong experimental guidance given by this standard and the many commercial devices based upon ISO 11146, it is still the custom to quote M2 measurements without any reference to significant figures or error estimation. To the author's knowledge, no commercial M2 measurement device includes error estimation. There exists, perhaps, a false belief that M2 numbers are high precision and of insignificant error. This paradigm causes program managers and purchasers to over-specify a beam quality parameter and researchers not to question the accuracy and precision of their M2 measurements. This paper examines the experimental sources of error in an M2 measurement, including discretization error, CCD noise, discrete filter sets, noise equivalent aperture estimation, laser fluctuation and curve-fitting error. These sources of error are explained in their experimental context, and convenient formulas are given to properly estimate the error in a given M2 measurement. This work is the result of the author's inability to find error estimation and disclosure of methods in commercial beam quality measurement devices, and of building an ISO 11146 compliant, computer-automated M2 measurement device, with the resulting lessons learned and concepts developed.
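
    One of the error sources listed above, curve-fitting error, can be estimated directly from the covariance of the hyperbolic fit prescribed by ISO 11146. The sketch below fits d²(z) = a + bz + cz² to synthetic caustic data and propagates the fit covariance to M² = (π/8λ)·√(4ac − b²) via a first-order Jacobian; the other error sources (CCD noise, discretization, aperture estimation) would enter through the data themselves. The numbers are made up for illustration.

    ```python
    import numpy as np

    lam = 1.064e-6                      # wavelength in metres
    z = np.linspace(0.0, 2.0, 15)       # measurement planes along the caustic
    d2 = 1e-6 * (0.5 + 3.0 * (z - 1.0) ** 2)   # synthetic squared diameters
    d2 += np.random.default_rng(3).normal(scale=2e-9, size=z.size)  # noise

    # Quadratic fit d^2(z) = a + b z + c z^2 with parameter covariance.
    (c, b, a), cov = np.polyfit(z, d2, 2, cov=True)
    cov = cov[::-1, ::-1]               # reorder covariance to (a, b, c)

    disc = 4 * a * c - b * b
    m2 = (np.pi / (8 * lam)) * np.sqrt(disc)

    # First-order error propagation through the Jacobian of M^2 w.r.t. (a, b, c).
    jac = (np.pi / (8 * lam)) / (2 * np.sqrt(disc)) * np.array([4 * c, -2 * b, 4 * a])
    sigma_m2 = np.sqrt(jac @ cov @ jac)
    print(f"M^2 = {m2:.2f} +/- {sigma_m2:.2f} (curve-fitting error only)")
    ```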

  13. Medication errors in anesthesia: unacceptable or unavoidable?

    Directory of Open Access Journals (Sweden)

    Ira Dhawan

    Full Text Available Abstract Medication errors are common causes of patient morbidity and mortality. They also add a financial burden to the institution. Though the impact varies from no harm to serious adverse effects including death, they need attention on a priority basis since medication errors are preventable. In today's world where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone might not be successful until a change in the existing protocols and systems is incorporated. Often drug errors that occur cannot be reversed. The best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse and dilution error), incorrect administration route, underdosing and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems like VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Such developments, along with vigilant doctors, a safe workplace culture and organizational support, can together help prevent these errors.

  14. [Medication errors in anesthesia: unacceptable or unavoidable?

    Science.gov (United States)

    Dhawan, Ira; Tewari, Anurag; Sehgal, Sankalp; Sinha, Ashish Chandra

    Medication errors are common causes of patient morbidity and mortality. They also add a financial burden to the institution. Though the impact varies from no harm to serious adverse effects including death, they need attention on a priority basis since medication errors are preventable. In today's world where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone might not be successful until a change in the existing protocols and systems is incorporated. Often drug errors that occur cannot be reversed. The best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse and dilution error), incorrect administration route, underdosing and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems like VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Such developments, along with vigilant doctors, a safe workplace culture and organizational support, can together help prevent these errors. Copyright © 2016. Published by Elsevier Editora Ltda.

  15. Medication errors in anesthesia: unacceptable or unavoidable?

    Science.gov (United States)

    Dhawan, Ira; Tewari, Anurag; Sehgal, Sankalp; Sinha, Ashish Chandra

    Medication errors are common causes of patient morbidity and mortality. They also add a financial burden to the institution. Though the impact varies from no harm to serious adverse effects including death, they need attention on a priority basis since medication errors are preventable. In today's world where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone might not be successful until a change in the existing protocols and systems is incorporated. Often drug errors that occur cannot be reversed. The best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse and dilution error), incorrect administration route, underdosing and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems like VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Such developments, along with vigilant doctors, a safe workplace culture and organizational support, can together help prevent these errors. Copyright © 2016. Published by Elsevier Editora Ltda.

  16. Error recovery to enable error-free message transfer between nodes of a computer network

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A.; Coteus, Paul W.; Chen, Dong; Gara, Alan; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Takken, Todd; Steinmacher-Burow, Burkhard; Vranas, Pavlos M.

    2016-01-26

    An error-recovery method to enable error-free message transfer between nodes of a computer network. A first node of the network sends a packet to a second node of the network over a link between the nodes, and the first node keeps a copy of the packet on a sending end of the link until the first node receives acknowledgment from the second node that the packet was received without error. The second node tests the packet to determine if the packet is error free. If the packet is not error free, the second node sets a flag to mark the packet as corrupt. The second node returns acknowledgement to the first node specifying whether the packet was received with or without error. When the packet is received with error, the link is returned to a known state and the packet is sent again to the second node.
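
    The retry protocol described in the abstract can be sketched in a few lines. The simulation below is a loose illustration, not the patented hardware mechanism: the sender retains each packet until a positive acknowledgement arrives, the receiver flags corrupt packets via a checksum, and a failed check triggers retransmission. Names and probabilities are invented for the demo.

    ```python
    import random
    import zlib

    def send_with_retry(payload: bytes, corrupt_prob: float = 0.3) -> int:
        """Return the number of attempts needed to deliver payload error-free."""
        checksum = zlib.crc32(payload)
        attempts = 0
        while True:
            attempts += 1
            # Sender keeps a copy on its end; transmit over a lossy link.
            received = bytearray(payload)
            if random.random() < corrupt_prob:
                received[0] ^= 0xFF          # simulate a bit error on the link
            # Receiver tests the packet and acknowledges with error status.
            if zlib.crc32(bytes(received)) == checksum:
                return attempts               # positive ack: sender frees its copy
            # Negative ack: link returned to a known state, packet sent again.

    random.seed(42)
    print("delivered after", send_with_retry(b"message"), "attempt(s)")
    ```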

  17. Screening for Inborn Errors of Metabolism

    Directory of Open Access Journals (Sweden)

    F.A. Elshaari

    2013-09-01

    Full Text Available Inborn errors of metabolism (IEM) are a heterogeneous group of monogenic diseases that affect the metabolic pathways. The detection of IEM relies on a high index of clinical suspicion and co-ordinated access to specialized laboratory services. Biochemical analysis forms the basis of the final confirmed diagnosis in several of these disorders. The investigations fall into four main categories:
    1. General metabolic screening tests
    2. Specific metabolite assays
    3. Enzyme studies
    4. DNA analysis
    The first approach to the diagnosis is by a multi-component analysis of body fluids in clinically selected patients, referred to as metabolic screening tests. These include simple chemical tests in the urine, blood glucose, acid-base profile, lactate, ammonia and liver function tests. The results of these tests can help to suggest known groups of metabolic disorders so that specific metabolites such as amino acids, organic acids, etc. can be estimated. However, not all IEM need the approach of general screening. Lysosomal, peroxisomal, thyroid and adrenal disorders are suspected mainly on clinical grounds and pertinent diagnostic tests can be performed. The final diagnosis relies on the demonstration of the specific enzyme defect, which can be further confirmed by DNA studies.

  18. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    Science.gov (United States)

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA(®) terminology that allows for coding all stages of the medication use process where the error occurred in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed-up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article

  19. Distortion Modeling and Error Robust Coding Scheme for H.26L Video

    Institute of Scientific and Technical Information of China (English)

    CHEN Chuan; YU Songyu; CHENG Lianji

    2004-01-01

    Transmission of hybrid-coded video including motion compensation and spatial prediction over an error-prone channel results in the well-known problem of error propagation, because of the drift in reference frames between encoder and decoder. The prediction loop propagates errors and causes substantial degradation in video quality. Especially in H.26L video, both intra and inter prediction strategies are used to improve compression efficiency; however, they make error propagation more serious. This work proposes distortion models for H.26L video to optimally estimate the overall distortion of decoder frame reconstruction due to quantization, error propagation, and error concealment. Based on these statistical distortion models, our error robust coding scheme integrates only the distinct distortion between intra and inter macroblocks into a rate-distortion based framework to select a suitable coding mode for each macroblock, and so the cost in computation complexity is modest. Simulations under typical 3GPP/3GPP2 channel and Internet channel conditions have shown that our proposed scheme achieves much better performance than those currently used in H.26L. The error propagation estimation and its effect at fractional pixel-level prediction have also been tested. All the results have demonstrated that our proposed scheme achieves a good balance between compression efficiency and error robustness for H.26L video, at the cost of modest additional complexity.
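
    The drift behaviour that the paper models statistically can be illustrated with a toy recursion. Under the simplifying assumptions below (a per-frame loss probability p, a fixed concealment distortion, and an attenuation factor for propagated error), expected decoder distortion accumulates through the prediction loop until an intra refresh resets it. This is a generic sketch, not the paper's H.26L-specific model, and all parameter values are invented.

    ```python
    def expected_distortion(n_frames: int, p: float, d_enc: float,
                            d_conceal: float, alpha: float, intra_period: int):
        """Toy model of error propagation through a motion-compensated loop."""
        history = []
        d_prev = 0.0
        for k in range(n_frames):
            if k % intra_period == 0:
                d_prev = 0.0             # intra refresh stops the drift
            # Received frame: encoder distortion plus attenuated drift;
            # lost frame: concealment distortion plus attenuated drift.
            d_k = (1 - p) * (d_enc + alpha * d_prev) + p * (d_conceal + alpha * d_prev)
            history.append(d_k)
            d_prev = d_k
        return history

    for k, d in enumerate(expected_distortion(12, p=0.05, d_enc=1.0,
                                              d_conceal=8.0, alpha=0.9,
                                              intra_period=6)):
        print(f"frame {k:2d}: expected distortion {d:.2f}")
    ```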

  20. Inpatients’ medical prescription errors

    Directory of Open Access Journals (Sweden)

    Aline Melo Santos Silva

    2009-09-01

    Full Text Available Objective: To identify and quantify the most frequent prescription errors in inpatients' medical prescriptions. Methods: A survey of prescription errors was performed in the inpatients' medical prescriptions, from July 2008 to May 2009, for eight hours a day. Results: A total of 3,931 prescriptions was analyzed and 362 (9.2%) prescription errors were found, which involved the healthcare team as a whole. Among the 16 types of errors detected in prescriptions, the most frequent occurrences were lack of information, such as dose (66 cases, 18.2%) and administration route (26 cases, 7.2%); 45 cases (12.4%) of wrong transcriptions to the information system; 30 cases (8.3%) of duplicate drugs; doses higher than recommended (24 events, 6.6%); and 29 cases (8.0%) of prescriptions with an allergy indicated but not specified. Conclusion: Medication errors are a reality at hospitals. All healthcare professionals are responsible for the identification and prevention of these errors, each one in his/her own area. The pharmacist is an essential professional in the drug therapy process. All hospital organizations need a pharmacist team responsible for medical prescription analyses before preparation, dispensation and administration of drugs to inpatients. This study showed that the pharmacist improves the inpatient's safety and the success of prescribed therapy.

  1. 汽车前轮转向角测试误差修正算法研究 (Algorithm for Correcting the Turning-Angle Test Error of a Vehicle's Front Wheels)

    Institute of Scientific and Technical Information of China (English)

    张扬; 张晓光

    2011-01-01

    This paper describes an algorithm for correcting the turning-angle test error of vehicle steering wheels, based on the mechanical structure properties of the vehicle steering axle. With this algorithm, the zero-turning-angle starting position of the steering wheel can be accurately calculated, and the turning-angle test data can then be corrected. Furthermore, the accurate maximum turning angles and related angles of the left and right steering wheels can be calculated. This solves the current problems of low accuracy and poor repeatability in steering-wheel turning-angle test data.

  2. System modeling based measurement error analysis of digital sun sensors

    Institute of Scientific and Technical Information of China (English)

    WEI Minsong; XING Fei; WANG Geng; YOU Zheng

    2015-01-01

    Stringent attitude determination accuracy is required for the development of advanced space technologies, and thus improving the accuracy of digital sun sensors is necessary. In this paper, we present a proposal for measurement error analysis of a digital sun sensor. A system model including three different error sources was built and employed for system error analysis. Numerical simulations were also conducted to study the measurement error introduced by the different error sources. Based on our model and study, the system errors from the different error sources are coupled, and the system calibration should be elaborately designed to realize a digital sun sensor with extra-high accuracy.

  3. Error effects in anterior cingulate cortex reverse when error likelihood is high

    Science.gov (United States)

    Jessup, Ryan K.; Busemeyer, Jerome R.; Brown, Joshua W.

    2010-01-01

    Strong error-related activity in medial prefrontal cortex (mPFC) has been shown repeatedly with neuroimaging and event-related potential studies for the last several decades. Multiple theories have been proposed to account for error effects, including comparator models and conflict detection models, but the neural mechanisms that generate error signals remain in dispute. Typical studies use relatively low error rates, confounding the expectedness and the desirability of an error. Here we show with a gambling task and fMRI that when losses are more frequent than wins, the mPFC error effect disappears, and moreover, exhibits the opposite pattern by responding more strongly to unexpected wins than losses. These findings provide perspective on recent ERP studies and suggest that mPFC error effects result from a comparison between actual and expected outcomes. PMID:20203206

  4. Conditional Standard Errors of Measurement for Composite Scores Using IRT

    Science.gov (United States)

    Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan

    2012-01-01

    Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…

  5. Motion error compensation of multi-legged walking robots

    Science.gov (United States)

    Wang, Liangwen; Chen, Xuedong; Wang, Xinjie; Tang, Weigang; Sun, Yi; Pan, Chunmei

    2012-07-01

    Because of errors in the structure and kinematic parameters of multi-legged walking robots, the motion trajectory of the robot diverges from the ideal requirements during movement. Since existing error compensation is usually applied to the control compensation of manipulator arms, the error compensation of multi-legged robots has seldom been explored. In order to reduce the kinematic error of robots, a motion error compensation method based on feedforward control for multi-legged mobile robots is proposed to improve the motion precision of a mobile robot. The locus error of the robot body is measured while the robot moves along a given track. The error of the driven joint variables is obtained from an error calculation model in terms of the locus error of the robot body. This error value is used to compensate the driven joint variables and modify the control model of the robot, which then drives the robot following the modified control model. The model of the relation between the robot's locus errors and the errors of its kinematic variables is set up to achieve the kinematic error compensation. On the basis of the inverse kinematics of a multi-legged walking robot, the relation between the error of the motion trajectory and the driven joint variables of the robot is discussed. Moreover, an equation set is obtained which expresses the relation among the error of the driven joint variables, the structure parameters and the error of the robot's locus. Taking MiniQuad as an example, motion error compensation is studied for the robot moving along a straight track. The actual locus errors of the robot body were measured before and after compensation in the test. According to the test, the variations of the actual coordinate values of the robot centroid in the x-direction and z-direction are reduced more than twofold. The kinematic errors of the robot body are effectively reduced by the use of the feedforward-based motion error compensation method.
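
    At its core, feedforward compensation of this kind maps a measured locus error back to corrections of the driven joint variables through the inverse of the kinematic Jacobian. The sketch below illustrates that idea for a planar 2-link leg; the paper's full model for MiniQuad, covering structure parameters and multiple legs, is considerably richer. Link lengths, postures and error values are invented for the demo.

    ```python
    import numpy as np

    def jacobian(q, l1=0.12, l2=0.10):
        """Jacobian of a planar 2-link leg's foot position w.r.t. joint angles."""
        s1, c1 = np.sin(q[0]), np.cos(q[0])
        s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
        return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                         [ l1 * c1 + l2 * c12,  l2 * c12]])

    # Measured locus error (metres) at the current posture.
    q = np.array([0.4, 0.8])
    locus_error = np.array([0.004, -0.002])

    # Feedforward correction: joint-variable error recovered from the locus error.
    dq = np.linalg.solve(jacobian(q), locus_error)
    q_compensated = q - dq   # modify the commanded joint variables
    print("joint corrections (rad):", dq)
    ```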

  6. Error monitoring in musicians

    Directory of Open Access Journals (Sweden)

    Clemens eMaidhof

    2013-07-01

    Full Text Available To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e. the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. EEG studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e. attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types, such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context will be discussed.

  7. Medication errors recovered by emergency department pharmacists.

    Science.gov (United States)

    Rothschild, Jeffrey M; Churchill, William; Erickson, Abbie; Munz, Kristin; Schuur, Jeremiah D; Salzberg, Claudia A; Lewinski, Daniel; Shane, Rita; Aazami, Roshanak; Patka, John; Jaggers, Rondell; Steffenhagen, Aaron; Rough, Steve; Bates, David W

    2010-06-01

    We assess the impact of emergency department (ED) pharmacists on reducing potentially harmful medication errors. We conducted this observational study in 4 academic EDs. Trained pharmacy residents observed a convenience sample of ED pharmacists' activities. The primary outcome was medication errors recovered by pharmacists, including errors intercepted before reaching the patient (near miss or potential adverse drug event), caught after reaching the patient but before causing harm (mitigated adverse drug event), or caught after some harm but before further or worsening harm (ameliorated adverse drug event). Pairs of physician and pharmacist reviewers confirmed recovered medication errors and assessed their potential for harm. Observers were unblinded and clinical outcomes were not evaluated. We conducted 226 observation sessions spanning 787 hours and observed pharmacists reviewing 17,320 medications ordered or administered to 6,471 patients. We identified 504 recovered medication errors, or 7.8 per 100 patients and 2.9 per 100 medications. Most of the recovered medication errors were intercepted potential adverse drug events (90.3%), with fewer mitigated adverse drug events (3.9%) and ameliorated adverse drug events (0.2%). The potential severities of the recovered errors were most often serious (47.8%) or significant (36.2%). The most common medication classes associated with recovered medication errors were antimicrobial agents (32.1%), central nervous system agents (16.2%), and anticoagulant and thrombolytic agents (14.1%). The most common error types were dosing errors, drug omission, and wrong frequency errors. ED pharmacists can identify and prevent potentially harmful medication errors. Controlled trials are necessary to determine the net costs and benefits of ED pharmacist staffing on safety, quality, and costs, especially important considerations for smaller EDs and pharmacy departments. Copyright (c) 2009 American College of Emergency Physicians

  8. Development of the Barriers to Error Disclosure Assessment Tool.

    Science.gov (United States)

    Welsh, Darlene; Zephyr, Dominique; Pfeifle, Andrea L; Carr, Douglas E; Fink, Joseph L; Jones, Mandy

    2017-06-30

    An interprofessional group of health colleges' faculty created and piloted the Barriers to Error Disclosure Assessment tool as an instrument to measure barriers to medical error disclosure among health care providers. A review of the literature guided the creation of items describing influences on the decision to disclose a medical error. Local and national experts in error disclosure used a modified Delphi process to gain consensus on the items included in the pilot. After receiving university institutional review board approval, researchers distributed the tool to a convenience sample of physicians (n = 19), pharmacists (n = 20), and nurses (n = 20) from an academic medical center. Means and SDs were used to describe the sample. Intraclass correlation coefficients were used to examine test-retest correspondence between the continuous items on the scale. Factor analysis with varimax rotation was used to determine factor loadings and examine internal consistency reliability. Cronbach α coefficients were calculated during initial and subsequent administrations to assess test-retest reliability. After omitting 2 items with intraclass correlation coefficients of less than 0.40, the remaining coefficients ranged from 0.43 to 0.70, indicating fair to good test-retest correspondence between the continuous items on the final draft. Factor analysis revealed the following factors during the initial administration: confidence and knowledge barriers, institutional barriers, psychological barriers, and financial concern barriers to medical error disclosure. α coefficients of 0.85 to 0.93 at time 1 and 0.82 to 0.95 at time 2 supported test-retest reliability. The final version of the 31-item tool can be used to measure perceptions about abilities for disclosing, impressions regarding institutional policies and climate, and specific barriers that inhibit disclosure by health care providers. Preliminary evidence supports the tool's validity and reliability for measuring
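
    For readers unfamiliar with the internal-consistency statistic reported above, the helper below computes Cronbach's α from a respondents-by-items score matrix. The data are synthetic and the function is the generic formula, not the authors' analysis code.

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for a (respondents x items) score matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Synthetic Likert-style responses: a common factor plus item noise.
    rng = np.random.default_rng(7)
    trait = rng.normal(size=(100, 1))
    scores = np.clip(np.rint(3 + trait + rng.normal(scale=0.8, size=(100, 8))), 1, 5)
    print(f"alpha = {cronbach_alpha(scores):.2f}")
    ```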

  9. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well known and widespread Latin proverb which states that: to err is human, and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes, thus it is important to accept them, learn from them, discover the reason why they make them, improve and move on. The significance of studying errors is described by Corder as: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the importance and the aim of this paper is analyzing errors in the process of second language acquisition and the way we teachers can benefit from mistakes to help students improve themselves while giving the proper feedback.

  10. Error Correction in Classroom

    Institute of Scientific and Technical Information of China (English)

    Dr. Grace Zhang

    2000-01-01

    Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on empirical data of a large scale. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect corrections, or a direct-only correction. The former choice indicates that students would be happy to take either so long as the correction gets done. Most students didn't mind peer correcting provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correcting, use a combination of correction strategies (direct only if suitable) and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in a Chinese language classroom and may also have wider implications for other languages.

  11. A comparative study of voluntarily reported medication errors among ...

    African Journals Online (AJOL)

    Pharmacotherapy Group, Faculty of Pharmacy, University of Benin, Benin City, ... errors among adult patients in intensive care (IC) and non- .... category include system error, documentation .... the importance of patient safety and further.

  12. Speak Up: Help Prevent Errors in Your Care: Laboratory Services

    Science.gov (United States)

    SpeakUP TM Help Prevent Errors in Your Care Laboratory Services To prevent health care errors, patients are ... making health care safe. That includes doctors, nurses, laboratory technologists, phlebotomists (health care staff who take blood), ...

  13. 坐标转换误差传递模型的建立及靶场试验应用研究 (Building an Error Transfer Model for Coordinate Transformation and Its Application in Shooting Range Tests)

    Institute of Scientific and Technical Information of China (English)

    王春明

    2013-01-01

    We studied the calculation method for error transfer when range test data from electronic information equipment undergo coordinate transformation, and simulated and calculated, for a typical air route, how the measurement data change after coordinate transformation under different conditions. A method is given for calculating the accuracy of true values from the equipment accuracy, providing a reference for selecting the positions of the equipment under test when laying out test stations.

  14. Managing human error in aviation.

    Science.gov (United States)

    Helmreich, R L

    1997-05-01

    Crew resource management (CRM) programs were developed to address team and leadership aspects of piloting modern airplanes. The goal is to reduce errors through team work. Human factors research and social, cognitive, and organizational psychology are used to develop programs tailored for individual airlines. Flight crews study accident case histories, group dynamics, and human error. Simulators provide pilots with the opportunity to solve complex flight problems. CRM in the simulator is called line-oriented flight training (LOFT). In automated cockpits CRM promotes the idea of automation as a crew member. Cultural aspects of aviation include professional, business, and national culture. The aviation CRM model has been adapted for training surgeons and operating room staff in human factors.

  15. Study of Errors among Nursing Students

    Directory of Open Access Journals (Sweden)

    Ella Koren

    2007-09-01

    Full Text Available The study of errors in the health system today is a topic of considerable interest aimed at reducing errors through analysis of the phenomenon and the conclusions reached. Errors that occur frequently among health professionals have also been observed among nursing students. True, in most cases they are actually “near errors,” but these could be a future indicator of therapeutic reality and the effect of nurses' work environment on their personal performance. There are two different approaches to such errors: (a) The EPP (error prone person) approach lays full responsibility at the door of the individual involved in the error, whether a student, nurse, doctor, or pharmacist. According to this approach, handling consists purely in identifying and penalizing the guilty party. (b) The EPE (error prone environment) approach emphasizes the environment as a primary contributory factor to errors. The environment as an abstract concept includes components and processes of interpersonal communications, work relations, human engineering, workload, pressures, technical apparatus, and new technologies. The objective of the present study was to examine the role played by factors in and components of personal performance as compared to elements and features of the environment. The study was based on both of the aforementioned approaches, which, when combined, enable a comprehensive understanding of the phenomenon of errors among the student population as well as a comparison of factors contributing to human error and to error deriving from the environment. The theoretical basis of the study was a model that combined both approaches: one focusing on the individual and his or her personal performance and the other focusing on the work environment. The findings emphasize the work environment of health professionals as an EPE. However, errors could have been avoided by means of strict adherence to practical procedures. The authors examined error events in the

  16. Prediction and simulation errors in parameter estimation for nonlinear systems

    Science.gov (United States)

    Aguirre, Luis A.; Barbosa, Bruno H. G.; Braga, Antônio P.

    2010-11-01

    This article compares the pros and cons of using prediction error and simulation error to define cost functions for parameter estimation in the context of nonlinear system identification. To avoid being influenced by estimators of the least squares family (e.g. prediction error methods), and in order to be able to solve non-convex optimisation problems (e.g. minimisation of some norm of the free-run simulation error), evolutionary algorithms were used. Simulated examples which include polynomial, rational and neural network models are discussed. Our results—obtained using different model classes—show that, in general the use of simulation error is preferable to prediction error. An interesting exception to this rule seems to be the equation error case when the model structure includes the true model. In the case of error-in-variables, although parameter estimation is biased in both cases, the algorithm based on simulation error is more robust.
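
    The distinction the paper draws can be demonstrated with a first-order model. In the sketch below (an illustrative ARX setup, not the paper's experiments), the one-step prediction error uses measured past outputs, while the free-run simulation error feeds the model's own outputs back; the two cost functions generally rank parameter values differently.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    a_true, b_true, n = 0.8, 0.5, 400
    u = rng.normal(size=n)
    y = np.zeros(n)
    for k in range(1, n):   # ARX data: y_k = a*y_{k-1} + b*u_{k-1} + e_k
        y[k] = a_true * y[k - 1] + b_true * u[k - 1] + rng.normal(scale=0.1)

    def prediction_cost(a, b):
        """One-step-ahead prediction error: measured outputs drive the model."""
        e = y[1:] - (a * y[:-1] + b * u[:-1])
        return float(np.mean(e ** 2))

    def simulation_cost(a, b):
        """Free-run simulation error: the model's own outputs are fed back."""
        y_sim = np.zeros(n)
        for k in range(1, n):
            y_sim[k] = a * y_sim[k - 1] + b * u[k - 1]
        return float(np.mean((y[1:] - y_sim[1:]) ** 2))

    for a in (0.7, 0.8, 0.9):
        print(f"a={a}: prediction {prediction_cost(a, b_true):.4f}, "
              f"simulation {simulation_cost(a, b_true):.4f}")
    ```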

  17. Strategy that includes serial noninvasive leg tests for diagnosis of thromboembolic disease in patients with suspected acute pulmonary embolism based on data from PIOPED. Prospective Investigation of Pulmonary Embolism Diagnosis.

    Science.gov (United States)

    Stein, P D; Hull, R D; Pineo, G

    1995-10-23

    % to 66%) if only a single noninvasive leg test were performed, and further reduced to 193 (29%) of 662 (95% CI, 26% to 33%) if serial noninvasive leg tests were used where appropriate. A noninvasive strategy that includes VQ scans, single noninvasive leg tests, and serial noninvasive leg tests would permit a diagnosis of thromboembolic disease or a safe exclusion of thromboembolic disease in 71% of patients with suspected acute pulmonary embolism.

  18. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  19. LIBERTARISMO & ERROR CATEGORIAL (Libertarianism and Category Error)

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defense of libertarianism against two accusations that it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons underlying these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis for the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of committing them.

  20. Passport officers' errors in face matching.

    Directory of Open Access Journals (Sweden)

    David White

    Full Text Available Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of 'fraudulent' photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately--though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection.

  1. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    Science.gov (United States)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: 1) describe mission; 2) define system; 3) identify human-machine interfaces; 4) list human actions; 5) identify potential errors; 6) identify factors that affect error; 7) determine likelihood of error; 8) determine potential effects of errors; 9) evaluate risk; 10) generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.

  2. AN ANALYSIS OF GRAMMATICAL ERRORS ON SPEAKING ACTIVITIES

    Directory of Open Access Journals (Sweden)

    Merlyn Simbolon

    2015-09-01

    Full Text Available This study aims to analyze the grammatical errors, and to provide a description of errors, in speaking activities using the simple present and present progressive tenses, made by second-year students of the English Education Department, Palangka Raya University. The subjects of this study were 30 students. This research applied a qualitative approach to describe the types, sources and causes of students' errors, taken from an oral essay test which consisted of questions using the simple present and present progressive tenses. The errors were identified and classified according to the Linguistic Category Taxonomy and Richards' classification, as well as the possible sources and causes of errors. The findings showed that the errors made by students were in 6 aspects: errors in the production of verb groups, errors in the distribution of verb groups, errors in the use of articles, errors in the use of prepositions, errors in the use of questions, and miscellaneous errors. In regard to sources and causes, it was found that intralingual interference was the major source of errors (82.55%), where overgeneralization took place as the major cause of the errors, with a total percentage of 44.71%. Keywords: grammatical errors, speaking skill, speaking activities

  3. Medication Error, What Is the Reason?

    Directory of Open Access Journals (Sweden)

    Ali Banaozar Mohammadi

    2015-09-01

    Full Text Available Background: Medication errors, occurring for different reasons, may alter the outcome of all patients, especially patients with drug poisoning. We introduce one of the most common types of medication error in the present article. Case: A 48-year-old woman with suspected organophosphate poisoning died due to a lethal medication error. Unfortunately, these types of errors are not rare and have some preventable causes, including a lack of suitable and sufficient training and practice for medical students, and some failures in the medical students' educational curriculum. Conclusion: Some important causes are discussed here because their consequences can be tremendous; we found that most of them are easily preventable. If practitioners are aware of the method of use, complications, dosage and contraindications of drugs, most of these fatal errors can be minimized.

  4. FMEA: a model for reducing medical errors.

    Science.gov (United States)

    Chiozza, Maria Laura; Ponzetti, Clemente

    2009-06-01

    Patient safety is a management issue, in view of the fact that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, firstly introduced within the aerospace industry in the 1960s. Early applications in the health care industry dating back to the 1990s included critical systems in the development and manufacture of drugs and in the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO), licensed a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review data available on the application of this technique to laboratory medicine. A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, as well as to point-of-care testing (POCT).
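
    The risk priority number (RPN) mentioned above is the product of three ordinal ratings. The snippet below shows the generic calculation on made-up laboratory failure modes; the 1-10 scales and the example entries are illustrative, not taken from the cited studies.

    ```python
    # Generic FMEA scoring: RPN = severity x occurrence x detection (each 1-10).
    failure_modes = [
        # (description, severity, occurrence, detection)
        ("Specimen mislabelled at collection", 9, 4, 6),
        ("Wrong reagent lot loaded on analyser", 7, 2, 3),
        ("Result transcribed to wrong patient record", 9, 3, 5),
    ]

    scored = [(desc, s * o * d) for desc, s, o, d in failure_modes]
    for desc, rpn in sorted(scored, key=lambda x: -x[1]):
        print(f"RPN {rpn:4d}  {desc}")   # highest RPN = first target for mitigation
    ```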

  5. Soft error mechanisms, modeling and mitigation

    CERN Document Server

    Sayil, Selahattin

    2016-01-01

    This book introduces readers to various radiation soft-error mechanisms such as soft delays, radiation induced clock jitter and pulses, and single event (SE) coupling induced effects. In addition to discussing various radiation hardening techniques for combinational logic, the author also describes new mitigation strategies targeting commercial designs. Coverage includes novel soft error mitigation techniques such as the Dynamic Threshold Technique and Soft Error Filtering based on Transmission gate with varied gate and body bias. The discussion also includes modeling of SE crosstalk noise, delay and speed-up effects. Various mitigation strategies to eliminate SE coupling effects are also introduced. Coverage also includes the reliability of low power energy-efficient designs and the impact of leakage power consumption optimizations on soft error robustness. The author presents an analysis of various power optimization techniques, enabling readers to make design choices that reduce static power consumption an...

  6. Physician perspectives on quality and error in the outpatient setting.

    Science.gov (United States)

    Manwell, Linda Baier; Williams, Eric S; Babbott, Stewart; Rabatin, Joseph S; Linzer, Mark

    2009-05-01

    Little is known about the influence of the primary care workplace on patient care. Assessing physician opinion through focus groups can elucidate factors related to safety and error in this setting. During phase 1 of the Minimizing Error, Maximizing Outcome (MEMO) Study, 9 focus groups were conducted with 32 family physicians and general internists from 5 areas in the upper Midwest and New York City. The physicians described challenging settings with rapidly changing conditions. Patients are medically and psychosocially complex and often underinsured. Communication is complicated by multiple languages, time pressure, and inadequate information systems. Complex processes of care have missing elements including medication lists and test results. Physicians are pressed to be more productive, and key administrative decisions are made without their input. Targeted areas to improve safety and reduce error included teamwork, aligned leadership values, diversity, collegiality, and respect. Primary care physicians clearly described positive and negative workplace factors related to safety and error. The themes suggest that systems of care and their dynamic nature warrant attention. Enhancing positive and ameliorating negative cultures and processes of care could bring real benefits to patients, physicians, and ambulatory office settings.

  7. Compensating for Type-I Errors in Video Quality Assessment

    DEFF Research Database (Denmark)

    Brunnström, Kjell; Tavakoli, Samira; Søgaard, Jacob

    2015-01-01

    This paper analyzes the impact of compensating for Type-I errors in video quality assessment. A Type-I error is to incorrectly conclude that there is an effect. The risk increases with the number of comparisons that are performed in statistical tests. Type-I errors are an issue often neglected...
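
The underlying problem is the inflation of the family-wise error rate as comparisons multiply. One standard remedy (a general illustration, not the specific compensation analyzed in the paper) is the Holm-Bonferroni step-down correction:

```python
import numpy as np

def holm_bonferroni(pvals, alpha=0.05):
    """Step-down Holm correction: controls the family-wise Type-I error
    rate at level alpha across m simultaneous tests."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if p[idx] <= alpha / (m - rank):   # thresholds alpha/m, alpha/(m-1), ...
            reject[idx] = True
        else:
            break   # all larger p-values fail as well
    return reject

print(holm_bonferroni([0.001, 0.04, 0.03, 0.20]))  # only the clearest effect survives
```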

  8. Experimental study of error sources in skin-friction balance measurements

    Science.gov (United States)

    Allen, J. M.

    1977-01-01

    An experimental study has been performed to determine potential error sources in skin-friction balance measurements. A floating-element balance, large enough to contain the instrumentation needed to systematically investigate these error sources, has been constructed and tested in the thick turbulent boundary layer on the sidewall of a large supersonic wind tunnel. Test variables include element-to-case misalignment, gap size, and Reynolds number. The effects of these variables on the friction, lip, and normal forces have been analyzed. It was found that larger gap sizes were preferable to smaller ones; that small element recession below the surrounding test surface produced errors comparable to the same amount of protrusion above the test surface; and that normal forces on the element were, in some cases, large compared to the friction force.

  9. Orwell's Instructive Errors

    Science.gov (United States)

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  10. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  11. Network dynamics underlying speed-accuracy trade-offs in response to errors.

    Directory of Open Access Journals (Sweden)

    Yigal Agam

    Full Text Available The ability to dynamically and rapidly adjust task performance based on its outcome is fundamental to adaptive, flexible behavior. Over trials of a task, responses speed up until an error is committed and after the error responses slow down. These dynamic adjustments serve to optimize performance and are well-described by the speed-accuracy trade-off (SATO) function. We hypothesized that SATOs based on outcomes reflect reciprocal changes in the allocation of attention between the internal milieu and the task-at-hand, as indexed by reciprocal changes in activity between the default and dorsal attention brain networks. We tested this hypothesis using functional MRI to examine the pattern of network activation over a series of trials surrounding and including an error. We further hypothesized that these reciprocal changes in network activity are coordinated by the posterior cingulate cortex (PCC) and would rely on the structural integrity of its white matter connections. Using diffusion tensor imaging, we examined whether fractional anisotropy of the posterior cingulum bundle correlated with the magnitude of reciprocal changes in network activation around errors. As expected, reaction time (RT) in trials surrounding errors was consistent with predictions from the SATO function. Activation in the default network was: (i) inversely correlated with RT, (ii) greater on trials before than after an error and (iii) maximal at the error. In contrast, activation in the right intraparietal sulcus of the dorsal attention network (i) was positively correlated with RT and showed the opposite pattern: (ii) less activation before than after an error and (iii) the least activation on the error. Greater integrity of the posterior cingulum bundle was associated with greater reciprocity in network activation around errors. These findings suggest that dynamic changes in attention to the internal versus external milieu in response to errors underlie SATOs in RT and are mediated

  12. Modeling human response errors in synthetic flight simulator domain

    Science.gov (United States)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control-theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling to integrate the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. The models will be verified experimentally in a flight-quality handling simulation.

  13. Apparently conclusive meta-analyses may be inconclusive--Trial sequential analysis adjustment of random error risk due to repetitive testing of accumulating data in apparently conclusive neonatal meta-analyses

    DEFF Research Database (Denmark)

    Brok, Jesper; Thorlund, Kristian; Wetterslev, Jørn;

    2008-01-01

    BACKGROUND: Random error may cause misleading evidence in meta-analyses. The required number of participants in a meta-analysis (i.e. information size) should be at least as large as an adequately powered single trial. Trial sequential analysis (TSA) may reduce risk of random errors due...

  14. Structured error recovery for code-word-stabilized quantum codes

    Science.gov (United States)

    Li, Yunfan; Dumer, Ilya; Grassl, Markus; Pryadko, Leonid P.

    2010-05-01

    Code-word-stabilized (CWS) codes are, in general, nonadditive quantum codes that can correct errors by an exhaustive search of different error patterns, similar to the way that we decode classical nonlinear codes. For an n-qubit quantum code correcting errors on up to t qubits, this brute-force approach consecutively tests different errors of weight t or less and employs a separate n-qubit measurement in each test. In this article, we suggest an error grouping technique that allows one to simultaneously test large groups of errors in a single measurement. This structured error recovery technique exponentially reduces the number of measurements, by a factor of about 3^t. While it still leaves exponentially many measurements for a generic CWS code, the technique is equivalent to syndrome-based recovery for the special case of additive CWS codes.

  15. Patient error: a preliminary taxonomy.

    NARCIS (Netherlands)

    Buetow, S.; Kiata, L.; Liew, T.; Kenealy, T.; Dovey, S.; Elwyn, G.

    2009-01-01

    PURPOSE: Current research on errors in health care focuses almost exclusively on system and clinician error. It tends to exclude how patients may create errors that influence their health. We aimed to identify the types of errors that patients can contribute and help manage, especially in primary care.

  16. Automatic Error Analysis Using Intervals

    Science.gov (United States)

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
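
The idea can be illustrated with a bare-bones interval type. This sketch is hypothetical, not the INTLAB implementation, and it ignores directed rounding, which a real interval library must handle; it simply propagates measurement bounds through arithmetic:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        c = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(c), max(c))

# 5.0 +/- 0.1 multiplied by 2.0 +/- 0.05: the result encloses every
# value the product can take, with no linearization step.
x, y = Interval(4.9, 5.1), Interval(1.95, 2.05)
print(x * y)   # Interval(lo=9.555, hi=10.455)
```

Unlike first-order error propagation, the enclosure stays valid for complicated formulas, which is the effort saving the abstract refers to.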

  17. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  18. Test

    DEFF Research Database (Denmark)

    Bendixen, Carsten

    2014-01-01

    A contribution with a brief, introductory, perspective-setting and concept-clarifying presentation of the concept of the test in the educational universe.

  19. Soft Error Vulnerability of Iterative Linear Algebra Methods

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; de Supinski, B

    2008-01-19

    Devices are increasingly vulnerable to soft errors as their feature sizes shrink. Previously, soft error rates were significant primarily in space and high-atmospheric computing. Modern architectures now use features so small, at sufficiently low voltages, that soft errors are becoming important even at terrestrial altitudes. Due to their large number of components, supercomputers are particularly susceptible to soft errors. Since many large-scale parallel scientific applications use iterative linear algebra methods, the soft error vulnerability of these methods constitutes a large fraction of the applications' overall vulnerability. Many users consider these methods invulnerable to most soft errors since they converge from an imprecise solution to a precise one. However, we show in this paper that iterative methods are vulnerable to soft errors, exhibiting both silent data corruptions and poor ability to detect errors. Further, we evaluate a variety of soft error detection and tolerance techniques, including checkpointing, linear matrix encodings, and residual tracking techniques.
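
One of the detection techniques mentioned, residual tracking, can be sketched for conjugate gradient: the recursively updated residual is periodically compared against an explicitly recomputed one, and a mismatch flags a possible silent corruption. A minimal illustration, assuming a symmetric positive-definite matrix and an arbitrary illustrative drift threshold:

```python
import numpy as np

def cg_with_residual_check(A, b, tol=1e-8, check_every=10, max_iter=1000):
    """Conjugate gradient that periodically recomputes the true residual.
    A silent data corruption shows up as a gap between the recursively
    updated residual norm and the explicitly recomputed one."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if k % check_every == 0:  # explicit check against the true residual
            drift = abs(np.linalg.norm(b - A @ x) - np.sqrt(rs_new))
            if drift > 1e-6 * np.linalg.norm(b):  # illustrative threshold
                raise RuntimeError("residual drift: possible silent error")
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            return x
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
n = 200
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)   # symmetric positive-definite test matrix
b = rng.normal(size=n)
x = cg_with_residual_check(A, b)
```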

  20. Error bars in experimental biology.

    Science.gov (United States)

    Cumming, Geoff; Fidler, Fiona; Vaux, David L

    2007-04-09

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.
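
The practical difference between these quantities is easy to demonstrate: for the same sample, SD, SEM, and a 95% CI give bars of very different sizes. A minimal sketch (synthetic data; the 1.96 multiplier assumes a normal approximation for the CI):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.normal(10.0, 2.0, size=30)   # one sample, n = 30
mean, sd = data.mean(), data.std(ddof=1)
sem = sd / np.sqrt(data.size)           # standard error of the mean
ci95 = 1.96 * sem                       # 95% CI half-width (normal approx.)

labels, errs = ["SD", "SEM", "95% CI"], [sd, sem, ci95]
plt.errorbar(labels, [mean] * 3, yerr=errs, fmt="o", capsize=5)
plt.ylabel("measured value")
plt.title("Same data, three different error bars")
plt.show()
```

This is exactly why the figure legend must state which quantity the bars show.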

  1. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information, along with several error concealment techniques in the decoder. The decoder resynchronizes more quickly, with fewer errors, than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  2. Analyzing temozolomide medication errors: potentially fatal.

    Science.gov (United States)

    Letarte, Nathalie; Gabay, Michael P; Bressler, Linda R; Long, Katie E; Stachnik, Joan M; Villano, J Lee

    2014-10-01

    The EORTC-NCIC regimen for glioblastoma requires different dosing of temozolomide (TMZ) during radiation and maintenance therapy. This complexity is exacerbated by the availability of multiple TMZ capsule strengths. TMZ is an alkylating agent and the major toxicity of this class is dose-related myelosuppression. Inadvertent overdose can be fatal. The websites of the Institute for Safe Medication Practices (ISMP), and the Food and Drug Administration (FDA) MedWatch database were reviewed. We searched the MedWatch database for adverse events associated with TMZ and obtained all reports including hematologic toxicity submitted from 1st November 1997 to 30th May 2012. The ISMP describes errors with TMZ resulting from the positioning of information on the label of the commercial product. The strength and quantity of capsules on the label were in close proximity to each other, and this has been changed by the manufacturer. MedWatch identified 45 medication errors. Patient errors were the most common, accounting for 21 or 47% of errors, followed by dispensing errors, which accounted for 13 or 29%. Seven reports or 16% were errors in the prescribing of TMZ. Reported outcomes ranged from reversible hematological adverse events (13%), to hospitalization for other adverse events (13%) or death (18%). Four error reports lacked detail and could not be categorized. Although the FDA issued a warning in 2003 regarding fatal medication errors and the product label warns of overdosing, errors in TMZ dosing occur for various reasons and involve both healthcare professionals and patients. Overdosing errors can be fatal.

  3. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  4. Personality and error monitoring: an update

    Directory of Open Access Journals (Sweden)

    Sven eHoffmann

    2012-06-01

    Full Text Available People differ considerably with respect to their ability to initiate and maintain cognitive control. A core control function is the processing and evaluation of errors, from which we learn to prevent maladaptive behavior. People strongly differ in the degree of error processing and in how errors are interpreted and appraised. In the present study, it was investigated whether a correlate of error monitoring, the error negativity (Ne or ERN), is related to personality factors. The EEG was measured continuously during a task which provoked errors, and the Ne was tested with respect to its relation to personality traits. Our results indicate a substantial trait-like relation between error processing and personality factors: the Ne was more pronounced for subjects scoring low on the Openness scale, the Impulsiveness scale and the Emotionality scale. Conversely, the Ne was less pronounced for subjects scoring low on the Social Orientation scale. The results indicate that personality traits related to emotional valences and rigidity are reflected in the way people monitor and adapt to erroneous actions.

  5. A Characterization of Prediction Errors

    OpenAIRE

    Meek, Christopher

    2016-01-01

    Understanding prediction errors and determining how to fix them is critical to building effective predictive systems. In this paper, we delineate four types of prediction errors and demonstrate that these four types characterize all prediction errors. In addition, we describe potential remedies and tools that can be used to reduce the uncertainty when trying to determine the source of a prediction error and when trying to take action to remove it.

  6. Error Analysis and Its Implication

    Institute of Scientific and Technical Information of China (English)

    崔蕾

    2007-01-01

    Error analysis is an important theory and approach for exploring the mental processes of language learners in SLA. Its major contribution is pointing out that intralingual errors are the main source of errors during language learning. Researchers' exploration and description of these errors will not only promote the bidirectional study of error analysis as both theory and approach, but also provide implications for second language learning.

  7. Error bars in experimental biology

    OpenAIRE

    2007-01-01

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what er...

  8. New approximating results for data with errors in both variables

    Science.gov (United States)

    Bogdanova, N.; Todorov, S.

    2015-05-01

    We introduce new data from the mineral water probe Lenovo, Bulgaria, measured with errors in both variables. We apply our Orthonormal Polynomial Expansion Method (OPEM), based on the Forsythe recurrence formula, to describe the data in the new error corridor. The development of OPEM gives the approximating curves and their derivatives in optimal orthonormal and usual expansions, including the errors in both variables, with special criteria.
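
For fitting data with errors in both variables, a commonly available baseline is orthogonal distance regression. This is a generic scipy.odr sketch with made-up data, not the OPEM method described above:

```python
import numpy as np
from scipy import odr

def line(beta, x):                  # model y = beta[0] * x + beta[1]
    return beta[0] * x + beta[1]

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
sx = np.full_like(x, 0.1)           # uncertainties on x
sy = np.full_like(y, 0.2)           # uncertainties on y

data = odr.RealData(x, y, sx=sx, sy=sy)
fit = odr.ODR(data, odr.Model(line), beta0=[1.0, 0.0]).run()
print(fit.beta, fit.sd_beta)        # parameters and their standard errors
```

Unlike ordinary least squares, the residuals here are measured perpendicular to the curve, weighted by both error bars.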

  9. Decision support system for determining the contact lens for refractive errors patients with classification ID3

    Science.gov (United States)

    Situmorang, B. H.; Setiawan, M. P.; Tosida, E. T.

    2017-01-01

    Refractive errors are abnormalities of the refraction of light such that images do not focus precisely on the retina, resulting in blurred vision [1]. Refractive errors require the patient to wear glasses or contact lenses in order for eyesight to return to normal. The appropriate glasses or contact lenses differ from one person to another, influenced by patient age, the amount of tear production, the vision prescription, and astigmatism. Because the eye is a very important organ of the human body, accuracy in determining which glasses or contact lenses will be used is required. This research aims to develop a decision support system that can produce the right contact lens recommendation for patients with refractive errors, with an accuracy of 100%. The Iterative Dichotomiser 3 (ID3) classification method generates gain and entropy values for attributes that include the sample code, patient age, astigmatism, the rate of tear production, and the vision prescription, together with the classes that determine the outcome of the decision tree. The eye specialist's test of the training data gave an accuracy rate of 96.7% and an error rate of 3.3%; the test using a confusion matrix gave an accuracy rate of 96.1% and an error rate of 3.1%; for the test data the accuracy rate was 100% with an error rate of 0%.
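
The gain and entropy computations that drive ID3 are compact. A sketch with hypothetical records mirroring the paper's attributes (the attribute values and labels below are invented for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr, labels):
    """Gain of splitting `rows` (a list of dicts) on `attr`."""
    n = len(rows)
    by_value = {}
    for row, lab in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(lab)
    remainder = sum(len(ls) / n * entropy(ls) for ls in by_value.values())
    return entropy(labels) - remainder

# Toy records: ID3 picks the attribute with the highest gain at each node.
rows = [{"age": "young", "tears": "normal"}, {"age": "young", "tears": "reduced"},
        {"age": "senior", "tears": "normal"}, {"age": "senior", "tears": "reduced"}]
labels = ["soft", "none", "hard", "none"]
print(information_gain(rows, "tears", labels))  # 1.0: tears is the better split
print(information_gain(rows, "age", labels))    # 0.5
```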

  10. Measurement error in longitudinal film badge data

    CERN Document Server

    Marsh, J L

    2002-01-01

    Initial logistic regressions turned up some surprising contradictory results, which led to a re-sampling of Sellafield mortality controls without the date-of-employment matching factor. It is suggested that overmatching is the cause of the contradictory results. Comparisons of the two measurements of radiation exposure suggest a strongly linear relationship with non-Normal errors. A method has been developed, using the technique of regression calibration, to deal with these in a case-control study context, and has been applied to this Sellafield study. The classical measurement error model is that of a simple linear regression with unobservable variables. Information about the covariates is available only through error-prone measurements, usually with an additive structure. Ignoring errors has been shown to result in biased regression coefficients, reduced power of hypothesis tests and increased variability of parameter estimates. Radiation is known to be a causal factor for certain types of leukaemia. This link is main...
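
The attenuation described here is easy to demonstrate numerically. The following sketch (synthetic data and illustrative noise levels, not the Sellafield analysis) shows the naive slope shrinking by the reliability ratio λ = Var(X)/(Var(X)+Var(U)) and a regression-calibration-style correction dividing it back out:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(0.0, 1.0, n)             # true (unobservable) exposure
w = x + rng.normal(0.0, 0.8, n)         # error-prone measurement, additive noise
y = 2.0 * x + rng.normal(0.0, 1.0, n)   # outcome depends on the true exposure

beta_naive = np.cov(w, y)[0, 1] / np.var(w)   # attenuated towards zero
lam = np.var(x) / (np.var(x) + 0.8 ** 2)      # reliability ratio; in practice this
                                              # needs validation or replicate data
print(beta_naive, beta_naive / lam)           # corrected slope close to 2.0
```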

  11. Dynamic diagnostics of the error fields in tokamaks

    Science.gov (United States)

    Pustovitov, V. D.

    2007-07-01

    The error field diagnostics based on magnetic measurements outside the plasma is discussed. The analysed methods rely on measuring the plasma dynamic response to the finite-amplitude external magnetic perturbations, which are the error fields and the pre-programmed probing pulses. Such pulses can be created by the coils designed for static error field correction and for stabilization of the resistive wall modes, the technique developed and applied in several tokamaks, including DIII-D and JET. Here analysis is based on the theory predictions for the resonant field amplification (RFA). To achieve the desired level of the error field correction in tokamaks, the diagnostics must be sensitive to signals of several Gauss. Therefore, part of the measurements should be performed near the plasma stability boundary, where the RFA effect is stronger. While the proximity to the marginal stability is important, the absolute values of plasma parameters are not. This means that the necessary measurements can be done in the diagnostic discharges with parameters below the nominal operating regimes, with the stability boundary intentionally lowered. The estimates for ITER are presented. The discussed diagnostics can be tested in dedicated experiments in existing tokamaks. The diagnostics can be considered as an extension of the 'active MHD spectroscopy' used recently in the DIII-D tokamak and the EXTRAP T2R reversed field pinch.

  12. The role of comprehensive check at the blood bank reception on blood requisitions in detecting potential transfusion errors.

    Science.gov (United States)

    Jain, Ashish; Kumari, Sonam; Marwaha, Neelam; Sharma, Ratti Ram

    2015-06-01

    Pre-transfusion testing includes proper requisitions, compatibility testing and pre-release checks. Proper labelling of samples and blood units and an accurate check of patient details help to minimize the risk of errors in transfusion. This study aimed to identify requisition errors before compatibility testing. The study was conducted in the blood bank of a tertiary care hospital in north India over a period of 3 months. The requisitions were screened for errors at the reception counter and inside the pre-transfusion testing laboratory. This included checking the Central Registration number (C.R. No.) and name of the patient on the requisition form and the sample label; the appropriateness of the sample container and sample label; incomplete requisitions; and blood group discrepancies. Out of the 17,148 blood requisitions, 474 (2.76%) requisition errors were detected before compatibility testing. There were 192 (1.11%) requisitions where the C.R. No. on the form and the sample did not match, and in 70 (0.40%) requisitions the patient's name on the requisition form and the sample differed. The highest number of requisition errors was observed in those received from the Emergency and Trauma services (27.38%), followed by the Medical wards (15.82%), and the lowest number (3.16%) was observed in those from the Hematology and Oncology wards. The C.R. No. error was the most common error observed in our study. Thus a careful check of blood requisitions at the blood bank reception counter helps in identifying potential transfusion errors.

  13. Koppitz errors on the Bender-Gestalt for adult retardates: normative data.

    Science.gov (United States)

    Andert, J N; Dinning, W D; Hustak, T L

    1976-04-01

    Normative data on the Koppitz developmental scoring system for the Bender-Gestalt test were derived from a sample which included 510 protocols of adult resident retardates. Percentile norms are presented on Koppitz error scores for three AAMD ranges of retardation based on WAIS IQs and two AAMD ranges of retardation based on Stanford-Binet IQs.

  14. Practicality of Evaluating Soft Errors in Commercial sub-90 nm CMOS for Space Applications

    Science.gov (United States)

    Pellish, Jonathan A.; LaBel, Kenneth A.

    2010-01-01

    The purpose of this presentation is to: highlight the evolution of space memory evaluation; review recent developments regarding low-energy proton direct-ionization soft errors; assess current space memory evaluation challenges, including the increase of non-volatile technology choices; and discuss related testing and evaluation complexities.

  15. Factors which affect the occurrence of nursing errors in medication administration and the errors' management

    Directory of Open Access Journals (Sweden)

    Theodore Kapadohos

    2012-04-01

    Full Text Available Nursing, as a humanitarian science, offers its services for the comprehensive care of patients. Every nursing action involves the possibility of error. Meurier defined nursing error as "any act, decision or omission by a nurse, assessed as incorrect by more experienced colleagues, and having adverse consequences for patients". Medication errors are the most common category of nursing errors. They affect health and patient safety, and they also have a high economic impact on the health systems of each country. Aim: The present study investigated the causative factors of nursing errors, the frequency of medication errors and the ways of reporting, recording and managing these errors in Greek hospitals. Method: A descriptive cross-sectional design was used. The sample consisted of 176 registered nurses from eight public and three private hospitals, working in the ICU, whose duties included the administration of drugs. Data collection was performed using an anonymous structured questionnaire that included demographic characteristics of the sample and closed questions about the factors implicated in the occurrence of errors and their management. To investigate correlations between demographics and various questions concerning the management of errors by nurses, Pearson's chi-squared (χ2) test of heterogeneity was used, and to check for correlations between questions reflecting the participants' views on working conditions and the management of errors, the non-parametric Spearman correlation coefficient (Spearman's rho) was applied. The statistical analysis was performed using SPSS 17 software. Results: After statistical analysis of the data, the most important causative factors for the occurrence of errors were nursing workload (78.9%), distraction of nurses (75.8%) and burnout (56.8%). More than 9 out of 10 nurses had made errors in drug administration (91.5%), especially with the wrong dose (34.7%) and

  16. Error bounds from extra precise iterative refinement

    Energy Technology Data Exchange (ETDEWEB)

    Demmel, James; Hida, Yozo; Kahan, William; Li, Xiaoye S.; Mukherjee, Soni; Riedy, E. Jason

    2005-02-07

    We present the design and testing of an algorithm for iterative refinement of the solution of linear equations, where the residual is computed with extra precision. This algorithm was originally proposed in the 1960s [6, 22] as a means to compute very accurate solutions to all but the most ill-conditioned linear systems of equations. However, two obstacles have until now prevented its adoption in standard subroutine libraries like LAPACK: (1) there was no standard way to access the higher precision arithmetic needed to compute residuals, and (2) it was unclear how to compute a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard [5] has recently removed the first obstacle. To overcome the second obstacle, we show how a single application of iterative refinement can be used to compute an error bound in any norm at small cost, and use this to compute both an error bound in the usual infinity norm and a componentwise relative error bound. We report extensive test results on over 6.2 million matrices of dimension 5, 10, 100, and 1000. As long as a normwise (resp. componentwise) condition number computed by the algorithm is less than 1/(max{10, √n}·ε_w), the computed normwise (resp. componentwise) error bound is at most 2·max{10, √n}·ε_w, and indeed bounds the true error. Here n is the matrix dimension and ε_w is the working (single) precision roundoff error. For worse conditioned problems, we get similarly small correct error bounds in over 89.4% of cases.
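
The core loop is short: solve in working precision, recompute the residual in extra precision, and apply the correction. A sketch using float32 as working and float64 as extra precision (production code would reuse the LU factorization rather than re-solving, and would also form the error bounds described above):

```python
import numpy as np

def refine(A, b, iters=3):
    """Iterative refinement: factor/solve in float32 (working precision),
    compute residuals in float64 (extra precision), correct, repeat."""
    A32, b32 = A.astype(np.float32), b.astype(np.float32)
    x = np.linalg.solve(A32, b32)
    for _ in range(iters):
        r = b.astype(np.float64) - A.astype(np.float64) @ x.astype(np.float64)
        d = np.linalg.solve(A32, r.astype(np.float32))   # reuse the LU factor in real code
        x = (x.astype(np.float64) + d.astype(np.float64)).astype(np.float32)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 100))
b = rng.normal(size=100)
x = refine(A, b)
print(np.linalg.norm(b - A @ x.astype(np.float64)))   # residual near float32 roundoff
```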

  17. (Sample) Size Matters: Defining Error in Planktic Foraminiferal Isotope Measurement

    Science.gov (United States)

    Lowery, C.; Fraass, A. J.

    2015-12-01

    Planktic foraminifera have been used as carriers of stable isotopic signals since the pioneering work of Urey and Emiliani. In those heady days, instrumental limitations required hundreds of individual foraminiferal tests to return a usable value. This had the fortunate side effect of smoothing any seasonal to decadal changes within the planktic foram population, which generally turns over monthly, removing that potential noise from each sample. With the advent of more sensitive mass spectrometers, smaller sample sizes have now become standard. This has been a tremendous advantage, allowing longer time series with the same investment of time and energy. Unfortunately, the use of smaller numbers of individuals to generate a data point has lessened the amount of time averaging in the isotopic analysis and decreased precision in paleoceanographic datasets. With fewer individuals per sample, the differences between individual specimens result in larger variation, and therefore error, and less precise values for each sample. Unfortunately, most workers (the authors included) do not make a habit of reporting the error associated with their sample size. We have created an open-source model in R to quantify the effect of sample size under various realistic and highly modifiable parameters (calcification depth, diagenesis in a subset of the population, improper identification, vital effects, mass, etc.). For example, a sample in which only 1 in 10 specimens is diagenetically altered can be off by >0.3‰ δ18O VPDB or ~1°C. Additionally, and perhaps more importantly, we show that under unrealistically ideal conditions (perfect preservation, etc.) it takes ~5 individuals from the mixed layer to achieve an error of less than 0.1‰. Including just the unavoidable vital effects inflates that number to ~10 individuals. Combining these errors with the typical machine error inherent in mass spectrometers makes this a vital consideration moving forward.
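
The effect of pooling n individuals can be reproduced with a few lines of Monte Carlo. This mimics the kind of model described, with invented parameter values for the vital-effect scatter and the altered fraction; the published model is in R and far more complete:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_sd(n_individuals, n_trials=10_000, vital_sd=0.3,
              frac_altered=0.1, alteration=0.5):
    """Std. dev. of the pooled-sample mean (per-mil d18O) when each test
    carries vital-effect noise and a fraction is diagenetically shifted."""
    vals = rng.normal(0.0, vital_sd, size=(n_trials, n_individuals))
    altered = rng.random((n_trials, n_individuals)) < frac_altered
    vals = np.where(altered, vals + alteration, vals)
    return vals.mean(axis=1).std()

for n in (1, 5, 10, 30):
    print(n, round(sample_sd(n), 3))   # error shrinks roughly as 1/sqrt(n)
```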

  18. Logical error rate in the Pauli twirling approximation.

    Science.gov (United States)

    Katabarwa, Amara; Geller, Michael R

    2015-09-30

    Estimating the performance of error correction protocols is necessary for understanding the operation of potential quantum computers, but this requires physical error models that can be simulated efficiently with classical computers. The Gottesman-Knill theorem guarantees a class of such error models. Of these, one of the simplest is the Pauli twirling approximation (PTA), which is obtained by twirling an arbitrary completely positive error channel over the Pauli basis, resulting in a Pauli channel. In this work, we test the PTA's accuracy at predicting the logical error rate by simulating the 5-qubit code using a 9-qubit circuit with realistic decoherence and unitary gate errors. We find evidence for good agreement with exact simulation, with the PTA overestimating the logical error rate by a factor of 2 to 3. Our results suggest that the PTA is a reliable predictor of the logical error rate, at least for low-distance codes.
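
For a single qubit, the PTA amounts to keeping the diagonal of the Pauli transfer matrix and converting it back to Pauli-channel probabilities. A sketch (amplitude damping chosen as an illustrative channel; not the 9-qubit simulation of the paper):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
PAULIS = [I2, X, Y, Z]

def apply_channel(rho, kraus):
    return sum(K @ rho @ K.conj().T for K in kraus)

def pauli_twirl_probs(kraus):
    """Pauli-channel probabilities (p_I, p_X, p_Y, p_Z) of the twirled channel.
    Twirling over the Pauli group keeps only the diagonal of the Pauli
    transfer matrix, d_i = Tr[P_i L(P_i)]/2; W maps d back to probabilities."""
    d = np.array([np.real(np.trace(P @ apply_channel(P, kraus))) / 2 for P in PAULIS])
    W = np.array([[1, 1, 1, 1], [1, 1, -1, -1], [1, -1, 1, -1], [1, -1, -1, 1]])
    return W @ d / 4

g = 0.1   # amplitude-damping decay probability (illustrative value)
kraus = [np.array([[1, 0], [0, np.sqrt(1 - g)]]), np.array([[0, np.sqrt(g)], [0, 0]])]
print(pauli_twirl_probs(kraus))   # p_X = p_Y = g/4; the rest goes to I and Z
```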

  19. Diagnostic errors in pediatric radiology

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, George A.; Voss, Stephan D. [Children' s Hospital Boston, Department of Radiology, Harvard Medical School, Boston, MA (United States); Melvin, Patrice R. [Children' s Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Graham, Dionne A. [Children' s Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Harvard Medical School, The Department of Pediatrics, Boston, MA (United States)

    2011-03-15

    Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean:1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case) of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  20. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Full Text Available The accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths to improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the configuration RTTTR (tilting head B-axis and rotary table A′-axis on the workpiece side) was set up taking into consideration rigid-body kinematics and homogeneous transformation matrices, in which 43 error components are included. The volumetric error comprises 43 error components that can separately reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of a workpiece is governed by the position of the cutting tool center point (TCP) relative to the workpiece; when the cutting tool deviates from its ideal position relative to the workpiece, machining error results. The compensation process comprises detecting the current tool path, analyzing the geometric error of the RTTTR five-axis CNC machine tool, translating current component positions into compensated positions using the kinematic error model, converting the newly created positions into new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.
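
The rigid-body building block here is the homogeneous transformation matrix with small-angle error motions. A sketch of how one axis's error motions displace a tool point (error values invented for illustration; the full model chains 43 such components across all axes):

```python
import numpy as np

def htm(rx=0.0, ry=0.0, rz=0.0, tx=0.0, ty=0.0, tz=0.0):
    """Small-angle homogeneous transformation: rotations in rad, translations in mm."""
    return np.array([[1.0, -rz,  ry,  tx],
                     [rz,   1.0, -rx, ty],
                     [-ry,  rx,  1.0, tz],
                     [0.0,  0.0, 0.0, 1.0]])

nominal = htm(tx=100.0)                         # commanded 100 mm X-axis move
errors = htm(rz=20e-6, ty=0.002, tz=-0.001)     # yaw and straightness errors of that axis
actual = nominal @ errors                       # error motions compose with the nominal move

tool_point = np.array([0.0, 0.0, 50.0, 1.0])    # point 50 mm below the spindle nose
deviation = (actual - nominal) @ tool_point
print(deviation[:3])   # volumetric error at the tool point, in mm
```

Inverting such a chain at each commanded position is what produces the compensated tool path.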

  1. Demands on and testing of resistance welding for HDPE pipes. Status and prospects of codes, frequent errors; Anforderungen und Pruefung von Heizwendelschweissverbindungen fuer Rohre aus PE-HD. Stand und Aussicht der Richtlinien, haeufige Fehler

    Energy Technology Data Exchange (ETDEWEB)

    Langlouis, Winfried; Baudrit, Benjamin; Behr, Heinz; Bastian, Martin [SKZ, Wuerzburg (Germany)

    2009-01-15

    Resistance welding (RW) of pipes and pipeline elements has been an established process, permitting construction of reliable and durable piping systems, for many decades. Up to now, however, there have been no standardized requirements for performance in shear and peeling tests. This gap has now been closed, and a comprehensive DVS code covering resistance welding from the training stage, via welding parameters, up to and including inspection, is now available. (orig.)

  2. Precision and shortcomings of yaw error estimation using spinner-based light detection and ranging

    DEFF Research Database (Denmark)

    Kragh, Knud Abildgaard; Hansen, Morten Hartvig; Mikkelsen, Torben

    2013-01-01

    When extracting energy from the wind using horizontal-axis wind turbines, the ability to align the rotor axis with the mean wind direction is crucial. In previous work, a method for estimating the yaw error based on measurements from a spinner-mounted light detection and ranging (LIDAR) device was developed and tested. In this study, the simulation parameter space is extended to include higher levels of turbulence intensity. Furthermore, the method is applied to experimental data and compared with met-mast data corrected for a calibration error that was not discovered during previous work. Finally, the shortcomings of using a spinner-mounted LIDAR for yaw error estimation are discussed. The extended simulation study shows that with the applied method, the yaw error can be estimated with a precision of a few degrees, even in highly turbulent flows. Applying the method to experimental data reveals an average...

  3. Recommended Practices for Spreadsheet Testing

    CERN Document Server

    Panko, Raymond R

    2006-01-01

    This paper presents the author's recommended practices for spreadsheet testing. Documented spreadsheet error rates are unacceptable in corporations today. Although improvements are needed throughout the systems development life cycle, credible improvement programs must include comprehensive testing. Several forms of testing are possible, but logic inspection is recommended for module testing. Logic inspection appears to be feasible for spreadsheet developers to do, and it appears to be safe and effective.

  4. Rank Modulation for Translocation Error Correction

    CERN Document Server

    Farnoud, Farzad; Milenkovic, Olgica

    2012-01-01

    We consider rank modulation codes for flash memories that allow for handling arbitrary charge-drop errors. Unlike classical rank modulation codes used for correcting errors that manifest themselves as swaps of two adjacently ranked elements, the proposed translocation rank codes account for more general forms of errors that arise in storage systems. Translocations represent a natural extension of the notion of adjacent transpositions and as such may be analyzed using related concepts in combinatorics and rank modulation coding. Our results include tight bounds on the capacity of translocation rank codes, construction techniques for asymptotically good codes, as well as simple decoding methods for one class of structured codes. As part of our exposition, we also highlight the close connections between the new code family and permutations with short common subsequences, deletion and insertion error-correcting codes for permutations, and permutation arrays.
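
The connection to common subsequences gives a direct way to compute the translocation distance: under the single-element-move definition of a translocation, it equals n minus the length of the longest common subsequence of the two permutations. A small sketch using the standard dynamic program:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence (O(n^2) dynamic program)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

def translocation_distance(p, q):
    # Minimum number of translocations (single-element moves) from p to q
    return len(p) - lcs_len(p, q)

print(translocation_distance((1, 2, 3, 4, 5), (2, 3, 4, 5, 1)))  # 1: move '1' to the end
```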

  5. Optical modulator including graphene

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Ming; Yin, Xiaobo; Zhang, Xiang

    2016-06-07

    The present invention provides for a one or more layer graphene optical modulator. In a first exemplary embodiment the optical modulator includes an optical waveguide, a nanoscale oxide spacer adjacent to a working region of the waveguide, and a monolayer graphene sheet adjacent to the spacer. In a second exemplary embodiment, the optical modulator includes at least one pair of active media, where the pair includes an oxide spacer, a first monolayer graphene sheet adjacent to a first side of the spacer, and a second monolayer graphene sheet adjacent to a second side of the spacer, and at least one optical waveguide adjacent to the pair.

  6. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    Full Text Available The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display there is a complex transformation of probed point coordinates through the geometrical feature model, which makes the assessment of accuracy and measurement uncertainty difficult. As a consequence, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models are generally based on the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length errors by axis and integrates it into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.

  7. Liquid medication dosing errors in children: role of provider counseling strategies.

    Science.gov (United States)

    Yin, H Shonna; Dreyer, Benard P; Moreira, Hannah A; van Schaick, Linda; Rodriguez, Luis; Boettger, Susanne; Mendelsohn, Alan L

    2014-01-01

    To examine the degree to which recommended provider counseling strategies, including advanced communication techniques and dosing instrument provision, are associated with reductions in parent liquid medication dosing errors. Cross-sectional analysis of baseline data on provider communication and dosing instrument provision from a study of a health literacy intervention to reduce medication errors. Parents whose children had been prescribed liquid medications were enrolled; dosing errors were defined as a >20% deviation from the prescribed dose. Multivariate logistic regression analyses were performed, controlling for parent age, language, country, ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease status; and site. Of 287 parents, 41.1% made dosing errors. Advanced counseling and instrument provision in the ED were reported by 33.1% and 19.2%, respectively; 15.0% reported both. Advanced counseling and instrument provision in the ED were associated with decreased errors (30.5% vs. 46.4%, P = .01; 21.8% vs. 45.7%, P = .001). In adjusted analyses, ED advanced counseling in combination with instrument provision was associated with decreased odds of error compared to receiving neither (adjusted odds ratio 0.3; 95% confidence interval 0.1-0.7); advanced counseling alone and instrument provision alone were not significantly associated with odds of error. Provider use of advanced counseling strategies and dosing instrument provision may be especially effective in reducing errors when used together. Copyright © 2014 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  8. Modeling the Error of the Medtronic Paradigm Veo Enlite Glucose Sensor.

    Science.gov (United States)

    Biagi, Lyvia; Ramkissoon, Charrise M; Facchinetti, Andrea; Leal, Yenny; Vehi, Josep

    2017-06-12

    Continuous glucose monitors (CGMs) are prone to inaccuracy due to time lags, sensor drift, calibration errors, and measurement noise. The aim of this study is to derive the model of the error of the second-generation Medtronic Paradigm Veo Enlite (ENL) sensor and compare it with the Dexcom SEVEN PLUS (7P), G4 PLATINUM (G4P), and advanced G4 for Artificial Pancreas studies (G4AP) systems. An enhanced methodology to a previously employed technique was utilized to dissect the sensor error into several components. The dataset used included 37 inpatient sessions in 10 subjects with type 1 diabetes (T1D), in which CGMs were worn in parallel and blood glucose (BG) samples were analyzed every 15 ± 5 min. Calibration error and sensor drift of the ENL sensor were best described by a linear relationship related to the gain and offset. The mean time lag estimated by the model is 9.4 ± 6.5 min. The overall average mean absolute relative difference (MARD) of the ENL sensor was 11.68 ± 5.07%. Calibration error had the highest contribution to total error in the ENL sensor. This was also reported in the 7P, G4P, and G4AP. The model of the ENL sensor error will be useful to test the in silico performance of CGM-based applications, i.e., the artificial pancreas, employing this kind of sensor.
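
MARD itself is a one-line statistic over paired CGM/reference readings. A sketch (readings invented for illustration):

```python
import numpy as np

def mard(cgm, bg):
    """Mean absolute relative difference (%) between paired CGM and reference BG."""
    cgm, bg = np.asarray(cgm, float), np.asarray(bg, float)
    return 100.0 * np.mean(np.abs(cgm - bg) / bg)

print(mard([110, 150, 95], [100, 160, 90]))   # illustrative paired readings, mg/dL
```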

  9. Modeling the Error of the Medtronic Paradigm Veo Enlite Glucose Sensor

    Directory of Open Access Journals (Sweden)

    Lyvia Biagi

    2017-06-01

    Full Text Available Continuous glucose monitors (CGMs) are prone to inaccuracy due to time lags, sensor drift, calibration errors, and measurement noise. The aim of this study is to derive the model of the error of the second-generation Medtronic Paradigm Veo Enlite (ENL) sensor and compare it with the Dexcom SEVEN PLUS (7P), G4 PLATINUM (G4P), and advanced G4 for Artificial Pancreas studies (G4AP) systems. An enhanced methodology to a previously employed technique was utilized to dissect the sensor error into several components. The dataset used included 37 inpatient sessions in 10 subjects with type 1 diabetes (T1D), in which CGMs were worn in parallel and blood glucose (BG) samples were analyzed every 15 ± 5 min. Calibration error and sensor drift of the ENL sensor were best described by a linear relationship related to the gain and offset. The mean time lag estimated by the model is 9.4 ± 6.5 min. The overall average mean absolute relative difference (MARD) of the ENL sensor was 11.68 ± 5.07%. Calibration error had the highest contribution to total error in the ENL sensor. This was also reported in the 7P, G4P, and G4AP. The model of the ENL sensor error will be useful to test the in silico performance of CGM-based applications, i.e., the artificial pancreas, employing this kind of sensor.

  10. Implications of Error Analysis Studies for Academic Interventions

    Science.gov (United States)

    Mather, Nancy; Wendling, Barbara J.

    2017-01-01

    We reviewed 13 studies that focused on analyzing student errors on achievement tests from the Kaufman Test of Educational Achievement-Third edition (KTEA-3). The intent was to determine what instructional implications could be derived from in-depth error analysis. As we reviewed these studies, several themes emerged. We explain how a careful…

  11. Errors and mistakes in breast ultrasound diagnostics.

    Science.gov (United States)

    Jakubowski, Wiesław; Dobruch-Sobczak, Katarzyna; Migda, Bartosz

    2012-09-01

    Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high-frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved the diagnostics of breast disease. Nevertheless, as in every imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), and insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those impossible to avoid and those that can potentially be reduced. In this article the most frequent errors in ultrasound are presented, including those caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper settings of general gain, the time gain curve or the range. Errors dependent on the examiner, resulting in a wrong BIRADS-usg classification, are divided into negative and positive errors. The sources of these errors are listed, and methods of minimizing their number are discussed, including those related to appropriate examination technique, taking into account data from the case history and using the greatest possible number of additional options such as harmonic imaging, color and power Doppler, and elastography. Examples of errors resulting from the technical conditions of the method are presented, as well as examiner-dependent errors related to the great diversity and variation of ultrasound images of pathological breast lesions.

  12. Errors and mistakes in breast ultrasound diagnostics

    Science.gov (United States)

    Jakubowski, Wiesław; Migda, Bartosz

    2012-01-01

    Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high-frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved the diagnostics of breast disease. Nevertheless, as in every imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), and insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those impossible to avoid and those that can potentially be reduced. In this article the most frequent errors in ultrasound are presented, including those caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper settings of general gain, the time gain curve or the range. Errors dependent on the examiner, resulting in a wrong BIRADS-usg classification, are divided into negative and positive errors. The sources of these errors are listed, and methods of minimizing their number are discussed, including those related to appropriate examination technique, taking into account data from the case history and using the greatest possible number of additional options such as harmonic imaging, color and power Doppler, and elastography. Examples of errors resulting from the technical conditions of the method are presented, as well as examiner-dependent errors related to the great diversity and variation of ultrasound images of pathological breast lesions. PMID:26675358

  13. Visual Impairment, Including Blindness

    Science.gov (United States)


  14. Transient and permanent error control for networks-on-chip

    CERN Document Server

    Yu, Qiaoyan

    2012-01-01

    This book addresses reliability and energy efficiency of on-chip networks using a configurable error control coding (ECC) scheme for datalink-layer transient error management. The method can adjust both error detection and correction strengths at runtime by varying the number of redundant wires for parity-check bits. Methods are also presented to tackle joint transient and permanent error correction, exploiting the redundant resources already available on-chip. A parallel and flexible network simulator is also introduced, which facilitates examining the impact of various error control methods on network-on-chip performance. Includes a complete survey of error control methods for reliable networks-on-chip, evaluated for reliability, energy and performance metrics; Provides analysis of error control in various network-on-chip layers, as well as presentation of an innovative multi-layer error control coding technique; Presents state-of-the-art solutions to address simultaneously reliability, energy and performan...

  15. Teacher knowledge of error analysis in differential calculus

    Directory of Open Access Journals (Sweden)

    Eunice K. Moru

    2014-12-01

    Full Text Available The study investigated teacher knowledge of error analysis in differential calculus. Two teachers were the sample of the study: one a subject specialist and the other a mathematics education specialist. Questionnaires and interviews were used for data collection. The findings of the study reflect that the teachers' knowledge of error analysis was characterised by the following assertions, which are backed up with some evidence: (1) teachers identified the errors correctly, (2) the generalised error identification resulted in opaque analysis, (3) some of the identified errors were not interpreted from multiple perspectives, (4) teachers' evaluation of errors was either local or global and (5) in remedying errors, accuracy and efficiency were emphasised more than conceptual understanding. The implications of the findings of the study for teaching include engaging in error analysis continuously, as this is one way of improving knowledge for teaching.

  16. Error Analysis in Mathematics Education.

    Science.gov (United States)

    Rittner, Max

    1982-01-01

    The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)

  17. Payment Error Rate Measurement (PERM)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...

  18. Error-finding and error-correcting methods for the start-up of the SLC

    Energy Technology Data Exchange (ETDEWEB)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and the trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time, we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time, we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist, but in a more systematic fashion. The methods used in these procedures and some of the recent applications are described in this paper.
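
    As a toy illustration of beam-kick error finding (a generic textbook approach, not the SLC procedure or the expert system itself), the sketch below assumes a drift-only beam line: an unknown angular kick at location s0 displaces each downstream beam-position monitor (BPM) in proportion to its distance from the kick, so scanning candidate kick locations and fitting the kick angle by least squares recovers both. The BPM positions, noise level, and the kick itself are invented for the example.

```python
# Toy kick-error finder: in a drift-only model, a kick theta at s0 shifts each
# downstream BPM by theta * (s_bpm - s0). Scan s0, fit theta, keep the best fit.
import numpy as np

s_bpm = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # BPM positions (m), assumed
true_s0, true_theta = 17.0, 0.3e-3                 # hidden error: 0.3 mrad at 17 m

rng = np.random.default_rng(0)
lever_true = np.clip(s_bpm - true_s0, 0.0, None)   # zero upstream of the kick
readings = true_theta * lever_true + rng.normal(0.0, 2e-6, s_bpm.size)

best = None
for s0 in np.arange(0.0, 50.0, 0.5):               # candidate kick locations
    lever = np.clip(s_bpm - s0, 0.0, None)
    if not lever.any():
        continue
    theta = (lever @ readings) / (lever @ lever)   # one-parameter least squares
    resid = np.sum((readings - theta * lever) ** 2)
    if best is None or resid < best[0]:
        best = (resid, s0, theta)

_, s0_fit, theta_fit = best
print(f"kick found near s = {s0_fit:.1f} m, theta = {theta_fit * 1e3:.2f} mrad")
```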

  19. An investigation of error correcting techniques for OMV and AXAF

    Science.gov (United States)

    Ingels, Frank; Fryer, John

    1991-01-01

    The original objectives of this project were to build a test system for the NASA 255/223 Reed-Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error-correcting behavior of the chip set by injecting known error patterns into data and observing the results. Error sequences were generated using pseudo-random number generator programs, with a Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and numbers of uncorrectable errors were calculated for each data set before testing.
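
    The error-generation scheme described in the abstract can be sketched as follows; all parameters are invented for illustration. Burst start positions follow a Poisson process (exponential inter-burst gaps), burst lengths are drawn from a Gaussian and rounded to at least one symbol, and per-set sample statistics are tallied, together with the number of 255-symbol blocks exceeding the 16-symbol correction limit of a (255,223) Reed-Solomon code.

```python
# Sketch of burst-error injection: Poisson-process burst arrivals, Gaussian
# burst lengths, plus the per-data-set statistics the report tabulates.
import numpy as np

rng = np.random.default_rng(42)
n_symbols = 10_000                  # one test data set (length assumed)
mean_gap = 400.0                    # mean symbols between bursts (assumed)
burst_mu, burst_sigma = 5.0, 2.0    # Gaussian burst-length parameters (assumed)

error_mask = np.zeros(n_symbols, dtype=bool)
burst_lengths = []
pos = 0
while True:
    pos += int(rng.exponential(mean_gap)) + 1        # Poisson inter-burst gap
    if pos >= n_symbols:
        break
    length = max(1, int(round(rng.normal(burst_mu, burst_sigma))))
    error_mask[pos:pos + length] = True              # inject a burst of errors
    burst_lengths.append(length)
    pos += length

lengths = np.array(burst_lengths)
print(f"bursts: {lengths.size}, mean length: {lengths.mean():.2f}, "
      f"variance: {lengths.var(ddof=1):.2f}")

# A (255,223) Reed-Solomon code corrects up to 16 symbol errors per block, so
# any 255-symbol block with more than 16 errors would be uncorrectable.
blocks = error_mask[: (n_symbols // 255) * 255].reshape(-1, 255)
print("uncorrectable blocks:", int((blocks.sum(axis=1) > 16).sum()))
```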

  20. Drug Administration Errors in Hospital Inpatients: A Systematic Review

    Science.gov (United States)

    Berdot, Sarah; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre; Sabatier, Brigitte

    2013-01-01

    Context Drug administration in the hospital setting is the last barrier before a possible error reaches the patient. Objectives We aimed to analyze the prevalence and nature of administration errors detected by the observation method. Data Sources Embase, MEDLINE, and the Cochrane Library from 1966 to December 2011, and reference lists of included studies. Study Selection Observational studies, cross-sectional studies, before-and-after studies, and randomized controlled trials that measured the rate of administration errors in inpatients were included. Data Extraction Two reviewers (senior pharmacists) independently identified studies for inclusion. One reviewer extracted the data; the second reviewer checked the data. The main outcome was the error rate, calculated as the number of errors excluding wrong-time errors divided by the Total Opportunity for Errors (TOE, the sum of the total number of doses ordered plus the unordered doses given), multiplied by 100. For studies that reported it, clinical impact was reclassified into four categories, from fatal to minor or no impact. Owing to large heterogeneity, results were expressed as median values (interquartile range, IQR) according to study design. Results Among 2088 studies, a total of 52 reported TOE. Most of the studies were cross-sectional (N=46). The median error rate excluding wrong-time errors for the cross-sectional studies using TOE was 10.5% [IQR: 7.3%-21.7%]. No fatal error was observed, and most errors were classified as minor in the 18 studies in which clinical impact was analyzed. We did not find any evidence of publication bias. Conclusions Administration errors are frequent among inpatients. The median error rate excluding wrong-time errors for the cross-sectional studies using TOE was about 10%. Standardization of the administration error rate, using the same denominator (TOE), numerator, and types of errors, is essential for further publications. PMID:23818992
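
    The review's error-rate definition reduces to simple arithmetic; the counts below are hypothetical, chosen only so that the result matches the reported 10.5% median.

```python
# TOE-based administration error rate, per the definition above.
doses_ordered = 950            # hypothetical count
unordered_doses_given = 50     # hypothetical count
errors_excl_wrong_time = 105   # hypothetical count

toe = doses_ordered + unordered_doses_given        # Total Opportunity for Errors
error_rate = 100.0 * errors_excl_wrong_time / toe
print(f"administration error rate: {error_rate:.1f}%")   # -> 10.5%
```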