WorldWideScience

Sample records for survey ii error

  1. THE DISKMASS SURVEY. II. ERROR BUDGET

    International Nuclear Information System (INIS)

    Bershady, Matthew A.; Westfall, Kyle B.; Verheijen, Marc A. W.; Martinsson, Thomas; Andersen, David R.; Swaters, Rob A.

    2010-01-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ*^disk), and disk maximality (F_*,max^disk ≡ V_*,max^disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ∼25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  2. The DiskMass Survey. II. Error Budget

    Science.gov (United States)

    Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas

    2010-06-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ*^disk), and disk maximality (F_*,max^disk ≡ V_*,max^disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.
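
    The headline numbers invite a quick consistency check. Below is a minimal sketch (Python, with placeholder error terms rather than the paper's actual budget): since the dynamical surface density scales as Σ_dyn ∝ σ_z²/z₀, the velocity-dispersion term enters the fractional error twice; independent terms add in quadrature, and the precision of a sample statistic then tightens roughly as 1/√N.

```python
import numpy as np

# Illustrative only: quadrature combination of independent fractional errors.
# Sigma_dyn ~ sigma_z**2 / z0, so the dispersion term enters twice.
# All input values are placeholders, not the paper's budget.

def combine_fractional_errors(*fracs):
    """Quadrature sum of independent fractional (1-sigma) errors."""
    return float(np.sqrt(np.sum(np.square(fracs))))

vel_dispersion = 0.10  # assumed ~10% on sigma_z (doubled below, via sigma_z**2)
scale_height = 0.25    # ~25% on z0, the figure quoted in the abstract
inclination = 0.05     # assumed ~5% projection term

per_galaxy = combine_fractional_errors(2 * vel_dispersion, scale_height, inclination)
print(f"per-galaxy fractional error on Sigma_dyn: {per_galaxy:.2f}")  # ~0.32

# A sample statistic (e.g., a quartile) tightens roughly as 1/sqrt(N):
n_eff = 10
print(f"sample-statistic precision for N={n_eff}: {per_galaxy / np.sqrt(n_eff):.2f}")  # ~0.10
```

    With these invented inputs the per-galaxy error lands near the quoted ~25-30% and the sample statistic near 10%; this reproduces the shape of the argument, not the paper's detailed budget.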

  3. Total Survey Error for Longitudinal Surveys

    NARCIS (Netherlands)

    Lynn, Peter; Lugtig, P.J.

    2016-01-01

    This article describes the application of the total survey error paradigm to longitudinal surveys. Several aspects of survey error, and of the interactions between different types of error, are distinct in the longitudinal survey context. Furthermore, error trade-off decisions in survey design and…

  4. Analysis of Employee's Survey for Preventing Human-Errors

    International Nuclear Information System (INIS)

    Sung, Chanho; Kim, Younggab; Joung, Sanghoun

    2013-01-01

    Human errors in nuclear power plants can cause events or incidents, large and small. These events or incidents are among the main contributors to reactor trips and might threaten the safety of nuclear plants. To prevent human errors, KHNP (Korea Hydro & Nuclear Power) introduced 'human-error prevention techniques' and has applied them to main areas such as plant operation, operation support, and maintenance and engineering. This paper proposes methods to prevent and reduce human errors in nuclear power plants by analyzing survey results on the utilization of the human-error prevention techniques and on employees' awareness of preventing human errors. The survey analysis presented the status of the human-error prevention techniques and the employees' awareness of preventing human errors. Employees' understanding and utilization of the techniques were generally high, and the level of employee training and its effect on actual work were good. Employees also answered that the root causes of human error lie in the working environment, including tight work processes, manpower shortages, and excessive duties, rather than in personal negligence or lack of knowledge; consideration of the working environment is therefore certainly needed. Based on this survey analysis, the best current methods of preventing human error are personal protective equipment, substantial training and education, personal mental-health checks before starting work, prohibition of multitasking, compliance with procedures, and enhanced job-site review. However, the most important and basic factors for preventing human error are the interest of workers and an organizational atmosphere with good communication between managers and workers and between employees and their superiors.

  5. The decline and fall of Type II error rates

    Science.gov (United States)

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
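
    The claim is easy to see numerically in the simplest special case, a one-sided z-test with known variance (general linear models would use the noncentral F distribution instead). The effect size and α below are assumed for illustration:

```python
import numpy as np
from scipy.stats import norm

# Type II error for a one-sided z-test: beta(n) = Phi(z_{1-alpha} - delta*sqrt(n)).
# Because the normal tail decays like exp(-x**2 / 2), beta falls off
# (super)exponentially in n once power is moderate.

alpha = 0.05
delta = 0.5                     # assumed standardized effect size
z_crit = norm.ppf(1 - alpha)

for n in (10, 20, 40, 80):
    beta = norm.cdf(z_crit - delta * np.sqrt(n))
    print(f"n={n:3d}  Type II error = {beta:.2e}  power = {1 - beta:.4f}")
```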

  6. Medical Errors in Cyprus: The 2005 Eurobarometer Survey

    Directory of Open Access Journals (Sweden)

    Andreas Pavlakis

    2012-01-01

    Background: Medical errors have been highlighted in recent years by different agencies, scientific bodies and research teams alike. We sought to explore the issue of medical errors in Cyprus using data from the Eurobarometer survey. Methods: Data from the special Eurobarometer survey conducted in 2005 across all European Union countries (EU-25) and the acceding countries were obtained from the corresponding EU office. Statistical analyses including logistic regression models were performed using SPSS. Results: A total of 502 individuals participated in the Cyprus survey. About 90% reported that they had often or sometimes heard about medical errors, while 22% reported that they or a family member had suffered a serious medical error in a local hospital. In addition, 9.4% reported a serious problem from a prescribed medicine. We also found statistically significant differences across ages and gender and between rural and urban residents. Finally, using multivariable-adjusted logistic regression models, we found that residents in rural areas were more likely to have suffered a serious medical error in a local hospital or from a prescribed medicine. Conclusion: Our study shows that the vast majority of residents in Cyprus, like other Europeans, worry about medical errors, and a significant percentage report having suffered a serious medical error at a local hospital or from a prescribed medicine. The results of our study could help the medical community in Cyprus and society at large to enhance their vigilance with respect to medical errors in order to improve medical care.
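
    As a hedged illustration of the kind of model the abstract describes (the data below are simulated placeholders, not the Eurobarometer microdata), a multivariable logistic regression might look like this:

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-in for the survey: outcome = "suffered a serious medical
# error", predictors = rural residence, age, gender. Coefficients are invented.
rng = np.random.default_rng(0)
n = 502
rural = rng.integers(0, 2, n)
age = rng.integers(18, 85, n)
female = rng.integers(0, 2, n)
logit_p = -2.0 + 0.5 * rural + 0.01 * (age - 50) + 0.1 * female
y = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(np.column_stack([rural, age, female]))
fit = sm.Logit(y, X).fit(disp=False)
print(fit.summary(xname=["const", "rural", "age", "female"]))
# exp(coef on "rural") is the adjusted odds ratio for rural residents.
```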

  7. Nonresponse Error in Mail Surveys: Top Ten Problems

    Directory of Open Access Journals (Sweden)

    Jeanette M. Daly

    2011-01-01

    Conducting mail surveys can result in nonresponse error, which occurs when the potential participant is unwilling to participate or impossible to contact. Nonresponse can result in a reduction in precision of the study and may bias results. The purpose of this paper is to describe and make readers aware of a top ten list of mailed survey problems affecting the response rate encountered over time with different research projects, while utilizing the Dillman Total Design Method. Ten nonresponse error problems were identified, such as inserter machine gets sequence out of order, capitalization in databases, and mailing discarded by postal service. These ten mishaps can potentiate nonresponse errors, but there are ways to minimize their frequency. Suggestions offered stem from our own experiences during research projects. Our goal is to increase researchers' knowledge of nonresponse error problems and to offer solutions which can decrease nonresponse error in future projects.

  8. The Southern H ii Region Discovery Survey (SHRDS): Pilot Survey

    Energy Technology Data Exchange (ETDEWEB)

    Brown, C.; Dickey, John M. [School of Physical Sciences, Private Bag 37, University of Tasmania, Hobart, TAS, 7001 (Australia); Jordan, C. [International Centre for Radio Astronomy Research, Curtin University, Perth, WA, 6845 (Australia); Anderson, L. D.; Armentrout, W. P. [Department of Physics and Astronomy, West Virginia University, P.O. Box 6315, Morgantown, WV 26506 (United States); Balser, Dana S.; Wenger, Trey V. [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22904 (United States); Bania, T. M. [Institute for Astrophysical Research, Department of Astronomy, Boston University, 725 Commonwealth Avenue, Boston, MA 02215 (United States); Dawson, J. R. [Department of Physics and Astronomy and MQ Research Centre in Astronomy, Astrophysics and Astrophotonics, Macquarie University, NSW, 2109 (Australia); McClure-Griffiths, N. M. [Research School of Astronomy and Astrophysics, The Australian National University, Canberra ACT 2611 (Australia)

    2017-07-01

    The Southern H ii Region Discovery Survey is a survey of the third and fourth quadrants of the Galactic plane that will detect radio recombination line (RRL) and continuum emission at cm-wavelengths from several hundred H ii region candidates using the Australia Telescope Compact Array. The targets for this survey come from the WISE Catalog of Galactic H ii Regions and were identified based on mid-infrared and radio continuum emission. In this pilot project, two different configurations of the Compact Array Broad Band receiver and spectrometer system were used for short test observations. The pilot surveys detected RRL emission from 36 of 53 H ii region candidates, as well as seven known H ii regions that were included for calibration. These 36 recombination line detections confirm that the candidates are true H ii regions and allow us to estimate their distances.

  9. The Southern H ii Region Discovery Survey (SHRDS): Pilot Survey

    International Nuclear Information System (INIS)

    Brown, C.; Dickey, John M.; Jordan, C.; Anderson, L. D.; Armentrout, W. P.; Balser, Dana S.; Wenger, Trey V.; Bania, T. M.; Dawson, J. R.; McClure-Griffiths, N. M.

    2017-01-01

    The Southern H ii Region Discovery Survey is a survey of the third and fourth quadrants of the Galactic plane that will detect radio recombination line (RRL) and continuum emission at cm-wavelengths from several hundred H ii region candidates using the Australia Telescope Compact Array. The targets for this survey come from the WISE Catalog of Galactic H ii Regions and were identified based on mid-infrared and radio continuum emission. In this pilot project, two different configurations of the Compact Array Broad Band receiver and spectrometer system were used for short test observations. The pilot surveys detected RRL emission from 36 of 53 H ii region candidates, as well as seven known H ii regions that were included for calibration. These 36 recombination line detections confirm that the candidates are true H ii regions and allow us to estimate their distances.

  10. The sloan digital sky survey-II supernova survey

    DEFF Research Database (Denmark)

    Frieman, Joshua A.; Bassett, Bruce; Becker, Andrew

    2008-01-01

    The Sloan Digital Sky Survey-II (SDSS-II) has embarked on a multi-year project to identify and measure light curves for intermediate-redshift (0.05 < z < 0.35) Type Ia supernovae (SNe Ia) using repeated five-band (ugriz) imaging over an area of 300 sq. deg. The survey region is a stripe 2.5° wide...

  11. The computation of equating errors in international surveys in education.

    Science.gov (United States)

    Monseur, Christian; Berezner, Alla

    2007-01-01

    Since the IEA's Third International Mathematics and Science Study, one of the major objectives of international surveys in education has been to report trends in achievement. The names of the two current IEA surveys reflect this growing interest: Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS). Similarly, a central concern of the OECD's PISA is with trends in outcomes over time. To facilitate trend analyses, these studies link their tests using common item equating in conjunction with item response modelling methods. IEA and PISA policies differ in terms of reporting the error associated with trends. In IEA surveys, the standard errors of the trend estimates do not include the uncertainty associated with the linking step, while PISA does include a linking error component in the standard errors of trend estimates. In other words, PISA implicitly acknowledges that trend estimates partly depend on the selected common items, while the IEA's surveys do not recognise this source of error. Failing to recognise the linking error leads to an underestimation of the standard errors and thus increases the Type I error rate, thereby resulting in reporting of significant changes in achievement when in fact these are not significant. The growing interest of policy makers in trend indicators and the impact of the evaluation of educational reforms appear to be incompatible with such underestimation. However, the procedure implemented by PISA raises a few issues about the underlying assumptions for the computation of the equating error. After a brief introduction, this paper will describe the procedure PISA implemented to compute the linking error. The underlying assumptions of this procedure will then be discussed. Finally, an alternative method based on replication techniques will be presented, evaluated in a simulation study, and then applied to the PISA 2000 data.
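
    A sketch of the linking-error idea, under the simplifying formulation used in early PISA cycles: the link is the mean shift of the common items' difficulty estimates between two calibrations, and its uncertainty is the standard error of that mean over the link items. The item values below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_items = 28                                           # number of common (link) items
delta_t1 = rng.normal(0.0, 1.0, n_items)               # item difficulties, cycle 1
delta_t2 = delta_t1 + rng.normal(0.05, 0.12, n_items)  # re-estimates, cycle 2

shift = delta_t2 - delta_t1
linking_error = shift.std(ddof=1) / np.sqrt(n_items)
print(f"mean shift = {shift.mean():.3f} logits, linking error = {linking_error:.3f} logits")

# The paper's point: this component should be added in quadrature to the
# sampling standard error of a trend estimate; omitting it (as in the IEA
# surveys described above) understates the total error.
```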

  12. Analysis of Employee's Survey for Preventing Human-Errors

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Chanho; Kim, Younggab; Joung, Sanghoun [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Human errors in nuclear power plants can cause events or incidents, large and small. These events or incidents are among the main contributors to reactor trips and might threaten the safety of nuclear plants. To prevent human errors, KHNP (Korea Hydro & Nuclear Power) introduced 'human-error prevention techniques' and has applied them to main areas such as plant operation, operation support, and maintenance and engineering. This paper proposes methods to prevent and reduce human errors in nuclear power plants by analyzing survey results on the utilization of the human-error prevention techniques and on employees' awareness of preventing human errors. The survey analysis presented the status of the human-error prevention techniques and the employees' awareness of preventing human errors. Employees' understanding and utilization of the techniques were generally high, and the level of employee training and its effect on actual work were good. Employees also answered that the root causes of human error lie in the working environment, including tight work processes, manpower shortages, and excessive duties, rather than in personal negligence or lack of knowledge; consideration of the working environment is therefore certainly needed. Based on this survey analysis, the best current methods of preventing human error are personal protective equipment, substantial training and education, personal mental-health checks before starting work, prohibition of multitasking, compliance with procedures, and enhanced job-site review. However, the most important and basic factors for preventing human error are the interest of workers and an organizational atmosphere with good communication between managers and workers and between employees and their superiors.

  13. The Data Release of the Sloan Digital Sky Survey-II Supernova Survey

    Science.gov (United States)

    Sako, Masao; Bassett, Bruce; Becker, Andrew C.; Brown, Peter J.; Campbell, Heather; Wolf, Rachel; Cinabro, David; D’Andrea, Chris B.; Dawson, Kyle S.; DeJongh, Fritz; Depoy, Darren L.; Dilday, Ben; Doi, Mamoru; Filippenko, Alexei V.; Fischer, John A.; Foley, Ryan J.; Frieman, Joshua A.; Galbany, Lluis; Garnavich, Peter M.; Goobar, Ariel; Gupta, Ravi R.; Hill, Gary J.; Hayden, Brian T.; Hlozek, Renée; Holtzman, Jon A.; Hopp, Ulrich; Jha, Saurabh W.; Kessler, Richard; Kollatschny, Wolfram; Leloudas, Giorgos; Marriner, John; Marshall, Jennifer L.; Miquel, Ramon; Morokuma, Tomoki; Mosher, Jennifer; Nichol, Robert C.; Nordin, Jakob; Olmstead, Matthew D.; Östman, Linda; Prieto, Jose L.; Richmond, Michael; Romani, Roger W.; Sollerman, Jesper; Stritzinger, Max; Schneider, Donald P.; Smith, Mathew; Wheeler, J. Craig; Yasuda, Naoki; Zheng, Chen

    2018-06-01

    This paper describes the data release of the Sloan Digital Sky Survey-II (SDSS-II) Supernova Survey conducted between 2005 and 2007. Light curves, spectra, classifications, and ancillary data are presented for 10,258 variable and transient sources discovered through repeat ugriz imaging of SDSS Stripe 82, a 300 deg² area along the celestial equator. This data release comprises all transient sources brighter than r ≃ 22.5 mag with no history of variability prior to 2004. Dedicated spectroscopic observations were performed on a subset of 889 transients, and spectra were obtained for thousands of transient host galaxies using the SDSS-III BOSS spectrographs. Photometric classifications are provided for the candidates with good multi-color light curves that were not observed spectroscopically, using host galaxy redshift information when available. From these observations, 4607 transients are either spectroscopically confirmed, or likely to be, supernovae, making this the largest sample of supernova candidates ever compiled. We present a new method for SN host-galaxy identification and derive host-galaxy properties including stellar masses, star formation rates, and the average stellar population ages from our SDSS multi-band photometry. We derive SALT2 distance moduli for a total of 1364 SN Ia with spectroscopic redshifts as well as photometric redshifts for a further 624 purely photometric SN Ia candidates. Using the spectroscopically confirmed subset of the three-year SDSS-II SN Ia sample and assuming a flat ΛCDM cosmology, we determine Ω_M = 0.315 ± 0.093 (statistical error only) and detect a non-zero cosmological constant at 5.7σ.

  14. The Data Release of the Sloan Digital Sky Survey-II Supernova Survey

    Energy Technology Data Exchange (ETDEWEB)

    Sako, Masao; et al.

    2014-01-14

    This paper describes the data release of the Sloan Digital Sky Survey-II (SDSS-II) Supernova Survey conducted between 2005 and 2007. Light curves, spectra, classifications, and ancillary data are presented for 10,258 variable and transient sources discovered through repeat ugriz imaging of SDSS Stripe 82, a 300 deg² area along the celestial equator. This data release comprises all transient sources brighter than r ~ 22.5 mag with no history of variability prior to 2004. Dedicated spectroscopic observations were performed on a subset of 889 transients, and spectra were obtained for thousands of transient host galaxies using the SDSS-III BOSS spectrographs. Photometric classifications are provided for the candidates with good multi-color light curves that were not observed spectroscopically. From these observations, 4607 transients are either spectroscopically confirmed, or likely to be, supernovae, making this the largest sample of supernova candidates ever compiled. We present a new method for SN host-galaxy identification and derive host-galaxy properties including stellar masses, star-formation rates, and the average stellar population ages from our SDSS multi-band photometry. We derive SALT2 distance moduli for a total of 1443 SN Ia with spectroscopic redshifts as well as photometric redshifts for a further 677 purely photometric SN Ia candidates. Using the spectroscopically confirmed subset of the three-year SDSS-II SN Ia sample and assuming a flat ΛCDM cosmology, we determine Ω_M = 0.315 ± 0.093 (statistical error only) and detect a non-zero cosmological constant at 5.7σ.

  15. Errors in practical measurement in surveying, engineering, and technology

    International Nuclear Information System (INIS)

    Barry, B.A.; Morris, M.D.

    1991-01-01

    This book discusses statistical measurement, error theory, and statistical error analysis. Topics include an introduction to measurement, measurement errors, the reliability of measurements, probability theory of errors, measures of reliability, reliability of repeated measurements, propagation of errors in computing, errors and weights, practical application of the theory of errors in measurement, and two-dimensional errors; a bibliography is included. Appendices address significant figures in measurement; basic concepts of probability and the normal probability curve; writing a sample specification for a procedure; classification, standards of accuracy, and general specifications of geodetic control surveys; the geoid; the frequency distribution curve; and the computer and calculator solution of problems.

  16. A national physician survey of diagnostic error in paediatrics.

    Science.gov (United States)

    Perrem, Lucy M; Fanshawe, Thomas R; Sharif, Farhana; Plüddemann, Annette; O'Neill, Michael B

    2016-10-01

    This cross-sectional survey explored paediatric physician perspectives regarding diagnostic errors. All paediatric consultants and specialist registrars in Ireland were invited to participate in this anonymous online survey. The response rate for the study was 54% (n = 127). Respondents had a median of 9 years' clinical experience (interquartile range (IQR) 4-20 years). A diagnostic error was reported at least monthly by 19 (15.0%) respondents. Consultants reported significantly fewer diagnostic errors than trainees (p = 0.01). Cognitive error was the top-ranked contributing factor to diagnostic error, with incomplete history and examination considered to be the principal cognitive error. Seeking a second opinion and close follow-up of patients to ensure that the diagnosis is correct were the highest-ranked clinician-based solutions to diagnostic error. Inadequate staffing levels and excessive workload were the most highly ranked system-related and situational factors. Increased access to and availability of consultants and experts was the most highly ranked system-based solution to diagnostic error. We found a low level of self-perceived diagnostic error in an experienced group of paediatricians, at variance with the literature and warranting further clarification. The results identify perceptions of the major cognitive, system-related and situational factors contributing to diagnostic error and also key preventative strategies. What is Known: • Diagnostic errors are an important source of preventable patient harm and have an estimated incidence of 10-15%. • They are multifactorial in origin and include cognitive, system-related and situational factors. What is New: • We identified a low rate of self-perceived diagnostic error, in contrast to the existing literature. • Incomplete history and examination, inadequate staffing levels and excessive workload are cited as the principal contributing factors to diagnostic error in this study.

  17. Merotelic kinetochore attachment in oocyte meiosis II causes sister chromatids segregation errors in aged mice.

    Science.gov (United States)

    Cheng, Jin-Mei; Li, Jian; Tang, Ji-Xin; Hao, Xiao-Xia; Wang, Zhi-Peng; Sun, Tie-Cheng; Wang, Xiu-Xia; Zhang, Yan; Chen, Su-Ren; Liu, Yi-Xun

    2017-08-03

    Mammalian oocyte chromosomes undergo two meiotic divisions to generate haploid gametes. The frequency of chromosome segregation errors during meiosis I increases with age. However, little attention has been paid to the question of how aging affects sister chromatid segregation during oocyte meiosis II. More importantly, how aneuploid metaphase II (MII) oocytes from aged mice evade the spindle assembly checkpoint (SAC) mechanism to complete meiosis II and form aneuploid embryos remains unknown. Here, we report that MII oocytes from naturally aged mice exhibited substantial errors in chromosome arrangement and configuration compared with young MII oocytes. Interestingly, these errors in aged oocytes had no impact on anaphase II onset and completion, or on 2-cell formation after parthenogenetic activation. Further study found that merotelic kinetochore attachment occurred more frequently and could stabilize the kinetochore-microtubule interaction to ensure SAC inactivation and anaphase II onset in aged MII oocytes. This orientation could persist largely during anaphase II in aged oocytes, leading to severe chromosome lagging and trailing as well as delay of anaphase II completion. Therefore, merotelic kinetochore attachment in oocyte meiosis II exacerbates age-related genetic instability and is a key source of age-dependent embryo aneuploidy and dysplasia.

  18. Overview about bias in Customer Satisfaction Surveys and focus on self-selection error

    OpenAIRE

    Giovanna Nicolini; Luciana Dalla Valle

    2009-01-01

    The present paper provides an overview of the main types of surveys carried out for customer satisfaction analyses. In order to carry out these surveys it is possible to plan a census or select a sample. The higher the accuracy of the survey, the more reliable the results of the analysis. For this very reason, researchers pay special attention to surveys with bias due to non-sampling errors, in particular to self-selection errors. These phenomena are very frequent, especially in web surveys. S...

  19. The Extended Northern ROSAT Galaxy Cluster Survey (NORAS II). I. Survey Construction and First Results

    International Nuclear Information System (INIS)

    Böhringer, Hans; Chon, Gayoung; Trümper, Joachim; Retzlaff, Jörg; Meisenheimer, Klaus; Schartel, Norbert

    2017-01-01

    As the largest, clearly defined building blocks of our universe, galaxy clusters are interesting astrophysical laboratories and important probes for cosmology. X-ray surveys for galaxy clusters provide one of the best ways to characterize the population of galaxy clusters. We provide a description of the construction of the NORAS II galaxy cluster survey based on X-ray data from the northern part of the ROSAT All-Sky Survey. NORAS II extends the NORAS survey down to a flux limit of 1.8 × 10⁻¹² erg s⁻¹ cm⁻² (0.1–2.4 keV), increasing the sample size by about a factor of two. The NORAS II cluster survey now reaches the same quality and depth as its counterpart, the southern REFLEX II survey, allowing us to combine the two complementary surveys. The paper provides information on the determination of the cluster X-ray parameters, the identification process of the X-ray sources, the statistics of the survey, and the construction of the survey selection function, which we provide in numerical format. Currently NORAS II contains 860 clusters with a median redshift of z = 0.102. We provide a number of statistical functions, including the log N–log S and the X-ray luminosity function, and compare these to the results from the complementary REFLEX II survey. Using the NORAS II sample to constrain the cosmological parameters, σ₈ and Ω_m, yields results perfectly consistent with those of REFLEX II. Overall, the results show that the two hemisphere samples, NORAS II and REFLEX II, can be combined without problems into an all-sky sample, just excluding the zone of avoidance.

  20. The Extended Northern ROSAT Galaxy Cluster Survey (NORAS II). I. Survey Construction and First Results

    Energy Technology Data Exchange (ETDEWEB)

    Böhringer, Hans; Chon, Gayoung; Trümper, Joachim [Max-Planck-Institut für Extraterrestrische Physik, D-85748 Garching (Germany); Retzlaff, Jörg [ESO, D-85748 Garching (Germany); Meisenheimer, Klaus [Max-Planck-Institut für Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Schartel, Norbert [ESAC, Camino Bajo del Castillo, Villanueva de la Cañada, E-28692 Madrid (Spain)

    2017-05-01

    As the largest, clearly defined building blocks of our universe, galaxy clusters are interesting astrophysical laboratories and important probes for cosmology. X-ray surveys for galaxy clusters provide one of the best ways to characterize the population of galaxy clusters. We provide a description of the construction of the NORAS II galaxy cluster survey based on X-ray data from the northern part of the ROSAT All-Sky Survey. NORAS II extends the NORAS survey down to a flux limit of 1.8 × 10⁻¹² erg s⁻¹ cm⁻² (0.1–2.4 keV), increasing the sample size by about a factor of two. The NORAS II cluster survey now reaches the same quality and depth as its counterpart, the southern REFLEX II survey, allowing us to combine the two complementary surveys. The paper provides information on the determination of the cluster X-ray parameters, the identification process of the X-ray sources, the statistics of the survey, and the construction of the survey selection function, which we provide in numerical format. Currently NORAS II contains 860 clusters with a median redshift of z = 0.102. We provide a number of statistical functions, including the log N–log S and the X-ray luminosity function, and compare these to the results from the complementary REFLEX II survey. Using the NORAS II sample to constrain the cosmological parameters, σ₈ and Ω_m, yields results perfectly consistent with those of REFLEX II. Overall, the results show that the two hemisphere samples, NORAS II and REFLEX II, can be combined without problems into an all-sky sample, just excluding the zone of avoidance.

  1. The Data Release of the Sloan Digital Sky Survey-II Supernova Survey

    DEFF Research Database (Denmark)

    Sako, Masao; Bassett, Bruce; C. Becker, Andrew

    2014-01-01

    This paper describes the data release of the Sloan Digital Sky Survey-II (SDSS-II) Supernova Survey conducted between 2005 and 2007. Light curves, spectra, classifications, and ancillary data are presented for 10,258 variable and transient sources discovered through repeat ugriz imaging of SDSS S...

  2. Medication errors in chemotherapy preparation and administration: a survey conducted among oncology nurses in Turkey.

    Science.gov (United States)

    Ulas, Arife; Silay, Kamile; Akinci, Sema; Dede, Didem Sener; Akinci, Muhammed Bulent; Sendur, Mehmet Ali Nahit; Cubukcu, Erdem; Coskun, Hasan Senol; Degirmenci, Mustafa; Utkan, Gungor; Ozdemir, Nuriye; Isikdogan, Abdurrahman; Buyukcelik, Abdullah; Inanc, Mevlude; Bilici, Ahmet; Odabasi, Hatice; Cihan, Sener; Avci, Nilufer; Yalcin, Bulent

    2015-01-01

    Medication errors in oncology may cause severe clinical problems due to low therapeutic indices and high toxicity of chemotherapeutic agents. We aimed to investigate unintentional medication errors and underlying factors during chemotherapy preparation and administration based on a systematic survey conducted to reflect oncology nurses' experience. This study was conducted in 18 adult chemotherapy units with the voluntary participation of 206 nurses. A survey was developed by the primary investigators; medication errors (MEs) were defined as preventable errors in the prescribing, ordering, preparation or administration of medication. The survey consisted of 4 parts: demographic features of nurses; workload of chemotherapy units; errors and their estimated monthly number during chemotherapy preparation and administration; and evaluation of the possible factors responsible for MEs. The survey was conducted by face-to-face interview and data analyses were performed with descriptive statistics. Chi-square or Fisher exact tests were used for comparative analysis of categorical data. Some 83.4% of the 210 nurses reported one or more errors during chemotherapy preparation and administration. Prescribing or ordering of wrong doses by physicians (65.7%) and noncompliance with administration sequences during chemotherapy administration (50.5%) were the most common errors. The most common estimated average monthly error was not following the administration sequence of the chemotherapeutic agents (4.1 times/month, range 1-20). The most important underlying reasons for medication errors were heavy workload (49.7%) and insufficient staff numbers (36.5%). Our findings suggest that the probability of medication error is very high during chemotherapy preparation and administration, the most common errors involving prescribing and ordering. Further studies must address strategies to minimize medication error in patients receiving chemotherapy and determine sufficient protective measures…
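
    For the comparative analysis of categorical data mentioned above, a minimal scipy sketch (the 2x2 counts are invented for illustration):

```python
from scipy.stats import chi2_contingency, fisher_exact

# Invented 2x2 table: error reported vs. not, by workload group.
table = [[70, 30],   # high workload: reported / not reported
         [45, 55]]   # lower workload

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Fisher's exact test is the usual fallback when expected counts are small:
odds_ratio, p_exact = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, exact p = {p_exact:.4f}")
```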

  3. Comparing Two Inferential Approaches to Handling Measurement Error in Mixed-Mode Surveys

    Directory of Open Access Journals (Sweden)

    Buelens, Bart

    2017-06-01

    Nowadays sample survey data collection strategies combine web, telephone, face-to-face, or other modes of interviewing in a sequential fashion. Measurement bias of survey estimates of means and totals is composed of different mode-dependent measurement errors, as each data collection mode has its own associated measurement error. This article contains an appraisal of two recently proposed methods of inference in this setting. The first is a calibration adjustment to the survey weights so as to balance the survey response to a prespecified distribution of the respondents over the modes. The second is a prediction method that seeks to correct measurements towards a benchmark mode. The two methods are motivated differently but coincide in some circumstances and agree in terms of required assumptions. The methods are applied to the Labour Force Survey in the Netherlands and are found to provide almost identical estimates of the number of unemployed. Each method has its own specific merits. Both can be applied easily in practice as they do not require additional data collection beyond the regular sequential mixed-mode survey, an attractive element for national statistical institutes and other survey organisations.
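
    The calibration adjustment can be illustrated in its simplest, post-stratification form: rescale the weights within each mode so the weighted mode shares match a prespecified target. All values below (modes, weights, target mix) are invented:

```python
import numpy as np

modes = np.array(["web", "web", "phone", "f2f", "phone", "web"])
weights = np.array([1.0, 1.2, 0.8, 1.1, 0.9, 1.0])
target_share = {"web": 0.5, "phone": 0.3, "f2f": 0.2}  # prespecified mix

total = weights.sum()
calibrated = weights.copy()
for mode, share in target_share.items():
    mask = modes == mode
    # scale this mode's weights so its weighted share equals the target
    calibrated[mask] *= share * total / weights[mask].sum()

for mode in target_share:
    print(mode, round(calibrated[modes == mode].sum() / calibrated.sum(), 3))
```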

  4. The Sloan Digital Sky Survey-II Supernova Survey: Technical Summary

    Energy Technology Data Exchange (ETDEWEB)

    Frieman, Joshua A.; /Fermilab /KICP, Chicago /Chicago U., Astron. Astrophys. Ctr.; Bassett, Bruce; /Cape Town U. /South African Astron. Observ.; Becker, Andrew; /Washington; Choi, Changsu; /Seoul Natl. U.; Cinabro, David; /Wayne State U.; DeJongh, Don Frederic; /Fermilab; Depoy, Darren L.; /Ohio State U.; Doi, Mamoru; /Tokyo U.; Garnavich, Peter M.; /Notre Dame U.; Hogan, Craig J.; /Washington U., Seattle, Astron. Dept.; Holtzman, Jon; /New Mexico State U.; Im, Myungshin; /Seoul Natl. U.; Jha, Saurabh; /Stanford U., Phys. Dept.; Konishi, Kohki; /Tokyo U.; Lampeitl, Hubert; /Baltimore, Space Telescope Sci.; Marriner, John; /Fermilab; Marshall, Jennifer L.; /Ohio State U.; McGinnis; /Fermilab; Miknaitis, Gajus; /Fermilab; Nichol, Robert C.; /Portsmouth U.; Prieto, Jose Luis; /Ohio State U. /Rochester Inst. Tech. /Stanford U., Phys. Dept. /Pennsylvania U.

    2007-09-14

    The Sloan Digital Sky Survey-II (SDSS-II) has embarked on a multi-year project to identify and measure light curves for intermediate-redshift (0.05 < z < 0.35) Type Ia supernovae (SNe Ia) using repeated five-band (ugriz) imaging over an area of 300 sq. deg. The survey region is a stripe 2.5 degrees wide centered on the celestial equator in the Southern Galactic Cap that has been imaged numerous times in earlier years, enabling construction of a deep reference image for discovery of new objects. Supernova imaging observations are being acquired between 1 September and 30 November of 2005-7. During the first two seasons, each region was imaged on average every five nights. Spectroscopic follow-up observations to determine supernova type and redshift are carried out on a large number of telescopes. In its first two three-month seasons, the survey has discovered and measured light curves for 327 spectroscopically confirmed SNe Ia, 30 probable SNe Ia, 14 confirmed SNe Ib/c, 32 confirmed SNe II, plus a large number of photometrically identified SNe Ia, 94 of which have host-galaxy spectra taken so far. This paper provides an overview of the project and briefly describes the observations completed during the first two seasons of operation.

  5. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    Science.gov (United States)

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  6. A Green Bank Telescope Survey of Large Galactic H II Regions

    Science.gov (United States)

    Anderson, L. D.; Armentrout, W. P.; Luisi, Matteo; Bania, T. M.; Balser, Dana S.; Wenger, Trey V.

    2018-02-01

    As part of our ongoing H II Region Discovery Survey (HRDS), we report the Green Bank Telescope detection of 148 new angularly large Galactic H II regions in radio recombination line (RRL) emission. Our targets are located at a declination of δ > -45°, which corresponds to 266° > ℓ > -20° at b = 0°. All sources were selected from the Wide-field Infrared Survey Explorer Catalog of Galactic H II Regions, and have infrared angular diameters ≥ 260″. The Galactic distribution of these “large” H II regions is similar to that of the previously known sample of Galactic H II regions. The large H II region RRL line width and peak line intensity distributions are skewed toward lower values, compared with those of previous HRDS surveys. We discover seven sources with extremely narrow RRLs, as well as H II regions with physical sizes > 100 pc, making them some of the physically largest known H II regions in the Galaxy. This survey completes the HRDS H II region census in the Northern sky, where we have discovered 887 H II regions and more than doubled the size of the previously known census of Galactic H II regions.

  7. How Radiation Oncologists Would Disclose Errors: Results of a Survey of Radiation Oncologists and Trainees

    International Nuclear Information System (INIS)

    Evans, Suzanne B.; Yu, James B.; Chagpar, Anees

    2012-01-01

    Purpose: To analyze error disclosure attitudes of radiation oncologists and to correlate error disclosure beliefs with survey-assessed disclosure behavior. Methods and Materials: With institutional review board exemption, an anonymous online survey was devised. An email invitation was sent to radiation oncologists (American Society for Radiation Oncology [ASTRO] gold medal winners, program directors and chair persons of academic institutions, and former ASTRO lecturers) and residents. A disclosure score was calculated based on the number of full, partial, or no disclosure responses chosen to the vignette-based questions, and correlation was attempted with attitudes toward error disclosure. Results: The survey received 176 responses: 94.8% of respondents considered themselves more likely to disclose in the setting of a serious medical error; 72.7% of respondents did not feel it mattered who was responsible for the error in deciding to disclose, and 3.9% felt more likely to disclose if someone else was responsible; 38.0% of respondents felt that disclosure increased the likelihood of a lawsuit, and 32.4% felt disclosure decreased the likelihood of a lawsuit; 71.6% of respondents felt near misses should not be disclosed; 51.7% thought that minor errors should not be disclosed; 64.7% viewed disclosure as an opportunity for forgiveness from the patient; and 44.6% considered the patient's level of confidence in them to be a factor in disclosure. For a scenario that could be considered a non-harmful error, 78.9% of respondents would not contact the family. Respondents with high disclosure scores were more likely to feel that disclosure was an opportunity for forgiveness (P=.003) and to have never seen major medical errors (P=.004). Conclusions: The surveyed radiation oncologists chose to respond with full disclosure at a high rate, although ideal disclosure practices were not uniformly adhered to beyond the initial decision to disclose the occurrence of the error.

  8. A Type II Supernova Hubble diagram from the CSP-I, SDSS-II, and SNLS surveys

    OpenAIRE

    de Jaeger, T.; González-Gaitán, S.; Hamuy, M.; Galbany, L.; Anderson, J. P.; Phillips, M. M.; Stritzinger, M. D.; Carlberg, R. G.; Sullivan, M.; Gutiérrez, C. P.; Hook, I. M.; Howell, D. Andrew; Hsiao, E. Y.; Kuncarayakti, H.; Ruhlmann-Kleider, V.

    2016-01-01

    The coming era of large photometric wide-field surveys will increase the detection rate of supernovae by orders of magnitude. Such numbers will restrict spectroscopic follow-up in the vast majority of cases, and hence new methods based solely on photometric data must be developed. Here, we construct a complete Hubble diagram of Type II supernovae (SNe II) combining data from three different samples: the Carnegie Supernova Project-I, the Sloan Digital Sky Survey-II SN survey, and the Supernova Legacy Survey (SNLS).

  9. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    Directory of Open Access Journals (Sweden)

    Brady T. West

    Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data.

  10. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    Science.gov (United States)

    West, Brady T.; Sakshaug, Joseph W.; Aurelien, Guy Alain S.

    2016-01-01

    Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data. PMID:27355817
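
    The consequences of ignoring a complex design are easy to demonstrate. The sketch below simulates clustered survey data and compares the naive simple-random-sampling standard error with a design-aware one based on cluster means (the usual ultimate-cluster shortcut); all numbers are simulated:

```python
import numpy as np

rng = np.random.default_rng(42)
n_clusters, m = 50, 20                            # clusters, respondents each
cluster_effect = rng.normal(0, 1, n_clusters)     # shared within-cluster effect
y = (cluster_effect[:, None] + rng.normal(0, 1, (n_clusters, m))).ravel()

mean = y.mean()
se_naive = y.std(ddof=1) / np.sqrt(y.size)        # assumes simple random sampling

# Design-aware alternative: treat cluster means as the units of variation.
cluster_means = y.reshape(n_clusters, m).mean(axis=1)
se_cluster = cluster_means.std(ddof=1) / np.sqrt(n_clusters)

print(f"mean = {mean:.3f}, naive SE = {se_naive:.4f}, cluster-aware SE = {se_cluster:.4f}")
print(f"design effect ~ {(se_cluster / se_naive) ** 2:.1f}")  # ~10 for these settings
```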

  11. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    Energy Technology Data Exchange (ETDEWEB)

    Li, T. S.; et al.

    2016-05-27

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
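
    A toy version of the effect, assuming schematic Gaussian bandpasses and power-law SEDs rather than actual DES throughputs: compute the synthetic-photometry magnitude shift when the observed bandpass deviates from the natural one, and note that the shift differs for blue and red sources; that difference is the chromatic term.

```python
import numpy as np

lam = np.linspace(5000.0, 7000.0, 2001)            # wavelength grid, Angstrom
flux_blue = lam ** -2.0                            # schematic blue source SED
flux_red = lam ** 1.0                              # schematic red source SED

S_nat = np.exp(-0.5 * ((lam - 6000) / 400) ** 2)   # "natural" bandpass
S_obs = np.exp(-0.5 * ((lam - 6030) / 400) ** 2)   # bandpass shifted by 30 A

def mag_shift(flux):
    # photon-counting synthetic photometry: integrate F * S * lambda
    num = np.trapz(flux * S_obs * lam, lam)
    den = np.trapz(flux * S_nat * lam, lam)
    return -2.5 * np.log10(num / den)

d_blue, d_red = mag_shift(flux_blue), mag_shift(flux_red)
print(f"shift (blue) = {1000 * d_blue:.1f} mmag, shift (red) = {1000 * d_red:.1f} mmag")
print(f"color-dependent (chromatic) error = {1000 * (d_blue - d_red):.1f} mmag")
```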

  12. A Survey of Soft-Error Mitigation Techniques for Non-Volatile Memories

    Directory of Open Access Journals (Sweden)

    Sparsh Mittal

    2017-02-01

    Non-volatile memories (NVMs) offer superior density and energy characteristics compared to conventional memories; however, NVMs suffer from severe reliability issues that can easily eclipse their energy efficiency advantages. In this paper, we survey architectural techniques for improving the soft-error reliability of NVMs, specifically PCM (phase change memory) and STT-RAM (spin transfer torque RAM). We focus on soft errors such as resistance drift and write disturbance in PCM, and read disturbance and write failures in STT-RAM. By classifying the research works based on key parameters, we highlight their similarities and distinctions. We hope that this survey will underline the crucial importance of addressing NVM reliability for ensuring their system integration and will be useful for researchers, computer architects and processor designers.
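
    As a generic baseline (deliberately not one of the NVM-specific schemes the survey classifies), the flavor of soft-error mitigation is easy to demonstrate with triple modular redundancy: store three copies of each bit and read by majority vote, so any single-copy upset is masked:

```python
import random

def write_tmr(bits):
    # store each bit in three redundant copies
    return [[b, b, b] for b in bits]

def inject_upsets(cells, p, rng):
    # flip each stored copy independently with probability p
    return [[c ^ (rng.random() < p) for c in copies] for copies in cells]

def read_tmr(cells):
    # majority vote over the three copies
    return [1 if sum(copies) >= 2 else 0 for copies in cells]

rng = random.Random(7)
data = [rng.randint(0, 1) for _ in range(10_000)]
stored = inject_upsets(write_tmr(data), p=0.01, rng=rng)

raw_errors = sum(d != copies[0] for d, copies in zip(data, stored))
tmr_errors = sum(d != r for d, r in zip(data, read_tmr(stored)))
print(f"raw bit errors: {raw_errors}, after majority vote: {tmr_errors}")
# With p = 0.01, an uncorrectable bit needs >= 2 flipped copies, so the
# error rate drops from ~1e-2 to ~3e-4.
```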

  13. Active and passive compensation of APPLE II-introduced multipole errors through beam-based measurement

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Ting-Yi; Huang, Szu-Jung; Fu, Huang-Wen; Chang, Ho-Ping; Chang, Cheng-Hsiang [National Synchrotron Radiation Research Center, Hsinchu Science Park, Hsinchu 30076, Taiwan (China); Hwang, Ching-Shiang [National Synchrotron Radiation Research Center, Hsinchu Science Park, Hsinchu 30076, Taiwan (China); Department of Electrophysics, National Chiao Tung University, Hsinchu 30050, Taiwan (China)

    2016-08-01

    The effect of an APPLE II-type elliptically polarized undulator (EPU) on the beam dynamics was investigated using active and passive methods. To reduce the tune shift and improve the injection efficiency, dynamic multipole errors were compensated using L-shaped iron shims, which resulted in stable top-up operation for a minimum gap. The skew quadrupole error was compensated using a multipole corrector, which was located downstream of the EPU for minimizing betatron coupling, and it ensured the enhancement of the synchrotron radiation brightness. The investigation methods, a numerical simulation algorithm, a multipole error correction method, and the beam-based measurement results are discussed.

  14. Neutrino masses and cosmological parameters from a Euclid-like survey: Markov Chain Monte Carlo forecasts including theoretical errors

    CERN Document Server

    Audren, Benjamin; Bird, Simeon; Haehnelt, Martin G.; Viel, Matteo

    2013-01-01

    We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Markov Chain Monte Carlo (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors and assuming further conservat...
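
    A sketch of the second (uncorrelated) theoretical error described above: inflate the diagonal of the data covariance with an envelope that grows with k/k_nl and shrinks with redshift before evaluating the likelihood. The functional form and all numbers are illustrative only:

```python
import numpy as np

k = np.logspace(-2, 0, 40)               # wavenumbers, h/Mpc
p_obs = 1e4 * k ** -1.5                  # mock "observed" power spectrum
p_model = 1.02 * p_obs                   # mock model, 2% high everywhere
cov_stat = np.diag((0.03 * p_obs) ** 2)  # 3% statistical errors

def theory_envelope(k, z, k_nl=0.2, amp=0.05):
    # fractional theory error, rising with k/k_nl and falling with redshift
    return amp * (k / k_nl) / (1.0 + z)

def chi2(z):
    cov = cov_stat + np.diag((theory_envelope(k, z) * p_obs) ** 2)
    r = p_model - p_obs
    return r @ np.linalg.solve(cov, r)

for z in (0.5, 1.0, 2.0):
    print(f"z = {z}: chi2 = {chi2(z):.1f}")  # the theory error de-weights small scales
```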

  15. The relative size of measurement error and attrition error in a panel survey. Comparing them with a new multi-trait multi-method model

    NARCIS (Netherlands)

    Lugtig, Peter

    2017-01-01

    This paper proposes a method to simultaneously estimate both measurement and nonresponse errors for attitudinal and behavioural questions in a longitudinal survey. The method uses a Multi-Trait Multi-Method (MTMM) approach, which is commonly used to estimate the reliability and validity of survey…

  16. TYPE II-P SUPERNOVAE FROM THE SDSS-II SUPERNOVA SURVEY AND THE STANDARDIZED CANDLE METHOD

    International Nuclear Information System (INIS)

    D'Andrea, Chris B.; Sako, Masao; Dilday, Benjamin; Jha, Saurabh; Frieman, Joshua A.; Kessler, Richard; Holtzman, Jon; Konishi, Kohki; Yasuda, Naoki; Schneider, D. P.; Sollerman, Jesper; Wheeler, J. Craig; Cinabro, David; Nichol, Robert C.; Lampeitl, Hubert; Smith, Mathew; Atlee, David W.; Bassett, Bruce; Castander, Francisco J.; Goobar, Ariel

    2010-01-01

    We apply the Standardized Candle Method (SCM) for Type II Plateau supernovae (SNe II-P), which relates the velocity of the ejecta of a SN to its luminosity during the plateau, to 15 SNe II-P discovered over the three-season run of the Sloan Digital Sky Survey-II Supernova Survey. The redshifts of these SNe (0.027 < z < 0.144) are in a range sparsely sampled in the literature; in particular, our sample contains nearly as many SNe II-P in the Hubble flow (z > 0.01) as all of the current literature on the SCM combined. We find that the SDSS SNe have a very small intrinsic I-band dispersion (0.22 mag), which can be attributed to selection effects. When the SCM is applied to the combined SDSS-plus-literature set of SNe II-P, the dispersion increases to 0.29 mag, larger than the scatter for either set of SNe separately. We show that the standardization cannot be further improved by eliminating SNe with positive plateau decline rates, as proposed in Poznanski et al. We thoroughly examine all potential systematic effects and conclude that for the SCM to be useful for cosmology, the methods currently used to determine the Fe II velocity at day 50 must be improved, and spectral templates able to encompass the intrinsic variations of Type II-P SNe will be needed.
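
    The SCM's core relation is compact enough to sketch; α, M_0, and the mock sample below are placeholders, not the paper's fit:

```python
import numpy as np

# SCM standardization (Hamuy & Pinto-style): plateau luminosity correlates
# with day-50 ejecta velocity, M_I = -alpha * log10(v_FeII / 5000) + M_0.
alpha, M0 = 5.8, -17.5          # placeholder parameters
rng = np.random.default_rng(3)

v_fe = rng.uniform(2500, 6000, 15)      # Fe II velocities at day 50, km/s
mu_true = rng.uniform(35.0, 38.5, 15)   # true distance moduli
m_I = mu_true + M0 - alpha * np.log10(v_fe / 5000)  # plateau apparent mags
m_I += rng.normal(0, 0.25, 15)          # assumed intrinsic scatter

# Invert the relation to recover a distance modulus per SN:
mu_scm = m_I - M0 + alpha * np.log10(v_fe / 5000)
print(f"Hubble-diagram scatter: {np.std(mu_scm - mu_true, ddof=1):.2f} mag")
```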

  17. ASSESSMENT OF SYSTEMATIC CHROMATIC ERRORS THAT IMPACT SUB-1% PHOTOMETRIC PRECISION IN LARGE-AREA SKY SURVEYS

    Energy Technology Data Exchange (ETDEWEB)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; Boada, S.; Mondrik, N.; Nagasawa, D. [George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, and Department of Physics and Astronomy, Texas A&M University, College Station, TX 77843 (United States); Tucker, D.; Annis, J.; Finley, D. A.; Kent, S.; Lin, H.; Marriner, J.; Wester, W. [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Kessler, R.; Scolnic, D. [Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637 (United States); Bernstein, G. M. [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104 (United States); Burke, D. L.; Rykoff, E. S. [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); James, D. J.; Walker, A. R. [Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, Casilla 603, La Serena (Chile); Collaboration: DES Collaboration; and others

    2016-06-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey’s stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars.
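
    The core of an SCE calculation can be illustrated with synthetic photometry: the magnitude offset between the natural throughput and a perturbed one depends on the source SED, so a grey zeropoint cannot absorb it. All curves below are toy functions, purely for illustration.

```python
import numpy as np

lam = np.linspace(4000.0, 6000.0, 500)                  # wavelength, Angstrom
S_nat = np.exp(-0.5 * ((lam - 4800.0) / 300.0) ** 2)    # "natural" bandpass (toy)
S_obs = S_nat * (1.0 + 2e-5 * (lam - 4800.0))           # tilted actual throughput

def synth_mag(F, S):
    # Photon-counting broadband magnitude: -2.5 log10 of integral F(l) S(l) l dl
    return -2.5 * np.log10(np.trapz(F * S * lam, lam))

for name, slope in [("blue star", -2.0), ("red star", +2.0)]:
    F = (lam / 4800.0) ** slope                         # toy SED
    dm = synth_mag(F, S_obs) - synth_mag(F, S_nat)
    print(f"{name}: chromatic offset = {1000 * dm:.1f} mmag")
```

    Because the two toy SEDs pick up different offsets, no single grey zeropoint removes the error, which is why SED-dependent corrections from auxiliary throughput measurements are needed.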

  18. The Use of PCs, Smartphones, and Tablets in a Probability-Based Panel Survey : Effects on Survey Measurement Error

    NARCIS (Netherlands)

    Lugtig, Peter; Toepoel, Vera

    2016-01-01

    Respondents in an Internet panel survey can often choose which device they use to complete questionnaires: a traditional PC, laptop, tablet computer, or a smartphone. Because all these devices have different screen sizes and modes of data entry, measurement errors may differ between devices. Using

  19. Are Divorce Studies Trustworthy? The Effects of Survey Nonresponse and Response Errors

    Science.gov (United States)

    Mitchell, Colter

    2010-01-01

    Researchers rely on relationship data to measure the multifaceted nature of families. This article speaks to relationship data quality by examining the ramifications of different types of error on divorce estimates, models predicting divorce behavior, and models employing divorce as a predictor. Comparing matched survey and divorce certificate…

  20. Estimating Classification Errors under Edit Restrictions in Composite Survey-Register Data Using Multiple Imputation Latent Class Modelling (MILC)

    NARCIS (Netherlands)

    Boeschoten, Laura; Oberski, Daniel; De Waal, Ton

    2017-01-01

    Both registers and surveys can contain classification errors. These errors can be estimated by making use of a composite data set. We propose a new method based on latent class modelling to estimate the number of classification errors across several sources while taking into account impossible combinations with scores on other variables.

  1. Factors controlling volume errors through 2D gully erosion assessment: guidelines for optimal survey design

    Science.gov (United States)

    Castillo, Carlos; Pérez, Rafael

    2017-04-01

    The assessment of gully erosion volumes is essential for the quantification of soil losses derived from this relevant degradation process. Traditionally, 2D and 3D approaches have been applied for this purpose (Casalí et al., 2006). Although innovative 3D approaches have recently been proposed for gully volume quantification, a renewed interest can be found in the literature regarding the useful information that cross-section analysis still provides in gully erosion research. Moreover, methods based on 2D approaches can be the most cost-effective option in many situations, such as preliminary studies with low accuracy requirements or surveys under time or budget constraints. The main aim of this work is to examine the key factors controlling volume error variability in 2D gully assessment by means of a stochastic experiment involving a Monte Carlo analysis over synthetic gully profiles, in order to 1) contribute to a better understanding of the drivers and magnitude of the uncertainty of 2D gully erosion surveys and 2) provide guidelines for optimal survey designs. Owing to the stochastic properties of error generation in 2D volume assessment, a statistical approach was followed to generate a large and significant set of gully reach configurations and to evaluate quantitatively the influence of the main factors controlling the uncertainty of the volume assessment. For this purpose, a simulation algorithm in Matlab® code was written, involving the following stages (sketched below):
    - Generation of synthetic gully area profiles with different degrees of complexity (characterized by the cross-section variability)
    - Simulation of field measurements characterised by a survey intensity and the precision of the measurement method
    - Quantification of the volume error uncertainty as a function of the key factors
    In this communication we will present the relationships between volume error and the studied factors and propose guidelines for 2D field surveys based on the minimal survey
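
    A compact sketch of the kind of Monte Carlo experiment described, in Python rather than the authors' Matlab, with made-up distributions and parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

def volume_error(n_sections=200, reach_len=100.0, cv=0.4,
                 survey_step=10.0, meas_sigma=0.05):
    # "True" gully: densely sampled cross-section areas along the reach
    x = np.linspace(0.0, reach_len, n_sections)
    area = (2.0 * (1.0 + cv * rng.standard_normal(n_sections))).clip(0.1)
    true_vol = np.trapz(area, x)
    # Simulated field survey: sparse stations plus measurement noise
    xs = np.arange(0.0, reach_len + 1e-9, survey_step)
    a_meas = np.interp(xs, x, area) * (1.0 + meas_sigma * rng.standard_normal(xs.size))
    return (np.trapz(a_meas, xs) - true_vol) / true_vol

errs = [volume_error() for _ in range(2000)]
print(f"relative volume error: mean {np.mean(errs):+.3f}, sd {np.std(errs):.3f}")
```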

  2. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    Science.gov (United States)

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-05-01

    Population divergence impacts the degree of population stratification in Genome-Wide Association Studies. We aim to: (i) investigate the type-I error rate as a function of population divergence (F_ST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. The type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of F_ST between the ancestral European and African populations. The type-II error rate was investigated for a SNP characterized by a high value of F_ST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations, but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in the type-I error rate.
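
    The mechanism can be illustrated with a toy simulation: when allele frequency tracks ancestry (a larger frequency gap standing in for larger F_ST) and the phenotype also depends on ancestry, a naive association test on a null SNP rejects far more often than the nominal level. All parameters are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def type1_rate(freq_gap=0.3, n=2000, reps=1000, alpha=0.05):
    hits = 0
    for _ in range(reps):
        admix = rng.uniform(0.0, 1.0, n)            # individual admixture proportion
        p = 0.3 + freq_gap * admix                  # allele frequency tracks ancestry
        g = rng.binomial(2, p)                      # genotypes at a null SNP
        y = admix + rng.standard_normal(n)          # trait depends on ancestry only
        hits += stats.pearsonr(g, y)[1] < alpha     # naive test, no MDS correction
    return hits / reps

print(type1_rate(freq_gap=0.0), type1_rate(freq_gap=0.3))   # nominal vs inflated
```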

  3. First-Year Spectroscopy for the SDSS-II Supernova Survey

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Chen; Romani, Roger W.; Sako, Masao; Marriner, John; Bassett, Bruce; Becker, Andrew; Choi, Changsu; Cinabro, David; DeJongh, Fritz; Depoy, Darren L.; Dilday, Ben; Doi, Mamoru; Frieman, Joshua A.; Garnavich, Peter M.; Hogan, Craig J.; Holtzman, Jon; Im, Myungshin; Jha, Saurabh; Kessler, Richard; Konishi, Kohki; Lampeitl, Hubert

    2008-03-25

    This paper presents spectroscopy of supernovae discovered in the first season of the Sloan Digital Sky Survey-II Supernova Survey. This program searches for and measures multi-band light curves of supernovae in the redshift range z = 0.05-0.4, complementing existing surveys at lower and higher redshifts. Our goal is to better characterize the supernova population, with a particular focus on SNe Ia, improving their utility as cosmological distance indicators and as probes of dark energy. Our supernova spectroscopy program features rapid-response observations using telescopes of a range of apertures, and provides confirmation of the supernova and host-galaxy types as well as precise redshifts. We describe here the target identification and prioritization, data reduction, redshift measurement, and classification of 129 SNe Ia, 16 spectroscopically probable SNe Ia, 7 SNe Ib/c, and 11 SNe II from the first season. We also describe our efforts to measure and remove the substantial host galaxy contamination existing in the majority of our SN spectra.

  4. Quality assurance and human error effects on the structural safety

    International Nuclear Information System (INIS)

    Bertero, R.; Lopez, R.; Sarrate, M.

    1991-01-01

    Statistical surveys show that the frequency of failure of structures is much larger than that expected by the codes. Evidence exists that human error (especially during the design process) is the main cause of the difference between the failure probability admitted by codes and reality. In this paper, the attenuation of human error effects using the tools of quality assurance is analyzed. In particular, the importance of independent design review is highlighted, and different approaches are discussed. The experience from the Atucha II project, as well as US and German practice on independent design review, is summarized. (Author)

  5. A Type II Supernova Hubble Diagram from the CSP-I, SDSS-II, and SNLS Surveys

    Science.gov (United States)

    de Jaeger, T.; González-Gaitán, S.; Hamuy, M.; Galbany, L.; Anderson, J. P.; Phillips, M. M.; Stritzinger, M. D.; Carlberg, R. G.; Sullivan, M.; Gutiérrez, C. P.; Hook, I. M.; Howell, D. Andrew; Hsiao, E. Y.; Kuncarayakti, H.; Ruhlmann-Kleider, V.; Folatelli, G.; Pritchet, C.; Basa, S.

    2017-02-01

    The coming era of large photometric wide-field surveys will increase the detection rate of supernovae by orders of magnitude. Such numbers will restrict spectroscopic follow-up in the vast majority of cases, and hence new methods based solely on photometric data must be developed. Here, we construct a complete Hubble diagram of Type II supernovae (SNe II) combining data from three different samples: the Carnegie Supernova Project-I, the Sloan Digital Sky Survey II SN, and the Supernova Legacy Survey. Applying the Photometric Color Method (PCM) to 73 SNe II with a redshift range of 0.01-0.5 and with no spectral information, we derive an intrinsic dispersion of 0.35 mag. A comparison with the Standard Candle Method (SCM) using 61 SNe II is also performed, and an intrinsic dispersion in the Hubble diagram of 0.27 mag, i.e., 13% in distance uncertainties, is derived. Due to the lack of good statistics at higher redshifts for both methods, only weak constraints on the cosmological parameters are obtained. However, assuming a flat universe and using the PCM, we derive the universe’s matter density, Ω_m = 0.32 (+0.30, −0.21), providing new independent evidence for dark energy at the level of two sigma. This paper includes data gathered with the 6.5 m Magellan Telescopes, with the du Pont and Swope telescopes located at Las Campanas Observatory, Chile; and the Gemini Observatory, Cerro Pachon, Chile (Gemini Program N-2005A-Q-11, GN-2005B-Q-7, GN-2006A-Q-7, GS-2005A-Q-11, GS-2005B-Q-6, and GS-2008B-Q-56). Based on observations collected at the European Organization for Astronomical Research in the Southern Hemisphere, Chile (ESO Programmes 076.A-0156, 078.D-0048, 080.A-0516, and 082.A-0526).

  6. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently, Chakraborty proposed a simple technique called the packet combining (PC) scheme, in which errors are corrected at the receiver from the erroneous copies. The PC scheme fails (i) when the bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both cases have recently been addressed by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)
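
    A hedged sketch of the basic packet-combining idea referenced above: XOR-ing two erroneous copies flags candidate error positions, which are then searched against an integrity check (CRC32 here, as a stand-in for whatever check the protocol actually uses):

```python
import zlib
from itertools import product

def packet_combine(copy1, copy2, crc_ref):
    # Bits where the two erroneous copies disagree are the error candidates.
    candidates = [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]
    for flips in product([0, 1], repeat=len(candidates)):
        trial = copy1[:]
        for pos, f in zip(candidates, flips):
            trial[pos] ^= f
        if zlib.crc32(bytes(trial)) == crc_ref:
            return trial
    return None      # fails when both copies are corrupted at the same positions

sent = [1, 0, 1, 1, 0, 0, 1, 0]
c1, c2 = sent[:], sent[:]
c1[2] ^= 1          # a different single-bit error in each copy
c2[5] ^= 1
print(packet_combine(c1, c2, zlib.crc32(bytes(sent))) == sent)   # True
```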

  7. Improved characterisation and modelling of measurement errors in electrical resistivity tomography (ERT) surveys

    Science.gov (United States)

    Tso, Chak-Hau Michael; Kuras, Oliver; Wilkinson, Paul B.; Uhlemann, Sebastian; Chambers, Jonathan E.; Meldrum, Philip I.; Graham, James; Sherlock, Emma F.; Binley, Andrew

    2017-11-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe or assume a statistical model of data errors before inversion. Wrongly prescribed errors can lead to over- or under-fitting of data; however, the derivation of models of data errors is often neglected. With the heightening interest in uncertainty estimation within hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide improved image appraisal. Here we focus on the role of measurement errors in electrical resistivity tomography (ERT). We have analysed two time-lapse ERT datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe; the other is a two-year-long cross-borehole survey at a UK nuclear site with 246 sets of over 50,000 measurements. Our study includes the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and correlation coefficient analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used, i.e. errors may not be uncorrelated as often assumed. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model explains the observed measurement errors better and shows superior inversion results and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the electrodes used to make the measurements. The new model can be readily applied to the diagonal data weighting matrix widely used in common inversion methods, as well as to the data covariance matrix in a Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
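
    A minimal sketch of the grouped linear error model on synthetic data; the grouping rule, noise levels, and group size here are assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic reciprocal-error data with an electrode-dependent component.
n = 1000
electrode = rng.integers(0, 32, n)                 # a grouping electrode per datum
R = 10 ** rng.uniform(-1.0, 2.0, n)                # transfer resistance, ohm
err = np.abs(0.002 * (1.0 + electrode / 16.0) * R
             + 0.01 * rng.standard_normal(n))      # |error| in ohm

for g0 in range(0, 32, 8):                         # fit err ~ a + b*R per group
    m = (electrode >= g0) & (electrode < g0 + 8)
    b, a = np.polyfit(R[m], err[m], 1)
    print(f"electrodes {g0:2d}-{g0 + 7:2d}: a = {a:.4f}, b = {b:.4f}")
```

    The fitted per-group coefficients would then populate the diagonal data weighting matrix (or the data covariance matrix in a Bayesian framework) mentioned in the abstract.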

  8. The Core Collapse Supernova Rate from the SDSS-II Supernova Survey

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Matt; Cinabro, David; Dilday, Ben; Galbany, Lluis; Gupta, Ravi R.; Kessler, R.; Marriner, John; Nichol, Robert C.; Richmond, Michael; Schneider, Donald P.; Sollerman, Jesper

    2014-08-26

    We use the Sloan Digital Sky Survey II Supernova Survey (SDSS-II SNS) data to measure the volumetric core collapse supernova (CCSN) rate in the redshift range 0.03 < z < 0.09. Using a sample of 89 CCSNe, we find a volume-averaged rate of (1.06 ± 0.19) × 10^-4 (h/0.7)^3 yr^-1 Mpc^-3 at a mean redshift of 0.072 ± 0.009. We measure the CCSN luminosity function from the data and consider the implications for the star formation history.
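
    The quoted rate is essentially a count divided by an effective volume-time product; a back-of-envelope check with an assumed denominator (the survey's published efficiency-corrected value is not reproduced here):

```python
N_ccsn = 89          # detected core-collapse SNe in 0.03 < z < 0.09
VT_eff = 8.4e5       # assumed effective volume x time, Mpc^3 yr (illustrative)
print(f"rate ~ {N_ccsn / VT_eff:.2e} per yr per Mpc^3")   # ~1.1e-4, same order
```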

  9. Identifying Lattice, Orbit, And BPM Errors in PEP-II

    International Nuclear Information System (INIS)

    Decker, F.-J.; SLAC

    2005-01-01

    The PEP-II B-Factory is delivering peak luminosities of up to 9.2 × 10^33 cm^-2 s^-1. This is very impressive, especially considering our poor understanding of the lattice, the absolute orbit, and the beam position monitor (BPM) system. A few simple MATLAB programs were written to extract lattice information, like the betatron functions in a coupled machine (four altogether) and the two dispersions, from the current machine and compare it with the design. Big orbit deviations in the Low Energy Ring (LER) could be explained not by bad BPMs (only 3), but by many strong correctors (one corrector per four BPMs on average). Additionally, these programs helped to uncover a sign error in the third-order correction of the BPM system. Further analysis of the current information of the BPMs (sum of all buttons) indicates that there might still be more problematic BPMs.

  10. EFFECT OF MEASUREMENT ERRORS ON PREDICTED COSMOLOGICAL CONSTRAINTS FROM SHEAR PEAK STATISTICS WITH LARGE SYNOPTIC SURVEY TELESCOPE

    Energy Technology Data Exchange (ETDEWEB)

    Bard, D.; Chang, C.; Kahn, S. M.; Gilmore, K.; Marshall, S. [KIPAC, Stanford University, 452 Lomita Mall, Stanford, CA 94309 (United States); Kratochvil, J. M.; Huffenberger, K. M. [Department of Physics, University of Miami, Coral Gables, FL 33124 (United States); May, M. [Physics Department, Brookhaven National Laboratory, Upton, NY 11973 (United States); AlSayyad, Y.; Connolly, A.; Gibson, R. R.; Jones, L.; Krughoff, S. [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Ahmad, Z.; Bankert, J.; Grace, E.; Hannel, M.; Lorenz, S. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Haiman, Z.; Jernigan, J. G., E-mail: djbard@slac.stanford.edu [Department of Astronomy and Astrophysics, Columbia University, New York, NY 10027 (United States); and others

    2013-09-01

    We study the effect of galaxy shape measurement errors on predicted cosmological constraints from the statistics of shear peak counts with the Large Synoptic Survey Telescope (LSST). We use the LSST Image Simulator in combination with cosmological N-body simulations to model realistic shear maps for different cosmological models. We include both galaxy shape noise and, for the first time, measurement errors on galaxy shapes. We find that the measurement errors considered have relatively little impact on the constraining power of shear peak counts for LSST.

  11. Type-II generalized family-wise error rate formulas with application to sample size determination.

    Science.gov (United States)

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
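
    The r-power itself is easy to estimate by Monte Carlo, which is also how one might sanity-check the closed-form formulas; a sketch with an assumed single-step Bonferroni rule and illustrative effect sizes (not the package's actual procedures):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def r_power(n=100, m=5, r=3, effect=0.4, alpha=0.05, reps=4000):
    success = 0
    for _ in range(reps):
        data = effect + rng.standard_normal((m, n))        # all m nulls are false
        t = data.mean(axis=1) / (data.std(axis=1, ddof=1) / np.sqrt(n))
        p = 2.0 * stats.t.sf(np.abs(t), df=n - 1)          # two-sided one-sample tests
        success += (p < alpha / m).sum() >= r              # single-step Bonferroni
    return success / reps

power = r_power()
print(f"r-power ~ {power:.3f}; type-II r-gFWER ~ {1 - power:.3f}")
```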

  12. THE PITTSBURGH SLOAN DIGITAL SKY SURVEY Mg II QUASAR ABSORPTION-LINE SURVEY CATALOG

    International Nuclear Information System (INIS)

    Quider, Anna M.; Nestor, Daniel B.; Turnshek, David A.; Rao, Sandhya M.; Weyant, Anja N.; Monier, Eric M.; Busche, Joseph R.

    2011-01-01

    We present a catalog of intervening Mg II quasar absorption-line systems in the redshift interval 0.36 ≤ z ≤ 2.28. The catalog was built from Sloan Digital Sky Survey Data Release Four (SDSS DR4) quasar spectra. Currently, the catalog contains ~17,000 measured Mg II doublets. We also present data on the ~44,600 quasar spectra which were searched to construct the catalog, including redshift and magnitude information, continuum-normalized spectra, and corresponding arrays of redshift-dependent minimum rest equivalent widths detectable at our confidence threshold. The catalog is available online. A careful second search of 500 random spectra indicated that, for every 100 spectra searched, approximately one significant Mg II system was accidentally rejected. Current plans to expand the catalog beyond DR4 quasars are discussed. Many Mg II absorbers are known to be associated with galaxies. Therefore, the combination of large size and well understood statistics makes this catalog ideal for precision studies of the low-ionization and neutral gas regions associated with galaxies at low to moderate redshift. An analysis of the statistics of Mg II absorbers using this catalog will be presented in a subsequent paper.

  13. Association between presenilin-1 polymorphism and maternal meiosis II errors in Down syndrome.

    Science.gov (United States)

    Petersen, M B; Karadima, G; Samaritaki, M; Avramopoulos, D; Vassilopoulos, D; Mikkelsen, M

    2000-08-28

    Several lines of evidence suggest a shared genetic susceptibility to Down syndrome (DS) and Alzheimer disease (AD). Rare forms of autosomal-dominant AD are caused by mutations in the APP and presenilin genes (PS-1 and PS-2). The presenilin proteins have been localized to the nuclear membrane, kinetochores, and centrosomes, suggesting a function in chromosome segregation. A genetic association between a polymorphism in intron 8 of the PS-1 gene and AD has been described in some series, and an increased risk of AD has been reported in mothers of DS probands. We therefore studied 168 probands with free trisomy 21 of known parental and meiotic origin and their parents from a population-based material, by analyzing the intron 8 polymorphism in the PS-1 gene. An increased frequency of allele 1 in mothers with a meiosis II error (70.8%) was found compared with mothers with a meiosis I error (52.7%, P < 0.01), with an excess of the 11 genotype in the meiosis II mothers. The frequency of allele 1 in mothers carrying apolipoprotein E (APOE) epsilon4 allele (68.0%) was higher than in mothers without epsilon4 (52.2%, P < 0.01). We hypothesize that the PS-1 intronic polymorphism might be involved in chromosomal nondisjunction through an influence on the expression level of PS-1 or due to linkage disequilibrium with biologically relevant polymorphisms in or outside the PS-1 gene. Copyright 2000 Wiley-Liss, Inc.

  14. On the errors in measurements of Ohio 5 radio sources in the light of the GB survey

    International Nuclear Information System (INIS)

    Machalski, J.

    1975-01-01

    Positions and flux densities of 405 OSU 5 radio sources surveyed at 1415 MHz down to 0.18 f.u. (Brundage et al. 1971) have been examined in the light of data from the GB survey made at 1400 MHz (Maslowski 1972). An identification analysis has shown that about 56% of OSU sources are single, 18% are confused, 20% are unresolved, and 6%, having no counterparts in the GB survey down to 0.09 f.u., seem to be spurious. The single OSU sources are strongly affected by the underestimation of their flux densities due to the base-line procedure in their vicinity; an average systematic underestimation of about 0.03 f.u. has been found. The second systematic error is due to the presence of a significant number of confused sources with strongly overestimated flux densities. The confusion effect gives a characteristic non-Gaussian tail in the distribution of differences between observed and real flux densities, and it has a strong influence on source counts from the OSU 5 survey. Differential number counts relative to those from the GB survey agree within the statistical uncertainty down to about 0.40 f.u., which is approximately 4δ (δ being the average rms flux density error in the OSU 5 survey). Below 0.40 f.u. the number of sources missing due to the confusion effect is significantly greater than the number overestimated due to the noise error; thus, this part of the OSU 5 source counts cannot be treated seriously, even in the statistical sense. An analysis of the approximate reliability and completeness of the OSU 5 survey shows that, although the total reliability estimated by the authors of the survey is good, the completeness is significantly lower due to the underestimation of the magnitude of the confusion effect. In fact, the OSU 5 completeness is 67% at 0.18 f.u. and 79% at 0.25 f.u. (author)

  15. Quantifying type I and type II errors in decision-making under uncertainty : The case of GM crops

    NARCIS (Netherlands)

    Ansink, Erik; Wesseler, Justus

    2009-01-01

    In a recent paper, Hennessy and Moschini (American Journal of Agricultural Economics 88(2): 308-323, 2006) analyse the interactions between scientific uncertainty and costly regulatory actions. We use their model to analyse the costs of making type I and type II errors, in the context of the

  16. Quantifying type I and type II errors in decision-making under uncertainty: the case of GM crops

    NARCIS (Netherlands)

    Ansink, E.J.H.; Wesseler, J.H.H.

    2009-01-01

    In a recent paper, Hennessy and Moschini (American Journal of Agricultural Economics 88(2): 308-323, 2006) analyse the interactions between scientific uncertainty and costly regulatory actions. We use their model to analyse the costs of making type I and type II errors, in the context of the

  17. NEWLY IDENTIFIED EXTENDED GREEN OBJECTS (EGOs) FROM THE SPITZER GLIMPSE II SURVEY. II. MOLECULAR CLOUD ENVIRONMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Chen Xi; Gan Conggui; Shen Zhiqiang [Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Chinese Academy of Sciences, Shanghai 200030 (China); Ellingsen, Simon P.; Titmarsh, Anita [School of Mathematics and Physics, University of Tasmania, Hobart, Tasmania (Australia); He Jinhua, E-mail: chenxi@shao.ac.cn [Key Laboratory for the Structure and Evolution of Celestial Objects, Yunnan Astronomical Observatory/National Astronomical Observatory, Chinese Academy of Sciences, P.O. Box 110, Kunming 650011, Yunnan Province (China)

    2013-06-01

    We have undertaken a survey of molecular lines in the 3 mm band toward 57 young stellar objects using the Australia Telescope National Facility Mopra 22 m radio telescope. The target sources were young stellar objects with active outflows (extended green objects (EGOs)) newly identified from the GLIMPSE II survey. We observe a high detection rate (50%) of broad line wing emission in the HNC and CS thermal lines, which combined with the high detection rate of class I methanol masers toward these sources (reported in Paper I) further demonstrates that the GLIMPSE II EGOs are associated with outflows. The physical and kinematic characteristics derived from the 3 mm molecular lines for these newly identified EGOs are consistent with these sources being massive young stellar objects with ongoing outflow activity and rapid accretion. These findings support our previous investigations of the mid-infrared properties of these sources and their association with other star formation tracers (e.g., infrared dark clouds, methanol masers and millimeter dust sources) presented in Paper I. The high detection rate (64%) of the hot core tracer CH3CN reveals that the majority of these new EGOs have evolved to the hot molecular core stage. Comparison of the observed molecular column densities with predictions from hot core chemistry models reveals that the newly identified EGOs from the GLIMPSE II survey are members of the youngest hot core population, with an evolutionary time scale of the order of 10^3 yr.

  18. Can I just check...? Effects of edit check questions on measurement error and survey estimates

    NARCIS (Netherlands)

    Lugtig, Peter; Jäckle, Annette

    2014-01-01

    Household income is difficult to measure, since it requires the collection of information about all potential income sources for each member of a household. We assess the effects of two types of edit check questions on measurement error and survey estimates: within-wave edit checks use responses to

  19. Mg II-Absorbing Galaxies in the UltraVISTA Survey

    Science.gov (United States)

    Stroupe, Darren; Lundgren, Britt

    2018-01-01

    Light that is emitted from distant quasars can become partially absorbed by intervening gaseous structures, including galaxies, in its path toward Earth, revealing information about the chemical content, degree of ionization, organization, and evolution of these structures through time. In this project, quasar spectra are used to probe the halos of foreground galaxies at a mean redshift of z=1.1 in the COSMOS Field. Mg II absorption lines in Sloan Digital Sky Survey quasar spectra are paired with galaxies in the UltraVISTA catalog at an impact parameter less than 200 kpc. A sample of 77 strong Mg II absorbers with rest-frame equivalent widths ≥ 0.3 Å and redshifts 0.34 < z < 2.21 is investigated to find equivalent width ratios of the Mg II, C IV, and Fe II absorption lines, and their relation to the impact parameter and to the star formation rates, stellar masses, environments, and redshifts of their host galaxies.

  20. Nonresponse and Underreporting Errors Increase over the Data Collection Week Based on Paradata from the National Household Food Acquisition and Purchase Survey.

    Science.gov (United States)

    Hu, Mengyao; Gremel, Garrett W; Kirlin, John A; West, Brady T

    2017-05-01

    Background: Food acquisition diary surveys are important for studying food expenditures, factors affecting food acquisition decisions, and relations between these decisions and selected measures of health (e.g., body mass index, self-reported health). However, to our knowledge, no studies have evaluated the errors associated with these diary surveys, which can bias survey estimates and research findings. The use of paradata, which has been largely ignored in previous literature on diary surveys, could be useful for studying errors in these surveys. Objective: We used paradata to assess survey errors in the National Household Food Acquisition and Purchase Survey (FoodAPS). Methods: To evaluate the patterns of nonresponse over the diary period, we fit a multinomial logistic regression model to data from this 1-wk diary survey. We also assessed factors influencing respondents' probability of reporting food acquisition events during the diary process by using logistic regression models. Finally, with the use of an ordinal regression model, we studied factors influencing respondents' perceived ease of participation in the survey. Results: As the diary period progressed, nonresponse increased, especially for those starting the survey on Friday (where the odds of a refusal increased by 12% with each fielding day). The odds of reporting food acquisition events also decreased by 6% with each additional fielding day. Similarly, the odds of reporting ≥1 food-away-from-home event (i.e., meals, snacks, and drinks obtained outside the home) decreased significantly over the fielding period. Male respondents, larger households, households that eat together less often, and households with frequent guests reported a significantly more difficult time getting household members to participate, as did non-English-speaking households and households currently experiencing difficult financial conditions. Conclusions: Nonresponse and underreporting of food acquisition events tended to

  1. The Impact of Repeated Lying on Survey Results

    Directory of Open Access Journals (Sweden)

    Thomas Chesney

    2013-01-01

    We study the effects on results of participants completing a survey more than once, a phenomenon known as farming. Using data from a real social science study as a baseline, three strategies that participants might use to farm are studied by Monte Carlo simulation. Findings show that farming influences survey results and can cause both Type I (false positive) and Type II (false negative) errors in statistical hypothesis testing, in unpredictable ways.
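
    One farming strategy is easy to reproduce in miniature: duplicating a single respondent's answers breaks the independence assumed by a two-sample test and shifts its false positive rate away from the nominal level. A toy illustration (not the authors' actual simulation design):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def false_positive_rate(n=50, k_dup=0, reps=4000, alpha=0.05):
    hits = 0
    for _ in range(reps):
        a = rng.standard_normal(n)                       # honest group
        b = rng.standard_normal(n)
        b = np.concatenate([b, np.repeat(b[0], k_dup)])  # one farmer repeats k times
        hits += stats.ttest_ind(a, b).pvalue < alpha
    return hits / reps

print(false_positive_rate(k_dup=0), false_positive_rate(k_dup=20))
```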

  2. Beam induced vacuum measurement error in BEPC II

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    When the beam in the BEPCII storage ring aborts suddenly, the measured pressure of the cold cathode gauges and ion pumps drops suddenly and then decreases gradually to the base pressure. This shows that there is a beam-induced positive error in the pressure measurement during beam operation, the error being the difference between the measured and real pressures. Right after the beam aborts, the error disappears immediately and the measured pressure then equals the real pressure. For one gauge, we can fit a non-linear pressure-time curve to its measured pressure data starting 20 seconds after a sudden beam abortion. From this negative-exponential pumping-down curve, the real pressure at the moment the beam starts aborting is extrapolated. With the data of several sudden beam abortions we have obtained the errors of that gauge at different beam currents and found that the error is directly proportional to the beam current, as expected. A linear data fit then gives the proportionality coefficient of the equation we derived to evaluate the real pressure at any time while the beam, with varying currents, is on.
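
    A sketch of the described fitting procedure on synthetic data (pressures in arbitrary units; the decay form is the negative exponential named in the abstract, the parameter values are made up):

```python
import numpy as np
from scipy.optimize import curve_fit

def pump_down(t, p_base, dp, tau):
    return p_base + dp * np.exp(-t / tau)        # negative exponential decay

# Synthetic post-abort data, skipping the first 20 s as in the procedure above.
t = np.linspace(20.0, 300.0, 60)
rng = np.random.default_rng(5)
meas = pump_down(t, 1.0, 5.0, 80.0) * (1.0 + 0.02 * rng.standard_normal(t.size))

popt, _ = curve_fit(pump_down, t, meas, p0=(1.0, 1.0, 50.0))
print(f"extrapolated real pressure at the abort: {pump_down(0.0, *popt):.2f}")
```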

  3. Characteristics and verification of a car-borne survey system for dose rates in air: KURAMA-II

    International Nuclear Information System (INIS)

    Tsuda, S.; Yoshida, T.; Tsutsumi, M.; Saito, K.

    2015-01-01

    The car-borne survey system KURAMA-II, developed by the Kyoto University Research Reactor Institute, has been used for air dose rate mapping after the Fukushima Dai-ichi Nuclear Power Plant accident. KURAMA-II consists of a CsI(Tl) scintillation detector, a GPS device, and a control device for data processing. The dose rates monitored by KURAMA-II are based on the G(E) function (spectrum-dose conversion operator), which can precisely calculate dose rates from the measured pulse-height distribution even if the energy spectrum changes significantly. The characteristics of KURAMA-II have been investigated with particular consideration to the reliability of the calculated G(E) function, dose rate dependence, statistical fluctuation, angular dependence, and energy dependence. The results indicate that 100 units of KURAMA-II systems have acceptable quality for mass monitoring of dose rates in the environment. Highlights: KURAMA-II is a car-borne survey system developed by Kyoto University; a spectrum-dose conversion operator for KURAMA-II was calculated and examined; the radiation characteristics of KURAMA-II, such as energy dependence, were examined; KURAMA-II has acceptable quality for environmental mass dose rate monitoring.
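
    The G(E) method itself reduces to a weighted sum over the measured pulse-height distribution; a minimal sketch with a placeholder G(E) shape, not KURAMA-II's calibrated operator:

```python
import numpy as np

E = np.linspace(50.0, 3000.0, 512)        # channel energies, keV
counts = 1.0e3 * np.exp(-E / 400.0)       # toy pulse-height distribution, counts/s
G = 1.0e-6 * (E / 662.0) ** 1.1           # placeholder G(E), uSv/h per (count/s)
print(f"dose rate ~ {np.sum(counts * G):.3f} uSv/h")
```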

  4. Technical errors in complete mouth radiographic survey according to radiographic techniques and film holding methods

    International Nuclear Information System (INIS)

    Choi, Karp Sik; Byun, Chong Soo; Choi, Soon Chul

    1986-01-01

    The purpose of this study was to investigate the numbers and causes of retakes in 300 complete mouth radiographic surveys made by 75 senior dental students. According to radiographic technique and film holding method, they were divided into 4 groups: Group I: bisecting-angle technique with the patient's fingers; Group II: bisecting-angle technique with the Rinn Snap-A-Ray device; Group III: bisecting-angle technique with the Rinn XCP instrument (short cone); Group IV: bisecting-angle technique with the Rinn XCP instrument (long cone). For each group, the most frequent cause of retakes, the most frequent tooth area requiring retakes, and the average number of retakes per complete mouth survey were evaluated. The obtained results were as follows: Group I: incorrect film placement (47.8%), upper canine region, 0.89; Group II: incorrect film placement (44.0%), upper canine region, 1.12; Group III: incorrect film placement (79.2%), upper canine region, 2.05; Group IV: incorrect film placement (67.7%), upper canine region, 1.69.

  5. THE GREEN BANK TELESCOPE H II REGION DISCOVERY SURVEY. III. KINEMATIC DISTANCES

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, L. D. [Department of Physics, West Virginia University, Morgantown, WV 26506 (United States); Bania, T. M. [Institute for Astrophysical Research, Department of Astronomy, Boston University, 725 Commonwealth Avenue, Boston, MA 02215 (United States); Balser, Dana S. [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903-2475 (United States); Rood, Robert T., E-mail: Loren.Anderson@mail.wvu.edu [Astronomy Department, University of Virginia, P.O. Box 3818, Charlottesville, VA 22903-0818 (United States)

    2012-07-20

    Using the H I emission/absorption method, we resolve the kinematic distance ambiguity and derive distances for 149 of 182 (82%) H II regions discovered by the Green Bank Telescope H II Region Discovery Survey (GBT HRDS). The HRDS is an X-band (9 GHz, 3 cm) GBT survey of 448 previously unknown H II regions in radio recombination line and radio continuum emission. Here, we focus on HRDS sources from 67° ≥ l ≥ 18°, where kinematic distances are more reliable. The 25 HRDS sources in this zone that have negative recombination line velocities are unambiguously beyond the orbit of the Sun, up to 20 kpc distant. They are the most distant H II regions yet discovered. We find that 61% of HRDS sources are located at the far distance, 31% at the tangent-point distance, and only 7% at the near distance. 'Bubble' H II regions are not preferentially located at the near distance (as was assumed previously) but average 10 kpc from the Sun. The HRDS nebulae, when combined with a large sample of H II regions with previously known distances, show evidence of spiral structure in two circular arc segments of mean Galactocentric radii of 4.25 and 6.0 kpc. We perform a thorough uncertainty analysis to analyze the effect of using different rotation curves, streaming motions, and a change to the solar circular rotation speed. The median distance uncertainty for our sample of H II regions is only 0.5 kpc, or 5%. This is significantly less than the median difference between the near and far kinematic distances, 6 kpc. The basic Galactic structure results are unchanged after considering these sources of uncertainty.
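
    A hedged sketch of the kinematic-distance ambiguity for a flat rotation curve (IAU-style constants assumed); the H I emission/absorption method is what selects between the two roots returned here:

```python
import numpy as np

R0, TH0 = 8.5, 220.0                       # kpc, km/s (assumed constants)

def kinematic_distances(l_deg, v_lsr):
    l = np.radians(l_deg)
    # Flat curve: v_lsr = TH0*sin(l)*(R0/R - 1)  ->  solve for Galactocentric R
    R = R0 * TH0 * np.sin(l) / (v_lsr + TH0 * np.sin(l))
    disc = R**2 - (R0 * np.sin(l)) ** 2
    if disc < 0.0:
        return None                        # velocity beyond the tangent point
    root = np.sqrt(disc)
    return R0 * np.cos(l) - root, R0 * np.cos(l) + root   # near, far (kpc)

print(kinematic_distances(30.0, 60.0))     # two distances; absorption picks one
```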

  6. PHYSICAL AND MORPHOLOGICAL PROPERTIES OF [O II] EMITTING GALAXIES IN THE HETDEX PILOT SURVEY

    International Nuclear Information System (INIS)

    Bridge, Joanna S.; Gronwall, Caryl; Ciardullo, Robin; Hagen, Alex; Zeimann, Greg; Malz, A. I.; Schneider, Donald P.

    2015-01-01

    The Hobby-Eberly Dark Energy Experiment pilot survey identified 284 [O II] λ3727 emitting galaxies in a 169 arcmin^2 field of sky in the redshift range 0 < z < 0.57. This line flux limited sample provides a bridge between studies in the local universe and higher-redshift [O II] surveys. We present an analysis of the star formation rates (SFRs) of these galaxies as a function of stellar mass as determined via spectral energy distribution fitting. The [O II] emitters fall on the "main sequence" of star-forming galaxies with SFR decreasing at lower masses and redshifts. However, the slope of our relation is flatter than that found for most other samples, a result of the metallicity dependence of the [O II] star formation rate indicator. The mass-specific SFR is higher for lower mass objects, supporting the idea that massive galaxies formed more quickly and efficiently than their lower mass counterparts. This is confirmed by the fact that the equivalent widths of the [O II] emission lines trend smaller with larger stellar mass. Examination of the morphologies of the [O II] emitters reveals that their star formation is not a result of mergers, and the galaxies' half-light radii do not indicate evolution of physical sizes.

  7. PLANETARY NEBULAE DETECTED IN THE SPITZER SPACE TELESCOPE GLIMPSE II LEGACY SURVEY

    International Nuclear Information System (INIS)

    Zhang Yong; Sun Kwok

    2009-01-01

    We report the result of a search for the infrared counterparts of 37 planetary nebulae (PNs) and PN candidates in the Spitzer Galactic Legacy Infrared Mid-Plane Survey Extraordinaire II (GLIMPSE II) survey. The photometry and images of these PNs at 3.6, 4.5, 5.8, 8.0, and 24 μm, taken through the Infrared Array Camera (IRAC) and the Multiband Imaging Photometer for Spitzer (MIPS), are presented. Most of these nebulae are very red and compact in the IRAC bands, and are found to be bright and extended in the 24 μm band. The infrared morphology of these objects is compared with Hα images of the Macquarie-AAO-Strasbourg (MASH) and MASH II PNs. The implications of morphological differences in different wavelengths are discussed. The IRAC data allow us to differentiate between PNs and H II regions and to reject non-PNs from the optical catalog (e.g., PNG 352.1 - 00.0). Spectral energy distributions are constructed by combining the IRAC and MIPS data with existing near-, mid-, and far-IR photometry measurements. The anomalous colors of some objects allow us to infer the presence of aromatic emission bands. These multi-wavelength data provide useful insights into the nature of different nebular components contributing to the infrared emission of PNs.

  8. Guideline appraisal with AGREE II: online survey of the potential influence of AGREE II items on overall assessment of guideline quality and recommendation for use.

    Science.gov (United States)

    Hoffmann-Eßer, Wiebke; Siering, Ulrich; Neugebauer, Edmund A M; Brockhaus, Anne Catharina; McGauran, Natalie; Eikermann, Michaela

    2018-02-27

    The AGREE II instrument is the most commonly used guideline appraisal tool. It includes 23 appraisal criteria (items) organized within six domains. AGREE II also includes two overall assessments (overall guideline quality, recommendation for use). Our aim was to investigate how strongly the 23 AGREE II items influence the two overall assessments. An online survey of authors of publications on guideline appraisals with AGREE II and guideline users from a German scientific network was conducted between 10th February 2015 and 30th March 2015. Participants were asked to rate the influence of the AGREE II items on a Likert scale (0 = no influence to 5 = very strong influence). The frequencies of responses and their dispersion were presented descriptively. Fifty-eight of the 376 persons contacted (15.4%) participated in the survey and the data of the 51 respondents with prior knowledge of AGREE II were analysed. Items 7-12 of Domain 3 (rigour of development) and both items of Domain 6 (editorial independence) had the strongest influence on the two overall assessments. In addition, Items 15-17 (clarity of presentation) had a strong influence on the recommendation for use. Great variations were shown for the other items. The main limitation of the survey is the low response rate. In guideline appraisals using AGREE II, items representing rigour of guideline development and editorial independence seem to have the strongest influence on the two overall assessments. In order to ensure a transparent approach to reaching the overall assessments, we suggest the inclusion of a recommendation in the AGREE II user manual on how to consider item and domain scores. For instance, the manual could include an a-priori weighting of those items and domains that should have the strongest influence on the two overall assessments. The relevance of these assessments within AGREE II could thereby be further specified.

  9. Estimating Classification Errors Under Edit Restrictions in Composite Survey-Register Data Using Multiple Imputation Latent Class Modelling (MILC)

    Directory of Open Access Journals (Sweden)

    Boeschoten Laura

    2017-12-01

    Both registers and surveys can contain classification errors. These errors can be estimated by making use of a composite data set. We propose a new method based on latent class modelling to estimate the number of classification errors across several sources while taking into account impossible combinations with scores on other variables. Furthermore, the latent class model, by multiply imputing a new variable, enhances the quality of statistics based on the composite data set. The performance of this method is investigated by a simulation study, which shows that whether or not the method can be applied depends on the entropy R² of the latent class model and the type of analysis a researcher is planning to do. Finally, the method is applied to public data from Statistics Netherlands.

  10. The Sloan Digital Sky Survey-II Supernova Survey: Search Algorithm and Follow-up Observations

    Energy Technology Data Exchange (ETDEWEB)

    Sako, Masao [Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States); Bassett, Bruce [Department of Mathematics and Applied Mathematics, University of Cape Town, Rondebosch 7701 (South Africa); Becker, Andrew; Hogan, Craig J. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Cinabro, David [Department of Physics, Wayne State University, Detroit, MI 48202 (United States); DeJongh, Fritz; Frieman, Joshua A.; Marriner, John; Miknaitis, Gajus [Center for Particle Astrophysics, Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Depoy, D. L.; Prieto, Jose Luis [Department of Astronomy, Ohio State University, 140 West 18th Avenue, Columbus, OH 43210-1173 (United States); Dilday, Ben; Kessler, Richard [Kavli Institute for Cosmological Physics, The University of Chicago, 5640 South Ellis Avenue Chicago, IL 60637 (United States); Doi, Mamoru [Institute of Astronomy, Graduate School of Science, University of Tokyo 2-21-1, Osawa, Mitaka, Tokyo 181-0015 (Japan); Garnavich, Peter M. [University of Notre Dame, 225 Nieuwland Science, Notre Dame, IN 46556-5670 (United States); Holtzman, Jon [Department of Astronomy, MSC 4500, New Mexico State University, P.O. Box 30001, Las Cruces, NM 88003 (United States); Jha, Saurabh [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, P.O. Box 20450, MS29, Stanford, CA 94309 (United States); Konishi, Kohki [Institute for Cosmic Ray Research, University of Tokyo, 5-1-5, Kashiwanoha, Kashiwa, Chiba, 277-8582 (Japan); Lampeitl, Hubert [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Nichol, Robert C. [Institute of Cosmology and Gravitation, Mercantile House, Hampshire Terrace, University of Portsmouth, Portsmouth PO1 2EG (United Kingdom); and others

    2008-01-01

    The Sloan Digital Sky Survey-II Supernova Survey has identified a large number of new transient sources in a 300 deg^2 region along the celestial equator during its first two seasons of a three-season campaign. Multi-band (ugriz) light curves were measured for most of the sources, which include solar system objects, galactic variable stars, active galactic nuclei, supernovae (SNe), and other astronomical transients. The imaging survey is augmented by an extensive spectroscopic follow-up program to identify SNe, measure their redshifts, and study the physical conditions of the explosions and their environment through spectroscopic diagnostics. During the survey, light curves are rapidly evaluated to provide an initial photometric type of the SNe, and a selected sample of sources are targeted for spectroscopic observations. In the first two seasons, 476 sources were selected for spectroscopic observations, of which 403 were identified as SNe. For the type Ia SNe, the main driver for the survey, our photometric typing and targeting efficiency is 90%. Only 6% of the photometric SN Ia candidates were spectroscopically classified as non-SN Ia instead, and the remaining 4% resulted in low signal-to-noise, unclassified spectra. This paper describes the search algorithm and the software, and the real-time processing of the SDSS imaging data. We also present the details of the supernova candidate selection procedures and strategies for follow-up spectroscopic and imaging observations of the discovered sources.

  11. The Sloan Digital Sky Survey-II Supernova Survey: Search Algorithm and Follow-up Observations

    Energy Technology Data Exchange (ETDEWEB)

    Sako, Masao; /Pennsylvania U. /KIPAC, Menlo Park; Bassett, Bruce; /Cape Town U. /South African Astron. Observ.; Becker, Andrew; /Washington U., Seattle, Astron. Dept.; Cinabro, David; /Wayne State U.; DeJongh, Don Frederic; /Fermilab; Depoy, D.L.; /Ohio State U.; Doi, Mamoru; /Tokyo U.; Garnavich, Peter M.; /Notre Dame U.; Craig, Hogan, J.; /Washington U., Seattle, Astron. Dept.; Holtzman, Jon; /New Mexico State U.; Jha, Saurabh; /Stanford U., Phys. Dept.; Konishi, Kohki; /Tokyo U.; Lampeitl, Hubert; /Baltimore, Space; Marriner, John; /Fermilab; Miknaitis, Gajus; /Fermilab; Nichol, Robert C.; /Portsmouth U.; Prieto, Jose Luis; /Ohio State U.; Richmond, Michael W.; /Rochester Inst.; Schneider, Donald P.; /Penn State U., Astron. Astrophys.; Smith, Mathew; /Portsmouth U.; SubbaRao, Mark; /Chicago U. /Tokyo U. /Tokyo U. /South African Astron. Observ. /Tokyo

    2007-09-14

    The Sloan Digital Sky Survey-II Supernova Survey has identified a large number of new transient sources in a 300 deg^2 region along the celestial equator during its first two seasons of a three-season campaign. Multi-band (ugriz) light curves were measured for most of the sources, which include solar system objects, Galactic variable stars, active galactic nuclei, supernovae (SNe), and other astronomical transients. The imaging survey is augmented by an extensive spectroscopic follow-up program to identify SNe, measure their redshifts, and study the physical conditions of the explosions and their environment through spectroscopic diagnostics. During the survey, light curves are rapidly evaluated to provide an initial photometric type of the SNe, and a selected sample of sources are targeted for spectroscopic observations. In the first two seasons, 476 sources were selected for spectroscopic observations, of which 403 were identified as SNe. For the Type Ia SNe, the main driver for the Survey, our photometric typing and targeting efficiency is 90%. Only 6% of the photometric SN Ia candidates were spectroscopically classified as non-SN Ia instead, and the remaining 4% resulted in low signal-to-noise, unclassified spectra. This paper describes the search algorithm and the software, and the real-time processing of the SDSS imaging data. We also present the details of the supernova candidate selection procedures and strategies for follow-up spectroscopic and imaging observations of the discovered sources.

  12. Optimal power flow: a bibliographic survey II. Non-deterministic and hybrid methods

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen [Colorado School of Mines, Department of Electrical Engineering and Computer Science, Golden, CO (United States); Steponavice, Ingrida [Univ. of Jyvaskyla, Dept. of Mathematical Information Technology, Agora (Finland); Rebennack, Steffen [Colorado School of Mines, Division of Economics and Business, Golden, CO (United States)

    2012-09-15

    Over the past half-century, optimal power flow (OPF) has become one of the most important and widely studied nonlinear optimization problems. In general, OPF seeks to optimize the operation of electric power generation, transmission, and distribution networks subject to system constraints and control limits. Within this framework, however, there is an extremely wide variety of OPF formulations and solution methods. Moreover, the nature of OPF continues to evolve due to modern electricity markets and renewable resource integration. In this two-part survey, we survey both the classical and recent OPF literature in order to provide a sound context for the state of the art in OPF formulation and solution methods. The survey contributes a comprehensive discussion of specific optimization techniques that have been applied to OPF, with an emphasis on the advantages, disadvantages, and computational characteristics of each. Part I of the survey provides an introduction and surveys the deterministic optimization methods that have been applied to OPF. Part II of the survey (this article) examines the recent trend towards stochastic, or non-deterministic, search techniques and hybrid methods for OPF. (orig.)

  13. On the Correspondence between Mean Forecast Errors and Climate Errors in CMIP5 Models

    Energy Technology Data Exchange (ETDEWEB)

    Ma, H. -Y.; Xie, S.; Klein, S. A.; Williams, K. D.; Boyle, J. S.; Bony, S.; Douville, H.; Fermepin, S.; Medeiros, B.; Tyteca, S.; Watanabe, M.; Williamson, D.

    2014-02-01

    The present study examines the correspondence between short- and long-term systematic errors in five atmospheric models by comparing the 16 five-day hindcast ensembles from the Transpose Atmospheric Model Intercomparison Project II (Transpose-AMIP II) for July–August 2009 (short term) to the climate simulations from phase 5 of the Coupled Model Intercomparison Project (CMIP5) and AMIP for the June–August mean conditions of the years of 1979–2008 (long term). Because the short-term hindcasts were conducted with identical climate models used in the CMIP5/AMIP simulations, one can diagnose over what time scale systematic errors in these climate simulations develop, thus yielding insights into their origin through a seamless modeling approach. The analysis suggests that most systematic errors of precipitation, clouds, and radiation processes in the long-term climate runs are present by day 5 in ensemble average hindcasts in all models. Errors typically saturate after few days of hindcasts with amplitudes comparable to the climate errors, and the impacts of initial conditions on the simulated ensemble mean errors are relatively small. This robust bias correspondence suggests that these systematic errors across different models likely are initiated by model parameterizations since the atmospheric large-scale states remain close to observations in the first 2–3 days. However, biases associated with model physics can have impacts on the large-scale states by day 5, such as zonal winds, 2-m temperature, and sea level pressure, and the analysis further indicates a good correspondence between short- and long-term biases for these large-scale states. Therefore, improving individual model parameterizations in the hindcast mode could lead to the improvement of most climate models in simulating their climate mean state and potentially their future projections.

  14. Biennial Survey of Education, 1916-18. Volume II. Bulletin, 1919, No. 89

    Science.gov (United States)

    Bureau of Education, Department of the Interior, 1921

    1921-01-01

    Volume II of the Biennial Survey of Education, 1916-1918 includes the following chapters: (1) Education in Great Britain and Ireland (I. L. Kandel); (2) Education in parts of the British Empire: Educational Developments in the Dominion of Canada (Walter A. Montgomery), Public School System of Jamaica (Charles A. Asbury), Recent Progress of…

  15. VizieR Online Data Catalog: REFLEX II. Properties of the survey (Boehringer+ 2013)

    Science.gov (United States)

    Boehringer, H.; Chon, G.; Collins, C. A.; Guzzo, L.; Nowak, N.; Bobrovskyi, S.

    2013-06-01

    Like REFLEX I, the extended survey covers the southern sky outside the band of the Milky Way (|b_II| ≥ 20°) with regions around the Magellanic clouds excised (3 in LMC, 3 in SMC). The total survey area after this excision amounts to 4.24 steradians (or 13,924 deg^2), which corresponds to 33.75% of the sky. Different from REFLEX I, we use the refined RASS product RASS III (Voges et al. 1999, Cat. IX/10). (2 data files).
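
    The quoted numbers follow from unit conversion alone; a one-line check:

```python
import numpy as np

area_sr = 4.24
print(f"{area_sr * (180.0 / np.pi) ** 2:.0f} deg^2, "
      f"{100.0 * area_sr / (4.0 * np.pi):.2f}% of the sky")
# -> 13919 deg^2 and 33.74%, matching the quoted values to rounding.
```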

  16. A survey of mindset theories of intelligence and medical error self-reporting among pediatric housestaff and faculty.

    Science.gov (United States)

    Jegathesan, Mithila; Vitberg, Yaffa M; Pusic, Martin V

    2016-02-11

    Intelligence theory research has illustrated that people hold either "fixed" (intelligence is immutable) or "growth" (intelligence can be improved) mindsets and that these views may affect how people learn throughout their lifetime. Little is known about the mindsets of physicians, and how mindset may affect their lifetime learning and integration of feedback. Our objective was to determine if pediatric physicians are of the "fixed" or "growth" mindset and whether individual mindset affects perception of medical error reporting. We sent an anonymous electronic survey to pediatric residents and attending pediatricians at a tertiary care pediatric hospital. Respondents completed the "Theories of Intelligence Inventory", which classifies individuals on a 6-point scale ranging from 1 (Fixed Mindset) to 6 (Growth Mindset). Subsequent questions collected data on respondents' recall of medical errors by self or others. We received 176/349 responses (50%). Participants were equally distributed between mindsets, with 84 (49%) classified as "fixed" and 86 (51%) as "growth". Residents, fellows and attendings did not differ in terms of mindset. Mindset did not correlate with the small number of reported medical errors. There is no dominant theory of intelligence (mindset) amongst pediatric physicians. The distribution is similar to that seen in the general population. Mindset did not correlate with error reports.

  17. Errors and omissions in hospital prescriptions: a survey of prescription writing in a hospital.

    Science.gov (United States)

    Calligaris, Laura; Panzera, Angela; Arnoldo, Luca; Londero, Carla; Quattrin, Rosanna; Troncon, Maria G; Brusaferro, Silvio

    2009-05-13

    The frequency of drug prescription errors is high. Excluding errors in decision making, the remainder are mainly due to order ambiguity, non-standard nomenclature and illegible writing. The aim of this study is to analyse, as part of a continuous quality improvement program, the quality of prescription writing for antibiotics in an Italian University Hospital, as a risk factor for prescription errors. The point prevalence survey, carried out on 26-30 May 2008, involved 41 inpatient Units. Every parenteral or oral antibiotic prescription was analysed for legibility (generic or brand drug name, dose, frequency of administration) and completeness (generic or brand name, dose, frequency of administration, route of administration, date of prescription and signature of the prescriber). Eight doctors (residents in Hygiene and Preventive Medicine) and two pharmacists performed the survey by reviewing the clinical records of medical, surgical or intensive care section inpatients. The antibiotics drug category was chosen because its use is widespread in the setting considered. Out of 756 inpatients included in the study, 408 antibiotic prescriptions were found in 298 patients (mean prescriptions per patient 1.4; SD +/- 0.6). Overall, 92.7% (38/41) of the Units had at least one patient with an antibiotic prescription. Legibility criteria were met by 78.9% of generic or brand names, 69.4% of doses and 80.1% of frequencies of administration, whereas completeness was fulfilled for 95.6% of generic or brand names, 76.7% of doses, 83.6% of frequencies of administration, 87% of routes of administration, 43.9% of dates of prescription and 33.3% of physicians' signatures. Overall, 23.9% of prescriptions were illegible and 29.9% were incomplete. Legibility and completeness were higher for prescriptions of unusual drugs. The Intensive Care Section performed best as far as quality of prescription writing was concerned when compared with the Medical and Surgical Sections.

  18. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations

    NARCIS (Netherlands)

    Derks, E. M.; Zwinderman, A. H.; Gamazon, E. R.

    2017-01-01

    Population divergence impacts the degree of population stratification in Genome-Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates;

  19. Heuristic errors in clinical reasoning.

    Science.gov (United States)

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  20. Evaluation of the computerized procedures Manual II (COPMA II)

    International Nuclear Information System (INIS)

    Converse, S.A.

    1995-11-01

    The purpose of this study was to evaluate the effects of a computerized procedure system, the Computerized Procedure Manual II (COPMA-II), on the performance and mental workload of licensed reactor operators. To evaluate COPMA-II, eight teams of two operators were trained to operate a scaled pressurized water reactor facility (SPWRF) with traditional paper procedures and with COPMA-II. Following training, each team operated the SPWRF under normal operating conditions with both paper procedures and COPMA-II. The teams then performed one of two accident scenarios with paper procedures, but performed the remaining accident scenario with COPMA-II. Performance measures and subjective estimates of mental workload were recorded for each performance trial. The most important finding of the study was that the operators committed only half as many errors during the accident scenarios with COPMA-II as they committed with paper procedures. However, time to initiate a procedure was faster with paper procedures in the accident scenario trials. For performance under normal operating conditions, there was no difference in time to initiate or to complete a procedure, or in the number of errors committed, with paper procedures and with COPMA-II. There were no consistent differences in the mental workload ratings operators recorded for trials with paper procedures and COPMA-II.

  1. Dependence of fluence errors in dynamic IMRT on leaf-positional errors varying with time and leaf number

    International Nuclear Information System (INIS)

    Zygmanski, Piotr; Kung, Jong H.; Jiang, Steve B.; Chin, Lee

    2003-01-01

    In d-MLC based IMRT, leaves move along a trajectory that lies within a user-defined tolerance (TOL) about the ideal trajectory specified in a d-MLC sequence file. The MLC controller measures leaf positions multiple times per second and corrects them if they deviate from ideal positions by a value greater than TOL. The magnitude of leaf-positional errors resulting from finite mechanical precision depends on the performance of the MLC motors executing leaf motions and is generally larger if leaves are forced to move at higher speeds. The maximum value of leaf-positional errors can be limited by decreasing TOL. However, due to the inherent time delay in the MLC controller, this may not happen at all times. Furthermore, decreasing the leaf tolerance results in a larger number of beam hold-offs, which, in turn, leads to a longer delivery time and, paradoxically, to higher chances of leaf-positional errors (≤TOL). On the other hand, the magnitude of leaf-positional errors depends on the complexity of the fluence map to be delivered. Recently, it has been shown that it is possible to determine the actual distribution of leaf-positional errors either by imaging of moving MLC apertures with a digital imager or by analysis of a MLC log file saved by the MLC controller. This leads to an important question: what is the relation between the distribution of leaf-positional errors and fluence errors? In this work, we introduce an analytical method to determine this relation in dynamic IMRT delivery. We model MLC errors as Random-Leaf Positional (RLP) errors described by a truncated normal distribution defined by two characteristic parameters: a standard deviation σ and a cut-off value Δx₀ (Δx₀ ~ TOL). We quantify fluence errors for two cases: (i) Δx₀ ≫ σ (unrestricted normal distribution) and (ii) Δx₀ ≪ σ (Δx₀-limited normal distribution). We show that the average fluence error of an IMRT field is proportional to (i) σ/ALPO and (ii) Δx₀/ALPO, respectively, where
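
    A minimal sketch of the RLP error model as described: positional errors drawn from a zero-mean normal distribution with standard deviation σ, truncated at ±Δx₀. The parameter values below are invented for illustration:

        # Illustrative only: sample truncated-normal leaf-positional errors and
        # estimate the mean absolute error that (scaled by 1/ALPO) drives the
        # average fluence error in the two limiting cases above.
        import numpy as np
        from scipy.stats import truncnorm

        sigma, dx0 = 0.5, 2.0                # mm; hypothetical values
        a, b = -dx0 / sigma, dx0 / sigma     # truncation bounds in standard units
        errors = truncnorm.rvs(a, b, scale=sigma, size=100_000)
        print(np.abs(errors).mean())         # mean absolute positional error (mm)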

  2. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  3. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  4. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  5. Medication errors detected in non-traditional databases

    DEFF Research Database (Denmark)

    Perregaard, Helene; Aronson, Jeffrey K; Dalhoff, Kim

    2015-01-01

    AIMS: We have looked for medication errors involving the use of low-dose methotrexate, by extracting information from Danish sources other than traditional pharmacovigilance databases. We used the data to establish the relative frequencies of different types of errors. METHODS: We searched four...... errors, whereas knowledge-based errors more often resulted in near misses. CONCLUSIONS: The medication errors in this survey were most often action-based (50%) and knowledge-based (34%), suggesting that greater attention should be paid to education and surveillance of medical personnel who prescribe...

  6. Psychometric properties of the School Fears Survey Scale for preadolescents (SFSS-II).

    Science.gov (United States)

    García-Fernández, José Manuel; Espada Sánchez, José Pedro; Orgilés Amorós, Mireia; Méndez Carrillo, Xavier

    2010-08-01

    This paper describes the psychometric properties of a new children's self-report measure. The School Fears Survey Scale, Form II (SFSS-II) assesses school fears in children from ages 8 to 11. The factor solution with a Spanish sample of 3,665 children isolated four factors: fear of academic failure and punishment, fear of physical discomfort, fear of social and school assessment, and anticipatory and separation anxiety. The questionnaire was tested by confirmatory factor analysis; the four factors accounted for 55.80% of the total variance. Results indicated that the SFSS-II has high internal consistency (α = .89). The results also revealed high test-retest reliability and appropriate relationships with other scales. The age by gender interaction was significant: two-way analysis of variance found that older children and girls had higher anxiety. The instrument shows adequate psychometric guarantees and can be used for the multidimensional assessment of anxiety in clinical and educational settings.
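
    The internal-consistency figure reported above is Cronbach's α; a minimal sketch of its computation on a hypothetical item-score matrix (not the SFSS-II data):

        import numpy as np

        def cronbach_alpha(items):
            """items: (n_respondents, k_items) array of item scores."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
            total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
            return k / (k - 1) * (1 - item_vars / total_var)

        rng = np.random.default_rng(0)
        scores = rng.integers(1, 6, size=(200, 20))   # 200 children, 20 items (made up)
        print(cronbach_alpha(scores))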

  7. SLIM-MAUD: an approach to assessing human error probabilities using structured expert judgment. Volume II. Detailed analysis of the technical issues

    International Nuclear Information System (INIS)

    Embrey, D.E.; Humphreys, P.; Rosa, E.A.; Kirwan, B.; Rea, K.

    1984-07-01

    This two-volume report presents the procedures and analyses performed in developing an approach for structuring expert judgments to estimate human error probabilities. Volume I presents an overview of work performed in developing the approach: SLIM-MAUD (Success Likelihood Index Methodology, implemented through the use of an interactive computer program called MAUD-Multi-Attribute Utility Decomposition). Volume II provides a more detailed analysis of the technical issues underlying the approach

  8. Quantum error-correcting code for ternary logic

    Science.gov (United States)

    Majumdar, Ritajit; Basu, Saikat; Ghosh, Shibashis; Sur-Kolay, Susmita

    2018-05-01

    Ternary quantum systems are being studied because they provide more computational state space per unit of information; the ternary unit of quantum information is known as a qutrit. A qutrit has three basis states, so a qubit may be considered a special case of a qutrit in which the coefficient of one of the basis states is zero. Hence both (2 × 2)-dimensional and (3 × 3)-dimensional Pauli errors can occur on qutrits. In this paper, we (i) explore the possible (2 × 2)-dimensional as well as (3 × 3)-dimensional Pauli errors in qutrits and show that any pairwise bit swap error can be expressed as a linear combination of shift errors and phase errors, (ii) propose a special type of error called a quantum superposition error and show its equivalence to arbitrary rotation, (iii) formulate a nine-qutrit code which can correct a single error in a qutrit, and (iv) provide its stabilizer and circuit realization.
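
    The claim in (i) can be checked numerically: the nine generalized Pauli operators X^a Z^b on a qutrit span all 3 × 3 matrices, so a pairwise swap decomposes exactly in that basis. A small verification sketch using the standard construction (not the paper's code):

        import numpy as np

        w = np.exp(2j * np.pi / 3)
        X = np.roll(np.eye(3), 1, axis=0)        # shift error: |k> -> |k+1 mod 3>
        Z = np.diag([1, w, w**2])                # phase error: |k> -> w^k |k>

        basis = [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
                 for a in range(3) for b in range(3)]
        B = np.column_stack([M.ravel() for M in basis])

        swap01 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=complex)
        coeffs, *_ = np.linalg.lstsq(B, swap01.ravel(), rcond=None)
        print(np.allclose(B @ coeffs, swap01.ravel()))   # True: exact decomposition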

  9. Error management for musicians: an interdisciplinary conceptual framework.

    Science.gov (United States)

    Kruse-Weber, Silke; Parncutt, Richard

    2014-01-01

    Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians' generally negative attitude toward errors and the tendency to aim for flawless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further music education and

  10. Error management for musicians: an interdisciplinary conceptual framework

    Directory of Open Access Journals (Sweden)

    Silke eKruse-Weber

    2014-07-01

    Full Text Available Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians' generally negative attitude toward errors and the tendency to aim for errorless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of these abilities. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further

  11. Computer augmented modelling studies of Pb(II), Cd(II), Hg(II), Co(II), Ni(II), Cu(II) and Zn(II) complexes of L-glutamic acid in 1,2-propanediol–water mixtures

    Directory of Open Access Journals (Sweden)

    MAHESWARA RAO VEGI

    2008-12-01

    Full Text Available Chemical speciation of Pb(II), Cd(II), Hg(II), Co(II), Ni(II), Cu(II) and Zn(II) complexes of L-glutamic acid was studied at 303 K in 0–60 vol.% 1,2-propanediol–water mixtures, whereby the ionic strength was maintained at 0.16 mol dm⁻³. The active forms of the ligand are LH3+, LH2 and LH–. The predominant detected species were ML, ML2, MLH, ML2H and ML2H2. The trend of the variation in the stability constants with changing dielectric constant of the medium is explained based on the cation-stabilizing nature of the co-solvents, specific solvent–water interactions, charge dispersion and specific interactions of the co-solvent with the solute. The effect of systematic errors in the concentrations of the substances on the stability constants is in the order alkali >> acid > ligand > metal. The bioavailability and transportation of metals are explained based on distribution diagrams and stability constants.

  12. Measurement error in income and schooling, and the bias of linear estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    The characteristics of measurement error determine the bias of linear estimators. We propose a method for validating economic survey data allowing for measurement error in the validation source, and we apply this method by validating Survey of Health, Ageing and Retirement in Europe (SHARE) data...... with Danish administrative registers. We find that measurement error in surveys is classical for annual gross income but non-classical for years of schooling, causing a 21% amplification bias in IV estimators of returns to schooling. Using a 1958 Danish schooling reform, we contextualize our result...
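
    The amplification mechanism can be illustrated with a toy simulation (invented parameters, not the SHARE/register data): classical error in the regressor leaves IV consistent, whereas mean-reverting (non-classical) error scales the IV estimate by 1/λ:

        import numpy as np

        rng = np.random.default_rng(1)
        n, beta = 200_000, 0.08                  # "true" return to schooling
        z = rng.binomial(1, 0.5, n)              # instrument, e.g. a reform dummy
        s = 10 + 2 * z + rng.normal(0, 2, n)     # true years of schooling
        y = beta * s + rng.normal(0, 1, n)       # log income

        s_classical = s + rng.normal(0, 1, n)        # classical measurement error
        s_meanrev = 0.8 * s + rng.normal(0, 1, n)    # mean-reverting, lambda = 0.8

        def iv(x, y, z):
            return np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

        print(iv(s_classical, y, z))   # ~ beta: IV robust to classical error
        print(iv(s_meanrev, y, z))     # ~ beta / 0.8: an amplified estimate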

  13. Errors in radiographic recognition in the emergency room

    International Nuclear Information System (INIS)

    Britton, C.A.; Cooperstein, L.A.

    1986-01-01

    For 6 months, we monitored the frequency and type of errors in radiographic recognition made by radiology residents on call in our emergency room. A relatively low error rate was observed, probably because we evaluated cognitive errors only, rather than including those of interpretation. The most common missed finding was a small fracture, particularly on the hands or feet. First-year residents were most likely to make an error, but, interestingly, our survey revealed a small subset of upper-level residents who made a disproportionate number of errors.

  14. PHOTOMETRIC TYPE Ia SUPERNOVA CANDIDATES FROM THE THREE-YEAR SDSS-II SN SURVEY DATA

    International Nuclear Information System (INIS)

    Sako, Masao; Connolly, Brian; Gladney, Larry; Bassett, Bruce; Dilday, Benjamin; Cambell, Heather; Lampeitl, Hubert; Nichol, Robert C.; Frieman, Joshua A.; Kessler, Richard; Marriner, John; Miquel, Ramon; Schneider, Donald P.; Smith, Mathew; Sollerman, Jesper

    2011-01-01

    We analyze the three-year Sloan Digital Sky Survey II (SDSS-II) Supernova (SN) Survey data and identify a sample of 1070 photometric Type Ia supernova (SN Ia) candidates based on their multiband light curve data. This sample consists of SN candidates with no spectroscopic confirmation, with a subset of 210 candidates having spectroscopic redshifts of their host galaxies measured while the remaining 860 candidates are purely photometric in their identification. We describe a method for estimating the efficiency and purity of photometric SN Ia classification when spectroscopic confirmation of only a limited sample is available, and demonstrate that SN Ia candidates from SDSS-II can be identified photometrically with ∼91% efficiency and with a contamination of ∼6%. Although this is the largest uniform sample of SN candidates to date for studying photometric identification, we find that a larger spectroscopic sample of contaminating sources is required to obtain a better characterization of the background events. A Hubble diagram using SN candidates with no spectroscopic confirmation, but with host galaxy spectroscopic redshifts, yields a distance modulus dispersion that is only ∼20%-40% larger than that of the spectroscopically confirmed SN Ia sample alone with no significant bias. A Hubble diagram with purely photometric classification and redshift-distance measurements, however, exhibits biases that require further investigation for precision cosmology.

  15. Photometric type Ia supernova candidates from the three-year SDSS-II SN survey data

    Energy Technology Data Exchange (ETDEWEB)

    Sako, Masao; /Pennsylvania U.; Bassett, Bruce; /South African Astron. Observ. /Cape Town U., Dept. Math.; Connolly, Brian; /Pennsylvania U.; Dilday, Benjamin; /Las Cumbres Observ. /UC, Santa Barbara /Rutgers U., Piscataway; Cambell, Heather; /Portsmouth U., ICG; Frieman, Joshua A.; /Chicago U. /Chicago U., KICP /Fermilab; Gladney, Larry; /Pennsylvania U.; Kessler, Richard; /Chicago U. /Chicago U., KICP; Lampeitl, Hubert; /Portsmouth U., ICG; Marriner, John; /Fermilab; Miquel, Ramon; /Barcelona, IFAE /ICREA, Barcelona /Portsmouth U., ICG

    2011-07-01

    We analyze the three-year Sloan Digital Sky Survey II (SDSS-II) Supernova (SN) Survey data and identify a sample of 1070 photometric Type Ia supernova (SN Ia) candidates based on their multiband light curve data. This sample consists of SN candidates with no spectroscopic confirmation, with a subset of 210 candidates having spectroscopic redshifts of their host galaxies measured while the remaining 860 candidates are purely photometric in their identification. We describe a method for estimating the efficiency and purity of photometric SN Ia classification when spectroscopic confirmation of only a limited sample is available, and demonstrate that SN Ia candidates from SDSS-II can be identified photometrically with ∼91% efficiency and with a contamination of ∼6%. Although this is the largest uniform sample of SN candidates to date for studying photometric identification, we find that a larger spectroscopic sample of contaminating sources is required to obtain a better characterization of the background events. A Hubble diagram using SN candidates with no spectroscopic confirmation, but with host galaxy spectroscopic redshifts, yields a distance modulus dispersion that is only ∼20%-40% larger than that of the spectroscopically confirmed SN Ia sample alone with no significant bias. A Hubble diagram with purely photometric classification and redshift-distance measurements, however, exhibits biases that require further investigation for precision cosmology.

  16. Issues in environmental survey design

    International Nuclear Information System (INIS)

    Iachan, R.

    1989-01-01

    Several environmental survey design issues are discussed and illustrated with surveys designed by Research Triangle Institute statisticians. Issues related to sampling and nonsampling errors are illustrated for indoor air quality surveys, radon surveys, pesticide surveys, and occupational and personal exposure surveys. Sample design issues include the use of auxiliary information (e.g. for stratification), and sampling in time. We also discuss the reduction and estimation of nonsampling errors, including nonresponse and measurement bias

  17. Survey of Biomass Gasification, Volume II: Principles of Gasification

    Energy Technology Data Exchange (ETDEWEB)

    Reed, T.B. (comp.)

    1979-07-01

    Biomass can be converted by gasification into a clean-burning gaseous fuel that can be used to retrofit existing gas/oil boilers, to power engines, to generate electricity, and as a base for synthesis of methanol, gasoline, ammonia, or methane. This survey describes biomass gasification, associated technologies, and issues in three volumes. Volume I contains the synopsis and executive summary, giving highlights of the findings of the other volumes. In Volume II the technical background necessary for understanding the science, engineering, and commercialization of biomass is presented. In Volume III the present status of gasification processes is described in detail, followed by chapters on economics, gas conditioning, fuel synthesis, the institutional role to be played by the federal government, and recommendations for future research and development.

  18. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Science.gov (United States)

    Spüler, Martin; Niethammer, Christian

    2015-01-01

    When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG. PMID:25859204

  19. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Directory of Open Access Journals (Sweden)

    Martin eSpüler

    2015-03-01

    Full Text Available When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG.

  20. Multiagency radiation survey and site investigation manual (MARSSIM): Survey design

    International Nuclear Information System (INIS)

    Abelquist, E.W.; Berger, J.D.

    1996-01-01

    This paper describes the MultiAgency Radiation Survey and Site Investigation Manual (MARSSIM) strategy for designing a final status survey. The purpose of the final status survey is to demonstrate that release criteria established by the regulatory agency have been met. Survey design begins with identification of the contaminants and determination of whether the radionuclides of concern exist in background. The decommissioned site is segregated into Class 1, Class 2, and Class 3 areas, based on contamination potential, and each area is further divided into survey units. Appropriate reference areas for indoor and outdoor background measurements are selected. Survey instrumentation and techniques are selected in order to assure that the instrumentation is capable of detecting the contamination at the derived concentration guideline level (DCGL). Survey reference systems are established and the number of survey data points is determined, with the required number of data points distributed on a triangular grid pattern. Two statistical tests are used to evaluate data from final status surveys. For contaminants that are present in background, the Wilcoxon Rank Sum test is used; for contaminants that are not present in background, the Wilcoxon Signed Rank (or Sign) test is used. The number of data points needed to satisfy these nonparametric tests is based on the contaminant DCGL value, the expected standard deviation of the contaminant in background and in the survey unit, and the acceptable probability of making Type I and Type II decision errors. The MARSSIM also requires a reasonable level of assurance that any small areas of elevated residual radioactivity that could be significant relative to regulatory limits are not missed during the final status survey. Measurements and sampling on a specified grid size are used to obtain an adequate assurance level that small locations of elevated radioactivity will still satisfy DCGLs applicable to small areas.
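
    The sample-size step described here follows a standard normal-approximation formula; a sketch for the Sign test with illustrative error rates (MARSSIM additionally inflates the result by a safety margin):

        from scipy.stats import norm

        alpha, beta = 0.05, 0.05   # acceptable Type I and Type II decision error rates
        sign_p = 0.7               # P(measurement < DCGL), from the relative shift;
                                   # the value here is made up for illustration

        N = (norm.ppf(1 - alpha) + norm.ppf(1 - beta))**2 / (4 * (sign_p - 0.5)**2)
        print(round(N))            # data points required in the survey unit (~68)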

  1. Comparison of two dietary assessment methods by food consumption: results of the German National Nutrition Survey II.

    Science.gov (United States)

    Eisinger-Watzl, Marianne; Straßburg, Andrea; Ramünke, Josa; Krems, Carolin; Heuer, Thorsten; Hoffmann, Ingrid

    2015-04-01

    To further characterise the performance of the diet history method and the 24-h recall method, both in updated versions, a comparison was conducted. The National Nutrition Survey II, representative for Germany, assessed food consumption with both methods. The comparison was conducted in a sample of 9,968 participants aged 14-80. Besides calculating mean differences, statistical agreement measures encompassed Spearman and intraclass correlation coefficients, ranking of participants in quartiles and the Bland-Altman method. Mean consumption of 12 out of 18 food groups was assessed higher with the diet history method. Three of these 12 food groups had a medium to large effect size (e.g., raw vegetables) and seven showed at least a small effect size, while there was basically no difference for coffee/tea or ice cream. Intraclass correlations were strong only for beverages (>0.50) and were weakest for vegetables. The requirement of the diet history method to remember consumption over the past 4 weeks may be a source of inaccuracy, especially for inhomogeneous food groups. Additionally, social desirability gains significance. There is no assessment method without errors, and attention to specific food groups is a critical issue with every method. Altogether, the 24-h recall method applied in the presented study offers advantages in approximating food consumption as compared to the diet history method.

  2. Web-based Surveys: Changing the Survey Process

    OpenAIRE

    Gunn, Holly

    2002-01-01

    Web-based surveys are having a profound influence on the survey process. Unlike other types of surveys, Web page design skills and computer programming expertise play a significant role in the design of Web-based surveys. Survey respondents face new and different challenges in completing a Web-based survey. This paper examines the different types of Web-based surveys, the advantages and challenges of using Web-based surveys, the design of Web-based surveys, and the issues of validity, error, ...

  3. Error Control for Network-on-Chip Links

    CERN Document Server

    Fu, Bo

    2012-01-01

    As technology scales into the nanoscale regime, it is impossible to guarantee a perfect hardware design. Moreover, if the requirement of 100% correctness in hardware can be relaxed, the cost of manufacturing, verification, and testing will be significantly reduced. Many approaches have been proposed to address the reliability problem of on-chip communications. This book focuses on the use of error control codes (ECCs) to improve on-chip interconnect reliability. Coverage includes a detailed description of key issues in NoC error control faced by circuit and system designers, as well as practical error control techniques to minimize the impact of these errors on system performance. Provides a detailed background on the state of error control methods for on-chip interconnects; Describes the use of more complex concatenated codes such as Hamming Product Codes with Type-II HARQ, while emphasizing integration techniques for on-chip interconnect links; Examines energy-efficient techniques for integrating multiple error...
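
    As a concrete instance of the ECC idea, here is a minimal single-error-correcting Hamming(7,4) round trip; the codes covered in the book (e.g., Hamming product codes with Type-II HARQ) are stronger, and this sketch is only illustrative:

        import numpy as np

        G = np.array([[1,0,0,0,1,1,0],   # generator matrix (systematic form)
                      [0,1,0,0,1,0,1],
                      [0,0,1,0,0,1,1],
                      [0,0,0,1,1,1,1]])
        H = np.array([[1,1,0,1,1,0,0],   # parity-check matrix; H @ G.T = 0 (mod 2)
                      [1,0,1,1,0,1,0],
                      [0,1,1,1,0,0,1]])

        data = np.array([1, 0, 1, 1])
        codeword = data @ G % 2
        received = codeword.copy()
        received[2] ^= 1                 # single bit flipped on the "link"

        syndrome = H @ received % 2      # non-zero syndrome locates the error
        err_pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
        received[err_pos] ^= 1           # corrected
        print(np.array_equal(received, codeword))   # True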

  4. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  5. Assessing Measurement Error in Medicare Coverage

    Data.gov (United States)

    U.S. Department of Health & Human Services — Assessing Measurement Error in Medicare Coverage From the National Health Interview Survey Using linked administrative data, to validate Medicare coverage estimates...

  6. Nurse perceptions of organizational culture and its association with the culture of error reporting: a case of public sector hospitals in Pakistan.

    Science.gov (United States)

    Jafree, Sara Rizvi; Zakar, Rubeena; Zakar, Muhammad Zakria; Fischer, Florian

    2016-01-05

    There is an absence of formal error tracking systems in public sector hospitals of Pakistan and also a lack of literature concerning error reporting culture in the health care sector. Nurse practitioners have front-line knowledge and rich exposure about both the organizational culture and error sharing in hospital settings. The aim of this paper was to investigate the association between organizational culture and the culture of error reporting, as perceived by nurses. The authors used the "Practice Environment Scale-Nurse Work Index Revised" to measure the six dimensions of organizational culture. Seven questions were used from the "Survey to Solicit Information about the Culture of Reporting" to measure error reporting culture in the region. Overall, 309 nurses participated in the survey, including female nurses from all designations such as supervisors, instructors, ward-heads, staff nurses and student nurses. We used SPSS 17.0 to perform a factor analysis. Furthermore, descriptive statistics, mean scores and multivariable logistic regression were used for the analysis. Three areas were ranked unfavorably by nurse respondents, including: (i) the error reporting culture, (ii) staffing and resource adequacy, and (iii) nurse foundations for quality of care. Multivariable regression results revealed that all six categories of organizational culture, including: (1) nurse manager ability, leadership and support, (2) nurse participation in hospital affairs, (3) nurse participation in governance, (4) nurse foundations of quality care, (5) nurse-coworkers relations, and (6) nurse staffing and resource adequacy, were positively associated with higher odds of error reporting culture. In addition, it was found that married nurses and nurses on permanent contract were more likely to report errors at the workplace. Public healthcare services of Pakistan can be improved through the promotion of an error reporting culture, reducing staffing and resource shortages and the

  7. Measurements of the Rate of Type Ia Supernovae at Redshift z < ~0.3 from the SDSS-II Supernova Survey

    Energy Technology Data Exchange (ETDEWEB)

    Dilday, Benjamin; /Rutgers U., Piscataway /Chicago U. /KICP, Chicago; Smith, Mathew; /Cape Town U., Dept. Math. /Portsmouth U.; Bassett, Bruce; /Cape Town U., Dept. Math. /South African Astron. Observ.; Becker, Andrew; /Washington U., Seattle, Astron. Dept.; Bender, Ralf; /Munich, Tech. U. /Munich U. Observ.; Castander, Francisco; /Barcelona, IEEC; Cinabro, David; /Wayne State U.; Filippenko, Alexei V.; /UC, Berkeley; Frieman, Joshua A.; /Chicago U. /Fermilab; Galbany, Lluis; /Barcelona, IFAE; Garnavich, Peter M.; /Notre Dame U. /Stockholm U., OKC /Stockholm U.

    2010-01-01

    We present a measurement of the volumetric Type Ia supernova (SN Ia) rate based on data from the Sloan Digital Sky Survey II (SDSS-II) Supernova Survey. The adopted sample of supernovae (SNe) includes 516 SNe Ia at redshift z ≲ 0.3, of which 270 (52%) are spectroscopically identified as SNe Ia. The remaining 246 SNe Ia were identified through their light curves; 113 of these objects have spectroscopic redshifts from spectra of their host galaxy, and 133 have photometric redshifts estimated from the SN light curves. Based on consideration of 87 spectroscopically confirmed non-Ia SNe discovered by the SDSS-II SN Survey, we estimate that 2.04^{+1.61}_{-0.95}% of the photometric SNe Ia may be misidentified. The sample of SNe Ia used in this measurement represents an order of magnitude increase in the statistics for SN Ia rate measurements in the redshift range covered by the SDSS-II Supernova Survey. If we assume a SN Ia rate that is constant at low redshift (z < 0.15), then the SN observations can be used to infer a value of the SN rate of r_V = (2.69^{+0.34+0.21}_{-0.30-0.01}) × 10^-5 SNe yr^-1 Mpc^-3 (H_0/(70 km s^-1 Mpc^-1))^3 at a mean redshift of ~0.12, based on 79 SNe Ia, of which 72 are spectroscopically confirmed. However, the large sample of SNe Ia included in this study allows us to place constraints on the redshift dependence of the SN Ia rate based on the SDSS-II Supernova Survey data alone. Fitting a power-law model of the SN rate evolution, r_V(z) = A_p × ((1+z)/(1+z_0))^ν, over the redshift range 0.0 < z < 0.3 with z_0 = 0.21, results in A_p = (3.43^{+0.15}_{-0.15}) × 10^-5 SNe yr^-1 Mpc^-3 (H_0/(70 km s^-1 Mpc^-1))^3 and ν = 2.04^{+0.90}_{-0.89}.
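
    For reference, the best-fit power-law model quoted above is easy to evaluate across the survey's redshift range (parameter values taken directly from the abstract; the code itself is only an illustration):

        import numpy as np

        A_p, nu, z0 = 3.43e-5, 2.04, 0.21
        z = np.linspace(0.0, 0.3, 7)
        r_V = A_p * ((1 + z) / (1 + z0)) ** nu   # SNe / yr / Mpc^3, H0 = 70 units
        for zi, ri in zip(z, r_V):
            print(f"z = {zi:.2f}:  r_V = {ri:.2e}")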

  8. Monitoring and reporting of preanalytical errors in laboratory medicine: the UK situation.

    Science.gov (United States)

    Cornes, Michael P; Atherton, Jennifer; Pourmahram, Ghazaleh; Borthwick, Hazel; Kyle, Betty; West, Jamie; Costelloe, Seán J

    2016-03-01

    Most errors in the clinical laboratory occur in the preanalytical phase. This study aimed to comprehensively describe the prevalence and nature of preanalytical quality monitoring practices in UK clinical laboratories. A survey was sent on behalf of the Association for Clinical Biochemistry and Laboratory Medicine Preanalytical Working Group (ACB-WG-PA) to all heads of department of clinical laboratories in the UK. The survey captured data on the analytical platform and Laboratory Information Management System in use; which preanalytical errors were recorded and how they were classified; and gauged interest in an external quality assurance scheme for preanalytical errors. Of the 157 laboratories asked to participate, responses were received from 104 (66.2%). Laboratory error rates were recorded per number of specimens, rather than per number of requests, in 51% of responding laboratories. Aside from serum indices for haemolysis, icterus and lipaemia, which were measured in 80% of laboratories, the most common errors recorded among laboratories that record preanalytical errors were booking-in errors (70.1%) and sample mislabelling (56.9%). Of the laboratories surveyed, 95.9% expressed an interest in guidance on recording preanalytical error and 91.8% expressed interest in an external quality assurance scheme. This survey observes wide variation in the definition, repertoire and collection methods for preanalytical errors in the UK. The data indicate considerable interest in improving preanalytical data collection. The ACB-WG-PA aims to produce guidance and support for laboratories to standardize preanalytical data collection and to help establish and validate an external quality assurance scheme for interlaboratory comparison. © The Author(s) 2015.

  9. Learning from errors in radiology to improve patient safety.

    Science.gov (United States)

    Saeed, Shaista Afzal; Masroor, Imrana; Shafqat, Gulnaz

    2013-10-01

    To determine the views and practices of trainees and consultant radiologists about error reporting. Cross-sectional survey. Radiology trainees and consultant radiologists in four tertiary care hospitals in Karachi were approached in the second quarter of 2011. Participants were asked about their grade, sub-specialty interest, whether they kept a record/log of their errors (defined as a mistake that has management implications for the patient), the number of errors they made in the last 12 months and the predominant type of error. They were also asked about the details of their department error meetings. All duly completed questionnaires were included in the study, while those with incomplete information were excluded. A total of 100 radiologists participated in the survey. Of them, 34 were consultants and 66 were trainees. They had a wide range of sub-specialty interests, such as CT, ultrasound, etc. Of the 100 responders, 49 kept a personal record/log of their errors. In response to the recall of approximate errors made in the last 12 months, 73 (73%) of participants recorded a varied response, with 1-5 errors mentioned by the majority, i.e. 47 (64.5%). Most of the radiologists (97%) claimed to receive information about their errors through multiple sources, such as morbidity/mortality meetings, patients' follow-up, and through colleagues and consultants. Perceptual errors (66; 66%) were the predominant error type reported. Regular occurrence of error meetings and attendance at three or more error meetings in the last 12 months was reported by 35% of participants. The majority of these described the atmosphere of the error meetings as informative and comfortable (n = 22, 62.8%). It is of utmost importance to develop a culture of learning from mistakes by conducting error meetings and improving the process of recording and addressing errors to enhance patient safety.

  10. Learning from Errors: Critical Incident Reporting in Nursing

    Science.gov (United States)

    Gartmeier, Martin; Ottl, Eva; Bauer, Johannes; Berberat, Pascal Oliver

    2017-01-01

    Purpose: The purpose of this paper is to conceptualize error reporting as a strategy for informal workplace learning and investigate nurses' error reporting cost/benefit evaluations and associated behaviors. Design/methodology/approach: A longitudinal survey study was carried out in a hospital setting with two measurements (time 1 [t1]:…

  11. SDSS-II SUPERNOVA SURVEY: AN ANALYSIS OF THE LARGEST SAMPLE OF TYPE IA SUPERNOVAE AND CORRELATIONS WITH HOST-GALAXY SPECTRAL PROPERTIES

    International Nuclear Information System (INIS)

    Wolf, Rachel C.; Gupta, Ravi R.; Sako, Masao; Fischer, John A.; March, Marisa C.; Fischer, Johanna-Laina; D’Andrea, Chris B.; Smith, Mathew; Kessler, Rick; Scolnic, Daniel M.; Jha, Saurabh W.; Campbell, Heather; Nichol, Robert C.; Olmstead, Matthew D.; Richmond, Michael; Schneider, Donald P.

    2016-01-01

    Using the largest single-survey sample of Type Ia supernovae (SNe Ia) to date, we study the relationship between properties of SNe Ia and those of their host galaxies, focusing primarily on correlations with Hubble residuals (HRs). Our sample consists of 345 photometrically classified or spectroscopically confirmed SNe Ia discovered as part of the SDSS-II Supernova Survey (SDSS-SNS). This analysis utilizes host-galaxy spectroscopy obtained during the SDSS-I/II spectroscopic survey and from an ancillary program on the SDSS-III Baryon Oscillation Spectroscopic Survey that obtained spectra for nearly all host galaxies of SDSS-II SN candidates. In addition, we use photometric host-galaxy properties from the SDSS-SNS data release such as host stellar mass and star formation rate. We confirm the well-known relation between HR and host-galaxy mass and find a 3.6 σ significance of a nonzero linear slope. We also recover correlations between HR and host-galaxy gas-phase metallicity and specific star formation rate as they are reported in the literature. With our large data set, we examine correlations between HR and multiple host-galaxy properties simultaneously and find no evidence of a significant correlation. We also independently analyze our spectroscopically confirmed and photometrically classified SNe Ia and comment on the significance of similar combined data sets for future surveys.

  12. A Survey of Wireless Fair Queuing Algorithms with Location-Dependent Channel Errors

    Directory of Open Access Journals (Sweden)

    Anca VARGATU

    2011-01-01

    Full Text Available The rapid development of wireless networks has brought increasing attention to topics related to the fair allocation of resources: the creation of suitable algorithms that take into account the special characteristics of the wireless environment and ensure fair access to the transmission channel with bounded delay and guaranteed throughput. Fair allocation of resources in wireless networks poses significant challenges because of errors that occur only in these networks, such as location-dependent and bursty channel errors. In wireless networks it frequently happens, because of radio interference, that a user experiencing bad radio conditions during a period of time receives no resources in that period. This paper analyzes some resource allocation algorithms for wireless networks with location-dependent errors, specifying the basic idea of each algorithm and the way it works. The analyzed fair queuing algorithms differ in the way they treat the following aspects: how to select the flows which should receive additional service, how to allocate these resources, what proportion is received by error-free flows, and how the flows affected by errors are compensated.
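
    The compensation idea shared by these algorithms can be sketched with a toy scheduler: blocked flows accumulate "lag" (service owed) and the most-lagging eligible flow is served once its channel recovers. This is a generic illustration with invented error rates, not any specific published algorithm:

        import random

        rates = {"A": 0.1, "B": 0.3, "C": 0.5}   # location-dependent error rates
        lag = {f: 0.0 for f in rates}            # service owed to each flow

        def channel_ok(flow):                    # hypothetical channel-state oracle
            return random.random() > rates[flow]

        for slot in range(10_000):
            eligible = [f for f in rates if channel_ok(f)]
            if not eligible:
                continue
            served = max(eligible, key=lambda f: lag[f])   # compensate laggards first
            lag[served] -= 1.0                   # one slot of service repaid
            for f in rates:                      # equal weights assumed: each flow
                lag[f] += 1.0 / len(rates)       # is owed 1/3 of every slot

        print({f: round(l, 1) for f, l in lag.items()})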

  13. Modelling vertical error in LiDAR-derived digital elevation models

    Science.gov (United States)

    Aguilar, Fernando J.; Mills, Jon P.; Delgado, Jorge; Aguilar, Manuel A.; Negreiros, J. G.; Pérez, José L.

    2010-01-01

    A hybrid theoretical-empirical model has been developed for modelling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain. The theoretical component seeks to model the propagation of the sample data error (SDE), i.e. the error from light detection and ranging (LiDAR) data capture of ground sampled points in open terrain, towards interpolated points. The interpolation methods used for infilling gaps may produce a non-negligible error that is referred to as gridding error. In this case, interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods. The empirical component refers to what is known as "information loss". This is the error purely due to modelling the continuous terrain surface from only a discrete number of points, plus the error arising from the interpolation process. The SDE must be previously calculated from a suitable number of check points located in open terrain, and assumes that the LiDAR point density was sufficiently high to neglect the gridding error. For model calibration, data for 29 study sites, 200×200 m in size, belonging to different areas around Almería province, south-east Spain, were acquired by means of stereo photogrammetric methods. The developed methodology was validated against two different LiDAR datasets. The first dataset used was an Ordnance Survey (OS) LiDAR survey carried out over a region of Bristol in the UK. The second dataset was an area located at the Gador mountain range, south of Almería province, Spain. Both terrain slope and sampling density were incorporated in the empirical component through the calibration phase, resulting in very good agreement between predicted and observed data (R² = 0.9856). The Bristol dataset showed a reasonably good fit to the predicted errors. Even better results were achieved in the more rugged morphology of the Gador mountain range dataset. The findings
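
    A minimal sketch of the gap-infilling interpolator described above (IDW with the local support of the five closest neighbours); the point cloud is synthetic, not the Almería or Bristol data:

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(42)
        pts = rng.uniform(0, 200, size=(5000, 2))               # (x, y) ground points
        elev = 50 + 0.1 * pts[:, 0] + rng.normal(0, 0.2, 5000)  # elevations (m)
        tree = cKDTree(pts)

        def idw(query_xy, k=5, power=2.0):
            dist, idx = tree.query(query_xy, k=k)    # k nearest sampled points
            dist = np.maximum(dist, 1e-12)           # guard against exact hits
            w = 1.0 / dist**power
            return (w * elev[idx]).sum(axis=-1) / w.sum(axis=-1)

        grid = np.array([[x, y] for x in range(0, 200, 10) for y in range(0, 200, 10)])
        print(idw(grid)[:5])                         # interpolated DEM nodes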

  14. Interprofessional conflict and medical errors: results of a national multi-specialty survey of hospital residents in the US.

    Science.gov (United States)

    Baldwin, Dewitt C; Daugherty, Steven R

    2008-12-01

    Clear communication is considered the sine qua non of effective teamwork. Breakdowns in communication resulting from interprofessional conflict are believed to potentiate errors in the care of patients, although there is little supportive empirical evidence. In 1999, we surveyed a national, multi-specialty sample of 6,106 residents (64.2% response rate). Three questions inquired about "serious conflict" with another staff member. Residents were also asked whether they had made a "significant medical error" (SME) during their current year of training, and whether this resulted in an "adverse patient outcome" (APO). Just over 20% (n = 722) reported "serious conflict" with another staff member. Ten percent involved another resident, 8.3% supervisory faculty, and 8.9% nursing staff. Of the 2,813 residents reporting no conflict with other professional colleagues, 669, or 23.8%, recorded having made an SME, with 3.4% APOs. By contrast, the 523 residents who reported conflict with at least one other professional had 36.4% SMEs and 8.3% APOs. For the 187 reporting conflict with two or more other professionals, the SME rate was 51%, with 16% APOs. The empirical association between interprofessional conflict and medical errors is both alarming and intriguing, although the exact nature of this relationship cannot currently be determined from these data. Several theoretical constructs are advanced to assist our thinking about this complex issue.

  15. Finding errors in big data

    NARCIS (Netherlands)

    Puts, Marco; Daas, Piet; de Waal, A.G.

    No data source is perfect. Mistakes inevitably creep in. Spotting errors is hard enough when dealing with survey responses from several thousand people, but the difficulty is multiplied hugely when that mysterious beast Big Data comes into play. Statistics Netherlands is about to publish its first

  16. Friendship at work and error disclosure

    Directory of Open Access Journals (Sweden)

    Hsiao-Yen Mao

    2017-10-01

    Organizations rely on contextual factors to promote employee disclosure of self-made errors, which induces a resource dilemma (i.e., disclosure entails spending one's own resources to bring resources to others) and a friendship dilemma (i.e., disclosure is seemingly easier through friendship, yet the cost of friendship is embedded). This study proposes that friendship at work enhances error disclosure and uses conservation of resources theory as the underlying explanation. A three-wave survey collected data from 274 full-time employees with a variety of occupational backgrounds. Empirical results indicated that friendship enhanced error disclosure partially through relational mechanisms of employees' attitudes toward coworkers (i.e., employee engagement) and of coworkers' attitudes toward employees (i.e., perceived social worth). Such effects hold when controlling for established predictors of error disclosure. This study expands extant perspectives on employee error and the theoretical lenses used to explain the influence of friendship at work. We propose that, while promoting error disclosure through both contextual and relational approaches, organizations should be vigilant about potential incongruence.

  17. Error probabilities in default Bayesian hypothesis testing

    NARCIS (Netherlands)

    Gu, Xin; Hoijtink, Herbert; Mulder, J.

    2016-01-01

    This paper investigates the classical type I and type II error probabilities of default Bayes factors for a Bayesian t test. Default Bayes factors quantify the relative evidence between the null hypothesis and the unrestricted alternative hypothesis without needing to specify prior distributions for

  18. Effect of External Disturbing Gravity Field on Spacecraft Guidance and Surveying Line Layout for Marine Gravity Survey

    Directory of Open Access Journals (Sweden)

    HUANG Motao

    2016-11-01

    Centred on the support requirements of flight-track control for a long-range spacecraft, a detailed study is made of the computation of the external disturbing gravity field, the survey accuracy of gravity anomalies on the earth's surface, and the design of surveying line layouts for marine gravity surveys. Firstly, the solution expression of the navigation error for a long-range spacecraft is analyzed and modified, and the influence of the earth's gravity field on the flight track of the spacecraft is evaluated. Then, given a limited error budget for the deviation of the spacecraft's drop point, the accuracy requirements for calculating the external disturbing gravity field are discussed. Secondly, the data truncation error and the propagated data error are studied and estimated, and quotas for the survey resolution and computation accuracy of gravity anomalies on the earth's surface are determined. Finally, based on the above quotas, a corresponding program of surveying line layout for marine gravity surveys is proposed. A numerical test proves the reasonableness and validity of the suggested program.

  19. Organizational Climate, Stress, and Error in Primary Care: The MEMO Study

    National Research Council Canada - National Science Library

    Linzer, Mark; Manwell, Linda B; Mundt, Marlon; Williams, Eric; Maguire, Ann; McMurray, Julia; Plane, Mary B

    2005-01-01

    Physician surveys assessed office environment and organizational climate (OC). Stress was measured using a 4-item scale, past errors were self-reported, and the likelihood of future errors was self-assessed using the OSPRE...

  20. Uncertainty in mapped geological boundaries held by a national geological survey:eliciting the geologists' tacit error model

    Science.gov (United States)

    Lark, R. M.; Lawley, R. S.; Barron, A. J. M.; Aldiss, D. T.; Ambrose, K.; Cooper, A. H.; Lee, J. R.; Waters, C. N.

    2015-06-01

    It is generally accepted that geological line work, such as mapped boundaries, is uncertain for various reasons. It is difficult to quantify this uncertainty directly, because the investigation of error in a boundary at a single location may be costly and time consuming, and many such observations are needed to estimate an uncertainty model with confidence. However, it is recognized across many disciplines that experts generally have a tacit model of the uncertainty of information that they produce (interpretations, diagnoses, etc.) and formal methods exist to extract this model in usable form by elicitation. In this paper we report a trial in which uncertainty models for geological boundaries mapped by geologists of the British Geological Survey (BGS) in six geological scenarios were elicited from a group of five experienced BGS geologists. In five cases a consensus distribution was obtained, which reflected both the initial individually elicited distributions and a structured process of group discussion in which individuals revised their opinions. In a sixth case a consensus was not reached. This concerned a boundary between superficial deposits where the geometry of the contact is hard to visualize. The trial showed that the geologists' tacit model of uncertainty in mapped boundaries reflects factors in addition to the cartographic error usually treated by buffering line work or in written guidance on its application. It suggests that further application of elicitation, to scenarios at an appropriate level of generalization, could be useful to provide working error models for the application and interpretation of line work.

  1. Using a Delphi Method to Identify Human Factors Contributing to Nursing Errors.

    Science.gov (United States)

    Roth, Cheryl; Brewer, Melanie; Wieck, K Lynn

    2017-07-01

    The purpose of this study was to identify human factors associated with nursing errors. Using a Delphi technique, the study gathered feedback from a panel of nurse experts (n = 25) on an initial qualitative survey questionnaire, then summarized the results with feedback and confirmation. Synthesized factors regarding causes of errors were incorporated into a quantitative Likert-type scale, and the original expert panel participants were queried a second time to validate responses. The list identified 24 items as the most common causes of nursing errors, including swamping and errors made by others that nurses are expected to recognize and fix. The responses provided a consensus top-10 error list based on means, with heavy workload and fatigue at the top of the list. The use of the Delphi survey established consensus and developed a platform upon which future study of nursing errors can evolve as a link to future solutions. This list of human factors in nursing errors should serve to stimulate dialogue among nurses about how to prevent errors and improve outcomes. Human and system failures have been the subject of an abundance of research, yet nursing errors continue to occur. © 2016 Wiley Periodicals, Inc.

  2. The Effect of Random Error on Diagnostic Accuracy Illustrated with the Anthropometric Diagnosis of Malnutrition

    Science.gov (United States)

    2016-01-01

    Background It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will lead to an inflated prevalence of that condition if there are measurement errors. Methods and results A Monte Carlo simulation was conducted of anthropometric assessment of children with malnutrition. Random errors of increasing magnitude were imposed upon the populations and showed that there was an increase in the standard deviation with each of the errors that became exponentially greater with the magnitude of the error. The potential magnitude of the resulting error in the reported prevalence of malnutrition was compared with published international data and found to be of sufficient magnitude to make a number of surveys, and the numerous reports and analyses that used these data, unreliable. Conclusions The effect of random error in public health surveys, and in the data upon which diagnostic cut-off points are derived to define "health", has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments, measurer selection, training and supervision, routine estimation of the likely magnitude of errors using standardization tests, use of the statistical likelihood of error to exclude data from analysis, and full reporting of these procedures in order to judge the reliability of survey reports. PMID:28030627
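
    The core of such a simulation fits in a few lines. In this sketch the Gaussian population, the z-score cut-off of -2 (the usual anthropometric definition of malnutrition), the sample size, and the error magnitudes are illustrative assumptions; it shows the spread and the apparent prevalence growing with the random error.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000
        true_z = rng.normal(0.0, 1.0, n)          # true weight-for-height z-scores
        true_prev = np.mean(true_z < -2)          # true prevalence of z < -2

        for sd_err in (0.0, 0.2, 0.4, 0.6):       # increasing random measurement error
            observed = true_z + rng.normal(0.0, sd_err, n)
            print(f"error SD {sd_err:.1f}: observed SD {observed.std():.2f}, "
                  f"prevalence {np.mean(observed < -2):.3%} (true {true_prev:.3%})")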

  3. Analysis of Students' Errors on Linear Programming at Secondary ...

    African Journals Online (AJOL)

    The purpose of this study was to identify secondary school students' errors on linear programming at 'O' level. It is based on the fact that students' errors inform teaching hence an essential tool for any serious mathematics teacher who intends to improve mathematics teaching. The study was guided by a descriptive survey ...

  4. Policies on documentation and disciplinary action in hospital pharmacies after a medication error.

    Science.gov (United States)

    Bauman, A N; Pedersen, C A; Schommer, J C; Griffith, N L

    2001-06-15

    Hospital pharmacies were surveyed about policies on medication error documentation and actions taken against pharmacists involved in an error. The survey was mailed to 500 randomly selected hospital pharmacy directors in the United States. Data were collected on the existence of medication error reporting policies, what types of errors were documented and how, and hospital demographics. The response rate was 28%. Virtually all of the hospitals had policies and procedures for medication error reporting. Most commonly, documentation of oral and written reprimand was placed in the personnel file of a pharmacist involved in an error. One sixth of respondents had no policy on documentation or disciplinary action in the event of an error. Approximately one fourth of respondents reported that suspension or termination had been used as a form of disciplinary action; legal action was rarely used. Many respondents said errors that caused harm (42%) or death (40%) to the patient were documented in the personnel file, but 34% of hospitals did not document errors in the personnel file regardless of error type. Nearly three fourths of respondents differentiated between errors caught and not caught before a medication leaves the pharmacy and between errors caught and not caught before administration to the patient. More emphasis is needed on documentation of medication errors in hospital pharmacies.

  5. An Analysis of Students Error In Solving PISA 2012 And Its Scaffolding

    Directory of Open Access Journals (Sweden)

    Yurizka Melia Sari

    2017-08-01

    Based on the 2012 PISA survey, Indonesia placed only 64th out of 65 participating countries. The survey suggests that students' abilities in reasoning, spatial orientation, and problem solving are lower than those of students in other participating countries, especially in South East Asia. Nevertheless, the PISA results do not clearly reveal the students' difficulties in solving PISA problems, such as the location and the types of students' errors. Therefore, analyzing students' errors in solving PISA problems is an essential countermeasure to help students solve mathematics problems and to develop scaffolding. Based on the data analysis, it was found that the subjects made five types of error: reading errors, comprehension errors, transformation errors, process skill errors, and encoding errors. The most common mistake the subjects made was encoding error, with a share of 26%, while reading errors were the fewest, at only 12%. The scaffolding given consisted of explaining the problem carefully and making a summary of new words and finding their meaning, restructuring problem-solving strategies, and reviewing the results of solving the problem.

  6. Subdivision Error Analysis and Compensation for Photoelectric Angle Encoder in a Telescope Control System

    Directory of Open Access Journals (Sweden)

    Yanrui Su

    2015-01-01

    As the position sensor, the photoelectric angle encoder affects the accuracy and stability of a telescope control system (TCS). A TCS-based subdivision error compensation method for the encoder is proposed. First, six types of subdivision error source are extracted from the mathematical expressions of the subdivision signals. Then the period-length relationships between subdivision signals and subdivision errors are deduced, and an error compensation algorithm utilizing only the shaft position of the TCS is put forward, along with two control models: in Model I the algorithm is applied only to the speed loop of the TCS, and in Model II it is applied to both the speed loop and the position loop. In the context of an actual project, the elevation jitter phenomenon of the telescope is discussed to decide the necessity of DC-type subdivision error compensation. Low-speed elevation performance before and after error compensation is compared, leading to the choice of Model II. In contrast to the original performance, the maximum position error of the elevation axis with DC subdivision error compensation is reduced by approximately 47.9%, from 1.42″ to 0.74″, and elevation jitter decreases greatly. This method can compensate encoder subdivision errors effectively and improve the stability of the TCS.

  7. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    Science.gov (United States)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  8. Systematic errors of EIT systems determined by easily-scalable resistive phantoms

    International Nuclear Information System (INIS)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-01-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  9. Linking Errors between Two Populations and Tests: A Case Study in International Surveys in Education

    Directory of Open Access Journals (Sweden)

    Dirk Hastedt

    2015-06-01

    This simulation study was prompted by the current increased interest in linking national studies to international large-scale assessments (ILSAs) such as IEA's TIMSS, IEA's PIRLS, and OECD's PISA. Linkage in this scenario is achieved by including items from the international assessments in the national assessments, on the premise that the average achievement scores from the latter can be linked to the international metric. In addition to raising issues associated with different testing conditions, administrative procedures, and the like, this approach also poses psychometric challenges. This paper endeavors to shed some light on the effects, the linking errors in particular, that can be expected by countries using this practice. The ILSA selected for this simulation study was IEA TIMSS 2011, and the three countries used as the national assessment cases were Botswana, Honduras, and Tunisia, all of which participated in TIMSS 2011. The items selected as common to the simulated national tests and the international test came from the Grade 4 TIMSS 2011 mathematics items that IEA released into the public domain after completion of this assessment. The findings of the current study show that linking errors reached acceptable levels if 30 or more items were used for the linkage, although the errors were still significantly higher than the TIMSS cutoffs. Comparison of the estimated country averages based on the simulated national surveys and the averages based on the international TIMSS assessment revealed only one instance across the three countries of the estimates approaching parity. Also, the percentages of students in these countries who actually reached the defined benchmarks on the TIMSS achievement scale differed significantly from the results based on TIMSS and the results for the simulated national assessments. As a conclusion, we advise against using groups of released items from international assessments in national assessments.
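
    A back-of-the-envelope version of the linkage studied here, under our own illustrative assumptions: the national assessment is aligned to the international metric by the mean difficulty difference on k common items, and the linking error is the standard error of that mean, shrinking as 1/sqrt(k), consistent with the finding that 30 or more items are needed. The item difficulties below are simulated, not TIMSS values.

        import numpy as np

        rng = np.random.default_rng(7)
        k = 30                                    # number of common (released) items
        b_intl = rng.normal(0.0, 1.0, k)          # item difficulties, international calibration
        b_natl = b_intl + 0.25 + rng.normal(0.0, 0.15, k)   # national calibration: shift + noise

        diff = b_natl - b_intl
        link_constant = diff.mean()               # mean-mean shift applied to national scores
        linking_error = diff.std(ddof=1) / np.sqrt(k)
        print(f"link constant {link_constant:.3f}, linking error {linking_error:.3f}")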

  10. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung...
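
    The variance inflation is easy to demonstrate numerically. In the sketch below, the integrand, the grid size, and the Gaussian model for the location errors are our own illustrative choices rather than the models analyzed in the paper: the integral of a periodic function over [0, 1) is estimated from a systematic grid with a random start, and jitter in the sample locations inflates the variance of the estimator.

        import numpy as np

        rng = np.random.default_rng(3)
        f = lambda x: 1.0 + 0.5 * np.sin(12 * np.pi * x)    # integral over [0, 1) is 1
        n, reps, spacing = 20, 5000, 1.0 / 20

        def estimate(loc_error_sd):
            u = rng.uniform(0, spacing, reps)                # random grid start per replicate
            pts = u[:, None] + spacing * np.arange(n)        # exact systematic grid
            pts = pts + rng.normal(0, loc_error_sd, pts.shape)   # errors in sample locations
            return spacing * f(pts % 1.0).sum(axis=1)        # estimator of the integral

        for sd in (0.0, 0.005, 0.01, 0.02):
            print(f"location error SD {sd:.3f}: var = {estimate(sd).var():.2e}")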

  11. Comparing acquired angioedema with hereditary angioedema (types I/II): findings from the Icatibant Outcome Survey.

    Science.gov (United States)

    Longhurst, H J; Zanichelli, A; Caballero, T; Bouillet, L; Aberer, W; Maurer, M; Fain, O; Fabien, V; Andresen, I

    2017-04-01

    Icatibant is used to treat acute hereditary angioedema with C1 inhibitor deficiency types I/II (C1-INH-HAE types I/II) and has shown promise in angioedema due to acquired C1 inhibitor deficiency (C1-INH-AAE). Data from the Icatibant Outcome Survey (IOS) were analysed to evaluate the effectiveness of icatibant in the treatment of patients with C1-INH-AAE and to compare disease characteristics with those of C1-INH-HAE types I/II. Key medical history (including prior occurrence of attacks) was recorded upon IOS enrolment. Thereafter, data were recorded retrospectively at approximately 6-month intervals during patient follow-up visits. In the icatibant-treated population, 16 patients with C1-INH-AAE had 287 attacks and 415 patients with C1-INH-HAE types I/II had 2245 attacks. Patients with C1-INH-AAE versus C1-INH-HAE types I/II were more often male (69 versus 42%; P = 0·035) and had a significantly later mean (95% confidence interval) age of symptom onset [57·9 (51·33-64·53) versus 14·0 (12·70-15·26) years]. Time from symptom onset to diagnosis was significantly shorter in patients with C1-INH-AAE versus C1-INH-HAE types I/II (mean 12·3 months versus 118·1 months; P = 0·006). Patients with C1-INH-AAE showed a trend for higher occurrence of attacks involving the face (35 versus 21% of attacks; P = 0·064). Overall, angioedema attacks were more severe in patients with C1-INH-HAE types I/II than in those with C1-INH-AAE (61 versus 40% of attacks were classified as severe to very severe). © 2016 British Society for Immunology.

  12. Aliasing errors in measurements of beam position and ellipticity

    International Nuclear Information System (INIS)

    Ekdahl, Carl

    2005-01-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.

  13. Aliasing errors in measurements of beam position and ellipticity

    Science.gov (United States)

    Ekdahl, Carl

    2005-09-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.

  14. Characterisation of false-positive observations in botanical surveys

    Directory of Open Access Journals (Sweden)

    Quentin J. Groom

    2017-05-01

    Errors in botanical surveying are a common problem. The presence of a species is easily overlooked, leading to false absences, while misidentifications and other mistakes lead to false-positive observations. While it is common knowledge that these errors occur, there are few data that can be used to quantify and describe them. Here we characterise false-positive errors for a controlled set of surveys conducted as part of a field identification test of botanical skill. Surveys were conducted at sites with a verified list of vascular plant species. The candidates were asked to list all the species they could identify in a defined botanically rich area. They were told beforehand that their final score would be the sum of the correct species they listed, but that false-positive errors would count against their overall grade. The number of errors varied considerably between people; some people created a high proportion of false-positive errors, and these people were scattered across all skill levels. Therefore, a person's ability to correctly identify a large number of species is not a safeguard against the generation of false-positive errors. There was no phylogenetic pattern to falsely observed species; however, rare species are more likely to be false positives, as are species from species-rich genera. Raising the threshold for the acceptance of an observation reduced false-positive observations dramatically, but at the expense of more false-negative errors. False-positive error rates in field surveying of plants are higher than many people may appreciate. Greater stringency is required before accepting species as present at a site, particularly for rare species. Combining multiple surveys resolves the problem, but requires a considerable increase in effort to achieve the same sensitivity as a single survey. Therefore, other methods should be used to raise the threshold for the acceptance of a species. For example, digital data input systems that can verify

  15. Pecan nutshell as biosorbent to remove Cu(II), Mn(II) and Pb(II) from aqueous solutions.

    Science.gov (United States)

    Vaghetti, Julio C P; Lima, Eder C; Royer, Betina; da Cunha, Bruna M; Cardoso, Natali F; Brasil, Jorge L; Dias, Silvio L P

    2009-02-15

    In the present study we report for the first time the feasibility of pecan nutshell (PNS, Carya illinoensis) as an alternative biosorbent to remove Cu(II), Mn(II) and Pb(II) metallic ions from aqueous solutions. The ability of PNS to remove the metallic ions was investigated using a batch biosorption procedure, and the effects of pH and biosorbent dosage on the adsorption capacity of PNS were studied. Four kinetic models were tested; the adsorption kinetics were best fitted by a fractionary-order kinetic model. The kinetic data were also fitted to an intra-particle diffusion model, presenting three linear regions, which indicates that the adsorption kinetics follow multiple sorption rates. The equilibrium data were fitted to the Langmuir, Freundlich, Sips and Redlich-Peterson isotherm models. Taking into account a statistical error function, the data were best fitted by the Sips isotherm model. The maximum biosorption capacities of PNS were 1.35, 1.78 and 0.946 mmol g(-1) for Cu(II), Mn(II) and Pb(II), respectively.
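
    As an illustration of the model-selection step, the sketch below fits Langmuir and Sips isotherms with scipy and compares a sum-of-squared-errors statistic; the equilibrium data and starting values are invented for the example, not taken from the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(ce, qmax, kl):
            return qmax * kl * ce / (1 + kl * ce)

        def sips(ce, qmax, ks, n):
            return qmax * (ks * ce) ** n / (1 + (ks * ce) ** n)

        # hypothetical equilibrium data: Ce (mmol/L) and q (mmol/g)
        ce = np.array([0.1, 0.3, 0.6, 1.0, 2.0, 4.0, 6.0])
        q = np.array([0.28, 0.55, 0.80, 1.00, 1.20, 1.31, 1.34])

        for name, model, p0 in [("Langmuir", langmuir, (1.4, 1.0)),
                                ("Sips", sips, (1.4, 1.0, 1.0))]:
            popt, _ = curve_fit(model, ce, q, p0=p0)
            sse = np.sum((q - model(ce, *popt)) ** 2)   # error function for model selection
            print(f"{name}: params {np.round(popt, 3)}, SSE {sse:.4f}")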

  16. Prevalence of refractive errors among junior high school students in ...

    African Journals Online (AJOL)

    Among school children, uncorrected refractive errors have a considerable impact on their participation and learning in class. The aim of this study was to assess the prevalence of refractive error among students in the Ejisu-Juabeng Municipality of Ghana. A survey with multi-stage sampling was undertaken. We interviewed ...

  17. High School and Beyond. 1980 Senior Cohort. Third Follow-Up (1986). Data File User's Manual. Volume II: Survey Instruments. Contractor Report.

    Science.gov (United States)

    Sebring, Penny; And Others

    Survey instruments used in the collection of data for the High School and Beyond base year (1980) through the third follow-up surveys (1986) are provided as Volume II of a user's manual for the senior cohort data file. The complete user's manual is designed to provide the extensive documentation necessary for using the cohort data files. Copies of…

  18. Unusual broad-line Mg II emitters among luminous galaxies in the baryon oscillation spectroscopic survey

    International Nuclear Information System (INIS)

    Roig, Benjamin; Blanton, Michael R.; Ross, Nicholas P.

    2014-01-01

    Many classes of active galactic nuclei (AGNs) have been observed and recorded since the discovery of Seyfert galaxies. In this paper, we examine the sample of luminous galaxies in the Baryon Oscillation Spectroscopic Survey. We find a potentially new observational class of AGNs, one with strong and broad Mg II λ2799 line emission, but very weak emission in other normal indicators of AGN activity, such as the broad-line Hα, Hβ, and the near-ultraviolet AGN continuum, leading to an extreme ratio of broad Hα/Mg II flux relative to normal quasars. Meanwhile, these objects' narrow-line flux ratios reveal AGN narrow-line regions with levels of activity consistent with the Mg II fluxes and in agreement with that of normal quasars. These AGN may represent an extreme case of the Baldwin effect, with very low continuum and high equivalent width relative to typical quasars, but their ratio of broad Mg II to broad Balmer emission remains very unusual. They may also be representative of a class of AGN where the central engine is observed indirectly with scattered light. These galaxies represent a small fraction of the total population of luminous galaxies (≅ 0.1%), but are more likely (about 3.5 times) to have AGN-like nuclear line emission properties than other luminous galaxies. Because Mg II is usually inaccessible for the population of nearby galaxies, there may exist a related population of broad-line Mg II emitters in the local universe which is currently classified as narrow-line emitters (Seyfert 2 galaxies) or low ionization nuclear emission-line regions.

  19. Evaluation of Analysis by Cross-Validation, Part II: Diagnostic and Optimization of Analysis Error Covariance

    Directory of Open Access Journals (Sweden)

    Richard Ménard

    2018-02-01

    We present a general theory of the estimation of analysis error covariances based on cross-validation, as well as a geometric interpretation of the method. In particular, we use the variance of passive observation-minus-analysis residuals and show that the true analysis error variance can be estimated without relying on the optimality assumption. This approach is used to obtain near-optimal analyses that are then used to evaluate the air quality analysis error using several different methods at active and passive observation sites. We compare the estimates according to the method of Hollingsworth-Lönnberg, the method of Desroziers et al., a new diagnostic we developed, and the perceived analysis error computed from the analysis scheme, and conclude that, as long as the analysis is near optimal, all estimates agree within a certain error margin.
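
    The identity underlying the passive-residual diagnostic is that, at observation sites withheld from the analysis, observation error and analysis error are independent, so var(o - a) = sigma_a^2 + sigma_o^2, and the analysis error variance follows by subtracting the known observation error variance. A toy numerical check, with all variances invented:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200_000
        truth = rng.normal(0.0, 5.0, n)

        sigma_o, sigma_a = 1.0, 0.8
        obs_passive = truth + rng.normal(0, sigma_o, n)   # withheld from the analysis
        analysis = truth + rng.normal(0, sigma_a, n)      # analysis error independent of obs

        resid = obs_passive - analysis                    # observation-minus-analysis residuals
        est_var_a = resid.var() - sigma_o**2              # subtract known observation error
        print(f"estimated analysis error SD: {np.sqrt(est_var_a):.3f} (true {sigma_a})")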

  20. An Analysis of Students Error in Solving PISA 2012 and Its Scaffolding

    OpenAIRE

    Sari, Yurizka Melia; Valentino, Erik

    2016-01-01

    Based on the 2012 PISA survey, Indonesia placed only 64th out of 65 participating countries. The survey suggests that students' abilities in reasoning, spatial orientation, and problem solving are lower than those of students in other participating countries, especially in South East Asia. Nevertheless, the PISA results do not clearly reveal the students' difficulties in solving PISA problems, such as the location and the types of students' errors. Therefore, analyzing students' errors in solving PI...

  1. Estimation of the limit of detection with a bootstrap-derived standard error by a partly non-parametric approach. Application to HPLC drug assays

    DEFF Research Database (Denmark)

    Linnet, Kristian

    2005-01-01

    Bootstrap, HPLC, limit of blank, limit of detection, non-parametric statistics, type I and II errors.
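
    Only the title and keywords of this record survive, so the following is purely our sketch of a partly non-parametric limit-of-detection estimate with a bootstrap standard error, under stated assumptions: a non-parametric limit of blank (95th percentile of blank responses), a parametric step to the limit of detection, and a bootstrap SE from recomputing the estimate on resampled data. All data are invented.

        import numpy as np

        rng = np.random.default_rng(5)
        blanks = rng.normal(0.02, 0.010, 60)      # hypothetical blank HPLC responses
        low = rng.normal(0.08, 0.015, 60)         # hypothetical low-concentration sample

        def lod(blank_sample, low_sample):
            lob = np.percentile(blank_sample, 95)          # non-parametric limit of blank
            return lob + 1.645 * low_sample.std(ddof=1)    # parametric step to limit of detection

        boot = [lod(rng.choice(blanks, 60, replace=True),
                    rng.choice(low, 60, replace=True)) for _ in range(2000)]
        print(f"LoD = {lod(blanks, low):.4f}, bootstrap SE = {np.std(boot, ddof=1):.4f}")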

  2. Exploring the Milky Way halo with SDSS-II SN survey RR Lyrae stars

    Science.gov (United States)

    De Lee, Nathan

    This thesis details the creation of a large catalog of RR Lyrae stars, their lightcurves, and their associated photometric and kinematic parameters. The catalog contains 421 RR Lyrae stars, 305 RRab and 116 RRc. Of these, 241 stars have stellar spectra taken with either the Blanco 4m RC spectrograph or the SDSS/SEGUE survey, and in some cases both. From these spectra, and photometric methods derived from them, an analysis is conducted of the RR Lyrae stars' distribution, metallicity, kinematics, and photometric properties within the halo. All of these RR Lyrae originate from the SDSS-II Supernova Survey. The SDSS-II SN Survey covers a 2.5 degree equatorial stripe ranging from -60 to +60 degrees in RA, corresponding to relatively high southern Galactic latitudes in the anti-center direction. The full catalog ranges from g_0 magnitude 13 to 20, which covers a distance of 3 to 95 kpc from the sun. Using this sample, we explore the Oosterhoff dichotomy through the Δ log P method as a function of |Z| distance from the plane. This results in a clear division of the RRab stars into OoI and OoII groups at lower |Z|, but the population becomes dominated by OoI stars at higher |Z|. The idea of a dual halo is explored primarily in the context of radial velocity distributions as a function of |Z|. In particular, V_gsr, the radial velocity in the Galactic standard of rest, is used as a proxy for V_φ, the cylindrical rotational velocity. This is then compared against a single-halo model galaxy, which results in very similar V_gsr histograms for both at low to medium |Z|. However, at high |Z| there is a clear separation into two distinct velocity groups in the data without a corresponding separation in the model, suggesting that at least a two-component model for the halo is necessary. The final part of the analysis involves [Fe/H] measurements from both spectra and photometric relations, cut in both |Z| and radial velocity. In this case

  3. Psychological safety and error reporting within Veterans Health Administration hospitals.

    Science.gov (United States)

    Derickson, Ryan; Fishman, Jonathan; Osatuke, Katerine; Teclaw, Robert; Ramsel, Dee

    2015-03-01

    In psychologically safe workplaces, employees feel comfortable taking interpersonal risks, such as pointing out errors. Previous research suggested that a psychologically safe climate optimizes organizational outcomes. We evaluated psychological safety levels in Veterans Health Administration (VHA) hospitals and assessed their relationship to employees' willingness to report medical errors. We conducted an ANOVA on psychological safety scores from a VHA employee census survey (n = 185,879), assessing variability of means across racial and supervisory levels. We examined organizational climate assessment interviews (n = 374), evaluating how many employees asserted willingness to report errors (or not) and their stated reasons. Finally, based on survey data, we identified two hospitals (one psychologically safe, one psychologically unsafe) and compared their numbers of employees who would be willing or unwilling to report an error. Psychological safety increased with supervisory level. Employees at the psychologically unsafe hospital (71% would report, 13% would not) were less willing to report an error than those at the psychologically safe hospital (91% would, 0% would not). A substantial minority would not report an error and were willing to admit so in a private interview setting. Their stated reasons, as well as the higher psychological safety means for supervisory employees, both suggest power as an important determinant. Intentions to report were associated with psychological safety, strongly suggesting this climate aspect is instrumental to improving patient safety and reducing costs.

  4. Role of memory errors in quantum repeaters

    International Nuclear Information System (INIS)

    Hartmann, L.; Kraus, B.; Briegel, H.-J.; Duer, W.

    2007-01-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in the standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances; however, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.

  5. The Unique Optical Design of the CTI-II Survey Telescope

    Science.gov (United States)

    Ackermann, Mark R.; McGraw, J. T.; MacFarlane, M.

    2006-12-01

    The CCD/Transit Instrument with Innovative Instrumentation (CTI-II) is being developed for precision ground-based astrometric and photometric astronomical observations. The 1.8m telescope will be stationary and near-zenith pointing, and will feature a CCD-mosaic array operated in time-delay and integrate (TDI) mode to image a continuous strip of the sky in five bands. The heart of the telescope is a Nasmyth-like bent-Cassegrain optical system optimized to produce near diffraction-limited images with near-zero distortion over a circular 1.42 deg field. The optical design includes an f/2.2 parabolic ULE primary with no central hole, salvaged from the original CTI telescope, and adds the requisite hyperbolic secondary, a folding flat, and a highly innovative all-spherical, five-lens corrector which includes three plano surfaces. The reflective and refractive portions of the design have been optimized as individual but interdependent systems, so that the same reflective system can be used with slightly different refractive correctors. At present, two nearly identical corrector designs are being evaluated, one fabricated from BK-7 glass and the other of fused silica. The five-lens corrector consists of an air-spaced triplet separated from a follow-on air-spaced doublet. Either design produces 0.25 arcsecond images at 83% encircled energy with a maximum of 0.0005% distortion. The innovative five-lens corrector design has been applied to other current and planned Cassegrain, RC, and super-RC optical systems requiring correctors. The basic five-lens approach always results in improved performance compared to the original designs. In some cases, the improvement in image quality is small but includes substantial reductions in distortion. In other cases, the improvement in image quality is substantial. Because the CTI-II corrector is designed for a parabolic primary, it might be especially useful for liquid mirror telescopes. We describe and discuss the CTI-II optical design with respect

  6. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Stratification in Business and Agriculture Surveys with R

    Directory of Open Access Journals (Sweden)

    Marco Ballin

    2016-06-01

    Sample surveys of enterprises and farms usually adopt a one-stage stratified sampling design: the sampling frame is divided into non-overlapping strata and simple random sampling is carried out independently in each stratum. Stratification allows for reduction of the sampling error and permits accurate estimates. Stratified sampling requires a number of closely related decisions: (i) how to stratify the population and how many strata to consider; (ii) the size of the whole sample and its partitioning among the strata (the so-called allocation). This paper deals mainly with problem (i) and shows how to tackle it in the R environment using packages already available on CRAN.
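
    The companion problem (ii) has a classical closed form. The paper works in R, but the allocation logic is language-independent; here is a minimal Python sketch of Neyman allocation, n_h proportional to N_h*S_h, with invented stratum sizes and standard deviations.

        import numpy as np

        # hypothetical strata of an enterprise frame: population sizes and SDs of turnover
        N = np.array([5000, 2000, 500, 50])
        S = np.array([10.0, 40.0, 120.0, 800.0])
        n_total = 400

        n_h = n_total * (N * S) / np.sum(N * S)          # Neyman (optimal) allocation
        n_h = np.minimum(np.round(n_h).astype(int), N)   # cannot sample more than the stratum
        print(dict(enumerate(n_h)))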

  8. LOWER BOUNDS ON PHOTOMETRIC REDSHIFT ERRORS FROM TYPE Ia SUPERNOVA TEMPLATES

    International Nuclear Information System (INIS)

    Asztalos, S.; Nikolaev, S.; De Vries, W.; Olivier, S.; Cook, K.; Wang, L.

    2010-01-01

    Cosmology with Type Ia supernovae has heretofore required extensive spectroscopic follow-up to establish an accurate redshift. Though this resource-intensive approach is tolerable at the present discovery rate, the next generation of ground-based all-sky survey instruments will render it unsustainable. Photometry-based redshift determination may be a viable alternative, though the technique introduces non-negligible errors that ultimately degrade the ability to discriminate between competing cosmologies. We present a strictly template-based photometric redshift estimator and compute redshift reconstruction errors in the presence of statistical errors. Under highly degraded photometric conditions corresponding to a statistical error σ of 0.5, the residual redshift error is found to be 0.236 when assuming a nightly observing cadence and a single Large Synoptic Survey Telescope (LSST) u-band filter. Utilizing all six LSST bandpass filters reduces the residual redshift error to 9.1 x 10^-3. Assuming a more optimistic statistical error σ of 0.05, we derive residual redshift errors of 4.2 x 10^-4, 5.2 x 10^-4, 9.2 x 10^-4, and 1.8 x 10^-3 for observations occurring nightly, every 5th, 20th, and 45th night, respectively, in each of the six LSST bandpass filters. Adopting an observing cadence in which photometry is acquired with all six filters every 5th night and a realistic supernova distribution, binned redshift errors are combined with photometric errors with a σ of 0.17 and systematic errors with a σ ~ 0.003 to derive joint errors (σ_w, σ_w') of (0.012, 0.066), respectively, in (w, w') with 68% confidence using the Fisher matrix formalism. Though highly idealized in the present context, the methodology is nonetheless quite relevant for the next generation of ground-based all-sky surveys.
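
    A strictly template-based estimator of this kind amounts to a chi-square scan over a redshift grid, comparing observed band fluxes with the template's redshifted fluxes. The sketch below is schematic: the Gaussian "SED", the band wavelengths, and the noise level are our stand-ins for a real SN Ia spectral template and the LSST filter set.

        import numpy as np

        rng = np.random.default_rng(11)
        bands = np.array([360., 480., 620., 750., 870., 1000.])  # fake effective wavelengths, nm

        def template_flux(z, bands):
            """Toy rest-frame SED redshifted to the observer frame (stand-in for a SN Ia template)."""
            rest = bands / (1 + z)
            return np.exp(-0.5 * ((rest - 440.0) / 120.0) ** 2)

        z_true, sigma = 0.35, 0.05
        obs = template_flux(z_true, bands) + rng.normal(0, sigma, bands.size)

        z_grid = np.linspace(0.0, 1.0, 1001)
        chi2 = [np.sum((obs - template_flux(z, bands)) ** 2) / sigma**2 for z in z_grid]
        print(f"photometric redshift estimate: {z_grid[int(np.argmin(chi2))]:.3f} (true {z_true})")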

  9. A survey of camera error sources in machine vision systems

    Science.gov (United States)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.

  10. Investigating Medication Errors in Educational Health Centers of Kermanshah

    Directory of Open Access Journals (Sweden)

    Mohsen Mohammadi

    2015-08-01

    Background and objectives: Medication errors can be a threat to the safety of patients, and preventing them requires reporting and investigating such errors. The present study was conducted to investigate medication errors in the educational health centers of Kermanshah. Material and Methods: This is an applied, descriptive-analytical study conducted as a survey. The error report form of the Ministry of Health and Medical Education was used for data collection. The study population included all personnel (nurses, doctors, paramedics) of the educational health centers of Kermanshah; those who reported committed errors were selected as the sample. Data were analysed using descriptive statistics and the chi-square test in SPSS version 18. Results: The findings showed that most errors were related to improper use of medication, the fewest errors were related to improper dose, and the majority of errors occurred in the morning. The most frequent reason for errors was staff negligence and the least frequent was lack of knowledge. Conclusion: The health care system should create an environment in which personnel can detect and report errors, identify the factors causing errors, train personnel, and provide a good working environment with a standard workload.

  11. Research trend on human error reduction

    International Nuclear Information System (INIS)

    Miyaoka, Sadaoki

    1990-01-01

    Human error has been a problem in all industries. In 1988, the Bureau of Mines, Department of the Interior, USA, carried out a worldwide survey on human error in all industries in relation to fatal accidents in mines. The results differed according to the methods of collecting data, but the proportion of human error in total accidents ranged widely, from 20% to 85%, averaging 35%. The rate of occurrence of accidents and troubles in Japanese nuclear power stations is shown; the rate of occurrence of human error is 0-0.5 cases/reactor-year, which has not varied much. Consequently, the proportion of human error in the total has tended to increase, and reducing human error has become important for lowering the rate of occurrence of accidents and troubles hereafter. After the TMI accident in 1979 in the USA, research on the man-machine interface became active, and after the Chernobyl accident in 1986 in the USSR, the problem of organization and management has been studied. In Japan, 'Safety 21' was drawn up by the Advisory Committee for Energy, and the annual reports on nuclear safety also pointed out the importance of human factors. The state of research on human factors in Japan and abroad and three targets for reducing human error are reported. (K.I.)

  12. Hydra II: A Faint and Compact Milky Way Dwarf Galaxy Found in the Survey of the Magellanic Stellar History

    OpenAIRE

    Martin, NF; Nidever, DL; Besla, G; Olsen, K; Walker, AR; Vivas, AK; Gruendl, RA; Kaleida, CC; Muñoz, RR; Blum, RD; Saha, A; Conn, BC; Bell, EF; Chu, YH; Cioni, MRL

    2015-01-01

    © 2015. The American Astronomical Society. All rights reserved. We present the discovery of a new dwarf galaxy, Hydra II, found serendipitously within the data from the ongoing Survey of the Magellanic Stellar History conducted with the Dark Energy Camera on the Blanco 4 m Telescope. The new satellite is compact (r_h = 68 ± 11 pc) and faint (M_V = -4.8 ± 0.3), but well within the realm of dwarf galaxies. The stellar distribution of Hydra II in the color-magnitude diagram is well-described by a m...

  13. A two-phase sampling survey for nonresponse and its paradata to correct nonresponse bias in a health surveillance survey.

    Science.gov (United States)

    Santin, G; Bénézet, L; Geoffroy-Perez, B; Bouyer, J; Guéguen, A

    2017-02-01

    The decline in participation rates in surveys, including epidemiological surveillance surveys, has become a real concern, since it may increase nonresponse bias. The aim of this study is to estimate the contribution of a complementary survey among a subsample of nonrespondents, and the additional contribution of paradata, to correcting for nonresponse bias in an occupational health surveillance survey. In 2010, 10,000 workers were randomly selected and sent a postal questionnaire. Sociodemographic data were available for the whole sample. After collection of the questionnaires, a complementary survey among a random subsample of 500 nonrespondents was performed, using a questionnaire administered by an interviewer. Paradata were collected for the complete subsample of the complementary survey. Nonresponse bias in the initial sample and in the combined samples was assessed using variables from administrative databases that were available for the whole sample and not subject to differential measurement errors. Prevalences corrected by reweighting were estimated, first using the initial survey alone and then the initial and complementary surveys combined, under several assumptions regarding the missing-data process. Results were compared by computing relative errors. The response rates of the initial and complementary surveys were 23.6% and 62.6%, respectively. For both the initial and the combined surveys, the relative errors decreased after correction for nonresponse on sociodemographic variables. For the combined surveys without paradata, relative errors decreased compared with the initial survey. The contribution of the paradata was weak. When a complex descriptive survey has a low response rate, a short complementary survey among nonrespondents, with a protocol that aims to maximize response rates, is useful. The contribution of sociodemographic variables to correcting for nonresponse bias is important, whereas the additional contribution of paradata in correcting for nonresponse bias is weak.

  14. Optimizing learning of a locomotor task: amplifying errors as needed.

    Science.gov (United States)

    Marchal-Crespo, Laura; López-Olóriz, Jorge; Jaeger, Lukas; Riener, Robert

    2014-01-01

    Research on motor learning has emphasized that errors drive motor adaptation. Thereby, several researchers have proposed robotic training strategies that amplify movement errors rather than decrease them. In this study, the effect of different robotic training strategies that amplify errors on learning a complex locomotor task was investigated. The experiment was conducted with a one degree-of freedom robotic stepper (MARCOS). Subjects were requested to actively coordinate their legs in a desired gait-like pattern in order to track a Lissajous figure presented on a visual display. Learning with three different training strategies was evaluated: (i) No perturbation: the robot follows the subjects' movement without applying any perturbation, (ii) Error amplification: existing errors were amplified with repulsive forces proportional to errors, (iii) Noise disturbance: errors were evoked with a randomly-varying force disturbance. Results showed that training without perturbations was especially suitable for a subset of initially less-skilled subjects, while error amplification seemed to benefit more skilled subjects. Training with error amplification, however, limited transfer of learning. Random disturbing forces benefited learning and promoted transfer in all subjects, probably because it increased attention. These results suggest that learning a locomotor task can be optimized when errors are randomly evoked or amplified based on subjects' initial skill level.

  15. A MEASUREMENT OF THE RATE OF TYPE Ia SUPERNOVAE IN GALAXY CLUSTERS FROM THE SDSS-II SUPERNOVA SURVEY

    International Nuclear Information System (INIS)

    Dilday, Benjamin; Jha, Saurabh W.; Bassett, Bruce; Becker, Andrew; Bender, Ralf; Hopp, Ulrich; Castander, Francisco; Cinabro, David; Frieman, Joshua A.; Galbany, LluIs; Miquel, Ramon; Garnavich, Peter; Goobar, Ariel; Ihara, Yutaka; Kessler, Richard; Lampeitl, Hubert; Nichol, Robert C.; Marriner, John; Molla, Mercedes

    2010-01-01

    We present measurements of the Type Ia supernova (SN) rate in galaxy clusters based on data from the Sloan Digital Sky Survey-II (SDSS-II) Supernova Survey. The cluster SN Ia rate is determined from 9 SN events in a set of 71 C4 clusters at z ≤ 0.17 and 27 SN events in 492 maxBCG clusters at 0.1 ≤ z ≤ 0.3. We find values for the cluster SN Ia rate of (0.37^{+0.17+0.01}_{-0.12-0.01}) SNur h^2 and (0.55^{+0.13+0.02}_{-0.11-0.01}) SNur h^2 (SNux = 10^{-12} L_{x⊙}^{-1} yr^{-1}) in C4 and maxBCG clusters, respectively, where the quoted errors are statistical and systematic, respectively. The SN rate for early-type galaxies is found to be (0.31^{+0.18+0.01}_{-0.12-0.01}) SNur h^2 and (0.49^{+0.15+0.02}_{-0.11-0.01}) SNur h^2 in C4 and maxBCG clusters, respectively. The SN rate for the brightest cluster galaxies (BCGs) is found to be (2.04^{+1.99+0.07}_{-1.11-0.04}) SNur h^2 and (0.36^{+0.84+0.01}_{-0.30-0.01}) SNur h^2 in C4 and maxBCG clusters, respectively. The ratio of the SN Ia rate in cluster early-type galaxies to that of the SN Ia rate in field early-type galaxies is 1.94^{+1.31+0.043}_{-0.91-0.015} and 3.02^{+1.31+0.062}_{-1.03-0.048}, for C4 and maxBCG clusters, respectively. The SN rate in galaxy clusters as a function of redshift, which probes the late-time SN Ia delay distribution, shows only weak dependence on redshift. Combining our current measurements with previous measurements, we fit the cluster SN Ia rate data to a linear function of redshift, and find r_L = [(0.49^{+0.15}_{-0.14}) + (0.91^{+0.85}_{-0.81}) × z] SNuB h^2. A comparison of the radial distribution of SNe in cluster to field early-type galaxies shows possible evidence for an enhancement of the SN rate in the cores of cluster early-type galaxies. With an observation of at most three hostless, intra-cluster SNe Ia, we estimate the fraction of cluster SNe that are hostless to be (9.4^{+8.3}_{-5.1})%.
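
    A short sketch of the final fitting step reported above: a weighted linear fit of the cluster rate against redshift, r_L = a + b·z. The redshifts, rates, and errors below are placeholders (the paper's asymmetric errors would first be symmetrized), so the output is illustrative only.

```python
# Weighted least-squares fit of a rate-vs-redshift relation r_L = a + b*z.
# Values are hypothetical stand-ins, not the paper's measurements.
import numpy as np

z = np.array([0.05, 0.14, 0.21, 0.30, 0.45])       # hypothetical redshifts
rate = np.array([0.52, 0.48, 0.61, 0.70, 0.95])    # hypothetical rates (SNuB h^2)
sigma = np.array([0.15, 0.12, 0.14, 0.20, 0.35])   # symmetrized 1-sigma errors

b, a = np.polyfit(z, rate, deg=1, w=1 / sigma)     # weights ~ 1/sigma
print(f"r_L = {a:.2f} + {b:.2f} * z  (SNuB h^2)")
```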

  16. Relationship of blood lead levels and blood pressure in NHANES II: additional calculations

    International Nuclear Information System (INIS)

    Gartside, P.S.

    1988-01-01

    In performing research on associations and relationships among the data thus far published from the NHANES II survey, only the data for the 64 communities involved may be used. The simple omission of a few essential data items makes any valid analysis of the data for the 20,325 individual respondents impossible. In this search for associations between blood lead levels and blood pressure in NHANES II, the method of forward stepwise regression was used. This avoids the problem of inflated error rates for blood lead, maximizes the number of data analyzed, and minimizes the number of independent variables entered into the regression model, thus avoiding the pitfalls that previous NHANES II research on blood lead and blood pressure fell into when using backward stepwise regression. The results of this research for white male adults, white female adults, and black adults were contradictory and lacked consistency and reliability. In addition, the overall average association between blood lead level and blood pressure was so minute that the only rational conclusion is that there is no evidence for this association to be found in the NHANES II data.
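
    A generic sketch of forward stepwise regression, the selection method named above: predictors enter one at a time, each step adding the variable that most improves R², and selection stops when the gain falls below a threshold. The data and threshold are synthetic stand-ins.

```python
# Forward stepwise selection: greedily add the predictor with the largest
# R^2 improvement; stop when the improvement is below min_gain.
import numpy as np
from sklearn.linear_model import LinearRegression

def forward_stepwise(X, y, min_gain=0.01):
    remaining, selected, best_r2 = list(range(X.shape[1])), [], 0.0
    while remaining:
        scores = []
        for j in remaining:
            cols = selected + [j]
            r2 = LinearRegression().fit(X[:, cols], y).score(X[:, cols], y)
            scores.append((r2, j))
        r2, j = max(scores)
        if r2 - best_r2 < min_gain:
            break
        selected.append(j); remaining.remove(j); best_r2 = r2
    return selected, best_r2

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 6))        # columns stand in for age, BMI, lead, ...
y = 120 + 2.0 * X[:, 1] + 0.5 * X[:, 3] + rng.normal(0, 5, 500)
print(forward_stepwise(X, y))        # -> indices of the selected predictors
```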

  17. Analysis of Human Error Types and Performance Shaping Factors in the Next Generation Main Control Room

    International Nuclear Information System (INIS)

    Sin, Y. C.; Jung, Y. S.; Kim, K. H.; Kim, J. H.

    2008-04-01

    Main control rooms of nuclear power plants have been computerized and digitalized in new and modernized plants, as information and digital technologies make great progress and become mature. The project surveyed human factors engineering issues in advanced MCRs using both a model-based approach and a literature-survey-based approach, and analyzed human error types and performance shaping factors for three human errors. The results of the project can be used for task analysis, evaluation of human error probabilities, and analysis of performance shaping factors in HRA analysis.

  18. THE SELF-CORRECTION OF ENGLISH SPEECH ERRORS IN SECOND LANGUAGE LEARNING

    Directory of Open Access Journals (Sweden)

    Ketut Santi Indriani

    2015-05-01

    The process of second language (L2) learning is strongly influenced by the factors of error reconstruction that occur when the language is learned. Errors will definitely appear in the learning process. However, errors can be used as a step to accelerate the process of understanding the language. Doing self-correction (with or without being given cues) is one example. In the aspect of speaking, self-correction is done immediately after the error appears. This study is aimed at finding (i) what speech errors the L2 speakers are able to identify, (ii) of the errors identified, what speech errors the L2 speakers are able to self-correct, and (iii) whether the self-correction of speech errors is able to immediately improve L2 learning. Based on the data analysis, it was found that the majority of identified errors are related to nouns (plurality), subject-verb agreement, grammatical structure, and pronunciation. L2 speakers tend to correct errors properly. Of the 78% of speech errors identified, as many as 66% could be self-corrected accurately by the L2 speakers. Based on the analysis, it was also found that self-correction is able to improve L2 learning ability directly. This is evidenced by the absence of repetition of the same error after the error had been corrected.

  19. Issues with data and analyses: Errors, underlying themes, and potential solutions.

    Science.gov (United States)

    Brown, Andrew W; Kaiser, Kathryn A; Allison, David B

    2018-03-13

    Some aspects of science, taken at the broadest level, are universal in empirical research. These include collecting, analyzing, and reporting data. In each of these aspects, errors can and do occur. In this work, we first discuss the importance of focusing on statistical and data errors to continually improve the practice of science. We then describe underlying themes of the types of errors and postulate contributing factors. To do so, we describe a case series of relatively severe data and statistical errors coupled with surveys of some types of errors to better characterize the magnitude, frequency, and trends. Having examined these errors, we then discuss the consequences of specific errors or classes of errors. Finally, given the extracted themes, we discuss methodological, cultural, and system-level approaches to reducing the frequency of commonly observed errors. These approaches will plausibly contribute to the self-critical, self-correcting, ever-evolving practice of science, and ultimately to furthering knowledge.

  20. Variability in Threshold for Medication Error Reporting Between Physicians, Nurses, Pharmacists, and Families.

    Science.gov (United States)

    Keefer, Patricia; Kidwell, Kelley; Lengyel, Candice; Warrier, Kavita; Wagner, Deborah

    2017-01-01

    Voluntary medication error reporting is an imperfect resource used to improve the quality of medication administration. It requires judgment by front-line staff to determine how to report enough to identify opportunities to improve patients' safety, but not jeopardize that safety by creating a culture of "report fatigue." This study aims to provide information on the interpretability of medication errors and the variability between subgroups of caregivers in the hospital setting. Survey participants included nurses, physicians (trainees and graduates), patients/families, and pharmacists across a large academic health system, including an attached free-standing pediatric hospital. Demographics and survey responses were collected and analyzed using Fisher's exact test with SAS v9.3. Statistically significant variability existed between the four groups for a majority of the questions. This included all cases designated as administration errors and many, but not all, cases of prescribing events. Commentary provided in the free-text portion of the survey was sub-analyzed and found to be associated with medication allergy reporting and a lack of education surrounding report characteristics. There is significant variability in the threshold to report specific medication errors in the hospital setting. More work needs to be done to further improve the education surrounding error reporting in hospitals for all noted subgroups.
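
    A minimal sketch of the kind of subgroup comparison described above: Fisher's exact test on a 2×2 table of would-report versus would-not-report counts for two caregiver groups. The counts are invented for illustration.

```python
# Fisher's exact test on a 2x2 contingency table comparing two subgroups'
# willingness to report a given medication error scenario.
from scipy.stats import fisher_exact

#                would report   would not report
table = [[45, 15],    # e.g., nurses (invented counts)
         [25, 35]]    # e.g., physicians (invented counts)
odds_ratio, p_value = fisher_exact(table)
print(f"OR={odds_ratio:.2f}, p={p_value:.4f}")
```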

  1. Treatment errors resulting from use of lasers and IPL by medical laypersons: results of a nationwide survey.

    Science.gov (United States)

    Hammes, Stefan; Karsai, Syrus; Metelmann, Hans-Robert; Pohl, Laura; Kaiser, Kathrine; Park, Bo-Hyun; Raulin, Christian

    2013-02-01

    The demand for hair and tattoo removal with laser and IPL technology (intense pulsed light technology) is continually increasing. Nowadays these treatments are often carried out by medical laypersons without medical supervision in franchise companies, wellness facilities, cosmetic institutes, and hair or tattoo studios. This is the first survey to document and discuss this issue and its effects on public health. Fifty patients affected by treatment errors caused by medical laypersons performing laser and IPL applications were evaluated in this retrospective study. We used a standardized questionnaire with accompanying photographic documentation. Among the reports there were some missing or no longer traceable parameters, which is why 7 cases could not be evaluated. The following complications occurred, with multiple answers possible: 81.4% pigmentation changes, 25.6% scars, 14% textural changes, and 4.6% incorrect information. The sources of error (multiple answers possible) were the following: 62.8% excessively high energy, 39.5% wrong device for the indication, 20.9% treatment of patients with darker skin or marked tanning, 7% no cooling, and 4.6% incorrect information. The causes of malpractice suggest insufficient training, inadequate diagnostic abilities, and the promising of unrealistic results. Direct supervision by a medical specialist, comprehensive experience in laser therapy, and compliance with quality guidelines are prerequisites for safe laser and IPL treatments. Legal measures to make such changes mandatory are urgently needed.

  2. Measurement Error in Income and Schooling and the Bias of Linear Estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    2017-01-01

    We propose a general framework for determining the extent of measurement error bias in ordinary least squares and instrumental variable (IV) estimators of linear models, while allowing for measurement error in the validation source. We apply this method by validating Survey of Health, Ageing and Retirement in Europe data with Danish administrative registers. Contrary to most validation studies, we find that measurement error in income is classical once we account for imperfect validation data. We find nonclassical measurement error in schooling, causing a 38% amplification bias in IV estimators.
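
    A simulation sketch of the classical measurement-error result the abstract builds on: OLS on a mismeasured regressor is attenuated by the reliability ratio, while instrumenting one error-ridden report with a second, independent one recovers the true slope. All parameters are illustrative.

```python
# Classical measurement error: attenuation of OLS and the IV remedy.
import numpy as np

rng = np.random.default_rng(3)
n, beta = 100_000, 0.8
x = rng.normal(0, 1, n)                       # true schooling/income
y = beta * x + rng.normal(0, 1, n)
x1 = x + rng.normal(0, 0.7, n)                # mismeasured regressor
x2 = x + rng.normal(0, 0.7, n)                # independent second report

ols = np.cov(x1, y)[0, 1] / np.var(x1)        # attenuated estimate
iv = np.cov(x2, y)[0, 1] / np.cov(x2, x1)[0, 1]   # instrument x1 with x2
lam = 1 / (1 + 0.7**2)                        # reliability (attenuation) ratio
print(f"true={beta}, ols={ols:.3f} (~ beta*{lam:.2f}), iv={iv:.3f}")
```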

  3. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  4. THE GREEN BANK TELESCOPE H II REGION DISCOVERY SURVEY. IV. HELIUM AND CARBON RECOMBINATION LINES

    Energy Technology Data Exchange (ETDEWEB)

    Wenger, Trey V.; Bania, T. M. [Astronomy Department, 725 Commonwealth Avenue, Boston University, Boston, MA 02215 (United States); Balser, Dana S. [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA, 22903-2475 (United States); Anderson, L. D. [Department of Physics, West Virginia University, Morgantown, WV 26506 (United States)

    2013-02-10

    The Green Bank Telescope H II Region Discovery Survey (GBT HRDS) found hundreds of previously unknown Galactic regions of massive star formation by detecting hydrogen radio recombination line (RRL) emission from candidate H II region targets. Since the HRDS nebulae lie at large distances from the Sun, they are located in previously unprobed zones of the Galactic disk. Here, we derive the properties of helium and carbon RRL emission from HRDS nebulae. Our target sample is the subset of the HRDS that has visible helium or carbon RRLs. This criterion gives a total of 84 velocity components (14% of the HRDS) with helium emission and 52 (9%) with carbon emission. For our highest quality sources, the average ⁴He⁺/H⁺ abundance ratio by number, y⁺, is 0.068 ± 0.023 (1σ). This is the same ratio as that measured for the sample of previously known Galactic H II regions. Nebulae without detected helium emission give robust y⁺ upper limits. There are 5 RRL emission components with y⁺ less than 0.04 and another 12 with upper limits below this value. These H II regions must have either a very low ⁴He abundance or contain a significant amount of neutral helium. The HRDS has 20 nebulae with carbon RRL emission but no helium emission at its sensitivity level. There is no correlation between the carbon RRL parameters and the 8 μm mid-infrared morphology of these nebulae.

  5. THE TYPE II SUPERNOVA RATE IN z {approx} 0.1 GALAXY CLUSTERS FROM THE MULTI-EPOCH NEARBY CLUSTER SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    Graham, M. L.; Sand, D. J. [Las Cumbres Observatory Global Telescope Network, 6740 Cortona Drive, Suite 102, Santa Barbara, CA 93117 (United States); Bildfell, C. J.; Pritchet, C. J. [Department of Physics and Astronomy, University of Victoria, P.O. Box 3055, STN CSC, Victoria BC V8W 3P6 (Canada); Zaritsky, D.; Just, D. W.; Herbert-Fort, S. [Steward Observatory, University of Arizona, Tucson, AZ 85721 (United States); Hoekstra, H. [Leiden Observatory, Leiden University, Niels Bohrweg 2, NL-2333 CA Leiden (Netherlands); Sivanandam, S. [Dunlap Institute for Astronomy and Astrophysics, 50 St. George St., Toronto, ON M5S 3H4 (Canada); Foley, R. J. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2012-07-01

    We present seven spectroscopically confirmed Type II cluster supernovae (SNe II) discovered in the Multi-Epoch Nearby Cluster Survey, a supernova survey targeting 57 low-redshift (0.05 < z < 0.15) galaxy clusters with the Canada-France-Hawaii Telescope. We find the rate of Type II supernovae within R_200 of z ≈ 0.1 galaxy clusters to be 0.026^{+0.085}_{-0.018}(stat)^{+0.003}_{-0.001}(sys) SNuM. Surprisingly, one SN II is in a red-sequence host galaxy that shows no clear evidence of recent star formation (SF). This is unambiguous evidence in support of ongoing, low-level SF in at least some cluster elliptical galaxies, and illustrates that galaxies that appear to be quiescent cannot be assumed to host only Type Ia SNe. Based on this single SN II we make the first measurement of the SN II rate in red-sequence galaxies, and find it to be 0.007^{+0.014}_{-0.007}(stat)^{+0.009}_{-0.001}(sys) SNuM. We also make the first derivation of cluster specific star formation rates (sSFR) from cluster SN II rates. We find that for all galaxy types the sSFR is 5.1^{+15.8}_{-3.1}(stat) ± 0.9(sys) M_⊙ yr^{-1} (10^{12} M_⊙)^{-1}, and for red-sequence galaxies only it is 2.0^{+4.2}_{-0.9}(stat) ± 0.4(sys) M_⊙ yr^{-1} (10^{12} M_⊙)^{-1}. These values agree with SFRs measured from infrared and ultraviolet photometry, and Hα emission from optical spectroscopy. Additionally, we use the SFR derived from our SN II rate to show that although a small fraction of cluster Type Ia SNe may originate in the young stellar population and experience a short delay time, these results do not preclude the use of cluster SN Ia rates to derive the late-time delay time distribution for SNe Ia.

  6. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
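
    A toy illustration of the coherent-versus-Pauli distinction, for a single unencoded qubit rather than the repetition code analyzed in the paper: a systematic rotation by ε per cycle accumulates amplitude coherently (error ≈ sin²(nε)), whereas the Pauli approximation treats each cycle as an independent flip with probability sin²(ε).

```python
# Coherent rotation errors add in amplitude (quadratic growth in n),
# while the Pauli/stochastic model adds probabilities (linear growth).
import numpy as np

eps, cycles = 0.01, np.arange(1, 201)
coherent = np.sin(cycles * eps) ** 2              # amplitudes accumulate
p = np.sin(eps) ** 2
pauli = 0.5 * (1 - (1 - 2 * p) ** cycles)         # independent flips per cycle
for n in (10, 50, 200):
    print(n, f"coherent={coherent[n-1]:.4f}", f"pauli={pauli[n-1]:.4f}")
```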

  7. The NASA F-15 Intelligent Flight Control Systems: Generation II

    Science.gov (United States)

    Buschbacher, Mark; Bosworth, John

    2006-01-01

    The Second Generation (Gen II) control system for the F-15 Intelligent Flight Control System (IFCS) program implements direct adaptive neural networks to demonstrate robust tolerance to faults and failures. The direct adaptive tracking controller integrates learning neural networks (NNs) with a dynamic inversion control law. The term direct adaptive is used because the error between the reference model and the aircraft response is being compensated or directly adapted to minimize error without regard to knowing the cause of the error. No parameter estimation is needed for this direct adaptive control system. In the Gen II design, the feedback errors are regulated with a proportional-plus-integral (PI) compensator. This basic compensator is augmented with an online NN that changes the system gains via an error-based adaptation law to improve aircraft performance at all times, including normal flight, system failures, mispredicted behavior, or changes in behavior resulting from damage.
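
    A conceptual sketch, not the NASA implementation: a PI compensator acting on the tracking error, augmented with an adaptive term whose weight is updated directly from the error, so an unknown disturbance is absorbed without estimating its cause. The plant, gains, and adaptation law below are all invented for illustration.

```python
# PI compensator plus a direct-adaptive augmentation term on a toy
# first-order plant with an unknown constant disturbance.
import numpy as np

kp, ki, gamma, dt = 2.0, 0.5, 0.1, 0.01
integral, w = 0.0, 0.0          # integrator state, adaptive weight
state, ref = 0.0, 1.0           # toy plant state, step reference

for _ in range(2000):
    e = ref - state
    integral += e * dt
    w += gamma * e * dt         # error-based adaptation law (sketch)
    u = kp * e + ki * integral + w
    # plant: dx/dt = -x + u - d, with unmodeled disturbance d = 0.8
    state += (-state + u - 0.8) * dt
print(f"final error = {ref - state:.4f}, adaptive weight = {w:.3f}")
```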

  8. Error-transparent evolution: the ability of multi-body interactions to bypass decoherence

    International Nuclear Information System (INIS)

    Vy, Os; Jacobs, Kurt; Wang Xiaoting

    2013-01-01

    We observe that multi-body interactions, unlike two-body interactions, can implement any unitary operation on an encoded system in such a way that the evolution is uninterrupted by noise that the encoding is designed to protect against. Such ‘error-transparent’ evolution is distinct from that usually considered in quantum computing, as the latter is merely correctable. We prove that the minimum body-ness required to protect (i) a qubit from a single type of Pauli error, (ii) a target qubit from a controller with such errors and (iii) a single qubit from all errors is three-body, four-body and five-body, respectively. We also discuss applications to computing, coherent feedback control and quantum metrology. Finally, we evaluate the performance of error-transparent evolution for some examples using numerical simulations. (paper)

  9. THE HETDEX PILOT SURVEY. IV. THE EVOLUTION OF [O II] EMITTING GALAXIES FROM z ∼ 0.5 TO z ∼ 0

    International Nuclear Information System (INIS)

    Ciardullo, Robin; Gronwall, Caryl; Schneider, Donald P.; Zeimann, Gregory R.

    2013-01-01

    We present an analysis of the luminosities and equivalent widths of the 284 [O II]-emitting galaxies found in a pilot survey for the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX). By combining emission-line fluxes obtained from the Mitchell spectrograph on the McDonald 2.7 m telescope with deep broadband photometry from archival data, we derive each galaxy's dereddened [O II] λ3727 luminosity and calculate its total star formation rate. We show that over the last ∼5 Gyr of cosmic time, there has been substantial evolution in the [O II] emission-line luminosity function, with L* decreasing by ∼0.6 ± 0.2 dex in the observed function, and by ∼0.9 ± 0.2 dex in the dereddened relation. Accompanying this decline is a significant shift in the distribution of [O II] equivalent widths, with the fraction of high equivalent-width emitters declining dramatically with time. Overall, the data imply that the relative intensity of star formation within galaxies has decreased over the past ∼5 Gyr, and that the star formation rate density of the universe has declined by a factor of ∼2.5 between z ∼ 0.5 and z ∼ 0. These observations represent the first [O II]-based star formation rate density measurements in this redshift range, and foreshadow the advancements which will be generated by the main HETDEX survey.

  10. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision.

  11. When Is a Failure to Replicate Not a Type II Error?

    Science.gov (United States)

    Vasconcelos, Marco; Urcuioli, Peter J.; Lionello-DeNolf, Karen M.

    2007-01-01

    Zentall and Singer (2007) challenge our conclusion that the work-ethic effect reported by Clement, Feltus, Kaiser, and Zentall (2000) may have been a Type I error by arguing that (a) the effect has been extensively replicated and (b) the amount of overtraining our pigeons received may not have been sufficient to produce it. We believe that our…

  12. Artificial neural networks study of the catalytic reduction of resazurin: stopped-flow injection kinetic-spectrophotometric determination of Cu(II) and Ni(II)

    International Nuclear Information System (INIS)

    Magni, Diana M.; Olivieri, Alejandro C.; Bonivardi, Adrian L.

    2005-01-01

    An artificial neural network (ANN) procedure was used in the development of a catalytic spectrophotometric method for the determination of Cu(II) and Ni(II) employing a stopped-flow injection system. The method is based on the catalytic action of these ions on the reduction of resazurin by sulfide. ANNs trained by back-propagation of errors allowed us to model the systems in concentration ranges of 0.5-6 and 1-15 mg l⁻¹ for Cu(II) and Ni(II), respectively, with a low relative error of prediction (REP) for each cation: REP[Cu(II)] = 0.85% and REP[Ni(II)] = 0.79%. The standard deviations of the repeatability (s_r) and of the within-laboratory reproducibility (s_w) were measured using standard solutions of Cu(II) and Ni(II) equal to 2.75 and 3.5 mg l⁻¹, respectively: s_r[Cu(II)] = 0.039 mg l⁻¹, s_r[Ni(II)] = 0.044 mg l⁻¹, s_w[Cu(II)] = 0.045 mg l⁻¹ and s_w[Ni(II)] = 0.050 mg l⁻¹. The ANN-kinetic method has been applied to the determination of Cu(II) and Ni(II) in electroplating solutions and provided satisfactory results as compared with the flame atomic absorption spectrophotometry method. The effect of resazurin, NaOH and Na₂S concentrations and the reaction temperature on the analytical sensitivity is discussed.
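
    A sketch of the calibration idea under stated assumptions: a small back-propagation network (here scikit-learn's MLPRegressor, standing in for whatever network the authors used) is trained to map simulated kinetic profiles to concentration, and the relative error of prediction (REP) is computed on held-out samples. The synthetic decay model and all parameters are stand-ins for the real kinetic-spectrophotometric data.

```python
# Train a small MLP on synthetic kinetic curves and report REP (%).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
conc = rng.uniform(0.5, 6.0, 300)                  # mg/l, Cu(II)-like range
t = np.linspace(0, 10, 20)
# pseudo first-order absorbance decay whose rate scales with concentration
X = np.exp(-0.1 * conc[:, None] * t[None, :]) + rng.normal(0, 0.01, (300, 20))

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X[:200], conc[:200])                       # back-propagation training
pred = net.predict(X[200:])                        # held-out predictions
rep = 100 * np.sqrt(np.mean((pred - conc[200:]) ** 2)) / conc[200:].mean()
print(f"REP = {rep:.2f}%")
```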

  13. Teamwork and Clinical Error Reporting among Nurses in Korean Hospitals

    OpenAIRE

    Jee-In Hwang, PhD; Jeonghoon Ahn, PhD

    2015-01-01

    Purpose: To examine levels of teamwork and its relationships with clinical error reporting among Korean hospital nurses. Methods: The study employed a cross-sectional survey design. We distributed a questionnaire to 674 nurses in two teaching hospitals in Korea. The questionnaire included items on teamwork and the reporting of clinical errors. We measured teamwork using the Teamwork Perceptions Questionnaire, which has five subscales including team structure, leadership, situation monitoring...

  14. Learning a locomotor task: with or without errors?

    Science.gov (United States)

    Marchal-Crespo, Laura; Schneider, Jasmin; Jaeger, Lukas; Riener, Robert

    2014-03-04

    Robotic haptic guidance is the most commonly used robotic training strategy to reduce performance errors while training. However, research on motor learning has emphasized that errors are a fundamental neural signal that drives motor adaptation. Thus, researchers have proposed robotic therapy algorithms that amplify movement errors rather than decrease them. However, to date, no study has analyzed with precision which training strategy is the most appropriate to learn an especially simple task. In this study, the impact of robotic training strategies that amplify or reduce errors on muscle activation and motor learning of a simple locomotor task was investigated in twenty-two healthy subjects. The experiment was conducted with the MAgnetic Resonance COmpatible Stepper (MARCOS), a special robotic device developed for investigations in the MR scanner. The robot moved the dominant leg passively and the subject was requested to actively synchronize the non-dominant leg to achieve an alternating stepping-like movement. Learning with four different training strategies that reduce or amplify errors was evaluated: (i) haptic guidance: errors were eliminated by passively moving the limbs; (ii) no guidance: no robot disturbances were presented; (iii) error amplification: existing errors were amplified with repulsive forces; (iv) noise disturbance: errors were evoked intentionally with a randomly varying force disturbance on top of the no-guidance strategy. Additionally, the activation of four lower limb muscles was measured by means of surface electromyography (EMG). Strategies that reduce or do not amplify errors limit muscle activation during training and result in poor learning gains. Adding random disturbing forces during training seems to increase attention, and therefore improve motor learning. Error amplification seems to be the most suitable strategy for initially less skilled subjects, perhaps because subjects could better detect their errors and correct them.

  15. Pilot information needs survey regarding climate relevant technologies

    International Nuclear Information System (INIS)

    Van Berkel, R.; Van Roekel, A.

    1997-02-01

    The objective of this pilot survey was to arrive at a preliminary understanding of the initial technology and technology information needs in non-Annex II countries in order to support international efforts to facilitate the transfer of technologies and know-how conducive to mitigating and adapting to climate change. The study encompassed two main components, i.e. the development of a survey instrument and the execution of a pilot survey among selected non-Annex II countries. The survey instrument addresses the present status of enabling activities; technology and technology information needs; and issues related to information supply and accessibility. The survey was distributed to national focal points in 20 non-Annex II countries and to at least 35 other stakeholders in five of these non-Annex II countries. A total of 27 completed questionnaires were received, covering 10 non-Annex II countries. 3 refs

  16. Pilot information needs survey regarding climate relevant technologies

    Energy Technology Data Exchange (ETDEWEB)

    Van Berkel, R.; Van Roekel, A.

    1997-02-01

    The objective of this pilot survey was to arrive at a preliminary understanding of the initial technology and technology information needs in non-Annex II countries in order to support international efforts to facilitate the transfer of technologies and know-how conducive to mitigating and adapting to climate change. The study encompassed two main components, i.e. the development of a survey instrument and the execution of a pilot survey among selected non-Annex II countries. The survey instrument addresses the present status of enabling activities; technology and technology information needs; and issues related to information supply and accessibility. The survey was distributed to national focal points in 20 non-Annex II countries and to at least 35 other stakeholders in five of these non-Annex II countries. A total of 27 completed questionnaires were received, covering 10 non-Annex II countries. 3 refs.

  17. Environmental monitoring survey of oil and gas fields in Region II in 2009. Summary report

    Energy Technology Data Exchange (ETDEWEB)

    2010-03-15

    The oil companies Statoil ASA, ExxonMobil Exploration and Production Norway AS, Total E&P Norge AS, Talisman Energy Norge AS and Marathon Petroleum Norge AS commissioned the Section of Applied Environmental Research at UNI RESEARCH AS to undertake the monitoring survey of Region II in 2009. Similar monitoring surveys in Region II were carried out in 1996, 2000, 2003 and 2006. The survey in 2009 included a total of 18 fields: Rev, Varg, Sigyn, Sleipner Vest, Sleipner Øst, Sleipner Alfa Nord, Glitne, Grane, Balder, Ringhorne, Jotun, Vale, Skirne, Byggve, Heimdal, Volve, Vilje and Alvheim. Sampling was conducted from the vessel MV Libas between May 18 and May 27. Samples were collected from a total of 137 sampling sites, of which 15 were regional sampling sites. Samples for chemical analysis were collected at all sites, whereas samples for benthos analysis were collected at 12 fields. As in previous surveys, Region II was divided into natural sub-regions: a shallow sub-region (77-96 m), a central sub-region (107-130 m) and a northern sub-region (115-119 m). The sediments of the shallow sub-region had a relatively lower content of TOM and pelite and a higher content of fine sand than the central and northern sub-regions. Calculated areas of contamination are shown for the sub-regions in Table 1.1. The fields Sigyn, Sleipner Alfa Nord, Glitne, Grane, Balder, Ringhorne, Jotun, Skirne, Byggve, Vilje and Alvheim showed no contamination of THC. At the other fields there were minor changes from 2006. The concentrations of barium increased in the central sub-region from 2006 to 2009, also at fields where no drilling had been undertaken during the last years. The same laboratory and methods were used during the last three regional investigations. The changes in barium concentrations may be due to high variability of barium concentrations in the sediments. This is supported by relatively large variations in average barium concentrations at the regional sampling sites.

  18. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation.

    Science.gov (United States)

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor

    2016-07-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors Δω_N was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal Δω_N was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.

  19. The Optical Design of the PEP-II Injection Beamlines

    CERN Document Server

    Fieguth, T

    1996-01-01

    The optical design of the PEP-II electron and positron Injection Beamlines is described. Use of the existing high-power, low-emittance beams available from the SLC damping rings requires that pulsed extraction of 9.0 GeV electrons and 3.1 GeV positrons for injection into the PEP-II rings occur in the early sectors of the accelerator. More than 5 kilometers of new beam transport lines have been designed and are being constructed to bring these beams to their respective rings. The optical design maximizes the tolerance to errors, especially those contributing to beam size and position jitter. Secondly, the design minimizes costs by utilizing existing components or component designs and minimizing the number required. Here we discuss important attributes including the choice of lattice and the specification of error tolerances, including errors in construction, alignment, field errors, power supply stability, and orbit correction.

  20. The Optical Design of the PEP-II Injection Beamlines

    Energy Technology Data Exchange (ETDEWEB)

    Fieguth, Ted

    2003-05-23

    The optical design of the PEP-II electron and positron Injection Beamlines is described. Use of the existing high-power, low-emittance beams available from the SLC damping rings requires that pulsed extraction of 9.0 GeV electrons and 3.1 GeV positrons for injection into the PEP-II rings occur in the early sectors of the accelerator. More than 5 kilometers of new beam transport lines have been designed and are being constructed to bring these beams to their respective rings. The optical design maximizes the tolerance to errors, especially those contributing to beam size and position jitter. Secondly, the design minimizes costs by utilizing existing components or component designs and minimizing the number required. Here we discuss important attributes including the choice of lattice and the specification of error tolerances, including errors in construction, alignment, field errors, power supply stability, and orbit correction.

  1. SDSS-II: Determination of shape and color parameter coefficients for SALT-II fit model

    Energy Technology Data Exchange (ETDEWEB)

    Dojcsak, L.; Marriner, J.; /Fermilab

    2010-08-01

    In this study we look at the SALT-II model of Type Ia supernova analysis, which determines the distance moduli based on the known absolute standard-candle magnitude of Type Ia supernovae. We take a look at the determination of the shape and color parameter coefficients, α and β respectively, in the SALT-II model with the intrinsic error that is determined from the data. Using the SNANA software package provided for the analysis of Type Ia supernovae, we use a standard Monte Carlo simulation to generate data with known parameters to use as a tool for analyzing trends in the model based on certain assumptions about the intrinsic error. In order to find the best standard-candle model, we try to minimize the residuals on the Hubble diagram by calculating the correct shape and color parameter coefficients. We can estimate the magnitude of the intrinsic errors required to obtain results with χ²/degree of freedom = 1. We can use the simulation to estimate the amount of color smearing as indicated by the data for our model. We find that the color smearing model works as a general estimate of the color smearing, and that we are able to use the RMS distribution in the variables as one method of estimating the correct intrinsic errors needed by the data to obtain the correct results for α and β. We then apply the resultant intrinsic error matrix to the real data and show our results.
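
    A sketch of the intrinsic-error logic described above: inflate the per-object measurement errors with an intrinsic scatter σ_int chosen so that χ² per degree of freedom equals 1 on the Hubble residuals. The residuals here are simulated, so the bisection simply recovers the scatter that was put in.

```python
# Solve for the intrinsic scatter that makes chi^2/dof = 1.
import numpy as np

rng = np.random.default_rng(5)
n = 400
sigma_meas = rng.uniform(0.08, 0.15, n)            # per-SN errors (mag)
true_sigma_int = 0.12
resid = rng.normal(0, np.hypot(sigma_meas, true_sigma_int))

def chi2_dof(sigma_int):
    return np.sum(resid**2 / (sigma_meas**2 + sigma_int**2)) / (n - 1)

lo, hi = 0.0, 1.0                                  # bisection on sigma_int
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if chi2_dof(mid) > 1 else (lo, mid)
print(f"recovered sigma_int ~ {0.5 * (lo + hi):.3f} (true {true_sigma_int})")
```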

  2. The comparison of cervical repositioning errors according to smartphone addiction grades.

    Science.gov (United States)

    Lee, Jeonhyeong; Seo, Kyochul

    2014-04-01

    [Purpose] The purpose of this study was to compare cervical repositioning errors according to the smartphone addiction grades of adults in their 20s. [Subjects and Methods] A survey of smartphone addiction was conducted of 200 adults. Based on the survey results, 30 subjects were chosen to participate in this study, and they were divided into three groups of 10: a Normal Group, a Moderate Addiction Group, and a Severe Addiction Group. After attaching a C-ROM, we measured the cervical repositioning errors of flexion, extension, right lateral flexion, and left lateral flexion. [Results] Significant differences in the cervical repositioning errors of flexion, extension, and right and left lateral flexion were found among the Normal Group, Moderate Addiction Group, and Severe Addiction Group. In particular, the Severe Addiction Group showed the largest errors. [Conclusion] The results indicate that as smartphone addiction becomes more severe, a person is more likely to show impaired proprioception, as well as an impaired ability to recognize correct posture. Thus, musculoskeletal problems due to smartphone addiction should be resolved through social cognition and intervention, and through physical therapeutic education and intervention to educate people about correct posture.
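
    A minimal sketch of the group comparison reported above: a one-way ANOVA on repositioning errors (in degrees) across the three groups. The group means, spreads, and sample sizes are invented for illustration.

```python
# One-way ANOVA across three groups of repositioning errors.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(6)
normal = rng.normal(2.0, 0.8, 10)      # degrees of repositioning error
moderate = rng.normal(3.5, 0.9, 10)
severe = rng.normal(5.0, 1.1, 10)
f_stat, p = f_oneway(normal, moderate, severe)
print(f"F={f_stat:.2f}, p={p:.4f}")
```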

  3. Hydra II: A Faint and Compact Milky Way Dwarf Galaxy Found in the Survey of the Magellanic Stellar History

    NARCIS (Netherlands)

    Martin, Nicolas F.; Nidever, David L.; Besla, Gurtina; Olsen, Knut; Walker, Alistair R.; Vivas, A. Katherina; Gruendl, Robert A.; Kaleida, Catherine C.; Muñoz, Ricardo R.; Blum, Robert D.; Saha, Abhijit; Conn, Blair C.; Bell, Eric F.; Chu, You-Hua; Cioni, Maria-Rosa L.; de Boer, Thomas J. L.; Gallart, Carme; Jin, Shoko; Kunder, Andrea; Majewski, Steven R.; Martinez-Delgado, David; Monachesi, Antonela; Monelli, Matteo; Monteagudo, Lara; Noël, Noelia E. D.; Olszewski, Edward W.; Stringfellow, Guy S.; van der Marel, Roeland P.; Zaritsky, Dennis

    We present the discovery of a new dwarf galaxy, Hydra II, found serendipitously within the data from the ongoing Survey of the Magellanic Stellar History conducted with the Dark Energy Camera on the Blanco 4 m Telescope. The new satellite is compact (r_h = 68 ± 11 pc) and faint (M_V = -4.8 ± 0.3),

  4. Pediatric Anesthesiology Fellows' Perception of Quality of Attending Supervision and Medical Errors.

    Science.gov (United States)

    Benzon, Hubert A; Hajduk, John; De Oliveira, Gildasio; Suresh, Santhanam; Nizamuddin, Sarah L; McCarthy, Robert; Jagannathan, Narasimhan

    2018-02-01

    Appropriate supervision has been shown to reduce medical errors in anesthesiology residents and other trainees across various specialties. Nonetheless, supervision of pediatric anesthesiology fellows has yet to be evaluated. The main objective of this survey investigation was to evaluate supervision of pediatric anesthesiology fellows in the United States. We hypothesized that there was an indirect association between perceived quality of faculty supervision of pediatric anesthesiology fellow trainees and the frequency of medical errors reported. A survey of pediatric fellows from 53 pediatric anesthesiology fellowship programs in the United States was performed. The primary outcome was the frequency of self-reported errors by fellows, and the primary independent variable was supervision scores. Questions also assessed barriers for effective faculty supervision. One hundred seventy-six pediatric anesthesiology fellows were invited to participate, and 104 (59%) responded to the survey. Nine of 103 (9%, 95% confidence interval [CI], 4%-16%) respondents reported performing procedures, on >1 occasion, for which they were not properly trained for. Thirteen of 101 (13%, 95% CI, 7%-21%) reported making >1 mistake with negative consequence to patients, and 23 of 104 (22%, 95% CI, 15%-31%) reported >1 medication error in the last year. There were no differences in median (interquartile range) supervision scores between fellows who reported >1 medication error compared to those reporting ≤1 errors (3.4 [3.0-3.7] vs 3.4 [3.1-3.7]; median difference, 0; 99% CI, -0.3 to 0.3; P = .96). Similarly, there were no differences in those who reported >1 mistake with negative patient consequences, 3.3 (3.0-3.7), compared with those who did not report mistakes with negative patient consequences (3.4 [3.3-3.7]; median difference, 0.1; 99% CI, -0.2 to 0.6; P = .35). We detected a high rate of self-reported medication errors in pediatric anesthesiology fellows in the United States

  5. Image defects from surface and alignment errors in grazing incidence telescopes

    Science.gov (United States)

    Saha, Timo T.

    1989-01-01

    The rigid body motions and low-frequency surface errors of grazing incidence Wolter telescopes are studied. The analysis is based on surface error descriptors proposed by Paul Glenn. In his analysis, the alignment and surface errors are expressed in terms of Legendre-Fourier polynomials. Individual terms in the expansion correspond to rigid body motions (decenter and tilt) and low spatial frequency surface errors of the mirrors. With the help of the Legendre-Fourier polynomials and the geometry of grazing incidence telescopes, exact and approximate first-order equations are derived in this paper for the components of the ray intercepts at the image plane. These equations are then used to calculate the sensitivities of Wolter type I and II telescopes to rigid body motions and surface deformations. The rms spot diameters calculated from this theory and the OSAC ray-tracing code agree very well. This theory also provides a tool to predict how rigid body motions and surface errors of the mirrors compensate each other.

  6. An Analysis of College Students' Attitudes towards Error Correction in EFL Context

    Science.gov (United States)

    Zhu, Honglin

    2010-01-01

    This article is based on a survey of college students' attitudes towards error correction by their teachers in the process of teaching and learning, and it is intended to improve language teachers' understanding of the nature of error correction. Based on the analysis, the article expounds some principles and techniques that can be applied in the process…

  7. Error Analysis Of Clock Time (T), Declination (δ) And Latitude ...

    African Journals Online (AJOL)

    …clock time (T), declination (δ), latitude (Φ), longitude (λ) and azimuth (A), which are aimed at establishing fixed positions and orientations of survey points and lines on the earth's surface. The paper attempts the analysis of the individual and combined effects of errors in time ...

  8. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g., hospitals and education, law and order type buildings). A systemic model of error causation is put forward and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are improved.

  9. Radiology errors: are we learning from our mistakes?

    International Nuclear Information System (INIS)

    Mankad, K.; Hoey, E.T.D.; Jones, J.B.; Tirukonda, P.; Smith, J.T.

    2009-01-01

    Aim: To question practising radiologists and radiology trainees at a large international meeting in an attempt to survey individuals about error reporting. Materials and methods: Radiologists attending the 2007 Radiological Society of North America (RSNA) annual meeting were approached to fill in a written questionnaire. Participants were questioned as to their grade, country in which they practised, and subspecialty interest. They were asked whether they kept a personal log of their errors (with an error defined as 'a mistake that has management implications for the patient'), how many errors they had made in the preceding 12 months, and the types of errors that had occurred. They were also asked whether their local department held regular discrepancy/errors meetings, how many they had attended in the preceding 12 months, and the perceived atmosphere at these meetings (on a qualitative scale). Results: A total of 301 radiologists with a wide range of specialty interests from 32 countries agreed to take part. One hundred and sixty-six of 301 (55%) responders were consultant/attending grade. One hundred and thirty-five of 301 (45%) were residents/fellows. Fifty-nine of 301 (20%) responders kept a personal record of their errors. The number of errors made per person per year ranged from none (2%) to 16 or more (7%). The majority (91%) reported making between one and 15 errors/year. Overcalls (40%), under-calls (25%), and interpretation errors (15%) were the predominant error types. One hundred and seventy-eight of 301 (59%) participants stated that their department held regular errors meetings. One hundred and twenty-seven of 301 (42%) had attended three or more meetings in the preceding year. The majority (55%) who had attended errors meetings described the atmosphere as 'educational.' Only a small minority (2%) described the atmosphere as 'poor', meaning non-educational and/or blameful. Conclusion: Despite the undeniable importance of learning from errors

  10. Drought Persistence Errors in Global Climate Models

    Science.gov (United States)

    Moon, H.; Gudmundsson, L.; Seneviratne, S. I.

    2018-04-01

    The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM simulations to observation-based data sets. To do so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates for drought persistence, where a dry status is defined as a negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at the global scale. Subsequently, we analyzed to which degree (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors, using an analysis of variance approach. The results show that at the monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At the annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
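
    A short sketch of the persistence metric defined above: the dry-to-dry transition probability P(dry_t | dry_{t-1}), with "dry" meaning a negative precipitation anomaly. The monthly series is synthetic, and a real analysis would remove the seasonal cycle per calendar month before computing anomalies.

```python
# Estimate drought persistence as the dry-to-dry transition probability.
import numpy as np

rng = np.random.default_rng(7)
precip = rng.gamma(2.0, 50.0, 1200)            # 100 years of monthly totals
anom = precip - precip.mean()                  # anomaly (climatology removed)
dry = anom < 0                                 # dry = negative anomaly

prev, curr = dry[:-1], dry[1:]
p_dry_dry = np.mean(curr[prev])                # P(dry now | dry last month)
print(f"dry-to-dry transition probability = {p_dry_dry:.3f}")
```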

  11. Graphics Education Survey. Part II.

    Science.gov (United States)

    Ernst, Sandra B.

    After a 1977 survey reflected the importance of graphics education for news students, a study was developed to investigate the state of graphics education in the whole field of journalism. A questionnaire was sent to professors and administrators in four print-oriented professional fields of education: magazine, advertising, public relations, and…

  12. Perceptual error and the culture of open disclosure in Australian radiology.

    Science.gov (United States)

    Pitman, A G

    2006-06-01

    The work of diagnostic radiology consists of the complete detection of all abnormalities in an imaging examination and their accurate diagnosis. Errors in diagnostic radiology comprise perceptual errors, which are a failure of detection, and interpretation errors, which are errors of diagnosis. Perceptual errors are subject to rules of human perception and can be expected in a proportion of observations by any human observer including a trained professional under ideal conditions. Current legal standards of medical negligence make no allowance for perceptual errors, comparing human performance to an ideal standard. Diagnostic radiology in Australia has a culture of open disclosure, where full unbiased evidence from an examination is provided to the patient together with the report. This practice benefits the public by allowing genuine differences of opinion and also by allowing a second chance of correct diagnosis in cases of perceptual error. The culture of open disclosure, which is unique to diagnostic radiology, places radiologists at distinct medicolegal disadvantage compared with other specialties. (i) Perceptual error should be acknowledged as an integral inevitable part of diagnostic radiology; (ii) culture of open disclosure should be encouraged by the profession; and (iii) a pragmatic definition of medical negligence should reflect the imperfect performance of human observers.

  13. Perceptual error and the culture of open disclosure in Australian radiology

    International Nuclear Information System (INIS)

    Pitman, A.G.

    2006-01-01

    The work of diagnostic radiology consists of the complete detection of all abnormalities in an imaging examination and their accurate diagnosis. Errors in diagnostic radiology comprise perceptual errors, which are a failure of detection, and interpretation errors, which are errors of diagnosis. Perceptual errors are subject to rules of human perception and can be expected in a proportion of observations by any human observer including a trained professional under ideal conditions. Current legal standards of medical negligence make no allowance for perceptual errors, comparing human performance to an ideal standard. Diagnostic radiology in Australia has a culture of open disclosure, where full unbiased evidence from an examination is provided to the patient together with the report. This practice benefits the public by allowing genuine differences of opinion and also by allowing a second chance of correct diagnosis in cases of perceptual error. The culture of open disclosure, which is unique to diagnostic radiology, places radiologists at distinct medicolegal disadvantage compared with other specialties. (i) Perceptual error should be acknowledged as an integral inevitable part of diagnostic radiology; (ii) culture of open disclosure should be encouraged by the profession; and (iii) a pragmatic definition of medical negligence should reflect the imperfect performance of human observers.

  14. A Measurement of the Rate of Type Ia Supernovae in Galaxy Clusters from the SDSS-II Supernova Survey

    Energy Technology Data Exchange (ETDEWEB)

    Dilday, Benjamin; /Rutgers U., Piscataway /Chicago U. /KICP, Chicago; Bassett, Bruce; /Cape Town U., Dept. Math. /South African Astron. Observ.; Becker, Andrew; /Washington U., Seattle, Astron. Dept.; Bender, Ralf; /Munich, Tech. U. /Munich U. Observ.; Castander, Francisco; /Barcelona, IEEC; Cinabro, David; /Wayne State U.; Frieman, Joshua A.; /Chicago U. /Fermilab; Galbany, Lluis; /Barcelona, IFAE; Garnavich, Peter; /Notre Dame U.; Goobar, Ariel; /Stockholm U., OKC /Stockholm U.; Hopp, Ulrich; /Munich, Tech. U. /Munich U. Observ. /Tokyo U.

    2010-03-01

    We present measurements of the Type Ia supernova (SN) rate in galaxy clusters based on data from the Sloan Digital Sky Survey-II (SDSS-II) Supernova Survey. The cluster SN Ia rate is determined from 9 SN events in a set of 71 C4 clusters at z ≤ 0.17 and 27 SN events in 492 maxBCG clusters at 0.1 ≤ z ≤ 0.3. We find values for the cluster SN Ia rate of (0.37^{+0.17+0.01}_{-0.12-0.01}) SNur h^2 and (0.55^{+0.13+0.02}_{-0.11-0.01}) SNur h^2 (SNux = 10^{-12} L_{x⊙}^{-1} yr^{-1}) in C4 and maxBCG clusters, respectively, where the quoted errors are statistical and systematic, respectively. The SN rate for early-type galaxies is found to be (0.31^{+0.18+0.01}_{-0.12-0.01}) SNur h^2 and (0.49^{+0.15+0.02}_{-0.11-0.01}) SNur h^2 in C4 and maxBCG clusters, respectively. The SN rate for the brightest cluster galaxies (BCGs) is found to be (2.04^{+1.99+0.07}_{-1.11-0.04}) SNur h^2 and (0.36^{+0.84+0.01}_{-0.30-0.01}) SNur h^2 in C4 and maxBCG clusters, respectively. The ratio of the SN Ia rate in cluster early-type galaxies to that of the SN Ia rate in field early-type galaxies is 1.94^{+1.31+0.043}_{-0.91-0.015} and 3.02^{+1.31+0.062}_{-1.03-0.048}, for C4 and maxBCG clusters, respectively. The SN rate in galaxy clusters as a function of redshift, which probes the late-time SN Ia delay distribution, shows only weak dependence on redshift. Combining our current measurements with previous measurements, we fit the cluster SN Ia rate data to a linear function of redshift, and find r_L = [(0.49^{+0.15}_{-0.14}) + (0.91^{+0.85}_{-0.81}) × z] SNuB h^2. A comparison of the radial distribution of SNe in cluster to field early-type galaxies shows possible evidence for an enhancement of the SN rate in the cores of cluster early-type galaxies. With an observation of at most 3 hostless, intra-cluster SNe Ia, we estimate the fraction of cluster SNe that are hostless to be (9.4^{+8.3}_{-5.1})%.

  15. Objectives and methodology of Romanian SEPHAR II Survey. Project for comparing the prevalence and control of cardiovascular risk factors in two East-European countries: Romania and Poland.

    Science.gov (United States)

    Dorobantu, Maria; Tautu, Oana-Florentina; Darabont, Roxana; Ghiorghe, Silviu; Badila, Elisabeta; Dana, Minca; Dobreanu, Minodora; Baila, Ilarie; Rutkowski, Marcin; Zdrojewski, Tomasz

    2015-08-12

    Comparing the results of representative surveys conducted in different East-European countries could contribute to a better understanding and management of cardiovascular risk factors, offering grounds for the development of health policies addressing the special needs of this high cardiovascular risk region of Europe. The aim of this paper was to describe the methodology on which the comparison between the results of the Romanian survey SEPHAR II and the Polish survey NATPOL 2011 is based. SEPHAR II, like NATPOL 2011, is a cross-sectional survey conducted on a representative sample of the adult Romanian population (18 to 80 years) and encompasses two visits with the following components: completion of the study questionnaire, blood pressure and anthropometric measurements, and collection of blood and urine samples. From a total of 2223 subjects found at 2860 visited addresses, 2044 subjects gave written consent but only 1975 subjects had eligible data for the analysis, accounting for a response rate of 69.06%. Additionally, we excluded 11 subjects who were 80 years of age (NATPOL 2011 included adult subjects only up to 79 years). The sample size included in the statistical analysis is therefore 1964. Its age-group and gender structure is similar to that of the Romanian population aged 18-79 years in the last census available at the time the survey was conducted (weight adjustments for epidemiological analyses range from 0.48 to 8.7). Sharing many similarities, the results of the SEPHAR II and NATPOL 2011 surveys can be compared by proper statistical methods, offering crucial information regarding cardiovascular risk factors in a high-cardiovascular-risk European region.

  16. Barriers to medication error reporting among hospital nurses.

    Science.gov (United States)

    Rutledge, Dana N; Retrosi, Tina; Ostrowski, Gary

    2018-03-01

    The study purpose was to report barriers to medication error reporting among hospital nurses, and to determine the validity and reliability of an existing medication error reporting barriers questionnaire. Hospital medication errors typically occur between the ordering of a medication and its receipt by the patient, with subsequent staff monitoring. To decrease medication errors, the factors surrounding them must be understood; this requires reporting by employees. Under-reporting can compromise patient safety by disabling improvement efforts. This 2017 descriptive study was part of a larger workforce engagement study at a faith-based Magnet®-accredited community hospital in California (United States). Registered nurses (~1,000) were invited to participate in the online survey via email. Reported here are sample demographics (n = 357) and responses to the 20-item medication error reporting barriers questionnaire. Using factor analysis, four factors that accounted for 67.5% of the variance were extracted. These factors (subscales) were labelled Fear, Cultural Barriers, Lack of Knowledge/Feedback and Practical/Utility Barriers; each demonstrated excellent internal consistency. The medication error reporting barriers questionnaire, originally developed in long-term care, demonstrated good validity and excellent reliability among hospital nurses. Substantial proportions of American hospital nurses (11%-48%) considered specific factors likely reporting barriers. Average scores on most barrier items were categorised "somewhat unlikely." The highest six included two barriers concerning the time-consuming nature of medication error reporting and four related to nurses' fear of repercussions. Hospitals need to determine the presence of perceived barriers among nurses using questionnaires such as the medication error reporting barriers questionnaire and work to encourage better reporting. Barriers to medication error reporting make it less likely that nurses will report medication
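
    The "excellent internal consistency" reported for each subscale is the kind of claim usually backed by Cronbach's alpha. A minimal numpy sketch of that statistic follows; the 357 x 5 response matrix is a random placeholder (random noise scores near zero, whereas real subscales would score much higher).

        import numpy as np

        def cronbach_alpha(items):
            # items: (n_respondents, n_items) matrix of numeric Likert responses
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_var_sum = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_var_sum / total_var)

        rng = np.random.default_rng(0)
        fake_subscale = rng.integers(1, 6, size=(357, 5))  # hypothetical 5-item subscale
        print(f"alpha = {cronbach_alpha(fake_subscale):.2f}")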

  17. Experimental Evaluation of a Mixed Controller That Amplifies Spatial Errors and Reduces Timing Errors

    Directory of Open Access Journals (Sweden)

    Laura Marchal-Crespo

    2017-06-01

    Full Text Available Research on motor learning suggests that training with haptic guidance enhances learning of the timing components of motor tasks, whereas error amplification is better for learning the spatial components. We present a novel mixed guidance controller that combines haptic guidance and error amplification to simultaneously promote learning of the timing and spatial components of complex motor tasks. The controller is realized using a force field around the desired position. This force field has a stable manifold tangential to the trajectory that guides subjects in velocity-related aspects. The force field has an unstable manifold perpendicular to the trajectory, which amplifies the perpendicular (spatial) error. We also designed a controller that applies randomly varying, unpredictable disturbing forces to enhance the subjects’ active participation by pushing them away from their “comfort zone.” We conducted an experiment with thirty-two healthy subjects to evaluate the impact of four different training strategies on motor skill learning and self-reported motivation: (i) no haptics, (ii) mixed guidance, (iii) perpendicular error amplification and tangential haptic guidance provided in sequential order, and (iv) randomly varying disturbing forces. Subjects trained two motor tasks using ARMin IV, a robotic exoskeleton for upper limb rehabilitation: follow circles with an ellipsoidal speed profile, and move along a 3D line following a complex speed profile. Mixed guidance showed no detectable learning advantages over the other groups. Results suggest that the effectiveness of the training strategies depends on the subjects’ initial skill level. Mixed guidance seemed to benefit subjects who performed the circle task with smaller errors during baseline (i.e., initially more skilled subjects), while training with no haptics was more beneficial for subjects who created larger errors (i.e., less skilled subjects). Therefore, perhaps the high functional
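
    The stable/unstable-manifold construction can be sketched as a force that attracts the hand along the direction of motion (guiding timing) and repels it perpendicular to the trajectory (amplifying spatial error). The decomposition and gains below are illustrative assumptions, not the controller actually implemented on ARMin IV.

        import numpy as np

        def mixed_guidance_force(pos, vel_des, pos_des, k_guide=20.0, k_amp=15.0):
            # Tangential direction of the desired trajectory (unit vector).
            tangent = vel_des / (np.linalg.norm(vel_des) + 1e-9)
            err = pos - pos_des
            err_tan = np.dot(err, tangent) * tangent   # timing-related error component
            err_perp = err - err_tan                   # spatial error component
            # Stable manifold: pull the timing error back toward the trajectory.
            # Unstable manifold: push the spatial error further out (amplification).
            return -k_guide * err_tan + k_amp * err_perp

        f = mixed_guidance_force(pos=np.array([0.11, 0.02, 0.00]),
                                 vel_des=np.array([0.00, 0.50, 0.00]),
                                 pos_des=np.array([0.10, 0.00, 0.00]))
        print(f)   # guidance along y, amplification along x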

  18. Visual impairment attributable to uncorrected refractive error and other causes in the Ghanaian youth: The University of Cape Coast Survey.

    Science.gov (United States)

    Abokyi, Samuel; Ilechie, Alex; Nsiah, Peter; Darko-Takyi, Charles; Abu, Emmanuel Kwasi; Osei-Akoto, Yaw Jnr; Youfegan-Baanam, Mathurin

    2016-01-01

    To determine the prevalence of visual impairment attributable to refractive error and other causes in a youthful Ghanaian population. A prospective survey of all consecutive visits by first-year tertiary students to the Optometry clinic between August, 2013 and April, 2014. Of the 4378 first-year students aged 16-39 years enumerated, 3437 (78.5%) underwent the eye examination. The examination protocol included presenting visual acuity (PVA), ocular motility, and slit-lamp examination of the external eye, anterior segment and media, and non-dilated fundus examination. Pinhole acuity and fundus examination were performed when the PVA was ≤6/12 in one or both eyes to determine the principal cause of the vision loss. The mean age of participants was 21.86 years (95% CI: 21.72-21.99). The prevalence of bilateral visual impairment (BVI; PVA in the better eye ≤6/12) and unilateral visual impairment (UVI; PVA in the worse eye ≤6/12) were 3.08% (95% CI: 2.56-3.72) and 0.79% (95% CI: 0.54-1.14), respectively. Among 106 participants with BVI, refractive error (96.2%) and corneal opacity (3.8%) were the causes. Of the 27 participants with UVI, refractive error (44.4%), maculopathy (18.5%) and retinal disease (14.8%) were the major causes. There was an unequal distribution of BVI across age groups, with those above 20 years having a lesser burden. Eye screening and provision of affordable spectacle correction to the youth could be timely to eliminate visual impairment. Copyright © 2014 Spanish General Council of Optometry. Published by Elsevier Espana. All rights reserved.
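
    Prevalence intervals like the 3.08% (95% CI: 2.56-3.72) above can be checked with a standard score interval for a proportion. The sketch below uses the Wilson interval, which closely reproduces the quoted bounds; the abstract does not state which interval the authors used, so this choice is an assumption.

        import math

        def wilson_ci(k, n, z=1.96):
            # 95% Wilson score interval for a proportion k/n
            p = k / n
            denom = 1 + z ** 2 / n
            centre = (p + z ** 2 / (2 * n)) / denom
            half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
            return centre - half, centre + half

        k, n = 106, 3437          # students with BVI out of those examined
        lo, hi = wilson_ci(k, n)
        print(f"prevalence = {100 * k / n:.2f}% (95% CI {100 * lo:.2f}-{100 * hi:.2f}%)")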

  19. Efficient error correction for next-generation sequencing of viral amplicons.

    Science.gov (United States)

    Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury

    2012-06-25

    Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error-prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
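
    The core of a k-mer-based corrector like KEC can be illustrated in a few lines: count all k-mers across the reads and treat rare ones as likely sequencing errors. This is schematic only; the published algorithm calibrates its threshold empirically against homopolymer, position, and amplicon-length effects.

        from collections import Counter

        def kmer_counts(reads, k):
            counts = Counter()
            for read in reads:
                for i in range(len(read) - k + 1):
                    counts[read[i:i + k]] += 1
            return counts

        def flag_error_kmers(counts, threshold=2):
            # k-mers rarer than the threshold are flagged as probable errors.
            return {kmer for kmer, c in counts.items() if c < threshold}

        reads = ["ACGTACGTACGTACGTACGTACGTACG",
                 "ACGTACGTACGTACGTACGTACGTACG",
                 "ACGTACGTACGTACTTACGTACGTACG"]   # third read carries a likely error
        flagged = flag_error_kmers(kmer_counts(reads, k=20))
        print(len(flagged))   # -> 8: every k-mer spanning the erroneous base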

  20. Audit of medication errors by anesthetists in North Western Nigeria

    African Journals Online (AJOL)

    2013-08-03

    Aug 3, 2013 ... Materials and Methods. This multi‑center cross‑sectional survey was conducted ... vigilance (9), appropriate and double checking of drug labels (18), and color coding of syringes (7) as ways to minimize medication errors.

  1. Environmental monitoring survey of oil and gas fields in Region II in 2009. Summary report; Miljoeovervaaking av olje- og gassfelt i Region II i 2009

    Energy Technology Data Exchange (ETDEWEB)

    2010-03-15

    The oil companies Statoil ASA, ExxonMobil Exploration and Production Norway AS, Total E&P Norge AS, Talisman Energy Norge AS and Marathon Petroleum Norge AS commissioned the Section of Applied Environmental Research at UNI RESEARCH AS to undertake the monitoring survey of Region II in 2009. Similar monitoring surveys in Region II were carried out in 1996, 2000, 2003 and 2006. The 2009 survey included 18 fields in total: Rev, Varg, Sigyn, Sleipner Vest, Sleipner Oest, Sleipner Alfa Nord, Glitne, Grane, Balder, Ringhorne, Jotun, Vale, Skirne, Byggve, Heimdal, Volve, Vilje and Alvheim. Sampling was conducted from the vessel MV Libas between May 18 and May 27. Samples were collected from a total of 137 sampling sites, of which 15 were regional sampling sites. Samples for chemical analysis were collected at all sites, whereas samples for benthos analysis were collected at 12 fields. As in previous surveys, Region II is divided into natural sub-regions: a shallow sub-region (77-96 m), a central sub-region (107-130 m) and a northern sub-region (115-119 m). The sediments of the shallow sub-region had relatively lower content of TOM and pelite and higher content of fine sand than the central and northern sub-regions. Calculated areas of contamination are shown for the sub-regions in Table 1.1. The fields Sigyn, Sleipner Alfa Nord, Glitne, Grane, Balder, Ringhorne, Jotun, Skirne, Byggve, Vilje and Alvheim showed no contamination of THC. At the other fields there were minor changes from 2006. The concentrations of barium increased in the central sub-region from 2006 to 2009, also at fields where no drilling had been undertaken during the last years. The same laboratory and methods were used during the three last regional investigations. The changes in barium concentrations may be due to high variability of barium concentrations in the sediments. This is supported by relatively large variations in average barium concentrations at the regional sampling sites in

  2. Learning (from) the errors of a systems biology model.

    Science.gov (United States)

    Engelhardt, Benjamin; Frőhlich, Holger; Kschischo, Maik

    2016-02-11

    Mathematical modelling is a labour intensive process involving several iterations of testing on real data and manual model modifications. In biology, the domain knowledge guiding model development is in many cases itself incomplete and uncertain. A major problem in this context is that biological systems are open. Missed or unknown external influences as well as erroneous interactions in the model could thus lead to severely misleading results. Here we introduce the dynamic elastic-net, a data driven mathematical method which automatically detects such model errors in ordinary differential equation (ODE) models. We demonstrate for real and simulated data, how the dynamic elastic-net approach can be used to automatically (i) reconstruct the error signal, (ii) identify the target variables of model error, and (iii) reconstruct the true system state even for incomplete or preliminary models. Our work provides a systematic computational method facilitating modelling of open biological systems under uncertain knowledge.

  3. Potential Errors and Test Assessment in Software Product Line Engineering

    Directory of Open Access Journals (Sweden)

    Hartmut Lackner

    2015-04-01

    Full Text Available Software product lines (SPL) are a method for the development of variant-rich software systems. Compared to non-variable systems, testing SPLs is extensive due to the increasing number of possible products. Different approaches exist for testing SPLs, but there is little research on assessing the quality of these tests by means of error detection capability. Such test assessment is based on error injection into a correct version of the system under test. However, to our knowledge, potential errors in SPL engineering have never been systematically identified before. This article presents an overview of existing paradigms for specifying software product lines and the errors that can occur during the respective specification processes. For assessment of test quality, we carry mutation testing techniques over to SPL engineering and implement the identified errors as mutation operators. This allows us to run existing tests against defective products for the purpose of test assessment. From the results, we draw conclusions about the error-proneness of the surveyed SPL design paradigms and how the quality of SPL tests can be improved.
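
    The mutation-testing transfer described above can be illustrated with a toy mutation operator that flips one feature selection of a product configuration at a time; a test suite "kills" a mutant if some test fails on it, and surviving mutants expose gaps in the tests. The configuration encoding and the single test below are hypothetical.

        def mutate_configurations(config):
            # Toy SPL mutation operator: flip each feature selection in turn,
            # yielding one "defective product" per feature (illustrative only).
            for feature in config:
                mutant = dict(config)
                mutant[feature] = not mutant[feature]
                yield feature, mutant

        product = {"bluetooth": True, "gps": False, "camera": True}
        tests = [lambda cfg: cfg["camera"]]   # weak sample test: camera must be selected

        for flipped, mutant in mutate_configurations(product):
            killed = not all(test(mutant) for test in tests)
            print(f"flip {flipped!r}: mutant {'killed' if killed else 'survived'}")
        # Two of three mutants survive, signalling that the test suite needs work.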

  4. Post-Retention Changes in Class II Correction With the Forsus (trademark) Appliance

    Science.gov (United States)

    2015-06-01

    [Fragmentary abstract: only scattered sentences and reference-list entries survive. The recoverable text concerns a measurement-error study for Class II correction, a significance level of p ≤ 0.02, and error-of-measurement methods (Dahlberg's error, the Bland-Altman method, and the Kappa coefficient), citing Angle Orthod. 1981;51:177-202 and Restorative Dentistry and Endodontics 2013;38(3):182-185.]

  5. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Associations between communication climate and the frequency of medical error reporting among pharmacists within an inpatient setting.

    Science.gov (United States)

    Patterson, Mark E; Pace, Heather A; Fincham, Jack E

    2013-09-01

    Although error-reporting systems enable hospitals to accurately track safety climate through the identification of adverse events, these systems may be underused within a work climate of poor communication. The objective of this analysis is to identify the extent to which perceived communication climate among hospital pharmacists impacts medical error reporting rates. This cross-sectional study used survey responses from more than 5000 pharmacists responding to the 2010 Hospital Survey on Patient Safety Culture (HSOPSC). Two composite scores were constructed for "communication openness" and "feedback and communication about error," respectively. Error reporting frequency was defined from the survey question, "In the past 12 months, how many event reports have you filled out and submitted?" Multivariable logistic regressions were used to estimate the likelihood of medical error reporting conditional upon communication openness or feedback levels, controlling for pharmacist years of experience, hospital geographic region, and ownership status. Pharmacists with higher communication openness scores compared with lower scores were 40% more likely to have filed or submitted a medical error report in the past 12 months (OR, 1.4; 95% CI, 1.1-1.7; P = 0.004). In contrast, pharmacists with higher communication feedback scores were not any more likely than those with lower scores to have filed or submitted a medical error report in the past 12 months (OR, 1.0; 95% CI, 0.8-1.3; P = 0.97). A hospital work climate that encourages pharmacists to communicate freely about problems related to patient safety is conducive to medical error reporting. The presence of feedback infrastructures about error may not be sufficient to induce error-reporting behavior.
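
    Odds ratios such as the OR of 1.4 (95% CI, 1.1-1.7) above come from exponentiating coefficients of a multivariable logistic regression. A minimal sketch on synthetic data follows; the variable names, effect sizes, and the reduced set of control variables are assumptions made for illustration.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 5000
        openness = rng.normal(0, 1, n)        # standardized communication-openness score
        years = rng.integers(1, 30, n)        # years of experience (control variable)
        logit = -0.5 + 0.34 * openness + 0.01 * years
        reported = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # filed a report?

        X = sm.add_constant(np.column_stack([openness, years]))
        fit = sm.Logit(reported, X).fit(disp=0)
        or_ci = np.exp(fit.conf_int()[1])      # CI for the openness coefficient
        print(f"OR = {np.exp(fit.params[1]):.2f}, 95% CI {or_ci[0]:.2f}-{or_ci[1]:.2f}")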

  7. Calibration of a neutron log in partially saturated media. Part II. Error analysis

    International Nuclear Information System (INIS)

    Hearst, J.R.; Kasameyer, P.W.; Dreiling, L.A.

    1981-01-01

    Four sources of error (uncertainty) are studied in the water content obtained from neutron logs calibrated in partially saturated media, for holes up to 3 m. For this calibration a special facility was built, and an algorithm for a commercial epithermal neutron log was developed that obtains water content from count rate, bulk density, and the gap between the neutron sonde and the borehole wall. The algorithm contained errors due to the calibration and lack of fit, while the field measurements included uncertainties in the count rate (caused by statistics and a short time constant), gap, and density. There can also be inhomogeneity in the material surrounding the borehole. Under normal field conditions the hole-size-corrected water content obtained from such neutron logs can have an uncertainty as large as 15% of its value.
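
    An error budget of this kind is usually assembled by first-order propagation of each measured quantity's uncertainty through the calibration algorithm. The generic recipe is sketched below; the water-content function and all numbers are hypothetical stand-ins, since the abstract does not give the actual algorithm.

        import numpy as np

        def water_content(count_rate, density, gap):
            # Hypothetical stand-in for the log's calibration algorithm.
            return 0.5 - 0.05 * np.log(count_rate) - 0.05 * density + 0.2 * gap

        def propagated_sigma(f, x, sigmas, eps=1e-6):
            # First-order (quadrature) propagation of independent uncertainties.
            x = np.asarray(x, dtype=float)
            var = 0.0
            for i, s in enumerate(sigmas):
                dx = np.zeros_like(x)
                dx[i] = eps
                deriv = (f(*(x + dx)) - f(*(x - dx))) / (2 * eps)
                var += (deriv * s) ** 2
            return np.sqrt(var)

        x = [1000.0, 2.0, 0.01]     # count rate (cps), bulk density (g/cc), gap (m)
        sig = [31.6, 0.03, 0.005]   # assumed 1-sigma uncertainties
        w = water_content(*x)
        print(f"w = {w:.3f} +/- {propagated_sigma(water_content, x, sig):.3f}")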

  8. Errors and mistakes in the traditional optimum design of experiments on exponential absorption

    International Nuclear Information System (INIS)

    Burge, E.J.

    1977-01-01

    The treatment of statistical errors in absorption experiments using particle counters, given by Rose and Shapiro (1948), is shown to be incorrect for non-zero background counts. For the simplest case of only one absorber thickness, revised conditions are computed for the optimum geometry and the best apportionment of counting times between the incident and transmitted beams, for a wide range of relative backgrounds (0, 10^{-5}-10^{2}). The two geometries of Rose and Shapiro are treated: (I) beam area fixed, absorber thickness varied, and (II) beam area and absorber thickness both varied, but with the effective volume of the absorber constant. For case (I) the newly calculated errors in the absorption coefficients are shown to be about 0.7 of the Rose and Shapiro values for the largest background, and for case (II) about 0.4. The corresponding fractional times for background counts are (I) 0.7 and (II) 0.07 of those given by Rose and Shapiro. For small backgrounds the differences are negligible. Revised values are also computed for the sensitivity of the accuracy to deviations from optimum transmission. (Auth.)
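
    The underlying optimization — apportioning a fixed time budget among incident-beam, transmitted-beam and background counts so as to minimize the variance of the absorption coefficient — can be reproduced numerically. The count rates, thickness and first-order variance model below are assumptions for illustration, not the paper's revised formulas.

        import numpy as np

        R0, R1, B = 1000.0, 150.0, 5.0   # incident, transmitted, background rates (cps)
        x_thick = 1.0                    # absorber thickness (arbitrary units)
        T = 3600.0                       # total counting-time budget (s)

        def sigma_mu(t0, t1, tb):
            # mu = ln((R0 - B)/(R1 - B)) / x; var(rate) = rate / counting time.
            v0 = (R0 / t0) / (R0 - B) ** 2
            v1 = (R1 / t1) / (R1 - B) ** 2
            vb = (B / tb) * (1.0 / (R0 - B) - 1.0 / (R1 - B)) ** 2
            return np.sqrt(v0 + v1 + vb) / x_thick

        grid = np.linspace(10, T - 20, 120)
        best = min(((sigma_mu(t0, t1, T - t0 - t1), t0, t1)
                    for t0 in grid for t1 in grid if T - t0 - t1 > 10),
                   key=lambda r: r[0])
        print(f"sigma_mu = {best[0]:.5f} with t0 = {best[1]:.0f} s, t1 = {best[2]:.0f} s")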

  9. ERRORS AND CORRECTIVE FEEDBACK IN WRITING: IMPLICATIONS TO OUR CLASSROOM PRACTICES

    Directory of Open Access Journals (Sweden)

    Maria Corazon Saturnina A Castro

    2017-10-01

    Full Text Available Error correction is one of the most contentious and misunderstood issues in both foreign and second language teaching. Despite varying positions on the effectiveness of error correction or the lack of it, corrective feedback remains an institution in writing classes. Given this context, this action research endeavors to survey prevalent attitudes of teachers and students toward corrective feedback and examine their implications for classroom practices. This paper poses the major problem: How do teachers’ perspectives on corrective feedback match the students’ views and expectations about error treatment in their writing? Professors of the University of the Philippines who teach composition classes and over a hundred students enrolled in their classes were surveyed. Results showed that there are differing perceptions of teachers and students regarding corrective feedback. These oppositions must be addressed as they have implications for current pedagogical practices, which include constructing and establishing appropriate lesson goals, using alternative corrective strategies, teaching grammar points in class even at the tertiary level, and further understanding the learning process.

  10. Determining Type I and Type II Errors when Applying Information Theoretic Change Detection Metrics for Data Association and Space Situational Awareness

    Science.gov (United States)

    Wilkins, M.; Moyer, E. J.; Hussein, Islam I.; Schumacher, P. W., Jr.

    Correlating new detections back to a large catalog of resident space objects (RSOs) requires solving one of three types of data association problems: observation-to-track, track-to-track, or observation-to-observation. The authors' previous work has explored the use of various information divergence metrics for solving these problems: Kullback-Leibler (KL) divergence, mutual information, and Bhattacharyya distance. In addition to approaching the data association problem strictly from the metric tracking aspect, we have explored fusing metric and photometric data using Bayesian probabilistic reasoning for RSO identification, to aid in our ability to correlate data to specific RSOs. In this work, we focus our attention on the KL divergence, which is a measure of the information gained when new evidence causes the observer to revise their beliefs. We can apply the Principle of Minimum Discrimination Information such that new data produces as small an information gain as possible, with the information change bounded by ɛ. Choosing an appropriate value of ɛ for both convergence and change detection is a function of one's risk tolerance. A small ɛ for change detection increases alarm rates, while a larger ɛ for convergence means that new evidence need not be identical in information content. We need to understand what this change detection metric implies for Type I (α) and Type II (β) errors when we are forced to decide whether new evidence represents a true change in the characterization of an object or is merely within the bounds of our measurement uncertainty. This is unclear for the case of fusing multiple kinds and qualities of characterization evidence that may exist in different metric spaces or are even semantic statements. To this end, we explore the use of Sequential Probability Ratio Testing, where we suppose that we may need to collect additional evidence before accepting or rejecting the null hypothesis that a change has occurred. In this work, we
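
    For Gaussian state estimates the KL divergence has a closed form, so an ɛ-threshold change detector takes only a few lines. The catalog and update values, and the choice ɛ = 0.2, are illustrative assumptions.

        import numpy as np

        def kl_gaussian(m0, S0, m1, S1):
            # KL( N(m0, S0) || N(m1, S1) ) for multivariate Gaussians.
            k = len(m0)
            S1_inv = np.linalg.inv(S1)
            dm = m1 - m0
            return 0.5 * (np.trace(S1_inv @ S0) + dm @ S1_inv @ dm - k
                          + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

        catalog_mean, catalog_cov = np.zeros(2), np.diag([1.0, 1.0])
        update_mean, update_cov = np.array([0.4, -0.1]), np.diag([1.1, 0.9])

        eps = 0.2   # risk-tolerance threshold on information gain
        d = kl_gaussian(catalog_mean, catalog_cov, update_mean, update_cov)
        print(f"KL = {d:.3f} -> {'change detected' if d > eps else 'consistent'}")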

  11. Plane and geodetic surveying

    CERN Document Server

    Johnson, Aylmer

    2014-01-01

    Introduction; Aim And Scope; Classification Of Surveys; The Structure Of This Book; General Principles Of Surveying; Errors; Redundancy; Stiffness; Adjustment; Planning And Record Keeping; Principal Surveying Activities; Establishing Control Networks; Mapping; Setting Out; Resectioning; Deformation Monitoring; Angle Measurement; The Surveyor's Compass; The Clinometer; The Total Station; Making Observations; Checks On Permanent Adjustments; Distance Measurement; General; Tape Measurements; Optical Methods (Tachymetry); Electromagnetic Distance Measurement (EDM); Ultrasonic Methods; GNSS; Levelling; Theory; The Instrument; Technique; Booking; Permanent Adjustments

  12. Validating the standard for the National Board Dental Examination Part II.

    Science.gov (United States)

    Tsai, Tsung-Hsun; Neumann, Laura M; Littlefield, John H

    2012-05-01

    As part of the overall exam validation process, the Joint Commission on National Dental Examinations periodically reviews and validates the pass/fail standard for the National Board Dental Examination (NBDE), Parts I and II. The most recent standard-setting activities for NBDE Part II used the Objective Standard Setting method. This report describes the process used to set the pass/fail standard for the 2009 exam. The failure rate on the NBDE Part II increased from 5.3 percent in 2008 to 13.7 percent in 2009 and then decreased to 10 percent in 2010. This article describes the Objective Standard Setting method and presents the estimated probabilities of classification errors based on the beta-binomial mathematical model. The results show that the probability of correct classifications of candidate performance is very high (0.97) and that probabilities of false negative and false positive errors are very small (0.03 and <0.001, respectively). The low probability of classification errors supports the conclusion that the pass/fail score on the NBDE Part II is a valid guide for making decisions about candidates for dental licensure.
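
    The quoted classification-error probabilities follow from a beta-binomial model: integrate the chance of passing (or failing) over a distribution of true proficiency. The computation is sketched below with scipy; the exam length, cut score and beta parameters are invented for illustration, not the NBDE's values.

        import numpy as np
        from scipy.stats import beta, binom

        n_items, cut_score = 200, 150        # illustrative exam length and pass mark
        theta_cut = cut_score / n_items      # proficiency matching the standard
        a, b = 30, 10                        # assumed beta distribution of proficiency

        thetas = np.linspace(1e-4, 1 - 1e-4, 2000)
        w = beta.pdf(thetas, a, b)
        w /= w.sum()
        p_pass = binom.sf(cut_score - 1, n_items, thetas)   # P(score >= cut | theta)

        false_pos = np.sum(w * p_pass * (thetas < theta_cut))         # pass, not proficient
        false_neg = np.sum(w * (1 - p_pass) * (thetas >= theta_cut))  # fail, proficient
        print(f"P(false positive) = {false_pos:.4f}, P(false negative) = {false_neg:.4f}")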

  13. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor increasing with the code distance.
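
    The idea of tracking a drifting error rate with a Gaussian process can be sketched as a regression of noisy rate estimates against time, extrapolated one step ahead. The sketch uses scikit-learn rather than the authors' implementation, and the synthetic drift is invented.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(2)
        t = np.linspace(0, 10, 40)[:, None]                 # time (arbitrary units)
        true_rate = 0.01 + 0.004 * np.sin(0.6 * t.ravel())  # slowly drifting error rate
        observed = true_rate + rng.normal(0, 0.001, t.shape[0])  # noisy estimates

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0)
                                      + WhiteKernel(noise_level=1e-6))
        gp.fit(t, observed)
        mean, std = gp.predict(np.array([[10.5]]), return_std=True)
        print(f"predicted error rate at t = 10.5: {mean[0]:.4f} +/- {std[0]:.4f}")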

  14. Nonclassical measurements errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

    Discrete choice models, and in particular logit type models, play an important role in understanding and quantifying individual or household behavior in relation to transport demand. An example is the choice of travel mode for a given trip under the budget and time restrictions that the individuals ... estimates of the income effect it is of interest to investigate the magnitude of the estimation bias and if possible use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data that contains very detailed information about incomes. This gives a unique opportunity to learn about the magnitude and nature of the measurement error in income reported by the respondents in the Danish NTS compared to income from the administrative register (correct measure). We find that the classical...

  15. Refractive errors in children and adolescents in Bucaramanga (Colombia).

    Science.gov (United States)

    Galvis, Virgilio; Tello, Alejandro; Otero, Johanna; Serrano, Andrés A; Gómez, Luz María; Castellanos, Yuly

    2017-01-01

    The aim of this study was to establish the frequency of refractive errors in children and adolescents aged between 8 and 17 years old, living in the metropolitan area of Bucaramanga (Colombia). This study was a secondary analysis of two descriptive cross-sectional studies that applied sociodemographic surveys and assessed visual acuity and refraction. Ametropias were classified as myopic errors, hyperopic errors, and mixed astigmatism. Eyes were considered emmetropic if none of these classifications were made. The data were collated using free software and analyzed with STATA/IC 11.2. One thousand two hundred twenty-eight individuals were included in this study. Girls showed a higher rate of ametropia than boys. Hyperopic refractive errors were present in 23.1% of the subjects, and myopic errors in 11.2%. Only 0.2% of the eyes had high myopia (≤-6.00 D). Mixed astigmatism and anisometropia were uncommon, and myopia frequency increased with age. There were statistically significant steeper keratometric readings in myopic compared to hyperopic eyes. The frequency of refractive errors that we found of 36.7% is moderate compared to the global data. The rates and parameters statistically differed by sex and age groups. Our findings are useful for establishing refractive error rate benchmarks in low-middle-income countries and as a baseline for following their variation by sociodemographic factors.

  17. I think I know what you did last summer : improving data quality in panel surveys

    NARCIS (Netherlands)

    Lugtig, P.J.|info:eu-repo/dai/nl/304824658

    2012-01-01

    Five specific studies investigate how the methodology of panel surveys can be improved. 1) Propensity Score Matching to separate different sources of measurement error. Errors of non-observation and measurement errors are easily confounded in mixed-mode surveys. I propose propensity score matching

  18. Team safety and innovation by learning from errors in long-term care settings.

    Science.gov (United States)

    Buljac-Samardžić, Martina; van Woerkom, Marianne; Paauwe, Jaap

    2012-01-01

    Team safety and team innovation are underexplored in the context of long-term care. Understanding the issues requires attention to how teams cope with error. Team managers could have an important role in developing a team's error orientation and managing team membership instabilities. The aim of this study was to examine the impact of team member stability, team coaching, and a team's error orientation on team safety and innovation. A cross-sectional survey method was employed within 2 long-term care organizations. Team members and team managers received a survey that measured safety and innovation. Team members assessed member stability, team coaching, and team error orientation (i.e., problem-solving and blaming approach). The final sample included 933 respondents from 152 teams. Stable teams and teams with managers who take on the role of coach are more likely to adopt a problem-solving approach and less likely to adopt a blaming approach toward errors. Both error orientations are related to team member ratings of safety and innovation, but only the blaming approach is (negatively) related to manager ratings of innovation. Differences between members' and managers' ratings of safety are greater in teams with relatively high scores for the blaming approach and relatively low scores for the problem-solving approach. Team coaching was found to be positively related to innovation, especially in unstable teams. Long-term care organizations that wish to enhance team safety and innovation should encourage a problem-solving approach and discourage a blaming approach. Team managers can play a crucial role in this by coaching team members to see errors as sources of learning and improvement and ensuring that individuals will not be blamed for errors.

  19. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    Directory of Open Access Journals (Sweden)

    Yun Shi

    2014-01-01

    Full Text Available Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as for GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and on the estimate of landslide mass volume from the constructed DEM.
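
    The practical difference between additive and multiplicative error models lies in the weighting of the LS adjustment: with multiplicative errors the noise standard deviation scales with the signal, so weights of 1/fitted^2 must be iterated. A minimal straight-line sketch of this iteration follows; it illustrates the weighting principle only, not one of the paper's three adjustments.

        import numpy as np

        rng = np.random.default_rng(3)
        x = np.linspace(1, 10, 50)
        y = (2.0 + 0.5 * x) * (1 + rng.normal(0, 0.05, x.size))  # errors scale with signal

        A = np.column_stack([np.ones_like(x), x])
        beta = np.linalg.lstsq(A, y, rcond=None)[0]   # ordinary LS starting values
        for _ in range(5):                            # iterate weights ~ 1 / fitted^2
            w = 1.0 / (A @ beta) ** 2
            AtW = A.T * w                             # equivalent to A.T @ diag(w)
            beta = np.linalg.solve(AtW @ A, AtW @ y)
        print(f"intercept = {beta[0]:.3f}, slope = {beta[1]:.3f}")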

  1. Type Ia supernova rate studies from the SDSS-II Supernova Study

    Energy Technology Data Exchange (ETDEWEB)

    Dilday, Benjamin [Univ. of Chicago, IL (United States)

    2008-08-01

    The author presents new measurements of the type Ia SN rate from the SDSS-II Supernova Survey. The SDSS-II Supernova Survey was carried out during the Fall months (Sept.-Nov.) of 2005-2007 and discovered ~ 500 spectroscopically confirmed SNe Ia with densely sampled (once every ~ 4 days), multi-color light curves. Additionally, the SDSS-II Supernova Survey has discovered several hundred SNe Ia candidates with well-measured light curves, but without spectroscopic confirmation of type. This total, achieved in 9 months of observing, represents ~ 15-20% of the total SNe Ia discovered worldwide since 1885. The author describes some technical details of the SN Survey observations and SN search algorithms that contributed to the extremely high-yield of discovered SNe and that are important as context for the SDSS-II Supernova Survey SN Ia rate measurements.

  2. Frequency of medication errors by patients (Frecuencia de errores de los pacientes con su medicación)

    Directory of Open Access Journals (Sweden)

    José Joaquín Mira

    2012-02-01

    Full Text Available OBJECTIVE: Analyze the frequency of medication errors committed and reported by patients. METHODS: Descriptive study based on a telephone survey of a random sample of adult patients from the primary care level of the Spanish public health care system. A total of 1 247 patients responded (75% response rate); 63% were women and 29% were older than 70 years. RESULTS: While 37 patients (3%, 95% CI: 2-4) experienced complications associated with medication in the course of treatment, 241 (19.4%, 95% CI: 17-21) reported having made some mistake with their medication. A shorter consultation time (P < 0.01) and a worse assessment of the information provided by the physician (P < 0.01) were associated with the fact that during pharmacy dispensing the patient was told that the prescribed treatment was not appropriate. CONCLUSIONS: In addition to the known risks of an adverse event due to a health intervention resulting from a system or practitioner error, there are risks associated with patient errors in the self-administration of medication. Patients who were unsatisfied with the information provided by the physician reported a greater number of errors.

  3. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  4. Evolution of errors in the altimetric bathymetry model used by Google Earth and GEBCO

    Science.gov (United States)

    Marks, K. M.; Smith, W. H. F.; Sandwell, D. T.

    2010-09-01

    We analyze errors in the global bathymetry models of Smith and Sandwell that combine satellite altimetry with acoustic soundings and shorelines to estimate depths. Versions of these models have been incorporated into Google Earth and the General Bathymetric Chart of the Oceans (GEBCO). We use Japan Agency for Marine-Earth Science and Technology (JAMSTEC) multibeam surveys not previously incorporated into the models as "ground truth" to compare against model versions 7.2 through 12.1, defining vertical differences as "errors." Overall error statistics improve over time: 50th percentile errors declined from 57 to 55 to 49 m, and 90th percentile errors declined from 257 to 235 to 219 m, in versions 8.2, 11.1 and 12.1. This improvement is partly due to an increasing number of soundings incorporated into successive models, and partly to improvements in the satellite gravity model. Inspection of specific sites reveals that changes in the algorithms used to interpolate across survey gaps with altimetry have affected some errors. Versions 9.1 through 11.1 show a bias in the scaling from gravity in milliGals to topography in meters that affected the 15-160 km wavelength band. Regionally averaged (>160 km wavelength) depths have accumulated error over successive versions 9 through 11. These problems have been mitigated in version 12.1, which shows no systematic variation of errors with depth. Even so, version 12.1 is in some respects not as good as version 8.2, which employed a different algorithm.

  5. The role of respondents’ comfort for variance in stated choice surveys

    DEFF Research Database (Denmark)

    Emang, Diana; Lundhede, Thomas; Thorsen, Bo Jellesmark

    2017-01-01

    Preference elicitation among outdoor recreational users is subject to measurement errors that depend, in part, on survey planning. This study uses data from a choice experiment survey on recreational SCUBA diving to investigate whether self-reported information on respondents’ comfort when they complete surveys correlates with the error variance in stated choice models of their responses. Comfort-related variables are included in the scale functions of the scaled multinomial logit models. The hypothesis was that higher comfort reduces error variance in answers, as revealed by a higher scale parameter, and vice versa. Information on, e.g., sleep and time since eating (higher comfort) correlated with scale heterogeneity, and produced lower error variance when controlled for in the model. That respondents’ comfort may influence choice behavior suggests that knowledge of the respondents’ activity...

  6. Using snowball sampling method with nurses to understand medication administration errors.

    Science.gov (United States)

    Sheu, Shuh-Jen; Wei, Ien-Lan; Chen, Ching-Huey; Yu, Shu; Tang, Fu-In

    2009-02-01

    We aimed to encourage nurses to release information about drug administration errors to increase understanding of error-related circumstances and to identify high-alert situations. Drug administration errors represent the majority of medication errors, but errors are underreported. Effective ways are lacking to encourage nurses to actively report errors. Snowball sampling was conducted to recruit participants. A semi-structured questionnaire was used to record types of error, hospital and nurse backgrounds, patient consequences, error discovery mechanisms and reporting rates. Eighty-five nurses participated, reporting 328 administration errors (259 actual, 69 near misses). Most errors occurred in medical surgical wards of teaching hospitals, during day shifts, committed by nurses working fewer than two years. Leading errors were wrong drugs and doses, each accounting for about one-third of total errors. Among 259 actual errors, 83.8% resulted in no adverse effects; among remaining 16.2%, 6.6% had mild consequences and 9.6% had serious consequences (severe reaction, coma, death). Actual errors and near misses were discovered mainly through double-check procedures by colleagues and nurses responsible for errors; reporting rates were 62.5% (162/259) vs. 50.7% (35/69) and only 3.5% (9/259) vs. 0% (0/69) were disclosed to patients and families. High-alert situations included administration of 15% KCl, insulin and Pitocin; using intravenous pumps; and implementation of cardiopulmonary resuscitation (CPR). Snowball sampling proved to be an effective way to encourage nurses to release details concerning medication errors. Using empirical data, we identified high-alert situations. Strategies for reducing drug administration errors by nurses are suggested. Survey results suggest that nurses should double check medication administration in known high-alert situations. Nursing management can use snowball sampling to gather error details from nurses in a non

  7. The treatment of commission errors in first generation human reliability analysis methods

    Energy Technology Data Exchange (ETDEWEB)

    Alvarengga, Marco Antonio Bayout; Fonseca, Renato Alves da, E-mail: bayout@cnen.gov.b, E-mail: rfonseca@cnen.gov.b [Comissao Nacional de Energia Nuclear (CNEN) Rio de Janeiro, RJ (Brazil); Melo, Paulo Fernando Frutuoso e, E-mail: frutuoso@nuclear.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear

    2011-07-01

    Human errors in human reliability analysis can be classified generically as errors of omission and errors of commission. Omission errors are related to the omission of a human action that should have been performed but was not. Errors of commission are related to human actions that should not be performed but in fact are. Both involve specific types of cognitive error mechanisms; however, errors of commission are more difficult to model because they are characterized by non-anticipated actions that are performed instead of others that are omitted (omission errors), or that enter an operational task without being part of its normal sequence. The identification of actions that are not supposed to occur depends on the operational context, which can induce or facilitate certain unsafe actions of the operator depending on the behavior of its parameters and variables. The survey of operational contexts and associated unsafe actions is a characteristic of second-generation models, unlike first-generation models. This paper discusses how first-generation models can treat errors of commission in the detection, diagnosis, decision-making and implementation steps of human information processing, particularly with the use of THERP error-quantification tables. (author)

  8. BANKRUPTCY PREDICTION MODEL WITH ZETAc OPTIMAL CUT-OFF SCORE TO CORRECT TYPE I ERRORS

    Directory of Open Access Journals (Sweden)

    Mohamad Iwan

    2005-06-01

    This research has successfully attained the following results: (1) type I error is in fact 59.83 times more costly compared to type II error, (2) 22 ratios distinguish between bankrupt and non-bankrupt groups, (3) 2 financial ratios proved to be effective in predicting bankruptcy, (4) prediction using the ZETAc optimal cut-off score predicts more companies filing for bankruptcy within one year compared to prediction using the Hair et al. optimum cutting score, and (5) although prediction using the Hair et al. optimum cutting score is more accurate, prediction using the ZETAc optimal cut-off score proved to be able to minimize cost incurred from classification errors.
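
    A cost-weighted cut-off of this kind can be found by minimizing expected misclassification cost over candidate scores. The sketch below uses the abstract's 59.83:1 cost ratio with entirely synthetic score distributions; the heavy type I cost pushes the optimal cut-off upward, so more firms are flagged as likely bankrupt, mirroring the behavior described above.

        import numpy as np

        rng = np.random.default_rng(4)
        z_bankrupt = rng.normal(-1.0, 1.0, 300)   # synthetic scores, failed firms
        z_healthy = rng.normal(1.5, 1.0, 700)     # synthetic scores, going concerns
        cost_I, cost_II = 59.83, 1.0              # cost ratio from the abstract

        def expected_cost(c):
            type_I = np.mean(z_bankrupt >= c)     # bankrupt classified as healthy
            type_II = np.mean(z_healthy < c)      # healthy classified as bankrupt
            return cost_I * type_I + cost_II * type_II

        cutoffs = np.linspace(-4, 4, 801)
        best = min(cutoffs, key=expected_cost)
        print(f"cost-optimal cut-off = {best:.2f}")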

  9. Radiation survey of first Hi-Art II Tomotherapy vault design in India

    International Nuclear Information System (INIS)

    Kinhikar, Rajesh A.; Jamema, S.V.; Pai, Rajeshree; Sharma, P.K. Dash; Deshpande, Deepak D.

    2009-01-01

    A vault compliant with government regulations and with adequate shielding was designed and constructed for the Hi-Art II Tomotherapy machine, the first in India. Radiation measurements around this Tomotherapy treatment vault were carried out to check the shielding adequacy of the source housing and the vault. It was mandatory to have this unconventional machine 'Type Approved' by the Atomic Energy Regulatory Board (AERB) in India. The aim of this paper was to report the radiation levels measured during the radiation survey carried out for this machine. The radiation levels in and around the vault were measured for stationary as well as rotational treatment procedures with the largest open field size (5 cm x 40 cm) at the isocenter, with and without scattering medium. The survey was also performed at three locations near each wall surrounding the vault. The leakage radiation from the source housing was measured both in the patient plane outside the treatment field and at one meter distance from the source outside the patient plane. The radiation levels for both stationary and rotational procedures were within 1 mR/h. No significant difference was observed in the radiation levels measured for rotational procedures with and without scattering medium. The leakage radiation in the patient plane was found to be 0.04% (tolerance 0.2%), while the head leakage was 0.007% (tolerance 0.5%) of the dose rate at the isocenter. Treatment delivery with Tomotherapy thus maintains safe radiation levels around the installation and also meets the leakage criteria.

  10. Cultural-resource survey report: Hoover Dam Powerplant Modification Project II. Associated transmission-line facility

    International Nuclear Information System (INIS)

    Queen, R.L.

    1991-06-01

    The Bureau of Reclamation (Reclamation) is proposing to modify or install additional transmission facilities between the Hoover Dam hydroelectric plant and the Western Area Power Authority substation near Boulder City, Nevada. Reclamation has completed cultural resource investigations to identify historic or prehistoric resources in the project area that might be affected during construction of the transmission line. Four possible transmission corridors approximately 50 feet wide and between 9.5 and 11.5 miles long were investigated. The proposed transmission lines either parallel or replace existing transmission lines. The corridors generally have undergone significant disturbance from past transmission line construction. A Class II sampling survey covering approximately 242 acres was conducted. Access or construction roads have not been identified and surveys of these areas will have to be completed in the future. No historic or prehistoric archeological sites were encountered within the four corridor right-of-ways. It is believed that the probability for prehistoric sites is very low. Four historic period sites were recorded that are outside, but near, the proposed corridor. These sites are not individually eligible for the National Register of Historic Places, but may be associated with the construction of Hoover Dam and contribute to a historic district or multiple property resource area focusing on the dam and its construction

  11. The socio-economic patterning of survey participation and non-response error in a multilevel study of food purchasing behaviour: area- and individual-level characteristics.

    Science.gov (United States)

    Turrell, Gavin; Patterson, Carla; Oldenburg, Brian; Gould, Trish; Roy, Marie-Andree

    2003-04-01

    To undertake an assessment of survey participation and non-response error in a population-based study that examined the relationship between socio-economic position and food purchasing behaviour. The study was conducted in Brisbane City (Australia) in 2000. The sample was selected using a stratified two-stage cluster design. Respondents were recruited using a range of strategies that attempted to maximise the involvement of persons from disadvantaged backgrounds: respondents were contacted by personal visit and data were collected using home-based face-to-face interviews; multiple call-backs on different days and at different times were used; and a financial gratuity was provided. Non-institutionalised residents of private dwellings located in 50 small areas that differed in their socio-economic characteristics. Rates of survey participation - measured by non-contacts, exclusions, dropped cases, response rates and completions - were similar across areas, suggesting that residents of socio-economically advantaged and disadvantaged areas were equally likely to be recruited. Individual-level analysis, however, showed that respondents and non-respondents differed significantly in their sociodemographic and food purchasing characteristics: non-respondents were older, less educated and exhibited different purchasing behaviours. Misclassification bias probably accounted for the inconsistent pattern of association between the area- and individual-level results. Estimates of bias due to non-response indicated that although respondents and non-respondents were qualitatively different, the magnitude of error associated with this differential was minimal. Socio-economic position measured at the individual level is a strong and consistent predictor of survey non-participation. Future studies that set out to examine the relationship between socio-economic position and diet need to adopt sampling strategies and data collection methods that maximise the likelihood of recruiting

  12. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  13. Average [O II] nebular emission associated with Mg II absorbers: dependence on Fe II absorption

    Science.gov (United States)

    Joshi, Ravi; Srianand, Raghunathan; Petitjean, Patrick; Noterdaeme, Pasquier

    2018-05-01

    We investigate the effect of Fe II equivalent width (W2600) and fibre size on the average luminosity of [O II] λλ3727, 3729 nebular emission associated with Mg II absorbers (at 0.55 ≤ z ≤ 1.3) in the composite spectra of quasars obtained with 3 and 2 arcsec fibres in the Sloan Digital Sky Survey. We confirm the presence of strong correlations between [O II] luminosity (L_{[O II]}) and the equivalent width (W2796) and redshift of Mg II absorbers. However, we show that L_{[O II]} and the average luminosity surface density suffer from fibre-size effects. More importantly, for a given fibre size, the average L_{[O II]} depends strongly on the equivalent width of the Fe II absorption lines and is found to be higher for Mg II absorbers with R ≡ W2600/W2796 ≥ 0.5. In fact, we show that the observed strong correlations of L_{[O II]} with W2796 and z of Mg II absorbers are mainly driven by such systems. Direct [O II] detections also confirm the link between L_{[O II]} and R. Therefore, one has to pay attention to fibre losses and to the dependence of the redshift evolution of Mg II absorbers on W2600 before using them as a luminosity-unbiased probe of the global star formation rate density. We show that the [O II] nebular emission detected in the stacked spectrum is not dominated by a few direct detections (i.e. detections at the ≥3σ significance level). On average, the systems with R ≥ 0.5 and W2796 ≥ 2 Å are more reddened, showing colour excess E(B - V) ˜ 0.02, with respect to the systems with R < 0.5, and most likely trace high H I column density systems.

  14. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Beyond individual component failures, a combination of component failure and human error is often found at spectacular events. The Rasmussen Report and the German Risk Assessment Study in particular show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  15. Errors in nonword repetition: bridging short- and long-term memory.

    Science.gov (United States)

    Santos, F H; Bueno, O F A; Gathercole, S E

    2006-03-01

    According to the working memory model, the phonological loop is the component of working memory specialized in processing and manipulating limited amounts of speech-based information. The Children's Test of Nonword Repetition (CNRep) is a suitable measure of phonological short-term memory for English-speaking children, which was validated by the Brazilian Children's Test of Pseudoword Repetition (BCPR) as a Portuguese-language version. The objectives of the present study were: i) to investigate developmental aspects of phonological memory processing by error analysis in the nonword repetition task, and ii) to examine phoneme (substitution, omission and addition) and order (migration) errors made in the BCPR by 180 normal Brazilian children of both sexes aged 4-10, from preschool to 4th grade. The dominant error was substitution [F(3,525) = 180.47; P < 0.001]. A length effect, with more errors in long than in short items, was observed [F(3,519) = 108.36; P < 0.001], suggesting that long-term memory contributes to holding the memory trace. The findings were discussed in terms of distinctiveness, clustering and redintegration hypotheses.

  16. Sleep, mental health status, and medical errors among hospital nurses in Japan.

    Science.gov (United States)

    Arimura, Mayumi; Imai, Makoto; Okawa, Masako; Fujimura, Toshimasa; Yamada, Naoto

    2010-01-01

    Medical error involving nurses is a critical issue since nurses' actions will have a direct and often significant effect on the prognosis of their patients. To investigate the significance of nurse health in Japan and its potential impact on patient services, a questionnaire-based survey amongst nurses working in hospitals was conducted, with the specific purpose of examining the relationship between shift work, mental health and self-reported medical errors. Multivariate analysis revealed significant associations between the shift work system, General Health Questionnaire (GHQ) scores and nurse errors: the odds ratios for shift system and GHQ were 2.1 and 1.1, respectively. It was confirmed that both sleep and mental health status among hospital nurses were relatively poor, and that shift work and poor mental health were significant factors contributing to medical errors.

  17. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali.

    Science.gov (United States)

    Minetti, Andrea; Riera-Montes, Margarita; Nackers, Fabienne; Roederer, Thomas; Koudika, Marie Hortense; Sekkenes, Johanne; Taconet, Aurore; Fermon, Florence; Touré, Albouhary; Grais, Rebecca F; Checchi, Francesco

    2012-10-12

    Estimation of vaccination coverage (VC) at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard errors of VC and ICC estimates became increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at the local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.
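
    As an illustration of the bootstrapping idea used in this record, the sketch below resamples whole clusters to gauge the stability of a vaccination coverage (VC) estimate; the counts are invented and the estimator is a simple pooled proportion, not the authors' exact weighting.

        import random

        def vc_estimate(clusters):
            """Pooled vaccination coverage: total vaccinated / total sampled."""
            vaccinated = sum(v for _, v in clusters)
            sampled = sum(n for n, _ in clusters)
            return vaccinated / sampled

        def bootstrap_se(clusters, n_boot=1000, seed=1):
            """Standard error of the VC estimate from resampling whole clusters."""
            rng = random.Random(seed)
            estimates = []
            for _ in range(n_boot):
                resample = [rng.choice(clusters) for _ in range(len(clusters))]
                estimates.append(vc_estimate(resample))
            mean = sum(estimates) / n_boot
            var = sum((e - mean) ** 2 for e in estimates) / (n_boot - 1)
            return var ** 0.5

        # Hypothetical 10 x 15 design: 10 clusters of 15 children each.
        clusters = [(15, 13), (15, 14), (15, 11), (15, 15), (15, 12),
                    (15, 13), (15, 10), (15, 14), (15, 13), (15, 12)]
        print(f"VC = {vc_estimate(clusters):.2%}, bootstrap SE = {bootstrap_se(clusters):.3f}")

    Re-running the same resampling with fewer clusters (a 10 × 3 analogue, say) shows the bootstrap standard error growing, mirroring the instability the survey reports.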

  18. Teamwork and clinical error reporting among nurses in Korean hospitals.

    Science.gov (United States)

    Hwang, Jee-In; Ahn, Jeonghoon

    2015-03-01

    To examine levels of teamwork and its relationships with clinical error reporting among Korean hospital nurses. The study employed a cross-sectional survey design. We distributed a questionnaire to 674 nurses in two teaching hospitals in Korea. The questionnaire included items on teamwork and the reporting of clinical errors. We measured teamwork using the Teamwork Perceptions Questionnaire, which has five subscales including team structure, leadership, situation monitoring, mutual support, and communication. Using logistic regression analysis, we determined the relationships between teamwork and error reporting. The response rate was 85.5%. The mean score of teamwork was 3.5 out of 5. At the subscale level, mutual support was rated highest, while leadership was rated lowest. Of the participating nurses, 522 responded that they had experienced at least one clinical error in the last 6 months. Among those, only 53.0% responded that they always or usually reported clinical errors to their managers and/or the patient safety department. Teamwork was significantly associated with better error reporting. Specifically, nurses with a higher team communication score were more likely to report clinical errors to their managers and the patient safety department (odds ratio = 1.82, 95% confidence intervals [1.05, 3.14]). Teamwork was rated as moderate and was positively associated with nurses' error reporting performance. Hospital executives and nurse managers should make substantial efforts to enhance teamwork, which will contribute to encouraging the reporting of errors and improving patient safety.
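
    For readers wanting to reproduce the kind of effect estimate quoted above (odds ratio = 1.82, 95% CI [1.05, 3.14]), here is a minimal sketch of an odds ratio with a Wald 95% confidence interval from a 2 × 2 table; the counts are hypothetical, and the study itself used multivariable logistic regression rather than a raw table.

        import math

        # Odds ratio and 95% CI from a 2 x 2 table (invented counts):
        # rows = high vs low team communication, cols = reported vs did not report.
        a, b = 150, 80   # high communication: reported / not reported
        c, d = 120, 172  # low communication: reported / not reported

        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of log odds ratio
        lo = math.exp(math.log(or_) - 1.96 * se_log)
        hi = math.exp(math.log(or_) + 1.96 * se_log)
        print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")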

  19. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  20. Relationship Between Patients' Perceptions of Care Quality and Health Care Errors in 11 Countries: A Secondary Data Analysis.

    Science.gov (United States)

    Hincapie, Ana L; Slack, Marion; Malone, Daniel C; MacKinnon, Neil J; Warholak, Terri L

    2016-01-01

    Patients may be the most reliable reporters of some aspects of the health care process; their perspectives should be considered when pursuing changes to improve patient safety. The authors evaluated the association between patients' perceived health care quality and self-reported medical, medication, and laboratory errors in a multinational sample. The analysis was conducted using the 2010 Commonwealth Fund International Health Policy Survey, a multinational consumer survey conducted in 11 countries. Quality of care was measured by a multifaceted construct developed using Rasch techniques. After adjusting for potentially important confounding variables, an increase in respondents' perceptions of care coordination decreased the odds of self-reporting medical errors, medication errors, and laboratory errors (P < .001). As health care stakeholders continue to search for initiatives that improve care experiences and outcomes, this study's results emphasize the importance of guaranteeing integrated care.

  1. Self-reported medical, medication and laboratory error in eight countries: risk factors for chronically ill adults.

    Science.gov (United States)

    Scobie, Andrea

    2011-04-01

    To identify risk factors associated with self-reported medical, medication and laboratory error in eight countries. The Commonwealth Fund's 2008 International Health Policy Survey of chronically ill patients in eight countries. None. A multi-country telephone survey was conducted between 3 March and 30 May 2008 with patients in Australia, Canada, France, Germany, the Netherlands, New Zealand, the UK and the USA who self-reported being chronically ill. A bivariate analysis was performed to determine significant explanatory variables of medical, medication and laboratory error (P < 0.05). The following risk factors were significantly associated with self-reported error: age 65 and under, education level of some college or less, presence of two or more chronic conditions, high prescription drug use (four+ drugs), four or more doctors seen within 2 years, a care coordination problem, poor doctor-patient communication and use of an emergency department. Risk factors with the greatest ability to predict experiencing an error encompassed issues with coordination of care and provider knowledge of a patient's medical history. The identification of these risk factors could help policymakers and organizations to proactively reduce the likelihood of error through greater examination of system- and organization-level practices.

  2. Gotta survey somebody: Methodological challenges in population studies of older people

    OpenAIRE

    Kelfve, Susanne

    2015-01-01

    Conducting representative surveys of older people is challenging. This thesis aims to analyze a) the characteristics of individuals at risk of being underrepresented in surveys of older people, b) the systematic errors likely to occur as a result of these selections, and c) whether these systematic errors can be minimized by weighting adjustments.   In Study I, we investigated a) who would be missing from a survey that excluded those living in institutions and that did not use indirect interv...

  3. Near Misses in Financial Trading: Skills for Capturing and Averting Error.

    Science.gov (United States)

    Leaver, Meghan; Griffiths, Alex; Reader, Tom

    2018-05-01

    The aims of this study were (a) to determine whether near-miss incidents in financial trading contain information on the operator skills and systems that detect and prevent near misses and the patterns and trends revealed by these data and (b) to explore if particular operator skills and systems are found as important for avoiding particular types of error on the trading floor. In this study, we examine a cohort of near-miss incidents collected from a financial trading organization using the Financial Incident Analysis System and report on the nontechnical skills and systems that are used to detect and prevent error in this domain. One thousand near-miss incidents are analyzed using distribution, mean, chi-square, and associative analysis to describe the data; reliability is provided. Slips/lapses (52%) and human-computer interface problems (21%) often occur alone and are the main contributors to error causation, whereas the prevention of error is largely a result of teamwork (65%) and situation awareness (46%) skills. No matter the cause of error, situation awareness and teamwork skills are used most often to detect and prevent the error. Situation awareness and teamwork skills appear universally important as a "last line" of defense for capturing error, and data from incident-monitoring systems can be analyzed in a fashion more consistent with a "Safety-II" approach. This research provides data for ameliorating risk within financial trading organizations, with implications for future risk management programs and regulation.

  4. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k^2.
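
    A toy simulation, under the same linear-response assumption the note makes, contrasting the unisim and multisim estimates of the total systematic variance; the gradient vector, MC noise level and run counts are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        n_sys = 5                      # number of systematic parameters
        grad = rng.normal(size=n_sys)  # toy linear response to each parameter

        def observable(params, mc_stat_sigma=0.1):
            """Toy MC prediction: linear in the systematics plus MC statistical noise."""
            return grad @ params + rng.normal(scale=mc_stat_sigma)

        # Unisim: one run per parameter, each shifted by +1 standard deviation.
        nominal = observable(np.zeros(n_sys))
        shifts = [observable(np.eye(n_sys)[i]) - nominal for i in range(n_sys)]
        unisim_var = sum(s ** 2 for s in shifts)

        # Multisim: every run draws all parameters from their (unit normal) priors.
        n_runs = 500
        multisim = [observable(rng.normal(size=n_sys)) for _ in range(n_runs)]
        multisim_var = np.var(multisim, ddof=1)

        # Both estimates are contaminated by the MC statistical noise, which is
        # exactly the trade-off the note analyzes.
        print(f"true variance    : {np.sum(grad ** 2):.3f}")
        print(f"unisim estimate  : {unisim_var:.3f}")
        print(f"multisim estimate: {multisim_var:.3f}")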

  5. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k^2.

  6. Comparison of the effect of paper and computerized procedures on operator error rate and speed of performance

    International Nuclear Information System (INIS)

    Converse, S.A.; Perez, P.B.; Meyer, S.; Crabtree, W.

    1994-01-01

    The Computerized Procedures Manual (COPMA-II) is an advanced procedure manual that can be used to select and execute procedures, to monitor the state of plant parameters, and to help operators track their progress through plant procedures. COPMA-II was evaluated in a study that compared the speed and accuracy of operators' performance when they performed with COPMA-II and traditional paper procedures. Sixteen licensed reactor operators worked in teams of two to operate the Scales Pressurized Water Reactor Facility at North Carolina State University. Each team performed one change of power with each type of procedure to simulate performance under normal operating conditions. Teams then performed one accident scenario with COPMA-II and one with paper procedures. Error rates, performance times, and subjective estimates of workload were collected and evaluated for each combination of procedure type and scenario type. For the change of power task, accuracy and response time were not different for COPMA-II and paper procedures. Operators initiated responses to both accident scenarios fastest with paper procedures. However, procedure type did not moderate response completion time for either accident scenario. For accuracy, performance with paper procedures resulted in twice as many errors as did performance with COPMA-II. Subjective measures of mental workload for the accident scenarios were not affected by procedure type.

  7. Error-correction coding and decoding bounds, codes, decoders, analysis and applications

    CERN Document Server

    Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak

    2017-01-01

    This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...

  8. Longitudinal Cut Method Revisited: A Survey on the Main Error Sources

    OpenAIRE

    Moriconi, Alessandro; Lalli, Francesco; Di Felice, Fabio; Esposito, Pier Giorgio; Piscopia, Rodolfo

    2000-01-01

    Some of the main error sources in wave pattern resistance determination were investigated. The experimental data obtained at the Italian Ship Model Basin (longitudinal wave cuts concerned with the steady motion of the Series 60 model and a hard-chine catamaran) were analyzed. It was found that, within the range of Froude numbers tested (0.225 ≤ Fr ≤ 0.345 for the Series 60 and 0.5 ≤ Fr ≤ 1 for the catamaran) two sources of uncertainty play a significant role: (i) the p...

  9. Assessing explicit error reporting in the narrative electronic medical record using keyword searching.

    Science.gov (United States)

    Cao, Hui; Stetson, Peter; Hripcsak, George

    2003-01-01

    Many types of medical errors occur in and outside of hospitals, some of which have very serious consequences and increase cost. Identifying errors is a critical step for managing and preventing them. In this study, we assessed the explicit reporting of medical errors in the electronic record. We used five search terms "mistake," "error," "incorrect," "inadvertent," and "iatrogenic" to survey several sets of narrative reports including discharge summaries, sign-out notes, and outpatient notes from 1991 to 2000. We manually reviewed all the positive cases and identified them based on the reporting of physicians. We identified 222 explicitly reported medical errors. The positive predictive value varied with different keywords. In general, the positive predictive value for each keyword was low, ranging from 3.4 to 24.4%. Therapeutic-related errors were the most common reported errors, and these were mainly medication errors. Keyword searches combined with manual review identified some medical errors that were reported in medical records. The approach had low sensitivity and a moderate positive predictive value, which varied by search term. Physicians were most likely to record errors in the Hospital Course and History of Present Illness sections of discharge summaries. The reported errors in medical records covered a broad range and were related to several types of care providers as well as non-health care professionals.
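
    A minimal sketch of the keyword-screen-plus-manual-review workflow the study describes, computing a positive predictive value (PPV) over a handful of invented notes; the review labels stand in for the physicians' judgment.

        KEYWORDS = ("mistake", "error", "incorrect", "inadvertent", "iatrogenic")

        # (note text, judged-a-real-error-on-review) pairs; labels are invented
        # stand-ins for the manual physician review step.
        notes = [
            ("Medication error: patient received a double dose of heparin.", True),
            ("Standard error of the estimate was 2.3; no adverse events.", False),
            ("Inadvertent administration of an expired vaccine documented.", True),
            ("Patient denies any mistake in taking home medications.", False),
        ]

        hits = [(t, real) for t, real in notes
                if any(k in t.lower() for k in KEYWORDS)]
        ppv = sum(real for _, real in hits) / len(hits)
        print(f"{len(hits)} keyword hits, positive predictive value = {ppv:.0%}")

    The second note illustrates why the PPV for "error" is low in practice: the word appears routinely in statistical and clinical language without denoting a medical error.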

  10. An intervention to decrease patient identification band errors in a children's hospital.

    Science.gov (United States)

    Hain, Paul D; Joers, B; Rush, M; Slayton, J; Throop, P; Hoagg, S; Allen, L; Grantham, J; Deshpande, J K

    2010-06-01

    Patient misidentification continues to be a quality and safety issue. There is a paucity of US data describing interventions to reduce identification band error rates. Monroe Carell Jr Children's Hospital at Vanderbilt. Percentage of patients with defective identification bands. Web-based surveys were sent, asking hospital personnel to anonymously identify perceived barriers to reaching zero defects with identification bands. Corrective action plans were created and implemented with ideas from leadership, front-line staff and the online survey. Data from unannounced audits of patient identification bands were plotted on statistical process control charts and shared monthly with staff. All hospital personnel were expected to "stop the line" if there were any patient identification questions. The first audit showed a defect rate of 20.4%. The original mean defect rate was 6.5%. After interventions and education, the new mean defect rate was 2.6%. (a) The initial rate of patient identification band errors in the hospital was higher than expected. (b) The action resulting in most significant improvement was staff awareness of the problem, with clear expectations to immediately stop the line if a patient identification error was present. (c) Staff surveys are an excellent source of suggestions for combating patient identification issues. (d) Continued audit and data collection is necessary for sustainable staff focus and continued improvement. (e) Statistical process control charts are both an effective method to track results and an easily understood tool for sharing data with staff.
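
    A small sketch of the statistical process control idea in point (e): a p-chart with 3-sigma limits over monthly identification band audits. The audit counts are hypothetical, chosen only so that the first month echoes the 20.4% initial defect rate.

        import math

        def p_chart_limits(p_bar, n):
            """3-sigma control limits for a proportion-defective (p) chart."""
            sigma = math.sqrt(p_bar * (1 - p_bar) / n)
            return max(0.0, p_bar - 3 * sigma), p_bar + 3 * sigma

        # Hypothetical monthly audits: (bands checked, defective bands).
        audits = [(93, 19), (110, 8), (98, 7), (120, 5), (105, 3), (97, 2)]
        p_bar = sum(d for _, d in audits) / sum(n for n, _ in audits)

        for month, (n, d) in enumerate(audits, start=1):
            lcl, ucl = p_chart_limits(p_bar, n)
            rate = d / n
            flag = "in control" if lcl <= rate <= ucl else "out of control"
            print(f"month {month}: rate={rate:.1%} limits=({lcl:.1%}, {ucl:.1%}) {flag}")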

  11. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed the 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to the stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  12. Predictors of BMI Vary along the BMI Range of German Adults – Results of the German National Nutrition Survey II

    Science.gov (United States)

    Moon, Kilson; Krems, Carolin; Heuer, Thorsten; Roth, Alexander; Hoffmann, Ingrid

    2017-01-01

    Objective: The objective of the study was to identify predictors of BMI in German adults by considering the BMI distribution and to determine whether the association between BMI and its predictors varies along the BMI distribution. Methods: The sample included 9,214 adults aged 18-80 years from the German National Nutrition Survey II (NVS II). Quantile regression analyses were conducted to examine the association between BMI and the following predictors: age, sports activities, socio-economic status (SES), healthy eating index-NVS II (HEI-NVS II), dietary knowledge, sleeping duration and energy intake, as well as status of smoking, partner relationship and self-reported health. Results: Age, SES, self-reported health status, sports activities and energy intake were the strongest predictors of BMI. The important outcome of this study is that the association between BMI and its predictors varies along the BMI distribution. In particular, energy intake, health status and SES were only marginally associated with BMI in normal-weight subjects; these relationships became stronger in the range of overweight, and were strongest in the range of obesity. Conclusions: Predictors of BMI and the strength of these associations vary across the BMI distribution in German adults. Consequently, to identify predictors of BMI, the entire BMI distribution should be considered. PMID:28219069
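
    The quantile regression approach can be sketched with statsmodels as follows; the data are synthetic, constructed only so that the energy coefficient grows across quantiles, loosely mimicking the pattern the study reports.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic illustration only: BMI whose dependence on energy intake
        # strengthens toward the upper end of the BMI distribution.
        rng = np.random.default_rng(42)
        n = 2000
        age = rng.uniform(18, 80, n)
        energy = rng.normal(2200, 400, n)
        noise = rng.normal(0, 3, n) * (1 + (energy - 2200) / 2000)
        bmi = 22 + 0.05 * age + 0.002 * (energy - 2200) + noise
        df = pd.DataFrame({"bmi": bmi, "age": age, "energy": energy})

        # Fit the same model at several quantiles and compare coefficients.
        for q in (0.10, 0.50, 0.90):
            fit = smf.quantreg("bmi ~ age + energy", df).fit(q=q)
            print(f"q={q:.2f}: energy coefficient = {fit.params['energy']:.5f}")

    Unlike ordinary least squares, which fits only the conditional mean, the three fits expose how a predictor's influence differs between the lower, middle and upper parts of the BMI distribution.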

  13. Error propagation of partial least squares for parameters optimization in NIR modeling

    Science.gov (United States)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-01

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameters optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. The error propagation of modeling parameters for water quantity in corn and geniposide quantity in Gardenia was presented by both type I and type II error. For example, when variable importance in the projection (VIP), interval partial least square (iPLS) and backward interval partial least square (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrated how, and to what extent, the different modeling parameters affect error propagation of PLS for parameters optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials completed a rigorous process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters of other multivariate calibration models.
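
    As a sketch of how one modeling parameter feeds into prediction error, the snippet below cross-validates PLS models with different numbers of latent variables on synthetic spectra; it uses standard scikit-learn machinery, not the authors' type I/type II error-weight methodology.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        # Synthetic "spectra": 100 samples x 200 wavelengths with a latent signal.
        rng = np.random.default_rng(7)
        X = rng.normal(size=(100, 200))
        y = 0.8 * X[:, 50] + 0.5 * X[:, 120] + rng.normal(scale=0.2, size=100)

        # Vary one modeling parameter (number of latent variables) and watch
        # how it propagates into the cross-validated prediction error.
        for n_lv in (1, 2, 5, 10, 20):
            pls = PLSRegression(n_components=n_lv)
            rmse = -cross_val_score(pls, X, y, cv=5,
                                    scoring="neg_root_mean_squared_error").mean()
            print(f"latent variables={n_lv:2d}  CV RMSE={rmse:.3f}")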

  14. Error propagation of partial least squares for parameters optimization in NIR modeling.

    Science.gov (United States)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-05

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameters optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. The error propagation of modeling parameters for water quantity in corn and geniposide quantity in Gardenia was presented by both type I and type II error. For example, when variable importance in the projection (VIP), interval partial least square (iPLS) and backward interval partial least square (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrated how, and to what extent, the different modeling parameters affect error propagation of PLS for parameters optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials completed a rigorous process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters of other multivariate calibration models.

  15. Rare kaon decays at LAMPF II

    International Nuclear Information System (INIS)

    Sanford, T.W.L.

    1982-06-01

    At LAMPF II, intense beams of kaons will be available that will enable the rare kaon-decay processes to be investigated. This note explores some of the possibilities, which divide into two classes: (1) those that test the standard model of Weinberg and Salam and (2) those that are sensitive to new interactions. For both classes, experiments have been limited not by systematic errors but rather by statistical ones. LAMPF II, with its intense flux of kaons, thus will enable the frontier of rare kaon decay to be realistically probed.

  16. Design and field measurement of the BEPC-II interaction region dual-aperture quadrupoles

    International Nuclear Information System (INIS)

    Yin, Z.S.; Wu, Y.Z.; Zhang, J.F.; Chen, W.; Li, Y.J.; Li, L.; Hou, R.; Yin, B.G.; Sun, X.J.; Ren, F.L.; Wang, F.A.; Chen, F.S.; Yu, C.H.; Chen, C.

    2007-01-01

    With the Beijing Electron Positron Collider upgrade project (BEPC-II), two dual-aperture septum-style quadrupole magnets are used in the interaction region for the final focusing of the electron and positron beams. The BEPC-II lattice design calls for the same high quality integral quadrupole field and large good field region in both apertures for each magnet. Two-dimensional contour optimization and pole-end chamfer iteration are used to minimize the systematic harmonic errors. Unexpected non-systematic errors induced by the unsymmetrical structure and the manufacturing errors are compensated with pole-end shimming. Magnet measurements with rotating coils are performed to guide and confirm the magnet design. This paper discusses the design considerations, optimizing procedure and measurement results of these dual-aperture magnets.

  17. A vignette study to examine health care professionals' attitudes towards patient involvement in error prevention.

    Science.gov (United States)

    Schwappach, David L B; Frank, Olga; Davis, Rachel E

    2013-10-01

    Various authorities recommend the participation of patients in promoting patient safety, but little is known about health care professionals' (HCPs') attitudes towards patients' involvement in safety-related behaviours. To investigate how HCPs evaluate patients' behaviours and HCP responses to patient involvement in the behaviour, relative to different aspects of the patient, the involved HCP and the potential error. Cross-sectional fractional factorial survey with seven factors embedded in two error scenarios (missed hand hygiene, medication error). Each survey included two randomized vignettes that described the potential error, a patient's reaction to that error and the HCP response to the patient. Twelve hospitals in Switzerland. A total of 1141 HCPs (response rate 45%). Approval of patients' behaviour, HCP response to the patient, anticipated effects on the patient-HCP relationship, HCPs' support for being asked the question, affective response to the vignettes. Outcomes were measured on 7-point scales. Approval of patients' safety-related interventions was generally high and largely affected by patients' behaviour and correct identification of error. Anticipated effects on the patient-HCP relationship were much less positive, little correlated with approval of patients' behaviour and were mainly determined by the HCP response to intervening patients. HCPs expressed more favourable attitudes towards patients intervening about a medication error than about hand sanitation. This study provides the first insights into predictors of HCPs' attitudes towards patient engagement in safety. Future research is however required to assess the generalizability of the findings into practice before training can be designed to address critical issues.

  18. Drug Administration Errors by South African Anaesthetists – a Survey

    African Journals Online (AJOL)

    Adele

    Objectives. To investigate the incidence, nature of and factors contributing towards “wrong drug administrations” by South African anaesthetists. Design. A confidential, self-reporting survey was sent out to the 720 anaesthetists on the database of the South African Society of Anaesthesiologists.

  19. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

    1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, selected according to the size of the argument, with the largest-argument region being x .GE. 4.0. In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function by using the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x.
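
    The flavour of such rational approximations can be seen in this sketch, which uses the well-known Abramowitz and Stegun 7.1.26 fit (maximum absolute error about 1.5e-7) rather than the specific approximations in ERF/ERFC; the erfc shown also illustrates the loss-of-significance caveat above.

        import math

        # Abramowitz & Stegun 7.1.26: erf(x) ~ 1 - (sum a_i * t^i) * exp(-x^2),
        # with t = 1 / (1 + p*x), valid for x >= 0.
        _P = 0.3275911
        _A = (0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429)

        def erf(x: float) -> float:
            sign = -1.0 if x < 0 else 1.0   # erf is odd: erf(-x) = -erf(x)
            x = abs(x)
            t = 1.0 / (1.0 + _P * x)
            poly = sum(a * t ** (i + 1) for i, a in enumerate(_A))
            return sign * (1.0 - poly * math.exp(-x * x))

        def erfc(x: float) -> float:
            # Direct subtraction; as the record warns, this loses precision for
            # large x, where a dedicated erfc approximation should be used.
            return 1.0 - erf(x)

        for x in (0.5, 1.0, 2.0):
            print(x, erf(x), math.erf(x))  # compare against the library value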

  20. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  1. Centrifuge workers study. Phase II, completion report

    International Nuclear Information System (INIS)

    Wooten, H.D.

    1994-09-01

    Phase II of the Centrifuge Workers Study was a follow-up to the Phase I efforts. The Phase I results had indicated a higher risk than expected among centrifuge workers for developing bladder cancer when compared with the risk in the general population for developing this same type of cancer. However, no specific agent could be identified as the causative agent for these bladder cancers. As the Phase II Report states, Phase I had been limited to workers who had the greatest potential for exposure to substances used in the centrifuge process. Phase II was designed to expand the survey to evaluate the health of all employees who had ever worked in Centrifuge Program Departments 1330-1339 but who had not been interviewed in Phase I. Employees in analytical laboratories and maintenance departments who provided support services for the Centrifuge Program were also included in Phase II. In December 1989, the Oak Ridge Associated Universities (ORAU), now known as Oak Ridge Institute for Science and Education (ORISE), was contracted to conduct a follow-up study (Phase II). Phase II of the Centrifuge Workers Study expanded the survey to include all former centrifuge workers who were not included in Phase I. ORISE was chosen because they had performed the Phase I tasks and summarized the corresponding survey data therefrom.

  2. Centrifuge workers study. Phase II, completion report

    Energy Technology Data Exchange (ETDEWEB)

    Wooten, H.D.

    1994-09-01

    Phase II of the Centrifuge Workers Study was a follow-up to the Phase I efforts. The Phase I results had indicated a higher risk than expected among centrifuge workers for developing bladder cancer when compared with the risk in the general population for developing this same type of cancer. However, no specific agent could be identified as the causative agent for these bladder cancers. As the Phase II Report states, Phase I had been limited to workers who had the greatest potential for exposure to substances used in the centrifuge process. Phase II was designed to expand the survey to evaluate the health of all employees who had ever worked in Centrifuge Program Departments 1330-1339 but who had not been interviewed in Phase I. Employees in analytical laboratories and maintenance departments who provided support services for the Centrifuge Program were also included in Phase II. In December 1989, the Oak Ridge Associated Universities (ORAU), now known as Oak Ridge Institute for Science and Education (ORISE), was contracted to conduct a follow-up study (Phase II). Phase II of the Centrifuge Workers Study expanded the survey to include all former centrifuge workers who were not included in Phase I. ORISE was chosen because they had performed the Phase I tasks and summarized the corresponding survey data therefrom.

  3. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k^2. The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.

  4. A spectroscopic survey of EC4, an extended cluster in Andromeda's halo

    Science.gov (United States)

    Collins, M. L. M.; Chapman, S. C.; Irwin, M.; Ibata, R.; Martin, N. F.; Ferguson, A. M. N.; Huxor, A.; Lewis, G. F.; Mackey, A. D.; McConnachie, A. W.; Tanvir, N.

    2009-07-01

    We present a spectroscopic survey of candidate red giant branch stars in the extended star cluster EC4, discovered in the halo of M31 from our Canada-France-Hawaii Telescope/MegaCam survey, overlapping the tidal streams, Streams `Cp' and `Cr'. These observations used the DEep Imaging Multi-Object Spectrograph mounted on the Keck II telescope to obtain spectra around the Ca II triplet region with ~1.3 Å resolution. Six stars lying on the red giant branch within two core radii of the centre of EC4 are found to have an average v_r = -287.9 (+1.9/-2.4) km s^-1 and σ_v,corr = 2.7 (+4.2/-2.7) km s^-1, taking instrumental errors into account. The resulting mass-to-light ratio for EC4 is M/L = 6.7 (+15/-6.7) M_solar/L_solar, a value that is consistent with a globular cluster within the 1σ errors we derive. From the summed spectra of our member stars, we find EC4 to be metal-poor, with [Fe/H] = -1.6 +/- 0.15. We discuss several formation and evolution scenarios which could account for our kinematic and metallicity constraints on EC4, and conclude that EC4 is most comparable with an extended globular cluster. We also compare the kinematics and metallicity of EC4 with Streams `Cp' and `Cr', and find that EC4 bears a striking resemblance to Stream `Cp' in terms of velocity, and that the two structures are identical in terms of both their spectroscopic and photometric metallicities. From this, we conclude that EC4 is likely related to Stream `Cp'.
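
    A toy version of the dispersion estimate quoted above: subtract the instrumental error in quadrature from the raw sample variance of the member velocities. The velocities and per-star uncertainty here are invented, and the paper's treatment of individual measurement errors is more careful than this sketch.

        import math

        # Illustrative member velocities (km/s) and an assumed per-star
        # instrumental uncertainty; neither reproduces the EC4 data.
        velocities = [-290.1, -285.3, -288.0, -286.5, -289.8, -287.7]
        instr_err = 5.0  # km/s

        mean_v = sum(velocities) / len(velocities)
        raw_var = sum((v - mean_v) ** 2 for v in velocities) / (len(velocities) - 1)
        intrinsic_var = max(0.0, raw_var - instr_err ** 2)  # quadrature correction
        print(f"v_mean = {mean_v:.1f} km/s, "
              f"sigma_corr = {math.sqrt(intrinsic_var):.1f} km/s")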

  5. A Human Reliability Analysis of Post- Accident Human Errors in the Low Power and Shutdown PSA of KSNP

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Daeil; Kim, J. H.; Jang, S. C

    2007-03-15

    Korea Atomic Energy Research Institute, using the ANS low power and shutdown (LPSD) probabilistic risk assessment (PRA) Standard, evaluated the LPSD PSA model of the KSNP, Yonggwang Units 5 and 6, and identified the items to be improved. The evaluation of the human reliability analysis (HRA) of post-accident human errors in the LPSD PSA model for the KSNP identified 10 of the 19 supporting-requirement items in the ANS PRA Standard as needing improvement. Thus, we newly carried out a HRA for post-accident human errors in the LPSD PSA model for the KSNP. The following are the improvements in the HRA of post-accident human errors of the LPSD PSA model for the KSNP compared with the previous one: interviews with operators and a site visit to support the interpretation of the procedures, the modeling of operator actions, and the quantification of human errors; application of a limiting value to the combined post-accident human errors; and documentation of all the inputs and bases for the detailed quantifications and the dependency analysis using the quantification sheets. The assessment of the new HRA results for post-accident human errors using the ANS LPSD PRA Standard shows that above 80% of its supporting-requirement items for post-accident human errors were graded as Category II. The number of re-estimated human errors using the LPSD Korea Standard HRA method is 385. Among them, the number of individual post-accident human errors is 253, and the number of dependent post-accident human errors is 135. The quantification results of the LPSD PSA model for the KSNP with the new HEPs show that the core damage frequency (CDF) is increased by 5.1% compared with the previous baseline CDF. It is expected that these study results will be greatly helpful to improve the PSA quality for the domestic nuclear power plants because they have sufficient PSA quality to meet the Category II of Supporting Requirements for the post

  6. A Human Reliability Analysis of Post- Accident Human Errors in the Low Power and Shutdown PSA of KSNP

    International Nuclear Information System (INIS)

    Kang, Daeil; Kim, J. H.; Jang, S. C.

    2007-03-01

    Korea Atomic Energy Research Institute, using the ANS low power and shutdown (LPSD) probabilistic risk assessment (PRA) Standard, evaluated the LPSD PSA model of the KSNP, Yonggwang Units 5 and 6, and identified the items to be improved. The evaluation of the human reliability analysis (HRA) of post-accident human errors in the LPSD PSA model for the KSNP identified 10 of the 19 supporting-requirement items in the ANS PRA Standard as needing improvement. Thus, we newly carried out a HRA for post-accident human errors in the LPSD PSA model for the KSNP. The following are the improvements in the HRA of post-accident human errors of the LPSD PSA model for the KSNP compared with the previous one: interviews with operators and a site visit to support the interpretation of the procedures, the modeling of operator actions, and the quantification of human errors; application of a limiting value to the combined post-accident human errors; and documentation of all the inputs and bases for the detailed quantifications and the dependency analysis using the quantification sheets. The assessment of the new HRA results for post-accident human errors using the ANS LPSD PRA Standard shows that above 80% of its supporting-requirement items for post-accident human errors were graded as Category II. The number of re-estimated human errors using the LPSD Korea Standard HRA method is 385. Among them, the number of individual post-accident human errors is 253, and the number of dependent post-accident human errors is 135. The quantification results of the LPSD PSA model for the KSNP with the new HEPs show that the core damage frequency (CDF) is increased by 5.1% compared with the previous baseline CDF. It is expected that these study results will be greatly helpful to improve the PSA quality for the domestic nuclear power plants because they have sufficient PSA quality to meet the Category II of Supporting Requirements for the post

  7. Characterization of measurement errors using structure-from-motion and photogrammetry to measure marine habitat structural complexity.

    Science.gov (United States)

    Bryson, Mitch; Ferrari, Renata; Figueira, Will; Pizarro, Oscar; Madin, Josh; Williams, Stefan; Byrne, Maria

    2017-08-01

    Habitat structural complexity is one of the most important factors in determining the makeup of biological communities. Recent advances in structure-from-motion and photogrammetry have resulted in a proliferation of 3D digital representations of habitats from which structural complexity can be measured. Little attention has been paid to quantifying the measurement errors associated with these techniques, including the variability of results under different surveying and environmental conditions. Such errors have the potential to confound studies that compare habitat complexity over space and time. This study evaluated the accuracy, precision, and bias in measurements of marine habitat structural complexity derived from structure-from-motion and photogrammetric measurements using repeated surveys of artificial reefs (with known structure) as well as natural coral reefs. We quantified measurement errors as a function of survey image coverage, actual surface rugosity, and the morphological community composition of the habitat-forming organisms (reef corals). Our results indicated that measurements could be biased by up to 7.5% of the total observed ranges of structural complexity based on the environmental conditions present during any particular survey. Positive relationships were found between measurement errors and actual complexity, and the strength of these relationships was increased when coral morphology and abundance were also used as predictors. The numerous advantages of structure-from-motion and photogrammetry techniques for quantifying and investigating marine habitats will mean that they are likely to replace traditional measurement techniques (e.g., chain-and-tape). To this end, our results have important implications for data collection and the interpretation of measurements when examining changes in habitat complexity using structure-from-motion and photogrammetry.

  8. Quantifying behavioural determinants relating to health professional reporting of medication errors: a cross-sectional survey using the Theoretical Domains Framework.

    Science.gov (United States)

    Alqubaisi, Mai; Tonna, Antonella; Strath, Alison; Stewart, Derek

    2016-11-01

    The aims of this study were to quantify the behavioural determinants of health professional reporting of medication errors in the United Arab Emirates (UAE) and to explore any differences between respondents. A cross-sectional survey of patient-facing doctors, nurses and pharmacists within three major hospitals of Abu Dhabi, the UAE. An online questionnaire was developed based on the Theoretical Domains Framework (TDF, a framework of behaviour change theories). Principal component analysis (PCA) was used to identify components and internal reliability determined. Ethical approval was obtained from a UK university and all hospital ethics committees. Two hundred and ninety-four responses were received. Questionnaire items clustered into six components of knowledge and skills, feedback and support, action and impact, motivation, effort and emotions. Respondents generally gave positive responses for the knowledge and skills, feedback and support, and action and impact components. Responses were more neutral for the motivation and effort components. In terms of emotions, the component with the most negative scores, there were significant differences in terms of years registered as a health professional (those registered longest most positive, p = 0.002) and age (older most positive, p < 0.05). • This appears to be the first study to use the Theoretical Domains Framework to quantify the behavioural determinants of health professional reporting of medication errors. • Questionnaire items relating to emotions surrounding reporting generated the most negative responses, with significant differences in terms of years registered as a health professional (those registered longest most positive) and age (older most positive), and no differences for gender and health profession. • Interventions based on behaviour change techniques mapped to emotions should be prioritised for development.

  9. 200-UP-2 operable unit radiological surveys

    International Nuclear Information System (INIS)

    Wendling, M.A.

    1994-01-01

    This report summarizes and documents the results of the radiological surveys conducted from August 17 through December 16, 1993 over a partial area of the 200-UP-2 Operable Unit, 200-W Area, Hanford Site, Richland, Washington. In addition, this report explains the survey methodology of the Mobile Surface Contamination Monitor II (MSCM-II) and the Ultra Sonic Ranging And Data System (USRADS). The radiological survey of the 200-UP-2 Operable Unit was conducted by the Site Investigative Surveys/Environmental Restoration Health Physics Organization of the Westinghouse Hanford Company. The survey methodology for the majority of the area was based on utilization of the MSCM-II or the USRADS for automated recording of the gross beta/gamma radiation levels at or near six (6) inches from the surface soil.

  10. Feasibility of non-linear simulation for Field II using an angular spectrum approach

    DEFF Research Database (Denmark)

    Du, Yigang; Jensen, Jørgen Arendt

    2008-01-01

    The purpose of this procedure is to find the accuracy of the approach for linear propagation, where the result can be validated using Field II simulations. The ASA calculations are carried out by 3D fast Fourier transform using Matlab, where λ/2 is chosen as the spatial sampling interval to reduce aliasing errors. Zero-padding is applied to enlarge the source plane to a (4N − 1) × (4N − 1) matrix to overcome artifacts in terms of the circular convolution. The source plane, covering an area of 9 × 9 mm² with N = 61 samples along both sides, is 0.05 mm away from a 5 MHz planar piston transducer, which is simulated by Field II. To determine the accuracy, different sampling intervals and zero-paddings are compared and the errors are calculated with Field II as a reference. It can be seen that zero-padding with 4N − 1 and λ/2 sampling can both reduce the errors, from 25.7% to 12.9% for the near field and from 18.1% to 5...
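
    A minimal numerical sketch of the zero-padded angular spectrum step described above, in Python rather than the record's Matlab; the piston radius and the 10 mm propagation distance are illustrative assumptions, and a full Field II comparison is beyond the sketch:

        import numpy as np

        c, f0 = 1540.0, 5e6              # speed of sound (m/s) and frequency (Hz)
        lam = c / f0                     # wavelength, ~0.31 mm
        dx = lam / 2.0                   # lambda/2 spatial sampling to limit aliasing
        N = 61                           # samples per side of the ~9 x 9 mm^2 source plane
        M = 4 * N - 1                    # zero-padded size against circular convolution
        z = 10e-3                        # propagation distance (m), illustrative

        xs = (np.arange(N) - (N - 1) / 2.0) * dx
        X, Y = np.meshgrid(xs, xs)
        src = np.zeros((M, M), dtype=complex)
        src[:N, :N] = np.sqrt(X**2 + Y**2) <= 4.5e-3   # uniform circular piston (assumed radius)

        k = 2.0 * np.pi / lam
        fx = np.fft.fftfreq(M, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
        # Transfer function; evanescent components are dropped.
        H = np.where(kz_sq > 0, np.exp(1j * z * np.sqrt(np.abs(kz_sq))), 0)

        field_z = np.fft.ifft2(np.fft.fft2(src) * H)[:N, :N]
        print(np.abs(field_z).max())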

  11. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    In this review article, the definition of medication errors, the scope of the medication error problem, the types of medication errors, their common causes, the monitoring of medication errors, their consequences, and the prevention and management of medication errors are explained clearly, with tables that make the material easy to understand.

  12. Random Numbers Demonstrate the Frequency of Type I Errors: Three Spreadsheets for Class Instruction

    Science.gov (United States)

    Duffy, Sean

    2010-01-01

    This paper describes three spreadsheet exercises demonstrating the nature and frequency of type I errors using random number generation. The exercises are designed specifically to address issues related to testing multiple relations using correlation (Demonstration I), t tests varying in sample size (Demonstration II) and multiple comparisons…
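
    The logic of these spreadsheet demonstrations is easy to reproduce outside a spreadsheet. A short Python sketch of the idea behind Demonstration I, correlating many pairs of pure-noise variables and counting "significant" results; the sample size, number of tests and α are illustrative, and SciPy is required:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n_obs, n_tests, alpha = 30, 1000, 0.05

        false_positives = 0
        for _ in range(n_tests):
            x = rng.normal(size=n_obs)
            y = rng.normal(size=n_obs)      # truly unrelated to x
            r, p = stats.pearsonr(x, y)
            false_positives += p < alpha

        # With no true effects, roughly alpha of the tests are still "significant".
        print(false_positives / n_tests)    # typically close to 0.05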

  13. Impaired learning from errors in cannabis users: Dorsal anterior cingulate cortex and hippocampus hypoactivity.

    Science.gov (United States)

    Carey, Susan E; Nestor, Liam; Jones, Jennifer; Garavan, Hugh; Hester, Robert

    2015-10-01

    The chronic use of cannabis has been associated with error processing dysfunction, in particular, hypoactivity in the dorsal anterior cingulate cortex (dACC) during the processing of cognitive errors. Given the role of such activity in influencing post-error adaptive behaviour, we hypothesised that chronic cannabis users would have significantly poorer learning from errors. Fifteen chronic cannabis users (four females, mean age=22.40 years, SD=4.29) and 15 control participants (two females, mean age=23.27 years, SD=3.67) were administered a paired associate learning task that enabled participants to learn from their errors, during fMRI data collection. Compared with controls, chronic cannabis users showed (i) a lower recall error-correction rate and (ii) hypoactivity in the dACC and left hippocampus during the processing of error-related feedback and re-encoding of the correct response. The difference in error-related dACC activation between cannabis users and healthy controls varied as a function of error type, with the control group showing a significantly greater difference between corrected and repeated errors than the cannabis group. The present results suggest that chronic cannabis users have poorer learning from errors, with the failure to adapt performance associated with hypoactivity in error-related dACC and hippocampal regions. The findings highlight a consequence of performance monitoring dysfunction in drug abuse and the potential consequence this cognitive impairment has for the symptom of failing to learn from negative feedback seen in cannabis and other forms of dependence. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. Error identification, disclosure, and reporting: practice patterns of three emergency medicine provider types.

    Science.gov (United States)

    Hobgood, Cherri; Xie, Jipan; Weiner, Bryan; Hooker, James

    2004-02-01

    To gather preliminary data on how the three major types of emergency medicine (EM) providers, physicians, nurses (RNs), and out-of-hospital personnel (EMTs), differ in error identification, disclosure, and reporting. A convenience sample of emergency department (ED) providers completed a brief survey designed to evaluate error frequency, disclosure, and reporting practices as well as error-based discussion and educational activities. One hundred sixteen subjects participated: 41 EMTs (35%), 33 RNs (28%), and 42 physicians (36%). Forty-five percent of EMTs, 56% of RNs, and 21% of physicians identified no clinical errors during the preceding year. When errors were identified, physicians learned of them via dialogue with RNs (58%), patients (13%), pharmacy (35%), and attending physicians (35%). For known errors, all providers were equally unlikely to inform the team caring for the patient. Disclosure to patients was limited and varied by provider type (19% EMTs, 23% RNs, and 74% physicians). Disclosure education was rare; few providers reported formal training in disclosing an error to a patient. Error discussions are widespread, with all providers indicating they discussed their own as well as the errors of others. This study suggests that error identification, disclosure, and reporting challenge all members of the ED care delivery team. Provider-specific education and enhanced teamwork training will be required to further the transformation of the ED into a high-reliability organization.

  15. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,

  16. Test-Retest Reliability of the Adaptive Chemistry Assessment Survey for Teachers: Measurement Error and Alternatives to Correlation

    Science.gov (United States)

    Harshman, Jordan; Yezierski, Ellen

    2016-01-01

    Determining the error of measurement is a necessity for researchers engaged in bench chemistry, chemistry education research (CER), and a multitude of other fields. Discussions have occurred regarding what measurement error entails and how best to measure it, but the critiques of traditional measures have yielded few alternatives…

  17. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali

    Directory of Open Access Journals (Sweden)

    Minetti Andrea

    2012-10-01

    Background: Estimation of vaccination coverage at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. Methods: We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. Results: VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: (i) health areas not requiring supplemental activities; (ii) health areas requiring additional vaccination; (iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard errors of VC and ICC estimates were increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Conclusions: Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.
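
    A hedged Python sketch of the kind of cluster bootstrap used to study shrinking sample sizes; the simulated coverage data below are invented, not the Mali survey data:

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative data: 10 clusters of 15 children, 1 = vaccinated.
        clusters = [rng.binomial(1, np.clip(rng.normal(0.80, 0.05), 0, 1), size=15)
                    for _ in range(10)]

        def bootstrap_se(clusters, children_per_cluster, n_boot=2000):
            # Resample clusters with replacement, then children within each cluster,
            # mimicking designs of decreasing size (e.g. 10 x 15 down to 10 x 3).
            estimates = []
            for _ in range(n_boot):
                picked = [clusters[i] for i in rng.integers(0, len(clusters), len(clusters))]
                sub = [rng.choice(c, size=children_per_cluster, replace=True) for c in picked]
                estimates.append(np.concatenate(sub).mean())
            return np.std(estimates)

        print(bootstrap_se(clusters, 15))   # 10 x 15 design
        print(bootstrap_se(clusters, 3))    # 10 x 3 design: visibly larger standard error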

  18. Undetected latent failures of safety-related systems. Preliminary survey of events in nuclear power plants 1980-1997

    International Nuclear Information System (INIS)

    Lydell, B.

    1998-03-01

    This report summarizes results and insights from a preliminary survey of events involving undetected, latent failures of safety-related systems. The survey was limited to events where mispositioned equipment (e.g., valves, switches) remained undetected, thus rendering standby equipment or systems unavailable for short or long time periods. Typically, these events were symptoms of underlying latent errors (e.g., design errors, procedure errors, unanalyzed safety conditions) and programmatic errors. The preliminary survey identified well over 300 events. Of these, 95 events are documented in this report. Events involving mispositioned equipment are commonplace. Most events are discovered soon after occurrence, however. But as evidenced by the survey results, some events remained undetected beyond several shift changes. The recommendations developed by the survey emphasize the importance of applying modern root cause analysis techniques to the event analysis to ensure that the causes and implications of the events that occurred are fully understood.

  19. Informed Design of Mixed-Mode Surveys : Evaluating mode effects on measurement and selection error

    NARCIS (Netherlands)

    Klausch, Thomas|info:eu-repo/dai/nl/341427306

    2014-01-01

    “Mixed-mode designs” are innovative types of surveys which combine more than one mode of administration in the same project, such as surveys administered partly on the web (online), on paper, by telephone, or face-to-face. Mixed-mode designs have become increasingly popular in international survey

  20. Error of the slanted edge method for measuring the modulation transfer function of imaging systems.

    Science.gov (United States)

    Xie, Xufen; Fan, Hongda; Wang, Hongyuan; Wang, Zebin; Zou, Nianyu

    2018-03-01

    The slanted edge method is a basic approach for measuring the modulation transfer function (MTF) of imaging systems; however, its measurement accuracy is limited in practice. Theoretical analysis of the slanted edge MTF measurement method performed in this paper reveals that inappropriate edge angles and random noise reduce this accuracy. The error caused by edge angles is analyzed using sampling and reconstruction theory. Furthermore, an error model combining noise and edge angles is proposed. We verify the analyses and model with respect to (i) the edge angle, (ii) a statistical analysis of the measurement error, (iii) the full width at half-maximum of a point spread function, and (iv) the error model. The experimental results verify the theoretical findings. This research can be referential for applications of the slanted edge MTF measurement method.
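
    The standard slanted-edge pipeline (project pixels onto the edge normal, bin an oversampled edge-spread function, differentiate, Fourier transform) can be sketched briefly. A synthetic Python example, in which the 5° edge angle and Gaussian PSF are illustrative choices rather than the paper's setup; SciPy supplies the erf:

        import numpy as np
        from scipy.special import erf

        # Synthetic slanted edge blurred by a Gaussian PSF (sigma in pixels).
        h, w, theta, sigma = 64, 64, np.deg2rad(5.0), 1.0
        y, x = np.mgrid[:h, :w]
        d = (x - w / 2) * np.cos(theta) - (y - h / 2) * np.sin(theta)  # distance to edge
        img = 0.5 * (1.0 + erf(d / (np.sqrt(2.0) * sigma)))

        # Bin a 4x-oversampled edge-spread function along the edge normal.
        mask = np.abs(d) < 16
        bins = np.arange(-16.0, 16.25, 0.25)
        sums, _ = np.histogram(d[mask], bins=bins, weights=img[mask])
        counts, _ = np.histogram(d[mask], bins=bins)
        esf = sums / np.maximum(counts, 1)

        lsf = np.gradient(esf) * np.hanning(esf.size)   # differentiate and taper
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]
        freq = np.fft.rfftfreq(esf.size, d=0.25)        # cycles per pixel
        print(freq[np.argmax(mtf < 0.5)])               # rough MTF50 estimate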

  1. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
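
    The TER/LER contrast reduces to two delta rules that differ only in which prediction error drives each weight update. A minimal Python sketch; the learning rate and the single-compound training schedule are illustrative, not the paper's simulations:

        import numpy as np

        alpha, n_trials = 0.1, 100
        w_ter = np.zeros(2)   # weights for cues A and B, total-error model (Rescorla-Wagner)
        w_ler = np.zeros(2)   # local-error model

        for _ in range(n_trials):
            x = np.array([1.0, 1.0])          # compound trial: both cues present
            outcome = 1.0
            # TER: every cue is updated from the summed prediction error.
            w_ter += alpha * x * (outcome - w_ter @ x)
            # LER: each cue is updated from its own prediction error.
            w_ler += alpha * x * (outcome - w_ler * x)

        print(w_ter)  # the cues share the outcome: weights sum to ~1
        print(w_ler)  # each weight converges to ~1 on its own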

  2. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged <18 years. Of these error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  3. The Mark II Vertex Drift Chamber

    International Nuclear Information System (INIS)

    Alexander, J.P.; Baggs, R.; Fujino, D.

    1989-03-01

    We have completed constructing and begun operating the Mark II Drift Chamber Vertex Detector. The chamber, based on a modified jet cell design, achieves 30 μm spatial resolution in CO2-based gas mixtures. Special emphasis has been placed on controlling systematic errors, including the use of novel construction techniques which permit accurate wire placement. Chamber performance has been studied with cosmic ray tracks collected with the chamber located both inside and outside the Mark II. Results on spatial resolution, average pulse shape, and some properties of CO2 mixtures are presented. 10 refs., 12 figs., 1 tab.

  4. On minimizing assignment errors and the trade-off between false positives and negatives in parentage analysis

    KAUST Repository

    Harrison, Hugo B.

    2013-11-04

    Genetic parentage analyses provide a practical means with which to identify parent-offspring relationships in the wild. In Harrison et al.'s study (2013a), we compared three methods of parentage analysis and showed that the number and diversity of microsatellite loci were the most important factors defining the accuracy of assignments. Our simulations revealed that an exclusion-Bayes theorem method was more susceptible to false-positive and false-negative assignments than other methods tested. Here, we analyse and discuss the trade-off between type I and type II errors in parentage analyses. We show that controlling for false-positive assignments, without reporting type II errors, can be misleading. Our findings illustrate the need to estimate and report both the rate of false-positive and false-negative assignments in parentage analyses. © 2013 John Wiley & Sons Ltd.
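
    The trade-off described here can be reproduced with any threshold on an assignment score. A generic Python sketch; the Gaussian score distributions are illustrative, not drawn from the parentage simulations:

        import numpy as np

        rng = np.random.default_rng(3)
        true_parents = rng.normal(10.0, 2.0, 5000)   # illustrative assignment scores
        unrelated    = rng.normal(5.0, 2.0, 5000)

        for thr in [4, 6, 8, 10]:
            fpr = (unrelated >= thr).mean()          # type I: unrelated accepted
            fnr = (true_parents < thr).mean()        # type II: real parents rejected
            print(f"threshold {thr}: FPR {fpr:.3f}  FNR {fnr:.3f}")

    Raising the threshold suppresses false positives but inflates false negatives, which is why reporting only one of the two rates is misleading.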

  5. On minimizing assignment errors and the trade-off between false positives and negatives in parentage analysis

    KAUST Repository

    Harrison, Hugo B.; Saenz Agudelo, Pablo; Planes, Serge; Jones, Geoffrey P.; Berumen, Michael L.

    2013-01-01

    Genetic parentage analyses provide a practical means with which to identify parent-offspring relationships in the wild. In Harrison et al.'s study (2013a), we compared three methods of parentage analysis and showed that the number and diversity of microsatellite loci were the most important factors defining the accuracy of assignments. Our simulations revealed that an exclusion-Bayes theorem method was more susceptible to false-positive and false-negative assignments than other methods tested. Here, we analyse and discuss the trade-off between type I and type II errors in parentage analyses. We show that controlling for false-positive assignments, without reporting type II errors, can be misleading. Our findings illustrate the need to estimate and report both the rate of false-positive and false-negative assignments in parentage analyses. © 2013 John Wiley & Sons Ltd.

  6. Preanalytical Blood Sampling Errors in Clinical Settings

    International Nuclear Information System (INIS)

    Zehra, N.; Malik, A. H.; Arshad, Q.; Sarwar, S.; Aslam, S.

    2016-01-01

    Background: Blood sampling is one of the most common procedures performed in every ward for disease diagnosis and prognosis. Hundreds of samples are collected daily from different wards, but lack of appropriate knowledge of blood sampling among paramedical staff and accidental errors make samples inappropriate for testing; the need to avoid these errors therefore remains. We carried out this research with the aim of determining the common errors during blood sampling, finding the factors responsible, and proposing ways to reduce these errors. Methods: A cross-sectional descriptive study was carried out at the Military and Combined Military Hospital Rawalpindi during February and March 2014. A Venous Blood Sampling questionnaire (VBSQ) was filled in by the staff on a voluntary basis in front of the researchers. The staff were briefed on the purpose of the survey before filling in the questionnaire. The sample size was 228. Results were analysed using SPSS-21. Results: When asked in the questionnaire, around 61.6 percent of the paramedical staff stated that they cleaned the vein by moving the alcohol swab from inward to outward, while 20.8 percent of the staff reported that they felt the vein after disinfection. Contrary to WHO guidelines, 89.6 percent reported a habit of placing blood in the test tube while holding it in the other hand, which should actually be done after inserting it into the stand. Although 86 percent thought that they had ample knowledge regarding the blood sampling process, they did not practice it properly. Conclusion: Preanalytical blood sampling errors are common in our setup. Although 86 percent of participants thought they had adequate knowledge regarding blood sampling, most were not adhering to standard protocols. There is a need for continued education and refresher courses. (author)

  7. Indicators to examine quality of large scale survey data: an example through district level household and facility survey.

    Directory of Open Access Journals (Sweden)

    Kakoli Borkotoky

    BACKGROUND: Large scale surveys are the main source of data pertaining to social and demographic indicators, hence their quality is also of great concern. In this paper, we discuss the indicators used to examine the quality of data. We focus on age misreporting, incompleteness and inconsistency of information, and skipping of questions on reproductive and sexual health related issues. In order to observe the practical consequences of errors in a survey, the District Level Household and Facility Survey (DLHS-3) is used as an example dataset. METHODS: Whipple's and Myers' indices are used to identify age misreporting. Age displacements are identified by estimating downward and upward transfers for women from bordering age groups of the eligible age range. The skipping pattern is examined by recording the responses to the questions which precede the sections on birth history, immunization, and reproductive and sexual health. RESULTS: The study observed errors in age reporting in all the states, but the extent of misreporting differs by state and individual characteristics. Illiteracy, rural residence and poor economic condition are the major factors that lead to age misreporting. Females were excluded from the eligible age group to reduce the duration of the interview. The study further observed that respondents tend to skip questions on HIV/RTI and other questions which follow a set of questions. CONCLUSION: The study concludes that age misreporting, inconsistency and incomplete response are three sources of error that need to be considered carefully before drawing conclusions from any survey. DLHS-3 also suffers from age misreporting, particularly for females of reproductive age. In view of the coverage of the survey, it may not be possible to control age misreporting completely, but some extra effort to probe for a better answer may help in improving the quality of data in the survey.

  8. Brazil Geological Basic Survey Program - Ponte Nova - Sheet SF.23-X-B-II - Minas Gerais State

    International Nuclear Information System (INIS)

    Brandalise, L.A.

    1991-01-01

    The present report refers to the systematic geological mapping of the Ponte Nova Sheet (SF.23-X-B-II) on the 1:100,000 scale. The Sheet covers the Zona da Mata region, Minas Gerais State, in the Mantiqueira Geotectonic Province, to the eastern part of the Sao Francisco Geotectonic Province, as defined in the project. The high-grade to low-amphibolite-facies metamorphic rocks occurring in the area were affected by a marked low-angle shearing transposition and show diaphthoritic effects. Archaean to Proterozoic ages are attributed to the metamorphites, mostly by comparison with similar types of the region. Three deformation events were registered in the region. An analysis of the crustal evolution pattern, based on geological mapping, laboratory analyses, gravimetric and air magnetometry data, and available geochronologic data, is given in Chapter 6, Part II, of the text. Major element oxides, trace elements, and rare-earth elements were analysed to establish parameters for elucidating the environment of the rocks. A geochemical survey was carried out based on pan-concentrate and stream-sediment samples distributed throughout the Sheet. Gneiss quarries (industrial rocks) in full exploration activity have been registered, as well as sand and clay deposits used in the construction industry. The metallogenetic/provisional analysis points out the area as a favorable one for gold prospecting. (author)

  9. Residents' Ratings of Their Clinical Supervision and Their Self-Reported Medical Errors: Analysis of Data From 2009.

    Science.gov (United States)

    Baldwin, DeWitt C; Daugherty, Steven R; Ryan, Patrick M; Yaghmour, Nicholas A; Philibert, Ingrid

    2018-04-01

    Medical errors and patient safety are major concerns for the medical and medical education communities. Improving clinical supervision for residents is important in avoiding errors, yet little is known about how residents perceive the adequacy of their supervision and how this relates to medical errors and other education outcomes, such as learning and satisfaction. We analyzed data from a 2009 survey of residents in 4 large specialties regarding the adequacy and quality of supervision they receive as well as associations with self-reported data on medical errors and residents' perceptions of their learning environment. Residents' reports of working without adequate supervision were lower than data from a 1999 survey for all 4 specialties, and residents were least likely to rate "lack of supervision" as a problem. While few residents reported that they received inadequate supervision, problems with supervision were negatively correlated with sufficient time for clinical activities, overall ratings of the residency experience, and attending physicians as a source of learning. Problems with supervision were positively correlated with resident reports that they had made a significant medical error, had been belittled or humiliated, or had observed others falsifying medical records. Although working without supervision was not a pervasive problem in 2009, when it happened, it appeared to have negative consequences. The association between inadequate supervision and medical errors is of particular concern.

  10. A method of encountering the ratio of adjacent sides and its applied study in nuclear engineering survey

    International Nuclear Information System (INIS)

    Wu Jingqin

    1996-01-01

    In the cross-side or range-net survey method, the average error of the measured side lengths is computed. As side length increases, the observed variance increases greatly. Photo-electrical distance survey equipment generally has high internal precision, but it is affected by a typical weather error, so that the external precision is decreased; this weather error, which behaves like a systematic error, greatly decreases the precision of the observed sides. To solve this problem, theoretical study and field tests were carried out on the correlation of ratios among short sides measured photo-electrically and on the degree of stability of the ratios of sides, and a new method of ratio encountering of adjacent sides is put forward. Because the weights of the ratio variance σγ² = 2η²γ² and the angular variance σβ² = 2J²ρ² match each other, the systematic error can be eliminated completely, and survey point coordinates of high precision can be obtained. The method is easy to operate, as it does not require multi-photo-band survey or operation at the optimal observation time, and it is especially suitable for nuclear engineering survey applications. (3 tabs.)

  11. Undetected latent failures of safety-related systems. Preliminary survey of events in nuclear power plants 1980-1997

    Energy Technology Data Exchange (ETDEWEB)

    Lydell, B. [RSA Technologies, Vista, CA (United States)

    1998-03-01

    This report summarizes results and insights from a preliminary survey of events involving undetected, latent failures of safety-related systems. The survey was limited to events where mispositioned equipment (e.g., valves, switches) remained undetected, thus rendering standby equipment or systems unavailable for short or long time periods. Typically, these events were symptoms of underlying latent errors (e.g., design errors, procedure errors, unanalyzed safety conditions) and programmatic errors. The preliminary survey identified well over 300 events. Of these, 95 events are documented in this report. Events involving mispositioned equipment are commonplace. Most events are discovered soon after occurrence, however. But as evidenced by the survey results, some events remained undetected beyond several shift changes. The recommendations developed by the survey emphasize the importance of applying modern root cause analysis techniques to the event analysis to ensure that the causes and implications of the events that occurred are fully understood. 7 refs, 4 tabs, 3 figs. Also available at the SKI Home page: //www.ski.se.

  12. Errors in nonword repetition: bridging short- and long-term memory

    Directory of Open Access Journals (Sweden)

    F.H. Santos

    2006-03-01

    According to the working memory model, the phonological loop is the component of working memory specialized in processing and manipulating limited amounts of speech-based information. The Children's Test of Nonword Repetition (CNRep) is a suitable measure of phonological short-term memory for English-speaking children, which was validated by the Brazilian Children's Test of Pseudoword Repetition (BCPR) as a Portuguese-language version. The objectives of the present study were: (i) to investigate developmental aspects of phonological memory processing by error analysis in the nonword repetition task, and (ii) to examine phoneme (substitution, omission and addition) and order (migration) errors made in the BCPR by 180 normal Brazilian children of both sexes aged 4-10, from preschool to 4th grade. The dominant error was substitution [F(3,525) = 180.47; P < 0.0001]. The performance was age-related [F(4,175) = 14.53; P < 0.0001]. The length effect, i.e., more errors in long than in short items, was observed [F(3,519) = 108.36; P < 0.0001]. In 5-syllable pseudowords, errors occurred mainly in the middle of the stimuli, before the syllabic stress [F(4,16) = 6.03; P = 0.003]; substitutions appeared more at the end of the stimuli, after the stress [F(12,48) = 2.27; P = 0.02]. In conclusion, the BCPR error analysis supports the idea that phonological loop capacity is relatively constant during development, although school learning increases the efficiency of this system. Moreover, there are indications that long-term memory contributes to holding the memory trace. The findings are discussed in terms of the distinctiveness, clustering and redintegration hypotheses.

  13. Perceived Effects of Prevalent Errors in Contract Documents on Construction Projects

    Directory of Open Access Journals (Sweden)

    Oluwaseun Sunday Dosumu

    2018-03-01

    One of the most highly rated causes of poor performance is errors in contract documents. The objectives of this study are to investigate the prevalent errors in contract documents and their effects on construction projects. Questionnaire survey and 51 case study projects (mixed method) were adopted for the study. The study also involved the use of the Delphi technique to extract the possible errors that may be contained in contract documents; this did not, however, constitute the empirical data for the study. The sample of the study consists of 985 consulting and 275 contracting firms that engaged in the construction of building projects that were completed between 2013 and 2016 and were above the ground floor. The two-stage stratified random sampling technique was adopted for the study. The data for the study were analysed with descriptive and inferential statistics (based on Shapiro-Wilk's test). The results of the study indicate that errors in contract documents were moderately prevalent. However, overmeasurement in bills of quantities was prevalent in private, institutional and management procured projects. Traditionally procured projects contain 68% of the errors in contract documents among the procurement methods. Drawings contain the highest number of errors, followed by bills of quantities and specifications. The severe effects of errors in contract documents were structural collapse, deterioration of buildings and contractors' claims, among others. The results of the study imply that the management procurement method is the route to error minimization in developing countries, but it may need to be backed by law and guarded against overmeasurement.

  14. Errors associated with moose-hunter counts of occupied beaver Castor fiber lodges in Norway

    OpenAIRE

    Parker, Howard; Rosell, Frank; Gustavsen, Per Øyvind

    2002-01-01

    In Norway, Sweden and Finland moose Alces alces hunting teams are often employed to survey occupied beaver (Castor fiber and C. canadensis) lodges while hunting. Results may be used to estimate population density or trend, or for issuing harvest permits. Despite the method's increasing popularity, the errors involved have never been identified. In this study we 1) compare hunting-team counts of occupied lodges with total counts, 2) identify the sources of error between counts and 3) evaluate ...

  15. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).
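
    As a rough illustration of how random quad displacements propagate to the beam centroid, the following thin-lens Monte Carlo sketch may help; the FODO focal length, drift length and 100 μm rms displacement are invented values, not SNS parameters:

        import numpy as np

        rng = np.random.default_rng(7)
        foc, drift_len, n_cells = 2.0, 1.0, 20   # focal length (m), drift (m), FODO cells
        sigma_dx = 100e-6                        # rms quad displacement (m), illustrative

        def drift(L):
            return np.array([[1.0, L], [0.0, 1.0]])

        def quad(f):
            return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

        def centroid_offset(dxs):
            state = np.zeros(2)                  # (x, x') of the beam centroid
            for i, dx in enumerate(dxs):
                f = foc if i % 2 == 0 else -foc  # alternating F and D quads
                # A displaced quad acts on coordinates relative to its own axis.
                state = quad(f) @ (state - [dx, 0.0]) + [dx, 0.0]
                state = drift(drift_len) @ state
            return state[0]

        offsets = [centroid_offset(rng.normal(0.0, sigma_dx, 2 * n_cells))
                   for _ in range(1000)]
        print(np.std(offsets))                   # rms centroid displacement at the exit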

  16. On the ortho-positronium quenching reactions promoted by Fe(II), Fe(III), Co(III), Ni(II), Zn(II) and Cd(II) cyanocomplexes

    Science.gov (United States)

    Fantola Lazzarini, Anna L.; Lazzarini, Ennio

    The o-Ps quenching reactions promoted in aqueous solutions by the following six cyanocomplexes were investigated: [Fe(CN)6]4-; [Co(CN)6]3-; [Zn(CN)4]2-; [Cd(CN)4]2-; [Fe(CN)6]3-; [Ni(CN)4]2-. The first four reactions probably consist in o-Ps addition across the CN bond, their rate constants at room temperature, Tr, being ⩽ (0.04 ± 0.02) × 10^9 M^-1 s^-1, i.e. almost at the limit of experimental errors. The rate constant of the fifth reaction, an o-Ps oxidation, at Tr is (20.3 ± 0.4) × 10^9 M^-1 s^-1. The [Ni(CN)4]2- k value at Tr is (0.27 ± 0.01) × 10^9 M^-1 s^-1, i.e. 100 times less than the rate constant of o-Ps oxidation, but 10 times larger than those of o-Ps addition across the CN bond. The [Ni(CN)4]2- reaction probably results in the formation of the following positronide complex: [Ni(CN)4Ps]2-. However, it is worth noting that the existence of such a complex is only indirectly deduced. In fact, it arises from comparison of the [Ni(CN)4]2- rate constant with those of the Fe(II), Zn(II), Cd(II), and Co(III) cyanocomplexes, which, like the Ni(II) cyanocomplex, do not promote o-Ps oxidation or spin exchange reactions.

  17. Pollutant forecasting error based on persistence of wind direction

    International Nuclear Information System (INIS)

    Cooper, R.E.

    1978-01-01

    The purpose of this report is to provide a means of estimating the reliability of forecasts of downwind pollutant concentrations from atmospheric puff releases. These forecasts are based on assuming the persistence of wind direction as determined at the time of release. This initial forecast will be used to deploy survey teams, to predict population centers that may be affected, and to estimate the amount of time available for emergency response. Reliability of forecasting is evaluated by developing a cumulative probability distribution of error as a function of lapsed time following an assumed release. The cumulative error is determined by comparing the forecast pollutant concentration with the concentration measured by sampling along the real-time meteorological trajectory. It may be concluded that the assumption of meteorological persistence for emergency response is not very good for periods longer than 3 hours. Even within this period, the possibility for large error exists due to wind direction shifts. These shifts could affect population areas totally different from those areas first indicated.

  18. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
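
    The probability-circle calculation described here reduces to the distribution of the radial miss distance for a bivariate normal. A Monte Carlo Python sketch for the 50% circular error probable (CEP), with illustrative σx and σy:

        import numpy as np

        rng = np.random.default_rng(42)
        sx, sy = 3.0, 1.5                        # illustrative standard deviations (m)
        pts = rng.normal(0.0, [sx, sy], size=(200000, 2))
        r = np.hypot(pts[:, 0], pts[:, 1])       # radial miss distance

        cep50 = np.quantile(r, 0.5)              # radius of the 50% probability circle
        print(cep50)
        # Common rule-of-thumb approximation, reasonable when sigmas are comparable:
        print(0.589 * (sx + sy))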

  19. An algorithm to assess methodological quality of nutrition and mortality cross-sectional surveys: development and application to surveys conducted in Darfur, Sudan.

    Science.gov (United States)

    Prudhon, Claudine; de Radiguès, Xavier; Dale, Nancy; Checchi, Francesco

    2011-11-09

    Nutrition and mortality surveys are the main tools whereby evidence on the health status of populations affected by disasters and armed conflict is quantified and monitored over time. Several reviews have consistently revealed a lack of rigor in many surveys. We describe an algorithm for analyzing nutritional and mortality survey reports to identify a comprehensive range of errors that may result in sampling, response, or measurement biases, and to score quality. We apply the algorithm to surveys conducted in Darfur, Sudan. We developed an algorithm based on internationally agreed-upon methods and best practices. Penalties are attributed for a list of errors, and an overall score is built from the summation of penalties accrued by the survey as a whole. To test the algorithm's reproducibility, it was independently applied by three raters on 30 randomly selected survey reports. The algorithm was further applied to more than 100 surveys conducted in Darfur, Sudan. The intraclass correlation coefficient was 0.79 for mortality surveys and 0.78 for nutrition surveys. The overall median quality scores (and ranges) of about 100 surveys conducted in Darfur were 0.60 (0.12-0.93) and 0.675 (0.23-0.86) for mortality and nutrition surveys, respectively. They varied between the organizations conducting the surveys, with no major trend over time. Our study suggests that it is possible to systematically assess the quality of surveys, and it reveals considerable problems with the quality of nutritional and particularly mortality surveys conducted during the Darfur crisis.
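
    The penalty-summation scoring the authors describe maps naturally onto a small lookup table. A schematic Python sketch; the penalty items and weights below are invented placeholders, not the published algorithm's values:

        # Illustrative penalty list; the published algorithm's items and weights differ.
        PENALTIES = {
            "sampling_frame_not_described": 0.10,
            "non_probability_sampling":     0.20,
            "recall_period_unclear":        0.10,
            "no_confidence_intervals":      0.15,
            "case_definition_not_standard": 0.15,
        }

        def quality_score(errors_found):
            """Score = 1 minus the summed penalties of the errors found (floored at 0)."""
            penalty = sum(PENALTIES[e] for e in errors_found)
            return max(0.0, 1.0 - penalty)

        print(quality_score(["recall_period_unclear", "no_confidence_intervals"]))  # 0.75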

  20. A multi-institutional survey evaluating patient related QA – phase II

    Directory of Open Access Journals (Sweden)

    Teichmann Tobias

    2017-09-01

    In phase I of the survey, a planning intercomparison of patient-related QA was performed at 12 institutions. The participating clinics created phantom-based IMRT and VMAT plans which were measured utilizing the ArcCheck diode array. Mobius3D (M3D) was used in phase II. It acts as a secondary dose verification tool for patient-specific QA based on average linac beam data collected by Mobius Medical Systems. All Quasimodo linac plans will be analyzed for the continuation of the intercomparison. We aim to determine whether Mobius3D is suited for use with diverse treatment techniques and whether beam model customization is needed. Initially we computed first Mobius3D results by transferring all plans from phase I to our Mobius3D server. Because of some larger PTV mean dose differences, we checked whether output factor customization would be beneficial. We performed measurements and output factor correction to account for discrepancies in reference conditions. Compared to Mobius3D's preconfigured average beam data values, these corrected output factors differed by ±1.5% for field sizes between 7×7 cm² and 30×30 cm², and by up to −3.9% for 3×3 cm². Our method of correcting the output factors showed good congruence with M3D's reference values for these medium field sizes.

  1. The APM galaxy survey: Pt. 2; Photometric corrections

    Energy Technology Data Exchange (ETDEWEB)

    Maddox, S.J.; Efstathiou, G.; Sutherland, W.J. (Oxford Univ. (UK). Dept. of Astrophysics)

    1990-10-01

    We describe the methods that we have used to establish accurate photometry in a survey of two million galaxies brighter than b_J = 20.5 covering over 4300 square degrees of the South Galactic cap. We apply a field correction for vignetting and differential desensitization which is accurate to ≲0.04 mag across each of 185 Schmidt plates. Images in the overlapping regions of neighbouring plate pairs are used to establish a uniform magnitude system over the entire survey. We discuss the residual magnitude differences in the overlaps and the propagation of plate-to-plate magnitude errors across the network of plates. We argue that the rms plate zero-point error in the final matched survey is 0.04 mag. CCD photometry of 40 faint galaxy sequences is used to calibrate the matched magnitudes and to set stringent limits on large-scale gradients in the matched survey photometry. (author).
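
    The plate-overlap matching problem described here is, in essence, a linear least-squares fit for one zero-point per plate. A toy Python sketch, with assumed noise levels and overlap fraction rather than the APM values:

        import numpy as np

        rng = np.random.default_rng(5)
        n_plates = 20
        true_z = rng.normal(0.0, 0.05, n_plates)         # true zero-point offsets (mag)

        # Each overlap between plates i and j measures z_i - z_j plus noise.
        pairs = [(i, j) for i in range(n_plates) for j in range(i + 1, n_plates)
                 if rng.random() < 0.3]
        A = np.zeros((len(pairs) + 1, n_plates))
        b = np.zeros(len(pairs) + 1)
        for k, (i, j) in enumerate(pairs):
            A[k, i], A[k, j] = 1.0, -1.0
            b[k] = true_z[i] - true_z[j] + rng.normal(0.0, 0.02)
        A[-1, :] = 1.0                                   # gauge condition: offsets sum to 0
        b[-1] = 0.0

        z_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
        print(np.std(z_hat - (true_z - true_z.mean())))  # residual zero-point scatter (mag)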

  2. Error-related anterior cingulate cortex activity and the prediction of conscious error awareness

    Directory of Open Access Journals (Sweden)

    Catherine eOrr

    2012-06-01

    Research examining the neural mechanisms associated with error awareness has consistently identified dorsal anterior cingulate activity (ACC) as necessary but not predictive of conscious error detection. Two recent studies (Steinhauser and Yeung, 2010; Wessel et al., 2011) have found a contrary pattern of greater dorsal ACC activity (in the form of the error-related negativity) during detected errors, but suggested that the greater activity may instead reflect task influences (e.g., response conflict, error probability) and/or individual variability (e.g., statistical power). We re-analyzed fMRI BOLD data from 56 healthy participants who had previously been administered the Error Awareness Task, a motor Go/No-go response inhibition task in which subjects make errors of commission of which they are aware (Aware errors) or unaware (Unaware errors). Consistent with previous data, the activity in a number of cortical regions was predictive of error awareness, including bilateral inferior parietal and insula cortices; however, in contrast to previous studies, including our own smaller sample studies using the same task, error-related dorsal ACC activity was significantly greater during aware errors when compared to unaware errors. While the significantly faster RT for aware errors (compared to unaware) was consistent with the hypothesis of higher response conflict increasing ACC activity, we could find no relationship between dorsal ACC activity and the error RT difference. The data suggest that individual variability in error awareness is associated with error-related dorsal ACC activity, and therefore this region may be important to conscious error detection, but it remains unclear what task and individual factors influence error awareness.

  3. Effects of Lexico-syntactic Errors on Teaching Materials: A Study of Textbooks Written by Nigerians

    Directory of Open Access Journals (Sweden)

    Peace Chinwendu Israel

    2014-01-01

    This study examined lexico-syntactic errors in selected textbooks written by Nigerians. Our focus was on the educated bilinguals (acrolect) who acquired their primary, secondary and tertiary education in Nigeria, and the selected textbooks were textbooks published by Vanity Publishers/Press. The participants (authors) cut across the three major ethnic groups in Nigeria – Hausa, Igbo and Yoruba – and the selection of the textbooks covered the major disciplines of study. We adopted the descriptive research design and specifically employed the survey method to accomplish the purpose of our exploratory research. The lexico-syntactic errors in the selected textbooks were identified and classified into various categories. These errors were not different from those identified over the years in students' essays and exam scripts. This buttressed our argument that students are merely the conveyor belt of errors contained in the teaching material and that we can analyse students' lexico-syntactic errors in tandem with the errors contained in the material used in teaching.

  4. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  5. Gravitational lensing statistics with extragalactic surveys - II. Analysis of the Jodrell Bank-VLA Astrometric Survey

    NARCIS (Netherlands)

    Helbig, P; Marlow, D; Quast, R; Wilkinson, PN; Browne, IWA; Koopmans, LVE

    We present constraints on the cosmological constant lambda(0) from gravitational lensing statistics of the Jodrell Bank-VLA Astrometric Survey (JVAS). Although this is the largest gravitational lens survey which has been analysed, cosmological constraints are only comparable to those from optical

  6. Leniency programs and socially beneficial cooperation: Effects of type I errors

    Directory of Open Access Journals (Sweden)

    Natalia Pavlova

    2016-12-01

    This study operationalizes the concept of the hostility tradition in antitrust, as mentioned by Oliver Williamson and Ronald Coase, through erroneous law enforcement effects. The antitrust agency may commit type I, not just type II, errors when evaluating an agreement in terms of cartels. Moreover, firms can compete in a standard way, collude, or engage in cooperative agreements that improve efficiency. The antitrust agency may misinterpret such cooperative agreements, committing a type I error (over-enforcement). The model set-up is drawn from Motta and Polo (2003) and is extended as described above using the findings of Ghebrihiwet and Motchenkova (2010). Three effects play a role in this environment. Type I errors may induce firms that would engage in socially efficient cooperation absent errors to opt for collusion (the deserved punishment effect). For other parameter configurations, type I errors may interrupt ongoing cooperation when investigated. In this case, the firms falsely report collusion and apply for leniency, fearing being erroneously fined (the disrupted cooperation effect). Finally, over-enforcement may prevent beneficial cooperation from starting, given the threat of being mistakenly fined (the prevented cooperation effect). The results help us understand the negative impact that a hostility tradition in antitrust — which is more likely for inexperienced regimes and regimes with low standards of evidence — and the resulting type I enforcement errors can have on social welfare when applied to the regulation of horizontal agreements. Additional interpretations are discussed in light of leniency programs for corruption and compliance policies for antitrust violations.

  7. Survey of non-linear hydrodynamic models of type-II Cepheids

    Science.gov (United States)

    Smolec, R.

    2016-03-01

    We present a grid of non-linear convective type-II Cepheid models. The dense model grids are computed for 0.6 M⊙ and a range of metallicities ([Fe/H] = -2.0, -1.5, -1.0), and for 0.8 M⊙ ([Fe/H] = -1.5). Two sets of convective parameters are considered. The models cover the full temperature extent of the classical instability strip, but are limited in luminosity; for the most luminous models, violent pulsation leads to the decoupling of the outermost model shell. Hence, our survey reaches only the shortest period RV Tau domain. In the Hertzsprung-Russell diagram, we detect two domains in which period-doubled pulsation is possible. The first extends through the BL Her domain and low-luminosity W Vir domain (pulsation periods ˜2-6.5 d). The second domain extends at higher luminosities (W Vir domain; periods >9.5 d). Some models within these domains display period-4 pulsation. We also detect very narrow domains (˜10 K wide) in which modulation of pulsation is possible. Another interesting phenomenon we detect is double-mode pulsation in the fundamental mode and in the fourth radial overtone. Fourth overtone is a surface mode, trapped in the outer model layers. Single-mode pulsation in the fourth overtone is also possible on the hot side of the classical instability strip. The origin of the above phenomena is discussed. In particular, the role of resonances in driving different pulsation dynamics as well as in shaping the morphology of the radius variation curves is analysed.

  8. The Impact of Error-Management Climate, Error Type and Error Originator on Auditors’ Reporting Errors Discovered on Audit Work Papers

    NARCIS (Netherlands)

    A.H. Gold-Nöteberg (Anna); U. Gronewold (Ulfert); S. Salterio (Steve)

    2010-01-01

    We examine factors affecting the auditor's willingness to report their own or their peers' self-discovered errors in working papers subsequent to detailed working paper review. Prior research has shown that errors in working papers are detected in the review process; however, such

  9. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  10. Bases conceptuales y metodológicas de la Encuesta Nacional de Salud II, México 1994 Conceptual and methodological basis of the National Health Survey II, Mexico, 1994

    Directory of Open Access Journals (Sweden)

    1998-01-01

    The conceptual and methodological basis of the National Health Survey II (NHS-II) are described, and recent advances in multidisciplinary public health research in Mexico, both conceptual and methodological, are synthesized. The design of the NHS-II concentrated on the study of the access to, quality of, and expenses of health care services, at both the ambulatory and hospital levels. Details of the conceptual framework and of the aspects related to data processing and analysis are also included. Five geographic regions were covered; 12 615 households were visited at the national level, and information on 61 524 individuals was gathered. The overall response rate was 96.7%, both for households and for identified health service users. The general conclusion emphasizes the need to incorporate the population perspective into the planning and allocation of health resources.

  11. Evaluation of Drug Interactions and Prescription Errors of Poultry Veterinarians in North of Iran

    Directory of Open Access Journals (Sweden)

    Madadi MS

    2014-03-01

    Drug prescription errors are a common cause of adverse incidents and may lead to adverse outcomes, sometimes in subtle ways, being compounded by circumstances or further errors. Therefore, it is important that veterinarians prescribe the correct drug at the correct dose. Using two or more prescribed drugs may lead to drug interactions. Some drug interactions are very harmful and may pose potential threats to the patient's health; this is called antagonism. In a survey study, the medication errors of 750 prescriptions, including dosage errors and drug interactions, were studied. The results indicated that 20.8% of prescriptions had at least one drug interaction. Most interactions were related to antibiotics (69.1%), sulfonamides (46.7%), methenamine (46.7%) and florfenicol (20.2%). Analysis of dosage errors indicated that total drug consumption by broilers is higher in summer than in winter. Based on these results, avoiding medication errors is important for the balanced prescribing of drugs, and regular education of veterinary practitioners at certain intervals is needed.

  12. The CLASS blazar survey - II. Optical properties

    NARCIS (Netherlands)

    Caccianiga, A; Marcha, MJ; Anton, S; Mack, KH; Neeser, MJ

    2002-01-01

    This paper presents the optical properties of the objects selected in the CLASS blazar survey. Because an optical spectrum is now available for 70 per cent of the 325 sources present in the sample, a spectral classification, based on the appearance of the emission/absorption lines, is possible. A

  13. PHOTOMETRIC SUPERNOVA COSMOLOGY WITH BEAMS AND SDSS-II

    Energy Technology Data Exchange (ETDEWEB)

    Hlozek, Renee [Oxford Astrophysics, Department of Physics, University of Oxford, Keble Road, Oxford, OX1 3RH (United Kingdom); Kunz, Martin [Department de physique theorique, Universite de Geneve, 30, quai Ernest-Ansermet, CH-1211 Geneve 4 (Switzerland); Bassett, Bruce; Smith, Mat; Newling, James [African Institute for Mathematical Sciences, 68 Melrose Road, Muizenberg 7945 (South Africa); Varughese, Melvin [Department of Mathematics and Applied Mathematics, University of Cape Town, Rondebosch, Cape Town, 7700 (South Africa); Kessler, Rick; Frieman, Joshua [The Kavli Institute for Cosmological Physics, The University of Chicago, 933 East 56th Street, Chicago, IL 60637 (United States); Bernstein, Joseph P.; Kuhlmann, Steve; Marriner, John [Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States); Campbell, Heather; Lampeitl, Hubert; Nichol, Robert C. [Institute of Cosmology and Gravitation, Dennis Sciama Building Burnaby Road Portsmouth PO1 3FX (United Kingdom); Dilday, Ben [Las Cumbres Observatory Global Telescope Network, 6740 Cortona Drive, Suite 102, Goleta, CA 93117 (United States); Falck, Bridget; Riess, Adam G. [Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Sako, Masao [Department of Physics and Astronomy, University of Pennsylvania, 203 South 33rd Street, Philadelphia, PA 19104 (United States); Schneider, Donald P., E-mail: rhlozek@astro.princeton.edu [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States)

    2012-06-20

    Supernova (SN) cosmology without spectroscopic confirmation is an exciting new frontier, which we address here with the Bayesian Estimation Applied to Multiple Species (BEAMS) algorithm and the full three years of data from the Sloan Digital Sky Survey II Supernova Survey (SDSS-II SN). BEAMS is a Bayesian framework for using data from multiple species in statistical inference when one has the probability that each data point belongs to a given species, corresponding in this context to different types of SNe with their probabilities derived from their multi-band light curves. We run the BEAMS algorithm on both Gaussian and more realistic SNANA simulations with of order 10^4 SNe, testing the algorithm against various pitfalls one might expect in the new and somewhat uncharted territory of photometric SN cosmology. We compare the performance of BEAMS to that of both mock spectroscopic surveys and photometric samples that have been cut using typical selection criteria. The latter typically either are biased due to contamination or have significantly larger contours in the cosmological parameters due to small data sets. We then apply BEAMS to the 792 SDSS-II photometric SNe with host spectroscopic redshifts. In this case, BEAMS reduces the area of the Ω_m, Ω_Λ contours by a factor of three relative to the case where only spectroscopically confirmed data are used (297 SNe). In the case of flatness, the constraints obtained on the matter density applying BEAMS to the photometric SDSS-II data are Ω_m^BEAMS = 0.194 ± 0.07. This illustrates the potential power of BEAMS for future large photometric SN surveys such as the Large Synoptic Survey Telescope.
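
    At its core, BEAMS replaces a hard type cut with a per-object mixture likelihood. A schematic Python sketch; the Gaussian non-Ia population, its offset and its width are illustrative assumptions, not the distribution used in the paper:

        import numpy as np

        def beams_loglike(mu_model, mu_obs, sigma, p_ia,
                          offset_nonia=0.5, sigma_nonia=1.0):
            """Sum over SNe of log[ p_i * L_Ia + (1 - p_i) * L_nonIa ].

            mu_model : model distance moduli at each SN redshift
            mu_obs   : observed distance moduli
            p_ia     : per-object probability of being a type Ia
            The non-Ia population is modelled here as a broader, offset Gaussian,
            an illustrative choice only.
            """
            def gauss(x, m, s):
                return np.exp(-0.5 * ((x - m) / s) ** 2) / (np.sqrt(2 * np.pi) * s)

            l_ia = gauss(mu_obs, mu_model, sigma)
            l_non = gauss(mu_obs, mu_model + offset_nonia, sigma_nonia)
            return np.sum(np.log(p_ia * l_ia + (1 - p_ia) * l_non))

        # Illustrative call: 3 SNe with type-Ia probabilities from their light curves.
        mu_model = np.array([36.0, 38.0, 40.0])
        mu_obs = np.array([36.1, 38.6, 39.9])
        print(beams_loglike(mu_model, mu_obs, sigma=0.15, p_ia=np.array([0.9, 0.3, 0.95])))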

  14. PHOTOMETRIC SUPERNOVA COSMOLOGY WITH BEAMS AND SDSS-II

    International Nuclear Information System (INIS)

    Hlozek, Renée; Kunz, Martin; Bassett, Bruce; Smith, Mat; Newling, James; Varughese, Melvin; Kessler, Rick; Frieman, Joshua; Bernstein, Joseph P.; Kuhlmann, Steve; Marriner, John; Campbell, Heather; Lampeitl, Hubert; Nichol, Robert C.; Dilday, Ben; Falck, Bridget; Riess, Adam G.; Sako, Masao; Schneider, Donald P.

    2012-01-01

    Supernova (SN) cosmology without spectroscopic confirmation is an exciting new frontier, which we address here with the Bayesian Estimation Applied to Multiple Species (BEAMS) algorithm and the full three years of data from the Sloan Digital Sky Survey II Supernova Survey (SDSS-II SN). BEAMS is a Bayesian framework for using data from multiple species in statistical inference when one has the probability that each data point belongs to a given species, corresponding in this context to different types of SNe with their probabilities derived from their multi-band light curves. We run the BEAMS algorithm on both Gaussian and more realistic SNANA simulations with of order 10^4 SNe, testing the algorithm against various pitfalls one might expect in the new and somewhat uncharted territory of photometric SN cosmology. We compare the performance of BEAMS to that of both mock spectroscopic surveys and photometric samples that have been cut using typical selection criteria. The latter typically either are biased due to contamination or have significantly larger contours in the cosmological parameters due to small data sets. We then apply BEAMS to the 792 SDSS-II photometric SNe with host spectroscopic redshifts. In this case, BEAMS reduces the area of the Ω_m, Ω_Λ contours by a factor of three relative to the case where only spectroscopically confirmed data are used (297 SNe). In the case of flatness, the constraints obtained on the matter density applying BEAMS to the photometric SDSS-II data are Ω_m^BEAMS = 0.194 ± 0.07. This illustrates the potential power of BEAMS for future large photometric SN surveys such as the Large Synoptic Survey Telescope.

  15. Wavefront error budget development for the Thirty Meter Telescope laser guide star adaptive optics system

    Science.gov (United States)

    Gilles, Luc; Wang, Lianqi; Ellerbroek, Brent

    2008-07-01

    This paper describes the modeling effort undertaken to derive the wavefront error (WFE) budget for the Narrow Field Infrared Adaptive Optics System (NFIRAOS), the facility laser guide star (LGS) dual-conjugate adaptive optics (AO) system for the Thirty Meter Telescope (TMT). The budget describes the expected performance of NFIRAOS at zenith, and has been decomposed into (i) first-order turbulence compensation terms (120 nm on-axis), (ii) opto-mechanical implementation errors (84 nm), (iii) AO component errors and higher-order effects (74 nm) and (iv) tip/tilt (TT) wavefront errors at 50% sky coverage at the galactic pole (61 nm) with natural guide star (NGS) tip/tilt/focus/astigmatism (TTFA) sensing in J band. A contingency of about 66 nm now exists to meet the observatory requirement document (ORD) total on-axis wavefront error of 187 nm, mainly on account of reduced TT errors due to updated windshake modeling and a low read-noise NGS wavefront sensor (WFS) detector. A detailed breakdown of each of these top-level terms is presented, together with a discussion of its evaluation using a mix of high-order zonal and low-order modal Monte Carlo simulations.
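
    The quoted numbers combine essentially in quadrature against the 187 nm requirement. The sketch below assumes a plain root-sum-square of independent terms, which is conventional for WFE budgets; under that assumption it reproduces the quoted ~66 nm contingency.

```python
import math

# Top-level NFIRAOS terms from the budget above (nm RMS wavefront error).
terms = {
    "first-order turbulence compensation": 120.0,
    "opto-mechanical implementation": 84.0,
    "AO components / higher-order effects": 74.0,
    "tip/tilt at 50% sky coverage": 61.0,
}
requirement = 187.0  # ORD total on-axis WFE, nm

total = math.sqrt(sum(v ** 2 for v in terms.values()))
contingency = math.sqrt(requirement ** 2 - total ** 2)
print(f"RSS of budget terms: {total:.0f} nm")                   # ~175 nm
print(f"contingency to ORD requirement: {contingency:.0f} nm")  # ~66 nm
```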

  16. Random access to mobile networks with advanced error correction

    Science.gov (United States)

    Dippold, Michael

    1990-01-01

    A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Codes (RCPC) for Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen, and reservation is implicit in the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that high channel utilization and superior throughput can be achieved with this scheme, which also has quite low implementation complexity. For the example of an interleaved Rayleigh channel with soft-decision decoding, utilization and mean delay are calculated. A utilization of 40 percent may be achieved under high traffic load for a frame with the number of slots equal to half the number of stations. The effects of feedback channel errors and some countermeasures are discussed.
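
    As a toy illustration of the Hybrid-II ARQ idea, where the first transmission uses the highest RCPC code rate and further punctured redundancy is sent only after a failed decoding attempt, consider the sketch below. The rate family and per-attempt success probabilities are invented for the example; a real evaluation would derive them from the channel model.

```python
import random

# Illustrative per-attempt decoding success probabilities for an RCPC
# family: each retransmission adds redundancy, lowering the effective
# code rate and raising the chance that the joint decode succeeds.
# (These probabilities are invented for the sketch, not channel-derived.)
success_prob = {8/9: 0.35, 4/5: 0.55, 2/3: 0.80, 1/2: 0.97, 1/3: 0.999}
rates = sorted(success_prob, reverse=True)  # start with least redundancy

def send_packet(rng):
    """Return the number of slot transmissions until decoding succeeds."""
    for attempt, rate in enumerate(rates, start=1):
        if rng.random() < success_prob[rate]:
            return attempt
    return len(rates)  # give up at the lowest rate

rng = random.Random(1)
tx = [send_packet(rng) for _ in range(100_000)]
print(f"mean transmissions per packet: {sum(tx) / len(tx):.2f}")
```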

  17. Survey II of public and leadership attitudes toward nuclear power development in the United States

    International Nuclear Information System (INIS)

    Anon.

    1976-01-01

    In August 1975, Ebasco Services Incorporated released results of a survey conducted by Louis Harris and Associates, Inc. to determine attitudes of the American public and its leaders toward nuclear power development in the U.S. Results showed, among other things, that the public favored building nuclear power plants; that they believed we have an energy shortage that will not go away soon; that they were not willing to make environmental sacrifices; and that, while favoring nuclear power development, they also had concerns about some aspects of nuclear power. Except for the environmental group, the leadership group felt the same way the public did. A follow-up survey was made in July 1976 to measure any shifts in attitudes. Survey II showed that one of the real worries remaining with the American public is the shortage of energy; additionally, the public and the leaders are concerned about U.S. dependence on imported oil. With the exception of the environmentalists, the public and its leaders support a host of measures to build energy sources, including solar and oil shale development, speeding up the Alaskan pipeline, speeding up off-shore drilling, and building nuclear power plants. The public continues to be unwilling to sacrifice the environment. There is less conviction on the part of the public that electric power will be in short supply over the next decade. The public believes the days of heavy dependence on oil or hydroelectric power are coming to an end. By a margin of 3 to 1, the public favors building more nuclear power plants in the U.S., but some concerns about the risks have not dissipated. Even though the public is worried about radioactivity escaping into the atmosphere, they consider nuclear power generation more safe than unsafe.

  18. MMT HYPERVELOCITY STAR SURVEY. II. FIVE NEW UNBOUND STARS

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Warren R.; Geller, Margaret J.; Kenyon, Scott J., E-mail: wbrown@cfa.harvard.edu, E-mail: mgeller@cfa.harvard.edu, E-mail: skenyon@cfa.harvard.edu [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States)

    2012-05-20

    We present the discovery of five new unbound hypervelocity stars (HVSs) in the outer Milky Way halo. Using a conservative estimate of Galactic escape velocity, our targeted spectroscopic survey has now identified 16 unbound HVSs as well as a comparable number of HVSs ejected on bound trajectories. A Galactic center origin for the HVSs is supported by their unbound velocities, the observed number of unbound stars, their stellar nature, their ejection time distribution, and their Galactic latitude and longitude distribution. Other proposed origins for the unbound HVSs, such as runaway ejections from the disk or dwarf galaxy tidal debris, cannot be reconciled with the observations. An intriguing result is the spatial anisotropy of HVSs on the sky, which possibly reflects an anisotropic potential in the central 10-100 pc region of the Galaxy. Further progress requires measurement of the spatial distribution of HVSs over the southern sky. Our survey also identifies seven B supergiants associated with known star-forming galaxies; the absence of B supergiants elsewhere in the survey implies there are no new star-forming galaxies in our survey footprint to a depth of 1-2 Mpc.

  19. Astronomical Surveys and Big Data

    Directory of Open Access Journals (Sweden)

    Mickaelian Areg M.

    2016-03-01

    Recent all-sky and large-area astronomical surveys and their catalogued data over the whole range of the electromagnetic spectrum, from γ-rays to radio waves, are reviewed, including Fermi-GLAST and INTEGRAL in γ-ray, ROSAT, XMM and Chandra in X-ray, GALEX in UV, SDSS and several POSS I and POSS II-based catalogues (APM, MAPS, USNO, GSC) in the optical range, 2MASS in NIR, WISE and AKARI IRC in MIR, IRAS and AKARI FIS in FIR, NVSS and FIRST in the radio range, and many others, as well as the most important surveys giving optical images (DSS I and II, SDSS, etc.), proper motions (Tycho, USNO, Gaia), variability (GCVS, NSVS, ASAS, Catalina, Pan-STARRS), and spectroscopic data (FBS, SBS, Case, HQS, HES, SDSS, CALIFA, GAMA). An overall understanding of the coverage along the whole wavelength range and comparisons between various surveys are given: galaxy redshift surveys, QSO/AGN, radio, Galactic structure, and Dark Energy surveys. Astronomy has entered the Big Data era, with Astrophysical Virtual Observatories and Computational Astrophysics playing an important role in using and analyzing big data for new discoveries.

  20. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

    Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures that are technically untraceable by users. Error messages are not logged efficiently and are sometimes not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.
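
    A failure-aware scheduler of the kind proposed needs low-level error text mapped to actionable classes. The sketch below shows one plausible shape for such an early-classification step; the categories, patterns, and scheduler actions are hypothetical placeholders, not those of the paper's framework.

```python
import re

# Hypothetical mapping from low-level error text to recoverability
# classes that a scheduler could act on (patterns are illustrative).
ERROR_CLASSES = [
    (re.compile(r"connection (refused|reset|timed out)", re.I), "network-transient"),
    (re.compile(r"no route to host|name or service not known", re.I), "network-fatal"),
    (re.compile(r"permission denied|authentication", re.I), "credential"),
    (re.compile(r"no such file|not a directory", re.I), "source-path"),
    (re.compile(r"disk quota|no space left", re.I), "destination-capacity"),
]

def classify(message: str) -> str:
    for pattern, label in ERROR_CLASSES:
        if pattern.search(message):
            return label
    return "unclassified"

def schedule_action(label: str) -> str:
    # Transient faults are retried; fatal ones are escalated immediately.
    return {"network-transient": "retry-with-backoff",
            "credential": "refresh-credentials",
            "destination-capacity": "reroute-to-alternate"}.get(label, "report-to-user")

label = classify("connect failed: Connection timed out")
print(label, "->", schedule_action(label))  # network-transient -> retry-with-backoff
```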

  1. The impact of work-related stress on medication errors in Eastern Region Saudi Arabia.

    Science.gov (United States)

    Salam, Abdul; Segal, David M; Abu-Helalah, Munir Ahmad; Gutierrez, Mary Lou; Joosub, Imran; Ahmed, Wasim; Bibi, Rubina; Clarke, Elizabeth; Qarni, Ali Ahmed Al

    2018-05-07

    To examine the relationship between overall level and source-specific work-related stressors and the medication error rate. A cross-sectional study examined the relationship between overall levels of stress, 25 source-specific work-related stressors and the medication error rate based on documented incident reports in a Saudi Arabia (SA) hospital, using secondary databases. King Abdulaziz Hospital in Al-Ahsa, Eastern Region, SA. Two hundred and sixty-nine healthcare professionals (HCPs). The odds ratio (OR) and corresponding 95% confidence interval (CI) for HCPs' documented incident-report medication errors and self-reported sources from the Job Stress Survey. Multiple logistic regression analysis identified source-specific work-related stress as significantly associated with HCPs who made at least one medication error per month (P ...); HCPs with high stress were two times more likely to make at least one medication error per month than non-stressed HCPs (OR: 1.95, P = 0.081). This is the first study to use documented incident reports for medication errors rather than self-reports to evaluate the level of stress-related medication errors among SA HCPs. Job demands, such as social stressors (home life disruption, difficulties with colleagues), time pressures, structural determinants (compulsory night/weekend call duties) and higher income, were significantly associated with medication errors, whereas overall stress revealed a two-fold higher trend.

  2. Synchrotron power supply of TARN II

    International Nuclear Information System (INIS)

    Watanabe, Shin-ichi.

    1991-07-01

    The construction and performance of the synchrotron power supply of TARN II are described. The 1.1 GeV synchrotron-cooler TARN II has been constructed at the Institute for Nuclear Study, University of Tokyo. The power supply constructed for the dipole magnets is rated at 600 V and 2500 A, operated in a trapezoidal-waveform mode with a repetition cycle of 0.1 Hz. A magnetic field stability within 10^-3 and a tracking error of 10^-4 have been attained with the aid of a computer control system. The first trial of synchrotron acceleration of a He^2+ beam was carried out up to 600 MeV in April 1991. (author)

  3. Measuring and detecting errors in occupational coding: an analysis of SHARE data

    NARCIS (Netherlands)

    Belloni, M.; Brugiavini, A.; Meschi, E.; Tijdens, K.

    2016-01-01

    This article studies coding errors in occupational data, as the quality of this data is important but often neglected. In particular, we recoded open-ended questions on occupation for last and current job in the Dutch sample of the “Survey of Health, Ageing and Retirement in Europe” (SHARE) using a

  4. Autonomous Sea-Ice Thickness Survey

    Science.gov (United States)

    2016-06-01

    the conductivity of an infinitely thick slab of sea ice. Ice thickness, Hice, is then obtained by subtracting the height of the ... We conducted an autonomous survey of sea-ice thickness using the polar rover Yeti ... efficiency relative to manual surveys routinely conducted to assess the safety of roads and runways constructed on the sea ice. Yeti executed the

  5. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...
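
    On a simplified reading of the approach, the overall uncertainty is the arithmetic sum of a confidence interval for the random scatter and a worst-case bound on the unknown systematic error. The sketch below encodes that additive combination; the Student factor and the bias bound are assumed inputs, and the formula is this sketch's paraphrase rather than the book's full treatment.

```python
import math
import statistics

def overall_uncertainty(readings, bias_bound, t_factor=2.0):
    """Additive error combination: a confidence-interval half-width for
    the random scatter plus the worst-case bound on the unknown
    (constant) systematic error. t_factor stands in for the Student
    factor at the chosen confidence level (assumed here)."""
    n = len(readings)
    s = statistics.stdev(readings)          # empirical standard deviation
    u_random = t_factor * s / math.sqrt(n)  # confidence-interval half-width
    return u_random + bias_bound            # arithmetic, not quadratic, sum

readings = [9.98, 10.02, 10.01, 9.97, 10.03, 10.00]
print(f"u = {overall_uncertainty(readings, bias_bound=0.05):.3f}")
```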

  6. An Improved Photometric Calibration of the Sloan Digital SkySurvey Imaging Data

    Energy Technology Data Exchange (ETDEWEB)

    Padmanabhan, Nikhil; Schlegel, David J.; Finkbeiner, Douglas P.; Barentine, J.C.; Blanton, Michael R.; Brewington, Howard J.; Gunn, JamesE.; Harvanek, Michael; Hogg, David W.; Ivezic, Zeljko; Johnston, David; Kent, Stephen M.; Kleinman, S.J.; Knapp, Gillian R.; Krzesinski, Jurek; Long, Dan; Neilsen Jr., Eric H.; Nitta, Atsuko; Loomis, Craig; Lupton,Robert H.; Roweis, Sam; Snedden, Stephanie A.; Strauss, Michael A.; Tucker, Douglas L.

    2007-09-30

    We present an algorithm to photometrically calibrate wide-field optical imaging surveys, that simultaneously solves for the calibration parameters and relative stellar fluxes using overlapping observations. The algorithm decouples the problem of "relative" calibrations from that of "absolute" calibrations; the absolute calibration is reduced to determining a few numbers for the entire survey. We pay special attention to the spatial structure of the calibration errors, allowing one to isolate particular error modes in downstream analyses. Applying this to the Sloan Digital Sky Survey imaging data, we achieve ~1 percent relative calibration errors across 8500 sq. deg. in griz; the errors are ~2 percent for the u band. These errors are dominated by unmodelled atmospheric variations at Apache Point Observatory. These calibrations, dubbed ubercalibration, are now public with SDSS Data Release 6, and will be a part of subsequent SDSS data releases.
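
    The relative-calibration step can be pictured as solving the linear model m_obs = m_star + zp_image over all overlapping observations. The sketch below does this with alternating least squares on mock data; it illustrates the idea only, not the SDSS pipeline, and the single per-image zeropoint is a simplification of the full calibration parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
n_stars, n_images, n_obs = 200, 30, 2000

true_mag = rng.uniform(15, 20, n_stars)
true_zp = rng.normal(0.0, 0.05, n_images)   # per-image zeropoint offsets
star = rng.integers(0, n_stars, n_obs)      # which star each observation hits
image = rng.integers(0, n_images, n_obs)    # which image it was observed in
m_obs = true_mag[star] + true_zp[image] + rng.normal(0, 0.01, n_obs)

# Alternating least squares on m_obs = m_star + zp_image; the overall
# additive degeneracy is the "absolute calibration" left undetermined.
cnt_s = np.maximum(np.bincount(star, minlength=n_stars), 1)
cnt_i = np.maximum(np.bincount(image, minlength=n_images), 1)
mag = np.zeros(n_stars)
zp = np.zeros(n_images)
for _ in range(50):
    mag = np.bincount(star, weights=m_obs - zp[image], minlength=n_stars) / cnt_s
    zp = np.bincount(image, weights=m_obs - mag[star], minlength=n_images) / cnt_i
zp -= zp.mean()  # fix the degeneracy by convention

print(f"zeropoint residual rms: {np.std(zp - (true_zp - true_zp.mean())):.4f} mag")
```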

  7. An algorithm to assess methodological quality of nutrition and mortality cross-sectional surveys: development and application to surveys conducted in Darfur, Sudan

    Directory of Open Access Journals (Sweden)

    Prudhon Claudine

    2011-11-01

    Background: Nutrition and mortality surveys are the main tools whereby evidence on the health status of populations affected by disasters and armed conflict is quantified and monitored over time. Several reviews have consistently revealed a lack of rigor in many surveys. We describe an algorithm for analyzing nutritional and mortality survey reports to identify a comprehensive range of errors that may result in sampling, response, or measurement biases, and to score quality. We apply the algorithm to surveys conducted in Darfur, Sudan. Methods: We developed an algorithm based on internationally agreed-upon methods and best practices. Penalties are attributed for a list of errors, and an overall score is built from the summation of penalties accrued by the survey as a whole. To test the algorithm's reproducibility, it was independently applied by three raters to 30 randomly selected survey reports. The algorithm was further applied to more than 100 surveys conducted in Darfur, Sudan. Results: The intraclass correlation coefficient was 0.79 for mortality surveys and 0.78 for nutrition surveys. The overall median quality scores and ranges of about 100 surveys conducted in Darfur were 0.60 (0.12-0.93) and 0.675 (0.23-0.86) for mortality and nutrition surveys, respectively. They varied between the organizations conducting the surveys, with no major trend over time. Conclusion: Our study suggests that it is possible to systematically assess the quality of surveys and reveals considerable problems with the quality of nutritional and particularly mortality surveys conducted during the Darfur crisis.
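
    The scoring idea, attributing penalties for listed errors and building an overall score from their summation, can be sketched as follows. The criteria and weights here are invented placeholders; the published algorithm defines its own list, weighting, and normalization.

```python
# Illustrative penalty list (criteria and weights are invented for the
# sketch; the published algorithm defines its own list and weighting).
PENALTIES = {
    "sampling frame not described": 3,
    "cluster selection not probabilistic": 4,
    "recall period unclear": 2,
    "no confidence intervals reported": 2,
    "measurement method not standardized": 3,
}

def quality_score(errors_found):
    """Map accrued penalties onto a 0-1 scale (1 = no penalties)."""
    max_penalty = sum(PENALTIES.values())
    accrued = sum(PENALTIES[e] for e in errors_found)
    return 1.0 - accrued / max_penalty

print(quality_score(["recall period unclear",
                     "no confidence intervals reported"]))  # 0.714...
```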

  8. Errors in second moments estimated from monostatic Doppler sodar winds. II. Application to field measurements

    DEFF Research Database (Denmark)

    Gaynor, J. E.; Kristensen, Leif

    1986-01-01

    Observatory tower. The approximate magnitude of the error due to spatial and temporal pulse volume separation is presented as a function of mean wind angle relative to the sodar configuration and for several antenna pulsing orders. Sodar-derived standard deviations of the lateral wind component, before...

  9. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  10. A chance to avoid mistakes human error

    International Nuclear Information System (INIS)

    Amaro, Pablo; Obeso, Eduardo; Gomez, Ruben

    2010-01-01

    human factor contribution to the events. 'The explanations of the error': the evolution of the human error concept and the causes that lie behind it are presented in this chapter, with several examples to facilitate understanding. In Appendix II, we present a series of 'cause codes' used in the industry, intended to aid technicians when they are assessing and researching events. 'The battle against error': this is the main objective of the book. The tools used in the nuclear industry are presented one after another in a practical way. What each tool is, who has to use it, and when to use it are described in sufficient detail so that anyone can assimilate the tool and, if applicable, pursue its implementation in his or her organization. (authors)

  11. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  12. Reducing Check-in Errors at Brigham Young University through Statistical Process Control

    Science.gov (United States)

    Spackman, N. Andrew

    2005-01-01

    The relationship between the library and its patrons is damaged and the library's reputation suffers when returned items are not checked in. An informal survey reveals librarians' concern for this problem and their efforts to combat it, although few libraries collect objective measurements of errors or the effects of improvement efforts. Brigham…

  13. Pediatric crisis resource management training improves emergency medicine trainees' perceived ability to manage emergencies and ability to identify teamwork errors.

    Science.gov (United States)

    Bank, Ilana; Snell, Linda; Bhanji, Farhan

    2014-12-01

    Improved pediatric crisis resource management (CRM) training is needed in emergency medicine residencies because of the variable nature of exposure to critically ill pediatric patients during training. We created a short, needs-based pediatric CRM simulation workshop with post-activity follow-up to determine retention of CRM knowledge. Our aims were to provide a realistic learning experience for residents and to help the learners recognize common errors in teamwork and improve their perceived abilities to manage ill pediatric patients. Residents participated in a 4-hour objectives-based workshop derived from a formal needs assessment. To quantify their subjective abilities to manage pediatric cases, the residents completed a post-workshop survey (with a retrospective pre-component to assess perceived change). Ability to identify CRM errors was determined via a written assessment of scripted errors in a prerecorded video observed before and 1 month after completion of the workshop. Fifteen of the 16 eligible emergency medicine residents (postgraduate years 1-5) attended the workshop and completed the surveys. There were significant differences in 15 of 16 retrospective pre/post survey items, using the Wilcoxon rank sum test for non-parametric data. These included ability to be an effective team leader in general (P < 0.008), delegating tasks appropriately (P < 0.009), and ability to ensure closed-loop communication (P < 0.008). There was a significant improvement in identification of CRM errors on the video assessment, from 3 of the 12 scripted CRM errors to 7 of the 12 (P < 0.006). The pediatric CRM simulation-based workshop improved the residents' self-perceptions of their pediatric CRM abilities and improved their performance on a video assessment task.

  14. Human Error Probability Assessment During Maintenance Activities of Marine Systems

    Directory of Open Access Journals (Sweden)

    Rabiul Islam

    2018-03-01

    Background: Maintenance operations on-board ships are highly demanding. Maintenance operations are intensive activities requiring high man-machine interaction in challenging and evolving conditions. The evolving conditions include weather conditions, workplace temperature, ship motion, noise and vibration, and workload and stress. For example, extreme weather conditions affect seafarers' performance, increasing the chances of error and, consequently, can cause injuries or fatalities to personnel. An effective human error probability model is required to better manage maintenance on-board ships. The developed model would assist in developing and maintaining effective risk management protocols. Thus, the objective of this study is to develop a human error probability model considering various internal and external factors affecting seafarers' performance. Methods: The human error probability model is developed using probability theory applied to a Bayesian network. The model is tested using data received through a questionnaire survey of >200 seafarers with >5 years of experience. The model developed in this study is used to find the reliability of human performance on particular maintenance activities. Results: The developed methodology is tested on the maintenance of a marine engine's cooling water pump for the engine department and an anchor windlass for the deck department. In the considered case studies, human error probabilities are estimated in various scenarios and the results are compared between the scenarios and the different seafarer categories. The results of the case studies for both departments are also compared. Conclusion: The developed model is effective in assessing human error probabilities. These probabilities would be dynamically updated as and when new information is available on changes in either internal (i.e., training, experience, and fatigue) or external (i.e., environmental and operational) conditions.
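
    A toy Bayesian network conveys how such a model updates the human error probability as conditions change. The structure (weather influencing fatigue and error) and all probabilities below are invented for illustration; the paper's network is built from questionnaire data over many more factors.

```python
from itertools import product

# Toy two-parent network: Weather -> Fatigue -> Error, with weather also
# acting on error directly. All probabilities are invented for the sketch.
p_weather_bad = 0.3
p_fatigue = {True: 0.6, False: 0.2}   # P(fatigued | weather bad?)
p_error = {(True, True): 0.15, (True, False): 0.06,
           (False, True): 0.08, (False, False): 0.01}  # P(error | w, f)

def error_probability(weather_bad=None):
    """Marginal P(error), optionally conditioned on observed weather."""
    total = 0.0
    for w, f in product([True, False], repeat=2):
        pw = 1.0 if weather_bad == w else (
            0.0 if weather_bad is not None
            else (p_weather_bad if w else 1 - p_weather_bad))
        pf = p_fatigue[w] if f else 1 - p_fatigue[w]
        total += pw * pf * p_error[(w, f)]
    return total

print(f"P(error) = {error_probability():.4f}")              # prior marginal
print(f"P(error | bad weather) = {error_probability(True):.4f}")  # updated
```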

  15. [Comparison study on sampling methods of Oncomelania hupensis snail survey in marshland schistosomiasis epidemic areas in China].

    Science.gov (United States)

    An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang

    2016-06-29

    To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency and economy of snail surveys. A 50 m × 50 m quadrat in the Chayegang marshland near Henghu farm in the Poyang Lake region was selected as the experimental field, and a whole-coverage method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes of the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.221 7, 0.302 4 and 0.047 8, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach of lower cost and higher precision for snail surveys.
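
    The precision gain from stratifying on altitude can be illustrated with a toy field carrying a density gradient along the stratum variable. Everything below (field size, gradient, sample sizes) is a mock setup for the sketch, not the Chayegang data.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy 50x50 field with a density gradient along "altitude" (rows),
# mimicking the stratification variable used in the study.
field = rng.poisson(lam=np.linspace(0.2, 2.0, 50)[:, None] * np.ones((50, 50)))
true_mean = field.mean()

def srs(n):
    """Simple random sample of n cells."""
    idx = rng.choice(field.size, n, replace=False)
    return field.ravel()[idx].mean()

def stratified(n, strata=5):
    """Equal-allocation stratified sample over altitude bands."""
    rows = np.array_split(np.arange(50), strata)
    per = n // strata
    means = [rng.choice(field[r, :].ravel(), per, replace=False).mean()
             for r in rows]
    return float(np.mean(means))

reps = 2000
err_srs = np.std([srs(300) - true_mean for _ in range(reps)])
err_str = np.std([stratified(300) - true_mean for _ in range(reps)])
print(f"SRS error sd: {err_srs:.4f}, stratified error sd: {err_str:.4f}")
```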

  16. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

    Probes for CNC machine tools, like every measurement device, have accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are used rarely. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but this is not a widely used procedure. In this paper, the shares of systematic errors and random errors in the total error of exemplary probes are determined. In the case of simple, kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, it is shown that in the case of kinematic probes the commonly specified unidirectional repeatability is significantly better than the 2D performance. However, in the case of the more precise strain-gauge probe, systematic errors are of the same order as random errors, which means that error correction or compensation in this case would not yield any significant benefits.

  17. Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets

    Science.gov (United States)

    Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua

    2017-09-01

    In view of the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and the minimum distance is not large enough, which leads to degraded error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity-check matrices of these type-II QC-LDPC codes consist of zero matrices (weight 0), circulant permutation matrices (CPMs, weight 1) and circulant matrices with weight 2 (W2CMs). The introduction of W2CMs in the parity-check matrices makes it possible to achieve a larger minimum distance, which can improve the error-correction performance of the codes. The Tanner graphs of these codes are free of girth-4 cycles, thus the codes have excellent decoding convergence characteristics. In addition, because the parity-check matrices have a quasi-dual-diagonal structure, a fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes achieve better error-correction performance and exhibit no error floor over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
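
    The block structure described, with zero matrices, CPMs and W2CMs tiled into a parity-check matrix, is easy to sketch. The shift values below are arbitrary placeholders; in the paper they come from perfect cyclic difference sets precisely so that the Tanner graph stays free of girth-4 cycles.

```python
import numpy as np

def cpm(size, shift):
    """Circulant permutation matrix: the identity cyclically shifted."""
    return np.roll(np.eye(size, dtype=np.uint8), shift, axis=1)

def w2cm(size, shift1, shift2):
    """Weight-2 circulant matrix: sum of two distinct CPMs (mod 2)."""
    assert shift1 != shift2
    return (cpm(size, shift1) + cpm(size, shift2)) % 2

# Tiny illustrative parity-check matrix mixing the three block types the
# abstract names: zero blocks, CPMs (weight 1), and W2CMs (weight 2).
L = 7
Z = np.zeros((L, L), dtype=np.uint8)
H = np.block([
    [cpm(L, 1), w2cm(L, 0, 3), Z,             cpm(L, 2)],
    [Z,         cpm(L, 4),     w2cm(L, 1, 5), cpm(L, 0)],
])
print(H.shape, "column weights:", H.sum(axis=0))
```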

  18. Comparison of survey and photogrammetry methods to position gravity data, Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Ponce, D.A.; Wu, S.S.C.; Spielman, J.B.

    1985-01-01

    Locations of gravity stations at Yucca Mountain, Nevada, were determined by a survey using an electronic distance-measuring device and by a photogrammetric method. The data from both methods were compared to determine if horizontal and vertical coordinates developed from photogrammetry are sufficiently accurate to position gravity data at the site. The results show that elevations from the photogrammetric data have a mean difference of 0.57 ± 0.70 m when compared with those of the surveyed data. Comparison of the horizontal control shows that the two methods agreed to within 0.01 minute. At a latitude of 45°, an error of 0.01 minute (18 m) corresponds to a gravity anomaly error of 0.015 mGal. Bouguer gravity anomalies are most sensitive to errors in elevation, thus elevation is the determining factor in the choice between photogrammetric and survey methods to position gravity data. Because gravity station positions are difficult to locate on aerial photographs, photogrammetric positions are not always exactly at the gravity station; therefore, large disagreements may appear when comparing electronic and photogrammetric measurements. A mean photogrammetric elevation error of 0.57 m corresponds to a gravity anomaly error of 0.11 mGal. Errors of 0.11 mGal are too large for high-precision or detailed gravity measurements but acceptable for regional work. 1 ref. 2 figs., 4 tabs
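
    The quoted sensitivities follow from standard gravity gradients, which this sketch assumes (they are not stated in the report): a free-air gradient of 0.3086 mGal/m, a Bouguer slab term of 0.1119 mGal/m for a crustal density of 2.67 g/cm^3, and a north-south gradient of roughly 0.81 sin(2φ) mGal/km from the normal gravity formula.

```python
import math

# Elevation sensitivity of the Bouguer anomaly (assumed standard values).
elev_err_m = 0.57
bouguer_gradient = 0.3086 - 0.1119           # mGal per metre of elevation
print(f"elevation term: {bouguer_gradient * elev_err_m:.2f} mGal")  # ~0.11

# Latitude sensitivity: 0.01 arcmin of latitude is about 18 m northward.
lat = math.radians(45.0)
horiz_err_km = 18.0 / 1000.0
lat_gradient = 0.81 * math.sin(2 * lat)      # mGal per km northward
print(f"latitude term:  {lat_gradient * horiz_err_km:.3f} mGal")    # ~0.015
```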

  19. Evaluation of slow shutdown system flux detectors in Point Lepreau Generating Station - II: dynamic compensation error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Anghel, V.N.P.; Sur, B. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada); Taylor, D. [New Brunswick Power Nuclear, Point Lepreau, New Brunswick (Canada)

    2009-07-01

    CANDU reactors are protected against reactor overpower by two independent shutdown systems: Shut Down System 1 and 2 (SDS1 and SDS2). At the Point Lepreau Generating Station (PLGS), the shutdown systems can be actuated by measurements of the neutron flux from Platinum-clad Inconel In-Core Flux Detectors. These detectors have a complex dynamic behaviour, characterized by 'prompt' and 'delayed' components with respect to immediate changes in the in-core neutron flux. It was shown previously (I: Dynamic Response Characterization by Anghel et al., this conference) that the dynamic responses of the detectors changed with irradiation, with the SDS2 detectors having 'prompt' signal components that decreased significantly. In this paper we assess the implication of these changes for detector dynamic compensation errors by comparing the compensated detector response with the power-to-fuel and the power-to-coolant responses to neutron flux ramps as assumed by previous error analyses. The dynamic compensation error is estimated at any given trip time for all possible accident flux ramps. Some implications for the shutdown system trip set points, obtained from preliminary results, are discussed. (author)

  20. Fisheries Disaster Survey, 2000

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Responses to selected questions from the Social and Economic Survey administered in spring and summer 2000 to recipients of the second round (Round II) of financial...

  1. Estimation of heading gyrocompass error using a GPS 3DF system: Impact on ADCP measurements

    Directory of Open Access Journals (Sweden)

    Simón Ruiz

    2002-12-01

    Traditionally the horizontal orientation of a ship (heading) has been obtained from a gyrocompass. This instrument is still used on research vessels but has an estimated error of about 2-3 degrees, inducing a systematic error in the cross-track velocity measured by an Acoustic Doppler Current Profiler (ADCP). The three-dimensional positioning system (GPS 3DF) provides an independent heading measurement with accuracy better than 0.1 degree. The Spanish research vessel BIO Hespérides has been operating with this new system since 1996. For the first time on this vessel, the data from this new instrument are used to estimate the gyrocompass error. The methodology we use follows the scheme developed by Griffiths (1994), which compares data from the gyrocompass and the GPS system in order to obtain an interpolated error function. In the present work we apply this methodology to mesoscale surveys performed during the observational phase of the OMEGA project, in the Alboran Sea. The heading-dependent gyrocompass error dominated. Errors in gyrocompass heading of 1.4-3.4 degrees have been found, which give a maximum error in measured cross-track ADCP velocity of 24 cm/s.
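
    The link between heading bias and cross-track velocity error is simple geometry: a heading error δθ rotates the measured velocity, leaving an error of about U sin δθ at ship speed U. The ~4 m/s survey speed assumed below is an illustrative value consistent with the quoted 24 cm/s maximum, not a number from the abstract.

```python
import math

def cross_track_error(ship_speed_ms, heading_error_deg):
    """Cross-track velocity error induced in ADCP water velocities by a
    heading bias: roughly U * sin(delta_theta) at ship speed U."""
    return ship_speed_ms * math.sin(math.radians(heading_error_deg))

# Assuming a typical survey speed of ~4 m/s (about 8 knots); the speed
# is an assumption of this sketch, not quoted in the abstract.
for dtheta in (1.4, 2.5, 3.4):
    err = cross_track_error(4.0, dtheta)
    print(f"heading error {dtheta:3.1f} deg -> {100 * err:4.1f} cm/s")
```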

  2. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Science.gov (United States)

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on modeling the CMM as a rigid body, which requires a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052

  3. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on modeling the CMM as a rigid body, which requires a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.
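
    The "vectorial composition of length error by axis" can be sketched as per-axis length-error coefficients composed over a probed displacement. The coefficients below are hypothetical placeholders; in the model they would be identified from the CMM's response, with the unexplained variability passed to the uncertainty budget.

```python
import math

# Hypothetical per-axis length error coefficients (micrometres of error
# per metre travelled along each machine axis), e.g. from gauge tests.
axis_error_um_per_m = {"x": 2.1, "y": 1.6, "z": 3.0}

def length_error_um(dx_m, dy_m, dz_m):
    """Compose per-axis length errors vectorially for a probed
    displacement (dx, dy, dz), as in the model type described above."""
    ex = axis_error_um_per_m["x"] * dx_m
    ey = axis_error_um_per_m["y"] * dy_m
    ez = axis_error_um_per_m["z"] * dz_m
    return math.sqrt(ex ** 2 + ey ** 2 + ez ** 2)

print(f"{length_error_um(0.3, 0.2, 0.1):.2f} um")  # diagonal move example
```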

  4. Aerial remote sensing surveys progress report: Helicopter geophysical survey of the Oak Ridge Reservation

    International Nuclear Information System (INIS)

    Doll, W.E.; Nyquist, J.E.; King, A.D.; Bell, D.T.; Holladay, J.S.; Labson, V.F.; Pellerin, L.

    1993-03-01

    The 35,252-acre Department of Energy Oak Ridge Reservation (ORR), in the western portion of the Appalachian Valley and Ridge Province in Tennessee, has been a nuclear production and development facility for 50 years. Contaminants in the many waste sites on the ORR include a wide variety of radioactive isotopes as well as many organic and inorganic compounds. The locations, geometry, and contents of many of these waste sites are reasonably well known, while others are poorly known or unknown. To better characterize the reasonably well known sites and search for additional potentially environmentally hazardous sites, a two-phase aerial survey of the ORR was developed. Phase I began in March 1992 and consisted of aerial radiation, multispectral scanner, and photographic (natural color and color infrared) surveys. Phase II began in November 1992 and is described in this report. Phase II consisted of helicopter electromagnetic (HEM), magnetic, and gamma radiation surveys. Targets of the survey included both man-made (drums, trench boundaries, burn pits, well heads) and geologic (fractures, faults, karst features, geologic contacts) features. The Phase II survey has three components: testing, reconnaissance, and high-resolution data acquisition. To date, the testing and reconnaissance data acquisition have been completed, and some of the data have been processed. They indicate that: (1) magnetic and HEM data are complementary and do not always highlight the same anomaly; (2) under favorable circumstances, helicopter magnetometer systems are capable of detecting groups of four or more 55-gal drums at detector altitudes of 15 m or less; (3) HEM data compare favorably with surface data collected over burial trenches; (4) well casings may be related to magnetic monopole anomalies, as would be expected; and (5) changes in HEM and magnetic anomaly character are related to lithologic changes and may be used to track contacts between known outcrops

  5. UBVRIz LIGHT CURVES OF 51 TYPE II SUPERNOVAE

    International Nuclear Information System (INIS)

    Galbany, Lluis; Hamuy, Mario; Jaeger, Thomas de; Moraga, Tania; González-Gaitán, Santiago; Gutiérrez, Claudia P.; Phillips, Mark M.; Morrell, Nidia I.; Thomas-Osip, Joanna; Suntzeff, Nicholas B.; Maza, José; González, Luis; Antezana, Roberto; Wishnjewski, Marina; Krisciunas, Kevin; Krzeminski, Wojtek; McCarthy, Patrick; Anderson, Joseph P.; Stritzinger, Maximilian; Folatelli, Gastón

    2016-01-01

    We present a compilation of UBVRIz light curves of 51 type II supernovae discovered during the course of four different surveys during 1986–2003: the Cerro Tololo Supernova Survey, the Calán/Tololo Supernova Program (C and T), the Supernova Optical and Infrared Survey (SOIRS), and the Carnegie Type II Supernova Survey (CATS). The photometry is based on template-subtracted images to eliminate any potential host galaxy light contamination, and calibrated from foreground stars. This work presents these photometric data, studies the color evolution using different bands, and explores the relation between the magnitude at maximum brightness and the brightness decline parameter (s) from maximum light through the end of the recombination phase. This parameter is found to be shallower for redder bands and appears to have the best correlation in the B band. In addition, it also correlates with the plateau duration, being shorter (longer) for larger (smaller) s values

  6. UBVRIz LIGHT CURVES OF 51 TYPE II SUPERNOVAE

    Energy Technology Data Exchange (ETDEWEB)

    Galbany, Lluis; Hamuy, Mario; Jaeger, Thomas de; Moraga, Tania; González-Gaitán, Santiago; Gutiérrez, Claudia P. [Millennium Institute of Astrophysics, Universidad de Chile (Chile); Phillips, Mark M.; Morrell, Nidia I.; Thomas-Osip, Joanna [Carnegie Observatories, Las Campanas Observatory, Casilla 60, La Serena (Chile); Suntzeff, Nicholas B. [Department of Physics and Astronomy, Texas A and M University, College Station, TX 77843 (United States); Maza, José; González, Luis; Antezana, Roberto; Wishnjewski, Marina [Departamento de Astronomía, Universidad de Chile, Camino El Observatorio 1515, Las Condes, Santiago (Chile); Krisciunas, Kevin [George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A. and M. University, Department of Physics and Astronomy, 4242 TAMU, College Station, TX 77843 (United States); Krzeminski, Wojtek [N. Copernicus Astronomical Center, ul. Bartycka 18, 00-716 Warszawa (Poland); McCarthy, Patrick [The Observatories of the Carnegie Institution for Science, 813 Santa Barbara Street, Pasadena, CA 91101 (United States); Anderson, Joseph P. [European Southern Observatory, Alonso de Cordova 3107, Vitacura, Casilla 19001, Santiago (Chile); Stritzinger, Maximilian [Department of Physics and Astronomy, Aarhus University (Denmark); Folatelli, Gastón, E-mail: lgalbany@das.uchile.cl [Instituto de Astrofísica de La Plata (IALP, CONICET) (Argentina); and others

    2016-02-15

    We present a compilation of UBVRIz light curves of 51 type II supernovae discovered during the course of four different surveys during 1986–2003: the Cerro Tololo Supernova Survey, the Calán/Tololo Supernova Program (C and T), the Supernova Optical and Infrared Survey (SOIRS), and the Carnegie Type II Supernova Survey (CATS). The photometry is based on template-subtracted images to eliminate any potential host galaxy light contamination, and calibrated from foreground stars. This work presents these photometric data, studies the color evolution using different bands, and explores the relation between the magnitude at maximum brightness and the brightness decline parameter (s) from maximum light through the end of the recombination phase. This parameter is found to be shallower for redder bands and appears to have the best correlation in the B band. In addition, it also correlates with the plateau duration, being shorter (longer) for larger (smaller) s values.

  7. Frequency of Burnout, Sleepiness and Depression in Emergency Medicine Residents with Medical Errors in the Emergency Department

    Directory of Open Access Journals (Sweden)

    Alireza Aala

    2014-07-01

    Aims: Medical error is a great concern of patients and physicians. It usually occurs due to physicians' exhaustion, distress and fatigue. In this study, we aimed to evaluate the frequency of distress and fatigue among emergency medicine residents reporting a medical error. Materials and Methods: The study population consisted of emergency medicine residents who completed an emailed questionnaire including a self-assessment of medical errors, the Epworth Sleepiness Scale (ESS) score, the Maslach Burnout Inventory, and the PRIME-MD validated depression screening tool. Results: In this survey, 100 medical errors were reported, including diagnostic errors in 53, therapeutic errors in 24 and follow-up errors in 23 subjects. Most errors were reported by males and third-year residents. Residents had no signs of depression, but all had some degree of sleepiness and burnout. There were significant differences between error subtypes and age, residency year, depression, sleepiness and burnout scores (p<0.0001). Conclusion: Residents committing a medical error usually experience burnout and have some grade of sleepiness that makes them less motivated, increasing the probability of medical errors. However, as none of the residents had depression, it could be concluded that depression has no significant role in medical error occurrence and perhaps it is a possible consequence of medical error. Keywords: Residents; Medical error; Burnout; Sleepiness; Depression

  8. Detected-jump-error-correcting quantum codes, quantum error designs, and quantum computation

    International Nuclear Information System (INIS)

    Alber, G.; Mussinger, M.; Beth, Th.; Charnes, Ch.; Delgado, A.; Grassl, M.

    2003-01-01

    The recently introduced detected-jump-correcting quantum codes are capable of stabilizing qubit systems against spontaneous decay processes arising from couplings to statistically independent reservoirs. These embedded quantum codes exploit classical information about which qubit has emitted spontaneously and correspond to an active error-correcting code embedded in a passive error-correcting code. The construction of a family of one-detected-jump-error-correcting quantum codes is shown and the optimal redundancy, encoding, and recovery as well as general properties of detected-jump-error-correcting quantum codes are discussed. By the use of design theory, multiple-jump-error-correcting quantum codes can be constructed. The performance of one-jump-error-correcting quantum codes under nonideal conditions is studied numerically by simulating a quantum memory and Grover's algorithm

  9. Engineering surveying

    CERN Document Server

    Schofield, W

    2007-01-01

    Engineering surveying involves determining the position of natural and man-made features on or beneath the Earth's surface and utilizing these features in the planning, design and construction of works. It is a critical part of any engineering project. Without an accurate understanding of the size, shape and nature of the site the project risks expensive and time-consuming errors or even catastrophic failure.Engineering Surveying 6th edition covers all the basic principles and practice of this complex subject and the authors bring expertise and clarity. Previous editions of this classic text have given readers a clear understanding of fundamentals such as vertical control, distance, angles and position right through to the most modern technologies, and this fully updated edition continues that tradition.This sixth edition includes:* An introduction to geodesy to facilitate greater understanding of satellite systems* A fully updated chapter on GPS, GLONASS and GALILEO for satellite positioning in surveying* Al...

  10. AQMEII3 evaluation of regional NA/EU simulations and analysis of scale, boundary conditions and emissions error-dependence

    Science.gov (United States)

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) hel...

  11. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    Science.gov (United States)

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA(®) terminology that allows for coding all stages of the medication use process where the error occurred in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed-up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article

  12. Nurses' systems thinking competency, medical error reporting, and the occurrence of adverse events: a cross-sectional study.

    Science.gov (United States)

    Hwang, Jee-In; Park, Hyeoun-Ae

    2017-12-01

    Healthcare professionals' systems thinking is emphasized for patient safety. To report nurses' systems thinking competency, and its relationship with medical error reporting and the occurrence of adverse events. A cross-sectional survey using a previously validated Systems Thinking Scale (STS), was conducted. Nurses from two teaching hospitals were invited to participate in the survey. There were 407 (60.3%) completed surveys. The mean STS score was 54.5 (SD 7.3) out of 80. Nurses with higher STS scores were more likely to report medical errors (odds ratio (OR) = 1.05; 95% confidence interval (CI) = 1.02-1.08) and were less likely to be involved in the occurrence of adverse events (OR = 0.96; 95% CI = 0.93-0.98). Nurses showed moderate systems thinking competency. Systems thinking was a significant factor associated with patient safety. Impact Statement: The findings of this study highlight the importance of enhancing nurses' systems thinking capacity to promote patient safety.

  13. Panel data models extended to spatial error autocorrelation or a spatially lagged dependent variable

    NARCIS (Netherlands)

    Elhorst, J. Paul

    2001-01-01

    This paper surveys panel data models extended to spatial error autocorrelation or a spatially lagged dependent variable. In particular, it focuses on the specification and estimation of four panel data models commonly used in applied research: the fixed effects model, the random effects model, the

  14. NURE aerial gamma-ray and magnetic reconnaissance survey, Colorado-Arizona area: Salton Sea NI II-9, Phoenix NI 12-7, El Centro NI II-12, AJO NI 12-10, Lukeville NH 12-1 quadrangles. Volume I. Narrative report

    International Nuclear Information System (INIS)

    1979-11-01

    A rotary-wing reconnaissance high-sensitivity radiometric and magnetic survey, encompassing several 1:250,000 quadrangles in southwestern Arizona and southeastern California, was performed. The surveyed area consisted of approximately 9300 line miles. The radiometric data were corrected and normalized to 400 feet terrain clearance. The data were identified as to rock type by correlating the data samples with existing geologic maps. Statistics defining the mean and standard deviation of each rock type are presented as listings in Volume I of this report. The departure of the data from its corresponding rock-type mean is computed in terms of standard deviation units and is presented graphically as anomaly maps in Volume II and as computer listings in microfiche form in Volume I. Profiles of the normalized averaged data are contained in Volume II and include traces of the potassium, uranium and thorium count rates, the corresponding ratios, and several ancillary sensor data traces: magnetometer, radio altimeter and barometric pressure height. A description of the local geology is provided, and a discussion of the magnetic and radiometric data is presented together with an evaluation of selected uranium anomalies

  15. Perceptual learning eases crowding by reducing recognition errors but not position errors.

    Science.gov (United States)

    Xiong, Ying-Zi; Yu, Cong; Zhang, Jun-Yun

    2015-08-01

    When an observer reports a letter flanked by additional letters in the visual periphery, the response errors (the crowding effect) may result from failure to recognize the target letter (recognition errors), from mislocating a correctly recognized target letter at a flanker location (target misplacement errors), or from reporting a flanker as the target letter (flanker substitution errors). Crowding can be reduced through perceptual learning. However, it is not known how perceptual learning operates to reduce crowding. In this study we trained observers with a partial-report task (Experiment 1), in which they reported the central target letter of a three-letter string presented in the visual periphery, or a whole-report task (Experiment 2), in which they reported all three letters in order. We then assessed the impact of training on recognition of both unflanked and flanked targets, with particular attention to how perceptual learning affected the types of errors. Our results show that training improved target recognition but not single-letter recognition, indicating that training indeed affected crowding. However, training did not reduce target misplacement errors or flanker substitution errors. This dissociation between target recognition and flanker substitution errors supports the view that flanker substitution may be more likely a by-product (due to response bias), rather than a cause, of crowding. Moreover, the dissociation is not consistent with hypothesized mechanisms of crowding that would predict reduced positional errors.

  16. A Survey of Kurdish Students’ Sound Segment & Syllabic Pattern Errors in the Course of Learning EFL

    Directory of Open Access Journals (Sweden)

    Jahangir Mohammadi

    2014-06-01

    This paper is devoted to finding adequate answers to the following queries: (A) What are the segmental and syllabic pattern errors made by Kurdish students in their pronunciation? (B) Can the problematic areas in pronunciation be predicted by a systematic comparison of the sound systems of both native and target languages? (C) Can there be any consistency between the predictions and the results of error analysis experiments in the same field? To reach the goals of the study, the following steps were taken. 1. The sound systems and syllabic patterns of both languages, Kurdish and English, were described on the basis of place and manner of articulation and the combinatory power of clusters. 2. To carry out a contrastive analysis, the sound segments (vowels, consonants and diphthongs) and the syllabic patterns of both languages were compared in order to bring out the similarities and differences. 3. The syllabic patterns and sound segments in English that had no counterparts in Kurdish were detected and considered problematic areas in pronunciation. 4. To counter-check the acquired predictions, an experiment was carried out with 50 male and female pre-university students. Subjects were given some passages to read. The readability index of these passages ranged from 8.775 to 10.432, which is quite suitable in comparison to the readability index of pre-university texts, ranging from 8.675 to 10.475. All samples of sound production were transcribed in IPA and the syllabic patterns were shown by the symbols 'V' and 'C', indicating vowels and consonants respectively. An error analysis of the acquired data proved that English sound segments and syllabic patterns with no counterparts in Kurdish resulted in pronunciation errors.

  17. Feasibility study for an airborne high-sensitivity gamma-ray survey of Alaska. Phase II (final) report: 1976--1979 program

    International Nuclear Information System (INIS)

    1975-01-01

    This study determines the extent to which it is feasible to use airborne, high-sensitivity gamma spectrometer systems for uranium reconnaissance in the State of Alaska, and specifies a preliminary plan for surveying the entire state in the 1975-1979 time frame. Phase I included the design of a program to survey the highest priority areas in 1975 using available aircraft and spectrometer equipment. This has now resulted in a contract for 10,305 flight-line miles to cover about 11 of the 1:250,000 scale quadrangles using a DC-3 aircraft with an average 6.25 x 25 mile grid of flight lines. Phase II includes the design of alternative programs to cover the remaining 128 quadrangles using either a DC-3 and a Bell 205A helicopter or a Helio Stallion STOL aircraft and a Bell 205A helicopter during 1976-1979. The 1976-1979 time frame allows some time for possible new system developments in both airborne gamma-ray spectrometers and in ancillary equipment, and these are outlined. (auth)

  18. National Youth Survey US: Wave II (NYS-1977)

    Data.gov (United States)

    U.S. Department of Health & Human Services — Youth data for the second wave of the National Youth Survey are contained in this data collection. The first wave was conducted in 1976. Youths were interviewed in...

  19. Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Young, Kevin C

    2013-01-01

    While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC. (paper)

  20. 1982 survey of United States uranium marketing activity

    International Nuclear Information System (INIS)

    1983-09-01

    This report is based on survey data from all utilities, reactor manufacturers, and uranium producers who market uranium. The survey forms are mailed in January of each year with updates in July of each year. This year 59 utilities, 5 reactor manufacturers and agents, and 57 uranium producers were surveyed. Completed survey forms were checked for errors, corrected as necessary, and processed. These data formed the basis for the development of the report. This report is intended for Congress, federal and state agencies, the nuclear industry, and the general public

  1. Confounding and exposure measurement error in air pollution epidemiology.

    Science.gov (United States)

    Sheppard, Lianne; Burnett, Richard T; Szpiro, Adam A; Kim, Sun-Young; Jerrett, Michael; Pope, C Arden; Brunekreef, Bert

    2012-06-01

    Studies in air pollution epidemiology may suffer from some specific forms of confounding and exposure measurement error. This contribution discusses these, mostly in the framework of cohort studies. Evaluation of potential confounding is critical in studies of the health effects of air pollution. The association between long-term exposure to ambient air pollution and mortality has been investigated using cohort studies in which subjects are followed over time with respect to their vital status. In such studies, control for individual-level confounders such as smoking is important, as is control for area-level confounders such as neighborhood socio-economic status. In addition, there may be spatial dependencies in the survival data that need to be addressed. These issues are illustrated using the American Cancer Society Cancer Prevention II cohort. Exposure measurement error is a challenge in epidemiology because inference about health effects can be incorrect when the measured or predicted exposure used in the analysis is different from the underlying true exposure. Air pollution epidemiology rarely if ever uses personal measurements of exposure for reasons of cost and feasibility. Exposure measurement error in air pollution epidemiology comes in various dominant forms, which are different for time-series and cohort studies. The challenges are reviewed and a number of suggested solutions are discussed for both study domains.
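
    The attenuating effect of classical exposure measurement error discussed above can be illustrated with a small simulation; this is a generic sketch with invented values, not the authors' analysis.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        true_x = rng.normal(10.0, 2.0, n)                # "true" long-term exposure
        health = 0.5 * true_x + rng.normal(0.0, 1.0, n)  # true effect size is 0.5

        measured = true_x + rng.normal(0.0, 2.0, n)      # classical (independent) error

        print(np.polyfit(true_x, health, 1)[0])    # ~0.50: unbiased with true exposure
        print(np.polyfit(measured, health, 1)[0])  # ~0.25: attenuated toward zero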

  2. Culture and error in space: implications from analog environments.

    Science.gov (United States)

    Helmreich, R L

    2000-09-01

    An ongoing study investigating national, organizational, and professional cultures in aviation and medicine is described. Survey data from 26 nations on 5 continents show highly significant national differences regarding appropriate relationships between leaders and followers, in group vs. individual orientation, and in values regarding adherence to rules and procedures. These findings replicate earlier research on dimensions of national culture. Data collected also isolate significant operational issues in multi-national flight crews. While there are no better or worse cultures, these cultural differences have operational implications for the way crews function in an international space environment. The positive professional cultures of pilots and physicians exhibit a high enjoyment of the job and professional pride. However, a negative component was also identified characterized by a sense of personal invulnerability regarding the effects of stress and fatigue on performance. This misperception of personal invulnerability has operational implications such as failures in teamwork and increased probability of error. A second component of the research examines team error in operational environments. From observational data collected during normal flight operations, new models of threat and error and their management were developed that can be generalized to operations in space and other socio-technological domains. Five categories of crew error are defined and their relationship to training programs in team performance, known generically as Crew Resource Management, is described. The relevance of these data for future spaceflight is discussed.

  3. Complexometric determination, Part II: Complexometric determination of Cu2+-ions

    Directory of Open Access Journals (Sweden)

    Rajković Miloš B.

    2002-01-01

    A copper-selective electrode of the coated-wire type, based on sulphidized copper wire, was applied successfully for determining Cu(II) ions by complexometric titration with the disodium salt of EDTA (complexon III). Through the formation of internal complex compounds with the Cu(II) ion, the copper concentration in the solution decreases, and this is followed by a change of potential of the indicator system Cu-DWISE (or Cu-EDWISE)/SCE. At the end point of the titration, when all the Cu(II) ions have been consumed in forming the complex with EDTA, a steep rise of potential occurs, so that the quantity of copper present in the solution can be determined from the first or second derivative of the titration curve. The copper-selective electrode responded well to titration with EDTA as a complexing agent, with no "fatigue" after a great number of repeated measurements. Errors occurring during quantitative measurements were mainly a characteristic of the overall procedure, which, because subjectivity cannot be excluded completely, involves a constant error; the reproducibility of the results confirmed this. The disodium salt of EDTA proved a very efficient titrant in all titrations and at various concentrations of Cu(II) ions in the solution, with a somewhat weaker response at lower concentrations.
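
    The derivative-based end-point detection described above is straightforward to reproduce numerically. A minimal sketch, assuming a synthetic sigmoidal titration curve rather than real electrode data:

        import numpy as np

        # Synthetic potentiometric curve: potential E (mV) vs. titrant volume V (mL).
        V = np.linspace(0.0, 20.0, 201)
        E = 300.0 / (1.0 + np.exp(-(V - 10.0) / 0.3))  # sigmoid; end point near 10 mL

        dE = np.gradient(E, V)    # first derivative: peaks at the end point
        d2E = np.gradient(dE, V)  # second derivative: crosses zero at the end point

        print(f"end point at {V[np.argmax(dE)]:.2f} mL")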

  4. Prevalence of uncorrected refractive errors in adults aged 30 years and above in a rural population in Pakistan

    International Nuclear Information System (INIS)

    Abdullah, A.S.; Azam, M.; Nigar, M.

    2015-01-01

    Uncorrected refractive errors are a leading cause of visual disability globally. This population-based study was done to estimate the prevalence of uncorrected refractive errors in adults aged 30 years and above in the village of Pawakah, Khyber Pakhtunkhwa (KPK), Pakistan. Methods: It was a cross-sectional survey in which 1000 individuals were included randomly. All individuals were screened for uncorrected refractive errors, and those whose visual acuity (VA) was found to be less than 6/6 were refracted. In those whose refraction was unsatisfactory (i.e., a best-corrected visual acuity of <6/6), further examination was done to establish the cause of the subnormal vision. Results: A total of 917 subjects participated in the survey (response rate 92%). The prevalence of uncorrected refractive errors was found to be 23.97% among males and 20% among females. The prevalence of visually disabling refractive errors was 6.89% in males and 5.71% in females. The prevalence increased with age, with the maximum prevalence in the 51-60 years age group. Hypermetropia (10.14%) was found to be the commonest refractive error, followed by myopia (6.00%) and astigmatism (5.6%). The prevalence of presbyopia was 57.5% (60.45% in males and 55.23% in females). Poor affordability was the commonest barrier to the use of spectacles, followed by unawareness. Cataract was the commonest reason for impaired vision after refractive correction. The prevalence of blindness was 1.96% (1.53% in males and 2.28% in females) in this community, with cataract as the commonest cause. Conclusions: Despite being the most easily avoidable cause of subnormal vision, uncorrected refractive errors still account for a major proportion of the burden of decreased vision in this area. Effective measures for screening and affordable correction of refractive errors need to be incorporated into the health care delivery system. (author)

  5. The HIFI spectral survey of AFGL 2591 (CHESS). II. Summary of the survey

    Science.gov (United States)

    Kaźmierczak-Barthel, M.; van der Tak, F. F. S.; Helmich, F. P.; Chavarría, L.; Wang, K.-S.; Ceccarelli, C.

    2014-07-01

    Aims: This paper presents the richness of submillimeter spectral features in the high-mass star forming region AFGL 2591. Methods: As part of the Chemical Herschel Survey of Star Forming Regions (CHESS) key programme, AFGL 2591 was observed by the Herschel (HIFI) instrument. The spectral survey covered a frequency range from 480 to 1240 GHz as well as single lines from 1267 to 1901 GHz (i.e. CO, HCl, NH3, OH, and [CII]). Rotational and population diagram methods were used to calculate column densities, excitation temperatures, and the emission extents of the observed molecules associated with AFGL 2591. The analysis was supplemented with several lines from ground-based JCMT spectra. Results: From the HIFI spectral survey analysis a total of 32 species were identified (including isotopologues). Although the lines are mostly quite weak (∫T_mb dV ~ a few K km s^-1), 268 emission and 16 absorption lines were found (excluding blends). Molecular column densities range from 6 × 10^11 to 1 × 10^19 cm^-2 and excitation temperatures from 19 to 175 K. Cold (e.g. HCN, H2S, and NH3, with temperatures below 70 K) and warm species (e.g. CH3OH, SO2) in the protostellar envelope can be distinguished. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. Appendix A is available in electronic form at http://www.aanda.org
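
    The rotational (population) diagram method named above fits ln(N_u/g_u) = ln(N_tot/Q(T_ex)) - E_u/T_ex (with E_u expressed in kelvin) as a straight line. A minimal sketch with invented level populations, not values from the survey:

        import numpy as np

        # Rotation-diagram fit: slope gives -1/T_ex, intercept gives ln(N_tot/Q).
        E_u = np.array([25.0, 50.0, 75.0, 100.0])      # upper-level energies (K)
        ln_Nu_gu = np.array([30.1, 29.6, 29.1, 28.6])  # ln(column density / stat. weight)

        slope, intercept = np.polyfit(E_u, ln_Nu_gu, 1)
        T_ex = -1.0 / slope
        print(f"T_ex = {T_ex:.0f} K")  # N_tot follows from the intercept once Q(T_ex) is known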

  6. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and their taxonomy. These notions are also applied to events that have occurred in the nuclear power, aviation, and shipping industries. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication, deficiencies in resource/task management, excessive authority gradients, and excessive professional courtesy can cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors.

  7. The Errors of Our Ways: Understanding Error Representations in Cerebellar-Dependent Motor Learning.

    Science.gov (United States)

    Popa, Laurentiu S; Streng, Martha L; Hewitt, Angela L; Ebner, Timothy J

    2016-04-01

    The cerebellum is essential for error-driven motor learning and is strongly implicated in detecting and correcting for motor errors. Therefore, elucidating how motor errors are represented in the cerebellum is essential in understanding cerebellar function, in general, and its role in motor learning, in particular. This review examines how motor errors are encoded in the cerebellar cortex in the context of a forward internal model that generates predictions about the upcoming movement and drives learning and adaptation. In this framework, sensory prediction errors, defined as the discrepancy between the predicted consequences of motor commands and the sensory feedback, are crucial for both on-line movement control and motor learning. While many studies support the dominant view that motor errors are encoded in the complex spike discharge of Purkinje cells, others have failed to relate complex spike activity with errors. Given these limitations, we review recent findings in the monkey showing that complex spike modulation is not necessarily required for motor learning or for simple spike adaptation. Also, new results demonstrate that the simple spike discharge provides continuous error signals that both lead and lag the actual movements in time, suggesting errors are encoded as both an internal prediction of motor commands and the actual sensory feedback. These dual error representations have opposing effects on simple spike discharge, consistent with the signals needed to generate sensory prediction errors used to update a forward internal model.

  8. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and applies several error concealment techniques in the decoder. The decoder resynchronizes more quickly, and with fewer errors, than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  9. Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice

    International Nuclear Information System (INIS)

    Kim, Isaac H.

    2011-01-01

    We present a family of non-Calderbank-Shor-Steane quantum error-correcting codes consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from the ferromagnetic interaction of an overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits is equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survives at finite temperature; and (iv) behave as classical memory at finite temperature.

  10. Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice

    Science.gov (United States)

    Kim, Isaac H.

    2011-05-01

    We present a family of non-Calderbank-Shor-Steane quantum error-correcting codes consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from the ferromagnetic interaction of an overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits is equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survives at finite temperature; and (iv) behave as classical memory at finite temperature.

  11. Rotational error in path integration: encoding and execution errors in angle reproduction.

    Science.gov (United States)

    Chrastil, Elizabeth R; Warren, William H

    2017-06-01

    Path integration is fundamental to human navigation. When a navigator leaves home on a complex outbound path, they are able to keep track of their approximate position and orientation and return to their starting location on a direct homebound path. However, there are several sources of error during path integration. Previous research has focused almost exclusively on encoding error: the error in registering the outbound path in memory. Here, we also consider execution error: the error in the response, such as turning and walking a homebound trajectory. In two experiments conducted in ambulatory virtual environments, we examined the contribution of execution error to the rotational component of path integration using angle reproduction tasks. In the reproduction tasks, participants rotated once and then rotated again to face the original direction, either reproducing the initial turn or turning through the supplementary angle. One outstanding difficulty in disentangling encoding and execution error during a typical angle reproduction task is that as the encoding angle increases, so does the required response angle. In Experiment 1, we dissociated these two variables by asking participants to report each encoding angle using two different responses: by turning to walk on a path parallel to the initial facing direction in the same (reproduction) or opposite (supplementary angle) direction. In Experiment 2, participants reported the encoding angle by turning both rightward and leftward onto a path parallel to the initial facing direction, over a larger range of angles. The results suggest that execution error, not encoding error, is the predominant source of error in angular path integration. These findings also imply that the path integrator uses an intrinsic (action-scaled) rather than an extrinsic (objective) metric.

  12. Assessing Banks’ Cost of Complying with Basel II

    OpenAIRE

    David VanHoose

    2007-01-01

    This policy brief assesses the implications of Basel II for bank regulatory compliance costs. In spite of widespread complaints by bankers about the costs of complying with Basel II rules, the academic literature has given surprisingly little attention to quantifying these costs. The brief discusses estimates of Basel II compliance costs based on commonly utilized rules of thumb and on survey data collected by the Office of the Comptroller of the Currency (OCC). In addition, it utilizes OCC b...

  13. The role of positional errors while interpolating soil organic carbon contents using satellite imagery

    NARCIS (Netherlands)

    Samsonova, V.P.; Meshalkina, J.L.; Blagoveschensky, Y.N.; Yaroslavtsev, A.M.; Stoorvogel, J.J.

    2018-01-01

    Increasingly, soil surveys make use of a combination of legacy data, ancillary data and new field data. While combining the different sources of information, positional errors can play a large role. For example, the spatial discrepancy between remote sensing images and field data can depend on

  14. Measuring the Accuracy of Survey Responses using Administrative Register Data

    DEFF Research Database (Denmark)

    Kreiner, Claus Thustrup; Lassen, David Dreyer; Leth-Petersen, Søren

    2015-01-01

    This paper shows how Danish administrative register data can be combined with survey data at the person level and be used to validate information collected in the survey. Register data are collected by automatic third party reporting and the potential errors associated with the two data sources...

  15. Grizzly Bear Noninvasive Genetic Tagging Surveys: Estimating the Magnitude of Missed Detections.

    Directory of Open Access Journals (Sweden)

    Jason T Fisher

    Sound wildlife conservation decisions require sound information, and scientists increasingly rely on remotely collected data over large spatial scales, such as noninvasive genetic tagging (NGT). Grizzly bears (Ursus arctos), for example, are difficult to study at population scales except with noninvasive data, and NGT via hair trapping informs management over much of grizzly bears' range. Considerable statistical effort has gone into estimating sources of heterogeneity, but detection error, arising when a visiting bear fails to leave a hair sample, has not been independently estimated. We used camera traps to survey grizzly bear occurrence at fixed hair traps and multi-method hierarchical occupancy models to estimate the probability that a visiting bear actually leaves a hair sample with viable DNA. We surveyed grizzly bears via hair trapping and camera trapping for 8 monthly surveys at 50 (2012) and 76 (2013) sites in the Rocky Mountains of Alberta, Canada. We used multi-method occupancy models to estimate site occupancy, probability of detection, and conditional occupancy at a hair trap. We tested the prediction that detection error in NGT studies could be induced by temporal variability within season, leading to underestimation of occupancy. NGT via hair trapping consistently underestimated grizzly bear occupancy at a site when compared to camera trapping. At best occupancy was underestimated by 50%; at worst, by 95%. Probability of false absence was reduced through successive surveys, but this mainly accounts for error imparted by movement among repeated surveys, not necessarily missed detections by extant bears. The implications of missed detections and biased occupancy estimates for density estimation, which forms the crux of management plans, require consideration. We suggest hair-trap NGT studies should estimate and correct detection error using independent survey methods such as cameras, to ensure the reliability of the data upon which species

  16. Grizzly Bear Noninvasive Genetic Tagging Surveys: Estimating the Magnitude of Missed Detections.

    Science.gov (United States)

    Fisher, Jason T; Heim, Nicole; Code, Sandra; Paczkowski, John

    2016-01-01

    Sound wildlife conservation decisions require sound information, and scientists increasingly rely on remotely collected data over large spatial scales, such as noninvasive genetic tagging (NGT). Grizzly bears (Ursus arctos), for example, are difficult to study at population scales except with noninvasive data, and NGT via hair trapping informs management over much of grizzly bears' range. Considerable statistical effort has gone into estimating sources of heterogeneity, but detection error-arising when a visiting bear fails to leave a hair sample-has not been independently estimated. We used camera traps to survey grizzly bear occurrence at fixed hair traps and multi-method hierarchical occupancy models to estimate the probability that a visiting bear actually leaves a hair sample with viable DNA. We surveyed grizzly bears via hair trapping and camera trapping for 8 monthly surveys at 50 (2012) and 76 (2013) sites in the Rocky Mountains of Alberta, Canada. We used multi-method occupancy models to estimate site occupancy, probability of detection, and conditional occupancy at a hair trap. We tested the prediction that detection error in NGT studies could be induced by temporal variability within season, leading to underestimation of occupancy. NGT via hair trapping consistently underestimated grizzly bear occupancy at a site when compared to camera trapping. At best occupancy was underestimated by 50%; at worst, by 95%. Probability of false absence was reduced through successive surveys, but this mainly accounts for error imparted by movement among repeated surveys, not necessarily missed detections by extant bears. The implications of missed detections and biased occupancy estimates for density estimation-which form the crux of management plans-require consideration. We suggest hair-trap NGT studies should estimate and correct detection error using independent survey methods such as cameras, to ensure the reliability of the data upon which species management and
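
    The cumulative effect of the repeated surveys mentioned above follows directly from the per-survey detection probability p: an occupied site goes undetected in all k surveys with probability (1-p)^k. A small sketch with assumed detection probabilities, not the paper's estimates:

        def p_false_absence(p_detect: float, k_surveys: int) -> float:
            """Probability an occupied site yields no detection in k independent surveys."""
            return (1.0 - p_detect) ** k_surveys

        # Assumed per-survey detection probabilities (illustrative only):
        for p in (0.1, 0.3, 0.5):
            print(p, [round(p_false_absence(p, k), 3) for k in (1, 4, 8)])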

  17. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and applies several error concealment techniques in the decoder. The decoder resynchronizes more quickly, and with fewer errors, than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  18. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
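
    For independent measurement errors, the error propagated into a materials balance MB = beginning inventory + additions - removals - ending inventory is the quadrature sum of the component errors. A minimal sketch with invented standard deviations:

        import math

        def materials_balance_sigma(sigmas):
            """Std. dev. of MB = beginning inventory + additions - removals -
            ending inventory, assuming independent measurement errors."""
            return math.sqrt(sum(s ** 2 for s in sigmas))

        # Invented standard deviations (kg) for the four terms:
        print(round(materials_balance_sigma([0.5, 0.3, 0.3, 0.5]), 3))  # 0.825 kg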

  19. Mapping neighborhood scale survey responses with uncertainty metrics

    Directory of Open Access Journals (Sweden)

    Charles Robert Ehlschlaeger

    2016-12-01

    This paper presents a methodology of mapping population-centric social, infrastructural, and environmental metrics at neighborhood scale. This methodology extends traditional survey analysis methods to create cartographic products useful in agent-based modeling and geographic information analysis. It utilizes and synthesizes survey microdata, sub-upazila attributes, land use information, and ground truth locations of attributes to create neighborhood scale multi-attribute maps. Monte Carlo methods are employed to combine any number of survey responses to stochastically weight survey cases and to simulate survey cases' locations in a study area. Through such Monte Carlo methods, known errors from each of the input sources can be retained. By keeping individual survey cases as the atomic unit of data representation, this methodology ensures that important covariates are retained and that ecological inference fallacy is eliminated. These techniques are demonstrated with a case study from the Chittagong Division in Bangladesh. The results provide a population-centric understanding of many social, infrastructural, and environmental metrics desired in humanitarian aid and disaster relief planning and operations wherever long term familiarity is lacking. Of critical importance is that the resulting products have easy to use explicit representation of the errors and uncertainties of each of the input sources via the automatically generated summary statistics created at the application's geographic scale.

  20. Reduction in Chemotherapy Mixing Errors Using Six Sigma: Illinois CancerCare Experience.

    Science.gov (United States)

    Heard, Bridgette; Miller, Laura; Kumar, Pankaj

    2012-03-01

    Chemotherapy mixing errors (CTMRs), although rare, have serious consequences. Illinois CancerCare is a large practice with multiple satellite offices. The goal of this study was to reduce the number of CTMRs using Six Sigma methods. A Six Sigma team consisting of five participants (registered nurses and pharmacy technicians [PTs]) was formed. The team had 10 hours of Six Sigma training in the DMAIC (ie, Define, Measure, Analyze, Improve, Control) process. Measurement of errors started from the time the CT order was verified by the PT to the time of CT administration by the nurse. Data collection included retrospective error tracking software, system audits, and staff surveys. Root causes of CTMRs included inadequate knowledge of CT mixing protocol, inconsistencies in checking methods, and frequent changes in staffing of clinics. Initial CTMRs (n = 33,259) constituted 0.050%, with 77% of these errors affecting patients. The action plan included checklists, education, and competency testing. The postimplementation error rate (n = 33,376, annualized) over a 3-month period was reduced to 0.019%, with only 15% of errors affecting patients. Initial Sigma was calculated at 4.2; this process resulted in the improvement of Sigma to 5.2, representing a 100-fold reduction. Financial analysis demonstrated a reduction in annualized loss of revenue (administration charges and drug wastage) from $11,537.95 (Medicare Average Sales Price) before the start of the project to $1,262.40. The Six Sigma process is a powerful technique in the reduction of CTMRs.
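
    Sigma levels like the 4.2 and 5.2 quoted above are conventionally derived from a defect rate via the inverse normal CDF plus a 1.5-sigma shift. The sketch below uses that common convention; it will not exactly reproduce the study's figures, which depend on how defect opportunities were counted.

        from scipy.stats import norm

        def sigma_level(defects: int, opportunities: int, shift: float = 1.5) -> float:
            """Short-term process sigma from a defect rate, using the common
            1.5-sigma shift convention (an assumption, not the study's method)."""
            p = defects / opportunities
            return norm.ppf(1.0 - p) + shift

        # Roughly the pre-implementation error rate quoted above (~0.050%):
        print(round(sigma_level(defects=17, opportunities=33_259), 2))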

  1. Prediction-error of Prediction Error (PPE)-based Reversible Data Hiding

    OpenAIRE

    Wu, Han-Zhou; Wang, Hong-Xia; Shi, Yun-Qing

    2016-01-01

    This paper presents a novel reversible data hiding (RDH) algorithm for gray-scaled images, in which the prediction-error of prediction error (PPE) of a pixel is used to carry the secret data. In the proposed method, the pixels to be embedded are firstly predicted with their neighboring pixels to obtain the corresponding prediction errors (PEs). Then, by exploiting the PEs of the neighboring pixels, the prediction of the PEs of the pixels can be determined. And, a sorting technique based on th...
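
    The PE/PPE construction can be sketched with a simple mean-of-causal-neighbors predictor; the predictor and the 4x4 pixel values below are illustrative assumptions, and the paper's actual predictor and sorting steps are more elaborate.

        import numpy as np

        # Minimal PE / PPE sketch for a gray-scale image.
        img = np.array([[100, 102, 104, 103],
                        [101, 120, 106, 105],
                        [103, 105, 107, 108],
                        [104, 106, 109, 110]], dtype=np.int32)

        pred = (img[:-1, 1:] + img[1:, :-1]) // 2  # predict from up and left neighbors
        pe = img[1:, 1:] - pred                    # prediction errors (PEs)

        pe_pred = (pe[:-1, 1:] + pe[1:, :-1]) // 2 # predict each PE from neighboring PEs
        ppe = pe[1:, 1:] - pe_pred                 # prediction-error of prediction error

        print(pe)
        print(ppe)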

  2. Evaluation of medication errors with implementation of electronic health record technology in the medical intensive care unit

    Directory of Open Access Journals (Sweden)

    Liao TV

    2017-05-01

    T Vivian Liao,1 Marina Rabinovich,2 Prasad Abraham,2 Sebastian Perez,3 Christiana DiPlotti,4 Jenny E Han,5 Greg S Martin,5 Eric Honig5; 1Department of Pharmacy Practice, College of Pharmacy, Mercer Health Sciences Center, 2Department of Pharmacy and Clinical Nutrition, Grady Health System, 3Department of Surgery, Emory University, 4Pharmacy, Ingles Markets, 5Department of Medicine, Emory University, Atlanta, GA, USA. Purpose: Patients in the intensive care unit (ICU) are at an increased risk for medication errors (MEs) and adverse drug events from multifactorial causes. The ME rate ranges from 1.2 to 947 per 1,000 patient days in the medical ICU (MICU). Studies of the implementation of electronic health records (EHR) have concluded that it significantly reduced overall prescribing errors and that the number of errors that caused patient harm decreased. However, other types of errors, such as wrong dose and omission of required medications, increased after EHR implementation. We sought to compare the number of MEs before and after EHR implementation in the MICU, with additional evaluation of error severity. Patients and methods: Prospective, observational, quality improvement study of all patients admitted to a single MICU service at an academic medical center. Patients were evaluated during four periods over 2 years: August-September 2010 (preimplementation; period I), January-February 2011 (2 months postimplementation; period II), August-September 2012 (21 months postimplementation; period III), and January-February 2013 (25 months postimplementation; period IV). All medication orders and administration records were reviewed by an ICU clinical pharmacist, and an ME was defined as a deviation from established standards for prescribing, dispensing, administering, or documenting medication. The frequency and classification of MEs were compared between groups by chi square; p<0.05 was considered significant. Results: There was a statistically significant increase

  3. Diagnostic errors in pediatric radiology

    International Nuclear Information System (INIS)

    Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.

    2011-01-01

    Little is known about the frequency, types, and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as diagnoses that were delayed, wrong, or missed; they were classified as perceptual, cognitive, system-related, or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean: 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean: 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean: 1.2 errors/case), all of which were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  4. Evaluation of statistical models for forecast errors from the HBV model

    Science.gov (United States)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

    Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors, with parameters conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately; the errors were first NQT-transformed before the mean error values were conditioned on climate, forecasted inflow, and the previous day's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe R_eff increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals; their main drawback was that the distributions are less reliable than for Model 3. For Model 3 the median values did not fit well, since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time, Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
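
    A minimal sketch of the first model's structure, a Box-Cox transformation followed by a first-order autoregressive fit to the forecast errors, on synthetic inflows (the conditioning on weather classes is omitted, and all data are invented):

        import numpy as np
        from scipy.stats import boxcox

        rng = np.random.default_rng(1)

        # Synthetic positive inflow series and forecasts (illustrative data only).
        obs = np.exp(rng.normal(3.0, 0.4, 200))
        fcst = obs * np.exp(rng.normal(0.0, 0.2, 200))

        obs_t, lam = boxcox(obs)            # Box-Cox transform of observations
        fcst_t = (fcst**lam - 1) / lam      # same lambda applied to forecasts (lam != 0 assumed)

        err = obs_t - fcst_t                        # forecast errors in transformed space
        phi = np.polyfit(err[:-1], err[1:], 1)[0]   # AR(1): e_t = phi * e_{t-1} + eps
        print(f"lambda = {lam:.2f}, AR(1) phi = {phi:.2f}")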

  5. An Impurity Emission Survey in the near UV and Visible Spectral Ranges of Electron Cyclotron Heated (ECH) Plasma in the TJ-II Stellarator

    International Nuclear Information System (INIS)

    McCarthy, K. J.; Zurro, B.; Baciero, A.

    2001-01-01

    We report on a near-ultraviolet and visible spectroscopic survey (220-600 nm) of electron cyclotron resonance (ECR) heated plasmas created in the TJ-II stellarator, with central electron temperatures up to 2 keV and central electron densities up to 1.7 × 10^19 m^-3. Approximately 1200 lines from thirteen elements have been identified. The purpose of the work is to identify the principal impurities and spectral lines present in TJ-II plasmas, as well as their possible origins, and to search for transitions from highly ionised ions. This work will act as a base for identifying suitable transitions for following the evolution of impurities under different operating regimes, and multiplet systems for line polarisation studies. It is intended to use the database created as a spectral-line reference for comparing spectra under different operating and plasma-heating regimes. (Author)

  6. The Curious Anomaly of Skewed Judgment Distributions and Systematic Error in the Wisdom of Crowds

    DEFF Research Database (Denmark)

    Nash, Ulrik William

    2014-01-01

    about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can...... positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support...

  7. Civilians in World War II and DSM-IV mental disorders: Results from the World Mental Health Survey Initiative

    Science.gov (United States)

    Frounfelker, Rochelle; Gilman, Stephen E.; Betancourt, Theresa S.; Aguilar-Gaxiola, Sergio; Alonso, Jordi; Bromet, Evelyn J.; Bruffaerts, Ronny; de Girolamo, Giovanni; Gluzman, Semyon; Gureje, Oye; Karam, Elie G.; Lee, Sing; Lépine, Jean-Pierre; Ono, Yutaka; Pennell, Beth-Ellen; Popovici, Daniela G.; Have, Margreet ten; Kessler, Ronald C.

    2018-01-01

    Purpose: Understanding the effects of war on mental disorders is important for developing effective post-conflict recovery policies and programs. The current study uses cross-sectional, retrospectively reported data collected as part of the World Mental Health (WMH) Survey Initiative to examine the associations of being a civilian in a war zone/region of terror in World War II with a range of DSM-IV mental disorders. Methods: Adults (n = 3,370) who lived in countries directly involved in World War II in Europe and Japan were administered structured diagnostic interviews of lifetime DSM-IV mental disorders. The associations of war-related traumas with subsequent disorder onset-persistence were assessed with discrete-time survival analysis (lifetime prevalence) and conditional logistic regression (12-month prevalence). Results: Respondents who were civilians in a war zone/region of terror had higher lifetime risks than other respondents of major depressive disorder (MDD; OR 1.5, 95% CI 1.1, 1.9) and anxiety disorder (OR 1.5, 95% CI 1.1, 2.0). The association of war exposure with MDD was strongest in the early years after the war, whereas the association with anxiety disorders increased over time. Among lifetime cases, war exposure was associated with lower past-year risk of anxiety disorders (OR 0.4, 95% CI 0.2, 0.7). Conclusions: Exposure to war in World War II was associated with higher lifetime risk of some mental disorders. Whether comparable patterns will be found among civilians living through more recent wars remains to be seen, but should be recognized as a possibility by those projecting future needs for treatment of mental disorders. PMID:29119266

  8. Civilians in World War II and DSM-IV mental disorders: results from the World Mental Health Survey Initiative.

    Science.gov (United States)

    Frounfelker, Rochelle; Gilman, Stephen E; Betancourt, Theresa S; Aguilar-Gaxiola, Sergio; Alonso, Jordi; Bromet, Evelyn J; Bruffaerts, Ronny; de Girolamo, Giovanni; Gluzman, Semyon; Gureje, Oye; Karam, Elie G; Lee, Sing; Lépine, Jean-Pierre; Ono, Yutaka; Pennell, Beth-Ellen; Popovici, Daniela G; Ten Have, Margreet; Kessler, Ronald C

    2018-02-01

    Understanding the effects of war on mental disorders is important for developing effective post-conflict recovery policies and programs. The current study uses cross-sectional, retrospectively reported data collected as part of the World Mental Health (WMH) Survey Initiative to examine the associations of being a civilian in a war zone/region of terror in World War II with a range of DSM-IV mental disorders. Adults (n = 3370) who lived in countries directly involved in World War II in Europe and Japan were administered structured diagnostic interviews of lifetime DSM-IV mental disorders. The associations of war-related traumas with subsequent disorder onset-persistence were assessed with discrete-time survival analysis (lifetime prevalence) and conditional logistic regression (12-month prevalence). Respondents who were civilians in a war zone/region of terror had higher lifetime risks than other respondents of major depressive disorder (MDD; OR 1.5, 95% CI 1.1, 1.9) and anxiety disorder (OR 1.5, 95% CI 1.1, 2.0). The association of war exposure with MDD was strongest in the early years after the war, whereas the association with anxiety disorders increased over time. Among lifetime cases, war exposure was associated with lower past year risk of anxiety disorders (OR 0.4, 95% CI 0.2, 0.7). Exposure to war in World War II was associated with higher lifetime risk of some mental disorders. Whether comparable patterns will be found among civilians living through more recent wars remains to be seen, but should be recognized as a possibility by those projecting future needs for treatment of mental disorders.

  9. Phase II Characterization Survey of the USNS Bridge (T-AOE 10), Military Sealift Fleet Support Command, Naval Station, Norfolk, Virginia

    Energy Technology Data Exchange (ETDEWEB)

    ALTIC, NICK A

    2012-08-30

    In March 2011, the USNS Bridge was deployed off northeastern Honshu, Japan with the carrier USS Ronald Reagan to assist with relief efforts after the 2011 Tōhoku earthquake and tsunami. During that time, the Bridge was exposed to airborne radioactive materials leaking from the damaged Fukushima I Nuclear Power Plant. The proximity of the Bridge to the impacted area resulted in contamination of the ship's air-handling systems and associated components, as well as potential contamination of other ship surfaces due either to direct intake/deposition or to inadvertent spread from crew/operational activities. Preliminary surveys in the weeks after the event confirmed low-level contamination within the heating, ventilation, and air conditioning (HVAC) ductwork and systems, and in the engine and other auxiliary air intake systems. Some partial decontamination was performed at that time. In response to the airborne contamination event, Military Sealift Fleet Support Command (MSFSC) contracted Oak Ridge Associated Universities (ORAU), under provisions of the Oak Ridge Institute for Science and Education (ORISE) contract, to assess the radiological condition of the Bridge. Phase I identified contamination within the CPS filters, ventilation systems, and miscellaneous equipment, and flagged other suspect locations that could not be accessed at that time (ORAU 2011b). Because the Bridge was underway during the characterization, not all of the potentially impacted systems/spaces could be investigated. As a result, MSFSC contracted with ORAU to perform Phase II of the characterization, specifically to survey the systems/spaces that were previously inaccessible. During Phase II, the ship was in port for routine maintenance operations, allowing access to the previously inaccessible systems/spaces.

  10. Requirements on the Redshift Accuracy for future Supernova and Number Count Surveys

    International Nuclear Information System (INIS)

    Huterer, Dragan; Kim, Alex; Broderick, Tamara

    2004-01-01

    We investigate the required redshift accuracy of type Ia supernova and cluster number-count surveys in order for the redshift uncertainties not to contribute appreciably to the dark energy parameter error budget. For the SNAP supernova experiment, we find that, without the assistance of ground-based measurements, individual supernova redshifts would need to be determined to about 0.002 or better, which is a challenging but feasible requirement for a low-resolution spectrograph. However, we find that accurate redshifts for z < 0.1 supernovae, obtained with ground-based experiments, are sufficient to immunize the results against even relatively large redshift errors at high z. For the future cluster number-count surveys such as the South Pole Telescope, Planck or DUET, we find that the purely statistical error in photometric redshift is less important, and that the irreducible, systematic bias in redshift drives the requirements. The redshift bias will have to be kept below 0.001-0.005 per redshift bin (which is determined by the filter set), depending on the sky coverage and details of the definition of the minimal mass of the survey. Furthermore, we find that X-ray surveys have a more stringent required redshift accuracy than Sunyaev-Zeldovich (SZ) effect surveys since they use a shorter lever arm in redshift; conversely, SZ surveys benefit from their high redshift reach only so long as some redshift information is available for distant (z ≳ 1) clusters
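
    The requirement on individual supernova redshifts can be understood by propagating σ_z into the distance modulus, σ_μ ≈ |dμ/dz| σ_z. A generic flat-ΛCDM sketch (Ωm = 0.3 and H0 = 70 km/s/Mpc are assumed here; this is not the SNAP analysis pipeline):

        import numpy as np

        # mu = 5*log10(d_L / 10 pc) for flat LambdaCDM.
        c, H0, Om = 299792.458, 70.0, 0.3  # km/s, km/s/Mpc

        def d_L(z, n=2048):
            zs = np.linspace(0.0, z, n)
            Ez = np.sqrt(Om * (1 + zs) ** 3 + (1 - Om))
            return (1 + z) * c / H0 * np.trapz(1.0 / Ez, zs)  # Mpc

        def mu(z):
            return 5.0 * np.log10(d_L(z) * 1e6 / 10.0)

        z, sigma_z = 1.0, 0.002
        dmu_dz = (mu(z + 1e-4) - mu(z - 1e-4)) / 2e-4  # numerical derivative
        print(f"sigma_mu ≈ {abs(dmu_dz) * sigma_z:.4f} mag")  # ~0.005 mag at z = 1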

  11. Kinetic modelling for zinc (II) ions biosorption onto Luffa cylindrica

    International Nuclear Information System (INIS)

    Oboh, I.; Aluyor, E.; Audu, T.

    2015-01-01

    The biosorption of Zinc (II) ions onto a biomaterial, Luffa cylindrica, has been studied. This biomaterial was characterized by elemental analysis, surface area, pore size distribution, and scanning electron microscopy, and the biomaterial before and after sorption was characterized by Fourier Transform Infrared (FTIR) spectrometry. The nonlinear kinetic models fitted were pseudo-first order, pseudo-second order, and intra-particle diffusion. A comparison of non-linear regression methods in selecting the kinetic model was made. Four error functions, namely the coefficient of determination (R²), hybrid fractional error function (HYBRID), average relative error (ARE), and sum of the errors squared (ERRSQ), were used to predict the parameters of the kinetic models. The strength of this study is that a biomaterial with wide distribution, particularly in the tropical world, and which occurs as waste material, could be put into effective utilization as a biosorbent to address a crucial environmental problem.
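
    As an illustration of fitting one of the nonlinear kinetic models above and scoring it with two of the named error functions, the sketch below fits the pseudo-second-order form q_t = q_e² k t / (1 + q_e k t) to invented uptake data (not the paper's measurements):

        import numpy as np
        from scipy.optimize import curve_fit

        def pseudo_second_order(t, qe, k):
            """q_t = qe^2 * k * t / (1 + qe * k * t)"""
            return qe**2 * k * t / (1.0 + qe * k * t)

        # Illustrative uptake data (mg/g vs. minutes):
        t = np.array([5, 10, 20, 40, 60, 90, 120.0])
        q = np.array([8.1, 12.9, 18.0, 22.4, 24.1, 25.3, 25.9])

        (qe, k), _ = curve_fit(pseudo_second_order, t, q, p0=(q.max(), 0.01))
        q_hat = pseudo_second_order(t, qe, k)

        errsq = np.sum((q - q_hat) ** 2)                         # sum of squared errors
        are = 100.0 / len(q) * np.sum(np.abs((q - q_hat) / q))   # average relative error, %
        print(f"qe = {qe:.1f} mg/g, k = {k:.4f}, ERRSQ = {errsq:.2f}, ARE = {are:.1f}%")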

  12. Quantifying spatial distribution of snow depth errors from LiDAR using Random Forests

    Science.gov (United States)

    Tinkham, W.; Smith, A. M.; Marshall, H.; Link, T. E.; Falkowski, M. J.; Winstral, A. H.

    2013-12-01

    There is an increasing need to characterize the distribution of snow in complex terrain using remote sensing approaches, especially in isolated mountainous regions that are often water-limited, the principal source of terrestrial freshwater, and sensitive to climatic shifts and variations. We apply intensive topographic surveys, multi-temporal LiDAR, and Random Forest modeling to quantify snow volume and characterize associated errors across seven land cover types in a semi-arid mountainous catchment at 1 and 4 m spatial resolutions. The LiDAR-based estimates of both snow-off surface topology and snow depths were validated against ground-based measurements across the catchment. Comparison of LiDAR-derived snow depths to manual snow depth surveys revealed that LiDAR-based estimates were more accurate in areas of low-lying vegetation such as shrubs (RMSE = 0.14 m) than in areas of tree cover (RMSE = 0.20-0.35 m). The highest errors were found along the edge of conifer forests (RMSE = 0.35 m); however, a second conifer transect outside the catchment had much lower errors (RMSE = 0.21 m). This difference is attributed to the wind exposure of the first site, which led to highly variable snow depths over short spatial distances. The Random Forest modeled errors deviated from the field-measured errors with an RMSE of 0.09-0.34 m across the different cover types. Results show that snow drifts, which are important for maintaining spring and summer stream flows and establishing and sustaining water-limited plant species, contained 30 ± 5-6% of the snow volume while only occupying 10% of the catchment area, similar to findings by prior physically-based modeling approaches. This study demonstrates the potential utility of combining multi-temporal LiDAR with Random Forest modeling to quantify the distribution of snow depth with a reasonable degree of accuracy. Future work could explore the utility of Terrestrial LiDAR Scanners to produce validation of snow-on surface
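
    A minimal sketch of the Random Forest error-modeling step: regress LiDAR snow-depth error on terrain and canopy covariates. The covariates, their effects, and all data below are synthetic assumptions, not the study's inputs.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(42)
        n = 500

        # Hypothetical covariates: canopy height (m), slope (deg), wind-exposure index.
        X = np.column_stack([rng.uniform(0, 20, n),
                             rng.uniform(0, 45, n),
                             rng.uniform(0, 1, n)])
        # Synthetic snow-depth error (m): grows under canopy and with wind exposure.
        y = 0.01 * X[:, 0] + 0.25 * X[:, 2] + rng.normal(0.0, 0.05, n)

        rf = RandomForestRegressor(n_estimators=200, oob_score=True, random_state=0)
        rf.fit(X, y)
        print(rf.oob_score_)            # out-of-bag R^2 of the error model
        print(rf.feature_importances_)  # which covariates drive the error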

  13. Adding the s-Process Element Cerium to the APOGEE Survey: Identification and Characterization of Ce II Lines in the H-band Spectral Window

    Science.gov (United States)

    Cunha, Katia; Smith, Verne V.; Hasselquist, Sten; Souto, Diogo; Shetrone, Matthew D.; Allende Prieto, Carlos; Bizyaev, Dmitry; Frinchaboy, Peter; García-Hernández, D. Anibal; Holtzman, Jon; Johnson, Jennifer A.; Jőnsson, Henrik; Majewski, Steven R.; Mészáros, Szabolcs; Nidever, David; Pinsonneault, Mark; Schiavon, Ricardo P.; Sobeck, Jennifer; Skrutskie, Michael F.; Zamora, Olga; Zasowski, Gail; Fernández-Trincado, J. G.

    2017-08-01

    Nine Ce II lines have been identified and characterized within the spectral window observed by the Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey (between λ1.51 and 1.69 μm). At solar metallicities, cerium is an element that is produced predominantly as a result of the slow capture of neutrons (the s-process) during asymptotic giant branch stellar evolution. The Ce II lines were identified using a combination of a high-resolution (R = λ/δλ = 100,000) Fourier Transform Spectrometer (FTS) spectrum of α Boo and an APOGEE spectrum (R = 22,400) of a metal-poor, but s-process enriched, red giant (2M16011638-1201525). Laboratory oscillator strengths are not available for these lines. Astrophysical gf-values were derived using α Boo as a standard star, with the absolute cerium abundance in α Boo set by using optical Ce II lines that have precise published laboratory gf-values. The near-infrared Ce II lines identified here are also analyzed, as consistency checks, in a small number of bright red giants using archival FTS spectra, as well as a small sample of APOGEE red giants, including two members of the open cluster NGC 6819, two field stars, and seven metal-poor N- and Al-rich stars. The conclusion is that this set of Ce II lines can be detected and analyzed in a large fraction of the APOGEE red giant sample and will be useful for probing chemical evolution of the s-process products in various populations of the Milky Way.

  14. A Joint Sea Beam/SeaMARC II Survey of the East Pacific Rise and Its Flanks 7 deg 50 min-10 deg 30 min N, to Establish a Geologic Acoustic Natural Laboratory

    Science.gov (United States)

    1991-01-15

    of Oceanography, University of Rhode Island, Narragansett, R.I. 02882; A. Shor and C. Nishimura, Hawaii Institute of Geophysics, University of Hawaii. ... across the Clipperton and the absence of intra-transform spreading, and opening across the Siqueiros with sustained intra-transform spreading. ... Ma. Future work will focus on the significant task of combining this survey with three 1987 SeaMARC II surveys of the Clipperton transform, the 9°N

  15. Precious metals in SDSS quasar spectra. II. Tracking the evolution of strong, 0.4 < z < 2.3 Mg II absorbers with thousands of systems

    International Nuclear Information System (INIS)

    Seyffert, Eduardo N.; Simcoe, Robert A.; Cooksey, Kathy L.; O'Meara, John M.; Kao, Melodie M.; Prochaska, J. Xavier

    2013-01-01

    We have performed an analysis of over 34,000 Mg II doublets at 0.36 < z < 2.29 in Sloan Digital Sky Survey (SDSS) Data Release 7 quasar spectra; the catalog, advanced data products, and tools for analysis are publicly available. The catalog was divided into 14 small redshift bins with roughly 2500 doublets in each and, from Monte Carlo simulations, we estimate 50% completeness at rest equivalent width W_r ≈ 0.8 Å. The equivalent width frequency distribution is described well by an exponential model at all redshifts, and the distribution becomes flatter with increasing redshift, i.e., there are more strong systems relative to weak ones. Direct comparison with previous SDSS Mg II surveys reveals that we recover at least 70% of the doublets in these other catalogs, in addition to detecting thousands of new systems. We discuss how these surveys came by their different results, which qualitatively agree but, because of the very small uncertainties, differ by a statistically significant amount. The estimated physical cross section of Mg II-absorbing galaxy halos increased approximately threefold from z = 0.4 to z = 2.3, while the W_r ≥ 1 Å absorber line density, dN_MgII/dX, grew by roughly 45%. Finally, we explore the different evolution of various absorber populations (damped Lyα absorbers, Lyman limit systems, strong C IV absorbers, and strong and weaker Mg II systems) across cosmic time (0 < z < 6).
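
    For the exponential equivalent-width model n(W) ∝ exp(-W/W*) named above, the maximum-likelihood scale for doublets above a completeness cut W_min is simply the mean excess over the cut. A sketch on mock doublets (all numbers invented):

        import numpy as np

        rng = np.random.default_rng(7)
        W_min = 0.8                                        # completeness cut (Å)
        W = W_min + rng.exponential(scale=0.7, size=2500)  # mock W_r sample, true W* = 0.7

        W_star = np.mean(W - W_min)  # MLE of the exponential scale above the cut
        print(f"W* = {W_star:.2f} Å")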

  16. Quantifying geocode location error using GIS methods

    Directory of Open Access Journals (Sweden)

    Gardner Bennett R

    2007-04-01

    Background: The Metropolitan Atlanta Congenital Defects Program (MACDP) collects maternal address information at the time of delivery for infants and fetuses with birth defects. These addresses have been geocoded by two independent agencies: (1) the Georgia Division of Public Health Office of Health Information and Policy (OHIP) and (2) a commercial vendor. Geographic information system (GIS) methods were used to quantify uncertainty in the two sets of geocodes using orthoimagery and tax parcel datasets. Methods: We sampled 599 infants and fetuses with birth defects delivered during 1994-2002 with maternal residence in either Fulton or Gwinnett County. Tax parcel datasets were obtained from the tax assessor's offices of Fulton and Gwinnett County. High-resolution orthoimagery for these counties was acquired from the U.S. Geological Survey. For each of the 599 addresses we attempted to locate the tax parcel corresponding to the maternal address. If the tax parcel was identified, the distance and the angle between the geocode and the residence were calculated. We used simulated data to characterize the impact of geocode location error. In each county 5,000 geocodes were generated and assigned their corresponding Census 2000 tract. Each geocode was then displaced at a random angle by a random distance drawn from the distribution of observed geocode location errors. The census tract of the displaced geocode was determined. We repeated this process 5,000 times and report the percentage of geocodes that resolved into incorrect census tracts. Results: Median location error was less than 100 meters for both OHIP and commercial vendor geocodes; the distribution of angles appeared uniform. Median location error was approximately 35% larger in Gwinnett (a suburban county) relative to Fulton (a county with urban and suburban areas). Location error occasionally caused the simulated geocodes to be displaced into incorrect census tracts; the median percentage
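
    The Monte Carlo displacement procedure described in the Methods can be sketched with a toy square-tract geography in place of real census tracts; the error-distance distribution below is an assumed lognormal, not the study's empirical distribution.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 5000
        tract = 1000.0  # toy square "census tracts", 1 km on a side

        xy = rng.uniform(0.0, 10 * tract, size=(n, 2))  # original geocodes
        dist = rng.lognormal(np.log(80.0), 0.8, n)      # error distances, median ~80 m
        theta = rng.uniform(0.0, 2 * np.pi, n)          # uniform displacement angles
        moved = xy + np.column_stack([dist * np.cos(theta), dist * np.sin(theta)])

        changed = np.any(np.floor(xy / tract) != np.floor(moved / tract), axis=1)
        print(f"{100 * changed.mean():.1f}% of displaced geocodes resolve to a different tract")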

  17. Laboratory errors and patient safety.

    Science.gov (United States)

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Programs designed to identify and reduce laboratory errors, together with specific strategies to minimize them, are therefore required to improve patient safety. The purpose of this paper is to identify some of the laboratory errors commonly encountered in our laboratory practice, their hazards to patient health care, and some measures and recommendations to minimize or eliminate them. Laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of a private hospital in Egypt. Errors were classified according to the laboratory phase and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed a total of 14 errors (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent of total errors, respectively), while errors in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before test reports were submitted to the patients. The remaining test errors, which were submitted to patients and reached the physician, represented 14.3 percent of total errors; only 7.1 percent of the errors could have had an impact on patient diagnosis. The findings of this study are concordant with those published from the USA and other countries, which shows that laboratory problems are universal and call for general standardization and benchmarking measures. The work is also original in being among the first such data published from Arabic countries.

  18. Reducing visual deficits caused by refractive errors in school and preschool children: results of a pilot school program in the Andean region of Apurimac, Peru

    Science.gov (United States)

    Latorre-Arteaga, Sergio; Gil-González, Diana; Enciso, Olga; Phelan, Aoife; García-Muñoz, Ángel; Kohler, Johannes

    2014-01-01

    Background Refractive error is defined as the inability of the eye to bring parallel rays of light into focus on the retina, resulting in nearsightedness (myopia), farsightedness (hyperopia) or astigmatism. Uncorrected refractive error in children is associated with increased morbidity and reduced educational opportunities. Vision screening (VS) is a method for identifying children with visual impairment or eye conditions likely to lead to visual impairment. Objective To analyze the utility of vision screening conducted by teachers and to contribute to a better estimation of the prevalence of childhood refractive errors in Apurimac, Peru. Design A pilot vision screening program in preschool (Group I) and elementary school children (Group II) was conducted with the participation of 26 trained teachers. Children whose visual acuity was < 6/9 [20/30] (Group I) and ≤ 6/9 (Group II) in one or both eyes, measured with the Snellen Tumbling E chart at 6 m, were referred for a comprehensive eye exam. Specificity and positive predictive value to detect refractive error were calculated against clinical examination. Program assessment with participants was conducted to evaluate outcomes and procedures. Results A total of 364 children aged 3–11 years were screened; 45 children were examined at Centro Oftalmológico Monseñor Enrique Pelach (COMEP) Eye Hospital. Prevalence of refractive error was 6.2% (Group I) and 6.9% (Group II); specificity of teacher vision screening was 95.8% and 93.0%, while positive predictive value was 59.1% and 47.8% for each group, respectively. Aspects highlighted to improve the program included extending training, increasing parental involvement, and helping referred children to attend the hospital. Conclusion Prevalence of refractive error in children is significant in the region. Vision screening performed by trained teachers is a valid intervention for early detection of refractive error, including screening of preschool children. Program

  19. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

    Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  20. Reduction in pediatric identification band errors: a quality collaborative.

    Science.gov (United States)

    Phillips, Shannon Connor; Saysana, Michele; Worley, Sarah; Hain, Paul D

    2012-06-01

    Accurate and consistent placement of a patient identification (ID) band is used in health care to reduce errors associated with patient misidentification. Multiple safety organizations have devoted time and energy to improving patient ID, but no multicenter improvement collaboratives have shown scalability of previously successful interventions. We hoped to reduce by half the pediatric patient ID band error rate, defined as an absent, illegible, or inaccurate ID band, across a quality improvement learning collaborative of hospitals in 1 year. On the basis of a previously successful single-site intervention, we conducted a self-selected 6-site collaborative to reduce ID band errors in heterogeneous pediatric hospital settings. The collaborative had 3 phases: preparatory work and employee survey of current practice and barriers, data collection (ID band failure rate), and intervention driven by data and collaborative learning to accelerate change. The collaborative audited 11,377 patients for ID band errors between September 2009 and September 2010. The ID band failure rate decreased from 17% to 4.1% (77% relative reduction). Interventions including education of frontline staff regarding correct ID bands as a safety strategy; a change to softer ID bands, including "luggage tag" type ID bands for some patients; and partnering with families and patients through education were applied at all institutions. Over 13 months, a collaborative of pediatric institutions significantly reduced the ID band failure rate. This quality improvement learning collaborative demonstrates that safety improvements tested in a single institution can be disseminated to improve quality of care across large populations of children.

  1. A prospective three-step intervention study to prevent medication errors in drug handling in paediatric care.

    Science.gov (United States)

    Niemann, Dorothee; Bertsche, Astrid; Meyrath, David; Koepf, Ellen D; Traiser, Carolin; Seebald, Katja; Schmitt, Claus P; Hoffmann, Georg F; Haefeli, Walter E; Bertsche, Thilo

    2015-01-01

    To prevent medication errors in drug handling in a paediatric ward. One in five preventable adverse drug events in hospitalised children is caused by medication errors. Errors in drug prescription have been studied frequently, but data regarding drug handling, including drug preparation and administration, are scarce. A three-step intervention study including a monitoring procedure was used to detect and prevent medication errors in drug handling. After approval by the ethics committee, pharmacists monitored drug handling by nurses on an 18-bed paediatric ward in a university hospital prior to and following each intervention step. They also conducted a questionnaire survey aimed at identifying knowledge deficits. Each intervention step targeted different causes of errors: the handout mainly addressed knowledge deficits, the training course addressed errors caused by rule violations and slips, and the reference book addressed knowledge-, memory- and rule-based errors. The number of patients who were subjected to at least one medication error in drug handling decreased from 38/43 (88%) to 25/51 (49%) following the third intervention, and the overall frequency of errors decreased from 527 errors in 581 processes (91%) to 116/441 (26%). Issuing the handout reduced medication errors caused by knowledge deficits regarding, for instance, the correct 'volume of solvent for IV drugs' from 49% to 25%. Paediatric drug handling is prone to errors. A three-step intervention effectively decreased the high frequency of medication errors by addressing the diversity of their causes. Worldwide, nurses are in charge of drug handling, which constitutes an error-prone but often-neglected step in drug therapy. Detection and prevention of errors in daily routine is necessary for a safe and effective drug therapy. Our three-step intervention reduced errors and is suitable to be tested in other wards and settings. © 2014 John Wiley & Sons Ltd.

  2. SEDS: THE SPITZER EXTENDED DEEP SURVEY. SURVEY DESIGN, PHOTOMETRY, AND DEEP IRAC SOURCE COUNTS

    International Nuclear Information System (INIS)

    Ashby, M. L. N.; Willner, S. P.; Fazio, G. G.; Huang, J.-S.; Hernquist, L.; Hora, J. L.; Arendt, R.; Barmby, P.; Barro, G.; Faber, S.; Guhathakurta, P.; Bell, E. F.; Bouwens, R.; Cattaneo, A.; Croton, D.; Davé, R.; Dunlop, J. S.; Egami, E.; Finlator, K.; Grogin, N. A.

    2013-01-01

    The Spitzer Extended Deep Survey (SEDS) is a very deep infrared survey within five well-known extragalactic science fields: the UKIDSS Ultra-Deep Survey, the Extended Chandra Deep Field South, COSMOS, the Hubble Deep Field North, and the Extended Groth Strip. SEDS covers a total area of 1.46 deg² to a depth of 26 AB mag (3σ) in both of the warm Infrared Array Camera (IRAC) bands at 3.6 and 4.5 μm. Because of its uniform depth of coverage in so many widely-separated fields, SEDS is subject to roughly 25% smaller errors due to cosmic variance than a single-field survey of the same size. SEDS was designed to detect and characterize galaxies from intermediate to high redshifts (z = 2-7) with a built-in means of assessing the impact of cosmic variance on the individual fields. Because the full SEDS depth was accumulated in at least three separate visits to each field, typically with six-month intervals between visits, SEDS also furnishes an opportunity to assess the infrared variability of faint objects. This paper describes the SEDS survey design, processing, and publicly-available data products. Deep IRAC counts for the more than 300,000 galaxies detected by SEDS are consistent with models based on known galaxy populations. Discrete IRAC sources contribute 5.6 ± 1.0 and 4.4 ± 0.8 nW m⁻² sr⁻¹ at 3.6 and 4.5 μm to the diffuse cosmic infrared background (CIB). IRAC sources cannot contribute more than half of the total CIB flux estimated from DIRBE data. Barring an unexpected error in the DIRBE flux estimates, half the CIB flux must therefore come from a diffuse component.

  3. Errors in abdominal computed tomography

    International Nuclear Information System (INIS)

    Stephens, S.; Marting, I.; Dixon, A.K.

    1989-01-01

    Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex: in 7 patients an abnormal structure was noted but interpreted as normal, whereas in 4 a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion had been ascribed (24 patients), and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small as to large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab

  4. The Ninth Data Release of the Sloan Digital Sky Survey: First Spectroscopic Data from the SDSS-III Baryon Oscillation Spectroscopic Survey

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Christopher P.; Alexandroff, Rachael; Allende Prieto, Carlos; Anderson, Scott F.; Anderton, Timothy; Andrews, Brett H.; Aubourg, Éric; Bailey, Stephen; Balbinot, Eduardo; Barnes, Rory; Bautista, Julian; Beers, Timothy C.; Beifiori, Alessandra; Berlind, Andreas A.; Bhardwaj, Vaishali; Bizyaev, Dmitry; Blake, Cullen H.; Blanton, Michael R.; Blomqvist, Michael; Bochanski, John J.; Bolton, Adam S.; Borde, Arnaud; Bovy, Jo; Brandt, W. N.; Brinkmann, J.; Brown, Peter J.; Brownstein, Joel R.; Bundy, Kevin; Busca, N. G.; Carithers, William; Carnero, Aurelio R.; Carr, Michael A.; Casetti-Dinescu, Dana I.; Chen, Yanmei; Chiappini, Cristina; Comparat, Johan; Connolly, Natalia; Crepp, Justin R.; Cristiani, Stefano; Croft, Rupert A. C.; Cuesta, Antonio J.; da Costa, Luiz N.; Davenport, James R. A.; Dawson, Kyle S.; de Putter, Roland; De Lee, Nathan; Delubac, Timothée; Dhital, Saurav; Ealet, Anne; Ebelke, Garrett L.; Edmondson, Edward M.; Eisenstein, Daniel J.; Escoffier, S.; Esposito, Massimiliano; Evans, Michael L.; Fan, Xiaohui; Femenía Castellá, Bruno; Fernández Alvar, Emma; Ferreira, Leticia D.; Filiz Ak, N.; Finley, Hayley; Fleming, Scott W.; Font-Ribera, Andreu; Frinchaboy, Peter M.; García-Hernández, D. A.; Pérez, A. E. García; Ge, Jian; Génova-Santos, R.; Gillespie, Bruce A.; Girardi, Léo; González Hernández, Jonay I.; Grebel, Eva K.; Gunn, James E.; Guo, Hong; Haggard, Daryl; Hamilton, Jean-Christophe; Harris, David W.; Hawley, Suzanne L.; Hearty, Frederick R.; Ho, Shirley; Hogg, David W.; Holtzman, Jon A.; Honscheid, Klaus; Huehnerhoff, J.; Ivans, Inese I.; Ivezić, Željko; Jacobson, Heather R.; Jiang, Linhua; Johansson, Jonas; Johnson, Jennifer A.; Kauffmann, Guinevere; Kirkby, David; Kirkpatrick, Jessica A.; Klaene, Mark A.; Knapp, Gillian R.; Kneib, Jean-Paul; Le Goff, Jean-Marc; Leauthaud, Alexie; Lee, Khee-Gan; Lee, Young Sun; Long, Daniel C.; Loomis, Craig P.; Lucatello, Sara; Lundgren, Britt; Lupton, Robert H.; Ma, Bo; Ma, Zhibo; MacDonald, Nicholas; Mack, Claude E.; Mahadevan, Suvrath; Maia, Marcio A. G.; Majewski, Steven R.; Makler, Martin; Malanushenko, Elena; Malanushenko, Viktor; Manchado, A.; Mandelbaum, Rachel; Manera, Marc; Maraston, Claudia; Margala, Daniel; Martell, Sarah L.; McBride, Cameron K.; McGreer, Ian D.; McMahon, Richard G.; Ménard, Brice; Meszaros, Sz.; Miralda-Escudé, Jordi; Montero-Dorta, Antonio D.; Montesano, Francesco; Morrison, Heather L.; Muna, Demitri; Munn, Jeffrey A.; Murayama, Hitoshi; Myers, Adam D.; Neto, A. F.; Nguyen, Duy Cuong; Nichol, Robert C.; Nidever, David L.; Noterdaeme, Pasquier; Nuza, Sebastián E.; Ogando, Ricardo L. C.; Olmstead, Matthew D.; Oravetz, Daniel J.; Owen, Russell; Padmanabhan, Nikhil; Palanque-Delabrouille, Nathalie; Pan, Kaike; Parejko, John K.; Parihar, Prachi; Pâris, Isabelle; Pattarakijwanich, Petchara; Pepper, Joshua; Percival, Will J.; Pérez-Fournon, Ismael; Pérez-Ràfols, Ignasi; Petitjean, Patrick; Pforr, Janine; Pieri, Matthew M.; Pinsonneault, Marc H.; Porto de Mello, G. F.; Prada, Francisco; Price-Whelan, Adrian M.; Raddick, M. Jordan; Rebolo, Rafael; Rich, James; Richards, Gordon T.; Robin, Annie C.; Rocha-Pinto, Helio J.; Rockosi, Constance M.; Roe, Natalie A.; Ross, Ashley J.; Ross, Nicholas P.; Rossi, Graziano; Rubiño-Martin, J. A.; Samushia, Lado; Sanchez Almeida, J.; Sánchez, Ariel G.; Santiago, Basílio; Sayres, Conor; Schlegel, David J.; Schlesinger, Katharine J.; Schmidt, Sarah J.; Schneider, Donald P.; Schultheis, Mathias; Schwope, Axel D.; Scóccola, C. 
G.; Seljak, Uros; Sheldon, Erin; Shen, Yue; Shu, Yiping; Simmerer, Jennifer; Simmons, Audrey E.; Skibba, Ramin A.; Skrutskie, M. F.; Slosar, A.; Sobreira, Flavia; Sobeck, Jennifer S.; Stassun, Keivan G.; Steele, Oliver; Steinmetz, Matthias; Strauss, Michael A.; Streblyanska, Alina; Suzuki, Nao; Swanson, Molly E. C.; Tal, Tomer; Thakar, Aniruddha R.; Thomas, Daniel; Thompson, Benjamin A.; Tinker, Jeremy L.; Tojeiro, Rita; Tremonti, Christy A.; Vargas Magaña, M.; Verde, Licia; Viel, Matteo; Vikas, Shailendra K.; Vogt, Nicole P.; Wake, David A.; Wang, Ji; Weaver, Benjamin A.; Weinberg, David H.; Weiner, Benjamin J.; West, Andrew A.; White, Martin; Wilson, John C.; Wisniewski, John P.; Wood-Vasey, W. M.; Yanny, Brian; Yèche, Christophe; York, Donald G.; Zamora, O.; Zasowski, Gail; Zehavi, Idit; Zhao, Gong-Bo; Zheng, Zheng; Zhu, Guangtun; Zinn, Joel C.

    2012-11-19

    The Sloan Digital Sky Survey III (SDSS-III) presents the first spectroscopic data from the Baryon Oscillation Spectroscopic Survey (BOSS). This ninth data release (DR9) of the SDSS project includes 535,995 new galaxy spectra (median z=0.52), 102,100 new quasar spectra (median z=2.32), and 90,897 new stellar spectra, along with the data presented in previous data releases. These spectra were obtained with the new BOSS spectrograph and were taken between 2009 December and 2011 July. In addition, the stellar parameters pipeline, which determines radial velocities, surface temperatures, surface gravities, and metallicities of stars, has been updated and refined with improvements in temperature estimates for stars with T_eff<5000 K and in metallicity estimates for stars with [Fe/H]>-0.5. DR9 includes new stellar parameters for all stars presented in DR8, including stars from SDSS-I and II, as well as those observed as part of the SDSS-III Sloan Extension for Galactic Understanding and Exploration-2 (SEGUE-2). The astrometry error introduced in the DR8 imaging catalogs has been corrected in the DR9 data products. The next data release for SDSS-III will be in Summer 2013, which will present the first data from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) along with another year of data from BOSS, followed by the final SDSS-III data release in December 2014.

  5. Sloan Digital Sky Survey Photometric Calibration Revisited

    International Nuclear Information System (INIS)

    Marriner, John

    2012-01-01

    The Sloan Digital Sky Survey calibration is revisited to obtain the most accurate photometric calibration. A small but significant error is found in the flat-fielding of the Photometric telescope used for calibration. Two SDSS star catalogs are compared and the average difference in magnitude as a function of right ascension and declination exhibits small systematic errors in relative calibration. The photometric transformation from the SDSS Photometric Telescope to the 2.5 m telescope is recomputed and compared to synthetic magnitudes computed from measured filter bandpasses.

  6. Sloan Digital Sky Survey Photometric Calibration Revisited

    Energy Technology Data Exchange (ETDEWEB)

    Marriner, John; /Fermilab

    2012-06-29

    The Sloan Digital Sky Survey calibration is revisited to obtain the most accurate photometric calibration. A small but significant error is found in the flat-fielding of the Photometric telescope used for calibration. Two SDSS star catalogs are compared and the average difference in magnitude as a function of right ascension and declination exhibits small systematic errors in relative calibration. The photometric transformation from the SDSS Photometric Telescope to the 2.5 m telescope is recomputed and compared to synthetic magnitudes computed from measured filter bandpasses.

  7. Scaling prediction errors to reward variability benefits error-driven learning in humans.

    Science.gov (United States)

    Diederen, Kelly M J; Schultz, Wolfram

    2015-09-01

    Effective error-driven learning requires individuals to adapt learning to environmental reward variability. The adaptive mechanism may involve decays in learning rate across subsequent trials, as shown previously, and rescaling of reward prediction errors. The present study investigated the influence of prediction error scaling and, in particular, the consequences for learning performance. Participants explicitly predicted reward magnitudes that were drawn from different probability distributions with specific standard deviations. By fitting the data with reinforcement learning models, we found scaling of prediction errors, in addition to the learning rate decay shown previously. Importantly, the prediction error scaling was closely related to learning performance, defined as accuracy in predicting the mean of reward distributions, across individual participants. In addition, participants who scaled prediction errors relative to standard deviation also presented with more similar performance for different standard deviations, indicating that increases in standard deviation did not substantially decrease "adapters'" accuracy in predicting the means of reward distributions. However, exaggerated scaling beyond the standard deviation resulted in impaired performance. Thus efficient adaptation makes learning more robust to changing variability. Copyright © 2015 the American Physiological Society.
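
    A minimal sketch of the adaptive-coding idea, assuming a simple delta rule in which prediction errors are divided by the known reward standard deviation; the parameter names and values are illustrative, not those fitted in the study.

    import numpy as np

    def scaled_delta_rule(rewards, sigma, alpha=0.3):
        """Delta-rule estimate of the mean reward with SD-scaled errors."""
        v, trace = 0.0, []
        for r in rewards:
            pe = (r - v) / sigma      # prediction error rescaled by reward SD
            v += alpha * pe
            trace.append(v)
        return np.array(trace)

    rng = np.random.default_rng(0)
    for sd in (2.0, 10.0):
        rewards = rng.normal(50.0, sd, size=200)
        v = scaled_delta_rule(rewards, sigma=sd)
        # Scaling keeps the final estimates comparably close to the true
        # mean across reward variabilities.
        print(f"SD={sd:>4}: final estimate {v[-1]:.1f} (true mean 50.0)")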

  8. Survey on problems in developing technologies for the global environment issues (Version II); Chikyu kankyo mondai gijutsu kaihatsu kadai chosa. 2

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1990-07-01

    This paper describes a survey on problems in developing technologies to address global environmental issues. Technologies to reduce the generation of environmental problems, and substitute technologies that avoid generating them, are being developed, notably in the Sunshine Project and the Moonlight Project. The Chemical Technology Research Institute considers it its responsibility to contribute, from the field of chemistry, to developing a technological system that matches the Earth's substance circulation mechanisms. The Institute therefore organized working groups that identified problems from their own areas of expertise and extracted study assignments. Following Version I, this Version II has been compiled. The Version II takes up the simulation of global warming mechanisms, the behavior of gases dissolved in the oceans, and the possibility of fixing CO2 in the oceans. With respect to fluorocarbons, it describes the development of substitute substances, their stability, combustion as a destruction technique, and destruction by means of supercritical fluids. Regarding CO2, it introduces technologies to re-use CO2 as a resource by means of membrane separation, storage, and catalytic hydrogenation. The volume also covers CO2 reduction using photochemical and electrochemical reactions, and CO2 reduction and photosynthesis using semiconductors as photocatalysts and electrodes. (NEDO)

  9. A heteroscedastic measurement error model for method comparison data with replicate measurements.

    Science.gov (United States)

    Nawarathna, Lakshika S; Choudhary, Pankaj K

    2015-03-30

    Measurement error models offer a flexible framework for modeling data collected in studies comparing methods of quantitative measurement. These models generally make two simplifying assumptions: (i) the measurements are homoscedastic, and (ii) the unobservable true values of the methods are linearly related. One or both of these assumptions may be violated in practice. In particular, error variabilities of the methods may depend on the magnitude of measurement, or the true values may be nonlinearly related. Data with these features call for a heteroscedastic measurement error model that allows nonlinear relationships in the true values. We present such a model for the case when the measurements are replicated, discuss its fitting, and explain how to evaluate similarity of measurement methods and agreement between them, which are two common goals of data analysis, under this model. Model fitting involves dealing with lack of a closed form for the likelihood function. We consider estimation methods that approximate either the likelihood or the model to yield approximate maximum likelihood estimates. The fitting methods are evaluated in a simulation study. The proposed methodology is used to analyze a cholesterol dataset. Copyright © 2015 John Wiley & Sons, Ltd.
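
    To make the data features concrete, the following sketch simulates replicated method-comparison data in which the error SD grows with the magnitude of the true value. All parameter values and the power-law error model are invented for illustration; they are not taken from the paper or its cholesterol dataset.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 100, 3                          # subjects, replicates per method
    b = rng.uniform(150.0, 300.0, n)       # latent true values

    def measure(true, alpha, beta, delta):
        """Replicates whose error SD scales as a power of the magnitude."""
        loc = alpha + beta * true                    # linear link to truth
        sd = 0.02 * true ** delta                    # heteroscedastic error
        return loc[:, None] + rng.normal(0.0, sd[:, None], (len(true), m))

    y1 = measure(b, alpha=0.0, beta=1.00, delta=1.0)   # method 1
    y2 = measure(b, alpha=4.0, beta=0.98, delta=1.2)   # method 2

    # Within-subject spread rising with the mean is the signature that
    # motivates a heteroscedastic rather than homoscedastic error model.
    for name, y in (("method 1", y1), ("method 2", y2)):
        r = np.corrcoef(y.mean(axis=1), y.std(axis=1))[0, 1]
        print(f"{name}: corr(mean, SD of replicates) = {r:.2f}")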

  10. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only the two metrics measure the characteristics of the probability distributions of modeling errors differently, but also the effects of these characteristics on the overall expected error are different. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
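
    The additive decomposition that holds for SQ error, but not for ABS error, can be checked numerically. The sketch below evaluates a deliberately biased model at a single evaluation point; all numbers are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(2)
    truth, n = 10.0, 100_000
    noise = rng.normal(0.0, 1.0, n)                  # observation instability
    obs = truth + noise                              # noisy observations
    pred = truth + 0.5 + rng.normal(0.0, 0.8, n)     # biased, variable model

    bias = pred.mean() - truth
    variance = pred.var()
    noise_var = noise.var()

    sq = np.mean((obs - pred) ** 2)
    # Under SQ error, every component adds to the expected error.
    print(f"SQ  {sq:.3f} ~ bias^2 + var + noise = "
          f"{bias**2 + variance + noise_var:.3f}")
    # ABS error admits no such additive identity, which is one reason the
    # two metrics can characterize the same model differently.
    print(f"ABS {np.mean(np.abs(obs - pred)):.3f}")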

  11. Abnormal error monitoring in math-anxious individuals: evidence from error-related brain potentials.

    Directory of Open Access Journals (Sweden)

    Macarena Suárez-Pellicioni

    This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN.

  12. Assessing Measurement Error in Medicare Coverage From the National Health Interview Survey

    Science.gov (United States)

    Gindi, Renee; Cohen, Robin A.

    2012-01-01

    Objectives Using linked administrative data, to validate Medicare coverage estimates among adults aged 65 or older from the National Health Interview Survey (NHIS), and to assess the impact of a recently added Medicare probe question on the validity of these estimates. Data sources Linked 2005 NHIS and Master Beneficiary Record and Payment History Update System files from the Social Security Administration (SSA). Study design We compared Medicare coverage reported on NHIS with “benchmark” benefit records from SSA. Principal findings With the addition of the probe question, more reports of coverage were captured, and the agreement between the NHIS-reported coverage and SSA records increased from 88% to 95%. Few additional overreports were observed. Conclusions Increased accuracy of the Medicare coverage status of NHIS participants was achieved with the Medicare probe question. Though some misclassification remains, data users interested in Medicare coverage as an outcome or correlate can use this survey measure with confidence. PMID:24800138

  13. Large-degree asymptotics of rational Painlevé-II functions: noncritical behaviour

    International Nuclear Information System (INIS)

    Buckingham, Robert J; Miller, Peter D

    2014-01-01

    Rational solutions of the inhomogeneous Painlevé-II equation and of a related coupled Painlevé-II system have recently arisen in studies of fluid vortices and of the sine-Gordon equation. For the sine-Gordon application in particular it is of interest to understand the large-degree asymptotic behaviour of the rational Painlevé-II functions. We explicitly compute the leading-order large-degree asymptotics of these two families of rational functions valid in the whole complex plane with the exception of a neighbourhood of a certain piecewise-smooth closed curve. We obtain rigorous error bounds by using the Deift–Zhou nonlinear steepest-descent method for Riemann–Hilbert problems. (paper)

  14. Awareness of technology-induced errors and processes for identifying and preventing such errors.

    Science.gov (United States)

    Bellwood, Paule; Borycki, Elizabeth M; Kushniruk, Andre W

    2015-01-01

    There is a need to determine if organizations working with health information technology are aware of technology-induced errors and how they are addressing and preventing them. The purpose of this study was to: a) determine the degree of technology-induced error awareness in various Canadian healthcare organizations, and b) identify those processes and procedures that are currently in place to help address, manage, and prevent technology-induced errors. We identified a lack of technology-induced error awareness among participants. Participants identified there was a lack of well-defined procedures in place for reporting technology-induced errors, addressing them when they arise, and preventing them.

  15. Utilising identifier error variation in linkage of large administrative data sources

    Directory of Open Access Journals (Sweden)

    Katie Harron

    2017-02-01

    Abstract Background Linkage of administrative data sources often relies on probabilistic methods using a set of common identifiers (e.g. sex, date of birth, postcode). Variation in data quality on an individual or organisational level (e.g. by hospital) can result in clustering of identifier errors, violating the assumption of independence between identifiers required for traditional probabilistic match weight estimation. This potentially introduces selection bias to the resulting linked dataset. We aimed to measure variation in identifier error rates in a large English administrative data source (Hospital Episode Statistics; HES) and to incorporate this information into match weight calculation. Methods We used 30,000 randomly selected HES hospital admissions records of patients aged 0–1, 5–6 and 18–19 years, for 2011/2012, linked via NHS number with data from the Personal Demographic Service (PDS), our gold standard. We calculated identifier error rates for sex, date of birth and postcode and used multi-level logistic regression to investigate associations with individual-level attributes (age, ethnicity, and gender) and organisational variation. We then derived: (i) weights incorporating dependence between identifiers; (ii) attribute-specific weights (varying by age, ethnicity and gender); and (iii) organisation-specific weights (by hospital). Results were compared with traditional match weights using a simulation study. Results Identifier errors (where values disagreed in linked HES-PDS records) or missing values were found in 0.11% of records for sex and date of birth and in 53% of records for postcode. Identifier error rates differed significantly by age, ethnicity and sex (p < 0.0005). Errors were less frequent in males, in 5–6 year olds and 18–19 year olds compared with infants, and were lowest for the Asian ethnic group. A simulation study demonstrated that substantial bias was introduced into estimated readmission rates in the presence
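
    A minimal Fellegi-Sunter-style sketch illustrates how identifier-specific error rates feed match weights. The m-probabilities below reuse the error rates quoted in the abstract (0.11% for sex and date of birth, 53% for postcode); the u-probabilities are illustrative assumptions, and the paper's actual weights additionally vary by attribute and organisation.

    import math

    identifiers = {
        # m = P(agree | true match) = 1 - identifier error rate
        # u = P(agree | non-match), assumed here for illustration
        "sex":      {"m": 1 - 0.0011, "u": 0.5},
        "dob":      {"m": 1 - 0.0011, "u": 1.0 / (365.25 * 80)},
        "postcode": {"m": 1 - 0.53,   "u": 0.0001},
    }

    def match_weight(agreements):
        """Sum of log2 likelihood ratios over the compared identifiers."""
        w = 0.0
        for name, agree in agreements.items():
            m, u = identifiers[name]["m"], identifiers[name]["u"]
            w += math.log2(m / u) if agree else math.log2((1 - m) / (1 - u))
        return w

    # A high postcode error rate means postcode disagreement is only weakly
    # penalized, unlike the near-error-free sex and date-of-birth fields.
    print(match_weight({"sex": True, "dob": True, "postcode": True}))
    print(match_weight({"sex": True, "dob": True, "postcode": False}))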

  16. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    Energy Technology Data Exchange (ETDEWEB)

    Mandelbaum, R.; Rowe, B.; Armstrong, R.; Bard, D.; Bertin, E.; Bosch, J.; Boutigny, D.; Courbin, F.; Dawson, W. A.; Donnarumma, A.; Fenech Conti, I.; Gavazzi, R.; Gentile, M.; Gill, M. S. S.; Hogg, D. W.; Huff, E. M.; Jee, M. J.; Kacprzak, T.; Kilbinger, M.; Kuntzer, T.; Lang, D.; Luo, W.; March, M. C.; Marshall, P. J.; Meyers, J. E.; Miller, L.; Miyatake, H.; Nakajima, R.; Ngole Mboula, F. M.; Nurbaeva, G.; Okura, Y.; Paulin-Henriksson, S.; Rhodes, J.; Schneider, M. D.; Shan, H.; Sheldon, E. S.; Simet, M.; Starck, J. -L.; Sureau, F.; Tewes, M.; Zarb Adami, K.; Zhang, J.; Zuntz, J.

    2015-05-01

    We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  17. Near-Infrared [Fe II] and H2 Study of the Galactic Supernova Remnants

    Science.gov (United States)

    Lee, Yong-Hyun; Koo, Bon-Chul; Lee, Jae-Joon; Jaffe, Daniel T.; Burton, Michael G.; Ryder, Stuart D.

    2018-01-01

    We have searched for near-infrared (NIR) [Fe II] (1.644 μm) and H2 1-0 S(1) (2.122 μm) emission features associated with Galactic supernova remnants (SNRs) using the narrow-band imaging surveys UWIFE / UWISH2 (UKIRT Widefield Infrared Survey for [Fe II] / H2). Both surveys cover about 180 square degrees of the first Galactic quadrant (7° < l < 62°; |b| < 1.5°). Some of the detected SNRs show the “[Fe II]-H2 reversal” phenomenon, i.e., the H2 emission features are detected outside the [Fe II] emission boundary. We carried out high-resolution (R ~ 40,000) NIR H- and K-band spectroscopy of the five SNRs showing the [Fe II]-H2 reversal (G11.2-0.3, KES 73, W44, 3C 396, W49B) using IGRINS (Immersion GRating INfrared Spectrograph). Various ro-vibrational H2 lines have been detected, which are used to derive the kinematic distances to the SNRs and to investigate the origin of the H2 emission. The detected H2 lines show broad line widths (>10 km s⁻¹) and line flux ratios consistent with thermal excitation. We discuss the origin of the extended H2 emission features beyond the [Fe II] emission boundary.

  18. The optimal control of ITU TRIGA Mark II Reactor

    International Nuclear Information System (INIS)

    Can, Burhanettin

    2008-01-01

    In this study, optimal control of the ITU TRIGA Mark-II Reactor is discussed. A new controller has been designed for the reactor, consisting of a main and an auxiliary controller. The former is based on Pontryagin's Maximum Principle and the latter on a PID approach. For the desired power program, a cubic function is chosen. The integral performance index includes the mean square of the error function and the effect of the selected period on the power variation. The YAVCAN2 neutronic-thermal-hydraulic code is used to solve the 11 equations describing the neutronic-thermal-hydraulic behavior of the reactor. For the controller design, a new code, KONTCAN, has been written. Application of the code shows that the controller makes the reactor power follow the desired power program. The overshoot varies between 100 W and 500 W depending on the selected period; there is no undershoot. The controller rapidly increases reactivity, then decreases it, and then increases it again until the effect of temperature feedback is compensated. The error function varies between 0 and 1 kW. (author)

  19. Pajarito Plateau archaeological surveys and excavations. II

    Energy Technology Data Exchange (ETDEWEB)

    Steen, C R

    1982-04-01

    Los Alamos National Laboratory continues its archaeological program of data gathering and salvage excavations. Sites recently added to the archaeological survey are described, as well as the results of five excavations. Among the more interesting and important discoveries are (1) the apparently well-established local use of anhydrous lime, and (2) a late pre-Columbian use of earlier house sites and middens for garden plots. Evidence indicated that the local puebloan population was the result of an expansion of upper Rio Grande peoples, not an influx of migrants.

  20. Solution weighting for the SAND-II Monte Carlo code

    International Nuclear Information System (INIS)

    Oster, C.A.; McElroy, W.N.; Simons, R.L.; Lippincott, E.P.; Odette, G.R.

    1976-01-01

    Modifications to the SAND-II Error Analysis Monte Carlo code to include solution weighting based on input data uncertainties have been made and are discussed, together with background information on the SAND-II algorithm. The new procedure permits input data having smaller uncertainties to have a greater influence on the solution spectrum than data having larger uncertainties. The results of an in-depth study to find a practical procedure, and the first results of its application to three important Interlaboratory LMFBR Reaction Rate (ILRR) program benchmark spectra (CFRMF, ΣΣ, and ²³⁵U fission), are discussed.
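
    The weighting principle can be illustrated with a toy inverse-variance example. This is only a sketch of the general idea, not the SAND-II algorithm itself (which iteratively adjusts a full neutron spectrum); the values are invented.

    import numpy as np

    values = np.array([1.02, 0.95, 1.10])    # normalized input data
    sigmas = np.array([0.02, 0.10, 0.05])    # their reported uncertainties

    weights = 1.0 / sigmas**2                # smaller uncertainty, larger pull
    weights /= weights.sum()
    print(weights.round(3), np.dot(weights, values).round(4))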

  1. Advances in criticality predictions for EBR-II

    International Nuclear Information System (INIS)

    Schaefer, R.W.; Imel, G.R.

    1994-01-01

    Improvements to startup criticality predictions for the EBR-II reactor have been made. More exact calculational models, methods and data are now used, and better procedures for obtaining the experimental data that enter into the prediction are in place. Accuracy improved by more than a factor of two, and the largest estimated critical position (ECP) error observed since the changes is only 18 cents. An experimental method using subcritical counts is also being implemented.

  2. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Science.gov (United States)

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("shift") has been proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, which was considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (Mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g., mRS 1: 6.8%±2.89; statistically significant overall), so the greater information transfer of the full-range "shift" comes with a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
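
    A minimal sketch of the described calculation, assuming an invented mRS outcome distribution and an invented inter-rater confusion matrix: multiplying the two and summing the off-diagonal mass yields the error percentage.

    import numpy as np

    p_true = np.array([0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.10])  # mRS 0-6

    # Row i gives P(recorded grade = j | true grade = i); the off-diagonal
    # probabilities stand in for published inter-rater variability.
    confusion = np.full((7, 7), 0.02)
    np.fill_diagonal(confusion, 0.0)
    np.fill_diagonal(confusion, 1.0 - confusion.sum(axis=1))

    p_joint = p_true[:, None] * confusion      # P(true = i, recorded = j)
    error_rate = 1.0 - np.trace(p_joint)       # probability of any mismatch
    print(f"misclassification rate: {error_rate:.1%}")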

  3. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  4. PEP Laser Surveying System

    International Nuclear Information System (INIS)

    Lauritzen, T.; Sah, R.C.

    1979-03-01

    A Laser Surveying System has been developed to survey the beam elements of the PEP storage ring. This system provides automatic data acquisition and analysis in order to increase survey speed and to minimize operator error. Two special instruments, the Automatic Readout Micrometer and the Small Automatic Micrometer, have been built for measuring the locations of fiducial points on beam elements with respect to the light beam from a laser. These instruments automatically encode offset distances and read them into the memory of an on-line computer. Distances along the beam line are automatically encoded with a third instrument, the Automatic Readout Tape Unit. When measurements of several beam elements have been taken, the on-line computer analyzes the measured data, compares them with desired parameters, and calculates the required adjustments to beam element support stands.

  5. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Objective: To identify errors in the unidosis cart system. Method: For two months, the Pharmacy Service tracked medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Unchecked unidosis carts showed a 0.9% medication error rate (264 errors), versus 0.6% (154) in carts that had been checked beforehand. In unchecked carts, 70.83% of errors arose when the carts were being set up; the rest were due to lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%), or boxes not having been emptied previously (0.76%). The errors found in the wards corresponded to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not administered by nurses (14.09%), medication withdrawn from the unit's own stocks (14.62%), and errors by the pharmacy service (17.56%). Conclusions: Unidosis carts need to be re-checked, and a computerized prescription system is needed to avoid errors in transcription. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to the hospitalization units, the error rate falls to 0.3%.

  6. A method to automate the radiological survey process

    International Nuclear Information System (INIS)

    Berven, B.A.; Blair, M.S.

    1987-01-01

    This document describes USRADS, a hardware/software ranging and data transmission system that provides real-time position data and combines it with other portable instrument measurements. Live display of position data, onsite data reduction, presentation and formatting for reports, and automatic transfer into databases are among the unusual attributes of USRADS. Approximately 25% of any survey-to-survey-report process is dedicated to data recording and formatting, which USRADS eliminates. Cost savings are realized by eliminating manual transcription of instrument readout in the field and clerical formatting of data in the office. Increased data reliability is realized by ensuring complete survey coverage of an area in the field, by eliminating mathematical errors in the conversion of instrument readout to unit concentration, and by eliminating errors associated with transcribing data from the field into report format. USRADS can be adapted to measure other types of pollutants or physical/chemical/geological/biological conditions for which portable instrumentation exists. 2 refs., 2 figs

  7. Regional quality control survey of blood-gas analysis.

    Science.gov (United States)

    Minty, B D; Nunn, J F

    1977-09-01

    We undertook an external quality control survey of blood-gas analysis in 16 laboratories at 13 hospitals. All samples were prepared in the laboratories under investigation by equilibration of blood or serum with gas mixtures of known composition. pH of serum was measured with no significant bias but with an SD of random error of 0.026 pH units, which was almost twice the SD of the reference range (0.015). An acceptable random error (half the SD of the reference range) was not obtained in a longitudinal internal quality control survey, although there were acceptable results for buffer pH in both field and internal surveys. Blood PO2 was measured with no significant bias but with an SD of random error of 1.38 kPa, which reduced to 0.72 kPa on excluding one egregious result. The latter value was just over half of the SD of the reference range (1.2 kPa). PCO2 of blood was also measured without significant bias but with a much smaller SD of random error of 0.28 kPa (excluding one egregious result), which was again just over half the SD of the reference range (0.51 kPa). Measurements of blood PO2 and PCO2 seem generally acceptable in relation to their respective reference ranges, but measurements of pH were unsatisfactory in both internal and external trials.

  8. Throughput of Type II HARQ-OFDM/TDM Using MMSE-FDE in a Multipath Channel

    Directory of Open Access Journals (Sweden)

    Haris Gacanin

    2009-01-01

    In type II hybrid ARQ (HARQ) schemes, the uncoded information bits are transmitted first, while the error-correction parity bits are sent upon request. Consequently, frequency diversity cannot be exploited during the first transmission. In this paper, we present the use of OFDM/TDM with MMSE-FDE and type II HARQ to increase the throughput of OFDM through frequency diversity gain.
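
    The throughput mechanics of type II HARQ can be sketched with a toy two-stage simulation; the decode probabilities below are invented and merely stand in for the channel conditions and MMSE-FDE gains analyzed in the paper.

    import random

    def harq_type2_throughput(p_uncoded=0.7, p_with_parity=0.95,
                              n_packets=100_000, seed=3):
        """Delivered packets per channel use for a two-stage type II HARQ."""
        random.seed(seed)
        slots = delivered = 0
        for _ in range(n_packets):
            slots += 1                           # first, uncoded transmission
            if random.random() < p_uncoded:
                delivered += 1
                continue
            slots += 1                           # retransmit parity on request
            if random.random() < p_with_parity:
                delivered += 1
        return delivered / slots

    print(f"throughput: {harq_type2_throughput():.3f}")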

  9. Statistical Reporting Errors and Collaboration on Statistical Analyses in Psychological Science.

    Science.gov (United States)

    Veldkamp, Coosje L S; Nuijten, Michèle B; Dominguez-Alvarez, Linda; van Assen, Marcel A L M; Wicherts, Jelte M

    2014-01-01

    Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this 'co-piloting' currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting results is quite uncommon among psychologists, while data sharing among co-authors seems reasonably but not completely standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors.
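
    An automated consistency check of the kind described can be sketched by recomputing a p-value from a reported test statistic and its degrees of freedom (here for a two-tailed t test; the reported numbers are invented):

    from scipy import stats

    def check_t_report(t, df, reported_p, tol=0.005, two_tailed=True):
        """Recompute p from t and df and compare with the reported value."""
        p = stats.t.sf(abs(t), df) * (2 if two_tailed else 1)
        return p, abs(p - reported_p) <= tol

    p, ok = check_t_report(t=2.10, df=28, reported_p=0.04)
    print(f"recomputed p = {p:.4f}; consistent with report: {ok}")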

  10. Transparency When Things Go Wrong: Physician Attitudes About Reporting Medical Errors to Patients, Peers, and Institutions.

    Science.gov (United States)

    Bell, Sigall K; White, Andrew A; Yi, Jean C; Yi-Frazier, Joyce P; Gallagher, Thomas H

    2017-12-01

    Transparent communication after medical error includes disclosing the mistake to the patient, discussing the event with colleagues, and reporting to the institution. Little is known about whether attitudes about these transparency practices are related. Understanding these relationships could inform educational and organizational strategies to promote transparency. We analyzed responses of 3038 US and Canadian physicians to a medical error communication survey. We used bivariate correlations, principal components analysis, and linear regression to determine whether and how physician attitudes about transparent communication with patients, peers, and the institution after error were related. Physician attitudes about disclosing errors to patients, peers, and institutions were correlated (all statistically significant). Predictors of attitudes supporting transparent communication with patients and peers/institution included female sex, US (vs Canadian) doctors, academic (vs private) practice, the belief that disclosure decreased the likelihood of litigation, and the belief that system changes occur after error reporting. In addition, younger physicians, surgeons, and those with previous experience disclosing a serious error were more likely to agree with disclosure to patients. In comparison, doctors who believed that disclosure would decrease patient trust were less likely to agree with error disclosure to patients. Previous disclosure education was associated with attitudes supporting greater transparency with peers/institution. Physician attitudes about discussing errors with patients, colleagues, and institutions are related. Several predictors of transparency affect all 3 practices and are potentially modifiable by educational and institutional strategies.

  11. Judgment of line orientation depends on gender, education, and type of error.

    Science.gov (United States)

    Caparelli-Dáquer, Egas M; Oliveira-Souza, Ricardo; Moreira Filho, Pedro F

    2009-02-01

    Visuospatial tasks are particularly proficient at eliciting gender differences during neuropsychological performance. Here we tested the hypothesis that gender and education are related to different types of visuospatial errors on a task of line orientation that allowed the independent scoring of correct responses ("hits", or H) and one type of incorrect responses ("commission errors", or CE). We studied 343 volunteers of roughly comparable ages and with different levels of education. Education and gender were significantly associated with H scores, which were higher in men and in the groups with higher education. In contrast, the differences between men and women on CE depended on education. We concluded that (I) the ability to find the correct responses differs from the ability to avoid the wrong responses amidst an array of possible alternatives, and that (II) education interacts with gender to promote a stable performance on CE earlier in men than in women.

  12. Errors and violations

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between: errors and violations; and active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggest that errors and violations have different psychological origins. The concluding part outlines a resident pathogen view of accident causation, and seeks to identify the various system pathways along which errors and violations may be propagated

  13. Statistically based reevaluation of PISC-II round robin test data

    International Nuclear Information System (INIS)

    Heasler, P.G.; Taylor, T.T.; Doctor, S.R.

    1993-05-01

    This report presents a re-analysis of international PISC-II (Programme for Inspection of Steel Components, Phase 2) round-robin inspection results using formal statistical techniques to account for experimental error. The analysis examines US team performance versus that of the other participants, flaw sizing performance and the errors associated with flaw sizing, factors influencing flaw detection probability, and the performance of all participants with respect to the recently adopted ASME Section XI flaw detection performance demonstration requirements, and it develops conclusions concerning ultrasonic inspection capability. Inspection data were gathered on four heavy-section steel components, comprising two plates and two nozzle configurations.

  14. Heuristic thinking: interdisciplinary perspectives on medical error

    Directory of Open Access Journals (Sweden)

    Annegret F. Hannawa

    2013-12-01

    Switzerland to stimulate such interdisciplinary dialogue. International scholars from eight disciplines and 17 countries attended the congress to discuss interdisciplinary ideas and perspectives for advancing safer care. The team of invited COME experts collaborated in compiling this issue of the Journal of Public Health Research entitled Interdisciplinary perspectives on medical error. This particular issue introduces relevant North American and European theorizing and research on preventable adverse events. The caliber of scientists who have contributed to this issue is humbling. But rather than naming their affiliations and summarizing their individual manuscripts here, it is more important to reflect on the contribution of this special issue as a whole. Particularly, I would like to raise two important take-home messages that the articles yield: (i) What new insights can be derived from the papers collected in this issue? (ii) What are the central challenges implied for future research on medical error?

  15. Aged-care nurses in rural Tasmanian clinical settings more likely to think hypothetical medication error would be reported and disclosed compared to hospital and community nurses.

    Science.gov (United States)

    Carnes, Debra; Kilpatrick, Sue; Iedema, Rick

    2015-12-01

    This study aims to determine the likelihood that rural nurses perceive a hypothetical medication error would be reported in their workplace. It employed a cross-sectional survey using hypothetical error scenarios with varying levels of harm, set in clinical settings in rural Tasmania. A total of 116 eligible surveys were received from registered and enrolled nurses. The outcome was the frequency of responses indicating that a severe, moderate or near-miss (no harm) scenario would 'always' be reported or disclosed. Eighty per cent of nurses considered that a severe error would 'always' be reported, 64.8% a moderate error and 45.7% a near-miss error. With regard to disclosure, 54.7% felt this was 'always' likely to occur for a severe error, 44.8% for a moderate error and 26.4% for a near miss. Across all levels of severity, aged-care nurses were more likely than nurses in other settings to consider an error would 'always' be reported (ranging from 72-96%, P = 0.010 to 0.042) and disclosed (68-88%, P = 0.000). Those in a management role were more likely to consider an error would 'always' be disclosed compared to those in a clinical role (50-77.3%, P = 0.008-0.024). Further research in rural clinical settings is needed to improve the understanding of error management and disclosure. © 2015 The Authors. Australian Journal of Rural Health published by Wiley Publishing Asia Pty Ltd on behalf of National Rural Health Alliance.

  16. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  17. Quality and Reliability of Large-Eddy Simulations II

    CERN Document Server

    Salvetti, Maria Vittoria; Meyers, Johan; Sagaut, Pierre

    2011-01-01

    The second Workshop on "Quality and Reliability of Large-Eddy Simulations", QLES2009, was held at the University of Pisa from September 9 to September 11, 2009. Its predecessor, QLES2007, was organized in 2007 in Leuven (Belgium). The focus of QLES2009 was on issues related to predicting, assessing and assuring the quality of LES. The main goal of QLES2009 was to enhance the knowledge on error sources and on their interaction in LES and to devise criteria for the prediction and optimization of simulation quality, by bringing together mathematicians, physicists and engineers and providing a platform specifically addressing these aspects for LES. Contributions were made by leading experts in the field. The present book contains the written contributions to QLES2009 and is divided into three parts, which reflect the main topics addressed at the workshop: (i) SGS modeling and discretization errors; (ii) Assessment and reduction of computational errors; (iii) Mathematical analysis and foundation for SGS modeling.

  18. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    Directory of Open Access Journals (Sweden)

    Zhongzhou Du

    2015-04-01

    The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer-module path with the minimum error in the hardware system was then proposed through analysis of the variations of the system error caused by the significant error sources as the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and the AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB and other system errors were not considered. The temperature error was also below 0.1 K in experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method.
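
    The 80 dB criterion above is a simple amplitude-ratio check. A minimal sketch of the arithmetic follows, with made-up amplitudes; only the threshold comes from the abstract.

    ```python
    # Signal-to-AC-bias ratio in decibels for two amplitudes of the same unit.
    import math

    def signal_to_ac_bias_db(signal_amplitude: float, ac_bias_amplitude: float) -> float:
        return 20.0 * math.log10(signal_amplitude / ac_bias_amplitude)

    ratio_db = signal_to_ac_bias_db(1.0, 5e-5)  # hypothetical amplitudes
    msg = (f"{ratio_db:.1f} dB > 80 dB: temperature error below 0.1 K expected"
           if ratio_db > 80 else
           f"{ratio_db:.1f} dB <= 80 dB: larger temperature error expected")
    print(msg)
    ```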

  19. Association between drug use and urban violence: Data from the II Brazilian National Alcohol and Drugs Survey (BNADS

    Directory of Open Access Journals (Sweden)

    Renata Rigacci Abdalla

    2018-06-01

    Objective: To investigate the association of alcohol and cocaine use with urban violence (both as victim and as perpetrator) in a representative sample of the Brazilian population. Method: The Second Brazilian Alcohol and Drugs Survey (II BNADS) interviewed 4607 individuals aged 14 years and older from the Brazilian household population, including an oversample of 1157 adolescents (14 to 18 years old). The survey gathered information on alcohol, tobacco and illegal substance use, as well as on risk factors for abuse and dependence, behaviors associated with the use of substances, and possible consequences, including urban violence indicators. Results: Approximately 9.3% of the Brazilian population has been the victim of at least one form of urban violence. This proportion increases to 19.7% among cocaine users and to 18.1% among individuals with alcohol use disorders (AUD). Perpetration of violence was reported by 6.2% of the sample. Cocaine use and AUD almost quadrupled the odds of being an aggressor. Being religious and married decreased the odds of being a victim and/or perpetrator of urban violence. Higher education also decreased the odds of involvement in either victimization or perpetration of violence. Both parallel mediation models considering cocaine use as a predictor of urban violence (victimization or perpetration) were valid, and alcohol consumption and depressive symptoms were mediators of this relationship. Conclusions: This study presents data of particular interest to Brazil, as the country is one of the major consumer markets for cocaine and is also among the most violent countries worldwide. Keywords: Urban violence, Cocaine, Alcohol use disorder, Household survey, Epidemiology

  20. Image sensor for testing refractive error of eyes

    Science.gov (United States)

    Li, Xiangning; Chen, Jiabi; Xu, Longyun

    2000-05-01

    It is difficult to detect ametropia and anisometropia in children. An image sensor for testing the refractive error of eyes does not require the cooperation of children and can be used for general surveys of ametropia and anisometropia in children. In our study, photographs are recorded by a CCD element in digital form that can be directly processed by a computer. In order to process the image accurately by digital techniques, a formula accounting for the effect of an extended light source and the size of the lens aperture has been derived, which is more reliable in practice. A computer simulation of the image sensing was performed to verify the soundness of the results.

  1. Support of protective work of human error in a nuclear power plant

    International Nuclear Information System (INIS)

    Yoshizawa, Yuriko

    1999-01-01

    The nuclear power plant human factors group of the Tokyo Electric Power Co., Ltd. supports various human error prevention activities conducted at the company's nuclear power plants. Its main research themes are studies of human factors in the operation of a nuclear power plant, recovery from error, and common fundamental studies of human factors. On the basis of the information obtained, the group also promotes assistance to the error prevention work conducted at the plants and the development of that work for practical use. In particular, for activities that share information on hazards, several forms of assistance were promoted: a proposal of a case-analysis method for understanding hazard information faithfully rather than superficially, construction of a database for conveniently sharing such information, and a survey of incident-free operations for hints on effective promotion of the prevention work. This paper introduces the assistance and investigation for effective sharing of hazard information across the human error prevention activities conducted at nuclear power plants. (G.K.)

  2. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J. Title: Prioritising interventions against medication errors – the importance of a definition. Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark. Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors, including an index of error types for each stage in the medication process, was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary…

  3. Surveying problem solution with theory and objective type questions

    CERN Document Server

    Chandra, AM

    2005-01-01

    The book provides a lucid and step-by-step treatment of the various principles and methods for solving problems in land surveying. Each chapter starts with basic concepts and definitions, moves to the solution of typical field problems, and ends with objective-type questions. The book explains errors in survey measurements and their propagation. Survey measurements are detailed next. These include horizontal and vertical distance, slope, elevation, angle, and direction. Measurements using stadia tacheometry and EDM are then highlighted, followed by various types of levelling problems. Traversing is then explained, followed by a detailed discussion on adjustment of survey observations and then triangulation and trilateration.
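
    As a concrete instance of the error-propagation material such a text covers, the sketch below applies the standard quadrature rule for the random error of a distance chained from independent segments; the segment standard errors are illustrative assumptions.

    ```python
    # Independent random errors add in quadrature: sigma_total = sqrt(sum sigma_i^2).
    import math

    segment_sigmas_m = [0.010, 0.015, 0.012]   # std. error of each measured segment (m)
    total_sigma = math.sqrt(sum(s ** 2 for s in segment_sigmas_m))
    print(f"Std. error of the chained distance: {total_sigma:.4f} m")
    ```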

  4. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  5. Error-information in tutorial documentation: Supporting users' errors to facilitate initial skill learning

    NARCIS (Netherlands)

    Lazonder, Adrianus W.; van der Meij, Hans

    1995-01-01

    Novice users make many errors when they first try to learn how to work with a computer program like a spreadsheet or wordprocessor. No matter how user-friendly the software or the training manual, errors can and will occur. The current view on errors is that they can be helpful or disruptive,

  6. Relative frequency of Human T-cell Lymphotropic Virus I/II in HIV/AIDS patients

    Directory of Open Access Journals (Sweden)

    Mohsen Meidani

    2014-01-01

    Conclusion: In our survey, the relative frequency of HTLV-I/II was 1.8% in HIV-positive patients. This study reveals that the relative frequency of HTLV-I/II in HIV-positive patients is considerable, but determining the need for screening for HTLV-I/II requires further investigation.

  7. Improved sampling for airborne surveys to estimate wildlife population parameters in the African Savannah

    NARCIS (Netherlands)

    Khaemba, W.; Stein, A.

    2002-01-01

    Parameter estimates, obtained from airborne surveys of wildlife populations, often have large bias and large standard errors. Sampling error is one of the major causes of this imprecision and the occurrence of many animals in herds violates the common assumptions in traditional sampling designs like

  8. The epidemiology and type of medication errors reported to the National Poisons Information Centre of Ireland.

    Science.gov (United States)

    Cassidy, Nicola; Duggan, Edel; Williams, David J P; Tracey, Joseph A

    2011-07-01

    Medication errors are widely reported for hospitalised patients, but limited data are available for medication errors that occur in community-based and clinical settings. Epidemiological data from poisons information centres enable characterisation of trends in medication errors occurring across the healthcare spectrum. The objective of this study was to characterise the epidemiology and type of medication errors reported to the National Poisons Information Centre (NPIC) of Ireland. A 3-year prospective study on medication errors reported to the NPIC was conducted from 1 January 2007 to 31 December 2009 inclusive. Data on patient demographics, enquiry source, location, pharmaceutical agent(s), type of medication error, and treatment advice were collated from standardised call report forms. Medication errors were categorised as (i) prescribing error (i.e. physician error), (ii) dispensing error (i.e. pharmacy error), and (iii) administration error involving the wrong medication, the wrong dose, wrong route, or the wrong time. Medication errors were reported for 2348 individuals, representing 9.56% of total enquiries to the NPIC over 3 years. In total, 1220 children and adolescents under 18 years of age and 1128 adults (≥ 18 years old) experienced a medication error. The majority of enquiries were received from healthcare professionals, but members of the public accounted for 31.3% (n = 736) of enquiries. Most medication errors occurred in a domestic setting (n = 2135), but a small number occurred in healthcare facilities: nursing homes (n = 110, 4.68%), hospitals (n = 53, 2.26%), and general practitioner surgeries (n = 32, 1.36%). In children, medication errors with non-prescription pharmaceuticals predominated (n = 722) and anti-pyretics and non-opioid analgesics, anti-bacterials, and cough and cold preparations were the main pharmaceutical classes involved. Medication errors with prescription medication predominated for adults (n = 866) and the major medication

  9. The epidemiology and type of medication errors reported to the National Poisons Information Centre of Ireland.

    LENUS (Irish Health Repository)

    Cassidy, Nicola

    2012-02-01

    INTRODUCTION: Medication errors are widely reported for hospitalised patients, but limited data are available for medication errors that occur in community-based and clinical settings. Epidemiological data from poisons information centres enable characterisation of trends in medication errors occurring across the healthcare spectrum. AIM: The objective of this study was to characterise the epidemiology and type of medication errors reported to the National Poisons Information Centre (NPIC) of Ireland. METHODS: A 3-year prospective study on medication errors reported to the NPIC was conducted from 1 January 2007 to 31 December 2009 inclusive. Data on patient demographics, enquiry source, location, pharmaceutical agent(s), type of medication error, and treatment advice were collated from standardised call report forms. Medication errors were categorised as (i) prescribing error (i.e. physician error), (ii) dispensing error (i.e. pharmacy error), and (iii) administration error involving the wrong medication, the wrong dose, wrong route, or the wrong time. RESULTS: Medication errors were reported for 2348 individuals, representing 9.56% of total enquiries to the NPIC over 3 years. In total, 1220 children and adolescents under 18 years of age and 1128 adults (≥ 18 years old) experienced a medication error. The majority of enquiries were received from healthcare professionals, but members of the public accounted for 31.3% (n = 736) of enquiries. Most medication errors occurred in a domestic setting (n = 2135), but a small number occurred in healthcare facilities: nursing homes (n = 110, 4.68%), hospitals (n = 53, 2.26%), and general practitioner surgeries (n = 32, 1.36%). In children, medication errors with non-prescription pharmaceuticals predominated (n = 722) and anti-pyretics and non-opioid analgesics, anti-bacterials, and cough and cold preparations were the main pharmaceutical classes involved. Medication errors with prescription medication predominated for

  10. The relationships among work stress, strain and self-reported errors in UK community pharmacy.

    Science.gov (United States)

    Johnson, S J; O'Connor, E M; Jacobs, S; Hassell, K; Ashcroft, D M

    2014-01-01

    Changes in the UK community pharmacy profession including new contractual frameworks, expansion of services, and increasing levels of workload have prompted concerns about rising levels of workplace stress and overload. This has implications for pharmacist health and well-being and the occurrence of errors that pose a risk to patient safety. Despite these concerns being voiced in the profession, few studies have explored work stress in the community pharmacy context. To investigate work-related stress among UK community pharmacists and to explore its relationships with pharmacists' psychological and physical well-being, and the occurrence of self-reported dispensing errors and detection of prescribing errors. A cross-sectional postal survey of a random sample of practicing community pharmacists (n = 903) used ASSET (A Shortened Stress Evaluation Tool) and questions relating to self-reported involvement in errors. Stress data were compared to general working population norms, and regressed on well-being and self-reported errors. Analysis of the data revealed that pharmacists reported significantly higher levels of workplace stressors than the general working population, with concerns about work-life balance, the nature of the job, and work relationships being the most influential on health and well-being. Despite this, pharmacists were not found to report worse health than the general working population. Self-reported error involvement was linked to both high dispensing volume and being troubled by perceived overload (dispensing errors), and resources and communication (detection of prescribing errors). This study contributes to the literature by benchmarking community pharmacists' health and well-being, and investigating sources of stress using a quantitative approach. A further important contribution to the literature is the identification of a quantitative link between high workload and self-reported dispensing errors.

  11. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China).

    Science.gov (United States)

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu

    2017-05-25

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of the errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mGal. A major obstacle in using airborne gravimetry is the error caused by downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error-minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The
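
    The core difficulty the paper addresses, random-error amplification in downward continuation and its suppression by regularization, can be illustrated compactly. The sketch below uses a generic Tikhonov-regularized inverse filter on 1-D synthetic data; it is not the paper's semi-parametric method, and every parameter value is an assumption.

    ```python
    # Downward-continue noisy "airborne" data in the Fourier domain with a
    # Tikhonov-regularized inverse of the upward-continuation operator.
    import numpy as np

    n, dx = 512, 1000.0                               # samples, spacing (m)
    k = np.abs(np.fft.fftfreq(n, d=dx) * 2 * np.pi)   # |angular wavenumber|
    h = 4000.0                                        # continuation distance (m)

    rng = np.random.default_rng(0)
    x = np.arange(n) * dx
    g_surface = 5.0 * np.exp(-((x - x.mean()) / 2e4) ** 2)   # synthetic anomaly (mGal)
    g_air = np.fft.ifft(np.fft.fft(g_surface) * np.exp(-k * h)).real
    g_air += rng.normal(0, 0.5, n)                    # ~0.5 mGal airborne noise

    up = np.exp(-k * h)                               # upward-continuation operator
    alpha = 1e-2                                      # regularization parameter (assumed)
    g_down = np.fft.ifft(np.fft.fft(g_air) * up / (up ** 2 + alpha)).real

    rmse = np.sqrt(np.mean((g_down - g_surface) ** 2))
    print(f"RMSE after regularized downward continuation: {rmse:.2f} mGal")
    ```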

  12. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China)

    Science.gov (United States)

    Zhao, Q.

    2017-12-01

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of the errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mGal. A major obstacle in using airborne gravimetry is the error caused by downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error-minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The

  13. Medication error detection in two major teaching hospitals: What are the types of errors?

    Directory of Open Access Journals (Sweden)

    Fatemeh Saghafi

    2014-01-01

    Background: The increasing number of reports on medication errors and the subsequent damage, especially in medical centers, has become a growing concern for patient safety in recent decades. Patient safety, and in particular medication safety, is a major concern and challenge for health care professionals around the world. Our prospective study was designed to detect prescribing, transcribing, dispensing, and administering medication errors in two major university hospitals. Materials and Methods: After choosing 20 similar hospital wards in two large teaching hospitals in the city of Isfahan, Iran, the sequence was randomly selected. Diagrams for drug distribution were drawn with the help of the pharmacy directors. Direct observation was chosen as the method for detecting the errors. A total of 50 doses were studied in each ward to detect prescribing, transcribing and administering errors. The dispensing error was studied on 1000 doses dispensed in each hospital pharmacy. Results: A total of 8162 doses of medications were studied during the four stages, of which 8000 yielded complete data for analysis. 73% of prescribing orders were incomplete and did not have all six parameters (name, dosage form, dose and measuring unit, administration route, and intervals of administration). We found 15% transcribing errors. On average, one-third of medication administrations were erroneous in both hospitals. Dispensing errors ranged between 1.4% and 2.2%. Conclusion: Although prescribing and administering comprise most of the medication errors, improvements are needed in all four stages. Clear guidelines must be written and executed in both hospitals to reduce the incidence of medication errors.

  14. Accelerator and transport line survey and alignment

    International Nuclear Information System (INIS)

    Ruland, R.E.

    1991-10-01

    This paper summarizes the survey and alignment processes of accelerators and transport lines and discusses the propagation of errors associated with these processes. The major geodetic principles governing the survey and alignment measurement space are introduced and their relationship to a lattice coordinate system shown. The paper continues with a broad overview about the activities involved in the step sequence from initial absolute alignment to final smoothing. Emphasis is given to the relative alignment of components, in particular to the importance of incorporating methods to remove residual systematic effects in surveying and alignment operations. Various approaches to smoothing used at major laboratories are discussed. 47 refs., 19 figs., 1 tab

  15. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns

    Directory of Open Access Journals (Sweden)

    Greg A. Breed

    2015-08-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm) this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches.
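
    The signal-to-noise framing used in this abstract is just the ratio of step length to median positional error. A back-of-envelope sketch using the abstract's own figures (20-30 cm median error) follows.

    ```python
    # Step-length signal-to-noise ratio for a given median GPS error.
    def step_snr(step_length_cm: float, median_error_cm: float = 25.0) -> float:
        return step_length_cm / median_error_cm

    for step in (70, 300):   # smallest ground-truth step, and a 3 m step
        print(f"{step} cm step -> SNR about {step_snr(step):.1f}:1")
    ```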

  16. The Open Cluster Chemical Abundances and Mapping (OCCAM) Survey: Optical Extension for Neutron Capture Elements

    Science.gov (United States)

    Melendez, Matthew; O'Connell, Julia; Frinchaboy, Peter M.; Donor, John; Cunha, Katia M. L.; Shetrone, Matthew D.; Majewski, Steven R.; Zasowski, Gail; Pinsonneault, Marc H.; Roman-Lopes, Alexandre; Stassun, Keivan G.; APOGEE Team

    2017-01-01

    The Open Cluster Chemical Abundance & Mapping (OCCAM) survey is a systematic survey of Galactic open clusters using data primarily from the SDSS-III/APOGEE-1 survey. However, neutron capture elements are very limited in the IR region covered by APOGEE. In an effort to fully study detailed Galactic chemical evolution, we are conducting a high-resolution (R~60,000) spectroscopic abundance analysis of neutron capture elements for OCCAM clusters in the optical regime to complement the APOGEE results. As part of this effort, we present Ba II, La II, Ce II and Eu II results for a few open clusters without previous abundance measurements, using data obtained at McDonald Observatory with the 2.1 m Otto Struve telescope and the Sandiford Echelle Spectrograph. This work is supported by NSF AAG grant AST-1311835.

  17. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Directory of Open Access Journals (Sweden)

    Pitchaiah Mandava

    OBJECTIVE: Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. METHODS: We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. RESULTS: Considering the full mRS range, the error rate was 26.1% ± 5.31 (mean ± SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1: 6.8% ± 2.89; overall p < 0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. CONCLUSION: We show that when uncertainty in assessments is considered, the lowest error rates are obtained with dichotomization. While using the full range of mRS is conceptually appealing, a gain of information is counterbalanced by a decrease in reliability. The resultant errors need to be considered, since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We
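
    The study's central computation, propagating inter-rater misclassification through an ordinal outcome distribution, can be sketched as follows. The confusion matrix below is invented for illustration; the published inter-rater distributions used by the authors would replace it.

    ```python
    # Compare the misclassification error rate of the full ordinal scale
    # ("shift") with a dichotomized cut-point, given P(assigned | true).
    import numpy as np

    p_mrs = np.array([0.10, 0.15, 0.20, 0.20, 0.20, 0.10, 0.05])  # P(true mRS = 0..6)

    confusion = np.zeros((7, 7))          # P(assigned j | true i), assumed values
    for i in range(7):
        confusion[i, i] = 0.8
        if i > 0:
            confusion[i, i - 1] = 0.1
        if i < 6:
            confusion[i, i + 1] = 0.1
    confusion /= confusion.sum(axis=1, keepdims=True)

    # "Shift" error: any assignment that differs from the true score.
    shift_error = sum(p_mrs[i] * (1 - confusion[i, i]) for i in range(7))

    # Dichotomized at mRS <= 1: an error only if the assignment crosses the cut.
    good = lambda s: s <= 1
    dich_error = sum(p_mrs[i] * confusion[i, j]
                     for i in range(7) for j in range(7) if good(i) != good(j))

    print(f"Full-scale error rate: {shift_error:.1%}; dichotomized: {dich_error:.1%}")
    ```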

  18. Social aspects of clinical errors.

    Science.gov (United States)

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording, and policy development to enhance quality of service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed, whilst the major errors resulting in damage and death to patients alarm both professionals and the public, with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review healthcare professionals' strategies for managing such errors.

  19. Passive quantum error correction of linear optics networks through error averaging

    Science.gov (United States)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.

  20. THE ELM SURVEY. II. TWELVE BINARY WHITE DWARF MERGER SYSTEMS

    International Nuclear Information System (INIS)

    Kilic, Mukremin; Brown, Warren R.; Kenyon, S. J.; Prieto, Carlos Allende; Agueeros, M. A.; Heinke, Craig

    2011-01-01

    We describe new radial velocity and X-ray observations of extremely low-mass white dwarfs (ELM WDs, ∼0.2 M_sun) in the Sloan Digital Sky Survey Data Release 4 and the MMT Hypervelocity Star survey. We identify four new short-period binaries, including two merger systems. These observations bring the total number of short-period binary systems identified in our survey to 20. No main-sequence or neutron star companions are visible in the available optical photometry, radio, and X-ray data. Thus, the companions are most likely WDs. Twelve of these systems will merge within a Hubble time due to gravitational wave radiation. We have now tripled the number of known merging WD systems. We discuss the characteristics of this merger sample and potential links to underluminous supernovae, extreme helium stars, AM CVn systems, and other merger products. We provide new observational tests of the WD mass-period distribution and cooling models for ELM WDs. We also find evidence for a new formation channel for single low-mass WDs through binary mergers of two lower mass objects.

  1. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  2. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients. The mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in neonatology in particular. We compared the results of the literature with our specific experience at the Nina Simulation Centre (Pisa, Italy). Results: In neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions producing fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is likely possible to change the conditions under which they work. Voluntary error reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  3. A Survey of Ca II H and K Chromospheric Emission in Southern Solar-Type Stars

    Science.gov (United States)

    Henry, Todd J.; Soderblom, David R.; Donahue, Robert A.; Baliunas, Sallie L.

    1996-01-01

    More than 800 southern stars within 50 pc have been observed for chromospheric emission in the cores of the Ca II H and K lines. Most of the sample targets were chosen to be G dwarfs on the basis of colors and spectral types. The bimodal distribution in stellar activity first noted in a sample of northern stars by Vaughan and Preston in 1980 is confirmed, and the percentage of active stars, about 30%, is remarkably consistent between the northern and southern surveys. This is especially compelling given that we have used an entirely different instrumental setup and stellar sample than the previous study. Comparisons to the Sun, a relatively inactive star, show that most nearby solar-type stars have a similar activity level, and presumably a similar age. We identify two additional subsamples of stars: a very active group and a very inactive group. The very active group may be made up of young stars near the Sun, accounting for only a few percent of the sample, and appears to be less than ~0.1 Gyr old. Included in this high-activity tail of the distribution, however, is a subset of very close binaries of the RS CVn or W UMa types. The remaining members of this population may be undetected close binaries or very young single stars. The very inactive group of stars, contributing ~5%-10% of the total sample, may be those caught in a Maunder Minimum type phase. If the observations of the survey stars are considered to be a sequence of snapshots of the Sun during its life, we might expect that the Sun will spend about 10% of the remainder of its main-sequence life in a Maunder Minimum phase.

  4. PEP-II vacuum system pressure profile modeling using EXCEL

    International Nuclear Information System (INIS)

    Nordby, M.; Perkins, C.

    1994-06-01

    A generic, adaptable Microsoft EXCEL program to simulate molecular flow in beam-line vacuum systems is introduced. Modeling using a finite-element approximation of the governing differential equation is discussed, as well as error estimation and program capabilities. The ease of use and flexibility of the spreadsheet-based program is demonstrated. PEP-II vacuum system models are reviewed and compared with analytical models.
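
    The finite-element idea mentioned above reduces, in one dimension, to solving the steady molecular-flow balance c·d²P/dz² = −q between pumps. The sketch below is a finite-difference analogue in Python rather than EXCEL, with all parameter values invented for illustration.

    ```python
    # Steady-state pressure profile in a beam pipe with uniform outgassing q,
    # specific conductance c, and a lumped pump of speed S at each end.
    import numpy as np

    n, L = 101, 10.0                 # nodes, beam-line length (m)
    z = np.linspace(0, L, n); dz = z[1] - z[0]
    c = 20.0                         # specific conductance (m^4/s), assumed
    q = 1e-8                         # outgassing per metre (Pa*m^3/s/m), assumed
    S = 0.1                          # pump speed at each end (m^3/s), assumed

    A = np.zeros((n, n)); b = np.full(n, -q * dz ** 2 / c)
    for i in range(1, n - 1):        # interior: c*(P[i-1]-2P[i]+P[i+1])/dz^2 = -q
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    # End nodes: half-cell balance, conductive inflow + outgassing = pumped flow.
    A[0, 0], A[0, 1], b[0] = -(c / dz + S), c / dz, -q * dz / 2
    A[-1, -1], A[-1, -2], b[-1] = -(c / dz + S), c / dz, -q * dz / 2

    P = np.linalg.solve(A, b)
    print(f"Peak pressure {P.max():.2e} Pa at z = {z[np.argmax(P)]:.1f} m")
    ```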

  5. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors contribute substantially to the risks of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but the models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research on human error and concludes with suggestions for further work. (orig.)

  6. CDF run II run control and online monitor

    International Nuclear Information System (INIS)

    Arisawa, T.; Ikado, K.; Badgett, W.; Chlebana, F.; Maeshima, K.; McCrory, E.; Meyer, A.; Patrick, J.; Wenzel, H.; Stadie, H.; Wagner, W.; Veramendi, G.

    2001-01-01

    The authors discuss the CDF Run II Run Control and online event monitoring system. Run Control is the top level application that controls the data acquisition activities across 150 front end VME crates and related service processes. Run Control is a real-time multi-threaded application implemented in Java with flexible state machines, using JDBC database connections to configure clients, and including a user friendly and powerful graphical user interface. The CDF online event monitoring system consists of several parts: the event monitoring programs, the display to browse their results, the server program which communicates with the display via socket connections, the error receiver which displays error messages and communicates with Run Control, and the state manager which monitors the state of the monitor programs

  7. Continuous glucose monitoring in newborn infants: how do errors in calibration measurements affect detected hypoglycemia?

    Science.gov (United States)

    Thomas, Felicity; Signal, Mathew; Harris, Deborah L; Weston, Philip J; Harding, Jane E; Shaw, Geoffrey M; Chase, J Geoffrey

    2014-05-01

    Neonatal hypoglycemia is common and can cause serious brain injury. Continuous glucose monitoring (CGM) could improve hypoglycemia detection while reducing the number of blood glucose (BG) measurements. Calibration algorithms use BG measurements to convert sensor signals into CGM data. Thus, inaccuracies in calibration BG measurements directly affect CGM values and any metrics calculated from them. The aim was to quantify the effect of timing delays and calibration BG measurement errors on hypoglycemia metrics in newborn infants. Data from 155 babies were used. Two timing and three BG meter error models (Abbott Optium Xceed, Roche Accu-Chek Inform II, Nova Statstrip) were created using empirical data. Monte Carlo methods were employed, and each simulation was run 1000 times. In each simulation, each set of patient data had randomly selected timing and/or measurement error added to BG measurements before CGM data were calibrated. The number of hypoglycemic events, duration of hypoglycemia, and hypoglycemic index were then calculated using the CGM data and compared to baseline values. Timing error alone had little effect on hypoglycemia metrics, but measurement error caused substantial variation. Abbott results underreported the number of hypoglycemic events by up to 8 and Roche overreported by up to 4 where the original number reported was 2. Nova results were closest to baseline. Similar trends were observed in the other hypoglycemia metrics. Errors in blood glucose concentration measurements used for calibration of CGM devices can have a clinically important impact on detection of hypoglycemia. If CGM devices are going to be used for assessing hypoglycemia, it is important to understand the impact of these errors on CGM data. © 2014 Diabetes Technology Society.
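
    The Monte Carlo design described above can be sketched in a few lines: perturb the calibration BG values with meter error, recalibrate, and recompute a hypoglycemia metric. The simple gain-only calibration, 5% meter CV, and 2.6 mmol/L threshold below are all assumptions, not the study's models.

    ```python
    # Monte Carlo effect of calibration-measurement error on detected
    # hypoglycemia duration for a synthetic CGM trace.
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0, 24, 288)                  # 24 h at 5-min samples
    true_bg = 3.2 + 0.8 * np.sin(t / 2.0)        # synthetic profile (mmol/L)
    sensor = 0.9 * true_bg                       # uncalibrated sensor signal
    cal_idx = [40, 150, 250]                     # calibration time points

    def hypo_minutes(trace, threshold=2.6, dt=5):
        return int(np.sum(trace < threshold)) * dt

    exact = hypo_minutes(true_bg)
    results = []
    for _ in range(1000):
        meter = true_bg[cal_idx] * (1 + rng.normal(0, 0.05, 3))  # 5% CV meter error
        gain = np.mean(meter / sensor[cal_idx])                  # gain-only calibration
        results.append(hypo_minutes(gain * sensor))

    print(f"Error-free hypo duration: {exact} min; with meter error: "
          f"median {np.median(results):.0f} min, "
          f"IQR {np.percentile(results, 25):.0f}-{np.percentile(results, 75):.0f} min")
    ```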

  8. Deductive Error Diagnosis and Inductive Error Generalization for Intelligent Tutoring Systems.

    Science.gov (United States)

    Hoppe, H. Ulrich

    1994-01-01

    Examines the deductive approach to error diagnosis for intelligent tutoring systems. Topics covered include the principles of the deductive approach to diagnosis; domain-specific heuristics to solve the problem of generalizing error patterns; and deductive diagnosis and the hypertext-based learning environment. (Contains 26 references.) (JLB)

  9. Measurements of stem diameter: implications for individual- and stand-level errors.

    Science.gov (United States)

    Paul, Keryn I; Larmour, John S; Roxburgh, Stephen H; England, Jacqueline R; Davies, Micah J; Luck, Hamish D

    2017-08-01

    Stem diameter is one of the most common measurements made to assess the growth of woody vegetation, and the commercial and environmental benefits that it provides (e.g. wood or biomass products, carbon sequestration, landscape remediation). Yet inconsistency in its measurement is a continuing source of error in estimates of stand-scale measures such as basal area, biomass, and volume. Here we assessed errors in stem diameter measurement through repeated measurements of individual trees and shrubs of varying size and form (i.e. single- and multi-stemmed) across a range of contrasting stands, from complex mixed-species plantings to commercial single-species plantations. We compared a standard diameter tape with a Stepped Diameter Gauge (SDG) for time efficiency and measurement error. Measurement errors in diameter were slightly (but significantly) influenced by size and form of the tree or shrub, and stem height at which the measurement was made. Compared to standard tape measurement, the mean systematic error with SDG measurement was only -0.17 cm, but varied between -0.10 and -0.52 cm. Random error was larger, with standard deviations (and percentage coefficients of variation) averaging 0.36 cm (and 3.8%), but varying between 0.14 and 0.61 cm (and 1.9 and 7.1%). However, at the stand scale, sampling errors (i.e. how well individual trees or shrubs selected for measurement of diameter represented the true stand population in terms of the average and distribution of diameter) generally had at least a tenfold greater influence on random errors in basal area estimates than errors in diameter measurements. This supports the use of diameter measurement tools that have high efficiency, such as the SDG. Use of the SDG almost halved the time required for measurements compared to the diameter tape. Based on these findings, recommendations include the following: (i) use of a tape to maximise accuracy when developing allometric models, or when
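
    The stand-scale point, that sampling error can dwarf diameter-measurement error, is easy to reproduce numerically. The sketch below is loosely based on the abstract's figures (0.36 cm random measurement error); the synthetic stand and the 10% sampling fraction are assumptions.

    ```python
    # Compare the contribution of measurement error vs. sampling error to the
    # uncertainty of stand basal area.
    import numpy as np

    rng = np.random.default_rng(7)
    true_dbh = rng.lognormal(mean=3.0, sigma=0.3, size=500)  # cm, synthetic stand

    def basal_area(dbh_cm):
        return np.sum(np.pi * (dbh_cm / 200.0) ** 2)         # m^2 (dbh in cm)

    ba_true = basal_area(true_dbh)
    meas, samp = [], []
    for _ in range(1000):
        meas.append(basal_area(true_dbh + rng.normal(0, 0.36, true_dbh.size)))
        samp.append(basal_area(rng.choice(true_dbh, 50, replace=False)) * 10)

    print(f"CV from 0.36 cm measurement error: {np.std(meas) / ba_true:.2%}")
    print(f"CV from measuring a 10% sample:    {np.std(samp) / ba_true:.2%}")
    ```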

  10. NURE aerial gamma-ray and magnetic reconnaissance survey of portions of New Mexico, Arizona and Texas. Volume II. New Mexico-Carlsbad NI 31-11 Quadrangle. Final report

    International Nuclear Information System (INIS)

    1981-09-01

    As part of the Department of Energy (DOE) National Uranium Resource Evaluation Program, a rotary-wing high-sensitivity radiometric and magnetic survey was flown covering the Carlsbad Quadrangle of the State of New Mexico. The area surveyed consisted of approximately 1732 line miles. The survey was flown with a Sikorsky S58T helicopter equipped with a high-sensitivity gamma-ray spectrometer which was calibrated at the DOE calibration facilities at Walker Field in Grand Junction, Colorado, and the Dynamic Test Range at Lake Mead, Arizona. Instrumentation and data reduction methods are presented in Volume I of this report. The reduced data are presented in the form of stacked profiles, standard deviation anomaly plots, histogram plots and microfiche listings. The results of the geologic interpretation of the radiometric data, together with the profiles, anomaly maps and histograms, are presented in this Volume II final report.

  11. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the configuration RTTTR (tilting-head B-axis and rotary table on the workpiece side, A′-axis) was set up taking into consideration rigid-body kinematics and homogeneous transformation matrices, in which 43 error components are included. These 43 error components can each reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of the workpiece is governed by the position of the cutting tool center point (TCP) relative to the workpiece; when the cutting tool deviates from its ideal position relative to the workpiece, machining error results. For the compensation process, the current tool path is detected and the geometric errors of the RTTTR five-axis CNC machine tool are analyzed; current component positions are translated to compensated positions using the kinematic error model; the newly created components are converted to new tool paths using the compensation algorithms; and finally the old G-codes are edited using a G-code generator algorithm.
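
    The kinematic-modeling step described above composes homogeneous transformation matrices (HTMs) along the machine's chain and compares ideal and error-perturbed tool center point (TCP) positions. Below is a minimal sketch with one illustrative error term standing in for the paper's 43 components; all dimensions and values are assumptions.

    ```python
    # Compose HTMs for a translation stage and a B-axis rotation, perturb the
    # B angle by a small positioning error, and report the TCP deviation.
    import numpy as np

    def translation(x, y, z):
        T = np.eye(4); T[:3, 3] = [x, y, z]; return T

    def rot_b(beta):
        c, s = np.cos(beta), np.sin(beta)   # rotation about the Y (B) axis
        T = np.eye(4)
        T[0, 0], T[0, 2], T[2, 0], T[2, 2] = c, s, -s, c
        return T

    tool = translation(0, 0, -150.0)        # tool length offset (mm), assumed
    nominal = translation(100, 50, 300) @ rot_b(np.radians(30)) @ tool
    actual = translation(100, 50, 300) @ rot_b(np.radians(30) + 5e-5) @ tool

    tcp_error = (actual - nominal)[:3, 3]   # TCP deviation vector (mm)
    print("TCP deviation (mm):", tcp_error)
    ```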

  12. Errorful and errorless learning: The impact of cue-target constraint in learning from errors.

    Science.gov (United States)

    Bridger, Emma K; Mecklinger, Axel

    2014-08-01

    The benefits of testing on learning are well described, and attention has recently turned to what happens when errors are elicited during learning: Is testing nonetheless beneficial, or can errors hinder learning? Whilst recent findings have indicated that tests boost learning even if errors are made on every trial, other reports, emphasizing the benefits of errorless learning, have indicated that errors lead to poorer later memory performance. The possibility that this discrepancy is a function of the materials that must be learned-in particular, the relationship between the cues and targets-was addressed here. Cued recall after either a study-only errorless condition or an errorful learning condition was contrasted across cue-target associations, for which the extent to which the target was constrained by the cue was either high or low. Experiment 1 showed that whereas errorful learning led to greater recall for low-constraint stimuli, it led to a significant decrease in recall for high-constraint stimuli. This interaction is thought to reflect the extent to which retrieval is constrained by the cue-target association, as well as by the presence of preexisting semantic associations. The advantage of errorful retrieval for low-constraint stimuli was replicated in Experiment 2, and the interaction with stimulus type was replicated in Experiment 3, even when guesses were randomly designated as being either correct or incorrect. This pattern provides support for inferences derived from reports in which participants made errors on all learning trials, whilst highlighting the impact of material characteristics on the benefits and disadvantages that accrue from errorful learning in episodic memory.

  13. Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting

    Science.gov (United States)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1987-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple-bit (or byte) per chip basis. For example, some 256 Kbit DRAMs are organized as 32 K × 8-bit bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
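
    The decoding technique starts, like any RS decoder, from the syndromes of the received word. The sketch below computes syndromes over GF(2^8); the field polynomial 0x11d, the five-syndrome count, and the toy word are assumptions for illustration, and a real DBEC-TBED decoder would then map these syndromes directly to error locations and values as the paper describes.

    ```python
    # Build GF(2^8) log/antilog tables and compute syndromes S_j = r(alpha^j).
    GF_EXP, GF_LOG = [0] * 512, [0] * 256
    x = 1
    for i in range(255):
        GF_EXP[i] = x
        GF_LOG[x] = i
        x <<= 1
        if x & 0x100:
            x ^= 0x11d                      # reduce modulo the field polynomial

    for i in range(255, 512):               # duplicate for overflow-free indexing
        GF_EXP[i] = GF_EXP[i - 255]

    def gf_mul(a, b):
        if a == 0 or b == 0:
            return 0
        return GF_EXP[GF_LOG[a] + GF_LOG[b]]

    def poly_eval(poly, x0):
        """Horner evaluation in GF(2^8); highest-degree coefficient first."""
        y = 0
        for coef in poly:
            y = gf_mul(y, x0) ^ coef
        return y

    received = [0x12, 0x34, 0x00, 0x56, 0x9a]   # toy received word (assumed)
    syndromes = [poly_eval(received, GF_EXP[j]) for j in range(1, 6)]
    print("Syndromes S1..S5:", syndromes, "(all zero would mean no detected error)")
    ```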

  14. High cortisol awakening response is associated with impaired error monitoring and decreased post-error adjustment.

    Science.gov (United States)

    Zhang, Liang; Duan, Hongxia; Qin, Shaozheng; Yuan, Yiran; Buchanan, Tony W; Zhang, Kan; Wu, Jianhui

    2015-01-01

    The cortisol awakening response (CAR), a rapid increase in cortisol levels following morning awakening, is an important aspect of hypothalamic-pituitary-adrenocortical axis activity. Alterations in the CAR have been linked to a variety of mental disorders and to cognitive function. However, little is known regarding the relationship between the CAR and error processing, a phenomenon that is vital for cognitive control and behavioral adaptation. Using high-temporal-resolution measures of event-related potentials (ERPs) combined with behavioral assessment of error processing, we investigated whether and how the CAR is associated with two key components of error processing: error detection and subsequent behavioral adjustment. Sixty university students performed a Go/No-go task while their ERPs were recorded. Saliva samples were collected at 0, 15, 30 and 60 min after awakening on the two consecutive days following ERP data collection. The results showed that a higher CAR was associated with a prolonged latency of the error-related negativity (ERN) and a higher post-error miss rate. The CAR was not associated with other behavioral measures such as the false alarm rate and the post-correct miss rate. These findings suggest that a high CAR is a biological factor linked to impairments at multiple steps of error processing in healthy populations, specifically the automatic detection of errors and post-error behavioral adjustment. A common underlying neural mechanism of physiological and cognitive control may be crucial for both the CAR and error processing.
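
    The CAR itself is typically summarized from the four post-awakening samples with an area-under-the-curve index. A minimal sketch of one common choice (AUC with respect to increase, via the trapezoidal rule) follows; the study's exact index is not specified here, and the sample values are invented.

    ```python
    # AUCg (ground) and AUCi (increase) for saliva cortisol at 0/15/30/60 min.
    import numpy as np

    t_min = np.array([0, 15, 30, 60])
    cortisol = np.array([12.0, 17.5, 21.0, 15.0])   # nmol/L, made-up samples

    auc_ground = np.trapz(cortisol, t_min)          # trapezoidal AUC vs. zero
    auc_increase = auc_ground - cortisol[0] * t_min[-1]
    print(f"AUCg = {auc_ground:.0f}, AUCi = {auc_increase:.0f} nmol*min/L")
    ```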

  15. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    “Errare humanum est” is a well-known and widespread Latin proverb stating that to err is human: people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reason why they are made, improve and move on. The significance of studying errors is described by Corder: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the aim of this paper is to analyze errors in the process of second language acquisition and the ways we teachers can benefit from mistakes to help students improve themselves while giving proper feedback.

  16. THE BOLOCAM GALACTIC PLANE SURVEY. II. CATALOG OF THE IMAGE DATA

    International Nuclear Information System (INIS)

    Rosolowsky, Erik; Dunham, Miranda K.; Evans, Neal J.; Harvey, Paul; Ginsburg, Adam; Bally, John; Battersby, Cara; Glenn, Jason; Stringfellow, Guy S.; Bradley, E. Todd; Aguirre, James; Cyganowski, Claudia; Dowell, Darren; Drosback, Meredith; Walawender, Josh; Williams, Jonathan P.

    2010-01-01

    We present a catalog of 8358 sources extracted from images produced by the Bolocam Galactic Plane Survey (BGPS). The BGPS is a survey of the millimeter dust continuum emission from the northern Galactic plane. The catalog sources are extracted using a custom algorithm, Bolocat, which was designed specifically to identify and characterize objects in the large-area maps generated from the Bolocam instrument. The catalog products are designed to facilitate follow-up observations of these relatively unstudied objects. The catalog is 98% complete from 0.4 Jy to 60 Jy over all object sizes for which the survey is sensitive. The differential source counts follow a power law in flux density with index -2.4 ± 0.1, and the mean Galactic latitude of the sources is significantly below the midplane: ⟨b⟩ = -0.°095 ± 0.°001.
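
    The -2.4 ± 0.1 index above describes the differential source counts dN/dS as a power law in flux density. Below is a minimal sketch of recovering such an index with the standard maximum-likelihood (Pareto) estimator, on synthetic fluxes rather than BGPS data; the completeness limit of 0.4 Jy is taken from the abstract.

        # Minimal sketch: MLE for a power-law index of differential counts,
        # dN/dS proportional to S^alpha for S > s_min. Synthetic fluxes only.
        import numpy as np

        rng = np.random.default_rng(0)
        s_min = 0.4                       # completeness limit (Jy)
        alpha_true = -2.4                 # differential index to recover
        # Inverse-CDF sampling of a Pareto tail with exponent alpha_true.
        u = rng.uniform(size=8358)
        fluxes = s_min * u ** (1.0 / (alpha_true + 1.0))

        # MLE: alpha_hat = -1 - n / sum(ln(S / s_min)).
        n = fluxes.size
        alpha_hat = -1.0 - n / np.log(fluxes / s_min).sum()
        err = abs(alpha_hat + 1.0) / np.sqrt(n)    # asymptotic 1-sigma error
        print(f"alpha_hat = {alpha_hat:.2f} +/- {err:.2f}")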

  17. Resurfacing the Jodrell Bank Mk II radio telescope

    Science.gov (United States)

    Spencer, R. E.; Haggis, J. S.; Morrison, I.; Davis, R. J.; Melling, R. J.

    The improvement of the short-wavelength performance of the Jodrell Bank Mk II radio telescope is described. A final rms profile error of 0.6 mm was achieved through an inexpensive technique of panel construction and measurement, combined with radio-astronomical holography to measure the telescope under actual operating conditions. Some further improvements to extend the short-wavelength performance are suggested.

  18. Suffering in Silence: Medical Error and its Impact on Health Care Providers.

    Science.gov (United States)

    Robertson, Jennifer J; Long, Brit

    2018-04-01

    All humans are fallible. Because physicians are human, unintentional errors unfortunately occur. While unintentional medical errors have an impact on patients and their families, they may also contribute to adverse mental and emotional effects on the involved provider(s). These may include burnout, lack of concentration, poor work performance, posttraumatic stress disorder, depression, and even suicidality. The objectives of this article are to 1) discuss the impact medical error has on involved provider(s), 2) provide potential reasons why medical error can have a negative impact on provider mental health, and 3) suggest solutions for providers and health care organizations to recognize and mitigate the adverse effects medical error has on providers. Physicians and other providers may feel a variety of adverse emotions after medical error, including guilt, shame, anxiety, fear, and depression. It is thought that the pervasive culture of perfectionism and individual blame in medicine plays a considerable role in these negative effects. In addition, studies have found that despite physicians' desire for support after medical error, many physicians feel a lack of personal and administrative support. This may further contribute to poor emotional well-being. Potential solutions from the literature are proposed, including provider counseling, learning from mistakes without fear of punishment, discussing mistakes with others, focusing on the system versus the individual, and emphasizing provider wellness. Much of the reviewed literature is limited in terms of an emergency medicine focus, or even regarding physicians in general. In addition, most studies are survey- or interview-based, which limits objectivity. While additional, more objective research is needed on mitigating the effects of error on physicians, this review may help provide insight and support for those who feel alone in their attempt to heal after being involved in an adverse medical event.

  19. THE PRISM MULTI-OBJECT SURVEY (PRIMUS). II. DATA REDUCTION AND REDSHIFT FITTING

    Energy Technology Data Exchange (ETDEWEB)

    Cool, Richard J. [MMT Observatory, Tucson, AZ 85721 (United States); Moustakas, John [Department of Physics, Siena College, 515 Loudon Rd., Loudonville, NY 12211 (United States); Blanton, Michael R.; Hogg, David W. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States); Burles, Scott M. [D.E. Shaw and Co. L.P, 20400 Stevens Creek Blvd., Suite 850, Cupertino, CA 95014 (United States); Coil, Alison L.; Aird, James; Mendez, Alexander J. [Department of Physics, Center for Astrophysics and Space Sciences, University of California, 9500 Gilman Dr., La Jolla, San Diego, CA 92093 (United States); Eisenstein, Daniel J. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St, MS 20, Cambridge, MA 02138 (United States); Wong, Kenneth C. [Steward Observatory, The University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721 (United States); Zhu, Guangtun [Center for Astrophysical Sciences, Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Bernstein, Rebecca A. [Department of Astronomy and Astrophysics, UCA/Lick Observatory, University of California, 1156 High Street, Santa Cruz, CA 95064 (United States); Bolton, Adam S. [Department of Physics and Astronomy, University of Utah, Salt Lake City, UT 84112 (United States)

    2013-04-20

    The PRIsm MUlti-object Survey (PRIMUS) is a spectroscopic galaxy redshift survey to z ~ 1 completed with a low-dispersion prism and slitmasks allowing for simultaneous observations of ~2500 objects over 0.18 deg². The final PRIMUS catalog includes ~130,000 robust redshifts over 9.1 deg². In this paper, we summarize the PRIMUS observational strategy and present the data reduction details used to measure redshifts, redshift precision, and survey completeness. The survey motivation, observational techniques, fields, target selection, slitmask design, and observations are presented in Coil et al. Comparisons to existing higher-resolution spectroscopic measurements show a typical precision of σ_z/(1 + z) = 0.005. PRIMUS, both in area and number of redshifts, is the largest faint galaxy redshift survey completed to date and is allowing for precise measurements of the relationship between active galactic nuclei and their hosts, the effects of environment on galaxy evolution, and the buildup of galactic systems over the latter half of cosmic history.
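
    The quoted precision σ_z/(1 + z) = 0.005 is the scatter of survey redshifts against higher-resolution measurements. A standard, outlier-robust way to compute such a scatter is the normalized median absolute deviation; the sketch below applies it to synthetic values and is an illustration, not the PRIMUS pipeline.

        # Minimal sketch: robust redshift-precision estimate via normalized MAD.
        import numpy as np

        def nmad_precision(z_survey, z_ref):
            dz = (np.asarray(z_survey) - np.asarray(z_ref)) / (1.0 + np.asarray(z_ref))
            return 1.4826 * np.median(np.abs(dz - np.median(dz)))

        # Synthetic example with a true scatter of 0.005 in dz/(1+z).
        rng = np.random.default_rng(1)
        z_ref = rng.uniform(0.0, 1.0, 5000)
        z_survey = z_ref + 0.005 * (1 + z_ref) * rng.standard_normal(5000)
        print(f"sigma_z/(1+z) ~ {nmad_precision(z_survey, z_ref):.4f}")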

  20. Questionnaire surveys of dentists on radiology.

    Science.gov (United States)

    Shelley, A M; Brunton, P; Horner, K

    2012-05-01

    Survey by questionnaire is a widely used research method in dental radiology. A major concern in reviews of questionnaires is non-response. The objectives of this study were to review questionnaire studies in dental radiology with regard to potential survey errors and to develop recommendations to assist future researchers. A search of the English-language peer-reviewed literature, with no restriction on date, was conducted in Medline via PubMed (www.ncbi.nlm.nih.gov/pubmed). The medical subject heading terms used were "questionnaire", "dental radiology" and "dental radiography". The reference sections of articles retrieved by this method were hand-searched to identify further relevant papers. Reviews, commentaries and relevant studies from the wider literature were also included. 53 questionnaire studies were identified in the dental literature that concerned dental radiography and included a report of response rate; all were published between 1983 and 2010. In total, 87 articles are referred to in this review, including the 53 dental radiology studies. The other cited articles include reviews, commentaries and examples of studies outside dental radiology where they are germane to the arguments presented. Non-response is only one of four broad areas of error to which questionnaire surveys are subject; this review considers coverage, sampling and measurement as well as non-response. Recommendations are made to assist future research that uses questionnaire surveys.

  1. Putting into practice error management theory: Unlearning and learning to manage action errors in construction.

    Science.gov (United States)

    Love, Peter E D; Smith, Jim; Teo, Pauline

    2018-05-01

    Error management theory is drawn upon to examine how a project-based organization, which took the form of a program alliance, was able to change its established error prevention mindset to one that enacted a learning mindfulness that provided an avenue to curtail its action errors. The program alliance was required to unlearn its existing routines and beliefs to accommodate the practices required to embrace error management. As a result of establishing an error management culture, the program alliance was able to create a collective mindfulness that nurtured learning and supported innovation. The findings provide much-needed context to demonstrate the relevance of error management theory for effectively addressing rework and safety problems in construction projects. The robust theoretical underpinning, grounded in practice and presented in this paper, provides a mechanism to engender learning from errors which can be utilized by construction organizations to improve the productivity and performance of their projects. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
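
    For intuition on how rounding inflates a naive variance estimate, the classical Sheppard correction subtracts w²/12 for rounding width w; the abstract's point is that when grouping is coarse this simple picture breaks down and the moment estimation method is needed. A minimal sketch under the simple fine-grouping model, not the MERDA program:

        # Minimal sketch: rounding inflates the naive variance of scale
        # readings; Sheppard's correction (var - w^2/12) approximately
        # recovers the underlying weighing variance when w is not too coarse.
        import numpy as np

        rng = np.random.default_rng(0)
        sigma = 0.05                      # true weighing error, std. dev. (g)
        w = 0.10                          # rounding width of the display (g)
        readings = 100.0 + sigma * rng.standard_normal(100000)
        rounded = np.round(readings / w) * w

        s2 = rounded.var()
        print(f"true variance        {sigma**2:.6f}")
        print(f"naive variance       {s2:.6f}")
        print(f"Sheppard-corrected   {s2 - w**2 / 12:.6f}")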

  3. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist, but in a more systematic fashion. The methods used in these procedures and some of the recent applications are described in this paper.
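
    As a toy illustration of beam-kick error finding, not the SLC software or its expert system: in a simple drift model, a kick of angle θ at position s_k displaces every downstream beam position monitor m by θ(s_m − s_k), so scanning candidate kick locations and fitting θ by least squares at each recovers both the location and the strength of the kick. All lattice numbers below are invented.

        # Minimal sketch: locate a single kick error from BPM readings by
        # scanning candidate locations and fitting the kick angle at each.
        import numpy as np

        rng = np.random.default_rng(2)
        s_bpm = np.linspace(5.0, 100.0, 20)        # BPM positions (m)
        s_true, theta_true = 33.0, 0.2e-3          # hidden kick: position, angle
        noise = 20e-6                              # BPM resolution (m)
        orbit = (theta_true * np.clip(s_bpm - s_true, 0, None)
                 + noise * rng.standard_normal(s_bpm.size))

        best = None
        for s_k in np.linspace(0.0, 95.0, 96):     # candidate kick locations
            response = np.clip(s_bpm - s_k, 0, None)   # orbit per unit kick
            denom = response @ response
            if denom == 0:
                continue
            theta = (response @ orbit) / denom     # 1-parameter least squares
            chi2 = ((orbit - theta * response) ** 2).sum()
            if best is None or chi2 < best[0]:
                best = (chi2, s_k, theta)
        print(f"kick near s = {best[1]:.0f} m, theta = {best[2] * 1e3:.2f} mrad")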

  4. Probing HeII Reionization at z>3.5 with Resolved HeII Lyman Alpha Forest Spectra

    Science.gov (United States)

    Worseck, Gabor

    2017-08-01

    The advent of GALEX and COS has revolutionized our view of HeII reionization, the final major phase transition of the intergalactic medium. COS spectra of the HeII Lyman alpha forest have confirmed with high confidence the high HeII transmission that signifies the completion of HeII reionization at z ~ 2.7. However, the handful of z>3.5 quasars observed to date show a set of HeII transmission 'spikes' and larger regions with non-zero transmission that suggest HeII reionization was well underway by z=4. This is in striking conflict with predictions from state-of-the-art radiative transfer simulations of a HeII reionization driven by bright quasars. Explaining these measurements may require either faint quasars or more exotic sources of hard photons at z>4, with concomitant implications for HI reionization. However, many of the observed spikes are unresolved in G140L spectra and are significantly impacted by Poisson noise. Current data cannot reliably probe the ionization state of helium at z>3.5. We request 41 orbits to obtain science-grade G130M spectra of the two UV-brightest HeII-transmitting QSOs at z>3.5 to confirm and resolve their HeII transmission spikes as an unequivocal test of early HeII reionization. These spectra are complemented by recently obtained data from 8m telescopes: (1) echelle spectra of the coeval HI Lya forest to map the underlying density field that modulates the HeII absorption, and (2) our dedicated survey for foreground QSOs that may source the HeII transmission. Our recent HST programs revealed the only two viable targets for resolving the z>3.5 HeII Lyman alpha forest and conclusively solving this riddle.

  5. PROPERTY OF THE LARGE FORMAT DIGITAL AERIAL CAMERA DMC II

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2012-07-01

    Full Text Available With the DMC II 140, 230 and 250, Z/I Imaging introduced digital aerial cameras with a very large format CCD for the panchromatic channel. With 140, 230 and 250 megapixels, these CCDs have a size not previously available in photogrammetry. CCDs in general have very high relative accuracy, but the overall geometry has to be checked, as does the influence of any departure from CCD flatness. A CCD with a size of 96 mm × 82 mm must have a flatness, or known deviation from flatness, in the range of 1 μm if a camera accuracy in the range of 1.3 μm is not to be degraded. The DMC II cameras have been evaluated at three different flying heights leading to 5 cm, 9 cm and 15 cm or 20 cm GSD, with crossing flight lines and 60% side lap. The optimal test conditions guaranteed precise determination of the object coordinates as well as of the systematic image errors. All three camera types show only very small systematic image errors, with root mean square values between 0.12 μm and 0.3 μm and extreme values not exceeding 1.6 μm. The remaining systematic image errors, determined by analysis of the image residuals and not covered by the additional parameters, are negligible. A standard deviation of the object point heights below the GSD, determined at independent check points, is standard even in blocks with just 20% side lap and 60% end lap. Corresponding to the excellent image geometry, the object point coordinates are only slightly influenced by the self-calibration. For all DMC II types, the handling of image models for data acquisition need not be supported by an improvement of the image coordinates using the determined systematic image errors; such an improvement is not yet standard in photogrammetric software packages. The advantage of a single monolithic CCD is obvious. An edge analysis of pan-sharpened DMC II 250 images resulted in factors for the effective resolution below 1.0. A result below 1.0 is only possible with contrast enhancement, which in turn requires low image noise.

  6. 76 FR 38203 - Proposed Information Collection; North American Woodcock Singing Ground Survey

    Science.gov (United States)

    2011-06-29

    ...] Proposed Information Collection; North American Woodcock Singing Ground Survey AGENCY: Fish and Wildlife... populations. The North American Woodcock Singing Ground Survey is an essential part of the migratory bird.... II. Data OMB Control Number: 1018-0019. Title: North American Woodcock Singing Ground Survey. Service...

  7. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals affiliated with Hamadan University of Medical Sciences in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were the lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher among men (71.4%), those aged 40-50 years (67.6%), less-experienced personnel (58.7%), those with an MSc education (87.5%), and staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors, which may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  8. THE LOCAL [C ii] 158 μ m EMISSION LINE LUMINOSITY FUNCTION

    Energy Technology Data Exchange (ETDEWEB)

    Hemmati, Shoubaneh; Yan, Lin; Capak, Peter; Faisst, Andreas; Masters, Daniel [Infrared Processing and Analysis Center, Department of Astronomy, California Institute of Technology, 1200 E. California Blvd., Pasadena CA 91125 (United States); Diaz-Santos, Tanio [Nucleo de Astronomia de la Facultad de Ingenieria, Universidad Diego Portales, Av. Ejercito Libertador 441, Santiago (Chile); Armus, Lee, E-mail: shemmati@ipac.caltech.edu [Spitzer Science Center, Department of Astronomy, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125 (United States)

    2017-01-01

    We present, for the first time, the local [C ii] 158 μm emission line luminosity function, measured using a sample of more than 500 galaxies from the Revised Bright Galaxy Sample. [C ii] luminosities are measured from the Herschel PACS observations of the Luminous Infrared Galaxies (LIRGs) in the Great Observatories All-sky LIRG Survey and estimated for the rest of the sample based on the far-infrared (far-IR) luminosity and color. The sample covers 91.3% of the sky and is complete at S_60μm > 5.24 Jy. We calculate the completeness as a function of [C ii] line luminosity and distance, based on the far-IR color and flux densities. The [C ii] luminosity function is constrained in the range ~10^7-10^9 L_⊙ using both the 1/V_max and maximum likelihood methods. The shape of our derived [C ii] emission line luminosity function agrees well with the IR luminosity function. For the CO(1-0) and [C ii] luminosity functions to agree, we propose a varying [C ii]/CO(1-0) ratio as a function of CO luminosity, with larger ratios for fainter CO luminosities. Limited [C ii] high-redshift observations, as well as estimates based on the IR and UV luminosity functions, are suggestive of an evolution in the [C ii] luminosity function similar to the evolution trend of the cosmic star formation rate density. Deep surveys using the Atacama Large Millimeter Array at full capability will be able to confirm this prediction.
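
    The 1/V_max estimator named above admits a compact implementation: each source contributes 1/V_max to its luminosity bin, where V_max is the comoving volume within which the source would still satisfy the survey flux cut (here S_60μm > 5.24 Jy). The sketch below uses synthetic luminosities and volumes; the binning and the crude √N error scaling are illustrative assumptions, not the paper's pipeline.

        # Minimal sketch: 1/V_max luminosity function from per-source maximum
        # volumes, with a rough Poisson error per bin. Synthetic inputs only.
        import numpy as np

        def vmax_lf(logL, v_max, bins):
            """Phi(logL) in sources per Mpc^3 per dex, plus rough errors."""
            phi, edges = np.histogram(logL, bins=bins, weights=1.0 / v_max)
            counts, _ = np.histogram(logL, bins=bins)
            dlogL = np.diff(edges)
            err = phi / np.maximum(np.sqrt(counts), 1)   # crude sqrt(N) scaling
            return phi / dlogL, err / dlogL, edges

        rng = np.random.default_rng(3)
        logL = rng.uniform(7.0, 9.0, 500)            # log10 L_[CII] / L_sun
        v_max = 10 ** rng.uniform(4.0, 7.0, 500)     # Mpc^3, set by the flux cut
        phi, err, edges = vmax_lf(logL, v_max, bins=np.arange(7.0, 9.25, 0.25))
        for lo, p, e in zip(edges[:-1], phi, err):
            print(f"logL = {lo:4.2f}  Phi = {p:9.3e} +/- {e:9.3e}")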

  9. Reward positivity: Reward prediction error or salience prediction error?

    Science.gov (United States)

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T-maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated whether or not they would receive a monetary reward; in a punishment condition, the feedback indicated whether or not they would receive a small shock. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.

  10. Learning from errors in super-resolution.

    Science.gov (United States)

    Tang, Yi; Yuan, Yuan

    2014-11-01

    A novel framework for learning-based super-resolution is proposed that learns from the estimation errors themselves. The estimation errors generated by different learning-based super-resolution algorithms are statistically shown to be sparse and uncertain: sparsity means that most of the estimation errors are negligibly small, while uncertainty means that the locations of the pixels with large estimation errors are random. Exploiting this prior information about the estimation errors, a nonlinear boosting process of learning from these errors is introduced into the general framework of learning-based super-resolution. Within this framework, a low-rank decomposition technique is used to share information across different super-resolution estimates and to remove the sparse estimation errors contributed by different learning algorithms or training samples. The experimental results show the effectiveness and efficiency of the proposed framework in enhancing the performance of different learning-based algorithms.
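
    The low-rank decomposition step can be made concrete with a generic robust-PCA split: stack several super-resolution estimates as columns of a matrix M and solve M ≈ L + S, where L carries the shared low-rank structure and S absorbs the sparse estimation errors. The sketch below uses a standard principal-component-pursuit ADMM loop on synthetic data; it illustrates the technique, not the authors' algorithm.

        # Minimal sketch: robust PCA (low-rank + sparse) via a simple ADMM loop.
        import numpy as np

        def shrink(X, tau):                 # soft-thresholding (sparse step)
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        def svt(X, tau):                    # singular value thresholding
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(shrink(s, tau)) @ Vt

        def rpca(M, n_iter=200):
            lam = 1.0 / np.sqrt(max(M.shape))
            mu = M.size / (4.0 * np.abs(M).sum())
            S = np.zeros_like(M)
            Y = np.zeros_like(M)
            for _ in range(n_iter):
                L = svt(M - S + Y / mu, 1.0 / mu)      # low-rank update
                S = shrink(M - L + Y / mu, lam / mu)   # sparse update
                Y += mu * (M - L - S)                  # dual update
            return L, S

        # Toy check: a rank-1 matrix plus a few large spikes separates cleanly.
        rng = np.random.default_rng(4)
        L0 = np.outer(rng.standard_normal(60), rng.standard_normal(8))
        S0 = np.zeros_like(L0)
        S0[rng.integers(0, 60, 10), rng.integers(0, 8, 10)] = 5.0
        L, S = rpca(L0 + S0)
        print(f"low-rank recovery error {np.linalg.norm(L - L0) / np.linalg.norm(L0):.3f}")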

  11. Identification of factors which affect the tendency towards and attitudes of emergency unit nurses to make medical errors.

    Science.gov (United States)

    Kiymaz, Dilek; Koç, Zeliha

    2018-03-01

    To determine the individual and professional factors affecting the tendency of emergency unit nurses to make medical errors, and their attitudes towards these errors, in Turkey. Compared with other units, the emergency unit is an environment with an increased tendency for medical errors due to its intensive and rapid pace, noise, and complex and dynamic structure. A descriptive cross-sectional study. The study was carried out from 25 July 2014-16 September 2015 with the participation of 284 nurses who volunteered to take part. Data were gathered using a data collection survey for nurses, the Medical Error Tendency Scale and the Medical Error Attitude Scale. It was determined that 40.1% of the nurses had previously witnessed medical errors, 19.4% had made a medical error in the last year, 17.6% of medical errors were medication errors in which the wrong medication was administered in the wrong dose, and none of the nurses had filled out a case report form about the medical errors they made. Regarding the factors that caused medical errors in the emergency unit, 91.2% of the nurses cited excessive workload; 85.1% an insufficient number of nurses; and 75.4% fatigue, exhaustion and burnout. The study showed that nurses who loved their job, were satisfied with their unit, and always worked day shifts had a lower medical error tendency. The following actions are suggested: increasing awareness of medical errors, organising training to reduce medication administration errors, developing procedures and protocols specific to emergency unit health care, and creating a non-punitive environment in which nurses can safely report medical errors. © 2017 John Wiley & Sons Ltd.

  12. Error management process for power stations

    International Nuclear Information System (INIS)

    Hirotsu, Yuko; Takeda, Daisuke; Fujimoto, Junzo; Nagasaka, Akihiko

    2016-01-01

    The purpose of this study is to establish an 'error management process for power stations' for systematizing activities for human error prevention and for fostering continuous improvement of these activities. The following are proposed by deriving concepts concerning the error management process from existing knowledge and realizing them through application and evaluation of their effectiveness at a power station: an overall picture of the error management process that facilitates four functions requisite for managing human error prevention effectively (1. systematizing human error prevention tools; 2. identifying problems based on incident reports and taking corrective actions; 3. identifying good practices and potential problems for taking proactive measures; 4. prioritizing human error prevention tools based on identified problems); detailed steps for each activity (i.e., developing an annual plan for human error prevention, reporting and analyzing incidents and near misses) based on a model of human error causation; procedures and example items for identifying gaps between current and desired levels of execution and outputs of each activity; and stages for introducing and establishing the above proposed error management process at a power station. By giving shape to the above proposals at a power station, systematization and continuous improvement of activities for human error prevention in line with the actual situation of the power station can be expected. (author)

  13. The Einstein Slew Survey

    Science.gov (United States)

    Elvis, Martin; Plummer, David; Schachter, Jonathan; Fabbiano, G.

    1992-01-01

    A catalog of 819 sources detected in the Einstein IPC Slew Survey of the X-ray sky is presented; 313 of the sources were not previously known as X-ray sources. Typical count rates are 0.1 IPC count/s, roughly equivalent to a flux of 3 × 10^-12 erg cm^-2 s^-1. The sources have positional uncertainties of 1.2 arcmin (90 percent confidence) radius, based on a subset of 452 sources identified with previously known pointlike X-ray sources (i.e., extent less than 3 arcmin). Identifications based on a number of existing catalogs of X-ray and optical objects are proposed for 637 of the sources, 78 percent of the survey (within a 3-arcmin error radius), including 133 identifications of new X-ray sources. A public identification database for the Slew Survey sources will be maintained at CfA, and contributions to this database are invited.

  14. Getting to the Source: a Survey of Quantitative Data Sources Available to the Everyday Librarian: Part II: Data Sources from Specific Library Applications

    Directory of Open Access Journals (Sweden)

    Lisa Goddard

    2007-03-01

    Full Text Available This is the second part of a two-part article that provides a survey of data sources which are likely to be immediately available to the typical practitioner who wishes to engage in statistical analysis of collections and services within his or her own library. Part I outlines the data elements which can be extracted from web server logs, and discusses web log analysis tools. Part II looks at logs, reports, and data sources from proxy servers, resource vendors, link resolvers, federated search engines, institutional repositories, electronic reference services, and the integrated library system.

  15. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-09-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
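
    The probability of undetectable error studied above follows directly from the weight distribution {A_i}: on a binary symmetric channel with bit-error rate ε, an error pattern goes undetected exactly when it equals a nonzero codeword, so P_ud(ε) = Σ_i A_i ε^i (1 − ε)^(n−i). A minimal sketch, using the (7,4) Hamming code's weight distribution as a small stand-in for the much longer shortened codes of IEEE 802.3:

        # Minimal sketch: undetected-error probability from a weight distribution.
        def p_undetected(weights, n, eps):
            """Sum A_i * eps^i * (1-eps)^(n-i) over nonzero-codeword weights."""
            return sum(a * eps**i * (1 - eps)**(n - i) for i, a in weights.items())

        hamming_7_4 = {3: 7, 4: 7, 7: 1}       # nonzero-codeword weights A_i
        for eps in (1e-5, 1e-3, 1e-1, 0.5):
            print(f"eps = {eps:7.1e}   P_ud = {p_undetected(hamming_7_4, 7, eps):.3e}")
        # At eps = 1/2 this reduces to (2^k - 1) / 2^n = 15/128, a useful check.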

  16. Survey of disruption causes at JET

    International Nuclear Information System (INIS)

    De Vries, P.C.; Johnson, M.F.; Alper, B.; Hender, T.C.; Riccardo, V.; Buratti, P.; Koslowski, H.R.

    2011-01-01

    A survey has been carried out into the causes of all 2309 disruptions over the last decade of JET operations. The aim of this survey was to obtain a complete picture of all possible disruption causes, in order to devise better strategies to prevent them or mitigate their impact. The analysis allows the effort to avoid or prevent JET disruptions to be more efficient and effective. As expected, a highly complex pattern of chains of events leading to disruptions emerged. It was found that the majority of disruptions had a technical root cause, for example control errors or operator mistakes. These bring a random, non-physics factor into the occurrence of disruptions, and the disruption rate or disruptivity of a scenario may depend more on technical performance than on physics stability issues. The main root cause of JET disruptions was nevertheless neo-classical tearing modes that locked, followed closely by disruptions due to human error. The development of more robust operational scenarios has reduced the JET disruption rate over the last decade from about 15% to below 4%. A fraction of all disruptions was caused by very fast, precursorless, unpredictable events. The occurrence of these disruptions may set a lower limit of 0.4% on the disruption rate of JET. If one considers, on top of that, human error and all unforeseen failures of heating or control systems, this lower limit may rise to 1.0% or 1.6%, respectively.

  17. Evaluating a medical error taxonomy.

    OpenAIRE

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a stand...

  18. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
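
    The activation/baseline/depression pattern described above corresponds to the sign of the prediction error in the simplest learning rule. Below is a minimal Rescorla-Wagner-style sketch, an illustration rather than a model from the article; the learning rate and trial counts are arbitrary.

        # Minimal sketch: delta = r - V, with V nudged toward the reward.
        value = 0.0          # learned reward prediction V
        alpha = 0.2          # learning rate

        for trial in range(1, 21):
            reward = 1.0                    # reward fully delivered each trial
            delta = reward - value          # prediction error
            value += alpha * delta
            if trial in (1, 5, 20):
                print(f"trial {trial:2d}: delta = {delta:+.3f} "
                      "(shrinks as the reward becomes predicted)")

        # Omitting the now-predicted reward yields a negative prediction error,
        # mirroring the depressed dopamine activity described in the abstract.
        delta = 0.0 - value
        print(f"omission: delta = {delta:+.3f}")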

  19. The S201 far-ultraviolet imaging survey - A summary of results and implications for future surveys

    Science.gov (United States)

    Carruthers, G. R.; Page, T.

    1984-01-01

    The results from the surveys made with the S201 FUV camera/spectrograph from the Moon during the Apollo 16 mission are summarized, with respect to their implications for future UV all-sky surveys. The scans provided imagery of 10 fields, each 20 deg in diameter, in the wavelength ranges 1050-1600 A and 1250-1600 A. The best detection thresholds were obtained with 10 and 30 min exposures at 1400 A. Only 7 percent sky coverage was recorded, and then only down to 11th mag. A Mark II camera may be flown on the Shuttle on the Spartan 3 mission, as may an all-reflector Schmidt telescope. An additional 20 percent of the sky will be mapped, and microchannel intensification will increase the diffuse-source sensitivity by two orders of magnitude. Several objects sighted with the S201 will be reviewed with the Mark II.

  20. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.

    Science.gov (United States)

    Bruno, Michael A; Walker, Eric A; Abujudeh, Hani H

    2015-10-01

    Arriving at a medical diagnosis is a highly complex process that is extremely error prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice. © RSNA, 2015.