WorldWideScience

Sample records for survey ii error

  1. THE DISKMASS SURVEY. II. ERROR BUDGET

    International Nuclear Information System (INIS)

    Bershady, Matthew A.; Westfall, Kyle B.; Verheijen, Marc A. W.; Martinsson, Thomas; Andersen, David R.; Swaters, Rob A.

    2010-01-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ_*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ_*^disk), and disk maximality (F_*,max^disk ≡ V_*,max^disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.
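    The ~25% per-galaxy and ~10% quartile-level figures follow the usual error-propagation logic. The Python sketch below is not from the paper: the individual error terms are illustrative assumptions. It only assumes that Σ_dyn scales with the vertical velocity dispersion squared and inversely with the scale height, that independent random errors add in quadrature, and that the random component shrinks roughly as 1/√N for a quartile statistic while a shared systematic (such as the distance scale) does not.

```python
import numpy as np

# Illustrative fractional (1-sigma) error terms for one galaxy (assumed values, not the paper's).
# Sigma_dyn ~ sigma_z^2 / h_z, so the dispersion error enters twice.
terms = {
    "velocity dispersion (x2)": 2 * 0.07,
    "disk scale height": 0.20,
    "inclination/projection": 0.08,
    "distance": 0.10,
}

per_galaxy = np.sqrt(sum(e ** 2 for e in terms.values()))
print(f"per-galaxy fractional error ~ {per_galaxy:.0%}")

# Quartile statistic over N galaxies: the random part averages down,
# the shared distance systematic does not.
N = 10
random_part = np.sqrt(per_galaxy ** 2 - terms["distance"] ** 2) / np.sqrt(N)
quartile = np.sqrt(random_part ** 2 + terms["distance"] ** 2)
print(f"quartile-level fractional error ~ {quartile:.0%}")
```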

  2. The DiskMass Survey. II. Error Budget

    Science.gov (United States)

    Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas

    2010-06-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ_*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ_*^disk), and disk maximality (F_*,max^disk ≡ V_*,max^disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  3. Total Survey Error for Longitudinal Surveys

    NARCIS (Netherlands)

    Lynn, Peter; Lugtig, P.J.

    2016-01-01

    This article describes the application of the total survey error paradigm to longitudinal surveys. Several aspects of survey error, and of the interactions between different types of error, are distinct in the longitudinal survey context. Furthermore, error trade-off decisions in survey design and

  4. The decline and fall of Type II error rates

    Science.gov (United States)

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
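    To make the claim concrete, here is a minimal sketch (not from the paper) computing the Type II error of a two-sided one-sample z-test under a normal approximation; the effect size and significance level are assumed for illustration, and beta can be seen to fall off roughly exponentially in n.

```python
from scipy.stats import norm

def type_ii_error(n, effect_size=0.5, alpha=0.05):
    """Type II error (beta) of a two-sided one-sample z-test, normal approximation,
    for a standardized effect size `effect_size` and sample size n."""
    z_crit = norm.ppf(1 - alpha / 2)
    shift = effect_size * n ** 0.5          # mean of the test statistic under H1
    return norm.cdf(z_crit - shift) - norm.cdf(-z_crit - shift)

for n in (10, 20, 40, 80, 160):
    print(f"n = {n:3d}  beta = {type_ii_error(n):.2e}")
```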

  5. Errors in practical measurement in surveying, engineering, and technology

    International Nuclear Information System (INIS)

    Barry, B.A.; Morris, M.D.

    1991-01-01

    This book discusses statistical measurement, error theory, and statistical error analysis. Topics include an introduction to measurement, measurement errors, the reliability of measurements, probability theory of errors, measures of reliability, reliability of repeated measurements, propagation of errors in computing, errors and weights, practical application of the theory of errors in measurement, and two-dimensional errors; a bibliography is included. Appendices address significant figures in measurement, basic concepts of probability and the normal probability curve, writing a sample specification for a procedure, classification, standards of accuracy, general specifications of geodetic control surveys, the geoid, the frequency distribution curve, and the computer and calculator solution of problems.
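    As a small worked example of the propagation-of-errors topic listed above, the following sketch applies first-order (quadrature) propagation to a quantity computed from two independent measurements; the measured values and standard errors are illustrative, not taken from the book.

```python
import math

# Measured sides of a rectangular parcel and their standard errors (illustrative values).
a, sigma_a = 125.32, 0.04   # metres
b, sigma_b = 86.17, 0.03    # metres

area = a * b
# First-order propagation for independent errors:
# sigma_area^2 = (dA/da * sigma_a)^2 + (dA/db * sigma_b)^2, with dA/da = b and dA/db = a.
sigma_area = math.sqrt((b * sigma_a) ** 2 + (a * sigma_b) ** 2)

print(f"area = {area:.2f} +/- {sigma_area:.2f} m^2")
```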

  6. Analysis of Employee's Survey for Preventing Human-Errors

    International Nuclear Information System (INIS)

    Sung, Chanho; Kim, Younggab; Joung, Sanghoun

    2013-01-01

    Human errors in nuclear power plants can cause large and small events or incidents. These events or incidents are among the main contributors to reactor trips and might threaten the safety of nuclear plants. To prevent human errors, KHNP (Korea Hydro & Nuclear Power) introduced 'human-error prevention techniques' and has applied them to main areas such as plant operation, operation support, and maintenance and engineering. This paper proposes methods to prevent and reduce human errors in nuclear power plants by analyzing survey results covering the utilization of the human-error prevention techniques and the employees' awareness of preventing human errors. With regard to human-error prevention, the survey analysis presented the status of the human-error prevention techniques and the employees' awareness of preventing human errors. Employees' understanding and utilization of the techniques was generally high, and the level of employee training and its effect on actual work were in good condition. Employees also answered that the root causes of human error lay in the working environment, including tight schedules, manpower shortages, and excessive workload, rather than in personal negligence or lack of knowledge; consideration of the working environment is therefore certainly needed. At present, based on this survey, the best methods of preventing human error are personal equipment, substantive training and education, a personal mental health check before starting work, prohibition of performing multiple tasks at once, compliance with procedures, and enhancement of job-site review. However, the most important and basic factors for preventing human error are the interest of workers and an organizational atmosphere of good communication between managers and workers and between employees and their supervisors.

  7. Nonresponse Error in Mail Surveys: Top Ten Problems

    Directory of Open Access Journals (Sweden)

    Jeanette M. Daly

    2011-01-01

    Conducting mail surveys can result in nonresponse error, which occurs when the potential participant is unwilling to participate or impossible to contact. Nonresponse can reduce the precision of a study and may bias its results. The purpose of this paper is to describe, and make readers aware of, a top-ten list of mailed-survey problems affecting the response rate encountered over time across different research projects using the Dillman Total Design Method. Ten nonresponse error problems were identified, such as the inserter machine getting the sequence out of order, capitalization problems in databases, and mailings discarded by the postal service. These ten mishaps can potentiate nonresponse errors, but there are ways to minimize their frequency. The suggestions offered stem from our own experiences during research projects. Our goal is to increase researchers' knowledge of nonresponse error problems and to offer solutions that can decrease nonresponse error in future projects.

  8. A national physician survey of diagnostic error in paediatrics.

    Science.gov (United States)

    Perrem, Lucy M; Fanshawe, Thomas R; Sharif, Farhana; Plüddemann, Annette; O'Neill, Michael B

    2016-10-01

    This cross-sectional survey explored paediatric physician perspectives regarding diagnostic errors. All paediatric consultants and specialist registrars in Ireland were invited to participate in this anonymous online survey. The response rate for the study was 54% (n = 127). Respondents had a median of 9 years' clinical experience (interquartile range (IQR) 4-20 years). A diagnostic error was reported at least monthly by 19 (15.0%) respondents. Consultants reported significantly fewer diagnostic errors than trainees (p value = 0.01). Cognitive error was the top-ranked contributing factor to diagnostic error, with incomplete history and examination considered to be the principal cognitive error. Seeking a second opinion and close follow-up of patients to ensure that the diagnosis is correct were the highest-ranked, clinician-based solutions to diagnostic error. Inadequate staffing levels and excessive workload were the most highly ranked system-related and situational factors. Increased access to and availability of consultants and experts was the most highly ranked system-based solution to diagnostic error. We found a low level of self-perceived diagnostic error in an experienced group of paediatricians, at variance with the literature and warranting further clarification. The results identify perceptions on the major cognitive, system-related and situational factors contributing to diagnostic error and also key preventative strategies. What is Known: • Diagnostic errors are an important source of preventable patient harm and have an estimated incidence of 10-15%. • They are multifactorial in origin and include cognitive, system-related and situational factors. What is New: • We identified a low rate of self-perceived diagnostic error in contrast to the existing literature. • Incomplete history and examination, inadequate staffing levels and excessive workload are cited as the principal contributing factors to diagnostic error in this study.

  9. The computation of equating errors in international surveys in education.

    Science.gov (United States)

    Monseur, Christian; Berezner, Alla

    2007-01-01

    Since the IEA's Third International Mathematics and Science Study, one of the major objectives of international surveys in education has been to report trends in achievement. The names of the two current IEA surveys reflect this growing interest: Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS). Similarly, a central concern of the OECD's PISA is with trends in outcomes over time. To facilitate trend analyses, these studies link their tests using common item equating in conjunction with item response modelling methods. IEA and PISA policies differ in terms of reporting the error associated with trends. In IEA surveys, the standard errors of the trend estimates do not include the uncertainty associated with the linking step, while PISA does include a linking error component in the standard errors of trend estimates. In other words, PISA implicitly acknowledges that trend estimates partly depend on the selected common items, while the IEA's surveys do not recognise this source of error. Failing to recognise the linking error leads to an underestimation of the standard errors and thus increases the Type I error rate, thereby resulting in reporting of significant changes in achievement when in fact these are not significant. The growing interest of policy makers in trend indicators and the impact of the evaluation of educational reforms appear to be incompatible with such underestimation. However, the procedure implemented by PISA raises a few issues about the underlying assumptions for the computation of the equating error. After a brief introduction, this paper will describe the procedure PISA implemented to compute the linking error. The underlying assumptions of this procedure will then be discussed. Finally, an alternative method based on replication techniques will be presented, evaluated in a simulation study, and then applied to the PISA 2000 data.
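    The dependence of a trend estimate on the particular common items chosen can be made visible with a generic replication sketch like the one below; it is not the PISA procedure or the method proposed in the paper, and the per-item difficulty shifts are simulated rather than taken from real calibrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated shifts in estimated difficulty (logits) for 20 common link items
# between two assessment cycles; real values would come from the IRT calibration.
item_shifts = rng.normal(loc=0.05, scale=0.15, size=20)

link_shift = item_shifts.mean()

# Delete-one-item jackknife: the spread of the replicate means reflects how much
# the trend estimate depends on which common items happened to be retained.
n = item_shifts.size
replicates = np.array([np.delete(item_shifts, i).mean() for i in range(n)])
linking_error = np.sqrt((n - 1) / n * np.sum((replicates - replicates.mean()) ** 2))

print(f"link shift = {link_shift:.3f} logits, linking error = {linking_error:.3f}")
```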

  10. The Southern H ii Region Discovery Survey (SHRDS): Pilot Survey

    International Nuclear Information System (INIS)

    Brown, C.; Dickey, John M.; Jordan, C.; Anderson, L. D.; Armentrout, W. P.; Balser, Dana S.; Wenger, Trey V.; Bania, T. M.; Dawson, J. R.; Mc Clure-Griffiths, N. M.

    2017-01-01

    The Southern H ii Region Discovery Survey is a survey of the third and fourth quadrants of the Galactic plane that will detect radio recombination line (RRL) and continuum emission at cm-wavelengths from several hundred H ii region candidates using the Australia Telescope Compact Array. The targets for this survey come from the WISE Catalog of Galactic H ii Regions and were identified based on mid-infrared and radio continuum emission. In this pilot project, two different configurations of the Compact Array Broad Band receiver and spectrometer system were used for short test observations. The pilot surveys detected RRL emission from 36 of 53 H ii region candidates, as well as seven known H ii regions that were included for calibration. These 36 recombination line detections confirm that the candidates are true H ii regions and allow us to estimate their distances.

  11. The Southern H ii Region Discovery Survey (SHRDS): Pilot Survey

    Energy Technology Data Exchange (ETDEWEB)

    Brown, C.; Dickey, John M. [School of Physical Sciences, Private Bag 37, University of Tasmania, Hobart, TAS, 7001 (Australia); Jordan, C. [International Centre for Radio Astronomy Research, Curtin University, Perth, WA, 6845 (Australia); Anderson, L. D.; Armentrout, W. P. [Department of Physics and Astronomy, West Virginia University, P.O. Box 6315, Morgantown, WV 26506 (United States); Balser, Dana S.; Wenger, Trey V. [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22904 (United States); Bania, T. M. [Institute for Astrophysical Research, Department of Astronomy, Boston University, 725 Commonwealth Avenue, Boston, MA 02215 (United States); Dawson, J. R. [Department of Physics and Astronomy and MQ Research Centre in Astronomy, Astrophysics and Astrophotonics, Macquarie University, NSW, 2109 (Australia); Mc Clure-Griffiths, N. M. [Research School of Astronomy and Astrophysics, The Australian National University, Canberra ACT 2611 (Australia)

    2017-07-01

    The Southern H ii Region Discovery Survey is a survey of the third and fourth quadrants of the Galactic plane that will detect radio recombination line (RRL) and continuum emission at cm-wavelengths from several hundred H ii region candidates using the Australia Telescope Compact Array. The targets for this survey come from the WISE Catalog of Galactic H ii Regions and were identified based on mid-infrared and radio continuum emission. In this pilot project, two different configurations of the Compact Array Broad Band receiver and spectrometer system were used for short test observations. The pilot surveys detected RRL emission from 36 of 53 H ii region candidates, as well as seven known H ii regions that were included for calibration. These 36 recombination line detections confirm that the candidates are true H ii regions and allow us to estimate their distances.

  12. Medical Errors in Cyprus: The 2005 Eurobarometer Survey

    Directory of Open Access Journals (Sweden)

    Andreas Pavlakis

    2012-01-01

    Background: Medical errors have been highlighted in recent years by different agencies, scientific bodies and research teams alike. We sought to explore the issue of medical errors in Cyprus using data from the Eurobarometer survey. Methods: Data from the special Eurobarometer survey conducted in 2005 across all European Union countries (EU-25) and the acceding countries were obtained from the corresponding EU office. Statistical analyses including logistic regression models were performed using SPSS. Results: A total of 502 individuals participated in the Cyprus survey. About 90% reported that they had often or sometimes heard about medical errors, while 22% reported that they or a family member had suffered a serious medical error in a local hospital. In addition, 9.4% reported a serious problem from a prescribed medicine. We also found statistically significant differences across age groups and gender and between rural and urban residents. Finally, using multivariable-adjusted logistic regression models, we found that residents of rural areas were more likely to have suffered a serious medical error in a local hospital or from a prescribed medicine. Conclusion: Our study shows that the vast majority of residents in Cyprus, in parallel with other Europeans, worry about medical errors, and a significant percentage report having suffered a serious medical error at a local hospital or from a prescribed medicine. The results of our study could help the medical community in Cyprus and society at large to enhance vigilance with respect to medical errors in order to improve medical care.

  13. The sloan digital sky survey-II supernova survey

    DEFF Research Database (Denmark)

    Frieman, Joshua A.; Bassett, Bruce; Becker, Andrew

    2008-01-01

    The Sloan Digital Sky Survey-II (SDSS-II) has embarked on a multi-year project to identify and measure light curves for intermediate-redshift (0.05 < z < 0.35) Type Ia supernovae (SNe Ia) using repeated five-band (ugriz) imaging over an area of 300 sq. deg. The survey region is a stripe 2.5° wide...

  14. Identifying Lattice, Orbit, And BPM Errors in PEP-II

    International Nuclear Information System (INIS)

    Decker, F.-J.; SLAC

    2005-01-01

    The PEP-II B-Factory is delivering peak luminosities of up to 9.2 × 10^33 cm^-2 s^-1. This is very impressive, especially considering our poor understanding of the lattice, absolute orbit, and beam position monitor (BPM) system. A few simple MATLAB programs were written to obtain lattice information, like the betatron functions in a coupled machine (four altogether) and the two dispersions, from the current machine and compare it with the design. Big orbit deviations in the Low Energy Ring (LER) could be explained not by bad BPMs (only 3), but by many strong correctors (one corrector to fix four BPMs on average). Additionally, these programs helped to uncover a sign error in the third-order correction of the BPM system. Further analysis of the current information of the BPMs (sum of all buttons) indicates that there might still be more problematic BPMs.

  15. Graphics Education Survey. Part II.

    Science.gov (United States)

    Ernst, Sandra B.

    After a 1977 survey reflected the importance of graphics education for news students, a study was developed to investigate the state of graphics education in the whole field of journalism. A questionnaire was sent to professors and administrators in four print-oriented professional fields of education: magazine, advertising, public relations, and…

  16. Beam induced vacuum measurement error in BEPC II

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    When the beam in the BEPCII storage ring aborts suddenly, the measured pressure of the cold cathode gauges and ion pumps drops suddenly and then decreases gradually to the base pressure. This shows that there is a beam-induced positive error in the pressure measurement during beam operation; the error is the difference between the measured and real pressures. Right after the beam aborts, the error disappears immediately and the measured pressure then equals the real pressure. For one gauge, we can fit a non-linear pressure-time curve to its measured pressure data from 20 seconds after a sudden beam abort. From this negative-exponential pumping-down curve, the real pressure at the time the beam starts aborting is extrapolated. With data from several sudden beam aborts we obtained the errors of that gauge at different beam currents and found that the error is directly proportional to the beam current, as expected. A linear fit gives the proportionality coefficient of the equation, which we use to evaluate the real pressure at any time while the beam, with varying current, is on.
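    A minimal sketch of the extrapolation described above is given below; the gauge readings, time constant and beam-on reading are synthetic stand-ins, and the fit simply assumes a negative-exponential pumping-down curve after the abort.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Synthetic gauge readings (Pa) sampled every 2 s, starting 20 s after a sudden beam abort.
t = np.arange(20.0, 120.0, 2.0)
p_meas = 2.0e-7 + 8.0e-7 * np.exp(-t / 35.0) + rng.normal(0.0, 5.0e-9, t.size)

def pump_down(t, p_base, dp, tau):
    """Negative-exponential pumping-down curve once the beam-induced signal is gone."""
    return p_base + dp * np.exp(-t / tau)

popt, _ = curve_fit(pump_down, t, p_meas, p0=(1e-7, 1e-6, 30.0))

# Extrapolate back to t = 0 (the moment the abort started) to recover the real pressure;
# the beam-induced error is the gauge reading during operation minus this real pressure.
p_real = pump_down(0.0, *popt)
p_with_beam = 3.5e-6          # illustrative reading taken while the beam was on
print(f"real pressure ~ {p_real:.2e} Pa, beam-induced error ~ {p_with_beam - p_real:.2e} Pa")
```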

  17. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    Science.gov (United States)

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  18. Merotelic kinetochore attachment in oocyte meiosis II causes sister chromatids segregation errors in aged mice.

    Science.gov (United States)

    Cheng, Jin-Mei; Li, Jian; Tang, Ji-Xin; Hao, Xiao-Xia; Wang, Zhi-Peng; Sun, Tie-Cheng; Wang, Xiu-Xia; Zhang, Yan; Chen, Su-Ren; Liu, Yi-Xun

    2017-08-03

    Mammalian oocyte chromosomes undergo two meiotic divisions to generate haploid gametes. The frequency of chromosome segregation errors during meiosis I increases with age. However, little attention has been paid to the question of how aging affects sister chromatid segregation during oocyte meiosis II. More importantly, how aneuploid metaphase II (MII) oocytes from aged mice evade the spindle assembly checkpoint (SAC) mechanism to complete later meiosis II to form aneuploid embryos remains unknown. Here, we report that MII oocytes from naturally aged mice exhibited substantial errors in chromosome arrangement and configuration compared with young MII oocytes. Interestingly, these errors in aged oocytes had no impact on anaphase II onset and completion, or on 2-cell formation after parthenogenetic activation. Further study found that merotelic kinetochore attachment occurred more frequently and could stabilize the kinetochore-microtubule interaction to ensure SAC inactivation and anaphase II onset in aged MII oocytes. This orientation could persist largely during anaphase II in aged oocytes, leading to severe chromosome lagging and trailing as well as delay of anaphase II completion. Therefore, merotelic kinetochore attachment in oocyte meiosis II exacerbates age-related genetic instability and is a key source of age-dependent embryo aneuploidy and dysplasia.

  19. Overview about bias in Customer Satisfaction Surveys and focus on self-selection error

    OpenAIRE

    Giovanna Nicolini; Luciana Dalla Valle

    2009-01-01

    The present paper provides an overview of the main types of surveys carried out for customer satisfaction analyses. In order to carry out these surveys it is possible to plan a census or select a sample. The higher the accuracy of the survey, the more reliable the results of the analysis. For this very reason, researchers pay special attention to surveys affected by bias due to non-sampling errors, in particular self-selection errors. These phenomena are very frequent, especially in web surveys. S...

  20. A survey of camera error sources in machine vision systems

    Science.gov (United States)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.
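    One of the standard compensation methods alluded to above, for non-uniform illumination and fixed-pattern sensor gain, is dark-frame and flat-field correction; the sketch below uses synthetic frames in place of real captures and is only a generic illustration, not a method taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
h, w = 480, 640

# Synthetic stand-ins for real captures: dark frame (sensor offset), flat field
# (uniform target under the actual lighting/optics), and a raw scene image.
dark = 5.0 + rng.normal(0.0, 0.5, (h, w))
gain = 1.0 - 0.3 * np.linspace(-1.0, 1.0, w)[None, :] ** 2   # vignetting-like falloff
scene = rng.uniform(50.0, 200.0, (h, w))
flat = dark + 180.0 * gain
raw = dark + scene * gain

# Classic flat-field correction: subtract the offset, divide out the gain pattern,
# and rescale by the mean of the offset-corrected flat field.
corrected = (raw - dark) / (flat - dark) * (flat - dark).mean()

# The column-wise mean brightness becomes flat after correction (vignetting removed).
print(f"raw column spread:       {raw.mean(axis=0).std():.1f}")
print(f"corrected column spread: {corrected.mean(axis=0).std():.1f}")
```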

  1. Active and passive compensation of APPLE II-introduced multipole errors through beam-based measurement

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Ting-Yi; Huang, Szu-Jung; Fu, Huang-Wen; Chang, Ho-Ping; Chang, Cheng-Hsiang [National Synchrotron Radiation Research Center, Hsinchu Science Park, Hsinchu 30076, Taiwan (China); Hwang, Ching-Shiang [National Synchrotron Radiation Research Center, Hsinchu Science Park, Hsinchu 30076, Taiwan (China); Department of Electrophysics, National Chiao Tung University, Hsinchu 30050, Taiwan (China)

    2016-08-01

    The effect of an APPLE II-type elliptically polarized undulator (EPU) on the beam dynamics was investigated using active and passive methods. To reduce the tune shift and improve the injection efficiency, dynamic multipole errors were compensated using L-shaped iron shims, which resulted in stable top-up operation for a minimum gap. The skew quadrupole error was compensated using a multipole corrector located downstream of the EPU to minimize betatron coupling, ensuring enhanced synchrotron radiation brightness. The investigation methods, a numerical simulation algorithm, a multipole error correction method, and the beam-based measurement results are discussed.

  2. Drug Administration Errors by South African Anaesthetists – a Survey

    African Journals Online (AJOL)

    Adele

    Objectives. To investigate the incidence and nature of, and factors contributing towards, “wrong drug administrations” by South African anaesthetists. Design. A confidential, self-reporting survey was sent out to the 720 anaesthetists on the database of the South African Society of Anaesthesiologists.

  3. Medication errors in chemotherapy preparation and administration: a survey conducted among oncology nurses in Turkey.

    Science.gov (United States)

    Ulas, Arife; Silay, Kamile; Akinci, Sema; Dede, Didem Sener; Akinci, Muhammed Bulent; Sendur, Mehmet Ali Nahit; Cubukcu, Erdem; Coskun, Hasan Senol; Degirmenci, Mustafa; Utkan, Gungor; Ozdemir, Nuriye; Isikdogan, Abdurrahman; Buyukcelik, Abdullah; Inanc, Mevlude; Bilici, Ahmet; Odabasi, Hatice; Cihan, Sener; Avci, Nilufer; Yalcin, Bulent

    2015-01-01

    Medication errors in oncology may cause severe clinical problems due to low therapeutic indices and the high toxicity of chemotherapeutic agents. We aimed to investigate unintentional medication errors and underlying factors during chemotherapy preparation and administration based on a systematic survey conducted to reflect oncology nurses' experience. This study was conducted in 18 adult chemotherapy units with the voluntary participation of 206 nurses. A survey was developed by the primary investigators; medication errors (MEs) were defined as preventable errors during prescription, ordering, preparation or administration of medication. The survey consisted of 4 parts: demographic features of nurses; workload of chemotherapy units; errors and their estimated monthly number during chemotherapy preparation and administration; and evaluation of the possible factors responsible for MEs. The survey was conducted by face-to-face interview and data analyses were performed with descriptive statistics. Chi-square or Fisher exact tests were used for comparative analysis of categorical data. Some 83.4% of the 210 nurses reported one or more errors during chemotherapy preparation and administration. Prescribing or ordering of wrong doses by physicians (65.7%) and noncompliance with administration sequences during chemotherapy administration (50.5%) were the most common errors. The most common estimated average monthly error was not following the administration sequence of the chemotherapeutic agents (4.1 times/month, range 1-20). The most important underlying reasons for medication errors were heavy workload (49.7%) and an insufficient number of staff (36.5%). Our findings suggest that the probability of medication error is very high during chemotherapy preparation and administration, the most common involving prescribing and ordering errors. Further studies must address strategies to minimize medication error in patients receiving chemotherapy and determine sufficient protective measures

  4. Analysis of Employee's Survey for Preventing Human-Errors

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Chanho; Kim, Younggab; Joung, Sanghoun [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Human errors in nuclear power plants can cause large and small events or incidents. These events or incidents are among the main contributors to reactor trips and might threaten the safety of nuclear plants. To prevent human errors, KHNP (Korea Hydro & Nuclear Power) introduced 'human-error prevention techniques' and has applied them to main areas such as plant operation, operation support, and maintenance and engineering. This paper proposes methods to prevent and reduce human errors in nuclear power plants by analyzing survey results covering the utilization of the human-error prevention techniques and the employees' awareness of preventing human errors. With regard to human-error prevention, the survey analysis presented the status of the human-error prevention techniques and the employees' awareness of preventing human errors. Employees' understanding and utilization of the techniques was generally high, and the level of employee training and its effect on actual work were in good condition. Employees also answered that the root causes of human error lay in the working environment, including tight schedules, manpower shortages, and excessive workload, rather than in personal negligence or lack of knowledge; consideration of the working environment is therefore certainly needed. At present, based on this survey, the best methods of preventing human error are personal equipment, substantive training and education, a personal mental health check before starting work, prohibition of performing multiple tasks at once, compliance with procedures, and enhancement of job-site review. However, the most important and basic factors for preventing human error are the interest of workers and an organizational atmosphere of good communication between managers and workers and between employees and their supervisors.

  5. The Data Release of the Sloan Digital Sky Survey-II Supernova Survey

    Science.gov (United States)

    Sako, Masao; Bassett, Bruce; Becker, Andrew C.; Brown, Peter J.; Campbell, Heather; Wolf, Rachel; Cinabro, David; D’Andrea, Chris B.; Dawson, Kyle S.; DeJongh, Fritz; Depoy, Darren L.; Dilday, Ben; Doi, Mamoru; Filippenko, Alexei V.; Fischer, John A.; Foley, Ryan J.; Frieman, Joshua A.; Galbany, Lluis; Garnavich, Peter M.; Goobar, Ariel; Gupta, Ravi R.; Hill, Gary J.; Hayden, Brian T.; Hlozek, Renée; Holtzman, Jon A.; Hopp, Ulrich; Jha, Saurabh W.; Kessler, Richard; Kollatschny, Wolfram; Leloudas, Giorgos; Marriner, John; Marshall, Jennifer L.; Miquel, Ramon; Morokuma, Tomoki; Mosher, Jennifer; Nichol, Robert C.; Nordin, Jakob; Olmstead, Matthew D.; Östman, Linda; Prieto, Jose L.; Richmond, Michael; Romani, Roger W.; Sollerman, Jesper; Stritzinger, Max; Schneider, Donald P.; Smith, Mathew; Wheeler, J. Craig; Yasuda, Naoki; Zheng, Chen

    2018-06-01

    This paper describes the data release of the Sloan Digital Sky Survey-II (SDSS-II) Supernova Survey conducted between 2005 and 2007. Light curves, spectra, classifications, and ancillary data are presented for 10,258 variable and transient sources discovered through repeat ugriz imaging of SDSS Stripe 82, a 300 deg² area along the celestial equator. This data release comprises all transient sources brighter than r ≃ 22.5 mag with no history of variability prior to 2004. Dedicated spectroscopic observations were performed on a subset of 889 transients, as well as spectra for thousands of transient host galaxies using the SDSS-III BOSS spectrographs. Photometric classifications are provided for the candidates with good multi-color light curves that were not observed spectroscopically, using host galaxy redshift information when available. From these observations, 4607 transients are either spectroscopically confirmed, or likely to be, supernovae, making this the largest sample of supernova candidates ever compiled. We present a new method for SN host-galaxy identification and derive host-galaxy properties including stellar masses, star formation rates, and the average stellar population ages from our SDSS multi-band photometry. We derive SALT2 distance moduli for a total of 1364 SN Ia with spectroscopic redshifts as well as photometric redshifts for a further 624 purely photometric SN Ia candidates. Using the spectroscopically confirmed subset of the three-year SDSS-II SN Ia sample and assuming a flat ΛCDM cosmology, we determine Ω_M = 0.315 ± 0.093 (statistical error only) and detect a non-zero cosmological constant at 5.7σ.

  6. The Data Release of the Sloan Digital Sky Survey-II Supernova Survey

    Energy Technology Data Exchange (ETDEWEB)

    Sako, Masao; et al.

    2014-01-14

    This paper describes the data release of the Sloan Digital Sky Survey-II (SDSS-II) Supernova Survey conducted between 2005 and 2007. Light curves, spectra, classifications, and ancillary data are presented for 10,258 variable and transient sources discovered through repeat ugriz imaging of SDSS Stripe 82, a 300 deg^2 area along the celestial equator. This data release comprises all transient sources brighter than r~22.5 mag with no history of variability prior to 2004. Dedicated spectroscopic observations were performed on a subset of 889 transients, as well as spectra for thousands of transient host galaxies using the SDSS-III BOSS spectrographs. Photometric classifications are provided for the candidates with good multi-color light curves that were not observed spectroscopically. From these observations, 4607 transients are either spectroscopically confirmed, or likely to be, supernovae, making this the largest sample of supernova candidates ever compiled. We present a new method for SN host-galaxy identification and derive host-galaxy properties including stellar masses, star-formation rates, and the average stellar population ages from our SDSS multi-band photometry. We derive SALT2 distance moduli for a total of 1443 SN Ia with spectroscopic redshifts as well as photometric redshifts for a further 677 purely photometric SN Ia candidates. Using the spectroscopically confirmed subset of the three-year SDSS-II SN Ia sample and assuming a flat Lambda-CDM cosmology, we determine Omega_M = 0.315 +/- 0.093 (statistical error only) and detect a non-zero cosmological constant at 5.7 sigma.

  7. How Radiation Oncologists Would Disclose Errors: Results of a Survey of Radiation Oncologists and Trainees

    International Nuclear Information System (INIS)

    Evans, Suzanne B.; Yu, James B.; Chagpar, Anees

    2012-01-01

    Purpose: To analyze error disclosure attitudes of radiation oncologists and to correlate error disclosure beliefs with survey-assessed disclosure behavior. Methods and Materials: With institutional review board exemption, an anonymous online survey was devised. An email invitation was sent to radiation oncologists (American Society for Radiation Oncology [ASTRO] gold medal winners, program directors and chair persons of academic institutions, and former ASTRO lecturers) and residents. A disclosure score was calculated based on the number of full, partial, or no disclosure responses chosen to the vignette-based questions, and correlation was attempted with attitudes toward error disclosure. Results: The survey received 176 responses: 94.8% of respondents considered themselves more likely to disclose in the setting of a serious medical error; 72.7% of respondents did not feel it mattered who was responsible for the error in deciding to disclose, and 3.9% felt more likely to disclose if someone else was responsible; 38.0% of respondents felt that disclosure increased the likelihood of a lawsuit, and 32.4% felt disclosure decreased the likelihood of a lawsuit; 71.6% of respondents felt near misses should not be disclosed; 51.7% thought that minor errors should not be disclosed; 64.7% viewed disclosure as an opportunity for forgiveness from the patient; and 44.6% considered the patient's level of confidence in them to be a factor in disclosure. For a scenario that could be considered a non-harmful error, 78.9% of respondents would not contact the family. Respondents with high disclosure scores were more likely to feel that disclosure was an opportunity for forgiveness (P=.003) and to have never seen major medical errors (P=.004). Conclusions: The surveyed radiation oncologists chose to respond with full disclosure at a high rate, although ideal disclosure practices were not uniformly adhered to beyond the initial decision to disclose the occurrence of the error.

  8. The Data Release of the Sloan Digital Sky Survey-II Supernova Survey

    DEFF Research Database (Denmark)

    Sako, Masao; Bassett, Bruce; C. Becker, Andrew

    2014-01-01

    This paper describes the data release of the Sloan Digital Sky Survey-II (SDSS-II) Supernova Survey conducted between 2005 and 2007. Light curves, spectra, classifications, and ancillary data are presented for 10,258 variable and transient sources discovered through repeat ugriz imaging of SDSS S...

  9. The Extended Northern ROSAT Galaxy Cluster Survey (NORAS II). I. Survey Construction and First Results

    Energy Technology Data Exchange (ETDEWEB)

    Böhringer, Hans; Chon, Gayoung; Trümper, Joachim [Max-Planck-Institut für Extraterrestrische Physik, D-85748 Garching (Germany); Retzlaff, Jörg [ESO, D-85748 Garching (Germany); Meisenheimer, Klaus [Max-Planck-Institut für Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Schartel, Norbert [ESAC, Camino Bajo del Castillo, Villanueva de la Cañada, E-28692 Madrid (Spain)

    2017-05-01

    As the largest, clearly defined building blocks of our universe, galaxy clusters are interesting astrophysical laboratories and important probes for cosmology. X-ray surveys for galaxy clusters provide one of the best ways to characterize the population of galaxy clusters. We provide a description of the construction of the NORAS II galaxy cluster survey based on X-ray data from the northern part of the ROSAT All-Sky Survey. NORAS II extends the NORAS survey down to a flux limit of 1.8 × 10^−12 erg s^−1 cm^−2 (0.1–2.4 keV), increasing the sample size by about a factor of two. The NORAS II cluster survey now reaches the same quality and depth as its counterpart, the southern REFLEX II survey, allowing us to combine the two complementary surveys. The paper provides information on the determination of the cluster X-ray parameters, the identification process of the X-ray sources, the statistics of the survey, and the construction of the survey selection function, which we provide in numerical format. Currently NORAS II contains 860 clusters with a median redshift of z = 0.102. We provide a number of statistical functions, including the log N–log S and the X-ray luminosity function, and compare these to the results from the complementary REFLEX II survey. Using the NORAS II sample to constrain the cosmological parameters, σ_8 and Ω_m, yields results perfectly consistent with those of REFLEX II. Overall, the results show that the two hemisphere samples, NORAS II and REFLEX II, can be combined without problems into an all-sky sample, just excluding the zone of avoidance.

  10. The Extended Northern ROSAT Galaxy Cluster Survey (NORAS II). I. Survey Construction and First Results

    International Nuclear Information System (INIS)

    Böhringer, Hans; Chon, Gayoung; Trümper, Joachim; Retzlaff, Jörg; Meisenheimer, Klaus; Schartel, Norbert

    2017-01-01

    As the largest, clearly defined building blocks of our universe, galaxy clusters are interesting astrophysical laboratories and important probes for cosmology. X-ray surveys for galaxy clusters provide one of the best ways to characterize the population of galaxy clusters. We provide a description of the construction of the NORAS II galaxy cluster survey based on X-ray data from the northern part of the ROSAT All-Sky Survey. NORAS II extends the NORAS survey down to a flux limit of 1.8 × 10^−12 erg s^−1 cm^−2 (0.1–2.4 keV), increasing the sample size by about a factor of two. The NORAS II cluster survey now reaches the same quality and depth as its counterpart, the southern REFLEX II survey, allowing us to combine the two complementary surveys. The paper provides information on the determination of the cluster X-ray parameters, the identification process of the X-ray sources, the statistics of the survey, and the construction of the survey selection function, which we provide in numerical format. Currently NORAS II contains 860 clusters with a median redshift of z = 0.102. We provide a number of statistical functions, including the log N–log S and the X-ray luminosity function, and compare these to the results from the complementary REFLEX II survey. Using the NORAS II sample to constrain the cosmological parameters, σ_8 and Ω_m, yields results perfectly consistent with those of REFLEX II. Overall, the results show that the two hemisphere samples, NORAS II and REFLEX II, can be combined without problems into an all-sky sample, just excluding the zone of avoidance.

  11. Type-II generalized family-wise error rate formulas with application to sample size determination.

    Science.gov (United States)

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize, available on CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. A comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
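    As a complement to the analytic formulas, the r-power (the probability of rejecting at least r false null hypotheses) can also be approximated by simulation. The sketch below is a generic Monte Carlo illustration for a single-step Bonferroni procedure with assumed effect sizes; it is neither the paper's formulas nor the rPowerSampleSize implementation.

```python
import numpy as np
from scipy.stats import norm

def r_power_mc(n, effects, r, alpha=0.05, n_sim=20_000, seed=0):
    """Monte Carlo r-power of a single-step Bonferroni procedure: the probability of
    rejecting at least r of the false nulls. `effects` holds the standardized effect
    size of each (false-null) endpoint."""
    rng = np.random.default_rng(seed)
    m = len(effects)
    z_crit = norm.ppf(1 - alpha / (2 * m))          # Bonferroni-adjusted two-sided cutoff
    z = rng.normal(np.sqrt(n) * np.asarray(effects), 1.0, size=(n_sim, m))
    rejections = (np.abs(z) > z_crit).sum(axis=1)
    return (rejections >= r).mean()

# Example: 4 endpoints with modest effects; require at least r = 2 rejections.
for n in (50, 100, 200):
    print(n, f"r-power = {r_power_mc(n, effects=[0.3, 0.25, 0.2, 0.15], r=2):.3f}")
```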

  12. The Use of PCs, Smartphones, and Tablets in a Probability-Based Panel Survey : Effects on Survey Measurement Error

    NARCIS (Netherlands)

    Lugtig, Peter; Toepoel, Vera

    2016-01-01

    Respondents in an Internet panel survey can often choose which device they use to complete questionnaires: a traditional PC, laptop, tablet computer, or a smartphone. Because all these devices have different screen sizes and modes of data entry, measurement errors may differ between devices. Using

  13. The CLASS blazar survey - II. Optical properties

    NARCIS (Netherlands)

    Caccianiga, A; Marcha, MJ; Anton, S; Mack, KH; Neeser, MJ

    2002-01-01

    This paper presents the optical properties of the objects selected in the CLASS blazar survey. Because an optical spectrum is now available for 70 per cent of the 325 sources present in the sample, a spectral classification, based on the appearance of the emission/absorption lines, is possible. A

  14. Improved characterisation and modelling of measurement errors in electrical resistivity tomography (ERT) surveys

    Science.gov (United States)

    Tso, Chak-Hau Michael; Kuras, Oliver; Wilkinson, Paul B.; Uhlemann, Sebastian; Chambers, Jonathan E.; Meldrum, Philip I.; Graham, James; Sherlock, Emma F.; Binley, Andrew

    2017-11-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe or assume a statistical model of data errors before inversion. Wrongly prescribed errors can lead to over- or under-fitting of data; however, the derivation of models of data errors is often neglected. With the heightening interest in uncertainty estimation within hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide improved image appraisal. Here we focus on the role of measurement errors in electrical resistivity tomography (ERT). We have analysed two time-lapse ERT datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe; the other is a two-year-long cross-borehole survey at a UK nuclear site with 246 sets of over 50,000 measurements. Our study includes the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and correlation coefficient analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used, i.e. errors may not be uncorrelated as often assumed. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model explains the observed measurement errors better and shows superior inversion results and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the electrodes used to make the measurements. The new model can be readily applied to the diagonal data weighting matrix widely used in common inversion methods, as well as to the data covariance matrix in a Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
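    A generic sketch of the conventional starting point, fitting a linear error model |e| = a + b·|R| to direct-reciprocal differences and feeding the result into the data-weighting matrix, is given below with synthetic measurements; it does not reproduce the electrode-grouping model proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical direct/reciprocal transfer resistances (ohms) for a set of ERT measurements.
R_direct = rng.lognormal(mean=0.0, sigma=1.0, size=500)
R_recip = R_direct * (1 + rng.normal(0, 0.02, R_direct.size)) + rng.normal(0, 0.001, R_direct.size)

R_avg = 0.5 * (R_direct + R_recip)
recip_err = np.abs(R_direct - R_recip)

# Classic linear error model |e| = a + b * |R|, fitted by least squares.
A = np.column_stack([np.ones_like(R_avg), np.abs(R_avg)])
(a, b), *_ = np.linalg.lstsq(A, recip_err, rcond=None)
print(f"error model: |e| ~ {a:.4f} + {b:.4f} * |R|")

# The fitted standard errors feed the inverse data-weighting matrix W = diag(1 / sigma_i).
sigma = a + b * np.abs(R_avg)
weights = 1.0 / sigma
```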

  15. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    Directory of Open Access Journals (Sweden)

    Brady T West

    Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data.

  16. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    Science.gov (United States)

    West, Brady T.; Sakshaug, Joseph W.; Aurelien, Guy Alain S.

    2016-01-01

    Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data. PMID:27355817
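    The kind of analytic error at issue, ignoring weights and design-based variance estimation, can be illustrated with a small synthetic sketch; the data, weights, and replicate-weight construction below are invented for illustration, and the exact variance multiplier in practice depends on the replication scheme documented with the survey.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic survey data: outcome y, full-sample weights w, and a set of replicate
# weights such as those released with many public-use survey files.
n, n_reps = 1000, 80
y = rng.normal(50.0, 10.0, n)
w = rng.uniform(0.5, 3.0, n)
rep_w = w[:, None] * rng.uniform(0.7, 1.3, (n, n_reps))

def wmean(y, w):
    return np.sum(w * y) / np.sum(w)

est = wmean(y, w)

# Naive SE: treats the file as an unweighted simple random sample.
naive_se = y.std(ddof=1) / np.sqrt(n)

# Replicate-weight SE: spread of the replicate estimates around the full-sample estimate
# (a jackknife-style multiplier is used here; the correct one depends on the scheme).
reps = np.array([wmean(y, rep_w[:, r]) for r in range(n_reps)])
rep_se = np.sqrt((n_reps - 1) / n_reps * np.sum((reps - est) ** 2))

print(f"estimate = {est:.2f}, naive SE = {naive_se:.3f}, replicate SE = {rep_se:.3f}")
```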

  17. Are Divorce Studies Trustworthy? The Effects of Survey Nonresponse and Response Errors

    Science.gov (United States)

    Mitchell, Colter

    2010-01-01

    Researchers rely on relationship data to measure the multifaceted nature of families. This article speaks to relationship data quality by examining the ramifications of different types of error on divorce estimates, models predicting divorce behavior, and models employing divorce as a predictor. Comparing matched survey and divorce certificate…

  18. Can i just check...? Effects of edit check questions on measurement error and survey estimates

    NARCIS (Netherlands)

    Lugtig, Peter; Jäckle, Annette

    2014-01-01

    Household income is difficult to measure, since it requires the collection of information about all potential income sources for each member of a household. We assess the effects of two types of edit check questions on measurement error and survey estimates: within-wave edit checks use responses to

  19. Pajarito Plateau archaeological surveys and excavations. II

    Energy Technology Data Exchange (ETDEWEB)

    Steen, C R

    1982-04-01

    Los Alamos National Laboratory continues its archaeological program of data gathering and salvage excavations. Sites recently added to the archaeological survey are described, as well as the results of five excavations. Among the more interesting and important discoveries are (1) the apparently well-established local use of anhydrous lime, and (2) a late pre-Columbian use of earlier house sites and middens for garden plots. Evidence indicated that the local puebloan population was the result of an expansion of upper Rio Grande peoples, not an influx of migrants.

  20. Comparing Two Inferential Approaches to Handling Measurement Error in Mixed-Mode Surveys

    Directory of Open Access Journals (Sweden)

    Buelens Bart

    2017-06-01

    Nowadays, sample survey data collection strategies combine web, telephone, face-to-face, or other modes of interviewing in a sequential fashion. The measurement bias of survey estimates of means and totals is composed of different mode-dependent measurement errors, as each data collection mode has its own associated measurement error. This article contains an appraisal of two recently proposed methods of inference in this setting. The first is a calibration adjustment to the survey weights so as to balance the survey response to a prespecified distribution of the respondents over the modes. The second is a prediction method that seeks to correct measurements towards a benchmark mode. The two methods are motivated differently but coincide in some circumstances and agree in terms of required assumptions. The methods are applied to the Labour Force Survey in the Netherlands and are found to provide almost identical estimates of the number of unemployed. Each method has its own specific merits. Both can be applied easily in practice, as they do not require additional data collection beyond the regular sequential mixed-mode survey, an attractive element for national statistical institutes and other survey organisations.
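    The first (calibration) approach can be sketched in a few lines as post-stratification of the weights on mode; the respondent data and the prespecified target distribution below are assumptions for illustration, not the Labour Force Survey settings.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic mixed-mode response: the mode of each respondent and a base weight.
modes = rng.choice(["web", "telephone", "face-to-face"], size=2_000, p=[0.6, 0.25, 0.15])
base_w = rng.uniform(0.5, 2.0, modes.size)

# Prespecified target distribution of respondents over the modes (an assumption here).
target = {"web": 0.50, "telephone": 0.30, "face-to-face": 0.20}

# Simple post-stratification on mode: scale the weights within each mode so that the
# weighted mode shares equal the target shares.
total = base_w.sum()
calib_w = base_w.copy()
for mode, share in target.items():
    mask = modes == mode
    calib_w[mask] *= share * total / base_w[mask].sum()

weighted_shares = {m: calib_w[modes == m].sum() / calib_w.sum() for m in target}
print(weighted_shares)   # matches the target distribution
```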

  1. The relative size of measurement error and attrition error in a panel survey. Comparing them with a new multi-trait multi-method model

    NARCIS (Netherlands)

    Lugtig, Peter

    2017-01-01

    This paper proposes a method to simultaneously estimate both measurement and nonresponse errors for attitudinal and behavioural questions in a longitudinal survey. The method uses a Multi-Trait Multi-Method (MTMM) approach, which is commonly used to estimate the reliability and validity of survey

  2. Association between presenilin-1 polymorphism and maternal meiosis II errors in Down syndrome.

    Science.gov (United States)

    Petersen, M B; Karadima, G; Samaritaki, M; Avramopoulos, D; Vassilopoulos, D; Mikkelsen, M

    2000-08-28

    Several lines of evidence suggest a shared genetic susceptibility to Down syndrome (DS) and Alzheimer disease (AD). Rare forms of autosomal-dominant AD are caused by mutations in the APP and presenilin genes (PS-1 and PS-2). The presenilin proteins have been localized to the nuclear membrane, kinetochores, and centrosomes, suggesting a function in chromosome segregation. A genetic association between a polymorphism in intron 8 of the PS-1 gene and AD has been described in some series, and an increased risk of AD has been reported in mothers of DS probands. We therefore studied 168 probands with free trisomy 21 of known parental and meiotic origin and their parents from a population-based material, by analyzing the intron 8 polymorphism in the PS-1 gene. An increased frequency of allele 1 in mothers with a meiosis II error (70.8%) was found compared with mothers with a meiosis I error (52.7%, P < 0.01), with an excess of the 11 genotype in the meiosis II mothers. The frequency of allele 1 in mothers carrying apolipoprotein E (APOE) epsilon4 allele (68.0%) was higher than in mothers without epsilon4 (52.2%, P < 0.01). We hypothesize that the PS-1 intronic polymorphism might be involved in chromosomal nondisjunction through an influence on the expression level of PS-1 or due to linkage disequilibrium with biologically relevant polymorphisms in or outside the PS-1 gene. Copyright 2000 Wiley-Liss, Inc.

  3. Estimating Classification Errors under Edit Restrictions in Composite Survey-Register Data Using Multiple Imputation Latent Class Modelling (MILC)

    NARCIS (Netherlands)

    Boeschoten, Laura; Oberski, Daniel; De Waal, Ton

    2017-01-01

    Both registers and surveys can contain classification errors. These errors can be estimated by making use of a composite data set. We propose a new method based on latent class modelling to estimate the number of classification errors across several sources while taking into account impossible combinations with scores on other variables.

  4. A Survey of Soft-Error Mitigation Techniques for Non-Volatile Memories

    Directory of Open Access Journals (Sweden)

    Sparsh Mittal

    2017-02-01

    Full Text Available Non-volatile memories (NVMs) offer superior density and energy characteristics compared to conventional memories; however, NVMs suffer from severe reliability issues that can easily eclipse their energy efficiency advantages. In this paper, we survey architectural techniques for improving the soft-error reliability of NVMs, specifically PCM (phase change memory) and STT-RAM (spin transfer torque RAM). We focus on soft errors, such as resistance drift and write disturbance in PCM, and read disturbance and write failures in STT-RAM. By classifying the research works based on key parameters, we highlight their similarities and distinctions. We hope that this survey will underline the crucial importance of addressing NVM reliability for ensuring their system integration and will be useful for researchers, computer architects and processor designers.

  5. Factors controlling volume errors through 2D gully erosion assessment: guidelines for optimal survey design

    Science.gov (United States)

    Castillo, Carlos; Pérez, Rafael

    2017-04-01

    The assessment of gully erosion volumes is essential for the quantification of soil losses derived from this relevant degradation process. Traditionally, 2D and 3D approaches have been applied for this purpose (Casalí et al., 2006). Although innovative 3D approaches have recently been proposed for gully volume quantification, a renewed interest can be found in the literature regarding the useful information that cross-section analysis still provides in gully erosion research. Moreover, methods based on 2D approaches can be the most cost-effective option in many situations, such as preliminary studies with low accuracy requirements or surveys under time or budget constraints. The main aim of this work is to examine the key factors controlling volume error variability in 2D gully assessment by means of a stochastic experiment involving a Monte Carlo analysis over synthetic gully profiles, in order to 1) contribute to a better understanding of the drivers and magnitude of the uncertainty of 2D gully erosion surveys and 2) provide guidelines for optimal survey designs. Owing to the stochastic properties of error generation in 2D volume assessment, a statistical approach was followed to generate a large and significant set of gully reach configurations to evaluate quantitatively the influence of the main factors controlling the uncertainty of the volume assessment. For this purpose, a simulation algorithm in Matlab® code was written, involving the following stages: generation of synthetic gully area profiles with different degrees of complexity (characterized by the cross-section variability); simulation of field measurements characterised by a survey intensity and the precision of the measurement method; and quantification of the volume error uncertainty as a function of the key factors. In this communication we will present the relationships between volume error and the studied factors and propose guidelines for 2D field surveys based on the minimal survey
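
    The simulation stages listed above can be conveyed with a minimal Python sketch (the study itself used Matlab); the profile model, survey spacing, and measurement noise level here are illustrative assumptions, not the authors' configuration.

    ```python
    # Minimal Monte Carlo sketch of 2D gully volume error estimation.
    import numpy as np

    rng = np.random.default_rng(1)

    def synthetic_area_profile(length_m=200.0, n=2001, variability=0.4):
        """Smooth random cross-sectional area profile (m^2) along the gully reach."""
        x = np.linspace(0.0, length_m, n)
        area = 2.0 + variability * np.cumsum(rng.normal(0.0, 0.05, n))
        return x, np.clip(area, 0.2, None)

    def survey_volume(x, area, spacing_m=20.0, rel_precision=0.10):
        """Estimate volume from cross-sections measured every `spacing_m` with noise."""
        positions = np.arange(0.0, x[-1] + spacing_m / 2, spacing_m)
        idx = np.clip(np.searchsorted(x, positions), 0, x.size - 1)
        measured = area[idx] * (1.0 + rng.normal(0.0, rel_precision, idx.size))
        return measured.mean() * x[-1]      # estimated volume = mean area * reach length

    rel_errors = []
    for _ in range(2000):
        x, area = synthetic_area_profile()
        true_volume = np.trapz(area, x)
        rel_errors.append((survey_volume(x, area) - true_volume) / true_volume)

    rel_errors = np.array(rel_errors)
    print(f"relative volume error: mean {rel_errors.mean():+.3f}, sd {rel_errors.std():.3f}")
    ```

    Repeating the experiment while varying the cross-section variability, survey spacing, and measurement precision reproduces the kind of factor analysis the abstract describes.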

  6. Quantifying type I and type II errors in decision-making under uncertainty : The case of GM crops

    NARCIS (Netherlands)

    Ansink, Erik; Wesseler, Justus

    2009-01-01

    In a recent paper, Hennessy and Moschini (American Journal of Agricultural Economics 88(2): 308-323, 2006) analyse the interactions between scientific uncertainty and costly regulatory actions. We use their model to analyse the costs of making type I and type II errors, in the context of the

  7. Quantifying type I and type II errors in decision-making under uncertainty: the case of GM crops

    NARCIS (Netherlands)

    Ansink, E.J.H.; Wesseler, J.H.H.

    2009-01-01

    In a recent paper, Hennessy and Moschini (American Journal of Agricultural Economics 88(2): 308-323, 2006) analyse the interactions between scientific uncertainty and costly regulatory actions. We use their model to analyse the costs of making type I and type II errors, in the context of the

  8. NEWLY IDENTIFIED EXTENDED GREEN OBJECTS (EGOs) FROM THE SPITZER GLIMPSE II SURVEY. II. MOLECULAR CLOUD ENVIRONMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Chen Xi; Gan Conggui; Shen Zhiqiang [Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Chinese Academy of Sciences, Shanghai 200030 (China); Ellingsen, Simon P.; Titmarsh, Anita [School of Mathematics and Physics, University of Tasmania, Hobart, Tasmania (Australia); He Jinhua, E-mail: chenxi@shao.ac.cn [Key Laboratory for the Structure and Evolution of Celestial Objects, Yunnan Astronomical Observatory/National Astronomical Observatory, Chinese Academy of Sciences, P.O. Box 110, Kunming 650011, Yunnan Province (China)

    2013-06-01

    We have undertaken a survey of molecular lines in the 3 mm band toward 57 young stellar objects using the Australia Telescope National Facility Mopra 22 m radio telescope. The target sources were young stellar objects with active outflows (extended green objects (EGOs)) newly identified from the GLIMPSE II survey. We observe a high detection rate (50%) of broad line wing emission in the HNC and CS thermal lines, which combined with the high detection rate of class I methanol masers toward these sources (reported in Paper I) further demonstrates that the GLIMPSE II EGOs are associated with outflows. The physical and kinematic characteristics derived from the 3 mm molecular lines for these newly identified EGOs are consistent with these sources being massive young stellar objects with ongoing outflow activity and rapid accretion. These findings support our previous investigations of the mid-infrared properties of these sources and their association with other star formation tracers (e.g., infrared dark clouds, methanol masers and millimeter dust sources) presented in Paper I. The high detection rate (64%) of the hot core tracer CH₃CN reveals that the majority of these new EGOs have evolved to the hot molecular core stage. Comparison of the observed molecular column densities with predictions from hot core chemistry models reveals that the newly identified EGOs from the GLIMPSE II survey are members of the youngest hot core population, with an evolutionary time scale of the order of 10³ yr.

  9. Technical errors in complete mouth radiographic survey according to radiographic techniques and film holding methods

    International Nuclear Information System (INIS)

    Choi, Karp Sik; Byun, Chong Soo; Choi, Soon Chul

    1986-01-01

    The purpose of this study was to investigate the numbers and causes of retakes in 300 complete mouth radiographic surveys made by 75 senior dental students. According to radiographic technique and film holding method, the surveys were divided into 4 groups: Group I: Bisecting-angle technique with the patient's fingers; Group II: Bisecting-angle technique with the Rinn Snap-A-Ray device; Group III: Bisecting-angle technique with the Rinn XCP instrument (short cone); Group IV: Bisecting-angle technique with the Rinn XCP instrument (long cone). The most frequent cause of retakes, the most frequently retaken tooth area, and the average number of retakes per complete mouth survey were evaluated. The results obtained were as follows: Group I: incorrect film placement (47.8%), upper canine region, 0.89; Group II: incorrect film placement (44.0%), upper canine region, 1.12; Group III: incorrect film placement (79.2%), upper canine region, 2.05; Group IV: incorrect film placement (67.7%), upper canine region, 1.69.

  10. A Type II Supernova Hubble diagram from the CSP-I, SDSS-II, and SNLS surveys

    OpenAIRE

    de Jaeger, T.; González-Gaitán, S.; Hamuy, M.; Galbany, L.; Anderson, J. P.; Phillips, M. M.; Stritzinger, M. D.; Carlberg, R. G.; Sullivan, M.; Gutiérrez, C. P.; Hook, I. M.; Howell, D. Andrew; Hsiao, E. Y.; Kuncarayakti, H.; Ruhlmann-Kleider, V.

    2016-01-01

    The coming era of large photometric wide-field surveys will increase the detection rate of supernovae by orders of magnitude. Such numbers will restrict spectroscopic follow-up in the vast majority of cases, and hence new methods based solely on photometric data must be developed. Here, we construct a complete Hubble diagram of Type II supernovae (SNe II) combining data from three different samples: the Carnegie Supernova Project-I, the Sloan Digital Sky Survey II SN, and the Supernova Legacy Survey.

  11. Linking Errors between Two Populations and Tests: A Case Study in International Surveys in Education

    Directory of Open Access Journals (Sweden)

    Dirk Hastedt

    2015-06-01

    Full Text Available This simulation study was prompted by the current increased interest in linking national studies to international large-scale assessments (ILSAs) such as IEA's TIMSS, IEA's PIRLS, and OECD's PISA. Linkage in this scenario is achieved by including items from the international assessments in the national assessments on the premise that the average achievement scores from the latter can be linked to the international metric. In addition to raising issues associated with different testing conditions, administrative procedures, and the like, this approach also poses psychometric challenges. This paper endeavors to shed some light on the effects that can be expected, the linkage errors in particular, by countries using this practice. The ILSA selected for this simulation study was IEA TIMSS 2011, and the three countries used as the national assessment cases were Botswana, Honduras, and Tunisia, all of which participated in TIMSS 2011. The items selected as items common to the simulated national tests and the international test came from the Grade 4 TIMSS 2011 mathematics items that IEA released into the public domain after completion of this assessment. The findings of the current study show that linkage errors seemed to reach acceptable levels if 30 or more items were used for the linkage, although the errors were still significantly higher than the TIMSS cutoffs. Comparison of the estimated country averages based on the simulated national surveys and the averages based on the international TIMSS assessment revealed only one instance across the three countries of the estimates approaching parity. Also, the percentages of students in these countries who actually reached the defined benchmarks on the TIMSS achievement scale differed significantly from the results based on TIMSS and the results for the simulated national assessments. As a conclusion, we advise against using groups of released items from international assessments in national assessments.
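
    The dependence of linking error on the number of common items can be illustrated with a deliberately simplified simulation. The mean-shift linking and the assumed spread of item-difficulty differences below are illustrative stand-ins for the full IRT machinery used in the study.

    ```python
    # Simplified sketch: linking error shrinks roughly as 1/sqrt(k) with k common items.
    import numpy as np

    rng = np.random.default_rng(2)

    def linking_error(n_common_items, item_diff_sd=0.35, n_rep=5000):
        """SD of the estimated linking constant over repeated simulated linkings."""
        shifts = []
        for _ in range(n_rep):
            # per-item difference between national and international difficulty estimates
            item_diffs = rng.normal(0.0, item_diff_sd, n_common_items)
            shifts.append(item_diffs.mean())        # estimated linking constant
        return np.std(shifts)

    for k in (10, 20, 30, 60):
        print(f"{k:3d} common items -> linking error ~ {linking_error(k):.3f} logits")
    ```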

  12. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    Energy Technology Data Exchange (ETDEWEB)

    Li, T. S. [et al.

    2016-05-27

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  13. The Sloan Digital Sky Survey-II Supernova Survey: Technical Summary

    Energy Technology Data Exchange (ETDEWEB)

    Frieman, Joshua A.; /Fermilab /KICP, Chicago /Chicago U., Astron. Astrophys. Ctr.; Bassett, Bruce; /Cape Town U. /South African Astron. Observ.; Becker, Andrew; /Washington; Choi, Changsu; /Seoul Natl. U.; Cinabro, David; /Wayne State U.; DeJongh, Don Frederic; /Fermilab; Depoy, Darren L.; /Ohio State U.; Doi, Mamoru; /Tokyo U.; Garnavich, Peter M.; /Notre Dame U.; Hogan, Craig J.; /Washington U., Seattle, Astron. Dept.; Holtzman, Jon; /New Mexico State U.; Im, Myungshin; /Seoul Natl. U.; Jha, Saurabh; /Stanford U., Phys. Dept.; Konishi, Kohki; /Tokyo U.; Lampeitl, Hubert; /Baltimore, Space Telescope Sci.; Marriner, John; /Fermilab; Marshall, Jennifer L.; /Ohio State U.; McGinnis,; /Fermilab; Miknaitis, Gajus; /Fermilab; Nichol, Robert C.; /Portsmouth U.; Prieto, Jose Luis; /Ohio State U. /Rochester Inst. Tech. /Stanford U., Phys. Dept. /Pennsylvania U.

    2007-09-14

    The Sloan Digital Sky Survey-II (SDSS-II) has embarked on a multi-year project to identify and measure light curves for intermediate-redshift (0.05 < z < 0.35) Type Ia supernovae (SNe Ia) using repeated five-band (ugriz) imaging over an area of 300 sq. deg. The survey region is a stripe 2.5 degrees wide centered on the celestial equator in the Southern Galactic Cap that has been imaged numerous times in earlier years, enabling construction of a deep reference image for discovery of new objects. Supernova imaging observations are being acquired between 1 September and 30 November of 2005-7. During the first two seasons, each region was imaged on average every five nights. Spectroscopic follow-up observations to determine supernova type and redshift are carried out on a large number of telescopes. In its first two three-month seasons, the survey has discovered and measured light curves for 327 spectroscopically confirmed SNe Ia, 30 probable SNe Ia, 14 confirmed SNe Ib/c, 32 confirmed SNe II, plus a large number of photometrically identified SNe Ia, 94 of which have host-galaxy spectra taken so far. This paper provides an overview of the project and briefly describes the observations completed during the first two seasons of operation.

  14. Errors and omissions in hospital prescriptions: a survey of prescription writing in a hospital.

    Science.gov (United States)

    Calligaris, Laura; Panzera, Angela; Arnoldo, Luca; Londero, Carla; Quattrin, Rosanna; Troncon, Maria G; Brusaferro, Silvio

    2009-05-13

    The frequency of drug prescription errors is high. Excluding errors in decision making, the remainder are mainly due to order ambiguity, non-standard nomenclature and illegible writing. The aim of this study is to analyse, as part of a continuous quality improvement program, the quality of prescription writing for antibiotics in an Italian University Hospital, as a risk factor for prescription errors. The point prevalence survey, carried out on 26-30 May 2008, involved 41 inpatient Units. Every parenteral or oral antibiotic prescription was analysed for legibility (generic or brand drug name, dose, frequency of administration) and completeness (generic or brand name, dose, frequency of administration, route of administration, date of prescription and signature of the prescriber). Eight doctors (residents in Hygiene and Preventive Medicine) and two pharmacists performed the survey by reviewing the clinical records of medical, surgical or intensive care section inpatients. The antibiotics drug category was chosen because its use is widespread in the setting considered. Out of 756 inpatients included in the study, 408 antibiotic prescriptions were found in 298 patients (mean prescriptions per patient 1.4; SD +/- 0.6). Overall, 92.7% (38/41) of the Units had at least one patient with an antibiotic prescription. Legibility requirements were met for 78.9% of generic or brand names, 69.4% of doses and 80.1% of frequencies of administration, whereas completeness was fulfilled for 95.6% of generic or brand names, 76.7% of doses, 83.6% of frequencies of administration, 87% of routes of administration, 43.9% of dates of prescription and 33.3% of physicians' signatures. Overall, 23.9% of prescriptions were illegible and 29.9% of prescriptions were incomplete. Legibility and completeness were higher for prescriptions of unusual drugs. The Intensive Care Section performed best as far as quality of prescription writing was concerned when compared with the Medical and Surgical Sections.

  15. THE PITTSBURGH SLOAN DIGITAL SKY SURVEY Mg II QUASAR ABSORPTION-LINE SURVEY CATALOG

    International Nuclear Information System (INIS)

    Quider, Anna M.; Nestor, Daniel B.; Turnshek, David A.; Rao, Sandhya M.; Weyant, Anja N.; Monier, Eric M.; Busche, Joseph R.

    2011-01-01

    We present a catalog of intervening Mg II quasar absorption-line systems in the redshift interval 0.36 ≤ z ≤ 2.28. The catalog was built from Sloan Digital Sky Survey Data Release Four (SDSS DR4) quasar spectra. Currently, the catalog contains ∼17,000 measured Mg II doublets. We also present data on the ∼44,600 quasar spectra which were searched to construct the catalog, including redshift and magnitude information, continuum-normalized spectra, and corresponding arrays of redshift-dependent minimum rest equivalent widths detectable at our confidence threshold. The catalog is available online. A careful second search of 500 random spectra indicated that, for every 100 spectra searched, approximately one significant Mg II system was accidentally rejected. Current plans to expand the catalog beyond DR4 quasars are discussed. Many Mg II absorbers are known to be associated with galaxies. Therefore, the combination of large size and well understood statistics makes this catalog ideal for precision studies of the low-ionization and neutral gas regions associated with galaxies at low to moderate redshift. An analysis of the statistics of Mg II absorbers using this catalog will be presented in a subsequent paper.

  16. Evaluation of Analysis by Cross-Validation, Part II: Diagnostic and Optimization of Analysis Error Covariance

    Directory of Open Access Journals (Sweden)

    Richard Ménard

    2018-02-01

    Full Text Available We present a general theory of estimation of analysis error covariances based on cross-validation as well as a geometric interpretation of the method. In particular, we use the variance of passive observation-minus-analysis residuals and show that the true analysis error variance can be estimated, without relying on the optimality assumption. This approach is used to obtain near optimal analyses that are then used to evaluate the air quality analysis error using several different methods at active and passive observation sites. We compare the estimates according to the method of Hollingsworth-Lönnberg, Desroziers et al., a new diagnostic we developed, and the perceived analysis error computed from the analysis scheme, to conclude that, as long as the analysis is near optimal, all estimates agree within a certain error margin.
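
    A toy numerical illustration of the cross-validation diagnostic is sketched below, under the simplifying assumptions that observation and analysis errors at passive sites are independent and that the observation error variance is known; all numbers are invented for the example.

    ```python
    # Toy sketch: recover the analysis error variance from passive obs-minus-analysis residuals.
    import numpy as np

    rng = np.random.default_rng(3)
    n_sites = 20000
    truth = rng.normal(0.0, 10.0, n_sites)           # true field at passive observation sites

    sigma_o = 3.0                                    # known observation error s.d.
    sigma_a = 2.0                                    # analysis error s.d. we try to recover

    obs = truth + rng.normal(0.0, sigma_o, n_sites)       # passive observations (not assimilated)
    analysis = truth + rng.normal(0.0, sigma_a, n_sites)  # analysis interpolated to those sites

    residual_var = np.var(obs - analysis)                 # ~ sigma_o**2 + sigma_a**2
    est_sigma_a = np.sqrt(residual_var - sigma_o**2)
    print(f"estimated analysis error s.d.: {est_sigma_a:.2f} (true {sigma_a})")
    ```

    Withholding the passive observations from the assimilation is what makes the residual variance separable into observation and analysis contributions without invoking optimality.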

  17. Survey of Biomass Gasification, Volume II: Principles of Gasification

    Energy Technology Data Exchange (ETDEWEB)

    Reed, T.B. (comp.)

    1979-07-01

    Biomass can be converted by gasification into a clean-burning gaseous fuel that can be used to retrofit existing gas/oil boilers, to power engines, to generate electricity, and as a base for synthesis of methanol, gasoline, ammonia, or methane. This survey describes biomass gasification, associated technologies, and issues in three volumes. Volume I contains the synopsis and executive summary, giving highlights of the findings of the other volumes. In Volume II the technical background necessary for understanding the science, engineering, and commercialization of biomass is presented. In Volume III the present status of gasification processes is described in detail, followed by chapters on economics, gas conditioning, fuel synthesis, the institutional role to be played by the federal government, and recommendations for future research and development.

  18. EFFECT OF MEASUREMENT ERRORS ON PREDICTED COSMOLOGICAL CONSTRAINTS FROM SHEAR PEAK STATISTICS WITH LARGE SYNOPTIC SURVEY TELESCOPE

    Energy Technology Data Exchange (ETDEWEB)

    Bard, D.; Chang, C.; Kahn, S. M.; Gilmore, K.; Marshall, S. [KIPAC, Stanford University, 452 Lomita Mall, Stanford, CA 94309 (United States); Kratochvil, J. M.; Huffenberger, K. M. [Department of Physics, University of Miami, Coral Gables, FL 33124 (United States); May, M. [Physics Department, Brookhaven National Laboratory, Upton, NY 11973 (United States); AlSayyad, Y.; Connolly, A.; Gibson, R. R.; Jones, L.; Krughoff, S. [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Ahmad, Z.; Bankert, J.; Grace, E.; Hannel, M.; Lorenz, S. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Haiman, Z.; Jernigan, J. G., E-mail: djbard@slac.stanford.edu [Department of Astronomy and Astrophysics, Columbia University, New York, NY 10027 (United States); and others

    2013-09-01

    We study the effect of galaxy shape measurement errors on predicted cosmological constraints from the statistics of shear peak counts with the Large Synoptic Survey Telescope (LSST). We use the LSST Image Simulator in combination with cosmological N-body simulations to model realistic shear maps for different cosmological models. We include both galaxy shape noise and, for the first time, measurement errors on galaxy shapes. We find that the measurement errors considered have relatively little impact on the constraining power of shear peak counts for LSST.

  19. Neutrino masses and cosmological parameters from a Euclid-like survey: Markov Chain Monte Carlo forecasts including theoretical errors

    CERN Document Server

    Audren, Benjamin; Bird, Simeon; Haehnelt, Martin G.; Viel, Matteo

    2013-01-01

    We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Monte Carlo Markov Chains (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors and assuming further conservat...

  20. The sloan digital sky Survey-II supernova survey: search algorithm and follow-up observations

    Energy Technology Data Exchange (ETDEWEB)

    Sako, Masao [Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States); Bassett, Bruce [Department of Mathematics and Applied Mathematics, University of Cape Town, Rondebosch 7701 (South Africa); Becker, Andrew; Hogan, Craig J. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Cinabro, David [Department of Physics, Wayne State University, Detroit, MI 48202 (United States); DeJongh, Fritz; Frieman, Joshua A.; Marriner, John; Miknaitis, Gajus [Center for Particle Astrophysics, Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Depoy, D. L.; Prieto, Jose Luis [Department of Astronomy, Ohio State University, 140 West 18th Avenue, Columbus, OH 43210-1173 (United States); Dilday, Ben; Kessler, Richard [Kavli Institute for Cosmological Physics, The University of Chicago, 5640 South Ellis Avenue Chicago, IL 60637 (United States); Doi, Mamoru [Institute of Astronomy, Graduate School of Science, University of Tokyo 2-21-1, Osawa, Mitaka, Tokyo 181-0015 (Japan); Garnavich, Peter M. [University of Notre Dame, 225 Nieuwland Science, Notre Dame, IN 46556-5670 (United States); Holtzman, Jon [Department of Astronomy, MSC 4500, New Mexico State University, P.O. Box 30001, Las Cruces, NM 88003 (United States); Jha, Saurabh [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, P.O. Box 20450, MS29, Stanford, CA 94309 (United States); Konishi, Kohki [Institute for Cosmic Ray Research, University of Tokyo, 5-1-5, Kashiwanoha, Kashiwa, Chiba, 277-8582 (Japan); Lampeitl, Hubert [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Nichol, Robert C. [Institute of Cosmology and Gravitation, Mercantile House, Hampshire Terrace, University of Portsmouth, Portsmouth PO1 2EG (United Kingdom); and others

    2008-01-01

    The Sloan Digital Sky Survey-II Supernova Survey has identified a large number of new transient sources in a 300 deg² region along the celestial equator during its first two seasons of a three-season campaign. Multi-band (ugriz) light curves were measured for most of the sources, which include solar system objects, galactic variable stars, active galactic nuclei, supernovae (SNe), and other astronomical transients. The imaging survey is augmented by an extensive spectroscopic follow-up program to identify SNe, measure their redshifts, and study the physical conditions of the explosions and their environment through spectroscopic diagnostics. During the survey, light curves are rapidly evaluated to provide an initial photometric type of the SNe, and a selected sample of sources are targeted for spectroscopic observations. In the first two seasons, 476 sources were selected for spectroscopic observations, of which 403 were identified as SNe. For the type Ia SNe, the main driver for the survey, our photometric typing and targeting efficiency is 90%. Only 6% of the photometric SN Ia candidates were spectroscopically classified as non-SN Ia instead, and the remaining 4% resulted in low signal-to-noise, unclassified spectra. This paper describes the search algorithm and the software, and the real-time processing of the SDSS imaging data. We also present the details of the supernova candidate selection procedures and strategies for follow-up spectroscopic and imaging observations of the discovered sources.

  1. ASSESSMENT OF SYSTEMATIC CHROMATIC ERRORS THAT IMPACT SUB-1% PHOTOMETRIC PRECISION IN LARGE-AREA SKY SURVEYS

    Energy Technology Data Exchange (ETDEWEB)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; Boada, S.; Mondrik, N.; Nagasawa, D. [George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, and Department of Physics and Astronomy, Texas A and M University, College Station, TX 77843 (United States); Tucker, D.; Annis, J.; Finley, D. A.; Kent, S.; Lin, H.; Marriner, J.; Wester, W. [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Kessler, R.; Scolnic, D. [Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637 (United States); Bernstein, G. M. [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104 (United States); Burke, D. L.; Rykoff, E. S. [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); James, D. J.; Walker, A. R. [Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, Casilla 603, La Serena (Chile); Collaboration: DES Collaboration; and others

    2016-06-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey’s stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
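
    A schematic illustration of why such chromatic errors arise is given below, using simple blackbody spectra and a slightly tilted synthetic bandpass rather than DES throughputs; the bandpass shape, stellar temperatures, and perturbation are assumptions for the example, and the photon-counting wavelength weighting is omitted for brevity.

    ```python
    # Schematic: a tilted bandpass shifts the zeropoint differently for blue and red stars.
    import numpy as np

    h, c, k = 6.626e-34, 2.998e8, 1.381e-23

    def planck(wl_nm, T):
        wl = wl_nm * 1e-9
        return (2 * h * c**2 / wl**5) / np.expm1(h * c / (wl * k * T))

    wl = np.linspace(500.0, 700.0, 500)                      # wavelength grid [nm]
    band = np.exp(-0.5 * ((wl - 600.0) / 40.0) ** 2)         # nominal ("natural") bandpass
    tilted = band * (1.0 + 2e-3 * (wl - 600.0))              # perturbed (tilted) bandpass

    def synth_mag(flux, throughput):
        # simplified synthetic magnitude (no photon-counting lambda weighting)
        return -2.5 * np.log10(np.trapz(flux * throughput, wl) / np.trapz(throughput, wl))

    for T in (4000.0, 10000.0):                              # cool vs hot star
        f = planck(wl, T)
        shift = synth_mag(f, tilted) - synth_mag(f, band)
        print(f"T = {T:6.0f} K : zeropoint shift = {shift * 1000:+.1f} mmag")
    ```

    The star-to-star difference between the two printed shifts is the chromatic error: a single gray zeropoint cannot absorb it, which is why color-dependent corrections are needed.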

  2. Errors in second moments estimated from monostatic Doppler sodar winds. II. Application to field measurements

    DEFF Research Database (Denmark)

    Gaynor, J. E.; Kristensen, Leif

    1986-01-01

    Observatory tower. The approximate magnitude of the error due to spatial and temporal pulse volume separation is presented as a function of mean wind angle relative to the sodar configuration and for several antenna pulsing orders. Sodar-derived standard deviations of the lateral wind component, before...

  3. When Is a Failure to Replicate Not a Type II Error?

    Science.gov (United States)

    Vasconcelos, Marco; Urcuioli, Peter J.; Lionello-DeNolf, Karen M.

    2007-01-01

    Zentall and Singer (2007) challenge our conclusion that the work-ethic effect reported by Clement, Feltus, Kaiser, and Zentall (2000) may have been a Type I error by arguing that (a) the effect has been extensively replicated and (b) the amount of overtraining our pigeons received may not have been sufficient to produce it. We believe that our…

  4. Calibration of a neutron log in partially saturated media. Part II. Error analysis

    International Nuclear Information System (INIS)

    Hearst, J.R.; Kasameyer, P.W.; Dreiling, L.A.

    1981-01-01

    Four sources of error (uncertainty) are studied in water content obtained from neutron logs calibrated in partially saturated media for holes up to 3 m. For this calibration a special facility was built and an algorithm for a commercial epithermal neutron log was developed that obtains water content from count rate, bulk density, and the gap between the neutron sonde and the borehole wall. The algorithm contained errors due to the calibration and lack of fit, while the field measurements included uncertainties in the count rate (caused by statistics and a short time constant), gap, and density. There can also be inhomogeneity in the material surrounding the borehole. Under normal field conditions the hole-size-corrected water content obtained from such neutron logs can have an uncertainty as large as 15% of its value.

  5. Estimating Classification Errors Under Edit Restrictions in Composite Survey-Register Data Using Multiple Imputation Latent Class Modelling (MILC)

    Directory of Open Access Journals (Sweden)

    Boeschoten Laura

    2017-12-01

    Full Text Available Both registers and surveys can contain classification errors. These errors can be estimated by making use of a composite data set. We propose a new method based on latent class modelling to estimate the number of classification errors across several sources while taking into account impossible combinations with scores on other variables. Furthermore, the latent class model, by multiply imputing a new variable, enhances the quality of statistics based on the composite data set. The performance of this method is investigated by a simulation study, which shows that whether or not the method can be applied depends on the entropy R² of the latent class model and the type of analysis a researcher is planning to do. Finally, the method is applied to public data from Statistics Netherlands.

  6. Informed Design of Mixed-Mode Surveys : Evaluating mode effects on measurement and selection error

    NARCIS (Netherlands)

    Klausch, Thomas|info:eu-repo/dai/nl/341427306

    2014-01-01

    “Mixed-mode designs” are innovative types of surveys which combine more than one mode of administration in the same project, such as surveys administered partly on the web (online), on paper, by telephone, or face-to-face. Mixed-mode designs have become increasingly popular in international survey

  7. A Green Bank Telescope Survey of Large Galactic H II Regions

    Science.gov (United States)

    Anderson, L. D.; Armentrout, W. P.; Luisi, Matteo; Bania, T. M.; Balser, Dana S.; Wenger, Trey V.

    2018-02-01

    As part of our ongoing H II Region Discovery Survey (HRDS), we report the Green Bank Telescope detection of 148 new angularly large Galactic H II regions in radio recombination line (RRL) emission. Our targets are located at a declination of δ > -45°, which corresponds to 266° > ℓ > -20° at b = 0°. All sources were selected from the Wide-field Infrared Survey Explorer Catalog of Galactic H II Regions, and have infrared angular diameters ≥ 260″. The Galactic distribution of these “large” H II regions is similar to that of the previously known sample of Galactic H II regions. The large H II region RRL line width and peak line intensity distributions are skewed toward lower values, compared with those of previous HRDS surveys. We discover seven sources with extremely narrow RRLs, as well as H II regions with physical sizes > 100 pc, making them some of the physically largest known H II regions in the Galaxy. This survey completes the HRDS H II region census in the Northern sky, where we have discovered 887 H II regions and more than doubled the size of the previously known census of Galactic H II regions.

  8. A Survey of Wireless Fair Queuing Algorithms with Location-Dependent Channel Errors

    Directory of Open Access Journals (Sweden)

    Anca VARGATU

    2011-01-01

    Full Text Available The rapid development of wireless networks has brought increasing attention to topics related to the fair allocation of resources and the creation of suitable algorithms that take into account the special characteristics of the wireless environment and ensure fair access to the transmission channel with bounded delay and guaranteed throughput. Fair allocation of resources in wireless networks poses significant challenges because of errors that occur only in these networks, such as location-dependent and bursty channel errors. In wireless networks it frequently happens, because of radio interference, that a user experiencing bad radio conditions during a period of time does not receive resources in that period. This paper analyzes some resource allocation algorithms for wireless networks with location-dependent errors, specifying the basic idea behind each algorithm and the way it works. The analyzed fair queuing algorithms differ in the way they treat the following aspects: how to select the flows which should receive additional services, how to allocate these resources, what proportion is received by error-free flows, and how the flows affected by errors are compensated.

  9. TYPE II-P SUPERNOVAE FROM THE SDSS-II SUPERNOVA SURVEY AND THE STANDARDIZED CANDLE METHOD

    International Nuclear Information System (INIS)

    D'Andrea, Chris B.; Sako, Masao; Dilday, Benjamin; Jha, Saurabh; Frieman, Joshua A.; Kessler, Richard; Holtzman, Jon; Konishi, Kohki; Yasuda, Naoki; Schneider, D. P.; Sollerman, Jesper; Wheeler, J. Craig; Cinabro, David; Nichol, Robert C.; Lampeitl, Hubert; Smith, Mathew; Atlee, David W.; Bassett, Bruce; Castander, Francisco J.; Goobar, Ariel

    2010-01-01

    We apply the Standardized Candle Method (SCM) for Type II Plateau supernovae (SNe II-P), which relates the velocity of the ejecta of a SN to its luminosity during the plateau, to 15 SNe II-P discovered over the three season run of the Sloan Digital Sky Survey-II Supernova Survey. The redshifts of these SNe start at 0.027, and the sample contains nearly as many SNe II-P in the Hubble flow (z > 0.01) as all of the current literature on the SCM combined. We find that the SDSS SNe have a very small intrinsic I-band dispersion (0.22 mag), which can be attributed to selection effects. When the SCM is applied to the combined SDSS-plus-literature set of SNe II-P, the dispersion increases to 0.29 mag, larger than the scatter for either set of SNe separately. We show that the standardization cannot be further improved by eliminating SNe with positive plateau decline rates, as proposed in Poznanski et al. We thoroughly examine all potential systematic effects and conclude that for the SCM to be useful for cosmology, the methods currently used to determine the Fe II velocity at day 50 must be improved, and spectral templates able to encompass the intrinsic variations of Type II-P SNe will be needed.
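
    The flavour of the SCM correction can be conveyed with a schematic sketch. The functional form below follows the commonly used velocity-plus-colour parameterization; the coefficients and input values are illustrative assumptions, not the paper's fitted values.

    ```python
    # Schematic SCM-style distance estimate (all numbers illustrative).
    import numpy as np

    def scm_distance_modulus(m_I50, v_fe50_kms, V_minus_I,
                             alpha=6.0, R_I=1.0, M_I0=-17.5, color_0=0.53):
        """Brighter SNe II-P have faster ejecta, so a velocity term standardizes the
        plateau magnitude; a colour term absorbs reddening."""
        m_corr = (m_I50
                  + alpha * np.log10(v_fe50_kms / 5000.0)   # velocity standardization
                  - R_I * (V_minus_I - color_0))            # colour/extinction correction
        return m_corr - M_I0                                 # distance modulus mu

    mu = scm_distance_modulus(m_I50=19.3, v_fe50_kms=4200.0, V_minus_I=0.65)
    print(f"mu = {mu:.2f} mag  ->  D_L ~ {10 ** ((mu - 25) / 5):.0f} Mpc")
    ```

    The abstract's point about the Fe II velocity at day 50 is visible here: any systematic error in v_fe50 propagates directly into the velocity term and hence into the inferred distance.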

  10. On the errors in measurements of Ohio 5 radio sources in the light of the GB survey

    International Nuclear Information System (INIS)

    Machalski, J.

    1975-01-01

    Positions and flux densities of 405 OSU 5 radio sources surveyed at 1415 MHz down to 0.18 f.u. (Brundage et al. 1971) have been examined in the light of data from the GB survey made at 1400 MHz (Maslowski 1972). An identification analysis has shown that about 56% of the OSU sources reveal themselves as single, 18% as confused, 20% as unresolved, and 6%, having no counterparts in the GB survey down to 0.09 f.u., seem to be spurious. The single OSU sources are strongly affected by underestimation of their flux densities due to the base-line procedure applied in their vicinity; an average systematic underestimation of about 0.03 f.u. has been found. The second systematic error is due to the presence of a significant number of confused sources with strongly overestimated flux densities. The confusion effect gives a characteristic non-Gaussian tail in the distribution of differences between observed and real flux densities, and it has a strong influence on source counts from the OSU 5 survey. Differential number counts relative to those from the GB survey show that the counts agree within the statistical uncertainty down to about 0.40 f.u., which is approximately 4δ (δ being the average rms flux-density error in the OSU 5 survey). Below 0.40 f.u. the number of sources missing due to the confusion effect is significantly greater than the number overestimated due to the noise error; thus, this part of the OSU 5 source counts cannot be taken seriously, even in the statistical sense. An analysis of the approximate reliability and completeness of the OSU 5 survey shows that, although the total reliability estimated by the authors of the survey is good, the completeness is significantly lower owing to underestimation of the magnitude of the confusion effect. In fact, the OSU 5 completeness is 67% at 0.18 f.u. and 79% at 0.25 f.u. (author)

  11. FIASCO II failure to achieve a satisfactory cardiac outcome study: the elimination of system errors.

    Science.gov (United States)

    Farid, Shakil; Page, Aravinda; Jenkins, David; Jones, Mark T; Freed, Darren; Nashef, Samer A M

    2013-07-01

    Death in low-risk cardiac surgical patients provides a simple and accessible method by which modifiable causes of death can be identified. In the first FIASCO study published in 2009, local potentially modifiable causes of preventable death in low-risk patients with a logistic EuroSCORE of 0-2 undergoing cardiac surgery were inadequate myocardial protection and lack of clarity in the chain of responsibility. As a result, myocardial protection was improved, and a formalized system introduced to ensure clarity of the chain of responsibility in the care of all cardiac surgical patients. The purpose of the current study was to re-audit outcomes in low-risk patients to see if improvements have been achieved. Patients with a logistic EuroSCORE of 0-2 who had cardiac surgery from January 2006 to August 2012 were included. Data were prospectively collected and retrospectively analysed. The case notes of patients who died in hospital were subject to internal and external review and classified according to preventability. Two thousand five hundred and forty-nine patients with a logistic EuroSCORE of 0-2 underwent cardiac surgery during the study period. Seven deaths occurred in truly low-risk patients, giving a mortality of 0.27%. Of the seven, three were considered preventable and four non-preventable. Mortality was marginally lower than in our previous study (0.37%), and no death occurred as a result of inadequate myocardial protection or communication failures. We postulate that the regular study of such events in all institutions may unmask systemic errors that can be remedied to prevent or reduce future occurrences. We encourage all units to use this methodology to detect any similarly modifiable factors in their practice.

  12. A survey of mindset theories of intelligence and medical error self-reporting among pediatric housestaff and faculty.

    Science.gov (United States)

    Jegathesan, Mithila; Vitberg, Yaffa M; Pusic, Martin V

    2016-02-11

    Intelligence theory research has illustrated that people hold either "fixed" (intelligence is immutable) or "growth" (intelligence can be improved) mindsets and that these views may affect how people learn throughout their lifetime. Little is known about the mindsets of physicians, and how mindset may affect their lifetime learning and integration of feedback. Our objective was to determine if pediatric physicians are of the "fixed" or "growth" mindset and whether individual mindset affects perception of medical error reporting.  We sent an anonymous electronic survey to pediatric residents and attending pediatricians at a tertiary care pediatric hospital. Respondents completed the "Theories of Intelligence Inventory" which classifies individuals on a 6-point scale ranging from 1 (Fixed Mindset) to 6 (Growth Mindset). Subsequent questions collected data on respondents' recall of medical errors by self or others. We received 176/349 responses (50 %). Participants were equally distributed between mindsets with 84 (49 %) classified as "fixed" and 86 (51 %) as "growth". Residents, fellows and attendings did not differ in terms of mindset. Mindset did not correlate with the small number of reported medical errors. There is no dominant theory of intelligence (mindset) amongst pediatric physicians. The distribution is similar to that seen in the general population. Mindset did not correlate with error reports.

  13. National Youth Survey US: Wave II (NYS-1977)

    Data.gov (United States)

    U.S. Department of Health & Human Services — Youth data for the second wave of the National Youth Survey are contained in this data collection. The first wave was conducted in 1976. Youths were interviewed in...

  14. A Survey of Kurdish Students’ Sound Segment & Syllabic Pattern Errors in the Course of Learning EFL

    Directory of Open Access Journals (Sweden)

    Jahangir Mohammadi

    2014-06-01

    Full Text Available This paper is devoted to finding adequate answers to the following queries: (A) What are the segmental and syllabic pattern errors made by Kurdish students in their pronunciation? (B) Can the problematic areas in pronunciation be predicted by a systematic comparison of the sound systems of both the native and target languages? (C) Can there be any consistency between the predictions and the results of error analysis experiments in the same field? To reach the goals of the study, the following steps were taken. 1. The sound systems and syllabic patterns of both languages, Kurdish and English, were clearly described on the basis of place and manner of articulation and the combinatory power of clusters. 2. To carry out a contrastive analysis, the sound segments (vowels, consonants and diphthongs) and the syllabic patterns of both languages were compared in order to bring out the similarities and differences. 3. The syllabic patterns and sound segments in English that had no counterparts in Kurdish were detected and considered as problematic areas in pronunciation. 4. To countercheck the acquired predictions, an experiment was carried out with 50 male and female pre-university students. Subjects were given some passages to read. The readability index of these passages ranged from 8.775 to 10.432, which is quite suitable in comparison to the readability index of pre-university texts, ranging from 8.675 to 10.475. All samples of sound production were transcribed in IPA and the syllabic patterns were shown by the symbols ‘V’ and ‘C’, indicating vowels and consonants respectively. An error analysis of the acquired data proved that English sound segments and syllabic patterns with no counterparts in Kurdish resulted in pronunciation errors.

  15. Longitudinal Cut Method Revisited: A Survey on the Main Error Sources

    OpenAIRE

    Moriconi, Alessandro; Lalli, Francesco; Di Felice, Fabio; Esposito, Pier Giorgio; Piscopia, Rodolfo

    2000-01-01

    Some of the main error sources in wave pattern resistance determination were investigated. The experimental data obtained at the Italian Ship Model Basin (longitudinal wave cuts concerned with the steady motion of the Series 60 model and a hard-chine catamaran) were analyzed. It was found that, within the range of Froude numbers tested (0.225 ≤ Fr ≤ 0.345 for the Series 60 and 0.5 ≤ Fr ≤ 1 for the catamaran) two sources of uncertainty play a significant role: (i) the p...

  16. MMT HYPERVELOCITY STAR SURVEY. II. FIVE NEW UNBOUND STARS

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Warren R.; Geller, Margaret J.; Kenyon, Scott J., E-mail: wbrown@cfa.harvard.edu, E-mail: mgeller@cfa.harvard.edu, E-mail: skenyon@cfa.harvard.edu [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States)

    2012-05-20

    We present the discovery of five new unbound hypervelocity stars (HVSs) in the outer Milky Way halo. Using a conservative estimate of Galactic escape velocity, our targeted spectroscopic survey has now identified 16 unbound HVSs as well as a comparable number of HVSs ejected on bound trajectories. A Galactic center origin for the HVSs is supported by their unbound velocities, the observed number of unbound stars, their stellar nature, their ejection time distribution, and their Galactic latitude and longitude distribution. Other proposed origins for the unbound HVSs, such as runaway ejections from the disk or dwarf galaxy tidal debris, cannot be reconciled with the observations. An intriguing result is the spatial anisotropy of HVSs on the sky, which possibly reflects an anisotropic potential in the central 10-100 pc region of the Galaxy. Further progress requires measurement of the spatial distribution of HVSs over the southern sky. Our survey also identifies seven B supergiants associated with known star-forming galaxies; the absence of B supergiants elsewhere in the survey implies there are no new star-forming galaxies in our survey footprint to a depth of 1-2 Mpc.

  17. Interprofessional conflict and medical errors: results of a national multi-specialty survey of hospital residents in the US.

    Science.gov (United States)

    Baldwin, Dewitt C; Daugherty, Steven R

    2008-12-01

    Clear communication is considered the sine qua non of effective teamwork. Breakdowns in communication resulting from interprofessional conflict are believed to potentiate errors in the care of patients, although there is little supportive empirical evidence. In 1999, we surveyed a national, multi-specialty sample of 6,106 residents (64.2% response rate). Three questions inquired about "serious conflict" with another staff member. Residents were also asked whether they had made a "significant medical error" (SME) during their current year of training, and whether this resulted in an "adverse patient outcome" (APO). Just over 20% (n = 722) reported "serious conflict" with another staff member. Ten percent involved another resident, 8.3% supervisory faculty, and 8.9% nursing staff. Of the 2,813 residents reporting no conflict with other professional colleagues, 669, or 23.8%, recorded having made an SME, with 3.4% APOs. By contrast, the 523 residents who reported conflict with at least one other professional had 36.4% SMEs and 8.3% APOs. For the 187 reporting conflict with two or more other professionals, the SME rate was 51%, with 16% APOs. The empirical association between interprofessional conflict and medical errors is both alarming and intriguing, although the exact nature of this relationship cannot currently be determined from these data. Several theoretical constructs are advanced to assist our thinking about this complex issue.

  18. Uncertainty in mapped geological boundaries held by a national geological survey:eliciting the geologists' tacit error model

    Science.gov (United States)

    Lark, R. M.; Lawley, R. S.; Barron, A. J. M.; Aldiss, D. T.; Ambrose, K.; Cooper, A. H.; Lee, J. R.; Waters, C. N.

    2015-06-01

    It is generally accepted that geological line work, such as mapped boundaries, are uncertain for various reasons. It is difficult to quantify this uncertainty directly, because the investigation of error in a boundary at a single location may be costly and time consuming, and many such observations are needed to estimate an uncertainty model with confidence. However, it is recognized across many disciplines that experts generally have a tacit model of the uncertainty of information that they produce (interpretations, diagnoses, etc.) and formal methods exist to extract this model in usable form by elicitation. In this paper we report a trial in which uncertainty models for geological boundaries mapped by geologists of the British Geological Survey (BGS) in six geological scenarios were elicited from a group of five experienced BGS geologists. In five cases a consensus distribution was obtained, which reflected both the initial individually elicited distribution and a structured process of group discussion in which individuals revised their opinions. In a sixth case a consensus was not reached. This concerned a boundary between superficial deposits where the geometry of the contact is hard to visualize. The trial showed that the geologists' tacit model of uncertainty in mapped boundaries reflects factors in addition to the cartographic error usually treated by buffering line work or in written guidance on its application. It suggests that further application of elicitation, to scenarios at an appropriate level of generalization, could be useful to provide working error models for the application and interpretation of line work.

  19. Assessing Measurement Error in Medicare Coverage From the National Health Interview Survey

    Science.gov (United States)

    Gindi, Renee; Cohen, Robin A.

    2012-01-01

    Objectives: Using linked administrative data, to validate Medicare coverage estimates among adults aged 65 or older from the National Health Interview Survey (NHIS), and to assess the impact of a recently added Medicare probe question on the validity of these estimates. Data sources: Linked 2005 NHIS and Master Beneficiary Record and Payment History Update System files from the Social Security Administration (SSA). Study design: We compared Medicare coverage reported on the NHIS with “benchmark” benefit records from the SSA. Principal findings: With the addition of the probe question, more reports of coverage were captured, and the agreement between the NHIS-reported coverage and SSA records increased from 88% to 95%. Few additional overreports were observed. Conclusions: Increased accuracy of the Medicare coverage status of NHIS participants was achieved with the Medicare probe question. Though some misclassification remains, data users interested in Medicare coverage as an outcome or correlate can use this survey measure with confidence. PMID:24800138

  20. THE ELM SURVEY. II. TWELVE BINARY WHITE DWARF MERGER SYSTEMS

    International Nuclear Information System (INIS)

    Kilic, Mukremin; Brown, Warren R.; Kenyon, S. J.; Prieto, Carlos Allende; Agueeros, M. A.; Heinke, Craig

    2011-01-01

    We describe new radial velocity and X-ray observations of extremely low-mass white dwarfs (ELM WDs, ∼0.2 M☉) in the Sloan Digital Sky Survey Data Release 4 and the MMT Hypervelocity Star survey. We identify four new short period binaries, including two merger systems. These observations bring the total number of short period binary systems identified in our survey to 20. No main-sequence or neutron star companions are visible in the available optical photometry, radio, and X-ray data. Thus, the companions are most likely WDs. Twelve of these systems will merge within a Hubble time due to gravitational wave radiation. We have now tripled the number of known merging WD systems. We discuss the characteristics of this merger sample and potential links to underluminous supernovae, extreme helium stars, AM CVn systems, and other merger products. We provide new observational tests of the WD mass-period distribution and cooling models for ELM WDs. We also find evidence for a new formation channel for single low-mass WDs through binary mergers of two lower mass objects.
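
    The merger-within-a-Hubble-time criterion mentioned above can be illustrated with the standard Peters (1964) formula for a circular binary; the component masses and orbital period below are illustrative choices, not values from the survey.

    ```python
    # Worked example: gravitational-wave merger timescale for a circular double WD.
    import numpy as np

    G = 6.674e-11          # m^3 kg^-1 s^-2
    c = 2.998e8            # m s^-1
    M_sun = 1.989e30       # kg
    yr = 3.156e7           # s

    def merger_time_gyr(m1_msun, m2_msun, p_orb_hr):
        m1, m2 = m1_msun * M_sun, m2_msun * M_sun
        p = p_orb_hr * 3600.0
        a = (G * (m1 + m2) * p**2 / (4 * np.pi**2)) ** (1.0 / 3.0)   # Kepler's third law
        t = 5.0 * c**5 * a**4 / (256.0 * G**3 * m1 * m2 * (m1 + m2)) # Peters (1964), e = 0
        return t / (1e9 * yr)

    # Illustrative ELM WD binary: 0.2 + 0.6 Msun with a 2-hour orbital period
    print(f"merger time ~ {merger_time_gyr(0.2, 0.6, 2.0):.2f} Gyr (Hubble time ~ 13.8 Gyr)")
    ```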

  1. A Type II Supernova Hubble Diagram from the CSP-I, SDSS-II, and SNLS Surveys

    Science.gov (United States)

    de Jaeger, T.; González-Gaitán, S.; Hamuy, M.; Galbany, L.; Anderson, J. P.; Phillips, M. M.; Stritzinger, M. D.; Carlberg, R. G.; Sullivan, M.; Gutiérrez, C. P.; Hook, I. M.; Howell, D. Andrew; Hsiao, E. Y.; Kuncarayakti, H.; Ruhlmann-Kleider, V.; Folatelli, G.; Pritchet, C.; Basa, S.

    2017-02-01

    The coming era of large photometric wide-field surveys will increase the detection rate of supernovae by orders of magnitude. Such numbers will restrict spectroscopic follow-up in the vast majority of cases, and hence new methods based solely on photometric data must be developed. Here, we construct a complete Hubble diagram of Type II supernovae (SNe II) combining data from three different samples: the Carnegie Supernova Project-I, the Sloan Digital Sky Survey II SN, and the Supernova Legacy Survey. Applying the Photometric Color Method (PCM) to 73 SNe II with a redshift range of 0.01-0.5 and with no spectral information, we derive an intrinsic dispersion of 0.35 mag. A comparison with the Standard Candle Method (SCM) using 61 SNe II is also performed and an intrinsic dispersion in the Hubble diagram of 0.27 mag, i.e., 13% in distance uncertainties, is derived. Due to the lack of good statistics at higher redshifts for both methods, only weak constraints on the cosmological parameters are obtained. However, assuming a flat universe and using the PCM, we derive the universe’s matter density Ω_m = 0.32 (+0.30/−0.21), providing new independent evidence for dark energy at the level of two sigma. This paper includes data gathered with the 6.5 m Magellan Telescopes, with the du Pont and Swope telescopes located at Las Campanas Observatory, Chile; and the Gemini Observatory, Cerro Pachon, Chile (Gemini Program N-2005A-Q-11, GN-2005B-Q-7, GN-2006A-Q-7, GS-2005A-Q-11, GS-2005B-Q-6, and GS-2008B-Q-56). Based on observations collected at the European Organization for Astronomical Research in the Southern Hemisphere, Chile (ESO Programmes 076.A-0156,078.D-0048, 080.A-0516, and 082.A-0526).

  2. The M33 Synoptic Stellar Survey. II. Mira Variables

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, Wenlong; Macri, Lucas M. [George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A and M University, College Station, TX 77843 (United States); He, Shiyuan; Long, James; Huang, Jianhua Z., E-mail: lmacri@tamu.edu [Department of Statistics, Texas A and M University, College Station, TX 77843 (United States)

    2017-04-01

    We present the discovery of 1847 Mira candidates in the Local Group galaxy M33 using a novel semi-parametric periodogram technique coupled with a random forest classifier. The algorithms were applied to ∼2.4 × 10⁵ I-band light curves previously obtained by the M33 Synoptic Stellar Survey. We derive preliminary period–luminosity relations at optical, near-infrared, and mid-infrared wavelengths and compare them to the corresponding relations in the Large Magellanic Cloud.
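
    The record describes a periodogram-plus-classifier pipeline. The authors' semi-parametric periodogram is not reproduced here; the sketch below shows the same general workflow with a standard Lomb-Scargle periodogram and a scikit-learn random forest, where `light_curves` and the training labels `y` are hypothetical placeholders.

    ```python
    import numpy as np
    from astropy.timeseries import LombScargle
    from sklearn.ensemble import RandomForestClassifier

    def periodogram_features(t, mag, err):
        """Toy feature vector: best period, peak power, amplitude, and scatter of a light curve."""
        freq, power = LombScargle(t, mag, err).autopower(minimum_frequency=1.0 / 1000.0,
                                                         maximum_frequency=1.0 / 50.0)  # Mira-like periods in days
        k = int(np.argmax(power))
        return [1.0 / freq[k], power[k], np.ptp(mag), np.std(mag)]

    # X = np.array([periodogram_features(t, m, e) for t, m, e in light_curves])   # hypothetical training set
    # clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)    # y: 1 for known Miras, 0 otherwise
    ```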

  3. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    Science.gov (United States)

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-05-01

    Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (F_ST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. Type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of F_ST between the ancestral European and African populations. Type-II error rate was investigated for a SNP characterized by a high value of F_ST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in type-I error rate.
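
    The simulation design can be illustrated with a small Monte Carlo: genotypes drawn with divergent allele frequencies in two ancestral groups, a phenotype shifted by ancestry but unrelated to the SNP, and association tests with and without an ancestry covariate (a stand-in for the genomic MDS components). All numbers below are assumptions chosen only to show the inflation.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    def type1_rates(n=1000, freq_shift=0.2, n_reps=200, alpha=0.05):
        """Fraction of null SNPs with p < alpha, unadjusted vs. adjusted for ancestry."""
        hits_raw = hits_adj = 0
        for _ in range(n_reps):
            pop = rng.integers(0, 2, n)                        # two ancestral groups
            p = np.where(pop == 0, 0.3, 0.3 + freq_shift)      # divergent allele frequencies (larger shift ~ larger F_ST)
            geno = rng.binomial(2, p)                          # additive genotype codes 0/1/2
            pheno = 0.5 * pop + rng.normal(size=n)             # ancestry affects the trait; the SNP does not
            p_raw = sm.OLS(pheno, sm.add_constant(geno)).fit().pvalues[1]
            p_adj = sm.OLS(pheno, sm.add_constant(np.column_stack([geno, pop]))).fit().pvalues[1]
            hits_raw += p_raw < alpha
            hits_adj += p_adj < alpha
        return hits_raw / n_reps, hits_adj / n_reps

    print(type1_rates())   # the unadjusted rate is inflated well above 0.05; the adjusted rate is close to it
    ```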

  4. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations

    NARCIS (Netherlands)

    Derks, E. M.; Zwinderman, A. H.; Gamazon, E. R.

    2017-01-01

    Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates;

  5. The Core Collapse Supernova Rate from the SDSS-II Supernova Survey

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Matt; Cinabro, David; Dilday, Ben; Galbany, Lluis; Gupta, Ravi R.; Kessler, R.; Marriner, John; Nichol, Robert C.; Richmond, Michael; Schneider, Donald P.; Sollerman, Jesper

    2014-08-26

    We use the Sloan Digital Sky Survey II Supernova Survey (SDSS-II SNS) data to measure the volumetric core collapse supernova (CCSN) rate in the redshift range (0.03 < z < 0.09). Using a sample of 89 CCSN, we find a volume-averaged rate of 1.06 ± 0.19 × 10⁻⁴ (h/0.7)³ yr⁻¹ Mpc⁻³ at a mean redshift of 0.072 ± 0.009. We measure the CCSN luminosity function from the data and consider the implications on the star formation history.
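
    Schematically, a volumetric rate of this kind is the number of detected events divided by the detection efficiency, the rest-frame survey duration, and the comoving volume covered. The sketch below shows that bookkeeping with astropy; the efficiency, area, and duration are illustrative guesses, not the values used in the paper (which also corrects for dust extinction and the luminosity function).

    ```python
    from astropy.cosmology import FlatLambdaCDM

    cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

    def volumetric_rate(n_sn, area_deg2, z_lo, z_hi, t_obs_yr, efficiency):
        """Events per year per comoving Mpc^3 for a survey slice (schematic)."""
        sky_frac = area_deg2 / 41253.0                         # fraction of the full sky
        vol = (cosmo.comoving_volume(z_hi) - cosmo.comoving_volume(z_lo)).value * sky_frac  # Mpc^3
        t_rest = t_obs_yr / (1.0 + 0.5 * (z_lo + z_hi))        # time dilation at a typical redshift
        return n_sn / (efficiency * vol * t_rest)

    # illustrative inputs only; the result lands near 1e-4 per yr per Mpc^3
    print(volumetric_rate(n_sn=89, area_deg2=300.0, z_lo=0.03, z_hi=0.09, t_obs_yr=0.75, efficiency=0.9))
    ```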

  6. Gravitational lensing statistics with extragalactic surveys - II. Analysis of the Jodrell Bank-VLA Astrometric Survey

    NARCIS (Netherlands)

    Helbig, P; Marlow, D; Quast, R; Wilkinson, PN; Browne, IWA; Koopmans, LVE

    We present constraints on the cosmological constant lambda(0) from gravitational lensing statistics of the Jodrell Bank-VLA Astrometric Survey (JVAS). Although this is the largest gravitational lens survey which has been analysed, cosmological constraints are only comparable to those from optical

  7. VizieR Online Data Catalog: REFLEX II. Properties of the survey (Boehringer+ 2013)

    Science.gov (United States)

    Boehringer, H.; Chon, G.; Collins, C. A.; Guzzo, L.; Nowak, N.; Bobrovskyi, S.

    2013-06-01

    Like REFLEX I, the extended survey covers the southern sky outside the band of the Milky Way (|b_II| ≥ 20°) with regions around the Magellanic clouds excised (3 in LMC, 3 in SMC). The total survey area after this excision amounts to 4.24 steradian (13,924 deg²), which corresponds to 33.75% of the sky. Unlike REFLEX I, we use the refined RASS product RASS III (Voges et al. 1999, Cat. IX/10). (2 data files).
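
    The quoted area and sky fraction follow from the steradian-to-square-degree conversion; the small sketch below reproduces them (the 4.24 sr figure is rounded, so the products differ slightly from the quoted values).

    ```python
    import numpy as np

    area_sr = 4.24
    area_deg2 = area_sr * (180.0 / np.pi) ** 2   # ~13,920 square degrees
    sky_fraction = area_sr / (4.0 * np.pi)       # ~0.337 of the full sky
    print(round(area_deg2), round(sky_fraction, 4))
    ```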

  8. THE CARNEGIE-IRVINE GALAXY SURVEY. II. ISOPHOTAL ANALYSIS

    International Nuclear Information System (INIS)

    Li Zhaoyu; Ho, Luis C.; Barth, Aaron J.; Peng, Chien Y.

    2011-01-01

    The Carnegie-Irvine Galaxy Survey (CGS) is a comprehensive investigation of the physical properties of a complete, representative sample of 605 bright (B_T ≤ 12.9 mag) galaxies in the southern hemisphere. This contribution describes the isophotal analysis of the broadband (BVRI) optical imaging component of the project. We pay close attention to sky subtraction, which is particularly challenging for some of the large galaxies in our sample. Extensive crosschecks with internal and external data confirm that our calibration and sky subtraction techniques are robust with respect to the quoted measurement uncertainties. We present a uniform catalog of one-dimensional radial profiles of surface brightness and geometric parameters, as well as integrated colors and color gradients. Composite profiles highlight the tremendous diversity of brightness distributions found in disk galaxies and their dependence on Hubble type. A significant fraction of S0 and spiral galaxies exhibit non-exponential profiles in their outer regions. We perform Fourier decomposition of the isophotes to quantify non-axisymmetric deviations in the light distribution. We use the geometric parameters, in conjunction with the amplitude and phase of the m = 2 Fourier mode, to identify bars and quantify their size and strength. Spiral arm strengths are characterized using the m = 2 Fourier profiles and structure maps. Finally, we utilize the information encoded in the m = 1 Fourier profiles to measure disk lopsidedness. The databases assembled here and in Paper I lay the foundation for forthcoming scientific applications of CGS.
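
    The Fourier decomposition described here expands the intensity sampled along each isophote in azimuthal modes; the m = 1 amplitude traces lopsidedness and m = 2 traces bars and arms. A minimal sketch of the relative mode amplitudes for an evenly sampled isophote follows; it is a generic textbook version, not the CGS pipeline.

    ```python
    import numpy as np

    def fourier_amplitudes(theta, intensity, m_max=4):
        """Relative Fourier amplitudes A_m/A_0 of intensity sampled evenly in azimuth theta (radians)."""
        a0 = np.mean(intensity)
        amps = {}
        for m in range(1, m_max + 1):
            am = 2.0 * np.mean(intensity * np.cos(m * theta))
            bm = 2.0 * np.mean(intensity * np.sin(m * theta))
            amps[m] = np.hypot(am, bm) / a0
        return amps

    # e.g. an isophote with a 10% m = 2 (bar-like) distortion
    theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    print(fourier_amplitudes(theta, 1.0 + 0.1 * np.cos(2.0 * theta)))   # A_2/A_0 ~ 0.1
    ```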

  9. Visual impairment attributable to uncorrected refractive error and other causes in the Ghanaian youth: The University of Cape Coast Survey.

    Science.gov (United States)

    Abokyi, Samuel; Ilechie, Alex; Nsiah, Peter; Darko-Takyi, Charles; Abu, Emmanuel Kwasi; Osei-Akoto, Yaw Jnr; Youfegan-Baanam, Mathurin

    2016-01-01

    To determine the prevalence of visual impairment attributable to refractive error and other causes in a youthful Ghanaian population. A prospective survey of all consecutive visits by first-year tertiary students to the Optometry clinic between August, 2013 and April, 2014. Of the 4378 first-year students aged 16-39 years enumerated, 3437 (78.5%) underwent the eye examination. The examination protocol included presenting visual acuity (PVA), ocular motility, and slit-lamp examination of the external eye, anterior segment and media, and non-dilated fundus examination. Pinhole acuity and fundus examination were performed when the PVA ≤ 6/12 in one or both eyes to determine the principal cause of the vision loss. The mean age of participants was 21.86 years (95% CI: 21.72-21.99). The prevalence of bilateral visual impairment (BVI; PVA in the better eye ≤6/12) and unilateral visual impairment (UVI; PVA in the worse eye ≤6/12) were 3.08% (95% CI: 2.56-3.72) and 0.79% (95% CI: 0.54-1.14), respectively. Among 106 participants with BVI, refractive error (96.2%) and corneal opacity (3.8%) were the causes. Of the 27 participants with UVI, refractive error (44.4%), maculopathy (18.5%) and retinal disease (14.8%) were the major causes. There was unequal distribution of BVI in the different age groups, with those above 20 years having a lesser burden. Eye screening and provision of affordable spectacle correction to the youth could be timely to eliminate visual impairment. Copyright © 2014 Spanish General Council of Optometry. Published by Elsevier Espana. All rights reserved.

  10. Treatment errors resulting from use of lasers and IPL by medical laypersons: results of a nationwide survey.

    Science.gov (United States)

    Hammes, Stefan; Karsai, Syrus; Metelmann, Hans-Robert; Pohl, Laura; Kaiser, Kathrine; Park, Bo-Hyun; Raulin, Christian

    2013-02-01

    The demand for hair and tattoo removal with laser and IPL technology (intense pulsed light technology) is continually increasing. Nowadays these treatments are often carried out by medical laypersons without medical supervision in franchise companies, wellness facilities, cosmetic institutes and hair or tattoo studios. This is the first survey to document and discuss this issue and its effects on public health. Fifty patients affected by treatment errors caused by medical laypersons with laser and IPL applications were evaluated in this retrospective study. We used a standardized questionnaire with accompanying photographic documentation. Among the reports there were some missing or no longer traceable parameters, which is why 7 cases could not be evaluated. The following complications occurred, with multiple answers possible: 81.4% pigmentation changes, 25.6% scars, 14% textural changes and 4.6% incorrect information. The sources of error (multiple answers possible) were the following: 62.8% excessively high energy, 39.5% wrong device for the indication, 20.9% treatment of patients with darker skin or marked tanning, 7% no cooling, and 4.6% incorrect information. The causes of malpractice suggest insufficient training, inadequate diagnostic abilities, and the promising of unrealistic results. Direct supervision by a medical specialist, comprehensive experience in laser therapy, and compliance with quality guidelines are prerequisites for safe laser and IPL treatments. Legal measures to make such changes mandatory are urgently needed. © The Authors | Journal compilation © Blackwell Verlag GmbH, Berlin.

  11. ALMA Survey of Lupus Protoplanetary Disks. II. Gas Disk Radii

    Science.gov (United States)

    Ansdell, M.; Williams, J. P.; Trapman, L.; van Terwisga, S. E.; Facchini, S.; Manara, C. F.; van der Marel, N.; Miotello, A.; Tazzari, M.; Hogerheijde, M.; Guidi, G.; Testi, L.; van Dishoeck, E. F.

    2018-05-01

    We present Atacama Large Millimeter/Sub-Millimeter Array (ALMA) Band 6 observations of a complete sample of protoplanetary disks in the young (∼1–3 Myr) Lupus star-forming region, covering the 1.33 mm continuum and the 12CO, 13CO, and C18O J = 2–1 lines. The spatial resolution is ∼0.″25 with a median 3σ continuum sensitivity of 0.30 mJy, corresponding to M dust ∼ 0.2 M ⊕. We apply Keplerian masking to enhance the signal-to-noise ratios of our 12CO zero-moment maps, enabling measurements of gas disk radii for 22 Lupus disks; we find that gas disks are universally larger than millimeter dust disks by a factor of two on average, likely due to a combination of the optically thick gas emission and the growth and inward drift of the dust. Using the gas disk radii, we calculate the dimensionless viscosity parameter, α visc, finding a broad distribution and no correlations with other disk or stellar parameters, suggesting that viscous processes have not yet established quasi-steady states in Lupus disks. By combining our 1.33 mm continuum fluxes with our previous 890 μm continuum observations, we also calculate the millimeter spectral index, α mm, for 70 Lupus disks; we find an anticorrelation between α mm and millimeter flux for low-mass disks (M dust ≲ 5 M ⊕), followed by a flattening as disks approach α mm ≈ 2, which could indicate faster grain growth in higher-mass disks, but may also reflect their larger optically thick components. In sum, this work demonstrates the continuous stream of new insights into disk evolution and planet formation that can be gleaned from unbiased ALMA disk surveys.
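
    The millimeter spectral index quoted here is just the logarithmic flux ratio between the two continuum bands, assuming F_ν ∝ ν^α. A small sketch follows; the example fluxes are made up.

    ```python
    import numpy as np

    def mm_spectral_index(f_1p33mm_mjy, f_890um_mjy):
        """alpha_mm from the 1.33 mm and 890 um continuum fluxes, F_nu ~ nu^alpha."""
        nu1 = 299.792458 / 1.33   # GHz
        nu2 = 299.792458 / 0.89   # GHz
        return np.log(f_1p33mm_mjy / f_890um_mjy) / np.log(nu1 / nu2)

    print(mm_spectral_index(12.0, 30.0))   # ~2.3 for these made-up fluxes
    ```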

  12. Biennial Survey of Education, 1916-18. Volume II. Bulletin, 1919, No. 89

    Science.gov (United States)

    Bureau of Education, Department of the Interior, 1921

    1921-01-01

    Volume II of the Biennial Survey of Education, 1916-1918 includes the following chapters: (1) Education in Great Britain and Ireland (I. L. Kandel); (2) Education in parts of the British Empire: Educational Developments in the Dominion of Canada (Walter A. Montgomery), Public School System of Jamaica (Charles A. Asbury), Recent Progress of…

  13. Nonresponse and Underreporting Errors Increase over the Data Collection Week Based on Paradata from the National Household Food Acquisition and Purchase Survey.

    Science.gov (United States)

    Hu, Mengyao; Gremel, Garrett W; Kirlin, John A; West, Brady T

    2017-05-01

    Background: Food acquisition diary surveys are important for studying food expenditures, factors affecting food acquisition decisions, and relations between these decisions and selected measures of health (e.g., body mass index, self-reported health). However, to our knowledge, no studies have evaluated the errors associated with these diary surveys, which can bias survey estimates and research findings. The use of paradata, which has been largely ignored in previous literature on diary surveys, could be useful for studying errors in these surveys. Objective: We used paradata to assess survey errors in the National Household Food Acquisition and Purchase Survey (FoodAPS). Methods: To evaluate the patterns of nonresponse over the diary period, we fit a multinomial logistic regression model to data from this 1-wk diary survey. We also assessed factors influencing respondents' probability of reporting food acquisition events during the diary process by using logistic regression models. Finally, with the use of an ordinal regression model, we studied factors influencing respondents' perceived ease of participation in the survey. Results: As the diary period progressed, nonresponse increased, especially for those starting the survey on Friday (where the odds of a refusal increased by 12% with each fielding day). The odds of reporting food acquisition events also decreased by 6% with each additional fielding day. Similarly, the odds of reporting ≥1 food-away-from-home event (i.e., meals, snacks, and drinks obtained outside the home) decreased significantly over the fielding period. Male respondents, larger households, households that eat together less often, and households with frequent guests reported a significantly more difficult time getting household members to participate, as did non-English-speaking households and households currently experiencing difficult financial conditions. Conclusions: Nonresponse and underreporting of food acquisition events tended to increase over the data collection week.
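
    The reported "odds decreased by X% per fielding day" figures come from logistic-type models of the day-level paradata. The sketch below shows the general form of such a model on simulated data; the FoodAPS paradata themselves are not reproduced, and the decline rate is built into the simulation only for illustration.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)

    # day-level records: 1 if a household reported at least one food event that day (simulated)
    day = np.repeat(np.arange(1, 8), 200)                    # fielding days 1..7, 200 households
    p_report = 1.0 / (1.0 + np.exp(-(0.8 - 0.06 * day)))     # log-odds decline of 0.06 per day built in
    reported = rng.binomial(1, p_report)

    fit = sm.Logit(reported, sm.add_constant(day)).fit(disp=0)
    print(np.exp(fit.params[1]))   # odds ratio per additional fielding day, close to exp(-0.06) ~ 0.94
    ```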

  14. PHYSICAL AND MORPHOLOGICAL PROPERTIES OF [O II] EMITTING GALAXIES IN THE HETDEX PILOT SURVEY

    International Nuclear Information System (INIS)

    Bridge, Joanna S.; Gronwall, Caryl; Ciardullo, Robin; Hagen, Alex; Zeimann, Greg; Malz, A. I.; Schneider, Donald P.

    2015-01-01

    The Hobby-Eberly Dark Energy Experiment pilot survey identified 284 [O II] λ3727 emitting galaxies in a 169 arcmin² field of sky in the redshift range 0 < z < 0.57. This line flux limited sample provides a bridge between studies in the local universe and higher-redshift [O II] surveys. We present an analysis of the star formation rates (SFRs) of these galaxies as a function of stellar mass as determined via spectral energy distribution fitting. The [O II] emitters fall on the "main sequence" of star-forming galaxies with SFR decreasing at lower masses and redshifts. However, the slope of our relation is flatter than that found for most other samples, a result of the metallicity dependence of the [O II] star formation rate indicator. The mass-specific SFR is higher for lower mass objects, supporting the idea that massive galaxies formed more quickly and efficiently than their lower mass counterparts. This is confirmed by the fact that the equivalent widths of the [O II] emission lines trend smaller with larger stellar mass. Examination of the morphologies of the [O II] emitters reveals that their star formation is not a result of mergers, and the galaxies' half-light radii do not indicate evolution of physical sizes.

  15. Mg II-Absorbing Galaxies in the UltraVISTA Survey

    Science.gov (United States)

    Stroupe, Darren; Lundgren, Britt

    2018-01-01

    Light that is emitted from distant quasars can become partially absorbed by intervening gaseous structures, including galaxies, in its path toward Earth, revealing information about the chemical content, degree of ionization, organization and evolution of these structures through time. In this project, quasar spectra are used to probe the halos of foreground galaxies at a mean redshift of z=1.1 in the COSMOS Field. Mg II absorption lines in Sloan Digital Sky Survey quasar spectra are paired with galaxies in the UltraVISTA catalog at an impact parameter less than 200 kpc. A sample of 77 strong Mg II absorbers with a rest-frame equivalent width ≥ 0.3 Å and redshifts in the range 0.34 < z < 2.21 is investigated to find equivalent width ratios of Mg II, C IV and Fe II absorption lines, and their relation to the impact parameter and the star formation rates, stellar masses, environments and redshifts of their host galaxies.

  16. The Sloan Digital Sky Survey-II Supernova Survey:Search Algorithm and Follow-up Observations

    Energy Technology Data Exchange (ETDEWEB)

    Sako, Masao; /Pennsylvania U. /KIPAC, Menlo Park; Bassett, Bruce; /Cape Town U. /South African Astron. Observ.; Becker, Andrew; /Washington U., Seattle, Astron. Dept.; Cinabro, David; /Wayne State U.; DeJongh, Don Frederic; /Fermilab; Depoy, D.L.; /Ohio State U.; Doi, Mamoru; /Tokyo U.; Garnavich, Peter M.; /Notre Dame U.; Craig, Hogan, J.; /Washington U., Seattle, Astron. Dept.; Holtzman, Jon; /New Mexico State U.; Jha, Saurabh; /Stanford U., Phys. Dept.; Konishi, Kohki; /Tokyo U.; Lampeitl, Hubert; /Baltimore, Space; Marriner, John; /Fermilab; Miknaitis, Gajus; /Fermilab; Nichol, Robert C.; /Portsmouth U.; Prieto, Jose Luis; /Ohio State U.; Richmond, Michael W.; /Rochester Inst.; Schneider, Donald P.; /Penn State U., Astron. Astrophys.; Smith, Mathew; /Portsmouth U.; SubbaRao, Mark; /Chicago U. /Tokyo U. /Tokyo U. /South African Astron. Observ. /Tokyo

    2007-09-14

    The Sloan Digital Sky Survey-II Supernova Survey has identified a large number of new transient sources in a 300 deg² region along the celestial equator during its first two seasons of a three-season campaign. Multi-band (ugriz) light curves were measured for most of the sources, which include solar system objects, Galactic variable stars, active galactic nuclei, supernovae (SNe), and other astronomical transients. The imaging survey is augmented by an extensive spectroscopic follow-up program to identify SNe, measure their redshifts, and study the physical conditions of the explosions and their environment through spectroscopic diagnostics. During the survey, light curves are rapidly evaluated to provide an initial photometric type of the SNe, and a selected sample of sources are targeted for spectroscopic observations. In the first two seasons, 476 sources were selected for spectroscopic observations, of which 403 were identified as SNe. For the Type Ia SNe, the main driver for the Survey, our photometric typing and targeting efficiency is 90%. Only 6% of the photometric SN Ia candidates were spectroscopically classified as non-SN Ia instead, and the remaining 4% resulted in low signal-to-noise, unclassified spectra. This paper describes the search algorithm and the software, and the real-time processing of the SDSS imaging data. We also present the details of the supernova candidate selection procedures and strategies for follow-up spectroscopic and imaging observations of the discovered sources.
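
    The quoted 90% / 6% / 4% split is simple bookkeeping over the spectroscopically targeted photometric SN Ia candidates. The sketch below shows that calculation; the counts are placeholders consistent with the stated percentages rather than the survey's actual tallies.

    ```python
    def targeting_summary(n_targeted, n_confirmed_ia, n_non_ia, n_unclassified):
        """Fractions of photometrically targeted SN Ia candidates by spectroscopic outcome."""
        assert n_confirmed_ia + n_non_ia + n_unclassified == n_targeted
        return {"efficiency": n_confirmed_ia / n_targeted,
                "contamination": n_non_ia / n_targeted,
                "unclassified": n_unclassified / n_targeted}

    print(targeting_summary(n_targeted=100, n_confirmed_ia=90, n_non_ia=6, n_unclassified=4))
    ```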

  17. SLIM-MAUD: an approach to assessing human error probabilities using structured expert judgment. Volume II. Detailed analysis of the technical issues

    International Nuclear Information System (INIS)

    Embrey, D.E.; Humphreys, P.; Rosa, E.A.; Kirwan, B.; Rea, K.

    1984-07-01

    This two-volume report presents the procedures and analyses performed in developing an approach for structuring expert judgments to estimate human error probabilities. Volume I presents an overview of work performed in developing the approach: SLIM-MAUD (Success Likelihood Index Methodology, implemented through the use of an interactive computer program called MAUD-Multi-Attribute Utility Decomposition). Volume II provides a more detailed analysis of the technical issues underlying the approach
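
    SLIM's core arithmetic is a weighted Success Likelihood Index over performance-shaping-factor ratings, calibrated to probabilities through log10(HEP) = a·SLI + b using anchor tasks with known error probabilities. The sketch below follows that published structure; the ratings, weights, and anchors are hypothetical.

    ```python
    import numpy as np

    def success_likelihood_index(ratings, weights):
        """SLI = weighted sum of performance-shaping-factor ratings (weights normalised to 1)."""
        r, w = np.asarray(ratings, float), np.asarray(weights, float)
        return float(np.dot(r, w / w.sum()))

    def hep_from_sli(sli, anchors):
        """Calibrate log10(HEP) = a*SLI + b from two (SLI, HEP) anchor tasks, then evaluate."""
        (s1, p1), (s2, p2) = anchors
        a = (np.log10(p1) - np.log10(p2)) / (s1 - s2)
        b = np.log10(p1) - a * s1
        return 10.0 ** (a * sli + b)

    sli = success_likelihood_index(ratings=[7, 4, 6], weights=[0.5, 0.3, 0.2])   # hypothetical task
    print(hep_from_sli(sli, anchors=[(2.0, 1e-1), (8.0, 1e-3)]))                 # ~5e-3
    ```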

  18. Field Surveys, IOC Valleys. Biological Resources Survey, Dry Lake Valley, Nevada. Volume II, Part I.

    Science.gov (United States)

    1981-08-01

    …years ago; the transplant was considered unsuccessful. Sagebrush is the principal item in the diet of adult sage grouse (Centrocercus urophasianus), and … canyon areas in the normal chukar partridge range but can also extend its range to areas too dry for the chukar. The transplant was not considered …

  19. The HIFI spectral survey of AFGL 2591 (CHESS). II. Summary of the survey

    Science.gov (United States)

    Kaźmierczak-Barthel, M.; van der Tak, F. F. S.; Helmich, F. P.; Chavarría, L.; Wang, K.-S.; Ceccarelli, C.

    2014-07-01

    Aims: This paper presents the richness of submillimeter spectral features in the high-mass star forming region AFGL 2591. Methods: As part of the Chemical Herschel Survey of Star Forming Regions (CHESS) key programme, AFGL 2591 was observed by the Herschel (HIFI) instrument. The spectral survey covered a frequency range from 480 to 1240 GHz as well as single lines from 1267 to 1901 GHz (i.e. CO, HCl, NH3, OH, and [CII]). Rotational and population diagram methods were used to calculate column densities, excitation temperatures, and the emission extents of the observed molecules associated with AFGL 2591. The analysis was supplemented with several lines from ground-based JCMT spectra. Results: From the HIFI spectral survey analysis a total of 32 species were identified (including isotopologues). Although the lines are mostly quite weak (∫TmbdV ~ few K km s-1), 268 emission and 16 absorption lines were found (excluding blends). Molecular column densities range from 6 × 1011 to 1 × 1019 cm-2 and excitation temperatures from 19 to 175 K. Cold (e.g. HCN, H2S, and NH3 with temperatures below 70 K) and warm species (e.g. CH3OH, SO2) in the protostellar envelope can be distinguished. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.Appendix A is available in electronic form at http://www.aanda.org
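
    The rotational-diagram analysis mentioned here fits a straight line to ln(N_u/g_u) against the upper-level energy E_u/k; the excitation temperature is the negative reciprocal of the slope, and the intercept carries the column density through the partition function. A minimal sketch with synthetic level populations follows.

    ```python
    import numpy as np

    def rotation_diagram_fit(E_upper_K, Nu_over_gu):
        """Return (T_ex in K, intercept ln(N_tot/Q)) from a linear fit of ln(N_u/g_u) vs E_u/k."""
        slope, intercept = np.polyfit(np.asarray(E_upper_K, float),
                                      np.log(np.asarray(Nu_over_gu, float)), 1)
        return -1.0 / slope, intercept

    # synthetic populations drawn from a 50 K Boltzmann distribution
    E = np.array([25.0, 50.0, 100.0, 150.0])       # K
    Nug = 1.0e12 * np.exp(-E / 50.0)               # cm^-2 per statistical weight
    print(rotation_diagram_fit(E, Nug))            # recovers T_ex ~ 50 K
    ```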

  20. Characterization and error analysis of an N×N unfolding procedure applied to filtered, photoelectric x-ray detector arrays. II. Error analysis and generalization

    Directory of Open Access Journals (Sweden)

    D. L. Fehl

    2010-12-01

    A five-channel, filtered-x-ray-detector (XRD) array has been used to measure time-dependent, soft-x-ray flux emitted by z-pinch plasmas at the Z pulsed-power accelerator (Sandia National Laboratories, Albuquerque, New Mexico, USA). The preceding, companion paper [D. L. Fehl et al., Phys. Rev. ST Accel. Beams 13, 120402 (2010)] describes an algorithm for spectral reconstructions (unfolds) and spectrally integrated flux estimates from data obtained by this instrument. The unfolded spectrum S_{unfold}(E,t) is based on N=5 first-order B-splines (histograms) in contiguous unfold bins j=1,…,N; the recovered x-ray flux F_{unfold}(t) is estimated as ∫S_{unfold}(E,t)dE, where E is x-ray energy and t is time. This paper adds two major improvements to the preceding unfold analysis: (a) Error analysis.—Both data noise and response-function uncertainties are propagated into S_{unfold}(E,t) and F_{unfold}(t). Noise factors ν are derived from simulations to quantify algorithm-induced changes in the noise-to-signal ratio (NSR) for S_{unfold} in each unfold bin j and for F_{unfold} (ν ≡ NSR_{output}/NSR_{input}): for S_{unfold}, 1 ≲ ν_{j} ≲ 30, an outcome that is strongly spectrally dependent; for F_{unfold}, 0.6 ≲ ν_{F} ≲ 1, a result that is less spectrally sensitive and corroborated independently. For nominal z-pinch experiments, the combined uncertainty (noise and calibrations) in F_{unfold}(t) at peak is estimated to be ∼15%. (b) Generalization of the unfold method.—Spectral sensitivities (called here passband functions) are constructed for S_{unfold} and F_{unfold}. Predicting how the unfold algorithm reconstructs arbitrary spectra is thereby reduced to quadratures. These tools allow one to understand and quantitatively predict algorithmic distortions (including negative artifacts), to identify potentially troublesome spectra, and to design more useful response functions.
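
    The noise factors ν can be estimated by pushing synthetic noise through the unfold and comparing output to input noise-to-signal ratios. The sketch below does this for a generic least-squares (pseudo-inverse) unfold with a toy 5×5 response matrix; it is not the authors' B-spline algorithm or the Z XRD response set.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def noise_factors(R, spectrum, nsr_in=0.05, n_trials=2000):
        """Monte Carlo nu = NSR_out/NSR_in per unfold bin for a least-squares unfold of d = R s + noise."""
        d0 = R @ spectrum
        R_pinv = np.linalg.pinv(R)
        unfolds = np.array([R_pinv @ (d0 + rng.normal(0.0, nsr_in * np.abs(d0))) for _ in range(n_trials)])
        nsr_out = unfolds.std(axis=0) / np.abs(unfolds.mean(axis=0))
        return nsr_out / nsr_in

    R = np.triu(np.ones((5, 5)))                       # toy broad, overlapping channel responses
    print(noise_factors(R, np.array([1.0, 2.0, 3.0, 2.0, 1.0])))
    ```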

  1. THE GREEN BANK TELESCOPE H II REGION DISCOVERY SURVEY. III. KINEMATIC DISTANCES

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, L. D. [Department of Physics, West Virginia University, Morgantown, WV 26506 (United States); Bania, T. M. [Institute for Astrophysical Research, Department of Astronomy, Boston University, 725 Commonwealth Avenue, Boston, MA 02215 (United States); Balser, Dana S. [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903-2475 (United States); Rood, Robert T., E-mail: Loren.Anderson@mail.wvu.edu [Astronomy Department, University of Virginia, P.O. Box 3818, Charlottesville, VA 22903-0818 (United States)

    2012-07-20

    Using the H I emission/absorption method, we resolve the kinematic distance ambiguity and derive distances for 149 of 182 (82%) H II regions discovered by the Green Bank Telescope H II Region Discovery Survey (GBT HRDS). The HRDS is an X-band (9 GHz, 3 cm) GBT survey of 448 previously unknown H II regions in radio recombination line and radio continuum emission. Here, we focus on HRDS sources from 67° ≥ l ≥ 18°, where kinematic distances are more reliable. The 25 HRDS sources in this zone that have negative recombination line velocities are unambiguously beyond the orbit of the Sun, up to 20 kpc distant. They are the most distant H II regions yet discovered. We find that 61% of HRDS sources are located at the far distance, 31% at the tangent-point distance, and only 7% at the near distance. 'Bubble' H II regions are not preferentially located at the near distance (as was assumed previously) but average 10 kpc from the Sun. The HRDS nebulae, when combined with a large sample of H II regions with previously known distances, show evidence of spiral structure in two circular arc segments of mean Galactocentric radii of 4.25 and 6.0 kpc. We perform a thorough uncertainty analysis to analyze the effect of using different rotation curves, streaming motions, and a change to the solar circular rotation speed. The median distance uncertainty for our sample of H II regions is only 0.5 kpc, or 5%. This is significantly less than the median difference between the near and far kinematic distances, 6 kpc. The basic Galactic structure results are unchanged after considering these sources of uncertainty.
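
    For context, the kinematic distance ambiguity arises because a flat rotation curve maps one observed LSR velocity in the first Galactic quadrant onto two possible distances. The sketch below uses the IAU-standard R0 = 8.5 kpc and Θ0 = 220 km/s; the HRDS analysis uses measured rotation curves and streaming corrections, so this is only the textbook version.

    ```python
    import numpy as np

    R0, THETA0 = 8.5, 220.0    # kpc, km/s (IAU-standard values; the paper explores other choices)

    def kinematic_distances(l_deg, v_lsr):
        """Near/far kinematic distances (kpc) for a flat rotation curve in the first quadrant."""
        l = np.radians(l_deg)
        R = R0 / (1.0 + v_lsr / (THETA0 * np.sin(l)))            # galactocentric radius of the emitter
        disc = R**2 - (R0 * np.sin(l))**2
        if disc < 0.0:                                           # velocity beyond the tangent point
            return R0 * np.cos(l), R0 * np.cos(l)
        root = np.sqrt(disc)
        return R0 * np.cos(l) - root, R0 * np.cos(l) + root      # the HI method then selects one of the two

    print(kinematic_distances(30.0, 60.0))   # ~ (3.9, 10.9) kpc
    ```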

  2. Psychometric properties of the School Fears Survey Scale for preadolescents (SFSS-II).

    Science.gov (United States)

    García-Fernández, José Manuel; Espada Sánchez, José Pedro; Orgilés Amorós, Mireia; Méndez Carrillo, Xavier

    2010-08-01

    This paper describes the psychometric properties of a new children's self-report measure. The School Fears Survey Scale, Form II (SFSS-II) assesses school fears in children from ages 8 to 11. The factor solution with a Spanish sample of 3,665 children isolated four factors: Fear of academic failure and punishment, fear of physical discomfort, fear of social and school assessment and anticipatory and separation anxiety. The questionnaire was tested by confirmatory factor analysis, which accounted for 55.80% of the total variance. Results indicated that the SFSS-II has a high internal consistency (alpha= .89). The results revealed high test-retest reliability and appropriate relationship with other scales. The age by gender interaction was significant. Two-way analysis of variance found that older children and girls had higher anxiety. The instrument shows adequate psychometric guarantees and can be used for the multidimensional assessment of anxiety in clinical and educational settings.
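
    The internal-consistency figure (alpha = .89) is Cronbach's alpha; a minimal computation on a hypothetical item-score matrix is sketched below.

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
        X = np.asarray(scores, float)
        k = X.shape[1]
        return k / (k - 1.0) * (1.0 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

    # hypothetical 5-item fear scale scored 0-4 by six respondents
    scores = [[3, 4, 3, 4, 3], [1, 1, 2, 1, 1], [2, 3, 2, 2, 3],
              [4, 4, 4, 3, 4], [0, 1, 0, 1, 0], [2, 2, 3, 2, 2]]
    print(cronbach_alpha(scores))
    ```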

  3. Comparing acquired angioedema with hereditary angioedema (types I/II): findings from the Icatibant Outcome Survey.

    Science.gov (United States)

    Longhurst, H J; Zanichelli, A; Caballero, T; Bouillet, L; Aberer, W; Maurer, M; Fain, O; Fabien, V; Andresen, I

    2017-04-01

    Icatibant is used to treat acute hereditary angioedema with C1 inhibitor deficiency types I/II (C1-INH-HAE types I/II) and has shown promise in angioedema due to acquired C1 inhibitor deficiency (C1-INH-AAE). Data from the Icatibant Outcome Survey (IOS) were analysed to evaluate the effectiveness of icatibant in the treatment of patients with C1-INH-AAE and compare disease characteristics with those with C1-INH-HAE types I/II. Key medical history (including prior occurrence of attacks) was recorded upon IOS enrolment. Thereafter, data were recorded retrospectively at approximately 6-month intervals during patient follow-up visits. In the icatibant-treated population, 16 patients with C1-INH-AAE had 287 attacks and 415 patients with C1-INH-HAE types I/II had 2245 attacks. Patients with C1-INH-AAE versus C1-INH-HAE types I/II were more often male (69 versus 42%; P = 0·035) and had a significantly later mean (95% confidence interval) age of symptom onset [57·9 (51·33-64·53) versus 14·0 (12·70-15·26) years]. Time from symptom onset to diagnosis was significantly shorter in patients with C1-INH-AAE versus C1-INH-HAE types I/II (mean 12·3 months versus 118·1 months; P = 0·006). Patients with C1-INH-AAE showed a trend for higher occurrence of attacks involving the face (35 versus 21% of attacks; P = 0·064). Overall, angioedema attacks were more severe in patients with C1-INH-HAE types I/II versus C1-INH-AAE (61 versus 40% of attacks were classified as severe to very severe; P types I/II, respectively. © 2016 British Society for Immunology.

  4. PLANETARY NEBULAE DETECTED IN THE SPITZER SPACE TELESCOPE GLIMPSE II LEGACY SURVEY

    International Nuclear Information System (INIS)

    Zhang Yong; Sun Kwok

    2009-01-01

    We report the result of a search for the infrared counterparts of 37 planetary nebulae (PNs) and PN candidates in the Spitzer Galactic Legacy Infrared Mid-Plane Survey Extraordinaire II (GLIMPSE II) survey. The photometry and images of these PNs at 3.6, 4.5, 5.8, 8.0, and 24 μm, taken through the Infrared Array Camera (IRAC) and the Multiband Imaging Photometer for Spitzer (MIPS), are presented. Most of these nebulae are very red and compact in the IRAC bands, and are found to be bright and extended in the 24 μm band. The infrared morphology of these objects is compared with Hα images of the Macquarie-AAO-Strasbourg (MASH) and MASH II PNs. The implications of the morphological differences at different wavelengths are discussed. The IRAC data allow us to differentiate between PNs and H II regions and to reject non-PNs from the optical catalog (e.g., PNG 352.1 - 00.0). Spectral energy distributions are constructed by combining the IRAC and MIPS data with existing near-, mid-, and far-IR photometry measurements. The anomalous colors of some objects allow us to infer the presence of aromatic emission bands. These multi-wavelength data provide useful insights into the nature of different nebular components contributing to the infrared emission of PNs.

  5. Optimal power flow: a bibliographic survey II. Non-deterministic and hybrid methods

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen [Colorado School of Mines, Department of Electrical Engineering and Computer Science, Golden, CO (United States); Steponavice, Ingrida [Univ. of Jyvaskyla, Dept. of Mathematical Information Technology, Agora (Finland); Rebennack, Steffen [Colorado School of Mines, Division of Economics and Business, Golden, CO (United States)

    2012-09-15

    Over the past half-century, optimal power flow (OPF) has become one of the most important and widely studied nonlinear optimization problems. In general, OPF seeks to optimize the operation of electric power generation, transmission, and distribution networks subject to system constraints and control limits. Within this framework, however, there is an extremely wide variety of OPF formulations and solution methods. Moreover, the nature of OPF continues to evolve due to modern electricity markets and renewable resource integration. In this two-part survey, we survey both the classical and recent OPF literature in order to provide a sound context for the state of the art in OPF formulation and solution methods. The survey contributes a comprehensive discussion of specific optimization techniques that have been applied to OPF, with an emphasis on the advantages, disadvantages, and computational characteristics of each. Part I of the survey provides an introduction and surveys the deterministic optimization methods that have been applied to OPF. Part II of the survey (this article) examines the recent trend towards stochastic, or non-deterministic, search techniques and hybrid methods for OPF. (orig.)

  6. Test-Retest Reliability of the Adaptive Chemistry Assessment Survey for Teachers: Measurement Error and Alternatives to Correlation

    Science.gov (United States)

    Harshman, Jordan; Yezierski, Ellen

    2016-01-01

    Determining the error of measurement is a necessity for researchers engaged in bench chemistry, chemistry education research (CER), and a multitude of other fields. Discussions regarding what measurement error entails and how best to measure it have occurred, but the critiques of traditional measures have yielded few alternatives.…

  7. Photometric type Ia supernova candidates from the three-year SDSS-II SN survey data

    Energy Technology Data Exchange (ETDEWEB)

    Sako, Masao; /Pennsylvania U.; Bassett, Bruce; /South African Astron. Observ. /Cape Town U., Dept. Math.; Connolly, Brian; /Pennsylvania U.; Dilday, Benjamin; /Las Cumbres Observ. /UC, Santa Barbara /Rutgers U., Piscataway; Cambell, Heather; /Portsmouth U., ICG; Frieman, Joshua A.; /Chicago U. /Chicago U., KICP /Fermilab; Gladney, Larry; /Pennsylvania U.; Kessler, Richard; /Chicago U. /Chicago U., KICP; Lampeitl, Hubert; /Portsmouth U., ICG; Marriner, John; /Fermilab; Miquel, Ramon; /Barcelona, IFAE /ICREA, Barcelona /Portsmouth U., ICG

    2011-07-01

    We analyze the three-year Sloan Digital Sky Survey II (SDSS-II) Supernova (SN) Survey data and identify a sample of 1070 photometric Type Ia supernova (SN Ia) candidates based on their multiband light curve data. This sample consists of SN candidates with no spectroscopic confirmation, with a subset of 210 candidates having spectroscopic redshifts of their host galaxies measured while the remaining 860 candidates are purely photometric in their identification. We describe a method for estimating the efficiency and purity of photometric SN Ia classification when spectroscopic confirmation of only a limited sample is available, and demonstrate that SN Ia candidates from SDSS-II can be identified photometrically with ∼91% efficiency and with a contamination of ∼6%. Although this is the largest uniform sample of SN candidates to date for studying photometric identification, we find that a larger spectroscopic sample of contaminating sources is required to obtain a better characterization of the background events. A Hubble diagram using SN candidates with no spectroscopic confirmation, but with host galaxy spectroscopic redshifts, yields a distance modulus dispersion that is only ∼20%-40% larger than that of the spectroscopically confirmed SN Ia sample alone with no significant bias. A Hubble diagram with purely photometric classification and redshift-distance measurements, however, exhibits biases that require further investigation for precision cosmology.

  8. PHOTOMETRIC TYPE Ia SUPERNOVA CANDIDATES FROM THE THREE-YEAR SDSS-II SN SURVEY DATA

    International Nuclear Information System (INIS)

    Sako, Masao; Connolly, Brian; Gladney, Larry; Bassett, Bruce; Dilday, Benjamin; Cambell, Heather; Lampeitl, Hubert; Nichol, Robert C.; Frieman, Joshua A.; Kessler, Richard; Marriner, John; Miquel, Ramon; Schneider, Donald P.; Smith, Mathew; Sollerman, Jesper

    2011-01-01

    We analyze the three-year Sloan Digital Sky Survey II (SDSS-II) Supernova (SN) Survey data and identify a sample of 1070 photometric Type Ia supernova (SN Ia) candidates based on their multiband light curve data. This sample consists of SN candidates with no spectroscopic confirmation, with a subset of 210 candidates having spectroscopic redshifts of their host galaxies measured while the remaining 860 candidates are purely photometric in their identification. We describe a method for estimating the efficiency and purity of photometric SN Ia classification when spectroscopic confirmation of only a limited sample is available, and demonstrate that SN Ia candidates from SDSS-II can be identified photometrically with ∼91% efficiency and with a contamination of ∼6%. Although this is the largest uniform sample of SN candidates to date for studying photometric identification, we find that a larger spectroscopic sample of contaminating sources is required to obtain a better characterization of the background events. A Hubble diagram using SN candidates with no spectroscopic confirmation, but with host galaxy spectroscopic redshifts, yields a distance modulus dispersion that is only ∼20%-40% larger than that of the spectroscopically confirmed SN Ia sample alone with no significant bias. A Hubble diagram with purely photometric classification and redshift-distance measurements, however, exhibits biases that require further investigation for precision cosmology.

  9. Evaluation of slow shutdown system flux detectors in Point Lepreau Generating Station - II: dynamic compensation error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Anghel, V.N.P.; Sur, B. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada); Taylor, D. [New Brunswick Power Nuclear, Point Lepreau, New Brunswick (Canada)

    2009-07-01

    CANDU reactors are protected against reactor overpower by two independent shutdown systems: Shut Down System 1 and 2 (SDS1 and SDS2). At the Point Lepreau Generating Station (PLGS), the shutdown systems can be actuated by measurements of the neutron flux from Platinum-clad Inconel In-Core Flux Detectors. These detectors have a complex dynamic behaviour, characterized by 'prompt' and 'delayed' components with respect to immediate changes in the in-core neutron flux. It was shown previously (I: Dynamic Response Characterization by Anghel et al., this conference) that the dynamic responses of the detectors changed with irradiation, with the SDS2 detectors having 'prompt' signal components that decreased significantly. In this paper we assess the implication of these changes for detector dynamic compensation errors by comparing the compensated detector response with the power-to-fuel and the power-to-coolant responses to neutron flux ramps as assumed by previous error analyses. The dynamic compensation error is estimated at any given trip time for all possible accident flux ramps. Some implications for the shutdown system trip set points, obtained from preliminary results, are discussed. (author)

  10. Unusual broad-line Mg II emitters among luminous galaxies in the baryon oscillation spectroscopic survey

    International Nuclear Information System (INIS)

    Roig, Benjamin; Blanton, Michael R.; Ross, Nicholas P.

    2014-01-01

    Many classes of active galactic nuclei (AGNs) have been observed and recorded since the discovery of Seyfert galaxies. In this paper, we examine the sample of luminous galaxies in the Baryon Oscillation Spectroscopic Survey. We find a potentially new observational class of AGNs, one with strong and broad Mg II λ2799 line emission, but very weak emission in other normal indicators of AGN activity, such as the broad-line Hα, Hβ, and the near-ultraviolet AGN continuum, leading to an extreme ratio of broad Hα/Mg II flux relative to normal quasars. Meanwhile, these objects' narrow-line flux ratios reveal AGN narrow-line regions with levels of activity consistent with the Mg II fluxes and in agreement with that of normal quasars. These AGN may represent an extreme case of the Baldwin effect, with very low continuum and high equivalent width relative to typical quasars, but their ratio of broad Mg II to broad Balmer emission remains very unusual. They may also be representative of a class of AGN where the central engine is observed indirectly with scattered light. These galaxies represent a small fraction of the total population of luminous galaxies (≅ 0.1%), but are more likely (about 3.5 times) to have AGN-like nuclear line emission properties than other luminous galaxies. Because Mg II is usually inaccessible for the population of nearby galaxies, there may exist a related population of broad-line Mg II emitters in the local universe which is currently classified as narrow-line emitters (Seyfert 2 galaxies) or low ionization nuclear emission-line regions.

  11. First-Year Spectroscopy for the SDSS-II Supernova Survey

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Chen; Romani, Roger W.; Sako, Masao; Marriner, John; Bassett, Bruce; Becker, Andrew; Choi, Changsu; Cinabro, David; DeJongh, Fritz; Depoy, Darren L.; Dilday, Ben; Doi, Mamoru; Frieman, Joshua A.; Garnavich, Peter M.; Hogan, Craig J.; Holtzman, Jon; Im, Myungshin; Jha, Saurabh; Kessler, Richard; Konishi, Kohki; Lampeitl, Hubert

    2008-03-25

    This paper presents spectroscopy of supernovae discovered in the first season of the Sloan Digital Sky Survey-II Supernova Survey. This program searches for and measures multi-band light curves of supernovae in the redshift range z = 0.05-0.4, complementing existing surveys at lower and higher redshifts. Our goal is to better characterize the supernova population, with a particular focus on SNe Ia, improving their utility as cosmological distance indicators and as probes of dark energy. Our supernova spectroscopy program features rapid-response observations using telescopes of a range of apertures, and provides confirmation of the supernova and host-galaxy types as well as precise redshifts. We describe here the target identification and prioritization, data reduction, redshift measurement, and classification of 129 SNe Ia, 16 spectroscopically probable SNe Ia, 7 SNe Ib/c, and 11 SNe II from the first season. We also describe our efforts to measure and remove the substantial host galaxy contamination existing in the majority of our SN spectra.

  12. Comparison of two dietary assessment methods by food consumption: results of the German National Nutrition Survey II.

    Science.gov (United States)

    Eisinger-Watzl, Marianne; Straßburg, Andrea; Ramünke, Josa; Krems, Carolin; Heuer, Thorsten; Hoffmann, Ingrid

    2015-04-01

    To further characterise the performance of the diet history method and the 24-h recalls method, both in an updated version, a comparison was conducted. The National Nutrition Survey II, representative of Germany, assessed food consumption with both methods. The comparison was conducted in a sample of 9,968 participants aged 14-80. Besides calculating mean differences, statistical agreement measures encompass Spearman and intraclass correlation coefficients, ranking participants in quartiles and the Bland-Altman method. Mean consumption of 12 out of 18 food groups was higher when assessed with the diet history method. Three of these 12 food groups had a medium to large effect size (e.g., raw vegetables) and seven showed at least a small effect, while there was basically no difference for coffee/tea or ice cream. Intraclass correlations were strong only for beverages (>0.50) and weakest for vegetables. The requirement of the diet history method to remember consumption over the past 4 weeks may be a source of inaccuracy, especially for inhomogeneous food groups. Additionally, social desirability gains significance. There is no assessment method without errors, and attention to specific food groups is a critical issue with every method. Altogether, the 24-h recalls method applied in the presented study offers advantages in approximating food consumption as compared to the diet history method.
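
    Of the agreement statistics listed, the Bland-Altman method is the simplest to reproduce: it reports the mean difference between the two instruments and limits of agreement at ±1.96 standard deviations of the differences. A sketch on hypothetical paired intakes follows; it is not the survey's analysis code.

    ```python
    import numpy as np

    def bland_altman(x_diet_history, y_24h_recall):
        """Mean difference and 95% limits of agreement between two paired measurements."""
        x, y = np.asarray(x_diet_history, float), np.asarray(y_24h_recall, float)
        diff = x - y
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)
        return bias, bias - loa, bias + loa

    # hypothetical daily vegetable intake (g/day) for eight participants
    print(bland_altman([180, 220, 150, 300, 260, 210, 190, 240],
                       [140, 210, 160, 250, 230, 180, 200, 210]))
    ```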

  13. THE GREEN BANK TELESCOPE H II REGION DISCOVERY SURVEY. IV. HELIUM AND CARBON RECOMBINATION LINES

    Energy Technology Data Exchange (ETDEWEB)

    Wenger, Trey V.; Bania, T. M. [Astronomy Department, 725 Commonwealth Avenue, Boston University, Boston, MA 02215 (United States); Balser, Dana S. [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA, 22903-2475 (United States); Anderson, L. D. [Department of Physics, West Virginia University, Morgantown, WV 26506 (United States)

    2013-02-10

    The Green Bank Telescope H II Region Discovery Survey (GBT HRDS) found hundreds of previously unknown Galactic regions of massive star formation by detecting hydrogen radio recombination line (RRL) emission from candidate H II region targets. Since the HRDS nebulae lie at large distances from the Sun, they are located in previously unprobed zones of the Galactic disk. Here, we derive the properties of helium and carbon RRL emission from HRDS nebulae. Our target sample is the subset of the HRDS that has visible helium or carbon RRLs. This criterion gives a total of 84 velocity components (14% of the HRDS) with helium emission and 52 (9%) with carbon emission. For our highest quality sources, the average ⁴He⁺/H⁺ abundance ratio by number, ⟨y⁺⟩, is 0.068 ± 0.023 (1σ). This is the same ratio as that measured for the sample of previously known Galactic H II regions. Nebulae without detected helium emission give robust y⁺ upper limits. There are 5 RRL emission components with y⁺ less than 0.04 and another 12 with upper limits below this value. These H II regions must have either a very low ⁴He abundance or contain a significant amount of neutral helium. The HRDS has 20 nebulae with carbon RRL emission but no helium emission at its sensitivity level. There is no correlation between the carbon RRL parameters and the 8 μm mid-infrared morphology of these nebulae.

  14. Guideline appraisal with AGREE II: online survey of the potential influence of AGREE II items on overall assessment of guideline quality and recommendation for use.

    Science.gov (United States)

    Hoffmann-Eßer, Wiebke; Siering, Ulrich; Neugebauer, Edmund A M; Brockhaus, Anne Catharina; McGauran, Natalie; Eikermann, Michaela

    2018-02-27

    The AGREE II instrument is the most commonly used guideline appraisal tool. It includes 23 appraisal criteria (items) organized within six domains. AGREE II also includes two overall assessments (overall guideline quality, recommendation for use). Our aim was to investigate how strongly the 23 AGREE II items influence the two overall assessments. An online survey of authors of publications on guideline appraisals with AGREE II and guideline users from a German scientific network was conducted between 10th February 2015 and 30th March 2015. Participants were asked to rate the influence of the AGREE II items on a Likert scale (0 = no influence to 5 = very strong influence). The frequencies of responses and their dispersion were presented descriptively. Fifty-eight of the 376 persons contacted (15.4%) participated in the survey and the data of the 51 respondents with prior knowledge of AGREE II were analysed. Items 7-12 of Domain 3 (rigour of development) and both items of Domain 6 (editorial independence) had the strongest influence on the two overall assessments. In addition, Items 15-17 (clarity of presentation) had a strong influence on the recommendation for use. Great variations were shown for the other items. The main limitation of the survey is the low response rate. In guideline appraisals using AGREE II, items representing rigour of guideline development and editorial independence seem to have the strongest influence on the two overall assessments. In order to ensure a transparent approach to reaching the overall assessments, we suggest the inclusion of a recommendation in the AGREE II user manual on how to consider item and domain scores. For instance, the manual could include an a-priori weighting of those items and domains that should have the strongest influence on the two overall assessments. The relevance of these assessments within AGREE II could thereby be further specified.

  15. Fault diagnosis of generation IV nuclear HTGR components – Part II: The area error enthalpy–entropy graph approach

    International Nuclear Information System (INIS)

    Rand, C.P. du; Schoor, G. van

    2012-01-01

    Highlights: ► Different uncorrelated fault signatures are derived for HTGR component faults. ► A multiple classifier ensemble increases confidence in classification accuracy. ► Detailed simulation model of system is not required for fault diagnosis. - Abstract: The second paper in a two part series presents the area error method for generation of representative enthalpy–entropy (h–s) fault signatures to classify malfunctions in generation IV nuclear high temperature gas-cooled reactor (HTGR) components. The second classifier is devised to ultimately address the fault diagnosis (FD) problem via the proposed methods in a multiple classifier (MC) ensemble. FD is realized by way of different input feature sets to the classification algorithm based on the area and trajectory of the residual shift between the fault-free and the actual operating h–s graph models. The application of the proposed technique is specifically demonstrated for 24 single fault transients considered in the main power system (MPS) of the Pebble Bed Modular Reactor (PBMR). The results show that the area error technique produces different fault signatures with low correlation for all the examined component faults. A brief evaluation of the two fault signature generation techniques is presented and the performance of the area error method is documented using the fault classification index (FCI) presented in Part I of the series. The final part of this work reports the application of the proposed approach for classification of an emulated fault transient in data from the prototype Pebble Bed Micro Model (PBMM) plant. Reference data values are calculated for the plant via a thermo-hydraulic simulation model of the MPS. The results show that the correspondence between the fault signatures, generated via experimental plant data and simulated reference values, are generally good. The work presented in the two part series, related to the classification of component faults in the MPS of different

  16. Characteristics and verification of a car-borne survey system for dose rates in air: KURAMA-II

    International Nuclear Information System (INIS)

    Tsuda, S.; Yoshida, T.; Tsutsumi, M.; Saito, K.

    2015-01-01

    The car-borne survey system KURAMA-II, developed by the Kyoto University Research Reactor Institute, has been used for air dose rate mapping after the Fukushima Dai-ichi Nuclear Power Plant accident. KURAMA-II consists of a CsI(Tl) scintillation detector, a GPS device, and a control device for data processing. The dose rates monitored by KURAMA-II are based on the G(E) function (spectrum-dose conversion operator), which can precisely calculate dose rates from measured pulse-height distribution even if the energy spectrum changes significantly. The characteristics of KURAMA-II have been investigated with particular consideration to the reliability of the calculated G(E) function, dose rate dependence, statistical fluctuation, angular dependence, and energy dependence. The results indicate that 100 units of KURAMA-II systems have acceptable quality for mass monitoring of dose rates in the environment. - Highlights: • KURAMA-II is a car-borne survey system developed by Kyoto University. • A spectrum-dose conversion operator for KURAMA-II was calculated and examined. • We examined the radiation characteristics of KURAMA-II such as energy dependence. • KURAMA-II has acceptable quality for environmental mass dose rate monitoring
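
    The G(E)-function approach turns a measured pulse-height distribution directly into an air dose rate by weighting each energy channel. The sketch below shows only that weighted sum; the functional form of G(E) and the spectrum are made-up placeholders, since the actual operator is derived from response-matrix calculations for the CsI(Tl) detector.

    ```python
    import numpy as np

    def dose_rate(counts_per_s, energies_keV, g_of_e):
        """Air dose rate as the G(E)-weighted sum over the pulse-height distribution."""
        return float(np.sum(counts_per_s * g_of_e(energies_keV)))

    g = lambda E: 1.0e-6 * (E / 662.0) ** 1.2        # hypothetical G(E), uSv/h per (count/s)
    E_bins = np.linspace(50.0, 3000.0, 60)           # keV
    counts = 200.0 * np.exp(-E_bins / 500.0)         # toy pulse-height distribution, counts/s per bin
    print(dose_rate(counts, E_bins, g))
    ```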

  17. Attempt to Determine the Prevalence of Two Inborn Errors of Primary Bile Acid Synthesis : Results of a European Survey

    NARCIS (Netherlands)

    Jahnel, Jörg; Zöhrer, Evelyn; Fischler, Björn; D'Antiga, Lorenzo; Debray, Dominique; Dezsofi, Antal; Haas, Dorothea; Hadzic, Nedim; Jacquemin, Emmanuel; Lamireau, Thierry; Maggiore, Giuseppe; McKiernan, Pat J; Calvo, Pier Luigi; Verkade, Henkjan J; Hierro, Loreto; McLin, Valerie; Baumann, Ulrich; Gonzales, Emmanuel

    2017-01-01

    Objective: Inborn errors of primary bile acid (BA) synthesis are genetic cholestatic disorders leading to accumulation of atypical BA with deficiency of normal BA. Unless treated with primary BA, chronic liver disease usually progresses to cirrhosis and liver failure before adulthood. We sought to

  18. Exploring the Milky Way halo with SDSS-II SN survey RR Lyrae stars

    Science.gov (United States)

    De Lee, Nathan

    This thesis details the creation of a large catalog of RR Lyrae stars, their lightcurves, and their associated photometric and kinematic parameters. This catalog contains 421 RR Lyrae stars with 305 RRab and 116 RRc. Of these, 241 stars have stellar spectra taken with either the Blanco 4m RC spectrograph or the SDSS/SEGUE survey, and in some cases taken by both. From these spectra and photometric methods derived from them, an analysis is conducted of the RR Lyrae stars' distribution, metallicity, kinematics, and photometric properties within the halo. All of these RR Lyrae originate from the SDSS-II Supernova Survey. The SDSS-II SN Survey covers a 2.5 degree equatorial stripe ranging from -60 to +60 degrees in RA. This corresponds to relatively high southern galactic latitudes in the anti-center direction. The full catalog ranges from g_0 magnitude 13 to 20, which covers a distance of 3 to 95 kpc from the Sun. Using this sample, we explore the Oosterhoff dichotomy through the D log P method as a function of |Z| distance from the plane. This results in a clear division of the RRab stars into OoI and OoII groups at lower |Z|, but the population becomes dominated by OoI stars at higher |Z|. The idea of a dual halo is explored primarily in the context of radial velocity distributions as a function of |Z|. In particular, V_gsr, the radial velocity in the galactic standard of rest, is used as a proxy for V_φ, the cylindrical rotational velocity. This is then compared against a single-halo model galaxy, which results in very similar V_gsr histograms for both at low to medium |Z|. However, at high |Z| there is a clear separation into two distinct velocity groups for the data without a corresponding separation in the model, suggesting that at least a two-component model for the halo is necessary. The final part of the analysis involves [Fe/H] measurements from both spectra and photometric relations cut in both |Z| and radial velocity. In this case
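
    The 3-95 kpc span quoted for the g_0 = 13-20 catalog follows from the near-constant absolute magnitude of RR Lyrae stars via the distance modulus. The sketch below assumes M_g ≈ 0.6; the thesis's adopted calibration (and its metallicity dependence) may differ.

    ```python
    def rr_lyrae_distance_kpc(g0, M_g=0.6):
        """Heliocentric distance from the dereddened g-band magnitude of an RR Lyrae star."""
        return 10.0 ** ((g0 - M_g + 5.0) / 5.0) / 1000.0

    print(rr_lyrae_distance_kpc(13.0), rr_lyrae_distance_kpc(20.0))   # roughly 3 and 75 kpc
    ```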

  19. Environmental monitoring survey of oil and gas fields in Region II in 2009. Summary report

    Energy Technology Data Exchange (ETDEWEB)

    2010-03-15

    The oil companies Statoil ASA, ExxonMobil Exploration and Production Norway AS, Total E&P Norge AS, Talisman Energy Norge AS and Marathon Petroleum Norge AS commissioned the Section of Applied Environmental Research at UNI RESEARCH AS to undertake the monitoring survey of Region II in 2009. Similar monitoring surveys in Region II have been carried out in 1996, 2000, 2003 and 2006. The survey in 2009 included a total of 18 fields: Rev, Varg, Sigyn, Sleipner Vest, Sleipner Øst, Sleipner Alfa Nord, Glitne, Grane, Balder, Ringhorne, Jotun, Vale, Skirne, Byggve, Heimdal, Volve, Vilje and Alvheim. Sampling was conducted from the vessel MV Libas between May 18 and May 27. Samples were collected at a total of 137 sampling sites, of which 15 were regional sampling sites. Samples for chemical analysis were collected at all sites, whereas samples for benthos analysis were collected at 12 fields. As in previous surveys, Region II is divided into natural sub-regions: a shallow sub-region (77-96 m), a central sub-region (107-130 m) and a northern sub-region (115-119 m). The sediments of the shallow sub-region had a relatively lower content of TOM and pelite and a higher content of fine sand than the central and northern sub-regions. Calculated areas of contamination are shown for the sub-regions in Table 1.1. The fields Sigyn, Sleipner Alfa Nord, Glitne, Grane, Balder, Ringhorne, Jotun, Skirne, Byggve, Vilje and Alvheim showed no contamination of THC. At the other fields there were minor changes from 2006. The concentrations of barium increased in the central sub-region from 2006 to 2009, also at fields where no drilling had been undertaken during the last years. The same laboratory and methods were used during the last three regional investigations. The changes in barium concentrations may be due to high variability of barium concentrations in the sediments. This is supported by relatively large variations in average barium concentrations at the regional sampling sites in

  20. Cultural-resource survey report: Hoover Dam Powerplant Modification Project II. Associated transmission-line facility

    International Nuclear Information System (INIS)

    Queen, R.L.

    1991-06-01

    The Bureau of Reclamation (Reclamation) is proposing to modify or install additional transmission facilities between the Hoover Dam hydroelectric plant and the Western Area Power Authority substation near Boulder City, Nevada. Reclamation has completed cultural resource investigations to identify historic or prehistoric resources in the project area that might be affected during construction of the transmission line. Four possible transmission corridors, approximately 50 feet wide and between 9.5 and 11.5 miles long, were investigated. The proposed transmission lines either parallel or replace existing transmission lines. The corridors have generally undergone significant disturbance from past transmission line construction. A Class II sampling survey covering approximately 242 acres was conducted. Access or construction roads have not been identified, and surveys of these areas will have to be completed in the future. No historic or prehistoric archeological sites were encountered within the four corridor rights-of-way. The probability of prehistoric sites is believed to be very low. Four historic-period sites were recorded that are outside, but near, the proposed corridor. These sites are not individually eligible for the National Register of Historic Places, but may be associated with the construction of Hoover Dam and contribute to a historic district or multiple property resource area focusing on the dam and its construction.

  1. Survey II of public and leadership attitudes toward nuclear power development in the United States

    International Nuclear Information System (INIS)

    Anon.

    1976-01-01

    In August 1975, Ebasco Services Incorporated released results of a survey conducted by Louis Harris and Associates, Inc. to determine attitudes of the American public and its leaders toward nuclear power development in the U.S. Results showed, among other things, that the public favored building nuclear power plants; that they believed we have an energy shortage that will not go away soon; that they were not willing to make environmental sacrifices; and that, while favoring nuclear power development, they also had concerns about some aspects of nuclear power. Except for the environmental group, the leadership group felt the same way the public did. A follow-up survey was made in July 1976 to measure any shifts in attitudes. Survey II showed that one of the real worries that remains with the American public is the shortage of energy; additionally, the public and the leaders are concerned about the U.S. dependence on imported oil. With the exception of the environmentalists, the public and its leaders support a host of measures to build energy sources, including solar and oil shale development, speeding up the Alaskan pipeline, speeding up off-shore drilling, and building nuclear power plants. The public continues to be unwilling to sacrifice the environment. There is less conviction on the part of the public that electric power will be in short supply over the next decade. The public believes the days of heavy dependence on oil or hydroelectric power are coming to an end. By a margin of 3 to 1, the public favors building more nuclear power plants in the U.S., but some concerns about the risks have not dissipated. Even though the public is worried about radioactivity escaping into the atmosphere, they consider nuclear power generation more safe than unsafe.

  2. Hydra II: A Faint and Compact Milky Way Dwarf Galaxy Found in the Survey of the Magellanic Stellar History

    NARCIS (Netherlands)

    Martin, Nicolas F.; Nidever, David L.; Besla, Gurtina; Olsen, Knut; Walker, Alistair R.; Vivas, A. Katherina; Gruendl, Robert A.; Kaleida, Catherine C.; Muñoz, Ricardo R.; Blum, Robert D.; Saha, Abhijit; Conn, Blair C.; Bell, Eric F.; Chu, You-Hua; Cioni, Maria-Rosa L.; de Boer, Thomas J. L.; Gallart, Carme; Jin, Shoko; Kunder, Andrea; Majewski, Steven R.; Martinez-Delgado, David; Monachesi, Antonela; Monelli, Matteo; Monteagudo, Lara; Noël, Noelia E. D.; Olszewski, Edward W.; Stringfellow, Guy S.; van der Marel, Roeland P.; Zaritsky, Dennis

    We present the discovery of a new dwarf galaxy, Hydra II, found serendipitously within the data from the ongoing Survey of the Magellanic Stellar History conducted with the Dark Energy Camera on the Blanco 4 m Telescope. The new satellite is compact (r_h = 68 ± 11 pc) and faint (M_V = -4.8 ± 0.3),

  3. Determining Type I and Type II Errors when Applying Information Theoretic Change Detection Metrics for Data Association and Space Situational Awareness

    Science.gov (United States)

    Wilkins, M.; Moyer, E. J.; Hussein, Islam I.; Schumacher, P. W., Jr.

    Correlating new detections back to a large catalog of resident space objects (RSOs) requires solving one of three types of data association problems: observation-to-track, track-to-track, or observation-to-observation. The authors' previous work has explored the use of various information divergence metrics for solving these problems: Kullback-Leibler (KL) divergence, mutual information, and Bhattacharyya distance. In addition to approaching the data association problem strictly from the metric tracking aspect, we have explored fusing metric and photometric data using Bayesian probabilistic reasoning for RSO identification to aid in our ability to correlate data to specific RSOs. In this work, we will focus our attention on the KL divergence, which is a measure of the information gained when new evidence causes the observer to revise their beliefs. We can apply the Principle of Minimum Discrimination Information such that new data produces as small an information gain as possible, with this information change bounded by ɛ. Choosing an appropriate value of ɛ for both convergence and change detection is a function of risk tolerance. A small ɛ for change detection increases alarm rates, while a larger ɛ for convergence means that new evidence need not be identical in information content. We need to understand what this change detection metric implies for Type I (α) and Type II (β) errors when we are forced to decide whether new evidence represents a true change in the characterization of an object or is merely within the bounds of our measurement uncertainty. This is unclear for the case of fusing multiple kinds and qualities of characterization evidence that may exist in different metric spaces or are even semantic statements. To this end, we explore the use of Sequential Probability Ratio Testing, where we suppose that we may need to collect additional evidence before accepting or rejecting the null hypothesis that a change has occurred. In this work, we
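
    A minimal sketch of the kind of test described above, assuming Gaussian state estimates: the information gain from an update is computed as the Kullback-Leibler divergence of the posterior from the prior and compared against a tolerance ɛ. The closed-form Gaussian expression is standard; the example threshold and numbers are illustrative, not values from the paper.

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """Kullback-Leibler divergence D( N(mu0, cov0) || N(mu1, cov1) )."""
    k = mu0.size
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0) + diff @ cov1_inv @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def change_detected(mu_prior, cov_prior, mu_post, cov_post, eps=0.5):
    """Flag a change when the information gained by the update exceeds eps."""
    return kl_gaussian(mu_post, cov_post, mu_prior, cov_prior) > eps

prior_mu, prior_cov = np.zeros(2), np.eye(2)
post_mu, post_cov = np.array([1.5, 0.0]), 0.8 * np.eye(2)
print(change_detected(prior_mu, prior_cov, post_mu, post_cov))
```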

  4. The Unique Optical Design of the CTI-II Survey Telescope

    Science.gov (United States)

    Ackermann, Mark R.; McGraw, J. T.; MacFarlane, M.

    2006-12-01

    The CCD/Transit Instrument with Innovative Instrumentation (CTI-II) is being developed for precision ground-based astrometric and photometric astronomical observations. The 1.8 m telescope will be stationary and near-zenith pointing, and will feature a CCD-mosaic array operated in time-delay and integrate (TDI) mode to image a continuous strip of the sky in five bands. The heart of the telescope is a Nasmyth-like bent-Cassegrain optical system optimized to produce near diffraction-limited images with near-zero distortion over a circular 1.42 deg field. The optical design includes an f/2.2 parabolic ULE primary with no central hole, salvaged from the original CTI telescope, and adds the requisite hyperbolic secondary, a folding flat, and a highly innovative all-spherical, five-lens corrector which includes three plano surfaces. The reflective and refractive portions of the design have been optimized as individual but interdependent systems so that the same reflective system can be used with slightly different refractive correctors. At present, two nearly identical corrector designs are being evaluated, one fabricated from BK-7 glass and the other from fused silica. The five-lens corrector consists of an air-spaced triplet separated from a follow-on air-spaced doublet. Either design produces 0.25 arcsecond images at 83% encircled energy with a maximum of 0.0005% distortion. The innovative five-lens corrector design has been applied to other current and planned Cassegrain, RC and super-RC optical systems requiring correctors. The basic five-lens approach always results in improved performance compared to the original designs. In some cases, the improvement in image quality is small but includes substantial reductions in distortion. In other cases, the improvement in image quality is substantial. Because the CTI-II corrector is designed for a parabolic primary, it might be especially useful for liquid mirror telescopes. We describe and discuss the CTI-II optical design with respect

  5. Radiation survey of first Hi-Art II Tomotherapy vault design in India

    International Nuclear Information System (INIS)

    Kinhikar, Rajesh A.; Jamema, S.V.; Pai, Rajeshree; Sharma, P.K. Dash; Deshpande, Deepak D.

    2009-01-01

    A vault with adequate shielding, compliant with government regulations, was designed and constructed for the Hi-Art II Tomotherapy machine, the first of its kind in India. Radiation measurements around this Tomotherapy treatment vault were carried out to check the shielding adequacy of the source housing and the vault. It was mandatory to get this unconventional machine 'Type Approved' by the Atomic Energy Regulatory Board (AERB) in India. The aim of this paper is to report the radiation levels measured during the radiation survey carried out for this machine. The radiation levels in and around the vault were measured for stationary as well as rotational treatment procedures with the largest open field size (5 cm x 40 cm) at the isocenter, with and without scattering medium. The survey was also performed at three locations near each wall surrounding the vault. The leakage radiation from the source housing was measured both in the patient plane outside the treatment field and at one meter distance from the source outside the patient plane. The radiation levels for both stationary and rotational procedures were within 1 mR/h. No significant difference was observed in the radiation levels measured for rotational procedures with and without scattering medium. The leakage radiation in the patient plane was found to be 0.04% (tolerance 0.2%), while the head leakage was 0.007% (tolerance 0.5%) of the dose rate at the isocenter. Treatment delivery with Tomotherapy therefore produces safe radiation levels around the installation and also meets the leakage criteria.
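
    The leakage figures quoted above are simply measured dose rates expressed as a fraction of the isocenter dose rate. The sketch below shows that bookkeeping; the dose-rate numbers are made up for illustration and only the tolerances (0.2% and 0.5%) come from the text.

```python
def leakage_percent(leakage_dose_rate, isocenter_dose_rate):
    """Leakage expressed as a percentage of the isocenter dose rate."""
    return 100.0 * leakage_dose_rate / isocenter_dose_rate

# Hypothetical dose rates (same units), checked against the quoted tolerances.
patient_plane = leakage_percent(0.34, 850.0)   # -> 0.04
head = leakage_percent(0.06, 850.0)            # -> 0.007
print(patient_plane <= 0.2, head <= 0.5)
```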

  6. A Survey of Ca II H and K Chromospheric Emission in Southern Solar-Type Stars

    Science.gov (United States)

    Henry, Todd J.; Soderblom, David R.; Donahue, Robert A.; Baliunas, Sallie L.

    1996-01-01

    More than 800 southern stars within 50 pc have been observed for chromospheric emission in the cores of the Ca II H and K lines. Most of the sample targets were chosen to be G dwarfs on the basis of colors and spectral types. The bimodal distribution in stellar activity first noted in a sample of northern stars by Vaughan and Preston in 1980 is confirmed, and the percentage of active stars, about 30%, is remarkably consistent between the northern and southern surveys. This is especially compelling given that we have used an entirely different instrumental setup and stellar sample than used in the previous study. Comparisons to the Sun, a relatively inactive star, show that most nearby solar-type stars have a similar activity level, and presumably a similar age. We identify two additional subsamples of stars -- a very active group, and a very inactive group. The very active group may be made up of young stars near the Sun, accounting for only a few percent of the sample, and appears to be less than ~0.1 Gyr old. Included in this high-activity tail of the distribution, however, is a subset of very close binaries of the RS CVn or W UMa types. The remaining members of this population may be undetected close binaries or very young single stars. The very inactive group of stars, contributing ~5%-10% to the total sample, may be those caught in a Maunder Minimum type phase. If the observations of the survey stars are considered to be a sequence of snapshots of the Sun during its life, we might expect that the Sun will spend about 10% of the remainder of its main-sequence life in a Maunder Minimum phase.

  7. A Measurement of the Rate of Type Ia Supernovae in Galaxy Clusters from the SDSS-II Supernova Survey

    Energy Technology Data Exchange (ETDEWEB)

    Dilday, Benjamin; /Rutgers U., Piscataway /Chicago U. /KICP, Chicago; Bassett, Bruce; /Cape Town U., Dept. Math. /South African Astron. Observ.; Becker, Andrew; /Washington U., Seattle, Astron. Dept.; Bender, Ralf; /Munich, Tech. U. /Munich U. Observ.; Castander, Francisco; /Barcelona, IEEC; Cinabro, David; /Wayne State U.; Frieman, Joshua A.; /Chicago U. /Fermilab; Galbany, Lluis; /Barcelona, IFAE; Garnavich, Peter; /Notre Dame U.; Goobar, Ariel; /Stockholm U., OKC /Stockholm U.; Hopp, Ulrich; /Munich, Tech. U. /Munich U. Observ. /Tokyo U.

    2010-03-01

    We present measurements of the Type Ia supernova (SN) rate in galaxy clusters based on data from the Sloan Digital Sky Survey-II (SDSS-II) Supernova Survey. The cluster SN Ia rate is determined from 9 SN events in a set of 71 C4 clusters at z {le} 0.17 and 27 SN events in 492 maxBCG clusters at 0.1 {le} z {le} 0.3. We find values for the cluster SN Ia rate of (0.37{sub -0.12-0.01}{sup +0.17+0.01}) SNur h{sup 2} and (0.55{sub -0.11-0.01}{sup +0.13+0.02}) SNur h{sup 2} (SNux = 10{sup -12}L{sub x{circle_dot}}{sup -1} yr{sup -1}) in C4 and maxBCG clusters, respectively, where the quoted errors are statistical and systematic, respectively. The SN rate for early-type galaxies is found to be (0.31{sub -0.12-0.01}{sup +0.18+0.01}) SNur h{sup 2} and (0.49{sub -0.11-0.01}{sup +0.15+0.02}) SNur h{sup 2} in C4 and maxBCG clusters, respectively. The SN rate for the brightest cluster galaxies (BCG) is found to be (2.04{sub -1.11-0.04}{sup +1.99+0.07}) SNur h{sup 2} and (0.36{sub -0.30-0.01}{sup +0.84+0.01}) SNur h{sup 2} in C4 and maxBCG clusters, respectively. The ratio of the SN Ia rate in cluster early-type galaxies to that of the SN Ia rate in field early-type galaxies is 1.94{sub -0.91-0.015}{sup +1.31+0.043} and 3.02{sub -1.03-0.048}{sup +1.31+0.062}, for C4 and maxBCG clusters, respectively. The SN rate in galaxy clusters as a function of redshift, which probes the late time SN Ia delay distribution, shows only weak dependence on redshift. Combining our current measurements with previous measurements, we fit the cluster SN Ia rate data to a linear function of redshift, and find r{sub L} = [(0.49{sub -0.14}{sup +0.15}) + (0.91{sub -0.81}{sup +0.85}) x z] SNuB h{sup 2}. A comparison of the radial distribution of SNe in cluster to field early-type galaxies shows possible evidence for an enhancement of the SN rate in the cores of cluster early-type galaxies. With an observation of at most 3 hostless, intra-cluster SNe Ia, we estimate the fraction of cluster SNe that are

  8. A MEASUREMENT OF THE RATE OF TYPE Ia SUPERNOVAE IN GALAXY CLUSTERS FROM THE SDSS-II SUPERNOVA SURVEY

    International Nuclear Information System (INIS)

    Dilday, Benjamin; Jha, Saurabh W.; Bassett, Bruce; Becker, Andrew; Bender, Ralf; Hopp, Ulrich; Castander, Francisco; Cinabro, David; Frieman, Joshua A.; Galbany, LluIs; Miquel, Ramon; Garnavich, Peter; Goobar, Ariel; Ihara, Yutaka; Kessler, Richard; Lampeitl, Hubert; Nichol, Robert C.; Marriner, John; Molla, Mercedes

    2010-01-01

    We present measurements of the Type Ia supernova (SN) rate in galaxy clusters based on data from the Sloan Digital Sky Survey-II (SDSS-II) Supernova Survey. The cluster SN Ia rate is determined from 9 SN events in a set of 71 C4 clusters at z ≤ 0.17 and 27 SN events in 492 maxBCG clusters at 0.1 ≤ z ≤ 0.3. We find values for the cluster SN Ia rate of (0.37 +0.17/-0.12 +0.01/-0.01) SNur h^2 and (0.55 +0.13/-0.11 +0.02/-0.01) SNur h^2 (SNux = 10^-12 L_x,sun^-1 yr^-1) in C4 and maxBCG clusters, respectively, where the quoted errors are statistical and systematic, respectively. The SN rate for early-type galaxies is found to be (0.31 +0.18/-0.12 +0.01/-0.01) SNur h^2 and (0.49 +0.15/-0.11 +0.02/-0.01) SNur h^2 in C4 and maxBCG clusters, respectively. The SN rate for the brightest cluster galaxies (BCG) is found to be (2.04 +1.99/-1.11 +0.07/-0.04) SNur h^2 and (0.36 +0.84/-0.30 +0.01/-0.01) SNur h^2 in C4 and maxBCG clusters, respectively. The ratio of the SN Ia rate in cluster early-type galaxies to that of the SN Ia rate in field early-type galaxies is 1.94 +1.31/-0.91 +0.043/-0.015 and 3.02 +1.31/-1.03 +0.062/-0.048, for C4 and maxBCG clusters, respectively. The SN rate in galaxy clusters as a function of redshift, which probes the late-time SN Ia delay distribution, shows only weak dependence on redshift. Combining our current measurements with previous measurements, we fit the cluster SN Ia rate data to a linear function of redshift, and find r_L = [(0.49 +0.15/-0.14) + (0.91 +0.85/-0.81) x z] SNuB h^2. A comparison of the radial distribution of SNe in cluster to field early-type galaxies shows possible evidence for an enhancement of the SN rate in the cores of cluster early-type galaxies. With an observation of at most three hostless, intra-cluster SNe Ia, we estimate the fraction of cluster SNe that are hostless to be (9.4 +8.3/-5.1)%.
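
    The linear redshift dependence quoted above is easy to evaluate directly; the short sketch below uses the best-fit central values from the abstract and ignores their (asymmetric) uncertainties.

```python
def cluster_snia_rate(z, a=0.49, b=0.91):
    """Best-fit cluster SN Ia rate r_L = a + b*z, in SNuB h^2, using the
    central values quoted in the abstract (errors not propagated)."""
    return a + b * z

for z in (0.0, 0.15, 0.3):
    print(f"z = {z:.2f}  r_L = {cluster_snia_rate(z):.2f} SNuB h^2")
```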

  9. A multi-institutional survey evaluating patient related QA – phase II

    Directory of Open Access Journals (Sweden)

    Teichmann Tobias

    2017-09-01

    In phase I of the survey, a planning intercomparison of patient-related QA was performed at 12 institutions. The participating clinics created phantom-based IMRT and VMAT plans, which were measured using the ArcCheck diode array. Mobius3D (M3D) was used in phase II. It acts as a secondary dose verification tool for patient-specific QA based on average linac beam data collected by Mobius Medical Systems. All Quasimodo linac plans will be analyzed for the continuation of the intercomparison. We aim to determine whether Mobius3D is suited for use with diverse treatment techniques and whether beam model customization is needed. We computed initial Mobius3D results by transferring all plans from phase I to our Mobius3D server. Because of some larger PTV mean dose differences, we checked whether output factor customization would be beneficial. We performed measurements and output factor correction to account for discrepancies in reference conditions. Compared to Mobius3D's preconfigured average beam data values, the corrected output factors differed by ±1.5% for field sizes between 7x7 cm2 and 30x30 cm2 and by up to -3.9% for 3x3 cm2. Our method of correcting the output factors yields good agreement with M3D's reference values for these medium field sizes.
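
    The ±1.5% and -3.9% figures above are relative differences between measured output factors and the vendor's average-beam-data reference values. A minimal sketch of that comparison, using hypothetical output-factor pairs, is given below.

```python
def percent_difference(measured, reference):
    """Relative difference of a measured output factor from its reference, in %."""
    return 100.0 * (measured - reference) / reference

# Hypothetical (measured, reference) output factors per field size.
pairs = {"3x3 cm2": (0.828, 0.862), "10x10 cm2": (1.000, 1.000), "30x30 cm2": (1.120, 1.105)}
for field, (meas, ref) in pairs.items():
    print(field, round(percent_difference(meas, ref), 1))
```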

  10. A Survey of Optometry Graduates to Determine Practice Patterns: Part II: Licensure and Practice Establishment Experiences.

    Science.gov (United States)

    Bleimann, Robert L.; Smith, Lee W.

    1985-01-01

    A summary of Part II of a two-volume study of optometry graduates conducted by the Association of Schools and Colleges of Optometry is presented. Part II includes the analysis of the graduates' licensure and practice establishment experiences. (MLW)

  11. Hydra II: A Faint and Compact Milky Way Dwarf Galaxy Found in the Survey of the Magellanic Stellar History

    OpenAIRE

    Martin, NF; Nidever, DL; Besla, G; Olsen, K; Walker, AR; Vivas, AK; Gruendl, RA; Kaleida, CC; Muñoz, RR; Blum, RD; Saha, A; Conn, BC; Bell, EF; Chu, YH; Cioni, MRL

    2015-01-01

    © 2015. The American Astronomical Society. All rights reserved.We present the discovery of a new dwarf galaxy, Hydra II, found serendipitously within the data from the ongoing Survey of the Magellanic Stellar History conducted with the Dark Energy Camera on the Blanco 4 m Telescope. The new satellite is compact (rh = 68 ± 11 pc) and faint (MV = -4.8 ± 0.3), but well within the realm of dwarf galaxies. The stellar distribution of Hydra II in the color-magnitude diagram is well-described by a m...

  12. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are (0,1)-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
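
    A toy sketch of the decoding step described above: compare the received (0,1)-vector with every codeword, pick the one at minimum Hamming distance, and read off the implied error vector. The repetition code in the example is chosen purely for illustration.

```python
import numpy as np

def decode_nearest(received, codewords):
    """Minimum-Hamming-distance decoding; returns (codeword, error vector)."""
    received = np.asarray(received)
    codewords = np.asarray(codewords)
    distances = np.count_nonzero(codewords != received, axis=1)
    best = int(np.argmin(distances))
    return codewords[best], received ^ codewords[best]

code = np.array([[0, 0, 0], [1, 1, 1]])   # (3,1) repetition code
print(decode_nearest([1, 0, 1], code))    # -> codeword 111, error vector 010
```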

  13. SHOCK BREAKOUT IN TYPE II PLATEAU SUPERNOVAE: PROSPECTS FOR HIGH-REDSHIFT SUPERNOVA SURVEYS

    International Nuclear Information System (INIS)

    Tominaga, N.; Morokuma, T.; Blinnikov, S. I.; Nomoto, K.; Baklanov, P.; Sorokina, E. I.

    2011-01-01

    Shock breakout is the brightest radiative phenomenon in a supernova (SN) but is difficult to observe owing to its short duration and X-ray/ultraviolet (UV)-peaked spectra. After the first observation from the rising phase was reported in 2008, its observability at high redshift has attracted enormous attention. We perform multigroup radiation hydrodynamics calculations of explosions for evolutionary presupernova models with various main-sequence masses M_MS, metallicities Z, and explosion energies E. We present multicolor light curves of shock breakouts in Type II plateau SNe, which are the most frequent core-collapse SNe, and predict apparent multicolor light curves of shock breakout at various redshifts z. We derive the observable SN rate and reachable redshift as functions of filter x and limiting magnitude m_x,lim by taking into account an initial mass function, cosmic star formation history, intergalactic absorption, and host galaxy extinction. We propose a realistic survey strategy optimized for shock breakout. For example, the g'-band observable SN rate for m_g',lim = 27.5 mag is 3.3 SNe deg^-2 day^-1, and half of them are located at z ≥ 1.2. It is clear that the shock breakout is a beneficial clue for probing high-z core-collapse SNe. We also establish ways to identify shock breakout and to constrain SN properties from the observed shock breakout brightness, timescale, and color. We emphasize that multicolor observations in blue optical bands with ~hour intervals, preferably over ≥2 continuous nights, are essential to efficiently detect, identify, and interpret shock breakout.

  14. Survey of non-linear hydrodynamic models of type-II Cepheids

    Science.gov (United States)

    Smolec, R.

    2016-03-01

    We present a grid of non-linear convective type-II Cepheid models. The dense model grids are computed for 0.6 M⊙ and a range of metallicities ([Fe/H] = -2.0, -1.5, -1.0), and for 0.8 M⊙ ([Fe/H] = -1.5). Two sets of convective parameters are considered. The models cover the full temperature extent of the classical instability strip, but are limited in luminosity; for the most luminous models, violent pulsation leads to the decoupling of the outermost model shell. Hence, our survey reaches only the shortest-period RV Tau domain. In the Hertzsprung-Russell diagram, we detect two domains in which period-doubled pulsation is possible. The first extends through the BL Her domain and the low-luminosity W Vir domain (pulsation periods ˜2-6.5 d). The second domain extends at higher luminosities (W Vir domain; periods >9.5 d). Some models within these domains display period-4 pulsation. We also detect very narrow domains (˜10 K wide) in which modulation of pulsation is possible. Another interesting phenomenon we detect is double-mode pulsation in the fundamental mode and in the fourth radial overtone. The fourth overtone is a surface mode, trapped in the outer model layers. Single-mode pulsation in the fourth overtone is also possible on the hot side of the classical instability strip. The origin of the above phenomena is discussed. In particular, the role of resonances in driving different pulsation dynamics, as well as in shaping the morphology of the radius variation curves, is analysed.

  15. THE EFFECT OF HOST GALAXIES ON TYPE Ia SUPERNOVAE IN THE SDSS-II SUPERNOVA SURVEY

    International Nuclear Information System (INIS)

    Lampeitl, Hubert; Smith, Mathew; Nichol, Robert C.; Bassett, Bruce; Cinabro, David; Dilday, Benjamin; Jha, Saurabh W.; Foley, Ryan J.; Frieman, Joshua A.; Garnavich, Peter M.; Goobar, Ariel; Nordin, Jakob; Im, Myungshin; Marriner, John; Miquel, Ramon; Oestman, Linda; Riess, Adam G.; Sako, Masao; Schneider, Donald P.; Sollerman, Jesper

    2010-01-01

    We present an analysis of the host galaxy dependences of Type Ia Supernovae (SNe Ia) from the full three-year sample of the SDSS-II Supernova Survey. We re-discover, to high significance, the strong correlation between host galaxy type and the width of the observed SN light curve, i.e., fainter, quickly declining SNe Ia favor passive host galaxies, while brighter, slowly declining SNe Ia favor star-forming galaxies. We also find evidence (at between 2σ and 3σ) that SNe Ia are ≅0.1 ± 0.04 mag brighter in passive host galaxies than in star-forming hosts, after the SN Ia light curves have been standardized using the light-curve shape and color variations. This difference in brightness is present in both the SALT2 and MLCS2k2 light-curve fitting methodologies. We see evidence for differences in the SN Ia color relationship between passive and star-forming host galaxies, e.g., for the MLCS2k2 technique, we see that SNe Ia in passive hosts favor a dust law of R_V = 1.0 ± 0.2, while SNe Ia in star-forming hosts require R_V = 1.8 +0.2/-0.4. The significance of these trends depends on the range of SN colors considered. We demonstrate that these effects can be parameterized using the stellar mass of the host galaxy (with a confidence of >4σ) and that including this extra parameter provides a better statistical fit to our data. Our results suggest that future cosmological analyses of SN Ia samples should include host galaxy information.

  16. Field Surveys, IOC Valleys. Volume III, Part II. Cultural Resources Survey, Pine and Wah Wah Valleys, Utah.

    Science.gov (United States)

    1981-08-01

    including horse, camel, mammoth, musk ox, and certain species of bison, goat, and bear, which had previously inhabited the marsh

  17. The socio-economic patterning of survey participation and non-response error in a multilevel study of food purchasing behaviour: area- and individual-level characteristics.

    Science.gov (United States)

    Turrell, Gavin; Patterson, Carla; Oldenburg, Brian; Gould, Trish; Roy, Marie-Andree

    2003-04-01

    To undertake an assessment of survey participation and non-response error in a population-based study that examined the relationship between socio-economic position and food purchasing behaviour. The study was conducted in Brisbane City (Australia) in 2000. The sample was selected using a stratified two-stage cluster design. Respondents were recruited using a range of strategies that attempted to maximise the involvement of persons from disadvantaged backgrounds: respondents were contacted by personal visit and data were collected using home-based face-to-face interviews; multiple call-backs on different days and at different times were used; and a financial gratuity was provided. Non-institutionalised residents of private dwellings located in 50 small areas that differed in their socio-economic characteristics. Rates of survey participation - measured by non-contacts, exclusions, dropped cases, response rates and completions - were similar across areas, suggesting that residents of socio-economically advantaged and disadvantaged areas were equally likely to be recruited. Individual-level analysis, however, showed that respondents and non-respondents differed significantly in their sociodemographic and food purchasing characteristics: non-respondents were older, less educated and exhibited different purchasing behaviours. Misclassification bias probably accounted for the inconsistent pattern of association between the area- and individual-level results. Estimates of bias due to non-response indicated that although respondents and non-respondents were qualitatively different, the magnitude of error associated with this differential was minimal. Socio-economic position measured at the individual level is a strong and consistent predictor of survey non-participation. Future studies that set out to examine the relationship between socio-economic position and diet need to adopt sampling strategies and data collection methods that maximise the likelihood of recruiting
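
    The size of the non-response error discussed above is often approximated with the standard deterministic expression: bias of the respondent-only mean ≈ (1 − response rate) × (respondent mean − non-respondent mean). A minimal sketch with made-up numbers:

```python
def nonresponse_bias(mean_respondents, mean_nonrespondents, response_rate):
    """Approximate bias of a respondent-only mean:
    (1 - response_rate) * (mean_respondents - mean_nonrespondents)."""
    return (1.0 - response_rate) * (mean_respondents - mean_nonrespondents)

# Hypothetical weekly fruit-purchasing frequencies and a 66% response rate.
print(nonresponse_bias(3.1, 2.6, 0.66))   # -> 0.17
```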

  18. A Hubble Space Telescope Survey of the Disk Cluster Population of M31. II. Advanced Camera for Surveys Pointings

    Science.gov (United States)

    Krienke, O. K.; Hodge, P. W.

    2008-01-01

    This paper reports on a survey of star clusters in M31 based on archival images from the Hubble Space Telescope. Paper I reported results from images obtained with the Wide Field Planetary Camera 2 (WFPC2), and this paper reports results from the Advanced Camera for Surveys (ACS). The ACS survey has yielded a total of 339 star clusters, 52 of which—mostly globular clusters—were found to have been cataloged previously. As for the previous survey, the luminosity function of the clusters drops steeply for absolute magnitudes fainter than M_V = -3; the implied cluster mass function has a turnover for masses less than a few hundred solar masses. The color-integrated magnitude diagram of clusters shows three significant features: (1) a group of very red, luminous objects: the globular clusters; (2) a wide range in color for the fainter clusters, representing a considerable range in age and reddening; and (3) a maximum density of clusters centered approximately at V = 21, B - V = 0.30, V - I = 0.50, where there are intermediate-age, intermediate-mass clusters with ages close to 500 million years and masses of about 2000 solar masses. We give a brief qualitative interpretation of the distribution of clusters in the CMDs in terms of their formation and destruction rates. Based on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.

  19. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    In addition, a combination of component failure and human error is often found at the root of spectacular events. The Rasmussen Report and the German Risk Assessment Study, in particular, show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if a thorough training of personnel is combined with an adequate design of the plant against accidents. Contrary to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  20. Quantifying behavioural determinants relating to health professional reporting of medication errors: a cross-sectional survey using the Theoretical Domains Framework.

    Science.gov (United States)

    Alqubaisi, Mai; Tonna, Antonella; Strath, Alison; Stewart, Derek

    2016-11-01

    The aims of this study were to quantify the behavioural determinants of health professional reporting of medication errors in the United Arab Emirates (UAE) and to explore any differences between respondents. A cross-sectional survey of patient-facing doctors, nurses and pharmacists within three major hospitals of Abu Dhabi, the UAE. An online questionnaire was developed based on the Theoretical Domains Framework (TDF, a framework of behaviour change theories). Principal component analysis (PCA) was used to identify components and internal reliability determined. Ethical approval was obtained from a UK university and all hospital ethics committees. Two hundred and ninety-four responses were received. Questionnaire items clustered into six components of knowledge and skills, feedback and support, action and impact, motivation, effort and emotions. Respondents generally gave positive responses for the knowledge and skills, feedback and support, and action and impact components. Responses were more neutral for the motivation and effort components. In terms of emotions, the component with the most negative scores, there were significant differences in terms of years registered as health professional (those registered longest most positive, p = 0.002) and age (older most positive, p Theoretical Domains Framework to quantify the behavioural determinants of health professional reporting of medication errors.
    • Questionnaire items relating to emotions surrounding reporting generated the most negative responses, with significant differences in terms of years registered as health professional (those registered longest most positive) and age (older most positive), with no differences for gender and health profession.
    • Interventions based on behaviour change techniques mapped to emotions should be prioritised for development.
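
    The component extraction and reliability check described above can be reproduced in outline as follows; the sketch assumes an (n respondents × n items) matrix of Likert scores, uses a plain SVD for the principal components, and the usual formula for Cronbach's alpha. The data are simulated and none of the item content comes from the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

def principal_components(items, n_components=6):
    """Leading principal components of standardized items, via SVD."""
    x = np.asarray(items, dtype=float)
    x = (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)
    _, s, vt = np.linalg.svd(x, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    return vt[:n_components], explained[:n_components]

scores = np.random.default_rng(0).integers(1, 6, size=(294, 12))  # simulated 5-point items
print(cronbach_alpha(scores))
print(principal_components(scores)[1])   # variance explained per component
```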

  1. The Isan Culture Maintenance and Revitalisation Programme's Multilingual Signage Attitude Survey: Phase II

    Science.gov (United States)

    Draper, John

    2016-01-01

    This article contextualises and presents to the academic community the full dataset of the Isan Culture Maintenance and Revitalisation Programme's (ICMRP) multilingual signage survey. The ICMRP is a four-year European Union co-sponsored project in Northeast Thailand. This article focuses on one aspect of the project, four surveys each of 1,500…

  2. National Sample Survey of Registered Nurses II. Status of Nurses: November 1980.

    Science.gov (United States)

    Bentley, Barbara S.; And Others

    This report provides data describing the nursing population as determined by the second national sample survey of registered nurses. A brief introduction is followed by a chapter that presents an overview of the survey methodology, including details on the sampling design, the response rate, and the statistical reliability. Chapter 3 provides a…

  3. What's for Lunch? II. A 1990 Survey of Options in the School Lunch Program.

    Science.gov (United States)

    Morris, Patricia McGrath; And Others

    This report provides information on the content of school lunches offered to middle school children in the public schools. A total of 163 middle schools in 42 states responded to the school lunch survey. Survey findings are given on: (1) the contents of the main course, vegetable and fruit offerings, desserts, and beverages; and (2) lunches…

  4. The HST/ACS Coma Cluster Survey : II. Data Description and Source Catalogs

    NARCIS (Netherlands)

    Hammer, Derek; Kleijn, Gijs Verdoes; Hoyos, Carlos; den Brok, Mark; Balcells, Marc; Ferguson, Henry C.; Goudfrooij, Paul; Carter, David; Guzman, Rafael; Peletier, Reynier F.; Smith, Russell J.; Graham, Alister W.; Trentham, Neil; Peng, Eric; Puzia, Thomas H.; Lucey, John R.; Jogee, Shardha; Aguerri, Alfonso L.; Batcheldor, Dan; Bridges, Terry J.; Chiboucas, Kristin; Davies, Jonathan I.; del Burgo, Carlos; Erwin, Peter; Hornschemeier, Ann; Hudson, Michael J.; Huxor, Avon; Jenkins, Leigh; Karick, Arna; Khosroshahi, Habib; Kourkchi, Ehsan; Komiyama, Yutaka; Lotz, Jennifer; Marzke, Ronald O.; Marinova, Irina; Matkovic, Ana; Merritt, David; Miller, Bryan W.; Miller, Neal A.; Mobasher, Bahram; Mouhcine, Mustapha; Okamura, Sadanori; Percival, Sue; Phillipps, Steven; Poggianti, Bianca M.; Price, James; Sharples, Ray M.; Tully, R. Brent; Valentijn, Edwin

    The Coma cluster, Abell 1656, was the target of an HST-ACS Treasury program designed for deep imaging in the F475W and F814W passbands. Although our survey was interrupted by the ACS instrument failure in early 2007, the partially completed survey still covers ~50% of the core high-density region in

  5. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty proposed a simple technique called the packet combining scheme, in which errors are corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails (i) when bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both of these cases have been addressed recently by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)
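
    A rough sketch of the basic packet-combining idea referred to above (not the PRPC or MPC variants): bit positions where two erroneous copies disagree are taken as candidate error locations, and the receiver searches over assignments at those positions until its error-detection check passes. The parity check used here is a stand-in for a real CRC.

```python
from itertools import product

def combine_packets(copy_a, copy_b, passes_check):
    """Correct a packet from two erroneous copies (lists of 0/1 bits)."""
    candidates = [i for i, (a, b) in enumerate(zip(copy_a, copy_b)) if a != b]
    for bits in product((0, 1), repeat=len(candidates)):
        trial = list(copy_a)
        for pos, bit in zip(candidates, bits):
            trial[pos] = bit
        if passes_check(trial):
            return trial
    return None   # fails when both copies are wrong at the same positions

even_parity = lambda bits: sum(bits) % 2 == 0    # toy stand-in for a CRC
print(combine_packets([1, 0, 1, 1], [1, 1, 1, 1], even_parity))   # -> [1, 1, 1, 1]
```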

  6. Predictors of BMI Vary along the BMI Range of German Adults – Results of the German National Nutrition Survey II

    Science.gov (United States)

    Moon, Kilson; Krems, Carolin; Heuer, Thorsten; Roth, Alexander; Hoffmann, Ingrid

    2017-01-01

    Objective: The objective of the study was to identify predictors of BMI in German adults by considering the BMI distribution and to determine whether the association between BMI and its predictors varies along the BMI distribution. Methods: The sample included 9,214 adults aged 18-80 years from the German National Nutrition Survey II (NVS II). Quantile regression analyses were conducted to examine the association between BMI and the following predictors: age, sports activities, socio-economic status (SES), healthy eating index-NVS II (HEI-NVS II), dietary knowledge, sleeping duration and energy intake, as well as status of smoking, partner relationship and self-reported health. Results: Age, SES, self-reported health status, sports activities and energy intake were the strongest predictors of BMI. The important outcome of this study is that the association between BMI and its predictors varies along the BMI distribution. In particular, energy intake, health status and SES were marginally associated with BMI in normal-weight subjects; these relationships became stronger in the range of overweight and were strongest in the range of obesity. Conclusions: Predictors of BMI and the strength of these associations vary across the BMI distribution in German adults. Consequently, to identify predictors of BMI, the entire BMI distribution should be considered. PMID:28219069
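
    The quantile-regression analysis described above can be sketched as follows with statsmodels: the same linear predictor is fitted at several quantiles of the BMI distribution, so the coefficients are allowed to change along that distribution. The predictors and the simulated data are placeholders, not the NVS II variables.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(18, 80, n)
energy = rng.normal(2200, 400, n)                 # kcal/day
sport = rng.integers(0, 2, n)                     # sports activity yes/no
bmi = 20 + 0.05 * age + 0.002 * energy - 1.5 * sport + rng.normal(0, 3, n)

X = sm.add_constant(np.column_stack([age, energy, sport]))
for q in (0.25, 0.50, 0.75, 0.90):                # lower to upper BMI range
    fit = sm.QuantReg(bmi, X).fit(q=q)
    print(q, np.round(fit.params, 3))
```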

  7. Environmental monitoring survey of oil and gas fields in Region II in 2009. Summary report; Miljoeovervaaking av olje- og gassfelt i Region II i 2009

    Energy Technology Data Exchange (ETDEWEB)

    2010-03-15

    The oil companies Statoil ASA, ExxonMobil Exploration and Production Norway AS, Total E&P Norge AS, Talisman Energy Norge AS and Marathon Petroleum Norge AS commissioned the Section of Applied Environmental Research at UNI RESEARCH AS to undertake the monitoring survey of Region II in 2009. Similar monitoring surveys in Region II have been carried out in 1996, 2000, 2003 and 2006. The survey in 2009 included in total 18 fields: Rev, Varg, Sigyn, Sleipner Vest, Sleipner OEst, Sleipner Alfa Nord, Glitne, Grane, Balder, Ringhorne, Jotun, Vale, Skirne, Byggve, Heimdal, Volve, Vilje and Alvheim. Sampling was conducted from the vessel MV Libas between May 18 and May 27. Samples were collected from a total of 137 sampling sites, of which 15 were regional sampling sites. Samples for chemical analysis were collected at all sites, whereas samples for benthos analysis were collected at 12 fields. As in previous surveys, Region II is divided into natural sub-regions: a shallow sub-region (77-96 m), a central sub-region (107-130 m) and a northern sub-region (115-119 m). The sediments of the shallow sub-region had relatively lower content of TOM and pelite and higher content of fine sand than the central and northern sub-regions. Calculated areas of contamination are shown for the sub-regions in Table 1.1. The fields Sigyn, Sleipner Alfa Nord, Glitne, Grane, Balder, Ringhorne, Jotun, Skirne, Byggve, Vilje and Alvheim showed no contamination of THC. At the other fields there were minor changes from 2006. The concentrations of barium increased in the central sub-region from 2006 to 2009, also at fields where no drilling had been undertaken during the last years. The same laboratory and methods were used during the last three regional investigations. The changes in barium concentrations may be due to high variability of barium concentrations in the sediments. This is supported by relatively large variations in average barium concentrations at the regional sampling sites in

  8. Environmental monitoring survey of oil and gas fields in Region II in 2009. Summary report; Miljoeovervaaking av olje- og gassfelt i Region II i 2009. Sammendragsrapport

    Energy Technology Data Exchange (ETDEWEB)

    2010-03-15

    The oil companies Statoil ASA, ExxonMobil Exploration and Production Norway AS, Total E&P Norge AS, Talisman Energy Norge AS and Marathon Petroleum Norge AS commissioned the Section of Applied Environmental Research at UNI RESEARCH AS to undertake the monitoring survey of Region II in 2009. Similar monitoring surveys in Region II have been carried out in 1996, 2000, 2003 and 2006. The survey in 2009 included in total 18 fields: Rev, Varg, Sigyn, Sleipner Vest, Sleipner Oest, Sleipner Alfa Nord, Glitne, Grane, Balder, Ringhorne, Jotun, Vale, Skirne, Byggve, Heimdal, Volve, Vilje and Alvheim. Sampling was conducted from the vessel MV Libas between May 18 and May 27. Samples were collected from a total of 137 sampling sites, of which 15 were regional sampling sites. Samples for chemical analysis were collected at all sites, whereas samples for benthos analysis were collected at 12 fields. As in previous surveys, Region II is divided into natural sub-regions: a shallow sub-region (77-96 m), a central sub-region (107-130 m) and a northern sub-region (115-119 m). The sediments of the shallow sub-region had relatively lower content of TOM and pelite and higher content of fine sand than the central and northern sub-regions. Calculated areas of contamination are shown for the sub-regions in Table 1.1. The fields Sigyn, Sleipner Alfa Nord, Glitne, Grane, Balder, Ringhorne, Jotun, Skirne, Byggve, Vilje and Alvheim showed no contamination of THC. At the other fields there were minor changes from 2006. The concentrations of barium increased in the central sub-region from 2006 to 2009, also at fields where no drilling had been undertaken during the last years. The same laboratory and methods were used during the last three regional investigations. The changes in barium concentrations may be due to high variability of barium concentrations in the sediments. This is supported by relatively large variations in average barium concentrations at the regional sampling sites in

  9. A survey on control schemes for distributed solar collector fields. Part II: Advanced control approaches

    Energy Technology Data Exchange (ETDEWEB)

    Camacho, E.F.; Rubio, F.R. [Universidad de Sevilla, Escuela Superior de Ingenieros, Departamento de Ingenieria de Sistemas y Automatica, Camino de Los Descubrimientos s/n, E-41092 Sevilla (Spain); Berenguel, M. [Universidad de Almeria, Departamento de Lenguajes y Computacion, Area de Ingenieria de Sistemas y Automatica, Carretera Sacramento s/n, E-04120 La Canada, Almeria (Spain); Valenzuela, L. [Plataforma Solar de Almeria - CIEMAT, Carretera Senes s/n, P.O. Box 22, E-04200 Tabernas (Almeria) (Spain)

    2007-10-15

    This article presents a survey of the different advanced automatic control techniques that have been applied to control the outlet temperature of solar plants with distributed collectors during the last 25 years. A classification of the modeling and control approaches described in the first part of this survey is used to explain the main features of each strategy. The treated strategies range from classical advanced control strategies to those with few industrial applications. (author)

  10. Linear feature detection algorithm for astronomical surveys - II. Defocusing effects on meteor tracks

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan; Rasmussen, Andrew; Ivezić, Željko

    2018-03-01

    Given the current limited knowledge of meteor plasma micro-physics and its interaction with the surrounding atmosphere and ionosphere, meteors are a highly interesting observational target for high-resolution wide-field astronomical surveys. Such surveys are capable of resolving the physical size of meteor plasma heads, but they produce large volumes of images that need to be automatically inspected for the possible existence of long linear features produced by meteors. Here, we show how large-aperture sky survey telescopes detect meteors as defocused tracks with a central brightness depression. We derive an analytic expression for a defocused point-source meteor track and use it to calculate brightness profiles of meteors modelled as uniform-brightness discs. We apply our modelling to meteor images as seen by the Sloan Digital Sky Survey and the Large Synoptic Survey Telescope. The expression is validated by Monte Carlo ray-tracing simulations of photons travelling through the atmosphere and the Large Synoptic Survey Telescope optics. We show that estimates of the meteor distance and size can be extracted from the measured full width at half-maximum and the strength of the central dip in the observed brightness profile. However, this extraction becomes difficult when the defocused meteor track is distorted by the atmospheric seeing or contaminated by a long-lasting glowing meteor trail. The full width at half-maximum of satellite tracks is distinctly narrower than meteor values, which enables removal of a possible confusion between satellites and meteors.
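
    The two observables discussed above, the full width at half-maximum and the strength of the central dip, can be read off a measured cross-track brightness profile as in the sketch below; the double-humped test profile is synthetic and stands in for a real defocused track.

```python
import numpy as np

def fwhm_and_dip(x, profile):
    """FWHM (outer half-maximum crossings) and fractional central-dip depth
    of a 1-D cross-track profile assumed centred on x = 0."""
    peak = profile.max()
    above = x[profile >= 0.5 * peak]
    fwhm = above.max() - above.min()
    dip = 1.0 - np.interp(0.0, x, profile) / peak
    return fwhm, dip

x = np.linspace(-10, 10, 2001)
profile = (np.exp(-0.5 * ((x - 2.5) / 2.0) ** 2)
           + np.exp(-0.5 * ((x + 2.5) / 2.0) ** 2))   # synthetic double hump
print(fwhm_and_dip(x, profile))
```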

  11. VizieR Online Data Catalog: The HII Region Discovery Survey (HRDS). II. (Anderson+, 2011)

    Science.gov (United States)

    Anderson, L. D.; Bania, T. M.; Balser, D. S.; Rood, R. T.

    2011-08-01

    Our observations were made with the Green Bank Telescope (GBT) 100m telescope from 2008 June through 2010 October. We assembled our target list from the following multi-frequency, large solid angle Galactic surveys: the NRAO Very Large Array (VLA) Galactic Plane Survey at 21cm HI and continuum (VGPS: Stil et al. 2006AJ....132.1158S), the NRAO VLA Sky Survey at 20cm continuum (NVSS: Condon et al. 1998, Cat. VIII/65), the Southern Galactic Plane Survey at 21cm HI and continuum (SGPS: Haverkorn et al. 2006ApJS..167..230H), the VLA MAGPIS at 20cm continuum (Helfand et al. 2006, Cat. J/AJ/131/2525), and the Spitzer 24um MIPSGAL survey (Carey et al. 2009PASP..121...76C). Our analysis here also uses 8.0um data from the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE: Benjamin et al. 2003PASP..115..953B), which were obtained with the Infrared Array Camera (IRAC) on the Spitzer Space Telescope. (4 data files).

  12. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  13. High School and Beyond. 1980 Senior Cohort. Third Follow-Up (1986). Data File User's Manual. Volume II: Survey Instruments. Contractor Report.

    Science.gov (United States)

    Sebring, Penny; And Others

    Survey instruments used in the collection of data for the High School and Beyond base year (1980) through the third follow-up surveys (1986) are provided as Volume II of a user's manual for the senior cohort data file. The complete user's manual is designed to provide the extensive documentation necessary for using the cohort data files. Copies of…

  14. THE PRISM MULTI-OBJECT SURVEY (PRIMUS). II. DATA REDUCTION AND REDSHIFT FITTING

    Energy Technology Data Exchange (ETDEWEB)

    Cool, Richard J. [MMT Observatory, Tucson, AZ 85721 (United States); Moustakas, John [Department of Physics, Siena College, 515 Loudon Rd., Loudonville, NY 12211 (United States); Blanton, Michael R.; Hogg, David W. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States); Burles, Scott M. [D.E. Shaw and Co. L.P, 20400 Stevens Creek Blvd., Suite 850, Cupertino, CA 95014 (United States); Coil, Alison L.; Aird, James; Mendez, Alexander J. [Department of Physics, Center for Astrophysics and Space Sciences, University of California, 9500 Gilman Dr., La Jolla, San Diego, CA 92093 (United States); Eisenstein, Daniel J. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St, MS 20, Cambridge, MA 02138 (United States); Wong, Kenneth C. [Steward Observatory, The University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721 (United States); Zhu, Guangtun [Center for Astrophysical Sciences, Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Bernstein, Rebecca A. [Department of Astronomy and Astrophysics, UCA/Lick Observatory, University of California, 1156 High Street, Santa Cruz, CA 95064 (United States); Bolton, Adam S. [Department of Physics and Astronomy, University of Utah, Salt Lake City, UT 84112 (United States)

    2013-04-20

    The PRIsm MUlti-object Survey (PRIMUS) is a spectroscopic galaxy redshift survey to z ~ 1 completed with a low-dispersion prism and slitmasks allowing for simultaneous observations of ~2500 objects over 0.18 deg^2. The final PRIMUS catalog includes ~130,000 robust redshifts over 9.1 deg^2. In this paper, we summarize the PRIMUS observational strategy and present the data reduction details used to measure redshifts, redshift precision, and survey completeness. The survey motivation, observational techniques, fields, target selection, slitmask design, and observations are presented in Coil et al. Comparisons to existing higher-resolution spectroscopic measurements show a typical precision of sigma_z/(1 + z) = 0.005. PRIMUS, both in area and number of redshifts, is the largest faint galaxy redshift survey completed to date and is allowing for precise measurements of the relationship between active galactic nuclei and their hosts, the effects of environment on galaxy evolution, and the build up of galactic systems over the latter half of cosmic history.
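
    The quoted precision sigma_z/(1 + z) is the normalized scatter of the redshift differences against higher-resolution spectroscopy; a common way to estimate it robustly is the normalized median absolute deviation, sketched below on simulated values.

```python
import numpy as np

def photoz_precision(z_survey, z_reference):
    """Robust sigma_z/(1+z): 1.4826 * MAD of (z_survey - z_ref)/(1 + z_ref)."""
    dz = (np.asarray(z_survey) - np.asarray(z_reference)) / (1.0 + np.asarray(z_reference))
    return 1.4826 * np.median(np.abs(dz - np.median(dz)))

rng = np.random.default_rng(1)
z_ref = rng.uniform(0.1, 1.0, 1000)
z_obs = z_ref + 0.005 * (1.0 + z_ref) * rng.standard_normal(1000)   # simulated scatter
print(photoz_precision(z_obs, z_ref))   # ~0.005
```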

  15. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  16. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  17. THE BOLOCAM GALACTIC PLANE SURVEY. II. CATALOG OF THE IMAGE DATA

    International Nuclear Information System (INIS)

    Rosolowsky, Erik; Dunham, Miranda K.; Evans, Neal J.; Harvey, Paul; Ginsburg, Adam; Bally, John; Battersby, Cara; Glenn, Jason; Stringfellow, Guy S.; Bradley, E. Todd; Aguirre, James; Cyganowski, Claudia; Dowell, Darren; Drosback, Meredith; Walawender, Josh; Williams, Jonathan P.

    2010-01-01

    We present a catalog of 8358 sources extracted from images produced by the Bolocam Galactic Plane Survey (BGPS). The BGPS is a survey of the millimeter dust continuum emission from the northern Galactic plane. The catalog sources are extracted using a custom algorithm, Bolocat, which was designed specifically to identify and characterize objects in the large-area maps generated from the Bolocam instrument. The catalog products are designed to facilitate follow-up observations of these relatively unstudied objects. The catalog is 98% complete from 0.4 Jy to 60 Jy over all object sizes for which the survey is sensitive. We find that the flux density distribution of the sources follows a power law with index -2.4 ± 0.1 and that the mean Galactic latitude of the sources is significantly below the midplane: ⟨b⟩ = -0°.095 ± 0°.001.

  18. Quality assurance and human error effects on the structural safety

    International Nuclear Information System (INIS)

    Bertero, R.; Lopez, R.; Sarrate, M.

    1991-01-01

    Statistical surveys show that the frequency of failure of structures is much larger than that expected by the codes. Evidence exists that human errors (especially during the design process) are the main cause of the difference between the failure probability admitted by codes and the reality. In this paper, the attenuation of human error effects using the tools of quality assurance is analyzed. In particular, the importance of the independent design review is highlighted, and different approaches are discussed. The experience from the Atucha II project, as well as the US and German practice on independent design review, is summarized. (Author)
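
    The value of an independent design review can be illustrated with a deliberately simple screening model: an error must both occur and escape every check to reach construction. The probabilities below are invented for illustration, and the independence of the checks is a strong simplifying assumption, not a result from the paper.

```python
def undetected_error_probability(p_error, p_self_check, p_independent_review):
    """Probability that a design error occurs and escapes both the designer's
    own checking and an independent review (checks assumed independent)."""
    return p_error * (1.0 - p_self_check) * (1.0 - p_independent_review)

print(undetected_error_probability(0.02, 0.7, 0.0))   # no independent review
print(undetected_error_probability(0.02, 0.7, 0.8))   # with independent review
```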

  19. 10C survey of radio sources at 15.7 GHz - II. First results

    Science.gov (United States)

    AMI Consortium; Davies, Matthew L.; Franzen, Thomas M. O.; Waldram, Elizabeth M.; Grainge, Keith J. B.; Hobson, Michael P.; Hurley-Walker, Natasha; Lasenby, Anthony; Olamaie, Malak; Pooley, Guy G.; Riley, Julia M.; Rodríguez-Gonzálvez, Carmen; Saunders, Richard D. E.; Scaife, Anna M. M.; Schammel, Michel P.; Scott, Paul F.; Shimwell, Timothy W.; Titterington, David J.; Zwart, Jonathan T. L.

    2011-08-01

    In a previous paper (Paper I), the observational, mapping and source-extraction techniques used for the Tenth Cambridge (10C) Survey of Radio Sources were described. Here, the first results from the survey, carried out using the Arcminute Microkelvin Imager Large Array (LA) at an observing frequency of 15.7 GHz, are presented. The survey fields cover an area of ≈27 deg² to a flux-density completeness of 1 mJy. Results for some deeper areas, covering ≈12 deg², wholly contained within the total areas and complete to 0.5 mJy, are also presented. The completeness for both areas is estimated to be at least 93 per cent. The 10C survey is the deepest radio survey of any significant extent (≳0.2 deg²) above 1.4 GHz. The 10C source catalogue contains 1897 entries and is available online. The source catalogue has been combined with that of the Ninth Cambridge Survey to calculate the 15.7-GHz source counts. A broken power law is found to provide a good parametrization of the differential count between 0.5 mJy and 1 Jy. The measured source count has been compared with that predicted by de Zotti et al. - the model is found to display good agreement with the data at the highest flux densities. However, over the entire flux-density range of the measured count (0.5 mJy to 1 Jy), the model is found to underpredict the integrated count by ≈30 per cent. Entries from the source catalogue have been matched with those contained in the catalogues of the NRAO VLA Sky Survey and the Faint Images of the Radio Sky at Twenty-cm survey (both of which have observing frequencies of 1.4 GHz). This matching provides evidence for a shift in the typical 1.4-to-15.7-GHz spectral index of the 15.7-GHz-selected source population with decreasing flux density towards sub-mJy levels - the spectra tend to become less steep. Automated methods for detecting extended sources, developed in Paper I, have been applied to the data; ≈5 per cent of the sources are found to be extended.
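
    A broken power law for the differential count dN/dS is straightforward to write down and integrate numerically; the sketch below does this for a generic two-slope form with a single break, purely as an illustration of the parametrization mentioned above. The normalization, break flux density and slopes are placeholder values, not the fitted 10C parameters.

```python
import numpy as np

def broken_power_law_counts(s_mjy, norm=50.0, s_break=2.8, alpha_low=1.9, alpha_high=2.3):
    """Generic two-slope differential source count dN/dS (arbitrary normalization).

    The normalization, break flux density (mJy) and slopes are placeholder values,
    NOT the fitted 10C parameters.
    """
    s = np.asarray(s_mjy, dtype=float)
    low = norm * (s / s_break) ** (-alpha_low)
    high = norm * (s / s_break) ** (-alpha_high)
    return np.where(s < s_break, low, high)  # continuous at s_break by construction

def integrated_counts(s_min_mjy, s_max_mjy=1000.0, n=4000):
    """Approximate N(>S_min) by trapezoidal integration of dN/dS on a log grid."""
    grid = np.logspace(np.log10(s_min_mjy), np.log10(s_max_mjy), n)
    dnds = broken_power_law_counts(grid)
    return np.sum(0.5 * (dnds[1:] + dnds[:-1]) * np.diff(grid))

if __name__ == "__main__":
    # Toy numbers: integrated count of sources brighter than 0.5 mJy per unit area.
    print(integrated_counts(0.5))
```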

  20. Designing future dark energy space missions. II. Photometric redshift of space weak lensing optimized surveys

    Science.gov (United States)

    Jouvel, S.; Kneib, J.-P.; Bernstein, G.; Ilbert, O.; Jelinsky, P.; Milliard, B.; Ealet, A.; Schimd, C.; Dahlen, T.; Arnouts, S.

    2011-08-01

    Context. With the discovery of the accelerated expansion of the universe, different observational probes have been proposed to investigate the presence of dark energy, including possible modifications to the gravitation laws, by accurately measuring the expansion of the Universe and the growth of structures. We need to optimize the return from future dark energy surveys to obtain the best results from these probes. Aims: A high-precision weak-lensing analysis requires not only an accurate measurement of galaxy shapes but also a precise and unbiased measurement of galaxy redshifts. The survey strategy has to be defined following both the photometric redshift and shape measurement accuracy. Methods: We define the key properties of the weak-lensing instrument and compute the effective PSF and the overall throughput and sensitivities. We then investigate the impact of the pixel scale on the sampling of the effective PSF, and place upper limits on the pixel scale. We then define the survey strategy, computing the survey area while accounting for both the Galactic absorption and the zodiacal light variation across the sky. Using the Le Phare photometric redshift code and a realistic galaxy mock catalog, we investigate the properties of different filter sets and the importance of the u-band photometry quality to optimize the photometric redshift and the dark energy figure of merit (FoM). Results: Using the predicted photometric redshift quality, simple shape measurement requirements, and a proper sky model, we explore what could be an optimal weak-lensing dark energy mission based on FoM calculation. We find that we can derive the most accurate photometric redshifts for the bulk of the faint galaxy population when filters have a resolution ℛ ~ 3.2. We show that an optimal mission would survey the sky through eight filters using two cameras (visible and near infrared). Assuming a five-year mission duration, a mirror size of 1.5 m and a 0.5 deg² FOV with a visible pixel
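
    Photometric redshift quality in studies like this is usually summarized by the normalized median absolute deviation of (z_phot - z_spec)/(1 + z_spec) and an outlier fraction; the sketch below computes both on a fabricated matched sample. The 0.15 outlier threshold and the toy scatter are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def photoz_quality(z_phot, z_spec, outlier_threshold=0.15):
    """Standard photo-z quality metrics on a matched photometric/spectroscopic sample.

    sigma_NMAD = 1.48 * median(|dz - median(dz)|), with dz = (z_phot - z_spec) / (1 + z_spec).
    The 0.15 outlier threshold is a common convention assumed here for illustration.
    """
    z_phot = np.asarray(z_phot, float)
    z_spec = np.asarray(z_spec, float)
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    sigma_nmad = 1.48 * np.median(np.abs(dz - np.median(dz)))
    outlier_fraction = np.mean(np.abs(dz) > outlier_threshold)
    return sigma_nmad, outlier_fraction

# Toy usage with fabricated redshifts (3% scatter in (1+z), no catastrophic outliers).
rng = np.random.default_rng(3)
z_spec = rng.uniform(0.2, 2.0, 10000)
z_phot = z_spec + 0.03 * (1.0 + z_spec) * rng.standard_normal(z_spec.size)
print(photoz_quality(z_phot, z_spec))
```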

  1. Seroepidemiological Survey of HTLV-I/II in Blood Donors of Mazandaran in 1999.

    OpenAIRE

    N. Tabarestani; R. F. Hosseini; A. Ajami

    2000-01-01

    Summary. Background and purpose: HTLV-I/II viruses of the Retroviridae family are known to be the causes of various diseases. They are transmitted by blood transfusion, sexual contact and the breast milk of infected mothers. These viral infections are endemic in certain regions, so epidemiological studies appear to be necessary in the country. Blood donors from different transfusion centers were investigated in a pilot study. Materials and Methods: In this descriptive study, blood samples of 180...

  2. The diesel exhaust in miners study: II. Exposure monitoring surveys and development of exposure groups.

    NARCIS (Netherlands)

    Coble, J.B.; Stewart, P.A.; Vermeulen, R.; Yereb, D.; Stanevich, R.; Blair, A.; Silverman, D.T.; Attfield, M.

    2010-01-01

    Air monitoring surveys were conducted between 1998 and 2001 at seven non-metal mining facilities to assess exposure to respirable elemental carbon (REC), a component of diesel exhaust (DE), for an epidemiologic study of miners exposed to DE. Personal exposure measurements were taken on workers in a

  3. Linking a Medical User Survey to Management for Library Effectiveness: II, A Checkland Soft Systems Study.

    Science.gov (United States)

    Brember, V. L.

    1985-01-01

    Presents Checkland's soft systems methodology, discusses it in terms of the systems approach, and illustrates how it was used to relate evidence of user survey to practical problems of library management. Difficulties in using methodology are described and implications for library management and information science research are presented. (8…

  4. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society, together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, has developed a new error grid, called the surveillance error grid (SEG), as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  5. AKARI INFRARED CAMERA SURVEY OF THE LARGE MAGELLANIC CLOUD. II. THE NEAR-INFRARED SPECTROSCOPIC CATALOG

    International Nuclear Information System (INIS)

    Shimonishi, Takashi; Onaka, Takashi; Kato, Daisuke; Sakon, Itsuki; Ita, Yoshifusa; Kawamura, Akiko; Kaneda, Hidehiro

    2013-01-01

    We performed a near-infrared spectroscopic survey toward an area of ∼10 deg² of the Large Magellanic Cloud (LMC) with the infrared satellite AKARI. Observations were carried out as part of the AKARI Large-area Survey of the Large Magellanic Cloud (LSLMC). The slitless multi-object spectroscopic capability of the AKARI/IRC enabled us to obtain low-resolution (R ∼ 20) spectra in 2-5 μm for a large number of point sources in the LMC. As a result of the survey, we extracted about 2000 infrared spectra of point sources. The data are organized as a near-infrared spectroscopic catalog. The catalog includes various infrared objects such as young stellar objects (YSOs), asymptotic giant branch (AGB) stars, supergiants, and so on. It is shown that 97% of the catalog sources have corresponding photometric data in the wavelength range from 1.2 to 11 μm, and 67% of the sources also have photometric data up to 24 μm. The catalog allows us to investigate near-infrared spectral features of sources by comparison with their infrared spectral energy distributions. In addition, it is estimated that about 10% of the catalog sources are observed at more than two different epochs. This enables us to study the spectroscopic variability of sources by using the present catalog. Initial results of source classifications for the LSLMC samples are presented. We classified 659 LSLMC spectra based on their near-infrared spectral features by visual inspection. As a result, it is shown that the present catalog includes 7 YSOs, 160 C-rich AGBs, 8 C-rich AGB candidates, 85 O-rich AGBs, 122 blue and yellow supergiants, 150 red supergiants, and 128 unclassified sources. Distributions of the classified sources on the color-color and color-magnitude diagrams are discussed in the text. Continuous wavelength coverage and high spectroscopic sensitivity in 2-5 μm can only be achieved by space observations. This is an unprecedented large-scale spectroscopic survey toward the LMC in the near-infrared.

  6. VizieR Online Data Catalog: LMC NIR Synoptic Survey. II. Wesenheit relations (Bhardwaj+, 2016)

    Science.gov (United States)

    Bhardwaj, A.; Kanbur, S. M.; Macri, L. M.; Singh, H. P.; Ngeow, C.-C.; Wagner-Kaiser, R.; Sarajedini, A.

    2018-03-01

    We make use of NIR mean magnitudes for 775 fundamental-mode and 474 first-overtone Cepheids in the LMC from Macri et al. 2015, J/AJ/149/117 (Paper I). These magnitudes are based on observations from a synoptic survey (average of 16 epochs) of the central region of the LMC using the CPAPIR camera at the Cerro Tololo Interamerican Observatory 1.5-m telescope between 2006 and 2007. Most of these Cepheid variables were previously studied in the optical V and I bands by the third phase of the Optical Gravitational Lensing Experiment (OGLE-III) survey (Soszynski et al. 2008, J/AcA/58/163; Ulaczyk et al. 2013, J/AcA/63/159). The V and I band mean magnitudes are also compiled in Paper I. The calibration into the 2MASS photometric system, extinction corrections, and the adopted reddening law are discussed in detail in Paper I. (4 data files).

  7. Visual servoing in medical robotics: a survey. Part II: tomographic imaging modalities--techniques and applications.

    Science.gov (United States)

    Azizian, Mahdi; Najmaei, Nima; Khoshnam, Mahta; Patel, Rajni

    2015-03-01

    Intraoperative application of tomographic imaging techniques provides a means of visual servoing for objects beneath the surface of organs. The focus of this survey is on therapeutic and diagnostic medical applications where tomographic imaging is used in visual servoing. To this end, a comprehensive search of the electronic databases was completed for the period 2000-2013. Existing techniques and products are categorized and studied, based on the imaging modality and their medical applications. This part complements Part I of the survey, which covers visual servoing techniques using endoscopic imaging and direct vision. The main challenges in using visual servoing based on tomographic images have been identified. 'Supervised automation of medical robotics' is found to be a major trend in this field and ultrasound is the most commonly used tomographic modality for visual servoing. Copyright © 2014 John Wiley & Sons, Ltd.

  8. The infrared medium-deep survey. II. How to trigger radio AGNs? Hints from their environments

    Energy Technology Data Exchange (ETDEWEB)

    Karouzos, Marios; Im, Myungshin; Kim, Jae-Woo; Lee, Seong-Kook; Jeon, Yiseul; Choi, Changsu; Hong, Jueun; Hyun, Minhee; Jun, Hyunsung David; Kim, Dohyeong; Kim, Yongjung; Kim, Ji Hoon; Kim, Duho; Park, Won-Kee; Taak, Yoon Chan; Yoon, Yongmin [CEOU—Astronomy Program, Department of Physics and Astronomy, Seoul National University, Gwanak-gu, Seoul 151-742 (Korea, Republic of); Chapman, Scott [Department of Physics and Atmospheric Science, Dalhousie University, Halifax, Nova Scotia (Canada); Pak, Soojong [School of Space Research, Kyung Hee University, Yongin-si, Gyeonggi-do 446-701 (Korea, Republic of); Edge, Alastair, E-mail: mkarouzos@astro.snu.ac.kr [Department of Physics, University of Durham, South Road, Durham, DH1 3LE (United Kingdom)

    2014-12-10

    Activity at the centers of galaxies, during which the central supermassive black hole is accreting material, is nowadays accepted to be rather ubiquitous and most probably a phase of every galaxy's evolution. It has been suggested that galactic mergers and interactions may be the culprits behind the triggering of nuclear activity. We use near-infrared data from the new Infrared Medium-Deep Survey and the Deep eXtragalactic Survey of the VIMOS-SA22 field and radio data at 1.4 GHz from the FIRST survey and a deep Very Large Array survey to study the environments of radio active galactic nuclei (AGNs) over an area of ∼25 deg² and down to a radio flux limit of 0.1 mJy and a J-band magnitude of 23 mag AB. Radio AGNs are predominantly found in environments similar to those of control galaxies at similar redshift, J-band magnitude, and (M_u − M_r) rest-frame color. However, a subpopulation of radio AGNs is found in environments up to 100 times denser than their control sources. We thus preclude merging as the dominant triggering mechanism of radio AGNs. By fitting the broadband spectral energy distribution of radio AGNs in the least and most dense environments, we find that those in the least dense environments show higher radio-loudness, higher star formation efficiencies, and higher accretion rates, typical of the so-called high-excitation radio AGNs. These differences tend to disappear at z > 1. We interpret our results in terms of a different triggering mechanism for these sources that is driven by mass loss through winds of young stars created during the observed ongoing star formation.

  9. Test program element II blanket and shield thermal-hydraulic and thermomechanical testing, experimental facility survey

    International Nuclear Information System (INIS)

    Ware, A.G.; Longhurst, G.R.

    1981-12-01

    This report presents results of a survey conducted by EG and G Idaho to determine facilities available to conduct thermal-hydraulic and thermomechanical testing for the Department of Energy Office of Fusion Energy First Wall/Blanket/Shield Engineering Test Program. In response to EG and G queries, twelve organizations (in addition to EG and G and General Atomic) expressed interest in providing experimental facilities. A variety of methods of supplying heat is available.

  10. Mammalian Survey Techniques for Level II Natural Resource Inventories on Corps of Engineers Projects (Part 1)

    Science.gov (United States)

    2009-07-01

    Sheep (Ovis dalli dalli), mountain goats (Oreamnos americanus) and other hoofed animals are often surveyed using aerial counts from fixed-wing aircraft.

  11. Test program element II blanket and shield thermal-hydraulic and thermomechanical testing, experimental facility survey

    Energy Technology Data Exchange (ETDEWEB)

    Ware, A.G.; Longhurst, G.R.

    1981-12-01

    This report presents results of a survey conducted by EG and G Idaho to determine facilities available to conduct thermal-hydraulic and thermomechanical testing for the Department of Energy Office of Fusion Energy First Wall/Blanket/Shield Engineering Test Program. In response to EG and G queries, twelve organizations (in addition to EG and G and General Atomic) expressed interest in providing experimental facilities. A variety of methods of supplying heat is available.

  12. A selection of hot subluminous stars in the GALEX survey - II. Subdwarf atmospheric parameters

    Czech Academy of Sciences Publication Activity Database

    Németh, Péter; Kawka, Adela; Vennes, Stephane

    2012-01-01

    Vol. 427, No. 3 (2012), pp. 2180-2211 ISSN 0035-8711 R&D Projects: GA AV ČR(CZ) IAA300030908; GA AV ČR IAA301630901; GA ČR GAP209/10/0967 Institutional support: RVO:67985815 Keywords: catalogues * surveys * abundance Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 5.521, year: 2012

  13. A SUCCESSFUL BROADBAND SURVEY FOR GIANT Ly{alpha} NEBULAE. II. SPECTROSCOPIC CONFIRMATION

    Energy Technology Data Exchange (ETDEWEB)

    Prescott, Moire K. M. [Department of Physics, University of California, Broida Hall, Mail Code 9530, Santa Barbara, CA 93106 (United States); Dey, Arjun; Jannuzi, Buell T., E-mail: mkpresco@physics.ucsb.edu [National Optical Astronomy Observatory, 950 North Cherry Avenue, Tucson, AZ 85719 (United States)

    2013-01-01

    Using a systematic broadband search technique, we have carried out a survey for large Lyα nebulae (or Lyα "blobs") at 2 ≲ z ≲ 3 within 8.5 deg² of the NOAO Deep Wide-Field Survey Boötes field, corresponding to a total survey comoving volume of ≈10⁸ h₇₀⁻³ Mpc³. Here, we present our spectroscopic observations of candidate giant Lyα nebulae. Of 26 candidates targeted, 5 were confirmed to have Lyα emission at 1.7 ≲ z ≲ 2.7, 4 of which were new discoveries. The confirmed Lyα nebulae span a range of Lyα equivalent widths, colors, sizes, and line ratios, and most show spatially extended continuum emission. The remaining candidates did not reveal any strong emission lines, but instead exhibit featureless, diffuse, blue continuum spectra. Their nature remains mysterious, but we speculate that some of these might be Lyα nebulae lying within the redshift desert (i.e., 1.2 ≲ z ≲ 1.6). Our spectroscopic follow-up confirms the power of using deep broadband imaging to search for the bright end of the Lyα nebula population across enormous comoving volumes.

  14. Surface roughness considerations for atmospheric correction of ocean color sensors. I - The Rayleigh-scattering component. II - Error in the retrieved water-leaving radiance

    Science.gov (United States)

    Gordon, Howard R.; Wang, Menghua

    1992-01-01

    The first step in the Coastal Zone Color Scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering (RS) contribution, L_r, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, L_r is computed by assuming that the ocean surface is flat. Calculations of the radiance leaving an RS atmosphere overlying a rough Fresnel-reflecting ocean are presented to evaluate the radiance error caused by the flat-ocean assumption. Simulations are carried out to evaluate the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct sun glitter, it is concluded that the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness. This suggests that, in refining algorithms for future sensors, more effort should be focused on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.
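
    The role of the Rayleigh term in the correction can be made concrete with the usual single-scattering decomposition L_t = L_r + L_a + t L_w, so that any bias in the computed L_r (for example, from assuming a flat ocean) maps directly into an error in the retrieved water-leaving radiance. The sketch below is that decomposition only; the numerical values are invented for illustration and are not CZCS calibration numbers.

```python
def retrieve_water_leaving_radiance(l_total, l_rayleigh, l_aerosol, t_diffuse):
    """Single-scattering style retrieval: L_t = L_r + L_a + t * L_w  =>  L_w = (L_t - L_r - L_a) / t.

    Any bias in l_rayleigh (e.g., from assuming a flat ocean surface) propagates
    directly into the retrieved water-leaving radiance, amplified by 1 / t_diffuse.
    """
    return (l_total - l_rayleigh - l_aerosol) / t_diffuse

if __name__ == "__main__":
    # Toy illustration (made-up radiances): a 'flat ocean' vs 'rough ocean' Rayleigh term.
    l_t, l_a, t = 2.50, 0.40, 0.85
    for l_r in (1.80, 1.83):
        print(f"L_r = {l_r:.2f}  ->  L_w = {retrieve_water_leaving_radiance(l_t, l_r, l_a, t):.4f}")
```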

  15. THE GALACTIC O-STAR SPECTROSCOPIC SURVEY (GOSSS). II. BRIGHT SOUTHERN STARS

    International Nuclear Information System (INIS)

    Sota, A.; Apellániz, J. Maíz; Alfaro, E. J.; Morrell, N. I.; Barbá, R. H.; Arias, J. I.; Walborn, N. R.; Gamen, R. C.

    2014-01-01

    We present the second installment of GOSSS, a massive spectroscopic survey of Galactic O stars, based on new homogeneous, high signal-to-noise ratio, R ∼ 2500 digital observations from both hemispheres selected from the Galactic O-Star Catalog (GOSC). In this paper we include bright stars and other objects drawn mostly from the first version of GOSC, all of them south of δ = –20°, for a total number of 258 O stars. We also revise the northern sample of Paper I to provide the full list of spectroscopically classified Galactic O stars complete to B = 8, bringing the total number of published GOSSS stars to 448. Extensive sequences of exceptional objects are given, including the early Of/WN, O Iafpe, Ofc, ON/OC, Onfp, Of?p, and Oe types, as well as double/triple-lined spectroscopic binaries. The new spectral subtype O9.2 is also discussed. The magnitude and spatial distributions of the observed sample are analyzed. We also present new results from OWN, a multi-epoch high-resolution spectroscopic survey coordinated with GOSSS that is assembling the largest sample of Galactic spectroscopic massive binaries ever attained. The OWN data combined with additional information on spectroscopic and visual binaries from the literature indicate that only a very small fraction (if any) of the stars with masses above 15-20 M☉ are born as single systems. In the future we will publish the rest of the GOSSS survey, which is expected to include over 1000 Galactic O stars.

  16. The Dubai Community Psychiatric Survey: II. Development of the Socio-cultural Change Questionnaire.

    Science.gov (United States)

    Bebbington, P; Ghubash, R; Hamdi, E

    1993-04-01

    The Dubai Community Psychiatric Survey was carried out to assess the effect of very rapid social change on the mental health of women in Dubai, one of the United Arab Emirates. In order to measure social change at an individual level, we developed a questionnaire covering behaviour and attitudes in a wide range of situations, the Socio-cultural Change Questionnaire (ScCQ). In this paper we give an account of the considerations that determined the form of the ScCQ, its structural characteristics, and its validity.

  17. The Chandra planetary nebula survey (CHANPLANS). II. X-ray emission from compact planetary nebulae

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, M.; Kastner, J. H. [Center for Imaging Science and Laboratory for Multiwavelength Astrophysics, Rochester Institute of Technology, 54 Lomb Memorial Drive, Rochester, NY 14623 (United States); Montez, R. Jr. [Department of Physics and Astronomy, Vanderbilt University, Nashville, TN (United States); Balick, B. [Department of Astronomy, University of Washington, Seattle, WA (United States); Frew, D. J.; De Marco, O.; Parker, Q. A. [Department of Physics and Astronomy and Macquarie Research Centre for Astronomy, Astrophysics and Astrophotonics, Macquarie University, Sydney, NSW 2109 (Australia); Jones, D. [Departamento de Física, Universidad de Atacama, Copayapu 485, Copiapó (Chile); Miszalski, B. [South African Astronomical Observatory, P.O. Box 9, Observatory, 7935 (South Africa); Sahai, R. [Jet Propulsion Laboratory, MS 183-900, California Institute of Technology, Pasadena, CA 91109 (United States); Blackman, E.; Frank, A. [Department of Physics and Astronomy, University of Rochester, Rochester, NY (United States); Chu, Y.-H. [Department of Astronomy, University of Illinois at Urbana-Champaign, Urbana, IL (United States); Guerrero, M. A. [Instituto de Astrofísica de Andalucía, IAA-CSIC, Glorieta de la Astronomía s/n, Granada, E-18008 (Spain); Lopez, J. A. [Instituto de Astronomía, Universidad Nacional Autonoma de Mexico, Campus Ensenada, Apdo. Postal 22860, Ensenada, B. C. (Mexico); Zijlstra, A. [School of Physics and Astronomy, University of Manchester, Manchester M13 9PL (United Kingdom); Bujarrabal, V. [Instituto de Astrofísica de Canarias, E-38200 La Laguna, Tenerife (Spain); Corradi, R. L. M. [Departamento de Astrofísica, Universidad de La Laguna, E-38206 La Laguna, Tenerife (Spain); Nordhaus, J. [NSF Astronomy and Astrophysics Fellow, Center for Computational Relativity and Gravitation, Rochester Institute of Technology, Rochester, NY 14623 (United States); and others

    2014-10-20

    We present results from the most recent set of observations obtained as part of the Chandra X-ray observatory Planetary Nebula Survey (CHANPLANS), the first comprehensive X-ray survey of planetary nebulae (PNe) in the solar neighborhood (i.e., within ∼1.5 kpc of the Sun). The survey is designed to place constraints on the frequency of appearance and range of X-ray spectral characteristics of X-ray-emitting PN central stars and the evolutionary timescales of wind-shock-heated bubbles within PNe. CHANPLANS began with a combined Cycle 12 and archive Chandra survey of 35 PNe. CHANPLANS continued via a Chandra Cycle 14 Large Program which targeted all (24) remaining known compact (R_neb ≲ 0.4 pc), young PNe that lie within ∼1.5 kpc. Results from these Cycle 14 observations include first-time X-ray detections of hot bubbles within NGC 1501, 3918, 6153, and 6369, and point sources in HbDs 1, NGC 6337, and Sp 1. The addition of the Cycle 14 results brings the overall CHANPLANS diffuse X-ray detection rate to ∼27% and the point source detection rate to ∼36%. It has become clearer that diffuse X-ray emission is associated with young (≲5 × 10³ yr), and likewise compact (R_neb ≲ 0.15 pc), PNe with closed structures and high central electron densities (n_e ≳ 1000 cm⁻³), and is rarely associated with PNe that show H₂ emission and/or pronounced butterfly structures. Hb 5 is one such exception of a PN with a butterfly structure that hosts diffuse X-ray emission. Additionally, two of the five new diffuse X-ray detections (NGC 1501 and NGC 6369) host [WR]-type central stars, supporting the hypothesis that PNe with central stars of [WR]-type are likely to display diffuse X-ray emission.

  18. ONLINE PORNOGRAPHY AND SEXUALITY: SOME RESULTS OF EU KIDS ONLINE SURVEY II IN THE ROMANIAN CASE

    Directory of Open Access Journals (Sweden)

    VALENTINA MARINESCU

    2014-05-01

    Full Text Available The present article analyzes the exposure of Romanian children and teens to sexually explicit messages and the so-called "sexting" activities they perform in the online environment. The main research question to which we try to find some answers is: are young people more exposed to risks because they view sexually explicit content online and send sexual messages to others? Our results validate the risk-migration hypothesis: the blurring of boundaries between the online and offline worlds enables risk to migrate from the real world to the internet and the reverse. At the same time, the EU Kids Online II data validate the vulnerability hypothesis, according to which the harm declared by the children following exposure to sexually explicit images and the receipt of sexual messages is the result of their socio-demographic vulnerabilities.

  19. Brazil Geological Basic Survey Program - Ponte Nova - Sheet SF.23-X-B-II - Minas Gerais State

    International Nuclear Information System (INIS)

    Brandalise, L.A.

    1991-01-01

    The present report refers to the systematic geological mapping of the Ponte Nova Sheet (SF.23-X-B-II) at the 1:100,000 scale. The Sheet covers the Zona da Mata region, Minas Gerais State, in the Mantiqueira Geotectonic Province, east of the Sao Francisco Geotectonic Province, as defined in the project. The high-grade to low-amphibolite-facies metamorphic rocks occurring in the area were affected by a marked low-angle shearing transposition and show diaphthoritic effects. Archaean to Proterozoic ages are attributed to the metamorphites, mostly by comparison with similar types in the region. Three deformation events were registered in the region. An analysis of the crustal evolution pattern, based on geological mapping, laboratory analyses, gravimetric and airborne magnetometry data, and available geochronologic data, is given in Chapter 6, Part II, of the text. Major-element oxides, trace elements, and rare-earth elements were analyzed to establish parameters for elucidating the environment of the rocks. A geochemical survey was carried out based on pan-concentrate and stream-sediment samples distributed throughout the Sheet. Gneiss quarries (industrial rocks) in full operation have been registered, as well as sand and clay deposits used in the construction industry. The metallogenetic/provisional analysis points out the area as a favorable one for gold prospecting. (author)

  20. Deep Chandra Survey of the Small Magellanic Cloud. II. Timing Analysis of X-Ray Pulsars

    Energy Technology Data Exchange (ETDEWEB)

    Hong, JaeSub; Antoniou, Vallia; Zezas, Andreas; Drake, Jeremy J.; Plucinsky, Paul P. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States); Haberl, Frank [Max-Planck-Institut für extraterrestrische Physik, Giessenbach straße, D-85748 Garching (Germany); Sasaki, Manami [Friedrich-Alexander-Universität Erlangen-Nürnberg, Sternwartstrasse 7, 96049 Bamberg (Germany); Laycock, Silas, E-mail: jaesub@head.cfa.harvard.edu [Department of Physics, University of Massachusetts Lowell, MA 01854 (United States)

    2017-09-20

    We report the timing analysis results of X-ray pulsars from a recent deep Chandra survey of the Small Magellanic Cloud (SMC). We analyzed a total exposure of 1.4 Ms from 31 observations over a 1.2 deg² region in the SMC under a Chandra X-ray Visionary Program. Using the Lomb–Scargle and epoch-folding techniques, we detected periodic modulations from 20 pulsars and a new candidate pulsar. The survey also covered 11 other pulsars with no clear sign of periodic modulation. The 0.5–8 keV X-ray luminosity (L_X) of the pulsars ranges from 10³⁴ to 10³⁷ erg s⁻¹ at 60 kpc. All of the Chandra sources with L_X ≳ 4 × 10³⁵ erg s⁻¹ exhibit X-ray pulsations. The X-ray spectra of the SMC pulsars (and high-mass X-ray binaries) are in general harder than those of the SMC field population. All but SXP 8.02 can be fitted by an absorbed power-law model with a photon index of Γ ≲ 1.5. The X-ray spectrum of the known magnetar SXP 8.02 is better fitted with a two-temperature blackbody model. Newly measured pulsation periods of SXP 51.0, SXP 214, and SXP 701 are significantly different from the previous XMM-Newton and RXTE measurements. This survey provides a rich data set for energy-dependent pulse profile modeling. Six pulsars show an almost eclipse-like dip in the pulse profile. Phase-resolved spectral analysis reveals diverse spectral variations during pulsation cycles: e.g., for an absorbed power-law model, some exhibit an (anti)-correlation between absorption and X-ray flux, while others show more intrinsic spectral variation (i.e., changes in photon indices).
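
    As a rough illustration of the period-search step, the sketch below runs an astropy Lomb-Scargle periodogram over a fabricated, unevenly sampled light curve and folds it on the recovered period. Real pulsar timing would add epoch folding with proper significance tests, barycentric corrections and red-noise handling, none of which are shown; the injected 50 s period and all numbers are toy values.

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)

# Toy unevenly sampled light curve: a 50 s pulsation plus white noise (illustrative only).
t = np.sort(rng.uniform(0.0, 5000.0, 800))          # times in seconds
true_period = 50.0
rate = 10.0 + 2.0 * np.sin(2.0 * np.pi * t / true_period) + rng.normal(0.0, 1.0, t.size)

# Lomb-Scargle periodogram over a frequency grid bracketing the expected period range (5-500 s).
frequency, power = LombScargle(t, rate).autopower(minimum_frequency=1.0 / 500.0,
                                                  maximum_frequency=1.0 / 5.0)
best_period = 1.0 / frequency[np.argmax(power)]
print(f"best period: {best_period:.2f} s (injected {true_period} s)")

# Fold the light curve on the recovered period for a quick pulse-profile check.
phase = (t / best_period) % 1.0
summed, edges = np.histogram(phase, bins=20, weights=rate)
counts, _ = np.histogram(phase, bins=20)
print(summed / np.maximum(counts, 1))   # mean rate per phase bin
```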

  1. THE HST/ACS COMA CLUSTER SURVEY. II. DATA DESCRIPTION AND SOURCE CATALOGS

    International Nuclear Information System (INIS)

    Hammer, Derek; Verdoes Kleijn, Gijs; Den Brok, Mark; Peletier, Reynier F.; Hoyos, Carlos; Balcells, Marc; Aguerri, Alfonso L.; Ferguson, Henry C.; Goudfrooij, Paul; Carter, David; Guzman, Rafael; Smith, Russell J.; Lucey, John R.; Graham, Alister W.; Trentham, Neil; Peng, Eric; Puzia, Thomas H.; Jogee, Shardha; Batcheldor, Dan; Bridges, Terry J.

    2010-01-01

    The Coma cluster, Abell 1656, was the target of an HST-ACS Treasury program designed for deep imaging in the F475W and F814W passbands. Although our survey was interrupted by the ACS instrument failure in early 2007, the partially completed survey still covers ∼50% of the core high-density region in Coma. Observations were performed for 25 fields that extend over a wide range of cluster-centric radii (∼1.75 Mpc or 1°) with a total coverage area of 274 arcmin². The majority of the fields are located near the core region of Coma (19/25 pointings) with six additional fields in the southwest region of the cluster. In this paper, we present reprocessed images and SEXTRACTOR source catalogs for our survey fields, including a detailed description of the methodology used for object detection and photometry, the subtraction of bright galaxies to measure faint underlying objects, and the use of simulations to assess the photometric accuracy and completeness of our catalogs. We also use simulations to perform aperture corrections for the SEXTRACTOR Kron magnitudes based only on the measured source flux and its half-light radius. We have performed photometry for ∼73,000 unique objects; approximately one-half of our detections are brighter than the 10σ point-source detection limit at F814W = 25.8 mag (AB). The slight majority of objects (60%) are unresolved or only marginally resolved by ACS. We estimate that Coma members are 5%-10% of all source detections, which consist of a large population of unresolved compact sources (primarily globular clusters but also ultra-compact dwarf galaxies) and a wide variety of extended galaxies from a cD galaxy to dwarf low surface brightness galaxies. The red sequence of Coma member galaxies has a color-magnitude relation with a constant slope and dispersion over 9 mag (-21 < F814W < -13). The initial data release for the HST-ACS Coma Treasury program was made available to the public in 2008 August. The images and catalogs described

  2. Civilians in World War II and DSM-IV mental disorders: Results from the World Mental Health Survey Initiative

    Science.gov (United States)

    Frounfelker, Rochelle; Gilman, Stephen E.; Betancourt, Theresa S.; Aguilar-Gaxiola, Sergio; Alonso, Jordi; Bromet, Evelyn J.; Bruffaerts, Ronny; de Girolamo, Giovanni; Gluzman, Semyon; Gureje, Oye; Karam, Elie G.; Lee, Sing; Lépine, Jean-Pierre; Ono, Yutaka; Pennell, Beth-Ellen; Popovici, Daniela G.; Have, Margreet ten; Kessler, Ronald C.

    2018-01-01

    Purpose: Understanding the effects of war on mental disorders is important for developing effective post-conflict recovery policies and programs. The current study uses cross-sectional, retrospectively reported data collected as part of the World Mental Health (WMH) Survey Initiative to examine the associations of being a civilian in a war zone/region of terror in World War II with a range of DSM-IV mental disorders. Methods: Adults (n = 3,370) who lived in countries directly involved in World War II in Europe and Japan were administered structured diagnostic interviews of lifetime DSM-IV mental disorders. The associations of war-related traumas with subsequent disorder onset-persistence were assessed with discrete-time survival analysis (lifetime prevalence) and conditional logistic regression (12-month prevalence). Results: Respondents who were civilians in a war zone/region of terror had higher lifetime risks than other respondents of major depressive disorder (MDD; OR 1.5, 95% CI 1.1, 1.9) and anxiety disorder (OR 1.5, 95% CI 1.1, 2.0). The association of war exposure with MDD was strongest in the early years after the war, whereas the association with anxiety disorders increased over time. Among lifetime cases, war exposure was associated with lower past year risk of anxiety disorders (OR 0.4, 95% CI 0.2, 0.7). Conclusions: Exposure to war in World War II was associated with higher lifetime risk of some mental disorders. Whether comparable patterns will be found among civilians living through more recent wars remains to be seen, but should be recognized as a possibility by those projecting future needs for treatment of mental disorders. PMID:29119266

  3. Civilians in World War II and DSM-IV mental disorders: results from the World Mental Health Survey Initiative.

    Science.gov (United States)

    Frounfelker, Rochelle; Gilman, Stephen E; Betancourt, Theresa S; Aguilar-Gaxiola, Sergio; Alonso, Jordi; Bromet, Evelyn J; Bruffaerts, Ronny; de Girolamo, Giovanni; Gluzman, Semyon; Gureje, Oye; Karam, Elie G; Lee, Sing; Lépine, Jean-Pierre; Ono, Yutaka; Pennell, Beth-Ellen; Popovici, Daniela G; Ten Have, Margreet; Kessler, Ronald C

    2018-02-01

    Understanding the effects of war on mental disorders is important for developing effective post-conflict recovery policies and programs. The current study uses cross-sectional, retrospectively reported data collected as part of the World Mental Health (WMH) Survey Initiative to examine the associations of being a civilian in a war zone/region of terror in World War II with a range of DSM-IV mental disorders. Adults (n = 3370) who lived in countries directly involved in World War II in Europe and Japan were administered structured diagnostic interviews of lifetime DSM-IV mental disorders. The associations of war-related traumas with subsequent disorder onset-persistence were assessed with discrete-time survival analysis (lifetime prevalence) and conditional logistic regression (12-month prevalence). Respondents who were civilians in a war zone/region of terror had higher lifetime risks than other respondents of major depressive disorder (MDD; OR 1.5, 95% CI 1.1, 1.9) and anxiety disorder (OR 1.5, 95% CI 1.1, 2.0). The association of war exposure with MDD was strongest in the early years after the war, whereas the association with anxiety disorders increased over time. Among lifetime cases, war exposure was associated with lower past year risk of anxiety disorders (OR 0.4, 95% CI 0.2, 0.7). Exposure to war in World War II was associated with higher lifetime risk of some mental disorders. Whether comparable patterns will be found among civilians living through more recent wars remains to be seen, but should be recognized as a possibility by those projecting future needs for treatment of mental disorders.
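
    The discrete-time survival analysis used for the lifetime-prevalence estimates amounts to expanding each respondent into person-year records and fitting a logistic regression for onset in each year with exposure as a covariate. The sketch below shows that expansion and fit on a fabricated toy dataset; the variable names, rates and model form are illustrative assumptions, not the WMH data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Fabricated respondent-level data: exposure flag, age at interview, age at onset (NaN if never).
n = 500
exposed = rng.integers(0, 2, n)
age_interview = rng.integers(40, 80, n)
onset_age = np.where(rng.random(n) < 0.25 + 0.10 * exposed,
                     rng.integers(18, 40, n), np.nan)

# Expand to person-year records up to onset or censoring (discrete-time survival format).
rows = []
for i in range(n):
    last = int(onset_age[i]) if not np.isnan(onset_age[i]) else int(age_interview[i])
    for age in range(18, last + 1):
        event = int(not np.isnan(onset_age[i]) and age == int(onset_age[i]))
        rows.append((age, exposed[i], event))
pp = pd.DataFrame(rows, columns=["age", "exposed", "onset"])

# Logistic regression on the person-year file; exp(coef) approximates the onset odds ratio.
X = sm.add_constant(pp[["age", "exposed"]])
fit = sm.Logit(pp["onset"], X).fit(disp=0)
print("onset OR for exposure:", np.exp(fit.params["exposed"]))
```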

  4. New constraints on neutron star models of gamma-ray bursts. II - X-ray observations of three gamma-ray burst error boxes

    Science.gov (United States)

    Boer, M.; Hurley, K.; Pizzichini, G.; Gottardi, M.

    1991-01-01

    Exosat observations are presented for 3 gamma-ray-burst error boxes, one of which may be associated with an optical flash. No point sources were detected at the 3-sigma level. A comparison with Einstein data (Pizzichini et al., 1986) is made for the March 5b, 1979 source. The data are interpreted in the framework of neutron star models to derive upper limits for the neutron star surface temperatures, accretion rates, and surface densities of an accretion disk. Apart from the March 5b, 1979 source, consistency is found with each model.

  5. Survey of fish impingement at power plants in the United States. Volume II. Inland waters

    International Nuclear Information System (INIS)

    Freeman, R.F. III; Sharma, R.K.

    1977-03-01

    Impingement of fish at cooling-water intakes of 33 power plants located on inland waters other than the Great Lakes has been surveyed and data are presented. Descriptions of site, plant, and intake design and operation are provided. Reports in this volume summarize impingement data for individual plants in tabular and histogram formats. Information was available from differing sources such as the utilities themselves, public documents, regulatory agencies, and others. Thus, the extent of detail in the reports varies greatly from plant to plant. Histogram preparation involved an extrapolation procedure that has inadequacies. The reader is cautioned in the use of information presented in this volume to determine intake-design acceptability or intensity of impacts on ecosystems. No conclusions are presented herein; data comparisons are made in Volume IV.

  6. QUEST1 Variability Survey. II. Variability Determination Criteria and 200k Light Curve Catalog

    Science.gov (United States)

    Rengstorf, A. W.; Mufson, S. L.; Andrews, P.; Honeycutt, R. K.; Vivas, A. K.; Abad, C.; Adams, B.; Bailyn, C.; Baltay, C.; Bongiovanni, A.; Briceño, C.; Bruzual, G.; Coppi, P.; Della Prugna, F.; Emmet, W.; Ferrín, I.; Fuenmayor, F.; Gebhard, M.; Hernández, J.; Magris, G.; Musser, J.; Naranjo, O.; Oemler, A.; Rosenzweig, P.; Sabbey, C. N.; Sánchez, Ge.; Sánchez, Gu.; Schaefer, B.; Schenner, H.; Sinnott, J.; Snyder, J. A.; Sofia, S.; Stock, J.; van Altena, W.

    2004-12-01

    The QUEST (QUasar Equatorial Survey Team) Phase 1 camera has collected multibandpass photometry on a large strip of high Galactic latitude sky over a period of 26 months. This robust data set has been reduced and nightly catalogs compared to determine the photometric variability of the ensemble objects. Subsequent spectroscopic observations have confirmed a subset of the photometric variables as quasars, as previously reported. This paper reports on the details of the data reduction and analysis pipeline and presents multiple bandpass light curves for 198,213 QUEST1 objects, along with global variability information and matched Sloan photometry. Based on observations obtained at the Llano del Hato National Astronomical Observatory, operated by the Centro de Investigaciones de Astronomía for the Ministerio de Ciencia y Tecnologia of Venezuela.

  7. The California-Kepler Survey. II. Precise Physical Properties of 2025 Kepler Planets and Their Host Stars

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, John Asher; Cargile, Phillip A.; Sinukoff, Evan [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Petigura, Erik A.; Howard, Andrew W. [California Institute of Technology, Pasadena, CA, 91125 (United States); Fulton, Benjamin J.; Hirsch, Lea A. [Institute for Astronomy, University of Hawai‘i at Mānoa, Honolulu, HI 96822 (United States); Marcy, Geoffrey W.; Isaacson, Howard [Department of Astronomy, University of California, Berkeley, CA 94720 (United States); Hebb, Leslie [Hobart and William Smith Colleges, Geneva, NY 14456 (United States); Morton, Timothy D.; Winn, Joshua N. [Department of Astrophysical Sciences, Peyton Hall, 4 Ivy Lane, Princeton, NJ 08540 (United States); Weiss, Lauren M. [Institut de Recherche sur les Exoplanètes, Université de Montréal, Montréal, QC (Canada); Rogers, Leslie A., E-mail: petigura@caltech.edu [Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States)

    2017-09-01

    We present stellar and planetary properties for 1305 Kepler Objects of Interest hosting 2025 planet candidates observed as part of the California-Kepler Survey. We combine spectroscopic constraints, presented in Paper I, with stellar interior modeling to estimate stellar masses, radii, and ages. Stellar radii are typically constrained to 11%, compared to 40% when only photometric constraints are used. Stellar masses are constrained to 4%, and ages are constrained to 30%. We verify the integrity of the stellar parameters through comparisons with asteroseismic studies and Gaia parallaxes. We also recompute planetary radii for 2025 planet candidates. Because knowledge of planetary radii is often limited by uncertainties in stellar size, we improve the uncertainties in planet radii from typically 42% to 12%. We also leverage improved knowledge of stellar effective temperature to recompute incident stellar fluxes for the planets, now precise to 21%, compared to a factor of two when derived from photometry.
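
    The quoted improvement in planet radii follows from the transit relation R_p = R_* sqrt(depth): once the transit depth is well measured, the fractional planet-radius error is dominated by the stellar-radius term. The sketch below simply propagates fractional uncertainties in quadrature for that relation; the numbers are illustrative, not CKS values.

```python
import numpy as np

def planet_radius(depth, r_star):
    """Transit relation R_p = R_* * sqrt(depth), in the same units as r_star."""
    return r_star * np.sqrt(depth)

def fractional_radius_error(frac_err_rstar, frac_err_depth):
    """Quadrature propagation for R_p = R_* * depth^(1/2):
    (dRp/Rp)^2 = (dR*/R*)^2 + (0.5 * dDepth/Depth)^2."""
    return np.hypot(frac_err_rstar, 0.5 * frac_err_depth)

# Illustrative comparison: photometry-only vs spectroscopy+isochrone stellar radii,
# assuming a 5% transit-depth uncertainty (made-up number).
for frac_err_rstar in (0.40, 0.11):
    print(frac_err_rstar, fractional_radius_error(frac_err_rstar, 0.05))
```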

  8. Redshifts for fainter galaxies in the first CfA survey slice. II

    Science.gov (United States)

    Wegner, Gary; Thorstensen, John R.; Kurtz, Michael J.; Geller, Margaret J.; Huchra, John P.

    1990-01-01

    Redshifts were measured for 96 galaxies in right ascension alpha between 8h and 17h, declination delta between 30 and 31 deg, and with m(Zwicky) in the range 15.6-15.7. These correspond to 94 of the 96 entries in the Zwicky-Nilson merged catalog. The declination range delta between 29 deg and 31 deg is now complete to m(Zwicky) = 15.7. The structures in the first 6-deg-wide slice of the Center for Astrophysics redshift survey (delta between 26.5 and 32.5 deg) are clearly defined in the 2-deg-wide, slightly deeper sample; the fainter galaxies trace the structures defined by the brighter ones.

  9. Shocked POststarburst Galaxy Survey. II. The Molecular Gas Content and Properties of a Subset of SPOGs

    Science.gov (United States)

    Alatalo, Katherine; Lisenfeld, Ute; Lanz, Lauranne; Appleton, Philip N.; Ardila, Felipe; Cales, Sabrina L.; Kewley, Lisa J.; Lacy, Mark; Medling, Anne M.; Nyland, Kristina; Rich, Jeffrey A.; Urry, C. Meg

    2016-08-01

    We present CO(1-0) observations of objects within the Shocked POststarburst Galaxy Survey taken with the Institut de Radioastronomie Millimétrique 30 m single dish and the Combined Array for Research in Millimeter-wave Astronomy interferometer. Shocked poststarburst galaxies (SPOGs) represent a transitioning population of galaxies, with deep Balmer absorption (EW(Hδ) > 5 Å), consistent with an intermediate-age (A-star) stellar population, and ionized gas line ratios inconsistent with pure star formation. The CO(1-0) subsample was selected from SPOGs detected by the Wide-field Infrared Survey Explorer with 22 μm flux detected at a signal-to-noise ratio (S/N) > 3. Of the 52 objects observed in CO(1-0), 47 are detected with S/N > 3. A large fraction (37%-46% ± 7%) of our CO-SPOG sample were visually classified as morphologically disrupted. The H₂ masses detected were between 10^8.7 and 10^10.8 M⊙, consistent with the gas masses found in normal galaxies, though approximately an order of magnitude larger than the range seen in poststarburst galaxies. When comparing the 22 μm and CO(1-0) fluxes, SPOGs diverge from the normal star-forming relation, having 22 μm fluxes in excess of the relation by a factor of 4.91 (+0.42/−0.39), suggestive of the presence of active galactic nuclei (AGNs). The Na I D characteristics of CO-SPOGs show that it is likely that many of these objects host interstellar winds. Objects with large Na I D enhancements also tend to emit in the radio, suggesting possible AGN driving of neutral winds.
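
    For readers unfamiliar with how an integrated CO(1-0) flux becomes an H2 mass, the sketch below applies the standard line-luminosity relation (Solomon & Vanden Bout style) together with a CO-to-H2 conversion factor. The alpha_CO value and the example flux, redshift and distance are assumptions for illustration and are not necessarily the choices made in the paper.

```python
def h2_mass_from_co(sco_dv_jy_kms, z, dl_mpc, nu_rest_ghz=115.271, alpha_co=4.3):
    """Molecular gas mass from an integrated CO(1-0) flux.

    Standard line-luminosity relation:
        L'_CO [K km/s pc^2] = 3.25e7 * S_CO dv [Jy km/s] * nu_obs^-2 [GHz] * D_L^2 [Mpc] * (1+z)^-3
    and M_H2 = alpha_CO * L'_CO.  alpha_CO = 4.3 (Milky-Way-like, includes helium) is an assumption.
    """
    nu_obs = nu_rest_ghz / (1.0 + z)                     # observed CO(1-0) frequency in GHz
    l_co_prime = 3.25e7 * sco_dv_jy_kms * nu_obs**-2 * dl_mpc**2 * (1.0 + z)**-3
    return alpha_co * l_co_prime                          # solar masses

# Illustrative example: a 5 Jy km/s detection at z = 0.04 with D_L = 180 Mpc (toy values).
print(f"{h2_mass_from_co(5.0, 0.04, 180.0):.2e} Msun")
```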

  10. THE SWIFT AGN AND CLUSTER SURVEY. II. CLUSTER CONFIRMATION WITH SDSS DATA

    International Nuclear Information System (INIS)

    Griffin, Rhiannon D.; Dai, Xinyu; Kochanek, Christopher S.; Bregman, Joel N.

    2016-01-01

    We study 203 (of 442) Swift AGN and Cluster Survey extended X-ray sources located in the SDSS DR8 footprint to search for galaxy over-densities in three-dimensional space using SDSS galaxy photometric redshifts and positions near the Swift cluster candidates. We find 104 Swift clusters with a >3σ galaxy over-density. The remaining targets are potentially located at higher redshifts and require deeper optical follow-up observations for confirmation as galaxy clusters. We present a series of cluster properties including the redshift, brightest cluster galaxy (BCG) magnitude, BCG-to-X-ray center offset, optical richness, and X-ray luminosity. We also detect red sequences in ∼85% of the 104 confirmed clusters. The X-ray luminosity and optical richness for the SDSS confirmed Swift clusters are correlated and follow previously established relations. The distribution of the separations between the X-ray centroids and the most likely BCG is also consistent with expectation. We compare the observed redshift distribution of the sample with a theoretical model, and find that our sample is complete for z ≲ 0.3 and is still 80% complete up to z ≃ 0.4, consistent with the SDSS survey depth. These analysis results suggest that our Swift cluster selection algorithm has yielded a statistically well-defined cluster sample for further study of cluster evolution and cosmology. We also match our SDSS confirmed Swift clusters to existing cluster catalogs, and find 42, 23, and 1 matches in optical, X-ray, and Sunyaev–Zel’dovich catalogs, respectively, and so the majority of these clusters are new detections.
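
    A minimal version of the over-density test described above: count galaxies whose photometric redshifts fall in a slice around the candidate and whose positions fall inside a small projected aperture, then compare that count with the mean and scatter of counts in randomly placed background apertures. Everything below (aperture size, slice width, the flat-sky approximation, the toy catalog) is an illustrative assumption, not the actual Swift/SDSS pipeline.

```python
import numpy as np

def overdensity_significance(gal_ra, gal_dec, gal_zphot,
                             ra0, dec0, z0, aperture_deg=0.05, dz=0.05,
                             n_background=500, rng=None):
    """Significance of a galaxy over-density near (ra0, dec0, z0), in units of the background scatter.

    Counts galaxies inside a small-angle aperture and a photo-z slice, then compares
    to counts in randomly placed apertures within the same slice (flat-sky approximation).
    """
    rng = rng or np.random.default_rng(0)
    in_slice = np.abs(gal_zphot - z0) < dz

    def count(ra_c, dec_c):
        dra = (gal_ra - ra_c) * np.cos(np.radians(dec_c))
        ddec = gal_dec - dec_c
        return np.sum(in_slice & (np.hypot(dra, ddec) < aperture_deg))

    n_target = count(ra0, dec0)
    ra_rand = rng.uniform(gal_ra.min(), gal_ra.max(), n_background)
    dec_rand = rng.uniform(gal_dec.min(), gal_dec.max(), n_background)
    n_rand = np.array([count(r, d) for r, d in zip(ra_rand, dec_rand)])
    return (n_target - n_rand.mean()) / max(n_rand.std(), 1e-9)

# Toy usage: a uniform field plus an injected clump at (151 deg, 21 deg, z = 0.25).
rng = np.random.default_rng(1)
ra = np.concatenate([rng.uniform(150, 152, 3000), rng.normal(151.0, 0.02, 60)])
dec = np.concatenate([rng.uniform(20, 22, 3000), rng.normal(21.0, 0.02, 60)])
z = np.concatenate([rng.uniform(0.05, 0.6, 3000), rng.normal(0.25, 0.02, 60)])
print(overdensity_significance(ra, dec, z, 151.0, 21.0, 0.25))
```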

  11. [Effect of Mn(II) on the error-prone DNA polymerase iota activity in extracts from human normal and tumor cells].

    Science.gov (United States)

    Lakhin, A V; Efremova, A S; Makarova, I V; Grishina, E E; Shram, S I; Tarantul, V Z; Gening, L V

    2013-01-01

    The DNA polymerase iota (Pol iota), which has some peculiar features and is characterized by an extremely error-prone DNA synthesis, belongs to the group of enzymes preferentially activated by Mn²⁺ instead of Mg²⁺. In this work, the effect of Mn²⁺ on DNA synthesis in cell extracts from (a) normal human and murine tissues, (b) human tumor (uveal melanoma), and (c) cultured human tumor cell lines SKOV-3 and HL-60 was tested. Each group displayed characteristic features of Mn-dependent DNA synthesis. The changes in Mn-dependent DNA synthesis caused by malignant transformation of normal tissues are described. It was also shown that the error-prone DNA synthesis catalyzed by Pol iota in extracts of all cell types was efficiently suppressed by an RNA aptamer (IKL5) against Pol iota obtained in our earlier work. The obtained results suggest that IKL5 might be used to suppress the enhanced activity of Pol iota in tumor cells.

  12. Propagation of uncertainty in nasal spray in vitro performance models using Monte Carlo simulation: Part II. Error propagation during product performance modeling.

    Science.gov (United States)

    Guo, Changning; Doub, William H; Kauffman, John F

    2010-08-01

    Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
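
    The approach described above can be illustrated with a toy linear design-of-experiments model: jitter both the input settings and the measured responses according to assumed uncertainties, refit the model many times, and take the spread of the refitted coefficients as the propagated coefficient uncertainty. The model form and noise levels below are fabricated for illustration and are not the authors' nasal spray models.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 'true' DOE model: response = b0 + b1*x1 + b2*x2 (coefficients are made up).
b_true = np.array([1.0, 0.8, -0.5])
X_nominal = np.array([[x1, x2] for x1 in (-1, 0, 1) for x2 in (-1, 0, 1)], float)
y_nominal = b_true[0] + X_nominal @ b_true[1:]

def fit(X, y):
    """Ordinary least-squares fit of intercept + linear terms."""
    A = np.column_stack([np.ones(len(X)), X])      # design matrix with intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Monte Carlo: jitter inputs and responses by assumed standard deviations, refit each time.
sigma_x, sigma_y = 0.05, 0.10
coefs = np.array([
    fit(X_nominal + rng.normal(0, sigma_x, X_nominal.shape),
        y_nominal + rng.normal(0, sigma_y, y_nominal.shape))
    for _ in range(5000)
])
print("propagated coefficient standard deviations:", coefs.std(axis=0))
```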

  13. The HST/ACS Coma Cluster Survey. II. Data Description and Source Catalogs

    Science.gov (United States)

    Hammer, Derek; Kleijn, Gijs Verdoes; Hoyos, Carlos; Den Brok, Mark; Balcells, Marc; Ferguson, Henry C.; Goudfrooij, Paul; Carter, David; Guzman, Rafael; Peletier, Reynier F.

    2010-01-01

    The Coma cluster, Abell 1656, was the target of a HST-ACS Treasury program designed for deep imaging in the F475W and F814W passbands. Although our survey was interrupted by the ACS instrument failure in early 2007, the partially-completed survey still covers approximately 50% of the core high density region in Coma. Observations were performed for twenty-five fields with a total coverage area of 274 arcmin², and extend over a wide range of cluster-centric radii (approximately 1.75 Mpc or 1 deg). The majority of the fields are located near the core region of Coma (19/25 pointings) with six additional fields in the south-west region of the cluster. In this paper we present SEXTRACTOR source catalogs generated from the processed images, including a detailed description of the methodology used for object detection and photometry, the subtraction of bright galaxies to measure faint underlying objects, and the use of simulations to assess the photometric accuracy and completeness of our catalogs. We also use simulations to perform aperture corrections for the SEXTRACTOR Kron magnitudes based only on the measured source flux and its half-light radius. We have performed photometry for 76,000 objects that consist of roughly equal numbers of extended galaxies and unresolved objects. Approximately two-thirds of all detections are brighter than F814W=26.5 mag (AB), which corresponds to the 10σ point-source detection limit. We estimate that Coma members are 5-10% of the source detections, including a large population of compact objects (primarily GCs, but also cEs and UCDs), and a wide variety of extended galaxies from cD galaxies to dwarf low surface brightness galaxies. The initial data release for the HST-ACS Coma Treasury program was made available to the public in August 2008. The images and catalogs described in this study relate to our second data release.

  14. Association between drug use and urban violence: Data from the II Brazilian National Alcohol and Drugs Survey (BNADS

    Directory of Open Access Journals (Sweden)

    Renata Rigacci Abdalla

    2018-06-01

    Full Text Available Objective: To investigate the association of alcohol and cocaine use with urban violence (both as victim and as perpetrator) in a representative sample of the Brazilian population. Method: The Second Brazilian Alcohol and Drugs Survey (II BNADS) interviewed 4607 individuals aged 14 years and older from the Brazilian household population, including an oversample of 1157 adolescents (14 to 18 years old). The survey gathered information on alcohol, tobacco and illegal substance use, as well as on risk factors for abuse and dependence, behaviors associated with the use of substances, and possible consequences, such as urban violence indicators. Results: Approximately 9.3% of the Brazilian population has been the victim of at least one form of urban violence. This proportion increases to 19.7% among cocaine users and to 18.1% among individuals with alcohol use disorders (AUD). Perpetration of violence was reported by 6.2% of the sample. Cocaine use and AUD increased the chances of being an aggressor almost four-fold. Being religious and married decreased the chances of being a victim and/or perpetrator of urban violence. Higher education also decreased the chances of involvement in either victimization or perpetration of violence. Both parallel mediation models considering cocaine use as a predictor of urban violence (victimization or perpetration) were valid, and alcohol consumption and depressive symptoms were mediators of this relationship. Conclusions: This study presents relevant data of interest to Brazil, as this country is one of the major consumer markets for cocaine and is also among the most violent countries worldwide. Keywords: Urban violence, Cocaine, Alcohol use disorder, Household survey, Epidemiology

  15. Fuel Quality/Processing Study. Volume II. Appendix, Task I, literature survey

    Energy Technology Data Exchange (ETDEWEB)

    O' Hara, J B; Bela, A; Jentz, N E; Klumpe, H W; Kessler, R E; Kotzot, H T; Loran, B I

    1981-04-01

    This activity was begun with the assembly of information from Parsons' files and from contacts in the development and commercial fields. A further, more extensive literature search was carried out using the Energy Data Base and the American Petroleum Institute Data Base, which are part of the DOE/RECON system. Approximately 6000 references and abstracts were obtained from the EDB search. These were reviewed, and the especially pertinent documents, approximately 300, were acquired in the form of paper copy or microfiche. A Fuel Properties form was developed for listing information pertinent to gas turbine liquid fuel property specifications. Fuel properties data for liquid fuels from selected synfuel processes, deemed to be successful candidates for near-future commercial plants, were tabulated on the forms. The processes selected consisted of the H-Coal, SRC-II, and Exxon Donor Solvent (EDS) coal liquefaction processes plus the Paraho and Tosco shale oil processes. Fuel properties analyses for crude and distillate syncrude process products are contained in Section 2. Analyses representing synthetic fuels given refinery treatments, mostly bench-scale hydrotreating, are contained in Section 3. Section 4 discusses gas turbine fuel specifications based on petroleum-source fuels as developed by the major gas turbine manufacturers. Section 5 presents the on-site gas turbine fuel treatments applicable to the impurity content of petroleum-based fuels, intended to prevent adverse contaminant effects. Section 7 relates the environmental aspects of gas turbine fuel usage and combustion performance. It appears that the near-future stationary industrial gas turbine fuel market will require that some of the synthetic fuels be refined to the point that they resemble petroleum-based fuels.

  16. DEEP NEAR-INFRARED SURVEY OF THE PIPE NEBULA. II. DATA, METHODS, AND DUST EXTINCTION MAPS

    International Nuclear Information System (INIS)

    Roman-Zuniga, Carlos G.; Alves, Joao F.; Lada, Charles J.; Lombardi, Marco

    2010-01-01

    We present a new set of high-resolution dust extinction maps of the nearby and essentially starless Pipe Nebula molecular cloud. The maps were constructed from a concerted deep near-infrared imaging survey with the ESO-VLT, ESO-NTT, CAHA 3.5 m telescopes, and 2MASS data. The new maps have a resolution three times higher than the previous extinction map of this cloud by Lombardi et al. and are able to resolve structures down to 2600 AU. We detect 244 significant extinction peaks across the cloud. These peaks have masses between 0.1 and 18.4 M_sun, diameters between 1.2 and 5.7 x 10^4 AU (0.06 and 0.28 pc), and mean densities of about 10^4 cm^-3, all in good agreement with previous results. From the analysis of the mean surface density of companions we find a well-defined scale near 1.4 x 10^4 AU below which we detect a significant decrease in structure of the cloud. This scale is smaller than the Jeans length calculated from the mean density of the peaks. The surface density of peaks is not uniform but instead it displays clustering. Extinction peaks in the Pipe Nebula appear to have a spatial distribution similar to the stars in Taurus, suggesting that the spatial distribution of stars evolves directly from the primordial spatial distribution of high-density material.
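    The comparison between the 1.4 x 10^4 AU structure scale and the Jeans length rests on the standard thermal Jeans formula. The sketch below evaluates it for round numbers typical of the quoted peaks (T ≈ 10 K, n ≈ 10^4 cm^-3, mean molecular weight μ ≈ 2.33); these inputs are illustrative assumptions, not the paper's exact values.

      import math

      k_B = 1.380649e-23   # J/K
      G   = 6.674e-11      # m^3 kg^-1 s^-2
      m_H = 1.6726e-27     # kg
      AU  = 1.496e11       # m

      def jeans_length_au(T_K=10.0, n_cm3=1.0e4, mu=2.33):
          """Thermal Jeans length lambda_J = c_s * sqrt(pi / (G * rho)) for an isothermal gas."""
          rho = mu * m_H * n_cm3 * 1.0e6              # number density (cm^-3) -> mass density (kg/m^3)
          c_s = math.sqrt(k_B * T_K / (mu * m_H))     # isothermal sound speed
          return c_s * math.sqrt(math.pi / (G * rho)) / AU

      print(f"lambda_J ~ {jeans_length_au():.1e} AU")   # a few x 10^4 AU, larger than the 1.4e4 AU scale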

  17. [Cost of lost productivity in pharmacoeconomics analysis. Part II. Survey in the expert group].

    Science.gov (United States)

    Wrona, Witold; Hermanowski, Tomasz; Jakubczyk, Michał; Golicki, Dominik; Czech, Marcin; Niewada, Maciej; Kolasa, Katarzyna

    2011-01-01

    The aim of the survey was to collect data on the practice and preferences of decision-makers and experts in health economics concerning the role of indirect costs in Poland. The questionnaire contained 18 questions covering the need for indirect cost calculation in economic evaluations and the measures used to calculate indirect costs. Fifty-four respondents involved in health economics returned completed questionnaires. Mean age of respondents was 33.3 years and mean experience in health economics 4.7 years; 43% (23/54) of respondents had a non-economic background; 30% each were users and doers of health technology assessment reports. All respondents except one indicated that indirect costs should be calculated in pharmacoeconomic studies. Twenty-three respondents (43%) indicated the human capital approach as the best method to estimate costs from the societal perspective; the friction cost method came second (11%); 42% of respondents had no opinion. The doers of economic evaluations pointed to GDP per capita (61%, 11/18), average salary (61%, 11/18), and the cost of sick pay or injury benefit (61%, 11/18) as measures that could be used to value production losses. Indirect costs are considered an important component of economic evaluations of healthcare interventions in Poland. The lack of widely accepted methods for indirect cost evaluation supports further research.

  18. The Gaia spectrophotometric standard stars survey: II. Instrumental effects of six ground-based observing campaigns

    Science.gov (United States)

    Altavilla, G.; Marinoni, S.; Pancino, E.; Galleti, S.; Ragaini, S.; Bellazzini, M.; Cocozza, G.; Bragaglia, A.; Carrasco, J. M.; Castro, A.; Di Fabrizio, L.; Federici, L.; Figueras, F.; Gebran, M.; Jordi, C.; Masana, E.; Schuster, W.; Valentini, G.; Voss, H.

    2015-08-01

    The Gaia SpectroPhotometric Standard Stars (SPSS) survey started in 2006, was awarded almost 450 observing nights, and accumulated almost 100 000 raw data frames with both photometric and spectroscopic observations. Such a large observational effort requires careful, homogeneous, and automatic data reduction and quality control procedures. In this paper, we quantitatively evaluate instrumental effects that might have a significant (i.e., ≥ 1%) impact on the Gaia SPSS flux calibration. The measurements involve six different instruments, monitored over the eight years of observations dedicated to the Gaia flux standards campaigns: DOLORES@TNG in La Palma, EFOSC2@NTT and ROSS@REM in La Silla, CAFOS@2.2 m in Calar Alto, BFOSC@Cassini in Loiano, and LaRuca@1.5 m in San Pedro Mártir. We examine and quantitatively evaluate the following effects: CCD linearity and shutter times, calibration frame stability, lamp flexures, second-order contamination, light polarization, and fringing. We present methods to correct for the relevant effects, which can be applied to a wide range of observational projects using similar instruments. Based on data obtained with BFOSC@Cassini in Loiano, Italy; EFOSC2@NTT in La Silla, Chile; DOLORES@TNG in La Palma, Spain; CAFOS@2.2 m in Calar Alto, Spain; LaRuca@1.5 m in San Pedro Mártir, Mexico (see acknowledgements for more details).

  19. Far-infrared data for symbiotic stars. II. The IRAS survey observations

    International Nuclear Information System (INIS)

    Kenyon, S.J.; Fernandez-Castro, T.; Stencel, R.E.

    1988-01-01

    IRAS survey data for all known symbiotic binaries are reported. S-type systems have 25 micron excesses much larger than those of single red giant stars, suggesting that these objects lose mass more rapidly than do normal giants. D-type objects have far-IR colors similar to those of Mira variables, implying mass-loss rates of about 10^-6 solar masses per year. The near-IR extinctions of the D types indicate that their Mira components are enshrouded in optically thick dust shells, while their hot companions lie outside the shells. If this interpretation of the data is correct, then the very red near-IR colors of D-type symbiotic stars are caused by extreme amounts of dust absorption rather than dust emission. The small group of D-prime objects possesses far-IR colors resembling those of compact planetary nebulae or extreme OH/IR stars. It is speculated that these binaries are not symbiotic stars at all, but contain a hot compact star and an ex-asymptotic branch giant which is in the process of ejecting a planetary nebula shell. 42 references.

  20. THE SLOAN DIGITAL SKY SURVEY DATA RELEASE 7 SPECTROSCOPIC M DWARF CATALOG. II. STATISTICAL PARALLAX ANALYSIS

    International Nuclear Information System (INIS)

    Bochanski, John J.; Hawley, Suzanne L.; West, Andrew A.

    2011-01-01

    We present a statistical parallax analysis of low-mass dwarfs from the Sloan Digital Sky Survey. We calculate absolute r-band magnitudes (M_r) as a function of color and spectral type and investigate changes in M_r with location in the Milky Way. We find that magnetically active M dwarfs are intrinsically brighter in M_r than their inactive counterparts at the same color or spectral type. Metallicity, as traced by the proxy ζ, also affects M_r, with metal-poor stars having fainter absolute magnitudes than higher metallicity M dwarfs at the same color or spectral type. Additionally, we measure the velocity ellipsoid and solar reflex motion for each subsample of M dwarfs. We find good agreement between our measured solar peculiar motion and previous results for similar populations, as well as some evidence for differing motions of early and late M-type populations in U and W velocities that cannot be attributed to asymmetric drift. The reflex solar motion and the velocity dispersions both show that younger populations, as traced by magnetic activity and location near the Galactic plane, have experienced less dynamical heating. We introduce a new parameter, the independent position altitude (IPA), to investigate populations as a function of vertical height from the Galactic plane. M dwarfs at all types exhibit an increase in velocity dispersion when analyzed in comparable IPA subgroups.

  1. The Taurus Boundary of Stellar/Substellar (TBOSS) Survey. II. Disk Masses from ALMA Continuum Observations

    Science.gov (United States)

    Ward-Duong, K.; Patience, J.; Bulger, J.; van der Plas, G.; Ménard, F.; Pinte, C.; Jackson, A. P.; Bryden, G.; Turner, N. J.; Harvey, P.; Hales, A.; De Rosa, R. J.

    2018-02-01

    We report 885 μm ALMA continuum flux densities for 24 Taurus members spanning the stellar/substellar boundary with spectral types from M4 to M7.75. Of the 24 systems, 22 are detected at levels ranging from 1.0 to 55.7 mJy. The two nondetections are transition disks, though other transition disks in the sample are detected. Converting ALMA continuum measurements to masses using standard scaling laws and radiative transfer modeling yields dust mass estimates ranging from ∼0.3 to 20 M_⊕. The dust mass shows a declining trend with central object mass when combined with results from submillimeter surveys of more massive Taurus members. The substellar disks appear as part of a continuous sequence and not a distinct population. Compared to older Upper Sco members with similar masses across the substellar limit, the Taurus disks are brighter and more massive. Both the Taurus and Upper Sco populations are consistent with an approximately linear relationship between M_dust and M_star, although derived power-law slopes depend strongly upon choices of stellar evolutionary model and dust temperature relation. The median disk around early-M stars in Taurus contains a comparable amount of mass in small solids as the average amount of heavy elements in Kepler planetary systems on short-period orbits around M-dwarf stars, with an order of magnitude spread in disk dust mass about the median value. Assuming a gas-to-dust ratio of 100:1, only a small number of low-mass stars and brown dwarfs have a total disk mass amenable to giant planet formation, consistent with the low frequency of giant planets orbiting M dwarfs.
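    The "standard scaling law" referred to here is the optically thin dust-mass relation M_dust = F_ν d² / (κ_ν B_ν(T_dust)). The sketch below applies it at 885 μm for a Taurus-like distance of 140 pc with an assumed opacity of 3.4 cm² g⁻¹ and an assumed dust temperature of 20 K; both of those inputs are illustrative, and the paper's own radiative transfer modeling differs in detail.

      import math

      h, c_light, k_B = 6.626e-34, 2.998e8, 1.381e-23
      PC, M_EARTH, JY = 3.086e16, 5.972e24, 1e-26     # m, kg, W m^-2 Hz^-1

      def planck_nu(nu, T):
          """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
          return 2.0 * h * nu**3 / c_light**2 / (math.exp(h * nu / (k_B * T)) - 1.0)

      def dust_mass_earth(flux_mJy, wav_um=885.0, d_pc=140.0, kappa_cm2_g=3.4, T_dust=20.0):
          """Optically thin dust mass M = F_nu d^2 / (kappa_nu B_nu(T_dust)), in Earth masses."""
          nu = c_light / (wav_um * 1e-6)
          F_nu = flux_mJy * 1e-3 * JY
          d = d_pc * PC
          kappa = kappa_cm2_g * 0.1                   # cm^2/g -> m^2/kg
          return F_nu * d**2 / (kappa * planck_nu(nu, T_dust)) / M_EARTH

      print(f"10 mJy at 885 um -> ~{dust_mass_earth(10.0):.1f} Earth masses of dust")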

  2. The H IX galaxy survey - II. H I kinematics of H I eXtreme galaxies

    Science.gov (United States)

    Lutz, K. A.; Kilborn, V. A.; Koribalski, B. S.; Catinella, B.; Józsa, G. I. G.; Wong, O. I.; Stevens, A. R. H.; Obreschkow, D.; Dénes, H.

    2018-05-01

    By analysing a sample of galaxies selected from the H I Parkes All Sky Survey (HIPASS) to contain more than 2.5 times their expected H I content based on their optical properties, we investigate what drives these H I eXtreme (H IX) galaxies to be so H I-rich. We model the H I kinematics with the Tilted Ring Fitting Code TiRiFiC and compare the observed H IX galaxies to a control sample of galaxies from HIPASS as well as simulated galaxies built with the semi-analytic model DARK SAGE. We find that (1) H I discs in H IX galaxies are more likely to be warped and more likely to host H I arms and tails than those in the control galaxies, (2) the average H I and average stellar column densities of H IX galaxies are comparable to those of the control sample, (3) H IX galaxies have higher H I and baryonic specific angular momenta than control galaxies, and (4) most H IX galaxies live in higher-spin haloes than most control galaxies. These results suggest that H IX galaxies are H I-rich because they can support more H I against gravitational instability due to their high specific angular momentum. The majority of the H IX galaxies inherit their high specific angular momentum from their haloes. The H I content of H IX galaxies might be further increased by gas-rich minor mergers. This paper is based on data obtained with the Australia Telescope Compact Array through the large program C 2705.

  3. The Green Bank Northern Celestial Cap Pulsar Survey. II. The Discovery and Timing of 10 Pulsars

    Science.gov (United States)

    Kawash, A. M.; McLaughlin, M. A.; Kaplan, D. L.; DeCesar, M. E.; Levin, L.; Lorimer, D. R.; Lynch, R. S.; Stovall, K.; Swiggum, J. K.; Fonseca, E.; Archibald, A. M.; Banaszak, S.; Biwer, C. M.; Boyles, J.; Cui, B.; Dartez, L. P.; Day, D.; Ernst, S.; Ford, A. J.; Flanigan, J.; Heatherly, S. A.; Hessels, J. W. T.; Hinojosa, J.; Jenet, F. A.; Karako-Argaman, C.; Kaspi, V. M.; Kondratiev, V. I.; Leake, S.; Lunsford, G.; Martinez, J. G.; Mata, A.; Matheny, T. D.; Mcewen, A. E.; Mingyar, M. G.; Orsini, A. L.; Ransom, S. M.; Roberts, M. S. E.; Rohr, M. D.; Siemens, X.; Spiewak, R.; Stairs, I. H.; van Leeuwen, J.; Walker, A. N.; Wells, B. L.

    2018-04-01

    We present timing solutions for 10 pulsars discovered in 350 MHz searches with the Green Bank Telescope. Nine of these were discovered in the Green Bank Northern Celestial Cap survey and one was discovered by students in the Pulsar Search Collaboratory program during an analysis of drift-scan data. Following the discovery and confirmation with the Green Bank Telescope, timing has yielded phase-connected solutions with high-precision measurements of rotational and astrometric parameters. Eight of the pulsars are slow and isolated, including PSR J0930‑2301, a pulsar with a nulling fraction lower limit of ∼30% and a nulling timescale of seconds to minutes. This pulsar also shows evidence of mode changing. The remaining two pulsars have undergone recycling, accreting material from binary companions, resulting in higher spin frequencies. PSR J0557‑2948 is an isolated, 44 ms pulsar that has been partially recycled and is likely a former member of a binary system that was disrupted by a second supernova. The paucity of such so-called “disrupted binary pulsars” (DRPs) compared to double neutron star (DNS) binaries can be used to test current evolutionary scenarios, especially the kicks imparted on the neutron stars in the second supernova. There is some evidence that DRPs have larger space velocities, which could explain their small numbers. PSR J1806+2819 is a 15 ms pulsar in a 44-day orbit with a low-mass white dwarf companion. We did not detect the companion in archival optical data, indicating that it must be older than 1200 Myr.

  4. Survey of total error of precipitation and homogeneous HDL-cholesterol methods and simultaneous evaluation of lyophilized saccharose-containing candidate reference materials for HDL-cholesterol

    NARCIS (Netherlands)

    C.M. Cobbaert (Christa); H. Baadenhuijsen; L. Zwang (Louwerens); C.W. Weykamp; P.N. Demacker; P.G.H. Mulder (Paul)

    1999-01-01

    BACKGROUND: Standardization of HDL-cholesterol is needed for risk assessment. We assessed for the first time the accuracy of HDL-cholesterol testing in The Netherlands and evaluated 11 candidate reference materials (CRMs). METHODS: The total error (TE) of

  5. CHROMOSPHERIC VARIABILITY IN SLOAN DIGITAL SKY SURVEY M DWARFS. II. SHORT-TIMESCALE Hα VARIABILITY

    International Nuclear Information System (INIS)

    Kruse, E. A.; Berger, E.; Laskar, T.; Knapp, G. R.; Gunn, J. E.; Loomis, C. P.; Lupton, R. H.; Schlegel, D. J.

    2010-01-01

    We present the first comprehensive study of short-timescale chromospheric Hα variability in M dwarfs using the individual 15 minute spectroscopic exposures for 52,392 objects from the Sloan Digital Sky Survey. Our sample contains about 10^3-10^4 objects per spectral type bin in the range M0-M9, with a typical number of three exposures per object (ranging up to a maximum of 30 exposures). Using this extensive data set, we find that about 16% of the sources exhibit Hα emission in at least one exposure, and of those about 45% exhibit Hα emission in all of the available exposures. As in previous studies of Hα activity (L_Hα/L_bol), we find a rapid increase in the fraction of active objects from M0-M6. However, we find a subsequent decline in later spectral types that we attribute to our use of the individual spectra. Similarly, we find saturated activity at a level of L_Hα/L_bol ∼ 10^-3.6 for spectral types M0-M5 followed by a decline to about 10^-4.3 in the range M7-M9. Within the sample of objects with Hα emission, only 26% are consistent with non-variable emission, independent of spectral type. The Hα variability, quantified in terms of the ratio of maximum to minimum Hα equivalent width (R_EW), exhibits a rapid rise from M0 to M5, followed by a plateau and a possible decline in M9 objects. In particular, variability with R_EW ≳ 10 is only observed in objects later than M5, and survival analysis indicates a low probability that the R_EW values for M0-M4 and M5-M9 are drawn from the same distribution. We further find that for an exponential distribution, the R_EW values follow N(R_EW) ∝ exp[-(R_EW - 1)/2.3] for M0-M4 and ∝ exp[-(R_EW - 1)/2.9] for M5-M9. Finally, comparing objects with persistent and intermittent Hα emission, we find that the latter exhibit greater variability. Based on these results, we conclude that Hα variability in M dwarfs on timescales of 15 minutes to 1 hr increases with later spectral type, and that the variability is
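    The fitted exponential forms make it easy to see how the tail of large variability ratios differs between the two spectral-type groups: for N(R_EW) ∝ exp[-(R_EW - 1)/s], the fraction of objects above a threshold is exp[-(threshold - 1)/s]. The sketch below uses only the quoted scale lengths (2.3 and 2.9) and an illustrative threshold of R_EW = 10.

      import math

      def frac_above(r_threshold, scale):
          """Tail fraction of N(R_EW) proportional to exp[-(R_EW - 1)/scale], normalized over R_EW >= 1."""
          return math.exp(-(r_threshold - 1.0) / scale)

      for label, s in (("M0-M4", 2.3), ("M5-M9", 2.9)):
          print(f"{label}: fraction with R_EW > 10 ~ {frac_above(10.0, s):.3f}")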

  6. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    In this review article, medication errors are discussed: their definition, the scope of the medication error problem, types of medication errors, common causes, monitoring, consequences, prevention, and management, presented with summary tables for ease of reference.

  7. The MUSCLES Treasury Survey. IV. Scaling Relations for Ultraviolet, Ca II K, and Energetic Particle Fluxes from M Dwarfs

    Science.gov (United States)

    Youngblood, Allison; France, Kevin; Loyd, R. O. Parke; Brown, Alexander; Mason, James P.; Schneider, P. Christian; Tilley, Matt A.; Berta-Thompson, Zachory K.; Buccino, Andrea; Froning, Cynthia S.; Hawley, Suzanne L.; Linsky, Jeffrey; Mauas, Pablo J. D.; Redfield, Seth; Kowalski, Adam; Miguel, Yamila; Newton, Elisabeth R.; Rugheimer, Sarah; Segura, Antígona; Roberge, Aki; Vieytes, Mariela

    2017-07-01

    Characterizing the UV spectral energy distribution (SED) of an exoplanet host star is critically important for assessing its planet’s potential habitability, particularly for M dwarfs, as they are prime targets for current and near-term exoplanet characterization efforts and atmospheric models predict that their UV radiation can produce photochemistry on habitable zone planets different from that on Earth. To derive ground-based proxies for UV emission for use when Hubble Space Telescope (HST) observations are unavailable, we have assembled a sample of 15 early to mid-M dwarfs observed by HST and compared their nonsimultaneous UV and optical spectra. We find that the equivalent width of the chromospheric Ca II K line at 3933 Å, when corrected for spectral type, can be used to estimate the stellar surface flux in ultraviolet emission lines, including H I Lyα. In addition, we address another potential driver of habitability: energetic particle fluxes associated with flares. We present a new technique for estimating soft X-ray and >10 MeV proton flux during far-UV emission line flares (Si IV and He II) by assuming solar-like energy partitions. We analyze several flares from the M4 dwarf GJ 876 observed with HST and Chandra as part of the MUSCLES Treasury Survey and find that habitable zone planets orbiting GJ 876 are impacted by large Carrington-like flares with peak soft X-ray fluxes ≥10^-3 W m^-2 and possible proton fluxes ∼10^2-10^3 pfu, approximately four orders of magnitude more frequently than modern-day Earth.

  8. The MUSCLES Treasury Survey. IV. Scaling Relations for Ultraviolet, Ca ii K, and Energetic Particle Fluxes from M Dwarfs

    Energy Technology Data Exchange (ETDEWEB)

    Youngblood, Allison; France, Kevin; Loyd, R. O. Parke; Mason, James P. [Laboratory for Atmospheric and Space Physics, University of Colorado, 600 UCB, Boulder, CO 80309 (United States); Brown, Alexander [Center for Astrophysics and Space Astronomy, University of Colorado, 389 UCB, Boulder, CO 80309 (United States); Schneider, P. Christian [European Space Research and Technology Centre (ESA/ESTEC), Keplerlaan 1, 2201 AZ Noordwijk (Netherlands); Tilley, Matt A. [Department of Earth and Space Sciences, University of Washington, Box 351310, Seattle, WA 98195 (United States); Berta-Thompson, Zachory K.; Kowalski, Adam [Department of Astrophysical and Planetary Sciences, University of Colorado, 2000 Colorado Ave., Boulder, CO 80305 (United States); Buccino, Andrea; Mauas, Pablo J. D. [Dpto. de Física, Facultad de Ciencias Exactas y Naturales (FCEN), Universidad de Buenos Aires (UBA), Buenos Aires (Argentina); Froning, Cynthia S. [Department of Astronomy/McDonald Observatory, C1400, University of Texas at Austin, Austin, TX 78712 (United States); Hawley, Suzanne L. [Astronomy Department, Box 351580, University of Washington, Seattle, WA 98195 (United States); Linsky, Jeffrey [JILA, University of Colorado and NIST, 440 UCB, Boulder, CO 80309 (United States); Redfield, Seth [Astronomy Department and Van Vleck Observatory, Wesleyan University, Middletown, CT 06459 (United States); Miguel, Yamila [Observatoire de la Cote d’Azur, Boulevard de l’Observatoire, CS 34229 F-06304 NICE Cedex 4 (France); Newton, Elisabeth R. [Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA 02138 (United States); Rugheimer, Sarah, E-mail: allison.youngblood@colorado.edu [School of Earth and Environmental Sciences, University of St. Andrews, Irvine Building, North Street, St. Andrews, KY16 9AL (United Kingdom); and others

    2017-07-01

    Characterizing the UV spectral energy distribution (SED) of an exoplanet host star is critically important for assessing its planet’s potential habitability, particularly for M dwarfs, as they are prime targets for current and near-term exoplanet characterization efforts and atmospheric models predict that their UV radiation can produce photochemistry on habitable zone planets different from that on Earth. To derive ground-based proxies for UV emission for use when Hubble Space Telescope (HST) observations are unavailable, we have assembled a sample of 15 early to mid-M dwarfs observed by HST and compared their nonsimultaneous UV and optical spectra. We find that the equivalent width of the chromospheric Ca ii K line at 3933 Å, when corrected for spectral type, can be used to estimate the stellar surface flux in ultraviolet emission lines, including H i Lyα. In addition, we address another potential driver of habitability: energetic particle fluxes associated with flares. We present a new technique for estimating soft X-ray and >10 MeV proton flux during far-UV emission line flares (Si iv and He ii) by assuming solar-like energy partitions. We analyze several flares from the M4 dwarf GJ 876 observed with HST and Chandra as part of the MUSCLES Treasury Survey and find that habitable zone planets orbiting GJ 876 are impacted by large Carrington-like flares with peak soft X-ray fluxes ≥10^−3 W m^−2 and possible proton fluxes ∼10^2–10^3 pfu, approximately four orders of magnitude more frequently than modern-day Earth.

  9. The MUSCLES Treasury Survey. IV. Scaling Relations for Ultraviolet, Ca ii K, and Energetic Particle Fluxes from M Dwarfs

    International Nuclear Information System (INIS)

    Youngblood, Allison; France, Kevin; Loyd, R. O. Parke; Mason, James P.; Brown, Alexander; Schneider, P. Christian; Tilley, Matt A.; Berta-Thompson, Zachory K.; Kowalski, Adam; Buccino, Andrea; Mauas, Pablo J. D.; Froning, Cynthia S.; Hawley, Suzanne L.; Linsky, Jeffrey; Redfield, Seth; Miguel, Yamila; Newton, Elisabeth R.; Rugheimer, Sarah

    2017-01-01

    Characterizing the UV spectral energy distribution (SED) of an exoplanet host star is critically important for assessing its planet’s potential habitability, particularly for M dwarfs, as they are prime targets for current and near-term exoplanet characterization efforts and atmospheric models predict that their UV radiation can produce photochemistry on habitable zone planets different from that on Earth. To derive ground-based proxies for UV emission for use when Hubble Space Telescope (HST) observations are unavailable, we have assembled a sample of 15 early to mid-M dwarfs observed by HST and compared their nonsimultaneous UV and optical spectra. We find that the equivalent width of the chromospheric Ca ii K line at 3933 Å, when corrected for spectral type, can be used to estimate the stellar surface flux in ultraviolet emission lines, including H i Lyα. In addition, we address another potential driver of habitability: energetic particle fluxes associated with flares. We present a new technique for estimating soft X-ray and >10 MeV proton flux during far-UV emission line flares (Si iv and He ii) by assuming solar-like energy partitions. We analyze several flares from the M4 dwarf GJ 876 observed with HST and Chandra as part of the MUSCLES Treasury Survey and find that habitable zone planets orbiting GJ 876 are impacted by large Carrington-like flares with peak soft X-ray fluxes ≥10^−3 W m^−2 and possible proton fluxes ∼10^2–10^3 pfu, approximately four orders of magnitude more frequently than modern-day Earth.

  10. SDSS-II SUPERNOVA SURVEY: AN ANALYSIS OF THE LARGEST SAMPLE OF TYPE IA SUPERNOVAE AND CORRELATIONS WITH HOST-GALAXY SPECTRAL PROPERTIES

    International Nuclear Information System (INIS)

    Wolf, Rachel C.; Gupta, Ravi R.; Sako, Masao; Fischer, John A.; March, Marisa C.; Fischer, Johanna-Laina; D’Andrea, Chris B.; Smith, Mathew; Kessler, Rick; Scolnic, Daniel M.; Jha, Saurabh W.; Campbell, Heather; Nichol, Robert C.; Olmstead, Matthew D.; Richmond, Michael; Schneider, Donald P.

    2016-01-01

    Using the largest single-survey sample of Type Ia supernovae (SNe Ia) to date, we study the relationship between properties of SNe Ia and those of their host galaxies, focusing primarily on correlations with Hubble residuals (HRs). Our sample consists of 345 photometrically classified or spectroscopically confirmed SNe Ia discovered as part of the SDSS-II Supernova Survey (SDSS-SNS). This analysis utilizes host-galaxy spectroscopy obtained during the SDSS-I/II spectroscopic survey and from an ancillary program on the SDSS-III Baryon Oscillation Spectroscopic Survey that obtained spectra for nearly all host galaxies of SDSS-II SN candidates. In addition, we use photometric host-galaxy properties from the SDSS-SNS data release such as host stellar mass and star formation rate. We confirm the well-known relation between HR and host-galaxy mass and find a 3.6 σ significance of a nonzero linear slope. We also recover correlations between HR and host-galaxy gas-phase metallicity and specific star formation rate as they are reported in the literature. With our large data set, we examine correlations between HR and multiple host-galaxy properties simultaneously and find no evidence of a significant correlation. We also independently analyze our spectroscopically confirmed and photometrically classified SNe Ia and comment on the significance of similar combined data sets for future surveys.
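    The quoted 3.6 σ significance of a nonzero slope is simply the best-fit slope of Hubble residual versus host stellar mass divided by its uncertainty. The sketch below illustrates that test on synthetic data; the arrays and the injected slope are placeholders, not the SDSS-SNS measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(0)
      log_mass = rng.uniform(8.5, 11.5, size=345)                 # placeholder host masses
      hr_err = rng.uniform(0.08, 0.20, size=345)                  # placeholder HR uncertainties
      hr = -0.03 * (log_mass - 10.0) + rng.normal(0.0, hr_err)    # toy Hubble residuals

      line = lambda x, a, b: a * x + b
      popt, pcov = curve_fit(line, log_mass, hr, sigma=hr_err, absolute_sigma=True)
      slope, slope_err = popt[0], np.sqrt(pcov[0, 0])
      print(f"slope = {slope:.4f} +/- {slope_err:.4f} ({abs(slope) / slope_err:.1f} sigma)")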

  11. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

    We calculate opacity from k(hν) = −ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. To first order, Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can rewrite this in terms of fractional errors as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U−E)/(V−E) = B/B_0, where B is the transmitted backlighter (BL) signal and B_0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB_0/B_0, and consequently Δk/k = (1/ln T)(ΔB/B + ΔB_0/B_0) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
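    Written out, the budget above is ordinary first-order error propagation through k = -ln(T)/(ρL) with T = B/B_0. The sketch below simply evaluates those relations; the input fractions are illustrative placeholders, not values from the report.

      import math

      def opacity_fractional_error(T, dB_over_B, dB0_over_B0, dRhoL_over_RhoL):
          """First-order fractional error on k = -ln(T)/(rho*L), with T = B/B0."""
          dT_over_T = dB_over_B + dB0_over_B0     # fractional error on the transmission
          dlnT = dT_over_T                        # d(ln T) = dT/T
          return abs(dlnT / math.log(T)) + dRhoL_over_RhoL

      # Illustrative inputs: T = 0.3, 2% on each backlighter signal, 3% on rho*L.
      print(f"dk/k ~ {opacity_fractional_error(0.3, 0.02, 0.02, 0.03):.3f}")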

  12. The Segue K giant survey. II. A catalog of distance determinations for the Segue K giants in the galactic halo

    International Nuclear Information System (INIS)

    Xue, Xiang-Xiang; Rix, Hans-Walter; Ma, Zhibo; Morrison, Heather L.; Harding, Paul; Beers, Timothy C.; Ivans, Inese I.; Jacobson, Heather R.; Johnson, Jennifer; Lee, Young Sun; Lucatello, Sara; Rockosi, Constance M.; Sobeck, Jennifer S.; Yanny, Brian; Zhao, Gang; Allende Prieto, Carlos

    2014-01-01

    We present an online catalog of distance determinations for 6036 K giants, most of which are members of the Milky Way's stellar halo. Their medium-resolution spectra from the Sloan Digital Sky Survey/Sloan Extension for Galactic Understanding and Exploration are used to derive metallicities and rough gravity estimates, along with radial velocities. Distance moduli are derived from a comparison of each star's apparent magnitude with the absolute magnitude of empirically calibrated color-luminosity fiducials, at the observed (g − r)_0 color and spectroscopic [Fe/H]. We employ a probabilistic approach that makes it straightforward to properly propagate the errors in metallicities, magnitudes, and colors into distance uncertainties. We also fold in prior information about the giant-branch luminosity function and the different metallicity distributions of the SEGUE K-giant targeting sub-categories. We show that the metallicity prior plays a small role in the distance estimates, but that neglecting the luminosity prior could lead to a systematic distance modulus bias of up to 0.25 mag, compared to the case of using the luminosity prior. We find a median distance precision of 16%, with distance estimates most precise for the least metal-poor stars near the tip of the red giant branch. The precision and accuracy of our distance estimates are validated with observations of globular and open clusters. The stars in our catalog are up to 125 kpc from the Galactic center, with 283 stars beyond 50 kpc, forming the largest available spectroscopic sample of distant tracers in the Galactic halo.
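    The step from a distance modulus to a distance, and from a distance-modulus uncertainty to the quoted ~16% median distance precision, is short arithmetic: d = 10^(μ/5 + 1) pc and, to first order, σ_d/d = 0.2 ln(10) σ_μ. The sketch below shows only this final conversion with illustrative numbers; the probabilistic machinery with luminosity and metallicity priors described above is not reproduced here.

      import math

      def distance_kpc(mu):
          """Distance from the distance modulus mu = m - M."""
          return 10.0 ** (mu / 5.0 + 1.0) / 1.0e3       # pc -> kpc

      def frac_distance_error(sigma_mu):
          """First-order fractional distance error from a distance-modulus error."""
          return 0.2 * math.log(10.0) * sigma_mu

      mu, sigma_mu = 17.5, 0.35                          # illustrative values
      print(f"d ~ {distance_kpc(mu):.1f} kpc, sigma_d/d ~ {frac_distance_error(sigma_mu):.2f}")
      # sigma_mu ~ 0.35 mag corresponds to roughly the 16% median precision quoted above.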

  13. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  14. Heuristic errors in clinical reasoning.

    Science.gov (United States)

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  15. A New Extension of the Binomial Error Model for Responses to Items of Varying Difficulty in Educational Testing and Attitude Surveys.

    Directory of Open Access Journals (Sweden)

    James A Wiley

    Full Text Available We put forward a new item response model which is an extension of the binomial error model first introduced by Keats and Lord. Like the binomial error model, the basic latent variable can be interpreted as a probability of responding in a certain way to an arbitrarily specified item. For a set of dichotomous items, this model gives predictions that are similar to other single parameter IRT models (such as the Rasch model but has certain advantages in more complex cases. The first is that in specifying a flexible two-parameter Beta distribution for the latent variable, it is easy to formulate models for randomized experiments in which there is no reason to believe that either the latent variable or its distribution vary over randomly composed experimental groups. Second, the elementary response function is such that extensions to more complex cases (e.g., polychotomous responses, unfolding scales are straightforward. Third, the probability metric of the latent trait allows tractable extensions to cover a wide variety of stochastic response processes.
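    A binomial response process whose success probability is itself Beta-distributed across respondents marginalizes to a beta-binomial distribution for the total score. The sketch below shows only that elementary building block; it is not the authors' full extension to items of varying difficulty or to unfolding scales.

      from scipy.stats import betabinom

      # Latent probability theta ~ Beta(a, b); observed score X | theta ~ Binomial(n, theta),
      # so marginally X ~ BetaBinomial(n, a, b). The parameters below are illustrative.
      n, a, b = 20, 2.0, 3.0
      dist = betabinom(n, a, b)

      for k in (0, 5, 10, 15, 20):
          print(f"P(X = {k:2d}) = {dist.pmf(k):.4f}")
      print(f"E[X] = {dist.mean():.2f}, Var[X] = {dist.var():.2f}")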

  16. Objectives and methodology of Romanian SEPHAR II Survey. Project for comparing the prevalence and control of cardiovascular risk factors in two East-European countries: Romania and Poland.

    Science.gov (United States)

    Dorobantu, Maria; Tautu, Oana-Florentina; Darabont, Roxana; Ghiorghe, Silviu; Badila, Elisabeta; Dana, Minca; Dobreanu, Minodora; Baila, Ilarie; Rutkowski, Marcin; Zdrojewski, Tomasz

    2015-08-12

    Comparing the results of representative surveys conducted in different East-European countries could contribute to a better understanding and management of cardiovascular risk factors, offering grounds for the development of health policies addressing the special needs of this high cardiovascular risk region of Europe. The aim of this paper was to describe the methodology on which the comparison between the Romanian survey SEPHAR II and the Polish survey NATPOL 2011 results is based. SEPHAR II, like NATPOL 2011, is a cross-sectional survey conducted on a representative sample of the adult Romanian population (18 to 80 years) and encompasses two visits with the following components: completing the study questionnaire, blood pressure and anthropometric measurements, and collection of blood and urine samples. From a total of 2223 subjects found at 2860 visited addresses, 2044 subjects gave written consent but only 1975 subjects had eligible data for the analysis, accounting for a response rate of 69.06%. Additionally, we excluded 11 subjects who were 80 years of age (NATPOL 2011 included adult subjects only up to 79 years). Therefore, the sample size included in the statistical analysis is 1964. It has an age-group and gender structure similar to that of the Romanian population aged 18-79 years from the last census available at the time the survey was conducted (weight adjustments for epidemiological analyses range from 0.48 to 8.7). As the two surveys share many similarities, the results of SEPHAR II and NATPOL 2011 can be compared with appropriate statistical methods, offering crucial information regarding cardiovascular risk factors in a high-cardiovascular-risk European region.
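    The sample accounting above can be reproduced in a few lines; this is only a restatement of the numbers already quoted in the paragraph.

      visited_addresses = 2860
      eligible_for_analysis = 1975
      excluded_aged_80 = 11

      response_rate = eligible_for_analysis / visited_addresses   # denominator implied by the quoted 69.06%
      final_sample = eligible_for_analysis - excluded_aged_80

      print(f"response rate = {response_rate:.2%}")     # ~69.06%
      print(f"final analysis sample = {final_sample}")  # 1964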

  17. THE MUSCLES TREASURY SURVEY. II. INTRINSIC LY α AND EXTREME ULTRAVIOLET SPECTRA OF K AND M DWARFS WITH EXOPLANETS

    Energy Technology Data Exchange (ETDEWEB)

    Youngblood, Allison; France, Kevin; Loyd, R. O. Parke [Laboratory for Atmospheric and Space Physics, University of Colorado, 600 UCB, Boulder, CO 80309 (United States); Linsky, Jeffrey L. [JILA, University of Colorado and NIST, 440 UCB, Boulder, CO 80309 (United States); Redfield, Seth [Astronomy Department and Van Vleck Observatory, Wesleyan University, Middletown, CT 06459-0123 (United States); Schneider, P. Christian [European Space Research and Technology Centre (ESA/ESTEC), Keplerlaan 1, 2201 AZ Noordwijk (Netherlands); Wood, Brian E. [Naval Research Laboratory, Space Science Division, Washington, DC 20375 (United States); Brown, Alexander [Center for Astrophysics and Space Astronomy, University of Colorado, 389 UCB, Boulder, CO 80309 (United States); Froning, Cynthia [Dept. of Astronomy C1400, University of Texas, Austin, TX 78712 (United States); Miguel, Yamila [Laboratoire Lagrange, Universite de Nice-Sophia Antipolis, Observatoire de la Cote d’Azur, CNRS, Blvd de l’Observatoire, CS 34229, F-06304 Nice cedex 4 (France); Rugheimer, Sarah [Department of Earth and Environmental Sciences, Irvine Building, University of St. Andrews, St. Andrews KY16 9AL (United Kingdom); Walkowicz, Lucianne, E-mail: allison.youngblood@colorado.edu [The Adler Planetarium, 1300 S Lakeshore Dr, Chicago, IL 60605 (United States)

    2016-06-20

    The ultraviolet (UV) spectral energy distributions (SEDs) of low-mass (K- and M-type) stars play a critical role in the heating and chemistry of exoplanet atmospheres, but are not observationally well-constrained. Direct observations of the intrinsic flux of the Ly α line (the dominant source of UV photons from low-mass stars) are challenging, as interstellar H i absorbs the entire line core for even the closest stars. To address the existing gap in empirical constraints on the UV flux of K and M dwarfs, the MUSCLES Hubble Space Telescope Treasury Survey has obtained UV observations of 11 nearby M and K dwarfs hosting exoplanets. This paper presents the Ly α and extreme-UV spectral reconstructions for the MUSCLES targets. Most targets are optically inactive, but all exhibit significant UV activity. We use a Markov Chain Monte Carlo technique to correct the observed Ly α profiles for interstellar absorption, and we employ empirical relations to compute the extreme-UV SED from the intrinsic Ly α flux in ∼100 Å bins from 100–1170 Å. The reconstructed Ly α profiles have 300 km s^−1 broad cores, while >1% of the total intrinsic Ly α flux is measured in extended wings between 300 and 1200 km s^−1. The Ly α surface flux positively correlates with the Mg ii surface flux and negatively correlates with the stellar rotation period. Stars with larger Ly α surface flux also tend to have larger surface flux in ions formed at higher temperatures, but these correlations remain statistically insignificant in our sample of 11 stars. We also present H i column density measurements for 10 new sightlines through the local interstellar medium.

  18. CHANDRA OBSERVATIONS OF 3C RADIO SOURCES WITH z < 0.3. II. COMPLETING THE SNAPSHOT SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    Massaro, F. [SLAC National Laboratory and Kavli Institute for Particle Astrophysics and Cosmology, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States); Tremblay, G. R. [European Southern Observatory, Karl-Schwarzschild-Str. 2, D-85748 Garching bei Muenchen (Germany); Harris, D. E.; O' Dea, C. P. [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States); Kharb, P.; Axon, D. [Department of Physics, Rochester Institute of Technology, Carlson Center for Imaging Science 76-3144, 84 Lomb Memorial Dr., Rochester, NY 14623 (United States); Balmaverde, B.; Capetti, A. [INAF-Osservatorio Astrofisico di Torino, Strada Osservatorio 20, I-10025 Pino Torinese (Italy); Baum, S. A. [Carlson Center for Imaging Science 76-3144, 84 Lomb Memorial Dr., Rochester, NY 14623 (United States); Chiaberge, M.; Macchetto, F. D.; Sparks, W. [Space Telescope Science Institute, 3700 San Martine Drive, Baltimore, MD 21218 (United States); Gilli, R. [INAF-Osservatorio Astronomico di Bologna, Via Ranzani 1, I-40127 Bologna (Italy); Giovannini, G. [INAF-Istituto di Radioastronomia di Bologna, Via Gobetti 101, I-40129 Bologna (Italy); Grandi, P.; Torresi, E. [INAF-IASF-Istituto di Astrofisica Spaziale e fisica Cosmica di Bologna, Via P. Gobetti 101, I-40129 Bologna (Italy); Risaliti, G. [INAF-Osservatorio Astronomico di Arcetri, Largo E. Fermi 5, I-50125 Firenze (Italy)

    2012-12-15

    We report on the second round of Chandra observations of the 3C snapshot survey developed to observe the complete sample of 3C radio sources with z < 0.3 for 8 ks each. In the first paper, we illustrated the basic data reduction and analysis procedures performed for the 30 sources of the 3C sample observed during Chandra Cycle 9, while here we present the data for the remaining 27 sources observed during Cycle 12. We measured the X-ray intensity of the nuclei and of any radio hot spots and jet features with associated X-ray emission. X-ray fluxes in three energy bands, i.e., soft, medium, and hard, for all the sources analyzed are also reported. For the stronger nuclei, we also applied the standard spectral analysis, which provides the best-fit values of the X-ray spectral index and absorbing column density. In addition, a detailed analysis of bright X-ray nuclei that could be affected by pile-up has been performed. X-ray emission was detected for all the nuclei of the radio sources in our sample except for 3C 319. Among the current sample, there are two compact steep spectrum radio sources, two broad-line radio galaxies, and one wide angle tail radio galaxy, 3C 89, hosted in a cluster of galaxies clearly visible in our Chandra snapshot observation. In addition, we also detected soft X-ray emission arising from the galaxy cluster surrounding 3C 196.1. Finally, X-ray emission from hot spots has been found in three FR II radio sources and, in the case of 3C 459, we also report the detection of X-ray emission associated with the eastern radio lobe as well as X-ray emission cospatial with radio jets in 3C 29 and 3C 402.

  19. THE HETDEX PILOT SURVEY. IV. THE EVOLUTION OF [O II] EMITTING GALAXIES FROM z ∼ 0.5 TO z ∼ 0

    International Nuclear Information System (INIS)

    Ciardullo, Robin; Gronwall, Caryl; Schneider, Donald P.; Zeimann, Gregory R.

    2013-01-01

    We present an analysis of the luminosities and equivalent widths of the 284 [O II]-emitting galaxies at z ≲ 0.5 found in the pilot survey for the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX). By combining emission-line fluxes obtained from the Mitchell spectrograph on the McDonald 2.7 m telescope with deep broadband photometry from archival data, we derive each galaxy's dereddened [O II] λ3727 luminosity and calculate its total star formation rate. We show that over the last ∼5 Gyr of cosmic time, there has been substantial evolution in the [O II] emission-line luminosity function, with L* decreasing by ∼0.6 ± 0.2 dex in the observed function, and by ∼0.9 ± 0.2 dex in the dereddened relation. Accompanying this decline is a significant shift in the distribution of [O II] equivalent widths, with the fraction of high equivalent-width emitters declining dramatically with time. Overall, the data imply that the relative intensity of star formation within galaxies has decreased over the past ∼5 Gyr, and that the star formation rate density of the universe has declined by a factor of ∼2.5 between z ∼ 0.5 and z ∼ 0. These observations represent the first [O II]-based star formation rate density measurements in this redshift range, and foreshadow the advancements which will be generated by the main HETDEX survey.
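    Dereddened [O II] luminosities are turned into star formation rates through a linear calibration; the widely used Kennicutt (1998) form is SFR ≈ 1.4 × 10^-41 L([O II]) with L in erg s^-1 and SFR in M_⊙ yr^-1. The sketch below uses that standard coefficient purely for illustration (the paper's adopted calibration may differ) and also checks that a factor of ∼2.5 corresponds to ∼0.4 dex.

      import math

      def sfr_from_oii(L_oii_erg_s, coeff=1.4e-41):
          """Kennicutt-style linear calibration: SFR [Msun/yr] ~ coeff * L([O II]) [erg/s]."""
          return coeff * L_oii_erg_s

      print(f"L([O II]) = 1e41 erg/s -> SFR ~ {sfr_from_oii(1.0e41):.1f} Msun/yr")
      print(f"a factor-of-2.5 decline = {math.log10(2.5):.2f} dex")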

  20. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^−(d^n−1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
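    The qualitative point, that coherent rotations accumulate amplitude-wise while their Pauli-twirled approximation accumulates probability-wise, can already be seen on a single unencoded qubit. The sketch below compares the error probability after n small X-rotations by ε with the prediction of the corresponding stochastic bit-flip channel (flip probability sin²(ε/2) per step); it is a toy illustration, not the repetition-code calculation described in the abstract.

      import numpy as np

      eps = 0.02                                             # small coherent rotation angle per step
      I2 = np.eye(2, dtype=complex)
      X = np.array([[0, 1], [1, 0]], dtype=complex)
      Rx = np.cos(eps / 2) * I2 - 1j * np.sin(eps / 2) * X   # coherent X-rotation by eps

      psi0 = np.array([1.0, 0.0], dtype=complex)
      p_step = np.sin(eps / 2) ** 2                          # Pauli-twirled flip probability per step

      for n in (1, 10, 100, 1000):
          p_coherent = abs((np.linalg.matrix_power(Rx, n) @ psi0)[1]) ** 2   # = sin^2(n*eps/2), grows ~ n^2
          p_pauli = 0.5 * (1.0 - (1.0 - 2.0 * p_step) ** n)                  # odd number of flips, grows ~ n
          print(f"n = {n:4d}   coherent: {p_coherent:.2e}   Pauli model: {p_pauli:.2e}")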

  1. Measurements of the Rate of Type Ia Supernovae at Redshift z < ~0.3 from the SDSS-II Supernova Survey

    Energy Technology Data Exchange (ETDEWEB)

    Dilday, Benjamin; /Rutgers U., Piscataway /Chicago U. /KICP, Chicago; Smith, Mathew; /Cape Town U., Dept. Math. /Portsmouth U.; Bassett, Bruce; /Cape Town U., Dept. Math. /South African Astron. Observ.; Becker, Andrew; /Washington U., Seattle, Astron. Dept.; Bender, Ralf; /Munich, Tech. U. /Munich U. Observ.; Castander, Francisco; /Barcelona, IEEC; Cinabro, David; /Wayne State U.; Filippenko, Alexei V.; /UC, Berkeley; Frieman, Joshua A.; /Chicago U. /Fermilab; Galbany, Lluis; /Barcelona, IFAE; Garnavich, Peter M.; /Notre Dame U. /Stockholm U., OKC /Stockholm U.

    2010-01-01

    We present a measurement of the volumetric Type Ia supernova (SN Ia) rate based on data from the Sloan Digital Sky Survey II (SDSS-II) Supernova Survey. The adopted sample of supernovae (SNe) includes 516 SNe Ia at redshift z ≲ 0.3, of which 270 (52%) are spectroscopically identified as SNe Ia. The remaining 246 SNe Ia were identified through their light curves; 113 of these objects have spectroscopic redshifts from spectra of their host galaxy, and 133 have photometric redshifts estimated from the SN light curves. Based on consideration of 87 spectroscopically confirmed non-Ia SNe discovered by the SDSS-II SN Survey, we estimate that 2.04^{+1.61}_{-0.95}% of the photometric SNe Ia may be misidentified. The sample of SNe Ia used in this measurement represents an order of magnitude increase in the statistics for SN Ia rate measurements in the redshift range covered by the SDSS-II Supernova Survey. If we assume a SN Ia rate that is constant at low redshift (z < 0.15), then the SN observations can be used to infer a value of the SN rate of r_V = (2.69^{+0.34+0.21}_{-0.30-0.01}) × 10^-5 SNe yr^-1 Mpc^-3 (H_0/(70 km s^-1 Mpc^-1))^3 at a mean redshift of ∼0.12, based on 79 SNe Ia of which 72 are spectroscopically confirmed. However, the large sample of SNe Ia included in this study allows us to place constraints on the redshift dependence of the SN Ia rate based on the SDSS-II Supernova Survey data alone. Fitting a power-law model of the SN rate evolution, r_V(z) = A_p × ((1+z)/(1+z_0))^ν, over the redshift range 0.0 < z < 0.3 with z_0 = 0.21, results in A_p = (3.43^{+0.15}_{-0.15}) × 10^-5 SNe yr^-1 Mpc^-3 (H_0/(70 km s^-1 Mpc^-1))^3 and ν = 2.04^{+0.90}_{-0.89}.
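    With the fitted normalization and index, the power-law rate model can be evaluated directly. The sketch below plugs in the central values quoted above (A_p = 3.43 × 10^-5, ν = 2.04, z_0 = 0.21) and leaves the Hubble-constant scaling factor at unity.

      def snia_rate(z, A_p=3.43e-5, nu=2.04, z0=0.21):
          """Volumetric SN Ia rate r_V(z) = A_p * ((1 + z)/(1 + z0))**nu, in SNe/yr/Mpc^3
          for H0 = 70 km/s/Mpc (central fitted values quoted above)."""
          return A_p * ((1.0 + z) / (1.0 + z0)) ** nu

      for z in (0.0, 0.12, 0.21, 0.3):
          print(f"z = {z:.2f}: r_V ~ {snia_rate(z):.2e} SNe/yr/Mpc^3")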

  2. The MUSE Hubble Ultra Deep Field Survey. II. Spectroscopic redshifts and comparisons to color selections of high-redshift galaxies

    Science.gov (United States)

    Inami, H.; Bacon, R.; Brinchmann, J.; Richard, J.; Contini, T.; Conseil, S.; Hamer, S.; Akhlaghi, M.; Bouché, N.; Clément, B.; Desprez, G.; Drake, A. B.; Hashimoto, T.; Leclercq, F.; Maseda, M.; Michel-Dansac, L.; Paalvast, M.; Tresse, L.; Ventou, E.; Kollatschny, W.; Boogaard, L. A.; Finley, H.; Marino, R. A.; Schaye, J.; Wisotzki, L.

    2017-11-01

    We have conducted a two-layered spectroscopic survey (1' × 1' ultra deep and 3' × 3' deep regions) in the Hubble Ultra Deep Field (HUDF) with the Multi Unit Spectroscopic Explorer (MUSE). The combination of a large field of view, high sensitivity, and wide wavelength coverage provides an order of magnitude improvement in spectroscopically confirmed redshifts in the HUDF; i.e., 1206 secure spectroscopic redshifts for Hubble Space Telescope (HST) continuum selected objects, which corresponds to 15% of the total (7904). The redshift distribution extends well beyond z > 3 and to HST/F775W magnitudes as faint as ≈ 30 mag (AB, 1σ). In addition, 132 secure redshifts were obtained for sources with no HST counterparts that were discovered in the MUSE data cubes by a blind search for emission-line features. In total, we present 1338 high quality redshifts, which is a factor of eight increase compared with the previously known spectroscopic redshifts in the same field. We assessed redshifts mainly with spectral features such as [O II] and Lyα, and we also evaluate the color selection (dropout) diagrams of high-z galaxies. The selection condition for F336W dropouts successfully captures ≈ 80% of the targeted z ∼ 2.7 galaxies. However, for higher redshift selections (F435W, F606W, and F775W dropouts), the success rates decrease to ≈ 20-40%. We empirically redefine the selection boundaries in an attempt to improve them to ≈ 60%. The revised boundaries allow bluer colors that capture Lyα emitters with high Lyα equivalent widths falling in the broadbands used for the color-color selection. Along with this paper, we release the redshift and line flux catalog. Based on observations made with ESO telescopes at the La Silla Paranal Observatory under program IDs 094.A-0289(B), 095.A-0010(A), 096.A-0045(A) and 096.A-0045(B). MUSE Ultra Deep Field redshift catalogs (Full Table A.1) are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc

  3. Finding errors in big data

    NARCIS (Netherlands)

    Puts, Marco; Daas, Piet; de Waal, A.G.

    No data source is perfect. Mistakes inevitably creep in. Spotting errors is hard enough when dealing with survey responses from several thousand people, but the difficulty is multiplied hugely when that mysterious beast Big Data comes into play. Statistics Netherlands is about to publish its first

  4. Airborne gamma-ray spectrometer and magnetometer survey: Cameron A, Arizona, detail area. Volume II A. Final report

    International Nuclear Information System (INIS)

    1983-01-01

    Volume II A contains appendices for: stacked profiles; geologic histograms; geochemical histograms; speed and altitude histograms; geologic statistical tables; geochemical statistical tables; magnetic and ancillary profiles; and test line data

  5. Airborne gamma-ray spectrometer and magnetometer survey: Monument Valley B, Utah, detail area. Volume II A. Final report

    International Nuclear Information System (INIS)

    1983-01-01

    Volume II A contains appendices for: stacked profiles; geologic histograms; geochemical histograms; speed and altitude histograms; geologic statistical tables; geochemical statistical tables; magnetic and ancillary profiles; and test line data

  6. Airborne gamma-ray spectrometer and magnetometer survey: Monument Valley B, Utah, detail area. Volume II B. Final report

    International Nuclear Information System (INIS)

    1983-01-01

    Volume II B contains appendices for: flight line maps; geology maps; explanation of geologic legend; flight line/geology maps; radiometric contour maps; magnetic contour maps; and geochemical factor analysis maps

  7. Airborne gamma-ray spectrometer and magnetometer survey: Monument Valley A, Utah, detail area. Volume II B. Final report

    International Nuclear Information System (INIS)

    1983-01-01

    Volume II B contains appendices for: flight line maps; geology maps; explanation of geologic legend; flight line/geology maps; radiometric contour maps; magnetic contour maps; multi-variant analysis maps; and geochemical factor analysis maps

  8. Cardiovascular prevention guidelines in daily practice: a comparison of EUROASPIRE I, II, and III surveys in eight European countries.

    LENUS (Irish Health Repository)

    Kotseva, Kornelia

    2009-03-14

    The first and second EUROASPIRE surveys showed high rates of modifiable cardiovascular risk factors in patients with coronary heart disease. The third EUROASPIRE survey was done in 2006-07 in 22 countries to see whether preventive cardiology had improved and if the Joint European Societies' recommendations on cardiovascular disease prevention are being followed in clinical practice.

  9. Spitzer sage survey of the large magellanic cloud. II. Evolved stars and infrared color-magnitude diagrams

    NARCIS (Netherlands)

    Blum, R. D.; Mould, J. R.; Olsen, K. A.; Frogel, J. A.; Meixner, M.; Markwick-Kemper, F.; Indebetouw, R.; Whitney, B.; Meade, M.; Babler, B.; Churchwell, E. B.; Gordon, K.; Engelbracht, C.; For, B. -Q.; Misselt, K.; Vijh, U.; Leitherer, C.; Volk, K.; Points, S.; Reach, W.; Hora, J. L.; Bernard, J. -P.; Boulanger, F.; Bracker, S.; Cohen, M.; Fukui, Y.; Gallagher, J.; Gorjian, V.; Harris, J.; Kelly, D.; Kawamura, A.; Latter, W. B.; Madden, S.; Mizuno, A.; Mizuno, N.; Oey, M. S.; Onishi, T.; Paladini, R.; Panagia, N.; Perez-Gonzalez, P.; Shibai, H.; Sato, S.; Smith, L.; Staveley-Smith, L.; Tielens, A.G.G.M; Ueta, T.; Van Dyk, S.; Zaritsky, D.; Werner, M.J.

    Color-magnitude diagrams (CMDs) are presented for the Spitzer SAGE (Surveying the Agents of a Galaxy's Evolution) survey of the Large Magellanic Cloud (LMC). IRAC and MIPS 24 μm epoch 1 data are presented. These data represent the deepest, widest mid-infrared CMDs of their kind ever produced in

  10. NURE aerial gamma-ray and magnetic detail survey, Lost Creek, Washington area. Volume II. Final report

    International Nuclear Information System (INIS)

    1981-05-01

    Maps and the data from the aerial surveys are included in this report. The purposes of the surveys were to acquire and compile geologic and other information in order to assess the magnitude and distribution of uranium resources and to determine areas favorable for the occurrence of uranium in the USA

  11. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  12. THE TYPE II SUPERNOVA RATE IN z ≈ 0.1 GALAXY CLUSTERS FROM THE MULTI-EPOCH NEARBY CLUSTER SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    Graham, M. L.; Sand, D. J. [Las Cumbres Observatory Global Telescope Network, 6740 Cortona Drive, Suite 102, Santa Barbara, CA 93117 (United States); Bildfell, C. J.; Pritchet, C. J. [Department of Physics and Astronomy, University of Victoria, P.O. Box 3055, STN CSC, Victoria BC V8W 3P6 (Canada); Zaritsky, D.; Just, D. W.; Herbert-Fort, S. [Steward Observatory, University of Arizona, Tucson, AZ 85721 (United States); Hoekstra, H. [Leiden Observatory, Leiden University, Niels Bohrweg 2, NL-2333 CA Leiden (Netherlands); Sivanandam, S. [Dunlap Institute for Astronomy and Astrophysics, 50 St. George St., Toronto, ON M5S 3H4 (Canada); Foley, R. J. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2012-07-01

    We present seven spectroscopically confirmed Type II cluster supernovae (SNe II) discovered in the Multi-Epoch Nearby Cluster Survey, a supernova survey targeting 57 low-redshift 0.05 < z < 0.15 galaxy clusters with the Canada-France-Hawaii Telescope. We find the rate of Type II supernovae within R_200 of z ≈ 0.1 galaxy clusters to be 0.026^{+0.085}_{-0.018}(stat)^{+0.003}_{-0.001}(sys) SNuM. Surprisingly, one SN II is in a red-sequence host galaxy that shows no clear evidence of recent star formation (SF). This is unambiguous evidence in support of ongoing, low-level SF in at least some cluster elliptical galaxies, and illustrates that galaxies that appear to be quiescent cannot be assumed to host only Type Ia SNe. Based on this single SN II we make the first measurement of the SN II rate in red-sequence galaxies, and find it to be 0.007^{+0.014}_{-0.007}(stat)^{+0.009}_{-0.001}(sys) SNuM. We also make the first derivation of cluster specific star formation rates (sSFR) from cluster SN II rates. We find that for all galaxy types the sSFR is 5.1^{+15.8}_{-3.1}(stat) ± 0.9(sys) M_⊙ yr^{-1} (10^{12} M_⊙)^{-1}, and for red-sequence galaxies only it is 2.0^{+4.2}_{-0.9}(stat) ± 0.4(sys) M_⊙ yr^{-1} (10^{12} M_⊙)^{-1}. These values agree with SFRs measured from infrared and ultraviolet photometry, and Hα emission from optical spectroscopy. Additionally, we use the SFR derived from our SN II rate to show that although a small fraction of cluster Type Ia SNe may originate in the young stellar population and experience a short delay time, these results do not preclude the use of cluster SN Ia rates to derive the late-time delay time distribution for SNe Ia.

  13. The Associated Absorption Features in Quasar Spectra of the Sloan Digital Sky Survey. I. Mg II Absorption Doublets

    Science.gov (United States)

    Chen, Zhi-Fu; Huang, Wei-Rong; Pang, Ting-Ting; Huang, Hong-Yan; Pan, Da-Sheng; Yao, Min; Nong, Wei-Jing; Lu, Mei-Mei

    2018-03-01

    Using the SDSS spectra of quasars included in the DR7Q or DR12Q catalogs, we search for Mg II λλ2796, 2803 narrow absorption doublets in the spectral data around the Mg II λ2798 emission lines. We obtain 17,316 Mg II doublets, within the redshift range of 0.3299 ≤ z_abs ≤ 2.5663. We find that a velocity offset of υ_r … 6000 km s^-1. If associated Mg II absorbers are defined by υ_r … present at least one associated Mg II system with W_r(λ2796) ≥ 0.2 Å. The fraction of associated Mg II systems with high-velocity outflows correlates with the average luminosities of their central quasars, indicating a relationship between outflows and the quasar feedback power. The υ_r distribution of the outflow Mg II absorbers is peaked at 1023 km s^-1, which is smaller than the corresponding value of the outflow C IV absorbers. The redshift number density evolution of absorbers (dn/dz) limited by υ_r > -3000 km s^-1 differs from that of absorbers constrained by υ_r > 2000 km s^-1. Absorbers limited by υ_r > 2000 km s^-1 and higher values exhibit profiles similar to dn/dz. In addition, the dn/dz is smaller when absorbers are constrained with larger υ_r. The distributions of equivalent widths, and the ratio of W_r(λ2796)/W_r(λ2803), are the same for associated and intervening systems, and independent of quasar luminosity.
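
    As an aside on the record above: identifying λλ2796, 2803 doublets amounts to pairing detected absorption lines whose observed wavelengths share a common absorber redshift. The sketch below is a generic illustration of that idea, not the authors' pipeline; the input centroids and matching tolerance are made up.

        # Minimal sketch: pair detected absorption lines into Mg II 2796/2803
        # doublet candidates by matching their observed wavelength ratio.
        # The input centroids (Angstrom) and tolerance are illustrative only.
        MGII_BLUE, MGII_RED = 2796.35, 2803.53   # rest wavelengths (Angstrom)

        def find_mgii_doublets(obs_wavelengths, ratio_tol=2e-4):
            """Return (blue, red, z_abs) triples whose ratio matches the doublet."""
            target = MGII_RED / MGII_BLUE
            lines = sorted(obs_wavelengths)
            doublets = []
            for i, blue in enumerate(lines):
                for red in lines[i + 1:]:
                    if abs(red / blue - target) < ratio_tol:
                        doublets.append((blue, red, blue / MGII_BLUE - 1.0))
            return doublets

        # One true doublet hidden at z_abs ~ 0.43 among unrelated lines.
        print(find_mgii_doublets([3999.5, 4004.2, 4009.7, 4123.0]))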

  14. THE SECOND SURVEY OF THE MOLECULAR CLOUDS IN THE LARGE MAGELLANIC CLOUD BY NANTEN. II. STAR FORMATION

    International Nuclear Information System (INIS)

    Kawamura, Akiko; Mizuno, Yoji; Minamidani, Tetsuhiro; Mizuno, Norikazu; Onishi, Toshikazu; Fukui, Yasuo; Fillipovic, Miroslav D.; Staveley-Smith, Lister; Kim, Sungeun; Mizuno, Akira

    2009-01-01

    We studied star formation activities in the molecular clouds in the Large Magellanic Cloud. We have utilized the second catalog of 272 molecular clouds obtained by NANTEN to compare the cloud distribution with signatures of massive star formation including stellar clusters, and optical and radio H II regions. We find that the molecular clouds are classified into three types according to the activities of massive star formation: Type I shows no signature of massive star formation; Type II is associated with relatively small H II region(s); and Type III with both H II region(s) and young stellar cluster(s). The radio continuum sources were used to confirm that Type I giant molecular clouds (GMCs) do not host optically hidden H II regions. These signatures of massive star formation show a good spatial correlation with the molecular clouds in the sense that they are located within ∼100 pc of the molecular clouds. Among possible ideas to explain the GMC types, we favor that the types indicate an evolutionary sequence; i.e., the youngest phase is Type I, followed by Type II, and the last phase is Type III, where the most active star formation takes place leading to cloud dispersal. The number of the three types of GMCs should be proportional to the timescale of each evolutionary stage if a steady state of massive star and cluster formation is a good approximation. By adopting the timescale of the youngest stellar clusters, 10 Myr, we roughly estimate the timescales of Types I, II, and III to be 6 Myr, 13 Myr, and 7 Myr, respectively, corresponding to a lifetime of 20-30 Myr for the GMCs with a mass above the completeness limit, 5 × 10^4 M_⊙.
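
    The lifetime estimate quoted above rests on the steady-state argument that the number of GMCs of each type is proportional to the time spent in that phase. A back-of-the-envelope sketch of that bookkeeping follows; the per-type counts are hypothetical placeholders rather than the NANTEN catalog values, and pinning one phase to 10 Myr is a simplification of the paper's calibration via the youngest clusters.

        # Steady-state bookkeeping: phase lifetime proportional to the number
        # of GMCs observed in that type, normalized by pinning one phase to a
        # known timescale. Counts and the 10 Myr anchor are placeholders.
        counts = {"Type I": 60, "Type II": 130, "Type III": 70}   # hypothetical
        anchor, anchor_myr = "Type III", 10.0                     # assumed calibration

        myr_per_cloud = anchor_myr / counts[anchor]
        lifetimes = {t: round(n * myr_per_cloud, 1) for t, n in counts.items()}
        print(lifetimes, "-> total GMC lifetime ~", sum(lifetimes.values()), "Myr")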

  15. Aerial radiometric and magnetic survey; Brushy Basin detail survey: Price/Salina national topographic map sheets, Utah. Volume III. Area II: graphic data, Section I-II. Final report

    International Nuclear Information System (INIS)

    1981-01-01

    This volume contains all of the graphic data for Area II which consists of map lines 1660 to 3400 and 5360 to 5780, and tie lines 6100, 6120, and 6160. Due to the large map scale of the presented data (1:62,500), this sub-section was divided into eleven 7-1/2 min quadrant sheets

  16. An Impurity Emission Survey in the near UV and Visible Spectral Ranges of Electron Cyclotron Heated (ECH) Plasma in the TJ-II Stellarator

    International Nuclear Information System (INIS)

    McCarthy, K. J.; Zurro, B.; Baciero, A.

    2001-01-01

    We report on a near-ultraviolet and visible spectroscopic survey (220-600 nm) of electron cyclotron resonance (ECR) heated plasmas created in the TJ-II stellarator, with central electron temperatures up to 2 keV and central electron densities up to 1.7 × 10^19 m^-3. Approximately 1200 lines from thirteen elements have been identified. The purpose of the work is to identify the principal impurities and spectral lines present in TJ-II plasmas, as well as their possible origin, and to search for transitions from highly ionised ions. This work will act as a base for identifying suitable transitions for following the evolution of impurities under different operating regimes and multiplet systems for line polarisation studies. It is intended to use the database created as a spectral line reference for comparing spectra under different operating and plasma heating regimes. (Author)
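
    The core task described above, matching observed line centroids against a reference list, can be illustrated with a minimal sketch; the short reference list and the 0.05 nm tolerance below are illustrative, not the TJ-II database.

        # Match observed line centroids (nm) against a reference line list
        # within a wavelength tolerance. Reference entries and the 0.05 nm
        # tolerance are illustrative, not the TJ-II line database.
        reference = [
            (464.74, "C III"), (468.57, "He II"),
            (486.13, "H I (H-beta)"), (500.68, "O III"), (656.28, "H I (H-alpha)"),
        ]

        def identify(observed_nm, tol_nm=0.05):
            """Return (observed, ion, reference) triples within the tolerance."""
            return [(wl, ion, ref) for wl in observed_nm
                    for ref, ion in reference if abs(wl - ref) <= tol_nm]

        print(identify([468.59, 500.70, 310.00]))   # matches He II and O III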

  17. OBSERVATIONS OF Mg II ABSORPTION NEAR z ∼ 1 GALAXIES SELECTED FROM THE DEEP2 REDSHIFT SURVEY

    International Nuclear Information System (INIS)

    Lovegrove, Elizabeth; Simcoe, Robert A.

    2011-01-01

    We study the frequency of Mg II absorption in the outer halos of galaxies at z = 0.6-1.4 (with median z = 0.87), using new spectra obtained of 10 background quasars with galaxy impact parameters of b r = 0.15-1.0 Å, though not all absorbers correlate with DEEP galaxies. We find five unique absorbers within Δv = 500 km s^-1 and b r > 1.0 Å, consistent with other samples of galaxy-selected Mg II systems. We speculate that Mg II systems with 0.3 r r are more likely to reflect the more recent star-forming history of their associated galaxies.

  18. Assessing Measurement Error in Medicare Coverage

    Data.gov (United States)

    U.S. Department of Health & Human Services — Assessing Measurement Error in Medicare Coverage from the National Health Interview Survey: using linked administrative data to validate Medicare coverage estimates...
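
    A minimal sketch of the validation idea, comparing survey-reported coverage with linked administrative enrollment and tabulating over- and under-reporting rates; the linked records below are hypothetical placeholders.

        # Compare survey-reported Medicare coverage with linked administrative
        # enrollment records and summarize misreporting. Records are hypothetical.
        linked = [  # (survey_reports_coverage, admin_record_shows_coverage)
            (True, True), (True, False), (False, True),
            (True, True), (False, False), (True, True),
        ]

        n = len(linked)
        over = sum(s and not a for s, a in linked)    # reported but not enrolled
        under = sum(a and not s for s, a in linked)   # enrolled but not reported
        print(f"over-report rate: {over / n:.2f}, under-report rate: {under / n:.2f}")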

  19. Gemini NIFS survey of feeding and feedback processes in nearby active galaxies - II. The sample and surface mass density profiles

    Science.gov (United States)

    Riffel, R. A.; Storchi-Bergmann, T.; Riffel, R.; Davies, R.; Bianchin, M.; Diniz, M. R.; Schönell, A. J.; Burtscher, L.; Crenshaw, M.; Fischer, T. C.; Dahmer-Hahn, L. G.; Dametto, N. Z.; Rosario, D.

    2018-02-01

    We present and characterize a sample of 20 nearby Seyfert galaxies selected for having BAT 14-195 keV luminosities L_X ≥ 10^41.5 erg s^-1, redshift z ≤ 0.015, being accessible for observations with the Gemini Near-Infrared Field Spectrograph (NIFS) and showing extended [O III]λ5007 emission. Our goal is to study Active Galactic Nucleus (AGN) feeding and feedback processes from near-infrared integral-field spectra, which include both ionized (H II) and hot molecular (H2) emission. This sample is complemented by another nine Seyfert galaxies previously observed with NIFS. We show that the host galaxy properties (absolute magnitudes M_B, M_H, central stellar velocity dispersion and axial ratio) show a similar distribution to those of the 69 BAT AGN. For the 20 galaxies already observed, we present surface mass density (Σ) profiles for H II and H2 in their inner ˜500 pc, showing that H II emission presents a steeper radial gradient than H2. This can be attributed to the different excitation mechanisms: ionization by AGN radiation for H II and heating by X-rays for H2. The mean surface mass densities are in the range (0.2 ≤ ΣH II ≤ 35.9) M⊙ pc^-2, and (0.2 ≤ ΣH2 ≤ 13.9) × 10^-3 M⊙ pc^-2, while the ratios between the H II and H2 masses range between ˜200 and 8000. The sample presented here will be used in future papers to map AGN gas excitation and kinematics, providing a census of the mass inflow and outflow rates and power as well as their relation with the AGN luminosity.

  20. Getting to the Source: a Survey of Quantitative Data Sources Available to the Everyday Librarian: Part II: Data Sources from Specific Library Applications

    Directory of Open Access Journals (Sweden)

    Lisa Goddard

    2007-03-01

    This is the second part of a two-part article that provides a survey of data sources which are likely to be immediately available to the typical practitioner who wishes to engage in statistical analysis of collections and services within his or her own library. Part I outlines the data elements which can be extracted from web server logs, and discusses web log analysis tools. Part II looks at logs, reports, and data sources from proxy servers, resource vendors, link resolvers, federated search engines, institutional repositories, electronic reference services, and the integrated library system.

  1. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
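
    A minimal numerical sketch of the objects this chapter treats, assuming a bivariate normal position error: the probability-ellipse semi-axes at a chosen confidence level follow from the covariance eigenvalues, and the circular error (here the median radial error) can be estimated by Monte Carlo. The covariance values are illustrative.

        # From a 2-D position-error covariance matrix: probability-ellipse
        # semi-axes at a chosen confidence level, plus a Monte Carlo estimate
        # of the circular error probable (median radial error). Values are
        # illustrative, not taken from the chapter.
        import numpy as np

        cov = np.array([[4.0, 1.5],
                        [1.5, 2.0]])          # covariance of (x, y) errors

        # For a bivariate normal, the region x^T C^-1 x <= k^2 has probability
        # 1 - exp(-k^2 / 2); solve for k at the 90% level.
        k = np.sqrt(-2.0 * np.log(1.0 - 0.90))
        semi_axes = k * np.sqrt(np.linalg.eigh(cov)[0])

        rng = np.random.default_rng(0)
        xy = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
        cep = np.median(np.hypot(xy[:, 0], xy[:, 1]))
        print("90% ellipse semi-axes:", np.round(semi_axes, 2), " CEP:", round(cep, 2))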

  2. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
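
    A generic first-order (Taylor-series) propagation sketch for a function of several independently measured values, with a toy inventory-difference example in the spirit of a materials balance; it is not the chapter's own worked example.

        # First-order propagation: for y = f(x1..xn) with independent errors,
        # Var(y) ~ sum_i (df/dx_i)^2 Var(x_i). Gradients are taken numerically.
        # The inventory-difference example is illustrative only.
        import numpy as np

        def propagate(f, x, sigma, eps=1e-6):
            """Combine independent 1-sigma errors through f by central differences."""
            x = np.asarray(x, dtype=float)
            grad = np.empty_like(x)
            for i in range(x.size):
                step = np.zeros_like(x)
                step[i] = eps * max(abs(x[i]), 1.0)
                grad[i] = (f(x + step) - f(x - step)) / (2.0 * step[i])
            return float(np.sqrt(np.sum((grad * np.asarray(sigma, dtype=float)) ** 2)))

        # Toy balance: difference = beginning + receipts - shipments - ending
        balance = lambda v: v[0] + v[1] - v[2] - v[3]
        values = [100.0, 40.0, 35.0, 104.0]   # measured amounts (illustrative)
        sigmas = [0.5, 0.2, 0.2, 0.5]         # 1-sigma measurement errors
        print(balance(values), "+/-", round(propagate(balance, values, sigmas), 3))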

  3. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  4. The Impact of Repeated Lying on Survey Results

    Directory of Open Access Journals (Sweden)

    Thomas Chesney

    2013-01-01

    We study the effects on results of participants completing a survey more than once, a phenomenon known as farming. Using data from a real social science study as a baseline, three strategies that participants might use to farm are studied by Monte Carlo simulation. Findings show that farming influences survey results and can cause both Type I (false positive) and Type II (false negative) errors in statistical hypothesis testing, in unpredictable ways.
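
    In the spirit of the study above (though not reproducing its three strategies), a small Monte Carlo sketch showing how duplicated respondents shrink the apparent standard error and inflate the Type I error of a two-sample t-test; group sizes, duplication rate and significance level are arbitrary choices.

        # "Farming" modeled crudely as a fraction of respondents submitting the
        # same answers several times; duplicates understate the standard error
        # and inflate the Type I error of a two-sample t-test. All parameter
        # choices below (n, duplication, alpha, trials) are arbitrary.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        n, dup_frac, copies, alpha, trials = 100, 0.2, 3, 0.05, 2000

        def significant(farmed):
            a = rng.normal(0.0, 1.0, n)   # both groups share the same mean,
            b = rng.normal(0.0, 1.0, n)   # so any "significant" result is an error
            if farmed:
                dups = a[: int(dup_frac * n)]
                a = np.concatenate([a] + [dups] * (copies - 1))
            return stats.ttest_ind(a, b).pvalue < alpha

        for farmed in (False, True):
            rate = np.mean([significant(farmed) for _ in range(trials)])
            print(f"farmed={farmed}: empirical Type I error ~ {rate:.3f}")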

  5. Adding the s-Process Element Cerium to the APOGEE Survey: Identification and Characterization of Ce II Lines in the H-band Spectral Window

    Science.gov (United States)

    Cunha, Katia; Smith, Verne V.; Hasselquist, Sten; Souto, Diogo; Shetrone, Matthew D.; Allende Prieto, Carlos; Bizyaev, Dmitry; Frinchaboy, Peter; García-Hernández, D. Anibal; Holtzman, Jon; Johnson, Jennifer A.; Jőnsson, Henrik; Majewski, Steven R.; Mészáros, Szabolcs; Nidever, David; Pinsonneault, Mark; Schiavon, Ricardo P.; Sobeck, Jennifer; Skrutskie, Michael F.; Zamora, Olga; Zasowski, Gail; Fernández-Trincado, J. G.

    2017-08-01

    Nine Ce II lines have been identified and characterized within the spectral window observed by the Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey (between λ1.51 and 1.69 μm). At solar metallicities, cerium is an element that is produced predominantly as a result of the slow capture of neutrons (the s-process) during asymptotic giant branch stellar evolution. The Ce II lines were identified using a combination of a high-resolution (R = λ/δλ = 100,000) Fourier Transform Spectrometer (FTS) spectrum of α Boo and an APOGEE spectrum (R = 22,400) of a metal-poor, but s-process enriched, red giant (2M16011638-1201525). Laboratory oscillator strengths are not available for these lines. Astrophysical gf-values were derived using α Boo as a standard star, with the absolute cerium abundance in α Boo set by using optical Ce II lines that have precise published laboratory gf-values. The near-infrared Ce II lines identified here are also analyzed, as consistency checks, in a small number of bright red giants using archival FTS spectra, as well as a small sample of APOGEE red giants, including two members of the open cluster NGC 6819, two field stars, and seven metal-poor N- and Al-rich stars. The conclusion is that this set of Ce II lines can be detected and analyzed in a large fraction of the APOGEE red giant sample and will be useful for probing chemical evolution of the s-process products in various populations of the Milky Way.

  6. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures, which scrutinize the consequences of random errors alone, turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  7. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  8. The LAMOST survey of background quasars in the vicinity of the Andromeda and Triangulum galaxies. II. Results from the commissioning observations and the pilot surveys

    International Nuclear Information System (INIS)

    Huo, Zhi-Ying; Bai, Zhong-Rui; Chen, Jian-Jun; Chen, Xiao-Yan; Du, Bing; Jia, Lei; Lei, Ya-Juan; Liu, Xiao-Wei; Yuan, Hai-Bo; Xiang, Mao-Sheng; Huang, Yang; Zhang, Hui-Hua; Yan, Lin; Chu, Jia-Ru; Chu, Yao-Quan; Hu, Hong-Zhuan; Cui, Xiang-Qun; Hou, Yong-Hui; Hu, Zhong-Wen; Jiang, Fang-Hua

    2013-01-01

    We present new quasars discovered in the vicinity of the Andromeda and Triangulum galaxies with the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, also named the Guoshoujing Telescope, during the 2010 and 2011 observational seasons. Quasar candidates are selected based on the available Sloan Digital Sky Survey, Kitt Peak National Observatory 4 m telescope, Xuyi Schmidt Telescope Photometric Survey optical, and Wide-field Infrared Survey Explorer near-infrared photometric data. We present 509 new quasars discovered in a stripe of ∼135 deg^2 from M31 to M33 along the Giant Stellar Stream in the 2011 pilot survey data sets, and also 17 new quasars discovered in an area of ∼100 deg^2 that covers the central region and the southeastern halo of M31 in the 2010 commissioning data sets. These 526 new quasars have i magnitudes ranging from 15.5 to 20.0, redshifts from 0.1 to 3.2. They represent a significant increase of the number of identified quasars in the vicinity of M31 and M33. There are now 26, 62, and 139 known quasars in this region of the sky with i magnitudes brighter than 17.0, 17.5, and 18.0, respectively, of which 5, 20, and 75 are newly discovered. These bright quasars provide an invaluable collection with which to probe the kinematics and chemistry of the interstellar/intergalactic medium in the Local Group of galaxies. A total of 93 quasars are now known with locations within 2.5° of M31, of which 73 are newly discovered. Tens of quasars are now known to be located behind the Giant Stellar Stream, and hundreds are behind the extended halo and its associated substructures of M31. The much enlarged sample of known quasars in the vicinity of M31 and M33 can potentially be utilized to construct a perfect astrometric reference frame to measure the minute proper motions (PMs) of M31 and M33, along with the PMs of substructures associated with the Local Group of galaxies. Those PMs are some of the most fundamental properties of the Local Group.

  9. SIMULATIONS OF WIDE-FIELD WEAK-LENSING SURVEYS. II. COVARIANCE MATRIX OF REAL-SPACE CORRELATION FUNCTIONS

    International Nuclear Information System (INIS)

    Sato, Masanori; Matsubara, Takahiko; Takada, Masahiro; Hamana, Takashi

    2011-01-01

    Using 1000 ray-tracing simulations for a Λ-dominated cold dark matter model in Sato et al., we study the covariance matrix of cosmic shear correlation functions, which are the standard statistic used in previous measurements. The shear correlation function of a particular separation angle is affected by Fourier modes over a wide range of multipoles, even beyond a survey area, which complicates the analysis of the covariance matrix. To overcome such obstacles we first construct Gaussian shear simulations from the 1000 realizations and then use the Gaussian simulations to disentangle the Gaussian covariance contribution to the covariance matrix we measured from the original simulations. We found that an analytical formula of Gaussian covariance overestimates the covariance amplitudes due to an effect of the finite survey area. Furthermore, the clean separation of the Gaussian covariance allows us to examine the non-Gaussian covariance contributions as a function of separation angles and source redshifts. For upcoming surveys with typical source redshifts of z_s = 0.6 and 1.0, the non-Gaussian contribution to the diagonal covariance components at 1 arcmin scales is greater than the Gaussian contribution by a factor of 20 and 10, respectively. Predictions based on the halo model reproduce the simulation results qualitatively well; however, they show a sizable disagreement in the covariance amplitudes. By combining these simulation results we develop a fitting formula to the covariance matrix for a survey with arbitrary area coverage, taking into account effects of the finiteness of survey area on the Gaussian covariance.
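
    The basic step described above, estimating a correlation-function covariance matrix from an ensemble of realizations, can be sketched as follows; the synthetic "measurements" stand in for the ray-tracing outputs and are purely illustrative.

        # Estimate the covariance matrix of a binned correlation function from
        # an ensemble of independent realizations. The synthetic "measurements"
        # stand in for ray-tracing outputs and are purely illustrative.
        import numpy as np

        rng = np.random.default_rng(1)
        n_real, n_bins = 1000, 15
        theta = np.logspace(0, 2, n_bins)           # separation angle (arcmin)
        mean_signal = 1e-4 * theta ** -0.8          # smooth fake signal

        # Correlated noise between bins: squared-exponential kernel in log(theta)
        # plus a small diagonal jitter for numerical stability.
        dlog = np.subtract.outer(np.log(theta), np.log(theta))
        cov_true = 1e-10 * (np.exp(-0.5 * (dlog / 0.7) ** 2) + 1e-6 * np.eye(n_bins))
        xi = mean_signal + rng.multivariate_normal(np.zeros(n_bins), cov_true, size=n_real)

        cov_est = np.cov(xi, rowvar=False)          # (n_bins, n_bins) covariance
        corr = cov_est / np.sqrt(np.outer(np.diag(cov_est), np.diag(cov_est)))
        print("per-bin errors:", np.sqrt(np.diag(cov_est))[:3])
        print("neighbouring-bin correlation:", round(float(corr[0, 1]), 2))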

  10. A Survey for Low Surface Brightness Galaxies Around M31. II. The Newly Discovered Dwarf Andromeda VI

    OpenAIRE

    Armandroff, Taft E.; Jacoby, George H.; Davies, James E.

    1999-01-01

    We present B-, V-, and I-band images, as well as an H alpha image, of And VI. This is the second newly identified dwarf spheroidal (dSph) companion to M31 found using a digital filtering technique applied to the second Palomar Sky Survey for which 1550 square degrees now have been surveyed. And VI was confirmed to be a nearby dSph galaxy when it resolved into stars easily with a short 4-m V-band exposure. Sub-arcsec images taken at the Kitt Peak WIYN 3.5-m telescope provided (I,V-I) and (V,B-...

  11. Domestic violence and immigration status among Latina mothers in the child welfare system: findings from the National Survey of Child and Adolescent Well-being II (NSCAW II).

    Science.gov (United States)

    Ogbonnaya, Ijeoma Nwabuzor; Finno-Velasquez, Megan; Kohl, Patricia L

    2015-01-01

    Many children involved with the child welfare system witness parental domestic violence. The association between children's domestic violence exposure and child welfare involvement may be influenced by certain socio-cultural factors; however, minimal research has examined this relationship. The current study compares domestic violence experiences and case outcomes among Latinas who are legal immigrants (n=39), unauthorized immigrants (n=77), naturalized citizens (n=30), and US-born citizen mothers (n=383) reported for child maltreatment. This analysis used data from the second round of the National Survey of Child and Adolescent Well-being. Mothers were asked whether they experienced domestic violence during the past year. In addition, data were collected to assess if (a) domestic violence was the primary abuse type reported and, if so, (b) the maltreatment allegation was substantiated. Results show that naturalized citizens, legal residents, and unauthorized immigrants did not differ from US-born citizens in self-reports of domestic violence; approximately 33% of mothers reported experiences of domestic violence within the past year. Yet, unauthorized immigrants were 3.76 times more likely than US-born citizens to have cases with allegations of domestic violence as the primary abuse type. Despite higher rates of alleged domestic violence, unauthorized immigrants were not more likely than US-born citizens to have these cases substantiated for domestic violence (F(2.26, 153.99)=0.709, p=.510). Findings highlight that domestic violence is not accurately accounted for in families with unauthorized immigrant mothers. We recommend that child welfare workers be trained to properly assess and fulfill the needs of immigrant families, particularly as it relates to domestic violence. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Survey on problems in developing technologies for the global environment issues (Version II); Chikyu kankyo mondai gijutsu kaihatsu kadai chosa. 2

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1990-07-01

    This paper describes a survey on problems in developing technologies for the global environment issues. Technological development of means to reduce the generation of environmental problems, and of substitutive means that avoid generating them, is being pursued specifically in the Sunshine Project and the Moonlight Project. The Chemical Technology Research Institute assumes that it has a responsibility to contribute actively, from the field of chemistry, to developing a technological system that matches the substance circulation mechanism of the Earth. Therefore, the Institute has organized working groups that have been identifying problems from their expert standpoints and extracting study assignments. Following Version I, Version II has been compiled. Version II takes up the simulation of global warming mechanisms, the behavior of gases dissolved in oceans, and the possibility of fixing CO2 in the oceans. With respect to fluorocarbons, Version II describes the development of substitute substances, their stability, combustion as a destruction technique, and destruction under supercritical conditions. Regarding CO2, the version introduces technologies to re-use CO2 as a resource by means of membrane separation, storage, and catalytic hydrogenation. The volume also dwells on CO2 reduction using photochemical and electrochemical reactions, and on CO2 reduction and photosynthesis using semiconductors as photocatalysts and electrodes. (NEDO)

  13. Industry Wage Surveys: Banking and Life Insurance, December 1976. Part I--Banking. Part II--Life Insurance. Bulletin 1988.

    Science.gov (United States)

    Barsky, Carl

    This report presents the results of a survey conducted by the Bureau of Labor Statistics to determine wages and related benefits in (1) the banking industry and (2) for employees in home offices and regional head offices of life insurance carriers. Part 1 discusses banking industry characteristics and presents data for tellers and selected…

  14. The SUrvey for Pulsars and Extragalactic Radio Bursts – II. New FRB discoveries and their follow-up

    NARCIS (Netherlands)

    Bhandari, S.; Keane, E.F.; Barr, E.D.; Jameson, A.; Petroff, E.; Johnston, S.; Bailes, M.; Bhat, N.D.R.; Burgay, M.; Burke-Spolaor, S.; Caleb, M.; Eatough, R.P.; Flynn, C.; Green, J.A.; Jankowski, F.; Kramer, M.; Krishnan, V Venkatraman; Morello, V.; Possenti, A.; Stappers, B.; Tiburzi, C.; van Straten, W.; Andreoni, I.; Butterley, T.; Chandra, P.; Cooke, J.; Corongiu, A.; Coward, D.M.; Dhillon, V.S.; Dodson, R.; Hardy, L.K.; Howell, E.J.; Jaroenjittichai, P.; Klotz, A.; Littlefair, S.P.; Marsh, T.R.; Mickaliger, M.; Muxlow, T.; Perrodin, D.; Pritchard, D.; Sawangwit, U.; Terai, T.; Tominaga, N.; Torne, P.; Totani, T.; Trois, A.; Turpin, D.; Niino, Y.; Wilson, R.W.; Albert, A.; André, M.; Anghinolfi, M.; Anton, G.; Ardid, M.; Aubert, J.J.; Avgitas, T.; Baret, B.; Barrios-Marti, J.; Basa, S.; Belhorma, B.; Bertin, V.; Biagi, S.; Bormuth, R.; Bourret, S.; Bouwhuis, M.C.; Brânzas, H.; Bruijn, R.; Brunner, J.; Busto, J.; Capone, A.; Caramete, L.; Carr, J.; Celli, S.; Cherkaoui El Moursli, R.; Chiarusi, T.; Circella, M.; Coelho, J.A.B.; Coleiro, A.; Coniglione, R.; Costantini, H.; Coyle, P.; Creusot, A.; Díaz, A.F.; Deschamps, A.; De Bonis, G.; Distefano, C.; Di Palma, I.; Domi, A.; Donzaud, C.; Dornic, D.; Drouhin, D.; Eberl, T.; El Bojaddaini, I.; El Khayati, N.; Elsässer, D.; Enzenhöfer, A.; Ettahiri, A.; Fassi, F.; Felis, I.; Fusco, L.A.; Gay, P.; Giordano, V.; Glotin, H.; Grégoire, T.; Gracia-Ruiz, R.; Graf, K.; Hallmann, S.; van Haren, H.; Heijboer, A.J.; Hello, Y.; Hernandez-Rey, J.J.; Hößl, J.; Hofestädt, J.; Hugon, C.; Illuminati, G.; James, C.W.; de Jong, M.; Jongen, M.; Kadler, M.; Kalekin, O.; Katz, U.; Kießling, D.; Kouchner, A.; Kreter, M.; Kreykenbohm, I.; Kulikovskiy, V.; Lachaud, C.; Lahmann, R.; Lefevre, D.; Leonora, E.; Loucatos, S.; Marcelin, M.; Margiotta, A.; Marinelli, A.; Martinez-Mora, J.A.; Mele, R.; Melis, K.; Michael, T.; Migliozzi, P.; Moussa, A.; Navas, S.; Nezri, E.; Organokov, M.; Pavalas, G.E.; Pellegrino, C.; Perrina, C.; Piattelli, P.; Popa, V.; Pradier, T.; Quinn, L.; Racca, C.; Riccobene, G.; Sanchez-Losa, A.; Saldaña, M.; Salvadori, I.; Samtleben, D.F.E.; Sanguineti, M.; Sapienza, P.; Schussler, F.; Sieger, C.; Spurio, M.; Stolarczyk, Th.; Taiuti, M.; Tayalati, Y.; Trovato, A.; Turpin, D.; Tönnis, C.; Vallage, B.; Van Elewyck, V.; Versari, F.; Vivolo, D.; Vizzocca, A.; Wilms, J.; Zornoza, J.D.; Zúñiga, J.

    2017-01-01

    We report the discovery of four Fast Radio Bursts (FRBs) in the ongoing SUrvey for Pulsars and Extragalactic Radio Bursts at the Parkes Radio Telescope: FRBs 150610, 151206, 151230 and 160102. Our real-time discoveries have enabled us to conduct extensive, rapid multimessenger follow-up at 12 major

  15. A CFH12k lensing survey of X-ray luminous galaxy clusters - II. Weak lensing analysis and global correlations

    NARCIS (Netherlands)

    Bardeau, S.; Soucail, G.; Kneib, J.-P.; Czoske, O.; Ebeling, H.; Hudelot, P.; Smail, I.; Smith, G. P.

    Aims. We present a wide-field multi-color survey of a homogeneous sample of eleven clusters of galaxies for which we measure total masses and mass distributions from weak lensing. This sample, spanning a small range in both X-ray luminosity and redshift, is ideally suited to determining the

  16. Human Trafficking in the United States. Part II. Survey of U.S. Government Web Resources for Publications and Data

    Science.gov (United States)

    Panigabutra-Roberts, Anchalee

    2012-01-01

    This second part of a two-part series is a survey of U.S. government web resources on human trafficking in the United States, particularly of the online publications and data included on agencies' websites. Overall, the goal is to provide an introduction, an overview, and a guide on this topic for library staff to use in their research and…

  17. VizieR Online Data Catalog: Main-belt asteroids polarimetric survey. II. (Gil-Hutton+, 2012)

    Science.gov (United States)

    Gil-Hutton, R.; Canada-Assandri, M.

    2012-01-01

    Results for the objects observed during the polarimetric survey of main-belt asteroids. The observations were carried out during different observing runs between May 2004 and November 2009 at the 2.15m telescope of the CASLEO, San Juan, Argentina, using the Torino and CASPROF polarimeters. (3 data files).

  18. The sloan lens acs survey. II. Stellar populations and internal structure of early-type lens galaxies

    NARCIS (Netherlands)

    Treu, Tommaso; Koopmans, Léon V.; Bolton, Adam S.; Burles, Scott; Moustakas, Leonidas A.

    2006-01-01

    We use HST images to derive effective radii and effective surface brightnesses of 15 early-type (E+S0) lens galaxies identified by the SLACS Survey. Our measurements are combined with stellar velocity dispersions from the SDSS database to investigate for the first time the distribution of lens

  19. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  20. Emergence of dengue virus 4 genotype II in Guangzhou, China, 2010: Survey and molecular epidemiology of one community outbreak

    Directory of Open Access Journals (Sweden)

    Jing Qin-Long

    2012-04-01

    Background: The re-emergence of dengue virus 4 (DENV-4) has become a public health concern in South America, Southeast Asia and South Asia. However, it has not been known to have caused a local outbreak in China for the past 20 years. The purpose of this study was to elucidate the epidemiology of one local community outbreak caused by DENV-4 in Guangzhou city, China, in 2010, and to determine the molecular characteristics of the genotype II virus involved. Case presentations: During September and October of 2010, one imported case, a Guangzhou resident who travelled back from Thailand, resulted in 18 secondary autochthonous cases in Guangzhou City, with an incidence rate of 5.53 per 10,000 residents. In indigenous cases, 14 serum samples tested positive for IgM against DENV and 7 for IgG from a total of 15 submitted serum samples, accompanied by 5 DENV-4 isolates. With identical envelope gene nucleotide sequences, the two isolates (D10168-GZ from the imported index case and Guangzhou 10660, the first isolate from the autochthonous cases) were grouped into DENV-4 genotype II after comparison to 32 previous DENV-4 isolates from GenBank that originated from different areas. Conclusions: Based on epidemiological and phylogenetic analyses, the outbreak, the first caused by DENV-4 in the 20 years since the genotype I outbreak in 1990, was confirmed as DENV-4 genotype II and initially traced to the imported index case, a Guangzhou resident who travelled back from Thailand.

  1. Bases conceptuales y metodológicas de la Encuesta Nacional de Salud II, México 1994 [Conceptual and methodological basis of the National Health Survey II, Mexico, 1994]

    Directory of Open Access Journals (Sweden)

    1998-01-01

    The conceptual and methodological basis of the National Health Survey II (NHS-II) are described, and recent advances in multidisciplinary public health research in Mexico, both conceptual and methodological, are synthesized. The design of the NHS-II concentrated on the study of the access, quality of care, and costs of health care services in ambulatory and hospital settings. Details on the conceptual framework related to the analysis and processing of data are also included. Five geographic regions were covered; 12,615 households at the national level were visited and information on 61,524 individuals was gathered. The overall response rate was 96.7%, both for households and for identified health service users. The general conclusion emphasizes the need to incorporate the population perspective into the planning and allocation of health resources.

  2. Massive open star clusters using the VVV survey. II. Discovery of six clusters with Wolf-Rayet stars

    Science.gov (United States)

    Chené, A.-N.; Borissova, J.; Bonatto, C.; Majaess, D. J.; Baume, G.; Clarke, J. R. A.; Kurtev, R.; Schnurr, O.; Bouret, J.-C.; Catelan, M.; Emerson, J. P.; Feinstein, C.; Geisler, D.; de Grijs, R.; Hervé, A.; Ivanov, V. D.; Kumar, M. S. N.; Lucas, P.; Mahy, L.; Martins, F.; Mauro, F.; Minniti, D.; Moni Bidin, C.

    2013-01-01

    Context. The ESO Public Survey "VISTA Variables in the Vía Láctea" (VVV) provides deep multi-epoch infrared observations for an unprecedented 562 sq. degrees of the Galactic bulge, and adjacent regions of the disk. Nearly 150 new open clusters and cluster candidates have been discovered in this survey. Aims: This is the second in a series of papers about young, massive open clusters observed using the VVV survey. We present the first study of six recently discovered clusters. These clusters contain at least one newly discovered Wolf-Rayet (WR) star. Methods: Following the methodology presented in the first paper of the series, wide-field, deep JHKs VVV observations, combined with new infrared spectroscopy, are employed to constrain fundamental parameters for a subset of clusters. Results: We find that the six studied stellar groups are real young (2-7 Myr) and massive (between 0.8 and 2.2 × 10^3 M⊙) clusters. They are highly obscured (AV ~ 5-24 mag) and compact (1-2 pc). In addition to WR stars, two of the six clusters also contain at least one red supergiant star, and one of these two clusters also contains a blue supergiant. We claim the discovery of 8 new WR stars, and 3 stars showing WR-like emission lines which could be classified WR or OIf. Preliminary analysis provides initial masses of ~30-50 M⊙ for the WR stars. Finally, we discuss the spiral structure of the Galaxy using the six new clusters as tracers, together with the previously studied VVV clusters. Based on observations with ISAAC, VLT, ESO (programme 087.D-0341A), New Technology Telescope at ESO's La Silla Observatory (programme 087.D-0490A) and with the Clay telescope at the Las Campanas Observatory (programme CN2011A-086). Also based on data from the VVV survey (programme 172.B-2002).

  3. The Einstein@Home Gamma-ray Pulsar Survey. II. Source Selection, Spectral Analysis, and Multiwavelength Follow-up

    Science.gov (United States)

    Wu, J.; Clark, C. J.; Pletsch, H. J.; Guillemot, L.; Johnson, T. J.; Torne, P.; Champion, D. J.; Deneva, J.; Ray, P. S.; Salvetti, D.; Kramer, M.; Aulbert, C.; Beer, C.; Bhattacharyya, B.; Bock, O.; Camilo, F.; Cognard, I.; Cuéllar, A.; Eggenstein, H. B.; Fehrmann, H.; Ferrara, E. C.; Kerr, M.; Machenschalk, B.; Ransom, S. M.; Sanpa-Arsa, S.; Wood, K.

    2018-02-01

    We report on the analysis of 13 gamma-ray pulsars discovered in the Einstein@Home blind search survey using Fermi Large Area Telescope (LAT) Pass 8 data. The 13 new gamma-ray pulsars were discovered by searching 118 unassociated LAT sources from the third LAT source catalog (3FGL), selected using the Gaussian Mixture Model machine-learning algorithm on the basis of their gamma-ray emission properties being suggestive of pulsar magnetospheric emission. The new gamma-ray pulsars have pulse profiles and spectral properties similar to those of previously detected young gamma-ray pulsars. Follow-up radio observations have revealed faint radio pulsations from two of the newly discovered pulsars and enabled us to derive upper limits on the radio emission from the others, demonstrating that they are likely radio-quiet gamma-ray pulsars. We also present results from modeling the gamma-ray pulse profiles and radio profiles, if available, using different geometric emission models of pulsars. The high discovery rate of this survey, despite the increasing difficulty of blind pulsar searches in gamma rays, suggests that new systematic surveys such as presented in this article should be continued when new LAT source catalogs become available.

  4. Prescription Errors in Psychiatry

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

    … clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. … participation in ward rounds and adverse drug …

  5. Polarimetric survey of main-belt asteroids. II. Results for 58 B- and C-type objects

    Science.gov (United States)

    Gil-Hutton, R.; Cañada-Assandri, M.

    2012-03-01

    Aims: We present results of a polarimetric survey of main-belt asteroids at Complejo Astronómico el Leoncito (CASLEO), San Juan, Argentina. The aims of this survey are to increase the database of asteroid polarimetry, to estimate diversity in polarimetric properties of asteroids that belong to different taxonomic classes, and to search for objects that exhibit anomalous polarimetric properties. Methods: The data were obtained with the Torino and CASPROF polarimeters at the 2.15m telescope. The Torino polarimeter is an instrument that allows simultaneous measurement of polarization in five different bands, and the CASPROF polarimeter is a two-hole aperture polarimeter with rapid modulation. Results: The survey began in 2003, and up to 2009 data on a sample of more than 170 asteroids were obtained. In this paper the results for 58 B- and C-type objects are presented, most of them polarimetrically observed for the first time. Using these data we find phase-polarization curves and polarimetric parameters for these taxonomic classes. Based on observations carried out at the Complejo Astronómico El Leoncito, operated under agreement between the Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina and the National Universities of La Plata, Córdoba, and San Juan. Tables 1 and 2 are available in electronic form at CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/539/A115

  6. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  7. Photometric redshifts for the next generation of deep radio continuum surveys - II. Gaussian processes and hybrid estimates

    Science.gov (United States)

    Duncan, Kenneth J.; Jarvis, Matt J.; Brown, Michael J. I.; Röttgering, Huub J. A.

    2018-04-01

    Building on the first paper in this series (Duncan et al. 2018), we present a study investigating the performance of Gaussian process photometric redshift (photo-z) estimates for galaxies and active galactic nuclei detected in deep radio continuum surveys. A Gaussian process redshift code is used to produce photo-z estimates targeting specific subsets of both the AGN population - infrared, X-ray and optically selected AGN - and the general galaxy population. The new estimates for the AGN population are found to perform significantly better at z > 1 than the template-based photo-z estimates presented in our previous study. Our new photo-z estimates are then combined with template estimates through hierarchical Bayesian combination to produce a hybrid consensus estimate that outperforms both of the individual methods across all source types. Photo-z estimates for radio sources that are X-ray sources or optical/IR AGN are significantly improved in comparison to previous template-only estimates - with outlier fractions and robust scatter reduced by up to a factor of ˜4. The ability of our method to combine the strengths of the two input photo-z techniques and the large improvements we observe illustrate its potential for enabling future exploitation of deep radio continuum surveys for both the study of galaxy and black hole co-evolution and for cosmological studies.
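
    A minimal sketch of Gaussian-process photometric-redshift regression, mapping broad-band colours to redshift with generic scikit-learn tools; this is not the code used in the paper, and the training set is synthetic.

        # Gaussian-process regression from broad-band colours to redshift with
        # scikit-learn; generic sketch on synthetic data, not the paper's code.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)
        z_true = rng.uniform(0.0, 3.0, 300)
        colours = np.column_stack([                 # fake colours that trend with z
            0.8 * z_true + rng.normal(0, 0.1, z_true.size),
            np.sin(z_true) + rng.normal(0, 0.1, z_true.size),
        ])

        kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(colours, z_true)

        z_pred, z_std = gp.predict(np.array([[1.6, 0.9], [0.4, 0.5]]), return_std=True)
        print(np.round(z_pred, 2), np.round(z_std, 2))   # point estimates and 1-sigma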

  8. The Canada-France deep fields survey-II: Lyman-break galaxies and galaxy clustering at z ~ 3

    Science.gov (United States)

    Foucaud, S.; McCracken, H. J.; Le Fèvre, O.; Arnouts, S.; Brodwin, M.; Lilly, S. J.; Crampton, D.; Mellier, Y.

    2003-10-01

    We present a large sample of z ~ 3 U-band dropout galaxies extracted from the Canada-France deep fields survey (CFDF). Our catalogue covers an effective area of ~1700 arcmin^2 divided between three large, contiguous fields separated widely on the sky. To I_AB = 24.5, the survey contains 1294 Lyman-break candidates, in agreement with previous measurements by other authors, after appropriate incompleteness corrections have been applied to our data. Based on comparisons with spectroscopic observations and simulations, we estimate that our sample of Lyman-break galaxies is contaminated by stars and interlopers (lower-redshift galaxies) at no more than ~30%. We find that ω(θ) is well fitted by a power law of fixed slope, γ = 1.8, even at small θ. … University of Hawaii, and at the Cerro Tololo Inter-American Observatory and Mayall 4-meter Telescopes, divisions of the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc. under cooperative agreement with the National Science Foundation.
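
    With the slope fixed at γ = 1.8, fitting ω(θ) = A_w θ^(1-γ) reduces to a weighted least-squares estimate of the single amplitude A_w. A small sketch, with synthetic binned measurements standing in for the CFDF data:

        # With gamma fixed at 1.8, w(theta) = A * theta**(1 - gamma) is linear
        # in A, so a weighted least-squares fit gives the amplitude directly.
        # The binned measurements and errors below are synthetic placeholders.
        import numpy as np

        gamma = 1.8
        theta = np.array([3.0, 10.0, 30.0, 100.0, 300.0])     # arcsec
        w_obs = np.array([0.30, 0.12, 0.046, 0.018, 0.007])   # measured w(theta)
        w_err = np.array([0.05, 0.02, 0.008, 0.004, 0.002])   # 1-sigma errors

        basis = theta ** (1.0 - gamma)
        weights = 1.0 / w_err ** 2
        amplitude = np.sum(weights * basis * w_obs) / np.sum(weights * basis ** 2)
        amp_err = 1.0 / np.sqrt(np.sum(weights * basis ** 2))
        print(f"A_w = {amplitude:.3f} +/- {amp_err:.3f}  (theta in arcsec)")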

  9. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: II. Analysis of experimental data of the Neutralized Drift Compression eXperiment-I (NDCX-I)

    International Nuclear Information System (INIS)

    Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex

    2012-01-01

    Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔE_b. In the presence of large voltage errors, δU ≫ ΔE_b, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
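
    One way to write the scaling stated above (a paraphrase of the abstract, not the paper's exact expression), assuming the natural normalizations U for the nominal modulation voltage and E_b for the beam energy:

        C_max \propto \left[ (\delta U / U)\,(\Delta E_b / E_b) \right]^{-1/2}

    i.e., the maximum compression ratio falls as the inverse geometric mean of the relative velocity-modulation error and the relative intrinsic energy spread of the beam ions.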

  10. Phase II Characterization Survey of the USNS Bridge (T-AOE 10), Military Sealift Fleet Support Command, Naval Station, Norfolk, Virginia

    Energy Technology Data Exchange (ETDEWEB)

    ALTIC, NICK A

    2012-08-30

    In March 2011, the USNS Bridge was deployed off northeastern Honshu, Japan with the carrier USS Ronald Reagan to assist with relief efforts after the 2011 Tōhoku earthquake and tsunami. During that time, the Bridge was exposed to air-borne radioactive materials leaking from the damaged Fukushima I Nuclear Power Plant. The proximity of the Bridge to the air-borne impacted area resulted in the contamination of the ship’s air-handling systems and the associated components, as well as potential contamination of other ship surfaces due to either direct intake/deposition or inadvertent spread from crew/operational activities. Preliminary surveys in the weeks after the event confirmed low-level contamination within the heating, ventilation, and air conditioning (HVAC) ductwork and systems, and engine and other auxiliary air intake systems. Some partial decontamination was performed at that time. In response to the airborne contamination event, Military Sealift Fleet Support Command (MSFSC) contracted Oak Ridge Associated Universities (ORAU), under provisions of the Oak Ridge Institute for Science and Education (ORISE) contract, to assess the radiological condition of the Bridge. Phase I identified contamination within the CPS filters, ventilation systems, miscellaneous equipment, and other suspect locations that could not be accessed at that time (ORAU 2011b). Because the Bridge was underway during the characterization, not all of the potentially impacted systems/spaces could be investigated. As a result, MSFSC contracted with ORAU to perform Phase II of the characterization, specifically to survey systems/spaces previously inaccessible. During Phase II of the characterization, the ship was in port to perform routine maintenance operations, allowing access to the previously inaccessible systems/spaces.

  11. PROBING THE PHYSICS OF NARROW LINE REGIONS IN ACTIVE GALAXIES. II. THE SIDING SPRING SOUTHERN SEYFERT SPECTROSCOPIC SNAPSHOT SURVEY (S7)

    Energy Technology Data Exchange (ETDEWEB)

    Dopita, Michael A.; Davies, Rebecca; Kewley, Lisa; Hampton, Elise; Sutherland, Ralph [RSAA, Australian National University, Cotter Road, Weston Creek, ACT 2611 (Australia); Shastri, Prajval; Kharb, Preeti; Jose, Jessy; Bhatt, Harish; Ramya, S. [Indian Institute of Astrophysics, Koramangala 2 B Block, Bangalore 560034 (India); Scharwächter, Julia [LERMA, Observatoire de Paris, CNRS, UMR 8112, 61 Avenue de l’Observatoire, F-75014 Paris (France); Jin, Chichuan [Qian Xuesen Laboratory for Space Technology, Beijing (China); Banfield, Julie [CSIRO Astronomy and Space Science, P.O. Box 76, Epping NSW, 1710 Australia (Australia); Zaw, Ingyin [New York University (Abu Dhabi), 70 Washington Square South, New York, NY 10012 (United States); Juneau, Stéphanie [CEA-Saclay, DSM/IRFU/SAp, F-91191 Gif-sur-Yvette (France); James, Bethan [Institute of Astronomy, Cambridge University, Madingley Road, Cambridge CB3 0HA (United Kingdom); Srivastava, Shweta, E-mail: Michael.Dopita@anu.edu.au [Astronomy and Astrophysics Division, Physical Research Laboratory, Ahmedabad 380009 (India)

    2015-03-15

    Here we describe the Siding Spring Southern Seyfert Spectroscopic Snapshot Survey (S7) and present results on 64 galaxies drawn from the first data release. The S7 uses the Wide Field Spectrograph mounted on the ANU 2.3 m telescope located at the Siding Spring Observatory to deliver an integral field of 38 × 25 arcsec at a spectral resolution of R = 7000 in the red (530–710 nm), and R = 3000 in the blue (340–560 nm). From these data cubes we have extracted the narrow-line region spectra from a 4 arcsec aperture centered on the nucleus. We also determine the Hβ and [O iii] λ5007 fluxes in the narrow lines, the nuclear reddening, the reddening-corrected relative intensities of the observed emission lines, and the Hβ and [O iii] λ5007 luminosities determined from spectra for which the stellar continuum has been removed. We present a set of images of the galaxies in [O iii] λ5007, [N ii] λ6584, and Hα, which serve to delineate the spatial extent of the extended narrow-line region and also to reveal the structure and morphology of the surrounding H ii regions. Finally, we provide a preliminary discussion of those Seyfert 1 and Seyfert 2 galaxies that display coronal emission lines in order to explore the origin of these lines.

  12. [Hungarian Diet and Nutritional Status Survey - OTÁP2014. II. Energy and macronutrient intake of the Hungarian population].

    Science.gov (United States)

    Sarkadi Nagy, Eszter; Bakacs, Márta; Illés, Éva; Nagy, Barbara; Varga, Anita; Kis, Orsolya; Schreiberné Molnár, Erzsébet; Martos, Éva

    2017-04-01

    The aim of the study was to assess and monitor the dietary habits and nutrient intake of Hungarian adults. Three-day dietary records were used for dietary assessment; the sample was representative of the Hungarian population aged ≥18 years by gender and age. The mean proportion of energy from fat was higher (men: 38 energy%, women: 37 energy%) and that from carbohydrates lower (men: 45 energy%, women: 47 energy%) than recommended, while protein intake was adequate. Unfavorable changes compared with the previous survey in 2009 were the increase in the fat and saturated fatty acid energy percentage in women and the decrease in fruit and vegetable consumption, which explains the decreased fiber intake. An increasing trend in the added sugar energy percentage in each age group of both genders was observed compared with 2009. Interventions focusing on promoting fruit and vegetable consumption and decreasing saturated fat and added sugar intake are needed. Orv. Hetil., 2017, 158(15), 587-597.

  13. Profiling Occupant Behaviour in Danish Dwellings using Time Use Survey Data - Part II: Time-related Factors and Occupancy

    DEFF Research Database (Denmark)

    Barthelmes, V.M.; Li, R.; Andersen, R.K.

    2018-01-01

    Occupant behaviour has been shown to be one of the key driving factors of uncertainty in prediction of energy consumption in buildings. Building occupants affect building energy use directly and indirectly by interacting with building energy systems such as adjusting temperature set...... occupant profiles for prediction of energy use to reduce the gap between predicted and real building energy consumptions. In this study, we exploit diary-based Danish Time Use Surveys for understanding and modelling occupant behaviour in the residential sector in Denmark. This paper is a continuation......-points, switching lights on/off, using electrical devices and opening/closing windows. Furthermore, building inhabitants’ daily activity profiles clearly shape the timing of energy demand in households. Modelling energy-related human activities throughout the day, therefore, is crucial to defining more realistic...

  14. THE GOULD’S BELT DISTANCES SURVEY (GOBELINS). II. DISTANCES AND STRUCTURE TOWARD THE ORION MOLECULAR CLOUDS

    Energy Technology Data Exchange (ETDEWEB)

    Kounkel, Marina; Hartmann, Lee [Department of Astronomy, University of Michigan, 1085 S. University Street, Ann Arbor, MI 48109 (United States); Loinard, Laurent; Ortiz-León, Gisela N.; Rodríguez, Luis F.; Pech, Gerardo; Rivera, Juana L. [Instituto de Radioastronomía y Astrofísica, Universidad Nacional Autónoma de Mexico, Morelia 58089 (Mexico); Mioduszewski, Amy J. [National Radio Astronomy Observatory, Domenici Science Operations Center, 1003 Lopezville Road, Socorro, NM 87801 (United States); Dzib, Sergio A. [Max Planck Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn (Germany); Torres, Rosa M. [Centro Universitario de Tonalá, Universidad de Guadalajara, Avenida Nuevo Perifrico No. 555, Ejido San José, Tatepozco, C.P. 48525, Tonalá, Jalisco, México (Mexico); Galli, Phillip A. B. [Université Grenoble Alpes, IPAG, F-38000, Grenoble (France); Boden, Andrew F. [Division of Physics, Math and Astronomy, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125 (United States); Evans II, Neal J. [Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX 78712-1205 (United States); Briceño, Cesar [Cerro Tololo Interamerican Observatory, Casilla 603, La Serena (Chile); Tobin, John J., E-mail: mkounkel@umich.edu [Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, 440 West Brooks Street, Norman, OK 73019 (United States)

    2017-01-10

We present the results of the Gould’s Belt Distances Survey of young star-forming regions toward the Orion Molecular Cloud Complex. We detected 36 young stellar objects (YSOs) with the Very Long Baseline Array, 27 of which have been observed in at least three epochs over the course of two years. At least half of these YSOs belong to multiple systems. We obtained parallaxes and proper motions toward these stars to study the structure and kinematics of the Complex. We measured a distance of 388 ± 5 pc toward the Orion Nebula Cluster, 428 ± 10 pc toward the southern portion of L1641, 388 ± 10 pc toward NGC 2068, and roughly 420 pc toward NGC 2024. Finally, we observed a strong degree of plasma radio scattering toward λ Ori.
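
    Distances like those above come from inverting trigonometric parallaxes; a minimal sketch of the conversion with first-order error propagation (the parallax value is a placeholder, not a GOBELINS measurement):

```python
# Sketch: converting a trigonometric parallax (milliarcseconds) to a distance (pc) with
# first-order error propagation: d = 1000/p and sigma_d = 1000 * sigma_p / p**2.
# The numbers are illustrative placeholders, not values from the survey.

def parallax_to_distance(p_mas, sigma_p_mas):
    d_pc = 1000.0 / p_mas
    sigma_d_pc = 1000.0 * sigma_p_mas / p_mas**2   # |dd/dp| * sigma_p
    return d_pc, sigma_d_pc

d, sd = parallax_to_distance(2.58, 0.03)   # hypothetical VLBI-quality parallax
print(f"d = {d:.0f} +/- {sd:.0f} pc")
```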

  15. OBSERVATIONS OF BINARY STARS WITH THE DIFFERENTIAL SPECKLE SURVEY INSTRUMENT. II. HIPPARCOS STARS OBSERVED IN 2010 JANUARY AND JUNE

    International Nuclear Information System (INIS)

    Horch, Elliott P.; Gomez, Shamilia C.; Anderson, Lisa M.; Sherry, William H.; Howell, Steve B.; Ciardi, David R.; Van Altena, William F.

    2011-01-01

The results of 497 speckle observations of Hipparcos stars and selected other targets are presented. Of these, 367 were resolved into components and 130 were unresolved. The data were obtained using the Differential Speckle Survey Instrument at the WIYN 3.5 m Telescope. (The WIYN Observatory is a joint facility of the University of Wisconsin-Madison, Indiana University, Yale University, and the National Optical Astronomy Observatories.) Since the first paper in this series, the instrument has been upgraded so that it now uses two electron-multiplying CCD cameras. The measurement precision obtained when comparing to ephemeris positions of binaries with very well known orbits is approximately 1-2 mas in separation and better than 0.6° in position angle. Differential photometry is found to be in very good agreement with Hipparcos measures in cases where the comparison is most relevant. We derive preliminary orbits for two systems.

  16. Molecular-cloud-scale Chemical Composition. II. Mapping Spectral Line Survey toward W3(OH) in the 3 mm Band

    Energy Technology Data Exchange (ETDEWEB)

    Nishimura, Yuri [Institute of Astronomy, The University of Tokyo, 2-21-1, Osawa, Mitaka, Tokyo 181-0015 (Japan); Watanabe, Yoshimasa; Yamamoto, Satoshi [Department of Physics, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Harada, Nanase [Academia Sinica Institute of Astronomy and Astrophysics, No.1, Sec. 4, Roosevelt Road, 10617 Taipei, Taiwan, R.O.C. (China); Shimonishi, Takashi [Frontier Research Institute for Interdisciplinary Sciences, Tohoku University, Aramakiazaaoba 6-3, Aoba-ku, Sendai, Miyagi 980-8578 (Japan); Sakai, Nami [RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan); Aikawa, Yuri [Department of Astronomy, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Kawamura, Akiko [Chile Observatory, National Astronomical Observatory of Japan, 2-21-1, Osawa, Mitaka, Tokyo 181-8588 (Japan)

    2017-10-10

To study molecular-cloud-scale chemical composition, we conducted a mapping spectral line survey toward the Galactic molecular cloud W3(OH), which is one of the most active star-forming regions in the Perseus arm. We conducted our survey with the Nobeyama Radio Observatory 45 m telescope and observed an area of 16′ × 16′, which corresponds to 9.0 pc × 9.0 pc. The observed frequency ranges are 87–91, 96–103, and 108–112 GHz. We prepared the spectrum averaged over the observed area, in which eight molecular species (CCH, HCN, HCO⁺, HNC, CS, SO, C¹⁸O, and ¹³CO) are identified. On the other hand, the spectrum of the W3(OH) hot core observed at a 0.17 pc resolution shows the lines of various molecules such as OCS, H₂CS, CH₃CCH, and CH₃CN in addition to the above species. In the spatially averaged spectrum, emission of the species concentrated just around the star-forming core, such as CH₃OH and HC₃N, is fainter than in the hot core spectrum, whereas emission of the species widely extended over the cloud, such as CCH, is relatively brighter. We classified the observed area into five subregions according to the integrated intensity of ¹³CO, and evaluated the contribution to the averaged spectrum from each subregion. The CCH, HCN, HCO⁺, and CS lines can be seen even in the spectrum of the subregion with the lowest ¹³CO integrated intensity range (<10 K km s⁻¹). Thus, the contribution of the spatially extended emission is confirmed to be dominant in the spatially averaged spectrum.

  17. Spectroscopic survey of Kepler stars - II. FIES/NOT observations of A- and F-type stars

    Science.gov (United States)

    Niemczura, E.; Polińska, M.; Murphy, S. J.; Smalley, B.; Kołaczkowski, Z.; Jessen-Hansen, J.; Uytterhoeven, K.; Lykke, J. M.; Triviño Hage, A.; Michalska, G.

    2017-09-01

We have analysed high-resolution spectra of 28 A and 22 F stars in the Kepler field, observed using the Fibre-Fed Échelle Spectrograph at the Nordic Optical Telescope. We provide spectral types, atmospheric parameters and chemical abundances for 50 stars. Balmer, Fe I and Fe II lines were used to derive effective temperatures, surface gravities and microturbulent velocities. We determined chemical abundances and projected rotational velocities using a spectrum synthesis technique. Effective temperatures calculated by spectral energy distribution fitting are in good agreement with those determined from the spectral line analysis. The stars analysed include chemically peculiar stars of the Am and λ Boo types, as well as stars with approximately solar chemical abundances. The wide distribution of projected rotational velocity, v sin i, is typical for A and F stars. The microturbulence velocities obtained are typical for stars in the observed temperature and surface gravity ranges. Moreover, we affirm the results of Niemczura et al. that Am stars do not have systematically higher microturbulent velocities than normal stars of the same temperature.

  18. Study of the picture change error at the 2nd order Douglas–Kroll–Hess level of theory. Electron and spin density and structure factors of the Bis[bis(methoxycarbimido) aminato] copper (II) complex

    International Nuclear Information System (INIS)

    Bučinský, Lukáš; Biskupič, Stanislav; Jayatilaka, Dylan

    2012-01-01

Graphical abstract: The top three figures show how the radial distribution of the spin density in the vicinity of the nucleus depends on the formal oxidation state of the copper atom; note also the large impact of PCE as well as of relativistic effects. The bottom three figures present the relativistic effects and PCE in the electron density of the [CuL₂] model compound (plotted over a region of 1 bohr²). PCE affects the relativistic effects in the electron density close to the copper nucleus very little; i.e., the PCE in the relativistic effects of the electron density is hardly discernible for compounds containing copper. Highlights: ► The extent of PCE in a model compound containing a copper atom is presented. ► The spin/electron density along the Cu–N bond is significantly affected by PCE only at the copper nucleus. ► The 2D inspection of relativistic effects in electron/spin densities is not sensitive to PCE. ► Structure factors are an order of magnitude less affected by PCE than by relativistic effects. ► PCE in the Mulliken populations and spin contamination is considered. - Abstract: The analytic correction and the extent of the picture change error (PCE) at the scalar 2nd order Douglas–Kroll–Hess level of theory are considered. The one-dimensional (1D) and two-dimensional (2D) spin/electron densities and/or difference densities, structure factors, and Mulliken populations of the Bis[bis(methoxycarbimido)aminato] copper (II) model compound are presented. For further comparison, the radial distributions of the electron and spin density of the copper atom (as well as of the copper di-cation) are presented. In addition, the infinite order two component (IOTC) radial distributions of the electron and spin density of the copper atom and copper dication are presented as well. The PCE is almost hidden in the 2D densities of the studied model compound. The 1D electron/spin difference densities along the Cu–N bond show the

  19. Feasibility study for an airborne high-sensitivity gamma-ray survey of Alaska. Phase II (final) report: 1976--1979 program

    International Nuclear Information System (INIS)

    1975-01-01

This study constitutes a determination of the extent to which it is feasible to use airborne, high-sensitivity gamma spectrometer systems for uranium reconnaissance in the State of Alaska, and a specification of a preliminary plan for surveying the entire state in the 1975-1979 time frame. Phase I included the design of a program to survey the highest priority areas in 1975 using available aircraft and spectrometer equipment. This has now resulted in a contract for 10,305 flight-line miles to cover about 11 of the 1:250,000 scale quadrangles using a DC-3 aircraft with an average 6.25 x 25 mile grid of flight lines. Phase II includes the design of alternative programs to cover the remaining 128 quadrangles using either a DC-3 and a Bell 205A helicopter or a Helio Stallion STOL aircraft and a Bell 205A helicopter during 1976-1979. The 1976-1979 time frame allows some time for possible new system developments in both airborne gamma-ray spectrometers and ancillary equipment, and these are outlined. (auth)

  20. Using threshold regression to analyze survival data from complex surveys: With application to mortality linked NHANES III Phase II genetic data.

    Science.gov (United States)

    Li, Yan; Xiao, Tao; Liao, Dandan; Lee, Mei-Ling Ting

    2018-03-30

    The Cox proportional hazards (PH) model is a common statistical technique used for analyzing time-to-event data. The assumption of PH, however, is not always appropriate in real applications. In cases where the assumption is not tenable, threshold regression (TR) and other survival methods, which do not require the PH assumption, are available and widely used. These alternative methods generally assume that the study data constitute simple random samples. In particular, TR has not been studied in the setting of complex surveys that involve (1) differential selection probabilities of study subjects and (2) intracluster correlations induced by multistage cluster sampling. In this paper, we extend TR procedures to account for complex sampling designs. The pseudo-maximum likelihood estimation technique is applied to estimate the TR model parameters. Computationally efficient Taylor linearization variance estimators that consider both the intracluster correlation and the differential selection probabilities are developed. The proposed methods are evaluated by using simulation experiments with various complex designs and illustrated empirically by using mortality-linked Third National Health and Nutrition Examination Survey Phase II genetic data. Copyright © 2017 John Wiley & Sons, Ltd.
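
    The design-weighting idea behind pseudo-maximum likelihood estimation can be illustrated compactly; the sketch below weights each unit's log-likelihood by its sampling weight, using a simple exponential survival model as a stand-in for the threshold-regression likelihood. It is not the authors' TR implementation, and the variable names are hypothetical.

```python
# Sketch of design-weighted pseudo-maximum likelihood estimation for survival data, the
# general idea used to extend regression models to complex surveys. An exponential hazard
# model stands in for the threshold-regression likelihood; the point illustrated is the
# weighting scheme (each unit's log-likelihood multiplied by its sampling weight).

import numpy as np
from scipy.optimize import minimize

def neg_pseudo_loglik(beta, X, time, event, weight):
    """Weighted negative log-likelihood of an exponential model with rate = exp(X @ beta)."""
    rate = np.exp(X @ beta)
    loglik = event * np.log(rate) - rate * time   # log-density for events, log-survival for censored
    return -np.sum(weight * loglik)

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([-1.0, 0.5])
t = rng.exponential(1.0 / np.exp(X @ true_beta))      # event times
c = rng.exponential(3.0, size=n)                      # censoring times
time, event = np.minimum(t, c), (t <= c).astype(float)
weight = rng.uniform(0.5, 2.0, size=n)                # stand-in sampling weights

fit = minimize(neg_pseudo_loglik, x0=np.zeros(2), args=(X, time, event, weight))
print("pseudo-MLE:", fit.x)   # should land near true_beta
```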

  1. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
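
    The TER-versus-LER distinction is easy to make concrete with a delta-rule simulation; the sketch below uses arbitrary parameters and a blocking-style design, not the paper's actual models or data sets.

```python
# Sketch contrasting total error reduction (TER, Rescorla-Wagner style: one shared prediction
# error computed from the summed associative strengths of all present cues) with local error
# reduction (LER: each cue is updated against its own prediction error). Parameters and the
# blocking-style trial sequence are illustrative only.

import numpy as np

def train(trials, n_cues, rule="TER", lr=0.2):
    w = np.zeros(n_cues)
    for present, outcome in trials:
        x = np.zeros(n_cues)
        x[list(present)] = 1.0
        if rule == "TER":
            error = outcome - np.dot(w, x)   # one error for the whole compound
            w += lr * error * x
        else:                                # LER
            error = outcome - w              # cue-specific errors
            w += lr * error * x
    return w

# Blocking design: cue 0 trained alone, then cues 0+1 in compound, both reinforced.
trials = [({0}, 1.0)] * 20 + [({0, 1}, 1.0)] * 20
print("TER weights:", np.round(train(trials, 2, "TER"), 3))   # cue 1 stays weak (blocking)
print("LER weights:", np.round(train(trials, 2, "LER"), 3))   # cue 1 acquires strength
```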

  2. The VISTA Carina Nebula Survey. II. Spatial distribution of the infrared-excess-selected young stellar population

    Science.gov (United States)

    Zeidler, P.; Preibisch, T.; Ratzka, T.; Roccatagliata, V.; Petr-Gotzens, M. G.

    2016-01-01

We performed a deep wide-field (6.76 sq. deg) near-infrared survey with the VISTA telescope that covers the entire extent of the Carina nebula complex (CNC). The point-source catalog created from these data contains around four million individual objects down to masses of 0.1 M⊙. We present a statistical study of the large-scale spatial distribution and an investigation of the clustering properties of infrared-excess objects, which are used to trace disk-bearing young stellar objects (YSOs). A selection based on a near-infrared (J-H) versus (H-Ks) color-color diagram shows an almost uniform distribution over the entire observed area. We interpret this as a result of the very high degree of background contamination that arises from the Carina Nebula's location close to the Galactic plane. Complementing the VISTA near-infrared catalog with Spitzer IRAC mid-infrared photometry considerably improves the situation with background contamination. We find that a (J-H) versus (Ks − [4.5]) color-color diagram is well suited to tracing the population of YSO candidates (cYSOs) by their infrared excess. We identify 8781 sources with strong infrared excess, which we consider as cYSOs. This sample is used to investigate the spatial distribution of the cYSOs with a nearest-neighbor analysis. The surface density distribution of cYSOs agrees well with the shape of the clouds as seen in our Herschel far-infrared survey. The strong decline in the surface density of excess sources outside the area of the clouds supports the hypothesis that our excess-selected sample consists predominantly of cYSOs with a low level of background contamination. This analysis allows us to identify 14 groups of cYSOs outside the central area. Our results suggest that the total population of cYSOs in the CNC comprises about 164 000 objects, with a substantial fraction (~35%) located in the northern, still not well studied parts. Our cluster analysis suggests that roughly half of the cYSOs constitute a

  3. Survey of eight dimensions quality of life for patients with diabetes type II, referred to Sanandaj diabetes center in 2009

    Directory of Open Access Journals (Sweden)

    Shahnaz Khaledi

    2011-06-01

Background & Objective: Diabetes is a chronic disease, and patients with diabetes need special care. One way to help such patients is to assess their quality of life, which was done here within a nursing disciplinary program through a cross-sectional study carried out during 2009. Materials & Methods: 198 type II diabetic patients referred to the diabetes center of an educational hospital affiliated with Sanandaj Medical University were selected randomly. They were interviewed, gave written permission to join the study, and were then asked to fill in the SF-36 questionnaire; the questionnaire data were analyzed with the SPSS software program. Results: The quality of life of the diabetic patients was acceptable with respect to physical functioning for 55.6%, whereas for physical role functioning 67.7% were at an unacceptable level; 45.3% reported physical pain; for general health, 45.6% were moderately affected; for energy and vitality, 35.4% were at an unacceptable level; for social functioning, 38.5% were at a favorable level; for emotional role functioning, 75.8% were at an undesirable level; and for mental health, 49.5% were within the desirable range. The relationship between quality of life and demographic data was evaluated using the ANOVA test. Conclusion: This study showed that the quality of life of the study group was, overall, at a moderate level. To improve the quality of life of diabetic patients, it is suggested that planners and managers pay sufficient attention to supporting the physical, mental, and social well-being of these patients.

  4. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients, and the mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  5. A Joint Sea Beam/SeaMARC II Survey of the East Pacific Rise and Its Flanks 7 deg 50 min-10 deg 30 min N, to Establish a Geologic Acoustic Natural Laboratory

    Science.gov (United States)

    1991-01-15

    of Oceanography, University of Rhode Island , Narragansett, R.I. 02882, A. Shor and C. Nishimura, Hawaii Institute of Geophysics, University of Hawaii...across the Clipperton and the absence of intra-transform spreading, and opening across the Siqueiros with sustained intra-transform spreading. An...Ma. Future work will focus on the significant task of combining this survey with three 1987 SeaMARC II surveys of the Clipperton transform, the 9°N

  6. Engineering survey planning for the alignment of a particle accelerator: part II. Design of a reference network and measurement strategy

    Science.gov (United States)

    Junqueira Leão, Rodrigo; Raffaelo Baldo, Crhistian; Collucci da Costa Reis, Maria Luisa; Alves Trabanco, Jorge Luiz

    2018-03-01

The building blocks of particle accelerators are magnets responsible for keeping beams of charged particles on a desired trajectory. Magnets are commonly grouped in support structures named girders, which are mounted on vertical and horizontal stages. The performance of this type of machine is highly dependent on the relative alignment between its main components. The length of particle accelerators ranges from small machines to large-scale national or international facilities, with typical lengths of hundreds of meters to a few kilometers. This relatively large volume, together with micrometric positioning tolerances, makes the alignment activity a classical large-scale dimensional metrology problem. The alignment concept relies on networks of fixed monuments installed on the building structure to which all accelerator components are referred. In this work, the Sirius accelerator is taken as a case study, and an alignment network is optimized via computational methods in terms of geometry, densification, and surveying procedure. Laser trackers are employed to guide the installation and measure the girders’ positions, using the optimized network as a reference and applying the metric developed in part I of this paper. Simulations demonstrate the feasibility of aligning the 220 girders of the Sirius synchrotron to better than 0.080 mm, at a coverage probability of 95%.
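
    A coverage-probability statement such as "better than 0.080 mm at 95%" can be checked by Monte Carlo; the sketch below does this with an entirely hypothetical error budget (the component names and magnitudes are placeholders, not the Sirius budget from the paper).

```python
# Sketch: Monte Carlo check of an alignment tolerance at a given coverage probability.
# The individual error components (instrument, network propagation, fiducialization) and
# their magnitudes are hypothetical placeholders, not the Sirius error budget.

import numpy as np

rng = np.random.default_rng(1)
n_sim = 100_000

instrument = rng.normal(0.0, 0.020, n_sim)        # mm, laser-tracker measurement noise
network    = rng.normal(0.0, 0.025, n_sim)        # mm, propagated reference-network error
fiducial   = rng.uniform(-0.02, 0.02, n_sim)      # mm, fiducialization/mounting error

total = np.abs(instrument + network + fiducial)   # combined girder positioning error
p95 = np.percentile(total, 95)
print(f"95th percentile of simulated misalignment: {p95:.3f} mm")
print("meets 0.080 mm tolerance at 95% coverage:", p95 < 0.080)
```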

  7. The Fornax Cluster VLT Spectroscopic Survey II - Planetary Nebulae kinematics within 200 kpc of the cluster core

    Science.gov (United States)

    Spiniello, C.; Napolitano, N. R.; Arnaboldi, M.; Tortora, C.; Coccato, L.; Capaccioli, M.; Gerhard, O.; Iodice, E.; Spavone, M.; Cantiello, M.; Peletier, R.; Paolillo, M.; Schipani, P.

    2018-06-01

    We present the largest and most spatially extended planetary nebulae (PNe) catalogue ever obtained for the Fornax cluster. We measured velocities of 1452 PNe out to 200 kpc in the cluster core using a counter-dispersed slitless spectroscopic technique with data from FORS2 on the Very Large Telescope (VLT). With such an extended spatial coverage, we can study separately the stellar haloes of some of the cluster main galaxies and the intracluster light. In this second paper of the Fornax Cluster VLT Spectroscopic Survey, we identify and classify the emission-line sources, describe the method to select PNe, and calculate their coordinates and velocities from the dispersed slitless images. From the PN 2D velocity map, we identify stellar streams that are possibly tracing the gravitational interaction of NGC 1399 with NGC 1404 and NGC 1387. We also present the velocity dispersion profile out to ˜200 kpc radii, which shows signatures of a superposition of the bright central galaxy and the cluster potential, with the latter clearly dominating the regions outside R ˜ 1000 arcsec (˜100 kpc).

  8. Galaxy evolution and large-scale structure in the far-infrared. II. The IRAS faint source survey

    International Nuclear Information System (INIS)

    Lonsdale, C.J.; Hacking, P.B.; Conrow, T.P.; Rowan-Robinson, M.

    1990-01-01

The new IRAS Faint Source Survey data base is used to confirm the conclusion of Hacking et al. (1987) that the 60 micron source counts fainter than about 0.5 Jy lie in excess of predictions based on nonevolving model populations. The existence of an anisotropy between the northern and southern Galactic caps discovered by Rowan-Robinson et al. (1986) and Needham and Rowan-Robinson (1988) is confirmed, and it is found to extend below their sensitivity limit to about 0.3 Jy in 60 micron flux density. The count anisotropy at f(60) greater than 0.3 Jy can be interpreted reasonably as due to the Local Supercluster; however, no single structure accounting for the fainter anisotropy can be easily identified in either optical or far-IR two-dimensional sky distributions. The far-IR galaxy sky distributions are considerably smoother than distributions from the published optical galaxy catalogs. It is likely that structures of the large size discussed here have been discriminated against in earlier studies due to insufficient volume sampling. 105 refs

  9. High-Redshift Quasars Found in Sloan Digital Sky Survey Commissioning Data. II. The Spring Equatorial Stripe

    International Nuclear Information System (INIS)

    Fan, Xiaohui; Strauss, Michael A.; Schneider, Donald P.; Gunn, James E.; Lupton, Robert H.; Anderson, Scott F.; Voges, Wolfgang; Margon, Bruce; Annis, James; Bahcall, Neta A.

    2000-01-01

This is the second paper in a series aimed at finding high-redshift quasars from five-color (u′g′r′i′z′) imaging data taken along the Celestial Equator by the Sloan Digital Sky Survey (SDSS) during its commissioning phase. In this paper, we present 22 high-redshift quasars (z>3.6) discovered from ∼250 deg² of data in the spring Equatorial Stripe, plus photometry for two previously known high-redshift quasars in the same region of the sky. Our success rate in identifying high-redshift quasars is 68%. Five of the newly discovered quasars have redshifts higher than 4.6 (z=4.62, 4.69, 4.70, 4.92, and 5.03). All the quasars have i * B 0 =0.5). Several of the quasars show unusual emission and absorption features in their spectra, including an object at z=4.62 without detectable emission lines, and a broad absorption line (BAL) quasar at z=4.92. (c) 2000. The American Astronomical Society

  10. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    .... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...

  11. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

Human errors make a major contribution to the risks of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but these models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research on human error and concludes with suggestions for further work. (orig.)

  12. OTELO SURVEY: DEEP BVRI BROADBAND PHOTOMETRY OF THE GROTH STRIP. II. OPTICAL PROPERTIES OF X-RAY EMITTERS

    International Nuclear Information System (INIS)

    Povic, M.; Perez GarcIa, A. M.; Bongiovanni, A.; Castaneda, H.; Lorenzo, M. Fernandez; Lara-Lopez, M. A.; Sanchez-Portal, M.; Cepa, J.; Alfaro, E.; Gallego, J.; Gonzalez-Serrano, J. I.; Gonzalez, J. J.

    2009-01-01

The Groth field is one of the sky regions that will be targeted by the OSIRIS Tunable Filter Emission Line Object survey in the optical 820 nm and 920 nm atmospheric windows. In the present paper, public Chandra X-ray data with a total exposure time of 200 ks are analyzed and combined with optical broadband data of the Groth field in order to study a set of optical structural parameters of the X-ray emitters and their relation to X-ray properties. To this end, we processed the raw public X-ray data using the Chandra Interactive Analysis of Observations, and determined and analyzed different structural parameters in order to produce a morphological classification of X-ray sources. We present the morphology of 340 X-ray emitters with detected optical counterparts. Objects have been classified by X-ray type using a diagnostic diagram relating the X-ray-to-optical ratio (X/O) to the hardness ratio. We did not find any clear correlation between X-ray and morphological types. We analyzed the angular clustering of X-ray sources with optical counterparts using two-point correlation functions. A significant positive angular clustering was obtained from a preliminary analysis of four subsamples of the X-ray source catalog. The clustering signal of the optically extended counterparts is similar to that of strongly clustered populations like red and very red galaxies, suggesting that the environment plays an important role in active galactic nucleus phenomena. Finally, we combined the optical structural parameters with other X-ray and optical properties, and we confirmed an anticorrelation between the X/O ratio and the Abraham concentration index, which might suggest that early-type galaxies have lower Eddington rates than late-type galaxies.
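
    The abstract does not specify which two-point estimator was used; a common choice for angular clustering is the Landy-Szalay estimator, sketched here with brute-force pair counts on toy catalogs (the catalogs, bins, and field geometry are invented for illustration).

```python
# Sketch: angular two-point correlation function with the Landy-Szalay estimator,
# w(theta) = (DD - 2DR + RR) / RR, using brute-force pair counts on small toy catalogs.
# Whether this exact estimator was used in the paper is an assumption.

import numpy as np

def angular_sep(ra1, dec1, ra2, dec2):
    """Angular separation (deg) between two sets of points; all inputs in degrees."""
    r1, d1, r2, d2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cos_t = np.sin(d1) * np.sin(d2) + np.cos(d1) * np.cos(d2) * np.cos(r1 - r2)
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

def pair_counts(a, b, bins, auto=False):
    sep = angular_sep(a[:, None, 0], a[:, None, 1], b[None, :, 0], b[None, :, 1])
    if auto:
        sep = sep[np.triu_indices(len(a), k=1)]   # unique pairs only
    return np.histogram(sep.ravel(), bins=bins)[0].astype(float)

rng = np.random.default_rng(2)
data = rng.uniform([150.0, 1.0], [151.0, 2.0], size=(300, 2))     # toy (RA, Dec) catalog
rand = rng.uniform([150.0, 1.0], [151.0, 2.0], size=(1500, 2))    # matching random catalog
bins = np.logspace(-2, 0, 8)                                      # degrees

dd = pair_counts(data, data, bins, auto=True) / (len(data) * (len(data) - 1) / 2)
rr = pair_counts(rand, rand, bins, auto=True) / (len(rand) * (len(rand) - 1) / 2)
dr = pair_counts(data, rand, bins) / (len(data) * len(rand))
w = (dd - 2 * dr + rr) / rr
print(np.round(w, 3))   # ~0 in every bin for an unclustered toy catalog
```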

  13. WHITE DWARF-RED DWARF SYSTEMS RESOLVED WITH THE HUBBLE SPACE TELESCOPE. II. FULL SNAPSHOT SURVEY RESULTS

    International Nuclear Information System (INIS)

    Farihi, J.; Hoard, D. W.; Wachter, S.

    2010-01-01

    Results are presented for a Hubble Space Telescope Advanced Camera for Surveys high-resolution imaging campaign of 90 white dwarfs with known or suspected low-mass stellar and substellar companions. Of the 72 targets that remain candidate and confirmed white dwarfs with near-infrared excess, 43 are spatially resolved into two or more components, and a total of 12 systems are potentially triples. For 68 systems where a comparison is possible, 50% have significant photometric distance mismatches between their white dwarf and M dwarf components, suggesting that white dwarf parameters derived spectroscopically are often biased due to the cool companion. Interestingly, 9 of the 30 binaries known to have emission lines are found to be visual pairs and hence widely separated, indicating an intrinsically active cool star and not irradiation from the white dwarf. There is a possible, slight deficit of earlier spectral types (bluer colors) among the spatially unresolved companions, exactly the opposite of expectations if significant mass is transferred to the companion during the common envelope phase. Using the best available distance estimates, the low-mass companions to white dwarfs exhibit a bimodal distribution in projected separation. This result supports the hypothesis that during the giant phases of the white dwarf progenitor, any unevolved companions either migrate inward to short periods of hours to days, or outward to periods of hundreds to thousands of years. No intermediate projected separations of a few to several AU are found among these pairs. However, a few double M dwarfs (within triples) are spatially resolved in this range, empirically demonstrating that such separations were readily detectable among the binaries with white dwarfs. A straightforward and testable prediction emerges: all spatially unresolved, low-mass stellar and substellar companions to white dwarfs should be in short-period orbits. This result has implications for substellar companion and

  14. THE VLA NASCENT DISK AND MULTIPLICITY SURVEY OF PERSEUS PROTOSTARS (VANDAM). II. MULTIPLICITY OF PROTOSTARS IN THE PERSEUS MOLECULAR CLOUD

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, John J.; Harris, Robert J. [Leiden Observatory, Leiden University, P.O. Box 9513, 2300-RA Leiden (Netherlands); Looney, Leslie W.; Segura-Cox, Dominique [Department of Astronomy, University of Illinois, Urbana, IL 61801 (United States); Li, Zhi-Yun [Department of Astronomy, University of Virginia, Charlottesville, VA 22903 (United States); Chandler, Claire J.; Perez, Laura [National Radio Astronomy Observatory, P.O. Box O, Socorro, NM 87801 (United States); Dunham, Michael M. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St, MS 78, Cambridge, MA 02138 (United States); Sadavoy, Sarah I. [Max-Planck-Institut für Astronomie, D-69117 Heidelberg (Germany); Melis, Carl [Center for Astrophysics and Space Sciences, University of California, San Diego, CA 92093 (United States); Kratter, Kaitlin, E-mail: tobin@strw.leidenuniv.nl [University of Arizona, Steward Observatory, Tucson, AZ 85721 (United States)

    2016-02-10

We present a multiplicity study of all known protostars (94) in the Perseus molecular cloud from a Karl G. Jansky Very Large Array survey at Ka-band (8 mm and 1 cm) and C-band (4 and 6.6 cm). The observed sample has a bolometric luminosity range between 0.1 L⊙ and ∼33 L⊙, with a median of 0.7 L⊙. This multiplicity study is based on the Ka-band data, which have a best resolution of ∼0.″065 (15 au) and probe separations out to ∼43″ (10,000 au). The overall multiplicity fraction (MF) is found to be 0.40 ± 0.06 and the companion star fraction (CSF) is 0.71 ± 0.06. The MF and CSF of the Class 0 protostars are 0.57 ± 0.09 and 1.2 ± 0.2, and the MF and CSF of Class I protostars are both 0.23 ± 0.08. The distribution of companion separations appears bimodal, with a peak at ∼75 au and another peak at ∼3000 au. Turbulent fragmentation is likely the dominant mechanism on >1000 au scales and disk fragmentation is likely to be the dominant mechanism on <200 au scales. Toward three Class 0 sources we find companions separated by <30 au. These systems have the smallest separations of currently known Class 0 protostellar binary systems. Moreover, these close systems are embedded within larger (50–400 au) structures and may be candidates for ongoing disk fragmentation.
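
    The multiplicity fraction and companion star fraction quoted above follow the usual survey definitions; a minimal sketch with invented system counts (not the VANDAM tallies):

```python
# Sketch: multiplicity fraction (MF) and companion star fraction (CSF) as conventionally
# defined in multiplicity surveys, MF = (B+T+Q)/(S+B+T+Q) and CSF = (B+2T+3Q)/(S+B+T+Q),
# with a simple binomial error on MF. The system counts below are hypothetical.

from math import sqrt

S, B, T, Q = 40, 18, 6, 2          # single, binary, triple, quadruple systems (illustrative)
n_sys = S + B + T + Q

mf = (B + T + Q) / n_sys
csf = (B + 2 * T + 3 * Q) / n_sys
mf_err = sqrt(mf * (1 - mf) / n_sys)   # binomial uncertainty on the multiplicity fraction

print(f"MF  = {mf:.2f} +/- {mf_err:.2f}")
print(f"CSF = {csf:.2f}")
```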

  15. The SUrvey for Pulsars and Extragalactic Radio Bursts - II. New FRB discoveries and their follow-up

    Science.gov (United States)

    Bhandari, S.; Keane, E. F.; Barr, E. D.; Jameson, A.; Petroff, E.; Johnston, S.; Bailes, M.; Bhat, N. D. R.; Burgay, M.; Burke-Spolaor, S.; Caleb, M.; Eatough, R. P.; Flynn, C.; Green, J. A.; Jankowski, F.; Kramer, M.; Krishnan, V. Venkatraman; Morello, V.; Possenti, A.; Stappers, B.; Tiburzi, C.; van Straten, W.; Andreoni, I.; Butterley, T.; Chandra, P.; Cooke, J.; Corongiu, A.; Coward, D. M.; Dhillon, V. S.; Dodson, R.; Hardy, L. K.; Howell, E. J.; Jaroenjittichai, P.; Klotz, A.; Littlefair, S. P.; Marsh, T. R.; Mickaliger, M.; Muxlow, T.; Perrodin, D.; Pritchard, T.; Sawangwit, U.; Terai, T.; Tominaga, N.; Torne, P.; Totani, T.; Trois, A.; Turpin, D.; Niino, Y.; Wilson, R. W.; Albert, A.; André, M.; Anghinolfi, M.; Anton, G.; Ardid, M.; Aubert, J.-J.; Avgitas, T.; Baret, B.; Barrios-Martí, J.; Basa, S.; Belhorma, B.; Bertin, V.; Biagi, S.; Bormuth, R.; Bourret, S.; Bouwhuis, M. C.; Brânzaş, H.; Bruijn, R.; Brunner, J.; Busto, J.; Capone, A.; Caramete, L.; Carr, J.; Celli, S.; Moursli, R. Cherkaoui El; Chiarusi, T.; Circella, M.; Coelho, J. A. B.; Coleiro, A.; Coniglione, R.; Costantini, H.; Coyle, P.; Creusot, A.; Díaz, A. F.; Deschamps, A.; De Bonis, G.; Distefano, C.; Palma, I. Di; Domi, A.; Donzaud, C.; Dornic, D.; Drouhin, D.; Eberl, T.; Bojaddaini, I. El; Khayati, N. El; Elsässer, D.; Enzenhöfer, A.; Ettahiri, A.; Fassi, F.; Felis, I.; Fusco, L. A.; Gay, P.; Giordano, V.; Glotin, H.; Gregoire, T.; Gracia-Ruiz, R.; Graf, K.; Hallmann, S.; van Haren, H.; Heijboer, A. J.; Hello, Y.; Hernández-Rey, J. J.; Hößl, J.; Hofestädt, J.; Hugon, C.; Illuminati, G.; James, C. W.; de Jong, M.; Jongen, M.; Kadler, M.; Kalekin, O.; Katz, U.; Kießling, D.; Kouchner, A.; Kreter, M.; Kreykenbohm, I.; Kulikovskiy, V.; Lachaud, C.; Lahmann, R.; Lefèvre, D.; Leonora, E.; Loucatos, S.; Marcelin, M.; Margiotta, A.; Marinelli, A.; Martínez-Mora, J. A.; Mele, R.; Melis, K.; Michael, T.; Migliozzi, P.; Moussa, A.; Navas, S.; Nezri, E.; Organokov, M.; Pǎvǎlaş, G. E.; Pellegrino, C.; Perrina, C.; Piattelli, P.; Popa, V.; Pradier, T.; Quinn, L.; Racca, C.; Riccobene, G.; Sánchez-Losa, A.; Saldaña, M.; Salvadori, I.; Samtleben, D. F. E.; Sanguineti, M.; Sapienza, P.; Schüssler, F.; Sieger, C.; Spurio, M.; Stolarczyk, Th; Taiuti, M.; Tayalati, Y.; Trovato, A.; Turpin, D.; Tönnis, C.; Vallage, B.; Van Elewyck, V.; Versari, F.; Vivolo, D.; Vizzocca, A.; Wilms, J.; Zornoza, J. D.; Zúñiga, J.

    2018-04-01

We report the discovery of four Fast Radio Bursts (FRBs) in the ongoing SUrvey for Pulsars and Extragalactic Radio Bursts at the Parkes Radio Telescope: FRBs 150610, 151206, 151230 and 160102. Our real-time discoveries have enabled us to conduct extensive, rapid multimessenger follow-up at 12 major facilities sensitive to radio, optical, X-ray, gamma-ray photons and neutrinos on time-scales ranging from an hour to a few months post-burst. No counterparts to the FRBs were found and we provide upper limits on afterglow luminosities. None of the FRBs were seen to repeat. Formal fits to all FRBs show hints of scattering while their intrinsic widths are unresolved in time. FRB 151206 is at low Galactic latitude, FRB 151230 shows a sharp spectral cut-off, and FRB 160102 has the highest dispersion measure (DM = 2596.1 ± 0.3 pc cm^-3) detected to date. Three of the FRBs have high dispersion measures (DM > 1500 pc cm^-3), favouring a scenario where the DM is dominated by contributions from the intergalactic medium. The slope of the Parkes FRB source counts distribution with fluences >2 Jy ms is α = -2.2^{+0.6}_{-1.2} and still consistent with a Euclidean distribution (α = -3/2). We also find that the all-sky rate is 1.7^{+1.5}_{-0.9} × 10^3 FRBs/(4π sr)/day above ∼2 Jy ms and there is currently no strong evidence for a latitude-dependent FRB sky rate.

  16. Survey results of corroding problems at biological treatment plants, Stage II Protection of concrete - State of the Art

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Ylva (CBI, Boraas (Sweden)); Henriksson, Gunilla (SP, Boraas (Sweden))

    2011-07-01

A pilot study on the degradation and corrosion of concrete in biological treatment plants was conducted in 2009/2010 in Waste Refinery project WR-27, 'Survey results of corroding problems at biological treatment plants'. The results showed that the concrete does not have sufficient resistance in the current aggressive plant environment. Furthermore, it was concluded that some form of surface protection system is needed to ensure the good performance of concrete structures, and that the system must withstand both the aggressive environment and the traffic that occurs on site. Consequently, a new study was proposed in order to develop specifications for the surface protection of concrete in aggressive food-waste environments. Results from that study are presented in this report. The report covers various types of waterproofing/protective coatings for concrete in biological treatment plants. A number of proposals from industry are presented in the light of the results of project WR-27; i.e., the materials must, among other things, withstand the aggressive leachate from food waste at temperatures up to 70 °C, as well as some degree of wear. Some systems are compared in terms of technical material properties as reported by the manufacturers. It turns out that different testing methods were used, so the test results are generally not directly comparable. A proposal for a test program has been developed, focusing on chemical resistance and wear resistance. A test solution corresponding to the leachate is specified. Laboratory tests for verification of the proposed methodology and future requirements are proposed, as well as test sites and follow-up in the field.

  17. Error probabilities in default Bayesian hypothesis testing

    NARCIS (Netherlands)

    Gu, Xin; Hoijtink, Herbert; Mulder, J,

    2016-01-01

    This paper investigates the classical type I and type II error probabilities of default Bayes factors for a Bayesian t test. Default Bayes factors quantify the relative evidence between the null hypothesis and the unrestricted alternative hypothesis without needing to specify prior distributions for
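
    Frequentist error rates of a Bayes-factor decision rule can be estimated by simulation; the sketch below uses the BIC approximation to the Bayes factor and an arbitrary "BF10 > 3" decision threshold, which is a simplification and not the default Bayes factor studied in the paper.

```python
# Sketch: estimating type I and type II error rates of a Bayes-factor decision rule
# ("accept H1 if BF10 > 3") by simulation. The BIC approximation BF10 ~ exp((BIC0-BIC1)/2)
# is used for simplicity; it is NOT the default Bayes factor analyzed in the paper.

import numpy as np

def bf10_bic(x):
    """BIC-approximated Bayes factor for H1: mean != 0 vs H0: mean = 0 (one-sample setting)."""
    n = len(x)
    rss0 = np.sum(x ** 2)                 # H0: mean fixed at 0
    rss1 = np.sum((x - x.mean()) ** 2)    # H1: mean estimated (one extra parameter)
    bic0 = n * np.log(rss0 / n)
    bic1 = n * np.log(rss1 / n) + np.log(n)   # extra log(n) penalty for the extra parameter
    return np.exp((bic0 - bic1) / 2.0)

rng = np.random.default_rng(3)
n, reps, threshold = 50, 5000, 3.0

type1 = np.mean([bf10_bic(rng.normal(0.0, 1.0, n)) > threshold for _ in range(reps)])
type2 = np.mean([bf10_bic(rng.normal(0.5, 1.0, n)) <= threshold for _ in range(reps)])
print(f"type I error  ~ {type1:.3f}")   # H0 true, but BF10 exceeds the threshold
print(f"type II error ~ {type2:.3f}")   # H1 true (effect size 0.5), but BF10 does not
```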

  18. Multiagency radiation survey and site investigation manual (MARSSIM): Survey design

    International Nuclear Information System (INIS)

    Abelquist, E.W.; Berger, J.D.

    1996-01-01

This paper describes the MultiAgency Radiation Survey and Site Investigation Manual (MARSSIM) strategy for designing a final status survey. The purpose of the final status survey is to demonstrate that release criteria established by the regulatory agency have been met. Survey design begins with identification of the contaminants and determination of whether the radionuclides of concern exist in background. The decommissioned site is segregated into Class 1, Class 2, and Class 3 areas, based on contamination potential, and each area is further divided into survey units. Appropriate reference areas for indoor and outdoor background measurements are selected. Survey instrumentation and techniques are selected in order to assure that the instrumentation is capable of detecting the contamination at the derived concentration guideline level (DCGL). Survey reference systems are established and the number of survey data points is determined, with the required number of data points distributed on a triangular grid pattern. Two statistical tests are used to evaluate data from final status surveys. For contaminants that are present in background, the Wilcoxon Rank Sum test is used; for contaminants that are not present in background, the Wilcoxon Signed Rank (or Sign) test is used. The number of data points needed to satisfy these nonparametric tests is based on the contaminant DCGL value, the expected standard deviation of the contaminant in background and in the survey unit, and the acceptable probability of making Type I and Type II decision errors. The MARSSIM also requires a reasonable level of assurance that any small areas of elevated residual radioactivity that could be significant relative to regulatory limits are not missed during the final status survey. Measurements and sampling on a specified grid size are used to obtain an adequate assurance level that small locations of elevated radioactivity will still satisfy DCGLs applicable to small areas
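
    The sample-size logic described above can be illustrated with the Sign-test formula used in MARSSIM-style planning, N = (z_{1-α} + z_{1-β})² / [4(SignP − 0.5)²]; the input values in the sketch are invented, and real MARSSIM planning adds further adjustments (for example, inflating N to allow for lost data).

```python
# Sketch: MARSSIM-style sample-size calculation for the one-sample Sign test, illustrating
# how the DCGL, the expected variability, and the acceptable Type I/II decision-error rates
# drive the number of data points. Formula assumed here:
#   N = (z_{1-alpha} + z_{1-beta})^2 / (4 * (SignP - 0.5)^2),  SignP = Phi(Delta / sigma),
# with Delta = DCGL - LBGR. The input values are illustrative, not from any actual survey unit.

import math
from scipy.stats import norm

def sign_test_n(dcgl, lbgr, sigma, alpha=0.05, beta=0.05):
    delta = dcgl - lbgr                       # shift between the DCGL and the lower bound
    sign_p = norm.cdf(delta / sigma)          # probability a measurement falls below the DCGL
    n = (norm.ppf(1 - alpha) + norm.ppf(1 - beta)) ** 2 / (4 * (sign_p - 0.5) ** 2)
    return math.ceil(n)

# Example: DCGL = 1.0 (in DCGL units), expected survey-unit mean 0.5, sigma 0.3
print("data points needed:", sign_test_n(dcgl=1.0, lbgr=0.5, sigma=0.3))
```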

  19. Apical Periodontitis and Endodontic Treatment in Patients with Type II Diabetes Mellitus: Comparative Cross-sectional Survey.

    Science.gov (United States)

    Smadi, Leena

    2017-05-01

The aims of this study were to investigate the prevalence of apical periodontitis (AP) in diabetes mellitus (DM) patients compared with nondiabetic patients and to examine the effect of glycemic control on the prevalence of AP. Radiographs of a group of DM patients were compared with those of a matched nondiabetic group to identify AP. The diabetic group was subdivided according to the level of glycemic control into two subgroups: well-controlled DM and poorly controlled DM. The periapical index score was used to assess the periapical status. All groups were compared with regard to the presence of AP lesions, the number of endodontically treated teeth (ET), and the percentage of failure of endodontically treated teeth (AP/ET ratio). The Statistical Package for the Social Sciences (SPSS version 20.0, Chicago, Illinois, USA) was used for all analyses; p ≤ 0.05 was considered statistically significant. The prevalence of AP was higher in the diabetic group than in the nondiabetic group (13.5 vs 11.9%, respectively). The diabetic group had more endodontically treated teeth (ET) than the nondiabetic group (4.18 vs 1.82%, respectively); this difference was statistically significant (p = 0.001), along with a higher AP/ET ratio (27.7 vs 19.3, respectively). The poorly controlled DM group had a higher prevalence of AP lesions than the well-controlled DM group (18.29 vs 9.21, respectively); this difference was statistically significant (p = 0.001). They also had a higher percentage of ET (5.55 vs 3.13%, respectively) and a higher AP/ET ratio (32.0 vs 21.8%, respectively). This survey demonstrates a higher prevalence of AP in DM patients compared with the nondiabetic group, with an increased prevalence of persistent chronic AP. Compared with a well-controlled diabetic group, poor glycemic control may be associated with a higher prevalence of AP and an increased rate of endodontic failures. Counseling diabetic patients, particularly those with poor glycemic control, about the risk of

  20. Issues in environmental survey design

    International Nuclear Information System (INIS)

    Iachan, R.

    1989-01-01

    Several environmental survey design issues are discussed and illustrated with surveys designed by Research Triangle Institute statisticians. Issues related to sampling and nonsampling errors are illustrated for indoor air quality surveys, radon surveys, pesticide surveys, and occupational and personal exposure surveys. Sample design issues include the use of auxiliary information (e.g. for stratification), and sampling in time. We also discuss the reduction and estimation of nonsampling errors, including nonresponse and measurement bias
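
    One standard way auxiliary information enters a stratified design is Neyman allocation; below is a minimal sketch with hypothetical strata, offered as an illustration of the design issue rather than a method attributed to the surveys described above.

```python
# Sketch: Neyman allocation of a fixed total sample size across strata, one common way
# auxiliary information (stratum sizes and expected variability) is used in survey design:
#   n_h = n * N_h * S_h / sum_k(N_k * S_k).
# Strata and numbers below are illustrative only.

def neyman_allocation(n_total, stratum_sizes, stratum_sds):
    products = [N * S for N, S in zip(stratum_sizes, stratum_sds)]
    total = sum(products)
    return [round(n_total * p / total) for p in products]

# e.g. three housing strata for an indoor radon survey (hypothetical sizes and SDs)
sizes = [50_000, 30_000, 20_000]       # N_h: dwellings per stratum
sds   = [2.0, 4.0, 8.0]                # S_h: expected radon SD per stratum (pCi/L)
print(neyman_allocation(1000, sizes, sds))   # more sample goes where variability is higher
```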

  1. Use of the internet as a resource for consumer health information: results of the second osteopathic survey of health care in America (OSTEOSURV-II).

    Science.gov (United States)

    Licciardone, J C; Smith-Barbaro, P; Coleridge, S T

    2001-01-01

    The Internet offers consumers unparalleled opportunities to acquire health information. The emergence of the Internet, rather than more-traditional sources, for obtaining health information is worthy of ongoing surveillance, including identification of the factors associated with using the Internet for this purpose. To measure the prevalence of Internet use as a mechanism for obtaining health information in the United States; to compare such Internet use with newspapers or magazines, radio, and television; and to identify sociodemographic factors associated with using the Internet for acquiring health information. Data were acquired from the Second Osteopathic Survey of Health Care in America (OSTEOSURV-II), a national telephone survey using random-digit dialing within the United States during 2000. The target population consisted of adult, noninstitutionalized, household members. As part of the survey, data were collected on: facility with the Internet, sources of health information, and sociodemographic characteristics. Multivariate analysis was used to identify factors associated with acquiring health information on the Internet. A total of 499 (64% response rate) respondents participated in the survey. With the exception of an overrepresentation of women (66%), respondents were generally similar to national referents. Fifty percent of respondents either strongly agreed or agreed that they felt comfortable using the Internet as a health information resource. The prevalence rates of using the health information sources were: newspapers or magazines, 69%; radio, 30%; television, 56%; and the Internet, 32%. After adjusting for potential confounders, older respondents were more likely than younger respondents to use newspapers or magazines and television to acquire health information, but less likely to use the Internet. Higher education was associated with greater use of newspapers or magazines and the Internet as health information sources. Internet use was lower

  2. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  3. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  4. A spectroscopic survey of the youngest field stars in the solar neighborhood . II. The optically faint sample

    Science.gov (United States)

    Frasca, A.; Guillout, P.; Klutsch, A.; Ferrero, R. Freire; Marilli, E.; Biazzo, K.; Gandolfi, D.; Montes, D.

    2018-05-01

Context. Star formation in the solar neighborhood is mainly traced by young stars in open clusters, associations, and in the field, which can be identified, for example, by their X-ray emission. The determination of stellar parameters for the optical counterparts of X-ray sources is crucial for a full characterization of these stars. Aims: This work extends the spectroscopic study of the RasTyc sample, obtained by the cross-correlation of the Tycho and ROSAT All-Sky Survey catalogs, to stars fainter than V = 9.5 mag and aims to identify sparse populations of young stars in the solar neighborhood. Methods: We acquired 625 high-resolution spectra for 443 presumably young stars with four different instruments in the northern hemisphere. The radial and rotational velocities (v sin i) of our targets were measured by means of the cross-correlation technique, which is also helpful for discovering single-lined (SB1) and double-lined (SB2) spectroscopic binaries and multiple systems. We used the code ROTFIT to perform an MK spectral classification and to determine the atmospheric parameters (Teff, logg, [Fe/H]) and v sin i of the single stars and SB1 systems. For these objects, we used the spectral subtraction of slowly rotating templates to measure the equivalent widths of the Hα and Li I 6708 Å lines, which enabled us to derive their chromospheric activity level and lithium abundance. We made use of Gaia DR1 parallaxes and proper motions to locate the targets in the Hertzsprung-Russell (HR) diagram and to compute the space velocity components of the youngest objects. Results: We find a remarkable percentage (at least 35%) of binaries and multiple systems. On the basis of the lithium abundance, the sample of single stars and SB1 systems appears to be mostly (∼60%) composed of stars younger than the members of the UMa cluster. The remaining sources are in the age range between the UMa and Hyades clusters (∼20%) or older (∼20%). In total, we identify 42 very young (PMS-like) stars
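
    The cross-correlation technique mentioned above can be sketched with a toy spectrum: shift a template over a grid of trial velocities, correlate, and take the peak. This is only the general idea, not the ROTFIT pipeline or real survey data; the line parameters are invented.

```python
# Sketch: measuring a radial velocity by cross-correlating an observed spectrum with a
# template over a grid of trial velocities. The toy Gaussian-line spectrum below is an
# illustration of the technique, not the actual survey pipeline or data.

import numpy as np

C_KMS = 299_792.458
wave = np.linspace(6540.0, 6580.0, 4000)                       # wavelength grid (Angstrom)

def toy_spectrum(rv_kms, depth=0.6, width=0.4, center=6562.8):
    shifted = center * (1.0 + rv_kms / C_KMS)                  # Doppler-shifted line center
    return 1.0 - depth * np.exp(-0.5 * ((wave - shifted) / width) ** 2)

template = toy_spectrum(0.0)
observed = toy_spectrum(35.0) + np.random.default_rng(4).normal(0, 0.01, wave.size)

velocities = np.linspace(-200.0, 200.0, 801)                   # trial velocities (km/s)
ccf = [np.sum((1 - observed) * (1 - np.interp(wave, wave * (1 + v / C_KMS), template)))
       for v in velocities]                                    # correlate with shifted template
print("RV estimate [km/s]:", velocities[int(np.argmax(ccf))])
```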

  5. Prevalence surveys as part of a strategic plan to prevent healthcare associated infections. The experience of the University Hospital "Federico II" of Naples, Italy.

    Science.gov (United States)

    Montella, E; Triassi, M; Bellopede, R; Reis, W; Palladino, R; Di Silverio, P

    2014-01-01

Healthcare-associated infections (HAI) are the most serious complications associated with medical care: they cause illness for patients and economic damage to public health. The University Hospital "Federico II" of Naples decided to monitor HAI by repeating the prevalence survey conducted earlier in 2011, in order to analyze the phenomenon of infection and to evaluate possible correlations with risk factors. The survey was conducted according to ECDC criteria. Since the 2011 study followed the same methodology, the prevalence rates of both years were standardized to allow comparison with the 2012 results. For 2012, the number of patients enrolled in the study and the stratification of patients by age and sex were similar to the data collected in 2011. Notably, the standardized prevalence of HAI decreased in 2012 compared with 2011: the standardized prevalence for 2012 was 3.1%, down from 4.4% in 2011. Practical, hands-on training is regarded as the most appropriate approach for raising health professionals' awareness of healthcare-associated infections; together with a peripheral self-control system for the correct application of procedures and active epidemiological surveillance measured through incidence rates, it allows both monitoring of the infectious phenomenon and the application of corrective measures that prevent its onset. Repeating the epidemiological prevalence study with the same methodology offers two advantages: comparability of the data, at the intra-hospital as well as the regional, national, and international level, and evaluation of the effectiveness of the corrective actions.
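
    Standardizing prevalence rates so that two survey years are comparable is typically done by direct standardization; a minimal sketch with invented strata, counts, and reference weights (not the Federico II data):

```python
# Sketch: direct standardization of a prevalence rate, the usual way two survey years are
# made comparable when their patient mixes differ. Strata, counts, and reference-population
# weights below are invented for illustration.

def standardized_prevalence(cases, patients, reference_weights):
    crude_rates = [c / p for c, p in zip(cases, patients)]
    return sum(r * w for r, w in zip(crude_rates, reference_weights)) / sum(reference_weights)

# strata: e.g. <18, 18-64, >=65 years, with shared reference-population weights
reference = [0.15, 0.55, 0.30]

prev_a = standardized_prevalence(cases=[2, 14, 20], patients=[120, 400, 280], reference_weights=reference)
prev_b = standardized_prevalence(cases=[1, 10, 15], patients=[110, 420, 300], reference_weights=reference)
print(f"standardized prevalence, year A: {100 * prev_a:.1f}%")
print(f"standardized prevalence, year B: {100 * prev_b:.1f}%")
```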

  6. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...

  7. Errors and violations

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between errors and violations, and between active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident pathogen view of accident causation and seeks to identify the various system pathways along which errors and violations may be propagated

  8. Quantum error-correcting code for ternary logic

    Science.gov (United States)

    Majumdar, Ritajit; Basu, Saikat; Ghosh, Shibashis; Sur-Kolay, Susmita

    2018-05-01

Ternary quantum systems are being studied because they provide more computational state space per unit of information, known as a qutrit. A qutrit has three basis states, thus a qubit may be considered as a special case of a qutrit where the coefficient of one of the basis states is zero. Hence both (2 × 2)-dimensional and (3 × 3)-dimensional Pauli errors can occur on qutrits. In this paper, we (i) explore the possible (2 × 2)-dimensional as well as (3 × 3)-dimensional Pauli errors in qutrits and show that any pairwise bit swap error can be expressed as a linear combination of shift errors and phase errors, (ii) propose a special type of error called a quantum superposition error and show its equivalence to arbitrary rotation, (iii) formulate a nine-qutrit code which can correct a single error in a qutrit, and (iv) provide its stabilizer and circuit realization.
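
    The (3 × 3)-dimensional Pauli errors are generated by the qutrit shift and phase operators; since the nine products X^a Z^b form a basis of 3 × 3 matrices, a pairwise bit-swap error can indeed be expanded in shift and phase errors, as the numerical sketch below checks. This is a generic illustration, not the paper's construction.

```python
# Sketch: the (3x3)-dimensional generalized Pauli (shift X and phase Z) operators on a qutrit,
# and a numerical check that a pairwise bit-swap error can be written as a linear combination
# of the products X^a Z^b, since these nine operators form a basis of 3x3 matrices.

import numpy as np

w = np.exp(2j * np.pi / 3)
X = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=complex)        # shift: |k> -> |k+1 mod 3>
Z = np.diag([1, w, w**2])                       # phase: |k> -> w^k |k>

basis = {(a, b): np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
         for a in range(3) for b in range(3)}

swap01 = np.array([[0, 1, 0],
                   [1, 0, 0],
                   [0, 0, 1]], dtype=complex)   # bit-swap error exchanging |0> and |1>

# Expansion coefficients c_ab = Tr(swap01 (X^a Z^b)^dagger) / 3; the generalized Paulis are
# orthogonal under the Hilbert-Schmidt inner product, each with norm squared equal to 3.
coeffs = {k: np.trace(swap01 @ op.conj().T) / 3 for k, op in basis.items()}
reconstruction = sum(c * basis[k] for k, c in coeffs.items())
print("swap01 reconstructed from shift/phase products:", np.allclose(reconstruction, swap01))
print("ZX = w XZ holds:", np.allclose(Z @ X, w * X @ Z))
```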

  9. Help prevent hospital errors

    Science.gov (United States)

    //medlineplus.gov/ency/patientinstructions/000618.htm: Help prevent hospital errors ... in the hospital. If You Are Having Surgery, Help Keep Yourself Safe: Go to a hospital you ...

  10. Pedal Application Errors

    Science.gov (United States)

    2012-03-01

    This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...

  11. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
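    A minimal simulation, with made-up values for the true mass, the weighing error and the rounding steps (not taken from the MERDA program or the paper), illustrates the stated point: when the readout's rounding step is coarse relative to the weighing error, the rounding error becomes strongly correlated with the weighing error and its mean can be far from zero.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mass = 100.07                 # hypothetical true value (off the coarse grid)
sigma_w = 0.02                     # weighing (scale) error s.d.
measured = true_mass + rng.normal(0, sigma_w, 10000)

for step in (0.01, 0.05, 0.20):    # rounding step of the readout
    rounded = np.round(measured / step) * step
    round_err = rounded - measured
    weigh_err = measured - true_mass
    corr = np.corrcoef(round_err, weigh_err)[0, 1]
    print(f"step={step:.2f}  corr={corr:+.3f}  mean rounding error={round_err.mean():+.5f}")
```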

  12. Spotting software errors sooner

    International Nuclear Information System (INIS)

    Munro, D.

    1989-01-01

    Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)

  13. Errors in energy bills

    International Nuclear Information System (INIS)

    Kop, L.

    2001-01-01

    On request, the Dutch Association for Energy, Environment and Water (VEMW) checks energy bills for its customers. It appeared that in the year 2000 many small, but also large, errors were discovered in the bills of 42 businesses.

  14. Medical Errors Reduction Initiative

    National Research Council Canada - National Science Library

    Mutter, Michael L

    2005-01-01

    The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...

  15. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  16. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  17. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  18. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung...
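    A minimal simulation, assuming a toy integrand on [0, 1] and Gaussian positioning errors rather than the paper's specific error-process models, illustrates how errors in sample locations inflate the variance of a one-dimensional systematic-sampling estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x) ** 2 + x          # toy integrand on [0, 1]
h, n_rep = 0.05, 20000                                 # grid spacing, replications

def systematic_estimate(sigma):
    """Estimate the integral of f over [0, 1] from a systematic grid with a
    uniform random start; sample points are perturbed by N(0, sigma^2) errors."""
    u = rng.uniform(0, h)
    x = np.arange(u, 1, h)                             # ideal sample locations
    x_obs = (x + rng.normal(0, sigma, x.size)) % 1     # locations with placement error
    return h * f(x_obs).sum()

for sigma in (0.0, 0.01, 0.02):
    est = np.array([systematic_estimate(sigma) for _ in range(n_rep)])
    print(f"sigma={sigma:.2f}  variance of estimator = {est.var():.3e}")
```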

  19. Error management for musicians: an interdisciplinary conceptual framework.

    Science.gov (United States)

    Kruse-Weber, Silke; Parncutt, Richard

    2014-01-01

    Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians' generally negative attitude toward errors and the tendency to aim for flawless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey-relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further music education and

  20. Error management for musicians: an interdisciplinary conceptual framework

    Directory of Open Access Journals (Sweden)

    Silke eKruse-Weber

    2014-07-01

    Full Text Available Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians’ generally negative attitude toward errors and the tendency to aim for errorless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of these abilities. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further

  1. Research trend on human error reduction

    International Nuclear Information System (INIS)

    Miyaoka, Sadaoki

    1990-01-01

    Human error has been a problem in all industries. In 1988, the Bureau of Mines, Department of the Interior, USA, carried out a worldwide survey on human error in all industries in relation to fatal accidents in mines. The results differed according to the methods of collecting data, but the proportion of total accidents attributed to human error ranged widely, from 20∼85%, and was 35% on average. The rate of occurrence of accidents and troubles in Japanese nuclear power stations is shown; the rate of occurrence of human error is 0∼0.5 cases/reactor-year, which did not vary much. Therefore, the proportion of the total attributed to human error has tended to increase, and it has become important to reduce human error in order to lower the rate of occurrence of accidents and troubles hereafter. After the TMI accident in 1979 in the USA, research on the man-machine interface became active, and after the Chernobyl accident in 1986 in the USSR, the problem of organization and management has been studied. In Japan, 'Safety 21' was drawn up by the Advisory Committee for Energy, and the annual reports on nuclear safety also pointed out the importance of human factors. The state of research on human factors in Japan and abroad and three targets to reduce human error are reported. (K.I.)

  2. Determining the sample size for co-dominant molecular marker-assisted linkage detection for a monogenic qualitative trait by controlling the type-I and type-II errors in a segregating F2 population.

    Science.gov (United States)

    Hühn, M; Piepho, H P

    2003-03-01

    Tests for linkage are usually performed using the lod score method. A critical question in linkage analyses is the choice of sample size. The appropriate sample size depends on the desired type-I error and power of the test. This paper investigates the exact type-I error and power of the lod score method in a segregating F(2) population with co-dominant markers and a qualitative monogenic dominant-recessive trait. For illustration, a disease-resistance trait is considered, where the susceptible allele is recessive. A procedure is suggested for finding the appropriate sample size. It is shown that recessive plants have about twice the information content of dominant plants, so the former should be preferred for linkage detection. In some cases the exact alpha-values for a given nominal alpha may be rather small due to the discrete nature of the sampling distribution in small samples. We show that a gain in power is possible by using exact methods.
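    The exact lod-score calculations of the paper are not reproduced here; as a simplified, hypothetical stand-in, the sketch below computes the exact type-I error and power of a one-sided binomial test of no linkage for several sample sizes, which illustrates the same two points: the exact alpha of a discrete test can be well below the nominal level in small samples, and the sample size can be chosen to reach a target power.

```python
from scipy.stats import binom

def exact_alpha_power(n, p1, nominal_alpha=0.05):
    """One-sided binomial test of H0: p = 0.5 (no linkage) vs H1: p = p1 < 0.5.
    Assumes P(X <= 0 | H0) <= nominal_alpha, which holds for the n used below."""
    k = 0
    while binom.cdf(k + 1, n, 0.5) <= nominal_alpha:   # largest k with exact alpha <= nominal
        k += 1
    return k, binom.cdf(k, n, 0.5), binom.cdf(k, n, p1)

for n in (20, 50, 100, 200):
    k, alpha, power = exact_alpha_power(n, p1=0.3)
    print(f"n={n:3d}  reject if X <= {k:2d}  exact alpha={alpha:.4f}  power={power:.3f}")
```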

  3. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb, states that to err is human, and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reason why they make them, improve and move on. The significance of studying errors is described by Corder as follows: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the aim of this paper is to analyze errors in the process of second language acquisition and the ways in which we teachers can benefit from mistakes to help students improve while giving proper feedback.

  4. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  5. Errors in radiographic recognition in the emergency room

    International Nuclear Information System (INIS)

    Britton, C.A.; Cooperstein, L.A.

    1986-01-01

    For 6 months we monitored the frequency and type of errors in radiographic recognition made by radiology residents on call in our emergency room. A relatively low error rate was observed, probably because the authors evaluated cognitive errors only, rather than including errors of interpretation. The most common missed finding was a small fracture, particularly on the hands or feet. First-year residents were most likely to make an error, but, interestingly, our survey revealed a small subset of upper-level residents who made a disproportionate number of errors.

  6. Medication errors detected in non-traditional databases

    DEFF Research Database (Denmark)

    Perregaard, Helene; Aronson, Jeffrey K; Dalhoff, Kim

    2015-01-01

    AIMS: We have looked for medication errors involving the use of low-dose methotrexate, by extracting information from Danish sources other than traditional pharmacovigilance databases. We used the data to establish the relative frequencies of different types of errors. METHODS: We searched four...... errors, whereas knowledge-based errors more often resulted in near misses. CONCLUSIONS: The medication errors in this survey were most often action-based (50%) and knowledge-based (34%), suggesting that greater attention should be paid to education and surveillance of medical personnel who prescribe...

  7. Ten-Year Changes in the Prevalence and Socio-Demographic Determinants of Physical Activity among Polish Adults Aged 20 to 74 Years. Results of the National Multicenter Health Surveys WOBASZ (2003-2005) and WOBASZ II (2013-2014).

    Science.gov (United States)

    Kwaśniewska, Magdalena; Pikala, Małgorzata; Bielecki, Wojciech; Dziankowska-Zaborszczyk, Elżbieta; Rębowska, Ewa; Kozakiewicz, Krystyna; Pająk, Andrzej; Piwoński, Jerzy; Tykarski, Andrzej; Zdrojewski, Tomasz; Drygas, Wojciech

    2016-01-01

    The aim of the study was to estimate ten-year changes in physical activity (PA) patterns and sociodemographic determinants among adult residents of Poland. The study comprised two independent samples of randomly selected adults aged 20-74 years participating in the National Multicentre Health Survey WOBASZ (2003-2005; n = 14572) and WOBASZ II (2013-2014; n = 5694). In both surveys the measurements were performed by six academic centers in all 16 voivodships of Poland (108 measurement points in each survey). Sociodemographic data were collected by an interviewer-administered questionnaire in both surveys. Physical activity was assessed in three domains: leisure-time, occupational and commuting physical activity. Leisure-time PA changed substantially between the surveys (p<0.001). The prevalence of subjects being active on most days of the week fell in both genders in the years 2003-2014 (37.4% vs 27.3% in men; 32.7% vs 28.3% in women). None or occasional activity increased from 49.6% to 56.8% in men, while it remained stable in women (55.2% vs 54.9%). In both WOBASZ surveys the likelihood of physical inactivity was higher in less educated individuals, smokers and those living in large agglomerations (p<0.01). No significant changes were observed in occupational activity in men between the surveys, while in women the percentage of sedentary work increased from 43.4% to 49.4% (p<0.01). Commuting PA decreased significantly in both genders (p<0.001). About 79.3% of men and 71.3% of women reported no active commuting in the WOBASZ II survey. The observed unfavourable changes in PA emphasize the need for novel intervention concepts in order to reverse this direction. Further detailed monitoring of PA patterns in Poland is of particular importance.

  8. Ten-Year Changes in the Prevalence and Socio-Demographic Determinants of Physical Activity among Polish Adults Aged 20 to 74 Years. Results of the National Multicenter Health Surveys WOBASZ (2003-2005 and WOBASZ II (2013-2014.

    Directory of Open Access Journals (Sweden)

    Magdalena Kwaśniewska

    Full Text Available The aim of the study was to estimate ten-year changes in physical activity (PA) patterns and sociodemographic determinants among adult residents of Poland. The study comprised two independent samples of randomly selected adults aged 20-74 years participating in the National Multicentre Health Survey WOBASZ (2003-2005; n = 14572) and WOBASZ II (2013-2014; n = 5694). In both surveys the measurements were performed by six academic centers in all 16 voivodships of Poland (108 measurement points in each survey). Sociodemographic data were collected by an interviewer-administered questionnaire in both surveys. Physical activity was assessed in three domains: leisure-time, occupational and commuting physical activity. Leisure-time PA changed substantially between the surveys (p<0.001). The prevalence of subjects being active on most days of the week fell in both genders in the years 2003-2014 (37.4% vs 27.3% in men; 32.7% vs 28.3% in women). None or occasional activity increased from 49.6% to 56.8% in men, while it remained stable in women (55.2% vs 54.9%). In both WOBASZ surveys the likelihood of physical inactivity was higher in less educated individuals, smokers and those living in large agglomerations (p<0.01). No significant changes were observed in occupational activity in men between the surveys, while in women the percentage of sedentary work increased from 43.4% to 49.4% (p<0.01). Commuting PA decreased significantly in both genders (p<0.001). About 79.3% of men and 71.3% of women reported no active commuting in the WOBASZ II survey. The observed unfavourable changes in PA emphasize the need for novel intervention concepts in order to reverse this direction. Further detailed monitoring of PA patterns in Poland is of particular importance.

  9. Type Ia Supernova Properties as a Function of the Distance to the Host Galaxy in the SDSS-II SN Survey

    Energy Technology Data Exchange (ETDEWEB)

    Galbany, Lluis [Institut de Fisica d' Altes Energies (IFAE), Barcelona (Spain); et al.

    2012-08-20

    We use type-Ia supernovae (SNe Ia) discovered by the SDSS-II SN Survey to search for dependencies between SN Ia properties and the projected distance to the host galaxy center, using the distance as a proxy for local galaxy properties (local star-formation rate, local metallicity, etc.). The sample consists of almost 200 spectroscopically or photometrically confirmed SNe Ia at redshifts below 0.25. The sample is split into two groups depending on the morphology of the host galaxy. We fit light-curves using both MLCS2k2 and SALT2, and determine color (AV, c) and light-curve shape (delta, x1) parameters for each SN Ia, as well as its residual in the Hubble diagram. We then correlate these parameters with both the physical and the normalized distances to the center of the host galaxy and look for trends in the mean values and scatters of these parameters with increasing distance. The most significant (at the 4-sigma level) finding is that the average fitted AV from MLCS2k2 and c from SALT2 decrease with the projected distance for SNe Ia in spiral galaxies. We also find indications that SNe in elliptical galaxies tend to have narrower light-curves if they explode at larger distances, although this may be due to selection effects in our sample. We do not find strong correlations between the residuals of the distance moduli with respect to the Hubble flow and the galactocentric distances, which indicates a limited correlation between SN magnitudes after standardization and local host metallicity.

  10. cobalt (ii), nickel (ii)

    African Journals Online (AJOL)

    DR. AMINU

    Department of Chemistry Bayero University, P. M. B. 3011, Kano, Nigeria. E-mail: hnuhu2000@yahoo.com. ABSTRACT. The manganese (II), cobalt (II), nickel (II) and .... water and common organic solvents, but are readily soluble in acetone. The molar conductance measurement [Table 3] of the complex compounds in.

  11. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that induce fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is likely possible to change the conditions under which they work. Voluntary error reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk of errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  12. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defense of libertarianism against two accusations according to which it commits a category mistake. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, even though certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis for the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of incurring them.

  13. Libertarismo & Error Categorial

    OpenAIRE

    PATARROYO G, CARLOS G

    2009-01-01

    This article offers a defense of libertarianism against two accusations according to which it commits a category mistake. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, even though certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks in physicalist indeterminism the basis for the possibili...

  14. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  15. Learning from errors in radiology to improve patient safety.

    Science.gov (United States)

    Saeed, Shaista Afzal; Masroor, Imrana; Shafqat, Gulnaz

    2013-10-01

    To determine the views and practices of trainees and consultant radiologists about error reporting. Cross-sectional survey. Radiology trainees and consultant radiologists in four tertiary care hospitals in Karachi were approached in the second quarter of 2011. Participants were asked about their grade, sub-specialty interest, whether they kept a record/log of their errors (defined as a mistake that has management implications for the patient), the number of errors they made in the last 12 months, and the predominant type of error. They were also asked about the details of their departmental error meetings. All duly completed questionnaires were included in the study, while those with incomplete information were excluded. A total of 100 radiologists participated in the survey: 34 consultants and 66 trainees, with a wide range of sub-specialty interests such as CT, ultrasound, etc. Of the 100 responders, 49 kept a personal record/log of their errors. When asked to recall the approximate number of errors made in the last 12 months, 73 (73%) of participants gave a varied response, with 1-5 errors mentioned by the majority, i.e. 47 (64.5%). Most of the radiologists (97%) claimed to receive information about their errors through multiple sources such as morbidity/mortality meetings, patients' follow-up, and through colleagues and consultants. Perceptual errors (n = 66, 66%) were the predominant error type reported. Regular occurrence of error meetings and attendance at three or more error meetings in the last 12 months was reported by 35% of participants. The majority of these described the atmosphere of the error meetings as informative and comfortable (n = 22, 62.8%). It is of utmost importance to develop a culture of learning from mistakes by conducting error meetings and improving the process of recording and addressing errors to enhance patient safety.

  16. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.

  17. Error Correcting Codes

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  18. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error [image omitted] attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  19. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and its taxonomy. These notions are also applied to events that have occurred in the nuclear power industry, aviation industry and shipping industry. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication and resource/task management, excessive authority gradient, and excessive professional courtesy will cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors.

  20. Estimation of the limit of detection with a bootstrap-derived standard error by a partly non-parametric approach. Application to HPLC drug assays

    DEFF Research Database (Denmark)

    Linnet, Kristian

    2005-01-01

    Bootstrap, HPLC, limit of blank, limit of detection, non-parametric statistics, type I and II errors.
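    As a hedged illustration only (the 95th-percentile limit of blank, the 1.645 multiplier and the synthetic data are common conventions and assumptions, not necessarily the paper's exact procedure), the sketch below estimates a limit of detection from blank and low-concentration HPLC-style measurements and attaches a bootstrap-derived standard error.

```python
import numpy as np

rng = np.random.default_rng(1)
blanks = rng.normal(0.02, 0.010, 60)      # hypothetical blank measurements
low    = rng.normal(0.10, 0.020, 60)      # hypothetical low-concentration measurements

def lod(blank_sample, low_sample):
    lob = np.percentile(blank_sample, 95)            # non-parametric limit of blank
    return lob + 1.645 * low_sample.std(ddof=1)      # add parametric spread term

estimate = lod(blanks, low)
boot = np.array([lod(rng.choice(blanks, blanks.size, replace=True),
                     rng.choice(low, low.size, replace=True))
                 for _ in range(2000)])
print(f"LoD = {estimate:.4f}, bootstrap SE = {boot.std(ddof=1):.4f}")
```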

  1. Audit of medication errors by anesthetists in North Western Nigeria

    African Journals Online (AJOL)

    2013-08-03

    Aug 3, 2013 ... Materials and Methods. This multi‑center cross‑sectional survey was conducted ... vigilance (9), appropriate and double checking of drug labels (18), and color coding of syringes (7) as ways to minimize medication errors.

  2. Friendship at work and error disclosure

    Directory of Open Access Journals (Sweden)

    Hsiao-Yen Mao

    2017-10-01

    Full Text Available Organizations rely on contextual factors to promote employee disclosure of self-made errors, which induces a resource dilemma (i.e., disclosure entails spending one's own resources to bring resources to others) and a friendship dilemma (i.e., disclosure is seemingly easier through friendship, yet the cost of friendship is embedded). This study proposes that friendship at work enhances error disclosure and uses conservation of resources theory as the underlying explanation. A three-wave survey collected data from 274 full-time employees with a variety of occupational backgrounds. Empirical results indicated that friendship enhanced error disclosure partially through relational mechanisms of employees' attitudes toward coworkers (i.e., employee engagement) and of coworkers' attitudes toward employees (i.e., perceived social worth). Such effects hold when controlling for established predictors of error disclosure. This study expands extant perspectives on employee error and the theoretical lenses used to explain the influence of friendship at work. We propose that, while promoting error disclosure through both contextual and relational approaches, organizations should be vigilant about potential incongruence.

  3. Analysis of Students' Errors on Linear Programming at Secondary ...

    African Journals Online (AJOL)

    The purpose of this study was to identify secondary school students' errors on linear programming at 'O' level. It is based on the fact that students' errors inform teaching and hence are an essential tool for any serious mathematics teacher who intends to improve mathematics teaching. The study was guided by a descriptive survey ...

  4. Learning from Errors: Critical Incident Reporting in Nursing

    Science.gov (United States)

    Gartmeier, Martin; Ottl, Eva; Bauer, Johannes; Berberat, Pascal Oliver

    2017-01-01

    Purpose: The purpose of this paper is to conceptualize error reporting as a strategy for informal workplace learning and investigate nurses' error reporting cost/benefit evaluations and associated behaviors. Design/methodology/approach: A longitudinal survey study was carried out in a hospital setting with two measurements (time 1 [t1]:…

  5. Prevalence of refractive errors among junior high school students in ...

    African Journals Online (AJOL)

    Among school children, uncorrected refractive errors have a considerable impact on their participation and learning in class. The aim of this study was to assess the prevalence of refractive error among students in the Ejisu-Juabeng Municipality of Ghana. A survey with multi-stage sampling was undertaken. We interviewed ...

  6. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  7. Exposure of UK industrial plumbers to asbestos, Part II: Awareness and responses of plumbers to working with asbestos during a survey in parallel with personal sampling.

    Science.gov (United States)

    Bard, Delphine; Burdett, Garry

    2007-03-01

    Throughout the European Union, millions of tonnes of asbestos were used in the manufacture of products for building and for industrial installations. Today, in the UK, it is estimated that over half a million non-domestic premises alone have asbestos-containing materials in them, and it is recognized that those working in building maintenance trades continue to be at significant risk. In part II, the awareness of UK plumbers of when they are working with asbestos was investigated and compared with the monitored levels reported in part I. The plumbers were issued by post with passive samplers, activity logs to monitor a working week and a questionnaire. The activity logs were used to assess whether maintenance workers were knowingly or unknowingly exposed to airborne asbestos fibres during the course of a working week. The questionnaire was designed to gather information on their age, employment status, current and past perception of the frequency with which they work with asbestos, and knowledge of the precautions that should be taken to limit exposure and risk. Approximately 20% of workers reported on the sample log that they had worked with asbestos. There was a high correlation (93%) between the sampling log replies that they were knowingly working with asbestos and measured asbestos on the passive sampler. However, some 60% of the samples had >5 microm long asbestos structures found by transmission electron microscopy (TEM) analysis, suggesting that the plumbers were aware of only about one-third of their contacts with asbestos materials throughout the week. This increased to just over one half of the plumbers being aware of their contact based on the results for phase contrast microscopy (PCM) countable asbestos fibres. The results from the questionnaire found that over half of the plumbers replying thought that they disturb asbestos only once a year, and 90% of them thought they would work with asbestos for <10 h year-1. Their expectations and awareness of work with

  8. Correction of refractive errors

    Directory of Open Access Journals (Sweden)

    Vladimir Pfeifer

    2005-10-01

    Full Text Available Background: Spectacles and contact lenses are the most frequently used, the safest and the cheapest way to correct refractive errors. The development of keratorefractive surgery has brought new opportunities for the correction of refractive errors in patients who need to be less dependent on spectacles or contact lenses. Until recently, RK was the most commonly performed refractive procedure for nearsighted patients. Conclusions: The introduction of the excimer laser in refractive surgery has provided new opportunities for remodelling the cornea. The laser energy can be delivered to the stromal surface, as in PRK, or deeper into the corneal stroma by means of lamellar surgery. In LASIK the flap is created with a microkeratome, in LASEK with ethanol, and in epi-LASIK the ultra-thin flap is created mechanically.

  9. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  10. Organizational Climate, Stress, and Error in Primary Care: The MEMO Study

    National Research Council Canada - National Science Library

    Linzer, Mark; Manwell, Linda B; Mundt, Marlon; Williams, Eric; Maguire, Ann; McMurray, Julia; Plane, Mary B

    2005-01-01

    .... Physician surveys assessed office environment and organizational climate (OC). Stress was measured using a 4-item scale, past errors were self reported, and the likelihood of future errors was self-assessed using the OSPRE...

  11. Minimum Tracking Error Volatility

    OpenAIRE

    Luca RICCETTI

    2010-01-01

    Investors assign part of their funds to asset managers that are given the task of beating a benchmark. The risk management department usually imposes a maximum value of the tracking error volatility (TEV) in order to keep the risk of the portfolio near to that of the selected benchmark. However, risk management does not establish a rule on TEV which enables us to understand whether the asset manager is really active or not and, in practice, asset managers sometimes follow passively the corres...

  12. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  13. Satellite Photometric Error Determination

    Science.gov (United States)

    2015-10-18

    Satellite Photometric Error Determination Tamara E. Payne, Philip J. Castro, Stephen A. Gregory Applied Optimization 714 East Monument Ave, Suite...advocate the adoption of new techniques based on in-frame photometric calibrations enabled by newly available all-sky star catalogs that contain highly...filter systems will likely be supplanted by the Sloan based filter systems. The Johnson photometric system is a set of filters in the optical

  14. NURE aerial gamma-ray and magnetic reconnaissance survey of portions of New Mexico, Arizona and Texas. Volume II. New Mexico-Carlsbad NI 31-11 Quadrangle. Final report

    International Nuclear Information System (INIS)

    1981-09-01

    As part of the Department of Energy (DOE) National Uranium Resource Evaluation Program, a rotary-wing high-sensitivity radiometric and magnetic survey was flown covering the Carlsbad Quadrangle of the State of New Mexico. The area surveyed consisted of approximately 1732 line miles. The survey was flown with a Sikorsky S58T helicopter equipped with a high-sensitivity gamma ray spectrometer which was calibrated at the DOE calibration facilities at Walker Field in Grand Junction, Colorado, and the Dynamic Test Range at Lake Mead, Arizona. Instrumentation and data reduction methods are presented in Volume I of this report. The reduced data are presented in the form of stacked profiles, standard deviation anomaly plots, histogram plots and microfiche listings. The results of the geologic interpretation of the radiometric data, together with the profiles, anomaly maps and histograms, are presented in this Volume II final report.

  15. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
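    A minimal, hypothetical sketch of the data-hiding idea (plain LSB embedding in stand-in coefficients, not the MPEG-2 codec described above) shows how error-correction bits can be carried inside the signal itself and recovered at the decoder.

```python
import numpy as np

def embed_lsb(coeffs, bits):
    """Hide a bit stream in the least significant bits of integer coefficients."""
    out = coeffs.copy()
    out[:len(bits)] = (out[:len(bits)] & ~1) | bits
    return out

def extract_lsb(coeffs, n_bits):
    return coeffs[:n_bits] & 1

rng = np.random.default_rng(0)
coeffs = rng.integers(-128, 128, 64)      # stand-in for quantized transform coefficients
parity = rng.integers(0, 2, 16)           # stand-in error-correction (parity) bits
stego = embed_lsb(coeffs, parity)
assert np.array_equal(extract_lsb(stego, parity.size), parity)
print("error-correction bits recovered:", extract_lsb(stego, parity.size))
```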

  16. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  17. Drought Persistence Errors in Global Climate Models

    Science.gov (United States)

    Moon, H.; Gudmundsson, L.; Seneviratne, S. I.

    2018-04-01

    The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM model simulations to observation-based data sets. For doing so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates for drought persistence, where a dry status is defined as negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at global scale. Subsequently, we analyzed to which degree (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors using an analysis of variance approach. The results show that at monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
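    A minimal sketch, using a synthetic monthly precipitation series and ignoring the seasonal cycle, shows how the dry-to-dry transition probability used above as a persistence estimate can be computed from precipitation anomalies.

```python
import numpy as np

rng = np.random.default_rng(0)
precip = rng.gamma(2.0, 50.0, 600)            # synthetic monthly precipitation (mm)
anomaly = precip - precip.mean()              # negative anomaly defines a "dry" month
dry = anomaly < 0

# Dry-to-dry transition probability: P(month t dry | month t-1 dry)
p_dry_to_dry = np.sum(dry[1:] & dry[:-1]) / np.sum(dry[:-1])
print(f"P(dry -> dry) = {p_dry_to_dry:.3f}")
```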

  18. Investigating Medication Errors in Educational Health Centers of Kermanshah

    Directory of Open Access Journals (Sweden)

    Mohsen Mohammadi

    2015-08-01

    Full Text Available Background and objectives: Medication errors can be a threat to the safety of patients. Preventing medication errors requires reporting and investigating such errors. The present study was conducted with the purpose of investigating medication errors in educational health centers of Kermanshah. Material and Methods: The present research is an applied, descriptive-analytical study conducted as a survey. The Error Report form of the Ministry of Health and Medical Education was used for data collection. The population of the study included all the personnel (nurses, doctors, paramedics) of educational health centers of Kermanshah. Among them, those who reported the errors they had committed were selected as the sample of the study. Data analysis was done using descriptive statistics and the chi-square test in SPSS version 18. Results: The findings of the study showed that most errors were related to not using medication properly, the fewest errors were related to improper dose, and the majority of errors occurred in the morning. The most frequent reason for errors was staff negligence and the least frequent was lack of knowledge. Conclusion: The health care system should create an environment in which personnel can detect and report errors, identify the factors that cause errors, train the personnel, and provide a good working environment and a standard workload.

  19. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Nurse perceptions of organizational culture and its association with the culture of error reporting: a case of public sector hospitals in Pakistan.

    Science.gov (United States)

    Jafree, Sara Rizvi; Zakar, Rubeena; Zakar, Muhammad Zakria; Fischer, Florian

    2016-01-05

    There is an absence of formal error tracking systems in public sector hospitals of Pakistan and also a lack of literature concerning error reporting culture in the health care sector. Nurse practitioners have front-line knowledge and rich exposure about both the organizational culture and error sharing in hospital settings. The aim of this paper was to investigate the association between organizational culture and the culture of error reporting, as perceived by nurses. The authors used the "Practice Environment Scale-Nurse Work Index Revised" to measure the six dimensions of organizational culture. Seven questions were used from the "Survey to Solicit Information about the Culture of Reporting" to measure error reporting culture in the region. Overall, 309 nurses participated in the survey, including female nurses from all designations such as supervisors, instructors, ward-heads, staff nurses and student nurses. We used SPSS 17.0 to perform a factor analysis. Furthermore, descriptive statistics, mean scores and multivariable logistic regression were used for the analysis. Three areas were ranked unfavorably by nurse respondents, including: (i) the error reporting culture, (ii) staffing and resource adequacy, and (iii) nurse foundations for quality of care. Multivariable regression results revealed that all six categories of organizational culture, including: (1) nurse manager ability, leadership and support, (2) nurse participation in hospital affairs, (3) nurse participation in governance, (4) nurse foundations of quality care, (5) nurse-coworkers relations, and (6) nurse staffing and resource adequacy, were positively associated with higher odds of error reporting culture. In addition, it was found that married nurses and nurses on permanent contract were more likely to report errors at the workplace. Public healthcare services of Pakistan can be improved through the promotion of an error reporting culture, reducing staffing and resource shortages and the

  1. Error Control for Network-on-Chip Links

    CERN Document Server

    Fu, Bo

    2012-01-01

    As technology scales into the nanoscale regime, it is impossible to guarantee a perfect hardware design. Moreover, if the requirement of 100% correctness in hardware can be relaxed, the cost of manufacturing, verification, and testing will be significantly reduced. Many approaches have been proposed to address the reliability problem of on-chip communications. This book focuses on the use of error control codes (ECCs) to improve on-chip interconnect reliability. Coverage includes a detailed description of key issues in NOC error control faced by circuit and system designers, as well as practical error control techniques to minimize the impact of these errors on system performance. Provides a detailed background on the state of error control methods for on-chip interconnects; Describes the use of more complex concatenated codes such as Hamming Product Codes with Type-II HARQ, while emphasizing integration techniques for on-chip interconnect links; Examines energy-efficient techniques for integrating multiple error...
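    As a hedged illustration of the kind of ECC building block discussed (a plain Hamming(7,4) single-error-correcting code rather than the book's concatenated Hamming product codes with Type-II HARQ), the sketch below encodes four data bits, injects a single bit flip on the link, and corrects it by syndrome decoding.

```python
import numpy as np

# Systematic Hamming(7,4): codeword layout is [d1 d2 d3 d4 p1 p2 p3]
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(d):
    d = np.asarray(d)
    p = np.array([d[0] ^ d[1] ^ d[3],
                  d[0] ^ d[2] ^ d[3],
                  d[1] ^ d[2] ^ d[3]])
    return np.concatenate([d, p])

def decode(r):
    s = (H @ r) % 2                               # syndrome
    if s.any():                                   # nonzero syndrome: locate and flip the bad bit
        pos = int(np.where((H.T == s).all(axis=1))[0][0])
        r = r.copy()
        r[pos] ^= 1
    return r[:4]                                  # recovered data bits

data = np.array([1, 0, 1, 1])
codeword = encode(data)
received = codeword.copy()
received[2] ^= 1                                  # single-bit error on the interconnect link
print(decode(received), data)                     # corrected data matches the original
```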

  2. Diagnostic errors in pediatric radiology

    International Nuclear Information System (INIS)

    Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.

    2011-01-01

    Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean:1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case) of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  3. On the Correspondence between Mean Forecast Errors and Climate Errors in CMIP5 Models

    Energy Technology Data Exchange (ETDEWEB)

    Ma, H. -Y.; Xie, S.; Klein, S. A.; Williams, K. D.; Boyle, J. S.; Bony, S.; Douville, H.; Fermepin, S.; Medeiros, B.; Tyteca, S.; Watanabe, M.; Williamson, D.

    2014-02-01

    The present study examines the correspondence between short- and long-term systematic errors in five atmospheric models by comparing the 16 five-day hindcast ensembles from the Transpose Atmospheric Model Intercomparison Project II (Transpose-AMIP II) for July–August 2009 (short term) to the climate simulations from phase 5 of the Coupled Model Intercomparison Project (CMIP5) and AMIP for the June–August mean conditions of the years of 1979–2008 (long term). Because the short-term hindcasts were conducted with identical climate models used in the CMIP5/AMIP simulations, one can diagnose over what time scale systematic errors in these climate simulations develop, thus yielding insights into their origin through a seamless modeling approach. The analysis suggests that most systematic errors of precipitation, clouds, and radiation processes in the long-term climate runs are present by day 5 in ensemble average hindcasts in all models. Errors typically saturate after few days of hindcasts with amplitudes comparable to the climate errors, and the impacts of initial conditions on the simulated ensemble mean errors are relatively small. This robust bias correspondence suggests that these systematic errors across different models likely are initiated by model parameterizations since the atmospheric large-scale states remain close to observations in the first 2–3 days. However, biases associated with model physics can have impacts on the large-scale states by day 5, such as zonal winds, 2-m temperature, and sea level pressure, and the analysis further indicates a good correspondence between short- and long-term biases for these large-scale states. Therefore, improving individual model parameterizations in the hindcast mode could lead to the improvement of most climate models in simulating their climate mean state and potentially their future projections.

  4. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE-like concept is also presented. Examples, tests, evaluation experiments and comparisons with similar machines using classic approaches complement the descriptions.
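
    As a rough illustration of the quantity MEE-trained machines typically optimize, the sketch below evaluates Renyi's quadratic entropy of a set of error samples using a Gaussian Parzen-window estimate. The kernel width and the synthetic error samples are assumptions for illustration; a real MEE classifier would minimize this quantity (or maximize the information potential) during training rather than merely evaluate it.

```python
# Sketch of the quantity MEE-trained machines typically optimize: Renyi's
# quadratic entropy of the error samples, estimated with a Gaussian Parzen
# window. A real MEE perceptron would backpropagate through this estimate;
# here we only evaluate it. Kernel width and data are illustrative.
import numpy as np

def quadratic_error_entropy(errors, sigma=0.5):
    """H2(e) = -log( (1/N^2) * sum_ij G_{sigma*sqrt(2)}(e_i - e_j) )."""
    e = np.asarray(errors, dtype=float)
    diffs = e[:, None] - e[None, :]
    s2 = 2.0 * sigma ** 2                       # variance of the pairwise kernel
    kernel = np.exp(-diffs ** 2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
    information_potential = kernel.mean()       # (1/N^2) * sum over all pairs
    return -np.log(information_potential)

tight_errors = np.random.default_rng(1).normal(0.0, 0.1, 200)   # concentrated errors
spread_errors = np.random.default_rng(1).normal(0.0, 1.0, 200)  # dispersed errors
assert quadratic_error_entropy(tight_errors) < quadratic_error_entropy(spread_errors)
```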

  5. Engineering surveying

    CERN Document Server

    Schofield, W

    2007-01-01

    Engineering surveying involves determining the position of natural and man-made features on or beneath the Earth's surface and utilizing these features in the planning, design and construction of works. It is a critical part of any engineering project. Without an accurate understanding of the size, shape and nature of the site the project risks expensive and time-consuming errors or even catastrophic failure.Engineering Surveying 6th edition covers all the basic principles and practice of this complex subject and the authors bring expertise and clarity. Previous editions of this classic text have given readers a clear understanding of fundamentals such as vertical control, distance, angles and position right through to the most modern technologies, and this fully updated edition continues that tradition.This sixth edition includes:* An introduction to geodesy to facilitate greater understanding of satellite systems* A fully updated chapter on GPS, GLONASS and GALILEO for satellite positioning in surveying* Al...

  6. Optimizing learning of a locomotor task: amplifying errors as needed.

    Science.gov (United States)

    Marchal-Crespo, Laura; López-Olóriz, Jorge; Jaeger, Lukas; Riener, Robert

    2014-01-01

    Research on motor learning has emphasized that errors drive motor adaptation. Accordingly, several researchers have proposed robotic training strategies that amplify movement errors rather than decrease them. In this study, the effect of different error-amplifying robotic training strategies on learning a complex locomotor task was investigated. The experiment was conducted with a one-degree-of-freedom robotic stepper (MARCOS). Subjects were requested to actively coordinate their legs in a desired gait-like pattern in order to track a Lissajous figure presented on a visual display. Learning with three different training strategies was evaluated: (i) No perturbation: the robot follows the subjects' movement without applying any perturbation, (ii) Error amplification: existing errors were amplified with repulsive forces proportional to errors, (iii) Noise disturbance: errors were evoked with a randomly-varying force disturbance. Results showed that training without perturbations was especially suitable for a subset of initially less-skilled subjects, while error amplification seemed to benefit more skilled subjects. Training with error amplification, however, limited transfer of learning. Random disturbing forces benefited learning and promoted transfer in all subjects, probably because they increased attention. These results suggest that learning a locomotor task can be optimized when errors are randomly evoked or amplified based on subjects' initial skill level.
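
    The three strategies above differ mainly in the force the robot commands as a function of the instantaneous tracking error. The sketch below writes them as simple force laws; the gain, the noise amplitude and the function name are hypothetical placeholders, not the controller or parameters actually used with MARCOS.

```python
# Schematic sketch of the three training strategies compared above, written
# as force laws acting on the tracking error. Gains and noise amplitude are
# hypothetical placeholders, not the values used with the MARCOS stepper.
import numpy as np

rng = np.random.default_rng(0)

def robot_force(error, strategy, k_amp=20.0, noise_std=5.0):
    """Force command (N) for a given tracking error (m)."""
    if strategy == "no_perturbation":
        return 0.0                         # robot follows passively
    if strategy == "error_amplification":
        return k_amp * error               # repulsive force proportional to error
    if strategy == "noise_disturbance":
        return rng.normal(0.0, noise_std)  # randomly varying disturbance
    raise ValueError(strategy)

for strategy in ("no_perturbation", "error_amplification", "noise_disturbance"):
    print(strategy, robot_force(error=0.02, strategy=strategy))
```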

  7. Standard Errors for Matrix Correlations.

    Science.gov (United States)

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)

  8. Evaluating a medical error taxonomy.

    OpenAIRE

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a stand...

  9. Web-based Surveys: Changing the Survey Process

    OpenAIRE

    Gunn, Holly

    2002-01-01

    Web-based surveys are having a profound influence on the survey process. Unlike other types of surveys, Web page design skills and computer programming expertise play a significant role in the design of Web-based surveys. Survey respondents face new and different challenges in completing a Web-based survey. This paper examines the different types of Web-based surveys, the advantages and challenges of using Web-based surveys, the design of Web-based surveys, and the issues of validity, error, ...

  10. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  11. Error Patterns in Problem Solving.

    Science.gov (United States)

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  12. Role of memory errors in quantum repeaters

    International Nuclear Information System (INIS)

    Hartmann, L.; Kraus, B.; Briegel, H.-J.; Duer, W.

    2007-01-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used allowing to communicate over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication

  13. Performance, postmodernity and errors

    DEFF Research Database (Denmark)

    Harder, Peter

    2013-01-01

    speaker’s competency (note the –y ending!) reflects adaptation to the community langue, including variations. This reversal of perspective also reverses our understanding of the relationship between structure and deviation. In the heyday of structuralism, it was tempting to confuse the invariant system...... with the prestige variety, and conflate non-standard variation with parole/performance and class both as erroneous. Nowadays the anti-structural sentiment of present-day linguistics makes it tempting to confuse the rejection of ideal abstract structure with a rejection of any distinction between grammatical...... as deviant from the perspective of function-based structure and discuss to what extent the recognition of a community langue as a source of adaptive pressure may throw light on different types of deviation, including language handicaps and learner errors....

  14. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.
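
    A small simulation can make the schema's central distinction concrete: structural (systematic) error does not shrink as the sample grows, whereas random error does. The sketch below uses a hypothetical confounded data-generating model; the numbers and variable names are illustrative and are not drawn from the paper.

```python
# Toy simulation of the schema above: structural error (here, confounding
# bias) persists as the sample grows, whereas random error (sampling
# variability) shrinks. The data-generating model is hypothetical.
import numpy as np

rng = np.random.default_rng(42)
TRUE_EFFECT = 1.0

def naive_effect_estimate(n):
    u = rng.normal(size=n)                            # unmeasured confounder
    x = (u + rng.normal(size=n) > 0).astype(float)    # exposure depends on u
    y = TRUE_EFFECT * x + 2.0 * u + rng.normal(size=n)
    return y[x == 1].mean() - y[x == 0].mean()        # crude exposed-unexposed contrast

for n in (100, 10_000, 1_000_000):
    estimates = [naive_effect_estimate(n) for _ in range(20)]
    print(f"n={n:>9}: mean={np.mean(estimates):.2f}  sd={np.std(estimates):.3f}")
# The spread (random error) shrinks with n; the gap between the mean estimate
# and TRUE_EFFECT (structural error from confounding) does not.
```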

  15. A search for pre-main sequence stars in the high-latitude molecular clouds. II - A survey of the Einstein database

    Science.gov (United States)

    Caillault, Jean-Pierre; Magnani, Loris

    1990-01-01

    The preliminary results are reported of a survey of every EINSTEIN image which overlaps any high-latitude molecular cloud in a search for X-ray emitting pre-main sequence stars. This survey, together with complementary KPNO and IRAS data, will allow the determination of how prevalent low mass star formation is in these clouds in general and, particularly, in the translucent molecular clouds.

  16. Aerial radiometric and magnetic survey; Brushy Basin detail survey: Price/Salina national topographic map sheets, Utah. Volume III. Area II: graphic data, Section III-IX Final report

    International Nuclear Information System (INIS)

    1981-01-01

    This volume contains all of the graphic data for Area II, which include map lines 1660 to 3400 and 5360 to 5780 and tie lines 6100, 6120, and 6160. Due to the large map scale of the data presented (1:62,500), this area was further subdivided into eleven 7-1/2 min quadrant sheets. It should be noted that TL6100 resides in both Areas II and III. The graphic data for TL6100 are presented in Volume IV - Area III - Graphic Data of this report

  17. Barriers to medication error reporting among hospital nurses.

    Science.gov (United States)

    Rutledge, Dana N; Retrosi, Tina; Ostrowski, Gary

    2018-03-01

    The study purpose was to report medication error reporting barriers among hospital nurses, and to determine the validity and reliability of an existing medication error reporting barriers questionnaire. Hospital medication errors typically occur between ordering of a medication to its receipt by the patient with subsequent staff monitoring. To decrease medication errors, factors surrounding medication errors must be understood; this requires reporting by employees. Under-reporting can compromise patient safety by disabling improvement efforts. This 2017 descriptive study was part of a larger workforce engagement study at a faith-based Magnet®-accredited community hospital in California (United States). Registered nurses (~1,000) were invited to participate in the online survey via email. Reported here are sample demographics (n = 357) and responses to the 20-item medication error reporting barriers questionnaire. Using factor analysis, four factors that accounted for 67.5% of the variance were extracted. These factors (subscales) were labelled Fear, Cultural Barriers, Lack of Knowledge/Feedback and Practical/Utility Barriers; each demonstrated excellent internal consistency. The medication error reporting barriers questionnaire, originally developed in long-term care, demonstrated good validity and excellent reliability among hospital nurses. Substantial proportions of American hospital nurses (11%-48%) considered specific factors as likely reporting barriers. Average scores on most barrier items were categorised "somewhat unlikely." The highest six included two barriers concerning the time-consuming nature of medication error reporting and four related to nurses' fear of repercussions. Hospitals need to determine the presence of perceived barriers among nurses using questionnaires such as the medication error reporting barriers and work to encourage better reporting. Barriers to medication error reporting make it less likely that nurses will report medication errors…
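
    As an illustration of the exploratory factor analysis reported above (four factors extracted from a 20-item barriers questionnaire), the sketch below runs a factor analysis on synthetic Likert-style responses with scikit-learn. The sample size matches the study, but the data, the imposed four-factor structure and the choice of library are assumptions; the original analysis was not performed this way.

```python
# Rough sketch of an exploratory factor analysis like the one reported above
# (extracting four factors from a 20-item barriers questionnaire). Data are
# synthetic; treat this purely as an illustration of the technique.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_nurses, n_items, n_factors = 357, 20, 4

# Synthetic responses driven by 4 latent factors (standing in for Fear,
# Cultural Barriers, Lack of Knowledge/Feedback, Practical/Utility Barriers)
# plus item-level noise.
latent = rng.normal(size=(n_nurses, n_factors))
loadings = rng.normal(scale=0.8, size=(n_factors, n_items))
responses = latent @ loadings + rng.normal(scale=0.5, size=(n_nurses, n_items))

fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(responses)
factor_scores = fa.transform(responses)
print("Estimated loadings shape:", fa.components_.shape)          # (4, 20)
print("Variance of factor scores:", np.round(np.var(factor_scores, axis=0), 2))
```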

  18. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service monitored medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Unchecked unidosis carts showed a 0.9% medication error rate (264 errors) versus 0.6% (154 errors) in carts that had been previously checked. In unchecked carts, 70.83% of the errors arose when the carts were set up; the rest were due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%) or boxes that had not been emptied beforehand (0.76%). The errors found in the wards corresponded to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not given by nurses (14.09%), medication withdrawn from the ward stock (14.62%), and errors by the pharmacy service (17.56%). Conclusions: Unidosis carts need to be rechecked, and a computerized prescription system is needed to avoid transcription errors. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to the hospitalization units, the error rate falls to 0.3%.

  19. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J. Title: Prioritising interventions against medication errors – the importance of a definition. Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark. Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors including an index of error types for each stage in the medication process was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary…

  20. Social aspects of clinical errors.

    Science.gov (United States)

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors.

  1. A chance to avoid mistakes human error

    International Nuclear Information System (INIS)

    Amaro, Pablo; Obeso, Eduardo; Gomez, Ruben

    2010-01-01

    …human factor contribution to the events. 'The explanations of the error': The evolution of the human error concept and the causes that lie behind it are presented in this chapter. Several examples help facilitate understanding. In Appendix II, we present a series of 'Cause Codes' used in the industry, intended to aid technicians when they are assessing and investigating events. 'The battle against error': This is the main objective of the book. It presents, one after another, the tools used in the nuclear industry in a practical way. What they are, who has to use them and when to use them are described in sufficient detail so that anyone can assimilate each tool and, where applicable, pursue its implementation in their organization. (authors)

  2. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Directory of Open Access Journals (Sweden)

    Martin eSpüler

    2015-03-01

    Full Text Available When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG.

  3. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Science.gov (United States)

    Spüler, Martin; Niethammer, Christian

    2015-01-01

    When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG. PMID:25859204

  4. THE MASSIVE AND DISTANT CLUSTERS OF WISE SURVEY. II. INITIAL SPECTROSCOPIC CONFIRMATION OF z ∼ 1 GALAXY CLUSTERS SELECTED FROM 10,000 deg2

    International Nuclear Information System (INIS)

    Stanford, S. A.; Gonzalez, Anthony H.; Gettings, Daniel P.; Brodwin, Mark; Eisenhardt, Peter R. M.; Stern, Daniel; Wylezalek, Dominika

    2014-01-01

    We present optical and infrared imaging and optical spectroscopy of galaxy clusters which were identified as part of an all-sky search for high-redshift galaxy clusters, the Massive and Distant Clusters of WISE Survey (MaDCoWS). The initial phase of MaDCoWS combined infrared data from the all-sky data release of the Wide-field Infrared Survey Explorer (WISE) with optical data from the Sloan Digital Sky Survey to select probable z ∼ 1 clusters of galaxies over an area of 10,000 deg². Our spectroscopy confirms 19 new clusters at 0.7 < z < 1.3, half of which are at z > 1, demonstrating the viability of using WISE to identify high-redshift galaxy clusters. The next phase of MaDCoWS will use the greater depth of the AllWISE data release to identify even higher redshift cluster candidates.

  5. Errors in abdominal computed tomography

    International Nuclear Information System (INIS)

    Stephens, S.; Marting, I.; Dixon, A.K.

    1989-01-01

    Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex; in 7 patients an abnormal structure was noted but interpreted as normal, whereas in four a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion had been ascribed (24 patients), and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small and large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab

  6. Fisheries Disaster Survey, 2000

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Responses to selected questions from the Social and Economic Survey administered in spring and summer 2000 to recipients of the second round (Round II) of financial...

  7. Laboratory errors and patient safety.

    Science.gov (United States)

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the commonly encountered laboratory errors throughout our practice in laboratory work, their hazards to patient health care and some measures and recommendations to minimize or eliminate these errors. The laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of one of the private hospitals in Egypt. Errors were classified according to the laboratory phases and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed a total of 14 encountered errors (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent, respectively, of total errors), while the number of test errors encountered in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before test reports had been submitted to the patients. On the other hand, the number of test errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have an impact on patient diagnosis. The findings of this study are consistent with those published from the USA and other countries. This shows that laboratory problems are universal and need general standardization and benchmarking measures. Original in being the first data published from Arabic countries that…

  8. Nonclassical measurements errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

    Discrete choice models and in particular logit type models play an important role in understanding and quantifying individual or household behavior in relation to transport demand. An example is the choice of travel mode for a given trip under the budget and time restrictions that the individuals...... estimates of the income effect it is of interest to investigate the magnitude of the estimation bias and if possible use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data...... that contains very detailed information about incomes. This gives a unique opportunity to learn about the magnitude and nature of the measurement error in income reported by the respondents in the Danish NTS compared to income from the administrative register (correct measure). We find that the classical...

  9. THE GREEN BANK TELESCOPE 350 MHz DRIFT-SCAN SURVEY II: DATA ANALYSIS AND THE TIMING OF 10 NEW PULSARS, INCLUDING A RELATIVISTIC BINARY

    Energy Technology Data Exchange (ETDEWEB)

    Lynch, Ryan S.; Kaspi, Victoria M.; Archibald, Anne M.; Karako-Argaman, Chen [Department of Physics, McGill University, 3600 University Street, Montreal, QC H3A 2T8 (Canada); Boyles, Jason; Lorimer, Duncan R.; McLaughlin, Maura A.; Cardoso, Rogerio F. [Department of Physics, West Virginia University, 111 White Hall, Morgantown, WV 26506 (United States); Ransom, Scott M. [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903 (United States); Stairs, Ingrid H.; Berndsen, Aaron; Cherry, Angus; McPhee, Christie A. [Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1 (Canada); Hessels, Jason W. T.; Kondratiev, Vladislav I.; Van Leeuwen, Joeri [ASTRON, The Netherlands Institute for Radio Astronomy, Postbus 2, 7990-AA Dwingeloo (Netherlands); Epstein, Courtney R. [Department of Astronomy, Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States); Pennucci, Tim [Department of Astronomy, University of Virginia, P.O. Box 400325, Charlottesville, VA 22904 (United States); Roberts, Mallory S. E. [Eureka Scientific Inc., 2452 Delmer Street, Suite 100, Oakland, CA 94602 (United States); Stovall, Kevin, E-mail: rlynch@physics.mcgill.ca [Center for Advanced Radio Astronomy and Department of Physics and Astronomy, University of Texas at Brownsville, Brownsville, TX 78520 (United States)

    2013-02-15

    We have completed a 350 MHz Drift-scan Survey using the Robert C. Byrd Green Bank Telescope with the goal of finding new radio pulsars, especially millisecond pulsars that can be timed to high precision. This survey covered ≈10,300 deg² and all of the data have now been fully processed. We have discovered a total of 31 new pulsars, 7 of which are recycled pulsars. A companion paper by Boyles et al. describes the survey strategy, sky coverage, and instrumental setup, and presents timing solutions for the first 13 pulsars. Here we describe the data analysis pipeline, survey sensitivity, and follow-up observations of new pulsars, and present timing solutions for 10 other pulsars. We highlight several sources: two interesting nulling pulsars, an isolated millisecond pulsar with a measurement of proper motion, and a partially recycled pulsar, PSR J0348+0432, which has a white dwarf companion in a relativistic orbit. PSR J0348+0432 will enable unprecedented tests of theories of gravity.

  10. A Survey of Beginning Crop Science Courses at 49 U.S. Universities. II. Laboratory Format, Teaching Methods, and Topical Content.

    Science.gov (United States)

    Connors, Krista L.; Karnok, Keith J.

    1986-01-01

    This paper is the second of a two-part series which discusses the findings related to laboratory segments in the beginning crop science courses offered in Land Grant institutions. Survey results reveal that laboratories are used but employ traditional teaching rather than individualized or auto-tutorial techniques. (ML)

  11. A Study of General Education Astronomy Students' Understandings of Cosmology. Part II. Evaluating Four Conceptual Cosmology Surveys: A Classical Test Theory Approach

    Science.gov (United States)

    Wallace, Colin S.; Prather, Edward E.; Duncan, Douglas K.

    2011-01-01

    This is the second of five papers detailing our national study of general education astronomy students' conceptual and reasoning difficulties with cosmology. This article begins our quantitative investigation of the data. We describe how we scored students' responses to four conceptual cosmology surveys, and we present evidence for the inter-rater…

  12. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
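
    The core quantity described above has a simple form: the prediction error is the received reward minus the predicted reward, and the prediction is nudged toward the outcome by a fraction of that error. The sketch below is a minimal Rescorla-Wagner-style illustration; the reward sequence and learning rate are arbitrary choices, not values from the paper.

```python
# Minimal sketch of the reward prediction error described above:
# delta = received reward - predicted reward, with the prediction updated
# by a fraction (learning rate) of delta. Values here are illustrative.
rewards = [1, 1, 1, 1, 0, 2, 1]   # delivered rewards on successive trials
prediction, alpha = 0.0, 0.3      # initial value estimate and learning rate

for r in rewards:
    delta = r - prediction        # >0: better than predicted, <0: worse
    prediction += alpha * delta   # prediction drifts toward delivered reward
    print(f"reward={r}  prediction_error={delta:+.2f}  new_prediction={prediction:.2f}")
```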

  13. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
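
    The sketch below illustrates the two procedures on a toy linear observable with three systematic parameters: the unisim estimate adds one-at-a-time one-standard-deviation shifts in quadrature, while the multisim estimate takes the variance over runs in which all parameters are drawn from their distributions. The coefficients and sigmas are made up, and the sketch deliberately omits the finite MC statistical noise that drives the paper's comparison of the two methods.

```python
# Toy sketch of the two methods discussed above for a linear observable
# T(theta) = sum_i a_i * theta_i with independent systematic parameters
# theta_i ~ N(0, sigma_i). Coefficients and sigmas are made up, and the
# MC statistical noise central to the paper's comparison is omitted.
import numpy as np

rng = np.random.default_rng(0)
a = np.array([1.0, -2.0, 0.5])          # sensitivity of the observable
sigma = np.array([0.3, 0.1, 0.4])       # one-standard-deviation systematics

def observable(theta):
    return a @ theta                    # stand-in for a full MC prediction

# Unisim: one MC run per parameter, shifted by +1 sigma; add shifts in quadrature.
unisim_shifts = [observable(sigma[i] * np.eye(len(a))[i]) for i in range(len(a))]
unisim_variance = sum(s ** 2 for s in unisim_shifts)

# Multisim: every MC run draws all parameters from their distributions.
thetas = rng.normal(0.0, sigma, size=(5000, len(a)))
multisim_variance = np.var([observable(t) for t in thetas])

true_variance = np.sum((a * sigma) ** 2)
print(unisim_variance, multisim_variance, true_variance)  # agree up to sampling noise
```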

  14. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  15. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  16. Architecture design for soft errors

    CERN Document Server

    Mukherjee, Shubu

    2008-01-01

    This book provides a comprehensive description of the architectural techniques to tackle the soft error problem. It covers the new methodologies for quantitative analysis of soft errors as well as novel, cost-effective architectural techniques to mitigate them. To provide readers with a better grasp of the broader problem definition and solution space, this book also delves into the physics of soft errors and reviews current circuit and software mitigation techniques.

  17. Dopamine reward prediction error coding

    OpenAIRE

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards - an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less...

  18. Dependence of fluence errors in dynamic IMRT on leaf-positional errors varying with time and leaf number

    International Nuclear Information System (INIS)

    Zygmanski, Piotr; Kung, Jong H.; Jiang, Steve B.; Chin, Lee

    2003-01-01

    In d-MLC based IMRT, leaves move along a trajectory that lies within a user-defined tolerance (TOL) about the ideal trajectory specified in a d-MLC sequence file. The MLC controller measures leaf positions multiple times per second and corrects them if they deviate from ideal positions by a value greater than TOL. The magnitude of leaf-positional errors resulting from finite mechanical precision depends on the performance of the MLC motors executing leaf motions and is generally larger if leaves are forced to move at higher speeds. The maximum value of leaf-positional errors can be limited by decreasing TOL. However, due to the inherent time delay in the MLC controller, this may not happen at all times. Furthermore, decreasing the leaf tolerance results in a larger number of beam hold-offs, which in turn leads to a longer delivery time and, paradoxically, to higher chances of leaf-positional errors (≤TOL). On the other hand, the magnitude of leaf-positional errors depends on the complexity of the fluence map to be delivered. Recently, it has been shown that it is possible to determine the actual distribution of leaf-positional errors either by imaging of moving MLC apertures with a digital imager or by analysis of a MLC log file saved by the MLC controller. This leads to an important question: what is the relation between the distribution of leaf-positional errors and fluence errors? In this work, we introduce an analytical method to determine this relation in dynamic IMRT delivery. We model MLC errors as Random-Leaf Positional (RLP) errors described by a truncated normal distribution defined by two characteristic parameters: a standard deviation σ and a cut-off value Δx0 (Δx0 ∼ TOL). We quantify fluence errors for two cases: (i) Δx0 >> σ (unrestricted normal distribution) and (ii) Δx0 << σ (Δx0-limited normal distribution). We show that the average fluence error of an IMRT field is proportional to (i) σ/ALPO and (ii) Δx0/ALPO, respectively, where…
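
    As a rough numerical illustration of the model above, the sketch below draws leaf-positional errors from a normal distribution truncated at ±Δx0 (via simple rejection sampling) and reports the mean aperture error relative to an assumed average leaf-pair opening (ALPO) in the two regimes. The σ, Δx0 and ALPO values are placeholders, not the paper's, and the analytical proportionality constants derived in the paper are not reproduced here.

```python
# Rough numerical sketch of the RLP error model above: leaf-positional errors
# drawn from a normal distribution truncated at +/- dx0, with the relative
# fluence error taken as the mean aperture error over an assumed average
# leaf-pair opening (ALPO). All numbers are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def truncated_normal(sigma, dx0, size):
    """Sample N(0, sigma^2) restricted to [-dx0, dx0] by rejection."""
    out = rng.normal(0.0, sigma, size)
    bad = np.abs(out) > dx0
    while bad.any():
        out[bad] = rng.normal(0.0, sigma, bad.sum())
        bad = np.abs(out) > dx0
    return out

ALPO = 20.0                                   # assumed mean leaf-pair opening (mm)
for sigma, dx0 in [(0.5, 2.0),                # dx0 >> sigma: unrestricted regime
                   (2.0, 0.5)]:               # dx0 << sigma: tolerance-limited regime
    left = truncated_normal(sigma, dx0, 100_000)
    right = truncated_normal(sigma, dx0, 100_000)
    aperture_error = np.abs(right - left)     # error in each leaf-pair opening
    print(f"sigma={sigma}, dx0={dx0}: relative fluence error "
          f"~ {aperture_error.mean() / ALPO:.3f}")
```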

  19. Identifying Error in AUV Communication

    National Research Council Canada - National Science Library

    Coleman, Joseph; Merrill, Kaylani; O'Rourke, Michael; Rajala, Andrew G; Edwards, Dean B

    2006-01-01

    Mine Countermeasures (MCM) involving Autonomous Underwater Vehicles (AUVs) are especially susceptible to error, given the constraints on underwater acoustic communication and the inconstancy of the underwater communication channel...

  20. Human Errors in Decision Making

    OpenAIRE

    Mohamad, Shahriari; Aliandrina, Dessy; Feng, Yan

    2005-01-01

    The aim of this paper was to identify human errors in the decision making process. The study was focused on a research question such as: what could be the human error as a potential source of decision failure in the evaluation of the alternatives in the process of decision making. Two case studies were selected from the literature and analyzed to find the human errors that contribute to decision failure. Then the analysis of human errors was linked with mental models in the evaluation-of-alternatives step. The results o...

  1. Finding beam focus errors automatically

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.

    1987-01-01

    An automated method for finding beam focus errors uses an optimization program called COMFORT-PLUS. The procedure of finding the correction factors with COMFORT-PLUS has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is intended as an off-line tool to analyze actual measured data for any SLC system. One limitation on the application of this procedure is that it depends on the magnitude of the machine errors. Another is that the program is not totally automated, since the user must decide a priori where to look for errors.

  2. Teamwork and Clinical Error Reporting among Nurses in Korean Hospitals

    OpenAIRE

    Jee-In Hwang, PhD; Jeonghoon Ahn, PhD

    2015-01-01

    Purpose: To examine levels of teamwork and its relationships with clinical error reporting among Korean hospital nurses. Methods: The study employed a cross-sectional survey design. We distributed a questionnaire to 674 nurses in two teaching hospitals in Korea. The questionnaire included items on teamwork and the reporting of clinical errors. We measured teamwork using the Teamwork Perceptions Questionnaire, which has five subscales including team structure, leadership, situation monitori...

  3. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  4. Evaluation of the computerized procedures Manual II (COPMA II)

    International Nuclear Information System (INIS)

    Converse, S.A.

    1995-11-01

    The purpose of this study was to evaluate the effects of a computerized procedure system, the Computerized Procedure Manual II (COPMA-II), on the performance and mental workload of licensed reactor operators. To evaluate COPMA-II, eight teams of two operators were trained to operate a scaled pressurized water reactor facility (SPWRF) with traditional paper procedures and with COPMA-II. Following training, each team operated the SPWRF under normal operating conditions with both paper procedures and COPMA-II. The teams then performed one of two accident scenarios with paper procedures, but performed the remaining accident scenario with COPMA-II. Performance measures and subjective estimates of mental workload were recorded for each performance trial. The most important finding of the study was that the operators committed only half as many errors during the accident scenarios with COPMA-II as they committed with paper procedures. However, time to initiate a procedure was fastest for paper procedures for accident scenario trials. For performance under normal operating conditions, there was no difference in time to initiate or to complete a procedure, or in the number of errors committed with paper procedures and with COPMA-II. There were no consistent differences in the mental workload ratings operators recorded for trials with paper procedures and COPMA-II

  5. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).

  6. Preanalytical Blood Sampling Errors in Clinical Settings

    International Nuclear Information System (INIS)

    Zehra, N.; Malik, A. H.; Arshad, Q.; Sarwar, S.; Aslam, S.

    2016-01-01

    Background: Blood sampling is one of the common procedures done in every ward for disease diagnosis and prognosis. Hundreds of samples are collected daily from different wards, but a lack of appropriate knowledge of blood sampling by paramedical staff and accidental errors make the samples inappropriate for testing. Thus the need to avoid these errors for better results still remains. We carried out this research with the aim of determining the common errors during blood sampling, finding the factors responsible and proposing ways to reduce these errors. Methods: A cross-sectional descriptive study was carried out at the Military and Combined Military Hospital Rawalpindi during February and March 2014. A Venous Blood Sampling questionnaire (VBSQ) was filled in by the staff on a voluntary basis in front of the researchers. The staff were briefed on the purpose of the survey before filling in the questionnaire. The sample size was 228. Results were analysed using SPSS-21. Results: When asked in the questionnaire, around 61.6 percent of the paramedical staff stated that they cleaned the vein by moving the alcohol swab from inward to outward, while 20.8 percent of the staff reported that they felt the vein after disinfection. Contrary to WHO guidelines, 89.6 percent reported a habit of placing blood in the test tube while holding it in the other hand, whereas this should be done after inserting the tube into the stand. Although 86 percent thought that they had ample knowledge of the blood sampling process, they did not practice it properly. Conclusion: Pre-analytical blood sampling errors are common in our setup. Eighty-six percent of participants thought that they had adequate knowledge regarding blood sampling, but most of them were not adhering to standard protocols. There is a need for continued education and refresher courses. (author)

  7. HERSCHEL/PACS SURVEY OF PROTOPLANETARY DISKS IN TAURUS/AURIGA—OBSERVATIONS OF [O I] AND [C II], AND FAR-INFRARED CONTINUUM

    International Nuclear Information System (INIS)

    Howard, Christian D.; Sandell, Göran; Vacca, William D.; Duchêne, Gaspard; Mathews, Geoffrey; Augereau, Jean-Charles; Ménard, Francois; Pinte, Christophe; Podio, Linda; Thi, Wing-Fai; Barrado, David; Riviere-Marichalar, Pablo; Dent, William R. F.; Eiroa, Carlos; Meeus, Gwendolyn; Grady, Carol; Roberge, Aki; Kamp, Inga; Vicente, Silvia; Williams, Jonathan P.

    2013-01-01

    The Herschel Space Observatory was used to observe ∼120 pre-main-sequence stars in Taurus as part of the GASPS Open Time Key project. Photodetector Array Camera and Spectrometer was used to measure the continuum as well as several gas tracers such as [O I] 63 μm, [O I] 145 μm, [C II] 158 μm, OH, H 2 O, and CO. The strongest line seen is [O I] at 63 μm. We find a clear correlation between the strength of the [O I] 63 μm line and the 63 μm continuum for disk sources. In outflow sources, the line emission can be up to 20 times stronger than in disk sources, suggesting that the line emission is dominated by the outflow. The tight correlation seen for disk sources suggests that the emission arises from the inner disk (<50 AU) and lower surface layers of the disk where the gas and dust are coupled. The [O I] 63 μm is fainter in transitional stars than in normal Class II disks. Simple spectral energy distribution models indicate that the dust responsible for the continuum emission is colder in these disks, leading to weaker line emission. [C II] 158 μm emission is only detected in strong outflow sources. The observed line ratios of [O I] 63 μm to [O I] 145 μm are in the regime where we are insensitive to the gas-to-dust ratio, neither can we discriminate between shock or photodissociation region emission. We detect no Class III object in [O I] 63 μm and only three in continuum, at least one of which is a candidate debris disk

  8. Care seeking, complementary therapy and herbal medicine use among people with type 2 diabetes and cardiovascular disease CAMELOT phase II: Surveying for diversity

    DEFF Research Database (Denmark)

    Manderson, Lenore; Oldenburg, Brian; Lin, Vivian

    2012-01-01

    Many Australians manage their health through the combined use of conventional medicine and complementary and alternative medicine (CAM), with substantial direct and indirect costs to government and consumers. Our interest was in the varied health practices of people with type 2 diabetes… prior to the survey, 43% of all respondents had used CAM products or practitioners, including 11% who used Western herbal medicines. The data offers considerable opportunities to tease out the drivers, costs and benefits of CAM use by people with chronic disease. Although findings will be published across a number of articles, here we profile the demographic and health status characteristics of survey respondents and compare the characteristics of users of naturopathy and Western herbal medicine practitioners with this…

  9. THE WYOMING SURVEY FOR Hα. II. Hα LUMINOSITY FUNCTIONS AT z∼ 0.16, 0.24, 0.32, AND 0.40

    International Nuclear Information System (INIS)

    Dale, Daniel A.; Cook, David O.; Moore, Carolynn A.; Staudaher, Shawn M.; Barlow, Rebecca J.; Cohen, Seth A.; Johnson, L. Clifton; Kattner, ShiAnne M.; Schuster, Micah D.

    2010-01-01

    The Wyoming Survey for Hα, or WySH, is a large-area, ground-based imaging survey for Hα-emitting galaxies at redshifts of z ∼ 0.16, 0.24, 0.32, and 0.40. The survey spans up to 4 deg² in a set of fields of low Galactic cirrus emission, using twin narrowband filters at each epoch for improved stellar continuum subtraction. Hα luminosity functions are presented for each Δz ∼ 0.02 epoch based on a total of nearly 1200 galaxies. These data clearly show an evolution with look-back time in the volume-averaged cosmic star formation rate. Integrals of Schechter fits to the incompleteness- and extinction-corrected Hα luminosity functions indicate star formation rates per comoving volume of 0.010, 0.013, 0.020, and 0.022 h₇₀ M☉ yr⁻¹ Mpc⁻³ at z ∼ 0.16, 0.24, 0.32, and 0.40, respectively. Combined statistical and systematic measurement uncertainties are on the order of 25%, while the effects of cosmic variance are at the 20% level. The bulk of this evolution is driven by changes in the characteristic luminosity L* of the Hα luminosity functions, with L* for the earlier two epochs being a factor of 2 larger than L* at the latter two epochs; it is more difficult with this data set to decipher systematic evolutionary differences in the luminosity function amplitude and faint-end slope. Coupling these results with a comprehensive compilation of results from the literature on emission line surveys, the evolution in the cosmic star formation rate density over 0 ≲ z ≲ 1.5 is measured.
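
    The final step described above, turning a Schechter fit into a star formation rate density, can be sketched in a few lines: the luminosity density of a Schechter function is φ* L* Γ(α+2), and the Kennicutt (1998) calibration converts an Hα luminosity to a star formation rate as SFR ≈ 7.9e-42 L(Hα). The Schechter parameters in the snippet below are placeholders, not the WySH best-fit values.

```python
# Hedged sketch of the final step described above: integrate a Schechter fit
# to the H-alpha luminosity function to get a luminosity density, then convert
# to a star formation rate density with the Kennicutt (1998) calibration
# SFR = 7.9e-42 * L(H-alpha). Parameters below are placeholders, not the
# WySH best-fit values.
from math import gamma

phi_star = 10 ** -2.8          # Mpc^-3 (hypothetical normalisation)
L_star = 10 ** 41.8            # erg s^-1 (hypothetical characteristic luminosity)
alpha = -1.3                   # hypothetical faint-end slope

# Luminosity density of a Schechter function: rho_L = phi* L* Gamma(alpha + 2)
rho_L = phi_star * L_star * gamma(alpha + 2.0)          # erg s^-1 Mpc^-3
sfr_density = 7.9e-42 * rho_L                           # M_sun yr^-1 Mpc^-3
print(f"SFR density ~ {sfr_density:.3f} M_sun / yr / Mpc^3")
```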

  10. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g., hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  11. A NEAR-INFRARED SURVEY OF THE INNER GALACTIC PLANE FOR WOLF-RAYET STARS. II. GOING FAINTER: 71 MORE NEW W-R STARS

    Energy Technology Data Exchange (ETDEWEB)

    Shara, Michael M.; Faherty, Jacqueline K.; Zurek, David [American Museum of Natural History, 79th Street and Central Park West, New York, NY 10024-5192 (United States); Moffat, Anthony F. J.; Doyon, Rene [Departement de Physique, Universite de Montreal, CP 6128, Succ. C-V, Montreal, QC, H3C 3J7 (Canada); Gerke, Jill [Department of Astronomy, Ohio State University, Columbus, OH 43210-1173 (United States); Artigau, Etienne; Drissen, Laurent, E-mail: mshara@amnh.org, E-mail: jfaherty@amnh.org, E-mail: dzurek@amnh.org, E-mail: moffat@astro.umontreal.ca, E-mail: doyon@astro.umontreal.ca, E-mail: gerke@astronomy.ohio-state.edu, E-mail: artigau@astro.umontreal.ca, E-mail: ldrissen@phy.ulaval.ca [Departement de Physique, Universite Laval, Pavillon Vachon, Quebec City, QC, G1K 7P4 (Canada)

    2012-06-15

    We are continuing a J, K and narrowband imaging survey of 300 deg² of the plane of the Galaxy, searching for new Wolf-Rayet (W-R) stars. Our survey spans 150° in Galactic longitude and reaches 1° above and below the Galactic plane. The survey has a useful limiting magnitude of K = 15 over most of the observed Galactic plane, and K = 14 (due to severe crowding) within a few degrees of the Galactic center. Thousands of emission-line candidates have been detected. In spectrographic follow-ups of 146 relatively bright W-R star candidates, we have re-examined 11 previously known WC and WN stars and discovered 71 new W-R stars, 17 of type WN and 54 of type WC. Our latest image analysis pipeline now picks out W-R stars with a 57% success rate. Star subtype assignments have been confirmed with the K-band spectra and distances approximated using the method of spectroscopic parallax. Some of the new W-R stars are among the most distant known in our Galaxy. The distribution of these new W-R stars is beginning to trace the locations of massive stars along the distant spiral arms of the Milky Way.

  12. A NEAR-INFRARED SURVEY OF THE INNER GALACTIC PLANE FOR WOLF-RAYET STARS. II. GOING FAINTER: 71 MORE NEW W-R STARS

    International Nuclear Information System (INIS)

    Shara, Michael M.; Faherty, Jacqueline K.; Zurek, David; Moffat, Anthony F. J.; Doyon, René; Gerke, Jill; Artigau, Etienne; Drissen, Laurent

    2012-01-01

    We are continuing a J, K and narrowband imaging survey of 300 deg² of the plane of the Galaxy, searching for new Wolf-Rayet (W-R) stars. Our survey spans 150° in Galactic longitude and reaches 1° above and below the Galactic plane. The survey has a useful limiting magnitude of K = 15 over most of the observed Galactic plane, and K = 14 (due to severe crowding) within a few degrees of the Galactic center. Thousands of emission-line candidates have been detected. In spectrographic follow-ups of 146 relatively bright W-R star candidates, we have re-examined 11 previously known WC and WN stars and discovered 71 new W-R stars, 17 of type WN and 54 of type WC. Our latest image analysis pipeline now picks out W-R stars with a 57% success rate. Star subtype assignments have been confirmed with the K-band spectra and distances approximated using the method of spectroscopic parallax. Some of the new W-R stars are among the most distant known in our Galaxy. The distribution of these new W-R stars is beginning to trace the locations of massive stars along the distant spiral arms of the Milky Way.
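
    Distance by spectroscopic parallax, as used above, is just the extinction-corrected distance modulus inverted for distance. A minimal sketch with a hypothetical star and an assumed absolute K magnitude for its subtype (the values are illustrative, not taken from the survey):

```python
# Sketch of the spectroscopic-parallax arithmetic: given an apparent K magnitude,
# an assumed absolute magnitude M_K for the W-R subtype, and a K-band extinction
# A_K, the distance follows from the distance modulus. Values are placeholders.
import math

def spectroscopic_parallax_distance(m_K, M_K, A_K):
    """Distance in kpc from m_K - M_K = 5*log10(d / 10 pc) + A_K."""
    mu = m_K - M_K - A_K          # extinction-corrected distance modulus
    d_pc = 10 ** ((mu + 5.0) / 5.0)
    return d_pc / 1000.0

# Hypothetical WC star: K = 13.5, assumed M_K = -5.0, A_K = 1.2 mag
print(f"d ~ {spectroscopic_parallax_distance(13.5, -5.0, 1.2):.1f} kpc")
```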

  13. Relationships between dental hygienists' career attitudes and their retention of practice. Part II. From the results of the Ohio Dentist and Dental Hygiene Surveys.

    Science.gov (United States)

    Cox, S S; Langhout, K J; Scheid, R C

    1993-01-01

    This article utilizes findings from the Ohio Dental Hygiene Survey and Ohio Dentist Survey to uncover what specific dental hygiene attitudes exist relative to employment and what factors have led to job termination and to re-entry. Ohio dental hygiene employees are most satisfied with patient relationships, co-worker relationships, and flexible working hours. The dental hygienists are least satisfied with fringe benefits, financial growth, and career creativity. Neither salary, benefits, nor career longevity was a significant factor in determining satisfaction. Dental hygienists who were not working when surveyed said they would consider returning to practice if a better salary were available, if they could find part-time work, if there were a good wage scale with benefits, or if their own financial need changed. Thirty-six percent of the non-practitioners said they would not ever consider returning to practice due to working conditions, establishment of a new career, or inadequate compensation. Dentist employers stated that they were satisfied or very satisfied with their dental hygienists' patient care and contribution to the practice.

  14. Survey report for fiscal 1998. Survey of trends of new CO2 fixation technology using bacteria and algae (II); 1998 nendo chosa hokokusho. Saikin sorui wo riyoshita atarashii nisanka tanso kotei gijutsu no doko chosa

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    The trend of technology is surveyed from the standpoint that, in the process of CO2 fixation using microbes for the production of useful substances, it is essential, in view of the income/outgo balance and economy, to utilize their catalytic function. The survey centers on the feasibility of utilizing organic wastes, cellulose wastes in particular, as an energy source. Special attention is paid to the energy of artificial light and laser beams. From the point of view that it is important to suppress cell multiplication and to effectively utilize only catalytic activity for the production of useful substances, the cell division mechanism of Corynebacterium is analyzed, and the findings are compiled to facilitate the study of whether the division may be controlled. A report is also prepared on the metabolic mechanism of a photosynthesizing bacterium that is judged to be the most promising species. Reference is made to aerobic and anaerobic bacteria. Shown are the organic compounds that are formed by CO2 fixation through microbial or enzymatic reactions. To emphasize their importance as an energy source and to explain the conversion of biomass into useful substances, the technology and economics of conversion into fuel compounds are surveyed. The production of ethanol from organic wastes is evaluated by LCA (life cycle assessment). (NEDO)

  15. Copper (II)

    African Journals Online (AJOL)

    CLEMENT O BEWAJI

    Valine (2-amino-3-methylbutanoic acid) is a chemical compound containing ... stability constant (Kf), Gibbs free energy ΔG (J mol⁻¹), [CuL2(H2O)2] ... synthesis and characterization of Co(II), Ni(II), Cu(II), and Zn(II) complexes with ...

  16. Dual Processing and Diagnostic Errors

    Science.gov (United States)

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  17. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences in Hamadan, Iran, were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), those aged 40-50 years (67.6%), less-experienced personnel (58.7%), those with an educational level of MSc (87.5%), and staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  18. NURE aerial gamma-ray and magnetic reconnaissance survey, Colorado-Arizona area: Salton Sea NI 11-9, Phoenix NI 12-7, El Centro NI 11-12, Ajo NI 12-10, Lukeville NH 12-1 quadrangles. Volume I. Narrative report

    International Nuclear Information System (INIS)

    1979-11-01

    A rotary-wing reconnaissance high-sensitivity radiometric and magnetic survey, encompassing several 1:250,000 quadrangles in southwestern Arizona and southeastern California, was performed. The surveyed area consisted of approximately 9300 line miles. The radiometric data were corrected and normalized to 400 feet terrain clearance. The data were identified as to rock type by correlating the data samples with existing geologic maps. Statistics defining the mean and standard deviation of each rock type are presented as listings in Volume I of this report. The departure of the data from its corresponding mean rock type is computed in terms of standard deviation units and is presented graphically as anomaly maps in Volume II and as computer listings in microfiche form in Volume I. Profiles of the normalized averaged data are contained in Volume II and include traces of the potassium, uranium and thorium count rates, corresponding ratios, and several ancillary sensor data traces: magnetometer, radio altimeter, and barometric pressure height. A description of the local geology is provided, and a discussion of the magnetic and radiometric data is presented together with an evaluation of selected uranium anomalies.

  19. A theory of human error

    Science.gov (United States)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  20. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
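
    For orientation, the classical binormal-model correction that such proposals are usually benchmarked against has a closed form: with classical additive error of known reliability ρ, AUC_true = Φ(Φ⁻¹(AUC_obs)/√ρ). The sketch below implements only that textbook correction; it is not the estimator proposed in the article, which avoids the normality assumption.

```python
# Sketch of the classical normality-based AUC de-attenuation that measurement-
# error corrections are usually compared against (NOT the estimator proposed in
# the article, which avoids the normality assumption).
# Under a binormal model with classical additive error of known reliability rho,
#   AUC_true = Phi( Phi^{-1}(AUC_observed) / sqrt(rho) ).
from scipy.stats import norm

def correct_auc_binormal(auc_obs, reliability):
    """De-attenuate an observed AUC given the biomarker's reliability (ICC)."""
    return norm.cdf(norm.ppf(auc_obs) / reliability**0.5)

print(correct_auc_binormal(0.70, 0.60))   # observed 0.70 with ICC 0.60 -> ~0.75
```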

  1. Cognitive aspect of diagnostic errors.

    Science.gov (United States)

    Phua, Dong Haur; Tan, Nigel C K

    2013-01-01

    Diagnostic errors can result in tangible harm to patients. Despite our advances in medicine, the mental processes required to make a diagnosis exhibits shortcomings, causing diagnostic errors. Cognitive factors are found to be an important cause of diagnostic errors. With new understanding from psychology and social sciences, clinical medicine is now beginning to appreciate that our clinical reasoning can take the form of analytical reasoning or heuristics. Different factors like cognitive biases and affective influences can also impel unwary clinicians to make diagnostic errors. Various strategies have been proposed to reduce the effect of cognitive biases and affective influences when clinicians make diagnoses; however evidence for the efficacy of these methods is still sparse. This paper aims to introduce the reader to the cognitive aspect of diagnostic errors, in the hope that clinicians can use this knowledge to improve diagnostic accuracy and patient outcomes.

  2. Photometric and Spectroscopic Survey of the Cluster [DBS2003] 156 Associated with the H II Region G331.1-0.5

    Science.gov (United States)

    Pinheiro, M. C.; Ortiz, R.; Abraham, Z.; Copetti, M. V. F.

    2016-05-01

    The Norma section of the Milky Way is especially interesting because it crosses three spiral arms: Sagittarius-Carina, Scutum-Crux and the Norma arm itself. Distance determinations of embedded young stellar clusters can contribute to define the spiral structure in this part of the Galaxy. However, spectrophotometric distances were obtained for only a few of these clusters in Norma. We present a photometric and spectroscopic study in the NIR of the [DBS2003] 156 stellar cluster, associated with the H II region G331.1-0.5. We aim to find the ionizing sources of the H II region and determine its distance. The cluster was observed in the J, H, and Ks bands and eight potential massive stars were chosen among the detected sources according to color criteria; subsequent spectroscopy of these candidates was performed with the Ohio State Infrared Imager/Spectrometer spectrograph attached to the Southern Observatory for Astrophysical Research 4.1 m telescope. We identified and classified spectroscopically four early-type stars: IRS 176 (O8 V), IRS 308 (O-type), IRS 310 (O6 V), and IRS 71 (B1 Iab). Based on the proximity of IRS 176 and 308 with the radio continuum emission peaks and their relative positions with respect to the warm dust mid-infrared emission, we concluded that these two stars are the main ionizing sources of the H II region G331.1-0.5. The mean spectrophotometric distance of IRS 176 and 310 of 3.38 ± 0.58 kpc is similar to that obtained in a previous work for two early-type stars of the neighbor cluster [DBS2003] 157 of 3.29 ± 0.58 kpc. The narrow range of radial velocities of radio sources in the area of the clusters [DBS2003] 156 and 157 and their similar visual extinction indicate that these clusters are physically associated. A common distance of 3.34 ± 0.34 kpc is derived for the system [DBS2003] 156 and 157. Based on observations obtained at the Southern Observatory for Astrophysical Research (SOAR), a joint project of the Ministério de Ci

  3. A WIDE AREA SURVEY FOR HIGH-REDSHIFT MASSIVE GALAXIES. II. NEAR-INFRARED SPECTROSCOPY OF BzK-SELECTED MASSIVE STAR-FORMING GALAXIES

    International Nuclear Information System (INIS)

    Onodera, Masato; Daddi, Emanuele; Arimoto, Nobuo; Renzini, Alvio; Kong Xu; Cimatti, Andrea; Broadhurst, Tom; Alexander, Dave M.

    2010-01-01

    Results are presented from near-infrared spectroscopic observations of a sample of BzK-selected, massive star-forming galaxies (sBzKs) at 1.5 < z < 2.3 that were obtained with OHS/CISCO at the Subaru telescope and with SINFONI at the Very Large Telescope. Among the 28 sBzKs observed, Hα emission was detected in 14 objects, and for 11 of them the [N II] λ6583 flux was also measured. Multiwavelength photometry was also used to derive stellar masses and extinction parameters, whereas Hα and [N II] emissions have allowed us to estimate star formation rates (SFRs), metallicities, ionization mechanisms, and dynamical masses. In order to enforce agreement between SFRs from Hα and those derived from the rest-frame UV and mid-infrared, additional obscuration for the emission lines (which originate in H II regions) was required compared to the extinction derived from the slope of the UV continuum. We have also derived the stellar mass-metallicity relation, as well as the relation between stellar mass and specific SFR (SSFR), and compared them to the results in other studies. At a given stellar mass, the sBzKs appear to have been already enriched to metallicities close to those of local star-forming galaxies of similar mass. The sBzKs presented here tend to have higher metallicities compared to those of UV-selected galaxies, indicating that near-infrared selected galaxies tend to be a chemically more evolved population. The sBzKs show SSFRs that are systematically higher, by up to ∼2 orders of magnitude, compared to those of local galaxies of the same mass. The empirical correlations between stellar mass and metallicity, and stellar mass and SSFR, are then compared with those of evolutionary population synthesis models constructed either with the simple closed-box assumption or within an infall scenario. Within the assumptions built into such models, it appears that a short timescale for the star formation (≅100 Myr) and a large initial gas mass are required.

  4. CHANDRA ACIS SURVEY OF X-RAY POINT SOURCES IN NEARBY GALAXIES. II. X-RAY LUMINOSITY FUNCTIONS AND ULTRALUMINOUS X-RAY SOURCES

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Song; Qiu, Yanli; Liu, Jifeng [Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China); Bregman, Joel N., E-mail: songw@bao.ac.cn, E-mail: jfliu@bao.ac.cn [University of Michigan, Ann Arbor, MI 48109 (United States)

    2016-09-20

    Based on the recently completed Chandra/ACIS survey of X-ray point sources in nearby galaxies, we study the X-ray luminosity functions (XLFs) for X-ray point sources in different types of galaxies and the statistical properties of ultraluminous X-ray sources (ULXs). Uniform procedures are developed to compute the detection threshold, to estimate the foreground/background contamination, and to calculate the XLFs for individual galaxies and groups of galaxies, resulting in an XLF library of 343 galaxies of different types. With the large number of surveyed galaxies, we have studied the XLFs and ULX properties across different host galaxy types, and confirm with good statistics that the XLF slope flattens from lenticulars (α ∼ 1.50 ± 0.07) to ellipticals (∼1.21 ± 0.02), to spirals (∼0.80 ± 0.02), to peculiars (∼0.55 ± 0.30), and to irregulars (∼0.26 ± 0.10). The XLF break dividing the neutron star and black hole binaries is also confirmed, albeit at quite different break luminosities for different types of galaxies. A radial dependency is found for ellipticals, with a flatter XLF slope for sources located between D25 and 2 D25, suggesting the XLF slopes in the outer regions of early-type galaxies are dominated by low-mass X-ray binaries in globular clusters. This study shows that the ULX rate in early-type galaxies is 0.24 ± 0.05 ULXs per surveyed galaxy, at a 5σ confidence level. The XLF for ULXs in late-type galaxies extends smoothly until it drops abruptly around 4 × 10⁴⁰ erg s⁻¹, and this break may suggest a mild boundary between the stellar black hole population, possibly including 30 M_⊙ black holes with super-Eddington radiation, and intermediate mass black holes.

  5. THE HIGH A{sub V} Quasar Survey: Reddened Quasi-Stellar Objects selected from optical/near-infrared photometry. II

    Energy Technology Data Exchange (ETDEWEB)

    Krogager, J.-K.; Fynbo, J. P. U.; Vestergaard, M. [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen Ø (Denmark); Geier, S. [Instituto de Astrofísica de Canarias (IAC), E-38205 La Laguna, Tenerife (Spain); Venemans, B. P. [Max-Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Ledoux, C. [European Southern Observatory, Alonso de Córdova 3107, Vitacura, Casilla 19001, Santiago 19 (Chile); Møller, P. [European Southern Observatory, Karl-Schwarzschildstrasse 2, D-85748 Garching bei München (Germany); Noterdaeme, P. [Institut d’Astrophysique de Paris, CNRS-UPMC, UMR7095, 98bis bd Arago, F-75014 Paris (France); Kangas, T.; Pursimo, T.; Smirnova, O. [Nordic Optical Telescope, Apartado 474, E-38700 Santa Cruz de La Palma (Spain); Saturni, F. G. [Tuorla Observatory, Department of Physics and Astronomy, University of Turku, Väisäläntie 20, 21500 Piikkiö (Finland)

    2015-03-15

    Quasi-stellar objects (QSOs) whose spectral energy distributions (SEDs) are reddened by dust either in their host galaxies or in intervening absorber galaxies are to a large degree missed by optical color selection criteria like the ones used by the Sloan Digital Sky Survey (SDSS). To overcome this bias against red QSOs, we employ a combined optical and near-infrared (near-IR) color selection. In this paper, we present a spectroscopic follow-up campaign of a sample of red candidate QSOs which were selected from the SDSS and the UKIRT Infrared Deep Sky Survey (UKIDSS). The spectroscopic data and SDSS/UKIDSS photometry are supplemented by mid-infrared photometry from the Wide-field Infrared Survey Explorer. In our sample of 159 candidates, 154 (97%) are confirmed to be QSOs. We use a statistical algorithm to identify sightlines with plausible intervening absorption systems and identify nine such cases assuming dust in the absorber similar to Large Magellanic Cloud sightlines. We find absorption systems toward 30 QSOs, 2 of which are consistent with the best-fit absorber redshift from the statistical modeling. Furthermore, we observe a broad range in SED properties of the QSOs as probed by the rest-frame 2 μm flux. We find QSOs with a strong excess as well as QSOs with a large deficit at rest-frame 2 μm relative to a QSO template. Potential solutions to these discrepancies are discussed. Overall, our study demonstrates the high efficiency of the optical/near-IR selection of red QSOs.

  6. SURVEYING THE AGENTS OF GALAXY EVOLUTION IN THE TIDALLY STRIPPED, LOW METALLICITY SMALL MAGELLANIC CLOUD (SAGE-SMC). II. COOL EVOLVED STARS

    International Nuclear Information System (INIS)

    Boyer, Martha L.; Meixner, Margaret; Gordon, Karl D.; Shiao, Bernie; Srinivasan, Sundar; Van Loon, Jacco Th.; McDonald, Iain; Kemper, F.; Zaritsky, Dennis; Block, Miwa; Engelbracht, Charles W.; Misselt, Karl; Babler, Brian; Bracker, Steve; Meade, Marilyn; Whitney, Barbara; Hora, Joe; Robitaille, Thomas; Indebetouw, Remy; Sewilo, Marta

    2011-01-01

    We investigate the infrared (IR) properties of cool, evolved stars in the Small Magellanic Cloud (SMC), including the red giant branch (RGB) stars and the dust-producing red supergiant (RSG) and asymptotic giant branch (AGB) stars using observations from the Spitzer Space Telescope Legacy program entitled 'Surveying the Agents of Galaxy Evolution in the Tidally Stripped, Low Metallicity SMC', or SAGE-SMC. The survey includes, for the first time, full spatial coverage of the SMC bar, wing, and tail regions at IR wavelengths (3.6-160 μm). We identify evolved stars using a combination of near-IR and mid-IR photometry and point out a new feature in the mid-IR color-magnitude diagram that may be due to particularly dusty O-rich AGB stars. We find that the RSG and AGB stars each contribute ∼20% of the global SMC flux (extended + point-source) at 3.6 μm, which emphasizes the importance of both stellar types to the integrated flux of distant metal-poor galaxies. The equivalent SAGE survey of the higher-metallicity Large Magellanic Cloud (SAGE-LMC) allows us to explore the influence of metallicity on dust production. We find that the SMC RSG stars are less likely to produce a large amount of dust (as indicated by the [3.6] - [8] color). There is a higher fraction of carbon-rich stars in the SMC, and these stars appear to reach colors as red as their LMC counterparts, indicating that C-rich dust forms efficiently in both galaxies. A preliminary estimate of the dust production in AGB and RSG stars reveals that the extreme C-rich AGB stars dominate the dust input in both galaxies, and that the O-rich stars may play a larger role in the LMC than in the SMC.

  7. ON THE CLUSTER PHYSICS OF SUNYAEV-ZEL'DOVICH AND X-RAY SURVEYS. II. DECONSTRUCTING THE THERMAL SZ POWER SPECTRUM

    International Nuclear Information System (INIS)

    Battaglia, N.; Bond, J. R.; Pfrommer, C.; Sievers, J. L.

    2012-01-01

    Secondary anisotropies in the cosmic microwave background are a treasure-trove of cosmological information. Interpretation of the current experiments probing them is limited by theoretical uncertainties rather than by measurement errors. Here we focus on the secondary anisotropies resulting from the thermal Sunyaev-Zel'dovich (tSZ) effect, the amplitude of which depends critically on the average thermal pressure profile of galaxy groups and clusters. To this end, we use a suite of hydrodynamical TreePM-SPH simulations that include radiative cooling, star formation, supernova feedback, and energetic feedback from active galactic nuclei. We examine in detail how the pressure profile depends on cluster radius, mass, and redshift and provide an empirical fitting function. We employ three different approaches for calculating the tSZ power spectrum: an analytical approach that uses our pressure profile fit, a semianalytical method of pasting our pressure fit onto simulated clusters, and a direct numerical integration of our simulated volumes. We demonstrate that the detailed structure of the intracluster medium and cosmic web affects the tSZ power spectrum. In particular, the substructure and asphericity of clusters increase the tSZ power spectrum by 10%-20% at l ∼ 2000-8000, with most of the additional power being contributed by substructures. The contribution to the power spectrum from radii larger than R_500 is ∼20% at l = 3000; thus, cluster interiors (r < R_500) dominate the power spectrum amplitude at these angular scales.

  8. ON THE CLUSTER PHYSICS OF SUNYAEV-ZEL'DOVICH AND X-RAY SURVEYS. II. DECONSTRUCTING THE THERMAL SZ POWER SPECTRUM

    Energy Technology Data Exchange (ETDEWEB)

    Battaglia, N. [Department of Astronomy and Astrophysics, University of Toronto, 50 St George, Toronto, ON M5S 3H4 (Canada); Bond, J. R.; Pfrommer, C.; Sievers, J. L. [Canadian Institute for Theoretical Astrophysics, 60 St George, Toronto, ON M5S 3H8 (Canada)

    2012-10-20

    Secondary anisotropies in the cosmic microwave background are a treasure-trove of cosmological information. Interpretation of the current experiments probing them is limited by theoretical uncertainties rather than by measurement errors. Here we focus on the secondary anisotropies resulting from the thermal Sunyaev-Zel'dovich (tSZ) effect, the amplitude of which depends critically on the average thermal pressure profile of galaxy groups and clusters. To this end, we use a suite of hydrodynamical TreePM-SPH simulations that include radiative cooling, star formation, supernova feedback, and energetic feedback from active galactic nuclei. We examine in detail how the pressure profile depends on cluster radius, mass, and redshift and provide an empirical fitting function. We employ three different approaches for calculating the tSZ power spectrum: an analytical approach that uses our pressure profile fit, a semianalytical method of pasting our pressure fit onto simulated clusters, and a direct numerical integration of our simulated volumes. We demonstrate that the detailed structure of the intracluster medium and cosmic web affects the tSZ power spectrum. In particular, the substructure and asphericity of clusters increase the tSZ power spectrum by 10%-20% at l ∼ 2000-8000, with most of the additional power being contributed by substructures. The contribution to the power spectrum from radii larger than R_500 is ∼20% at l = 3000; thus, cluster interiors (r < R_500) dominate the power spectrum amplitude at these angular scales.

  9. Issues with data and analyses: Errors, underlying themes, and potential solutions.

    Science.gov (United States)

    Brown, Andrew W; Kaiser, Kathryn A; Allison, David B

    2018-03-13

    Some aspects of science, taken at the broadest level, are universal in empirical research. These include collecting, analyzing, and reporting data. In each of these aspects, errors can and do occur. In this work, we first discuss the importance of focusing on statistical and data errors to continually improve the practice of science. We then describe underlying themes of the types of errors and postulate contributing factors. To do so, we describe a case series of relatively severe data and statistical errors coupled with surveys of some types of errors to better characterize the magnitude, frequency, and trends. Having examined these errors, we then discuss the consequences of specific errors or classes of errors. Finally, given the extracted themes, we discuss methodological, cultural, and system-level approaches to reducing the frequency of commonly observed errors. These approaches will plausibly contribute to the self-critical, self-correcting, ever-evolving practice of science, and ultimately to furthering knowledge.

  10. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Human errors in NPP operations

    International Nuclear Information System (INIS)

    Sheng Jufang

    1993-01-01

    Based on the operational experiences of nuclear power plants (NPPs), the importance of studying human performance problems is described. Statistical analysis of the significance and frequency of various root causes and error modes from a large number of human-error-related events demonstrates that defects in operation/maintenance procedures, working place factors, communication and training practices are the primary root causes, while omission, transposition, and quantitative mistakes are the most frequent error modes. Recommendations are made for domestic research on human performance problems in NPPs.

  12. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which represents messages by subspaces and uses the theory of rank metric codes for network error correction. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an
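
    The "messages as sequences" setting referred to above is the classic algebraic-coding picture. As a point of reference only (this is ordinary block coding, not the linear network error correction construction itself), here is a minimal syndrome-decoding sketch for the binary Hamming(7,4) code.

```python
# Sketch of classic algebraic single-error correction (Hamming(7,4), syndrome
# decoding) -- the "messages as sequences" setting the brief generalizes to
# networks; this is NOT the linear network error correction construction itself.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],     # generator matrix (systematic form)
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],     # parity-check matrix: G @ H.T = 0 (mod 2)
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(msg):
    return msg @ G % 2

def decode(received):
    syndrome = received @ H.T % 2
    if syndrome.any():                   # nonzero syndrome -> locate the single error
        err_pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
        received = received.copy()
        received[err_pos] ^= 1
    return received[:4]                  # systematic code: first 4 bits are the message

msg = np.array([1, 0, 1, 1])
corrupted = encode(msg)
corrupted[5] ^= 1                        # flip one bit in the channel
assert np.array_equal(decode(corrupted), msg)
```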

  13. THE ARIZONA RADIO OBSERVATORY CO MAPPING SURVEY OF GALACTIC MOLECULAR CLOUDS. II. THE W3 REGION IN CO J = 2-1, 13CO J = 2-1, AND CO J = 3-2 EMISSION

    International Nuclear Information System (INIS)

    Bieging, John H.; Peters, William L.

    2011-01-01

    We present fully sampled 38'' resolution maps of the CO and 13CO J = 2-1 lines in the molecular clouds toward the H II region complex W3. The maps cover a 2.0° × 1.67° section of the galactic plane and span -70 to -20 km s⁻¹ (LSR) in velocity with a resolution of ∼1.3 km s⁻¹. The velocity range of the images includes all the gas in the Perseus spiral arm. We also present maps of CO J = 3-2 emission for a 0.5° × 0.33° area containing the H II regions W3 Main and W3(OH). The J = 3-2 maps have a velocity resolution of 0.87 km s⁻¹ and 24'' angular resolution. Color figures display the peak line brightness temperature, the velocity-integrated intensity, and velocity channel maps for all three lines, and also the (CO/13CO) J = 2-1 line intensity ratios as a function of velocity. The line intensity image cubes are made available in standard FITS format as electronically readable files. We compare our molecular line maps with the 1.1 mm continuum image from the BOLOCAM Galactic Plane Survey (BGPS). From our 13CO image cube, we derive kinematic information for the 65 BGPS sources in the mapped field, in the form of Gaussian component fits.
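
    The Gaussian component fits mentioned at the end amount to fitting T(v) = T_peak exp(-(v - v0)²/2σ²) to each source spectrum. A minimal sketch on a synthetic spectrum follows; the channel width and line parameters are made up, and this is not the authors' pipeline.

```python
# Sketch of fitting a Gaussian velocity component to a (synthetic) 13CO spectrum,
# as one would do per BGPS source position; illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, T_peak, v0, sigma):
    return T_peak * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

rng = np.random.default_rng(0)
v = np.arange(-70.0, -20.0, 1.3)                       # km/s channels, ~1.3 km/s wide
spectrum = gaussian(v, 4.0, -43.0, 2.5) + rng.normal(0.0, 0.3, v.size)   # K

popt, pcov = curve_fit(gaussian, v, spectrum, p0=[3.0, -45.0, 3.0])
T_fit, v_fit, sig_fit = popt
print(f"T_peak={T_fit:.2f} K  v_LSR={v_fit:.2f} km/s  FWHM={2.355*sig_fit:.2f} km/s")
```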

  14. CARS: the CFHTLS-Archive-Research Survey. II. Weighing dark matter halos of Lyman-break galaxies at z = 3-5

    Science.gov (United States)

    Hildebrandt, H.; Pielorz, J.; Erben, T.; van Waerbeke, L.; Simon, P.; Capak, P.

    2009-05-01

    Aims: We measure the clustering properties for large samples of u- (z ∼ 3), g- (z ∼ 4), and r- (z ∼ 5) dropouts from the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) Deep fields. Methods: Photometric redshift distributions along with simulations allow us to de-project the angular correlation measurements and estimate physical quantities such as the correlation length, halo mass, galaxy bias, and halo occupation as a function of UV luminosity. Results: For the first time we detect a significant one-halo term in the correlation function at z ∼ 5. The comoving correlation lengths and halo masses of LBGs are found to decrease with decreasing rest-frame UV luminosity. No significant redshift evolution is found in either quantity. The typical halo mass hosting an LBG is M ≳ 10¹² h⁻¹ M_⊙, and the halos are typically occupied by less than one galaxy. Clustering segregation with UV luminosity is clearly observed in the dropout samples; however, redshift evolution cannot clearly be disentangled from systematic uncertainties introduced by the redshift distributions. We study a range of possible redshift distributions to illustrate the effect of this choice. Spectroscopy of representative subsamples is required to make high-accuracy absolute measurements of high-z halo masses. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at TERAPIX and the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS. Based on zCOSMOS and VVDS observations carried out using the Very Large Telescope at the ESO Paranal Observatory under Programme IDs: LP175.A

  15. Policies on documentation and disciplinary action in hospital pharmacies after a medication error.

    Science.gov (United States)

    Bauman, A N; Pedersen, C A; Schommer, J C; Griffith, N L

    2001-06-15

    Hospital pharmacies were surveyed about policies on medication error documentation and actions taken against pharmacists involved in an error. The survey was mailed to 500 randomly selected hospital pharmacy directors in the United States. Data were collected on the existence of medication error reporting policies, what types of errors were documented and how, and hospital demographics. The response rate was 28%. Virtually all of the hospitals had policies and procedures for medication error reporting. Most commonly, documentation of oral and written reprimand was placed in the personnel file of a pharmacist involved in an error. One sixth of respondents had no policy on documentation or disciplinary action in the event of an error. Approximately one fourth of respondents reported that suspension or termination had been used as a form of disciplinary action; legal action was rarely used. Many respondents said errors that caused harm (42%) or death (40%) to the patient were documented in the personnel file, but 34% of hospitals did not document errors in the personnel file regardless of error type. Nearly three fourths of respondents differentiated between errors caught and not caught before a medication leaves the pharmacy and between errors caught and not caught before administration to the patient. More emphasis is needed on documentation of medication errors in hospital pharmacies.

  16. The Low-Resolution Spectrograph of the Hobby-Eberly Telescope. II. Observations of Quasar Candidates from the Sloan Digital Sky Survey

    International Nuclear Information System (INIS)

    Schneider, D. P.; Hill, Gary J.; Fan, X.; Ramsey, L. W.; MacQueen, P. J.; Weedman, D. W.; Booth, J. A.; Eracleous, M.; Gunn, J. E.; Lupton, R. H.

    2000-01-01

    This paper describes spectra of quasar candidates acquired during the commissioning phase of the Low-Resolution Spectrograph of the Hobby-Eberly Telescope. The objects were identified as possible quasars from multicolor image data from the Sloan Digital Sky Survey. The 10 sources had typical r' magnitudes of 19-20, except for one extremely red object with r' ≅ 23. The data, obtained with exposure times between 10 and 25 minutes, reveal that the spectra of four candidates are essentially featureless and are not quasars, five are quasars with redshifts between 2.92 and 4.15 (including one broad absorption line quasar), and the red source is a very late M star or early L dwarf. © 2000. The Astronomical Society of the Pacific

  17. Herschel Observations of Extraordinary Sources: Analysis of the HIFI 1.2 THz Wide Spectral Survey toward Orion KL II. Chemical Implications

    Science.gov (United States)

    Crockett, N. R.; Bergin, E. A.; Neill, J. L.; Favre, C.; Blake, G. A.; Herbst, E.; Anderson, D. E.; Hassel, G. E.

    2015-06-01

    We present chemical implications arising from spectral models fit to the Herschel/HIFI spectral survey toward the Orion Kleinmann-Low nebula (Orion KL). We focus our discussion on the eight complex organics detected within the HIFI survey utilizing a novel technique to identify those molecules emitting in the hottest gas. In particular, we find the complex nitrogen bearing species CH3CN, C2H3CN, C2H5CN, and NH2CHO systematically trace hotter gas than the oxygen bearing organics CH3OH, C2H5OH, CH3OCH3, and CH3OCHO, which do not contain nitrogen. If these complex species form predominantly on grain surfaces, this may indicate N-bearing organics are more difficult to remove from grain surfaces than O-bearing species. Another possibility is that hot (T_kin ∼ 300 K) gas phase chemistry naturally produces higher complex cyanide abundances while suppressing the formation of O-bearing complex organics. We compare our derived rotation temperatures and molecular abundances to chemical models, which include gas-phase and grain surface pathways. Abundances for a majority of the detected complex organics can be reproduced over timescales ≳10⁵ years, with several species being underpredicted by less than 3σ. Derived rotation temperatures for most organics, furthermore, agree reasonably well with the predicted temperatures at peak abundance. We also find that sulfur bearing molecules that also contain oxygen (i.e., SO, SO2, and OCS) tend to probe the hottest gas toward Orion KL, indicating the formation pathways for these species are most efficient at high temperatures. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.

  18. THE SPITZER SPACE TELESCOPE SURVEY OF THE ORION A AND B MOLECULAR CLOUDS. II. THE SPATIAL DISTRIBUTION AND DEMOGRAPHICS OF DUSTY YOUNG STELLAR OBJECTS

    International Nuclear Information System (INIS)

    Megeath, S. T.; Kryukova, E.; Gutermuth, R.; Muzerolle, J.; Hora, J. L.; Myers, P. C.; Fazio, G. G.; Allen, L. E.; Flaherty, K.; Hartmann, L.; Pipher, J. L.; Stauffer, J.; Young, E. T.

    2016-01-01

    We analyze the spatial distribution of dusty young stellar objects (YSOs) identified in the Spitzer Survey of the Orion Molecular clouds, augmenting these data with Chandra X-ray observations to correct for incompleteness in dense clustered regions. We also devise a scheme to correct for spatially varying incompleteness when X-ray data are not available. The local surface densities of the YSOs range from 1 pc⁻² to over 10,000 pc⁻², with protostars tending to be in higher density regions. This range of densities is similar to other surveyed molecular clouds with clusters, but broader than clouds without clusters. By identifying clusters and groups as continuous regions with surface densities ≥10 pc⁻², we find that 59% of the YSOs are in the largest cluster, the Orion Nebula Cluster (ONC), while 13% of the YSOs are found in a distributed population. A lower fraction of protostars in the distributed population is evidence that it is somewhat older than the groups and clusters. An examination of the structural properties of the clusters and groups shows that the peak surface densities of the clusters increase approximately linearly with the number of members. Furthermore, all clusters with more than 70 members exhibit asymmetric and/or highly elongated structures. The ONC becomes azimuthally symmetric in the inner 0.1 pc, suggesting that the cluster is only ∼2 Myr in age. We find that the star formation efficiency (SFE) of the Orion B cloud is unusually low, and that the SFEs of individual groups and clusters are an order of magnitude higher than those of the clouds. Finally, we discuss the relationship between the young low mass stars in the Orion clouds and the Orion OB 1 association, and we determine upper limits to the fraction of disks that may be affected by UV radiation from OB stars or dynamical interactions in dense, clustered regions.

  19. SPITZER ULTRA FAINT SURVEY PROGRAM (SURFS UP). II. IRAC-DETECTED LYMAN-BREAK GALAXIES AT 6 ≲ z ≲ 10 BEHIND STRONG-LENSING CLUSTERS

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Kuang-Han; Bradač, Maruša; Hoag, Austin; Cain, Benjamin; Lubin, L. M.; Knight, Robert I. [University of California Davis, 1 Shields Avenue, Davis, CA 95616 (United States); Lemaux, Brian C. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Ryan, R. E. Jr.; Brammer, Gabriel B. [Aix Marseille Université, CNRS, LAM (Laboratoire d’Astrophysique de Marseille) UMR 7326, F-13388 Marseille (France); Castellano, Marco; Amorin, Ricardo; Fontana, Adriano; Merlin, Emiliano [INAF—Osservatorio Astronomico di Roma Via Frascati 33, I-00040 Monte Porzio Catone (Italy); Schmidt, Kasper B. [Department of Physics, University of California, Santa Barbara, CA 93106 (United States); Schrabback, Tim [Argelander-Institut für Astronomie, Auf Dem Hügel 71, D-53121 Bonn (Germany); Treu, Tommaso [Department of Physics and Astronomy, UCLA, Los Angeles, CA 90095 (United States); Gonzalez, Anthony H. [Department of Astronomy, University of Florida, 211 Bryant Space Science Center, Gainesville, FL 32611 (United States); Linden, Anja von der, E-mail: khhuang@ucdavis.edu, E-mail: astrokuang@gmail.com [Department of Physics, Stanford University, 382 Via Pueblo Mall, Stanford, CA 94305 (United States)

    2016-01-20

    We study the stellar population properties of the IRAC-detected 6 ≲ z ≲ 10 galaxy candidates from the Spitzer UltRa Faint SUrvey Program. Using the Lyman Break selection technique, we find a total of 17 galaxy candidates at 6 ≲ z ≲ 10 from Hubble Space Telescope images (including the full-depth images from the Hubble Frontier Fields program for MACS 1149 and MACS 0717) that have detections at signal-to-noise ratios ≥ 3 in at least one of the IRAC 3.6 and 4.5 μm channels. According to the best mass models available for the surveyed galaxy clusters, these IRAC-detected galaxy candidates are magnified by factors of ∼1.2–5.5. Due to the magnification of the foreground galaxy clusters, the rest-frame UV absolute magnitudes M_1600 are between −21.2 and −18.9 mag, while their intrinsic stellar masses are between 2 × 10⁸ M_⊙ and 2.9 × 10⁹ M_⊙. We identify two Lyα emitters in our sample from the Keck DEIMOS spectra, one at z_Lyα = 6.76 (in RXJ 1347) and one at z_Lyα = 6.32 (in MACS 0454). We find that 4 out of 17 z ≳ 6 galaxy candidates are favored by z ≲ 1 solutions when IRAC fluxes are included in photometric redshift fitting. We also show that IRAC [3.6]–[4.5] color, when combined with photometric redshift, can be used to identify galaxies which likely have strong nebular emission lines or obscured active galactic nucleus contributions within certain redshift windows.

  20. THE SWIFT GRB HOST GALAXY LEGACY SURVEY. II. REST-FRAME NEAR-IR LUMINOSITY DISTRIBUTION AND EVIDENCE FOR A NEAR-SOLAR METALLICITY THRESHOLD

    Energy Technology Data Exchange (ETDEWEB)

    Perley, D. A. [Department of Astronomy, California Institute of Technology, MC 249-17, 1200 East California Blvd., Pasadena, CA 91125 (United States); Tanvir, N. R. [Department of Physics and Astronomy, University of Leicester, University Road, Leicester, LE1 7RH (United Kingdom); Hjorth, J.; Fynbo, J. P. U.; Krühler, T. [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 København Ø (Denmark); Laskar, T.; Berger, E. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Chary, R. [US Planck Data Center, MS220-6, Pasadena, CA 91125 (United States); Postigo, A. de Ugarte [Instituto de Astrofísica de Andalucía (IAA-CSIC), Glorieta de la Astronomía s/n, E-18008, Granada (Spain); Levan, A. J. [Department of Physics, University of Warwick, Coventry CV4 7AL (United Kingdom); Michałowski, M. J. [Scottish Universities Physics Alliance, Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh, EH9 3HJ (United Kingdom); Schulze, S., E-mail: dperley@dark-cosmology.dk [Instituto de Astrofísica, Facultad de Física, Pontificia Universidad Católica de Chile, Vicuña Mackenna 4860, 7820436 Macul, Santiago 22 (Chile)

    2016-01-20

    We present rest-frame near-IR (NIR) luminosities and stellar masses for a large and uniformly selected population of gamma-ray burst (GRB) host galaxies using deep Spitzer Space Telescope imaging of 119 targets from the Swift GRB Host Galaxy Legacy Survey spanning 0.03 < z < 6.3, and we determine the effects of galaxy evolution and chemical enrichment on the mass distribution of the GRB host population across cosmic history. We find a rapid increase in the characteristic NIR host luminosity between z ∼ 0.5 and z ∼ 1.5, but little variation between z ∼ 1.5 and z ∼ 5. Dust-obscured GRBs dominate the massive host population but are only rarely seen associated with low-mass hosts, indicating that massive star-forming galaxies are universally and (to some extent) homogeneously dusty at high redshift while low-mass star-forming galaxies retain little dust in their interstellar medium. Comparing our luminosity distributions with field surveys and measurements of the high-z mass–metallicity relation, our results have good consistency with a model in which the GRB rate per unit star formation is constant in galaxies with gas-phase metallicity below approximately the solar value but heavily suppressed in more metal-rich environments. This model also naturally explains the previously reported “excess” in the GRB rate beyond z ≳ 2; metals stifle GRB production in most galaxies at z < 1.5 but have only minor impact at higher redshifts. The metallicity threshold we infer is much higher than predicted by single-star models and favors a binary progenitor. Our observations also constrain the fraction of cosmic star formation in low-mass galaxies undetectable to Spitzer to be small at z < 4.

  1. Aliasing errors in measurements of beam position and ellipticity

    International Nuclear Information System (INIS)

    Ekdahl, Carl

    2005-01-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all

  2. Aliasing errors in measurements of beam position and ellipticity

    Science.gov (United States)

    Ekdahl, Carl

    2005-09-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
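
    The aliasing error described above can be reproduced with a few lines: sample the image-current distribution of a filamentary beam at N equally spaced wall pickups and form the cosine-weighted (difference-over-sum type) position estimate, which is exact for a continuous pickup but picks up aliased higher azimuthal harmonics for finite N. The geometry and offset below are placeholders, not DARHT-II values, and this is only a sketch of the kind of simulation described.

```python
# Sketch of the discrete-pickup effect the paper quantifies: sample the wall
# (image-current) distribution of a filamentary beam at N equally spaced
# azimuths and reconstruct the horizontal position from the sampled signals.
import numpy as np

def wall_signal(theta, R, r0, phi0):
    """Image-current density (unnormalized) at wall azimuth theta for a filament at (r0, phi0)."""
    return (R**2 - r0**2) / (R**2 + r0**2 - 2 * R * r0 * np.cos(theta - phi0))

def bpm_x_estimate(n_detectors, R, r0, phi0=0.0):
    theta = 2 * np.pi * np.arange(n_detectors) / n_detectors   # pickup azimuths
    s = wall_signal(theta, R, r0, phi0)
    # Cosine-weighted moment of the sampled wall signal: exact for a continuous
    # pickup, but discrete sampling aliases higher azimuthal harmonics onto the
    # dipole (position) term.
    return R * np.sum(s * np.cos(theta)) / np.sum(s)

R = 10.0          # beam-pipe radius (arbitrary units), placeholder geometry
x_true = 3.0      # horizontal offset of the beam filament
for n in (4, 8, 16):
    x_est = bpm_x_estimate(n, R, r0=x_true)
    print(f"N={n:2d} pickups: estimated x = {x_est:.3f} (true {x_true})")
```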

  3. A survey on open queueing network models applied to discrete manufacturing systems: part II

    Directory of Open Access Journals (Sweden)

    Gabriel R. Bitran

    1995-12-01

    This paper presents the second (and last) part of our survey on open queueing network models applied to discrete manufacturing systems. We focus on design and planning models for job-shops. In the first part (Bitran and Morabito, 1995b) we reviewed exact and approximate decomposition methods for performance evaluation models for single and multiple product class systems. The second part reviews optimization models of three categories of problems: the first minimizes capital investment subject to attaining a performance measure (WIP or leadtime), the second seeks to optimize the performance measure subject to resource constraints, and the third explores recent research developments in complexity reduction through shop redesign and products partitioning.
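
    The decomposition approach surveyed here starts from the traffic equations for station arrival rates and then evaluates each station with a single-queue approximation. Below is a minimal sketch using the simplest case (M/M/1 stations plus Little's law); the routing matrix and rates are invented for illustration and are not taken from the survey.

```python
# Minimal sketch of the decomposition idea behind open queueing network models of
# a job-shop: solve the traffic equations for station arrival rates, then treat
# each station as an M/M/1 queue to estimate WIP and leadtime (Little's law).
# Routing matrix and rates are made-up illustrations.
import numpy as np

alpha = np.array([2.0, 0.0, 0.0])          # external arrival rates (jobs/hour) per station
P = np.array([[0.0, 0.7, 0.3],             # routing probabilities between stations
              [0.0, 0.0, 0.8],
              [0.1, 0.0, 0.0]])
mu = np.array([3.0, 2.5, 3.5])             # service rates (jobs/hour)

# Traffic equations: lambda = alpha + P^T lambda  =>  (I - P^T) lambda = alpha
lam = np.linalg.solve(np.eye(3) - P.T, alpha)
rho = lam / mu                              # utilizations (must be < 1 for stability)
L = rho / (1.0 - rho)                       # M/M/1 expected number in system (WIP)
W = L / lam                                 # Little's law: mean time spent at each station

print("arrival rates:", lam.round(2))
print("utilizations :", rho.round(2))
print("total WIP    :", L.sum().round(2), "jobs")
```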

  4. THE SELF-CORRECTION OF ENGLISH SPEECH ERRORS IN SECOND LANGUAGE LEARNING

    Directory of Open Access Journals (Sweden)

    Ketut Santi Indriani

    2015-05-01

    The process of second language (L2) learning is strongly influenced by the factors of error reconstruction that occur when the language is learned. Errors will definitely appear in the learning process. However, errors can be used as a step to accelerate the process of understanding the language. Doing self-correction (with or without giving cues) is one example. In the aspect of speaking, self-correction is done immediately after the error appears. This study is aimed at finding (i) what speech errors the L2 speakers are able to identify, (ii) of the errors identified, what speech errors the L2 speakers are able to self-correct, and (iii) whether the self-correction of speech errors is able to immediately improve L2 learning. Based on the data analysis, it was found that the majority of identified errors are related to nouns (plurality), subject-verb agreement, grammatical structure, and pronunciation. L2 speakers tend to correct errors properly. Of the 78% identified speech errors, as much as 66% could be self-corrected accurately by the L2 speakers. Based on the analysis, it was also found that self-correction is able to improve L2 learning ability directly. This is evidenced by the absence of repetition of the same error after the error had been corrected.

  5. Error field considerations for BPX

    International Nuclear Information System (INIS)

    LaHaye, R.J.

    1992-01-01

    Irregularities in the position of poloidal and/or toroidal field coils in tokamaks produce resonant toroidal asymmetries in the vacuum magnetic fields. Otherwise stable tokamak discharges become non-linearly unstable to disruptive locked modes when subjected to low level error fields. Because of the field errors, magnetic islands are produced which would not otherwise occur in tearing-mode-stable configurations; a concomitant reduction of the total confinement can result. Poloidal and toroidal asymmetries arise in the heat flux to the divertor target. In this paper, the field errors from perturbed BPX coils are used in a field line tracing code of the BPX equilibrium to study these deleterious effects. Limits on coil irregularities for device design and fabrication are computed along with possible correcting coils for reducing such field errors.

  6. The uncorrected refractive error challenge

    Directory of Open Access Journals (Sweden)

    Kovin Naidoo

    2016-11-01

    Full Text Available Refractive error affects people of all ages, socio-economic status and ethnic groups. The most recent statistics estimate that, worldwide, 32.4 million people are blind and 191 million people have vision impairment. Vision impairment has been defined based on distance visual acuity only, and uncorrected distance refractive error (mainly myopia) is the single biggest cause of worldwide vision impairment. However, when we also consider near visual impairment, it is clear that even more people are affected. From research it was estimated that the number of people with vision impairment due to uncorrected distance refractive error was 107.8 million,1 and the number of people affected by uncorrected near refractive error was 517 million, giving a total of 624.8 million people.

  7. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
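
    As a hedged illustration of the covariate-measurement-error bias discussed above, the sketch below simulates a median regression with a noisy covariate and compares the naive fit with the standard regression-calibration correction (it does not implement the paper's joint estimating equations). The error variance is assumed known here, which in practice would come from validation or replicate data.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.regression.quantile_regression import QuantReg

    rng = np.random.default_rng(0)
    n, sigma_u = 5000, 0.8
    x = rng.normal(0.0, 1.0, n)                   # true covariate
    w = x + rng.normal(0.0, sigma_u, n)           # observed with classical measurement error
    y = 1.0 + 2.0 * x + rng.standard_t(4, n)      # outcome; true slope is 2 at the median

    def median_slope(covariate):
        res = QuantReg(y, sm.add_constant(covariate)).fit(q=0.5)
        return np.asarray(res.params)[1]

    naive = median_slope(w)

    # Regression calibration: replace w with E[x | w]; assumes sigma_u is known.
    lam = 1.0 / (1.0 + sigma_u**2)                # reliability ratio, Var(x) = 1 by construction
    calibrated = median_slope(w.mean() + lam * (w - w.mean()))

    print(f"true 2.00 | naive {naive:.2f} | regression calibration {calibrated:.2f}")
    ```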

  8. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  9. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, the Weiszfeld method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton's method.
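
    The sketch below is a minimal numerical illustration of the book's theme rather than an algorithm taken from it: gradient descent on a least-squares problem where every gradient evaluation is corrupted by a bounded perturbation of norm delta, showing that the iterates still settle into a neighbourhood of the minimizer whose size scales with delta. The problem data are arbitrary.

    ```python
    import numpy as np

    # Minimize f(x) = 0.5 * ||A x - b||^2 with an inexact gradient oracle: the returned
    # gradient is corrupted by an arbitrary perturbation of norm <= delta.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(20, 5))
    b = rng.normal(size=20)
    x_star, *_ = np.linalg.lstsq(A, b, rcond=None)    # exact minimizer, for reference

    def noisy_gradient(x, delta):
        g = A.T @ (A @ x - b)
        e = rng.normal(size=g.shape)
        return g + delta * e / np.linalg.norm(e)      # computational error of size delta

    L = np.linalg.norm(A.T @ A, 2)                    # Lipschitz constant of the gradient
    for delta in (0.0, 1e-3, 1e-1):
        x = np.zeros(5)
        for _ in range(2000):
            x = x - (1.0 / L) * noisy_gradient(x, delta)
        print(f"delta={delta:8.0e}  distance to minimizer = {np.linalg.norm(x - x_star):.3e}")
    ```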

  10. Dual processing and diagnostic errors.

    Science.gov (United States)

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reduction in error rates.

  11. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number of such frames. In particular, we argue that a three-error-correcting BCH code is the best choice for the component code in such systems.

  12. Negligence, genuine error, and litigation

    OpenAIRE

    Sohn DH

    2013-01-01

    David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems.

  13. Eliminating US hospital medical errors.

    Science.gov (United States)

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

    Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and present a closed-loop mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams as well as devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible in the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and improving their education related to the quality of service delivery to minimize clinical errors. This will lead to an increase in fixed costs, especially in the shorter time frame. This paper focuses on the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run and also ensure patient safety.

  14. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
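
    As a small, self-invented example in the same spirit (not one of the paper's sample fault trees), the sketch below compares the first-order (delta-method) approximation of the top-event variance with a Monte Carlo reference for a two-gate tree with lognormally distributed inputs.

    ```python
    import numpy as np

    # Top event of a toy fault tree: T = A OR (B AND C), independent basic events.
    def top(p):
        pa, pb, pc = p
        return pa + pb * pc - pa * pb * pc

    mean = np.array([1e-2, 5e-2, 2e-2])       # illustrative mean failure probabilities
    var = (0.5 * mean) ** 2                   # 50% relative uncertainty on each input

    # First-order approximation: Var(T) ~ sum_i (dT/dp_i)^2 * Var(p_i)
    eps = 1e-7
    grad = np.array([(top(mean + eps * np.eye(3)[i]) - top(mean)) / eps for i in range(3)])
    var_first_order = np.sum(grad**2 * var)

    # Monte Carlo reference with lognormal inputs matched to the same means and variances.
    rng = np.random.default_rng(0)
    sigma2 = np.log(1.0 + var / mean**2)
    mu = np.log(mean) - 0.5 * sigma2
    pa, pb, pc = rng.lognormal(mu[:, None], np.sqrt(sigma2)[:, None], size=(3, 200_000))
    var_mc = np.var(pa + pb * pc - pa * pb * pc)

    print(f"first-order variance: {var_first_order:.3e}   Monte Carlo: {var_mc:.3e}")
    ```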

  15. Psychological safety and error reporting within Veterans Health Administration hospitals.

    Science.gov (United States)

    Derickson, Ryan; Fishman, Jonathan; Osatuke, Katerine; Teclaw, Robert; Ramsel, Dee

    2015-03-01

    In psychologically safe workplaces, employees feel comfortable taking interpersonal risks, such as pointing out errors. Previous research suggested that a psychologically safe climate optimizes organizational outcomes. We evaluated psychological safety levels in Veterans Health Administration (VHA) hospitals and assessed their relationship to employee willingness to report medical errors. We conducted an ANOVA on psychological safety scores from a VHA employee census survey (n = 185,879), assessing variability of means across racial and supervisory levels. We examined organizational climate assessment interviews (n = 374) evaluating how many employees asserted willingness to report errors (or not) and their stated reasons. Finally, based on survey data, we identified 2 (psychologically safe versus unsafe) hospitals and compared their number of employees who would be willing/unwilling to report an error. Psychological safety increased with supervisory level. Employees at the psychologically unsafe hospital (71% would report, 13% would not) were less willing to report an error than those at the psychologically safe hospital (91% would, 0% would not). A substantial minority would not report an error and were willing to admit so in a private interview setting. Their stated reasons as well as higher psychological safety means for supervisory employees both suggest power as an important determinant. Intentions to report were associated with psychological safety, strongly suggesting this climate aspect as instrumental to improving patient safety and reducing costs.

  16. [Medical errors: inevitable but preventable].

    Science.gov (United States)

    Giard, R W

    2001-10-27

    Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention for and improvement of the organisational aspects of error are far more important than litigating the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.

  17. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
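
    As a toy companion to the review's introductory examples, the sketch below simulates the classical three-bit repetition code (the classical skeleton of the quantum bit-flip code): majority voting fails only when at least two copies are flipped, so the logical error rate 3p^2(1-p) + p^3 falls below the physical rate p for small p. This is an illustration written for this listing, not code from the article.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def logical_error_rate(p, trials=200_000):
        flips = rng.random((trials, 3)) < p          # which of the three copies flipped
        return np.mean(flips.sum(axis=1) >= 2)       # majority vote decodes incorrectly

    for p in (0.01, 0.05, 0.1):
        analytic = 3 * p**2 * (1 - p) + p**3
        print(f"p={p:.2f}  simulated {logical_error_rate(p):.4f}  analytic {analytic:.4f}")
    ```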

  18. Medical Error and Moral Luck.

    Science.gov (United States)

    Hubbeling, Dieneke

    2016-09-01

    This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome.

  19. Radiology errors: are we learning from our mistakes?

    International Nuclear Information System (INIS)

    Mankad, K.; Hoey, E.T.D.; Jones, J.B.; Tirukonda, P.; Smith, J.T.

    2009-01-01

    Aim: To question practising radiologists and radiology trainees at a large international meeting in an attempt to survey individuals about error reporting. Materials and methods: Radiologists attending the 2007 Radiological Society of North America (RSNA) annual meeting were approached to fill in a written questionnaire. Participants were questioned as to their grade, country in which they practised, and subspecialty interest. They were asked whether they kept a personal log of their errors (with an error defined as 'a mistake that has management implications for the patient'), how many errors they had made in the preceding 12 months, and the types of errors that had occurred. They were also asked whether their local department held regular discrepancy/errors meetings, how many they had attended in the preceding 12 months, and the perceived atmosphere at these meetings (on a qualitative scale). Results: A total of 301 radiologists with a wide range of specialty interests from 32 countries agreed to take part. One hundred and sixty-six of 301 (55%) of responders were consultant/attending grade. One hundred and thirty-five of 301 (45%) were residents/fellows. Fifty-nine of 301 (20%) of responders kept a personal record of their errors. The number of errors made per person per year ranged from none (2%) to 16 or more (7%). The majority (91%) reported making between one and 15 errors/year. Overcalls (40%), under-calls (25%), and interpretation error (15%) were the predominant error types. One hundred and seventy-eight of 301 (59%) of participants stated that their department held regular errors meeting. One hundred and twenty-seven of 301 (42%) had attended three or more meetings in the preceding year. The majority (55%) who had attended errors meetings described the atmosphere as 'educational.' Only a small minority (2%) described the atmosphere as 'poor' meaning non-educational and/or blameful. Conclusion: Despite the undeniable importance of learning from errors

  20. Gamma-spectrometric surveys in differentiated granites. II: the Joaquim Murtinho Granite in the Cunhaporanga Granitic Complex, Parana, SE Brazil; Levantamentos gamaespectrometricos em granitos diferenciados. II: O exemplo do Granito Joaquim Murtinho, Complexo Granitico Cunhaporanga, Parana

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, Francisco Jose Fonseca [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Geologia. Lab. de Pesquisas em Geofisica Aplicada; Fruchting, Allan [Votorantim Metais, Sao Paulo, SP (Brazil)], e-mail: allan.fruchting@vmetais.com.br; Guimaraes, Gilson Burigo [Universidade Estadual de Ponta Grossa (UEPG), PR (Brazil). Dept. de Geociencias], e-mail: gburigo@ig.com.br; Alves, Luizemara Soares [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)], e-mail: luizemara@petrobras.com.br; Martin, Victor Miguel Oliveira; Ulbrich, Horstpeter Herberto Gustavo Jose [Universidade de Sao Paulo (USP), SP (Brazil). Inst. de Geociencias. Dept. de Mineralogia e Geotectonica], e-mail: vicmartin6@ig.com.br, e-mail: hulbrich@usp.br

    2009-07-01

    Detailed mapping at the NW corner of the large Neoproterozoic Cunhaporanga Granitic Complex (CGC), Parana state, SE Brazil, redefined the Joaquim Murtinho Granite (JMG), a late intrusion in the CGC with an exposed area of about 10 km², made up mainly of evolved 'alaskites' (alkali-feldspar leucogranites). This unit is in tectonic contact with the Neoproterozoic-Eocambrian volcano-sedimentary Castro Group, to the W, and is intrusive into other less evolved granitic units of the CGC to the E. Petrographically, JMG shows mainly mesoperthite and quartz, with subordinate amounts of altered micas and some accessory phases, mainly zircon. The equi- to inequigranular granites are usually deformed with cataclastic textures, are often brecciated, and may have miarolitic structures. Formation of late albite, sericite, carbonate and hematite was caused by deuteric and hydrothermal alteration. A gamma-ray spectrometric survey at 231 stations, which measured total counts (TC), Ueq, K%, eU ppm and eTh ppm, was used to construct several direct and derived maps. Compared to neighboring units, the JMG has significant anomalies, especially in the TC, %K, eTh and eU maps, although the differences are less obvious in some derived maps. These evolved granites are enriched in these three elements. The geochemical behavior of K, Th and U is used to analyse the results observed in the maps. Enhanced weathering under a subtropical climate with moderate to high average temperatures and heavy rainfall affects mainly feldspars and biotite, and may also destabilize most U- and Th-bearing accessory phases. Th is most likely retained in restite minerals in soils, being relatively immobile, while part of the U may migrate as uranyl ion in oxidizing media. K is especially affected by feldspar alteration to K-free clays (mainly kaolinite), and may be completely leached. Gamma-ray spectrometric methods are valid tools to study facies in granitic rocks, especially those that are enriched in K, Th and U.

  1. Hα3: an Hα imaging survey of HI selected galaxies from ALFALFA. II. Star formation properties of galaxies in the Virgo cluster and surroundings

    Science.gov (United States)

    Gavazzi, G.; Fumagalli, M.; Fossati, M.; Galardo, V.; Grossetti, F.; Boselli, A.; Giovanelli, R.; Haynes, M. P.

    2013-05-01

    Context. We present the analysis of Hα3, an Hα narrow-band imaging follow-up survey of 409 galaxies selected from the HI Arecibo Legacy Fast ALFA Survey (ALFALFA) in the Local Supercluster, including the Virgo cluster. Aims: Taking advantage of Hα3, which provides the complete census of the recent massive star formation rate (SFR) in HI-rich galaxies in the local Universe, and of ancillary optical data from SDSS, we explore the relations between the stellar mass, the HI mass, and the current massive SFR of nearby galaxies in the Virgo cluster. We compare these with those of isolated galaxies in the Local Supercluster, and we investigate the role of the environment in shaping the star formation properties of galaxies at the present cosmological epoch. Methods: By using the Hα hydrogen recombination line as a tracer of recent star formation, we investigated the relationships between atomic neutral gas and newly formed stars in different environments (cluster and field), for many morphological types (spirals and dwarfs), and over a wide range of stellar masses (10^7.5 to 10^11.5 M⊙). To quantify the degree of environmental perturbation, we adopted an updated calibration of the HI deficiency parameter, which we used to divide the sample into three classes of increasing HI deficiency, with unperturbed galaxies defined by DefHI ≤ 0.3. Results: Once considered as a whole, the Virgo cluster is effective in removing neutral hydrogen from galaxies, and this perturbation is strong enough to appreciably reduce the SFR of its entire galaxy population. Conclusions: An estimate of the present infall rate of 300-400 galaxies per Gyr in the Virgo cluster is obtained from the number of existing HI-rich late-type systems, assuming 200-300 Myr as the time scale for HI ablation. If the infall process has been acting at a constant rate, this would imply that the Virgo cluster formed approximately 2 Gyr ago, consistent with the idea that Virgo is in a young state of dynamical evolution.

  2. Error Analysis Of Clock Time (T), Declination (δ) And Latitude ...

    African Journals Online (AJOL)

    ... clock time (T), declination (δ), latitude (Φ), longitude (λ) and azimuth (A), which are aimed at establishing fixed positions and orientations of survey points and lines on the earth's surface. The paper attempts the analysis of the individual and combined effects of error in time ...

  3. (II) complexes

    African Journals Online (AJOL)

    activities of Schiff base tin (II) complexes. Neelofar ... Conclusion: All synthesized Schiff bases and their tin (II) complexes showed high antimicrobial and ... Singh HL. Synthesis and characterization of tin (II) complexes of fluorinated Schiff bases derived from amino acids. Spectrochim Acta Part A: Molec Biomolec.

  4. Surveying Future Surveys

    Science.gov (United States)

    Carlstrom, John E.

    2016-06-01

    The now standard model of cosmology has been tested and refined by the analysis of increasingly sensitive, large astronomical surveys, especially with statistically significant millimeter-wave surveys of the cosmic microwave background and optical surveys of the distribution of galaxies. This talk will offer a glimpse of the future, which promises an acceleration of this trend with cosmological information coming from new surveys across the electromagnetic spectrum as well as particles and even gravitational waves.

  5. Second Byurakan spectral sky survey. II. Results for region centered on alpha 09h50m, delta +55 deg 00 arcmin

    International Nuclear Information System (INIS)

    Markarian, B.E.; Stepanian, D.A.

    1984-01-01

    The second list of objects in the Second Biurakan Spectral Sky Survey of the region centered on alpha 09h50m, delta +55 deg 00 arcmin is given. The list contains data on 110 objects and galaxies of a peculiar physical nature and 24 blue stars. The observations were made with the 40''/52'' Schmidt telescope of the Biurakan Astrophysical Observatory with a set of three objective prisms, using Kodak IIIaJ and IIIaF emulsions sensitized in nitrogen. The area is found to contain 20 quasar candidates and four Seyfert galaxies, 27 blue stellar objects, 24 galaxies with an appreciable ultraviolet continuum, and 39 emission galaxies without appreciable ultraviolet radiation. The surface density of the quasars and Seyferts in the considered area down to the limiting magnitude of 19.5 is more than 1.5 per square degree with allowance for the already known quasars. The surface density of emission galaxies is about four per square degree. 7 references.

  6. Mutagens from the cooking of food. II. Survey by Ames/Salmonella test of mutagen formation in the major protein-rich foods of the American diet

    Energy Technology Data Exchange (ETDEWEB)

    Bjeldanes, L.F. (Univ. of California, Berkeley); Morris, M.M.; Felton, J.S.; Healy, S.; Stuermer, D.; Berry, P.; Timourian, H.; Hatch, F.T.

    1982-01-01

    The formation of mutagens in the major cooked protein-rich foods in the US diet was studied in the Ames Salmonella typhimurium test. The nine protein-rich foods most commonly eaten in the USA--ground beef, beef steak, eggs, pork chops, fried chicken, pot-roasted beef, ham, roast beef and bacon--were examined for their mutagenicity towards S. typhimurium TA1538 after normal 'household' cooking (deep frying, griddle/pan frying, baking/roasting, broiling, stewing, braising or boiling at 100-475°C). Well-done fried ground beef, beef steak, ham, pork chops and bacon showed significant mutagen formation. For chicken and beef steak high-temperature broiling produced the most mutagenicity, followed by baking/roasting and frying. Stewing, braising and deep frying produced little mutagen. Eggs and egg products produced mutagens only after cooking at high temperatures (the yolk to a greater extent than the white). Commercially cooked hamburgers showed a wide range of mutagenic activity. We conclude that mutagen formation following cooking of protein-containing foods is a complex function of food type, cooking time and cooking temperature. It seems clear that all the major protein-rich foods if cooked to a well-done state on the griddle (eggs only at temperatures above 225°C) or by broiling will contain mutagens detectable by the Ames/Salmonella assay. This survey is a step towards determining whether any human health hazard results from cooking protein-rich foods. Further testing in both short- and long-term genotoxicity bioassays and carcinogenesis assays is needed before any human risk extrapolations can be made.

  7. Mutagens from the cooking of food. II. Survey by Ames/Salmonella test of mutagen formation in the major protein-rich foods of the American diet.

    Science.gov (United States)

    Bjeldanes, L F; Morris, M M; Felton, J S; Healy, S; Stuermer, D; Berry, P; Timourian, H; Hatch, F T

    1982-08-01

    The formation of mutagens in the major cooked protein-rich foods in the US diet was studied in the Ames Salmonella typhimurium test. The nine protein-rich foods most commonly eaten in the USA--ground beef, beef steak, eggs, pork chops, fried chicken, pot-roasted beef, ham, roast beef and bacon--were examined for their mutagenicity towards S. typhimurium TA1538 after normal 'household' cooking (deep frying, griddle/pan frying, baking/roasting, broiling, stewing, braising or boiling at 100-475 degrees C). Well-done fried ground beef, beef steak, ham, pork chops and bacon showed significant mutagen formation. For chicken and beef steak high-temperature broiling produced the most mutagenicity, followed by baking/roasting and frying. Stewing, braising and deep frying produced little mutagen. Eggs and egg products produced mutagens only after cooking at high temperatures (the yolk to a greater extent than the white). Commercially cooked hamburgers showed a wide range of mutagenic activity. We conclude that mutagen formation following cooking of protein-containing foods is a complex function of food type, cooking time and cooking temperature. It seems clear that all the major protein-rich foods if cooked to a well-done state on the griddle (eggs only at temperatures above 225 degrees C) or by broiling will contain mutagens detectable by the Ames/Salmonella assay. This survey is a step towards determining whether any human health hazard results from cooking protein-rich foods. Further testing in both short- and long-term genotoxicity bioassays and carcinogenesis assays is needed before any human risk extrapolations can be made.

  8. Large Interstellar Polarisation Survey. II. UV/optical study of cloud-to-cloud variations of dust in the diffuse ISM

    Science.gov (United States)

    Siebenmorgen, R.; Voshchinnikov, N. V.; Bagnulo, S.; Cox, N. L. J.; Cami, J.; Peest, C.

    2018-03-01

    It is well known that the dust properties of the diffuse interstellar medium exhibit variations towards different sight-lines on a large scale. We have investigated the variability of the dust characteristics on a small scale, and from cloud-to-cloud. We use low-resolution spectro-polarimetric data obtained in the context of the Large Interstellar Polarisation Survey (LIPS) towards 59 sight-lines in the Southern Hemisphere, and we fit these data using a dust model composed of silicate and carbon particles with sizes from the molecular to the sub-micrometre domain. Large (≥6 nm) silicates of prolate shape account for the observed polarisation. For 32 sight-lines we complement our data set with UVES archive high-resolution spectra, which enable us to establish the presence of single-cloud or multiple-clouds towards individual sight-lines. We find that the majority of these 35 sight-lines intersect two or more clouds, while eight of them are dominated by a single absorbing cloud. We confirm several correlations between extinction and parameters of the Serkowski law with dust parameters, but we also find previously undetected correlations between these parameters that are valid only in single-cloud sight-lines. We find that interstellar polarisation from multiple-clouds is smaller than from single-cloud sight-lines, showing that the presence of a second or more clouds depolarises the incoming radiation. We find large variations of the dust characteristics from cloud-to-cloud. However, when we average a sufficiently large number of clouds in single-cloud or multiple-cloud sight-lines, we always retrieve similar mean dust parameters. The typical dust abundances of the single-cloud cases are [C]/[H] = 92 ppm and [Si]/[H] = 20 ppm.

  9. THE BOSS EMISSION-LINE LENS SURVEY. II. INVESTIGATING MASS-DENSITY PROFILE EVOLUTION IN THE SLACS+BELLS STRONG GRAVITATIONAL LENS SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, Adam S.; Brownstein, Joel R.; Shu Yiping; Arneson, Ryan A. [Department of Physics and Astronomy, University of Utah, 115 South 1400 East, Salt Lake City, UT 84112 (United States); Kochanek, Christopher S. [Department of Astronomy and Center for Cosmology and Astroparticle Physics, Ohio State University, Columbus, OH 43210 (United States); Schlegel, David J. [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Eisenstein, Daniel J. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS 20, Cambridge, MA 02138 (United States); Wake, David A. [Department of Astronomy, Yale University, New Haven, CT 06520 (United States); Connolly, Natalia [Department of Physics, Hamilton College, Clinton, NY 13323 (United States); Maraston, Claudia [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom); Weaver, Benjamin A., E-mail: bolton@astro.utah.edu [Center for Cosmology and Particle Physics, New York University, New York, NY 10003 (United States)

    2012-09-20

    We present an analysis of the evolution of the central mass-density profile of massive elliptical galaxies from the SLACS and BELLS strong gravitational lens samples over the redshift interval z ≈ 0.1-0.6, based on the combination of strong-lensing aperture mass and stellar velocity-dispersion constraints. We find a significant trend toward steeper mass profiles (parameterized by the power-law density model ρ ∝ r^(-γ)) at later cosmic times, with magnitude d⟨γ⟩/dz = -0.60 ± 0.15. We show that the combined lens-galaxy sample is consistent with a non-evolving distribution of stellar velocity dispersions. Considering possible additional dependence of ⟨γ⟩ on lens-galaxy stellar mass, effective radius, and Sérsic index, we find marginal evidence for shallower mass profiles at higher masses and larger sizes, but with a significance that is subdominant to the redshift dependence. Using the results of published Monte Carlo simulations of spectroscopic lens surveys, we verify that our mass-profile evolution result cannot be explained by lensing selection biases as a function of redshift. Interpreted as a true evolutionary signal, our result suggests that major dry mergers involving off-axis trajectories play a significant role in the evolution of the average mass-density structure of massive early-type galaxies over the past 6 Gyr. We also consider an alternative non-evolutionary hypothesis based on variations in the strong-lensing measurement aperture with redshift, which would imply the detection of an 'inflection zone' marking the transition between the baryon-dominated and dark-matter halo-dominated regions of the lens galaxies. Further observations of the combined SLACS+BELLS sample can constrain this picture more precisely, and enable a more detailed investigation of the multivariate dependences of galaxy mass structure across cosmic time.

  10. The Global Online Sexuality Survey (GOSS): the United States of America in 2011 chapter II: phosphodiesterase inhibitors utilization among English speakers.

    Science.gov (United States)

    Shaeer, Osama

    2013-02-01

    Utility of phosphodiesterase inhibitors (PDEi's) for the treatment of erectile dysfunction (ED) has been the focus of experimental and clinical studies. However, public preferences, attitudes, and experiences with PDEi's are rarely addressed from a population/epidemiology viewpoint. The Global Online Sexuality Survey (GOSS) is a worldwide epidemiologic study of sexuality and sexual disorders, first launched in the Middle East in 2010, followed by the United States in 2011. To describe the utilization rates, trends, and attitudes toward PDEi's in the United States in the year 2011. GOSS was randomly deployed to English-speaking male Web surfers in the United States via paid advertising on Facebook®, comprising 146 questions. Utilization rates and preferences for PDEi's by brand. Six hundred three subjects participated; mean age 53.43 years ± 13.9. Twenty-three point seven percent used PDEi's on a more consistent basis, 37.5% of those with ED vs. 15.6% of those without ED (recreational users). Unrealistic safety concerns, including habituation, were pronounced. Seventy-nine point six percent of utilization was on a prescription basis. PDEi's were purchased through pharmacies (5.3% without prescription) and in 16.5% over the Internet (68% without prescription). Nine point six percent of nonprescription users suffered coronary heart disease. Prescription use was inclined toward sildenafil, generally, and particularly in severe cases, and shifted toward tadalafil in moderate ED and for recreational use, followed by vardenafil. Nonprescription utilization trends were similar, except in recreational use where sildenafil came first. In the United States, unrealistic safety concerns over PDEi's utility exist and should be addressed. Preference for particular PDEi's over the others is primarily dictated by health-care providers, despite the lack of guidelines that govern physician choice. Online and over-the-counter sales of PDEi's are common, and can expose a subset of users to health risks.

  11. Refractive errors in children and adolescents in Bucaramanga (Colombia).

    Science.gov (United States)

    Galvis, Virgilio; Tello, Alejandro; Otero, Johanna; Serrano, Andrés A; Gómez, Luz María; Castellanos, Yuly

    2017-01-01

    The aim of this study was to establish the frequency of refractive errors in children and adolescents aged between 8 and 17 years old, living in the metropolitan area of Bucaramanga (Colombia). This study was a secondary analysis of two descriptive cross-sectional studies that applied sociodemographic surveys and assessed visual acuity and refraction. Ametropias were classified as myopic errors, hyperopic errors, and mixed astigmatism. Eyes were considered emmetropic if none of these classifications were made. The data were collated using free software and analyzed with STATA/IC 11.2. One thousand two hundred twenty-eight individuals were included in this study. Girls showed a higher rate of ametropia than boys. Hyperopic refractive errors were present in 23.1% of the subjects, and myopic errors in 11.2%. Only 0.2% of the eyes had high myopia (≤-6.00 D). Mixed astigmatism and anisometropia were uncommon, and myopia frequency increased with age. There were statistically significant steeper keratometric readings in myopic compared to hyperopic eyes. The frequency of refractive errors that we found of 36.7% is moderate compared to the global data. The rates and parameters statistically differed by sex and age groups. Our findings are useful for establishing refractive error rate benchmarks in low-middle-income countries and as a baseline for following their variation by sociodemographic factors.

  12. Refractive errors in children and adolescents in Bucaramanga (Colombia)

    Directory of Open Access Journals (Sweden)

    Virgilio Galvis

    Full Text Available ABSTRACT Purpose: The aim of this study was to establish the frequency of refractive errors in children and adolescents aged between 8 and 17 years old, living in the metropolitan area of Bucaramanga (Colombia). Methods: This study was a secondary analysis of two descriptive cross-sectional studies that applied sociodemographic surveys and assessed visual acuity and refraction. Ametropias were classified as myopic errors, hyperopic errors, and mixed astigmatism. Eyes were considered emmetropic if none of these classifications were made. The data were collated using free software and analyzed with STATA/IC 11.2. Results: One thousand two hundred twenty-eight individuals were included in this study. Girls showed a higher rate of ametropia than boys. Hyperopic refractive errors were present in 23.1% of the subjects, and myopic errors in 11.2%. Only 0.2% of the eyes had high myopia (≤-6.00 D). Mixed astigmatism and anisometropia were uncommon, and myopia frequency increased with age. There were statistically significant steeper keratometric readings in myopic compared to hyperopic eyes. Conclusions: The frequency of refractive errors that we found of 36.7% is moderate compared to the global data. The rates and parameters statistically differed by sex and age groups. Our findings are useful for establishing refractive error rate benchmarks in low-middle-income countries and as a baseline for following their variation by sociodemographic factors.

  13. Predictors of Errors of Novice Java Programmers

    Science.gov (United States)

    Bringula, Rex P.; Manabat, Geecee Maybelline A.; Tolentino, Miguel Angelo A.; Torres, Edmon L.

    2012-01-01

    This descriptive study determined which of the sources of errors would predict the errors committed by novice Java programmers. Descriptive statistics revealed that the respondents perceived that they committed the identified eighteen errors infrequently. Thought error was perceived to be the main source of error during the laboratory programming…

  14. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. Any adaptation of the quantum error correction code or its implementation circuit is not required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
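
    A minimal sketch of the general idea, assuming (hypothetically) that per-round detection counts are available from the error-correction data: a slowly drifting physical error rate is observed through binomial counts and then smoothed and extrapolated with a Gaussian-process regressor. This is not the authors' protocol, just an illustration of GP-based error-rate tracking.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    # Simulated time-dependent error rate, observed indirectly: in each time bin we
    # count detection events over a fixed number of error-correction rounds.
    t = np.linspace(0.0, 10.0, 60)
    p_true = 0.02 + 0.01 * np.sin(0.6 * t)            # slowly drifting error rate
    rounds = 2000
    p_obs = rng.binomial(rounds, p_true) / rounds     # noisy estimates from syndrome data

    kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=1e-5)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gpr.fit(t.reshape(-1, 1), p_obs)

    # Smoothed estimate plus a short-range prediction for upcoming rounds.
    t_new = np.linspace(0.0, 11.0, 110).reshape(-1, 1)
    p_hat, p_std = gpr.predict(t_new, return_std=True)
    print(f"predicted error rate at t=11: {p_hat[-1]:.4f} +/- {p_std[-1]:.4f}")
    ```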

  15. Learning a locomotor task: with or without errors?

    Science.gov (United States)

    Marchal-Crespo, Laura; Schneider, Jasmin; Jaeger, Lukas; Riener, Robert

    2014-03-04

    Robotic haptic guidance is the most commonly used robotic training strategy to reduce performance errors while training. However, research on motor learning has emphasized that errors are a fundamental neural signal that drives motor adaptation. Thus, researchers have proposed robotic therapy algorithms that amplify movement errors rather than decrease them. However, to date, no study has analyzed with precision which training strategy is the most appropriate to learn an especially simple task. In this study, the impact of robotic training strategies that amplify or reduce errors on muscle activation and motor learning of a simple locomotor task was investigated in twenty-two healthy subjects. The experiment was conducted with the MAgnetic Resonance COmpatible Stepper (MARCOS), a special robotic device developed for investigations in the MR scanner. The robot moved the dominant leg passively and the subject was requested to actively synchronize the non-dominant leg to achieve an alternating stepping-like movement. Learning with four different training strategies that reduce or amplify errors was evaluated: (i) Haptic guidance: errors were eliminated by passively moving the limbs, (ii) No guidance: no robot disturbances were presented, (iii) Error amplification: existing errors were amplified with repulsive forces, (iv) Noise disturbance: errors were evoked intentionally with a randomly varying force disturbance on top of the no guidance strategy. Additionally, the activation of four lower limb muscles was measured by means of surface electromyography (EMG). Strategies that reduce or do not amplify errors limit muscle activation during training and result in poor learning gains. Adding random disturbing forces during training seems to increase attention, and therefore improve motor learning. Error amplification seems to be the most suitable strategy for initially less skilled subjects, perhaps because subjects could better detect their errors and correct them.

  16. Measurement error in income and schooling, and the bias of linear estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    The characteristics of measurement error determine the bias of linear estimators. We propose a method for validating economic survey data allowing for measurement error in the validation source, and we apply this method by validating Survey of Health, Ageing and Retirement in Europe (SHARE) data with Danish administrative registers. We find that measurement error in surveys is classical for annual gross income but non-classical for years of schooling, causing a 21% amplification bias in IV estimators of returns to schooling. Using a 1958 Danish schooling reform, we contextualize our result...
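
    To make the amplification mechanism concrete, here is a small simulation with made-up numbers (not the SHARE or register data): classical error would leave the IV estimate consistent, but mean-reverting error in reported schooling scales the first stage by (1 - delta) and inflates the IV slope by 1/(1 - delta).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    beta = 0.08                                   # true return to one year of schooling

    z = rng.binomial(1, 0.5, n)                   # instrument (e.g. exposure to a reform)
    s = 10 + 2.0 * z + rng.normal(0, 2, n)        # true years of schooling
    log_wage = beta * s + rng.normal(0, 0.5, n)

    # Non-classical, mean-reverting reporting error: u = -delta*(s - mean(s)) + noise
    delta = 0.2
    s_obs = s - delta * (s - s.mean()) + rng.normal(0, 0.5, n)

    def iv_slope(y, x, instrument):
        return np.cov(instrument, y)[0, 1] / np.cov(instrument, x)[0, 1]

    print(f"IV with true schooling        : {iv_slope(log_wage, s, z):.3f}")
    print(f"IV with mis-measured schooling: {iv_slope(log_wage, s_obs, z):.3f}")
    print(f"theory: beta / (1 - delta)    = {beta / (1 - delta):.3f}")
    ```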

  17. Quantifying geocode location error using GIS methods

    Directory of Open Access Journals (Sweden)

    Gardner Bennett R

    2007-04-01

    Full Text Available Abstract Background The Metropolitan Atlanta Congenital Defects Program (MACDP) collects maternal address information at the time of delivery for infants and fetuses with birth defects. These addresses have been geocoded by two independent agencies: (1) the Georgia Division of Public Health Office of Health Information and Policy (OHIP) and (2) a commercial vendor. Geographic information system (GIS) methods were used to quantify uncertainty in the two sets of geocodes using orthoimagery and tax parcel datasets. Methods We sampled 599 infants and fetuses with birth defects delivered during 1994–2002 with maternal residence in either Fulton or Gwinnett County. Tax parcel datasets were obtained from the tax assessor's offices of Fulton and Gwinnett County. High-resolution orthoimagery for these counties was acquired from the U.S. Geological Survey. For each of the 599 addresses we attempted to locate the tax parcel corresponding to the maternal address. If the tax parcel was identified the distance and the angle between the geocode and the residence were calculated. We used simulated data to characterize the impact of geocode location error. In each county 5,000 geocodes were generated and assigned their corresponding Census 2000 tract. Each geocode was then displaced at a random angle by a random distance drawn from the distribution of observed geocode location errors. The census tract of the displaced geocode was determined. We repeated this process 5,000 times and report the percentage of geocodes that resolved into incorrect census tracts. Results Median location error was less than 100 meters for both OHIP and commercial vendor geocodes; the distribution of angles appeared uniform. Median location error was approximately 35% larger in Gwinnett (a suburban county) relative to Fulton (a county with urban and suburban areas). Location error occasionally caused the simulated geocodes to be displaced into incorrect census tracts; the median percentage
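
    The sketch below mimics the displacement simulation described in the Methods on an idealized square-tract grid (not the Fulton/Gwinnett parcel data): geocodes are displaced at a random angle by a random distance with a median of roughly 100 m, and the fraction that resolves into a different tract is reported.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    tract_size = 1000.0                       # idealized square census tracts, 1 km on a side

    def tract_id(x, y):
        return np.floor(x / tract_size), np.floor(y / tract_size)

    n = 5000
    x = rng.uniform(0, 50 * tract_size, n)    # original geocode coordinates (metres)
    y = rng.uniform(0, 50 * tract_size, n)

    # Displace each geocode at a random angle by a random, skewed distance (median ~100 m).
    distance = rng.lognormal(mean=np.log(100.0), sigma=1.0, size=n)
    angle = rng.uniform(0.0, 2.0 * np.pi, n)
    xd, yd = x + distance * np.cos(angle), y + distance * np.sin(angle)

    ix, iy = tract_id(x, y)
    jx, jy = tract_id(xd, yd)
    misassigned = np.mean((ix != jx) | (iy != jy))
    print(f"{100 * misassigned:.1f}% of displaced geocodes resolve into a different tract")
    ```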

  18. Golden gravitational lensing systems from the Sloan Lens ACS Survey - II. SDSS J1430+4105: a precise inner total mass profile from lensing alone

    Science.gov (United States)

    Eichner, Thomas; Seitz, Stella; Bauer, Anne

    2012-12-01

    We study the Sloan Lens ACS (SLACS) survey strong-lensing system SDSS J1430+4105 at z_l = 0.285. The lensed source (z_s = 0.575) of this system has a complex morphology with several subcomponents. Its subcomponents span a radial range from 4 to 10 kpc in the plane of the lens. Therefore, we can constrain the slope of the total projected mass profile around the Einstein radius from lensing alone. We measure a density profile that is slightly but not significantly shallower than isothermal at the Einstein radius. We decompose the mass of the lensing galaxy into a de Vaucouleurs component to trace the stars and an additional dark component. The spread of multiple-image components over a large radial range also allows us to determine the amplitude of the de Vaucouleurs and dark matter components separately. We get a mass-to-light ratio of M_deVauc/L_B ≈ (5.5 ± 1.5) M⊙/L⊙,B and a dark matter fraction within the Einstein radius of ≈20 to 40 per cent. Modelling the star formation history assuming composite stellar populations at solar metallicity to the galaxy's photometry yields a mass-to-light ratio of M*,Salp/L_B ≈ 4.0 (+0.6/-1.3) M⊙/L⊙,B and M*,Chab/L_B ≈ 2.3 (+0.3/-0.8) M⊙/L⊙,B for Salpeter and Chabrier initial mass functions, respectively. Hence, the mass-to-light ratio derived from lensing is more Salpeter-like, in agreement with results for massive Coma galaxies and other nearby massive early-type galaxies. We examine the consequences of the galaxy group in which the lensing galaxy is embedded, showing that it has little influence on the mass-to-light ratio obtained for the de Vaucouleurs component of the lensing galaxy. Finally, we decompose the projected, azimuthally averaged 2D density distribution of the de Vaucouleurs and dark matter components of the lensing signal into spherically averaged 3D density profiles. We can show that the 3D dark and luminous matter density within the Einstein radius (R_Ein ≈ 0.6 R_eff) of this SLACS galaxy is similar to the

  19. Redundant measurements for controlling errors

    International Nuclear Information System (INIS)

    Ehinger, M.H.; Crawford, J.M.; Madeen, M.L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program

  20. Large errors and severe conditions

    CERN Document Server

    Smith, D L; Van Wormer, L A

    2002-01-01

    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probability distributions.
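
    A short numerical illustration of the two points above, with arbitrary numbers: a normal distribution with a 50% relative error produces a non-negligible fraction of impossible negative samples, while a lognormal with the same mean and variance does not; and pushing the lognormal through the non-linear relation y = 1/x^2 roughly doubles the relative error ("error amplification").

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # An inherently positive parameter with a large (50%) relative uncertainty,
    # represented two ways: a normal and a lognormal with the same mean and variance.
    mean, rel_err = 1.0, 0.5
    sd = rel_err * mean

    normal_samples = rng.normal(mean, sd, 1_000_000)
    sigma2 = np.log(1.0 + rel_err**2)
    lognormal_samples = rng.lognormal(np.log(mean) - 0.5 * sigma2, np.sqrt(sigma2), 1_000_000)

    print(f"negative draws (normal):    {np.mean(normal_samples < 0):.2%}")
    print(f"negative draws (lognormal): {np.mean(lognormal_samples < 0):.2%}")

    # Error amplification through a non-linear relation y = 1/x**2:
    y = 1.0 / lognormal_samples**2
    print(f"input relative error 50%  ->  output relative error {y.std() / y.mean():.0%}")
    ```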

  1. Negligence, genuine error, and litigation

    Directory of Open Access Journals (Sweden)

    Sohn DH

    2013-02-01

    Full Text Available David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems, may reap both quantitative and qualitative benefits for a less costly and safer health system. Keywords: medical malpractice, tort reform, no fault compensation, alternative dispute resolution, system errors

  2. Spacecraft and propulsion technician error

    Science.gov (United States)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  3. Sensation seeking and error processing.

    Science.gov (United States)

    Zheng, Ya; Sheng, Wenbin; Xu, Jing; Zhang, Yuanyuan

    2014-09-01

    Sensation seeking is defined by a strong need for varied, novel, complex, and intense stimulation, and a willingness to take risks for such experience. Several theories propose that the insensitivity to negative consequences incurred by risks is one of the hallmarks of sensation-seeking behaviors. In this study, we investigated the time course of error processing in sensation seeking by recording event-related potentials (ERPs) while high and low sensation seekers performed an Eriksen flanker task. Whereas there were no group differences in ERPs to correct trials, sensation seeking was associated with a blunted error-related negativity (ERN), which was female-specific. Further, different subdimensions of sensation seeking were related to ERN amplitude differently. These findings indicate that the relationship between sensation seeking and error processing is sex-specific. Copyright © 2014 Society for Psychophysiological Research.

  4. Errors of Inference Due to Errors of Measurement.

    Science.gov (United States)

    Linn, Robert L.; Werts, Charles E.

    Failure to consider errors of measurement when using partial correlation or analysis of covariance techniques can result in erroneous conclusions. Certain aspects of this problem are discussed and particular attention is given to issues raised in a recent article by Brewer, Campbell, and Crano. (Author)
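
    A compact simulation of the kind of artifact discussed above, with invented numbers: X and Y are related only through a confounder Z, and partialling out a noisily measured Z leaves a spurious residual association that partialling out the true Z removes.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # X and Y are unrelated except through a common cause Z.
    z = rng.normal(size=n)
    x = z + rng.normal(scale=1.0, size=n)
    y = z + rng.normal(scale=1.0, size=n)
    z_obs = z + rng.normal(scale=1.0, size=n)      # Z measured with error

    def partial_corr(a, b, c):
        """Correlation of a and b after linearly regressing out c."""
        ra = a - np.polyval(np.polyfit(c, a, 1), c)
        rb = b - np.polyval(np.polyfit(c, b, 1), c)
        return np.corrcoef(ra, rb)[0, 1]

    print(f"partial corr controlling true Z     : {partial_corr(x, y, z):+.3f}")      # ~0
    print(f"partial corr controlling observed Z : {partial_corr(x, y, z_obs):+.3f}")  # spuriously > 0
    ```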

  5. Measurement error models with uncertainty about the error variance

    NARCIS (Netherlands)

    Oberski, D.L.; Satorra, A.

    2013-01-01

    It is well known that measurement error in observable variables induces bias in estimates in standard regression analysis and that structural equation models are a typical solution to this problem. Often, multiple indicator equations are subsumed as part of the structural equation model, allowing

  6. Reward positivity: Reward prediction error or salience prediction error?

    Science.gov (United States)

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.

  7. ERROR HANDLING IN INTEGRATION WORKFLOWS

    Directory of Open Access Journals (Sweden)

    Alexey M. Nazarenko

    2017-01-01

    Full Text Available Simulation experiments performed while solving multidisciplinary engineering and scientific problems require the joint use of multiple software tools. Moreover, when following a preset experimental plan or searching for optimal solutions, the same sequence of calculations is run many times with varying simulation parameters, input data, or conditions, while the overall workflow does not change. Automating such simulations requires implementing a workflow in which tool execution and data exchange are controlled by a special type of software, an integration environment or platform. The result is an integration workflow (a platform-dependent implementation of some computing workflow) which, in the context of automation, is a composition of weakly coupled (in terms of communication intensity) typical subtasks. These compositions can be decomposed into a few workflow patterns (types of subtask interaction), and the patterns, in turn, can be interpreted as higher-level subtasks. This paper considers the execution-control and data-exchange rules that the integration environment should impose when an integrated software tool encounters an error. An error is defined as any abnormal behavior of a tool that invalidates its result data, thereby disrupting the data flow within the integration workflow. The main requirement for the error-handling mechanism implemented by the integration environment is to prevent abnormal termination of the entire workflow when intermediate result data are missing. Error-handling rules are formulated at the level of basic patterns and at the level of a composite task that can combine several basic patterns as next-level subtasks. The paper also notes cases where workflow behavior after an error may differ depending on the user's purposes, and the error-handling options that the user can specify.
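
    The following is a minimal, hypothetical sketch of the kind of per-step error-handling policy the paper discusses; the `Step` abstraction and the policy names ("abort", "skip", "retry") are invented for illustration and are not taken from the integration environment described in the paper.

```python
# Hypothetical workflow runner with per-step error-handling policies.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]   # takes and returns the shared data context
    on_error: str = "abort"       # "abort" | "skip" | "retry"
    max_retries: int = 2

def run_workflow(steps: list[Step], context: dict) -> dict:
    for step in steps:
        attempts = 0
        while True:
            try:
                context = step.run(context)
                break
            except Exception as exc:
                attempts += 1
                if step.on_error == "retry" and attempts <= step.max_retries:
                    continue  # re-run the failed tool
                if step.on_error == "skip":
                    # Downstream steps must tolerate the missing intermediate result.
                    print(f"{step.name} failed ({exc}); skipping")
                    break
                # "abort" policy, or retries exhausted: stop the whole workflow.
                raise RuntimeError(f"workflow aborted at {step.name}") from exc
    return context
```

    The design choice the paper motivates is visible here: whether a single tool failure terminates the entire workflow, is retried, or is tolerated by downstream subtasks is a property of the workflow pattern (and of the user's purposes), not of the tool itself.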

  8. Analysis of Medication Error Reports

    Energy Technology Data Exchange (ETDEWEB)

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper-based information to electronic records have created a climate of ever-increasing availability of raw data. There has been, however, a corresponding lag in our ability to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, most analysis of error and incident reports has been carried out by comparing data, starting with a specific question that needs to be answered. Newer data analysis tools have been developed that allow the researcher not only to ask specific questions but also to “mine” data: to approach an area of interest without preconceived questions and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, the United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARX(SM) reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. This paper describes the application of the text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP. New insights and findings were revealed, including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.
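
    As an illustrative sketch only (the records, field names, and normalization rules below are invented and are not USP's or Battelle's pipeline), the snippet shows two of the ideas highlighted above: normalizing free text before counting terms, and tallying reports by day of the week.

```python
# Toy example: normalize free-text report fields, then count terms and weekdays.
from collections import Counter
from datetime import date

reports = [  # invented records for illustration
    {"text": "Gave 10mg COUMADIN instead of 1mg", "date": date(2004, 3, 1)},
    {"text": "coumadin 10 mg dispensed in error",  "date": date(2004, 3, 2)},
    {"text": "Heparin drip rate set wrong",        "date": date(2004, 3, 8)},
]

def normalize(text: str) -> list[str]:
    # Lowercase and strip punctuation so "COUMADIN" and "coumadin" count together.
    return [w.strip(".,;:") for w in text.lower().split()]

term_counts = Counter(w for r in reports for w in normalize(r["text"]))
weekday_counts = Counter(r["date"].strftime("%A") for r in reports)

print(term_counts.most_common(5))
print(weekday_counts)  # e.g. Monday appears twice in this toy sample
```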

  9. Medication errors: definitions and classification

    Science.gov (United States)

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’, is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing, and overprescribing. A prescription error is ‘a failure in the prescription-writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526
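
    The psychological classification at the end of the abstract can be expressed as a small data structure; the sketch below follows the category names in the text, while the short descriptions and the example incidents attached to each category are purely illustrative.

```python
# Illustrative encoding of the four-way psychological classification of errors.
from enum import Enum

class MedicationErrorType(Enum):
    KNOWLEDGE_BASED_MISTAKE = "wrong plan due to inadequate knowledge"
    RULE_BASED_MISTAKE = "wrong plan due to misapplying (or applying a bad) rule"
    ACTION_BASED_SLIP = "right plan, wrong execution of an action"
    MEMORY_BASED_LAPSE = "right plan, a step forgotten"

# Hypothetical examples, one per category:
examples = {
    MedicationErrorType.KNOWLEDGE_BASED_MISTAKE: "prescribing a drug unaware of an allergy",
    MedicationErrorType.RULE_BASED_MISTAKE: "applying an adult dosing rule to a child",
    MedicationErrorType.ACTION_BASED_SLIP: "picking up the adjacent, look-alike vial",
    MedicationErrorType.MEMORY_BASED_LAPSE: "omitting a scheduled dose",
}

for kind, example in examples.items():
    print(f"{kind.name}: {kind.value} (e.g., {example})")
```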

  10. Correcting quantum errors with entanglement.

    Science.gov (United States)

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.
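
    A minimal sketch of the dual-containing constraint mentioned in the abstract: in the CSS-style construction, a single classical binary code can supply both the X- and Z-type checks only if its parity-check matrix satisfies H H^T = 0 (mod 2); entanglement-assisted codes drop this requirement. The check below and the choice of the [7,4] Hamming code as an example are standard, but the snippet is illustrative rather than drawn from the paper.

```python
# Check whether a classical binary code is dual-containing (H H^T = 0 over GF(2)).
import numpy as np

def is_dual_containing(H: np.ndarray) -> bool:
    """True if the code contains its dual, i.e. H @ H.T vanishes mod 2."""
    return not np.any((H @ H.T) % 2)

# Parity-check matrix of the [7,4] Hamming code (satisfies the constraint).
H_hamming = np.array([
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

# An arbitrary parity-check matrix that fails the constraint and would, in the
# entanglement-assisted formalism, require shared ebits instead.
H_other = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
])

print(is_dual_containing(H_hamming))  # True
print(is_dual_containing(H_other))    # False
```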

  11. Human Error and Organizational Management

    Directory of Open Access Journals (Sweden)

    Alecxandrina DEACONU

    2009-01-01

    Full Text Available The concern for performance is a topic that raises interest in the business environment but also in other areas that, even if they seem distant from this world, are aware of, interested in, or conditioned by economic development. As individual performance is very much influenced by the human resource, we chose to analyze in this paper the mechanisms that generate, consciously or not, human error nowadays. Moreover, the extremely tense Romanian context, where failure is rather the rule than the exception, made us investigate the phenomenon of human error generation and the ways to diminish its effects.

  12. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error
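
    A minimal sketch of the kind of automated consistency check this line of work motivates (not any particular published tool, and the tolerance used here is an arbitrary assumption): recompute a p-value from a reported test statistic and degrees of freedom and flag mismatches.

```python
# Recompute a two-tailed p-value from a reported t statistic and flag mismatches.
from scipy import stats

def check_t_report(t: float, df: int, reported_p: float, tol: float = 0.005) -> bool:
    """Return True if the reported two-tailed p matches the recomputed one."""
    recomputed_p = 2 * stats.t.sf(abs(t), df)
    return abs(recomputed_p - reported_p) <= tol

print(check_t_report(t=2.10, df=28, reported_p=0.045))  # consistent
print(check_t_report(t=2.10, df=28, reported_p=0.01))   # flagged as inconsistent
```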