WorldWideScience

Sample records for total error rate

  1. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
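
    The contrast between the two update rules can be sketched in a few lines. This is a generic delta-rule illustration with an arbitrary learning rate, not the specific models the authors fit:

```python
# Sketch: total-error (Rescorla-Wagner-style) vs. local-error learning
# for a two-cue compound AB -> outcome (lambda = 1). The learning rate
# alpha and trial count are illustrative, not fitted values.

def train(rule, n_trials=200, alpha=0.1, lam=1.0):
    w = [0.0, 0.0]  # associative strengths for cues A and B
    for _ in range(n_trials):
        if rule == "TER":  # one error term shared across the compound
            err = lam - sum(w)
            w = [wi + alpha * err for wi in w]
        else:              # LER: each cue tracks its own prediction error
            w = [wi + alpha * (lam - wi) for wi in w]
    return w

w_ter = train("TER")
w_ler = train("LER")
print(w_ter)  # each cue settles near lambda/2 = 0.5 (cue competition)
print(w_ler)  # each cue independently approaches lambda = 1.0
```

    Cue competition (and hence phenomena such as blocking) falls out of the shared error term; under LER the cues do not compete.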

  2. Total Survey Error for Longitudinal Surveys

    NARCIS (Netherlands)

    Lynn, Peter; Lugtig, P.J.

    2016-01-01

    This article describes the application of the total survey error paradigm to longitudinal surveys. Several aspects of survey error, and of the interactions between different types of error, are distinct in the longitudinal survey context. Furthermore, error trade-off decisions in survey design and

  3. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  4. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    Science.gov (United States)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates arising from changes in rain statistics due 1) to evolution of the official algorithms used to process the data and 2) to differences from other remote sensing systems, such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
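
    The definition of sampling error used above can be illustrated with a toy Monte Carlo experiment: compare the monthly total a continuously observing (hypothetical stationary) satellite would see with the total reconstructed from a few overpasses per day. The rain-process model below (intermittent exponential rain rates) is purely illustrative, not the paper's statistical model:

```python
import random

# Toy illustration of "sampling error": a month of hourly rain rates is
# observed continuously (the hypothetical stationary satellite) and also
# reconstructed from sparse visits. The rain model is an assumption made
# for this sketch only.

random.seed(1)

def month_of_rain(hours=720, p_rain=0.1, mean_rate=2.0):
    # rains ~10% of hours; rainy-hour rate drawn from an exponential
    return [random.expovariate(1 / mean_rate) if random.random() < p_rain
            else 0.0 for _ in range(hours)]

def sampling_error(visits_per_day=2, trials=500):
    errs = []
    for _ in range(trials):
        rain = month_of_rain()
        true_total = sum(rain)                    # "staring" satellite
        step = 24 // visits_per_day
        sampled = rain[::step]                    # sparse overpasses
        est_total = sum(sampled) * len(rain) / len(sampled)
        errs.append((est_total - true_total) / true_total)
    mean_sq = sum(e * e for e in errs) / len(errs)
    return mean_sq ** 0.5  # RMS relative sampling error

rms = sampling_error()
print(round(rms, 3))
```

    With only two visits per day and highly intermittent rain, the RMS relative error of the monthly total is substantial, which is the qualitative point of the abstract.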

  5. Multicenter Assessment of Gram Stain Error Rates.

    Science.gov (United States)

    Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-06-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  6. 45 CFR 98.100 - Error Rate Report.

    Science.gov (United States)

    2010-10-01

    ... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... the total dollar amount of payments made in the sample); the average amount of improper payment; and... not received. (e) Costs of Preparing the Error Rate Report—Provided the error rate calculations and...

  7. Efficiently characterizing the total error in quantum circuits

    Science.gov (United States)

    Carignan-Dugas, Arnaud; Wallman, Joel J.; Emerson, Joseph

    A promising technological advancement meant to enlarge our computational means is the quantum computer. Such a device would harvest the quantum complexity of the physical world in order to unfold concrete mathematical problems more efficiently. However, the errors emerging from the implementation of quantum operations are likewise quantum, and hence share a similar level of intricacy. Fortunately, randomized benchmarking protocols provide an efficient way to characterize the operational noise within quantum devices. The resulting figures of merit, like the fidelity and the unitarity, are typically attached to a set of circuit components. While important, this does not fulfill the main goal: determining whether the error rate of the total circuit is small enough to trust its outcome. In this work, we fill the gap by providing an optimal bound on the total fidelity of a circuit in terms of component-wise figures of merit. Our bound smoothly interpolates between the classical regime, in which the error rate grows linearly in the circuit's length, and the quantum regime, which can naturally allow quadratic growth. Conversely, our analysis substantially improves the bounds on single circuit element fidelities obtained through techniques such as interleaved randomized benchmarking. This research was supported by the U.S. Army Research Office through Grant W911NF- 14-1-0103, CIFAR, the Government of Ontario, and the Government of Canada through NSERC and Industry Canada.
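
    The linear-versus-quadratic distinction can be made concrete with a back-of-the-envelope comparison. This is an illustration of the two regimes only, not the paper's optimal bound:

```python
import math

# Illustrative comparison (not the paper's bound): for m gates each with
# infidelity r, purely stochastic noise accumulates roughly linearly
# (m * r), while worst-case coherent noise can accumulate quadratically
# in m, since error amplitudes ~ m * sqrt(r) add before squaring.

def stochastic_error(m, r):
    return min(1.0, m * r)

def coherent_worst_case(m, r):
    return min(1.0, (m * math.sqrt(r)) ** 2)  # = m**2 * r, capped at 1

m, r = 100, 1e-5
print(stochastic_error(m, r))     # 0.001
print(coherent_worst_case(m, r))  # 0.1
```

    For the same per-gate infidelity, the coherent worst case is a factor of m larger, which is why a bound interpolating between the two regimes is valuable.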

  8. Errors in the Total Testing Process in the Clinical Chemistry ...

    African Journals Online (AJOL)

    2018-03-01

    Mar 1, 2018 ... Analytical errors related to internal and external quality control exceeding the target range (14.4%) ... indicators to assess errors in the total testing process. The University ... Evidence showed that the risk of ... Data management and quality control: pre-test ... indicators and specifications for key processes.

  9. Estimation of Total Error in DWPF Reported Radionuclide Inventories

    International Nuclear Information System (INIS)

    Edwards, T.B.

    1995-01-01

    This report investigates the impact of random errors due to measurement and sampling on the reported concentrations of radionuclides in DWPF's filled canister inventory resulting from each macro-batch. The objective of this investigation is to estimate the variance of the total error in reporting these radionuclide concentrations
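
    The variance decomposition behind such an estimate is simple when the error sources are independent; a minimal sketch (illustrative percentages, not DWPF figures):

```python
import math

# If measurement error and sampling error are independent, the variance
# of the total relative error in a reported concentration is the sum of
# the component variances. The 3% and 4% relative SDs are illustrative.

def total_relative_sd(sd_measurement, sd_sampling):
    return math.sqrt(sd_measurement**2 + sd_sampling**2)

print(total_relative_sd(0.03, 0.04))  # 0.05: 3% and 4% combine to 5%
```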

  10. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

    Probes for CNC machine tools, like every measurement device, have accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are rarely used. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but this is not a widely used procedure. In this paper, the share of systematic errors and of random errors in the total error of exemplary probes is determined. In the case of simple, kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, the results show that, in the case of kinematic probes, the commonly specified unidirectional repeatability is significantly better than 2D performance. However, in the case of the more precise strain-gauge probe, systematic errors are of the same order as random errors, which means that error correction or compensation in this case would not yield any significant benefits.

  11. Generalizing human error rates: A taxonomic approach

    International Nuclear Information System (INIS)

    Buffardi, L.; Fleishman, E.; Allen, J.

    1989-01-01

    It is well established that human error plays a major role in malfunctioning of complex, technological systems and in accidents associated with their operation. Estimates of the rate of human error in the nuclear industry range from 20% to 65% of all system failures. In response to this, the Nuclear Regulatory Commission has developed a variety of techniques for estimating human error probabilities for nuclear power plant personnel. Most of these techniques require the specification of the range of human error probabilities for various tasks. Unfortunately, very little objective performance data on error probabilities exist for nuclear environments. Thus, when human reliability estimates are required, for example in computer simulation modeling of system reliability, only subjective estimates (usually based on experts' best guesses) can be provided. The objective of the current research is to provide guidelines for the selection of human error probabilities based on actual performance data taken in other complex environments and applying them to nuclear settings. A key feature of this research is the application of a comprehensive taxonomic approach to nuclear and non-nuclear tasks to evaluate their similarities and differences, thus providing a basis for generalizing human error estimates across tasks. In recent years significant developments have occurred in classifying and describing tasks. Initial goals of the current research are to: (1) identify alternative taxonomic schemes that can be applied to tasks, and (2) describe nuclear tasks in terms of these schemes. Three standardized taxonomic schemes (Ability Requirements Approach, Generalized Information-Processing Approach, Task Characteristics Approach) are identified, modified, and evaluated for their suitability in comparing nuclear and non-nuclear power plant tasks. An agenda for future research and its relevance to nuclear power plant safety is also discussed.

  12. Logical error rate scaling of the toric code

    International Nuclear Information System (INIS)

    Watson, Fern H E; Barrett, Sean D

    2014-01-01

    To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behaviour in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead—the total number of physical qubits required to perform error correction. (paper)
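
    The small-error-rate counting regime described above yields a simple overhead estimate. The sketch below assumes the common heuristic that roughly d/2 + 1 physical errors suffice for a logical failure, with a hypothetical prefactor of 1, so the numbers are indicative only:

```python
# Heuristic low-p scaling: p_L ~ (p / p_th) ** (d // 2 + 1), assuming a
# prefactor of 1 (an assumption for this sketch, not the paper's fit).

def min_distance(p, p_th, target):
    # smallest odd code distance whose estimated logical rate is <= target
    d = 3
    while (p / p_th) ** (d // 2 + 1) > target:
        d += 2
    return d

def toric_qubits(d):
    # the distance-d toric code uses two physical qubits per lattice site
    return 2 * d * d

d = min_distance(p=1e-3, p_th=5e-3, target=1e-6)
print(d, toric_qubits(d))  # -> 17 578
```

    This is exactly the kind of overhead calculation (total physical qubits for a target logical rate) that the scaling analysis enables.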

  13. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
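
    A minimal Gaussian-process regression over windowed error-rate estimates conveys the idea. The kernel, hyperparameters, and drift model below are illustrative assumptions, not the protocol's actual choices:

```python
import numpy as np

# Minimal GP regression sketch (RBF kernel) for tracking a slowly
# drifting error rate from noisy per-window estimates and predicting
# the next window. All numbers are illustrative assumptions.

def rbf(a, b, length=10.0, var=1e-4):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
t = np.arange(0, 100, 5.0)                   # measurement window times
true_rate = 0.01 + 0.002 * np.sin(t / 20.0)  # hypothetical slow drift
y = true_rate + rng.normal(0, 5e-4, t.size)  # noisy window estimates

t_star = np.array([105.0])                   # predict the next window
K = rbf(t, t) + (5e-4) ** 2 * np.eye(t.size) # kernel + observation noise
k_star = rbf(t, t_star)
mean = k_star.T @ np.linalg.solve(K, y - 0.01) + 0.01  # prior mean 0.01

print(float(mean[0]))
```

    The posterior mean tracks the drift and extrapolates a step ahead, which is what lets the decoder use an up-to-date error-rate estimate.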

  14. Total error vs. measurement uncertainty: revolution or evolution?

    Science.gov (United States)

    Oosterhuis, Wytze P; Theodorsson, Elvar

    2016-02-01

    The first strategic EFLM conference "Defining analytical performance goals, 15 years after the Stockholm Conference" was held in the autumn of 2014 in Milan. It maintained the Stockholm 1999 hierarchy of performance goals but rearranged them and established five task and finish groups to work on topics related to analytical performance goals, including one on "total error" theory. Jim Westgard recently wrote a comprehensive overview of performance goals and of the total error theory that is critical of the results and intentions of the Milan 2014 conference. The "total error" theory originated by Jim Westgard and co-workers has a dominating influence on the theory and practice of clinical chemistry but is not accepted in other fields of metrology. The generally accepted uncertainty theory, however, suffers from complex mathematics and perceived impracticability in clinical chemistry. The pros and cons of the total error theory need to be debated, making way for methods that can incorporate all relevant causes of uncertainty when making medical diagnoses and monitoring treatment effects. This development should preferably proceed not as a revolution but as an evolution.
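
    The two formalisms under debate can be written down in their textbook forms (illustrative numbers; neither expression is taken from this editorial):

```python
import math

# Textbook forms of the two competing approaches. z = 1.65 gives a
# one-sided 95% limit in the Westgard model; inputs are illustrative
# percentages, not data from any particular assay.

def westgard_total_error(bias_pct, cv_pct, z=1.65):
    # "total error" model: allowable error = |bias| + z * imprecision
    return abs(bias_pct) + z * cv_pct

def combined_standard_uncertainty(u_bias_pct, u_imprecision_pct):
    # GUM-style root-sum-of-squares of independent uncertainty components
    return math.sqrt(u_bias_pct**2 + u_imprecision_pct**2)

print(westgard_total_error(2.0, 3.0))           # 2 + 1.65*3 = 6.95
print(combined_standard_uncertainty(2.0, 3.0))  # sqrt(13) ~ 3.61
```

    The linear-sum and root-sum-of-squares combinations give quite different numbers for the same inputs, which is part of why the two camps talk past each other.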

  15. Total error components - isolation of laboratory variation from method performance

    International Nuclear Information System (INIS)

    Bottrell, D.; Bleyler, R.; Fisk, J.; Hiatt, M.

    1992-01-01

    The consideration of total error across sampling and analytical components of environmental measurements is relatively recent. The U.S. Environmental Protection Agency (EPA), through the Contract Laboratory Program (CLP), provides complete analyses and documented reports on approximately 70,000 samples per year. The quality assurance (QA) functions of the CLP procedures provide an ideal database, the CLP Automated Results Database (CARD), to evaluate program performance relative to quality control (QC) criteria and to evaluate the analysis of blind samples. Repetitive analyses of blind samples within each participating laboratory provide a mechanism to separate laboratory and method performance. Isolation of error sources is necessary to identify effective options to establish performance expectations and to improve procedures. In addition, optimized method performance is necessary to identify significant effects that result from the selection among alternative procedures in the data collection process (e.g., sampling device, storage container, mode of sample transit, etc.). This information is necessary to evaluate data quality, to understand overall quality, and to provide appropriate, cost-effective information required to support a specific decision.

  16. Technological Advancements and Error Rates in Radiation Therapy Delivery

    Energy Technology Data Exchange (ETDEWEB)

    Margalit, Danielle N., E-mail: dmargalit@partners.org [Harvard Radiation Oncology Program, Boston, MA (United States); Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States); Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K. [Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States)]

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique
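
    The abstract compares reported error rates with Fisher's exact test; the sketch below uses a two-proportion z-test instead, which is a reasonable approximation at these counts. The per-technique fraction totals are not given in the record, so the counts are hypothetical, chosen only to match the reported 0.03% vs. 0.07% rates:

```python
from statistics import NormalDist

# Two-proportion z-test on HYPOTHETICAL counts approximating the
# reported rates: 30 errors per 100,000 IMRT fractions vs. 70 errors
# per 100,000 3D/conventional fractions.

def two_proportion_z(err1, n1, err2, n2):
    p1, p2 = err1 / n1, err2 / n2
    pooled = (err1 + err2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

z, p = two_proportion_z(30, 100_000, 70, 100_000)  # IMRT vs. 3D/conv.
print(round(z, 2), round(p, 4))
```

    Even with rates this low, samples of this size easily resolve a twofold difference, consistent with the highly significant p value reported.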

  18. Accelerated testing for cosmic soft-error rate

    International Nuclear Information System (INIS)

    Ziegler, J.F.; Muhlfeld, H.P.; Montrose, C.J.; Curtis, H.W.; O'Gorman, T.J.; Ross, J.M.

    1996-01-01

    This paper describes the experimental techniques which have been developed at IBM to determine the sensitivity of electronic circuits to cosmic rays at sea level. It relates IBM circuit design and modeling, chip manufacture with process variations, and chip testing for SER sensitivity. This vertical integration from design to final test, with feedback to design, allows a complete picture of LSI sensitivity to cosmic rays. Since advanced computers are designed with LSI chips long before the chips have been fabricated, and the system architecture is fully formed before the first chips are functional, it is essential to establish the chip reliability as early as possible. This paper establishes techniques to test chips that are only partly functional (e.g., only 1 Mb of a 16 Mb memory may be working) and can establish chip soft-error upset rates before final chip manufacturing begins. Simple relationships derived from measurements of more than 80 different chips manufactured over 20 years allow the total cosmic soft-error rate (SER) to be estimated after only limited testing. Comparisons between these accelerated test results and similar tests determined by "field testing" (which may require a year or more of testing after manufacturing begins) show that the experimental techniques are accurate to within a factor of 2.
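
    Scaling an accelerated beam measurement to a field soft-error rate reduces to a linear fluence argument under the usual first-order assumption. The flux and upset numbers below are illustrative, not IBM's data:

```python
# Hedged sketch of scaling an accelerated test to a field soft-error
# rate, assuming upsets scale linearly with particle fluence. The
# sea-level flux value is an illustrative assumption.

def field_ser_fit(upsets, accel_fluence_cm2, field_flux_cm2_hr):
    cross_section = upsets / accel_fluence_cm2     # cm^2 per device
    upsets_per_hour = cross_section * field_flux_cm2_hr
    return upsets_per_hour * 1e9                   # FIT: fails / 1e9 h

# e.g. 50 upsets after 1e10 n/cm^2 in the beam; ~13 n/cm^2/hr assumed
# at sea level
print(field_ser_fit(50, 1e10, 13.0))  # 65.0 FIT
```

    A beam run of hours thus substitutes for the year or more of field testing mentioned in the abstract.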

  19. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    Science.gov (United States)

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

    A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method used in industry that targets near-zero error (3.4 errors per million events). The five main principles of Six Sigma are define, measure, analyze, improve, and control. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, the administrative supervisor, and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors at the preanalytic, analytic and postanalytic phases was analysed. Improvement strategies were proposed in the monthly intradepartmental meetings, and units with high error rates were placed under closer control. Fifty-six (52.4%) of the 107 recorded errors were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, a decrease of 79.77%. The Six Sigma trial in our pathology laboratory provided a reduction of error rates mainly in the pre-analytic and analytic phases.
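
    The Six Sigma bookkeeping used here, converting an error count into defects per million opportunities (DPMO) and a sigma level with the conventional 1.5-sigma shift, can be sketched as follows (counts are illustrative):

```python
from statistics import NormalDist

# DPMO and sigma-level arithmetic behind the "3.4 errors per million"
# target; the 1.5-sigma shift is the standard industry convention.

def dpmo(defects, opportunities):
    return defects / opportunities * 1_000_000

def sigma_level(defects, opportunities, shift=1.5):
    yield_rate = 1 - defects / opportunities
    return NormalDist().inv_cdf(yield_rate) + shift

print(dpmo(3.4, 1_000_000))                   # 3.4: the Six Sigma target
print(round(sigma_level(3.4, 1_000_000), 2))  # 6.0
```

    Running the same arithmetic on a laboratory's own monthly counts gives the sigma level that such a trial tracks over time.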

  20. Errors in the Total Testing Process in the Clinical Chemistry ...

    African Journals Online (AJOL)

    2018-03-01

    Mar 1, 2018 ... testing processes impair the clinical decision-making process. Such errors are ... and external quality control exceeding the target range (14.4%) and (51.4%) ... version 3.5.3 and transferred to Statistical Package for the ...

  1. Process error rates in general research applications to the Human ...

    African Journals Online (AJOL)

    Objective. To examine process error rates in applications for ethics clearance of health research. Methods. Minutes of 586 general research applications made to a human health research ethics committee (HREC) from April 2008 to March 2009 were examined. Rates of approval were calculated and reasons for requiring ...

  2. Individual Differences and Rating Errors in First Impressions of Psychopathy

    Directory of Open Access Journals (Sweden)

    Christopher T. A. Gillen

    2016-10-01

    The current study is the first to investigate whether individual differences in personality are related to improved first-impression accuracy when appraising psychopathy in female offenders from thin slices of information. The study also investigated the types of errors laypeople make when forming these judgments. Sixty-seven undergraduates assessed 22 offenders on their level of psychopathy, violence, likability, and attractiveness. Psychopathy rating accuracy improved as rater extroversion-sociability and agreeableness increased and as rater neuroticism and lifestyle and antisocial characteristics decreased. These results suggest that traits associated with nonverbal rating accuracy or social functioning may be important in threat detection. Raters also made errors consistent with error management theory, suggesting that laypeople overappraise danger when rating psychopathy.

  3. A critique of recent models for human error rate assessment

    International Nuclear Information System (INIS)

    Apostolakis, G.E.

    1988-01-01

    This paper critically reviews two groups of models for assessing human error rates under accident conditions. The first group, which includes the US Nuclear Regulatory Commission (NRC) handbook model and the human cognitive reliability (HCR) model, considers as fundamental the time that is available to the operators to act. The second group, which is represented by the success likelihood index methodology multiattribute utility decomposition (SLIM-MAUD) model, relies on ratings of the human actions with respect to certain qualitative factors and the subsequent derivation of error rates. These models are evaluated with respect to two criteria: the treatment of uncertainties and the internal coherence of the models. In other words, this evaluation focuses primarily on normative aspects of these models. The principal findings are as follows: (1) Both of the time-related models provide human error rates as a function of the available time for action and the prevailing conditions. However, the HCR model ignores the important issue of state-of-knowledge uncertainties, dealing exclusively with stochastic uncertainty, whereas the model presented in the NRC handbook handles both types of uncertainty. (2) SLIM-MAUD provides a highly structured approach for the derivation of human error rates under given conditions. However, the treatment of the weights and ratings in this model is internally inconsistent. (author)

  4. Evaluation of soft errors rate in a commercial memory EEPROM

    International Nuclear Information System (INIS)

    Claro, Luiz H.; Silva, A.A.; Santos, Jose A.

    2011-01-01

    Soft errors are transient circuit errors caused by external radiation. When an ion intercepts a p-n region in an electronic component, the ionization produces excess charges along the track. These charges, when collected, can flip internal values, especially in memory cells. The problem affects not only space applications but also terrestrial ones. Neutrons induced by cosmic rays and alpha particles, emitted from traces of radioactive contaminants contained in packaging and chip materials, are the predominant sources of radiation. Soft error susceptibility differs between memory technologies; hence, experimental studies are very important for Soft Error Rate (SER) evaluation. In this work, the methodology for accelerated tests is presented, with results for the SER of a commercial electrically erasable and programmable read-only memory (EEPROM). (author)

  5. The 95% confidence intervals of error rates and discriminant coefficients

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2015-02-01

    Fisher proposed a linear discriminant function (Fisher's LDF). From 1971, we analysed electrocardiogram (ECG) data in order to develop the diagnostic logic between normal and abnormal symptoms by Fisher's LDF and a quadratic discriminant function (QDF). Our four-year research effort was inferior to the decision tree logic developed by the medical doctor. After this experience, we discriminated many data sets and found four problems with discriminant analysis. A Revised Optimal LDF by Integer Programming (Revised IP-OLDF) based on the minimum number of misclassifications (minimum NM criterion) resolves three of these problems entirely [13, 18]. In this research, we discuss the fourth problem of discriminant analysis: there are no standard errors (SEs) for the error rates and discriminant coefficients. We propose a k-fold cross-validation method. This method offers a model selection technique and 95% confidence intervals (CIs) for error rates and discriminant coefficients.
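
    A minimal version of the proposed k-fold idea, using hypothetical fold-wise error rates in place of real discriminant-analysis output:

```python
import statistics

# Sketch: a 95% CI for an error rate from k-fold cross-validation.
# The fold error rates below are illustrative stand-ins for the
# per-fold misclassification rates a real discriminant fit would give.

def error_rate_ci(fold_error_rates, z=1.96):
    k = len(fold_error_rates)
    mean = statistics.mean(fold_error_rates)
    se = statistics.stdev(fold_error_rates) / k ** 0.5
    return mean - z * se, mean + z * se

folds = [0.12, 0.15, 0.11, 0.14, 0.13, 0.12, 0.16, 0.10, 0.13, 0.14]
lo, hi = error_rate_ci(folds)
print(round(lo, 3), round(hi, 3))
```

    The same machinery applies unchanged to the discriminant coefficients: collect them per fold and report their mean and fold-to-fold spread.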

  6. Assessment of salivary flow rate: biologic variation and measure error.

    NARCIS (Netherlands)

    Jongerius, P.H.; Limbeek, J. van; Rotteveel, J.J.

    2004-01-01

    OBJECTIVE: To investigate the applicability of the swab method in the measurement of salivary flow rate in multiple-handicap drooling children. To quantify the measurement error of the procedure and the biologic variation in the population. STUDY DESIGN: Cohort study. METHODS: In a repeated

  7. Relating Complexity and Error Rates of Ontology Concepts. More Complex NCIt Concepts Have More Errors.

    Science.gov (United States)

    Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher

    2017-05-18

    Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.

  8. Estimating error rates for firearm evidence identifications in forensic science

    Science.gov (United States)

    Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan

    2018-01-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680

  9. Error rate performance of narrowband multilevel CPFSK signals

    Science.gov (United States)

    Ekanayake, N.; Fonseka, K. J. P.

    1987-04-01

    The paper presents a relatively simple method for analyzing the effect of IF filtering on the performance of multilevel FM signals. Using this method, the error rate performance of narrowband FM signals is analyzed for three different detection techniques, namely limiter-discriminator detection, differential detection, and coherent detection followed by differential decoding. The symbol error probabilities are computed for a Gaussian IF filter and a second-order Butterworth IF filter. It is shown that coherent detection with differential decoding yields better performance than limiter-discriminator detection and differential detection, whereas the two noncoherent detectors yield approximately identical performance.

  10. Aniseikonia quantification: error rate of rule of thumb estimation.

    Science.gov (United States)

    Lubkin, V; Shippman, S; Bennett, G; Meininger, D; Kramer, P; Poppinga, P

    1999-01-01

    To find the error rate in quantifying aniseikonia by using "Rule of Thumb" estimation in comparison with proven space eikonometry. Study 1: 24 adult pseudophakic individuals were measured for anisometropia and astigmatic interocular difference. Rule of Thumb quantification for prescription was calculated and compared with aniseikonia measurement by the classical Essilor Projection Space Eikonometer. Study 2: a parallel analysis was performed on 62 consecutive phakic patients from our strabismus clinic group. Frequency of error: for Group 1 (24 cases), 5 (or 21%) were equal (i.e., 1% or less difference), 16 (or 67%) were greater (more than 1% different), and 3 (or 13%) were less by Rule of Thumb calculation in comparison to aniseikonia determined on the Essilor eikonometer. For Group 2 (62 cases), 45 (or 73%) were equal (1% or less), 10 (or 16%) were greater, and 7 (or 11%) were lower in the Rule of Thumb calculations in comparison to Essilor eikonometry. Magnitude of error: in Group 1, in 10/24 (29%) aniseikonia by Rule of Thumb estimation was 100% or more greater than by space eikonometry, and in 6 of those ten by 200% or more. In Group 2, in 4/62 (6%) aniseikonia by Rule of Thumb estimation was 200% or more greater than by space eikonometry. The frequency and magnitude of apparent clinical errors of Rule of Thumb estimation are disturbingly large. This problem is greatly magnified by the time, effort, and cost of prescribing and executing an aniseikonic correction for a patient. The higher the refractive error, the greater the anisometropia, and the worse the errors in Rule of Thumb estimation of aniseikonia. Accurate eikonometric methods and devices should be employed in all cases where such measurements can be made. Rule of Thumb estimations should be limited to cases where such subjective testing and measurement cannot be performed, as in infants after unilateral cataract surgery.

  11. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)

  12. The nearest neighbor and the bayes error rates.

    Science.gov (United States)

    Loizou, G; Maybank, S J

    1987-02-01

    The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal then the asymptotic error rates satisfy the inequalities Ek,l+1 ≤ E*(λ) ≤ Ek,l ≤ dE*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions Ek,l and dE*(λ) are equal.
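The classical special case (k = 1, no reject option) can be illustrated numerically, with the Cover-Hart asymptotic bounds E* ≤ E_1NN ≤ 2E*(1 − E*) playing the role of the inequalities above. A hedged sketch on synthetic 1-D Gaussians:

```python
import random, math

random.seed(1)

def phi(x):
    # Standard normal CDF.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Two equal-prior classes, N(0,1) vs N(2,1); the Bayes rule thresholds
# at x = 1, so the Bayes error rate is phi(-1) ~ 0.159.
bayes_error = phi(-1.0)

def sample(n):
    return [(random.gauss(0, 1), 0) if random.random() < 0.5
            else (random.gauss(2, 1), 1) for _ in range(n)]

train, test = sample(1000), sample(1000)

def nn_predict(x):
    # Plain 1-nearest-neighbour rule in one dimension.
    return min(train, key=lambda p: abs(p[0] - x))[1]

nn_error = sum(1 for x, c in test if nn_predict(x) != c) / len(test)
# Asymptotically, nn_error should land between bayes_error and
# 2 * bayes_error * (1 - bayes_error) ~ 0.267.
```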

  13. CREME96 and Related Error Rate Prediction Methods

    Science.gov (United States)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the Cosmic Ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  14. Modeling of Bit Error Rate in Cascaded 2R Regenerators

    DEFF Research Database (Denmark)

    Öhman, Filip; Mørk, Jesper

    2006-01-01

    This paper presents a simple and efficient model for estimating the bit error rate in a cascade of optical 2R-regenerators. The model includes the influences of amplifier noise, finite extinction ratio and nonlinear reshaping. The interplay between the different signal impairments and the regenerating nonlinearity is investigated. It is shown that an increase in nonlinearity can compensate for an increase in noise figure or decrease in signal power. Furthermore, the influence of the improvement in signal extinction ratio along the cascade and the importance of choosing the proper threshold…

  15. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    Energy Technology Data Exchange (ETDEWEB)

    R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences

  16. Minimizing Symbol Error Rate for Cognitive Relaying with Opportunistic Access

    KAUST Repository

    Zafar, Ammar

    2012-12-29

    In this paper, we present an optimal resource allocation scheme (ORA) for an all-participate (AP) cognitive relay network that minimizes the symbol error rate (SER). The SER is derived and different constraints on the system are considered. We consider the cases of both individual and global power constraints, individual constraints only, and global constraints only. Numerical results show that the ORA scheme outperforms the schemes with direct link only and uniform power allocation (UPA) in terms of minimizing the SER for all three cases of constraints. Numerical results also show that the individual-constraints-only case provides the best performance at large signal-to-noise ratio (SNR).

  17. Influence of calculation error of total field anomaly in strongly magnetic environments

    Science.gov (United States)

    Yuan, Xiaoyu; Yao, Changli; Zheng, Yuanman; Li, Zelin

    2016-04-01

    An assumption made in many magnetic interpretation techniques is that ΔTact (the total field anomaly, i.e. the measurement given by total field magnetometers after the main geomagnetic field T0 is removed) can be approximated mathematically by ΔTpro (the projection of the anomalous field vector onto the direction of the earth's normal field). In order to meet the demand for high-precision processing in magnetic prospecting, the approximation error E between ΔTact and ΔTpro is studied in this research. Generally speaking, the error E is extremely small for anomalies not greater than about 0.2T0. However, the error E may be large in highly magnetic environments, which leads to significant effects on subsequent quantitative inference. Therefore, we investigate the error E through numerical experiments on high-susceptibility bodies. A systematic error analysis was made by using a 2-D elliptic cylinder model. The analysis shows that the magnitude of ΔTact is usually larger than that of ΔTpro. This implies that a theoretical anomaly computed without accounting for the error E overestimates the anomaly associated with the body. It is demonstrated through numerical experiments that the error E is appreciable and should not be ignored. It is also shown that the curves of ΔTpro and the error E exhibit a certain symmetry when the directions of magnetization and the geomagnetic field change. To be more specific, Emax (the maximum of the error E) appears above the center of the magnetic body when the magnetic parameters are fixed. Other characteristics of the error E are also discovered. For instance, the curve of Emax with respect to latitude is symmetrical on both sides of the magnetic equator, and the extremum of Emax is always found in the mid-latitudes. It is also demonstrated that the error E has a great influence on magnetic processing, transformation and inversion results. It is concluded that when bodies have high magnetic susceptibilities, the error E can
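The two quantities and their difference E can be computed directly for a synthetic anomaly vector. The field values below are hypothetical: |T0 + Ta| − |T0| is the exact anomaly seen by a total-field magnetometer, and the projection of Ta onto the direction of T0 is the usual approximation.

```python
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def delta_t_act(T0, Ta):
    # Exact total-field anomaly: |T0 + Ta| - |T0|.
    return norm([t + a for t, a in zip(T0, Ta)]) - norm(T0)

def delta_t_pro(T0, Ta):
    # Common approximation: projection of Ta onto the direction of T0.
    return sum(t * a for t, a in zip(T0, Ta)) / norm(T0)

# Hypothetical mid-latitude normal field, in nT.
T0 = (0.0, 20000.0, 45000.0)

weak = (10.0, 20.0, 30.0)             # anomaly << T0
strong = (10000.0, 20000.0, 30000.0)  # anomaly comparable to T0

E_weak = delta_t_act(T0, weak) - delta_t_pro(T0, weak)
E_strong = delta_t_act(T0, strong) - delta_t_pro(T0, strong)
# By the Cauchy-Schwarz inequality E is non-negative, and it grows
# sharply once the anomaly becomes comparable to T0.
```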

  18. Estimation of total error in DWPF reported radionuclide inventories. Revision 1

    International Nuclear Information System (INIS)

    Edwards, T.B.

    1995-01-01

    The Defense Waste Processing Facility (DWPF) at the Savannah River Site is required to determine and report the radionuclide inventory of its glass product. For each macro-batch, the DWPF will report both the total amount (in curies) of each reportable radionuclide and the average concentration (in curies/gram of glass) of each reportable radionuclide. The DWPF is to provide the estimated error of these reported values of its radionuclide inventory as well. The objective of this document is to provide a framework for determining the estimated error in DWPF's reporting of these radionuclide inventories. This report investigates the impact of random errors due to measurement and sampling on the total amount of each reportable radionuclide in a given macro-batch. In addition, the impact of these measurement and sampling errors and process variation are evaluated to determine the uncertainty in the reported average concentrations of radionuclides in DWPF's filled canister inventory resulting from each macro-batch
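A minimal sketch of how such independent random error sources might be combined in quadrature. The 5%/8% relative errors, sample count, concentration, and glass mass below are hypothetical placeholders, not DWPF figures:

```python
import math

def total_curies(conc_ci_per_g, mass_g):
    """Reported total = average concentration x glass mass."""
    return conc_ci_per_g * mass_g

def relative_error(rel_measurement, rel_sampling, n_samples):
    """Combine independent random error sources in quadrature; the
    sampling contribution shrinks with the number of samples."""
    return math.sqrt(rel_measurement ** 2 + rel_sampling ** 2 / n_samples)

# Hypothetical: 5% measurement error, 8% sampling error, 16 samples.
rel = relative_error(0.05, 0.08, 16)
total = total_curies(2.0e-3, 1.0e6)   # 2000 Ci for the macro-batch
uncertainty = total * rel             # estimated error of the reported total
```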

  19. Systematic errors in the tables of theoretical total internal conversion coefficients

    International Nuclear Information System (INIS)

    Dragoun, O.; Rysavy, M.

    1992-01-01

    Some of the total internal conversion coefficients presented in the widely used tables of Rosel et al (1978 Atom. Data Nucl. Data Tables 21, 291) were found to be erroneous. The errors appear for some low transition energies, all multipolarities, and probably all elements. The origin of the errors is explained. The subshell conversion coefficients of Rosel et al, where available, agree with our calculations to within a few percent. (author)

  20. Error-rate performance analysis of opportunistic regenerative relaying

    KAUST Repository

    Tourki, Kamel

    2011-09-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path is considered unusable, taking into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive the exact statistics of each hop, in terms of probability density function (PDF). Then, the PDFs are used to determine accurate closed-form expressions for the end-to-end bit error rate (BER) of binary phase-shift keying (BPSK) modulation, where the detector may use maximum ratio combining (MRC) or selection combining (SC). Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over a linear network (LN) architecture and considering Rayleigh fading channels. © 2011 IEEE.
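A hedged Monte Carlo check of the BPSK/MRC ingredient over Rayleigh fading. This is a generic diversity sketch, not the paper's opportunistic-relay analysis; the single-branch closed form 0.5(1 − √(γ̄/(1+γ̄))) is the standard reference result:

```python
import random, math

random.seed(2)

def rayleigh_bpsk_ber_mc(snr_db, n_branches=2, n_bits=20000):
    """Monte Carlo BER of BPSK over i.i.d. Rayleigh branches with
    maximum ratio combining: decide on the sum of the per-branch
    matched-filter outputs weighted by |h|^2."""
    snr = 10 ** (snr_db / 10)
    errors = 0
    for _ in range(n_bits):
        s = 1.0                              # transmit +1 (BPSK)
        decision = 0.0
        for _ in range(n_branches):
            hr = random.gauss(0, math.sqrt(0.5))
            hi = random.gauss(0, math.sqrt(0.5))
            g = hr * hr + hi * hi            # branch power gain |h|^2
            noise = random.gauss(0, math.sqrt(1 / (2 * snr)))
            decision += g * s + math.sqrt(g) * noise
        if decision < 0:
            errors += 1
    return errors / n_bits

def rayleigh_bpsk_ber_theory(snr_db):
    """Closed form for a single Rayleigh branch."""
    g = 10 ** (snr_db / 10)
    return 0.5 * (1 - math.sqrt(g / (1 + g)))

ber1 = rayleigh_bpsk_ber_mc(10, n_branches=1)
ber2 = rayleigh_bpsk_ber_mc(10, n_branches=2)
```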

  1. The decline and fall of Type II error rates

    Science.gov (United States)

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.

  2. Low dose rate gamma ray induced loss and data error rate of multimode silica fibre links

    International Nuclear Information System (INIS)

    Breuze, G.; Fanet, H.; Serre, J.

    1993-01-01

    Fiber optics data transmission from numerous multiplexed sensors is potentially attractive for nuclear plant applications. Multimode silica fiber behaviour during steady-state gamma ray exposure is studied as a joint programme between LETI CE/SACLAY and EDF Renardieres: transmitted optical power and bit error rate have been measured on a 100 m optical fiber

  3. A Fast Soft Bit Error Rate Estimation Method

    Directory of Open Access Journals (Sweden)

    Ait-Idir Tarik

    2010-01-01

    Full Text Available We have suggested in a previous publication a method to estimate the Bit Error Rate (BER of a digital communications system instead of using the famous Monte Carlo (MC simulation. This method was based on the estimation of the probability density function (pdf of soft observed samples. The kernel method was used for the pdf estimation. In this paper, we suggest to use a Gaussian Mixture (GM model. The Expectation Maximisation algorithm is used to estimate the parameters of this mixture. The optimal number of Gaussians is computed by using Mutual Information Theory. The analytical expression of the BER is therefore simply given by using the different estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three mentioned methods: Monte Carlo, Kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or Kernel aided techniques. The results show that the GM method can drastically reduce the needed number of samples to estimate the BER in order to reduce the required simulation run-time, even at very low BER.
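The core idea — fit a Gaussian mixture to the soft samples with EM, then read the BER off the fitted tails analytically — can be sketched in a toy 1-D form. Two components, synthetic samples, and a zero decision threshold are assumed here; the real method also selects the number of Gaussians via mutual information, which is omitted:

```python
import random, math

random.seed(3)

def phi(x):
    # Standard normal CDF, used for the analytic tail probabilities.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def npdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

# Synthetic soft samples for a transmitted "+1": bimodal, e.g. due to
# residual interference (weights/means/sigmas are arbitrary choices).
xs = [random.gauss(1.0, 0.3) if random.random() < 0.8 else random.gauss(0.4, 0.3)
      for _ in range(2000)]

def em_gm2(xs, iters=40):
    """Plain EM for a two-component 1-D Gaussian mixture."""
    w, m, s = [0.5, 0.5], [min(xs), max(xs)], [0.5, 0.5]
    for _ in range(iters):
        # E-step: per-sample responsibilities.
        resp = [[w[k] * npdf(x, m[k], s[k]) for k in range(2)] for x in xs]
        resp = [[rk / sum(row) for rk in row] for row in resp]
        # M-step: re-estimate weights, means and standard deviations.
        for k in range(2):
            nk = sum(row[k] for row in resp)
            w[k] = nk / len(xs)
            m[k] = sum(row[k] * x for row, x in zip(resp, xs)) / nk
            s[k] = max(math.sqrt(sum(row[k] * (x - m[k]) ** 2
                                     for row, x in zip(resp, xs)) / nk), 1e-6)
    return w, m, s

w, m, s = em_gm2(xs)
# Analytic BER for a zero threshold: mixture mass that falls below 0.
ber = sum(wk * phi((0.0 - mk) / sk) for wk, mk, sk in zip(w, m, s))
```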

  4. Airborne and total gamma absorbed dose rates at Patiala - India

    International Nuclear Information System (INIS)

    Tesfaye, Tilahun; Sahota, H.S.; Singh, K.

    1999-01-01

    The external gamma absorbed dose rate due to gamma rays originating from gamma-emitting aerosols in air is compared with the total external gamma absorbed dose rate at the Physics Department of Punjabi University, Patiala. It has been found that the contribution of radionuclides on particulate matter suspended in air to the total external gamma absorbed dose rate is about 20% of the overall gamma absorbed dose rate. (author)

  5. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to improve when high-efficiency forward error correction codes are employed. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: first, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Second, we show that using this secret key rate model combined with greater-than-unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement-breaking channel, an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)

  6. Analysis of gross error rates in operation of commercial nuclear power stations

    International Nuclear Information System (INIS)

    Joos, D.W.; Sabri, Z.A.; Husseiny, A.A.

    1979-01-01

    Experience in operation of US commercial nuclear power plants is reviewed over a 25-month period. The reports accumulated in that period on events of human error and component failure are examined to evaluate gross operator error rates. The impact of such errors on plant operation and safety is examined through the use of proper taxonomies of error, tasks and failures. Four categories of human error are considered, namely operator, maintenance, installation and administrative. The computed error rates are used to examine appropriate operator models for evaluation of operator reliability. Human error rates are found to be significant to a varying degree in both BWRs and PWRs. This emphasizes the importance of considering human factors in safety and reliability analysis of nuclear systems. The results also indicate that human errors, and especially operator errors, do indeed follow the exponential reliability model. (Auth.)
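Under the exponential reliability model mentioned above, a constant error rate and the probability of error-free operation follow directly. The counts and exposure hours below are hypothetical placeholders, not the study's data:

```python
import math

def mle_error_rate(n_errors, total_exposure_hours):
    """Constant-rate (exponential) model: lambda-hat = events / exposure."""
    return n_errors / total_exposure_hours

def reliability(t_hours, lam):
    """Probability of error-free operation for t hours: R(t) = exp(-lam*t)."""
    return math.exp(-lam * t_hours)

# Hypothetical: 45 gross operator errors over ~25 months of observation
# (~18,250 hours).
lam = mle_error_rate(45, 18250.0)
r_shift = reliability(8.0, lam)   # chance of an error-free 8-hour shift
```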

  7. Using wide area differential GPS to improve total system error for precision flight operations

    Science.gov (United States)

    Alter, Keith Warren

    Total System Error (TSE) refers to an aircraft's total deviation from the desired flight path. TSE can be divided into Navigational System Error (NSE), the error attributable to the aircraft's navigation system, and Flight Technical Error (FTE), the error attributable to pilot or autopilot control. Improvement in either NSE or FTE reduces TSE and leads to the capability to fly more precise flight trajectories. The Federal Aviation Administration's Wide Area Augmentation System (WAAS) became operational for non-safety critical applications in 2000 and will become operational for safety critical applications in 2002. This navigation service will provide precise 3-D positioning (demonstrated to better than 5 meters horizontal and vertical accuracy) for civil aircraft in the United States. Perhaps more importantly, this navigation system, which provides continuous operation across large regions, enables new flight instrumentation concepts which allow pilots to fly aircraft significantly more precisely, both for straight and curved flight paths. This research investigates the capabilities of some of these new concepts, including the Highway-In-The Sky (HITS) display, which not only improves FTE but also reduces pilot workload when compared to conventional flight instrumentation. Augmentation to the HITS display, including perspective terrain and terrain alerting, improves pilot situational awareness. Flight test results from demonstrations in Juneau, AK, and Lake Tahoe, CA, provide evidence of the overall feasibility of integrated, low-cost flight navigation systems based on these concepts. These systems, requiring no more computational power than current-generation low-end desktop computers, have immediate applicability to general aviation flight from Cessnas to business jets and can support safer and ultimately more economical flight operations. Commercial airlines may also, over time, benefit from these new technologies.
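When NSE and FTE are treated as independent error components, they are commonly combined by root-sum-square; a minimal sketch with hypothetical 95% error values in metres (the 5 m figure echoes the WAAS accuracy quoted above, the FTE values are illustrative only):

```python
import math

def total_system_error(nse, fte):
    """RSS combination of independent navigation and flight technical
    errors: TSE = sqrt(NSE^2 + FTE^2)."""
    return math.hypot(nse, fte)

# Hypothetical 95% values in metres.
tse_conventional = total_system_error(5.0, 150.0)  # hand-flown, conventional gauges
tse_hits = total_system_error(5.0, 30.0)           # tighter FTE with a HITS display
```

With a precise navigation source, the FTE term dominates, which is why display concepts that shrink FTE shrink TSE almost one-for-one.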

  8. An Analysis of Total Lightning Flash Rates Over Florida

    Science.gov (United States)

    Mazzetti, Thomas O.; Fuelberg, Henry E.

    2017-12-01

    Although Florida is known as the "Sunshine State", it also contains the greatest lightning flash densities in the United States. Flash density has received considerable attention in the literature, but lightning flash rate has received much less attention. We use data from the Earth Networks Total Lightning Network (ENTLN) to produce a 5 year (2010-2014) set of statistics regarding total flash rates over Florida and adjacent regions. Instead of tracking individual storms, we superimpose a 0.2° × 0.2° grid over the study region and count both cloud-to-ground (CG) and in-cloud (IC) flashes over 5 min intervals. Results show that the distribution of total flash rates is highly skewed toward small values, whereas the greatest rate is 185 flashes min-1. Greatest average annual flash rates (~3 flashes min-1) are located near Orlando. The southernmost peninsula, North Florida, and the Florida Panhandle exhibit smaller average annual flash rates (~1.5 flashes min-1). Large flash rates > 100 flashes min-1 can occur during any season, at any time during the 24 h period, and at any location within the domain. However, they are most likely during the afternoon and early evening in East Central Florida during the spring and summer months.
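The gridded counting procedure described above can be sketched as follows, with 0.2° cells and 5-minute windows as in the study; the flash records are hypothetical:

```python
from collections import defaultdict

def grid_flash_rates(flashes, cell_deg=0.2, window_min=5):
    """Bin (time_min, lat, lon) flash records into a 0.2 x 0.2 degree grid
    and 5-minute windows; return flashes per minute for each cell/window."""
    counts = defaultdict(int)
    for t_min, lat, lon in flashes:
        key = (int(t_min // window_min),
               int(lat // cell_deg),
               int(lon // cell_deg))
        counts[key] += 1
    return {k: c / window_min for k, c in counts.items()}

# Hypothetical records near Orlando (lat, lon), minutes since start.
flashes = [(0.5, 28.53, -81.41), (1.2, 28.54, -81.42),
           (3.9, 28.55, -81.40), (6.1, 28.53, -81.41)]
rates = grid_flash_rates(flashes)   # flashes per minute per cell/window
```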

  9. Bit error rate analysis of free-space optical communication over general Malaga turbulence channels with pointing error

    KAUST Repository

    Alheadary, Wael Ghazy

    2016-12-24

    In this work, we present the bit error rate (BER) and achievable spectral efficiency (ASE) performance of a free-space optical (FSO) link with pointing errors, based on intensity modulation/direct detection (IM/DD) and heterodyne detection over a general Malaga turbulence channel. More specifically, we present exact closed-form expressions for adaptive and non-adaptive transmission. The closed-form expressions are presented in terms of generalized power series of the Meijer's G-function. Moreover, asymptotic closed-form expressions are provided to validate our work. In addition, all the presented analytical results are illustrated using a selected set of numerical results.

  10. Errors in the determination of the total filtration of diagnostic x-ray tubes by the HVL method

    International Nuclear Information System (INIS)

    Gilmore, B.J.; Cranley, K.

    1990-01-01

    Optimal technique and an analysis of errors are essential for interpreting whether the total filtration of a diagnostic x-ray tube is acceptable. The study discusses this problem from a theoretical viewpoint, utilising recent theoretical HVL-total-filtration data relating to 10° and 16° tungsten target angles and 0-30% kilovoltage ripples. The theory indicates the typical accuracy to which each appropriate parameter must be determined to maintain acceptable errors in total filtration. A quantitative approach is taken to evaluate systematic errors in a technique for interpolation of HVL from raw attenuation curve data. A theoretical derivation is presented to enable random errors in HVL due to x-ray set inconsistency to be estimated for particular experimental techniques and data analysis procedures. Further formulae are presented to enable errors in the total filtration estimate to be readily determined from those in the individual parameters. (author)

  11. Agreeableness and Conscientiousness as Predictors of University Students' Self/Peer-Assessment Rating Error

    Science.gov (United States)

    Birjandi, Parviz; Siyyari, Masood

    2016-01-01

    This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…

  12. National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?

    Science.gov (United States)

    Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.

    2010-01-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…

  13. Bit Error Rate Minimizing Channel Shortening Equalizers for Single Carrier Cyclic Prefixed Systems

    National Research Council Canada - National Science Library

    Martin, Richard K; Vanbleu, Koen; Ysebaert, Geert

    2007-01-01

    .... Previous work on channel shortening has largely been in the context of digital subscriber lines, a wireline system that allows bit allocation, thus it has focused on maximizing the bit rate for a given bit error rate (BER...

  14. Framed bit error rate testing for 100G ethernet equipment

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2010-01-01

    rate. As the need for 100 Gigabit Ethernet equipment rises, so does the need for equipment, which can properly test these systems during development, deployment and use. This paper presents early results from a work-in-progress academia-industry collaboration project and elaborates on the challenges...

  15. The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency.

    Science.gov (United States)

    Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ

    2012-01-01

    This study explored the relationships of oral reading speed and error rate on comprehension with second and third grade students with identified reading risk. The study included 920 2nd graders and 974 3rd graders. Participants were assessed using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and the Woodcock Reading Mastery Test (WRMT) Passage Comprehension subtest. Results from this study further illuminate the significant relationships between error rate, oral reading fluency, and reading comprehension performance, and grade-specific guidelines for appropriate error rate levels. Low oral reading fluency and high error rates predict the level of passage comprehension performance. For second grade students below benchmark, a fall assessment error rate of 28% predicts that student comprehension performance will be below average. For third grade students below benchmark, the fall assessment cut point is 14%. Instructional implications of the findings are discussed.

  16. Dispensing error rate after implementation of an automated pharmacy carousel system.

    Science.gov (United States)

    Oswald, Scott; Caldwell, Richard

    2007-07-01

    A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and error rates had increased. The filling rate for automated dispensing cabinets was associated with the largest decrease in errors. Filling and dispensing error rates had decreased by December 2005. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.
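
    Before/after error-rate comparisons of this kind are often tested with a two-proportion z-test. A minimal sketch, using illustrative counts rather than the study's data:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                     # pooled proportion under H0
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative: 12 filling errors in 422 orders before vs 4 in 400 after.
z = two_proportion_z(12, 422, 4, 400)
print(round(z, 2))   # compare against 1.96 for a two-sided 5% level
```

    A z statistic above 1.96 in magnitude would indicate a significant change at the 5% level; with small error counts an exact test (e.g. Fisher's) is the more careful choice.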

  17. Sex differences in obesity associated with total fertility rate.

    Directory of Open Access Journals (Sweden)

    Robert Brooks

    The identification of biological and ecological factors that contribute to obesity may help in combating the spreading obesity crisis. Sex differences in obesity rates are particularly poorly understood. Here we show that the strong female bias in obesity in many countries is associated with high total fertility rate, which is well known to be correlated with factors such as low average income, infant mortality and female education. We also document effects of reduced access to contraception and increased inequality of income among households on obesity rates. These results are consistent with studies that implicate reproduction as a risk factor for obesity in women and that suggest the effects of reproduction interact with socioeconomic and educational factors. We discuss our results in the light of recent research in dietary ecology and the suggestion that insulin resistance during pregnancy is due to historic adaptation to protect the developing foetus during famine. Increased access to contraception and education in countries with high total fertility rate might have the additional benefit of reducing the rates of obesity in women.

  18. Estimating and localizing the algebraic and total numerical errors using flux reconstructions

    Czech Academy of Sciences Publication Activity Database

    Papež, Jan; Strakoš, Z.; Vohralík, M.

    2018-01-01

    Roč. 138, č. 3 (2018), s. 681-721 ISSN 0029-599X R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : numerical solution of partial differential equations * finite element method * a posteriori error estimation * algebraic error * discretization error * stopping criteria * spatial distribution of the error Subject RIV: BA - General Mathematics Impact factor: 2.152, year: 2016

  19. A note on total muon capture rates in heavy nuclei

    International Nuclear Information System (INIS)

    Parthasarathy, R.

    1978-03-01

    The results of calculations of the total capture rates in heavy nuclei, taking into account the nucleon velocity-dependent terms in the Fujii-Primakoff Hamiltonian and the effective mass of nucleons inside the nucleus, are presented along with the recent experimental data. The results are in general agreement with experiment. However, they indicate a possible deviation from SU(4) symmetry and, in some nuclei, support the Salam-Strathdee idea of the vanishing of the Cabibbo angle at large magnetic fields.

  20. Double symbol error rates for differential detection of narrow-band FM

    Science.gov (United States)

    Simon, M. K.

    1985-01-01

    This paper evaluates the double symbol error rate (average probability of two consecutive symbol errors) in differentially detected narrow-band FM. Numerical results are presented for the special case of MSK with a Gaussian IF receive filter. It is shown that, not unlike similar results previously obtained for the single error probability of such systems, large inaccuracies in predicted performance can occur when intersymbol interference is ignored.

  1. Failure rate of cemented and uncemented total hip replacements

    DEFF Research Database (Denmark)

    Makela, K. T.; Matilainen, M.; Pulkkinen, P.

    2014-01-01

    Objective To assess the failure rate of cemented, uncemented, hybrid, and reverse hybrid total hip replacements in patients aged 55 years or older. Design Register study. Setting Nordic Arthroplasty Register Association database (combined data from Sweden, Norway, Denmark, and Finland). Participants 347 899 total hip replacements performed during 1995-2011. Main outcome measures Probability of implant survival (Kaplan-Meier analysis) along with implant survival with revision for any reason as endpoint (Cox multiple regression), adjusted for age, sex, and diagnosis in age groups 55-64, 65-74, and 75 years or older. Results The proportion of total hip replacements using uncemented implants increased rapidly towards the end of the study period. The 10 year survival of cemented implants in patients aged 65 to 74 and 75 or older (93.8%, 95% confidence interval 93.6% to 94.0% and 95.9%, 95....

  2. Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.

    Science.gov (United States)

    Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J

    2012-08-01

    Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which, 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.

  3. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    Science.gov (United States)

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however, prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate), we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  4. Error rates in forensic DNA analysis: Definition, numbers, impact and communication

    NARCIS (Netherlands)

    Kloosterman, A.; Sjerps, M.; Quak, A.

    2014-01-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and

  5. Classification based upon gene expression data: bias and precision of error rates.

    Science.gov (United States)

    Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L

    2007-06-01

    Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
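
    The selection bias described above can be reproduced on a simulated non-informative dataset: choosing the most label-correlated features on all samples before cross-validation makes a pure-noise problem look learnable. A small sketch, assuming NumPy and a nearest-centroid classifier (my choice for brevity, not the study's classifier):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 40, 500, 10
X = rng.standard_normal((n, p))          # pure noise: no real class signal
y = np.array([0, 1] * (n // 2))

def loo_error(X, y, select_inside):
    """Leave-one-out error of a nearest-centroid classifier on the k features
    most correlated with y. Selecting features inside the CV loop is unbiased;
    selecting once on all data incurs selection bias."""
    def top_k(Xs, ys):
        score = np.abs((Xs - Xs.mean(0)).T @ (ys - ys.mean()))
        return np.argsort(score)[-k:]
    if not select_inside:
        feats = top_k(X, y)              # test samples leak into selection
    errors = 0
    for i in range(n):
        tr = np.arange(n) != i
        f = top_k(X[tr], y[tr]) if select_inside else feats
        c0 = X[tr][y[tr] == 0][:, f].mean(0)
        c1 = X[tr][y[tr] == 1][:, f].mean(0)
        pred = int(np.linalg.norm(X[i, f] - c1) < np.linalg.norm(X[i, f] - c0))
        errors += pred != y[i]
    return errors / n

# On no-signal data the true error rate is 50%; a biased procedure looks better.
print("selection outside CV:", loo_error(X, y, select_inside=False))
print("selection inside CV: ", loo_error(X, y, select_inside=True))
```

    Because the labels here are uninformative, any estimate well below 50% from the first variant is exactly the downward bias the article warns about; two-level (nested) cross-validation corresponds to the second variant.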

  6. Error Characterization and Mitigation for 16Nm MLC NAND Flash Memory Under Total Ionizing Dose Effect

    Science.gov (United States)

    Li, Yue (Inventor); Bruck, Jehoshua (Inventor)

    2018-01-01

    A data device includes a memory having a plurality of memory cells configured to store data values in accordance with an optional predetermined rank modulation scheme, and a memory controller that receives a current error count from an error decoder of the data device for one or more data operations of the flash memory device and selects an operating mode for data scrubbing in accordance with the received error count and a program cycles count.

  7. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.; Ghaeb, Jasim A.; Jazzar, Saleh; Saraereh, Omar A.

    2012-01-01

    In this paper, we derived an efficient simulation method to evaluate the error rate of wireless communication system. Coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate

  8. Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.

    Science.gov (United States)

    Reddon, John R.; And Others

    1985-01-01

    Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)

  9. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    Science.gov (United States)

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  10. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    Science.gov (United States)

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current
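
    A minimal sketch of a total-error acceptance rule of this kind uses a β-expectation tolerance interval: accept the method if the interval expected to contain a proportion β of future results lies within the acceptance limits. The sketch below substitutes a normal quantile for the Student-t quantile as a large-sample simplification, and the acceptance limit and data are illustrative.

```python
from statistics import NormalDist, mean, stdev

def total_error_accept(results_pct, beta=0.9, limit=15.0):
    """beta-expectation tolerance-interval check on %-recovery results.

    Accept the method if the interval expected to contain a proportion beta
    of future results lies within 100% +/- limit. A normal quantile stands
    in for the Student-t quantile used in the literature (large-sample
    simplification).
    """
    n = len(results_pct)
    m, s = mean(results_pct), stdev(results_pct)
    z = NormalDist().inv_cdf(1 - (1 - beta) / 2)
    half = z * s * (1 + 1 / n) ** 0.5     # prediction-type half-width
    return (100 - limit) <= m - half and m + half <= (100 + limit)

# Illustrative validation runs (% recovery at one concentration level):
runs = [98.2, 101.5, 99.7, 100.8, 97.9, 102.1]
print(total_error_accept(runs))
```

    The key point, matching the abstract, is that accuracy and precision enter jointly: a small bias with large variance, or vice versa, can still fail the combined criterion.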

  11. The study of error for analysis in dynamic image from the error of count rates in Nal (Tl) scintillation camera

    International Nuclear Information System (INIS)

    Oh, Joo Young; Kang, Chun Goo; Kim, Jung Yul; Oh, Ki Baek; Kim, Jae Sam; Park, Hoon Hee

    2013-01-01

    This study aimed to evaluate the effect of T½ on count rates in the analysis of dynamic scans using a NaI(Tl) scintillation camera, and to suggest a new quality control method based on these effects. We produced a point source of 99mTcO4- with 18.5 to 185 MBq in 2 mL syringes, and acquired 30 frames of dynamic images at 10 to 60 seconds per frame using an Infinia gamma camera (GE, USA). In the second experiment, 90 frames of dynamic images were acquired from a 74 MBq point source by 5 gamma cameras (Infinia 2, Forte 2, Argus 1). There were no significant differences in the average count rates of the sources with 18.5 to 92.5 MBq in the analysis of 10 to 60 seconds/frame with a 10 second interval in the first experiment (p>0.05), but average count rates were significantly low for sources over 111 MBq at 60 seconds/frame (p<0.01). According to the linear regression analysis of the count rates of the 5 gamma cameras acquired over 90 minutes, the counting efficiency of the fourth gamma camera was the lowest, at 0.0064%, and its gradient and coefficient of variation were high, at 0.0042 and 0.229 respectively. We found no abnormal fluctuation in the χ² test of count rates (p>0.02), and the variances were homogeneous among the gamma cameras in Levene's F-test (p>0.05). In the correlation analysis, the only significant correlation was a negative one between counting efficiency and gradient (r=-0.90, p<0.05). Lastly, calculating the T½ error from a change of gradient between -0.25% and +0.25% shows that the error increases when T½ is relatively long or the gradient is high. When estimating this for the fourth camera, which had the highest gradient, we could not see a T½ error within 60 minutes at that value. In conclusion, it is necessary to manage the quality of radiation measurement of scintillation gamma cameras in the medical field rigorously. Especially, we found a
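
    The closing point, that a camera sensitivity drift (gradient) biases an apparent half-life derived from count rates, can be illustrated with a toy calculation: generate ideal Tc-99m decay data, impose a small linear drift, and fit the log-linear slope. The drift magnitude below simply reuses the 0.0042 gradient figure as an assumed %/min drift for illustration; it is not the study's calibration.

```python
import math

def fit_half_life(times_min, counts):
    """Least-squares slope of ln(counts) vs time; T1/2 = ln(2) / (-slope)."""
    n = len(times_min)
    xbar = sum(times_min) / n
    ybar = sum(math.log(c) for c in counts) / n
    num = sum((t - xbar) * (math.log(c) - ybar) for t, c in zip(times_min, counts))
    den = sum((t - xbar) ** 2 for t in times_min)
    return math.log(2) / -(num / den)

T_HALF = 360.6                                 # Tc-99m half-life, minutes
times = list(range(0, 91, 5))
ideal = [1e5 * 2 ** (-t / T_HALF) for t in times]
# Assumed sensitivity drift of -0.0042 %/min multiplying the measured counts:
drift = [c * (1 - 0.0042 / 100 * t) for t, c in zip(times, ideal)]

print(fit_half_life(times, ideal))   # recovers the true half-life
print(fit_half_life(times, drift))   # biased low by the drift
```

    Even a drift of a few thousandths of a percent per minute visibly shortens the fitted T½, which is why the abstract ties the T½ error to the camera's gradient.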

  12. Safe and effective error rate monitors for SS7 signaling links

    Science.gov (United States)

    Schmidt, Douglas C.

    1994-04-01

    This paper describes SS7 error monitor characteristics, discusses the existing SUERM (Signal Unit Error Rate Monitor), and develops the recently proposed EIM (Error Interval Monitor) for higher speed SS7 links. A SS7 error monitor is considered safe if it ensures acceptable link quality and is considered effective if it is tolerant to short-term phenomena. Formal criteria for safe and effective error monitors are formulated in this paper. This paper develops models of changeover transients, the unstable component of queue length resulting from errors. These models are in the form of recursive digital filters. Time is divided into sequential intervals. The filter's input is the number of errors which have occurred in each interval. The output is the corresponding change in transmit queue length. Engineered EIM's are constructed by comparing an estimated changeover transient with a threshold T using a transient model modified to enforce SS7 standards. When this estimate exceeds T, a changeover will be initiated and the link will be removed from service. EIM's can be differentiated from SUERM by the fact that EIM's monitor errors over an interval while SUERM's count errored messages. EIM's offer several advantages over SUERM's, including the fact that they are safe and effective, impose uniform standards in link quality, are easily implemented, and make minimal use of real-time resources.
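
    The interval monitor described above can be sketched as a first-order recursive (leaky-integrator) filter on per-interval error counts: each interval's errors drive an estimate of the changeover transient, and the link is removed from service when the estimate crosses a threshold T. The coefficients and threshold below are illustrative, not the engineered SS7 values.

```python
def error_interval_monitor(errors_per_interval, decay=0.9, gain=1.0, threshold=5.0):
    """First-order recursive estimate of the changeover transient.

    state[k] = decay * state[k-1] + gain * errors[k]; return the first
    interval index at which the estimate exceeds the threshold (changeover
    initiated), or None if the link stays in service.
    """
    state = 0.0
    for k, e in enumerate(errors_per_interval):
        state = decay * state + gain * e
        if state > threshold:
            return k            # initiate changeover: remove link from service
    return None

# A short error burst trips the monitor; sparse isolated errors decay away.
print(error_interval_monitor([0, 0, 2, 3, 3, 0, 0]))
print(error_interval_monitor([1, 0, 0, 1, 0, 0, 1]))
```

    The decay term is what makes the monitor "effective" in the paper's sense: short-term bursts that dissipate before the threshold do not take the link down.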

  13. Using total quality management approach to improve patient safety by preventing medication error incidences*.

    Science.gov (United States)

    Yousef, Nadin; Yousef, Farah

    2017-09-04

    Whereas one of the predominant causes of medication errors is drug administration error, a previous study related to our investigations and reviews estimated that medication errors occurred in 6.7 out of 100 administered medication doses. Therefore, we aimed, using a six sigma approach, to propose a way to reduce these errors to less than 1 out of 100 administered medication doses by improving healthcare professional education and producing clearer handwritten prescriptions. The study was held in a General Government Hospital. First, we systematically studied the current medication use process. Second, we used the six sigma approach, utilizing the five-step DMAIC process (Define, Measure, Analyze, Implement, Control), to find out the real reasons behind such errors and to figure out a useful solution to avoid medication error incidences in daily healthcare professional practice. Data sheets were used as the data-collection tool and Pareto diagrams as the analysis tool. In our investigation, we identified the real cause behind administered medication errors. The Pareto diagrams used in our study showed that the fault percentage in the administration phase was 24.8%, while the percentage of errors related to the prescribing phase was 42.8%, 1.7-fold higher. This means that the mistakes in the prescribing phase, especially the poor handwritten prescriptions, whose share in this phase was 17.6%, are responsible for the consequent mistakes later in the treatment process. Therefore, we proposed in this study an effective low-cost strategy based on the behavior of healthcare workers, as guideline recommendations to be followed by physicians. This method can act as a prior caution to decrease errors in the prescribing phase, which may reduce administered medication error incidences to less than 1%. This improvement in behavior can be efficient in improving handwritten prescriptions and decreasing the consequent errors related to administrated
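
    The Pareto step of such a DMAIC analysis can be sketched as sorting error-cause counts and accumulating percentages until a cutoff (commonly 80%) isolates the "vital few" causes. The counts below are illustrative, not the study's raw data.

```python
def pareto(cause_counts, cutoff=80.0):
    """Sort causes by count and return (cause, pct, cumulative pct) rows,
    plus the 'vital few' causes that together reach the cutoff."""
    total = sum(cause_counts.values())
    rows, cum, vital = [], 0.0, []
    for cause, n in sorted(cause_counts.items(), key=lambda kv: -kv[1]):
        pct = 100 * n / total
        cum += pct
        rows.append((cause, round(pct, 1), round(cum, 1)))
        if cum - pct < cutoff:          # cause starts before the cutoff line
            vital.append(cause)
    return rows, vital

# Illustrative counts per error cause:
counts = {"poor handwriting": 176, "wrong dose": 120, "wrong time": 60,
          "omission": 34, "wrong patient": 10}
rows, vital = pareto(counts)
print(vital)
```

    In a real DMAIC cycle the "vital few" list is what the Improve phase targets first, which mirrors the study's focus on the prescribing phase.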

  14. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin

    2013-05-24

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  15. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin; Genton, Marc G.

    2013-01-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  16. Large-scale retrospective evaluation of regulated liquid chromatography-mass spectrometry bioanalysis projects using different total error approaches.

    Science.gov (United States)

    Tan, Aimin; Saffaj, Taoufiq; Musuku, Adrien; Awaiye, Kayode; Ihssane, Bouchaib; Jhilal, Fayçal; Sosse, Saad Alaoui; Trabelsi, Fethi

    2015-03-01

    The current approach in regulated LC-MS bioanalysis, which evaluates the precision and trueness of an assay separately, has long been criticized for inadequate balancing of lab-customer risks. Accordingly, different total error approaches have been proposed. The aims of this research were to evaluate the aforementioned risks in reality and the differences among four common total error approaches (β-expectation, β-content, uncertainty, and risk profile) through retrospective analysis of regulated LC-MS projects. Twenty-eight projects (14 validations and 14 productions) were randomly selected from two GLP bioanalytical laboratories, representing a wide variety of assays. The results show that the risk of accepting unacceptable batches did exist with the current approach (9% and 4% of the evaluated QC levels failed for validation and production, respectively). The fact that the risk was not widespread was only because the precision and bias of modern LC-MS assays are usually much better than the minimum regulatory requirements. Despite minor differences in magnitude, very similar accuracy profiles and/or conclusions were obtained from the four different total error approaches. High correlation was even observed in the width of bias intervals. For example, the mean width of SFSTP's β-expectation is 1.10-fold (CV=7.6%) of that of Saffaj-Ihssane's uncertainty approach, while the latter is 1.13-fold (CV=6.0%) of that of Hoffman-Kringle's β-content approach. To conclude, the risk of accepting unacceptable batches was real with the current approach, suggesting that total error approaches should be used instead. Moreover, any of the four total error approaches may be used because of their overall similarity. Lastly, the difficulties/obstacles associated with the application of total error approaches in routine analysis and their desirable future improvements are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Considering the role of time budgets on copy-error rates in material culture traditions: an experimental assessment.

    Science.gov (United States)

    Schillinger, Kerstin; Mesoudi, Alex; Lycett, Stephen J

    2014-01-01

    Ethnographic research highlights that there are constraints placed on the time available to produce cultural artefacts in differing circumstances. Given that copying error, or cultural 'mutation', can have important implications for the evolutionary processes involved in material culture change, it is essential to explore empirically how such 'time constraints' affect patterns of artefactual variation. Here, we report an experiment that systematically tests whether, and how, varying time constraints affect shape copying error rates. A total of 90 participants copied the shape of a 3D 'target handaxe form' using a standardized foam block and a plastic knife. Three distinct 'time conditions' were examined, whereupon participants had either 20, 15, or 10 minutes to complete the task. One aim of this study was to determine whether reducing production time produced a proportional increase in copy error rates across all conditions, or whether the concept of a task specific 'threshold' might be a more appropriate manner to model the effect of time budgets on copy-error rates. We found that mean levels of shape copying error increased when production time was reduced. However, there were no statistically significant differences between the 20 minute and 15 minute conditions. Significant differences were only obtained between conditions when production time was reduced to 10 minutes. Hence, our results more strongly support the hypothesis that the effects of time constraints on copying error are best modelled according to a 'threshold' effect, below which mutation rates increase more markedly. Our results also suggest that 'time budgets' available in the past will have generated varying patterns of shape variation, potentially affecting spatial and temporal trends seen in the archaeological record. Hence, 'time-budgeting' factors need to be given greater consideration in evolutionary models of material culture change.

  18. Total error shift patterns for daily CT on rails image-guided radiotherapy to the prostate bed

    Directory of Open Access Journals (Sweden)

    Mota Helvecio C

    2011-10-01

    Background: To evaluate the daily total error shift patterns in post-prostatectomy patients undergoing image guided radiotherapy (IGRT) with a diagnostic quality computed tomography (CT) on rails system. Methods: A total of 17 consecutive post-prostatectomy patients receiving adjuvant or salvage IMRT using CT-on-rails IGRT were analyzed. The prostate bed's daily total error shifts were evaluated for a total of 661 CT scans. Results: In the right-left, cranial-caudal, and posterior-anterior directions, 11.5%, 9.2%, and 6.5% of the 661 scans required no position adjustments; 75.3%, 66.1%, and 56.8% required a shift of 1-5 mm; 11.5%, 20.9%, and 31.2% required a shift of 6-10 mm; and 1.7%, 3.8%, and 5.5% required a shift of more than 10 mm, respectively. There was evidence of correlation between the x and y, x and z, and y and z axes in 3, 3, and 3 of 17 patients, respectively. Univariate (ANOVA) analysis showed that the total error pattern was random in the x, y, and z axes for 10, 5, and 2 of 17 patients, respectively, and systematic for the rest. Multivariate (MANOVA) analysis showed that the (x,y), (x,z), (y,z), and (x,y,z) total error patterns were random in 5, 1, 1, and 1 of 17 patients, respectively, and systematic for the rest. Conclusions: The overall daily total error shift pattern for these 17 patients, simulated with an empty bladder and treated with CT on rails IGRT, was predominantly systematic. Despite this, the temporal vector trends showed complex behaviors and unpredictable changes in magnitude and direction. These findings highlight the importance of using daily IGRT in post-prostatectomy patients.
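
    Shift logs like these are conventionally decomposed per axis into systematic and random components: the per-patient mean captures the systematic error and the per-patient SD the day-to-day random error, with group summaries often labelled M, Σ, and σ. A minimal sketch with made-up shift values:

```python
from statistics import mean, stdev

def shift_components(shifts_by_patient):
    """Per-axis decomposition of daily shifts (mm).

    M     : group mean of per-patient means (overall systematic error)
    Sigma : SD of the per-patient means (systematic spread)
    sigma : root-mean-square of per-patient SDs (random component)
    """
    means = [mean(s) for s in shifts_by_patient]
    sds = [stdev(s) for s in shifts_by_patient]
    M = mean(means)
    Sigma = stdev(means)
    sigma = mean(sd ** 2 for sd in sds) ** 0.5
    return M, Sigma, sigma

# Illustrative anterior-posterior shifts (mm) for three patients:
data = [[2, 3, 4, 3], [-1, 0, 1, 0], [5, 6, 4, 5]]
M, Sigma, sigma = shift_components(data)
print(M, Sigma, sigma)
```

    A large Σ relative to σ is what "predominantly systematic" means in practice: each patient's shifts cluster around a patient-specific offset rather than scattering around zero.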

  19. Estimating the annotation error rate of curated GO database sequence annotations

    Directory of Open Access Journals (Sweden)

    Brown Alfred L

    2007-05-01

    Full Text Available Abstract Background Annotations that describe the function of sequences are enormously important to researchers during laboratory investigations and when making computational inferences. However, there has been little investigation into the data quality of sequence function annotations. Here we have developed a new method of estimating the error rate of curated sequence annotations, and applied this to the Gene Ontology (GO) sequence database (GOSeqLite). This method involved artificially adding errors to sequence annotations at known rates, and used regression to model the impact on the precision of annotations based on BLAST-matched sequences. Results We estimated the error rate of curated GO sequence annotations in the GOSeqLite database (March 2006) at between 28% and 30%. Annotations made without use of sequence-similarity-based methods (non-ISS) had an estimated error rate of between 13% and 18%. Annotations made with the use of sequence similarity methodology (ISS) had an estimated error rate of 49%. Conclusion While the overall error rate is reasonably low, it would be prudent to treat all ISS annotations with caution. Electronic annotators that use ISS annotations as the basis of predictions are likely to have higher false-prediction rates, so designers of such systems should consider avoiding ISS annotations where possible, and curators should thoroughly review ISS annotations before accepting them as valid. Overall, users of curated sequence annotations from the GO database can be assured that they are using a comparatively high-quality source of information.
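    The core of the method described above — injecting errors at known rates and regressing observed precision against the injected rate — can be sketched in Python. This is a toy illustration, not the paper's implementation; the GO term identifiers and annotation set below are made up.

```python
import random

def inject_errors(annotations, rate, vocabulary, seed=0):
    """Replace each annotation term with a random wrong term with probability `rate`."""
    rng = random.Random(seed)
    return {
        seq: (rng.choice([t for t in vocabulary if t != term])
              if rng.random() < rate else term)
        for seq, term in annotations.items()
    }

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept, e.g. for the
    precision-vs-injected-error-rate regression."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx
```

    Measuring BLAST-based prediction precision on several corrupted copies of the database and fitting a line lets one extrapolate back to the precision loss attributable to the baseline (unknown) error rate.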

  20. Total elimination of sampling errors in polarization imagery obtained with integrated microgrid polarimeters.

    Science.gov (United States)

    Tyo, J Scott; LaCasse, Charles F; Ratliff, Bradley M

    2009-10-15

    Microgrid polarimeters operate by integrating a focal plane array with an array of micropolarizers. The Stokes parameters are estimated by comparing polarization measurements from pixels in a neighborhood around the point of interest. The main drawback is that the measurements used to estimate the Stokes vector are made at different locations, leading to a false polarization signature owing to instantaneous field-of-view (IFOV) errors. We demonstrate for the first time, to our knowledge, that spatially band limited polarization images can be ideally reconstructed with no IFOV error by using a linear system framework.

  1. High speed and adaptable error correction for megabit/s rate quantum key distribution.

    Science.gov (United States)

    Dixon, A R; Sato, H

    2014-12-02

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.

  2. Type-II generalized family-wise error rate formulas with application to sample size determination.

    Science.gov (United States)

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
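    The r-power concept can be illustrated with a rough Monte Carlo sketch (not the paper's exact formulas, which are implemented in the rPowerSampleSize R package): estimate the probability of rejecting at least r of m false null hypotheses under a single-step Bonferroni procedure, then search for the smallest sample size reaching a target r-power. All parameter values here are hypothetical.

```python
import math
import random
from statistics import NormalDist

def r_power_mc(m, r, effect, n, alpha=0.05, sims=20000, seed=1):
    """Monte Carlo estimate of the r-power: P(reject at least r of m false nulls)
    for m one-sided z-tests with a single-step Bonferroni critical value."""
    rng = random.Random(seed)
    crit = NormalDist().inv_cdf(1 - alpha / m)   # Bonferroni-adjusted cutoff
    hits = 0
    for _ in range(sims):
        rejected = sum(
            1 for _ in range(m)
            if rng.gauss(effect * math.sqrt(n), 1) > crit
        )
        hits += rejected >= r
    return hits / sims

def sample_size_for_r_power(m, r, effect, target=0.8, **kw):
    """Smallest per-endpoint n whose estimated r-power reaches `target`."""
    n = 1
    while r_power_mc(m, r, effect, n, **kw) < target:
        n += 1
    return n
```

    The paper's closed-form power formulas serve the same purpose as this simulation but without the Monte Carlo noise and at far lower computational cost.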

  3. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    Energy Technology Data Exchange (ETDEWEB)

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A. [Canis Lupus LLC and Department of Human Oncology, University of Wisconsin, Merrimac, Wisconsin 53561 (United States); Department of Medical Physics, University of Wisconsin, Madison, Wisconsin 53705 (United States); Departments of Human Oncology, Medical Physics, and Biomedical Engineering, University of Wisconsin, Madison, Wisconsin 53792 (United States)

    2011-02-15

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives, or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa.

  4. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    International Nuclear Information System (INIS)

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-01-01

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives, or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa. Conclusions: There is a lack of correlation between per-beam, planar IMRT QA passing rates and clinically relevant, anatomy-based dose errors.

  5. A novel multitemporal insar model for joint estimation of deformation rates and orbital errors

    KAUST Repository

    Zhang, Lei

    2014-06-01

    Orbital errors, characterized typically as long-wavelength artifacts, commonly exist in interferometric synthetic aperture radar (InSAR) imagery as a result of inaccurate determination of the sensor state vector. Orbital errors degrade the precision of multitemporal InSAR products (i.e., ground deformation). Although research on orbital error reduction has been ongoing for nearly two decades and several algorithms for reducing the effect of the errors already exist, the errors cannot always be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatio-temporal characteristics of the two types of signals. The proposed model is able to isolate a long-wavelength ground motion signal from the orbital error even when the two types of signals exhibit similar spatial patterns. The proposed algorithm is efficient and requires no ground control points. In addition, the method is built upon the wrapped phases of interferograms, eliminating the need for phase unwrapping. The performance of the proposed model is validated using both simulated and real data sets. Demo code for the proposed model is also provided for reference. © 2013 IEEE.

  6. Voice recognition versus transcriptionist: error rates and productivity in MRI reporting.

    Science.gov (United States)

    Strahan, Rodney H; Schneider-Kolsky, Michal E

    2010-10-01

    Despite the frequent introduction of voice recognition (VR) into radiology departments, little evidence still exists about its impact on workflow, error rates and costs. We designed a study to compare typographical errors, turnaround times (TAT) from reported to verified and productivity for VR-generated reports versus transcriptionist-generated reports in MRI. Fifty MRI reports generated by VR and 50 finalized MRI reports generated by the transcriptionist, of two radiologists, were sampled retrospectively. Two hundred reports were scrutinised for typographical errors and the average TAT from dictated to final approval. To assess productivity, the average MRI reports per hour for one of the radiologists was calculated using data from extra weekend reporting sessions. Forty-two % and 30% of the finalized VR reports for each of the radiologists investigated contained errors. Only 6% and 8% of the transcriptionist-generated reports contained errors. The average TAT for VR was 0 h, and for the transcriptionist reports TAT was 89 and 38.9 h. Productivity was calculated at 8.6 MRI reports per hour using VR and 13.3 MRI reports using the transcriptionist, representing a 55% increase in productivity. Our results demonstrate that VR is not an effective method of generating reports for MRI. Ideally, we would have the report error rate and productivity of a transcriptionist and the TAT of VR. © 2010 The Authors. Journal of Medical Imaging and Radiation Oncology © 2010 The Royal Australian and New Zealand College of Radiologists.

  7. Voice recognition versus transcriptionist: error rates and productivity in MRI reporting

    International Nuclear Information System (INIS)

    Strahan, Rodney H.; Schneider-Kolsky, Michal E.

    2010-01-01

    Full text: Purpose: Despite the frequent introduction of voice recognition (VR) into radiology departments, little evidence still exists about its impact on workflow, error rates and costs. We designed a study to compare typographical errors, turnaround times (TAT) from reported to verified and productivity for VR-generated reports versus transcriptionist-generated reports in MRI. Methods: Fifty MRI reports generated by VR and 50 finalised MRI reports generated by the transcriptionist, of two radiologists, were sampled retrospectively. Two hundred reports were scrutinised for typographical errors and the average TAT from dictated to final approval. To assess productivity, the average MRI reports per hour for one of the radiologists was calculated using data from extra weekend reporting sessions. Results: Forty-two % and 30% of the finalised VR reports for each of the radiologists investigated contained errors. Only 6% and 8% of the transcriptionist-generated reports contained errors. The average TAT for VR was 0 h, and for the transcriptionist reports TAT was 89 and 38.9 h. Productivity was calculated at 8.6 MRI reports per hour using VR and 13.3 MRI reports using the transcriptionist, representing a 55% increase in productivity. Conclusion: Our results demonstrate that VR is not an effective method of generating reports for MRI. Ideally, we would have the report error rate and productivity of a transcriptionist and the TAT of VR.

  8. Effect of survey design and catch rate estimation on total catch estimates in Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2012-01-01

    Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
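    The two catch-rate estimators compared above are simple to state. A minimal sketch in Python, using made-up interview data rather than the study's, is:

```python
def ratio_of_means(catches, hours):
    """ROM estimator: total catch divided by total effort over all interviews."""
    return sum(catches) / sum(hours)

def mean_of_ratios(catches, hours, min_hours=0.0):
    """MOR estimator: mean of the individual catch rates, optionally excluding
    short-duration trips (e.g. min_hours=0.5 drops trips of 0.5 h or less)."""
    rates = [c / h for c, h in zip(catches, hours) if h > min_hours]
    return sum(rates) / len(rates)

def total_catch(catch_rate, total_effort_hours):
    """Total catch estimate: mean catch rate times estimated total angler effort."""
    return catch_rate * total_effort_hours
```

    With interviews of (catch, hours) = (2, 4), (0, 1), (6, 5), ROM gives 8/10 = 0.8 fish/h while MOR gives (0.5 + 0.0 + 1.2)/3 ≈ 0.57 fish/h; the two diverge most when short trips with extreme rates are common, which is one source of the bias the study examines.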

  9. Invariance of the bit error rate in the ancilla-assisted homodyne detection

    International Nuclear Information System (INIS)

    Yoshida, Yuhsuke; Takeoka, Masahiro; Sasaki, Masahide

    2010-01-01

    We investigate the minimum achievable bit error rate of the discrimination of binary coherent states with the help of arbitrary ancillary states. We adopt homodyne measurement with a common phase of the local oscillator and classical feedforward control. After one ancillary state is measured, its outcome is referred to the preparation of the next ancillary state and the tuning of the next mixing with the signal. It is shown that the minimum bit error rate of the system is invariant under the following operations: feedforward control, deformations, and introduction of any ancillary state. We also discuss the possible generalization of the homodyne detection scheme.

  10. Analytical expression for the bit error rate of cascaded all-optical regenerators

    DEFF Research Database (Denmark)

    Mørk, Jesper; Öhman, Filip; Bischoff, S.

    2003-01-01

    We derive an approximate analytical expression for the bit error rate of cascaded fiber links containing all-optical 2R-regenerators. A general analysis of the interplay between noise due to amplification and the degree of reshaping (nonlinearity) of the regenerator is performed.

  11. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    Science.gov (United States)

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. 
Here, case-specific probabilities of undetected errors are needed.

  12. Error rates of a full-duplex system over EGK fading channels subject to Laplacian interference

    KAUST Repository

    Soury, Hamza

    2017-07-31

    This paper develops a mathematical paradigm to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). In particular, we study the dominant intra-cell interferer problem that arises between HD users scheduled on the same FD channel. The distribution of the dominant interference is first characterized via its distribution function, which is derived in closed form. Assuming Nakagami-m fading, the probability of error for different modulation schemes is studied and a unified closed-form expression for the average symbol error rate is derived. To this end, we show the effective downlink throughput gain, harvested by employing FD communication at a BS that serves HD users, as a function of the signal-to-interference ratio when compared to an idealized HD interference- and noise-free BS operation.

  13. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates.

    Science.gov (United States)

    Fottrell, Edward; Byass, Peter; Berhane, Yemane

    2008-03-25

    As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. The low sensitivity of parameter estimates and regression analyses to significant amounts of random error demonstrates the robustness of such surveillance data.
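    The robustness check described here — deliberately corrupting fields at a known rate and re-estimating the quantity of interest — can be sketched generically. The field names, values, and rates below are hypothetical, not those of the Butajira dataset:

```python
import random

def perturb_records(records, fields, error_rate, alternatives, seed=0):
    """Return a copy of `records` in which each listed field is independently
    replaced by a random wrong value with probability `error_rate`."""
    rng = random.Random(seed)
    corrupted = []
    for rec in records:
        rec = dict(rec)
        for field in fields:
            if rng.random() < error_rate:
                rec[field] = rng.choice(
                    [v for v in alternatives[field] if v != rec[field]])
        corrupted.append(rec)
    return corrupted

def mortality_rate(records):
    """Crude mortality rate in deaths per 1,000 person-years."""
    deaths = sum(r["died"] for r in records)
    person_years = sum(r["person_years"] for r in records)
    return 1000.0 * deaths / person_years
```

    Comparing `mortality_rate` (or a regression model) on the original and corrupted versions, over a range of error rates, shows how far a given level of random error moves the estimates — the comparison underlying the study's 'gold standard' design.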

  14. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates

    Directory of Open Access Journals (Sweden)

    Berhane Yemane

    2008-03-01

    Full Text Available Abstract Background As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. Methods This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. Results The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. Conclusion The low sensitivity of parameter estimates and regression analyses to significant amounts of random error demonstrates the robustness of such surveillance data.

  15. Benefits and risks of using smart pumps to reduce medication error rates: a systematic review.

    Science.gov (United States)

    Ohashi, Kumiko; Dalleur, Olivia; Dykes, Patricia C; Bates, David W

    2014-12-01

    Smart infusion pumps have been introduced to prevent medication errors and have been widely adopted nationally in the USA, though they are not always used in Europe or other regions. Despite widespread usage of smart pumps, intravenous medication errors have not been fully eliminated. Through a systematic review of recent studies and reports regarding smart pump implementation and use, we aimed to identify the impact of smart pumps on error reduction and on the complex process of medication administration, and strategies to maximize the benefits of smart pumps. The medical literature related to the effects of smart pumps for improving patient safety was searched in PUBMED, EMBASE, and the Cochrane Central Register of Controlled Trials (CENTRAL) (2000-2014) and relevant papers were selected by two researchers. After the literature search, 231 papers were identified and the full texts of 138 articles were assessed for eligibility. Of these, 22 were included after removal of papers that did not meet the inclusion criteria. We assessed both the benefits and negative effects of smart pumps from these studies. One of the benefits of using smart pumps was intercepting errors such as the wrong rate, wrong dose, and pump setting errors. Other benefits include reduction of adverse drug event rates, practice improvements, and cost effectiveness. Meanwhile, the current issues or negative effects related to using smart pumps were lower compliance rates of using smart pumps, the overriding of soft alerts, non-intercepted errors, or the possibility of using the wrong drug library. The literature suggests that smart pumps reduce but do not eliminate programming errors. Although the hard limits of a drug library play a main role in intercepting medication errors, soft limits were still not as effective as hard limits because of high override rates. Compliance in using smart pumps is key towards effectively preventing errors. Opportunities for improvement include upgrading drug libraries.

  16. Kurzweil Reading Machine: A Partial Evaluation of Its Optical Character Recognition Error Rate.

    Science.gov (United States)

    Goodrich, Gregory L.; And Others

    1979-01-01

    A study designed to assess the ability of the Kurzweil reading machine (a speech reading device for the visually handicapped) to read three different type styles produced by five different means indicated that the machines tested had different error rates depending upon the means of producing the copy and upon the type style used. (Author/CL)

  17. A novel multitemporal insar model for joint estimation of deformation rates and orbital errors

    KAUST Repository

    Zhang, Lei; Ding, Xiaoli; Lu, Zhong; Jung, Hyungsup; Hu, Jun; Feng, Guangcai

    2014-01-01

    be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatio-temporal characteristics of the two types of signals. The proposed model is able to isolate a long

  18. Minimum Symbol Error Rate Detection in Single-Input Multiple-Output Channels with Markov Noise

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.

    2005-01-01

    Minimum symbol error rate detection in Single-Input Multiple- Output(SIMO) channels with Markov noise is presented. The special case of zero-mean Gauss-Markov noise is examined closer as it only requires knowledge of the second-order moments. In this special case, it is shown that optimal detection...

  19. Error rates of a full-duplex system over EGK fading channels subject to Laplacian interference

    KAUST Repository

    Soury, Hamza; Elsawy, Hesham; Alouini, Mohamed-Slim

    2017-01-01

    modulation schemes is studied and a unified closed-form expression for the average symbol error rate is derived. To this end, we show the effective downlink throughput gain, harvested by employing FD communication at a BS that serves HD users, as a function

  20. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for "worst case" scenarios, that is, sample size adaptation rules at the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
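    The inflation mechanism can be illustrated with a deliberately naive two-stage simulation — a toy one-sample sketch, not the paper's multiarmed Dunnett setting: the second-stage sample size is chosen from the interim z-value, yet the final pooled z-test pretends the total sample size was fixed in advance.

```python
import math
import random
from statistics import NormalDist

def naive_adaptive_type1(n1, n2_rule, alpha=0.05, sims=30000, seed=2):
    """Simulate one-sample H0 data (mean 0, sd 1). Stage 2's size n2 = n2_rule(z1)
    may depend on the interim statistic z1, but the final test pools both stages
    as if n1 + n2 had been fixed. Returns the realized type 1 error rate."""
    rng = random.Random(seed)
    crit = NormalDist().inv_cdf(1 - alpha)
    rejections = 0
    for _ in range(sims):
        s1 = sum(rng.gauss(0, 1) for _ in range(n1))
        z1 = s1 / math.sqrt(n1)
        n2 = n2_rule(z1)                       # data-driven sample size modification
        s2 = sum(rng.gauss(0, 1) for _ in range(n2))
        z = (s1 + s2) / math.sqrt(n1 + n2)
        rejections += z > crit
    return rejections / sims
```

    A fixed rule (`lambda z1: 50`) keeps the realized rate near alpha, while a rule that shrinks stage 2 after a promising interim result (e.g. `lambda z1: 10 if z1 > 0 else 200`) pushes it above alpha; searching over such rules for the largest conditional type 1 error is the "worst case" idea the paper formalizes.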

  1. Impact of automated dispensing cabinets on medication selection and preparation error rates in an emergency department: a prospective and direct observational before-and-after study.

    Science.gov (United States)

    Fanning, Laura; Jones, Nick; Manias, Elizabeth

    2016-04-01

The implementation of automated dispensing cabinets (ADCs) in healthcare facilities appears to be increasing, in particular within Australian hospital emergency departments (EDs). While investment in ADCs is on the increase, no studies have specifically investigated the impact of ADCs on medication selection and preparation error rates in EDs. Our aim was to assess the impact of ADCs on medication selection and preparation error rates in the ED of a tertiary teaching hospital. This was a pre-intervention and post-intervention study involving direct observations of nurses completing medication selection and preparation activities before and after the implementation of ADCs in the original and new emergency departments within a 377-bed tertiary teaching hospital in Australia. Medication selection and preparation error rates were calculated and compared between these two periods. Secondary end points included the impact on medication error type and severity. A total of 2087 medication selection and preparation events were observed among 808 patients pre and post intervention. Implementation of ADCs in the new ED resulted in a 64.7% reduction in medication selection and preparation errors (1.96% versus 0.69%, P = 0.017). All medication error types were reduced in the post-intervention study period. There was no significant impact on medication error severity, as all errors detected were categorised as minor. The implementation of ADCs could reduce medication selection and preparation errors and improve medication safety in an ED setting. © 2015 John Wiley & Sons, Ltd.

  2. Case-related factors affecting cutting errors of the proximal tibia in total knee arthroplasty assessed by computer navigation.

    Science.gov (United States)

    Tsukeoka, Tadashi; Tsuneizumi, Yoshikazu; Yoshino, Kensuke; Suzuki, Mashiko

    2018-05-01

The aim of this study was to determine factors that contribute to bone cutting errors of conventional instrumentation for tibial resection in total knee arthroplasty (TKA) as assessed by an image-free navigation system. The hypothesis is that preoperative varus alignment is a significant contributory factor to tibial bone cutting errors. This was a prospective study of a consecutive series of 72 TKAs. The tibial first-cut error with reference to the planned cutting plane in both the coronal and sagittal planes was measured by an image-free computer navigation system. Multiple regression models were developed with the amount of tibial cutting error in the coronal and sagittal planes as dependent variables and sex, age, disease, height, body mass index, preoperative alignment, patellar height (Insall-Salvati ratio) and preoperative flexion angle as independent variables. Multiple regression analysis showed that sex (male gender) (R = 0.25, p = 0.047) and preoperative varus alignment (R = 0.42, p = 0.001) were positively associated with varus tibial cutting errors in the coronal plane. In the sagittal plane, none of the independent variables was significant. When performing TKA in knees with varus deformity, the bone cutting surface should be carefully checked to avoid varus alignment. The results of this study suggest technical considerations that can help a surgeon achieve more accurate component placement. Level of evidence: IV.

  3. Competence in Streptococcus pneumoniae is regulated by the rate of ribosomal decoding errors.

    Science.gov (United States)

    Stevens, Kathleen E; Chang, Diana; Zwack, Erin E; Sebert, Michael E

    2011-01-01

Competence for genetic transformation in Streptococcus pneumoniae develops in response to accumulation of a secreted peptide pheromone and was one of the initial examples of bacterial quorum sensing. Activation of this signaling system induces not only expression of the proteins required for transformation but also the production of cellular chaperones and proteases. We have shown here that activity of this pathway is sensitively responsive to changes in the accuracy of protein synthesis that are triggered by either mutations in ribosomal proteins or exposure to antibiotics. Increasing the error rate during ribosomal decoding promoted competence, while reducing the error rate below the baseline level repressed the development of both spontaneous and antibiotic-induced competence. This pattern of regulation was promoted by the bacterial HtrA serine protease. Analysis of strains with the htrA (S234A) catalytic site mutation showed that the proteolytic activity of HtrA selectively repressed competence when translational fidelity was high but not when accuracy was low. These findings redefine the pneumococcal competence pathway as a response to errors during protein synthesis. This response has the capacity to address the immediate challenge of misfolded proteins through production of chaperones and proteases and may also be able to address, through genetic exchange, upstream coding errors that cause intrinsic protein folding defects. The competence pathway may thereby represent a strategy for dealing with lesions that impair proper protein coding and for maintaining the coding integrity of the genome. The signaling pathway that governs competence in the human respiratory tract pathogen Streptococcus pneumoniae regulates both genetic transformation and the production of cellular chaperones and proteases. The current study shows that this pathway is sensitively controlled in response to changes in the accuracy of protein synthesis.

  4. Performance Analysis for Bit Error Rate of DS- CDMA Sensor Network Systems with Source Coding

    Directory of Open Access Journals (Sweden)

    Haider M. AlSabbagh

    2012-03-01

Full Text Available Minimum energy (ME) coding combined with a DS-CDMA wireless sensor network is analyzed in order to reduce the energy consumed and the multiple access interference (MAI) as the number of users (receivers) grows. ME coding exploits redundant bits to save power over the RF link with On-Off Keying modulation. Relations are presented and discussed for several levels of errors expected in the employed channel, in terms of the bit error rate and the SNR for a given number of users (receivers).
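The energy saving behind ME coding under On-Off Keying can be sketched numerically: with OOK only the '1' bits cost transmit energy, so lengthening the codewords beyond log2(M) bits and assigning the lightest codewords to the most frequent symbols lowers the expected number of transmitted ones. The symbol distribution and code lengths below are illustrative assumptions, not values from the paper.

```python
from itertools import product

def avg_ones(codewords, probs):
    """Expected number of transmitted '1' bits per source symbol (with OOK,
    this is proportional to average transmit energy)."""
    return sum(p * cw.count('1') for cw, p in zip(codewords, probs))

def me_codebook(n_symbols, length):
    """Minimum-energy codebook: the n_symbols length-`length` binary words
    of lowest Hamming weight, lightest first, so the lightest codeword goes
    to the most probable symbol."""
    words = [''.join(bits) for bits in product('01', repeat=length)]
    words.sort(key=lambda w: w.count('1'))  # stable: lex order within weight
    return words[:n_symbols]

# 8 source symbols, probabilities listed most-frequent first (illustrative)
probs = [0.40, 0.20, 0.15, 0.10, 0.07, 0.04, 0.03, 0.01]
plain = [format(i, '03b') for i in range(8)]  # ordinary 3-bit block code
me = me_codebook(8, 5)                        # redundant 5-bit ME code
```

For this skewed distribution the ME code transmits about 0.64 ones per symbol versus 0.79 for the plain 3-bit code, at the cost of longer codewords — the redundancy-for-energy trade the abstract describes.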

  5. FPGA-based Bit-Error-Rate Tester for SEU-hardened Optical Links

    CERN Document Server

    Detraz, S; Moreira, P; Papadopoulos, S; Papakonstantinou, I; Seif El Nasr, S; Sigaud, C; Soos, C; Stejskal, P; Troska, J; Versmissen, H

    2009-01-01

    The next generation of optical links for future High-Energy Physics experiments will require components qualified for use in radiation-hard environments. To cope with radiation induced single-event upsets, the physical layer protocol will include Forward Error Correction (FEC). Bit-Error-Rate (BER) testing is a widely used method to characterize digital transmission systems. In order to measure the BER with and without the proposed FEC, simultaneously on several devices, a multi-channel BER tester has been developed. This paper describes the architecture of the tester, its implementation in a Xilinx Virtex-5 FPGA device and discusses the experimental results.

  6. A minimum bit error-rate detector for amplify and forward relaying systems

    KAUST Repository

    Ahmed, Qasim Zeeshan; Alouini, Mohamed-Slim; Aissa, Sonia

    2012-01-01

In this paper, a new detector is proposed for amplify-and-forward (AF) relaying systems communicating with the assistance of L relays. The major goal of this detector is to improve the bit error rate (BER) performance of the system. The complexity of the system is further reduced by implementing this detector adaptively. The proposed detector is free from channel estimation. Our results demonstrate that the proposed detector is capable of achieving a gain of more than 1 dB at a BER of 10^-5 as compared to the conventional minimum mean square error detector when communicating over a correlated Rayleigh fading channel. © 2012 IEEE.

  8. Linear transceiver design for nonorthogonal amplify-and-forward protocol using a bit error rate criterion

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2014-04-01

The ever-growing demand for higher data rates can now be addressed by exploiting cooperative diversity. This form of diversity has become a fundamental technique for achieving spatial diversity by exploiting the presence of idle users in the network. This has led to new challenges in terms of designing new protocols and detectors for cooperative communications. Among the various amplify-and-forward (AF) protocols, the half-duplex non-orthogonal amplify-and-forward (NAF) protocol is superior to other AF schemes in terms of error performance and capacity. However, this superiority is achieved at the cost of higher receiver complexity. Furthermore, in order to exploit the full diversity of the system an optimal precoder is required. In this paper, an optimal joint linear transceiver is proposed for the NAF protocol. This transceiver operates on the principle of minimum bit error rate (BER), and is referred to as the joint bit error rate (JBER) detector. The BER performance of the JBER detector is superior to that of existing linear detectors such as channel inversion, maximal ratio combining, the biased maximum likelihood detector, and the minimum mean square error detector. The proposed transceiver also outperforms previous precoders designed for the NAF protocol. © 2002-2012 IEEE.

  9. The assessment of cognitive errors using an observer-rated method.

    Science.gov (United States)

    Drapeau, Martin

    2014-01-01

    Cognitive Errors (CEs) are a key construct in cognitive behavioral therapy (CBT). Integral to CBT is that individuals with depression process information in an overly negative or biased way, and that this bias is reflected in specific depressotypic CEs which are distinct from normal information processing. Despite the importance of this construct in CBT theory, practice, and research, few methods are available to researchers and clinicians to reliably identify CEs as they occur. In this paper, the author presents a rating system, the Cognitive Error Rating Scale, which can be used by trained observers to identify and assess the cognitive errors of patients or research participants in vivo, i.e., as they are used or reported by the patients or participants. The method is described, including some of the more important rating conventions to be considered when using the method. This paper also describes the 15 cognitive errors assessed, and the different summary scores, including valence of the CEs, that can be derived from the method.

  10. Parental Cognitive Errors Mediate Parental Psychopathology and Ratings of Child Inattention.

    Science.gov (United States)

    Haack, Lauren M; Jiang, Yuan; Delucchi, Kevin; Kaiser, Nina; McBurnett, Keith; Hinshaw, Stephen; Pfiffner, Linda

    2017-09-01

    We investigate the Depression-Distortion Hypothesis in a sample of 199 school-aged children with ADHD-Predominantly Inattentive presentation (ADHD-I) by examining relations and cross-sectional mediational pathways between parental characteristics (i.e., levels of parental depressive and ADHD symptoms) and parental ratings of child problem behavior (inattention, sluggish cognitive tempo, and functional impairment) via parental cognitive errors. Results demonstrated a positive association between parental factors and parental ratings of inattention, as well as a mediational pathway between parental depressive and ADHD symptoms and parental ratings of inattention via parental cognitive errors. Specifically, higher levels of parental depressive and ADHD symptoms predicted higher levels of cognitive errors, which in turn predicted higher parental ratings of inattention. Findings provide evidence for core tenets of the Depression-Distortion Hypothesis, which state that parents with high rates of psychopathology hold negative schemas for their child's behavior and subsequently, report their child's behavior as more severe. © 2016 Family Process Institute.

  11. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    Directory of Open Access Journals (Sweden)

    Wei He

Full Text Available Evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 (error/particle/cm^2), while the MTTF is approximately 110.7 h.

  12. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

Full Text Available Abstract Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low-coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite-sites model that focuses on only the internal branches of the sample genealogy, where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent full maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design along with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum-likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
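The singleton-exclusion idea translates directly into code. Under the infinite-sites model, the Watterson denominator is the harmonic number a_n = Σ_{i=1}^{n-1} 1/i; dropping sites whose rarer base occurs once removes the i = 1 and i = n-1 frequency classes, so the corrected denominator is a_n - 1 - 1/(n-1). The sketch below is a simplified version of that correction only, not the authors' full maximum-likelihood model.

```python
def watterson_theta(seqs, exclude_singletons=False):
    """Watterson estimator of the population mutation rate theta from an
    alignment (a list of equal-length sequence strings). With
    exclude_singletons=True, sites whose rarer base occurs in exactly one
    sequence are dropped as likely sequencing errors, and the denominator
    drops the harmonic terms those frequency classes would contribute."""
    n = len(seqs)
    a_n = sum(1.0 / i for i in range(1, n))
    s = 0
    for col in zip(*seqs):                    # walk alignment columns
        counts = [col.count(b) for b in set(col)]
        if len(counts) < 2:
            continue                          # monomorphic site
        if exclude_singletons and min(counts) == 1:
            continue                          # singleton: possible error
        s += 1
    if exclude_singletons:
        # singletons cover the i = 1 and i = n - 1 frequency classes
        return s / (a_n - 1.0 - 1.0 / (n - 1))
    return s / a_n
```

On a toy alignment with one shared polymorphism and one singleton, the error-robust variant counts only the shared site but compensates with the smaller denominator, so it remains an unbiased-style estimate of θ rather than simply discarding information.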

  13. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    Directory of Open Access Journals (Sweden)

    Jeffrey R Kugelman

Full Text Available Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4-5) of all compared methods.

  14. 38 CFR 4.15 - Total disability ratings.

    Science.gov (United States)

    2010-07-01

    ... must be given to unusual physical or mental effects in individual cases, to peculiar effects of occupational activities, to defects in physical or mental endowment preventing the usual amount of success in overcoming the handicap of disability and to the effect of combinations of disability. Total disability will...

  15. Error-Rate Bounds for Coded PPM on a Poisson Channel

    Science.gov (United States)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.

  16. Symbol error rate performance evaluation of the LM37 multimegabit telemetry modulator-demodulator unit

    Science.gov (United States)

    Malek, H.

    1981-01-01

The LM37 multimegabit telemetry modulator-demodulator unit was tested for evaluation of its symbol error rate (SER) performance. Using an automated test setup, the SER tests were carried out at various symbol rates and signal-to-noise ratios (SNR), ranging from +10 to -10 dB. With the aid of a specially designed error detector and a stabilized signal and noise summation unit, measurement of the SER at low SNR was possible. The results of the tests show that at symbol rates below 20 megasymbols per second (MS/s) and input SNR above -6 dB, the SER performance of the modem is within the specified 0.65 to 1.5 dB of the theoretical error curve. At symbol rates above 20 MS/s, the specification is met at SNRs down to -2 dB. The results of the SER tests are presented with the description of the test setup and the measurement procedure.

  17. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.

    2012-12-06

In this paper, we derive an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate the system performance under very realistic Nakagami-m fading and additive white Gaussian noise channel conditions. The accuracy of the obtained results is verified by running the simulation with a 95% confidence interval. We see that as the number of simulation runs N increases, the simulated error rate becomes closer to the actual one and the confidence interval narrows. Hence our results are expected to be of significant practical use for such scenarios. © 2012 Springer Science+Business Media New York.
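The confidence-interval bookkeeping can be sketched for the simplest case — coherent BPSK over plain AWGN with perfect phase recovery, rather than the paper's Nakagami-m channel with imperfect phase estimation: simulate N bits, count errors, and attach a normal-approximation 95% interval to the estimated BER.

```python
import math
import random

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def simulate_ber(ebno_db, n_bits=200_000, seed=7):
    """Monte Carlo BER of coherent BPSK over AWGN, with a 95% confidence
    interval from the normal approximation to the binomial error count."""
    rng = random.Random(seed)
    ebno = 10 ** (ebno_db / 10)
    sigma = math.sqrt(1 / (2 * ebno))   # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = rng.choice((-1.0, 1.0))
        r = bit + rng.gauss(0.0, sigma)
        errors += (r > 0) != (bit > 0)  # sign decision
    p = errors / n_bits
    half = 1.96 * math.sqrt(p * (1 - p) / n_bits)
    return p, (p - half, p + half)

p, (lo, hi) = simulate_ber(4.0)
theory = q_func(math.sqrt(2 * 10 ** 0.4))  # exact BPSK BER at Eb/N0 = 4 dB
```

As N grows the interval shrinks around the true error rate, mirroring the convergence behaviour the abstract describes; for very low BERs, N must be large enough that the interval is meaningful at all.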

  18. Novel relations between the ergodic capacity and the average bit error rate

    KAUST Repository

    Yilmaz, Ferkan

    2011-11-01

Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems. Recent research has shown the strong impact of these two performance indicators on the design and implementation of wireless technologies. However, to the best of our knowledge, direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.

  19. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    Science.gov (United States)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

An accurate approach to computing the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered, together with a simple RAKE receiver structure. Based on the bit energy distribution, this approach gives accurate results with a low computational cost compared to other computation methods in the literature. Perfect estimation of the channel coefficients with the associated delays and chaos synchronization is assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which point out the accuracy of our approach.

  20. The type I error rate for in vivo Comet assay data when the hierarchical structure is disregarded

    DEFF Research Database (Denmark)

    Hansen, Merete Kjær; Kulahci, Murat

the type I error rate is greater than the nominal α of 0.05. Closed-form expressions based on scaled F-distributions using the Welch-Satterthwaite approximation are provided to show how the type I error rate is affected. With this study we hope to motivate researchers to be more precise regarding..., and this imposes considerable impact on the type I error rate. This study aims to demonstrate the implications that result from disregarding the hierarchical structure. Different combinations of the factor levels as they appear in a literature study give type I error rates up to 0.51, and for all combinations
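The effect of disregarding the hierarchy is easy to reproduce by simulation. The sketch below is a generic nested-data example with assumed group sizes and variance components, not the study's actual Comet-assay settings: cells are nested in animals, treatment is assigned per animal, and the naive analysis runs a t-test over individual cells as if they were independent.

```python
import math
import random

def pooled_t(x, y):
    """Pooled two-sample t statistic."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / (sp * math.sqrt(1 / nx + 1 / ny))

def nested_type1_rates(n_sims=4000, animals=5, cells=50,
                       sd_animal=1.0, sd_cell=1.0, seed=3):
    """Type I error rates under H0 for two analyses of nested data: a naive
    cell-level t-test ignoring the animal level, and a t-test on animal
    means that respects the hierarchy."""
    rng = random.Random(seed)
    naive = correct = 0
    for _ in range(n_sims):
        cell_data, mean_data = [], []
        for _group in range(2):          # two treatment groups, no true effect
            cells_all, means = [], []
            for _animal in range(animals):
                mu = rng.gauss(0.0, sd_animal)  # random animal effect
                vals = [mu + rng.gauss(0.0, sd_cell) for _ in range(cells)]
                cells_all.extend(vals)
                means.append(sum(vals) / cells)
            cell_data.append(cells_all)
            mean_data.append(means)
        # naive: all cells treated as independent, df ~ 498, crit ~ 1.96
        naive += abs(pooled_t(cell_data[0], cell_data[1])) > 1.96
        # correct: animal means, df = 8, two-sided 5% critical value 2.306
        correct += abs(pooled_t(mean_data[0], mean_data[1])) > 2.306
    return naive / n_sims, correct / n_sims
```

With a substantial between-animal variance component, the naive cell-level test rejects far more often than 5% under the null, while the analysis on animal means holds the nominal level — the same qualitative pattern the study reports.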

  1. Rate estimation in partially observed Markov jump processes with measurement errors

    OpenAIRE

    Amrein, Michael; Kuensch, Hans R.

    2010-01-01

    We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising for example in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced, which allow sampling from the posterior distribution of t...

  2. Optimal classifier selection and negative bias in error rate estimation: an empirical study on high-dimensional prediction

    Directory of Open Access Journals (Sweden)

    Boulesteix Anne-Laure

    2009-12-01

Full Text Available Abstract Background In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g. microarray data), since such analyses are particularly exposed to this kind of bias. Methods In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure. Results We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from the parameter tuning (including gene selection parameters as a special case) and the bias resulting from the choice of the classification method are examined both separately and jointly. Conclusions The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively. We conclude that the strategy to present only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and suggest alternative approaches for properly reporting classification accuracy.
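The selection bias can be demonstrated in a few lines. In the stylized sketch below (sample size and classifier count chosen to echo the abstract, everything else assumed), the labels are pure noise and every "classifier" simply guesses, so each one's true error rate is 50% — yet reporting only the best of many still produces an apparently impressive error rate.

```python
import random

def min_error_over_classifiers(n_samples=50, n_classifiers=124, seed=11):
    """Minimum observed error rate over many random-guessing classifiers
    evaluated on the same noise-label test set. Each classifier's true
    error is 0.5; the minimum is optimistically biased downward."""
    rng = random.Random(seed)
    labels = [rng.randint(0, 1) for _ in range(n_samples)]
    best = 1.0
    for _ in range(n_classifiers):
        preds = [rng.randint(0, 1) for _ in range(n_samples)]
        err = sum(p != y for p, y in zip(preds, labels)) / n_samples
        best = min(best, err)
    return best
```

With roughly 50 test cases and 124 candidate classifiers, the minimum observed error typically lands near 30% — close to the 31% median reported above for permuted, uninformative predictors — even though no model has any real predictive ability.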

  3. Improving Bayesian credibility intervals for classifier error rates using maximum entropy empirical priors.

    Science.gov (United States)

    Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders

    2010-06-01

Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently many promising approaches for determining an upper bound for the error rate of a single classifier have been reported, but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. The conventional Bayesian CI becomes unacceptably large in real-world applications where the test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the result on the test examples. In other words, no information at all is provided by the uniform prior density distribution employed, which reflects a complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to investigate a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle using empirical results from a few designs and tests using non-overlapping sets of examples. Experimental results show that ME-based priors improve the CIs when applied to four quite different simulated and two real-world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier. Copyright 2010 Elsevier B.V. All rights reserved.
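The conventional interval the abstract starts from is straightforward to compute: under a uniform Beta(1, 1) prior, observing k errors in n holdout tests gives a Beta(k + 1, n - k + 1) posterior for the unknown error rate. The sketch below estimates its credibility interval by Monte Carlo with the standard-library Beta sampler (the 5-errors-in-40 example is illustrative, not from the paper).

```python
import random

def bayes_ci(errors, n_tests, level=0.95, draws=100_000, seed=5):
    """Bayesian credibility interval for an unknown error rate under a
    uniform Beta(1, 1) prior: the posterior after `errors` mistakes in
    `n_tests` trials is Beta(errors + 1, n_tests - errors + 1). Quantiles
    are estimated by Monte Carlo using the stdlib Beta sampler."""
    rng = random.Random(seed)
    samples = sorted(rng.betavariate(errors + 1, n_tests - errors + 1)
                     for _ in range(draws))
    tail = (1 - level) / 2
    return samples[int(tail * draws)], samples[int((1 - tail) * draws) - 1]

lo, hi = bayes_ci(5, 40)   # e.g. 5 errors on a 40-example holdout set
```

With only 40 test examples the 95% interval spans roughly twenty percentage points — exactly the "unacceptably large" behaviour for small test sets that motivates replacing the uniform prior with an empirically grounded ME prior.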

  4. Comparing Response Times and Error Rates in a Simultaneous Masking Paradigm

    Directory of Open Access Journals (Sweden)

    F Hermens

    2014-08-01

Full Text Available In simultaneous masking, performance on a foveally presented target is impaired by one or more flanking elements. Previous studies have demonstrated strong effects of the grouping of the target and the flankers on the strength of masking (e.g., Malania, Herzog & Westheimer, 2007). These studies have predominantly examined performance by measuring offset discrimination thresholds, and it is therefore unclear whether other measures of performance provide similar outcomes. A recent study, which examined the role of grouping on error rates and response times in a speeded vernier offset discrimination task similar to that used by Malania et al. (2007), suggested a possible dissociation between the two measures, with error rates mimicking threshold performance, but response times showing differential results (Panis & Hermens, 2014). We here report the outcomes of three experiments examining this possible dissociation, and demonstrate an overall similar pattern of results for error rates and response times across a broad range of mask layouts. Moreover, the pattern of results in our experiments correlates strongly with the threshold performance reported earlier (Malania et al., 2007). Our results suggest that outcomes in a simultaneous masking paradigm do not critically depend on the outcome measure used, and therefore provide evidence for a common underlying mechanism.

  5. Tax revenue and inflation rate predictions in Banda Aceh using Vector Error Correction Model (VECM)

    Science.gov (United States)

    Maulia, Eva; Miftahuddin; Sofyan, Hizir

    2018-05-01

    A country has some important parameters for achieving economic welfare, such as tax revenues and inflation. One of the largest revenues in the Indonesian state budget comes from the tax sector, and the rate of inflation in a country can be used as one measure of the economic problems the country is facing. Given the importance of tax revenue and inflation rate control in achieving economic prosperity, it is necessary to analyze the relationship between tax revenue and the inflation rate and to forecast both. VECM (Vector Error Correction Model) was chosen as the method used in this research because the data take the form of multivariate time series. This study aims to produce a VECM model with optimal lag and to predict tax revenue and the inflation rate with it. The results show that the best model for the tax revenue and inflation rate data in Banda Aceh City is the VECM with 3rd optimal lag, or VECM(3). Of the seven models formed, one is significant, namely the income tax revenue model. The predictions of tax revenue and the inflation rate in Kota Banda Aceh for the next 6, 12 and 24 periods (months) obtained using VECM(3) are considered valid, since they have a minimum error value compared to the other models.
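
The error-correction mechanism that gives the VECM its name can be illustrated on a toy cointegrated pair. The study itself fits a full multivariate VECM(3) to tax and inflation series; this stdlib sketch, on invented data, estimates only the single adjustment coefficient of one equation:

```python
import random

random.seed(0)

# Toy cointegrated pair: x is a random walk, y is pulled toward x,
# so the error-correction term (y - x) is stationary.
T = 5000
x, y = [0.0], [0.0]
for _ in range(T):
    x.append(x[-1] + random.gauss(0.0, 1.0))
    y.append(y[-1] + 0.5 * (x[-1] - y[-1]) + random.gauss(0.0, 1.0))

dy = [y[t] - y[t - 1] for t in range(1, T + 1)]       # first difference of y
ect = [y[t - 1] - x[t - 1] for t in range(1, T + 1)]  # lagged disequilibrium

# OLS slope of dy on the lagged error-correction term: the adjustment speed
m_e = sum(ect) / len(ect)
m_d = sum(dy) / len(dy)
gamma = (sum((e - m_e) * (d - m_d) for e, d in zip(ect, dy))
         / sum((e - m_e) ** 2 for e in ect))
# gamma comes out negative: y is pulled back toward its equilibrium with x
```

A significantly negative adjustment coefficient is what signals that the series are drawn back toward their long-run equilibrium, the property a VECM exploits for forecasting.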

  6. Shuttle bit rate synchronizer. [signal to noise ratios and error analysis

    Science.gov (United States)

    Huey, D. C.; Fultz, G. L.

    1974-01-01

    A shuttle bit rate synchronizer brassboard unit was designed, fabricated, and tested, which meets or exceeds the contractual specifications. The bit rate synchronizer operates at signal-to-noise ratios (in a bit rate bandwidth) down to -5 dB while exhibiting less than 0.6 dB bit error rate degradation. The mean acquisition time was measured to be less than 2 seconds. The synchronizer is designed around a digital data transition tracking loop whose phase and data detectors are integrate-and-dump filters matched to the Manchester encoded bits specified. It meets the reliability (no adjustments or tweaking) and versatility (multiple bit rates) of the shuttle S-band communication system through an implementation which is all digital after the initial stage of analog AGC and A/D conversion.

  7. PS-022 Complex automated medication systems reduce medication administration error rates in an acute medical ward

    DEFF Research Database (Denmark)

    Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan

    2017-01-01

    Background Medication errors have received extensive attention in recent decades and are of significant concern to healthcare organisations globally. Medication errors occur frequently, and adverse events associated with medications are one of the largest causes of harm to hospitalised patients...... cabinet, automated dispensing and barcode medication administration; (2) non-patient specific automated dispensing and barcode medication administration. The occurrence of administration errors was observed in three 3 week periods. The error rates were calculated by dividing the number of doses with one...

  8. Soft error rate analysis methodology of multi-Pulse-single-event transients

    International Nuclear Information System (INIS)

    Zhou Bin; Huo Mingxue; Xiao Liyi

    2012-01-01

    As transistor feature sizes scale down, soft errors in combinational logic caused by high-energy particle radiation are gaining more and more attention. In this paper, a combinational logic soft error analysis methodology considering multi-pulse-single-event transients (MPSETs) and re-convergence with multiple transient pulses is proposed. In the proposed approach, the voltage pulse produced at a standard cell output is approximated by a triangle waveform and characterized by three parameters: pulse width, the transition time of the first edge, and the transition time of the second edge. For pulses with an amplitude smaller than the supply voltage, an edge extension technique is proposed. Moreover, an efficient electrical masking model that comprehensively considers transition time, delay, width and amplitude is proposed, together with an approach that uses the transition times of the two edges and the pulse width to compute the pulse amplitude. Finally, our proposed firstly-independently-propagating-secondly-mutually-interacting (FIP-SMI) approach is used to handle the more practical case of re-convergence gates with multiple transient pulses. For MPSETs, a random generation model is also proposed. Compared to estimates obtained from circuit-level simulations with HSpice, our proposed soft error rate analysis algorithm shows 10% error in SER estimation with a speedup of 300 when single-pulse-single-event transients (SPSETs) are considered. We have also demonstrated that the runtime and SER decrease as P0 increases, using designs from the ISCAS-85 benchmarks. (authors)

  9. Bit error rate testing of fiber optic data links for MMIC-based phased array antennas

    Science.gov (United States)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  10. Error rate of automated calculation for wound surface area using a digital photography.

    Science.gov (United States)

    Yang, S; Park, J; Lee, H; Lee, J B; Lee, B U; Oh, B H

    2018-02-01

    Although measuring wound size using digital photography is a quick and simple way to evaluate a skin wound, its accuracy has not been fully validated. To investigate the error rate of our newly developed wound surface area calculation using digital photography. Using a smartphone and a digital single lens reflex (DSLR) camera, four photographs each of various sized wounds (diameter: 0.5-3.5 cm) were taken from a facial skin model together with color patches. The quantitative values of the wound areas were automatically calculated. The relative error (RE) of this method with regard to wound size and type of camera was analyzed. The RE of individual calculated areas ranged from 0.0329% (DSLR, diameter 1.0 cm) to 23.7166% (smartphone, diameter 2.0 cm). In spite of the correction for lens curvature, the smartphone had a significantly higher error rate than the DSLR camera (3.9431±2.9772 vs 8.1303±4.8236). However, for wound diameters below 3 cm, the REs of the average values of the four photographs were below 5%. In addition, there was no difference in the average value of wound area taken by smartphone and DSLR camera in those cases. For the follow-up of small skin defects (diameter: <3 cm), our newly developed automated wound area calculation method can be applied to multiple photographs, and their average values are a relatively useful index of wound healing with an acceptable error rate. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
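
The relative-error metric reported above is simple to restate in code; a sketch with hypothetical area values (the study's data are not reproduced here):

```python
def relative_error(measured, true):
    """Relative error (%) of an automated area estimate vs. ground truth."""
    return abs(measured - true) / true * 100.0

# Hypothetical area estimates (cm^2) from four photographs of one wound
photos = [3.02, 2.88, 3.10, 2.95]
true_area = 3.00
res = [relative_error(p, true_area) for p in photos]
avg_re = sum(res) / len(res)  # averaging over photos damps per-photo error
```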

  11. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    Science.gov (United States)

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods. PMID:28182717

  12. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim

    2011-01-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of possibly erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and deduce the diversity order. We show that the performance simulation results coincide with our analytical results. © 2011 IEEE.
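
For context, the building block behind such BER analyses is the classical closed form for BPSK over AWGN, BER = ½·erfc(√(Eb/N0)). A quick stdlib Monte Carlo check of that textbook formula (this is not the paper's fading/relaying derivation, just the baseline it generalizes):

```python
import math
import random

def bpsk_ber_theory(ebn0_db):
    """Closed-form BPSK bit-error rate over AWGN: 0.5 * erfc(sqrt(Eb/N0))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebn0))

def bpsk_ber_montecarlo(ebn0_db, n_bits=200_000, seed=1):
    """Count hard-decision errors on antipodal symbols in Gaussian noise."""
    rng = random.Random(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))  # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = rng.choice((-1.0, 1.0))
        rx = bit + rng.gauss(0.0, sigma)
        errors += (rx > 0.0) != (bit > 0.0)  # decision threshold at zero
    return errors / n_bits

ber_theory = bpsk_ber_theory(4.0)
ber_sim = bpsk_ber_montecarlo(4.0)
```

The same simulate-and-compare pattern is how analytical BER expressions like the paper's are typically validated.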

  13. On the symmetric α-stable distribution with application to symbol error rate calculations

    KAUST Repository

    Soury, Hamza

    2016-12-24

    The probability density function (PDF) of the symmetric α-stable distribution is investigated using the inverse Fourier transform of its characteristic function. For general values of the stable parameter α, it is shown that the PDF and the cumulative distribution function of the symmetric stable distribution can be expressed in closed form in terms of the Fox H function. As an application, the probability of error of single input single output communication systems using different modulation schemes with an α-stable perturbation is studied. In more detail, a generic formula is derived for a generalized fading distribution, such as the extended generalized-k distribution. Later, simpler expressions of these error rates are deduced for some selected special cases, and compact approximations are derived using asymptotic expansions.
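
Symmetric α-stable noise as used above can be sampled with the standard Chambers-Mallows-Stuck (CMS) generator, which makes such error rates easy to explore by simulation; a stdlib sketch (for α = 2 the law reduces to a zero-mean Gaussian with variance 2, which the check below relies on):

```python
import math
import random

def sym_alpha_stable(alpha, rng):
    """One draw from a standard symmetric alpha-stable law (CMS method)."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    if alpha == 1.0:
        return math.tan(u)  # alpha = 1 is the Cauchy special case
    return (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
            * (math.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

rng = random.Random(7)
samples = [sym_alpha_stable(2.0, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples)  # should approach 2
```

For α < 2 the variance diverges (heavy tails), which is precisely why the record's analysis needs Fox H-function machinery rather than Gaussian-based formulas.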

  15. Estimating time-based instantaneous total mortality rate based on the age-structured abundance index

    Science.gov (United States)

    Wang, Yingbin; Jiao, Yan

    2015-05-01

    The instantaneous total mortality rate (Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecasting, and fisheries management. A catch curve-based method for estimating time-based Z and its trend from catch per unit effort (CPUE) data of multiple cohorts is developed. Unlike the traditional catch-curve method, the method developed here does not require the assumption of constant Z over the whole time period; instead, Z is assumed constant within each window of n consecutive years, and the Z values for different windows are estimated using the age-based CPUE data within those years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true change rates (the relative differences between the change rates of the estimated Z and the true Z are smaller than 10%). Variations in both Z and recruitment can affect the estimates of the Z value and the trend of Z. The most appropriate value of n can differ depending on these factors; therefore, the appropriate value of n for a given fishery should be determined through a simulation analysis as we demonstrate in this study. Further analyses suggested that selectivity and age estimation are two additional factors that can affect the estimated Z values if there is error in either of them, but the estimated change rates of Z remain close to the true change rates. We also applied this approach to the Atlantic cod (Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997, and obtained reasonable estimates of time-based Z.
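
The classical catch-curve relation the method generalizes: with constant recruitment and constant Z, ln(CPUE) falls linearly with age at slope −Z, so Z is the negated slope of a log-linear regression. A stdlib sketch under those textbook assumptions (the paper's contribution is re-estimating Z over windows of n years rather than assuming it constant throughout):

```python
import math

def estimate_z(cpue_at_age):
    """Catch-curve estimate: the slope of ln(CPUE) vs. age equals -Z."""
    ages = list(range(len(cpue_at_age)))
    logs = [math.log(c) for c in cpue_at_age]
    n = len(ages)
    ma, ml = sum(ages) / n, sum(logs) / n
    slope = (sum((a - ma) * (l - ml) for a, l in zip(ages, logs))
             / sum((a - ma) ** 2 for a in ages))
    return -slope

# Synthetic cohort decaying at a true Z of 0.4 per year (noise-free)
cpue = [1000.0 * math.exp(-0.4 * age) for age in range(8)]
z_hat = estimate_z(cpue)  # recovers 0.4 up to rounding
```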

  16. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    Science.gov (United States)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and a flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression for the characteristic function (CF) of the MAI is developed in a straightforward manner. Finally, an exact expression for the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).

  17. Errors of car wheels rotation rate measurement using roller follower on test benches

    Science.gov (United States)

    Potapov, A. S.; Svirbutovich, O. A.; Krivtsov, S. N.

    2018-03-01

    The article deals with errors in measuring wheel rotation rate on roller test benches, which depend on the vehicle speed. Monitoring of vehicle performance under operating conditions is performed on roller test benches, which are not flawless: they have drawbacks that affect the accuracy of vehicle performance monitoring. An increase in the base velocity of the vehicle requires an increase in the accuracy of wheel rotation rate monitoring, which determines how accurately the operating mode of a tested vehicle's wheel can be identified. Ensuring measurement accuracy for the rotation velocity of the rollers themselves is not an issue; the problem arises when measuring the rotation velocity of a car wheel. The higher the rotation velocity of the wheel, the lower the accuracy of measurement. At present, wheel rotation frequency monitoring on roller test benches is carried out by follow-up systems whose sensors are rollers that follow wheel rotation. The rollers of the system are not kinematically linked to the supporting rollers of the test bench; the roller follower is forced against the wheels of the tested vehicle by means of a spring-lever mechanism. Experience with operating the test bench equipment has shown that measurement accuracy is satisfactory at the low speeds of vehicles diagnosed on roller test benches. As the diagnostic speed rises, rotation velocity measurement errors occur in both braking and pulling modes because the follower roller spins about the tire tread. The paper shows oscillograms of changes in wheel rotation velocity and the measurement system's signals when testing a vehicle on roller test benches at the specified speeds.

  18. Symbol Error Rate of MPSK over EGK Channels Perturbed by a Dominant Additive Laplacian Noise

    KAUST Repository

    Souri, Hamza; Alouini, Mohamed-Slim

    2015-01-01

    The Laplacian noise has received much attention in recent years since it affects many communication systems. We consider in this paper the probability of error of an M-ary phase shift keying (PSK) constellation operating over a generalized fading channel in the presence of a dominant additive Laplacian noise. In this context, the decision regions of the receiver are determined using the maximum likelihood and the minimum distance detectors. Once the decision regions are extracted, the resulting symbol error rate expressions are computed and averaged over an Extended Generalized-K fading distribution. Generic closed-form expressions of the conditional and the average probability of error are obtained in terms of the Fox H function. Simplifications for some special cases of fading are presented, and the resulting formulas often end up being expressed in terms of well known elementary functions. Finally, the mathematical formalism is validated using some selected analytical-based numerical results as well as Monte Carlo simulation-based results.

  20. Error rates and resource overheads of encoded three-qubit gates

    Science.gov (United States)

    Takagi, Ryuji; Yoder, Theodore J.; Chuang, Isaac L.

    2017-10-01

    A non-Clifford gate is required for universal quantum computation, and, typically, this is the most error-prone and resource-intensive logical operation on an error-correcting code. Small, single-qubit rotations are popular choices for this non-Clifford gate, but certain three-qubit gates, such as Toffoli or controlled-controlled-Z (ccz), are equivalent options that are also more suited for implementing some quantum algorithms, for instance, those with coherent classical subroutines. Here, we calculate error rates and resource overheads for implementing logical ccz with pieceable fault tolerance, a nontransversal method for implementing logical gates. We provide a comparison with a nonlocal magic-state scheme on a concatenated code and a local magic-state scheme on the surface code. We find the pieceable fault-tolerance scheme particularly advantaged over magic states on concatenated codes and in certain regimes over magic states on the surface code. Our results suggest that pieceable fault tolerance is a promising candidate for fault tolerance in a near-future quantum computer.

  1. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Directory of Open Access Journals (Sweden)

    Sharmila Vaz

    Full Text Available The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States (US) samples are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187) from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports), not just the student, as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).
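
Internal-consistency figures in records like this one are conventionally Cronbach's alpha; a stdlib sketch of the statistic on invented ratings (not the study's data):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).

    item_scores holds one list per item, aligned across respondents.
    """
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # per-respondent totals
    item_var = sum(pvariance(scores) for scores in item_scores)
    return k / (k - 1) * (1.0 - item_var / pvariance(totals))

# Four hypothetical items rated by five respondents (invented ratings)
items = [
    [3, 4, 3, 5, 2],
    [2, 4, 4, 5, 1],
    [3, 5, 3, 4, 2],
    [2, 4, 3, 5, 2],
]
alpha = cronbach_alpha(items)  # high here, since the items co-vary strongly
```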

  3. Comparison of Bit Error Rate of Line Codes in NG-PON2

    Directory of Open Access Journals (Sweden)

    Tomas Horvath

    2016-05-01

    Full Text Available This article focuses on simulation and comparison of the line codes NRZ (Non Return to Zero), RZ (Return to Zero) and Miller's code for NG-PON2 (Next-Generation Passive Optical Network Stage 2). Our article provides a comparison in terms of Q-factor, BER (Bit Error Rate), and bandwidth. Line codes are the most important part of communication over optical fibre; their main role is the representation of the digital signal. NG-PON2 networks use optical fibres for communication, which is why OptSim v5.2 is used for the simulation.

  4. Inclusive bit error rate analysis for coherent optical code-division multiple-access system

    Science.gov (United States)

    Katz, Gilad; Sadot, Dan

    2002-06-01

    Inclusive noise and bit error rate (BER) analysis for optical code-division multiplexing (OCDM) using coherence techniques is presented. The analysis contains a crosstalk calculation of the mutual field variance for different numbers of users. It is shown that the crosstalk noise depends strongly on the receiver integration time, the laser coherence time, and the number of users. In addition, analytical results for the power fluctuation at the received channel due to the data modulation at the rejected channels are presented. The analysis also includes amplified spontaneous emission (ASE)-related noise effects of in-line amplifiers in a long-distance communication link.

  5. Modeling the cosmic-ray-induced soft-error rate in integrated circuits: An overview

    International Nuclear Information System (INIS)

    Srinivasan, G.R.

    1996-01-01

    This paper is an overview of the concepts and methodologies used to predict soft-error rates (SER) due to cosmic and high-energy particle radiation in integrated circuit chips. The paper emphasizes the need for the SER simulation using the actual chip circuit model which includes device, process, and technology parameters as opposed to using either the discrete device simulation or generic circuit simulation that is commonly employed in SER modeling. Concepts such as funneling, event-by-event simulation, nuclear history files, critical charge, and charge sharing are examined. Also discussed are the relative importance of elastic and inelastic nuclear collisions, rare event statistics, and device vs. circuit simulations. The semi-empirical methodologies used in the aerospace community to arrive at SERs [also referred to as single-event upset (SEU) rates] in integrated circuit chips are reviewed. This paper is one of four in this special issue relating to SER modeling. Together, they provide a comprehensive account of this modeling effort, which has resulted in a unique modeling tool called the Soft-Error Monte Carlo Model, or SEMM

  6. Symbol and Bit Error Rates Analysis of Hybrid PIM-CDMA

    Directory of Open Access Journals (Sweden)

    Ghassemlooy Z

    2005-01-01

    Full Text Available A hybrid pulse interval modulation code-division multiple-access (hPIM-CDMA) scheme employing the strict optical orthogonal code (SOCC) with unity auto- and cross-correlation constraints for indoor optical wireless communications is proposed. In this paper, we analyse the symbol error rate (SER) and bit error rate (BER) of hPIM-CDMA. In the analysis, we consider multiple access interference (MAI), self-interference, and the hybrid nature of the hPIM-CDMA signal detection, which is based on the matched filter (MF). It is shown that the BER/SER performance can only be evaluated if the bit resolution conforms to the condition set by the number of consecutive false alarm pulses that might occur and be detected, so that one symbol being divided into two is unlikely to occur. Otherwise, the probability of SER and BER becomes extremely high and indeterminable. We show that for a large number of users, the BER improves when the code weight is increased. The results presented are compared with other modulation schemes.

  7. Two-dimensional optoelectronic interconnect-processor and its operational bit error rate

    Science.gov (United States)

    Liu, J. Jiang; Gollsneider, Brian; Chang, Wayne H.; Carhart, Gary W.; Vorontsov, Mikhail A.; Simonis, George J.; Shoop, Barry L.

    2004-10-01

    Two-dimensional (2-D) multi-channel 8x8 optical interconnect and processor systems were designed and developed using complementary metal-oxide-semiconductor (CMOS) driven 850-nm vertical-cavity surface-emitting laser (VCSEL) arrays and photodetector (PD) arrays with corresponding wavelengths. We performed operation and bit-error-rate (BER) analysis on this free-space integrated 8x8 VCSEL optical interconnect driven by silicon-on-sapphire (SOS) circuits. A pseudo-random bit stream (PRBS) data sequence was used in operating the interconnects. Eye diagrams were measured from individual channels and analyzed using a digital oscilloscope at data rates from 155 Mb/s to 1.5 Gb/s. Using a statistical model of Gaussian distribution for the random noise in the transmission, we developed a method to compute the BER instantaneously from the digital eye diagrams. Direct measurements of the interconnects were also taken on a standard BER tester for verification. We found that the results of the two methods were of the same order and agreed to within 50%. The integrated interconnects were also investigated in an optoelectronic processing architecture as a digital halftoning image processor. Error diffusion networks implemented through the inherently parallel nature of photonics promise to provide high quality digital halftoned images.
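
The Gaussian eye-diagram BER method described above has a standard closed form: with mark/space level means μ1, μ0 and standard deviations σ1, σ0 read off the eye diagram, Q = (μ1 − μ0)/(σ1 + σ0) and BER ≈ ½·erfc(Q/√2). A sketch with illustrative eye statistics (not the measured values from the record):

```python
import math

def ber_from_eye(mu1, mu0, sigma1, sigma0):
    """Q-factor and BER estimate from eye-diagram statistics (Gaussian noise)."""
    q = (mu1 - mu0) / (sigma1 + sigma0)
    return q, 0.5 * math.erfc(q / math.sqrt(2.0))

# Illustrative mark/space eye statistics in arbitrary units (not measured data)
q, ber = ber_from_eye(mu1=1.0, mu0=0.0, sigma1=0.09, sigma0=0.05)
```

This is why a clean eye (large level separation, small level noise) translates directly into an exponentially lower estimated BER.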

  8. Soft error rate simulation and initial design considerations of neutron intercepting silicon chip (NISC)

    Science.gov (United States)

    Celik, Cihangir

    Advances in microelectronics result in sub-micrometer electronic technologies as predicted by Moore's Law, 1965, which states that the number of transistors in a given space would double every two years. The most available memory architectures today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in a micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or Soft Error. In contrast to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano

  9. Minimizing the symbol-error-rate for amplify-and-forward relaying systems using evolutionary algorithms

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2015-02-01

    In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system. The detector is designed to minimize the symbol-error-rate (SER) of the system. The SER surface is non-linear and may have multiple minima; therefore, designing an SER detector for cooperative communications becomes an optimization problem. Evolutionary algorithms have the capability to find the global minimum; therefore, evolutionary algorithms such as particle swarm optimization (PSO) and differential evolution (DE) are exploited to solve this optimization problem. The performance of the proposed detectors is compared with that of conventional detectors such as the maximum likelihood (ML) and minimum mean square error (MMSE) detectors. In the simulation results, it can be observed that the SER performance of the proposed detectors is less than 2 dB away from that of the ML detector. Significant improvement in SER performance is also observed in comparison with the MMSE detector. The computational complexity of the proposed detector is much lower than that of the ML and MMSE algorithms. Moreover, in contrast to the ML and MMSE detectors, the computational complexity of the proposed detectors increases linearly with the number of relays.
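
As an illustration of the evolutionary approach described above, the sketch below applies a minimal global-best particle swarm optimizer to a standard multi-modal test function (the Rastrigin function), standing in for a nonlinear SER surface with multiple minima. The swarm parameters and the cost function are illustrative assumptions, not the detector design from the paper.

```python
import math
import random

def pso_minimize(cost, dim, bounds, n_particles=30, iters=200, seed=1):
    """Minimal particle swarm optimization sketch (global-best topology)."""
    rng = random.Random(seed)
    lo, hi = bounds
    w, c1, c2 = 0.72, 1.49, 1.49          # common inertia/acceleration constants
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Rastrigin function: many local minima, global minimum 0 at the origin,
# standing in for a multi-modal SER surface.
def rastrigin(x):
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

best, best_cost = pso_minimize(rastrigin, dim=2, bounds=(-5.12, 5.12))
```

The swarm's attraction toward personal and global bests is what lets it escape the local minima that would trap a purely gradient-based search.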

  10. Correct mutual information, quantum bit error rate and secure transmission efficiency in Wojcik's eavesdropping scheme on ping-pong protocol

    OpenAIRE

    Zhang, Zhanjun

    2004-01-01

    Comment: The incorrect mutual information, quantum bit error rate, and secure transmission efficiency in Wojcik's eavesdropping scheme [PRL 90 (2003) 157901] on the ping-pong protocol have been pointed out and corrected

  11. Calculation of the soft error rate of submicron CMOS logic circuits

    International Nuclear Information System (INIS)

    Juhnke, T.; Klar, H.

    1995-01-01

    A method to calculate the soft error rate (SER) of CMOS logic circuits with dynamic pipeline registers is described. This method takes into account charge collection by drift and diffusion, and it is verified by comparison of calculated SERs with measurement results. Using this method, the SER of a highly pipelined multiplier is calculated as a function of supply voltage for 0.6 μm, 0.3 μm, and 0.12 μm technologies, respectively. It has been found that the SER of such highly pipelined submicron CMOS circuits may become too high, so that countermeasures have to be taken. Since the SER greatly increases with decreasing supply voltage, low-power/low-voltage circuits may show more than eight times the SER at half the normal supply voltage as compared to conventional designs
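
The strong voltage dependence of the SER can be illustrated with the widely used empirical exponential model of Hazucha and Svensson, SER = F · A · exp(-Qcrit/Qs), where the critical charge Qcrit scales roughly with node capacitance times supply voltage. The constants below are purely illustrative assumptions, not the paper's calibrated technology parameters.

```python
import math

def soft_error_rate(flux, area, q_crit, q_s):
    """Empirical exponential SER model (after Hazucha & Svensson):
    SER = F * A * exp(-Qcrit / Qs). All constants here are illustrative."""
    return flux * area * math.exp(-q_crit / q_s)

def q_crit(c_node, vdd):
    """Critical charge, taken to scale with node capacitance times Vdd."""
    return c_node * vdd

FLUX, AREA = 56.5, 1.0e-8      # hypothetical particle flux and sensitive area
QS = 0.3e-15                   # hypothetical charge-collection slope (0.3 fC)
C_NODE = 1.0e-15               # hypothetical 1 fF node capacitance

ser_full = soft_error_rate(FLUX, AREA, q_crit(C_NODE, 1.2), QS)  # nominal Vdd
ser_half = soft_error_rate(FLUX, AREA, q_crit(C_NODE, 0.6), QS)  # half Vdd
ratio = ser_half / ser_full    # exp(delta_Qcrit / Qs), i.e. exponential growth
```

Because Qcrit enters the exponent, halving the supply voltage multiplies the SER by exp(ΔQcrit/Qs), which is consistent with the several-fold increase the abstract reports.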

  12. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim

    2010-01-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of possibly erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance, and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. ©2010 IEEE.

  13. Personnel selection and emotional stability certification: establishing a false negative error rate when clinical interviews

    International Nuclear Information System (INIS)

    Berghausen, P.E. Jr.

    1987-01-01

    The security plans of nuclear plants generally require that all personnel who are to have unescorted access to protected areas or vital islands be screened for emotional instability. Screening typically consists of first administering the MMPI and then conducting a clinical interview. Interviews-by-exception protocols provide for interviewing only those employees who have some indication of psychopathology in their MMPI results. A problem arises when the indications are not readily apparent: false negatives are likely to occur, resulting in employees being erroneously granted unescorted access. The present paper describes the development of a predictive equation which permits accurate identification, via analysis of MMPI results, of those employees who are most in need of being interviewed. The predictive equation also permits knowing the probable maximum false negative error rate when a given percentage of employees is interviewed

  14. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel

    2010-10-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of possibly erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance, and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. ©2010 IEEE.

  15. Error-rate performance analysis of cooperative OFDMA system with decode-and-forward relaying

    KAUST Repository

    Fareed, Muhammad Mehboob; Uysal, Murat; Tsiftsis, Theodoros A.

    2014-01-01

    In this paper, we investigate the performance of a cooperative orthogonal frequency-division multiple-access (OFDMA) system with decode-and-forward (DaF) relaying. Specifically, we derive a closed-form approximate symbol-error-rate expression and analyze the achievable diversity orders. Depending on the relay location, a diversity order of up to (L_SkD + 1) + Σ_{m=1}^{M} min(L_SkRm + 1, L_RmD + 1) is available, where M is the number of relays, and L_SkD + 1, L_SkRm + 1, and L_RmD + 1 are the lengths of the channel impulse responses of the source-to-destination, source-to-mth-relay, and mth-relay-to-destination links, respectively. Monte Carlo simulation results are also presented to confirm the analytical findings. © 2013 IEEE.

  16. Bit Error Rate Analysis for MC-CDMA Systems in Nakagami-m Fading Channels

    Directory of Open Access Journals (Sweden)

    Li Zexian

    2004-01-01

    Full Text Available Multicarrier code division multiple access (MC-CDMA) is a promising technique that combines orthogonal frequency division multiplexing (OFDM) with CDMA. In this paper, based on an alternative expression for the Q-function, the characteristic function, and a Gaussian approximation, we present a new practical technique for determining the bit error rate (BER) of multiuser MC-CDMA systems in frequency-selective Nakagami-m fading channels. The results are applicable to systems employing coherent demodulation with maximal ratio combining (MRC) or equal gain combining (EGC). The analysis assumes that different subcarriers experience independent fading channels, which are not necessarily identically distributed. The final average BER is expressed in the form of a single finite-range integral with an integrand composed of tabulated functions, which can be easily computed numerically. The accuracy of the proposed approach is demonstrated with computer simulations.
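
The "alternative expression for the Q-function" used in analyses of this kind is commonly Craig's formula, which writes Q(x) as a single finite-range integral; attributing it here is an assumption, since the abstract does not name the expression. The sketch below evaluates Craig's integral numerically and checks it against the erfc form, mirroring how such single-integral BER expressions are computed.

```python
import math

def q_craig(x, n=2000):
    """Q-function via Craig's single finite-range integral:
    Q(x) = (1/pi) * integral_0^{pi/2} exp(-x^2 / (2 sin^2 theta)) dtheta, x >= 0."""
    h = (math.pi / 2) / n
    s = 0.0
    for k in range(n):
        theta = (k + 0.5) * h           # midpoint rule over (0, pi/2)
        s += math.exp(-x * x / (2.0 * math.sin(theta) ** 2))
    return s * h / math.pi

def q_erfc(x):
    """Reference form: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Example: BPSK over AWGN has BER = Q(sqrt(2*Eb/N0)); the finite integration
# range is what makes Craig's form convenient for averaging over fading.
ebn0 = 10 ** (8 / 10)                   # 8 dB
ber = q_craig(math.sqrt(2 * ebn0))
```

The finite integration limits (rather than the semi-infinite tail integral) are what allow the fading average to be folded inside the integrand and evaluated numerically.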

  17. Evolutionary enhancement of the SLIM-MAUD method of estimating human error rates

    International Nuclear Information System (INIS)

    Zamanali, J.H.; Hubbard, F.R.; Mosleh, A.; Waller, M.A.

    1992-01-01

    The methodology described in this paper assigns plant-specific dynamic human error rates (HERs) for individual plant examinations based on procedural difficulty, on configuration features, and on the time available to perform the action. This methodology is an evolutionary improvement of the success likelihood index methodology (SLIM-MAUD) for use in systemic scenarios. It is based on the assumption that the HER in a particular situation depends on the combined effects of a comprehensive set of performance-shaping factors (PSFs) that influence the operator's ability to perform the action successfully. The PSFs relate the details of the systemic scenario in which the action must be performed to the operator's psychological and cognitive condition
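
The core SLIM calculation can be sketched as follows: a success likelihood index (SLI) is formed as a weighted sum of PSF ratings, and the HER follows from the standard calibration relation log10(HER) = a·SLI + b, anchored on two tasks with known error rates. All weights, ratings, and anchor values below are hypothetical.

```python
import math

def sli(weights, ratings):
    """Success likelihood index: normalized weighted sum of PSF ratings."""
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, ratings)) / total

def calibrate(sli1, her1, sli2, her2):
    """Solve log10(HER) = a*SLI + b from two anchor tasks with known HERs."""
    a = (math.log10(her1) - math.log10(her2)) / (sli1 - sli2)
    b = math.log10(her1) - a * sli1
    return a, b

def her(s, a, b):
    """Human error rate implied by an SLI under the calibrated relation."""
    return 10 ** (a * s + b)

# Hypothetical PSF weights/ratings (e.g. procedural difficulty, configuration,
# time available) for the task being assessed; anchor tasks are illustrative.
a, b = calibrate(sli1=9.0, her1=1e-4, sli2=2.0, her2=1e-1)
task_sli = sli(weights=[0.5, 0.3, 0.2], ratings=[7, 4, 6])
task_her = her(task_sli, a, b)
```

A higher SLI (more favorable PSFs) maps to a lower HER, so the slope `a` comes out negative when the high-SLI anchor has the smaller error rate.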

  18. Analysis of family-wise error rates in statistical parametric mapping using random field theory.

    Science.gov (United States)

    Flandin, Guillaume; Friston, Karl J

    2017-11-01

    This technical report revisits the analysis of family-wise error rates in statistical parametric mapping (using random field theory) reported in (Eklund et al. []: arXiv 1511.01863). Contrary to the understandable spin that these sorts of analyses attract, a review of their results suggests that they endorse the use of parametric assumptions, and random field theory, in the analysis of functional neuroimaging data. We briefly rehearse the advantages parametric analyses offer over nonparametric alternatives and then unpack the implications of (Eklund et al. []: arXiv 1511.01863) for parametric procedures. Hum Brain Mapp, 2017. © 2017 The Authors. Human Brain Mapping published by Wiley Periodicals, Inc.

  19. Performance analysis for the bit-error rate of SAC-OCDMA systems

    Science.gov (United States)

    Feng, Gang; Cheng, Wenqing; Chen, Fujun

    2015-09-01

    Under the low-power assumption, Gaussian statistics obtained by invoking the central limit theorem are feasible for predicting the upper bound of performance in the spectral-amplitude-coding optical code division multiple access (SAC-OCDMA) system. However, this approach severely underestimates the bit-error rate (BER) performance of the system under the high-power assumption. Fortunately, the exact negative binomial (NB) model is a perfect replacement for the Gaussian model in prediction and evaluation. Based on NB statistics, a more accurate closed-form expression is analyzed and derived for the SAC-OCDMA system. The experiment shows that the obtained expression provides a more precise prediction of the BER performance under both the low- and high-power assumptions.

  20. System care improves trauma outcome: patient care errors dominate reduced preventable death rate.

    Science.gov (United States)

    Thoburn, E; Norris, P; Flores, R; Goode, S; Rodriguez, E; Adams, V; Campbell, S; Albrink, M; Rosemurgy, A

    1993-01-01

    A review of 452 trauma deaths in Hillsborough County, Florida, in 1984 documented that 23% of non-CNS trauma deaths were preventable, occurring because of inadequate resuscitation or delay in proper surgical care. In late 1988, Hillsborough County organized a County Trauma Agency (HCTA) to coordinate trauma care among prehospital providers and state-designated trauma centers. The purpose of this study was to review county trauma deaths after the inception of the HCTA to determine the frequency of preventable deaths. A total of 504 trauma deaths occurring between October 1989 and April 1991 were reviewed. Through committee review, 10 deaths were deemed preventable; 2 occurred outside the trauma system. Of the 10 deaths, 5 occurred late in severely injured patients. The preventable death rate has decreased to 7.0% with system care. The causes of preventable deaths have changed from delayed or inadequate intervention to postoperative care errors.

  1. Error-rate performance analysis of cooperative OFDMA system with decode-and-forward relaying

    KAUST Repository

    Fareed, Muhammad Mehboob

    2014-06-01

    In this paper, we investigate the performance of a cooperative orthogonal frequency-division multiple-access (OFDMA) system with decode-and-forward (DaF) relaying. Specifically, we derive a closed-form approximate symbol-error-rate expression and analyze the achievable diversity orders. Depending on the relay location, a diversity order of up to (L_SkD + 1) + Σ_{m=1}^{M} min(L_SkRm + 1, L_RmD + 1) is available, where M is the number of relays, and L_SkD + 1, L_SkRm + 1, and L_RmD + 1 are the lengths of the channel impulse responses of the source-to-destination, source-to-mth-relay, and mth-relay-to-destination links, respectively. Monte Carlo simulation results are also presented to confirm the analytical findings. © 2013 IEEE.

  2. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Sterling, D; Ehler, E [University of Minnesota, Minneapolis, MN (United States)]

    2015-06-15

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing.
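
The gamma passing rates quoted above combine a dose-difference tolerance (3%) with a distance-to-agreement tolerance (3 mm). A minimal 1-D global gamma index, run on an illustrative dose profile rather than the study's plans, can be sketched as:

```python
import math

def gamma_pass_rate(ref, meas, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """1-D global gamma index sketch: for each reference point, minimize the
    combined dose-difference / distance-to-agreement metric over measured points."""
    d_max = max(ref)                       # global normalization dose
    passed = 0
    for i, dr in enumerate(ref):
        best = math.inf
        for j, dm in enumerate(meas):
            dd = (dm - dr) / (dose_tol * d_max)       # dose-difference term
            dx = (j - i) * spacing_mm / dist_tol_mm   # distance term
            best = min(best, math.hypot(dd, dx))
        passed += best <= 1.0
    return 100.0 * passed / len(ref)

# A one-sample (1 mm) shift of an illustrative profile: most points still pass
# 3%/3mm because the distance-to-agreement search absorbs the shift.
ref = [10, 30, 70, 95, 100, 95, 70, 30, 10]
shifted = [30, 70, 95, 100, 95, 70, 30, 10, 5]
rate = gamma_pass_rate(ref, shifted, spacing_mm=1.0)
```

This also shows why gamma analysis can be insensitive to systematic positional errors: the distance search forgives shifts well inside the DTA tolerance, which is the kind of masking the patient-specific phantom is meant to overcome.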

  3. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    International Nuclear Information System (INIS)

    Sterling, D; Ehler, E

    2015-01-01

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing

  4. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains.

    Science.gov (United States)

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-05-01

    Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ
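
The variance-explained figures quoted above come from regressing real-world error rates on laboratory test metrics. A minimal single-predictor version of that idea, with entirely synthetic illustrative numbers (not the study's data), looks like:

```python
def fit_line(xs, ys):
    """Ordinary least squares for one predictor; returns slope, intercept, R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical data: per-name-pair laboratory confusion rates (%) vs.
# real-world wrong-drug rates (per 10,000 prescriptions); illustrative only.
lab = [2.0, 3.5, 5.0, 6.0, 8.5, 10.0, 12.5, 15.0]
real = [0.4, 0.9, 1.1, 1.6, 1.9, 2.6, 2.9, 3.8]
slope, intercept, r2 = fit_line(lab, real)
```

The R^2 value here plays the same role as the 37% and 45% variance-explained figures in the abstract: it quantifies how much of the real-world rate variation the laboratory metric accounts for.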

  5. Investigation on coupling error characteristics in angular rate matching based ship deformation measurement approach

    Science.gov (United States)

    Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang

    2018-01-01

    The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of the coupling error. In this paper, a comprehensive investigation of the coupling error is reported, which has the potential to support reduction of the coupling error in the future. Firstly, the causes and characteristics of the coupling error are analyzed theoretically based on the basic theory of measuring ship deformation. Then, simulations are conducted to verify the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and that the coupling error increases with the correlation value between them. All the simulation results coincide with the theoretical analysis.

  6. Analysis and Compensation of Modulation Angular Rate Error Based on Missile-Borne Rotation Semi-Strapdown Inertial Navigation System

    Directory of Open Access Journals (Sweden)

    Jiayu Zhang

    2018-05-01

    Full Text Available The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-system (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technique called the Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into the SINS. In practice, the stability of the modulation angular rate is difficult to achieve in a high-speed rotation environment, and the changing rotary angular rate has an impact on the inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including the acceleration-deceleration process and instability of the angular rate, on the navigation accuracy of the RSSINS is deduced, and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to make it possible to maintain high-precision autonomous navigation performance with the MIMU when there is no external aid. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions.
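
The benefit of rotation modulation, namely that a constant sensor bias expressed in a steadily rotating frame integrates to nearly zero in the non-rotating frame over whole rotations, can be demonstrated with a toy integration. The rotation rate, bias values, and time step below are illustrative assumptions, not the paper's system parameters.

```python
import math

def modulated_bias_integral(bias_x, bias_y, omega, duration, dt=1e-4):
    """Integrate a constant in-plane sensor bias, expressed in a frame rotating
    at constant rate omega, into the non-rotating frame. Over whole rotations
    the accumulated error cancels -- the essence of rotation modulation."""
    ex = ey = 0.0
    t = 0.0
    while t < duration:
        a = omega * t
        # Rotate the body-frame bias into the non-rotating frame and accumulate.
        ex += (bias_x * math.cos(a) - bias_y * math.sin(a)) * dt
        ey += (bias_x * math.sin(a) + bias_y * math.cos(a)) * dt
        t += dt
    return ex, ey

omega = 2 * math.pi            # one rotation per second (illustrative)
# Unmodulated: a 0.01 deg/s bias accumulates linearly to 0.1 deg over 10 s.
unmod = 0.01 * 10.0
# Modulated: the same bias, rotated continuously, cancels per rotation.
ex, ey = modulated_bias_integral(0.01, 0.0, omega, 10.0)
```

An unstable modulation rate breaks the exact cancellation over each cycle, which is precisely why the paper analyzes and compensates modulation angular rate error.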

  7. Quantitative comparison of errors in 15N transverse relaxation rates measured using various CPMG phasing schemes

    International Nuclear Information System (INIS)

    Myint Wazo; Cai Yufeng; Schiffer, Celia A.; Ishima, Rieko

    2012-01-01

    Nitrogen-15 Carr-Purcell-Meiboom-Gill (CPMG) transverse relaxation experiments are widely used to characterize protein backbone dynamics and chemical exchange parameters. Although an accurate value of the transverse relaxation rate, R2, is needed for accurate characterization of dynamics, the uncertainty in the R2 value depends on the experimental settings and the details of the data analysis itself. Here, we present an analysis of the impact of CPMG pulse phase alternation on the accuracy of the 15N CPMG R2. Our simulations show that R2 can be obtained accurately for a relatively wide spectral width, either using the conventional phase cycle or using phase alternation, when the r.f. pulse power is accurately calibrated. However, when the r.f. pulse is miscalibrated, the conventional CPMG experiment exhibits more significant uncertainties in R2, caused by the off-resonance effect, than does the phase alternation experiment. Our experiments show that this effect becomes manifest when the systematic error exceeds that arising from experimental noise. Furthermore, our results provide the means to estimate practical parameter settings that yield accurate values of 15N transverse relaxation rates in both CPMG experiments.
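
Assuming the usual mono-exponential decay model I(t) = I0·exp(-R2·t), R2 can be extracted from peak intensities measured at two relaxation delays; any systematic intensity error (e.g. from pulse miscalibration) propagates directly into this estimate. The intensities and delays below are synthetic illustrations.

```python
import math

def r2_from_intensities(i1, t1, i2, t2):
    """Transverse relaxation rate from two relaxation delays, assuming
    mono-exponential decay I(t) = I0 * exp(-R2 * t)."""
    return math.log(i1 / i2) / (t2 - t1)

# Synthetic peak intensities generated with R2 = 12 s^-1 (illustrative):
i1 = 100.0 * math.exp(-12.0 * 0.02)   # relaxation delay 20 ms
i2 = 100.0 * math.exp(-12.0 * 0.06)   # relaxation delay 60 ms
r2 = r2_from_intensities(i1, 0.02, i2, 0.06)
```

Because R2 depends on the log of an intensity ratio, a miscalibration that attenuates the longer-delay intensity more than the shorter-delay one biases R2 upward, which is the kind of systematic error the paper quantifies.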

  8. Power penalties for multi-level PAM modulation formats at arbitrary bit error rates

    Science.gov (United States)

    Kaliteevskiy, Nikolay A.; Wood, William A.; Downie, John D.; Hurley, Jason; Sterlingov, Petr

    2016-03-01

    There is considerable interest in combining multi-level pulsed amplitude modulation formats (PAM-L) and forward error correction (FEC) in next-generation, short-range optical communications links for increased capacity. In this paper we derive new formulas for the optical power penalties due to modulation format complexity relative to PAM-2 and due to inter-symbol interference (ISI). We show that these penalties depend on the required system bit-error rate (BER) and that the conventional formulas overestimate link penalties. Our corrections to the standard formulas are very small at conventional BER levels (typically 1×10^-12) but become significant at the higher BER levels enabled by FEC technology, especially for signal distortions due to ISI. The standard formula for format complexity, P = 10log10(L-1), is shown to overestimate the actual penalty for PAM-4 and PAM-8 by approximately 0.1 and 0.25 dB respectively at 1×10^-3 BER. Then we extend the well-known PAM-2 ISI penalty estimation formula from the IEEE 802.3 standard 10G link modeling spreadsheet to the large BER case and generalize it for arbitrary PAM-L formats. To demonstrate and verify the BER dependence of the ISI penalty, a set of PAM-2 experiments and Monte-Carlo modeling simulations are reported. The experimental results and simulations confirm that the conventional formulas can significantly overestimate ISI penalties at relatively high BER levels. In the experiments, overestimates up to 2 dB are observed at 1×10^-3 BER.
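
The conventional format-complexity penalty discussed above is straightforward to evaluate. The sketch below reproduces the standard P = 10·log10(L-1) figures that the paper argues overestimate the true penalty at FEC-era BER levels:

```python
import math

def pam_complexity_penalty_db(L):
    """Conventional multi-level penalty relative to PAM-2: P = 10*log10(L-1).
    The paper shows this overestimates the true penalty at high (FEC-era) BERs."""
    return 10 * math.log10(L - 1)

penalty_pam4 = pam_complexity_penalty_db(4)   # roughly 4.77 dB
penalty_pam8 = pam_complexity_penalty_db(8)   # roughly 8.45 dB
```

Per the abstract, the true penalties at a BER of 1×10^-3 are about 0.1 dB and 0.25 dB below these conventional values for PAM-4 and PAM-8, respectively.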

  9. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Directory of Open Access Journals (Sweden)

    Philip J Kellman

    Full Text Available Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert

  10. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Science.gov (United States)

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  11. Comparison of the bit error rate of Reed-Solomon codes and Bose-Chaudhuri-Hocquenghem codes using 32-FSK modulation

    Directory of Open Access Journals (Sweden)

    Eva Yovita Dwi Utami

    2016-11-01

    Full Text Available Reed-Solomon (RS) codes and Bose-Chaudhuri-Hocquenghem (BCH) codes are error-correcting codes belonging to the class of cyclic block codes. Error-correcting codes are required in communication systems to reduce errors in the transmitted information. This paper presents the results of a study of the BER performance of communication systems using the RS code, the BCH code, and no coding, with 32-FSK modulation over Additive White Gaussian Noise (AWGN), Rayleigh, and Rician channels. The error-reduction capability is measured by the resulting Bit Error Rate (BER). The results show that, as SNR increases, the RS code yields a steeper decrease in BER than the system with the BCH code, whereas the BCH code is superior at low SNR, achieving a better BER than the system with the RS code.

  12. Assessment of the rate and etiology of pharmacological errors by nurses of two major teaching hospitals in Shiraz

    Directory of Open Access Journals (Sweden)

    Fatemeh Vizeshfar

    2015-06-01

    Full Text Available Medication errors have serious consequences for patients, their families, and caregivers. Reduction of these errors by caregivers such as nurses can increase patient safety. The goal of this study was to assess the rate and etiology of medication errors in pediatric and medical wards. This cross-sectional analytic study was conducted on 101 registered nurses responsible for drug administration in pediatric and adult medical wards. Data were collected by a questionnaire including demographic information, self-reported errors, etiology of medication errors, and researcher observations. The results showed that the nurses' error rate was 51.6% in pediatric wards and 47.4% in adult wards. The most common errors in adult wards were administering drugs earlier or later than scheduled (48.6%), while administering drugs without a prescription and administering the wrong drug were the most common medication errors in pediatric wards (49.2% each). According to the researchers' observations, the medication error rate of 57.9% in adult wards was rated low, and the rate of 69.4% in pediatric wards was rated moderate. The most frequent medication error in both adult and pediatric wards was that nurses did not explain to patients the reason for and type of the drug being administered. Independent t-tests showed a significant difference in observed errors in pediatric wards (p=0.000) and in adult wards (p=0.000). Several studies have reported medication errors all over the world, especially in pediatric wards. However, by designing a suitable reporting system and using a multidisciplinary approach, the occurrence of medication errors and their negative consequences can be reduced.

  13. Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.

    Science.gov (United States)

    Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

    2013-08-01

    Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.
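
The gene gain and loss process that CAFE models can be sketched as a simple per-gene birth-death simulation. This is a simplified discrete-time illustration with made-up rates, not the CAFE 3 likelihood machinery; all names and parameters here are invented for the example:

```python
import random

def evolve_family(size: int, steps: int, gain: float, loss: float,
                  rng: random.Random) -> int:
    """Evolve a gene family size under a simple per-gene birth-death process."""
    for _ in range(steps):
        births = sum(1 for _ in range(size) if rng.random() < gain)
        deaths = sum(1 for _ in range(size) if rng.random() < loss)
        size = max(0, size + births - deaths)
    return size

rng = random.Random(42)
# Simulate 500 families starting at size 5. Annotation error would add
# miscounts on top of these true sizes, which is what inflates the
# gain/loss rates inferred by a model that ignores such error.
final_sizes = [evolve_family(5, steps=50, gain=0.01, loss=0.01, rng=rng)
               for _ in range(500)]
print(sum(final_sizes) / len(final_sizes))
```

With balanced gain and loss rates the mean family size stays near its starting value, while the spread across families grows, which is the signal such methods use to estimate rates.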

  14. Impact of catheter reconstruction error on dose distribution in high dose rate intracavitary brachytherapy and evaluation of OAR doses

    International Nuclear Information System (INIS)

    Thaper, Deepak; Shukla, Arvind; Rathore, Narendra; Oinam, Arun S.

    2016-01-01

    In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this study is to evaluate the impact of catheter reconstruction error on dose distribution in CT-based intracavitary brachytherapy planning and to evaluate its effect on organs at risk (OARs) such as the bladder, rectum and sigmoid, and on the target volume, the high-risk clinical target volume (HR-CTV).

  15. Time Domain Equalizer Design Using Bit Error Rate Minimization for UWB Systems

    Directory of Open Access Journals (Sweden)

    Syed Imtiaz Husain

    2009-01-01

    Full Text Available Ultra-wideband (UWB communication systems occupy huge bandwidths with very low power spectral densities. This feature makes the UWB channels highly rich in resolvable multipaths. To exploit the temporal diversity, the receiver is commonly implemented through a Rake. The aim to capture enough signal energy to maintain an acceptable output signal-to-noise ratio (SNR dictates a very complicated Rake structure with a large number of fingers. Channel shortening or time domain equalizer (TEQ can simplify the Rake receiver design by reducing the number of significant taps in the effective channel. In this paper, we first derive the bit error rate (BER of a multiuser and multipath UWB system in the presence of a TEQ at the receiver front end. This BER is then written in a form suitable for traditional optimization. We then present a TEQ design which minimizes the BER of the system to perform efficient channel shortening. The performance of the proposed algorithm is compared with some generic TEQ designs and other Rake structures in UWB channels. It is shown that the proposed algorithm maintains a lower BER along with efficiently shortening the channel.

  16. Student laboratory experiments exploring optical fibre communication systems, eye diagrams, and bit error rates

    Science.gov (United States)

    Walsh, Douglas; Moodie, David; Mauchline, Iain; Conner, Steve; Johnstone, Walter; Culshaw, Brian

    2005-06-01

    Optical fibre communications has proved to be one of the key application areas which created, and ultimately propelled, the global growth of the photonics industry over the last twenty years. Consequently, the teaching of the principles of optical fibre communications has become integral to many university courses covering photonics technology. However, to reinforce the fundamental principles and key technical issues students examine in their lecture courses, and to develop their experimental skills, it is critical that students also obtain hands-on practical experience of photonics components, instruments and systems in an associated teaching laboratory. In recognition of this need, OptoSci, in collaboration with university academics, commercially developed a fibre optic communications based educational package (ED-COM). This educator kit enables students to investigate the characteristics of the individual communications system components (sources, transmitters, fibre, receiver), examine and interpret the overall system performance limitations imposed by attenuation and dispersion, and conduct system design and performance analysis. To further enhance the experimental programme examined in the fibre optic communications kit, an extension module to ED-COM has recently been introduced, examining one of the most significant performance parameters of digital communications systems, the bit error rate (BER). This add-on module, BER(COM), enables students to generate, evaluate and investigate signal quality trends by examining eye patterns, and to explore the bit-rate limitations imposed on communication systems by noise, attenuation and dispersion. This paper will examine the educational objectives, background theory, and typical results for these educator kits, with particular emphasis on BER(COM).

  17. [The effectiveness of error reporting promoting strategy on nurse's attitude, patient safety culture, intention to report and reporting rate].

    Science.gov (United States)

    Kim, Myoungsoo

    2010-04-01

    The purpose of this study was to examine the impact of strategies promoting the reporting of errors on hospital nurses' attitudes toward reporting errors, organizational culture related to patient safety, intention to report, and reporting rate. A nonequivalent control group non-synchronized design was used for this study. The program was developed and then administered to the experimental group for 12 weeks. Data were analyzed using descriptive analysis, the X(2)-test, t-test, and ANCOVA with the SPSS 12.0 program. After the intervention, the experimental group showed significantly higher scores for nurses' attitude to reporting errors (experimental: 20.73 vs control: 20.52, F=5.483, p=.021) and reporting rate (experimental: 3.40 vs control: 1.33, F=1998.083), but no significant differences were found for organizational culture and intention to report. The study findings indicate that strategies promoting the reporting of errors play an important role in producing positive attitudes toward reporting errors and improving reporting behavior. Further advanced strategies for reporting errors that can lead to improved patient safety should be developed and applied in a broad range of hospitals.

  18. Residents' Ratings of Their Clinical Supervision and Their Self-Reported Medical Errors: Analysis of Data From 2009.

    Science.gov (United States)

    Baldwin, DeWitt C; Daugherty, Steven R; Ryan, Patrick M; Yaghmour, Nicholas A; Philibert, Ingrid

    2018-04-01

    Medical errors and patient safety are major concerns for the medical and medical education communities. Improving clinical supervision for residents is important in avoiding errors, yet little is known about how residents perceive the adequacy of their supervision and how this relates to medical errors and other education outcomes, such as learning and satisfaction. We analyzed data from a 2009 survey of residents in 4 large specialties regarding the adequacy and quality of supervision they receive as well as associations with self-reported data on medical errors and residents' perceptions of their learning environment. Residents' reports of working without adequate supervision were lower than data from a 1999 survey for all 4 specialties, and residents were least likely to rate "lack of supervision" as a problem. While few residents reported that they received inadequate supervision, problems with supervision were negatively correlated with sufficient time for clinical activities, overall ratings of the residency experience, and attending physicians as a source of learning. Problems with supervision were positively correlated with resident reports that they had made a significant medical error, had been belittled or humiliated, or had observed others falsifying medical records. Although working without supervision was not a pervasive problem in 2009, when it happened, it appeared to have negative consequences. The association between inadequate supervision and medical errors is of particular concern.

  19. Do illness rating systems predict discharge location, length of stay, and cost after total hip arthroplasty?

    Directory of Open Access Journals (Sweden)

    Sarah E. Rudasill, BA

    2018-06-01

    Conclusions: These findings suggest that although ASA classifications predict discharge location and SOI scores predict length of stay and total costs, other factors beyond illness rating systems remain stronger predictors of discharge for THA patients.

  20. Dependence of total dose response of bipolar linear microcircuits on applied dose rate

    International Nuclear Information System (INIS)

    McClure, S.; Will, W.; Perry, G.; Pease, R.L.

    1994-01-01

    The effect of dose rate on the total dose radiation hardness of three commercial bipolar linear microcircuits is investigated. Total dose tests of linear bipolar microcircuits show larger degradation at 0.167 rad/s than at 90 rad/s, even after the high dose rate test is followed by a room temperature plus a 100 °C anneal. No systematic correlation could be found for degradation at low dose rate versus high dose rate and anneal. Comparison of the low dose rate data with the high dose rate anneal data indicates that MIL-STD-883, Method 1019.4 is not a worst-case test method when applied to bipolar microcircuits for low dose rate space applications.

  1. Attitudes of Mashhad Public Hospital's Nurses and Midwives toward the Causes and Rates of Medical Errors Reporting.

    Science.gov (United States)

    Mobarakabadi, Sedigheh Sedigh; Ebrahimipour, Hosein; Najar, Ali Vafaie; Janghorban, Roksana; Azarkish, Fatemeh

    2017-03-01

    Patient safety is one of the main objectives in healthcare services; however, medical errors are a prevalent potential occurrence for patients in treatment systems. Medical errors increase patient mortality and create challenges such as prolonged inpatient stays and increased costs. Controlling medical errors is very important, because besides being costly, these errors threaten patient safety. The aim was to evaluate the attitudes of nurses and midwives toward the causes and rates of medical error reporting. It was a cross-sectional observational study. The study population was 140 midwives and nurses employed in Mashhad public hospitals. Data were collected using the revised Goldstone (2001) questionnaire. SPSS 11.5 software was used for data analysis, applying descriptive and inferential statistics: relative frequency distributions and standard deviations were used to calculate means, the results were presented as tables and charts, and the chi-square test was used for inferential analysis. Most of the midwives and nurses (39.4%) were in the age range of 25 to 34 years, and the lowest percentage (2.2%) were aged 55-59 years. The highest average number of medical errors was found among employees with three to four years of work experience, while the lowest was among those with one to two years of experience. Medical errors were most frequent during the evening shift and least frequent during the night shift. Three main causes of medical errors were identified: illegible physician prescription orders, similarity of names between different drugs, and nurse fatigue. The most important causes of medical errors from the viewpoint of nurses and midwives were illegible physician's orders, drug name similarity with other drugs, nurse fatigue, and damaged drug labels or packaging, respectively. Head nurse feedback, peer

  2. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    International Nuclear Information System (INIS)

    Yamanashi, Yuki; Masubuchi, Kota; Yoshikawa, Nobuyuki

    2016-01-01

    The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.

  3. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    Energy Technology Data Exchange (ETDEWEB)

    Yamanashi, Yuki, E-mail: yamanasi@ynu.ac.jp [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan); Masubuchi, Kota; Yoshikawa, Nobuyuki [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan)

    2016-11-15

    The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.

  4. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    International Nuclear Information System (INIS)

    Chau, H.F.

    2002-01-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme making use of an adaptive privacy amplification procedure with two-way classical communication is reported. It is then proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 − 0.1√5 ≈ 27.6%, thereby making it the most error-resistant scheme known to date.
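
The tolerable error-rate threshold quoted above follows from a simple closed form; as a quick numerical check:

```python
import math

# Tolerable bit error rate threshold reported for the scheme: 1/2 - sqrt(5)/10
threshold = 0.5 - math.sqrt(5) / 10
print(f"{threshold:.4%}")  # ≈ 27.64%
```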

  5. Pyrosequencing as a tool for the detection of Phytophthora species: error rate and risk of false Molecular Operational Taxonomic Units.

    Science.gov (United States)

    Vettraino, A M; Bonants, P; Tomassini, A; Bruni, N; Vannini, A

    2012-11-01

    To evaluate the accuracy of pyrosequencing for the description of Phytophthora communities in terms of taxa identification and the risk of assigning false Molecular Operational Taxonomic Units (MOTUs), pyrosequencing of Internal Transcribed Spacer 1 (ITS1) amplicons was used to describe the structure of a DNA mixture comprising eight Phytophthora spp. and Pythium vexans. Pyrosequencing resulted in 16 965 reads, detecting all species in the template DNA mixture. Reducing the ITS1 sequence identity threshold resulted in a decrease in the number of unmatched reads but a concomitant increase in the number of false MOTUs. The total error rate was 0.63% and comprised mainly mismatches (0.25%). Pyrosequencing of the ITS1 region is an efficient and accurate technique for the detection and identification of Phytophthora spp. in environmental samples. However, the risk of allocating false MOTUs, even when demonstrated to be low, may require additional validation with alternative detection methods. Phytophthora spp. are considered among the most destructive groups of invasive plant pathogens, affecting thousands of cultivated and wild plants worldwide. Simultaneous early detection of Phytophthora complexes in environmental samples offers a unique opportunity for the interception of known and unknown species along pathways of introduction, along with the identification of these organisms in invaded environments. © 2012 The Authors. Letters in Applied Microbiology © 2012 The Society for Applied Microbiology.

  6. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) which can be yielded when sample size and allocation rate to the treatment arms can be modified in an interim analysis. Thereby it is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
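
The inflation of the type 1 error rate from data-dependent interim decisions can be illustrated with a far simpler (and milder) case than the paper's worst-case adaptation: merely testing at an interim look and again at the end, and rejecting if either z-statistic exceeds 1.96, already pushes the error rate above the nominal 5%. This is a hypothetical two-look Monte Carlo sketch, not the authors' maximization; all names and parameters are invented for the example:

```python
import math
import random

def two_look_rejects(n1: int, n2: int, rng: random.Random) -> bool:
    """One trial under H0 (mean 0, unit variance) with a peek after n1 of n2 observations."""
    data = [rng.gauss(0.0, 1.0) for _ in range(n2)]
    z_interim = sum(data[:n1]) / math.sqrt(n1)
    z_final = sum(data) / math.sqrt(n2)
    # Naive rule: reject at either look at the unadjusted 5% critical value.
    return abs(z_interim) > 1.96 or abs(z_final) > 1.96

rng = random.Random(1)
trials = 4000
rate = sum(two_look_rejects(50, 100, rng) for _ in range(trials)) / trials
print(rate)  # noticeably above the nominal 0.05
```

The paper's point is that allowing the experimenter to also re-choose sample size and allocation based on the unblinded interim estimates makes this inflation substantially worse than such a simple peeking rule.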

  7. Finding the right coverage : The impact of coverage and sequence quality on single nucleotide polymorphism genotyping error rates

    NARCIS (Netherlands)

    Fountain, Emily D.; Pauli, Jonathan N.; Reid, Brendan N.; Palsboll, Per J.; Peery, M. Zachariah

    Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown.

  8. Survey of total error of precipitation and homogeneous HDL-cholesterol methods and simultaneous evaluation of lyophilized saccharose-containing candidate reference materials for HDL-cholesterol

    NARCIS (Netherlands)

    C.M. Cobbaert (Christa); H. Baadenhuijsen; L. Zwang (Louwerens); C.W. Weykamp; P.N. Demacker; P.G.H. Mulder (Paul)

    1999-01-01

    textabstractBACKGROUND: Standardization of HDL-cholesterol is needed for risk assessment. We assessed for the first time the accuracy of HDL-cholesterol testing in The Netherlands and evaluated 11 candidate reference materials (CRMs). METHODS: The total error (TE) of

  9. Error resilient H.264/AVC Video over Satellite for low Packet Loss Rates

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren; Andersen, Jakob Dahl

    2007-01-01

    The performance of video over satellite is simulated. The error resilience tools of intra macroblock refresh and slicing are optimized for live broadcast video over satellite. The improved performance using feedback, using a cross- layer approach, over the satellite link is also simulated. The ne...

  10. SNP discovery in nonmodel organisms: strand bias and base-substitution errors reduce conversion rates.

    Science.gov (United States)

    Gonçalves da Silva, Anders; Barendse, William; Kijas, James W; Barris, Wes C; McWilliam, Sean; Bunch, Rowan J; McCullough, Russell; Harrison, Blair; Hoelzel, A Rus; England, Phillip R

    2015-07-01

    Single nucleotide polymorphisms (SNPs) have become the marker of choice for genetic studies in organisms of conservation, commercial or biological interest. Most SNP discovery projects in nonmodel organisms apply a strategy for identifying putative SNPs based on filtering rules that account for random sequencing errors. Here, we analyse data used to develop 4723 novel SNPs for the commercially important deep-sea fish, orange roughy (Hoplostethus atlanticus), to assess the impact of not accounting for systematic sequencing errors when filtering identified polymorphisms when discovering SNPs. We used SAMtools to identify polymorphisms in a velvet assembly of genomic DNA sequence data from seven individuals. The resulting set of polymorphisms were filtered to minimize 'bycatch'-polymorphisms caused by sequencing or assembly error. An Illumina Infinium SNP chip was used to genotype a final set of 7714 polymorphisms across 1734 individuals. Five predictors were examined for their effect on the probability of obtaining an assayable SNP: depth of coverage, number of reads that support a variant, polymorphism type (e.g. A/C), strand-bias and Illumina SNP probe design score. Our results indicate that filtering out systematic sequencing errors could substantially improve the efficiency of SNP discovery. We show that BLASTX can be used as an efficient tool to identify single-copy genomic regions in the absence of a reference genome. The results have implications for research aiming to identify assayable SNPs and build SNP genotyping assays for nonmodel organisms. © 2014 John Wiley & Sons Ltd.

  11. Sharp Threshold Detection Based on Sup-norm Error rates in High-dimensional Models

    DEFF Research Database (Denmark)

    Callot, Laurent; Caner, Mehmet; Kock, Anders Bredahl

    focused almost exclusively on estimation errors in stronger norms. We show that this sup-norm bound can be used to distinguish between zero and non-zero coefficients at a much finer scale than would have been possible using classical oracle inequalities. Thus, our sup-norm bound is tailored to consistent...

  12. On the Symbol Error Rate of M-ary MPSK over Generalized Fading Channels with Additive Laplacian Noise

    KAUST Repository

    Soury, Hamza

    2015-01-07

    This work considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox’s H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations [1].
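
The minimum-distance detection setup can be sketched with a Monte Carlo SER estimate. This is a simplified illustration with independent Laplacian noise per component and no fading, unlike the extended Generalized-K model analyzed in the paper, and without the closed-form Fox's H expression; all names and parameters are invented for the example:

```python
import cmath
import math
import random

def laplacian(rng: random.Random, scale: float) -> float:
    """Sample zero-mean Laplacian noise via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def mpsk_ser(m: int, scale: float, n_symbols: int, seed: int = 0) -> float:
    """Monte Carlo symbol error rate of M-PSK with a minimum-distance detector."""
    rng = random.Random(seed)
    points = [cmath.exp(2j * math.pi * k / m) for k in range(m)]
    errors = 0
    for _ in range(n_symbols):
        k = rng.randrange(m)
        # Received symbol: unit-circle constellation point plus Laplacian noise
        # on each component (in-phase and quadrature).
        rx = points[k] + complex(laplacian(rng, scale), laplacian(rng, scale))
        detected = min(range(m), key=lambda i: abs(rx - points[i]))
        errors += detected != k
    return errors / n_symbols

print(mpsk_ser(8, 0.1, 5000), mpsk_ser(8, 0.3, 5000))  # SER grows with noise scale
```

A simulation of this kind is what the authors use to validate the analytical expressions numerically.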

  13. On the symbol error rate of M-ary MPSK over generalized fading channels with additive Laplacian noise

    KAUST Repository

    Soury, Hamza

    2014-06-01

    This paper considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox's H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations. © 2014 IEEE.

  14. On the symbol error rate of M-ary MPSK over generalized fading channels with additive Laplacian noise

    KAUST Repository

    Soury, Hamza; Alouini, Mohamed-Slim

    2014-01-01

    This paper considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox's H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations. © 2014 IEEE.

  15. On the Symbol Error Rate of M-ary MPSK over Generalized Fading Channels with Additive Laplacian Noise

    KAUST Repository

    Soury, Hamza; Alouini, Mohamed-Slim

    2015-01-01

    This work considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox’s H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations [1].

  16. Determination of corrosion rate of reinforcement with a modulated guard ring electrode; analysis of errors due to lateral current distribution

    International Nuclear Information System (INIS)

    Wojtas, H.

    2004-01-01

    The main source of error in measuring the corrosion rate of rebars on site is a non-uniform current distribution between the small counter electrode (CE) on the concrete surface and the large rebar network. Guard ring electrodes (GEs) are used in an attempt to confine the excitation current within a defined area. In order to better understand the functioning of the modulated guard ring electrode and to assess its effectiveness in eliminating errors due to lateral spread of the current signal from the small CE, measurements of the polarisation resistance performed on a concrete beam have been numerically simulated. The effect of parameters such as rebar corrosion activity, concrete resistivity, concrete cover depth and size of the corroding area on errors in the estimation of the polarisation resistance of a single rebar has been examined. The results indicate that the modulated GE arrangement fails to confine the lateral spread of the CE current within a constant area. Using a constant diameter of confinement for the calculation of corrosion rate may therefore lead to serious errors when test conditions change. When high corrosion activity of the rebar and/or local corrosion occurs, the use of the modulated GE confinement may lead to significant underestimation of the corrosion rate.

  17. Relationship of dose rate and total dose to responses of continuously irradiated beagles

    International Nuclear Information System (INIS)

    Fritz, T.E.; Norris, W.P.; Tolle, D.V.; Seed, T.M.; Poole, C.M.; Lombard, L.S.; Doyle, D.E.

    1978-01-01

    Young-adult beagles were exposed continuously (22 hours/day) to 60Co γ rays in a specially constructed facility. The exposure rates were either 5, 10, 17, or 35 R/day, and the exposures were terminated at either 600, 1400, 2000, or 4000 R. A total of 354 dogs were irradiated; 221 are still alive as long-term survivors, some after more than 2000 days. The data on survival of these dogs, coupled with data from similar preliminary experiments, allow an estimate of the LD50 for γ-ray exposures given at a number of exposure rates. They also allow comparison of the relative importance of dose rate and total dose, and the interaction of these two variables, in the early and late effects after protracted irradiation. The LD50 for the beagle increases from 258 rad delivered at 15 R/minute to approximately 3000 rad at 10 R/day. Over this entire range, the LD50 is dependent upon hematopoietic damage. At 5 R/day and less, no meaningful LD50 can be determined; there is nearly normal continued hematopoietic function, survival is prolonged, and the dogs manifest varied individual responses in other organ systems. Although the experiment is not complete, interim data allow several important conclusions. Terminated exposures, while not as effective as radiation continued until death, can produce myelogenous leukemia at the same exposure rate, 10 R/day. More importantly, at the same total accumulated dose, lower exposure rates are more damaging than higher rates on the basis of the rate and degree of hematological recovery that occurs after termination of irradiation. Thus, the rate of hematologic depression, the nadir of the depression, and the rate of recovery are dependent upon exposure rate; the latter is inversely related and the former two are directly related to exposure rate.

  18. Relationship of dose rate and total dose to responses of continuously irradiated beagles

    International Nuclear Information System (INIS)

    Fritz, T.E.; Norris, W.P.; Tolle, D.V.; Seed, T.M.; Poole, C.M.; Lombard, L.S.; Doyle, D.E.

    1978-01-01

    Young-adult beagles were exposed continuously (22 hours/day) to 60Co gamma rays in a specially constructed facility. The exposure rates were 5, 10, 17 or 35 R/day, and the exposures were terminated at 600, 1400, 2000 or 4000 R. A total of 354 dogs were irradiated; 221 are still alive as long-term survivors, some after more than 2000 days. The data on survival of these dogs, coupled with data from similar preliminary experiments, allow an estimate of the LD50 for gamma-ray exposures given at a number of exposure rates. They also allow comparison of the relative importance of dose rate and total dose, and the interaction of these two variables, in the early and late effects after protracted irradiation. The LD50 for the beagle increases from 344 R (258 rads) delivered at 15 R/minute to approximately 4000 R (approximately 3000 rads) at 10 R/day. Over this entire range, the LD50 is dependent upon haematopoietic damage. At 5 R/day and less, no definitive LD50 can be determined; there is nearly normal continued haematopoietic function, survival is prolonged, and the dogs manifest varied individual responses in the organ systems. Although the experiment is not complete, interim data allow several important conclusions. Terminated exposures, while not as effective as irradiation continued until death, can produce myelogenous leukaemia at the same exposure rate, 10 R/day. More importantly, at the same total accumulated dose, lower exposure rates appear more damaging than higher rates on the basis of the rate and degree of haematological recovery that occurs after termination of irradiation. Thus, the rate of haematologic depression, the nadir of the depression and the rate of recovery are dependent upon exposure rate; the latter is inversely related and the first two are directly related to exposure rate. (author)

  19. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    Science.gov (United States)

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
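A rough sketch of the prediction idea: a one-dimensional smoother maps larval length to percent of total juvenile development. The calibration points and function names below are hypothetical illustrations, not the study's GAMs, which combined stage, length, weight, strain, and temperature as penalized smooth terms.

```python
from bisect import bisect_left

# Hypothetical calibration points (larval length in mm -> mean % of total
# juvenile development at that length); illustrative values only,
# NOT data from the study.
CAL = [(2.0, 10.0), (6.0, 25.0), (10.0, 45.0), (14.0, 70.0), (17.0, 90.0)]

def predict_dev_percent(length_mm):
    """Piecewise-linear interpolation of development % from larval length.

    A simple 1-D smoother like this is the crudest stand-in for one
    smooth term of a GAM; a real GAM penalizes wiggliness and combines
    several predictors additively.
    """
    xs = [x for x, _ in CAL]
    if length_mm <= xs[0]:
        return CAL[0][1]
    if length_mm >= xs[-1]:
        return CAL[-1][1]
    i = bisect_left(xs, length_mm)
    (x0, y0), (x1, y1) = CAL[i - 1], CAL[i]
    t = (length_mm - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)
```

With such a predictor, out-of-sample error can then be assessed exactly as in the abstract: compare predicted to true development percent on an independent data set.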

  20. Impact of Total, Internal and External Government Debt on Interest Rate in Pakistan

    OpenAIRE

    Perveen, Asma; Munir, Kashif

    2017-01-01

    The objective of the study is to examine impact of total, internal and external government debt on nominal interest rate in Pakistan. To attain these objectives, the study used annual time series data from 1973 to 2016. The study used loanable fund theory as theoretical model and ARDL bound testing approach for cointegration and Granger causality test to estimate the results. The results of the study found negative relation between total government debt, external debt and nominal interest rat...

  1. High Re-Operation Rates Using Conserve Metal-On-Metal Total Hip Articulations

    DEFF Research Database (Denmark)

    Mogensen, S L; Jakobsen, Thomas; Christoffersen, Hardy

    2016-01-01

INTRODUCTION: Metal-on-metal hip articulations have been intensely debated after reports of adverse reactions and high failure rates. The aim of this study was to retrospectively evaluate the implant of a metal-on-metal total hip articulation (MOM THA) from a single manufacturer in a two-center st…

  2. Determination of total flow rate and flow rate of every operating branch in commissioning of heavy water loop for ARR-2

    International Nuclear Information System (INIS)

    Han Yan

    1997-01-01

The heavy water loop (i.e., RCS) for ARR-2 in Algeria is a complex loop. Flow regulating means are not provided by the design in order to operate the reactor safely and simplify operating processes. Precisely determining the orifice diameters of the resistance parts of the loop is therefore the key to decreasing the deviation between practical and design flow rates. Commissioning tests shall ensure that, under every one of the combined operating modes of the pumps, the total coolant flow rate is about the same (when the number of pumps operating in parallel is the same) and is consistent with the design requirement, and that the distribution of coolant flow to every branch is uniform. The flow determination is divided into two steps. First, the corresponding resistance part at each pump outlet is determined in a commissioning test of the shorted heavy water loop with light water, solving the problem of uniform flow distribution to each branch. Secondly, the resistance part at the reactor inlet is determined in a commissioning test of the heavy water loop connected to the vessel, solving the problem of keeping the total heavy water flow rate within the optimal range. According to the practical requirements of the project, a computer program for hydraulic calculation and analysis of the heavy water loop was developed, and a hydraulic characteristics test of part of the loop was conducted in order to correct calculation error. By means of program calculation combined with tests on site, the orifice diameters of 9 resistance parts were determined rapidly and precisely, and the design and operating requirements were fully met

  3. Low-dose-rate total lymphoid irradiation: a new method of rapid immunosuppression

    International Nuclear Information System (INIS)

    Blum, J.E.; de Silva, S.M.; Rachman, D.B.; Order, S.E.

    1988-01-01

    Total Lymphoid Irradiation (TLI) has been successful in inducing immunosuppression in experimental and clinical applications. However, both the experimental and clinical utility of TLI are hampered by the prolonged treatment courses required (23 days in rats and 30-60 days in humans). Low-dose-rate TLI has the potential of reducing overall treatment time while achieving comparable immunosuppression. This study examines the immunosuppressive activity and treatment toxicity of conventional-dose-rate (23 days) vs low-dose-rate (2-7 days) TLI. Seven groups of Lewis rats were given TLI with 60Co. One group was treated at conventional-dose-rates (80-110 cGy/min) and received 3400 cGy in 17 fractions over 23 days. Six groups were treated at low-dose-rate (7 cGy/min) and received total doses of 800, 1200, 1800, 2400, 3000, and 3400 cGy over 2-7 days. Rats treated at conventional-dose-rates over 23 days and at low-dose-rate over 2-7 days tolerated radiation with minimal toxicity. The level of immunosuppression was tested using allogeneic (Brown-Norway) skin graft survival. Control animals retained allogeneic skin grafts for a mean of 14 days (range 8-21 days). Conventional-dose-rate treated animals (3400 cGy in 23 days) kept their grafts 60 days (range 50-66 days) (p less than .001). Low-dose-rate treated rats (800 to 3400 cGy total dose over 2-7 days) also had prolongation of allogeneic graft survival times following TLI with a dose-response curve established. The graft survival time for the 3400 cGy low-dose-rate group (66 days, range 52-78 days) was not significantly different from the 3400 cGy conventional-dose-rate group (p less than 0.10). When the total dose given was equivalent, low-dose-rate TLI demonstrated an advantage of reduced overall treatment time compared to conventional-dose-rate TLI (7 days vs. 23 days) with no increase in toxicity

  4. Order of current variance and diffusivity in the rate one totally asymmetric zero range process

    NARCIS (Netherlands)

    Balázs, M.; Komjáthy, J.

    2008-01-01

We prove that the variance of the current across a characteristic is of order t^{2/3} in a stationary constant rate totally asymmetric zero range process, and that the diffusivity has order t^{1/3}. This is a step towards proving universality of this scaling behavior in the class of one-dimensional

  5. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  6. Controlling type I error rate for fast track drug development programmes.

    Science.gov (United States)

    Shih, Weichung J; Ouyang, Peter; Quan, Hui; Lin, Yong; Michiels, Bart; Bijnens, Luc

    2003-03-15

    The U.S. Food and Drug Administration (FDA) Modernization Act of 1997 has a Section (No. 112) entitled 'Expediting Study and Approval of Fast Track Drugs' (the Act). In 1998, the FDA issued a 'Guidance for Industry: the Fast Track Drug Development Programs' (the FTDD programmes) to meet the requirement of the Act. The purpose of FTDD programmes is to 'facilitate the development and expedite the review of new drugs that are intended to treat serious or life-threatening conditions and that demonstrate the potential to address unmet medical needs'. Since then many health products have reached patients who suffered from AIDS, cancer, osteoporosis, and many other diseases, sooner by utilizing the Fast Track Act and the FTDD programmes. In the meantime several scientific issues have also surfaced when following the FTDD programmes. In this paper we will discuss the concept of two kinds of type I errors, namely, the 'conditional approval' and the 'final approval' type I errors, and propose statistical methods for controlling them in a new drug submission process. Copyright 2003 John Wiley & Sons, Ltd.

  7. Bit Error Rate Due to Misalignment of Earth Station Antenna Pointing to Satellite

    Directory of Open Access Journals (Sweden)

    Wahyu Pamungkas

    2010-04-01

One problem causing reduction of energy in satellite communication systems is misalignment of the earth station antenna pointing to the satellite. Error in pointing affects the quality of the information signal energy per bit at the earth station. In this research, error in the pointing angle occurred only at the receiver (Rx) antenna, while the transmitter (Tx) antenna precisely pointed to the satellite. The research was conducted on two satellites, TELKOM-1 and TELKOM-2. At first, measurement was made by directing the Tx antenna precisely to the satellite, resulting in an antenna pattern shown by a spectrum analyzer. The spectrum analyzer output is drawn to the right scale to describe the shift of the azimuth and elevation pointing angles towards the satellite. Drifting from the precise pointing influenced the received link budget, as indicated by the antenna pattern, which shows the reduction of received power level as a result of pointing misalignment. In conclusion, increasing misalignment of pointing to the satellite results in the reduction of the received signal link-budget parameters of the down-link traffic.
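The link-budget penalty from pointing misalignment is commonly approximated with the parabolic main-lobe model. The sketch below uses the textbook approximation L = 12(θ/θ_3dB)², which is a generic rule of thumb, not a formula taken from this paper:

```python
def pointing_loss_db(offset_deg, hpbw_deg):
    """Approximate antenna pointing loss in dB.

    Parabolic main-lobe approximation L = 12 * (theta / theta_3dB)^2,
    valid for small offsets inside the main lobe. hpbw_deg is the
    half-power (3 dB) beamwidth; offset_deg is the pointing error.
    """
    return 12.0 * (offset_deg / hpbw_deg) ** 2
```

By construction, an offset of half the half-power beamwidth gives exactly 3 dB of loss, which subtracts directly from the received Eb/N0 in the link budget.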

  8. High bacterial contamination rate of electrocautery tips during total hip and knee arthroplasty.

    Science.gov (United States)

    Abdelaziz, Hussein; Zahar, Akos; Lausmann, Christian; Gehrke, Thorsten; Fickenscher, Helmut; Suero, Eduardo M; Gebauer, Matthias; Citak, Mustafa

    2018-04-01

    The aim of the study was to quantify the bacterial contamination rate of electrocautery tips during primary total joint replacement (TJR), as well as during aseptic and septic revision TJR. A total of 150 electrocautery tips were collected between April and July 2017. TJR surgeries were divided into three groups: (1) primary, (2) aseptic and (3) septic revisions. In each group, a total of 50 electrocautery tips were collected. A monopolar electrocautery with a reusable stainless-steel blade tip was used in all cases. The rate of bacterial contamination was determined for all groups. Correlation of exposure time and type of surgery was analyzed. The overall bacterial contamination rate was 14.7% (95% CI 9.4 to 21.4%). The highest contamination rate occurred in the septic revision group (30.0%; 95% CI 17.9 to 44.6%), followed by the primary cases group (10.0%; 95% CI 3.3 to 21.8%) and the aseptic revision group (4.0%; 95% CI 0.5 to 13.7%). Exposure time did not affect the bacterial contamination rate. In 12 out of 15 (80%) contaminations identified in the septic group, we found the same causative microorganism of the prosthetic joint infection on the electrocautery tip. The bacterial contamination of the electrocautery tips is relatively high, especially during septic hip revision arthroplasty. Electrocautery tips should be changed after debridement of infected tissue.
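The interval estimates quoted above (e.g., 14.7% of 150 tips, i.e., 22 contaminated) can be reproduced approximately with a standard binomial interval. The sketch below uses the Wilson score interval as a common choice; the paper's exact method is not stated here, so small differences from the quoted bounds are expected:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Overall contamination: 22 of 150 electrocautery tips (14.7%).
lo, hi = wilson_ci(22, 150)
```

For 22/150 this gives roughly 9.9% to 21.2%, close to the 9.4-21.4% quoted (which was likely computed with a different, e.g. exact, method).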

  9. Comparing 30-day all-cause readmission rates between tibiotalar fusion and total ankle replacement.

    Science.gov (United States)

    Merrill, Robert K; Ferrandino, Rocco M; Hoffman, Ryan; Ndu, Anthony; Shaffer, Gene W

    2018-01-12

End-stage ankle arthritis is a debilitating condition that negatively impacts patient quality of life. Tibiotalar fusion and total ankle replacement are treatment options for managing ankle arthritis. Few studies have examined short term readmission rates of these two procedures. The objective of this study was to compare all-cause 30-day readmission rates between patients undergoing tibiotalar fusion vs. total ankle replacement. This study queried the Nationwide Readmission Database (NRD) from 2013-2014 and used International Classification of Diseases, 9th Revision (ICD-9) procedure codes to identify all patients who underwent a tibiotalar fusion or a total ankle replacement. Comorbidities, insurance status, hospital characteristics, and readmission rates were statistically compared between the two cohorts. Risk factors were then identified for 30-day readmission. A total of 5660 patients were analyzed with 2667 in the tibiotalar fusion cohort and 2993 in the total ankle replacement cohort. Univariate analysis revealed that the readmission rate after tibiotalar fusion (4.4%) was statistically greater than after total ankle replacement (1.4%). Multivariable regression analysis indicated that deficiency anemia (OR 2.18), coagulopathy (OR 3.51), renal failure (OR 2.83), other insurance relative to private (OR 3.40), and tibiotalar fusion (OR 2.51) were all statistically significant independent risk factors for having a readmission within 30-days. These findings suggest that during the short-term period following discharge from the hospital, patients who received a tibiotalar fusion are more likely to experience a 30-day readmission. These findings are important for decision making when a surgeon encounters a patient with end stage ankle arthritis. Level III, cohort study. Published by Elsevier Ltd.

  10. The impact of the total tax rate reduction on public services provided in Romania

    Directory of Open Access Journals (Sweden)

    Adina TRANDAFIR

    2014-09-01

Against the background of economic globalization, governments tend to take tax measures disadvantageous to society in order to increase the attractiveness of the business environment. A common measure for this purpose is a reduction in the tax rate. According to the classical theory of tax competition, such a measure leads to under-provision of public goods. This article aims to show, through an econometric analysis, whether reducing the total tax rate in Romania over the period 2006-2013 had a negative impact on public services. For this, using linear regression, the article analysed the correlation between the total tax rate and the variation in the share of the main public service spending in GDP.
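The regression behind such a two-variable analysis is ordinary least squares of the spending share on the tax rate. A minimal sketch, with made-up illustrative numbers (NOT the Romanian series from the article):

```python
def ols_slope_intercept(xs, ys):
    """Fit y = a + b*x by ordinary least squares."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Hypothetical illustrative series: total tax rate (% of profit)
# and public-service spending (% of GDP).
tax_rate = [46.0, 45.0, 44.5, 44.0, 43.0]
spending = [13.2, 13.0, 12.9, 12.8, 12.5]
a, b = ols_slope_intercept(tax_rate, spending)
```

A positive slope b would mean spending falls as the tax rate falls, which is the under-provision pattern the classical theory predicts.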

  11. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    Science.gov (United States)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  12. Capacity Versus Bit Error Rate Trade-Off in the DVB-S2 Forward Link

    Directory of Open Access Journals (Sweden)

    Berioli Matteo

    2007-01-01

The paper presents an approach to optimize the use of satellite capacity in DVB-S2 forward links. By reducing the so-called safety margins in the adaptive coding and modulation technique, it is possible to increase the spectral efficiency at the expense of an increased BER on the transmission. The work shows how a system can be tuned to operate at different degrees of this trade-off, and the performance which can be achieved in terms of BER/PER, spectral efficiency, and the interarrival time, duration, and strength of the error bursts. The paper also describes how a Markov chain can be used to model the ModCod transitions in a DVB-S2 system, and it presents results for the calculation of the transition probabilities in two cases.
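A Markov model of ModCod transitions boils down to a row-stochastic transition matrix whose stationary distribution gives the long-run fraction of time spent in each ModCod. A minimal sketch with a hypothetical 3-state chain (the transition probabilities below are invented for illustration, not taken from the paper):

```python
def stationary_distribution(P, iters=200):
    """Power-iterate a row-stochastic transition matrix toward its
    stationary distribution (assumes the chain is ergodic)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical 3-ModCod chain (states: robust, medium, efficient).
P = [
    [0.80, 0.15, 0.05],
    [0.10, 0.80, 0.10],
    [0.05, 0.15, 0.80],
]
pi = stationary_distribution(P)
```

Weighting each state's spectral efficiency and BER by this distribution is one way to evaluate a chosen margin-reduction operating point.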

  14. Standardized error severity score (ESS) ratings to quantify risk associated with child restraint system (CRS) and booster seat misuse.

    Science.gov (United States)

    Rudin-Brown, Christina M; Kramer, Chelsea; Langerak, Robin; Scipione, Andrea; Kelsey, Shelley

    2017-11-17

Although numerous research studies have reported high levels of error and misuse of child restraint systems (CRS) and booster seats in experimental and real-world scenarios, conclusions are limited because they provide little information regarding which installation issues pose the highest risk and thus should be targeted for change. Beneficial to legislating bodies and researchers alike would be a standardized, globally relevant assessment of the potential injury risk associated with more common forms of CRS and booster seat misuse, which could be applied with observed error frequency, for example in car seat clinics or during prototype user testing, to better identify and characterize the installation issues of greatest risk to safety. A group of 8 leading world experts in CRS and injury biomechanics, who were members of an international child safety project, estimated the potential injury severity associated with common forms of CRS and booster seat misuse. These injury risk error severity score (ESS) ratings were compiled and compared to scores from previous research that had used a similar procedure but with fewer respondents. To illustrate their application, and as part of a larger study examining CRS and booster seat labeling requirements, the new standardized ESS ratings were applied to objective installation performance data from 26 adult participants who installed a convertible (rear- vs. forward-facing) CRS and booster seat in a vehicle, and a child test dummy in the CRS and booster seat, using labels that only just met minimal regulatory requirements. The outcome measure, the risk priority number (RPN), represented the composite scores of injury risk and observed installation error frequency. Variability within the sample of ESS ratings in the present study was smaller than that generated in previous studies, indicating better agreement among experts on what constituted injury risk. Application of the new standardized ESS ratings to installation…
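The risk priority number described above combines a severity rating with an observed error frequency, in the spirit of FMEA-style risk ranking. A minimal sketch, where the composite is taken as a simple product and all misuse modes, scales, and counts are hypothetical (the paper's exact weighting is not given here):

```python
def risk_priority(ess, frequency):
    """Composite risk priority number: injury severity rating (ESS)
    times observed error frequency. The multiplicative form is an
    assumption for illustration."""
    return ess * frequency

# Hypothetical misuse modes: (ESS on a 1-10 scale, observed count
# among 26 installers) -- illustrative values only.
misuses = {
    "loose harness": (8, 12),
    "wrong belt path": (9, 5),
    "loose vehicle install": (7, 15),
}
ranked = sorted(misuses, key=lambda m: risk_priority(*misuses[m]), reverse=True)
```

Ranking by RPN surfaces frequent-and-severe misuse modes (here the hypothetical "loose vehicle install") ahead of rarer ones even when the rarer ones score higher on severity alone.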

  15. Linear transceiver design for nonorthogonal amplify-and-forward protocol using a bit error rate criterion

    KAUST Repository

    Ahmed, Qasim Zeeshan; Park, Kihong; Alouini, Mohamed-Slim; Aï ssa, Sonia

    2014-01-01

    The ever growing demand of higher data rates can now be addressed by exploiting cooperative diversity. This form of diversity has become a fundamental technique for achieving spatial diversity by exploiting the presence of idle users in the network

  16. Relationship of Employee Attitudes and Supervisor-Controller Ratio to En Route Operational Error Rates

    National Research Council Canada - National Science Library

    Broach, Dana

    2002-01-01

    ...; Rodgers, Mogford, Mogford, 1998). In this study, the relationship of organizational factors to en route OE rates was investigated, based on an adaptation of the Human Factors Analysis and Classification System (HFACS; Shappell & Wiegmann 2000...

  17. Dose rate and dose fractionation studies in total body irradiation of dogs

    International Nuclear Information System (INIS)

    Kolb, H.J.; Netzel, B.; Schaffer, E.; Kolb, H.

    1979-01-01

Total body irradiation (TBI) with 800-900 rads and allogeneic bone marrow transplantation according to the regimen designated by the Seattle group has induced remissions in patients with otherwise refractory acute leukemias. Relapse of leukemia after bone marrow transplantation remains the major problem when the Seattle set up of two opposing 60 Co sources and a low dose rate is used in TBI. Studies in dogs with TBI at various dose rates confirmed observations in mice that gastrointestinal toxicity, unlike toxicity against hemopoietic stem cells and possibly also leukemic stem cells, depends on the dose rate. However, following very high single doses (2400 R) and marrow infusion, acute gastrointestinal toxicity was not prevented by the lowest dose rate studied (0.5 R/min). Fractionated TBI with fractions of 600 R in addition to 1200 R (1000 rads) permitted the application of total doses up to 3000 R followed by marrow infusion without irreversible toxicity. 26 dogs given 2400-3000 R have been observed for presently up to 2 years with regard to delayed radiation toxicity. This toxicity was mild in dogs given single doses at a low dose rate or fractionated TBI. Fractionated TBI is presently evaluated with allogeneic transplants in the dog before being applied to leukemic patients

  18. Correcting for binomial measurement error in predictors in regression with application to analysis of DNA methylation rates by bisulfite sequencing.

    Science.gov (United States)

    Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal

    2016-09-30

Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons; dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.
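The attenuation problem the paper tackles can be demonstrated with a short simulation. The sketch below uses a simple moment correction (subtract the estimated binomial error variance from Var(w) and rescale the naive slope), which is a much cruder cousin of the paper's SIMEX and beta-binomial regression-calibration methods; all numbers are synthetic:

```python
import random

random.seed(7)

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# True predictor is a proportion x, observed only through
# w = Binomial(m, x) / m (heteroscedastic binomial error, as with
# methylation rates estimated from m sequencing reads).
n, m = 5000, 5
b_true = 2.0
x = [0.2 + 0.6 * random.random() for _ in range(n)]
w = [sum(random.random() < xi for _ in range(m)) / m for xi in x]
y = [1.0 + b_true * xi + random.gauss(0.0, 0.1) for xi in x]

b_naive = ols_slope(w, y)  # attenuated toward zero

# Moment correction: w*(1-w)/(m-1) is an unbiased estimate of the
# per-observation binomial error variance x*(1-x)/m.
mw = sum(w) / n
var_w = sum((wi - mw) ** 2 for wi in w) / (n - 1)
err_var = sum(wi * (1 - wi) / (m - 1) for wi in w) / n
reliability = (var_w - err_var) / var_w
b_corrected = b_naive / reliability
```

With only m = 5 "reads" per observation the naive slope is badly attenuated, and the corrected slope lands near the true value of 2; SIMEX and regression calibration achieve the same end with better small-sample behavior.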

  19. Application of analytical quality by design principles for the determination of alkyl p-toluenesulfonates impurities in Aprepitant by HPLC. Validation using total-error concept.

    Science.gov (United States)

    Zacharis, Constantinos K; Vastardi, Elli

    2018-02-20

In the research presented we report the development of a simple and robust liquid chromatographic method for the quantification of two genotoxic alkyl sulphonate impurities (namely methyl p-toluenesulfonate and isopropyl p-toluenesulfonate) in Aprepitant API substances using the Analytical Quality by Design (AQbD) approach. Following the steps of the AQbD protocol, the selected critical method attributes (CMAs) were the separation criteria between the critical peak pairs, the analysis time and the peak efficiencies of the analytes. The critical method parameters (CMPs) included the flow rate, the gradient slope and the acetonitrile content at the first step of the gradient elution program. Multivariate experimental designs, namely Plackett-Burman and Box-Behnken designs, were conducted sequentially for factor screening and optimization of the method parameters. The optimal separation conditions were estimated using the desirability function. The method was fully validated in the range of 10-200% of the target concentration limit of the analytes using the "total error" approach. Accuracy profiles - a graphical decision making tool - were constructed using the results of the validation procedures. The β-expectation tolerance intervals did not exceed the acceptance criteria of ±10%, meaning that 95% of future results will be included in the defined bias limits. The relative bias ranged between -1.3% and 3.8% for both analytes, while the RSD values for repeatability and intermediate precision were less than 1.9% in all cases. The achieved limit of detection (LOD) and the limit of quantification (LOQ) were adequate for the specific purpose and found to be 0.02% (corresponding to 48 μg g-1 in the sample) for both methyl and isopropyl p-toluenesulfonate. As proof-of-concept, the validated method was successfully applied in the analysis of several Aprepitant batches indicating that this methodology could be used for routine quality control analyses. Copyright © 2017 Elsevier B

  20. Bit Error-Rate Minimizing Detector for Amplify-and-Forward Relaying Systems Using Generalized Gaussian Kernel

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2013-01-01

In this letter, a new detector is proposed for amplify-and-forward (AF) relaying systems communicating with the assistance of relays. The major goal of this detector is to improve the bit error rate (BER) performance of the receiver. The probability density function is estimated with the help of the kernel density technique. A generalized Gaussian kernel is proposed. This new kernel provides more flexibility and encompasses Gaussian and uniform kernels as special cases. The optimal window width of the kernel is calculated. Simulation results show that a gain of more than 1 dB can be achieved in terms of BER performance as compared to the minimum mean square error (MMSE) receiver when communicating over Rayleigh fading channels.
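The generalized Gaussian family mentioned above has the standard density form K(u) = β/(2αΓ(1/β)) exp(-(|u|/α)^β); β = 2 gives a Gaussian-shaped kernel and β → ∞ approaches a uniform one. A minimal density-estimation sketch (the shape/scale values and window width are illustrative, not the letter's optimized choices):

```python
from math import exp, gamma

def gg_kernel(u, alpha=1.0, beta=2.0):
    """Generalized Gaussian kernel:
    K(u) = beta / (2*alpha*Gamma(1/beta)) * exp(-(|u|/alpha)**beta).
    beta = 1 gives a Laplace shape, beta = 2 a Gaussian shape, and
    large beta approaches a uniform kernel on [-alpha, alpha]."""
    c = beta / (2.0 * alpha * gamma(1.0 / beta))
    return c * exp(-(abs(u) / alpha) ** beta)

def kde(t, samples, alpha=1.0, beta=2.0, h=0.5):
    """Kernel density estimate at point t with window width h."""
    return sum(gg_kernel((t - s) / h, alpha, beta) for s in samples) / (len(samples) * h)
```

In a detector, such an estimate of the received-signal density replaces the Gaussian assumption that the MMSE receiver implicitly makes.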

  1. Efficient Total Nitrogen Removal in an Ammonia Gas Biofilter through High-Rate OLAND

    DEFF Research Database (Denmark)

    De Clippeleir, Haydée; Courtens, Emilie; Mosquera, Mariela

    2012-01-01

Ammonia gas is conventionally treated in nitrifying biofilters; however, addition of organic carbon to perform post-denitrification is required to obtain total nitrogen removal. Oxygen-limited autotrophic nitrification/denitrification (OLAND), applied in full-scale for wastewater treatment, can offer a cost-effective alternative for gas treatment. In this study, the OLAND application thus was broadened toward ammonia-loaded gaseous streams. A down-flow, oxygen-saturated biofilter (height of 1.5 m; diameter of 0.11 m) was fed with an ammonia gas stream (248 ± 10 ppmv) at a loading rate of 0… at water flow rates of 1.3 ± 0.4 m3 m–2 biofilter section d–1. Profile measurements revealed that 91% of the total nitrogen activity was taking place in the top 36% of the filter. This study demonstrated for the first time highly effective and sustainable autotrophic ammonia removal in a gas biofilter...

  2. Weight suppression predicts total weight gain and rate of weight gain in outpatients with anorexia nervosa.

    Science.gov (United States)

    Carter, Frances A; Boden, Joseph M; Jordan, Jennifer; McIntosh, Virginia V W; Bulik, Cynthia M; Joyce, Peter R

    2015-11-01

    The present study sought to replicate the finding of Wildes and Marcus, Behav Res Ther, 50, 266-274, 2012 that higher levels of weight suppression at pretreatment predict greater total weight gain, faster rate of weight gain, and bulimic symptoms amongst patients admitted with anorexia nervosa. Participants were 56 women with anorexia nervosa diagnosed by using strict or lenient weight criteria, who were participating in a randomized controlled psychotherapy trial (McIntosh et al., Am J Psychiatry, 162, 741-747, 2005). Thirty-five women completed outpatient treatment and post-treatment assessment. Weight suppression was the discrepancy between highest lifetime weight at adult height and weight at pretreatment assessment. Outcome variables were total weight gain, rate of weight gain, and bulimic symptoms in the month prior to post-treatment assessment [assessed using the Eating Disorders Examination (Fairburn et al., Binge-Eating: Nature, Assessment and Treatment. New York: Guilford, 1993)]. Weight suppression was positively associated with total weight gain and rate of weight gain over treatment. Regression models showed that this association could not be explained by covariates (age at onset of anorexia nervosa and treatment modality). Weight suppression was not significantly associated with bulimic symptoms in the month prior to post-treatment assessment, regardless of whether bulimic symptoms were examined as continuous or dichotomous variables. The present study reinforces the previous finding that weight suppression predicts total weight gain and rate of weight gain amongst patients being treated for anorexia nervosa. Methodological issues may explain the failure of the present study to find that weight suppression predicts bulimic symptoms. Weight suppression at pretreatment for anorexia nervosa should be assessed routinely and may inform treatment planning. © 2015 Wiley Periodicals, Inc.

  3. Insight into the Physical and Dynamical Processes that Control Rapid Increases in Total Flash Rate

    Science.gov (United States)

    Schultz, Christopher J.; Carey, Lawrence D.; Schultz, Elise V.; Blakeslee, Richard J.; Goodman, Steven J.

    2015-01-01

    Rapid increases in total lightning (also termed "lightning jumps") have been observed for many decades. Lightning jumps have been well correlated to severe and hazardous weather occurrence. The main focus of lightning jump work has been on the development of lightning algorithms to be used in real-time assessment of storm intensity. However, in these studies it is typically assumed that the updraft "increases" without direct measurements of the vertical motion, or specification of which updraft characteristic actually increases (e.g., average speed, maximum speed, or convective updraft volume). Therefore, an end-to-end physical and dynamical basis for coupling rapid increases in total flash rate to increases in updraft speed and volume must be understood in order to ultimately relate lightning occurrence to severe storm metrics. Herein, we use polarimetric, multi-Doppler, and lightning mapping array measurements to provide physical context as to why rapid increases in total lightning are closely tied to severe and hazardous weather.

  4. Soft error rate estimations of the Kintex-7 FPGA within the ATLAS Liquid Argon (LAr) Calorimeter

    International Nuclear Information System (INIS)

    Wirthlin, M J; Harding, A; Takai, H

    2014-01-01

    This paper summarizes the radiation testing performed on the Xilinx Kintex-7 FPGA in an effort to determine if the Kintex-7 can be used within the ATLAS Liquid Argon (LAr) Calorimeter. The Kintex-7 device was tested with wide-spectrum neutrons, protons, heavy ions, and mixed high-energy hadron environments. The results of these tests were used to estimate the configuration RAM and block RAM upset rates within the ATLAS LAr. These estimates suggest that the configuration memory will upset at a rate of 1.1 × 10⁻¹⁰ upsets/bit/s and the block RAM will upset at a rate of 9.06 × 10⁻¹¹ upsets/bit/s. For the Kintex 7K325 device, this translates to 6.85 × 10⁻³ upsets/device/s for configuration memory and 1.49 × 10⁻³ upsets/device/s for block memory.
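
    The per-device figures above are simply the per-bit rates scaled by the number of susceptible bits. A minimal sketch of that scaling, where the bit counts are assumptions back-derived from the quoted numbers rather than official Kintex 7K325 device figures:

```python
# Scale per-bit upset rates to per-device rates, as in the abstract.
# The bit counts below are ASSUMPTIONS back-derived from the quoted
# numbers, not official Kintex 7K325 memory sizes.
CONFIG_BITS = 6.85e-3 / 1.1e-10    # ≈ 6.2e7 configuration bits (assumed)
BRAM_BITS = 1.49e-3 / 9.06e-11     # ≈ 1.6e7 block-RAM bits (assumed)

def device_upset_rate(per_bit_rate, n_bits):
    """Upsets/device/s = (upsets/bit/s) × (bits/device)."""
    return per_bit_rate * n_bits

config_rate = device_upset_rate(1.1e-10, CONFIG_BITS)  # ≈ 6.85e-3 upsets/device/s
mtbu_hours = 1.0 / config_rate / 3600.0                # mean time between upsets, hours
```

    At these rates a device-level configuration upset is expected roughly every few minutes, which motivates the scrubbing and mitigation strategies such estimates are used to size.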

  5. Reducing Error Rates for Iris Image using higher Contrast in Normalization process

    Science.gov (United States)

    Aminu Ghali, Abdulrahman; Jamel, Sapiee; Abubakar Pindar, Zahraddeen; Hasssan Disina, Abdulkadir; Mat Daris, Mustafa

    2017-08-01

    Iris recognition is among the most secure and fastest means of identification and authentication. However, iris recognition systems suffer from blurring, low contrast, and poor illumination due to low-quality images, which compromises the accuracy of the system. The acceptance or rejection rate of a verified user depends heavily on the quality of the image; in many cases, an iris recognition system with low image contrast could falsely accept or reject a user. Therefore, this paper adopts the histogram equalization technique to address the problem of the false rejection rate (FRR) and false acceptance rate (FAR) by enhancing the contrast of the iris image. Histogram equalization enhances image quality and neutralizes the low contrast of the image at the normalization stage. The experimental results show that histogram equalization reduced FRR and FAR compared to the existing techniques.
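
    Histogram equalization itself is a standard transform: each intensity is mapped through the image's normalized cumulative histogram, spreading a narrow intensity range across the full scale. A minimal pure-Python sketch of the general technique (not the authors' implementation):

```python
def equalize_histogram(pixels):
    """Histogram equalization for 8-bit grayscale pixel values (ints 0..255).
    Maps each intensity through the normalized cumulative histogram (CDF)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)   # first non-zero CDF value
    n = len(pixels)
    # Standard equalization lookup table, scaled to 0..255
    lut = [round((c - cdf_min) / (n - cdf_min) * 255) if n > cdf_min else 0
           for c in cdf]
    return [lut[p] for p in pixels]

# Low-contrast example: values squeezed into 100..120 stretch to 0..255
low = [v for v in range(100, 121) for _ in range(10)]
out = equalize_histogram(low)
```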

  6. Measurement Properties of Performance-Specific Pain Ratings of Patients Awaiting Total Joint Arthroplasty as a Consequence of Osteoarthritis

    Science.gov (United States)

    Stratford, Paul W.; Kennedy, Deborah M.; Woodhouse, Linda J.; Spadoni, Gregory

    2008-01-01

    Purpose: To estimate the test–retest reliability of the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) pain sub-scale and performance-specific assessments of pain, as well as the association between these measures for patients awaiting primary total hip or knee arthroplasty as a consequence of osteoarthritis. Methods: A total of 164 patients awaiting unilateral primary hip or knee arthroplasty completed four performance measures (self-paced walk, timed up and go, stair test, six-minute walk) and the WOMAC. Scores for 22 of these patients provided test–retest reliability data. Estimates of test–retest reliability (Type 2,1 intraclass correlation coefficient [ICC] and standard error of measurement [SEM]) and the association between measures were examined. Results: ICC values for individual performance-specific pain ratings were between 0.70 and 0.86; SEM values were between 0.97 and 1.33 pain points. ICC estimates for the four-item performance pain ratings and the WOMAC pain sub-scale were 0.82 and 0.57 respectively. The correlation between the sum of the pain scores for the four performance measures and the WOMAC pain sub-scale was 0.62. Conclusion: Reliability estimates for the performance-specific assessments of pain using the numeric pain rating scale were consistent with values reported for patients with a spectrum of musculoskeletal conditions. The reliability estimate for the WOMAC pain sub-scale was lower than typically reported in the literature. The level of association between the WOMAC pain sub-scale and the various performance-specific pain scales suggests that the scores can be used interchangeably when applied to groups but not for individual patients. PMID:20145758

  7. Effect of a health system's medical error disclosure program on gastroenterology-related claims rates and costs.

    Science.gov (United States)

    Adams, Megan A; Elmunzer, B Joseph; Scheiman, James M

    2014-04-01

    In 2001, the University of Michigan Health System (UMHS) implemented a novel medical error disclosure program. This study analyzes the effect of this program on gastroenterology (GI)-related claims and costs. This was a review of claims in the UMHS Risk Management Database (1990-2010), naming a gastroenterologist. Claims were classified according to pre-determined categories. Claims data, including incident date, date of resolution, and total liability dollars, were reviewed. Mean total liability incurred per claim in the pre- and post-implementation eras was compared. Patient encounter data from the Division of Gastroenterology was also reviewed in order to benchmark claims data with changes in clinical volume. There were 238,911 GI encounters in the pre-implementation era and 411,944 in the post-implementation era. A total of 66 encounters resulted in claims: 38 in the pre-implementation era and 28 in the post-implementation era. Of the total number of claims, 15.2% alleged delay in diagnosis/misdiagnosis, 42.4% related to a procedure, and 42.4% involved improper management, treatment, or monitoring. The reduction in the proportion of encounters resulting in claims was statistically significant (P=0.001), as was the reduction in time to claim resolution (1,000 vs. 460 days) (P<0.0001). There was also a reduction in the mean total liability per claim ($167,309 pre vs. $81,107 post; 95% confidence intervals: $33,682.50-$300,936.20 pre vs. $1,687.80-$160,526.70 post). Implementation of a novel medical error disclosure program, promoting transparency and quality improvement, not only decreased the number of GI-related claims per patient encounter, but also dramatically shortened the time to claim resolution.
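
    The headline result is a difference in claim rates per encounter. As an illustration only (the abstract does not state which test produced P=0.001), a pooled two-proportion z-test on the quoted counts yields a similarly significant result:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))      # pooled standard error
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal survival function
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# Claims per encounter, pre vs. post implementation (counts from the abstract)
z, p = two_proportion_z(38, 238_911, 28, 411_944)
```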

  8. Error associated with model predictions of wildland fire rate of spread

    Science.gov (United States)

    Miguel G. Cruz; Martin E. Alexander

    2015-01-01

    How well can we expect to predict the spread rate of wildfires and prescribed fires? The degree of accuracy in model predictions of wildland fire behaviour characteristics is dependent on the model's applicability to a given situation, the validity of the model's relationships, and the reliability of the model input data (Alexander and Cruz 2013b). We...

  9. Relative effect of radiation dose rate on hemopoietic and nonhemopoietic lethality of total-body irradiation

    International Nuclear Information System (INIS)

    Peters, L.J.; McNeill, J.; Karolis, C.; Thames, H.D. Jr.; Travis, E.L.

    1986-01-01

    Experiments were undertaken to determine the influence of dose rate on the toxicity of total-body irradiation (TBI) with and without syngeneic bone-marrow rescue in mice. The results showed a much greater dose-rate dependence for death from nonhemopoietic toxicity than from bone-marrow ablation, with the ratio of LD₅₀ values increasing from 1.73 at 25 cGy/min to 2.80 at 1 cGy/min. At the higher dose rates, dose-limiting nonhemopoietic toxicity resulted from late organ injury, affecting the lungs, kidneys, and liver. At 1 cGy/min the major dose-limiting nonhemopoietic toxicity was acute gastrointestinal injury. The implications of these results in the context of TBI in preparation for bone-marrow transplantation are discussed. 15 refs., 4 figs

  10. Relativistic quasiparticle random-phase approximation calculation of total muon capture rates

    International Nuclear Information System (INIS)

    Marketin, T.; Paar, N.; Niksic, T.; Vretenar, D.

    2009-01-01

    The relativistic proton-neutron quasiparticle random phase approximation (pn-RQRPA) is applied in the calculation of total muon capture rates on a large set of nuclei from ¹²C to ²⁴⁴Pu, for which experimental values are available. The microscopic theoretical framework is based on the relativistic Hartree-Bogoliubov (RHB) model for the nuclear ground state, and transitions to excited states are calculated using the pn-RQRPA. The calculation is fully consistent, i.e., the same interactions are used both in the RHB equations that determine the quasiparticle basis and in the matrix equations of the pn-RQRPA. The calculated capture rates are sensitive to the in-medium quenching of the axial-vector coupling constant. By reducing this constant from its free-nucleon value g_A = 1.262 by 10% for all multipole transitions, the calculation reproduces the experimental muon capture rates to better than 10% accuracy.

  11. Effects of respiratory rate and tidal volume on gas exchange in total liquid ventilation.

    Science.gov (United States)

    Bull, Joseph L; Tredici, Stefano; Fujioka, Hideki; Komori, Eisaku; Grotberg, James B; Hirschl, Ronald B

    2009-01-01

    Using a rabbit model of total liquid ventilation (TLV), and in a corresponding theoretical model, we compared nine tidal volume-respiratory rate combinations to identify a ventilator strategy to maximize gas exchange, while avoiding choked flow, during TLV. Nine different ventilation strategies were tested in each animal (n = 12): low [LR = 2.5 breath/min (bpm)], medium (MR = 5 bpm), or high (HR = 7.5 bpm) respiratory rates were combined with a low (LV = 10 ml/kg), medium (MV = 15 ml/kg), or high (HV = 20 ml/kg) tidal volumes. Blood gases and partial pressures, perfluorocarbon gas content, and airway pressures were measured for each combination. Choked flow occurred in all high respiratory rate-high volume animals, 71% of high respiratory rate-medium volume (HRMV) animals, and 50% of medium respiratory rate-high volume (MRHV) animals but in no other combinations. Medium respiratory rate-medium volume (MRMV) resulted in the highest gas exchange of the combinations that did not induce choke. The HRMV and MRHV animals that did not choke had similar or higher gas exchange than MRMV. The theory predicted this behavior, along with spatial and temporal variations in alveolar gas partial pressures. Of the combinations that did not induce choked flow, MRMV provided the highest gas exchange. Alveolar gas transport is diffusion dominated and rapid during gas ventilation but is convection dominated and slow during TLV. Consequently, the usual alveolar gas equation is not applicable for TLV.
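
    For context, the "usual alveolar gas equation" that the authors note does not apply during TLV is the standard gas-ventilation relation PAO2 = FiO2·(Pb − PH2O) − PaCO2/RQ. A sketch with textbook default values (assumed here, not taken from the study):

```python
def alveolar_po2(fio2, pb=760.0, ph2o=47.0, paco2=40.0, rq=0.8):
    """Standard alveolar gas equation, in mmHg:
        PAO2 = FiO2 * (Pb - PH2O) - PaCO2 / RQ
    Defaults are common textbook values (sea-level Pb, body-temperature PH2O,
    normal PaCO2, respiratory quotient 0.8). The abstract's point is that this
    relation breaks down in TLV, where alveolar transport is convection-
    dominated; it is shown only as the gas-ventilation baseline."""
    return fio2 * (pb - ph2o) - paco2 / rq

pao2_room_air = alveolar_po2(0.21)   # ≈ 99.7 mmHg on room air
```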

  12. Error-free 5.1 Tbit/s data generation on a single-wavelength channel using a 1.28 Tbaud symbol rate

    DEFF Research Database (Denmark)

    Mulvad, Hans Christian Hansen; Galili, Michael; Oxenløwe, Leif Katsuo

    2009-01-01

    We demonstrate a record bit rate of 5.1 Tbit/s on a single wavelength using a 1.28 Tbaud OTDM symbol rate, DQPSK data-modulation, and polarisation-multiplexing. Error-free performance (BER...
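
    The headline figure decomposes as symbol rate × bits per symbol × polarizations; the quoted 5.1 Tbit/s is presumably the 5.12 Tbit/s raw product after rounding (the abstract does not spell the decomposition out):

```python
# How the single-wavelength line rate decomposes (values from the abstract)
symbol_rate = 1.28e12      # 1.28 Tbaud OTDM symbol rate
bits_per_symbol = 2        # DQPSK carries 2 bits per symbol
polarizations = 2          # polarisation-multiplexing doubles capacity

line_rate = symbol_rate * bits_per_symbol * polarizations  # 5.12e12 bit/s
```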

  13. Rate and Risk Factors for Periprosthetic Joint Infection Among 36,494 Primary Total Hip Arthroplasties.

    Science.gov (United States)

    Triantafyllopoulos, Georgios K; Soranoglou, Vasileios G; Memtsoudis, Stavros G; Sculco, Thomas P; Poultsides, Lazaros A

    2018-04-01

    As periprosthetic joint infections (PJIs) can have tremendous health and socioeconomic implications, recognizing patients at risk before surgery is of great importance. Therefore, we sought to determine the rate of and risk factors for deep PJI in patients undergoing primary total hip arthroplasty (THA). Clinical characteristics of patients treated with primary THA between January 1999 and December 2013 were retrospectively reviewed. These included patient demographics, comorbidities (including the Charlson/Deyo comorbidity index), length of stay, primary diagnosis, total/allogeneic transfusion rate, and in-hospital complications, which were grouped into local and systemic (minor and major). We determined the overall deep PJI rate, as well as the rates for early-onset (occurring within 2 years after index surgery) and late-onset PJI (occurring more than 2 years after surgery). A Cox proportional hazards regression model was constructed to identify risk factors for developing deep PJI. Significance level was set at 0.05. A deep PJI developed in 154 of 36,494 primary THAs (0.4%) during the study period. Early onset PJI was found in 122 patients (0.3%), whereas late PJI occurred in 32 patients (0.1%). Obesity, coronary artery disease, and pulmonary hypertension were identified as independent risk factors for deep PJI after primary THA. The rate of deep PJIs of the hip is relatively low, with the majority occurring within 2 years after THA. Whether the optimization of modifiable risk factors before THA can reduce the rate of this complication remains unknown, but it should be attempted as part of good practice. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Increased Total Anesthetic Time Leads to Higher Rates of Surgical Site Infections in Spinal Fusions.

    Science.gov (United States)

    Puffer, Ross C; Murphy, Meghan; Maloney, Patrick; Kor, Daryl; Nassr, Ahmad; Freedman, Brett; Fogelson, Jeremy; Bydon, Mohamad

    2017-06-01

    A retrospective review of a consecutive series of spinal fusions comparing patient and procedural characteristics of patients who developed surgical site infections (SSIs) after spinal fusion. It is known that increased surgical time (incision to closure) is associated with a higher rate of postoperative SSIs. We sought to determine whether increased total anesthetic time (intubation to extubation) is a factor in the development of SSIs as well. In spine surgery for deformity and degenerative disease, SSI has been associated with operative time, revealing a nearly 10-fold increase in SSI rates in prolonged surgery. Surgical time is associated with infections in other surgical disciplines as well. No studies have reported whether total anesthetic time (intubation to extubation) has an association with SSIs. Surgical records were searched in a retrospective fashion to identify all spine fusion procedures performed between January 2010 and July 2012. All SSIs during that timeframe were recorded and compared with the list of cases performed between 2010 and 2012 in a case-control design. There were 20 (1.7%) SSIs in this fusion cohort. On univariate analyses of operative factors, both total anesthetic time (infection 7.6 ± 0.5 hrs vs. no infection 6.0 ± 0.1 hrs) and operative time (infection 5.5 ± 0.4 hrs vs. no infection 4.4 ± 0.06 hrs) were significantly associated with infections, whereas level of pathology and emergent surgery were not. On multivariate logistic analysis, BMI and total anesthetic time remained independent predictors of SSI whereas ASA status and operative time did not. Increasing BMI and total anesthetic time were independent predictors of SSIs in this cohort of over 1000 consecutive spinal fusions. Level of Evidence: 3.

  15. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Benoit Macq

    2008-07-01

    Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANETs. The proposed packet-based scheme has low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
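
    The paper's JPWL-compliant allocator is not reproduced in the abstract. As a toy stand-in for the idea, a rate allocator for a generic (n, k) erasure code can pick the highest code rate whose residual block-loss probability meets a QoS target, assuming i.i.d. packet losses (an assumption the paper's trace-driven scheme does not need to make):

```python
from math import comb

def residual_loss(n, k, p):
    """Probability an (n, k) erasure-coded block is unrecoverable, i.e. more
    than n - k of its n packets are lost, with i.i.d. packet-loss prob. p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1, n + 1))

def pick_rate(n, p, target):
    """Pick the largest k (highest code rate k/n) meeting the loss target.
    A toy allocator, not the JPWL-compliant scheme from the paper."""
    for k in range(n, 0, -1):
        if residual_loss(n, k, p) <= target:
            return k
    return 1

# e.g. 32-packet blocks on a MANET link with 5% packet loss
k = pick_rate(n=32, p=0.05, target=1e-4)
```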

  16. A Simulation Analysis of Errors in the Measurement of Standard Electrochemical Rate Constants from Phase-Selective Impedance Data.

    Science.gov (United States)

    1987-09-30

    ...of the AC current, including the time dependence at a growing DME, at a given fixed potential, either in the presence or the absence of an... the relative error in k_s(app) is relatively small for k_s(true) ≤ 0.5 cm s⁻¹, and increases rapidly for larger rate constants as k_s(app) reaches the...

  17. Rates of medical errors and preventable adverse events among hospitalized children following implementation of a resident handoff bundle.

    Science.gov (United States)

    Starmer, Amy J; Sectish, Theodore C; Simon, Dennis W; Keohane, Carol; McSweeney, Maireade E; Chung, Erica Y; Yoon, Catherine S; Lipsitz, Stuart R; Wassner, Ari J; Harper, Marvin B; Landrigan, Christopher P

    2013-12-04

    Handoff miscommunications are a leading cause of medical errors. Studies comprehensively assessing handoff improvement programs are lacking. To determine whether introduction of a multifaceted handoff program was associated with reduced rates of medical errors and preventable adverse events, fewer omissions of key data in written handoffs, improved verbal handoffs, and changes in resident-physician workflow. Prospective intervention study of 1255 patient admissions (642 before and 613 after the intervention) involving 84 resident physicians (42 before and 42 after the intervention) from July-September 2009 and November 2009-January 2010 on 2 inpatient units at Boston Children's Hospital. Resident handoff bundle, consisting of standardized communication and handoff training, a verbal mnemonic, and a new team handoff structure. On one unit, a computerized handoff tool linked to the electronic medical record was introduced. The primary outcomes were the rates of medical errors and preventable adverse events measured by daily systematic surveillance. The secondary outcomes were omissions in the printed handoff document and resident time-motion activity. Medical errors decreased from 33.8 per 100 admissions (95% CI, 27.3-40.3) to 18.3 per 100 admissions (95% CI, 14.7-21.9; P < .001), and preventable adverse events decreased from 3.3 per 100 admissions (95% CI, 1.7-4.8) to 1.5 (95% CI, 0.51-2.4) per 100 admissions (P = .04) following the intervention. There were fewer omissions of key handoff elements on printed handoff documents, especially on the unit that received the computerized handoff tool (significant reductions of omissions in 11 of 14 categories with computerized tool; significant reductions in 2 of 14 categories without computerized tool). Physicians spent a greater percentage of time in a 24-hour period at the patient bedside after the intervention (10.6% [95% CI, 9.2%-12.2%] vs. 8.3% [95% CI, 7.1%-9.8%]; P = .03). The average duration of verbal

  18. A web-based team-oriented medical error communication assessment tool: development, preliminary reliability, validity, and user ratings.

    Science.gov (United States)

    Kim, Sara; Brock, Doug; Prouty, Carolyn D; Odegard, Peggy Soule; Shannon, Sarah E; Robins, Lynne; Boggs, Jim G; Clark, Fiona J; Gallagher, Thomas

    2011-01-01

    Multiple-choice exams are not well suited for assessing communication skills. Standardized patient assessments are costly and patient and peer assessments are often biased. Web-based assessment using video content offers the possibility of reliable, valid, and cost-efficient means for measuring complex communication skills, including interprofessional communication. We report development of the Web-based Team-Oriented Medical Error Communication Assessment Tool, which uses videotaped cases for assessing skills in error disclosure and team communication. Steps in development included (a) defining communication behaviors, (b) creating scenarios, (c) developing scripts, (d) filming video with professional actors, and (e) writing assessment questions targeting team communication during planning and error disclosure. Using valid data from 78 participants in the intervention group, coefficient alpha estimates of internal consistency were calculated based on the Likert-scale questions and ranged from α=.79 to α=.89 for each set of 7 Likert-type discussion/planning items and from α=.70 to α=.86 for each set of 8 Likert-type disclosure items. The preliminary test-retest Pearson correlation based on the scores of the intervention group was r=.59 for discussion/planning and r=.25 for error disclosure sections, respectively. Content validity was established through reliance on empirically driven published principles of effective disclosure as well as integration of expert views across all aspects of the development process. In addition, data from 122 medicine and surgical physicians and nurses showed high ratings for video quality (4.3 of 5.0), acting (4.3), and case content (4.5). Web assessment of communication skills appears promising. Physicians and nurses across specialties respond favorably to the tool.

  19. Evaluation of total energy-rate feedback for glidescope tracking in wind shear

    Science.gov (United States)

    Belcastro, C. M.; Ostroff, A. J.

    1986-01-01

    Low-altitude wind shear is recognized as an infrequent but significant hazard to all aircraft during take-off and landing. A total energy-rate sensor, which is potentially applicable to this problem, has been developed for measuring the specific total energy-rate of an airplane with respect to the air mass. This paper presents control system designs, with and without energy-rate feedback, for the approach to landing of a transport airplane through severe wind shear and gusts to evaluate application of this sensor. A system model is developed which incorporates wind shear dynamics equations with the airplane equations of motion, thus allowing the control systems to be analyzed under various wind shears. The control systems are designed using optimal output feedback and are analyzed using frequency domain control theory techniques. Control system performance is evaluated using a complete nonlinear simulation of the airplane and a severe wind shear and gust data package. The analysis and simulation results indicate very similar stability and performance characteristics for the two designs. An implementation technique for distributing the velocity gains between airspeed and ground speed in the simulation is also presented, and this technique is shown to improve the performance characteristics of both designs.
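
    The specific total energy-rate such a sensor measures is, by the textbook definition, the rate of change of h + V²/2g. A small sketch of that quantity (the sensor hardware and the paper's control-law details are not reproduced here):

```python
G = 9.81  # gravitational acceleration, m/s^2

def specific_energy_rate(h_dot, v, v_dot):
    """Specific total energy-rate with respect to the air mass, in m/s:
        d/dt (h + V^2 / 2g) = h_dot + V * V_dot / g
    A textbook definition of the sensed quantity, not the sensor model."""
    return h_dot + v * v_dot / G

# Level flight (h_dot = 0) while decelerating in a shear: energy is drained
rate = specific_energy_rate(h_dot=0.0, v=70.0, v_dot=-1.5)  # ≈ -10.7 m/s
```

    A negative value in level flight flags that the shear is extracting energy faster than thrust is adding it, which is exactly the situation energy-rate feedback is meant to counter.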

  20. Choice of reference sequence and assembler for alignment of Listeria monocytogenes short-read sequence data greatly influences rates of error in SNP analyses.

    Directory of Open Access Journals (Sweden)

    Arthur W Pightling

    The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: (i) depth of sequencing coverage, (ii) choice of reference-guided short-read sequence assembler, (iii) choice of reference genome, and (iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers

  1. Total dose and dose rate models for bipolar transistors in circuit simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, Phillip Montgomery; Wix, Steven D.

    2013-05-01

    The objective of this work is to develop a model for total dose effects in bipolar junction transistors for use in circuit simulation. The components of the model are an electrical model of device performance that includes the effects of trapped charge on device behavior, and a model that calculates the trapped charge densities in a specific device structure as a function of radiation dose and dose rate. Simulations based on this model are found to agree well with measurements on a number of devices for which data are available.

  2. Total skin high-dose-rate electron therapy dosimetry using TG-51

    International Nuclear Information System (INIS)

    Gossman, Michael S.; Sharma, Subhash C.

    2004-01-01

    An approach to dosimetry for total skin electron therapy (TSET) is discussed using the currently accepted TG-51 high-energy calibration protocol. The methodology incorporates water phantom data for absolute calibration and plastic phantom data for efficient reference dosimetry. The scheme is simplified to include the high-dose-rate mode conversion and provides support for its use, as it becomes more available on newer linear accelerators. Using a 6-field, modified Stanford technique, one may follow the process for accurate determination of absorbed dose

  3. [Rates of total and free PSA prescriptions in France (2012-2014)].

    Science.gov (United States)

    Tuppin, Philippe; Leboucher, Claire; Peyre-Lanquar, Gabrielle; Lamy, Pierre-Jean; Gabach, Pierre; Rébillard, Xavier

    2017-10-01

    In 2010, the French Haute Autorité de santé (National Health Authority) confirmed the limited value of prostate cancer (PCa) screening by total prostate-specific antigen (PSA) assay. This study was designed to determine the modalities of ordering total PSA or free PSA assays (in the absence of PCa) according to various parameters and the corresponding sums reimbursed. Men aged 40 years and older covered by the national health insurance general scheme (73% of the French population) between 2012 and 2014 were selected. Data were derived from the Système national d'information inter-régimes de l'assurance maladie (Sniiram) (National health insurance information system) database. In 2014, 27% of the 11.6 million men aged 40 years and older underwent at least one total PSA assay and 5.6% underwent at least one free PSA assay, with marked variations according to the presence or absence of treated lower urinary tract symptoms (LUTS) (53% and 15% vs. 24% and 5%) and from one administrative department to another. The peak total PSA assay rate was observed between the ages of 65 and 74 years: 64% of men with LUTS, 46% without LUTS. Between 2012 and 2014, men in whom at least one PSA assay had been performed underwent a mean of 1.8 total PSA assays and 1.7 free PSA assays, with means of 2.3 and 2, respectively, in the presence of LUTS. General practice specialists ordered 91% of the PSA tests reimbursed in 2014 (92% for total PSA and 87% for free PSA) and urologists ordered 4% of reimbursed tests. The total sum reimbursed was €28.5 million, comprising €8.7 million for free PSA. An average of 10 laboratory tests was performed at the same time as the PSA assay in the absence of treated LUTS. Total PSA and free PSA assays are performed in a large number of men, although the value of these tests as a first-line test before biopsy remains controversial. These PSA assays are associated with many other laboratory tests looking for possible abnormalities, especially in younger

  4. [Total and pork meat imports in Mexico in the context of NAFTA: an error-correction approach]

    OpenAIRE

    Pablo Mejía Reyes

    2007-01-01

    The dynamics of Mexico's total imports and pork meat imports are modeled for the period during which NAFTA has been in force. Starting from a conventional framework, the existence of cointegration between each type of imports, domestic production, and relative prices is analyzed. The short-run dynamics of each type of imports is then modeled with an error-correction model using the same explanatory variables. The results suggest that the ...

  5. Emesis as a Screening Diagnostic for Low Dose Rate (LDR) Total Body Radiation Exposure.

    Science.gov (United States)

    Camarata, Andrew S; Switchenko, Jeffrey M; Demidenko, Eugene; Flood, Ann B; Swartz, Harold M; Ali, Arif N

    2016-04-01

    Current radiation disaster manuals list the time-to-emesis (TE) as the key triage indicator of radiation dose. The data used to support TE recommendations were derived primarily from nearly instantaneous, high dose-rate exposures as part of variable condition accident databases. To date, there has not been a systematic differentiation between triage dose estimates associated with high and low dose rate (LDR) exposures, even though it is likely that after a nuclear detonation or radiologic disaster, many surviving casualties would have received a significant portion of their total exposure from fallout (LDR exposure) rather than from the initial nuclear detonation or criticality event (high dose rate exposure). This commentary discusses the issues surrounding the use of emesis as a screening diagnostic for radiation dose after LDR exposure. As part of this discussion, previously published clinical data on emesis after LDR total body irradiation (TBI) is statistically re-analyzed as an illustration of the complexity of the issue and confounding factors. This previously published data includes 107 patients who underwent TBI up to 10.5 Gy in a single fraction delivered over several hours at 0.02 to 0.04 Gy min⁻¹. Estimates based on these data for the sensitivity of emesis as a screening diagnostic for low dose rate radiation exposure range from 57.1% to 76.6%, and the estimates for specificity range from 87.5% to 99.4%. Though the original data contain multiple confounding factors, the evidence regarding sensitivity suggests that emesis appears to be quite poor as a medical screening diagnostic for LDR exposures.
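
    Sensitivity and specificity here are the usual 2×2 screening-table quantities. A sketch with hypothetical counts chosen only so the results fall inside the ranges quoted above (these are not the study's actual data):

```python
def screening_stats(tp, fn, tn, fp):
    """Sensitivity and specificity of a binary screening diagnostic."""
    sensitivity = tp / (tp + fn)   # fraction of true positives detected
    specificity = tn / (tn + fp)   # fraction of true negatives passed
    return sensitivity, specificity

# HYPOTHETICAL 2x2 counts, chosen only to land inside the quoted ranges
# (sensitivity 57.1-76.6%, specificity 87.5-99.4%); not the study's data.
sens, spec = screening_stats(tp=20, fn=10, tn=45, fp=3)
```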

  6. Structure analysis of tax revenue and inflation rate in Banda Aceh using vector error correction model with multiple alpha

    Science.gov (United States)

    Sofyan, Hizir; Maulia, Eva; Miftahuddin

    2017-11-01

    A country has several important parameters for achieving economic prosperity, such as tax revenue and the inflation rate. One of the largest revenues in Indonesia's State Budget comes from the tax sector, while the inflation rate occurring in a country can be used as an indicator of the economic problems the country faces. Given the importance of tax revenue and inflation-rate control in achieving economic prosperity, it is necessary to analyze the structure of the relationship between tax revenue and the inflation rate. This study aims to produce the best VECM (Vector Error Correction Model) with optimal lag using various alpha levels, and to perform structural analysis using the Impulse Response Function (IRF) of the VECM models to examine the relationship between tax revenue and inflation in Banda Aceh. The results showed that the best model for the tax revenue and inflation rate data in Banda Aceh City using alpha 0.01 is VECM with optimal lag 2, while the best model using alpha 0.05 and 0.1 is VECM with optimal lag 3. However, the VECM model with alpha 0.01 yielded four significant models: the income tax model and the models for the overall, health, and education inflation rates in Banda Aceh, while the VECM model with alpha 0.05 and 0.1 yielded one significant model, the income tax model. Based on these VECM models, two structural IRF analyses were formed to examine the relationship between tax revenue and inflation in Banda Aceh: the IRF with VECM(2) and the IRF with VECM(3).

  7. The dynamic effect of exchange-rate volatility on Turkish exports: Parsimonious error-correction model approach

    Directory of Open Access Journals (Sweden)

    Demirhan Erdal

    2015-01-01

    Full Text Available This paper aims to investigate the effect of exchange-rate stability on real export volume in Turkey, using monthly data for the period February 2001 to January 2010. The Johansen multivariate cointegration method and the parsimonious error-correction model are applied to determine long-run and short-run relationships between real export volume and its determinants. In this study, the conditional variance of the GARCH(1,1) model is taken as a proxy for exchange-rate stability, and generalized impulse-response functions and variance-decomposition analyses are applied to analyze the dynamic effects of variables on real export volume. The empirical findings suggest that exchange-rate stability has a significant positive effect on real export volume, both in the short and the long run.
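
The two-step (Engle-Granger) error-correction idea behind a parsimonious model like this one can be sketched on synthetic data. Everything below (the series, coefficients, and seed) is illustrative and is not the paper's Turkish export data; the point is only the mechanics of estimating a long-run relation and then the short-run adjustment toward it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic cointegrated pair: x is a random walk, y tracks x with
# stationary deviations (stand-ins for, e.g., log exports and a determinant).
x = np.cumsum(rng.normal(size=n))
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=n)

# Step 1: long-run (cointegrating) regression y_t = a + b*x_t + u_t.
X1 = np.column_stack([np.ones(n), x])
beta_lr, *_ = np.linalg.lstsq(X1, y, rcond=None)
ect = y - X1 @ beta_lr          # error-correction term (residuals)

# Step 2: short-run dynamics dy_t = c + g*dx_t + alpha*ect_{t-1} + e_t.
dy, dx = np.diff(y), np.diff(x)
X2 = np.column_stack([np.ones(n - 1), dx, ect[:-1]])
beta_sr, *_ = np.linalg.lstsq(X2, dy, rcond=None)
alpha = beta_sr[2]              # speed of adjustment; negative if y corrects
print(f"long-run slope ~ {beta_lr[1]:.2f}, adjustment alpha ~ {alpha:.2f}")
```

A negative, significant `alpha` is what signals a valid error-correction mechanism: deviations from the long-run relation are partly undone in the next period.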

  8. Effect of radiation dose rate and cyclophosphamide on pulmonary toxicity after total body irradiation in a mouse model

    International Nuclear Information System (INIS)

    Safwat, Akmal; Nielsen, Ole S.; El-Badawy, Samy; Overgaard, Jens

    1996-01-01

    Purpose: Interstitial pneumonitis (IP) is still a major complication after total body irradiation (TBI) and bone marrow transplantation (BMT). It is difficult to determine the exact role of radiation in this multifactorial complication, especially because most of the experimental work on lung damage was done using localized lung irradiation and not TBI. We have thus tested the effect of radiation dose rate and of combining cyclophosphamide (CTX) with single-fraction TBI on lung damage in a mouse model for BMT. Methods and Materials: TBI was given as a single fraction at a high dose rate (HDR, 0.71 Gy/min) or a low dose rate (LDR, 0.08 Gy/min). CTX (250 mg/kg) was given 24 h before TBI. Bone marrow transplantation (BMT) was performed 4-6 h after the last treatment. Lung damage was assessed using ventilation rate (VR) and lethality between 28 and 180 days (LD50(28-180)). Results: The LD50 for lung damage, ± standard error (SE), increased from 12.0 (± 0.2) Gy using single-fraction HDR to 15.8 (± 0.6) Gy using LDR. Adding CTX shifted the dose-response curves towards lower doses. The LD50 values for the combined treatment were 5.3 (± 0.2) and 3.5 (± 0.2) Gy for HDR and LDR, respectively. This indicates that the combined effect of CTX and LDR was more toxic than that of combined CTX and HDR. Lung damage evaluated by VR demonstrated two waves of VR increase. The first wave of VR increase occurred after 6 weeks using TBI only and after 3 weeks in the combined CTX-TBI treatment, irrespective of total dose or dose rate. The second wave of VR elevation resembled the IP that follows localized thoracic irradiation in its time of occurrence. Conclusions: Lung damage following TBI could be spared using LDR. However, CTX markedly enhances TBI-induced lung damage. The combination of CTX and LDR is more toxic to the lungs than combining CTX and HDR.

  9. Error rate on the director's task is influenced by the need to take another's perspective but not the type of perspective.

    Science.gov (United States)

    Legg, Edward W; Olivier, Laure; Samuel, Steven; Lurz, Robert; Clayton, Nicola S

    2017-08-01

    Adults are prone to responding erroneously to another's instructions based on what they themselves see and not what the other person sees. Previous studies have indicated that in instruction-following tasks participants make more errors when required to infer another's perspective than when following a rule. These inference-induced errors may occur because the inference process itself is error-prone or because they are a side effect of the inference process. Crucially, if the inference process is error-prone, then higher error rates should be found when the perspective to be inferred is more complex. Here, we found that participants were no more error-prone when they had to judge how an item appeared (Level 2 perspective-taking) than when they had to judge whether an item could or could not be seen (Level 1 perspective-taking). However, participants were more error-prone in the perspective-taking variants of the task than in a version that only required them to follow a rule. These results suggest that having to represent another's perspective induces errors when following their instructions but that error rates are not directly linked to errors in inferring another's perspective.

  10. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    Science.gov (United States)

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
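
The kind of inflation the study reports can be probed in miniature with a Monte Carlo sketch: regress a trait that is truly independent of the variant on a simulated rare genotype and count how often the slope test rejects at 0.05. The minor allele frequency, sample size, gamma shape, and the normal approximation to the t-test p-value below are all stand-ins chosen for illustration, not the GAW 19 setup.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)

def type1_rate(trait_draw, maf, n=1000, reps=2000, alpha=0.05):
    """Empirical type I error of a simple-linear-regression slope test
    when the trait is truly independent of the variant."""
    hits = 0
    for _ in range(reps):
        g = rng.binomial(2, maf, size=n).astype(float)   # genotype 0/1/2
        y = trait_draw(n)
        if g.std() == 0:            # monomorphic draw: no test possible
            continue
        gc = g - g.mean()
        slope = gc @ y / (gc @ gc)
        resid = y - y.mean() - slope * gc
        se = sqrt((resid @ resid) / (n - 2) / (gc @ gc))
        p = erfc(abs(slope / se) / sqrt(2))   # normal approx to the t test
        hits += p < alpha
    return hits / reps

rate_normal = type1_rate(lambda n: rng.normal(size=n), maf=0.005)
rate_gamma = type1_rate(lambda n: rng.gamma(0.3, size=n), maf=0.005)
print(f"normal trait: {rate_normal:.3f}, skewed gamma trait: {rate_gamma:.3f}")
```

With a normal trait the empirical rate should sit near the nominal 0.05; with a strongly skewed trait and a rare variant, the handful of minor-allele carriers can dominate the slope, which is the mechanism behind the inflation the abstract describes.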

  11. Reply: Birnbaum's (2012) statistical tests of independence have unknown Type-I error rates and do not replicate within participant

    Directory of Open Access Journals (Sweden)

    Yun-shil Cha

    2013-01-01

    Full Text Available Birnbaum (2011, 2012) questioned the iid (independent and identically distributed) sampling assumptions used by state-of-the-art statistical tests in Regenwetter, Dana and Davis-Stober's (2010, 2011) analysis of the "linear order model". Birnbaum (2012) cited, but did not use, a test of iid by Smith and Batchelder (2008) with analytically known properties. Instead, he created two new test statistics with unknown sampling distributions. Our rebuttal has five components: (1) We demonstrate that the Regenwetter et al. data pass Smith and Batchelder's test of iid with flying colors. (2) We provide evidence from Monte Carlo simulations that Birnbaum's (2012) proposed tests have unknown Type-I error rates, which depend on the actual choice probabilities and on how data are coded as well as on the null hypothesis of iid sampling. (3) Birnbaum analyzed only a third of Regenwetter et al.'s data. We show that his two new tests fail to replicate on the other two-thirds of the data, within participants. (4) Birnbaum selectively picked data of one respondent to suggest that choice probabilities may have changed partway into the experiment. Such nonstationarity could potentially cause a seemingly good fit to be a Type-II error. We show that the linear order model fits equally well if we allow for warm-up effects. (5) Using hypothetical data, Birnbaum (2012) claimed to show that "true-and-error" models for binary pattern probabilities overcome the alleged shortcomings of Regenwetter et al.'s approach. We disprove this claim on the same data.

  12. Comparison of the effect of paper and computerized procedures on operator error rate and speed of performance

    International Nuclear Information System (INIS)

    Converse, S.A.; Perez, P.B.; Meyer, S.; Crabtree, W.

    1994-01-01

    The Computerized Procedures Manual (COPMA-II) is an advanced procedure manual that can be used to select and execute procedures, to monitor the state of plant parameters, and to help operators track their progress through plant procedures. COPMA-II was evaluated in a study that compared the speed and accuracy of operators' performance when they performed with COPMA-II and traditional paper procedures. Sixteen licensed reactor operators worked in teams of two to operate the Scales Pressurized Water Reactor Facility at North Carolina State University. Each team performed one change of power with each type of procedure to simulate performance under normal operating conditions. Teams then performed one accident scenario with COPMA-II and one with paper procedures. Error rates, performance times, and subjective estimates of workload were collected, and were evaluated for each combination of procedure type and scenario type. For the change of power task, accuracy and response time were not different for COPMA-II and paper procedures. Operators did initiate responses to both accident scenarios fastest with paper procedures. However, procedure type did not moderate response completion time for either accident scenario. For accuracy, performance with paper procedures resulted in twice as many errors as did performance with COPMA-II. Subjective measures of mental workload for the accident scenarios were not affected by procedure type

  13. Inflation of type I error rates by unequal variances associated with parametric, nonparametric, and Rank-Transformation Tests

    Directory of Open Access Journals (Sweden)

    Donald W. Zimmerman

    2004-01-01

    Full Text Available It is well known that the two-sample Student t test fails to maintain its significance level when the variances of treatment groups are unequal, and, at the same time, sample sizes are unequal. However, introductory textbooks in psychology and education often maintain that the test is robust to variance heterogeneity when sample sizes are equal. The present study discloses that, for a wide variety of non-normal distributions, especially skewed distributions, the Type I error probabilities of both the t test and the Wilcoxon-Mann-Whitney test are substantially inflated by heterogeneous variances, even when sample sizes are equal. The Type I error rate of the t test performed on ranks replacing the scores (rank-transformed data is inflated in the same way and always corresponds closely to that of the Wilcoxon-Mann-Whitney test. For many probability densities, the distortion of the significance level is far greater after transformation to ranks and, contrary to known asymptotic properties, the magnitude of the inflation is an increasing function of sample size. Although nonparametric tests of location also can be sensitive to differences in the shape of distributions apart from location, the Wilcoxon-Mann-Whitney test and rank-transformation tests apparently are influenced mainly by skewness that is accompanied by specious differences in the means of ranks.

  14. The Effect of Exposure to High Noise Levels on the Performance and Rate of Error in Manual Activities.

    Science.gov (United States)

    Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra

    2016-03-01

    Sound is among the significant environmental factors for people's health, and it has an important role in both physical and psychological injuries, and it also affects individuals' performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on the performance and rate of error in manual activities. This was an interventional study conducted on 50 students at Shiraz University of Medical Sciences (25 males and 25 females) in which each person was considered as its own control to assess the effect of noise on her or his performance at the sound levels of 70, 90, and 110 dB by using two factors of physical features and the creation of different conditions of sound source as well as applying the Two-Arm coordination Test. The data were analyzed using SPSS version 16. Repeated measurements were used to compare the length of performance as well as the errors measured in the test. Based on the results, we found a direct and significant association between the levels of sound and the length of performance. Moreover, the participant's performance was significantly different for different sound levels (at 110 dB as opposed to 70 and 90 dB, p < 0.05 and p < 0.001, respectively). This study found that a sound level of 110 dB had an important effect on the individuals' performances, i.e., the performances were decreased.

  15. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. 
In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  16. Bit Error Rate Performance Analysis of a Threshold-Based Generalized Selection Combining Scheme in Nakagami Fading Channels

    Directory of Open Access Journals (Sweden)

    Kousa Maan

    2005-01-01

    Full Text Available The severity of fading on mobile communication channels calls for the combining of multiple diversity sources to achieve acceptable error rate performance. Traditional approaches perform the combining of the different diversity sources using either the conventional selective diversity combining (CSC), equal-gain combining (EGC), or maximal-ratio combining (MRC) schemes. CSC and MRC are the two extremes of compromise between performance quality and complexity. Some researchers have proposed a generalized selection combining scheme (GSC) that combines a fixed number of the best branches out of the available diversity resources. In this paper, we analyze a generalized selection combining scheme based on a threshold criterion rather than a fixed-size subset of the best channels. In this scheme, only those diversity branches whose energy levels are above a specified threshold are combined. Closed-form analytical solutions for the BER performances of this scheme over Nakagami fading channels are derived. We also discuss the merits of this scheme over GSC.
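
The threshold-combining idea can be illustrated by simulation rather than the paper's closed-form analysis. In a Nakagami-m channel the branch SNR is gamma-distributed, BPSK error probability at combined SNR γ is Q(√(2γ)) = ½·erfc(√γ), and the scheme sums only branches above a threshold. The threshold value, fading parameter, and the fall-back-to-best-branch rule when no branch qualifies are assumptions of this sketch, not taken from the paper.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)
m, branches, snr_avg, trials = 2.0, 4, 4.0, 50_000
threshold = 0.5 * snr_avg     # assumed energy threshold for combining

# Nakagami-m fading: branch SNR is gamma-distributed (shape m, mean snr_avg).
snr = rng.gamma(m, snr_avg / m, size=(trials, branches))

def bpsk_ber(gamma_c):
    # Average BPSK error probability: Q(sqrt(2*gamma)) = 0.5*erfc(sqrt(gamma)).
    return np.mean([0.5 * erfc(sqrt(g)) for g in gamma_c])

# Threshold-GSC: sum the branches above the threshold; fall back to the
# single strongest branch when none qualifies.
above = np.where(snr >= threshold, snr, 0.0)
tgsc = np.where(above.sum(axis=1) > 0, above.sum(axis=1), snr.max(axis=1))

ber_sc   = bpsk_ber(snr.max(axis=1))   # conventional selection combining
ber_tgsc = bpsk_ber(tgsc)              # threshold-based GSC
ber_mrc  = bpsk_ber(snr.sum(axis=1))   # maximal-ratio combining (bound)
print(f"SC {ber_sc:.2e}  T-GSC {ber_tgsc:.2e}  MRC {ber_mrc:.2e}")
```

Because the threshold scheme's combined SNR always lies between the best single branch and the full MRC sum, its BER must fall between the SC and MRC curves; the threshold trades combiner complexity against how close it gets to MRC.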

  17. Total Body Capacitance for Estimating Human Basal Metabolic Rate in an Egyptian Population

    Science.gov (United States)

    M. Abdel-Mageed, Samir; I. Mohamed, Ehab

    2016-01-01

    Determining basal metabolic rate (BMR) is important for estimating total energy needs in the human being; yet, concerns have been raised regarding the suitability of sex-specific equations based on age and weight for its calculation on an individual or population basis. It has been shown that body cell mass (BCM) is the body compartment responsible for BMR. The objectives of this study were to investigate the relationship between total body capacitance (TBC), which is considered an expression of BCM, and BMR, and to develop a formula for calculating BMR in comparison with widely used equations. Fifty healthy nonsmoking male volunteers [mean age (± SD): 24.93 ± 4.15 years and body mass index (BMI): 25.63 ± 3.59 kg/m²] and an equal number of healthy nonsmoking females matched for age and BMI were recruited for the study. TBC and BMR were measured for all participants using octopolar bioelectric impedance analysis and indirect calorimetry techniques, respectively. A significant regression equation for estimating BMR based on the covariates sex, weight, and TBC was derived (R=0.96, SEE=48.59 kcal, and P<0.0001), which will be useful for nutritional and health status assessment of both individuals and populations. PMID:27127453

  18. Tropical systematic and random error energetics based on NCEP ...

    Indian Academy of Sciences (India)


    Systematic error growth rate peak is observed at wavenumber 2 up to 4-day forecast then .... the influence of summer systematic error and ran- ... total exchange. When the error energy budgets are examined in spectral domain, one may ask ques- tions on the error growth at a certain wavenum- ber from its interaction with ...

  19. A parsimonious characterization of change in global age-specific and total fertility rates

    Science.gov (United States)

    2018-01-01

    This study aims to understand trends in global fertility from 1950-2010 through the analysis of age-specific fertility rates. This approach incorporates both the overall level, as when the total fertility rate is modeled, and different patterns of age-specific fertility to examine the relationship between changes in age-specific fertility and fertility decline. Singular value decomposition is used to capture the variation in age-specific fertility curves while reducing the number of dimensions, allowing curves to be described nearly fully with three parameters. Regional patterns and trends over time are evident in parameter values, suggesting this method provides a useful tool for considering fertility decline globally. The second and third parameters were analyzed using model-based clustering to examine patterns of age-specific fertility over time and place; four clusters were obtained. A country’s demographic transition can be traced through time by membership in the different clusters, and regional patterns in the trajectories through time and with fertility decline are identified. PMID:29377899
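
The SVD reduction described above can be illustrated on synthetic age-specific fertility schedules. The Gaussian-bump curve family, its parameter ranges, and the seven age groups below are invented stand-ins for the real UN-style schedules; the point is that a rank-3 decomposition summarizes each curve with three scores.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in data: rows = country-periods, columns = age-specific fertility
# rates on 7 five-year age groups (15-19 ... 45-49).
ages = np.arange(7)

def fert_curve(level, peak, spread):
    curve = np.exp(-0.5 * ((ages - peak) / spread) ** 2)
    return level * curve / curve.sum()

curves = np.array([
    fert_curve(rng.uniform(1.5, 7.0),      # overall level (like a TFR)
               rng.uniform(1.5, 3.5),      # modal age group
               rng.uniform(0.8, 2.0))      # spread of childbearing ages
    for _ in range(200)
])

# SVD: each curve is approximated by its projection on the first k
# right-singular vectors, so k scores describe a whole schedule.
U, s, Vt = np.linalg.svd(curves, full_matrices=False)
k = 3
scores = U[:, :k] * s[:k]            # 3-parameter description per curve
approx = scores @ Vt[:k]
var_explained = 1 - ((curves - approx) ** 2).sum() / (
    (curves - curves.mean()) ** 2).sum()
print(f"variance explained by {k} components: {var_explained:.3f}")
```

Tracking how a country's three scores move over time is then a compact way to trace its fertility transition, which is essentially what the clustering step in the abstract does with the second and third parameters.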

  20. Body mass and weight thresholds for increased prosthetic joint infection rates after primary total joint arthroplasty.

    Science.gov (United States)

    Lübbeke, Anne; Zingg, Matthieu; Vu, Diemlan; Miozzari, Hermes H; Christofilopoulos, Panayiotis; Uçkay, Ilker; Harbarth, Stephan; Hoffmeyer, Pierre

    2016-01-01

    Obesity increases the risk of deep infection after total joint arthroplasty (TJA). Our objective was to determine whether there may be body mass index (BMI) and weight thresholds indicating a higher prosthetic joint infection rate. We included all 9,061 primary hip and knee arthroplasties (mean age 70 years, 61% women) performed between March 1996 and December 2013 where the patient had received intravenous cefuroxime (1.5 g) perioperatively. The main exposures of interest were BMI (5 categories: prosthetic joint infection. The mean follow-up time was 6.5 years (0.5-18 years). 111 prosthetic joint infections were observed: 68 postoperative, 16 hematogenous, and 27 of undetermined cause. Incidence rates were similar in the first 3 BMI categories (infection from the early postoperative period onward (adjusted HR = 2.1, 95% CI: 1.3-3.6). BMI ≥ 35 or weight ≥ 100 kg may serve as a cutoff for higher perioperative dosage of antibiotics.

  1. Water cut measurement of oil–water flow in vertical well by combining total flow rate and the response of a conductance probe

    International Nuclear Information System (INIS)

    Chen, Jianjun; Xu, Lijun; Cao, Zhang; Zhang, Wen; Liu, Xingbin; Hu, Jinhai

    2015-01-01

    In this paper, a conductance probe-based well logging instrument was developed and the total flow rate is combined with the response of the conductance probe to estimate the water cut of the oil–water flow in a vertical well. The conductance probe records the time-varying electrical characteristics of the oil–water flow. Linear least squares regression (LSR) and nonlinear support vector regression (SVR) were used to establish models to map the total flow rate and features extracted from the probe response onto the water cut, respectively. Principal component analysis (PCA) and partial least squares analysis (PLSA) techniques were employed to reduce data redundancy within the extracted features. An experiment was carried out in a vertical pipe with an inner diameter of 125 mm and a height of 24 m in an experimental multi-phase flow setup, Daqing Oilfield, China. In the experiment, oil–water flow was used and the total flow rate varied from 10 to 200 m³ per day and the water cut varied from 0% to 100%. As a direct comparison, the cases were also studied when the total flow rate was not used as an independent input to the models. The results obtained demonstrate that: (1) the addition of the total flow rate as an input to the regression models can greatly improve the accuracy of water cut prediction, (2) the nonlinear SVR model performs much better than the linear LSR model, and (3) for the SVR model with the total flow rate as an input, the adoption of PCA or PLSA not only decreases the dimensions of inputs, but also increases prediction accuracy. The SVR model with five PCA-treated features plus the total flow rate achieves the best performance in water cut prediction, with a coefficient of determination (R²) as high as 0.9970. The corresponding root mean squared error (RMSE) and mean quoted error (MQE) are 0.0312% and 1.99%, respectively. (paper)
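
A minimal numpy-only sketch of the PCA-plus-flow-rate pipeline the abstract describes: project probe features onto five principal components, append the total flow rate, and fit a regression to the water cut. Linear least squares stands in here for the paper's SVR, and the synthetic probe features, noise levels, and train/test split are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_feat = 400, 12

# Synthetic stand-ins: water cut (%), total flow rate (m^3/day), and
# correlated probe-response features (not the paper's real features).
water_cut = rng.uniform(0, 100, n)
flow = rng.uniform(10, 200, n)
loadings = rng.normal(size=(2, n_feat))
features = (np.column_stack([water_cut, water_cut * flow / 200]) @ loadings
            + rng.normal(scale=5.0, size=(n, n_feat)))

# PCA via SVD on the centered feature block, keeping 5 components.
centered = features - features.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
pcs = centered @ Vt[:5].T

# Regression with the total flow rate as an extra input alongside the
# 5 PCs (linear least squares as a stand-in for the paper's SVR).
X = np.column_stack([np.ones(n), pcs, flow])
train, test = slice(0, 300), slice(300, None)
coef, *_ = np.linalg.lstsq(X[train], water_cut[train], rcond=None)
pred = X[test] @ coef
r2 = 1 - np.sum((water_cut[test] - pred) ** 2) / np.sum(
    (water_cut[test] - water_cut[test].mean()) ** 2)
print(f"held-out R^2 = {r2:.3f}")
```

The design choice mirrors the paper's finding: the flow rate enters as an independent input rather than being folded into the PCA, since it carries information the probe features alone cannot supply.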

  2. 12 CFR Appendix K to Part 226 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Science.gov (United States)

    2010-01-01

    ... Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions (a... loan cost rate for various transactions, as well as instructions, explanations, and examples for.... (2) Term of the transaction. For purposes of total annual loan cost disclosures, the term of a...

  3. Estimation of salivary flow rate, pH, buffer capacity, calcium, total protein content and total antioxidant capacity in relation to dental caries severity, age and gender.

    Science.gov (United States)

    Pandey, Pallavi; Reddy, N Venugopal; Rao, V Arun Prasad; Saxena, Aditya; Chaudhary, C P

    2015-03-01

    The aim of the study was to evaluate salivary flow rate, pH, buffering capacity, calcium, total protein content and total antioxidant capacity in relation to dental caries, age and gender. The study population consisted of 120 healthy children aged 7-15 years that was further divided into two groups: 7-10 years and 11-15 years. In this 60 children with DMFS/dfs = 0 and 60 children with DMFS/dfs ≥5 were included. The subjects were divided into two groups; Group A: Children with DMFS/dfs = 0 (caries-free) Group B: Children with DMFS/dfs ≥5 (caries active). Unstimulated saliva samples were collected from all groups. Flow rates were determined, and samples analyzed for pH, buffer capacity, calcium, total protein and total antioxidant status. Salivary antioxidant activity is measured with spectrophotometer by an adaptation of 2,2'-azino-di-(3-ethylbenzthiazoline-6-sulphonate) assays. The mean difference of the two groups; caries-free and caries active were proved to be statistically significant (P salivary calcium, total protein and total antioxidant level for both the sexes in the age group 7-10 years and for the age 11-15 years the mean difference of the two groups were proved to be statistically significant (P salivary calcium level for both the sexes. Salivary total protein and total antioxidant level were proved to be statistically significant for male children only. In general, total protein and total antioxidants in saliva were increased with caries activity. Calcium content of saliva was found to be more in caries-free group and increased with age.

  4. Oil flow rate measurements using 198Au and total count technique

    International Nuclear Information System (INIS)

    Goncalves, Eduardo R.; Crispim, Verginia R.

    2013-01-01

    In industrial plants, oil and oil compounds are usually transported by closed pipelines with circular cross-section. The use of radiotracers in oil transport and processing industrial facilities allows calibrating flowmeters, measuring mean residence time in cracking columns, locating points of obstruction or leakage in underground ducts, as well as investigating flow behavior in industrial processes such as distillation towers. Inspection techniques using radiotracers are non-destructive, simple, economic and highly accurate. Among them, Total Count, which uses a small amount of radiotracer with known activity, is acknowledged as an absolute technique for flow rate measurement. A viscous fluid transport system, composed of four PVC pipelines with 13 m length (12 m horizontal and 1 m vertical) and 1/2, 3/4, 1 and 2-inch gauges, respectively, interconnected by maneuvering valves, was designed and assembled in order to conduct the research. This system was used to simulate different flow conditions of petroleum compounds and for experimental studies of flow profile in the horizontal and upward directions. As ¹⁹⁸Au presents a single photopeak (411.8 keV), it was the radioisotope chosen for oil labeling, in small amounts (6 ml), with around 200 kBq activity, and it was injected into the oil transport lines. A NaI scintillation detector, 2″ × 2″, with well-defined geometry, was used to measure total activity, determine the calibration factor F and, positioned after a homogenization distance and interconnected to a standardized electronic set of nuclear instrumentation modules (NIM), to detect the radioactive cloud. (author)

  5. Water turnover rate and total body water affected by different physiological factors under Egyptian environmental conditions

    International Nuclear Information System (INIS)

    Kamal, T.H.

    1982-01-01

    The tritiated water dilution technique was used to determine the total body water (TBW) and water turnover rate (WTR), which is assumed to be similar to water intake, in water buffalo, Red Danish cattle, fat-tailed Osemi sheep and crossed Nubian-Bedouin goats and camels (Camelus dromedarius). There was a significant (P < 0.05) effect of species on TBW and WTR. The combined data of buffalo, cattle and sheep revealed a significant (P < 0.05) effect of pregnancy on TBW, but not on WTR. The combined data of buffalo and cattle showed a significantly lower TBW (P < 0.01) and a higher WTR (P < 0.05) in lactating animals than in heifers. In buffalo WTR was on average 81% higher in summer grazing (SG) than in spring. It was also 118 and 20% higher in summer non-grazing (SNG), than in either spring or SG, respectively. The differences between treatments in heifers, pregnant and lactating, were significant (P<0.01), except between spring and SG in heifers. The TBW was on average 12% higher in SG than in spring. It was also 18 and 5% higher in SNG than in either spring or SG, respectively. The differences between treatments in heifers, pregnant and lactating, were significant, except between SG and SNG in heifers and lactating cows and between spring and SG in lactating cows. (author)

  6. The Differences in Error Rate and Type between IELTS Writing Bands and Their Impact on Academic Workload

    Science.gov (United States)

    Müller, Amanda

    2015-01-01

    This paper attempts to demonstrate the differences in writing between International English Language Testing System (IELTS) bands 6.0, 6.5 and 7.0. An analysis of exemplars provided from the IELTS test makers reveals that IELTS 6.0, 6.5 and 7.0 writers can make a minimum of 206 errors, 96 errors and 35 errors per 1000 words. The following section…

  7. Analysis of error patterns in clinical radiotherapy

    International Nuclear Information System (INIS)

    Macklis, Roger; Meier, Tim; Barrett, Patricia; Weinhous, Martin

    1996-01-01

    Purpose: Until very recently, prescription errors and adverse treatment events have rarely been studied or reported systematically in oncology. We wished to understand the spectrum and severity of radiotherapy errors that take place on a day-to-day basis in a high-volume academic practice and to understand the resource needs and quality assurance challenges placed on a department by rapid upswings in contract-based clinical volumes requiring additional operating hours, procedures, and personnel. The goal was to define clinical benchmarks for operating safety and to detect error-prone treatment processes that might function as 'early warning' signs. Methods: A multi-tiered prospective and retrospective system for clinical error detection and classification was developed, with formal analysis of the antecedents and consequences of all deviations from prescribed treatment delivery, no matter how trivial. A department-wide record-and-verify system was operational during this period and was used as one method of treatment verification and error detection. Brachytherapy discrepancies were analyzed separately. Results: During the analysis year, over 2000 patients were treated with over 93,000 individual fields. A total of 59 errors affecting a total of 170 individual treated fields were reported or detected during this period. After review, all of these errors were classified as Level 1 (minor discrepancy with essentially no potential for negative clinical implications). This total treatment delivery error rate (170/93,332, or 0.18%) is significantly better than corresponding error rates reported for other hospital and oncology treatment services, perhaps reflecting the relatively sophisticated error avoidance and detection procedures used in modern clinical radiation oncology. Error rates were independent of linac model and manufacturer, time of day (normal operating hours versus late evening or early morning) or clinical machine volumes.
There was some relationship to

  8. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    Science.gov (United States)

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-05-01

    Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. Type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of FST between the ancestral European and African populations. Type-II error rate was investigated for a SNP characterized by a high value of FST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in type-I error rate.
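    For readers unfamiliar with the divergence measure discussed above, Wright's FST for a single biallelic SNP can be sketched directly from the allele frequencies of two populations. The short Python example below is illustrative only (the function name and the equal-population-size assumption are not from the paper); it computes FST = (HT - HS) / HT from expected heterozygosities:

    ```python
    def fst_two_pops(p1, p2):
        """Wright's FST for a biallelic SNP, given the allele frequency in
        two equally sized populations: FST = (HT - HS) / HT."""
        p_bar = (p1 + p2) / 2.0
        h_t = 2.0 * p_bar * (1.0 - p_bar)            # expected heterozygosity, pooled
        h_s = (2*p1*(1-p1) + 2*p2*(1-p2)) / 2.0      # mean within-population heterozygosity
        return (h_t - h_s) / h_t

    # A strongly differentiated SNP (frequencies 0.1 vs 0.5) versus an
    # undifferentiated one (0.3 vs 0.3):
    print(round(fst_two_pops(0.1, 0.5), 3), fst_two_pops(0.3, 0.3))
    ```

    SNPs with large frequency differences between ancestral populations (high FST) are exactly the ones most prone to stratification artifacts in admixed samples.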

  9. Outlier removal, sum scores, and the inflation of the Type I error rate in independent samples t tests: the power of alternatives and recommendations.

    Science.gov (United States)

    Bakker, Marjan; Wicherts, Jelte M

    2014-09-01

    In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
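    The inflation mechanism described above is easy to reproduce in simulation. The sketch below is a hypothetical setup, not the authors' code: both groups are drawn from the same skewed (lognormal) null distribution, outliers are removed in a single pass at |Z| > 2, and a normal approximation is used for the t test's two-sided p-value. Because the null is true, every rejection is a Type I error, and the empirical rate rises above the nominal 5%:

    ```python
    import numpy as np
    from math import erf, sqrt

    rng = np.random.default_rng(42)

    def two_sided_p(x, y):
        """Pooled-variance independent-samples t statistic with a normal
        approximation for the two-sided p-value (adequate at these n)."""
        nx, ny = len(x), len(y)
        sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
        t = (x.mean() - y.mean()) / sqrt(sp2 * (1.0 / nx + 1.0 / ny))
        return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(t) / sqrt(2.0))))

    def remove_outliers(a, z=2.0):
        """Single-pass removal of points more than z sample SDs from the mean."""
        zs = (a - a.mean()) / a.std(ddof=1)
        return a[np.abs(zs) < z]

    trials, alpha, n = 3000, 0.05, 50
    hits = 0
    for _ in range(trials):
        # Both groups come from the same skewed distribution, so any
        # "significant" difference is a false positive.
        x = rng.lognormal(0.0, 1.0, n)
        y = rng.lognormal(0.0, 1.0, n)
        if two_sided_p(remove_outliers(x), remove_outliers(y)) < alpha:
            hits += 1

    print(f"empirical Type I error rate after Z=2 removal: {hits / trials:.3f}")
    ```

    Because the distribution is right-skewed, removal is one-sided and the cutoff is sample-dependent, which is what drives the inflation the article reports.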

  10. The effect of speaking rate on serial-order sound-level errors in normal healthy controls and persons with aphasia.

    Science.gov (United States)

    Fossett, Tepanta R D; McNeil, Malcolm R; Pratt, Sheila R; Tompkins, Connie A; Shuster, Linda I

    Although many speech errors can be generated at either a linguistic or motoric level of production, phonetically well-formed sound-level serial-order errors are generally assumed to result from disruption of phonologic encoding (PE) processes. An influential model of PE (Dell, 1986; Dell, Burger & Svec, 1997) predicts that speaking rate should affect the relative proportion of these serial-order sound errors (anticipations, perseverations, exchanges). These predictions have been extended to, and have special relevance for, persons with aphasia (PWA) because of the increased frequency with which speech errors occur and because their localization within the functional linguistic architecture may help in diagnosis and treatment. Supporting evidence regarding the effect of speaking rate on phonological encoding has been provided by studies using young normal language (NL) speakers and computer simulations. Limited data exist for older NL users and no group data exist for PWA. This study tested the phonologic encoding properties of Dell's model of speech production (Dell, 1986; Dell et al., 1997), which predicts that increasing speaking rate affects the relative proportion of serial-order sound errors (i.e., anticipations, perseverations, and exchanges). The effects of speech rate on the error ratios of anticipation/exchange (AE), anticipation/perseveration (AP) and vocal reaction time (VRT) were examined in 16 normal healthy controls (NHC) and 16 PWA without concomitant motor speech disorders. The participants were recorded performing a phonologically challenging (tongue twister) speech production task at their typical and two faster speaking rates. A significant effect of increased rate was obtained for the AP but not the AE ratio. Significant effects of group and rate were obtained for VRT. 
Although the significant effect of rate for the AP ratio provided evidence that changes in speaking rate did affect PE, the results failed to support the model derived predictions

  11. 38 CFR 3.342 - Permanent and total disability ratings for pension purposes.

    Science.gov (United States)

    2010-07-01

    ... applied with other types of disabilities requiring hospitalization for indefinite periods. The need for... permanency of total disability contained in § 3.340, the following special considerations apply in pension... permanence of total disability will be established as of the earliest date consistent with the evidence in...

  12. Kinematic analysis of the gait of adult sheep during treadmill locomotion: Parameter values, allowable total error, and potential for use in evaluating spinal cord injury.

    Science.gov (United States)

    Safayi, Sina; Jeffery, Nick D; Shivapour, Sara K; Zamanighomi, Mahdi; Zylstra, Tyler J; Bratsch-Prince, Joshua; Wilson, Saul; Reddy, Chandan G; Fredericks, Douglas C; Gillies, George T; Howard, Matthew A

    2015-11-15

    We are developing a novel intradural spinal cord (SC) stimulator designed to improve the treatment of intractable pain and the sequelae of SC injury. In-vivo ovine models of neuropathic pain and moderate SC injury are being implemented for pre-clinical evaluations of this device, to be carried out via gait analysis before and after induction of the relevant condition. We extend previous studies on other quadrupeds to extract the three-dimensional kinematics of the limbs over the gait cycle of sheep walking on a treadmill. Quantitative measures of thoracic and pelvic limb movements were obtained from 17 animals. We calculated the total-error values to define the analytical performance of our motion capture system for these kinematic variables. The post- vs. pre-injury time delay between contralateral thoracic and pelvic-limb steps for normal and SC-injured sheep increased by ~24s over 100 steps. The pelvic limb hoof velocity during swing phase decreased, while range of pelvic hoof elevation and distance between lateral pelvic hoof placements increased after SC injury. The kinematics measures in a single SC-injured sheep can be objectively defined as changed from the corresponding pre-injury values, implying utility of this method to assess new neuromodulation strategies for specific deficits exhibited by an individual. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. 75 FR 47258 - Determination of Total Amounts of Fiscal Year 2011 Tariff-Rate Quotas for Raw Cane Sugar and...

    Science.gov (United States)

    2010-08-05

    ... Determination of Total Amounts of Fiscal Year 2011 Tariff-Rate Quotas for Raw Cane Sugar and Certain Sugars...) 2011 in-quota aggregate quantity of the raw, as well as, refined and specialty sugar Tariff-Rate Quotas (TRQ) as required under the U.S. World Trade Organization (WTO) commitments. The FY 2011 raw cane sugar...

  14. Laboratory Bioaccumulation, Depuration And Total Dose Rate Of Waterborne Th-232 In Freshwater Fish Of Anabas Testudineus

    International Nuclear Information System (INIS)

    Zal U'yun Wan Mahmood; Norfaizal Mohamed; Nita Salina Abu Bakar

    2014-01-01

    Preliminary results from a study of the bioaccumulation, depuration and total dose rate of Th-232 in the whole body of Anabas testudineus are presented. The objective of this study was to evaluate the effect of Th-232 activity concentration on laboratory bioaccumulation, depuration and total dose rate in Anabas testudineus. Adult Anabas testudineus were exposed to different waterborne Th-232 levels: 0 Bq L⁻¹ (control), 50 Bq L⁻¹ and 100 Bq L⁻¹ for 30 days (uptake phase), followed by exposure to radionuclide-free water for 30 days (loss phase). Radionuclide concentration ratios between whole-body levels and water levels and the percentage of Th-232 remaining in fish were calculated, and total dose rates were also estimated using the ERICA Assessment Tool. The results showed that increasing waterborne Th-232 concentration corresponded to a progressive increase of Th accumulation and total dose rate (internal and external) in the whole body of Anabas testudineus. Considering the ERICA dose rate screening value of 10 μGy h⁻¹, it can be concluded that the estimated total dose rate (< 5 μGy h⁻¹) in Anabas testudineus is small in magnitude. Nevertheless, these preliminary results showed that Anabas testudineus has the potential to accumulate thorium. (author)

  15. Liquid chromatography-tandem mass spectrometry multiresidue method for the analysis of quaternary ammonium compounds in cheese and milk products: Development and validation using the total error approach.

    Science.gov (United States)

    Slimani, Kahina; Féret, Aurélie; Pirotais, Yvette; Maris, Pierre; Abjean, Jean-Pierre; Hurtaud-Pessel, Dominique

    2017-09-29

    Quaternary ammonium compounds (QACs) are both cationic surfactants and biocidal substances widely used as disinfectants in the food industry. A sensitive and reliable method for the analysis of benzalkonium chlorides (BACs) and dialkyldimethylammonium chlorides (DDACs) has been developed that enables the simultaneous quantitative determination of ten quaternary ammonium residues in dairy products below the provisional maximum residue level (MRL), set at 0.1 mg kg⁻¹. To the best of our knowledge, this method could be the first applicable to milk and to the three selected major processed milk products, namely processed or hard pressed cheeses and whole milk powder. The method comprises solvent extraction using a mixture of acetonitrile and ethyl acetate, without any further clean-up. Analyses were performed by liquid chromatography coupled with electrospray tandem mass spectrometry detection (LC-ESI-MS/MS) operating in positive mode. A C18 analytical column was used for chromatographic separation, with a mobile phase composed of acetonitrile and water, both containing 0.3% formic acid, and methanol in gradient mode. Five deuterated internal standards were added to obtain the most accurate quantification. Extraction recoveries were satisfactory and no matrix effects were observed. The method was validated using the total error approach in accordance with the NF V03-110 standard in order to characterize the trueness, repeatability, intermediate precision and analytical limits within the range of 5-150 μg kg⁻¹ for all matrices. These performance criteria, calculated with e.noval® 3.0 software, were satisfactory and in full accordance with the proposed provisional MRL and with the recommendations in the European Union SANTE/11945/2015 regulatory guidelines. The limit of detection (LOD) was low, making the method suitable for determining quaternary ammoniums in foodstuffs from the dairy industry at residue levels; it could be used in biocide residue monitoring plans and to measure consumer exposure to biocide products.

  16. Bit-error-rate performance analysis of self-heterodyne detected radio-over-fiber links using phase and intensity modulation

    DEFF Research Database (Denmark)

    Yin, Xiaoli; Yu, Xianbin; Tafur Monroy, Idelfonso

    2010-01-01

    We theoretically and experimentally investigate the performance of two self-heterodyne detected radio-over-fiber (RoF) links employing phase modulation (PM) and quadrature biased intensity modulation (IM), in terms of bit-error-rate (BER) and optical signal-to-noise-ratio (OSNR). In both links, self...

  17. Downlink Error Rates of Half-duplex Users in Full-duplex Networks over a Laplacian Inter-User Interference Limited and EGK fading

    KAUST Repository

    Soury, Hamza; Elsawy, Hesham; Alouini, Mohamed-Slim

    2017-01-01

    This paper develops a mathematical framework to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). The developed model is used to motivate long term pairing for users that have

  18. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations

    NARCIS (Netherlands)

    Derks, E. M.; Zwinderman, A. H.; Gamazon, E. R.

    2017-01-01

    Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates;

  19. Predictors of low self-rated health in patients aged 65+ after total hip replacement (THA)

    DEFF Research Database (Denmark)

    Hørdam, Britta; Hemmingsen, Lars

    2013-01-01

    predicting low self-rated health after surgery. Material and method: A cross-sectional study including 287 patients aged 65+, who had had THR within 12-months were performed. Patients from five Danish counties received a mailed questionnaire assessing health status and demographic data. Short Form-36...

  20. Predicting temporal trends in total absenteeism rates for civil service employees of a federal public health agency.

    Science.gov (United States)

    Spears, D Ross; McNeil, Carrie; Warnock, Eli; Trapp, Jonathan; Oyinloye, Oluremi; Whitehurst, Vanessa; Decker, K C; Chapman, Sandy; Campbell, Morris; Meechan, Paul

    2014-06-01

    This study evaluates the predictability of temporal absence trends due to all causes (total absenteeism) among employees at a federal agency. The objective is to determine how leave trends vary within the year, and whether those trends are predictable. Ten years of absenteeism data from an attendance system were analyzed for rates of total absence. Trends over the 10-year period followed predictable and regular patterns during a given year that correspond to major holiday periods. Temporal trends in leave among small, medium, and large facilities compared favorably with the agency as a whole. Temporal trends in total absenteeism rates for an organization can be determined using its attendance system. The ability to predict employee absenteeism rates can be extremely helpful for management in optimizing business performance and ensuring that an organization meets its mission.

  1. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. 
Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  2. Total dose and dose rate radiation characterization of EPI-CMOS radiation hardened memory and microprocessor devices

    International Nuclear Information System (INIS)

    Gingerich, B.L.; Hermsen, J.M.; Lee, J.C.; Schroeder, J.E.

    1984-01-01

    The process, circuit description, and total dose radiation characteristics are presented for two second-generation hardened 4K EPI-CMOS RAMs and a first-generation 80C85 microprocessor. Total dose radiation performance is presented to 10 Mrad(Si), and the effects of biasing and operating conditions are discussed. The dose rate sensitivity of the 4K RAMs is also presented, along with single event upset (SEU) test data.

  3. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    Science.gov (United States)

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. It is, however, proven in (Stat Med 23:1023-1038, 2004) that when observations follow a normal distribution and the interim result shows promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
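    Under the Brownian-motion formulation commonly used for interim analyses of this kind, the "promising" condition has a simple closed form: assuming the current trend continues, conditional power exceeds 50% exactly when the interim z statistic exceeds z_alpha times the square root of the information fraction. The following is a minimal sketch of that calculation (function names and the one-sided z_alpha = 1.96 are illustrative assumptions, not taken from the paper):

    ```python
    from math import erf, sqrt

    def phi(x):
        """Standard normal CDF."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def conditional_power(z_interim, frac, z_alpha=1.96):
        """Conditional power at information fraction `frac`, assuming the
        observed interim effect (current trend) continues to the final look:
        CP = 1 - Phi((z_alpha - z_interim / sqrt(frac)) / sqrt(1 - frac))."""
        return 1.0 - phi((z_alpha - z_interim / sqrt(frac)) / sqrt(1.0 - frac))

    def promising(z_interim, frac, z_alpha=1.96):
        """CP > 50% is algebraically equivalent to z_interim > z_alpha * sqrt(frac)."""
        return z_interim > z_alpha * sqrt(frac)

    z1, t = 1.5, 0.5  # interim z = 1.5 halfway through the planned sample
    cp = conditional_power(z1, t)
    print(f"conditional power = {cp:.2f}, promising = {promising(z1, t)}")
    ```

    Expressing the criterion in terms of the interim test statistic, as the article does, avoids having to compute conditional power explicitly at the interim look.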

  4. Impact of Inpatient Versus Outpatient Total Joint Arthroplasty on 30-Day Hospital Readmission Rates and Unplanned Episodes of Care.

    Science.gov (United States)

    Springer, Bryan D; Odum, Susan M; Vegari, David N; Mokris, Jeffrey G; Beaver, Walter B

    2017-01-01

    This article describes a study comparing 30-day readmission rates between patients undergoing outpatient versus inpatient total hip (THA) and knee (TKA) arthroplasty. A retrospective review of 137 patients undergoing outpatient total joint arthroplasty (TJA) and 106 patients undergoing inpatient (minimum 2-day hospital stay) TJA was conducted. Unplanned hospital readmissions and unplanned episodes of care were recorded. All patients completed a telephone survey. Seven inpatients and 16 outpatients required hospital readmission or an unplanned episode of care following hospital discharge. Readmission rates were higher for TKA than THA. The authors found no statistical differences in 30-day readmission or unplanned care episodes. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Resident Physicians' Clinical Training and Error Rate: The Roles of Autonomy, Consultation, and Familiarity with the Literature

    Science.gov (United States)

    Naveh, Eitan; Katz-Navon, Tal; Stern, Zvi

    2015-01-01

    Resident physicians' clinical training poses unique challenges for the delivery of safe patient care. Residents face special risks of involvement in medical errors since they have tremendous responsibility for patient care, yet they are novice practitioners in the process of learning and mastering their profession. The present study explores…

  6. The effects of dose rate in total body irradiation of dogs

    International Nuclear Information System (INIS)

    Kolb, H.J.; Bodenberger, U.; Holler, E.; Thierfelder, S.; Eckstein, R.

    1986-01-01

    In summary, the studies in dogs show that the dose rate or exposure time has a great impact on survival of acute radiation syndromes. In contrast, the inactivation of colony-forming hemopoietic precursors is less influenced by the dose rate. The potential of hemopoietic recovery is determined by the survival of hemopoietic precursor cells. Therefore, in patients with a suspected whole body exposure of more than 1.50 Gy, bacterial and fungal decontamination and reverse isolation in a sterile environment have to be started immediately. Human patients treated with about 10 Gy of TBI frequently developed nausea, elevated temperatures and swelling of the parotid glands on the first and second day. The extent of these changes varies from patient to patient. The temperature is rarely elevated above 38.5 °C. The swelling of the parotid glands and the nausea subside within 48 hours. The presence of such systemic symptoms may suggest exposure to a lethal dose of radiation. The disappearance of immature red cells, i.e. reticulocytes, and band forms of granulocytes within the first 5 days supports this suggestion. HLA typing of the victim and his family should be performed as soon as possible after the accident. An HLA-identical sibling would be a suitable bone marrow donor. Unlike therapeutic TBI, accidental exposures bring about uncertainties in the calculation of dose, dose distribution and dose rate. Early after irradiation, biological changes are extremely variable. Both biological and physical data have to be considered when microbiological decontamination, reverse isolation and transplantation of bone marrow are to be decided upon. Obviously these intensive therapeutic efforts are limited to a small number of victims. (orig.)

  7. SU-G-BRB-03: Assessing the Sensitivity and False Positive Rate of the Integrated Quality Monitor (IQM) Large Area Ion Chamber to MLC Positioning Errors

    Energy Technology Data Exchange (ETDEWEB)

    Boehnke, E McKenzie; DeMarco, J; Steers, J; Fraass, B [Cedars-Sinai Medical Center, Los Angeles, CA (United States)

    2016-06-15

    Purpose: To examine both the IQM’s sensitivity and false positive rate to varying MLC errors. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An un-modified SBRT Liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1mm to ±5mm). These unmodified and modified plans were measured multiple times each by the IQM (a large area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field’s delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed significant (p <0.005) abilities to predict MLC errors. Using the area under the curve, we show the IQM’s ability to detect errors increases with increasing MLC error (Spearman’s Rho=0.8056, p<0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven to be able to detect not only MLC errors, but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.

  8. A system for accurate on-line measurement of total gas consumption or production rates in microbioreactors

    NARCIS (Netherlands)

    van Leeuwen, Michiel; Heijnen, Joseph J.; Gardeniers, Johannes G.E.; Oudshoorn, Arthur; Noorman, Henk; Visser, Jan; van der Wielen, Luuk A.M.; van Gulik, Walter M.

    2009-01-01

    A system has been developed, based on pressure controlled gas pumping, for accurate measurement of total gas consumption or production rates in the nmol/min range, applicable for on-line monitoring of bioconversions in microbioreactors. The system was validated by carrying out a bioconversion with

  9. Monitoring of German fertility: Estimation of monthly and yearly total fertility rates on the basis of preliminary monthly data

    NARCIS (Netherlands)

    G. Doblhammer (Gabriele); Milewski, N. (Nadja); F. Peters (Frederick)

    2010-01-01

    textabstractThis paper introduces a set of methods for estimating fertility indicators in the absence of recent and short-term birth statistics. For Germany, we propose a set of straightforward methods that allow for the computation of monthly and yearly total fertility rates (mTFR) on the basis of

  10. Low revision rate after total hip arthroplasty in patients with pediatric hip diseases

    DEFF Research Database (Denmark)

    Engesæter, Lars B; Engesæter, Ingvild Ø; Fenstad, Anne Marie

    2012-01-01

    Background The results of primary total hip arthroplasties (THAs) after pediatric hip diseases such as developmental dysplasia of the hip (DDH), slipped capital femoral epiphysis (SCFE), or Perthes' disease have been reported to be inferior to the results after primary osteoarthritis of the hip (OA.......9%) were operated due to pediatric hip diseases (3.1% for Denmark, 8.8% for Norway, and 1.9% for Sweden) and 288,435 THAs (77.8%) were operated due to OA. Unadjusted 10-year Kaplan-Meier survival of THAs after pediatric hip diseases (94.7% survival) was inferior to that after OA (96.6% survival......). Consequently, an increased risk of revision for hips with a previous pediatric hip disease was seen (risk ratio (RR) 1.4, 95% CI: 1.3-1.5). However, after adjustment for differences in sex and age of the patients, and in fixation of the prostheses, no difference in survival was found (93.6% after pediatric hip...

  11. A study of total measurement error in tomographic gamma scanning to assay nuclear material with emphasis on a bias issue for low-activity samples

    International Nuclear Information System (INIS)

    Burr, T.L.; Mercer, D.J.; Prettyman, T.H.

    1998-01-01

    Field experience with the tomographic gamma scanner to assay nuclear material suggests that the analysis techniques can significantly impact the assay uncertainty. For example, currently implemented image reconstruction methods exhibit a positive bias for low-activity samples. Preliminary studies indicate that bias reduction could be achieved at the expense of increased random error variance. In this paper, the authors examine three possible bias sources: (1) measurement error in the estimated transmission matrix, (2) the positivity constraint on the estimated mass of nuclear material, and (3) improper treatment of the measurement error structure. The authors present results from many small-scale simulation studies to examine this bias/variance tradeoff for a few image reconstruction methods in the presence of the three possible bias sources

  12. Throughput Estimation Method in Burst ACK Scheme for Optimizing Frame Size and Burst Frame Number Appropriate to SNR-Related Error Rate

    Science.gov (United States)

    Ohteru, Shoko; Kishine, Keiji

    The Burst ACK scheme enhances effective throughput by reducing ACK overhead when a transmitter sends multiple data frames sequentially to a destination. IEEE 802.11e is one such example. The size of the data frame body and the number of burst data frames are important burst transmission parameters that affect throughput. The larger the burst transmission parameters are, the better the throughput under error-free conditions becomes. However, a large data frame could reduce throughput under error-prone conditions caused by signal-to-noise ratio (SNR) deterioration. If the throughput can be calculated from the burst transmission parameters and error rate, the appropriate ranges of the burst transmission parameters could be narrowed down, and the necessary buffer size for temporarily storing transmitted or received data could be estimated. In this paper, we present a method that features a simple algorithm for estimating the effective throughput from the burst transmission parameters and error rate. The calculated throughput values agree well with the measured ones for actual wireless boards implementing an original IEEE 802.11-based MAC protocol. We also calculate throughput values for larger values of the burst transmission parameters outside the assignable values of the wireless boards and find the appropriate values of the burst transmission parameters.
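    The trade-off the abstract describes can be illustrated with a simplified goodput model: assuming independent bit errors, the frame error rate grows with frame size, so large frames and long bursts win on a clean channel but lose as the BER rises. This is a sketch only, not the authors' algorithm; all timing constants, the 288-bit header, and the single-acknowledgement overhead below are illustrative assumptions:

    ```python
    def frame_error_rate(ber, payload_bytes, header_bits=288):
        """FER of one frame, assuming independent bit errors."""
        bits = payload_bytes * 8 + header_bits
        return 1.0 - (1.0 - ber) ** bits

    def burst_throughput(ber, payload_bytes, n_frames, phy_rate=54e6,
                         sifs=16e-6, ack_time=44e-6, overhead=100e-6):
        """Expected goodput (bit/s) of a burst of n_frames data frames
        followed by one acknowledgement, under independent frame errors."""
        fer = frame_error_rate(ber, payload_bytes)
        t_frame = (payload_bytes * 8 + 288) / phy_rate + sifs
        airtime = n_frames * t_frame + ack_time + overhead
        good_bits = n_frames * payload_bytes * 8 * (1.0 - fer)
        return good_bits / airtime

    # Large frames help on a clean channel but hurt when the BER rises:
    clean = burst_throughput(ber=1e-7, payload_bytes=1500, n_frames=10)
    noisy = burst_throughput(ber=1e-4, payload_bytes=1500, n_frames=10)
    small = burst_throughput(ber=1e-4, payload_bytes=300, n_frames=10)
    print(f"{clean/1e6:.1f} Mb/s, {noisy/1e6:.1f} Mb/s, {small/1e6:.2f} Mb/s")
    ```

    Sweeping such a model over frame size and burst length is one way to narrow down the appropriate parameter ranges for a given SNR-related error rate, as the paper proposes.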

  13. Utilizing the Total Design Method in medicine: maximizing response rates in long, non-incentivized, personal questionnaire postal surveys.

    Science.gov (United States)

    Kazzazi, Fawz; Haggie, Rebecca; Forouhi, Parto; Kazzazi, Nazar; Malata, Charles M

    2018-01-01

    Maximizing response rates in questionnaires can improve their validity and quality by reducing non-response bias. A comprehensive analysis is essential for producing reasonable conclusions in patient-reported outcome research particularly for topics of a sensitive nature. This often makes long (≥7 pages) questionnaires necessary but these have been shown to reduce response rates in mail surveys. Our work adapted the "Total Design Method," initially produced for commercial markets, to raise response rates in a long (total: 11 pages, 116 questions), non-incentivized, very personal postal survey sent to almost 350 women. A total of 346 women who had undergone mastectomy and immediate breast reconstruction from 2008-2014 (inclusive) at Addenbrooke's University Hospital were sent our study pack (Breast-Q satisfaction questionnaire and support documents) using our modified "Total Design Method." Participants were sent packs and reminders according to our designed schedule. Of the 346 participants, we received 258 responses, an overall response rate of 74.5% with a useable response rate of 72.3%. One hundred and six responses were received before the week 1 reminder (30.6%), 120 before week 3 (34.6%), 225 before the week 7 reminder (64.6%) and the remainder within 3 weeks of the final pack being sent. The median age of patients that the survey was sent to, and the median age of the respondents, was 54 years. In this study, we have demonstrated the successful implementation of a novel approach to postal surveys. Despite the length of the questionnaire (nine pages, 116 questions) and limitations of expenses to mail a survey to ~350 women, we were able to attain a response rate of 74.6%.

  14. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
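The qualitative difference between coherent and Pauli-twirled noise can be seen already on a single qubit. This is a hedged illustration, not the paper's repetition-code calculation: a coherent over-rotation accumulates in amplitude (failure probability ~(kε)²), while its Pauli twirl accumulates probabilities (~kε²).

```python
import numpy as np

# Hedged single-qubit illustration (NOT the paper's repetition-code analysis):
# compare k repetitions of a coherent rotation by eps with k repetitions of
# its Pauli-twirled (incoherent) approximation.

eps = 0.01  # systematic rotation angle per cycle (radians), illustrative

def coherent_error_prob(k):
    # k coherent rotations by eps compose into one rotation by k*eps,
    # so the flip probability is sin^2(k*eps/2): quadratic growth in k.
    return np.sin(k * eps / 2) ** 2

def pauli_error_prob(k):
    # Pauli twirl: flip with probability p = sin^2(eps/2) each cycle;
    # probability of an odd number of flips after k cycles: ~linear growth.
    p = np.sin(eps / 2) ** 2
    return (1 - (1 - 2 * p) ** k) / 2

# The two channels agree for a single cycle but diverge rapidly with k,
# which is why the Pauli approximation has only a finite regime of validity.
print(coherent_error_prob(100), pauli_error_prob(100))
```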

  15. Quantitative estimation of 123I-MIBG scintigraphy in neuroblastoma. Usefulness of the total body retention rate

    International Nuclear Information System (INIS)

    Okuyama, Chio; Ushijima, Yo; Sugihara, Hiroki; Nishimura, Tunehiko

    2000-01-01

    A new method of easily and simply quantifying ¹²³I-MIBG accumulation as a criterion for curative effect of chemotherapy in infants with neuroblastoma was developed. This method uses the data from two images: an early image (at 5-7.5 hr) and a delayed image (at 25-32 hr). Twenty infants with untreated neuroblastoma which showed clear accumulation of ¹²³I-MIBG at the primary site were examined. The differences between the accumulation counts on the early image and the delayed image showed that washout of ¹²³I-MIBG in the neuroblastomas was delayed in tumor regions. This finding indicated that the total body ¹²³I-MIBG retention rate reflects the total volume of the neuroblastoma. The ¹²³I-MIBG retention rate was significantly higher in patients with advanced stage neuroblastoma with systemic metastases, and there was a good correlation between the retention rate and tumor markers (VMA and HVA values in urine). The response to chemotherapy paralleled the change in markers. These results suggested that the total body ¹²³I-MIBG retention rate is useful as a criterion for curative effect in advanced neuroblastoma. (K.H.)
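A whole-body retention rate of the kind described here is essentially a decay-corrected ratio of delayed to early counts. The sketch below is a hedged assumption about the form of such a calculation, not the authors' exact protocol; only the I-123 physical half-life (~13.22 h) is a known constant.

```python
import math

# Hedged sketch of a whole-body retention-rate calculation (the correction
# form and the count values are illustrative assumptions, not the authors'
# protocol). Retention = decay-corrected ratio of delayed to early counts.

T_HALF_I123_H = 13.22  # physical half-life of I-123, hours

def retention_rate(early_counts, delayed_counts, hours_between):
    # Correct the delayed image for physical decay of I-123 so that the
    # ratio reflects biological retention only.
    decay = math.exp(-math.log(2) * hours_between / T_HALF_I123_H)
    return delayed_counts / (early_counts * decay)

# Early image at ~6 h and delayed image at ~30 h post-injection (24 h apart):
print(round(retention_rate(1.0e6, 2.0e5, 24.0), 3))
```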

  16. Real-time soft error rate measurements on bulk 40 nm SRAM memories: a five-year dual-site experiment

    Science.gov (United States)

    Autran, J. L.; Munteanu, D.; Moindjie, S.; Saad Saoud, T.; Gasiot, G.; Roche, P.

    2016-11-01

    This paper reports five years of real-time soft error rate experimentation conducted with the same setup, at mountain altitude for three years and then at sea level for two years. More than 7 Gbit of SRAM memories manufactured in CMOS bulk 40 nm technology have been subjected to the natural radiation background. The intensity of the atmospheric neutron flux was continuously measured on site during these experiments using dedicated neutron monitors. As a result, the neutron and alpha components of the soft error rate (SER) have been extracted very accurately from these measurements, refining the first SER estimations performed in 2012 for this SRAM technology. Data obtained at sea level evidence, for the first time, a possible correlation between the neutron flux changes induced by daily atmospheric pressure variations and the measured SER. Finally, all of the experimental data are compared with results obtained from accelerated tests and numerical simulation.

  17. Interference in the gg→h→γγ On-Shell Rate and the Higgs Boson Total Width.

    Science.gov (United States)

    Campbell, John; Carena, Marcela; Harnik, Roni; Liu, Zhen

    2017-11-03

    We consider interference between the Higgs signal and QCD background in gg→h→γγ and its effect on the on-shell Higgs rate. The existence of sizable strong phases leads to destructive interference of about 2% of the on-shell cross section in the standard model. This effect can be enhanced by beyond the standard model physics. In particular, since it scales differently from the usual rates, the presence of interference allows indirect limits to be placed on the Higgs width in a novel way, using on-shell rate measurements. Our study motivates further QCD calculations to reduce uncertainties. We discuss possible width-sensitive observables, both using total and differential rates and find that the HL-LHC can potentially indirectly constrain widths of order tens of MeV.

  18. Who Do Hospital Physicians and Nurses Go to for Advice About Medications? A Social Network Analysis and Examination of Prescribing Error Rates.

    Science.gov (United States)

    Creswick, Nerida; Westbrook, Johanna Irene

    2015-09-01

    To measure the weekly medication advice-seeking networks of hospital staff, to compare patterns across professional groups, and to examine these in the context of prescribing error rates. A social network analysis was conducted. All 101 staff in 2 wards in a large, academic teaching hospital in Sydney, Australia, were surveyed (response rate, 90%) using a detailed social network questionnaire. The extent of weekly medication advice seeking was measured by the density of connections, reciprocity (the proportion of reciprocal relationships), in-degree (the number of colleagues to whom each person provided advice), and perceptions of the amount and impact of advice seeking between physicians and nurses. Data on prescribing error rates from the 2 wards were compared. Weekly medication advice-seeking networks were sparse (density: 7% in ward A and 12% in ward B). Information sharing across professional groups was modest, and rates of reciprocation of advice were low (9% ward A, 14% ward B). Pharmacists provided advice to most people, and junior physicians also played central roles. Senior physicians provided medication advice to few people. Many staff perceived that physicians rarely sought advice from nurses when prescribing, but almost all believed that an increase in communication between physicians and nurses about medications would improve patient safety. The medication networks in ward B had higher density and reciprocation measures and fewer senior physicians who were isolates. Ward B had a significantly lower rate of both procedural and clinical prescribing errors than ward A (0.63 clinical prescribing errors per admission [95% CI, 0.47-0.79] versus 1.81 per admission [95% CI, 1.49-2.13]). Medication advice-seeking networks among staff on hospital wards are limited. Hubs of advice provision include pharmacists, junior physicians, and senior nurses. Senior physicians are poorly integrated into medication advice networks. 
Strategies to improve the advice-giving networks between senior

  19. Anesthesia Preoperative Clinic Referral for Elevated Hba1c Reduces Complication Rate in Diabetic Patients Undergoing Total Joint Arthroplasty

    OpenAIRE

    Kallio, Peter J.; Nolan, Jenea; Olsen, Amy C.; Breakwell, Susan; Topp, Richard; Pagel, Paul S.

    2015-01-01

    Background: Diabetes mellitus (DM) is risk factor for complications after orthopedic surgery. Objectives: We tested the hypothesis that anesthesia preoperative clinic (APC) referral for elevated glycosylated hemoglobin (HbA1c) reduces complication rate after total joint arthroplasty (TJA). Patients and Methods: Patients (n = 203) with and without DM were chosen from 1,237 patients undergoing TJA during 2006 - 12. Patients evaluated in the APC had surgery in 2006 - 8 regardless of HbA1c (uncon...

  20. Warming and organic matter sources impact the proportion of dissolved to total activities in marine extracellular enzymatic rates

    KAUST Repository

    Baltar, Federico

    2017-04-19

    Extracellular enzymatic activities (EEAs) are the rate-limiting step in the degradation of organic matter. Extracellular enzymes can be found associated with cells or dissolved in the surrounding water. In many marine environments, cell-free EEA constitutes more than half of the total activity. This high proportion causes an uncoupling between hydrolysis rates and the actual bacterial activity. However, we do not know what factors control the proportion of dissolved relative to total EEA, nor how this may change in the future ocean. To resolve this, we performed laboratory experiments with water from the Great Barrier Reef (Australia) to study the effects of temperature and dissolved organic matter sources on EEA and the proportion of dissolved EEA. We found that warming increases the rates of organic matter hydrolysis and reduces the proportion of dissolved relative to total EEA. This suggests a potential increase in the coupling between organic matter hydrolysis and heterotrophic activities with increasing ocean temperatures, although this is strongly dependent on the organic matter substrates available. Our study suggests that local differences in organic matter composition in tropical coastal ecosystems will strongly affect the proportion of dissolved EEA in response to ocean warming.

  1. Effect Of Adding Sago Flour In Yoghurt Based On Viscosity, Overrun, Melting Rate And Total Solid Of Yoghurt Ice Cream

    Directory of Open Access Journals (Sweden)

    Ika Ayu Wijayanti

    2017-03-01

    The purpose of this research was to find the best concentration of sago flour added to yoghurt, based on the viscosity, overrun, melting rate and total solids of yoghurt ice cream. The experiment used a Completely Randomized Design (CRD) with four treatments (0%, 2%, 4% and 6% of the volume of fresh milk) and four replications. The data were analyzed using Analysis of Variance (ANOVA) followed by Duncan’s Multiple Range Test (DMRT). The results showed that the concentration of sago flour added to yoghurt had a highly significant effect (P<0.01) on the viscosity, overrun, melting rate and total solids of yoghurt ice cream. It can be concluded that adding 2% sago flour to yoghurt gave the best result, with a viscosity of 1750.75 cP, overrun of 25.14%, melting rate of 39.13 minutes/50 g and total solids of 36.20%, and gave the best quality of yoghurt ice cream.

  2. Estimates of the Tempo-adjusted Total Fertility Rate in Western and Eastern Germany, 1955-2008

    Directory of Open Access Journals (Sweden)

    Marc Luy

    2011-09-01

    In this article we present estimates of the tempo-adjusted total fertility rate in Western and Eastern Germany from 1955 to 2008. Tempo adjustment of the total fertility rate (TFR) requires data on the annual number of births by parity and age of the mother. Since official statistics provide such data neither for West Germany nor for Eastern Germany from 1990 onwards, we used alternative data sources that include these specific characteristics. The combined picture of the conventional TFR and the tempo-adjusted TFR* provides interesting information about trends in period fertility in Western and Eastern Germany, above all with regard to the differences between the two regions and the enormous extent of tempo effects in Eastern Germany during the 1990s. Compared to corresponding data for populations of other countries, our estimates of the tempo-adjusted TFR* for Eastern and Western Germany show plausible trends. Nevertheless, it is important to note that the estimates of the tempo-adjusted total fertility rate presented in this paper should not be seen as being on the level of, or equivalent to, official statistics, since they are based on different kinds of data with different degrees of quality.
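The tempo-adjusted TFR* referred to here is conventionally computed with the Bongaarts-Feeney adjustment, TFR*ᵢ = TFRᵢ / (1 − rᵢ), where rᵢ is the yearly change in the mean age at childbearing at parity i. The sketch below shows that formula with made-up illustrative numbers, not German data.

```python
# Hedged sketch of the standard Bongaarts-Feeney tempo adjustment underlying
# a tempo-adjusted TFR*. All numbers are invented for illustration.

def tempo_adjusted_tfr(tfr_by_parity, mac_prev, mac_next):
    """TFR* = sum_i TFR_i / (1 - r_i), with r_i = (MAC_i(t+1) - MAC_i(t-1)) / 2."""
    tfr_star = 0.0
    for parity, tfr_i in tfr_by_parity.items():
        r = (mac_next[parity] - mac_prev[parity]) / 2.0  # yearly change in mean age at childbearing
        tfr_star += tfr_i / (1.0 - r)
    return tfr_star

# Postponement (rising mean ages at childbearing) depresses the observed TFR,
# so TFR* comes out higher than the conventional TFR.
tfr = {1: 0.70, 2: 0.45, 3: 0.15}          # parity-specific TFRs, conventional TFR = 1.30
mac_before = {1: 26.0, 2: 28.5, 3: 30.5}   # mean age at childbearing in year t-1
mac_after = {1: 26.2, 2: 28.7, 3: 30.6}    # mean age at childbearing in year t+1
print(round(tempo_adjusted_tfr(tfr, mac_before, mac_after), 3))
```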

  3. A software solution to estimate the SEU-induced soft error rate for systems implemented on SRAM-based FPGAs

    International Nuclear Information System (INIS)

    Wang Zhongming; Lu Min; Yao Zhibin; Guo Hongxia

    2011-01-01

    SRAM-based FPGAs are very susceptible to radiation-induced Single-Event Upsets (SEUs) in space applications. The failure mechanism in an FPGA's configuration memory differs from those in traditional memory devices. As a result, there is a growing demand for methodologies that can quantitatively evaluate the impact of this effect. Fault injection appears to meet this requirement. In this paper, we propose a new methodology to analyze the soft errors in SRAM-based FPGAs. The method is based on an in-depth understanding of the device architecture and of the failure mechanisms induced by configuration upsets. The developed programs read in the placed and routed netlist, search for critical logic nodes and paths that may destroy the circuit's topological structure, and then query a database storing the decoded relationship between the configurable resources and the corresponding control bits to obtain the sensitive bits. Accelerator irradiation tests and fault injection experiments were carried out to validate this approach. (semiconductor integrated circuits)

  4. Error Rates of M-PAM and M-QAM in Generalized Fading and Generalized Gaussian Noise Environments

    KAUST Repository

    Soury, Hamza

    2013-07-01

    This letter investigates the average symbol error probability (ASEP) of pulse amplitude modulation and quadrature amplitude modulation coherent signaling over flat fading channels subject to additive white generalized Gaussian noise. The new ASEP results are derived in a generic closed-form in terms of the Fox H function and the bivariate Fox H function for the extended generalized-K fading case. The utility of this new general closed-form is that it includes some special fading distributions, like the Generalized-K, Nakagami-m, and Rayleigh fading and special noise distributions such as Gaussian and Laplacian. Some of these special cases are also treated and are shown to yield simplified results.

  5. Errors in Computing the Normalized Protein Catabolic Rate due to Use of Single-pool Urea Kinetic Modeling or to Omission of the Residual Kidney Urea Clearance.

    Science.gov (United States)

    Daugirdas, John T

    2017-07-01

    The protein catabolic rate normalized to body size (PCRn) is often computed in dialysis units to obtain information about protein ingestion. However, errors can manifest when inappropriate modeling methods are used. We used a variable volume 2-pool urea kinetic model to examine the percent errors in PCRn due to use of a 1-pool urea kinetic model or after omission of residual kidney urea clearance (Kru). When a single-pool model was used, 2 sources of error were identified. The first, dependent on the ratio of dialyzer urea clearance to urea distribution volume (K/V), resulted in a 7% inflation of the PCRn when K/V was in the range of 6 mL/min per L. A second, larger error appeared when Kt/V values were below 1.0 and was related to underestimation of urea distribution volume (due to overestimation of effective clearance) by the single-pool model. A previously reported prediction equation for PCRn was valid, but the data suggest that it should be modified using 2-pool eKt/V and V coefficients instead of single-pool values. A third source of error, unrelated to use of a single-pool model, namely omission of Kru, was shown to result in an underestimation of PCRn, such that each mL/min of Kru per 35 L of V caused a 5.6% underestimate in PCRn. Marked overestimation of PCRn can result from inappropriate use of a single-pool urea kinetic model, particularly when Kt/V <1.0 (as in short daily dialysis), or after omission of residual native kidney clearance. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
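The Kru rule of thumb stated in this abstract can be turned into a small correction sketch. This is a hedged illustration applying only the abstract's stated factor (5.6% per mL/min of Kru per 35 L of V); the function name and example values are invented, not from the paper.

```python
# Hedged sketch applying the correction stated in the abstract: omitting
# residual kidney urea clearance (Kru) underestimates PCRn by ~5.6% per
# mL/min of Kru per 35 L of urea distribution volume V. Names and example
# inputs are illustrative assumptions.

def pcrn_with_kru(pcrn_omitting_kru, kru_ml_min, v_liters):
    underestimate = 0.056 * kru_ml_min * (35.0 / v_liters)
    return pcrn_omitting_kru * (1.0 + underestimate)

# A patient with Kru = 2 mL/min and V = 35 L: a Kru-blind PCRn of
# 1.0 g/kg/day is about 11% too low.
print(round(pcrn_with_kru(1.0, 2.0, 35.0), 3))  # → 1.112
```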

  6. Reliability of perceived neighbourhood conditions and the effects of measurement error on self-rated health across urban and rural neighbourhoods.

    Science.gov (United States)

    Pruitt, Sandi L; Jeffe, Donna B; Yan, Yan; Schootman, Mario

    2012-04-01

    Limited psychometric research has examined the reliability of self-reported measures of neighbourhood conditions, the effect of measurement error on associations between neighbourhood conditions and health, and potential differences in the reliabilities between neighbourhood strata (urban vs rural and low vs high poverty). We assessed overall and stratified reliability of self-reported perceived neighbourhood conditions using five scales (social and physical disorder, social control, social cohesion, fear) and four single items (multidimensional neighbouring). We also assessed measurement error-corrected associations of these conditions with self-rated health. Using random-digit dialling, 367 women without breast cancer (matched controls from a larger study) were interviewed twice, 2-3 weeks apart. Test-retest (intraclass correlation coefficients (ICC)/weighted κ) and internal consistency reliability (Cronbach's α) were assessed. Differences in reliability across neighbourhood strata were tested using bootstrap methods. Regression calibration corrected estimates for measurement error. All measures demonstrated satisfactory internal consistency (α ≥ 0.70) and either moderate (ICC/κ=0.41-0.60) or substantial (ICC/κ=0.61-0.80) test-retest reliability in the full sample. Internal consistency did not differ by neighbourhood strata. Test-retest reliability was significantly lower among rural (vs urban) residents for two scales (social control, physical disorder) and two multidimensional neighbouring items; test-retest reliability was higher for physical disorder and lower for one multidimensional neighbouring item among the high (vs low) poverty strata. After measurement error correction, the magnitude of associations between neighbourhood conditions and self-rated health were larger, particularly in the rural population. Research is needed to develop and test reliable measures of perceived neighbourhood conditions relevant to the health of rural populations.

  7. Study of systematic errors in the determination of total Hg levels in the range -5% in inorganic and organic matrices with two reliable spectrometrical determination procedures

    International Nuclear Information System (INIS)

    Kaiser, G.; Goetz, D.; Toelg, G.; Max-Planck-Institut fuer Metallforschung, Stuttgart; Knapp, G.; Maichin, B.; Spitzy, H.

    1978-01-01

    In the determination of Hg at ng/g and pg/g levels, systematic errors are due to faults in the analytical methods, such as intake, preparation and decomposition of a sample. The sources of these errors have been studied both with ²⁰³Hg radiotracer techniques and with two multi-stage procedures developed for the determination of trace levels. The emission spectrometric (OES-MIP) procedure includes incineration of the sample in a microwave-induced oxygen plasma (MIP), isolation and enrichment on a gold absorbent, and excitation in an argon plasma (MIP). The emitted Hg radiation (253.7 nm) is evaluated photometrically with a semiconductor element. The detection limit of the OES-MIP procedure was found to be 0.01 ng, with a coefficient of variation of 5% for 1 ng Hg. The second procedure combines a semi-automated wet digestion method (HClO3/HNO3) with reduction-aeration (ascorbic acid/SnCl2) and the flameless atomic absorption technique (253.7 nm). The detection limit of this procedure was found to be 0.5 ng, with a coefficient of variation of 5% for 5 ng Hg. (orig.) [de

  8. Comparison of 3 Types of Readmission Rates for Measuring Hospital and Surgeon Performance After Primary Total Hip and Knee Arthroplasty.

    Science.gov (United States)

    Bottle, Alex; Loeffler, Mark D; Aylin, Paul; Ali, Adam M

    2018-02-26

    All-cause 30-day hospital readmission is in widespread use for monitoring and incentivizing hospital performance for patients undergoing total hip arthroplasty (THA) and total knee arthroplasty (TKA). However, little is known on the extent to which all-cause readmission is influenced by hospital or surgeon performance and whether alternative measures may be more valid. This is an observational study using multilevel modeling on English administrative data to determine the interhospital and intersurgeon variation for 3 readmission metrics: all-cause, surgical, and return-to-theater. Power calculations estimated the likelihood of identifying whether the readmission rate for a surgeon or hospital differed from the national average by a factor of 1.25, 1.5, 2, or 3 times, for both average and high-volume providers. 259,980 THAs and 311,033 TKAs were analyzed. Variations by both surgeons and hospitals were smaller for the all-cause measure than for the surgical or return-to-theater metrics, although statistical power to detect differences was higher. Statistical power to detect surgeon-level rates of 1.25 or 1.5 times the average was consistently low. However, at the hospital level, the surgical readmission measure showed more variation by hospital while maintaining excellent power to detect differences in rates between hospitals performing the average number of THA or TKA cases per year in England. In practice, more outliers than expected from purely random variation were found for all-cause and surgical readmissions, especially at hospital level. The 30-day surgical readmission rate should be considered as an adjunctive measure to 30-day all-cause readmission rate when assessing hospital performance. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Comparison of reconstructed radial pin total fission rates with experimental results in full scale BWR fuel elements

    Energy Technology Data Exchange (ETDEWEB)

    Giust, Flavio [Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Ecole Polytechnique Federale de Lausanne, CH-1015 Lausanne (Switzerland); Nordostschweizerische Kraftwerke AG, Parkstrasse 23, CH-5401 Baden (Switzerland); Grimm, Peter; Jatuff, Fabian [Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Chawla, Rakesh [Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Ecole Polytechnique Federale de Lausanne, CH-1015 Lausanne (Switzerland)

    2008-07-01

    Total fission rate measurements have been performed on full size BWR fuel assemblies of type SVEA-96+ in the zero power reactor PROTEUS at the Paul Scherrer Institute. This work presents comparisons of reconstructed 2D pin fission rates in two configurations, I-1A and I-2A. Both configurations contain, in the central test zone, an array of 3x3 SVEA-96+ fuel elements moderated with light water at 20 deg. C. In configuration I-2A, an L-shaped hafnium control blade (half of a real cruciform blade) is inserted adjacent to the NW corner of the central fuel element. To minimize the impact of the surroundings, all measurements were done in fuel pins belonging to the central assembly. The 3x3 experimental configuration was modeled using the core monitoring and design tools that are applied at the Leibstadt Nuclear Power Plant (KKL). These are the 2D transport code HELIOS, used for the cross-section generation, and the 3D, 2-group nodal diffusion code PRESTO-2. The exterior is represented, in the axial and radial directions, by 2-group albedos calculated at the test zone boundary using a full-core 3D MCNPX model. The calculated-to-experimental (C/E) ratios of the total fission rates have a standard deviation of 1.3% in configuration I-1A (uncontrolled) and 3.2% in configuration I-2A (controlled). Sensitivity cases are analyzed to show the impact of certain parameters on the calculated fission rate distribution and reactivity. It is shown that the relative pin fission rate is only weakly dependent on these parameters. In cases without a control blade, the pin power reconstruction methodology delivers the same level of accuracy as 2D transport calculations. On the other hand, significant deviations, that are inherent to the use of reflected geometry in the lattice calculations, are observed in cases when the control blade is inserted. (authors)

  10. Effect of γ-dose rate and total dose interrelation on the polymeric hydrogel: A novel injectable male contraceptive

    International Nuclear Information System (INIS)

    Jha, Pradeep K.; Jha, Rakhi; Gupta, B.L.; Guha, Sujoy K.

    2010-01-01

    The functional necessity of using a particular range of dose rate and total dose of γ-initiated polymerization to manufacture a novel polymeric hydrogel, RISUG (reversible inhibition of sperm under guidance), made of styrene maleic anhydride (SMA) dissolved in dimethyl sulphoxide (DMSO), for its broad biomedical application opens a new dimension of research. The present work involves 16 irradiated samples. They were tested by Fourier transform infrared spectroscopy, matrix-assisted laser desorption/ionization-TOF, field emission scanning electron microscopy, high resolution transmission electron microscopy, etc., to examine the interrelated effects of gamma dose rates (8.25, 17.29, 20.01 and 25.00 Gy/min) and four sets of doses (1.8, 2.0, 2.2 and 2.4 kGy) on the molecular weight, molecular weight distribution and porosity of the biopolymeric drug RISUG. The results of the randomized experiment indicated that a γ-dose rate of 18-24 Gy/min and a γ-total dose of 2.0-2.4 kGy are suitable for the desirable in vivo performance of the contraceptive copolymer.

  11. Fractional rate of degradation (kd) of starch in the rumen and its relation to in vivo rumen and total digestibility

    DEFF Research Database (Denmark)

    Hvelplund, Torben; Larsen, Mogens; Lund, Peter

    2009-01-01

    in different ways both chemically and physically. The starch sources were fed in mixed diets together with grass silage and soya bean meal and allocated ad libitum to fistulated dairy cows. The starch content varied between 13 and 35% in ration dry matter for the different starch sources. The design...... was a series of cross-over experiments with two cows and two periods. Ruminal starch pool was estimated from rumen evacuation and starch flow was estimated by duodenal and faeces sampling. Fractional rate of rumen degradation was estimated from the equation [kd = rumen degraded/rumen pool] and rumen and total...

  12. Application of Fermat's Principle to Calculation of the Errors of Acoustic Flow-Rate Measurements for a Three-Dimensional Fluid Flow or Gas

    Science.gov (United States)

    Petrov, A. G.; Shkundin, S. Z.

    2018-01-01

    Fermat's variational principle is used for derivation of the formula for the time of propagation of a sonic signal between two set points A and B in a steady three-dimensional flow of a fluid or gas. It is shown that the fluid flow changes the time of signal reception by a value proportional to the flow rate independently of the velocity profile. The time difference in the reception of the signals from point B to point A and vice versa is proportional with a high accuracy to the flow rate. It is shown that the relative error of the formula does not exceed the square of the largest Mach number. This makes it possible to measure the flow rate of a fluid or gas with an arbitrary steady subsonic velocity field.
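The transit-time principle in this abstract can be checked numerically: the difference between upstream and downstream propagation times is proportional to the flow velocity, and the linearized inversion has a relative error of order M². The geometry and speeds below are invented for illustration.

```python
# Hedged numerical check of the transit-time principle described above
# (geometry and speeds are illustrative assumptions, not from the paper).

c = 340.0   # speed of sound, m/s
L = 0.5     # path length between points A and B, m
v = 10.0    # flow velocity component along the path, m/s

t_down = L / (c + v)   # signal travelling with the flow (A -> B)
t_up = L / (c - v)     # signal travelling against the flow (B -> A)
dt = t_up - t_down     # exact difference: 2*L*v / (c^2 - v^2)

v_est = dt * c ** 2 / (2 * L)   # linearized inversion: v ~ dt * c^2 / (2L)
mach = v / c
rel_err = abs(v_est - v) / v    # equals M^2 / (1 - M^2), i.e. of order M^2

print(v_est, rel_err, mach ** 2)
```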

  13. Radiobiological basis of total body irradiation with different dose rate and fractionation: repair capacity of hemopoietic cells

    International Nuclear Information System (INIS)

    Song, C.W.; Kim, T.H.; Khan, F.M.; Kersey, J.H.; Levitt, S.H.

    1981-01-01

    Total body irradiation (TBI) followed by bone marrow transplantation is being used in the treatment of malignant or non-malignant hemopoietic disorders. It has been believed that the ability of hemopoietic cells to repair sublethal radiation damage is negligible. Therefore, several schools of investigators suggested that TBI in a single exposure at extremely low dose rate (5 rad/min) over several hours, or in several fractions in 2-3 days, should yield a higher therapeutic gain, as compared with a single exposure at a high dose rate (26 rad/min). We reviewed the existing data in the literature, in particular, the response of hemopoietic cells to fractionated doses of irradiation and found that the repair capacity of both malignant and non-malignant hemopoietic cells might be greater than has been thought. It is concluded that we should not underestimate the ability of hemopoietic cells to repair sublethal radiation damage in using TBI

  14. Monitoring of German Fertility: Estimation of Monthly and Yearly Total Fertility Rates on the Basis of Preliminary Monthly Data

    Directory of Open Access Journals (Sweden)

    Gabriele Doblhammer

    2011-02-01

    This paper introduces a set of methods for estimating fertility indicators in the absence of recent, short-term birth statistics. For Germany, we propose a set of straightforward methods that allow for the computation of monthly and yearly total fertility rates (mTFR) on the basis of preliminary monthly data, including a confidence interval. The method for estimating the most current fertility rates can be applied when no information on the age structure and the number of women exposed to childbearing is available. The methods introduced in this study are useful for calculating monthly birth indicators, with minimal requirements for data quality and statistical effort. In addition, we suggest an approach for projecting the yearly TFR based on preliminary monthly information up to June.

  15. Social life factors affecting the mortality, longevity, and birth rate of total Japanese population: effects of rapid industrialization and urbanization.

    Science.gov (United States)

    Araki, S; Uchida, E; Murata, K

    1990-12-01

    employment were positively related to the birth rate. The birth rate is higher in rural areas. Mortality of professional, engineering, and administrative workers was slightly lower than the total working population, while sales workers, those in farming, fishing, and forestry, and in personal and domestic service had significantly higher mortality. The mortality of the nonworking population was 6-8 times higher than sales, transportation, and communication, and personal and domestic service as well as the total population.

  16. Why are autopsy rates low in Japan? Views of ordinary citizens and doctors in the case of unexpected patient death and medical error.

    Science.gov (United States)

    Maeda, Shoichi; Kamishiraki, Etsuko; Starkey, Jay; Ikeda, Noriaki

    2013-01-01

    This article examines what could account for the low autopsy rate in Japan based on the findings from an anonymous, self-administered, structured questionnaire that was given to a sample population of the general public and physicians in Japan. The general public and physicians indicated that autopsy may not be carried out because: (1) conducting an autopsy might result in the accusation that patient death was caused by a medical error even when there was no error (50.4% vs. 13.1%, respectively), (2) suggesting an autopsy makes the families suspicious of a medical error even when there was none (61.0% vs. 19.1%, respectively), (3) families do not want the body to be damaged by autopsy (81.6% vs. 87.3%, respectively), and (4) families do not want to make the patient suffer any more in addition to what he/she has already endured (61.8% vs. 87.1%, respectively). © 2013 American Society for Healthcare Risk Management of the American Hospital Association.

  17. An evaluation of a Low-Dose-Rate (LDR) brachytherapy procedure using a systems engineering & error analysis methodology for health care (SEABH) - (SAVE)

    LENUS (Irish Health Repository)

    Chadwick, Liam

    2012-03-12

Health Care Failure Modes and Effects Analysis (HFMEA®) is an established tool for risk assessment in health care, but a number of deficiencies have been identified in the method. A new method called Systems and Error Analysis Bundle for Health Care (SEABH) was developed to address these deficiencies. SEABH has been applied to a number of medical processes as part of its validation and testing. One of these, Low Dose Rate (LDR) prostate brachytherapy, is reported in this paper. The case study supported the validity of SEABH with respect to its capacity to address the weaknesses of HFMEA®.

  18. Evaluation of errors in prior mean and variance in the estimation of integrated circuit failure rates using Bayesian methods

    Science.gov (United States)

    Fletcher, B. C.

    1972-01-01

The critical point of any Bayesian analysis concerns the choice and quantification of the prior information. The effects of prior data on a Bayesian analysis are studied. Comparisons of the maximum likelihood estimator, the Bayesian estimator, and the known failure rate are presented. The results of the many simulated trials are then analyzed to show the region of criticality for prior information being supplied to the Bayesian estimator. In particular, the effects of prior mean and variance are determined as a function of the amount of test data available.
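
    As an illustration of the region-of-criticality idea, the following sketch (an assumed gamma-Poisson conjugate setup, not necessarily the model used in the study) compares the maximum likelihood estimator with a Bayesian posterior-mean estimator whose gamma prior is specified by its mean and variance; a confidently wrong prior (small variance, biased mean) dominates sparse test data:

```python
# Sketch: gamma-conjugate Bayesian failure-rate estimate vs. the MLE.
# For Poisson failure counts, a gamma(alpha, beta) prior on the rate
# gives a gamma(alpha + failures, beta + exposure) posterior.

def gamma_prior_params(prior_mean, prior_var):
    """Convert a prior mean and variance into gamma shape/rate parameters."""
    beta = prior_mean / prior_var      # rate parameter
    alpha = prior_mean * beta          # shape parameter
    return alpha, beta

def bayes_rate_estimate(failures, exposure, prior_mean, prior_var):
    """Posterior mean of the failure rate (failures per unit exposure)."""
    alpha, beta = gamma_prior_params(prior_mean, prior_var)
    return (alpha + failures) / (beta + exposure)

def mle_rate_estimate(failures, exposure):
    return failures / exposure

# 3 failures observed in 1000 device-hours.
mle = mle_rate_estimate(3, 1000.0)                        # 0.003 per hour
good_prior = bayes_rate_estimate(3, 1000.0, 0.004, 1e-6)  # prior near truth
bad_prior = bayes_rate_estimate(3, 1000.0, 0.02, 1e-6)    # tight, biased prior
```

    With this little test data, `bad_prior` stays near the erroneous prior mean (about 0.019) rather than the data-driven 0.003, which is the criticality effect the abstract describes; as exposure grows, the likelihood term overwhelms the prior.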

  19. Anesthesia Preoperative Clinic Referral for Elevated Hba1c Reduces Complication Rate in Diabetic Patients Undergoing Total Joint Arthroplasty.

    Science.gov (United States)

    Kallio, Peter J; Nolan, Jenea; Olsen, Amy C; Breakwell, Susan; Topp, Richard; Pagel, Paul S

    2015-06-01

Diabetes mellitus (DM) is a risk factor for complications after orthopedic surgery. We tested the hypothesis that anesthesia preoperative clinic (APC) referral for elevated glycosylated hemoglobin (HbA1c) reduces the complication rate after total joint arthroplasty (TJA). Patients (n = 203) with and without DM were chosen from 1,237 patients undergoing TJA during 2006 - 12. Patients evaluated in the APC had surgery in 2006 - 8 regardless of HbA1c (uncontrolled). Those evaluated in the subsequent two-year intervals were referred to primary care for HbA1c ≥ 10% and ≥ 8%, respectively, to improve DM control before surgery. Complications and mortality were quantified postoperatively and at three, six, and twelve months. Length of stay (LOS) and patients requiring a prolonged LOS (> 5 days) were recorded. Patients (197 men, 6 women) underwent 71, 131, and 1 total hip, knee, and shoulder replacements, respectively. Patients undergoing TJA with uncontrolled HbA1c and those with HbA1c patients without DM. An increase in complication rate was observed in DM patients with uncontrolled HbA1c versus patients without DM (P patients with preoperative HbA1c that was uncontrolled or ≥ 10% required prolonged LOS versus those without DM (P diabetics undergoing TJA.

  20. Effects of seeding date and seeding rate on yield, proximate composition and total tannins content of two Kabuli chickpea cultivars

    Directory of Open Access Journals (Sweden)

    Roberto Ruggeri

    2017-09-01

Full Text Available Experiments were conducted in the open field to assess the effect of seeding season and density on the yield, chemical composition, and accumulation of total tannins in grains of two chickpea (Cicer arietinum L.) cultivars (Pascià and Sultano). Environmental conditions and genetic factors considerably affected grain yield, nutrient content, and total tannins content of chickpea seeds, giving a considerable range in qualitative characteristics. The results confirmed cultivar selection as a central factor when a late autumn-early winter sowing is performed: the more marked resistance of Sultano to Ascochyta blight (AB) allowed better agronomic performance when climatic conditions favourable to AB occurred. Winter sowing appeared to be the best choice in the Mediterranean environment when cultivating to maximise grain yield (+19%). Spring sowing improved crude protein (+10%) and crude fibre (+8%) content, whereas it did not significantly affect the accumulation of anti-nutrient compounds such as total tannins. The most appropriate seeding rate was 70 seeds m–2, considering that plant density had relatively little effect on the parameters studied.

  1. Increased error rates in preliminary reports issued by radiology residents working more than 10 consecutive hours overnight.

    Science.gov (United States)

    Ruutiainen, Alexander T; Durand, Daniel J; Scanlon, Mary H; Itri, Jason N

    2013-03-01

    To determine if the rate of major discrepancies between resident preliminary reports and faculty final reports increases during the final hours of consecutive 12-hour overnight call shifts. Institutional review board exemption status was obtained for this study. All overnight radiology reports interpreted by residents on-call between January 2010 and June 2010 were reviewed by board-certified faculty and categorized as major discrepancies if they contained a change in interpretation with the potential to impact patient management or outcome. Initial determination of a major discrepancy was at the discretion of individual faculty radiologists based on this general definition. Studies categorized as major discrepancies were secondarily reviewed by the residency program director (M.H.S.) to ensure consistent application of the major discrepancy designation. Multiple variables associated with each report were collected and analyzed, including the time of preliminary interpretation, time into shift study was interpreted, volume of studies interpreted during each shift, day of the week, patient location (inpatient or emergency department), block of shift (2-hour blocks for 12-hour shifts), imaging modality, patient age and gender, resident identification, and faculty identification. Univariate risk factor analysis was performed to determine the optimal data format of each variable (ie, continuous versus categorical). A multivariate logistic regression model was then constructed to account for confounding between variables and identify independent risk factors for major discrepancies. We analyzed 8062 preliminary resident reports with 79 major discrepancies (1.0%). There was a statistically significant increase in major discrepancy rate during the final 2 hours of consecutive 12-hour call shifts. Multivariate analysis confirmed that interpretation during the last 2 hours of 12-hour call shifts (odds ratio (OR) 1.94, 95% confidence interval (CI) 1.18-3.21), cross
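
    For a binary predictor such as "interpreted in the last 2 hours of the shift", the univariate risk-factor step described above reduces to an odds ratio with a Woolf confidence interval over a 2x2 table. The counts below are hypothetical (the study's raw table is not given in the abstract):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a = events in exposed, b = non-events in exposed,
    c = events in unexposed, d = non-events in unexposed."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 25 major discrepancies among 1300 late-shift reads
# versus 54 among 6762 earlier reads.
or_, lo, hi = odds_ratio_ci(25, 1300 - 25, 54, 6762 - 54)
```

    A multivariate logistic regression, as used in the study, then adjusts such univariate odds ratios for confounding between variables.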

  2. The sensitivity of bit error rate (BER) performance in multi-carrier (OFDM) and single-carrier

    Science.gov (United States)

    Albdran, Saleh; Alshammari, Ahmed; Matin, Mohammad

    2012-10-01

Recently, single-carrier and multi-carrier transmissions have attracted considerable attention in industrial systems. Theoretically, OFDM as a multi-carrier technique has advantages over single-carrier transmission, especially for high data rates. In this paper we show which of the two techniques outperforms the other. We study and compare the BER performance of both techniques for a given channel. The BER is measured and studied as a function of the signal-to-noise ratio (SNR). Also, the peak-to-average power ratio (PAPR) is examined and presented as a drawback of using OFDM. To make a reasonable comparison between the two techniques, we use an additive white Gaussian noise (AWGN) communication channel.
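
    Both quantities compared in the paper can be sketched in a few lines: the theoretical QPSK BER over AWGN as a function of Eb/N0, and the PAPR of an OFDM symbol built from a naive IDFT. This is an illustrative 64-subcarrier toy, not the paper's simulation setup:

```python
import cmath, math, random

def qpsk_ber_awgn(ebn0_db):
    """Theoretical QPSK bit error rate over AWGN: Q(sqrt(2*Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))

def papr_db(samples):
    """Peak-to-average power ratio of a sampled waveform, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def ofdm_symbol(symbols):
    """Time-domain OFDM symbol: naive O(N^2) IDFT of the subcarrier symbols."""
    n = len(symbols)
    return [sum(symbols[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

random.seed(1)
qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
tx = ofdm_symbol([random.choice(qpsk) for _ in range(64)])
```

    A constant-envelope single-carrier signal has 0 dB PAPR, while the summed subcarriers of `tx` give a PAPR of several dB, which is the OFDM drawback the paper examines.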

  3. Evaluation of the effect of noise on the rate of errors and speed of work by the ergonomic test of two-hand co-ordination

    Directory of Open Access Journals (Sweden)

    Ehsanollah Habibi

    2013-01-01

Full Text Available Background: Among the most important factors affecting the efficiency of the human workforce are accuracy, promptness, and ability. In the context of promoting the level and quality of productivity, the aim of this study was to investigate the effects of exposure to noise on the rate of errors, speed of work, and capability in performing manual activities. Methods: This experimental study was conducted on 96 students (52 female and 44 male) of the Isfahan Medical Science University with mean (standard deviation) age, height, and weight of 22.81 (3.04) years, 171.67 (8.51) cm, and 65.05 (13.13) kg, respectively. Sampling was conducted with a randomized block design. Along with controlling for intervening factors, a combination of sound pressure levels [65 dB(A), 85 dB(A), and 95 dB(A)] and exposure times (0, 20, and 40) was used for evaluation of the precision and speed of action of the participants in the ergonomic test of two-hand coordination. Data were analyzed with SPSS 18 software using descriptive and analytical statistics, with repeated-measures analysis of covariance (ANCOVA). Results: The results of this study showed that increasing the sound pressure level from 65 to 95 dB on the A-weighting network increased the speed of work (P 0.05. Male participants were annoyed by the noise more than females. Also, an increase in sound pressure level increased the rate of error (P < 0.05. Conclusions: According to the results of this research, increasing the sound pressure level decreased efficiency and increased errors; in exposure to sounds below 85 dB, efficiency initially decreased and then increased at a mild slope.

  4. Downlink Error Rates of Half-duplex Users in Full-duplex Networks over a Laplacian Inter-User Interference Limited and EGK fading

    KAUST Repository

    Soury, Hamza

    2017-03-14

This paper develops a mathematical framework to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). The developed model is used to motivate long-term pairing of users that have a non-line-of-sight (NLOS) interfering link. Consequently, we study the interferer-limited problem that appears between NLOS HD user pairs scheduled on the same FD channel. The distribution of the interference is first characterized via its distribution function, which is derived in closed form. Then, a comprehensive performance assessment for the proposed pairing scheme is provided by assuming Extended Generalized-$\mathcal{K}$ (EGK) fading for the downlink and studying different modulation schemes. To this end, a unified closed-form expression for the average symbol error rate is derived. Furthermore, we show the effective downlink throughput gain harvested by pairing NLOS users as a function of the average signal-to-interference ratio when compared to an idealized HD scenario with neither interference nor noise. Finally, we show the minimum required channel-gain pairing threshold to harvest downlink throughput via the FD operation when compared to the HD case for each modulation scheme.

  5. Improved read disturb and write error rates in voltage-control spintronics memory (VoCSM) by controlling energy barrier height

    Science.gov (United States)

    Inokuchi, T.; Yoda, H.; Kato, Y.; Shimizu, M.; Shirotori, S.; Shimomura, N.; Koi, K.; Kamiguchi, Y.; Sugiyama, H.; Oikawa, S.; Ikegami, K.; Ishikawa, M.; Altansargai, B.; Tiwari, A.; Ohsawa, Y.; Saito, Y.; Kurobe, A.

    2017-06-01

    A hybrid writing scheme that combines the spin Hall effect and voltage-controlled magnetic-anisotropy effect is investigated in Ta/CoFeB/MgO/CoFeB/Ru/CoFe/IrMn junctions. The write current and control voltage are applied to Ta and CoFeB/MgO/CoFeB junctions, respectively. The critical current density required for switching the magnetization in CoFeB was modulated 3.6-fold by changing the control voltage from -1.0 V to +1.0 V. This modulation of the write current density is explained by the change in the surface anisotropy of the free layer from 1.7 mJ/m2 to 1.6 mJ/m2, which is caused by the electric field applied to the junction. The read disturb rate and write error rate, which are important performance parameters for memory applications, are drastically improved, and no error was detected in 5 × 108 cycles by controlling read and write sequences.

  6. Research trend on human error reduction

    International Nuclear Information System (INIS)

    Miyaoka, Sadaoki

    1990-01-01

    Human error has been the problem in all industries. In 1988, the Bureau of Mines, Department of the Interior, USA, carried out the worldwide survey on the human error in all industries in relation to the fatal accidents in mines. There was difference in the results according to the methods of collecting data, but the proportion that human error took in the total accidents distributed in the wide range of 20∼85%, and was 35% on the average. The rate of occurrence of accidents and troubles in Japanese nuclear power stations is shown, and the rate of occurrence of human error is 0∼0.5 cases/reactor-year, which did not much vary. Therefore, the proportion that human error took in the total tended to increase, and it has become important to reduce human error for lowering the rate of occurrence of accidents and troubles hereafter. After the TMI accident in 1979 in USA, the research on man-machine interface became active, and after the Chernobyl accident in 1986 in USSR, the problem of organization and management has been studied. In Japan, 'Safety 21' was drawn up by the Advisory Committee for Energy, and also the annual reports on nuclear safety pointed out the importance of human factors. The state of the research on human factors in Japan and abroad and three targets to reduce human error are reported. (K.I.)

  7. Global minimum profile error (GMPE) - a least-squares-based approach for extracting macroscopic rate coefficients for complex gas-phase chemical reactions.

    Science.gov (United States)

    Duong, Minh V; Nguyen, Hieu T; Mai, Tam V-T; Huynh, Lam K

    2018-01-03

    Master equation/Rice-Ramsperger-Kassel-Marcus (ME/RRKM) has shown to be a powerful framework for modeling kinetic and dynamic behaviors of a complex gas-phase chemical system on a complicated multiple-species and multiple-channel potential energy surface (PES) for a wide range of temperatures and pressures. Derived from the ME time-resolved species profiles, the macroscopic or phenomenological rate coefficients are essential for many reaction engineering applications including those in combustion and atmospheric chemistry. Therefore, in this study, a least-squares-based approach named Global Minimum Profile Error (GMPE) was proposed and implemented in the MultiSpecies-MultiChannel (MSMC) code (Int. J. Chem. Kinet., 2015, 47, 564) to extract macroscopic rate coefficients for such a complicated system. The capability and limitations of the new approach were discussed in several well-defined test cases.
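
    In a much-simplified single-species, single-channel setting, the rate-extraction idea reduces to fitting one first-order rate coefficient to a time-resolved concentration profile. The sketch below uses a log-linear least-squares fit on synthetic data; GMPE itself minimizes profile error over multi-species, multi-channel ME output:

```python
import math

def fit_first_order_k(times, concentrations):
    """Least-squares fit of ln C = ln C0 - k*t (first-order decay);
    returns the rate coefficient k as minus the fitted slope."""
    xs = times
    ys = [math.log(c) for c in concentrations]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Synthetic noiseless profile with a known rate coefficient.
k_true = 2.5
ts = [0.1 * i for i in range(20)]
profile = [1.0 * math.exp(-k_true * t) for t in ts]
k_fit = fit_first_order_k(ts, profile)
```

    On noiseless data the fit recovers the input coefficient exactly; with real ME profiles the residual between fitted and simulated profiles is what a global minimizer such as GMPE drives down.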

  8. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,

  9. Estimation of salivary glucose, salivary amylase, salivary total protein and salivary flow rate in diabetics in India.

    Science.gov (United States)

    Panchbhai, Arati S; Degwekar, Shirish S; Bhowte, Rahul R

    2010-09-01

    Diabetes is known to influence salivary composition and function, eventually affecting the oral cavity. We thus evaluated saliva samples for levels of glucose, amylase and total protein, and assessed salivary flow rate in diabetics and healthy non-diabetics. We also analyzed these parameters with regard to duration and type of diabetes mellitus and gender, and aimed to assess the interrelationships among the variables included in the study. A total of 120 age- and sex-matched participants were divided into 3 groups of 40 each; the uncontrolled diabetic group, the controlled diabetic group and the healthy non-diabetic group. Salivary investigations were performed using unstimulated whole saliva. Mean salivary glucose levels were found to be significantly elevated in both uncontrolled and controlled diabetics, as compared to healthy non-diabetics. There were significant decreases in mean salivary amylase levels in controlled diabetics when compared to healthy non-diabetics. Other than salivary glucose, no other parameters were found to be markedly affected in diabetes mellitus. Further research is needed to explore the clinical implications of these study results.

  10. Total intravenous anaesthesia by boluses or by continuous rate infusion of propofol in mute swans (Cygnus olor).

    Science.gov (United States)

    Müller, Kerstin; Holzapfel, Judith; Brunnberg, Leo

    2011-07-01

To investigate intravenous (IV) propofol given by intermittent boluses or by continuous rate infusion (CRI) for anaesthesia in swans. Prospective randomized clinical study. Twenty mute swans (Cygnus olor) (eight immature and 12 adults) of unknown sex undergoing painless diagnostic or therapeutic procedures. Induction of anaesthesia was with 8 mg kg(-1) propofol IV. To maintain anaesthesia, ten birds (group BOLI) received propofol as boluses, whilst 10 (group CRI) received propofol as a CRI. Some physiological parameters were measured. Anaesthetic duration was 35 minutes. Groups were compared using the Mann-Whitney U-test. Results are median (range). Anaesthetic induction was smooth and tracheal intubation was achieved easily in all birds. Bolus dose in group BOLI was 2.9 (1.3-4.3) mg kg(-1); the interval between boluses and number of boluses required were 4 (1-8) minutes and 6 (4-11) boluses respectively. Total dose of propofol was 19 (12.3-37.1) mg kg(-1). Awakening between boluses was very abrupt. In group CRI, propofol infusion rate was 0.85 (0.8-0.9) mg kg(-1) minute(-1), and anaesthesia was stable. Body temperature, heart and respiratory rates, oxygen saturation (by pulse oximeter) and reflexes did not differ between groups. Oxygen saturations (from pulse oximeter readings) were low in some birds. Following anaesthesia, all birds recovered within 40 minutes. In 55% of all birds, transient signs of central nervous system excitement occurred during recovery. 8 mg kg(-1) propofol appears an adequate induction dose for mute swans. For maintenance, a CRI of 0.85 mg kg(-1) minute(-1) produced stable anaesthesia suitable for painless clinical procedures. In contrast, bolus administration was unsatisfactory as birds awoke very suddenly, and the short intervals between bolus requirements hampered clinical procedures. Administration of additional oxygen throughout anaesthesia might reduce the incidence of low arterial haemoglobin saturation. © 2011 The Authors. Veterinary Anaesthesia and Analgesia.
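
    For orientation, the total dose under the reported CRI follows directly from rate times duration. This arithmetic sketch combines the abstract's 8 mg/kg induction dose, 0.85 mg/kg/min infusion rate, and 35-minute anaesthetic duration; it is an illustration of the dose calculation, not a dosing recommendation:

```python
def total_propofol_dose(rate_mg_kg_min, minutes, induction_mg_kg=0.0):
    """Total propofol dose in mg/kg: induction bolus plus constant rate infusion."""
    return induction_mg_kg + rate_mg_kg_min * minutes

# Median CRI-group values from the abstract: 8 mg/kg induction,
# then 0.85 mg/kg/min for a 35-minute anaesthetic.
cri_total = total_propofol_dose(0.85, 35, induction_mg_kg=8.0)  # 37.75 mg/kg
```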

  11. Violation of the Sphericity Assumption and Its Effect on Type-I Error Rates in Repeated Measures ANOVA and Multi-Level Linear Models (MLM).

    Science.gov (United States)

    Haverkamp, Nicolas; Beauducel, André

    2017-01-01

    We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodical approaches of repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations without any between-group effect or within-subject effect 5,000 random samples were drawn. Finally, the mean Type I error rates for Multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound-symmetry (MLM-CS) and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser-correction, and Huynh-Feldt-correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results plead for a use of rANOVA with Huynh-Feldt-correction, especially when the sphericity assumption is violated, the sample size is rather small and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes ( n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The
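
    The core of such a simulation study is simple to state in code: draw many datasets under the null hypothesis, run the test on each, and report the fraction of rejections as the empirical Type I error rate. The sketch below uses a one-sample z-test with known variance instead of rANOVA/MLM, a deliberate simplification to keep the example self-contained:

```python
import math, random

def type_i_error_rate(n_sims=20000, n=20, crit=1.96, seed=7):
    """Monte Carlo Type I error: fraction of null datasets on which a
    two-sided one-sample z-test (known unit variance) rejects at the 5% level."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        z = (sum(sample) / n) * math.sqrt(n)  # standardized sample mean
        if abs(z) > crit:
            rejections += 1
    return rejections / n_sims

rate = type_i_error_rate()
```

    For a well-calibrated test the estimate hovers around the nominal 0.05; the study's finding is that rANOVA without correction and MLM-UN with small samples drift well above that nominal level when sphericity is violated.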

  12. The human error rate assessment and optimizing system HEROS - a new procedure for evaluating and optimizing the man-machine interface in PSA

    International Nuclear Information System (INIS)

    Richei, A.; Hauptmanns, U.; Unger, H.

    2001-01-01

A new procedure allowing the probabilistic evaluation and optimization of the man-machine system is presented. This procedure and the resulting expert system HEROS, which is an acronym for Human Error Rate Assessment and Optimizing System, is based on the fuzzy set theory. Most of the well-known procedures employed for the probabilistic evaluation of human factors involve the use of vague linguistic statements on performance shaping factors to select and to modify basic human error probabilities from the associated databases. This implies a large portion of subjectivity. Vague statements are expressed here in terms of fuzzy numbers or intervals which allow mathematical operations to be performed on them. A model of the man-machine system is the basis of the procedure. A fuzzy rule-based expert system was derived from ergonomic and psychological studies. Hence, it does not rely on a database, whose transferability to situations different from its origin is questionable. In this way, subjective elements are eliminated to a large extent. HEROS facilitates the importance analysis for the evaluation of human factors, which is necessary for optimizing the man-machine system. HEROS is applied to the analysis of a simple diagnosis task of the operating personnel in a nuclear power plant.

  13. Bit Error Rate Performance of a MIMO-CDMA System Employing Parity-Bit-Selected Spreading in Frequency Nonselective Rayleigh Fading

    Directory of Open Access Journals (Sweden)

    Claude D'Amours

    2011-01-01

Full Text Available We analytically derive the upper bound for the bit error rate (BER) performance of a single-user multiple-input multiple-output code division multiple access (MIMO-CDMA) system employing parity-bit-selected spreading in slowly varying, flat Rayleigh fading. The analysis is done for spatially uncorrelated links. The analysis presented demonstrates that parity-bit-selected spreading provides an asymptotic gain of 10log(Nt) dB over conventional MIMO-CDMA when the receiver has perfect channel estimates. This analytical result concurs with previous works where the BER is determined by simulation methods, and provides insight into why the different techniques provide improvement over conventional MIMO-CDMA systems.
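
    The asymptotic gain quoted above, 10log(Nt) dB, is straightforward to tabulate for a few antenna counts (a small helper assuming the base-10 logarithm, as is conventional for decibels):

```python
import math

def asymptotic_gain_db(nt):
    """Asymptotic SNR gain of parity-bit-selected spreading over
    conventional MIMO-CDMA: 10*log10(Nt) dB for Nt transmit antennas."""
    return 10 * math.log10(nt)

gains = {nt: asymptotic_gain_db(nt) for nt in (2, 4, 8)}
```

    Doubling the number of transmit antennas thus adds about 3 dB of asymptotic gain.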

  14. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

Moreover, a combination of component failure and human error is often found in spectacular events. In particular, the Rasmussen Report and the German Risk Assessment Study show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  15. Analyzing the propagation behavior of scintillation index and bit error rate of a partially coherent flat-topped laser beam in oceanic turbulence.

    Science.gov (United States)

    Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh

    2015-11-01

In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak to moderate oceanic turbulence is derived; consequently, using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strength of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness is found to have lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.
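
    The final step of such a pipeline, averaging a conditional error rate over a log-normal intensity distribution, can be sketched by Monte Carlo. The sketch below assumes a generic on-off-keying-style conditional BER Q(sqrt(SNR)·I) and a unit-mean log-normal intensity with scintillation index sigma_I^2; this is a simplification of the paper's PCFT-beam analysis, not a reproduction of it:

```python
import math, random

def q_func(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def avg_ber_lognormal(snr_db, scint_index, n=100000, seed=3):
    """Monte Carlo average of the conditional BER Q(sqrt(SNR)*I) over a
    unit-mean log-normal intensity I with scintillation index sigma_I^2."""
    sigma2 = math.log(1.0 + scint_index)  # log-variance for unit-mean I
    mu = -sigma2 / 2.0
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    total = 0.0
    for _ in range(n):
        intensity = math.exp(rng.gauss(mu, math.sqrt(sigma2)))
        total += q_func(math.sqrt(snr) * intensity)
    return total / n

ber_weak = avg_ber_lognormal(10.0, 0.05)   # weak scintillation
ber_strong = avg_ber_lognormal(10.0, 0.5)  # stronger scintillation
```

    Deep fades (small I) dominate the average, so the higher scintillation index yields a markedly worse mean BER, which is why a beam with lower scintillations (such as the flatter PCFT beam) performs better.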

  16. Comparison of pregnancy rates in pre-treatment male infertility and low total motile sperm count at insemination.

    Science.gov (United States)

    Xiao, Cheng Wei; Agbo, Chioma; Dahan, Michael H

    2016-01-01

    In intrauterine insemination (IUI), total motile sperm count (TMSC) is an important predictor of pregnancy. However, the clinical significance of a poor TMSC on the day of IUI in a patient with prior normal semen analysis (SA) is unclear. We performed this study to determine if these patients perform as poorly as those who had male factor infertility diagnosed prior to commencing treatment. 147 males with two abnormal SA based on the 2010 World Health Organization criteria underwent 356 IUI with controlled ovarian hyper-stimulation (COH). Their pregnancy rates were compared to 120 males who had abnormal TMSC at the time of 265 IUI with COH, in a retrospective university-based study. The two groups were comparable in female age (p = 0.11), duration of infertility (p = 0.17), previous pregnancies (p = 0.13), female basal serum FSH level (p = 0.54) and number of mature follicles on the day of ovulation trigger (p = 0.27). Despite better semen parameters on the day of IUI in the pre-treatment male factor infertility group (TMSC mean ± SD: 61 ± 30 million vs. 3.5 ± 2 million, p male factor infertility. More studies should be performed to confirm these findings.

  17. Total Absorption Spectroscopy Study of the Beta Decay of 60Mn to Constrain the Neutron Capture Rate of 60Fe

    Science.gov (United States)

    Richman, Debra; Spyrou, Artemis; Dombos, Alex; Couture, Aaron; e15034 Collaboration

    2017-09-01

Interest in 60Fe, a long-lived radioisotope synthesized in massive stars, has recently surged. The signature of its decay allows us to probe astrophysical processes and events such as the early formation of the solar system and ongoing nucleosynthesis. To interpret these observations, a complete understanding of the creation, destruction, and nuclear properties of 60Fe in the astrophysical environment is required. Studying the distribution of beta-decay intensity over the daughter nucleus 60Fe, using the beta decay of 60Mn in conjunction with total absorption spectroscopy (TAS), made possible by the high-efficiency gamma-ray calorimeter SuN (Summing NaI detector) at the National Superconducting Cyclotron Laboratory (NSCL), provides information about the structure of the daughter and improves the predictive power of astrophysical models. In addition to the ongoing TAS analysis, the Beta-Oslo method will be used to extract the nuclear level density and gamma-strength function of 60Fe, providing much-needed constraints on the neutron-capture reaction rate responsible for the creation of this nucleus.

  18. Estimating the reproductive number, total outbreak size, and reporting rates for Zika epidemics in South and Central America

    Directory of Open Access Journals (Sweden)

    Deborah P. Shutt

    2017-12-01

    Full Text Available As South and Central American countries prepare for increased birth defects from Zika virus outbreaks and plan for mitigation strategies to minimize ongoing and future outbreaks, understanding important characteristics of Zika outbreaks and how they vary across regions is a challenging and important problem. We developed a mathematical model for the 2015/2016 Zika virus outbreak dynamics in Colombia, El Salvador, and Suriname. We fit the model to publicly available data provided by the Pan American Health Organization, using Approximate Bayesian Computation to estimate parameter distributions and provide uncertainty quantification. The model indicated that a country-level analysis was not appropriate for Colombia. We then estimated the basic reproduction number to range between 4 and 6 for El Salvador and Suriname with a median of 4.3 and 5.3, respectively. We estimated the reporting rate to be around 16% in El Salvador and 18% in Suriname with estimated total outbreak sizes of 73,395 and 21,647 people, respectively. The uncertainty in parameter estimates highlights a need for research and data collection that will better constrain parameter ranges.

  19. Estimating the reproductive number, total outbreak size, and reporting rates for Zika epidemics in South and Central America.

    Science.gov (United States)

    Shutt, Deborah P; Manore, Carrie A; Pankavich, Stephen; Porter, Aaron T; Del Valle, Sara Y

    2017-12-01

    As South and Central American countries prepare for increased birth defects from Zika virus outbreaks and plan for mitigation strategies to minimize ongoing and future outbreaks, understanding important characteristics of Zika outbreaks and how they vary across regions is a challenging and important problem. We developed a mathematical model for the 2015/2016 Zika virus outbreak dynamics in Colombia, El Salvador, and Suriname. We fit the model to publicly available data provided by the Pan American Health Organization, using Approximate Bayesian Computation to estimate parameter distributions and provide uncertainty quantification. The model indicated that a country-level analysis was not appropriate for Colombia. We then estimated the basic reproduction number to range between 4 and 6 for El Salvador and Suriname with a median of 4.3 and 5.3, respectively. We estimated the reporting rate to be around 16% in El Salvador and 18% in Suriname with estimated total outbreak sizes of 73,395 and 21,647 people, respectively. The uncertainty in parameter estimates highlights a need for research and data collection that will better constrain parameter ranges. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
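
    The Approximate Bayesian Computation step can be illustrated with a minimal rejection sampler for the reporting rate alone. This is an illustrative toy, not the paper's full outbreak model: the binomial reporting process is replaced by its normal approximation for speed, and the "data" are back-calculated from the abstract's Suriname-like figures (roughly 18% of 21,647 infections reported):

```python
import math, random

def abc_reporting_rate(observed, true_total, n_draws=20000, tol=0.02, seed=11):
    """Minimal ABC rejection sampler: draw a reporting rate p from a
    Uniform(0, 1) prior, simulate the reported count with a normal
    approximation to Binomial(true_total, p), and accept p whenever the
    simulated reported fraction lies within `tol` of the observed one.
    Returns the posterior mean of the accepted draws."""
    rng = random.Random(seed)
    target = observed / true_total
    accepted = []
    for _ in range(n_draws):
        p = rng.random()  # Uniform(0, 1) prior
        mean = true_total * p
        sd = math.sqrt(true_total * p * (1.0 - p))
        sim_fraction = rng.gauss(mean, sd) / true_total
        if abs(sim_fraction - target) < tol:
            accepted.append(p)
    return sum(accepted) / len(accepted)

p_hat = abc_reporting_rate(observed=3896, true_total=21647)
```

    The full analysis applies the same accept/reject logic to an outbreak model with many parameters and reports posterior distributions rather than point estimates, which is where the uncertainty quantification comes from.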

  20. Effects of Postacute Settings on Readmission Rates and Reasons for Readmission Following Total Knee Arthroplasty.

    Science.gov (United States)

    Welsh, Rodney Laine; Graham, James E; Karmarkar, Amol M; Leland, Natalie E; Baillargeon, Jacques G; Wild, Dana L; Ottenbacher, Kenneth J

    2017-04-01

    Examine the effects of postacute discharge setting on unplanned hospital readmissions following total knee arthroplasty (TKA) in older adults. Secondary analyses of 100% Medicare (inpatient) claims files. Acute hospitals across the United States. Medicare fee-for-service beneficiaries ≥66 years of age who were discharged from an acute hospital following TKA in 2009-2011 (n = 608,031). The outcome measure was unplanned readmissions at 30, 60, and 90 days. The independent variable of interest was postacute discharge setting: inpatient rehabilitation facility (IRF), skilled nursing facility (SNF), or community. Covariates included demographic, clinical, and facility-level factors. The top 10 reasons for readmission were tabulated for each discharge setting across the 3 consecutive 30-day time periods. A total of 32,226 patients (5.3%) were re-admitted within 30 days. Compared with community discharge, patients discharged to IRF and SNF had 44% and 40% higher odds of 30-day readmission, respectively. IRF and SNF discharge settings were also associated with 48% and 45% higher odds of 90-day readmission, respectively, compared with community discharge. The largest increase in readmission rates occurred within the first 30 days of hospital discharge for each discharge setting. From 1 to 30 days, postoperative and post-traumatic infections were among the top causes for readmission in all 3 discharge settings. From 31 to 60 days, postoperative or traumatic infections remained in the top 5-7 reasons for readmission in all settings, but they were not in the top 10 at 61 to 90 days. Patients discharged to either SNF or IRF, in comparison with those discharged to the community, had greater likelihood of readmission within 30 and 90 days. The reasons for readmission were relatively consistent across discharge settings and time periods. These findings provide new information relevant to the delivery of postacute care to older adults following TKA. Copyright © 2017

  1. [Medication errors in Spanish intensive care units].

    Science.gov (United States)

    Merino, P; Martín, M C; Alonso, A; Gutiérrez, I; Alvarez, J; Becerril, F

    2013-01-01

    To estimate the incidence of medication errors in Spanish intensive care units. Post hoc study of the SYREC trial. A longitudinal observational study carried out during 24 hours in patients admitted to the ICU. Spanish intensive care units. Patients admitted to the intensive care unit participating in the SYREC during the period of study. Risk, individual risk, and rate of medication errors. The final study sample consisted of 1017 patients from 79 intensive care units; 591 (58%) were affected by one or more incidents. Of these, 253 (43%) had at least one medication-related incident. The total number of incidents reported was 1424, of which 350 (25%) were medication errors. The risk of suffering at least one incident was 22% (IQR: 8-50%) while the individual risk was 21% (IQR: 8-42%). The medication error rate was 1.13 medication errors per 100 patient-days of stay. Most incidents occurred in the prescription (34%) and administration (28%) phases, 16% resulted in patient harm, and 82% were considered "totally avoidable". Medication errors are among the most frequent types of incidents in critically ill patients, and are more common in the prescription and administration stages. Although most such incidents have no clinical consequences, a significant percentage prove harmful for the patient, and a large proportion are avoidable. Copyright © 2012 Elsevier España, S.L. and SEMICYUC. All rights reserved.
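The risk and rate figures above combine in a simple way. A minimal sketch (illustrative only, not the SYREC analysis code; the patient-day total is back-calculated from the reported rate, so it is an estimate, not a figure from the study):

```python
# Illustrative sketch (not the SYREC analysis code): the distinction between
# the patient-level risk of an incident and the exposure-adjusted medication
# error rate reported above.

def incident_risk(patients_with_incident, total_patients):
    """Proportion of patients with at least one incident."""
    return patients_with_incident / total_patients

def errors_per_100_patient_days(n_errors, patient_days):
    """Medication errors per 100 patient-days of stay."""
    return 100 * n_errors / patient_days

print(f"risk of >=1 incident: {incident_risk(591, 1017):.0%}")   # ~58%

# 350 medication errors at 1.13 per 100 patient-days implies roughly
# 100 * 350 / 1.13 ~= 31,000 patient-days of observation.
print(f"implied patient-days: {100 * 350 / 1.13:.0f}")
```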

  2. Relationship of Total Motile Sperm Count and Percentage Motile Sperm to Successful Pregnancy Rates Following Intrauterine Insemination

    OpenAIRE

    Pasqualotto, Eleonora B.; Daitch, James A.; Hendin, Benjamin N.; Falcone, Tommaso; Thomas, Anthony J.; Nelson, David R.; Agarwal, Ashok

    1999-01-01

    Purpose: This study sought (i) to investigate the relationship between postwash total motile sperm count and postwash percentage motile sperm in predicting successful intrauterine insemination and (ii) to determine the minimal postwash total motile sperm count required to achieve pregnancy with intrauterine insemination.

  3. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  4. A Physics-Based Engineering Methodology for Calculating Soft Error Rates of Bulk CMOS and SiGe Heterojunction Bipolar Transistor Integrated Circuits

    Science.gov (United States)

    Fulkerson, David E.

    2010-02-01

    This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new and simple methodology handles the problem with ease by simple SPICE simulations. The methodology accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross section vs. frequency behavior and other subtle effects are also accurately predicted.
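Event cross-section vs. LET curves of the kind mentioned above are conventionally summarized with a four-parameter Weibull fit (saturation cross-section, LET threshold, width, shape). The sketch below uses that standard form with invented parameter values; it does not reproduce the paper's actual fits.

```python
import math

def weibull_cross_section(let, sigma_sat, let0, width, shape):
    """Event cross-section (cm^2/device) vs. LET (MeV*cm^2/mg),
    standard Weibull form used in single-event-effect testing."""
    if let <= let0:
        return 0.0   # below threshold LET the device is immune
    return sigma_sat * (1 - math.exp(-(((let - let0) / width) ** shape)))

# Below threshold the cross-section is zero; far above it the response
# saturates at sigma_sat. Parameter values here are purely illustrative.
print(weibull_cross_section(1.0, 1e-8, 2.0, 10.0, 1.5))    # 0.0
print(weibull_cross_section(60.0, 1e-8, 2.0, 10.0, 1.5))   # ~1e-8 (saturated)
```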

  5. Total dose and dose-rate effects on start-up current in anti-fuse FPGA

    International Nuclear Information System (INIS)

    Wang, J.; Wong, W.; McCollum, J.; Cronquist, B.; Katz, R.; Kleyner, I.; Kleyner, F.

    1999-01-01

Radiation enhanced start-up current (RESC) in an anti-fuse FPGA, the A1280A, is thoroughly investigated and a comprehensive transistor-level mechanism is proposed. Low dose-rate testing, appropriate for civilian space applications, and annealing at room temperature show RESC to be negligible for the lot of parts tested with a fixed power supply slew rate. (authors)

  6. Warming and organic matter sources impact the proportion of dissolved to total activities in marine extracellular enzymatic rates

    KAUST Repository

    Baltar, Federico; Moran, Xose Anxelu G.; Lønborg, Christian

    2017-01-01

    Extracellular enzymatic activities (EEAs) are the rate-limiting step in the degradation of organic matter. Extracellular enzymes can be found associated to cells or dissolved in the surrounding water. The proportion of cell-free EEA constitutes

  7. Effects of categorization method, regression type, and variable distribution on the inflation of Type-I error rate when categorizing a confounding variable.

    Science.gov (United States)

    Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A

    2015-03-15

    The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
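The inflation mechanism described above is easy to reproduce: dichotomizing a continuous confounder leaves residual confounding, so the exposure coefficient absorbs part of the confounder's effect even when the true exposure effect is zero. A minimal Monte Carlo sketch (linear regression with a median split; this is not the authors' 9600-simulation design):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def p_value_for_x(y, design):
    """OLS fit; two-sided normal-approximation p-value for column 1 (the exposure)."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    n, k = design.shape
    cov = (resid @ resid / (n - k)) * np.linalg.inv(design.T @ design)
    t = beta[1] / np.sqrt(cov[1, 1])
    return 2 * (1 - 0.5 * (1 + erf(abs(t) / sqrt(2))))

def type1_rate(categorize, n=200, sims=500, alpha=0.05):
    """Rejection rate for the exposure when its true effect is zero."""
    hits = 0
    for _ in range(sims):
        c = rng.standard_normal(n)           # continuous confounder
        x = c + rng.standard_normal(n)       # exposure, correlated with c
        y = c + rng.standard_normal(n)       # outcome; x has NO true effect
        adj = (c > 0).astype(float) if categorize else c
        hits += p_value_for_x(y, np.column_stack([np.ones(n), x, adj])) < alpha
    return hits / sims

print("adjusting for continuous confounder:  ", type1_rate(False))  # near nominal 0.05
print("adjusting for median-split confounder:", type1_rate(True))   # grossly inflated
```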

  8. Short communication: Effect of straw inclusion rate in a dry total mixed ration on the behavior of weaned dairy calves

    NARCIS (Netherlands)

    Groen, M.J.; Steele, M.A.; DeVries, T.J.

    2015-01-01

    The primary objective of this study was to determine the effect of straw inclusion levels on the feeding behavior of young, weaned calves adapted to a dry total mixed ration (TMR) composed of a multitextured concentrate and chopped straw. A secondary objective was to determine how developed feeding

  9. 38 CFR 3.22 - DIC benefits for survivors of certain veterans rated totally disabled at time of death.

    Science.gov (United States)

    2010-07-01

    ... benefits under paragraph (a) of this section receives any money or property pursuant to a judicial... amount of money received and the fair market value of the property received. The provisions of this... veteran. The amount to be reported is the total of the amount of money received and the fair market value...

  10. Scintillation and bit error rate analysis of a phase-locked partially coherent flat-topped array laser beam in oceanic turbulence.

    Science.gov (United States)

    Yousefi, Masoud; Kashani, Fatemeh Dabbagh; Golmohammady, Shole; Mashal, Ahmad

    2017-12-01

In this paper, the performance of underwater wireless optical communication (UWOC) links built from partially coherent flat-topped (PCFT) array laser beams has been investigated in detail. Because they provide high power, array laser beams are employed to increase the range of UWOC links. To characterize the effects of oceanic turbulence on the propagation behavior of the considered beam, an analytical expression for the cross-spectral density matrix elements and a semi-analytical one for the fourth-order statistical moment have been derived using the extended Huygens-Fresnel principle. Based on these expressions, the on-axis scintillation index of the beam propagating through weak oceanic turbulence has been calculated. Furthermore, to quantify the performance of the UWOC link, the average bit error rate (BER) has also been evaluated. The effects of several source factors and turbulent ocean parameters on the scintillation index and the BER have been studied in detail. The results indicate that, in comparison with the Gaussian array beam, when the source size of the beamlets is larger than the first Fresnel zone, the PCFT array laser beam with the higher flatness order has a lower scintillation index and hence a lower BER. In particular, with respect to scintillation reduction, PCFT array laser beams offer a considerable benefit over single PCFT or Gaussian laser beams and over Gaussian array beams. All simulation results are presented graphically and analyzed in detail.
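As a toy illustration of why the scintillation index drives the average BER: averaging the instantaneous BER over irradiance fluctuations with a larger normalized variance degrades the mean BER at fixed SNR. The sketch below uses a generic lognormal-fading on-off-keying model, not the paper's PCFT formulation; all numbers are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def average_ber(snr, scint_index, samples=200_000):
    """Mean OOK bit error rate over lognormal irradiance with unit mean
    and normalized variance equal to the scintillation index."""
    s2 = math.log(1 + scint_index)
    irradiance = rng.lognormal(mean=-s2 / 2, sigma=math.sqrt(s2), size=samples)
    # Instantaneous BER = Q(sqrt(SNR) * I), with Q(x) = 0.5 * erfc(x / sqrt(2)).
    x = math.sqrt(snr) * irradiance / math.sqrt(2)
    return float(np.mean(0.5 * np.vectorize(math.erfc)(x)))

# Stronger turbulence (larger scintillation index) degrades the link.
print(average_ber(9.0, 0.1))
print(average_ber(9.0, 0.5))   # larger than the value above
```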

  11. Ventilator-associated pneumonia: the influence of bacterial resistance, prescription errors, and de-escalation of antimicrobial therapy on mortality rates

    Directory of Open Access Journals (Sweden)

    Ana Carolina Souza-Oliveira

    2016-09-01

Conclusion: Prescription errors influenced mortality in patients with ventilator-associated pneumonia, underscoring the challenge of proper treatment of ventilator-associated pneumonia, which requires continuous reevaluation to ensure that the clinical response to therapy meets expectations.

  12. Explaining quantitative variation in the rate of Optional Infinitive errors across languages: a comparison of MOSAIC and the Variational Learning Model.

    Science.gov (United States)

    Freudenthal, Daniel; Pine, Julian; Gobet, Fernand

    2010-06-01

    In this study, we use corpus analysis and computational modelling techniques to compare two recent accounts of the OI stage: Legate & Yang's (2007) Variational Learning Model and Freudenthal, Pine & Gobet's (2006) Model of Syntax Acquisition in Children. We first assess the extent to which each of these accounts can explain the level of OI errors across five different languages (English, Dutch, German, French and Spanish). We then differentiate between the two accounts by testing their predictions about the relation between children's OI errors and the distribution of infinitival verb forms in the input language. We conclude that, although both accounts fit the cross-linguistic patterning of OI errors reasonably well, only MOSAIC is able to explain why verbs that occur more frequently as infinitives than as finite verb forms in the input also occur more frequently as OI errors than as correct finite verb forms in the children's output.

  13. Nursing Errors in Intensive Care Unit by Human Error Identification in Systems Tool: A Case Study

    Directory of Open Access Journals (Sweden)

    Nezamodini

    2016-03-01

Background: Although health services are designed and implemented to improve human health, errors in health services are very common and sometimes even fatal. Medical errors and their costs are global issues with serious consequences for the patient community; they are preventable and require serious attention. Objectives: The current study aimed to identify possible nursing errors by applying the human error identification in systems tool (HEIST) in the intensive care units (ICUs) of hospitals. Patients and Methods: This descriptive study was conducted in the intensive care unit of a hospital in Khuzestan province in 2013. Data were collected through observation of, and interviews with, nine nurses in this unit over a period of four months. Human error classification was based on the Rouse and Rouse and the Swain and Guttmann models. Following the HEIST worksheets, the guide questions were answered and error causes were identified after determining the type of each error. Results: In total, 527 errors were detected. Performing an operation on the wrong path had the highest frequency (150 cases), followed by performing tasks later than scheduled (136 cases). Management causes, with a frequency of 451, ranked first among the identified error causes. Errors mostly occurred in the system observation stage, and among the performance shaping factors (PSFs), time was the most influential factor in the occurrence of human errors. Conclusions: To prevent the occurrence and reduce the consequences of the identified errors, the following measures are proposed: appropriate training courses, applying work guidelines and monitoring their implementation, increasing the number of work shifts, hiring a professional workforce, and equipping the workspace with appropriate facilities and equipment.

  14. The dramatic increase in total knee replacement utilization rates in the United States cannot be fully explained by growth in population size and the obesity epidemic.

    Science.gov (United States)

    Losina, Elena; Thornhill, Thomas S; Rome, Benjamin N; Wright, John; Katz, Jeffrey N

    2012-02-01

    Total knee replacement utilization in the United States more than doubled from 1999 to 2008. Although the reasons for this increase have not been examined rigorously, some have attributed the increase to population growth and the obesity epidemic. Our goal was to investigate whether the rapid increase in total knee replacement use over the past decade can be sufficiently attributed to changes in these two factors. We used data from the Nationwide Inpatient Sample to estimate changes in total knee replacement utilization rates from 1999 to 2008, stratified by age (eighteen to forty-four years, forty-five to sixty-four years, and sixty-five years or older). We obtained data on obesity prevalence and U.S. population growth from federal sources. We compared the rate of change in total knee replacement utilization with the rates of population growth and change in obesity prevalence from 1999 to 2008. In 2008, 615,050 total knee replacements were performed in the United States adult population, 134% more than in 1999. During the same time period, the overall population size increased by 11%. While the population of forty-five to sixty-four-year-olds grew by 29%, the number of total knee replacements in this age group more than tripled. The number of obese and non-obese individuals in the United States increased by 23% and 4%, respectively. Assuming unchanged indications for total knee replacement among obese and non-obese individuals with knee osteoarthritis over the last decade, these changes fail to account for the 134% growth in total knee replacement use. Population growth and obesity cannot fully explain the rapid expansion of total knee replacements in the last decade, suggesting that other factors must also be involved. The disproportionate increase in total knee replacements among younger patients may be a result of a growing number of knee injuries and expanding indications for the procedure.
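The abstract's accounting argument can be made concrete with a back-of-envelope bound. The growth figures come from the abstract; the obesity share of TKR volume is a hypothetical free parameter. If per-person utilization were unchanged, total volume could grow no faster than its fastest-growing subgroup:

```python
# Back-of-envelope bound, using the abstract's population-change figures.
obese_growth, nonobese_growth = 0.23, 0.04   # U.S. population changes, 1999-2008
observed_growth = 1.34                        # reported growth in TKR volume

def implied_growth(share_obese):
    """Volume growth if a fraction share_obese of TKR volume is in obese patients,
    assuming per-person utilization rates in each subgroup were unchanged."""
    return share_obese * obese_growth + (1 - share_obese) * nonobese_growth

# Even the extreme assumption that every TKR recipient is obese yields only
# 23% growth, far short of the observed 134%.
print(implied_growth(1.0), "vs observed", observed_growth)
```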

  15. A comparison of post-op haemoglobin levels and allogeneic blood transfusion rates following total knee arthroplasty without drainage or with reinfusion drains.

    Science.gov (United States)

    Hazarika, Shariff; Bhattacharya, Rajarshi; Bhavikatti, Mainudden; Dawson, Matthew

    2010-02-01

The effects of re-infusion drains on the rate of allogeneic blood transfusion and post-operative haemoglobin levels in total knee arthroplasty were examined. A group of 22 patients undergoing primary total knee arthroplasty using a CBCII Constavac Stryker re-infusion drainage system was compared with a group of 30 patients, matched for age, sex and type of prosthesis, in whom no drain was used. The re-infusion drain group had significantly lower day 1 and day 3 post-operative haemoglobin than the non-drainage group. The re-infusion drain group also had a higher allogeneic transfusion rate than the non-drainage group. There were no significant differences between the two groups in the rate of wound and transfusion-related complications or in mean length of post-operative stay. We found that re-infusion drains were ineffective in reducing allogeneic transfusion requirements compared with non-drainage in total knee arthroplasty.

  16. The Impact of use of Double Set-up on Infection Rates in Revision Total Knee Replacement and Limb Salvage Procedures

    Directory of Open Access Journals (Sweden)

    Jennifer Waterman

    2015-03-01

A retrospective analysis was performed to determine the impact of a double set-up procedure on infection rates in revision total knee and limb salvage procedures in patients with known joint infection. Eighteen cases met the selection criteria. The recurrence rate of infection was 5.5%, which is lower than rates reported in the recent literature. This suggests that the use of a double set-up in combination with other infection-reducing protocols may help further reduce recurrent infection. Keywords: double set-up, infection, revision total knee arthroplasty, limb salvage

  17. Postprandial appetite ratings are reproducible and moderately related to total day energy intakes, but not ad libitum lunch energy intakes, in healthy young women.

    Science.gov (United States)

    Tucker, Amy J; Heap, Sarah; Ingram, Jessica; Law, Marron; Wright, Amanda J

    2016-04-01

    Reproducibility and validity testing of appetite ratings and energy intakes are needed in experimental and natural settings. Eighteen healthy young women ate a standardized breakfast for 8 days. Days 1 and 8, they rated their appetite (Hunger, Fullness, Desire to Eat, Prospective Food Consumption (PFC)) over a 3.5 h period using visual analogue scales, consumed an ad libitum lunch, left the research center and recorded food intake for the remainder of the day. Days 2-7, participants rated their at-home Hunger at 0 and 30 min post-breakfast and recorded food intake for the day. Total area under the curve (AUC) over the 180 min period before lunch, and energy intakes were calculated. Reproducibility of satiety measures between days was evaluated using coefficients of repeatability (CR), coefficients of variation (CV) and intra-class coefficients (ri). Correlation analysis was used to examine validity between satiety measures. AUCs for Hunger, Desire to Eat and PFC (ri = 0.73-0.78), ad libitum energy intakes (ri = 0.81) and total day energy intakes (ri​ = 0.48) were reproducible; fasted ratings were not. Average AUCs for Hunger, Desire to Eat and PFC, Desire to Eat at nadir and PFC at fasting, nadir and 180 min were correlated to total day energy intakes (r = 0.50-0.77, P < 0.05), but no ratings were correlated to lunch consumption. At-home Hunger ratings were weakly reproducible but not correlated to reported total energy intakes. Satiety ratings did not concur with next meal intake but PFC ratings may be useful predictors of intake. Overall, this study adds to the limited satiety research on women and challenges the accepted measures of satiety in an experimental setting. Copyright © 2016 Elsevier Ltd. All rights reserved.
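The coefficient of repeatability used above has several definitions in the literature; a minimal sketch under the common Bland-Altman convention (CR ≈ 1.96 × SD of within-subject test-retest differences). Both the data and the exact formula here are assumptions, not taken from the study:

```python
import statistics as stats

def coefficient_of_repeatability(day1, day8):
    """Bland-Altman coefficient of repeatability: 1.96 x sample SD of the
    within-subject day-1 vs. day-8 differences (one common convention)."""
    diffs = [a - b for a, b in zip(day1, day8)]
    return 1.96 * stats.stdev(diffs)

# Hypothetical hunger AUC scores for five participants on the two test days.
day1 = [310, 250, 420, 380, 290]
day8 = [300, 265, 410, 395, 280]
print(round(coefficient_of_repeatability(day1, day8), 1))
```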

  18. Epicardial, pericardial and total cardiac fat and cardiovascular disease in type 2 diabetic patients with elevated urinary albumin excretion rate

    DEFF Research Database (Denmark)

    Christensen, Regitse H.; Von Scholten, Bernt J.; Hansen, Christian S.

    2017-01-01

    of 200 patients with type 2 diabetes and elevated urinary albumin excretion rate (UAER). Methods Cardiac adipose tissue was measured from baseline echocardiography. The composite endpoint comprised incident cardiovascular disease and all-cause mortality. Coronary artery calcium, carotid intima media.......7, p = 0.017) models. Cardiac adipose tissue (p = 0.033) was associated with baseline coronary artery calcium (model 1) and interleukin-8 (models 1-3, all p type 2 diabetes patients without coronary artery disease, high cardiac adipose tissue levels were associated...

  19. Medicaid/CHIP Program; Medicaid Program and Children's Health Insurance Program (CHIP); Changes to the Medicaid Eligibility Quality Control and Payment Error Rate Measurement Programs in Response to the Affordable Care Act. Final rule.

    Science.gov (United States)

    2017-07-05

    This final rule updates the Medicaid Eligibility Quality Control (MEQC) and Payment Error Rate Measurement (PERM) programs based on the changes to Medicaid and the Children's Health Insurance Program (CHIP) eligibility under the Patient Protection and Affordable Care Act. This rule also implements various other improvements to the PERM program.

  20. 78 FR 56646 - Determination of Total Amounts of Fiscal Year 2014 WTO Tariff-Rate Quotas for Raw Cane Sugar and...

    Science.gov (United States)

    2013-09-13

    ... Secretary Determination of Total Amounts of Fiscal Year 2014 WTO Tariff- Rate Quotas for Raw Cane Sugar and Certain Sugars, Syrups and Molasses AGENCY: Office of the Secretary, USDA. ACTION: Notice. SUMMARY: The... sugar at 1,117,195 metric tons raw value (MTRV). The Secretary also announces the establishment of the...

  1. Repeatability and individual correlates of basal metabolic rate and total evaporative water loss in birds : A case study in European stonechats

    NARCIS (Netherlands)

    Versteegh, Maaike A.; Heim, Barbara; Dingemanse, Niels J.; Tieleman, B. Irene

    Basal metabolic rate (BMR) and total evaporative water loss (TEWL) are thought to have evolved in conjunction with life history traits and are often assumed to be characteristic features of an animal. Physiological traits can show large intraindividual variation at short and long timescales, yet

  2. Impact of total PSA, PSA doubling time and PSA velocity on detection rates of 11C-Choline positron emission tomography in recurrent prostate cancer

    NARCIS (Netherlands)

    Rybalov, Maxim; Breeuwsma, Anthonius J.; Leliveld, Anna M.; Pruim, Jan; Dierckx, Rudi A.; de Jong, Igle J.

    PURPOSE: To evaluate the effect of total PSA (tPSA) and PSA kinetics on the detection rates of (11)C-Choline PET in patients with biochemical recurrence (BCR) after radical prostatectomy (RP) or external beam radiotherapy (EBRT). METHODS: We included 185 patients with BCR after RP (PSA >0.2 ng/ml)

  3. Combining wrist age and third molars in forensic age estimation: how to calculate the joint age estimate and its error rate in age diagnostics.

    Science.gov (United States)

    Gelbrich, Bianca; Frerking, Carolin; Weiss, Sandra; Schwerdt, Sebastian; Stellzig-Eisenhauer, Angelika; Tausche, Eve; Gelbrich, Götz

    2015-01-01

    Forensic age estimation in living adolescents is based on several methods, e.g. the assessment of skeletal and dental maturation. Combination of several methods is mandatory, since age estimates from a single method are too imprecise due to biological variability. The correlation of the errors of the methods being combined must be known to calculate the precision of combined age estimates. To examine the correlation of the errors of the hand and the third molar method and to demonstrate how to calculate the combined age estimate. Clinical routine radiographs of the hand and dental panoramic images of 383 patients (aged 7.8-19.1 years, 56% female) were assessed. Lack of correlation (r = -0.024, 95% CI = -0.124 to + 0.076, p = 0.64) allows calculating the combined age estimate as the weighted average of the estimates from hand bones and third molars. Combination improved the standard deviations of errors (hand = 0.97, teeth = 1.35 years) to 0.79 years. Uncorrelated errors of the age estimates obtained from both methods allow straightforward determination of the common estimate and its variance. This is also possible when reference data for the hand and the third molar method are established independently from each other, using different samples.
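The combination step can be sketched as an inverse-variance weighted average, the standard way to pool uncorrelated estimates. The ages fed in below are invented; only the error SDs of 0.97 and 1.35 years come from the abstract:

```python
from math import sqrt

def combine(est_hand, sd_hand, est_teeth, sd_teeth):
    """Inverse-variance weighted mean of two uncorrelated estimates
    and the standard deviation of the combined estimate."""
    w1, w2 = 1 / sd_hand**2, 1 / sd_teeth**2
    est = (w1 * est_hand + w2 * est_teeth) / (w1 + w2)
    sd = sqrt(1 / (w1 + w2))
    return est, sd

# Error SDs reported in the abstract: hand 0.97 y, third molars 1.35 y.
# The two age estimates (17.0 and 16.5 years) are made up for illustration.
_, combined_sd = combine(17.0, 0.97, 16.5, 1.35)
print(f"combined SD: {combined_sd:.2f} years")   # ~0.79, matching the abstract
```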

  4. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    Science.gov (United States)

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  5. Increased mortality by septicemia, interstitial pneumonitis and pulmonary fibrosis among bone marrow transplant recipients receiving an increased mean dose rate of total irradiation

    International Nuclear Information System (INIS)

    Ringden, O.; Baaryd, I.; Johansson, B.

    1983-01-01

    Seven bone marrow transplant recipients with acute lymphoblastic leukemia receiving a mean dose rate of 0.07 Gy/min of total body irradiation towards the pelvic midpoint and the lungs had an increased (p<0.01) overall death rate of 86 per cent compared with 33 per cent among 27 patients with acute non-lymphoblastic leukemia or acute lymphoblastic leukemia treated with a mean dose rate of 0.04 Gy/min. Among the patients receiving the higher dose rate there was an increased mortality in causes related to radiation toxicity like early septicemia, interstitial pneumonitis and pulmonary fibrosis, compared with all patients receiving the lower dose rate (p<0.01) and also with 10 patients from this group with acute lymphoblastic leukemia (p<0.02). (Auth.)

  6. Isotopic and chemical dilution effects on the vibrational relaxation rate of some totally symmetric motions of liquid acetonitrile

    International Nuclear Information System (INIS)

    Marri, E.; Morresi, A.; Paliani, G.; Cataliotti, R.S.; Giorgini, M.G.

    1999-01-01

The vibrational dephasing of the ν1 (C-H, C-D stretching) and ν3 (C-H, C-D bending) symmetric motions of liquid acetonitrile, in its light and fully deuterated forms, has been studied in the framework of vibrational time correlation functions obtained as Fourier transforms of the isotropic Raman spectral distributions and interpreted within the Kubo theory. In addition, the experimental isotropic profiles have been analysed within the bandshape approach, formulated by analytical Fourier transformation of the Kubo vibrational time correlation functions, in order to derive the relaxation parameters in the frequency domain. The effects of isotopic (CH3CN/CD3CN and vice versa) and chemical (CCl4) dilution on the bandshapes and on the vibrational relaxation parameters have been studied. The decay rate of the ν1 mode is insensitive to isotopic dilution but varies appreciably with chemical (CCl4) dilution. The vibrational dephasing of the ν3 mode is qualitatively, but not quantitatively, affected in the same way by chemical dilution and shows a slower modulation regime than that exhibited by the stretching mode. Unlike the latter, the ν3 mode is only slightly affected by isotopic dilution. Phase relaxation mechanisms for these two motions of liquid acetonitrile are proposed on the basis of these data, and a comparison is made with earlier published results. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
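The Kubo theory invoked above models dephasing with a single frequency-fluctuation amplitude and correlation time. In its standard form (quoted from the general theory, not from this paper) the vibrational correlation function is:

```latex
% Kubo dephasing correlation function: \Delta^2 = mean-square frequency
% fluctuation, \tau_c = its correlation time. Fast modulation
% (\Delta\tau_c \ll 1) gives a Lorentzian band of width \Delta^2\tau_c;
% slow modulation gives a Gaussian band.
\phi_v(t) = \exp\!\left[ -\Delta^{2}\tau_c^{2}
            \left( e^{-t/\tau_c} + \frac{t}{\tau_c} - 1 \right) \right]
```

Fitting isotropic Raman bandshapes to the Fourier transform of this function is what yields the "modulation regime" language used in the abstract.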

  7. Challenges in assessing hospital-level stroke mortality as a quality measure: comparison of ischemic, intracerebral hemorrhage, and total stroke mortality rates.

    Science.gov (United States)

    Xian, Ying; Holloway, Robert G; Pan, Wenqin; Peterson, Eric D

    2012-06-01

Public reporting efforts currently profile hospitals based on overall stroke mortality rates, yet the "mix" of hemorrhagic and ischemic stroke cases may affect this rate. Using 2005 to 2006 New York state data, we examined the degree to which hospital stroke mortality rankings varied for ischemic versus hemorrhagic versus total stroke. The observed/expected ratio was calculated using the Agency for Healthcare Research and Quality Inpatient Quality Indicator software. The observed/expected ratios and outlier status by stroke type across hospitals were examined using Pearson correlation coefficients (r) and weighted κ. The overall 30-day stroke mortality rate was 15.2%, ranging from 11.3% for ischemic stroke to 37.3% for intracerebral hemorrhage. Hospital risk-adjusted ischemic stroke observed/expected ratios were weakly correlated with the same hospitals' intracerebral hemorrhage observed/expected ratios (r=0.38). When examining hospital performance groups (mortality better, worse, or no different than average), disagreement was observed in 35 of 81 hospitals (κ=0.23). Total stroke mortality observed/expected ratios and rankings were correlated with those for intracerebral hemorrhage (r=0.61 and κ=0.36) and ischemic stroke (r=0.94 and κ=0.71), but many hospitals still switched classification depending on the mortality metric used. However, hospitals treating a higher percentage of hemorrhagic strokes did not have a statistically significantly higher total stroke mortality rate than those treating fewer hemorrhagic strokes. Hospital stroke mortality ratings varied considerably depending on whether ischemic, hemorrhagic, or total stroke mortality rates were used. Public reporting of stroke mortality measures should consider providing risk-adjusted outcomes for separate stroke types.
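A minimal sketch of the observed/expected metric referenced above. The hospital figures are invented, and in the real AHRQ Inpatient Quality Indicator software the expected count comes from patient-level risk adjustment, which is not reproduced here:

```python
def oe_ratio(observed_deaths, expected_deaths):
    """O/E > 1: worse than expected after risk adjustment; O/E < 1: better."""
    return observed_deaths / expected_deaths

# A hypothetical hospital with 20 observed stroke deaths against 16 expected:
print(oe_ratio(20, 16))   # 1.25, i.e. worse than expected
```

Because ischemic and hemorrhagic strokes get separate expected-mortality models, the same hospital can land in different performance groups depending on which stroke type's O/E ratio is reported, which is the classification switching the abstract describes.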

  8. Laboratory errors and patient safety.

    Science.gov (United States)

    Miligy, Dawlat A

    2015-01-01

Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Programs designed to identify and reduce laboratory errors, together with specific strategies, are therefore required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the laboratory errors commonly encountered in our laboratory practice, their hazards for patient health care, and some measures and recommendations to minimize or eliminate them. The laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of a private hospital in Egypt. Errors were classified according to the laboratory phase and according to their implications for patient health. Data obtained from 1,600 testing procedures revealed a total of 14 errors (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent of total errors, respectively), while errors in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implications for patient health, being detected before the test reports had been submitted to patients. On the other hand, errors in reports that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have had an impact on patient diagnosis. The findings of this study were concordant with those published from the USA and other countries. This proves that laboratory problems are universal and need general standardization and benchmarking measures. Original being the first data published from Arabic countries that

  9. Short communication: Effect of straw inclusion rate in a dry total mixed ration on the behavior of weaned dairy calves.

    Science.gov (United States)

    Groen, M J; Steele, M A; DeVries, T J

    2015-04-01

    The primary objective of this study was to determine the effect of straw inclusion levels on the feeding behavior of young, weaned calves adapted to a dry total mixed ration (TMR) composed of a multitextured concentrate and chopped straw. A secondary objective was to determine how developed feeding patterns persist after calves were switched to a conventional silage-based diet. Ten Holstein bull calves (91 ± 2.4d of age, weighing 136 ± 12.3 kg) were assigned to 1 of 2 treatments: a TMR containing [dry matter (DM) basis] either (1) 85% concentrate and 15% chopped straw for 10 wk (wk 1 to 10) or (2) 85% concentrate and 15% chopped straw for 5 wk (wk 1 to 5), then 70% concentrate and 30% chopped straw for 5 wk (wk 6 to 10). After 10 wk, all animals were transitioned to a TMR containing (DM basis) 42.3% corn silage and 57.7% haylage for 2 wk (wk 11 to 12). During wk 1 to 5, all calves had similar DMI (5.5 kg/d), average daily gain (1.7 kg/d), feed efficiency (3.5 kg of DM/kg of gain), and eating time (151.9 min/d). During wk 6 to 10, calves transitioned to the 70% diet ate less DM (5.5 vs. 7.4 kg/d), grew more slowly (1.3 vs. 1.6 kg/d), sorted more against long forage particles (62.8 vs. 103.8%), and had greater feeding times (194.9 vs. 102.6 min/d). The difference in feeding time occurred only during the first 8 h after feed delivery. Despite similar DMI (5.2 kg/d) and average daily gain (1.1 kg/d) in wk 11 to 12, differences in behavior were observed resulting from previous diets. In wk 11 to 12, calves previously fed the 70% diet continued to have a longer meal immediately after feed delivery. Overall, the results indicate that diluting a dry TMR containing a multitextured concentrate and chopped straw with more straw resulted in calves spending more time feeding and having longer meals immediately after feed delivery; this feeding pattern carried over after calves were transitioned to a silage-based ration. Copyright © 2015 American Dairy Science Association

  10. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  11. Prevalence and cost of hospital medical errors in the general and elderly United States populations.

    Science.gov (United States)

    Mallow, Peter J; Pandya, Bhavik; Horblyuk, Ruslan; Kaplan, Harold S

    2013-12-01

    The primary objective of this study was to quantify the differences in the prevalence rate and costs of hospital medical errors between the general population and an elderly population aged ≥65 years. Methods from an actuarial study of medical errors were modified to identify medical errors in the Premier Hospital Database using data from 2009. Visits with more than four medical errors were removed from the population to avoid over-estimation of cost. Prevalence rates were calculated based on the total number of inpatient visits. There were 3,466,596 total inpatient visits in 2009. Of these, 1,230,836 (36%) occurred in people aged ≥ 65. The prevalence rate was 49 medical errors per 1000 inpatient visits in the general cohort and 79 medical errors per 1000 inpatient visits for the elderly cohort. The top 10 medical errors accounted for more than 80% of the total in the general cohort and the 65+ cohort. The most costly medical error for the general population was postoperative infection ($569,287,000). Pressure ulcers were most costly ($347,166,257) in the elderly population. This study was conducted with a hospital administrative database, and assumptions were necessary to identify medical errors in the database. Further, there was no method to identify errors of omission or misdiagnoses within the database. This study indicates that prevalence of hospital medical errors for the elderly is greater than the general population and the associated cost of medical errors in the elderly population is quite substantial. Hospitals which further focus their attention on medical errors in the elderly population may see a significant reduction in costs due to medical errors as a disproportionate percentage of medical errors occur in this age group.
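The per-1000-visit prevalence convention used in this study is straightforward to reproduce. In the sketch below, the raw error counts are hypothetical (the abstract does not publish them), chosen only to be consistent with the reported rates of 49 and 79 per 1000 visits; the visit totals are from the abstract.

```python
# Prevalence of hospital medical errors per 1000 inpatient visits.
total_visits = 3_466_596      # all inpatient visits, 2009
elderly_visits = 1_230_836    # visits by patients aged >= 65

elderly_share = elderly_visits / total_visits         # ~0.355 -> "36%" in the text

def prevalence_per_1000(error_count: int, visits: int) -> float:
    """Medical errors per 1000 inpatient visits."""
    return 1000 * error_count / visits

# Hypothetical error counts (not published in the abstract), chosen to be
# consistent with the reported 49 and 79 errors per 1000 visits.
general_rate = prevalence_per_1000(169_863, total_visits)
elderly_rate = prevalence_per_1000(97_236, elderly_visits)
```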

  12. Effects of radiation and α-tocopherol on saliva flow rate, amylase activity, total protein and electrolyte levels in oral cavity cancer

    Directory of Open Access Journals (Sweden)

    Chitra S

    2008-01-01

Full Text Available Objective: The objective of the present study was to evaluate the early and late effects of radiation and α-tocopherol on the secretion rate of saliva and on selected salivary parameters in oral cavity cancer patients. Patients & Methods: Eighty-nine histologically confirmed oral cavity cancer (OCC) patients were enrolled in the study. Resting whole saliva was collected before, during, and at the end of the radiation therapy (RT) and simultaneous supplementation with α-tocopherol in the radiation-treated patients (RT + AT). Results: Salivary flow rate, pH, amylase activity, total protein, sodium, and potassium were analyzed. Increased pH and potassium and decreased flow rate, amylase activity, protein content, and sodium were observed after 6 weeks of radiation treatment when compared to OCC patients. A significant improvement in these parameters was observed with α-tocopherol supplementation in RT + AT patients. Conclusion: Supplementation with α-tocopherol improves the salivary flow rate and thereby maintains salivary parameters.

  13. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

In this review article, medication errors are discussed: their definition, the scope of the medication error problem, types of medication errors, their common causes, the monitoring and consequences of medication errors, and the prevention and management of medication errors, explained clearly with tables that are easy to understand.

  14. Revision rates for metal-on-metal hip resurfacing and metal-on-metal total hip arthroplasty – a systematic review

    DEFF Research Database (Denmark)

    Ras Sørensen, Sofie-amalie L.; Jørgensen, Henrik L.; Sporing, Sune L.

    2016-01-01

    Purpose To compare revision rates of metal-on-metal (MoM) hip resurfacing (HRS) and MoM total hip arthroplasty (THA), as well as the primary causes for revisions. Methods The PubMed database was queried for potentially relevant articles addressing MoMTHA and MoMHRS, a total of 51 articles were....... The odds ratio was 1.25 (1.03:1.53) 95% CI (p = 0.03) (MoMHRS vs. MoMTHA). The studies of hip prostheses were separated into 2 categories of short- and long-term (more or less than 5 years). Short-term revision rate for MoMTHA was 4.5% after 4.8 years, and for MoMHRS 4.0% after 4.2 years. The odds ratio...

  15. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

We calculate opacity from k(hν) = -ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is sample density, and L is path length through the sample. The density and path length are measured together by Rutherford backscatter. Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B0, where B is the transmitted backlighter (BL) signal and B0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB0/B0, and consequently Δk/k = (1/ln T)(ΔB/B + ΔB0/B0) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
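The error-propagation expression in this abstract can be evaluated numerically. A minimal sketch, with illustrative (not measured) uncertainties for the backlighter signals and the areal density:

```python
import math

def opacity_fractional_error(T, dB_over_B, dB0_over_B0, d_rhoL_over_rhoL):
    """Fractional opacity error for k = -ln(T)/(rho*L).

    Since dT/T = d(ln T) = dB/B + dB0/B0, the fractional error is
    dk/k = (dB/B + dB0/B0) / |ln T| + d(rho*L)/(rho*L).
    """
    return (dB_over_B + dB0_over_B0) / abs(math.log(T)) + d_rhoL_over_rhoL

# Illustrative numbers only: 2% noise on each backlighter signal and a 3%
# areal-density uncertainty, at a transmission of 0.2.
err = opacity_fractional_error(T=0.2, dB_over_B=0.02,
                               dB0_over_B0=0.02, d_rhoL_over_rhoL=0.03)
# err is roughly 0.055, i.e. about a 5.5% opacity error
```

The 1/|ln T| factor makes the design trade-off visible: as T approaches 1, ln T approaches 0 and the backlighter noise terms dominate, which is consistent with measuring transmission down near 0.2.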

  16. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  17. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
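The burst and gap statistics described above amount to a run-length scan over a per-byte error-flag sequence. A simplified offline sketch (the instrument described works on the read channel in real time; the flag sequence here is hypothetical):

```python
def burst_gap_stats(error_flags):
    """Run lengths of consecutive error bytes (bursts, flag = 1) and
    consecutive good bytes (gaps, flag = 0) in a per-byte flag sequence."""
    bursts, gaps = [], []
    current, run = None, 0
    for flag in error_flags:
        if flag == current:
            run += 1
        else:
            if current == 1:
                bursts.append(run)
            elif current == 0:
                gaps.append(run)
            current, run = flag, 1
    # flush the final run
    if current == 1:
        bursts.append(run)
    elif current == 0:
        gaps.append(run)
    return bursts, gaps

# 1 = byte flagged in error, 0 = good byte (hypothetical data)
bursts, gaps = burst_gap_stats([0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0])
# bursts == [3, 1], gaps == [2, 4, 2]
```

Histograms of these two run-length lists are exactly the "error burst and good data gap statistics" the project aims to measure.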

  18. Nonresponse Error in Mail Surveys: Top Ten Problems

    Directory of Open Access Journals (Sweden)

    Jeanette M. Daly

    2011-01-01

    Full Text Available Conducting mail surveys can result in nonresponse error, which occurs when the potential participant is unwilling to participate or impossible to contact. Nonresponse can result in a reduction in precision of the study and may bias results. The purpose of this paper is to describe and make readers aware of a top ten list of mailed survey problems affecting the response rate encountered over time with different research projects, while utilizing the Dillman Total Design Method. Ten nonresponse error problems were identified, such as inserter machine gets sequence out of order, capitalization in databases, and mailing discarded by postal service. These ten mishaps can potentiate nonresponse errors, but there are ways to minimize their frequency. Suggestions offered stem from our own experiences during research projects. Our goal is to increase researchers' knowledge of nonresponse error problems and to offer solutions which can decrease nonresponse error in future projects.

  19. Finding beam focus errors automatically

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.

    1987-01-01

An automated method for finding beam focus errors using an optimization program called COMFORT-PLUS is described. The procedure, including the steps involved in finding the correction factors with COMFORT-PLUS, has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is to be used as an off-line program to analyze actual measured data for any SLC system. One limitation on the application of this procedure is that it depends on the magnitude of the machine errors; another is that the program is not totally automated, since the user must decide a priori where to look for errors.

  20. Use of Added Sugars Instead of Total Sugars May Improve the Capacity of the Health Star Rating System to Discriminate between Core and Discretionary Foods.

    Science.gov (United States)

    Menday, Hannah; Neal, Bruce; Wu, Jason H Y; Crino, Michelle; Baines, Surinder; Petersen, Kristina S

    2017-12-01

The Australian Government has introduced a voluntary front-of-package labeling system that includes total sugar in the calculation. Our aim was to determine the effect of substituting added sugars for total sugars when calculating Health Star Ratings (HSR) and to identify whether use of added sugars improves the capacity to distinguish between core and discretionary food products. This study included packaged food and beverage products available in Australian supermarkets (n=3,610). The product categories included in the analyses were breakfast cereals (n=513), fruit (n=571), milk (n=309), non-alcoholic beverages (n=1,040), vegetables (n=787), and yogurt (n=390). Added sugar values were estimated for each product using a validated method. HSRs were then estimated for every product according to the established method using total sugar, and then by substituting added sugar for total sugar. The scoring system was not modified when added sugar was used in place of total sugar in the HSR calculation. Products were classified as core or discretionary based on the Australian Dietary Guidelines. To investigate whether use of added sugar in the HSR algorithm improved the distinction between core and discretionary products as defined by the Australian Dietary Guidelines, the proportion of core products that received an HSR of ≥3.5 stars and the proportion of discretionary products that received an HSR of <3.5 stars were determined. There were 2,263 core and 1,347 discretionary foods; 1,684 of 3,610 (47%) products contained added sugar (median 8.4 g/100 g, interquartile range=5.0 to 12.2 g). When the HSR was calculated with added sugar instead of total sugar, an additional 166 (7.3%) core products received an HSR of ≥3.5 stars and 103 (7.6%) discretionary products received a rating of <3.5 stars. The odds of correctly identifying a product as core vs discretionary were increased by 61% (odds ratio 1.61, 95% CI 1.26 to 2.06) with added compared to total sugars. In the six
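The 61% increase in the odds of correct classification (odds ratio 1.61, 95% CI 1.26 to 2.06) is a standard 2×2-table calculation. A sketch using the usual Wald confidence interval; the cell counts below are purely hypothetical, since the abstract does not publish the underlying table:

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] with a 95% Wald CI.

    Rows: scoring scheme (e.g. added vs total sugar in the algorithm);
    columns: product classified correctly vs incorrectly.
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts for illustration only (not from the study).
or_, lo, hi = odds_ratio_wald_ci(2900, 710, 2800, 810)
```

Note that a 95% CI of 1.26 to 2.06 excludes 1, which is why the improvement with added sugars is statistically significant at the conventional 5% level.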

  1. Using snowball sampling method with nurses to understand medication administration errors.

    Science.gov (United States)

    Sheu, Shuh-Jen; Wei, Ien-Lan; Chen, Ching-Huey; Yu, Shu; Tang, Fu-In

    2009-02-01

    We aimed to encourage nurses to release information about drug administration errors to increase understanding of error-related circumstances and to identify high-alert situations. Drug administration errors represent the majority of medication errors, but errors are underreported. Effective ways are lacking to encourage nurses to actively report errors. Snowball sampling was conducted to recruit participants. A semi-structured questionnaire was used to record types of error, hospital and nurse backgrounds, patient consequences, error discovery mechanisms and reporting rates. Eighty-five nurses participated, reporting 328 administration errors (259 actual, 69 near misses). Most errors occurred in medical surgical wards of teaching hospitals, during day shifts, committed by nurses working fewer than two years. Leading errors were wrong drugs and doses, each accounting for about one-third of total errors. Among 259 actual errors, 83.8% resulted in no adverse effects; among remaining 16.2%, 6.6% had mild consequences and 9.6% had serious consequences (severe reaction, coma, death). Actual errors and near misses were discovered mainly through double-check procedures by colleagues and nurses responsible for errors; reporting rates were 62.5% (162/259) vs. 50.7% (35/69) and only 3.5% (9/259) vs. 0% (0/69) were disclosed to patients and families. High-alert situations included administration of 15% KCl, insulin and Pitocin; using intravenous pumps; and implementation of cardiopulmonary resuscitation (CPR). Snowball sampling proved to be an effective way to encourage nurses to release details concerning medication errors. Using empirical data, we identified high-alert situations. Strategies for reducing drug administration errors by nurses are suggested. Survey results suggest that nurses should double check medication administration in known high-alert situations. Nursing management can use snowball sampling to gather error details from nurses in a non
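The reporting and disclosure rates quoted above follow directly from the counts in the abstract; a minimal check:

```python
# Reporting and disclosure rates from the counts given in the abstract.
actual_errors, actual_reported = 259, 162
near_misses, near_reported = 69, 35
disclosed_actual = 9   # actual errors disclosed to patients and families

actual_report_rate = 100 * actual_reported / actual_errors   # -> 62.5%
near_report_rate = 100 * near_reported / near_misses         # -> 50.7%
disclosure_rate = 100 * disclosed_actual / actual_errors     # -> 3.5%
```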

  2. Soil efflux and total emission rates of magmatic CO2 at the horseshoe lake tree kill, mammoth mountain, California, 1995-1999

    Science.gov (United States)

    Gerlach, T.M.; Doukas, M.P.; McGee, K.A.; Kessler, R.

    2001-01-01

We report the results of eight soil CO2 efflux surveys by the closed circulation chamber method at the Horseshoe Lake tree kill (HLTK) - the largest tree kill on Mammoth Mountain. The surveys were undertaken from 1995 to 1999 to constrain total HLTK CO2 emissions and to evaluate occasional efflux surveys as a surveillance tool for the tree kills. HLTK effluxes range from 1 to >10,000 g m⁻² day⁻¹ (grams CO2 per square meter per day); they are not normally distributed. Station efflux rates can vary by 7-35% during the course of the 8- to 16-h surveys. Disturbance of the upper 2 cm of ground surface causes effluxes to almost double. Semivariograms of efflux spatial covariance fit exponential or spherical models; they lack nugget effects. Efflux contour maps and total CO2 emission rates based on exponential, spherical, and linear kriging models of survey data are nearly identical; similar results are also obtained with triangulation models, suggesting that the kriging models are not seriously distorted by the lack of normal efflux distributions. In addition, model estimates of total CO2 emission rates are relatively insensitive to the measurement precision of the efflux rates and to the efflux value used to separate magmatic from forest soil sources of CO2. Surveys since 1997 indicate that, contrary to earlier speculations, a termination of elevated CO2 emissions at the HLTK is unlikely anytime soon. The HLTK CO2 efflux anomaly fluctuated greatly in size and intensity throughout the 1995-1999 surveys but maintained a N-S elongation, presumably reflecting fault control of CO2 transport from depth. Total CO2 emission rates also fluctuated greatly, ranging from 46 to 136 t day⁻¹ (metric tons CO2 per day) and averaging 93 t day⁻¹. The large inter-survey variations are caused primarily by external (meteorological) processes operating on time scales of hours to days. The externally caused variations can mask significant changes occurring at depth; a striking example is
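The exponential and spherical semivariogram models mentioned above (fit here with no nugget effect) have standard closed forms. A sketch with illustrative sill and range parameters; these values are not from the survey data:

```python
import math

def exponential_semivariogram(h, nugget=0.0, sill=1.0, rng=1.0):
    """Exponential model: gamma(h) = nugget + (sill - nugget) * (1 - exp(-3h/range)).

    The abstract notes the HLTK efflux semivariograms lack nugget effects,
    i.e. nugget = 0. ('rng' avoids shadowing Python's built-in range.)
    """
    return nugget + (sill - nugget) * (1.0 - math.exp(-3.0 * h / rng))

def spherical_semivariogram(h, nugget=0.0, sill=1.0, rng=1.0):
    """Spherical model: rises as a cubic to the sill at h = range, flat beyond."""
    if h >= rng:
        return sill
    x = h / rng
    return nugget + (sill - nugget) * (1.5 * x - 0.5 * x ** 3)
```

Fitting either model to the empirical semivariogram supplies the spatial-covariance structure that the kriging interpolation of efflux (and hence the total-emission estimate) relies on.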

  3. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients, and mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions producing fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is likely possible to change the conditions under which they work. Voluntary error reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining for technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  4. [Effect of citric acid stimulation on salivary alpha-amylase, total protein, salivary flow rate and pH value in Pi deficiency children].

    Science.gov (United States)

    Yang, Ze-min; Chen, Long-hui; Lin, Jing; Zhang, Min; Yang, Xiao-rong; Chen, Wei-wen

    2015-02-01

    To compare the effect of citric acid stimulation on salivary alpha-amylase (sAA), total protein (TP), salivary flow rate, and pH value between Pi deficiency (PD) children and healthy children, thereby providing evidence for Pi controlling saliva theory. Twenty PD children were recruited, and 29 healthy children were also recruited at the same time. Saliva samples from all subjects were collected before and after citric acid stimulation. The sAA activity and amount, TP contents, salivary flow rate, and pH value were determined and compared. (1) Citric acid stimulation was able to significantly increase salivary flow rate, pH value, sAA activities, sAA specific activity and sAA amount (including glycosylated and non-glycosylated sAA amount) in healthy children (Psalivary flow rate, pH value, and glycosylated sAA levels in PD children (Psalivary indices between the two groups (P>0.05), salivary indices except salivary flow rate and glycosylated sAA levels decreased more in PD children. There was statistical difference in sAA activity ratio, sAA specific activity ratio, and the ratio of glycosylated sAA levels between PD children and healthy children (P<0.05). PD children had decreased response to citric acid stimulation.

  5. [Studies on the relationship between beta-adrenergic receptor density on cell wall lymphocytes, total serum catecholamine level and heart rate in patients with hyperthyroidism].

    Science.gov (United States)

    Gajek, J; Zieba, I; Zyśko, D

    2000-08-01

Hyperthyreosis mimics a hyperadrenergic state, and its symptoms were thought to be dependent on an increased level of catecholamines. Another reason for the symptoms could be an increased density or affinity of beta-adrenergic receptors for catecholamines. The aim of the study was to examine the elements of the sympathetic nervous system, thyroid hormone levels, and their influence on heart rate control in patients with hyperthyreosis. The study was carried out in 18 women, mean age 48.9 +/- 8.7 yrs, and 6 men, mean age 54.2 +/- 8.7 yrs. The control group consisted of 30 healthy persons matched for age and sex. We examined the density of beta-adrenergic receptors using a radioligand labelling method with 125I-cyanopindolol, serum total catecholamine level with a radioenzymatic assay kit, the levels of free thyroid hormones using radioimmunoassays, and thyreotropine level with an immunoradiometric assay. Maximal, minimal and mean heart rate were studied using a Holter monitoring system. The density of beta-adrenergic receptors in hyperthyreosis was 37.3 +/- 21.7 vs 37.2 +/- 18.1 fmol/mg in the control group (p = NS). Total catecholamine level was significantly decreased in the hyperthyreosis group: 1.5 +/- 0.89 vs 1.9 +/- 0.73 pmol/ml (p < 0.05). There were significantly higher minimal, maximal and mean heart rates in the hyperthyreosis group (p < 0.0001, p < 0.0001 and p < 0.05 respectively). There was a weak inverse correlation between minimum heart rate and triiodothyronine level (r = -0.38, p < 0.05). An inverse correlation between triiodothyronine and catecholamine levels (r = -0.49, p < 0.05) was observed. Beta-adrenergic receptor density is unchanged and catecholamine level is decreased in hyperthyreosis when compared to normal subjects. There is no correlation between minimal heart rate and adrenergic receptor density or catecholamine level in hyperthyreosis.

  6. Dose rate and total dose dependence of the 1/f noise performance of a GaAs operational amplifier during irradiation

    International Nuclear Information System (INIS)

    Hiemstra, D.M.

    1995-01-01

A pictorial of a sectioned view of the torus of the International Thermonuclear Experimental Reactor (ITER) is shown. Maintenance and inspection of the reactor are required to be performed remotely, due to the high gamma radiation environment in the vessel during inspection and maintenance activities. The custom GaAs operational amplifier is to be used to read out sensors on the in-vessel manipulator and inspection equipment. The gamma dose rate during maintenance and inspection is anticipated to be 3 Mrad(GaAs)/hour. Here, the dose rate and total dose dependence of the 1/f noise performance of a custom GaAs MESFET operational amplifier during irradiation are presented. Dose-rate-dependent 1/f noise degradation during irradiation is believed to be due to electron trapping in deep levels, enhanced by backgating and shallow traps excited during irradiation. The reduction of this effect with accumulated total dose is believed to be due to a reduction of the deep-level site concentration associated with substitutional oxygen. Post-irradiation 1/f noise degradation is also presented. The generation-recombination noise observed post irradiation can be attributed to the production of shallow traps by ionizing radiation

  7. Long-Term Primary Patency Rate After Nitinol Self-Expandable Stents Implantation in Long, Totally Occluded Femoropopliteal (TASC II C & D) Lesions.

    Science.gov (United States)

    Elmahdy, Mahmoud Farouk; Buonamici, Piergiovanni; Trapani, Maurizio; Valenti, Renato; Migliorini, Angela; Parodi, Guido; Antoniucci, David

    2017-06-01

Endovascular therapy for long femoropopliteal lesions using percutaneous transluminal balloon angioplasty or the first generation of peripheral stents has been associated with unacceptable one-year restenosis rates. However, with recent advances in equipment and techniques, a better primary patency rate is expected. This study was conducted to detect the long-term primary patency rate of nitinol self-expandable stents implanted in long, totally occluded femoropopliteal lesions, TransAtlantic Inter-Society Consensus (TASC II) type C & D, and to determine the predictors of reocclusion or restenosis in the stented segments. The demographics, clinical, anatomical, and procedural data of 213 patients with 240 de novo totally occluded femoropopliteal (TASC II type C & D) lesions treated with nitinol self-expandable stents were retrospectively analysed. Of these limbs, 159 (66.2%) presented with intermittent claudication, while 81 (33.8%) presented with critical limb ischaemia. The mean follow-up time was 36±22.6 months (range: 6.3-106.2 months). Outcomes evaluated were the primary patency rate and predictors of reocclusion or restenosis in the stented segments. The mean age of the patients was 70.9±9.3 years, with 66.2% male gender. Mean pre-procedural ABI was 0.45±0.53. One hundred and seventy-five (73%) lesions were TASC II type C, while 65 (27%) were type D lesions. The mean length of the lesions was 17.9±11.3 mm. Procedure-related complications occurred in 10 (4.1%) limbs. There was no periprocedural mortality. Reocclusion and restenosis were detected during follow-up in 45 and 30 limbs respectively, and all were re-treated by an endovascular approach. None of the patients required major amputation. Primary patency rates were 81.4±1.1%, 77.7±1.9% and 74.4±2.8% at 12, 24, and 36 months respectively. Male gender, severe calcification, and TASC II D lesion were independent predictors for reocclusion, while predictors of restenosis were DM, smoking and TASC II D lesions

  8. The Effect of Music Listening on Pain, Heart Rate Variability, and Range of Motion in Older Adults After Total Knee Replacement.

    Science.gov (United States)

    Hsu, Chih-Chung; Chen, Su-Ru; Lee, Pi-Hsia; Lin, Pi-Chu

    2017-12-01

The purpose of this study was to investigate the effects that listening and not listening to music had on pain relief, heart rate variability (HRV), and knee range of motion in total knee replacement (TKR) patients who underwent continuous passive motion (CPM) rehabilitation. We adopted a single-group quasi-experimental design. A sample of 49 TKR patients listened to music for 25 min during one session of CPM and no music during another session of CPM the same day, for a total of 2 days. Results indicated that during CPM, patients exhibited a significant decrease in the pain level when listening to music compared with no music. This study demonstrated that listening to music can effectively decrease pain during CPM rehabilitation and improve the joint range of motion in patients who underwent TKR surgery.

  9. A reviewed technique for total body electron therapy using a Varian Clinac 2100C/D high dose rate treatment beam facility

    International Nuclear Information System (INIS)

    Oliver, L.D.; Xuereb, E.M.A.; Last, V.; Hunt, P.B.; Wilfert, A.

    1996-01-01

Our (Royal North Shore Hospital) most recent linear accelerator acquisition is a Varian Clinac 2100C/D, which has a high dose rate (approximately 25 Gy per minute at 1 metre) total body electron option. We investigated the physical characteristics of the electron beam to develop a suitable method of treatment for total body electron therapy. The useful electron beam width is defined as 80 cm above and below the reference height. Measurements of the electron dose received from the two angled electron beams showed a critical dependence on the gantry angles. The treatment protocol uses ten different patient angles, fractionated into directly opposing fields and treated sequentially each day. A full cycle of treatment is completed in five days. (author)

  10. Effect of selective and nonselective beta-blockers on resting energy production rate and total body substrate utilization in chronic heart failure.

    Science.gov (United States)

    Podbregar, Matej; Voga, Gorazd

    2002-12-01

    In chronic heart failure (CHF) beta-blockers reduce myocardial oxygen consumption and improve myocardial efficiency by shifting myocardial substrate utilization from increased free fatty acid oxidation to increased glucose oxidation. The effect of selective and nonselective beta-blockers on total body resting energy production rate (EPR) and substrate utilization is not known. Twenty-six noncachectic patients with moderately severe heart failure (New York Heart Association class II or III, left ventricular ejection fraction < 0.40) were treated with carvedilol (37.5 +/- 13.5 mg/12 h) or bisoprolol (5.4 +/- 3.0 mg/d) for 6 months. Indirect calorimetry was performed before and after 6 months of treatment. Resting EPR was decreased in carvedilol (5.021 +/- 0.803 to 4.552 +/- 0.615 kJ/min, P <.001) and bisoprolol group (5.230 +/- 0.828 to 4.978 +/- 0.640 kJ/min, P <.05; nonsignificant difference between groups). Lipid oxidation rate decreased in carvedilol and remained unchanged in bisoprolol group (2.4 +/- 1.4 to 1.5 +/- 0.9 mg m(2)/kg min versus 2.7 +/- 1.1 to 2.5 +/- 1.1 mg m(2)/kg min, P <.05). Glucose oxidation rate was increased only in carvedilol (2.6 +/- 1.4 to 4.4 +/- 1.6 mg m(2)/kg min, P <.05), but did not change in bisoprolol group. Both selective and nonselective beta-blockers reduce total body resting EPR in noncachectic CHF patients. Carvedilol compared to bisoprolol shifts total body substrate utilization from lipid to glucose oxidation.

  11. Rate of progression of total, upper, and lower visual field defects in patients with open-angle glaucoma and high myopia.

    Science.gov (United States)

    Yoshino, Takaiko; Fukuchi, Takeo; Togano, Tetsuya; Sakaue, Yuta; Seki, Masaaki; Tanaka, Takayuki; Ueda, Jun

    2016-03-01

    We evaluated the rate of progression of total, upper, and lower visual field defects in patients with treated primary open-angle glaucoma (POAG) with high myopia (HM). Seventy eyes of 70 POAG patients with HM [≤-8 diopters (D)] were examined. The mean deviation (MD) slope and the upper and lower total deviation (upper TD, lower TD) slopes of the Humphrey Field Analyzer were calculated in patients with high-tension glaucoma (HTG) (>21 mmHg) versus normal-tension glaucoma (NTG) (≤21 mmHg). The mean age of all the patients (29 eyes with HTG and 41 eyes with NTG) was 48.5 ± 9.6 years. The MD slope and the upper and lower TD slopes of the HM group were compared to those of the non-HM group (NHM) (>-8 D) selected from 544 eyes in 325 age-matched POAG patients. In all, 70 eyes with HM and 70 eyes with NHM were examined. The mean MD slope was -0.33 ± 0.33 dB/year in the HM group and -0.38 ± 0.49 dB/year in the NHM group. There were no statistical differences between the HM and NHM groups (p = 0.9565). In the comparison of HTG versus NTG patients in both groups, the MD slope and the upper and lower TD slopes were similar. The rate of progression of total, upper, and lower visual field defects was similar among patients with HM and NHM. Although HM is a risk factor for the onset of glaucoma, HM may not be a risk factor for progression of visual field defects as assessed by the progression rate under treatment.
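    The MD slopes reported above (dB/year) are typically obtained by ordinary least-squares regression of mean deviation against follow-up time. A minimal sketch of that calculation; the visual-field series below is hypothetical, not the study's data:

```python
# Least-squares slope of mean deviation (MD) versus time, as used for
# glaucoma progression rates (dB/year). The data below are invented.

def md_slope(years, md_values):
    """Ordinary least-squares slope of MD (dB) against time (years)."""
    n = len(years)
    mean_t = sum(years) / n
    mean_md = sum(md_values) / n
    sxx = sum((t - mean_t) ** 2 for t in years)
    sxy = sum((t - mean_t) * (m - mean_md) for t, m in zip(years, md_values))
    return sxy / sxx

# Hypothetical series: MD declining by roughly a third of a dB per year.
years = [0.0, 1.0, 2.0, 3.0, 4.0]
md = [-3.0, -3.4, -3.7, -4.1, -4.4]
slope = md_slope(years, md)
print(f"MD slope: {slope:.2f} dB/year")  # prints: MD slope: -0.35 dB/year
```

A negative slope indicates worsening; the upper and lower TD slopes in the abstract are the same regression applied to hemifield totals.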

  12. Opioids Consumed in the Immediate Post-Operative Period Do Not Influence How Patients Rate Their Experience of Care After Total Hip Arthroplasty.

    Science.gov (United States)

    Etcheson, Jennifer I; Gwam, Chukwuweike U; George, Nicole E; Virani, Sana; Mont, Michael A; Delanois, Ronald E

    2018-04-01

    Patient perception of care, commonly measured with Press Ganey (PG) surveys, is an important metric used to determine hospital and provider reimbursement. However, post-operative pain following total hip arthroplasty (THA) may negatively affect patient satisfaction. As a result, over-administration of opioids may occur, even without marked evidence of pain. Therefore, this study evaluated whether opioid consumption in the immediate postoperative period bears any influence on satisfaction scores after THA. Specifically, this study assessed the correlation between post-operative opioid consumption and 7 PG domains: (1) Overall hospital rating; (2) Communication with nurses; (3) Responsiveness of hospital staff; (4) Communication with doctors; (5) Hospital environment; (6) Pain Management; and (7) Communication about medicines. Our institutional PG database was reviewed for patients who received THA from 2011 to 2014. A total of 322 patients (mean age = 65 years; 61% female) were analyzed. Patients' opioid consumption was measured using a morphine milli-equivalent conversion algorithm. Bivariate correlation analysis assessed the association between opioid consumption and Press Ganey survey elements. Pearson's r assessed the strength of the association. No correlation was found between total opioid consumption and Overall hospital rating (r = 0.004; P = .710), Communication with nurses (r = 0.093; P = .425), Responsiveness of hospital staff (r = 0.104; P = .381), Communication with doctors (r = 0.009; P = .940), Hospital environment (r = 0.081; P = .485), and Pain management (r = 0.075; P = .536). However, there was a positive correlation between total opioid consumption and "Communication about medicines" (r = 0.262; P = .043). Our report demonstrates that PG patient satisfaction scores are not influenced by post-operative opioid use, with the exception of the PG domain "Communication about medicines." These results suggest that opioid medications should be
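    The bivariate analysis above relies on Pearson's product-moment correlation. A self-contained sketch of how such an r is computed; the dose and rating values here are invented for illustration, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical opioid totals (morphine mg equivalents) and 1-5 ratings.
opioid_mme = [10, 25, 40, 55, 70, 90]
rating     = [5, 4, 5, 3, 4, 3]
r = pearson_r(opioid_mme, rating)
```

With n = 322 as in the study, even small |r| values yield tight P-values, which is why the abstract reports both r and P for each domain.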

  13. Antiretroviral medication prescribing errors are common with hospitalization of HIV-infected patients.

    Science.gov (United States)

    Commers, Tessa; Swindells, Susan; Sayles, Harlan; Gross, Alan E; Devetten, Marcel; Sandkovsky, Uriel

    2014-01-01

    Errors in prescribing antiretroviral therapy (ART) often occur with the hospitalization of HIV-infected patients. The rapid identification and prevention of errors may reduce patient harm and healthcare-associated costs. A retrospective review of hospitalized HIV-infected patients was carried out between 1 January 2009 and 31 December 2011. Errors were documented as omission, underdose, overdose, duplicate therapy, incorrect scheduling and/or incorrect therapy. The time to error correction was recorded. Relative risks (RRs) were computed to evaluate patient characteristics and error rates. A total of 289 medication errors were identified in 146/416 admissions (35%). The most common was drug omission (69%). At an error rate of 31%, nucleoside reverse transcriptase inhibitors were associated with an increased risk of error when compared with protease inhibitors (RR 1.32; 95% CI 1.04-1.69) and co-formulated drugs (RR 1.59; 95% CI 1.19-2.09). Of the errors, 31% were corrected within the first 24 h, but over half (55%) were never remedied. Admissions with an omission error were 7.4 times more likely to have all errors corrected within 24 h than were admissions without an omission. Drug interactions with ART were detected on 51 occasions. For the study population (n = 177), an increased risk of admission error was observed for black (43%) compared with white (28%) individuals (RR 1.53; 95% CI 1.16-2.03) but no significant differences were observed between white patients and other minorities or between men and women. Errors in inpatient ART were common, and the majority were never detected. The most common errors involved omission of medication, and nucleoside reverse transcriptase inhibitors had the highest rate of prescribing error. Interventions to prevent and correct errors are urgently needed.
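    The relative risks quoted above (e.g., RR 1.32; 95% CI 1.04-1.69) follow the standard log-normal approximation for a 2x2 table. A hedged sketch of that calculation; the abstract does not give the underlying cell counts, so the counts below are illustrative assumptions only:

```python
import math

def relative_risk(a, n1, c, n2):
    """Relative risk of an event in group 1 vs group 2 with a 95% CI
    (Katz log-normal approximation). a/n1 and c/n2 are events/totals."""
    rr = (a / n1) / (c / n2)
    # Standard error of ln(RR)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Illustrative counts (NOT the study's data): 62 errors in 200 NRTI
# orders versus 47 errors in 200 protease-inhibitor orders.
rr, lo, hi = relative_risk(62, 200, 47, 200)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A CI that excludes 1.0, as in the abstract's NRTI comparisons, indicates a statistically significant excess risk.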

  14. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  15. Constraints on the ^22Ne(α,n)^25Mg reaction rate from ^natMg+n Total and ^25Mg(n,γ) Cross Sections

    Science.gov (United States)

    Koehler, Paul

    2002-10-01

    The ^22Ne(α,n)^25Mg reaction is the neutron source during the s process in massive and intermediate mass stars as well as a secondary neutron source during the s process in low mass stars. Therefore, an accurate determination of this rate is important for a better understanding of the origin of nuclides heavier than iron as well as for improving s-process models. Also, because the s process produces seed nuclides for a later p process in massive stars, an accurate value for this rate is important for a better understanding of the p process. Because the lowest observed resonance in direct ^22Ne(α,n)^25Mg measurements is considerably above the most important energy range for s-process temperatures, the uncertainty in this rate is dominated by the poorly known properties of states in ^26Mg between this resonance and threshold. Neutron measurements can observe these states with much better sensitivity and determine their parameters much more accurately than direct ^22Ne(α,n)^25Mg measurements. I have analyzed previously reported Mg+n total and ^25Mg(n,γ) cross sections to obtain a much improved set of resonance parameters for states in ^26Mg in this region, and an improved estimate of the uncertainty in the ^22Ne(α,n)^25Mg reaction rate. This work was supported by the U.S. DOE under contract No. DE-AC05-00OR22725 with UT-Battelle, LLC.

  16. Total volatile fatty acids and bacterial production rates as affected by rations containing untreated or ammonia (urea) treated rice straw in cross-bred cattle

    International Nuclear Information System (INIS)

    Puri, J.P.; Gupta, B.N.

    1990-01-01

    An experiment was conducted to study the effect of feeding ammoniated rice straw on ruminal total volatile fatty acid (TVFA) and bacterial production rates. Twelve Karan Swiss, male, rumen-fistulated calves (2-2.5 yr) were divided into three equal groups. Animals were offered rice straw either untreated (A), or treated with 4 per cent urea + 40 per cent moisture and ensiled for 30 days (B), or treated with 5 per cent urea + 30 per cent moisture and ensiled for 30 days (C). Protein requirements were met through a concentrate mixture. Levels of NH3-N and TCA-precipitable-N in strained rumen liquor (SRL) were significantly higher in the groups fed ammoniated straw (B: 20.34±0.022 and 63.26±0.81; C: 20.78±0.41 and 64.98±0.87 mg/100 ml SRL) than in group A (±0.31 and 45.94±1.91 mg/100 ml SRL), respectively. The bacterial production rates in the rumen (g/day) were significantly higher in groups B and C as compared to group A. TVFA concentrations (mmole/100 ml SRL) and TVFA production rates (mmole/d) were also significantly higher in groups B and C as compared to group A. The bacterial production rates were significantly correlated with TVFA, NH3-N, and TCA-precipitable-N concentrations in the rumen and with ATP production. Multiple regression equations relating bacterial production rates with (i) NH3-N and TVFA concentrations in the rumen, (ii) NH3-N and TVFA production rates, and (iii) NH3-N and ATP produced were also developed. (author). 18 refs., 2 tabs

  17. Total- and methyl-mercury concentrations and methylation rates across the freshwater to hypersaline continuum of the Great Salt Lake, Utah, USA

    Science.gov (United States)

    Johnson, William P.; Swanson, Neil; Black, Brooks; Rudd, Abigail; Carling, Gregory; Fernandez, Diego P.; Luft, John; Van Leeuwen, Jim; Marvin-DiPasquale, Mark C.

    2015-01-01

    We examined mercury (Hg) speciation in water and sediment of the Great Salt Lake and surrounding wetlands, a locale spanning fresh to hypersaline and oxic to anoxic conditions, in order to test the hypothesis that spatial and temporal variations in Hg concentration and methylation rates correspond to observed spatial and temporal trends in Hg burdens previously reported in biota. Water column, sediment, and pore water concentrations of methylmercury (MeHg) and total mercury (THg), as well as related aquatic chemical parameters were examined. Inorganic Hg(II)-methylation rates were determined in selected water column and sediment subsamples spiked with inorganic divalent mercury (204Hg(II)). Net production of Me204Hg was expressed as apparent first-order rate constants for methylation (kmeth), which were also expanded to MeHg production potential (MPP) rates via combination with tin reducible 'reactive' Hg(II) (Hg(II)R) as a proxy for bioavailable Hg(II). Notable findings include: 1) elevated Hg concentrations previously reported in birds and brine flies were spatially proximal to the measured highest MeHg concentrations, the latter occurring in the anoxic deep brine layer (DBL) of the Great Salt Lake; 2) timing of reduced Hg(II)-methylation rates in the DBL (according to both kmeth and MPP) coincides with reduced Hg burdens among aquatic invertebrates (brine shrimp and brine flies) that act as potential vectors of Hg propagation to the terrestrial ecosystem; 3) values of kmeth were found to fall within the range reported by other studies; and 4) MPP rates were on the lower end of the range reported in methodologically comparable studies, suggesting the possibility that elevated MeHg in the anoxic deep brine layer results from its accumulation and persistence in this quasi-isolated environment, due to the absence of light (restricting abiotic photodemethylation) and/or minimal microbiological demethylation.
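    The rate quantities above relate as MPP ≈ kmeth × Hg(II)R, with kmeth an apparent first-order constant derived from the 204Hg(II) spike incubations. A minimal sketch under that first-order assumption; the incubation fraction, duration, and reactive-Hg concentration below are all hypothetical:

```python
import math

def first_order_k(fraction_methylated, hours):
    """Apparent first-order rate constant k_meth (per day), inferred from
    the fraction of a 204Hg(II) spike recovered as Me204Hg after `hours`,
    assuming first-order kinetics: f = 1 - exp(-k*t)."""
    days = hours / 24.0
    return -math.log(1.0 - fraction_methylated) / days

def mpp(k_meth, hg2_reactive):
    """MeHg production potential: k_meth (1/day) times the tin-reducible
    'reactive' Hg(II) concentration (ng/g), giving ng/g/day."""
    return k_meth * hg2_reactive

# Hypothetical incubation: 1.5% of the spike methylated in 24 h,
# with 2.0 ng/g reactive Hg(II) in the sediment.
k = first_order_k(0.015, 24.0)
rate = mpp(k, 2.0)
```

Coupling kmeth to Hg(II)R rather than to THg is what lets MPP act as a proxy for production from the bioavailable pool, as the abstract describes.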

  18. Decreasing patient identification band errors by standardizing processes.

    Science.gov (United States)

    Walley, Susan Chu; Berger, Stephanie; Harris, Yolanda; Gallizzi, Gina; Hayes, Leslie

    2013-04-01

    Patient identification (ID) bands are an essential component in patient ID. Quality improvement methodology has been applied as a model to reduce ID band errors although previous studies have not addressed standardization of ID bands. Our specific aim was to decrease ID band errors by 50% in a 12-month period. The Six Sigma DMAIC (define, measure, analyze, improve, and control) quality improvement model was the framework for this study. ID bands at a tertiary care pediatric hospital were audited from January 2011 to January 2012 with continued audits to June 2012 to confirm the new process was in control. After analysis, the major improvement strategy implemented was standardization of styles of ID bands and labels. Additional interventions included educational initiatives regarding the new ID band processes and disseminating institutional and nursing unit data. A total of 4556 ID bands were audited with a preimprovement ID band error average rate of 9.2%. Significant variation in the ID band process was observed, including styles of ID bands. Interventions were focused on standardization of the ID band and labels. The ID band error rate improved to 5.2% in 9 months (95% confidence interval: 2.5-5.5). This decrease in ID band error rates was maintained over the subsequent 8 months.

  19. High rates of clinically relevant incidental findings by total-body CT scanning in trauma patients; results of the REACT-2 trial

    Energy Technology Data Exchange (ETDEWEB)

    Treskes, K.; Bos, S.A.; Sierink, J.C.; Luitse, J.S.K.; Goslings, J.C. [Academic Medical Center, Trauma Unit, Department of Surgery, Amsterdam (Netherlands); Beenen, L.F.M. [Academic Medical Center, Department of Radiology, Amsterdam (Netherlands); Edwards, M.J.R. [Radboud University Medical Center, Department of Trauma and emergency surgery, Nijmegen (Netherlands); Beuker, B.J.A. [University Medical Center Groningen, Trauma Unit, Department of Surgery, Groningen (Netherlands); Muradin, G.S.R. [University Medical Center Rotterdam, Department of Radiology, Erasmus MC, Rotterdam (Netherlands); Hohmann, J. [University of Basel Hospital, Department of Radiology and Nuclear Medicine, Basel (Switzerland); Hollmann, M.W. [Academic Medical Center, Department of Anaesthesiology, Amsterdam (Netherlands); Dijkgraaf, M.G.W. [Academic Medical Center, Clinical Research Unit, Amsterdam (Netherlands); Collaboration: REACT-2 study group

    2017-06-15

    To determine whether there is a difference in frequency and clinical relevance of incidental findings detected by total-body computed tomography scanning (TBCT) compared to those by the standard work-up (STWU) with selective computed tomography (CT) scanning. Trauma patients from five trauma centres were randomized between April 2011 and January 2014 to TBCT imaging or STWU consisting of conventional imaging with selective CT scanning. Incidental findings were divided into three categories: 1) major finding, may cause mortality; 2) moderate finding, may cause morbidity; and 3) minor finding, hardly relevant. Generalized estimating equations were applied to assess differences in incidental findings. In total, 1083 patients were enrolled, of which 541 patients (49.9 %) were randomized for TBCT and 542 patients (50.1 %) for STWU. Major findings were detected in 23 patients (4.3 %) in the TBCT group compared to 9 patients (1.7 %) in the STWU group (adjusted rate ratio 2.851; 95%CI 1.337-6.077; p < 0.007). Findings of moderate relevance were detected in 120 patients (22.2 %) in the TBCT group compared to 86 patients (15.9 %) in the STWU group (adjusted rate ratio 1.421; 95%CI 1.088-1.854; p < 0.010). Compared to selective CT scanning, more patients with clinically relevant incidental findings can be expected by TBCT scanning. (orig.)

  20. Estimates of rates and errors for measurements of direct-γ and direct-γ + jet production by polarized protons at RHIC

    International Nuclear Information System (INIS)

    Beddo, M.E.; Spinka, H.; Underwood, D.G.

    1992-01-01

    Studies of inclusive direct-γ production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between √s = 50 and 500 GeV. Also, rates were computed for direct-γ + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made

  1. Exercise order affects the total training volume and the ratings of perceived exertion in response to a super-set resistance training session

    Directory of Open Access Journals (Sweden)

    Balsamo S

    2012-02-01

    Full Text Available Sandor Balsamo1–3, Ramires Alsamir Tibana1,2,4, Dahan da Cunha Nascimento1,2, Gleyverton Landim de Farias1,2, Zeno Petruccelli1,2, Frederico dos Santos de Santana1,2, Otávio Vanni Martins1,2, Fernando de Aguiar1,2, Guilherme Borges Pereira4, Jéssica Cardoso de Souza4, Jonato Prestes4. 1Department of Physical Education, Centro Universitário UNIEURO, Brasília; 2GEPEEFS (Resistance Training and Health Research Group), Brasília/DF; 3Graduate Program in Medical Sciences, School of Medicine, Universidade de Brasília (UnB), Brasília; 4Graduation Program in Physical Education, Catholic University of Brasilia (UCB), Brasília/DF, Brazil. Abstract: The super-set is a widely used resistance training method consisting of exercises for agonist and antagonist muscles with limited or no rest interval between them – for example, bench press followed by bent-over rows. In this sense, the aim of the present study was to compare the effects of different super-set exercise sequences on the total training volume. A secondary aim was to evaluate the ratings of perceived exertion and fatigue index in response to different exercise orders. On separate testing days, twelve resistance-trained men, aged 23.0 ± 4.3 years, height 174.8 ± 6.75 cm, body mass 77.8 ± 13.27 kg, body fat 12.0% ± 4.7%, were submitted to a super-set method using two different exercise orders: quadriceps (leg extension) + hamstrings (leg curl) (QH) or hamstrings (leg curl) + quadriceps (leg extension) (HQ). Sessions consisted of three sets with a ten-repetition maximum load with 90 seconds rest between sets. Results revealed that the total training volume was higher for the HQ exercise order (P = 0.02) with lower perceived exertion than the inverse order (P = 0.04). These results suggest that the HQ exercise order involving lower limbs may benefit practitioners interested in reaching a higher total training volume with lower ratings of perceived exertion compared with the leg extension plus leg curl

  2. Disparities in total knee replacement: Population losses in quality-adjusted life years due to differential offer, acceptance, and complication rates for Black Americans.

    Science.gov (United States)

    Kerman, Hannah M; Smith, Savannah R; Smith, Karen C; Collins, Jamie E; Suter, Lisa G; Katz, Jeffrey N; Losina, Elena

    2018-01-24

    Total knee replacement (TKR) is an effective treatment for end-stage knee osteoarthritis (OA). American racial minorities undergo fewer TKRs than Whites. We estimated quality-adjusted life years (QALYs) lost for Black knee OA patients due to differences in TKR offer, acceptance, and complication rates. We used the Osteoarthritis Policy Model, a computer simulation of knee OA, to predict QALY outcomes for Black and White knee OA patients with and without TKR. We estimated per-person QALYs gained from TKR as the difference between QALYs with current TKR use and QALYs when no TKR was performed. We estimated average, per-person QALY losses in Blacks as the difference between QALYs gained with White rates of TKR and QALYs gained with Black rates of TKR. We calculated population-level QALY losses by multiplying per-person QALY losses by the number of persons with advanced knee OA. Finally, we estimated QALYs lost specifically due to lower TKR offer and acceptance and higher complications among Black knee OA patients. Black men and women gain 64,100 QALYs from current TKR use. With White offer and complication rates, they would gain an additional 72,000 QALYs. Because these additional gains are unrealized, we call this a loss of 72,000 QALYs. Black Americans lose 67,500 QALYs because of lower offer, 15,800 QALYs because of lower acceptance, and 2,600 QALYs because of higher complications. Black Americans lose 72,000 QALYs due to disparities in TKR offer and complication rates. Programs to decrease disparities in TKR use are urgently needed. This article is protected by copyright. All rights reserved.

  3. TU-CD-304-04: Scanning Field Total Body Irradiation Using Dynamic Arc with Variable Dose Rate and Gantry Speed

    Energy Technology Data Exchange (ETDEWEB)

    Yi, B; Xu, H; Mutaf, Y; Prado, K [Univ. of Maryland School Of Medicine, Baltimore, MD (United States)

    2015-06-15

    Purpose: Enable a scanning field total body irradiation (TBI) technique, using dynamic arcs, which is biologically equivalent to a moving couch TBI. Methods: The patient is treated slightly above the floor and the treatment field scans across the patient by a moving gantry. MLC positions change during gantry motion to keep the same field opening at the level of the treatment plane (170 cm). This is done to mimic the same geometry as the moving couch TBI technique which has been used in our institution for over 10 years. The dose rate and the gantry speed are determined considering a constant speed of the moving field, variations in SSD, and slanted depths resulting from oblique gantry angles. An Eclipse (Varian) planning system is commissioned to accommodate the extended SSD. The dosimetric foundations of the technique have been thoroughly investigated using phantom measurements. Results: Dose uniformity better than 2% across 180 cm length at 10 cm depth is achieved by moving the gantry from −55 to +55 deg. Treatment range can be extended by increasing gantry range. No device such as a gravity-oriented compensator is needed to achieve a uniform dose. It is feasible to modify the dose distribution by adjusting the dose rate at each gantry angle to compensate for body thickness differences. Total treatment time for 2 Gy AP/PA fields is 40–50 minutes excluding patient set-up time, at the machine dose rate of 100 MU/min. Conclusion: This novel yet transportable moving field technique enables TBI treatment in a small treatment room with less program development preparation than other techniques. Treatment length can be extended as needed, and MLC-based thickness compensation and partial lung blocking are also possible.
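    The dose-rate/gantry-speed adjustment described must compensate for the longer source-to-surface distance (SSD) and slant depth at oblique gantry angles. A simplified sketch of that geometry, assuming pure inverse-square falloff at the 170 cm plane and an assumed effective attenuation coefficient; both the coefficient and the depths are hypothetical, not the commissioned beam data:

```python
import math

REF_SSD = 170.0   # cm, source to treatment plane at gantry 0 (from the abstract)
MU_EFF = 0.03     # per cm, assumed effective attenuation coefficient (hypothetical)

def relative_output(gantry_deg, depth_cm=10.0):
    """Relative dose rate delivered at depth in the treatment plane when
    the gantry is tilted: inverse-square falloff over the longer slant
    SSD, times extra exponential attenuation along the oblique path."""
    theta = math.radians(gantry_deg)
    slant_ssd = REF_SSD / math.cos(theta)       # longer source-to-plane distance
    slant_depth = depth_cm / math.cos(theta)    # oblique path through tissue
    inv_square = (REF_SSD / slant_ssd) ** 2
    attenuation = math.exp(-MU_EFF * (slant_depth - depth_cm))
    return inv_square * attenuation

# The machine would raise the MU rate (or slow the gantry) by the
# reciprocal of this factor to keep the planar dose uniform.
for angle in (0, 25, 55):
    comp = 1.0 / relative_output(angle)
```

At ±55 deg the compensation factor is several-fold, which is why a fixed dose rate with constant gantry speed cannot deliver the 2% uniformity reported.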

  4. Estimation of DMFT, Salivary Streptococcus Mutans Count, Flow Rate, pH, and Salivary Total Calcium Content in Pregnant and Non-Pregnant Women: A Prospective Study.

    Science.gov (United States)

    Kamate, Wasim Ismail; Vibhute, Nupura Aniket; Baad, Rajendra Krishna

    2017-04-01

    Pregnancy, a period from conception till birth, causes changes in the functioning of the human body as a whole and specifically in the oral cavity that may favour the emergence of dental caries. Many studies have shown pregnant women at increased risk for dental caries, however, specific salivary caries risk factors and the particular period of pregnancy at heightened risk for dental caries are yet to be explored and give a scope of further research in this area. The aim of the present study was to assess the severity of dental caries in pregnant women compared to non-pregnant women by evaluating parameters like Decayed, Missing, Filled Teeth (DMFT) index, salivary Streptococcus mutans count, flow rate, pH and total calcium content. A total of 50 first time pregnant women in the first trimester were followed during their second trimester, third trimester and postpartum period for the evaluation of DMFT by World Health Organization (WHO) scoring criteria, salivary flow rate by drooling method, salivary pH by pH meter, salivary total calcium content by bioassay test kit and salivary Streptococcus mutans count by semiautomatic counting of colonies grown on Mitis Salivarius (MS) agar supplemented by 0.2U/ml of bacitracin and 10% sucrose. The observations of pregnant women were then compared with same parameters evaluated in the 50 non-pregnant women. Paired t-test and Wilcoxon signed-rank test were performed to assess the association between the study parameters. Evaluation of different caries risk factors between pregnant and non-pregnant women clearly showed that pregnant women were at a higher risk for dental caries. Comparison of caries risk parameters during the three trimesters and postpartum period showed that the salivary Streptococcus mutans count had significantly increased in the second trimester, third trimester and postpartum period while the mean pH and mean salivary total calcium content decreased in the third trimester and postpartum period. These

  5. Estimation of DMFT, Salivary Streptococcus Mutans Count, Flow Rate, pH, and Salivary Total Calcium Content in Pregnant and Non-Pregnant Women: A Prospective Study

    Science.gov (United States)

    Vibhute, Nupura Aniket; Baad, Rajendra Krishna

    2017-01-01

    Introduction Pregnancy, a period from conception till birth, causes changes in the functioning of the human body as a whole and specifically in the oral cavity that may favour the emergence of dental caries. Many studies have shown pregnant women at increased risk for dental caries, however, specific salivary caries risk factors and the particular period of pregnancy at heightened risk for dental caries are yet to be explored and give a scope of further research in this area. Aim The aim of the present study was to assess the severity of dental caries in pregnant women compared to non-pregnant women by evaluating parameters like Decayed, Missing, Filled Teeth (DMFT) index, salivary Streptococcus mutans count, flow rate, pH and total calcium content. Materials and Methods A total of 50 first time pregnant women in the first trimester were followed during their second trimester, third trimester and postpartum period for the evaluation of DMFT by World Health Organization (WHO) scoring criteria, salivary flow rate by drooling method, salivary pH by pH meter, salivary total calcium content by bioassay test kit and salivary Streptococcus mutans count by semiautomatic counting of colonies grown on Mitis Salivarius (MS) agar supplemented by 0.2U/ml of bacitracin and 10% sucrose. The observations of pregnant women were then compared with same parameters evaluated in the 50 non-pregnant women. Paired t-test and Wilcoxon signed-rank test were performed to assess the association between the study parameters. Results Evaluation of different caries risk factors between pregnant and non-pregnant women clearly showed that pregnant women were at a higher risk for dental caries. 
Comparison of caries risk parameters during the three trimesters and postpartum period showed that the salivary Streptococcus mutans count had significantly increased in the second trimester, third trimester and postpartum period while the mean pH and mean salivary total calcium content decreased in the third

  6. Error analysis in predictive modelling demonstrated on mould data.

    Science.gov (United States)

    Baranyi, József; Csernus, Olívia; Beczner, Judit

    2014-01-17

    The purpose of this paper was to develop a predictive model for the effect of temperature and water activity on the growth rate of Aspergillus niger and to determine the sources of the error when the model is used for prediction. Parallel mould growth curves, derived from the same spore batch, were generated and fitted to determine their growth rate. The variances of replicate ln(growth-rate) estimates were used to quantify the experimental variability, inherent to the method of determining the growth rate. The environmental variability was quantified by the variance of the respective means of replicates. The idea is analogous to the "within group" and "between groups" variability concepts of ANOVA procedures. A (secondary) model, with temperature and water activity as explanatory variables, was fitted to the natural logarithm of the growth rates determined by the primary model. The model error and the experimental and environmental errors were ranked according to their contribution to the total error of prediction. Our method can readily be applied to analysing the error structure of predictive models of bacterial growth, too. © 2013.
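    The within-group/between-groups decomposition described can be made concrete: the pooled within-condition variance of replicate ln(growth-rate) estimates quantifies the experimental error, while the variance of the condition means reflects environmental variability. A minimal sketch; the replicate values and (temperature, water activity) conditions below are made up:

```python
def error_components(groups):
    """Decompose replicate ln(growth-rate) data into within-group
    (experimental) and between-group (environmental) variance,
    analogous to one-way ANOVA. `groups` maps condition -> replicates."""
    # Pooled within-group variance (experimental error)
    ss_within, df_within = 0.0, 0
    means = {}
    for cond, reps in groups.items():
        m = sum(reps) / len(reps)
        means[cond] = m
        ss_within += sum((x - m) ** 2 for x in reps)
        df_within += len(reps) - 1
    var_within = ss_within / df_within
    # Variance of condition means (environmental variability)
    grand = sum(means.values()) / len(means)
    var_between = sum((m - grand) ** 2 for m in means.values()) / (len(means) - 1)
    return var_within, var_between

# Hypothetical ln(growth-rate) replicates at three (T °C, aw) conditions.
data = {
    (25, 0.95): [0.10, 0.14, 0.12],
    (30, 0.95): [0.35, 0.31, 0.33],
    (30, 0.90): [0.05, 0.09, 0.07],
}
vw, vb = error_components(data)
```

Ranking these components against the secondary model's residual variance, as the paper does, shows which error source dominates the total prediction error.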

  7. The COS/UVES absorption survey of the Magellanic stream. III. Ionization, total mass, and inflow rate onto the Milky Way

    International Nuclear Information System (INIS)

    Fox, Andrew J.; Thom, Christopher; Tumlinson, Jason; Ely, Justin; Kumari, Nimisha; Wakker, Bart P.; Hernandez, Audra K.; Haffner, L. Matthew; Barger, Kathleen A.; Lehner, Nicolas; Howk, J. Christopher; Richter, Philipp; Bland-Hawthorn, Joss; Charlton, Jane C.; Westmeier, Tobias; Misawa, Toru; Rodriguez-Hidalgo, Paola

    2014-01-01

    Dynamic interactions between the two Magellanic Clouds have flung large quantities of gas into the halo of the Milky Way. The result is a spectacular arrangement of gaseous structures, including the Magellanic Stream, the Magellanic Bridge, and the Leading Arm (collectively referred to as the Magellanic System). In this third paper of a series studying the Magellanic gas in absorption, we analyze the gas ionization level using a sample of 69 Hubble Space Telescope/Cosmic Origins Spectrograph sightlines that pass through or within 30° of the 21 cm emitting regions. We find that 81% (56/69) of the sightlines show UV absorption at Magellanic velocities, indicating that the total cross-section of the Magellanic System is ≈11,000 deg², or around one-quarter of the entire sky. Using observations of the Si III/Si II ratio together with Cloudy photoionization modeling, we calculate the total gas mass (atomic plus ionized) of the Magellanic System to be ≈2.0 × 10⁹ M☉ (d/55 kpc)², with the ionized gas contributing around three times as much mass as the atomic gas. This is larger than the current-day interstellar H I mass of both Magellanic Clouds combined, indicating that they have lost most of their initial gas mass. If the gas in the Magellanic System survives to reach the Galactic disk over its inflow time of ∼0.5-1.0 Gyr, it will represent an average inflow rate of ∼3.7-6.7 M☉ yr⁻¹, potentially raising the Galactic star formation rate. However, multiple signs of an evaporative interaction with the hot Galactic corona indicate that the Magellanic gas may not survive its journey to the disk fully intact and will instead add material to (and cool) the corona.

  8. The COS/UVES absorption survey of the Magellanic stream. III. Ionization, total mass, and inflow rate onto the Milky Way

    Energy Technology Data Exchange (ETDEWEB)

    Fox, Andrew J.; Thom, Christopher; Tumlinson, Jason; Ely, Justin; Kumari, Nimisha [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Wakker, Bart P.; Hernandez, Audra K.; Haffner, L. Matthew [Department of Astronomy, University of Wisconsin-Madison, 475 North Charter Street, Madison, WI 53706 (United States); Barger, Kathleen A.; Lehner, Nicolas; Howk, J. Christopher [Department of Physics, University of Notre Dame, 225 Nieuwland Science Hall, Notre Dame, IN 46556 (United States); Richter, Philipp [Institut für Physik und Astronomie, Universität Potsdam, Haus 28, Karl-Liebknecht-Strasse 24/25, D-14476, Potsdam (Germany); Bland-Hawthorn, Joss [Institute of Astronomy, School of Physics, University of Sydney, Sydney, NSW 2006 (Australia); Charlton, Jane C. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Westmeier, Tobias [ICRAR, The University of Western Australia, 35 Stirling Highway, Crawley, WA 6009 (Australia); Misawa, Toru [School of General Education, Shinshu University, 3-1-1 Asahi, Matsumoto, Nagano 390-8621 (Japan); Rodriguez-Hidalgo, Paola, E-mail: afox@stsci.edu [Department of Physics and Astronomy, York University, 4700 Keele Street, Toronto, ON M3J 1P3 (Canada)

    2014-06-01

Dynamic interactions between the two Magellanic Clouds have flung large quantities of gas into the halo of the Milky Way. The result is a spectacular arrangement of gaseous structures, including the Magellanic Stream, the Magellanic Bridge, and the Leading Arm (collectively referred to as the Magellanic System). In this third paper of a series studying the Magellanic gas in absorption, we analyze the gas ionization level using a sample of 69 Hubble Space Telescope/Cosmic Origins Spectrograph sightlines that pass through or within 30° of the 21 cm emitting regions. We find that 81% (56/69) of the sightlines show UV absorption at Magellanic velocities, indicating that the total cross-section of the Magellanic System is ≈11,000 deg², or around one-quarter of the entire sky. Using observations of the Si III/Si II ratio together with Cloudy photoionization modeling, we calculate the total gas mass (atomic plus ionized) of the Magellanic System to be ≈2.0 × 10⁹ M☉ (d/55 kpc)², with the ionized gas contributing around three times as much mass as the atomic gas. This is larger than the current-day interstellar H I mass of both Magellanic Clouds combined, indicating that they have lost most of their initial gas mass. If the gas in the Magellanic System survives to reach the Galactic disk over its inflow time of ∼0.5-1.0 Gyr, it will represent an average inflow rate of ∼3.7-6.7 M☉ yr⁻¹, potentially raising the Galactic star formation rate. However, multiple signs of an evaporative interaction with the hot Galactic corona indicate that the Magellanic gas may not survive its journey to the disk fully intact and will instead add material to (and cool) the corona.

  9. SU-E-T-501: Normal Tissue Toxicities of Pulsed Low Dose Rate Radiotherapy and Conventional Radiotherapy: An in Vivo Total Body Irradiation Study

    Energy Technology Data Exchange (ETDEWEB)

    Cvetkovic, D; Zhang, P; Wang, B; Chen, L; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States)

    2014-06-01

Purpose: Pulsed low dose rate radiotherapy (PLDR) is a re-irradiation technique for therapy of recurrent cancers. We have previously shown a significant difference in the weight and survival time between mice treated with conventional radiotherapy (CRT) and PLDR using total body irradiation (TBI). The purpose of this study was to investigate the in vivo effects of PLDR on normal mouse tissues. Materials and Methods: Twenty-two male BALB/c nude mice, 4 months of age, were randomly assigned into a PLDR group (n=10), a CRT group (n=10), and a non-irradiated control group (n=2). The Siemens Artiste accelerator with 6 MV photon beams was used. The mice received a total of 18 Gy in 3 fractions with a 20-day interval. The CRT group received the 6 Gy dose continuously at a dose rate of 300 MU/min. The PLDR group was irradiated with 0.2 Gy × 20 pulses with a 3-min interval between the pulses. The mice were weighed thrice weekly and sacrificed 2 weeks after the last treatment. Brain, heart, lung, liver, spleen, gastrointestinal, urinary and reproductive organs, and sternal bone marrow were removed, formalin-fixed, paraffin-embedded and stained with hematoxylin and eosin (H&E). Morphological changes were observed under a microscope. Results: Histopathological examination revealed atrophy in several irradiated organs. The degree of atrophy was mild to moderate in the PLDR group, but severe in the CRT group. The most pronounced morphological abnormalities were in the immune and hematopoietic systems, namely the spleen and bone marrow. Brain hemorrhage was seen in the CRT group, but not in the PLDR group. Conclusions: Our results showed that PLDR induced less toxicity in normal mouse tissues than conventional radiotherapy for the same dose and regimen. Considering that PLDR produces equivalent tumor control to conventional radiotherapy, it would be a good modality for treatment of recurrent cancers.

  10. Flujo y concentración de proteínas en saliva total humana Salivary flow rate and protein concentration in human whole saliva

    Directory of Open Access Journals (Sweden)

    JOSÉ ANTONIO BANDERAS-TARABAY

    1997-09-01

Full Text Available Objective. To determine average salivary flow rates and total protein concentrations in a young population of the State of Mexico. Material and methods. Non-stimulated and stimulated human whole saliva (HWS) was collected from 120 subjects and analyzed by gravimetry and spectrophotometry; measures of central tendency and dispersion were calculated, and the results were correlated with the DMFT and CPITN indexes. Results. Subjects showed an average salivary flow rate (ml/min ± SD) of 0.397±.26 in non-stimulated HWS and 0.973±.53 in stimulated HWS. The average protein concentration (mg/ml ± SD) was 1.374±.45 in non-stimulated and 1.526±.44 in stimulated HWS. Women presented a lower salivary flow rate and a higher protein concentration. No correlations were observed between flow rate or total protein concentration and the DMFT and CPITN indexes; however, correlations with other variables were found. Conclusions. These findings could be associated with nutritional status, genetic characteristics and oral health levels in our population. This study represents the initial phase in the creation of a sialochemistry database, whose goal is to identify parameters that indicate the risk of systemic or oral disease.

  11. Theoretical analysis of the distribution of isolated particles in totally asymmetric exclusion processes: Application to mRNA translation rate estimation

    Science.gov (United States)

    Dao Duc, Khanh; Saleem, Zain H.; Song, Yun S.

    2018-01-01

The Totally Asymmetric Exclusion Process (TASEP) is a classical stochastic model for describing the transport of interacting particles, such as ribosomes moving along the messenger ribonucleic acid (mRNA) during translation. Although this model has been widely studied in the past, the extent of collision between particles and the average distance between a particle and its nearest neighbor have not been quantified explicitly. We provide here a theoretical analysis of such quantities via the distribution of isolated particles. In the classical form of the model, in which each particle occupies only a single site, we obtain an exact analytic solution using the matrix ansatz. We then employ a refined mean-field approach to extend the analysis to a generalized TASEP with particles of an arbitrary size. Our theoretical study has direct applications in mRNA translation and the interpretation of experimental ribosome profiling data. In particular, our analysis of data from Saccharomyces cerevisiae suggests a potential bias against the detection of nearby ribosomes with a gap distance of less than approximately three codons, which leads to some ambiguity in estimating the initiation rate and protein production flux for a substantial fraction of genes. Despite such ambiguity, however, we demonstrate theoretically that the interference rate associated with collisions can be robustly estimated, and we show that approximately 1% of translating ribosomes get obstructed.
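The open-boundary TASEP described above is also easy to study by direct simulation. The following is a minimal sketch, not the paper's analytic method: it runs a random-sequential-update TASEP with entry rate alpha and exit rate beta, and tracks the fraction of particles that are isolated (no occupied nearest-neighbor site). The lattice size, rates, and sampling schedule are illustrative assumptions.

```python
import random

def tasep_isolated_fraction(n_sites=100, alpha=0.3, beta=0.3,
                            n_steps=100_000, seed=0):
    """Open-boundary TASEP with random-sequential updates.

    Returns (bulk density, fraction of particles that are isolated,
    i.e. have no occupied nearest-neighbor site). Lattice size, rates,
    and sampling schedule are illustrative choices, not the paper's.
    """
    rng = random.Random(seed)
    lattice = [0] * n_sites
    occupied = isolated = samples = 0
    for step in range(n_steps):
        i = rng.randrange(n_sites + 1)          # pick a bond or a boundary
        if i == 0:                              # injection on the left
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1
        elif i == n_sites:                      # extraction on the right
            if lattice[-1] == 1 and rng.random() < beta:
                lattice[-1] = 0
        elif lattice[i - 1] == 1 and lattice[i] == 0:
            lattice[i - 1], lattice[i] = 0, 1   # hop one site to the right
        if step >= n_steps // 2 and step % 100 == 0:  # sample after burn-in
            samples += 1
            occupied += sum(lattice)
            isolated += sum(
                1 for j in range(n_sites)
                if lattice[j] == 1
                and (j == 0 or lattice[j - 1] == 0)
                and (j == n_sites - 1 or lattice[j + 1] == 0)
            )
    density = occupied / (samples * n_sites)
    isolated_fraction = isolated / max(occupied, 1)
    return density, isolated_fraction
```

In the low-density phase (alpha = beta = 0.3 here) the bulk density sits near alpha and most particles are isolated; raising alpha packs particles closer together and lowers the isolated fraction, which is the collision regime the paper quantifies analytically.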

  12. Effects of breathing frequency and flow rate on the total inward leakage of an elastomeric half-mask donned on an advanced manikin headform.

    Science.gov (United States)

    He, Xinjian; Grinshpun, Sergey A; Reponen, Tiina; McKay, Roy; Bergman, Michael S; Zhuang, Ziqing

    2014-03-01

The objective of this study was to investigate the effects of breathing frequency and flow rate on the total inward leakage (TIL) of an elastomeric half-mask donned on an advanced manikin headform and challenged with combustion aerosols. An elastomeric half-mask respirator equipped with P100 filters was donned on an advanced manikin headform covered with life-like soft skin and challenged with aerosols originated by burning three materials: wood, paper, and plastic (polyethylene). TIL was determined as the ratio of aerosol concentrations inside (C in) and outside (C out) of the respirator (C in/C out) measured with a nanoparticle spectrometer operating in the particle size range of 20-200 nm. The testing was performed under three cyclic breathing flows [mean inspiratory flow (MIF) of 30, 55, and 85 l/min] and five breathing frequencies (10, 15, 20, 25, and 30 breaths/min). A completely randomized factorial study design was chosen with four replicates for each combination of breathing flow rate and frequency. Particle size, MIF, and combustion material had significant (P < 0.05) effects on TIL; the plastic aerosol produced higher mean TIL values than wood and paper aerosols. The effect of the breathing frequency was complex. When analyzed using all combustion aerosols and MIFs (pooled data), breathing frequency did not significantly (P = 0.08) affect TIL. However, once the data were stratified according to combustion aerosol and MIF, the effect of breathing frequency became significant (P < 0.05) for the plastic combustion aerosol. The effect of breathing frequency on TIL is less significant than the effects of combustion aerosol and breathing flow rate for the tested elastomeric half-mask respirator. The greatest TIL occurred when challenged with plastic aerosol at 30 l/min and at a breathing frequency of 30 breaths/min.
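The TIL metric above is simply the inside/outside concentration ratio, evaluated per particle-size channel and for the summed spectrum. A minimal sketch of that calculation; the four-channel concentration values below are made up for illustration, not data from the study:

```python
def total_inward_leakage(c_in, c_out):
    """Size-resolved TIL (C_in / C_out per particle-size channel) and an
    overall TIL computed from the summed concentrations across channels."""
    if len(c_in) != len(c_out):
        raise ValueError("spectra must have the same number of channels")
    per_channel = [ci / co for ci, co in zip(c_in, c_out)]
    overall = sum(c_in) / sum(c_out)
    return per_channel, overall

# Hypothetical number concentrations (particles/cm^3) in four size channels
# spanning 20-200 nm; these values are illustrative only:
inside = [12.0, 8.0, 5.0, 2.0]
outside = [1200.0, 1000.0, 800.0, 400.0]
per_channel, overall = total_inward_leakage(inside, outside)
```

Reporting both forms matters because leakage is typically size-dependent: a single pooled ratio can hide a channel where penetration is much higher.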

  13. High-dose total-body irradiation and autologous marrow reconstitution in dogs: dose-rate-related acute toxicity and fractionation-dependent long-term survival

    International Nuclear Information System (INIS)

    Deeg, H.J.; Storb, R.; Weiden, P.L.; Schumacher, D.; Shulman, H.; Graham, T.; Thomas, E.D.

    1981-01-01

Beagle dogs treated by total-body irradiation (TBI) were given autologous marrow grafts in order to avoid death from marrow toxicity. Acute and delayed non-marrow toxicities of high single-dose (27 dogs) and fractionated TBI (20 dogs) delivered at 0.05 or 0.1 Gy/min were compared. Fractionated TBI was given in increments of 2 Gy every 6 hr for three increments per day. Acute toxicity and early mortality (<1 month) at identical total irradiation doses were comparable for dogs given fractionated or single-dose TBI. With single-dose TBI, 14, 16, and 18 Gy, respectively, given at 0.05 Gy/min, 0/5, 5/5, and 2/2 dogs died from acute toxicity; with 10, 12, and 14 Gy, respectively, given at 0.1 Gy/min, 1/5, 4/5, and 5/5 dogs died acutely. With fractionated TBI, 14 and 16 Gy, respectively, given at 0.1 Gy/min, 1/5, 4/5, and 2/2 dogs died acutely. Early deaths were due to radiation enteritis with or without associated septicemia (29 dogs; ≤ Day 10). Three dogs given 10 Gy of TBI at 0.1 Gy/min died from bacterial pneumonia; one (Day 18) had been given fractionated and two (Days 14, 22) single-dose TBI. Fifteen dogs survived beyond 1 month; eight of these had single-dose TBI (10-14 Gy) and all died within 7 months of irradiation from a syndrome consisting of hepatic damage, pancreatic fibrosis, malnutrition, wasting, and anemia. Seven of the 15 had fractionated TBI, and only one (14 Gy) died on Day 33 from hepatic failure, whereas 6 (10-14 Gy) are alive and well 250 to 500 days after irradiation. In conclusion, fractionated TBI did not offer advantages over single-dose TBI with regard to acute toxicity and early mortality; rather, these were dependent upon the total dose of TBI. The total acutely tolerated dose was dependent upon the exposure rate; however, only dogs given fractionated TBI became healthy long-term survivors.

  14. IMRT QA: Selecting gamma criteria based on error detection sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Steers, Jennifer M. [Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, California 90048 and Physics and Biology in Medicine IDP, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095 (United States); Fraass, Benedick A., E-mail: benedick.fraass@cshs.org [Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, California 90048 (United States)

    2016-04-15

Purpose: The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors, which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. Methods: A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20,000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. Results: This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm with a 10% dose threshold at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose
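To make the criteria being tuned here concrete, a toy one-dimensional global gamma evaluation might look as follows. This is a simplifying sketch, not the ArcCHECK analysis used in the study: the profiles are hypothetical, the %Diff criterion is normalized globally to the reference maximum, and the DTA search is a brute-force scan over measured points.

```python
import math

def gamma_passing_rate(ref, meas, spacing_mm, dose_pct=3.0, dta_mm=3.0,
                       threshold_pct=10.0):
    """Toy 1-D global gamma analysis.

    For each reference point above the low-dose threshold, take the
    minimum over measured points of
        sqrt((dose diff / dose criterion)^2 + (distance / DTA)^2)
    and report the fraction of evaluated points with gamma <= 1.
    """
    d_max = max(ref)
    dose_crit = dose_pct / 100.0 * d_max        # global %Diff normalization
    threshold = threshold_pct / 100.0 * d_max   # skip low-dose points
    passed = evaluated = 0
    for i, d_ref in enumerate(ref):
        if d_ref < threshold:
            continue
        evaluated += 1
        gamma = min(
            math.sqrt(((d_m - d_ref) / dose_crit) ** 2
                      + ((j - i) * spacing_mm / dta_mm) ** 2)
            for j, d_m in enumerate(meas)
        )
        passed += gamma <= 1.0
    return passed / evaluated if evaluated else float("nan")
```

Sweeping an induced error magnitude (e.g. scaling `meas` to mimic an MU error) and plotting this passing rate against the error size reproduces, in miniature, the error-curve idea described in the abstract.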

  15. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals affiliated with Hamadan University of Medical Sciences in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), in those aged 40-50 years (67.6%), in less-experienced personnel (58.7%), in those with an educational level of MSc (87.5%), and in staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  16. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  17. Trends in primary total hip arthroplasty in Spain from 2001 to 2008: Evaluating changes in demographics, comorbidity, incidence rates, length of stay, costs and mortality

    Directory of Open Access Journals (Sweden)

    Jimenez-Trujillo Isabel

    2011-02-01

Full Text Available Abstract Background Hip arthroplasty is one of the most frequent surgical procedures in Spain and is conducted mainly in elderly subjects. We aim to analyze changes in incidence, co-morbidity profile, length of hospital stay (LOHS), costs and in-hospital mortality (IHM) of patients undergoing primary total hip arthroplasty (THA) over an 8-year study period in Spain. Methods We selected all surgical admissions of individuals aged ≥40 years who had received a primary THA (ICD-9-CM procedure code 81.51) between 2001 and 2008 from the National Hospital Discharge Database. Age- and sex-specific incidence rates, LOHS, costs and IHM were estimated for each year. Co-morbidity was assessed using the Charlson comorbidity index. Multivariate analysis of time trends was conducted using Poisson regression. Logistic regression models were conducted to analyze IHM. Results We identified a total of 161,791 discharges of patients having undergone THA from 2001 to 2008. Overall crude incidence had increased from 99 to 105 THA per 100,000 inhabitants from 2001 to 2008 (p < 0.05), and by 2008 the prevalence of a Charlson comorbidity index of 1-2 or >2 had increased to 20.4% and 1.1%, respectively (p < 0.05). Conclusions The current study provides clear and valid data indicating an increased incidence of primary THA in Spain from 2001 to 2008, with concomitant reductions in LOHS, a slight reduction in IHM, but a significant increase in cost per patient. The health profile of the patient undergoing a THA seems to be worsening in Spain.

  18. Combined intravenous, topical and oral tranexamic acid administration in total knee replacement: Evaluation of safety in patients with previous thromboembolism and effect on hemoglobin level and transfusion rate.

    Science.gov (United States)

    Jansen, Joris A; Lameijer, Joost R C; Snoeker, Barbara A M

    2017-10-01

The aims of this study were to investigate the safety of combined intravenous, oral and topical tranexamic acid (TXA) in primary total knee replacement. We assessed dose-related efficacy on hemoglobin level, transfusion, length of stay and thromboembolic complications. In addition, TXA safety in patients with a previous history of thromboembolism >12 months ago was monitored specifically. From January 2013 until January 2016, 922 patients were included who received TXA after primary total knee replacement. Patients without TXA administration or with thromboembolic events <12 months before surgery were excluded, and patients were stratified into three TXA dosage groups: up to 10 mg/kg, >10-25 mg/kg and >25-50 mg/kg. Between the three TXA groups no significant difference was found in thromboembolic complications (deep venous thrombosis (DVT) and pulmonary embolism (PE)), wound leakage or transfusion rate. For patients with DVT or PE in their history >12 months ago specifically, no more complications were noted in the higher-TXA-dosage groups than in the low-dosage group. Length of stay was shorter in the highest-TXA-dosage group than in the lower-dosage groups (median two vs. three days). With a high TXA dose, a smaller difference between pre- and postoperative Hb was found: the >25-50 mg/kg TXA group had a 0.419 mmol/l smaller decrease in postoperative hemoglobin than the lowest-dosage group (P < 0.05). Combined intravenous, oral and topical TXA is effective in knee replacement and can safely be given to patients with a thromboembolic history >12 months ago. High-dosage (>25-50 mg/kg) TXA resulted in the smallest decrease in postoperative hemoglobin. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system.

    Science.gov (United States)

    Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O

    2015-02-01

To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12,567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches, including data mining of electronic clinical information systems, are required to support more effective medication error detection and mitigation.

  20. Total body irradiation (TBI) in pediatric patients. A single-center experience after 30 years of low-dose rate irradiation

    Energy Technology Data Exchange (ETDEWEB)

Linsenmeier, Claudia; Thoennessen, Daniel; Negretti, Laura; Streller, Tino; Luetolf, Urs Martin [University Hospital Zurich (Switzerland). Dept. of Radiation-Oncology; Bourquin, Jean-Pierre [University Children's Hospital Zurich (Switzerland). Dept. of Hemato-Oncology; Oertel, Susanne [University Hospital Zurich (Switzerland). Dept. of Radiation-Oncology; Heidelberg Univ. (Germany). Dept. of Radiation Oncology

    2010-11-15

To retrospectively analyze patient characteristics, treatment, and treatment outcome of pediatric patients with hematologic diseases treated with total body irradiation (TBI) between 1978 and 2006. 32 pediatric patients were referred to the Department of Radiation-Oncology at the University of Zurich for TBI. Records of regular follow-up of 28 patients were available for review. Patient characteristics as well as treatment outcome regarding local control and overall survival were assessed. A total of 18 patients suffered from acute lymphoblastic leukemia (ALL), 5 from acute and 2 from chronic myelogenous leukemia, 1 from non-Hodgkin lymphoma, and 2 from aplastic anemia. The cohort consisted of 15 patients referred after first remission and 13 patients with relapsed leukemia. Mean follow-up was 34 months (2-196 months) with 15 patients alive at the time of last follow-up. Eight patients died of recurrent disease, 1 of graft vs. host reaction, 2 of sepsis, and 2 patients died of a secondary malignancy. The 5-year overall survival rate (OS) was 60%. Overall survival was significantly inferior in patients treated after relapse compared to those treated for newly diagnosed leukemia (24% versus 74%; p=0.004). At the time of last follow-up, 11 patients survived for more than 36 months following TBI. Late effects (RTOG ≥3) were pneumonitis in 1 patient, chronic bronchitis in 1 patient, cardiomyopathy in 2 patients, severe cataractogenesis in 1 patient (48 months after TBI with 10 Gy in a single dose) and secondary malignancies in 2 patients (36 and 190 months after TBI). Growth disturbances were observed in all patients treated prepubertally. In 2 patients with identical twins treated at ages 2 and 7, a loss of 8% in final height of the treated twin was observed. As severe late sequelae after TBI, we observed 2 secondary malignancies in 11 patients who survived in excess of 36 months.
However, long-term morbidity is moderate following treatment with the fractionated

  1. The contribution of large trees to total transpiration rates in a pre-montane tropical forest and its implications for selective logging practices

    Science.gov (United States)

    Orozco, G.; Moore, G. W.; Miller, G. R.

    2012-12-01

In the humid tropics, conservationists generally prefer selective logging practices over clearcutting. Large valuable timber is removed while the remaining forest is left relatively undisturbed. However, little is known about the impact of selective logging on site water balance. Because large trees have very deep sapwood and exposed canopies, they tend to have high transpiration. The first objective was to evaluate the methods used for scaling sap flow measurements to the watershed with particular emphasis on large trees. The second objective of this study was to determine the relative contribution of large trees to site water balance. Our study was conducted in a pre-montane transitional forest at the Texas A&M University Soltis Center in north-central Costa Rica. During the period between January and July 2012, sap flux was monitored in a 30-m diameter plot within a 10-ha watershed. Two pairs of heat dissipation sensors were installed in the outer 0-20 mm of each of 15 trees selected to represent the full range of tree sizes. In six of the largest trees, depth profiles were recorded at 10-mm intervals to a depth of 60 mm using compensation heat pulse sensors. To estimate sapwood basal area of the entire watershed, a stand survey was conducted in three 30-m-diameter plots. In each plot, we measured basal area of all trees and estimated sapwood basal area from sapwood depth measured in nearly half of the trees. An estimated 36.5% of the total sapwood area in this watershed comes from the outer 20 mm of sapwood, with the remaining 63.5% of sapwood from depths deeper than 20 mm. Nearly 13% of sapwood is from depths beyond 60 mm. Sap velocity profiles indicate the highest flow rates occurred in the 0-20 mm depths, with declines of 17% and 25% in the 20-40 mm and 40-60 mm ranges, respectively. Our results demonstrate the need to measure sap velocity profiles in large tropical trees. If total transpiration had been estimated solely from the 0-20 mm heat dissipation

  2. Naming game with learning errors in communications

    OpenAIRE

    Lou, Yang; Chen, Guanrong

    2014-01-01

The naming game simulates the process of naming an object by a population of agents organized in a certain communication network topology. Through pair-wise iterative interactions, the population reaches a consensus state asymptotically. In this paper, we study the naming game with communication errors during pair-wise conversations, where errors are represented by error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed....

  3. Error field considerations for BPX

    International Nuclear Information System (INIS)

    LaHaye, R.J.

    1992-01-01

Irregularities in the position of poloidal and/or toroidal field coils in tokamaks produce resonant toroidal asymmetries in the vacuum magnetic fields. Otherwise stable tokamak discharges become non-linearly unstable to disruptive locked modes when subjected to low-level error fields. Because of the field errors, magnetic islands are produced which would not otherwise occur in tearing-mode-stable configurations; a concomitant reduction of the total confinement can result. Poloidal and toroidal asymmetries arise in the heat flux to the divertor target. In this paper, the field errors from perturbed BPX coils are used in a field line tracing code of the BPX equilibrium to study these deleterious effects. Limits on coil irregularities for device design and fabrication are computed along with possible correcting coils for reducing such field errors.

  4. The uncorrected refractive error challenge

    Directory of Open Access Journals (Sweden)

    Kovin Naidoo

    2016-11-01

Full Text Available Refractive error affects people of all ages, socio-economic statuses and ethnic groups. The most recent statistics estimate that, worldwide, 32.4 million people are blind and 191 million people have vision impairment. Vision impairment has been defined based on distance visual acuity only, and uncorrected distance refractive error (mainly myopia) is the single biggest cause of worldwide vision impairment. However, when we also consider near visual impairment, it is clear that even more people are affected. Research estimated that the number of people with vision impairment due to uncorrected distance refractive error was 107.8 million,¹ and the number of people affected by uncorrected near refractive error was 517 million, giving a total of 624.8 million people.

  5. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
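To make the probability-circle idea above concrete: for independent zero-mean Gaussian errors in x and y, the circular error probable (CEP) is the radius of the origin-centered circle containing 50% of positions. A Monte Carlo sketch (the sample size and seed are arbitrary choices; for the equal-sigma case the standard closed form CEP = σ√(2 ln 2) ≈ 1.1774σ serves as a check):

```python
import math
import random

def estimate_cep(sigma_x, sigma_y, n=100_000, seed=1):
    """Monte Carlo estimate of the circular error probable (CEP): the
    radius of the origin-centered circle that contains 50% of
    independent, zero-mean Gaussian position errors."""
    rng = random.Random(seed)
    radii = sorted(
        math.hypot(rng.gauss(0.0, sigma_x), rng.gauss(0.0, sigma_y))
        for _ in range(n)
    )
    return radii[n // 2]  # sample median of the radial error
```

The same sorted-radii array also yields any other probability circle (e.g. the 95% radius at index `int(0.95 * n)`), which is how the elliptical case is usually handled when no closed form is convenient.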

  6. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
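    The materials-balance case above can be illustrated with the standard propagation rule: for a balance formed as sums and differences of independent measured terms, the variances add. The balance terms and uncertainties below are hypothetical:

```python
import math

def propagated_sigma(sigmas):
    """Standard deviation of a sum/difference of independent measured
    values: the variances add regardless of the sign of each term."""
    return math.sqrt(sum(s * s for s in sigmas))

# Hypothetical materials balance: beginning inventory + receipts
# - shipments - ending inventory, with one-sigma uncertainties in kg.
print(round(propagated_sigma([0.3, 0.4, 0.4, 0.3]), 4))  # 0.7071
```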

  7. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  8. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...
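    The additive combination described above, a random contribution taken from a confidence interval plus a worst-case bound on the unknown systematic error, can be sketched as follows. The readings, the Student-t factor (2.776 for n = 5 at 95% confidence), and the systematic bound are invented for illustration:

```python
import math
import statistics

def overall_uncertainty(readings, t_factor, sys_bound):
    """Overall uncertainty as the arithmetic sum of a random part
    (Student-t confidence half-width of the mean) and a worst-case
    bound on the unknown systematic error."""
    n = len(readings)
    s = statistics.stdev(readings)          # empirical standard deviation
    return t_factor * s / math.sqrt(n) + sys_bound

# Invented readings; t = 2.776 for n = 5 at 95% confidence.
print(round(overall_uncertainty([9.9, 10.1, 10.0, 10.2, 9.8], 2.776, 0.05), 4))  # 0.2463
```

The arithmetic (rather than quadratic) addition of the systematic bound is what distinguishes this approach from the usual root-sum-square combination.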

  9. [Medical errors: inevitable but preventable].

    Science.gov (United States)

    Giard, R W

    2001-10-27

    Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention to and improvement of the organisational aspects of error are far more important than litigating the person. To err is and will remain human and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.

  10. PERM Error Rate Findings and Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — Federal agencies are required to annually review programs they administer and identify those that may be susceptible to significant improper payments, to estimate...

  11. Medicare FFS Jurisdiction Error Rate Contribution Data

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services CMS is dedicated to continually strengthening and improving the Medicare program, which provides vital services to...

  12. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  13. Dual processing and diagnostic errors.

    Science.gov (United States)

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reduction in error rates.

  14. The Human Bathtub: Safety and Risk Predictions Including the Dynamic Probability of Operator Errors

    International Nuclear Information System (INIS)

    Duffey, Romney B.; Saull, John W.

    2006-01-01

    Reactor safety and risk are dominated by the potential for, and major contribution of, human error in the design, operation, control, management, regulation and maintenance of the plant, and hence in all accidents. Given the possibility of accidents and errors, we now need to determine the outcome (error) probability, or the chance of failure. Conventionally, reliability engineering is associated with the failure rate of components, or systems, or mechanisms, not of the human beings working in and interacting with a technological system. The probability of failure requires a prior knowledge of the total number of outcomes, which for any predictive purposes we do not know or have. Analysis of failure rates due to human error and the rate of learning allow a new determination of the dynamic human error rate in technological systems, consistent with and derived from the available world data. The basis for the analysis is the 'learning hypothesis' that humans learn from experience, and consequently the accumulated experience defines the failure rate. A new 'best' equation has been derived for the human error, outcome or failure rate, which allows for calculation and prediction of the probability of human error. We also provide comparisons to the empirical Weibull parameter fitting used in and by conventional reliability engineering and probabilistic safety analysis methods. These new analyses show that arbitrary Weibull fitting parameters and typical empirical hazard function techniques cannot be used to predict the dynamics of human errors and outcomes in the presence of learning. Comparisons of these new insights show agreement with human error data from the world's commercial airlines, the two shuttle failures, and from nuclear plant operator actions and transient control behavior observed in transients in both plants and simulators. The results demonstrate that the human error probability (HEP) is dynamic, and that it may be predicted using the learning hypothesis and the minimum
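    The learning hypothesis described above implies an error rate that decays with accumulated experience toward an attainable minimum. The exponential form and the constants below are an illustrative assumption in that spirit, not the authors' exact equation:

```python
import math

def human_error_rate(experience, lam0, lam_min, k):
    """Assumed exponential learning curve: the error/outcome rate decays
    from an initial value lam0 toward an irreducible minimum lam_min as
    accumulated experience grows (all constants are illustrative)."""
    return lam_min + (lam0 - lam_min) * math.exp(-k * experience)

# With no experience the rate equals lam0; with large accumulated
# experience it approaches the minimum attainable rate lam_min.
print(round(human_error_rate(0.0, 1e-2, 1e-5, 3.0), 6))  # 0.01
```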

  15. Modelling strategic interventions in a population with a total fertility rate of 8.3: a cross-sectional study of Idjwi Island, DRC

    Directory of Open Access Journals (Sweden)

    Thomson Dana R

    2012-11-01

    Full Text Available Abstract Background Idjwi, an island of approximately 220,000 people, is located in eastern DRC and functions semi-autonomously under the governance of two kings (mwamis). At more than 8 live births per woman, Idjwi has one of the highest total fertility rates (TFRs) in the world. Rapid population growth has led to widespread environmental degradation and food insecurity. Meanwhile family planning services are largely unavailable. Methods At the invitation of local leaders, we conducted a representative survey of 2,078 households in accordance with MEASURE DHS protocols, and performed ethnographic interviews and focus groups with key informants and vulnerable subpopulations. Modelling proximate determinants of fertility, we evaluated how the introduction of contraceptives and/or extended periods of breastfeeding could reduce the TFR. Results Over half of all women reported an unmet need for spacing or limiting births, and nearly 70% named a specific modern method of contraception they would prefer to use; pills (25.4%) and injectables (26.5%) were most desired. We predicted that an increased length of breastfeeding (from 10 to 21 months) or an increase in contraceptive prevalence (from 1% to 30%), or a combination of both, could reduce TFR on Idjwi to 6, the average desired number of children. Increasing contraceptive prevalence to 15% could reduce unmet need for contraception by 8%. Conclusions To meet women’s need and desire for fertility control, we recommend adding family planning services at health centers with NGO support, pursuing a community health worker program, promoting extended breastfeeding, and implementing programs to end sexual- and gender-based violence toward women.
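    The proximate-determinants modelling described above is presumably in the spirit of Bongaarts' framework, in which observed fertility is total fecundity scaled by indices for marriage, contraception, abortion and postpartum infecundability. The sketch below uses the standard textbook coefficients, not necessarily the authors' fitted values:

```python
def index_contraception(u, e=0.9):
    """Bongaarts index Cc = 1 - 1.08 * prevalence * mean effectiveness."""
    return 1.0 - 1.08 * u * e

def index_infecundability(i_months):
    """Bongaarts index Ci = 20 / (18.5 + i), where i is the duration of
    postpartum infecundability in months (lengthened by breastfeeding)."""
    return 20.0 / (18.5 + i_months)

# Raising contraceptive prevalence from 1% to 30% (all else equal)
# scales an observed TFR of 8.3 down to roughly the authors' target of 6:
tfr_new = 8.3 * index_contraception(0.30) / index_contraception(0.01)
print(round(tfr_new, 2))  # 5.94
```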

  16. Patient-Reported Outcomes, Quality of Life, and Satisfaction Rates in Young Patients Aged 50 Years or Younger After Total Knee Arthroplasty.

    Science.gov (United States)

    Goh, Graham Seow-Hng; Liow, Ming Han Lincoln; Bin Abd Razak, Hamid Rahmatullah; Tay, Darren Keng-Jin; Lo, Ngai-Nung; Yeo, Seng-Jin

    2017-02-01

    Recent studies have shown a discrepancy between traditional functional outcomes and patient satisfaction, with some reporting less than 85% satisfaction in older patients undergoing total knee arthroplasty (TKA). As native knee biomechanics are not completely replicated, the resulting functional limitations may cause dissatisfaction in higher-demand individuals. Few studies have recorded patient-reported outcomes, health-related quality of life scores, and patient satisfaction in a young population undergoing TKA. One hundred thirty-six primary TKAs were performed in 114 patients aged 50 years or younger (mean age, 47.0 years; range, 30-50 years) at a single institution. The main diagnoses were osteoarthritis (85%) and rheumatoid arthritis (10%). The range of motion, Knee Society Score, Oxford Knee Score, and Physical and Mental Component Scores of Short Form-36 increased significantly (P patients had good/excellent knee scores, 71.3% had good/excellent function scores, 94.9% met the minimal clinically important difference for the Oxford Knee Score, and 84.6% met the minimal clinically important difference for the Physical Component Score. We found that 88.8% of patients were satisfied with their surgeries, whereas 86.8% had their expectations fulfilled. Survivorship using revision as an end point was 97.8% at a mean of 7 years (range, 3-16 years). Patients aged 50 years or younger undergoing TKA can experience significant improvements in their quality of life, have their expectations met, and be satisfied with their surgeries, at rates similar to those of non-age-restricted populations. Surgeons should inform them of these benefits and the potential risk of revision surgery in the future, albeit increasingly shown to be low. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Fitness-related differences in the rate of whole-body total heat loss in exercising young healthy women are heat-load dependent.

    Science.gov (United States)

    Lamarche, Dallon T; Notley, Sean R; Poirier, Martin P; Kenny, Glen P

    2018-03-01

    What is the central question of this study? Aerobic fitness modulates heat loss, although the heat load at which fitness-related differences occur in young healthy women remains unclear. What is the main finding and its importance? We demonstrate using direct calorimetry that fitness modulates heat loss in a heat-load dependent manner, with differences occurring between young women of low and high fitness and matched physical characteristics when the metabolic heat load is at least 400 W in hot, dry conditions. Although fitness has been known for some time to modulate heat loss, our findings define the metabolic heat load at which fitness-related differences occur. Aerobic fitness has recently been shown to alter heat loss capacity in a heat-load dependent manner in young men. However, given that sex-related differences in heat loss capacity exist, it is unclear whether this response is consistent in women. We therefore assessed whole-body total heat loss in young (21 ± 3 years old) healthy women matched for physical characteristics, but with low (low-fit; 35.8 ± 4.5 ml O₂ kg⁻¹ min⁻¹) or high aerobic fitness (high-fit; 53.1 ± 5.1 ml O₂ kg⁻¹ min⁻¹; both n = 8; indexed by peak oxygen consumption), during three 30 min bouts of cycling performed at increasing rates of metabolic heat production of 250 (Ex1), 325 (Ex2) and 400 W (Ex3), each separated by a 15 min recovery, in hot, dry conditions (40°C, 11% relative humidity). Whole-body total heat loss (evaporative ± dry heat exchange) and metabolic heat production were measured using direct and indirect calorimetry, respectively. Body heat content was measured as the temporal summation of heat production and loss. Total heat loss did not differ during Ex1 (low-fit, 215 ± 16 W; high-fit, 231 ± 20 W; P > 0.05) and Ex2 (low-fit, 278 ± 15 W; high-fit, 301 ± 20 W; P > 0.05), but was lower in the low-fit (316 ± 21 W) compared with the high-fit women (359 ± 32
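    The "temporal summation" used above for body heat content amounts to integrating the difference between heat production and heat loss over time. A sketch with invented, constant rates:

```python
def body_heat_storage_kj(production_w, loss_w, dt_s):
    """Change in body heat content (kJ) as the temporal summation of
    metabolic heat production minus total heat loss (both in watts),
    sampled at fixed intervals of dt_s seconds."""
    return sum((p - l) * dt_s for p, l in zip(production_w, loss_w)) / 1000.0

# Hypothetical 30 min bout sampled each minute: a constant 400 W produced
# and 330 W lost means 70 W is stored, so over 1800 s the body gains 126 kJ.
print(body_heat_storage_kj([400.0] * 30, [330.0] * 30, 60.0))  # 126.0
```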

  18. Comparison between calorimeter and HLNC errors

    International Nuclear Information System (INIS)

    Goldman, A.S.; De Ridder, P.; Laszlo, G.

    1991-01-01

    This paper summarizes an error analysis that compares systematic and random errors of total plutonium mass estimated for high-level neutron coincidence counter (HLNC) and calorimeter measurements. This task was part of an International Atomic Energy Agency (IAEA) study on the comparison of the two instruments to determine if HLNC measurement errors met IAEA standards and if the calorimeter gave ''significantly'' better precision. Our analysis was based on propagation of error models that contained all known sources of errors including uncertainties associated with plutonium isotopic measurements. 5 refs., 2 tabs

  19. Impact of Measurement Error on Synchrophasor Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  20. Characteristics of medication errors with parenteral cytotoxic drugs

    OpenAIRE

    Fyhr, A; Akselsson, R

    2012-01-01

    Errors involving cytotoxic drugs have the potential of being fatal and should therefore be prevented. The objective of this article is to identify the characteristics of medication errors involving parenteral cytotoxic drugs in Sweden. A total of 60 cases reported to the national error reporting systems from 1996 to 2008 were reviewed. Classification was made to identify cytotoxic drugs involved, type of error, where the error occurred, error detection mechanism, and consequences for the pati...

  1. Total protein

    Science.gov (United States)

    The total protein test measures the total amount of two classes ...

  2. Medication Errors in an Internal Intensive Care Unit of a Large Teaching Hospital: A Direct Observation Study

    Directory of Open Access Journals (Sweden)

    Saadat Delfani

    2012-06-01

    Full Text Available Medication errors account for about 78% of serious medical errors in the intensive care unit (ICU). So far no study has been performed in Iran to evaluate all types of possible medication errors in the ICU. Therefore the objective of this study was to reveal the frequency, type and consequences of all types of errors in an ICU of a large teaching hospital. The prospective observational study was conducted in an 11 bed internal ICU of a university hospital in Shiraz. In each shift all processes that were performed on one selected patient were observed and recorded by a trained pharmacist. The observer would intervene only if a medication error would cause substantial harm. The data were evaluated and then entered in a form designed for this purpose. The study continued for 38 shifts. During this period, a total of 442 errors per 5785 opportunities for errors (7.6%) occurred. Of those, there were 9.8% administration errors, 6.8% prescribing errors, 3.3% transcription errors and 2.3% dispensing errors. Totally 45 interventions were made, and 40% of interventions resulted in the correction of errors. The most common causes of errors were observed to be: rule violations, slips and memory lapses, and lack of drug knowledge. According to our results, the rate of errors is alarming and requires implementation of a serious solution. Since our system lacks a well-organized detection and reporting mechanism, there is no means for preventing errors in the first place. Hence, as the first step we must implement a system where errors are routinely detected and reported.

  3. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  4. Female married illiteracy as the most important continual determinant of total fertility rate among districts of Empowered Action Group States of India: Evidence from Annual Health Survey 2011–12

    OpenAIRE

    Rajesh Kumar; Vishal Dogra; Khushbu Rani; Kanti Sahu

    2017-01-01

    Background: District level determinants of total fertility rate in Empowered Action Group states of India can help in ongoing population stabilization programs in India. Objective: Present study intends to assess the role of district level determinants in predicting total fertility rate among districts of the Empowered Action Group states of India. Material and Methods: Data from Annual Health Survey (2011-12) was analysed using STATA and R software packages. Multiple linear regression models...

  5. Protocol for the PINCER trial: a cluster randomised trial comparing the effectiveness of a pharmacist-led IT-based intervention with simple feedback in reducing rates of clinically important errors in medicines management in general practices

    Directory of Open Access Journals (Sweden)

    Murray Scott A

    2009-05-01

    ineffective. Sample size: 34 practices in each of the two treatment arms would provide at least 80% power (two-tailed alpha of 0.05) to demonstrate a 50% reduction in error rates for each of the three primary outcome measures in the pharmacist-led intervention arm compared with a 11% reduction in the simple feedback arm. Discussion At the time of submission of this article, 72 general practices have been recruited (36 in each arm of the trial) and the interventions have been delivered. Analysis has not yet been undertaken. Trial registration Current controlled trials ISRCTN21785299
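    The power calculation quoted above is for a cluster randomised design; a much simpler sketch of the underlying two-proportion power calculation (normal approximation, deliberately ignoring clustering and the design effect, with invented baseline rates) is:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_proportions(p1, p2, n_per_arm, z_alpha=1.96):
    """Approximate power to detect a difference between two proportions
    with n subjects per arm (two-tailed test, normal approximation)."""
    se = math.sqrt(p1 * (1.0 - p1) / n_per_arm + p2 * (1.0 - p2) / n_per_arm)
    return normal_cdf(abs(p1 - p2) / se - z_alpha)

# Invented rates: a 5% baseline error rate halved to 2.5%.
print(round(power_two_proportions(0.05, 0.025, 1000), 2))  # 0.84
```

In a real cluster randomised trial the effective sample size per arm would be deflated by the design effect 1 + (m - 1) * ICC before applying a calculation like this.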

  6. WE-H-BRC-09: Simulated Errors in Mock Radiotherapy Plans to Quantify the Effectiveness of the Physics Plan Review

    International Nuclear Information System (INIS)

    Gopan, O; Kalet, A; Smith, W; Hendrickson, K; Kim, M; Young, L; Nyflot, M; Chvetsov, A; Phillips, M; Ford, E

    2016-01-01

    Purpose: A standard tool for ensuring the quality of radiation therapy treatments is the initial physics plan review. However, little is known about its performance in practice. The goal of this study is to measure the effectiveness of physics plan review by introducing simulated errors into “mock” treatment plans and measuring the performance of plan review by physicists. Methods: We generated six mock treatment plans containing multiple errors. These errors were based on incident learning system data both within the department and internationally (SAFRON). These errors were scored for severity and frequency. Those with the highest scores were included in the simulations (13 errors total). Observer bias was minimized using a multiple co-correlated distractor approach. Eight physicists reviewed these plans for errors, with each physicist reviewing, on average, 3/6 plans. The confidence interval for the proportion of errors detected was computed using the Wilson score interval. Results: Simulated errors were detected in 65% of reviews [51–75%] (95% confidence interval [CI] in brackets). The following error scenarios had the highest detection rates: incorrect isocenter in DRRs/CBCT (91% [73–98%]) and a planned dose different from the prescribed dose (100% [61–100%]). Errors with low detection rates involved incorrect field parameters in record and verify system (38% [18–61%]) and incorrect isocenter localization in planning system (29% [8–64%]). Though pre-treatment QA failure was reliably identified (100%), less than 20% of participants reported the error that caused the failure. Conclusion: This is one of the first quantitative studies of error detection. Although physics plan review is a key safety measure and can identify some errors with high fidelity, other errors are more challenging to detect. This data will guide future work on standardization and automation. Creating new checks or improving existing ones (i.e., via automation) will help in
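    The Wilson score interval used above for the detection proportions can be computed directly; the counts in the example below (13 detections in 20 reviews) are hypothetical:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion; better behaved
    than the Wald interval for small samples or extreme proportions."""
    p = successes / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2.0 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n))
    return centre - half, centre + half

# e.g. 13 simulated errors detected in 20 reviews:
lo, hi = wilson_interval(13, 20)
print(round(lo, 3), round(hi, 3))  # 0.433 0.819
```

Unlike the Wald interval, the Wilson interval never extends below 0 or above 1, which matters for the near-0% and near-100% detection rates reported here.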

  7. WE-H-BRC-09: Simulated Errors in Mock Radiotherapy Plans to Quantify the Effectiveness of the Physics Plan Review

    Energy Technology Data Exchange (ETDEWEB)

    Gopan, O; Kalet, A; Smith, W; Hendrickson, K; Kim, M; Young, L; Nyflot, M; Chvetsov, A; Phillips, M; Ford, E [University of Washington, Seattle, WA (United States)

    2016-06-15

    Purpose: A standard tool for ensuring the quality of radiation therapy treatments is the initial physics plan review. However, little is known about its performance in practice. The goal of this study is to measure the effectiveness of physics plan review by introducing simulated errors into “mock” treatment plans and measuring the performance of plan review by physicists. Methods: We generated six mock treatment plans containing multiple errors. These errors were based on incident learning system data both within the department and internationally (SAFRON). These errors were scored for severity and frequency. Those with the highest scores were included in the simulations (13 errors total). Observer bias was minimized using a multiple co-correlated distractor approach. Eight physicists reviewed these plans for errors, with each physicist reviewing, on average, 3/6 plans. The confidence interval for the proportion of errors detected was computed using the Wilson score interval. Results: Simulated errors were detected in 65% of reviews [51–75%] (95% confidence interval [CI] in brackets). The following error scenarios had the highest detection rates: incorrect isocenter in DRRs/CBCT (91% [73–98%]) and a planned dose different from the prescribed dose (100% [61–100%]). Errors with low detection rates involved incorrect field parameters in record and verify system (38% [18–61%]) and incorrect isocenter localization in planning system (29% [8–64%]). Though pre-treatment QA failure was reliably identified (100%), less than 20% of participants reported the error that caused the failure. Conclusion: This is one of the first quantitative studies of error detection. Although physics plan review is a key safety measure and can identify some errors with high fidelity, other errors are more challenging to detect. This data will guide future work on standardization and automation. Creating new checks or improving existing ones (i.e., via automation) will help in

  8. Error analysis to improve the speech recognition accuracy on ...

    Indian Academy of Sciences (India)

    dictionary plays a key role in the speech recognition accuracy. .... Sophisticated microphone is used for the recording speech corpus in a noise free environment. .... values, word error rate (WER) and error-rate will be calculated as follows:.

  9. Female married illiteracy as the most important continual determinant of total fertility rate among districts of Empowered Action Group States of India: Evidence from Annual Health Survey 2011–12

    Directory of Open Access Journals (Sweden)

    Rajesh Kumar

    2017-01-01

    Full Text Available Background: District level determinants of total fertility rate in Empowered Action Group states of India can help in ongoing population stabilization programs in India. Objective: Present study intends to assess the role of district level determinants in predicting total fertility rate among districts of the Empowered Action Group states of India. Material and Methods: Data from Annual Health Survey (2011-12) was analysed using STATA and R software packages. Multiple linear regression models were built and evaluated using Akaike Information Criterion. For further understanding, recursive partitioning was used to prepare a regression tree. Results: Female married illiteracy was positively associated with total fertility rate and explained more than half (53%) of variance. Under multiple linear regression model, married illiteracy, infant mortality rate, Ante natal care registration, household size, median age of live birth and sex ratio explained 70% of total variance in total fertility rate. In regression tree, female married illiteracy was the root node and splits at 42% determined TFR = 2.7. The next left side branch was again married illiteracy with splits at 23% to determine TFR = 2.1. Conclusion: We conclude that female married illiteracy is one of the most important determinants explaining total fertility rate among the districts of the Empowered Action Group states. Focus on female literacy is required to stabilize population growth in the long run.

  10. Female married illiteracy as the most important continual determinant of total fertility rate among districts of Empowered Action Group States of India: Evidence from Annual Health Survey 2011-12.

    Science.gov (United States)

    Kumar, Rajesh; Dogra, Vishal; Rani, Khushbu; Sahu, Kanti

    2017-01-01

    District level determinants of total fertility rate in Empowered Action Group states of India can help in ongoing population stabilization programs in India. Present study intends to assess the role of district level determinants in predicting total fertility rate among districts of the Empowered Action Group states of India. Data from Annual Health Survey (2011-12) was analysed using STATA and R software packages. Multiple linear regression models were built and evaluated using Akaike Information Criterion. For further understanding, recursive partitioning was used to prepare a regression tree. Female married illiteracy was positively associated with total fertility rate and explained more than half (53%) of variance. Under multiple linear regression model, married illiteracy, infant mortality rate, Ante natal care registration, household size, median age of live birth and sex ratio explained 70% of total variance in total fertility rate. In regression tree, female married illiteracy was the root node and splits at 42% determined TFR = 2.7; the next left side branch was again married illiteracy with splits at 23% to determine TFR = 2.1. Female married illiteracy is one of the most important determinants explaining total fertility rate among the districts of the Empowered Action Group states. Focus on female literacy is required to stabilize population growth in the long run.

  11. Female married illiteracy as the most important continual determinant of total fertility rate among districts of Empowered Action Group States of India: Evidence from Annual Health Survey 2011–12

    Science.gov (United States)

    Kumar, Rajesh; Dogra, Vishal; Rani, Khushbu; Sahu, Kanti

    2017-01-01

Background: District level determinants of total fertility rate in the Empowered Action Group states of India can help ongoing population stabilization programs in India. Objective: The present study assesses the role of district level determinants in predicting total fertility rate among districts of the Empowered Action Group states of India. Material and Methods: Data from the Annual Health Survey (2011-12) were analysed using the STATA and R software packages. Multiple linear regression models were built and evaluated using the Akaike Information Criterion. For further understanding, recursive partitioning was used to prepare a regression tree. Results: Female married illiteracy was positively associated with total fertility rate and explained more than half (53%) of the variance. Under the multiple linear regression model, married illiteracy, infant mortality rate, antenatal care registration, household size, median age of live birth and sex ratio explained 70% of the total variance in total fertility rate. In the regression tree, female married illiteracy was the root node; a split at 42% determined TFR = 2.7, and the next left-side branch was again married illiteracy, with a split at 23% determining TFR = 2.1. Conclusion: We conclude that female married illiteracy is one of the most important determinants explaining total fertility rate among the districts of the Empowered Action Group states. Focus on female literacy is required to stabilize population growth in the long run. PMID:29416999

  12. Prescription Errors in Psychiatry

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

    clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. .... participation in ward rounds and adverse drug.

  13. Eliminating US hospital medical errors.

    Science.gov (United States)

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and to present a closed-loop, mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams, and devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible into the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing and improving education related to the quality of service delivery to minimize clinical errors. This will lead to an increase in fixed costs, especially in the shorter time frame. This paper focuses the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, enabling hospital managers to increase their hospital's profitability in the long run while also ensuring patient safety.
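The "six-sigma level" targeted above has a conventional quantitative meaning: roughly 3.4 defects per million opportunities (DPMO), derived from the normal tail probability with the customary 1.5-sigma long-term shift. A short sketch of that conversion:

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities for a one-sided specification
    limit, using the conventional 1.5-sigma long-term shift."""
    z = sigma_level - shift
    # Standard normal survival function via the complementary error function.
    tail = 0.5 * math.erfc(z / math.sqrt(2))
    return tail * 1e6

for s in (3, 4, 5, 6):
    print(f"{s} sigma -> {dpmo(s):,.1f} DPMO")
```

At six sigma this yields the familiar 3.4 DPMO; at three sigma, roughly 66,800, which illustrates the scale of improvement the authors are claiming.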

  14. Relationship Between Technical Errors and Decision-Making Skills in the Junior Resident.

    Science.gov (United States)

Nathwani, Jay N; Fiers, Rebekah M; Ray, Rebecca D; Witt, Anna K; Law, Katherine E; DiMarco, Shannon M; Pugh, Carla M

    The purpose of this study is to coevaluate resident technical errors and decision-making capabilities during placement of a subclavian central venous catheter (CVC). We hypothesize that there would be significant correlations between scenario-based decision-making skills and technical proficiency in central line insertion. We also predict residents would face problems in anticipating common difficulties and generating solutions associated with line placement. Participants were asked to insert a subclavian central line on a simulator. After completion, residents were presented with a real-life patient photograph depicting CVC placement and asked to anticipate difficulties and generate solutions. Error rates were analyzed using chi-square tests and a 5% expected error rate. Correlations were sought by comparing technical errors and scenario-based decision-making skills. This study was performed at 7 tertiary care centers. Study participants (N = 46) largely consisted of first-year research residents who could be followed longitudinally. Second-year research and clinical residents were not excluded. In total, 6 checklist errors were committed more often than anticipated. Residents committed an average of 1.9 errors, significantly more than the 1 error, at most, per person expected (t(44) = 3.82, p technical errors committed negatively correlated with the total number of commonly identified difficulties and generated solutions (r (33) = -0.429, p = 0.021, r (33) = -0.383, p = 0.044, respectively). Almost half of the surgical residents committed multiple errors while performing subclavian CVC placement. The correlation between technical errors and decision-making skills suggests a critical need to train residents in both technique and error management. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  15. Improved compliance with the World Health Organization Surgical Safety Checklist is associated with reduced surgical specimen labelling errors.

    Science.gov (United States)

    Martis, Walston R; Hannam, Jacqueline A; Lee, Tracey; Merry, Alan F; Mitchell, Simon J

    2016-09-09

    A new approach to administering the surgical safety checklist (SSC) at our institution using wall-mounted charts for each SSC domain coupled with migrated leadership among operating room (OR) sub-teams, led to improved compliance with the Sign Out domain. Since surgical specimens are reviewed at Sign Out, we aimed to quantify any related change in surgical specimen labelling errors. Prospectively maintained error logs for surgical specimens sent to pathology were examined for the six months before and after introduction of the new SSC administration paradigm. We recorded errors made in the labelling or completion of the specimen pot and on the specimen laboratory request form. Total error rates were calculated from the number of errors divided by total number of specimens. Rates from the two periods were compared using a chi square test. There were 19 errors in 4,760 specimens (rate 3.99/1,000) and eight errors in 5,065 specimens (rate 1.58/1,000) before and after the change in SSC administration paradigm (P=0.0225). Improved compliance with administering the Sign Out domain of the SSC can reduce surgical specimen errors. This finding provides further evidence that OR teams should optimise compliance with the SSC.
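The rate comparison above can be reproduced from the counts given in the abstract (19 errors in 4,760 specimens versus 8 in 5,065) with a Pearson chi-square test on the 2x2 table; for one degree of freedom the p-value reduces to a complementary error function. A sketch, without continuity correction:

```python
import math

# 2x2 table from the abstract: specimen errors before and after the new
# SSC administration paradigm.
errors_before, total_before = 19, 4760
errors_after,  total_after  = 8, 5065

a, b = errors_before, total_before - errors_before
c, d = errors_after,  total_after  - errors_after
n = a + b + c + d

# Error rates per 1,000 specimens, as quoted in the abstract.
rate_before = 1000 * errors_before / total_before
rate_after  = 1000 * errors_after  / total_after

# Pearson chi-square for a 2x2 table (no continuity correction), df = 1.
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
# For one degree of freedom the survival function reduces to erfc.
p = math.erfc(math.sqrt(chi2 / 2))

print(round(rate_before, 2), round(rate_after, 2), round(p, 4))
```

This reproduces the quoted rates (3.99 and 1.58 per 1,000) and a p-value of about 0.0225, matching the abstract.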

  16. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-09-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
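The quantity computed in this study, the probability of undetectable error on a binary symmetric channel, follows directly from a code's weight distribution A_i: an error pattern is undetected exactly when it equals a nonzero codeword, so P_ud(eps) = sum over i > 0 of A_i * eps^i * (1 - eps)^(n - i). As a small illustration the sketch below uses the unshortened (7,4) Hamming code, whose weight distribution (A_0 = 1, A_3 = 7, A_4 = 7, A_7 = 1) is easy to state, rather than the long CRC of IEEE 802.3:

```python
# Probability of undetected error on a binary symmetric channel with
# bit-error rate eps, given a code's weight distribution A_i:
#   P_ud(eps) = sum_{i>0} A_i * eps**i * (1 - eps)**(n - i)

def p_undetected(weights, n, eps):
    """weights: {codeword weight i: count A_i}; n: code length."""
    return sum(a * eps**i * (1 - eps)**(n - i)
               for i, a in weights.items() if i > 0)

# Weight distribution of the (7,4) Hamming code: 1 + 7x^3 + 7x^4 + x^7.
hamming_7_4 = {0: 1, 3: 7, 4: 7, 7: 1}

for eps in (1e-5, 1e-3, 0.5):
    print(eps, p_undetected(hamming_7_4, 7, eps))
```

At eps = 1/2 every 7-bit pattern is equally likely, so P_ud is simply (number of nonzero codewords)/2^7 = 15/128; for small eps the minimum-weight term (7 * eps^3) dominates.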

  17. Automated drug dispensing system reduces medication errors in an intensive care setting.

    Science.gov (United States)

    Chapuis, Claire; Roustit, Matthieu; Bal, Gaëlle; Schwebel, Carole; Pansu, Pascal; David-Tchouda, Sandra; Foroni, Luc; Calop, Jean; Timsit, Jean-François; Allenet, Benoît; Bosson, Jean-Luc; Bedouch, Pierrick

    2010-12-01

    We aimed to assess the impact of an automated dispensing system on the incidence of medication errors related to picking, preparation, and administration of drugs in a medical intensive care unit. We also evaluated the clinical significance of such errors and user satisfaction. Preintervention and postintervention study involving a control and an intervention medical intensive care unit. Two medical intensive care units in the same department of a 2,000-bed university hospital. Adult medical intensive care patients. After a 2-month observation period, we implemented an automated dispensing system in one of the units (study unit) chosen randomly, with the other unit being the control. The overall error rate was expressed as a percentage of total opportunities for error. The severity of errors was classified according to National Coordinating Council for Medication Error Reporting and Prevention categories by an expert committee. User satisfaction was assessed through self-administered questionnaires completed by nurses. A total of 1,476 medications for 115 patients were observed. After automated dispensing system implementation, we observed a reduced percentage of total opportunities for error in the study compared to the control unit (13.5% and 18.6%, respectively; perror (20.4% and 13.5%; perror showed a significant impact of the automated dispensing system in reducing preparation errors (perrors caused no harm (National Coordinating Council for Medication Error Reporting and Prevention category C). The automated dispensing system did not reduce errors causing harm. Finally, the mean for working conditions improved from 1.0±0.8 to 2.5±0.8 on the four-point Likert scale. The implementation of an automated dispensing system reduced overall medication errors related to picking, preparation, and administration of drugs in the intensive care unit. Furthermore, most nurses favored the new drug dispensation organization.

  18. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  19. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

20. Total contact ratio in circular bevel gears. // Contact Rate in circular Bevel gears.

    OpenAIRE

    E. Mirabet Lemos; L. Martínez Delgado

    2006-01-01

This paper presents a deductive analysis of the expressions used to evaluate the total contact ratio in circular bevel gears. The analysis makes use of the equivalent virtual cylindrical transmission. Two equivalent formulas are obtained, differing in some of the parameters that compose them. Each formula is based on a different way of analysing the total contact ratio and, in particular, one of its components, the ratio...

  1. Spacecraft and propulsion technician error

    Science.gov (United States)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  2. Reaction path of the oxidative coupling of methane over a lithium-doped magnesium oxide catalyst : Factors affecting the Rate of Total Oxidation of Ethane and Ethylene

    NARCIS (Netherlands)

    Roos, J.A.; Korf, S.J.; Veehof, R.H.J.; van Ommen, J.G.; Ross, J.R.H.

    1989-01-01

    Experiments using gas mixtures of O2, C2H6 or C2H4 and CH4 or He have been carried out with a Li/MgO catalyst using a well-mixed reaction system which show that the total oxidation products, CO and CO2, are formed predominantly from ethylene, formed in the oxidative coupling of methane. It is

  3. The influence of obesity on the complication rate and outcome of total knee arthroplasty: a meta-analysis and systematic literature review.

    Science.gov (United States)

    Kerkhoffs, Gino M M J; Servien, Elvire; Dunn, Warren; Dahm, Diane; Bramer, Jos A M; Haverkamp, Daniel

    2012-10-17

    The increase in the number of individuals with an unhealthy high body weight is particularly relevant in the United States. Obesity (body mass index ≥ 30 kg/m2) is a well-documented risk factor for the development of osteoarthritis. Furthermore, an increased prevalence of total knee arthroplasty in obese individuals has been observed in the last decades. The primary aim of this systematic literature review was to determine whether obesity has a negative influence on outcome after primary total knee arthroplasty. A search of the literature was performed, and studies comparing the outcome of total knee arthroplasty in different weight groups were included. The methodology of the included studies was scored according to the Cochrane guidelines. Data extraction and pooling were performed. The weighted mean difference for continuous data and the weighted odds ratio for dichotomous variables were calculated. Heterogeneity was calculated with use of the I2 statistic. After consensus was reached, twenty studies were included in the data analysis. The presence of any infection was reported in fourteen studies including 15,276 patients (I2, 26%). Overall, infection occurred more often in obese patients, with an odds ratio of 1.90 (95% confidence interval [CI], 1.46 to 2.47). Deep infection requiring surgical debridement was reported in nine studies including 5061 patients (I2, 0%). Deep infection occurred more often in obese patients, with an odds ratio of 2.38 (95% CI, 1.28 to 4.55). Revision of the total knee arthroplasty, defined as exchange or removal of the components for any reason, was documented in eleven studies including 12,101 patients (I2, 25%). Revision for any reason occurred more often in obese patients, with an odds ratio of 1.30 (95% CI, 1.02 to 1.67). Obesity had a negative influence on outcome after total knee arthroplasty.
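For reference, each study-level odds ratio pooled in the meta-analysis above comes from a 2x2 table, with the confidence interval computed on the log scale (Woolf's method). A sketch with hypothetical counts, not data from the review:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] with a Woolf
    (log-scale) 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical single study: infections vs. no infections in obese and
# non-obese patients (illustrative numbers only).
or_, lo, hi = odds_ratio_ci(40, 960, 25, 1475)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

A pooled estimate such as the review's OR of 1.90 (95% CI, 1.46 to 2.47) combines these per-study log odds ratios, weighted by their inverse variances.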

  4. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software.

    Science.gov (United States)

    Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O

    2016-11-01

    Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite benefit, such software is not without limitations, and transcription errors have been widely reported. Evaluate the frequency and nature of non-clinical transcription error using VR dictation software. Retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time was collected. 67 (17.72 %) reports contained ≥1 errors, with 7 (1.85 %) containing 'significant' and 9 (2.38 %) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22 %) classified as 'insignificant', 7 (7.78 %) as 'significant', 9 (10 %) as 'very significant'. 68 (75.56 %) errors were 'spelling and grammar', 20 (22.22 %) 'missense' and 2 (2.22 %) 'nonsense'. 'Punctuation' error was most common sub-type, accounting for 27 errors (30 %). Complex imaging modalities had higher error rates per report and sentence. Computed tomography contained 0.040 errors per sentence compared to plain film with 0.030. Longer reports had a higher error rate, with reports >25 sentences containing an average of 1.23 errors per report compared to 0-5 sentences containing 0.09. These findings highlight the limitations of VR dictation software. While most error was deemed insignificant, there were occurrences of error with potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates and this should be taken into account by the reporting radiologist.

  5. Error monitoring issues for common channel signaling

    Science.gov (United States)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far, which include the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring, with a discussion of their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS), as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
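The link error monitoring procedure analysed here, SS7's signal unit error rate monitor (SUERM), is commonly described as a leaky-bucket counter. The parameters below (increment by 1 per errored signal unit, leak 1 per 256 signal units received, changeover at a count of 64) follow the usual description of ITU-T Q.703 and should be checked against the standard; the simulation itself is a sketch:

```python
# Leaky-bucket sketch of SS7's signal unit error rate monitor (SUERM).
# Assumed Q.703-style parameters: +1 per errored signal unit, -1 per
# 256 signal units received, link changeover at a counter value of 64.

def suerm(unit_errors, threshold=64, leak_interval=256):
    """unit_errors: iterable of booleans, one per received signal unit.
    Returns the index at which the link fails, or None if it survives."""
    counter = 0
    since_leak = 0
    for i, errored in enumerate(unit_errors):
        if errored:
            counter += 1
            if counter >= threshold:
                return i          # changeover triggered here
        since_leak += 1
        if since_leak == leak_interval:
            since_leak = 0
            counter = max(0, counter - 1)
    return None

# A burst of consecutive errored units trips the monitor quickly...
print(suerm([True] * 100))
# ...while a sparse error pattern leaks away and never trips it.
print(suerm([i % 500 == 0 for i in range(100_000)]))
```

The paper's observation follows from this structure: at intermediate error rates the counter hovers near the threshold, so the monitor can oscillate between changeover and changeback.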

  6. Interference in the $gg\\rightarrow h \\rightarrow \\gamma\\gamma$ On-Shell Rate and the Higgs Boson Total Width

    OpenAIRE

    Campbell, John; Carena, Marcela; Harnik, Roni; Liu, Zhen

    2017-01-01

    We consider interference between the Higgs signal and QCD background in $gg\\rightarrow h \\rightarrow \\gamma\\gamma$ and its effect on the on-shell Higgs rate. The existence of sizable strong phases leads to destructive interference of about 2% of the on-shell cross section in the Standard Model. This effect can be enhanced by beyond the standard model physics. In particular, since it scales differently from the usual rates, the presence of interference allows indirect limits to be placed on th...

  7. The Effects of Total Motile Sperm Count on Spontaneous Pregnancy Rate and Pregnancy After IUI Treatment in Couples with Male Factor and Unexplained Infertility.

    Science.gov (United States)

    Hajder, Mithad; Hajder, Elmira; Husic, Amela

    2016-02-01

    Male infertility factor is defined if the total number of motile spermatozoa (TMSC) 3,10(6) / ejaculate and a spontaneous pregnancy, group (B) with TMSCl 3 x 10(6) / ejaculate and couples who have not achieved pregnancy. From a total of 98 pairs of men's and unexplained infertility, 42 of them (42.8%) achieved spontaneous pregnancy, while 56 (57.2%) pairs did not achieve spontaneous pregnancy. TMSC was significantly higher (42.4 ± 28.4 vs. 26.2 ± 24, p 20 x 10(6) / ejaculate (RR = 1.7, 95% CI: 1.56-1.82, 5 x 10(6) / ejaculate are indicated for treatment with IUI. TMSC can be used as the method of choice for diagnosis and treatment of male infertility.

  8. The impact of pre-operative weight loss on incidence of surgical site infection and readmission rates after total joint arthroplasty.

    Science.gov (United States)

    Inacio, Maria C S; Kritz-Silverstein, Donna; Raman, Rema; Macera, Caroline A; Nichols, Jeanne F; Shaffer, Richard A; Fithian, Donald C

    2014-03-01

    This study characterized a cohort of obese total hip arthroplasty (THA) and total knee arthroplasty (TKA) patients (1/1/2008-12/31/2010) and evaluated whether a clinically significant amount of pre-operative weight loss (5% decrease in body weight) is associated with a decreased risk of surgical site infections (SSI) and readmissions post-surgery. 10,718 TKAs and 4066 THAs were identified. During the one year pre-TKA 7.6% of patients gained weight, 12.4% lost weight, and 79.9% remained the same. In the one year pre-THA, 6.3% of patients gained weight, 18.0% lost weight, and 75.7% remained the same. In TKAs and THAs, after adjusting for covariates, the risk of SSI and readmission was not significantly different in the patients who gained or lost weight pre-operatively compared to those who remained the same. © 2013.

  9. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

Full Text Available A scheme is presented for calculating the errors in dry matter values which occur during approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Further formulae are shown which describe the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth course of oat and maize plants are given. A critical analysis of the estimates obtained has been carried out. The usefulness of jointly applying statistical methods and error calculus in plant growth analysis has been demonstrated.
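For the logistic case the growth characteristics have closed forms, which is what makes analytical error formulae possible: with W(t) = A / (1 + exp(b - k*t)), GR = dW/dt = k*W*(1 - W/A) and RGR = GR/W = k*(1 - W/A). A sketch with hypothetical parameters (not fitted to the oat or maize data), checking the analytic GR against a numerical derivative:

```python
import math

# Logistic growth curve W(t) = A / (1 + exp(b - k*t)) and its derived
# growth characteristics:
#   GR  = dW/dt = k * W * (1 - W/A)   (growth rate)
#   RGR = GR/W  = k * (1 - W/A)       (relative growth rate)
# Parameters are hypothetical, for illustration only.

A, b, k = 120.0, 4.0, 0.15   # asymptote, position, rate constant

def W(t):
    return A / (1 + math.exp(b - k * t))

def GR(t):
    w = W(t)
    return k * w * (1 - w / A)

def RGR(t):
    return k * (1 - W(t) / A)

# Sanity check: analytic GR matches a central-difference derivative.
t, h = 20.0, 1e-5
numeric = (W(t + h) - W(t - h)) / (2 * h)
print(round(GR(t), 4), round(numeric, 4), round(RGR(t), 4))
```

Because GR and RGR are explicit functions of the fitted parameters, their absolute errors can be propagated analytically from the parameter errors, which is the approach the paper formalizes.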

  10. The effects of intra-articular tranexamic acid given intraoperatively and intravenous tranexamic acid given preoperatively on post surgical bleeding and transfusion rate post total knee arthroplasty

    Directory of Open Access Journals (Sweden)

    Aryo N. Triyudanto

    2017-01-01

Full Text Available Background: Despite advances in the design and fixation of implants in total knee replacement (TKR), the amount of postoperative bleeding remains an important unresolved issue. This study aimed to measure the effectiveness of various routes of tranexamic acid administration. Methods: This was a randomized controlled trial, held from August 2014 to February 2016 at Cipto Mangunkusumo Hospital, Jakarta. Twenty-two patients having TKR were divided into three groups: the control group, the tranexamic acid intra-articular-intraoperative group, and the intravenous preoperative group. Intraoperative bleeding, haemoglobin (Hb) level from before surgery to five days after surgery, total drain production, total blood transfusion needed and the timing of drain removal were recorded and compared. Numerical data were analyzed using parametric and non-parametric tests, depending on the normality of the data. Results: The amount of blood transfusion needed in both the intra-articular group (200±SD 100 mL) and the intravenous group (238±SD 53 mL) differed significantly from that in the control group (1,016±SD 308.2 mL) (p=0.001). Meanwhile, there was no significant difference in the amount of blood transfusion needed between the intra-articular group and the intravenous group. Total drain production in the intra-articular group (328±SD 193 mL) and the intravenous group (391±SD 185 mL) was significantly different from that in the control group (652±SD 150 mL) (p=0.003). There was no significant difference in preoperative and postoperative haemoglobin levels, the amount of intraoperative bleeding, or the duration of drain usage. Conclusion: Intravenous and intra-articular tranexamic acid effectively decreased transfusion volume and drain production in patients undergoing TKR.

  11. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

    1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used depending on whether - x .GE.4.0. In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function by using the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x
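The caution in item 5 is easy to demonstrate: in double precision, erf(x) rounds to exactly 1.0 once x is large enough, so computing the complementary error function via the identity erfc(x) = 1.0 - erf(x) returns 0.0 even though the true value is merely tiny. Using Python's math module as a stand-in for the ERF/ERFC routines:

```python
import math

# Computing erfc(x) as 1 - erf(x) suffers catastrophic cancellation for
# large x: erf(x) rounds to exactly 1.0 in double precision while the
# true erfc(x) is small but nonzero. Direct evaluation avoids this.

for x in (1.0, 5.0, 10.0):
    naive = 1.0 - math.erf(x)    # loses significance as x grows
    direct = math.erfc(x)        # computed directly, keeps precision
    print(x, naive, direct)
```

For x = 10 the naive identity yields exactly 0.0, while the direct routine returns a value on the order of 1e-45, which is why the record warns against the subtraction.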

  12. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

    Introduction: Danger and errors are inherent in human activities. In medical practice errors can lean to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recent published papers in PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  13. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    .... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...

  14. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors have a major contribution to the risks for industrial accidents. Accidents have provided important lesson making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is however increasing the danger of system accidents. Models of the human operator have been proposed, but the models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research in human error and it concludes with suggestions for further work. (orig.)

  15. Patient-reported Outcomes and Revision Rates at a Mean Follow-up of 10 Years After Lumbar Total Disc Replacement

    DEFF Research Database (Denmark)

    Laugesen, Line A.; Tendal Paulsen, Rune; Carreon, Leah

    2017-01-01

    STUDY DESIGN.: Prospective observational cohort study. OBJECTIVE.: To determine the long-term clinical results and prosthesis survival in patients treated with lumbar total disc replacement (TDR). SUMMARY OF BACKGROUND DATA.: Fusion has become the current standard surgical treatment for lumbar...... and statistically significant worse outcome scores at last follow-up compared to patients who had no revision. Thirty patients (52.6%) would choose the same treatment again if they were faced with the same problem. CONCLUSION.: This study demonstrated significant improvement in long-term clinical outcomes, similar...

  16. Modification of Death rate and Disturbances induced in the Levels of serum total Lipids and free fatty acids of irradiated rats by ascorbic acid and serotonin

    International Nuclear Information System (INIS)

    Mahdy, A.M.; Saada, H.N.; Osama, Z.S.

    1999-01-01

Intraperitoneal injection of normal rats with ascorbic acid (10 mg/100 g body weight) or serotonin (2 mg/100 g body weight) had no harmful effect on life span. Moreover, the levels of serum total lipids and free fatty acids did not show any significant changes at 3, 7, 10 and 14 days after injection. Administration of ascorbic acid or serotonin to rats at the aforementioned doses, 15 minutes before gamma irradiation at 7.5 Gy (single dose), improved the survival time of the rats and the hyperlipemic state recorded after radiation exposure.

  17. Totally James

    Science.gov (United States)

    Owens, Tom

    2006-01-01

    This article presents an interview with James Howe, author of "The Misfits" and "Totally Joe". In this interview, Howe discusses tolerance, diversity and the parallels between his own life and his literature. Howe's four books in addition to "The Misfits" and "Totally Joe" and his list of recommended books with lesbian, gay, bisexual, transgender,…

  18. Effectiveness of Toyota process redesign in reducing thyroid gland fine-needle aspiration error.

    Science.gov (United States)

    Raab, Stephen S; Grzybicki, Dana Marie; Sudilovsky, Daniel; Balassanian, Ronald; Janosky, Janine E; Vrbin, Colleen M

    2006-10-01

    Our objective was to determine whether the Toyota Production System process redesign resulted in diagnostic error reduction for patients who underwent cytologic evaluation of thyroid nodules. In this longitudinal, nonconcurrent cohort study, we compared the diagnostic error frequency of a thyroid aspiration service before and after implementation of error reduction initiatives consisting of adoption of a standardized diagnostic terminology scheme and an immediate interpretation service. A total of 2,424 patients underwent aspiration. Following terminology standardization, the false-negative rate decreased from 41.8% to 19.1% (P = .006), the specimen nondiagnostic rate increased from 5.8% to 19.8% (P Toyota process change led to significantly fewer diagnostic errors for patients who underwent thyroid fine-needle aspiration.

  19. The relation of putamen nucleus 6-[18F]fluoro-L-m-tyrosine uptake to total Unified Parkinson's Disease Rating Scale scores

    International Nuclear Information System (INIS)

    Buchy, R.

    2002-01-01

    The contribution of dopaminergic deficiency in the striatum to the severity of locomotor disability in Parkinson's disease has been consistently shown with 6-[18F]fluoro-L-DOPA in positron emission tomography. Recently, 6-[18F]fluoro-L-m-tyrosine, an alternative tracer with similar distribution kinetics, has been used to facilitate data analysis. Locomotor disability in Parkinson's disease can be measured using the Unified Parkinson's Disease Rating Scale. The Unified Parkinson's Disease Rating Scale was used in conjunction with 6-[18F]fluoro-L-m-tyrosine PET to clinically examine a group of five Parkinson's disease patients. An inverse relation similar to that previously demonstrated with 6-[18F]fluoro-L-DOPA was found between the putamen nucleus 6-[18F]fluoro-L-m-tyrosine influx constant and the Unified Parkinson's Disease Rating Scale score. This finding suggests that, like 6-[18F]fluoro-L-DOPA, 6-[18F]fluoro-L-m-tyrosine can be used to accurately measure the degree of locomotor disability caused by Parkinson's disease. (author)

  20. Common patterns in 558 diagnostic radiology errors.

    Science.gov (United States)

    Donald, Jennifer J; Barnard, Stuart A

    2012-04-01

    As a Quality Improvement initiative our department has held regular discrepancy meetings since 2003. We performed a retrospective analysis of the cases presented and identified the most common pattern of error. A total of 558 cases were referred for discussion over 92 months, and errors were classified as perceptual or interpretative. The most common patterns of error for each imaging modality were analysed, and the misses were scored by consensus as subtle or non-subtle. Of 558 diagnostic errors, 447 (80%) were perceptual and 111 (20%) were interpretative errors. Plain radiography and computed tomography (CT) scans were the most frequent imaging modalities accounting for 246 (44%) and 241 (43%) of the total number of errors, respectively. In the plain radiography group 120 (49%) of the errors occurred in chest X-ray reports with perceptual miss of a lung nodule occurring in 40% of this subgroup. In the axial and appendicular skeleton missed fractures occurred most frequently, and metastatic bone disease was overlooked in 12 of 50 plain X-rays of the pelvis or spine. The majority of errors within the CT group were in reports of body scans with the commonest perceptual errors identified including 16 missed significant bone lesions, 14 cases of thromboembolic disease and 14 gastrointestinal tumours. Of the 558 errors, 312 (56%) were considered subtle and 246 (44%) non-subtle. Diagnostic errors are not uncommon and are most frequently perceptual in nature. Identification of the most common patterns of error has the potential to improve the quality of reporting by improving the search behaviour of radiologists. © 2012 The Authors. Journal of Medical Imaging and Radiation Oncology © 2012 The Royal Australian and New Zealand College of Radiologists.
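
The percentage breakdown in this abstract can be checked with simple share arithmetic; the counts below are taken directly from the text, and the snippet is only a sanity check, not part of the study.

```python
# Share of each error category among the 558 diagnostic errors reported above.
total = 558
counts = {
    "perceptual": 447,        # abstract: 80%
    "interpretative": 111,    # abstract: 20%
    "plain radiography": 246, # abstract: 44%
    "CT": 241,                # abstract: 43%
    "subtle": 312,            # abstract: 56%
    "non-subtle": 246,        # abstract: 44%
}
for label, n in counts.items():
    print(f"{label}: {n}/{total} = {100 * n / total:.0f}%")
```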

  1. Functional outcome, revision rates and mortality after primary total hip replacement--a national comparison of nine prosthesis brands in England.

    Directory of Open Access Journals (Sweden)

    Mark Pennington

    Full Text Available The number of prosthesis brands used for hip replacement has increased rapidly, but there is little evidence on their effectiveness. We compared patient-reported outcomes, revision rates, and mortality for the three most frequently used brands within each prosthesis type: cemented (Exeter V40 Contemporary, Exeter V40 Duration and Exeter V40 Elite Plus Ogee), cementless (Corail Pinnacle, Accolade Trident, and Taperloc Exceed), and hybrid (Exeter V40 Trilogy, Exeter V40 Trident, and CPT Trilogy). We used three national databases of patients who had hip replacements between 2008 and 2011 in the English NHS to compare functional outcome (Oxford Hip Score (OHS), ranging from 0 (worst) to 48 (best)) in 43,524 patients at six months. We analysed revisions and mortality in 187,201 patients. We used multiple regression to adjust for pre-operative differences. Prosthesis type had an impact on post-operative OHS and revision rates (both p<0.001). Patients with hybrid prostheses had the best functional outcome (mean OHS 39.4, 95% CI 39.1 to 39.7) and those with cemented prostheses the worst (37.7, 37.3 to 38.1). Patients with cemented prostheses had the lowest reported 5-year revision rates (1.3%, 1.2% to 1.4%) and those with cementless prostheses the highest (2.2%, 2.1% to 2.4%). Differences in mortality according to prosthesis type were small and not significant (p = 0.06). Functional outcome varied according to brand among cemented prostheses (p = 0.05), with Exeter V40 Duration having the best, and among cementless prostheses (p = 0.01), with Corail Pinnacle having the best. Revision rates varied according to brand among hybrids (p = 0.05), with Exeter V40 Trident having the lowest. Functional outcomes were better with cementless cups and revision rates were lower with cemented stems, which underlies the good overall performance of hybrids. The hybrid Exeter V40 Trident seemed to produce the best overall results. This brand should be considered as a benchmark in randomised trials.

  2. Direct measurements of the total rate constant of the reaction NCN + H and implications for the product branching ratio and the enthalpy of formation of NCN.

    Science.gov (United States)

    Fassheber, Nancy; Dammeier, Johannes; Friedrichs, Gernot

    2014-06-21

    The overall rate constant of the reaction (2), NCN + H, which plays a key role in prompt-NO formation in flames, has been directly measured at temperatures above 962 K. The rate constants are best represented by the combination of two Arrhenius expressions, k2/(cm^3 mol^-1 s^-1) = 3.49 × 10^14 exp(-33.3 kJ mol^-1/RT) + 1.07 × 10^13 exp(+10.0 kJ mol^-1/RT), with a small uncertainty of ±20% at T = 1600 K and ±30% at the upper and lower experimental temperature limits. The two Arrhenius terms can basically be attributed to the contributions of reaction channel (2a) yielding CH + N2 and channel (2b) yielding HCN + N as the products. A more refined analysis taking into account experimental and theoretical literature data provided a consistent rate constant set for k2a, its reverse reaction k1a (CH + N2 → NCN + H), and k2b, as well as a value for the controversial enthalpy of formation of NCN, ΔfH = 450 kJ mol^-1. The analysis verifies the expected strong temperature dependence of the branching fraction ϕ = k2b/k2, with reaction channel (2b) dominating at the experimental high-temperature limit. In contrast, reaction (2a) dominates at the low-temperature limit, with a possible minor contribution of the HNCN-forming recombination channel (2d) at T < 1150 K.
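
The two-term Arrhenius fit quoted above is straightforward to evaluate numerically. As a sketch, the assignment below of the activated term to channel (2b) and the negative-activation-energy term to channel (2a) is inferred from the temperature dependence the abstract describes, so the branching fraction is illustrative rather than the authors' tabulated value.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def k2(T):
    """Total NCN + H rate constant in cm^3 mol^-1 s^-1 (two-term Arrhenius fit)."""
    term_2b = 3.49e14 * math.exp(-33.3e3 / (R * T))  # activated term; dominates at high T
    term_2a = 1.07e13 * math.exp(+10.0e3 / (R * T))  # negative activation energy; dominates at low T
    return term_2b + term_2a

def branching_2b(T):
    """Approximate branching fraction phi = k2b/k2 under the term assignment above."""
    return 3.49e14 * math.exp(-33.3e3 / (R * T)) / k2(T)

print(f"k2(1600 K)  = {k2(1600):.3e} cm^3 mol^-1 s^-1")
print(f"phi(1600 K) = {branching_2b(1600):.2f}")
```

Consistent with the abstract, the HCN + N channel already carries a little over half the flux at 1600 K and grows toward the high-temperature limit.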

  3. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  4. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and pediatric patients. Of these error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  5. Collection of offshore human error probability data

    International Nuclear Information System (INIS)

    Basra, Gurpreet; Kirwan, Barry

    1998-01-01

    Accidents such as Piper Alpha have increased concern about the effects of human errors in complex systems. Such accidents can in theory be predicted and prevented by risk assessment, and in particular human reliability assessment (HRA), but HRA ideally requires qualitative and quantitative human error data. A research initiative at the University of Birmingham led to the development of CORE-DATA, a Computerised Human Error Data Base. This system currently contains a reasonably large number of human error data points, collected from a variety of mainly nuclear-power related sources. This article outlines a recent offshore data collection study, concerned with collecting lifeboat evacuation data. Data collection methods are outlined and a selection of human error probabilities generated as a result of the study are provided. These data give insights into the type of errors and human failure rates that could be utilised to support offshore risk analyses

  6. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  7. Frecuencia de errores de los pacientes con su medicación Frequency of medication errors by patients

    Directory of Open Access Journals (Sweden)

    José Joaquín Mira

    2012-02-01

    Full Text Available OBJECTIVE: Analyze the frequency of medication errors committed and reported by patients. METHODS: Descriptive study based on a telephone survey of a random sample of adult patients from the primary care level of the Spanish public health care system. A total of 1,247 patients responded (75% response rate); 63% were women and 29% were older than 70 years. RESULTS: While 37 patients (3%, 95% CI: 2-4) experienced complications associated with medication in the course of treatment, 241 (19.4%, 95% CI: 17-21) reported having made some mistake with their medication. A shorter consultation time (P < 0.01) and a worse assessment of the information provided by the physician (P < 0.01) were associated with the fact that during pharmacy dispensing the patient was told that the prescribed treatment was not appropriate. CONCLUSIONS: In addition to the known risks of an adverse event due to a health intervention resulting from a system or practitioner error, there are risks associated with patient errors in the self-administration of medication. Patients who were unsatisfied with the information provided by the physician reported a greater number of errors.
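
The reported confidence intervals are consistent with a textbook normal-approximation interval for a proportion. The sketch below reproduces that calculation for 241 reported mistakes among 1,247 respondents; it is an illustration of the standard formula, not the authors' analysis code.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

lo, hi = proportion_ci(241, 1247)
# The point estimate rounds to 19.3%; the interval matches the reported 17-21%.
print(f"error rate {241 / 1247:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```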

  8. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Full Text Available Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  9. Relationship between PSA kinetics and [{sup 18}F]fluorocholine PET/CT detection rates of recurrence in patients with prostate cancer after total prostatectomy

    Energy Technology Data Exchange (ETDEWEB)

    Graute, Vera; Jansen, Nathalie; Uebleis, Christopher; Cumming, Paul; Klanke, Katharina; Tiling, Reinhold; Bartenstein, Peter; Hacker, Marcus [University of Munich, Department of Nuclear Medicine, Munich (Germany); Seitz, Michael [University of Munich, Department of Urology, Munich (Germany); Hartenbach, Markus [Bundeswehrkrankenhaus Ulm, Department of Nuclear Medicine, Ulm (Germany); Scherr, Michael Karl; Thieme, Sven [University of Munich, Institute of Clinical Radiology, Munich (Germany)

    2012-02-15

    The aim of the present study was to identify prostate-specific antigen (PSA) threshold levels, as well as PSA velocity, progression rate and doubling time in relation to the detectability and localization of recurrent lesions with [{sup 18}F]fluorocholine (FC) PET/CT in patients after radical prostatectomy. The study group comprised 82 consecutive patients with biochemical relapse after radical prostatectomy. PSA levels measured at the time of imaging were correlated with the FC PET/CT detection rates in the entire group with PSA velocity (in 48 patients), with PSA doubling time (in 47 patients) and with PSA progression (in 29 patients). FC PET/CT detected recurrent lesions in 51 of the 82 patients (62%). The median PSA value was significantly higher in PET-positive than in PET-negative patients (4.3 ng/ml vs. 1.0 ng/ml; p < 0.01). The optimal PSA threshold from ROC analysis for the detection of recurrent prostate cancer lesions was 1.74 ng/ml (AUC 0.818, 82% sensitivity, 74% specificity). Significant differences between PET-positive and PET-negative patients were found for median PSA velocity (6.4 vs. 1.1 ng/ml per year; p < 0.01) and PSA progression (5.0 vs. 0.3 ng/ml per year, p < 0.01) with corresponding optimal thresholds of 1.27 ng/ml per year and 1.28 ng/ml per year, respectively. The PSA doubling time suggested a threshold of 3.2 months, but this just failed to reach statistical significance (p = 0.071). In a study cohort of patients with biochemical recurrence of prostate cancer after radical prostatectomy there emerged clear PSA thresholds for the presence of FC PET/CT-detectable lesions. (orig.)
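
An "optimal threshold from ROC analysis", as quoted above, is commonly found by maximizing Youden's J statistic (sensitivity + specificity - 1). The sketch below shows that criterion on synthetic PSA values; the function name and the data are illustrative assumptions, not taken from the paper.

```python
def youden_threshold(values, labels):
    """Return the cutoff maximizing Youden's J = sensitivity + specificity - 1.

    labels: 1 for PET-positive (lesion detected), 0 for PET-negative.
    A case is called positive when its value is >= the cutoff.
    """
    best_t, best_j = None, -1.0
    for t in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if y == 1 and v >= t)
        fn = sum(1 for v, y in zip(values, labels) if y == 1 and v < t)
        tn = sum(1 for v, y in zip(values, labels) if y == 0 and v < t)
        fp = sum(1 for v, y in zip(values, labels) if y == 0 and v >= t)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Synthetic PSA values (ng/ml) with perfect separation, for illustration only.
psa = [0.3, 0.5, 0.8, 1.0, 1.2, 2.0, 2.5, 3.0, 4.3, 6.0]
pet_positive = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
t, j = youden_threshold(psa, pet_positive)
print(f"optimal threshold: {t} ng/ml (Youden J = {j:.2f})")
```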

  10. High cortisol awakening response is associated with impaired error monitoring and decreased post-error adjustment.

    Science.gov (United States)

    Zhang, Liang; Duan, Hongxia; Qin, Shaozheng; Yuan, Yiran; Buchanan, Tony W; Zhang, Kan; Wu, Jianhui

    2015-01-01

    The cortisol awakening response (CAR), a rapid increase in cortisol levels following morning awakening, is an important aspect of hypothalamic-pituitary-adrenocortical axis activity. Alterations in the CAR have been linked to a variety of mental disorders and cognitive function. However, little is known regarding the relationship between the CAR and error processing, a phenomenon that is vital for cognitive control and behavioral adaptation. Using high-temporal resolution measures of event-related potentials (ERPs) combined with behavioral assessment of error processing, we investigated whether and how the CAR is associated with two key components of error processing: error detection and subsequent behavioral adjustment. Sixty university students performed a Go/No-go task while their ERPs were recorded. Saliva samples were collected at 0, 15, 30 and 60 min after awakening on the two consecutive days following ERP data collection. The results showed that a higher CAR was associated with slowed latency of the error-related negativity (ERN) and a higher post-error miss rate. The CAR was not associated with other behavioral measures such as the false alarm rate and the post-correct miss rate. These findings suggest that high CAR is a biological factor linked to impairments of multiple steps of error processing in healthy populations, specifically, the automatic detection of error and post-error behavioral adjustment. A common underlying neural mechanism of physiological and cognitive control may be crucial for engaging in both CAR and error processing.

  11. Clinical errors and medical negligence.

    Science.gov (United States)

    Oyebode, Femi

    2013-01-01

    This paper discusses the definition, nature and origins of clinical errors including their prevention. The relationship between clinical errors and medical negligence is examined as are the characteristics of litigants and events that are the source of litigation. The pattern of malpractice claims in different specialties and settings is examined. Among hospitalized patients worldwide, 3-16% suffer injury as a result of medical intervention, the most common being the adverse effects of drugs. The frequency of adverse drug effects appears superficially to be higher in intensive care units and emergency departments but once rates have been corrected for volume of patients, comorbidity of conditions and number of drugs prescribed, the difference is not significant. It is concluded that probably no more than 1 in 7 adverse events in medicine result in a malpractice claim and the factors that predict that a patient will resort to litigation include a prior poor relationship with the clinician and the feeling that the patient is not being kept informed. Methods for preventing clinical errors are still in their infancy. The most promising include new technologies such as electronic prescribing systems, diagnostic and clinical decision-making aids and error-resistant systems. Copyright © 2013 S. Karger AG, Basel.

  12. Teamwork and Clinical Error Reporting among Nurses in Korean Hospitals

    Directory of Open Access Journals (Sweden)

    Jee-In Hwang, PhD

    2015-03-01

    Conclusions: Teamwork was rated as moderate and was positively associated with nurses' error reporting performance. Hospital executives and nurse managers should make substantial efforts to enhance teamwork, which will contribute to encouraging the reporting of errors and improving patient safety.

  13. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  14. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  15. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...

  16. Errors and violations

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between: errors and violations; and active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident pathogen view of accident causation, and seeks to identify the various system pathways along which errors and violations may be propagated

  17. Soft errors in dynamic random access memories - a basis for dosimetry

    International Nuclear Information System (INIS)

    Haque, A.K.M.M.; Yates, J.; Stevens, D.

    1986-01-01

    The soft error rates of a number of 64k and 256k dRAMs from several manufacturers have been measured, employing a MC 68000 microprocessor. For this 'accelerated test' procedure, a 37 kBq (1 μCi) 241Am alpha-emitting source was used. Both 64k and 256k devices exhibited widely differing error rates. It was generally observed that the spread of errors over a particular device/manufacturer was much smaller than the differences between device families and manufacturers. Bit line errors formed a significant part of the total for 64k dRAMs, whereas in 256k dRAMs cell errors dominated; the latter also showed an enhanced sensitivity to integrated dose leading to total failure, and a time-dependent recovery. Although several theoretical models explain soft error mechanisms and predict responses which are compatible with our experimental results, it is considered that microdosimetric and track structure methods should be applied to the problem for its better appreciation. Finally, attention is drawn to the need for further studies of dRAMs, with a view to their use as digital dosemeters. (author)

  18. Error identification in a high-volume clinical chemistry laboratory: Five-year experience.

    Science.gov (United States)

    Jafri, Lena; Khan, Aysha Habib; Ghani, Farooq; Shakeel, Shahid; Raheem, Ahmed; Siddiqui, Imran

    2015-07-01

    Quality indicators for assessing the performance of a laboratory require a systematic and continuous approach in collecting and analyzing data. The aim of this study was to determine the frequency of errors utilizing the quality indicators in a clinical chemistry laboratory and to convert errors to the Sigma scale. Five-year quality indicator data of a clinical chemistry laboratory was evaluated to describe the frequency of errors. An 'error' was defined as a defect during the entire testing process from the time requisition was raised and phlebotomy was done until the result dispatch. An indicator with a Sigma value of 4 was considered good but a process for which the Sigma value was 5 (i.e. 99.977% error-free) was considered well controlled. In the five-year period, a total of 6,792,020 specimens were received in the laboratory. Among a total of 17,631,834 analyses, 15.5% were from within hospital. Total error rate was 0.45% and of all the quality indicators used in this study the average Sigma level was 5.2. Three indicators - visible hemolysis, failure of proficiency testing and delay in stat tests - were below 5 on the Sigma scale and highlight the need to rigorously monitor these processes. Using Six Sigma metrics quality in a clinical laboratory can be monitored more effectively and it can set benchmarks for improving efficiency.
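
The error-rate-to-Sigma conversion described above can be sketched with the standard defects-per-opportunity formula and the conventional 1.5-sigma long-term shift; this reproduces the abstract's "99.977% error-free at Sigma 5" figure, but the exact convention the authors used is an assumption. Note that under this convention the overall 0.45% error rate maps to roughly Sigma 4.1, while the 5.2 average quoted in the abstract is taken across individual indicators.

```python
from statistics import NormalDist

def sigma_level(error_rate):
    """Convert a defect (error) rate to a short-term Sigma level,
    assuming the conventional 1.5-sigma long-term shift."""
    return NormalDist().inv_cdf(1.0 - error_rate) + 1.5

def error_free_yield(sigma):
    """Fraction of error-free results implied by a given Sigma level."""
    return NormalDist().cdf(sigma - 1.5)

print(f"Sigma for a 0.45% total error rate: {sigma_level(0.0045):.2f}")
print(f"Yield at Sigma 5: {error_free_yield(5) * 100:.3f}%")
```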

  19. Frequency of medication errors in an emergency department of a large teaching hospital in southern Iran

    Directory of Open Access Journals (Sweden)

    Vazin A

    2014-12-01

    Full Text Available Afsaneh Vazin,1 Zahra Zamani,1 Nahid Hatam2 1Department of Clinical Pharmacy, Faculty of Pharmacy, 2School of Management and Medical Information Sciences, Shiraz University of Medical Sciences, Shiraz, Iran Abstract: This study was conducted with the purpose of determining the frequency of medication errors (MEs) occurring in the tertiary care emergency department (ED) of a large academic hospital in Iran. The incidence of MEs was determined through the disguised direct observation method conducted by a trained observer. A total of 1,031 medication doses administered to 202 patients admitted to the tertiary care ED were observed over a course of 54 6-hour shifts. Following collection of the data and analysis of the errors with the assistance of a clinical pharmacist, the frequency of errors in the different stages was reported and analyzed in SPSS-21 software. For the 202 patients and the 1,031 medication doses evaluated in the present study, 707 MEs (68.5%) were recorded in total. In other words, 3.5 errors per patient and almost 0.69 errors per medication dose occurred, with the highest frequency of errors pertaining to cardiovascular (27.2%) and antimicrobial (23.6%) medications. The highest rate of errors occurred during the administration phase of the medication use process with a share of 37.6%, followed by errors of prescription and transcription with shares of 21.1% and 10% of errors, respectively. Omission (7.6%) and wrong time error (4.4%) were the most frequent administration errors. The less-experienced nurses (P=0.04), higher patient-to-nurse ratio (P=0.017), and the morning shifts (P=0.035) were positively related to administration errors. Administration errors marked the highest share of MEs occurring in the different medication use processes. Increasing the number of nurses and employing the more experienced of them in EDs can help reduce nursing errors. Addressing the shortcomings with further research should result in a reduction of MEs.
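
The per-patient and per-dose figures follow directly from the reported counts; this is a plain arithmetic check on the abstract's numbers, not part of the study's analysis.

```python
# Counts reported in the abstract above.
patients, doses, errors = 202, 1031, 707

print(f"errors per patient: {errors / patients:.1f}")  # 707/202 is exactly 3.5
print(f"errors per dose:    {errors / doses:.2f}")     # ~0.69
print(f"overall error rate: {errors / doses:.1%}")     # 68.6% (abstract rounds to 68.5%)
```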

  20. Help prevent hospital errors

    Science.gov (United States)

    MedlinePlus patient instructions (//medlineplus.gov/ency/patientinstructions/000618.htm) on helping to prevent hospital errors, including how to keep yourself safe if you are having surgery.