WorldWideScience

Sample records for higher error rate

  1. Reducing Error Rates for Iris Image using higher Contrast in Normalization process

    Science.gov (United States)

    Aminu Ghali, Abdulrahman; Jamel, Sapiee; Abubakar Pindar, Zahraddeen; Hasssan Disina, Abdulkadir; Mat Daris, Mustafa

    2017-08-01

    The iris recognition system is among the most secure and fastest means of identification and authentication. However, iris recognition suffers from blurring, low contrast, and poor illumination caused by low-quality images, which compromise the accuracy of the system. The acceptance or rejection rate for a verified user depends solely on the quality of the image; in many cases, an iris recognition system working with low image contrast can falsely accept or reject a user. This paper therefore adopts the histogram equalization technique to address the problems of False Rejection Rate (FRR) and False Acceptance Rate (FAR) by enhancing the contrast of the iris image. Histogram equalization enhances image quality and neutralizes the low contrast of the image at the normalization stage. The experimental results show that the histogram equalization technique reduces FRR and FAR compared with existing techniques.
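
    The core enhancement step described above, classic histogram equalization, can be sketched in a few lines of NumPy (an illustrative sketch only; the paper's iris segmentation and normalization pipeline is not reproduced here, and the synthetic "image" below is a stand-in):

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Spread a low-contrast 8-bit image over the full intensity range
    using the cumulative distribution function of its histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero CDF value
    # Classic histogram-equalization mapping onto [0, 255]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]

# Synthetic low-contrast image: all pixel values squeezed into [100, 140]
rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 141, size=(64, 64)).astype(np.uint8)
enhanced = equalize_histogram(low_contrast)
print(low_contrast.min(), low_contrast.max(), enhanced.min(), enhanced.max())
```

    After equalization the intensity range expands toward the full [0, 255] scale, which is the contrast gain exploited at the normalization stage.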

  2. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  3. Generalizing human error rates: A taxonomic approach

    International Nuclear Information System (INIS)

    Buffardi, L.; Fleishman, E.; Allen, J.

    1989-01-01

    It is well established that human error plays a major role in the malfunctioning of complex technological systems and in accidents associated with their operation. Estimates of the rate of human error in the nuclear industry range from 20% to 65% of all system failures. In response to this, the Nuclear Regulatory Commission has developed a variety of techniques for estimating human error probabilities for nuclear power plant personnel. Most of these techniques require the specification of the range of human error probabilities for various tasks. Unfortunately, very little objective performance data on error probabilities exist for nuclear environments. Thus, when human reliability estimates are required, for example in computer simulation modeling of system reliability, only subjective estimates (usually based on experts' best guesses) can be provided. The objective of the current research is to provide guidelines for the selection of human error probabilities based on actual performance data taken in other complex environments, and to apply them to nuclear settings. A key feature of this research is the application of a comprehensive taxonomic approach to nuclear and non-nuclear tasks to evaluate their similarities and differences, thus providing a basis for generalizing human error estimates across tasks. In recent years significant developments have occurred in classifying and describing tasks. Initial goals of the current research are to: (1) identify alternative taxonomic schemes that can be applied to tasks, and (2) describe nuclear tasks in terms of these schemes. Three standardized taxonomic schemes (Ability Requirements Approach, Generalized Information-Processing Approach, Task Characteristics Approach) are identified, modified, and evaluated for their suitability in comparing nuclear and non-nuclear power plant tasks. An agenda for future research and its relevance to nuclear power plant safety is also discussed.

  4. Multicenter Assessment of Gram Stain Error Rates.

    Science.gov (United States)

    Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-06-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  5. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing because it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction; no adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor that increases with the code distance.
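
    The estimation step can be illustrated with a minimal Gaussian-process regression over time (a toy sketch with made-up hyperparameters and simulated data; the paper's protocol extracts its statistics from the error correction record itself, which is not modeled here):

```python
import numpy as np

def gp_predict(t_train, y_train, t_query, length=5.0, sig_f=0.01, sig_n=0.001):
    """Gaussian-process regression with an RBF kernel: smooth noisy
    per-interval error-rate observations and predict the rate at the
    query times. The prior mean is set to the sample mean of the data."""
    def kern(a, b):
        d = a[:, None] - b[None, :]
        return sig_f**2 * np.exp(-0.5 * (d / length) ** 2)
    mu = y_train.mean()
    K = kern(t_train, t_train) + sig_n**2 * np.eye(len(t_train))
    alpha = np.linalg.solve(K, y_train - mu)
    return mu + kern(t_query, t_train) @ alpha

# Simulated slowly drifting physical error rate observed with sampling noise
rng = np.random.default_rng(1)
t = np.arange(50.0)
true_rate = 0.01 + 0.002 * np.sin(t / 10.0)
observed = true_rate + rng.normal(0.0, 0.001, size=t.shape)
est = gp_predict(t, observed, np.array([25.0, 49.0]))
print(est)
```

    The smoothed estimates track the drifting underlying rate much more closely than any single noisy observation, which is what makes the predicted rates usable by the decoder.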

  6. Relating Complexity and Error Rates of Ontology Concepts. More Complex NCIt Concepts Have More Errors.

    Science.gov (United States)

    Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher

    2017-05-18

    Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the number of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.

  7. Aniseikonia quantification: error rate of rule of thumb estimation.

    Science.gov (United States)

    Lubkin, V; Shippman, S; Bennett, G; Meininger, D; Kramer, P; Poppinga, P

    1999-01-01

    To find the error rate in quantifying aniseikonia by using "Rule of Thumb" estimation in comparison with proven space eikonometry. Study 1: 24 adult pseudophakic individuals were measured for anisometropia and astigmatic interocular difference. Rule of Thumb quantification for prescription was calculated and compared with aniseikonia measurement by the classical Essilor Projection Space Eikonometer. Study 2: a parallel analysis was performed on 62 consecutive phakic patients from our strabismus clinic group. Frequency of error: For Group 1 (24 cases): 5 (21%) were equal (i.e., 1% or less difference); 16 (67%) were greater (more than 1% different); and 3 (13%) were less by Rule of Thumb calculation in comparison to aniseikonia determined on the Essilor eikonometer. For Group 2 (62 cases): 45 (73%) were equal (1% or less); 10 (16%) were greater; and 7 (11%) were lower in the Rule of Thumb calculations in comparison to Essilor eikonometry. Magnitude of error: In Group 1, in 10/24 (29%) aniseikonia by Rule of Thumb estimation was 100% or more greater than by space eikonometry, and in 6 of those ten by 200% or more. In Group 2, in 4/62 (6%) aniseikonia by Rule of Thumb estimation was 200% or more greater than by space eikonometry. The frequency and magnitude of apparent clinical errors of Rule of Thumb estimation are disturbingly large. This problem is greatly magnified by the time, effort, and cost of prescribing and executing an aniseikonic correction for a patient. The higher the refractive error, the greater the anisometropia, and the worse the errors in Rule of Thumb estimation of aniseikonia. Accurate eikonometric methods and devices should be employed in all cases where such measurements can be made. Rule of Thumb estimations should be limited to cases where such subjective testing and measurement cannot be performed, as in infants after unilateral cataract surgery.

  8. 45 CFR 98.100 - Error Rate Report.

    Science.gov (United States)

    2010-10-01

    ... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... the total dollar amount of payments made in the sample); the average amount of improper payment; and... not received. (e) Costs of Preparing the Error Rate Report—Provided the error rate calculations and...

  9. Technological Advancements and Error Rates in Radiation Therapy Delivery

    Energy Technology Data Exchange (ETDEWEB)

    Margalit, Danielle N., E-mail: dmargalit@partners.org [Harvard Radiation Oncology Program, Boston, MA (United States); Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States); Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K. [Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States)

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique.

  10. Technological Advancements and Error Rates in Radiation Therapy Delivery

    International Nuclear Information System (INIS)

    Margalit, Danielle N.; Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K.

    2011-01-01

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)–conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women’s Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher’s exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01–0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08–0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique.

  11. Logical error rate scaling of the toric code

    International Nuclear Information System (INIS)

    Watson, Fern H E; Barrett, Sean D

    2014-01-01

    To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behaviour in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead—the total number of physical qubits required to perform error correction. (paper)
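
    The small-error-rate counting regime can be illustrated with a toy leading-order estimate (a hedged sketch, not the paper's analysis: it counts error configurations on a single length-d chain, whereas the real calculation enumerates all minimum-weight configurations that fool the matching decoder):

```python
from math import ceil, comb

def logical_rate_leading_order(p: float, d: int) -> float:
    """Leading-order logical error rate for a distance-d code at small
    physical error rate p: failure requires at least ceil(d/2) errors
    along a shortest logical path, so the rate scales as p**ceil(d/2)."""
    t = ceil(d / 2)
    return comb(d, t) * p**t

p = 1e-3
for d in (3, 5, 7):
    print(d, logical_rate_leading_order(p, d))
```

    The characteristic signature of this regime is the slope: reducing p tenfold suppresses the d = 3 logical rate by a factor of 100, the d = 5 rate by 1000, and so on.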

  12. Selectively Fortifying Reconfigurable Computing Device to Achieve Higher Error Resilience

    Directory of Open Access Journals (Sweden)

    Mingjie Lin

    2012-01-01

    With the advent of 10 nm CMOS devices and “exotic” nanodevices, the location and occurrence time of hardware defects and design faults become increasingly unpredictable, therefore posing severe challenges to existing techniques for error-resilient computing, because most of them statically assign hardware redundancy and do not account for the error tolerance inherently existing in many mission-critical applications. This work proposes a novel approach to selectively fortifying a target reconfigurable computing device in order to achieve hardware-efficient error resilience for a specific target application. We intend to demonstrate that such error resilience can be significantly improved with effective hardware support. The major contributions of this work include (1) the development of a complete methodology to perform sensitivity and criticality analysis of hardware redundancy, (2) a novel problem formulation and an efficient heuristic methodology to selectively allocate hardware redundancy among a target design’s key components in order to maximize its overall error resilience, and (3) an academic prototype of an SFC computing device that illustrates a fourfold improvement of error resilience for an H.264 encoder implemented with an FPGA device.

  13. Process error rates in general research applications to the Human ...

    African Journals Online (AJOL)

    Objective. To examine process error rates in applications for ethics clearance of health research. Methods. Minutes of 586 general research applications made to a human health research ethics committee (HREC) from April 2008 to March 2009 were examined. Rates of approval were calculated and reasons for requiring ...

  14. Individual Differences and Rating Errors in First Impressions of Psychopathy

    Directory of Open Access Journals (Sweden)

    Christopher T. A. Gillen

    2016-10-01

    The current study is the first to investigate whether individual differences in personality are related to improved first-impression accuracy when appraising psychopathy in female offenders from thin slices of information. The study also investigated the types of errors laypeople make when forming these judgments. Sixty-seven undergraduates assessed 22 offenders on their level of psychopathy, violence, likability, and attractiveness. Psychopathy rating accuracy improved as rater extroversion-sociability and agreeableness increased and as rater neuroticism and lifestyle and antisocial characteristics decreased. These results suggest that traits associated with nonverbal rating accuracy or social functioning may be important in threat detection. Raters also made errors consistent with error management theory, suggesting that laypeople overappraise danger when rating psychopathy.

  15. A critique of recent models for human error rate assessment

    International Nuclear Information System (INIS)

    Apostolakis, G.E.

    1988-01-01

    This paper critically reviews two groups of models for assessing human error rates under accident conditions. The first group, which includes the US Nuclear Regulatory Commission (NRC) handbook model and the human cognitive reliability (HCR) model, considers as fundamental the time that is available to the operators to act. The second group, which is represented by the success likelihood index methodology multiattribute utility decomposition (SLIM-MAUD) model, relies on ratings of the human actions with respect to certain qualitative factors and the subsequent derivation of error rates. These models are evaluated with respect to two criteria: the treatment of uncertainties and the internal coherence of the models. In other words, this evaluation focuses primarily on normative aspects of these models. The principal findings are as follows: (1) Both of the time-related models provide human error rates as a function of the available time for action and the prevailing conditions. However, the HCR model ignores the important issue of state-of-knowledge uncertainties, dealing exclusively with stochastic uncertainty, whereas the model presented in the NRC handbook handles both types of uncertainty. (2) SLIM-MAUD provides a highly structured approach for the derivation of human error rates under given conditions. However, the treatment of the weights and ratings in this model is internally inconsistent. (author)

  16. Evaluation of soft errors rate in a commercial memory EEPROM

    International Nuclear Information System (INIS)

    Claro, Luiz H.; Silva, A.A.; Santos, Jose A.

    2011-01-01

    Soft errors are transient circuit errors caused by external radiation. When an ion intercepts a p-n region in an electronic component, the ionization produces excess charges along the track. These charges, when collected, can flip internal values, especially in memory cells. The problem affects not only space applications but also terrestrial ones. Neutrons induced by cosmic rays and alpha particles, emitted from traces of radioactive contaminants contained in packaging and chip materials, are the predominant sources of radiation. Soft error susceptibility differs between memory technologies; hence, experimental studies are very important for Soft Error Rate (SER) evaluation. In this work, the methodology for accelerated tests is presented together with the results for SER in a commercial electrically erasable and programmable read-only memory (EEPROM). (author)

  17. The 95% confidence intervals of error rates and discriminant coefficients

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2015-02-01

    Fisher proposed a linear discriminant function (Fisher's LDF). From 1971, we analysed electrocardiogram (ECG) data in order to develop the diagnostic logic between normal and abnormal symptoms using Fisher's LDF and a quadratic discriminant function (QDF). Our four-year research effort was inferior to the decision-tree logic developed by the medical doctor. After this experience, we discriminated many data sets and found four problems with discriminant analysis. A Revised Optimal LDF by Integer Programming (Revised IP-OLDF) based on the minimum number of misclassifications (minimum NM) criterion resolves three of these problems entirely [13, 18]. In this research, we discuss the fourth problem of discriminant analysis: there are no standard errors (SEs) of the error rate and discriminant coefficients. We propose a k-fold cross-validation method. This method offers a model selection technique and 95% confidence intervals (CIs) of error rates and discriminant coefficients.
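
    The proposed k-fold procedure can be sketched as follows (a hedged, self-contained toy: a nearest-class-mean rule on synthetic one-dimensional data stands in for Fisher's LDF / Revised IP-OLDF, and a normal-approximation interval is one common choice for the 95% CI):

```python
import numpy as np

def kfold_error_rates(x, y, k=10, seed=0):
    """Estimate the classification error rate in each of k folds using
    a simple nearest-class-mean discriminant trained on the other folds."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(x)), k)
    rates = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        m0 = x[train][y[train] == 0].mean()
        m1 = x[train][y[train] == 1].mean()
        pred = (np.abs(x[test] - m1) < np.abs(x[test] - m0)).astype(int)
        rates.append(np.mean(pred != y[test]))
    return np.array(rates)

# Two synthetic classes separated by three standard deviations
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])
y = np.concatenate([np.zeros(200, int), np.ones(200, int)])
rates = kfold_error_rates(x, y)
mean, se = rates.mean(), rates.std(ddof=1) / np.sqrt(len(rates))
print(f"error rate {mean:.3f}, 95% CI ({mean - 1.96*se:.3f}, {mean + 1.96*se:.3f})")
```

    The spread of per-fold error rates is what supplies the standard error that a single train/test split cannot provide.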

  18. Assessment of salivary flow rate: biologic variation and measure error.

    NARCIS (Netherlands)

    Jongerius, P.H.; Limbeek, J. van; Rotteveel, J.J.

    2004-01-01

    OBJECTIVE: To investigate the applicability of the swab method in the measurement of salivary flow rate in multiple-handicap drooling children. To quantify the measurement error of the procedure and the biologic variation in the population. STUDY DESIGN: Cohort study. METHODS: In a repeated

  19. Color-motion feature-binding errors are mediated by a higher-order chromatic representation.

    Science.gov (United States)

    Shevell, Steven K; Wang, Wei

    2016-03-01

    Peripheral and central moving objects of the same color may be perceived to move in the same direction even though peripheral objects have a different true direction of motion [Nature 429, 262 (2004), doi:10.1038/429262a]. The perceived, illusory direction of peripheral motion is a color-motion feature-binding error. Recent work shows that such binding errors occur even without an exact color match between central and peripheral objects, and, moreover, the frequency of the binding errors in the periphery declines as the chromatic difference increases between the central and peripheral objects [J. Opt. Soc. Am. A 31, A60 (2014), doi:10.1364/JOSAA.31.000A60]. This change in the frequency of binding errors with the chromatic difference raises the general question of the chromatic representation from which the difference is determined. Here, basic properties of the chromatic representation are tested to discover whether it depends on independent chromatic differences on the l and the s cardinal axes or, alternatively, on a more specific higher-order chromatic representation. Experimental tests compared the rate of feature-binding errors when the central and peripheral colors had the identical s chromaticity (so zero difference in s) and a fixed magnitude of l difference, while varying the identical s level in center and periphery (thus always keeping the s difference at zero). A chromatic representation based on independent l and s differences would result in the same frequency of color-motion binding errors at every s level. The results are contrary to this prediction, thus showing that the chromatic representation at the level of color-motion feature binding depends on a higher-order chromatic mechanism.

  20. Estimating error rates for firearm evidence identifications in forensic science

    Science.gov (United States)

    Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan

    2018-01-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680
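
    The cell-division idea behind the CMC method can be caricatured with synthetic data (a deliberately simplified sketch: the real method also registers the images and checks the congruency of cell positions and rotations, which is omitted here):

```python
import numpy as np

def cmc_count(img_a, img_b, cells=4, corr_min=0.5):
    """Split two topography 'images' into horizontal bands of cells,
    correlate corresponding cells, and count pairs whose normalized
    correlation exceeds a similarity threshold."""
    count = 0
    for a, b in zip(np.array_split(img_a, cells), np.array_split(img_b, cells)):
        a = a.ravel() - a.mean()
        b = b.ravel() - b.mean()
        corr = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
        if corr > corr_min:
            count += 1
    return count

rng = np.random.default_rng(0)
surface = rng.normal(size=(64, 64))                    # reference topography
matching = surface + 0.3 * rng.normal(size=(64, 64))   # same tool mark + noise
non_matching = rng.normal(size=(64, 64))               # unrelated tool mark
print(cmc_count(surface, matching), cmc_count(surface, non_matching))
```

    A known match scores a high CMC count while a known non-match scores near zero; the separation between those two score distributions is what the error-rate models quantify.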

  1. Error rate performance of narrowband multilevel CPFSK signals

    Science.gov (United States)

    Ekanayake, N.; Fonseka, K. J. P.

    1987-04-01

    The paper presents a relatively simple method for analyzing the effect of IF filtering on the performance of multilevel FM signals. Using this method, the error rate performance of narrowband FM signals is analyzed for three different detection techniques, namely limiter-discriminator detection, differential detection, and coherent detection followed by differential decoding. The symbol error probabilities are computed for a Gaussian IF filter and a second-order Butterworth IF filter. It is shown that coherent detection with differential decoding yields better performance than limiter-discriminator detection and differential detection, whereas the two noncoherent detectors yield approximately identical performance.

  2. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)
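
    For purely depolarizing (Pauli) noise, the translation from average gate fidelity to error rate is a standard closed form; the gap for general, e.g. coherent, noise is what the paper's bound quantifies. A sketch of the depolarizing baseline (a textbook relation, not the paper's bound):

```python
def depolarizing_error_rate(avg_fidelity: float, dim: int = 2) -> float:
    """For a depolarizing channel on a dim-dimensional system, the
    probability of any nonidentity Pauli error relates to the average
    gate fidelity F by p = (dim + 1) / dim * (1 - F)."""
    return (dim + 1) / dim * (1.0 - avg_fidelity)

# A 99.9% average-fidelity single-qubit gate, if its noise were depolarizing
print(depolarizing_error_rate(0.999))
```

    For coherent errors the true worst-case error rate can be far larger than this baseline, which is why reported fidelities alone do not certify progress toward fault tolerance.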

  3. The nearest neighbor and the bayes error rates.

    Science.gov (United States)

    Loizou, G; Maybank, S J

    1987-02-01

    The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal then the asymptotic error rates satisfy the inequalities E_{k,l+1} ≤ E*(λ) ≤ E_{k,l} ≤ dE*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions E_{k,l} and dE*(λ) are equal.
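
    The classical k = 1, no-rejection version of this relationship, E* ≤ E_NN ≤ 2E*(1 − E*) asymptotically, can be checked by simulation (an illustrative Monte Carlo on a toy one-dimensional problem, not the paper's (k, l) analysis):

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)

def sample(n):
    """Two equiprobable 1-D Gaussian classes with means 0 and 2, unit variance."""
    y = rng.integers(0, 2, n)
    return rng.normal(2.0 * y, 1.0), y

xtr, ytr = sample(2000)
xte, yte = sample(2000)
# 1-nearest-neighbor classification of the test set
nearest = np.abs(xte[:, None] - xtr[None, :]).argmin(axis=1)
nn_err = np.mean(ytr[nearest] != yte)
# Bayes error for this problem is Phi(-1), the overlap of the two classes
bayes = 0.5 * (1.0 - erf(1.0 / np.sqrt(2.0)))
print(f"1-NN error {nn_err:.3f}, Bayes error {bayes:.3f}")
```

    The simulated nearest-neighbor error lands between the Bayes rate and twice the Bayes rate, as the asymptotic theory predicts.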

  4. CREME96 and Related Error Rate Prediction Methods

    Science.gov (United States)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects.
The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  5. Safe and effective error rate monitors for SS7 signaling links

    Science.gov (United States)

    Schmidt, Douglas C.

    1994-04-01

    This paper describes SS7 error monitor characteristics, discusses the existing SUERM (Signal Unit Error Rate Monitor), and develops the recently proposed EIM (Error Interval Monitor) for higher speed SS7 links. An SS7 error monitor is considered safe if it ensures acceptable link quality and is considered effective if it is tolerant to short-term phenomena. Formal criteria for safe and effective error monitors are formulated in this paper. The paper develops models of changeover transients, the unstable component of queue length resulting from errors. These models take the form of recursive digital filters: time is divided into sequential intervals, the filter's input is the number of errors which have occurred in each interval, and the output is the corresponding change in transmit queue length. Engineered EIMs are constructed by comparing an estimated changeover transient with a threshold T, using a transient model modified to enforce SS7 standards. When this estimate exceeds T, a changeover is initiated and the link is removed from service. EIMs differ from SUERMs in that EIMs monitor errors over an interval while SUERMs count errored messages. EIMs offer several advantages over SUERMs: they are safe and effective, impose uniform standards of link quality, are easily implemented, and make minimal use of real-time resources.
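The interval-based monitor described above can be caricatured as a first-order recursive filter fed with per-interval error counts. The decay constant and threshold below are illustrative stand-ins for the paper's engineered transient model, not values from the paper.

```python
def eim(interval_errors, decay=0.9, threshold=10.0):
    """Sketch of an error interval monitor (EIM) as a first-order recursive
    (leaky-integrator) digital filter: the input is the number of errors per
    interval, the output an estimated changeover transient. The link is taken
    out of service when the estimate crosses the threshold."""
    est = 0.0
    for e in interval_errors:
        est = decay * est + e   # recursive filter update
        if est > threshold:
            return True         # initiate changeover, remove link from service
    return False                # link stays in service

print(eim([1, 0, 2, 1, 0] * 4))   # False: sporadic errors decay away
print(eim([0, 0, 5, 6, 7, 8]))    # True: sustained burst trips the monitor
```

The decaying memory is what makes such a monitor tolerant to short-term phenomena (the "effective" criterion) while still reacting to sustained degradation (the "safe" criterion).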

  6. Estimating diversification rates for higher taxa: BAMM can give problematic estimates of rates and rate shifts.

    Science.gov (United States)

    Meyer, Andreas L S; Wiens, John J

    2018-01-01

    Estimates of diversification rates are invaluable for many macroevolutionary studies. Recently, an approach called BAMM (Bayesian Analysis of Macroevolutionary Mixtures) has become widely used for estimating diversification rates and rate shifts. At the same time, several articles have concluded that estimates of net diversification rates from the method-of-moments (MS) estimators are inaccurate. Yet no studies have compared the ability of these two methods to accurately estimate clade diversification rates. Here, we use simulations to compare their performance. We found that BAMM yielded relatively weak relationships between true and estimated diversification rates. This occurred because BAMM underestimated the number of rate shifts across each tree and assigned high rates to small clades with low rates. Errors in both speciation and extinction rates contributed to these errors, showing that using BAMM to estimate only speciation rates is also problematic. In contrast, the MS estimators (particularly using stem group ages) yielded stronger relationships between true and estimated diversification rates, by roughly twofold. Furthermore, the MS approach remained relatively accurate when diversification rates were heterogeneous within clades, despite the widespread assumption that it requires constant rates within clades. Overall, we caution that BAMM may be problematic for estimating diversification rates and rate shifts. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
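For context, the stem-age MS estimator referenced above reduces to a one-line formula, Magallón and Sanderson's r = ln[n(1 − ε) + ε]/t, where n is extant species richness, t the stem age, and ε the assumed relative extinction fraction. A minimal sketch (the example clade is hypothetical):

```python
import math

def ms_stem_rate(n_species, stem_age, eps=0.0):
    """Method-of-moments (MS) net diversification rate estimator
    (Magallon & Sanderson) using stem-group age. eps is the assumed
    relative extinction fraction (mu / lambda); eps = 0 is pure birth."""
    return math.log(n_species * (1.0 - eps) + eps) / stem_age

# Hypothetical clade: 100 extant species on a 50-Myr-old stem lineage.
print(round(ms_stem_rate(100, 50.0), 4))           # 0.0921 (pure birth)
print(round(ms_stem_rate(100, 50.0, eps=0.9), 4))  # 0.0478 (high extinction)
```

The estimator only needs a richness and an age, which is why it remains usable when within-clade rate variation makes fuller likelihood models fragile.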

  7. Modeling of Bit Error Rate in Cascaded 2R Regenerators

    DEFF Research Database (Denmark)

    Öhman, Filip; Mørk, Jesper

    2006-01-01

    This paper presents a simple and efficient model for estimating the bit error rate in a cascade of optical 2R-regenerators. The model includes the influences of amplifier noise, finite extinction ratio and nonlinear reshaping. The interplay between the different signal impairments and the regenerating nonlinearity is investigated. It is shown that an increase in nonlinearity can compensate for an increase in noise figure or decrease in signal power. Furthermore, the influence of the improvement in signal extinction ratio along the cascade and the importance of choosing the proper threshold are investigated.

  8. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    Energy Technology Data Exchange (ETDEWEB)

    R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences

  9. Minimizing Symbol Error Rate for Cognitive Relaying with Opportunistic Access

    KAUST Repository

    Zafar, Ammar

    2012-12-29

    In this paper, we present an optimal resource allocation (ORA) scheme for an all-participate (AP) cognitive relay network that minimizes the symbol error rate (SER). The SER is derived and different constraints are considered on the system: both individual and global power constraints, individual constraints only, and global constraints only. Numerical results show that the ORA scheme outperforms the schemes with the direct link only and with uniform power allocation (UPA) in terms of minimizing the SER for all three constraint cases. Numerical results also show that the individual-constraints-only case provides the best performance at large signal-to-noise ratio (SNR).

  10. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments

    Science.gov (United States)

    Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.

    2005-01-01

    This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on the FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human-in-the-Loop (HITL) studies of SATS HVO and baseline operations.

  11. Accelerated testing for cosmic soft-error rate

    International Nuclear Information System (INIS)

    Ziegler, J.F.; Muhlfeld, H.P.; Montrose, C.J.; Curtis, H.W.; O'Gorman, T.J.; Ross, J.M.

    1996-01-01

    This paper describes the experimental techniques which have been developed at IBM to determine the sensitivity of electronic circuits to cosmic rays at sea level. It relates IBM circuit design and modeling, chip manufacture with process variations, and chip testing for SER sensitivity. This vertical integration from design to final test, with feedback to design, allows a complete picture of LSI sensitivity to cosmic rays. Since advanced computers are designed with LSI chips long before the chips have been fabricated, and the system architecture is fully formed before the first chips are functional, it is essential to establish the chip reliability as early as possible. This paper establishes techniques to test chips that are only partly functional (e.g., only 1Mb of a 16Mb memory may be working) and can establish chip soft-error upset rates before final chip manufacturing begins. Simple relationships derived from measurement of more than 80 different chips manufactured over 20 years allow total cosmic soft-error rate (SER) to be estimated after only limited testing. Comparisons between these accelerated test results and similar tests determined by "field testing" (which may require a year or more of testing after manufacturing begins) show that the experimental techniques are accurate to a factor of 2.

  12. Error-rate performance analysis of opportunistic regenerative relaying

    KAUST Repository

    Tourki, Kamel

    2011-09-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path is considered unusable, and we take into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive the exact statistics of each hop in terms of the probability density function (PDF). The PDFs are then used to determine accurate closed-form expressions for the end-to-end bit error rate (BER) of binary phase-shift keying (BPSK) modulation, where the detector may use maximum ratio combining (MRC) or selection combining (SC). Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over a linear network (LN) architecture with Rayleigh fading channels. © 2011 IEEE.

  13. The decline and fall of Type II error rates

    Science.gov (United States)

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.

  14. Estimating the annotation error rate of curated GO database sequence annotations

    Directory of Open Access Journals (Sweden)

    Brown Alfred L

    2007-05-01

    Full Text Available Abstract Background Annotations that describe the function of sequences are enormously important to researchers during laboratory investigations and when making computational inferences. However, there has been little investigation into the data quality of sequence function annotations. Here we have developed a new method of estimating the error rate of curated sequence annotations, and applied this to the Gene Ontology (GO) sequence database (GOSeqLite). This method involved artificially adding errors to sequence annotations at known rates, and used regression to model the impact on the precision of annotations based on BLAST-matched sequences. Results We estimated the error rate of curated GO sequence annotations in the GOSeqLite database (March 2006) at between 28% and 30%. Annotations made without use of sequence-similarity-based methods (non-ISS) had an estimated error rate of between 13% and 18%. Annotations made with the use of sequence similarity methodology (ISS) had an estimated error rate of 49%. Conclusion While the overall error rate is reasonably low, it would be prudent to treat all ISS annotations with caution. Electronic annotators that use ISS annotations as the basis of predictions are likely to have higher false prediction rates, and for this reason designers of these systems should consider avoiding ISS annotations where possible. Electronic annotators that use ISS annotations to make predictions should be viewed sceptically. We recommend that curators thoroughly review ISS annotations before accepting them as valid. Overall, users of curated sequence annotations from the GO database should feel assured that they are using a comparatively high quality source of information.

  15. Parental Cognitive Errors Mediate Parental Psychopathology and Ratings of Child Inattention.

    Science.gov (United States)

    Haack, Lauren M; Jiang, Yuan; Delucchi, Kevin; Kaiser, Nina; McBurnett, Keith; Hinshaw, Stephen; Pfiffner, Linda

    2017-09-01

    We investigate the Depression-Distortion Hypothesis in a sample of 199 school-aged children with ADHD-Predominantly Inattentive presentation (ADHD-I) by examining relations and cross-sectional mediational pathways between parental characteristics (i.e., levels of parental depressive and ADHD symptoms) and parental ratings of child problem behavior (inattention, sluggish cognitive tempo, and functional impairment) via parental cognitive errors. Results demonstrated a positive association between parental factors and parental ratings of inattention, as well as a mediational pathway between parental depressive and ADHD symptoms and parental ratings of inattention via parental cognitive errors. Specifically, higher levels of parental depressive and ADHD symptoms predicted higher levels of cognitive errors, which in turn predicted higher parental ratings of inattention. Findings provide evidence for core tenets of the Depression-Distortion Hypothesis, which state that parents with high rates of psychopathology hold negative schemas for their child's behavior and subsequently, report their child's behavior as more severe. © 2016 Family Process Institute.

  16. Low dose rate gamma ray induced loss and data error rate of multimode silica fibre links

    International Nuclear Information System (INIS)

    Breuze, G.; Fanet, H.; Serre, J.

    1993-01-01

    Fiber-optic data transmission from numerous multiplexed sensors is potentially attractive for nuclear plant applications. Multimode silica fiber behaviour during steady-state gamma ray exposure is studied as a joint programme between LETI CE/SACLAY and EDF Renardieres: transmitted optical power and bit error rate have been measured on a 100 m optical fiber.

  17. Linear transceiver design for nonorthogonal amplify-and-forward protocol using a bit error rate criterion

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2014-04-01

    The ever-growing demand for higher data rates can now be addressed by exploiting cooperative diversity. This form of diversity has become a fundamental technique for achieving spatial diversity by exploiting the presence of idle users in the network. This has led to new challenges in terms of designing new protocols and detectors for cooperative communications. Among various amplify-and-forward (AF) protocols, the half-duplex non-orthogonal amplify-and-forward (NAF) protocol is superior to other AF schemes in terms of error performance and capacity. However, this superiority is achieved at the cost of higher receiver complexity. Furthermore, in order to exploit the full diversity of the system an optimal precoder is required. In this paper, an optimal joint linear transceiver is proposed for the NAF protocol. This transceiver operates on the principle of minimum bit error rate (BER), and is referred to as the joint bit error rate (JBER) detector. The BER performance of the JBER detector is superior to that of previously proposed linear detectors such as channel inversion, maximal ratio combining, the biased maximum likelihood detector, and the minimum mean square error detector. The proposed transceiver also outperforms previous precoders designed for the NAF protocol. © 2002-2012 IEEE.

  18. A Fast Soft Bit Error Rate Estimation Method

    Directory of Open Access Journals (Sweden)

    Ait-Idir Tarik

    2010-01-01

    Full Text Available We have suggested in a previous publication a method to estimate the Bit Error Rate (BER) of a digital communications system instead of using the famous Monte Carlo (MC) simulation. This method was based on the estimation of the probability density function (pdf) of soft observed samples, with the kernel method used for the pdf estimation. In this paper, we suggest using a Gaussian Mixture (GM) model instead. The Expectation-Maximisation algorithm is used to estimate the parameters of this mixture, and the optimal number of Gaussians is computed using mutual information theory. The analytical expression of the BER is then given simply in terms of the estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three mentioned methods: Monte Carlo, Kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or Kernel aided techniques. The results show that the GM method can drastically reduce the number of samples needed to estimate the BER, thereby reducing the required simulation run-time, even at very low BER.
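For context, the Monte Carlo baseline that the kernel and GM estimators are measured against simply counts detection errors on simulated soft samples. A sketch for BPSK over AWGN, with illustrative parameters (this is the conventional MC reference method, not the authors' GM estimator):

```python
import math
import random

def mc_ber_bpsk(snr_db, n_bits=200_000, seed=1):
    """Monte Carlo estimate of BPSK bit error rate over an AWGN channel:
    transmit +1, add Gaussian noise, and count sign flips of the soft
    samples. This is the baseline that pdf-based (kernel / GM) estimators
    aim to beat in sample efficiency at low BER."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)            # Eb/N0 as a linear ratio
    sigma = math.sqrt(1 / (2 * snr))     # noise std for unit-energy symbols
    errors = sum(1 for _ in range(n_bits)
                 if 1.0 + rng.gauss(0.0, sigma) < 0.0)
    return errors / n_bits

def ber_theory(snr_db):
    """Closed-form BPSK BER: Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))."""
    return 0.5 * math.erfc(math.sqrt(10 ** (snr_db / 10)))

print(mc_ber_bpsk(4.0))   # close to the closed form below
print(ber_theory(4.0))    # ~0.0125
```

The drawback motivating the paper is visible in the sample count: at a BER of 1e-6, a bare MC estimate needs on the order of 1e8 bits for a stable answer.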

  19. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to improve when employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater-than-unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement-breaking channel, an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for the key rate if we constrain the forward error correction codes to operate at low word error rates.

  20. Analysis of gross error rates in operation of commercial nuclear power stations

    International Nuclear Information System (INIS)

    Joos, D.W.; Sabri, Z.A.; Husseiny, A.A.

    1979-01-01

    Experience in operation of US commercial nuclear power plants is reviewed over a 25-month period. The reports accumulated in that period on events of human error and component failure are examined to evaluate gross operator error rates. The impact of such errors on plant operation and safety is examined through the use of proper taxonomies of error, tasks and failures. Four categories of human errors are considered; namely, operator, maintenance, installation and administrative. The computed error rates are used to examine appropriate operator models for evaluation of operator reliability. Human error rates are found to be significant to a varying degree in both BWR and PWR. This emphasizes the import of considering human factors in safety and reliability analysis of nuclear systems. The results also indicate that human errors, and especially operator errors, do indeed follow the exponential reliability model. (Auth.)

  1. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    Science.gov (United States)

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

    A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method used in industry that targets zero error (3.4 errors per million events). The five main principles of Six Sigma are define, measure, analyse, improve and control. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology for error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, the administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors across the preanalytical, analytical and postanalytical phases was analysed. Improvement strategies were proposed in the monthly intradepartmental meetings, and the units with high error rates were kept under control. Fifty-six (52.4%) of the 107 recorded errors occurred in the preanalytical phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as postanalytical. Two of the 45 analytical errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, a decrease of 79.77%. The Six Sigma trial in our pathology laboratory reduced error rates, mainly in the preanalytical and analytical phases.
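The metric behind the "3.4 errors per million" target quoted above is defects per million opportunities (DPMO). A minimal sketch; the workload figure is hypothetical, since the abstract reports the error count but not the denominator of opportunities:

```python
def dpmo(defects, units, opportunities_per_unit=1):
    """Defects per million opportunities (DPMO), the basic Six Sigma metric.
    The 'zero error' target cited above corresponds to 3.4 DPMO."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical workload: 107 recorded errors over 1.5 million opportunities.
print(round(dpmo(107, 1_500_000), 1))  # 71.3
```

Tracking DPMO per phase (preanalytical, analytical, postanalytical) is what lets a laboratory see where the improvement effort pays off month by month.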

  2. Error rate of automated calculation for wound surface area using a digital photography.

    Science.gov (United States)

    Yang, S; Park, J; Lee, H; Lee, J B; Lee, B U; Oh, B H

    2018-02-01

    Although measuring wound size using digital photography is a quick and simple way to evaluate a skin wound, its accuracy has not been fully validated. We therefore investigated the error rate of our newly developed wound surface area calculation using digital photography. Using a smartphone and a digital single-lens reflex (DSLR) camera, four photographs of various sized wounds (diameter: 0.5-3.5 cm) were taken from a facial skin model together with color patches. The quantitative values of the wound areas were automatically calculated. The relative error (RE) of this method with regard to wound size and type of camera was analyzed. The RE of individual calculated areas ranged from 0.0329% (DSLR, diameter 1.0 cm) to 23.7166% (smartphone, diameter 2.0 cm). In spite of the correction for lens curvature, the smartphone had a significantly higher error rate than the DSLR camera (DSLR 3.9431±2.9772 vs smartphone 8.1303±4.8236). However, for wound diameters below 3 cm, the REs of the average values of the four photographs were below 5%. In addition, there was no difference in the average value of wound area taken by smartphone and DSLR camera in those cases. For the follow-up of small skin defects (diameter <3 cm), our newly developed automated wound area calculation method can therefore be applied to multiple photographs, and their average value is a relatively useful index of wound healing with an acceptable error rate. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
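The relative error metric used above, and the study's trick of averaging several photographs of the same wound, can be sketched as follows. The sample measurements are hypothetical:

```python
def relative_error(measured_area, true_area):
    """Relative error (%) of an automated wound-area measurement."""
    return abs(measured_area - true_area) / true_area * 100

def averaged_re(measurements, true_area):
    """RE of the mean of several photographs of the same wound; the study
    found this stays below 5% for wounds under 3 cm in diameter."""
    mean = sum(measurements) / len(measurements)
    return relative_error(mean, true_area)

# Hypothetical: four photos of a wound whose true area is 3.14 cm^2.
photos = [3.05, 3.30, 3.00, 3.25]
print(round(averaged_re(photos, 3.14), 2))  # 0.32
```

Averaging works because the per-photo errors partially cancel, which is why the mean of four photographs is far more reliable than any single shot.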

  3. Bit error rate analysis of free-space optical communication over general Malaga turbulence channels with pointing error

    KAUST Repository

    Alheadary, Wael Ghazy

    2016-12-24

    In this work, we present the bit error rate (BER) and achievable spectral efficiency (ASE) performance of a free-space optical (FSO) link with pointing errors based on intensity modulation/direct detection (IM/DD) and heterodyne detection over a general Malaga turbulence channel. More specifically, we present exact closed-form expressions for adaptive and non-adaptive transmission. The closed-form expressions are presented in terms of generalized power series of the Meijer G-function. Moreover, asymptotic closed-form expressions are provided to validate our work. In addition, all the presented analytical results are illustrated using a selected set of numerical results.

  4. Agreeableness and Conscientiousness as Predictors of University Students' Self/Peer-Assessment Rating Error

    Science.gov (United States)

    Birjandi, Parviz; Siyyari, Masood

    2016-01-01

    This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…

  5. National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?

    Science.gov (United States)

    Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.

    2010-01-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…

  6. Bit Error Rate Minimizing Channel Shortening Equalizers for Single Carrier Cyclic Prefixed Systems

    National Research Council Canada - National Science Library

    Martin, Richard K; Vanbleu, Koen; Ysebaert, Geert

    2007-01-01

    .... Previous work on channel shortening has largely been in the context of digital subscriber lines, a wireline system that allows bit allocation, thus it has focused on maximizing the bit rate for a given bit error rate (BER...

  7. Framed bit error rate testing for 100G ethernet equipment

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2010-01-01

    rate. As the need for 100 Gigabit Ethernet equipment rises, so does the need for equipment, which can properly test these systems during development, deployment and use. This paper presents early results from a work-in-progress academia-industry collaboration project and elaborates on the challenges...

  8. The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency.

    Science.gov (United States)

    Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ

    2012-01-01

    This study explored the relationships of oral reading speed and error rate on comprehension with second and third grade students with identified reading risk. The study included 920 2nd graders and 974 3rd graders. Participants were assessed using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and the Woodcock Reading Mastery Test (WRMT) Passage Comprehension subtest. Results from this study further illuminate the significant relationships between error rate, oral reading fluency, and reading comprehension performance, and grade-specific guidelines for appropriate error rate levels. Low oral reading fluency and high error rates predict the level of passage comprehension performance. For second grade students below benchmark, a fall assessment error rate of 28% predicts that student comprehension performance will be below average. For third grade students below benchmark, the fall assessment cut point is 14%. Instructional implications of the findings are discussed.

  9. Dispensing error rate after implementation of an automated pharmacy carousel system.

    Science.gov (United States)

    Oswald, Scott; Caldwell, Richard

    2007-07-01

    A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing error rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and dispensing error rates had increased. The filling rate for automated dispensing cabinets was associated with the largest decrease in errors. Filling and dispensing error rates had decreased by December 2005. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.

  10. Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.

    Science.gov (United States)

    Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

    2013-08-01

    Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.

  11. Rich or poor: Who should pay higher tax rates?

    Science.gov (United States)

    Murilo Castro de Oliveira, Paulo

    2017-08-01

    A dynamic agent model is introduced with an annual random wealth multiplicative process followed by taxes paid according to a linear wealth-dependent tax rate. If poor agents pay higher tax rates than rich agents, eventually all wealth becomes concentrated in the hands of a single agent. By contrast, if poor agents are subject to lower tax rates, the economic collective process continues forever.
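A toy simulation in the spirit of the model described above: multiplicative annual wealth shocks followed by a tax whose rate depends linearly on the agent's relative wealth. The 10%-30% rate range, the shock distribution, and all other parameters are illustrative assumptions, not the paper's values.

```python
import random

def top_share(years=500, agents=100, poor_pay_more=True, seed=7):
    """Toy agent model: each year every agent's wealth is multiplied by a
    random factor, then taxed at a rate that varies linearly between 10%
    and 30% with the agent's relative wealth. Returns the share of total
    wealth held by the richest agent at the end."""
    rng = random.Random(seed)
    w = [1.0] * agents
    for _ in range(years):
        w = [x * rng.uniform(0.8, 1.25) for x in w]       # random annual shock
        lo, hi = min(w), max(w)
        for i, x in enumerate(w):
            frac = (x - lo) / (hi - lo) if hi > lo else 0.5  # 0 = poorest, 1 = richest
            rate = 0.3 - 0.2 * frac if poor_pay_more else 0.1 + 0.2 * frac
            w[i] = x * (1.0 - rate)                       # pay the tax
    return max(w) / sum(w)

print(top_share(poor_pay_more=True))   # wealth concentrates toward one agent
print(top_share(poor_pay_more=False))  # wealth stays far more evenly spread
```

Taxing the poor at higher rates creates a positive feedback loop (the rich compound faster after tax), while taxing the rich more acts as a restoring force on the wealth distribution, which is the qualitative dichotomy the abstract reports.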

  12. Errors of car wheels rotation rate measurement using roller follower on test benches

    Science.gov (United States)

    Potapov, A. S.; Svirbutovich, O. A.; Krivtsov, S. N.

    2018-03-01

    The article deals with errors in measuring wheel rotation rate, which depend on vehicle speed, on roller test benches. Monitoring of vehicle performance under operating conditions is performed on roller test benches, which are not flawless: they have drawbacks affecting the accuracy of vehicle performance monitoring. An increase in the base velocity of the vehicle requires an increase in the accuracy of wheel rotation rate monitoring, which determines how accurately the operating mode of a tested vehicle's wheel can be identified. Ensuring measurement accuracy for the rotation velocity of the rollers is not an issue; the problem arises when measuring the rotation velocity of a car wheel. The higher the rotation velocity of the wheel, the lower the accuracy of measurement. At present, wheel rotation frequency monitoring on roller test benches is carried out by follow-up systems whose sensors are rollers that follow wheel rotation. The rollers of the system are not kinematically linked to the supporting rollers of the test bench; the roller follower is forced against the wheels of the tested vehicle by a spring-lever mechanism. Experience with this test bench equipment has shown that measurement accuracy is satisfactory at the low speeds of vehicles diagnosed on roller test benches. At higher diagnostic speeds, rotation velocity measurement errors occur in both braking and pulling modes because the roller slips on the tire tread. The paper shows oscillograms of changes in wheel rotation velocity and of the rotation velocity measurement system's signals when testing a vehicle on roller test benches at specified speeds.

  13. Double symbol error rates for differential detection of narrow-band FM

    Science.gov (United States)

    Simon, M. K.

    1985-01-01

    This paper evaluates the double symbol error rate (average probability of two consecutive symbol errors) in differentially detected narrow-band FM. Numerical results are presented for the special case of MSK with a Gaussian IF receive filter. It is shown that, not unlike similar results previously obtained for the single error probability of such systems, large inaccuracies in predicted performance can occur when intersymbol interference is ignored.

  14. Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.

    Science.gov (United States)

    Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J

    2012-08-01

    Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which, 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.

  15. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    Science.gov (United States)

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however, prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  16. Error rates in forensic DNA analysis: Definition, numbers, impact and communication

    NARCIS (Netherlands)

    Kloosterman, A.; Sjerps, M.; Quak, A.

    2014-01-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and

  17. Soft error rate simulation and initial design considerations of neutron intercepting silicon chip (NISC)

    Science.gov (United States)

    Celik, Cihangir

    Advances in microelectronics result in sub-micrometer electronic technologies as predicted by Moore's Law (1965), which states that the number of transistors in a given space would double every two years. The most available memory architectures today have submicrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power-supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in the micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or Soft Error. In contrast to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano

  18. Classification based upon gene expression data: bias and precision of error rates.

    Science.gov (United States)

    Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L

    2007-06-01

    Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
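    The optimization bias on a non-informative dataset can be sketched with scikit-learn (an assumption here; the dataset, classifier and grid are illustrative). Reporting the best score found while tuning is optimistic, whereas two-level (nested) external cross-validation re-tunes inside every outer training fold:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 200))   # pure-noise "expression" matrix
y = np.repeat([0, 1], 50)         # labels carry no information about X

inner = StratifiedKFold(5, shuffle=True, random_state=1)
outer = StratifiedKFold(5, shuffle=True, random_state=2)
grid = GridSearchCV(SVC(kernel="linear"), {"C": [0.01, 0.1, 1, 10]}, cv=inner)

# Optimistically biased: report the best accuracy found while tuning
grid.fit(X, y)
optimistic = grid.best_score_

# Two-level (nested) external CV: tuning is repeated inside every outer
# training fold, so the outer estimate is untouched by model selection
honest = cross_val_score(grid, X, y, cv=outer).mean()
```

    On average the tuning score overshoots chance (0.5 here) while the nested estimate hovers around it, which is the downward bias in reported error rates that the permutation-mean check is designed to flag.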

  19. Time-discrete higher order ALE formulations: a priori error analysis

    KAUST Repository

    Bonito, Andrea

    2013-03-16

    We derive optimal a priori error estimates for discontinuous Galerkin (dG) time discrete schemes of any order applied to an advection-diffusion model defined on moving domains and written in the Arbitrary Lagrangian Eulerian (ALE) framework. Our estimates hold without any restrictions on the time steps for dG with exact integration or Reynolds' quadrature. They involve a mild restriction on the time steps for the practical Runge-Kutta-Radau methods of any order. The key ingredients are the stability results shown earlier in Bonito et al. (Time-discrete higher order ALE formulations: stability, 2013) along with a novel ALE projection. Numerical experiments illustrate and complement our theoretical results. © 2013 Springer-Verlag Berlin Heidelberg.

  20. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.; Ghaeb, Jasim A.; Jazzar, Saleh; Saraereh, Omar A.

    2012-01-01

    In this paper, we derived an efficient simulation method to evaluate the error rate of wireless communication system. Coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate

  1. Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.

    Science.gov (United States)

    Reddon, John R.; And Others

    1985-01-01

    Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)

  2. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    Science.gov (United States)

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)
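    A Monte Carlo Type I error estimate of this kind can be sketched with SciPy, whose `levene` function with `center="median"` implements the Brown-Forsythe procedure (sample sizes and simulation counts below are illustrative choices, not those of the study):

```python
import numpy as np
from scipy.stats import levene  # center="median" gives Brown-Forsythe

def type1_error(n_sims=2000, n_per_group=20, alpha=0.05, seed=0):
    """Monte Carlo Type I error rate of the Brown-Forsythe test of scale:
    all groups are drawn from the same normal, so every rejection is an
    error, and the rejection fraction should sit near alpha."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        groups = [rng.normal(0.0, 1.0, n_per_group) for _ in range(3)]
        _, p = levene(*groups, center="median")
        rejections += (p < alpha)
    return rejections / n_sims

estimated_alpha = type1_error()
```

    Swapping the sampling distribution for a skewed or heavy-tailed one, or aligning the data by group medians first, reproduces the kinds of comparisons the record describes.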

  3. The study of error for analysis in dynamic image from the error of count rates in Nal (Tl) scintillation camera

    International Nuclear Information System (INIS)

    Oh, Joo Young; Kang, Chun Goo; Kim, Jung Yul; Oh, Ki Baek; Kim, Jae Sam; Park, Hoon Hee

    2013-01-01

    This study aimed to evaluate the effect of T1/2 on count rates in the analysis of dynamic scans using a NaI(Tl) scintillation camera, and to suggest a new quality control method based on these effects. We produced point sources of 18.5 to 185 MBq 99mTcO4- in 2 mL syringes and acquired 30 frames of dynamic images at 10 to 60 seconds each using an Infinia gamma camera (GE, USA). In the second experiment, 90 frames of dynamic images were acquired from a 74 MBq point source with 5 gamma cameras (Infinia 2, Forte 2, Argus 1). There were no significant differences in the average count rates of the sources with 18.5 to 92.5 MBq in the analysis of 10 to 60 seconds/frame at 10-second intervals in the first experiment (p>0.05), but average count rates were significantly lower for sources over 111 MBq at 60 seconds/frame (p<0.01). According to the linear regression analysis of count rates from the 5 gamma cameras acquired over 90 minutes, the counting efficiency of the fourth gamma camera was the lowest, at 0.0064%, and its gradient and coefficient of variation were the highest, at 0.0042 and 0.229, respectively. We found no abnormal fluctuation in the χ2 test of count rates (p>0.02), and Levene's F-test showed homogeneity of variance among the gamma cameras (p>0.05). In the correlation analysis, the only significant relationship was a negative correlation between counting efficiency and gradient (r=-0.90, p<0.05). Lastly, when the T1/2 error was calculated for gradient changes from -0.25% to +0.25%, the error increased with longer T1/2 and higher gradient. For the fourth camera, which had the highest gradient, no T1/2 error was seen within 60 minutes. In conclusion, scintillation gamma cameras in the medical field require strict quality management of radiation measurement. Especially, we found a

  4. [The effectiveness of error reporting promoting strategy on nurse's attitude, patient safety culture, intention to report and reporting rate].

    Science.gov (United States)

    Kim, Myoungsoo

    2010-04-01

    The purpose of this study was to examine the impact of strategies to promote reporting of errors on nurses' attitude to reporting errors, organizational culture related to patient safety, intention to report and reporting rate in hospital nurses. A nonequivalent control group non-synchronized design was used for this study. The program was developed and then administered to the experimental group for 12 weeks. Data were analyzed using descriptive analysis, X(2)-test, t-test, and ANCOVA with the SPSS 12.0 program. After the intervention, the experimental group showed significantly higher scores for nurses' attitude to reporting errors (experimental: 20.73 vs control: 20.52, F=5.483, p=.021) and reporting rate (experimental: 3.40 vs control: 1.33, F=1998.083), but no significant differences were found for organizational culture and intention to report. The study findings indicate that strategies that promote reporting of errors play an important role in producing positive attitudes to reporting errors and improving reporting behavior. Further advanced strategies for reporting errors that can lead to improved patient safety should be developed and applied in a broad range of hospitals.

  5. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin

    2013-05-24

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  6. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin; Genton, Marc G.

    2013-01-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  7. Higher rates of sex evolve in spatially heterogeneous environments.

    Science.gov (United States)

    Becks, Lutz; Agrawal, Aneil F

    2010-11-04

    The evolution and maintenance of sexual reproduction has puzzled biologists for decades. Although this field is rich in hypotheses, experimental evidence is scarce. Some important experiments have demonstrated differences in evolutionary rates between sexual and asexual populations; other experiments have documented evolutionary changes in phenomena related to genetic mixing, such as recombination and selfing. However, direct experiments of the evolution of sex within populations are extremely rare (but see ref. 12). Here we use the rotifer, Brachionus calyciflorus, which is capable of both sexual and asexual reproduction, to test recent theory predicting that there is more opportunity for sex to evolve in spatially heterogeneous environments. Replicated experimental populations of rotifers were maintained in homogeneous environments, composed of either high- or low-quality food habitats, or in heterogeneous environments that consisted of a mix of the two habitats. For populations maintained in either type of homogeneous environment, the rate of sex evolves rapidly towards zero. In contrast, higher rates of sex evolve in populations experiencing spatially heterogeneous environments. The data indicate that the higher level of sex observed under heterogeneity is not due to sex being less costly or selection against sex being less efficient; rather sex is sufficiently advantageous in heterogeneous environments to overwhelm its inherent costs. Counter to some alternative theories for the evolution of sex, there is no evidence that genetic drift plays any part in the evolution of sex in these populations.

  8. High speed and adaptable error correction for megabit/s rate quantum key distribution.

    Science.gov (United States)

    Dixon, A R; Sato, H

    2014-12-02

    Quantum key distribution (QKD) is moving from its theoretical foundation of unconditional security toward real-world installations. A significant part of this move is the orders-of-magnitude increase in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.

  9. Type-II generalized family-wise error rate formulas with application to sample size determination.

    Science.gov (United States)

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
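    As an illustration of the Monte Carlo strategy the record compares against, r-power can be simulated directly. This sketch assumes independent endpoints and the single-step rule that rejects a hypothesis when its p-value is at most q·α/m (a generalized Bonferroni procedure); all parameter values are illustrative:

```python
import math
import random
from statistics import NormalDist

def r_power(n, m=5, m1=3, r=2, delta=0.4, alpha=0.05, q=2,
            sims=4000, seed=7):
    """Monte Carlo r-power: probability of rejecting at least r of the m1
    false nulls under the single-step q-gFWER rule p_i <= q*alpha/m."""
    rng = random.Random(seed)
    # two-sided per-test critical value at level q*alpha/m
    zcrit = NormalDist().inv_cdf(1 - q * alpha / (2 * m))
    hits = 0
    for _ in range(sims):
        rejected = 0
        for _ in range(m1):
            # two-sample z-statistic, n subjects per arm, effect size delta
            z = rng.gauss(delta * math.sqrt(n / 2), 1.0)
            if abs(z) > zcrit:
                rejected += 1
        if rejected >= r:
            hits += 1
    return hits / sims

# smallest per-arm sample size reaching 80% r-power
n = 20
while r_power(n) < 0.80:
    n += 5
```

    The paper's closed-form approach replaces this inner simulation loop, which is exactly where the computation-time comparison in the abstract comes from.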

  10. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    Energy Technology Data Exchange (ETDEWEB)

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A. [Canis Lupus LLC and Department of Human Oncology, University of Wisconsin, Merrimac, Wisconsin 53561 (United States); Department of Medical Physics, University of Wisconsin, Madison, Wisconsin 53705 (United States); Departments of Human Oncology, Medical Physics, and Biomedical Engineering, University of Wisconsin, Madison, Wisconsin 53792 (United States)

    2011-02-15

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa

  11. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    International Nuclear Information System (INIS)

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-01-01

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa. Conclusions: There is a lack of correlation between

  12. A novel multitemporal InSAR model for joint estimation of deformation rates and orbital errors

    KAUST Repository

    Zhang, Lei

    2014-06-01

    Orbital errors, characterized typically as long-wavelength artifacts, commonly exist in interferometric synthetic aperture radar (InSAR) imagery as a result of inaccurate determination of the sensor state vector. Orbital errors degrade the precision of multitemporal InSAR products (i.e., ground deformation). Although research on orbital error reduction has been ongoing for nearly two decades and several algorithms for reducing the effect of the errors are already in existence, the errors cannot always be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long-wavelength ground motion signal from the orbital error even when the two types of signals exhibit similar spatial patterns. The proposed algorithm is efficient and requires no ground control points. In addition, the method is built upon wrapped phases of interferograms, eliminating the need for phase unwrapping. The performance of the proposed model is validated using both simulated and real data sets. The demo codes of the proposed model are also provided for reference. © 2013 IEEE.

  13. Voice recognition versus transcriptionist: error rates and productivity in MRI reporting.

    Science.gov (United States)

    Strahan, Rodney H; Schneider-Kolsky, Michal E

    2010-10-01

    Despite the frequent introduction of voice recognition (VR) into radiology departments, little evidence still exists about its impact on workflow, error rates and costs. We designed a study to compare typographical errors, turnaround times (TAT) from reported to verified and productivity for VR-generated reports versus transcriptionist-generated reports in MRI. Fifty MRI reports generated by VR and 50 finalized MRI reports generated by the transcriptionist, of two radiologists, were sampled retrospectively. Two hundred reports were scrutinised for typographical errors and the average TAT from dictated to final approval. To assess productivity, the average MRI reports per hour for one of the radiologists was calculated using data from extra weekend reporting sessions. Forty-two percent and 30% of the finalized VR reports for each of the radiologists investigated contained errors. Only 6% and 8% of the transcriptionist-generated reports contained errors. The average TAT for VR was 0 h, and for the transcriptionist reports TAT was 89 and 38.9 h. Productivity was calculated at 8.6 MRI reports per hour using VR and 13.3 MRI reports using the transcriptionist, representing a 55% increase in productivity. Our results demonstrate that VR is not an effective method of generating reports for MRI. Ideally, we would have the report error rate and productivity of a transcriptionist and the TAT of VR. © 2010 The Authors. Journal of Medical Imaging and Radiation Oncology © 2010 The Royal Australian and New Zealand College of Radiologists.

  14. Voice recognition versus transcriptionist: error rates and productivity in MRI reporting

    International Nuclear Information System (INIS)

    Strahan, Rodney H.; Schneider-Kolsky, Michal E.

    2010-01-01

    Full text: Purpose: Despite the frequent introduction of voice recognition (VR) into radiology departments, little evidence still exists about its impact on workflow, error rates and costs. We designed a study to compare typographical errors, turnaround times (TAT) from reported to verified and productivity for VR-generated reports versus transcriptionist-generated reports in MRI. Methods: Fifty MRI reports generated by VR and 50 finalised MRI reports generated by the transcriptionist, of two radiologists, were sampled retrospectively. Two hundred reports were scrutinised for typographical errors and the average TAT from dictated to final approval. To assess productivity, the average MRI reports per hour for one of the radiologists was calculated using data from extra weekend reporting sessions. Results: Forty-two percent and 30% of the finalised VR reports for each of the radiologists investigated contained errors. Only 6% and 8% of the transcriptionist-generated reports contained errors. The average TAT for VR was 0 h, and for the transcriptionist reports TAT was 89 and 38.9 h. Productivity was calculated at 8.6 MRI reports per hour using VR and 13.3 MRI reports using the transcriptionist, representing a 55% increase in productivity. Conclusion: Our results demonstrate that VR is not an effective method of generating reports for MRI. Ideally, we would have the report error rate and productivity of a transcriptionist and the TAT of VR.

  15. Invariance of the bit error rate in the ancilla-assisted homodyne detection

    International Nuclear Information System (INIS)

    Yoshida, Yuhsuke; Takeoka, Masahiro; Sasaki, Masahide

    2010-01-01

    We investigate the minimum achievable bit error rate of the discrimination of binary coherent states with the help of arbitrary ancillary states. We adopt homodyne measurement with a common phase of the local oscillator and classical feedforward control. After one ancillary state is measured, its outcome is referred to the preparation of the next ancillary state and the tuning of the next mixing with the signal. It is shown that the minimum bit error rate of the system is invariant under the following operations: feedforward control, deformations, and introduction of any ancillary state. We also discuss the possible generalization of the homodyne detection scheme.
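    The gap that any such scheme must close can be illustrated numerically. Below is a minimal sketch (not the paper's ancilla-assisted protocol) comparing the plain homodyne bit error rate for the binary coherent states |±α⟩ with the quantum-optimal Helstrom bound, assuming real α and the convention that the vacuum quadrature variance is 1/4:

```python
import math

def q_func(x):
    """Upper-tail probability of a standard normal variable."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def homodyne_ber(alpha):
    # Quadrature outcomes are Gaussian with mean +/-alpha and variance 1/4,
    # so a sign threshold errs with probability Q(alpha / 0.5) = Q(2*alpha).
    return q_func(2 * alpha)

def helstrom_ber(alpha):
    # Helstrom bound for two pure states with overlap |<alpha|-alpha>|^2 = exp(-4 alpha^2).
    return 0.5 * (1 - math.sqrt(1 - math.exp(-4 * alpha ** 2)))

for a in (0.25, 0.5, 1.0):
    print(f"alpha={a}: homodyne BER={homodyne_ber(a):.3e}, "
          f"Helstrom bound={helstrom_ber(a):.3e}")
```

    The homodyne error rate always sits above the Helstrom bound; feedforward-assisted schemes of the kind studied here aim to narrow that gap.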

  16. Analytical expression for the bit error rate of cascaded all-optical regenerators

    DEFF Research Database (Denmark)

    Mørk, Jesper; Öhman, Filip; Bischoff, S.

    2003-01-01

    We derive an approximate analytical expression for the bit error rate of cascaded fiber links containing all-optical 2R-regenerators. A general analysis of the interplay between noise due to amplification and the degree of reshaping (nonlinearity) of the regenerator is performed.
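    The interplay described above can be illustrated with a toy Monte-Carlo model in which each fiber span adds Gaussian amplifier noise and an idealized 2R-regenerator applies a hard decision. This is a crude stand-in for the paper's analytical treatment (a real 2R device has only finite reshaping nonlinearity), and all parameters are illustrative:

```python
import random

random.seed(3)

N_SPANS = 10     # cascaded fiber spans (illustrative)
SIGMA = 0.35     # amplifier noise added per span (illustrative)
N_BITS = 100_000

def measure_ber(regenerate):
    errors = 0
    for _ in range(N_BITS):
        x = 1.0                               # transmit '1'; the channel is symmetric
        for _ in range(N_SPANS):
            x += random.gauss(0, SIGMA)       # amplified spontaneous emission noise
            if regenerate:
                x = 1.0 if x > 0 else -1.0    # ideal 2R: full reshaping each span
        if x <= 0:
            errors += 1
    return errors / N_BITS

ber_regen = measure_ber(True)
ber_plain = measure_ber(False)
print(f"BER with per-span 2R: {ber_regen:.4f}, without: {ber_plain:.4f}")
```

    With per-span regeneration, errors accumulate roughly linearly in the number of spans; without reshaping, the accumulated noise variance grows with the span count and the BER is far worse.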

  17. Power penalties for multi-level PAM modulation formats at arbitrary bit error rates

    Science.gov (United States)

    Kaliteevskiy, Nikolay A.; Wood, William A.; Downie, John D.; Hurley, Jason; Sterlingov, Petr

    2016-03-01

    There is considerable interest in combining multi-level pulsed amplitude modulation formats (PAM-L) and forward error correction (FEC) in next-generation, short-range optical communications links for increased capacity. In this paper we derive new formulas for the optical power penalties due to modulation format complexity relative to PAM-2 and due to inter-symbol interference (ISI). We show that these penalties depend on the required system bit-error rate (BER) and that the conventional formulas overestimate link penalties. Our corrections to the standard formulas are very small at conventional BER levels (typically 1×10⁻¹²) but become significant at the higher BER levels enabled by FEC technology, especially for signal distortions due to ISI. The standard formula for format complexity, P = 10log(L-1), is shown to overestimate the actual penalty for PAM-4 and PAM-8 by approximately 0.1 and 0.25 dB respectively at 1×10⁻³ BER. Then we extend the well-known PAM-2 ISI penalty estimation formula from the IEEE 802.3 standard 10G link modeling spreadsheet to the large BER case and generalize it for arbitrary PAM-L formats. To demonstrate and verify the BER dependence of the ISI penalty, a set of PAM-2 experiments and Monte-Carlo modeling simulations are reported. The experimental results and simulations confirm that the conventional formulas can significantly overestimate ISI penalties at relatively high BER levels. In the experiments, overestimates up to 2 dB are observed at 1×10⁻³ BER.
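    The BER dependence of the format-complexity penalty can be reproduced with a short numerical sketch. This assumes the textbook Gray-coded approximation BER ≈ 2(L-1)/(L·log₂L)·Q(a/σ) and mean optical power proportional to (L-1)·a; the paper's exact derivation may differ:

```python
import math

def q_func(x):
    """Upper-tail probability of a standard normal variable."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def q_inv(p):
    """Invert the Q-function by bisection on [0, 10]."""
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if q_func(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def pam_penalty_db(L, ber):
    """Power penalty of Gray-coded PAM-L relative to PAM-2 at a target BER,
    assuming BER ~ 2(L-1)/(L*log2 L) * Q(a/sigma), mean power ~ (L-1)*a."""
    a2 = q_inv(ber)                                      # PAM-2: BER = Q(a/sigma)
    aL = q_inv(ber * L * math.log2(L) / (2 * (L - 1)))   # required eye opening, PAM-L
    return 10 * math.log10((L - 1) * aL / a2)

for L in (4, 8):
    conv = 10 * math.log10(L - 1)   # conventional BER-independent formula
    print(f"PAM-{L} at BER=1e-3: conventional {conv:.2f} dB, "
          f"BER-dependent {pam_penalty_db(L, 1e-3):.2f} dB")
```

    At a BER of 1×10⁻³ this simple model reproduces overestimates of roughly 0.1 dB for PAM-4 and 0.25 dB for PAM-8, consistent with the figures quoted in the abstract.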

  18. Time-discrete higher order ALE formulations: a priori error analysis

    KAUST Repository

    Bonito, Andrea; Kyza, Irene; Nochetto, Ricardo H.

    2013-01-01

    We derive optimal a priori error estimates for discontinuous Galerkin (dG) time discrete schemes of any order applied to an advection-diffusion model defined on moving domains and written in the Arbitrary Lagrangian Eulerian (ALE) framework. Our

  19. Enabling Higher Data Rates for Planetary Science Missions

    Science.gov (United States)

    Deutsch, L. J.; Townes, S. A.; Lazio, J.; Bell, D. J.; Chahat, N. E.; Kovalik, J. M.; Kuperman, I.; Sauder, J.; Liebrecht, P. E.

    2017-12-01

    The data rate from deep space spacecraft has increased by more than 10 orders of magnitude since the first lunar missions in the 1960s. The demand for increased data rates has stemmed from the increasing sophistication of the science questions being addressed and the concomitant increase in the complexity of the missions themselves (from fly-by to orbit to land and rove). Projections for the next few decades suggest the demand for data rates for deep space missions will continue to increase by approximately one order of magnitude every decade, driven by these same factors. Achieving higher data rates requires a partnership between the spacecraft and the ground system. We describe a series of technology developments for flight telecommunications systems, both at radio frequency (RF) and optical, to enable spacecraft to transmit and receive larger data volumes. These technology developments include deployable high gain antennas for small spacecraft, re-programmable software-defined radios, and optical communication packages designed for CubeSat form factors. The intent is that these developments would provide enhancements in capability for both spacecraft-Earth and spacecraft-spacecraft telecommunications. We also describe the future planning for NASA's Deep Space Network (DSN), which remains the prime conduit for data from all planetary science missions. Through a combination of new antennas and backends being installed over the next five years and incorporation of optical communications, the DSN aims to ensure that the historical improvements in data rates and volumes will continue for many decades. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.

  20. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    Science.gov (United States)

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However, in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here, case-specific probabilities of undetected errors are needed.

  1. Error rates of a full-duplex system over EGK fading channels subject to laplacian interference

    KAUST Repository

    Soury, Hamza

    2017-07-31

    This paper develops a mathematical paradigm to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). Particularly, we study the dominant intra-cell interferer problem that appears between HD users scheduled on the same FD-channel. The distribution of the dominant interference is first characterized via its distribution function, which is derived in closed-form. Assuming Nakagami-m fading, the probability of error for different modulation schemes is studied and a unified closed-form expression for the average symbol error rate is derived. To this end, we show the effective downlink throughput gain, harvested by employing FD communication at a BS that serves HD users, as a function of the signal-to-interference-ratio when compared to an idealized HD interference and noise free BS operation.

  2. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates.

    Science.gov (United States)

    Fottrell, Edward; Byass, Peter; Berhane, Yemane

    2008-03-25

    As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. The low sensitivity of parameter estimates and regression analyses to significant amounts of
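    The error-injection approach can be sketched in a few lines. The toy cohort below is hypothetical (not Butajira data); it checks how much a 5% random sex-misclassification rate attenuates a true mortality rate ratio of 2.0:

```python
import random

random.seed(42)

# Hypothetical cohort: males die at twice the female rate, so the
# true mortality rate ratio (M vs F) is 2.0.
N = 100_000
cohort = []
for _ in range(N):
    sex = random.choice("MF")
    died = random.random() < (0.02 if sex == "M" else 0.01)
    cohort.append((sex, died))

def rate_ratio(rows):
    deaths = {"M": 0, "F": 0}
    persons = {"M": 0, "F": 0}
    for sex, died in rows:
        persons[sex] += 1
        deaths[sex] += died
    return (deaths["M"] / persons["M"]) / (deaths["F"] / persons["F"])

rr_clean = rate_ratio(cohort)

# Introduce 5% random sex-recording errors, mimicking the intentionally
# erroneous dataset versions used in the study.
noisy = [("F" if s == "M" else "M", d) if random.random() < 0.05 else (s, d)
         for s, d in cohort]
rr_noisy = rate_ratio(noisy)

print(f"rate ratio, clean data: {rr_clean:.2f}")
print(f"rate ratio, 5% errors:  {rr_noisy:.2f}")
```

    Random misclassification attenuates the estimate only modestly toward the null (from about 2.0 to roughly 1.86 in expectation), illustrating the robustness the study reports.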

  3. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates

    Directory of Open Access Journals (Sweden)

    Berhane Yemane

    2008-03-01

    Full Text Available Abstract Background As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. Methods This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. Results The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. Conclusion The low sensitivity of parameter

  4. Benefits and risks of using smart pumps to reduce medication error rates: a systematic review.

    Science.gov (United States)

    Ohashi, Kumiko; Dalleur, Olivia; Dykes, Patricia C; Bates, David W

    2014-12-01

    Smart infusion pumps have been introduced to prevent medication errors and have been widely adopted nationally in the USA, though they are not always used in Europe or other regions. Despite widespread usage of smart pumps, intravenous medication errors have not been fully eliminated. Through a systematic review of recent studies and reports regarding smart pump implementation and use, we aimed to identify the impact of smart pumps on error reduction and on the complex process of medication administration, and strategies to maximize the benefits of smart pumps. The medical literature related to the effects of smart pumps for improving patient safety was searched in PUBMED, EMBASE, and the Cochrane Central Register of Controlled Trials (CENTRAL) (2000-2014) and relevant papers were selected by two researchers. After the literature search, 231 papers were identified and the full texts of 138 articles were assessed for eligibility. Of these, 22 were included after removal of papers that did not meet the inclusion criteria. We assessed both the benefits and negative effects of smart pumps from these studies. One of the benefits of using smart pumps was intercepting errors such as the wrong rate, wrong dose, and pump setting errors. Other benefits include reduction of adverse drug event rates, practice improvements, and cost effectiveness. Meanwhile, the current issues or negative effects related to using smart pumps were lower compliance rates of using smart pumps, the overriding of soft alerts, non-intercepted errors, or the possibility of using the wrong drug library. The literature suggests that smart pumps reduce but do not eliminate programming errors. Although the hard limits of a drug library play a main role in intercepting medication errors, soft limits were still not as effective as hard limits because of high override rates. Compliance in using smart pumps is key towards effectively preventing errors. Opportunities for improvement include upgrading drug

  5. Kurzweil Reading Machine: A Partial Evaluation of Its Optical Character Recognition Error Rate.

    Science.gov (United States)

    Goodrich, Gregory L.; And Others

    1979-01-01

    A study designed to assess the ability of the Kurzweil reading machine (a speech reading device for the visually handicapped) to read three different type styles produced by five different means indicated that the machines tested had different error rates depending upon the means of producing the copy and upon the type style used. (Author/CL)

  6. A novel multitemporal insar model for joint estimation of deformation rates and orbital errors

    KAUST Repository

    Zhang, Lei; Ding, Xiaoli; Lu, Zhong; Jung, Hyungsup; Hu, Jun; Feng, Guangcai

    2014-01-01

    be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatio-temporal characteristics of the two types of signals. The proposed model is able to isolate a long

  7. Minimum Symbol Error Rate Detection in Single-Input Multiple-Output Channels with Markov Noise

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.

    2005-01-01

    Minimum symbol error rate detection in Single-Input Multiple-Output (SIMO) channels with Markov noise is presented. The special case of zero-mean Gauss-Markov noise is examined closer as it only requires knowledge of the second-order moments. In this special case, it is shown that optimal detection

  8. Error rates of a full-duplex system over EGK fading channels subject to laplacian interference

    KAUST Repository

    Soury, Hamza; Elsawy, Hesham; Alouini, Mohamed-Slim

    2017-01-01

    modulation schemes is studied and a unified closed-form expression for the average symbol error rate is derived. To this end, we show the effective downlink throughput gain, harvested by employing FD communication at a BS that serves HD users, as a function

  9. Errors of first-order probe correction for higher-order probes in spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Laitinen, Tommi; Nielsen, Jeppe Majlund; Pivnenko, Sergiy

    2004-01-01

    An investigation is performed to study the error of the far-field pattern determined from a spherical near-field antenna measurement in the case where a first-order (mu=+-1) probe correction scheme is applied to the near-field signal measured by a higher-order probe.

  10. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if, in addition, a modification of allocation ratios is allowed. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such designs can be calculated by searching for the "worst case" scenarios, that is, the sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
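    The "worst case" search can be illustrated with a Monte-Carlo sketch for a single treatment-control comparison, assuming a one-sided naive z-test: at the interim, an adversary picks the second-stage size (from a hypothetical candidate set) that maximizes the conditional rejection probability of the naive pooled statistic. Any fixed second-stage size would keep the level at the nominal 0.025; only the data-driven choice inflates it:

```python
import math, random

random.seed(0)

def q_func(x):
    """Upper-tail probability of a standard normal variable."""
    return 0.5 * math.erfc(x / math.sqrt(2))

n1 = 100                          # first-stage sample size (hypothetical)
crit = 1.96                       # naive one-sided critical value, nominal alpha = 0.025
candidates = [10, 100, 400, 900]  # allowed second-stage sizes (hypothetical)

reps = 200_000
total = 0.0
for _ in range(reps):
    z1 = random.gauss(0, 1)       # interim z-score under the null hypothesis
    s1 = math.sqrt(n1) * z1       # first-stage sum of observations
    # "Worst case" rule: choose the n2 that maximizes the conditional
    # probability that the naive pooled z-statistic exceeds crit.
    total += max(q_func((crit * math.sqrt(n1 + n2) - s1) / math.sqrt(n2))
                 for n2 in candidates)
alpha_hat = total / reps
print(f"naive type 1 error rate under worst-case adaptation: {alpha_hat:.4f}")
```

    Each fixed n2 alone yields exactly the nominal level, because the naive pooled statistic is then standard normal under the null; the adversarial sample-size rule roughly doubles it in this toy setting.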

  11. Reduction Rates for Higher Americium Oxidation States in Nitric Acid

    Energy Technology Data Exchange (ETDEWEB)

    Grimes, Travis Shane [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mincher, Bruce Jay [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmitt, Nicholas C [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-30

    The stability of hexavalent americium was measured using multiple americium concentrations and nitric acid concentrations after contact with the strong oxidant sodium bismuthate. Contrary to our hypotheses, Am(VI) was not reduced faster at higher americium concentrations, and the reduction was only zero-order at short time scales. Attempts to model the reduction kinetics using zero-order kinetic models showed Am(VI) reduction in nitric acid is more complex than the autoreduction processes reported by others in perchloric acid. The classical zero-order reduction of Am(VI) was found here only for short times, on the order of a few hours. We did show that the rate of Am(V) production was less than the rate of Am(VI) reduction, indicating that some Am(VI) undergoes two-electron reduction to Am(IV). We also monitored the Am(VI) reduction in contact with the organic diluent dodecane. A direct comparison of these results with those in the absence of the organic diluent showed the reduction rates for Am(VI) were not statistically different for both systems. Additional americium oxidations conducted in the presence of Ce(IV)/Ce(III) ions showed that Am(VI) is reduced without the typical growth of Am(V) observed in the systems sans Ce ion. This was an interesting result which suggests a potential new reduction/oxidation pathway for Am in the presence of Ce; however, these results were very preliminary and will require additional experiments to understand the mechanism by which this occurs. Overall, these studies have shown that hexavalent americium is fundamentally stable enough in nitric acid to run a separations process. However, the complicated nature of the reduction pathways based on the system components is far from being rigorously understood.

  12. Competence in Streptococcus pneumoniae is regulated by the rate of ribosomal decoding errors.

    Science.gov (United States)

    Stevens, Kathleen E; Chang, Diana; Zwack, Erin E; Sebert, Michael E

    2011-01-01

    Competence for genetic transformation in Streptococcus pneumoniae develops in response to accumulation of a secreted peptide pheromone and was one of the initial examples of bacterial quorum sensing. Activation of this signaling system induces not only expression of the proteins required for transformation but also the production of cellular chaperones and proteases. We have shown here that activity of this pathway is sensitively responsive to changes in the accuracy of protein synthesis that are triggered by either mutations in ribosomal proteins or exposure to antibiotics. Increasing the error rate during ribosomal decoding promoted competence, while reducing the error rate below the baseline level repressed the development of both spontaneous and antibiotic-induced competence. This pattern of regulation was promoted by the bacterial HtrA serine protease. Analysis of strains with the htrA (S234A) catalytic site mutation showed that the proteolytic activity of HtrA selectively repressed competence when translational fidelity was high but not when accuracy was low. These findings redefine the pneumococcal competence pathway as a response to errors during protein synthesis. This response has the capacity to address the immediate challenge of misfolded proteins through production of chaperones and proteases and may also be able to address, through genetic exchange, upstream coding errors that cause intrinsic protein folding defects. The competence pathway may thereby represent a strategy for dealing with lesions that impair proper protein coding and for maintaining the coding integrity of the genome. The signaling pathway that governs competence in the human respiratory tract pathogen Streptococcus pneumoniae regulates both genetic transformation and the production of cellular chaperones and proteases. The current study shows that this pathway is sensitively controlled in response to changes in the accuracy of protein synthesis. Increasing the error rate during

  13. Performance Analysis for Bit Error Rate of DS- CDMA Sensor Network Systems with Source Coding

    Directory of Open Access Journals (Sweden)

    Haider M. AlSabbagh

    2012-03-01

    Full Text Available The minimum energy (ME) coding scheme combined with a DS-CDMA wireless sensor network is analyzed with the aim of reducing the energy consumed and the multiple access interference (MAI) as the number of users (receivers) grows. Minimum energy coding exploits redundant bits to save power while utilizing an RF link with On-Off Keying modulation. The relevant relations are presented and discussed for several expected channel error levels, in terms of the bit error rate and the SNR for different numbers of users (receivers).
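    As a hedged illustration of how MAI degrades the bit error rate as users are added, the sketch below uses the standard Gaussian approximation for asynchronous DS-CDMA with BPSK and processing gain N. This is a textbook stand-in, not the paper's analysis of ME-coded On-Off Keying:

```python
import math

def q_func(x):
    """Upper-tail probability of a standard normal variable."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def dscdma_ber(K, N, ebno_db):
    """Standard Gaussian approximation for the BER of asynchronous DS-CDMA
    with BPSK, K active users and processing gain N. MAI from the K-1
    interferers is modeled as extra Gaussian noise of variance (K-1)/(3N)."""
    ebno = 10 ** (ebno_db / 10)
    return q_func(1.0 / math.sqrt((K - 1) / (3.0 * N) + 1.0 / (2.0 * ebno)))

for K in (1, 5, 10, 20):
    print(f"K={K:2d} users, N=127, Eb/N0=10 dB: BER={dscdma_ber(K, 127, 10):.2e}")
```

    For K=1 the expression collapses to the single-user bound Q(√(2·Eb/N0)); each added interferer raises the error floor.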

  14. FPGA-based Bit-Error-Rate Tester for SEU-hardened Optical Links

    CERN Document Server

    Detraz, S; Moreira, P; Papadopoulos, S; Papakonstantinou, I; Seif El Nasr, S; Sigaud, C; Soos, C; Stejskal, P; Troska, J; Versmissen, H

    2009-01-01

    The next generation of optical links for future High-Energy Physics experiments will require components qualified for use in radiation-hard environments. To cope with radiation induced single-event upsets, the physical layer protocol will include Forward Error Correction (FEC). Bit-Error-Rate (BER) testing is a widely used method to characterize digital transmission systems. In order to measure the BER with and without the proposed FEC, simultaneously on several devices, a multi-channel BER tester has been developed. This paper describes the architecture of the tester, its implementation in a Xilinx Virtex-5 FPGA device and discusses the experimental results.
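    The benefit that such a tester quantifies can be sketched in software: measure the BER over a binary-symmetric channel with and without a simple triple-redundancy majority-vote code. The majority vote is only a placeholder for the link's actual physical-layer FEC, and all parameters are illustrative:

```python
import random

random.seed(7)

P_FLIP = 0.01     # raw bit-flip probability of the channel (illustrative, SEU-like)
N_BITS = 100_000  # number of payload bits compared against the known pattern

def channel(bit):
    """Binary-symmetric channel: flips the bit with probability P_FLIP."""
    return bit ^ (random.random() < P_FLIP)

tx = [random.getrandbits(1) for _ in range(N_BITS)]

# Raw link: count mismatches between transmitted and received bits.
raw_errors = sum(channel(b) != b for b in tx)

# Placeholder "FEC": send each bit three times, majority-vote at the receiver.
fec_errors = sum((channel(b) + channel(b) + channel(b) >= 2) != b for b in tx)

ber_raw = raw_errors / N_BITS
ber_fec = fec_errors / N_BITS
print(f"BER without FEC: {ber_raw:.2e}, with majority vote: {ber_fec:.2e}")
```

    A decoded bit is wrong only when at least two of its three copies flip, so the coded BER falls from p to roughly 3p², which is the kind of improvement a multi-channel BER tester measures directly.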

  15. A minimum bit error-rate detector for amplify and forward relaying systems

    KAUST Repository

    Ahmed, Qasim Zeeshan; Alouini, Mohamed-Slim; Aissa, Sonia

    2012-01-01

    In this paper, a new detector is being proposed for amplify-and-forward (AF) relaying system when communicating with the assistance of L number of relays. The major goal of this detector is to improve the bit error rate (BER) performance of the system. The complexity of the system is further reduced by implementing this detector adaptively. The proposed detector is free from channel estimation. Our results demonstrate that the proposed detector is capable of achieving a gain of more than 1-dB at a BER of 10⁻⁵ as compared to the conventional minimum mean square error detector when communicating over a correlated Rayleigh fading channel. © 2012 IEEE.

  17. Error rate on the director's task is influenced by the need to take another's perspective but not the type of perspective.

    Science.gov (United States)

    Legg, Edward W; Olivier, Laure; Samuel, Steven; Lurz, Robert; Clayton, Nicola S

    2017-08-01

    Adults are prone to responding erroneously to another's instructions based on what they themselves see and not what the other person sees. Previous studies have indicated that in instruction-following tasks participants make more errors when required to infer another's perspective than when following a rule. These inference-induced errors may occur because the inference process itself is error-prone or because they are a side effect of the inference process. Crucially, if the inference process is error-prone, then higher error rates should be found when the perspective to be inferred is more complex. Here, we found that participants were no more error-prone when they had to judge how an item appeared (Level 2 perspective-taking) than when they had to judge whether an item could or could not be seen (Level 1 perspective-taking). However, participants were more error-prone in the perspective-taking variants of the task than in a version that only required them to follow a rule. These results suggest that having to represent another's perspective induces errors when following their instructions but that error rates are not directly linked to errors in inferring another's perspective.

  18. The assessment of cognitive errors using an observer-rated method.

    Science.gov (United States)

    Drapeau, Martin

    2014-01-01

    Cognitive Errors (CEs) are a key construct in cognitive behavioral therapy (CBT). Integral to CBT is that individuals with depression process information in an overly negative or biased way, and that this bias is reflected in specific depressotypic CEs which are distinct from normal information processing. Despite the importance of this construct in CBT theory, practice, and research, few methods are available to researchers and clinicians to reliably identify CEs as they occur. In this paper, the author presents a rating system, the Cognitive Error Rating Scale, which can be used by trained observers to identify and assess the cognitive errors of patients or research participants in vivo, i.e., as they are used or reported by the patients or participants. The method is described, including some of the more important rating conventions to be considered when using the method. This paper also describes the 15 cognitive errors assessed, and the different summary scores, including valence of the CEs, that can be derived from the method.

  19. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    Directory of Open Access Journals (Sweden)

    Wei He

    Full Text Available A method for evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10⁻³ errors/particle/cm², while the MTTF is approximately 110.7 h.

  20. Social motivation in prospective memory: higher importance ratings and reported performance rates for social tasks.

    Science.gov (United States)

    Penningroth, Suzanna L; Scott, Walter D; Freuen, Margaret

    2011-03-01

    Few studies have addressed social motivation in prospective memory (PM). In a pilot study and two main studies, we examined whether social PM tasks possess a motivational advantage over nonsocial PM tasks. In the pilot study and Study 1, participants listed their real-life important and less important PM tasks. Independent raters categorized the PM tasks as social or nonsocial. Results from both studies showed a higher proportion of tasks rated as social when important tasks were requested than when less important tasks were requested. In Study 1, participants also reported whether they had remembered to perform each PM task. Reported performance rates were higher for tasks rated as social than for those rated as nonsocial. Finally, in Study 2, participants rated the importance of two hypothetical PM tasks, one social and one nonsocial. The social PM task was rated higher in importance. Overall, these findings suggest that social PM tasks are viewed as more important than nonsocial PM tasks and that they are more likely to be performed. We propose that consideration of the social relevance of PM will lead to a more complete and ecologically valid theoretical description of PM performance. (PsycINFO Database Record (c) 2011 APA, all rights reserved).

  1. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Full Text Available Abstract Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error prone data such as expressed sequence tags, low coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error prone sequences. In particular, we advocate the new maximum likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
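
    The singleton-exclusion idea can be sketched numerically. Under the infinite sites model, E[S] = θ·a_n with a_n = Σ_{i=1}^{n-1} 1/i, and the expected number of singleton sites is θ, so dropping singletons suggests a Watterson-type estimator (S − ξ₁)/(a_n − 1). The code below is our reading of that construction, a sketch rather than the authors' implementation:

```python
def harmonic_number(n):
    """a_n = sum_{i=1}^{n-1} 1/i for a sample of n sequences."""
    return sum(1.0 / i for i in range(1, n))

def watterson_theta(n_segregating, n_sequences):
    """Classical Watterson estimator: theta = S / a_n."""
    return n_segregating / harmonic_number(n_sequences)

def theta_without_singletons(n_segregating, n_singletons, n_sequences):
    """Singleton-excluding estimator: since E[S] = theta * a_n and the
    expected singleton count is theta, E[S - xi_1] = theta * (a_n - 1),
    so dividing the non-singleton count by (a_n - 1) stays unbiased while
    discarding the sites most likely to be sequencing errors."""
    return (n_segregating - n_singletons) / (harmonic_number(n_sequences) - 1.0)
```

    For instance, with n = 5 sequences (a_5 = 25/12), 25 segregating sites of which 12 are singletons give an estimate of exactly 12.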

  2. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    Directory of Open Access Journals (Sweden)

    Jeffrey R Kugelman

    Full Text Available Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10^-5) of all compared methods.

  3. Practical error estimates for Reynolds' lubrication approximation and its higher order corrections

    Energy Technology Data Exchange (ETDEWEB)

    Wilkening, Jon

    2008-12-10

    Reynolds lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds equation may be thought of as the zeroth order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio ε of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a-priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is O(ε^(2k+2)) and h enters into the error bound only through its first and third inverse moments ∫₀¹ h(x)^(-m) dx, m = 1, 3, and via the max norms ‖(1/ℓ!) h^(ℓ-1) ∂ₓ^ℓ h‖_∞, 1 ≤ ℓ ≤ 2k + 2. We validate our estimates by comparing with finite element solutions and present numerical evidence that suggests that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.

  4. Error-Rate Bounds for Coded PPM on a Poisson Channel

    Science.gov (United States)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.
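
    As a point of comparison for the coded bounds discussed above, the uncoded hard-decision baseline on the same memoryless Poisson channel is easy to simulate: every PPM slot collects background photons, the signal slot adds the pulse, and the detector picks the slot with the largest count. This is an illustrative sketch under parameter names and values of our choosing, not the article's bound computation:

```python
import random
import math

def ppm_symbol_error_rate(M=16, ns=10.0, nb=0.2, trials=20000, seed=1):
    """Monte Carlo estimate of uncoded M-ary PPM symbol error rate on a
    memoryless Poisson channel: the signal slot adds a mean of ns photons
    on top of the background mean nb present in every slot; the detector
    picks the slot with the largest count (ties broken at random)."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplication method -- fine for the small means used here
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            k += 1
            p *= rng.random()
            if p <= limit:
                return k - 1

    errors = 0
    for _ in range(trials):
        counts = [poisson(nb) for _ in range(M)]
        counts[0] += poisson(ns)          # slot 0 carries the pulse
        best = max(counts)
        winners = [i for i, c in enumerate(counts) if c == best]
        if rng.choice(winners) != 0:
            errors += 1
    return errors / trials
```

    With a strong pulse (ns = 10) over a weak background (nb = 0.2), errors are rare, which is exactly the high-SNR regime where the article's analytical bounds replace slow simulations like this one.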

  5. Symbol error rate performance evaluation of the LM37 multimegabit telemetry modulator-demodulator unit

    Science.gov (United States)

    Malek, H.

    1981-01-01

    The LM37 multimegabit telemetry modulator-demodulator unit was tested for evaluation of its symbol error rate (SER) performance. Using an automated test setup, the SER tests were carried out at various symbol rates and signal-to-noise ratios (SNR), ranging from +10 to -10 dB. With the aid of a specially designed error detector and a stabilized signal and noise summation unit, measurement of the SER at low SNR was possible. The results of the tests show that at symbol rates below 20 megasymbols per second (MS/s) and input SNR above -6 dB, the SER performance of the modem is within the specified 0.65 to 1.5 dB of the theoretical error curve. At symbol rates above 20 MS/s, the specification is met at SNRs down to -2 dB. The results of the SER tests are presented with the description of the test setup and the measurement procedure.
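
    For reference, a "theoretical error curve" of the kind such measurements are compared against is, for ideal coherent antipodal signalling, SER = Q(√(2·SNR)). The LM37's exact reference curve is not given here, so treat this as an assumed stand-in:

```python
import math

def theoretical_ser(snr_db):
    """Ideal coherent antipodal symbol error rate:
    SER = Q(sqrt(2*SNR)) = 0.5 * erfc(sqrt(SNR)), with SNR in linear units."""
    snr = 10.0 ** (snr_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(snr))
```

    A measured curve lying "within 0.65 to 1.5 dB" of the theoretical one means the SNR the hardware needs to reach a given SER exceeds the ideal SNR by that margin.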

  6. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.

    2012-12-06

    In this paper, we derived an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate the system performance under very realistic Nakagami-m fading and additive white Gaussian noise channel conditions. The accuracy of the obtained results is verified by running the simulation with a 95% confidence interval. We see that as the number of simulation runs N increases, the simulated error rate becomes closer to the actual one and the confidence interval narrows. Hence our results are expected to be of significant practical use for such scenarios. © 2012 Springer Science+Business Media New York.
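
    The dependence of the confidence interval width on the number of runs N can be sketched with the standard normal-approximation interval for a Monte Carlo error-rate estimate; the paper's exact interval construction is not reproduced here, so this is an assumed textbook form:

```python
import math

def ber_confidence_interval(errors, n_runs, z=1.96):
    """Normal-approximation confidence interval for a Monte Carlo
    bit-error-rate estimate (z = 1.96 gives a 95% interval):
    p_hat +/- z * sqrt(p_hat * (1 - p_hat) / N), clipped to [0, 1]."""
    p = errors / n_runs
    half = z * math.sqrt(p * (1.0 - p) / n_runs)
    return max(0.0, p - half), min(1.0, p + half)
```

    Since the half-width shrinks like 1/√N, quadrupling the number of runs halves the interval, which is why simulating very low error rates is so expensive.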

  7. Novel relations between the ergodic capacity and the average bit error rate

    KAUST Repository

    Yilmaz, Ferkan

    2011-11-01

    Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems, and recent research has revealed the strong impact of these two performance indicators on the design and implementation of wireless technologies. However, to the best of our knowledge, direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.

  8. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    Science.gov (United States)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

    An accurate approach to compute the bit error rate expression for multiuser chaosbased DS-CDMA system is presented in this paper. For more realistic communication system a slow fading multipath channel is considered. A simple RAKE receiver structure is considered. Based on the bit energy distribution, this approach compared to others computation methods existing in literature gives accurate results with low computation charge. Perfect estimation of the channel coefficients with the associated delays and chaos synchronization is assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations which point out the accuracy of our approach.

  9. The type I error rate for in vivo Comet assay data when the hierarchical structure is disregarded

    DEFF Research Database (Denmark)

    Hansen, Merete Kjær; Kulahci, Murat

    the type I error rate is greater than the nominal α of 0.05. Closed-form expressions based on scaled F-distributions using the Welch-Satterthwaite approximation are provided to show how the type I error rate is affected. With this study we hope to motivate researchers to be more precise regarding......, and this imposes considerable impact on the type I error rate. This study aims to demonstrate the implications that result from disregarding the hierarchical structure. Different combinations of the factor levels as they appear in a literature study give type I error rates up to 0.51 and for all combinations...
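
    The Welch-Satterthwaite approximation mentioned above combines independent variance estimates into an effective degrees of freedom for the approximating (scaled F) distribution. A minimal sketch of the standard formula (function name ours):

```python
def welch_satterthwaite(variances, ns):
    """Welch-Satterthwaite effective degrees of freedom for a sum of
    independent sample-variance terms s_i^2 / n_i:
    df = (sum s_i^2/n_i)^2 / sum ((s_i^2/n_i)^2 / (n_i - 1))."""
    num = sum(v / n for v, n in zip(variances, ns)) ** 2
    den = sum((v / n) ** 2 / (n - 1) for v, n in zip(variances, ns))
    return num / den
```

    With equal variances and equal group sizes n, this recovers the familiar 2(n - 1) degrees of freedom of the pooled two-sample case.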

  10. Rate estimation in partially observed Markov jump processes with measurement errors

    OpenAIRE

    Amrein, Michael; Kuensch, Hans R.

    2010-01-01

    We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising for example in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced, which allow sampling from the posterior distribution of t...

  11. Improving Bayesian credibility intervals for classifier error rates using maximum entropy empirical priors.

    Science.gov (United States)

    Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders

    2010-06-01

    Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently many promising approaches for determination of an upper bound for the error rate of a single classifier have been reported, but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. The conventional Bayesian CI becomes unacceptably large in real world applications where the test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the result on the test examples. In other words, the uniform prior density distribution employed provides no information at all, reflecting complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to investigate a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle using empirical results from a few designs and tests using non-overlapping sets of examples. Experimental results show that ME based priors improve the CIs when applied to four quite different simulated and two real world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier. Copyright 2010 Elsevier B.V. All rights reserved.
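
    The conventional holdout CI the abstract starts from can be sketched as follows: with a Beta(a, b) prior over the unknown error rate, k errors in n test examples give a Beta(a + k, b + n - k) posterior, and the uniform prior criticised in the abstract is a = b = 1; a non-uniform (e.g. ME-derived) prior simply enters through a and b. The sampling-based interval below is our sketch, not the authors' code:

```python
import random

def bayesian_error_ci(n_errors, n_test, a=1.0, b=1.0, level=0.95,
                      draws=100000, seed=7):
    """Monte Carlo credibility interval for the unknown error rate after a
    holdout test. With a Beta(a, b) prior (a = b = 1 is the uninformative
    uniform prior), the posterior is Beta(a + errors, b + correct); we
    sample it and read off the central percentiles."""
    rng = random.Random(seed)
    post = sorted(rng.betavariate(a + n_errors, b + n_test - n_errors)
                  for _ in range(draws))
    lo_q = (1.0 - level) / 2.0
    return post[int(lo_q * draws)], post[int((1.0 - lo_q) * draws) - 1]
```

    For example, 5 errors on 100 test cases under the uniform prior give an interval of roughly (0.02, 0.11); an informative prior with larger a + b pseudo-counts narrows it, which is the effect the ME-derived priors aim for.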

  12. Comparing Response Times and Error Rates in a Simultaneous Masking Paradigm

    Directory of Open Access Journals (Sweden)

    F Hermens

    2014-08-01

    Full Text Available In simultaneous masking, performance on a foveally presented target is impaired by one or more flanking elements. Previous studies have demonstrated strong effects of the grouping of the target and the flankers on the strength of masking (e.g., Malania, Herzog & Westheimer, 2007). These studies have predominantly measured offset discrimination thresholds, and it is therefore unclear whether other measures of performance provide similar outcomes. A recent study, which examined the role of grouping on error rates and response times in a speeded vernier offset discrimination task similar to that used by Malania et al. (2007), suggested a possible dissociation between the two measures, with error rates mimicking threshold performance, but response times showing differential results (Panis & Hermens, 2014). We here report the outcomes of three experiments examining this possible dissociation, and demonstrate an overall similar pattern of results for error rates and response times across a broad range of mask layouts. Moreover, the pattern of results in our experiments strongly correlates with threshold performance reported earlier (Malania et al., 2007). Our results suggest that outcomes in a simultaneous masking paradigm do not critically depend on the outcome measure used, and therefore provide evidence for a common underlying mechanism.

  13. Tax revenue and inflation rate predictions in Banda Aceh using Vector Error Correction Model (VECM)

    Science.gov (United States)

    Maulia, Eva; Miftahuddin; Sofyan, Hizir

    2018-05-01

    A country has some important parameters for achieving economic welfare, such as tax revenues and inflation. One of the largest sources of state budget revenue in Indonesia is the tax sector. Besides, the rate of inflation occurring in a country can be used as one measure of the economic problems the country is facing. Given the importance of tax revenue and inflation rate control in achieving economic prosperity, it is necessary to analyze the relationship between, and to forecast, tax revenue and the inflation rate. VECM (Vector Error Correction Model) was chosen as the method used in this research because the data take the form of multivariate time series. This study aims to produce a VECM model with optimal lag and to predict tax revenue and the inflation rate with it. The results show that the best model for the tax revenue and inflation rate data in Banda Aceh City is the VECM with an optimal lag of 3, VECM(3). Of the seven models formed, the income tax revenue model is significant. The predictions of tax revenue and the inflation rate in Banda Aceh City for the next 6, 12 and 24 periods (months) obtained using VECM(3) are considered valid, since they have a minimum error value compared to the other models.

  14. Shuttle bit rate synchronizer. [signal to noise ratios and error analysis

    Science.gov (United States)

    Huey, D. C.; Fultz, G. L.

    1974-01-01

    A shuttle bit rate synchronizer brassboard unit was designed, fabricated, and tested, which meets or exceeds the contractual specifications. The bit rate synchronizer operates at signal-to-noise ratios (in a bit rate bandwidth) down to -5 dB while exhibiting less than 0.6 dB bit error rate degradation. The mean acquisition time was measured to be less than 2 seconds. The synchronizer is designed around a digital data transition tracking loop whose phase and data detectors are integrate-and-dump filters matched to the Manchester encoded bits specified. It meets the reliability (no adjustments or tweaking) and versatility (multiple bit rates) of the shuttle S-band communication system through an implementation which is all digital after the initial stage of analog AGC and A/D conversion.

  15. PS-022 Complex automated medication systems reduce medication administration error rates in an acute medical ward

    DEFF Research Database (Denmark)

    Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan

    2017-01-01

    Background Medication errors have received extensive attention in recent decades and are of significant concern to healthcare organisations globally. Medication errors occur frequently, and adverse events associated with medications are one of the largest causes of harm to hospitalised patients...... cabinet, automated dispensing and barcode medication administration; (2) non-patient specific automated dispensing and barcode medication administration. The occurrence of administration errors was observed in three 3-week periods. The error rates were calculated by dividing the number of doses with one...

  16. Soft error rate analysis methodology of multi-Pulse-single-event transients

    International Nuclear Information System (INIS)

    Zhou Bin; Huo Mingxue; Xiao Liyi

    2012-01-01

    As transistor feature size scales down, soft errors in combinational logic caused by high-energy particle radiation are gaining more and more concern. In this paper, a combinational logic soft error analysis methodology considering multi-pulse-single-event transients (MPSETs) and re-convergence with multiple transient pulses is proposed. In the proposed approach, the voltage pulse produced at the standard cell output is approximated by a triangle waveform and characterized by three parameters: pulse width, the transition time of the first edge, and the transition time of the second edge. For pulses with amplitude smaller than the supply voltage, an edge extension technique is proposed. Moreover, an efficient electrical masking model comprehensively considering transition time, delay, width and amplitude is proposed, along with an approach that uses the transition times of the two edges and the pulse width to compute the pulse amplitude. Finally, our proposed firstly-independently-propagating-secondly-mutually-interacting (FIP-SMI) approach is used to deal with the more practical case of re-convergent gates with multiple transient pulses. For MPSETs, a random generation model is also proposed. Compared to estimates obtained using circuit-level simulations in HSpice, our proposed soft error rate analysis algorithm shows a 10% error in SER estimation with a speedup of 300 when the single-pulse-single-event transient (SPSET) is considered. We have also demonstrated that the runtime and SER decrease as P0 increases, using designs from the ISCAS-85 benchmarks. (authors)

  17. Bit error rate testing of fiber optic data links for MMIC-based phased array antennas

    Science.gov (United States)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  18. Graduation Rates and the Higher Education Demographic Evolution

    Science.gov (United States)

    Hunsaker, B. Tom; Thomas, Douglas E.

    2013-01-01

    In his 1918 orienting work, The Higher Learning in America, Veblen highlights two primary aims of the higher education institution: (a) scientific and scholarly inquiry, and (b) the instruction of students (Veblen, 1918). As of 2006, this overarching mission remained intact. In contemporary literature, a common measure of the efficacy of the…

  19. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    Science.gov (United States)

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10^-5) of all compared methods. PMID:28182717

  20. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim

    2011-01-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. © 2011 IEEE.

  1. On the symmetric α-stable distribution with application to symbol error rate calculations

    KAUST Repository

    Soury, Hamza

    2016-12-24

    The probability density function (PDF) of the symmetric α-stable distribution is investigated using the inverse Fourier transform of its characteristic function. For general values of the stable parameter α, it is shown that the PDF and the cumulative distribution function of the symmetric stable distribution can be expressed in closed form in terms of the Fox H function. As an application, the probability of error of single input single output communication systems using different modulation schemes with an α-stable perturbation is studied. In more detail, a generic formula is derived for generalized fading distributions, such as the extended generalized-k distribution. Later, simpler expressions of these error rates are deduced for some selected special cases, and compact approximations are derived using asymptotic expansions.
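
    The inverse-Fourier route the paper takes can be illustrated numerically. For a symmetric α-stable law with characteristic function exp(-(σ|t|)^α) (a common parameterization, assumed here; no skew), the PDF is f(x) = (1/π) ∫₀^∞ cos(tx)·exp(-(σt)^α) dt, which a simple quadrature recovers:

```python
import math

def sas_pdf(x, alpha=1.5, sigma=1.0, t_max=60.0, steps=20000):
    """Numerically invert the characteristic function exp(-(sigma*|t|)^alpha)
    of a symmetric alpha-stable law via the trapezoidal rule:
    f(x) = (1/pi) * integral_0^inf cos(t*x) * exp(-(sigma*t)^alpha) dt."""
    dt = t_max / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * dt
        weight = 0.5 if i in (0, steps) else 1.0   # trapezoid endpoints
        total += weight * math.cos(t * x) * math.exp(-((sigma * t) ** alpha))
    return total * dt / math.pi
```

    For α = 2 this reduces to a Gaussian and for α = 1 to a Cauchy density, which gives a handy sanity check on the quadrature before using it at non-integer α, where no elementary closed form exists.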

  2. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel

    2011-06-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. © 2011 IEEE.

  3. ESTIMATING RETURN RATE OF HIGHER EDUCATION FUND IN RUSSIA

    Directory of Open Access Journals (Sweden)

    Semenikhina V. A.

    2014-06-01

    Full Text Available Currently, the Russian government pays great attention to the field of higher and postgraduate education, yet the Russian scientific literature contains gaps related to the overall evaluation of the effectiveness of the higher education sector. The article dwells upon the problem of interregional income spread among the Russian population. An empirical estimate is carried out of how the human capital accumulated in Russian regions influences wage levels, and of the maximum increase in total wage levels and population income for 2001-2011. Higher education has a greater influence on income spread across Russian regions than the accumulated volume of fixed assets. Moreover, growth of the higher education fund in Russian regions contributes to wage and income growth for the population, but at the same time it decreases legal wages. The results of the study extend knowledge of the economics of education in the Russian Federation.

  4. Linear transceiver design for nonorthogonal amplify-and-forward protocol using a bit error rate criterion

    KAUST Repository

    Ahmed, Qasim Zeeshan; Park, Kihong; Alouini, Mohamed-Slim; Aï ssa, Sonia

    2014-01-01

    The ever-growing demand for higher data rates can now be addressed by exploiting cooperative diversity. This form of diversity has become a fundamental technique for achieving spatial diversity by exploiting the presence of idle users in the network.

  5. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    Science.gov (United States)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and a flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression for the characteristic function (CF) of the MAI is developed in a straightforward manner. Finally, an exact expression for the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).

  6. Emotional Competence and Drop-Out Rates in Higher Education

    Science.gov (United States)

    Kingston, Emma

    2008-01-01

    Purpose: The purpose of this paper is to compare the emotional competence of first year undergraduates enrolled on a high or low drop-out rate (HDR and LDR, respectively) course, at a newly established university within the UK. Design/methodology/approach: A mixed methods approach using both quantitative and qualitative data collection methods was…

  7. Symbol Error Rate of MPSK over EGK Channels Perturbed by a Dominant Additive Laplacian Noise

    KAUST Repository

    Souri, Hamza; Alouini, Mohamed-Slim

    2015-01-01

    The Laplacian noise has received much attention in recent years since it affects many communication systems. We consider in this paper the probability of error of an M-ary phase shift keying (PSK) constellation operating over a generalized fading channel in the presence of a dominant additive Laplacian noise. In this context, the decision regions of the receiver are determined using the maximum likelihood and the minimum distance detectors. Once the decision regions are extracted, the resulting symbol error rate expressions are computed and averaged over an Extended Generalized-K fading distribution. Generic closed-form expressions of the conditional and the average probability of error are obtained in terms of the Fox’s H function. Simplifications for some special cases of fading are presented and the resulting formulas end up being often expressed in terms of well known elementary functions. Finally, the mathematical formalism is validated using some selected analytical-based numerical results as well as Monte Carlo simulation-based results.
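
The paper's exact decision-region analysis and EGK fading averaging are not reproduced here. As a rough illustration under simplifying assumptions (a 4-PSK constellation, additive Laplacian noise only, no fading, a nearest-phase minimum-distance detector), a Monte Carlo sketch of the symbol error rate might look like:

```python
import cmath
import math
import random

def laplacian(scale, rng):
    """Sample zero-mean Laplacian noise (inverse-CDF method) with scale parameter b."""
    u = rng.random()
    while u == 0.0:               # avoid log(0) at the support edge
        u = rng.random()
    u -= 0.5
    return -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))

def mpsk_ser_laplacian(M, es_n0_db, trials=100_000, seed=1):
    """Monte Carlo SER of M-PSK in additive Laplacian noise, nearest-phase detection."""
    rng = random.Random(seed)
    n0 = 1.0 / (10 ** (es_n0_db / 10))      # unit symbol energy
    b = math.sqrt(n0) / 2                   # per-dimension variance n0/2 = 2*b^2
    errors = 0
    for _ in range(trials):
        k = rng.randrange(M)
        s = cmath.exp(2j * math.pi * k / M)
        r = s + complex(laplacian(b, rng), laplacian(b, rng))
        # minimum-distance decision for PSK = nearest constellation phase
        k_hat = round((cmath.phase(r) % (2 * math.pi)) / (2 * math.pi / M)) % M
        if k_hat != k:
            errors += 1
    return errors / trials
```

The function names and the no-fading setup are assumptions for illustration; the paper additionally averages the conditional SER over the EGK fading distribution in closed form.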

  8. Symbol Error Rate of MPSK over EGK Channels Perturbed by a Dominant Additive Laplacian Noise

    KAUST Repository

    Souri, Hamza

    2015-06-01

    The Laplacian noise has received much attention in recent years since it affects many communication systems. We consider in this paper the probability of error of an M-ary phase shift keying (PSK) constellation operating over a generalized fading channel in the presence of a dominant additive Laplacian noise. In this context, the decision regions of the receiver are determined using the maximum likelihood and the minimum distance detectors. Once the decision regions are extracted, the resulting symbol error rate expressions are computed and averaged over an Extended Generalized-K fading distribution. Generic closed-form expressions of the conditional and the average probability of error are obtained in terms of the Fox’s H function. Simplifications for some special cases of fading are presented and the resulting formulas end up being often expressed in terms of well known elementary functions. Finally, the mathematical formalism is validated using some selected analytical-based numerical results as well as Monte Carlo simulation-based results.

  9. Error rates and resource overheads of encoded three-qubit gates

    Science.gov (United States)

    Takagi, Ryuji; Yoder, Theodore J.; Chuang, Isaac L.

    2017-10-01

    A non-Clifford gate is required for universal quantum computation, and, typically, this is the most error-prone and resource-intensive logical operation on an error-correcting code. Small, single-qubit rotations are popular choices for this non-Clifford gate, but certain three-qubit gates, such as Toffoli or controlled-controlled-Z (ccz), are equivalent options that are also more suited for implementing some quantum algorithms, for instance, those with coherent classical subroutines. Here, we calculate error rates and resource overheads for implementing logical ccz with pieceable fault tolerance, a nontransversal method for implementing logical gates. We provide a comparison with a nonlocal magic-state scheme on a concatenated code and a local magic-state scheme on the surface code. We find the pieceable fault-tolerance scheme particularly advantaged over magic states on concatenated codes and in certain regimes over magic states on the surface code. Our results suggest that pieceable fault tolerance is a promising candidate for fault tolerance in a near-future quantum computer.

  10. Comparison of Bit Error Rate of Line Codes in NG-PON2

    Directory of Open Access Journals (Sweden)

    Tomas Horvath

    2016-05-01

    Full Text Available This article focuses on the simulation and comparison of the line codes NRZ (Non Return to Zero), RZ (Return to Zero) and Miller’s code for use in NG-PON2 (Next-Generation Passive Optical Network Stage 2). Our article provides a comparison in terms of Q-factor, BER (Bit Error Rate), and bandwidth. Line codes are the most important part of communication over the optical fibre; their main role is digital signal representation. NG-PON2 networks use optical fibres for communication, which is why OptSim v5.2 is used for the simulation.
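
The three line codes compared above have simple encoding rules that can be sketched at two samples per bit (this is an illustrative encoder only; the article's actual comparison is done in the OptSim simulator):

```python
def nrz(bits):
    """Non Return to Zero: hold +1/-1 for the whole bit interval."""
    return [lvl for b in bits for lvl in ((1, 1) if b else (-1, -1))]

def rz(bits):
    """Unipolar Return to Zero: a '1' is a half-bit pulse, a '0' stays at 0."""
    return [lvl for b in bits for lvl in ((1, 0) if b else (0, 0))]

def miller(bits, start=1):
    """Miller (delay) code: a '1' gives a mid-bit transition; consecutive '0's
    give a transition on the bit boundary between them."""
    out, level, prev = [], start, None
    for b in bits:
        if b:
            out += [level, -level]     # transition in the middle of the bit
            level = -level
        else:
            if prev == 0:
                level = -level         # transition on the boundary between two 0s
            out += [level, level]
        prev = b
    return out
```

For example, `nrz([1, 0])` yields `[1, 1, -1, -1]`, while Miller's code for the same input keeps the transition density low, which is why it trades bandwidth against clock recovery differently from NRZ and RZ.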

  11. Inclusive bit error rate analysis for coherent optical code-division multiple-access system

    Science.gov (United States)

    Katz, Gilad; Sadot, Dan

    2002-06-01

    Inclusive noise and bit error rate (BER) analysis for optical code-division multiplexing (OCDM) using coherence techniques is presented. The analysis contains a crosstalk calculation of the mutual field variance for different numbers of users. It is shown that the crosstalk noise depends strongly on the receiver integration time, the laser coherence time, and the number of users. In addition, analytical results for the power fluctuation at the received channel due to the data modulation at the rejected channels are presented. The analysis also includes amplified spontaneous emission (ASE)-related noise effects of in-line amplifiers in a long-distance communication link.

  12. Student Ratings of Instruction in Turkish Higher Education

    Directory of Open Access Journals (Sweden)

    Nehir Sert

    2013-05-01

    Full Text Available The end-of-term student evaluations have a twofold purpose: to provide information for administrators to make personnel decisions, and to help instructors to improve the quality of their teaching. The aim of this study is to investigate the ‘utility’ of the Student Ratings of Instruction (SRI). To that end, the concerns of the administrators, instructors and students regarding the use of the SRI in formative and summative evaluations are questioned. This study also investigates possible variables associated with the SRI: (1) what are the differences in ratings among the below-average, average and the above-average students? and (2) what is the correlation between the students’ grades and ratings? The participants of the study consisted of 5 administrators, 17 instructors and 292 students from the faculty of education of a foundation university in Ankara. A triangulation of quantitative and qualitative methods was adopted. In the first phase, causal comparative and correlation research methods were implemented. In the second phase, qualitative data were collected through semi-structured interviews. The results revealed that there was no significant difference in the SRI among the below-average, average and above-average students. The correlation between the student grades and the SRI was significant at a low level. The SRI were reportedly utilised to make teaching more effective and to make decisions when employing part-time personnel only. The permanent personnel were not affected by the SRI. Suggestions have been put forward to verify the usefulness of SRI.

  13. Modeling the cosmic-ray-induced soft-error rate in integrated circuits: An overview

    International Nuclear Information System (INIS)

    Srinivasan, G.R.

    1996-01-01

    This paper is an overview of the concepts and methodologies used to predict soft-error rates (SER) due to cosmic and high-energy particle radiation in integrated circuit chips. The paper emphasizes the need for the SER simulation using the actual chip circuit model which includes device, process, and technology parameters as opposed to using either the discrete device simulation or generic circuit simulation that is commonly employed in SER modeling. Concepts such as funneling, event-by-event simulation, nuclear history files, critical charge, and charge sharing are examined. Also discussed are the relative importance of elastic and inelastic nuclear collisions, rare event statistics, and device vs. circuit simulations. The semi-empirical methodologies used in the aerospace community to arrive at SERs [also referred to as single-event upset (SEU) rates] in integrated circuit chips are reviewed. This paper is one of four in this special issue relating to SER modeling. Together, they provide a comprehensive account of this modeling effort, which has resulted in a unique modeling tool called the Soft-Error Monte Carlo Model, or SEMM

  14. Symbol and Bit Error Rates Analysis of Hybrid PIM-CDMA

    Directory of Open Access Journals (Sweden)

    Ghassemlooy Z

    2005-01-01

    Full Text Available A hybrid pulse interval modulation code-division multiple-access (hPIM-CDMA) scheme employing the strict optical orthogonal code (SOCC) with unity auto- and cross-correlation constraints for indoor optical wireless communications is proposed. In this paper, we analyse the symbol error rate (SER) and bit error rate (BER) of hPIM-CDMA. In the analysis, we consider multiple access interference (MAI), self-interference, and the hybrid nature of the hPIM-CDMA signal detection, which is based on the matched filter (MF). It is shown that the BER/SER performance can only be evaluated if the bit resolution conforms to the condition set by the number of consecutive false alarm pulses that might occur and be detected, so that one symbol being divided into two is unlikely to occur. Otherwise, the probability of SER and BER becomes extremely high and indeterminable. We show that for a large number of users, the BER improves when the code weight is increased. The results presented are compared with other modulation schemes.

  15. Two-dimensional optoelectronic interconnect-processor and its operational bit error rate

    Science.gov (United States)

    Liu, J. Jiang; Gollsneider, Brian; Chang, Wayne H.; Carhart, Gary W.; Vorontsov, Mikhail A.; Simonis, George J.; Shoop, Barry L.

    2004-10-01

    A two-dimensional (2-D) multi-channel 8x8 optical interconnect and processor system was designed and developed using complementary metal-oxide-semiconductor (CMOS)-driven 850-nm vertical-cavity surface-emitting laser (VCSEL) arrays and photodetector (PD) arrays with corresponding wavelengths. We performed operation and bit-error-rate (BER) analysis on this free-space integrated 8x8 VCSEL optical interconnect driven by silicon-on-sapphire (SOS) circuits. A pseudo-random bit stream (PRBS) data sequence was used in operating the interconnect. Eye diagrams were measured from individual channels and analyzed using a digital oscilloscope at data rates from 155 Mb/s to 1.5 Gb/s. Using a statistical model of a Gaussian distribution for the random noise in the transmission, we developed a method to compute the BER instantaneously from the digital eye diagrams. Direct measurements on the interconnect were also taken on a standard BER tester for verification. We found that the results of the two methods were of the same order and within 50% accuracy. The integrated interconnect was investigated in an optoelectronic processing architecture of a digital halftoning image processor. Error diffusion networks implemented by the inherently parallel nature of photonics promise to provide high-quality digital halftoned images.
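
The abstract's instantaneous BER estimate from eye-diagram statistics under a Gaussian noise model is commonly written via the Q-factor of the eye; the exact formula the authors used is not given, so the following is the standard Q-factor estimate, shown as a sketch:

```python
import math

def ber_from_eye(mu1, sigma1, mu0, sigma0):
    """Estimate BER from eye-diagram statistics, assuming Gaussian noise on the
    mark rail (mean mu1, std sigma1) and space rail (mean mu0, std sigma0)."""
    q = (mu1 - mu0) / (sigma1 + sigma0)        # Q-factor of the eye
    return 0.5 * math.erfc(q / math.sqrt(2))   # BER at the optimum threshold
```

A Q-factor of about 6 corresponds to the familiar BER of roughly 1e-9, which is the regime such eye-diagram links are typically qualified against.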

  16. Expensive Brains: “Brainy” Rodents have Higher Metabolic Rate

    Science.gov (United States)

    Sobrero, Raúl; May-Collado, Laura J.; Agnarsson, Ingi; Hernández, Cristián E.

    2011-01-01

    Brains are the centers of the nervous system of animals, controlling the organ systems of the body and coordinating responses to changes in the ecological and social environment. The evolution of traits that correlate with cognitive ability, such as relative brain size is thus of broad interest. Brain mass relative to body mass (BM) varies among mammals, and diverse factors have been proposed to explain this variation. A recent study provided evidence that energetics play an important role in brain evolution (Isler and van Schaik, 2006). Using composite phylogenies and data drawn from multiple sources, these authors showed that basal metabolic rate (BMR) correlates with brain mass across mammals. However, no such relationship was found within rodents. Here we re-examined the relationship between BMR and brain mass within Rodentia using a novel species-level phylogeny. Our results are sensitive to parameter evaluation; in particular how species mass is estimated. We detect no pattern when applying an approach used by previous studies, where each species BM is represented by two different numbers, one being the individual that happened to be used for BMR estimates of that species. However, this approach may compromise the analysis. When using a single value of BM for each species, whether representing a single individual, or available species mean, our findings provide evidence that brain mass (independent of BM) and BMR are correlated. These findings are thus consistent with the hypothesis that large brains evolve when the payoff for increased brain mass is greater than the energetic cost they incur. PMID:21811456

  17. Minimizing the symbol-error-rate for amplify-and-forward relaying systems using evolutionary algorithms

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2015-02-01

    In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system. The detector is designed to minimize the symbol-error-rate (SER) of the system. The SER surface is non-linear and may have multiple minima; therefore, designing an SER detector for cooperative communications becomes an optimization problem. Evolutionary algorithms have the capability to find the global minimum; therefore, evolutionary algorithms such as particle swarm optimization (PSO) and differential evolution (DE) are exploited to solve this optimization problem. The performance of the proposed detectors is compared with conventional detectors such as the maximum likelihood (ML) and minimum mean square error (MMSE) detectors. In the simulation results, it can be observed that the SER performance of the proposed detectors is less than 2 dB away from that of the ML detector. A significant improvement in SER performance is also observed when comparing with the MMSE detector. The computational complexity of the proposed detector is much less than that of the ML and MMSE algorithms. Moreover, in contrast to the ML and MMSE detectors, the computational complexity of the proposed detectors increases linearly with respect to the number of relays.
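
The reason PSO is attractive for a multimodal SER surface is that it does not get trapped as easily as gradient descent. A minimal global-best PSO, run here on a standard multimodal test surface (Rastrigin) rather than the paper's actual SER surface, is purely illustrative:

```python
import math
import random

def pso(f, dim, bounds, n_particles=30, iters=200, seed=7):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]
    w, c1, c2 = 0.72, 1.49, 1.49                 # common constriction-style weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (g[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < f(g):
                    g = pos[i][:]
    return g, f(g)

# Multimodal test surface (Rastrigin): many local minima, global minimum 0 at the origin.
rastrigin = lambda x: 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)
```

In the paper's setting, `f` would instead be the SER of the AF relaying system as a function of the detector coefficients; the swarm mechanics are unchanged.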

  18. Correct mutual information, quantum bit error rate and secure transmission efficiency in Wojcik's eavesdropping scheme on ping-pong protocol

    OpenAIRE

    Zhang, Zhanjun

    2004-01-01

    Comment: The wrong mutual information, quantum bit error rate and secure transmission efficiency in Wojcik's eavesdropping scheme [PRL90(03)157901] on the ping-pong protocol have been pointed out and corrected

  19. Calculation of the soft error rate of submicron CMOS logic circuits

    International Nuclear Information System (INIS)

    Juhnke, T.; Klar, H.

    1995-01-01

    A method to calculate the soft error rate (SER) of CMOS logic circuits with dynamic pipeline registers is described. This method takes into account charge collection by drift and diffusion. The method is verified by comparison of calculated SERs to measurement results. Using this method, the SER of a highly pipelined multiplier is calculated as a function of supply voltage for a 0.6 µm, 0.3 µm, and 0.12 µm technology, respectively. It has been found that the SER of such highly pipelined submicron CMOS circuits may become too high, so that countermeasures have to be taken. Since the SER greatly increases with decreasing supply voltage, low-power/low-voltage circuits may show more than eight times the SER at half the normal supply voltage as compared to conventional designs

  20. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim

    2010-01-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. ©2010 IEEE.
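
The paper's end-to-end BER expression for the relaying scheme is not reproduced here. As a hedged baseline for the per-hop building block, the classical average BER of BPSK over a single flat Rayleigh-faded link has the well-known closed form 0.5(1 - sqrt(g/(1+g))) for average SNR g, which a Monte Carlo run can confirm:

```python
import math
import random

def bpsk_rayleigh_ber(avg_snr):
    """Classical closed-form average BER of BPSK over flat Rayleigh fading."""
    return 0.5 * (1 - math.sqrt(avg_snr / (1 + avg_snr)))

def bpsk_rayleigh_ber_mc(avg_snr, trials=200_000, seed=3):
    """Monte Carlo check: instantaneous SNR is exponentially distributed."""
    rng = random.Random(seed)
    errs = 0
    for _ in range(trials):
        snr = rng.expovariate(1 / avg_snr)               # instantaneous SNR
        noise = rng.gauss(0, math.sqrt(1 / (2 * snr)))   # unit-energy +1 sent
        if 1 + noise < 0:                                # sign decision errs
            errs += 1
    return errs / trials
```

The single-link assumption is mine; the paper's analysis instead combines the PDFs of both hops and the SNR-threshold cooperation rule to get the end-to-end BER.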

  1. Personnel selection and emotional stability certification: establishing a false negative error rate when clinical interviews

    International Nuclear Information System (INIS)

    Berghausen, P.E. Jr.

    1987-01-01

    The security plans of nuclear plants generally require that all personnel who are to have unescorted access to protected areas or vital islands be screened for emotional instability. Screening typically consists of first administering the MMPI and then conducting a clinical interview. Interviews-by-exception protocols provide for interviewing only those employees who show some indication of psychopathology in their MMPI results. A problem arises when the indications are not readily apparent: false negatives are likely to occur, resulting in employees being erroneously granted unescorted access. The present paper describes the development of a predictive equation which permits accurate identification, via analysis of MMPI results, of those employees who are most in need of being interviewed. The predictive equation also permits knowing the probable maximum false negative error rate when a given percentage of employees is interviewed

  2. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel

    2010-10-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. ©2010 IEEE.

  3. Error-rate performance analysis of cooperative OFDMA system with decode-and-forward relaying

    KAUST Repository

    Fareed, Muhammad Mehboob; Uysal, Murat; Tsiftsis, Theodoros A.

    2014-01-01

    In this paper, we investigate the performance of a cooperative orthogonal frequency-division multiple-access (OFDMA) system with decode-and-forward (DaF) relaying. Specifically, we derive a closed-form approximate symbol-error-rate expression and analyze the achievable diversity orders. Depending on the relay location, a diversity order up to (L_SkD + 1) + Σ_{m=1}^{M} min(L_SkRm + 1, L_RmD + 1) is available, where M is the number of relays, and L_SkD + 1, L_SkRm + 1, and L_RmD + 1 are the lengths of the channel impulse responses of the source-to-destination, source-to-mth-relay, and mth-relay-to-destination links, respectively. Monte Carlo simulation results are also presented to confirm the analytical findings. © 2013 IEEE.
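
The diversity-order bound quoted in the abstract, (L_SkD + 1) plus the sum over relays of min(L_SkRm + 1, L_RmD + 1), is easy to evaluate directly; the channel lengths below are hypothetical values chosen for illustration:

```python
def diversity_order(l_sd, l_sr, l_rd):
    """Diversity order (L_SkD + 1) + sum_m min(L_SkRm + 1, L_RmD + 1),
    given the source-destination channel length and per-relay channel lengths."""
    assert len(l_sr) == len(l_rd), "one source-relay and relay-destination length per relay"
    return (l_sd + 1) + sum(min(a + 1, b + 1) for a, b in zip(l_sr, l_rd))
```

For example, with two relays, a source-destination length of 2, source-relay lengths [1, 3] and relay-destination lengths [2, 1], the bound is 3 + min(2, 4) + min(4, 2) = 7: each relay path contributes only as much diversity as its weaker hop.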

  4. Bit Error Rate Analysis for MC-CDMA Systems in Nakagami-m Fading Channels

    Directory of Open Access Journals (Sweden)

    Li Zexian

    2004-01-01

    Full Text Available Multicarrier code division multiple access (MC-CDMA) is a promising technique that combines orthogonal frequency division multiplexing (OFDM) with CDMA. In this paper, based on an alternative expression for the Q-function, the characteristic function and a Gaussian approximation, we present a new practical technique for determining the bit error rate (BER) of multiuser MC-CDMA systems in frequency-selective Nakagami-m fading channels. The results are applicable to systems employing coherent demodulation with maximal ratio combining (MRC) or equal gain combining (EGC). The analysis assumes that different subcarriers experience independent fading channels, which are not necessarily identically distributed. The final average BER is expressed in the form of a single finite-range integral with an integrand composed of tabulated functions which can be easily computed numerically. The accuracy of the proposed approach is demonstrated with computer simulations.
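
The "alternative expression for the Q-function" used in this family of BER analyses is typically Craig's finite-range integral representation, Q(x) = (1/π) ∫_0^{π/2} exp(-x²/(2 sin²θ)) dθ, which is convenient precisely because its integration range is finite and its integrand is bounded. A numerical sketch comparing it with the erfc form (assuming Craig's form is indeed the one meant):

```python
import math

def q_craig(x, n=2000):
    """Craig's representation of the Gaussian Q-function, evaluated with the
    midpoint rule (the finite range and bounded integrand make this easy)."""
    h = (math.pi / 2) / n
    s = sum(math.exp(-x * x / (2 * math.sin((k + 0.5) * h) ** 2)) for k in range(n))
    return s * h / math.pi

def q_erfc(x):
    """Reference form of the Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))
```

In BER derivations the payoff is that the fading SNR appears only inside the exponential of Craig's integrand, so averaging over the fading distribution can be moved inside the finite-range integral.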

  5. Evolutionary enhancement of the SLIM-MAUD method of estimating human error rates

    International Nuclear Information System (INIS)

    Zamanali, J.H.; Hubbard, F.R.; Mosleh, A.; Waller, M.A.

    1992-01-01

    The methodology described in this paper assigns plant-specific dynamic human error rates (HERs) for individual plant examinations based on procedural difficulty, on configuration features, and on the time available to perform the action. This methodology is an evolutionary improvement of the success likelihood index methodology (SLIM-MAUD) for use in systemic scenarios. It is based on the assumption that the HER in a particular situation depends on the combined effects of a comprehensive set of performance-shaping factors (PSFs) that influence the operator's ability to perform the action successfully. The PSFs relate the details of the systemic scenario in which the action must be performed to the operator's psychological and cognitive condition

  6. Analysis of family-wise error rates in statistical parametric mapping using random field theory.

    Science.gov (United States)

    Flandin, Guillaume; Friston, Karl J

    2017-11-01

    This technical report revisits the analysis of family-wise error rates in statistical parametric mapping-using random field theory-reported in (Eklund et al. []: arXiv 1511.01863). Contrary to the understandable spin that these sorts of analyses attract, a review of their results suggests that they endorse the use of parametric assumptions-and random field theory-in the analysis of functional neuroimaging data. We briefly rehearse the advantages parametric analyses offer over nonparametric alternatives and then unpack the implications of (Eklund et al. []: arXiv 1511.01863) for parametric procedures. Hum Brain Mapp, 2017. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
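
Random field theory is beyond a short sketch, but the family-wise error rate itself, the probability of at least one false positive across a family of tests under the null, is easy to simulate, and the simulation shows why a correction is needed at all. This is a generic multiple-testing illustration, not the report's RFT analysis:

```python
import math
import random

def fwer(n_tests=20, n_experiments=2000, alpha=0.05, bonferroni=False, seed=11):
    """Simulated family-wise error rate: fraction of all-null experiments in
    which at least one of n_tests independent two-sided z-tests rejects."""
    rng = random.Random(seed)
    thr = alpha / n_tests if bonferroni else alpha

    def z_crit(p):
        """Two-sided critical z for p-value p, by bisection on erfc (stdlib only)."""
        lo, hi = 0.0, 10.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if math.erfc(mid / math.sqrt(2)) > p:
                lo = mid        # p-value still too large; need a bigger z
            else:
                hi = mid
        return (lo + hi) / 2

    z = z_crit(thr)
    hits = 0
    for _ in range(n_experiments):
        if any(abs(rng.gauss(0, 1)) > z for _ in range(n_tests)):
            hits += 1
    return hits / n_experiments
```

With 20 uncorrected tests the simulated FWER lands near 1 - 0.95^20 ≈ 0.64, while the Bonferroni-corrected family stays near the nominal 0.05; RFT-based thresholds aim for the same control while exploiting the spatial smoothness of imaging data.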

  7. Performance analysis for the bit-error rate of SAC-OCDMA systems

    Science.gov (United States)

    Feng, Gang; Cheng, Wenqing; Chen, Fujun

    2015-09-01

    Under low power, Gaussian statistics obtained by invoking the central limit theorem are feasible for predicting the upper bound in the spectral-amplitude-coding optical code division multiple access (SAC-OCDMA) system. However, this approach severely underestimates the bit-error rate (BER) performance of the system under the high-power assumption. Fortunately, the exact negative binomial (NB) model is a perfect replacement for the Gaussian model in the prediction and evaluation. Based on NB statistics, a more accurate closed-form expression is analyzed and derived for the SAC-OCDMA system. The experiment shows that the obtained expression provides a more precise prediction of the BER performance under both the low- and high-power assumptions.

  8. System care improves trauma outcome: patient care errors dominate reduced preventable death rate.

    Science.gov (United States)

    Thoburn, E; Norris, P; Flores, R; Goode, S; Rodriguez, E; Adams, V; Campbell, S; Albrink, M; Rosemurgy, A

    1993-01-01

    A review of 452 trauma deaths in Hillsborough County, Florida, in 1984 documented that 23% of non-CNS trauma deaths were preventable and occurred because of inadequate resuscitation or delay in proper surgical care. In late 1988 Hillsborough County organized a County Trauma Agency (HCTA) to coordinate trauma care among prehospital providers and state-designated trauma centers. The purpose of this study was to review county trauma deaths after the inception of the HCTA to determine the frequency of preventable deaths. A total of 504 trauma deaths occurring between October 1989 and April 1991 were reviewed. Through committee review, 10 deaths were deemed preventable; 2 occurred outside the trauma system. Of the 10 deaths, 5 occurred late in severely injured patients. The preventable death rate has decreased to 7.0% with system care. The causes of preventable deaths have changed from delayed or inadequate intervention to postoperative care errors.

  9. Error-rate performance analysis of cooperative OFDMA system with decode-and-forward relaying

    KAUST Repository

    Fareed, Muhammad Mehboob

    2014-06-01

    In this paper, we investigate the performance of a cooperative orthogonal frequency-division multiple-access (OFDMA) system with decode-and-forward (DaF) relaying. Specifically, we derive a closed-form approximate symbol-error-rate expression and analyze the achievable diversity orders. Depending on the relay location, a diversity order up to (L_SkD + 1) + Σ_{m=1}^{M} min(L_SkRm + 1, L_RmD + 1) is available, where M is the number of relays, and L_SkD + 1, L_SkRm + 1, and L_RmD + 1 are the lengths of the channel impulse responses of the source-to-destination, source-to-mth-relay, and mth-relay-to-destination links, respectively. Monte Carlo simulation results are also presented to confirm the analytical findings. © 2013 IEEE.

  10. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Sterling, D; Ehler, E [University of Minnesota, Minneapolis, MN (United States)

    2015-06-15

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing.
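
The Gamma Index pass rates reported above combine a dose tolerance (3%) and a distance-to-agreement tolerance (3mm). A much-simplified 1D global-gamma sketch conveys the mechanics (the study's comparison uses full 3D dose grids; the profiles here are hypothetical):

```python
def gamma_pass_rate(ref, evl, spacing_mm, dose_tol=0.03, dist_mm=3.0):
    """1D global Gamma Index pass rate: for each reference point, minimize the
    combined dose/distance metric over all evaluated points; pass if gamma <= 1."""
    d_max = max(ref)                       # global normalization dose
    passed = 0
    for i, dr in enumerate(ref):
        g2 = min(((de - dr) / (dose_tol * d_max)) ** 2
                 + ((j - i) * spacing_mm / dist_mm) ** 2
                 for j, de in enumerate(evl))
        if g2 <= 1.0:
            passed += 1
    return 100.0 * passed / len(ref)
```

A profile shifted by 1mm still passes at 100% because the distance term absorbs the shift, which is exactly why small systematic MLC errors can hide inside a 3%/3mm criterion, as the abstract's 2mm-systematic results show.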

  11. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    International Nuclear Information System (INIS)

    Sterling, D; Ehler, E

    2015-01-01

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing

  12. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains.

    Science.gov (United States)

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-05-01

    Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ
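
The "37% of the variance" and "45% of the variance" figures above are R² statistics from regressing real-world error rates on laboratory test metrics. A stdlib sketch of simple linear regression R², with made-up (lab score, real-world rate) pairs standing in for the study's data:

```python
def r_squared(x, y):
    """Coefficient of determination for a simple least-squares fit y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot
```

The study's actual models are richer (multiple laboratory metrics, cross-validation across two pharmacy chains), but R² retains the same interpretation: the fraction of variance in real-world confusion rates explained by the laboratory predictors.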

  13. Investigation on coupling error characteristics in angular rate matching based ship deformation measurement approach

    Science.gov (United States)

    Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang

    2018-01-01

    The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of the coupling error. In this paper, a comprehensive investigation of the coupling error is reported, which may help to reduce the coupling error in the future. First, the causes and characteristics of the coupling error are analyzed theoretically based on the basic theory of measuring ship deformation. Then, simulations are conducted to verify the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and that the coupling error increases with the correlation between them. All the simulation results coincide with the theoretical analysis.

  14. Analysis and Compensation of Modulation Angular Rate Error Based on Missile-Borne Rotation Semi-Strapdown Inertial Navigation System

    Directory of Open Access Journals (Sweden)

    Jiayu Zhang

    2018-05-01

    Full Text Available The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technique called the Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into SINS. In practice, however, a stable modulation angular rate is difficult to achieve in a high-speed rotation environment, and the changing rotary angular rate affects the self-compensation of inertial sensor errors. In this paper, the influence of modulation angular rate error, including the acceleration-deceleration process and the instability of the angular rate, on the navigation accuracy of RSSINS is derived, and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to maintain high-precision autonomous navigation by the MIMU when no external aid is available. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable to modulation angular rate error compensation under various dynamic conditions.

  15. Quantitative comparison of errors in 15N transverse relaxation rates measured using various CPMG phasing schemes

    International Nuclear Information System (INIS)

    Myint Wazo; Cai Yufeng; Schiffer, Celia A.; Ishima, Rieko

    2012-01-01

    Nitrogen-15 Carr-Purcell-Meiboom-Gill (CPMG) transverse relaxation experiments are widely used to characterize protein backbone dynamics and chemical exchange parameters. Although an accurate value of the transverse relaxation rate, R2, is needed for accurate characterization of dynamics, the uncertainty in the R2 value depends on the experimental settings and the details of the data analysis itself. Here, we present an analysis of the impact of CPMG pulse phase alternation on the accuracy of the 15N CPMG R2. Our simulations show that R2 can be obtained accurately for a relatively wide spectral width, either using the conventional phase cycle or using phase alternation, when the r.f. pulse power is accurately calibrated. However, when the r.f. pulse is miscalibrated, the conventional CPMG experiment exhibits larger uncertainties in R2, caused by the off-resonance effect, than does the phase alternation experiment. Our experiments show that this effect becomes manifest when the systematic error exceeds the error arising from experimental noise. Furthermore, our results provide the means to estimate practical parameter settings that yield accurate values of 15N transverse relaxation rates in both CPMG experiments.

  16. COMPARISON OF THE BIT ERROR RATE OF REED-SOLOMON CODES AND BOSE-CHAUDHURI-HOCQUENGHEM CODES USING 32-FSK MODULATION

    Directory of Open Access Journals (Sweden)

    Eva Yovita Dwi Utami

    2016-11-01

    Full Text Available Reed-Solomon (RS) codes and Bose-Chaudhuri-Hocquenghem (BCH) codes are error-correcting codes belonging to the class of cyclic block codes. Error-correcting codes are required in communication systems to reduce errors in the transmitted information. This paper presents the results of a study of the BER performance of communication systems using the RS code, the BCH code, and a system without either code, with 32-FSK modulation over Additive White Gaussian Noise (AWGN), Rayleigh and Rician channels. The error-reduction capability is measured by the resulting Bit Error Rate (BER). The results show that, as SNR increases, the RS code lowers the BER more steeply than the system with the BCH code, whereas the BCH code is superior at low SNR, yielding a better BER than the system with the RS code.
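The trade-off studied in this record can be illustrated with a far simpler error-correcting code than RS or BCH. The sketch below simulates uncoded BPSK over an AWGN channel against a rate-1/3 repetition code with majority-vote decoding; BPSK stands in for 32-FSK and the repetition code for RS/BCH, so only the qualitative effect (coding lowers BER, and the gap changes with SNR) carries over. All names and parameters are illustrative assumptions, not taken from the paper.

```python
import math
import random

def ber_simulation(ebn0_db, n_bits=20000, seed=1):
    """Simulate BPSK over AWGN: uncoded vs. a rate-1/3 repetition code
    with majority-vote decoding. Purely illustrative: BPSK stands in
    for 32-FSK, and the repetition code for RS/BCH."""
    rng = random.Random(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))  # noise std for unit-energy symbols

    uncoded_errors = 0
    coded_errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        tx = 1.0 if bit else -1.0
        # Uncoded: decide from a single noisy observation.
        rx = tx + rng.gauss(0, sigma)
        uncoded_errors += ((rx > 0) != bool(bit))
        # Repetition-3: three noisy copies, majority vote.
        votes = sum((tx + rng.gauss(0, sigma)) > 0 for _ in range(3))
        coded_errors += ((votes >= 2) != bool(bit))
    return uncoded_errors / n_bits, coded_errors / n_bits
```

Note that the energy per channel symbol is held fixed here, so the repetition code spends three times the energy per information bit; a fair comparison in Eb/N0 terms, as in the paper, would account for the code rate.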

  17. Assessment of the rate and etiology of pharmacological errors by nurses of two major teaching hospitals in Shiraz

    Directory of Open Access Journals (Sweden)

    Fatemeh Vizeshfar

    2015-06-01

    Full Text Available Medication errors have serious consequences for patients, their families and caregivers. Reducing these errors by caregivers such as nurses can increase patient safety. The goal of the study was to assess the rate and etiology of medication errors in pediatric and adult medical wards. This cross-sectional analytic study was conducted on 101 registered nurses responsible for drug administration in pediatric and adult medical wards. Data were collected by a questionnaire covering demographic information, self-reported errors and the etiology of medication errors, together with researcher observations. The results showed that the nurses' error rate was 51.6% in pediatric wards and 47.4% in adult wards. The most common error in adult wards was administering drugs later or sooner than scheduled (48.6%), while administering drugs without a prescription and administering the wrong drug were the most common errors in pediatric wards (49.2% each). According to the researchers' observations, the medication error rate of 57.9% in adult wards was rated low, and the rate of 69.4% in pediatric wards was rated moderate. The most frequent medication error in both adult and pediatric wards was that nurses did not explain the reason for, and type of, the drug they were administering to patients. An independent t-test showed a significant difference in observed errors in pediatric wards (p=0.000) and in adult wards (p=0.000). Several studies have reported medication errors all over the world, especially in pediatric wards. However, by designing a suitable reporting system and using a multidisciplinary approach, the occurrence of medication errors and their negative consequences can be reduced.

  18. Impact of catheter reconstruction error on dose distribution in high dose rate intracavitary brachytherapy and evaluation of OAR doses

    International Nuclear Information System (INIS)

    Thaper, Deepak; Shukla, Arvind; Rathore, Narendra; Oinam, Arun S.

    2016-01-01

    In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this study is to evaluate the impact of catheter reconstruction errors on dose distribution in CT-based intracavitary brachytherapy planning, and to evaluate their effect on organs at risk (OARs) such as the bladder, rectum and sigmoid, and on the target volume, the high-risk clinical target volume (HR-CTV).

  19. Time Domain Equalizer Design Using Bit Error Rate Minimization for UWB Systems

    Directory of Open Access Journals (Sweden)

    Syed Imtiaz Husain

    2009-01-01

    Full Text Available Ultra-wideband (UWB) communication systems occupy huge bandwidths with very low power spectral densities. This feature makes UWB channels highly rich in resolvable multipaths. To exploit the temporal diversity, the receiver is commonly implemented as a Rake. The aim of capturing enough signal energy to maintain an acceptable output signal-to-noise ratio (SNR) dictates a very complicated Rake structure with a large number of fingers. Channel shortening, or time domain equalization (TEQ), can simplify the Rake receiver design by reducing the number of significant taps in the effective channel. In this paper, we first derive the bit error rate (BER) of a multiuser, multipath UWB system in the presence of a TEQ at the receiver front end. This BER is then written in a form suitable for traditional optimization. We then present a TEQ design which minimizes the BER of the system to perform efficient channel shortening. The performance of the proposed algorithm is compared with some generic TEQ designs and other Rake structures in UWB channels. It is shown that the proposed algorithm maintains a lower BER along with efficiently shortening the channel.

  20. Student laboratory experiments exploring optical fibre communication systems, eye diagrams, and bit error rates

    Science.gov (United States)

    Walsh, Douglas; Moodie, David; Mauchline, Iain; Conner, Steve; Johnstone, Walter; Culshaw, Brian

    2005-06-01

    Optical fibre communications has proved to be one of the key application areas that created, and ultimately propelled, the global growth of the photonics industry over the last twenty years. Consequently, the teaching of the principles of optical fibre communications has become integral to many university courses covering photonics technology. However, to reinforce the fundamental principles and key technical issues students examine in their lecture courses, and to develop their experimental skills, it is critical that students also obtain hands-on practical experience of photonics components, instruments and systems in an associated teaching laboratory. In recognition of this need, OptoSci, in collaboration with university academics, commercially developed a fibre optic communications based educational package (ED-COM). This educator kit enables students to: investigate the characteristics of the individual communications system components (sources, transmitters, fibre, receiver); examine and interpret the overall system performance limitations imposed by attenuation and dispersion; and conduct system design and performance analysis. To further enhance the experimental programme examined in the fibre optic communications kit, an extension module to ED-COM has recently been introduced examining one of the most significant performance parameters of digital communications systems, the bit error rate (BER). This add-on module, BER(COM), enables students to generate, evaluate and investigate signal quality trends by examining eye patterns, and to explore the bit-rate limitations imposed on communication systems by noise, attenuation and dispersion. This paper will examine the educational objectives, background theory, and typical results for these educator kits, with particular emphasis on BER(COM).

  1. Residents' Ratings of Their Clinical Supervision and Their Self-Reported Medical Errors: Analysis of Data From 2009.

    Science.gov (United States)

    Baldwin, DeWitt C; Daugherty, Steven R; Ryan, Patrick M; Yaghmour, Nicholas A; Philibert, Ingrid

    2018-04-01

    Medical errors and patient safety are major concerns for the medical and medical education communities. Improving clinical supervision for residents is important in avoiding errors, yet little is known about how residents perceive the adequacy of their supervision and how this relates to medical errors and other education outcomes, such as learning and satisfaction. We analyzed data from a 2009 survey of residents in 4 large specialties regarding the adequacy and quality of supervision they receive as well as associations with self-reported data on medical errors and residents' perceptions of their learning environment. Residents' reports of working without adequate supervision were lower than in a 1999 survey for all 4 specialties, and residents were least likely to rate "lack of supervision" as a problem. While few residents reported that they received inadequate supervision, problems with supervision were negatively correlated with sufficient time for clinical activities, overall ratings of the residency experience, and attending physicians as a source of learning. Problems with supervision were positively correlated with resident reports that they had made a significant medical error, had been belittled or humiliated, or had observed others falsifying medical records. Although working without supervision was not a pervasive problem in 2009, when it happened, it appeared to have negative consequences. The association between inadequate supervision and medical errors is of particular concern.

  2. Attitudes of Mashhad Public Hospital's Nurses and Midwives toward the Causes and Rates of Medical Errors Reporting.

    Science.gov (United States)

    Mobarakabadi, Sedigheh Sedigh; Ebrahimipour, Hosein; Najar, Ali Vafaie; Janghorban, Roksana; Azarkish, Fatemeh

    2017-03-01

    Patient safety is one of the main objectives in healthcare services; however, medical errors are a prevalent potential occurrence for patients in treatment systems. Medical errors lead to an increase in patient mortality and to challenges such as prolonged inpatient stays and increased costs. Controlling medical errors is very important, because these errors, besides being costly, threaten patient safety. The aim was to evaluate the attitudes of nurses and midwives toward the causes and rates of medical error reporting. It was a cross-sectional observational study. The study population was 140 midwives and nurses employed in Mashhad public hospitals. Data were collected through the revised Goldstone (2001) questionnaire. SPSS 11.5 software was used for data analysis, applying descriptive and inferential statistics. Standard deviation and relative frequency distribution were used for calculation of the means, and the results were presented as tables and charts. The chi-square test was used for the inferential analysis of the data. Most of the midwives and nurses (39.4%) were in the age range of 25 to 34 years, and the lowest percentage (2.2%) were in the age range of 55-59 years. The highest average number of medical errors was reported by employees with three to four years of work experience, while the lowest was reported by those with one to two years of work experience. The highest average number of medical errors occurred during the evening shift, and the lowest during the night shift. Three main causes of medical errors were identified: illegible physician prescription orders, similarity of drug names, and nurse fatigue. The most important causes of medical errors from the viewpoint of the nurses and midwives were illegible physician orders, similarity of a drug's name to other drugs, nurse fatigue, and damaged labels or packaging of the drug, respectively. Head nurse feedback, peer

  3. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    International Nuclear Information System (INIS)

    Yamanashi, Yuki; Masubuchi, Kota; Yoshikawa, Nobuyuki

    2016-01-01

    The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in a large-scale single flux quantum circuit. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed, taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.
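As a rough intuition for how a timing margin maps to an error rate, the toy model below treats each gate's timing deviation as an independent Gaussian variable and computes the probability that at least one gate, in any clock cycle, violates the margin. This is an assumed simplification for illustration only, not the paper's statistical analysis; all parameter names are hypothetical.

```python
import math

def gate_error_prob(margin_ps, jitter_sigma_ps):
    """Per-gate, per-clock probability that a Gaussian timing fluctuation
    exceeds the timing margin (one-sided Gaussian tail, the Q-function)."""
    x = margin_ps / jitter_sigma_ps
    return 0.5 * math.erfc(x / math.sqrt(2))

def circuit_error_rate(margin_ps, jitter_sigma_ps, n_gates, n_cycles):
    """Probability of at least one timing violation across all gates and
    clock cycles, assuming independent fluctuations (a toy model only)."""
    p = gate_error_prob(margin_ps, jitter_sigma_ps)
    return 1.0 - (1.0 - p) ** (n_gates * n_cycles)
```

Even a tiny per-gate tail probability becomes significant when multiplied over a million gates, which is why the abstract's 1-million-bit shift register is a demanding test case for the timing margin.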

  4. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    Energy Technology Data Exchange (ETDEWEB)

    Yamanashi, Yuki, E-mail: yamanasi@ynu.ac.jp [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan); Masubuchi, Kota; Yoshikawa, Nobuyuki [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan)

    2016-11-15

    The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in a large-scale single flux quantum circuit. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed, taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.

  5. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    International Nuclear Information System (INIS)

    Chau, H.F.

    2002-01-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5-0.1√(5)≅27.6%, thereby making it the most error resistant scheme known to date
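The tolerable bit error rate quoted in the abstract follows directly from the closed-form expression, and can be checked in one line:

```python
import math

# Chau's bound as quoted in the abstract: the scheme tolerates a
# channel bit error rate up to 1/2 - sqrt(5)/10.
threshold = 0.5 - 0.1 * math.sqrt(5)
print(round(100 * threshold, 1))  # prints 27.6
```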

  6. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and allocation rate to the treatment arms may be modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
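The basic phenomenon behind this record, that unadjusted data-dependent decisions at an interim look inflate the type 1 error rate, can be reproduced with a short Monte Carlo sketch. The design below (two unadjusted looks at a z-statistic) is not the paper's worst-case sample size reassessment strategy; it is only a minimal, assumed illustration of why interim adaptivity requires special methods.

```python
import math
import random

def type1_with_interim_look(n_sims=20000, n1=50, n2=50, seed=7):
    """Monte Carlo type 1 error rate when a two-sided z-test is applied
    both at an unadjusted interim look (after n1 observations) and at the
    final analysis (after n1 + n2), rejecting if either crosses the
    nominal 5% critical value. Illustrates inflation only; the paper's
    worst-case reassessment strategy is a different mechanism."""
    rng = random.Random(seed)
    z_crit = 1.959964  # two-sided 5% critical value
    rejections = 0
    for _ in range(n_sims):
        stage1 = [rng.gauss(0, 1) for _ in range(n1)]  # H0 true: mean 0
        z1 = sum(stage1) / math.sqrt(n1)
        stage2 = [rng.gauss(0, 1) for _ in range(n2)]
        z2 = (sum(stage1) + sum(stage2)) / math.sqrt(n1 + n2)
        if abs(z1) > z_crit or abs(z2) > z_crit:
            rejections += 1
    return rejections / n_sims
```

With these settings the empirical rejection rate under the null comes out near 8%, well above the nominal 5%, which is the kind of inflation that adaptive design methodology is built to control.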

  7. Considering the role of time budgets on copy-error rates in material culture traditions: an experimental assessment.

    Science.gov (United States)

    Schillinger, Kerstin; Mesoudi, Alex; Lycett, Stephen J

    2014-01-01

    Ethnographic research highlights that there are constraints placed on the time available to produce cultural artefacts in differing circumstances. Given that copying error, or cultural 'mutation', can have important implications for the evolutionary processes involved in material culture change, it is essential to explore empirically how such 'time constraints' affect patterns of artefactual variation. Here, we report an experiment that systematically tests whether, and how, varying time constraints affect shape copying error rates. A total of 90 participants copied the shape of a 3D 'target handaxe form' using a standardized foam block and a plastic knife. Three distinct 'time conditions' were examined, whereupon participants had either 20, 15, or 10 minutes to complete the task. One aim of this study was to determine whether reducing production time produced a proportional increase in copy error rates across all conditions, or whether the concept of a task specific 'threshold' might be a more appropriate manner to model the effect of time budgets on copy-error rates. We found that mean levels of shape copying error increased when production time was reduced. However, there were no statistically significant differences between the 20 minute and 15 minute conditions. Significant differences were only obtained between conditions when production time was reduced to 10 minutes. Hence, our results more strongly support the hypothesis that the effects of time constraints on copying error are best modelled according to a 'threshold' effect, below which mutation rates increase more markedly. Our results also suggest that 'time budgets' available in the past will have generated varying patterns of shape variation, potentially affecting spatial and temporal trends seen in the archaeological record. Hence, 'time-budgeting' factors need to be given greater consideration in evolutionary models of material culture change.

  8. Finding the right coverage : The impact of coverage and sequence quality on single nucleotide polymorphism genotyping error rates

    NARCIS (Netherlands)

    Fountain, Emily D.; Pauli, Jonathan N.; Reid, Brendan N.; Palsboll, Per J.; Peery, M. Zachariah

    Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown.

  9. Error resilient H.264/AVC Video over Satellite for low Packet Loss Rates

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren; Andersen, Jakob Dahl

    2007-01-01

    The performance of video over satellite is simulated. The error resilience tools of intra macroblock refresh and slicing are optimized for live broadcast video over satellite. The improved performance using feedback, using a cross- layer approach, over the satellite link is also simulated. The ne...

  10. SNP discovery in nonmodel organisms: strand bias and base-substitution errors reduce conversion rates.

    Science.gov (United States)

    Gonçalves da Silva, Anders; Barendse, William; Kijas, James W; Barris, Wes C; McWilliam, Sean; Bunch, Rowan J; McCullough, Russell; Harrison, Blair; Hoelzel, A Rus; England, Phillip R

    2015-07-01

    Single nucleotide polymorphisms (SNPs) have become the marker of choice for genetic studies in organisms of conservation, commercial or biological interest. Most SNP discovery projects in nonmodel organisms apply a strategy for identifying putative SNPs based on filtering rules that account for random sequencing errors. Here, we analyse data used to develop 4723 novel SNPs for the commercially important deep-sea fish, orange roughy (Hoplostethus atlanticus), to assess the impact of not accounting for systematic sequencing errors when filtering identified polymorphisms during SNP discovery. We used SAMtools to identify polymorphisms in a velvet assembly of genomic DNA sequence data from seven individuals. The resulting set of polymorphisms was filtered to minimize 'bycatch': polymorphisms caused by sequencing or assembly error. An Illumina Infinium SNP chip was used to genotype a final set of 7714 polymorphisms across 1734 individuals. Five predictors were examined for their effect on the probability of obtaining an assayable SNP: depth of coverage, number of reads that support a variant, polymorphism type (e.g. A/C), strand-bias and Illumina SNP probe design score. Our results indicate that filtering out systematic sequencing errors could substantially improve the efficiency of SNP discovery. We show that BLASTX can be used as an efficient tool to identify single-copy genomic regions in the absence of a reference genome. The results have implications for research aiming to identify assayable SNPs and build SNP genotyping assays for nonmodel organisms. © 2014 John Wiley & Sons Ltd.

  11. Sharp Threshold Detection Based on Sup-norm Error rates in High-dimensional Models

    DEFF Research Database (Denmark)

    Callot, Laurent; Caner, Mehmet; Kock, Anders Bredahl

    focused almost exclusively on estimation errors in stronger norms. We show that this sup-norm bound can be used to distinguish between zero and non-zero coefficients at a much finer scale than would have been possible using classical oracle inequalities. Thus, our sup-norm bound is tailored to consistent...

  12. On the Symbol Error Rate of M-ary MPSK over Generalized Fading Channels with Additive Laplacian Noise

    KAUST Repository

    Soury, Hamza

    2015-01-07

    This work considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox’s H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations [1].
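For orientation, the classical textbook approximation for MPSK symbol error rate under Gaussian noise can be computed in a few lines. This is not the paper's result: the abstract derives exact expressions for Laplacian noise and extended Generalized-K fading, which the standard AWGN formula below does not cover.

```python
import math

def mpsk_ser_awgn(es_n0_db, M):
    """Classical approximation for M-ary PSK symbol error rate over an
    AWGN (Gaussian-noise) channel: SER ~= 2 Q(sqrt(2 Es/N0) sin(pi/M)).
    A baseline only; the paper treats Laplacian noise and generalized
    fading, where this formula does not apply."""
    gamma = 10 ** (es_n0_db / 10)          # Es/N0 as a linear ratio
    arg = math.sqrt(2 * gamma) * math.sin(math.pi / M)
    q = 0.5 * math.erfc(arg / math.sqrt(2))  # Gaussian Q-function
    return min(1.0, 2 * q)
```

The formula captures the expected trends: the error rate falls as SNR rises and grows as the constellation order M increases, since neighbouring phase points move closer together.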

  13. On the symbol error rate of M-ary MPSK over generalized fading channels with additive Laplacian noise

    KAUST Repository

    Soury, Hamza

    2014-06-01

    This paper considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox's H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations. © 2014 IEEE.

  14. On the symbol error rate of M-ary MPSK over generalized fading channels with additive Laplacian noise

    KAUST Repository

    Soury, Hamza; Alouini, Mohamed-Slim

    2014-01-01

    This paper considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox's H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations. © 2014 IEEE.

  15. On the Symbol Error Rate of M-ary MPSK over Generalized Fading Channels with Additive Laplacian Noise

    KAUST Repository

    Soury, Hamza; Alouini, Mohamed-Slim

    2015-01-01

    This work considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox’s H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations [1].

  16. Determination of corrosion rate of reinforcement with a modulated guard ring electrode; analysis of errors due to lateral current distribution

    International Nuclear Information System (INIS)

    Wojtas, H.

    2004-01-01

    The main source of error in measuring the corrosion rate of rebars on site is a non-uniform current distribution between the small counter electrode (CE) on the concrete surface and the large rebar network. Guard ring electrodes (GEs) are used in an attempt to confine the excitation current within a defined area. In order to better understand the functioning of the modulated guard ring electrode and to assess its effectiveness in eliminating errors due to lateral spread of the current signal from the small CE, measurements of the polarisation resistance performed on a concrete beam have been numerically simulated. The effect of parameters such as rebar corrosion activity, concrete resistivity, concrete cover depth and size of the corroding area on errors in the estimation of the polarisation resistance of a single rebar has been examined. The results indicate that the modulated GE arrangement fails to confine the lateral spread of the CE current within a constant area. Using a constant diameter of confinement for the calculation of the corrosion rate may therefore lead to serious errors when test conditions change. When rebar corrosion activity is high and/or corrosion is localized, the use of the modulated GE confinement may lead to significant underestimation of the corrosion rate.

  17. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    Science.gov (United States)

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

  18. United States private schools have higher rates of exemptions to school immunization requirements than public schools.

    Science.gov (United States)

    Shaw, Jana; Tserenpuntsag, Boldtsetseg; McNutt, Louise-Anne; Halsey, Neal

    2014-07-01

    To compare medical, religious, and personal belief immunization exemption rates between private and public schools in US. Exemption rates were calculated using the Centers for Disease Control and Prevention School Immunization Assessment Surveys for the 2009-2010 school year excluding states with incomplete survey data. Standardized exemption rates weighted on enrollments in public and private schools were calculated. Differences in exemption rates between public and private schools were tested using Wilcoxon signed rank test. The overall state exemption rate was higher in US private than public schools, 4.25% (SD 4.27) vs 1.91% (1.67), P = .0001 and private schools had higher exemption rates for all types of exemptions; medical 0.58% (0.71) vs 0.34% (0.34) respectively (P = .0004), religious 2.09% (3.14) vs 0.83% (1.05) respectively (P = .0001), and personal belief 6.10% (4.12) vs 2.79% (1.57), respectively (P = .006). Overall exemption rates were significantly higher in states that allowed personal belief exemptions. Exemption rates were significantly higher in US private than in public schools. Children attending private schools may be at higher risk of vaccine-preventable diseases than public school children. Copyright © 2014 Elsevier Inc. All rights reserved.
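
    The paired comparison reported above can be sketched with scipy's Wilcoxon signed rank test; the state-level exemption rates below are invented for illustration.

```python
from scipy import stats

# Hypothetical paired state-level exemption rates (%): private vs. public
# schools in the same state (values invented for illustration).
private = [4.2, 3.8, 5.1, 2.9, 6.0, 4.7, 3.5, 5.6, 4.1, 3.9]
public  = [1.9, 1.5, 2.2, 1.1, 2.8, 2.0, 1.4, 2.5, 1.7, 1.6]

# Paired, non-parametric comparison, as in the study.
res = stats.wilcoxon(private, public)
```

    Because each state contributes one private/public pair, the signed rank test respects the pairing without assuming normality of the rate differences.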

  19. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  20. Controlling type I error rate for fast track drug development programmes.

    Science.gov (United States)

    Shih, Weichung J; Ouyang, Peter; Quan, Hui; Lin, Yong; Michiels, Bart; Bijnens, Luc

    2003-03-15

    The U.S. Food and Drug Administration (FDA) Modernization Act of 1997 has a Section (No. 112) entitled 'Expediting Study and Approval of Fast Track Drugs' (the Act). In 1998, the FDA issued a 'Guidance for Industry: the Fast Track Drug Development Programs' (the FTDD programmes) to meet the requirement of the Act. The purpose of FTDD programmes is to 'facilitate the development and expedite the review of new drugs that are intended to treat serious or life-threatening conditions and that demonstrate the potential to address unmet medical needs'. Since then many health products have reached patients who suffered from AIDS, cancer, osteoporosis, and many other diseases, sooner by utilizing the Fast Track Act and the FTDD programmes. In the meantime several scientific issues have also surfaced when following the FTDD programmes. In this paper we will discuss the concept of two kinds of type I errors, namely, the 'conditional approval' and the 'final approval' type I errors, and propose statistical methods for controlling them in a new drug submission process. Copyright 2003 John Wiley & Sons, Ltd.

  1. Bit Error Rate Due to Misalignment of Earth Station Antenna Pointing to Satellite

    Directory of Open Access Journals (Sweden)

    Wahyu Pamungkas

    2010-04-01

    Full Text Available One problem causing reduction of energy in satellite communications systems is the misalignment of the earth station antenna pointing to the satellite. Pointing error degrades the quality of the information signal and the energy per bit received at the earth station. In this research, pointing error occurred only at the receiving (Rx) antenna, while the transmitting (Tx) antenna pointed precisely to the satellite. The research was conducted on two satellites, namely TELKOM-1 and TELKOM-2. First, a measurement was made by directing the Tx antenna precisely to the satellite, resulting in an antenna pattern shown by a spectrum analyzer. The output from the spectrum analyzer is drawn to the right scale to describe the shift of the azimuth and elevation pointing angles towards the satellite. Drifting from the precise pointing influenced the received link budget, as indicated by the antenna pattern. This antenna pattern shows the reduction of the received power level as a result of pointing misalignment. In conclusion, increasing misalignment of pointing to the satellite reduces the received signal parameters in the link budget of the down-link traffic.
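
    Within the main lobe, the extra link-budget loss from a pointing error is commonly approximated by L(dB) = 12·(θ/θ₃dB)², where θ is the offset and θ₃dB the half-power beamwidth. A minimal sketch of this standard approximation (not the paper's measured data):

```python
def pointing_loss_db(offset_deg: float, beamwidth_3db_deg: float) -> float:
    """Approximate off-axis gain loss of a parabolic antenna (dB),
    valid for small offsets within the main lobe."""
    return 12.0 * (offset_deg / beamwidth_3db_deg) ** 2

# A half-beamwidth pointing error already costs 3 dB of link budget.
loss = pointing_loss_db(0.25, 0.5)
```

    The quadratic growth explains why even modest drift in azimuth or elevation shows up quickly in the received power level.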

  2. Who Do Hospital Physicians and Nurses Go to for Advice About Medications? A Social Network Analysis and Examination of Prescribing Error Rates.

    Science.gov (United States)

    Creswick, Nerida; Westbrook, Johanna Irene

    2015-09-01

    To measure the weekly medication advice-seeking networks of hospital staff, to compare patterns across professional groups, and to examine these in the context of prescribing error rates. A social network analysis was conducted. All 101 staff in 2 wards in a large, academic teaching hospital in Sydney, Australia, were surveyed (response rate, 90%) using a detailed social network questionnaire. The extent of weekly medication advice seeking was measured by density of connections, proportion of reciprocal relationships (reciprocity), number of colleagues to whom each person provided advice (in-degree), and perceptions of the amount and impact of advice seeking between physicians and nurses. Data on prescribing error rates from the 2 wards were compared. Weekly medication advice-seeking networks were sparse (density: 7% ward A and 12% ward B). Information sharing across professional groups was modest, and rates of reciprocation of advice were low (9% ward A, 14% ward B). Pharmacists provided advice to most people, and junior physicians also played central roles. Senior physicians provided medication advice to few people. Many staff perceived that physicians rarely sought advice from nurses when prescribing, but almost all believed that an increase in communication between physicians and nurses about medications would improve patient safety. The medication networks in ward B had higher density and reciprocation, and fewer senior physicians who were isolates. Ward B had a significantly lower rate of both procedural and clinical prescribing errors than ward A (0.63 clinical prescribing errors per admission [95% CI, 0.47-0.79] versus 1.81 per admission [95% CI, 1.49-2.13]). Medication advice-seeking networks among staff on hospital wards are limited. Hubs of advice provision include pharmacists, junior physicians, and senior nurses. Senior physicians are poorly integrated into medication advice networks.
    Strategies to improve the advice-giving networks between senior
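
    The density and reciprocity measures used above are straightforward to compute on a directed edge list. A minimal sketch with an invented advice-seeking graph:

```python
# Directed "who asks whom for medication advice" ties (hypothetical ward).
edges = {("nurse_A", "pharmacist"), ("jr_doc", "pharmacist"),
         ("pharmacist", "jr_doc"), ("nurse_B", "nurse_A"),
         ("jr_doc", "sr_doc")}
nodes = {n for e in edges for n in e}

# Density: observed directed ties / possible directed ties.
density = len(edges) / (len(nodes) * (len(nodes) - 1))

# Reciprocity: fraction of ties whose reverse tie also exists.
reciprocity = sum((b, a) in edges for a, b in edges) / len(edges)
```

    With 5 staff and 5 ties, density is 5/20 = 25%; only the pharmacist/junior-doctor pair is reciprocated, giving reciprocity 2/5 = 40%.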

  3. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    Science.gov (United States)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.
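
    For context, the theoretical AWGN bit error rate of coherent BPSK/MSK-class signalling is Pb = ½·erfc(√(Eb/N0)), a common baseline against which measured BER curves such as those in this test are interpreted (the formula is standard theory, not the paper's measurement):

```python
import math

def ber_coherent_psk(ebno_db: float) -> float:
    """Theoretical BER of coherent BPSK/MSK over AWGN:
    Pb = 0.5 * erfc(sqrt(Eb/N0))."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebno))

ber_0db = ber_coherent_psk(0.0)
ber_9db = ber_coherent_psk(9.0)
```

    A hardware link's measured BER sits above this curve; the gap (implementation penalty) quantifies degradation introduced by the fiber optic link.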

  4. Capacity Versus Bit Error Rate Trade-Off in the DVB-S2 Forward Link

    Directory of Open Access Journals (Sweden)

    Berioli Matteo

    2007-01-01

    Full Text Available The paper presents an approach to optimize the use of satellite capacity in DVB-S2 forward links. By reducing the so-called safety margins in the adaptive coding and modulation technique, it is possible to increase the spectral efficiency at the expense of an increased BER on the transmission. The work shows how a system can be tuned to operate at different degrees of this trade-off, and also the performance which can be achieved in terms of BER/PER, spectral efficiency, and the interarrival time, duration, and strength of the error bursts. The paper also describes how a Markov chain can be used to model the ModCod transitions in a DVB-S2 system, and it presents results for the calculation of the transition probabilities in two cases.
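
    The Markov-chain modelling of ModCod transitions mentioned above amounts to finding the stationary distribution of a transition matrix. A sketch with an invented 3-state matrix (the real chain's states and probabilities would come from measured traces):

```python
import numpy as np

# Hypothetical 3-state ModCod transition matrix (rows sum to 1);
# states could be e.g. QPSK 1/2, 8PSK 2/3, 16APSK 3/4.
P = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])

# Stationary distribution: solve pi @ P = pi subject to sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
```

    The stationary probabilities give the long-run fraction of time spent in each ModCod, from which average spectral efficiency and burst statistics can be derived.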

  6. Optimal classifier selection and negative bias in error rate estimation: an empirical study on high-dimensional prediction

    Directory of Open Access Journals (Sweden)

    Boulesteix Anne-Laure

    2009-12-01

    Full Text Available Abstract Background In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g. microarray data, since such analyses are particularly exposed to this kind of bias. Methods In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure. Results We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from the parameter tuning (including gene selection parameters as a special case and the bias resulting from the choice of the classification method are examined both separately and jointly. Conclusions The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively. We conclude that the strategy to present only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and suggest alternative approaches for properly reporting classification accuracy.
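
    The selection bias described above is easy to reproduce by simulation: with uninformative predictors every classifier's true error is 50%, yet the minimum observed error over many candidates is systematically lower. A minimal sketch (the classifier count matches the study's 124; everything else is invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 124  # samples, candidate classifier variants (as in the study)

# Labels carry no signal; every "classifier" is an independent coin flip,
# mimicking uninformative predictors after label permutation.
labels = rng.integers(0, 2, n)
preds = rng.integers(0, 2, (k, n))
errors = (preds != labels).mean(axis=1)

mean_err = errors.mean()   # close to 0.5, the honest error rate
min_err = errors.min()     # optimistically biased by post-hoc selection
```

    Reporting only the best of many variants therefore understates the true error even when no classifier has any predictive power.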

  7. Standardized error severity score (ESS) ratings to quantify risk associated with child restraint system (CRS) and booster seat misuse.

    Science.gov (United States)

    Rudin-Brown, Christina M; Kramer, Chelsea; Langerak, Robin; Scipione, Andrea; Kelsey, Shelley

    2017-11-17

    Although numerous research studies have reported high levels of error and misuse of child restraint systems (CRS) and booster seats in experimental and real-world scenarios, conclusions are limited because they provide little information regarding which installation issues pose the highest risk and thus should be targeted for change. Beneficial to legislating bodies and researchers alike would be a standardized, globally relevant assessment of the potential injury risk associated with more common forms of CRS and booster seat misuse, which could be applied with observed error frequency-for example, in car seat clinics or during prototype user testing-to better identify and characterize the installation issues of greatest risk to safety. A group of 8 leading world experts in CRS and injury biomechanics, who were members of an international child safety project, estimated the potential injury severity associated with common forms of CRS and booster seat misuse. These injury risk error severity score (ESS) ratings were compiled and compared to scores from previous research that had used a similar procedure but with fewer respondents. To illustrate their application, and as part of a larger study examining CRS and booster seat labeling requirements, the new standardized ESS ratings were applied to objective installation performance data from 26 adult participants who installed a convertible (rear- vs. forward-facing) CRS and booster seat in a vehicle, and a child test dummy in the CRS and booster seat, using labels that only just met minimal regulatory requirements. The outcome measure, the risk priority number (RPN), represented the composite scores of injury risk and observed installation error frequency. Variability within the sample of ESS ratings in the present study was smaller than that generated in previous studies, indicating better agreement among experts on what constituted injury risk. Application of the new standardized ESS ratings to installation
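
    The RPN idea, injury severity weighted by observed misuse frequency, can be sketched in a few lines; the ESS values and frequencies below are invented, not the expert panel's ratings:

```python
# Hypothetical expert error severity scores (ESS, 1-10) and observed
# misuse frequencies from an installation clinic (values invented).
misuse = {
    "loose harness":      {"ess": 9, "freq": 0.40},
    "loose seat install": {"ess": 8, "freq": 0.35},
    "wrong belt path":    {"ess": 7, "freq": 0.10},
    "chest clip too low": {"ess": 5, "freq": 0.55},
}

# Risk priority number: injury severity weighted by how often it occurs.
rpn = {k: v["ess"] * v["freq"] for k, v in misuse.items()}
worst = max(rpn, key=rpn.get)
```

    Ranking by RPN rather than raw frequency keeps a common but low-severity error from crowding out a rarer, high-severity one.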

  8. Relationship of Employee Attitudes and Supervisor-Controller Ratio to En Route Operational Error Rates

    National Research Council Canada - National Science Library

    Broach, Dana

    2002-01-01

    ...; Rodgers, Mogford, Mogford, 1998). In this study, the relationship of organizational factors to en route OE rates was investigated, based on an adaptation of the Human Factors Analysis and Classification System (HFACS; Shappell & Wiegmann 2000...

  9. Correcting for binomial measurement error in predictors in regression with application to analysis of DNA methylation rates by bisulfite sequencing.

    Science.gov (United States)

    Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal

    2016-09-30

    Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons; dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
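
    The attenuation caused by binomial measurement error, and its correction via a reliability ratio, can be illustrated by simulation. This is a simplified regression-calibration sketch under invented parameters, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 2000, 10          # observations; binomial trials per observation
b0, b1 = 1.0, 2.0        # true intercept and slope

p = rng.uniform(0.2, 0.8, n)                 # true proportions
x = rng.binomial(m, p) / m                   # error-prone observed proportion
y = b0 + b1 * p + rng.normal(0, 0.5, n)      # outcome depends on true p

# Naive OLS slope on x is attenuated by the binomial measurement error.
naive = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Regression calibration: estimate the reliability ratio
# lambda = var(p) / var(x), using E[x(1-x)] = E[p(1-p)] * (1 - 1/m).
ep1p = np.mean(x * (1 - x)) * m / (m - 1)    # estimate of E[p(1-p)]
lam = 1 - (ep1p / m) / np.var(x, ddof=1)
corrected = naive / lam
```

    Note the heteroscedasticity the abstract highlights: the error variance p(1-p)/m depends on the true value and on the sampling effort m.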

  10. Bit Error-Rate Minimizing Detector for Amplify-and-Forward Relaying Systems Using Generalized Gaussian Kernel

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2013-01-01

    In this letter, a new detector is proposed for the amplify-and-forward (AF) relaying system when communicating with the assistance of relays. The major goal of this detector is to improve the bit error rate (BER) performance of the receiver. The probability density function is estimated with the help of the kernel density technique. A generalized Gaussian kernel is proposed. This new kernel provides more flexibility and encompasses Gaussian and uniform kernels as special cases. The optimal window width of the kernel is calculated. Simulation results show that a gain of more than 1 dB can be achieved in terms of BER performance as compared to the minimum mean square error (MMSE) receiver when communicating over Rayleigh fading channels.
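
    A generalized Gaussian kernel K(u) = β/(2αΓ(1/β))·exp(−|u/α|^β) reduces to a Gaussian shape at β = 2 and approaches a uniform kernel as β grows. A minimal density-estimation sketch with this kernel (the detector itself is not reproduced here):

```python
import math
import numpy as np

def gg_kernel(u, alpha=1.0, beta=2.0):
    """Generalized Gaussian kernel; integrates to 1 for any alpha, beta > 0."""
    c = beta / (2 * alpha * math.gamma(1.0 / beta))
    return c * np.exp(-np.abs(u / alpha) ** beta)

def kde(x_grid, samples, h, beta=2.0):
    """Kernel density estimate with bandwidth h."""
    u = (x_grid[:, None] - samples[None, :]) / h
    return gg_kernel(u, beta=beta).mean(axis=1) / h

rng = np.random.default_rng(2)
samples = rng.normal(0, 1, 500)
grid = np.linspace(-5, 5, 401)
dens = kde(grid, samples, h=0.4, beta=1.5)
total = float(np.sum(dens) * (grid[1] - grid[0]))  # numerical integral
```

    The shape parameter β gives the flexibility the letter exploits: it can be tuned to the noise statistics instead of being fixed at the Gaussian case.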

  11. Soft error rate estimations of the Kintex-7 FPGA within the ATLAS Liquid Argon (LAr) Calorimeter

    International Nuclear Information System (INIS)

    Wirthlin, M J; Harding, A; Takai, H

    2014-01-01

    This paper summarizes the radiation testing performed on the Xilinx Kintex-7 FPGA in an effort to determine if the Kintex-7 can be used within the ATLAS Liquid Argon (LAr) Calorimeter. The Kintex-7 device was tested with wide-spectrum neutrons, protons, heavy-ions, and mixed high-energy hadron environments. The results of these tests were used to estimate the configuration RAM and block RAM upset rates within the ATLAS LAr. These estimations suggest that the configuration memory will upset at a rate of 1.1 × 10⁻¹⁰ upsets/bit/s and the block RAM will upset at a rate of 9.06 × 10⁻¹¹ upsets/bit/s. For the Kintex 7K325 device, this translates to 6.85 × 10⁻³ upsets/device/s for configuration memory and 1.49 × 10⁻³ upsets/device/s for block memory
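
    As a consistency check, the per-device rate should equal the per-bit rate times the number of monitored bits; the bit count below is inferred from the quoted figures rather than taken from a datasheet:

```python
# Back-of-envelope check on the quoted configuration-memory rates.
per_bit = 1.1e-10        # configuration upsets/bit/s
per_device = 6.85e-3     # configuration upsets/device/s (Kintex 7K325)
implied_bits = per_device / per_bit   # configuration bits monitored
```

    The implied count, on the order of 6 × 10⁷ configuration bits, is plausible for a mid-size Kintex-7 part, so the two quoted rates are mutually consistent.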

  12. Reliability of perceived neighbourhood conditions and the effects of measurement error on self-rated health across urban and rural neighbourhoods.

    Science.gov (United States)

    Pruitt, Sandi L; Jeffe, Donna B; Yan, Yan; Schootman, Mario

    2012-04-01

    Limited psychometric research has examined the reliability of self-reported measures of neighbourhood conditions, the effect of measurement error on associations between neighbourhood conditions and health, and potential differences in the reliabilities between neighbourhood strata (urban vs rural and low vs high poverty). We assessed overall and stratified reliability of self-reported perceived neighbourhood conditions using five scales (social and physical disorder, social control, social cohesion, fear) and four single items (multidimensional neighbouring). We also assessed measurement error-corrected associations of these conditions with self-rated health. Using random-digit dialling, 367 women without breast cancer (matched controls from a larger study) were interviewed twice, 2-3 weeks apart. Test-retest (intraclass correlation coefficients (ICC)/weighted κ) and internal consistency reliability (Cronbach's α) were assessed. Differences in reliability across neighbourhood strata were tested using bootstrap methods. Regression calibration corrected estimates for measurement error. All measures demonstrated satisfactory internal consistency (α ≥ 0.70) and either moderate (ICC/κ=0.41-0.60) or substantial (ICC/κ=0.61-0.80) test-retest reliability in the full sample. Internal consistency did not differ by neighbourhood strata. Test-retest reliability was significantly lower among rural (vs urban) residents for two scales (social control, physical disorder) and two multidimensional neighbouring items; test-retest reliability was higher for physical disorder and lower for one multidimensional neighbouring item among the high (vs low) poverty strata. After measurement error correction, the magnitude of associations between neighbourhood conditions and self-rated health were larger, particularly in the rural population. Research is needed to develop and test reliable measures of perceived neighbourhood conditions relevant to the health of rural populations.
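
    Internal consistency (Cronbach's α) is computed from item and total-score variances: α = k/(k−1)·(1 − Σ var(itemᵢ)/var(total)). A minimal sketch on an invented, perfectly consistent scale, for which α = 1:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: subjects x items matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Perfectly consistent 3-item scale (hypothetical data): alpha = 1.
scores = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]], float)
alpha = cronbach_alpha(scores)
```

    Real scales fall below 1; the study's α ≥ 0.70 threshold is the conventional cut-off for satisfactory internal consistency.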

  13. Error associated with model predictions of wildland fire rate of spread

    Science.gov (United States)

    Miguel G. Cruz; Martin E. Alexander

    2015-01-01

    How well can we expect to predict the spread rate of wildfires and prescribed fires? The degree of accuracy in model predictions of wildland fire behaviour characteristics is dependent on the model's applicability to a given situation, the validity of the model's relationships, and the reliability of the model input data (Alexander and Cruz 2013b). We...

  14. Error-free 5.1 Tbit/s data generation on a single-wavelength channel using a 1.28 Tbaud symbol rate

    DEFF Research Database (Denmark)

    Mulvad, Hans Christian Hansen; Galili, Michael; Oxenløwe, Leif Katsuo

    2009-01-01

    We demonstrate a record bit rate of 5.1 Tbit/s on a single wavelength using a 1.28 Tbaud OTDM symbol rate, DQPSK data-modulation, and polarisation-multiplexing. Error-free performance (BER...
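
    The headline rate follows from simple arithmetic: 1.28 Tbaud × 2 bits/symbol (DQPSK) × 2 polarisations gives a 5.12 Tbit/s raw line rate, consistent with the quoted 5.1 Tbit/s:

```python
symbol_rate = 1.28e12    # 1.28 Tbaud OTDM symbol rate
bits_per_symbol = 2      # DQPSK carries 2 bits per symbol
pol_channels = 2         # polarisation multiplexing doubles capacity

line_rate = symbol_rate * bits_per_symbol * pol_channels  # raw bit/s
```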

  15. A higher chest compression rate may be necessary for metronome-guided cardiopulmonary resuscitation.

    Science.gov (United States)

    Chung, Tae Nyoung; Kim, Sun Wook; You, Je Sung; Cho, Young Soon; Chung, Sung Phil; Park, Incheol

    2012-01-01

    Metronome guidance is a simple and economical feedback system for guiding cardiopulmonary resuscitation (CPR). However, a recent study showed that metronome guidance reduced the depth of chest compression. The results of previous studies suggest that a higher chest compression rate is associated with a better CPR outcome than a lower chest compression rate, irrespective of metronome use. Based on this finding, we hypothesized that the lower chest compression rate, rather than metronome use itself, promoted the reduction in chest compression depth in the recent study. One minute of chest compression-only CPR was performed following a metronome sound played at 1 of 4 different rates: 80, 100, 120, and 140 ticks/min. Average compression depths (ACDs) and duty cycles were compared using repeated measures analysis of variance, and the values in the absence and presence of metronome guidance were compared. Both the ACD and duty cycle increased as the metronome rate increased (P = .017); the values at metronome rates of 80 and 100 ticks/min were significantly lower than those for the procedures without metronome guidance. The ACD and duty cycle for chest compression increase as the metronome rate increases during metronome-guided CPR. A higher rate of chest compression is necessary for metronome-guided CPR to prevent suboptimal quality of chest compression. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Vastus Lateralis Motor Unit Firing Rate Is Higher in Women With Patellofemoral Pain.

    Science.gov (United States)

    Gallina, Alessio; Hunt, Michael A; Hodges, Paul W; Garland, S Jayne

    2018-05-01

    To compare neural drive, determined from motor unit firing rate, in the vastus medialis and lateralis in women with and without patellofemoral pain. Cross-sectional study. University research laboratory. Women (N=56) 19 to 35 years of age, including 36 with patellofemoral pain and 20 controls. Not applicable. Participants sustained an isometric knee extension contraction at 10% of their maximal voluntary effort for 70 seconds. Motor units (N=414) were identified using high-density surface electromyography. Average firing rate was calculated between 5 and 35 seconds after recruitment for each motor unit. Initial firing rate was the inverse of the first 3 motor unit interspike intervals. In control participants, vastus medialis motor units discharged at higher rates than vastus lateralis motor units (P=.001). This was not observed in women with patellofemoral pain (P=.78) because of a higher discharge rate of vastus lateralis compared with control participants (P=.002). No between-group differences were observed for vastus medialis (P=.93). Similar results were obtained for the initial motor unit firing rate. These findings suggest that women with patellofemoral pain have a higher neural drive to vastus lateralis but not vastus medialis, which may be a contributor to the altered patellar kinematics observed in some studies. The different neural drive may be an adaptation to patellofemoral pain, possibly to compensate for decreased quadriceps force production, or a precursor of patellofemoral pain. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  17. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Benoit Macq

    2008-07-01

    Full Text Available Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANETs. The proposed packet-based scheme has low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
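
    The core of such a rate-allocation scheme is picking the highest code rate whose residual failure probability still meets the QoS target. A simplified erasure-code sketch under an iid packet-loss model (the JPWL scheme itself is per-layer and more elaborate):

```python
from math import comb

def decode_failure_prob(n: int, k: int, p: float) -> float:
    """Probability that more than n-k of n packets are lost, defeating an
    erasure code that can recover any n-k losses (iid loss model)."""
    r = n - k
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(r + 1, n + 1))

def max_rate_k(n: int, p: float, target: float) -> int:
    """Largest k (highest code rate) whose residual failure probability
    still meets the target."""
    for k in range(n, 0, -1):
        if decode_failure_prob(n, k, p) <= target:
            return k
    return 0

# With 10% packet loss and a 1e-3 failure target over 20-packet blocks,
# the allocator settles on rate 13/20.
k = max_rate_k(n=20, p=0.10, target=1e-3)
```

    A real allocator repeats this trade-off per quality layer, protecting headers and base layers more heavily than enhancement data.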

  18. A Simulation Analysis of Errors in the Measurement of Standard Electrochemical Rate Constants from Phase-Selective Impedance Data.

    Science.gov (United States)

    1987-09-30

    ...of the AC current, including the time dependence at a growing DME, at a given fixed potential, either in the presence or the absence of an... The relative error in k_ob(app) is relatively small for k_s(true) ≤ 0.5 cm s⁻¹, and increases rapidly for larger rate constants as k_ob reaches the...

  19. Rates of medical errors and preventable adverse events among hospitalized children following implementation of a resident handoff bundle.

    Science.gov (United States)

    Starmer, Amy J; Sectish, Theodore C; Simon, Dennis W; Keohane, Carol; McSweeney, Maireade E; Chung, Erica Y; Yoon, Catherine S; Lipsitz, Stuart R; Wassner, Ari J; Harper, Marvin B; Landrigan, Christopher P

    2013-12-04

    Handoff miscommunications are a leading cause of medical errors. Studies comprehensively assessing handoff improvement programs are lacking. To determine whether introduction of a multifaceted handoff program was associated with reduced rates of medical errors and preventable adverse events, fewer omissions of key data in written handoffs, improved verbal handoffs, and changes in resident-physician workflow. Prospective intervention study of 1255 patient admissions (642 before and 613 after the intervention) involving 84 resident physicians (42 before and 42 after the intervention) from July-September 2009 and November 2009-January 2010 on 2 inpatient units at Boston Children's Hospital. Resident handoff bundle, consisting of standardized communication and handoff training, a verbal mnemonic, and a new team handoff structure. On one unit, a computerized handoff tool linked to the electronic medical record was introduced. The primary outcomes were the rates of medical errors and preventable adverse events measured by daily systematic surveillance. The secondary outcomes were omissions in the printed handoff document and resident time-motion activity. Medical errors decreased from 33.8 per 100 admissions (95% CI, 27.3-40.3) to 18.3 per 100 admissions (95% CI, 14.7-21.9; P < .001), and preventable adverse events decreased from 3.3 per 100 admissions (95% CI, 1.7-4.8) to 1.5 per 100 admissions (95% CI, 0.51-2.4; P = .04) following the intervention. There were fewer omissions of key handoff elements on printed handoff documents, especially on the unit that received the computerized handoff tool (significant reductions of omissions in 11 of 14 categories with the computerized tool; significant reductions in 2 of 14 categories without the computerized tool). Physicians spent a greater percentage of time in a 24-hour period at the patient bedside after the intervention (10.6%; 95% CI, 9.2%-12.2%) than before (8.3%; 95% CI, 7.1%-9.8%; P = .03). The average duration of verbal

  20. A web-based team-oriented medical error communication assessment tool: development, preliminary reliability, validity, and user ratings.

    Science.gov (United States)

    Kim, Sara; Brock, Doug; Prouty, Carolyn D; Odegard, Peggy Soule; Shannon, Sarah E; Robins, Lynne; Boggs, Jim G; Clark, Fiona J; Gallagher, Thomas

    2011-01-01

    Multiple-choice exams are not well suited for assessing communication skills. Standardized patient assessments are costly and patient and peer assessments are often biased. Web-based assessment using video content offers the possibility of reliable, valid, and cost-efficient means for measuring complex communication skills, including interprofessional communication. We report development of the Web-based Team-Oriented Medical Error Communication Assessment Tool, which uses videotaped cases for assessing skills in error disclosure and team communication. Steps in development included (a) defining communication behaviors, (b) creating scenarios, (c) developing scripts, (d) filming video with professional actors, and (e) writing assessment questions targeting team communication during planning and error disclosure. Using valid data from 78 participants in the intervention group, coefficient alpha estimates of internal consistency were calculated based on the Likert-scale questions and ranged from α=.79 to α=.89 for each set of 7 Likert-type discussion/planning items and from α=.70 to α=.86 for each set of 8 Likert-type disclosure items. The preliminary test-retest Pearson correlation based on the scores of the intervention group was r=.59 for discussion/planning and r=.25 for error disclosure sections, respectively. Content validity was established through reliance on empirically driven published principles of effective disclosure as well as integration of expert views across all aspects of the development process. In addition, data from 122 medicine and surgical physicians and nurses showed high ratings for video quality (4.3 of 5.0), acting (4.3), and case content (4.5). Web assessment of communication skills appears promising. Physicians and nurses across specialties respond favorably to the tool.

  1. Does the Economic Crisis Have an Influence on the Higher Education Dropout Rate?

    Science.gov (United States)

    Leão Fernandes, Graça; Chagas Lopes, Margarida

    2016-01-01

    This research aims to identify the effects of the economic crisis on higher education (HE) dropout rates at Lisbon School of Economics and Management (ISEG)--Universidade de Lisboa, after having controlled for individual characteristics, family background, High School and HE trajectories. Our main hypothesis is that the economic crisis induces…

  2. Will ageing lead to a higher real exchange rate for the Netherlands?

    NARCIS (Netherlands)

    van Ewijk, C.; Volkerink, M.

    2012-01-01

    Long-term projections for the Netherlands indicate that demand for nontradables—e.g. health care services—will increase relative to supply due to population ageing. If this leads to higher future real exchange rates, this will erode the return of the savings currently made to prepare for ageing.

  3. Will ageing lead to a higher real exchange rate for the Netherlands?

    NARCIS (Netherlands)

    van Ewijk, C.; Volkerink, M.

    2011-01-01

    Long-term projections for the Netherlands indicate that demand for nontradables - e.g. health care services - will increase relative to supply due to population ageing. If this leads to higher future real exchange rates, this will erode the return of the savings currently made to prepare for ageing.

  4. Structure analysis of tax revenue and inflation rate in Banda Aceh using vector error correction model with multiple alpha

    Science.gov (United States)

    Sofyan, Hizir; Maulia, Eva; Miftahuddin

    2017-11-01

    A country has several important parameters for achieving economic prosperity, such as tax revenue and the inflation rate. One of the largest revenues of the State Budget in Indonesia comes from the tax sector, while the inflation rate occurring in a country can be used as an indicator to gauge the economic problems it faces. Given the importance of tax revenue and inflation-rate control in achieving economic prosperity, it is necessary to analyze the structure of the relationship between tax revenue and the inflation rate. This study aims to produce the best VECM (Vector Error Correction Model) with optimal lag using various alpha levels, and to perform structural analysis using the Impulse Response Function (IRF) of the VECM models to examine the relationship between tax revenue and inflation in Banda Aceh. The results showed that the best model for the tax revenue and inflation rate data of Banda Aceh City using alpha 0.01 is a VECM with optimal lag 2, while the best model using alpha 0.05 and 0.1 is a VECM with optimal lag 3. However, the VECM model with alpha 0.01 yielded four significant models: the income tax model and the models for the overall, health, and education inflation rates in Banda Aceh, whereas the VECM model with alpha 0.05 and 0.1 yielded one significant model, the income tax model. Based on these VECM models, two structural IRF analyses were formed to examine the relationship between tax revenue and inflation in Banda Aceh: the IRF with VECM(2) and the IRF with VECM(3).
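A central step in fitting a VECM such as the one above is choosing the lag order by an information criterion. The sketch below is not the authors' procedure: it selects the AIC-minimizing lag for a least-squares VAR fit (the underlying representation of a VECM) on synthetic bivariate data with a known true order of 2; all coefficients and sample sizes are hypothetical.

```python
import numpy as np

def var_aic(data, p):
    """AIC of a VAR(p) fitted by least squares; data has shape (T, k)."""
    T, k = data.shape
    Y = data[p:]
    # Regressor matrix: intercept plus lags 1..p of both series
    X = np.hstack([data[p - i:T - i] for i in range(1, p + 1)])
    X = np.hstack([np.ones((T - p, 1)), X])
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ B
    sigma = (resid.T @ resid) / (T - p)          # residual covariance
    n_params = k * (k * p + 1)
    return np.log(np.linalg.det(sigma)) + 2 * n_params / (T - p)

rng = np.random.default_rng(1)
T = 300
e = rng.normal(size=(T, 2))
y = np.zeros((T, 2))
for t in range(2, T):                            # true lag order is 2
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + e[t]
best_lag = min(range(1, 5), key=lambda p: var_aic(y, p))
```

With a genuine lag-2 dependence, the AIC of the lag-2 fit beats the lag-1 fit decisively; in practice one would repeat this per significance level, as the abstract does for alpha 0.01 versus 0.05/0.1.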

  5. The dynamic effect of exchange-rate volatility on Turkish exports: Parsimonious error-correction model approach

    Directory of Open Access Journals (Sweden)

    Demirhan Erdal

    2015-01-01

    Full Text Available This paper aims to investigate the effect of exchange-rate stability on real export volume in Turkey, using monthly data for the period February 2001 to January 2010. The Johansen multivariate cointegration method and the parsimonious error-correction model are applied to determine long-run and short-run relationships between real export volume and its determinants. In this study, the conditional variance of the GARCH(1,1) model is taken as a proxy for exchange-rate stability, and generalized impulse-response functions and variance-decomposition analyses are applied to analyze the dynamic effects of variables on real export volume. The empirical findings suggest that exchange-rate stability has a significant positive effect on real export volume, both in the short and the long run.
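The GARCH(1,1) conditional variance used as the volatility proxy above follows a simple one-line recursion. The sketch below shows that recursion on simulated returns; the parameter values (omega, alpha, beta) and the return series are hypothetical, not estimates from the paper's data.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion: h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1}."""
    h = np.empty_like(returns)
    h[0] = returns.var()                     # initialize at the sample variance
    for t in range(1, len(returns)):
        h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1]
    return h

rng = np.random.default_rng(7)
r = rng.normal(scale=0.02, size=108)         # ~108 monthly returns, Feb 2001-Jan 2010
h = garch11_variance(r, omega=1e-5, alpha=0.1, beta=0.85)
vol_proxy = np.sqrt(h)                       # volatility series used as the stability proxy
```

In the paper's setup, a series like `vol_proxy` would enter the error-correction model as the exchange-rate-stability regressor.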

  6. Why do younger women have higher breast cancer recurrence rates after breast-conserving surgery?

    International Nuclear Information System (INIS)

    Nishimura, Reiki; Matsuda, Masakazu; Miyayama, Haruhiko; Okazaki, Shinji; Kai, Chiharu; Ozaki, N.

    2003-01-01

    Preventing breast cancer recurrence after breast-conserving surgery is an important issue. The main factors contributing to such recurrence are positive margins, absence of radiotherapy and young age. To investigate the clinical significance of age in breast-conserving surgery, we examined the relationship between clinicopathological findings or outcome and age, especially young age. The cases were divided into three groups by age: 35 years or younger, 36-50 years, and 51 years or older. Between April 1989 and March 2003, 743 patients were treated with breast-conserving surgery. There were 49 patients aged 35 years or younger (6.6%). Younger age significantly correlated with positive surgical margin, lymph node metastases, higher proliferative activity, negative estrogen receptor (ER) or progesterone receptor (PgR), larger tumor size, and shorter nipple-tumor distances. Although younger patients had a higher recurrence rate irrespective of radiotherapy, margin status had an impact on recurrence rate. Thus, the reason young age was a significant factor for breast recurrence after breast-conserving surgery was that young patients frequently had numerous risk factors such as positive margin, higher proliferative activity, positive nodes, negative ER/PgR and larger tumor. However, negative surgical margins could reduce recurrence rates even in young women. These results suggest that more suitable criteria and strategies may be needed for young patients with breast cancer. (author)

  7. Impact of automated dispensing cabinets on medication selection and preparation error rates in an emergency department: a prospective and direct observational before-and-after study.

    Science.gov (United States)

    Fanning, Laura; Jones, Nick; Manias, Elizabeth

    2016-04-01

    The implementation of automated dispensing cabinets (ADCs) in healthcare facilities appears to be increasing, in particular within Australian hospital emergency departments (EDs). While the investment in ADCs is on the increase, no studies have specifically investigated the impacts of ADCs on medication selection and preparation error rates in EDs. Our aim was to assess the impact of ADCs on medication selection and preparation error rates in an ED of a tertiary teaching hospital. Pre intervention and post intervention study involving direct observations of nurses completing medication selection and preparation activities before and after the implementation of ADCs in the original and new emergency departments within a 377-bed tertiary teaching hospital in Australia. Medication selection and preparation error rates were calculated and compared between these two periods. Secondary end points included the impact on medication error type and severity. A total of 2087 medication selection and preparations were observed among 808 patients pre and post intervention. Implementation of ADCs in the new ED resulted in a 64.7% (1.96% versus 0.69%, respectively, P = 0.017) reduction in medication selection and preparation errors. All medication error types were reduced in the post intervention study period. Medication error severity was not significantly affected, as all errors detected were categorised as minor. The implementation of ADCs could reduce medication selection and preparation errors and improve medication safety in an ED setting. © 2015 John Wiley & Sons, Ltd.
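A before-and-after comparison of error proportions like the one above is typically tested with a two-proportion z-test. The sketch below uses hypothetical counts chosen only to reproduce the reported rates (1.96% vs 0.69%); the actual numerators and denominators in the study may differ.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Z statistic and two-sided p-value for H0: p1 == p2 (pooled normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # equals 2 * (1 - Phi(|z|))
    return z, p_value

# Hypothetical counts consistent with the reported rates: 20/1020 pre, 7/1010 post
z, p_value = two_proportion_z(20, 1020, 7, 1010)
relative_reduction = (20 / 1020 - 7 / 1010) / (20 / 1020)   # ~64.7% reduction
```

With these counts the relative reduction matches the reported 64.7% and the test rejects at the 5% level, in line with the reported P = 0.017.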

  8. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    Science.gov (United States)

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
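The mechanism described above, simple linear regression of a non-normal trait on a rare variant under the null, can be illustrated with a small Monte Carlo sketch. This is not the GAW 19 analysis: the sample size, number of tests, and trait distributions below are hypothetical, and a normal approximation replaces the exact t reference distribution.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(42)
n, n_tests, alpha = 1000, 2000, 0.05

def type1_rate(maf, trait_sampler):
    """Fraction of null simple-linear-regression tests rejected at `alpha`."""
    hits = 0
    for _ in range(n_tests):
        g = rng.binomial(2, maf, size=n)   # SNV genotypes under the null (no effect)
        y = trait_sampler(n)               # trait simulated independently of g
        if g.std() == 0:                   # monomorphic draw: test undefined, skip
            continue
        r = np.corrcoef(g, y)[0, 1]
        t = r * sqrt((n - 2) / (1 - r * r))
        hits += erfc(abs(t) / sqrt(2)) < alpha   # two-sided normal approximation
    return hits / n_tests

normal_rate = type1_rate(0.01, lambda m: rng.normal(size=m))
skewed_rate = type1_rate(0.01, lambda m: rng.gamma(0.5, size=m))
```

For the normal trait, the empirical rejection rate stays near the nominal 5%; for the strongly skewed gamma trait the rare-variant tests tend to inflate, and, as the abstract notes, the inflation grows as the minor allele frequency and the significance threshold shrink.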

  9. Reply: Birnbaum's (2012) statistical tests of independence have unknown Type-I error rates and do not replicate within participant

    Directory of Open Access Journals (Sweden)

    Yun-shil Cha

    2013-01-01

    Full Text Available Birnbaum (2011, 2012) questioned the iid (independent and identically distributed) sampling assumptions used by state-of-the-art statistical tests in Regenwetter, Dana and Davis-Stober's (2010, 2011) analysis of the ``linear order model''. Birnbaum (2012) cited, but did not use, a test of iid by Smith and Batchelder (2008) with analytically known properties. Instead, he created two new test statistics with unknown sampling distributions. Our rebuttal has five components: (1) We demonstrate that the Regenwetter et al. data pass Smith and Batchelder's test of iid with flying colors. (2) We provide evidence from Monte Carlo simulations that Birnbaum's (2012) proposed tests have unknown Type-I error rates, which depend on the actual choice probabilities and on how data are coded as well as on the null hypothesis of iid sampling. (3) Birnbaum analyzed only a third of Regenwetter et al.'s data. We show that his two new tests fail to replicate on the other two-thirds of the data, within participants. (4) Birnbaum selectively picked data of one respondent to suggest that choice probabilities may have changed partway into the experiment. Such nonstationarity could potentially cause a seemingly good fit to be a Type-II error. We show that the linear order model fits equally well if we allow for warm-up effects. (5) Using hypothetical data, Birnbaum (2012) claimed to show that ``true-and-error'' models for binary pattern probabilities overcome the alleged shortcomings of Regenwetter et al.'s approach. We disprove this claim on the same data.

  10. Comparison of the effect of paper and computerized procedures on operator error rate and speed of performance

    International Nuclear Information System (INIS)

    Converse, S.A.; Perez, P.B.; Meyer, S.; Crabtree, W.

    1994-01-01

    The Computerized Procedures Manual (COPMA-II) is an advanced procedure manual that can be used to select and execute procedures, to monitor the state of plant parameters, and to help operators track their progress through plant procedures. COPMA-II was evaluated in a study that compared the speed and accuracy of operators' performance when they performed with COPMA-II and traditional paper procedures. Sixteen licensed reactor operators worked in teams of two to operate the Scales Pressurized Water Reactor Facility at North Carolina State University. Each team performed one change of power with each type of procedure to simulate performance under normal operating conditions. Teams then performed one accident scenario with COPMA-II and one with paper procedures. Error rates, performance times, and subjective estimates of workload were collected, and were evaluated for each combination of procedure type and scenario type. For the change of power task, accuracy and response time were not different for COPMA-II and paper procedures. Operators did initiate responses to both accident scenarios fastest with paper procedures. However, procedure type did not moderate response completion time for either accident scenario. For accuracy, performance with paper procedures resulted in twice as many errors as did performance with COPMA-II. Subjective measures of mental workload for the accident scenarios were not affected by procedure type

  11. Inflation of type I error rates by unequal variances associated with parametric, nonparametric, and Rank-Transformation Tests

    Directory of Open Access Journals (Sweden)

    Donald W. Zimmerman

    2004-01-01

    Full Text Available It is well known that the two-sample Student t test fails to maintain its significance level when the variances of treatment groups are unequal, and, at the same time, sample sizes are unequal. However, introductory textbooks in psychology and education often maintain that the test is robust to variance heterogeneity when sample sizes are equal. The present study discloses that, for a wide variety of non-normal distributions, especially skewed distributions, the Type I error probabilities of both the t test and the Wilcoxon-Mann-Whitney test are substantially inflated by heterogeneous variances, even when sample sizes are equal. The Type I error rate of the t test performed on ranks replacing the scores (rank-transformed data is inflated in the same way and always corresponds closely to that of the Wilcoxon-Mann-Whitney test. For many probability densities, the distortion of the significance level is far greater after transformation to ranks and, contrary to known asymptotic properties, the magnitude of the inflation is an increasing function of sample size. Although nonparametric tests of location also can be sensitive to differences in the shape of distributions apart from location, the Wilcoxon-Mann-Whitney test and rank-transformation tests apparently are influenced mainly by skewness that is accompanied by specious differences in the means of ranks.
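The classic failure mode named in the opening sentence above, unequal variances combined with unequal sample sizes, is easy to reproduce by simulation. The sketch below is a minimal Monte Carlo check of the pooled-variance t test under the null of equal means; the group sizes, variance ratio, and replication count are hypothetical, and a normal approximation stands in for the t reference distribution.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(3)

def pooled_t_type1(n1, s1, n2, s2, reps=4000, alpha=0.05):
    """Monte Carlo Type I error of the pooled-variance (Student) t test."""
    rejections = 0
    for _ in range(reps):
        x = rng.normal(0.0, s1, n1)   # both groups share the same true mean
        y = rng.normal(0.0, s2, n2)
        sp2 = ((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1)) / (n1 + n2 - 2)
        t = (x.mean() - y.mean()) / sqrt(sp2 * (1 / n1 + 1 / n2))
        rejections += erfc(abs(t) / sqrt(2)) < alpha   # two-sided normal approximation
    return rejections / reps

baseline = pooled_t_type1(40, 1.0, 10, 1.0)   # homogeneous variances: near nominal 5%
inflated = pooled_t_type1(40, 1.0, 10, 4.0)   # small group has the large variance
```

When the smaller group carries the larger variance, the pooled estimate understates the standard error and the rejection rate climbs far above the nominal level; Zimmerman's point is that with skewed distributions inflation appears even at equal sample sizes, which the same harness can probe by swapping in skewed samplers.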

  12. The Effect of Exposure to High Noise Levels on the Performance and Rate of Error in Manual Activities.

    Science.gov (United States)

    Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra

    2016-03-01

    Sound is among the significant environmental factors affecting people's health; it plays an important role in both physical and psychological injuries, and it also affects individuals' performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on the performance and rate of error in manual activities. This was an interventional study conducted on 50 students at Shiraz University of Medical Sciences (25 males and 25 females), in which each person served as his or her own control. Performance was assessed at sound levels of 70, 90, and 110 dB, varying the physical features and conditions of the sound source, and applying the Two-Arm Coordination Test. The data were analyzed using SPSS version 16. Repeated measurements were used to compare the length of performance as well as the errors measured in the test. Based on the results, we found a direct and significant association between sound levels and the length of performance. Moreover, the participants' performance differed significantly across sound levels (at 110 dB as opposed to 70 and 90 dB, p < 0.05 and p < 0.001, respectively). This study found that a sound level of 110 dB had an important effect on the individuals' performances, i.e., the performances were decreased.

  13. Calm Merino ewes have a higher ovulation rate and more multiple pregnancies than nervous ewes.

    Science.gov (United States)

    van Lier, E; Hart, K W; Viñoles, C; Paganoni, B; Blache, D

    2017-07-01

    In 1990, two selection lines of Merino sheep were established for low and high behavioural reactivity (calm and nervous temperament) at the University of Western Australia. Breeding records consistently showed that calm ewes weaned 10% to 19% more lambs than the nervous ewes. We hypothesise that calm ewes could have a higher ovulation rate than nervous ewes and/or a lower rate of embryo mortality. We tested these hypotheses by comparing the ovulation rate and the rate of embryo mortality between the calm and nervous lines before and after synchronisation and artificial insemination. Merino ewes from the temperament selection lines (calm, n=100; nervous, n=100) were synchronised (early breeding season) for artificial insemination (day 0) (intravaginal sponges containing fluogestone acetate and eCG immediately after sponge withdrawal). On days -17 and 11, ovarian cyclicity and corpora lutea, and on days 30 and 74, pregnancies and embryos/foetuses were determined by ultrasound. Progesterone, insulin and leptin concentrations were determined in blood plasma samples from days 5, 12 and 17. Ovarian cyclicity before and after oestrus synchronisation did not differ between the lines, but ovulation rate did (day -17: calm 1.63; nervous 1.26). Ovulation rate on day 11 was higher than on day -17. Loss of embryos by day 30 was high (calm: 71/150; nervous: 68/130), but nervous ewes had a lower proportion (15/47) of multiple pregnancies compared with calm ewes (30/46). Nervous ewes had higher insulin (32.0 pmol/l ± 1.17 SEM; P=0.013) and lower leptin (1.18 μg/l ± 0.04 SEM; P=0.002) concentrations than calm ewes (insulin: 27.8 pmol/l ± 1.17 SEM; leptin: 1.35 μg/l ± 0.04 SEM). The differences in reproductive outcomes between the calm and nervous ewes were mainly due to a higher ovulation rate in calm ewes. We suggest that reproduction in nervous ewes is compromised by factors leading up to ovulation and conception, or the uterine environment during early pregnancy, that reflect

  14. Bit Error Rate Performance Analysis of a Threshold-Based Generalized Selection Combining Scheme in Nakagami Fading Channels

    Directory of Open Access Journals (Sweden)

    Kousa Maan

    2005-01-01

    Full Text Available The severity of fading on mobile communication channels calls for the combining of multiple diversity sources to achieve acceptable error-rate performance. Traditional approaches perform the combining of the different diversity sources using the conventional selective diversity combining (CSC), equal-gain combining (EGC), or maximal-ratio combining (MRC) schemes. CSC and MRC are the two extremes of the compromise between performance quality and complexity. Some researchers have proposed a generalized selection combining scheme (GSC) that combines a fixed-size subset of the best branches out of the available diversity resources. In this paper, we analyze a generalized selection combining scheme based on a threshold criterion rather than a fixed-size subset of the best channels. In this scheme, only those diversity branches whose energy levels are above a specified threshold are combined. Closed-form analytical solutions for the BER performance of this scheme over Nakagami fading channels are derived. We also discuss the merits of this scheme over GSC.
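The threshold-based combining rule described above can be sketched with a short simulation. This is not the paper's closed-form analysis: the branch count, mean SNR, threshold, and the fallback-to-best-branch behaviour when no branch clears the threshold are all illustrative assumptions, and Rayleigh fading (the Nakagami m=1 special case, giving exponentially distributed branch SNRs) is used for simplicity.

```python
import numpy as np

rng = np.random.default_rng(5)

def threshold_gsc_snr(branch_snrs, threshold):
    """Combine only branches whose instantaneous SNR exceeds `threshold`
    (MRC-style, output SNR is the sum of selected-branch SNRs); fall back
    to the single best branch if no branch qualifies."""
    selected = branch_snrs[branch_snrs >= threshold]
    if selected.size == 0:
        return branch_snrs.max()
    return selected.sum()

# Rayleigh fading: branch SNRs are exponential (Nakagami m=1), 4 branches
L, trials, mean_snr = 4, 20000, 1.0
snrs = rng.exponential(mean_snr, size=(trials, L))
gsc_out = np.array([threshold_gsc_snr(row, threshold=0.5) for row in snrs])
csc_out = snrs.max(axis=1)   # conventional selection: best branch only
mrc_out = snrs.sum(axis=1)   # maximal-ratio: all branches
```

By construction the threshold scheme's output SNR sits between the two extremes the abstract names: never below conventional selection, never above full MRC, with the threshold trading combining complexity against performance.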

  15. Splenectomy is associated with higher infection and pneumonia rates among trauma laparotomy patients.

    Science.gov (United States)

    Fair, Kelly A; Connelly, Christopher R; Hart, Kyle D; Schreiber, Martin A; Watters, Jennifer M

    2017-05-01

    Splenectomy increases lifetime risk of thromboembolism (VTE) and is associated with long-term infectious complications, primarily, overwhelming post-splenectomy infection (OPSI). Our objective was to evaluate risk of VTE and infection at index hospitalization post-splenectomy. Retrospective review of all patients who received a laparotomy in the NTDB. Propensity score matching for splenectomy was performed, based on ISS, abdominal abbreviated injury score >3, GCS, sex and mechanism. Major complications, VTE, and infection rates were compared. Multiple logistic regression models were utilized to evaluate splenectomy-associated complications. 93,221 laparotomies were performed and 17% underwent splenectomy. Multiple logistic regression models did not demonstrate an association between splenectomy and major complications (OR 0.96, 95% CI 0.91-1.03, p = 0.25) or VTE (OR 1.05, 95% CI 0.96-1.14, p = 0.33). Splenectomy was independently associated with infection (OR 1.07, 95% CI 1.00-1.14, p = 0.045). Subgroup analysis of patients with infection demonstrated that splenectomy was most strongly associated with pneumonia (OR 1.41, 95% CI 1.26-1.57). Splenectomy is not associated with higher overall complication or VTE rates during index hospitalization. However, splenectomy is associated with a higher rate of pneumonia. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. 26 CFR 301.6621-3 - Higher interest rate payable on large corporate underpayments.

    Science.gov (United States)

    2010-04-01

    ... level) are not treated as letters of proposed deficiency that allow the taxpayer an opportunity for... resulting from a math error on Y's return. Y did not request an abatement of the assessment pursuant to...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...

  17. With age a lower individual breathing reserve is associated with a higher maximal heart rate.

    Science.gov (United States)

    Burtscher, Martin; Gatterer, Hannes; Faulhaber, Martin; Burtscher, Johannes

    2018-01-01

    Maximal heart rate (HRmax) declines linearly with increasing age. Regular exercise training is thought to partly prevent this decline, whereas sex and habitual physical activity do not. High exercise capacity is associated with a high cardiac output (HR x stroke volume) and high ventilatory requirements. Due to the close cardiorespiratory coupling, we hypothesized that the individual ventilatory response to maximal exercise might be associated with age-related HRmax. Retrospective analyses were conducted on the results of 129 consecutively performed routine cardiopulmonary exercise tests. The study sample comprised healthy subjects of both sexes across a broad range of ages (20-86 years). Maximal values of power output, minute ventilation, oxygen uptake and heart rate were assessed by incremental cycle spiroergometry. Linear multivariate regression analysis revealed that, in addition to age, the individual breathing reserve at maximal exercise was independently predictive of HRmax. A lower breathing reserve, due to a high ventilatory demand and/or a low ventilatory capacity and more pronounced at a higher age, was associated with higher HRmax. Age explained 72% of the observed variance in HRmax, which improved to 83% when the variable "breathing reserve" was entered. The presented findings indicate an independent association between the breathing reserve at maximal exercise and maximal heart rate, i.e. a low individual breathing reserve is associated with a higher age-related HRmax. A deeper understanding of this association has to be investigated in a more physiological scenario. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. New system for higher recovery rate of water borne Cryptosporidium oocysts and Giardia cysts

    DEFF Research Database (Denmark)

    Al-Sabi, Mohammad Nafi Solaiman; Gad, Jens; Klinting, Mette

    2012-01-01

    Background: The two most common water borne pathogenic protozoa, Cryptosporidium and Giardia, cause diarrhea worldwide. Detecting these parasites in water samples depends on effective parasite recovery from the water matrix. The reported low recovery rates of the currently used filter methods motivate the development of systems with higher recovery rates. Materials and methods: Five replicates of IMS-purified Cryptosporidium oocysts and Giardia cysts (N=2×10³) were injected into a specially coated filter unit with a carefully chosen pore size. Following filtration, sonication was performed. Recovery rates above 85% were recorded when the filter was sonicated. Sonication usually affects parasite viability but could be tuned into a useful tool for enhanced backwash collection of parasites using a specially constructed filter unit and a sonication protocol.

  19. Why the EU-15 Maintains Higher CIT Rates than the New Member States?

    Directory of Open Access Journals (Sweden)

    Karpowicz Andrzej

    2014-11-01

    Full Text Available The European Union is not a homogenous area. This lack of homogeneity extends to taxes, which vary across jurisdictions. On average, Western Europe imposes significantly higher taxes on capital than the New Member States, which joined the Community in 2004 and 2007. Often this fact is simply taken for granted. However, there are several arguments that can explain this variance. Although several of these arguments are well known and have been researched, they have not been assessed in combination or used in a comparative analysis of corporate income tax (CIT) rates between EU member states. Because of interest in harmonizing CIT throughout the EU, the roots of divergent CIT rates are of particular and timely interest. Therefore, this article attempts to demonstrate the differences in CIT rates between the EU-15 and the New Member States. In so doing, the general characteristics of these country groupings are identified and then discussed in the context of taxation theory.

  20. Higher order constraints on the Higgs production rate from fixed-target DIS data

    International Nuclear Information System (INIS)

    Alekhin, S.; Bluemlein, J.; Moch, S.

    2011-01-01

    The constraints of fixed-target DIS data in fits of parton distributions including QCD corrections to next-to-next-to-leading order are studied. We point out a potential problem in the analysis of the NMC data which can lead to inconsistencies in the extracted value of α_s(M_Z) and the gluon distribution at higher orders in QCD. The implications for predictions of rates for Standard Model Higgs boson production at hadron colliders are investigated. We conclude that the current range of excluded Higgs boson masses at the Tevatron appears to be much too large. (orig.)

  1. Strain rate effects in nuclear steels at room and higher temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Solomos, G. E-mail: george.solomos@jrc.it; Albertini, C.; Labibes, K.; Pizzinato, V.; Viaccoz, B

    2004-04-01

    An investigation of strain rate, temperature and size effects in three nuclear steels has been conducted. The materials are: ferritic steel 20MnMoNi55 (vessel head), austenitic steel X6CrNiNb1810 (upper internal structure), and ferritic steel 26NiCrMo146 (bolting). Smooth cylindrical tensile specimens of three sizes have been tested at strain rates from 0.001 to 300 s⁻¹, at room and elevated temperatures (400-600 deg. C). Full stress-strain diagrams have been obtained, and additional parameters have been calculated based on them. The results demonstrate a clear influence of temperature, which substantially reduces mechanical strength with respect to RT conditions. The effect of strain rate is also shown: at RT the strain rate effect shifts the flow stress curves upward, whereas at the higher temperatures a mild downshifting of the flow curves is manifested. Size effect tendencies have also been observed. Some implications for assessing pressure vessel structural integrity under severe accident conditions are considered.

  2. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Directory of Open Access Journals (Sweden)

    Philip J Kellman

    Full Text Available Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert

  3. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Science.gov (United States)

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  4. Higher Growth Rate of Branch Duct Intraductal Papillary Mucinous Neoplasms Associates With Worrisome Features.

    Science.gov (United States)

    Kolb, Jennifer M; Argiriadi, Pamela; Lee, Karen; Liu, Xiaoyu; Bagiella, Emilia; Lucas, Aimee L; Kim, Michelle Kang; Kumta, Nikhil A; Nagula, Satish; Sarpel, Umut; DiMaio, Christopher J

    2018-03-11

    or invasive cancers. BD-IPMNs that developed worrisome features were associated with a significantly higher rate of growth than lesions with low-risk features. Low risk BD-IPMNs that grow more than 2.5 mm/year might require surveillance. Copyright © 2018 AGA Institute. Published by Elsevier Inc. All rights reserved.

  5. Analyzing the propagation behavior of scintillation index and bit error rate of a partially coherent flat-topped laser beam in oceanic turbulence.

    Science.gov (United States)

    Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh

    2015-11-01

    In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression is derived for the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam propagating through weak to moderate oceanic turbulence; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strengths of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness has lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.
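
The averaging step described above, weighting a conditional bit error rate by a log-normal intensity density, can be sketched numerically. This is an illustrative model rather than the paper's exact expressions: the OOK-style conditional error probability, the unit-mean intensity normalization, and the chosen SNR are assumptions.

```python
import math

def lognormal_ber(sigma2_i, snr, n=2000):
    """Average BER over log-normal intensity fluctuations (illustrative).

    sigma2_i: scintillation index (normalized intensity variance)
    snr: signal-to-noise ratio at the mean intensity
    With E[I] = 1, ln I ~ Normal(mu, sigma_ln^2) where
    sigma_ln^2 = ln(1 + sigma2_i) and mu = -sigma_ln^2 / 2.
    Conditional BER (OOK-style model): 0.5 * erfc(snr * I / (2 * sqrt(2))).
    """
    sigma_ln = math.sqrt(math.log(1.0 + sigma2_i))
    mu = -0.5 * sigma_ln ** 2
    # midpoint integration over u = ln(I) on mu +/- 6 sigma
    lo, hi = mu - 6.0 * sigma_ln, mu + 6.0 * sigma_ln
    du = (hi - lo) / n
    total = 0.0
    for k in range(n):
        u = lo + (k + 0.5) * du
        pdf = math.exp(-(u - mu) ** 2 / (2.0 * sigma_ln ** 2)) / (
            sigma_ln * math.sqrt(2.0 * math.pi))
        cond_ber = 0.5 * math.erfc(snr * math.exp(u) / (2.0 * math.sqrt(2.0)))
        total += pdf * cond_ber * du
    return total

# stronger scintillation degrades the average BER at a fixed SNR:
# lognormal_ber(0.5, 6.0) exceeds lognormal_ber(0.1, 6.0)
```

The monotonic dependence on the scintillation index mirrors the qualitative behavior reported in the abstract: flatter beams with lower scintillation yield a lower BER.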

  6. The Differences in Error Rate and Type between IELTS Writing Bands and Their Impact on Academic Workload

    Science.gov (United States)

    Müller, Amanda

    2015-01-01

    This paper attempts to demonstrate the differences in writing between International English Language Testing System (IELTS) bands 6.0, 6.5 and 7.0. An analysis of exemplars provided by the IELTS test makers reveals that IELTS 6.0, 6.5 and 7.0 writers can make a minimum of 206, 96 and 35 errors per 1000 words, respectively. The following section…

  7. Rabies Vaccination: Higher Failure Rates in Imported Dogs than in those Vaccinated in Italy.

    Science.gov (United States)

    Rota Nodari, E; Alonso, S; Mancin, M; De Nardi, M; Hudson-Cooke, S; Veggiato, C; Cattoli, G; De Benedictis, P

    2017-03-01

    The current European Union (EU) legislation decrees that pets entering the EU from a rabies-infected third country have to obtain a satisfactory virus-neutralizing antibody level, while those moving within the EU require only rabies vaccination as the risk of moving a rabid pet within the EU is considered negligible. A number of factors driving individual variations in dog vaccine response have been previously reported, including a high rate of vaccine failure in puppies, especially those subject to commercial transport. A total of 21 001 observations collected from dogs (2006-2012) vaccinated in compliance with the current EU regulations were statistically analysed to assess the effect of different risk factors related to rabies vaccine efficacy. Within this framework, we were able to compare the vaccination failure rate in a group of dogs entering the Italian border from EU and non-EU countries to those vaccinated in Italy prior to international travel. Our analysis identified that cross-breeds and two breed categories showed high vaccine success rates, while Beagles and Boxers were the least likely to show a successful response to vaccination (88.82% and 90.32%, respectively). Our analysis revealed diverse performances among the commercially available vaccines, in terms of serological peak windows, and marked differences according to geographical area. Of note, we found a higher vaccine failure rate in imported dogs (13.15%) than in those vaccinated in Italy (5.89%). Our findings suggest that the choice of vaccine may influence the likelihood of an animal achieving a protective serological level and that time from vaccination to sampling should be considered when interpreting serological results. A higher vaccine failure in imported compared to Italian dogs highlights the key role that border controls still have in assessing the full compliance of pet movements with EU legislation to minimize the risk of rabies being reintroduced into a disease-free area.

  8. Pyrosequencing as a tool for the detection of Phytophthora species: error rate and risk of false Molecular Operational Taxonomic Units.

    Science.gov (United States)

    Vettraino, A M; Bonants, P; Tomassini, A; Bruni, N; Vannini, A

    2012-11-01

    To evaluate the accuracy of pyrosequencing for the description of Phytophthora communities in terms of taxa identification and the risk of assigning false Molecular Operational Taxonomic Units (MOTUs). Pyrosequencing of Internal Transcribed Spacer 1 (ITS1) amplicons was used to describe the structure of a DNA mixture comprising eight Phytophthora spp. and Pythium vexans. Pyrosequencing resulted in 16 965 reads, detecting all species in the template DNA mixture. Reducing the ITS1 sequence identity threshold resulted in a decrease in the number of unmatched reads but a concomitant increase in the number of false MOTUs. The total error rate was 0.63% and comprised mainly mismatches (0.25%). Pyrosequencing of the ITS1 region is an efficient and accurate technique for the detection and identification of Phytophthora spp. in environmental samples. However, the risk of allocating false MOTUs, even when demonstrated to be low, may require additional validation with alternative detection methods. Phytophthora spp. are considered among the most destructive groups of invasive plant pathogens, affecting thousands of cultivated and wild plants worldwide. Simultaneous early detection of Phytophthora complexes in environmental samples offers a unique opportunity for the interception of known and unknown species along pathways of introduction, along with the identification of these organisms in invaded environments. © 2012 The Authors. Letters in Applied Microbiology © 2012 The Society for Applied Microbiology.

  9. Does higher income inequality adversely influence infant mortality rates? Reconciling descriptive patterns and recent research findings.

    Science.gov (United States)

    Siddiqi, Arjumand; Jones, Marcella K; Erwin, Paul Campbell

    2015-04-01

    As the struggle continues to explain the relatively high rates of infant mortality (IMR) exhibited in the United States, a renewed emphasis is being placed on the role of possible 'contextual' determinants. Cross-sectional and short time-series studies have found that higher income inequality is associated with higher IMR at the state level. Yet, descriptively, the longer-term trends in income inequality and in IMR seem to call such results into question. To assess whether, over the period 1990-2007, state-level income inequality is associated with state-level IMR; to examine whether the overall effect of income inequality on IMR over this period varies by state; to test whether the association between income inequality and IMR varies across this time period. IMR data--number of deaths per 1000 live births in a given state and year--were obtained from the U.S. Centers for Disease Control Wonder database. Income inequality was measured using the Gini coefficient, which varies from zero (complete equality) to 100 (complete inequality). Covariates included state-level poverty rate, median income, and proportion of high school graduates. Fixed and random effects regressions were conducted to test hypotheses. Fixed effects models suggested that, overall, during the period 1990-2007, income inequality was inversely associated with IMR (β = -0.07, SE (0.01)). Random effects models suggested that when the relationship was allowed to vary at the state-level, it remained inverse (β = -0.05, SE (0.01)). However, an interaction between income inequality and time suggested that, as time increased, the effect of income inequality had an increasingly positive association with total IMR (β = 0.009, SE (0.002)). The influence of state income inequality on IMR is dependent on time, which may proxy for time-dependent aspects of societal context. Copyright © 2015 Elsevier Ltd. All rights reserved.
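
The Gini coefficient used above as the inequality measure can be computed directly from an income vector. A minimal sketch on the same 0 to 100 scale (the income values are made up for illustration):

```python
def gini(incomes):
    """Gini coefficient on a 0-100 scale (0 = complete equality,
    100 = complete inequality), via the sorted-sum identity
    G = sum_i (2i - n - 1) * x_(i) / (n^2 * mean)."""
    xs = sorted(incomes)
    n = len(xs)
    mean = sum(xs) / n
    cum = 0.0
    for i, x in enumerate(xs, start=1):
        cum += (2 * i - n - 1) * x
    return 100.0 * cum / (n * n * mean)

# perfectly equal incomes score 0; one person holding everything
# approaches 100 as the group grows
print(gini([40, 40, 40, 40]))  # 0.0
print(gini([0, 0, 0, 100]))    # 75.0
```

With four people and all income held by one of them, the coefficient is 100 × (n − 1)/n = 75, which is the second printed value.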

  10. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    Science.gov (United States)

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-05-01

    Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. Type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of FST between the ancestral European and African populations. Type-II error rate was investigated for a SNP characterized by a high value of FST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rate was adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in type-I error rate.

  11. Outlier removal, sum scores, and the inflation of the Type I error rate in independent samples t tests: the power of alternatives and recommendations.

    Science.gov (United States)

    Bakker, Marjan; Wicherts, Jelte M

    2014-09-01

    In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
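
The flexibility mechanism the authors describe, trying several Z thresholds after seeing their effect on the outcome, can be reproduced in a small simulation. This is a hedged sketch, not the authors' code: the skewed null distribution (exponential), the sample size, and the critical value of 2.0 are assumptions for illustration.

```python
import random
import statistics

def welch_t(a, b):
    # Welch's t statistic for two independent samples
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.fmean(a) - statistics.fmean(b)) / (
        va / len(a) + vb / len(b)) ** 0.5

def trim(g, z):
    # drop observations more than z sample SDs from the sample mean
    m, s = statistics.fmean(g), statistics.stdev(g)
    kept = [x for x in g if abs(x - m) <= z * s]
    return kept if len(kept) > 2 else g

def reject_rates(n=30, sims=2000, seed=1):
    """Null rejection rates (|t| > 2) for a fixed analysis with no outlier
    removal versus a 'flexible' analyst who also tries Z = 2, 2.5, 3 and
    keeps any significant result. Data are skewed (exponential), loosely
    mimicking sum scores from a short, difficult test."""
    rng = random.Random(seed)
    fixed_hits = flexible_hits = 0
    for _ in range(sims):
        g1 = [rng.expovariate(1.0) for _ in range(n)]
        g2 = [rng.expovariate(1.0) for _ in range(n)]
        plain = abs(welch_t(g1, g2)) > 2.0
        fixed_hits += plain
        flexible_hits += plain or any(
            abs(welch_t(trim(g1, z), trim(g2, z))) > 2.0
            for z in (2.0, 2.5, 3.0))
    return fixed_hits / sims, flexible_hits / sims
```

By construction the flexible analyst's rate can only exceed the fixed one, illustrating the inflation the authors report; the exact gap depends on the assumed distribution and sample size.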

  12. Higher resting heart rate variability predicts skill in expressing some emotions.

    Science.gov (United States)

    Tuck, Natalie L; Grant, Rosemary C I; Sollers, John J; Booth, Roger J; Consedine, Nathan S

    2016-12-01

    Vagally mediated heart rate variability (vmHRV) is a measure of cardiac vagal tone, and is widely viewed as a physiological index of the capacity to regulate emotions. However, studies have not directly tested whether vmHRV is associated with the ability to facially express emotions. In extending prior work, the current report tested links between resting vmHRV and the objectively assessed ability to facially express emotions, hypothesizing that higher vmHRV would predict greater expressive skill. Eighty healthy women completed self-reported measures, before attending a laboratory session in which vmHRV and the ability to express six emotions in the face were assessed. A repeated measures analysis of variance revealed a marginal main effect for vmHRV on skill overall; individuals with higher resting vmHRV were only better able to deliberately facially express anger and interest. Findings suggest that differences in resting vmHRV are associated with the objectively assessed ability to facially express some, but not all, emotions, with potential implications for health and well-being. © 2016 Society for Psychophysiological Research.

  13. Invasive acacias experience higher ant seed removal rates at the invasion edges

    Directory of Open Access Journals (Sweden)

    D. Montesinos

    2012-06-01

    Seed dispersal is a key process for the invasion of new areas by exotic species. Introduced plants often take advantage of native generalist dispersers. Australian acacias are primarily dispersed by ants in their native range and produce seeds bearing a protein- and lipid-rich reward for ant mutualists (elaiosome). Nevertheless, the role of myrmecochory in the expansion of Australian acacias in European invaded areas is still not clear. We selected one European population of Acacia dealbata and another of A. longifolia and offered elaiosome-bearing and elaiosome-removed seeds to local ant communities. For each species, seeds were offered both in high-density acacia stands and in low-density invasion edges. For both acacia species, seed removal was significantly higher at the low-density edges. For A. longifolia, manual elimination of elaiosomes reduced the chance of seed removal by 80% in the low-density edges, whereas it made no difference in the high-density stands. For A. dealbata, the absence of the elaiosome reduced the seed removal rate by 52%, independently of acacia density. Our data suggest that invasive acacias have found effective ant seed dispersers in Europe and that the importance of such dispersers is higher at the invasion edges.

  14. Optimizing rate of nitrogen application for higher growth and yield of wheat (triticum aestivum l.) cultivars

    International Nuclear Information System (INIS)

    Maqsood, M.; Shehzad, M.A.; Asim, A.; Ahmad, W.

    2012-01-01

    In order to optimize the nitrogen rate for three wheat (Triticum aestivum L.) cultivars and obtain higher grain yield, a split-plot experiment based on a Randomized Complete Block Design with three replicates was conducted in the research field of the University of Agriculture, Faisalabad, during the Rabi season 2006-07. Nitrogen levels (N0 = 0, N1 = 50, N2 = 100, N3 = 150 kg ha⁻¹) were allocated to main plots, while wheat cultivars (V1 = Punjnad-I, V2 = Fareed-2006, V3 = Uqab-2000) were allocated to sub-plots during the course of the growing season. Traits such as plant height, fertile tillers, spike length, spikelets spike⁻¹, grains spike⁻¹, 1000-grain weight, straw yield, grain yield and harvest index (HI) were significantly (P = 0.05) affected by the treatment combinations. Maximum grain yield was obtained by cultivar V3 (Uqab-2000) when treated with the N3 (150 kg ha⁻¹) fertilizer level. Results also showed that wheat yield increased significantly (P = 0.05) with increasing nitrogen rate. Increasing nitrogen levels led to significant increases in the plant height (101.81 cm), spike-bearing tillers (495.77), grains spike⁻¹ (61.45), straw yield (8.60 t ha⁻¹) and harvest index (36.17%) of V3 (Uqab-2000). In all traits except germination count, V3 (Uqab-2000) was found to be superior. (author)

  15. Effect of additives for higher removal rate in lithium niobate chemical mechanical planarization

    International Nuclear Information System (INIS)

    Jeong, Sukhoon; Lee, Hyunseop; Cho, Hanchul; Lee, Sangjik; Kim, Hyoungjae; Kim, Sungryul; Park, Jaehong; Jeong, Haedo

    2010-01-01

    The high roughness and numerous defects created by lithium niobate (LN; LiNbO3) processes such as traditional grinding and mechanical polishing (MP) should be reduced for manufacturing LN devices. Therefore, an alternative process for obtaining a defect-free, smooth surface is needed. Chemical mechanical planarization (CMP) is a suitable method for the LN process because it uses a combined approach of chemical and mechanical effects. First, we investigated the LN CMP process using a commercial slurry while varying process conditions such as down pressure and relative velocity. However, the LN CMP process time using the commercial slurry was long because of the low material removal rate (MRR). Therefore, to improve the MRR, the effects of additives such as an oxidizer (hydrogen peroxide; H2O2) and a complexing agent (citric acid; C6H8O7) in a potassium hydroxide (KOH) based slurry were investigated. For the manufactured slurry containing H2O2 and citric acid in the KOH base, the MRR with H2O2 at 2 wt% and citric acid at 0.06 M was higher than the MRR under the other conditions.

  16. Higher Rates of DZ Twinning in a Twenty-First Century Birth Cohort.

    Science.gov (United States)

    Rhea, Sally Ann; Corley, Robin P; Heath, Andrew C; Iacono, William G; Neale, Michael C; Hewitt, John K

    2017-09-01

    The Colorado Twin Registry is a population-based registry initiated in 1984 with the involvement of the Colorado Department of Health, Division of Vital Statistics. Recruitment includes birth cohorts several years prior to 1984 and all subsequent years. As part of a recent evaluation of Colorado birth records for the years 2006 through 2008, we became aware of a shifting trend in the proportion of MZ and DZ twins in the Colorado population. Historically (Bulmer, 1970, The Biology of Twinning in Man, Clarendon, Oxford), we have expected a 1/3, 1/3, 1/3 ratio of MZ, same-sex DZ and opposite-sex DZ twins in Caucasian populations. An excess of MZ pairs in most studies was assumed to be due to selection bias. Somewhat more recently, Hur et al. (1995, Behav Genet, 25, 337-340) provided evidence that the DZ twinning rate was falling and that therefore selection bias was not the reason for higher MZ enrollment in most twin studies. They suggested that twin researchers might consider strategies to over-enroll DZ pairs to maximize statistical power. In contrast, we now find that of the 3217 twin births in Colorado from 2006 to 2008 with identified sex information, the MZ rate is estimated at only 22%, and we have corroborating reports from other states of similar estimates. These were calculated applying Weinberg's rule, which assumes an equal birth rate for same-sex and opposite-sex DZ pairs, so that the proportion of MZ in a sample is the proportion of same-sex (MM + FF) pairs minus the proportion of opposite-sex (MF, FM) pairs. We explore factors, such as an increase in the proportion of non-Caucasian parents and an increase in average maternal age, which may contribute to this shift.
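
Weinberg's differential rule as applied above is a one-line computation: assuming same-sex and opposite-sex DZ pairs occur at equal rates, the opposite-sex count estimates the same-sex DZ count, so the MZ count is the same-sex surplus. A sketch with made-up pair counts (not the Colorado data), chosen to reproduce a 22% MZ rate:

```python
def weinberg_mz_proportion(mm, ff, mf):
    """Estimate the MZ proportion in a twin sample via Weinberg's rule.

    mm, ff: counts of male-male and female-female pairs
    mf: count of opposite-sex pairs
    Assuming same-sex and opposite-sex DZ pairs are equally likely, the
    same-sex DZ count is approximately mf, so MZ ~= (mm + ff) - mf.
    """
    total = mm + ff + mf
    return ((mm + ff) - mf) / total

# hypothetical counts: 1830 same-sex pairs, 1170 opposite-sex pairs
print(weinberg_mz_proportion(900, 930, 1170))  # 0.22
```

Note the estimate can be noisy or even negative in small samples when the equal-rate assumption is violated, which is one reason shifts like the one reported above need corroboration across registries.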

  17. The effect of speaking rate on serial-order sound-level errors in normal healthy controls and persons with aphasia.

    Science.gov (United States)

    Fossett, Tepanta R D; McNeil, Malcolm R; Pratt, Sheila R; Tompkins, Connie A; Shuster, Linda I

    Although many speech errors can be generated at either a linguistic or motoric level of production, phonetically well-formed sound-level serial-order errors are generally assumed to result from disruption of phonologic encoding (PE) processes. An influential model of PE (Dell, 1986; Dell, Burger, & Svec, 1997) predicts that speaking rate should affect the relative proportion of these serial-order sound errors (anticipations, perseverations, exchanges). These predictions have been extended to, and have special relevance for, persons with aphasia (PWA) because of the increased frequency with which their speech errors occur and because their localization within the functional linguistic architecture may help in diagnosis and treatment. Supporting evidence regarding the effect of speaking rate on phonological encoding has been provided by studies using young normal language (NL) speakers and computer simulations. Limited data exist for older NL users, and no group data exist for PWA. This study tested the phonologic encoding properties of Dell's model of speech production (Dell, 1986; Dell et al., 1997), which predicts that increasing speaking rate affects the relative proportion of serial-order sound errors (i.e., anticipations, perseverations, and exchanges). The effects of speech rate on the error ratios of anticipation/exchange (AE), anticipation/perseveration (AP) and vocal reaction time (VRT) were examined in 16 normal healthy controls (NHC) and 16 PWA without concomitant motor speech disorders. The participants were recorded performing a phonologically challenging (tongue twister) speech production task at their typical and two faster speaking rates. A significant effect of increased rate was obtained for the AP but not the AE ratio. Significant effects of group and rate were obtained for VRT. Although the significant effect of rate for the AP ratio provided evidence that changes in speaking rate did affect PE, the results failed to support the model-derived predictions…

  18. Effects of categorization method, regression type, and variable distribution on the inflation of Type-I error rate when categorizing a confounding variable.

    Science.gov (United States)

    Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A

    2015-03-15

    The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
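
The core mechanism, residual confounding left behind when a continuous confounder is replaced by categories, can be demonstrated in a few lines. This sketch tests the exposure's partial association with the outcome after adjusting for either the continuous confounder or its median split; the normal distributions, effect sizes, and sample size are illustrative assumptions, not the paper's simulation design.

```python
import math
import random
import statistics

def residuals(y, x):
    # residuals of y after simple linear regression on x
    mx, my = statistics.fmean(x), statistics.fmean(y)
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum(
        (a - mx) ** 2 for a in x)
    return [c - my - b * (a - mx) for a, c in zip(x, y)]

def pearson(u, v):
    mu, mv = statistics.fmean(u), statistics.fmean(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a - mu) ** 2 for a in u)
                    * sum((b - mv) ** 2 for b in v))
    return num / den

def type1_rate(categorize, n=100, sims=500, seed=7):
    """Rejection rate for a null exposure effect when the confounder is
    adjusted for either as-is or as a median split."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        c = [rng.gauss(0.0, 1.0) for _ in range(n)]
        x = [ci + rng.gauss(0.0, 1.0) for ci in c]  # exposure driven by confounder
        y = [ci + rng.gauss(0.0, 1.0) for ci in c]  # outcome driven by confounder only
        med = statistics.median(c)
        adj = [1.0 if ci > med else 0.0 for ci in c] if categorize else c
        # partial association of exposure with outcome given the adjustment
        r = pearson(residuals(x, adj), residuals(y, adj))
        t = r * math.sqrt((n - 3) / (1.0 - r * r))
        hits += abs(t) > 1.98
    return hits / sims
```

With the continuous adjustment the rejection rate sits near the nominal 5%; with the median split it rises far above it, mirroring the inflation reported above.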

  19. Higher Education Support Services and Graduation Rates of Structured Education Program Students

    Science.gov (United States)

    Hepner, Seth

    2017-01-01

    The 1st-year retention rate of the Structured Education Program (SEP) is 90%, yet the 6-year graduation rate of SEP students is 29%. The gap between SEP 1st-year retention and graduation rates is the problem that this study addressed. The low graduation rate of SEP students is an important issue because graduation rates are used to measure the…

  20. Increased Total Anesthetic Time Leads to Higher Rates of Surgical Site Infections in Spinal Fusions.

    Science.gov (United States)

    Puffer, Ross C; Murphy, Meghan; Maloney, Patrick; Kor, Daryl; Nassr, Ahmad; Freedman, Brett; Fogelson, Jeremy; Bydon, Mohamad

    2017-06-01

    A retrospective review of a consecutive series of spinal fusions comparing patient and procedural characteristics of patients who developed surgical site infections (SSIs) after spinal fusion. It is known that increased surgical time (incision to closure) is associated with a higher rate of postoperative SSIs. We sought to determine whether increased total anesthetic time (intubation to extubation) is a factor in the development of SSIs as well. In spine surgery for deformity and degenerative disease, SSI has been associated with operative time, revealing a nearly 10-fold increase in SSI rates in prolonged surgery. Surgical time is associated with infections in other surgical disciplines as well. No studies have reported whether total anesthetic time (intubation to extubation) has an association with SSIs. Surgical records were searched in a retrospective fashion to identify all spine fusion procedures performed between January 2010 and July 2012. All SSIs during that timeframe were recorded and compared with the list of cases performed between 2010 and 2012 in a case-control design. There were 20 (1.7%) SSIs in this fusion cohort. On univariate analyses of operative factors, there was a significant association between infections and both total anesthetic time (infection 7.6 ± 0.5 hrs vs. no infection 6.0 ± 0.1 hrs) and operative time (infection 5.5 ± 0.4 hrs vs. no infection 4.4 ± 0.06 hrs), whereas level of pathology and emergent surgery were not significant. On multivariate logistic analysis, BMI and total anesthetic time remained independent predictors of SSI whereas ASA status and operative time did not. Increasing BMI and total anesthetic time were independent predictors of SSIs in this cohort of over 1000 consecutive spinal fusions. Level of Evidence: 3.

  1. Removal of boron(III) by N-methylglucamine-type cellulose derivatives with higher adsorption rate

    International Nuclear Information System (INIS)

    Inukai, Yoshinari; Tanaka, Yoshiharu; Matsuda, Toshio; Mihara, Nobutake; Yamada, Kouji; Nambu, Nobuyoshi; Itoh, Osamu; Doi, Takao; Kaida, Yasuhiko; Yasuda, Seiji

    2004-01-01

    To obtain adsorbents for boron(III) derived from a natural polymer, two forms (powder and fiber) of N-methylglucamine-type cellulose derivatives were newly synthesized. After graft polymerization of the two forms of cellulose with a vinyl monomer having epoxy groups, the N-methylglucamine-type cellulose derivatives were obtained by the reaction of the grafted cellulose with N-methylglucamine. The adsorption capacities of the cellulose derivatives for boron(III) were at the same level as that of a commercially available N-methylglucamine-type polystyrene resin. However, the cellulose derivatives adsorbed boron(III) more quickly than the polystyrene resin. The adsorption and desorption of boron(III) with a column method using the cellulose fiber were achieved at a higher flow rate than with the polystyrene resin. In addition, the boron(III) adsorbed on the cellulose fiber column was quantitatively recovered with dilute hydrochloric acid at 20- and 200-fold increased concentrations. Consequently, it was found that the cellulose derivatives were superior to the polystyrene resin as adsorbents for boron(III) for the treatment of large quantities of wastewater

  2. Continuous fermentation and in-situ reed separation of butyric acid for higher sugar consumption rate and productivity

    DEFF Research Database (Denmark)

    Baroi, George Nabin; Skiadas, Ioannis; Westermann, Peter

    that disconnection of the REED system resulted in much lower (48 and 83% for glucose and xylose, respectively) sugar consumption rates and consequently lower butyric acid production rates. It was also noticeable that continuous operation, even without the REED system, resulted in higher glucose consumption rates...

  3. Bit-error-rate performance analysis of self-heterodyne detected radio-over-fiber links using phase and intensity modulation

    DEFF Research Database (Denmark)

    Yin, Xiaoli; Yu, Xianbin; Tafur Monroy, Idelfonso

    2010-01-01

    We theoretically and experimentally investigate the performance of two self-heterodyne detected radio-over-fiber (RoF) links employing phase modulation (PM) and quadrature-biased intensity modulation (IM), in terms of bit-error-rate (BER) and optical signal-to-noise ratio (OSNR). In both links, self…

  4. Downlink Error Rates of Half-duplex Users in Full-duplex Networks over a Laplacian Inter-User Interference Limited and EGK fading

    KAUST Repository

    Soury, Hamza; Elsawy, Hesham; Alouini, Mohamed-Slim

    2017-01-01

    This paper develops a mathematical framework to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). The developed model is used to motivate long term pairing for users that have…

  5. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations

    NARCIS (Netherlands)

    Derks, E. M.; Zwinderman, A. H.; Gamazon, E. R.

    2017-01-01

    Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates;

  6. Does adding metformin to clomifene citrate lead to higher pregnancy rates in a subset of women with polycystic ovary syndrome?

    OpenAIRE

    Moll, E.; Korevaar, J.C.; Bossuyt, P.M.M.; van der Veen, F.

    2008-01-01

    BACKGROUND An RCT among newly diagnosed, therapy naive women with polycystic ovary syndrome (PCOS) showed no significant differences in ovulation rate, ongoing pregnancy rate or spontaneous abortion rate in favour of clomifene citrate plus metformin compared with clomifene citrate. We wanted to assess whether there are specific subgroups of women with PCOS in whom clomifene citrate plus metformin leads to higher pregnancy rates. METHODS Subgroup analysis based on clinical and biochemical para...

  7. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    Science.gov (United States)

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim results show promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit an increase. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
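
The conditional power quantity referenced above can be sketched for a normal test statistic using the B-value decomposition: at information fraction t the interim statistic fixes B(t) = Z_t·√t, and the remaining increment is normal. This is the textbook current-trend formulation, not the article's own criterion; the one-sided boundary z = 1.96 is an assumption.

```python
import math

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_power(z_interim, frac, z_alpha=1.96):
    """Conditional power of a one-sided normal test, assuming the effect
    continues at the rate estimated from the interim data.

    z_interim: interim test statistic
    frac: information fraction t in (0, 1)
    B(t) = z_interim * sqrt(t); drift estimate theta = z_interim / sqrt(t);
    B(1) - B(t) ~ Normal(theta * (1 - t), 1 - t).
    """
    b_t = z_interim * math.sqrt(frac)
    theta = z_interim / math.sqrt(frac)
    mean_b1 = b_t + theta * (1.0 - frac)
    sd = math.sqrt(1.0 - frac)
    return 1.0 - norm_cdf((z_alpha - mean_b1) / sd)

# halfway through the trial, conditional power is exactly 50% when the
# projected final statistic equals the boundary (z_interim = 1.96/sqrt(2))
print(round(conditional_power(1.96 / math.sqrt(2.0), 0.5), 3))  # 0.5
```

Interim results above this point fall in the "promise" region described above, where raising the sample size does not inflate the type I error rate.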

  8. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Directory of Open Access Journals (Sweden)

    Sharmila Vaz

    Full Text Available The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States samples (US) are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187), from five randomly selected public schools in Perth, western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports), not just student as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).

  9. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Science.gov (United States)

    Vaz, Sharmila; Parsons, Richard; Passmore, Anne Elizabeth; Andreou, Pantelis; Falkmer, Torbjörn

    2013-01-01

    The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States samples (US) are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187), from five randomly selected public schools in Perth, western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports), not just student as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).
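
    The two reliability quantities reported above can be computed as follows. A minimal sketch: Cronbach's alpha is one common internal-consistency estimator (the record does not specify which IC estimator was used), and the standard error of measurement SD * sqrt(1 - r) is a common way to express ME; the data below are illustrative:

```python
from statistics import pvariance, stdev

def cronbach_alpha(items):
    """Cronbach's alpha for item-score columns:
    items[i][j] = score of respondent j on item i."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(pvariance(it) for it in items)
    return k / (k - 1) * (1.0 - item_var / pvariance(totals))

def sem(scores, reliability):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return stdev(scores) * (1.0 - reliability) ** 0.5
```

    A low SEM relative to the scale's range means an observed score is close to the "true" score, which is why the record recommends benchmarking MEs against a minimum clinically important difference.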

  10. Resident Physicians' Clinical Training and Error Rate: The Roles of Autonomy, Consultation, and Familiarity with the Literature

    Science.gov (United States)

    Naveh, Eitan; Katz-Navon, Tal; Stern, Zvi

    2015-01-01

    Resident physicians' clinical training poses unique challenges for the delivery of safe patient care. Residents face special risks of involvement in medical errors since they have tremendous responsibility for patient care, yet they are novice practitioners in the process of learning and mastering their profession. The present study explores…

  11. Higher USA State Resident Neuroticism Is Associated With Lower State Volunteering Rates.

    Science.gov (United States)

    McCann, Stewart J H

    2017-12-01

    Highly neurotic persons have dispositional characteristics that tend to precipitate social anxiety that discourages formal volunteering. With the 50 American states as analytical units, Study 1 found that state resident neuroticism correlated highly (r = -.55) with state volunteering rates and accounted for another 26.8% of the volunteering rate variance with selected state demographics controlled. Study 2 replicated Study 1 during another period and extended the association to college student, senior, secular, and religious volunteering rates. Study 3 showed state resident percentages engaged in other social behaviors involving more familiarity and fewer demands than formal volunteering related to state volunteering rates but not to neuroticism. In Study 4, state resident neuroticism largely accounted statistically for relations between state volunteering rates and state population density, collectivism, social capital, Republican preference, and well-being. This research is the first to show that state resident neuroticism is a potent predictor of state volunteering rates.

  12. Predicting higher education graduation rates from institutional characteristics and resource allocation

    Directory of Open Access Journals (Sweden)

    Florence A. Hamrick

    2004-05-01

    Full Text Available This study incorporated institutional characteristics (e.g., Carnegie type, selectivity) and resource allocations (e.g., instructional expenditures, student affairs expenditures) into a statistical model to predict undergraduate graduation rates. Instructional expenditures, library expenditures, and a number of institutional classification variables were significant predictors of graduation rates. Based on these results, recommendations as well as warranted cautions are included about allocating academic financial resources to optimize graduation rates.

  13. Do Astronauts have a Higher Rate of Orthopedic Shoulder Conditions than a Cohort of Working Professionals?

    Science.gov (United States)

    Laughlin, Mitzi S.; Murray, Jocelyn D.; Young, Millenia; Wear, Mary L.; Tarver, W. J.; Van Baalen, Mary

    2016-01-01

    Occupational surveillance of astronaut shoulder injuries began with operational concerns at the Neutral Buoyancy Laboratory (NBL) during Extra Vehicular Activity (EVA) training. NASA has implemented several occupational health initiatives during the past 20 years to decrease the number and severity of injuries, but the individual success rate is unknown. Orthopedic shoulder injury and surgery rates were calculated, but classifying the rates as normal, high or low was highly dependent on the comparison group. The purpose of this study was to identify a population of working professionals and compare orthopedic shoulder consultation and surgery rates.

  14. Predicting higher education graduation rates from institutional characteristics and resource allocation

    OpenAIRE

    Florence A. Hamrick; John H. Schuh; Mack C. Shelley

    2004-01-01

    This study incorporated institutional characteristics (e.g., Carnegie type, selectivity) and resource allocations (e.g., instructional expenditures, student affairs expenditures) into a statistical model to predict undergraduate graduation rates. Instructional expenditures, library expenditures, and a number of institutional classification variables were significant predictors of graduation rates. Based on these results, recommendations as well as warranted cautions are included about allocat...

  15. Results of a pilot scale melter test to attain higher production rates

    International Nuclear Information System (INIS)

    Elliott, M.L.; Perez, J.M. Jr.; Chapman, C.C.

    1991-01-01

    A pilot-scale melter test was completed as part of the effort to enhance glass production rates. The experiment was designed to evaluate the effects of bulk glass temperature and feed oxide loading. The maximum glass production rate obtained, 86 kg/hr-m², was over 200% better than the previous record for the melter used.

  16. Heroin addicts have higher discount rates for delayed rewards than non-drug-using controls.

    Science.gov (United States)

    Kirby, K N; Petry, N M; Bickel, W K

    1999-03-01

    Fifty-six heroin addicts and 60 age-matched controls were offered choices between monetary rewards ($11-$80) available immediately and larger rewards ($25-$85) available after delays ranging from 1 week to 6 months. Participants had a 1-in-6 chance of winning a reward that they chose on one randomly selected trial. Delay-discounting rates were estimated from the pattern of participants' choices. The discounting model of impulsiveness (Ainslie, 1975) implies that delay-discounting rates are positively correlated with impulsiveness. On average, heroin addicts' discount rates were twice those of controls (p = .004), and discount rates were positively correlated with impulsivity as measured by self-report questionnaires. These results support the use of the delay-discounting rate as a measure of impulsiveness, a characteristic associated with substance abuse.
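
    The estimation idea behind this choice procedure can be sketched with Mazur's hyperbolic model V = A/(1 + kD): the indifference point between an immediate amount and a larger delayed amount pins down a discount rate k. A minimal illustration (function names are ours, not the study's):

```python
def indifference_k(immediate, delayed, delay_days):
    """The k at which the discounted value delayed/(1 + k*D)
    exactly equals the immediate amount (hyperbolic model)."""
    return (delayed / immediate - 1.0) / delay_days

def prefers_delayed(immediate, delayed, delay_days, k):
    """True if a chooser with discount rate k takes the delayed reward."""
    return delayed / (1.0 + k * delay_days) > immediate
```

    A higher k means steeper discounting: a participant whose k is twice another's (as the addicts' was, on average, versus controls') will forgo delayed rewards that the other would wait for.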

  17. Increased error rates in preliminary reports issued by radiology residents working more than 10 consecutive hours overnight.

    Science.gov (United States)

    Ruutiainen, Alexander T; Durand, Daniel J; Scanlon, Mary H; Itri, Jason N

    2013-03-01

    Cross-sectional imaging modality (OR 5.38, 95% CI 3.22-8.98) and inpatient location (OR 1.81, 95% CI 1.02-3.20) were independent risk factors for major discrepancy. In a single academic medical center, major discrepancies in resident preliminary reports increased significantly during the final 2 hours of consecutive 12-hour overnight call shifts. This finding could be related to either fatigue or circadian desynchronization. Discrimination of these two potential etiologies requires additional investigation as major discrepancies in resident reports have the potential to negatively impact patient care/outcome. Cross-sectional imaging modalities including computed tomography and ultrasound (versus conventional radiography), as well as inpatient location (versus Emergency Department location), were also associated with significantly higher major discrepancy rates. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.

  18. SU-G-BRB-03: Assessing the Sensitivity and False Positive Rate of the Integrated Quality Monitor (IQM) Large Area Ion Chamber to MLC Positioning Errors

    Energy Technology Data Exchange (ETDEWEB)

    Boehnke, E McKenzie; DeMarco, J; Steers, J; Fraass, B [Cedars-Sinai Medical Center, Los Angeles, CA (United States)

    2016-06-15

    Purpose: To examine both the IQM’s sensitivity and false positive rate to varying MLC errors. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An unmodified SBRT Liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1mm to ±5mm). These unmodified and modified plans were measured multiple times each by the IQM (a large area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field’s delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed significant (p < 0.005) abilities to predict MLC errors. Using the area under the curve, we show the IQM’s ability to detect errors increases with increasing MLC error (Spearman’s Rho = 0.8056, p < 0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven to be able to detect not only MLC errors, but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.
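
    The threshold-free analysis described above can be sketched as follows: the area under the ROC curve equals the probability that a randomly chosen error-plan measurement scores higher than a clean-plan one, which is computable directly by pairwise comparison. A minimal sketch; the score lists below are illustrative, not IQM data:

```python
def roc_auc(error_scores, clean_scores):
    """Area under the ROC curve via the pairwise (Mann-Whitney)
    formulation: the fraction of (error, clean) pairs where the
    error-plan score exceeds the clean-plan score, ties counting half."""
    wins = 0.0
    for e in error_scores:
        for c in clean_scores:
            if e > c:
                wins += 1.0
            elif e == c:
                wins += 0.5
    return wins / (len(error_scores) * len(clean_scores))
```

    An AUC of 0.5 means no discrimination; values approaching 1.0 correspond to the improved detectability the abstract reports at larger MLC perturbations.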

  19. Throughput Estimation Method in Burst ACK Scheme for Optimizing Frame Size and Burst Frame Number Appropriate to SNR-Related Error Rate

    Science.gov (United States)

    Ohteru, Shoko; Kishine, Keiji

    The Burst ACK scheme enhances effective throughput by reducing ACK overhead when a transmitter sends sequentially multiple data frames to a destination. IEEE 802.11e is one such example. The size of the data frame body and the number of burst data frames are important burst transmission parameters that affect throughput. The larger the burst transmission parameters are, the better the throughput under error-free conditions becomes. However, large data frames can reduce throughput under error-prone conditions caused by signal-to-noise ratio (SNR) deterioration. If the throughput can be calculated from the burst transmission parameters and error rate, the appropriate ranges of the burst transmission parameters could be narrowed down, and the necessary buffer size for storing transmit data or received data temporarily could be estimated. In this paper, we present a method that features a simple algorithm for estimating the effective throughput from the burst transmission parameters and error rate. The calculated throughput values agree well with the measured ones for actual wireless boards based on the IEEE 802.11-based original MAC protocol. We also calculate throughput values for larger values of the burst transmission parameters outside the assignable values of the wireless boards and find the appropriate values of the burst transmission parameters.
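
    The trade-off described, larger frames and bursts help when error-free but hurt as the SNR-related bit error rate grows, can be sketched with a simplified throughput model. This is an assumption-laden illustration, not the paper's algorithm: frames are taken to succeed independently with probability (1 − BER)^bits, and the PHY rate and overhead constants are invented placeholders:

```python
def burst_throughput(ber, payload_bytes, burst_frames,
                     phy_rate=54e6, frame_overhead_us=50.0,
                     burst_overhead_us=100.0):
    """Rough effective throughput (bit/s) of a burst-ACK exchange:
    each frame succeeds independently with (1 - BER)^bits, and a
    single ACK/IFS overhead is paid once per burst."""
    bits = 8 * payload_bytes
    p_ok = (1.0 - ber) ** bits          # frame success probability
    t_frame = bits / phy_rate + frame_overhead_us * 1e-6
    t_burst = burst_frames * t_frame + burst_overhead_us * 1e-6
    return burst_frames * bits * p_ok / t_burst
```

    Under this model, error-free throughput grows with both frame size and burst length, while at a BER of 1e-4 a 200-byte frame already outperforms a 1500-byte one, reproducing the qualitative behavior the paper exploits to narrow down the parameter ranges.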

  20. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ɛ^-(dn-1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  1. Are Interpersonal Violence Rates Higher Among Young Women in College Compared With Those Never Attending College?

    Science.gov (United States)

    Coker, Ann L; Follingstad, Diane R; Bush, Heather M; Fisher, Bonnie S

    2016-05-01

    Estimates of sexual violence and partner violence rates among young women are generated primarily from college samples. Few studies have data to compare rates among similar-aged women attending college with those who never attended college. This study aims to estimate rates of partner violence by type (sexual, physical, and psychological) and severity (mild, moderate, severe), sexual harassment, and knowing or suspecting that someone put a drug in a drink (drugged drink) among a national sample of 959 young women aged 18 to 24 in an intimate relationship in the past 12 months who were either currently in college (college; n = 272) or never attended college (non-college; n = 687). After adjusting for demographic differences between these two groups, no significant differences were found in rates of sexual partner violence (28.4% non-college, 23.5% college), physical partner violence (27.9% non-college, 26.3% college), psychological partner violence (M score: 6.10 non-college, 5.59 college), sexual harassment (15.5% non-college, 14.1% college), or drugged drink (8.5% non-college, 7.8% college). Finding high rates of interpersonal violence among young women who are and are not currently attending college indicates the need to target all young adults with violence prevention interventions in educational, workplace, and other community-based settings. © The Author(s) 2015.

  2. Higher speciation and lower extinction rates influence mammal diversity gradients in Asia.

    Science.gov (United States)

    Tamma, Krishnapriya; Ramakrishnan, Uma

    2015-02-04

    Little is known about the patterns and correlates of mammal diversity gradients in Asia. In this study, we examine patterns of species distributions and phylogenetic diversity in Asia and investigate if the observed diversity patterns are associated with differences in diversification rates between the tropical and non-tropical regions. We used species distribution maps and phylogenetic trees to generate species and phylogenetic diversity measures for 1° × 1° cells across mainland Asia. We constructed lineage-through-time plots and estimated diversification shift-times to examine the temporal patterns of diversifications across orders. Finally, we tested if the observed gradients in Asia could be associated with geographical differences in diversification rates across the tropical and non-tropical biomes. We estimated speciation, extinction and dispersal rates across these two regions for mammals, both globally and for Asian mammals. Our results demonstrate strong latitudinal and longitudinal gradients of species and phylogenetic diversity with Southeast Asia and the Himalayas showing highest diversity. Importantly, our results demonstrate that differences in diversification (speciation, extinction and dispersal) rates between the tropical and the non-tropical biomes influence the observed diversity gradients globally and in Asia. For the first time, we demonstrate that Asian tropics act as both cradles and museums of mammalian diversity. Temporal and spatial variation in diversification rates across different lineages of mammals is an important correlate of species diversity gradients observed in Asia.

  3. Vigorous physical activity predicts higher heart rate variability among younger adults.

    Science.gov (United States)

    May, Richard; McBerty, Victoria; Zaky, Adam; Gianotti, Melino

    2017-06-14

    Baseline heart rate variability (HRV) is linked to prospective cardiovascular health. We tested intensity and duration of weekly physical activity as predictors of heart rate variability in young adults. Time and frequency domain indices of HRV were calculated based on 5-min resting electrocardiograms collected from 82 undergraduate students. Hours per week of both moderate and vigorous activity were estimated using the International Physical Activity Questionnaire. In regression analyses, hours of vigorous physical activity, but not moderate activity, significantly predicted greater time domain and frequency domain indices of heart rate variability. Adjusted for weekly frequency, greater daily duration of vigorous activity failed to predict HRV indices. Future studies should test direct measurements of vigorous activity patterns as predictors of autonomic function in young adulthood.
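
    The time-domain HRV indices referred to above are straightforward to compute from a series of RR intervals. A minimal sketch: SDNN and RMSSD are standard time-domain indices, though the record does not list which specific indices were used, and the sample intervals below are illustrative:

```python
from math import sqrt
from statistics import stdev

def sdnn(rr_ms):
    """SDNN: standard deviation of RR intervals (ms),
    an overall time-domain HRV index."""
    return stdev(rr_ms)

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR-interval differences
    (ms), reflecting short-term, vagally mediated variability."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))
```

    Higher values of either index correspond to the greater heart rate variability that vigorous activity predicted in this study.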

  4. Higher contamination rate than usual. Treatment and disinfection of water in hot whirlpool systems

    Energy Technology Data Exchange (ETDEWEB)

    Herschman, W

    1985-10-01

    Hot whirlpools must meet the hygienic standards set in the Federal Law Concerning Prevention of Epidemics of 18 Dec 1979. The low water volume of whirlpool systems and the extraordinary contamination rate in uninterrupted operation require a specific water treatment and disinfection technology to make up for the poor buffer capacity of the low water volume. (orig./BWI).

  5. Mechanisms promoting higher growth rate in arctic than in temperate shorebirds

    NARCIS (Netherlands)

    Schekkerman, H.; Tulp, I.Y.M.; Piersma, T.; Visser, G.H.

    2003-01-01

    We compared prefledging growth, energy expenditure, and time budgets in the arctic-breeding red knot (Calidris canutus) to those in temperate shorebirds, to investigate how arctic chicks achieve a high growth rate despite energetic difficulties associated with precocial development in a cold climate.

  6. Mechanisms promoting higher growth rate in arctic than in temperate shorebirds

    NARCIS (Netherlands)

    Schekkerman, H; Tulp, Ingrid; Piersma, T.; Visser, G.H.

    We compared prefledging growth, energy expenditure, and time budgets in the arctic-breeding red knot (Calidris canutus) to those in temperate shorebirds, to investigate how arctic chicks achieve a high growth rate despite energetic difficulties associated with precocial development in a cold climate.

  7. Real-time soft error rate measurements on bulk 40 nm SRAM memories: a five-year dual-site experiment

    Science.gov (United States)

    Autran, J. L.; Munteanu, D.; Moindjie, S.; Saad Saoud, T.; Gasiot, G.; Roche, P.

    2016-11-01

    This paper reports five years of real-time soft error rate experimentation conducted with the same setup at mountain altitude for three years and then at sea level for two years. More than 7 Gbit of SRAM memories manufactured in CMOS bulk 40 nm technology have been subjected to the natural radiation background. The intensity of the atmospheric neutron flux has been continuously measured on site during these experiments using dedicated neutron monitors. As a result, the neutron and alpha components of the soft error rate (SER) have been very accurately extracted from these measurements, refining the first SER estimations performed in 2012 for this SRAM technology. Data obtained at sea level evidence, for the first time, a possible correlation between the neutron flux changes induced by the daily atmospheric pressure variations and the measured SER. Finally, all of the experimental data are compared with results obtained from accelerated tests and numerical simulation.
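
    SER results from such experiments are conventionally reported in FIT/Mbit, where one FIT is one failure per 10^9 device-hours. A minimal sketch of the normalization; the example counts are invented, not the paper's data:

```python
def ser_fit_per_mbit(error_count, bits_monitored, hours):
    """Soft error rate in FIT/Mbit: failures per 1e9 hours,
    normalized per Mbit of monitored memory."""
    mbit = bits_monitored / 1e6
    return error_count / (mbit * hours) * 1e9
```

    With 7 Gbit under test, even a multi-year run yields a modest number of upsets, which is why the on-site neutron flux monitoring matters for accurately separating the neutron and alpha components.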

  8. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences, in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), in the 40-50 years age group (67.6%), among less-experienced personnel (58.7%), at the educational level of MSc (87.5%), and among staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component for patient safety enhancement.

  9. Honest signaling in trust interactions: smiles rated as genuine induce trust and signal higher earning opportunities

    OpenAIRE

    Centorrino, S.; Djemai, E.; Hopfensitz, A.; Milinski, M.; Seabright, P.

    2015-01-01

    We test the hypothesis that smiles perceived as honest serve as a signal that has evolved to induce cooperation in situations requiring mutual trust. Potential trustees (84 participants from Toulouse, France) made two video clips averaging around 15 seconds for viewing by potential senders before the latter decided whether to ‘send’ or ‘keep’ a lower stake (4 euros) or higher stake (8 euros). Senders (198 participants from Lyon, France) made trust decisions with respect to the recorded clips....

  10. Surgical site infection and transfusion rates are higher in underweight total knee arthroplasty patients

    Directory of Open Access Journals (Sweden)

    Jorge Manrique, MD

    2017-03-01

    Conclusions: Our study demonstrates that UW TKA patients have a higher likelihood of developing SSI and requiring blood transfusions. The specific reasons are unclear, but we conjecture that it may be related to decreased wound healing capabilities and low preoperative hemoglobin. Investigation of local tissue coverage and hematologic status may be beneficial in this patient population to prevent SSI. Based on the results of this study, a prospective evaluation of these factors should be undertaken.

  11. Scintillation and bit error rate analysis of a phase-locked partially coherent flat-topped array laser beam in oceanic turbulence.

    Science.gov (United States)

    Yousefi, Masoud; Kashani, Fatemeh Dabbagh; Golmohammady, Shole; Mashal, Ahmad

    2017-12-01

    In this paper, the performance of underwater wireless optical communication (UWOC) links based on a partially coherent flat-topped (PCFT) array laser beam has been investigated in detail. Providing high power, array laser beams are employed to increase the range of UWOC links. For characterization of the effects of oceanic turbulence on the propagation behavior of the considered beam, using the extended Huygens-Fresnel principle, an analytical expression for cross-spectral density matrix elements and a semi-analytical one for the fourth-order statistical moment have been derived. Then, based on these expressions, the on-axis scintillation index of the mentioned beam propagating through weak oceanic turbulence has been calculated. Furthermore, in order to quantify the performance of the UWOC link, the average bit error rate (BER) has also been evaluated. The effects of some source factors and turbulent ocean parameters on the propagation behavior of the scintillation index and the BER have been studied in detail. The results of this investigation indicate that in comparison with the Gaussian array beam, when the source size of beamlets is larger than the first Fresnel zone, the PCFT array laser beam with the higher flatness order is found to have a lower scintillation index and hence lower BER. Specifically, in the sense of scintillation index reduction, using the PCFT array laser beams has a considerable benefit in comparison with the single PCFT or Gaussian laser beams and also Gaussian array beams. All the simulation results of this paper have been shown by graphs and analyzed in detail.
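
    Two of the quantities analyzed above, the scintillation index and the average BER, can be sketched generically. Note the assumption: the paper derives semi-analytic expressions for the PCFT beam, whereas the snippet below simply computes the normalized intensity variance and averages an OOK BER over sampled intensities:

```python
from math import erfc, sqrt

def scintillation_index(intensities):
    """Normalized intensity variance: sigma_I^2 = <I^2>/<I>^2 - 1."""
    n = len(intensities)
    mean_i = sum(intensities) / n
    mean_i2 = sum(i * i for i in intensities) / n
    return mean_i2 / mean_i ** 2 - 1.0

def avg_ber_ook(snr, intensities):
    """Average OOK bit error rate over intensity samples: each fade
    scales the electrical SNR by the normalized received intensity."""
    mean_i = sum(intensities) / len(intensities)
    return sum(0.5 * erfc(sqrt(snr * i / mean_i) / sqrt(2.0))
               for i in intensities) / len(intensities)
```

    Because the BER is a convex function of intensity at these SNRs, fluctuating intensities yield a higher average BER than a steady beam of the same mean power, which is why reducing the scintillation index directly improves link performance.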

  12. Accuracy of rate coding: When shorter time window and higher spontaneous activity help

    Czech Academy of Sciences Publication Activity Database

    Leváková, Marie; Tamborrino, M.; Košťál, Lubomír; Lánský, Petr

    2017-01-01

    Vol. 95, No. 2 (2017), Article No. 022310. ISSN 2470-0045 R&D Projects: GA ČR(CZ) GA15-08066S; GA MŠk(CZ) 7AMB17AT048 Institutional support: RVO:67985823 Keywords: rate coding * observation window * spontaneous activity * Fisher information * perfect integrate-and-fire model * Wiener process Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Biology (theoretical, mathematical, thermal, cryobiology, biological rhythm), Evolutionary biology Impact factor: 2.366, year: 2016

  13. A software solution to estimate the SEU-induced soft error rate for systems implemented on SRAM-based FPGAs

    International Nuclear Information System (INIS)

    Wang Zhongming; Lu Min; Yao Zhibin; Guo Hongxia

    2011-01-01

    SRAM-based FPGAs are very susceptible to radiation-induced Single-Event Upsets (SEUs) in space applications. The failure mechanisms in an FPGA's configuration memory differ from those in traditional memory devices. As a result, there is a growing demand for methodologies which could quantitatively evaluate the impact of this effect. Fault injection appears to meet such requirement. In this paper, we propose a new methodology to analyze the soft errors in SRAM-based FPGAs. This method is based on an in-depth understanding of the device architecture and failure mechanisms induced by configuration upsets. The developed programs read in the placed and routed netlist, search for critical logic nodes and paths that may destroy the circuit topological structure, and then query a database storing the decoded relationship of the configurable resources and corresponding control bit to get the sensitive bits. Accelerator irradiation tests and fault injection experiments were carried out to validate this approach. (semiconductor integrated circuits)

  14. Error Rates of M-PAM and M-QAM in Generalized Fading and Generalized Gaussian Noise Environments

    KAUST Repository

    Soury, Hamza

    2013-07-01

    This letter investigates the average symbol error probability (ASEP) of pulse amplitude modulation and quadrature amplitude modulation coherent signaling over flat fading channels subject to additive white generalized Gaussian noise. The new ASEP results are derived in a generic closed-form in terms of the Fox H function and the bivariate Fox H function for the extended generalized-K fading case. The utility of this new general closed-form is that it includes some special fading distributions, like the Generalized-K, Nakagami-m, and Rayleigh fading and special noise distributions such as Gaussian and Laplacian. Some of these special cases are also treated and are shown to yield simplified results.

  15. Comparison of higher order spectra in heart rate signals during two techniques of meditation: Chi and Kundalini meditation.

    Science.gov (United States)

    Goshvarpour, Ateke; Goshvarpour, Atefeh

    2013-02-01

    The human heartbeat is one of the important examples of complex physiologic fluctuations. In this study, higher order spectra of heart rate signals during meditation are explored for the first time. Specifically, the aim of this study was to analyze and compare the contribution of quadratic phase coupling in human heart rate variability during two forms of meditation: (1) Chinese Chi (or Qigong) meditation and (2) Kundalini Yoga meditation. For this purpose, the bispectrum was estimated using the biased, parametric, and direct (FFT) methods. The results show that the mean bispectrum magnitude of heart rate signals increased during Kundalini Yoga meditation, but decreased significantly during Chi meditation. However, in both meditation techniques phase-coupled harmonics are shifted to higher frequencies during meditation. In addition, it has been shown that not only are there significant differences between rest and meditation states, but heart rate patterns also appear to be influenced by different types of meditation.

  16. Effect of a health system's medical error disclosure program on gastroenterology-related claims rates and costs.

    Science.gov (United States)

    Adams, Megan A; Elmunzer, B Joseph; Scheiman, James M

    2014-04-01

In 2001, the University of Michigan Health System (UMHS) implemented a novel medical error disclosure program. This study analyzes the effect of this program on gastroenterology (GI)-related claims and costs. This was a review of claims in the UMHS Risk Management Database (1990-2010) naming a gastroenterologist. Claims were classified according to pre-determined categories. Claims data, including incident date, date of resolution, and total liability dollars, were reviewed. Mean total liability incurred per claim in the pre- and post-implementation eras was compared. Patient encounter data from the Division of Gastroenterology were also reviewed in order to benchmark claims data against changes in clinical volume. There were 238,911 GI encounters in the pre-implementation era and 411,944 in the post-implementation era. A total of 66 encounters resulted in claims: 38 in the pre-implementation era and 28 in the post-implementation era. Of the total number of claims, 15.2% alleged delay in diagnosis/misdiagnosis, 42.4% related to a procedure, and 42.4% involved improper management, treatment, or monitoring. The reduction in the proportion of encounters resulting in claims was statistically significant (P=0.001), as was the reduction in time to claim resolution (1,000 vs. 460 days) (P<0.0001). There was also a reduction in the mean total liability per claim ($167,309 pre vs. $81,107 post; 95% confidence intervals: $33,682.5-$300,936.2 pre vs. $1,687.8-$160,526.7 post). Implementation of a novel medical error disclosure program, promoting transparency and quality improvement, not only decreased the number of GI-related claims per patient encounter, but also dramatically shortened the time to claim resolution.

  17. Religious affiliation and psychiatric morbidity in Brazil: higher rates among evangelicals and spiritists.

    Science.gov (United States)

    Dalgalarrondo, Paulo; Marín-León, Leticia; Botega, Neury José; Berti De Azevedo Barros, Marilisa; Bosco De Oliveira, Helenice

    2008-11-01

To verify the association of the prevalence of mental symptoms and excessive alcohol intake with religious affiliation, church attendance and personal religiosity. A household survey of 515 randomly sampled adults included the WHO SUPRE-MISS questionnaire, SRQ-20 and AUDIT. Weighted prevalences were estimated and logistic analyses were performed. Minor psychiatric morbidity was greater among Spiritists and Protestants/Evangelicals than in Catholics and in the 'no-religion' group. The latter had a greater frequency of an abusive alcohol drinking pattern, while Protestants/Evangelicals showed lower drinking levels. Although belonging to Protestant/Evangelical churches in Brazil may inhibit alcohol involvement, it seems to be associated with a higher frequency of depressive symptoms. Processes of seeking relief in new religious affiliations among sub-groups with previous minor psychiatric symptoms may be occurring in Brazilian society.

  18. Higher rate of compensation after surgical treatment versus conservative treatment for acute Achilles tendon rupture

    DEFF Research Database (Denmark)

    Sveen, Thor-Magnus; Troelsen, Anders; Barfod, Kristoffer Weisskirchner

    2015-01-01

in the period from 1992 to 2010 in the DPIA database were identified and patient records were reviewed manually. RESULTS: The compensation awarded for the 18-year period totalled 18,147,202 DKK, with 41% of patient claims being recognised. Out of 180 surgically treated patients, 79 received a total compensation of 14,051,377 DKK, median 47,637 DKK (range: 5,000-3,577,043). Of 114 non-surgically treated patients, 40 received 3,715,224 DKK in compensation, with a median amount of 35,788 DKK (range: 5,000-830,073). CONCLUSION: Compensation after surgical treatment was 3.8 times higher than compensation after non-surgical treatment. It is noteworthy that 34.5% of patients had an overlooked diagnosis, which underlines the importance of a correct primary diagnosis. FUNDING: not relevant. TRIAL REGISTRATION: not relevant.

  19. Weaker gun state laws are associated with higher rates of suicide secondary to firearms.

    Science.gov (United States)

    Alban, Rodrigo F; Nuño, Miriam; Ko, Ara; Barmparas, Galinos; Lewis, Azaria V; Margulies, Daniel R

    2018-01-01

Firearm-related suicides comprise over two-thirds of gun-related violence in the United States, and gun laws and policies remain under scrutiny, with many advocating for revision of the regulatory map for lawful gun ownership, aiming at restricting access and distribution of these weapons. However, the quantitative relationship between how strict gun laws are and the incidence of firearm violence with their associated mortality is largely unknown. We therefore sought to explore the impact of firearm law patterns among states on the incidence and outcomes of firearm-related suicide attempts, utilizing established objective criteria. The National Inpatient Sample for the years 1998-2011 was queried for all firearm-related suicides. Discharge facilities were stratified into five categories (A, B, C, D, and F, with A representing states with the most strict and F representing states with the least strict laws) based on the Brady Campaign to Prevent Gun Violence, which assigns scorecards for every state. The primary outcomes were suicide attempts and in-hospital mortality per 100,000 population by Brady state grade. During the 14-year study period, 34,994 subjects met inclusion criteria. The mean age was 42.0 years and 80.1% were male. A handgun was utilized by 51.8% of patients. The overall mortality was 33.3%. Overall, 22.0% had reported psychoses and 19.3% reported depression. After adjusting for confounding factors and using group A as reference, there were higher adjusted odds of suicide attempts for patients admitted in group C, D, and F category states (1.73, 2.09, and 1.65, respectively, all P < 0.05). Firearm-related suicide attempts are thus more common in states with less strict gun laws, and these injuries tend to be associated with a higher mortality. Efforts aimed at nationwide standardization of firearm state laws are warranted, particularly for young adults and suicide-prone populations. III. Trauma Outcomes study. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Lead Burden as a Factor for Higher Complication Rate in Patients With Implantable Cardiac Devices

    Directory of Open Access Journals (Sweden)

    Christopher Kolibash

    2015-01-01

Purpose: Lead revisions have increased over the last decade. Patients who do not undergo lead extraction face an increased lead burden. Consequences of increased lead burden have not been fully defined. We sought to characterize the complication rate and outcomes in patients with sterile redundant leads. Methods: We retrospectively reviewed 242 consecutive patients [mean age 74 ± 12 years; 66.9% male] who underwent lead revision that resulted in an abandoned lead from January 2005 to June 2010. Patients were placed in a cohort based on the number of leads after the last recorded procedure (Group A: ≤2 [n=58]; Group B: 3-4 [n=168]; Group C: ≥5 [n=16]). Prespecified in-hospital and long-term follow-up events were compared. Mortality rates were obtained from the Social Security Death Index. Median follow-up was 2 years. Results: Baseline age, gender and race demographics were similar among the three groups. Increasing lead burden was associated with more adverse periprocedural events (A: 3.4%, B: 10.1%, C: 25.0%; P=0.031) and long-term device-related events (A: 1.7%, B: 13.0%, C: 18.8%; P=0.031). Device-related readmissions increased in frequency as lead burden increased (A: 3.5%, B: 18.5%, C: 37.5%; P=0.002). Combined periprocedural and late events also increased with more redundant leads (A: 5.2%, B: 23.2%, C: 44.0%; P=0.001). Total major events were infrequent (3.3%). There was no procedure-related mortality. Long-term all-cause mortality was not significantly different (A: 17.2%, B: 23.8%, C: 25.0%; P=0.567). Conclusions: Greater lead burden was associated with an increased number of periprocedural and long-term minor events. It did not significantly impact major events or mortality.

  1. Higher Precision of Heart Rate Compared with VO2 to Predict Exercise Intensity in Endurance-Trained Runners.

    Science.gov (United States)

    Reis, Victor M; den Tillaar, Roland Van; Marques, Mario C

    2011-01-01

The aim of the present study was to assess the precision of the oxygen uptake with heart rate regression during track running in highly-trained runners. Twelve national and international level male long-distance road runners (age 30.7 ± 5.5 yrs, height 1.71 ± 0.04 m and mass 61.2 ± 5.8 kg) with a personal best on the half marathon of 62 min 37 s ± 1 min 22 s participated in the study. Each participant performed, on an all-weather synthetic track, five six-min bouts at constant velocity, with each bout at an increased running velocity. The starting velocity was 3.33 m·s(-1) with a 0.56 m·s(-1) increase on each subsequent bout. VO2 and heart rate were measured during the runs and blood lactate was assessed immediately after each run. Mean peak VO2 and mean peak heart rate were, respectively, 76.2 ± 9.7 mL·kg(-1)·min(-1) and 181 ± 13 beats·min(-1). The linearity of the regressions between heart rate, running velocity and VO2 was very high in all cases (r > 0.99), with small standard errors of regression (i.e. Sy.x < 5%) at the velocities associated with the 2 and 4 mmol·L(-1) lactate thresholds. The strong relationships between heart rate, running velocity and VO2 found in this study show that, in highly trained runners, heart rate can serve as an accurate indicator of energy demand and of running speed. Therefore, in this subject cohort it may be unnecessary to use VO2 to track changes in the subjects' running economy during training periods. Key points: Heart rate is used in the control of exercise intensity in endurance sports. However, few studies have quantified the precision of its relationship with oxygen uptake in highly trained runners. We evaluated twelve elite half-marathon runners during track running at various intensities and established three regressions: oxygen uptake / heart rate; heart rate / running velocity; and oxygen uptake / running velocity. The three regressions presented, respectively, imprecisions of 4.2%, 2.75% and 4.5% at the velocity

  2. Errors in Computing the Normalized Protein Catabolic Rate due to Use of Single-pool Urea Kinetic Modeling or to Omission of the Residual Kidney Urea Clearance.

    Science.gov (United States)

    Daugirdas, John T

    2017-07-01

    The protein catabolic rate normalized to body size (PCRn) often is computed in dialysis units to obtain information about protein ingestion. However, errors can manifest when inappropriate modeling methods are used. We used a variable volume 2-pool urea kinetic model to examine the percent errors in PCRn due to use of a 1-pool urea kinetic model or after omission of residual urea clearance (Kru). When a single-pool model was used, 2 sources of errors were identified. The first, dependent on the ratio of dialyzer urea clearance to urea distribution volume (K/V), resulted in a 7% inflation of the PCRn when K/V was in the range of 6 mL/min per L. A second, larger error appeared when Kt/V values were below 1.0 and was related to underestimation of urea distribution volume (due to overestimation of effective clearance) by the single-pool model. A previously reported prediction equation for PCRn was valid, but data suggest that it should be modified using 2-pool eKt/V and V coefficients instead of single-pool values. A third source of error, this one unrelated to use of a single-pool model, namely omission of Kru, was shown to result in an underestimation of PCRn, such that each ml/minute Kru per 35 L of V caused a 5.6% underestimate in PCRn. Marked overestimation of PCRn can result due to inappropriate use of a single-pool urea kinetic model, particularly when Kt/V <1.0 (as in short daily dialysis), or after omission of residual native kidney clearance. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
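The 5.6%-per-unit rule quoted above lends itself to a back-of-the-envelope correction for the Kru-omission error. The sketch below applies that rule directly; the function and its form are illustrative assumptions, not the paper's model, and a real correction should use full two-pool urea kinetics.

```python
def pcrn_corrected_for_kru(pcrn_no_kru, kru_ml_min, v_liters):
    """Approximate true PCRn from a value computed with Kru omitted,
    using the reported rule of thumb: each mL/min of residual urea
    clearance (Kru) per 35 L of urea distribution volume V causes
    about a 5.6% underestimate of PCRn. Illustrative sketch only."""
    underestimate = 0.056 * kru_ml_min * (35.0 / v_liters)
    return pcrn_no_kru / (1.0 - underestimate)
```

For a patient with V = 35 L and Kru = 2 mL/min, the rule implies the uncorrected PCRn is roughly 11% low.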

  3. An integrated approach for a higher success rate in mergers and acquisitions

    Directory of Open Access Journals (Sweden)

    Andrej Bertoncelj

    2007-05-01

The paper outlines the importance of balanced management of hard and soft key success factors, combining the economic logic of corporate performance and human capital through an integrated approach to mergers and acquisitions. The study, based on a questionnaire and interviews, suggests that the achievement level of mergers and acquisitions' objectives among acquiring companies in Slovenia is comparable to the findings of similar studies; namely, the objectives that drove the deal were met only half the time. The results indicate that five hard success factors – a professional target search and due diligence, a realistic assessment of synergies, the right mix of financial sources, a detailed post-acquisition integration plan already prepared in the pre-deal phase, and its speedy implementation – and five soft success factors – a new "combined" organizational culture, a competent management team, innovative employees, efficient and consistent communication, and a creative business environment – are becoming increasingly relevant. Even though they differ in their importance for individual companies in the sample, they are all considered essential to increasing the success rate of corporate combinations.

  4. Sex-role reversal of a monogamous pipefish without higher potential reproductive rate in females.

    Science.gov (United States)

    Sogabe, Atsushi; Yanagisawa, Yasunobu

    2007-12-07

In monogamous animals, males are usually the predominant competitors for mates. However, the strictly monogamous pipefish Corythoichthys haematopterus exceptionally exhibits a reversed sex role. To understand why its sex role is reversed, we measured the adult sex ratio and the potential reproductive rate (PRR), two principal factors influencing the operational sex ratio (OSR), in a natural population in southern Japan. The adult sex ratio was biased towards females throughout the breeding season, but the PRR, which increased with water temperature, did not differ between the sexes. We found that an alternative index of the OSR (Sf/Sm: sex ratio of 'time in') calculated from the monthly data was consistently biased towards females. A female-biased OSR associated with sex-role reversal has been reported in some polyandrous or promiscuous pipefish, but the factors biasing the OSR differed between those pipefish and C. haematopterus. We conclude that the similar PRR of the two sexes in C. haematopterus confers no reproductive benefit of polygamous mating on either sex, resulting in strict monogamy, while its female-biased adult sex ratio promotes female-female competition for mates, resulting in sex-role reversal.

  5. High cortisol awakening response is associated with impaired error monitoring and decreased post-error adjustment.

    Science.gov (United States)

    Zhang, Liang; Duan, Hongxia; Qin, Shaozheng; Yuan, Yiran; Buchanan, Tony W; Zhang, Kan; Wu, Jianhui

    2015-01-01

    The cortisol awakening response (CAR), a rapid increase in cortisol levels following morning awakening, is an important aspect of hypothalamic-pituitary-adrenocortical axis activity. Alterations in the CAR have been linked to a variety of mental disorders and cognitive function. However, little is known regarding the relationship between the CAR and error processing, a phenomenon that is vital for cognitive control and behavioral adaptation. Using high-temporal resolution measures of event-related potentials (ERPs) combined with behavioral assessment of error processing, we investigated whether and how the CAR is associated with two key components of error processing: error detection and subsequent behavioral adjustment. Sixty university students performed a Go/No-go task while their ERPs were recorded. Saliva samples were collected at 0, 15, 30 and 60 min after awakening on the two consecutive days following ERP data collection. The results showed that a higher CAR was associated with slowed latency of the error-related negativity (ERN) and a higher post-error miss rate. The CAR was not associated with other behavioral measures such as the false alarm rate and the post-correct miss rate. These findings suggest that high CAR is a biological factor linked to impairments of multiple steps of error processing in healthy populations, specifically, the automatic detection of error and post-error behavioral adjustment. A common underlying neural mechanism of physiological and cognitive control may be crucial for engaging in both CAR and error processing.

  6. HIGHER PRECISION OF HEART RATE COMPARED WITH VO2 TO PREDICT EXERCISE INTENSITY IN ENDURANCE-TRAINED RUNNERS

    Directory of Open Access Journals (Sweden)

    Victor M. Reis

    2011-03-01

The aim of the present study was to assess the precision of the oxygen uptake with heart rate regression during track running in highly-trained runners. Twelve national and international level male long-distance road runners (age 30.7 ± 5.5 yrs, height 1.71 ± 0.04 m and mass 61.2 ± 5.8 kg) with a personal best on the half marathon of 62 min 37 s ± 1 min 22 s participated in the study. Each participant performed, on an all-weather synthetic track, five six-min bouts at constant velocity, with each bout at an increased running velocity. The starting velocity was 3.33 m·s-1 with a 0.56 m·s-1 increase on each subsequent bout. VO2 and heart rate were measured during the runs and blood lactate was assessed immediately after each run. Mean peak VO2 and mean peak heart rate were, respectively, 76.2 ± 9.7 mL·kg-1·min-1 and 181 ± 13 beats·min-1. The linearity of the regressions between heart rate, running velocity and VO2 was very high in all cases (r > 0.99), with small standard errors of regression (i.e. Sy.x < 5%) at the velocities associated with the 2 and 4 mmol·L-1 lactate thresholds. The strong relationships between heart rate, running velocity and VO2 found in this study show that, in highly trained runners, heart rate can serve as an accurate indicator of energy demand and of running speed. Therefore, in this subject cohort it may be unnecessary to use VO2 to track changes in the subjects' running economy during training periods.
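The precision measure used in both versions of this record, the standard error of the estimate Sy.x, is easy to compute alongside an ordinary least-squares fit. A sketch with invented heart-rate data (the velocities mirror the protocol above; the heart-rate values are illustrative, not the study's measurements):

```python
import numpy as np

def fit_line_with_syx(x, y):
    """Least-squares fit y = a + b*x plus the standard error of the
    estimate, Sy.x = sqrt(SS_res / (n - 2))."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    b, a = np.polyfit(x, y, 1)                 # slope, intercept
    resid = y - (a + b * x)
    syx = np.sqrt(np.sum(resid ** 2) / (len(x) - 2))
    return a, b, syx

# five bouts starting at 3.33 m/s with 0.56 m/s increments, per the protocol;
# the heart rates are made up for illustration
v = [3.33, 3.89, 4.45, 5.01, 5.57]
hr = [138, 150, 161, 172, 180]
a, b, syx = fit_line_with_syx(v, hr)
syx_percent = 100.0 * syx / np.mean(hr)        # Sy.x as a % of mean HR
```

Expressing Sy.x as a percentage of the mean response is what allows statements like "imprecision below 5%" in the record above.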

  7. Lower serotonin level and higher rate of fibromyalgia syndrome with advancing pregnancy.

    Science.gov (United States)

    Atasever, Melahat; Namlı Kalem, Muberra; Sönmez, Çiğdem; Seval, Mehmet Murat; Yüce, Tuncay; Sahin Aker, Seda; Koç, Acar; Genc, Hakan

    2017-09-01

The aim of the study was to investigate the relationship between changes in serotonin levels during pregnancy and fibromyalgia syndrome (FS), and the relationships between FS and physical/psychological state and the biochemical and hormonal parameters that may be related to the musculoskeletal system. This is a prospective case-control study conducted with 277 pregnant women at the obstetric unit of Ankara University Faculty of Medicine in the period between January and June 2015. FS was determined based on the presence or absence of the 2010 ACR diagnostic criteria, and all the volunteers were asked to answer the following questionnaires: Fibromyalgia Impact Questionnaire (FIQ), Widespread Pain Index (WPI), Symptom Severity Scale (SS), Beck Depression Inventory and Visual Analog Scale (VAS). Biochemical and hormonal markers relating to muscle and bone metabolism (glucose, TSH, T4, Ca (calcium), P (phosphate), PTH (parathyroid hormone) and serotonin levels) were measured. In the presence of fibromyalgia, the physical and psychological parameters were negatively affected (p < 0.05). Lower serotonin levels may contribute to the development of fibromyalgia, but this was not statistically significant. The Beck Depression Inventory statistically showed that increasing scores also increase the risk of fibromyalgia (p < 0.05). Serotonin levels in women with FS were lower than in the control group, and serotonin levels decreased as pregnancy progressed. Anxiety and depression in pregnant women with FS were higher than in the control group. The presence of depression increased the likelihood of developing FS at a statistically significant level. Serotonin impairment also increased the chance of developing FS, but this correlation was not shown to be statistically significant.

  8. Biodegradation testing of chemicals with high Henry’s constants – separating mass and effective concentration reveals higher rate constants

    DEFF Research Database (Denmark)

    Birch, Heidi; Andersen, Henrik Rasmus; Comber, Mike

    Microextraction (HS-SPME) was applied directly on the test systems to measure substrate depletion by biodegradation relative to abiotic controls. HS-SPME was also applied to determine air to water partitioning ratios. Water phase biodegradation rate constants, kwater, were up to 72 times higher than test system...

  9. Higher dosage nicotine patches increase one-year smoking cessation rates : results from the European CEASE trial

    NARCIS (Netherlands)

    Tonnesen, P; Paoletti, P; Gustavsson, G; Russell, MA; Saracci, R; Gulsvik, A; Rijcken, B

The Collaborative European Anti-Smoking Evaluation (CEASE) was a European multicentre, randomized, double-blind, placebo-controlled smoking cessation study. The objectives were to determine whether higher dosage and longer duration of nicotine patch therapy would increase the success rate. Thirty-six

  10. Understanding the Effect of Response Rate and Class Size Interaction on Students Evaluation of Teaching in a Higher Education

    Science.gov (United States)

    Al Kuwaiti, Ahmed; AlQuraan, Mahmoud; Subbarayalu, Arun Vijay

    2016-01-01

Objective: This study aims to investigate the interaction between response rate and class size and its effects on students' evaluation of instructors and the courses offered at a higher education institution in Saudi Arabia. Study Design: A retrospective study design was chosen. Methods: One thousand four hundred and forty-four different courses…

  11. Children Receiving Free or Reduced-Price School Lunch Have Higher Food Insufficiency Rates in Summer.

    Science.gov (United States)

    Huang, Jin; Barnidge, Ellen; Kim, Youngmi

    2015-09-01

    In 2012, 20% of households in the United States with children lacked consistent access to adequate food. Food insufficiency has significant implications for children, including poor physical and mental health outcomes, behavior problems, and low educational achievements. The National School Lunch Program (NSLP) is one policy solution to reduce food insufficiency among children from low-income families. The objective of this project was to evaluate the association between NSLP participation and household food insufficiency by examining trajectories of food insufficiency over 10 calendar months. The calendar months included both nonsummer months when school is in session and summer months when school is out of session. The study used the data from the Survey of Income and Program Participation and conducted linear growth curve analyses in the multilevel modeling context. Comparisons were made between the trajectories of food insufficiencies among recipients of free or reduced-price lunch and their counterparts who are eligible but choose not to participate in the program. Heads of households that included children receiving free or reduced-price lunch (n = 6867) were more likely to be female, black, unmarried, and unemployed, and have a lower educational attainment than those whose children were eligible but did not receive free or reduced-price lunch (n = 11,396). For households participating in the NSLP, the food insufficiency rate was consistent from January to May at ∼4%, and then increased in June and July to >5%. Meanwhile, food insufficiency among eligible nonrecipients was constant throughout the year at nearly 2%. The NSLP protects households from food insufficiency. Policies should be instituted to make enrollment easier for households. © 2015 American Society for Nutrition.

  12. Informatics technology mimics ecology: dense, mutualistic collaboration networks are associated with higher publication rates.

    Directory of Open Access Journals (Sweden)

    Marco D Sorani

Information technology (IT) adoption enables biomedical research. Publications are an accepted measure of research output, and network models can describe the collaborative nature of publication. In particular, ecological networks can serve as analogies for publication and technology adoption. We constructed network models of adoption of bioinformatics programming languages and health IT (HIT) from the literature. We selected seven programming languages and four types of HIT. We performed PubMed searches to identify publications since 2001. We calculated summary statistics and analyzed spatiotemporal relationships. Then, we assessed ecological models of specialization, cooperativity, competition, evolution, biodiversity, and stability associated with publications. Adoption of HIT has been variable, while scripting languages have experienced rapid adoption. Hospital systems had the largest HIT research corpus, while Perl had the largest language corpus. Scripting languages represented the largest connected network components. The relationship between edges and nodes was linear, though Bioconductor had more edges than expected and Perl had fewer. Spatiotemporal relationships were weak. Most languages shared a bioinformatics specialization and appeared mutualistic or competitive. HIT specializations varied. Specialization was highest for Bioconductor and radiology systems. Specialization and cooperativity were positively correlated among languages but negatively correlated among HIT. Rates of language evolution were similar. Biodiversity among languages grew in the first half of the decade and stabilized, while diversity among HIT was variable but flat. Compared with publications in 2001, correlation with publications one year later was positive while correlation after ten years was weak and negative. Adoption of new technologies can be unpredictable. Spatiotemporal relationships facilitate adoption but are not sufficient. As with ecosystems, dense

  13. Informatics technology mimics ecology: dense, mutualistic collaboration networks are associated with higher publication rates.

    Science.gov (United States)

    Sorani, Marco D

    2012-01-01

Information technology (IT) adoption enables biomedical research. Publications are an accepted measure of research output, and network models can describe the collaborative nature of publication. In particular, ecological networks can serve as analogies for publication and technology adoption. We constructed network models of adoption of bioinformatics programming languages and health IT (HIT) from the literature. We selected seven programming languages and four types of HIT. We performed PubMed searches to identify publications since 2001. We calculated summary statistics and analyzed spatiotemporal relationships. Then, we assessed ecological models of specialization, cooperativity, competition, evolution, biodiversity, and stability associated with publications. Adoption of HIT has been variable, while scripting languages have experienced rapid adoption. Hospital systems had the largest HIT research corpus, while Perl had the largest language corpus. Scripting languages represented the largest connected network components. The relationship between edges and nodes was linear, though Bioconductor had more edges than expected and Perl had fewer. Spatiotemporal relationships were weak. Most languages shared a bioinformatics specialization and appeared mutualistic or competitive. HIT specializations varied. Specialization was highest for Bioconductor and radiology systems. Specialization and cooperativity were positively correlated among languages but negatively correlated among HIT. Rates of language evolution were similar. Biodiversity among languages grew in the first half of the decade and stabilized, while diversity among HIT was variable but flat. Compared with publications in 2001, correlation with publications one year later was positive while correlation after ten years was weak and negative. Adoption of new technologies can be unpredictable. Spatiotemporal relationships facilitate adoption but are not sufficient. As with ecosystems, dense, mutualistic

  14. [Medical errors: inevitable but preventable].

    Science.gov (United States)

    Giard, R W

    2001-10-27

Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions of a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation, and especially not to the organisational framework. Attention to and improvement of the organisational aspects of error are far more important than litigating against the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort at their detection, a sound analysis and, where feasible, the institution of preventive measures.

  15. Correction of quantification errors in pelvic and spinal lesions caused by ignoring higher photon attenuation of bone in [{sup 18}F]NaF PET/MR

    Energy Technology Data Exchange (ETDEWEB)

    Schramm, Georg, E-mail: georg.schramm@kuleuven.be; Maus, Jens; Hofheinz, Frank; Petr, Jan; Lougovski, Alexandr [Helmholtz-Zentrum Dresden-Rossendorf, Institute of Radiopharmaceutical Cancer Research, Dresden 01328 (Germany); Beuthien-Baumann, Bettina; Oehme, Liane [Department of Nuclear Medicine, University Hospital Carl Gustav Carus, Dresden 01307 (Germany); Platzek, Ivan [Department of Radiology, University Hospital Carl Gustav Carus, Dresden 01307 (Germany); Hoff, Jörg van den [Helmholtz-Zentrum Dresden-Rossendorf, Institute for Radiopharmaceutical Cancer Research, Dresden 01328 (Germany); Department of Nuclear Medicine, University Hospital Carl Gustav Carus, Dresden 01307 (Germany)

    2015-11-15

    Purpose: MR-based attenuation correction (MRAC) in routine clinical whole-body positron emission tomography and magnetic resonance imaging (PET/MRI) is based on tissue type segmentation. Due to lack of MR signal in cortical bone and the varying signal of spongeous bone, standard whole-body segmentation-based MRAC ignores the higher attenuation of bone compared to the one of soft tissue (MRAC{sub nobone}). The authors aim to quantify and reduce the bias introduced by MRAC{sub nobone} in the standard uptake value (SUV) of spinal and pelvic lesions in 20 PET/MRI examinations with [{sup 18}F]NaF. Methods: The authors reconstructed 20 PET/MR [{sup 18}F]NaF patient data sets acquired with a Philips Ingenuity TF PET/MRI. The PET raw data were reconstructed with two different attenuation images. First, the authors used the vendor-provided MRAC algorithm that ignores the higher attenuation of bone to reconstruct PET{sub nobone}. Second, the authors used a threshold-based algorithm developed in their group to automatically segment bone structures in the [{sup 18}F]NaF PET images. Subsequently, an attenuation coefficient of 0.11 cm{sup −1} was assigned to the segmented bone regions in the MRI-based attenuation image (MRAC{sub bone}) which was used to reconstruct PET{sub bone}. The automatic bone segmentation algorithm was validated in six PET/CT [{sup 18}F]NaF examinations. Relative SUV{sub mean} and SUV{sub max} differences between PET{sub bone} and PET{sub nobone} of 8 pelvic and 41 spinal lesions, and of other regions such as lung, liver, and bladder, were calculated. By varying the assigned bone attenuation coefficient from 0.11 to 0.13 cm{sup −1}, the authors investigated its influence on the reconstructed SUVs of the lesions. Results: The comparison of [{sup 18}F]NaF-based and CT-based bone segmentation in the six PET/CT patients showed a Dice similarity of 0.7 with a true positive rate of 0.72 and a false discovery rate of 0.33. The [{sup 18}F]NaF-based bone

  16. Heroin and cocaine abusers have higher discount rates for delayed rewards than alcoholics or non-drug-using controls.

    Science.gov (United States)

    Kirby, Kris N; Petry, Nancy M

    2004-04-01

    To test a prediction of the discounting model of impulsiveness that discount rates would be positively associated with addiction. The delay-discount rate refers to the rate of reduction in the present value of a future reward as the delay to that reward increases. We estimated participants' discount rates on the basis of their pattern of choices between smaller immediate rewards ($11-80) and larger, delayed rewards ($25-85; at delays from 1 week to 6 months) in a questionnaire format. Participants had a one-in-six chance of winning a reward that they chose on one randomly selected trial. Heroin (n = 27), cocaine (n = 41) and alcohol (n = 33) abusers and non-drug-using controls (n = 44) were recruited from advertisements. They were tested in a drug abuse research clinic at a medical school. On average, the cocaine and heroin groups had higher rates than controls (both P < 0.05), whereas the alcohol group did not differ significantly from controls. Years of drug use predicted discount rates for heroin abusers (P = 0.03), but not for cocaine or alcohol abusers (both P > 0.50). These data suggest that discount rates vary with the preferred drug of abuse, and that high discount rates should be considered in the development of substance abuse prevention and treatment efforts.
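The delay-discounting construct described here is commonly formalized as hyperbolic discounting, V = A/(1 + kD): each choice between an immediate and a delayed reward brackets a participant's discount rate k. A small illustrative sketch (the dollar amounts fall in the ranges above; the hyperbolic form is the standard Kirby-procedure assumption, not spelled out in this abstract):

```python
def present_value(amount, delay_days, k):
    """Hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1 + k * delay_days)

def indifference_k(immediate, delayed, delay_days):
    """Discount rate at which the two rewards are subjectively equal."""
    return (delayed / immediate - 1) / delay_days

# a participant indifferent between $11 now and $25 in 30 days has k of roughly 0.042/day;
# steeper discounters (larger k) choose the immediate reward on this trial
k_indiff = indifference_k(11, 25, 30)
```

Repeating this over many immediate/delayed pairs and delays yields the geometric bracketing of each participant's k that the questionnaire method relies on.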

  17. Application of Fermat's Principle to Calculation of the Errors of Acoustic Flow-Rate Measurements for a Three-Dimensional Flow of a Fluid or Gas

    Science.gov (United States)

    Petrov, A. G.; Shkundin, S. Z.

    2018-01-01

    Fermat's variational principle is used for derivation of the formula for the time of propagation of a sonic signal between two set points A and B in a steady three-dimensional flow of a fluid or gas. It is shown that the fluid flow changes the time of signal reception by a value proportional to the flow rate independently of the velocity profile. The time difference in the reception of the signals from point B to point A and vice versa is proportional with a high accuracy to the flow rate. It is shown that the relative error of the formula does not exceed the square of the largest Mach number. This makes it possible to measure the flow rate of a fluid or gas with an arbitrary steady subsonic velocity field.
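The proportionality between reception-time difference and flow rate, and the relative error of order M² (squared Mach number), can be checked numerically for the simple case of a uniform flow along the signal path (a one-dimensional simplification of the paper's three-dimensional result):

```python
def transit_times(L, c, v):
    """One-way propagation times with (down) and against (up) a uniform flow."""
    return L / (c + v), L / (c - v)

L, c, v = 0.5, 343.0, 10.0            # path length (m), sound speed (m/s), flow speed (m/s)
t_down, t_up = transit_times(L, c, v)
dt = t_up - t_down                    # reception-time difference, proportional to the flow
v_est = dt * c ** 2 / (2 * L)         # first-order flow-speed estimate
rel_err = abs(v_est - v) / v          # on the order of the squared Mach number
```

Here dt = 2Lv/(c² − v²), so the first-order estimate overshoots by the factor 1/(1 − M²), i.e. a relative error of about M², consistent with the bound stated in the abstract.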

  18. Concerns and perceptions immediately following Superstorm Sandy: ratings for property damage were higher than for health issues.

    Science.gov (United States)

    Burger, Joanna; Gochfeld, Michael

    Governmental officials, health and safety professionals, early responders, and the public are interested in the perceptions and concerns of people faced with a crisis, especially during and immediately after a disaster strikes. Reliable information can lead to increased individual and community preparedness for upcoming crises. The objective of this research was to evaluate concerns of coastal and central New Jersey residents within the first 100 days of Superstorm Sandy's landfall. Respondents living in central New Jersey and Jersey shore communities were differentially impacted by the storm, with shore residents having higher evacuation rates (47% vs. 13%), more flood waters in their homes, longer power outages (average 23 vs. 6 days), and longer periods without Internet (29 vs. 6 days). Ratings of concerns varied both among and within categories as a function of location (central vs. coastal New Jersey), stressor level (ranging from 1 to 3 for combinations of power outages, high winds, and flooding), and demographics. Respondents were most concerned about property damage, health, inconveniences, ecological services, and nuclear power plants in that order. Respondents from the shore gave higher ratings to the concerns within each major category, compared to those from central Jersey. Four findings have implications for understanding future risk, recovery, and resiliency: (1) respondents with the highest stressor level (level 3) were more concerned about water damage than others, (2) respondents with flood damage were more concerned about water drainage and mold than others, (3) respondents with the highest stressor levels rated all ecological services higher than others, and (4) shore respondents rated all ecological services higher than central Jersey residents. These data provide information to design future preparedness plans, improve resiliency for future severe weather events, and reduce public health risk.

  19. Why are autopsy rates low in Japan? Views of ordinary citizens and doctors in the case of unexpected patient death and medical error.

    Science.gov (United States)

    Maeda, Shoichi; Kamishiraki, Etsuko; Starkey, Jay; Ikeda, Noriaki

    2013-01-01

    This article examines what could account for the low autopsy rate in Japan based on the findings from an anonymous, self-administered, structured questionnaire that was given to a sample population of the general public and physicians in Japan. The general public and physicians indicated that autopsy may not be carried out because: (1) conducting an autopsy might result in the accusation that patient death was caused by a medical error even when there was no error (50.4% vs. 13.1%, respectively), (2) suggesting an autopsy makes the families suspicious of a medical error even when there was none (61.0% vs. 19.1%, respectively), (3) families do not want the body to be damaged by autopsy (81.6% vs. 87.3%, respectively), and (4) families do not want to make the patient suffer any more in addition to what he/she has already endured (61.8% vs. 87.1%, respectively). © 2013 American Society for Healthcare Risk Management of the American Hospital Association.

  20. An evaluation of a Low-Dose-Rate (LDR) brachytherapy procedure using a systems engineering & error analysis methodology for health care (SEABH) - (SAVE)

    LENUS (Irish Health Repository)

    Chadwick, Liam

    2012-03-12

    Health Care Failure Modes and Effects Analysis (HFMEA®) is an established tool for risk assessment in health care. A number of deficiencies have been identified in the method. A new method called the Systems and Error Analysis Bundle for Health Care (SEABH) was developed to address these deficiencies. SEABH has been applied to a number of medical processes as part of its validation and testing. One of these, Low Dose Rate (LDR) prostate brachytherapy, is reported in this paper. The case study supported the validity of SEABH with respect to its capacity to address the weaknesses of HFMEA®.

  1. Choice of reference sequence and assembler for alignment of Listeria monocytogenes short-read sequence data greatly influences rates of error in SNP analyses.

    Directory of Open Access Journals (Sweden)

    Arthur W Pightling

    The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: (i) depth of sequencing coverage, (ii) choice of reference-guided short-read sequence assembler, (iii) choice of reference genome, and (iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers
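The error quantities tracked in this kind of SNP benchmarking (true SNPs recovered, false positive calls, sites missed) reduce to set comparisons between the called variants and a known truth set. A hedged illustration with made-up variant positions, not taken from the study:

```python
# hypothetical variant calls keyed by (position, alternate base)
called = {(120, "A"), (455, "T"), (910, "G"), (1300, "C")}
truth = {(120, "A"), (455, "T"), (910, "G"), (2048, "A")}

tp = len(called & truth)    # true SNPs recovered by the pipeline
fp = len(called - truth)    # false positive calls
fn = len(truth - called)    # true SNPs that were missed
tpr = tp / (tp + fn)        # detection rate of real differences
fdr = fp / (tp + fp)        # fraction of calls that are errors
```

Choices such as assembler, reference distance, and read trimming shift the contents of the `called` set, and hence these rates, which is exactly what the benchmark measures.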

  2. Evaluation of errors in prior mean and variance in the estimation of integrated circuit failure rates using Bayesian methods

    Science.gov (United States)

    Fletcher, B. C.

    1972-01-01

    The critical point of any Bayesian analysis concerns the choice and quantification of the prior information. The effects of prior data on a Bayesian analysis are studied. Comparisons of the maximum likelihood estimator, the Bayesian estimator, and the known failure rate are presented. The results of the many simulated trials are then analyzed to show the region of criticality for prior information being supplied to the Bayesian estimator. In particular, effects of prior mean and variance are determined as a function of the amount of test data available.
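The interplay the study examines, prior information pulling against test data, can be illustrated with the standard conjugate gamma model for a constant failure rate (the specific prior form and numbers here are assumptions for illustration; the paper's simulation details are not reproduced):

```python
def mle_rate(failures, total_time):
    """Maximum likelihood estimate of a constant failure rate."""
    return failures / total_time

def bayes_rate(failures, total_time, prior_shape, prior_rate):
    """Posterior-mean failure rate under a conjugate Gamma(shape, rate) prior."""
    return (prior_shape + failures) / (prior_rate + total_time)

# 3 failures in 1000 h of test data; prior mean 1/100 = 0.01 per hour
lam_mle = mle_rate(3, 1000.0)                   # 0.003 per hour, data only
lam_bayes = bayes_rate(3, 1000.0, 1.0, 100.0)   # pulled from the data toward the prior
```

With little test data the posterior mean sits near the prior mean; as total test time grows it converges to the MLE, which is why errors in the assumed prior mean and variance matter most when data are scarce.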

  3. The sensitivity of bit error rate (BER) performance in multi-carrier (OFDM) and single-carrier

    Science.gov (United States)

    Albdran, Saleh; Alshammari, Ahmed; Matin, Mohammad

    2012-10-01

    Recently, single-carrier and multi-carrier transmissions have attracted attention in industrial systems. Theoretically, OFDM as a multi-carrier technique has advantages over the single-carrier approach, especially for high data rates. In this paper we show which of the two techniques outperforms the other. We study and compare the BER performance of both techniques for a given channel. The BER is measured and studied as a function of the signal-to-noise ratio (SNR). Also, the peak-to-average power ratio (PAPR) is examined and presented as a drawback of using OFDM. To make a reasonable comparison between the two techniques, we use additive white Gaussian noise (AWGN) as the communication channel.
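As a concrete baseline for such comparisons, the theoretical per-bit error rate of BPSK/QPSK over an AWGN channel follows the well-known complementary-error-function curve; in pure AWGN (no multipath, no transmitter nonlinearity) single-carrier and OFDM share this curve, so the differences the paper studies arise from effects such as PAPR. A sketch:

```python
import math

def ber_bpsk_awgn(ebn0_db):
    """Theoretical per-bit error rate of BPSK/QPSK over an AWGN channel."""
    ebn0 = 10 ** (ebn0_db / 10)            # dB -> linear Eb/N0
    return 0.5 * math.erfc(math.sqrt(ebn0))

# BER falls steeply with SNR: roughly 7.9e-2 at 0 dB down to a few 1e-6 at 10 dB
curve = {snr_db: ber_bpsk_awgn(snr_db) for snr_db in range(0, 11, 2)}
```

Simulated BER points from either system can then be plotted against this analytic curve to check the simulation before adding channel impairments.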

  4. Higher Magnitude Cash Payments Improve Research Follow-up Rates Without Increasing Drug Use or Perceived Coercion

    Science.gov (United States)

    Festinger, David S.; Marlowe, Douglas B.; Dugosh, Karen L.; Croft, Jason R.; Arabia, Patricia L.

    2008-01-01

    In a prior study (Festinger et al., 2005) we found that neither the mode (cash vs. gift card) nor magnitude ($10, $40, or $70) of research follow-up payments increased rates of new drug use or perceptions of coercion. However, higher payments and payments in cash were associated with better follow-up attendance, reduced tracking efforts, and improved participant satisfaction with the study. The present study extended those findings to higher payment magnitudes. Participants from an urban outpatient substance abuse treatment program were randomly assigned to receive $70, $100, $130, or $160 in either cash or a gift card for completing a follow-up assessment at 6 months post-admission (n ≅ 50 per cell). Apart from the payment incentives, all participants received a standardized, minimal platform of follow-up efforts. Findings revealed that neither the magnitude nor mode of payment had a significant effect on new drug use or perceived coercion. Consistent with our previous findings, higher payments and cash payments resulted in significantly higher follow-up rates and fewer tracking calls. In addition, participants receiving cash vs. gift cards were more likely to use their payments for essential, non-luxury purchases. Follow-up rates for participants receiving cash payments of $100, $130, and $160 approached or exceeded the FDA-required minimum of 70% for studies to be considered in evaluations of new medications. This suggests that the use of higher magnitude payments and cash payments may be effective strategies for obtaining more representative follow-up samples without increasing new drug use or perceptions of coercion. PMID:18395365

  5. Higher dose rate Gamma Knife radiosurgery may provide earlier and longer-lasting pain relief for patients with trigeminal neuralgia.

    Science.gov (United States)

    Lee, John Y K; Sandhu, Sukhmeet; Miller, Denise; Solberg, Timothy; Dorsey, Jay F; Alonso-Basanta, Michelle

    2015-10-01

    Gamma Knife radiosurgery (GKRS) utilizes cobalt-60 as its radiation source, and thus dose rate varies as the fixed source decays over its half-life of approximately 5.26 years. This natural decay results in increasing treatment times when delivering the same cumulative dose. It is also possible, however, that the biological effective dose may change based on this dose rate even if the total dose is kept constant. Because patients are generally treated in a uniform manner, radiosurgery for trigeminal neuralgia (TN) represents a clinical model whereby biological efficacy can be tested. The authors hypothesized that higher dose rates would result in earlier and more complete pain relief but only if measured with a sensitive pain assessment tool. One hundred thirty-three patients were treated with the Gamma Knife Model 4C unit at a single center by a single neurosurgeon during a single cobalt life cycle from January 2006 to May 2012. All patients were treated with 80 Gy with a single 4-mm isocenter without blocking. Using an output factor of 0.87, dose rates ranged from 1.28 to 2.95 Gy/min. The Brief Pain Inventory (BPI)-Facial was administered before the procedure and at the first follow-up office visit 1 month from the procedure (mean 1.3 months). Phone calls were made to evaluate patients after their procedures as part of a retrospective study. Univariate and multivariate linear regression was performed on several independent variables, including sex, age in deciles, diagnosis, follow-up duration, prior surgery, and dose rate. In the short-term analysis (mean 1.3 months), patients' self-reported pain intensity at its worst was significantly correlated with dose rate on multivariate analysis (p = 0.028). Similarly, patients' self-reported interference with activities of daily living was closely correlated with dose rate on multivariate analysis (p = 0.067). A 1 Gy/min decrease in dose rate resulted in a 17% decrease in pain intensity at its worst and a 22% decrease
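The dose-rate range quoted (1.28 to 2.95 Gy/min) reflects cobalt-60 decay over the source's life cycle; the rate at any elapsed time follows the usual exponential half-life law. A sketch (the elapsed time in the example is illustrative, not taken from the study):

```python
def dose_rate(initial_rate, years_elapsed, half_life_years=5.26):
    """Co-60 dose rate after exponential decay over `years_elapsed`."""
    return initial_rate * 0.5 ** (years_elapsed / half_life_years)

r_fresh = 2.95                      # Gy/min shortly after source loading
r_half = dose_rate(r_fresh, 5.26)   # exactly one half-life later: 1.475 Gy/min
r_late = dose_rate(r_fresh, 6.4)    # ~6.4 y elapsed: close to the 1.28 Gy/min floor above
```

This is why a fixed prescription of 80 Gy takes progressively longer to deliver as the source ages, even though the cumulative dose is unchanged.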

  6. Evaluation of the effect of noise on the rate of errors and speed of work by the ergonomic test of two-hand co-ordination

    Directory of Open Access Journals (Sweden)

    Ehsanollah Habibi

    2013-01-01

    Background: Among the most important and effective factors affecting the efficiency of the human workforce are accuracy, promptness, and ability. In the context of promoting levels and quality of productivity, the aim of this study was to investigate the effects of exposure to noise on the rate of errors, speed of work, and capability in performing manual activities. Methods: This experimental study was conducted on 96 students (52 female and 44 male) of the Isfahan Medical Science University with the average and standard deviations of age, height, and weight of 22.81 (3.04) years, 171.67 (8.51) cm, and 65.05 (13.13) kg, respectively. Sampling was conducted with a randomized block design. Along with controlling for intervening factors, a combination of sound pressure levels [65 dB(A), 85 dB(A), and 95 dB(A)] and exposure times (0, 20, and 40) was used for evaluation of precision and speed of action of the participants in the ergonomic test of two-hand coordination. Data were analyzed in SPSS18 using descriptive and analytical statistics, with repeated-measures analysis of covariance (ANCOVA). Results: The results of this study showed that increasing the sound pressure level from 65 to 95 dB(A) increased the speed of work (P < 0.05). Male participants were annoyed by the noise more than females. Also, an increase in sound pressure level increased the rate of error (P < 0.05). Conclusions: According to the results of this research, increasing the sound pressure level decreased efficiency and increased errors; in exposure to sounds of less than 85 dB(A), the efficiency initially decreased and then increased with a mild slope.

  7. Downlink Error Rates of Half-duplex Users in Full-duplex Networks over a Laplacian Inter-User Interference Limited and EGK fading

    KAUST Repository

    Soury, Hamza

    2017-03-14

    This paper develops a mathematical framework to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). The developed model is used to motivate long term pairing for users that have a non-line of sight (NLOS) interfering link. Consequently, we study the interferer-limited problem that appears between NLOS HD user pairs that are scheduled on the same FD channel. The distribution of the interference is first characterized via its distribution function, which is derived in closed form. Then, a comprehensive performance assessment for the proposed pairing scheme is provided by assuming Extended Generalized-K (EGK) fading for the downlink and studying different modulation schemes. To this end, a unified closed form expression for the average symbol error rate is derived. Furthermore, we show the effective downlink throughput gain harvested by the pairing of NLOS users as a function of the average signal-to-interference ratio when compared to an idealized HD scenario with neither interference nor noise. Finally, we show the minimum required channel gain pairing threshold to harvest downlink throughput via the FD operation when compared to the HD case for each modulation scheme.

  8. Improved read disturb and write error rates in voltage-control spintronics memory (VoCSM) by controlling energy barrier height

    Science.gov (United States)

    Inokuchi, T.; Yoda, H.; Kato, Y.; Shimizu, M.; Shirotori, S.; Shimomura, N.; Koi, K.; Kamiguchi, Y.; Sugiyama, H.; Oikawa, S.; Ikegami, K.; Ishikawa, M.; Altansargai, B.; Tiwari, A.; Ohsawa, Y.; Saito, Y.; Kurobe, A.

    2017-06-01

    A hybrid writing scheme that combines the spin Hall effect and voltage-controlled magnetic-anisotropy effect is investigated in Ta/CoFeB/MgO/CoFeB/Ru/CoFe/IrMn junctions. The write current and control voltage are applied to Ta and CoFeB/MgO/CoFeB junctions, respectively. The critical current density required for switching the magnetization in CoFeB was modulated 3.6-fold by changing the control voltage from -1.0 V to +1.0 V. This modulation of the write current density is explained by the change in the surface anisotropy of the free layer from 1.7 mJ/m² to 1.6 mJ/m², which is caused by the electric field applied to the junction. The read disturb rate and write error rate, which are important performance parameters for memory applications, are drastically improved, and no error was detected in 5 × 10⁸ cycles by controlling read and write sequences.

  9. Theoretical and computational study of the energy dependence of the muon transfer rate from hydrogen to higher-Z gases

    Energy Technology Data Exchange (ETDEWEB)

    Bakalov, Dimitar, E-mail: dbakalov@inrne.bas.bg [Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, Tsarigradsko chaussée 72, Sofia 1784 (Bulgaria); Adamczak, Andrzej [Institute of Nuclear Physics, Polish Academy of Sciences, ul. Radzikowskiego 152, 31-342 Krakow (Poland); Stoilov, Mihail [Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, Tsarigradsko chaussée 72, Sofia 1784 (Bulgaria); Vacchi, Andrea [Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, Via A. Valerio 2, 34127 Trieste (Italy)

    2015-01-23

    The recent PSI Lamb shift experiment and the controversy about proton size revived the interest in measuring the hyperfine splitting in muonic hydrogen as an alternative possibility for comparing ordinary and muonic hydrogen spectroscopy data on proton electromagnetic structure. This measurement critically depends on the energy dependence of the muon transfer rate to heavier gases in the epithermal range. The available data provide only qualitative information, and the theoretical predictions have not been verified. We propose a new method based on measurements of the transfer rate in a thermalized target at different temperatures, estimate its accuracy, and investigate the optimal experimental conditions. - Highlights: • Method for measuring the energy dependence of muon transfer rate to higher-Z gases. • Thermalization and depolarization of muonic hydrogen studied by Monte Carlo method. • Optimal experimental conditions determined by Monte Carlo simulations. • Mathematical model for estimating the uncertainty of the experimental results.

  10. Global minimum profile error (GMPE) - a least-squares-based approach for extracting macroscopic rate coefficients for complex gas-phase chemical reactions.

    Science.gov (United States)

    Duong, Minh V; Nguyen, Hieu T; Mai, Tam V-T; Huynh, Lam K

    2018-01-03

    Master equation/Rice-Ramsperger-Kassel-Marcus (ME/RRKM) has been shown to be a powerful framework for modeling kinetic and dynamic behaviors of a complex gas-phase chemical system on a complicated multiple-species and multiple-channel potential energy surface (PES) for a wide range of temperatures and pressures. Derived from the ME time-resolved species profiles, the macroscopic or phenomenological rate coefficients are essential for many reaction engineering applications including those in combustion and atmospheric chemistry. Therefore, in this study, a least-squares-based approach named Global Minimum Profile Error (GMPE) was proposed and implemented in the MultiSpecies-MultiChannel (MSMC) code (Int. J. Chem. Kinet., 2015, 47, 564) to extract macroscopic rate coefficients for such a complicated system. The capability and limitations of the new approach were discussed in several well-defined test cases.
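For the simplest possible case, a single first-order channel, the least-squares extraction of a macroscopic rate coefficient from a time-resolved species profile can be sketched as below. This is a toy stand-in for the GMPE idea, which fits whole multi-species, multi-channel profiles rather than one exponential decay:

```python
import math

def fit_first_order_k(times, concs):
    """Least-squares slope of ln(c) vs t gives the first-order rate coefficient."""
    logs = [math.log(c) for c in concs]
    n = len(times)
    tbar = sum(times) / n
    lbar = sum(logs) / n
    slope = sum((t - tbar) * (l - lbar) for t, l in zip(times, logs)) / \
            sum((t - tbar) ** 2 for t in times)
    return -slope   # c(t) = c0 * exp(-k t)  =>  ln c is linear with slope -k

# synthetic profile c(t) = exp(-0.7 t); the fit should recover k = 0.7
times = [0.0, 0.5, 1.0, 1.5, 2.0]
concs = [math.exp(-0.7 * t) for t in times]
k = fit_first_order_k(times, concs)
```

In the general ME setting the "profile error" being minimized spans all species and channels simultaneously, but the principle, choosing rate coefficients that best reproduce the time-resolved profiles in a least-squares sense, is the same.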

  11. [Effect of high magnesium ion concentration on the electron transport rate and proton exchange in thylakoid membranes in higher plants].

    Science.gov (United States)

    Ignat'ev, A R; Khorobrykh, S A; Ivanov, B N

    2001-01-01

    The effects of magnesium ion concentration on the rate of electron transport in isolated pea thylakoids were investigated in the pH range from 4.0 up to 8.0. In the absence of magnesium ions in the medium and in the presence of 5 mM MgCl2 in the experiments not only without added artificial acceptors but also with ferricyanide or methylviologen as an acceptor, this rate had a well-expressed maximum at pH 5.0. It was shown that, after depression to minimal values at pH 5.5-6.5, it gradually rose with increasing pH. An increase in magnesium ion concentration up to 20 mM essentially affected the electron transfer rate: it decreased somewhat at pH 4.0-5.0 but increased at higher pH values. At this magnesium ion concentration, the maximum rate was at pH 6.0-6.5 and the minimum, at pH 7.0. Subsequent rise upon increasing pH to 8.0 was expressed more sharply. The influence of high magnesium ion concentration on the rate of electron transport was not observed in the presence of gramicidin D. It was found that without uncoupler, the changes in the electron transfer rate under the influence of magnesium ions correlated to the changes in the first-order rate constant of the proton efflux from thylakoids. It is supposed that the change in the ability of thylakoids to keep protons by the action of magnesium ions is the result of electrostatic interactions of these ions with the charges on the external surface of membranes. A possible role of regulation of the electron transport rate by magnesium ions in vivo is discussed.

  12. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
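The decoding step sketched in this abstract, comparing a received word with the code words and deciding from the resulting error vectors, is minimum-distance decoding over (0,1)-vectors. A toy illustration with a hypothetical length-6 codebook:

```python
def hamming(u, v):
    """Weight of the error vector between two binary words."""
    return sum(a != b for a, b in zip(u, v))

def decode(received, codebook):
    """Minimum-distance decoding: pick the codeword whose error vector is lightest."""
    best = min(codebook, key=lambda c: hamming(received, c))
    error = tuple(a ^ b for a, b in zip(received, best))   # received XOR codeword
    return best, error

# hypothetical codebook with minimum distance 3, so any single flipped bit is corrected
codebook = [(0, 0, 0, 0, 0, 0), (1, 1, 1, 0, 0, 0), (0, 0, 0, 1, 1, 1), (1, 1, 1, 1, 1, 1)]
word, err = decode((1, 1, 0, 0, 0, 0), codebook)
```

Each candidate codeword induces one error vector; the decoder commits to the codeword whose error vector has the smallest Hamming weight, which is the maximum-likelihood choice on a binary symmetric channel.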

  13. Does adding metformin to clomifene citrate lead to higher pregnancy rates in a subset of women with polycystic ovary syndrome?

    Science.gov (United States)

    Moll, E; Korevaar, J C; Bossuyt, P M M; van der Veen, F

    2008-08-01

    An RCT among newly diagnosed, therapy-naive women with polycystic ovary syndrome (PCOS) showed no significant differences in ovulation rate, ongoing pregnancy rate or spontaneous abortion rate in favour of clomifene citrate plus metformin compared with clomifene citrate. We wanted to assess whether there are specific subgroups of women with PCOS in whom clomifene citrate plus metformin leads to higher pregnancy rates. Subgroup analysis based on clinical and biochemical parameters of 111 women randomized to clomifene citrate plus metformin compared with 114 women randomized to clomifene citrate plus placebo. The data for age, BMI, waist-hip ratio (WHR) and plasma testosterone were available in all women, 2 h glucose in 80% of women and homeostatic model assessment for assessing insulin sensitivity (HOMA) in 50% of women. Of the women who were allocated to the metformin group, 44 women (40%) reached an ongoing pregnancy. In the placebo group, 52 women (46%) reached an ongoing pregnancy. There was a significantly different chance of an ongoing pregnancy for metformin versus placebo between subgroups based on age and WHR (P = 0.014). There was a positive effect of metformin versus placebo on pregnancy rate in older women (≥28 years) with a high WHR, and a negative effect of metformin versus placebo in younger women (<28 years). Metformin may be an effective addition to clomifene citrate in infertile women with PCOS, especially in older and viscerally obese patients.

  14. Violation of the Sphericity Assumption and Its Effect on Type-I Error Rates in Repeated Measures ANOVA and Multi-Level Linear Models (MLM).

    Science.gov (United States)

    Haverkamp, Nicolas; Beauducel, André

    2017-01-01

    We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodical approaches of repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations without any between-group effect or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS) and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results argue for the use of rANOVA with Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small, and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The
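The simulation design described, drawing many null samples and counting the fraction of significant tests, can be sketched for the baseline case where sphericity holds and the uncorrected rANOVA should keep its Type I error near the nominal 0.05. This is a simplified stand-in for the authors' setup (one population, one cell of n and m, no MLM comparison):

```python
import numpy as np
from scipy import stats

def rm_anova_p(data):
    """P-value of an uncorrected one-way repeated-measures ANOVA.

    `data` is an (n subjects) x (m occasions) array.
    """
    n, m = data.shape
    grand = data.mean()
    ss_occ = n * ((data.mean(axis=0) - grand) ** 2).sum()   # within-subject effect
    ss_sub = m * ((data.mean(axis=1) - grand) ** 2).sum()   # between-subject variation
    ss_err = ((data - grand) ** 2).sum() - ss_occ - ss_sub
    df_occ, df_err = m - 1, (m - 1) * (n - 1)
    f = (ss_occ / df_occ) / (ss_err / df_err)
    return stats.f.sf(f, df_occ, df_err)

# under the null with iid occasions (sphericity trivially satisfied), the
# empirical Type I error rate should sit close to the nominal 0.05
rng = np.random.default_rng(0)
n_sims, n, m = 2000, 20, 6
alpha_hat = np.mean([rm_anova_p(rng.standard_normal((n, m))) < 0.05
                     for _ in range(n_sims)])
```

Replacing the iid draws with samples from a multivariate normal whose occasion correlations mix low and high values reproduces the sphericity violations studied in the paper, and the same loop then reveals the inflated rejection rates.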

  15. The human error rate assessment and optimizing system HEROS - a new procedure for evaluating and optimizing the man-machine interface in PSA

    International Nuclear Information System (INIS)

    Richei, A.; Hauptmanns, U.; Unger, H.

    2001-01-01

    A new procedure allowing the probabilistic evaluation and optimization of the man-machine system is presented. This procedure and the resulting expert system HEROS, which is an acronym for Human Error Rate Assessment and Optimizing System, is based on the fuzzy set theory. Most of the well-known procedures employed for the probabilistic evaluation of human factors involve the use of vague linguistic statements on performance shaping factors to select and to modify basic human error probabilities from the associated databases. This implies a large portion of subjectivity. Vague statements are expressed here in terms of fuzzy numbers or intervals which allow mathematical operations to be performed on them. A model of the man-machine system is the basis of the procedure. A fuzzy rule-based expert system was derived from ergonomic and psychological studies. Hence, it does not rely on a database, whose transferability to situations different from its origin is questionable. In this way, subjective elements are eliminated to a large extent. HEROS facilitates the importance analysis for the evaluation of human factors, which is necessary for optimizing the man-machine system. HEROS is applied to the analysis of a simple diagnosis task of the operating personnel in a nuclear power plant.
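The fuzzy-number machinery such a procedure relies on can be illustrated with triangular fuzzy numbers, a common way to encode vague linguistic statements so that arithmetic can be performed on them. The membership shape and the ratings below are assumptions for illustration, not taken from HEROS:

```python
class TriFuzzy:
    """Triangular fuzzy number (low, mode, high) for a vague linguistic rating."""
    def __init__(self, low, mode, high):
        self.low, self.mode, self.high = low, mode, high

    def __add__(self, other):
        # fuzzy addition combines the supports and modes component-wise
        return TriFuzzy(self.low + other.low,
                        self.mode + other.mode,
                        self.high + other.high)

    def scale(self, k):
        return TriFuzzy(k * self.low, k * self.mode, k * self.high)

    def defuzzify(self):
        """Centroid of the triangle: the crisp value used for ranking."""
        return (self.low + self.mode + self.high) / 3

# hypothetical performance shaping factors: a "moderately high" workload rating
# combined with a "slightly elevated" stress rating
workload = TriFuzzy(0.4, 0.6, 0.8)
stress = TriFuzzy(0.1, 0.2, 0.3)
combined = workload + stress
crisp = combined.defuzzify()
```

Chaining such operations through a rule base, and defuzzifying only at the end, is what lets vague expert statements be propagated mathematically instead of being mapped straight onto point probabilities.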

  16. Does Prison Crowding Predict Higher Rates of Substance Use Related Parole Violations? A Recurrent Events Multi-Level Survival Analysis.

    Directory of Open Access Journals (Sweden)

    Michael A Ruderman

    This administrative data-linkage cohort study examines the association between prison crowding and the rate of post-release parole violations in a random sample of prisoners released with parole conditions in California, for an observation period of two years (January 2003 through December 2004). Crowding overextends prison resources needed to adequately protect inmates and provide drug rehabilitation services. Violence and lack of access to treatment are known risk factors for drug use and substance use disorders. These and other psychosocial effects of crowding may lead to higher rates of recidivism in California parolees. Rates of parole violation for parolees exposed to high and medium levels of prison crowding were compared to parolees with low prison crowding exposure. Hazard ratios (HRs) with 95% confidence intervals (CIs) were estimated using a Cox model for recurrent events. Our dataset included 13070 parolees in California, combining individual level parolee data with aggregate level crowding data for multilevel analysis. Comparing parolees exposed to high crowding with those exposed to low crowding, the effect sizes from greatest to least were absconding violations (HR 3.56 95% CI: 3.05-4.17), drug violations (HR 2.44 95% CI: 2.00-2.98), non-violent violations (HR 2.14 95% CI: 1.73-2.64), violent and serious violations (HR 1.88 95% CI: 1.45-2.43), and technical violations (HR 1.86 95% CI: 1.37-2.53). Prison crowding predicted higher rates of parole violations after release from prison. The effect was magnitude-dependent and particularly strong for drug charges. Further research into whether adverse prison experiences, such as crowding, are associated with recidivism and drug use in particular may be warranted.

  17. Does Prison Crowding Predict Higher Rates of Substance Use Related Parole Violations? A Recurrent Events Multi-Level Survival Analysis.

    Science.gov (United States)

    Ruderman, Michael A; Wilson, Deirdra F; Reid, Savanna

    2015-01-01

    This administrative data-linkage cohort study examines the association between prison crowding and the rate of post-release parole violations in a random sample of prisoners released with parole conditions in California, for an observation period of two years (January 2003 through December 2004). Crowding overextends prison resources needed to adequately protect inmates and provide drug rehabilitation services. Violence and lack of access to treatment are known risk factors for drug use and substance use disorders. These and other psychosocial effects of crowding may lead to higher rates of recidivism in California parolees. Rates of parole violation for parolees exposed to high and medium levels of prison crowding were compared to parolees with low prison crowding exposure. Hazard ratios (HRs) with 95% confidence intervals (CIs) were estimated using a Cox model for recurrent events. Our dataset included 13070 parolees in California, combining individual level parolee data with aggregate level crowding data for multilevel analysis. Comparing parolees exposed to high crowding with those exposed to low crowding, the effect sizes from greatest to least were absconding violations (HR 3.56 95% CI: 3.05-4.17), drug violations (HR 2.44 95% CI: 2.00-2.98), non-violent violations (HR 2.14 95% CI: 1.73-2.64), violent and serious violations (HR 1.88 95% CI: 1.45-2.43), and technical violations (HR 1.86 95% CI: 1.37-2.53). Prison crowding predicted higher rates of parole violations after release from prison. The effect was magnitude-dependent and particularly strong for drug charges. Further research into whether adverse prison experiences, such as crowding, are associated with recidivism and drug use in particular may be warranted.

  18. Does Prison Crowding Predict Higher Rates of Substance Use Related Parole Violations? A Recurrent Events Multi-Level Survival Analysis

    Science.gov (United States)

    Ruderman, Michael A.; Wilson, Deirdra F.; Reid, Savanna

    2015-01-01

    Objective This administrative data-linkage cohort study examines the association between prison crowding and the rate of post-release parole violations in a random sample of prisoners released with parole conditions in California, for an observation period of two years (January 2003 through December 2004). Background Crowding overextends prison resources needed to adequately protect inmates and provide drug rehabilitation services. Violence and lack of access to treatment are known risk factors for drug use and substance use disorders. These and other psychosocial effects of crowding may lead to higher rates of recidivism in California parolees. Methods Rates of parole violation for parolees exposed to high and medium levels of prison crowding were compared to parolees with low prison crowding exposure. Hazard ratios (HRs) with 95% confidence intervals (CIs) were estimated using a Cox model for recurrent events. Our dataset included 13070 parolees in California, combining individual level parolee data with aggregate level crowding data for multilevel analysis. Results Comparing parolees exposed to high crowding with those exposed to low crowding, the effect sizes from greatest to least were absconding violations (HR 3.56 95% CI: 3.05–4.17), drug violations (HR 2.44 95% CI: 2.00–2.98), non-violent violations (HR 2.14 95% CI: 1.73–2.64), violent and serious violations (HR 1.88 95% CI: 1.45–2.43), and technical violations (HR 1.86 95% CI: 1.37–2.53). Conclusions Prison crowding predicted higher rates of parole violations after release from prison. The effect was magnitude-dependent and particularly strong for drug charges. Further research into whether adverse prison experiences, such as crowding, are associated with recidivism and drug use in particular may be warranted. PMID:26492490
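
    The hazard ratios above are reported with Wald-type 95% confidence intervals, which are symmetric on the log scale: CI = exp(log HR ± 1.96·SE). The standard error of log(HR) can therefore be recovered from a published interval, and the interval reconstructed from HR and SE. A minimal stdlib-Python sketch (illustrative only, using the absconding-violation figures from the abstract):

```python
import math

def se_from_ci(lower, upper, z=1.96):
    """Recover SE of log(HR) from a 95% CI, assuming a Wald interval on the log scale."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

def wald_ci(hr, se, z=1.96):
    """Reconstruct the 95% CI from the hazard ratio and log-scale SE."""
    return (math.exp(math.log(hr) - z * se), math.exp(math.log(hr) + z * se))

# Absconding violations: HR 3.56 (95% CI 3.05-4.17)
se = se_from_ci(3.05, 4.17)
lo, hi = wald_ci(3.56, se)
print(f"SE(log HR) = {se:.4f}, reconstructed CI = ({lo:.2f}, {hi:.2f})")
```

    Round-tripping the published numbers this way is a useful sanity check that a reported HR and its interval are mutually consistent.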

  19. Higher Rate of Tuberculosis in Second Generation Migrants Compared to Native Residents in a Metropolitan Setting in Western Europe

    Science.gov (United States)

    Marx, Florian M.; Fiebig, Lena; Hauer, Barbara; Brodhun, Bonita; Glaser-Paschke, Gisela; Haas, Walter

    2015-01-01

    Background In Western Europe, migrants constitute an important risk group for tuberculosis, but little is known about successive generations of migrants. We aimed to characterize migration among tuberculosis cases in Berlin and to estimate annual rates of tuberculosis in two subsequent migrant generations. We hypothesized that second generation migrants born in Germany are at higher risk of tuberculosis compared to native (non-migrant) residents. Methods A prospective cross-sectional study was conducted. All tuberculosis cases reported to health authorities in Berlin between 11/2010 and 10/2011 were eligible. Interviews were conducted using a structured questionnaire including demographic data, migration history of patients and their parents, and language use. Tuberculosis rates were estimated using 2011 census data. Results Of 314 tuberculosis cases reported, 154 (49.0%) participated. Of these, 81 (52.6%) were first-, 14 (9.1%) were second generation migrants, and 59 (38.3%) were native residents. The tuberculosis rate per 100,000 individuals was 28.3 (95%CI: 24.0–32.6) in first-, 10.2 (95%CI: 6.1–16.6) in second generation migrants, and 4.6 (95%CI: 3.7–5.6) in native residents. When combining information from the standard notification variables country of birth and citizenship, the sensitivity to detect second generation migration was 28.6%. Conclusions There is a higher rate of tuberculosis among second generation migrants compared to native residents in Berlin. This may be explained by presumably frequent contact and transmission within migrant populations. Second generation migration is insufficiently captured by the surveillance variables country of birth and citizenship. Surveillance systems in Western Europe should allow for quantifying the tuberculosis burden in this important risk group. PMID:26061733

  20. Modified Mitchell osteotomy alone does not have higher rate of residual metatarsalgia than combined first and lesser metatarsal osteotomy

    Directory of Open Access Journals (Sweden)

    Shu-Jung Chen

    2015-04-01

    Full Text Available Transfer metatarsalgia (TM) is a common forefoot disorder secondary to hallux valgus (HV). Some authors suggest that a combined lesser metatarsal osteotomy while undergoing HV surgery improves metatarsalgia, whereas others concluded that isolated HV corrective osteotomy can improve symptomatic metatarsalgia. The main purpose of this retrospective study was to compare clinical outcomes in patients with and without combined lesser metatarsal osteotomy while receiving HV correction surgery. We retrospectively reviewed the patients who underwent osteotomy for HV correction between January 2000 and December 2010. All patients underwent HV correction with modified Mitchell osteotomy. Clinical evaluations including the American Orthopaedic Foot and Ankle Society score and residual metatarsalgia were assessed, and radiographic measurements were carried out. Sixty-five patients (83 feet) meeting the selection criteria were enrolled. Thirty feet receiving a combined lesser metatarsal osteotomy were classified as the combined surgery (CS) group, and the others were classified as the control (CN) group (53 feet). The overall rate of persistent symptomatic metatarsalgia was 19.28% after operative treatment. There were six feet with residual metatarsalgia in the CS group, and 10 feet in the CN group. There was no significant difference in the rate of persistent symptoms between the two groups (p = 0.9). According to this result, modified Mitchell osteotomy alone did not have a higher rate of residual metatarsalgia than CS. We also found that the average recovery rate of TM was about 80.7%, and that patients whose preoperative HV angle was > 30° had a higher risk of residual metatarsalgia after surgery.

  1. Eliminating US hospital medical errors.

    Science.gov (United States)

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

    Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and to present a closed-loop, mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams, and devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible into the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and in improving education related to the quality of service delivery, to minimize clinical errors. This will lead to higher fixed costs, especially in the shorter time frame. This paper focuses additional attention on making a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run while also ensuring patient safety.

  2. Bit Error Rate Performance of a MIMO-CDMA System Employing Parity-Bit-Selected Spreading in Frequency Nonselective Rayleigh Fading

    Directory of Open Access Journals (Sweden)

    Claude D'Amours

    2011-01-01

    Full Text Available We analytically derive the upper bound for the bit error rate (BER) performance of a single-user multiple input multiple output code division multiple access (MIMO-CDMA) system employing parity-bit-selected spreading in slowly varying, flat Rayleigh fading. The analysis is done for spatially uncorrelated links. The analysis presented demonstrates that parity-bit-selected spreading provides an asymptotic gain of 10log(Nt) dB over conventional MIMO-CDMA when the receiver has perfect channel estimates. This analytical result concurs with previous works where the BER is determined by simulation methods, and provides insight into why the different techniques provide improvement over conventional MIMO-CDMA systems.
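
    The asymptotic gain of 10·log10(Nt) dB quoted in the abstract scales with the number of transmit antennas Nt. A quick stdlib-Python check of the formula for a few antenna counts (illustrative only, not taken from the paper):

```python
import math

def asymptotic_gain_db(n_tx):
    """Asymptotic SNR gain of parity-bit-selected spreading over conventional
    MIMO-CDMA, in dB, per the abstract's 10*log10(Nt) result."""
    return 10 * math.log10(n_tx)

for n_tx in (2, 4, 8):
    print(f"Nt = {n_tx}: gain = {asymptotic_gain_db(n_tx):.2f} dB")
```

    Doubling the transmit-antenna count adds roughly 3 dB of asymptotic gain, which is the familiar factor-of-two power relationship in decibels.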

  3. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Beyond isolated component failures, spectacular events often involve a combination of component failure and human error. The Rasmussen Report and the German Risk Assessment Study in particular show, for pressurised water reactors, that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  4. Obese Japanese adults with type 2 diabetes have higher basal metabolic rates than non-diabetic adults.

    Science.gov (United States)

    Miyake, Rieko; Ohkawara, Kazunori; Ishikawa-Takata, Kazuko; Morita, Akemi; Watanabe, Shaw; Tanaka, Shigeho

    2011-01-01

    Several cross-sectional studies in Pima Indians and Caucasians have indicated that obese individuals with type 2 diabetes have a higher basal metabolic rate (BMR) than healthy, obese individuals. However, no study has investigated this comparison in Japanese subjects, who are known to be susceptible to type 2 diabetes due to genetic characteristics. Thirty obese Japanese adults with pre-type 2 diabetes (n=7) or type 2 diabetes (n=13) or without diabetes (n=10) participated in this study. BMR was measured using indirect calorimetry. The relationships between residual BMR (calculated as measured BMR minus BMR adjusted for fat-free mass, fat mass, age, and sex) and biomarkers including fasting glucose, glycosylated hemoglobin (HbA(1c)), fasting insulin, homeostasis model assessment of insulin resistance (HOMA-R), triglycerides, and free fatty acids were examined using Pearson's correlation. BMR in diabetic subjects adjusted for fat-free mass, fat mass, age, and sex was 7.1% higher than in non-diabetic subjects. A significant positive correlation was found between residual BMR and fasting glucose (r=0.391, p=0.032). These results indicate that in the Japanese population, obese subjects with type 2 diabetes have higher BMR compared with obese non-diabetic subjects. The fasting glucose level may contribute to these differences.
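
    The reported association (r=0.391, p=0.032) is a Pearson product-moment correlation. The coefficient itself is straightforward to compute from paired observations; a generic stdlib-Python sketch with entirely made-up data, not the study's measurements:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical residual-BMR (kcal/day) vs fasting glucose (mg/dL) pairs
residual_bmr = [-120, -60, -10, 40, 95, 150]
glucose = [92, 101, 118, 126, 140, 155]
print(f"r = {pearson_r(residual_bmr, glucose):.3f}")
```

    In the study's design, the x-values would be residuals left over after adjusting BMR for body composition, age, and sex, so a positive r indicates that higher fasting glucose accompanies a BMR above what those covariates predict.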

  5. Is the higher rate of parental child homicide in stepfamilies an effect of non-genetic relatedness?

    Institute of Scientific and Technical Information of China (English)

    Hans TEMRIN; Johanna NORDLUND; Mikael RYING; Birgitta S. TULLBERG

    2011-01-01

    In an evolutionary perspective individuals are expected to vary the degree of parental love and care in relation to the fitness value that a child represents. Hence, stepparents are expected to show less solicitude than genetically related parents, and this lack of genetic relatedness has been used to explain the higher frequencies of child abuse and homicide found in stepfamilies. However, other factors than non-genetic relatedness may cause this over-representation in stepfamilies. Here we use a 45-year data set of parental child homicides in Sweden to test two hypotheses related to the higher incidence in stepfamilies: 1) adults in different types of family differ in their general disposition to use violence, and 2) parents are more likely to kill stepchildren than genetically related children. Of the 152 perpetrators in biparental families there was an overrepresentation of perpetrators in stepfamilies (n=27) compared with the general population. We found support for the first hypothesis in that both general and violent crime rates were higher in stepfamilies, both in the general population and among perpetrators of child homicide. However, we found no support for the second hypothesis because, of the 27 perpetrators in stepfamilies, the perpetrator killed a genetically related child in 13 cases, a stepchild in 13 cases and both types of children in one case. Moreover, out of the 12 families where the perpetrator lived with both stepchildren and genetic children, there was no bias towards killing stepchildren. Thus, we found no evidence for an effect of non-genetic relatedness per se [Current Zoology 57 (3): 253-259, 2011].

  6. Training in Using Earplugs or Using Earplugs with a Higher than Necessary Noise Reduction Rating? A Randomized Clinical Trial

    Directory of Open Access Journals (Sweden)

    M Salmani Nodoushan

    2014-09-01

    Full Text Available Background: Noise-induced hearing loss (NIHL) is one of the most common occupational diseases and the second most common cause of workers' claims for occupational injuries. Objective: Due to the high prevalence of NIHL and several reports of improper use of hearing protective devices (HPDs), we conducted this study to compare the effect of face-to-face training in the effective use of earplugs with an appropriate noise reduction rating (NRR) against overprotection of workers using earplugs with a higher than necessary NRR. Methods: In a randomized clinical trial, 150 workers referred to an occupational medicine clinic were randomly allocated to three arms—a group wearing earplugs with an NRR of 25 with no training in appropriate use of the device; a group wearing earplugs with an NRR of 25 with training; and a group wearing earplugs with an NRR of 30, with no training. Hearing threshold was measured in the study groups by the real ear attenuation at threshold (REAT) method. This trial is registered with the Australian New Zealand Clinical Trials Registry, number ACTRN00363175. Results: The mean±SD age of the participants was 28±5 (range: 19–39) years; 42% of participants were female. The mean noise attenuation in the group with training was 13.88 dB, significantly higher than those observed in the other groups. The highest attenuation was observed at high frequencies (4, 6, and 8 kHz) in the group with training. Conclusion: Training in the appropriate use of earplugs significantly affects the efficacy of earplugs—even more than using an earplug with a higher NRR.

  7. Is the higher rate of parental child homicide in stepfamilies an effect of non-genetic relatedness?

    Directory of Open Access Journals (Sweden)

    Hans TEMRIN, Johanna NORDLUND, Mikael RYING, Birgitta S. TULLBERG

    2011-06-01

    Full Text Available In an evolutionary perspective individuals are expected to vary the degree of parental love and care in relation to the fitness value that a child represents. Hence, stepparents are expected to show less solicitude than genetically related parents, and this lack of genetic relatedness has been used to explain the higher frequencies of child abuse and homicide found in stepfamilies. However, other factors than non-genetic relatedness may cause this over-representation in stepfamilies. Here we use a 45-year data set of parental child homicides in Sweden to test two hypotheses related to the higher incidence in stepfamilies: 1) adults in different types of family differ in their general disposition to use violence, and 2) parents are more likely to kill stepchildren than genetically related children. Of the 152 perpetrators in biparental families there was an overrepresentation of perpetrators in stepfamilies (n=27) compared with the general population. We found support for the first hypothesis in that both general and violent crime rates were higher in stepfamilies, both in the general population and among perpetrators of child homicide. However, we found no support for the second hypothesis because, of the 27 perpetrators in stepfamilies, the perpetrator killed a genetically related child in 13 cases, a stepchild in 13 cases and both types of children in one case. Moreover, out of the 12 families where the perpetrator lived with both stepchildren and genetic children, there was no bias towards killing stepchildren. Thus, we found no evidence for an effect of non-genetic relatedness per se [Current Zoology 57 (3): 253–259, 2011].

  8. The Offer of Advanced Imaging Techniques Leads to Higher Acceptance Rates for Screening Colonoscopy - a Prospective Study.

    Science.gov (United States)

    Albrecht, Heinz; Gallitz, Julia; Hable, Robert; Vieth, Michael; Tontini, Gian Eugenio; Neurath, Markus Friedrich; Riemann, Jurgen Ferdinand; Neumann, Helmut

    2016-01-01

    Colonoscopy plays a fundamental role in the early diagnosis and management of colorectal cancer and requires public and professional acceptance to ensure the ongoing success of screening programs. The aim of the study was to prospectively assess whether patient acceptance rates to undergo screening colonoscopy could be improved by the offer of advanced imaging techniques. Overall, 372 randomly selected patients were prospectively included. A standardized questionnaire was developed that asked patients about their knowledge of advanced imaging techniques. Second, several media campaigns and information events were organized reporting on advanced imaging techniques, followed by repeated evaluation. After one year the evaluation ended. At baseline, 64% of the patients declared that they had no knowledge of new endoscopic methods. After twelve months the overall level of information increased significantly, from 14% at baseline to 34%. The percentage of patients who decided to undergo colonoscopy because of the offer of new imaging methods also increased significantly, from 12% at baseline to 42% after 12 months. Patients were highly interested in the offer of advanced imaging techniques. Knowledge about these techniques could relatively easily be provided using local media campaigns. The offer of advanced imaging techniques leads to higher acceptance rates for screening colonoscopies.

  9. Non-English speakers attend gastroenterology clinic appointments at higher rates than English speakers in a vulnerable patient population

    Science.gov (United States)

    Sewell, Justin L.; Kushel, Margot B.; Inadomi, John M.; Yee, Hal F.

    2009-01-01

    Goals We sought to identify factors associated with gastroenterology clinic attendance in an urban safety net healthcare system. Background Missed clinic appointments reduce the efficiency and availability of healthcare, but subspecialty clinic attendance among patients with established healthcare access has not been studied. Study We performed an observational study using secondary data from administrative sources to study patients referred to, and scheduled for an appointment in, the adult gastroenterology clinic serving the safety net healthcare system of San Francisco, California. Our dependent variable was whether subjects attended or missed a scheduled appointment. Analysis included multivariable logistic regression and classification tree analysis. 1,833 patients were referred and scheduled for an appointment between 05/2005 and 08/2006. Prisoners were excluded. All patients had a primary care provider. Results 683 patients (37.3%) missed their appointment; 1,150 (62.7%) attended. Language was highly associated with attendance in the logistic regression; non-English speakers were less likely than English speakers to miss an appointment (adjusted odds ratio 0.42 [0.28, 0.63] for Spanish, 0.56 [0.38, 0.82] for Asian language). Among patients scheduled for a gastroenterology clinic appointment, not speaking English was most strongly associated with higher attendance rates. Patient-related factors associated with not speaking English likely influence subspecialty clinic attendance rates, and these factors may differ from those affecting general healthcare access. PMID:19169147

  10. 3D versus 2D Systematic Transrectal Ultrasound-Guided Prostate Biopsy: Higher Cancer Detection Rate in Clinical Practice

    Directory of Open Access Journals (Sweden)

    Alexandre Peltier

    2013-01-01

    Full Text Available Objectives. To compare prostate cancer detection rates of extended 2D versus 3D biopsies and to further assess the clinical impact of this method in day-to-day practice. Methods. We analyzed the data of a cohort of 220 consecutive patients with no prior history of prostate cancer who underwent an initial prostate biopsy in daily practice due to an abnormal PSA and/or DRE using, respectively, the classical 2D and the new 3D systems. All the biopsies were done by a single experienced operator using the same standardized protocol. Results. There was no significant difference in terms of age, total PSA, or prostate volume between the two groups. However, the cancer detection rate was significantly higher using the 3D versus the 2D system, 50% versus 34% (P<0.05). There was no statistically significant difference between the two groups in terms of nonsignificant cancer detection. Conclusion. There is reasonable evidence demonstrating the superiority of the 3D-guided biopsies in detecting prostate cancers that would have been missed using the 2D extended protocol.

  11. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    Science.gov (United States)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics arising 1) from the evolution of the official algorithms used to process the data, and 2) from differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
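
    The sampling-error concept defined here, the gap between a monthly mean built from intermittent satellite overpasses and the mean a continuously observing instrument would report, can be illustrated with a toy Monte Carlo experiment. Everything below (rain statistics, revisit schedule, series length) is hypothetical and is not the authors' method or data:

```python
import math
import random
import statistics

random.seed(42)

def simulate_rain(hours=720, p_rain=0.1, mean_rate=5.0):
    """Toy hourly rain-rate series (mm/h) for one 30-day month:
    rain falls in a fraction p_rain of hours, with exponential intensity."""
    return [random.expovariate(1.0 / mean_rate) if random.random() < p_rain else 0.0
            for _ in range(hours)]

def sampling_error_rms(revisit_hours=12, trials=500):
    """RMS relative error of a monthly mean built from snapshots taken
    every `revisit_hours`, versus the mean of the full hourly record."""
    rel_errors = []
    for _ in range(trials):
        series = simulate_rain()
        truth = statistics.fmean(series)
        offset = random.randrange(revisit_hours)       # random overpass phase
        sampled = statistics.fmean(series[offset::revisit_hours])
        rel_errors.append((sampled - truth) / truth)
    return math.sqrt(statistics.fmean(e * e for e in rel_errors))

print(f"RMS relative sampling error (12 h revisit): {sampling_error_rms():.1%}")
```

    Even with unbiased sampling, the intermittency of rain makes individual monthly estimates scatter widely around the true monthly mean, which is why per-grid-point error estimates of the kind described in the abstract are needed.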

  12. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  13. Error in the delivery of radiation therapy: Results of a quality assurance review

    International Nuclear Information System (INIS)

    Huang, Grace; Medlam, Gaylene; Lee, Justin; Billingsley, Susan; Bissonnette, Jean-Pierre; Ringash, Jolie; Kane, Gabrielle; Hodgson, David C.

    2005-01-01

    Purpose: To examine error rates in the delivery of radiation therapy (RT), technical factors associated with RT errors, and the influence of a quality improvement intervention on the RT error rate. Methods and materials: We undertook a review of all RT errors that occurred at the Princess Margaret Hospital (Toronto) from January 1, 1997, to December 31, 2002. Errors were identified according to incident report forms that were completed at the time the error occurred. Error rates were calculated per patient, per treated volume (≥1 volume per patient), and per fraction delivered. The association between tumor site and error was analyzed. Logistic regression was used to examine the association between technical factors and the risk of error. Results: Over the study interval, there were 555 errors among 28,136 patient treatments delivered (error rate per patient = 1.97%, 95% confidence interval [CI], 1.81-2.14%) and among 43,302 treated volumes (error rate per volume = 1.28%, 95% CI, 1.18-1.39%). The proportion of fractions with errors from July 1, 2000, to December 31, 2002, was 0.29% (95% CI, 0.27-0.32%). Patients with sarcoma or head-and-neck tumors experienced error rates significantly higher than average (5.54% and 4.58%, respectively); however, when the number of treated volumes was taken into account, the head-and-neck error rate was no longer higher than average (1.43%). The use of accessories was associated with an increased risk of error, and internal wedges were more likely to be associated with an error than external wedges (relative risk = 2.04; 95% CI, 1.11-3.77%). Eighty-seven errors (15.6%) were directly attributed to incorrect programming of the 'record and verify' system. Changes to planning and treatment processes aimed at reducing errors within the head-and-neck site group produced a substantial reduction in the error rate. Conclusions: Errors in the delivery of RT are uncommon and usually of little clinical significance. Patient subgroups and
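
    The per-patient error rate and its confidence interval quoted above can be reproduced from the raw counts with a normal-approximation (Wald) binomial interval. A stdlib-Python sketch using the abstract's figures (555 errors among 28,136 patient treatments); the interval method is an assumption, since the paper does not state which one it used:

```python
import math

def wald_interval(events, n, z=1.96):
    """Normal-approximation 95% CI for a binomial proportion."""
    p = events / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, p - half, p + half

# 555 errors among 28,136 patient treatments
p, lo, hi = wald_interval(555, 28136)
print(f"error rate = {p:.2%} (95% CI {lo:.2%}-{hi:.2%})")
```

    This reproduces the reported 1.97% (95% CI 1.81-2.14%), suggesting the normal approximation is adequate here given the large denominator.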

  14. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  15. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  16. A Physics-Based Engineering Methodology for Calculating Soft Error Rates of Bulk CMOS and SiGe Heterojunction Bipolar Transistor Integrated Circuits

    Science.gov (United States)

    Fulkerson, David E.

    2010-02-01

    This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new methodology handles the problem with simple SPICE simulations. It accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross-section vs. frequency behavior and other subtle effects are also accurately predicted.

  17. HIV positivity but not HPV/p16 status is associated with higher recurrence rate in anal cancer.

    Science.gov (United States)

    Meyer, Joshua E; Panico, Vinicius J A; Marconato, Heloisa M F; Sherr, David L; Christos, Paul; Pirog, Edyta C

    2013-12-01

    …0.06). The regional and distant failure rate was not related to HPV/p16 positivity or histologic differentiation of ACA; however, HIV positivity appeared to be associated with a higher recurrence rate and worse recurrence-free survival.

  18. Satellite telemetry reveals higher fishing mortality rates than previously estimated, suggesting overfishing of an apex marine predator.

    Science.gov (United States)

    Byrne, Michael E; Cortés, Enric; Vaudo, Jeremy J; Harvey, Guy C McN; Sampson, Mark; Wetherbee, Bradley M; Shivji, Mahmood

    2017-08-16

    Overfishing is a primary cause of population declines for many shark species of conservation concern. However, means of obtaining information on fishery interactions and mortality, necessary for the development of successful conservation strategies, are often fisheries-dependent and of questionable quality for many species of commercially exploited pelagic sharks. We used satellite telemetry as a fisheries-independent tool to document fisheries interactions, and quantify fishing mortality of the highly migratory shortfin mako shark (Isurus oxyrinchus) in the western North Atlantic Ocean. Forty satellite-tagged shortfin mako sharks tracked over 3 years entered the Exclusive Economic Zones of 19 countries and were harvested in fisheries of five countries, with 30% of tagged sharks harvested. Our tagging-derived estimates of instantaneous fishing mortality rates (F = 0.19-0.56) were 10-fold higher than previous estimates from fisheries-dependent data (approx. 0.015-0.024), suggesting data used in stock assessments may considerably underestimate fishing mortality. Additionally, our estimates of F were greater than those associated with maximum sustainable yield, suggesting a state of overfishing. This information has direct application to evaluations of stock status and for effective management of populations, and thus satellite tagging studies have potential to provide more accurate estimates of fishing mortality and survival than traditional fisheries-dependent methodology. © 2017 The Author(s).
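As a back-of-envelope illustration of the instantaneous-rate concept used above: under an exponential survival model with fishing as the only source of loss, a harvest fraction h over t years implies h = 1 − exp(−F·t), so F = −ln(1 − h)/t. This simplified sketch is not the study's tagging-based estimator, which accounted for time at liberty and other mortality sources.

```python
import math

def instantaneous_f(harvest_fraction: float, years: float) -> float:
    """Instantaneous fishing mortality F from a harvest fraction,
    assuming exponential survival: h = 1 - exp(-F * t)."""
    return -math.log(1.0 - harvest_fraction) / years

# 30% of tagged sharks harvested, e.g. over one year at liberty:
print(round(instantaneous_f(0.30, 1.0), 3))   # ≈ 0.357 per year
```

Note how sensitive F is to the assumed time at liberty: the same 30% harvest fraction spread over three years gives F ≈ 0.12 per year.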

  19. Water Exchange Produces Significantly Higher Adenoma Detection Rate Than Water Immersion: Pooled Data From 2 Multisite Randomized Controlled Trials.

    Science.gov (United States)

    Leung, Felix W; Koo, Malcolm; Cadoni, Sergio; Falt, Premysl; Hsieh, Yu-Hsi; Amato, Arnaldo; Erriu, Matteo; Fojtik, Petr; Gallittu, Paolo; Hu, Chi-Tan; Leung, Joseph W; Liggi, Mauro; Paggi, Silvia; Radaelli, Franco; Rondonotti, Emanuele; Smajstrla, Vit; Tseng, Chih-Wei; Urban, Ondrej

    2018-03-02

    To test the hypothesis that water exchange (WE) significantly increases adenoma detection rates (ADR) compared with water immersion (WI). Low ADR was linked to increased risk for interval colorectal cancers and related deaths. Two recent randomized controlled trials of head-to-head comparison of WE, WI, and traditional air insufflation (AI) each showed that WE achieved significantly higher ADR than AI, but not WI. The data were pooled from these 2 studies to test the above hypothesis. Two trials (5 sites, 14 colonoscopists) that randomized 1875 patients 1:1:1 to AI, WI, or WE were pooled and analyzed with ADR as the primary outcome. The ADR of AI (39.5%) and WI (42.4%) were comparable, significantly lower than that of WE (49.6%) (vs. AI P=0.001; vs. WI P=0.033). WE insertion time was 3 minutes longer than that of AI (P…) … rate (vs. AI) of the >10 mm advanced adenomas. Right colon combined advanced and sessile serrated ADR of AI (3.4%) and WI (5%) were comparable and were significantly lower than that of WE (8.5%) (vs. AI P<0.001; vs. WI P=0.039). Compared with AI and WI, the superior ADR of WE offsets the drawback of a significantly longer insertion time. For quality improvement focused on increasing adenoma detection, WE is preferred over WI. The hypothesis that WE could lower the risk of interval colorectal cancers and related deaths should be tested.
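The WE-vs-AI contrast above can be checked with a standard two-proportion z-test. This is a hedged sketch: the per-arm sample size of 625 (1875 patients randomized 1:1:1) and the event counts are assumptions chosen to match the quoted percentages, and the trials' own analysis may have adjusted for site and colonoscopist.

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))
    return z, p_two_sided

# ~49.6% ADR with WE vs ~39.5% with AI, assuming 625 patients per arm:
z, p = two_proportion_z(310, 625, 247, 625)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these assumed counts the difference is clearly significant (p well below 0.05), consistent with the pooled result reported above.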

  20. International comparisons of preterm birth: higher rates of late preterm birth are associated with lower rates of stillbirth and neonatal death.

    Science.gov (United States)

    Lisonkova, S; Sabr, Y; Butler, B; Joseph, K S

    2012-12-01

    To examine international rates of preterm birth and potential associations with stillbirths and neonatal deaths at late preterm and term gestation. Ecological study. Canada, USA and 26 countries in Europe. All deliveries in 2004. Information on preterm birth (…) was obtained from Statistics Canada, the EURO-PERISTAT project and the National Center for Health Statistics. Pearson correlation coefficients and random-intercept Poisson regression were used to examine the association between preterm birth rates and gestational age-specific stillbirth and neonatal death rates. Rate ratios with 95% confidence intervals were estimated after adjustment for maternal age, parity and multiple births. Stillbirths and neonatal deaths ≥ 32 and ≥ 37 weeks of gestation. International rates of preterm birth (… births). Preterm birth rates at 32-36 weeks were inversely associated with stillbirths at ≥ 32 weeks (adjusted rate ratio 0.94, 95% CI 0.92-0.96) and ≥ 37 weeks (adjusted rate ratio 0.88, 95% CI 0.85-0.91) of gestation and inversely associated with neonatal deaths at ≥ 32 weeks (adjusted rate ratio 0.88, 95% CI 0.85-0.91) and ≥ 37 weeks (adjusted rate ratio 0.82, 95% CI 0.78-0.86) of gestation. Countries with high rates of preterm birth at 32-36 weeks of gestation have lower stillbirth and neonatal death rates at and beyond 32 weeks of gestation. Contemporary rates of preterm birth are indicators of both perinatal health and obstetric care services. © 2012 The Authors BJOG An International Journal of Obstetrics and Gynaecology © 2012 RCOG.

  1. Error and discrepancy in radiology: inevitable or avoidable?

    OpenAIRE

    Brady, Adrian P.

    2016-01-01

    Abstract Errors and discrepancies in radiology practice are uncomfortably common, with an estimated day-to-day rate of 3-5% of studies reported, and much higher rates reported in many targeted studies. Nonetheless, the meaning of the terms "error" and "discrepancy" and the relationship to medical negligence are frequently misunderstood. This review outlines the incidence of such events, the ways they can be categorized to aid understanding, and potential contributing factors, both human- and ...

  2. Clinical errors and medical negligence.

    Science.gov (United States)

    Oyebode, Femi

    2013-01-01

    This paper discusses the definition, nature and origins of clinical errors including their prevention. The relationship between clinical errors and medical negligence is examined as are the characteristics of litigants and events that are the source of litigation. The pattern of malpractice claims in different specialties and settings is examined. Among hospitalized patients worldwide, 3-16% suffer injury as a result of medical intervention, the most common being the adverse effects of drugs. The frequency of adverse drug effects appears superficially to be higher in intensive care units and emergency departments but once rates have been corrected for volume of patients, comorbidity of conditions and number of drugs prescribed, the difference is not significant. It is concluded that probably no more than 1 in 7 adverse events in medicine result in a malpractice claim and the factors that predict that a patient will resort to litigation include a prior poor relationship with the clinician and the feeling that the patient is not being kept informed. Methods for preventing clinical errors are still in their infancy. The most promising include new technologies such as electronic prescribing systems, diagnostic and clinical decision-making aids and error-resistant systems. Copyright © 2013 S. Karger AG, Basel.

  3. Does adding metformin to clomifene citrate lead to higher pregnancy rates in a subset of women with polycystic ovary syndrome?

    NARCIS (Netherlands)

    Moll, E.; Korevaar, J. C.; Bossuyt, P. M. M.; van der Veen, F.

    2008-01-01

    BACKGROUND: An RCT among newly diagnosed, therapy naive women with polycystic ovary syndrome (PCOS) showed no significant differences in ovulation rate, ongoing pregnancy rate or spontaneous abortion rate in favour of clomifene citrate plus metformin compared with clomifene citrate. We wanted to

  4. Ventilator-associated pneumonia: the influence of bacterial resistance, prescription errors, and de-escalation of antimicrobial therapy on mortality rates

    Directory of Open Access Journals (Sweden)

    Ana Carolina Souza-Oliveira

    2016-09-01

    Conclusion: Prescription errors influenced mortality of patients with Ventilator-associated pneumonia, underscoring the challenge of proper Ventilator-associated pneumonia treatment, which requires continuous reevaluation to ensure that clinical response to therapy meets expectations.

  5. Explaining quantitative variation in the rate of Optional Infinitive errors across languages: a comparison of MOSAIC and the Variational Learning Model.

    Science.gov (United States)

    Freudenthal, Daniel; Pine, Julian; Gobet, Fernand

    2010-06-01

    In this study, we use corpus analysis and computational modelling techniques to compare two recent accounts of the OI stage: Legate & Yang's (2007) Variational Learning Model and Freudenthal, Pine & Gobet's (2006) Model of Syntax Acquisition in Children. We first assess the extent to which each of these accounts can explain the level of OI errors across five different languages (English, Dutch, German, French and Spanish). We then differentiate between the two accounts by testing their predictions about the relation between children's OI errors and the distribution of infinitival verb forms in the input language. We conclude that, although both accounts fit the cross-linguistic patterning of OI errors reasonably well, only MOSAIC is able to explain why verbs that occur more frequently as infinitives than as finite verb forms in the input also occur more frequently as OI errors than as correct finite verb forms in the children's output.

  6. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  7. Residents' numeric inputting error in computerized physician order entry prescription.

    Science.gov (United States)

    Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong

    2016-04-01

    Computerized physician order entry (CPOE) systems with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors, such as the numeric inputting methods used in human-computer interaction (HCI), produce different error rates and types, but this has received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors in prescriptions, and to categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting method (numeric row in the main keyboard vs. numeric keypad) and urgency level (urgent situation vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were measured in sober prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row in the main keyboard. With control of performance in the sober prescribing situation, the effects of the input methods disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were either omission or substitution types, but the proportion of transposition and intrusion error types was significantly higher than in previous research. Among the numbers 3, 8, and 9, the less common digits used in prescriptions, the error rate was higher, posing a great risk to patient safety. Urgency played a more important role in CPOE numeric typing errors than typing skills and typing habits. It is recommended that the numeric keypad be used, as it produced lower error rates in urgent situations. An alternative design could consider increasing the sensitivity of the keys with lower frequency of occurrence and decimals. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from spatial

  8. Higher dropout rate in non-native patients than in native patients in rehabilitation in The Netherlands

    NARCIS (Netherlands)

    Sloots, Maurits; Scheppers, Emmanuel F.; van de Weg, Frans B.; Bartels, Edien A.; Geertzen, Jan H.; Dekker, Joost; Dekker, Jaap

    Dropout from a rehabilitation programme often occurs in patients with chronic nonspecific low back pain of non-native origin. However, the exact dropout rate is not known. The objective of this study was to determine the difference in dropout rate between native and non-native patients with chronic

  9. Accounting for Risk of Non-Completion in Private and Social Rates of Return to Higher Education

    Science.gov (United States)

    Toutkoushian, Robert K.; Shafiq, M. Najeeb; Trivette, Michael J.

    2013-01-01

    Conventional studies of the private and social rates of return to a Bachelor's degree focus on the earnings difference between Bachelor degree holders and high school graduates, and find that there are large rates of return for degree recipients. The estimates in these studies, however, do not take into account the risk of not completing a degree.…

  10. Trauma centers with higher rates of angiography have a lesser incidence of splenectomy in the management of blunt splenic injury.

    Science.gov (United States)

    Capecci, Louis M; Jeremitsky, Elan; Smith, R Stephen; Philp, Frances

    2015-10-01

    Nonoperative management (NOM) for blunt splenic injury (BSI) is well established. Angiography (ANGIO) has been shown to improve success rates with NOM. Protocols for NOM are not standardized and vary widely between centers. We hypothesized that trauma centers that performed ANGIO at a greater rate would demonstrate decreased rates of splenectomy compared with trauma centers that used ANGIO less frequently. A large, multicenter, statewide database (Pennsylvania Trauma Systems Foundation) from 2007 to 2011 was used to generate the study cohort of patients with BSI (age ≥ 13). The cohort was divided into 2 populations based on admission to centers with high (≥13%) or low (<13%) rates of ANGIO. Splenectomy rates were then compared between the 2 groups, and multivariable logistic regression for predictors of splenectomy (failed NOM) was also performed. The overall rate of splenectomy in the entire cohort was 21.0% (1,120 of 5,333 BSI patients). The high ANGIO group had a lesser rate of splenectomy compared with the low ANGIO group (19% vs 24%; P…) … splenectomy compared with low ANGIO centers (odds ratio, 0.68; 95% CI 0.58-0.80; P…) … splenectomy rates compared with centers with a lesser rate of ANGIO. Inclusion of angiographic protocols for NOM of BSI should be considered strongly. Copyright © 2015 Elsevier Inc. All rights reserved.
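For illustration, a crude (unadjusted) odds ratio with a Woolf 95% confidence interval can be computed from a 2x2 table. The cell counts below are hypothetical, chosen only to mirror the ~19% vs ~24% splenectomy rates quoted above; the study's reported OR of 0.68 came from multivariable logistic regression, which this sketch does not reproduce.

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int):
    """OR and Woolf 95% CI for table [[a, b], [c, d]] (events a, c; non-events b, d)."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# Hypothetical: 190/1000 splenectomies at high-ANGIO centers vs 240/1000 at low.
or_, lo, hi = odds_ratio_ci(190, 810, 240, 760)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An OR below 1 with a CI excluding 1 would, as in the study, indicate lower odds of splenectomy at high-ANGIO centers.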

  11. Medicaid/CHIP Program; Medicaid Program and Children's Health Insurance Program (CHIP); Changes to the Medicaid Eligibility Quality Control and Payment Error Rate Measurement Programs in Response to the Affordable Care Act. Final rule.

    Science.gov (United States)

    2017-07-05

    This final rule updates the Medicaid Eligibility Quality Control (MEQC) and Payment Error Rate Measurement (PERM) programs based on the changes to Medicaid and the Children's Health Insurance Program (CHIP) eligibility under the Patient Protection and Affordable Care Act. This rule also implements various other improvements to the PERM program.

  12. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat error in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty has proposed a simple technique called the packet combining scheme in which error is corrected at the receiver from the erroneous copies. Packet Combining (PC) scheme fails: (i) when bit error locations in erroneous copies are the same and (ii) when multiple bit errors occur. Both these have been addressed recently by two schemes known as Packet Reversed Packet Combining (PRPC) Scheme, and Modified Packet Combining (MPC) Scheme respectively. In the letter, two error forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)
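The basic packet-combining idea can be sketched as follows: two received copies of the same packet are XORed, the differing bit positions become candidate error locations, and flipping each candidate in turn is checked against a checksum. This is a toy version with a stand-in checksum, not the PRPC or MPC schemes of the cited papers (PRPC additionally sends the second copy bit-reversed); note it shares the limitation the abstract describes, failing when both copies are corrupted in the same position or when multiple bits are in error.

```python
def checksum(bits):
    """Toy integrity check; a real scheme would use a CRC."""
    return sum(bits) % 7

def combine(copy1, copy2, want):
    """Try each received copy, flipping single differing bits, until the checksum matches."""
    diff = [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]
    for candidate in (copy1, copy2):
        if checksum(candidate) == want:
            return tuple(candidate)
        for i in diff:
            fixed = list(candidate)
            fixed[i] ^= 1
            if checksum(tuple(fixed)) == want:
                return tuple(fixed)
    return None   # uncorrectable: request retransmission

sent = (1, 0, 1, 1, 0, 0, 1, 0)
c1   = (1, 0, 1, 0, 0, 0, 1, 0)   # bit 3 flipped in copy 1
c2   = (1, 0, 1, 1, 0, 1, 1, 0)   # bit 5 flipped in copy 2
assert combine(c1, c2, checksum(sent)) == sent
```

The throughput gain comes from correcting at the receiver instead of falling back to a full ARQ retransmission.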

  13. Combining wrist age and third molars in forensic age estimation: how to calculate the joint age estimate and its error rate in age diagnostics.

    Science.gov (United States)

    Gelbrich, Bianca; Frerking, Carolin; Weiss, Sandra; Schwerdt, Sebastian; Stellzig-Eisenhauer, Angelika; Tausche, Eve; Gelbrich, Götz

    2015-01-01

    Forensic age estimation in living adolescents is based on several methods, e.g. the assessment of skeletal and dental maturation. Combination of several methods is mandatory, since age estimates from a single method are too imprecise due to biological variability. The correlation of the errors of the methods being combined must be known to calculate the precision of combined age estimates. To examine the correlation of the errors of the hand and the third molar method and to demonstrate how to calculate the combined age estimate. Clinical routine radiographs of the hand and dental panoramic images of 383 patients (aged 7.8-19.1 years, 56% female) were assessed. Lack of correlation (r = -0.024, 95% CI = -0.124 to + 0.076, p = 0.64) allows calculating the combined age estimate as the weighted average of the estimates from hand bones and third molars. Combination improved the standard deviations of errors (hand = 0.97, teeth = 1.35 years) to 0.79 years. Uncorrelated errors of the age estimates obtained from both methods allow straightforward determination of the common estimate and its variance. This is also possible when reference data for the hand and the third molar method are established independently from each other, using different samples.
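The weighted average described above is inverse-variance weighting: with uncorrelated errors, each method's estimate is weighted by the reciprocal of its error variance, and the combined standard deviation follows directly. Using the paper's error SDs (hand 0.97 years, third molars 1.35 years) reproduces the reported combined SD of 0.79 years; the individual age estimates below are hypothetical.

```python
import math

def combine_estimates(est1: float, sd1: float, est2: float, sd2: float):
    """Inverse-variance weighted average of two estimates with uncorrelated errors."""
    w1, w2 = 1 / sd1**2, 1 / sd2**2
    combined = (w1 * est1 + w2 * est2) / (w1 + w2)
    combined_sd = math.sqrt(1 / (w1 + w2))
    return combined, combined_sd

# Hypothetical age estimates of 17.2 y (hand) and 16.6 y (third molars):
age, sd = combine_estimates(17.2, 0.97, 16.6, 1.35)
print(f"combined age = {age:.2f} ± {sd:.2f} y")   # SD ≈ 0.79 y
```

Because the weights depend only on the error SDs, the precision gain holds regardless of the particular individual being assessed, which is why the lack of correlation between the two methods' errors is the key empirical finding.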

  14. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    Science.gov (United States)

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  15. Heat conduction errors and time lag in cryogenic thermometer installations

    Science.gov (United States)

    Warshawsky, I.

    1973-01-01

    Installation practices are recommended that will increase rate of heat exchange between the thermometric sensing element and the cryogenic fluid and that will reduce the rate of undesired heat transfer to higher-temperature objects. Formulas and numerical data are given that help to estimate the magnitude of heat-conduction errors and of time lag in response.
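The time lag referred to above is commonly characterized by a first-order sensor model; the relations below are the standard textbook form, not necessarily the exact formulas given in the cited report.

```latex
% First-order response of a thermometer element (mass m, specific heat c)
% exchanging heat with fluid at T_f through coefficient h over area A:
\tau = \frac{m c}{h A}, \qquad
T_s(t) = T_f + \left(T_0 - T_f\right) e^{-t/\tau}
```

Increasing the heat-exchange rate (larger h·A) shrinks the time constant τ, which is exactly why the recommended installation practices reduce both time lag and conduction error.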

  16. Are Cancer incidence Rates Among Present And Past Workers Of The research Centers Of The Atomic Energy Commission higher Than The Rates Among The General Population?

    International Nuclear Information System (INIS)

    Litai, D.

    1999-01-01

    Cancer incidence rates among the workers of the AEC and its retirees have increased several fold in the last decade compared to the rates experienced in previous ones. This has brought about a wave of claims for compensation with negative repercussions in the media about the state of radiation safety in the nuclear research centers in the country. The Nuclear Research Center - Negev, being, generally closed to public and media visits, has taken the brunt of this criticism. Consequently, the question spelled out in the title has caused much concern and deserves to be discussed and explained. The purpose of this paper is to review what we know in this context and to show that the observed morbidity rates, worrying as they may be, are entirely natural, and, by and large, unrelated to the occupational exposures of the workers. It is well known that cancer incidence rates in the population rise steeply with age, especially over 50. As both research centers are approaching the age of 40, it is clear that a very large fraction of the workers and all retirees have passed this age and many are already in their sixties and even seventies. It is a well established fact that close to 40% of the population in this country (and many others as well) develop some type of cancer during their lifetime and close to a half of these succumb to it. As most of those cancers occur after the age of 50, this explains the increased rates alluded to above. Notably, numerous research centers around the globe have reached similar ages in the last decade and experience similar increases in morbidity, that have caused understandable concern and the initiation of epidemiological studies intended to identify the health effects of extended exposures to low doses, if any. Such studies have been carried out in several countries and followed, altogether, about 100,000 workers through 40 years. The studies showed no excess of cancer mortality among workers compared to the general population (adjusted

  17. Faster eating rates are associated with higher energy intakes during an ad libitum meal, higher BMI and greater adiposity among 4·5-year-old children: results from the Growing Up in Singapore Towards Healthy Outcomes (GUSTO) cohort.

    Science.gov (United States)

    Fogel, Anna; Goh, Ai Ting; Fries, Lisa R; Sadananthan, Suresh A; Velan, S Sendhil; Michael, Navin; Tint, Mya-Thway; Fortier, Marielle V; Chan, Mei Jun; Toh, Jia Ying; Chong, Yap-Seng; Tan, Kok Hian; Yap, Fabian; Shek, Lynette P; Meaney, Michael J; Broekman, Birit F P; Lee, Yung Seng; Godfrey, Keith M; Chong, Mary F F; Forde, Ciarán G

    2017-04-01

    Faster eating rates are associated with increased energy intake, but little is known about the relationship between children's eating rate, food intake and adiposity. We examined whether children who eat faster consume more energy and whether this is associated with higher weight status and adiposity. We hypothesised that eating rate mediates the relationship between child weight and ad libitum energy intake. Children (n 386) from the Growing Up in Singapore Towards Healthy Outcomes cohort participated in a video-recorded ad libitum lunch at 4·5 years to measure acute energy intake. Videos were coded for three eating-behaviours (bites, chews and swallows) to derive a measure of eating rate (g/min). BMI and anthropometric indices of adiposity were measured. A subset of children underwent MRI scanning (n 153) to measure abdominal subcutaneous and visceral adiposity. Children above/below the median eating rate were categorised as slower and faster eaters, and compared across body composition measures. There was a strong positive relationship between eating rate and energy intake (r 0·61, P<0·001) and a positive linear relationship between eating rate and children's BMI status. Faster eaters consumed 75 % more energy content than slower eating children (Δ548 kJ (Δ131 kcal); 95 % CI 107·6, 154·4, P<0·001), and had higher whole-body (P<0·05) and subcutaneous abdominal adiposity (Δ118·3 cc; 95 % CI 24·0, 212·7, P=0·014). Mediation analysis showed that eating rate mediates the link between child weight and energy intake during a meal (b 13·59; 95 % CI 7·48, 21·83). Children who ate faster had higher energy intake, and this was associated with increased BMI z-score and adiposity.

  18. MEDICAL ERROR: CIVIL AND LEGAL ASPECT.

    Science.gov (United States)

    Buletsa, S; Drozd, O; Yunin, O; Mohilevskyi, L

    2018-03-01

    The scientific article is focused on the notion of medical error; its medical and legal aspects have been considered. The necessity of legislative consolidation of the notion of «medical error» and criteria for its legal assessment have been grounded. In writing the article, we used the empirical method and general scientific and comparative legal methods. The concept of medical error in its civil and legal aspects was compared from the point of view of Ukrainian, European and American scholars. The problem of medical errors has been known since ancient times, and throughout the world, regardless of the level of development of medicine, there is no country where doctors never make errors. According to statistics, medical errors rank among the top five causes of death worldwide. At the same time, the provision of medical services concerns practically everyone. As human life and health are recognized in Ukraine as the highest social values, medical services must be high-quality and effective. The provision of poor-quality medical services causes harm to health, and sometimes to life; it may result in injury or even death. The right to health protection is one of the fundamental human rights guaranteed by the Constitution of Ukraine; therefore the issue of medical errors and liability for them is extremely relevant. The authors conclude that the definition of the notion of «medical error» must receive legal consolidation. Moreover, the legal assessment of medical errors must be based on uniform principles enshrined in legislation and confirmed by judicial practice.

  19. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  20. Score of Inattention Subscale of ADHD Rating Scale-IV is Significantly Higher for AD/HD than PDD.

    OpenAIRE

    Fujibayashi, Hiromi; Kitayama, Shinji; Matsuo, Masafumi

    2010-01-01

    Attention-deficit/hyperactivity disorder (AD/HD) and pervasive developmental disorder (PDD) must be differentiated because their respective treatments are different. However, they are difficult to distinguish because they often show similar symptoms. At our hospital, we have the caregiver of a patient answer both the ADHD Rating Scale-IV (ADHD-RS) and the Autism Spectrum Screening Questionnaire (ASSQ), and use the results as an aid in the diagnosis of AD/HD or PDD. These results were compared wit...

  1. Hydraulic conductance as well as nitrogen accumulation plays a role in the higher rate of leaf photosynthesis of the most productive variety of rice in Japan.

    Science.gov (United States)

    Taylaran, Renante D; Adachi, Shunsuke; Ookawa, Taiichiro; Usuda, Hideaki; Hirasawa, Tadashi

    2011-07-01

    An indica variety Takanari is known as one of the most productive rice varieties in Japan and consistently produces 20-30% heavier dry matter during ripening than Japanese commercial varieties in the field. The higher rate of photosynthesis of individual leaves during ripening has been recognized in Takanari. By using pot-grown plants under conditions of minimal mutual shading, it was confirmed that the higher rate of leaf photosynthesis is responsible for the higher dry matter production after heading in Takanari as compared with a japonica variety, Koshihikari. The rate of leaf photosynthesis and shoot dry weight became larger in Takanari after the panicle formation and heading stages, respectively, than in Koshihikari. Roots grew rapidly in the panicle formation stage until heading in Takanari compared with Koshihikari. The higher rate of leaf photosynthesis in Takanari resulted not only from the higher content of leaf nitrogen, which was caused by its elevated capacity for nitrogen accumulation, but also from higher stomatal conductance. When measured under light-saturated conditions, stomatal conductance was already decreased due to the reduction in leaf water potential in Koshihikari even under conditions of a relatively small difference in leaf-air vapour pressure difference. In contrast, the higher stomatal conductance was supported by the maintenance of higher leaf water potential through the higher hydraulic conductance in Takanari with the larger area of root surface. However, no increase in root hydraulic conductivity was expected in Takanari. The larger root surface area of Takanari might be a target trait in future rice breeding for increasing dry matter production.

  2. Association between higher levels of sexual function, activity, and satisfaction and self-rated successful aging in older postmenopausal women

    Science.gov (United States)

    Thompson, Wesley K.; Charo, Lindsey; Vahia, Ipsit V.; Depp, Colin; Allison, Matthew; Jeste, Dilip V.

    2014-01-01

    Objectives To determine if measures of successful-aging are associated with sexual activity, satisfaction, and function in older post-menopausal women. Design Cross-sectional study using self-report surveys; analyses include chi-square and t-tests and multiple linear regression analyses. Setting Community-dwelling older post-menopausal women in the greater San Diego Region. Participants 1,235 community-dwelling women aged 60-89 years participating at the San Diego site of the Women's Health Initiative. Measurements Demographics and self-report measures of sexual activity, function, and satisfaction and successful aging. Results Sexual activity and functioning (desire, arousal, vaginal tightness, use of lubricants, and ability to climax) were negatively associated with age, as were physical and mental health. In contrast, sexual satisfaction and self-rated successful aging and quality of life remained unchanged across age groups. Successful aging measures were positively associated with sexual measures, especially self-rated quality of life and sexual satisfaction. Conclusions Self-rated successful aging, quality of life, and sexual satisfaction appear to be stable in the face of declines in physical health, some cognitive abilities, and sexual activity and function and are positively associated with each other across ages 60-89 years. PMID:21797827

  3. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    In this review article, the definition of medication error, the scale of the medication error problem, the types and common causes of medication errors, and approaches to monitoring, preventing, and managing medication errors and their consequences are explained clearly, with tables that make the material easy to follow.

  4. Error and discrepancy in radiology: inevitable or avoidable?

    Science.gov (United States)

    Brady, Adrian P

    2017-02-01

    Errors and discrepancies in radiology practice are uncomfortably common, with an estimated day-to-day rate of 3-5% of studies reported, and much higher rates in many targeted studies. Nonetheless, the meaning of the terms "error" and "discrepancy" and their relationship to medical negligence are frequently misunderstood. This review outlines the incidence of such events, the ways they can be categorized to aid understanding, and potential contributing factors, both human- and system-based. Possible strategies to minimise error are considered, along with the means of dealing with perceived underperformance when it is identified. The inevitability of imperfection is explained, while the importance of striving to minimise such imperfection is emphasised. • Discrepancies between radiology reports and subsequent patient outcomes are not inevitably errors. • Radiologist reporting performance cannot be perfect, and some errors are inevitable. • Error or discrepancy in radiology reporting does not equate to negligence. • Radiologist errors occur for many reasons, both human- and system-derived. • Strategies exist to minimise error causes and to learn from errors made.

  5. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing the clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. The Diabetes Technology Society, together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, as well as representatives of academia, industry, and government, has developed a new error grid, called the surveillance error grid (SEG), as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors in BG levels measured by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph according to its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient had type 1 or type 2 diabetes or was using insulin or not. No significant differences were noted between the responses of adult and pediatric clinicians or among the 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  6. Do More Hospital Beds Lead to Higher Hospitalization Rates? A Spatial Examination of Roemer’s Law

    Science.gov (United States)

    Delamater, Paul L.; Messina, Joseph P.; Grady, Sue C.; WinklerPrins, Vince; Shortridge, Ashton M.

    2013-01-01

    Background Roemer’s Law, a widely cited principle in health care policy, states that hospital beds that are built tend to be used. This simple but powerful expression has been invoked to justify Certificate of Need regulation of hospital beds in an effort to contain health care costs. Despite its influence, a surprisingly small body of empirical evidence supports its content. Furthermore, known geographic factors influencing health services use and the spatial structure of the relationship between hospital bed availability and hospitalization rates have not been sufficiently explored in past examinations of Roemer’s Law. We pose the question, “Accounting for space in health care access and use, is there an observable association between the availability of hospital beds and hospital utilization?” Methods We employ an ecological research design based upon the Andersen behavioral model of health care utilization. This conceptual model is implemented in an explicitly spatial context. The effect of hospital bed availability on the utilization of hospital services is evaluated, accounting for spatial structure and controlling for other known determinants of hospital utilization. The stability of this relationship is explored by testing across numerous geographic scales of analysis. The case study comprises an entire state system of hospitals and population, evaluating over one million inpatient admissions. Results We find compelling evidence that a positive, statistically significant relationship exists between hospital bed availability and inpatient hospitalization rates. Additionally, the observed relationship is invariant with changes in the geographic scale of analysis. Conclusions This study provides evidence for the effects of Roemer’s Law, thus suggesting that variations in hospitalization rates have origins in the availability of hospital beds. This relationship is found to be robust across geographic scales of analysis. These findings suggest

  7. Fish community reassembly after a coral mass mortality: higher trophic groups are subject to increased rates of extinction.

    Science.gov (United States)

    Alonso, David; Pinyol-Gallemí, Aleix; Alcoverro, Teresa; Arthur, Rohan

    2015-05-01

    Since Gleason and Clements, our understanding of community dynamics has been influenced by theories emphasising either dispersal or niche assembly as central to community structuring. Determining the relative importance of these processes in structuring real-world communities remains a challenge. We tracked reef fish community reassembly after a catastrophic coral mortality in a relatively unfished archipelago. We revisited the stochastic model underlying MacArthur and Wilson's Island Biogeography Theory, with a simple extension to account for trophic identity. Colonisation and extinction rates calculated from decadal presence-absence data based on (1) species neutrality, (2) trophic identity and (3) site-specificity were used to model post-disturbance reassembly, and compared with empirical observations. Results indicate that species neutrality holds within trophic guilds, and trophic identity significantly increases overall model performance. Strikingly, extinction rates increased clearly with trophic position, indicating that fish communities may be inherently susceptible to trophic downgrading even without targeted fishing of top predators. © 2015 John Wiley & Sons Ltd/CNRS.
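    The colonisation-extinction dynamics invoked here can be sketched with the classic MacArthur-Wilson occupancy recursion; the rates below are illustrative, not the study's estimates, and occupancy converges to the equilibrium c/(c+e):

```python
def occupancy_trajectory(c, e, p0=0.0, steps=200):
    """Iterate p[t+1] = p[t] + c*(1 - p[t]) - e*p[t]:
    colonisation fills empty sites, extinction empties occupied ones."""
    p = p0
    traj = [p]
    for _ in range(steps):
        p = p + c * (1.0 - p) - e * p
        traj.append(p)
    return traj

# A higher extinction rate e (as found for higher trophic groups)
# lowers the equilibrium occupancy c / (c + e).
traj_low_e = occupancy_trajectory(c=0.1, e=0.1)   # equilibrium 0.5
traj_high_e = occupancy_trajectory(c=0.1, e=0.3)  # equilibrium 0.25
```

    In this deterministic sketch the same colonisation rate with a tripled extinction rate halves equilibrium occupancy, which is the qualitative mechanism behind trophic downgrading described above.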

  8. The Active Management of Risk in Multiparous Pregnancy at Term: association between a higher preventive labor induction rate and improved birth outcomes

    Science.gov (United States)

    Nicholson, James M.; Caughey, Aaron; Stenson, Ms. Morghan H.; Cronholm, Peter; Kellar, Lisa; Bennett, Ian; Margo, Katie; Stratton, Joseph

    2009-01-01

    Objective To determine if exposure of multiparous women to a high rate of preventive labor induction was associated with a significantly lower cesarean delivery rate. Study Design Retrospective cohort study involving 123 multiparas, who were exposed to the frequent use of preventive labor induction, and 304 multiparas, who received standard management. Rates of cesarean delivery and other adverse birth outcomes were compared in the two groups. Logistic regression controlled for confounding covariates. Results The exposed group had a lower cesarean delivery rate (aOR 0.09, 0.8% vs. 9.9%, p = 0.02) and a higher uncomplicated vaginal delivery rate (OR 0.53, 78.9% vs. 66.4%, p=0.01). Exposure was not associated with higher rates of other adverse birth outcomes. Conclusion Exposure of multiparas to a high rate of preventive labor induction was significantly associated with improved birth outcomes including a very low cesarean delivery rate. A prospective randomized trial is needed to determine causality. PMID:19254584
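    A crude (unadjusted) odds ratio of the kind underlying these figures can be computed directly from the two groups' event rates; a minimal sketch using the reported cesarean rates (0.8% vs. 9.9%). Note that the crude value will differ from the covariate-adjusted aOR of 0.09 reported by the logistic regression:

```python
def odds(p):
    """Convert a proportion to odds."""
    return p / (1.0 - p)

def odds_ratio(p_exposed, p_control):
    """Crude odds ratio comparing two proportions."""
    return odds(p_exposed) / odds(p_control)

# Cesarean delivery: 0.8% in the exposed group vs. 9.9% under standard care.
crude_or = odds_ratio(0.008, 0.099)  # about 0.073
```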

  9. Potential impact on HIV incidence of higher HIV testing rates and earlier antiretroviral therapy initiation in MSM

    DEFF Research Database (Denmark)

    Phillips, Andrew N; Cambiano, Valentina; Miners, Alec

    2015-01-01

    BACKGROUND: Increased rates of testing, with early antiretroviral therapy (ART) initiation, represent a key potential HIV-prevention approach. Currently, in MSM in the United Kingdom, it is estimated that 36% are diagnosed by 1 year from infection, and the ART initiation threshold is at CD4 cell count 350/μl. We investigated what would be required to reduce HIV incidence in MSM to below 1 per 1000 person-years (i.e. cost-effective. METHODS: A dynamic, individual-based simulation model was calibrated to multiple data sources ... with viral suppression to 80%, and it would be 90% if ART is initiated at diagnosis. The scenarios required for such a policy to be cost-effective are presented. CONCLUSION: This analysis provides targets for the proportion of all HIV-positive MSM with viral suppression required to achieve substantial ...

  10. Errors in laboratory medicine: practical lessons to improve patient safety.

    Science.gov (United States)

    Howanitz, Peter J

    2005-10-01

    Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification

  11. PCR reveals significantly higher rates of Trypanosoma cruzi infection than microscopy in the Chagas vector, Triatoma infestans: High rates found in Chuquisaca, Bolivia

    Directory of Open Access Journals (Sweden)

    Lucero David E

    2007-06-01

    Abstract Background The Andean valleys of Bolivia are the only reported location of sylvatic Triatoma infestans, the main vector of Chagas disease in this country, and the high human prevalence of Trypanosoma cruzi infection in this region is hypothesized to result from the ability of vectors to persist in domestic, peri-domestic, and sylvatic environments. Determination of the rate of Trypanosoma infection in its triatomine vectors is an important element in programs directed at reducing human infections. Traditionally, T. cruzi has been detected in insect vectors by direct microscopic examination of extruded feces, or by dissection and analysis of the entire bug. Although this technique has proven to be useful, several drawbacks related to its sensitivity, especially in the case of small instars, and its applicability to large numbers of insects and dead specimens have motivated researchers to look for a molecular assay based on the polymerase chain reaction (PCR) as an alternative for detection of T. cruzi infection in vectors. In the work presented here, we compared a PCR assay and direct microscopic observation for diagnosis of T. cruzi infection in T. infestans collected in the field from five localities and four habitats in Chuquisaca, Bolivia. The efficacy of the methods was compared across nymphal stages, localities and habitats. Methods We examined 152 nymph and adult T. infestans collected from rural areas in the department of Chuquisaca, Bolivia. For microscopic observation, a few drops of rectal content obtained by abdominal extrusion were diluted with saline solution and compressed between a slide and a cover slip. The presence of motile parasites in 50 microscopic fields was registered using 400× magnification. For the molecular analysis, dissection of the posterior part of the abdomen of each insect followed by DNA extraction and PCR amplification was performed using the TCZ1 (5' – CGA GCT CTT GCC CAC ACG GGT GCT – 3

  12. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

    We calculate opacity from k(hν) = −ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is sample density, and L is path length through the sample. The density and path length are measured together by Rutherford backscatter. Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U−E)/(V−E) = B/B0, where B is the transmitted backlighter (BL) signal and B0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB0/B0, and consequently Δk/k = (1/ln(T))(ΔB/B + ΔB0/B0) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
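    As a quick numerical sketch of this error budget (the inputs are illustrative, and the magnitude of ln T is taken so the fractional error comes out positive):

```python
import math

def opacity_fractional_error(T, dB_over_B, dB0_over_B0, d_rhoL_over_rhoL):
    """Fractional opacity error dk/k for k = -ln(T)/(rho*L),
    summing backlighter-signal and areal-density contributions."""
    dT_over_T = dB_over_B + dB0_over_B0  # dT/T = dB/B + dB0/B0
    return dT_over_T / abs(math.log(T)) + d_rhoL_over_rhoL

# Example: T = 0.5 with 2% and 1% backlighter errors and a 3% rho*L error.
err = opacity_fractional_error(0.5, 0.02, 0.01, 0.03)  # about 7.3%
```

    Note how the 1/ln(T) factor inflates the signal errors as transmission approaches 1, which is why the measured transmission range matters for the budget.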

  13. Effect of temperature during ion sputtering on the surface segregation rate of antimony in an iron-antimony alloy at higher temperatures

    International Nuclear Information System (INIS)

    Oku, M.; Hirokawa, K.; Kimura, H.; Suzuki, S.

    1986-01-01

    The surface segregation of antimony in an iron-0.23 at% antimony alloy was studied by XPS. The segregation rate in the temperature range between 800 and 900 K depends on the temperature during sputtering with argon ions of 1 keV kinetic energy. Sputtering at room temperature or 473 K gives higher values of the segregation rate than sputtering at 673 K. Both cases give an activation energy of 170 kJ mol⁻¹ for the surface segregation rate. The segregation of antimony is not observed after the sample is heated at 1000 K. (author)

  14. Loose regulation of medical marijuana programs associated with higher rates of adult marijuana use but not cannabis use disorder.

    Science.gov (United States)

    Williams, Arthur Robin; Santaella-Tenorio, Julian; Mauro, Christine M; Levin, Frances R; Martins, Silvia S

    2017-11-01

    Most US states have passed medical marijuana laws (MMLs), with great variation in program regulation impacting enrollment rates. We aimed to compare changes in rates of marijuana use, heavy use and cannabis use disorder across age groups while accounting for whether states enacted medicalized (highly regulated) or non-medical MML programs. Difference-in-differences estimates with time-varying state-level MML coded by program type (medicalized versus non-medical). Multi-level linear regression models adjusted for state-level random effects and covariates as well as historical trends in use. Nation-wide cross-sectional survey data from the US National Survey of Drug Use and Health (NSDUH) restricted use data portal aggregated at the state level. Participants comprised 2004-13 NSDUH respondents (n ~ 67 500/year); age groups 12-17, 18-25 and 26+ years. States had implemented eight medicalized and 15 non-medical MML programs. Primary outcome measures included (1) active (past-month) marijuana use; (2) heavy use (> 300 days/year); and (3) cannabis use disorder diagnosis, based on DSM-IV criteria. Covariates included program type, age group and state-level characteristics throughout the study period. Adults 26+ years of age living in states with non-medical MML programs increased past-month marijuana use by 2.46% (from 4.13 to 6.59%, P = 0.01), skewing towards greater heavy marijuana use, up 2.36% (from 14.94 to 17.30%, P = 0.09), after MMLs were enacted. However, no associated increase in the prevalence of cannabis use disorder was found during the study period. Our findings do not show increases in prevalence of marijuana use among adults in states with medicalized MML programs. Additionally, there were no increases in adolescent or young adult marijuana outcomes following MML passage, irrespective of program type. Non-medical marijuana laws enacted in US states are associated with increased marijuana use, but only among adults aged 26+ years. Researchers and
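    The difference-in-differences estimator used in such designs reduces, in the simplest two-period case, to the change in the treated states minus the change in comparison states. In the sketch below, only the treated-state figures come from the abstract; the comparison-state figures are hypothetical:

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Two-period difference-in-differences estimate."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Past-month use among adults 26+ in non-medical MML states rose from
# 4.13% to 6.59%; the comparison-state change here is hypothetical.
effect = diff_in_diff(4.13, 6.59, 4.00, 4.20)
```

    Subtracting the comparison-state trend is what nets out the "historical trends in use" the study's regression models control for.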

  15. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that, Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...
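    As an illustration of this layered scheme, the sketch below encodes data with a rate-1/2 convolutional mother code and punctures the less important layer to a higher rate; the generator polynomials and puncturing pattern are illustrative, not those of the cited work:

```python
from itertools import cycle

def conv_encode(bits, generators=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder, constraint length 3."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        for g in generators:
            out.append(bin(state & g).count("1") % 2)
    return out

def puncture(coded, pattern):
    """Drop coded bits where the cyclically repeated pattern holds a 0."""
    return [bit for bit, keep in zip(coded, cycle(pattern)) if keep]

data = [1, 0, 1, 1, 0, 0, 1, 0]
mother = conv_encode(data)                  # 16 bits: full rate-1/2 protection
base_layer = mother                         # most important layer: unpunctured
enh_layer = puncture(mother, [1, 1, 1, 0])  # 12 bits kept: effective rate 2/3
```

    Rate compatibility means the punctured streams are nested: a single decoder can handle every layer by treating the dropped positions of the higher-rate layers as erasures.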

  16. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high-resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard-decision (i.e., 1-bit error flags) and soft-decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
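    The burst/gap statistics described above amount to run-length measurement over a stream of per-byte error flags; a minimal sketch (1 = byte in error, 0 = good byte):

```python
from itertools import groupby

def burst_gap_stats(error_flags):
    """Return (burst lengths, gap lengths) from a 0/1 error-flag stream."""
    bursts, gaps = [], []
    for flag, run in groupby(error_flags):
        length = sum(1 for _ in run)
        (bursts if flag else gaps).append(length)
    return bursts, gaps

flags = [0, 0, 1, 1, 1, 0, 1, 0, 0, 0]
bursts, gaps = burst_gap_stats(flags)  # bursts = [3, 1], gaps = [2, 1, 3]
```

    Histograms of these run lengths are exactly what a model of CIRC decoder performance needs, since interleaving is designed around burst-length distributions rather than average bit error rates.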

  17. Being born under adverse economic conditions leads to a higher cardiovascular mortality rate later in life: evidence based on individuals born at different stages of the business cycle

    DEFF Research Database (Denmark)

    van den Berg, Gerard J; Doblhammer-Reiter, Gabriele; Christensen, Kaare

    2011-01-01

    since the 1870s and including the cause of death. To capture exogenous variation of conditions early in life, we use the state of the business cycle around birth. We find significant negative effects of economic conditions around birth on the individual CV mortality rate at higher ages...

  18. Boarding is associated with higher rates of medication delays and adverse events but fewer laboratory-related delays.

    Science.gov (United States)

    Sri-On, Jiraporn; Chang, Yuchiao; Curley, David P; Camargo, Carlos A; Weissman, Joel S; Singer, Sara J; Liu, Shan W

    2014-09-01

    Hospital crowding and emergency department (ED) boarding are large and growing problems. To date, there has been a paucity of information regarding the quality of care received by patients boarding in the ED compared with the care received by patients on an inpatient unit. We compared the rate of delays and adverse events at the event level that occur while boarding in the ED vs while on an inpatient unit. This study was a secondary analysis of data from medical record review and administrative databases at 2 urban academic teaching hospitals from August 1, 2004, through January 31, 2005. We measured delayed repeat cardiac enzymes, delayed partial thromboplastin time level checks, delayed antibiotic administration, delayed administration of home medications, and adverse events. We compared the incidence of events during ED boarding vs while on an inpatient unit. Among 1431 patient medical records, we identified 1016 events. Emergency department boarding was associated with an increased risk of home medication delays (risk ratio [RR], 1.54; 95% confidence interval [CI], 1.26-1.88), delayed antibiotic administration (RR, 2.49; 95% CI, 1.72-3.52), and adverse events (RR, 2.36; 95% CI, 1.15-4.72). On the contrary, ED boarding was associated with fewer delays in repeat cardiac enzymes (RR, 0.17; 95% CI, 0.09-0.27) and delayed partial thromboplastin time checks (RR, 0.54; 95% CI, 0.27-0.96). Compared with inpatient units, ED boarding was associated with more medication-related delays and adverse events but fewer laboratory-related delays. Until we can eliminate ED boarding, it is critical to identify areas for improvement. Copyright © 2014 Elsevier Inc. All rights reserved.
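    Risk ratios with 95% confidence intervals like those reported here are conventionally computed from 2x2 counts via the log-normal approximation; the counts below are hypothetical, not the study's data:

```python
import math

def risk_ratio_ci(events_exposed, n_exposed, events_control, n_control, z=1.96):
    """Risk ratio with a 95% CI on the log scale (log-normal approximation)."""
    rr = (events_exposed / n_exposed) / (events_control / n_control)
    se_log = math.sqrt(1 / events_exposed - 1 / n_exposed
                       + 1 / events_control - 1 / n_control)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical: 50/200 boarded patients with a delay vs. 25/250 inpatients.
rr, lo, hi = risk_ratio_ci(50, 200, 25, 250)  # rr = 2.5
```

    An interval that excludes 1 (as for the delayed-antibiotic RR of 2.49, CI 1.72-3.52) indicates a statistically significant difference between boarding and inpatient care.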

  19. Low-birthweight rates higher among Bangladeshi neonates measured during active birth surveillance compared to national survey data.

    Science.gov (United States)

    Klemm, Rolf D W; Merrill, Rebecca D; Wu, Lee; Shamim, Abu Ahmed; Ali, Hasmot; Labrique, Alain; Christian, Parul; West, Keith P

    2015-10-01

    Birth size is an important gauge of fetal and neonatal health. Birth size measurements were collected within 72 h of life for 16 290 live born, singleton infants in rural Bangladesh from 2004 to 2007. Gestational age was calculated based on the date of last menstrual period. Newborns were classified as small-for-gestational age (SGA) based on a birthweight below the 10th percentile for gestational age, using three sets of US reference data. Birth size distributions were explored based on raw values as well as after z-score standardisation in reference to World Health Organization (WHO) 2006 growth standards. Mean (SD) birthweight (g), length (cm) and head circumference (cm) measurements, completed within [median (25th, 75th percentile)] 15 (8, 23) h of life, were 2433 (425), 46.4 (2.4) and 32.4 (1.6), respectively. Twenty-two per cent were born preterm. Over one-half (55.3%) of infants were born low birthweight; 46.6%, 37.0% and 33.6% had a weight, length and head circumference below -2 z-scores of the WHO growth standard at birth; and 70.9%, 72.2% and 59.8% were SGA for weight based on Alexander et al., Oken et al. and Olsen et al. references, respectively. Infants in this typical rural Bangladesh setting were commonly born small, reflecting a high burden of fetal growth restriction and preterm birth. Our findings, produced by active birth surveillance, suggest that low birthweight is far more common than suggested by cross-sectional survey estimates. Interventions that improve fetal growth during pregnancy may have the largest impact on reducing SGA rates. © 2013 John Wiley & Sons Ltd.

  20. Numerical and experimental investigation of the bell-mouth inlet design of a centrifugal fan for higher internal flow rate

    International Nuclear Information System (INIS)

    Kim, Sang Hyeon; Heo, Seung; Cheong, Cheolung; Kim, Tae Hoon

    2013-01-01

    The energy efficiency of a household refrigerator is one of the most critical characteristics considered by manufacturers and consumers. Numerous studies in various fields have been conducted to increase energy efficiency. One of the most efficient methods to reduce the energy consumption of a refrigerator is by improving the performance of fans inside the refrigerator. A number of studies reported various ways to enhance fan performance. However, the majority of these studies focused solely on the fan and did not consider the working environment of the fan, such as the inlet and outlet flow characteristics. The expected performance of fans developed without consideration of these characteristics cannot be determined because complex inlet and outlet flow passage could adversely affect performance. This study investigates the effects of the design of the bell-mouth inlet on the performance of a centrifugal fan in a household refrigerator. In preliminary numerical studies, significant flow loss is identified through the bell-mouth inlet in the target fan system. Several design factors such as tip clearance, inner fence, motor-box struts, and guide vane are proposed to resolve these flow losses. The effects of these factors on fan performance are investigated using computational fluid dynamics techniques to solve incompressible Reynolds-averaged Navier-Stokes equations for predicting the circulating flow of the fan. Experiments are then performed to validate the numerical predictions. Results indicate that four design factors positively affect fan performance in terms of flow rate. The guide vane is the most effective design factor to consider for improving fan performance. Further studies are conducted to investigate the detailed effects of the guide vane by varying its install angle, install location, height, and length. These studies determine the optimum design of the guide vane to achieve the highest performance of the fan and the related flow characteristics

  1. Numerical and experimental investigation of the bell-mouth inlet design of a centrifugal fan for higher internal flow rate

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang Hyeon; Heo, Seung; Cheong, Cheolung [Pusan National University, Busan (Korea, Republic of); Kim, Tae Hoon [Refrigeration Division, Changwon (Korea, Republic of)

    2013-08-15

    The energy efficiency of a household refrigerator is one of the most critical characteristics considered by manufacturers and consumers. Numerous studies in various fields have been conducted to increase energy efficiency. One of the most efficient methods to reduce the energy consumption of a refrigerator is by improving the performance of fans inside the refrigerator. A number of studies reported various ways to enhance fan performance. However, the majority of these studies focused solely on the fan and did not consider the working environment of the fan, such as the inlet and outlet flow characteristics. The expected performance of fans developed without consideration of these characteristics cannot be determined because complex inlet and outlet flow passage could adversely affect performance. This study investigates the effects of the design of the bell-mouth inlet on the performance of a centrifugal fan in a household refrigerator. In preliminary numerical studies, significant flow loss is identified through the bell-mouth inlet in the target fan system. Several design factors such as tip clearance, inner fence, motor-box struts, and guide vane are proposed to resolve these flow losses. The effects of these factors on fan performance are investigated using computational fluid dynamics techniques to solve incompressible Reynolds-averaged Navier-Stokes equations for predicting the circulating flow of the fan. Experiments are then performed to validate the numerical predictions. Results indicate that four design factors positively affect fan performance in terms of flow rate. The guide vane is the most effective design factor to consider for improving fan performance. Further studies are conducted to investigate the detailed effects of the guide vane by varying its install angle, install location, height, and length. These studies determine the optimum design of the guide vane to achieve the highest performance of the fan and the related flow characteristics

  2. Relating physician's workload with errors during radiation therapy planning.

    Science.gov (United States)

    Mazur, Lukasz M; Mosaly, Prithima R; Hoyle, Lesley M; Jones, Ellen L; Chera, Bhishamjit S; Marks, Lawrence B

    2014-01-01

    To relate subjective workload (WL) levels to errors for routine clinical tasks. Nine physicians (4 faculty and 5 residents) each performed 3 radiation therapy planning cases. The WL levels were subjectively assessed using the National Aeronautics and Space Administration Task Load Index (NASA-TLX). Individual performance was assessed objectively based on the severity grade of errors. The relationship between WL and performance was assessed via ordinal logistic regression. There was an increased rate of severity grade of errors with increasing WL (P value = .02). As the majority of the higher NASA-TLX scores and the majority of the performance errors were in the residents, our findings are likely most pertinent to radiation oncology centers with training programs. WL levels may be an important factor contributing to errors during radiation therapy planning tasks. Published by Elsevier Inc.

  3. APPRAISAL OF STUDENT RATING AS A MEASURE TO MANAGE THE QUALITY OF HIGHER EDUCATION IN INDIA: AN INSTITUTIONAL STUDY USING SIX SIGMA MODEL APPROACH

    Directory of Open Access Journals (Sweden)

    Arun Vijay

    2013-12-01

    Students' rating of teaching is one of the most widely accepted methods of measuring quality in Higher Education worldwide. The overall experience gained by students during their academic journey in their respective college is a key factor in determining Institutional Quality. This study was conducted among Physical Therapy students with the objective of capturing their overall experience of various aspects of the academic environment, including the teaching and learning process adopted in their college. To facilitate that, a unique questionnaire called the "Academic Environment Evaluation Questionnaire" (AEEQ) was developed, covering all the important teaching elements of Higher Education Institutions. The students' opinion was captured and analyzed through a six sigma analytical tool using a Poisson distribution model. From the non-conformance level captured through the students' responses about the various categories of teaching and learning elements, the corresponding sigma rating for each teaching element was measured. Accordingly, a six-point quality rating system was developed, customized to each sigma value. This study brings a new, innovative, student-driven quality rating system to Higher Education Institutions in India.

  4. Refractive errors in children and adolescents in Bucaramanga (Colombia).

    Science.gov (United States)

    Galvis, Virgilio; Tello, Alejandro; Otero, Johanna; Serrano, Andrés A; Gómez, Luz María; Castellanos, Yuly

    2017-01-01

    The aim of this study was to establish the frequency of refractive errors in children and adolescents aged between 8 and 17 years old, living in the metropolitan area of Bucaramanga (Colombia). This study was a secondary analysis of two descriptive cross-sectional studies that applied sociodemographic surveys and assessed visual acuity and refraction. Ametropias were classified as myopic errors, hyperopic errors, and mixed astigmatism. Eyes were considered emmetropic if none of these classifications were made. The data were collated using free software and analyzed with STATA/IC 11.2. One thousand two hundred twenty-eight individuals were included in this study. Girls showed a higher rate of ametropia than boys. Hyperopic refractive errors were present in 23.1% of the subjects, and myopic errors in 11.2%. Only 0.2% of the eyes had high myopia (≤-6.00 D). Mixed astigmatism and anisometropia were uncommon, and myopia frequency increased with age. Keratometric readings were significantly steeper in myopic than in hyperopic eyes. The overall frequency of refractive errors we found (36.7%) is moderate compared with global data. The rates and parameters statistically differed by sex and age groups. Our findings are useful for establishing refractive error rate benchmarks in low-middle-income countries and as a baseline for following their variation by sociodemographic factors.
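
The ametropia classes used above can be illustrated in terms of the spherical equivalent (SE = sphere + cylinder/2). A sketch using the commonly cited ±0.50 D cut-offs and the abstract's ≤ -6.00 D definition of high myopia; the study's exact criteria may differ:

```python
def spherical_equivalent(sphere: float, cylinder: float) -> float:
    """SE = sphere + half the cylinder, in diopters (D)."""
    return sphere + cylinder / 2.0

def classify(se: float) -> str:
    """Classify one eye by spherical equivalent.
    The +/-0.50 D thresholds are common conventions, not necessarily the study's."""
    if se <= -6.00:
        return "high myopia"
    if se <= -0.50:
        return "myopia"
    if se >= 0.50:
        return "hyperopia"
    return "emmetropia"

print(classify(spherical_equivalent(-6.50, 0.0)))   # → high myopia
print(classify(spherical_equivalent(1.00, -0.50)))  # → hyperopia
```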

  5. Refractive errors in children and adolescents in Bucaramanga (Colombia

    Directory of Open Access Journals (Sweden)

    Virgilio Galvis

    Full Text Available ABSTRACT Purpose: The aim of this study was to establish the frequency of refractive errors in children and adolescents aged between 8 and 17 years old, living in the metropolitan area of Bucaramanga (Colombia. Methods: This study was a secondary analysis of two descriptive cross-sectional studies that applied sociodemographic surveys and assessed visual acuity and refraction. Ametropias were classified as myopic errors, hyperopic errors, and mixed astigmatism. Eyes were considered emmetropic if none of these classifications were made. The data were collated using free software and analyzed with STATA/IC 11.2. Results: One thousand two hundred twenty-eight individuals were included in this study. Girls showed a higher rate of ametropia than boys. Hyperopic refractive errors were present in 23.1% of the subjects, and myopic errors in 11.2%. Only 0.2% of the eyes had high myopia (≤-6.00 D. Mixed astigmatism and anisometropia were uncommon, and myopia frequency increased with age. There were statistically significant steeper keratometric readings in myopic compared to hyperopic eyes. Conclusions: The frequency of refractive errors that we found of 36.7% is moderate compared to the global data. The rates and parameters statistically differed by sex and age groups. Our findings are useful for establishing refractive error rate benchmarks in low-middle-income countries and as a baseline for following their variation by sociodemographic factors.

  6. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  7. The Sustained Influence of an Error on Future Decision-Making.

    Science.gov (United States)

    Schiffler, Björn C; Bengtsson, Sara L; Lundqvist, Daniel

    2017-01-01

    Post-error slowing (PES) is consistently observed in decision-making tasks after negative feedback. Yet, findings are inconclusive as to whether PES supports performance accuracy. We addressed the role of PES by employing drift diffusion modeling, which enabled us to investigate latent processes of reaction times and accuracy on a large-scale dataset (>5,800 participants) of a visual search experiment with emotional face stimuli. In our experiment, post-error trials were characterized by both adaptive and non-adaptive decision processes. An adaptive increase in participants' response threshold was sustained over several trials post-error. By contrast, an initial decrease in evidence accumulation rate, followed by an increase on subsequent trials, indicated a momentary distraction of task-relevant attention and resulted in an initial accuracy drop. Higher values of decision threshold and evidence accumulation on the post-error trial were associated with higher accuracy on subsequent trials, which further gives credence to these parameters' role in post-error adaptation. Finally, the evidence accumulation rate post-error decreased when the error trial presented angry faces, a finding suggesting that the post-error decision can be influenced by the error context. In conclusion, we demonstrate that error-related response adaptations are multi-component processes that change dynamically over several trials post-error.
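
The drift diffusion model invoked above decomposes speed and accuracy into an evidence accumulation (drift) rate and a decision threshold. A toy random-walk simulation (all parameter values hypothetical) illustrating why a raised post-error threshold is adaptive, trading speed for accuracy:

```python
import random

def simulate(drift: float, threshold: float, n_trials: int = 2000,
             noise: float = 1.0, dt: float = 0.01, seed: int = 0):
    """Toy two-boundary drift diffusion: returns (accuracy, mean decision time).
    Evidence starts at 0 and drifts toward +threshold (correct response)
    or -threshold (error); parameters are illustrative only."""
    rng = random.Random(seed)
    correct, total_t = 0, 0.0
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + rng.gauss(0.0, noise * dt ** 0.5)
            t += dt
        correct += x >= threshold
        total_t += t
    return correct / n_trials, total_t / n_trials

acc_low, rt_low = simulate(drift=0.8, threshold=0.5)
acc_high, rt_high = simulate(drift=0.8, threshold=1.5)
print(acc_low, acc_high)  # higher threshold -> higher accuracy, slower responses
```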

  8. The Sustained Influence of an Error on Future Decision-Making

    Directory of Open Access Journals (Sweden)

    Björn C. Schiffler

    2017-06-01

    Full Text Available Post-error slowing (PES) is consistently observed in decision-making tasks after negative feedback. Yet, findings are inconclusive as to whether PES supports performance accuracy. We addressed the role of PES by employing drift diffusion modeling, which enabled us to investigate latent processes of reaction times and accuracy on a large-scale dataset (>5,800 participants) of a visual search experiment with emotional face stimuli. In our experiment, post-error trials were characterized by both adaptive and non-adaptive decision processes. An adaptive increase in participants’ response threshold was sustained over several trials post-error. By contrast, an initial decrease in evidence accumulation rate, followed by an increase on subsequent trials, indicated a momentary distraction of task-relevant attention and resulted in an initial accuracy drop. Higher values of decision threshold and evidence accumulation on the post-error trial were associated with higher accuracy on subsequent trials, which further gives credence to these parameters’ role in post-error adaptation. Finally, the evidence accumulation rate post-error decreased when the error trial presented angry faces, a finding suggesting that the post-error decision can be influenced by the error context. In conclusion, we demonstrate that error-related response adaptations are multi-component processes that change dynamically over several trials post-error.

  9. Estimates of rates and errors for measurements of direct-γ and direct-γ + jet production by polarized protons at RHIC

    International Nuclear Information System (INIS)

    Beddo, M.E.; Spinka, H.; Underwood, D.G.

    1992-01-01

    Studies of inclusive direct-γ production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between √s = 50 and 500 GeV. Also, rates were computed for direct-γ + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made

  10. Endometrial Scratch Injury Induces Higher Pregnancy Rate for Women With Unexplained Infertility Undergoing IUI With Ovarian Stimulation: A Randomized Controlled Trial.

    Science.gov (United States)

    Maged, Ahmed M; Al-Inany, Hesham; Salama, Khaled M; Souidan, Ibrahim I; Abo Ragab, Hesham M; Elnassery, Noura

    2016-02-01

    To explore the impact of endometrial scratch injury (ESI) on intrauterine insemination (IUI) success. One hundred fifty-four infertile women received 100 mg of oral clomiphene citrate for 5 days starting on day 3 of the menstrual cycle. Patients were randomized to 2 equal groups: Group C received IUI without ESI, and group S had ESI. Successful pregnancy was confirmed by ultrasound. Thirteen, 21, and 10 women became pregnant after the first, second, and third IUI trials, respectively, for a cumulative pregnancy rate (PR) of 28.6%. The cumulative PR was significantly higher in group S (39%) compared to group C (18.2%). The PR in group S was significantly higher than that in group C at the second and third trials. The PR was significantly higher in group S at the second trial compared to that reported in the same group at the first trial, but nonsignificantly higher compared to that reported during the third trial, while in group C the difference was nonsignificant. Eight pregnant women had a first-trimester abortion, for a total abortion rate of 18.2%, with no significant difference between the study groups. ESI significantly improves the outcome of IUI in women with unexplained infertility, especially when conducted 1 month prior to IUI. © The Author(s) 2015.
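
A difference in pregnancy rates such as 39% versus 18.2% can be checked with a pooled two-proportion z-test. A sketch assuming 77 women per arm (inferred from 154 participants randomized to 2 equal groups); this is an illustration, not the authors' analysis:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(success1: int, n1: int, success2: int, n2: int):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = success1 / n1, success2 / n2
    pooled = (success1 + success2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Roughly 30/77 (~39%) pregnancies with ESI vs 14/77 (~18.2%) without:
z, p = two_proportion_z(30, 77, 14, 77)
print(round(z, 2), round(p, 4))  # z ≈ 2.85, p well below 0.05
```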

  11. Teamwork and clinical error reporting among nurses in Korean hospitals.

    Science.gov (United States)

    Hwang, Jee-In; Ahn, Jeonghoon

    2015-03-01

    To examine levels of teamwork and its relationships with clinical error reporting among Korean hospital nurses. The study employed a cross-sectional survey design. We distributed a questionnaire to 674 nurses in two teaching hospitals in Korea. The questionnaire included items on teamwork and the reporting of clinical errors. We measured teamwork using the Teamwork Perceptions Questionnaire, which has five subscales: team structure, leadership, situation monitoring, mutual support, and communication. Using logistic regression analysis, we determined the relationships between teamwork and error reporting. The response rate was 85.5%. The mean score of teamwork was 3.5 out of 5. At the subscale level, mutual support was rated highest, while leadership was rated lowest. Of the participating nurses, 522 responded that they had experienced at least one clinical error in the last 6 months. Among those, only 53.0% responded that they always or usually reported clinical errors to their managers and/or the patient safety department. Teamwork was significantly associated with better error reporting. Specifically, nurses with a higher team communication score were more likely to report clinical errors to their managers and the patient safety department (odds ratio = 1.82, 95% confidence interval [1.05, 3.14]). Teamwork was rated as moderate and was positively associated with nurses' error reporting performance. Hospital executives and nurse managers should make substantial efforts to enhance teamwork, which will contribute to encouraging the reporting of errors and improving patient safety. Copyright © 2015. Published by Elsevier B.V.

  12. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes must be minimized throughout the total testing process.

  13. Being born under adverse economic conditions leads to a higher cardiovascular mortality rate later in life: evidence based on individuals born at different stages of the business cycle.

    Science.gov (United States)

    van den Berg, Gerard J; Doblhammer-Reiter, Gabriele; Christensen, Kaare

    2011-05-01

    We connect the recent medical and economic literatures on the long-run effects of early-life conditions by analyzing the effects of economic conditions on the individual cardiovascular (CV) mortality rate later in life, using individual data records from the Danish Twin Registry covering births since the 1870s and including the cause of death. To capture exogenous variation of conditions early in life, we use the state of the business cycle around birth. We find significant negative effects of economic conditions around birth on the individual CV mortality rate at higher ages. There is no effect on the cancer-specific mortality rate. From variation within and between monozygotic and dizygotic twin pairs born under different conditions, we conclude that the fate of an individual is more strongly determined by genetic and household-environmental factors if early-life conditions are poor. Individual-specific qualities come more to fruition if the starting position in life is better.

  14. Higher Prevalence and Awareness, but Lower Control Rate of Hypertension in Patients with Diabetes than General Population: The Fifth Korean National Health and Nutrition Examination Survey in 2011

    Directory of Open Access Journals (Sweden)

    Seung-Hyun Ko

    2014-02-01

    Full Text Available Background: We investigated the prevalence, awareness, treatment, and control rate of hypertension in Korean adults with diabetes using nationally representative data. Methods: Using data of 5,105 adults from the fifth Korea National Health and Nutrition Examination Survey in 2011 (4,389 non-diabetes mellitus [non-DM], 242 newly diagnosed with DM [new-DM], and 474 previously diagnosed with DM [known-DM]), we analyzed the prevalence of hypertension (mean systolic blood pressure ≥140 mm Hg, diastolic blood pressure ≥90 mm Hg, or use of antihypertensive medication) and the control rate of hypertension (blood pressure [BP] <130/80 mm Hg). Results: The prevalence of hypertension in diabetic adults was 54.6% (44.4% in new-DM and 62.6% in known-DM; P<0.0001 and P<0.0001, respectively) compared with non-DM adults (26.2%). Compared to non-DM, awareness (85.7%, P<0.001) and treatment (97.0%, P=0.020) rates were higher in known-DM, whereas no differences were found between new-DM and non-DM. The control rate among all hypertensive subjects was lower in new-DM (14.9%) compared to non-DM (35.1%, P<0.001) and known-DM (33.3%, P=0.004). The control rate among treated subjects was also lower in new-DM (25.2%) compared to non-DM (68.4%, P<0.0001) and known-DM (39.9%, P<0.0001). Conclusion: The higher prevalence and low control rate of hypertension in adults with diabetes suggest that stringent efforts are needed to control BP in patients with diabetes, particularly in newly diagnosed diabetic patients.

  15. Higher success rate with transcranial electrical stimulation of motor-evoked potentials using constant-voltage stimulation compared with constant-current stimulation in patients undergoing spinal surgery.

    Science.gov (United States)

    Shigematsu, Hideki; Kawaguchi, Masahiko; Hayashi, Hironobu; Takatani, Tsunenori; Iwata, Eiichiro; Tanaka, Masato; Okuda, Akinori; Morimoto, Yasuhiko; Masuda, Keisuke; Tanaka, Yuu; Tanaka, Yasuhito

    2017-10-01

    During spine surgery, the spinal cord is electrophysiologically monitored via transcranial electrical stimulation of motor-evoked potentials (TES-MEPs) to prevent injury. Transcranial electrical stimulation of motor-evoked potential involves the use of either constant-current or constant-voltage stimulation; however, there are few comparative data available regarding their ability to adequately elicit compound motor action potentials. We hypothesized that the success rates of TES-MEP recordings would be similar between constant-current and constant-voltage stimulations in patients undergoing spine surgery. The objective of this study was to compare the success rates of TES-MEP recordings between constant-current and constant-voltage stimulation. This is a prospective, within-subject study. Data from 100 patients undergoing spinal surgery at the cervical, thoracic, or lumbar level were analyzed. The success rates of the TES-MEP recordings from each muscle were examined. Transcranial electrical stimulation with constant-current and constant-voltage stimulations at the C3 and C4 electrode positions (international "10-20" system) was applied to each patient. Compound muscle action potentials were bilaterally recorded from the abductor pollicis brevis (APB), deltoid (Del), abductor hallucis (AH), tibialis anterior (TA), gastrocnemius (GC), and quadriceps (Quad) muscles. The success rates of the TES-MEP recordings from the right Del, right APB, bilateral Quad, right TA, right GC, and bilateral AH muscles were significantly higher using constant-voltage stimulation than those using constant-current stimulation. The overall success rates with constant-voltage and constant-current stimulations were 86.3% and 68.8%, respectively (risk ratio 1.25 [95% confidence interval: 1.20-1.31]). The success rates of TES-MEP recordings were higher using constant-voltage stimulation compared with constant-current stimulation in patients undergoing spinal surgery. Copyright © 2017

  16. Improving the ablation efficiency of excimer laser systems with higher repetition rates through enhanced debris removal and optimized spot pattern.

    Science.gov (United States)

    Arba-Mosquera, Samuel; Klinner, Thomas

    2014-03-01

    To evaluate the reasons for the required increased radiant exposure for higher-repetition-rate excimer lasers and determine experimentally possible compensations to achieve equivalent ablation profiles maintaining the same single-pulse energies and radiant exposures for laser repetition rates ranging from 430 to 1000 Hz. Schwind eye-tech-solutions GmbH and Co. KG, Kleinostheim, Germany. Experimental study. Poly(methyl methacrylate) (PMMA) plates were photoablated. The pulse laser energy was maintained during all experiments; the effects of the flow of the debris removal, the shot pattern for the correction, and precooling the PMMA plates were evaluated in terms of achieved ablation versus repetition rate. The mean ablation performance ranged from 88% to 100%; the variability between the profile measurements ranged from 1.4% to 6.2%. Increasing the laser repetition rate from 430 Hz to 1000 Hz reduced the mean ablation performance from 98% to 91% and worsened the variability from 1.9% to 4.3%. Increasing the flow of the debris removal, precooling the PMMA plates to -18°C, and adapting the shot pattern for the thermal response of PMMA to excimer ablation helped stabilize the variability. Only adapting the shot pattern for the thermal response of PMMA to excimer ablation helped stabilize the mean ablation performance. The ablation performance of higher-repetition-rate excimer lasers on PMMA improved with improvements in the debris removal systems and shot pattern. More powerful debris removal systems and smart shot patterns in terms of thermal response improved the performance of these excimer lasers. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  17. Older Adults With a Combination of Vision and Hearing Impairment Experience Higher Rates of Cognitive Impairment, Functional Dependence, and Worse Outcomes Across a Set of Quality Indicators.

    Science.gov (United States)

    Davidson, Jacob G S; Guthrie, Dawn M

    2017-08-01

    Hearing and vision impairment were examined across several health-related outcomes and across a set of quality indicators (QIs) in home care clients with both vision and hearing loss (or dual sensory impairment [DSI]). Data collected using the Resident Assessment Instrument for Home Care (RAI-HC) were analyzed in a sample of older home care clients. The QIs represent the proportion of clients experiencing negative outcomes (e.g., falls, social isolation). The average age of clients was 82.8 years (SD = 7.9), 20.5% had DSI, and 8.5% had a diagnosis of Alzheimer's disease (AD). Clients with DSI were more likely to have a diagnosis of dementia (not AD), have functional impairments, report loneliness, and have higher rates across 20 of the 22 QIs, including communication difficulty and cognitive decline. Clients with highly impaired hearing and any visual impairment had the highest QI rates. Individuals with DSI experience higher rates of adverse events across many health-related outcomes and QIs. Understanding the unique contribution of hearing and vision in this group can promote optimal quality of care.

  18. Central nervous system tumours among adolescents and young adults (15-39 years) in Southern and Eastern Europe: Registration improvements reveal higher incidence rates compared to the US.

    Science.gov (United States)

    Georgakis, Marios K; Panagopoulou, Paraskevi; Papathoma, Paraskevi; Tragiannidis, Athanasios; Ryzhov, Anton; Zivkovic-Perisic, Snezana; Eser, Sultan; Taraszkiewicz, Łukasz; Sekerija, Mario; Žagar, Tina; Antunes, Luis; Zborovskaya, Anna; Bastos, Joana; Florea, Margareta; Coza, Daniela; Demetriou, Anna; Agius, Domenic; Strahinja, Rajko M; Sfakianos, Georgios; Nikas, Ioannis; Kosmidis, Sofia; Razis, Evangelia; Pourtsidis, Apostolos; Kantzanou, Maria; Dessypris, Nick; Petridou, Eleni Th

    2017-11-01

    To present incidence of central nervous system (CNS) tumours among adolescents and young adults (AYAs; 15-39 years) derived from registries of Southern and Eastern Europe (SEE) in comparison to the Surveillance, Epidemiology and End Results (SEER) program, US, and to explore changes due to etiological parameters or registration improvement by evaluating time trends. Diagnoses of 11,438 incident malignant CNS tumours in AYAs (1990-2014) were retrieved from 14 collaborating SEE cancer registries and 13,573 from the publicly available SEER database (1990-2012). Age-adjusted incidence rates (AIRs) were calculated; Poisson and joinpoint regression analyses were performed for temporal trends. The overall AIR of malignant CNS tumours among AYAs was higher in SEE (28.1/million) compared to SEER (24.7/million). Astrocytomas comprised almost half of the cases in both regions, albeit with a higher proportion of unspecified cases in SEE registries (30% versus 2.5% in SEER). Age and gender distributions were similar across SEE and SEER, with a male-to-female ratio of 1.3 and an overall increase of incidence by age. Increasing temporal trends in incidence were documented in four SEE registries (Greater Poland, Portugal North, Turkey-Izmir and Ukraine) versus an annual decrease in Croatia (-2.5%) and a rather stable rate in SEER (-0.3%). This first report on the descriptive epidemiology of malignant CNS tumours in AYAs in the SEE area shows higher incidence rates as compared to the United States of America and variable temporal trends that may be linked to registration improvements. Hence, it emphasises the need for optimisation of cancer registration processes, as to enable the in-depth evaluation of the observed patterns by disease subtype. Copyright © 2017 Elsevier Ltd. All rights reserved.
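
The age-adjusted incidence rates (AIRs) reported above are conventionally obtained by direct standardization: age-specific rates weighted by a standard population. A minimal sketch with hypothetical numbers (the counts and standard weights below are illustrative, not registry data):

```python
def age_adjusted_rate(cases, person_years, std_weights, per=1_000_000):
    """Directly standardized incidence rate per `per` person-years.
    `cases` and `person_years` are per age group; `std_weights` sum to 1."""
    assert len(cases) == len(person_years) == len(std_weights)
    return per * sum(w * c / py for c, py, w in zip(cases, person_years, std_weights))

# Hypothetical two-age-group example:
cases        = [15, 45]
person_years = [600_000, 1_200_000]
weights      = [0.6, 0.4]  # assumed standard-population weights
print(round(age_adjusted_rate(cases, person_years, weights), 1))  # → 30.0
```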

  19. Enhancing Brain Lesions during Acute Optic Neuritis and/or Longitudinally Extensive Transverse Myelitis May Portend a Higher Relapse Rate in Neuromyelitis Optica Spectrum Disorders.

    Science.gov (United States)

    Orman, G; Wang, K Y; Pekcevik, Y; Thompson, C B; Mealy, M; Levy, M; Izbudak, I

    2017-05-01

    Neuromyelitis optica spectrum disorders are inflammatory demyelinating disorders with optic neuritis and/or longitudinally extensive transverse myelitis episodes. We now know that neuromyelitis optica spectrum disorders are associated with antibodies to aquaporin-4, which are highly concentrated on astrocytic end-feet at the blood-brain barrier. Immune-mediated disruption of the blood-brain barrier may manifest as contrast enhancement on brain MR imaging. We aimed to delineate the extent and frequency of contrast enhancement on brain MR imaging within 1 month of optic neuritis and/or longitudinally extensive transverse myelitis attacks and to correlate contrast enhancement with outcome measures. Brain MRIs of patients with neuromyelitis optica spectrum disorders were evaluated for patterns of contrast enhancement (periependymal, cloudlike, leptomeningeal, and so forth). The Fisher exact test was used to evaluate differences between the proportion of contrast enhancement in patients who were seropositive and seronegative for aquaporin-4 antibodies. The Mann-Whitney test was used to compare the annualized relapse rate and disease duration between patients with and without contrast enhancement and with and without seropositivity. Brain MRIs of 77 patients were evaluated; 59 patients (10 males, 49 females) were scanned within 1 month of optic neuritis and/or longitudinally extensive transverse myelitis attacks and were included in the analysis. Forty-eight patients were seropositive, 9 were seronegative, and 2 were not tested for aquaporin-4 antibodies. Having brain contrast enhancement of any type during an acute attack was significantly associated with higher annualized relapse rates (P = .03) and marginally associated with shorter disease duration (P = .05). Having periependymal contrast enhancement was significantly associated with higher annualized relapse rates (P = .03). Brain MRIs of patients with neuromyelitis optica spectrum disorders showing contrast enhancement during acute attacks may therefore portend a higher relapse rate.

  20. Higher tacrolimus trough levels on days 2-5 post-renal transplant are associated with reduced rates of acute rejection.

    LENUS (Irish Health Repository)

    O'Seaghdha, C M

    2011-04-06

    We analyzed the association between whole-blood trough tacrolimus (TAC) levels in the first days post-kidney transplant and acute cellular rejection (ACR) rates. Four hundred and sixty-four consecutive, deceased-donor kidney transplant recipients were included. All were treated with a combination of TAC, mycophenolate mofetil and prednisolone. Patients were analyzed in four groups based on quartiles of the mean TAC on days 2 and 5 post-transplant: Group 1: median TAC 11 ng/mL (n = 122, range 2-13.5 ng/mL), Group 2: median 17 ng/mL (n = 123, range 14-20 ng/mL), Group 3: median 24 ng/mL (n = 108, range 20.5-27 ng/mL) and Group 4: median 33.5 ng/mL (n = 116, range 27.5-77.5 ng/mL). A graded reduction in the rates of ACR was observed for each incremental days 2-5 TAC. The one-yr ACR rate was 24.03% (95% CI 17.26-32.88), 22.20% (95% CI 15.78-30.70), 13.41% (95% CI 8.15-21.63) and 8.69% (95% CI 4.77-15.55) for Groups 1-4, respectively (p = 0.003). This study suggests that higher early TACs are associated with reduced rates of ACR at one yr.

  1. Improved Landau gauge fixing and discretisation errors

    International Nuclear Information System (INIS)

    Bonnet, F.D.R.; Bowman, P.O.; Leinweber, D.B.; Richards, D.G.; Williams, A.G.

    2000-01-01

    Lattice discretisation errors in the Landau gauge condition are examined. An improved gauge fixing algorithm in which O(a²) errors are removed is presented. O(a²) improvement of the gauge fixing condition displays the secondary benefit of reducing the size of higher-order errors. These results emphasise the importance of implementing an improved gauge fixing condition

  2. Medical errors in hospitalized pediatric trauma patients with chronic health conditions

    Directory of Open Access Journals (Sweden)

    Xiaotong Liu

    2014-01-01

    Full Text Available Objective: This study compares medical errors in pediatric trauma patients with and without chronic conditions. Methods: The 2009 Kids’ Inpatient Database, which included 123,303 trauma discharges, was analyzed. Medical errors were identified by International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis codes. The medical error rates per 100 discharges and per 1000 hospital days were calculated and compared between inpatients with and without chronic conditions. Results: Pediatric trauma patients with chronic conditions experienced a higher medical error rate compared with patients without chronic conditions: 4.04 (95% confidence interval: 3.75–4.33) versus 1.07 (95% confidence interval: 0.98–1.16) per 100 discharges. The rate of medical error differed by type of chronic condition. After controlling for confounding factors, the presence of a chronic condition increased the adjusted odds ratio of medical error by 37% if one chronic condition existed (adjusted odds ratio: 1.37, 95% confidence interval: 1.21–1.5), and 69% if more than one chronic condition existed (adjusted odds ratio: 1.69, 95% confidence interval: 1.48–1.53). In the adjusted model, length of stay had the strongest association with medical error, but the adjusted odds ratio for chronic conditions and medical error remained significantly elevated even when accounting for the length of stay, suggesting that medical complexity has a role in medical error. Higher adjusted odds ratios were seen in other subgroups. Conclusion: Chronic conditions are associated with a significantly higher rate of medical errors in pediatric trauma patients. Future research should evaluate interventions or guidelines for reducing the risk of medical errors in pediatric trauma patients with chronic conditions.
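
Rates per 100 discharges and odds ratios like those above can be computed directly from counts. A sketch with illustrative numbers (using a Woolf-type confidence interval for the odds ratio; the counts are not the study's):

```python
from math import exp, log, sqrt

def rate_per_100(events: int, discharges: int) -> float:
    """Event rate per 100 discharges."""
    return 100.0 * events / discharges

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio for a 2x2 table with a Woolf (log-scale) 95% CI.
    a = exposed with event, b = exposed without; c, d = unexposed."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts shaped like the abstract's comparison
# (chronic condition vs. not; numbers are illustrative):
print(round(rate_per_100(404, 10_000), 2))  # → 4.04
print(tuple(round(x, 2) for x in odds_ratio_ci(120, 2880, 300, 26_700)))
```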

  3. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  4. Interferon-free treatment for patients with chronic hepatitis C and autoimmune liver disease: higher SVR rates with special precautions for deterioration of autoimmune hepatitis.

    Science.gov (United States)

    Kanda, Tatsuo; Yasui, Shin; Nakamura, Masato; Nakamoto, Shingo; Takahashi, Koji; Wu, Shuang; Sasaki, Reina; Haga, Yuki; Ogasawara, Sadahisa; Saito, Tomoko; Kobayashi, Kazufumi; Kiyono, Soichiro; Ooka, Yoshihiko; Suzuki, Eiichiro; Chiba, Tetsuhiro; Maruyama, Hitoshi; Imazeki, Fumio; Moriyama, Mitsuhiko; Kato, Naoya

    2018-02-20

    Interferon-free treatment can achieve higher sustained virological response (SVR) rates, even in patients in whom hepatitis C virus (HCV) could not be eradicated in the interferon treatment era. Immune restoration in the liver is occasionally associated with HCV infection. We examined the safety and effects of interferon-free regimens in HCV patients with autoimmune liver diseases. In total, 12 patients with HCV and autoimmune liver disease [7 with autoimmune hepatitis (AIH) and 5 with primary biliary cholangitis (PBC)] who were treated with interferon-free regimens were retrospectively analyzed. All 7 HCV patients with AIH completed treatment and achieved SVR. Three patients took prednisolone (PSL) at baseline, and 3 did not take PSL during interferon-free treatment. In one HCV patient with AIH and cirrhosis, PSL was not administered at baseline, but she needed to take 40 mg/day PSL at week 8 for liver dysfunction. She also complained of back pain and was diagnosed with vasospastic angina by coronary angiography at week 11. However, she completed interferon-free treatment. All 5 HCV patients with PBC completed treatment and achieved SVR. Three of these patients were treated with UDCA during interferon-free treatment. Interferon-free regimens can thus achieve higher SVR rates in HCV patients with autoimmune liver diseases. Because interferon-free treatment for HCV may affect hepatic immunity and the activity of the autoimmune liver disease, careful attention should be paid to unexpected adverse events during treatment.

  5. Leflunomide is associated with a higher flare rate compared to methotrexate in the treatment of chronic uveitis in juvenile idiopathic arthritis.

    Science.gov (United States)

    Bichler, J; Benseler, S M; Krumrey-Langkammerer, M; Haas, J-P; Hügle, B

    2015-01-01

    Chronic anterior uveitis is a serious complication of juvenile idiopathic arthritis (JIA); disease flares are highly associated with loss of vision. Leflunomide (LEF) is used successfully for JIA joint disease but its effectiveness in uveitis has not been determined. The aim of this study was to determine whether LEF improves flare rates of uveitis in JIA patients compared to preceding methotrexate (MTX) therapy. A single-centre retrospective study of consecutive children with JIA and chronic anterior uveitis was performed. All children initially received MTX and were then switched to LEF. Demographic, clinical, and laboratory data, dose and duration of MTX and LEF therapy, concomitant medications and rate of anterior uveitis flares, as determined by an expert ophthalmologist, were obtained. Flare rates were compared using a generalized linear mixed model with a negative binomial distribution. A total of 15 children were included (80% females, all antinuclear antibody positive). The median duration of MTX therapy was 51 (range 26-167) months; LEF was given for a median of 12 (range 4-47) months. Anti-tumour necrosis factor (anti-TNF-α) co-medication was given to four children while on MTX. By contrast, LEF was combined with anti-TNF-α treatment in six children. On MTX, JIA patients showed a uveitis flare rate of 0.0247 flares/month, while LEF treatment was associated with a significantly higher flare rate of 0.0607 flares/month (p = 0.008). Children with JIA had significantly more uveitis flares on LEF compared to MTX despite receiving anti-TNF-α co-medication more frequently. Therefore, LEF may need to be considered less effective in controlling chronic anterior uveitis.
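Flare rates of this kind are events per unit of person-time. A minimal sketch of the rate comparison, with hypothetical flare counts and therapy-months chosen only to reproduce the reported per-month rates (the true totals are not stated in the abstract):

```python
def flare_rate(flares, months):
    """Uveitis flares per month of observed therapy time."""
    return flares / months

# Hypothetical totals consistent with the reported rates.
mtx_rate = flare_rate(19, 769)    # MTX: about 0.0247 flares/month
lef_rate = flare_rate(14, 231)    # LEF: about 0.0606 flares/month
rate_ratio = lef_rate / mtx_rate  # flares roughly 2.5x as frequent on LEF
```

The study itself compared these rates with a generalized linear mixed model using a negative binomial distribution, which additionally accounts for per-patient correlation and overdispersion; the sketch shows only the underlying rate arithmetic.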

  6. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  7. The effectiveness of risk management program on pediatric nurses' medication error.

    Science.gov (United States)

    Dehghan-Nayeri, Nahid; Bayat, Fariba; Salehi, Tahmineh; Faghihzadeh, Soghrat

    2013-09-01

    Medication therapy is one of the most complex and high-risk clinical processes that nurses deal with. Medication error is the most common type of error that brings about damage and death to patients, especially pediatric ones. However, these errors are preventable. Identifying and preventing undesirable events leading to medication errors are the main risk management activities. The aim of this study was to investigate the effectiveness of a risk management program on the pediatric nurses' medication error rate. This study is a quasi-experimental one with a comparison group. In this study, 200 nurses were recruited from two main pediatric hospitals in Tehran. In the experimental hospital, we applied the risk management program for a period of 6 months. Nurses of the control hospital followed the hospital routine schedule. A pre- and post-test was performed to measure the frequency of medication error events. SPSS software, t-test, and regression analysis were used for data analysis. After the intervention, the medication error rate of nurses at the experimental hospital was significantly lower, and their error-reporting rate significantly higher, than at the control hospital. In the high-risk medical environment, applying quality-control programs such as risk management can effectively prevent the occurrence of undesirable hospital events. Nursing managers can reduce the medication error rate by applying risk management programs. However, such a program cannot succeed without nurses' cooperation.

  8. Ways to be different: Foraging adaptations that facilitate higher intake rates in a northerly wintering shorebird compared with a low-latitude conspecific

    Science.gov (United States)

    Ruthrauff, Daniel R.; Dekinga, Anne; Gill, Robert E.; van Gils, Jan A.; Piersma, Theunis

    2015-01-01

    At what phenotypic level do closely related subspecies that live in different environments differ with respect to food detection, ingestion and processing? This question motivated an experimental study on rock sandpipers (Calidris ptilocnemis). The species' nonbreeding range spans 20 deg of latitude, the extremes of which are inhabited by two subspecies: C. p. ptilocnemis that winters primarily in upper Cook Inlet, Alaska (61°N) and C. p. tschuktschorum that overlaps slightly with C. p. ptilocnemis but whose range extends much farther south (∼40°N). In view of the strongly contrasting energetic demands of their distinct nonbreeding distributions, we conducted experiments to assess the behavioral, physiological and sensory aspects of foraging and we used the bivalve Macoma balthica for all trials. C. p. ptilocnemis consumed a wider range of prey sizes, had higher maximum rates of energy intake, processed shell waste at higher maximum rates and handled prey more quickly. Notably, however, the two subspecies did not differ in their abilities to find buried prey. The subspecies were similar in size and had equally sized gizzards, but the more northern ptilocnemis individuals were 10–14% heavier than their same-sex tschuktschorum counterparts. The higher body mass in ptilocnemis probably resulted from hypertrophy of digestive organs (e.g. intestine, liver) related to digestion and nutrient assimilation. Given the previously established equality of the metabolic capacities of the two subspecies, we propose that the high-latitude nonbreeding range of ptilocnemis rock sandpipers is primarily facilitated by digestive (i.e. physiological) aspects of their foraging ecology rather than behavioral or sensory aspects.

  9. Core-needle biopsy of breast cancer is associated with a higher rate of distant metastases 5 to 15 years after diagnosis than FNA biopsy.

    Science.gov (United States)

    Sennerstam, Roland B; Franzén, Bo S H; Wiksell, Hans O T; Auer, Gert U

    2017-10-01

    The literature offers discordant results regarding whether diagnostic biopsy is associated with the dissemination of cancer cells, resulting in local and/or distant metastasis. The long-term outcomes of patients with breast cancer were compared between those who were diagnosed using either fine-needle aspiration biopsy (FNAB) or core-needle biopsy (CNB) during 2 decades: the 1970s and 1990s. In the 1970s, the only diagnostic needle biopsy method used for breast cancer in Sweden was FNAB. CNB was introduced 1989 and became established in Stockholm Gotland County in the early 1990s. The authors compared the clinical outcomes of patients diagnosed using FNAB from 1971 to 1976 (n = 354) versus those of patients diagnosed using CNB from 1991 to 1995 (n = 1729). Adjusting for differences in various treatment modalities, mammography screening, tumor size, DNA ploidy, and patient age between the 2 decades, 2 strictly matched samples representing FNAB (n = 181) and CNB (n = 203) were selected for a 15-year follow-up study. In a comparison of the rates of distant metastasis in the strictly matched patient groups from the FNAB and CNB cohorts, significantly higher rates of late-appearing (5-15 years after diagnosis) distant metastasis were observed among the patients who were diagnosed on CNB compared with those who were diagnosed on FNAB. No significant difference in local metastasis was observed between the 2 groups. At 5 to 15 years after diagnosis of the primary tumor, CNB-diagnosed patients had significantly higher rates of distant metastases than FNAB-diagnosed patients. Cancer Cytopathol 2017;125:748-56. © 2017 American Cancer Society.

  10. Effect of 60Co γ-irradiation on germination rate of corms and selection of a higher temperature-tolerance mutant 'zf893' of Crocus sativus L

    International Nuclear Information System (INIS)

    Zhang Qiaosheng; Wang Zhiping; Pan Jianyong; Li Xuebing; Xu Bujin; Zou Fenglian; Lu Gang

    2009-01-01

    The effects of 60Co γ-ray irradiation on the germination rate of corms of saffron (Crocus sativus L.) were studied, and the results were as follows: (1) The corm germination rate rose as the corm fresh weight M (g) increased. (2) A typical radiation dose effect on germination rate was obtained, and the semi-lethal dose (D50) was determined for the irradiated corms. (3) The stigma productivity of the selected mutant ZF893 was 1.3 times that of the parent, and the field growing period, total weight of daughter corms and flower numbers of ZF893 were 22%, 27% and 30% higher than those of the parent, respectively. (4) The soluble protein SDS-PAGE patterns of ZF893 and its parent were very similar, but the 54.8 kD bands were much stronger in ZF893, and the 20.9 kD bands, which were clear in ZF893, were nearly absent in its parent. (authors)

  11. Mechanistic dissimilarities between environmentally-influenced fatigue-crack propagation at near-threshold and higher growth rates in lower-strength steels

    Energy Technology Data Exchange (ETDEWEB)

    Suresh, S.; Ritchie, R. O.

    1981-11-01

    The role of hydrogen gas in influencing fatigue crack propagation is examined for several classes of lower strength pressure vessel and piping steels. Based on measurements over a wide range of growth rates from 10⁻⁸ to 10⁻² mm/cycle, crack propagation rates are found to be significantly higher in dehumidified gaseous hydrogen compared to moist air in two distinct regimes of crack growth, namely (i) at the intermediate range of growth typically above approx. 10⁻⁵ mm/cycle, and (ii) at the near-threshold region below approx. 10⁻⁶ mm/cycle approaching lattice dimensions per cycle. Both effects are seen at maximum stress intensities (K_max) far below the sustained-load threshold stress intensity for hydrogen-assisted cracking (K_Iscc). Characteristics of environmentally influenced fatigue crack growth in each regime are shown to be markedly different with regard to fractography and the effect of such variables as load ratio and frequency. It is concluded that the primary mechanisms responsible for the influence of the environment in each regime are distinctly different. Whereas corrosion fatigue behavior at intermediate growth rates can be attributed to hydrogen embrittlement processes, the primary role of moist environments at near-threshold levels is shown to involve a contribution from enhanced crack closure due to the formation of crack surface corrosion deposits at low load ratios.

  12. Naming game with learning errors in communications

    OpenAIRE

    Lou, Yang; Chen, Guanrong

    2014-01-01

    Naming game simulates the process of naming an object by a population of agents organized in a certain communication network topology. By pair-wise iterative interactions, the population reaches a consensus state asymptotically. In this paper, we study naming game with communication errors during pair-wise conversations, where errors are represented by error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed....
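The dynamics sketched in this abstract can be illustrated with a minimal naming game on a fully connected population. The error mechanism below (the hearer occasionally mis-learns a transmitted name as a brand-new one) is an illustrative stand-in for the paper's uniform error-rate model, not its exact formulation:

```python
import random

def naming_game(n_agents=10, steps=5000, error_rate=0.0, seed=7):
    """Minimal naming game on a fully connected population.
    Each step a speaker transmits a name; with probability error_rate
    the hearer mis-learns it as a fresh name (a learning error)."""
    rng = random.Random(seed)
    vocab = [[] for _ in range(n_agents)]
    fresh = 0  # counter used to mint new names
    for _ in range(steps):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not vocab[speaker]:            # empty speaker invents a name
            vocab[speaker].append(fresh)
            fresh += 1
        word = rng.choice(vocab[speaker])
        if rng.random() < error_rate:     # learning error corrupts the word
            word = fresh
            fresh += 1
        if word in vocab[hearer]:         # success: both collapse to word
            vocab[speaker] = [word]
            vocab[hearer] = [word]
        else:                             # failure: hearer learns the word
            vocab[hearer].append(word)
    return vocab
```

With `error_rate=0` a small population reaches consensus (every agent holding the same single name) well within a few thousand interactions; raising the error rate delays or disrupts that convergence, which is the regime the paper studies.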

  13. Geometrical Sparing Factors for the Rectum and Bladder in the Prediction of Grade 2 and Higher Complications After High-Dose-Rate Brachytherapy for Cervical Cancer

    International Nuclear Information System (INIS)

    Chen, S.-W.; Liang, J.-A.; Hung, Y.-C.; Yeh, L.-S.; Chang, W.-C.; Yang, S.-N.; Lin, F.-J.

    2009-01-01

    Purpose: This study aimed to assess the predictive values of geometrical sparing factors for the rectum and bladder in high-dose-rate intracavitary brachytherapy (HDRICB) for Grade 2 and higher late sequelae in patients with cervical cancer. Methods: A total of 392 patients were enrolled in this study. They were treated with external beam radiotherapy to the pelvis, after which HDRICB was performed using Ir-192 remote after-loading at 1-week intervals for three or four sessions. The geometrical sparing factor (GSF) was defined as the average of the ratios between the reference doses and the Point A dose. Results: A total of 46 patients (11.7%) had Grade 2 or higher late rectal complications (36 Grade 2, 9 Grade 3, and 1 Grade 4). In all, 32 patients (8.2%) had Grade 2 or higher late bladder complications (14 Grade 2, 16 Grade 3, and 2 Grade 4). Multivariate analysis demonstrated a high risk of rectal sequelae in patients who developed bladder complications (p = 0.0004, hazard ratio 3.54) and had a rectal GSF greater than 0.7 (p = 0.01, hazard ratio 1.99). The high risk factors for bladder complications were development of rectal complications (p = 0.0004, hazard ratio 3.74), concurrent chemotherapy (p = 0.0001, relative risk 3.94), and a bladder GSF greater than 0.9 (p = 0.01, hazard ratio, 2.53). Conclusion: This study demonstrates the predictive value of GSFs in HDRICB for cervical cancer. Patients with rectal GSFs greater than 0.7 or bladder GSFs greater than 0.9 are at risk for Grade 2 and higher late sequelae.

  14. Geometrical sparing factors for the rectum and bladder in the prediction of grade 2 and higher complications after high-dose-rate brachytherapy for cervical cancer.

    Science.gov (United States)

    Chen, Shang-Wen; Liang, Ji-An; Hung, Yao-Ching; Yeh, Lian-Shung; Chang, Wei-Chun; Yang, Shih-Neng; Lin, Fang-Jen

    2009-12-01

    This study aimed to assess the predictive values of geometrical sparing factors for the rectum and bladder in high-dose-rate intracavitary brachytherapy (HDRICB) for Grade 2 and higher late sequelae in patients with cervical cancer. A total of 392 patients were enrolled in this study. They were treated with external beam radiotherapy to the pelvis, after which HDRICB was performed using Ir-192 remote after-loading at 1-week intervals for three or four sessions. The geometrical sparing factor (GSF) was defined as the average of the ratios between the reference doses and the Point A dose. A total of 46 patients (11.7%) had Grade 2 or higher late rectal complications (36 Grade 2, 9 Grade 3, and 1 Grade 4). In all, 32 patients (8.2%) had Grade 2 or higher late bladder complications (14 Grade 2, 16 Grade 3, and 2 Grade 4). Multivariate analysis demonstrated a high risk of rectal sequelae in patients who developed bladder complications (p = 0.0004, hazard ratio 3.54) and had a rectal GSF greater than 0.7 (p = 0.01, hazard ratio 1.99). The high risk factors for bladder complications were development of rectal complications (p = 0.0004, hazard ratio 3.74), concurrent chemotherapy (p = 0.0001, relative risk 3.94), and a bladder GSF greater than 0.9 (p = 0.01, hazard ratio, 2.53). This study demonstrates the predictive value of GSFs in HDRICB for cervical cancer. Patients with rectal GSFs greater than 0.7 or bladder GSFs greater than 0.9 are at risk for Grade 2 and higher late sequelae.
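The GSF defined in the methods above is simply a mean of per-session dose ratios. A minimal sketch, with hypothetical session doses chosen for illustration only:

```python
def geometrical_sparing_factor(reference_doses, point_a_doses):
    """GSF: average over HDRICB sessions of the ratio between the
    organ reference-point dose and the Point A dose."""
    ratios = [ref / pt_a for ref, pt_a in zip(reference_doses, point_a_doses)]
    return sum(ratios) / len(ratios)

# Hypothetical rectal reference doses (Gy) over three sessions,
# each prescribed to the same Point A dose of 6 Gy.
rectal_gsf = geometrical_sparing_factor([4.8, 4.5, 4.2], [6.0, 6.0, 6.0])
high_risk = rectal_gsf > 0.7  # study's cutoff for rectal sequelae risk
```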

  15. Analysis of 162 colon injuries in patients with penetrating abdominal trauma: concomitant stomach injury results in a higher rate of infection.

    Science.gov (United States)

    O'Neill, Patricia A; Kirton, Orlando C; Dresner, Lisa S; Tortella, Bartholomew; Kestner, Mark M

    2004-02-01

    Fecal contamination from colon injury has been thought to be the most significant factor for the development of surgical site infection (SSI) after trauma. However, there are increasing data to suggest that other factors may play a role in the development of postinjury infection in patients after colon injury. The purpose of this study was to determine the impact of gastric wounding on the development of SSI and nonsurgical site infection (NSSI) in patients with colon injury. Post hoc analysis was performed on data prospectively collected for 317 patients presenting with penetrating hollow viscus injury. One hundred sixty-two patients with colon injury were subdivided into one of three groups: patients with isolated colon wounds (C), patients with colon and stomach wounds with or without other organ injury (C+S), and patients with colon and other organ injury but no stomach injury (C-S) and assessed for the development of SSI and NSSI. Infection rates were also determined for patients who sustained isolated gastric injury (S) and gastric injury in combination with other injuries other than colon (S-C). Penetrating Abdominal Trauma Index, operative times, and transfusion were assessed. Discrete variables were analyzed by Cochran-Mantel-Haenszel chi2 test and Fisher's exact test. Risk factor analysis was performed by multivariate logistic regression. C+S patients had a higher rate of SSI infection (31%) than C patients (3.6%) (p=0.008) and C-S patients (13%) (p=0.021). Similarly, the incidence of NSSI was also significantly greater in the C+S group (37%) compared with the C patients (7.5%) (p=0.07) and the C-S patients (17%) (p=0.019). There was no difference in the rate of SSI or NSSI between the C and C-S groups (p=0.3 and p=0.24, respectively). The rate of SSI was significantly greater in the C+S patients when compared with the S-C patients (31% vs. 10%, p=0.008), but there was no statistical difference in the rate of NSSI in the C+S group and the S-C group (37

  16. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
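The probability ellipse the chapter describes can be computed from a 2-D error covariance matrix by eigen-decomposition. A sketch, where 5.991 is the chi-square quantile for 2 degrees of freedom at 95% (an assumed confidence level, not one fixed by the chapter):

```python
import numpy as np

def probability_ellipse(cov, chi2_quantile=5.991):
    """Semi-axes and orientation of the 2-D probability (error) ellipse.
    chi2_quantile=5.991 gives the 95% ellipse for 2 degrees of freedom."""
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues ascending
    semi_axes = np.sqrt(chi2_quantile * eigvals)    # (minor, major) semi-axes
    major_dir = eigvecs[:, 1]                       # direction of major axis
    angle = np.arctan2(major_dir[1], major_dir[0])  # radians from x-axis
    return semi_axes, angle

# Example: x-variance 4, y-variance 1, covariance 1.2 (position units^2).
axes, theta = probability_ellipse(np.array([[4.0, 1.2], [1.2, 1.0]]))
```

When the two variances are equal and the covariance is zero, both semi-axes coincide and the ellipse degenerates to the probability circle also discussed in the chapter.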

  17. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
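For uncorrelated measured values, the standard first-order propagation formula is σ_f² = Σᵢ (∂f/∂xᵢ)² σᵢ². A minimal sketch of that rule (the example values are hypothetical):

```python
import math

def propagate(partials, sigmas):
    """First-order error propagation for uncorrelated inputs:
    sigma_f = sqrt(sum_i (df/dx_i)^2 * sigma_i^2)."""
    return math.sqrt(sum((d * s) ** 2 for d, s in zip(partials, sigmas)))

# Example: f = x * y with x = 10 +/- 0.2 and y = 5 +/- 0.1.
x, sigma_x = 10.0, 0.2
y, sigma_y = 5.0, 0.1
sigma_f = propagate([y, x], [sigma_x, sigma_y])  # df/dx = y, df/dy = x
```

For a materials balance, the same formula applies with the partial derivatives of the balance equation with respect to each measured term (±1 for a purely additive balance).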

  18. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, by keeping a memory of his errors, the agent asymptotically reaches an acceptable solution under mild assumptions. Moreover, one can take advantage of big errors for faster learning.

  19. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence interval...

  20. Elevated basal progesterone levels are associated with increased preovulatory progesterone rise but not with higher pregnancy rates in ICSI cycles with GnRH antagonists.

    Science.gov (United States)

    Mutlu, Mehmet Firat; Erdem, Mehmet; Mutlu, Ilknur; Bulut, Berk; Erdem, Ahmet

    2017-09-01

    To ascertain the association between basal progesterone (P) levels and the occurrence of preovulatory progesterone rise (PPR) and clinical pregnancy rates (CPRs) in ICSI cycles with GnRH antagonists. Serum P levels of 464 patients were measured on day 2 and on the day of hCG administration. Cycles with basal P levels > 1.6 ng/mL were cancelled. All embryos were cryopreserved in cycles with P levels ≥ 2 ng/mL on the day of hCG. The primary outcome measures were the incidence of PPR (P > 1.5 ng/mL) and CPR with regard to basal P. Basal P levels were significantly higher in cycles with PPR than in those without PPR (0.63 ± 0.31 vs. 0.48 ± 0.28 ng/mL). The area under the curve for basal P, on ROC analysis to discriminate between elevated and normal P levels on the day of hCG, was 0.65 (95% CI: 0.58-0.71); the basal P cutoff discriminating between cycles with and without PPR was 0.65 ng/mL. Cycles with basal P levels above 0.65 ng/mL had a significantly higher incidence of PPR (30.9% vs. 13.5%) but similar clinical and cumulative pregnancy rates (38.8% vs. 31.1% and 41.7% vs. 32.6%, respectively) in comparison to cycles with basal P levels below 0.65 ng/mL. In multivariate regression analysis, basal P levels, the LH level on the first day of antagonist administration, and estradiol levels on the day of hCG trigger were the variables that predicted PPR. Basal P levels were associated with an increased incidence of PPR but not with CPR. Copyright © 2017 Elsevier B.V. All rights reserved.
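The reported AUC of 0.65 is equivalent to the probability that a randomly chosen cycle with PPR has a higher basal P than a randomly chosen cycle without it. A minimal Mann-Whitney sketch of that equivalence, with hypothetical basal P values:

```python
def auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs ordered correctly (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical basal P (ng/mL) for cycles with and without PPR.
with_ppr = [0.9, 0.7, 0.5]
without_ppr = [0.6, 0.4, 0.3]
area = auc(with_ppr, without_ppr)
```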

  1. PERM Error Rate Findings and Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — Federal agencies are required to annually review programs they administer and identify those that may be susceptible to significant improper payments, to estimate...

  2. Medicare FFS Jurisdiction Error Rate Contribution Data

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services CMS is dedicated to continually strengthening and improving the Medicare program, which provides vital services to...

  3. Analysis of error patterns in clinical radiotherapy

    International Nuclear Information System (INIS)

    Macklis, Roger; Meier, Tim; Barrett, Patricia; Weinhous, Martin

    1996-01-01

    Purpose: Until very recently, prescription errors and adverse treatment events have rarely been studied or reported systematically in oncology. We wished to understand the spectrum and severity of radiotherapy errors that take place on a day-to-day basis in a high-volume academic practice and to understand the resource needs and quality assurance challenges placed on a department by rapid upswings in contract-based clinical volumes requiring additional operating hours, procedures, and personnel. The goal was to define clinical benchmarks for operating safety and to detect error-prone treatment processes that might function as 'early warning' signs. Methods: A multi-tiered prospective and retrospective system for clinical error detection and classification was developed, with formal analysis of the antecedents and consequences of all deviations from prescribed treatment delivery, no matter how trivial. A department-wide record-and-verify system was operational during this period and was used as one method of treatment verification and error detection. Brachytherapy discrepancies were analyzed separately. Results: During the analysis year, over 2000 patients were treated with over 93,000 individual fields. A total of 59 errors affecting a total of 170 individual treated fields were reported or detected during this period. After review, all of these errors were classified as Level 1 (minor discrepancy with essentially no potential for negative clinical implications). This total treatment delivery error rate (170/93,332, or 0.18%) is significantly better than corresponding error rates reported for other hospital and oncology treatment services, perhaps reflecting the relatively sophisticated error avoidance and detection procedures used in modern clinical radiation oncology. Error rates were independent of linac model and manufacturer, time of day (normal operating hours versus late evening or early morning) or clinical machine volumes. There was some relationship to

  4. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software.

    Science.gov (United States)

    Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O

    2016-11-01

    Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite benefit, such software is not without limitations, and transcription errors have been widely reported. Evaluate the frequency and nature of non-clinical transcription error using VR dictation software. Retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time was collected. 67 (17.72 %) reports contained ≥1 errors, with 7 (1.85 %) containing 'significant' and 9 (2.38 %) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22 %) classified as 'insignificant', 7 (7.78 %) as 'significant', 9 (10 %) as 'very significant'. 68 (75.56 %) errors were 'spelling and grammar', 20 (22.22 %) 'missense' and 2 (2.22 %) 'nonsense'. 'Punctuation' error was most common sub-type, accounting for 27 errors (30 %). Complex imaging modalities had higher error rates per report and sentence. Computed tomography contained 0.040 errors per sentence compared to plain film with 0.030. Longer reports had a higher error rate, with reports >25 sentences containing an average of 1.23 errors per report compared to 0-5 sentences containing 0.09. These findings highlight the limitations of VR dictation software. While most error was deemed insignificant, there were occurrences of error with potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates and this should be taken into account by the reporting radiologist.
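The percentages quoted in this audit are straightforward shares of the 378 reports and 90 errors. A quick sketch that reproduces them:

```python
def pct(part, whole):
    """Percentage rounded to two decimals, as quoted in the audit."""
    return round(100 * part / whole, 2)

reports, errors = 378, 90
reports_with_errors = pct(67, reports)  # share of reports with >= 1 error
insignificant = pct(74, errors)         # 'insignificant' errors
significant = pct(7, errors)            # 'significant' errors
very_significant = pct(9, errors)       # 'very significant' errors
punctuation = pct(27, errors)           # punctuation sub-type
```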

  5. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  6. Dual processing and diagnostic errors.

    Science.gov (United States)

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reduction in error rates.

  7. [Medication error management climate and perception for system use according to construction of medication error prevention system].

    Science.gov (United States)

    Kim, Myoung Soo

    2012-08-01

    The purpose of this cross-sectional study was to examine the current status of IT-based medication error prevention system construction and the relationships among system construction, medication error management climate and perception of system use. The participants were 124 patient safety chief managers working for 124 hospitals with over 300 beds in Korea. The characteristics of the participants, construction status and perception of systems (electronic pharmacopoeia, electronic drug dosage calculation system, computer-based patient safety reporting and bar-code system) and medication error management climate were measured in this study. The data were collected between June and August 2011. Descriptive statistics, partial Pearson correlation and MANCOVA were used for data analysis. Electronic pharmacopoeias were constructed in 67.7% of participating hospitals, computer-based patient safety reporting systems in 50.8%, and electronic drug dosage calculation systems in 32.3%. Bar-code systems showed the lowest construction rate, at 16.1% of Korean hospitals. Hospitals with more IT-based medication error prevention systems in place had a more positive error management climate. Supportive strategies for improving the perception of IT-based system use, together with further system construction, would help promote a positive error management climate.

  8. What is the empirical evidence that hospitals with higher-risk adjusted mortality rates provide poorer quality care? A systematic review of the literature

    Directory of Open Access Journals (Sweden)

    Mohammed Mohammed A

    2007-06-01

    Full Text Available Abstract Background Despite increasing interest in and publication of risk-adjusted hospital mortality rates, the relationship with underlying quality of care remains unclear. We undertook a systematic review to ascertain the extent to which variations in risk-adjusted mortality rates were associated with differences in quality of care. Methods We identified studies in which risk-adjusted mortality and quality of care had been reported in more than one hospital. We adopted an iterative search strategy using three databases – Medline, HealthSTAR and CINAHL from 1966, 1975 and 1982 respectively. We identified potentially relevant studies on the basis of the title or abstract. We obtained these papers and included those which met our inclusion criteria. Results From an initial yield of 6,456 papers, 36 studies met the inclusion criteria. Several of these studies considered more than one process-versus-risk-adjusted mortality relationship. In total we found 51 such relationships in a wide range of clinical conditions, studied using a variety of methods. A positive correlation between better quality of care and lower risk-adjusted mortality was found in just over half of the relationships (26/51; 51%), but the remainder showed no correlation (16/51; 31%) or a paradoxical correlation (9/51; 18%). Conclusion The general notion that hospitals with higher risk-adjusted mortality have poorer quality of care is neither consistent nor reliable.

  9. Impact of Measurement Error on Synchrophasor Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  10. Reference Values for the Marx Activity Rating Scale in a Young Athletic Population: History of Knee Ligament Injury Is Associated With Higher Scores.

    Science.gov (United States)

    Cameron, Kenneth L; Peck, Karen Y; Thompson, Brandon S; Svoboda, Steven J; Owens, Brett D; Marshall, Stephen W

    2015-01-01

    Activity-related patient-reported outcome measures are an important component of assessment after knee ligament injury in young and physically active patients; however, normative data for most activity scales are limited. To present reference values by sex for the Marx Activity Rating Scale (MARS) within a young and physically active population while accounting for knee ligament injury history and sex. Cross-sectional study. Level 2. All incoming freshmen entering a US Service Academy in June of 2011 were recruited to participate in this study. MARS was administered to 1169 incoming freshmen (203 women) who consented to participate within the first week of matriculation. All subjects were deemed healthy and medically fit for military service on admission. Subjects also completed a baseline questionnaire that asked for basic demographic information and injury history. We calculated means with standard deviations, medians with interquartile ranges, and percentiles for ordinal and continuous variables, and frequencies and proportions for dichotomous variables. We also compared median scores by sex and history of knee ligament injury using the Kruskal-Wallis test. MARS was the primary outcome of interest. The median MARS score was significantly higher for men when compared with women (χ(2) = 13.22, df = 1); among those who reported a history of injury, there was no difference in median MARS scores between men and women (χ(2) = 0.47, df = 1, P = 0.493). Overall, median MARS scores were significantly higher among those who reported a history of knee ligament injury when compared with those who did not (χ(2) = 9.06, df = 1, P = 0.003). Assessing activity as a patient-reported outcome after knee ligament injury is important, and reference values for these instruments need to account for the influence of prior injury and sex. © 2015 The Author(s).

  11. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  12. Long-Term Results of Fixed High-Dose I-131 Treatment for Toxic Nodular Goiter: Higher Euthyroidism Rates in Geriatric Patients

    Directory of Open Access Journals (Sweden)

    Gül Ege Aktaş

    2015-10-01

    Full Text Available Objective: The geriatric patient population has special importance due to particular challenges. In addition to the increase in incidence of toxic nodular goiter (TNG) with age, it has a high incidence in regions of low-medium iodine intake such as our country. The aim of this study was to evaluate the overall outcome of high fixed dose radioiodine (RAI) therapy, and investigate the particular differences in the geriatric patient population. Methods: One hundred and three TNG patients treated with high dose I-131 (370-740 MBq) were retrospectively reviewed. The baseline characteristics; age, gender, scintigraphic patterns and thyroid function tests before and after treatment, as well as follow-up, duration of antithyroid drug (ATD) medication and achievement of euthyroid or hypothyroid state were evaluated. The patient population was divided into two groups, those ≥65 years and those younger, in order to assess the effect of age. Results: Treatment success was 90% with single dose RAI therapy. Hyperthyroidism was treated in 7±7.2 months after RAI administration. At the end of the first year, the overall hypothyroidism rate was 30% and euthyroid state was achieved in 70% of patients. Age was found to be the only statistically significant variable affecting outcome. A higher ratio of euthyroidism was achieved in the geriatric patient population. Conclusion: High fixed dose I-131 treatment should be preferred in geriatric TNG patients in order to treat persistent hyperthyroidism rapidly. The result of this study suggests that high fixed dose RAI therapy is a successful modality in treating TNG, and high rates of euthyroidism can be achieved in geriatric patients.

  13. Protocol for the PINCER trial: a cluster randomised trial comparing the effectiveness of a pharmacist-led IT-based intervention with simple feedback in reducing rates of clinically important errors in medicines management in general practices

    Directory of Open Access Journals (Sweden)

    Murray Scott A

    2009-05-01

    ineffective. Sample size: 34 practices in each of the two treatment arms would provide at least 80% power (two-tailed alpha of 0.05) to demonstrate a 50% reduction in error rates for each of the three primary outcome measures in the pharmacist-led intervention arm compared with an 11% reduction in the simple feedback arm. Discussion At the time of submission of this article, 72 general practices have been recruited (36 in each arm of the trial) and the interventions have been delivered. Analysis has not yet been undertaken. Trial registration Current controlled trials ISRCTN21785299
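    The sample-size statement above rests on a standard two-proportion power calculation. The sketch below is a simplified normal-approximation version (the function name `two_proportion_power` is my own); it deliberately ignores the cluster-design inflation (design effect) that a cluster-randomised trial such as PINCER must also apply, so it illustrates the mechanics rather than reproducing the trial's exact figures.

```python
import math
from statistics import NormalDist

def two_proportion_power(p1: float, p2: float, n_per_arm: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-tailed z-test comparing two independent proportions."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)          # critical value, e.g. 1.96 for alpha = 0.05
    se = math.sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    return nd.cdf(abs(p1 - p2) / se - z_alpha)   # probability of rejecting H0

# Doubling the number of units per arm raises the power to detect
# a drop in error rate from 50% to 25% (illustrative proportions).
print(two_proportion_power(0.5, 0.25, 34) < two_proportion_power(0.5, 0.25, 68))  # → True
```

Larger samples shrink the standard error, so the same effect size yields a larger z-statistic and higher power.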

  14. Error analysis to improve the speech recognition accuracy on ...

    Indian Academy of Sciences (India)

    dictionary plays a key role in the speech recognition accuracy. .... Sophisticated microphone is used for the recording speech corpus in a noise free environment. .... values, word error rate (WER) and error-rate will be calculated as follows:.
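    The snippet above refers to calculating word error rate (WER). As a generic illustration (not the paper's own implementation), WER is the word-level edit distance between a reference transcript and the recogniser's hypothesis, divided by the number of reference words:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("the quick brown fox", "the quick fox")` gives 0.25: one deletion against four reference words.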

  15. Prescription Errors in Psychiatry

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

    clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. .... participation in ward rounds and adverse drug.

  16. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system.

    Science.gov (United States)

    Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O

    2015-02-01

    To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. © The Author 2015. Published by Oxford University Press in association
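    The per-1000 rates with 95% confidence intervals quoted above can be reproduced with a standard normal-approximation (Wald) interval. The sketch below assumes roughly 15 reported incidents among the 12 567 audited prescribing errors, a count inferred from the quoted 1.2/1000 rate and not stated in the abstract; `rate_per_1000_ci` is an illustrative helper of my own.

```python
import math

def rate_per_1000_ci(events: int, total: int, z: float = 1.96):
    """Point estimate and Wald 95% CI for a proportion, scaled per 1000."""
    p = events / total
    se = math.sqrt(p * (1 - p) / total)
    return 1000 * p, 1000 * (p - z * se), 1000 * (p + z * se)

rate, lo, hi = rate_per_1000_ci(15, 12567)   # hypothetical incident count
print(round(rate, 1), round(lo, 1), round(hi, 1))  # → 1.2 0.6 1.8
```

The rounded output matches the abstract's reported 1.2/1000 (95% CI: 0.6-1.8), consistent with the assumed count.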

  17. Familial prostate cancer has a more aggressive course than sporadic prostate cancer after treatment for localized disease, mainly due to a higher rate of distant metastases

    International Nuclear Information System (INIS)

    Kupelian, Patrick A.; Klein, Eric A.; Suh, John H; Kupelian, Varant A.

    1997-01-01

    with negative and positive family history were 83% and 72%, respectively (p=0.013). The 5-year locRFS rates for patients with negative and positive family history were 91% and 87%, respectively (p=0.45). The 5-year dRFS rates for patients with negative and positive family history were 91% and 84%, respectively (p=0.032). Table 1 displays the statistical significance in crude (univariate) and adjusted (multivariate) analysis of all factors analyzed with respect to outcomes of interest. After adjusting for potential confounders, family history of prostate cancer remained strongly associated with biochemical failure. For RP patients, even in the presence of pathologic parameters, family history remained a strong independent predictor of biochemical, clinical, and distant failure (data not shown). Conclusion: Our findings suggest that familial prostate cancer may have a more aggressive course after treatment than non-familial prostate cancer, and that clinical and/or pathological parameters may not adequately predict this course. Familial prostate cancer seems associated with a higher rate of distant metastases. Further studies need to be performed to confirm these findings

  18. After microvascular decompression to treat trigeminal neuralgia, both immediate pain relief and recurrence rates are higher in patients with arterial compression than with venous compression.

    Science.gov (United States)

    Shi, Lei; Gu, Xiaoyan; Sun, Guan; Guo, Jun; Lin, Xin; Zhang, Shuguang; Qian, Chunfa

    2017-07-04

    We explored differences in postoperative pain relief achieved through decompression of the trigeminal nerve compressed by arteries and veins. Clinical characteristics, intraoperative findings, and postoperative curative effects were analyzed in 72 patients with trigeminal neuralgia who were treated by microvascular decompression. The patients were divided into arterial and venous compression groups based on intraoperative findings. Surgical curative effects included immediate relief, delayed relief, obvious reduction, and invalid result. Among the 40 patients in the arterial compression group, 32 had immediate relief of pain (80.0%), 5 cases had delayed relief (12.5%), and 3 cases had an obvious reduction (7.5%). In the venous compression group, 12 patients had immediate relief of pain (37.5%), 13 cases had delayed relief (40.6%), and 7 cases had an obvious reduction (21.9%). During the 2-year follow-up period, 6 patients in the arterial compression group experienced recurrence of trigeminal neuralgia, but there were no recurrences in the venous compression group. Simple artery compression was followed by early relief of trigeminal neuralgia more often than simple venous compression. However, the trigeminal neuralgia recurrence rate was higher in the arterial compression group than in the venous compression group.

  19. The impact of work-related stress on medication errors in Eastern Region Saudi Arabia.

    Science.gov (United States)

    Salam, Abdul; Segal, David M; Abu-Helalah, Munir Ahmad; Gutierrez, Mary Lou; Joosub, Imran; Ahmed, Wasim; Bibi, Rubina; Clarke, Elizabeth; Qarni, Ali Ahmed Al

    2018-05-07

    To examine the relationship between overall level and source-specific work-related stressors on medication error rate. A cross-sectional study examined the relationship between overall levels of stress, 25 source-specific work-related stressors and medication error rate based on documented incident reports in a Saudi Arabia (SA) hospital, using secondary databases. King Abdulaziz Hospital in Al-Ahsa, Eastern Region, SA. Two hundred and sixty-nine healthcare professionals (HCPs). The odds ratio (OR) and corresponding 95% confidence interval (CI) for HCPs' documented incident-report medication errors and self-reported sources from the Job Stress Survey. Multiple logistic regression analysis identified source-specific work-related stress as significantly associated with HCPs who made at least one medication error per month; highly stressed HCPs were two times more likely to make at least one medication error per month than non-stressed HCPs (OR: 1.95, P = 0.081). This is the first study to use documented incident reports for medication errors rather than self-report to evaluate the level of stress-related medication errors in SA HCPs. Job demands, such as social stressors (home life disruption, difficulties with colleagues), time pressures, structural determinants (compulsory night/weekend call duties) and higher income, were significantly associated with medication errors whereas overall stress revealed a 2-fold higher trend.

  20. Errors as a Means of Reducing Impulsive Food Choice.

    Science.gov (United States)

    Sellitto, Manuela; di Pellegrino, Giuseppe

    2016-06-05

    Nowadays, the increasing incidence of eating disorders due to poor self-control has given rise to increased obesity and other chronic weight problems, and ultimately, to reduced life expectancy. The capacity to refrain from automatic responses is usually high in situations in which making errors is highly likely. The protocol described here aims at reducing imprudent preference in women during hypothetical intertemporal choices about appetitive food by associating it with errors. First, participants undergo an error task where two different edible stimuli are associated with two different error likelihoods (high and low). Second, they make intertemporal choices about the two edible stimuli, separately. As a result, this method decreases the discount rate for future amounts of the edible reward that cued higher error likelihood, selectively. This effect is under the influence of the self-reported hunger level. The present protocol demonstrates that errors, well known as motivationally salient events, can induce the recruitment of cognitive control, thus being ultimately useful in reducing impatient choices for edible commodities.

  1. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-09-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
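    The probability of undetectable error discussed above follows directly from a code's weight distribution: on a binary symmetric channel with bit-error rate p, an error goes undetected exactly when the error pattern is a nonzero codeword, so P_ud(p) = Σ_{i≥1} A_i p^i (1-p)^(n-i). A minimal sketch using the (7,4) Hamming code's weight distribution as a stand-in (not the much longer shortened codes of IEEE 802.3 themselves):

```python
def undetected_error_prob(weights, p):
    """P(undetected error) on a BSC: sum of A_i * p^i * (1-p)^(n-i) over nonzero weights i."""
    n = len(weights) - 1
    return sum(a * p**i * (1 - p)**(n - i) for i, a in enumerate(weights) if i > 0)

# Weight distribution A_0..A_7 of the (7,4) Hamming code.
hamming_7_4 = [1, 0, 0, 7, 7, 0, 0, 1]
print(undetected_error_prob(hamming_7_4, 0.5))   # → 0.1171875, i.e. (2**4 - 1) / 2**7
```

At p = 1/2 every error pattern is equally likely, so the result is just (number of nonzero codewords)/2^n; at small p the minimum-weight codewords dominate, which is why weight distributions are the key quantity in the paper.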

  2. The calculation of average error probability in a digital fibre optical communication system

    Science.gov (United States)

    Rugemalira, R. A. M.

    1980-03-01

    This paper deals with the problem of determining the average error probability in a digital fibre optical communication system, in the presence of message dependent inhomogeneous non-stationary shot noise, additive Gaussian noise and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared. The Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity.
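    The comparison above can be illustrated in miniature. Under textbook assumptions (binary antipodal signalling in additive Gaussian noise only, not the paper's shot-noise model), the exact error probability at normalized SNR x is the Gaussian tail Q(x) = 0.5·erfc(x/√2), while the Chernoff bound gives exp(-x²/2). The bound is always the larger of the two, which is why a bound-based evaluation is pessimistic about sensitivity relative to an exact characteristic-function calculation:

```python
import math

def q_exact(x: float) -> float:
    """Exact Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def q_chernoff(x: float) -> float:
    """Chernoff upper bound on Q(x), valid for x >= 0."""
    return math.exp(-x * x / 2)

for x in (0.5, 1.0, 2.0, 3.0):
    assert q_chernoff(x) >= q_exact(x)   # the bound is always pessimistic
```

At x = 0 the gap is already a factor of two (bound 1.0 versus exact 0.5), and it widens with SNR.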

  3. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  4. Indexed effective orifice area is a significant predictor of higher mid- and long-term mortality rates following aortic valve replacement in patients with prosthesis-patient mismatch.

    Science.gov (United States)

    Chen, Jian; Lin, Yiyun; Kang, Bo; Wang, Zhinong

    2014-02-01

    Prosthesis-patient mismatch (PPM) is defined as a too-small effective orifice area (EOA) of an inserted prosthetic relative to body size, resulting in an abnormally high postoperative gradient. It is unclear, however, whether residual stenosis after aortic valve replacement (AVR) has a negative impact on mid- and long-term survivals. We searched electronic databases, including PubMed, Embase, Medline and the Cochrane controlled trials register, through October 2012, to identify published full-text English studies on the association between PPM and mortality rates. A significant PPM was defined as an indexed EOA (iEOA)<0.85 cm2/m2, and severe PPM as an iEOA<0.65 cm2/m2. Two reviewers independently assessed the studies for inclusion and extracted data. Fourteen observational studies, involving 14 874 patients, met our final inclusion criteria. Meta-analysis demonstrated that PPM significantly increased mid-term (odds ratio [OR] 1.42, 95% confidence interval [CI] 1.19-1.69) and long-term (OR 1.52, 95% CI 1.26-1.84) all-cause mortalities. Subgroup analysis showed that PPM was associated with higher mid- and long-term mortality rates only in younger and predominantly female populations. Risk-adjusted sensitivity analysis showed that severe PPM was associated with reduced survival (adjusted hazard ratio [HR] 1.50, 95% CI 1.24-1.80), whereas moderate PPM was not (adjusted HR 0.96, 95% CI 0.86-1.07). Regardless of severity, however, PPM had a negative effect on survival in patients with impaired ejection fraction (adjusted HR 1.26, 95% CI 1.09-1.47). PPM (iEOA<0.85 cm2/m2) after AVR tended to be associated with increased long-term all-cause mortality in younger patients, females and patients with preoperative left ventricular dysfunction. Severe PPM (iEOA<0.65 cm2/m2) was a significant predictor of reduced long-term survival in all populations undergoing AVR.
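    The pooled odds ratios above come from meta-analytic combination of study-level estimates. A minimal inverse-variance fixed-effect sketch is shown below; the function `pool_log_odds_ratios` and the ORs/CIs in the usage example are illustrative assumptions, not the authors' actual model or data (which involved random effects and risk adjustment):

```python
import math

def pool_log_odds_ratios(ors, cis, z: float = 1.96):
    """Fixed-effect inverse-variance pooling of odds ratios given their 95% CIs."""
    logs, weights = [], []
    for or_, (lo, hi) in zip(ors, cis):
        se = (math.log(hi) - math.log(lo)) / (2 * z)   # SE recovered from CI width
        logs.append(math.log(or_))
        weights.append(1 / se**2)
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - z * se_pooled),
            math.exp(pooled + z * se_pooled))

# Two hypothetical studies each reporting OR 1.5 (95% CI 1.2-1.875):
# pooling leaves the point estimate unchanged but narrows the interval.
print(pool_log_odds_ratios([1.5, 1.5], [(1.2, 1.875), (1.2, 1.875)]))
```

Pooling is done on the log scale because log-OR is approximately normal; weights are the reciprocal variances, so more precise studies dominate.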

  5. Azacitidine-lenalidomide (ViLen) combination yields a high response rate in higher risk myelodysplastic syndromes (MDS)-ViLen-01 protocol.

    Science.gov (United States)

    Mittelman, Moshe; Filanovsky, Kalman; Ofran, Yishai; Rosenbaum, Hanna; Raanani, Pia; Braester, Andrei; Goldschmidt, Neta; Kirgner, Ilya; Herishanu, Yair; Perri, Chava; Ellis, Martin; Oster, Howard S

    2016-10-01

    Azacitidine treatment is effective in higher risk MDS (HR-MDS), with less than 50 % response, lasting 2 years. Aza and lenalidomide (Len) have a potential synergistic effect. The ViLen-01 phase IIa trial includes 6-month induction (Aza 75 mg/m(2)/day, days 1-5, Len 10 mg/day, days 6-21, every 28 days), 6-month consolidation (Aza 75 mg/m(2)/day, days 1-5, every 28 days), and 12-month maintenance (Len 10 mg/day, days 1-21, every 28 days). Response was evaluated according to IWG criteria. In total, 25 patients were enrolled, with a mean age of 76.3 years (range 60-87); 88 % had major comorbidities. Thirteen patients completed induction, 7 proceeded to consolidation, and 2 to maintenance. The overall response rate (ORR) was 72 % (18/25), with 6 (24 %) achieving CR, 3 (12 %) marrow CR, and 9 (36 %) hematologic improvement (HI). The 7 non-responding patients remained on the study from 3 days to 4.1 months. At 6 months, 4 of 6 evaluable patients achieved complete cytogenetic response, including 2 with del(5q) at diagnosis. Adverse events (AEs) were as expected in these patients: grades III-IV, mainly hematologic-thrombocytopenia (20 patients) and neutropenia (13 patients). The common non-hematologic AEs were infections (14 patients), nausea (7), vomiting (7), diarrhea (7), and skin reactions (5). The median progression-free survival (PFS) was 12 ± 1.36 months, with median overall survival (OS) of 12 ± 1.7 months. Quality of life (FACT questionnaire) data were available for 12 patients, with a tendency towards improved QoL. This trial with elderly HR-MDS patients with an expected poor prognosis demonstrates a high (72 %) response rate and a reasonable expected safety profile but a relatively short PFS and OS.

  6. Deep breathing exercises with positive expiratory pressure at a higher rate improve oxygenation in the early period after cardiac surgery--a randomised controlled trial.

    Science.gov (United States)

    Urell, Charlotte; Emtner, Margareta; Hedenström, Hans; Tenling, Arne; Breidenskog, Marie; Westerdahl, Elisabeth

    2011-07-01

    In addition to early mobilisation, a variety of breathing exercises are used to prevent postoperative pulmonary complications after cardiac surgery. The optimal duration of the treatment is not well evaluated. The aim of this study was to determine the effect of 30 versus 10 deep breaths hourly, while awake, with positive expiratory pressure on oxygenation and pulmonary function the first days after cardiac surgery. A total of 181 patients, undergoing cardiac surgery, were randomised into a treatment group, performing 30 deep breaths hourly the first postoperative days, or into a control group, performing 10 deep breaths hourly. The main outcome measure, arterial blood gases, and the secondary outcome, pulmonary function (evaluated with spirometry), were determined on the second postoperative day. Preoperatively, both study groups were similar in terms of age, SpO(2), forced expiratory volume in 1 s and New York Heart Association classification. On the second postoperative day, arterial oxygen tension (PaO(2)) was 8.9 ± 1.7 kPa in the treatment group and 8.1 ± 1.4 kPa in the control group (p = 0.004). Arterial oxygen saturation (SaO(2)) was 92.7 ± 3.7% in the treatment group and 91.1 ± 3.8% in the control group (p = 0.016). There were no differences in measured lung function between the groups or in compliance with the breathing exercises. Compliance was 65% of possible breathing sessions. Significantly increased oxygenation was found in patients performing 30 deep breaths hourly the first two postoperative days compared with control patients performing 10 deep breaths hourly. These results support the implementation of a higher rate of deep breathing exercises in the initial phase after cardiac surgery. Copyright © 2010 European Association for Cardio-Thoracic Surgery. Published by Elsevier B.V. All rights reserved.

  7. Error-related anterior cingulate cortex activity and the prediction of conscious error awareness

    Directory of Open Access Journals (Sweden)

    Catherine eOrr

    2012-06-01

    Full Text Available Research examining the neural mechanisms associated with error awareness has consistently identified dorsal anterior cingulate cortex (ACC) activity as necessary but not predictive of conscious error detection. Two recent studies (Steinhauser and Yeung, 2010; Wessel et al. 2011) have found a contrary pattern of greater dorsal ACC activity (in the form of the error-related negativity) during detected errors, but suggested that the greater activity may instead reflect task influences (e.g., response conflict, error probability) and/or individual variability (e.g., statistical power). We re-analyzed fMRI BOLD data from 56 healthy participants who had previously been administered the Error Awareness Task, a motor Go/No-go response inhibition task in which subjects make errors of commission of which they are aware (Aware errors) or unaware (Unaware errors). Consistent with previous data, the activity in a number of cortical regions was predictive of error awareness, including bilateral inferior parietal and insula cortices; however, in contrast to previous studies, including our own smaller-sample studies using the same task, error-related dorsal ACC activity was significantly greater during aware errors when compared to unaware errors. While the significantly faster RT for aware errors (compared to unaware) was consistent with the hypothesis of higher response conflict increasing ACC activity, we could find no relationship between dorsal ACC activity and the error RT difference. The data suggest that individual variability in error awareness is associated with error-related dorsal ACC activity, and therefore this region may be important to conscious error detection, but it remains unclear what task and individual factors influence error awareness.

  8. Spacecraft and propulsion technician error

    Science.gov (United States)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed-methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, with workmanship error accounting for 29% of the failures. With the imminent advent of commercial space travel, the increased potential for the loss of human life demands that changes be made to standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of space transportation passengers.

  9. Measuring Error Identification and Recovery Skills in Surgical Residents.

    Science.gov (United States)

    Sternbach, Joel M; Wang, Kevin; El Khoury, Rym; Teitelbaum, Ezra N; Meyerson, Shari L

    2017-02-01

    Although error identification and recovery skills are essential for the safe practice of surgery, they have not traditionally been taught or evaluated in residency training. This study validates a method for assessing error identification and recovery skills in surgical residents using a thoracoscopic lobectomy simulator. We developed a 5-station, simulator-based examination containing the most commonly encountered cognitive and technical errors occurring during division of the superior pulmonary vein for left upper lobectomy. Successful completion of each station requires identification and correction of these errors. Examinations were video recorded and scored in a blinded fashion using an examination-specific rating instrument evaluating task performance as well as error identification and recovery skills. Evidence of validity was collected in the categories of content, response process, internal structure, and relationship to other variables. Fifteen general surgical residents (9 interns and 6 third-year residents) completed the examination. Interrater reliability was high, with an intraclass correlation coefficient of 0.78 between 4 trained raters. Station scores ranged from 64% to 84% correct. All stations adequately discriminated between high- and low-performing residents, with discrimination ranging from 0.35 to 0.65. The overall examination score was significantly higher for intermediate residents than for interns (mean, 74 versus 64 of 90 possible; p = 0.03). The described simulator-based examination with embedded errors and its accompanying assessment tool can be used to measure error identification and recovery skills in surgical residents. This examination provides a valid method for comparing teaching strategies designed to improve error recognition and recovery to enhance patient safety. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  10. How does aging affect the types of error made in a visual short-term memory 'object-recall' task?

    Science.gov (United States)

    Sapkota, Raju P; van der Linde, Ian; Pardhan, Shahina

    2014-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76) and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real-world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits.

  11. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Full Text Available Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005, General requirements for the competence of testing and calibration laboratories, during operation are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, Rule 702 of the Federal Rules of Evidence mandates that judges consider factors such as peer review to ensure the reliability of the expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher possibility of unfair decision-making, they should receive more attention than false-negative errors.

  12. Tropical systematic and random error energetics based on NCEP ...

    Indian Academy of Sciences (India)


    Systematic error growth rate peak is observed at wavenumber 2 up to 4-day forecast then .... the influence of summer systematic error and ran- ... total exchange. When the error energy budgets are examined in spectral domain, one may ask ques- tions on the error growth at a certain wavenum- ber from its interaction with ...

  13. Error monitoring issues for common channel signaling

    Science.gov (United States)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far and includes the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion of their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
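    The link error monitoring procedure analyzed here (SS7's signal unit error rate monitor) is essentially a leaky-bucket counter. The following is a minimal sketch, with threshold and leak parameters chosen for illustration rather than taken from the standard or from the paper's analysis:

```python
class LeakyBucketErrorMonitor:
    """Leaky-bucket error monitor in the style of SS7's SUERM.

    The counter increments on each errored signal unit and leaks by one
    for every `leak_interval` signal units received; when the counter
    reaches `threshold`, the link is taken out of service (changeover).
    The parameter values here are illustrative assumptions.
    """

    def __init__(self, threshold=64, leak_interval=256):
        self.threshold = threshold
        self.leak_interval = leak_interval
        self.counter = 0
        self.received_since_leak = 0

    def on_signal_unit(self, errored):
        """Process one signal unit; return True if changeover is triggered."""
        if errored:
            self.counter += 1
        self.received_since_leak += 1
        if self.received_since_leak >= self.leak_interval:
            self.received_since_leak = 0
            if self.counter > 0:
                self.counter -= 1
        return self.counter >= self.threshold
```

    A sustained error rate above roughly 1/leak_interval drives the counter toward the threshold, while error bursts can trip it much faster; this sensitivity of the trigger point to the error pattern is consistent with the changeover oscillations described in the abstract.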

  14. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

    Full Text Available A scheme is presented for calculating the errors of dry matter values which occur during approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Further formulae are shown which describe the absolute errors of the growth characteristics: Growth rate (GR), Relative growth rate (RGR), Unit leaf rate (ULR) and Leaf area ratio (LAR). Calculation examples concerning the growth course of oats and maize plants are given. A critical analysis of the estimates obtained has been carried out. The usefulness of jointly applying statistical methods and error calculus in plant growth analysis is demonstrated.
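    For the logistic curve named in the abstract, the growth characteristics have closed forms, which is what makes analytical error propagation possible. A sketch, assuming the common parameterization W(t) = A / (1 + exp(b - kt)) (the paper's exact notation may differ):

```python
import math

def logistic(t, A, b, k):
    """Logistic growth curve W(t) = A / (1 + exp(b - k*t))."""
    return A / (1.0 + math.exp(b - k * t))

def growth_rate(t, A, b, k):
    """GR = dW/dt = k * W * (1 - W/A) for the logistic curve."""
    W = logistic(t, A, b, k)
    return k * W * (1.0 - W / A)

def relative_growth_rate(t, A, b, k):
    """RGR = GR / W = k * (1 - W/A)."""
    W = logistic(t, A, b, k)
    return k * (1.0 - W / A)
```

    ULR and LAR additionally require a leaf-area curve (ULR = GR / leaf area, LAR = leaf area / W), so they are omitted from this sketch.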

  15. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
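    The contrast between total and local error reduction comes down to which prediction enters the error term of the weight update. A sketch, assuming a Rescorla-Wagner-style implementation of TER; the models actually compared in the paper may differ in detail:

```python
def ter_update(weights, present, outcome, lr=0.1):
    """Total error reduction (Rescorla-Wagner style): each present cue's
    weight moves in proportion to the discrepancy between the outcome
    and the summed prediction of ALL present cues."""
    total_prediction = sum(weights[cue] for cue in present)
    error = outcome - total_prediction
    for cue in present:
        weights[cue] += lr * error
    return weights

def ler_update(weights, present, outcome, lr=0.1):
    """Local error reduction: each present cue learns from the
    discrepancy between the outcome and ITS OWN prediction alone."""
    for cue in present:
        error = outcome - weights[cue]
        weights[cue] += lr * error
    return weights
```

    Under TER, cues compete for a shared error signal (producing phenomena like blocking: a cue already predicting the outcome leaves no error for a new cue to absorb); under LER, each cue converges toward the outcome independently.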

  16. Higher neonatal growth rate and body condition score at 7 months are predictive factors of obesity in adult female Beagle dogs.

    Science.gov (United States)

    Leclerc, Lucie; Thorin, Chantal; Flanagan, John; Biourge, Vincent; Serisier, Samuel; Nguyen, Patrick

    2017-04-13

    The risks during early growth of becoming overweight in adulthood are widely studied in humans. However, early-life predictive factors for canine adult overweight and obesity have not yet been studied. To identify factors that may help explain the development of overweight and obesity at adulthood in dogs, a 2-year longitudinal study was conducted in 24 female Beagle dogs of the same age and sexual status, raised under identical environmental conditions. By means of a hierarchical classification on principal components with the following quantitative values: fat-free mass (FFM), percentage fat mass and pelvic circumference at 2 years of age, three groups of dogs were established and nominally named: ideal weight (IW, n = 9), slightly overweight (OW1, n = 6) and overweight (OW2, n = 9). With the aim of identifying predictive factors for the development of obesity at adulthood, parental characteristics, growth pattern, energy balance and plasma factors were analysed by logistic regression analysis. At 24 months, the group compositions were in line with the body condition scores (BCS, 1-9 scale) of the IW (5 or 6/9), OW1 (6/9) and OW2 (7 or 8/9) groups. Logistic regression analysis permitted the identification of neonatal growth rate during the first 2 weeks of life (GR2W) and BCS at 7 months as predictors of the development of obesity at adulthood. Seventy percent of dogs with either GR2W > 125% or BCS > 6/9 at 7 months belonged to the OW2 group. Results from energy intake and expenditure, corrected for FFM, showed that there was a greater positive energy imbalance between 7 and 10 months for the OW2 group compared with the IW group. This study expands the understanding of previously reported risk factors for being overweight or obese in dogs, establishing that (i) 15 out of 24 of the studied dogs became overweight and (ii) GR2W and BCS at 7 months of age could be used as predictive factors for becoming overweight adult dogs in the OW2 group.

  17. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

    Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  18. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    .... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...

  19. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors contribute substantially to the risk of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but the models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research on human error and concludes with suggestions for further work. (orig.)

  20. Commission errors of active intentions: the roles of aging, cognitive load, and practice.

    Science.gov (United States)

    Boywitt, C Dennis; Rummel, Jan; Meiser, Thorsten

    2015-01-01

    Performing an intended action when it needs to be withheld, for example, when temporarily prescribed medication is incompatible with other medication, is referred to as a commission error of prospective memory (PM). While recent research indicates that older adults are especially prone to commission errors for finished intentions, there is a lack of research on the effects of aging on commission errors for still-active intentions. The present research investigates conditions which might contribute to older adults' propensity to perform planned intentions under inappropriate conditions. Specifically, disproportionately higher rates of commission errors for still-active intentions were observed in older than in younger adults with both salient (Experiment 1) and non-salient (Experiment 2) target cues. Practicing the PM task in Experiment 2, however, helped execution of the intended action in terms of higher PM performance at faster ongoing-task response times but did not increase the rate of commission errors. The results have important implications for the understanding of older adults' PM commission errors and the processes involved in these errors.

  1. ERROR HANDLING IN INTEGRATION WORKFLOWS

    Directory of Open Access Journals (Sweden)

    Alexey M. Nazarenko

    2017-01-01

    Full Text Available Simulation experiments performed while solving multidisciplinary engineering and scientific problems require the joint usage of multiple software tools. Further, when following a preset plan of experiment or searching for optimum solutions, the same sequence of calculations is run multiple times with various simulation parameters, input data, or conditions, while the overall workflow does not change. Automation of simulations like these requires implementing a workflow where tool execution and data exchange are usually controlled by a special type of software, an integration environment or platform. The result is an integration workflow (a platform-dependent implementation of some computing workflow) which, in the context of automation, is a composition of weakly coupled (in terms of communication intensity) typical subtasks. These compositions can then be decomposed back into a few workflow patterns (types of subtask interaction). The patterns, in their turn, can be interpreted as higher-level subtasks. This paper considers the execution control and data exchange rules that should be imposed by the integration environment when an error is encountered by some integrated software tool. An error is defined as any abnormal behavior of a tool that invalidates its result data, thus disrupting the data flow within the integration workflow. The main requirement to the error handling mechanism implemented by the integration environment is to prevent abnormal termination of the entire workflow in case of missing intermediate result data. Error handling rules are formulated on the basic pattern level and on the level of a composite task that can combine several basic patterns as next-level subtasks. The cases where workflow behavior may differ, depending on the user's purposes, when an error takes place, and the possible error handling options that can be specified by the user, are also noted in the work.
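    The idea of containing a tool failure so that downstream tools never consume missing result data can be illustrated with a toy runner for a linear chain of tools. The policy names ("abort", "skip") and the structure below are illustrative assumptions, not the paper's formalism:

```python
def run_workflow(tasks, inputs, on_error="abort"):
    """Run a linear chain of (name, tool) pairs, feeding each result
    to the next tool.

    A tool that raises is treated as having produced invalid data.
    Hypothetical user-selectable policies:
      - "abort": stop the whole workflow immediately
      - "skip":  drop the failed step and feed its input onward
    Either way, no downstream tool ever receives the missing result.
    """
    data = inputs
    log = []
    for name, tool in tasks:
        try:
            data = tool(data)
            log.append((name, "ok"))
        except Exception as exc:
            log.append((name, f"error: {exc}"))
            if on_error == "abort":
                return None, log
            # "skip": leave data unchanged and continue with the next tool
    return data, log
```

    A real integration environment would additionally distinguish pattern-level rules (e.g., for parallel branches only the failed branch is cancelled) from composite-task rules, but the containment principle is the same.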

  2. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  3. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged <18 years. Of the error reports retrieved, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  4. Collection of offshore human error probability data

    International Nuclear Information System (INIS)

    Basra, Gurpreet; Kirwan, Barry

    1998-01-01

    Accidents such as Piper Alpha have increased concern about the effects of human errors in complex systems. Such accidents can in theory be predicted and prevented by risk assessment, and in particular human reliability assessment (HRA), but HRA ideally requires qualitative and quantitative human error data. A research initiative at the University of Birmingham led to the development of CORE-DATA, a Computerised Human Error Data Base. This system currently contains a reasonably large number of human error data points, collected from a variety of mainly nuclear-power related sources. This article outlines a recent offshore data collection study, concerned with collecting lifeboat evacuation data. Data collection methods are outlined and a selection of human error probabilities generated as a result of the study are provided. These data give insights into the type of errors and human failure rates that could be utilised to support offshore risk analyses

  5. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  6. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    Science.gov (United States)

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of defects occurring in medical processes. Extracting and identifying defects as medical error factors from these processes is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task that requires an analyst with a professional medical background; a method is needed to extract medical error factors and reduce the difficulty of extraction. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, the extraction of the error factors, and the identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted; these were then closely related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Additionally, compared to BPNN, partial least squares regression and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy, being able to promptly identify the error factors from the error-related items. The combination of "error-related items, their different levels, and the GA-BPNN model" was proposed as an error-factor identification technology, which could automatically identify medical error factors.

  7. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Full Text Available Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  8. Teamwork and Clinical Error Reporting among Nurses in Korean Hospitals

    Directory of Open Access Journals (Sweden)

    Jee-In Hwang, PhD

    2015-03-01

    Conclusions: Teamwork was rated as moderate and was positively associated with nurses' error reporting performance. Hospital executives and nurse managers should make substantial efforts to enhance teamwork, which will contribute to encouraging the reporting of errors and improving patient safety.

  9. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  10. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  11. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

    Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, like connectivity failures which are technically untraceable by users. Error messages are not logged efficiently, and sometimes are not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.
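    One simple realization of early error detection and classification is rule-based matching on low-level error messages, mapping them to coarse classes a transfer scheduler can act on (retry versus give up). The rule set and class names below are hypothetical illustrations, not the framework proposed in the paper:

```python
import re

# Hypothetical rules: message patterns mapped to coarse error classes.
ERROR_RULES = [
    (re.compile(r"connection (refused|reset|timed out)", re.I), "network-transient"),
    (re.compile(r"no such file|not found", re.I), "source-missing"),
    (re.compile(r"permission denied|authentication", re.I), "authorization"),
    (re.compile(r"no space left|quota", re.I), "destination-full"),
]

# Classes a scheduler could reasonably retry (illustrative choice).
RETRYABLE = {"network-transient", "destination-full"}

def classify_error(message):
    """Return (error_class, retryable) for a raw transfer error message."""
    for pattern, label in ERROR_RULES:
        if pattern.search(message):
            return label, label in RETRYABLE
    return "unknown", False
```

    Classifying before reporting lets the scheduler retry transient network failures while failing fast on a missing source file, instead of treating every failure identically.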

  12. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...

  13. Errors and violations

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between errors and violations, and between active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident-pathogen view of accident causation and seeks to identify the various system pathways along which errors and violations may be propagated.

  14. Does provider-initiated HIV testing and counselling lead to higher HIV testing rate and HIV case finding in Rwandan clinics?

    NARCIS (Netherlands)

    Kayigamba, Felix R.; van Santen, Daniëla; Bakker, Mirjam I.; Lammers, Judith; Mugisha, Veronicah; Bagiruwigize, Emmanuel; de Naeyer, Ludwig; Asiimwe, Anita; Schim van der Loeff, Maarten F.

    2016-01-01

    Provider-initiated HIV testing and counselling (PITC) is promoted as a means to increase HIV case finding. We assessed the effectiveness of PITC to increase HIV testing rate and HIV case finding among outpatients in Rwandan health facilities (HF). PITC was introduced in six HFs in 2009-2010. HIV

  15. Lower estimated glomerular filtration rate and higher albuminuria are associated with mortality and end-stage renal disease. A collaborative meta-analysis of kidney disease population cohorts

    DEFF Research Database (Denmark)

    Astor, Brad C; Matsushita, Kunihiro; Gansevoort, Ron T

    2011-01-01

    We studied here the independent associations of estimated glomerular filtration rate (eGFR) and albuminuria with mortality and end-stage renal disease (ESRD) in individuals with chronic kidney disease (CKD). We performed a collaborative meta-analysis of 13 studies totaling 21,688 patients selected...

  16. Payment for antiretroviral drugs is associated with a higher rate of patients lost to follow-up than those offered free-of-charge therapy in Nairobi, Kenya

    NARCIS (Netherlands)

    Zachariah, R.; van Engelgem, I.; Massaquoi, M.; Kocholla, L.; Manzi, M.; Suleh, A.; Phillips, M.; Borgdorff, M.

    2008-01-01

    This retrospective analysis of routine programme data from Mbagathi District Hospital, Nairobi, Kenya shows the difference in rates of loss to follow-up between a cohort that paid 500 shillings/month (approximately US$7) for antiretroviral drugs (ART) and one that received medication free of charge.

  17. Lower estimated glomerular filtration rate and higher albuminuria are associated with all-cause and cardiovascular mortality. A collaborative meta-analysis of high-risk population cohorts

    NARCIS (Netherlands)

    van der Velde, Marije; Matsushita, Kunihiro; Coresh, Josef; Astor, Brad C.; Woodward, Mark; Levey, Andrew S.; de Jong, Paul E.; Gansevoort, Ron T.

    Screening for chronic kidney disease is recommended in people at high risk, but data on the independent and combined associations of estimated glomerular filtration rate (eGFR) and albuminuria with all-cause and cardiovascular mortality are limited. To clarify this, we performed a collaborative

  18. Help prevent hospital errors

    Science.gov (United States)


  19. Pedal Application Errors

    Science.gov (United States)

    2012-03-01

    This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...

  20. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
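
    The coupling between rounding and weighing error described above can be demonstrated numerically; a minimal simulation sketch (the scale resolution, noise level, and function names are illustrative assumptions, not the MERDA moment-estimation method):

```python
import random

random.seed(0)

def mean_rounding_error(true_weight, noise_sd, resolution, n=10000):
    """Average (rounded - raw) discrepancy when noisy readings are
    rounded to the scale's display resolution."""
    total = 0.0
    for _ in range(n):
        reading = random.gauss(true_weight, noise_sd)       # weighing error
        rounded = round(reading / resolution) * resolution  # display rounding
        total += rounded - reading                          # rounding error
    return total / n

# Coarse grouping: noise is small relative to the resolution, so nearly
# every reading falls in the same cell and the rounding error has a mean
# far from zero (and is correlated with the weighing error).
coarse = mean_rounding_error(10.02, 0.01, 0.1)

# Fine grouping (readings spread across many cells): the mean is near zero.
fine = mean_rounding_error(10.02, 0.01, 0.001)
```

    This mirrors the abstract's point: once the data fall into several cells, rounding error averages out and simpler estimates suffice.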

  1. Spotting software errors sooner

    International Nuclear Information System (INIS)

    Munro, D.

    1989-01-01

    Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)

  2. Errors in energy bills

    International Nuclear Information System (INIS)

    Kop, L.

    2001-01-01

    On request, the Dutch Association for Energy, Environment and Water (VEMW) checks the energy bills of its customers. In 2000, many errors, both small and large, were found in the bills of 42 businesses

  3. Medical Errors Reduction Initiative

    National Research Council Canada - National Science Library

    Mutter, Michael L

    2005-01-01

    The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...

  4. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  5. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  6. Research trend on human error reduction

    International Nuclear Information System (INIS)

    Miyaoka, Sadaoki

    1990-01-01

    Human error has been a problem in all industries. In 1988, the Bureau of Mines, US Department of the Interior, carried out a worldwide survey on human error in all industries in relation to fatal accidents in mines. The results varied with the method of data collection, but the proportion of total accidents attributable to human error ranged widely, from 20-85%, with an average of 35%. The rate of occurrence of accidents and troubles in Japanese nuclear power stations is shown; the rate attributable to human error is 0-0.5 cases/reactor-year and has not varied much. The proportion of the total attributable to human error has therefore tended to increase, and reducing human error has become important for lowering the overall rate of accidents and troubles. After the TMI accident in the USA in 1979, research on the man-machine interface became active, and after the Chernobyl accident in the USSR in 1986, problems of organization and management have been studied. In Japan, 'Safety 21' was drawn up by the Advisory Committee for Energy, and the annual reports on nuclear safety also pointed out the importance of human factors. The state of research on human factors in Japan and abroad and three targets to reduce human error are reported. (K.I.)

  7. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    Science.gov (United States)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  8. Are High-Severity Fires Burning at Much Higher Rates Recently than Historically in Dry-Forest Landscapes of the Western USA?

    Science.gov (United States)

    Baker, William L

    2015-01-01

    Dry forests at low elevations in temperate-zone mountains are commonly hypothesized to be at risk of exceptional rates of severe fire from climatic change and land-use effects. Their setting is fire-prone, they have been altered by land-uses, and fire severity may be increasing. However, where fires were excluded, increased fire could also be hypothesized as restorative of historical fire. These competing hypotheses are not well tested, as reference data prior to widespread land-use expansion were insufficient. Moreover, fire-climate projections were lacking for these forests. Here, I used new reference data and records of high-severity fire from 1984-2012 across all dry forests (25.5 million ha) of the western USA to test these hypotheses. I also approximated projected effects of climatic change on high-severity fire in dry forests by applying existing projections. This analysis showed the rate of recent high-severity fire in dry forests is within the range of historical rates, or is too low, overall across dry forests and individually in 42 of 43 analysis regions. Significant upward trends were lacking overall from 1984-2012 for area burned and fraction burned at high severity. Upward trends in area burned at high severity were found in only 4 of 43 analysis regions. Projections for A.D. 2046-2065 showed high-severity fire would generally be still operating at, or have been restored to historical rates, although high projections suggest high-severity fire rotations that are too short could ensue in 6 of 43 regions. Programs to generally reduce fire severity in dry forests are not supported and have significant adverse ecological impacts, including reducing habitat for native species dependent on early-successional burned patches and decreasing landscape heterogeneity that confers resilience to climatic change. Some adverse ecological effects of high-severity fires are concerns. Managers and communities can improve our ability to live with high-severity fire in

  9. Higher caloric intake in hospitalized adolescents with anorexia nervosa is associated with reduced length of stay and no increased rate of refeeding syndrome.

    Science.gov (United States)

    Golden, Neville H; Keane-Miller, Casey; Sainani, Kristin L; Kapphahn, Cynthia J

    2013-11-01

    To determine the effect of higher caloric intake on weight gain, length of stay (LOS), and incidence of hypophosphatemia, hypomagnesemia, and hypokalemia in adolescents hospitalized with anorexia nervosa. Electronic medical records of all subjects 10-21 years of age with anorexia nervosa, first admitted to a tertiary children's hospital from Jan 2007 to Dec 2011, were retrospectively reviewed. Demographic factors, anthropometric measures, incidence of hypophosphatemia (≤3.0 mg/dL), hypomagnesemia (≤1.7 mg/dL), and hypokalemia (≤3.5 mEq/L), and daily change in percent median body mass index (%mBMI) from baseline were recorded. Subjects started on higher-calorie diets (≥1,400 kcal/d) were compared with those started on lower-calorie diets (<1,400 kcal/d). Refeeding hypophosphatemia depends on the degree of malnutrition but not on the prescribed caloric intake, within the range studied. Copyright © 2013 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  10. Implementation of the forced answering option within online surveys: Do higher item response rates come at the expense of participation and answer quality?

    Directory of Open Access Journals (Sweden)

    Décieux Jean Philippe

    2015-01-01

    Full Text Available Online surveys have become a popular method for data gathering for many reasons, including low costs and the ability to collect data rapidly. However, online data collection is often conducted without adequate attention to implementation details. One example is the frequent use of the forced answering option, which forces the respondent to answer each question in order to proceed through the questionnaire. The avoidance of missing data is often the idea behind the use of the forced answering option. However, we suggest that the costs of a reactance effect, in terms of quality reduction and unit nonresponse, may be high because respondents typically have plausible reasons for not answering questions. The objective of the study reported in this paper was to test the influence of forced answering on dropout rates and data quality. The results show that requiring participants to answer every question increases dropout rates and decreases the quality of answers. Our findings suggest that the desire for a complete data set has to be balanced against the consequences of reduced data quality.

  11. Higher rates of triple-class virological failure in perinatally HIV-infected teenagers compared with heterosexually infected young adults in Europe.

    Science.gov (United States)

    Judd, A; Lodwick, R; Noguera-Julian, A; Gibb, D M; Butler, K; Costagliola, D; Sabin, C; van Sighem, A; Ledergerber, B; Torti, C; Mocroft, A; Podzamczer, D; Dorrucci, M; De Wit, S; Obel, N; Dabis, F; Cozzi-Lepri, A; García, F; Brockmeyer, N H; Warszawski, J; Gonzalez-Tome, M I; Mussini, C; Touloumi, G; Zangerle, R; Ghosn, J; Castagna, A; Fätkenheuer, G; Stephan, C; Meyer, L; Campbell, M A; Chene, G; Phillips, A

    2017-03-01

    The aim of the study was to determine the time to, and risk factors for, triple-class virological failure (TCVF) across age groups for children and adolescents with perinatally acquired HIV infection and older adolescents and adults with heterosexually acquired HIV infection. We analysed individual patient data from cohorts in the Collaboration of Observational HIV Epidemiological Research Europe (COHERE). A total of 5972 participants starting antiretroviral therapy (ART) from 1998 were included; failure of a drug class was defined as a viral load >500 HIV-1 RNA copies/mL despite ≥ 4 months of use. TCVF was defined as cumulative failure of two NRTIs, an NNRTI and a bPI. The median number of weeks between diagnosis and the start of ART was higher in participants with perinatal HIV infection than in participants with heterosexually acquired HIV infection overall [17 (interquartile range (IQR) 4-111) vs. 8 (IQR 2-38) weeks, respectively], and highest in perinatally infected participants aged 10-14 years [49 (IQR 9-267) weeks]. The cumulative proportion with TCVF 5 years after starting ART was 9.6% [95% confidence interval (CI) 7.0-12.3%] in participants with perinatally acquired infection and 4.7% (95% CI 3.9-5.5%) in participants with heterosexually acquired infection, and highest in perinatally infected participants aged 10-14 years when starting ART (27.7%; 95% CI 13.2-42.1%). Across all participants, significant predictors of TCVF were perinatal HIV infection with ART started at age 10-14 years, African origin, pre-ART AIDS, NNRTI-based initial regimens, higher pre-ART viral load and lower pre-ART CD4 count. The results suggest a beneficial effect of starting ART before adolescence, and of starting young people on boosted PIs, to maximize treatment response during this transitional stage of development. © 2016 The Authors. HIV Medicine published by John Wiley & Sons Ltd on behalf of British HIV Association.

  12. Apolipoprotein CIII overexpression exacerbates diet-induced obesity due to adipose tissue higher exogenous lipid uptake and retention and lower lipolysis rates.

    Science.gov (United States)

    Raposo, Helena F; Paiva, Adriene A; Kato, Larissa S; de Oliveira, Helena C F

    2015-01-01

    Hypertriglyceridemia is a common type of dyslipidemia found in obesity. However, it is not established whether primary hyperlipidemia can predispose to obesity. Evidence has suggested that proteins primarily related to plasma lipoprotein transport, such as apolipoprotein (apo) CIII and E, may significantly affect the process of body fat accumulation. We have previously observed an increased adiposity in response to a high fat diet (HFD) in mice overexpressing apoCIII. Here, we examined the potential mechanisms involved in this exacerbated response of apoCIII mice to the HFD. We measured body energy balance, tissue capacity to store exogenous lipids, lipogenesis and lipolysis rates in non-transgenic and apoCIII overexpressing mice fed a HFD during two months. Food intake, fat excretion and whole body CO2 production were similar in both groups. However, the adipose tissue mass (45 %) and leptin plasma levels (2-fold) were significantly greater in apoCIII mice. Lipogenesis rates were similar, while exogenous lipid retention was increased in perigonadal (2-fold) and brown adipose tissues (40 %) of apoCIII mice. In addition, adipocyte basal lipolysis (55 %) and in vivo lipolysis index (30 %) were significantly decreased in apoCIII mice. A fat tolerance test evidenced delayed plasma triglyceride clearance and greater transient availability of non-esterified fatty acids (NEFA) during the post-prandial state in the plasma of apoCIII mice. Thus, apoCIII overexpression resulted in increased NEFA availability to adipose uptake and decreased adipocyte lipolysis, favoring lipid enlargement of adipose depots. We propose that plasma apoCIII levels represent a new risk factor for diet-induced obesity.

  13. Treatment Results of Postoperative Radiotherapy on Squamous Cell Carcinoma of the Oral Cavity: Coexistence of Multiple Minor Risk Factors Results in Higher Recurrence Rates

    International Nuclear Information System (INIS)

    Fan, Kang-Hsing; Wang, Hung-Ming; Kang, Chung-Jan

    2010-01-01

    Purpose: The aim of this study was to investigate the treatment results of postoperative radiotherapy (PORT) on squamous cell carcinoma of the oral cavity (OSCC). Materials and Methods: This study included 302 OSCC patients who were treated by radical surgery and PORT. Indications for PORT include Stage III or IV OSCC according to the 2002 criteria of the American Joint Committee on Cancer, the presence of perineural invasion or lymphatic invasion, the depth of tumor invasion, or a close surgical margin. Patients with major risk factors, such as multiple nodal metastases, a positive surgical margin, or extracapsular spreading, were excluded. The prescribed dose of PORT ranged from 59.4 to 66.6Gy (median, 63Gy). Results: The 3-year overall and recurrence-free survival rates were 73% and 70%, respectively. Univariate analysis revealed that differentiation, perineural invasion, lymphatic invasion, bone invasion, location (hard palate and retromolar trigone), invasion depths ≥10mm, and margin distances ≤4mm were significant prognostic factors. The presence of multiple significant factors of univariate analysis correlated with disease recurrence. The 3-year recurrence-free survival rates were 82%, 76%, and 45% for patients with no risk factors, one or two risk factors, and three or more risk factors, respectively. After multivariate analysis, the number of risk factors and lymphatic invasion were significant prognostic factors. Conclusion: PORT may be an adequate adjuvant therapy for OSCC patients with one or two risk factors of recurrence. The presence of multiple risk factors and lymphatic invasion correlated with poor prognosis, and more aggressive treatment may need to be considered.

  14. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb, states that to err is human: people make mistakes all the time. What counts, however, is that people learn from their mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reasons behind them, improve and move on. The significance of studying errors is described by Corder: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982, p. 1). The aim of this paper is thus to analyse errors in the process of second language acquisition and the ways we teachers can benefit from mistakes, helping students improve while giving proper feedback.

  15. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    model with higher bias but more stability. Further, my experiments showed that the two metrics can and will lead to different conclusions about the impacts of certain operations and may suggest different strategies for model improvement. Therefore, SQ error is a poor substitute for ABS error, and the use of the two metrics should be clearly differentiated.
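
    The abstract's claim that the two metrics can lead to different conclusions is easy to reproduce; a minimal sketch with hypothetical residuals from two models (the data are invented for illustration):

```python
def mae(errors):
    """Mean absolute (ABS) error."""
    return sum(abs(e) for e in errors) / len(errors)

def mse(errors):
    """Mean squared (SQ) error, which penalizes large deviations more heavily."""
    return sum(e * e for e in errors) / len(errors)

# Hypothetical residuals: model A is usually close but has one large miss;
# model B is uniformly mediocre but stable.
model_a = [0.1, 0.1, 0.1, 0.1, 4.0]
model_b = [1.0, 1.0, 1.0, 1.0, 1.0]

# MAE ranks A better (0.88 vs 1.0); MSE ranks B better (3.208 vs 1.0) —
# the two metrics disagree about which model is superior.
```

    This is the structural reason the two metrics are not interchangeable: squaring weights large residuals disproportionately, so SQ error favors the stable model while ABS error favors the one with lower typical error.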

  16. Autotrophic and heterotrophic nitrification-anoxic denitrification dominated the anoxic/oxic sewage treatment process during optimization for higher loading rate and energy savings.

    Science.gov (United States)

    Zhang, Xueyu; Zheng, Shaokui; Zhang, Hangyu; Duan, Shoupeng

    2018-04-30

    This study clarified the dominant nitrogen (N)-transformation pathway and the key ammonia-oxidizing microbial species at three loading levels during optimization of the anoxic/oxic (A/O) process for sewage treatment. Comprehensive N-transformation activity analysis showed that ammonia oxidization was performed predominantly by aerobic chemolithotrophic and heterotrophic ammonia oxidization, whereas N2 production was performed primarily by anoxic denitrification in the anoxic unit. The abundances of ammonia-oxidizing bacteria (AOB), nitrite-oxidizing bacteria, and anaerobic AOB in activated sludge reflected their activities on the basis of high-throughput sequencing data. AOB amoA gene clone libraries revealed that the predominant AOB species in sludge samples shifted from Nitrosomonas europaea (61% at the normal loading level) to Nitrosomonas oligotropha (58% and 81% at the two higher loading levels). Following isolation and sequencing, the predominant culturable heterotrophic AOB in sludge shifted from Agrobacterium tumefaciens (42% at the normal loading level) to Acinetobacter johnsonii (52% at the highest loading level). Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Infusion-related febrile reaction after haploidentical stem cell transplantation in children is associated with higher rates of engraftment syndrome and acute graft-versus-host disease.

    Science.gov (United States)

    Chen, Yao; Huang, Xiao-Jun; Liu, Kai-Yan; Chen, Huan; Chen, Yu-Hong; Zhang, Xiao-Hui; Wang, Feng-Rong; Han, Wei; Wang, Jing-Zhi; Wang, Yu; Yan, Chen-Hua; Zhang, Yuan-Yuan; Sun, Yu-Qian; Xu, Lan-Ping

    2015-12-01

    The clinical significance and prognostic impact of IRFR in pediatric recipients of haploidentical SCT are not clearly understood. Therefore, we attempted to determine how IRFR affects clinical outcomes in children. Clinical data from 100 consecutive pediatric patients (60 boys and 40 girls; median age, 12 yr [range, 2-18 yr]) after haploidentical SCT between January 2010 and December 2012 were collected retrospectively. IRFR was defined as unexplained fever (>38 °C) within 24 h after the infusion of haploidentical PBSCs. Thirty-eight (38.0%) cases met the criteria for IRFR. ES was found in 24 (63.2%) of the 38 children with IRFR, with a median time to ES of +9 (7-16) days, while only 15 (25.4%) of the 59 children without IRFR developed ES. Thirty-eight children comprised the IRFR group, and 59 were in the control (non-IRFR) group. A high incidence of ES was observed in children with IRFR. Similarly, the incidence of stage I-IV and II-IV aGVHD was significantly higher in the febrile group. Multivariate analysis showed IRFR to be a risk factor for ES and aGVHD. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients, and the mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions causing fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error-reporting systems can help prevent adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk of errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  19. Ablation of arginylation in the mouse N-end rule pathway: loss of fat, higher metabolic rate, damaged spermatogenesis, and neurological perturbations.

    Directory of Open Access Journals (Sweden)

    Christopher S Brower

    2009-11-01

    Full Text Available In the N-end rule pathway of protein degradation, the destabilizing activity of N-terminal Asp, Glu or (oxidized) Cys residues requires their conjugation to Arg, which is recognized directly by the pathway's ubiquitin ligases. N-terminal arginylation is mediated by the Ate1 arginyltransferase, whose physiological substrates include the Rgs4, Rgs5 and Rgs16 regulators of G proteins. Here, we employed the Cre-lox technique to uncover new physiological functions of N-terminal arginylation in adult mice. We show that postnatal deletion of mouse Ate1 (its unconditional deletion is embryonic lethal) causes a rapid decrease of body weight and results in early death of approximately 15% of Ate1-deficient mice. Despite being hyperphagic, the surviving Ate1-deficient mice contain little visceral fat. They also exhibit an increased metabolic rate, ectopic induction of the Ucp1 uncoupling protein in white fat, and are resistant to diet-induced obesity. In addition, Ate1-deficient mice have enlarged brains, an enhanced startle response, are strikingly hyperkinetic, and are prone to seizures and kyphosis. Ate1-deficient males are also infertile, owing to defects in Ate1(-/-) spermatocytes. The remarkably broad range of specific biological processes that are shown here to be perturbed by the loss of N-terminal arginylation will make possible the dissection of regulatory circuits that involve Ate1 and either its known substrates, such as Rgs4, Rgs5 and Rgs16, or those currently unknown.

  20. Surgery for diverticular disease results in a higher hernia rate compared to colorectal cancer: a population-based study from Ontario, Canada.

    Science.gov (United States)

    Tang, E S; Robertson, D I; Whitehead, M; Xu, J; Hall, S F

    2017-11-16

    Incisional hernias are a well described complication of abdominal surgery. Previous studies identified malignancy and diverticular disease as risk factors. We compared incisional hernia rates between colon resection for colorectal cancer (CRC) and diverticular disease (DD). We performed a retrospective, population-based, matched cohort study. Provincial databases were linked through the Institute for Clinical Evaluative Sciences. These databases include all patients registered under the universal Ontario Health Insurance Plan. Patients aged 18-105 undergoing open colon resection, without ostomy formation between April 1, 2002 and March 31, 2009, were included. We excluded those with previous surgery, hernia, obstruction, and perforation. The primary outcomes were surgery for hernia repair, or diagnosis of hernia in clinic. We identified 4660 cases of DD. These were matched 2:1 by age and gender to 8933 patients with CRC for a total of 13,593. At 5 years, incisional hernias occurred in 8.3% of patients in the CRC cohort, versus 13.1% of those undergoing surgery for DD. After adjusting for important confounders (comorbidity score, wound infection, age, diabetes, prednisone and chemotherapy), hernias were still more likely in patients with DD [HR 1.58, 95% Confidence Interval (CI) 1.43-1.76, P < 0.001]. The only significant covariate was wound infection (HR 1.63, 95% CI 1.43-1.87, P < 0.001). Our study found that incisional hernias occur more commonly in patients with DD than CRC.