WorldWideScience

Sample records for error rate determination

  1. Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.

    Science.gov (United States)

    Reddon, John R.; And Others

    1985-01-01

    Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)
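    A minimal Monte Carlo sketch of this kind of experiment (not the authors' code: it uses Bartlett's classical chi-square approximation, whose test statistic is a scaled negative log-determinant of the sample correlation matrix, with arbitrary choices of sample size, dimension, and trial count):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n, p, trials, alpha = 50, 5, 10_000, 0.05
df = p * (p - 1) // 2                    # degrees of freedom of the chi-square approximation
crit = chi2.ppf(1 - alpha, df)           # nominal 5% critical value

rejections = 0
for _ in range(trials):
    x = rng.standard_normal((n, p))      # sample from a spherical multivariate normal
    r = np.corrcoef(x, rowvar=False)     # sample correlation matrix
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(r))  # Bartlett's statistic
    rejections += stat > crit

print(f"empirical type-one error rate: {rejections / trials:.4f}")
```

    Under the spherical null the empirical rejection rate should sit near the nominal 0.05; how well determinant-based approximations hold at finite sample sizes is exactly the question this record addresses.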

  2. Type-II generalized family-wise error rate formulas with application to sample size determination.

    Science.gov (United States)

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
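    In the notation suggested by the abstract (a paraphrase; the authors' exact symbols may differ), let V be the number of true null hypotheses rejected and S the number of false null hypotheses rejected among the m tests:

$$\text{q-gFWER} = \Pr(V \ge q), \qquad r\text{-power} = \Pr(S \ge r),$$

    so the type-II r-generalized family-wise error rate is the complement of the r-power, $\Pr(S \le r - 1)$.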

  3. Satellite Photometric Error Determination

    Science.gov (United States)

    2015-10-18

Satellite Photometric Error Determination. Tamara E. Payne, Philip J. Castro, Stephen A. Gregory. Applied Optimization, 714 East Monument Ave, Suite ... advocate the adoption of new techniques based on in-frame photometric calibrations enabled by newly available all-sky star catalogs that contain highly ... filter systems will likely be supplanted by the Sloan based filter systems. The Johnson photometric system is a set of filters in the optical

  4. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  5. Determination of corrosion rate of reinforcement with a modulated guard ring electrode; analysis of errors due to lateral current distribution

    International Nuclear Information System (INIS)

    Wojtas, H.

    2004-01-01

The main source of errors in measuring the corrosion rate of rebars on site is a non-uniform current distribution between the small counter electrode (CE) on the concrete surface and the large rebar network. Guard ring electrodes (GEs) are used in an attempt to confine the excitation current within a defined area. In order to better understand the functioning of the modulated guard ring electrode and to assess its effectiveness in eliminating errors due to lateral spread of the current signal from the small CE, measurements of the polarisation resistance performed on a concrete beam have been numerically simulated. The effect of parameters such as rebar corrosion activity, concrete resistivity, concrete cover depth and size of the corroding area on errors in the estimation of the polarisation resistance of a single rebar has been examined. The results indicate that the modulated GE arrangement fails to confine the lateral spread of the CE current within a constant area. Using a constant diameter of confinement for the calculation of corrosion rate may lead to serious errors when test conditions change. When high corrosion activity of the rebar and/or local corrosion occurs, the use of the modulated GE confinement may lead to significant underestimation of the corrosion rate.

  6. Accelerated tests for the soft error rate determination of single radiation particles in components of terrestrial and avionic electronic systems

    International Nuclear Information System (INIS)

    Flament, O.; Baggio, J.

    2010-01-01

This paper describes the main features of the accelerated test procedures used to determine reliability data for microelectronic devices used in the terrestrial environment. It focuses on high-energy particle tests, which can be performed with a spallation neutron source or with quasi-mono-energetic neutron or proton beams. Improvements to the standards are illustrated with respect to the state of the art in radiation effects and the scaling down of microelectronics technologies. (authors)

  7. Technological Advancements and Error Rates in Radiation Therapy Delivery

    Energy Technology Data Exchange (ETDEWEB)

Margalit, Danielle N., E-mail: dmargalit@partners.org [Harvard Radiation Oncology Program, Boston, MA (United States); Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States); Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K. [Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States)]

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique
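    The headline comparison reduces to a 2x2 contingency table, which is what Fisher's exact test operates on. The sketch below is illustrative only: the per-technique fraction totals are not given in the abstract, so the denominators here are hypothetical, chosen merely to reproduce rates near 0.03% and 0.07%.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = technique, columns = (fractions with error, without).
# Only the approximate rates (~0.03% vs ~0.07%) echo the abstract; denominators are invented.
imrt = [30, 100_000 - 30]           # ~0.03% reported error rate
conv = [99, 141_546 - 99]           # ~0.07% reported error rate

odds_ratio, p_value = fisher_exact([imrt, conv])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4g}")
```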

  8. Technological Advancements and Error Rates in Radiation Therapy Delivery

    International Nuclear Information System (INIS)

    Margalit, Danielle N.; Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K.

    2011-01-01

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)–conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women’s Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher’s exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01–0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08–0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique

  9. Generalizing human error rates: A taxonomic approach

    International Nuclear Information System (INIS)

    Buffardi, L.; Fleishman, E.; Allen, J.

    1989-01-01

    It is well established that human error plays a major role in malfunctioning of complex, technological systems and in accidents associated with their operation. Estimates of the rate of human error in the nuclear industry range from 20-65% of all system failures. In response to this, the Nuclear Regulatory Commission has developed a variety of techniques for estimating human error probabilities for nuclear power plant personnel. Most of these techniques require the specification of the range of human error probabilities for various tasks. Unfortunately, very little objective performance data on error probabilities exist for nuclear environments. Thus, when human reliability estimates are required, for example in computer simulation modeling of system reliability, only subjective estimates (usually based on experts' best guesses) can be provided. The objective of the current research is to provide guidelines for the selection of human error probabilities based on actual performance data taken in other complex environments and applying them to nuclear settings. A key feature of this research is the application of a comprehensive taxonomic approach to nuclear and non-nuclear tasks to evaluate their similarities and differences, thus providing a basis for generalizing human error estimates across tasks. In recent years significant developments have occurred in classifying and describing tasks. Initial goals of the current research are to: (1) identify alternative taxonomic schemes that can be applied to tasks, and (2) describe nuclear tasks in terms of these schemes. Three standardized taxonomic schemes (Ability Requirements Approach, Generalized Information-Processing Approach, Task Characteristics Approach) are identified, modified, and evaluated for their suitability in comparing nuclear and non-nuclear power plant tasks. An agenda for future research and its relevance to nuclear power plant safety is also discussed

  10. Multicenter Assessment of Gram Stain Error Rates.

    Science.gov (United States)

    Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-06-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  11. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian processes algorithm is used to estimate and predict error rates based on error correction data from the past. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor that increases with the code distance.
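    A minimal sketch of the estimation idea, under assumptions: per-round error-rate estimates (simulated here with a slow sinusoidal drift and binomial noise) are smoothed and extrapolated with a Gaussian-process regressor. The kernel and the synthetic drift are illustrative choices, not the paper's protocol.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
rounds = np.arange(200, dtype=float)
true_rate = 0.01 + 0.004 * np.sin(rounds / 40.0)      # slowly drifting physical error rate
observed = rng.binomial(1000, true_rate) / 1000.0     # noisy estimate from 1000 checks per round

kernel = RBF(length_scale=30.0) + WhiteKernel(noise_level=1e-5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(rounds[:, None], observed)

future = np.arange(200, 220, dtype=float)[:, None]    # extrapolate to upcoming rounds
mean, std = gp.predict(future, return_std=True)
print(mean[:3], std[:3])                              # predicted rates with uncertainty
```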

  12. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)

  13. Aniseikonia quantification: error rate of rule of thumb estimation.

    Science.gov (United States)

    Lubkin, V; Shippman, S; Bennett, G; Meininger, D; Kramer, P; Poppinga, P

    1999-01-01

To find the error rate in quantifying aniseikonia by using "Rule of Thumb" estimation in comparison with proven space eikonometry. Study 1: 24 adult pseudophakic individuals were measured for anisometropia and astigmatic interocular difference. Rule of Thumb quantification for prescription was calculated and compared with aniseikonia measurement by the classical Essilor Projection Space Eikonometer. Study 2: parallel analysis was performed on 62 consecutive phakic patients from our strabismus clinic group. Frequency of error: For Group 1 (24 cases): 5 (or 21%) were equal (i.e., 1% or less difference); 16 (or 67%) were greater (more than 1% different); and 3 (13%) were less by Rule of Thumb calculation in comparison to aniseikonia determined on the Essilor eikonometer. For Group 2 (62 cases): 45 (or 73%) were equal (1% or less); 10 (or 16%) were greater; and 7 (or 11%) were lower in the Rule of Thumb calculations in comparison to Essilor eikonometry. Magnitude of error: In Group 1, in 10/24 (29%) aniseikonia by Rule of Thumb estimation was 100% or more greater than by space eikonometry, and in 6 of those ten by 200% or more. In Group 2, in 4/62 (6%) aniseikonia by Rule of Thumb estimation was 200% or more greater than by space eikonometry. The frequency and magnitude of apparent clinical errors of Rule of Thumb estimation are disturbingly large. This problem is greatly magnified by the time, effort and cost of prescribing and executing an aniseikonic correction for a patient. The higher the refractive error, the greater the anisometropia, and the worse the errors in Rule of Thumb estimation of aniseikonia. Accurate eikonometric methods and devices should be employed in all cases where such measurements can be made. Rule of Thumb estimations should be limited to cases where such subjective testing and measurement cannot be performed, as in infants after unilateral cataract surgery.

  14. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    Energy Technology Data Exchange (ETDEWEB)

R. L. Hoskinson; R. C. Rope; L. G. Blackwood; R. D. Lee; R. K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences

  15. 45 CFR 98.100 - Error Rate Report.

    Science.gov (United States)

    2010-10-01

    ... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... the total dollar amount of payments made in the sample); the average amount of improper payment; and... not received. (e) Costs of Preparing the Error Rate Report—Provided the error rate calculations and...

  16. Error Analysis of Determining Airplane Location by Global Positioning System

    OpenAIRE

    Hajiyev, Chingiz; Burat, Alper

    1999-01-01

This paper studies the error analysis of determining airplane location by the global positioning system (GPS) using a statistical testing method. The Newton-Raphson method positions the airplane at the intersection point of four spheres. Absolute errors, relative errors and standard deviations have been calculated. The results show that the positioning error of the airplane varies with the coordinates of the GPS satellites and the airplane.

  17. Accelerated testing for cosmic soft-error rate

    International Nuclear Information System (INIS)

    Ziegler, J.F.; Muhlfeld, H.P.; Montrose, C.J.; Curtis, H.W.; O'Gorman, T.J.; Ross, J.M.

    1996-01-01

This paper describes the experimental techniques which have been developed at IBM to determine the sensitivity of electronic circuits to cosmic rays at sea level. It relates IBM circuit design and modeling, chip manufacture with process variations, and chip testing for SER sensitivity. This vertical integration from design to final test, with feedback to design, allows a complete picture of LSI sensitivity to cosmic rays. Since advanced computers are designed with LSI chips long before the chips have been fabricated, and the system architecture is fully formed before the first chips are functional, it is essential to establish the chip reliability as early as possible. This paper establishes techniques to test chips that are only partly functional (e.g., only 1Mb of a 16Mb memory may be working) and can establish chip soft-error upset rates before final chip manufacturing begins. Simple relationships derived from measurement of more than 80 different chips manufactured over 20 years allow the total cosmic soft-error rate (SER) to be estimated after only limited testing. Comparisons between these accelerated test results and similar tests determined by "field testing" (which may require a year or more of testing after manufacturing begins) show that the experimental techniques are accurate to a factor of 2.

  18. Logical error rate scaling of the toric code

    International Nuclear Information System (INIS)

    Watson, Fern H E; Barrett, Sean D

    2014-01-01

    To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behaviour in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead—the total number of physical qubits required to perform error correction. (paper)
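    In the second (low physical error rate) regime described above, the logical error rate is commonly modelled with a scaling ansatz of the following rough form (a widely used heuristic, not a formula quoted from the paper):

$$P_L \approx A \left( \frac{p}{p_{\mathrm{th}}} \right)^{\lceil d/2 \rceil},$$

    where $d$ is the code distance and $p_{\mathrm{th}}$ the threshold; the exponent reflects that on the order of $d/2$ physical errors suffice to defeat a matching decoder.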

  19. Error-rate performance analysis of opportunistic regenerative relaying

    KAUST Repository

    Tourki, Kamel

    2011-09-01

In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can be considered unusable, and we take into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive the exact statistics of each hop in terms of the probability density function (PDF). The PDFs are then used to determine accurate closed-form expressions for the end-to-end bit error rate (BER) of binary phase-shift keying (BPSK) modulation, where the detector may use maximum ratio combining (MRC) or selection combining (SC). Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over a linear network (LN) architecture and considering Rayleigh fading channels. © 2011 IEEE.
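    For context, the textbook closed form for BPSK with L-branch maximum ratio combining over i.i.d. Rayleigh branches (a standard result found in e.g. Proakis, not the paper's end-to-end relaying expression) is easy to evaluate:

```python
from math import comb, sqrt

def bpsk_mrc_ber(avg_snr: float, branches: int) -> float:
    """Textbook average BER of BPSK with L-branch MRC over i.i.d. Rayleigh fading."""
    mu = sqrt(avg_snr / (1.0 + avg_snr))
    lo, hi = (1.0 - mu) / 2.0, (1.0 + mu) / 2.0
    return lo ** branches * sum(comb(branches - 1 + k, k) * hi ** k for k in range(branches))

for L in (1, 2, 4):
    print(L, bpsk_mrc_ber(10.0, L))   # average SNR per branch of 10 (linear scale, i.e. 10 dB)
```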

  20. Process error rates in general research applications to the Human ...

    African Journals Online (AJOL)

    Objective. To examine process error rates in applications for ethics clearance of health research. Methods. Minutes of 586 general research applications made to a human health research ethics committee (HREC) from April 2008 to March 2009 were examined. Rates of approval were calculated and reasons for requiring ...

  1. Individual Differences and Rating Errors in First Impressions of Psychopathy

    Directory of Open Access Journals (Sweden)

    Christopher T. A. Gillen

    2016-10-01

The current study is the first to investigate whether individual differences in personality are related to improved first impression accuracy when appraising psychopathy in female offenders from thin-slices of information. The study also investigated the types of errors laypeople make when forming these judgments. Sixty-seven undergraduates assessed 22 offenders on their level of psychopathy, violence, likability, and attractiveness. Psychopathy rating accuracy improved as rater extroversion-sociability and agreeableness increased and when neuroticism and lifestyle and antisocial characteristics decreased. These results suggest that traits associated with nonverbal rating accuracy or social functioning may be important in threat detection. Raters also made errors consistent with error management theory, suggesting that laypeople overappraise danger when rating psychopathy.

  2. A critique of recent models for human error rate assessment

    International Nuclear Information System (INIS)

    Apostolakis, G.E.

    1988-01-01

    This paper critically reviews two groups of models for assessing human error rates under accident conditions. The first group, which includes the US Nuclear Regulatory Commission (NRC) handbook model and the human cognitive reliability (HCR) model, considers as fundamental the time that is available to the operators to act. The second group, which is represented by the success likelihood index methodology multiattribute utility decomposition (SLIM-MAUD) model, relies on ratings of the human actions with respect to certain qualitative factors and the subsequent derivation of error rates. These models are evaluated with respect to two criteria: the treatment of uncertainties and the internal coherence of the models. In other words, this evaluation focuses primarily on normative aspects of these models. The principal findings are as follows: (1) Both of the time-related models provide human error rates as a function of the available time for action and the prevailing conditions. However, the HCR model ignores the important issue of state-of-knowledge uncertainties, dealing exclusively with stochastic uncertainty, whereas the model presented in the NRC handbook handles both types of uncertainty. (2) SLIM-MAUD provides a highly structured approach for the derivation of human error rates under given conditions. However, the treatment of the weights and ratings in this model is internally inconsistent. (author)

  3. Evaluation of soft errors rate in a commercial memory EEPROM

    International Nuclear Information System (INIS)

    Claro, Luiz H.; Silva, A.A.; Santos, Jose A.

    2011-01-01

Soft errors are transient circuit errors caused by external radiation. When an ion intercepts a p-n region in an electronic component, the ionization produces excess charges along the track. These charges, when collected, can flip internal values, especially in memory cells. The problem affects not only space applications but also terrestrial ones. Neutrons induced by cosmic rays and alpha particles, emitted from traces of radioactive contaminants contained in packaging and chip materials, are the predominant sources of radiation. The soft error susceptibility differs between memory technologies, hence experimental studies are very important for Soft Error Rate (SER) evaluation. In this work, the methodology for accelerated tests is presented, with results for the SER of a commercial electrically erasable and programmable read-only memory (EEPROM). (author)

  4. The 95% confidence intervals of error rates and discriminant coefficients

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2015-02-01

Fisher proposed a linear discriminant function (Fisher’s LDF). From 1971, we analysed electrocardiogram (ECG) data in order to develop the diagnostic logic between normal and abnormal symptoms by Fisher’s LDF and a quadratic discriminant function (QDF). Our four-year research was inferior to the decision tree logic developed by the medical doctor. After this experience, we discriminated many data sets and found four problems of discriminant analysis. A revised Optimal LDF by Integer Programming (Revised IP-OLDF) based on the minimum number of misclassifications (minimum NM) criterion resolves three problems entirely [13, 18]. In this research, we discuss the fourth problem of discriminant analysis: there are no standard errors (SEs) of the error rates and discriminant coefficients. We propose a k-fold cross-validation method. This method offers a model selection technique and 95% confidence intervals (C.I.) of error rates and discriminant coefficients.
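    A generic sketch of the k-fold resampling idea (ordinary LDA on synthetic data, not Shinmura's Revised IP-OLDF): repeat a shuffled k-fold split, collect the fold error rates and fitted coefficients, and read off percentile intervals.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=300, n_features=4, random_state=0)

error_rates, coefs = [], []
for seed in range(20):                           # repeat 5-fold CV with different shuffles
    for train, test in StratifiedKFold(5, shuffle=True, random_state=seed).split(X, y):
        lda = LinearDiscriminantAnalysis().fit(X[train], y[train])
        error_rates.append(1.0 - lda.score(X[test], y[test]))
        coefs.append(lda.coef_.ravel())

lo, hi = np.percentile(error_rates, [2.5, 97.5])
print(f"95% interval for the error rate: [{lo:.3f}, {hi:.3f}]")
print("95% intervals for the coefficients:\n", np.percentile(coefs, [2.5, 97.5], axis=0))
```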

  5. Dispensing error rate after implementation of an automated pharmacy carousel system.

    Science.gov (United States)

    Oswald, Scott; Caldwell, Richard

    2007-07-01

A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing error rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and dispensing error rates had increased. The filling rate for automated dispensing cabinets was associated with the largest decrease in errors. Filling and dispensing error rates had decreased by June 2006. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.

  6. Assessment of salivary flow rate: biologic variation and measure error.

    NARCIS (Netherlands)

    Jongerius, P.H.; Limbeek, J. van; Rotteveel, J.J.

    2004-01-01

    OBJECTIVE: To investigate the applicability of the swab method in the measurement of salivary flow rate in multiple-handicap drooling children. To quantify the measurement error of the procedure and the biologic variation in the population. STUDY DESIGN: Cohort study. METHODS: In a repeated

  7. Relating Complexity and Error Rates of Ontology Concepts. More Complex NCIt Concepts Have More Errors.

    Science.gov (United States)

    Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher

    2017-05-18

    Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.

  8. Estimating error rates for firearm evidence identifications in forensic science

    Science.gov (United States)

    Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan

    2018-01-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680

  9. Laser tracker error determination using a network measurement

    International Nuclear Information System (INIS)

    Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim

    2011-01-01

    We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies

  10. Error rate performance of narrowband multilevel CPFSK signals

    Science.gov (United States)

    Ekanayake, N.; Fonseka, K. J. P.

    1987-04-01

The paper presents a relatively simple method for analyzing the effect of IF filtering on the performance of multilevel FM signals. Using this method, the error rate performance of narrowband FM signals is analyzed for three different detection techniques, namely limiter-discriminator detection, differential detection, and coherent detection followed by differential decoding. The symbol error probabilities are computed for a Gaussian IF filter and a second-order Butterworth IF filter. It is shown that coherent detection with differential decoding yields better performance than limiter-discriminator detection and differential detection, whereas the two noncoherent detectors yield approximately identical performance.

  11. Attitude Determination Error Analysis System (ADEAS) mathematical specifications document

    Science.gov (United States)

    Nicholson, Mark; Markley, F.; Seidewitz, E.

    1988-01-01

The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is presented, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.

  12. An Empirical State Error Covariance Matrix Orbit Determination Example

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2015-01-01

State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance

  13. The nearest neighbor and the bayes error rates.

    Science.gov (United States)

    Loizou, G; Maybank, S J

    1987-02-01

The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal then the asymptotic error rates satisfy the inequalities E_{k,l+1} ≤ E*(λ) ≤ E_{k,l} ≤ dE*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions E_{k,l} and dE*(λ) are equal.
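    For the two-class case the classical Cover-Hart bounds, stated here for context (a standard result, not taken from the paper), relate the asymptotic nearest-neighbour risk $R_{\mathrm{NN}}$ to the Bayes risk $R^*$:

$$R^* \le R_{\mathrm{NN}} \le 2R^*(1 - R^*).$$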

  14. CREME96 and Related Error Rate Prediction Methods

    Science.gov (United States)

    Adams, James H., Jr.

    2012-01-01

Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code and the effective flux method of Binder which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  15. Modeling of Bit Error Rate in Cascaded 2R Regenerators

    DEFF Research Database (Denmark)

    Öhman, Filip; Mørk, Jesper

    2006-01-01

This paper presents a simple and efficient model for estimating the bit error rate in a cascade of optical 2R-regenerators. The model includes the influences of amplifier noise, finite extinction ratio and nonlinear reshaping. The interplay between the different signal impairments and the regenerating nonlinearity is investigated. It is shown that an increase in nonlinearity can compensate for an increase in noise figure or decrease in signal power. Furthermore, the influence of the improvement in signal extinction ratio along the cascade and the importance of choosing the proper threshold ...

  16. Minimizing Symbol Error Rate for Cognitive Relaying with Opportunistic Access

    KAUST Repository

    Zafar, Ammar

    2012-12-29

In this paper, we present an optimal resource allocation scheme (ORA) for an all-participate (AP) cognitive relay network that minimizes the symbol error rate (SER). The SER is derived and different constraints are considered on the system. We consider the cases of both individual and global power constraints, individual constraints only and global constraints only. Numerical results show that the ORA scheme outperforms the schemes with direct link only and uniform power allocation (UPA) in terms of minimizing the SER for all three cases of different constraints. Numerical results also show that the individual constraints only case provides the best performance at large signal-to-noise ratio (SNR).

  17. The decline and fall of Type II error rates

    Science.gov (United States)

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.

18. A novel multitemporal InSAR model for joint estimation of deformation rates and orbital errors

    KAUST Repository

    Zhang, Lei

    2014-06-01

Orbital errors, characterized typically as long-wavelength artifacts, commonly exist in interferometric synthetic aperture radar (InSAR) imagery as a result of inaccurate determination of the sensor state vector. Orbital errors degrade the precision of multitemporal InSAR products (i.e., ground deformation). Although research on orbital error reduction has been ongoing for nearly two decades and several algorithms for reducing the effect of the errors are already in existence, the errors cannot always be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatial-temporal characteristics of the two types of signals. The proposed model is able to isolate a long-wavelength ground motion signal from the orbital error even when the two types of signals exhibit similar spatial patterns. The proposed algorithm is efficient and requires no ground control points. In addition, the method is built upon wrapped phases of interferograms, eliminating the need for phase unwrapping. The performance of the proposed model is validated using both simulated and real data sets. The demo codes of the proposed model are also provided for reference. © 2013 IEEE.

  19. Low dose rate gamma ray induced loss and data error rate of multimode silica fibre links

    International Nuclear Information System (INIS)

    Breuze, G.; Fanet, H.; Serre, J.

    1993-01-01

Fiber optic data transmission from numerous multiplexed sensors is potentially attractive for nuclear plant applications. Multimode silica fiber behaviour during steady-state gamma ray exposure is studied in a joint programme between LETI CE/SACLAY and EDF Renardieres: transmitted optical power and bit error rate have been measured on a 100 m optical fiber.

  20. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    Energy Technology Data Exchange (ETDEWEB)

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A. [Canis Lupus LLC and Department of Human Oncology, University of Wisconsin, Merrimac, Wisconsin 53561 (United States); Department of Medical Physics, University of Wisconsin, Madison, Wisconsin 53705 (United States); Departments of Human Oncology, Medical Physics, and Biomedical Engineering, University of Wisconsin, Madison, Wisconsin 53792 (United States)

    2011-02-15

Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa.

  1. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    International Nuclear Information System (INIS)

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-01-01

Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa. Conclusions: There is a lack of correlation between

  2. A Fast Soft Bit Error Rate Estimation Method

    Directory of Open Access Journals (Sweden)

    Ait-Idir Tarik

    2010-01-01

We have suggested in a previous publication a method to estimate the Bit Error Rate (BER) of a digital communications system instead of using the famous Monte Carlo (MC) simulation. This method was based on the estimation of the probability density function (pdf) of soft observed samples. The kernel method was used for the pdf estimation. In this paper, we suggest to use a Gaussian Mixture (GM) model. The Expectation Maximisation algorithm is used to estimate the parameters of this mixture. The optimal number of Gaussians is computed by using Mutual Information Theory. The analytical expression of the BER is therefore simply given by using the different estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three mentioned methods: Monte Carlo, Kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or Kernel aided techniques. The results show that the GM method can drastically reduce the number of samples needed to estimate the BER, and hence the required simulation run-time, even at very low BER.
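    A minimal sketch of the Gaussian-mixture idea under simplifying assumptions (BPSK over AWGN, mixture fitted to soft samples of the symbol +1, component count fixed at two rather than chosen by mutual information):

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
sigma = 10 ** (-7.0 / 20.0)                       # noise std for a 7 dB SNR, unit-amplitude BPSK
soft = 1.0 + sigma * rng.standard_normal(5_000)   # soft samples when '+1' is sent

gm = GaussianMixture(n_components=2).fit(soft[:, None])
w = gm.weights_
mu = gm.means_.ravel()
sd = np.sqrt(gm.covariances_).ravel()

ber_gm = float(np.sum(w * norm.cdf(-mu / sd)))    # mixture mass below the decision threshold 0
ber_exact = float(norm.cdf(-1.0 / sigma))         # exact AWGN value, Q(1/sigma)
print(f"GM estimate: {ber_gm:.3e}   exact: {ber_exact:.3e}")
```

    The attraction is that the fitted density extrapolates into the error tail, so far fewer samples are needed than for direct Monte Carlo error counting.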

  3. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

The maximum operational range of continuous variable quantum key distribution protocols has been shown to improve when high-efficiency forward error correction codes are employed. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater-than-unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)
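    The secret key model being critiqued typically scales the asymptotic rate by the fraction of code words that decode successfully, schematically (assuming reverse reconciliation with efficiency β; this is a common textbook form, not the paper's notation):

$$K \approx (1 - \mathrm{WER}) \left( \beta I_{AB} - \chi_{BE} \right),$$

    where WER is the word error rate of the fixed-rate code, $I_{AB}$ the mutual information between the honest parties, and $\chi_{BE}$ the Holevo bound on the eavesdropper's information.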

  4. Analysis of gross error rates in operation of commercial nuclear power stations

    International Nuclear Information System (INIS)

    Joos, D.W.; Sabri, Z.A.; Husseiny, A.A.

    1979-01-01

    Experience in operation of US commercial nuclear power plants is reviewed over a 25-month period. The reports accumulated in that period on events of human error and component failure are examined to evaluate gross operator error rates. The impact of such errors on plant operation and safety is examined through the use of proper taxonomies of error, tasks and failures. Four categories of human errors are considered, namely operator, maintenance, installation and administrative. The computed error rates are used to examine appropriate operator models for evaluation of operator reliability. Human error rates are found to be significant to a varying degree in both BWR and PWR. This emphasizes the importance of considering human factors in safety and reliability analysis of nuclear systems. The results also indicate that human errors, and especially operator errors, do indeed follow the exponential reliability model. (Auth.)
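
    For clarity, the exponential reliability model invoked here has the familiar constant-rate form (a textbook statement, with λ the human error rate; the estimator shown is the natural one, not a formula quoted from the paper):

    ```latex
    % Constant-rate (exponential) reliability model and the natural
    % point estimate of the error rate from operating-experience data.
    R(t) = e^{-\lambda t}, \qquad
    \hat{\lambda} = \frac{\text{number of observed errors}}{\text{accumulated operating time}}
    ```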

  5. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    Directory of Open Access Journals (Sweden)

    Jeffrey R Kugelman

    Full Text Available Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods.

  6. Improving Bayesian credibility intervals for classifier error rates using maximum entropy empirical priors.

    Science.gov (United States)

    Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders

    2010-06-01

    Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently many promising approaches for determination of an upper bound for the error rate of a single classifier have been reported, but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. The conventional Bayesian CI becomes unacceptably large in real world applications where the test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the result on the test examples. In other words, no information at all is provided by the uniform prior density distribution employed, which reflects complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to study a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle using empirical results from a few designs and tests using non-overlapping sets of examples. Experimental results show that ME based priors improve the CIs when applied to four quite different simulated and two real world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier. Copyright 2010 Elsevier B.V. All rights reserved.
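
    A minimal sketch of the comparison being made, assuming k errors observed on n held-out examples and a conjugate Beta model: Beta(1, 1) is the conventional uniform prior, while the informative prior parameters below merely stand in for a maximum-entropy-derived prior and are illustrative only.

    ```python
    # Bayesian credibility interval for a classifier error rate (sketch).
    from scipy.stats import beta

    k, n = 7, 100  # hypothetical holdout result: 7 errors on 100 test examples

    def credible_interval(a0, b0, level=0.95):
        post = beta(a0 + k, b0 + n - k)  # conjugate Beta posterior
        tail = (1 - level) / 2
        return post.ppf(tail), post.ppf(1 - tail)

    print("uniform prior    :", credible_interval(1.0, 1.0))
    print("informative prior:", credible_interval(3.0, 30.0))  # stand-in for an ME prior
    ```

    With small test sets the informative prior visibly tightens the interval, which is the effect the study quantifies.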

  7. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Full Text Available Abstract Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error prone data such as expressed sequence tags, low coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error prone sequences. In particular, we advocate the new maximum likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
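
    A minimal sketch of the singleton-excluding logic described above, under the infinite sites model (S segregating sites, xi1 singleton sites, n sampled sequences); this follows the idea in the abstract rather than the authors' code.

    ```python
    # Watterson-type estimators of theta, with and without singletons (sketch).
    def watterson(S, n):
        a_n = sum(1.0 / i for i in range(1, n))  # harmonic number H_{n-1}
        return S / a_n

    def watterson_no_singletons(S, xi1, n):
        # Singletons contribute theta in expectation (the i = 1 term of the
        # site-frequency spectrum), so excluding them removes the leading
        # 1/1 term from the harmonic-sum denominator.
        a_n = sum(1.0 / i for i in range(1, n))
        return (S - xi1) / (a_n - 1.0)

    print(watterson(50, 20), watterson_no_singletons(50, 12, 20))
    ```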

  8. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    Science.gov (United States)

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

    A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main principles of Six Sigma are defining, measuring, analysing, improving and controlling. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors at the pre-analytical, analytical and post-analytical phases was analysed. Improvement strategies were discussed in the monthly intradepartmental meetings and control of the units with high error rates was provided. Fifty-six (52.4%) of the 107 recorded errors were at the pre-analytical phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory achieved a reduction of the error rates mainly in the pre-analytical and analytical phases.
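
    For readers unfamiliar with the Six Sigma arithmetic, a small illustrative calculation of defects per million opportunities (DPMO) and the corresponding sigma level is shown below, assuming the conventional 1.5-sigma shift; the input numbers are hypothetical.

    ```python
    # DPMO and sigma level (sketch; assumes the conventional 1.5-sigma shift).
    from scipy.stats import norm

    def sigma_level(defects, opportunities):
        dpmo = defects / opportunities * 1_000_000
        return dpmo, norm.isf(dpmo / 1_000_000) + 1.5

    print(sigma_level(107, 1_000_000))  # e.g. 107 errors per million opportunities
    ```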

  9. What Determines Star Formation Rates?

    Science.gov (United States)

    Evans, Neal John

    2017-06-01

    The relations between star formation and gas have received renewed attention. We combine studies on scales ranging from local (within 0.5 kpc) to distant galaxies to assess what factors contribute to star formation. These include studies of star forming regions in the Milky Way, the LMC, nearby galaxies with spatially resolved star formation, and integrated galaxy studies. We test whether total molecular gas or dense gas provides the best predictor of star formation rate. The star formation "efficiency," defined as star formation rate divided by mass, spreads over a large range when the mass refers to molecular gas; the standard deviation of the log of the efficiency decreases by a factor of three when the mass of relatively dense molecular gas is used rather than the mass of all the molecular gas. We suggest ways to further develop the concept of "dense gas" to incorporate other factors, such as turbulence.

  10. Bit error rate analysis of free-space optical communication over general Malaga turbulence channels with pointing error

    KAUST Repository

    Alheadary, Wael Ghazy

    2016-12-24

    In this work, we present the bit error rate (BER) and achievable spectral efficiency (ASE) performance of a free-space optical (FSO) link with pointing errors based on intensity modulation/direct detection (IM/DD) and heterodyne detection over a general Malaga turbulence channel. More specifically, we present exact closed-form expressions for adaptive and non-adaptive transmission. The closed-form expressions are presented in terms of generalized power series of the Meijer's G-function. Moreover, asymptotic closed-form expressions are provided to validate our work. In addition, all the presented analytical results are illustrated using a selected set of numerical results.

  11. Agreeableness and Conscientiousness as Predictors of University Students' Self/Peer-Assessment Rating Error

    Science.gov (United States)

    Birjandi, Parviz; Siyyari, Masood

    2016-01-01

    This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…

  12. National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?

    Science.gov (United States)

    Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.

    2010-01-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…

  13. Bit Error Rate Minimizing Channel Shortening Equalizers for Single Carrier Cyclic Prefixed Systems

    National Research Council Canada - National Science Library

    Martin, Richard K; Vanbleu, Koen; Ysebaert, Geert

    2007-01-01

    .... Previous work on channel shortening has largely been in the context of digital subscriber lines, a wireline system that allows bit allocation, thus it has focused on maximizing the bit rate for a given bit error rate (BER...

  14. Framed bit error rate testing for 100G ethernet equipment

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2010-01-01

    rate. As the need for 100 Gigabit Ethernet equipment rises, so does the need for equipment that can properly test these systems during development, deployment and use. This paper presents early results from a work-in-progress academia-industry collaboration project and elaborates on the challenges

  15. The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency.

    Science.gov (United States)

    Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ

    2012-01-01

    This study explored the relationships of oral reading speed and error rate with comprehension in second and third grade students with identified reading risk. The study included 920 2nd graders and 974 3rd graders. Participants were assessed using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and the Woodcock Reading Mastery Test (WRMT) Passage Comprehension subtest. Results from this study further illuminate the significant relationships between error rate, oral reading fluency, and reading comprehension performance, and suggest grade-specific guidelines for appropriate error rate levels. Low oral reading fluency and high error rates predict the level of passage comprehension performance. For second grade students below benchmark, a fall assessment error rate of 28% predicts that student comprehension performance will be below average. For third grade students below benchmark, the fall assessment cut point is 14%. Instructional implications of the findings are discussed.

  16. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    Science.gov (United States)

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods. PMID:28182717

  17. Symbol Error Rate of MPSK over EGK Channels Perturbed by a Dominant Additive Laplacian Noise

    KAUST Repository

    Souri, Hamza; Alouini, Mohamed-Slim

    2015-01-01

    The Laplacian noise has received much attention during the recent years since it affects many communication systems. We consider in this paper the probability of error of an M-ary phase shift keying (PSK) constellation operating over a generalized fading channel in the presence of a dominant additive Laplacian noise. In this context, the decision regions of the receiver are determined using the maximum likelihood and the minimum distance detectors. Once the decision regions are extracted, the resulting symbol error rate expressions are computed and averaged over an Extended Generalized-K fading distribution. Generic closed form expressions of the conditional and the average probability of error are obtained in terms of the Fox’s H function. Simplifications for some special cases of fading are presented and the resulting formulas end up being often expressed in terms of well known elementary functions. Finally, the mathematical formalism is validated using some selected analytical-based numerical results as well as Monte Carlo simulation-based results.
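
    A minimal Monte Carlo sketch of the kind of simulation used above for validation: M-PSK symbols perturbed by per-component Laplacian noise and detected by minimum Euclidean distance. The paper also derives the true ML detector for Laplacian noise, which differs from minimum distance; all parameters here are illustrative.

    ```python
    # Monte Carlo SER of M-PSK under additive Laplacian noise (sketch).
    import numpy as np

    rng = np.random.default_rng(1)
    M, n_sym, b = 8, 100_000, 0.2  # b is the Laplacian scale per component
    const = np.exp(2j * np.pi * np.arange(M) / M)  # unit-energy M-PSK constellation

    tx = rng.integers(0, M, n_sym)
    noise = rng.laplace(0.0, b, n_sym) + 1j * rng.laplace(0.0, b, n_sym)
    rx = const[tx] + noise

    # Minimum-Euclidean-distance detection (ML under Laplacian noise differs).
    det = np.argmin(np.abs(rx[:, None] - const[None, :]), axis=1)
    print("simulated SER =", np.mean(det != tx))
    ```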

  18. Symbol Error Rate of MPSK over EGK Channels Perturbed by a Dominant Additive Laplacian Noise

    KAUST Repository

    Souri, Hamza

    2015-06-01

    The Laplacian noise has received much attention during the recent years since it affects many communication systems. We consider in this paper the probability of error of an M-ary phase shift keying (PSK) constellation operating over a generalized fading channel in the presence of a dominant additive Laplacian noise. In this context, the decision regions of the receiver are determined using the maximum likelihood and the minimum distance detectors. Once the decision regions are extracted, the resulting symbol error rate expressions are computed and averaged over an Extended Generalized-K fading distribution. Generic closed form expressions of the conditional and the average probability of error are obtained in terms of the Fox’s H function. Simplifications for some special cases of fading are presented and the resulting formulas end up being often expressed in terms of well known elementary functions. Finally, the mathematical formalism is validated using some selected analytical-based numerical results as well as Monte Carlo simulation-based results.

  19. Double symbol error rates for differential detection of narrow-band FM

    Science.gov (United States)

    Simon, M. K.

    1985-01-01

    This paper evaluates the double symbol error rate (average probability of two consecutive symbol errors) in differentially detected narrow-band FM. Numerical results are presented for the special case of MSK with a Gaussian IF receive filter. It is shown that, not unlike similar results previously obtained for the single error probability of such systems, large inaccuracies in predicted performance can occur when intersymbol interference is ignored.

  20. Errors of car wheels rotation rate measurement using roller follower on test benches

    Science.gov (United States)

    Potapov, A. S.; Svirbutovich, O. A.; Krivtsov, S. N.

    2018-03-01

    The article deals with errors in measuring wheel rotation rate on roller test benches, which depend on the speed of the vehicle under test. Monitoring of vehicle performance under operating conditions is performed on roller test benches. Roller test benches are not flawless; they have some drawbacks affecting the accuracy of vehicle performance monitoring. An increase in the base velocity of the vehicle requires an increase in the accuracy of wheel rotation rate monitoring, which determines how accurately the operating mode of a wheel of the tested vehicle can be identified. Ensuring measurement accuracy for the rotation velocity of the rollers is not an issue; the problem arises when measuring the rotation velocity of a car wheel. The higher the rotation velocity of the wheel, the lower the measurement accuracy. At present, wheel rotation frequency monitoring on roller test benches is carried out by follow-up systems whose sensors are rollers following wheel rotation. The rollers of these systems are not kinematically linked to the supporting rollers of the test bench; the roller follower is forced against the wheels of the tested vehicle by means of a spring-lever mechanism. Experience with operating test bench equipment has shown that measurement accuracy is satisfactory at the low speeds of vehicles diagnosed on roller test benches. With rising diagnostic speed, rotation velocity measurement errors occur in both braking and pulling modes because the measuring roller slips on the tire tread. The paper shows oscillograms of changes in wheel rotation velocity and of the measurement system's signals when testing a vehicle on roller test benches at specified speeds.

  1. Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.

    Science.gov (United States)

    Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J

    2012-08-01

    Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.

  2. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    Science.gov (United States)

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however, prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  3. Error rates in forensic DNA analysis: Definition, numbers, impact and communication

    NARCIS (Netherlands)

    Kloosterman, A.; Sjerps, M.; Quak, A.

    2014-01-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and

  4. Classification based upon gene expression data: bias and precision of error rates.

    Science.gov (United States)

    Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L

    2007-06-01

    Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
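
    A minimal sketch of the recommended two-level (nested) cross-validation together with a label-permutation check, on an illustrative synthetic dataset rather than the study's data; with permuted labels the estimated error rate should sit near the trivial baseline (about 0.5 for balanced classes), and a markedly lower value would signal bias.

    ```python
    # Two-level (nested) cross-validation with a permutation bias check (sketch).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=50, random_state=0)

    inner = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0]}, cv=5)  # inner loop: tuning
    outer = cross_val_score(inner, X, y, cv=5)                  # outer loop: estimation
    print("nested-CV error rate     :", 1 - outer.mean())

    rng = np.random.default_rng(0)
    perm = cross_val_score(inner, X, rng.permutation(y), cv=5)  # non-informative labels
    print("permuted-label error rate:", 1 - perm.mean())
    ```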

  5. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.; Ghaeb, Jasim A.; Jazzar, Saleh; Saraereh, Omar A.

    2012-01-01

    In this paper, we derived an efficient simulation method to evaluate the error rate of wireless communication system. Coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate

  6. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    Science.gov (United States)

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  7. Determining Bounds on Assumption Errors in Operational Analysis

    Directory of Open Access Journals (Sweden)

    Neal M. Bengtson

    2014-01-01

    Full Text Available The technique of operational analysis (OA) is used in the study of systems performance, mainly for estimating mean values of various measures of interest, such as the number of jobs at a device and response times. The basic principles of operational analysis allow errors in assumptions to be quantified over a time period. The assumptions which are used to derive the operational analysis relationships are studied. Using Karush-Kuhn-Tucker (KKT) conditions, bounds on error measures of these OA relationships are found. Examples of these bounds are used for representative performance measures to show limits on the difference between true performance values and those estimated by operational analysis relationships. A technique for finding tolerance limits on the bounds is demonstrated with a simulation example.
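
    For context, the operational analysis relationships in question are of the standard form below (textbook operational laws, with U_k the utilization, X_k the throughput and S_k the mean service time of device k, N the mean number in system, R the mean response time, X the system throughput and V_k the visit ratio):

    ```latex
    U_k = X_k S_k   % utilization law
    N   = X R       % Little's law
    X_k = V_k X     % forced flow law
    ```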

  8. The statistical significance of error probability as determined from decoding simulations for long codes

    Science.gov (United States)

    Massey, J. L.

    1976-01-01

    The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
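
    A minimal numerical illustration of the underlying problem: the classical exact (Clopper-Pearson) confidence limits on an error probability from k observed errors in N trials, sketched with hypothetical numbers. This is the standard interval whose notion the paper extends, not the paper's own construction.

    ```python
    # Exact (Clopper-Pearson) confidence limits on an error probability (sketch).
    from scipy.stats import beta

    def clopper_pearson(k, N, level=0.95):
        alpha = 1 - level
        lo = beta.ppf(alpha / 2, k, N - k + 1) if k > 0 else 0.0
        hi = beta.ppf(1 - alpha / 2, k + 1, N - k)
        return lo, hi

    # Two decoding errors in ten million trials already pin the error
    # probability below roughly 1e-6 at 95% confidence.
    print(clopper_pearson(2, 10_000_000))
    ```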

  9. The study of error for analysis in dynamic image from the error of count rates in NaI (Tl) scintillation camera

    International Nuclear Information System (INIS)

    Oh, Joo Young; Kang, Chun Goo; Kim, Jung Yul; Oh, Ki Baek; Kim, Jae Sam; Park, Hoon Hee

    2013-01-01

    This study aimed to evaluate the effect of T1/2 on count rates in the analysis of dynamic scans using a NaI (Tl) scintillation camera, and to suggest a new quality control method based on these effects. We produced a point source with 99mTcO4- of 18.5 to 185 MBq in 2 mL syringes, and acquired 30 frames of dynamic images at 10 to 60 seconds per frame using an Infinia gamma camera (GE, USA). In the second experiment, 90 frames of dynamic images were acquired from a 74 MBq point source by 5 gamma cameras (Infinia 2, Forte 2, Argus 1). There were no significant differences in the average count rates of the sources of 18.5 to 92.5 MBq in the analysis at 10 to 60 seconds/frame with 10-second intervals in the first experiment (p>0.05), but the average count rates were significantly lower for sources above 111 MBq activity at 60 seconds/frame (p<0.01). According to the linear regression of the count rates of the 5 gamma cameras acquired over 90 minutes in the second analysis, the counting efficiency of the fourth gamma camera was the lowest at 0.0064%, and its gradient and coefficient of variation were the highest at 0.0042 and 0.229, respectively. We found no abnormal fluctuation in the χ² test of the count rates (p>0.02), and homogeneity of variance was found among the gamma cameras in Levene's F-test (p>0.05). In the correlation analysis, the only significant correlation was a negative one between counting efficiency and gradient (r=-0.90, p<0.05). Lastly, according to the calculation of the T1/2 error from gradient changes of -0.25% to +0.25%, the error increases when T1/2 is relatively long or the gradient is high. When the value of the fourth camera, which has the highest gradient, was estimated from the above result, no T1/2 error was seen within 60 minutes at that value. In conclusion, scintillation gamma cameras in the medical field require strict quality management of radiation measurement. Especially, we found a

  10. Determinants of stock market development in Nigeria using error ...

    African Journals Online (AJOL)

    DJFLEX

    impact the stock market; more domestic firms should be encouraged to list in the market and .... The result shows that economic growth, financial liberalization policies, and foreign .... inflation and exchange rate (dollar-naira rate) following.

  11. Safe and effective error rate monitors for SS7 signaling links

    Science.gov (United States)

    Schmidt, Douglas C.

    1994-04-01

    This paper describes SS7 error monitor characteristics, discusses the existing SUERM (Signal Unit Error Rate Monitor), and develops the recently proposed EIM (Error Interval Monitor) for higher speed SS7 links. A SS7 error monitor is considered safe if it ensures acceptable link quality and is considered effective if it is tolerant to short-term phenomena. Formal criteria for safe and effective error monitors are formulated in this paper. This paper develops models of changeover transients, the unstable component of queue length resulting from errors. These models are in the form of recursive digital filters. Time is divided into sequential intervals. The filter's input is the number of errors which have occurred in each interval. The output is the corresponding change in transmit queue length. Engineered EIM's are constructed by comparing an estimated changeover transient with a threshold T using a transient model modified to enforce SS7 standards. When this estimate exceeds T, a changeover will be initiated and the link will be removed from service. EIM's can be differentiated from SUERM by the fact that EIM's monitor errors over an interval while SUERM's count errored messages. EIM's offer several advantages over SUERM's, including the fact that they are safe and effective, impose uniform standards in link quality, are easily implemented, and make minimal use of real-time resources.
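
    A schematic sketch of an interval-based monitor of the kind described: a first-order recursive filter tracks the estimated changeover transient from per-interval error counts and initiates a changeover when a threshold T is crossed. The decay constant, gain and threshold below are illustrative placeholders, not engineered SS7 values.

    ```python
    # Schematic error interval monitor: leaky integrator plus threshold (sketch).
    def error_interval_monitor(errors_per_interval, decay=0.9, gain=1.0, threshold=8.0):
        estimate = 0.0  # estimated changeover transient (queue-length units)
        for i, e in enumerate(errors_per_interval):
            estimate = decay * estimate + gain * e  # recursive digital filter
            if estimate > threshold:
                return i  # initiate changeover in this interval
        return None

    print("changeover at interval:", error_interval_monitor([0, 0, 5, 6, 7, 0, 0]))
    ```

    Because the filter integrates errors over intervals rather than counting individual errored messages, short bursts decay away while sustained degradation accumulates past the threshold, which is the tolerance property the paper formalizes.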

  12. Bit Error Rate Analysis for MC-CDMA Systems in Nakagami-m Fading Channels

    Directory of Open Access Journals (Sweden)

    Li Zexian

    2004-01-01

    Full Text Available Multicarrier code division multiple access (MC-CDMA) is a promising technique that combines orthogonal frequency division multiplexing (OFDM) with CDMA. In this paper, based on an alternative expression for the Q-function, the characteristic function and a Gaussian approximation, we present a new practical technique for determining the bit error rate (BER) of multiuser MC-CDMA systems in frequency-selective Nakagami-m fading channels. The results are applicable to systems employing coherent demodulation with maximal ratio combining (MRC) or equal gain combining (EGC). The analysis assumes that different subcarriers experience independent fading channels, which are not necessarily identically distributed. The final average BER is expressed in the form of a single finite range integral and an integrand composed of tabulated functions which can be easily computed numerically. The accuracy of the proposed approach is demonstrated with computer simulations.
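
    The single-integral structure mentioned above can be illustrated with the classic MGF-based form for a simpler system (average BER of coherent BPSK with L-branch MRC over i.i.d. Nakagami-m fading); this is the textbook analogue, not the paper's MC-CDMA expression, and the parameters are illustrative.

    ```python
    # Average BER of BPSK with L-branch MRC over i.i.d. Nakagami-m fading,
    # via the single finite-range MGF integral (sketch of the textbook form).
    import numpy as np
    from scipy.integrate import quad

    def avg_ber_bpsk_mrc_nakagami(gbar, m, L):
        # Integrand is the fading MGF evaluated at -1/sin^2(theta), per branch.
        integrand = lambda t: (1.0 + gbar / (m * np.sin(t) ** 2)) ** (-m * L)
        value, _ = quad(integrand, 0.0, np.pi / 2)
        return value / np.pi

    print(avg_ber_bpsk_mrc_nakagami(gbar=10.0, m=1.5, L=2))  # gbar = mean SNR/branch
    ```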

  13. System care improves trauma outcome: patient care errors dominate reduced preventable death rate.

    Science.gov (United States)

    Thoburn, E; Norris, P; Flores, R; Goode, S; Rodriguez, E; Adams, V; Campbell, S; Albrink, M; Rosemurgy, A

    1993-01-01

    A review of 452 trauma deaths in Hillsborough County, Florida, in 1984 documented that 23% of non-CNS trauma deaths were preventable and occurred because of inadequate resuscitation or delay in proper surgical care. In late 1988 Hillsborough County organized a County Trauma Agency (HCTA) to coordinate trauma care among prehospital providers and state-designated trauma centers. The purpose of this study was to review county trauma deaths after the inception of the HCTA to determine the frequency of preventable deaths. A total of 504 trauma deaths occurring between October 1989 and April 1991 were reviewed. Through committee review, 10 deaths were deemed preventable; 2 occurred outside the trauma system. Of the 10 deaths, 5 preventable deaths occurred late in severely injured patients. The preventable death rate has decreased to 7.0% with system care. The causes of preventable deaths have changed from delayed or inadequate intervention to postoperative care errors.

  14. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin

    2013-05-24

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  15. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin; Genton, Marc G.

    2013-01-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  16. Estimating the annotation error rate of curated GO database sequence annotations

    Directory of Open Access Journals (Sweden)

    Brown Alfred L

    2007-05-01

    Full Text Available Abstract Background Annotations that describe the function of sequences are enormously important to researchers during laboratory investigations and when making computational inferences. However, there has been little investigation into the data quality of sequence function annotations. Here we have developed a new method of estimating the error rate of curated sequence annotations, and applied this to the Gene Ontology (GO) sequence database (GOSeqLite). This method involved artificially adding errors to sequence annotations at known rates, and used regression to model the impact on the precision of annotations based on BLAST matched sequences. Results We estimated the error rate of curated GO sequence annotations in the GOSeqLite database (March 2006) at between 28% and 30%. Annotations made without use of sequence similarity based methods (non-ISS) had an estimated error rate of between 13% and 18%. Annotations made with the use of sequence similarity methodology (ISS) had an estimated error rate of 49%. Conclusion While the overall error rate is reasonably low, it would be prudent to treat all ISS annotations with caution. Electronic annotators that use ISS annotations as the basis of predictions are likely to have higher false prediction rates, and for this reason designers of these systems should consider avoiding ISS annotations where possible. Electronic annotators that use ISS annotations to make predictions should be viewed sceptically. We recommend that curators thoroughly review ISS annotations before accepting them as valid. Overall, users of curated sequence annotations from the GO database should feel assured that they are using a comparatively high quality source of information.

  17. Effects of human errors on the determination of surveillance test interval

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Koo, Bon Hyun

    1990-01-01

    This paper incorporates the effects of human error relevant to the periodic test on the unavailability of the safety system as well as the component unavailability. Two types of possible human error during the test are considered. One is the possibility that a good safety system is inadvertently left in a bad state after the test (Type A human error) and the other is the possibility that a bad safety system goes undetected upon the test (Type B human error). An event tree model is developed for the steady-state unavailability of the safety system to determine the effects of human errors on the component unavailability and the test interval. We perform the reliability analysis of the safety injection system (SIS) by applying the aforementioned two types of human error to the safety injection pumps. Results of various sensitivity analyses show that: (1) the appropriate test interval decreases and the steady-state unavailability increases as the probabilities of both types of human errors increase, and they are far more sensitive to Type A human error than to Type B; and (2) the SIS unavailability increases slightly as the probability of Type B human error increases, and significantly as the probability of Type A human error increases. Therefore, to avoid underestimation, the effects of human error should be incorporated in any system reliability analysis which aims at the relaxation of surveillance test intervals, and Type A human error has the more important effect on the unavailability and the surveillance test interval.
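
    A minimal first-order model of the trade-off described, under assumed notation (λ the component failure rate, T the test interval, q_A and q_B the per-test probabilities of Type A and Type B human error); this is a textbook-style sketch, not the paper's event tree model:

    ```latex
    % Approximate steady-state unavailability (first-order sketch):
    %   lambda*T/2   : random failures awaiting detection at the next test
    %   q_A          : component left failed by the test itself (Type A)
    %   q_B*lambda*T : failed component missed by the test (Type B)
    \bar{U}(T) \approx \frac{\lambda T}{2} + q_A + q_B\,\lambda T
    ```

    In this sketch the q_A term sets a floor that frequent testing cannot remove, which is consistent with the abstract's finding that results are far more sensitive to Type A than to Type B errors; a complete model would add test-downtime terms, which is what creates a finite optimal interval.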

  18. High speed and adaptable error correction for megabit/s rate quantum key distribution.

    Science.gov (United States)

    Dixon, A R; Sato, H

    2014-12-02

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.

  19. Considering the role of time budgets on copy-error rates in material culture traditions: an experimental assessment.

    Science.gov (United States)

    Schillinger, Kerstin; Mesoudi, Alex; Lycett, Stephen J

    2014-01-01

    Ethnographic research highlights that there are constraints placed on the time available to produce cultural artefacts in differing circumstances. Given that copying error, or cultural 'mutation', can have important implications for the evolutionary processes involved in material culture change, it is essential to explore empirically how such 'time constraints' affect patterns of artefactual variation. Here, we report an experiment that systematically tests whether, and how, varying time constraints affect shape copying error rates. A total of 90 participants copied the shape of a 3D 'target handaxe form' using a standardized foam block and a plastic knife. Three distinct 'time conditions' were examined, whereupon participants had either 20, 15, or 10 minutes to complete the task. One aim of this study was to determine whether reducing production time produced a proportional increase in copy error rates across all conditions, or whether the concept of a task specific 'threshold' might be a more appropriate manner to model the effect of time budgets on copy-error rates. We found that mean levels of shape copying error increased when production time was reduced. However, there were no statistically significant differences between the 20 minute and 15 minute conditions. Significant differences were only obtained between conditions when production time was reduced to 10 minutes. Hence, our results more strongly support the hypothesis that the effects of time constraints on copying error are best modelled according to a 'threshold' effect, below which mutation rates increase more markedly. Our results also suggest that 'time budgets' available in the past will have generated varying patterns of shape variation, potentially affecting spatial and temporal trends seen in the archaeological record. Hence, 'time-budgeting' factors need to be given greater consideration in evolutionary models of material culture change.

  20. Errors in determination of irregularity factor for distributed parameters in a reactor core

    International Nuclear Information System (INIS)

    Vlasov, V.A.; Zajtsev, M.P.; Il'ina, L.I.; Postnikov, V.V.

    1988-01-01

    Two types of errors (measurement error and error of regulation of reactor core distributed parameters), often met during operation of high power density reactors, are analyzed. Consideration is given to errors in determination of the irregularity factor for the radial power distribution for a hot channel, both under conditions of its minimization and under conditions where regulation of the relative power distribution is absent. The first regime is investigated by the method of statistical experiment using a program for optimization of neutron-physical calculations, taking as an example a large channel-type water-cooled graphite-moderated reactor. It is concluded that it is necessary to take into account the complex interaction of the measurement error with the error of parameter profiling over the core, both under conditions of continuous manual or automatic parameter regulation (optimization) and under conditions without regulation, namely at an a priori equalized distribution. When evaluating the error of distributed parameter control

  1. Voice recognition versus transcriptionist: error rates and productivity in MRI reporting.

    Science.gov (United States)

    Strahan, Rodney H; Schneider-Kolsky, Michal E

    2010-10-01

    Despite the frequent introduction of voice recognition (VR) into radiology departments, little evidence still exists about its impact on workflow, error rates and costs. We designed a study to compare typographical errors, turnaround times (TAT) from reported to verified and productivity for VR-generated reports versus transcriptionist-generated reports in MRI. Fifty MRI reports generated by VR and 50 finalized MRI reports generated by the transcriptionist, of two radiologists, were sampled retrospectively. Two hundred reports were scrutinised for typographical errors and the average TAT from dictated to final approval. To assess productivity, the average MRI reports per hour for one of the radiologists was calculated using data from extra weekend reporting sessions. Forty-two percent and 30% of the finalized VR reports for each of the radiologists investigated contained errors. Only 6% and 8% of the transcriptionist-generated reports contained errors. The average TAT for VR was 0 h, and for the transcriptionist reports TAT was 89 and 38.9 h. Productivity was calculated at 8.6 MRI reports per hour using VR and 13.3 MRI reports using the transcriptionist, representing a 55% increase in productivity. Our results demonstrate that VR is not an effective method of generating reports for MRI. Ideally, we would have the report error rate and productivity of a transcriptionist and the TAT of VR. © 2010 The Authors. Journal of Medical Imaging and Radiation Oncology © 2010 The Royal Australian and New Zealand College of Radiologists.

  2. Voice recognition versus transcriptionist: error rates and productivity in MRI reporting

    International Nuclear Information System (INIS)

    Strahan, Rodney H.; Schneider-Kolsky, Michal E.

    2010-01-01

    Full text: Purpose: Despite the frequent introduction of voice recognition (VR) into radiology departments, little evidence still exists about its impact on workflow, error rates and costs. We designed a study to compare typographical errors, turnaround times (TAT) from reported to verified and productivity for VR-generated reports versus transcriptionist-generated reports in MRI. Methods: Fifty MRI reports generated by VR and 50 finalised MRI reports generated by the transcriptionist, of two radiologists, were sampled retrospectively. Two hundred reports were scrutinised for typographical errors and the average TAT from dictated to final approval. To assess productivity, the average MRI reports per hour for one of the radiologists was calculated using data from extra weekend reporting sessions. Results: Forty-two percent and 30% of the finalised VR reports for each of the radiologists investigated contained errors. Only 6% and 8% of the transcriptionist-generated reports contained errors. The average TAT for VR was 0 h, and for the transcriptionist reports TAT was 89 and 38.9 h. Productivity was calculated at 8.6 MRI reports per hour using VR and 13.3 MRI reports using the transcriptionist, representing a 55% increase in productivity. Conclusion: Our results demonstrate that VR is not an effective method of generating reports for MRI. Ideally, we would have the report error rate and productivity of a transcriptionist and the TAT of VR.

  3. Invariance of the bit error rate in the ancilla-assisted homodyne detection

    International Nuclear Information System (INIS)

    Yoshida, Yuhsuke; Takeoka, Masahiro; Sasaki, Masahide

    2010-01-01

    We investigate the minimum achievable bit error rate of the discrimination of binary coherent states with the help of arbitrary ancillary states. We adopt homodyne measurement with a common phase of the local oscillator and classical feedforward control. After one ancillary state is measured, its outcome is fed forward to the preparation of the next ancillary state and the tuning of the next mixing with the signal. It is shown that the minimum bit error rate of the system is invariant under the following operations: feedforward control, deformations, and introduction of any ancillary state. We also discuss a possible generalization of the homodyne detection scheme.

  4. Analytical expression for the bit error rate of cascaded all-optical regenerators

    DEFF Research Database (Denmark)

    Mørk, Jesper; Öhman, Filip; Bischoff, S.

    2003-01-01

    We derive an approximate analytical expression for the bit error rate of cascaded fiber links containing all-optical 2R-regenerators. A general analysis of the interplay between noise due to amplification and the degree of reshaping (nonlinearity) of the regenerator is performed.

  5. The determinants of credit rating: brazilian evidence

    OpenAIRE

    Murcia, Flávia Cruz de Souza; Dal-Ri Murcia, Fernando; Rover, Suliani; Borba, José Alonso

    2014-01-01

    This study attempts to identify the determinant factors of credit rating in Brazil. The relevance of this proposal is based on the importance of the subject as well as the uniqueness of the Brazilian market. As for originality, the great majority of previous studies regarding credit rating have been developed in the US, UK and Australia; therefore the effect on other markets is still unclear, especially in emerging markets, like Brazil. We’ve used a Generalized Estimating Equa...

  6. Band extension in digital methods of transfer function determination – signal conditioners asymmetry error corrections

    Directory of Open Access Journals (Sweden)

    Zbigniew Staroszczyk

    2014-12-01

    Full Text Available In the paper, a calibration method for error correction in transfer function determination with the use of DSP is proposed. The correction limits or eliminates the influence of the transfer function input/output signal conditioners on the estimated transfer function of the investigated object. The method exploits a frequency-domain descriptor of the conditioning paths, found during a training observation made on a known reference object. Keywords: transfer function, band extension, error correction, phase errors

  7. Dose-rate determination by radiochemical analysis

    International Nuclear Information System (INIS)

    Mangini, A.; Pernicka, E.; Wagner, G.A.

    1983-01-01

    At the previous TL Specialist Seminar we had suggested that α-counting is an unsuitable technique for dose-rate determination due to overcounting effects. This is confirmed by combining α-counting, neutron activation analysis, fission track counting and α-spectrometry on various pottery samples. One result of this study is that disequilibrium in the uranium decay chain alone cannot account for the observed discrepancies between α-counting and chemical analysis. Therefore we propose for routine dose-rate determination in TL dating to apply chemical analysis of the radioactive elements supplemented by an α-spectrometric equilibrium check. (author)

  8. Cold Vacuum Drying (CVD) OCRWM Loop Error Determination

    International Nuclear Information System (INIS)

    PHILIPP, B.L.

    2000-01-01

    Characterization is specifically identified by the Richland Operations Office (RL) for the Office of Civilian Radioactive Waste Management (OCRWM) of the US Department of Energy (DOE) as requiring application of the requirements in the Quality Assurance Requirements and Description (QARD) (RW-0333P DOE 1997a). Those analyses that provide information that is necessary for repository acceptance require application of the QARD. The cold vacuum drying (CVD) project identified the loops that measure, display, and record multi-canister overpack (MCO) vacuum pressure and Tempered Water (TW) temperature data as providing OCRWM data per Application of the Office of Civilian Radioactive Waste Management (OCRWM) Quality Assurance Requirements to the Hanford Spent Nuclear Fuel Project HNF-SD-SNF-RPT-007. Vacuum pressure transmitters (PT 1*08, 1*10) and TW temperature transmitters (TIT-3*05, 3*12) are used to verify drying and to determine the water content within the MCO after CVD

  9. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    Science.gov (United States)

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed

  10. Error rates of a full-duplex system over EGK fading channels subject to laplacian interference

    KAUST Repository

    Soury, Hamza

    2017-07-31

    This paper develops a mathematical paradigm to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). Particularly, we study the dominant intra-cell interferer problem that appears between HD users scheduled on the same FD-channel. The distribution of the dominant interference is first characterized via its distribution function, which is derived in closed-form. Assuming Nakagami-m fading, the probability of error for different modulation schemes is studied and a unified closed-form expression for the average symbol error rate is derived. To this end, we show the effective downlink throughput gain, harvested by employing FD communication at a BS that serves HD users, as a function of the signal-to-interference-ratio when compared to an idealized HD interference and noise free BS operation.

  11. The determinants of exchange rate in Croatia

    Directory of Open Access Journals (Sweden)

    Manuel BENAZIC

    2016-06-01

    The dilemma for every country with an independent monetary policy is which kind of exchange rate arrangement should be applied. Through exchange rate policy, countries can influence their economies, i.e. price stability and export competitiveness. Croatia is a new EU member state; it has its own monetary policy and currency, but it is on the way to euro introduction. Given the experience of the beginning of the 1990s, when Croatia faced serious monetary instability and hyperinflation, the goal of the Croatian National Bank (CNB) is to ensure price stability, and one way to do so is through exchange rate policy. Croatia, as a small and open economy, has applied a managed floating exchange rate regime. The exchange rate is determined primarily by foreign exchange supply and demand on the foreign exchange market, with occasional market interventions by the CNB. Therefore, in order to maintain exchange rate stability, policymakers must be able to recognize how changes in these factors affect changes in the exchange rate. This research aims to find a relationship among the main sources of foreign currency inflow and outflow and the level of the exchange rate in Croatia. The analysis is carried out using the bounds testing (ARDL) approach to co-integration. The results indicate the existence of a stable co-integration relationship between the observed variables, whereby an increase in the majority of variables leads to an exchange rate appreciation.
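
    As a concrete illustration of this kind of co-integration check, the sketch below runs the simpler Engle-Granger test from statsmodels on synthetic series; the paper itself uses the ARDL bounds test, and the series and coefficients here are invented stand-ins for the Croatian data:

        import numpy as np
        from statsmodels.tsa.stattools import coint

        rng = np.random.default_rng(42)
        n = 240  # months of synthetic data

        # Synthetic "foreign currency inflow" series (a random walk).
        inflow = np.cumsum(rng.normal(0, 1, n))
        # Synthetic exchange rate: tied to the inflow plus stationary noise,
        # so the two series share a stable long-run relationship.
        fx_rate = 0.5 * inflow + rng.normal(0, 0.5, n)

        # Engle-Granger co-integration test: a low p-value suggests a stable
        # long-run relationship between the two series.
        t_stat, p_value, crit = coint(fx_rate, inflow)
        print(f"t = {t_stat:.2f}, p = {p_value:.3f}")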

  12. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates.

    Science.gov (United States)

    Fottrell, Edward; Byass, Peter; Berhane, Yemane

    2008-03-25

    As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. The low sensitivity of parameter estimates and regression analyses to significant amounts of
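
    The error-injection comparison the study makes lends itself to a compact sketch. The code below, a hypothetical stand-in for the BRHP programmes, corrupts a binary risk factor in synthetic records and refits a Poisson regression to show how the estimated mortality rate ratio responds; the data, the true rate ratio of 1.5, and the 20% error level are all invented:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 200_000

        # Synthetic surveillance records: a binary exposure with a true
        # mortality rate ratio of 1.5.
        exposure = rng.integers(0, 2, n)
        deaths = rng.poisson(0.01 * np.exp(np.log(1.5) * exposure))

        def rate_ratio(x):
            """Poisson regression of deaths on the (possibly corrupted) exposure."""
            fit = sm.GLM(deaths, sm.add_constant(x.astype(float)),
                         family=sm.families.Poisson()).fit()
            return float(np.exp(fit.params[1]))

        # Inject random errors: flip the exposure flag in 20% of records.
        flip = rng.random(n) < 0.20
        corrupted = np.where(flip, 1 - exposure, exposure)

        print("rate ratio, clean data:      ", round(rate_ratio(exposure), 2))
        print("rate ratio, 20% random error:", round(rate_ratio(corrupted), 2))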

  13. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates

    Directory of Open Access Journals (Sweden)

    Berhane Yemane

    2008-03-01

    Background: As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. Methods: This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. Results: The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. Conclusion: The low sensitivity of parameter

  14. Benefits and risks of using smart pumps to reduce medication error rates: a systematic review.

    Science.gov (United States)

    Ohashi, Kumiko; Dalleur, Olivia; Dykes, Patricia C; Bates, David W

    2014-12-01

    Smart infusion pumps have been introduced to prevent medication errors and have been widely adopted nationally in the USA, though they are not always used in Europe or other regions. Despite widespread usage of smart pumps, intravenous medication errors have not been fully eliminated. Through a systematic review of recent studies and reports regarding smart pump implementation and use, we aimed to identify the impact of smart pumps on error reduction and on the complex process of medication administration, and strategies to maximize the benefits of smart pumps. The medical literature related to the effects of smart pumps on improving patient safety was searched in PUBMED, EMBASE, and the Cochrane Central Register of Controlled Trials (CENTRAL) (2000-2014), and relevant papers were selected by two researchers. After the literature search, 231 papers were identified and the full texts of 138 articles were assessed for eligibility. Of these, 22 were included after removal of papers that did not meet the inclusion criteria. We assessed both the benefits and negative effects of smart pumps from these studies. One of the benefits of using smart pumps was intercepting errors such as wrong rate, wrong dose, and pump setting errors. Other benefits include reduction of adverse drug event rates, practice improvements, and cost effectiveness. Meanwhile, the current issues or negative effects related to smart pumps were low compliance rates in using them, the overriding of soft alerts, non-intercepted errors, and the possibility of using the wrong drug library. The literature suggests that smart pumps reduce but do not eliminate programming errors. Although the hard limits of a drug library play a main role in intercepting medication errors, soft limits were still not as effective as hard limits because of high override rates. Compliance in using smart pumps is key to effectively preventing errors. Opportunities for improvement include upgrading drug

  15. Error compensation of single-antenna attitude determination using GNSS for Low-dynamic applications

    Science.gov (United States)

    Chen, Wen; Yu, Chao; Cai, Miaomiao

    2017-04-01

    The GNSS-based single-antenna pseudo-attitude determination method has attracted more and more attention in the field of high-dynamic navigation due to its low cost, low system complexity, and freedom from temporally accumulated errors. Related research indicates that this method can be an important complement or even an alternative to traditional sensors for general accuracy requirements (such as small UAV navigation). The application of the single-antenna attitude determination method to low-dynamic carriers has only just started. Different from the traditional multi-antenna attitude measurement technique, the pseudo-attitude determination method calculates the rotation angle of the carrier trajectory relative to the earth. Thus it inevitably contains some deviations compared with the real attitude angle. In low-dynamic applications these deviations are particularly noticeable and may not be ignored. The causes of the deviations can be roughly classified into three categories: the measurement error, the offset error, and the lateral error. Empirical correction strategies for the former two errors have been proposed in previous studies, but lack theoretical support. In this paper, we provide a quantitative description of the three types of errors and discuss the related error compensation methods. Vehicle and shipborne experiments were carried out to verify the feasibility of the proposed correction methods. Keywords: Error compensation; Single-antenna; GNSS; Attitude determination; Low-dynamic

  16. Frequency and determinants of drug administration errors in the intensive care unit

    NARCIS (Netherlands)

    van den Bemt, PMLA; Fijn, R; van der Voort, PHJ; Gossen, AA; Egberts, TCG; Brouwers, JRBJ

    Objective: The study aimed to identify both the frequency and the determinants of drug administration errors in the intensive care unit. Design: Administration errors were detected by using the disguised-observation technique (observation of medication administrations by nurses, without revealing

  17. DETERMINATION OF THE SPECIFIC GROWTH RATE ON ...

    African Journals Online (AJOL)

    Sewage generation is one of the pressing problems Nigerians encounter on a daily basis, mostly in urbanized areas where factories and industries are located. This paper is aimed at determining the specific growth rate “K” of biological activity in cassava wastewater during degradation, using the Michaelis-Menten equation.

  18. Kurzweil Reading Machine: A Partial Evaluation of Its Optical Character Recognition Error Rate.

    Science.gov (United States)

    Goodrich, Gregory L.; And Others

    1979-01-01

    A study designed to assess the ability of the Kurzweil reading machine (a speech reading device for the visually handicapped) to read three different type styles produced by five different means indicated that the machines tested had different error rates depending upon the means of producing the copy and upon the type style used. (Author/CL)

  19. A novel multitemporal InSAR model for joint estimation of deformation rates and orbital errors

    KAUST Repository

    Zhang, Lei; Ding, Xiaoli; Lu, Zhong; Jung, Hyungsup; Hu, Jun; Feng, Guangcai

    2014-01-01

    be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long

  20. Minimum Symbol Error Rate Detection in Single-Input Multiple-Output Channels with Markov Noise

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.

    2005-01-01

    Minimum symbol error rate detection in Single-Input Multiple-Output (SIMO) channels with Markov noise is presented. The special case of zero-mean Gauss-Markov noise is examined closer as it only requires knowledge of the second-order moments. In this special case, it is shown that optimal detection...

  1. Error rates of a full-duplex system over EGK fading channels subject to Laplacian interference

    KAUST Repository

    Soury, Hamza; Elsawy, Hesham; Alouini, Mohamed-Slim

    2017-01-01

    modulation schemes is studied and a unified closed-form expression for the average symbol error rate is derived. To this end, we show the effective downlink throughput gain, harvested by employing FD communication at a BS that serves HD users, as a function

  2. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    International Nuclear Information System (INIS)

    Kanphet, J; Suriyapee, S; Sanghangthum, T; Kumkhwao, J; Wisetrintong, M; Dumrongkijudom, N

    2016-01-01

    The purpose of this study was to determine patient setup uncertainties in deep inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motions. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motions were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motions were 1.18 mm and 0.53 mm, respectively. The interfraction motion was more pronounced than the intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, reflecting the stability of organ position under DIBH. The systematic error is likewise about half of the random error, because the reproducibility of a modern linac reduces systematic uncertainty effectively, while random errors remain uncontrollable. (paper)
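
    One common convention for this decomposition (the population systematic error as the spread of per-patient mean deviations, the random error as the root mean square of per-patient spreads) can be sketched as below; the paper does not spell out its formulas, so this convention and the synthetic deviations are assumptions:

        import numpy as np

        rng = np.random.default_rng(7)

        # Synthetic setup deviations (mm) for 6 patients x 30 fractions
        # in one direction (e.g. vertical): a per-patient shift plus
        # fraction-to-fraction scatter.
        true_means = rng.normal(0.0, 0.5, 6)
        devs = true_means[:, None] + rng.normal(0.0, 1.0, (6, 30))

        patient_means = devs.mean(axis=1)
        # Systematic error Sigma: spread of the per-patient mean deviations.
        sigma_sys = patient_means.std(ddof=1)
        # Random error sigma: root mean square of the per-patient SDs.
        sigma_rand = np.sqrt((devs.std(axis=1, ddof=1) ** 2).mean())

        print(f"systematic = {sigma_sys:.2f} mm, random = {sigma_rand:.2f} mm")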

  3. Error analysis of the microradiographical determination of mineral content in mineralised tissue slices

    International Nuclear Information System (INIS)

    Jong, E. de J. de; Bosch, J.J. ten

    1985-01-01

    The microradiographic method, used to measure the mineral content in slices of mineralised tissues as a function of position, is analysed. The total error in the measured mineral content is split into systematic errors per microradiogram and random noise errors. These errors are measured quantitatively. Predominant contributions to the systematic errors appear to be x-ray beam inhomogeneity, the determination of the step wedge thickness and stray light in the densitometer microscope, while the noise errors are influenced by the choice of film, the value of the optical film transmission of the microradiographic image and the area of the densitometer window. Optimisation criteria are given. The authors used these criteria, together with the requirement that the method be fast and easy, to build an optimised microradiographic system. (author)

  4. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such designs can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, which lead to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
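
    The mechanism behind this inflation can be demonstrated with a small Monte Carlo. The sketch below is a two-arm simplification (the paper treats multi-arm designs) with an invented adaptation rule; under H0 the naive pooled z-test then rejects more often than its nominal 2.5% level:

        import numpy as np

        rng = np.random.default_rng(0)
        reps, n1, crit = 200_000, 50, 1.96

        # Stage-1 z-statistics under H0 (no treatment effect).
        z1 = rng.normal(size=reps)

        # Illustrative adaptation rule: keep the second stage small when the
        # interim looks promising (so the large z1 keeps most of its weight),
        # and make it large otherwise (a fresh, heavily weighted second chance).
        n2 = np.where(z1 > 1.0, 10, 200)

        # Stage-2 z-statistics under H0, then a naive pooled z-test that
        # ignores the data-driven choice of n2.
        z2 = rng.normal(size=reps)
        z_naive = (np.sqrt(n1) * z1 + np.sqrt(n2) * z2) / np.sqrt(n1 + n2)

        print("naive type 1 error:", (z_naive > crit).mean(), "(nominal 0.025)")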

  5. Competence in Streptococcus pneumoniae is regulated by the rate of ribosomal decoding errors.

    Science.gov (United States)

    Stevens, Kathleen E; Chang, Diana; Zwack, Erin E; Sebert, Michael E

    2011-01-01

    Competence for genetic transformation in Streptococcus pneumoniae develops in response to accumulation of a secreted peptide pheromone and was one of the initial examples of bacterial quorum sensing. Activation of this signaling system induces not only expression of the proteins required for transformation but also the production of cellular chaperones and proteases. We have shown here that activity of this pathway is sensitively responsive to changes in the accuracy of protein synthesis that are triggered by either mutations in ribosomal proteins or exposure to antibiotics. Increasing the error rate during ribosomal decoding promoted competence, while reducing the error rate below the baseline level repressed the development of both spontaneous and antibiotic-induced competence. This pattern of regulation was promoted by the bacterial HtrA serine protease. Analysis of strains with the htrA (S234A) catalytic site mutation showed that the proteolytic activity of HtrA selectively repressed competence when translational fidelity was high but not when accuracy was low. These findings redefine the pneumococcal competence pathway as a response to errors during protein synthesis. This response has the capacity to address the immediate challenge of misfolded proteins through production of chaperones and proteases and may also be able to address, through genetic exchange, upstream coding errors that cause intrinsic protein folding defects. The competence pathway may thereby represent a strategy for dealing with lesions that impair proper protein coding and for maintaining the coding integrity of the genome. The signaling pathway that governs competence in the human respiratory tract pathogen Streptococcus pneumoniae regulates both genetic transformation and the production of cellular chaperones and proteases. The current study shows that this pathway is sensitively controlled in response to changes in the accuracy of protein synthesis. Increasing the error rate during

  6. Performance Analysis for Bit Error Rate of DS- CDMA Sensor Network Systems with Source Coding

    Directory of Open Access Journals (Sweden)

    Haider M. AlSabbagh

    2012-03-01

    The minimum energy (ME) coding scheme combined with a DS-CDMA wireless sensor network is analyzed in order to reduce the energy consumed and the multiple access interference (MAI) in relation to the number of users (receivers). Minimum energy coding exploits redundant bits to save power, utilizing an RF link and on-off keying modulation. The relations are presented and discussed for several levels of expected channel error, in terms of the bit error rate and the SNR as functions of the number of users (receivers).

  7. FPGA-based Bit-Error-Rate Tester for SEU-hardened Optical Links

    CERN Document Server

    Detraz, S; Moreira, P; Papadopoulos, S; Papakonstantinou, I; Seif El Nasr, S; Sigaud, C; Soos, C; Stejskal, P; Troska, J; Versmissen, H

    2009-01-01

    The next generation of optical links for future High-Energy Physics experiments will require components qualified for use in radiation-hard environments. To cope with radiation induced single-event upsets, the physical layer protocol will include Forward Error Correction (FEC). Bit-Error-Rate (BER) testing is a widely used method to characterize digital transmission systems. In order to measure the BER with and without the proposed FEC, simultaneously on several devices, a multi-channel BER tester has been developed. This paper describes the architecture of the tester, its implementation in a Xilinx Virtex-5 FPGA device and discusses the experimental results.
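
    The core of any such BER tester, stripped of the FPGA and FEC specifics, is comparing a known pattern against the received stream and counting mismatches. A software stand-in, with an arbitrary injected error probability:

        import numpy as np

        rng = np.random.default_rng(3)
        n_bits = 1_000_000

        # Known test pattern (stand-in for a hardware PRBS generator).
        tx = rng.integers(0, 2, n_bits, dtype=np.uint8)

        # Channel model: flip bits with probability 1e-4 (arbitrary here;
        # radiation-induced upsets would appear as extra flips).
        flips = rng.random(n_bits) < 1e-4
        rx = tx ^ flips.astype(np.uint8)

        errors = int(np.count_nonzero(tx != rx))
        print(f"BER = {errors / n_bits:.2e} ({errors} errors in {n_bits} bits)")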

  8. A minimum bit error-rate detector for amplify and forward relaying systems

    KAUST Repository

    Ahmed, Qasim Zeeshan; Alouini, Mohamed-Slim; Aissa, Sonia

    2012-01-01

    In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system communicating with the assistance of L relays. The major goal of this detector is to improve the bit error rate (BER) performance of the system. The complexity of the system is further reduced by implementing this detector adaptively. The proposed detector is free from channel estimation. Our results demonstrate that the proposed detector is capable of achieving a gain of more than 1 dB at a BER of 10^-5 as compared to the conventional minimum mean square error detector when communicating over a correlated Rayleigh fading channel. © 2012 IEEE.

  10. Hessian matrix approach for determining error field sensitivity to coil deviations

    Science.gov (United States)

    Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; Song, Yuntao; Wan, Yuanxi

    2018-05-01

    The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code (Zhu et al 2018 Nucl. Fusion 58 016008) is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
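
    The numerical idea, forming the Hessian of a scalar cost over coil degrees of freedom and reading sensitivities off its eigen-decomposition, can be sketched with a toy cost function; the quadratic below merely stands in for the field-error cost that FOCUS evaluates analytically:

        import numpy as np

        # Toy stand-in for the field-error cost as a function of coil
        # degrees of freedom x.
        A = np.array([[4.0, 1.0, 0.0],
                      [1.0, 0.5, 0.1],
                      [0.0, 0.1, 0.05]])
        cost = lambda x: 0.5 * x @ A @ x

        def hessian_fd(f, x0, h=1e-5):
            """Central finite-difference Hessian of f at x0."""
            n = len(x0)
            H = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
                    H[i, j] = (f(x0 + e_i + e_j) - f(x0 + e_i - e_j)
                               - f(x0 - e_i + e_j) + f(x0 - e_i - e_j)) / (4 * h * h)
            return H

        H = hessian_fd(cost, np.zeros(3))
        eigvals, eigvecs = np.linalg.eigh(H)
        # The largest eigenvalue marks the coil-displacement direction to
        # which the error field is most sensitive.
        print("eigenvalues:", np.round(eigvals, 3))
        print("most sensitive direction:", np.round(eigvecs[:, -1], 3))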

  11. Linear transceiver design for nonorthogonal amplify-and-forward protocol using a bit error rate criterion

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2014-04-01

    The ever growing demand for higher data rates can now be addressed by exploiting cooperative diversity. This form of diversity has become a fundamental technique for achieving spatial diversity by exploiting the presence of idle users in the network. This has led to new challenges in terms of designing new protocols and detectors for cooperative communications. Among various amplify-and-forward (AF) protocols, the half duplex non-orthogonal amplify-and-forward (NAF) protocol is superior to other AF schemes in terms of error performance and capacity. However, this superiority is achieved at the cost of higher receiver complexity. Furthermore, in order to exploit the full diversity of the system an optimal precoder is required. In this paper, an optimal joint linear transceiver is proposed for the NAF protocol. This transceiver operates on the principle of minimum bit error rate (BER), and is referred to as the joint bit error rate (JBER) detector. The BER performance of the JBER detector is superior to that of existing linear detectors such as channel inversion, maximal ratio combining, the biased maximum likelihood detector, and the minimum mean square error detector. The proposed transceiver also outperforms previous precoders designed for the NAF protocol. © 2002-2012 IEEE.

  12. The assessment of cognitive errors using an observer-rated method.

    Science.gov (United States)

    Drapeau, Martin

    2014-01-01

    Cognitive Errors (CEs) are a key construct in cognitive behavioral therapy (CBT). Integral to CBT is that individuals with depression process information in an overly negative or biased way, and that this bias is reflected in specific depressotypic CEs which are distinct from normal information processing. Despite the importance of this construct in CBT theory, practice, and research, few methods are available to researchers and clinicians to reliably identify CEs as they occur. In this paper, the author presents a rating system, the Cognitive Error Rating Scale, which can be used by trained observers to identify and assess the cognitive errors of patients or research participants in vivo, i.e., as they are used or reported by the patients or participants. The method is described, including some of the more important rating conventions to be considered when using the method. This paper also describes the 15 cognitive errors assessed, and the different summary scores, including valence of the CEs, that can be derived from the method.

  13. Parental Cognitive Errors Mediate Parental Psychopathology and Ratings of Child Inattention.

    Science.gov (United States)

    Haack, Lauren M; Jiang, Yuan; Delucchi, Kevin; Kaiser, Nina; McBurnett, Keith; Hinshaw, Stephen; Pfiffner, Linda

    2017-09-01

    We investigate the Depression-Distortion Hypothesis in a sample of 199 school-aged children with ADHD-Predominantly Inattentive presentation (ADHD-I) by examining relations and cross-sectional mediational pathways between parental characteristics (i.e., levels of parental depressive and ADHD symptoms) and parental ratings of child problem behavior (inattention, sluggish cognitive tempo, and functional impairment) via parental cognitive errors. Results demonstrated a positive association between parental factors and parental ratings of inattention, as well as a mediational pathway between parental depressive and ADHD symptoms and parental ratings of inattention via parental cognitive errors. Specifically, higher levels of parental depressive and ADHD symptoms predicted higher levels of cognitive errors, which in turn predicted higher parental ratings of inattention. Findings provide evidence for core tenets of the Depression-Distortion Hypothesis, which state that parents with high rates of psychopathology hold negative schemas for their child's behavior and subsequently, report their child's behavior as more severe. © 2016 Family Process Institute.

  14. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    Directory of Open Access Journals (Sweden)

    Wei He

    A method of evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) is presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 (errors/particle/cm^2), while the MTTF is approximately 110.7 h.

  15. Theoretical-and experimental analysis of the errors involved in the wood moisture determination by gamma-ray attenuation

    International Nuclear Information System (INIS)

    Aguiar, O.

    1983-01-01

    The sources of errors in wood moisture determination by gamma-ray attenuation were sought. Equations were proposed for determining errors and for ideal sample thickness. A series of measurements of moisture content in wood samples of Pinus oocarpa was made and the experimental errors were compared with the theoretical errors. (Author) [pt

  16. Error-Rate Bounds for Coded PPM on a Poisson Channel

    Science.gov (United States)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.
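
    For orientation, the underlying channel model can be simulated directly: the sketch below estimates the symbol error rate of uncoded M-ary PPM on a Poisson channel, where maximum-likelihood detection reduces to picking the slot with the largest photon count. The signal and background means are arbitrary, and the coded (APPM) case treated by the bounds is not modeled:

        import numpy as np

        rng = np.random.default_rng(11)
        M, n_symbols = 16, 200_000
        ns, nb = 5.0, 0.2  # mean signal / background photons per slot (arbitrary)

        # Transmit symbol 0 without loss of generality; every slot sees
        # Poisson background, and the signal slot gets the extra mean ns.
        counts = rng.poisson(nb, size=(n_symbols, M))
        counts[:, 0] += rng.poisson(ns, size=n_symbols)

        # ML detection for the Poisson channel: choose the slot with the
        # largest count (ties resolved arbitrarily by argmax).
        decisions = counts.argmax(axis=1)
        print("uncoded PPM symbol error rate:", (decisions != 0).mean())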

  17. Symbol error rate performance evaluation of the LM37 multimegabit telemetry modulator-demodulator unit

    Science.gov (United States)

    Malek, H.

    1981-01-01

    The LM37 multimegabit telemetry modulator-demodulator unit was tested for evaluation of its symbol error rate (SER) performance. Using an automated test setup, the SER tests were carried out at various symbol rates and signal-to-noise ratios (SNR), ranging from +10 to -10 dB. With the aid of a specially designed error detector and a stabilized signal and noise summation unit, measurement of the SER at low SNR was possible. The results of the tests show that at symbol rates below 20 megasymbols per second (MS/s) and input SNR above -6 dB, the SER performance of the modem is within the specified 0.65 to 1.5 dB of the theoretical error curve. At symbol rates above 20 MS/s, the specification is met at SNRs down to -2 dB. The results of the SER tests are presented with the description of the test setup and the measurement procedure.

  18. The Determinants of Credit Rating: Brazilian Evidence

    Directory of Open Access Journals (Sweden)

    Flávia Cruz de Souza Murcia

    2014-04-01

    This study attempts to identify the determinant factors of credit rating in Brazil. The relevance of this proposal is based on the importance of the subject as well as the uniqueness of the Brazilian market. As for originality, the great majority of previous studies regarding credit rating have been developed in the US, UK and Australia; therefore the effect on other markets is still unclear, especially in emerging markets like Brazil. We used a Generalized Estimating Equations (GEE) model considering a panel structure with a categorical dependent variable (credit rating) and ten independent variables: leverage, profitability, size, financial coverage, growth, liquidity, corporate governance, control, financial market performance and internationalization. The sample consisted of 153 rating observations during the period 1997-2011 for a total of 49 public firms operating in the Brazilian market. Results showed that leverage and internationalization are significant at the 1% level in explaining credit rating. Performance in the financial market was significant at the 5% level; profitability and growth were also statistically significant, but at a 10% significance level.

  19. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    Science.gov (United States)

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression, and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits, and the inflation increased as the minor allele frequency decreased. The inflation of average type I error rates also increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
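
    The basic experiment is easy to replicate in miniature: simulate a rare null SNV, regress normally and non-normally distributed traits on it, and tabulate how often p < α. The sketch below uses invented parameters (n, MAF, gamma shape); per the record, the inflation for the skewed trait is modest at α = 0.05 and grows as the threshold and MAF shrink:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        n, reps, maf, alpha = 1000, 5000, 0.005, 0.05

        hits = {"normal": 0, "gamma": 0}
        for _ in range(reps):
            g = rng.binomial(2, maf, n)        # rare SNV with no true effect
            while g.std() == 0:                # guard: need a polymorphic site
                g = rng.binomial(2, maf, n)
            traits = {"normal": rng.normal(size=n),
                      "gamma": rng.gamma(1.0, size=n)}  # skewed trait
            for name, y in traits.items():
                hits[name] += stats.linregress(g, y).pvalue < alpha

        for name, k in hits.items():
            print(f"{name:6s} trait: type I error = {k / reps:.3f} "
                  f"(nominal {alpha})")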

  20. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.

    2012-12-06

    In this paper, we derive an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate the system performance under very realistic Nakagami-m fading and additive white Gaussian noise channel conditions. The accuracy of the obtained results is verified by running the simulation with a 95% confidence interval. We see that as the number of simulation runs N increases, the simulated error rate approaches the actual one and the confidence interval narrows. Hence our results are expected to be of significant practical use for such scenarios. © 2012 Springer Science+Business Media New York.
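
    The construction of such a confidence interval can be sketched for the simplest case, BPSK over AWGN with perfect phase recovery (the paper adds Nakagami-m fading and phase error on top of this); the 95% interval below is the usual binomial normal approximation:

        import numpy as np

        rng = np.random.default_rng(2)
        N, ebn0_db = 1_000_000, 6.0

        # BPSK over AWGN: unit-energy antipodal symbols plus Gaussian noise
        # whose variance is set by the chosen Eb/N0.
        ebn0 = 10 ** (ebn0_db / 10)
        bits = rng.integers(0, 2, N)
        noise = rng.normal(0, np.sqrt(1 / (2 * ebn0)), N)
        rx = (2 * bits - 1) + noise
        errors = np.count_nonzero((rx > 0).astype(int) != bits)

        p_hat = errors / N
        half_width = 1.96 * np.sqrt(p_hat * (1 - p_hat) / N)  # 95% CI half-width
        print(f"BER ~ {p_hat:.2e} +/- {half_width:.1e}")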

  1. Novel relations between the ergodic capacity and the average bit error rate

    KAUST Repository

    Yilmaz, Ferkan

    2011-11-01

    Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems, and recent research has shown the strong impact of designing and implementing wireless technologies around these two performance indicators. However, to the best of our knowledge, direct links between these two performance indicators have not previously been proposed explicitly in the literature. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.

  2. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    Science.gov (United States)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

    An accurate approach to computing the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered, and a simple RAKE receiver structure is used. Based on the bit energy distribution, this approach gives accurate results with a low computational burden compared to other computation methods in the literature. Perfect estimation of the channel coefficients with the associated delays, and chaos synchronization, are assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which point out the accuracy of our approach.

  3. Phase Error Modeling and Its Impact on Precise Orbit Determination of GRACE Satellites

    Directory of Open Access Journals (Sweden)

    Jia Tu

    2012-01-01

    Limiting factors for the precise orbit determination (POD) of low-earth-orbit (LEO) satellites using dual-frequency GPS nowadays lie mainly in the in-flight phase error modeling. The phase error is modeled as a systematic and a random component, each depending on the direction of GPS signal reception. The systematic part and the standard deviation of the random part of the phase error model are estimated, respectively, by bin-wise mean and standard deviation values of the phase postfit residuals computed by orbit determination. By removing the systematic component and adjusting the weight of the phase observation data according to the standard deviation of the random component, the orbit can be further improved by the POD approach. The GRACE data of 1-31 January 2006 are processed, and three types of orbit solutions are obtained: POD without phase error model correction, POD with mean value correction of the phase error model, and POD with full phase error model correction. The three-dimensional (3D) orbit improvements derived from the phase error model correction are 0.0153 m for GRACE A and 0.0131 m for GRACE B, and the 3D influences arising from the random part of the phase error model are 0.0068 m and 0.0075 m for GRACE A and GRACE B, respectively. Thus the random part of the phase error model cannot be neglected for POD. It is also demonstrated by phase postfit residual analysis, orbit comparison with the JPL precise science orbit, and orbit validation with KBR data that the results derived from POD with phase error model correction are better than the other two types of orbit solutions generated in this paper.

  4. The type I error rate for in vivo Comet assay data when the hierarchical structure is disregarded

    DEFF Research Database (Denmark)

    Hansen, Merete Kjær; Kulahci, Murat

    ..., and this imposes considerable impact on the type I error rate. This study aims to demonstrate the implications that result from disregarding the hierarchical structure. Different combinations of the factor levels as they appear in a literature study give type I error rates up to 0.51, and for all combinations the type I error rate is greater than the nominal α of 0.05. Closed-form expressions based on scaled F-distributions using the Welch-Satterthwaite approximation are provided to show how the type I error rate is affected. With this study we hope to motivate researchers to be more precise regarding...
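
    The effect is easy to reproduce: generate null data with a shared per-animal random offset, then compare groups with a t-test that treats individual comets as independent units. The variance components and group sizes below are invented, but with a strong cluster effect the naive type I error rate lands far above 0.05, in line with the rates quoted above:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(9)
        reps, animals, cells, alpha = 4000, 5, 50, 0.05

        rejections = 0
        for _ in range(reps):
            # Two treatment groups with no true effect; each animal
            # contributes a shared random offset to all of its cells
            # (the hierarchical structure).
            def group():
                offsets = rng.normal(0, 1.0, animals)  # between-animal spread
                return (offsets[:, None]
                        + rng.normal(0, 1.0, (animals, cells))).ravel()
            # Naive analysis: cells treated as independent units.
            rejections += stats.ttest_ind(group(), group()).pvalue < alpha

        print(f"type I error ignoring hierarchy: {rejections / reps:.2f} "
              f"(nominal {alpha})")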

  5. Rate estimation in partially observed Markov jump processes with measurement errors

    OpenAIRE

    Amrein, Michael; Kuensch, Hans R.

    2010-01-01

    We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising for example in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced, which allow sampling from the posterior distribution of t...

  6. Comparing Response Times and Error Rates in a Simultaneous Masking Paradigm

    Directory of Open Access Journals (Sweden)

    F Hermens

    2014-08-01

    In simultaneous masking, performance on a foveally presented target is impaired by one or more flanking elements. Previous studies have demonstrated strong effects of the grouping of the target and the flankers on the strength of masking (e.g., Malania, Herzog & Westheimer, 2007). These studies have predominantly examined performance using offset discrimination thresholds, and it is therefore unclear whether other measures of performance provide similar outcomes. A recent study, which examined the role of grouping on error rates and response times in a speeded vernier offset discrimination task similar to that used by Malania et al. (2007), suggested a possible dissociation between the two measures, with error rates mimicking threshold performance but response times showing differential results (Panis & Hermens, 2014). We here report the outcomes of three experiments examining this possible dissociation, and demonstrate an overall similar pattern of results for error rates and response times across a broad range of mask layouts. Moreover, the pattern of results in our experiments strongly correlates with threshold performance reported earlier (Malania et al., 2007). Our results suggest that outcomes in a simultaneous masking paradigm do not critically depend on the outcome measure used, and therefore provide evidence for a common underlying mechanism.

  7. Tax revenue and inflation rate predictions in Banda Aceh using Vector Error Correction Model (VECM)

    Science.gov (United States)

    Maulia, Eva; Miftahuddin; Sofyan, Hizir

    2018-05-01

    A country has some important parameters for achieving economic welfare, such as tax revenues and inflation. One of the largest sources of revenue in the Indonesian state budget is the tax sector, and the rate of inflation occurring in a country can be used as one measure of the economic problems the country is facing. Given the importance of tax revenue and inflation rate control in achieving economic prosperity, it is necessary to analyze the relationship between, and to forecast, tax revenue and the inflation rate. VECM (Vector Error Correction Model) was chosen as the method used in this research because the data take the form of multivariate time series. This study aims to produce a VECM model with optimal lag and to predict tax revenue and the inflation rate with it. The results show that the best model for the tax revenue and inflation rate data in Banda Aceh City is a VECM with optimal lag 3, or VECM(3). Of the seven models formed, the income tax revenue model is significant. The predictions of tax revenue and the inflation rate in Banda Aceh for the next 6, 12 and 24 periods (months) obtained using VECM(3) are considered valid, since they have minimal error values compared to the other models.
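
    As a sketch of the modeling step, statsmodels provides a VECM class that fits a model with a chosen number of lagged differences and produces multi-step forecasts; the two co-integrated series below are synthetic stand-ins for the tax revenue and inflation data, and the lag of 3 mirrors the paper's chosen VECM(3):

        import numpy as np
        from statsmodels.tsa.vector_ar.vecm import VECM

        rng = np.random.default_rng(4)
        n = 120  # months of synthetic data

        # Two synthetic co-integrated series sharing a common random-walk
        # trend, standing in for tax revenue and the inflation rate.
        common = np.cumsum(rng.normal(0, 1, n))
        data = np.column_stack([
            common + rng.normal(0, 0.3, n),
            0.5 * common + rng.normal(0, 0.3, n),
        ])

        # VECM with 3 lagged differences and one co-integrating relation.
        model = VECM(data, k_ar_diff=3, coint_rank=1, deterministic="ci")
        res = model.fit()
        forecast = res.predict(steps=6)  # 6-period-ahead forecast
        print(np.round(forecast, 2))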

  8. Shuttle bit rate synchronizer. [signal to noise ratios and error analysis

    Science.gov (United States)

    Huey, D. C.; Fultz, G. L.

    1974-01-01

    A shuttle bit rate synchronizer brassboard unit was designed, fabricated, and tested; it meets or exceeds the contractual specifications. The bit rate synchronizer operates at signal-to-noise ratios (in a bit rate bandwidth) down to -5 dB while exhibiting less than 0.6 dB bit error rate degradation. The mean acquisition time was measured to be less than 2 seconds. The synchronizer is designed around a digital data transition tracking loop whose phase and data detectors are integrate-and-dump filters matched to the Manchester encoded bits specified. It meets the reliability (no adjustments or tweaking) and versatility (multiple bit rates) requirements of the shuttle S-band communication system through an implementation which is all digital after the initial stage of analog AGC and A/D conversion.

  9. PS-022 Complex automated medication systems reduce medication administration error rates in an acute medical ward

    DEFF Research Database (Denmark)

    Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan

    2017-01-01

    Background: Medication errors have received extensive attention in recent decades and are of significant concern to healthcare organisations globally. Medication errors occur frequently, and adverse events associated with medications are one of the largest causes of harm to hospitalised patients... cabinet, automated dispensing and barcode medication administration; (2) non-patient-specific automated dispensing and barcode medication administration. The occurrence of administration errors was observed in three 3-week periods. The error rates were calculated by dividing the number of doses with one

  10. Soft error rate analysis methodology of multi-Pulse-single-event transients

    International Nuclear Information System (INIS)

    Zhou Bin; Huo Mingxue; Xiao Liyi

    2012-01-01

    As transistor feature sizes scale down, soft errors in combinational logic caused by high-energy particle radiation are gaining more and more attention. In this paper, a combinational logic soft error analysis methodology considering multi-pulse single-event transients (MPSETs) and re-convergence with multiple transient pulses is proposed. In the proposed approach, the voltage pulse produced at the standard cell output is approximated by a triangle waveform, and characterized by three parameters: pulse width, the transition time of the first edge, and the transition time of the second edge. For pulses with an amplitude smaller than the supply voltage, an edge extension technique is proposed. Moreover, an efficient electrical masking model comprehensively considering transition time, delay, width and amplitude is proposed, together with an approach using the transition times of the two edges and the pulse width to compute the amplitude of a pulse. Finally, our proposed firstly-independently-propagating-secondly-mutually-interacting (FIP-SMI) approach is used to deal with the more practical case of re-convergent gates with multiple transient pulses. For MPSETs, a random generation model is also proposed. Compared to estimates obtained using circuit-level simulations in HSpice, our proposed soft error rate analysis algorithm shows 10% error in SER estimation with a speedup of 300 when a single-pulse single-event transient (SPSET) is considered. We have also demonstrated that the runtime and SER decrease with increasing P0 using designs from the ISCAS-85 benchmarks. (authors)

  11. Bit error rate testing of fiber optic data links for MMIC-based phased array antennas

    Science.gov (United States)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  12. Error rate of automated calculation for wound surface area using a digital photography.

    Science.gov (United States)

    Yang, S; Park, J; Lee, H; Lee, J B; Lee, B U; Oh, B H

    2018-02-01

    Although measuring wound size using digital photography is a quick and simple way to evaluate a skin wound, its accuracy has not been fully validated. To investigate the error rate of our newly developed wound surface area calculation using digital photography. Using a smartphone and a digital single lens reflex (DSLR) camera, four photographs of various sized wounds (diameter: 0.5-3.5 cm) were taken from a facial skin model together with color patches. The quantitative values of the wound areas were automatically calculated. The relative error (RE) of this method with regard to wound size and type of camera was analyzed. The RE of an individual calculated area ranged from 0.0329% (DSLR, diameter 1.0 cm) to 23.7166% (smartphone, diameter 2.0 cm). In spite of the correction for lens curvature, the smartphone had a significantly higher error rate than the DSLR camera (3.9431±2.9772 vs 8.1303±4.8236). However, for wound diameters below 3 cm, the REs of the average values of four photographs were below 5%. In addition, there was no difference in the average value of wound area taken by the smartphone and the DSLR camera in those cases. For the follow-up of small skin defects (diameter < 3 cm), our newly developed automated wound area calculation method can be applied to a large number of photographs, and their average values are a relatively useful index of wound healing with an acceptable error rate. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
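
    The calculation being automated reduces, at its core, to pixel counting calibrated by a reference patch of known physical size. The sketch below uses synthetic masks in place of a real segmentation step, and the 2 cm × 2 cm patch size is an assumption; a real pipeline would also need the color calibration and lens-distortion correction the paper describes:

        import numpy as np

        # Synthetic segmentation masks standing in for the output of a
        # real color-based segmentation step.
        img_shape = (1200, 1600)
        wound_mask = np.zeros(img_shape, dtype=bool)
        patch_mask = np.zeros(img_shape, dtype=bool)
        yy, xx = np.ogrid[:img_shape[0], :img_shape[1]]
        wound_mask[(yy - 600) ** 2 + (xx - 800) ** 2 < 150 ** 2] = True  # "wound"
        patch_mask[50:250, 50:250] = True                    # reference patch

        # The reference patch has a known physical size, which fixes the
        # cm^2-per-pixel scale for the whole photograph.
        PATCH_AREA_CM2 = 4.0                       # assumed 2 cm x 2 cm patch
        scale = PATCH_AREA_CM2 / patch_mask.sum()  # cm^2 per pixel

        wound_area = wound_mask.sum() * scale
        print(f"estimated wound area: {wound_area:.2f} cm^2")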

  13. Determining the Numeracy and Algebra Errors of Students in a Two-Year Vocational School

    Science.gov (United States)

    Akyüz, Gözde

    2015-01-01

    The goal of this study was to determine the mathematics achievement level in basic numeracy and algebra concepts of students in a two-year program in a technical vocational school of higher education and determine the errors that they make in these topics. The researcher developed a diagnostic mathematics achievement test related to numeracy and…

  14. Exploring key considerations when determining bona fide inadvertent errors resulting in understatements

    Directory of Open Access Journals (Sweden)

    Chrizanne de Villiers

    2016-03-01

    Chapter 16 of the Tax Administration Act (28 of 2011) (the TA Act) deals with understatement penalties. In the event of an ‘understatement’, in terms of Section 222 of the TA Act, a taxpayer must pay an understatement penalty, unless the understatement results from a bona fide inadvertent error. Determining whether a bona fide inadvertent error appears on a taxpayer’s return is a totally new concept in the tax fraternity. It is of utmost importance that this section is applied correctly, based on sound evaluation principles and not on professional judgement, when determining if the error was indeed the result of a bona fide inadvertent error. This research study focuses on exploring key considerations when determining bona fide inadvertent errors resulting in understatements. The role and importance of tax penalty provisions is explored, and the meaning of the different components in the term ‘bona fide inadvertent error’ is critically analysed with the purpose of finding a possible definition for the term. The study also compares the provisions of other tax jurisdictions with regard to errors made resulting in tax understatements, in order to find possible guidelines on the application of bona fide inadvertent errors as contained in Section 222 of the TA Act. The findings of the research study revealed that the term ‘bona fide inadvertent error’ contained in Section 222 of the TA Act should be defined urgently and that guidelines must be provided by SARS on the application of the new amendment. SARS should also clarify the application of a bona fide inadvertent error in light of the behaviours contained in Section 223 of the TA Act to avoid any confusion.

  15. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    Science.gov (United States)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  16. Systematic errors of EIT systems determined by easily-scalable resistive phantoms

    International Nuclear Information System (INIS)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-01-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design

  17. On determining dose rate constants spectroscopically

    International Nuclear Information System (INIS)

    Rodriguez, M.; Rogers, D. W. O.

    2013-01-01

    Purpose: To investigate several aspects of the Chen and Nath spectroscopic method of determining the dose rate constants of 125I and 103Pd seeds [Z. Chen and R. Nath, Phys. Med. Biol. 55, 6089–6104 (2010)], including the accuracy of using a line or dual-point source approximation as done in their method, and the accuracy of ignoring the effects of the scattered photons in the spectra. Additionally, the authors investigate the accuracy of the literature's many different spectra for bare, i.e., unencapsulated, 125I and 103Pd sources. Methods: Spectra generated by 14 125I and 6 103Pd seeds were calculated in vacuo at 10 cm from the source in a 2.7 × 2.7 × 0.05 cm³ voxel using the EGSnrc BrachyDose Monte Carlo code. Calculated spectra used the initial photon spectra recommended by AAPM's TG-43U1 and NCRP (National Council of Radiation Protection and Measurements) Report 58 for the 125I seeds, or TG-43U1 and NNDC(2000) (National Nuclear Data Center, 2000) for 103Pd seeds. The emitted spectra were treated as coming from a line or dual-point source in a Monte Carlo simulation to calculate the dose rate constant. The TG-43U1 definition of the dose rate constant was used. These calculations were performed using the full spectrum including scattered photons or using only the main peaks in the spectrum as done experimentally. Statistical uncertainties on the air kerma/history and the dose rate/history were ≤0.2%. The dose rate constants were also calculated using Monte Carlo simulations of the full seed model. Results: The ratio of the intensity of the 31 keV line relative to that of the main peak in 125I spectra is, on average, 6.8% higher when calculated with the NCRP Report 58 initial spectrum vs that calculated with the TG-43U1 initial spectrum. The 103Pd spectra exhibit an average 6.2% decrease in the 22.9 keV line relative to the main peak when calculated with the TG-43U1 rather than the NNDC(2000) initial spectrum. The measured values from three different

  18. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim

    2011-01-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. © 2011 IEEE.
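
    The closed-form BER analysis above builds on per-hop BPSK error probabilities. As a point of reference, the textbook average BER of BPSK over a flat Rayleigh fading hop can be computed as follows; this is a minimal sketch only, and does not reproduce the paper's exact per-hop PDFs or SNR-threshold cooperation logic:

```python
import math

def bpsk_ber_awgn(snr_linear):
    """BER of BPSK over AWGN: Q(sqrt(2*SNR)) = 0.5*erfc(sqrt(SNR))."""
    return 0.5 * math.erfc(math.sqrt(snr_linear))

def bpsk_ber_rayleigh(avg_snr_linear):
    """Standard closed-form average BER of BPSK over flat Rayleigh fading."""
    g = avg_snr_linear
    return 0.5 * (1.0 - math.sqrt(g / (1.0 + g)))

for snr_db in (0, 10, 20, 30):
    g = 10 ** (snr_db / 10)
    print(f"{snr_db:2d} dB: AWGN {bpsk_ber_awgn(g):.2e}, "
          f"Rayleigh {bpsk_ber_rayleigh(g):.2e}")
```

    At high SNR the Rayleigh expression decays as 1/(4·SNR), i.e., diversity order one per hop; cooperation steepens this slope, which is what the asymptotic diversity-order analysis in the abstract quantifies.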

  19. On the symmetric α-stable distribution with application to symbol error rate calculations

    KAUST Repository

    Soury, Hamza

    2016-12-24

    The probability density function (PDF) of the symmetric α-stable distribution is investigated using the inverse Fourier transform of its characteristic function. For general values of the stable parameter α, it is shown that the PDF and the cumulative distribution function of the symmetric stable distribution can be expressed in terms of the Fox H function as closed-form. As an application, the probability of error of single input single output communication systems using different modulation schemes with an α-stable perturbation is studied. In more details, a generic formula is derived for generalized fading distribution, such as the extended generalized-k distribution. Later, simpler expressions of these error rates are deduced for some selected special cases and compact approximations are derived using asymptotic expansions.
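
    For a standard symmetric α-stable variable the characteristic function is exp(−|t|^α), so the PDF follows from a one-dimensional inverse Fourier (cosine) integral. The sketch below performs that inversion numerically; it illustrates only the starting point of the analysis, not the paper's Fox H-function closed forms:

```python
import numpy as np
from scipy.integrate import quad

def sas_pdf(x, alpha):
    """PDF of a standard symmetric alpha-stable variable, obtained by
    numerically inverting its characteristic function exp(-|t|**alpha)."""
    integrand = lambda t: np.exp(-t**alpha) * np.cos(t * x)
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val / np.pi

# Sanity checks: alpha=2 is Gaussian with variance 2, alpha=1 is Cauchy.
print(sas_pdf(0.0, 2.0), 1 / (2 * np.sqrt(np.pi)))  # both ~0.2821
print(sas_pdf(0.0, 1.0), 1 / np.pi)                 # both ~0.3183
```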

  20. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel

    2011-06-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. © 2011 IEEE.

  1. The dynamic effect of exchange-rate volatility on Turkish exports: Parsimonious error-correction model approach

    Directory of Open Access Journals (Sweden)

    Demirhan Erdal

    2015-01-01

    Full Text Available This paper aims to investigate the effect of exchange-rate stability on real export volume in Turkey, using monthly data for the period February 2001 to January 2010. The Johansen multivariate cointegration method and the parsimonious error-correction model are applied to determine long-run and short-run relationships between real export volume and its determinants. In this study, the conditional variance of the GARCH(1,1) model is taken as a proxy for exchange-rate stability, and generalized impulse-response functions and variance-decomposition analyses are applied to analyze the dynamic effects of variables on real export volume. The empirical findings suggest that exchange-rate stability has a significant positive effect on real export volume, both in the short and the long run.
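
    The volatility proxy mentioned in the abstract is the conditional variance of a GARCH(1,1) model, which follows the recursion σ²(t) = ω + α·ε²(t−1) + β·σ²(t−1). Below is a minimal sketch of that recursion on a return series; the parameter values are illustrative, not the paper's estimates:

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance series of a GARCH(1,1) model:
    sigma2[t] = omega + alpha*eps[t-1]**2 + beta*sigma2[t-1]."""
    eps = returns - returns.mean()
    sigma2 = np.empty_like(eps)
    sigma2[0] = eps.var()              # start at the unconditional variance
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1]**2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.02, 108)         # stand-in for 2001-2010 monthly returns
vol_proxy = np.sqrt(garch11_variance(r, omega=1e-5, alpha=0.1, beta=0.85))
print(vol_proxy[:5])
```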

  2. Determining leach rates of monolithic waste forms

    International Nuclear Information System (INIS)

    Gilliam, T.M.; Dole, L.R.

    1986-01-01

    The ANS 16.1 Leach Procedure provides a conservative means of predicting long-term release from monolithic waste forms, offering a simple and relatively quick means of determining effective solid diffusion coefficients. As presented here, these coefficients can be used in a simple model to predict maximum release rates or be used in more complex site-specific models to predict actual site performance. For waste forms that pass the structural integrity test, this model also allows the prediction of EP-Tox leachate concentrations from these coefficients. Thus, the results of the ANS 16.1 Leach Procedure provide a powerful tool that can be used to predict the waste concentration limits in order to comply with the EP-Toxicity criteria for characteristically nonhazardous waste. 12 refs., 3 figs
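
    Under the semi-infinite-medium diffusion model that ANS 16.1 assumes, each leach interval yields an incremental effective diffusion coefficient. The sketch below implements the standard incremental formula; the interval data are illustrative, and the standard itself should be consulted for the exact procedure and validity limits:

```python
import math

def ans161_diffusion(a_n, A0, t0, t1, V, S):
    """Incremental effective diffusion coefficient (cm^2/s) per ANS 16.1:
    D = pi * [(a_n/A0) / (sqrt(t1) - sqrt(t0))]**2 * (V/S)**2,
    where a_n is the activity leached during the interval [t0, t1] (s),
    A0 the initial activity, V the specimen volume (cm^3), S its area (cm^2)."""
    return math.pi * ((a_n / A0) / (math.sqrt(t1) - math.sqrt(t0)))**2 * (V / S)**2

D = ans161_diffusion(a_n=2.0e-4, A0=1.0, t0=0.0, t1=7 * 86400, V=200.0, S=190.0)
print(f"D = {D:.2e} cm^2/s, leachability index = {math.log10(1.0 / D):.1f}")
```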

  3. Determination of Ga-67 disintegration rate

    International Nuclear Information System (INIS)

    Fonseca, Katia A.; Koskinas, Maria F.; Dias, Mauro S.

    1996-01-01

    One of the consequences of the production by IPEN of new radioisotopes used in nuclear medicine, as in the case of Ga-67, is the need for new standard sources of the radionuclide obtained in a fast and simple way. The Laboratorio de Metrologia de Radionuclideos at IPEN has a well-type ionization chamber system, the most suitable for this purpose. In order to calibrate this system it was necessary to standardize Ga-67 solutions by an absolute system. The present work gives details on the Ga-67 disintegration rate determination by a 4π β-γ coincidence system, gamma spectrometry using an HPGe detector, and measurements using a 1383A-type ionization chamber, in order to check the consistency of the adopted methodology. (author)

  4. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    Science.gov (United States)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and flat Nakagami fading channel. First of all, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression of the characteristic function (CF) of MAI is developed in a straightforward manner. Finally, an exact expression of error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).

  5. Error rates and resource overheads of encoded three-qubit gates

    Science.gov (United States)

    Takagi, Ryuji; Yoder, Theodore J.; Chuang, Isaac L.

    2017-10-01

    A non-Clifford gate is required for universal quantum computation, and, typically, this is the most error-prone and resource-intensive logical operation on an error-correcting code. Small, single-qubit rotations are popular choices for this non-Clifford gate, but certain three-qubit gates, such as Toffoli or controlled-controlled-Z (ccz), are equivalent options that are also more suited for implementing some quantum algorithms, for instance, those with coherent classical subroutines. Here, we calculate error rates and resource overheads for implementing logical ccz with pieceable fault tolerance, a nontransversal method for implementing logical gates. We provide a comparison with a nonlocal magic-state scheme on a concatenated code and a local magic-state scheme on the surface code. We find the pieceable fault-tolerance scheme particularly advantaged over magic states on concatenated codes and in certain regimes over magic states on the surface code. Our results suggest that pieceable fault tolerance is a promising candidate for fault tolerance in a near-future quantum computer.

  6. Comparison of Bit Error Rate of Line Codes in NG-PON2

    Directory of Open Access Journals (Sweden)

    Tomas Horvath

    2016-05-01

    Full Text Available This article focuses on simulation and comparison of the line codes NRZ (Non Return to Zero), RZ (Return to Zero) and Miller's code for use in NG-PON2 (Next-Generation Passive Optical Network Stage 2). Our article provides solutions with Q-factor, BER (Bit Error Rate) and bandwidth comparison. Line codes are the most important part of communication over optical fibre; their main role is digital signal representation. NG-PON2 networks use optical fibres for communication, which is the reason why OptSim v5.2 is used for simulation.
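
    Line coding, as compared in the article, maps each bit to a pulse shape within its bit slot. Below is a minimal sketch of NRZ and RZ (50% duty cycle) waveform generation of the kind a simulator feeds to the optical channel model; the sampling rate and levels are illustrative:

```python
import numpy as np

def encode(bits, samples_per_bit=8, code="NRZ"):
    """Generate a baseband waveform for NRZ or RZ (50% duty) line coding."""
    wave = np.zeros(len(bits) * samples_per_bit)
    for i, b in enumerate(bits):
        s = i * samples_per_bit
        if code == "NRZ":
            wave[s:s + samples_per_bit] = b           # hold level for full slot
        elif code == "RZ":
            wave[s:s + samples_per_bit // 2] = b      # return to zero mid-slot
    return wave

bits = [1, 0, 1, 1, 0]
print(encode(bits, 4, "NRZ"))
print(encode(bits, 4, "RZ"))
```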

  7. Inclusive bit error rate analysis for coherent optical code-division multiple-access system

    Science.gov (United States)

    Katz, Gilad; Sadot, Dan

    2002-06-01

    Inclusive noise and bit error rate (BER) analysis for optical code-division multiplexing (OCDM) using coherence techniques is presented. The analysis contains crosstalk calculation of the mutual field variance for different numbers of users. It is shown that the crosstalk noise depends strongly on the receiver integration time, the laser coherence time, and the number of users. In addition, analytical results of the power fluctuation at the received channel due to the data modulation at the rejected channels are presented. The analysis also includes amplified spontaneous emission (ASE)-related noise effects of in-line amplifiers in a long-distance communication link.

  8. Modeling the cosmic-ray-induced soft-error rate in integrated circuits: An overview

    International Nuclear Information System (INIS)

    Srinivasan, G.R.

    1996-01-01

    This paper is an overview of the concepts and methodologies used to predict soft-error rates (SER) due to cosmic and high-energy particle radiation in integrated circuit chips. The paper emphasizes the need for the SER simulation using the actual chip circuit model which includes device, process, and technology parameters as opposed to using either the discrete device simulation or generic circuit simulation that is commonly employed in SER modeling. Concepts such as funneling, event-by-event simulation, nuclear history files, critical charge, and charge sharing are examined. Also discussed are the relative importance of elastic and inelastic nuclear collisions, rare event statistics, and device vs. circuit simulations. The semi-empirical methodologies used in the aerospace community to arrive at SERs [also referred to as single-event upset (SEU) rates] in integrated circuit chips are reviewed. This paper is one of four in this special issue relating to SER modeling. Together, they provide a comprehensive account of this modeling effort, which has resulted in a unique modeling tool called the Soft-Error Monte Carlo Model, or SEMM

  9. Symbol and Bit Error Rates Analysis of Hybrid PIM-CDMA

    Directory of Open Access Journals (Sweden)

    Ghassemlooy Z

    2005-01-01

    Full Text Available A hybrid pulse interval modulation code-division multiple-access (hPIM-CDMA) scheme employing the strict optical orthogonal code (SOCC) with unity auto- and cross-correlation constraints for indoor optical wireless communications is proposed. In this paper, we analyse the symbol error rate (SER) and bit error rate (BER) of hPIM-CDMA. In the analysis, we consider multiple access interference (MAI), self-interference, and the hybrid nature of the hPIM-CDMA signal detection, which is based on the matched filter (MF). It is shown that the BER/SER performance can only be evaluated if the bit resolution conforms to the condition set by the number of consecutive false alarm pulses that might occur and be detected, so that one symbol being divided into two is unlikely to occur. Otherwise, the probability of SER and BER becomes extremely high and indeterminable. We show that for a large number of users, the BER improves when increasing the code weight. The results presented are compared with other modulation schemes.

  10. Two-dimensional optoelectronic interconnect-processor and its operational bit error rate

    Science.gov (United States)

    Liu, J. Jiang; Gollsneider, Brian; Chang, Wayne H.; Carhart, Gary W.; Vorontsov, Mikhail A.; Simonis, George J.; Shoop, Barry L.

    2004-10-01

    A two-dimensional (2-D) multi-channel 8x8 optical interconnect and processor system was designed and developed using complementary metal-oxide-semiconductor (CMOS) driven 850-nm vertical-cavity surface-emitting laser (VCSEL) arrays and photodetector (PD) arrays with corresponding wavelengths. We performed operation and bit-error-rate (BER) analysis on this free-space integrated 8x8 VCSEL optical interconnect driven by silicon-on-sapphire (SOS) circuits. A pseudo-random bit stream (PRBS) data sequence was used in operation of the interconnect. Eye diagrams were measured from individual channels and analyzed using a digital oscilloscope at data rates from 155 Mb/s to 1.5 Gb/s. Using a statistical model of Gaussian distribution for the random noise in the transmission, we developed a method to compute the BER instantaneously from the digital eye diagrams. Direct measurements on this interconnect were also taken on a standard BER tester for verification. We found that the results of the two methods were of the same order and within 50% accuracy. The integrated interconnect was investigated in an optoelectronic processing architecture of a digital halftoning image processor. Error diffusion networks implemented by the inherently parallel nature of photonics promise to provide high quality digital halftoned images.
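
    The Gaussian-noise method for extracting BER from eye diagrams reduces to the standard Q-factor relation Q = (μ1 − μ0)/(σ1 + σ0) with BER = 0.5·erfc(Q/√2). A minimal sketch, assuming the level means and standard deviations have already been sampled from the eye:

```python
import math

def ber_from_eye(mu1, mu0, sigma1, sigma0):
    """Estimate BER from eye-diagram statistics under a Gaussian noise model."""
    q = (mu1 - mu0) / (sigma1 + sigma0)
    return 0.5 * math.erfc(q / math.sqrt(2.0)), q

ber, q = ber_from_eye(mu1=1.0, mu0=0.1, sigma1=0.07, sigma0=0.06)
print(f"Q = {q:.2f}, BER ~ {ber:.2e}")    # Q ~ 6.9 -> BER ~ 2e-12
```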

  11. Soft error rate simulation and initial design considerations of neutron intercepting silicon chip (NISC)

    Science.gov (United States)

    Celik, Cihangir

    Advances in microelectronics result in sub-micrometer electronic technologies as predicted by Moore's Law (1965), which states that the number of transistors in a given space would double every two years. The most available memory architectures today have submicrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in a micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or Soft Error. In contrast to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on the device operation, and a system reset or recovery is needed to return to proper operations. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano

  12. Error analysis for determination of accuracy of an ultrasound navigation system for head and neck surgery.

    Science.gov (United States)

    Kozak, J; Krysztoforski, K; Kroll, T; Helbig, S; Helbig, M

    2009-01-01

    The use of conventional CT- or MRI-based navigation systems for head and neck surgery is unsatisfactory due to tissue shift. Moreover, changes occurring during surgical procedures cannot be visualized. To overcome these drawbacks, we developed a novel ultrasound-guided navigation system for head and neck surgery. A comprehensive error analysis was undertaken to determine the accuracy of this new system. The evaluation of the system accuracy was essentially based on the method of error definition for well-established fiducial marker registration methods (point-pair matching) as used in, for example, CT- or MRI-based navigation. This method was modified in accordance with the specific requirements of ultrasound-guided navigation. The Fiducial Localization Error (FLE), Fiducial Registration Error (FRE) and Target Registration Error (TRE) were determined. In our navigation system, the real error (the TRE actually measured) did not exceed a volume of 1.58 mm³ with a probability of 0.9. A mean value of 0.8 mm (standard deviation: 0.25 mm) was found for the FRE. The quality of the coordinate tracking system (Polaris localizer) could be defined with an FLE of 0.4 ± 0.11 mm (mean ± standard deviation). The quality of the coordinates of the crosshairs of the phantom was determined with a deviation of 0.5 mm (standard deviation: 0.07 mm). The results demonstrate that our newly developed ultrasound-guided navigation system shows only very small system deviations and therefore provides very accurate data for practical applications.
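
    The FRE reported above is the residual distance left after the optimal rigid registration of the two fiducial point sets. Below is a sketch of that computation using the SVD-based (Kabsch) solution; the phantom coordinates and the 0.4 mm localization noise are illustrative:

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q (Kabsch)."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

def fre(P, Q):
    """Root-mean-square fiducial registration error after rigid registration."""
    R, t = rigid_register(P, Q)
    return np.sqrt(np.mean(np.sum((Q - (P @ R.T + t))**2, axis=1)))

rng = np.random.default_rng(1)
P = rng.uniform(0, 100, (6, 3))             # fiducials in image space (mm)
Q = P + rng.normal(0, 0.4, P.shape)         # localized with FLE ~ 0.4 mm
print(f"FRE = {fre(P, Q):.2f} mm")
```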

  13. Points of Interest: What Determines Interest Rates?

    Science.gov (United States)

    Schilling, Tim

    Interest rates can significantly influence people's behavior. When rates decline, homeowners rush to buy new homes and refinance old mortgages; automobile buyers scramble to buy new cars; the stock market soars, and people tend to feel more optimistic about the future. But even though individuals respond to changes in rates, they may not fully…

  14. The error analysis of the determination of the activity coefficients via the isopiestic method

    International Nuclear Information System (INIS)

    Zhou Jun; Chen Qiyuan; Fang Zheng; Liang Yizeng; Liu Shijun; Zhou Yong

    2005-01-01

    Error analysis is very important to experimental design. The error analysis of the determination of activity coefficients for a binary system via the isopiestic method shows that the error sources include not only the experimental errors of the analyzed molalities and the measured osmotic coefficients, but also the deviation of the regressed values from the experimental data when a regression function is used. It also shows that accurate chemical analysis of the molality of the test solution is important, and that it is preferable to keep the error of the measured osmotic coefficients unchanged in all isopiestic experiments, including those on very dilute solutions. The isopiestic experiments on dilute solutions are very important, and the lowest molality should be low enough that a theoretical method can be used below it. It is necessary that the isopiestic experiment be done on test solutions of lower than 0.1 mol·kg⁻¹; for most electrolyte solutions, it is usually preferable to require the lowest molality to be less than 0.05 mol·kg⁻¹. Moreover, the experimental molalities of the test solutions should firstly be arranged by keeping the interval of the logarithms of the molalities nearly constant; secondly, a larger number of high molalities should be arranged, and we propose to arrange the experimental molalities greater than 1 mol·kg⁻¹ according to some kind of arithmetical progression of the intervals of the molalities. After the experiments, the error of the calculated activity coefficients of the solutes can be calculated from the actual values of the errors of the measured isopiestic molalities and the deviations of the regressed values from the experimental values with our derived equations

  15. The Determinants of Early Refractive Error on School-Going Chinese Children

    Directory of Open Access Journals (Sweden)

    K. Jayaraman

    2016-04-01

    Full Text Available Refractive error is a common social issue affecting all walks of human life, and its prevalence is recorded as highest among Chinese populations, particularly among people living in southern China, Hong Kong, Thailand, Singapore, and Malaysia. Refractive error is one of the simplest disorders to treat and is considered a cost-effective health care intervention. The present study included 168 Chinese school-going children aged 10 to 12 years; they were selected from different schools of urban Malaysia. It was surprising to see that 112 (66.7%) of the children had early onset of refractive error; refractive error was also detected late among primary school and secondary school students. The findings revealed that the determinants of refractive error among Chinese children were personal achievements and machine dependence. The possible reasons for the emergence of these significant factors could be attributed to the inbuilt culture and traditions of Chinese parents, who insist that their children should be hardworking and focus on school subjects, in return for which they allow them to use luxury electronic devices.

  16. The Determinants of Country Risk Ratings

    OpenAIRE

    Jean-Claude Cosset; Jean Roy

    1991-01-01

    The purpose of this paper is to replicate Euromoney's and Institutional Investor's country risk ratings on the basis of economic and political variables. The evidence reveals that country risk ratings respond to some of the variables suggested by the theory. In particular, both the level of per capita income and propensity to invest affect positively the rating of a country. In addition, high-ranking countries are less indebted than low-ranking countries. It also appears that the ability of t...

  17. Minimizing the symbol-error-rate for amplify-and-forward relaying systems using evolutionary algorithms

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2015-02-01

    In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system. The detector is designed to minimize the symbol-error-rate (SER) of the system. The SER surface is non-linear and may have multiple minima; therefore, designing an SER detector for cooperative communications becomes an optimization problem. Evolutionary algorithms have the capability to find the global minimum; therefore, evolutionary algorithms such as particle swarm optimization (PSO) and differential evolution (DE) are exploited to solve this optimization problem. The performance of the proposed detectors is compared with conventional detectors such as the maximum likelihood (ML) and minimum mean square error (MMSE) detectors. In the simulation results, it can be observed that the SER performance of the proposed detectors is less than 2 dB away from the ML detector. Significant improvement in SER performance is also observed when comparing with the MMSE detector. The computational complexity of the proposed detector is much less than that of the ML and MMSE algorithms. Moreover, in contrast to the ML and MMSE detectors, the computational complexity of the proposed detectors increases linearly with respect to the number of relays.
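
    Particle swarm optimization, one of the two evolutionary methods named in the abstract, can be sketched in a few lines. Here it minimizes a generic multi-modal test function standing in for the SER surface; all hyperparameters are illustrative, and the actual detector objective from the paper is not reproduced:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer; returns the best position and value."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n_particles, dim))       # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Rastrigin function: a standard multi-modal stand-in for an SER surface.
rastrigin = lambda z: 10 * len(z) + sum(z**2 - 10 * np.cos(2 * np.pi * z))
print(pso(rastrigin, dim=2))
```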

  18. Rates of medical errors and preventable adverse events among hospitalized children following implementation of a resident handoff bundle.

    Science.gov (United States)

    Starmer, Amy J; Sectish, Theodore C; Simon, Dennis W; Keohane, Carol; McSweeney, Maireade E; Chung, Erica Y; Yoon, Catherine S; Lipsitz, Stuart R; Wassner, Ari J; Harper, Marvin B; Landrigan, Christopher P

    2013-12-04

    Handoff miscommunications are a leading cause of medical errors. Studies comprehensively assessing handoff improvement programs are lacking. To determine whether introduction of a multifaceted handoff program was associated with reduced rates of medical errors and preventable adverse events, fewer omissions of key data in written handoffs, improved verbal handoffs, and changes in resident-physician workflow. Prospective intervention study of 1255 patient admissions (642 before and 613 after the intervention) involving 84 resident physicians (42 before and 42 after the intervention) from July-September 2009 and November 2009-January 2010 on 2 inpatient units at Boston Children's Hospital. Resident handoff bundle, consisting of standardized communication and handoff training, a verbal mnemonic, and a new team handoff structure. On one unit, a computerized handoff tool linked to the electronic medical record was introduced. The primary outcomes were the rates of medical errors and preventable adverse events measured by daily systematic surveillance. The secondary outcomes were omissions in the printed handoff document and resident time-motion activity. Medical errors decreased from 33.8 per 100 admissions (95% CI, 27.3-40.3) to 18.3 per 100 admissions (95% CI, 14.7-21.9; P < .001), and preventable adverse events decreased from 3.3 per 100 admissions (95% CI, 1.7-4.8) to 1.5 (95% CI, 0.51-2.4) per 100 admissions (P = .04) following the intervention. There were fewer omissions of key handoff elements on printed handoff documents, especially on the unit that received the computerized handoff tool (significant reductions of omissions in 11 of 14 categories with computerized tool; significant reductions in 2 of 14 categories without computerized tool). Physicians spent a greater percentage of time in a 24-hour period at the patient bedside after the intervention (8.3% [95% CI, 7.1%-9.8%] before vs 10.6% [95% CI, 9.2%-12.2%] after; P = .03). The average duration of verbal

  19. Estimation of distance error by fuzzy set theory required for strength determination of HDR (192)Ir brachytherapy sources.

    Science.gov (United States)

    Kumar, Sudhir; Datta, D; Sharma, S D; Chourasiya, G; Babu, D A R; Sharma, D N

    2014-04-01

    Verification of the strength of high dose rate (HDR) (192)Ir brachytherapy sources on receipt from the vendor is an important component of an institutional quality assurance program. Either reference air-kerma rate (RAKR) or air-kerma strength (AKS) is the recommended quantity to specify the strength of gamma-emitting brachytherapy sources. The use of a Farmer-type cylindrical ionization chamber of sensitive volume 0.6 cm³ is one of the recommended methods for measuring the RAKR of HDR (192)Ir brachytherapy sources. While using the cylindrical chamber method, it is required to determine the positioning error of the ionization chamber with respect to the source, which is called the distance error. An attempt has been made to apply fuzzy set theory to estimate the subjective uncertainty associated with the distance error, and a simplified approach of applying this theory to the quantification of that uncertainty is proposed. In order to express the uncertainty in the framework of fuzzy sets, the uncertainty index was estimated and found to be within 2.5%, which further indicates that the possibility of error in measuring such a distance may be of this order. It is observed that the relative distance li estimated by the analytical method and by the fuzzy set theoretic approach are consistent with each other. The crisp values of li estimated using the analytical method lie within the bounds computed using fuzzy set theory. This indicates that li values estimated using analytical methods are within 2.5% uncertainty. This value of uncertainty in distance measurement should be incorporated in the uncertainty budget while estimating the expanded uncertainty in HDR (192)Ir source strength measurement.

  20. Correct mutual information, quantum bit error rate and secure transmission efficiency in Wojcik's eavesdropping scheme on ping-pong protocol

    OpenAIRE

    Zhang, Zhanjun

    2004-01-01

    Comment: The incorrect mutual information, quantum bit error rate and secure transmission efficiency in Wojcik's eavesdropping scheme [PRL 90 (2003) 157901] on the ping-pong protocol are pointed out and corrected

  1. Calculation of the soft error rate of submicron CMOS logic circuits

    International Nuclear Information System (INIS)

    Juhnke, T.; Klar, H.

    1995-01-01

    A method to calculate the soft error rate (SER) of CMOS logic circuits with dynamic pipeline registers is described. This method takes into account charge collection by drift and diffusion. The method is verified by comparison of calculated SERs to measurement results. Using this method, the SER of a highly pipelined multiplier is calculated as a function of supply voltage for a 0.6 µm, 0.3 µm, and 0.12 µm technology, respectively. It has been found that the SER of such highly pipelined submicron CMOS circuits may become too high, so that countermeasures have to be taken. Since the SER greatly increases with decreasing supply voltage, low-power/low-voltage circuits may show more than eight times the SER at half the normal supply voltage as compared to conventional designs

  2. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim

    2010-01-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. ©2010 IEEE.

  3. Personnel selection and emotional stability certification: establishing a false negative error rate when clinical interviews

    International Nuclear Information System (INIS)

    Berghausen, P.E. Jr.

    1987-01-01

    The security plans of nuclear plants generally require that all personnel who are to have unescorted access to protected areas or vital islands be screened for emotional instability. Screening typically consists of first administering the MMPI and then conducting a clinical interview. Interviews-by-exception protocols provide for interviewing only those employees who have some indications of psychopathology in their MMPI results. A problem arises when the indications are not readily apparent: false negatives are likely to occur, resulting in employees being erroneously granted unescorted access. The present paper describes the development of a predictive equation which permits accurate identification, via analysis of MMPI results, of those employees who are most in need of being interviewed. The predictive equation also permits knowing the probable maximum false negative error rate when a given percentage of employees is interviewed

  4. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel

    2010-10-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. ©2010 IEEE.

  5. Error-rate performance analysis of cooperative OFDMA system with decode-and-forward relaying

    KAUST Repository

    Fareed, Muhammad Mehboob; Uysal, Murat; Tsiftsis, Theodoros A.

    2014-01-01

    In this paper, we investigate the performance of a cooperative orthogonal frequency-division multiple-access (OFDMA) system with decode-and-forward (DaF) relaying. Specifically, we derive a closed-form approximate symbol-error-rate expression and analyze the achievable diversity orders. Depending on the relay location, a diversity order up to (L_SkD + 1) + Σ_{m=1}^{M} min(L_SkRm + 1, L_RmD + 1) is available, where M is the number of relays, and L_SkD + 1, L_SkRm + 1, and L_RmD + 1 are the lengths of the channel impulse responses of the source-to-destination, source-to-mth-relay, and mth-relay-to-destination links, respectively. Monte Carlo simulation results are also presented to confirm the analytical findings. © 2013 IEEE.

  6. Evolutionary enhancement of the SLIM-MAUD method of estimating human error rates

    International Nuclear Information System (INIS)

    Zamanali, J.H.; Hubbard, F.R.; Mosleh, A.; Waller, M.A.

    1992-01-01

    The methodology described in this paper assigns plant-specific dynamic human error rates (HERs) for individual plant examinations based on procedural difficulty, on configuration features, and on the time available to perform the action. This methodology is an evolutionary improvement of the success likelihood index methodology (SLIM-MAUD) for use in systemic scenarios. It is based on the assumption that the HER in a particular situation depends on the combined effects of a comprehensive set of performance-shaping factors (PSFs) that influence the operator's ability to perform the action successfully. The PSFs relate the details of the systemic scenario in which the action must be performed to the operator's psychological and cognitive condition
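
    In SLIM-type methods the weighted PSF ratings form a success likelihood index (SLI), which is mapped to a human error rate through a log-linear calibration anchored by tasks of known HEP. Below is a minimal sketch under those standard assumptions; the ratings, weights and anchor points are illustrative:

```python
import math

def sli(ratings, weights):
    """Success likelihood index: weighted sum of PSF ratings (weights sum to 1)."""
    return sum(r * w for r, w in zip(ratings, weights))

def her_from_sli(s, anchors):
    """Log-linear calibration log10(HEP) = a*SLI + b through two anchor tasks."""
    (s1, p1), (s2, p2) = anchors
    a = (math.log10(p1) - math.log10(p2)) / (s1 - s2)
    b = math.log10(p1) - a * s1
    return 10 ** (a * s + b)

weights = [0.4, 0.3, 0.3]             # e.g. procedural difficulty, configuration, time
ratings = [6.0, 4.0, 7.0]             # PSF ratings on a 1-9 scale
anchors = [(2.0, 1e-1), (9.0, 1e-4)]  # (SLI, known HEP) calibration tasks
print(f"HER ~ {her_from_sli(sli(ratings, weights), anchors):.1e}")
```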

  7. Analysis of family-wise error rates in statistical parametric mapping using random field theory.

    Science.gov (United States)

    Flandin, Guillaume; Friston, Karl J

    2017-11-01

    This technical report revisits the analysis of family-wise error rates in statistical parametric mapping, using random field theory, reported in (Eklund et al. []: arXiv 1511.01863). Contrary to the understandable spin that these sorts of analyses attract, a review of their results suggests that they endorse the use of parametric assumptions, and random field theory, in the analysis of functional neuroimaging data. We briefly rehearse the advantages parametric analyses offer over nonparametric alternatives and then unpack the implications of (Eklund et al. []: arXiv 1511.01863) for parametric procedures. Hum Brain Mapp, 2017. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  8. Performance analysis for the bit-error rate of SAC-OCDMA systems

    Science.gov (United States)

    Feng, Gang; Cheng, Wenqing; Chen, Fujun

    2015-09-01

    Under the low power assumption, Gaussian statistics invoked via the central limit theorem can predict the upper bound of the error rate in the spectral-amplitude-coding optical code division multiple access (SAC-OCDMA) system. However, this approach severely underestimates the bit-error rate (BER) performance of the system under the high power assumption. Fortunately, the exact negative binomial (NB) model is a perfect replacement for the Gaussian model in this prediction and evaluation. Based on NB statistics, a more accurate closed-form expression is analyzed and derived for the SAC-OCDMA system. The experiment shows that the obtained expression provides a more precise prediction of the BER performance under both the low and high power assumptions.

  9. Error-rate performance analysis of cooperative OFDMA system with decode-and-forward relaying

    KAUST Repository

    Fareed, Muhammad Mehboob

    2014-06-01

    In this paper, we investigate the performance of a cooperative orthogonal frequency-division multiple-access (OFDMA) system with decode-and-forward (DaF) relaying. Specifically, we derive a closed-form approximate symbol-error-rate expression and analyze the achievable diversity orders. Depending on the relay location, a diversity order up to (L_SkD + 1) + Σ_{m=1}^{M} min(L_SkRm + 1, L_RmD + 1) is available, where M is the number of relays, and L_SkD + 1, L_SkRm + 1, and L_RmD + 1 are the lengths of the channel impulse responses of the source-to-destination, source-to-mth-relay, and mth-relay-to-destination links, respectively. Monte Carlo simulation results are also presented to confirm the analytical findings. © 2013 IEEE.

  10. Methods for determining and processing 3D errors and uncertainties for AFM data analysis

    Science.gov (United States)

    Klapetek, P.; Nečas, D.; Campbellová, A.; Yacoot, A.; Koenders, L.

    2011-02-01

    This paper describes the processing of three-dimensional (3D) scanning probe microscopy (SPM) data. It is shown that 3D volumetric calibration error and uncertainty data can be acquired for both metrological atomic force microscope systems and commercial SPMs. These data can be used within nearly all the standard SPM data processing algorithms to determine local values of uncertainty of the scanning system. If the error function of the scanning system is determined for the whole measurement volume of an SPM, it can be converted to yield local dimensional uncertainty values that can in turn be used for evaluation of uncertainties related to the acquired data and for further data processing applications (e.g. area, ACF, roughness) within direct or statistical measurements. These have been implemented in the software package Gwyddion.

  11. Methods for determining and processing 3D errors and uncertainties for AFM data analysis

    International Nuclear Information System (INIS)

    Klapetek, P; Campbellová, A; Nečas, D; Yacoot, A; Koenders, L

    2011-01-01

    This paper describes the processing of three-dimensional (3D) scanning probe microscopy (SPM) data. It is shown that 3D volumetric calibration error and uncertainty data can be acquired for both metrological atomic force microscope systems and commercial SPMs. These data can be used within nearly all the standard SPM data processing algorithms to determine local values of uncertainty of the scanning system. If the error function of the scanning system is determined for the whole measurement volume of an SPM, it can be converted to yield local dimensional uncertainty values that can in turn be used for evaluation of uncertainties related to the acquired data and for further data processing applications (e.g. area, ACF, roughness) within direct or statistical measurements. These have been implemented in the software package Gwyddion

  12. Human error considerations and annunciator effects in determining optimal test intervals for periodically inspected standby systems

    International Nuclear Information System (INIS)

    McWilliams, T.P.; Martz, H.F.

    1981-01-01

    This paper incorporates the effects of four types of human error in a model for determining the optimal time between periodic inspections which maximizes the steady-state availability for standby safety systems. Such safety systems are characteristic of nuclear power plant operations. The system is modeled by means of an infinite state-space Markov chain. The purpose of the paper is to demonstrate techniques for computing the steady-state availability A and the optimal periodic inspection interval tau* for the system. The model can be used to investigate the effects of human error probabilities on optimal availability, study the benefits of annunciating the standby system, and determine optimal inspection intervals. Several examples which are representative of nuclear power plant applications are presented
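
    The trade-off behind an optimal inspection interval can be illustrated with a much cruder model than the paper's infinite-state Markov chain: undetected random failures accumulate between tests, each test takes the system out of service, and a botched test (human error) leaves the system down until the next one. A sketch under those simplifying assumptions, with illustrative rates:

```python
from scipy.optimize import minimize_scalar

LAM = 1e-4     # standby failure rate (per hour)
T_TEST = 2.0   # downtime per inspection (hours)
P_HE = 1e-2    # probability a test is botched, leaving the system down

def unavailability(tau):
    """Average unavailability vs inspection interval tau (hours):
    lam*tau/2 for undetected failures, t_test/tau for test downtime,
    plus p_he for outages that persist until the next test (a constant
    here, so it shifts the level but not the optimum in this crude model)."""
    return LAM * tau / 2 + T_TEST / tau + P_HE

res = minimize_scalar(unavailability, bounds=(10, 5000), method="bounded")
print(f"tau* ~ {res.x:.0f} h, availability ~ {1 - res.fun:.4f}")
```

    In this simplified model the optimum reduces to the classic tau* = sqrt(2·t_test/lambda), about 200 h here; the paper's Markov treatment captures interactions between human error and the inspection cycle that this constant term ignores.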

  13. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Sterling, D; Ehler, E [University of Minnesota, Minneapolis, MN (United States)

    2015-06-15

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing.
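
    The Gamma Index used for the pass rates combines a dose-difference criterion with a distance-to-agreement criterion; a point passes when the minimum combined metric over the reference distribution is ≤ 1. Below is a minimal 1-D sketch with the 3%/3 mm global criterion from the abstract, on synthetic profiles:

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, dx_mm, dd=0.03, dta_mm=3.0):
    """1-D global gamma index (3%/3mm by default, no dose threshold).
    The dose criterion is relative to the reference maximum."""
    x = np.arange(len(dose_ref)) * dx_mm
    d_norm = dd * dose_ref.max()
    gammas = np.empty(len(dose_eval))
    for i in range(len(dose_eval)):
        dist2 = ((x - x[i]) / dta_mm) ** 2
        dose2 = ((dose_ref - dose_eval[i]) / d_norm) ** 2
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return gammas

x = np.linspace(-30, 30, 121)              # 0.5 mm grid
ref = np.exp(-x**2 / 200)                  # synthetic reference profile
ev = np.exp(-(x - 1.0)**2 / 200)           # evaluated profile with a 1 mm shift
g = gamma_1d(ref, ev, dx_mm=0.5)
print(f"pass rate = {100 * np.mean(g <= 1):.1f}%")
```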

  14. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    International Nuclear Information System (INIS)

    Sterling, D; Ehler, E

    2015-01-01

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing

  15. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains.

    Science.gov (United States)

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-05-01

    Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ

  16. Determining long-term regional erosion rates using impact craters

    Science.gov (United States)

    Hergarten, Stefan; Kenkmann, Thomas

    2015-04-01

    More than 300,000 impact craters have been found on Mars, while the surface of the Moon's highlands is even saturated with craters. In contrast, only 184 impact craters have been confirmed on Earth so far, with only 125 of them exposed at the surface. The spatial distribution of these impact craters is highly inhomogeneous. Besides the large variation in the age of the crust, consumption of craters by erosion and burial by sediments are the main factors responsible for the quite small and inhomogeneous crater record. In this study we present a novel approach to infer long-term average erosion rates at regional scales from the terrestrial crater inventory. The basic idea behind this approach is a dynamic equilibrium between the production of new craters and their consumption by erosion. It is assumed that each crater remains detectable until the total erosion after the impact exceeds a characteristic depth depending on the crater's diameter. Combining this model with the terrestrial crater production rate, i.e., the number of craters per unit area and time as a function of their diameter, allows for a prediction of the expected number of craters in a given region as a function of the erosion rate. Using the real crater inventory, this relationship can be inverted to determine the regional long-term erosion rate and its statistical uncertainty. A limitation by the finite age of the crust can also be taken into account. Applying the method to the Colorado Plateau and the Deccan Traps, both being regions with a distinct geological history, yields erosion rates in excellent agreement with those obtained by other, more laborious methods. However, these rates are formally exposed to large statistical uncertainties due to the small number of impact craters. As higher crater densities are related to lower erosion rates, smaller statistical errors can be expected when large regions in old parts of the crust are considered. Very low long-term erosion rates of less than 4
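
    The dynamic-equilibrium idea can be made concrete: if a crater of diameter D stays recognizable until cumulative erosion reaches a depth of roughly k·D, its retention time at erosion rate ε is k·D/ε (capped by the crust age), and integrating the production rate over diameters gives the expected crater count; inverting the observed count yields ε. The sketch below follows this logic with placeholder values for the area, production law and depth factor, which are not the paper's calibration:

```python
import numpy as np
from scipy.optimize import brentq

AREA = 3e5        # region area (km^2), illustrative
T_CRUST = 5e8     # crust age (yr), illustrative
K_DEPTH = 0.1     # crater assumed erased once erosion depth reaches ~0.1*D

def production_density(D):
    """Craters per km^2 per yr per km of diameter (toy power law)."""
    return 1e-12 * D ** -2.8

def expected_count(eps_km_per_yr):
    """Expected number of recognizable craters at a given erosion rate."""
    D = np.linspace(1.0, 50.0, 500)
    retention = np.minimum(K_DEPTH * D / eps_km_per_yr, T_CRUST)
    return AREA * np.sum(production_density(D) * retention) * (D[1] - D[0])

N_OBS = 5         # observed craters in the region
eps = brentq(lambda e: expected_count(e) - N_OBS, 1e-12, 1e-3)
print(f"erosion rate ~ {eps * 1e9:.1f} m/Myr")
```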

  17. MONETARY MODELS AND EXCHANGE RATE DETERMINATION ...

    African Journals Online (AJOL)

    Purchasing Power Parity [PPP] based on the law of one price asserts that the change in the exchange rate between .... exchange in international economic transactions has made it vitally evident that the management of ... One lesson from this episode is to ...

  18. Investigation on coupling error characteristics in angular rate matching based ship deformation measurement approach

    Science.gov (United States)

    Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang

    2018-01-01

    The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of the coupling error. In this paper, a comprehensive investigation of the coupling error is reported, which has the potential of enabling its reduction in the future. Firstly, the causes and characteristics of the coupling error are analyzed theoretically based on the basic theory of measuring ship deformation. Then, simulations are conducted to verify the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and that the coupling error increases with the correlation value between them. All the simulation results coincide with the theoretical analysis.

  19. Enzymatic spectrophotometric reaction rate determination of aspartame

    Directory of Open Access Journals (Sweden)

    Trifković Kata T.

    2015-01-01

    Full Text Available Aspartame is an artificial sweetener of low caloric value (approximately 200 times sweeter than sucrose). Aspartame is currently permitted for use in food and beverage production in more than 90 countries. The application of aspartame in food products requires the development of a rapid, inexpensive and accurate method for its determination. The new assay for the determination of aspartame was based on a set of reactions catalyzed by three different enzymes: α-chymotrypsin, alcohol oxidase and horseradish peroxidase. Optimization of the proposed method was carried out for: (i) α-chymotrypsin activity; (ii) time allowed for α-chymotrypsin action; (iii) temperature. Evaluation of the developed method was done by determining the aspartame content in “diet” drinks, as well as in artificial sweetener pills. [Project of the Ministry of Science of the Republic of Serbia, No. III46010]

  20. Analysis and Compensation of Modulation Angular Rate Error Based on Missile-Borne Rotation Semi-Strapdown Inertial Navigation System

    Directory of Open Access Journals (Sweden)

    Jiayu Zhang

    2018-05-01

    Full Text Available The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technique called the Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into SINS. In practice, the stability of the modulation angular rate is difficult to achieve in a high-speed rotation environment, and the changing rotary angular rate has an impact on the inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including the acceleration-deceleration process and the instability of the angular rate, on the navigation accuracy of RSSINS is deduced, and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to make it possible to maintain high-precision autonomous navigation performance by MIMU when there is no external aid. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions.

  1. Quantitative comparison of errors in 15N transverse relaxation rates measured using various CPMG phasing schemes

    International Nuclear Information System (INIS)

    Myint Wazo; Cai Yufeng; Schiffer, Celia A.; Ishima, Rieko

    2012-01-01

    Nitrogen-15 Carr-Purcell-Meiboom-Gill (CPMG) transverse relaxation experiments are widely used to characterize protein backbone dynamics and chemical exchange parameters. Although an accurate value of the transverse relaxation rate, R2, is needed for accurate characterization of dynamics, the uncertainty in the R2 value depends on the experimental settings and the details of the data analysis itself. Here, we present an analysis of the impact of CPMG pulse phase alternation on the accuracy of the 15N CPMG R2. Our simulations show that R2 can be obtained accurately for a relatively wide spectral width, either using the conventional phase cycle or using phase alternation, when the r.f. pulse power is accurately calibrated. However, when the r.f. pulse is miscalibrated, the conventional CPMG experiment exhibits more significant uncertainties in R2, caused by the off-resonance effect, than does the phase alternation experiment. Our experiments show that this effect becomes manifest when the systematic error exceeds that arising from experimental noise. Furthermore, our results provide the means to estimate practical parameter settings that yield accurate values of 15N transverse relaxation rates in both CPMG experiments.
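    For readers unfamiliar with how R2 is extracted in practice, the following minimal sketch (not the authors' analysis code) fits the monoexponential decay I(T) = I0·exp(−R2·T) to synthetic CPMG peak intensities; the delays and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic relaxation delays [s] and peak intensities following
# I(T) = I0 * exp(-R2 * T) with 1% noise; the true R2 here is 12 1/s.
T = np.array([0.0, 0.02, 0.04, 0.08, 0.12])
I = 1e6 * np.exp(-12.0 * T) * (1 + 0.01 * rng.standard_normal(T.size))

# Linearized least-squares fit: ln(I) = ln(I0) - R2 * T
slope, intercept = np.polyfit(T, np.log(I), 1)
print(f"R2 = {-slope:.2f} 1/s")
```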

  2. Power penalties for multi-level PAM modulation formats at arbitrary bit error rates

    Science.gov (United States)

    Kaliteevskiy, Nikolay A.; Wood, William A.; Downie, John D.; Hurley, Jason; Sterlingov, Petr

    2016-03-01

    There is considerable interest in combining multi-level pulsed amplitude modulation formats (PAM-L) and forward error correction (FEC) in next-generation, short-range optical communications links for increased capacity. In this paper we derive new formulas for the optical power penalties due to modulation format complexity relative to PAM-2 and due to inter-symbol interference (ISI). We show that these penalties depend on the required system bit-error rate (BER) and that the conventional formulas overestimate link penalties. Our corrections to the standard formulas are very small at conventional BER levels (typically 1×10⁻¹²) but become significant at the higher BER levels enabled by FEC technology, especially for signal distortions due to ISI. The standard formula for format complexity, P = 10log(L-1), is shown to overestimate the actual penalty for PAM-4 and PAM-8 by approximately 0.1 and 0.25 dB respectively at 1×10⁻³ BER. Then we extend the well-known PAM-2 ISI penalty estimation formula from the IEEE 802.3 standard 10G link modeling spreadsheet to the large-BER case and generalize it for arbitrary PAM-L formats. To demonstrate and verify the BER dependence of the ISI penalty, a set of PAM-2 experiments and Monte-Carlo modeling simulations are reported. The experimental results and simulations confirm that the conventional formulas can significantly overestimate ISI penalties at relatively high BER levels. In the experiments, overestimates up to 2 dB are observed at 1×10⁻³ BER.
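    The BER-dependent correction to the conventional complexity penalty can be reproduced from the standard Gray-coded PAM-L BER model, BER ≈ (2(1−1/L)/log2 L)·Q(x). The sketch below is a hedged reconstruction along those lines, not the paper's own derivation; for a target BER of 1×10⁻³ it yields overestimates of roughly 0.1 dB (PAM-4) and 0.25 dB (PAM-8), consistent with the figures quoted above.

```python
import numpy as np
from scipy.stats import norm

def pam_penalty_db(L, ber):
    """BER-dependent optical power penalty of PAM-L relative to PAM-2 [dB]."""
    mult = 2.0 * (1.0 - 1.0 / L) / np.log2(L)   # bit-error multiplicity factor
    x_L = norm.isf(ber / mult)                  # Q-argument required for PAM-L
    x_2 = norm.isf(ber)                         # Q-argument required for PAM-2
    return 10.0 * np.log10((L - 1) * x_L / x_2)

for L in (4, 8):
    conventional = 10.0 * np.log10(L - 1)       # the standard P = 10*log10(L-1)
    overestimate = conventional - pam_penalty_db(L, 1e-3)
    print(f"PAM-{L}: conventional formula overestimates by {overestimate:.2f} dB")
```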

  3. Pattern placement errors: application of in-situ interferometer-determined Zernike coefficients in determining printed image deviations

    Science.gov (United States)

    Roberts, William R.; Gould, Christopher J.; Smith, Adlai H.; Rebitz, Ken

    2000-08-01

    Several ideas have recently been presented which attempt to measure and predict lens aberrations for new low-k1 imaging systems. Abbreviated sets of Zernike coefficients have been produced and used to predict Across Chip Linewidth Variation. The wavefront aberrations can now be used empirically in commercially available lithography simulators to predict pattern distortion and placement errors. Measurement and determination of Zernike coefficients has been a significant effort of many groups. However, the use of these data has generally been limited to matching lenses or picking best-fit lens pairs. We will use wavefront aberration data collected using the Litel InspecStep in-situ interferometer as input data for Prolith/3D to model and predict pattern placement errors and intrafield overlay variation. Experimental data will be collected and compared to the simulated predictions.
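    As a toy illustration of the kind of input such simulators consume, the sketch below evaluates a wavefront from a handful of low-order Zernike terms on a unit pupil. The coefficients are invented; real ones would come from the in-situ interferometer measurement.

```python
import numpy as np

rho, theta = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 2 * np.pi, 256))

def wavefront(coeffs):
    """Sum a few named low-order Zernike polynomials over the pupil grid."""
    z = {
        "tilt_x":  rho * np.cos(theta),
        "tilt_y":  rho * np.sin(theta),
        "defocus": 2 * rho**2 - 1,
        "coma_x":  (3 * rho**3 - 2 * rho) * np.cos(theta),
    }
    return sum(c * z[name] for name, c in coeffs.items())

# Hypothetical coefficients in waves; a simulator would take these per field point
W = wavefront({"defocus": 0.05, "coma_x": 0.02})
print("wavefront peak-to-valley:", W.max() - W.min(), "waves")
```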

  4. 7 CFR 1714.5 - Determination of interest rates on municipal rate loans.

    Science.gov (United States)

    2010-01-01

    Title 7 (Agriculture), vol. 11, 2010-01-01 edition; General; § 1714.5 Determination of interest rates on municipal rate loans: (a) RUS will post on the RUS website, Electric Program HomePage, a schedule of interest rates for municipal rate loans at the beginning...

  5. Autonomous orbit determination and its error analysis for deep space using X-ray pulsar

    International Nuclear Information System (INIS)

    Feng, Dongzhu; Yuan, Xiaoguang; Guo, Hehe; Wang, Xin

    2014-01-01

    Autonomous orbit determination (OD) is a complex process that uses a filtering method to integrate observations and an orbit dynamics model effectively and to estimate the position and velocity of a spacecraft. As a novel technology for autonomous interplanetary OD, the X-ray pulsar holds great promise for deep space exploration. The position and velocity of the spacecraft should be estimated accurately during the OD process. However, under the same conditions, the accuracy of OD can be greatly reduced by errors in the initial orbit value and by orbit mutations. To resolve this problem, we propose a novel OD method based on X-ray pulsar measurements and an Adaptive Unscented Kalman Filter (AUKF). The accuracy of OD is improved appreciably because the AUKF estimates the orbit of the spacecraft using the measurement residuals. In the simulations, the orbits of the Phoenix Mars lander, the Deep Impact probe, and Voyager 1 are used. Compared with the Unscented Kalman Filter (UKF) and the Extended Kalman Filter (EKF), the simulation results demonstrate that the proposed OD method based on the AUKF can accurately determine the velocity and position and effectively decrease the orbit estimation errors caused by orbit mutations and initial orbit errors. (authors)
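    The adaptive ingredient of an AUKF can be sketched independently of the full filter. The fragment below shows one common residual-based adaptation idea, inflating the measurement-noise covariance when the windowed innovation statistics exceed the filter's prediction; it illustrates the general technique, not the authors' specific AUKF, and all names are hypothetical.

```python
import numpy as np

def adapt_R(R, innovations, S_pred):
    """Inflate the measurement-noise covariance R when the windowed innovation
    covariance exceeds the filter-predicted innovation covariance S_pred."""
    C_hat = np.atleast_2d(np.cov(np.asarray(innovations).T))
    alpha = max(1.0, np.trace(C_hat) / np.trace(np.atleast_2d(S_pred)))
    return alpha * np.asarray(R)   # inflate only; never let noisy residuals shrink R

# Intended use inside a (U)KF loop, per epoch (names hypothetical):
#   nu = z - z_pred                  # innovation (measurement residual)
#   window.append(nu)                # keep a sliding window of recent residuals
#   R = adapt_R(R, window, S_pred)   # before computing the Kalman gain
```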

  6. The Effect of Exposure to High Noise Levels on the Performance and Rate of Error in Manual Activities.

    Science.gov (United States)

    Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra

    2016-03-01

    Sound is among the significant environmental factors affecting people's health; it plays an important role in both physical and psychological injuries, and it also affects individuals' performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on the performance and rate of error in manual activities. This was an interventional study conducted on 50 students at Shiraz University of Medical Sciences (25 males and 25 females) in which each person served as his or her own control to assess the effect of noise on performance at sound levels of 70, 90, and 110 dB, using two factors of physical features and different conditions of the sound source, as well as applying the Two-Arm Coordination Test. The data were analyzed using SPSS version 16. Repeated measurements were used to compare the duration of performance as well as the errors measured in the test. Based on the results, we found a direct and significant association between the sound level and the duration of performance. Moreover, the participants' performance was significantly different across sound levels (at 110 dB as opposed to 70 and 90 dB, p < 0.05 and p < 0.001, respectively). This study found that a sound level of 110 dB had an important effect on individuals' performance, i.e., performance deteriorated.

  7. Comparison of the Bit Error Rate of Reed-Solomon Codes and Bose-Chaudhuri-Hocquenghem Codes Using 32-FSK Modulation

    Directory of Open Access Journals (Sweden)

    Eva Yovita Dwi Utami

    2016-11-01

    Full Text Available Reed-Solomon (RS) codes and Bose-Chaudhuri-Hocquenghem (BCH) codes are error-correcting codes of the cyclic block code family. Error-correcting codes are needed in communication systems to reduce errors in the transmitted information. This paper presents the measured BER performance of communication systems using an RS code, a BCH code, and no coding, with 32-FSK modulation over Additive White Gaussian Noise (AWGN), Rayleigh and Rician channels. The error-correcting capability is measured by the resulting Bit Error Rate (BER). The results show that, as SNR increases, the RS code lowers the BER more steeply than the BCH-coded system, whereas the BCH code is superior at low SNR, yielding a better BER than the RS-coded system.

  8. Assessment of the rate and etiology of pharmacological errors by nurses of two major teaching hospitals in Shiraz

    Directory of Open Access Journals (Sweden)

    Fatemeh Vizeshfar

    2015-06-01

    Full Text Available Medication errors have serious consequences for patients, their families and caregivers. Reduction of these faults by caregivers such as nurses can increase patient safety. The goal of the study was to assess the rate and etiology of medication errors in pediatric and adult medical wards. This cross-sectional analytic study was conducted on 101 registered nurses responsible for drug administration in pediatric and adult medical wards. Data were collected by a questionnaire including demographic information, self-reported faults, etiology of medication errors, and researcher observations. The results showed that nurses' fault rates were 51.6% in pediatric wards and 47.4% in adult wards. The most common faults in adult wards were administering drugs later or earlier than scheduled (48.6%), while administration of drugs without prescription and administering wrong drugs were the most common medication errors in pediatric wards (49.2% each). According to the researchers' observations, the medication error rate of 57.9% in adult wards was rated low, and the rate of 69.4% in pediatric wards was rated moderate. The most frequent medication error in both adult and pediatric wards was that nurses did not explain the reason for and type of drug they were administering to patients. An independent t-test showed a significant difference in fault observations in pediatric wards (p=0.000) and in adult wards (p=0.000). Several studies have reported medication errors all over the world, especially in pediatric wards. However, by designing a suitable reporting system and using a multidisciplinary approach, the occurrence of medication errors and their negative consequences can be reduced.

  9. Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.

    Science.gov (United States)

    Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

    2013-08-01

    Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.

  10. 5 CFR 304.104 - Determining rate of pay.

    Science.gov (United States)

    2010-01-01

    Title 5 (Administrative Personnel), vol. 1, 2010-01-01 edition; ...CONSULTANT APPOINTMENTS; § 304.104 Determining rate of pay: (a) The rate of basic pay for experts and... appropriate rate of basic pay on an hourly or daily basis, subject to the limitations described in section 304...

  11. Impact of catheter reconstruction error on dose distribution in high dose rate intracavitary brachytherapy and evaluation of OAR doses

    International Nuclear Information System (INIS)

    Thaper, Deepak; Shukla, Arvind; Rathore, Narendra; Oinam, Arun S.

    2016-01-01

    In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this study is to evaluate the impact of catheter reconstruction errors on the dose distribution in CT-based intracavitary brachytherapy planning, and to evaluate their effect on organs at risk (OARs), such as the bladder, rectum and sigmoid, and on the target volume, the high-risk clinical target volume (HR-CTV).

  12. Soft error rate estimations of the Kintex-7 FPGA within the ATLAS Liquid Argon (LAr) Calorimeter

    International Nuclear Information System (INIS)

    Wirthlin, M J; Harding, A; Takai, H

    2014-01-01

    This paper summarizes the radiation testing performed on the Xilinx Kintex-7 FPGA in an effort to determine whether the Kintex-7 can be used within the ATLAS Liquid Argon (LAr) Calorimeter. The Kintex-7 device was tested with wide-spectrum neutrons, protons, heavy ions, and mixed high-energy hadron environments. The results of these tests were used to estimate the configuration RAM and block RAM upset rates within the ATLAS LAr. These estimations suggest that the configuration memory will upset at a rate of 1.1 × 10⁻¹⁰ upsets/bit/s and the block RAM will upset at a rate of 9.06 × 10⁻¹¹ upsets/bit/s. For the Kintex 7K325 device, this translates to 6.85 × 10⁻³ upsets/device/s for configuration memory and 1.49 × 10⁻³ upsets/device/s for block memory.
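    The quoted per-device figures follow from the per-bit rates by simple multiplication. The check below infers the implied bit counts from the quoted numbers themselves (they are not taken from the Xilinx datasheet) and converts the device rates into a mean time between upsets.

```python
# Implied bit counts, inferred from the quoted rates themselves:
config_bits = 6.85e-3 / 1.1e-10      # ~6.2e7 configuration bits
bram_bits = 1.49e-3 / 9.06e-11       # ~1.6e7 block-RAM bits
print(f"config bits ~ {config_bits:.2e}, BRAM bits ~ {bram_bits:.2e}")

# Mean time between upsets per device, from the quoted device rates:
for name, rate in [("config", 6.85e-3), ("bram", 1.49e-3)]:
    print(f"{name}: one upset every {1.0 / rate:.0f} s on average")
```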

  13. Time Domain Equalizer Design Using Bit Error Rate Minimization for UWB Systems

    Directory of Open Access Journals (Sweden)

    Syed Imtiaz Husain

    2009-01-01

    Full Text Available Ultra-wideband (UWB) communication systems occupy huge bandwidths with very low power spectral densities. This feature makes UWB channels highly rich in resolvable multipaths. To exploit the temporal diversity, the receiver is commonly implemented as a Rake. The aim of capturing enough signal energy to maintain an acceptable output signal-to-noise ratio (SNR) dictates a very complicated Rake structure with a large number of fingers. Channel shortening, or a time domain equalizer (TEQ), can simplify the Rake receiver design by reducing the number of significant taps in the effective channel. In this paper, we first derive the bit error rate (BER) of a multiuser, multipath UWB system in the presence of a TEQ at the receiver front end. This BER is then written in a form suitable for traditional optimization. We then present a TEQ design which minimizes the BER of the system to perform efficient channel shortening. The performance of the proposed algorithm is compared with some generic TEQ designs and other Rake structures in UWB channels. It is shown that the proposed algorithm maintains a lower BER along with efficiently shortening the channel.
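    As a point of reference for what "generic TEQ designs" look like, the sketch below implements a standard maximum shortening-SNR (MSSNR) channel shortener, which concentrates the energy of the effective channel h∗w inside a short window via a generalized eigenproblem. This is a textbook construction, not the BER-minimizing design proposed in the paper; the channel and dimensions are synthetic.

```python
import numpy as np
from scipy.linalg import eigh, toeplitz

def mssnr_teq(h, Lw, win_len, delay):
    """Max shortening-SNR TEQ: concentrate energy of h*w in a short window."""
    Lh = len(h)
    # (Lh+Lw-1) x Lw convolution matrix, so that H @ w == np.convolve(h, w)
    H = toeplitz(np.r_[h, np.zeros(Lw - 1)], np.r_[h[0], np.zeros(Lw - 1)])
    inside = np.zeros(Lh + Lw - 1, dtype=bool)
    inside[delay:delay + win_len] = True
    B = H[inside].T @ H[inside]        # energy inside the target window
    A = H[~inside].T @ H[~inside]      # energy outside (the "wall")
    # Generalized eigenproblem B v = lambda A v; take the dominant eigenvector
    vals, vecs = eigh(B, A + 1e-12 * np.eye(Lw))
    return vecs[:, -1]

rng = np.random.default_rng(0)
h = rng.standard_normal(64) * np.exp(-0.1 * np.arange(64))   # toy UWB-like CIR
w = mssnr_teq(h, Lw=16, win_len=8, delay=4)
c = np.convolve(h, w)
print("fraction of energy in window:", np.sum(c[4:12] ** 2) / np.sum(c ** 2))
```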

  14. DETERMINING STAR FORMATION RATES FOR INFRARED GALAXIES

    International Nuclear Information System (INIS)

    Rieke, G. H.; Weiner, B. J.; Perez-Gonzalez, P. G.; Donley, J. L.; Alonso-Herrero, A.; Blaylock, M.; Marcillac, D.

    2009-01-01

    We show that measures of star formation rates (SFRs) for infrared galaxies using either single-band 24 μm or extinction-corrected Paα luminosities are consistent in the total infrared luminosity range L(TIR) ∼ 10¹⁰ L⊙. MIPS 24 μm photometry can yield SFRs accurately from this luminosity upward: SFR(M⊙ yr⁻¹) = 7.8 × 10⁻¹⁰ L(24 μm, L⊙) from L(TIR) = 5 × 10⁹ L⊙ to 10¹¹ L⊙, and SFR = 7.8 × 10⁻¹⁰ L(24 μm, L⊙)(7.76 × 10⁻¹¹ L(24))^0.048 for higher L(TIR). For galaxies with L(TIR) ≥ 10¹⁰ L⊙, these new expressions should provide SFRs to within 0.2 dex. For L(TIR) ≥ 10¹¹ L⊙, we find that the SFR of infrared galaxies is significantly underestimated using extinction-corrected Paα (and presumably using any other optical or near-infrared recombination lines). As part of this work, we constructed spectral energy distribution templates for eleven luminous and ultraluminous purely star-forming infrared galaxies over the spectral range 0.4 μm to 30 cm. We use these templates and the SINGS data to construct average templates from 5 μm to 30 cm for infrared galaxies with L(TIR) = 5 × 10⁹ to 10¹³ L⊙. All of these templates are made available online.
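    The two calibrations quoted above combine into a simple piecewise function; the transcription below assumes luminosities in solar units and switches branches at L(TIR) = 10¹¹ L⊙ as stated.

```python
def sfr_from_l24(l24, l_tir):
    """SFR [M_sun/yr] from rest-frame 24 um luminosity (in solar units),
    per the calibrations quoted above; valid for L(TIR) >~ 5e9 L_sun."""
    if l_tir <= 1e11:
        return 7.8e-10 * l24
    return 7.8e-10 * l24 * (7.76e-11 * l24) ** 0.048

print(sfr_from_l24(5e9, 1e10))   # ~3.9 M_sun/yr for an illustrative galaxy
```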

  15. Student laboratory experiments exploring optical fibre communication systems, eye diagrams, and bit error rates

    Science.gov (United States)

    Walsh, Douglas; Moodie, David; Mauchline, Iain; Conner, Steve; Johnstone, Walter; Culshaw, Brian

    2005-06-01

    Optical fibre communications has proved to be one of the key application areas which created, and ultimately propelled, the global growth of the photonics industry over the last twenty years. Consequently, the teaching of the principles of optical fibre communications has become integral to many university courses covering photonics technology. However, to reinforce the fundamental principles and key technical issues students examine in their lecture courses, and to develop their experimental skills, it is critical that the students also obtain hands-on practical experience of photonics components, instruments and systems in an associated teaching laboratory. In recognition of this need, OptoSci, in collaboration with university academics, commercially developed a fibre optic communications based educational package (ED-COM). This educator kit enables students to: investigate the characteristics of the individual communications system components (sources, transmitters, fibre, receiver); examine and interpret the overall system performance limitations imposed by attenuation and dispersion; and conduct system design and performance analysis. To further enhance the experimental programme examined in the fibre optic communications kit, an extension module to ED-COM has recently been introduced examining one of the most significant performance parameters of digital communications systems, the bit error rate (BER). This add-on module, BER(COM), enables students to generate, evaluate and investigate signal quality trends by examining eye patterns, and to explore the bit-rate limitations imposed on communication systems by noise, attenuation and dispersion. This paper will examine the educational objectives, background theory, and typical results for these educator kits, with particular emphasis on BER(COM).

  16. Error analysis of satellite attitude determination using a vision-based approach

    Science.gov (United States)

    Carozza, Ludovico; Bevilacqua, Alessandro

    2013-09-01

    Improvements in communication and processing technologies have opened the door to exploiting on-board cameras to compute objects' spatial attitude using only the visual information from sequences of remotely sensed images. The strategies and the algorithmic approach used to extract such information affect the estimation accuracy of the three-axis orientation of the object. This work presents a method for analyzing the most relevant error sources, including numerical ones, possible drift effects and their influence on the overall accuracy, referring to vision-based approaches. The method focuses in particular on the analysis of the image registration algorithm, carried out through purpose-built simulations. The overall accuracy has been assessed on a challenging case study, for which accuracy represents the fundamental requirement. In particular, attitude determination has been analyzed for small satellites, by comparing theoretical findings to metric results from simulations on realistic ground-truth data. Significant laboratory experiments, using a numerical control unit, have further confirmed the outcome. We believe that our analysis approach, as well as our findings in terms of error characterization, can be useful at proof-of-concept design and planning levels, since they emphasize the main sources of error for vision-based approaches employed for satellite attitude estimation. Nevertheless, the approach we present is also of general interest for related application domains which require an accurate estimation of three-dimensional orientation parameters (e.g., robotics, airborne stabilization).

  17. [The effectiveness of error reporting promoting strategy on nurse's attitude, patient safety culture, intention to report and reporting rate].

    Science.gov (United States)

    Kim, Myoungsoo

    2010-04-01

    The purpose of this study was to examine the impact of strategies to promote reporting of errors on nurses' attitude to reporting errors, organizational culture related to patient safety, intention to report, and reporting rate in hospital nurses. A nonequivalent control group non-synchronized design was used for this study. The program was developed and then administered to the experimental group for 12 weeks. Data were analyzed using descriptive analysis, χ²-test, t-test, and ANCOVA with the SPSS 12.0 program. After the intervention, the experimental group showed significantly higher scores for nurses' attitude to reporting errors (experimental: 20.73 vs control: 20.52, F=5.483, p=.021) and reporting rate (experimental: 3.40 vs control: 1.33, F=1998.083, p<.001); no significant differences were found for organizational culture and intention to report. The study findings indicate that strategies that promote reporting of errors play an important role in producing positive attitudes to reporting errors and improving reporting behavior. Further advanced strategies for reporting errors that can lead to improved patient safety should be developed and applied in a broad range of hospitals.

  18. Residents' Ratings of Their Clinical Supervision and Their Self-Reported Medical Errors: Analysis of Data From 2009.

    Science.gov (United States)

    Baldwin, DeWitt C; Daugherty, Steven R; Ryan, Patrick M; Yaghmour, Nicholas A; Philibert, Ingrid

    2018-04-01

    Medical errors and patient safety are major concerns for the medical and medical education communities. Improving clinical supervision for residents is important in avoiding errors, yet little is known about how residents perceive the adequacy of their supervision and how this relates to medical errors and other education outcomes, such as learning and satisfaction. We analyzed data from a 2009 survey of residents in 4 large specialties regarding the adequacy and quality of supervision they receive, as well as associations with self-reported data on medical errors and residents' perceptions of their learning environment. Residents' reports of working without adequate supervision were lower than in a 1999 survey for all 4 specialties, and residents were least likely to rate "lack of supervision" as a problem. While few residents reported that they received inadequate supervision, problems with supervision were negatively correlated with sufficient time for clinical activities, overall ratings of the residency experience, and attending physicians as a source of learning. Problems with supervision were positively correlated with resident reports that they had made a significant medical error, had been belittled or humiliated, or had observed others falsifying medical records. Although working without supervision was not a pervasive problem in 2009, when it happened, it appeared to have negative consequences. The association between inadequate supervision and medical errors is of particular concern.

  19. Monetary models and exchange rate determination: The Nigerian ...

    African Journals Online (AJOL)

    Monetary models and exchange rate determination: The Nigerian evidence. ... income levels and real interest rate differentials provide better forecasts of the ... partner can expect to suffer depreciation in the external value of her currency.

  20. Attitudes of Mashhad Public Hospital's Nurses and Midwives toward the Causes and Rates of Medical Errors Reporting.

    Science.gov (United States)

    Mobarakabadi, Sedigheh Sedigh; Ebrahimipour, Hosein; Najar, Ali Vafaie; Janghorban, Roksana; Azarkish, Fatemeh

    2017-03-01

    Patient safety is one of the main objectives of healthcare services; however, medical errors are prevalent in treatment systems. Medical errors lead to an increase in patient mortality and to challenges such as prolonged inpatient stays and increased costs. Controlling medical errors is very important because, besides being costly, they threaten patient safety. The aim was to evaluate the attitudes of nurses and midwives toward the causes and rates of medical error reporting. It was a cross-sectional observational study. The study population was 140 midwives and nurses employed in Mashhad public hospitals. Data were collected using the revised Goldstone (2001) questionnaire. SPSS 11.5 software was used for data analysis. Descriptive and inferential statistics were applied: means, standard deviations and relative frequency distributions were calculated, and the results were presented as tables and charts. The chi-square test was used for the inferential analysis of the data. Most of the midwives and nurses (39.4%) were in the age range of 25 to 34 years, and the lowest percentage (2.2%) were in the age range of 55-59 years. The highest average number of medical errors was related to employees with three to four years of work experience, while the lowest average was related to those with one to two years of work experience. The highest average number of medical errors occurred during the evening shift, and the lowest during the night shift. Three main causes of medical errors were identified: illegible physician prescription orders, similarity of drug names, and nurse fatigue. The most important causes of medical errors from the viewpoints of nurses and midwives are illegible physician orders, drug name similarity, nurse fatigue, and damaged drug labels or packaging, respectively. Head nurse feedback, peer

  1. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    International Nuclear Information System (INIS)

    Yamanashi, Yuki; Masubuchi, Kota; Yoshikawa, Nobuyuki

    2016-01-01

    The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.

  2. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    Energy Technology Data Exchange (ETDEWEB)

    Yamanashi, Yuki, E-mail: yamanasi@ynu.ac.jp [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan); Masubuchi, Kota; Yoshikawa, Nobuyuki [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan)

    2016-11-15

    The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.
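    The margin-versus-error-rate relationship described in these two records can be reproduced at toy scale: if the effective set-up/hold time fluctuates with a Gaussian spread σ, the per-gate error probability for timing margin m is roughly the Gaussian tail Q(m/σ), and a 1-million-bit shift register multiplies that up. The numbers below are illustrative assumptions, not the paper's measured values.

```python
import math

def q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

sigma_ps = 0.8          # assumed thermal spread of set-up/hold time [ps]
n_gates = 1_000_000     # shift-register length, as in the case study

for margin_ps in (2, 3, 4, 5):
    p_gate = q(margin_ps / sigma_ps)                 # per-gate error probability
    p_circuit = 1.0 - (1.0 - p_gate) ** n_gates      # any-gate-fails probability
    print(f"margin {margin_ps} ps: circuit error probability ~ {p_circuit:.3g}")
```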

  3. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    International Nuclear Information System (INIS)

    Chau, H.F.

    2002-01-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme making use of an adaptive privacy amplification procedure with two-way classical communication is reported. It is then proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 − 0.1√5 ≈ 27.6%, thereby making it the most error-resistant scheme known to date.

  4. Determining The Factors Causing Human Error Deficiencies At A Public Utility Company

    Directory of Open Access Journals (Sweden)

    F. W. Badenhorst

    2004-11-01

    Full Text Available According to Neff (1977, as cited by Bergh, 1995), the westernised culture considers work important for industrial mental health. Most individuals experience work positively, which creates a positive attitude. Should this positive attitude be inhibited, workers could lose concentration and become bored, potentially resulting in some form of human error. The aim of this research was to determine the factors responsible for human error events which lead to power supply failures at Eskom power stations. Proposals were made for the reduction of these contributing factors towards improving plant performance. The target population was 700 panel operators in Eskom's Power Generation Group. The results showed that factors leading to human error can be reduced or even eliminated. Summary: Neff (1977, as cited by Bergh, 1995) writes that in the westernised culture work is important for occupational mental health. Most people experience work as positive, which fosters a positive attitude. If this positive attitude is inhibited, it can lead to a lack of concentration among workers. Workers can become bored, which in turn can lead to human errors. The purpose of this research is to determine the factors that give rise to human error and that contribute to interruptions in power supply at Eskom power stations. Proposals were made for reducing these contributing factors in order to improve plant performance. The target population was 700 panel operators in Eskom's Power Generation Group. The results indicate that the factors that give rise to human error can indeed be reduced or eliminated.

  5. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and the allocation rate to the treatment arms may be modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.

  6. Determining Sorption Rate by a Continuous Gravimetric Method

    National Research Council Canada - National Science Library

    Hall, Monicia R; Procell, Lawrence R; Bartram, Philip W; Shuely, Wendel J

    2003-01-01

    ... were automatically recorded in an Excel file while CARC coupons were submerged in solvent. Initial sorption rates were determined for butyl acetate, butyl ether, cyclohexane and propylene carbonate...

  7. Finding the right coverage : The impact of coverage and sequence quality on single nucleotide polymorphism genotyping error rates

    NARCIS (Netherlands)

    Fountain, Emily D.; Pauli, Jonathan N.; Reid, Brendan N.; Palsboll, Per J.; Peery, M. Zachariah

    Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown.

  8. Errors in the determination of the total filtration of diagnostic x-ray tubes by the HVL method

    International Nuclear Information System (INIS)

    Gilmore, B.J.; Cranley, K.

    1990-01-01

    Optimal technique and an analysis of errors are essential for interpreting whether the total filtration of a diagnostic x-ray tube is acceptable. The study discusses this problem from a theoretical viewpoint utilising recent theoretical HVL-total-filtration data relating to 10° and 16° tungsten target angles and 0-30% kilovoltage ripples. The theory indicates the typical accuracy to which each appropriate parameter must be determined to maintain acceptable errors in total filtration. A quantitative approach is taken to evaluate systematic errors in a technique for interpolation of HVL from raw attenuation curve data. A theoretical derivation is presented to enable random errors in HVL due to x-ray set inconsistency to be estimated for particular experimental techniques and data analysis procedures. Further formulae are presented to enable errors in the total filtration estimate to be readily determined from those in the individual parameters. (author)

  9. Evaluation of errors in determination of DNA melting curve registered with differential scanning calorimetry

    International Nuclear Information System (INIS)

    Lando, D.Y.; Fridman, A.S.; Galyuk, E.N.; Dalyan, Y.B.; Grigoryan, I.E.; Haroutiunian, S.G.

    2013-01-01

    Differential scanning calorimetry (DSC) is more sensitive than UV absorption spectrophotometry as a tool for the measurement of DNA melting curves. The advantage of DSC is the direct determination of differential melting curves (DMCs), obtained without numerical differentiation. However, the difference between the helix-coil transition enthalpies of AT and GC base pairs can cause distortions in the shape of the melting curve. To date, the errors caused by those distortions have not been evaluated. In this study, a simple procedure for recalculating a calorimetric DMC into a real DMC is developed. It demonstrates that the 'real' melting curve and differential melting curve deviate only slightly from the same two curves calculated from DSC data. The melting temperature and the temperature melting range are usually the same even if the difference in the enthalpies is several times higher than the real one.

  10. Error resilient H.264/AVC Video over Satellite for low Packet Loss Rates

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren; Andersen, Jakob Dahl

    2007-01-01

    The performance of video over satellite is simulated. The error resilience tools of intra-macroblock refresh and slicing are optimized for live broadcast video over satellite. The improved performance obtained by using feedback, via a cross-layer approach, over the satellite link is also simulated. The ne...

  11. SNP discovery in nonmodel organisms: strand bias and base-substitution errors reduce conversion rates.

    Science.gov (United States)

    Gonçalves da Silva, Anders; Barendse, William; Kijas, James W; Barris, Wes C; McWilliam, Sean; Bunch, Rowan J; McCullough, Russell; Harrison, Blair; Hoelzel, A Rus; England, Phillip R

    2015-07-01

    Single nucleotide polymorphisms (SNPs) have become the marker of choice for genetic studies in organisms of conservation, commercial or biological interest. Most SNP discovery projects in nonmodel organisms apply a strategy for identifying putative SNPs based on filtering rules that account for random sequencing errors. Here, we analyse data used to develop 4723 novel SNPs for the commercially important deep-sea fish, orange roughy (Hoplostethus atlanticus), to assess the impact of not accounting for systematic sequencing errors when filtering identified polymorphisms during SNP discovery. We used SAMtools to identify polymorphisms in a velvet assembly of genomic DNA sequence data from seven individuals. The resulting set of polymorphisms was filtered to minimize 'bycatch' (polymorphisms caused by sequencing or assembly error). An Illumina Infinium SNP chip was used to genotype a final set of 7714 polymorphisms across 1734 individuals. Five predictors were examined for their effect on the probability of obtaining an assayable SNP: depth of coverage, number of reads that support a variant, polymorphism type (e.g. A/C), strand bias and Illumina SNP probe design score. Our results indicate that filtering out systematic sequencing errors could substantially improve the efficiency of SNP discovery. We show that BLASTX can be used as an efficient tool to identify single-copy genomic regions in the absence of a reference genome. The results have implications for research aiming to identify assayable SNPs and build SNP genotyping assays for nonmodel organisms. © 2014 John Wiley & Sons Ltd.

  12. Sharp Threshold Detection Based on Sup-norm Error rates in High-dimensional Models

    DEFF Research Database (Denmark)

    Callot, Laurent; Caner, Mehmet; Kock, Anders Bredahl

    focused almost exclusively on estimation errors in stronger norms. We show that this sup-norm bound can be used to distinguish between zero and non-zero coefficients at a much finer scale than would have been possible using classical oracle inequalities. Thus, our sup-norm bound is tailored to consistent...

  13. On the Symbol Error Rate of M-ary MPSK over Generalized Fading Channels with Additive Laplacian Noise

    KAUST Repository

    Soury, Hamza

    2015-01-07

    This work considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox’s H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations [1].

  14. On the symbol error rate of M-ary MPSK over generalized fading channels with additive Laplacian noise

    KAUST Repository

    Soury, Hamza

    2014-06-01

    This paper considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox's H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations. © 2014 IEEE.

  15. On the symbol error rate of M-ary MPSK over generalized fading channels with additive Laplacian noise

    KAUST Repository

    Soury, Hamza; Alouini, Mohamed-Slim

    2014-01-01

    This paper considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox's H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations. © 2014 IEEE.

  16. On the Symbol Error Rate of M-ary MPSK over Generalized Fading Channels with Additive Laplacian Noise

    KAUST Repository

    Soury, Hamza; Alouini, Mohamed-Slim

    2015-01-01

    This work considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox’s H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations [1].
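    The "computer based simulations" used to validate such closed forms are typically plain Monte Carlo runs. The sketch below estimates the symbol error rate of MPSK with minimum-distance (nearest-phase) detection in additive complex Laplacian noise; fading and the extended Generalized-K model are omitted for brevity, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
M, n, snr_db = 8, 1_000_000, 15
Es = 1.0
N0 = Es / 10 ** (snr_db / 10)

sym_idx = rng.integers(0, M, n)
tx = np.exp(2j * np.pi * sym_idx / M)          # unit-energy MPSK symbols

# Complex Laplacian noise: independent Laplacian real/imag components,
# each with variance N0/2 (a Laplace(0, b) variate has variance 2*b^2).
b = np.sqrt(N0 / 4)
noise = rng.laplace(0, b, n) + 1j * rng.laplace(0, b, n)
rx = tx + noise

# Minimum-distance detection on the unit circle = nearest-phase decision
det = np.round(np.angle(rx) / (2 * np.pi / M)).astype(int) % M
print("SER ~", np.mean(det != sym_idx))
```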

  17. Determinants of Commercial banks' interest rate spreads in Botswana

    African Journals Online (AJOL)

    The paper investigated the determinants of commercial banks' interest rate ... Index (HHI), a proxy measure of the degree of competition in a market ... the difference between prime lending rate and the savings account rate. ... as profit maximizing firms whose primary business is to offer deposits and loan ..... and Accounting.

  18. Dose Rate Determination from Airborne Gamma-ray Spectra

    DEFF Research Database (Denmark)

    Bargholz, Kim

    1996-01-01

    The standard method for determining ground-level dose rates from airborne gamma-ray spectra is the integral count rate, which, for a constant flying altitude, is assumed to be proportional to the dose rate. The method gives reasonable results for natural radioactivity, which almost always has the same energy

  19. Determining the Exchange Rate: Purchasing Power Parity – PPP

    Directory of Open Access Journals (Sweden)

    Bangun WIDOYOKO

    2018-05-01

    Full Text Available This study examined the effect of inflation on the determination of the forward exchange rate for the RMB (Renminbi) to Rupiah exchange rate. Inflation was chosen as the independent variable because of its close relation to the PPP (purchasing power parity) theory. The analysis used logistic analysis with time series data. The data comprise exchange rates for the period 2007-2017, a sample of 132 observations. The results show that inflation is effective in determining the exchange rate.

  20. SU-G-BRB-03: Assessing the Sensitivity and False Positive Rate of the Integrated Quality Monitor (IQM) Large Area Ion Chamber to MLC Positioning Errors

    Energy Technology Data Exchange (ETDEWEB)

    Boehnke, E McKenzie; DeMarco, J; Steers, J; Fraass, B [Cedars-Sinai Medical Center, Los Angeles, CA (United States)

    2016-06-15

    Purpose: To examine both the IQM’s sensitivity and false positive rate to varying MLC errors. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An unmodified SBRT liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1 mm to ±5 mm). These unmodified and modified plans were each measured multiple times by the IQM (a large area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field’s delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed significant (p < 0.005) abilities to predict MLC errors. Using the area under the curve, we show the IQM’s ability to detect errors increases with increasing MLC error (Spearman’s Rho=0.8056, p<0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven to be able to detect not only MLC errors, but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.
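    The threshold-free part of such an analysis (ROC curves and the area under them) can be sketched generically. The code below uses synthetic stand-ins for per-segment IQM count deviations, not the abstract's data, and picks a working threshold via Youden's J statistic.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
dev_ok = rng.normal(0.0, 1.0, 200)     # count deviations [%], unmodified plans
dev_err = rng.normal(3.0, 1.5, 200)    # count deviations [%], error-injected plans

y = np.r_[np.zeros(200), np.ones(200)]           # 0 = unmodified, 1 = error
score = np.abs(np.r_[dev_ok, dev_err])           # |deviation| as the test statistic
print("AUC:", roc_auc_score(y, score))

fpr, tpr, thr = roc_curve(y, score)
best = np.argmax(tpr - fpr)                      # Youden's J picks a threshold
print("suggested |deviation| threshold:", thr[best], "%")
```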

  1. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    Science.gov (United States)

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
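    A rough analogue of the modeling approach, assuming the Python pygam package (the authors' own models may have been built elsewhere, e.g. in R): fit a GAM of development percentage on larval length and attach per-point prediction intervals. The data and the single-predictor setup are synthetic simplifications.

```python
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(3)
length = rng.uniform(2, 20, 500)                         # larval length [mm]
pct_dev = np.clip(5 * length + rng.normal(0, 8, 500), 0, 100)

# Smooth term on the single predictor (column 0)
gam = LinearGAM(s(0)).fit(length.reshape(-1, 1), pct_dev)

grid = np.linspace(2, 20, 50).reshape(-1, 1)
pred = gam.predict(grid)
ci = gam.prediction_intervals(grid, width=0.95)          # 95% prediction bands
print(pred[:3])
print(ci[:3])
```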

  2. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  3. Controlling type I error rate for fast track drug development programmes.

    Science.gov (United States)

    Shih, Weichung J; Ouyang, Peter; Quan, Hui; Lin, Yong; Michiels, Bart; Bijnens, Luc

    2003-03-15

    The U.S. Food and Drug Administration (FDA) Modernization Act of 1997 has a Section (No. 112) entitled 'Expediting Study and Approval of Fast Track Drugs' (the Act). In 1998, the FDA issued a 'Guidance for Industry: the Fast Track Drug Development Programs' (the FTDD programmes) to meet the requirement of the Act. The purpose of FTDD programmes is to 'facilitate the development and expedite the review of new drugs that are intended to treat serious or life-threatening conditions and that demonstrate the potential to address unmet medical needs'. Since then many health products have reached patients who suffered from AIDS, cancer, osteoporosis, and many other diseases, sooner by utilizing the Fast Track Act and the FTDD programmes. In the meantime several scientific issues have also surfaced when following the FTDD programmes. In this paper we will discuss the concept of two kinds of type I errors, namely, the 'conditional approval' and the 'final approval' type I errors, and propose statistical methods for controlling them in a new drug submission process. Copyright 2003 John Wiley & Sons, Ltd.

  4. Bit Error Rate Due to Misalignment of Earth Station Antenna Pointing to Satellite

    Directory of Open Access Journals (Sweden)

    Wahyu Pamungkas

    2010-04-01

    Full Text Available One problem causing a reduction of energy in satellite communications systems is the misalignment of the earth station antenna pointing to the satellite. Pointing errors degrade the bit energy of the information signal received at the earth station. In this research, the pointing angle error occurred only at the receiver (Rx) antenna, while the transmitter (Tx) antenna pointed precisely to the satellite. The research was conducted on two satellites, namely TELKOM-1 and TELKOM-2. First, a measurement was made by directing the Tx antenna precisely to the satellite, resulting in an antenna pattern shown by a spectrum analyzer. The output from the spectrum analyzer is drawn to scale to describe the shift of the azimuth and elevation pointing angles towards the satellite. Drifting from the precise pointing influenced the received link budget, as indicated by the antenna pattern. This antenna pattern shows the reduction of the received power level as a result of pointing misalignment. In conclusion, increasing misalignment of pointing to the satellite reduces the received signal parameters in the link budget of the down-link traffic.

  5. On the determinants of measurement error in time-driven costing

    NARCIS (Netherlands)

    Cardinaels, E.; Labro, E.

    2008-01-01

    Although time estimates are used extensively for costing purposes, they are prone to measurement error. In an experimental setting, we examine how measurement error in time estimates varies with: (1) the level of aggregation in the definition of costing system activities (aggregated or

  6. Determination of Biological Oxygen Demand Rate Constant and ...

    African Journals Online (AJOL)

    Determination of Biological Oxygen Demand Rate Constant and Ultimate Biological Oxygen Demand for Liquid Waste Generated from Student Cafeteria at Jimma University: A Tool for Development of Scientific Criteria to Protect Aquatic Health in the Region.

  7. Determination of in vivo RNA kinetics using RATE-seq.

    Science.gov (United States)

    Neymotin, Benjamin; Athanasiadou, Rodoniki; Gresham, David

    2014-10-01

    The abundance of a transcript is determined by its rate of synthesis and its rate of degradation; however, global methods for quantifying RNA abundance cannot distinguish variation in these two processes. Here, we introduce RNA approach to equilibrium sequencing (RATE-seq), which uses in vivo metabolic labeling of RNA and approach to equilibrium kinetics, to determine absolute RNA degradation and synthesis rates. RATE-seq does not disturb cellular physiology, uses straightforward normalization with exogenous spike-ins, and can be readily adapted for studies in most organisms. We demonstrate the use of RATE-seq to estimate genome-wide kinetic parameters for coding and noncoding transcripts in Saccharomyces cerevisiae. © 2014 Neymotin et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
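    The kinetic model behind approach-to-equilibrium labeling is compact enough to sketch. Assuming constant synthesis and first-order degradation, the labeled fraction of a transcript follows f(t) = 1 − exp(−k_deg·t), so k_deg comes from the labeling time course and k_syn from k_deg times the steady-state abundance. The fit below uses synthetic data and is not the authors' pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def labeled_fraction(t, k_deg):
    """First-order approach to equilibrium: f(t) = 1 - exp(-k_deg * t)."""
    return 1.0 - np.exp(-k_deg * t)

t = np.array([0, 5, 10, 20, 40, 60], dtype=float)       # labeling times [min]
f_obs = labeled_fraction(t, 0.08) + rng.normal(0, 0.02, t.size)

(k_deg,), _ = curve_fit(labeled_fraction, t, f_obs, p0=[0.05])
abundance = 120.0   # spike-in-normalized steady-state level (hypothetical)
print(f"k_deg = {k_deg:.3f} 1/min, k_syn = {k_deg * abundance:.1f} molecules/min")
```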

  8. Determination of human muscle protein fractional synthesis rate

    DEFF Research Database (Denmark)

    Bornø, Andreas; Hulston, Carl J; van Hall, Gerrit

    2014-01-01

    In the present study, different MS methods for the determination of human muscle protein fractional synthesis rate (FSR) using [ring-(13)C6]phenylalanine as a tracer were evaluated. Because the turnover rate of human skeletal muscle is slow, only minute quantities of the stable isotopically...

  9. Determinants of Graduation Rate of Public Alternative Schools

    Science.gov (United States)

    Izumi, Masashi; Shen, Jianping; Xia, Jiangang

    2015-01-01

    In this study we investigated determinants of the graduation rate of public alternative schools by analyzing the most recent, nationally representative data from the Schools and Staffing Survey 2007-2008. Based on the literature, we built a series of three regression models via successive block entry, predicting the graduation rate first by (a) student…

  10. Raman Spectral Determination of Chemical Reaction Rate Characteristics

    Science.gov (United States)

    Balakhnina, I. A.; Brandt, N. N.; Mankova, A. A.; Chikishev, A. Yu.; Shpachenko, I. G.

    2017-09-01

    The feasibility of using Raman spectroscopy to determine chemical reaction rates and activation energies has been demonstrated for the saponification of ethyl acetate. The temperature dependence of the reaction rate was found in the range from 15 to 45°C.
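    Extracting an activation energy from temperature-dependent rate constants is a standard Arrhenius fit, ln k = ln A − Ea/(R·T). The sketch below recovers Ea from synthetic rate constants over the stated 15-45°C range; it is an illustration, not the study's data.

```python
import numpy as np

R = 8.314                                        # gas constant [J/(mol*K)]
T = np.array([288.15, 298.15, 308.15, 318.15])   # 15-45 °C in kelvin
k = 2e9 * np.exp(-60_000 / (R * T))              # synthetic rate constants

# Linear fit of ln k vs 1/T: slope = -Ea/R, intercept = ln A
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
print("Ea =", -slope * R / 1000.0, "kJ/mol;  A =", np.exp(intercept))
```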

  11. Proposed test method for determining discharge rates from water closets

    DEFF Research Database (Denmark)

    Nielsen, V.; Fjord Jensen, T.

    At present the rates at which discharge takes place from sanitary appliances are mostly known only in the form of estimated average values. SBI has developed a measuring method enabling determination of the exact rate of discharge from a sanitary appliance as a function of time. The method depends...

  12. Can we use human judgments to determine the discount rate?

    Science.gov (United States)

    Baron, J

    2000-12-01

    It has been suggested that the long-term discount rate for environmental goods should decrease at longer delays. One justification for this suggestion is that human judgments support it. This article presents an experiment showing that judgments concerning discount rates are internally inconsistent. These results point to potential problems with the use of judgment referenda for determining discount rates in cost-benefit analyses.

  13. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    Science.gov (United States)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.
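
    The test set-up itself is not reproduced here, but the core of any BER measurement is a bit-by-bit comparison of a known transmitted pattern with the received pattern; for Gaussian-noise channels the BER is also commonly related to the Q-factor as BER = 0.5*erfc(Q/sqrt(2)). A minimal sketch with a simulated channel:

      import numpy as np
      from scipy.special import erfc

      rng = np.random.default_rng(0)
      tx = rng.integers(0, 2, 1_000_000)          # known pseudo-random bit pattern
      noise_flips = rng.random(tx.size) < 1e-4    # hypothetical channel flipping ~1 bit in 10^4
      rx = tx ^ noise_flips.astype(tx.dtype)

      errors = int(np.count_nonzero(tx != rx))
      print(f"measured BER = {errors / tx.size:.2e} ({errors} errors in {tx.size} bits)")

      # Gaussian-channel rule of thumb: BER = 0.5 * erfc(Q / sqrt(2))
      for q in (6.0, 7.0):
          print(f"Q = {q}: BER = {0.5 * erfc(q / np.sqrt(2)):.2e}")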

  14. Capacity Versus Bit Error Rate Trade-Off in the DVB-S2 Forward Link

    Directory of Open Access Journals (Sweden)

    Berioli Matteo

    2007-01-01

    The paper presents an approach to optimize the use of satellite capacity in DVB-S2 forward links. By reducing the so-called safety margins in the adaptive coding and modulation technique, it is possible to increase the spectral efficiency at the expense of an increased BER on the transmission. The work shows how a system can be tuned to operate at different degrees of this trade-off, as well as the performance that can be achieved in terms of BER/PER, spectral efficiency, and the interarrival time, duration, and strength of the error bursts. The paper also describes how a Markov chain can be used to model the ModCod transitions in a DVB-S2 system, and it presents results for the calculation of the transition probabilities in two cases.
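
    The paper's transition-probability results are not reproduced here, but the Markov model it mentions amounts to estimating a row-stochastic transition matrix from an observed sequence of ModCod states. A generic sketch with a hypothetical ModCod trace:

      import numpy as np

      # Hypothetical observed sequence of ModCod indices (e.g., from an ACM trace).
      seq = [0, 0, 1, 1, 1, 2, 1, 0, 0, 1, 2, 2, 1, 1, 0]
      n_states = 3

      counts = np.zeros((n_states, n_states))
      for a, b in zip(seq[:-1], seq[1:]):
          counts[a, b] += 1                # count observed transitions a -> b

      P = counts / counts.sum(axis=1, keepdims=True)   # row-normalise into probabilities
      print(P)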

  16. Optimal classifier selection and negative bias in error rate estimation: an empirical study on high-dimensional prediction

    Directory of Open Access Journals (Sweden)

    Boulesteix Anne-Laure

    2009-12-01

    Background: In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g., microarray data), since such analyses are particularly exposed to this kind of bias. Methods: In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure. Results: We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from the parameter tuning (including gene selection parameters as a special case) and the bias resulting from the choice of the classification method are examined both separately and jointly. Conclusions: The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively. We conclude that the strategy of presenting only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and we suggest alternative approaches for properly reporting classification accuracy.
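
    One standard remedy, in line with the authors' conclusions, is to fold classifier and parameter selection into an outer cross-validation loop, so the reported error estimates the whole selection procedure rather than the post hoc minimum. An illustrative scikit-learn sketch (not the study's code):

      from sklearn.datasets import make_classification
      from sklearn.model_selection import GridSearchCV, cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=100, n_features=500, n_informative=10,
                                 random_state=0)

      # Inner loop tunes the classifier; outer loop estimates the error of the
      # entire selection procedure, avoiding the optimistic "best result" bias.
      inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", "auto"]}, cv=3)
      outer_scores = cross_val_score(inner, X, y, cv=5)
      print(f"unbiased error estimate: {1 - outer_scores.mean():.3f}")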

  17. Standardized error severity score (ESS) ratings to quantify risk associated with child restraint system (CRS) and booster seat misuse.

    Science.gov (United States)

    Rudin-Brown, Christina M; Kramer, Chelsea; Langerak, Robin; Scipione, Andrea; Kelsey, Shelley

    2017-11-17

    Although numerous research studies have reported high levels of error and misuse of child restraint systems (CRS) and booster seats in experimental and real-world scenarios, conclusions are limited because they provide little information regarding which installation issues pose the highest risk and thus should be targeted for change. Beneficial to legislating bodies and researchers alike would be a standardized, globally relevant assessment of the potential injury risk associated with more common forms of CRS and booster seat misuse, which could be applied with observed error frequency (for example, in car seat clinics or during prototype user testing) to better identify and characterize the installation issues of greatest risk to safety. A group of 8 leading world experts in CRS and injury biomechanics, who were members of an international child safety project, estimated the potential injury severity associated with common forms of CRS and booster seat misuse. These injury risk error severity score (ESS) ratings were compiled and compared to scores from previous research that had used a similar procedure but with fewer respondents. To illustrate their application, and as part of a larger study examining CRS and booster seat labeling requirements, the new standardized ESS ratings were applied to objective installation performance data from 26 adult participants who installed a convertible (rear- vs. forward-facing) CRS and booster seat in a vehicle, and a child test dummy in the CRS and booster seat, using labels that only just met minimal regulatory requirements. The outcome measure, the risk priority number (RPN), represented the composite scores of injury risk and observed installation error frequency. Variability within the sample of ESS ratings in the present study was smaller than that generated in previous studies, indicating better agreement among experts on what constituted injury risk. Application of the new standardized ESS ratings to installation

  18. A remarkable systemic error in calibration methods of γ spectrometer used for determining activity of 238U

    International Nuclear Information System (INIS)

    Su Qiong; Cheng Jianping; Diao Lijun; Li Guiqun

    2006-01-01

    A significant systematic error, previously unrecognized, is pointed out. The error arises in the calibration methods used to determine the activity of 238 U with a high-resolution γ-spectrometer. When the 92.6 keV γ-ray, as the characteristic radiation from 238 U, is used to determine the activity of 238 U in natural environment samples, the main problem is the interfering radiation produced by external excitation (also called outer-sourcing X-ray radiation). Because this X-ray intensity varies with many indeterminate factors, it is advised that such calibration methods be abandoned. As the influence of this systematic error remains in some past research papers, the authors suggest that the data from those papers be cited with caution and, if possible, re-determined. (authors)

  19. DETERMINANTS OF AGRICULTURAL LAND EXPANSION IN NIGERIA: APPLICATION OF ERROR CORRECTION MODELING (ECM

    Directory of Open Access Journals (Sweden)

    A Oyekale

    2007-12-01

    This study used an ECM to analyze the determinants of agricultural land expansion in Nigeria. Results show that at first differencing, the Augmented Dickey-Fuller test indicated stationarity for all the variables (p < 0.05), and there were 7 cointegrating vectors according to the Johansen test. The dynamic unrestricted short-run parameters of permanent cropland growth rate (68.62), agricultural production index (10.23), livestock population (0.003), human population (-0.145), other land (-0.265) and cereal cropland growth rate (0.621) have a significant impact on agricultural land expansion (p < 0.05). The study recommended that appropriate policies to address the problem of expansion of agricultural land and agricultural production must focus on the development of cereal and permanent crop hybrids that are high yielding and resistant to environmental stress, human population control, and guided use of land for industrial and urban development, among others.
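
    The study's data are not available here, but the reported workflow (unit-root tests on first differences, a Johansen cointegration test, then an ECM) can be sketched with statsmodels on synthetic series; the variable names below are placeholders:

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.stattools import adfuller
      from statsmodels.tsa.vector_ar.vecm import coint_johansen

      rng = np.random.default_rng(1)
      n = 60
      common = np.cumsum(rng.normal(size=n))           # shared stochastic trend
      data = pd.DataFrame({
          "agric_land": common + rng.normal(scale=0.5, size=n),
          "production": 0.8 * common + rng.normal(scale=0.5, size=n),
      })

      # Step 1: ADF tests on first differences (series should be I(1)).
      for col in data:
          stat, pval, *_ = adfuller(data[col].diff().dropna())
          print(f"ADF on d({col}): p = {pval:.3f}")

      # Step 2: Johansen test for the number of cointegrating vectors.
      res = coint_johansen(data, det_order=0, k_ar_diff=1)
      print("trace statistics:", res.lr1)   # compare against res.cvt critical values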

  20. Linear transceiver design for nonorthogonal amplify-and-forward protocol using a bit error rate criterion

    KAUST Repository

    Ahmed, Qasim Zeeshan; Park, Kihong; Alouini, Mohamed-Slim; Aï ssa, Sonia

    2014-01-01

    The ever-growing demand for higher data rates can now be addressed by exploiting cooperative diversity. This form of diversity has become a fundamental technique for achieving spatial diversity by exploiting the presence of idle users in the network

  1. Relationship of Employee Attitudes and Supervisor-Controller Ratio to En Route Operational Error Rates

    National Research Council Canada - National Science Library

    Broach, Dana

    2002-01-01

    ...; Rodgers, Mogford, Mogford, 1998). In this study, the relationship of organizational factors to en route OE rates was investigated, based on an adaptation of the Human Factors Analysis and Classification System (HFACS; Shappell & Wiegmann 2000...

  2. Correcting for binomial measurement error in predictors in regression with application to analysis of DNA methylation rates by bisulfite sequencing.

    Science.gov (United States)

    Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal

    2016-09-30

    Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons; dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.
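
    As a rough illustration of the simulation-extrapolation idea discussed in the abstract (a plain additive sketch, not the authors' modified version): pseudo-errors scaled by the binomial variance estimate p(1-p)/n are added at increasing multiples lambda, the model is refit each time, and the coefficient path is extrapolated back to lambda = -1:

      import numpy as np

      rng = np.random.default_rng(2)
      n_obs, n_reads = 200, 30
      p_true = rng.uniform(0.2, 0.8, n_obs)             # true methylation rates
      y = 2.0 + 5.0 * p_true + rng.normal(scale=0.5, size=n_obs)
      p_obs = rng.binomial(n_reads, p_true) / n_reads   # error-prone predictor
      sigma = np.sqrt(p_obs * (1 - p_obs) / n_reads)    # per-observation binomial SD

      lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
      slopes = []
      for lam in lambdas:
          sims = [np.polyfit(p_obs + np.sqrt(lam) * sigma * rng.normal(size=n_obs), y, 1)[0]
                  for _ in range(200)]                  # refit with extra pseudo-error
          slopes.append(np.mean(sims))

      # Quadratic extrapolation of the slope back to lambda = -1 (no measurement error).
      coef = np.polyfit(lambdas, slopes, 2)
      print(f"naive slope = {slopes[0]:.2f}, SIMEX slope = {np.polyval(coef, -1.0):.2f}")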

  3. Statistical analysis of lifetime determinations in the presence of large errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1984-01-01

    The lifetimes of the new particles are very short, and most of the experiments which measure decay times are subject to measurement errors which are not negligible compared with the decay times themselves. Bartlett has analyzed the problem of lifetime estimation when the error on each event is small or zero. For the case of non-negligible measurement errors, σ_i, on each event, we are interested in a few basic questions: How well does maximum likelihood work? That is, (a) are the errors reasonable, (b) is the answer unbiased, and (c) are there other estimators with superior performance? We concentrate on the results of our Monte Carlo investigation for the case in which the experiment is sensitive over all times −∞ < x_i < ∞
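
    One way to see the set-up: an exponential decay time smeared by a Gaussian resolution σ_i follows an exponentially modified Gaussian, so the lifetime can be fit by maximum likelihood over events with individual errors. A sketch using scipy's exponnorm parameterization (assumed simulation values, not the paper's code):

      import numpy as np
      from scipy.stats import exponnorm, norm, expon
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(3)
      tau_true, n = 1.0, 5000
      sigma = rng.uniform(0.3, 0.8, n)     # per-event measurement errors
      x = (expon.rvs(scale=tau_true, size=n, random_state=rng)
           + norm.rvs(scale=sigma, random_state=rng))

      def nll(tau):
          # exponnorm with K = tau/sigma, scale = sigma is exponential(tau) + N(0, sigma^2)
          return -np.sum(exponnorm.logpdf(x, K=tau / sigma, scale=sigma))

      fit = minimize_scalar(nll, bounds=(0.1, 5.0), method="bounded")
      print(f"tau_hat = {fit.x:.3f} (true {tau_true})")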

  4. Bit Error-Rate Minimizing Detector for Amplify-and-Forward Relaying Systems Using Generalized Gaussian Kernel

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2013-01-01

    In this letter, a new detector is proposed for amplify-and-forward (AF) relaying systems when communicating with the assistance of relays. The major goal of this detector is to improve the bit error rate (BER) performance of the receiver. The probability density function is estimated with the help of the kernel density technique. A generalized Gaussian kernel is proposed. This new kernel provides more flexibility and encompasses Gaussian and uniform kernels as special cases. The optimal window width of the kernel is calculated. Simulation results show that a gain of more than 1 dB can be achieved in terms of BER performance as compared to the minimum mean square error (MMSE) receiver when communicating over Rayleigh fading channels.
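
    To illustrate the kernel family (a sketch, not the letter's detector): the generalized Gaussian kernel K(u) proportional to exp(-|u/alpha|^beta) reduces to the Gaussian kernel at beta = 2 and approaches a uniform kernel as beta grows, so a density estimate can be tuned by varying beta:

      import numpy as np
      from scipy.special import gamma

      def gg_kernel(u, alpha=1.0, beta=2.0):
          # Generalized Gaussian: beta = 2 gives a Gaussian, large beta approaches uniform.
          return beta / (2 * alpha * gamma(1 / beta)) * np.exp(-np.abs(u / alpha) ** beta)

      def kde(x_grid, samples, h=0.3, beta=2.0):
          u = (x_grid[:, None] - samples[None, :]) / h
          return gg_kernel(u, beta=beta).mean(axis=1) / h

      rng = np.random.default_rng(4)
      samples = rng.normal(size=500)
      grid = np.linspace(-3, 3, 7)
      for beta in (1.0, 2.0, 8.0):
          print(beta, np.round(kde(grid, samples, beta=beta), 3))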

  5. Reducing Error Rates for Iris Image using higher Contrast in Normalization process

    Science.gov (United States)

    Aminu Ghali, Abdulrahman; Jamel, Sapiee; Abubakar Pindar, Zahraddeen; Hasssan Disina, Abdulkadir; Mat Daris, Mustafa

    2017-08-01

    Iris recognition is among the most secure and fastest means of identification and authentication. However, iris recognition systems suffer a setback from blurring, low contrast and illumination due to low-quality images, which compromises the accuracy of the system. The acceptance or rejection rate of a verified user depends solely on the quality of the image. In many cases, an iris recognition system with low image contrast could falsely accept or reject a user. Therefore this paper adopts the Histogram Equalization Technique to address the problem of False Rejection Rate (FRR) and False Acceptance Rate (FAR) by enhancing the contrast of the iris image. A histogram equalization technique enhances the image quality and neutralizes the low contrast of the image at the normalization stage. The experimental results show that the Histogram Equalization Technique reduces FRR and FAR compared to the existing techniques.
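
    Histogram equalization itself is standard: each gray level is mapped through the normalized cumulative histogram. A minimal NumPy sketch (image dimensions and values are hypothetical):

      import numpy as np

      def equalize_histogram(img):
          """Histogram-equalize an 8-bit grayscale image (e.g., a normalized iris strip)."""
          hist = np.bincount(img.ravel(), minlength=256)
          cdf = hist.cumsum()
          cdf_min = cdf[cdf > 0].min()
          # Map each level through the normalized CDF to spread the contrast.
          lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
          return lut[img]

      rng = np.random.default_rng(5)
      low_contrast = rng.integers(90, 140, size=(64, 512), dtype=np.uint8)  # hypothetical strip
      print(low_contrast.min(), low_contrast.max(), "->",
            equalize_histogram(low_contrast).min(), equalize_histogram(low_contrast).max())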

  6. Determination of the Optimal Exchange Rate Via Control of the Domestic Interest Rate in Nigeria

    Directory of Open Access Journals (Sweden)

    Virtue U. Ekhosuehi

    2014-01-01

    An economic scenario is considered in which the government seeks to achieve a favourable balance of payments over a fixed planning horizon through exchange rate policy and control of the domestic interest rate. The dynamics of such an economy are considered in terms of a bounded optimal control problem in which the exchange rate is the state variable and the domestic interest rate is the control variable. The idea of the balance of payments is used as a theoretical underpinning to specify the objective function. It is assumed that changes in exchange rates are induced by two effects: the impact of the domestic interest rate on the exchange rate, and the exchange rate system adopted by the government. Instances of both fixed and flexible optimal exchange rate regimes are determined. The use of the approach is illustrated employing data obtained from the Central Bank of Nigeria (CBN) statistical bulletin. (original abstract)

  7. Determination of fission products and actinides by inductively coupled plasma-mass spectrometry using isotope dilution analysis. A study of random and systematic errors

    International Nuclear Information System (INIS)

    Ignacio Garcia Alonso, Jose

    1995-01-01

    The theory of the propagation of errors (random and systematic) for isotope dilution analysis (IDA) has been applied to the analysis of fission products and actinide elements by inductively coupled plasma-mass spectrometry (ICP-MS). Systematic errors in ID-ICP-MS arising from mass-discrimination (mass bias), detector non-linearity and isobaric interferences in the measured isotopes have to be corrected for in order to achieve accurate results. The mass bias factor and the detector dead-time can be determined by using natural elements with well-defined isotope abundances. A combined method for the simultaneous determination of both factors is proposed. On the other hand, isobaric interferences for some fission products and actinides cannot be eliminated using mathematical corrections (due to the unknown isotope abundances in the sample) and a chemical separation is necessary. The theory for random error propagation in IDA has been applied to the determination of non-natural elements by ICP-MS taking into account all possible sources of uncertainty with pulse counting detection. For the analysis of fission products, the selection of the right spike isotope composition and spike to sample ratio can be performed by applying conventional random propagation theory. However, it has been observed that, in the experimental determination of the isotope abundances of the fission product elements to be determined, the correction for mass-discrimination and the correction for detector dead-time losses contribute to the total random uncertainty. For the instrument used in the experimental part of this study, it was found that the random uncertainty on the measured isotope ratios followed Poisson statistics for low counting rates whereas, for high counting rates, source instability was the main source of error

  8. Exploring behavioural determinants relating to health professional reporting of medication errors: a qualitative study using the Theoretical Domains Framework.

    Science.gov (United States)

    Alqubaisi, Mai; Tonna, Antonella; Strath, Alison; Stewart, Derek

    2016-07-01

    Effective and efficient medication reporting processes are essential in promoting patient safety. Few qualitative studies have explored reporting of medication errors by health professionals, and none have made reference to behavioural theories. The objective was to describe and understand the behavioural determinants of health professional reporting of medication errors in the United Arab Emirates (UAE). This was a qualitative study comprising face-to-face, semi-structured interviews within three major medical/surgical hospitals of Abu Dhabi, the UAE. Health professionals were sampled purposively in strata of profession and years of experience. The semi-structured interview schedule focused on behavioural determinants around medication error reporting, facilitators, barriers and experiences. The Theoretical Domains Framework (TDF; a framework of theories of behaviour change) was used as a coding framework. Ethical approval was obtained from a UK university and all participating hospital ethics committees. Data saturation was achieved after interviewing ten nurses, ten pharmacists and nine physicians. Whilst it appeared that patient safety and organisational improvement goals and intentions were behavioural determinants which facilitated reporting, there were key determinants which deterred reporting. These included the beliefs of the consequences of reporting (lack of any feedback following reporting and impacting professional reputation, relationships and career progression), emotions (fear and worry) and issues related to the environmental context (time taken to report). These key behavioural determinants which negatively impact error reporting can facilitate the development of an intervention, centring on organisational safety and reporting culture, to enhance reporting effectiveness and efficiency.

  9. Error associated with model predictions of wildland fire rate of spread

    Science.gov (United States)

    Miguel G. Cruz; Martin E. Alexander

    2015-01-01

    How well can we expect to predict the spread rate of wildfires and prescribed fires? The degree of accuracy in model predictions of wildland fire behaviour characteristics is dependent on the model's applicability to a given situation, the validity of the model's relationships, and the reliability of the model input data (Alexander and Cruz 2013b). We...

  10. Error-free 5.1 Tbit/s data generation on a single-wavelength channel using a 1.28 Tbaud symbol rate

    DEFF Research Database (Denmark)

    Mulvad, Hans Christian Hansen; Galili, Michael; Oxenløwe, Leif Katsuo

    2009-01-01

    We demonstrate a record bit rate of 5.1 Tbit/s on a single wavelength using a 1.28 Tbaud OTDM symbol rate, DQPSK data-modulation, and polarisation-multiplexing. Error-free performance (BER...

  11. Determinants of Sub-Sovereign Government Ratings In Europe

    Directory of Open Access Journals (Sweden)

    Nicolas JANNONE-BELLOT

    2017-02-01

    Full Text Available The aim of this paper is to identify the determinantsof the rating assigned to sub-sovereignentities in Germany, Austria, Belgium, France,Italy and Spain, using a total of 92 territorial entitiesfor the 1989-2012 period. Multinomial orderedprobit estimation models were estimatedfor each specifi cation and agency.We conclude that the country’s rating is oneof the most important determinants of regionalgovernment’s ratings with a positive infl uence(as expected, and that the country debt/GDPratio is a stronger determinant for regions thantheir own indebtedness with a negative sign.Other relevant variables are population growthrate, unemployment rate, elderly people weight,regional public expenditure weight and size. Additionally,economic variables, such as country’srating and population growth are more importantto Fitch; whereas budget variables and size variablesare more relevant to Moody’s. Debt variablesand elderly people ratio are more importantto S&P.

  12. Reduction of determinate errors in mass bias-corrected isotope ratios measured using a multi-collector plasma mass spectrometer

    International Nuclear Information System (INIS)

    Doherty, W.

    2015-01-01

    A nebulizer-centric instrument response function model of the plasma mass spectrometer was combined with a signal drift model, and the result was used to identify the causes of the non-spectroscopic determinate errors remaining in mass bias-corrected Pb isotope ratios (Tl as internal standard) measured using a multi-collector plasma mass spectrometer. Model calculations, confirmed by measurement, show that the detectable time-dependent errors are a result of the combined effect of signal drift and differences in the coordinates of the Pb and Tl response function maxima (horizontal offset effect). If there are no horizontal offsets, then the mass bias-corrected isotope ratios are approximately constant in time. In the absence of signal drift, the response surface curvature and horizontal offset effects are responsible for proportional errors in the mass bias-corrected isotope ratios. The proportional errors will be different for different analyte isotope ratios and different at every instrument operating point. Consequently, mass bias coefficients calculated using different isotope ratios are not necessarily equal. The error analysis based on the combined model provides strong justification for recommending a three step correction procedure (mass bias correction, drift correction and a proportional error correction, in that order) for isotope ratio measurements using a multi-collector plasma mass spectrometer
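
    For orientation (the paper's combined model is more elaborate), the first step it refers to, mass bias correction with a Tl internal standard, is conventionally done with the exponential law: a fractionation exponent f is derived from the measured Tl ratio and applied to the Pb ratios. The measured values below are hypothetical:

      import numpy as np

      # Exponential-law mass bias correction; measured ratios are illustrative only.
      r_tl_true = 2.3871                 # certified 205Tl/203Tl (NIST SRM 997)
      r_tl_meas = 2.4150                 # measured Tl ratio, hypothetical
      m205, m203 = 204.97443, 202.97235  # isotope masses
      m208, m206 = 207.97665, 205.97446

      f = np.log(r_tl_true / r_tl_meas) / np.log(m205 / m203)   # fractionation exponent
      r_pb_meas = 2.4500                                         # measured 208Pb/206Pb, hypothetical
      r_pb_corr = r_pb_meas * (m208 / m206) ** f
      print(f"f = {f:.4f}, corrected 208Pb/206Pb = {r_pb_corr:.4f}")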

  13. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of implementing accuracy compensation are closely related to the choice of sampling points. Therefore, based on the error-similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps of a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can be used to effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.

  14. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Benoit Macq

    2008-07-01

    Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANET. The packet-based proposed scheme has a low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.

  15. A Simulation Analysis of Errors in the Measurement of Standard Electrochemical Rate Constants from Phase-Selective Impedance Data.

    Science.gov (United States)

    1987-09-30

    [Report documentation page garbled in source; only abstract fragments are recoverable.] ... of the AC current, including the time dependence at a growing DME, at a given fixed potential either in the presence or the absence of an ... the relative error in k_ob(app) is relatively small for k_ob(true) ≤ 0.5 cm s^-1, and increases rapidly for larger rate constants as k_ob reaches the

  16. A web-based team-oriented medical error communication assessment tool: development, preliminary reliability, validity, and user ratings.

    Science.gov (United States)

    Kim, Sara; Brock, Doug; Prouty, Carolyn D; Odegard, Peggy Soule; Shannon, Sarah E; Robins, Lynne; Boggs, Jim G; Clark, Fiona J; Gallagher, Thomas

    2011-01-01

    Multiple-choice exams are not well suited for assessing communication skills. Standardized patient assessments are costly and patient and peer assessments are often biased. Web-based assessment using video content offers the possibility of reliable, valid, and cost-efficient means for measuring complex communication skills, including interprofessional communication. We report development of the Web-based Team-Oriented Medical Error Communication Assessment Tool, which uses videotaped cases for assessing skills in error disclosure and team communication. Steps in development included (a) defining communication behaviors, (b) creating scenarios, (c) developing scripts, (d) filming video with professional actors, and (e) writing assessment questions targeting team communication during planning and error disclosure. Using valid data from 78 participants in the intervention group, coefficient alpha estimates of internal consistency were calculated based on the Likert-scale questions and ranged from α=.79 to α=.89 for each set of 7 Likert-type discussion/planning items and from α=.70 to α=.86 for each set of 8 Likert-type disclosure items. The preliminary test-retest Pearson correlation based on the scores of the intervention group was r=.59 for discussion/planning and r=.25 for error disclosure sections, respectively. Content validity was established through reliance on empirically driven published principles of effective disclosure as well as integration of expert views across all aspects of the development process. In addition, data from 122 medicine and surgical physicians and nurses showed high ratings for video quality (4.3 of 5.0), acting (4.3), and case content (4.5). Web assessment of communication skills appears promising. Physicians and nurses across specialties respond favorably to the tool.
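
    The internal-consistency figures reported are coefficient (Cronbach's) alpha, which for k items is alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale). A generic sketch on simulated Likert ratings (not the study's data):

      import numpy as np

      def cronbach_alpha(items):
          """items: (n_respondents, k_items) array of Likert ratings."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1)
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars.sum() / total_var)

      rng = np.random.default_rng(6)
      latent = rng.normal(size=(78, 1))     # shared trait per respondent
      ratings = np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=(78, 7))), 1, 5)
      print(f"alpha = {cronbach_alpha(ratings):.2f}")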

  17. Determinants of self-rated health of Warsaw inhabitants.

    Science.gov (United States)

    Supranowicz, Piotr; Wysocki, Mirosław J; Car, Justyna; Debska, Anna; Gebska-Kuczerowska, Anita

    2012-01-01

    Self-rated health is a one-point measure commonly used for assessing subjectively perceived health, covering a wide range of an individual's health aspects. The aim of our study was to examine the extent to which self-rated health reflects differences due to demographic characteristics; physical, psychical and social well-being; health disorders; occurrence of chronic disease; and negative life events in Polish social and cultural conditions. Data were collected by non-addressed questionnaire methods from 402 Warsaw inhabitants. The questionnaire contained questions concerning self-rated health, physical, psychical and social well-being, the use of health care services, occurrence of chronic disease and contact with negative life events. The analysis showed that worse self-rated health increased exponentially with age and, less sharply, with lower level of education. Pensioners were more likely to assess their own health as worse than were the employed or students. No such difference was found for the unemployed. Compared to married respondents, the self-rated health of divorced or widowed respondents was lower. Gender did not differentiate self-rated health. With regard to well-being, self-rated health decreased linearly with physical well-being; for social and, especially, psychical well-being the differences were significant but more complicated. Hospitalisation, especially repeated, strongly determined worse self-rated health. In contrast, the relationships between self-rated health and sickness absence or frequency of contact with a physician were weaker. Chronic diseases substantially increased the risk of poorer self-rated health, and their co-morbidity increased the risk exponentially. Patients with cancer were the group in which the risk exceeded several times that reported for patients with other diseases. Regarding negative life events, only experience of violence and financial difficulties resulted in worse self-rated health. Our findings confirmed the usefulness

  18. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    Science.gov (United States)

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    2016-09-01

    This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors of ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  19. Determinants of Commercial Banks' Interest Rate Spread in Namibia ...

    African Journals Online (AJOL)

    reduces net interest margin whilst the liquidity levels of a commercial bank increases ... and the capital ratio are not important determinants of the net interest margin. .... profitable and resilient to shocks including the recent financial crisis .... affects its business operations including its decision of what interest rate to charge.

  20. Determinants of Commercial banks' interest rate spreads in Botswana

    African Journals Online (AJOL)

    The paper investigated the determinants of commercial banks' interest rate spreads in Botswana using time series cross-sectional analysis for the period of 2004Q1 to 2014Q4. Factors empirically tested are bank-specific, industry-specific and macroeconomic data. Results indicate that bank intermediation, GDP, inflation ...

  1. Determining the nucleation rate from the dimer growth probability

    NARCIS (Netherlands)

    Ter Horst, J.H.; Kashchiev, D.

    2005-01-01

    A new method is proposed for the determination of the stationary one-component nucleation rate J with the help of data for the growth probability P2 of a dimer which is the smallest cluster of the nucleating phase. The method is based on an exact formula relating J and P2, and is readily applicable

  2. Codon usage determines translation rate in Escherichia coli

    DEFF Research Database (Denmark)

    Sørensen, Michael Askvad; Kurland, C G; Pedersen, Steen

    1989-01-01

    We wish to determine whether differences in translation rate are correlated with differences in codon usage or with differences in mRNA secondary structure. We therefore inserted a small DNA fragment in the lacZ gene either directly or flanked by a few frame-shifting bases, leaving the reading fr...

  3. Optimization of automation: III. Development of optimization method for determining automation rate in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Kim, Man Cheol; Seong, Poong Hyun

    2016-01-01

    Highlights:
    • We propose an appropriate automation rate that enables the best human performance.
    • We analyze the shortest working time considering Situation Awareness Recovery (SAR).
    • The optimized automation rate is estimated by integrating the automation and ostracism rate estimation methods.
    • The process to derive the optimized automation rate is demonstrated through case studies.

    Abstract: Automation has been introduced in various industries, including the nuclear field, because it is commonly believed that automation promises greater efficiency, lower workloads, and fewer operator errors by enhancing operator and system performance. However, the excessive introduction of automation has deteriorated operator performance due to the side effects of automation, referred to as Out-of-the-Loop (OOTL), and this is a critical issue that must be resolved. Thus, in order to determine the optimal level of automation that assures the best human operator performance, a quantitative method of optimizing the automation is proposed in this paper. In order to propose the optimization method for determining appropriate automation levels that enable the best human performance, the automation rate and ostracism rate, which are estimation methods that quantitatively analyze the positive and negative effects of automation, respectively, are integrated. The integration was conducted in order to derive the shortest working time through considering the concept of situation awareness recovery (SAR), which states that the automation rate with the shortest working time assures the best human performance. The process to derive the optimized automation rate is demonstrated through an emergency operation scenario-based case study. In this case study, four types of procedures are assumed through redesigning the original emergency operating procedure according to the introduced automation and ostracism levels. Using the

  4. Structure analysis of tax revenue and inflation rate in Banda Aceh using vector error correction model with multiple alpha

    Science.gov (United States)

    Sofyan, Hizir; Maulia, Eva; Miftahuddin

    2017-11-01

    A country has several important parameters for achieving economic prosperity, such as tax revenue and the inflation rate. One of the largest revenues of the State Budget in Indonesia comes from the tax sector. Meanwhile, the rate of inflation occurring in a country can be used as an indicator to measure the good and bad economic problems faced by the country. Given the importance of tax revenue and inflation rate control in achieving economic prosperity, it is necessary to analyze the structure of the relationship between tax revenue and the inflation rate. This study aims to produce the best VECM (Vector Error Correction Model) with optimal lag using various alpha levels, and to perform structural analysis using the Impulse Response Function (IRF) of the VECM models to examine the relationship of tax revenue and inflation in Banda Aceh. The results showed that the best model for the tax revenue and inflation rate data in Banda Aceh City using alpha 0.01 is a VECM with optimal lag 2, while the best model using alpha 0.05 and 0.1 is a VECM with optimal lag 3. The VECM model with alpha 0.01 yielded four significant models: the income tax model and the models for the overall, health and education inflation rates in Banda Aceh. The VECM models with alpha 0.05 and 0.1 yielded one significant model, the income tax model. Based on the VECM models, two structural IRF analyses are formed to look at the relationship of tax revenue and inflation in Banda Aceh: the IRF with VECM(2) and the IRF with VECM(3).
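
    Data aside, the described workflow (selecting the lag order and cointegration rank, then fitting a VECM) maps onto statsmodels as sketched below; the series and parameter choices are placeholders:

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank, select_order

      rng = np.random.default_rng(7)
      n = 120
      trend = np.cumsum(rng.normal(size=n))
      df = pd.DataFrame({
          "tax_revenue": trend + rng.normal(scale=0.4, size=n),
          "inflation": 0.6 * trend + rng.normal(scale=0.4, size=n),
      })

      lag = select_order(df, maxlags=6, deterministic="ci").aic       # optimal lag by AIC
      rank = select_coint_rank(df, det_order=0, k_ar_diff=lag).rank   # Johansen-based rank
      res = VECM(df, k_ar_diff=lag, coint_rank=rank, deterministic="ci").fit()
      print("lag:", lag, "rank:", rank)
      print("adjustment coefficients (alpha):\n", res.alpha)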

  5. Film techniques in radiotherapy for treatment verification, determination of patient exit dose, and detection of localization error

    International Nuclear Information System (INIS)

    Haus, A.G.; Marks, J.E.

    1974-01-01

    In patient radiation therapy, it is important to know that the diseased area is included in the treatment field and that normal anatomy is properly shielded or excluded. Since 1969, a film technique developed for imaging of the complete patient radiation exposure has been applied for treatment verification and for the detection and evaluation of localization errors that may occur during treatment. The technique basically consists of placing a film under the patient during the entire radiation exposure. This film should have proper sensitivity and contrast in the exit dose exposure range encountered in radiotherapy. In this communication, we describe how various exit doses fit the characteristic curve of the film; examples of films exposed to various exit doses; the technique for using the film to determine the spatial distribution of the absorbed exit dose; and types of errors commonly detected. Results are presented illustrating that, as the frequency of use of this film technique is increased, localization error is reduced significantly

  6. Determination of varying consumption rates from radiotracer data

    International Nuclear Information System (INIS)

    Cadwell, L.L.; Schreckhise, R.G.

    1976-01-01

    Data obtained on the uptake and elimination of phosphorus-32 by foraging grasshoppers were utilized to estimate consumption rates of blue grama grass (Bouteloua gracilis). Grasshoppers were caged in field enclosures containing blue grama grass labeled with 32 P. Periodic measurements were made to determine the body burdens of the grasshoppers and concentration of 32 P in the grass. This information, along with a two-component exponential function which was observed to best mathematically describe the retention of acutely ingested phosphorus, provided the basis for a convolution integral of the consumption rate. The consumption rate was estimated by dividing the observed body burden of the grasshopper by the convolution integral of the input (grass concentration) and impulse (retention curve) function over each observation period. Successive calculations of the consumption rates were made at various points in time as the body burden changed from continued feeding on labeled forage
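
    In sketch form, the estimator divides the observed body burden by the convolution of the forage concentration history with the retention function. A discretized version with hypothetical inputs (a two-component exponential retention curve and a decaying forage concentration):

      import numpy as np

      dt = 0.5                      # time step (days)
      t = np.arange(0, 20, dt)

      # Hypothetical inputs: forage 32P concentration (decaying with the 14.3 d
      # half-life and loss processes) and a two-component retention function.
      c = 100.0 * np.exp(-t / 14.3)                         # activity per g forage
      r = 0.6 * np.exp(-t / 2.0) + 0.4 * np.exp(-t / 12.0)  # fraction retained after intake

      def burden_integral(t_obs):
          """Discretized convolution: sum over s of c(s) * r(t_obs - s) * dt."""
          idx = int(t_obs / dt)
          return np.sum(c[: idx + 1] * r[idx::-1]) * dt

      body_burden_obs = 950.0                          # measured burden at day 10, hypothetical
      rate = body_burden_obs / burden_integral(10.0)   # grams of forage consumed per day
      print(f"estimated consumption rate = {rate:.2f} g/day")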

  7. Error in the determination of the deformed shape of prismatic beams using the double integration of curvature

    Science.gov (United States)

    Sigurdardottir, Dorotea H.; Stearns, Jett; Glisic, Branko

    2017-07-01

    The deformed shape is a consequence of loading the structure and it is defined by the shape of the centroid line of the beam after deformation. The deformed shape is a universal parameter of beam-like structures. It is correlated with the curvature of the cross-section; therefore, any unusual behavior that affects the curvature is reflected through the deformed shape. Excessive deformations cause user discomfort, damage to adjacent structural members, and may ultimately lead to issues in structural safety. However, direct long-term monitoring of the deformed shape in real-life settings is challenging, and an alternative is indirect determination of the deformed shape based on curvature monitoring. The challenge of the latter is an accurate evaluation of error in the deformed shape determination, which is directly correlated with the number of sensors needed to achieve the desired accuracy. The aim of this paper is to study the deformed shape evaluated by numerical double integration of the monitored curvature distribution along the beam, and create a method to predict the associated errors and suggest the number of sensors needed to achieve the desired accuracy. The error due to the accuracy in the curvature measurement is evaluated within the scope of this work. Additionally, the error due to the numerical integration is evaluated. This error depends on the load case (i.e., the shape of the curvature diagram), the magnitude of curvature, and the density of the sensor network. The method is tested on a laboratory specimen and a real structure. In a laboratory setting, the double integration is in excellent agreement with the beam theory solution which was within the predicted error limits of the numerical integration. Consistent results are also achieved on a real structure—Streicker Bridge on Princeton University campus.
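
    In essence, the monitored curvature is numerically integrated twice, with the integration constants fixed by the support conditions. A sketch for a simply supported span under uniform load (assumed boundary conditions and values, not the paper's code):

      import numpy as np
      from scipy.integrate import cumulative_trapezoid

      L, n_sensors = 10.0, 9
      x = np.linspace(0, L, n_sensors)
      q_EI = 1e-3                          # distributed load / EI, hypothetical
      kappa = q_EI * x * (L - x) / 2.0     # curvature M/EI for a uniform load

      slope = cumulative_trapezoid(kappa, x, initial=0)   # first integration (+C1)
      w = cumulative_trapezoid(slope, x, initial=0)       # second integration (+C2)
      w -= x / L * w[-1]                   # enforce w(0) = w(L) = 0 (simply supported)

      # The gap to theory is the numerical integration error from the sparse sensor grid.
      print(f"midspan deflection magnitude = {abs(w[n_sensors // 2]):.4e}")
      print(f"theory 5*q_EI*L**4/384       = {5 * q_EI * L**4 / 384:.4e}")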

  8. Error rate on the director's task is influenced by the need to take another's perspective but not the type of perspective.

    Science.gov (United States)

    Legg, Edward W; Olivier, Laure; Samuel, Steven; Lurz, Robert; Clayton, Nicola S

    2017-08-01

    Adults are prone to responding erroneously to another's instructions based on what they themselves see and not what the other person sees. Previous studies have indicated that in instruction-following tasks participants make more errors when required to infer another's perspective than when following a rule. These inference-induced errors may occur because the inference process itself is error-prone or because they are a side effect of the inference process. Crucially, if the inference process is error-prone, then higher error rates should be found when the perspective to be inferred is more complex. Here, we found that participants were no more error-prone when they had to judge how an item appeared (Level 2 perspective-taking) than when they had to judge whether an item could or could not be seen (Level 1 perspective-taking). However, participants were more error-prone in the perspective-taking variants of the task than in a version that only required them to follow a rule. These results suggest that having to represent another's perspective induces errors when following their instructions but that error rates are not directly linked to errors in inferring another's perspective.

  9. Determinants of Interest Rates on Corporate Bonds of Mining Enterprises

    Science.gov (United States)

    Ranosz, Robert

    2017-09-01

    This article is devoted to the determinants of interest rates on corporate bonds of mining enterprises. The study includes a comparison between the cost of foreign capital, as resulting from the issue of debt instruments, in different sectors of the economy in relation to the mining industry. The article also depicts the correlation between the rating scores published by the three largest rating agencies: S&P, Moody's, and Fitch. The test was based on simple statistical methods. The analysis performed indicated that there is a dependency between the factors listed and the amount of interest rates on corporate bonds of global mining enterprises. The most significant factors include the rating level and the period for which the given series of bonds was issued. Additionally, it is not without significance whether the given bond has additional options. Pursuant to the obtained results, it should be recognized that in order to reduce the interest rate on bonds, mining enterprises should pay particular attention to the rating and attempt to include additional options in issued bonds. Such additional options may comprise, for example, an ability to exchange bonds for shares or raw materials.

  10. Errors of the backextrapolation method in determination of the blood volume

    Science.gov (United States)

    Schröder, T.; Rösler, U.; Frerichs, I.; Hahn, G.; Ennker, J.; Hellige, G.

    1999-01-01

    Backextrapolation is an empirical method to calculate the central volume of distribution (for example, the blood volume). It is based on the compartment model, which assumes that after an injection the substance is distributed instantaneously in the central volume with no time delay. The occurrence of recirculation is not taken into account. The change of concentration with time of indocyanine green (ICG) was observed in an in vitro model, in which the volume recirculated in 60 s and the clearance of the ICG could be varied. It was found that the higher the elimination of ICG, the higher the error of the backextrapolation method. The theoretical consideration of Schröder et al (Biomed. Tech. 42 (1997) 7-11) was thereby confirmed. If the injected substance is eliminated somewhere in the body (i.e. not by radioactive decay), the backextrapolation method produces large errors.
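
    Procedurally, backextrapolation fits a mono-exponential to concentration-time points taken after mixing and extrapolates to the injection time; the central volume then follows as dose divided by the extrapolated concentration. A sketch with hypothetical ICG samples:

      import numpy as np
      from scipy.optimize import curve_fit

      def mono_exp(t, c0, k):
          return c0 * np.exp(-k * t)

      # Hypothetical ICG concentrations (mg/L) sampled after complete mixing.
      t = np.array([120, 180, 240, 300, 360], dtype=float)   # seconds after injection
      c = np.array([4.6, 4.1, 3.7, 3.3, 3.0])

      (c0, k), _ = curve_fit(mono_exp, t, c, p0=[5.0, 0.001])
      dose_mg = 25.0
      print(f"extrapolated C(0) = {c0:.2f} mg/L -> volume = {dose_mg / c0:.2f} L")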

  11. Methods for determining the effect of flatness deviations, eccentricity and pyramidal errors on angle measurements

    CSIR Research Space (South Africa)

    Kruger, OA

    2000-01-01

    ... on face-to-face angle measurements. The results show that flatness and eccentricity deviations have less effect on angle measurements than do pyramidal errors. 1. Introduction: Polygons and angle blocks are the most important transfer standards in the field of angle metrology. Polygons are used by national metrology institutes (NMIs) as transfer standards to industry, where they are used in conjunction with autocollimators to calibrate index tables, rotary tables and other forms of angle-measuring equipment...

  12. Impact of automated dispensing cabinets on medication selection and preparation error rates in an emergency department: a prospective and direct observational before-and-after study.

    Science.gov (United States)

    Fanning, Laura; Jones, Nick; Manias, Elizabeth

    2016-04-01

    The implementation of automated dispensing cabinets (ADCs) in healthcare facilities appears to be increasing, in particular within Australian hospital emergency departments (EDs). While the investment in ADCs is on the increase, no studies have specifically investigated the impacts of ADCs on medication selection and preparation error rates in EDs. Our aim was to assess the impact of ADCs on medication selection and preparation error rates in an ED of a tertiary teaching hospital. Pre intervention and post intervention study involving direct observations of nurses completing medication selection and preparation activities before and after the implementation of ADCs in the original and new emergency departments within a 377-bed tertiary teaching hospital in Australia. Medication selection and preparation error rates were calculated and compared between these two periods. Secondary end points included the impact on medication error type and severity. A total of 2087 medication selection and preparations were observed among 808 patients pre and post intervention. Implementation of ADCs in the new ED resulted in a 64.7% (1.96% versus 0.69%, respectively, P = 0.017) reduction in medication selection and preparation errors. All medication error types were reduced in the post intervention study period. There was an insignificant impact on medication error severity as all errors detected were categorised as minor. The implementation of ADCs could reduce medication selection and preparation errors and improve medication safety in an ED setting. © 2015 John Wiley & Sons, Ltd.

  13. Heart rate variability as determinism with jump stochastic parameters.

    Science.gov (United States)

    Zheng, Jiongxuan; Skufca, Joseph D; Bollt, Erik M

    2013-08-01

    We use measured heart rate information (RR intervals) to develop a one-dimensional nonlinear map that describes short term deterministic behavior in the data. Our study suggests that there is a stochastic parameter with persistence which causes the heart rate and rhythm system to wander about a bifurcation point. We propose a modified circle map with a jump process noise term as a model which can qualitatively capture such this behavior of low dimensional transient determinism with occasional (stochastically defined) jumps from one deterministic system to another within a one parameter family of deterministic systems.
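
    A toy version of such a model (an assumed form, not the authors' fitted map): a circle map whose frequency parameter undergoes rare random jumps and otherwise persists, so the dynamics wander near a bifurcation point:

      import numpy as np

      rng = np.random.default_rng(8)
      K, omega = 0.9, 0.35          # coupling and baseline frequency, hypothetical
      theta, par = 0.1, omega
      rr = []

      for _ in range(2000):
          if rng.random() < 0.01:                   # rare jump of the stochastic parameter...
              par = omega + rng.normal(scale=0.05)  # ...which then persists between jumps
          theta = (theta + par - K / (2 * np.pi) * np.sin(2 * np.pi * theta)) % 1.0
          rr.append(theta)

      print(f"mean = {np.mean(rr):.3f}, sd = {np.std(rr):.3f}")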

  14. ANALYSIS OF MACROECONOMIC DETERMINANTS OF EXCHANGE RATE VOLATILITY IN INDIA

    Directory of Open Access Journals (Sweden)

    Anita Mirchandani

    2013-01-01

    The foreign exchange market in India has undergone substantial changes over the last decade. This is evidenced by the excessive volatility of the Indian rupee, causing its depreciation against the major dominant currencies in the international market. This research was carried out in order to investigate the various macroeconomic variables leading to acute variations in the exchange rate of a currency. An attempt has been made to review the probable reasons for the depreciation of the rupee and to analyse the different macroeconomic determinants that have an impact on the volatility of the exchange rate, and their extent of correlation with the same.

  15. Effect of Price Determinants on World Cocoa Prices for Over the Last Three Decades: Error Correction Model (ECM Approach

    Directory of Open Access Journals (Sweden)

    Lya Aklimawati

    2013-12-01

    Full Text Available High  volatility  cocoa  price  movement  is  consequenced  by  imbalancing between power demand and power supply in commodity market. World economy expectation and market  liberalization would lead to instability on cocoa prices in  the  international  commerce.  Dynamic  prices  moving  erratically  influence the benefit  of market players, particularly  producers. The aim of this research is  (1  to  estimate  the  empirical  cocoa  prices  model  for  responding  market dynamics and (2 analyze short-term and long-term effect of price determinants variables  on cocoa prices.  This research  was  carried out by  analyzing  annualdata from 1980 to 2011, based on secondary data. Error correction mechanism (ECM  approach was  used  to  estimate the  econometric  model  of  cocoa  price.The  estimation  results  indicated  that  cocoa  price  was  significantly  affected  by exchange rate IDR-USD, world gross domestic product,  world inflation, worldcocoa production, world cocoa consumption, world cocoa stock and Robusta prices at varied significance level from 1 - 10%. All of these variables have a long run equilibrium relationship. In long run effect, world gross domestic product, world  cocoa  consumption  and  world  cocoa  stock  were  elastic  (E  >1,  while other  variables  were  inelastic  (E  <1.  Variables  that  affecting  cocoa  pricesin  short  run  equilibrium  were  exchange  rate  IDR-USD,  world  gross  domestic product,  world  inflation,  world  cocoa  consumption  and  world  cocoa  stock. The  analysis  results  showed  that  world  gross  domestic  product,  world  cocoa consumption  and  world  cocoa  stock  were  elastic  (E  >1  to  cocoa  prices  in short-term.  Whereas,  the  response  of  cocoa  prices  was  inelastic  to  change  of exchange rate IDR-USD and world inflation.Key words: Price

  16. Reply: Birnbaum's (2012 statistical tests of independence have unknown Type-I error rates and do not replicate within participant

    Directory of Open Access Journals (Sweden)

    Yun-shil Cha

    2013-01-01

    Birnbaum (2011, 2012) questioned the iid (independent and identically distributed) sampling assumptions used by state-of-the-art statistical tests in Regenwetter, Dana and Davis-Stober's (2010, 2011) analysis of the "linear order model". Birnbaum (2012) cited, but did not use, a test of iid by Smith and Batchelder (2008) with analytically known properties. Instead, he created two new test statistics with unknown sampling distributions. Our rebuttal has five components: (1) We demonstrate that the Regenwetter et al. data pass Smith and Batchelder's test of iid with flying colors. (2) We provide evidence from Monte Carlo simulations that Birnbaum's (2012) proposed tests have unknown Type-I error rates, which depend on the actual choice probabilities and on how data are coded, as well as on the null hypothesis of iid sampling. (3) Birnbaum analyzed only a third of Regenwetter et al.'s data. We show that his two new tests fail to replicate on the other two-thirds of the data, within participants. (4) Birnbaum selectively picked data of one respondent to suggest that choice probabilities may have changed partway into the experiment. Such nonstationarity could potentially cause a seemingly good fit to be a Type-II error. We show that the linear order model fits equally well if we allow for warm-up effects. (5) Using hypothetical data, Birnbaum (2012) claimed to show that "true-and-error" models for binary pattern probabilities overcome the alleged shortcomings of Regenwetter et al.'s approach. We disprove this claim on the same data.

  17. Procedure for determining the optimum rate of increasing shaft depth

    Energy Technology Data Exchange (ETDEWEB)

    Durov, E.M.

    1983-03-01

    Presented is an economic analysis of increasing shaft depth during mine modernization. Investigations carried out by the Yuzhgiproshakht Institute are analyzed. The investigations are aimed at determining the optimum shaft sinking rate (the rate which reduces investment to the minimum). The following factors are considered: coal output of a mine (0.9, 1.2, 1.5 and 1.8 Mt/year), depth at which the new mining level is situated (600, 800, 1200, 1400 and 1600 m), four schemes of increasing depth of 2 central shafts (rock hoisting to ground surface, rock hoisting to the existing level, rock haulage to the developed level, rock haulage to the level being developed using a large diameter borehole drilled from the new level to the shaft bottom and enlarged from shaft bottom to the new level), shaft sinking rate (10, 20, 30, 40, 50 and 60 m/month), range of increasing shaft depth (the difference between depth of the shaft before and after increasing its depth by 100, 200, 300 and 400 m). Comparative evaluations show that the optimum shaft sinking rate depends on the scheme for rock hoisting (one of 4 analyzed), range of increasing shaft depth and gas content in coal seams. The optimum shaft sinking rate ranges from 20 to 40 m/month in coal mines with low methane content and from 20 to 30 m/month in gassy coal mines. The planned coal output of a mine does not influence the optimum shaft sinking rate.

  18. Determinants of Inter-Country Internet Diffusion Rates

    OpenAIRE

    Wunnava, Phanindra V.; Leiter, Daniel B.

    2008-01-01

    This paper employs cross-sectional data from 100 countries to analyze the main determinants of inter-country Internet diffusion rates. We set up an empirical model based on strong theoretical foundations, in which we regress Internet usage on variables that capture social, economic and political differences between these countries. Our results support past findings that economic strength, infrastructure and knowledge of the English language positively affect Internet connectivity. In addition...

  19. Metabolic rate determines haematopoietic stem cell self-renewal.

    Science.gov (United States)

    Sastry, P S R K

    2004-01-01

The number of haematopoietic stem cells (HSCs) per animal is conserved across species. This means that HSCs need to maintain hematopoiesis over a longer period in larger animals, which requires stem cell self-renewal. The three existing models are the stochastic model, the instructive model and, more recently proposed, the chiaro-scuro model. It is a well-known allometric law that metabolic rate scales to the three-quarter power of body mass; larger animals have a lower metabolic rate than smaller animals. Here it is hypothesized that metabolic rate determines haematopoietic stem cell self-renewal: at lower metabolic rates the stem cells commit to self-renewal, whereas at higher metabolic rates they become committed to different lineages. The present hypothesis can explain the salient features of the different models. Recent findings regarding stem cell self-renewal suggest an important role for Wnt proteins and their receptors, known as frizzleds, which are an important component of the cell signaling pathway. The role of cGMP in the Wnts' action provides further justification for the present hypothesis, as cGMP is intricately linked to metabolic rate. The present hypothesis can also explain telomere homeostasis. One prediction concerns the limit of cell divisions known as the Hayflick limit: it is suggested here that this limit results from the metabolic rate under laboratory conditions, and that a higher number of cell divisions is possible in vivo if the metabolic rate is lower. Copyright 2004 Elsevier Ltd.

  20. Comparison of the effect of paper and computerized procedures on operator error rate and speed of performance

    International Nuclear Information System (INIS)

    Converse, S.A.; Perez, P.B.; Meyer, S.; Crabtree, W.

    1994-01-01

    The Computerized Procedures Manual (COPMA-II) is an advanced procedure manual that can be used to select and execute procedures, to monitor the state of plant parameters, and to help operators track their progress through plant procedures. COPMA-II was evaluated in a study that compared the speed and accuracy of operators' performance when they performed with COPMA-II and traditional paper procedures. Sixteen licensed reactor operators worked in teams of two to operate the Scales Pressurized Water Reactor Facility at North Carolina State University. Each team performed one change of power with each type of procedure to simulate performance under normal operating conditions. Teams then performed one accident scenario with COPMA-II and one with paper procedures. Error rates, performance times, and subjective estimates of workload were collected, and were evaluated for each combination of procedure type and scenario type. For the change of power task, accuracy and response time were not different for COPMA-II and paper procedures. Operators did initiate responses to both accident scenarios fastest with paper procedures. However, procedure type did not moderate response completion time for either accident scenario. For accuracy, performance with paper procedures resulted in twice as many errors as did performance with COPMA-II. Subjective measures of mental workload for the accident scenarios were not affected by procedure type

  1. Inflation of type I error rates by unequal variances associated with parametric, nonparametric, and Rank-Transformation Tests

    Directory of Open Access Journals (Sweden)

    Donald W. Zimmerman

    2004-01-01

    Full Text Available It is well known that the two-sample Student t test fails to maintain its significance level when the variances of treatment groups are unequal, and, at the same time, sample sizes are unequal. However, introductory textbooks in psychology and education often maintain that the test is robust to variance heterogeneity when sample sizes are equal. The present study discloses that, for a wide variety of non-normal distributions, especially skewed distributions, the Type I error probabilities of both the t test and the Wilcoxon-Mann-Whitney test are substantially inflated by heterogeneous variances, even when sample sizes are equal. The Type I error rate of the t test performed on ranks replacing the scores (rank-transformed data is inflated in the same way and always corresponds closely to that of the Wilcoxon-Mann-Whitney test. For many probability densities, the distortion of the significance level is far greater after transformation to ranks and, contrary to known asymptotic properties, the magnitude of the inflation is an increasing function of sample size. Although nonparametric tests of location also can be sensitive to differences in the shape of distributions apart from location, the Wilcoxon-Mann-Whitney test and rank-transformation tests apparently are influenced mainly by skewness that is accompanied by specious differences in the means of ranks.
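A hedged simulation in the spirit of the study: the Type-I error of the Student t test and the Wilcoxon-Mann-Whitney test under equal means but unequal variances, with equal sample sizes and a skewed population. The distribution and parameters below are illustrative choices, not the article's exact design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def rejection_rates(n=25, sd_ratio=4.0, reps=5000, alpha=0.05):
    t_rej = w_rej = 0
    for _ in range(reps):
        # Both groups have mean 0 under H0; only the spread (and hence the
        # skewness in raw units) differs between groups.
        g1 = rng.exponential(1.0, n) - 1.0
        g2 = sd_ratio * (rng.exponential(1.0, n) - 1.0)
        t_rej += stats.ttest_ind(g1, g2).pvalue < alpha
        w_rej += stats.mannwhitneyu(g1, g2).pvalue < alpha
    return t_rej / reps, w_rej / reps

# Both empirical rates can drift away from the nominal 0.05 despite n1 == n2,
# which is the article's central point.
print(rejection_rates())
```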

  2. Determination of global positioning system (GPS) receiver clock errors: impact on positioning accuracy

    International Nuclear Information System (INIS)

    Yeh, Ta-Kang; Hwang, Cheinway; Xu, Guochang; Wang, Chuan-Sheng; Lee, Chien-Chih

    2009-01-01

Enhancing the positioning precision is the primary pursuit of global positioning system (GPS) users. To achieve this goal, most studies have focused on the relationship between GPS receiver clock errors and GPS positioning precision. This study utilizes undifferenced phase data to calculate GPS clock errors and compares them directly with the frequency of a cesium clock, to verify the clock errors estimated by the method used in this paper. The frequency stability calculated in this paper (the indirect method) and that measured by the National Standard Time and Frequency Laboratory (NSTFL) of Taiwan (the direct method) match to 1.5 × 10⁻¹² (the value from this study was smaller than that from NSTFL), suggesting that the proposed technique has reached a certain level of quality. The built-in quartz clocks in the GPS receivers yield relative frequency offsets that are 3-4 orders of magnitude higher than those of rubidium clocks. The frequency stability of the quartz clocks is on average two orders of magnitude worse than that of the rubidium clock. Using the rubidium clock instead of the quartz clock, the horizontal and vertical positioning accuracies were improved by 26-78% (0.6-3.6 mm) and 20-34% (1.3-3.0 mm), respectively, for a short baseline. These improvements are 7-25% (0.3-1.7 mm) and 11% (1.7 mm) for a long baseline. Our experiments show that the frequency stability of the clock, rather than the relative frequency offset, is the governing factor in positioning accuracy

  3. A low error reconstruction method for confocal holography to determine 3-dimensional properties

    Energy Technology Data Exchange (ETDEWEB)

Jacquemin, P.B., E-mail: pbjacque@nps.edu [Mechanical Engineering, University of Victoria, EOW 548, 800 Finnerty Road, Victoria, BC (Canada)]; Herring, R.A. [Mechanical Engineering, University of Victoria, EOW 548, 800 Finnerty Road, Victoria, BC (Canada)]

    2012-06-15

A confocal holography microscope developed at the University of Victoria uniquely combines holography with a scanning confocal microscope to non-intrusively measure fluid temperatures in three dimensions (Herring, 1997), (Abe and Iwasaki, 1999), (Jacquemin et al., 2005). The Confocal Scanning Laser Holography (CSLH) microscope was built and tested to verify the concept of 3D temperature reconstruction from scanned holograms. The CSLH microscope used a focused laser to non-intrusively probe a heated fluid specimen. The focused beam probed the specimen instead of a collimated beam in order to obtain different phase-shift data for each scan position; a collimated beam produced the same information for scanning along the optical propagation z-axis. No rotational scanning mechanisms were used in the CSLH microscope, which restricted the scan angle to the cone angle of the probe beam. Limited viewing-angle scanning from a single viewpoint window produced a challenge for tomographic 3D reconstruction: the reconstruction matrices were either singular or ill-conditioned, making reconstruction impossible or subject to significant error. Establishing boundary conditions with a particular scanning geometry resulted in a method of reconstruction with low error referred to as 'wily'. The wily reconstruction method can be applied to microscopy situations requiring 3D imaging where there is a single viewpoint window, a probe beam with high numerical aperture, and specified boundary conditions for the specimen. The issues and progress of the wily algorithm for the CSLH microscope are reported herein. -- Highlights: ► Evaluation of an optical confocal holography device to measure 3D temperature of a heated fluid. ► Processing of multiple holograms containing the cumulative refractive index through the fluid. ► Reconstruction issues due to restricting angular scanning to the numerical aperture of the beam.

  4. A low error reconstruction method for confocal holography to determine 3-dimensional properties

    International Nuclear Information System (INIS)

    Jacquemin, P.B.; Herring, R.A.

    2012-01-01

    A confocal holography microscope developed at the University of Victoria uniquely combines holography with a scanning confocal microscope to non-intrusively measure fluid temperatures in three-dimensions (Herring, 1997), (Abe and Iwasaki, 1999), (Jacquemin et al., 2005). The Confocal Scanning Laser Holography (CSLH) microscope was built and tested to verify the concept of 3D temperature reconstruction from scanned holograms. The CSLH microscope used a focused laser to non-intrusively probe a heated fluid specimen. The focused beam probed the specimen instead of a collimated beam in order to obtain different phase-shift data for each scan position. A collimated beam produced the same information for scanning along the optical propagation z-axis. No rotational scanning mechanisms were used in the CSLH microscope which restricted the scan angle to the cone angle of the probe beam. Limited viewing angle scanning from a single view point window produced a challenge for tomographic 3D reconstruction. The reconstruction matrices were either singular or ill-conditioned making reconstruction with significant error or impossible. Establishing boundary conditions with a particular scanning geometry resulted in a method of reconstruction with low error referred to as “wily”. The wily reconstruction method can be applied to microscopy situations requiring 3D imaging where there is a single viewpoint window, a probe beam with high numerical aperture, and specified boundary conditions for the specimen. The issues and progress of the wily algorithm for the CSLH microscope are reported herein. -- Highlights: ► Evaluation of an optical confocal holography device to measure 3D temperature of a heated fluid. ► Processing of multiple holograms containing the cumulative refractive index through the fluid. ► Reconstruction issues due to restricting angular scanning to the numerical aperture of the beam. ► Minimizing tomographic reconstruction error by defining boundary

  5. Sources of errors in the determination of fluorine in feeding stuffs

    Energy Technology Data Exchange (ETDEWEB)

    Oelschlaeger, W; Kirchgessner, M

    1960-01-01

The difference between deficiency and toxicity levels of F in fodder is small; for this reason the many sources of error in the estimation of F contents are discussed. A list of these errors and suggested preventive measures are included. Finally, detailed working instructions are given for accurate F analysis, and representative F contents of certain feeding stuffs are tabulated. A maximum permissible limit for dairy cattle of 2-3 mg F per day per kg body weight is suggested. F contents of plants growing near HF-producing plants, especially downwind, are often dangerously high.

  6. Determining Surface Infiltration Rate of Permeable Pavements with Digital Imaging

    Directory of Open Access Journals (Sweden)

    Caterina Valeo

    2018-01-01

Full Text Available Cell phone images of pervious pavement surfaces were used to explore relationships between surface infiltration rates (SIR) measured using the ASTM C1701 standard test and using a simple falling head test. A fiber-reinforced porous asphalt surface and a highly permeable material comprised of stone, rubber and a polymer binder (Porous Pave) were tested. Images taken with a high-resolution cellphone camera were acquired as JPEG files and converted to grayscale images in Matlab® for analysis. The distribution of gray levels was compared to the surface infiltration rates obtained for both pavements, with attention given to the mean of the distribution. Investigation into the relationships between mean SIR and parameters determined from the gray level distribution produced in the image analysis revealed that the mean SIR measured in both pavements was proportional to the inverse of the mean of the distribution. The relationships produced a coefficient of determination over 85% using both the ASTM and the falling head test on the porous asphalt surface. SIR measurements determined with the ASTM method were highly correlated with the inverse mean of the distribution of gray levels in the Porous Pave material as well, producing coefficients of determination of over 90% and Kendall's tau-b of roughly 70% for nonparametric data.
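A sketch of the image workflow described: convert each pavement photo to 8-bit grayscale, take the mean gray level, and regress measured SIR against its inverse. The paired values below are hypothetical placeholders for illustration, not the study's data:

```python
import numpy as np
from PIL import Image

def mean_gray(path: str) -> float:
    # JPEG -> 8-bit grayscale, analogous to the Matlab conversion in the paper.
    return float(np.asarray(Image.open(path).convert("L")).mean())

# Hypothetical mean gray levels and measured SIR values (mm/h) for five slabs.
gray = np.array([92.0, 110.0, 131.0, 155.0, 178.0])
sir = np.array([2100.0, 1650.0, 1300.0, 980.0, 760.0])

# Fit SIR = a / (mean gray) + b, mirroring the reported inverse-mean relation.
a, b = np.polyfit(1.0 / gray, sir, 1)
pred = a / gray + b
r2 = 1.0 - np.sum((sir - pred) ** 2) / np.sum((sir - sir.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```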

  7. Bit Error Rate Performance Analysis of a Threshold-Based Generalized Selection Combining Scheme in Nakagami Fading Channels

    Directory of Open Access Journals (Sweden)

    Kousa Maan

    2005-01-01

Full Text Available The severity of fading on mobile communication channels calls for the combining of multiple diversity sources to achieve acceptable error rate performance. Traditional approaches perform the combining of the different diversity sources using either the conventional selective diversity combining (CSC), equal-gain combining (EGC), or maximal-ratio combining (MRC) schemes. CSC and MRC are the two extremes of compromise between performance quality and complexity. Some researchers have proposed a generalized selection combining scheme (GSC) that combines the best branches out of the available diversity resources. In this paper, we analyze a generalized selection combining scheme based on a threshold criterion rather than a fixed-size subset of the best channels. In this scheme, only those diversity branches whose energy levels are above a specified threshold are combined. Closed-form analytical solutions for the BER performance of this scheme over Nakagami fading channels are derived. We also discuss the merits of this scheme over GSC.
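A Monte Carlo sketch of threshold-based generalized selection combining over iid Nakagami-m branches with coherent BPSK, as a numerical counterpart to the closed-form analysis. The normalized threshold and the fallback to the single best branch when no branch clears it are common conventions assumed here, not necessarily the paper's exact rule:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def tgsc_ber(snr_db=10.0, m=2.0, L=4, mu_th=0.5, reps=200_000):
    snr = 10 ** (snr_db / 10)
    # Nakagami-m power gains are Gamma(m, 1/m), normalized to unit mean.
    g = rng.gamma(shape=m, scale=1.0 / m, size=(reps, L))
    # Keep only branches whose power exceeds mu_th times the strongest branch.
    keep = g * (g >= mu_th * g.max(axis=1, keepdims=True))
    combined = np.where(keep.sum(axis=1) > 0, keep.sum(axis=1), g.max(axis=1))
    # Conditional BPSK error probability Q(sqrt(2*gamma)), averaged over fading.
    return norm.sf(np.sqrt(2 * snr * combined)).mean()

print(f"BER ≈ {tgsc_ber():.2e}")
```

Sweeping mu_th from 0 to 1 moves the scheme between the MRC-like and CSC-like extremes the abstract mentions.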

  8. Determination of cost effective waste management system receipt rates

    International Nuclear Information System (INIS)

    McKee, R.W.; Huber, H.D.

    1991-01-01

A comprehensive logistics and cost analysis has been carried out to determine whether there are potential benefits to the high-level waste management system for receipt rates other than the current 3,000 MTU/yr design basis. The analysis includes both a Repository-Only System and a Storage-Only System. Repository startup dates of 2010 and 2015 and MRS startup dates of 1998 and three years prior to the repository have been evaluated. Receipt rates ranging from 1,500 to 6,000 MTU/yr have been considered. Higher receipt rates appear to be economically justified; for either system, minimum costs are found at a repository receipt rate of 6,000 MTU/yr. However, the MRS receipt rate for minimum system costs depends on the MRS startup date. With a 1998 MRS and a 2010 repository, the added cost of providing the MRS is offset by at-reactor storage cost reductions, and the total system cost of $10.0 billion is virtually the same as for the repository-only system. 9 refs., 8 figs., 3 tabs

  9. Determination of cost effective waste management system receipt rates

    International Nuclear Information System (INIS)

    McKee, R.W.; Huber, H.D.

    1991-01-01

    A comprehensive logistics and cost analysis has been carried out to determine if there are potential benefits to the high-level waste management system for receipt rates other than the current 3,000 MTU/yr design-basis receipt rate. The scope of the analysis includes both a Repository-Only System and a Storage-Only or Basic MRS System. To allow for current uncertainties in facility startup scheduling, cases considering repository startup dates of 2010 and 2015 and MRS startup dates of 1998 and three years prior to the repository have been evaluated. Receipt rates ranging from 1,500 to 6,000 MTU/yr have been considered for both the MRS and the repository. Higher receipt rates appear to be economically justified for both the repository and an MRS. For a repository-only system, minimum costs are found at a repository receipt rate of 6,000 MTU/yr. When a storage-only MRS is included in the system, minimum system costs are also achieved at a repository receipt rate of 6,000 MTU/yr. However, the MRS receipt rate for minimum system costs depends on the MRS startup date and ranges from 3,500 to 6,000 MTU/yr. With a 1998 MRS and a 2010 repository, the added cost of providing the MRS is offset by at-reactor storage cost reductions and the total system cost of $10.0 billion is virtually the same as for the repository-only system

  10. 31 CFR 359.14 - How are composite rates determined?

    Science.gov (United States)

    2010-07-01

... composite interest rates.): Composite rate = {(Fixed rate ÷ 2) + Semiannual inflation rate + [Semiannual inflation rate × (Fixed rate ÷ 2)]} × 2. Example for I bonds issued May 2002-October 2002: Fixed rate = 2.00%; Semiannual inflation rate = 0.28%; Composite rate = [0.0200 ÷ 2 + 0.0028 + (0.0028 × 0.0200 ÷ 2)] × 2...
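A one-line check of the formula quoted above, as a sketch; the official program's rounding conventions are not reproduced here:

```python
def composite_rate(fixed: float, semiannual_inflation: float) -> float:
    # Composite rate = [fixed/2 + inflation + inflation * fixed/2] * 2
    half_fixed = fixed / 2
    return (half_fixed + semiannual_inflation + semiannual_inflation * half_fixed) * 2

# May 2002 example from the regulation: fixed = 2.00%, semiannual inflation = 0.28%.
print(f"{composite_rate(0.0200, 0.0028):.4%}")  # 2.5656%
```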

  11. Global determination of rating curves in the Amazon basin from satellite altimetry

    Science.gov (United States)

    Paris, Adrien; Paiva, Rodrigo C. D.; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Calmant, Stéphane; Collischonn, Walter; Bonnet, Marie-Paule; Seyler, Frédérique

    2014-05-01

The Amazon basin is the largest hydrological basin in the world. Over the past few years, it has experienced an unusual succession of extreme droughts and floods, whose origin is still a matter of debate. One of the major issues in understanding such events is obtaining discharge series distributed over the entire basin. Satellite altimetry can be used to improve our knowledge of the hydrological stream flow conditions in the basin through rating curves. Rating curves are mathematical relationships between stage and discharge at a given place. The common way to determine the parameters of the relationship is to compute a non-linear regression between the discharge and stage series. In this study, the discharge data were obtained by simulation over the entire basin using the MGB-IPH model with TRMM Merge input rainfall data and assimilation of gage data, run from 1998 to 2009. The stage dataset consists of ~900 altimetry series at ENVISAT and Jason-2 virtual stations, sampling the stages over more than a hundred rivers in the basin. The altimetry series span 2002 to 2011. In the present work we present the benefits of using stochastic methods instead of probabilistic ones to determine a dataset of rating curve parameters that are hydrologically meaningful throughout the entire Amazon basin. The rating curve parameters were computed using an optimization technique based on a Markov chain Monte Carlo sampler and a Bayesian inference scheme. This technique provides an estimate of the best value for the parameters together with their posterior probability distribution, allowing the determination of a credibility interval for the calculated discharge. The error in the discharge estimates from the MGB-IPH model is also included in the rating curve determination. These MGB-IPH errors come from either errors in the discharge derived from the gage readings or errors in the satellite rainfall estimates. The present experiment shows that the stochastic approach
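A hedged sketch of a Metropolis sampler for rating-curve parameters, assuming the usual power law Q = a(h - h0)^b with Gaussian errors on the model discharges. This follows the spirit of the abstract's MCMC/Bayesian scheme; the priors, error model, and demo data are simplified placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post(theta, h, q, sigma):
    a, h0, b = theta
    if a <= 0 or b <= 0 or h0 >= h.min():
        return -np.inf  # flat priors with physical bounds
    resid = q - a * (h - h0) ** b
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(h, q, sigma, theta0, steps=20_000, scale=(5.0, 0.05, 0.05)):
    chain, theta = [], np.array(theta0, float)
    lp = log_post(theta, h, q, sigma)
    for _ in range(steps):
        prop = theta + rng.normal(0, scale)   # random-walk proposal
        lp_prop = log_post(prop, h, q, sigma)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Tiny synthetic demo: stages (m) and "observed" discharges (m^3/s).
h = np.array([2.1, 2.8, 3.5, 4.2, 5.0])
q = np.array([384.0, 793.0, 1350.0, 2054.0, 3038.0])
chain = metropolis(h, q, sigma=0.1 * q, theta0=(150.0, 0.5, 2.0))
# Posterior percentiles give the credibility intervals the abstract mentions.
print(np.percentile(chain[10_000:], [2.5, 50, 97.5], axis=0))
```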

  12. Determination of heart rate variability with an electronic stethoscope.

    Science.gov (United States)

    Kamran, Haroon; Naggar, Isaac; Oniyuke, Francisca; Palomeque, Mercy; Chokshi, Priya; Salciccioli, Louis; Stewart, Mark; Lazar, Jason M

    2013-02-01

    Heart rate variability (HRV) is widely used to characterize cardiac autonomic function by measuring beat-to-beat alterations in heart rate. Decreased HRV has been found predictive of worse cardiovascular (CV) outcomes. HRV is determined from time intervals between QRS complexes recorded by electrocardiography (ECG) for several minutes to 24 h. Although cardiac auscultation with a stethoscope is performed routinely on patients, the human ear cannot detect heart sound time intervals. The electronic stethoscope digitally processes heart sounds, from which cardiac time intervals can be obtained. Accordingly, the objective of this study was to determine the feasibility of obtaining HRV from electronically recorded heart sounds. We prospectively studied 50 subjects with and without CV risk factors/disease and simultaneously recorded single lead ECG and heart sounds for 2 min. Time and frequency measures of HRV were calculated from R-R and S1-S1 intervals and were compared using intra-class correlation coefficients (ICC). The majority of the indices were strongly correlated (ICC 0.73-1.0), while the remaining indices were moderately correlated (ICC 0.56-0.63). In conclusion, we found HRV measures determined from S1-S1 are in agreement with those determined by single lead ECG, and we demonstrate and discuss differences in the measures in detail. In addition to characterizing cardiac murmurs and time intervals, the electronic stethoscope holds promise as a convenient low-cost tool to determine HRV in the hospital and outpatient settings as a practical extension of the physical examination.
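A minimal sketch of time-domain HRV measures that can be computed identically from R-R (ECG) or S1-S1 (electronic stethoscope) intervals, which is what the study's ICC comparison rests on. The interval values below are hypothetical, in milliseconds:

```python
import numpy as np

def hrv_time_domain(intervals_ms: np.ndarray) -> dict:
    diffs = np.diff(intervals_ms)
    return {
        "mean_rr": intervals_ms.mean(),
        "sdnn": intervals_ms.std(ddof=1),           # overall variability
        "rmssd": np.sqrt(np.mean(diffs ** 2)),      # beat-to-beat variability
        "pnn50": np.mean(np.abs(diffs) > 50) * 100, # % successive diffs > 50 ms
    }

s1_intervals = np.array([812, 798, 825, 840, 805, 790, 818, 832], dtype=float)
print(hrv_time_domain(s1_intervals))
```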

  13. Variation in human recombination rates and its genetic determinants.

    Directory of Open Access Journals (Sweden)

    Adi Fledel-Alon

Full Text Available Despite the fundamental role of crossing-over in the pairing and segregation of chromosomes during human meiosis, the rates and placements of events vary markedly among individuals. Characterizing this variation and identifying its determinants are essential steps in our understanding of the human recombination process and its evolution. Using three large sets of European-American pedigrees, we examined variation in five recombination phenotypes that capture distinct aspects of crossing-over patterns. We found that the mean recombination rate in males and females and the historical hotspot usage are significantly heritable and are uncorrelated with one another. We then conducted a genome-wide association study in order to identify loci that influence them. We replicated associations of RNF212 with the mean rate in males and in females as well as the association of Inversion 17q21.31 with the female mean rate. We also replicated the association of PRDM9 with historical hotspot usage, finding that it explains most of the genetic variance in this phenotype. In addition, we identified a set of new candidate regions for further validation. These findings suggest that variation at broad and fine scales is largely separable and that, beyond three known loci, there is no evidence for common variation with large effects on recombination phenotypes.

  14. On the Determination of Magnesium Degradation Rates under Physiological Conditions.

    Science.gov (United States)

    Nidadavolu, Eshwara Phani Shubhakar; Feyerabend, Frank; Ebel, Thomas; Willumeit-Römer, Regine; Dahms, Michael

    2016-07-28

The current physiological in vitro tests of Mg degradation follow the procedure stated in the ASTM standard. This standard, although useful in predicting the initial degradation behavior of an alloy, has its limitations in interpreting the same for longer periods of immersion in cell culture media. This is an important limitation because an alloy's degradation is time dependent: even if two different alloys show similar corrosion rates in a short-term experiment, their degradation characteristics might differ with increased immersion times. Furthermore, studies concerning Mg corrosion often extrapolate the corrosion rate from a single time-point measurement to the order of a year (mm/y), which might not be appropriate because of the time-dependent degradation behavior. In this work, the above issues are addressed and a new methodology for performing long-term immersion tests to determine the degradation rates of Mg alloys is put forth. For this purpose, cast and extruded Mg-2Ag and powder pressed and sintered Mg-0.3Ca alloy systems were chosen. DMEM Glutamax + 10% FBS (Fetal Bovine Serum) + 1% penicillin-streptomycin was used as the cell culture medium. The advantages of such a method in predicting in vivo degradation rates deduced from in vitro experiments are discussed.

  15. Apparatus and method for determining solids circulation rate

    Science.gov (United States)

Ludlow, J Christopher [Morgantown, WV]; Spenik, James L [Morgantown, WV]

    2012-02-14

The invention relates to a method of determining bed velocity and solids circulation rate in a standpipe experiencing a moving packed bed flow, such as in the standpipe section of a circulating fluidized bed reactor. The method utilizes in-situ measurement of differential pressure over known axial lengths of the standpipe in conjunction with in-situ gas velocity measurement for a novel application of the Ergun equations, allowing determination of standpipe void fraction and moving packed bed velocity. The method takes advantage of the moving packed bed property of constant void fraction in order to integrate the measured parameters into a simultaneous solution of Ergun-based equations and conservation-of-mass equations across multiple sections of the standpipe.
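A sketch of the kind of Ergun-based inversion the abstract describes: with measured pressure gradients and gas velocities in two standpipe sections and a shared void fraction, solve simultaneously for the void fraction and the moving-bed (solids) velocity. The gas and particle properties and the measurements are placeholders, and the exact equation set of the patented method is an assumption here:

```python
import numpy as np
from scipy.optimize import fsolve

MU, RHO_G, DP = 1.8e-5, 1.2, 250e-6  # gas viscosity (Pa s), density (kg/m^3), particle dia (m)

def ergun_dp(u_slip, eps):
    # Ergun pressure gradient (Pa/m) for slip velocity u_slip and void fraction eps.
    a = 150 * MU * (1 - eps) ** 2 / (eps ** 3 * DP ** 2)
    b = 1.75 * RHO_G * (1 - eps) / (eps ** 3 * DP)
    return a * u_slip + b * u_slip * abs(u_slip)

def residuals(x, dpdz, u_gas):
    eps, v_solids = x
    # Same eps and v_solids in both sections; slip = gas velocity minus bed velocity.
    return [ergun_dp(u - v_solids, eps) - g for g, u in zip(dpdz, u_gas)]

# Hypothetical measurements: two sections' dP/dz (Pa/m) and gas velocities (m/s).
eps, v_solids = fsolve(residuals, x0=[0.5, 0.0], args=([4200.0, 3600.0], [0.12, 0.10]))
print(f"void fraction ≈ {eps:.2f}, bed velocity ≈ {v_solids:.3f} m/s")
```

A negative bed velocity simply reflects the sign convention: in a standpipe the solids move downward against the gas.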

  16. Bit Error Rate Performance of a MIMO-CDMA System Employing Parity-Bit-Selected Spreading in Frequency Nonselective Rayleigh Fading

    Directory of Open Access Journals (Sweden)

    Claude D'Amours

    2011-01-01

Full Text Available We analytically derive the upper bound for the bit error rate (BER) performance of a single-user multiple input multiple output code division multiple access (MIMO-CDMA) system employing parity-bit-selected spreading in slowly varying, flat Rayleigh fading. The analysis is done for spatially uncorrelated links. The analysis presented demonstrates that parity-bit-selected spreading provides an asymptotic gain of 10 log(Nt) dB over conventional MIMO-CDMA when the receiver has perfect channel estimates. This analytical result concurs with previous works where the BER is determined by simulation methods, and provides insight into why the different techniques provide improvement over conventional MIMO-CDMA systems.

  17. Evapotranspiration estimates and consequences due to errors in the determination of the net radiation and advective effects

    International Nuclear Information System (INIS)

    Oliveira, G.M. de; Leitao, M. de M.V.B.R.

    2000-01-01

The objective of this study was to analyze the consequences for evapotranspiration (ET) estimates during the growing cycle of a peanut crop of errors committed in the determination of the radiation balance (Rn), as well as those caused by advective effects. This research was conducted at the Experimental Station of CODEVASF in an irrigated perimeter located in the city of Rodelas, BA, during the period September to December 1996. The results showed that errors of the order of 2.2 MJ m⁻² d⁻¹ in the calculation of Rn, and consequently in the estimate of ET, can occur depending on the time period considered for the daily total of Rn. It was verified that the areas surrounding the experimental field, as well as the areas of exposed soil within the field, contributed significantly to the generation of local advection of sensible heat, which resulted in an increase in evapotranspiration.

  18. Dose rate determining factors of PWR primary water

    International Nuclear Information System (INIS)

    Terachi, Takumi; Kuge, Toshiharu; Nakano, Nobuo

    2014-01-01

The relationship between dose rate trends and water chemistry has been studied to clarify the determining factors of the dose rates. Dose rate trends and water chemistry of 11 PWR plants of KEPCO (Kansai Electric Power Co., Inc.) were summarized. Plant operating experience over more than 40 years indicates that the chemical composition of the oxide film, the behaviour of corrosion products and the Co-58/Co-60 ratio in the primary system have affected dose rate trends. According to plant operating experience, the amount of Co-58 has been decreasing with increasing duration of SG (steam generator) usage, indicating that stable oxide film formation on the inner surface of SG tubing is a major beneficial factor for radiation source reduction. On the other hand, a long-term reduction in the amount of Co-60 has not been clearly observed, especially in particular high-dose plants. The primary water parameters imply that considering the release and purification balance of Co-59 is important to prevent accumulation of the source term in primary water. In addition, the effect of zinc injection, which relates to the chemical composition of the oxide film, was also assessed. As a result, the amount of radioactive Co has clearly decreased. The decreasing trend appears to correlate with the half-life of Co-60, because the injected zinc is considered to prevent the uptake of radioactive Co into the oxide film on the inner surfaces of the components and piping. In this paper, the influence of water chemistry and of material replacement experience on the dose rates is discussed. (author)

  19. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Directory of Open Access Journals (Sweden)

    Philip J Kellman

    Full Text Available Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert

  20. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Science.gov (United States)

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  1. [Determination of the 120-day post prostatic biopsy mortality rate].

    Science.gov (United States)

    Canat, G A; Duclos, A; Couray-Targe, S; Schott, A-M; Polazzi, S; Scoazec, J-Y; Berger, F; Perrin, P

    2014-06-01

Concerning death rates have been reported following prostate biopsy, but the lack of context in which these events occurred makes it difficult to take any position. We therefore aimed to determine the 120-day post-biopsy mortality rate. Between 2000 and 2011, 8,804 men underwent prostate biopsy in the Hospices Civils de Lyon. We retrospectively studied the mortality rate after each of the 11,816 procedures. The imputability of biopsies was assessed by examining all medical records. Dates of death were extracted from our local patient management database, which is updated quarterly with death notifications from the French National Institute for Statistics and Economic Studies. In our study, 42 deaths occurred within 120 days after the 11,816 prostate biopsies (0.36%). Of the 42 records, 9 were lost to follow-up, 3 had no identifiable cause of death, and 28 had an intercurrent event ruling out prostate biopsy as a cause of death. Only 2 deaths could be linked to biopsy. We report at most 2 deaths possibly related to prostate biopsy over 11,816 procedures (0.02%). We confirm that prostate biopsies can be lethal, but this rare outcome should not be considered an argument against prostate screening given the circumstances in which it occurs. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  2. Determination of area reduction rate by continuous ball indentation test

    International Nuclear Information System (INIS)

    Zou, Bin; Guan, Kai Shu; Wu, Sheng Bao

    2016-01-01

The rate of area reduction is an important mechanical property for appraising the plasticity of metals, and it is conventionally obtained from the uniaxial tensile test. A methodology is proposed to determine the area reduction rate by the continuous ball indentation test technique. Continuum damage accumulation theory is adopted in this work to identify the failure point in the indentation; the corresponding indentation depth at this point can be obtained and used to estimate the area reduction rate. The local strain limit criterion proposed in the ASME VIII-2 2007 alternative rules is also adopted in this research to convert the multiaxial strain of the indentation test to the uniaxial strain of the tensile test. The pile-up and sink-in phenomena, which can affect the result significantly, are also discussed in this paper. This method can be useful in engineering practice for evaluating material degradation under severe working conditions, owing to the non-destructive nature of the ball indentation test. In order to validate the method, continuous ball indentation tests were performed on ferritic steel 16MnR and ASTM A193 B16, and the results were compared with those obtained from the traditional uniaxial tensile test.

  3. Effect of a health system's medical error disclosure program on gastroenterology-related claims rates and costs.

    Science.gov (United States)

    Adams, Megan A; Elmunzer, B Joseph; Scheiman, James M

    2014-04-01

In 2001, the University of Michigan Health System (UMHS) implemented a novel medical error disclosure program. This study analyzes the effect of this program on gastroenterology (GI)-related claims and costs. This was a review of claims naming a gastroenterologist in the UMHS Risk Management Database (1990-2010). Claims were classified according to pre-determined categories. Claims data, including incident date, date of resolution, and total liability dollars, were reviewed. Mean total liability incurred per claim in the pre- and post-implementation eras was compared. Patient encounter data from the Division of Gastroenterology were also reviewed in order to benchmark claims data against changes in clinical volume. There were 238,911 GI encounters in the pre-implementation era and 411,944 in the post-implementation era. A total of 66 encounters resulted in claims: 38 in the pre-implementation era and 28 in the post-implementation era. Of the total number of claims, 15.2% alleged delay in diagnosis/misdiagnosis, 42.4% related to a procedure, and 42.4% involved improper management, treatment, or monitoring. The reduction in the proportion of encounters resulting in claims was statistically significant (P = 0.001), as was the reduction in time to claim resolution (1,000 vs. 460 days) (P < 0.0001). There was also a reduction in the mean total liability per claim ($167,309 pre vs. $81,107 post; 95% confidence interval: $33,682.5-$300,936.2 pre vs. $1,687.8-$160,526.7 post). Implementation of a novel medical error disclosure program promoting transparency and quality improvement not only decreased the number of GI-related claims per patient encounter, but also dramatically shortened the time to claim resolution.

  4. The Differences in Error Rate and Type between IELTS Writing Bands and Their Impact on Academic Workload

    Science.gov (United States)

    Müller, Amanda

    2015-01-01

This paper attempts to demonstrate the differences in writing between International English Language Testing System (IELTS) bands 6.0, 6.5 and 7.0. An analysis of exemplars provided by the IELTS test makers reveals that IELTS 6.0, 6.5 and 7.0 writers can make a minimum of 206, 96 and 35 errors per 1,000 words, respectively. The following section…

  5. Galaxy Cluster Shapes and Systematic Errors in H_0 as Determined by the Sunyaev-Zel'dovich Effect

    Science.gov (United States)

    Sulkanen, Martin E.; Patel, Sandeep K.

    1998-01-01

Imaging of the Sunyaev-Zel'dovich (SZ) effect in galaxy clusters combined with cluster plasma x-ray diagnostics promises to measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. In this paper we present a study of the systematic errors in the value of H_0, as determined by the x-ray and SZ properties of theoretical samples of triaxial isothermal "beta-model" clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. We calculate three estimates of H_0 for each cluster, based on their large and small apparent angular core radii and their arithmetic mean. We average the estimates of H_0 for a sample of 25 clusters and find that the estimates have limited systematic error: the 99.7% confidence intervals for the mean estimated H_0, analyzing the clusters using either their large or mean angular core radius, are within 14% of the "true" (assumed) value of H_0 (and enclose it), for a triaxial beta-model cluster sample possessing a distribution of apparent x-ray cluster ellipticities consistent with that of observed x-ray clusters.

  6. Sample sizes to control error estimates in determining soil bulk density in California forest soils

    Science.gov (United States)

    Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

    2016-01-01

Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed, along with element concentrations, to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting soil bulk density with a...

  7. Traditional biomolecular structure determination by NMR spectroscopy allows for major errors

    NARCIS (Netherlands)

    Nabuurs, S.B.; Spronk, C.A.E.M.; Vuister, G.W.; Vriend, G.

    2006-01-01

    One of the major goals of structural genomics projects is to determine the three-dimensional structure of representative members of as many different fold families as possible. Comparative modeling is expected to fill the remaining gaps by providing structural models of homologs of the

  8. Pyrosequencing as a tool for the detection of Phytophthora species: error rate and risk of false Molecular Operational Taxonomic Units.

    Science.gov (United States)

    Vettraino, A M; Bonants, P; Tomassini, A; Bruni, N; Vannini, A

    2012-11-01

To evaluate the accuracy of pyrosequencing for the description of Phytophthora communities in terms of taxa identification and the risk of assigning false Molecular Operational Taxonomic Units (MOTUs). Pyrosequencing of Internal Transcribed Spacer 1 (ITS1) amplicons was used to describe the structure of a DNA mixture comprising eight Phytophthora spp. and Pythium vexans. Pyrosequencing resulted in 16,965 reads, detecting all species in the template DNA mixture. Reducing the ITS1 sequence identity threshold resulted in a decrease in the number of unmatched reads but a concomitant increase in the number of false MOTUs. The total error rate was 0.63% and comprised mainly mismatches (0.25%). Pyrosequencing of the ITS1 region is an efficient and accurate technique for the detection and identification of Phytophthora spp. in environmental samples. However, the risk of allocating false MOTUs, even when demonstrated to be low, may require additional validation with alternative detection methods. Phytophthora spp. are considered among the most destructive groups of invasive plant pathogens, affecting thousands of cultivated and wild plants worldwide. Simultaneous early detection of Phytophthora complexes in environmental samples offers a unique opportunity for the interception of known and unknown species along pathways of introduction, along with the identification of these organisms in invaded environments. © 2012 The Authors. Letters in Applied Microbiology © 2012 The Society for Applied Microbiology.

  9. Extent and Determinants of Error in Doctors' Prognoses in Terminally Ill Patients: Prospective Cohort Study

    OpenAIRE

    Lamont, Elizabeth; Christakis, Nicholas

    2000-01-01

    Objective: To describe doctors' prognostic accuracy in terminally ill patients and to evaluate the determinants of that accuracy. Design: Prospective cohort study. Setting: Five outpatient hospice programmes in Chicago. Participants: 343 doctors provided survival estimates for 468 terminally ill patients at the time of hospice referral. Main outcome measures: Patients' estimated and actual survival. Results: Median survival was 24 days. Only 20% (92/468) of predictions were acc...

  10. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    Science.gov (United States)

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-05-01

Population divergence impacts the degree of population stratification in genome-wide association studies. We aim to: (i) investigate the type-I error rate as a function of population divergence (F_ST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. The type-I error rate was investigated for single nucleotide polymorphisms (SNPs) with varying levels of F_ST between the ancestral European and African populations. The type-II error rate was investigated for a SNP characterized by a high value of F_ST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in the type-I error rate.

  11. Outlier removal, sum scores, and the inflation of the Type I error rate in independent samples t tests: the power of alternatives and recommendations.

    Science.gov (United States)

    Bakker, Marjan; Wicherts, Jelte M

    2014-09-01

    In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
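A hedged replication sketch of the core simulation: under the null hypothesis, remove outliers with |Z| > 2 within each group, then run an independent-samples t test. The skewed null distribution and sample size are illustrative choices, not the article's exact design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def drop_outliers(x, z=2.0):
    # Commonly used (and problematic) practice: exclude |Z| > z within a sample.
    return x[np.abs((x - x.mean()) / x.std(ddof=1)) <= z]

def type1_after_removal(n=50, reps=5000, alpha=0.05):
    rej = 0
    for _ in range(reps):
        # Skewed, sum-score-like null data; both groups identically distributed.
        g1 = rng.poisson(2, n).astype(float)
        g2 = rng.poisson(2, n).astype(float)
        p = stats.ttest_ind(drop_outliers(g1), drop_outliers(g2)).pvalue
        rej += p < alpha
    return rej / reps

# With skewed data, the rejection rate can exceed the nominal 5%.
print(type1_after_removal())
```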

  12. 42 CFR 418.306 - Determination of payment rates.

    Science.gov (United States)

    2010-10-01

    ...) of the Act. (b) Payment rates. The payment rates for routine home care and other services included in... October 21, 1990, through December 31, 1990, the payment rates for routine home care and other services... December 31, 1990: Routine home care $75.80 Continuous home care: Full rate for 24 hours 442.40 Hourly rate...

  13. Determinants of patient-rated and clinician-rated illness severity in schizophrenia.

    Science.gov (United States)

    Fervaha, Gagan; Takeuchi, Hiroyoshi; Agid, Ofer; Lee, Jimmy; Foussias, George; Remington, Gary

    2015-07-01

The contribution of specific symptoms to ratings of global illness severity in patients with schizophrenia is not well understood. The present study examined the clinical determinants of clinician and patient ratings of overall illness severity. This study included 1,010 patients with a DSM-IV diagnosis of schizophrenia who participated in the baseline visit of the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) study conducted between January 2001 and December 2004 and who had available symptom severity, side effect burden, cognition, and community functioning data. Both clinicians and patients completed the 7-point Clinical Global Impressions-Severity of Illness scale (CGI-S), the primary measure of interest in the present study. Symptoms were rated using the Positive and Negative Syndrome Scale and the Calgary Depression Scale for Schizophrenia, and functional status with the Quality of Life Scale. Neurocognition, insight, and medication-related side effects were also evaluated. Clinicians rated illness severity significantly higher than patients. Ratings of illness severity were significantly associated with positive, negative, disorganized, and depressive symptoms, as well as functional outcome. Attending to these determinants of perceived illness severity may enhance patient engagement in care and improve outcomes. ClinicalTrials.gov identifier: NCT00014001. © Copyright 2014 Physicians Postgraduate Press, Inc.

  14. [Determination of Hard Rate of Alfalfa (Medicago sativa L.) Seeds with Near Infrared Spectroscopy].

    Science.gov (United States)

    Wang, Xin-xun; Chen, Ling-ling; Zhang, Yun-wei; Mao, Pei-sheng

    2016-03-01

Alfalfa (Medicago sativa L.) is the most commonly grown forage crop in China due to its quality characteristics and high adaptability. However, alfalfa seed lots contain 20%-80% hard seeds, which cannot easily be distinguished from non-hard seeds and which cause a loss of seed utilization value and plant production. The experiment used 121 alfalfa seed samples, collected across different regions, harvest years and varieties. Thirty-one samples were artificially blended to hard seed rates ranging from 20% to 80% in order to establish a model for hard seed rate by near infrared spectroscopy (NIRS) with partial least squares (PLS). The objective of this study was to establish a model and to estimate the efficiency of NIRS for determining the hard seed rate of alfalfa seeds. The results showed that the correlation coefficient (R²cal) of the calibration model was 0.9816, the root mean square error of cross validation (RMSECV) was 5.32, and the ratio of prediction to deviation (RPD) was 3.58. The model presented satisfactory precision. The proposed method using NIRS technology is thus feasible for the identification and classification of hard seeds in alfalfa, providing a new nondestructive method for fast detection of hard seed rates in alfalfa.
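A sketch of a PLS calibration with cross-validated RMSECV and RPD like those reported, using scikit-learn in place of whatever chemometrics package the authors used. The spectra and reference hard seed rates below are randomly generated placeholders for illustration only:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
X = rng.normal(size=(31, 700))       # 31 samples x 700 NIR wavelength points
y = rng.uniform(20, 80, size=31)     # reference hard seed rate, 20-80%

pls = PLSRegression(n_components=8)
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()

rmsecv = float(np.sqrt(np.mean((y - y_cv) ** 2)))
rpd = float(y.std(ddof=1) / rmsecv)  # SD of reference values / RMSECV
print(f"RMSECV = {rmsecv:.2f}, RPD = {rpd:.2f}")
```

An RPD above roughly 3, as reported in the abstract, is conventionally read as a calibration suitable for quantitative prediction.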

  15. The effect of speaking rate on serial-order sound-level errors in normal healthy controls and persons with aphasia.

    Science.gov (United States)

    Fossett, Tepanta R D; McNeil, Malcolm R; Pratt, Sheila R; Tompkins, Connie A; Shuster, Linda I

Although many speech errors can be generated at either a linguistic or a motoric level of production, phonetically well-formed sound-level serial-order errors are generally assumed to result from disruption of phonologic encoding (PE) processes. An influential model of PE (Dell, 1986; Dell, Burger & Svec, 1997) predicts that speaking rate should affect the relative proportion of these serial-order sound errors (anticipations, perseverations, exchanges). These predictions have been extended to, and have special relevance for, persons with aphasia (PWA) because of the increased frequency with which speech errors occur and because their localization within the functional linguistic architecture may help in diagnosis and treatment. Supporting evidence regarding the effect of speaking rate on phonological encoding has been provided by studies using young normal language (NL) speakers and computer simulations. Limited data exist for older NL users and no group data exist for PWA. This study tested the phonologic encoding properties of Dell's model of speech production (Dell, 1986; Dell et al., 1997), which predicts that increasing speaking rate affects the relative proportion of serial-order sound errors (i.e., anticipations, perseverations, and exchanges). The effects of speech rate on the error ratios of anticipation/exchange (AE), anticipation/perseveration (AP) and vocal reaction time (VRT) were examined in 16 normal healthy controls (NHC) and 16 PWA without concomitant motor speech disorders. The participants were recorded performing a phonologically challenging (tongue twister) speech production task at their typical rate and at two faster speaking rates. A significant effect of increased rate was obtained for the AP but not the AE ratio. Significant effects of group and rate were obtained for VRT. Although the significant effect of rate for the AP ratio provided evidence that changes in speaking rate did affect PE, the results failed to support the model-derived predictions

  16. Quantification and Assessment of Interfraction Setup Errors Based on Cone Beam CT and Determination of Safety Margins for Radiotherapy.

    Directory of Open Access Journals (Sweden)

    Macarena Cubillos Mesías

Full Text Available To quantify interfraction patient setup errors for radiotherapy based on cone-beam computed tomography and suggest safety margins accordingly. Positioning vectors of pre-treatment cone-beam computed tomography for different treatment sites were collected (n = 9,504). For each patient group the total average and standard deviation were calculated and the overall mean, systematic and random errors as well as safety margins were determined. The systematic (and random) errors in the superior-inferior, left-right and anterior-posterior directions were: for prostate, 2.5 (3.0), 2.6 (3.9) and 2.9 (3.9) mm; for prostate bed, 1.7 (2.0), 2.2 (3.6) and 2.6 (3.1) mm; for cervix, 2.8 (3.4), 2.3 (4.6) and 3.2 (3.9) mm; for rectum, 1.6 (3.1), 2.1 (2.9) and 2.5 (3.8) mm; for anal, 1.7 (3.7), 2.1 (5.1) and 2.5 (4.8) mm; for head and neck, 1.9 (2.3), 1.4 (2.0) and 1.7 (2.2) mm; for brain, 1.0 (1.5), 1.1 (1.4) and 1.0 (1.1) mm; and for mediastinum, 3.3 (4.6), 2.6 (3.7) and 3.5 (4.0) mm. The CTV-to-PTV margins had the smallest values for brain (3.6, 3.7 and 3.3 mm) and the largest for mediastinum (11.5, 9.1 and 11.6 mm). For pelvic treatments the means (and standard deviations) were 7.3 (1.6), 8.5 (0.8) and 9.6 (0.8) mm. Systematic and random setup errors were smaller than 5 mm. The largest errors were found for organs with higher motion probability. The suggested safety margins were comparable to published values from previous, but often smaller, studies.
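A sketch of the margin arithmetic behind such tables: pooled systematic (Σ) and random (σ) setup errors per axis combined with the van Herk recipe 2.5Σ + 0.7σ. That this exact recipe was used in the study is an assumption, though it reproduces the brain and mediastinum margins quoted above; the inputs are the abstract's figures in mm:

```python
def ctv_to_ptv_margin(sigma_sys: float, sigma_rand: float) -> float:
    # van Herk et al. population margin: 2.5 * systematic + 0.7 * random.
    return 2.5 * sigma_sys + 0.7 * sigma_rand

sites = {  # (systematic, random) per axis: SI, LR, AP, from the abstract
    "brain": [(1.0, 1.5), (1.1, 1.4), (1.0, 1.1)],
    "mediastinum": [(3.3, 4.6), (2.6, 3.7), (3.5, 4.0)],
}
for site, axes in sites.items():
    margins = [ctv_to_ptv_margin(s, r) for s, r in axes]
    print(site, [f"{m:.1f} mm" for m in margins])
```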

  17. Error causes in the determination of the acid-base reactivity of oxi-hydroxides

    International Nuclear Information System (INIS)

    Duc, M.; Lefevre, G.; Fedoroff, M.

    2004-01-01

The long-term safety of radioactive waste repositories is based on the sorption of radionuclides from underground water onto engineered and natural barriers. For a quantitative prediction of migration in such barriers, we need accurate sorption data, and models should be in agreement with the sorption mechanism. Surface complexation is the model most often used for oxides and hydroxides. There are several types of surface complexation models, such as 1-pK and 2-pK monosite, 1-pK and 2-pK multisite, and pK-distribution models. Furthermore, there are several ways to describe the distribution of the electrostatic potential in the vicinity of the solid surface (CCM, DLM, BSM, TLM, ...). However, all these models are based on the acid-base properties of the superficial hydroxide or oxide groups of the solid. It is necessary to determine the surface charge versus pH (titration curves), the point of zero charge (pzc), the surface density of sites active towards protons and hydroxides in aqueous solutions, and the acid-base constants of these sites. These parameters are then used for calculating the sorption constants of ions other than protons and hydroxide ions. It is therefore important to determine these parameters very accurately. A comparison of acid-base parameters published in the literature shows a large scatter for the "same" oxides [1,2]. Several causes could explain this scatter. One reason is the use of different models, each electrostatic model leading to different values of site densities and constants. However, titration curves and pzc are independent of the model chosen. Another reason may be uncontrolled differences in the composition and purity of the oxides. Finally, other causes could be found in the titration procedure and in the solubility and stability of the solid. In order to understand more about the acid-base properties of oxides and about the origin of the discrepancies between measurements, we have performed a systematic experimental study of several

  18. Determinants of foreign direct investment in Lesotho: evidence from cointegration and error correction modeling

    Directory of Open Access Journals (Sweden)

    Malefa Rose Malefane

    2013-02-01

    Full Text Available Over the past decade, Lesotho has recorded a substantial increase in levels of foreign direct investment (FDI) inflow, part of it prompted by trade privileges. Building on the extant literature, this study provides an empirical analysis of determinants of FDI in Lesotho. The study looks at how macroeconomic stability, regulatory frameworks, political stability and market size affect FDI. The evidence from this study shows that some of the foreign enterprises in Lesotho are there to serve a bigger South African market. Also, the country has benefited from a more export-oriented investment promotion strategy. Critical issues however remain that must be addressed if the country is to attract more FDI and retain existing investors. These issues pertain to bureaucratic red tape, corruption and political instability.

  19. Determination of in vitro oxygen consumption rates for tumor cells

    International Nuclear Information System (INIS)

    Cardenas-Navia, L.I.; Moeller, B.J.; Kirkpatrick, J.P.; Laursen, T.A.; Dewhirst, M.W.

    2003-01-01

    To determine pO2 at the surface of a monolayer of confluent HCT 116 cells, and to then determine the consumption rate in vitro by examining the pO2 profile in the media above the cells. Materials and Methods: A recessed-tip polarographic oxygen microelectrode (diameter ~10 μm) was used to measure pO2 profiles of media above a confluent monolayer of HCT 116 human colon adenocarcinoma cells in a T25 flask exposed to a 95% air, 5% CO2 mixture. A two-dimensional finite element analysis of the diffusion equation was used to fit the data, thereby extracting a steady-state O2 consumption rate. The diffusion equation was solved for zeroth- and first-order expressions. No-flux boundary conditions were imposed on its bottom and side boundaries and experimental data were used for boundary conditions at the gas-media boundary. All flasks show an O2 gradient in the media, with a mean (SE) media layer of 1677 (147) μm and a mean pO2 at the cell layer/media interface of 44 (8) mm Hg (n=9). The pO2 gradient over the entire media layer is 630 (90) mm Hg/cm, equivalent to a consumption rate of 6.3 x 10^-4 (9.0 x 10^-5) mm Hg/s. The mean values for the zeroth- and first-order rate constants are 8.1 x 10^-9 (1.3 x 10^-9) g mol O2/cm3 s and 1.0 x 10^3 (0.46 x 10^3) /s, respectively. Control experiments in flasks containing no cells show slight gradients in pO2 of 38 (12) mm Hg/cm, resulting from some O2 diffusion through the flask into the surrounding water bath. An addition of 10^-3 M NaCN to the media results in a dramatic increase in pO2 at the cell layer, consistent with a shut-down in respiration. Under normal cell culture conditions there is an O2 gradient present in the media of cell culture systems, resulting in physiologic O2 concentrations at the cell layer, despite the non-physiologic O2 concentration of the gas mixture to which the cell culture system is exposed. This significant (p < 10^-6) O2 gradient in the media of cell culture systems is a result of cell O2 consumption.
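
    At steady state, with consumption confined to the cell monolayer, the media profile is linear and the flux into the cells is J = D·dC/dx. The sketch below illustrates that conversion; the diffusivity, solubility and probe readings are illustrative assumptions, not values from the record (which used a full finite element fit).

```python
# Minimal sketch: per-area O2 consumption from a linear pO2 profile.
# Assumed constants (not from the record): D and alpha for aqueous media.
import numpy as np

D = 2.7e-5          # cm^2/s, assumed O2 diffusivity in media
alpha = 1.3e-9      # mol O2 / (cm^3 * mmHg), assumed O2 solubility

depth_um = np.array([0, 400, 800, 1200, 1600])   # hypothetical probe depths
po2_mmhg = np.array([44, 69, 94, 120, 145])      # hypothetical readings

slope_mmhg_per_cm = np.polyfit(depth_um * 1e-4, po2_mmhg, 1)[0]
flux = D * alpha * slope_mmhg_per_cm             # mol O2 / (cm^2 * s) into cells
print(f"{slope_mmhg_per_cm:.0f} mmHg/cm -> {flux:.2e} mol O2 cm^-2 s^-1")
```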

  20. Oesophageal fistula/tritium-labelled water technique for determining dry matter intake and saliva secretion rates of grazing herbivores

    International Nuclear Information System (INIS)

    Luick, J.R.

    1982-01-01

    Seven assumptions on which the use of tritium-labelled water and oesophageal fistula depend, for determining the dry matter intake and saliva secretion rates of grazing herbivores, were tested experimentally. It is concluded that many of the possible sources of error can be ignored, but that a correction is necessary for the saliva dry matter content when calculating the dry matter of ingested food from fistula samples. (author)

  1. Germination rate is the significant characteristic determining coconut palm diversity.

    Science.gov (United States)

    Harries, Hugh C

    2012-01-01

    This review comes at a time when in vitro embryo culture techniques are being adopted for the safe exchange and cryo-conservation of coconut germplasm. In due course, laboratory procedures may replace the options that exist among standard commercial nursery germination techniques. These, in their turn, have supplanted traditional methods that are now forgotten or misunderstood. Knowledge of all germination options should help to ensure the safe regeneration of conserved material. This review outlines the many options for commercial propagation, recognizes the full significance of one particular traditional method and suggests that the diversity of modern cultivated coconut varieties has arisen because natural selection and domestic selection were associated with different rates of germination and other morphologically recognizable phenotypic characteristics. The review takes into account both the recalcitrant and the viviparous nature of the coconut. The ripe fruits that fall but do not germinate immediately and lose viability if dried for storage are contrasted with the bunches of fruit retained in the crown of the palm that may, in certain circumstances, germinate to produce seedlings high above ground level. Slow-germinating and quick-germinating coconuts have different patterns of distribution. The former predominate on tropical islands and coastlines that could be reached by floating when natural dispersal originally spread coconuts widely-but only where tides and currents were favourable-and then only to sea-level locations. Human settlers disseminated the domestic types even more widely-to otherwise inaccessible coastal sites not reached by floating-and particularly to inland and upland locations on large islands and continental land masses. This review suggests four regions where diversity has been determined by germination rates. Although recent DNA studies support these distinctions, further analyses of genetic markers related to fruit abscission and

  2. Germination rate is the significant characteristic determining coconut palm diversity

    Science.gov (United States)

    Harries, Hugh C.

    2012-01-01

    Rationale This review comes at a time when in vitro embryo culture techniques are being adopted for the safe exchange and cryo-conservation of coconut germplasm. In due course, laboratory procedures may replace the options that exist among standard commercial nursery germination techniques. These, in their turn, have supplanted traditional methods that are now forgotten or misunderstood. Knowledge of all germination options should help to ensure the safe regeneration of conserved material. Scope This review outlines the many options for commercial propagation, recognizes the full significance of one particular traditional method and suggests that the diversity of modern cultivated coconut varieties has arisen because natural selection and domestic selection were associated with different rates of germination and other morphologically recognizable phenotypic characteristics. The review takes into account both the recalcitrant and the viviparous nature of the coconut. The ripe fruits that fall but do not germinate immediately and lose viability if dried for storage are contrasted with the bunches of fruit retained in the crown of the palm that may, in certain circumstances, germinate to produce seedlings high above ground level. Significance Slow-germinating and quick-germinating coconuts have different patterns of distribution. The former predominate on tropical islands and coastlines that could be reached by floating when natural dispersal originally spread coconuts widely—but only where tides and currents were favourable—and then only to sea-level locations. Human settlers disseminated the domestic types even more widely—to otherwise inaccessible coastal sites not reached by floating—and particularly to inland and upland locations on large islands and continental land masses. This review suggests four regions where diversity has been determined by germination rates. Although recent DNA studies support these distinctions, further analyses of genetic markers

  3. What determines the exchange rate: economic factors or market sentiment?

    OpenAIRE

    Gregory P. Hopper

    1997-01-01

    Do economic factors influence exchange rates? Or does market sentiment play a bigger role? Are short-run exchange rates predictable? Greg Hopper reviews exchange-rate economics, focusing on what is predictable and what isn't. He also examines the practical implications of exchange-rate theories for currency option pricing, risk management, and portfolio selection.

  4. Determinants of intra-specific variation in basal metabolic rate.

    Science.gov (United States)

    Konarzewski, Marek; Książek, Aneta

    2013-01-01

    Basal metabolic rate (BMR) provides a widely accepted benchmark of metabolic expenditure for endotherms under laboratory and natural conditions. While most studies examining BMR have concentrated on inter-specific variation, relatively less attention has been paid to the determinants of within-species variation. Even fewer studies have analysed the determinants of within-species BMR variation corrected for the strong influence of body mass by appropriate means (e.g. ANCOVA). Here, we review recent advancements in studies on the quantitative genetics of BMR and organ mass variation, along with their molecular genetics. Next, we decompose BMR variation at the organ, tissue and molecular level. We conclude that within-species variation in BMR and its components have a clear genetic signature, and are functionally linked to key metabolic process at all levels of biological organization. We highlight the need to integrate molecular genetics with conventional metabolic field studies to reveal the adaptive significance of metabolic variation. Since comparing gene expressions inter-specifically is problematic, within-species studies are more likely to inform us about the genetic underpinnings of BMR. We also urge for better integration of animal and medical research on BMR; the latter is quickly advancing thanks to the application of imaging technologies and 'omics' studies. We also suggest that much insight on the biochemical and molecular underpinnings of BMR variation can be gained from integrating studies on the mammalian target of rapamycin (mTOR), which appears to be the major regulatory pathway influencing the key molecular components of BMR.

  5. Determination of rate constants for the oxygen reduction reaction

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A.; Walter, T.; Stimming, U. [Munich Technical Univ., Garching (Germany). Dept. of Physics

    2008-07-01

    The oxygen reduction reaction (ORR) in fuel cells is a complex and fundamental electrochemical reaction. However, greater insight is needed into this multi-electron reaction in order to develop efficient and innovative catalysts. The rotating ring disc electrode (RRDE) is a useful tool for studying reaction intermediates of the ORR and to better understand the reaction pathway. Carbon materials such as carbon nanofilaments-platelets (CNF-PL) have high electrical conductivity and may be considered for fuel cells. In particular, Pt and RuSe_x deposited on CNF-PL materials could act as efficient catalysts in fuel cells. This study used the RRDE to evaluate the oxygen reduction kinetics of these catalysts in oxygen-saturated, diluted sulphuric acid at room temperature. Kinetic data and hydrogen peroxide formation were determined by depositing a thin film of the catalyst on the Au disc. The values for the constants k1, k2 and k3 were obtained using diagnostic criteria and expressions to calculate the rate constants of the cathodic oxygen reduction reaction for RuSe on new carbon supports. A potential dependency of the constants k1 and k2 for RuSe_x/CNF-PL was observed. The transition of the Tafel slopes for this catalyst was obtained. 4 refs., 1 fig.

  6. Determining cardiac vagal threshold from short term heart rate complexity

    Directory of Open Access Journals (Sweden)

    Hamdan Rami Abou

    2016-09-01

    Full Text Available Evaluating individual aerobic exercise capacity is fundamental in sports and exercise medicine but associated with organizational and instrumental effort. Here, we extract an index related to common performance markers, the aerobic and anaerobic thresholds, enabling the estimation of exercise capacity from a conventional sports watch supporting beatwise heart rate tracking. Therefore, cardiac vagal threshold (CVT) was determined in 19 male subjects performing an incremental maximum exercise test. CVT varied around the anaerobic threshold (AnT) with a mean deviation of 7.9 ± 17.7 W. A high correspondence of the two thresholds was indicated by Bland-Altman plots with limits of agreement -27.5 W and 43.4 W. Additionally, CVT was strongly correlated with AnT (rp = 0.86, p < 0.001) and reproduced this marker well (rc = 0.81). We conclude that the cardiac vagal threshold derived from the compression entropy time course can be useful to assess physical fitness in an uncomplicated way.
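
    The Bland-Altman analysis reported above reduces to the bias of the paired differences and its 1.96·SD limits. A minimal sketch follows; the paired values are synthetic, generated only to mimic the reported bias (7.9 W) and spread (17.7 W), not the study's data.

```python
# Minimal sketch of a Bland-Altman agreement analysis (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
ant = rng.normal(180, 40, 19)            # anaerobic threshold, W (synthetic)
cvt = ant + rng.normal(7.9, 17.7, 19)    # cardiac vagal threshold, W (synthetic)

diff = cvt - ant
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"bias {bias:.1f} W, limits of agreement "
      f"[{bias - 1.96 * sd:.1f}, {bias + 1.96 * sd:.1f}] W")
```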

  7. Dating of sediments and determination of sedimentation rate. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Illus, E. [ed.]

    1998-08-01

    The Second NKS (Nordic Nuclear Safety Research)/EKO-1 Seminar was held at the Finnish Centre for Radiation and Nuclear Safety (STUK) on April 2-3, 1997. The work of the NKS is based on 4-year programmes; the current programme having been planned for the years 1994-1997. The programme comprises 3 major fields, one of them being environmental effects (EKO). Under this umbrella there are 4 main projects. The EKO-1 project deals with marine radioecology, in particular bottom sediments and sediment processes. The programme of the second seminar consisted of 8 invited lecturers and 6 other scientific presentations. Dating of sediments and determination of sedimentation rate are important in all types of sedimentological study and model calculations of fluxes of substances in the aquatic environment. In many cases these tasks have been closely related to radioecological studies undertaken in marine and fresh water environments, because they are often based on measured depth profiles of certain natural or artificial radionuclides present in the sediments. During recent decades Pb-210 has proved to be very useful in dating of sediments, but some other radionuclides have also been successfully used, e.g. Pu-239,240, Am-241 and Cs-137. The difficulties existing and problems involved in dating of sediments, as well as solutions for resolving these problems are discussed in the presentations

  8. Dating of sediments and determination of sedimentation rate. Proceedings

    International Nuclear Information System (INIS)

    Illus, E.

    1998-01-01

    The Second NKS (Nordic Nuclear Safety Research)/EKO-1 Seminar was held at the Finnish Centre for Radiation and Nuclear Safety (STUK) on April 2-3, 1997. The work of the NKS is based on 4-year programmes; the current programme having been planned for the years 1994-1997. The programme comprises 3 major fields, one of them being environmental effects (EKO). Under this umbrella there are 4 main projects. The EKO-1 project deals with marine radioecology, in particular bottom sediments and sediment processes. The programme of the second seminar consisted of 8 invited lecturers and 6 other scientific presentations. Dating of sediments and determination of sedimentation rate are important in all types of sedimentological study and model calculations of fluxes of substances in the aquatic environment. In many cases these tasks have been closely related to radioecological studies undertaken in marine and fresh water environments, because they are often based on measured depth profiles of certain natural or artificial radionuclides present in the sediments. During recent decades Pb-210 has proved to be very useful in dating of sediments, but some other radionuclides have also been successfully used, e.g. Pu-239,240, Am-241 and Cs-137. The difficulties existing and problems involved in dating of sediments, as well as solutions for resolving these problems are discussed in the presentations
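
    For the Pb-210 dating mentioned in both proceedings records, a common reduction is the constant initial concentration (CIC) model: A(x) = A0·exp(-λt), so t(x) = ln(A0/A(x))/λ and the sedimentation rate follows from depth over age. A minimal sketch under that assumption (the proceedings do not prescribe a single model, and the numbers below are illustrative):

```python
# Minimal sketch: Pb-210 dating under the CIC assumption; values illustrative.
import math

HALF_LIFE_PB210 = 22.3                        # years
lam = math.log(2) / HALF_LIFE_PB210

A0 = 250.0                                    # Bq/kg unsupported Pb-210 at surface (assumed)
A_at_depth = 60.0                             # Bq/kg at 10 cm depth (assumed)

age = math.log(A0 / A_at_depth) / lam         # years since deposition
rate = 10.0 / age                             # cm per year
print(f"age at 10 cm: {age:.0f} a, sedimentation rate: {rate:.2f} cm/a")
```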

  9. Rate-base determination through real-time efficiency assessment

    International Nuclear Information System (INIS)

    Eckhardt, J.H.; Bishop, T.W.

    1990-01-01

    One of the main problems with nuclear power is the extremely high construction costs and long schedules for plant construction and start-up. It is unlikely that utility executives will risk their companies' financial health by committing the necessary capital resources given the prevailing uncertainties. For new nuclear plants to play a major role in preventing future electric supply shortages, the financial uncertainties associated with high construction costs must be minimized. To contain costs and maintain reasonable schedules for future plants, the utilities, vendors, the US Nuclear Regulatory Commission (NRC), and the state regulatory commissions can make specific changes. One of the key factors to reduce uncertainty and improve cost and schedule performance is for the state regulatory commissions to change the method of determining reasonable plant costs and placing those costs in the rate base. Currently, most state regulatory commissions assess the reasonableness of costs only after completion of construction, resulting in years of financial uncertainty and untimely conclusions as to what should have been done better

  10. Evaluation of errors in prior mean and variance in the estimation of integrated circuit failure rates using Bayesian methods

    Science.gov (United States)

    Fletcher, B. C.

    1972-01-01

    The critical point of any Bayesian analysis concerns the choice and quantification of the prior information. The effects of prior data on a Bayesian analysis are studied. Comparisons of the maximum likelihood estimator, the Bayesian estimator, and the known failure rate are presented. The results of the many simulated trials are then analyzed to show the region of criticality for prior information being supplied to the Bayesian estimator. In particular, effects of prior mean and variance are determined as a function of the amount of test data available.
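
    The effect the abstract describes can be made concrete with a gamma prior conjugate to Poisson failure counts, a common choice for failure rates (the record does not specify the study's exact model): with little test data, the posterior is dominated by the prior mean and variance.

```python
# Minimal sketch: a gamma-Poisson Bayesian failure-rate estimate (assumed model).

def posterior_mean(prior_mean, prior_var, failures, exposure_hours):
    # moment-match the gamma prior: shape a = m^2/v, rate b = m/v
    a = prior_mean ** 2 / prior_var
    b = prior_mean / prior_var
    return (a + failures) / (b + exposure_hours)

# With sparse test data the prior dominates: an accurate prior works well ...
print(posterior_mean(1e-6, 1e-13, failures=1, exposure_hours=1e5))  # ~1.1e-6
# ... while a biased prior pulls the estimate away from the true rate (1e-6):
print(posterior_mean(5e-6, 1e-13, failures=1, exposure_hours=1e5))  # ~5.0e-6
```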

  11. INTERNATIONAL RATINGS IN DETERMINING THE FINANCIAL STABILITY OF UKRAINE

    Directory of Open Access Journals (Sweden)

    N. Plieshakova

    2014-01-01

    Full Text Available Problems of global financial stability and the position of Ukraine in international ratings are considered in the paper. The impact of rating assessments on the financial stability of the country in general is analysed.

  12. The Determinants of Real Exchange Rate Volatility in Nigeria

    African Journals Online (AJOL)

    Rahel

    magnitude of exchange rate volatility while the federal government exercises control of ... objectives in the area of price stability and economic growth. Volatile real ..... Exchange rate shocks and instability is a common feature of emerging.

  13. Determination of sedimentation rates and absorption coefficient of ...

    African Journals Online (AJOL)

    Zn2+ has the highest sedimentation rate of 5.10x10^-2 s^-1 while Ni2+ has the lowest sedimentation rate of 1.10x10^-3 s^-1. The rate of sedimentation of the metal carbonates decreased in the order: Zn2+ > Cd2+ > Cu2+ > Co2+ > Ni2+. The order ...

  14. The determinants of real exchange rate volatility in Nigeria | Ajao ...

    African Journals Online (AJOL)

    This study recommends that the central monetary authority should institute policies that will minimize the magnitude of exchange rate volatility while the federal government exercises control of viable macroeconomic variables which have direct influence on exchange rate fluctuation. Keywords: Exchange Rate, Volatility, ...

  15. Galaxy Cluster Shapes and Systematic Errors in the Hubble Constant as Determined by the Sunyaev-Zel'dovich Effect

    Science.gov (United States)

    Sulkanen, Martin E.; Joy, M. K.; Patel, S. K.

    1998-01-01

    Imaging of the Sunyaev-Zel'dovich (S-Z) effect in galaxy clusters combined with cluster plasma x-ray diagnostics can measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. This effect remains present for clusters that are otherwise chosen to avoid complications for the S-Z and x-ray analysis, such as plasma temperature variations, cluster substructure, or cluster dynamical evolution. In this paper we present a study of the systematic errors in the value of H_0, as determined by the x-ray and S-Z properties of a theoretical sample of triaxial isothermal 'beta-model' clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. The model clusters are not generated as ellipsoids of rotation, but have three independent 'core radii', as well as a random orientation to the plane of the sky.

  16. Transfer of Technology in Determining Lowest Achievable Emission Rate (LAER)

    Science.gov (United States)

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  17. Determination of Lowest Achievable Emission Rate for Coors Container Corporation

    Science.gov (United States)

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  18. Calculation error of collective effective dose of external exposure during works at 'Shelter' object

    International Nuclear Information System (INIS)

    Batij, V.G.; Derengovskij, V.V.; Kochnev, N.A.; Sizov, A.A.

    2001-01-01

    Collective effective dose (CED) error assessment is the most important task for optimal planning of works in the 'Shelter' object conditions. The main components of the CED error are as follows: error in determining the conversion factor from exposure dose to equivalent dose; error in determining working hours in 'Shelter' object conditions; error in determining the dose rate at workplaces; and additional CED error introduced by shielding of workplaces
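
    If the listed components are treated as independent relative uncertainties, they combine in quadrature. That independence, and the magnitudes below, are assumptions of this sketch, not statements from the record:

```python
# Minimal sketch: quadrature combination of the CED error components above,
# assuming independence; the relative errors are illustrative placeholders.
import math

components = {
    "exposure-to-equivalent-dose conversion factor": 0.15,
    "working hours under 'Shelter' conditions": 0.10,
    "dose rate at workplaces": 0.20,
    "workplace shielding": 0.08,
}
relative_ced_error = math.sqrt(sum(v * v for v in components.values()))
print(f"combined relative CED error: {relative_ced_error:.0%}")
```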

  19. Exchange Rate Determinants in Russia; 1992-1993

    OpenAIRE

    Vincent Koen; Eric Meyermans

    1994-01-01

    This paper examines the evolution of the exchange rate of the ruble vis-à-vis the U.S. dollar from exchange rate unification, in July 1992, to the end of 1993. The expected and actual paths of the exchange rate are related to the exchange and trade regime and to the stance of financial and exchange rate policies. An econometric analysis based on weekly data is offered, which suggests that monetary factors have a significant impact on the short run behavior of the exchange rate.

  20. Accurate determination of rates from non-uniformly sampled relaxation data

    Energy Technology Data Exchange (ETDEWEB)

    Stetz, Matthew A.; Wand, A. Joshua, E-mail: wand@upenn.edu [University of Pennsylvania Perelman School of Medicine, Johnson Research Foundation and Department of Biochemistry and Biophysics (United States)

    2016-08-15

    The application of non-uniform sampling (NUS) to relaxation experiments traditionally used to characterize the fast internal motion of proteins is quantitatively examined. Experimentally acquired Poisson-gap sampled data reconstructed with iterative soft thresholding are compared to regular sequentially sampled (RSS) data. Using ubiquitin as a model system, it is shown that 25 % sampling is sufficient for the determination of quantitatively accurate relaxation rates. When the sampling density is fixed at 25 %, the accuracy of rates is shown to increase sharply with the total number of sampled points until eventually converging near the inherent reproducibility of the experiment. Perhaps contrary to some expectations, it is found that accurate peak height reconstruction is not required for the determination of accurate rates. Instead, inaccuracies in rates arise from inconsistencies in reconstruction across the relaxation series that primarily manifest as a non-linearity in the recovered peak height. This indicates that the performance of an NUS relaxation experiment cannot be predicted from comparison of peak heights using a single RSS reference spectrum. The generality of these findings was assessed using three alternative reconstruction algorithms, eight different relaxation measurements, and three additional proteins that exhibit varying degrees of spectral complexity. From these data, it is revealed that non-linearity in peak height reconstruction across the relaxation series is strongly correlated with errors in NUS-derived relaxation rates. Importantly, it is shown that this correlation can be exploited to reliably predict the performance of an NUS-relaxation experiment by using three or more RSS reference planes from the relaxation series. The RSS reference time points can also serve to provide estimates of the uncertainty of the sampled intensity, which for a typical relaxation time series incurs no penalty in total acquisition time.
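
    The rate determination underlying the comparison above is a fit of I(t) = I0·exp(-R·t) to peak heights across the relaxation series. A minimal sketch by nonlinear least squares; the delays and heights are illustrative, not the paper's data:

```python
# Minimal sketch: extract a relaxation rate R from peak heights by fitting
# I(t) = I0 * exp(-R * t); inputs are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

delays = np.array([0.01, 0.03, 0.05, 0.09, 0.13, 0.17])    # s
heights = np.array([0.86, 0.64, 0.47, 0.26, 0.14, 0.08])   # peak heights (a.u.)

def model(t, i0, r):
    return i0 * np.exp(-r * t)

(i0, r), cov = curve_fit(model, delays, heights, p0=(1.0, 10.0))
print(f"R = {r:.1f} 1/s +/- {np.sqrt(cov[1, 1]):.1f}")      # ~15 1/s here
```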

  1. Modification of an impulse-factoring orbital transfer technique to account for orbit determination and maneuver execution errors

    Science.gov (United States)

    Kibler, J. F.; Green, R. N.; Young, G. R.; Kelly, M. G.

    1974-01-01

    A method has previously been developed to satisfy terminal rendezvous and intermediate timing constraints for planetary missions involving orbital operations. The method uses impulse factoring, in which a two-impulse transfer is divided into three or four impulses which add one or two intermediate orbits. The periods of the intermediate orbits and the number of revolutions in each orbit are varied to satisfy timing constraints. Techniques are developed to retarget the orbital transfer in the presence of orbit-determination and maneuver-execution errors. Sample results indicate that the nominal transfer can be retargeted with little change in either the magnitude (Delta V) or location of the individual impulses. Additionally, the total Delta V required for the retargeted transfer is little different from that required for the nominal transfer. A digital computer program developed to implement the techniques is described.

  2. Bit-error-rate performance analysis of self-heterodyne detected radio-over-fiber links using phase and intensity modulation

    DEFF Research Database (Denmark)

    Yin, Xiaoli; Yu, Xianbin; Tafur Monroy, Idelfonso

    2010-01-01

    We theoretically and experimentally investigate the performance of two self-heterodyne detected radio-over-fiber (RoF) links employing phase modulation (PM) and quadrature biased intensity modulation (IM), in terms of bit-error-rate (BER) and optical signal-to-noise-ratio (OSNR). In both links, self...

  3. Downlink Error Rates of Half-duplex Users in Full-duplex Networks over a Laplacian Inter-User Interference Limited and EGK fading

    KAUST Repository

    Soury, Hamza; Elsawy, Hesham; Alouini, Mohamed-Slim

    2017-01-01

    This paper develops a mathematical framework to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). The developed model is used to motivate long term pairing for users that have

  4. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations

    NARCIS (Netherlands)

    Derks, E. M.; Zwinderman, A. H.; Gamazon, E. R.

    2017-01-01

    Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates;

  5. Increased error rates in preliminary reports issued by radiology residents working more than 10 consecutive hours overnight.

    Science.gov (United States)

    Ruutiainen, Alexander T; Durand, Daniel J; Scanlon, Mary H; Itri, Jason N

    2013-03-01

    To determine if the rate of major discrepancies between resident preliminary reports and faculty final reports increases during the final hours of consecutive 12-hour overnight call shifts. Institutional review board exemption status was obtained for this study. All overnight radiology reports interpreted by residents on-call between January 2010 and June 2010 were reviewed by board-certified faculty and categorized as major discrepancies if they contained a change in interpretation with the potential to impact patient management or outcome. Initial determination of a major discrepancy was at the discretion of individual faculty radiologists based on this general definition. Studies categorized as major discrepancies were secondarily reviewed by the residency program director (M.H.S.) to ensure consistent application of the major discrepancy designation. Multiple variables associated with each report were collected and analyzed, including the time of preliminary interpretation, time into shift study was interpreted, volume of studies interpreted during each shift, day of the week, patient location (inpatient or emergency department), block of shift (2-hour blocks for 12-hour shifts), imaging modality, patient age and gender, resident identification, and faculty identification. Univariate risk factor analysis was performed to determine the optimal data format of each variable (ie, continuous versus categorical). A multivariate logistic regression model was then constructed to account for confounding between variables and identify independent risk factors for major discrepancies. We analyzed 8062 preliminary resident reports with 79 major discrepancies (1.0%). There was a statistically significant increase in major discrepancy rate during the final 2 hours of consecutive 12-hour call shifts. Multivariate analysis confirmed that interpretation during the last 2 hours of 12-hour call shifts (odds ratio (OR) 1.94, 95% confidence interval (CI) 1.18-3.21), cross

  6. 47 CFR 54.607 - Determining the rural rate.

    Science.gov (United States)

    2010-10-01

    .... (a) The rural rate shall be the average of the rates actually being charged to commercial customers... programs, charged for the same or similar services in that rural area over the same distance as the... into account anticipated and actual demand for telecommunications services by all customers who will...

  7. Estimated Interest Rate Rules: Do they Determine Determinacy Properties?

    DEFF Research Database (Denmark)

    Jensen, Henrik

    2011-01-01

    I demonstrate that econometric estimations of nominal interest rate rules may tell little, if anything, about an economy's determinacy properties. In particular, correct inference about the interest-rate response to inflation provides no information about determinacy. Instead, it could reveal...

  8. Factors which determine the swelling rate of austenitic stainless steels

    International Nuclear Information System (INIS)

    Garner, F.A.; Wolfer, W.G.

    1983-01-01

    Once void nucleation subsides, the swelling rate of many austenitic alloys becomes rather insensitive to variables that control the transient regime of swelling. Models are presented which describe the roles of nickel, chromium and silicon in void nucleation. The relative insensitivity of steady-state swelling to temperature, displacement rate and composition is also discussed

  9. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    Science.gov (United States)

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim results show promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
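
    The quantity whose 50% threshold is discussed above is the conditional power under the current trend at an interim look. A minimal sketch using the standard B-value construction (a textbook formulation, not code from the cited papers):

```python
# Minimal sketch: conditional power under the current trend at an interim look.
from scipy.stats import norm

def conditional_power(z_interim, info_frac, alpha=0.025):
    b = z_interim * info_frac ** 0.5        # B-value at the interim
    theta = z_interim / info_frac ** 0.5    # drift estimated from the trend
    z_crit = norm.ppf(1 - alpha)
    return 1 - norm.cdf((z_crit - b - theta * (1 - info_frac))
                        / (1 - info_frac) ** 0.5)

# Halfway through the trial with z = 1.2: CP ~ 0.36, i.e. below 50%,
# the region explored in (Stat Med 30:3267-3284, 2011).
print(f"{conditional_power(1.2, 0.5):.2f}")
```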

  10. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Directory of Open Access Journals (Sweden)

    Sharmila Vaz

    Full Text Available The social skills rating system (SSRS is used to assess social skills and competence in children and adolescents. While its characteristics based on United States samples (US are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME of the SSRS secondary student form (SSF in a sample of Year 7 students (N = 187, from five randomly selected public schools in Perth, western Australia. Internal consistency (IC of the total scale and most subscale scores (except empathy on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports, not just student as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID.

  11. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Science.gov (United States)

    Vaz, Sharmila; Parsons, Richard; Passmore, Anne Elizabeth; Andreou, Pantelis; Falkmer, Torbjörn

    2013-01-01

    The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States samples (US) are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187), from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports), not just student as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).

  12. Noninvasive Biosensor Algorithms for Continuous Metabolic Rate Determination

    Data.gov (United States)

    National Aeronautics and Space Administration — Our collaborators in the JSC Cardiovascular lab implemented a technique to determine stroke volume during exercise using ultrasound imaging. Data collection using...

  13. HEART RATE VARIABILITY AND BODY COMPOSITION AS VO2MAX DETERMINANTS

    Directory of Open Access Journals (Sweden)

    Henry Humberto León-Ariza

    Full Text Available ABSTRACT Introduction: The maximum oxygen consumption (VO2max) is the gold standard in the cardiorespiratory endurance assessment. Objective: This study aimed to develop a mathematical model that contains variables to determine the VO2max of sedentary people. Methods: Twenty participants (10 men and 10 women) with a mean age of 19.8±1.77 years were included. For each participant, body composition (percentage of fat and muscle), heart rate variability (HRV) at rest (supine and standing), and VO2max were evaluated through an indirect test on a cycloergometer. A multivariate linear regression model was developed from the data obtained, and the model assumptions were verified. Results: Using the data obtained, including percentage of fat (F), percentage of muscle (M), percentage of power at very low frequency (VLF), α-value of the detrended fluctuation analysis (DFAα1), heart rate (HR) in the resting standing position, and age of the participants, a model was established for men, which was expressed as VO2max = 4.216 + (Age*0.153) + (F*0.110) - (M*0.053) - (VLF*0.649) - (DFAα1*2.441) - (HR*0.014), with R2 = 0.965 and standard error = 0.146 L/min. For women, the model was expressed as VO2max = 1.947 - (Age*0.047) + (F*0.024) + (M*0.054) + (VLF*1.949) - (DFAα1*0.424) - (HR*0.019), with R2 = 0.987 and standard error = 0.077 L/min. Conclusion: The obtained model demonstrated the influence exerted by body composition, the autonomic nervous system, and age in the prediction of VO2max.
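
    A direct transcription of the two reported equations follows. Because the record's parenthesis placement is partly garbled, the grouping (simple weighted sums) and the predictor units (e.g. VLF expressed as a fraction rather than a percentage) are assumptions of this sketch.

```python
# Transcription sketch of the reported regression equations (grouping and
# predictor units are assumptions; coefficients are as quoted in the record).

def vo2max_men(age, f, m, vlf, dfa_a1, hr):
    """VO2max in L/min (reported R^2 = 0.965, SE = 0.146 L/min)."""
    return (4.216 + age * 0.153 + f * 0.110 - m * 0.053
            - vlf * 0.649 - dfa_a1 * 2.441 - hr * 0.014)

def vo2max_women(age, f, m, vlf, dfa_a1, hr):
    """VO2max in L/min (reported R^2 = 0.987, SE = 0.077 L/min)."""
    return (1.947 - age * 0.047 + f * 0.024 + m * 0.054
            + vlf * 1.949 - dfa_a1 * 0.424 - hr * 0.019)

# Illustrative inputs: age 20, 22% fat, 40% muscle, VLF 0.3, DFAa1 1.1, HR 75
print(f"{vo2max_men(20, 22, 40, 0.3, 1.1, 75):.2f} L/min")   # ~3.65 L/min
```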

  14. Resident Physicians' Clinical Training and Error Rate: The Roles of Autonomy, Consultation, and Familiarity with the Literature

    Science.gov (United States)

    Naveh, Eitan; Katz-Navon, Tal; Stern, Zvi

    2015-01-01

    Resident physicians' clinical training poses unique challenges for the delivery of safe patient care. Residents face special risks of involvement in medical errors since they have tremendous responsibility for patient care, yet they are novice practitioners in the process of learning and mastering their profession. The present study explores…

  15. Exchange rate determination and the flaws of mainstream monetary theory

    Directory of Open Access Journals (Sweden)

    HEINER FLASSBECK

    2018-03-01

    Full Text Available ABSTRACT Developing countries in general need flexibility and a sufficient number of instruments to prevent excessive volatility. Evidence does not support the orthodox belief that, with free floating, international financial markets will perform that role by smoothly adjusting exchange rates to their “equilibrium” level. In reality, exchange rates under a floating regime have proved to be highly unstable, leading to long spells of misalignment. The experience with hard pegs has not been satisfactory either: the exchange rate could not be corrected in cases of external shocks or misalignment. Given this experience, “intermediate” regimes are preferable when there is instability in international financial markets.

  16. Determination of surface dose rate for cloisonne using thermoluminescent dosimeters

    Energy Technology Data Exchange (ETDEWEB)

    Hengyuan, Zhao; Yulian, Zhang

    1985-07-01

    In this paper, the measuring method and results for the surface dose rate of cloisonne using CaSO4:Dy-Teflon foil dosimeters are described. The surface dose rates of all products are below 0.015 mrad/h. These products comprise 42 sorts of jewellery and 20 sets of wares (such as vases, plates, ash-trays, etc.). Most of the data fall within the range of natural background. For comparison, some jewellery from Taiwan and 3 vases from Japan were measured. The highest surface dose rate of 0.78 mrad/h is due to the necklace jewellery from Taiwan.

  17. Genetic determination of mortality rate in Danish dairy cows

    DEFF Research Database (Denmark)

    Maia, Rafael Pimentel; Ask, Birgitte; Madsen, Per

    2014-01-01

    : a sire random component with pedigree representing the sire genetic effects and a herd-year-season component. Moreover, the level of heterozygosity and the sire breed proportions were included in the models as covariates in order to account for potential non-additive genetic effects due to the massive...... introduction of genetic material from other populations. The correlations between the sire components for death rate and slaughter rate were negative and small for the 3 populations, suggesting the existence of specific genetic mechanisms for each culling reason and common concurrent genetic mechanisms...

  18. Determination of the void nucleation rate from void size distributions

    International Nuclear Information System (INIS)

    Brailsford, A.D.

    1977-01-01

    A method of estimating the void nucleation rate from one void size distribution and from observation of the maximum void radius at prior times is proposed. Implicit in the method are the assumptions that both variations in the critical radius with dose and vacancy thermal emission processes during post-nucleation quasi-steady-state growth may be neglected. (Auth.)

  19. The Determination of Rate-Limiting Steps during Soot Formation

    Science.gov (United States)

    1990-06-08

    and a CH3N precursor of acetonitrile such as 2H-aziridine, although other intermediates of lower energy such as ketenimine have been identified on the...precursor of acetonitrile such as 2H-aziridine or ketenimine. Experimentally it was found that the overall rate of disappearance of pyrrole is first order

  20. Determination of sedimentation rates and absorption coefficient of ...

    African Journals Online (AJOL)

    DR. MIKE HORSFALL

    particles have pores that can absorb radiation. Gamma rays have been used to study the absorption coefficients of cobalt(II) insoluble compounds (Essien and Ekpe, 1998), densities of marine sediments. (Gerland and Villinger, 1995) and soil particle-size distribution (Vaz et al., 1992). In this study, sedimentation rates of ...

  1. determination of design inflow rate in furrow irrigation using ...

    African Journals Online (AJOL)

    Dr Obe

    taken as the minimum inflow rate which gave rise to a minimum water application efficiency of. 60% and a minimum distribution uniformity of 75%. It is recommended that the procedure described in this work is useful for the modification of existing furrow irrigation systems and the establishment of new ones. Also, the design ...

  2. The determinants of exchange rates and the movements of EUR/RON exchange rate via non-linear stochastic processes

    Directory of Open Access Journals (Sweden)

    Petrică Andreea-Cristina

    2017-07-01

    Full Text Available Modeling exchange rate volatility became an important topic for research debate starting with 1973, when many countries switched to floating exchange rate system. In this paper, we focus on the EUR/RON exchange rate both as an economic measure and present the implied economic links, and also as a financial investment and analyze its movements and fluctuations through two volatility stochastic processes: the Standard Generalized Autoregressive Conditionally Heteroscedastic Model (GARCH) and the Exponential Generalized Autoregressive Conditionally Heteroscedastic Model (EGARCH). The objective of the conditional variance processes is to capture dependency in the return series of the EUR/RON exchange rate. On this account, analyzing exchange rates could be seen as the input for economic decisions regarding Romanian macroeconomics - the exchange rates being influenced by many factors such as: interest rates, inflation, trading relationships with other countries (imports and exports), or investments - portfolio optimization, risk management, asset pricing. Therefore, we talk about political stability and economic performance of a country that represents a link between the two types of inputs mentioned above and influences both the macroeconomics and the investments. Based on time-varying volatility, we examine implied volatility of daily returns of the EUR/RON exchange rate using the standard GARCH model and the asymmetric EGARCH model, whose parameters are estimated through the maximum likelihood method and whose error terms follow two distributions (Normal and Student's t). The empirical results show EGARCH(2,1) with asymmetric order 2 and Student's t error terms distribution performs better than all the estimated standard GARCH models (GARCH(1,1), GARCH(1,2), GARCH(2,1) and GARCH(2,2)). This conclusion is supported by the major advantage of the EGARCH model compared to the GARCH model which consists in allowing good and bad news having different impact on the
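
    The comparison described above can be reproduced with the Python "arch" package (the record does not name any software, so this is a hedged sketch). The series below is synthetic; in practice it would be replaced by actual EUR/RON closing rates.

```python
# Minimal sketch: EGARCH(2,1) with asymmetric order 2 and Student's t errors
# versus a plain GARCH(1,1), compared by AIC; data are synthetic placeholders.
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(0)
eur_ron = pd.Series(4.5 * np.exp(np.cumsum(rng.normal(0, 0.003, 2000))))
returns = 100 * np.log(eur_ron).diff().dropna()

egarch = arch_model(returns, vol='EGARCH', p=2, o=2, q=1, dist='t')
garch = arch_model(returns, vol='GARCH', p=1, q=1, dist='t')
for name, m in [("EGARCH(2,1)+t", egarch), ("GARCH(1,1)+t", garch)]:
    res = m.fit(disp='off')
    print(name, "AIC:", round(res.aic, 1))   # lower AIC = preferred fit
```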

  3. Determination of transverse phase-space and momentum error from size measurements along the 50-MeV H- RCS injection line

    International Nuclear Information System (INIS)

    Cho, Y.; Crosbie, E.A.; Takeda, H.

    1981-01-01

    The 50-MeV H- injection line for the RCS at Argonne National Laboratory has 16 quadrupole and eight bending magnets. Horizontal and vertical profiles can be obtained at 12 wire scanner positions. Size information from these profiles can be used to determine the three ellipse parameters in each plane required to describe the transverse phase space. Those locations that have dispersion permit the momentum error to be used as a fourth fitting parameter. The assumed accuracy of the size measurements provides an error matrix that predicts the rms errors of the fitted parameters

  4. Determination of transverse phase-space and momentum error from size measurements along the 50-MeV H/sup -/ RCS injection line

    International Nuclear Information System (INIS)

    Cho, Y.; Crosbie, E.A.; Takeda, H.

    1981-01-01

    The 50-MeV H- injection line for the RCS at Argonne National Laboratory has 16 quadrupole and eight bending magnets. Horizontal and vertical profiles can be obtained at 12 wire scanner positions. Size information from these profiles can be used to determine the three ellipse parameters in each plane required to describe the transverse phase space. Those locations that have dispersion permit the momentum error to be used as a fourth fitting parameter. The assumed accuracy of the size measurements provides an error matrix that predicts the rms errors of the fitted parameters. 3 refs
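
    The fit described in these two records is linear: each measured beam size squared is a linear combination of the unknown sigma-matrix elements and the squared momentum spread. A minimal sketch of that least-squares step follows; the transfer-matrix elements, dispersions and sizes are illustrative placeholders, not the line's actual optics.

```python
# Minimal sketch: size^2 = r11^2*s11 + 2*r11*r12*s12 + r12^2*s22 + D^2*(dp/p)^2,
# one row per wire scanner, solved by linear least squares (illustrative data).
import numpy as np

r11 = np.array([1.0, 0.6, -0.4, -1.1, 0.2, 0.9])   # transfer-matrix elements
r12 = np.array([0.0, 2.1, 3.0, 1.5, -2.2, 1.0])    # m
disp = np.array([0.0, 0.3, 0.8, 1.2, 0.5, 0.9])    # dispersion, m

A = np.column_stack([r11**2, 2 * r11 * r12, r12**2, disp**2])
sizes_sq = np.array([4.0, 9.8, 13.1, 11.9, 7.3, 8.6]) * 1e-6   # m^2, measured

(s11, s12, s22, dp2), *_ = np.linalg.lstsq(A, sizes_sq, rcond=None)
emit = np.sqrt(max(s11 * s22 - s12**2, 0.0))        # rms emittance, m*rad
print(f"emittance ~ {emit:.2e} m*rad, dp/p ~ {np.sqrt(max(dp2, 0.0)):.1e}")
```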

  5. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high-resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard-decision (i.e., 1-bit error flags) and soft-decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  6. Throughput Estimation Method in Burst ACK Scheme for Optimizing Frame Size and Burst Frame Number Appropriate to SNR-Related Error Rate

    Science.gov (United States)

    Ohteru, Shoko; Kishine, Keiji

    The Burst ACK scheme enhances effective throughput by reducing ACK overhead when a transmitter sends sequentially multiple data frames to a destination. IEEE 802.11e is one such example. The size of the data frame body and the number of burst data frames are important burst transmission parameters that affect throughput. The larger the burst transmission parameters are, the better the throughput under error-free conditions becomes. However, large data frames could reduce throughput under error-prone conditions caused by signal-to-noise ratio (SNR) deterioration. If the throughput can be calculated from the burst transmission parameters and error rate, the appropriate ranges of the burst transmission parameters could be narrowed down, and the necessary buffer size for storing transmit data or received data temporarily could be estimated. In this paper, we present a method that features a simple algorithm for estimating the effective throughput from the burst transmission parameters and error rate. The calculated throughput values agree well with the measured ones for actual wireless boards based on the IEEE 802.11-based original MAC protocol. We also calculate throughput values for larger values of the burst transmission parameters outside the assignable values of the wireless boards and find the appropriate values of the burst transmission parameters.
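
    The trade-off the paper exploits can be sketched with a simple goodput estimate: per-frame success probability falls as (1 - BER)^(frame bits), while per-burst overhead is amortized over more frames. The overhead and timing constants below are illustrative assumptions, not the paper's parameters or algorithm.

```python
# Minimal sketch: effective throughput versus frame-body size and burst number.

def effective_throughput(body_bytes, burst_n, ber, rate_bps=54e6,
                         overhead_bytes=36, ack_overhead_s=110e-6):
    frame_bits = 8 * (body_bytes + overhead_bytes)
    p_frame_ok = (1.0 - ber) ** frame_bits              # per-frame success prob.
    t_burst = burst_n * frame_bits / rate_bps + ack_overhead_s
    return burst_n * 8 * body_bytes * p_frame_ok / t_burst   # useful bit/s

for body in (256, 1024, 4096):
    print(body, f"{effective_throughput(body, burst_n=8, ber=1e-5):.3g}")
# Larger frames win on a clean channel but lose as BER grows, which is
# exactly the trade-off the estimation method is meant to resolve.
```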

  7. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n - 1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  8. Determination of LEDs degradation with entropy generation rate

    Science.gov (United States)

    Cuadras, Angel; Yao, Jiaqiang; Quilez, Marcos

    2017-10-01

    We propose a method to assess the degradation and aging of light emitting diodes (LEDs) based on the irreversible entropy generation rate. We degraded several LEDs and monitored their entropy generation rate (Ṡ) in accelerated tests. We compared the thermoelectrical results with the evolution of optical light emission during degradation. We find a good relationship between aging and Ṡ(t), because Ṡ is related both to device parameters and to optical performance. We propose a threshold of Ṡ(t) as a reliable damage indicator of LED end-of-life that can avoid the need to perform optical measurements to assess optical aging. The method goes beyond the typical statistical laws for lifetime prediction provided by manufacturers. We tested different LED colors and electrical stresses to validate the electrical LED model and we analyzed the degradation mechanisms of the devices.
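
    For a dissipative device, a first-order estimate of the entropy generation rate is Ṡ = P_dissipated / T. The sketch below uses that simplification with dissipated power taken as electrical input minus emitted optical power; the record does not give the authors' exact expression, and all numbers are illustrative.

```python
# Minimal sketch: LED entropy generation rate as S_dot = P_dissipated / T
# (a simplification assumed here, not the paper's exact model).

def entropy_rate(voltage_v, current_a, optical_w, temp_k):
    p_dissipated = voltage_v * current_a - optical_w   # W converted to heat
    return p_dissipated / temp_k                       # W/K

# Fresh vs. aged device at the same drive point (illustrative numbers):
print(f"{entropy_rate(3.2, 0.35, 0.40, 330):.2e} W/K")   # fresh
print(f"{entropy_rate(3.4, 0.35, 0.25, 345):.2e} W/K")   # degraded: higher S_dot
```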

  9. Rates and determinants of peripartum and puerperal anemia in ...

    African Journals Online (AJOL)

    Methods: A prospective longitudinal study involving women with uncomplicated singleton pregnancies who were recruited at term at two tertiary maternity centers and were followed up with the determination of hemoglobin and ferritin concentrations till 6 weeks after delivery. Data were analyzed with descriptive and ...

  10. Determination of dose rates from natural radionuclides in dental materials

    International Nuclear Information System (INIS)

    Veronese, I.; Guzzi, G.; Giussani, A.; Cantone, M.C.; Ripamonti, D.

    2006-01-01

    Different types of materials used for dental prosthetic restoration, including feldspathic ceramics, glass ceramics, zirconia-based ceramics, alumina-based ceramics, and resin-based materials, were investigated with regard to their content of natural radionuclides by means of thermoluminescence beta dosimetry and gamma spectrometry. The gross beta dose rate from feldspathic and glass ceramics was about ten times higher than the background measurement, whereas resin-based materials generated a negligible beta dose rate, similarly to natural tooth samples. The specific activity of uranium and thorium was significantly below the levels found in the period when addition of uranium to dental porcelain materials was still permitted. The high beta dose levels observed in feldspathic porcelains and glass ceramics are thus mainly ascribable to 40K, naturally present in these specimens. Although the measured values are below the recommended limits, the results indicate that patients with prostheses are subject to higher dose levels than other members of the population. Alumina- and zirconia-based ceramics might be a promising alternative, as they have generally lower beta dose rates than the conventional porcelain materials. However, the dosimetry results, which imply the presence of inhomogeneously distributed clusters of radionuclides in the sample matrix, and the still unsuitable structural properties call for further optimization of these materials

  11. Experimental Determination of the Cosmogenic Ar Production Rate From Ca

    Science.gov (United States)

    Niedermann, S.; Schäfer, J. M.; Wieler, R.; Naumann, R.

    2005-12-01

    Cosmogenic 38Ar is produced in terrestrial surface rocks by spallation of target nuclides, in particular K and Ca. Though the presence of cosmogenic Ar in Ca-rich minerals has been demonstrated earlier [1], it has proven difficult to establish its production rate. To circumvent problems connected to 36Ar production by 35Cl neutron capture and different production rates from K and Ca, we have analyzed the noble gases in seven pyroxene separates (px) from the Antarctic Dry Valleys which are essentially free of Cl and K. The px were obtained from dolerite rocks, for which 3He and 21Ne exposure ages from 1.5 to 6.5 Ma have been reported [2]. The noble gases were extracted in two or three heating steps at GFZ Potsdam, yielding 38Ar/36Ar ratios up to 0.2283 ± 0.0008 (air: 0.1880). Ca (3.7-11.2 wt. %) is expected to be the only relevant target element for Ar production in the five pure px. Assuming a cosmogenic 38Ar/36Ar ratio of 1.5 ± 0.2, we obtain cosmogenic 38Ar concentrations between 130 and 530 x 10^6 atoms/g. The 38Ar production rate was calculated based on 21Ne exposure ages [2], corrected for elevated nuclide production in Antarctica due to prevailing low air pressure and for the revised 21Ne production rate from Si. We obtain values between 188 ± 17 and 243 +110/-24 atoms (g Ca)^-1 a^-1 at sea level and high (northern) latitudes for four out of the five pure px, while one yields a very high value of 348 ± 70 atoms (g Ca)^-1 a^-1. Values above 250 atoms (g Ca)^-1 a^-1 are also obtained from two less pure px containing 0.3 and 0.9% K and from one feldspar/quartz accumulate, indicating that the production rate from K may be higher than that from Ca. The weighted mean (excluding the outlier) of ~200 atoms (g Ca)^-1 a^-1 is in excellent agreement with Lal's [3] theoretical estimate. [1] Renne et al., EPSL 188 (2001) 435. [2] Schäfer et al., EPSL 167 (1999) 215. [3] Lal, EPSL 104 (1991) 424.

  12. Testing the Monetary Model for Exchange Rate Determination in South Africa: Evidence from 101 Years of Data

    Directory of Open Access Journals (Sweden)

    Riané de Bruyn

    2013-03-01

    Full Text Available Evidence in favor of the monetary model of exchange rate determination for the South African Rand is, at best, mixed. A co-integrating relationship between the nominal exchange rate and monetary fundamentals forms the basis of the monetary model. With the econometric literature suggesting that the span of the data, not the frequency, determines the power of the co-integration tests and the studies on South Africa primarily using short-span data from the post-Bretton Woods era, we decided to test the long-run monetary model of exchange rate determination for the South African Rand relative to the US Dollar using annual data from 1910 – 2010. The results provide some support for the monetary model in that long-run co-integration is found between the nominal exchange rate and the output and money supply deviations. However, the theoretical restrictions required by the monetary model are rejected. A vector error-correction model identifies both the nominal exchange rate and the monetary fundamentals as the channel for the adjustment process of deviations from the long-run equilibrium exchange rate. A subsequent comparison of nominal exchange rate forecasts based on the monetary model with those of the random walk model suggests that the forecasting performance of the monetary model is superior.
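
    A test of the kind described, Johansen cointegration between the nominal exchange rate and a fundamentals series, can be sketched with statsmodels; the data below are synthetic random walks, not the study's 1910-2010 series:

    ```python
    # Hedged sketch of a Johansen cointegration test on synthetic data.
    import numpy as np
    from statsmodels.tsa.vector_ar.vecm import coint_johansen

    rng = np.random.default_rng(0)
    n = 101                                     # 101 annual observations, as in the study
    fundamentals = np.cumsum(rng.normal(size=n))               # random-walk "fundamentals"
    exchange_rate = fundamentals + rng.normal(scale=0.3, size=n)  # cointegrated by construction
    data = np.column_stack([exchange_rate, fundamentals])

    res = coint_johansen(data, det_order=0, k_ar_diff=1)  # constant term, 1 lagged difference
    print("trace statistics:", res.lr1)
    print("95% critical values:", res.cvt[:, 1])          # columns are 90/95/99%
    ```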

  13. Determination of the stoichiometric rate in UO2 samples

    International Nuclear Information System (INIS)

    Moura, Sergio C.; Lima, Nelson B. de; Sassine, Andre; Bustillos, Jose Oscar Vega

    2000-01-01

    The gravimetric and voltammetric methods for determination of non-stoichiometric O/U ratio in uranium dioxide used as nuclear fuel are discussed in this work. The oxidation of uranium oxide is very complex due to many phase changes. Gravimetric and voltammetric methods do not detect phase changes. The results of this work shown that, to evaluate both methods is requiring to be done Rietveld methods by X-ray diffraction data to identify the uranium oxide phase changes. (author)

  14. Real-time soft error rate measurements on bulk 40 nm SRAM memories: a five-year dual-site experiment

    Science.gov (United States)

    Autran, J. L.; Munteanu, D.; Moindjie, S.; Saad Saoud, T.; Gasiot, G.; Roche, P.

    2016-11-01

    This paper reports five years of real-time soft error rate experimentation conducted with the same setup at mountain altitude for three years and then at sea level for two years. More than 7 Gbit of SRAM memories manufactured in CMOS bulk 40 nm technology have been subjected to the natural radiation background. The intensity of the atmospheric neutron flux has been continuously measured on site during these experiments using dedicated neutron monitors. As a result, the neutron and alpha components of the soft error rate (SER) have been very accurately extracted from these measurements, refining the first SER estimations performed in 2012 for this SRAM technology. Data obtained at sea level evidence, for the first time, a possible correlation between the neutron flux changes induced by the daily atmospheric pressure variations and the measured SER. Finally, all of the experimental data are compared with results obtained from accelerated tests and numerical simulation.
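
    For context, a soft error rate of this kind is conventionally reported in FIT per Mbit (1 FIT = one failure per 10^9 device-hours); a hedged sketch with placeholder numbers, not the paper's measurements:

    ```python
    # Standard SER bookkeeping: upsets normalized to Mbit-hours, in FIT/Mbit.
    # The inputs below are placeholders, not the paper's measured counts.

    def ser_fit_per_mbit(upsets, bits_monitored, hours):
        mbit = bits_monitored / 1e6
        return upsets / (mbit * hours) * 1e9

    # e.g. 100 upsets observed on 7 Gbit monitored for 3 years
    print(ser_fit_per_mbit(100, 7e9, 3 * 8760))  # ~544 FIT/Mbit
    ```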

  15. A New Method to Detect and Correct the Critical Errors and Determine the Software-Reliability in Critical Software-System

    International Nuclear Information System (INIS)

    Krini, Ossmane; Börcsök, Josef

    2012-01-01

    In order to use electronic systems comprising software and hardware components in safety-related and highly safety-related applications, it is necessary to meet the marginal risk numbers required by standards and legislative provisions. Existing processes and mathematical models are used to verify the risk numbers. On the hardware side, various accepted mathematical models, processes, and methods exist to provide the required proof. To this day, however, there are no closed models or mathematical procedures known that allow for a dependable prediction of software reliability. This work presents a method that makes a prognosis on the residual number of critical errors in software. Conventional models lack this ability, and at present there are no methods that forecast critical errors. The new method shows that an estimate of the residual number of critical errors in software systems is possible by using a combination of prediction models, a ratio of critical errors, and the total error number. Subsequently, the critical expected-value function at any point in time can be derived from the new solution method, provided the detection rate has been calculated using an appropriate estimation method. The presented method also makes it possible to estimate the critical failure rate. The approach is modelled on a real process and therefore describes two essential processes: detection and correction.
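
    A minimal sketch of the combination the abstract describes, applying a critical-error ratio to the predicted residual total error count, with all names and numbers hypothetical:

    ```python
    # Hypothetical sketch, not the paper's model: residual critical errors
    # estimated as critical_ratio * (predicted total errors - detected errors).

    def residual_critical_errors(predicted_total_errors, detected_errors,
                                 critical_ratio):
        """Estimate of critical errors still latent in the software."""
        residual_total = max(predicted_total_errors - detected_errors, 0.0)
        return critical_ratio * residual_total

    print(residual_critical_errors(1200, 1100, 0.05))  # -> 5.0 latent critical errors
    ```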

  16. Determinants of inter-specific variation in basal metabolic rate.

    Science.gov (United States)

    White, Craig R; Kearney, Michael R

    2013-01-01

    Basal metabolic rate (BMR) is the rate of metabolism of a resting, postabsorptive, non-reproductive, adult bird or mammal, measured during the inactive circadian phase at a thermoneutral temperature. BMR is one of the most widely measured physiological traits, and data are available for over 1,200 species. With data available for such a wide range of species, BMR is a benchmark measurement in ecological and evolutionary physiology, and is often used as a reference against which other levels of metabolism are compared. Implicit in such comparisons is the assumption that BMR is invariant for a given species and that it therefore represents a stable point of comparison. However, BMR shows substantial variation between individuals, populations and species. Investigation of the ultimate (evolutionary) explanations for these differences remains an active area of inquiry, and explanation of size-related trends remains a contentious area. Whereas explanations for the scaling of BMR are generally mechanistic and claim ties to the first principles of chemistry and physics, investigations of mass-independent variation typically take an evolutionary perspective and have demonstrated that BMR is ultimately linked with a range of extrinsic variables including diet, habitat temperature, and net primary productivity. Here we review explanations for size-related and mass-independent variation in the BMR of animals, and suggest ways that the various explanations can be evaluated and integrated.

  17. A Research on the Responsibility of Accounting Professionals to Determine and Prevent Accounting Errors and Frauds: Edirne Sample

    Directory of Open Access Journals (Sweden)

    Semanur Adalı

    2017-09-01

    Full Text Available In this study, the ethical dimensions of accounting professionals related to accounting errors and frauds were examined. Firstly, general and technical information about accounting was provided. Then, some terminology on error, fraud and ethics in accounting was discussed. The study also included recent statistics about accounting errors and fraud, as well as a literature review. As the research methodology, a questionnaire was distributed to 36 accounting professionals residing in the Edirne city of Turkey. The collected data were then entered into the SPSS package program for analysis. The study revealed very important results. Accounting professionals think that accounting chambers do not organize enough seminars/conferences on errors and fraud. They also believe that the supervision and disciplinary boards of professional accounting chambers only partially fulfill their responsibilities. The attitude of professional accounting chambers in terms of errors, fraud and ethics is considered neither strict nor lenient. However, most accounting professionals are aware of colleagues who have received disciplinary penalties. The most important and effective tool to prevent errors and fraud is indicated as external audit, but internal audit and internal control are valued as well. According to accounting professionals, most errors occur due to incorrect data received from clients and as a result of recording. Fraud is generally committed in order to get credit from banks and to benefit the organization by not showing the real situation of the firm. Finally, accounting professionals state that being honest, trustworthy and impartial is the basis of the accounting profession and accountants must adhere to ethical rules.

  18. 7 CFR 1610.10 - Determination of interest rate on Bank loans.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 Determination of interest rate on Bank loans. 1610.10..., DEPARTMENT OF AGRICULTURE LOAN POLICIES § 1610.10 Determination of interest rate on Bank loans. (a) All loan..., shall bear interest at the rate determined as established below, but not less than 5 percent per annum...

  19. 75 FR 11227 - Proposed Collection; Comment Request for Tip Rate Determination Agreement (TRDA) for Use in...

    Science.gov (United States)

    2010-03-10

    ... Rate Determination Agreement (TRDA) for Use in Industries Other Than the Food and Beverage Industry and... Tip Rate Determination Agreement (TRDA) for industries other than the food and beverage industry and...

  20. Determination of total flow rate and flow rate of every operating branch in commissioning of heavy water loop for ARR-2

    International Nuclear Information System (INIS)

    Han Yan

    1997-01-01

    The heavy water loop (i.e., RCS) for ARR-2 in Algeria is a complex loop. Flow-regulating means are not provided in the design, in order to operate the reactor safely and to simplify operating processes. Precisely determining the orifice diameters of the resistance parts of the loop is therefore the key to reducing the deviation between practical and design flow rates. Commissioning tests shall ensure that, under every combined operating mode of the pumps, the total coolant flow rate is about the same (for the same number of pumps operating in parallel) and is consistent with the design requirement, and that the distribution of coolant flow rate to every branch is uniform. The flow determination is divided into two steps. First, the corresponding resistance part at each pump outlet is determined in a commissioning test of the short-circuited heavy water loop with light water, which solves the problem of uniform distribution of the flow rate to each branch. Secondly, the resistance part at the reactor inlet is determined in a commissioning test of the heavy water loop connected with the vessel, which ensures that the total heavy water flow rate is within the optimal range. According to the practical requirements of the project, a computer program for hydraulic calculation and analysis of the heavy water loop has been developed, and a hydraulic characteristics test for a part of the loop has been conducted in order to correct calculation errors. By means of program calculation combined with on-site tests, the orifice diameters of 9 resistance parts have been determined rapidly and precisely, and the requirements of design and operation have been met adequately.

  1. Who Do Hospital Physicians and Nurses Go to for Advice About Medications? A Social Network Analysis and Examination of Prescribing Error Rates.

    Science.gov (United States)

    Creswick, Nerida; Westbrook, Johanna Irene

    2015-09-01

    To measure the weekly medication advice-seeking networks of hospital staff, to compare patterns across professional groups, and to examine these in the context of prescribing error rates. A social network analysis was conducted. All 101 staff in 2 wards in a large, academic teaching hospital in Sydney, Australia, were surveyed (response rate, 90%) using a detailed social network questionnaire. The extent of weekly medication advice seeking was measured by the density of connections, the proportion of reciprocal relationships (reciprocity), the number of colleagues to whom each person provided advice (in-degree), and perceptions of the amount and impact of advice seeking between physicians and nurses. Data on prescribing error rates from the 2 wards were compared. Weekly medication advice-seeking networks were sparse (density: 7% ward A and 12% ward B). Information sharing across professional groups was modest, and rates of reciprocation of advice were low (9% ward A, 14% ward B). Pharmacists provided advice to most people, and junior physicians also played central roles. Senior physicians provided medication advice to few people. Many staff perceived that physicians rarely sought advice from nurses when prescribing, but almost all believed that an increase in communication between physicians and nurses about medications would improve patient safety. The medication networks in ward B had higher measures for density and reciprocation, and fewer senior physicians who were isolates. Ward B had a significantly lower rate of both procedural and clinical prescribing errors than ward A (0.63 clinical prescribing errors per admission [95% CI, 0.47-0.79] versus 1.81/admission [95% CI, 1.49-2.13]). Medication advice-seeking networks among staff on hospital wards are limited. Hubs of advice provision include pharmacists, junior physicians, and senior nurses. Senior physicians are poorly integrated into medication advice networks. Strategies to improve the advice-giving networks between senior
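
    The three network measures used here (density, reciprocity, in-degree) are straightforward to compute with networkx; a toy directed graph, not the study's ward data, with edges pointing from advice seeker to advice provider:

    ```python
    # Sketch of the network measures named above on an illustrative advice graph.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edges_from([
        ("nurse_A", "pharmacist"), ("junior_dr", "pharmacist"),
        ("pharmacist", "junior_dr"), ("nurse_B", "senior_nurse"),
        ("junior_dr", "senior_dr"),
    ])

    print("density:", nx.density(G))          # fraction of possible ties present
    print("reciprocity:", nx.reciprocity(G))  # share of ties that are mutual
    # in-degree = number of colleagues who seek advice from this person
    print("advice provided (in-degree):", dict(G.in_degree()))
    ```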

  2. The determination of carbon dioxide concentration using atmospheric pressure ionization mass spectrometry/isotopic dilution and errors in concentration measurements caused by dryers.

    Science.gov (United States)

    DeLacy, Brendan G; Bandy, Alan R

    2008-01-01

    An atmospheric pressure ionization mass spectrometry/isotopically labeled standard (APIMS/ILS) method has been developed for the determination of carbon dioxide (CO2) concentration. Descriptions of the instrumental components, the ionization chemistry, and the statistics associated with the analytical method are provided. This method represents an alternative to the nondispersive infrared (NDIR) technique, which is currently used in the atmospheric community to determine atmospheric CO2 concentrations. The APIMS/ILS and NDIR methods exhibit a decreased sensitivity for CO2 in the presence of water vapor. Therefore, dryers such as a Nafion dryer are used to remove water before detection. The APIMS/ILS method measures mixing ratios and demonstrates linearity and range in the presence or absence of a dryer. The NDIR technique, on the other hand, measures molar concentrations. The second half of this paper describes errors in molar concentration measurements that are caused by drying. An equation describing the errors was derived from the ideal gas law, the conservation of mass, and Dalton's Law. The purpose of this derivation was to quantify errors in the NDIR technique that are caused by drying. Laboratory experiments were conducted to verify the errors in post-dryer CO2 concentration measurements created solely by the dryer. The laboratory experiments verified the theoretically predicted errors in the derived equations. There are numerous references in the literature that describe the use of a dryer in conjunction with the NDIR technique. However, these references do not address the errors that are caused by drying.

  3. Errors and limits in the determination of plasma electron density by measuring the absolute values of the emitted continuum radiation intensity

    International Nuclear Information System (INIS)

    Bilbao, L.; Bruzzone, H.; Grondona, D.

    1994-01-01

    The reliable determination of a plasma electron density structure requires a good knowledge of the errors affecting the employed technique. A technique based on measurements of the absolute light intensity emitted by travelling plasma structures in plasma focus devices has been used; it can be easily adapted to other geometries and even to stationary plasma structures with time-varying plasma densities. The purpose of this work is to discuss in some detail the errors and limits of this technique. Three separate errors are shown: the minimum size of the density structure that can be resolved, an overall error in the measurements themselves, and an uncertainty in the shape of the density profile. (author)

  4. Determination of reactivity rates of silicate particle-size fractions

    Directory of Open Access Journals (Sweden)

    Angélica Cristina Fernandes Deus

    2014-04-01

    Full Text Available The efficiency of sources used for soil acidity correction depends on reactivity rate (RR) and neutralization power (NP), indicated by effective calcium carbonate (ECC). Few studies establish the relative efficiency of reactivity (RER) for silicate particle-size fractions; therefore, the RER values applied for lime are used. This study aimed to evaluate the reactivity of silicate materials affected by particle size throughout incubation periods in comparison to lime, and to calculate the RER for silicate particle-size fractions. Six correction sources were evaluated: three slags from distinct origins, dolomitic and calcitic lime separated into four particle-size fractions (2, 0.84, 0.30 and <0.30-mm sieves), and wollastonite as an additional treatment. The treatments were applied to three soils with different texture classes. The dose of neutralizing material (calcium and magnesium oxides) was applied at equal quantities, and the only variation was the particle size of the material. After a 90-day incubation period, the RER was calculated for each particle-size fraction, as well as the RR and ECC of each source. The neutralization of soil acidity by the same particle-size fraction of different sources showed distinct solubility and a distinct reaction between silicates and lime. The RER values for slags were higher than the limits established by Brazilian legislation, indicating that the method used for limes should not be used for the slags studied here.

  5. A software solution to estimate the SEU-induced soft error rate for systems implemented on SRAM-based FPGAs

    International Nuclear Information System (INIS)

    Wang Zhongming; Lu Min; Yao Zhibin; Guo Hongxia

    2011-01-01

    SRAM-based FPGAs are very susceptible to radiation-induced Single-Event Upsets (SEUs) in space applications. The failure mechanism in an FPGA's configuration memory differs from that in traditional memory devices. As a result, there is a growing demand for methodologies that can quantitatively evaluate the impact of this effect. Fault injection appears to meet such a requirement. In this paper, we propose a new methodology to analyze the soft errors in SRAM-based FPGAs. This method is based on an in-depth understanding of the device architecture and the failure mechanisms induced by configuration upsets. The developed programs read in the placed and routed netlist, search for critical logic nodes and paths that may destroy the circuit's topological structure, and then query a database storing the decoded relationship between the configurable resources and the corresponding control bits to get the sensitive bits. Accelerator irradiation tests and fault injection experiments were carried out to validate this approach. (semiconductor integrated circuits)

  6. Error Rates of M-PAM and M-QAM in Generalized Fading and Generalized Gaussian Noise Environments

    KAUST Repository

    Soury, Hamza

    2013-07-01

    This letter investigates the average symbol error probability (ASEP) of pulse amplitude modulation and quadrature amplitude modulation coherent signaling over flat fading channels subject to additive white generalized Gaussian noise. The new ASEP results are derived in a generic closed-form in terms of the Fox H function and the bivariate Fox H function for the extended generalized-K fading case. The utility of this new general closed-form is that it includes some special fading distributions, like the Generalized-K, Nakagami-m, and Rayleigh fading and special noise distributions such as Gaussian and Laplacian. Some of these special cases are also treated and are shown to yield simplified results.

  7. 77 FR 31756 - Energy Conservation Program: Alternative Efficiency Determination Methods and Alternative Rating...

    Science.gov (United States)

    2012-05-30

    ...-AC46 Energy Conservation Program: Alternative Efficiency Determination Methods and Alternative Rating... regulations authorizing the use of alternative methods of determining energy efficiency or energy consumption... alternative methods of determining energy efficiency or energy consumption of various consumer products and...

  8. Determination of deuterium concentrations in JET plasmas from fusion reaction rate measurements

    International Nuclear Information System (INIS)

    Jarvis, O.N.; Balet, B.; Cordey, J.G.; Morgan, P.D.; Sadler, G.; Belle, P. van; Conroy, S.; Elevant, T.

    1989-01-01

    The concentration of deuterium in the central regions of JET plasmas, expressed as a fraction of the electron concentration (n_d/n_e), has been determined using four different methods involving neutron detection. These measurements are found to be consistent and agree within experimental errors with values deduced from Z_eff measurements using visible bremsstrahlung radiation. (author) 11 refs., 1 fig., 1 tab

  9. Combining wrist age and third molars in forensic age estimation: how to calculate the joint age estimate and its error rate in age diagnostics.

    Science.gov (United States)

    Gelbrich, Bianca; Frerking, Carolin; Weiss, Sandra; Schwerdt, Sebastian; Stellzig-Eisenhauer, Angelika; Tausche, Eve; Gelbrich, Götz

    2015-01-01

    Forensic age estimation in living adolescents is based on several methods, e.g. the assessment of skeletal and dental maturation. Combination of several methods is mandatory, since age estimates from a single method are too imprecise due to biological variability. The correlation of the errors of the methods being combined must be known to calculate the precision of combined age estimates. To examine the correlation of the errors of the hand and the third molar method and to demonstrate how to calculate the combined age estimate. Clinical routine radiographs of the hand and dental panoramic images of 383 patients (aged 7.8-19.1 years, 56% female) were assessed. Lack of correlation (r = -0.024, 95% CI = -0.124 to + 0.076, p = 0.64) allows calculating the combined age estimate as the weighted average of the estimates from hand bones and third molars. Combination improved the standard deviations of errors (hand = 0.97, teeth = 1.35 years) to 0.79 years. Uncorrelated errors of the age estimates obtained from both methods allow straightforward determination of the common estimate and its variance. This is also possible when reference data for the hand and the third molar method are established independently from each other, using different samples.
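
    Assuming the uncorrelated errors reported above, the combined estimate is the inverse-variance weighted average; a worked check (the two age estimates themselves are illustrative) reproduces the quoted combined standard deviation of about 0.79 years:

    ```python
    # Inverse-variance combination of two independent age estimates.
    import math

    def combine(est1, sd1, est2, sd2):
        w1, w2 = 1 / sd1**2, 1 / sd2**2
        est = (w1 * est1 + w2 * est2) / (w1 + w2)
        sd = math.sqrt(1 / (w1 + w2))
        return est, sd

    # SDs from the abstract (hand = 0.97 y, teeth = 1.35 y); ages are made up.
    est, sd = combine(17.2, 0.97, 16.6, 1.35)
    print(round(est, 2), round(sd, 2))  # combined SD -> 0.79, matching the abstract
    ```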

  10. System and method for determining an ammonia generation rate in a three-way catalyst

    Science.gov (United States)

    Sun, Min; Perry, Kevin L; Kim, Chang H

    2014-12-30

    A system according to the principles of the present disclosure includes a rate determination module, a storage level determination module, and an air/fuel ratio control module. The rate determination module determines an ammonia generation rate in a three-way catalyst based on a reaction efficiency and a reactant level. The storage level determination module determines an ammonia storage level in a selective catalytic reduction (SCR) catalyst positioned downstream from the three-way catalyst based on the ammonia generation rate. The air/fuel ratio control module controls an air/fuel ratio of an engine based on the ammonia storage level.

  11. Does growth rate determine the rate of metabolism in shorebird chicks living in the arctic?

    NARCIS (Netherlands)

    Williams, Joseph B.; Tieleman, B. Irene; Visser, G. Henk; Ricklefs, Robert E.

    2007-01-01

    We measured resting and peak metabolic rates (RMR and PMR, respectively) during development of chicks of seven species of shorebirds: least sandpiper (Calidris minutilla; adult mass 20-22 g), dunlin (Calidris alpina; 56-62 g), lesser yellowlegs (Tringa flavipes; 88-92 g), short-billed dowitcher

  12. Use of GRACE determined secular gravity rates for glacial isostatic adjustment studies in North-America

    Science.gov (United States)

    van der Wal, Wouter; Wu, Patrick; Sideris, Michael G.; Shum, C. K.

    2008-10-01

    Monthly geopotential spherical harmonic coefficients from the GRACE satellite mission are used to determine their usefulness and limitations for studying glacial isostatic adjustment (GIA) in North America. Secular gravity rates are estimated by unweighted least-squares estimation using release 4 coefficients from August 2002 to August 2007 provided by the Center for Space Research (CSR), University of Texas. Smoothing is required to suppress short-wavelength noise, in addition to filtering to diminish geographically correlated errors, as shown in previous studies. Optimal cut-off degrees and orders are determined for the destriping filter to maximize the signal-to-noise ratio. The halfwidth of the Gaussian filter is shown to significantly affect the sensitivity of the GRACE data (with respect to upper mantle viscosity and ice loading history); therefore, the halfwidth should be selected based on the desired sensitivity. It is shown that an increase in water storage in an area southwest of Hudson Bay, from the summer of 2003 to the summer of 2006, contributes up to half of the maximum estimated gravity rate. Hydrology models differ in their predictions of the secular change in water storage; therefore even 4-year trend estimates are influenced by the uncertainty in water storage changes. Land ice melting in Greenland and Alaska makes a non-negligible contribution, up to one-fourth of the maximum gravity rate. The estimated secular gravity rate shows two distinct peaks that can possibly be due to two domes in the former Pleistocene ice cover: west and southeast of Hudson Bay. With a limited number of models, a better fit is obtained with models that use the ICE-3G model compared to the ICE-5G model. However, the uncertainty in interannual variations in hydrology models is too large to constrain the ice loading history with the current data span. For future work in which GRACE will be used to constrain ice loading history and the Earth's radial viscosity profile, it is
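
    The unweighted least-squares trend estimation described here can be sketched as a linear fit of bias, secular rate, and an annual sinusoid to a monthly series; the series below is synthetic, not a GRACE Stokes coefficient:

    ```python
    # Hedged sketch: estimate a secular rate from monthly data by fitting
    # bias + trend + annual cycle with ordinary least squares.
    import numpy as np

    t = np.arange(60) / 12.0                      # 5 years of monthly epochs, in years
    rng = np.random.default_rng(1)
    y = 0.5 * t + 0.2 * np.sin(2 * np.pi * t) + rng.normal(scale=0.05, size=t.size)

    A = np.column_stack([np.ones_like(t), t,
                         np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("secular rate:", coeffs[1])             # ~0.5 per year, as constructed
    ```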

  13. Application for Determination of the Forward Exchange Rate in Access 2003

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2006-01-01

    Full Text Available The forward exchange rate sets the present rate for a foreign currency transaction with payment or delivery at some future date. Forward rates are calculated using the current exchange rate for the currency pair and the interest rates of the two currencies, and they allow you to lock in a rate now for a future date. This paper describes the formulas that determine the forward exchange rate and how they can be implemented in a short but efficient informatics application.
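
    The standard covered-interest-parity relation on which such an application typically rests (stated here as the generic formula, not necessarily the paper's exact implementation):

    ```latex
    % Covered interest parity: S is the spot rate (domestic currency per unit
    % of foreign currency), r_d and r_f the domestic and foreign interest
    % rates, T the time to delivery in years, and F the forward rate:
    F = S \cdot \frac{1 + r_d T}{1 + r_f T}
    ```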

  14. Errors in Computing the Normalized Protein Catabolic Rate due to Use of Single-pool Urea Kinetic Modeling or to Omission of the Residual Kidney Urea Clearance.

    Science.gov (United States)

    Daugirdas, John T

    2017-07-01

    The protein catabolic rate normalized to body size (PCRn) often is computed in dialysis units to obtain information about protein ingestion. However, errors can manifest when inappropriate modeling methods are used. We used a variable volume 2-pool urea kinetic model to examine the percent errors in PCRn due to use of a 1-pool urea kinetic model or after omission of residual urea clearance (Kru). When a single-pool model was used, 2 sources of errors were identified. The first, dependent on the ratio of dialyzer urea clearance to urea distribution volume (K/V), resulted in a 7% inflation of the PCRn when K/V was in the range of 6 mL/min per L. A second, larger error appeared when Kt/V values were below 1.0 and was related to underestimation of urea distribution volume (due to overestimation of effective clearance) by the single-pool model. A previously reported prediction equation for PCRn was valid, but data suggest that it should be modified using 2-pool eKt/V and V coefficients instead of single-pool values. A third source of error, this one unrelated to use of a single-pool model, namely omission of Kru, was shown to result in an underestimation of PCRn, such that each ml/minute Kru per 35 L of V caused a 5.6% underestimate in PCRn. Marked overestimation of PCRn can result due to inappropriate use of a single-pool urea kinetic model, particularly when Kt/V <1.0 (as in short daily dialysis), or after omission of residual native kidney clearance. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
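
    The quoted residual-clearance effect, each mL/min of Kru per 35 L of V causing a ~5.6% underestimate, can be expressed as a simple linear rule of thumb (a sketch, not the full two-pool model):

    ```python
    # Linear sketch of the Kru-omission error quoted above; not a substitute
    # for the variable-volume 2-pool urea kinetic model used in the paper.

    def pcrn_underestimate_percent(kru_ml_min, v_liters):
        """Percent underestimate of PCRn when residual clearance is omitted."""
        return 5.6 * kru_ml_min * (35.0 / v_liters)

    print(pcrn_underestimate_percent(2.0, 35.0))  # 2 mL/min with V = 35 L -> ~11.2%
    ```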

  15. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

    Full Text Available A scheme is presented for calculating the errors of dry matter values which occur during the approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Formulae are also shown which describe the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth course of oats and maize plants are given. A critical analysis of the estimation of the obtained results has been done. The usefulness of the joint application of statistical methods and error calculus in plant growth analysis has been ascertained.
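
    The classical relative growth rate and a first-order propagation of the dry-mass errors, consistent with the characteristics listed above though not necessarily the paper's exact formulae:

    ```latex
    % Mean relative growth rate between harvests at t_1 and t_2 with dry
    % masses W_1, W_2, and a first-order (absolute) error propagation for
    % mass errors \Delta W_1, \Delta W_2:
    \mathrm{RGR} = \frac{\ln W_2 - \ln W_1}{t_2 - t_1},
    \qquad
    \Delta\mathrm{RGR} \approx \frac{1}{t_2 - t_1}
    \left(\frac{\Delta W_1}{W_1} + \frac{\Delta W_2}{W_2}\right)
    ```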

  16. Exchange Rate Volatility, Its Determinants and Effects on the Manufacturing Sector in Nigeria

    OpenAIRE

    Chimaobi V. Okolo; Onyinye S. Ugwuanyi; Kenneth A. Okpala

    2017-01-01

    This study evaluated the effect of exchange rate volatility on the manufacturing sector of Nigeria. The flow and stock market theories of exchange rate determination were adopted, considering macroeconomic determinants such as balance of trade, trade openness, and net international investment. Furthermore, the influence of changes in the parallel exchange rate, official exchange rate and real effective exchange rate was modeled on the manufacturing sector output. Vector autoregression techniques an...

  17. Error estimation and global fitting in transverse-relaxation dispersion experiments to determine chemical-exchange parameters

    International Nuclear Information System (INIS)

    Ishima, Rieko; Torchia, Dennis A.

    2005-01-01

    Off-resonance effects can introduce significant systematic errors in R2 measurements in constant-time Carr-Purcell-Meiboom-Gill (CPMG) transverse relaxation dispersion experiments. For an off-resonance chemical shift of 500 Hz, 15N relaxation dispersion profiles obtained from experiment and computer simulation indicated a systematic error of ca. 3%. This error is three- to five-fold larger than the random error in R2 caused by noise. Good estimates of the total R2 uncertainty are critical in order to obtain accurate estimates of optimized chemical exchange parameters and their uncertainties derived from χ2 minimization of a target function. Here, we present a simple empirical approach that provides a good estimate of the total error (systematic + random) in 15N R2 values measured for the HIV protease. The advantage of this empirical error estimate is that it is applicable even when some of the factors that contribute to the off-resonance error are not known. These errors are incorporated into a χ2 minimization protocol, in which the Carver-Richards equation is used to fit the observed R2 dispersion profiles, that yields optimized chemical exchange parameters and their confidence limits. Optimized parameters are also derived, using the same protein sample and data-fitting protocol, from 1H R2 measurements in which systematic errors are negligible. Although 1H and 15N relaxation profiles of individual residues were well fit, the optimized exchange parameters had large uncertainties (confidence limits). In contrast, when a single pair of exchange parameters (the exchange lifetime, τex, and the fractional population, pa) were constrained to globally fit all R2 profiles for residues in the dimer interface of the protein, confidence limits were less than 8% for all optimized exchange parameters. In addition, F-tests showed that the quality of the fits obtained using τex, pa as global parameters was not improved when these parameters were free to fit the R

  18. Reliability of perceived neighbourhood conditions and the effects of measurement error on self-rated health across urban and rural neighbourhoods.

    Science.gov (United States)

    Pruitt, Sandi L; Jeffe, Donna B; Yan, Yan; Schootman, Mario

    2012-04-01

    Limited psychometric research has examined the reliability of self-reported measures of neighbourhood conditions, the effect of measurement error on associations between neighbourhood conditions and health, and potential differences in the reliabilities between neighbourhood strata (urban vs rural and low vs high poverty). We assessed overall and stratified reliability of self-reported perceived neighbourhood conditions using five scales (social and physical disorder, social control, social cohesion, fear) and four single items (multidimensional neighbouring). We also assessed measurement error-corrected associations of these conditions with self-rated health. Using random-digit dialling, 367 women without breast cancer (matched controls from a larger study) were interviewed twice, 2-3 weeks apart. Test-retest (intraclass correlation coefficients (ICC)/weighted κ) and internal consistency reliability (Cronbach's α) were assessed. Differences in reliability across neighbourhood strata were tested using bootstrap methods. Regression calibration corrected estimates for measurement error. All measures demonstrated satisfactory internal consistency (α ≥ 0.70) and either moderate (ICC/κ=0.41-0.60) or substantial (ICC/κ=0.61-0.80) test-retest reliability in the full sample. Internal consistency did not differ by neighbourhood strata. Test-retest reliability was significantly lower among rural (vs urban) residents for two scales (social control, physical disorder) and two multidimensional neighbouring items; test-retest reliability was higher for physical disorder and lower for one multidimensional neighbouring item among the high (vs low) poverty strata. After measurement error correction, the magnitude of associations between neighbourhood conditions and self-rated health were larger, particularly in the rural population. Research is needed to develop and test reliable measures of perceived neighbourhood conditions relevant to the health of rural populations.
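
    The internal-consistency statistic reported here, Cronbach's alpha, can be computed directly from its definition; a small sketch on synthetic data:

    ```python
    # Cronbach's alpha from its definition: alpha = k/(k-1) * (1 - sum of item
    # variances / variance of the summed scale). Data below are synthetic.
    import numpy as np

    def cronbach_alpha(items):                 # items: (n_respondents, k_items)
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(2)
    latent = rng.normal(size=(200, 1))
    scale = latent + rng.normal(scale=0.7, size=(200, 5))  # 5 correlated items
    print(round(cronbach_alpha(scale), 2))                 # well above 0.70 here
    ```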

  19. Errors in determination of soil water content using time-domain reflectometry caused by soil compaction around wave guides

    Energy Technology Data Exchange (ETDEWEB)

    Ghezzehei, T.A.

    2008-05-29

    Application of time domain reflectometry (TDR) in soil hydrology often involves the conversion of TDR-measured dielectric permittivity to water content using universal calibration equations (empirical or physically based). Deviations of soil-specific calibrations from the universal calibrations have been noted and are usually attributed to peculiar composition of soil constituents, such as high content of clay and/or organic matter. Although it is recognized that soil disturbance by TDR waveguides may have impact on measurement errors, to our knowledge, there has not been any quantification of this effect. In this paper, we introduce a method that estimates this error by combining two models: one that describes soil compaction around cylindrical objects and another that translates change in bulk density to evolution of soil water retention characteristics. Our analysis indicates that the compaction pattern depends on the mechanical properties of the soil at the time of installation. The relative error in water content measurement depends on the compaction pattern as well as the water content and water retention properties of the soil. Illustrative calculations based on measured soil mechanical and hydrologic properties from the literature indicate that the measurement errors of using a standard three-prong TDR waveguide could be up to 10%. We also show that the error scales linearly with the ratio of rod radius to the interradius spacing.
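
    For context, a widely used universal calibration is the empirical Topp et al. (1980) polynomial from bulk dielectric permittivity to volumetric water content; probe-induced compaction shifts the effective permittivity around the rods and hence this estimate:

    ```python
    # Topp et al. (1980) universal calibration (empirical; coefficients as
    # commonly cited, quoted here from memory rather than the paper itself).

    def topp_water_content(eps_bulk):
        """Volumetric water content (m^3/m^3) from bulk relative permittivity."""
        return (-5.3e-2 + 2.92e-2 * eps_bulk
                - 5.5e-4 * eps_bulk**2 + 4.3e-6 * eps_bulk**3)

    print(topp_water_content(20.0))  # ~0.35 m^3/m^3 for a moist mineral soil
    ```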

  20. An emerging network storage management standard: Media error monitoring and reporting information (MEMRI) - to determine optical tape data integrity

    Science.gov (United States)

    Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don

    1998-01-01

    Sophisticated network storage management applications are rapidly evolving to satisfy a market demand for highly reliable data storage systems with large data storage capacities and performance requirements. To preserve a high degree of data integrity, these applications must rely on intelligent data storage devices that can provide reliable indicators of data degradation. Error correction activity generally occurs within storage devices without notification to the host. Early indicators of degradation and media error monitoring 333 and reporting (MEMR) techniques implemented in data storage devices allow network storage management applications to notify system administrators of these events and to take appropriate corrective actions before catastrophic errors occur. Although MEMR techniques have been implemented in data storage devices for many years, until 1996 no MEMR standards existed. In 1996 the American National Standards Institute (ANSI) approved the only known (world-wide) industry standard specifying MEMR techniques to verify stored data on optical disks. This industry standard was developed under the auspices of the Association for Information and Image Management (AIIM). A recently formed AIIM Optical Tape Subcommittee initiated the development of another data integrity standard specifying a set of media error monitoring tools and media error monitoring information (MEMRI) to verify stored data on optical tape media. This paper discusses the need for intelligent storage devices that can provide data integrity metadata, the content of the existing data integrity standard for optical disks, and the content of the MEMRI standard being developed by the AIIM Optical Tape Subcommittee.

  1. Effect of catalogues coordinate errors of a star onto determination of the physical libration of the Moon from the observations of stars

    Science.gov (United States)

    Petrova, Natalia; Kocoulin, Valerii; Nefediev, Yurii

    2016-07-01

    In the Kazan University computer simulation is carried out for observation of lunar physical libration in projects planned installation of measuring equipment on the lunar surface. One such project is the project of ILOM (Japan), in which on the lunar pole an optical telescope with CCD will be equipped. As a result, the determining the selenographic coordinates (x and y) of a star with an accuracy of 1 ms of arc will be achieved. On the basis of the analytical theory of physical libration we developed a technique for solving the inverse problem of the libration. And we have already shown, for example, that the error in determining selenographic coordinates about ɛ seconds does not lead to errors in the determination of the libration angles ρ and Iσ larger than the 1.414ɛ. Libration in longitude is not determined from observations of the polar star (Petrova et al., 2012). The accuracy of the libration in the inverse problem depends on accuracy of the coordinates of the stars - α and δ - taken from the star catalogs. Checking this influence is the task of the present study. To do simulation we have developed that allows to choose the stars, falling in the field of view of the lunar telescope on observation period. Equatorial coordinates of stars were chosen by us from several fundamental catalogs: UCAC2-BSS, Hipparcos, Tycho, FK6 (part I, III) and the Astronomical Almanac. An analysis of these catalogues from the point of view accuracy of coordinates of stars represented in them was performed by Nefediev et al., 2013. The largest error, 20-70 ms, found in the catalogues UCAC2 and Tycho, the others have an error about a millisecond of arc. We simulated the observations with mentioned errors and got the following results. 1. The error in the declination Δδ of the star causes the same order error in libration parameters ρ and Iσ , while the sensitivity of libration to errors in Δα is ten time smaller. Fortunately, due to statistics (30 to 70, depending on

  2. Variation of haemoglobin extinction coefficients can cause errors in the determination of haemoglobin concentration measured by near-infrared spectroscopy

    International Nuclear Information System (INIS)

    Kim, J G; Liu, H

    2007-01-01

    Near-infrared spectroscopy or imaging has been extensively applied to various biomedical applications since it can detect the concentrations of oxyhaemoglobin (HbO2), deoxyhaemoglobin (Hb) and total haemoglobin (Hbtotal) from deep tissues. To quantify concentrations of these haemoglobin derivatives, the extinction coefficient values of HbO2 and Hb have to be employed. However, it was not well recognized among researchers that small differences in extinction coefficients could cause significant errors in quantifying the concentrations of haemoglobin derivatives. In this study, we derived equations to estimate errors of haemoglobin derivatives caused by the variation of haemoglobin extinction coefficients. To prove our error analysis, we performed experiments using liquid-tissue phantoms containing 1% Intralipid in a phosphate-buffered saline solution. The gas intervention of pure oxygen was given in the solution to examine the oxygenation changes in the phantom, and 3 mL of human blood was added twice to show the changes in [Hbtotal]. The error calculation has shown that even a small variation (0.01 cm^-1 mM^-1) in extinction coefficients can produce appreciable relative errors in quantification of Δ[HbO2], Δ[Hb] and Δ[Hbtotal]. We have also observed that the error of Δ[Hbtotal] is not always larger than those of Δ[HbO2] and Δ[Hb]. This study concludes that we need to be aware of any variation in haemoglobin extinction coefficients, which could result from changes in temperature, and to utilize the corresponding animal's haemoglobin extinction coefficients for animal experiments, in order to obtain more accurate values of Δ[HbO2], Δ[Hb] and Δ[Hbtotal] from in vivo tissue measurements
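
    The underlying quantification is a two-wavelength Beer-Lambert system solved for [HbO2] and [Hb]; a hedged illustration of the sensitivity discussed above, with placeholder extinction coefficients (not tabulated constants), perturbing one coefficient by 0.01 cm^-1 mM^-1:

    ```python
    # Solve the 2x2 Beer-Lambert system, then perturb one extinction
    # coefficient to see the relative error in the recovered concentrations.
    import numpy as np

    L = 1.0                                   # effective path length, cm
    E = np.array([[0.30, 1.10],               # rows: wavelengths; cols: [HbO2, Hb]
                  [1.15, 0.78]])              # extinction coefficients, cm^-1 mM^-1 (placeholders)
    A = np.array([0.50, 0.90])                # measured absorbances (placeholders)

    c = np.linalg.solve(E * L, A)             # concentrations in mM
    E_perturbed = E.copy()
    E_perturbed[0, 0] += 0.01                 # small error in one coefficient
    c_err = np.linalg.solve(E_perturbed * L, A)

    print("relative errors (%):", 100 * (c_err - c) / c)
    ```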

  5. 42 CFR 23.25 - How will interest rates for loans be determined?

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 false How will interest rates for loans be determined? 23... will interest rates for loans be determined? Interest will be charged at the Treasury Current Value of Funds (CVF) rate in effect on April 1 immediately preceding the date on which the loan is approved and...

  6. Determining the capacity and rate of advance of tunneling scoops during the sinking of shafts

    Energy Technology Data Exchange (ETDEWEB)

    Durov, E.M.

    1979-03-01

    Methods are outlined for calculating the parameters of tunneling rigs used for deepening mine shafts and for determining their rate of advance. The output for each type of rig, the scoop capacity and the range of rates of advance in the shaft are determined first, and then a graph of output as a function of the maximum rate of advance is constructed. The desired productivity is determined on the basis of output per working shift in loose soil. Having determined the scoop capacity and rate of advance, the remaining parameters of the excavation may be determined.

  7. Application of Fermat's Principle to Calculation of the Errors of Acoustic Flow-Rate Measurements for a Three-Dimensional Fluid Flow or Gas

    Science.gov (United States)

    Petrov, A. G.; Shkundin, S. Z.

    2018-01-01

    Fermat's variational principle is used to derive the formula for the time of propagation of a sonic signal between two set points A and B in a steady three-dimensional flow of a fluid or gas. It is shown that the fluid flow changes the time of signal reception by a value proportional to the flow rate, independently of the velocity profile. The difference between the times of reception of the signals travelling from point B to point A and vice versa is, to high accuracy, proportional to the flow rate. It is shown that the relative error of the formula does not exceed the square of the largest Mach number. This makes it possible to measure the flow rate of a fluid or gas with an arbitrary steady subsonic velocity field.
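
    The classic transit-time relations behind this statement, for path length L, sound speed c and mean projected flow velocity v̄:

    ```latex
    % Transit-time relations for a path of length L, sound speed c and
    % mean projected flow velocity \bar{v} (Mach number M = \bar{v}/c):
    t_{AB} = \frac{L}{c + \bar{v}}, \qquad
    t_{BA} = \frac{L}{c - \bar{v}}, \qquad
    \Delta t = t_{BA} - t_{AB} = \frac{2 L \bar{v}}{c^2 - \bar{v}^2}
             = \frac{2 L \bar{v}}{c^2}\left(1 + O(M^2)\right)
    % so \Delta t is proportional to the mean velocity (hence the flow rate),
    % with relative error of order M^2, as stated in the abstract.
    ```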

  8. Correction factor determination on failure rate equation of MacLaurin series for low and high mode application

    Directory of Open Access Journals (Sweden)

    Totok R. Biyanto

    2016-06-01

    Full Text Available A Safety Instrumented Function (SIF) is implemented on a system to prevent hazards in the process industry. In general, most SIF implementations in the process industry work in the low-demand condition. Safety evaluation of a SIF that works in low demand can be carried out with a quantitative method, namely a simplified exponential-equation form of the Maclaurin series, which can be called the simplified equation. The simplified equation used in the high-demand condition will generate a higher Safety Integrity Level (SIL), and this results in higher safety cost. Therefore, the value of the low- or high-demand rate limit should be determined to prevent this. The result of this research is a first-order equation that can correct the SIL error arising from the use of the simplified equation, without regard to the demand-rate limit for low and high demand. This equation is applied to SIL determination for a SIF with a 1oo1 vote. The new equation from this research is λ = 0.9428 λMC + 1.062E−04 H/P, with 5% average error, where λMC is the value of λ from the simplified equation, the hazardous event frequency (H) is the probabilistic frequency of the hazardous event, and P is the Probability of Failure on Demand (PFD) of the Independent Protection Layers (IPLs). The equation generated from this research can correct the SIL of a SIF for various H and P. Therefore, the SIL design problem is solved, and an appropriate SIL is provided.
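
    The corrected failure-rate equation quoted above transcribes directly into code (units as given in the abstract; the example values are illustrative):

    ```python
    # Direct transcription of the paper's corrected failure-rate equation:
    # lambda = 0.9428 * lambda_MC + 1.062e-4 * H / P, where lambda_MC comes
    # from the simplified Maclaurin-series equation, H is the hazardous event
    # frequency and P is the PFD of the independent protection layers.

    def corrected_lambda(lambda_mc, hazard_freq_per_hr, pfd_ipl):
        return 0.9428 * lambda_mc + 1.062e-4 * hazard_freq_per_hr / pfd_ipl

    print(corrected_lambda(1e-6, 1e-4, 0.1))  # illustrative inputs only
    ```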

  9. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed the 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code, which now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast- and slow-scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to the stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and these may be ameliorated by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  10. Attempt to Determine the Prevalence of Two Inborn Errors of Primary Bile Acid Synthesis : Results of a European Survey

    NARCIS (Netherlands)

    Jahnel, Jörg; Zöhrer, Evelyn; Fischler, Björn; D'Antiga, Lorenzo; Debray, Dominique; Dezsofi, Antal; Haas, Dorothea; Hadzic, Nedim; Jacquemin, Emmanuel; Lamireau, Thierry; Maggiore, Giuseppe; McKiernan, Pat J; Calvo, Pier Luigi; Verkade, Henkjan J; Hierro, Loreto; McLin, Valerie; Baumann, Ulrich; Gonzales, Emmanuel

    2017-01-01

    Objective: Inborn errors of primary bile acid (BA) synthesis are genetic cholestatic disorders leading to accumulation of atypical BA with deficiency of normal BA. Unless treated with primary BA, chronic liver disease usually progresses to cirrhosis and liver failure before adulthood. We sought to

  11. Why are autopsy rates low in Japan? Views of ordinary citizens and doctors in the case of unexpected patient death and medical error.

    Science.gov (United States)

    Maeda, Shoichi; Kamishiraki, Etsuko; Starkey, Jay; Ikeda, Noriaki

    2013-01-01

    This article examines what could account for the low autopsy rate in Japan based on the findings from an anonymous, self-administered, structured questionnaire that was given to a sample population of the general public and physicians in Japan. The general public and physicians indicated that autopsy may not be carried out because: (1) conducting an autopsy might result in the accusation that patient death was caused by a medical error even when there was no error (50.4% vs. 13.1%, respectively), (2) suggesting an autopsy makes the families suspicious of a medical error even when there was none (61.0% vs. 19.1%, respectively), (3) families do not want the body to be damaged by autopsy (81.6% vs. 87.3%, respectively), and (4) families do not want to make the patient suffer any more in addition to what he/she has already endured (61.8% vs. 87.1%, respectively). © 2013 American Society for Healthcare Risk Management of the American Hospital Association.

  12. Determining Methane Leak Locations and Rates with a Wireless Network Composed of Low-Cost, Printed Sensors

    Science.gov (United States)

    Smith, C. J.; Kim, B.; Zhang, Y.; Ng, T. N.; Beck, V.; Ganguli, A.; Saha, B.; Daniel, G.; Lee, J.; Whiting, G.; Meyyappan, M.; Schwartz, D. E.

    2015-12-01

    We will present our progress on the development of a wireless sensor network that will determine the source and rate of detected methane leaks. The targeted leak detection threshold is 2 g/min with a rate estimation error of 20% and localization error of 1 m within an outdoor area of 100 m2. The network itself is composed of low-cost, high-performance sensor nodes based on printed nanomaterials with expected sensitivity below 1 ppmv methane. High sensitivity to methane is achieved by modifying high surface-area-to-volume-ratio single-walled carbon nanotubes (SWNTs) with materials that adsorb methane molecules. Because the modified SWNTs are not perfectly selective to methane, the sensor nodes contain arrays of variously-modified SWNTs to build diversity of response towards gases with adsorption affinity. Methane selectivity is achieved through advanced pattern-matching algorithms of the array's ensemble response. The system is low power and designed to operate for a year on a single small battery. The SWNT sensing elements consume only microwatts. The largest power consumer is the wireless communication, which provides robust, real-time measurement data. Methane leak localization and rate estimation will be performed by machine-learning algorithms built with the aid of computational fluid dynamics simulations of gas plume formation. This sensor system can be broadly applied at gas wells, distribution systems, refineries, and other downstream facilities. It also can be utilized for industrial and residential safety applications, and adapted to other gases and gas combinations.
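
    A hedged sketch of the localization and rate-estimation step: fitting a point-source dispersion model to node readings by nonlinear least squares, with a simple isotropic diffusion forward model standing in for the CFD-informed plume models the abstract describes:

    ```python
    # Illustrative source localization/rate estimation; not the project's
    # algorithm. Forward model: steady diffusion c = q / (4*pi*D*r).
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(3)
    nodes = np.array([[0.0, 0], [10, 0], [0, 10], [10, 10], [5, 5.5]])  # sensor x, y in m

    def forward(params, xy):
        x0, y0, q = params                        # source position (m) and rate (arb. units)
        r = np.hypot(xy[:, 0] - x0, xy[:, 1] - y0).clip(min=0.5)  # avoid the singularity
        return q / (4.0 * np.pi * 0.5 * r)        # assumed diffusivity D = 0.5 m^2/s

    true_params = np.array([3.0, 7.0, 2.0])
    readings = forward(true_params, nodes) * (1 + 0.05 * rng.normal(size=len(nodes)))

    fit = least_squares(lambda p: forward(p, nodes) - readings, x0=[5.0, 5.0, 1.0])
    print("estimated source x, y and rate:", fit.x)
    ```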

  13. 12 CFR 617.7125 - How should a qualified lender determine the effective interest rate?

    Science.gov (United States)

    2010-01-01

    ... effective interest rate? 617.7125 Section 617.7125 Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM BORROWER RIGHTS Disclosure of Effective Interest Rates § 617.7125 How should a qualified lender determine the effective interest rate? (a) A qualified lender must calculate the effective interest rate on...

  14. An evaluation of a Low-Dose-Rate (LDR) brachytherapy procedure using a systems engineering & error analysis methodology for health care (SEABH) - (SAVE)

    LENUS (Irish Health Repository)

    Chadwick, Liam

    2012-03-12

    Health Care Failure Modes and Effects Analysis (HFMEA®) is an established tool for risk assessment in health care. A number of deficiencies have been identified in the method. A new method called Systems and Error Analysis Bundle for Health Care (SEABH) was developed to address these deficiencies. SEABH has been applied to a number of medical processes as part of its validation and testing. One of these, Low Dose Rate (LDR) prostate brachytherapy, is reported in this paper. The case study supported the validity of SEABH with respect to its capacity to address the weaknesses of HFMEA®.

  15. Using the global positioning satellite system to determine attitude rates using doppler effects

    Science.gov (United States)

    Campbell, Charles E. (Inventor)

    2003-01-01

    In the absence of a gyroscope, the attitude and attitude rate of a receiver can be determined using signals received by antennae on the receiver. Based on the signals received by the antennae, the Doppler difference between the signals is calculated. The Doppler difference may then be used to determine the attitude rate. With signals received from two signal sources by three antennae pairs, the three-dimensional attitude rate is determined.
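
    As a sketch of the linear algebra involved (not the patented implementation), the Doppler difference across a rotating baseline b for a source along unit line-of-sight u is df = (f0/c) w . (b x u), so stacking antenna pairs and sources gives a least-squares system for the body rate w; all values below are invented.

        import numpy as np

        C, F0 = 2.998e8, 1.57542e9      # speed of light (m/s), GPS L1 carrier (Hz)

        def attitude_rate(baselines, los, dopp_diff):
            A = np.cross(baselines, los)            # rows: b_i x u_i
            rhs = (C / F0) * np.asarray(dopp_diff)  # df scaled to velocity units
            w, *_ = np.linalg.lstsq(A, rhs, rcond=None)
            return w                                # angular rate (rad/s), body axes

        # Three orthogonal 1 m baselines observed against two sources,
        # as in the abstract; round-trip check with a known rate.
        baselines = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]] * 2, float)
        los = np.array([[0.6, 0.8, 0.0]] * 3 + [[0.0, 0.6, 0.8]] * 3)
        w_true = np.array([0.01, -0.02, 0.005])
        dopp = (F0 / C) * np.cross(baselines, los) @ w_true
        print(attitude_rate(baselines, los, dopp))  # recovers w_true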

  16. Choice of reference sequence and assembler for alignment of Listeria monocytogenes short-read sequence data greatly influences rates of error in SNP analyses.

    Directory of Open Access Journals (Sweden)

    Arthur W Pightling

    Full Text Available The wide availability of whole-genome sequencing (WGS and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: i depth of sequencing coverage, ii choice of reference-guided short-read sequence assembler, iii choice of reference genome, and iv whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT, using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming. We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers

  17. Long-run and Short-run Determinants of the Real Exchange Rate in Zambia

    OpenAIRE

    Mkenda, Beatrice Kalinda

    2001-01-01

    The paper analyses the main determinants of the real exchange rate in Zambia. It first gives a brief review of the Zambian economy and a review on real exchange rate studies. Then an illustrative model is presented. The study employs cointegration analysis in estimating the long-run determinants of the real exchange rates for imports and exports, and of the internal real exchange rate. The finding is that terms of trade, government consumption, and investment share all influence the real exchange rate...

  18. Determinants of self-rated health: could health status explain the association between self-rated health and mortality?

    Science.gov (United States)

    Murata, Chiyoe; Kondo, Takaaki; Tamakoshi, Koji; Yatsuya, Hiroshi; Toyoshima, Hideaki

    2006-01-01

    The purpose of this study was to investigate factors related to self-rated health and to mortality among 2490 community-living elderly. Respondents were followed for 7.3 years for all-cause mortality. To compare the relative impact of each variable, we employed logistic regression analysis for self-rated health and Cox hazard analysis for mortality. Cox analysis stratified by gender, follow-up periods, age group, and functional status was also employed. This series of analyses found that the factors associated with self-rated health and with mortality were not identical. Psychological factors such as perceived isolation at home or 'ikigai' (one aspect of psychological well-being) were associated with self-rated health only. Age, functional status, and social relations were associated both with self-rated health and mortality after controlling for possible confounders. Illnesses and functional status accounted for 35-40% of the variance in fair/poor self-rated health. Differences by gender and functional status were observed in the factors related to self-rated health. Overall, the effect of self-rated health on mortality was stronger for people with no functional impairment, for the shorter follow-up period, and for the young-old age group. Although illnesses and functional status were major determinants of self-rated health, economic, psychological, and social factors were also related to it.

  19. The sensitivity of bit error rate (BER) performance in multi-carrier (OFDM) and single-carrier

    Science.gov (United States)

    Albdran, Saleh; Alshammari, Ahmed; Matin, Mohammad

    2012-10-01

    Single-carrier and multi-carrier transmission have recently attracted considerable industrial attention. Theoretically, OFDM, as a multi-carrier technique, has advantages over single-carrier transmission, especially at high data rates. In this paper we will show which of the two techniques outperforms the other. We will study and compare the BER performance of both techniques for a given channel. The BER will be measured and studied as a function of the signal-to-noise ratio (SNR). Also, the peak-to-average power ratio (PAPR) will be examined and presented as a drawback of using OFDM. To make a reasonable comparison between the two techniques, we will use additive white Gaussian noise (AWGN) as the communication channel.
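
    To make the comparison concrete, here is a minimal Monte Carlo sketch (with hypothetical parameters) of QPSK-OFDM over AWGN. Over a pure AWGN channel the OFDM BER matches the single-carrier QPSK theory curve, so differences between the schemes emerge only on dispersive or nonlinear channels.

        import numpy as np
        from scipy.special import erfc

        rng = np.random.default_rng(1)
        N, nsym, ebn0_db = 64, 5000, 6                 # subcarriers, symbols, Eb/N0 (dB)
        ebn0 = 10 ** (ebn0_db / 10)

        bits = rng.integers(0, 2, (nsym, N, 2))
        sym = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)
        tx = np.fft.ifft(sym, axis=1) * np.sqrt(N)     # unitary IFFT, Es = 1

        sigma = np.sqrt(1 / (4 * ebn0))                # N0/2 per real dimension, Eb = Es/2
        rx = tx + sigma * (rng.standard_normal(tx.shape)
                           + 1j * rng.standard_normal(tx.shape))

        eq = np.fft.fft(rx, axis=1) / np.sqrt(N)       # back to subcarrier symbols
        hard = np.stack([eq.real > 0, eq.imag > 0], axis=-1)
        ber = np.mean(hard != (bits == 1))
        print(f"simulated BER {ber:.4f}  theory {0.5 * erfc(np.sqrt(ebn0)):.4f}")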

  20. Evaluation of the effect of noise on the rate of errors and speed of work by the ergonomic test of two-hand co-ordination

    Directory of Open Access Journals (Sweden)

    Ehsanollah Habibi

    2013-01-01

    Full Text Available Background: Among the most important factors affecting the efficiency of the human workforce are accuracy, promptness, and ability. In the context of promoting levels and quality of productivity, the aim of this study was to investigate the effects of exposure to noise on the rate of errors, speed of work, and capability in performing manual activities. Methods: This experimental study was conducted on 96 students (52 female and 44 male) of the Isfahan Medical Science University, with means (standard deviations) of age, height, and weight of 22.81 (3.04) years, 171.67 (8.51) cm, and 65.05 (13.13) kg, respectively. Sampling was conducted with a randomized block design. Along with controlling for intervening factors, a combination of sound pressure levels [65 dB(A), 85 dB(A), and 95 dB(A)] and exposure times (0, 20, and 40) was used to evaluate the precision and speed of action of the participants in the ergonomic test of two-hand coordination. Data were analyzed in SPSS 18 using descriptive statistics and repeated-measures analysis of covariance (ANCOVA). Results: The results of this study showed that increasing the sound pressure level from 65 to 95 dB(A) increased the speed of work (P < 0.05). Male participants were more annoyed by the noise than females. Also, an increase in sound pressure level increased the rate of errors (P < 0.05). Conclusions: According to the results of this research, increasing the sound pressure level decreased efficiency and increased errors; under exposure to sounds below 85 dB(A), efficiency decreased initially and then increased with a mild slope.

  1. Error monitoring issues for common channel signaling

    Science.gov (United States)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far, which include the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion of their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
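
    For readers unfamiliar with the mechanism being analyzed, the following is a minimal sketch of a leaky-bucket signal-unit error-rate monitor of the kind SS7 uses; the threshold and leak interval shown (T = 64, D = 256) are the commonly cited SUERM defaults and should be treated as assumptions here.

        def suerm(error_flags, threshold=64, leak_interval=256):
            """Leaky-bucket error monitor: one boolean flag per received
            signal unit; returns the unit count at changeover, or None."""
            count = received = 0
            for errored in error_flags:
                received += 1
                if errored:
                    count += 1                  # fill: one per errored unit
                if received % leak_interval == 0 and count > 0:
                    count -= 1                  # leak: forgive one error per D units
                if count >= threshold:
                    return received             # changeover: link taken out of service
            return None                          # link survives the observed stream

        # e.g. a sustained 1% error rate eventually trips the monitor:
        import random
        random.seed(0)
        print(suerm(random.random() < 0.01 for _ in range(10 ** 6)))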

  2. Downlink Error Rates of Half-duplex Users in Full-duplex Networks over a Laplacian Inter-User Interference Limited and EGK fading

    KAUST Repository

    Soury, Hamza

    2017-03-14

    This paper develops a mathematical framework to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). The developed model is used to motivate long-term pairing for users that have a non-line-of-sight (NLOS) interfering link. Consequently, we study the interference-limited problem that appears between NLOS HD user pairs that are scheduled on the same FD channel. The distribution of the interference is first characterized via its distribution function, which is derived in closed form. Then, a comprehensive performance assessment for the proposed pairing scheme is provided by assuming Extended Generalized-K (EGK) fading for the downlink and studying different modulation schemes. To this end, a unified closed-form expression for the average symbol error rate is derived. Furthermore, we show the effective downlink throughput gain harvested by pairing NLOS users as a function of the average signal-to-interference ratio when compared to an idealized HD scenario with neither interference nor noise. Finally, we show the minimum required channel-gain pairing threshold to harvest downlink throughput via the FD operation when compared to the HD case for each modulation scheme.

  3. Improved read disturb and write error rates in voltage-control spintronics memory (VoCSM) by controlling energy barrier height

    Science.gov (United States)

    Inokuchi, T.; Yoda, H.; Kato, Y.; Shimizu, M.; Shirotori, S.; Shimomura, N.; Koi, K.; Kamiguchi, Y.; Sugiyama, H.; Oikawa, S.; Ikegami, K.; Ishikawa, M.; Altansargai, B.; Tiwari, A.; Ohsawa, Y.; Saito, Y.; Kurobe, A.

    2017-06-01

    A hybrid writing scheme that combines the spin Hall effect and voltage-controlled magnetic-anisotropy effect is investigated in Ta/CoFeB/MgO/CoFeB/Ru/CoFe/IrMn junctions. The write current and control voltage are applied to Ta and CoFeB/MgO/CoFeB junctions, respectively. The critical current density required for switching the magnetization in CoFeB was modulated 3.6-fold by changing the control voltage from -1.0 V to +1.0 V. This modulation of the write current density is explained by the change in the surface anisotropy of the free layer from 1.7 mJ/m^2 to 1.6 mJ/m^2, which is caused by the electric field applied to the junction. The read disturb rate and write error rate, which are important performance parameters for memory applications, are drastically improved, and no error was detected in 5 × 10^8 cycles by controlling read and write sequences.

  4. Global minimum profile error (GMPE) - a least-squares-based approach for extracting macroscopic rate coefficients for complex gas-phase chemical reactions.

    Science.gov (United States)

    Duong, Minh V; Nguyen, Hieu T; Mai, Tam V-T; Huynh, Lam K

    2018-01-03

    Master equation/Rice-Ramsperger-Kassel-Marcus (ME/RRKM) has been shown to be a powerful framework for modeling the kinetic and dynamic behavior of a complex gas-phase chemical system on a complicated multiple-species and multiple-channel potential energy surface (PES) for a wide range of temperatures and pressures. Derived from the ME time-resolved species profiles, the macroscopic or phenomenological rate coefficients are essential for many reaction engineering applications, including those in combustion and atmospheric chemistry. Therefore, in this study, a least-squares-based approach named Global Minimum Profile Error (GMPE) was proposed and implemented in the MultiSpecies-MultiChannel (MSMC) code (Int. J. Chem. Kinet., 2015, 47, 564) to extract macroscopic rate coefficients for such a complicated system. The capability and limitations of the new approach were discussed in several well-defined test cases.
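
    In miniature, the GMPE idea is ordinary least squares on simulated species profiles. The sketch below uses toy single-channel data in place of real ME output and recovers a first-order rate coefficient by minimizing the global profile error.

        import numpy as np
        from scipy.optimize import least_squares

        t = np.linspace(0, 10, 50)
        rng = np.random.default_rng(1)
        # Stand-in for a ME time-resolved profile of [A] for A -> B, k = 0.7
        profile = np.exp(-0.7 * t) + 0.01 * rng.standard_normal(t.size)

        # Fit the phenomenological rate coefficient by minimizing the
        # global profile error over all time points at once.
        k = least_squares(lambda k: np.exp(-k[0] * t) - profile, x0=[0.1]).x[0]
        print(f"k = {k:.3f} (true 0.7)")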

  5. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,

  6. Determinants Factors of Interest Rates on Three-Month Deposits of Bank Persero

    Directory of Open Access Journals (Sweden)

    Tedy Kurniawan

    2017-03-01

    Full Text Available This research aims at analyzing the influence of the Capital Adequacy Ratio (CAR), Operating Expenses to Operating Income (BOPO), inflation, the exchange rate, and the money supply (M1) on the interest rate of three-month deposits of state-owned banks in Indonesia in 2007-2015. This research uses error correction model analysis. The results show that CAR has a significant effect in the long term but none in the short term; BOPO has a significant influence in both the long and short term; inflation has a significant effect in the long term but none in the short term; the exchange rate has an influence in both the short and long term; and the money supply has no effect on the three-month deposit rate of the state-owned banks in either the short or long term.
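
    The two-step (Engle-Granger style) error correction model referenced above can be sketched as follows; the data here are synthetic stand-ins for the paper's series, and statsmodels is an assumed dependency.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        X = rng.standard_normal((120, 5)).cumsum(axis=0)   # I(1) stand-ins for
        y = X @ [0.2, 0.3, 0.1, 0.15, 0.05] \
            + 0.5 * rng.standard_normal(120)               # CAR, BOPO, etc.

        long_run = sm.OLS(y, sm.add_constant(X)).fit()     # cointegrating regression
        ect = long_run.resid                               # error-correction term

        dy, dX = np.diff(y), np.diff(X, axis=0)            # short-run dynamics
        rhs = sm.add_constant(np.column_stack([dX, ect[:-1]]))
        print(sm.OLS(dy, rhs).fit().params)                # last coef: adjustment speed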

  7. 48 CFR 22.1002-2 - Wage determinations based on prevailing rates.

    Science.gov (United States)

    2010-10-01

    ... on prevailing rates. 22.1002-2 Section 22.1002-2 Federal Acquisition Regulations System FEDERAL... Contract Act of 1965, as Amended 22.1002-2 Wage determinations based on prevailing rates. Contractors... Department of Labor to prevail in the locality or, in the absence of a wage determination, the minimum wage...

  8. Engineering task plan for determining breathing rates in single shell tanks using tracer gas

    International Nuclear Information System (INIS)

    Andersen, J.A.

    1997-01-01

    This engineering task plan covers the testing of single-shell tanks to determine breathing rates. The inert tracer gases helium and sulfur hexafluoride will be injected into tanks AX-103, BY-105, C-107, and U-103. Periodic samples will be taken over a three-month interval to determine actual headspace breathing rates.
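
    Assuming a well-mixed headspace and simple exponential dilution of the tracer (an assumption, since the plan itself gives no analysis method), the breathing rate follows directly from the measured tracer decay.

        import numpy as np

        def breathing_rate(c0, c_t, days, headspace_m3):
            """Air exchange rate (m3/day) from tracer decay, assuming a
            well-mixed headspace: C(t) = C0 * exp(-Q * t / V)."""
            return headspace_m3 / days * np.log(c0 / c_t)

        # e.g. SF6 falling from 100 to 80 ppb over 30 days in a 2000 m3 headspace
        print(breathing_rate(100.0, 80.0, 30.0, 2000.0))  # ~14.9 m3/day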

  9. Fieldable computer system for determining gamma-ray pulse-height distributions, flux spectra, and dose rates from Little Boy

    International Nuclear Information System (INIS)

    Moss, C.E.; Lucas, M.C.; Tisinger, E.W.; Hamm, M.E.

    1984-01-01

    Our system consists of a LeCroy 3500 data acquisition system with a built-in CAMAC crate and eight bismuth-germanate detectors 7.62 cm in diameter and 7.62 cm long. Gamma-ray pulse-height distributions are acquired simultaneously for up to eight positions. The system was very carefully calibrated and characterized from 0.1 to 8.3 MeV using gamma-ray spectra from a variety of radioactive sources. By fitting the pulse-height distributions from the sources with a function containing 17 parameters, we determined theoretical response functions. We use these response functions to unfold the distributions to obtain flux spectra. A flux-to-dose-rate conversion curve based on the work of Dimbylow and Francis is then used to obtain dose rates. Direct use of measured spectra and flux-to-dose-rate curves to obtain dose rates avoids the errors that can arise from spectrum dependence in simple gamma-ray dosimeter instruments. We present some gamma-ray doses for the Little Boy assembly operated at low power. These results can be used to determine the exposures of the Hiroshima survivors and thus aid in the establishment of radiation exposure limits for the nuclear industry
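
    The final folding step (dose rate from an unfolded flux spectrum and a flux-to-dose-rate curve) amounts to a weighted integral over energy; all numbers in the sketch below are placeholders, not the paper's calibration data.

        import numpy as np

        E = np.array([0.1, 0.5, 1.0, 2.0, 4.0, 8.0])       # MeV (placeholder grid)
        flux = np.array([3e4, 2e4, 1e4, 5e3, 1e3, 1e2])    # photons/cm2/s/MeV
        conv = np.array([0.5, 1.0, 1.9, 3.0, 4.5, 7.0])    # dose per unit fluence rate

        # Dose rate = integral of flux(E) * conversion(E) dE
        dose_rate = np.trapz(flux * conv, E)
        print(dose_rate)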

  10. 26 CFR 1.430(h)(2)-1 - Interest rates used to determine present value.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 5 2010-04-01 2010-04-01 false Interest rates used to determine present value... to the interest rates to be applied for a plan year under section 430(h)(2). Section 430(h)(2) and... defined in section 414(f)). Paragraph (b) of this section describes how the segment interest rates are...

  11. Violation of the Sphericity Assumption and Its Effect on Type-I Error Rates in Repeated Measures ANOVA and Multi-Level Linear Models (MLM).

    Science.gov (United States)

    Haverkamp, Nicolas; Beauducel, André

    2017-01-01

    We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodical approaches of repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations without any between-group effect or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS), and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results support the use of rANOVA with Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small, and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The

  12. The human error rate assessment and optimizing system HEROS - a new procedure for evaluating and optimizing the man-machine interface in PSA

    International Nuclear Information System (INIS)

    Richei, A.; Hauptmanns, U.; Unger, H.

    2001-01-01

    A new procedure allowing the probabilistic evaluation and optimization of the man-machine system is presented. This procedure and the resulting expert system HEROS, an acronym for Human Error Rate Assessment and Optimizing System, are based on fuzzy set theory. Most of the well-known procedures employed for the probabilistic evaluation of human factors involve the use of vague linguistic statements on performance shaping factors to select and to modify basic human error probabilities from the associated databases. This implies a large portion of subjectivity. Vague statements are expressed here in terms of fuzzy numbers or intervals, which allow mathematical operations to be performed on them. A model of the man-machine system is the basis of the procedure. A fuzzy rule-based expert system was derived from ergonomic and psychological studies. Hence, it does not rely on a database whose transferability to situations different from its origin is questionable. In this way, subjective elements are eliminated to a large extent. HEROS facilitates the importance analysis for the evaluation of human factors, which is necessary for optimizing the man-machine system. HEROS is applied to the analysis of a simple diagnosis task of the operating personnel in a nuclear power plant

  13. The determination of capital controls: Which role do exchange rate regimes play?

    OpenAIRE

    von Hagen, Jürgen; Zhou, Jizhong

    2003-01-01

    This paper investigates the role of exchange rate regime choices in the determination of capital controls in transition economies. We first use a simultaneous equations model to allow direct interactions between decisions on capital controls and on exchange rate regimes. We find that exchange rate regime choices strongly influence the imposition or removal of capital controls, but the feed-back effect is weak. We further estimate a single equation model for capital controls with exchange rate...

  14. Evaluation of drug administration errors in a teaching hospital

    Directory of Open Access Journals (Sweden)

    Berdot Sarah

    2012-03-01

    Full Text Available Abstract Background Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were the number, type and clinical importance of errors and associated risk factors. The drug administration error rate was calculated with and without wrong-time errors. Relationships between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Results Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations with one or more errors (430 errors in total) were detected (27.6%). There were 312 wrong-time errors, ten of them simultaneous with another type of error, resulting in an error rate without wrong-time errors of 7.5% (113/1501). The most frequently administered drugs were cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with the drug administration route, the drug classification (ATC) and the number of patients under the nurse's care. Conclusion Medication administration errors are frequent. The identification of their determinants helps to undertake targeted interventions.

  15. Simplified method of "push-pull" test data analysis for determining in situ reaction rate coefficients

    International Nuclear Information System (INIS)

    Haggerty, R.; Schroth, M.H.; Istok, J.D.

    1998-01-01

    The single-well, "push-pull" test method is useful for obtaining information on a wide variety of aquifer physical, chemical, and microbiological characteristics. A push-pull test consists of the pulse-type injection of a prepared test solution into a single monitoring well followed by the extraction of the test solution/ground water mixture from the same well. The test solution contains a conservative tracer and one or more reactants selected to investigate a particular process. During the extraction phase, the concentrations of tracer, reactants, and possible reaction products are measured to obtain breakthrough curves for all solutes. This paper presents a simplified method of data analysis that can be used to estimate a first-order reaction rate coefficient from these breakthrough curves. Rate coefficients are obtained by fitting a regression line to a plot of normalized concentrations versus elapsed time, requiring no knowledge of aquifer porosity, dispersivity, or hydraulic conductivity. A semi-analytical solution to the advection-dispersion equation is derived and used in a sensitivity analysis to evaluate the ability of the simplified method to estimate reaction rate coefficients in simulated push-pull tests in a homogeneous, confined aquifer with a fully penetrating injection/extraction well and varying porosity, dispersivity, test duration, and reaction rate. A numerical flow and transport code (SUTRA) is used to evaluate the ability of the simplified method to estimate reaction rate coefficients in simulated push-pull tests in a heterogeneous, unconfined aquifer with a partially penetrating well. In all cases the simplified method provides accurate estimates of reaction rate coefficients; estimation errors ranged from 0.1 to 8.9% with most errors less than 5%
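
    A minimal version of the described regression analysis, assuming the reactant breakthrough is normalized by the conservative tracer so that dilution cancels; the values below are invented for illustration.

        import numpy as np

        # Extraction-phase data: relative concentrations C/C0
        t = np.array([1.0, 2.0, 4.0, 6.0, 8.0])             # elapsed time (h)
        tracer = np.array([0.90, 0.75, 0.52, 0.36, 0.25])   # dilution only
        react = np.array([0.80, 0.59, 0.33, 0.18, 0.10])    # dilution + reaction

        # Normalizing by the tracer cancels dilution; for a first-order
        # reaction the log ratio is linear in time with slope -k.
        k = -np.polyfit(t, np.log(react / tracer), 1)[0]
        print(f"first-order rate coefficient k = {k:.3f} per hour")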

  16. Systematic errors in the determination of the spectroscopic g-factor in broadband ferromagnetic resonance spectroscopy: A proposed solution

    Science.gov (United States)

    Gonzalez-Fuentes, C.; Dumas, R. K.; García, C.

    2018-01-01

    A theoretical and experimental study of the influence of small offsets of the magnetic field (δH) on the measurement accuracy of the spectroscopic g-factor (g) and saturation magnetization (Ms) obtained by broadband ferromagnetic resonance (FMR) measurements is presented. The random nature of δH generates systematic and opposite-sign deviations of the values of g and Ms with respect to their true values. A δH on the order of a few Oe leads to a ~10% error of g and Ms for a typical range of frequencies employed in broadband FMR experiments. We propose a simple experimental methodology to significantly minimize the effect of δH on the fitted values of g and Ms, eliminating their apparent dependence on the range of frequencies employed. Our method was successfully tested using broadband FMR measurements on a 5 nm thick Ni80Fe20 film for frequencies ranging between 3 and 17 GHz.
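
    The paper's remedy is experimental, but the role of δH is easy to see numerically: include a field offset as a free parameter when fitting the in-plane Kittel relation. The sketch below (synthetic data, assumed in-plane geometry) is an illustration, not the authors' procedure.

        import numpy as np
        from scipy.optimize import curve_fit

        GHZ_PER_T = 13.996e9      # Hz/T per unit g-factor (mu_B / h)

        # In-plane Kittel relation with the field offset dB as a fit parameter;
        # fields in tesla, mu0Meff = mu_0 * M_eff.
        def kittel(B, g, mu0Meff, dB):
            Beff = B + dB
            return g * GHZ_PER_T * np.sqrt(Beff * (Beff + mu0Meff))

        B = np.linspace(0.02, 0.30, 15)                 # nominal resonance fields
        f = kittel(B, 2.11, 1.0, 5e-4)                  # synthetic data, 0.5 mT offset
        popt, _ = curve_fit(kittel, B, f, p0=[2.0, 0.8, 0.0])
        print("g, mu0*Meff (T), offset (T):", popt)     # offset absorbed, g unbiased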

  17. Methane combustion kinetic rate constants determination: an ill-posed inverse problem analysis

    Directory of Open Access Journals (Sweden)

    Bárbara D. L. Ferreira

    2013-01-01

    Full Text Available Methane combustion was studied with the Westbrook and Dryer model. This well-established simplified mechanism is very useful in combustion science, as it notably reduces computational effort. In the inversion procedure studied here, rate constants are obtained from [CO] concentration data. However, when inherent experimental errors in chemical concentrations are considered, an ill-conditioned inverse problem must be solved, for which appropriate mathematical algorithms are needed. A recurrent neural network was chosen due to its numerical stability and robustness. The proposed methodology was compared against Simplex and Levenberg-Marquardt, the most widely used methods for optimization problems.
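
    For comparison with the classical baseline mentioned above, a Levenberg-Marquardt fit of two rate constants to noisy [CO] data can be sketched as follows; the two-step chain here is a toy stand-in for the Westbrook-Dryer mechanism.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        t_obs = np.linspace(0.1, 5, 25)

        # Toy chain fuel -> CO -> CO2; y = [fuel, CO, CO2]
        def co_profile(k):
            rhs = lambda t, y: [-k[0] * y[0], k[0] * y[0] - k[1] * y[1], k[1] * y[1]]
            return solve_ivp(rhs, (0, 5), [1.0, 0.0, 0.0], t_eval=t_obs).y[1]

        data = co_profile([1.2, 0.5])
        data = data + 0.01 * np.random.default_rng(3).standard_normal(data.size)

        fit = least_squares(lambda k: co_profile(k) - data,
                            x0=[0.5, 0.2], method="lm")   # Levenberg-Marquardt
        print("recovered k1, k2:", fit.x)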

  18. A multiphase flow meter for the on-line determination of the flow rates of oil, water and gas

    International Nuclear Information System (INIS)

    Roach, G.J.; Watt, J.S.

    1997-01-01

    Multiphase mixtures of crude oil, formation water and gas are carried in pipelines from oil wells to production facilities. Multiphase flow meters (MFMs) are being developed to determine the flow rates of each component of the heterogeneous mixture in the pipeline. CSIRO Minerals has developed and field tested a gamma-ray MFM for the on-line determination of the flow rates of heterogeneous mixtures of oil, water and gas in pipelines. It consists of two specialised gamma-ray transmission gauges, and pressure and temperature sensors, mounted on the pipeline carrying the full flow of the production stream. The MFM separately measures liquids and gas flow rates, and the volume ratio of water and liquids (water cut). The MFM has been trialled at three offshore production facilities in Australia. In each, the MFM was mounted on the pipeline between the test manifold and the test separator. The multiphase streams from the various wells feeding the platform were sequentially routed past the MFM. The MFM and test separator outputs were compared using regression analysis. The flow rates of oil, water and gas were each determined with relative errors in the range of 5-10%. The MFM has been in routine use on the West Kingfish platform in the Bass Strait since November 1994. The MFM was recently tested over a wide range of flow conditions at a Texaco flow facility near Houston. Water cut, based on pre-trial calibration, was determined to 2% rms over the range 0-100% water cut. The liquids and gas flow results were interpreted based on slip correlations obtained from comparison of the MFM and Texaco flows. Using these, the relative errors were 6.6% for liquid flow, 6.2% for gas, 8% for oil, and 8% for water. The MFM is licensed to Kvaerner FSSL of Aberdeen. Kvaerner will supply the gamma-ray MFM for both platform and subsea use. Technology transfer commenced in December 1996, and Kvaerner completed the manufacture of the first MFM in August 1997

  19. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Moreover, at spectacular events a combination of component failure and human error is often found. The Rasmussen Report and the German Risk Assessment Study in particular show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if a thorough training of personnel is combined with an adequate design of the plant against accidents. Contrary to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  20. Analyzing the propagation behavior of scintillation index and bit error rate of a partially coherent flat-topped laser beam in oceanic turbulence.

    Science.gov (United States)

    Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh

    2015-11-01

    In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression for describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak to moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strengths of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and, hence, on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness is found to have lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.
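
    The final step (averaging an AWGN error kernel over the log-normal intensity PDF) can be sketched numerically; the on-off-keying kernel and SNR convention below are assumptions for illustration, not the paper's exact expressions.

        import numpy as np
        from scipy.integrate import quad
        from scipy.special import erfc

        def avg_ber(snr, si):
            """Mean BER of an intensity-modulated link under log-normal fading:
            average the AWGN kernel over the irradiance PDF (unit mean
            irradiance, scintillation index si)."""
            s2 = np.log(1 + si)                        # log-irradiance variance
            def f(I):
                pdf = np.exp(-(np.log(I) + s2 / 2) ** 2 / (2 * s2)) \
                      / (I * np.sqrt(2 * np.pi * s2))
                return 0.5 * erfc(snr * I / (2 * np.sqrt(2))) * pdf
            return quad(f, 1e-12, np.inf)[0]

        print(avg_ber(snr=10.0, si=0.2))               # BER rises with si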

  1. Social Determinants of Overweight and Obesity Rates by Elementary School in a Predominantly Hispanic School District.

    Science.gov (United States)

    Santos, Richard; Huerta, Gabriel; Karki, Menuka; Cantarero, Andrea

    This study analyzes the social determinants associated with the overweight or obesity prevalence of 85 elementary schools during the 2010-11 academic year in a predominantly Hispanic school district. A binomial logistic regression is used to analyze the aggregate overweight or obesity rate of a school by the percent of Hispanic students in each school, selected school and neighborhood characteristics, and its geographical location. The proportion of Hispanic enrollment more readily explains a school's aggregate overweight or obesity rate than social determinants or spatial location. Number of fast food establishments and the academic ranking of a school appear to slightly impact the aggregate prevalence rate. Spatial location of school is not a significant factor, controlling for other determinants. An elementary school's overall overweight or obesity rate provides a valuable health indicator to study the social determinants of obesity among Hispanics and other students within a local neighborhood. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Determinants of the ZAR/USD exchange rate and policy implications: A simultaneous-equation model

    Directory of Open Access Journals (Sweden)

    Yu Hsing

    2016-12-01

    Full Text Available This paper examines the determinants of the South African rand/US dollar (ZAR/USD exchange rate based on demand and supply analysis. Applying the EGARCH method, the paper finds that the ZAR/USD exchange rate is positively associated with the South African government bond yield, US real GDP, the US stock price and the South African inflation rate and negatively influenced by the 10-year US government bond yield, South African real GDP, the South African stock price, and the US inflation rate. The adoption of a free floating exchange rate regime has reduced the value of the rand vs. the US dollar.
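
    An EGARCH fit of the kind described can be sketched with the third-party arch package (an assumed dependency); the ZAR/USD series below is synthetic, and the paper's explanatory variables would enter as exogenous regressors in the mean equation.

        import numpy as np
        import pandas as pd
        from arch import arch_model

        rng = np.random.default_rng(0)
        # Synthetic stand-in for daily ZAR/USD levels
        rate = pd.Series(14 * np.exp(np.cumsum(0.006 * rng.standard_normal(1000))))
        returns = 100 * np.log(rate).diff().dropna()       # log-returns in percent

        model = arch_model(returns, mean="Constant", vol="EGARCH", p=1, o=1, q=1)
        print(model.fit(disp="off").summary())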

  3. Study of systematic errors in the determination of total Hg levels in the range -5% in inorganic and organic matrices with two reliable spectrometrical determination procedures

    International Nuclear Information System (INIS)

    Kaiser, G.; Goetz, D.; Toelg, G.; Max-Planck-Institut fuer Metallforschung, Stuttgart; Knapp, G.; Maichin, B.; Spitzy, H.

    1978-01-01

    In the determination of Hg at ng/g and pg/g levels, systematic errors are due to faults in the analytical methods, such as intake, preparation and decomposition of a sample. The sources of these errors have been studied both with 203Hg radiotracer techniques and with two multi-stage procedures developed for the determination of trace levels. The emission spectrometric (OES-MIP) procedure includes incineration of the sample in a microwave-induced oxygen plasma (MIP), isolation and enrichment on a gold absorbent, and excitation in an argon plasma (MIP). The emitted Hg radiation (253.7 nm) is evaluated photometrically with a semiconductor element. The detection limit of the OES-MIP procedure was found to be 0.01 ng, the coefficient of variation 5% for 1 ng Hg. The second procedure combines a semi-automated wet digestion method (HClO3/HNO3) with reduction-aeration (ascorbic acid/SnCl2), and the flameless atomic absorption technique (253.7 nm). The detection limit of this procedure was found to be 0.5 ng, the coefficient of variation 5% for 5 ng Hg. (orig.) [de]

  4. Errors and discrepancies in the administration of intravenous infusions: a mixed methods multihospital observational study

    OpenAIRE

    Lyons, I.; Furniss, D.; Blandford, A.; Chumbley, G.; Iacovides, I.; Wei, L.; Cox, A.; Mayer, A.; Vos, J.; Galal-Edeen, G. H.; Schnock, K. O.; Dykes, P. C.; Bates, D. W.; Franklin, B. D.

    2018-01-01

    INTRODUCTION: Intravenous medication administration has traditionally been regarded as error prone, with high potential for harm. A recent US multisite study revealed few potentially harmful errors despite a high overall error rate. However, there is limited evidence about infusion practices in England and how they relate to prevalence and types of error. OBJECTIVES: To determine the prevalence, types and severity of errors and discrepancies in infusion administration in English hospitals, an...

  5. An inverse modeling procedure to determine particle growth and nucleation rates from measured aerosol size distributions

    Directory of Open Access Journals (Sweden)

    B. Verheggen

    2006-01-01

    Full Text Available Classical nucleation theory is unable to explain the ubiquity of nucleation events observed in the atmosphere. This shows a need for an empirical determination of the nucleation rate. Here we present a novel inverse modeling procedure to determine particle nucleation and growth rates based on consecutive measurements of the aerosol size distribution. The particle growth rate is determined by regression analysis of the measured change in the aerosol size distribution over time, taking into account the effects of processes such as coagulation, deposition and/or dilution. This allows the growth rate to be determined with a higher time resolution than can be deduced from inspecting contour plots ("banana plots"). Knowing the growth rate as a function of time enables the evaluation of the time of nucleation of measured particles of a certain size. The nucleation rate is then obtained by integrating the particle losses from time of measurement to time of nucleation. The regression analysis can also be used to determine or verify the optimum value of other parameters of interest, such as the wall loss or coagulation rate constants. As an example, the method is applied to smog chamber measurements. This program offers a powerful interpretive tool to study empirical aerosol population dynamics in general, and nucleation and growth in particular.

  6. 75 FR 79308 - Alcohol and Drug Testing: Determination of Minimum Random Testing Rates for 2011

    Science.gov (United States)

    2010-12-20

    ...-11213, Notice No. 14] Alcohol and Drug Testing: Determination of Minimum Random Testing Rates for 2011... random testing positive rates were .037 percent for drugs and .014 percent for alcohol. Because the... effective December 20, 2010. FOR FURTHER INFORMATION CONTACT: Lamar Allen, Alcohol and Drug Program Manager...

  7. 77 FR 75896 - Alcohol and Drug Testing: Determination of Minimum Random Testing Rates for 2013

    Science.gov (United States)

    2012-12-26

    ...-11213, Notice No. 16] Alcohol and Drug Testing: Determination of Minimum Random Testing Rates for 2013...., Washington, DC 20590, (telephone 202-493- 1342); or Kathy Schnakenberg, FRA Alcohol/Drug Program Specialist... from FRA's Management Information System, the rail industry's random drug testing positive rate has...

  8. Forsmark - System 522. Recursive linear regression for the determination of heating rate

    International Nuclear Information System (INIS)

    Carlsson, B.

    1980-01-01

    The heating rate for the reactor tank and steam tubes is limited. The heating-rate algorithm has been implemented on the computer and compared with real data from Forsmark-2. The evaluation of the data shows a considerable improvement in the determination of the derivative, which contributes useful information during heating events. (G.B.)
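
    Recursive linear regression for a heating rate is essentially recursive least squares on a straight-line model; a minimal sketch follows, with an illustrative forgetting factor and synthetic data rather than the Forsmark implementation.

        import numpy as np

        def recursive_slope(times, temps, lam=0.98):
            """Recursive least squares for the local heating rate dT/dt,
            with exponential forgetting factor lam."""
            theta = np.zeros(2)                    # [offset, slope]
            P = np.eye(2) * 1e4                    # large initial covariance
            for t, T in zip(times, temps):
                x = np.array([1.0, t])
                K = P @ x / (lam + x @ P @ x)      # gain
                theta = theta + K * (T - x @ theta)
                P = (P - np.outer(K, x) @ P) / lam
                yield theta[1]                     # current heating-rate estimate

        times = np.arange(0, 60, 1.0)              # minutes
        temps = 20 + 0.8 * times \
            + np.random.default_rng(2).normal(0, 0.2, times.size)
        print(list(recursive_slope(times, temps))[-1])   # ~0.8 deg/min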

  9. Exchange Rate Exposure and Its Determinants: A Firm and Industry Analysis of the UK Companies

    OpenAIRE

    He, Jiao

    2010-01-01

    This study assesses whether unexpected exchange rate movements add volatility to UK firms' stock returns, based on firm- and industry-level analysis, and examines whether the magnitude of the exchange rate exposure is determined by firm-specific factors. Using a sample of 244 UK companies listed in the FTSE 350 during the test period between December 1999 and December 2009, the results document that exchange rate fluctuations do affect firm value. Among the five introduced ex...

  10. 42 CFR 412.328 - Determining and updating the hospital-specific rate.

    Science.gov (United States)

    2010-10-01

    ... of stay for each transfer case by the geometric mean length of stay for the DRG (but in no case using... rate. (c) Case-mix adjustment—(1) Determining transfer-adjusted case mix value. Step 1: For base year... received as of June 30, 1991 to determine the hospital's transfer-adjusted case-mix value. For base year...

  11. Maternal body size and condition determine calf growth rates in southern right whales

    DEFF Research Database (Denmark)

    Christiansen, Fredrik; Vivier, Fabien; Charlton, Claire

    2018-01-01

    The cost of reproduction is a key parameter determining a species' life history strategy. Despite exhibiting some of the fastest offspring growth rates among mammals, the cost of reproduction in baleen whales is largely unknown since standard field metabolic techniques cannot be applied. We...... quantified the cost of reproduction for southern right whales Eubalaena australis over a 3 mo breeding season. We did this by determining the relationship between calf growth rate and maternal rate of loss in energy reserves, using repeated measurements of body volume obtained from unmanned aerial vehicle...... period, and highlights the importance of sufficient maternal energy reserves for reproduction in this capital breeding species....

  12. Determination of enzyme-substrate dissociation rates by dynamic isotope exchange enhancement experiments

    International Nuclear Information System (INIS)

    Kim, S.C.; Raushel, F.M.

    1986-01-01

    A new method for the determination of dissociation rates of enzyme-substrate complexes has been developed. The rate of exchange of a labeled product back into the substrate is measured during catalysis of the forward reaction, when the forward reaction is kept far from equilibrium by the enzymatic removal of the nonexchanging product. The ratio of the exchange rate and the net rate for product formation is then determined at various concentrations of the exchanging product. A plot of this ratio is a diagnostic indication of the kinetic mechanism and the relative rates of product dissociation from the binary and ternary enzyme complexes. This technique has been applied to the reaction catalyzed by bovine liver argininosuccinate lyase. The ratio of the rate of exchange of fumarate into argininosuccinate and the net rate of product formation was found to increase with the concentration of fumarate but to reach a limit of 3.3. The ratio of rates was half-maximal at 36 mM fumarate. The data have been interpreted to indicate that argininosuccinate lyase has a random kinetic mechanism. The calculated lower limit for the rate of release of arginine from the enzyme-fumarate-arginine complex is 0.35 times as fast as the Vmax in the reverse direction. The rate of release of arginine from the enzyme-arginine binary complex is 210 times faster than Vmax in the reverse direction

  13. Determination of void fraction from source range monitor and mass flow rate data

    International Nuclear Information System (INIS)

    McCormick, R.D.

    1986-09-01

    This is a report on the calculation of the TMI-2 primary coolant system local void fraction from source range neutron flux monitor data and from hot leg mass flowrate meter data during the first 100 minutes of the accident. The methods of calculation of void fraction from the two data sources is explained and the results are compared. It is indicated that the void fraction determined using the mass flowrate data contained an error of unknown magnitude due to the assumption of constant homogeneous volumetric flowrate used in the calculation and required further work. Void fraction determined from the source range monitor data is felt to be usable although an uncertainty analysis has not been performed

  14. Chromatographic determination of the rate and extent of absorption of air pollutants by sea water

    International Nuclear Information System (INIS)

    Nikolakaki, S.; Vassilakos, C.; Katsanos, N.A.

    1994-01-01

    A simple chromatographic method is developed to determine the rate constant for expulsion of an air pollutant from water or its diffusion parameter in the liquid, the rate constant for chemical reaction of the pollutant with water, its mass transfer coefficient in the liquid, and the partition coefficient between liquid water and air. From these physicochemical parameters, the absorption rate by sea water and, therefore, the depletion rate of a polluting substance from the air can be calculated, together with the equilibrium state of this absorption. The method has been applied to nitrogen dioxide being absorbed by triple-distilled water and by sea water, at various temperatures. From the temperature variation of the reaction rate constant and of the partition coefficient, the activation energy for the reaction and the differential heat of solution were determined. (orig.)

  15. Determination and Analysis of Ar-41 Dose Rate Characteristic at Thermal Column of Kartini Reactor

    International Nuclear Information System (INIS)

    Widarto; Sardjono, Y.

    2007-01-01

    Determination and analysis of the Ar-41 dose rate at the thermal column after shutdown of the Kartini reactor has been carried out. Based on the evaluation and analysis, it is concluded that the external dose rate is D = 1.606 × 10^-6 Sv/s and the internal dose rate is 3.429 × 10^-11 Sv/s. This means that if an employee works in the thermal column area for 15 minutes a day, 5 days a week, the dose in a year will be 0.376 Sv, still under the dose limit of 0.5 Sv, so the thermal column facility is a safe working area. (author)
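
    The annual figure follows directly from the external rate and the stated occupancy, assuming 52 working weeks per year (the abstract does not state the week count explicitly):

        # Annual external dose for 15 min/day, 5 days/week at the measured rate
        rate = 1.606e-6                          # Sv/s, from the abstract above
        seconds_per_year = 15 * 60 * 5 * 52      # 15 min/day, 5 days/week, 52 weeks
        print(rate * seconds_per_year)           # ~0.376 Sv, matching the quoted value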

  16. Development of An Optimization Method for Determining Automation Rate in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun; Kim, Jong Hyun

    2014-01-01

    Since automation was introduced in various industrial fields, it has been known that automation provides positive effects, like greater efficiency and fewer human errors, and a negative effect known as out-of-the-loop (OOTL) performance degradation. Thus, before introducing automation in the nuclear field, the positive and negative effects of automation on human operators should be estimated. In this paper, focusing on CPS, an optimization method to find an appropriate proportion of automation is suggested by integrating the proposed cognitive automation rate and the concept of the level of ostracism. The cognitive automation rate estimation method was suggested to express the reduced amount of human cognitive load, and the level of ostracism was suggested to express the difficulty in obtaining information from the automation system and the increased uncertainty of human operators' diagnoses. The maximum proportion of automation that maintains a high level of attention for monitoring the situation is derived by an experiment, and the automation rate is estimated by the suggested estimation method. This is expected to yield an appropriate proportion of automation that avoids the OOTL problem while having maximum efficacy.

  17. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    Science.gov (United States)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics, due to 1) evolution of the official algorithms used to process the data and 2) differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.

  18. [Determination of plasma protein binding rate of arctiin and arctigenin with ultrafiltration].

    Science.gov (United States)

    Han, Xue-Ying; Wang, Wei; Tan, Ri-Qiu; Dou, De-Qiang

    2013-02-01

    To determine the plasma protein binding rates of arctiin and arctigenin, ultrafiltration combined with HPLC was employed with rat plasma and healthy human plasma. The plasma protein binding rates of arctiin with rat plasma at concentrations of 64.29, 32.14, and 16.07 mg x L(-1) were (71.2 +/- 2.0)%, (73.4 +/- 0.61)%, and (78.2 +/- 1.9)%, respectively, while the binding rates of arctiin with healthy human plasma at the same concentrations were (64.8 +/- 3.1)%, (64.5 +/- 2.5)%, and (77.5 +/- 1.7)%, respectively. The plasma protein binding rates of arctigenin with rat plasma at concentrations of 77.42, 38.71, and 19.36 mg x L(-1) were (96.7 +/- 0.41)%, (96.8 +/- 1.6)%, and (97.3 +/- 0.46)%, respectively, while the binding rates of arctigenin with healthy human plasma at the same concentrations were (94.7 +/- 3.1)%, (96.8 +/- 1.6)%, and (97.9 +/- 1.3)%, respectively. The binding rate of arctiin to rat plasma protein was moderate and slightly higher than that to healthy human plasma protein. The plasma protein binding rates of arctigenin to both rat and healthy human plasma are very high.
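
    In an ultrafiltration experiment the binding rate itself is computed from the total and ultrafiltrate (free) concentrations; a minimal helper, with an invented free concentration for illustration:

        def binding_rate(c_total, c_free):
            """Plasma protein binding (%) from ultrafiltration:
            bound fraction = (total - unbound in ultrafiltrate) / total."""
            return 100 * (c_total - c_free) / c_total

        # Hypothetical free concentration of 18.5 mg/L at the highest
        # arctiin level gives ~71%, matching the value reported above.
        print(binding_rate(64.29, 18.5))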

  19. Standard Test Method for Determining Thermal Neutron Reaction Rates and Thermal Neutron Fluence Rates by Radioactivation Techniques

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 The purpose of this test method is to define a general procedure for determining an unknown thermal-neutron fluence rate by neutron activation techniques. It is not practicable to describe completely a technique applicable to the large number of experimental situations that require the measurement of a thermal-neutron fluence rate. Therefore, this method is presented so that the user may adapt to his particular situation the fundamental procedures of the following techniques. 1.1.1 Radiometric counting technique using pure cobalt, pure gold, pure indium, cobalt-aluminum alloy, gold-aluminum alloy, or indium-aluminum alloy. 1.1.2 Standard comparison technique using pure gold, or gold-aluminum alloy, and 1.1.3 Secondary standard comparison techniques using pure indium, indium-aluminum alloy, pure dysprosium, or dysprosium-aluminum alloy. 1.2 The techniques presented are limited to measurements at room temperatures. However, special problems when making thermal-neutron fluence rate measurements in high-...

  20. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  1. Determination of the sterile release rate for stopping growing age-structured populations

    International Nuclear Information System (INIS)

    Barclay, Hugh John

    2016-01-01

    A freely growing age-structured population was modelled for growth and control by sterile male releases. Equilibrium populations yield critical sterile male release rates that would hold the population at equilibrium. It is shown here that these rates may differ from the release rates required to stop a growing population and bring it to an equilibrium. A computer simulation of this population was constructed, and a parameter sensitivity analysis graphed the effects of fertility, mating delay in adult females, net juvenile survivorship, three adult survivorship curves, the time spent in the juvenile stages, and total life span on the required sterile male release rate. The adult survivorship curves had the greatest effect on the required sterile release rate for population elimination. The required release rate was also determined for Ceratitis capitata (Wiedemann) using survivorship and fertility data from a laboratory strain. The concepts of over-flooding ratio and release ratio were discussed and quantified for the cases above. (author)

  2. A Physics-Based Engineering Methodology for Calculating Soft Error Rates of Bulk CMOS and SiGe Heterojunction Bipolar Transistor Integrated Circuits

    Science.gov (United States)

    Fulkerson, David E.

    2010-02-01

    This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new and simple methodology handles the problem with ease by simple SPICE simulations. The methodology accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross section vs. frequency behavior and other subtle effects are also accurately predicted.
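
    The circuit-level ingredient such methodologies rest on is an ion-strike current pulse injected at the struck node; a common choice (assumed here, not taken from the paper) is a double exponential whose time integral equals the deposited charge.

        import numpy as np

        # Double-exponential ion-strike current; parameters are illustrative:
        # q = deposited charge, tau_f/tau_r = fall/rise time constants.
        def strike_current(t, q=50e-15, tau_f=2e-10, tau_r=5e-11):
            i0 = q / (tau_f - tau_r)
            return i0 * (np.exp(-t / tau_f) - np.exp(-t / tau_r))

        t = np.linspace(0, 2e-9, 2001)
        # Integral over the pulse recovers (nearly all of) the deposited charge.
        print("collected charge (fC):", np.trapz(strike_current(t), t) * 1e15)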

  3. Simulator data on human error probabilities

    International Nuclear Information System (INIS)

    Kozinsky, E.J.; Guttmann, H.E.

    1982-01-01

    Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEP) for task elements defined in NUREG/CR-1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes collected, using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determines HEPs for Probabilistic Risk Assessment calculations. It is the only practical experimental source of this data to date

  4. Simulator data on human error probabilities

    International Nuclear Information System (INIS)

    Kozinsky, E.J.; Guttmann, H.E.

    1981-01-01

    Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEP) for task elements defined in NUREG/CR-1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes collected, using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determined HEPs for Probabilistic Risk Assessment calculations. It is the only practical experimental source of this data to date

  5. Determination of the biodegradation rate of asphalt for the Hanford grout vaults

    International Nuclear Information System (INIS)

    Luey, J.; Li, S.W.

    1993-04-01

    Testing was initiated in March 1991 and completed in November 1992 to determine the rate at which asphalt is biodegraded by microorganisms native to the Hanford Site soils. The asphalt tested (AR-6000, US Oil, Tacoma, Washington) is to be used in the construction of a diffusion barrier for the Hanford grout vaults. Experiments to determine asphalt biodegradation rates were conducted using three separate test sets. These test sets were initiated in March 1991, January 1992, and June 1992 and ran for periods of 6 months, 11 months, and 6 months, respectively. The experimental method used was one originally developed by Bartha and Pramer (1965), and further refined by Bowerman et al. (1985), that determined the asphalt biodegradation rate through the measurement of carbon dioxide evolved

  6. Determination of alpha-dose rates and chronostratigraphical study of travertine samples

    International Nuclear Information System (INIS)

    Oufni, L.; University Moulay Ismail, Errachidia; Misdaq, M.A.; Boudad, L.; Kabiri, L.

    2001-01-01

    Uranium and thorium contents in different layers of stratigraphical sedimentary deposits have been evaluated by using LR-115 type II and CR-39 solid state nuclear track detectors (SSNTD). A method has been developed for determining the alpha-dose rates of the sedimentary travertine samples. Using the U/Th dating method, we succeeded in dating a carbonate level sampled in the sedimentary deposits. Correlation between the stratigraphy, alpha-dose rates and age has been investigated. (author)

  7. Comparative evaluation of iohexol and inulin clearance for glomerular filtration rate determinations

    International Nuclear Information System (INIS)

    Lindblad, H.G.; Berg, U.B.

    1994-01-01

    The authors have evaluated iohexol as a filtration marker in 150 children. The clearance of iohexol was compared with that of inulin or with a formula clearance. The single-sample clearance of iohexol showed a good correlation with the clearance of inulin. The clearance of iohexol correlated well with the formula clearance. The optimal blood sampling time for iohexol clearance determinations appears to be between 120 and 180 min after injection, at least in patients with relatively normal filtration rates. It is concluded that iohexol clearance is an accurate method of determining the glomerular filtration rate in clinical practice. 25 refs., 5 figs

  8. Linear and Non-Linear Associations of Gonorrhea Diagnosis Rates with Social Determinants of Health

    Directory of Open Access Journals (Sweden)

    Hazel D. Dean

    2012-09-01

    Identifying how social determinants of health (SDH) influence the burden of disease in communities and populations is critically important to determine how to target public health interventions and move toward health equity. A holistic approach to disease prevention involves understanding the combined effects of individual, social, health system, and environmental determinants on geographic area-based disease burden. Using 2006–2008 gonorrhea surveillance data from the National Notifiable Sexually Transmitted Disease Surveillance and SDH variables from the American Community Survey, we calculated the diagnosis rate for each geographic area and analyzed the associations between those rates and the SDH and demographic variables. The estimated product moment correlation (PMC) between gonorrhea rate and SDH variables ranged from 0.11 to 0.83. Proportions of the population that were black, of minority race/ethnicity, and unmarried, were each strongly correlated with gonorrhea diagnosis rates. The population density, female proportion, and proportion below the poverty level were moderately correlated with gonorrhea diagnosis rate. To better understand relationships among SDH, demographic variables, and gonorrhea diagnosis rates, more geographic area-based estimates of additional variables are required. With the availability of more SDH variables and methods that distinguish linear from non-linear associations, geographic area-based analysis of disease incidence and SDH can add value to public health prevention and control programs.
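
    A minimal sketch of the area-level computation described above, with invented numbers and hypothetical column names standing in for the surveillance counts and American Community Survey fields:

        import pandas as pd
        from scipy.stats import pearsonr, spearmanr

        # Hypothetical area-level records; "cases" stands in for the surveillance
        # counts and the pct_* columns for American Community Survey SDH fields.
        df = pd.DataFrame({
            "cases":             [120, 45, 300, 80],
            "population":        [50000, 30000, 90000, 40000],
            "pct_below_poverty": [18.0, 9.5, 24.0, 12.0],
            "pct_unmarried":     [42.0, 30.0, 55.0, 35.0],
        })
        df["rate_per_100k"] = 1e5 * df["cases"] / df["population"]

        for col in ("pct_below_poverty", "pct_unmarried"):
            r, _ = pearsonr(df["rate_per_100k"], df[col])     # linear association
            rho, _ = spearmanr(df["rate_per_100k"], df[col])  # monotone (non-linear)
            print(f"{col}: Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")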

  9. Determinants of the AUD/USD Exchange Rate and Policy Implications

    OpenAIRE

    Yu Hsing

    2015-01-01

    This paper examines short-run determinants of the Australian dollar/U.S. dollar (AUD/USD) exchange rate based on a simultaneous-equation model. Applying the EGARCH method, the paper finds that the AUD/USD exchange rate is positively associated with the 10-year U.S. real government bond yield, U.S. real GDP, the U.S. real stock price and the expected exchange rate and negatively influenced by the Australian real government bond yield, Australian real GDP, and the real Australian stock price.

  10. Determination of the N2 recombination rate coefficient in the ionosphere

    Science.gov (United States)

    Orsini, N.; Torr, D. G.; Brinton, H. C.; Brace, L. H.; Hanson, W. B.; Hoffman, J. H.; Nier, A. O.

    1977-01-01

    Measurements of aeronomic parameters made by the Atmosphere Explorer-C satellite are used to determine the recombination rate coefficient of N2(+) in the ionosphere. The rate is found to increase significantly with decreasing electron density. Values obtained range from approximately 1.4 x 10 to the -7th to 3.8 x 10 to the -7th cu cm/sec. This variation is explained in a preliminary way in terms of an increase in the rate coefficient with vibrational excitation. Thus, high electron densities depopulate high vibrational levels, reducing the effective recombination rate, whereas low electron densities enhance the population of high vibrational levels, increasing the effective recombination rate.

  11. Inverse method for determining radon diffusion coefficient and free radon production rate of fragmented uranium ore

    International Nuclear Information System (INIS)

    Ye, Yong-jun; Wang, Li-heng; Ding, De-xin; Zhao, Ya-li; Fan, Nan-bin

    2014-01-01

    The radon diffusion coefficient and the free radon production rate are important parameters for describing radon migration in the fragmented uranium ore. In order to determine the two parameters, the pure diffusion migration equation for radon was first established and its analytic solution with the two parameters to be determined was derived. Then, a self-manufactured experimental column was used to simulate the pure diffusion of the radon, the improved scintillation cell method was used to measure the pore radon concentrations at different depths of the column loaded with the fragmented uranium ore, and the nonlinear least square algorithm was used to inversely determine the radon diffusion coefficient and the free radon production rate. Finally, the solution with the two inversely determined parameters was used to predict the pore radon concentrations at some depths of the column, and the predicted results were compared with the measured results. The results show that the predicted results are in good agreement with the measured results and the numerical inverse method is applicable to the determination of the radon diffusion coefficient and the free radon production rate for the fragmented uranium ore. - Highlights: • Inverse method for determining two transport parameters of radon is proposed. • A self-made experimental apparatus is used to simulate radon diffusion process. • Sampling volume and position for measuring radon concentration are optimized. • The inverse results of an experimental sample are verified
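
    The inverse step can be sketched as a nonlinear least-squares fit. The solution form below, C(z) = (α/λ)(1 − exp(−z√(λ/D))), assumes a semi-infinite column with C(0) = 0; the paper's exact analytic solution and boundary conditions are not reproduced in the abstract, and the depth/concentration data are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        LAM = 2.1e-6  # Rn-222 decay constant [1/s]

        def pore_conc(z, D, alpha):
            """Steady-state 1-D diffusion solution, semi-infinite column, C(0)=0."""
            return (alpha / LAM) * (1.0 - np.exp(-z * np.sqrt(LAM / D)))

        z_m = np.array([0.1, 0.3, 0.5, 0.8, 1.2])                  # depths [m]
        c_meas = np.array([2.1e3, 5.6e3, 8.0e3, 1.02e4, 1.15e4])   # [Bq/m^3]

        (D_fit, a_fit), _ = curve_fit(pore_conc, z_m, c_meas, p0=(1e-6, 0.03))
        print(f"D = {D_fit:.2e} m^2/s, alpha = {a_fit:.2e} Bq/(m^3*s)")

        # Predicted concentrations at unmeasured depths, as in the paper's check:
        print(pore_conc(np.array([0.2, 1.0]), D_fit, a_fit))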

  12. Effects of categorization method, regression type, and variable distribution on the inflation of Type-I error rate when categorizing a confounding variable.

    Science.gov (United States)

    Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A

    2015-03-15

    The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
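
    A minimal Monte Carlo sketch of the mechanism (sample size, effect sizes, and category count are assumptions, not the study's 9600-run design): a continuous confounder drives both exposure and outcome, the exposure has no true effect, and adjusting for a quantile-categorized copy of the confounder leaves residual confounding, so the test of the exposure rejects more often than the nominal 5%.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n, n_sims, n_cats, rejections = 500, 1000, 4, 0

        for _ in range(n_sims):
            z = rng.normal(size=n)                    # continuous confounder
            x = 0.5 * z + rng.normal(size=n)          # exposure (no effect on y)
            y = 0.5 * z + rng.normal(size=n)          # outcome driven by z only
            z_cat = pd.qcut(z, n_cats, labels=False)  # quantile categorization
            dummies = pd.get_dummies(z_cat, drop_first=True, dtype=float)
            X = sm.add_constant(np.column_stack([x, dummies]))
            p_x = sm.OLS(y, X).fit().pvalues[1]       # p-value for the x term
            rejections += p_x < 0.05

        print(f"Type-I error rate: {rejections / n_sims:.3f} (nominal 0.05)")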

  13. Determination of the Optimized Automation Rate considering Effects of Automation on Human Operators in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun; Kim, Jong Hyun; Kim, Man Cheol

    2015-01-01

    Automation refers to the use of a device or a system to perform a function previously performed by a human operator. It is introduced to reduce human errors and to enhance performance in various industrial fields, including the nuclear industry. However, these positive effects are not always achieved in complex systems such as nuclear power plants (NPPs). An excessive introduction of automation can generate new roles for human operators and change activities in unexpected ways. As more automation systems are accepted, the ability of human operators to detect automation failures and resume manual control is diminished. This disadvantage of automation is called the Out-of-the-Loop (OOTL) problem. The positive and negative effects of automation should be considered at the same time to determine the appropriate level of automation. Existing concepts for doing so are limited in that they do not consider the effects of automation on human operators. Thus, in this paper, a new estimation method for the automation rate is suggested to overcome this problem

  14. Quantifying behavioural determinants relating to health professional reporting of medication errors: a cross-sectional survey using the Theoretical Domains Framework.

    Science.gov (United States)

    Alqubaisi, Mai; Tonna, Antonella; Strath, Alison; Stewart, Derek

    2016-11-01

    The aims of this study were to quantify the behavioural determinants of health professional reporting of medication errors in the United Arab Emirates (UAE) and to explore any differences between respondents. A cross-sectional survey was conducted of patient-facing doctors, nurses, and pharmacists within three major hospitals of Abu Dhabi, the UAE. An online questionnaire was developed based on the Theoretical Domains Framework (TDF, a framework of behaviour change theories). Principal component analysis (PCA) was used to identify components, and internal reliability was determined. Ethical approval was obtained from a UK university and all hospital ethics committees. Two hundred and ninety-four responses were received. Questionnaire items clustered into six components: knowledge and skills, feedback and support, action and impact, motivation, effort, and emotions. Respondents generally gave positive responses for the knowledge and skills, feedback and support, and action and impact components. Responses were more neutral for the motivation and effort components. For emotions, the component with the most negative scores, there were significant differences in terms of years registered as a health professional (those registered longest most positive, p = 0.002) and age (older most positive). • The study used the Theoretical Domains Framework to quantify the behavioural determinants of health professional reporting of medication errors. • Questionnaire items relating to emotions surrounding reporting generated the most negative responses, with significant differences in terms of years registered as a health professional (those registered longest most positive) and age (older most positive), and no differences for gender and health profession. • Interventions based on behaviour change techniques mapped to emotions should be prioritised for development.

  15. Neutron spectra determination methods using the measured reaction rates in SAIPS

    International Nuclear Information System (INIS)

    Bondars, Kh.Ya.; Lapenas, A.A.

    1980-01-01

    The mathematical basis of the algorithms is given for neutron spectrum unfolding methods that use the measured reaction rates of the activation detectors included in the information-determination system SAIPS. The aim is to bring together the most widely used domestic and foreign neutron spectra determination methods and to establish their mutual relations. The following neutron spectra determination methods are described: SAND-II, CRYSTAL BALL, WINDOWS, SPECTRA, RESP, JUL, and the polynomial and directed-divergence methods. The algorithms have been implemented on the ES computer
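
    For concreteness, a SAND-II-style multiplicative update, the first of the unfolding methods listed above, can be sketched as below; the toy cross sections, measured rates, and trial spectrum are invented, and the production codes add smoothing and convergence logic not shown here.

        import numpy as np

        # sigma[i, j]: cross section of detector reaction i in energy group j
        sigma = np.array([[1.0, 0.8, 0.3, 0.1, 0.0],
                          [0.1, 0.4, 1.2, 0.6, 0.2],
                          [0.0, 0.1, 0.3, 0.9, 1.5]])
        a_meas = np.array([2.2, 2.6, 2.9])   # measured reaction rates (arb. units)
        phi = np.ones(5)                     # trial group-flux spectrum

        for it in range(50):
            a_calc = sigma @ phi                          # predicted rates
            w = sigma * phi / a_calc[:, None]             # fractional weights W_ij
            log_corr = (w * np.log(a_meas / a_calc)[:, None]).sum(0) / w.sum(0)
            phi *= np.exp(log_corr)                       # SAND-II-type update

        print("unfolded group fluxes:", np.round(phi, 3))
        print("reproduced rates:", np.round(sigma @ phi, 3), "vs measured", a_meas)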

  16. Health Sector Inflation Rate and its Determinants in Iran: A Longitudinal Study (1995–2008)

    Science.gov (United States)

    TEIMOURIZAD, Abedin; HADIAN, Mohamad; REZAEI, Satar; HOMAIE RAD, Enayatollah

    2014-01-01

    Abstract Background Health price inflation is different from an increase in health expenditures: health expenditures contain both quantities and prices, whereas the inflation rate contains only prices. This study aimed to determine the factors that affect the Inflation Rate for Health Care Services (IRCPIHC) in Iran. Methods We used Central Bank of Iran data and estimated the relationship between the inflation rate and its determinants using a dynamic factor variable approach in STATA. Results The study revealed a positive relationship between health inflation and both overall inflation and the number of dentists. However, the numbers of beds and physicians per 1000 people had a negative relationship with health inflation. Conclusion When the number of hospital beds and doctors increased, the competition between them increased as well, thereby decreasing the inflation rate. Moreover, dentists and drug stores operate under monopoly-market conditions and can therefore change prices more easily than other health sectors. Health inflation is a subset of growth in health expenditures, and the determinants of health expenditures are not the same as those of health inflation. PMID:26060721

  17. Health Sector Inflation Rate and its Determinants in Iran: A Longitudinal Study (1995-2008).

    Science.gov (United States)

    Teimourizad, Abedin; Hadian, Mohamad; Rezaei, Satar; Homaie Rad, Enayatollah

    2014-11-01

    Health price inflation is different from an increase in health expenditures: health expenditures contain both quantities and prices, whereas the inflation rate contains only prices. This study aimed to determine the factors that affect the Inflation Rate for Health Care Services (IRCPIHC) in Iran. We used Central Bank of Iran data and estimated the relationship between the inflation rate and its determinants using a dynamic factor variable approach in STATA. The study revealed a positive relationship between health inflation and both overall inflation and the number of dentists. However, the numbers of beds and physicians per 1000 people had a negative relationship with health inflation. When the number of hospital beds and doctors increased, the competition between them increased as well, thereby decreasing the inflation rate. Moreover, dentists and drug stores operate under monopoly-market conditions and can therefore change prices more easily than other health sectors. Health inflation is a subset of growth in health expenditures, and the determinants of health expenditures are not the same as those of health inflation.

  18. Method and apparatus for simultaneous determination of fluid mass flow rate, mean velocity and density

    International Nuclear Information System (INIS)

    Hamel, W.R.

    1984-01-01

    This invention relates to a new method and new apparatus for determining fluid mass flow rate and density. In one aspect of the invention, the fluid is passed through a straight cantilevered tube in which transient oscillation has been induced, thus generating Coriolis damping forces on the tube. The decay rate and frequency of the resulting damped oscillation are measured, and the fluid mass flow rate and density are determined therefrom. In another aspect of the invention, the fluid is passed through the cantilevered tube while an electrically powered device imparts steady-state harmonic excitation to the tube. This generates Coriolis tube-damping forces which are dependent on the mass flow rate of the fluid. Means are provided to respond to incipient flow-induced changes in the amplitude of vibration by changing the power input to the excitation device as required to sustain the original amplitude of vibration. The fluid mass flow rate and density are determined from the required change in power input. The invention provides stable, rapid, and accurate measurements. It does not require bending of the fluid flow
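
    Under a lumped single-mode model of the cantilevered tube, the two read-outs described above can be sketched as follows: density from the damped-oscillation frequency, and mass flow rate from the Coriolis contribution to the decay rate. The calibration constants below are invented, not the patent's values.

        import math

        K     = 4.0e4   # effective tube stiffness [N/m], assumed
        M_T   = 0.50    # effective empty-tube mass [kg], assumed
        V     = 2.0e-4  # internal fluid volume [m^3], assumed
        C_CAL = 0.08    # decay rate per unit mass flow [1/s per kg/s], assumed
        S0    = 0.20    # structural (zero-flow) decay rate [1/s], assumed

        def density_from_frequency(f_hz):
            """Invert f = (1/2pi)*sqrt(K/(M_T + rho*V)) for the fluid density."""
            return (K / (2.0 * math.pi * f_hz) ** 2 - M_T) / V

        def mass_flow_from_decay(sigma):
            """Attribute the decay rate above the zero-flow value to Coriolis damping."""
            return (sigma - S0) / C_CAL

        f_meas, sigma_meas = 38.0, 0.36  # measured frequency [Hz], decay rate [1/s]
        print("density  : %.1f kg/m^3" % density_from_frequency(f_meas))
        print("mass flow: %.3f kg/s" % mass_flow_from_decay(sigma_meas))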

  19. 78 FR 78275 - Alcohol and Drug Testing: Determination of Minimum Random Testing Rates for 2014

    Science.gov (United States)

    2013-12-26

    ...-11213, Notice No. 17] Alcohol and Drug Testing: Determination of Minimum Random Testing Rates for 2014... December 26, 2013. FOR FURTHER INFORMATION CONTACT: Jerry Powers, FRA Drug and Alcohol Program Manager, W38...-493-6313); or Sam Noe, FRA Drug and Alcohol Program Specialist, (telephone 615-719- 2951). Issued in...

  20. 75 FR 1547 - Alcohol and Drug Testing: Determination of Minimum Random Testing Rates for 2010

    Science.gov (United States)

    2010-01-12

    ...-11213, Notice No. 13] RIN 2130-AA81 Alcohol and Drug Testing: Determination of Minimum Random Testing... percent for alcohol. Because the industry-wide random drug testing positive rate has remained below 1.0... effective upon publication. FOR FURTHER INFORMATION CONTACT: Lamar Allen, Alcohol and Drug Program Manager...

  1. Water vapor mass balance method for determining air infiltration rates in houses

    Science.gov (United States)

    David R. DeWalle; Gordon M. Heisler

    1980-01-01

    A water vapor mass balance technique that includes the use of common humidity-control equipment can be used to determine average air infiltration rates in buildings. Only measurements of the humidity inside and outside the home, the mass of vapor exchanged by a humidifier/dehumidifier, and the volume of interior air space are needed. This method gives results that...
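
    At steady state the balance can be read as follows (a simplified interpretation, since the full method is not reproduced in the abstract): with E the vapor mass exchanged by the humidifier or dehumidifier per unit time, c_in and c_out the indoor and outdoor absolute humidities, and V the interior air volume, the air exchange rate n follows from

    \[
    E = n\,V\,\lvert c_{\mathrm{in}} - c_{\mathrm{out}} \rvert
    \quad\Longrightarrow\quad
    n = \frac{E}{V\,\lvert c_{\mathrm{in}} - c_{\mathrm{out}} \rvert}.
    \]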

  2. Probing the Rate-Determining Step of the Claisen-Schmidt Condensation by Competition Reactions

    Science.gov (United States)

    Mak, Kendrew K. W.; Chan, Wing-Fat; Lung, Ka-Ying; Lam, Wai-Yee; Ng, Weng-Cheong; Lee, Siu-Fung

    2007-01-01

    Competition experiments are a useful tool for preliminary study of the linear free energy relationship of organic reactions. This article describes a physical organic experiment for upper-level undergraduates to identify the rate-determining step of the Claisen-Schmidt condensation of benzaldehyde and acetophenone by studying the linear free…

  3. A Classroom Experiment on Exchange Rate Determination with Purchasing Power Parity

    Science.gov (United States)

    Mitchell, David T.; Rebelein, Robert P.; Schneider, Patricia H.; Simpson, Nicole B.; Fisher, Eric

    2009-01-01

    The authors developed a classroom experiment on exchange rate determination appropriate for undergraduate courses in macroeconomics and international economics. In the experiment, students represent citizens from different countries and need to obtain currency to purchase goods. By participating in an auction to buy currency, students gain a…

  4. Low reproducibility of maximum urinary flow rate determined by portable flowmetry

    NARCIS (Netherlands)

    Sonke, G. S.; Kiemeney, L. A.; Verbeek, A. L.; Kortmann, B. B.; Debruyne, F. M.; de la Rosette, J. J.

    1999-01-01

    To evaluate the reproducibility in maximum urinary flow rate (Qmax) in men with lower urinary tract symptoms (LUTSs) and to determine the number of flows needed to obtain a specified reliability in mean Qmax, 212 patients with LUTSs (mean age, 62 years) referred to the University Hospital Nijmegen,

  5. 42 CFR 412.308 - Determining and updating the Federal rate.

    Science.gov (United States)

    2010-10-01

    ..., changes in the case mix index, the effect of changes to DRG classification and relative weights, and... increase attributable to changes in case mix. (ii) Effective FY 1996. Effective FY 1996, the standard... under the Federal rate for outlier cases under subpart F of this part, determined as a proportion of...

  6. 42 CFR 413.314 - Determining payment amounts: Routine per diem rate.

    Science.gov (United States)

    2010-10-01

    ... Prospectively Determined Payment Rates for Low-Volume Skilled Nursing Facilities, for Cost Reporting Periods... reflect area wage differences and the cost reporting period beginning date (if necessary) and is subject... appropriate wage index; and (ii) A nonlabor-related portion. (2) A routine capital-related cost portion. (3...

  7. 78 FR 66276 - Determination of Rates and Terms for Business Establishment Services

    Science.gov (United States)

    2013-11-05

    ...). The revisions read as follows: Sec. 384.4 Terms for making payment of royalty fees and statements of... LIBRARY OF CONGRESS Copyright Royalty Board 37 CFR Part 384 [Docket No. 2012-1 CRB Business Establishments II] Determination of Rates and Terms for Business Establishment Services AGENCY: Copyright Royalty...

  8. 76 FR 590 - Adjustment or Determination of Compulsory License Rates for Making and Distributing Phonorecords

    Science.gov (United States)

    2011-01-05

    ... $150 filing fee, must be addressed to: Copyright Royalty Board, P.O. Box 70977, Washington, DC 20024... LIBRARY OF CONGRESS Copyright Royalty Board [Docket No. 2011-3 CRB Phonorecords II] Adjustment or Determination of Compulsory License Rates for Making and Distributing Phonorecords AGENCY: Copyright Royalty...

  9. A Direct inverse model to determine permeability fields from pressure and flow rate measurements

    NARCIS (Netherlands)

    Brouwer, G.K.; Fokker, P.A.; Wilschut, F.; Zijl, W.

    2008-01-01

    The determination of the permeability field from pressure and flow rate measurements in wells is a key problem in reservoir engineering. This paper presents a Double Constraint method for inverse modeling that is an example of direct inverse modeling. The method is used with a standard
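
    The Double Constraint method itself is not detailed in the abstract; as a minimal illustration of the forward relation any such inversion rests on, the sketch below recovers a single homogeneous permeability from one pressure-drop/flow-rate pair via Darcy's law (all numbers invented).

        # Single-phase, steady, linear Darcy flow: q = k*A*dp/(mu*L), so one
        # (q, dp) measurement fixes a homogeneous k. This is NOT the Double
        # Constraint method, which estimates a full permeability field.
        MU = 1.0e-3   # fluid viscosity [Pa*s] (water)
        A  = 10.0     # flow cross-section [m^2]
        L  = 100.0    # flow length [m]

        def permeability(q_m3_s, dp_pa):
            """Invert Darcy's law for k [m^2]."""
            return q_m3_s * MU * L / (A * dp_pa)

        k = permeability(q_m3_s=1.0e-4, dp_pa=2.0e6)
        print("k = %.2e m^2 (= %.0f mD)" % (k, k / 9.869e-16))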

  10. Determination of x-radiation exposure rates from color television sets

    International Nuclear Information System (INIS)

    Campos, L.L.; Caldas, L.V.E.

    1988-05-01

    The exposure rates of low-energy X-rays emitted from color televisions were determined by thermoluminescence using CaSO4:Dy + Teflon pellets. The measurements were taken at distances of 5 cm, 2 m, and 3 m in front of the screens. The results were compared with those obtained for video display terminals under the same experimental conditions. (author) [pt

  11. Determination and Interpretation of the Norm Values of Preschool Social Skills Rating Scale Teacher Form

    Science.gov (United States)

    Omeroglu, Esra; Buyukozturk, Sener; Aydogan, Yasemin; Cakan, Mehtap; Cakmak, Ebru Kilic; Ozyurek, Arzu; Akduman, Gulumser Gultekin; Gunindi, Yunus; Kutlu, Omer; Coban, Aysel; Yurt, Ozlem; Kogar, Hakan; Karayol, Seda

    2015-01-01

    This study aimed to determine and interpret norms of the Preschool Social Skills Rating Scale (PSSRS) teacher form. The sample included 224 independent preschools and 169 primary schools. The schools are distributed among 48 provinces and 3324 children were included. Data were obtained from the PSSRS teacher form. The validity and reliability…

  12. Competitive kinetics as a tool to determine rate constants for reduction of ferrylmyoglobin by food components

    DEFF Research Database (Denmark)

    Jongberg, Sisse; Lund, Marianne Nissen; Pattison, David I.

    2016-01-01

    Competitive kinetics were applied as a tool to determine apparent rate constants for the reduction of hypervalent haem pigment ferrylmyoglobin (MbFe(IV)=O) by proteins and phenols in aqueous solution of pH 7.4 and I = 1.0 at 25 °C. Reduction of MbFe(IV)=O by a myofibrillar protein isolate (MPI) f...
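
    One standard form of such competition kinetics (the paper's exact scheme is not given in the abstract): if two reductants A and B are consumed only by reaction with the common oxidant MbFe(IV)=O, and each reaction is first order in each reactant, then

    \[
    \frac{k_{\mathrm{A}}}{k_{\mathrm{B}}}
    = \frac{\ln\bigl([\mathrm{A}]_t/[\mathrm{A}]_0\bigr)}
           {\ln\bigl([\mathrm{B}]_t/[\mathrm{B}]_0\bigr)},
    \]

    so an apparent rate constant for A follows from the measured consumption ratios and a reference compound's known k_B.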

  13. Determination of albumin transport rate between plasma and peritoneal space in decompensated cirrhosis

    DEFF Research Database (Denmark)

    Ring-Larsen, H; Henriksen, Jens Henrik Sahl

    1984-01-01

    Plasma-to-peritoneal transport rate of albumin (TERperit.space) was determined in eighteen patients with decompensated cirrhosis by sampling ascitic fluid after i.v. injection of 125I-labelled serum albumin. Median TERperit.space was 0.30% of the intravascular albumin mass (IVM) per hour (range 0...

  14. 76 FR 21673 - Alternative Efficiency Determination Methods and Alternate Rating Methods

    Science.gov (United States)

    2011-04-18

    ... EERE-2011-BP-TP-00024] RIN 1904-AC46 Alternative Efficiency Determination Methods and Alternate Rating Methods AGENCY: Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of... and data related to the use of computer simulations, mathematical methods, and other alternative...

  15. Determination of air kerma standard of high dose rate 192Ir brachytherapy source

    International Nuclear Information System (INIS)

    Pires, E.J.; Alves, C.F.E.; Leite, S.P.; Magalhaes, L.A.G.; David, M.G.; Almeida, C.E. de

    2015-01-01

    This paper presents the methodology developed by the Laboratorio de Ciencias Radiologicas, and presently in use, for determining the air kerma standard of 192Ir high dose rate sources used to calibrate well-type chambers. An uncertainty analysis of the measurement procedure is presented. (author)

  16. Determination of flow-rate characteristics and parameters of piezo pilot valves

    Directory of Open Access Journals (Sweden)

    Takosoglu Jakub

    2017-01-01

    Pneumatic directional valves are used in most industrial pneumatic systems. Most of them are two-stage valves controlled by a pilot valve. Pilot valves are often chosen at random, and experimental studies to determine their flow-rate characteristics and parameters have not been conducted. The paper presents experimental research on two piezo pilot valves.

  17. The probability and the management of human error

    International Nuclear Information System (INIS)

    Dufey, R.B.; Saull, J.W.

    2004-01-01

    Embedded within modern technological systems, human error is the largest, and indeed dominant, contributor to accident cause. The consequences dominate the risk profiles for nuclear power and for many other technologies. We need to quantify the probability of human error for the system as an integral contribution within the overall system failure, as it is generally not separable or predictable for actual events. We also need to provide a means to manage and effectively reduce the failure (error) rate. The fact that humans learn from their mistakes allows a new determination of the dynamic probability and human failure (error) rate in technological systems. The result is consistent with and derived from the available world data for modern technological systems. Comparisons are made to actual data from large technological systems and recent catastrophes. Best estimate values and relationships can be derived for both the human error rate and the probability. We describe the potential for new approaches to the management of human error and safety indicators, based on the principles of error state exclusion and of the systematic effect of learning. A new equation is given for the probability of human error (λ) that combines the influences of early inexperience, learning from experience (ε), and stochastic occurrences, while having a finite minimum rate: λ = 5×10⁻⁵ + ((1/ε) − 5×10⁻⁵)·exp(−3ε). The future failure rate is entirely determined by the experience: thus the past defines the future
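
    A direct evaluation of the quoted equation (the experience units for ε are not restated in the abstract):

        import math

        def human_error_rate(eps):
            """lambda(eps) = 5e-5 + (1/eps - 5e-5)*exp(-3*eps), as quoted above."""
            return 5e-5 + (1.0 / eps - 5e-5) * math.exp(-3.0 * eps)

        for eps in (0.1, 0.5, 1.0, 2.0, 5.0):
            print(f"eps = {eps:4.1f}  ->  lambda = {human_error_rate(eps):.3e}")
        # The rate falls from the inexperienced value toward the finite
        # minimum of 5e-5 as experience accumulates.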

  18. Scintillation and bit error rate analysis of a phase-locked partially coherent flat-topped array laser beam in oceanic turbulence.

    Science.gov (United States)

    Yousefi, Masoud; Kashani, Fatemeh Dabbagh; Golmohammady, Shole; Mashal, Ahmad

    2017-12-01

    In this paper, the performance of underwater wireless optical communication (UWOC) links based on the partially coherent flat-topped (PCFT) array laser beam has been investigated in detail. Array laser beams, which provide high power, are employed to increase the range of UWOC links. To characterize the effects of oceanic turbulence on the propagation behavior of the considered beam, an analytical expression for the cross-spectral density matrix elements and a semi-analytical one for the fourth-order statistical moment have been derived using the extended Huygens-Fresnel principle. Then, based on these expressions, the on-axis scintillation index of the mentioned beam propagating through weak oceanic turbulence has been calculated. Furthermore, in order to quantify the performance of the UWOC link, the average bit error rate (BER) has also been evaluated. The effects of some source factors and turbulent ocean parameters on the behavior of the scintillation index and the BER have been studied in detail. The results of this investigation indicate that, in comparison with the Gaussian array beam, when the source size of the beamlets is larger than the first Fresnel zone, the PCFT array laser beam with the higher flatness order has a lower scintillation index and hence a lower BER. Specifically, in the sense of scintillation-index reduction, the PCFT array laser beam offers a considerable benefit over single PCFT or Gaussian laser beams and also over Gaussian array beams. All the simulation results of this paper are shown in graphs and analyzed in detail.
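
    The BER-averaging step can be sketched as below; the on-off-keying conditional BER 0.5·erfc(SNR·I/(2√2)) and unit-mean log-normal fading are common weak-turbulence textbook assumptions, not the paper's exact link model.

        import numpy as np
        from scipy.special import erfc

        rng = np.random.default_rng(7)

        def avg_ber(snr, scint_index, n=200_000):
            """Average OOK BER over log-normal irradiance fading."""
            var_ln = np.log(1.0 + scint_index)  # log-irradiance variance
            ln_i = rng.normal(-0.5 * var_ln, np.sqrt(var_ln), size=n)
            i = np.exp(ln_i)                    # unit-mean log-normal fading
            return np.mean(0.5 * erfc(snr * i / (2.0 * np.sqrt(2.0))))

        for s2 in (0.05, 0.2, 0.5):             # weak-turbulence scintillation indices
            print(f"sigma_I^2 = {s2:4.2f} -> <BER> = {avg_ber(6.0, s2):.2e}")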

  19. Determining the ventilation and aerosol deposition rates from routine indoor-air measurements.

    Science.gov (United States)

    Halios, Christos H; Helmis, Costas G; Deligianni, Katerina; Vratolis, Sterios; Eleftheriadis, Konstantinos

    2014-01-01

    Measurement of the air exchange rate provides critical information in energy and indoor-air quality studies, but its continuous measurement is a rather costly exercise and requires specific instrumentation. In this work, an alternative methodology is proposed and tested, in which the air exchange rate is calculated from routine indoor and outdoor measurements of a common pollutant such as SO2, and the uncertainties induced in the calculations are determined analytically. The application of this methodology is demonstrated for three residential microenvironments in Athens, Greece, and the results are also compared against ventilation rates calculated from differential pressure measurements. The calculated time-resolved ventilation rates were applied to the mass balance equation to estimate the particle loss rate, which was found to agree with literature values at an average of 0.50 h(-1). The proposed method was further evaluated by applying a mass balance numerical model for the calculation of the indoor aerosol number concentrations, using the previously calculated ventilation rate, the outdoor measured number concentrations, and the particle loss rates as input values. The model results for the indoor concentrations were found to compare well with the experimentally measured values.
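
    A minimal sketch of the two-step estimation (particle penetration taken as 1; all time series invented): fit the air exchange rate a from the SO2 balance dCin/dt = a(Cout − Cin), assuming no indoor SO2 source, then reuse it in the particle balance dNin/dt = a·Nout − (a + k)·Nin to recover the deposition loss rate k.

        import numpy as np

        dt = 1.0 / 60.0                             # time step [h]
        t = np.arange(0, 6, dt)

        # --- invented "measurements" (would come from monitors in practice) ---
        a_true, k_true = 0.8, 0.5                   # [1/h]
        so2_out = 10.0 + 2.0 * np.sin(t)            # outdoor SO2 [ppb]
        pm_out = 8000.0 + 1000.0 * np.cos(0.5 * t)  # outdoor particles [#/cm^3]
        so2_in = np.zeros_like(t); pm_in = np.zeros_like(t)
        so2_in[0], pm_in[0] = 5.0, 3000.0
        for j in range(len(t) - 1):                 # forward-Euler "truth"
            so2_in[j+1] = so2_in[j] + dt * a_true * (so2_out[j] - so2_in[j])
            pm_in[j+1] = pm_in[j] + dt * (a_true * pm_out[j]
                                          - (a_true + k_true) * pm_in[j])

        # --- step 1: air exchange rate from the SO2 balance (least squares) ---
        dso2 = np.gradient(so2_in, dt)
        diff = so2_out - so2_in
        a_est = np.sum(dso2 * diff) / np.sum(diff ** 2)

        # --- step 2: particle loss rate, reusing a_est -------------------------
        dpm = np.gradient(pm_in, dt)
        k_est = np.mean((a_est * pm_out - dpm) / pm_in) - a_est

        print(f"a = {a_est:.2f} 1/h (true {a_true}), "
              f"k = {k_est:.2f} 1/h (true {k_true})")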

  20. Ventilator-associated pneumonia: the influence of bacterial resistance, prescription errors, and de-escalation of antimicrobial therapy on mortality rates

    Directory of Open Access Journals (Sweden)

    Ana Carolina Souza-Oliveira

    2016-09-01

    Conclusion: Prescription errors influenced the mortality of patients with ventilator-associated pneumonia, underscoring the challenge of proper ventilator-associated pneumonia treatment, which requires continuous reevaluation to ensure that the clinical response to therapy meets expectations.