A Simulator for Human Error Probability Analysis (SHERPA)
International Nuclear Information System (INIS)
Di Pasquale, Valentina; Miranda, Salvatore; Iannone, Raffaele; Riemma, Stefano
2015-01-01
A new Human Reliability Analysis (HRA) method is presented in this paper. The Simulator for Human Error Probability Analysis (SHERPA) model provides a theoretical framework that exploits the advantages of simulation tools and traditional HRA methods in order to model human behaviour and predict the error probability for a given scenario in any kind of industrial system. Human reliability is estimated as a function of the task performed, the Performance Shaping Factors (PSF) and the time worked, with the purpose of capturing how reliability depends not only on the task and working context, but also on the time the operator has already spent on the job. The model is able to estimate human reliability; to assess the effects of different human reliability levels through evaluation of tasks performed more or less correctly; and to assess the impact of context via PSFs. SHERPA also makes it possible to determine the optimal configuration of breaks: through a methodology based on economic assessments, it identifies the conditions under which work should be suspended during the shift for the operator's psychophysical recovery, and hence for the restoration of acceptable reliability values. - Highlights: • We propose a new method for Human Reliability Analysis called SHERPA. • SHERPA is able to model human behaviour and to predict the error probability. • Human reliability is a function of the task done, influencing factors and time worked. • SHERPA exploits the benefits of simulation tools and traditional HRA methods. • SHERPA is implemented as a simulation template enabling assessment of human reliability.
Error Probability Analysis of Hardware Impaired Systems with Asymmetric Transmission
Javed, Sidrah
2018-04-26
Error probability study of hardware impaired (HWI) systems depends strongly on the adopted model. Recent models have shown that the aggregate noise is equivalent to improper Gaussian signals. Therefore, considering the distinct noise nature and the self-interfering (SI) signals, an optimal maximum likelihood (ML) receiver is derived. This renders the conventional minimum Euclidean distance (MED) receiver sub-optimal, because it is based on the assumptions of ideal hardware transceivers and proper Gaussian noise. Next, the average error probability of the proposed optimal ML receiver is analyzed, and tight bounds and approximations are derived for various systems, including transmitter and receiver I/Q-imbalanced systems with or without transmitter distortions, as well as transmitter-only or receiver-only impaired systems. Motivated by recent studies that shed light on the benefit of improper Gaussian signaling in mitigating HWIs, asymmetric quadrature amplitude modulation or phase shift keying is optimized and adapted for transmission. Finally, numerical and simulation results are presented to demonstrate the superiority of the proposed ML receiver over the MED receiver, the tightness of the derived bounds, and the effectiveness of asymmetric transmission in dampening HWIs and improving overall system performance.
Saviane, Chiara; Silver, R Angus
2006-06-15
Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
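The variance-mean relationship at the heart of multiple-probability fluctuation analysis can be sketched with a toy binomial release model. This is an illustrative simulation only: the site count `N`, quantal size `q`, and the assumption of uniform, independent sites with no quantal variability are our choices, not the authors'.

```python
import random
import statistics

def simulate_epscs(n_sites, q, p, n_trials, rng):
    """Simulate postsynaptic response amplitudes under a simple binomial
    model: each of n_sites releases independently with probability p,
    and each successful release contributes a fixed quantal size q."""
    return [q * sum(rng.random() < p for _ in range(n_sites))
            for _ in range(n_trials)]

rng = random.Random(42)
N, q = 5, 1.2  # hypothetical number of release sites and quantal size

# For this model the (mean, variance) points recorded at different release
# probabilities fall on the parabola Var = q*mean - mean**2 / N.
for p in (0.2, 0.5, 0.8):
    amps = simulate_epscs(N, q, p, 20000, rng)
    m = statistics.mean(amps)
    v = statistics.pvariance(amps)
    print(f"p={p}: mean={m:.3f}  var={v:.3f}  parabola={q * m - m * m / N:.3f}")
```

Fitting that parabola to measured variance-mean points is what yields the quantal parameters N and q; the article's contribution is the proper weighting of each point via estimators of the variance of the variance.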
Estimation of the human error probabilities in the human reliability analysis
International Nuclear Information System (INIS)
Liu Haibin; He Xuhong; Tong Jiejuan; Shen Shifei
2006-01-01
Human error data are an important issue in human reliability analysis (HRA). Bayesian parameter estimation, which can combine multiple sources of information such as historical NPP data and expert judgment, can adjust human error data so that they better reflect the actual situation of the NPP. Using a numerical computation program developed by the authors, this paper presents typical examples illustrating the process of Bayesian parameter estimation in HRA and discusses the effect of different modification data on the estimates. (authors)
International Nuclear Information System (INIS)
Varde, P. V.; Lee, D. Y.; Han, J. B.
2003-03-01
A case study on human reliability analysis has been performed as part of the reliability analysis of the digital protection system of the reactor, which automatically actuates the reactor shutdown system when demanded. However, the safety analysis takes credit for operator action as a diverse means of tripping the reactor in the (low-probability) ATWS scenario. Based on the available information, two cases, viz. human error in tripping the reactor and calibration error for instrumentation in the protection system, have been analyzed. Wherever applicable, a parametric study has also been performed.
Simulator data on human error probabilities
International Nuclear Information System (INIS)
Kozinsky, E.J.; Guttmann, H.E.
1981-01-01
Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEP) for task elements defined in NUREG/CR-1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes, collected using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determined HEPs for Probabilistic Risk Assessment calculations. It is the only practical experimental source of such data to date.
Simulator data on human error probabilities
International Nuclear Information System (INIS)
Kozinsky, E.J.; Guttmann, H.E.
1982-01-01
Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEP) for task elements defined in NUREG/CR-1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes, collected using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determines HEPs for Probabilistic Risk Assessment calculations. It is the only practical experimental source of this data to date.
Fixed setpoints introduce error in licensing probability
Energy Technology Data Exchange (ETDEWEB)
Laratta, F., E-mail: flaratta@cogeco.ca [Oakville, ON (Canada)
2015-07-01
Although we license fixed (constrained) trip setpoints to a target probability, there is no provision for error in probability calculations or for how error can be minimized. Instead, we apply reverse-compliance preconditions on the accident scenario, such as a uniform and slow LOR, to make probability seem error-free. But how can it be? Probability is calculated from simulated pre-LOR detector readings plus uncertainties before the LOR progression is even knowable. We can conserve probability without preconditions by continuously updating field setpoint equations with on-line detector data. Programmable Digital Controllers (PDCs) in CANDU 6 plants already have variable setpoints for Steam Generator and Pressurizer Low Level. Even so, these setpoints are constrained as a ramp or step in other CANDU plants and don't exhibit unconstrained variability. Fixed setpoints penalize safety and operating margins and cause spurious trips. We nevertheless continue to design suboptimal trip setpoint comparators for all trip parameters. (author)
Collection of offshore human error probability data
International Nuclear Information System (INIS)
Basra, Gurpreet; Kirwan, Barry
1998-01-01
Accidents such as Piper Alpha have increased concern about the effects of human errors in complex systems. Such accidents can in theory be predicted and prevented by risk assessment, and in particular human reliability assessment (HRA), but HRA ideally requires qualitative and quantitative human error data. A research initiative at the University of Birmingham led to the development of CORE-DATA, a Computerised Human Error Data Base. This system currently contains a reasonably large number of human error data points, collected from a variety of mainly nuclear-power related sources. This article outlines a recent offshore data collection study, concerned with collecting lifeboat evacuation data. Data collection methods are outlined and a selection of human error probabilities generated as a result of the study are provided. These data give insights into the type of errors and human failure rates that could be utilised to support offshore risk analyses
Error probabilities in default Bayesian hypothesis testing
Gu, Xin; Hoijtink, Herbert; Mulder, J.
2016-01-01
This paper investigates the classical type I and type II error probabilities of default Bayes factors for a Bayesian t test. Default Bayes factors quantify the relative evidence between the null hypothesis and the unrestricted alternative hypothesis without needing to specify prior distributions for
Ash, Robert B; Lukacs, E
1972-01-01
Real Analysis and Probability provides the background in real analysis needed for the study of probability. Topics covered range from measure and integration theory to functional analysis and basic concepts of probability. The interplay between measure theory and topology is also discussed, along with conditional probability and expectation, the central limit theorem, and strong laws of large numbers with respect to martingale theory.Comprised of eight chapters, this volume begins with an overview of the basic concepts of the theory of measure and integration, followed by a presentation of var
Spataru, Aurel
2013-01-01
Probability theory is a rapidly expanding field and is used in many areas of science and technology. Beginning from a basis of abstract analysis, this mathematics book develops the knowledge needed for advanced students to develop a complex understanding of probability. The first part of the book systematically presents concepts and results from analysis before embarking on the study of probability theory. The initial section will also be useful for those interested in topology, measure theory, real analysis and functional analysis. The second part of the book presents the concepts, methodology and fundamental results of probability theory. Exercises are included throughout the text, not just at the end, to teach each concept fully as it is explained, including presentations of interesting extensions of the theory. The complete and detailed nature of the book makes it ideal as a reference book or for self-study in probability and related fields. It covers a wide range of subjects including f-expansions, Fuk-N...
Human error probability estimation using licensee event reports
International Nuclear Information System (INIS)
Voska, K.J.; O'Brien, J.N.
1984-07-01
The objective of this report is to present a method for using field data from nuclear power plants to estimate human error probabilities (HEPs), which are then used in probabilistic risk assessment activities. This method of estimating HEPs is one of four being pursued in NRC-sponsored research; the other three are structured expert judgment, analysis of training simulator data, and performance modeling. The type of field data analyzed in this report comes from Licensee Event Reports (LERs), which are analyzed using a method specifically developed for that purpose. However, any type of field data on human errors could be analyzed using this method with minor adjustments. This report assesses the practicality, acceptability, and usefulness of estimating HEPs from LERs and comprehensively presents the method for use.
The probability and the management of human error
Energy Technology Data Exchange (ETDEWEB)
Duffey, R.B. [Atomic Energy of Canada Limited, Chalk River Laboratories, Chalk River, ON (Canada); Saull, J.W. [International Federation of Airworthiness, Sussex (United Kingdom)
2004-07-01
Embedded within modern technological systems, human error is the largest, and indeed dominant, contributor to accident causation, and its consequences dominate the risk profiles for nuclear power and for many other technologies. We need to quantify the probability of human error for the system as an integral contribution within the overall system failure, as it is generally not separable or predictable for actual events. We also need to provide a means to manage and effectively reduce the failure (error) rate. The fact that humans learn from their mistakes allows a new determination of the dynamic probability and human failure (error) rate in technological systems. The result is consistent with and derived from the available world data for modern technological systems. Comparisons are made to actual data from large technological systems and recent catastrophes. Best-estimate values and relationships can be derived for both the human error rate and the probability. We describe the potential for new approaches to the management of human error and safety indicators, based on the principles of error state exclusion and of the systematic effect of learning. A new equation is given for the probability of human error (λ) that combines the influences of early inexperience, learning from experience (ε) and stochastic occurrences with a finite minimum rate: λ = 5×10⁻⁵ + ((1/ε) − 5×10⁻⁵) exp(−3ε). The future failure rate is entirely determined by the experience: thus the past defines the future.
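The learning-curve equation quoted in the abstract is straightforward to evaluate numerically. A minimal sketch, assuming ε is measured in the authors' units of accumulated experience:

```python
import math

LAMBDA_MIN = 5e-5  # asymptotic minimum error rate quoted in the abstract

def human_error_rate(eps):
    """Learning-curve error rate from the abstract:
    lambda = 5e-5 + (1/eps - 5e-5) * exp(-3 * eps)."""
    return LAMBDA_MIN + (1.0 / eps - LAMBDA_MIN) * math.exp(-3.0 * eps)

# The rate starts near 1/eps for small experience and decays toward the
# finite minimum rate as experience accumulates.
for eps in (0.01, 0.1, 1.0, 10.0):
    print(f"eps = {eps:5}: lambda = {human_error_rate(eps):.3e}")
```

The exponential decay term is what encodes the learning hypothesis: early inexperience dominates at small ε, while the stochastic floor of 5×10⁻⁵ remains at large ε.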
A Quantum Theoretical Explanation for Probability Judgment Errors
Busemeyer, Jerome R.; Pothos, Emmanuel M.; Franco, Riccardo; Trueblood, Jennifer S.
2011-01-01
A quantum probability model is introduced and used to explain human probability judgment errors including the conjunction and disjunction fallacies, averaging effects, unpacking effects, and order effects on inference. On the one hand, quantum theory is similar to other categorization and memory models of cognition in that it relies on vector…
Human error probability quantification using fuzzy methodology in nuclear plants
International Nuclear Information System (INIS)
Nascimento, Claudio Souza do
2010-01-01
This work obtains Human Error Probability (HEP) estimates for operators' actions in response to hypothesized emergency situations at the IEA-R1 research reactor at IPEN. A Performance Shaping Factors (PSF) evaluation was also carried out in order to classify the PSFs according to their level of influence on the operators' actions and to determine their actual states in the plant. Both the HEP estimation and the PSF evaluation were based on expert judgment elicited through interviews and questionnaires; the specialist group was composed of selected IEA-R1 operators. Specialists' knowledge was represented as linguistic variables, and group evaluation values were obtained through fuzzy logic and fuzzy set theory. The HEP values obtained show good agreement with data published in the literature, corroborating the proposed methodology as a viable alternative for Human Reliability Analysis (HRA). (author)
Collision Probability Analysis
DEFF Research Database (Denmark)
Hansen, Peter Friis; Pedersen, Preben Terndrup
1998-01-01
It is the purpose of this report to apply a rational model for prediction of ship-ship collision probabilities as function of the ship and the crew characteristics and the navigational environment for MS Dextra sailing on a route between Cadiz and the Canary Islands.The most important ship and crew...
Directory of Open Access Journals (Sweden)
A. B. Levina
2016-03-01
Error detection codes are mechanisms that enable robust delivery of data over unreliable communication channels and devices. Unreliable channels and devices are error-prone objects, and error detection codes allow such errors to be detected. There are two classes of error detecting codes: classical codes and security-oriented codes. The classical codes detect a high percentage of errors; however, they have a high probability of missing an error introduced by algebraic manipulation. In turn, security-oriented codes are codes with a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes: a detailed study of this parameter allows analyzing the behavior of the error-correcting code in the case of error injection into the encoding device. In turn, the complexity of the encoding function plays an important role in security-oriented codes. Encoding functions with low computational complexity and a low probability of masking are the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding function complexity on the error masking probability distribution. It will be shown that a more complex encoding function reduces the maximum of the error masking probability. It is also shown that increasing the function complexity changes the error masking probability distribution; in particular, increasing computational complexity decreases the difference between the maximum and average value of the error masking probability. Our results have shown that functions with greater complexity have smoothed maxima of error masking probability, which significantly complicates the analysis of the error-correcting code by an attacker. As a result, in the case of a complex encoding function the probability of algebraic manipulation is reduced. The paper discusses an approach to measuring the error masking
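For a classical linear code under additive errors, an error pattern is masked (undetected) exactly when it is itself a nonzero codeword, so the masking probability can be enumerated directly. A sketch using the [7,4] Hamming code as the example (our choice of code, not one from the article):

```python
from itertools import product

# Generator matrix of the [7,4] Hamming code in systematic form:
# parity bits are p1 = d1+d2+d4, p2 = d1+d3+d4, p3 = d2+d3+d4 (mod 2).
G = [
    (1, 0, 0, 0, 1, 1, 0),
    (0, 1, 0, 0, 1, 0, 1),
    (0, 0, 1, 0, 0, 1, 1),
    (0, 0, 0, 1, 1, 1, 1),
]

def encode(msg):
    """Encode a 4-bit message as the XOR of the selected rows of G."""
    word = [0] * 7
    for bit, row in zip(msg, G):
        if bit:
            word = [w ^ r for w, r in zip(word, row)]
    return tuple(word)

codewords = {encode(msg) for msg in product((0, 1), repeat=4)}

# An additive error e is masked exactly when e is a nonzero codeword,
# since c XOR e is then another valid codeword.
all_nonzero = [e for e in product((0, 1), repeat=7) if any(e)]
masked = [e for e in all_nonzero if e in codewords]
print(f"masked {len(masked)} of {len(all_nonzero)} error patterns "
      f"({len(masked) / len(all_nonzero):.3%})")
```

Here 15 of the 127 nonzero error patterns are masked. Security-oriented codes, the article's subject, instead aim to bound the analogous masking probability even against deliberately chosen (algebraically manipulated) errors.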
ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals
International Nuclear Information System (INIS)
Vogel, J.E.
1983-01-01
1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, selected according to the magnitude of x (the last applying when x .GE. 4.0). In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function via the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x.
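The caution in item 5 is easy to reproduce with Python's standard library: for large x, erf(x) rounds to exactly 1.0 in double precision, so the subtraction loses all significance, while the direct erfc computation retains the tail.

```python
import math

x = 10.0
naive = 1.0 - math.erf(x)  # erf(10) rounds to exactly 1.0 in double precision
direct = math.erfc(x)      # computed directly, no cancellation

print("1 - erf(x):", naive)   # total loss of significance
print("erfc(x):   ", direct)  # the true tail value, on the order of 1e-45
```

This is exactly why the routine computes erfc directly in the large-x regions rather than via the identity.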
Development of an integrated system for estimating human error probabilities
Energy Technology Data Exchange (ETDEWEB)
Auflick, J.L.; Hahn, H.A.; Morzinski, J.A.
1998-12-01
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project had as its main objective the development of a Human Reliability Analysis (HRA), knowledge-based expert system that would provide probabilistic estimates for potential human errors within various risk assessments, safety analysis reports, and hazard assessments. HRA identifies where human errors are most likely, estimates the error rate for individual tasks, and highlights the most beneficial areas for system improvements. This project accomplished three major tasks. First, several prominent HRA techniques and associated databases were collected and translated into an electronic format. Next, the project started a knowledge engineering phase where the expertise, i.e., the procedural rules and data, were extracted from those techniques and compiled into various modules. Finally, these modules, rules, and data were combined into a nearly complete HRA expert system.
The Human Bathtub: Safety and Risk Predictions Including the Dynamic Probability of Operator Errors
International Nuclear Information System (INIS)
Duffey, Romney B.; Saull, John W.
2006-01-01
Reactor safety and risk are dominated by the potential for, and major contribution of, human error in the design, operation, control, management, regulation and maintenance of the plant, and hence to all accidents. Given the possibility of accidents and errors, we need to determine the outcome (error) probability, or the chance of failure. Conventionally, reliability engineering is associated with the failure rate of components, systems, or mechanisms, not of human beings in, and interacting with, a technological system. The probability of failure requires prior knowledge of the total number of outcomes, which for predictive purposes we do not know or have. Analysis of failure rates due to human error, and of the rate of learning, allows a new determination of the dynamic human error rate in technological systems, consistent with and derived from the available world data. The basis for the analysis is the 'learning hypothesis': humans learn from experience, and consequently the accumulated experience defines the failure rate. A new 'best' equation has been derived for the human error, outcome or failure rate, which allows calculation and prediction of the probability of human error. We also provide comparisons to the empirical Weibull parameter fitting used by conventional reliability engineering and probabilistic safety analysis methods. These new analyses show that arbitrary Weibull fitting parameters and typical empirical hazard function techniques cannot be used to predict the dynamics of human errors and outcomes in the presence of learning. Comparisons of these new insights show agreement with human error data from the world's commercial airlines, the two shuttle failures, and from nuclear plant operator actions and transient control behaviour observed in both plants and simulators. The results demonstrate that the human error probability (HEP) is dynamic, and that it may be predicted using the learning hypothesis and the minimum
Treelet Probabilities for HPSG Parsing and Error Correction
Ivanova, Angelina; van Noord, Gerardus; Calzolari, Nicoletta; al, et
2014-01-01
Most state-of-the-art parsers attempt to produce an analysis for any input, despite errors. However, small grammatical mistakes in a sentence often cause the parser to fail to build a correct syntactic tree. Applications that can identify and correct mistakes during parsing are particularly
Orbit IMU alignment: Error analysis
Corson, R. W.
1980-01-01
A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.
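The two quoted figures are mutually consistent, as a small Monte Carlo check shows, assuming independent zero-mean Gaussian errors of 68 arc seconds on each of the three axes (the per-axis independence is our simplifying assumption):

```python
import math
import random

rng = random.Random(1)
SIGMA = 68.0   # per-axis alignment error std dev, arc seconds (from the abstract)
BOUND = 258.0  # claimed 99.7% bound on the total alignment error magnitude

N = 200_000
inside = sum(
    math.sqrt(sum(rng.gauss(0.0, SIGMA) ** 2 for _ in range(3))) < BOUND
    for _ in range(N)
)
print(f"P(|error| < {BOUND} arcsec) ~= {inside / N:.4f}")
```

Under these assumptions the error magnitude follows a chi distribution with three degrees of freedom, and 258/68 ≈ 3.79 per-axis standard deviations indeed captures about 99.7% of the probability mass.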
Quantification of the effects of dependence on human error probabilities
International Nuclear Information System (INIS)
Bell, B.J.; Swain, A.D.
1980-01-01
In estimating the probabilities of human error in the performance of a series of tasks in a nuclear power plant, the situation-specific characteristics of the series must be considered. A critical factor not to be overlooked in this estimation is the dependence or independence that pertains to any of the several pairs of task performances. In discussing the quantification of the effects of dependence, the event tree symbology described will be used. In any series of tasks, the only dependence considered for quantification in this document will be that existing between the task of interest and the immediately preceding task. Tasks performed earlier in the series may have some effect on the end task, but this effect is considered negligible.
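The standard concrete form of this quantification is the dependence model of THERP (NUREG/CR-1278, cited in the neighboring records), which maps five discrete dependence levels between successive tasks to conditional error probabilities. A sketch of those equations:

```python
# THERP (NUREG/CR-1278) conditional HEP for the current task, given that the
# immediately preceding task failed, at the five discrete dependence levels.
THERP_DEPENDENCE = {
    "ZD": lambda p: p,                  # zero dependence
    "LD": lambda p: (1 + 19 * p) / 20,  # low dependence
    "MD": lambda p: (1 + 6 * p) / 7,    # moderate dependence
    "HD": lambda p: (1 + p) / 2,        # high dependence
    "CD": lambda p: 1.0,                # complete dependence
}

def conditional_hep(basic_hep, level):
    """Conditional error probability for the task of interest, given failure
    on the immediately preceding task, at the chosen dependence level."""
    return THERP_DEPENDENCE[level](basic_hep)

p = 0.01  # hypothetical basic HEP for illustration
for level in ("ZD", "LD", "MD", "HD", "CD"):
    print(f"{level}: {conditional_hep(p, level):.4f}")
```

Note how even "low" dependence raises a basic HEP of 0.01 to roughly 0.06: ignoring dependence between paired task performances can understate the series failure probability substantially.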
A framework to assess diagnosis error probabilities in the advanced MCR
International Nuclear Information System (INIS)
Kim, Ar Ryum; Seong, Poong Hyun; Kim, Jong Hyun; Jang, Inseok; Park, Jinkyun
2016-01-01
The Institute of Nuclear Power Operations (INPO) operating experience database revealed that about 48% of the total events in the world's NPPs over two years (2010-2011) happened due to human errors. The purpose of human reliability analysis (HRA) methods is to evaluate the potential for, and mechanisms of, human errors that may affect plant safety. Accordingly, various HRA methods have been developed, such as the technique for human error rate prediction (THERP), simplified plant analysis risk human reliability assessment (SPAR-H), the cognitive reliability and error analysis method (CREAM), and so on. Many researchers have asserted that procedures, alarms, and displays are critical factors affecting operators' generic activities, especially diagnosis activities. However, none of these HRA methods was explicitly designed to deal with digital systems, and SCHEME (Soft Control Human error Evaluation MEthod) considers only the probability of soft-control execution errors in the advanced MCR. The need to develop HRA methods for the various conditions of NPPs has therefore been raised. In this research, a framework to estimate diagnosis error probabilities in the advanced MCR was suggested. The assessment framework comprises three steps. The first step is to investigate diagnosis errors and calculate their probabilities. The second step is to quantitatively estimate PSF weightings in the advanced MCR. The third step is to suggest the updated TRC model to assess the nominal diagnosis error probabilities. Additionally, the proposed framework was applied using full-scope simulation. Experiments conducted in a domestic full-scope simulator and in HAMMLAB were used as data sources. In total, eighteen tasks were analyzed and twenty-three crews participated in
Human error recovery failure probability when using soft controls in computerized control rooms
International Nuclear Information System (INIS)
Jang, Inseok; Kim, Ar Ryum; Seong, Poong Hyun; Jung, Wondea
2014-01-01
Many studies have categorized the recovery process into three phases: detection of the problem situation, explanation of problem causes or countermeasures against the problem, and end of recovery. Although the focus of recovery promotion has been on categorizing recovery phases and modeling the recovery process, research on human recovery failure probabilities has not been pursued actively; only a few studies have estimated recovery failure probabilities empirically. In summary, the research performed so far has several problems regarding its use in human reliability analysis (HRA). By adopting new human-system interfaces based on computer technologies, the operating environment of MCRs in NPPs has changed from conventional MCRs to advanced MCRs. Because of the different interfaces, different recovery failure probabilities should be considered in HRA for advanced MCRs. Therefore, this study carries out an empirical analysis of human error recovery probabilities in an advanced-MCR mockup called the compact nuclear simulator (CNS). The aim of this work is not only to compile a recovery failure probability database using the simulator for advanced MCRs, but also to collect recovery failure probabilities for defined human error modes, in order to compare which human error mode has the highest recovery failure probability. The results show that the recovery failure probability for wrong screen selection was the lowest among the human error modes, which means that most human errors related to wrong screen selection can be recovered. On the other hand, the recovery failure probabilities of operation selection omission and delayed operation were 1.0. These results imply that once subjects omitted a task in the procedure, they had difficulty finding and recovering their errors without a supervisor's assistance. Also, wrong screen selection had an effect on delayed operation. That is, wrong screen
On the average capacity and bit error probability of wireless communication systems
Yilmaz, Ferkan
2011-12-01
Analyses of the average binary error probabilities and the average capacity of wireless communication systems over generalized fading channels have been considered separately in the past. This paper introduces a novel moment generating function-based unified expression for both the average binary error probabilities and the average capacity of single- and multiple-link communication with maximal ratio combining. It is worth noting that the generic unified expression offered in this paper can be easily calculated and is applicable to a wide variety of fading scenarios; the mathematical formalism is illustrated with the generalized Gamma fading distribution in order to validate the correctness of our newly derived results. © 2011 IEEE.
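The MGF approach can be illustrated in its simplest setting, BPSK over a single Rayleigh fading link, where the MGF-form integral reproduces the classical closed-form result. This is our illustrative special case, not the paper's generalized Gamma, multi-link setting:

```python
import math

def mgf_rayleigh(s, avg_snr):
    """MGF of the instantaneous SNR of a Rayleigh fading link:
    M(s) = 1 / (1 - s * avg_snr)."""
    return 1.0 / (1.0 - s * avg_snr)

def avg_bep_bpsk(avg_snr, n_steps=20000):
    """MGF-form average bit error probability of BPSK,
    Pb = (1/pi) * integral over (0, pi/2) of M(-1/sin^2 theta) d(theta),
    evaluated with the midpoint rule."""
    h = (math.pi / 2) / n_steps
    total = 0.0
    for k in range(n_steps):
        theta = (k + 0.5) * h
        total += mgf_rayleigh(-1.0 / math.sin(theta) ** 2, avg_snr)
    return total * h / math.pi

gamma_bar = 10.0
numerical = avg_bep_bpsk(gamma_bar)
closed_form = 0.5 * (1.0 - math.sqrt(gamma_bar / (1.0 + gamma_bar)))
print(f"MGF integral: {numerical:.6f}  closed form: {closed_form:.6f}")
```

The appeal of the MGF form is exactly this: swapping in the MGF of another fading distribution (e.g. generalized Gamma) changes only `mgf_rayleigh`, while the finite-range integral stays the same.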
Sensitivity analysis using probability bounding
International Nuclear Information System (INIS)
Ferson, Scott; Troy Tucker, W.
2006-01-01
Probability bounds analysis (PBA) provides analysts a convenient means to characterize the neighborhood of possible results that would be obtained from plausible alternative inputs in probabilistic calculations. We show the relationship between PBA and the methods of interval analysis and probabilistic uncertainty analysis from which it is jointly derived, and indicate how the method can be used to assess the quality of probabilistic models such as those developed in Monte Carlo simulations for risk analyses. We also illustrate how a sensitivity analysis can be conducted within a PBA by pinching inputs to precise distributions or real values
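The flavor of bounding and pinching can be shown in the simplest scalar case: the Fréchet bounds on the conjunction of two events whose dependence is unknown. PBA proper propagates whole bounded distributions (p-boxes); this degenerate point-probability version is only illustrative:

```python
def and_bounds(p_a, p_b):
    """Frechet bounds on P(A and B) when the dependence between A and B
    is unknown: [max(0, a + b - 1), min(a, b)]."""
    return max(0.0, p_a + p_b - 1.0), min(p_a, p_b)

def and_independent(p_a, p_b):
    """'Pinching' to a precise independence assumption collapses the
    interval to a single value."""
    return p_a * p_b

a, b = 0.7, 0.8
lo, hi = and_bounds(a, b)
pinched = and_independent(a, b)
print(f"P(A and B) in [{lo:.2f}, {hi:.2f}] with no dependence assumption")
print(f"P(A and B) = {pinched:.2f} after pinching to independence")
```

The width of the interval measures how much the result hinges on the unstated dependence assumption; a sensitivity analysis in PBA asks how much the output interval narrows when one input at a time is pinched like this.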
Douglas, Julie A; Skol, Andrew D; Boehnke, Michael
2002-02-01
Gene-mapping studies routinely rely on checking for Mendelian transmission of marker alleles in a pedigree, as a means of screening for genotyping errors and mutations, with the implicit assumption that, if a pedigree is consistent with Mendel's laws of inheritance, then there are no genotyping errors. However, the occurrence of inheritance inconsistencies alone is an inadequate measure of the number of genotyping errors, since the rate of occurrence depends on the number and relationships of genotyped pedigree members, the type of errors, and the distribution of marker-allele frequencies. In this article, we calculate the expected probability of detection of a genotyping error or mutation as an inheritance inconsistency in nuclear-family data, as a function of both the number of genotyped parents and offspring and the marker-allele frequency distribution. Through computer simulation, we explore the sensitivity of our analytic calculations to the underlying error model. Under a random-allele-error model, we find that detection rates are 51%-77% for multiallelic markers and 13%-75% for biallelic markers; detection rates are generally lower when the error occurs in a parent than in an offspring, unless a large number of offspring are genotyped. Errors are especially difficult to detect for biallelic markers with equally frequent alleles, even when both parents are genotyped; in this case, the maximum detection rate is 34% for four-person nuclear families. Error detection in families in which parents are not genotyped is limited, even with multiallelic markers. Given these results, we recommend that additional error checking (e.g., on the basis of multipoint analysis) be performed, beyond routine checking for Mendelian consistency. Furthermore, our results permit assessment of the plausibility of an observed number of inheritance inconsistencies for a family, allowing the detection of likely pedigree errors, rather than genotyping errors, in the early stages of a genome scan.
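The limited detectability described above can be explored with a small Monte Carlo sketch of the random-allele-error model for a biallelic marker in a nuclear family with both parents genotyped. All parameters are illustrative (the paper derives these probabilities analytically), and details of the error model change the resulting rate:

```python
import random

# Monte Carlo sketch of the random-allele-error model for a biallelic
# marker in a nuclear family with both parents genotyped. Parameters are
# illustrative; the paper derives detection probabilities analytically,
# and details of the error model (e.g., whether an erroneous draw may
# reproduce the original allele) change the resulting rate.

def detection_rate(p=0.5, n_kids=2, trials=20000, seed=1):
    rng = random.Random(seed)
    draw = lambda: 'A' if rng.random() < p else 'a'
    detected = 0
    for _ in range(trials):
        mom, dad = (draw(), draw()), (draw(), draw())
        kids = [(rng.choice(mom), rng.choice(dad)) for _ in range(n_kids)]
        # Random-allele error: overwrite one allele of one offspring
        # with a fresh draw from the population allele frequencies.
        i = rng.randrange(n_kids)
        k = list(kids[i])
        k[rng.randrange(2)] = draw()
        kids[i] = tuple(k)
        # Mendelian consistency: each child must be able to receive one
        # allele from the mother and the other from the father.
        ok = all((k1 in mom and k2 in dad) or (k2 in mom and k1 in dad)
                 for k1, k2 in kids)
        detected += not ok
    return detected / trials

print(detection_rate())  # low for equally frequent biallelic alleles
```

Under this particular model most single-allele errors remain Mendelian-consistent, which is the paper's central caution.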
Demonstration Integrated Knowledge-Based System for Estimating Human Error Probabilities
Energy Technology Data Exchange (ETDEWEB)
Auflick, Jack L.
1999-04-21
Human Reliability Analysis (HRA) currently comprises at least 40 different methods that are used to analyze, predict, and evaluate human performance in probabilistic terms. Systematic HRAs allow analysts to examine human-machine relationships, identify error-likely situations, and provide estimates of relative frequencies for human errors on critical tasks, highlighting the most beneficial areas for system improvements. Unfortunately, each HRA method takes a different philosophical approach, thereby producing estimates of human error probabilities (HEPs) that are a better or worse match to the error-likely situation of interest. Poor selection of methodology, or improper application of techniques, can produce invalid HEP estimates, and such erroneous estimation of potential human failure could have severe consequences in terms of the estimated occurrence of injury, death, and/or property damage.
Automatic Error Analysis Using Intervals
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
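The comparison the authors draw, interval enclosures versus standard first-order error propagation, can be sketched as follows. This is not the INTLAB code referenced above; the test function and brute-force enclosure are illustrative stand-ins for true directed-rounding interval arithmetic:

```python
import math

# Hedged illustration of interval versus first-order error analysis (not
# the INTLAB code referenced above). The test function and the brute-force
# enclosure below stand in for true directed-rounding interval arithmetic.

def f(x):
    return x * math.exp(-x)

def enclosure(lo, hi, n=1000):
    """Approximate range of f over [lo, hi] by dense sampling; a real
    interval package would compute a guaranteed enclosure instead."""
    ys = [f(lo + (hi - lo) * i / n) for i in range(n + 1)]
    return min(ys), max(ys)

x0, dx = 0.5, 0.05
y_lo, y_hi = enclosure(x0 - dx, x0 + dx)
# First-order propagation: dy ~ |f'(x0)| * dx, with f'(x) = (1 - x) * exp(-x)
dy = abs((1 - x0) * math.exp(-x0)) * dx
print((y_lo, y_hi))              # range of f over the input interval
print((f(x0) - dy, f(x0) + dy))  # first-order propagated band
```

For this smooth function the two bands nearly coincide; the interval approach pays off on complicated formulas where hand-deriving the propagation terms is tedious.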
Quantitative analysis of error mode, error effect and criticality
International Nuclear Information System (INIS)
Li Pengcheng; Zhang Li; Xiao Dongsheng; Chen Guohua
2009-01-01
The quantitative method of human error mode, effect and criticality analysis is developed in order to reach the ultimate goal of Probabilistic Safety Assessment. A criticality identification matrix of human error modes and tasks is built to identify the critical human error modes and tasks, and the critical organizational root causes, on the basis of the identified human error probability, the error effect probability, and the criticality index of the error effect. This makes it possible to take targeted measures to reduce and prevent the occurrence of the critical human error modes and tasks. Finally, the application of the technique is explained through an application example. (authors)
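A criticality ranking of the kind described can be sketched as below. The product form of the criticality index and all of the numbers are assumptions for illustration, not the authors' exact scheme:

```python
# Illustrative criticality ranking in the spirit of the paper: for each
# human error mode, a criticality index is formed from the human error
# probability (HEP), the error effect probability, and a severity weight,
# and modes are ranked to pick out critical ones. The product form of the
# index and the numbers are assumptions, not the authors' exact scheme.

error_modes = {
    # mode: (HEP, P(effect | error), severity weight)
    "omission":       (5e-3, 0.8, 10),
    "wrong object":   (1e-3, 0.5, 6),
    "delayed action": (2e-3, 0.3, 4),
}

criticality = {m: hep * pe * sev for m, (hep, pe, sev) in error_modes.items()}
ranked = sorted(criticality, key=criticality.get, reverse=True)
print(ranked)  # modes ordered from most to least critical
```

In a full analysis the ranking would be crossed with tasks to form the criticality identification matrix the abstract describes.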
ATC operational error analysis.
1972-01-01
The primary causes of operational errors are discussed and the effects of these errors on an ATC system's performance are described. No attempt is made to specify possible error models for the spectrum of blunders that can occur although previous res...
On error probability exponents of many hypotheses optimal testing ...
African Journals Online (AJOL)
In this paper we study a model of hypothesis testing for two simple homogeneous stationary Markov chains with a finite number of states, having different distributions drawn from four possible transition probabilities. For solving this problem we apply the method of types and large deviation techniques (LDT).
Some aspects of statistical modeling of human-error probability
International Nuclear Information System (INIS)
Prairie, R.R.
1982-01-01
Human reliability analyses (HRAs) are often performed as part of risk assessment and reliability projects. Recent events in nuclear power have shown the potential importance of the human element. There are several on-going efforts in the US and elsewhere with the purpose of modeling human error such that the human contribution can be incorporated into an overall risk assessment associated with one or more aspects of nuclear power. The effort described here uses the HRA event tree to quantify and model the human contribution to risk. As an example, risk analyses are being prepared for several nuclear power plants as part of the Interim Reliability Assessment Program (IREP). In this process the risk analyst selects the elements of the fault tree to which human error could contribute, and then asks the human factors (HF) analyst to perform an HRA on each such element.
International Nuclear Information System (INIS)
Nascimento, C.S. do; Mesquita, R.N. de
2009-01-01
Recent studies point to human error as an important factor in many industrial and nuclear accidents: Three Mile Island (1979), Bhopal (1984), Chernobyl and Challenger (1986) are classic examples. The human contribution to these accidents may be better understood and analyzed using Human Reliability Analysis (HRA), which has come to be an essential part of the Probabilistic Safety Analysis (PSA) of nuclear plants. Both HRA and PSA depend on Human Error Probabilities (HEPs) for quantitative analysis. These probabilities are strongly affected by Performance Shaping Factors (PSFs), which have a direct effect on human behavior and thus shape HEPs according to specific environmental conditions and the personal characteristics of the individuals responsible for the actions. This PSF dependence raises a serious data availability problem, as it renders the few existing databases either too generic or too specific. Moreover, most nuclear plants do not keep historical records of human error occurrences. Therefore, in order to overcome this data shortage, a methodology based on fuzzy inference and expert judgment was employed in this paper to determine human error occurrence probabilities and to evaluate the PSFs affecting actions performed by operators in a nuclear power plant (the IEA-R1 nuclear reactor). The obtained HEP values were compared with reference tabled data from the current literature in order to show the coherence and validity of the method. This comparison leads to the conclusion that the results of this work can be employed in both HRA and PSA, enabling efficient identification of potential improvements in plant safety conditions, operational procedures, and local working conditions. (author)
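The fuzzy-inference step can be sketched with a single PSF and three rules. The membership functions, rule consequents, and HEP values below are invented for illustration; the system built for the IEA-R1 reactor uses expert-elicited PSFs and a richer rule base:

```python
# Minimal fuzzy-inference sketch for HEP estimation. The membership
# functions, rule base, and consequent HEP values are invented for
# illustration; the paper's actual expert-judgment-based system is richer.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer_hep(stress):
    """Map a stress PSF score (0-10) to an HEP with three Sugeno-style rules."""
    low = tri(stress, -1, 0, 5)
    med = tri(stress, 0, 5, 10)
    high = tri(stress, 5, 10, 11)
    # Rule consequents: assumed representative HEPs for each stress level
    num = low * 1e-4 + med * 1e-3 + high * 1e-2
    return num / (low + med + high)  # weighted-average defuzzification

print(infer_hep(7.5))  # HEP between the "medium" and "high" consequents
```

Fuzzy interpolation between tabled anchor HEPs is one way to soften the generic-versus-specific database problem the abstract describes.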
Probable sources of errors in radiation therapy (abstract)
International Nuclear Information System (INIS)
Khan, U.H.
1998-01-01
It is a fact that some errors always occur in dose-volume prescription, management of the radiation beam, derivation of exposure, treatment planning, and finally the treatment of the patient (a three-dimensional subject). This paper highlights all the sources of error and the relevant methods to decrease or eliminate them, thus improving overall therapeutic efficiency and accuracy. Radiotherapy is a comprehensive teamwork of the radiotherapist, medical radiation physicist, medical technologist and the patient. All the links in the whole chain of radiotherapy are equally important and are duly considered in the paper. The decision between palliative and radical treatment is based on the nature and extent of the disease, its site, stage and grade, the length of the history of the condition, biopsy reports, etc. This may entail certain uncertainties in tumor volume, quality and quantity of radiation, and dose fractionation, which may be under- or over-estimated. An effort has been made to guide the radiotherapist in avoiding the pitfalls in the arena of radiotherapy. (author)
Bounds on the Error Probability of Raptor Codes
Lázaro, Francisco; Liva, Gianluigi; Paolini, Enrico; Bauch, Gerhard
2016-01-01
In this paper q-ary Raptor codes under ML decoding are considered. An upper bound on the probability of decoding failure is derived using the weight enumerator of the outer code, or its expected weight enumerator if the outer code is drawn randomly from some ensemble of codes. The bound is shown to be tight by means of simulations. This bound provides a new insight into Raptor codes since it shows how Raptor codes can be analyzed similarly to a classical fixed-rate serial concatenation.
International Nuclear Information System (INIS)
Jang, Inseok; Kim, Ar Ryum; Jung, Wondea; Seong, Poong Hyun
2014-01-01
Highlights: • Many researchers have tried to understand the human recovery process or its steps. • Modeling the human recovery process is not sufficient for application to HRA. • The operation environment of MCRs in NPPs has changed by adopting new HSIs. • Recovery failure probability in a soft control operation environment is investigated. • Recovery failure probability here would be important evidence for expert judgment. - Abstract: It is well known that probabilistic safety assessments (PSAs) today consider not just hardware failures and environmental events that can impact upon risk, but also human error contributions. Consequently, the focus of reliability and performance management has been on the prevention of human errors and failures rather than the recovery of human errors. However, the recovery of human errors is as important as their prevention for the safe operation of nuclear power plants (NPPs). For this reason, many researchers have tried to characterize the human recovery process or its steps. However, modeling the human recovery process is not sufficient on its own for application to human reliability analysis (HRA), which requires human error and recovery probabilities. In this study, therefore, human error recovery failure probabilities based on predefined human error modes were investigated by conducting experiments in an operation mockup of advanced/digital main control rooms (MCRs) in NPPs. To this end, 48 subjects majoring in nuclear engineering participated in the experiments. In the experiments, using a developed accident scenario based on tasks from the standard post trip action (SPTA), the steam generator tube rupture (SGTR), and predominant soft control tasks derived from the loss of coolant accident (LOCA) and the excess steam demand event (ESDE), all error detection and recovery data based on human error modes were checked with the performance sheet and the statistical analysis of error recovery/detection was then
Energy Technology Data Exchange (ETDEWEB)
Jang, Seunghyun; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of)
2016-10-15
Human failure events (HFEs) are considered in the development of system fault trees as well as accident sequence event trees as part of Probabilistic Safety Assessment (PSA). Several methods are used for analyzing human error, such as the Technique for Human Error Rate Prediction (THERP), Human Cognitive Reliability (HCR), and Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H), and new methods for human reliability analysis (HRA) are currently under development. This paper presents a dynamic HRA method for assessing human failure events, and an estimation of the human error probability for the filtered containment venting system (FCVS) is performed. The action associated with implementation of containment venting during a station blackout sequence is used as an example. In this report, the dynamic HRA method was used to analyze the FCVS-related operator action. The distributions of the required time and the available time were developed by the MAAP code and LHS sampling. Though the numerical calculations given here are only for illustrative purposes, the dynamic HRA method can be a useful tool for estimating human error probabilities, and it can be applied to any kind of operator action, including severe accident management strategies.
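The essence of the dynamic HRA calculation described above, taking the HEP as the probability that the required time exceeds the available time, can be sketched with simple Monte Carlo sampling. The lognormal parameters below are invented; in the paper these distributions come from MAAP code runs and LHS sampling:

```python
import random

# Hedged sketch of the time-based dynamic HRA idea: HEP is estimated as
# P(required time > available time), with both times sampled from assumed
# distributions. The lognormal parameters are invented for illustration;
# the paper develops these distributions from MAAP runs and LHS sampling.

def dynamic_hep(n=100_000, seed=7):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        required = rng.lognormvariate(3.0, 0.4)   # minutes to perform action
        available = rng.lognormvariate(3.5, 0.3)  # minutes before damage
        failures += required > available
    return failures / n

print(dynamic_hep())  # HEP estimate under the assumed time distributions
```

With lognormal times this probability also has a closed form via the normal difference of the log-times, which makes the sketch easy to check.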
Klaus, Christian A; Carrasco, Luis E; Goldberg, Daniel W; Henry, Kevin A; Sherman, Recinda L
2015-09-15
The utility of patient attributes associated with the spatiotemporal analysis of medical records lies not just in their values but also in the strength of association between them. Estimating the extent to which a hierarchy of conditional probability exists among patient attribute associations, such as patient identifying fields, patient and date of diagnosis, and patient and address at diagnosis, is fundamental to estimating the strength of association between patient and geocode, and between patient and enumeration area. We propose a hierarchy for the attribute associations within medical records that enable spatiotemporal relationships. We also present a set of metrics that store attribute association error probability (AAEP), to estimate error probability for all attribute associations upon which certainty in a patient geocode depends. A series of experiments was undertaken to understand how error estimation could be operationalized within health data and what levels of AAEP reveal themselves in real data using these methods. Specifically, the goals of this evaluation were to (1) assess whether our error assessment techniques could be implemented by a population-based cancer registry; (2) apply the techniques to real data from a large health data agency and characterize the observed levels of AAEP; and (3) demonstrate how detected AAEP might impact spatiotemporal health research. We present an evaluation of AAEP metrics generated for cancer cases in a North Carolina county. We show examples of how we estimated AAEP for selected attribute associations and circumstances. We demonstrate the distribution of AAEP in our case sample across attribute associations, and show ways in which disease-registry-specific operations influence the prevalence of AAEP estimates for specific attribute associations. The effort to detect and store estimates of AAEP is worthwhile because of the increase in confidence fostered by the attribute association level approach to the
The relative impact of sizing errors on steam generator tube failure probability
International Nuclear Information System (INIS)
Cizelj, L.; Dvorsek, T.
1998-01-01
The Outside Diameter Stress Corrosion Cracking (ODSCC) at tube support plates is currently the major degradation mechanism affecting the steam generator tubes made of Inconel 600. This caused development and licensing of degradation specific maintenance approaches, which addressed two main failure modes of the degraded piping: tube rupture; and excessive leakage through degraded tubes. A methodology aiming at assessing the efficiency of a given set of possible maintenance approaches has already been proposed by the authors. It pointed out better performance of the degradation specific over generic approaches in (1) lower probability of single and multiple steam generator tube rupture (SGTR), (2) lower estimated accidental leak rates and (3) less tubes plugged. A sensitivity analysis was also performed pointing out the relative contributions of uncertain input parameters to the tube rupture probabilities. The dominant contribution was assigned to the uncertainties inherent to the regression models used to correlate the defect size and tube burst pressure. The uncertainties, which can be estimated from the in-service inspections, are further analysed in this paper. The defect growth was found to have significant and to some extent unrealistic impact on the probability of single tube rupture. Since the defect growth estimates were based on the past inspection records they strongly depend on the sizing errors. Therefore, an attempt was made to filter out the sizing errors and to arrive at more realistic estimates of the defect growth. The impact of different assumptions regarding sizing errors on the tube rupture probability was studied using a realistic numerical example. The data used is obtained from a series of inspection results from Krsko NPP with 2 Westinghouse D-4 steam generators. The results obtained are considered useful in safety assessment and maintenance of affected steam generators. (author)
Sporadic error probability due to alpha particles in dynamic memories of various technologies
International Nuclear Information System (INIS)
Edwards, D.G.
1980-01-01
The sensitivity of MOS memory components to errors induced by alpha particles is expected to increase with integration level. The soft error rate of a 65-kbit VMOS memory has been compared experimentally with that of three field-proven 16-kbit designs. The technological and design advantages of the VMOS RAM ensure an error rate which is lower than those of the 16-kbit memories. Calculation of the error probability for the 65-kbit RAM and comparison with the measurements show that for large duty cycles single particle hits lead to sensing errors and for small duty cycles cell errors caused by multiple hits predominate. (Auth.)
Extracting and Converting Quantitative Data into Human Error Probabilities
Energy Technology Data Exchange (ETDEWEB)
Tuan Q. Tran; Ronald L. Boring; Jeffrey C. Joe; Candice D. Griffith
2007-08-01
This paper discusses a proposed method using a combination of advanced statistical approaches (e.g., meta-analysis, regression, structural equation modeling) that will not only convert different empirical results into a common metric for scaling individual PSF effects, but will also examine the complex interrelationships among PSFs. Furthermore, the paper discusses how the derived statistical estimates (i.e., effect sizes) can be mapped onto an HRA method (e.g., SPAR-H) to generate HEPs that can then be used in probabilistic risk assessment (PRA). The paper concludes with a discussion of the benefits of using the academic literature to assist HRA analysts in generating sound HEPs and HRA developers in validating current HRA models and formulating new ones.
International Nuclear Information System (INIS)
Stillwell, W.G.; Seaver, D.A.; Schwartz, J.P.
1982-05-01
This report reviews probability assessment and psychological scaling techniques that could be used to estimate human error probabilities (HEPs) in nuclear power plant operations. The techniques rely on expert opinion and can be used to estimate HEPs where data do not exist or are inadequate. These techniques have been used in various other contexts and have been shown to produce reasonably accurate probabilities. Some problems do exist, and limitations are discussed. Additional topics covered include methods for combining estimates from multiple experts, the effects of training on probability estimates, and some ideas on structuring the relationship between performance shaping factors and HEPs. Preliminary recommendations are provided along with cautions regarding the costs of implementing the recommendations. Additional research is required before definitive recommendations can be made
The Maximum Error Probability Criterion, Random Encoder, and Feedback, in Multiple Input Channels
Directory of Open Access Journals (Sweden)
Ning Cai
2014-02-01
For a multiple input channel, one may define different capacity regions, according to the criterion of error, the type of code, and the presence of feedback. In this paper, we aim to draw a complete picture of the relations among these different capacity regions. To this end, we first prove that the average-error-probability capacity region of a multiple input channel can be achieved by a random code under the criterion of maximum error probability. Moreover, we show that for a non-deterministic multiple input channel with feedback, the capacity regions are the same under the two different error criteria. In addition, we discuss two special classes of channels to shed light on the relation of different capacity regions. In particular, to illustrate the role of feedback, we provide a class of MAC for which feedback may enlarge maximum-error-probability capacity regions, but not average-error capacity regions. Besides, we present a class of MAC as an example for which the maximum-error-probability capacity regions are strictly smaller than the average-error-probability capacity regions (the first example showing this was due to G. Dueck). Differently from G. Dueck's enlightening example, in which a deterministic MAC was considered, our example includes and further generalizes G. Dueck's example by taking both deterministic and non-deterministic MACs into account. Finally, we extend our results for a discrete memoryless two-input channel to compound and arbitrarily varying MACs, and to MACs with more than two inputs.
Real analysis and probability solutions to problems
Ash, Robert P
1972-01-01
Real Analysis and Probability: Solutions to Problems presents solutions to problems in real analysis and probability. Topics covered range from measure and integration theory to functional analysis and basic concepts of probability; the interplay between measure theory and topology; conditional probability and expectation; the central limit theorem; and strong laws of large numbers in terms of martingale theory. Comprised of eight chapters, this volume begins with problems and solutions for the theory of measure and integration, followed by various applications of the basic integration theory.
Energy Technology Data Exchange (ETDEWEB)
Wood, William Monford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-02-23
A systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities is presented, with suggestions for improving the methodology of obtaining quantitative information from radiographed objects.
Rothmann, Mark
2005-01-01
When testing the equality of means from two different populations, a t-test or a large-sample normal test is typically performed. For these tests, when the sample size or design for the second sample depends on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We examine the impact on the type I error probabilities for two confidence interval procedures and for procedures using test statistics when the design for the second sample or experiment depends on the results from the first sample or experiment (or series of experiments). Ways of controlling a desired maximum type I error probability or a desired type I error rate are discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials, where the use of a placebo is unethical.
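The alteration of the type I error when a second stage is added conditionally on the first can be demonstrated with a small simulation: under the null, test after the first sample at a nominal one-sided alpha of 0.05 and, if not significant, add an equal-sized second sample and test the pooled data at the same nominal level. The setup is illustrative, not one of the paper's specific procedures:

```python
import math
import random
import statistics

# Under H0 (mean 0, known sigma = 1), test after a first sample at a
# nominal one-sided alpha = 0.05; if not significant, add an equal-sized
# second sample and naively test the pooled data at the same nominal
# level. Illustrative setup, not one of the paper's specific procedures.

def two_stage_type1(n=50, trials=50_000, z_crit=1.645, seed=3):
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        s1 = [rng.gauss(0, 1) for _ in range(n)]
        z1 = statistics.fmean(s1) * math.sqrt(n)
        if z1 > z_crit:          # reject at stage 1
            rejections += 1
            continue
        s2 = [rng.gauss(0, 1) for _ in range(n)]  # data-dependent stage 2
        z = statistics.fmean(s1 + s2) * math.sqrt(2 * n)
        rejections += z > z_crit  # naive pooled test at the same level
    return rejections / trials

print(two_stage_type1())  # noticeably above the nominal 0.05
```

With two data-dependent chances to reject at nominal 0.05, the realized type I error lands around 0.08 here, which is the kind of inflation the paper proposes to control.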
Quantitative estimation of the human error probability during soft control operations
International Nuclear Information System (INIS)
Lee, Seung Jun; Kim, Jaewhan; Jung, Wondea
2013-01-01
Highlights: ► An HRA method to evaluate execution HEP for soft control operations was proposed. ► The soft control tasks were analyzed and design-related influencing factors were identified. ► An application to evaluate the effects of soft controls was performed. - Abstract: In this work, a method was proposed for quantifying human errors that can occur during operation executions using soft controls. Soft controls of advanced main control rooms have totally different features from conventional controls, and thus they may have different human error modes and occurrence probabilities. It is important to identify the human error modes and quantify the error probability for evaluating the reliability of the system and preventing errors. This work suggests an evaluation framework for quantifying the execution error probability using soft controls. In the application result, it was observed that the human error probabilities of soft controls showed both positive and negative results compared to the conventional controls according to the design quality of advanced main control rooms
Human Error: A Concept Analysis
Hansen, Frederick D.
2007-01-01
Human error is the subject of research in almost every industry and profession of our times. The term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition of human error or how to prevent it. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed, and a definition of human error is offered.
ERROR OCCURRENCE PROBABILITY OF TYPE I AND II IN MONITORING OF A SEEDER-FERTILIZER
Directory of Open Access Journals (Sweden)
W. G. Vale
2016-07-01
Monitoring seeder-fertilizer performance throughout grain sowing is essential to ensure proper operation and to determine when the operation should be paused for intervention. The performance of the seeder-fertilizer can be analyzed through individual-values control charts, which detect the presence of assignable causes during seeding, making them an important analysis and management tool. This paper therefore evaluates the probability of occurrence of type I and type II errors in the operational performance analysis of a seeder-fertilizer, using control limits at one (1σ), two (2σ) and three (3σ) multiples of the standard deviation. The experiment was performed in a rural area of the county of Sinop, MT, Brazil, during the 2014/15 crop season. The experimental design was based on statistical quality control logic, monitoring the variables throughout the operational course. A total of 120 sampling points were collected, 60 per day (at random moments) for each seeding type over a period of two days, for each variable analyzed. The quality indicators were the skidding of the seeder-fertilizer driving wheels and the overall field capacity, both collected during soybean seeding. The highest probability of type I errors occurs for all quality indicators when one (1σ) or two (2σ) standard deviation multiples are used as control limits. The driving wheel skidding, in both conventional and direct seeding, can be evaluated using the three-standard-deviation (3σ) limit. The overall field capacity in the conventional seeding system can be evaluated using the three-standard-deviation (3σ) limit, and in direct seeding using the two-standard-deviation (2σ) limit.
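The type I probabilities behind these recommendations follow directly from the normal tail areas outside k-sigma control limits; a quick check, assuming normally distributed indicators as individual-values charts do:

```python
import math

# Type I (false alarm) probability for individual-values control charts
# with limits at k standard deviations, assuming a normally distributed
# quality indicator: alpha = 2 * (1 - Phi(k)). This reproduces why 1-sigma
# and 2-sigma limits flag far more false "special causes" than 3-sigma.

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

for k in (1, 2, 3):
    alpha = 2 * (1 - phi(k))
    print(f"{k} sigma: type I probability = {alpha:.4f}")
```

With 1-sigma limits nearly a third of in-control points are flagged as special causes, which is why the wider 3-sigma limits are preferred for most of these indicators.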
Analysis of Medication Error Reports
Energy Technology Data Exchange (ETDEWEB)
Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.
2004-11-15
In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.
Probability analysis of nuclear power plant hazards
International Nuclear Information System (INIS)
Kovacs, Z.
1985-01-01
The probability analysis of risk used for quantifying the risk of complex technological systems, especially nuclear power plants, is described. Risk is defined as the product of the probability of the occurrence of a dangerous event and the significance of its consequences. The process of the analysis may be divided into the stage of power plant analysis up to the point of release of harmful material into the environment (reliability analysis) and the stage of the analysis of the consequences of this release and the assessment of the risk. The sequence of operations in the individual stages is characterized. The tasks Czechoslovakia faces in the development of the probability analysis of risk are listed, and a composition is recommended for the work team coping with the task. (J.C.)
Having Fun with Error Analysis
Siegel, Peter
2007-01-01
We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…
Modeling the probability distribution of positional errors incurred by residential address geocoding
Directory of Open Access Journals (Sweden)
Mazumdar Soumya
2007-01-01
Full Text Available Abstract Background The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Results Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (>15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that could not be fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Conclusion Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.
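The mixture-of-t conclusion lends itself to a quick numerical sketch. The following toy simulation draws positional errors from a two-component bivariate t mixture and reports the median error length; the component weights, scales, and degrees of freedom are invented for illustration and are not the study's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

def bivariate_t(n, df, scale, rng):
    """Draw n samples from a zero-mean bivariate t: Gaussian / sqrt(chi2/df)."""
    g = rng.normal(size=(n, 2)) * scale
    u = rng.chisquare(df, size=(n, 1)) / df
    return g / np.sqrt(u)

# Two-component mixture: a tight "good geocode" component plus a
# heavy-tailed component for the occasional large positional error.
n = 10_000
is_tail = rng.random(n) < 0.1
errors = np.where(is_tail[:, None],
                  bivariate_t(n, df=3, scale=300.0, rng=rng),   # metres
                  bivariate_t(n, df=5, scale=60.0, rng=rng))

lengths = np.hypot(errors[:, 0], errors[:, 1])
print(f"median error length: {np.median(lengths):.0f} m")
```

The heavy-tailed component plays the role of the outliers that defeat a single bivariate normal or t fit.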
Measurement Error and Equating Error in Power Analysis
Phillips, Gary W.; Jiang, Tao
2016-01-01
Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
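The P_b ≈ (d_H/N)·P_s rule for systematic encoding can be checked by a short Monte Carlo sketch. The (7,4) Hamming code (d_H = 3) used here is a fixed code rather than one of the paper's randomly generated codes, so the agreement is only indicative:

```python
import itertools
import random

# Systematic generator matrix of the (7,4) Hamming code (d_H = 3).
G = [[1,0,0,0,1,1,0],
     [0,1,0,0,1,0,1],
     [0,0,1,0,0,1,1],
     [0,0,0,1,1,1,1]]

def encode(msg):
    return [sum(m*g for m, g in zip(msg, col)) % 2
            for col in zip(*G)]

# Inverse mapping: codeword -> information bits (systematic encoding).
codebook = {tuple(encode(m)): m for m in itertools.product((0, 1), repeat=4)}

def ml_decode(r):
    # ML decoding over a BSC = minimum Hamming distance decoding.
    return min(codebook, key=lambda c: sum(a != b for a, b in zip(c, r)))

random.seed(1)
p, trials = 0.02, 50_000
block_err = bit_err = 0
for _ in range(trials):
    msg = [random.randint(0, 1) for _ in range(4)]
    r = [b ^ (random.random() < p) for b in encode(msg)]   # BSC(p)
    c_hat = ml_decode(r)
    if list(c_hat) != encode(msg):
        block_err += 1
        bit_err += sum(a != b for a, b in zip(codebook[c_hat], msg))

Ps = block_err / trials
Pb = bit_err / (4 * trials)
print(Ps, Pb, (3 / 7) * Ps)   # P_b should sit near (d_H/N) * P_s
```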
International Nuclear Information System (INIS)
Kim, Yochan; Park, Jinkyun; Jung, Wondea
2017-01-01
Because it has been indicated that empirical data supporting the estimates used in human reliability analysis (HRA) is insufficient, several databases have been constructed recently. To generate quantitative estimates from human reliability data, it is important to appropriately sort the erroneous behaviors found in the reliability data. Therefore, this paper proposes a scheme to classify the erroneous behaviors identified by the HuREX (Human Reliability data Extraction) framework through a review of the relevant literature. A case study of the human error probability (HEP) calculations is conducted to verify that the proposed scheme can be successfully implemented for the categorization of the erroneous behaviors and to assess whether the scheme is useful for the HEP quantification purposes. Although continuously accumulating and analyzing simulator data is desirable to secure more reliable HEPs, the resulting HEPs were insightful in several important ways with regard to human reliability in off-normal conditions. From the findings of the literature review and the case study, the potential and limitations of the proposed method are discussed. - Highlights: • A taxonomy of erroneous behaviors is proposed to estimate HEPs from a database. • The cognitive models, procedures, HRA methods, and HRA databases were reviewed. • HEPs for several types of erroneous behaviors are calculated as a case study.
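The HEP quantification details are in the paper; as a generic sketch of how counts of erroneous behaviours extracted from such a database can be turned into HEP estimates with uncertainty, the snippet below applies a Jeffreys Beta(0.5, 0.5) prior. Both the prior choice and the counts are illustrative assumptions, not necessarily what HuREX uses.

```python
from math import sqrt

# Generic sketch (not necessarily the HuREX estimator): turn counts of
# erroneous behaviours into an HEP point estimate with a Jeffreys
# Beta(0.5, 0.5) prior, which behaves sensibly even with zero errors.
def hep_jeffreys(errors, opportunities):
    a = errors + 0.5
    b = opportunities - errors + 0.5
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, sqrt(var)

# Invented counts per erroneous-behaviour category.
for errors, opportunities in [(0, 120), (3, 450), (17, 900)]:
    mean, sd = hep_jeffreys(errors, opportunities)
    print(f"{errors}/{opportunities}: HEP ~ {mean:.2e} (sd {sd:.1e})")
```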
Selected papers on analysis, probability, and statistics
Nomizu, Katsumi
1994-01-01
This book presents papers that originally appeared in the Japanese journal Sugaku. The papers fall into the general area of mathematical analysis as it pertains to probability and statistics, dynamical systems, differential equations and analytic function theory. Among the topics discussed are: stochastic differential equations, spectra of the Laplacian and Schrödinger operators, nonlinear partial differential equations which generate dissipative dynamical systems, fractal analysis on self-similar sets and the global structure of analytic functions.
On the Probability of Error and Stochastic Resonance in Discrete Memoryless Channels
2013-12-01
electromagnetic) and optical communication (light). Radio-wave and optical communication do not work well in the deep underwater environment. Thus, acoustic... Optical communication has a greater advantage in data rate, which can exceed 1 GHz. However, when used in the underwater environment, the light is rapidly... underwater wireless sensor networks. We formulated an analytic relationship that relates the average probability of error to the system's parameters, the
Quantum Probability and Spectral Analysis of Graphs
Hora, Akihito
2007-01-01
This is the first book to comprehensively cover the quantum probabilistic approach to spectral analysis of graphs. This approach has been developed by the authors and has become an interesting research area in applied mathematics and physics. The book can be used as a concise introduction to quantum probability from an algebraic aspect. Here readers will learn several powerful methods and techniques of wide applicability, which have been recently developed under the name of quantum probability. The exercises at the end of each chapter help to deepen understanding. Among the topics discussed along the way are: quantum probability and orthogonal polynomials; asymptotic spectral theory (quantum central limit theorems) for adjacency matrices; the method of quantum decomposition; notions of independence and structure of graphs; and asymptotic representation theory of the symmetric groups.
Fast Outage Probability Simulation for FSO Links with a Generalized Pointing Error Model
Ben Issaid, Chaouki
2017-02-07
Over the past few years, free-space optical (FSO) communication has gained significant attention. In fact, FSO can provide cost-effective and unlicensed links, with high-bandwidth capacity and low error rate, making it an exciting alternative to traditional wireless radio-frequency communication systems. However, the system performance is affected not only by the presence of atmospheric turbulence, which occurs due to random fluctuations in the air refractive index, but also by the existence of pointing errors. Metrics such as the outage probability, which quantifies the probability that the instantaneous signal-to-noise ratio is smaller than a given threshold, can be used to analyze the performance of this system. In this work, we consider weak and strong turbulence regimes, and we study the outage probability of an FSO communication system under a generalized pointing error model with both a nonzero boresight component and different horizontal and vertical jitter effects. More specifically, we use an importance sampling approach which is based on the exponential twisting technique to offer fast and accurate results.
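The exponential-twisting idea can be shown on a deliberately simplified model: a single Gaussian channel state rather than the paper's generalized pointing-error plus turbulence model. For a Gaussian, the exponentially twisted density is just a mean-shifted Gaussian, so importance sampling amounts to shifting the sampling mean into the rare outage region and reweighting each sample by the likelihood ratio.

```python
import math
import random

random.seed(0)

# Toy outage model: outage occurs when the Gaussian channel state Z < -a.
# Exponential twisting of N(0,1) by theta yields N(theta,1), so the tilt
# reduces to a simple mean shift.
a, n = 4.0, 100_000
theta = -a                      # shift the sampling mean into the rare region

est = 0.0
for _ in range(n):
    z = random.gauss(theta, 1.0)
    if z < -a:
        # Likelihood ratio dN(0,1)/dN(theta,1) evaluated at z.
        est += math.exp(-theta * z + theta * theta / 2.0)
est /= n

exact = 0.5 * math.erfc(a / math.sqrt(2.0))   # P(Z < -a) for Z ~ N(0,1)
print(est, exact)
```

A naive Monte Carlo run of the same size would see only a handful of outage events at this probability level (about 3e-5), while the tilted estimator concentrates its samples where the outage happens.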
International Nuclear Information System (INIS)
Jang, Inseok; Kim, Ar Ryum; Harbi, Mohamed Ali Salem Al; Lee, Seung Jun; Kang, Hyun Gook; Seong, Poong Hyun
2013-01-01
Highlights: ► The operation environment of MCRs in NPPs has changed by adopting new HSIs. ► The operation action in NPP Advanced MCRs is performed by soft control. ► Different basic human error probabilities (BHEPs) should be considered. ► BHEPs in a soft control operation environment are investigated empirically. ► This work will be helpful to verify if soft control has positive or negative effects. -- Abstract: By adopting new human–system interfaces that are based on computer-based technologies, the operation environment of main control rooms (MCRs) in nuclear power plants (NPPs) has changed. The MCRs that include these digital and computer technologies, such as large display panels, computerized procedures, soft controls, and so on, are called Advanced MCRs. Among the many features in Advanced MCRs, soft controls are an important feature because the operation action in NPP Advanced MCRs is performed by soft control. Using soft controls such as mouse control, touch screens, and so on, operators can select a specific screen, then choose the controller, and finally manipulate the devices. However, because of the different interfaces between soft control and hardwired conventional-type control, different basic human error probabilities (BHEPs) should be considered in the Human Reliability Analysis (HRA) for Advanced MCRs. Although there are many HRA methods to assess human reliabilities, such as the Technique for Human Error Rate Prediction (THERP), Accident Sequence Evaluation Program (ASEP), Human Error Assessment and Reduction Technique (HEART), Human Event Repository and Analysis (HERA), Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR), Cognitive Reliability and Error Analysis Method (CREAM), and so on, these methods have been applied to conventional MCRs, and they do not consider the new features of Advanced MCRs such as soft controls. As a result, there is an insufficient database for assessing human reliabilities in advanced
Silva, Ivair R
2018-01-15
Type I error probability spending functions are commonly used for designing sequential analysis of binomial data in clinical trials, but they are also quickly emerging for near-continuous sequential analysis of post-market drug and vaccine safety surveillance. It is well known that, for clinical trials, it is important to minimize the expected sample size when the null hypothesis is not rejected. In post-market safety surveillance, by contrast, that is not the priority: especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is more indicated for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis. Copyright © 2017 John Wiley & Sons, Ltd.
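The convex-versus-concave contrast can be made concrete with the power family of spending functions, α(t) = α·t^ρ over the information fraction t. The family and the ρ values below are common illustrative choices, not the paper's specific designs: ρ > 1 spends little error probability early (convex, clinical-trial style), while ρ < 1 front-loads it (concave).

```python
# Power-family alpha-spending: alpha(t) = ALPHA * t**rho for information
# fraction t in (0, 1]. rho > 1 gives a convex shape (spends little early);
# rho < 1 gives a concave shape (spends more early).
ALPHA = 0.05

def spending(rho, looks=5):
    cum = [ALPHA * (k / looks) ** rho for k in range(1, looks + 1)]
    incr = [cum[0]] + [b - a for a, b in zip(cum, cum[1:])]
    return cum, incr

for rho in (3.0, 1.0, 0.5):          # convex, linear, concave
    cum, incr = spending(rho)
    print(f"rho={rho}: increments={[round(x, 4) for x in incr]}")
```

Whatever the shape, the cumulative spend at the final look equals the overall α; only the allocation across looks changes.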
Error Analysis of Band Matrix Method
Taniguchi, Takeo; Soga, Akira
1984-01-01
Numerical error in the solution of the band matrix method based on the elimination method in single precision is investigated theoretically and experimentally, and the behaviour of the truncation error and the roundoff error is clarified. Some important suggestions for the useful application of the band solver are proposed by using the results of the above error analysis.
Uncertainty quantification and error analysis
Energy Technology Data Exchange (ETDEWEB)
Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL
2010-01-01
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Binomial moments of the distance distribution and the probability of undetected error
Energy Technology Data Exchange (ETDEWEB)
Barg, A. [Lucent Technologies, Murray Hill, NJ (United States). Bell Labs.; Ashikhmin, A. [Los Alamos National Lab., NM (United States)
1998-09-01
In [1] K.A.S. Abdel-Ghaffar derives a lower bound on the probability of undetected error for unrestricted codes. The proof relies implicitly on the binomial moments of the distance distribution of the code. The authors use the fact that these moments count the size of subcodes of the code to give a very simple proof of the bound in [1] by showing that it is essentially equivalent to the Singleton bound. They discuss some combinatorial connections revealed by this proof. They also discuss some improvements of this bound. Finally, they analyze asymptotics. They show that an upper bound on the undetected error exponent that corresponds to the bound of [1] improves known bounds on this function.
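For intuition, the quantity being bounded is easy to compute once a code's weight distribution A_w is known: on a BSC(p), an error pattern goes undetected exactly when it coincides with a nonzero codeword. The (7,4) Hamming code below is an illustrative example, not a code discussed in the paper.

```python
# Undetected-error probability of a linear code on a BSC(p):
#   P_ue(p) = sum_{w>=1} A_w * p**w * (1-p)**(n-w).
# Weight distribution of the (7,4) Hamming code: A_0=1, A_3=7, A_4=7, A_7=1.
n, k = 7, 4
A = {0: 1, 3: 7, 4: 7, 7: 1}

def p_ue(p):
    return sum(a * p**w * (1 - p)**(n - w) for w, a in A.items() if w > 0)

for p in (0.5, 0.1, 0.01):
    print(p, p_ue(p))
```

At p = 1/2 every pattern is equally likely, so P_ue collapses to (2^k − 1)/2^n = 15/128, a handy sanity check.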
Directory of Open Access Journals (Sweden)
Juan Mario Torres Nova
2008-09-01
Full Text Available Gaussian minimum shift keying (GMSK) and differential binary phase shift keying (DBPSK) are two digital modulation schemes which are frequently used in radio communication systems; however, there is interdependence in the use of their benefits (spectral efficiency, low bit error rate, low inter-symbol interference, etc.). Optimising one parameter creates problems for another; for example, the GMSK scheme succeeds in reducing bandwidth when introducing a Gaussian filter into an MSK (minimum shift keying) modulator in exchange for increasing inter-symbol interference in the system. The DBPSK scheme leads to lower error probability while occupying more bandwidth; it likewise facilitates synchronous data transmission due to the receiver's bit delay when recovering a signal.
A Comparative Study on Error Analysis
DEFF Research Database (Denmark)
Wu, Xiaoli; Zhang, Chun
2015-01-01
Title: A Comparative Study on Error Analysis. Subtitle: Belgian (L1) and Danish (L1) learners' use of Chinese (L2) comparative sentences in written production. Xiaoli Wu, Chun Zhang. Abstract: Making errors is an inevitable and necessary part of learning. The collection, classification and analysis … the occurrence of errors either in linguistic or pedagogical terms. The purpose of the current study is to demonstrate the theoretical and practical relevance of the error analysis approach in CFL by investigating two cases: (1) Belgian (L1) learners' use of Chinese (L2) comparative sentences in written production … A taxonomy of the grammatical errors made with comparative sentences is developed, which includes comparative item-related errors, comparative result-related errors and blend errors. The results further indicate that these errors could be attributed to negative L1 transfer and overgeneralization of grammatical rules and structures …
Error Analysis and the EFL Classroom Teaching
Xie, Fang; Jiang, Xue-mei
2007-01-01
This paper makes a study of error analysis and its implementation in EFL (English as a Foreign Language) classroom teaching. It starts by giving a systematic review of the concepts and theories concerning EA (Error Analysis); the various reasons causing errors are then comprehensively explored. The author proposes that teachers should employ…
Estimation of analysis and forecast error variances
Directory of Open Access Journals (Sweden)
Malaquias Peña
2014-11-01
Full Text Available Accurate estimates of error variances in numerical analyses and forecasts (i.e. differences between analysis or forecast fields and nature on the resolved scales) are critical for the evaluation of forecasting systems, the tuning of data assimilation (DA) systems and the proper initialisation of ensemble forecasts. Errors in observations and the difficulty of estimating them, the fact that estimates of analysis errors derived via DA schemes are influenced by the same assumptions as those used to create the analysis fields themselves, and the presumed but unknown correlation between analysis and forecast errors all make the problem difficult. In this paper, an approach is introduced for the unbiased estimation of analysis and forecast errors. The method is independent of any assumption or tuning parameter used in DA schemes. The method combines information from differences between forecast and analysis fields (‘perceived forecast errors’) with prior knowledge regarding the time evolution of (1) forecast error variance and (2) correlation between errors in analyses and forecasts. The quality of the error estimates, given the validity of the prior relationships, depends on the sample size of independent measurements of perceived errors. In a simulated forecast environment, the method is demonstrated to reproduce the true analysis and forecast error within predicted error bounds. The method is then applied to forecasts from four leading numerical weather prediction centres to assess the performance of their corresponding DA and modelling systems. Error variance estimates are qualitatively consistent with earlier studies regarding the performance of the forecast systems compared. The estimated correlation between forecast and analysis errors is found to be a useful diagnostic of the performance of observing and DA systems. In case of significant model-related errors, a methodology to decompose initial value and model-related forecast errors is also
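The core identity behind such methods can be sketched in a scalar toy problem: only the perceived forecast error d = f − a is observable, but if prior relationships supply the forecast-to-analysis error variance ratio r and the error correlation ρ, then var(d) = var_f + var_a − 2ρ·sqrt(var_f·var_a) can be inverted for the individual variances. The fixed ratio and correlation below are invented stand-ins; the paper's actual prior relationships concern the time evolution of these quantities.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy scalar set-up: analysis and forecast errors with known variance
# ratio r = var_f / var_a and correlation rho.
var_a_true, r, rho = 1.0, 4.0, 0.6
var_f_true = r * var_a_true
cov = rho * np.sqrt(var_a_true * var_f_true)
n = 200_000
err = rng.multivariate_normal([0, 0], [[var_a_true, cov], [cov, var_f_true]], n)
a_err, f_err = err[:, 0], err[:, 1]

# Only the "perceived" forecast error d = f - a = f_err - a_err is observable.
var_d = np.var(f_err - a_err)

# var(d) = var_f + var_a - 2*rho*sqrt(var_f*var_a) = var_a*(r + 1 - 2*rho*sqrt(r))
var_a_est = var_d / (r + 1 - 2 * rho * np.sqrt(r))
var_f_est = r * var_a_est
print(var_a_est, var_f_est)
```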
On Bit Error Probability and Power Optimization in Multihop Millimeter Wave Relay Systems
Chelli, Ali
2018-01-15
5G networks are expected to provide gigabit data rates to users via millimeter-wave (mmWave) communication technology. One of the major problems faced by mmWaves is that they cannot penetrate buildings. In this paper, we utilize multihop relaying to overcome the signal blockage problem in the mmWave band. The multihop relay network comprises a source device, several relay devices and a destination device and uses device-to-device communication. Relay devices redirect the source signal to avoid the obstacles existing in the propagation environment. Each device amplifies and forwards the signal to the next device, such that a multihop link ensures the connectivity between the source device and the destination device. We consider that the relay devices and the destination device are affected by external interference and investigate the bit error probability (BEP) of this multihop mmWave system. Note that the study of the BEP allows quantifying the quality of communication and identifying the impact of different parameters on the system reliability. In this way, the system parameters, such as the powers allocated to different devices, can be tuned to maximize the link reliability. We derive exact expressions for the BEP of M-ary quadrature amplitude modulation (M-QAM) and M-ary phase-shift keying (M-PSK) in terms of the multivariate Meijer’s G-function. Due to the complicated expression of the exact BEP, a tight lower-bound expression for the BEP is derived using a novel Mellin approach. Moreover, an asymptotic expression for the BEP at the high-SIR regime is derived and used to determine the diversity and the coding gain of the system. Additionally, we optimize the power allocation at different devices subject to a sum power constraint such that the BEP is minimized. Our analysis reveals that optimal power allocation allows achieving more than 3 dB gain compared to equal power allocation. This research work can serve as a framework for designing and optimizing mmWave multihop
International Nuclear Information System (INIS)
Jung, W.D.; Kim, T.W.; Park, C.K.
1991-01-01
This paper presents an integrated approach to the prediction of human error probabilities with a computer program, HREP (Human Reliability Evaluation Program). HREP is developed to provide simplicity in Human Reliability Analysis (HRA) and consistency in the obtained results. The basic assumption made in developing HREP is that human behaviour can be quantified in two separate steps. One is the diagnosis error evaluation step and the other the response error evaluation step. HREP integrates the Human Cognitive Reliability (HCR) model and the HRA Event Tree technique. The former corresponds to the Diagnosis model, and the latter to the Response model. HREP consists of HREP-IN and HREP-MAIN. HREP-IN is used to generate input files. HREP-MAIN is used to evaluate selected human errors in a given input file. HREP-MAIN is divided into three subsections: the diagnosis evaluation step, the subaction evaluation step and the modification step. The final modification step takes dependency and/or recovery factors into consideration. (author)
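A minimal sketch of the two-step quantification HREP integrates: an HCR-style Weibull non-response curve for the diagnosis step, followed by an event-tree product for the response step with a recovery factor. All coefficients and subtask HEPs below are illustrative placeholders, not values from HREP.

```python
import math

def hcr_nonresponse(t, t_half, cg=0.7, ce=0.407, cb=1.2):
    """HCR Weibull form: P(no diagnosis by time t) given median response
    time t_half. cg, ce, cb are correlation-group coefficients
    (illustrative values, not HREP's)."""
    x = (t / t_half - cg) / ce
    return math.exp(-x ** cb) if x > 0 else 1.0

# Response step: event tree as a product of subaction success probabilities,
# with one recovery factor applied to the most error-prone subaction.
subaction_heps = [3e-3, 1e-2, 5e-3]          # invented subaction HEPs
recovery = 0.1                               # recovery reduces the worst HEP
response_hep = 1 - math.prod(
    1 - (h * recovery if h == max(subaction_heps) else h)
    for h in subaction_heps)

diagnosis_hep = hcr_nonresponse(t=15.0, t_half=5.0)
total_hep = diagnosis_hep + (1 - diagnosis_hep) * response_hep
print(diagnosis_hep, response_hep, total_hep)
```

The total combines the two steps as "fail to diagnose, or diagnose and then fail to respond", which is bounded above by the simple sum of the two HEPs.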
Modelling soft error probability in firmware: A case study | Kourie ...
African Journals Online (AJOL)
This case study involves an analysis of firmware that controls explosions in mining operations. The purpose is to estimate the probability that external disruptive events (such as electro-magnetic interference) could drive the firmware into a state which results in an unintended explosion. Two probabilistic models are built, ...
Error Analysis in Mathematics. Technical Report #1012
Lai, Cheng-Fei
2012-01-01
Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…
An Error Analysis on TFL Learners’ Writings
Directory of Open Access Journals (Sweden)
Arif ÇERÇİ
2016-12-01
Full Text Available The main purpose of the present study is to identify and represent TFL learners' writing errors through error analysis. All the learners started learning Turkish as a foreign language at the A1 (beginner) level and completed the process by obtaining the C1 (advanced) certificate in TÖMER at Gaziantep University. The data of the present study were collected from 14 students' writings in proficiency exams for each level. The data were grouped as grammatical, syntactic, spelling, punctuation, and word choice errors. The ratio and categorical distributions of the identified errors were analyzed through error analysis. The data were analyzed through statistical procedures in an effort to determine whether error types differ according to the levels of the students. The errors in this study are limited to linguistic and intralingual developmental errors
Analysis and classification of human error
Rouse, W. B.; Rouse, S. H.
1983-01-01
The literature on human error is reviewed with emphasis on theories of error and classification schemes. A methodology for analysis and classification of human error is then proposed which includes a general approach to classification. Identification of possible causes and factors that contribute to the occurrence of errors is also considered. An application of the methodology to the use of checklists in the aviation domain is presented for illustrative purposes.
International Nuclear Information System (INIS)
Seaver, D.A.; Stillwell, W.G.
1983-03-01
This report describes and evaluates several procedures for using expert judgment to estimate human-error probabilities (HEPs) in nuclear power plant operations. These HEPs are currently needed for several purposes, particularly for probabilistic risk assessments. Data do not exist for estimating these HEPs, so expert judgment can provide these estimates in a timely manner. Five judgmental procedures are described here: paired comparisons, ranking and rating, direct numerical estimation, indirect numerical estimation and multiattribute utility measurement. These procedures are evaluated in terms of several criteria: quality of judgments, difficulty of data collection, empirical support, acceptability, theoretical justification, and data processing. Situational constraints such as the number of experts available, the number of HEPs to be estimated, the time available, the location of the experts, and the resources available are discussed in regard to their implications for selecting a procedure for use
Directory of Open Access Journals (Sweden)
Madeiro Francisco
2010-01-01
Full Text Available Abstract This paper presents an alternative method for determining exact expressions for the bit error probability (BEP) of modulation schemes subject to Nakagami-m fading. In this method, the Nakagami-m fading channel is seen as an additive noise channel whose noise is modeled as the ratio between Gaussian and Nakagami-m random variables. The method consists of using the cumulative distribution function of the resulting noise to obtain closed-form expressions for the BEP of modulation schemes subject to Nakagami-m fading. In particular, the proposed method is used to obtain closed-form expressions for the BEP of M-ary quadrature amplitude modulation (M-QAM), M-ary pulse amplitude modulation (M-PAM), and rectangular quadrature amplitude modulation under Nakagami-m fading. The main contribution of this paper is to show that this alternative method can be used to reduce the computational complexity for detecting signals in the presence of fading.
Symbol Error Probability of DF Relay Selection over Arbitrary Nakagami-m Fading Channels
Directory of Open Access Journals (Sweden)
George C. Alexandropoulos
2013-01-01
Full Text Available We present a new analytical expression for the moment generating function (MGF of the end-to-end signal-to-noise ratio of dual-hop decode-and-forward (DF relaying systems with relay selection when operating over Nakagami-m fading channels. The derived MGF expression, which is valid for arbitrary values of the fading parameters of both hops, is subsequently utilized to evaluate the average symbol error probability (ASEP of M-ary phase shift keying modulation for the considered DF relaying scheme under various asymmetric fading conditions. It is shown that the MGF-based ASEP performance evaluation results are in excellent agreement with equivalent ones obtained by means of computer simulations, thus validating the correctness of the presented MGF expression.
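The MGF approach itself is easy to illustrate in the single-hop case: the ASEP of M-PSK over a fading channel is ASEP = (1/π)·∫₀^((M−1)π/M) M_γ(g/sin²θ) dθ with g = sin²(π/M). The paper's contribution is the MGF of the dual-hop DF-with-selection SNR; the sketch below instead plugs in the basic point-to-point Nakagami-m MGF, M_γ(s) = (1 + s·γ̄/m)^(−m), from standard references, and checks it against the closed-form BPSK-over-Rayleigh result.

```python
import math

def asep_mpsk_nakagami(snr, m, M, steps=20_000):
    """MGF-based ASEP of M-PSK over a point-to-point Nakagami-m channel
    (single-hop illustration, not the paper's dual-hop DF MGF)."""
    g = math.sin(math.pi / M) ** 2
    upper = (M - 1) * math.pi / M
    h = upper / steps
    total = 0.0
    for i in range(steps):
        theta = (i + 0.5) * h                    # midpoint rule
        s = g / math.sin(theta) ** 2
        total += (1 + s * snr / m) ** (-m)       # Nakagami-m SNR MGF at s
    return total * h / math.pi

# Sanity check: m = 1 (Rayleigh), M = 2 (BPSK) has the closed form
#   Pb = 0.5 * (1 - sqrt(snr / (1 + snr))).
snr = 10.0
mgf_val = asep_mpsk_nakagami(snr, m=1, M=2)
exact = 0.5 * (1 - math.sqrt(snr / (1 + snr)))
print(mgf_val, exact)
```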
Meteor radar signal processing and error analysis
Kang, Chunmei
Meteor wind radar systems are a powerful tool for study of the horizontal wind field in the mesosphere and lower thermosphere (MLT). While such systems have been operated for many years, virtually no literature has focused on radar system error analysis. The instrumental error may prevent scientists from drawing correct conclusions about geophysical variability. The radar system instrumental error comes from different sources, including hardware, software, and algorithms. Radar signal processing plays an important role in a radar system, and advanced signal processing algorithms may dramatically reduce the radar system errors. In this dissertation, radar system error propagation is analyzed and several advanced signal processing algorithms are proposed to optimize the performance of the radar system without increasing the instrument costs. The first part of this dissertation is the development of a time-frequency waveform detector, which is invariant to noise level and stable over a wide range of decay rates. This detector is proposed to discriminate the underdense meteor echoes from the background white Gaussian noise. The performance of this detector is examined using Monte Carlo simulations. The resulting probability of detection is shown to outperform the often-used power and energy detectors for the same probability of false alarm. Secondly, estimators to determine the Doppler shift, the decay rate and direction of arrival (DOA) of meteors are proposed and evaluated. The performance of these estimators is compared with the analytically derived Cramer-Rao bound (CRB). The results show that the fast maximum likelihood (FML) estimator for determination of the Doppler shift and decay rate and the spatial spectral method for determination of the DOAs perform best among the estimators commonly used on other radar systems. For most cases, the mean square error (MSE) of the estimator meets the CRB above a 10 dB SNR. Thus meteor echoes with an estimated SNR below 10 dB are
Synthetic aperture interferometry: error analysis
Energy Technology Data Exchange (ETDEWEB)
Biswas, Amiya; Coupland, Jeremy
2010-07-10
Synthetic aperture interferometry (SAI) is a novel way of testing aspherics and has a potential for in-process measurement of aspherics [Appl. Opt. 42, 701 (2003)]. A method to measure steep aspherics using the SAI technique has been previously reported [Appl. Opt. 47, 1705 (2008)]. Here we investigate the computation of surface form using the SAI technique in different configurations and discuss the computational errors. A two-pass measurement strategy is proposed to reduce the computational errors, and a detailed investigation is carried out to determine the effect of alignment errors on the measurement process.
A comparison of error bounds for a nonlinear tracking system with detection probability Pd < 1.
Tong, Huisi; Zhang, Hao; Meng, Huadong; Wang, Xiqin
2012-12-14
Error bounds for nonlinear filtering are very important for performance evaluation and sensor management. This paper presents a comparative study of three error bounds for tracking filtering, when the detection probability is less than unity. One of these bounds is the random finite set (RFS) bound, which is deduced within the framework of finite set statistics. The others, which are the information reduction factor (IRF) posterior Cramer-Rao lower bound (PCRLB) and enumeration method (ENUM) PCRLB are introduced within the framework of finite vector statistics. In this paper, we deduce two propositions and prove that the RFS bound is equal to the ENUM PCRLB, while it is tighter than the IRF PCRLB, when the target exists from the beginning to the end. Considering the disappearance of existing targets and the appearance of new targets, the RFS bound is tighter than both IRF PCRLB and ENUM PCRLB with time, by introducing the uncertainty of target existence. The theory is illustrated by two nonlinear tracking applications: ballistic object tracking and bearings-only tracking. The simulation studies confirm the theory and reveal the relationship among the three bounds.
Error Analysis in the Teaching of English
Hasyim, Sunardi
2002-01-01
The main purpose of this article is to discuss the importance of error analysis in the teaching of English as a foreign language. Although errors are bad things in learning English as a foreign language, error analysis is advantageous for both learners and teachers. For learners, error analysis is needed to show them which aspects of grammar are difficult for them, whereas for teachers, it is required to evaluate whether or not they are successful in teaching English...
Error Analysis for Geotechnical Engineering.
1987-09-01
methods for algebraic and differential systems." Geotechnical Seminar Notes, Arthur D. Little, Inc., Cambridge. Hammersley, J.M. and D.C. Handscomb...lectures on choices under uncertainty. Addison-Wesley, Reading, Massachusetts. Rosenblueth, E. (1975). "Point estimates for probability moments," Proceedings
Error Analysis: Past, Present, and Future
McCloskey, George
2017-01-01
This commentary will take an historical perspective on the Kaufman Test of Educational Achievement (KTEA) error analysis, discussing where it started, where it is today, and where it may be headed in the future. In addition, the commentary will compare and contrast the KTEA error analysis procedures that are rooted in psychometric methodology and…
SIMULATED HUMAN ERROR PROBABILITY AND ITS APPLICATION TO DYNAMIC HUMAN FAILURE EVENTS
Energy Technology Data Exchange (ETDEWEB)
Herberger, Sarah M.; Boring, Ronald L.
2016-10-01
Objectives: Human reliability analysis (HRA) methods typically analyze human failure events (HFEs) at the overall task level. For dynamic HRA, it is important to model human activities at the subtask level. There exists a disconnect between the dynamic subtask level and the static task level that presents issues when modeling dynamic scenarios. For example, the SPAR-H method is typically used to calculate the human error probability (HEP) at the task level. As demonstrated in this paper, quantification in SPAR-H does not translate to the subtask level. Methods: Two different discrete distributions were generated for each SPAR-H Performance Shaping Factor (PSF) to define the frequency of PSF levels. The first was a uniform, or uninformed, distribution that assumed the frequency of each PSF level was equally likely. The second, non-continuous distribution took the frequency of PSF levels as identified from an assessment of the HERA database. These two approaches were created to identify the resulting distribution of the HEP. The resulting HEP that appears closer to the known distribution, a log-normal centered on 1E-3, is the more desirable. Each approach then has median, average and maximum HFE calculations applied. To calculate these three values, three events, A, B and C, are generated from the PSF level frequencies comprised of subtasks. The median HFE selects the median PSF level from each PSF and calculates the HEP. The average HFE takes the mean PSF level, and the maximum takes the maximum PSF level. The same data set of subtask HEPs yields starkly different HEPs when aggregated to the HFE level in SPAR-H. Results: Assuming that each PSF level in each HFE is equally likely creates an unrealistic distribution of the HEP that is centered at 1. Next, the observed frequency of PSF levels was applied, with the resulting HEP behaving log-normally with a majority of the values under 2.5% HEP. The median, average and maximum HFE calculations did yield
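The SPAR-H quantification step described above can be sketched in a few lines. The nominal HEPs and the adjustment factor applied when three or more PSFs are negative follow our understanding of the published SPAR-H method; the example multipliers are purely illustrative.

```python
def spar_h_hep(nhep, psf_multipliers):
    """SPAR-H human error probability from a nominal HEP (NHEP) and a
    list of PSF multipliers (1.0 = nominal, >1.0 = degrading)."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    negative = sum(1 for m in psf_multipliers if m > 1.0)
    if negative >= 3:
        # adjustment factor keeps the HEP below 1 when several PSFs degrade
        return nhep * composite / (nhep * (composite - 1.0) + 1.0)
    return min(1.0, nhep * composite)

# Action-type task (NHEP = 1e-3) with all eight PSFs nominal:
hep_nominal = spar_h_hep(1e-3, [1.0] * 8)          # 0.001
# Three degraded PSFs trigger the adjustment factor:
hep_degraded = spar_h_hep(1e-3, [10, 5, 2, 1, 1, 1, 1, 1])
```

Aggregating subtask HEPs computed this way to the HFE level (median, mean or maximum PSF level) is exactly where the paper observes the divergence.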
Directory of Open Access Journals (Sweden)
2012-12-01
Full Text Available Introduction: An emergency situation is one of the factors influencing human error. The aim of this research was to evaluate human error in an emergency situation of fire and explosion at an oil company warehouse in Hamadan city, applying the human error probability index (HEPI). Material and Method: First, the scenario of the fire-and-explosion emergency at the oil company warehouse was designed, and a maneuver against it was performed. The scaled questionnaire of muster for the maneuver was completed in the next stage. Collected data were analyzed to calculate the probability of success for the 18 actions required in an emergency situation, from the starting point of the muster to the last action of reaching temporary safe shelter. Result: The results showed that the highest probability of error occurrence was related to making the workplace safe (evaluation phase), with 32.4%, and the lowest probability of error occurrence was in detecting the alarm (awareness phase), with 1.8%. The highest severity of error was in the evaluation phase, and the lowest severity of error was in the awareness and recovery phases. The maximum risk level was related to evaluating exit routes, selecting one route and choosing an alternative exit route, and the minimum risk level was related to the four evaluation phases. Conclusion: To reduce the risk of errors in the exit phases of an emergency situation, the following actions are recommended based on the findings of this study: periodic evaluation of the exit phases and their modification if necessary, and conducting more maneuvers and analyzing their results, with sufficient feedback to the employees.
ERROR ANALYSIS in the TEACHING of ENGLISH
Directory of Open Access Journals (Sweden)
Sunardi Hasyim
2002-01-01
Full Text Available The main purpose of this article is to discuss the importance of error analysis in the teaching of English as a foreign language. Although errors are bad things in learning English as a foreign language, error analysis is advantageous for both learners and teachers. For learners, error analysis is needed to show them which aspects of grammar are difficult for them, whereas for teachers, it is required to evaluate whether or not they are successful in teaching English. In this article, the writer presented some English sentences containing grammatical errors. These grammatical errors were analyzed based on the theories presented by the linguists. This analysis aimed at showing the students the causes and kinds of the grammatical errors. In this way, the students are expected to increase their knowledge of English grammar. Keywords: errors, mistake, overt error, covert error, interference, overgeneralization, grammar, interlingual, intralingual, idiosyncrasies.
Notes on human error analysis and prediction
International Nuclear Information System (INIS)
Rasmussen, J.
1978-11-01
The notes comprise an introductory discussion of the role of human error analysis and prediction in industrial risk analysis. Following this introduction, different classes of human errors and their roles in industrial systems are mentioned. Problems related to the prediction of human behaviour in reliability and safety analysis are formulated, and ''criteria for analyzability'' which must be met by industrial systems so that a systematic analysis can be performed are suggested. The appendices contain illustrative case stories and a review of human error reports for the task of equipment calibration and testing as found in the US Licensee Event Reports. (author)
Experimental research on English vowel errors analysis
Directory of Open Access Journals (Sweden)
Huang Qiuhua
2016-01-01
Full Text Available Our paper analyzed relevant acoustic parameters of speech samples and compared the results with standard English pronunciation, using methods of experimental phonetics together with phonetic analysis software and statistical analysis software. We then summarized the pronunciation errors of college students through the analysis of English vowel pronunciation, finding that college students are prone to tongue-position and lip-shape errors when pronouncing vowels. Based on this analysis of pronunciation errors, we put forward targeted voice training for college students' English pronunciation, which eventually increased the students' interest in learning and improved the teaching of English phonetics.
A technique for human error analysis (ATHEANA)
Energy Technology Data Exchange (ETDEWEB)
Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W. [and others]
1996-05-01
Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.
A technique for human error analysis (ATHEANA)
International Nuclear Information System (INIS)
Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W.
1996-05-01
Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.
A Comparative Study on Error Analysis
DEFF Research Database (Denmark)
Wu, Xiaoli; Zhang, Chun
2015-01-01
Title: A Comparative Study on Error Analysis Subtitle: - Belgian (L1) and Danish (L1) learners’ use of Chinese (L2) comparative sentences in written production Xiaoli Wu, Chun Zhang Abstract: Making errors is an inevitable and necessary part of learning. The collection, classification and analysis...... of errors in the written and spoken production of L2 learners has a long tradition in L2 pedagogy. Yet, in teaching and learning Chinese as a foreign language (CFL), only a handful of studies have been made either to define the ‘error’ in a pedagogically insightful way or to empirically investigate...... the occurrence of errors either in linguistic or pedagogical terms. The purpose of the current study is to demonstrate the theoretical and practical relevance of the error analysis approach in CFL by investigating two cases - (1) Belgian (L1) learners’ use of Chinese (L2) comparative sentences in written production...
An error analysis perspective for patient alignment systems.
Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann
2013-09-01
This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.
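The error-propagation idea in the abstract above can be illustrated with a minimal sketch: independent translational error sources combine in quadrature, and a small rotational error of the tracker contributes a translation proportional to the lever arm between tracker and probe. All numbers are hypothetical, not taken from the paper.

```python
import math

def combined_error(translational_errors_mm, angular_error_rad, lever_arm_mm):
    """Combine independent translational error sources in quadrature, adding
    the translation induced by a small rotational error acting over a lever
    arm (e ~ r * theta)."""
    lever_term_mm = lever_arm_mm * angular_error_rad
    return math.sqrt(sum(e * e for e in translational_errors_mm)
                     + lever_term_mm ** 2)

# Hypothetical chain: calibration 0.5 mm, registration 0.8 mm, tracking
# 0.3 mm, plus a 0.2 degree tracker rotation seen 200 mm from the tracker:
err_mm = combined_error([0.5, 0.8, 0.3], math.radians(0.2), 200.0)
```

The lever-arm term shows why rotational tracking errors can multiply the overall error depending on the relative position of tracker and probe.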
Experimental research on English vowel errors analysis
Huang Qiuhua
2016-01-01
Our paper analyzed relevant acoustic parameters of speech samples and compared the results with standard English pronunciation, using methods of experimental phonetics together with phonetic analysis software and statistical analysis software. We then summarized the pronunciation errors of college students through the analysis of English vowel pronunciation, finding that college students are prone to tongue-position and lip-shape errors when pronouncing vow...
Yilmaz, Ferkan
2012-07-01
Analyses of the average binary error probability (ABEP) and average capacity (AC) of wireless communications systems over generalized fading channels have been considered separately in past years. This paper introduces a novel moment generating function (MGF)-based unified expression for the ABEP and AC of single and multiple link communications with maximal ratio combining. In addition, this paper proposes the hyper-Fox's H fading model as a unified fading distribution of a majority of the well-known generalized fading environments. As such, the authors offer a generic unified performance expression that can be easily calculated, and that is applicable to a wide variety of fading scenarios. The mathematical formalism is illustrated with some selected numerical examples that validate the correctness of the authors' newly derived results. © 1972-2012 IEEE.
Heart sounds analysis using probability assessment
Czech Academy of Sciences Publication Activity Database
Plešinger, Filip; Viščor, Ivo; Halámek, Josef; Jurčo, Juraj; Jurák, Pavel
2017-01-01
Roč. 38, č. 8 (2017), s. 1685-1700 ISSN 0967-3334 R&D Projects: GA ČR GAP102/12/2034; GA MŠk(CZ) LO1212; GA MŠk ED0017/01/01 Institutional support: RVO:68081731 Keywords : heart sounds * FFT * machine learning * signal averaging * probability assessment Subject RIV: FS - Medical Facilities ; Equipment OBOR OECD: Medical engineering Impact factor: 2.058, year: 2016
Toward General Analysis of Recursive Probability Models
Pless, Daniel; Luger, George
2013-01-01
There is increasing interest within the research community in the design and use of recursive probability models. Although there still remains concern about computational complexity costs and the fact that computing exact solutions can be intractable for many nonrecursive models and impossible in the general case for recursive problems, several research groups are actively developing computational techniques for recursive stochastic languages. We have developed an extension to the traditional...
Analysis of Position Error Headway Protection
1975-07-01
An analysis is developed to determine safe headway on PRT systems that use point-follower control. Periodic measurements of the position error relative to a nominal trajectory provide warning against the hazards of overspeed and unexpected stop. A co...
Error estimation in plant growth analysis
Directory of Open Access Journals (Sweden)
Andrzej Gregorczyk
2014-01-01
Full Text Available A scheme is presented for the calculation of the errors of dry matter values which occur during approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Further formulae are shown which describe the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth course of oats and maize plants are given. A critical analysis of the estimation of the obtained results has been done. The purposefulness of the joint application of statistical methods and error calculus in plant growth analysis has been ascertained.
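The RGR characteristic mentioned above, and a worst-case first-order bound on its absolute error, can be sketched as follows; the harvest weights and their measurement errors are hypothetical figures, not data from the article.

```python
import math

def rgr(w1, w2, t1, t2):
    """Relative growth rate between two harvests (dry matter w1 at time t1,
    w2 at t2): RGR = (ln w2 - ln w1) / (t2 - t1)."""
    return (math.log(w2) - math.log(w1)) / (t2 - t1)

def rgr_abs_error(w1, dw1, w2, dw2, t1, t2):
    """Worst-case first-order absolute error of RGR from the dry-matter
    errors dw1 and dw2, using d(ln w)/dw = 1/w."""
    return (dw1 / w1 + dw2 / w2) / (t2 - t1)

# Hypothetical harvests: 2 g at day 0, 8 g at day 14, weighed to 0.1 / 0.2 g:
r = rgr(2.0, 8.0, 0.0, 14.0)                       # g g^-1 day^-1
dr = rgr_abs_error(2.0, 0.1, 8.0, 0.2, 0.0, 14.0)
```

Because RGR depends on dry matter only through logarithms, its absolute error is driven by the relative errors of the weighings, which is the essence of the error calculus the article advocates.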
Orthonormal polynomials in wavefront analysis: error analysis.
Dai, Guang-Ming; Mahajan, Virendra N
2008-07-01
Zernike circle polynomials are in widespread use for wavefront analysis because of their orthogonality over a circular pupil and their representation of balanced classical aberrations. However, they are not appropriate for noncircular pupils, such as annular, hexagonal, elliptical, rectangular, and square pupils, due to their lack of orthogonality over such pupils. We emphasize the use of orthonormal polynomials for such pupils, but we show how to obtain the Zernike coefficients correctly. We illustrate that the wavefront fitting with a set of orthonormal polynomials is identical to the fitting with a corresponding set of Zernike polynomials. This is a consequence of the fact that each orthonormal polynomial is a linear combination of the Zernike polynomials. However, since the Zernike polynomials do not represent balanced aberrations for a noncircular pupil, the Zernike coefficients lack the physical significance that the orthonormal coefficients provide. We also analyze the error that arises if Zernike polynomials are used for noncircular pupils by treating them as circular pupils and illustrate it with numerical examples.
Heart sounds analysis using probability assessment.
Plesinger, F; Viscor, I; Halamek, J; Jurco, J; Jurak, P
2017-07-31
This paper describes a method for automated discrimination of heart sounds recordings according to the Physionet Challenge 2016. The goal was to decide if the recording refers to normal or abnormal heart sounds or if it is not possible to decide (i.e. 'unsure' recordings). Heart sounds S1 and S2 are detected using amplitude envelopes in the band 15-90 Hz. The averaged shape of the S1/S2 pair is computed from amplitude envelopes in five different bands (15-90 Hz; 55-150 Hz; 100-250 Hz; 200-450 Hz; 400-800 Hz). A total of 53 features are extracted from the data. The largest group of features is extracted from the statistical properties of the averaged shapes; other features are extracted from the symmetry of averaged shapes, and the last group of features is independent of S1 and S2 detection. Generated features are processed using logical rules and probability assessment, a prototype of a new machine-learning method. The method was trained using 3155 records and tested on 1277 hidden records. It resulted in a training score of 0.903 (sensitivity 0.869, specificity 0.937) and a testing score of 0.841 (sensitivity 0.770, specificity 0.913). The revised method led to a test score of 0.853 in the follow-up phase of the challenge. The presented solution achieved 7th place out of 48 competing entries in the Physionet Challenge 2016 (official phase). In addition, the PROBAfind software for probability assessment was introduced.
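The band-limited amplitude envelopes used above for S1/S2 detection can be approximated with a short NumPy sketch (FFT band-pass, full-wave rectification, moving-average smoothing). This illustrates the general technique on synthetic data; it is not the authors' implementation.

```python
import numpy as np

def band_envelope(x, fs, f_lo, f_hi, smooth_ms=20):
    """Amplitude envelope of x restricted to [f_lo, f_hi] Hz:
    FFT band-pass, full-wave rectification, moving-average smoothing."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs < f_lo) | (freqs > f_hi)] = 0.0       # zero out-of-band bins
    band = np.fft.irfft(X, n=len(x))
    win = max(1, int(fs * smooth_ms / 1000))
    return np.convolve(np.abs(band), np.ones(win) / win, mode="same")

fs = 1000
t = np.arange(fs) / fs
# A 50 Hz burst (inside the 15-90 Hz band) plus a 300 Hz tone (outside it):
x = np.where((t > 0.4) & (t < 0.5), np.sin(2 * np.pi * 50 * t), 0.0)
x = x + 0.2 * np.sin(2 * np.pi * 300 * t)
env = band_envelope(x, fs, 15, 90)
# env is large inside the burst and near zero elsewhere
```

Peaks of such envelopes in the 15-90 Hz band are what localize the S1/S2 sounds before the shape features are extracted.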
An Error Analysis on TFL Learners’ Writings
ÇERÇİ, Arif; DERMAN, Serdar; BARDAKÇI, Mehmet
2016-01-01
The main purpose of the present study is to identify and represent TFL learners’ writing errors through error analysis. All the learners started learning Turkish as a foreign language at the A1 (beginner) level and completed the process by taking the C1 (advanced) certificate in TÖMER at Gaziantep University. The data of the present study were collected from 14 students’ writings in proficiency exams for each level. The data were grouped as grammatical, syntactic, spelling, punctuation, and word choi...
14 CFR 417.224 - Probability of failure analysis.
2010-01-01
... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Probability of failure analysis. 417.224..., DEPARTMENT OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.224 Probability of failure...) Failure. For flight safety analysis purposes, a failure occurs when a launch vehicle does not complete any...
Error propagation analysis for a sensor system
International Nuclear Information System (INIS)
Yeater, M.L.; Hockenbury, R.W.; Hawkins, J.; Wilkinson, J.
1976-01-01
As part of a program to develop reliability methods for operational use with reactor sensors and protective systems, error propagation analyses are being made for each model. An example is a sensor system computer simulation model, in which the sensor system signature is convoluted with a reactor signature to show the effect of each in revealing or obscuring information contained in the other. The error propagation analysis models the system and signature uncertainties and sensitivities, whereas the simulation models the signatures and by extensive repetitions reveals the effect of errors in various reactor input or sensor response data. In the approach for the example presented, the errors accumulated by the signature (set of ''noise'' frequencies) are successively calculated as it is propagated stepwise through a system comprised of sensor and signal processing components. Additional modeling steps include a Fourier transform calculation to produce the usual power spectral density representation of the product signature, and some form of pattern recognition algorithm
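The power spectral density step mentioned at the end of the abstract can be sketched with a Hann-windowed periodogram on a synthetic "product signature" (a noise frequency line buried in white noise); the example is illustrative, not the authors' code.

```python
import numpy as np

def psd(x, fs):
    """One-sided power spectral density via a Hann-windowed periodogram."""
    w = np.hanning(len(x))
    X = np.fft.rfft(x * w)
    p = (np.abs(X) ** 2) / (fs * np.sum(w ** 2))
    p[1:-1] *= 2.0                       # fold negative frequencies
    return np.fft.rfftfreq(len(x), 1.0 / fs), p

# Synthetic signature: a 40 Hz line buried in unit-variance white noise.
fs = 512
t = np.arange(4 * fs) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 40 * t) + rng.normal(0.0, 1.0, t.size)
f, p = psd(x, fs)
peak_hz = f[np.argmax(p)]                # expected near 40 Hz
```

A representation like this makes it visible how errors propagated through the sensor and signal-processing chain reveal or obscure spectral information.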
Outage Probability Analysis of FSO Links over Foggy Channel
Esmail, Maged Abdullah
2017-02-22
Outdoor free-space optic (FSO) communication systems are sensitive to atmospheric impairments such as turbulence and fog, in addition to being subject to pointing errors. Fog is particularly severe because it induces an attenuation that may vary from a few dB up to a few hundred dB per kilometer. Pointing errors also distort the link alignment and cause signal fading. In this paper, we investigate and analyze FSO system performance under fog conditions and pointing errors in terms of outage probability. We then study the impact of several effective mitigation techniques that can improve the system performance, including multi-hop, transmit laser selection (TLS) and hybrid RF/FSO transmission. Closed-form expressions for the outage probability are derived, and practical and comprehensive numerical examples are provided to assess the obtained results. We found that the FSO system has limited performance that prevents applying FSO in wireless microcells with a 500 m minimum cell radius. The performance degrades further when pointing errors appear. Increasing the transmitted power can improve the performance under light to moderate fog; however, under thick and dense fog the improvement is negligible. Using mitigation techniques can play a major role in improving the range and outage probability.
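A rough Monte Carlo illustration of the outage computation under a simplified loss budget: outage occurs when fog attenuation over the link plus a random pointing-error loss exceeds the link margin. The Gaussian-in-dB pointing loss and all numbers are assumptions of this sketch, not the paper's closed-form results.

```python
import numpy as np

def outage_probability(margin_db, alpha_db_per_km, link_km,
                       sigma_point_db=2.0, n=200_000, seed=1):
    """Monte Carlo outage estimate: fraction of trials in which the fixed
    fog attenuation plus a random pointing-error loss exceeds the margin."""
    rng = np.random.default_rng(seed)
    pointing_loss_db = np.abs(rng.normal(0.0, sigma_point_db, n))
    total_loss_db = alpha_db_per_km * link_km + pointing_loss_db
    return float(np.mean(total_loss_db > margin_db))

# Light fog (~5 dB/km) over 500 m with a 30 dB margin, vs thick fog (~100 dB/km):
p_light = outage_probability(30.0, 5.0, 0.5)    # rare outage
p_thick = outage_probability(30.0, 100.0, 0.5)  # link essentially always down
```

The contrast between the two cases mirrors the paper's finding that extra transmit power helps under light fog but is ineffective under thick or dense fog.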
Data analysis & probability drill sheets : grades 6-8
Forest, Chris
2011-01-01
For grades 6-8, our Common Core State Standards-based resource meets the data analysis & probability concepts addressed by the NCTM standards and encourages your students to review the concepts in unique ways. Each drill sheet contains warm-up and timed drill activities for the student to practice data analysis & probability concepts.
International Nuclear Information System (INIS)
Yoshida, Yoshitaka; Ohtani, Masanori; Fujita, Yushi
2002-01-01
In a nuclear power plant, much knowledge is acquired through probabilistic safety assessment (PSA) of severe accidents, and accident management (AM) is prepared. It is necessary to evaluate the effectiveness of AM in PSA using the decision-making failure probability of the emergency organization, the operation failure probability of operators, the success criteria of AM and the reliability of AM equipment. However, there has so far been no suitable quantification method for obtaining the decision-making failure probability in PSA, because the decision-making failure of an emergency organization involves knowledge-based errors. In this work, we developed a new method for quantifying the decision-making failure probability of an emergency organization deciding an AM strategy during a severe accident at a nuclear power plant, using a cognitive analysis model, and tried to apply it to a typical pressurized water reactor (PWR) plant. As a result: (1) The method allows general analysts, who do not necessarily possess professional human factors knowledge, to quantify the decision-making failure probability for PSA by choosing suitable values of a basic failure probability and an error factor. (2) In a trial evaluation based on severe accident analysis of a typical PWR plant, the decision-making failure probabilities of six AMs were in the range of 0.23 to 0.41 using the screening evaluation method and 0.10 to 0.19 using the detailed evaluation method; in a sensitivity analysis of the conservative assumptions, the failure probability decreased by about 50%. (3) Theoretically, the failure probability from the screening evaluation method exceeds that from the detailed evaluation method with 99% probability, and for the AMs in this study it did so in 100% of cases. This result showed that the screening evaluation method is more conservative than the detailed evaluation method, and the screening evaluation method satisfied
Low Probability Tail Event Analysis and Mitigation in BPA Control Area: Task 2 Report
Energy Technology Data Exchange (ETDEWEB)
Lu, Shuai; Makarov, Yuri V.; McKinstry, Craig A.; Brothers, Alan J.; Jin, Shuangshuang
2009-09-18
Task report detailing low probability tail event analysis and mitigation in BPA control area. Tail event refers to the situation in a power system when unfavorable forecast errors of load and wind are superposed onto fast load and wind ramps, or non-wind generators falling short of scheduled output, causing the imbalance between generation and load to become very significant.
Chasing probabilities — Signaling negative and positive prediction errors across domains
DEFF Research Database (Denmark)
Meder, David; Madsen, Kristoffer H; Hulme, Oliver
2016-01-01
Adaptive actions build on internal probabilistic models of possible outcomes that are tuned according to the errors of their predictions when experiencing an actual outcome. Prediction errors (PEs) inform choice behavior across a diversity of outcome domains and dimensions, yet neuroimaging studies...... of the two. We acquired functional MRI data while volunteers performed four probabilistic reversal learning tasks which differed in terms of outcome valence (reward-seeking versus punishment-avoidance) and domain (abstract symbols versus facial expressions) of outcomes. We found that ventral striatum...
Numeracy, Literacy and Newman's Error Analysis
White, Allan Leslie
2010-01-01
Newman (1977, 1983) defined five specific literacy and numeracy skills as crucial to performance on mathematical word problems: reading, comprehension, transformation, process skills, and encoding. Newman's Error Analysis (NEA) provided a framework for considering the reasons that underlay the difficulties students experienced with mathematical…
Analysis of the interface tracking errors
International Nuclear Information System (INIS)
Cerne, G.; Tiselj, I.; Petelin, S.
2001-01-01
An important limitation of the interface-tracking algorithm is the grid density, which determines the space scale of the surface tracking. In this paper the analysis of the interface tracking errors, which occur in a dispersed flow, is performed for the VOF interface tracking method. A few simple two-fluid tests are proposed for the investigation of the interface tracking errors and their grid dependence. When the grid density becomes too coarse to follow the interface changes, the errors can be reduced either by using denser nodalization or by switching to the two-fluid model during the simulation. Both solutions are analyzed and compared on a simple vortex-flow test. (author)
Macroscopic analysis of human errors at nuclear power plant
International Nuclear Information System (INIS)
Jeong, Y. S.; Gee, M. G.; Kim, J. T.
2003-01-01
A decision tree for the analysis of human errors is developed. The nodes and edges show human error patterns and their occurrence. Since the nodes are related to manageable resources, human errors could be reduced by allocating these resources and by controlling human error barriers. Microscopic analysis of human errors is also performed by adding the additional information from the graph.
Error Analysis and Propagation in Metabolomics Data Analysis.
Moseley, Hunter N B
2013-01-01
Error analysis plays a fundamental role in describing the uncertainty in experimental results. It has several fundamental uses in metabolomics including experimental design, quality control of experiments, the selection of appropriate statistical methods, and the determination of uncertainty in results. Furthermore, the importance of error analysis has grown with the increasing number, complexity, and heterogeneity of measurements characteristic of 'omics research. The increase in data complexity is particularly problematic for metabolomics, which has more heterogeneity than other omics technologies due to the much wider range of molecular entities detected and measured. This review introduces the fundamental concepts of error analysis as they apply to a wide range of metabolomics experimental designs and it discusses current methodologies for determining the propagation of uncertainty in appropriate metabolomics data analysis. These methodologies include analytical derivation and approximation techniques, Monte Carlo error analysis, and error analysis in metabolic inverse problems. Current limitations of each methodology with respect to metabolomics data analysis are also discussed.
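Monte Carlo error analysis, one of the methodologies reviewed above, can be sketched in a few lines: sample each measured variable from its error distribution, push the samples through the computation, and read off the spread of the result. The metabolite-ratio example and its error magnitudes are hypothetical.

```python
import numpy as np

def mc_propagate(f, means, sds, n=100_000, seed=0):
    """Monte Carlo propagation of independent Gaussian measurement errors
    through an arbitrary function f of several variables."""
    rng = np.random.default_rng(seed)
    samples = [rng.normal(m, s, n) for m, s in zip(means, sds)]
    out = f(*samples)
    return out.mean(), out.std(ddof=1)

# Hypothetical example: ratio of two metabolite intensities, each measured
# with 5% relative error.
mean, sd = mc_propagate(lambda a, b: a / b, [100.0, 50.0], [5.0, 2.5])
# first-order analytical check: relative sd ~ sqrt(0.05**2 + 0.05**2) ~ 7.1%
```

Unlike analytical derivation, this approach needs no derivatives and works for arbitrarily nonlinear data-analysis steps, which is why it suits heterogeneous metabolomics pipelines.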
Error analysis of stochastic gradient descent ranking.
Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan
2013-06-01
Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of the suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis in ranking error.
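A minimal sketch of the pairwise least-squares kernel SGD idea: draw random pairs and push the score difference f(xi) - f(xj) toward yi - yj by a functional gradient step with a decaying step size. The hyperparameters, kernel choice and toy data are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def gauss_kernel(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_sgd_rank(X, y, steps=2000, step0=0.5, lam=1e-3, seed=0):
    """Pairwise least-squares ranking by stochastic functional gradient
    descent in a Gaussian RKHS."""
    rng = np.random.default_rng(seed)
    n = len(X)
    c = np.zeros(n)  # coefficients of f = sum_k c[k] * K(x_k, .)
    K = np.array([[gauss_kernel(a, b) for b in X] for a in X])
    for t in range(1, steps + 1):
        i, j = rng.integers(n), rng.integers(n)
        eta = step0 / np.sqrt(t)            # decaying step size
        resid = (K[i] - K[j]) @ c - (y[i] - y[j])
        c *= 1.0 - eta * lam                # shrink from the RKHS regularizer
        c[i] -= eta * resid
        c[j] += eta * resid
    return lambda x: sum(ck * gauss_kernel(x, xk) for ck, xk in zip(c, X))

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 2.0, 3.0])
f = kernel_sgd_rank(X, y)
scores = [f(x) for x in X]  # should preserve the ordering of y
```

The decaying step size and the regularization shrinkage are the two knobs whose choice governs the convergence rate analyzed in the paper.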
Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant.
Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar
2016-03-01
A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedures in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and the detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied to estimate human error probability. The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). The SPAR-H method applied in this study could analyze and quantify the potential human errors and identify the measures required for reducing the error probabilities in the PTW system. Suggestions to reduce the likelihood of errors, especially by modifying the performance shaping factors and the dependencies among tasks, are also provided.
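The SPAR-H quantification step used in such a study can be sketched as follows. The nominal HEP values and the adjustment formula follow the published method (NUREG/CR-6883); the PSF multiplier values in the example are illustrative, not taken from the plant study.

```python
# SPAR-H sketch: a nominal HEP is multiplied by the composite of the PSF
# multipliers, with the standard adjustment factor applied so the result
# stays a valid probability (the method prescribes the adjustment when
# several negative PSFs apply; it is harmless otherwise).

def spar_h_hep(nominal_hep, psf_multipliers):
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    return (nominal_hep * composite) / (nominal_hep * (composite - 1.0) + 1.0)

# Example: action task (nominal HEP 0.001) under high stress (x2) and
# poor ergonomics (x10) -- multiplier values illustrative.
hep = spar_h_hep(0.001, [2.0, 10.0])
```

With all PSFs nominal (multiplier 1) the formula returns the nominal HEP unchanged; large composite multipliers push the result toward, but never past, 1.0.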
Analysis of human errors in operating heavy water production facilities
International Nuclear Information System (INIS)
Preda, Irina; Lazar, Roxana; Croitoru, Cornelia
1997-01-01
The heavy water plants are complex chemical installations in which large quantities of H2S, a corrosive, flammable, explosive and highly toxic gas, are circulated. In addition, in the process it is maintained at high temperatures and pressures. According to the statistics, about 20-30% of the damage arising in the installations is due directly or indirectly to human errors. These are mainly due to incorrect actions, maintenance errors, incorrect recording of instrument readings, etc. Studying human performance through probabilistic safety analysis makes it possible to evaluate the contribution of human error to the occurrence of event/accident sequences. This work presents the results obtained from the analysis of human errors at stage 1 of the heavy water production pilot plant at INC-DTCI ICIS Rm. Valcea, which uses the dual-temperature H2O-H2S isotopic exchange process. The case of loss of steam was considered. The results are interpreted with a view to decision-making on improving the activity, as well as the level of safety/reliability, in order to reduce the risk for the population/environment. For this initiating event, the event tree was developed based on fault trees. The human error probabilities were assessed as a function of the action complexity, the psychological stress level, the existence of written procedures and of a secondary check (the decision tree method). For a critical accident sequence, importance evaluations (RAW, RRW, Fussell-Vesely) were carried out to highlight the contribution of human errors to the risk level, and methods to reduce these errors were suggested.
Error propagation analysis for a sensor system
Energy Technology Data Exchange (ETDEWEB)
Yeater, M.L.; Hockenbury, R.W.; Hawkins, J.; Wilkinson, J.
1976-01-01
As part of a program to develop reliability methods for operational use with reactor sensors and protective systems, error propagation analyses are being made for each model. An example is a sensor system computer simulation model, in which the sensor system signature is convoluted with a reactor signature to show the effect of each in revealing or obscuring information contained in the other. The error propagation analysis models the system and signature uncertainties and sensitivities, whereas the simulation models the signatures and by extensive repetitions reveals the effect of errors in various reactor input or sensor response data. In the approach for the example presented, the errors accumulated by the signature (set of ''noise'' frequencies) are successively calculated as it is propagated stepwise through a system comprised of sensor and signal processing components. Additional modeling steps include a Fourier transform calculation to produce the usual power spectral density representation of the product signature, and some form of pattern recognition algorithm.
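The stepwise accumulation of errors through a sensor chain can be sketched in its simplest form, assuming independent, unbiased stage errors that combine in quadrature (the actual analysis above also tracks sensitivities and correlations, which this sketch omits):

```python
import math

# Each stage of the sensor/signal-processing chain contributes its own
# noise; for independent, unbiased errors the variances add, so the
# standard deviations combine in quadrature.

def propagate(stage_sigmas):
    var = 0.0
    for s in stage_sigmas:
        var += s * s          # independent errors add in variance
    return math.sqrt(var)

total = propagate([3.0, 4.0])   # two-stage chain
```

A correlated or sensitivity-weighted chain would replace the plain sum with a Jacobian-weighted covariance propagation, but the quadrature rule is the baseline against which those refinements are measured.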
AGAPE-ET for human error analysis of emergency tasks and its application
International Nuclear Information System (INIS)
Kim, J. H.; Jeong, W. D.
2002-01-01
The paper presents a proceduralised human reliability analysis (HRA) methodology, AGAPE-ET (A Guidance And Procedure for Human Error Analysis for Emergency Tasks), covering both qualitative error analysis and quantification of the human error probability (HEP) of emergency tasks in nuclear power plants. The AGAPE-ET method is based on a simplified cognitive model. For each cognitive function, error causes or error-likely situations have been identified, considering the characteristics of the performance of each cognitive function and the influencing mechanism of the performance influencing factors (PIFs) on that function. Error analysis items have then been determined from the identified error causes or error-likely situations, and a human error analysis procedure based on these items has been organised to cue and guide the analysts through the overall human error analysis. The basic scheme for the quantification of HEP consists of multiplying the basic HEP (BHEP) assigned to the error analysis item by the weight from the influencing factors decision tree (IFDT) constructed for each cognitive function. The method is characterised by the structured identification of the weak points of the task to be performed and by an efficient analysis process in which the analysts need consider only the relevant cognitive functions. The paper also presents the application of AGAPE-ET to 31 nuclear emergency tasks and its results.
Error analysis to improve the speech recognition accuracy on ...
Indian Academy of Sciences (India)
measures, error-rate and Word Error Rate (WER), by application of the proposed method. Keywords: speech recognition; pronunciation dictionary modification method; error analysis; F-measure.
Error analysis for mesospheric temperature profiling by absorptive occultation sensors
Directory of Open Access Journals (Sweden)
M. J. Rieder
Full Text Available An error analysis for mesospheric profiles retrieved from absorptive occultation data has been performed, starting with realistic error assumptions as would apply to intensity data collected by available high-precision UV photodiode sensors. Propagation of statistical errors was investigated through the complete retrieval chain from measured intensity profiles to atmospheric density, pressure, and temperature profiles. We assumed unbiased errors as the occultation method is essentially self-calibrating and straight-line propagation of occulted signals as we focus on heights of 50–100 km, where refractive bending of the sensed radiation is negligible. Throughout the analysis the errors were characterized at each retrieval step by their mean profile, their covariance matrix and their probability density function (pdf). This furnishes, compared to a variance-only estimation, a much improved insight into the error propagation mechanism. We applied the procedure to a baseline analysis of the performance of a recently proposed solar UV occultation sensor (SMAS – Sun Monitor and Atmospheric Sounder) and provide, using a reasonable exponential atmospheric model as background, results on error standard deviations and error correlation functions of density, pressure, and temperature profiles. Two different sensor photodiode assumptions are discussed: diamond diodes (DD) with 0.03% and silicon diodes (SD) with 0.1% (unattenuated) intensity measurement noise at a 10 Hz sampling rate. A factor-of-2 margin was applied to these noise values in order to roughly account for unmodeled cross section uncertainties. Within the entire height domain (50–100 km) we find temperature to be retrieved to better than 0.3 K (DD) / 1 K (SD) accuracy, respectively, at 2 km height resolution. The results indicate that absorptive occultations acquired by a SMAS-type sensor could provide mesospheric profiles of fundamental variables such as temperature with
Majewicz, Peter J; Blessner, Paul; Olson, Bill; Blackburn, Timothy
2017-04-05
This article proposes a methodology for incorporating electrical component failure data into the human error assessment and reduction technique (HEART) for estimating human error probabilities (HEPs). The existing HEART method contains factors known as error-producing conditions (EPCs) that adjust a generic HEP to a more specific situation being assessed. The selection and proportioning of these EPCs are at the discretion of an assessor, and are therefore subject to the assessor's experience and potential bias. This dependence on expert opinion is prevalent in similar HEP assessment techniques used in numerous industrial areas. The proposed method incorporates factors based on observed trends in electrical component failures to produce a revised HEP that can trigger risk mitigation actions more effectively based on the presence of component categories or other hazardous conditions that have a history of failure due to human error. The data used for the additional factors are a result of an analysis of failures of electronic components experienced during system integration and testing at NASA Goddard Space Flight Center. The analysis includes the determination of root failure mechanisms and trend analysis. The major causes of these defects were attributed to electrostatic damage, electrical overstress, mechanical overstress, or thermal overstress. These factors representing user-induced defects are quantified and incorporated into specific hardware factors based on the system's electrical parts list. This proposed methodology is demonstrated with an example comparing the original HEART method and the proposed modified technique. © 2017 Society for Risk Analysis.
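The baseline HEART calculation that the article modifies can be sketched as follows; the generic error probability (GEP) and EPC values below are illustrative, not taken from the NASA data.

```python
# HEART sketch: a generic error probability for the task type is scaled
# by each applicable error-producing condition (EPC), weighted by the
# assessor's proportion of affect (APOA). Each factor is
# (EPC_max_affect - 1) * APOA + 1, so APOA = 0 leaves the GEP unchanged
# and APOA = 1 applies the full EPC multiplier.

def heart_hep(gep, epc_apoa_pairs):
    hep = gep
    for epc, apoa in epc_apoa_pairs:
        hep *= (epc - 1.0) * apoa + 1.0
    return min(hep, 1.0)   # cap at a valid probability

# Example: GEP = 0.003 with one EPC of maximum affect 11 judged to
# apply at proportion 0.4 (values illustrative).
hep = heart_hep(0.003, [(11.0, 0.4)])
```

The article's proposed modification would effectively replace the purely judgmental APOA with proportions informed by observed component failure trends, leaving this multiplicative structure intact.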
International Nuclear Information System (INIS)
Anon.
1991-01-01
This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
Applications of human error analysis to aviation and space operations
International Nuclear Information System (INIS)
Nelson, W.R.
1998-01-01
For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) we have been working to apply methods of human error analysis to the design of complex systems. We have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. We are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g. changes to system design or procedures) can be identified. These applications lead to different requirements when compared with HRAs performed as part of a PSA. For example, because the analysis will begin early during the design stage, the methods must be usable when only partial design information is available. In addition, the ability to perform numerous ''what if'' analyses to identify and compare multiple design alternatives is essential. Finally, since the goals of such human error analyses focus on proactive design changes rather than the estimate of failure probabilities for PRA, there is more emphasis on qualitative evaluations of error relationships and causal factors than on quantitative estimates of error frequency. The primary vehicle we have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. The first NASA-sponsored project had the goal of evaluating human errors caused by advanced cockpit automation. Our next aviation project focused on the development of methods and tools to apply human error analysis to the design of commercial aircraft. This project was performed by a consortium comprised of INEEL, NASA, and Boeing Commercial Airplane Group. The focus of the project was aircraft design and procedures that could lead to human errors during airplane maintenance
Error analysis of aspheric surface with reference datum.
Peng, Yanglin; Dai, Yifan; Chen, Shanyong; Song, Ci; Shi, Feng
2015-07-20
Severe requirements of location tolerance provide new challenges for optical component measurement, evaluation, and manufacture. Form error, location error, and the relationship between the two need to be analyzed together during error analysis of an aspheric surface with a reference datum. Based on the least-squares optimization method, we develop a least-squares local optimization method to evaluate the form error of an aspheric surface with a reference datum, and then calculate the location error. From the error analysis of a machined aspheric surface, the relationship between form error and location error is revealed, and its influence on the machining process is stated. For different radii and apertures of the aspheric surface, the variation laws are simulated by superimposing normally distributed random noise on an ideal surface. The approach establishes linkages between machining and error analysis, and provides an effective guideline for error correction.
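The least-squares evaluation step can be illustrated with a simplified one-dimensional analogue: a best-fit line (piston and tilt, stand-ins for location error) is removed from a measured profile, and the RMS of the residual is reported as form error. The paper's datum-referenced aspheric method is far richer; this only shows the least-squares step.

```python
import math

# Fit z ~ a + b*x by the normal equations, subtract the fit (the
# "location" component), and report the RMS of the residual as form error.

def form_error_rms(xs, zs):
    n = len(xs)
    sx, sz = sum(xs), sum(zs)
    sxx = sum(x * x for x in xs)
    sxz = sum(x * z for x, z in zip(xs, zs))
    b = (n * sxz - sx * sz) / (n * sxx - sx * sx)  # least-squares slope
    a = (sz - b * sx) / n                          # least-squares intercept
    resid = [z - (a + b * x) for x, z in zip(xs, zs)]
    return math.sqrt(sum(r * r for r in resid) / n)

flat = form_error_rms([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])   # pure tilt
bumpy = form_error_rms([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0])  # curvature
```

A purely tilted profile yields zero form error, since tilt is absorbed by the fit; curvature survives the fit and shows up as residual form error.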
Yilmaz, Ferkan
2010-09-01
In this paper, we propose an analytical framework on the exact computation of the average symbol error probabilities (ASEP) of multihop transmission over generalized fading channels when an arbitrary number of amplify-and-forward relays is used. Our approach relies on moment generating function (MGF) framework to obtain exact single integral expressions which can be easily computed by Gauss-Chebyshev Quadrature (GCQ) rule. As such, the derived results are a convenient tool to analyze the ASEP performance of multihop transmission over amplify-and-forward relay fading channels. Numerical and simulation results, performed to verify the correctness of the proposed formulation, are in perfect agreement. © 2010 IEEE.
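The MGF/GCQ evaluation can be sketched on a case with a known closed form: the average bit error probability of BPSK over a single Rayleigh link. The paper treats multihop amplify-and-forward relaying over generalized fading; this single-hop example only illustrates how the Gauss-Chebyshev Quadrature rule turns the single MGF integral into a finite sum.

```python
import math

# MGF approach for BPSK over Rayleigh fading:
#   P_e = (1/pi) * Integral_0^{pi/2} M(1/sin^2 t) dt,  M(s) = 1/(1 + s*gbar),
# evaluated by the GCQ rule, i.e. a sum over the Chebyshev angles
# t_k = (2k - 1) * pi / (4n) mapped onto [0, pi/2].

def bep_gcq(gbar, n=64):
    return sum(
        1.0 / (1.0 + gbar / math.sin((2 * k - 1) * math.pi / (4 * n)) ** 2)
        for k in range(1, n + 1)
    ) / (2 * n)

def bep_closed_form(gbar):
    # Known exact Rayleigh/BPSK result, used here only as a check.
    return 0.5 * (1.0 - math.sqrt(gbar / (1.0 + gbar)))
```

Even a modest number of nodes reproduces the closed form to high accuracy, which is what makes the single-integral MGF expressions in the paper convenient to compute.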
Soury, Hamza
2013-07-01
This paper considers the average symbol error probability of square Quadrature Amplitude Modulation (QAM) coherent signaling over flat fading channels subject to additive generalized Gaussian noise. More specifically, a generic closed-form expression in terms of the Fox H function and the bivariate Fox H function is offered for the extended generalized-K fading case. Simplifications for some special fading distributions such as generalized-K fading, Nakagami-m fading, and Rayleigh fading and special additive noise distributions such as Gaussian and Laplacian noise are then presented. Finally, the mathematical formalism is illustrated by some numerical examples verified by computer based simulations for a variety of fading and additive noise parameters.
Soury, Hamza
2012-06-01
This letter considers the average bit error probability of binary coherent signaling over flat fading channels subject to additive generalized Gaussian noise. More specifically, a generic closed form expression in terms of the Fox's H function is offered for the extended generalized-K fading case. Simplifications for some special fading distributions such as generalized-K fading and Nakagami-m fading and special additive noise distributions such as Gaussian and Laplacian noise are then presented. Finally, the mathematical formalism is illustrated by some numerical examples verified by computer based simulations for a variety of fading and additive noise parameters. © 2012 IEEE.
Error analysis in solving mathematical problems
Directory of Open Access Journals (Sweden)
Geovana Luiza Kliemann
2017-12-01
This paper presents a survey carried out within the Centre for Education Programme in order to help improve the quality of the teaching and learning of Mathematics in Primary Education. From the study of the evaluation systems within the scope of the research project, it was found that their focus is problem solving, and from this starting point several actions were developed to assist students in the process of solving problems. One of these actions aimed to analyze the errors made by 5th-year students in the interpretation, understanding, and solving of problems. We describe three games developed in six schools, with questions drawn from the "Prova Brasil" of previous years, with the objective of diagnosing the main difficulties students show in solving the problems and of helping them find ways to overcome such gaps. To reach the proposed objectives, a qualitative study was carried out in which the researchers were constantly involved in the process. After each meeting, the responses were analyzed in order to classify the errors into different categories. It was found that most of the participating students succeeded in solving the proposed problems, and that the main errors were related to difficulties of interpretation.
Error begat error: design error analysis and prevention in social infrastructure projects.
Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M
2012-09-01
Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is proposed and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are improved. Copyright © 2011. Published by Elsevier Ltd.
Energy Technology Data Exchange (ETDEWEB)
Lon N. Haney; David I. Gertman
2003-04-01
Beginning in the 1980s, a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s, Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA-sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant, FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, is offered as a means to help direct useful data collection strategies.
Joseph, Maria L; Carriquiry, Alicia
2010-11-01
Collection of dietary intake information requires time-consuming and expensive methods, making it inaccessible to many resource-poor countries. Quantifying the association between simple measures of usual dietary diversity and usual nutrient intake/adequacy would allow inferences to be made about the adequacy of micronutrient intake at the population level for a fraction of the cost. In this study, we used secondary data from a dietary intake study carried out in Bangladesh to assess the association between 3 food group diversity indicators (FGI) and calcium intake; and the association between these same 3 FGI and a composite measure of nutrient adequacy, mean probability of adequacy (MPA). By implementing Fuller's error-in-the-equation measurement error model (EEM) and simple linear regression (SLR) models, we assessed these associations while accounting for the error in the observed quantities. Significant associations were detected between usual FGI and usual calcium intakes, when the more complex EEM was used. The SLR model detected significant associations between FGI and MPA as well as for variations of these measures, including the best linear unbiased predictor. Through simulation, we support the use of the EEM. In contrast to the EEM, the SLR model does not account for the possible correlation between the measurement errors in the response and predictor. The EEM performs best when the model variables are not complex functions of other variables observed with error (e.g. MPA). When observation days are limited and poor estimates of the within-person variances are obtained, the SLR model tends to be more appropriate.
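The attenuation effect that motivates the measurement error modelling can be sketched analytically. Under classical error (observed W = X + U with U independent of X), the population OLS slope of Y on W equals the true slope times the reliability ratio; a regression-calibration-style correction divides it back out. The function names and the analytic shortcut are illustrative, not the paper's EEM machinery.

```python
# Classical measurement error attenuates a regression slope by the
# reliability ratio lambda = var(X) / (var(X) + var(U)); regression
# calibration rescales the naive slope by 1/lambda to undo the bias.

def attenuated_slope(true_slope, var_x, var_u):
    reliability = var_x / (var_x + var_u)
    return true_slope * reliability        # what naive OLS converges to

def calibrated_slope(naive_slope, var_x, var_u):
    reliability = var_x / (var_x + var_u)
    return naive_slope / reliability       # regression-calibration fix

naive = attenuated_slope(2.0, var_x=1.0, var_u=0.25)   # biased toward zero
fixed = calibrated_slope(naive, var_x=1.0, var_u=0.25)
```

The correction requires a decent estimate of var(U), which is exactly where the paper's caution about limited observation days and poorly estimated within-person variances bites.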
Directory of Open Access Journals (Sweden)
Y. Yang
2017-12-01
We model the outage probability and bit-error rate (BER) of an intensity-modulation/direct-detection optical wireless communication (OWC) system for ground-to-train links along curved track in rainy weather. By adopting inverse Gaussian models of the rain-induced turbulence, we derive the outage probability and average BER expressions for the channel with pointing errors. The numerical analysis reveals that rainfall can disrupt the stability and accuracy of the system, especially in rainstorm weather. Improving the shockproof performance of the tracks and using a longer signal wavelength will improve the communication performance of OWC links. The atmospheric turbulence has a greater impact on the OWC link than the covered track length. The pointing errors caused by beam wander or train vibration are the dominant factors degrading the performance of the OWC link for a train along curved track. The sizes of the transmitting and receiving apertures can be chosen to optimize the performance of the OWC link.
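The outage computation can be sketched for an inverse Gaussian intensity model, assuming the standard IG(mu, lambda) distribution function; the threshold and parameter values are illustrative and do not come from the paper's link budget.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ig_outage(i_th, mu, lam):
    """P(I < i_th) for inverse-Gaussian irradiance I ~ IG(mu, lam),
    using the standard closed-form IG distribution function."""
    a = math.sqrt(lam / i_th)
    return (norm_cdf(a * (i_th / mu - 1.0))
            + math.exp(2.0 * lam / mu) * norm_cdf(-a * (i_th / mu + 1.0)))

# Outage worsens (probability rises) as the decision threshold rises
# relative to the mean received intensity.
outage = ig_outage(0.5, mu=1.0, lam=4.0)
```

Heavier rain would be modelled by IG parameters giving a wider intensity spread, which raises the outage probability at any fixed threshold.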
International Nuclear Information System (INIS)
Sin, Y. C.; Jung, Y. S.; Kim, K. H.; Kim, J. H.
2008-04-01
Main control rooms of nuclear power plants have been computerized and digitalized in new and modernized plants, as information and digital technologies make great progress and become mature. A survey on human factors engineering issues in advanced MCRs was performed using both a model-based approach and a literature-survey-based approach. An analysis of human error types and performance shaping factors was then carried out for three human errors. The results of the project can be used for task analysis, evaluation of human error probabilities, and analysis of performance shaping factors in HRA.
Error Analysis of Determining Airplane Location by Global Positioning System
Hajiyev, Chingiz; Burat, Alper
1999-01-01
This paper studies the error analysis of determining airplane location by the global positioning system (GPS) using a statistical testing method. The Newton-Raphson method positions the airplane at the intersection point of four spheres. Absolute errors, relative errors and standard deviations have been calculated. The results show that the positioning error of the airplane varies with the coordinates of the GPS satellites and the airplane.
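The four-sphere Newton-Raphson positioning step can be sketched as follows; the satellite coordinates, true position, and clock bias below are illustrative toy values, not realistic orbital geometry.

```python
import math

# Positioning as the intersection of four pseudorange spheres, solved by
# Newton-Raphson for (x, y, z, b), where b is the receiver clock bias.

def solve4(a, rhs):
    """Gaussian elimination with partial pivoting for a 4x4 system."""
    n = 4
    m = [row[:] + [rhs[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda i: abs(m[i][c]))
        m[c], m[p] = m[p], m[c]
        for i in range(c + 1, n):
            f = m[i][c] / m[c][c]
            for j in range(c, n + 1):
                m[i][j] -= f * m[c][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def locate(sats, rhos, iters=20):
    x = y = z = b = 0.0
    for _ in range(iters):
        jac, res = [], []
        for (sx, sy, sz), rho in zip(sats, rhos):
            d = math.dist((x, y, z), (sx, sy, sz))
            res.append(d + b - rho)                                  # sphere residual
            jac.append([(x - sx) / d, (y - sy) / d, (z - sz) / d, 1.0])
        dx = solve4(jac, [-v for v in res])                          # Newton step
        x, y, z, b = x + dx[0], y + dx[1], z + dx[2], b + dx[3]
    return x, y, z, b

sats = [(15.0, 0.0, 0.0), (0.0, 15.0, 0.0), (0.0, 0.0, 15.0), (10.0, 10.0, 10.0)]
true_pos, true_bias = (1.0, 2.0, 3.0), 0.5
rhos = [math.dist(true_pos, s) + true_bias for s in sats]
est = locate(sats, rhos)
```

With noise-free pseudoranges the iteration recovers the position and clock bias exactly; the paper's error analysis amounts to studying how perturbations in the pseudoranges and satellite coordinates propagate through this solution.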
Data analysis & probability task sheets : grades pk-2
Cook, Tanya
2009-01-01
For grades PK-2, our Common Core State Standards-based resource meets the data analysis & probability concepts addressed by the NCTM standards and encourages your students to learn and review the concepts in unique ways. Each task sheet is organized around a central problem taken from real-life experiences of the students.
Generic Degraded Configuration Probability Analysis for the Codisposal Waste Package
International Nuclear Information System (INIS)
S.F.A. Deng; M. Saglam; L.J. Gratton
2001-01-01
In accordance with the technical work plan, ''Technical Work Plan For: Department of Energy Spent Nuclear Fuel Work Packages'' (CRWMS M and O 2000c), this Analysis/Model Report (AMR) is developed for the purpose of screening out degraded configurations for U.S. Department of Energy (DOE) spent nuclear fuel (SNF) types. It performs the degraded configuration parameter and probability evaluations of the overall methodology specified in the ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2000, Section 3) to qualifying configurations. Degradation analyses are performed to assess realizable parameter ranges and physical regimes for configurations. Probability calculations are then performed for configurations characterized by k_eff in excess of the Critical Limit (CL). The scope of this document is to develop a generic set of screening criteria or models to screen out degraded configurations having potential for exceeding a criticality limit. The developed screening criteria include arguments based on physical/chemical processes and probability calculations and apply to DOE SNF types when codisposed with the high-level waste (HLW) glass inside a waste package. The degradation takes place inside the waste package, long after repository licensing has expired. The emphasis of this AMR is on degraded configuration screening, and the probability analysis is one of the approaches used for screening. The intended use of the model is to apply the developed screening criteria to each DOE SNF type following the completion of the degraded mode criticality analysis internal to the waste package.
Trends in MODIS Geolocation Error Analysis
Wolfe, R. E.; Nishihama, Masahiro
2009-01-01
Data from the two MODIS instruments have been accurately geolocated (Earth located) to enable retrieval of global geophysical parameters. The authors describe the approach used to geolocate with sub-pixel accuracy over nine years of data from MODIS on NASA's EOS Terra spacecraft and seven years of data from MODIS on the Aqua spacecraft. The approach uses a geometric model of the MODIS instruments, accurate navigation (orbit and attitude) data and an accurate Earth terrain model to compute the location of each MODIS pixel. The error analysis approach automatically matches MODIS imagery with a global set of over 1,000 ground control points from the finer-resolution Landsat satellite to measure static biases and trends in the MODIS geometric model parameters. Both within-orbit and yearly thermally induced cyclic variations in the pointing have been found, as well as a general long-term trend.
Energy Technology Data Exchange (ETDEWEB)
Seaver, D.A.; Stillwell, W.G.
1983-03-01
This report describes and evaluates several procedures for using expert judgment to estimate human-error probabilities (HEPs) in nuclear power plant operations. These HEPs are currently needed for several purposes, particularly for probabilistic risk assessments. Data do not exist for estimating these HEPs, so expert judgment can provide these estimates in a timely manner. Five judgmental procedures are described here: paired comparisons, ranking and rating, direct numerical estimation, indirect numerical estimation and multiattribute utility measurement. These procedures are evaluated in terms of several criteria: quality of judgments, difficulty of data collection, empirical support, acceptability, theoretical justification, and data processing. Situational constraints such as the number of experts available, the number of HEPs to be estimated, the time available, the location of the experts, and the resources available are discussed in regard to their implications for selecting a procedure for use.
Yilmaz, Ferkan
2014-04-01
The main idea in the moment generating function (MGF) approach is to alternatively express the conditional bit error probability (BEP) in a desired exponential form so that possibly multi-fold performance averaging is readily converted into a computationally efficient single-fold averaging - sometimes into a closed-form - by means of using the MGF of the signal-to-noise ratio. However, as presented in [1] and specifically indicated in [2] and also to the best of our knowledge, there does not exist an MGF-based approach in the literature to represent Wojnar's generic BEP expression in a desired exponential form. This paper presents novel MGF-based expressions for calculating the average BEP of binary signalling over generalized fading channels, specifically by expressing Wojnar's generic BEP expression in a desirable exponential form. We also propose MGF-based expressions to explore the amount of dispersion in the BEP for binary signalling over generalized fading channels.
Analysis of errors in forensic science
Directory of Open Access Journals (Sweden)
Mingxiao Du
2017-01-01
Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005, general requirements for the competence of testing and calibration laboratories, are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, the Federal Rules of Evidence 702 mandate that judges consider factors such as peer review to ensure the reliability of the expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors carry a higher risk of unfair decision-making, they should receive more attention than false-negative errors.
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
[Analysis of intrusion errors in free recall].
Diesfeldt, H F A
2017-06-01
Extra-list intrusion errors during five trials of the eight-word list-learning task of the Amsterdam Dementia Screening Test (ADST) were investigated in 823 consecutive psychogeriatric patients (87.1% suffering from major neurocognitive disorder). Almost half of the participants (45.9%) produced one or more intrusion errors on the verbal recall test. Correct responses were lower when subjects made intrusion errors, but learning slopes did not differ between subjects who committed intrusion errors and those who did not. Bivariate regression analyses revealed that participants who committed intrusion errors were more deficient on measures of eight-word recognition memory, delayed visual recognition and tests of executive control (the Behavioral Dyscontrol Scale and the ADST-Graphical Sequences as measures of response inhibition). Using hierarchical multiple regression, only free recall and delayed visual recognition retained an independent effect in the association with intrusion errors, such that deficient scores on tests of episodic memory were sufficient to explain the occurrence of intrusion errors. Measures of inhibitory control did not add significantly to the explanation of intrusion errors in free recall, which makes insufficient strength of memory traces, rather than a primary deficit in inhibition, the preferred account of intrusion errors in free recall.
Analysis of Disparity Error for Stereo Autofocus.
Yang, Cheng-Chieh; Huang, Shao-Kang; Shih, Kuang-Tsu; Chen, Homer H
2018-04-01
As more and more stereo cameras are installed on electronic devices, we are motivated to investigate how to leverage disparity information for autofocus. The main challenge is that stereo images captured for disparity estimation are subject to defocus blur unless the lenses of the stereo cameras are at the in-focus position. Therefore, it is important to investigate how the presence of defocus blur would affect stereo matching and, in turn, the performance of disparity estimation. In this paper, we give an analytical treatment of this fundamental issue of disparity-based autofocus by examining the relation between image sharpness and disparity error. A statistical approach that treats the disparity estimate as a random variable is developed. Our analysis provides a theoretical backbone for the empirical observation that, regardless of the initial lens position, disparity-based autofocus can bring the lens to the hill zone of the focus profile in one movement. The insight gained from the analysis is useful for the implementation of an autofocus system.
International Nuclear Information System (INIS)
Reece, W.J.; Gilbert, B.G.; Richards, R.E.
1994-09-01
This data manual contains a hard copy of the information in the Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR) Version 3.5 database, which is sponsored by the US Nuclear Regulatory Commission. NUCLARR was designed as a tool for risk analysis. Many of the nuclear reactors in the US and several outside the US are represented in the NUCLARR database. NUCLARR includes both human error probability estimates for workers at the plants and hardware failure data for nuclear reactor equipment. Aggregations of these data yield valuable reliability estimates for probabilistic risk assessments and human reliability analyses. The data manual is organized to permit manual searches of the information if the computerized version is not available. Originally, the manual was published in three parts. In this revision the introductory material located in the original Part 1 has been incorporated into the text of Parts 2 and 3. The user can now find introductory material either in the original Part 1, or in Parts 2 and 3 as revised. Part 2 contains the human error probability data, and Part 3, the hardware component reliability data
Alexander, Tiffaney Miller
2017-01-01
Research results have shown that more than half of aviation, aerospace and aeronautics mishaps incidents are attributed to human error. As a part of Safety within space exploration ground processing operations, the identification and/or classification of underlying contributors and causes of human error must be identified, in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.
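The HEART quantification step referenced above can be sketched as follows. The nominal HEP and the error-producing-condition (EPC) multipliers and assessed proportions below are illustrative placeholders, not values from the NASA study:

```python
# HEART: assessed HEP = nominal HEP x PRODUCT over EPCs of ((max_effect - 1) x proportion + 1)
nominal_hep = 0.003                  # hypothetical generic-task nominal value
epcs = [
    (11.0, 0.4),                     # e.g. "shortage of time", assessed proportion 0.4
    (3.0, 0.2),                      # e.g. "poor feedback", assessed proportion 0.2
]

hep = nominal_hep
for max_effect, proportion in epcs:
    hep *= (max_effect - 1.0) * proportion + 1.0
# here: 0.003 x 5.0 x 1.4 = 0.021
```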
The application of probability analysis in evidence evaluation
Directory of Open Access Journals (Sweden)
Feješ Ištvan
2014-01-01
Full Text Available The paper is divided into three larger parts. In the introductory part the author reminds us that the application of mathematics to assess evidence is not a substantially new idea. Inquisition procedures prescribed a legal assessment of evidence that consisted of primitively, mechanically adding and subtracting the available evidence. However, modern mathematical methods that can be used for assessing evidence are far more sophisticated, and are based on probability analysis and computer technology. In the second part the paper deals with some possibilities of applying probability analysis to evaluate evidence. It particularly tackles the potential of Bayes' theorem of conditional probability in evidence assessment. The third part is the conclusion, in which the author emphasizes that scientific progress is extremely fast and that mathematics will enter ever more into courtrooms, progressively increasing the exact elements of the now primarily subjective process of evidence evaluation. He particularly emphasizes the advantages of Bayesian analysis, but warns that the results of this method are only one 'mathematical truth', and that interpretation is the key.
The error performance analysis over cyclic redundancy check codes
Yoon, Hee B.
1991-06-01
Burst errors are generated in digital communication networks by various unpredictable conditions; they occur at high error rates, for short durations, and can impact services. To completely describe a burst error one has to know the bit pattern, which is impossible in practice on working systems. Therefore, under the memoryless binary symmetric channel (MBSC) assumption, performance evaluation or estimation schemes for digital signal 1 (DS1) transmission systems carrying live traffic are an interesting and important problem. This study presents some analytical methods leading to efficient algorithms for detecting burst errors using a cyclic redundancy check (CRC) code. The definition of a burst error is introduced using three different models; among the three, the mathematical model is used in this study. A probability density function f(b) for burst errors of length b is proposed. The performance of CRC-n codes is evaluated and analyzed using f(b) through a computer simulation model of burst errors within CRC blocks. The simulation results show that the mean block burst error tends to approach the pattern of burst error that random bit errors generate.
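A minimal bitwise CRC sketch illustrates the detection property such studies rely on: a degree-r generator polynomial detects every burst of length at most r. The polynomial and message below are arbitrary examples, not the study's CRC-n codes:

```python
import random

def crc_remainder(msg_bits, poly):
    # polynomial long division over GF(2); returns the r-bit remainder
    r = len(poly) - 1
    work = msg_bits + [0] * r
    for i in range(len(msg_bits)):
        if work[i]:
            for j, p in enumerate(poly):
                work[i + j] ^= p
    return work[-r:]

def passes_check(codeword, poly):
    # a valid codeword leaves a zero remainder when divided by the generator
    work = codeword[:]
    for i in range(len(codeword) - len(poly) + 1):
        if work[i]:
            for j, p in enumerate(poly):
                work[i + j] ^= p
    return not any(work)

random.seed(1)
poly = [1, 0, 0, 1, 1]                   # x^4 + x + 1, a degree-4 generator
msg = [random.randint(0, 1) for _ in range(32)]
codeword = msg + crc_remainder(msg, poly)

trials = detected = 0
for start in range(len(codeword) - 3):
    corrupted = codeword[:]
    for k in range(4):                   # flip a burst of length 4
        corrupted[start + k] ^= 1
    trials += 1
    detected += not passes_check(corrupted, poly)
```

Every length-4 burst alters the remainder, so all corrupted codewords fail the check.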
Error-rate performance analysis of opportunistic regenerative relaying
Tourki, Kamel
2011-09-01
In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can be considered unusable, and which takes into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive the exact statistics of each hop in terms of the probability density function (PDF). The PDFs are then used to determine accurate closed-form expressions for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation, where the detector may use maximum ratio combining (MRC) or selection combining (SC). Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over a linear network (LN) architecture under Rayleigh fading channels. © 2011 IEEE.
Comparative analysis through probability distributions of a data set
Cristea, Gabriel; Constantinescu, Dan Mihai
2018-02-01
In practice, probability distributions are applied in such diverse fields as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology, and demography. This article highlights important aspects of fitting probability distributions to data and applying the analysis results to make informed decisions. There are a number of statistical methods available to help select the best-fitting model. Some graphs display both the input data and the fitted distributions at the same time, as probability density and cumulative distribution functions. Goodness-of-fit tests can be used to determine whether a certain distribution is a good fit. The main idea is to measure the "distance" between the data and the tested distribution, and compare that distance to some threshold values. Calculating the goodness-of-fit statistics also enables us to rank the fitted distributions according to how well they fit the data, which is very helpful for comparing the fitted models. The paper presents a comparison of the most commonly used goodness-of-fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large data set is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions, and selecting the best model. These graphs should be viewed as an addition to the goodness-of-fit tests.
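The "distance" idea behind the Kolmogorov-Smirnov test can be sketched directly; the data and the candidate parameters here are made up for illustration:

```python
import math, random

random.seed(0)
data = sorted(random.gauss(10.0, 2.0) for _ in range(500))

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sorted_data, cdf):
    # largest gap between the empirical CDF and the fitted CDF
    n = len(sorted_data)
    d = 0.0
    for i, x in enumerate(sorted_data):
        f = cdf(x)
        d = max(d, (i + 1) / n - f, f - i / n)
    return d

good_fit = ks_statistic(data, lambda x: normal_cdf(x, 10.0, 2.0))
bad_fit = ks_statistic(data, lambda x: normal_cdf(x, 12.0, 2.0))   # wrong mean
```

The smaller statistic identifies the better-fitting candidate, which is exactly how the tests let us rank fitted distributions.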
Fixturing error measurement and analysis using CMMs
International Nuclear Information System (INIS)
Wang, Y; Chen, X; Gindy, N
2005-01-01
The influence of the fixture on the errors of a machined surface can be very significant. The machined surface errors generated during machining can be measured with a coordinate measurement machine (CMM) through the displacements of three coordinate systems on a fixture-workpiece pair in relation to the deviation of the machined surface. The surface errors consist of the component movement, the component twist, and the deviation between the actual machined surface and the defined tool path. A turbine blade fixture for a grinding operation is used as a case study.
Solar Tracking Error Analysis of Fresnel Reflector
Directory of Open Access Journals (Sweden)
Jiantao Zheng
2014-01-01
Full Text Available Based on the rotational structure of the Fresnel reflector, the rotation angle of the mirror was deduced under the eccentric condition. By analyzing the influence of the main factors on the sun-tracking rotation angle error, the pattern and extent of their influence were revealed. It is concluded that the tracking error caused by the difference between the rotation axis and the true north meridian is, under certain conditions, at its maximum at noon and reduces gradually through the morning and afternoon. The tracking error caused by other deviations, such as rotation eccentricity, latitude, and solar altitude, is positive in the morning, negative in the afternoon, and zero at a certain moment around noon.
Comment on 'The meaning of probability in probabilistic safety analysis'
International Nuclear Information System (INIS)
Yellman, Ted W.; Murray, Thomas M.
1995-01-01
A recent article in Reliability Engineering and System Safety argues that there is 'fundamental confusion over how to interpret the numbers which emerge from a Probabilistic Safety Analysis [PSA]' [Watson, S. R., The meaning of probability in probabilistic safety analysis. Reliab. Engng and System Safety, 45 (1994) 261-269]. As a standard for comparison, the author employs the 'realist' interpretation that a PSA output probability should be a 'physical property' of the installation being analyzed, 'objectively measurable' without controversy. The author finds all the other theories and philosophies discussed wanting by this standard. Ultimately, he argues that the outputs of a PSA should be considered no more than constructs of the chosen computational procedure, just an 'argument' or a 'framework for the debate about safety' rather than a 'representation of truth'. He even suggests that 'competing' PSAs be done, each trying to 'argue' for a different message. The commenters suggest that the position the author arrives at is an overreaction to the subjectivity that is part of any complex PSA, and that this overreaction could easily lead to the belief that PSAs are meaningless. They suggest a broader interpretation, one based strictly on relative frequency, a concept which the commenters believe the author abandoned too quickly. Their interpretation does not require any 'tests' to determine whether a statement of likelihood is qualified to be a 'true' probability, and it applies equally well in purely analytical models. It allows anyone's proper numerical statement of the likelihood of an event to be considered a probability. It recognizes that the quality of PSAs and their results will vary. But, unlike the author, the commenters contend that a PSA should always be a search for truth, not a vehicle for adversarial pleadings.
Analysis of the "naming game" with learning errors in communications.
Lou, Yang; Chen, Guanrong
2015-07-16
The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. Through pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctly increase the memory required of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of learning error beyond which convergence is impaired. These new findings may help to better understand the role of learning errors in the naming game, as well as in human language development, from a network science perspective.
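A minimal naming-game sketch with learning errors, a simplified fully connected version of the NGLE idea with assumed parameter values (not the paper's model or topologies), shows the qualitative effect of the error rate on the population's vocabulary:

```python
import random

def naming_game(n_agents, n_steps, error_rate, seed=0):
    rng = random.Random(seed)
    lexicons = [set() for _ in range(n_agents)]
    next_word = 0
    for _ in range(n_steps):
        speaker, listener = rng.sample(range(n_agents), 2)
        if not lexicons[speaker]:                # invent a name if lexicon is empty
            lexicons[speaker].add(next_word)
            next_word += 1
        word = rng.choice(sorted(lexicons[speaker]))
        if rng.random() < error_rate:            # learning error: the word is
            word = next_word                     # mis-learned as a brand-new one
            next_word += 1
        if word in lexicons[listener]:           # success: both collapse to the word
            lexicons[speaker] = {word}
            lexicons[listener] = {word}
        else:                                    # failure: the listener memorises it
            lexicons[listener].add(word)
    return lexicons

def vocabulary(lexicons):
    return len(set().union(*lexicons))

clean = naming_game(50, 20000, 0.0)   # converges to (near) consensus
noisy = naming_game(50, 20000, 0.2)   # errors keep injecting new words
```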
Probability analysis of MCO over-pressurization during staging
International Nuclear Information System (INIS)
Pajunen, A.L.
1997-01-01
The purpose of this calculation is to determine the probability of Multi-Canister Overpacks (MCOs) over-pressurizing during staging at the Canister Storage Building (CSB). Pressurization of an MCO during staging is dependent upon changes to the MCO gas temperature and the build-up of reaction products during the staging period. These effects are predominantly limited by the amount of water that remains in the MCO following cold vacuum drying that is available for reaction during staging conditions. Because of the potential for increased pressure within an MCO, provisions for a filtered pressure relief valve and rupture disk have been incorporated into the MCO design. This calculation provides an estimate of the frequency that an MCO will contain enough water to pressurize beyond the limits of these design features. The results of this calculation will be used in support of further safety analyses and operational planning efforts. Under the bounding steady state CSB condition assumed for this analysis, an MCO must contain less than 1.6 kg (3.7 lbm) of water available for reaction to preclude actuation of the pressure relief valve at 100 psid. To preclude actuation of the MCO rupture disk at 150 psid, an MCO must contain less than 2.5 kg (5.5 lbm) of water available for reaction. These limits are based on the assumption that hydrogen generated by uranium-water reactions is the sole source of gas produced within the MCO and that hydrates in fuel particulate are the primary source of water available for reactions during staging conditions. The results of this analysis conclude that the probability of the hydrate water content of an MCO exceeding 1.6 kg is 0.08 and the probability that it will exceed 2.5 kg is 0.01. This implies that approximately 32 of 400 staged MCOs may experience pressurization to the point where the pressure relief valve actuates. In the event that an MCO pressure relief valve fails to open, the probability is 1 in 100 that the MCO would experience
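The fleet-level implication of these frequencies follows from treating each staged MCO as an independent Bernoulli trial; the probabilities 0.08 and 0.01 are taken from the analysis above, while the binomial treatment is a simple illustrative sketch:

```python
from math import comb

def binom_pmf(k, n, p):
    # probability that exactly k of n independent MCOs exceed the water limit
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

n_mcos = 400
p_relief = 0.08      # P(water > 1.6 kg): relief valve actuation at 100 psid
p_rupture = 0.01     # P(water > 2.5 kg): rupture disk limit at 150 psid

expected_relief = n_mcos * p_relief               # ~32 MCOs, matching the analysis
p_any_rupture = 1 - (1 - p_rupture) ** n_mcos     # chance any MCO passes 2.5 kg
```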
Rahim, Ahmad Nabil Bin Ab; Mohamed, Faizal; Farid, Mohd Fairus Abdul; Fazli Zakaria, Mohd; Sangau Ligam, Alfred; Ramli, Nurhayati Binti
2018-01-01
Human factors can be affected by prevalent stress, measured here using the Depression, Anxiety and Stress Scale (DASS). From the respondents' feedback it can be summarized that the main factor causing the highest prevalence of stress is working conditions that require operators to handle critical situations and make prompt critical decisions. Examining the relationship between prevalence of stress and performance shaping factors, PSFFitness and PSFWork Process showed positive Pearson's correlations, with scores of .763 and .826 and significance levels of p = .028 and p = .012, respectively. These are good, significant positive correlations between prevalence of stress and the human performance shaping factors (PSFs) related to fitness and to work processes and procedures. The higher the stress level of the respondents, the higher the scores selected for these PSFs: higher levels of stress lead to deteriorating physical health and worsened cognition. In addition, a lack of understanding of the work procedures can itself be a factor causing growing stress. The higher these values, the higher the probability that human error occurs. Thus, monitoring the stress levels of RTP operators is important to ensure the safety of the RTP.
Directory of Open Access Journals (Sweden)
Matthew Nahorniak
Full Text Available In ecology, as in other research fields, efficient sampling for population estimation often drives sample designs toward unequal-probability sampling, such as in stratified sampling. Design-based statistical analysis tools are appropriate for seamless integration of the sample design into the statistical analysis. However, it is also common and necessary, after a sampling design has been implemented, to use the dataset to address questions that, in many cases, were not considered during the sampling design phase. Questions may arise requiring the use of model-based statistical tools such as multiple regression, quantile regression, or regression tree analysis. Such model-based tools may require, to ensure unbiased estimation, data from simple random samples, which can be problematic when analyzing data from unequal-probability designs. Despite the numerous method-specific tools available to properly account for sampling design, too often in the analysis of ecological data the sample design is ignored and the consequences are not properly considered. We demonstrate here that violation of this assumption can lead to biased parameter estimates in ecological research. In addition to the set of tools available for researchers to properly account for sampling design in model-based analysis, we introduce inverse probability bootstrapping (IPB). Inverse probability bootstrapping is an easily implemented method for obtaining equal-probability re-samples from a probability sample, from which unbiased model-based estimates can be made. We demonstrate the potential for bias in model-based analyses that ignore sample inclusion probabilities, and the effectiveness of IPB sampling in eliminating this bias, using both simulated and actual ecological data. For illustration, we considered three model-based analysis tools: linear regression, quantile regression, and boosted regression tree analysis. In all models, using both simulated and actual ecological data, we
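A minimal IPB sketch with a made-up two-stratum population shows the bias from ignoring inclusion probabilities and its removal by inverse-probability resampling (a simplified illustration of the idea, not the authors' implementation):

```python
import random

random.seed(7)
# toy population: a large stratum (mean 10) and a small one (mean 20)
population = ([(random.gauss(10, 1), "a") for _ in range(9000)]
              + [(random.gauss(20, 1), "b") for _ in range(1000)])
true_mean = sum(v for v, _ in population) / len(population)

# unequal-probability design: heavily oversample the small stratum
incl_prob = {"a": 0.02, "b": 0.30}
sample = [(v, incl_prob[s]) for v, s in population
          if random.random() < incl_prob[s]]

naive_mean = sum(v for v, _ in sample) / len(sample)   # ignores the design: biased

def ipb_resample(sample, rng):
    # equal-probability pseudo-sample: selection weight = 1 / inclusion probability
    values = [v for v, _ in sample]
    weights = [1.0 / p for _, p in sample]
    return rng.choices(values, weights=weights, k=len(values))

rng = random.Random(0)
ipb_mean = sum(sum(r) / len(r)
               for r in (ipb_resample(sample, rng) for _ in range(200))) / 200
```

Any model-based tool (regression, quantile regression, trees) can then be fit to the IPB re-samples instead of the raw unequal-probability sample.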
Dose error analysis for a scanned proton beam delivery system
International Nuclear Information System (INIS)
Coutrakon, G; Wang, N; Miller, D W; Yang, Y
2010-01-01
All particle beam scanning systems are subject to dose delivery errors due to errors in the position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in the detector, detector electronics and magnet responses all contribute delivery errors. In this paper, we present dose errors for an 8 × 10 × 8 cm³ target of uniform water-equivalent density with an 8 cm spread-out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of the scanning system hardware have been included in the analysis. By using Gaussian-shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that, with reasonable assumptions about random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel of less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
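The averaging effect that keeps per-voxel rms errors small can be sketched with a toy Monte Carlo in which one voxel receives many spots, each carrying an assumed random intensity error. All numbers here are illustrative, not the paper's measured beam data:

```python
import math, random

random.seed(3)
prescribed = 2.0          # Gy
n_spots = 100             # spots contributing (equally, for simplicity) to one voxel
spot_dose = prescribed / n_spots
sigma_rel = 0.05          # assumed 5% rms relative intensity error per spot

def one_treatment_dose():
    # deliver all spots, each with an independent Gaussian intensity fluctuation
    return sum(spot_dose * (1.0 + random.gauss(0.0, sigma_rel))
               for _ in range(n_spots))

doses = [one_treatment_dose() for _ in range(2000)]
rms_error = math.sqrt(sum((d - prescribed) ** 2 for d in doses) / len(doses))
rms_percent = 100.0 * rms_error / prescribed   # ~ sigma_rel / sqrt(n_spots)
```

Because independent spot errors average out, the per-voxel rms error scales as sigma_rel divided by the square root of the number of contributing spots, about 0.5% here.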
A new convolution algorithm for loss probability analysis in multiservice networks
DEFF Research Database (Denmark)
Huang, Qian; Ko, King-Tim; Iversen, Villy Bæk
2011-01-01
Performance analysis in multiservice loss systems generally focuses on accurate and efficient calculation methods for the traffic loss probability. The convolution algorithm is one of the existing efficient numerical methods. Exact loss probabilities are obtainable from the convolution algorithm in systems where the bandwidth is fully shared by all traffic classes, but not for systems with trunk reservation, i.e. where part of the bandwidth is reserved for a special class of traffic. A proposal known as the asymmetric convolution algorithm (ACA) has been made to overcome this deficiency of the convolution algorithm. It obtains an approximation of the channel occupancy distribution in multiservice systems with trunk reservation. However, the ACA approximation is only accurate with two traffic flows; increased approximation errors are observed for systems with three or more traffic flows. In this paper, we
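For the fully shared case (no trunk reservation), the convolution algorithm can be sketched as follows; the traffic values are arbitrary, and a single one-slot class reproduces the Erlang B formula as a check:

```python
from math import factorial

def class_occupancy(offered, slots_per_call, capacity):
    # unnormalised occupancy distribution of one Poisson class in isolation
    dist = [0.0] * (capacity + 1)
    calls = 0
    while calls * slots_per_call <= capacity:
        dist[calls * slots_per_call] = offered ** calls / factorial(calls)
        calls += 1
    return dist

def convolve(p, q, capacity):
    # convolution truncated at the link capacity
    out = [0.0] * (capacity + 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j <= capacity:
                out[i + j] += pi * qj
    return out

def loss_probabilities(classes, capacity):
    # classes: (offered traffic in erlangs, slots per call); full sharing assumed
    state = [1.0] + [0.0] * capacity
    for offered, slots in classes:
        state = convolve(state, class_occupancy(offered, slots, capacity), capacity)
    total = sum(state)
    # a class-k call is lost in states with fewer than slots_k free channels
    return [sum(state[capacity - slots + 1:]) / total for _, slots in classes]
```

With one class of 5 erlangs on 10 channels this yields the Erlang B blocking of about 0.0184; with several classes it gives per-class loss probabilities on the shared link.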
Energy Technology Data Exchange (ETDEWEB)
Kim, Jae Whan; Jung, Won Dea [Korea Atomic Energy Research Institute, Taejeon (Korea)
2002-03-01
This report presents a procedurised human reliability analysis (HRA) methodology, AGAPE-ET (A Guidance And Procedure for Human Error Analysis for Emergency Tasks), for both qualitative error analysis and quantification of the human error probability (HEP) of emergency tasks in nuclear power plants. AGAPE-ET is based on a simplified cognitive model. For each cognitive function, error causes or error-likely situations have been identified considering the characteristics of the performance of that cognitive function and the influencing mechanism of PIFs on it. Error analysis items were then determined from the identified error causes or error-likely situations to cue and guide the analysts through the overall human error analysis. A human error analysis procedure based on these error analysis items is organised. The basic scheme for the quantification of HEP consists of multiplying the BHEP assigned to the error analysis item by the weight from the influencing factors decision tree (IFDT) constituted for each cognitive function. The method can be characterised by the structured identification of the weak points of the task to be performed and by an efficient analysis process in which the analysts need only address the relevant cognitive functions. The report also presents the application of AGAPE-ET to 31 nuclear emergency tasks and its results. 42 refs., 7 figs., 36 tabs. (Author)
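The quantification scheme can be sketched as below. The BHEPs, the IFDT weights, and the rule for combining per-function HEPs into a task-level HEP are illustrative assumptions, not values or rules taken from the report:

```python
# hypothetical per-cognitive-function basic HEPs and IFDT weights
basic_heps = {"situation_assessment": 0.003, "response_planning": 0.001}
ifdt_weights = {"situation_assessment": 5.0,   # e.g. poor information quality
                "response_planning": 2.0}      # e.g. mild time pressure

# per-function HEP = BHEP x IFDT weight; functions combined assuming
# independent failure opportunities (an assumption for this sketch)
success = 1.0
for func, bhep in basic_heps.items():
    success *= 1.0 - bhep * ifdt_weights[func]
task_hep = 1.0 - success
```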
Analysis of Errors in a Special Perturbations Satellite Orbit Propagator
Energy Technology Data Exchange (ETDEWEB)
Beckerman, M.; Jones, J.P.
1999-02-01
We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors and the amplitudes of the radial and cross-track errors increase.
Rekaya, Romdhane; Smith, Shannon; Hay, El Hamidi; Farhat, Nourhene; Aggrey, Samuel E
2016-01-01
Errors in the binary status of some response traits are frequent in human, animal, and plant applications. These error rates tend to differ between cases and controls because diagnostic and screening tests have different sensitivity and specificity. This increases the inaccuracies of classifying individuals into correct groups, giving rise to both false-positive and false-negative cases. The analysis of these noisy binary responses due to misclassification will undoubtedly reduce the statistical power of genome-wide association studies (GWAS). A threshold model that accommodates varying diagnostic errors between cases and controls was investigated. A simulation study was carried out where several binary data sets (case-control) were generated with varying effects for the most influential single nucleotide polymorphisms (SNPs) and different diagnostic error rate for cases and controls. Each simulated data set consisted of 2000 individuals. Ignoring misclassification resulted in biased estimates of true influential SNP effects and inflated estimates for true noninfluential markers. A substantial reduction in bias and increase in accuracy ranging from 12% to 32% was observed when the misclassification procedure was invoked. In fact, the majority of influential SNPs that were not identified using the noisy data were captured using the proposed method. Additionally, truly misclassified binary records were identified with high probability using the proposed method. The superiority of the proposed method was maintained across different simulation parameters (misclassification rates and odds ratios) attesting to its robustness.
Stochastic and sensitivity analysis of shape error of inflatable antenna reflectors
San, Bingbing; Yang, Qingshan; Yin, Liwei
2017-03-01
Inflatable antennas are promising candidates for future satellite communications and space observations, since they are lightweight, low-cost and have a small packaged volume. However, due to their high flexibility, inflatable reflectors are difficult to manufacture accurately, which may result in undesirable shape errors and thus affect their performance negatively. In this paper, the stochastic characteristics of shape errors induced during the manufacturing process are investigated using Latin hypercube sampling coupled with manufacture simulations. Four main random error sources are involved: errors in membrane thickness, errors in the elastic modulus of the membrane, boundary deviations, and pressure variations. Using regression and correlation analysis, a global sensitivity study is conducted to rank the importance of these error sources. This global sensitivity analysis is novel in that it can take into account both the random variation and the interaction between error sources. Analyses are carried out parametrically with various focal-length-to-diameter ratios (F/D) and aperture sizes (D) of reflectors to investigate their effects on the significance ranking of error sources. The research reveals that the RMS (Root Mean Square) of the shape error is a random quantity with an exponential probability distribution and features great dispersion; with the increase of F/D and D, both the mean value and the standard deviation of the shape errors increase; in the proposed range, the significance ranking of error sources is independent of F/D and D; boundary deviation imposes the greatest effect, with a much higher weight than the others; pressure variation ranks second; errors in the thickness and elastic modulus of the membrane rank last, with sensitivities very close to that of pressure variation. Finally, suggestions are given for the control of the shape accuracy of reflectors, and allowable values of the error sources are proposed from the perspective of reliability.
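Latin hypercube sampling, the technique used above to propagate the four error sources, can be sketched in a few lines; the dimension names are illustrative and the unit-interval draws would then be mapped to each source's physical range or distribution:

```python
import random

def latin_hypercube(n_samples, n_dims, rng):
    # one stratified uniform draw per (sample, dimension) cell, shuffled per column
    columns = []
    for _ in range(n_dims):
        column = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(column)
        columns.append(column)
    return list(zip(*columns))   # rows: sample points in the unit hypercube

rng = random.Random(11)
# e.g. columns for thickness, elastic modulus, boundary deviation, pressure
points = latin_hypercube(100, 4, rng)
```

Each dimension receives exactly one draw per stratum of width 1/n, which is what gives LHS better space coverage than plain Monte Carlo for the same sample count.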
ERROR ANALYSIS ON INFORMATION AND TECHNOLOGY STUDENTS’ SENTENCE WRITING ASSIGNMENTS
Directory of Open Access Journals (Sweden)
Rentauli Mariah Silalahi
2015-03-01
Full Text Available Students’ error analysis is very important for helping EFL teachers to develop their teaching materials, assessments and methods. However, it takes much time and effort for teachers to carry out such an error analysis of their students’ language. This study seeks to identify the common errors made by one class of 28 freshman students studying English in their first semester at an IT university. The data were collected from their writing assignments over eight consecutive weeks. The errors found were classified into 24 types, and the ten most common errors committed by the students concerned articles, prepositions, spelling, word choice, subject-verb agreement, auxiliary verbs, plural forms, verb forms, capital letters, and meaningless sentences. The findings about the students’ frequency of committing errors were then contrasted with their midterm test results, and, in order to find out the reasons behind the error recurrence, the students were asked to answer a questionnaire. Most of the students admitted that carelessness was the major reason for their errors, with a lack of understanding coming next. This study suggests that EFL teachers devote time to continuously checking their students’ language and giving corrections, so that the students can learn from their errors and stop committing the same ones.
Probability maps as a measure of reliability for intervisibility analysis
Directory of Open Access Journals (Sweden)
Joksić Dušan
2005-01-01
Full Text Available Digital terrain models (DTMs) represent segments of spatial databases related to the presentation of terrain features and landforms. Square-grid elevation models (DEMs) have emerged as the most widely used structure during the past decade because of their simplicity and simple computer implementation. They have become an important segment of Topographic Information Systems (TIS), storing natural and artificial landscapes in the form of digital models. This kind of data structure is especially suitable for morphometric terrain evaluation and analysis, which is very important in environmental and urban planning and in Earth surface modeling applications. One of the most often used functionalities of Geographical Information Systems software packages is intervisibility, or viewshed, analysis of terrain. Intervisibility determination from analog topographic maps may be very exhausting because of the large number of profiles that have to be extracted and compared. Terrain representation in the form of DEM databases facilitates this task. This paper describes a simple algorithm for terrain viewshed analysis using DEM database structures, taking into consideration the influence of the uncertainties of such data on the results obtained. The concept of probability maps is introduced as a means of evaluating the results, and is presented as a thematic display.
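As a rough illustration of the probability-map idea, the following sketch runs a Monte Carlo viewshed over a 1-D elevation profile with Gaussian elevation noise: each cell's value is the fraction of noisy DEM realisations in which it is visible from the observer. The profile, noise level, and observer height are invented for the example; the paper's algorithm operates on full 2-D DEMs.

```python
import random

def visible(profile, obs_i, obs_h, tgt_i):
    """Line-of-sight test along a 1-D elevation profile (unit cell spacing)."""
    x0, z0 = obs_i, profile[obs_i] + obs_h
    x1, z1 = tgt_i, profile[tgt_i]
    step = 1 if tgt_i > obs_i else -1
    for x in range(obs_i + step, tgt_i, step):
        # elevation of the sight line above cell x
        z_line = z0 + (z1 - z0) * (x - x0) / (x1 - x0)
        if profile[x] > z_line:
            return False
    return True

def probability_map(profile, obs_i, obs_h, sigma, trials, rng):
    """Per-cell probability of visibility under Gaussian DEM elevation noise."""
    n = len(profile)
    hits = [0] * n
    for _ in range(trials):
        noisy = [z + rng.gauss(0.0, sigma) for z in profile]
        for t in range(n):
            if t == obs_i or visible(noisy, obs_i, obs_h, t):
                hits[t] += 1
    return [h / trials for h in hits]

rng = random.Random(1)
profile = [0, 1, 5, 1, 0, 0]   # a ridge at index 2 hides the cells behind it
pmap = probability_map(profile, obs_i=0, obs_h=2.0, sigma=0.3, trials=500, rng=rng)
```

Cells in front of the ridge come out with visibility probability near 1, cells behind it near 0; with larger elevation noise the map grades smoothly between the two, which is exactly the reliability information a crisp viewshed discards.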
ERROR CONVERGENCE ANALYSIS FOR LOCAL HYPERTHERMIA APPLICATIONS
Directory of Open Access Journals (Sweden)
NEERU MALHOTRA
2016-01-01
Full Text Available The accuracy of a numerical solution for an electromagnetic problem is greatly influenced by the convergence of the solution obtained. In order to quantify the correctness of the numerical solution, the errors produced in solving the partial differential equations are required to be analyzed. Mesh quality is another parameter that affects convergence. The various quality metrics are dependent on the type of solver used for numerical simulation. The paper focuses on comparing the performance of iterative solvers used in COMSOL Multiphysics software. The modeling of a coaxial coupled waveguide applicator operating at 485 MHz has been done for local hyperthermia applications using the adaptive finite element method. The 3D heat distribution within the muscle phantom, depicting a spherical lesion and a localized heating pattern, confirms the proper selection of the solver. The convergence plots are obtained during simulation of the problem using GMRES (generalized minimal residual) and geometric multigrid linear iterative solvers. The best error convergence is achieved by using the multigrid solver and further introducing adaptivity in the nonlinear solver.
Human Error Assessment in Minefield Cleaning Operation Using Human Event Analysis
Directory of Open Access Journals (Sweden)
Mohammad Hajiakbari
2015-12-01
Full Text Available Background & objective: Human error is one of the main causes of accidents. Due to the unreliability of the human element and the high-risk nature of demining operations, this study aimed to assess and manage the human errors likely to occur in such operations. Methods: This study was performed at a demining site in war zones located in the west of Iran. After acquiring an initial familiarity with the operations, methods, and tools of clearing minefields, the job tasks related to clearing landmines were specified. Next, these tasks were studied using HTA, and the related possible errors were assessed using ATHEANA. Results: The demining task was composed of four main operations: primary detection, technical identification, investigation, and neutralization. Four main causes of accidents in such operations were found: walking on mines, leaving mines with no action taken, errors in the neutralization operation, and environmental explosion. The probability of human error in mine clearance operations was calculated as 0.010. Conclusion: The main causes of human error in demining operations can be attributed to various factors such as poor weather and operating conditions (e.g., outdoor work), inappropriate personal protective equipment, personality characteristics, insufficient accuracy in the work, and insufficient available time. To reduce the probability of human error in demining operations, the aforementioned factors should be managed properly.
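For a sense of how per-task error probabilities roll up into an overall figure like the 0.010 reported above, a minimal sketch, assuming independent task errors; the per-task probabilities below are purely illustrative and are not the study's values.

```python
# Hypothetical per-task human error probabilities for the four de-mining
# operations; invented for illustration, not taken from the study.
p_task = {
    "primary detection": 0.004,
    "technical identification": 0.003,
    "investigation": 0.002,
    "neutralization": 0.001,
}

def overall_hep(probabilities):
    """P(at least one error) assuming independent task errors."""
    p_ok = 1.0
    for p in probabilities:
        p_ok *= 1.0 - p  # probability every task is performed without error
    return 1.0 - p_ok

hep = overall_hep(p_task.values())
```

For small probabilities this is close to the simple sum of the per-task values, which is why the illustrative inputs above land near 0.010.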
Analysis of Medication Errors in Simulated Pediatric Resuscitation by Residents
Directory of Open Access Journals (Sweden)
Evelyn Porter
2014-07-01
Full Text Available Introduction: The objective of our study was to estimate the incidence of prescribing medication errors specifically made by a trainee and to identify factors associated with these errors during the simulated resuscitation of a critically ill child. Methods: The results of the simulated resuscitation are described. We analyzed data from the simulated resuscitation for the occurrence of a prescribing medication error. We compared univariate analysis of each variable to the medication error rate and performed a separate multiple logistic regression analysis on the significant univariate variables to assess the association between the selected variables. Results: We reviewed 49 simulated resuscitations. The final medication error rate for the simulation was 26.5% (95% CI 13.7%-39.3%). On univariate analysis, statistically significant findings for decreased prescribing medication error rates included senior residents in charge, presence of a pharmacist, sleeping greater than 8 hours prior to the simulation, and a visual analog scale score showing more confidence in caring for critically ill children. Multiple logistic regression analysis using the above significant variables showed only the presence of a pharmacist to remain significantly associated with decreased medication error, odds ratio 0.09 (95% CI 0.01-0.64). Conclusion: Our results indicate that the presence of a clinical pharmacist during the resuscitation of a critically ill child reduces the medication errors made by resident physician trainees.
Analysis of medication errors in simulated pediatric resuscitation by residents.
Porter, Evelyn; Barcega, Besh; Kim, Tommy Y
2014-07-01
The objective of our study was to estimate the incidence of prescribing medication errors specifically made by a trainee and identify factors associated with these errors during the simulated resuscitation of a critically ill child. The results of the simulated resuscitation are described. We analyzed data from the simulated resuscitation for the occurrence of a prescribing medication error. We compared univariate analysis of each variable to medication error rate and performed a separate multiple logistic regression analysis on the significant univariate variables to assess the association between the selected variables. We reviewed 49 simulated resuscitations. The final medication error rate for the simulation was 26.5% (95% CI 13.7% - 39.3%). On univariate analysis, statistically significant findings for decreased prescribing medication error rates included senior residents in charge, presence of a pharmacist, sleeping greater than 8 hours prior to the simulation, and a visual analog scale score showing more confidence in caring for critically ill children. Multiple logistic regression analysis using the above significant variables showed only the presence of a pharmacist to remain significantly associated with decreased medication error, odds ratio of 0.09 (95% CI 0.01 - 0.64). Our results indicate that the presence of a clinical pharmacist during the resuscitation of a critically ill child reduces the medication errors made by resident physician trainees.
Error Analysis on Plane-to-Plane Linear Approximate Coordinate ...
Indian Academy of Sciences (India)
© Indian Academy of Sciences. Error Analysis on Plane-to-Plane Linear Approximate Coordinate Transformation. Q. F. Zhang, Q. Y. Peng & J. H. Fan ... In astronomy, some tasks require performing the coordinate transformation between two tangent planes in ... Based on these parameters, we get maximum errors in ...
Error Analysis on Plane-to-Plane Linear Approximate Coordinate ...
Indian Academy of Sciences (India)
Abstract. In this paper, an error analysis is carried out for the linear approximate transformation between two tangent planes on the celestial sphere in a simple case. The results demonstrate that the error from the linear transformation does not meet the requirement of high-precision astrometry under some conditions, so the ...
Error Analysis on Plane-to-Plane Linear Approximate Coordinate ...
Indian Academy of Sciences (India)
2016-01-27
In this paper, the error analysis has been done for the linear approximate transformation between two tangent planes in celestial sphere in a simple case. The results demonstrate that the error from the linear transformation does not meet the requirement of high-precision astrometry under some conditions, ...
Implications of Error Analysis Studies for Academic Interventions
Mather, Nancy; Wendling, Barbara J.
2017-01-01
We reviewed 13 studies that focused on analyzing student errors on achievement tests from the Kaufman Test of Educational Achievement-Third edition (KTEA-3). The intent was to determine what instructional implications could be derived from in-depth error analysis. As we reviewed these studies, several themes emerged. We explain how a careful…
Lower extremity angle measurement with accelerometers - error and sensitivity analysis
Willemsen, A.T.M.; Willemsen, Antoon Th.M.; Frigo, Carlo; Boom, H.B.K.
1991-01-01
The use of accelerometers for angle assessment of the lower extremities is investigated. This method is evaluated by an error-and-sensitivity analysis using healthy subject data. Of three potential error sources (the reference system, the accelerometers, and the model assumptions) the last is found
Real-time analysis for Stochastic errors of MEMS gyro
Miao, Zhiyong; Shi, Hongyang; Zhang, Yi
2017-10-01
A good knowledge of MEMS gyro stochastic errors is critical to MEMS INS/GPS integration systems; the stochastic errors of a MEMS gyro should therefore be accurately modeled and identified. The Allan variance method is the IEEE-standard method in the field of analyzing the stochastic errors of gyros. This kind of method can fully characterize the random character of stochastic errors. However, it requires a large amount of data to be stored, resulting in a large offline computational burden. Moreover, it involves a laborious procedure of fitting slope lines for estimation. To overcome these barriers, a simple linear state-space model was established for the MEMS gyro. Then, a recursive EM algorithm was implemented to estimate the stochastic errors of the MEMS gyro in real time. The experimental results of an ADIS16405 IMU show that the real-time estimates of the proposed approach are well within the error limits of the Allan variance method. Moreover, the proposed method effectively avoids the storage of data.
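The batch Allan variance computation that the proposed recursive method replaces can be sketched as follows. The non-overlapping estimator and the pure white-noise input are simplifications for illustration; for white rate noise of variance σ², the Allan variance falls as σ²·τ₀/τ, which is the -1/2 slope line one would fit on an Allan deviation plot.

```python
import random
import statistics

def allan_variance(data, m, tau0=1.0):
    """Non-overlapping Allan variance at cluster size m (cluster time tau = m*tau0)."""
    n_clusters = len(data) // m
    # average the signal over consecutive clusters of m samples
    means = [statistics.fmean(data[i * m:(i + 1) * m]) for i in range(n_clusters)]
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(n_clusters - 1)]
    return 0.5 * statistics.fmean(diffs)

# Simulated gyro output: pure white rate noise (unit variance), an assumption;
# a real MEMS gyro also shows bias instability, rate random walk, etc.
rng = random.Random(0)
rate = [rng.gauss(0.0, 1.0) for _ in range(100_000)]

avar_1 = allan_variance(rate, m=1)      # ~ sigma^2 = 1
avar_100 = allan_variance(rate, m=100)  # ~ sigma^2 / 100 for white noise
```

Note the storage issue the abstract mentions: the whole record must be held to evaluate every cluster size, which is exactly what a recursive state-space estimator avoids.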
Characterizing single-molecule FRET dynamics with probability distribution analysis.
Santoso, Yusdi; Torella, Joseph P; Kapanidis, Achillefs N
2010-07-12
Probability distribution analysis (PDA) is a recently developed statistical tool for predicting the shapes of single-molecule fluorescence resonance energy transfer (smFRET) histograms, which allows the identification of single or multiple static molecular species within a single histogram. We used a generalized PDA method to predict the shapes of FRET histograms for molecules interconverting dynamically between multiple states. This method is tested on a series of model systems, including both static DNA fragments and dynamic DNA hairpins. By fitting the shape of this expected distribution to experimental data, the timescale of hairpin conformational fluctuations can be recovered, in good agreement with earlier published results obtained using different techniques. This method is also applied to studying the conformational fluctuations in the unliganded Klenow fragment (KF) of Escherichia coli DNA polymerase I, which allows both confirmation of the consistency of a simple, two-state kinetic model with the observed smFRET distribution of unliganded KF and extraction of a millisecond fluctuation timescale, in good agreement with rates reported elsewhere. We expect this method to be useful in extracting rates from processes exhibiting dynamic FRET, and in hypothesis-testing models of conformational dynamics against experimental data.
Error and Uncertainty Analysis for Ecological Modeling and Simulation
National Research Council Canada - National Science Library
Gertner, George
1998-01-01
The main objectives of this project are a) to develop a general methodology for conducting sensitivity and uncertainty analysis and building error budgets in simulation modeling over space and time; and b...
Error analysis of large aperture static interference imaging spectrometer
Li, Fan; Zhang, Guo
2015-12-01
Large Aperture Static Interference Imaging Spectrometer (LASIS) is a new type of spectrometer with a light structure, high spectral linearity, high luminous flux, wide spectral range, etc., which overcomes the contradiction between high flux and high stability and therefore has important value in scientific studies and applications. However, the error laws in the imaging process of LASIS differ from those of traditional imaging spectrometers because of its different imaging style, and correspondingly its data processing is complicated. In order to improve the accuracy of spectrum detection and to serve quantitative analysis and monitoring of topographical surface features, the error laws of LASIS imaging need to be studied. In this paper, the LASIS errors are classified as interferogram error, radiometric correction error and spectral inversion error, and each type of error is analyzed and studied. Finally, a case study of Yaogan-14 is presented, in which the interferogram error of LASIS with combined time and space modulation is the main subject of experiment and analysis, together with the errors from the radiometric correction and spectral inversion processes.
Analysis of Employee's Survey for Preventing Human-Errors
International Nuclear Information System (INIS)
Sung, Chanho; Kim, Younggab; Joung, Sanghoun
2013-01-01
Human errors in nuclear power plants can cause large and small events or incidents. These events or incidents are among the main contributors to reactor trips and might threaten the safety of nuclear plants. To prevent human errors, KHNP (Korea Hydro & Nuclear Power) introduced human-error prevention techniques and has applied them to main areas such as plant operation, operation support, and maintenance and engineering. This paper proposes methods to prevent and reduce human errors in nuclear power plants by analyzing survey results covering the utilization of the human-error prevention techniques and the employees' awareness of preventing human errors. With regard to human-error prevention, this survey analysis presented the status of the human-error prevention techniques and the employees' awareness of preventing human errors. Employees' understanding and utilization of the techniques were generally high, and the training level of employees and the training effect on actual work were in good condition. Also, employees answered that the root causes of human error lay in the working environment, including tight processes, manpower shortage, and excessive workload, rather than in personal negligence or lack of personal knowledge. Consideration of the working environment is certainly needed. At the present time, based on this survey analysis, the best methods of preventing human error are personal protective equipment, substantial training and education, private mental health checks before starting work, prohibition of performing multiple tasks, compliance with procedures, and enhancement of job site review. However, the most important and basic things for preventing human error are the interest of workers and an organizational atmosphere with good communication between managers and workers, and between employees and bosses
Attitude Determination Error Analysis System (ADEAS) mathematical specifications document
Nicholson, Mark; Markley, F.; Seidewitz, E.
1988-01-01
The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is presented, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.
Data Analysis & Statistical Methods for Command File Errors
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these variables. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
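A minimal sketch of the multiple-regression step on hypothetical mission records. The variable names and numbers are invented; the `y` values were generated exactly as 0.5 + 0.1·files + 0.5·workload, so the fit recovers those coefficients and the sketch can be checked end to end.

```python
def lstsq(X, y):
    """Solve the normal equations (X^T X) b = X^T y by Gaussian elimination."""
    k, n = len(X[0]), len(X)
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for i in range(k):                      # forward elimination with partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0] * k
    for i in reversed(range(k)):            # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef

# Hypothetical records: [1, files_radiated, workload] -> command file errors;
# y generated as 0.5 + 0.1*files + 0.5*workload, so recovery is exact
X = [[1, 10, 2], [1, 20, 3], [1, 30, 5], [1, 40, 4], [1, 50, 7], [1, 60, 6]]
y = [2.5, 4.0, 6.0, 6.5, 9.0, 9.5]
intercept, b_files, b_load = lstsq(X, y)
predicted = intercept + b_files * 35 + b_load * 5  # expected errors for a planned period
```

The comparison the abstract describes, observed rate against the theoretically expected rate, amounts to evaluating `predicted` for the upcoming workload and flagging large residuals.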
Errors of DWPF frit analysis: Final report
International Nuclear Information System (INIS)
Schumacher, R.F.
1993-01-01
Glass frit will be a major raw material for the operation of the Defense Waste Processing Facility. The frit will be controlled by certificate of conformance and a confirmatory analysis from a commercial analytical laboratory. The following effort provides additional quantitative information on the variability of frit chemical analyses at two commercial laboratories. Identical samples of IDMS Frit 202 were chemically analyzed at two commercial laboratories and at three different times over a period of four months. The SRL-ADS analyses, after correction with the reference standard and normalization, provided confirmatory information, but did not detect the low silica level in one of the frit samples. A methodology utilizing elliptical limits for confirming the certificate of conformance or confirmatory analysis was introduced and recommended for use when the analysis values are close but not within the specification limits. It was also suggested that the lithia specification limits might be reduced as long as CELS is used to confirm the analysis
Bayesian Total Error Analysis - An Error Sensitive Approach to Model Calibration
Franks, S. W.; Kavetski, D.; Kuczera, G.
2002-12-01
The majority of environmental models require calibration of their parameters before meaningful predictions of catchment behaviour can be made. Despite the importance of reliable parameter estimates, there are growing concerns about the ability of objective-based inference methods to adequately calibrate environmental models. The problem lies with the formulation of the objective or likelihood function, which is currently implemented using essentially ad hoc methods. We outline limitations of current calibration methodologies and introduce a more systematic Bayesian Total Error Analysis (BATEA) framework for environmental model calibration and validation, which imposes a hitherto missing rigour in environmental modelling by requiring the specification of physically realistic model and data uncertainty models with explicit assumptions that can and must be tested against available evidence. The BATEA formalism enables inference of the hydrological parameters and also of any latent variables of the uncertainty models, e.g., precipitation depth errors. The latter could be useful for improving data sampling and measurement methodologies. In addition, distinguishing between the various sources of errors will reduce the current ambiguity about parameter and predictive uncertainty and enable rational testing of environmental models' hypotheses. Markov chain Monte Carlo methods are employed to manage the increased computational requirements of BATEA. A case study using synthetic data demonstrates that explicitly accounting for forcing errors leads to immediate advantages over traditional regression (e.g., standard least squares calibration), which ignores rainfall history corruption, and over pseudo-likelihood methods (e.g., GLUE), which do not explicitly characterise data and model errors. It is precisely data and model errors that are responsible for the need for calibration in the first place; we expect that understanding these errors will force fundamental shifts in the model
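BATEA's inference relies on Markov chain Monte Carlo; a minimal Metropolis sampler for a single parameter with a Gaussian likelihood and a flat prior gives the flavor. This is generic MCMC machinery under invented data, not the BATEA formulation itself.

```python
import math
import random

def log_post(theta, data, sigma=1.0):
    """Log posterior up to a constant: Gaussian likelihood, flat prior (assumed)."""
    return -0.5 * sum((d - theta) ** 2 for d in data) / sigma ** 2

def metropolis(data, n_steps, step=0.5, rng=None):
    """Random-walk Metropolis sampling of the single parameter theta."""
    rng = rng or random.Random(0)
    theta, lp = 0.0, log_post(0.0, data)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop, data)
        # accept with probability min(1, posterior ratio)
        if math.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

data = [2.1, 1.9, 2.3, 2.2, 1.8]          # synthetic observations around 2.0
chain = metropolis(data, 5000)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
```

In BATEA the state vector would also carry the latent forcing-error variables (e.g., per-storm rainfall depth multipliers), which is what drives up the computational cost the abstract mentions.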
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.
2013-01-01
Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).
International Nuclear Information System (INIS)
Reer, B.; Mertens, J.
1996-05-01
Actions and errors by the operating personnel which are of significance for the safety of a technical system are classified according to various criteria. Each type of action thus identified is roughly discussed with respect to its quantifiability by state-of-the-art human reliability analysis (HRA) within a probabilistic safety assessment (PSA). Thereby, the principal limits of quantifying human actions are discussed, with special emphasis on data quality and cognitive error modelling. In this connection, the basic procedure for an HRA under realistic conditions is briefly described. With respect to the quantitative part of an HRA - the determination of error probabilities - an evaluative description of the standard method THERP (Technique for Human Error Rate Prediction) is given using eight evaluation criteria. Furthermore, six newer developments (EdF's PHRA, HCR, HCR/ORE, SLIM, HEART, INTENT) are briefly described and roughly evaluated. The report concludes with a catalogue of requirements for HRA methods. (orig.) [de]
Grinding Method and Error Analysis of Eccentric Shaft Parts
Wang, Zhiming; Han, Qiushi; Li, Qiguang; Peng, Baoying; Li, Weihua
2017-12-01
RV reducers and various mechanical transmission parts make wide use of eccentric shafts, and there is now a demand for precision grinding technology for eccentric shaft parts. In this paper, the model of the X-C linkage relation for eccentric shaft grinding is studied. By the inversion method, the contour curve of the wheel envelope is deduced, keeping the distance from the center of the eccentric circle constant. Simulation software for eccentric shaft grinding is developed and the correctness of the model is proved. The influence of the X-axis feed error, the C-axis feed error and the wheel radius error on the grinding process is analyzed, and a corresponding error calculation model is proposed. The simulation analysis is carried out to provide a basis for contour error compensation.
Integrated data analysis of fusion diagnostics by means of the Bayesian probability theory
International Nuclear Information System (INIS)
Fischer, R.; Dinklage, A.
2004-01-01
Integrated data analysis (IDA) of fusion diagnostics is the combination of heterogeneous diagnostics to obtain validated physical results. The benefits of the integrated approach result from a systematic use of interdependencies; in that sense, IDA optimizes the extraction of information from sets of different data. For that purpose, IDA requires a systematic and formalized error analysis of all (statistical and systematic) uncertainties involved in each diagnostic. Bayesian probability theory allows for a systematic combination of all information entering the diagnostic model by considering all uncertainties of the measured data, the calibration measurements, and the physical model. Prior physics knowledge on model parameters can be included. Handling of systematic errors is provided. A central goal of the integration of redundant or complementary diagnostics is to provide information to resolve inconsistencies by exploiting interdependencies. A comparable analysis of sets of diagnostics (meta-diagnostics) is performed by combining statistical and systematic uncertainties with model parameters and model uncertainties. Diagnostics improvement, experimental optimization, and the design of meta-diagnostics are discussed
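The simplest instance of the Bayesian combination IDA performs is merging independent Gaussian measurements of the same quantity by precision weighting: posterior precision is the sum of the individual precisions, and the posterior mean is the precision-weighted average. A sketch, with illustrative numbers standing in for two diagnostics measuring one plasma quantity.

```python
def combine_gaussian(measurements):
    """Posterior for a common quantity given independent Gaussian measurements.
    A flat prior is assumed; each measurement is a (value, sigma) pair."""
    w = [1.0 / s ** 2 for _, s in measurements]                  # precisions
    mean = sum(v * wi for (v, _), wi in zip(measurements, w)) / sum(w)
    sigma = (1.0 / sum(w)) ** 0.5
    return mean, sigma

# e.g. electron density from two diagnostics (numbers are illustrative only)
mean, sigma = combine_gaussian([(3.0e19, 0.5e19), (3.4e19, 0.25e19)])
```

The combined uncertainty is always smaller than that of the best single diagnostic, and a combined mean that sits many combined sigmas away from one input is exactly the kind of inconsistency signal the abstract refers to.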
The use of error analysis to assess resident performance.
D'Angelo, Anne-Lise D; Law, Katherine E; Cohen, Elaine R; Greenberg, Jacob A; Kwan, Calvin; Greenberg, Caprice; Wiegmann, Douglas A; Pugh, Carla M
2015-11-01
The aim of this study was to assess validity of a human factors error assessment method for evaluating resident performance during a simulated operative procedure. Seven postgraduate year 4-5 residents had 30 minutes to complete a simulated laparoscopic ventral hernia (LVH) repair on day 1 of a national, advanced laparoscopic course. Faculty provided immediate feedback on operative errors and residents participated in a final product analysis of their repairs. Residents then received didactic and hands-on training regarding several advanced laparoscopic procedures during a lecture session and animate lab. On day 2, residents performed a nonequivalent LVH repair using a simulator. Three investigators reviewed and coded videos of the repairs using previously developed human error classification systems. Residents committed 121 total errors on day 1 compared with 146 on day 2. One of 7 residents successfully completed the LVH repair on day 1 compared with all 7 residents on day 2 (P = .001). The majority of errors (85%) committed on day 2 were technical and occurred during the last 2 steps of the procedure. There were significant differences in error type (P ≤ .001) and level (P = .019) from day 1 to day 2. The proportion of omission errors decreased from day 1 (33%) to day 2 (14%). In addition, there were more technical and commission errors on day 2. The error assessment tool was successful in categorizing performance errors, supporting known-groups validity evidence. Evaluating resident performance through error classification has great potential in facilitating our understanding of operative readiness. Copyright © 2015 Elsevier Inc. All rights reserved.
An Analysis and Quantification Method of Human Errors of Soft Controls in Advanced MCRs
International Nuclear Information System (INIS)
Lee, Seung Jun; Kim, Jae Whan; Jang, Seung Cheol
2011-01-01
In this work, a method was proposed for quantifying human errors that may occur during operation executions using soft controls. Soft controls in advanced main control rooms (MCRs) have totally different features from conventional controls, and thus they may have different human error modes and occurrence probabilities. It is important to define the human error modes and to quantify the error probabilities in order to evaluate the reliability of the system and prevent errors. This work suggests a modified K-HRA method for quantifying the error probability
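The general shape of such a quantification, a nominal error probability scaled by performance-shaping-factor multipliers, can be sketched as below. The nominal value and multipliers are invented for illustration and are not the modified K-HRA values.

```python
# Illustrative nominal probability for one soft-control action (an assumption)
NOMINAL_HEP = 0.001

def adjusted_hep(nominal, psf_multipliers):
    """Scale a nominal HEP by PSF multipliers, capped at 1.0."""
    hep = nominal
    for m in psf_multipliers.values():
        hep *= m
    return min(hep, 1.0)

# Hypothetical PSF multipliers for a soft-control action in an advanced MCR
psfs = {"available time": 1.0, "stress": 2.0, "interface complexity": 5.0}
hep = adjusted_hep(NOMINAL_HEP, psfs)
```

The cap at 1.0 matters: with several adverse multipliers the raw product can exceed a probability, and HRA methods that use this structure bound the result accordingly.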
Comparative analysis on the probability of being a good payer
Mihova, V.; Pavlov, V.
2017-10-01
Credit risk assessment is crucial for the banking industry. Current practice uses various approaches for the calculation of credit risk. The core of these approaches is the use of multiple regression models, applied in order to assess the risk associated with the approval of people applying for certain products (loans, credit cards, etc.). Based on data from the past, these models try to predict what will happen in the future. Different data require different types of models. This work studies the causal link between the conduct of an applicant while paying off the loan and the data completed at the time of application. A database of 100 borrowers from a commercial bank is used for the purposes of the study. The available data include information from the time of application and the credit history while paying off the loan. Customers are divided into two groups based on their credit history: Good and Bad payers. Linear and logistic regression are applied in parallel to the data in order to estimate the probability of a new borrower being a good payer. A variable which takes the value 1 for Good borrowers and 0 for Bad ones is modeled as the dependent variable. To decide which of the variables listed in the database should be used in the modelling process (as independent variables), a correlation analysis is made. Based on its results, several combinations of independent variables are tested as initial models, both with linear and logistic regression. The best linear and logistic models are obtained after an initial transformation of the data and by following a set of standard and robust statistical criteria. A comparative analysis between the two final models is made, and scorecards are obtained from both models to assess new customers at the time of application. A cut-off level of points, below which to reject applications and above which to accept them, has been suggested for both models, applying the strategy to keep the same Accept Rate as
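A minimal sketch of the logistic-regression half of this approach, on invented applicant data: the features, values, and clean separability are assumptions for illustration, and the study's actual database and variable selection are not reproduced.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=10_000):
    """Plain batch gradient descent on the logistic loss; X rows start with a 1."""
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi))) - yi
            for j, xj in enumerate(xi):
                grad[j] += err * xj / n
        w = [wj - lr * g for wj, g in zip(w, grad)]
    return w

# Invented applicants: [1, income/100k, past defaults] -> 1 = Good payer
X = [[1, 0.20, 2], [1, 0.25, 1], [1, 0.30, 2], [1, 0.50, 0], [1, 0.60, 0], [1, 0.80, 1]]
y = [0, 0, 0, 1, 1, 1]
w = fit_logistic(X, y)

# scorecard-style query: probability that a new applicant is a Good payer
p_good = sigmoid(sum(wj * xj for wj, xj in zip(w, [1, 0.55, 0])))
```

A scorecard is then a monotone rescaling of this probability into points, and the cut-off the abstract mentions is a threshold on those points chosen to hit the desired Accept Rate.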
National Research Council Canada - National Science Library
Scheirman, Katherine
2001-01-01
An analysis was accomplished of all inpatient medication errors at a military academic medical center during the year 2000, based on the causes of medication errors as described by current research in the field...
Lugtig, Peter; Toepoel, Vera
2016-01-01
Respondents in an Internet panel survey can often choose which device they use to complete questionnaires: a traditional PC, laptop, tablet computer, or a smartphone. Because all these devices have different screen sizes and modes of data entry, measurement errors may differ between devices. Using
Disasters of endoscopic surgery and how to avoid them: error analysis.
Troidl, H
1999-08-01
For every innovation there are two sides to consider. For endoscopic surgery the positive side is more comfort for the patient, and the negative side is new complications, even disasters, such as injuries to organs (e.g., the bowel), vessels, and the common bile duct. These disasters are rare and seldom reported in the scientific world, as at conferences, at symposiums, and in publications. Today there are many methods for testing an innovation (controlled clinical trials, consensus conferences, audits, and confidential inquiries). Reporting "complications," however, does not help to avoid them. We need real methods for avoiding negative failures. Failure analysis is the method of choice in industry. If an airplane crashes, error analysis starts immediately. Humans make errors, and making errors means punishment. Failure analysis means rigorously and objectively investigating a clinical situation to find clinically relevant information for avoiding these negative events in the future. Error analysis has four important steps: (1) What was the clinical situation? (2) What happened? (3) Most important: Why did it happen? (4) How do we avoid the negative event or disaster in the future? Error analysis has decisive advantages. It is easy to perform; it supplies clinically relevant information that helps to avoid such events; and there is no need for money. It can be done everywhere, and the information is available in a short time. The other side of the coin is that error analysis is of course retrospective, it may not be objective, and, most important, it will probably have legal consequences. To be more effective in medicine and surgery we must handle our errors using a different approach. According to Sir Karl Popper: "The situation is that we have to learn from our errors. To cover up failure is therefore the biggest intellectual sin."
Sensitivity analysis of geometric errors in additive manufacturing medical models.
Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian
2015-03-01
Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Theoretical analysis on the probability of initiating persistent fission chain
International Nuclear Information System (INIS)
Liu Jianjun; Wang Zhe; Zhang Ben'ai
2005-01-01
For a finite multiplying system of fissile material in the presence of a weak neutron source, the authors analyse the probability of initiating a persistent fission chain by means of the stochastic theory of neutron multiplication. In the theoretical treatment, the conventional point-reactor model is developed into an improved form with position (x) and velocity (v) dependence. The estimated results, including an approximate value of the above-mentioned probability and its distribution, are given by means of the diffusion approximation and compared with those of the previous point-reactor model. They are basically consistent; however, the present model can provide details on the distribution. (authors)
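Stripped of the position and velocity dependence, the core point-model quantity can be sketched numerically: a chain either dies out or persists, and the extinction probability is the smallest fixed point of the offspring generating function. The neutron multiplicity distribution below is illustrative, not taken from the paper.

```python
# Point-model sketch (no position/velocity dependence): the probability P that
# a single source neutron initiates a persistent fission chain satisfies
#   1 - P = f(1 - P),   f(s) = sum_k p_k * s^k,
# where p_k is the probability that a neutron yields k next-generation neutrons.

def survival_probability(p, tol=1e-12, max_iter=10_000):
    """Solve q = f(q) for the extinction probability q by fixed-point
    iteration starting from q = 0 (this converges to the smallest root);
    return the chain-survival probability 1 - q."""
    q = 0.0
    for _ in range(max_iter):
        q_next = sum(pk * q**k for k, pk in enumerate(p))
        if abs(q_next - q) < tol:
            break
        q = q_next
    return 1.0 - q

# Illustrative distribution: a neutron is absorbed (0 offspring) with
# probability 0.4, or causes a fission yielding 2 neutrons with probability 0.6.
# Analytically the extinction probability is 0.4/0.6 = 2/3, so P = 1/3.
P = survival_probability([0.4, 0.0, 0.6])
print(round(P, 6))  # → 0.333333
```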
Error analysis of two methods for range-images registration
Liu, Xiaoli; Yin, Yongkai; Li, Ameng; He, Dong; Peng, Xiang
2010-08-01
With improvements in range-image registration techniques, this paper focuses on error analysis of two registration methods commonly applied in industrial metrology, covering algorithm comparison, matching error, computational complexity, and application areas. One method is iterative closest points (ICP), which can achieve accurate matching results with little error; however, some limitations restrict its use in automatic and fast metrology. The other method is based on landmarks. We also present an algorithm for registering multiple range images with non-coding landmarks, including automatic landmark identification and sub-pixel location, 3D rigid motion estimation, point-pattern matching, and global iterative optimization. The registration results of the two methods are illustrated and a thorough error analysis is performed.
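For the landmark-based method, the 3D rigid motion step is commonly solved with the SVD-based Kabsch/Procrustes algorithm; the sketch below assumes that choice (the abstract does not specify the exact solver) and checks it on synthetic landmarks.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid motion (R, t) aligning landmark set `src` to `dst`,
    via the SVD-based Kabsch/Procrustes solution -- one common choice for
    landmark registration; the paper's exact algorithm may differ."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: recover a known rotation about z and a translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_register(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [1.0, -2.0, 0.5]))  # True True
```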
Human error identification and analysis implicitly determined from LERs
International Nuclear Information System (INIS)
Luckas, W.J. Jr.; Speaker, D.M.
1983-01-01
As part of an ongoing effort to quantify human error using modified task analysis on Licensee Event Report (LER) system data, the initial results have been presented and documented in NUREG/CR-1880 and -2416. These results indicate the relatively important need for in-depth analysis of LERs to obtain a more realistic assessment of human-error-caused events than those explicitly identified in the LERs themselves.
Some remarks on the error analysis in the case of poor statistics
International Nuclear Information System (INIS)
Schmidt, K.H.; Sahm, C.C.; Pielenz, K.; Clerc, H.G.
1984-01-01
A prescription for the error analysis of experimental data in the case of stochastic background is formulated. Several relations are given which make it possible to establish the significance of mother-daughter relationships obtained from delayed coincidences. Both the probability that a cascade is produced randomly and the probability that the parameters of an observed event chain are incompatible with known properties of a given species are formulated. The expressions given are applicable even in cases of poor statistics, down to single events. (orig.)
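A minimal sketch of the first quantity, the probability that an observed decay chain is produced randomly by Poisson background, might look as follows. The rates and windows are hypothetical, and the published expressions include detector-specific refinements omitted here.

```python
import math

def expected_random_chains(n_start, rates, windows):
    """Expected number of event chains produced purely by random (Poisson)
    background: each of `n_start` candidate start events must be followed,
    within time window windows[i], by a background event of type i arriving
    at rate rates[i]. Illustrative formula in the spirit of the paper."""
    n = float(n_start)
    for r, dt in zip(rates, windows):
        n *= 1.0 - math.exp(-r * dt)   # P(at least one background event in window)
    return n

def prob_at_least_one_random_chain(n_start, rates, windows):
    """Poisson probability that one or more observed chains is purely random."""
    return 1.0 - math.exp(-expected_random_chains(n_start, rates, windows))

# Hypothetical example: 1000 implantation events, each followed by two decay
# windows of 10 s, with background rates of 1e-3 /s and 5e-4 /s.
p = prob_at_least_one_random_chain(1000, rates=[1e-3, 5e-4], windows=[10.0, 10.0])
print(round(p, 4))  # ≈ 0.048
```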
Analysis of Drop Call Probability in Well Established Cellular ...
African Journals Online (AJOL)
The adoption of technology in Africa has increased over the past decade. The growth of modern cellular networks requires stringent quality of service (QoS). Drop call probability is one of the most important indices of QoS evaluation in a large-scale, well-established cellular network. In this work we started from an accurate statistical ...
Ionospheric error analysis in gps measurements
Directory of Open Access Journals (Sweden)
G. Pugliano
2008-06-01
Full Text Available The results of an experiment aimed at evaluating the effects of the ionosphere on GPS positioning applications are presented in this paper. Specifically, the study, based upon a differential approach, was conducted utilizing GPS measurements acquired by various receivers located at increasing inter-distances. The experimental research was developed upon the basis of two groups of baselines: the first group comprises "short" baselines (less than 10 km); the second group is characterized by greater distances (up to 90 km). The obtained results were compared both on the basis of the geometric characteristics, for six different baseline lengths, using 24 hours of data, and on temporal variations, by examining two periods of differing ionospheric activity, respectively coinciding with the maximum of the 23rd solar cycle and with conditions of low ionospheric activity. The analysis revealed variations in terms of inter-distance as well as different performances primarily owing to temporal modifications in the state of the ionosphere.
Acceptance Probability (P a) Analysis for Process Validation Lifecycle Stages.
Alsmeyer, Daniel; Pazhayattil, Ajay; Chen, Shu; Munaretto, Francesco; Hye, Maksuda; Sanghvi, Pradeep
2016-04-01
This paper introduces an innovative statistical approach towards understanding how variation impacts the acceptance criteria of quality attributes. Because of more complex stage-wise acceptance criteria, traditional process capability measures are inadequate for general application in the pharmaceutical industry. The probability of acceptance concept provides a clear measure, derived from the specific acceptance criteria for each quality attribute. In line with the 2011 FDA Guidance, this approach systematically evaluates data and scientifically establishes evidence that a process is capable of consistently delivering quality product. The probability of acceptance provides a direct and readily understandable indication of product risk. As with traditional capability indices, the acceptance probability approach assumes that the underlying data distributions are normal. The computational solutions for dosage uniformity and dissolution acceptance criteria are readily applicable. For dosage uniformity, the expected AV range may be determined using the s_lo and s_hi values along with worst-case estimates of the mean. This approach permits a risk-based assessment of future batch performance of the critical quality attributes. The concept is also readily applicable to sterile/non-sterile liquid dose products. Quality attributes such as deliverable volume and assay per spray have stage-wise acceptance criteria that can be converted into an acceptance probability. Accepted statistical guidelines regard processes with Cpk > 1.33 as performing well within statistical control; a Cpk of 1.33 is associated with a centered process that will statistically produce fewer than 63 defective units per million, equivalent to an acceptance probability of >99.99%.
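Under the stated normality assumption, the acceptance probability for a single two-sided criterion reduces to a difference of normal CDFs, which also reproduces the quoted Cpk = 1.33 figure of roughly 63 defective units per million. A minimal sketch, not the paper's stage-wise computation:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def acceptance_probability(mean, sd, lsl, usl):
    """P(a single result falls inside [lsl, usl]) for a normal process --
    the basic building block behind stage-wise acceptance computations."""
    return norm_cdf((usl - mean) / sd) - norm_cdf((lsl - mean) / sd)

# Sanity check of the quoted figure: a centered process with Cpk = 1.33 has
# each specification limit 4 standard deviations from the mean, so the defect
# rate is 2 * Phi(-4), i.e. about 63 per million.
pa = acceptance_probability(mean=0.0, sd=1.0, lsl=-4.0, usl=4.0)
ppm_defective = (1.0 - pa) * 1e6
print(round(ppm_defective, 1))  # ≈ 63.3
```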
Ben Issaid, Chaouki
2017-07-28
When assessing the performance of free-space optical (FSO) communication systems, the outage probability encountered is generally very small, and the use of naive Monte Carlo simulations therefore becomes prohibitively expensive. To estimate these rare-event probabilities, we propose in this work an importance sampling approach based on the exponential twisting technique, which offers fast and accurate results. We consider a variety of turbulence regimes, and we investigate the outage probability of FSO communication systems, under a generalized pointing error model based on the Beckmann distribution, for both single-hop and multihop scenarios. Selected numerical simulations are presented to show the accuracy and the efficiency of our approach compared to naive Monte Carlo.
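The mechanics of exponential twisting can be illustrated on a toy problem: for a Gaussian, the twisted density is exactly a mean-shifted Gaussian. The sketch below estimates a tail probability of about 2.9e-7 with a few hundred thousand samples, where naive Monte Carlo would need billions; the FSO case applies the same idea to Beckmann/turbulence distributions rather than this toy normal model.

```python
import math
import random

def tail_prob_is(threshold, n=200_000, seed=1):
    """Estimate P(Z > threshold) for standard normal Z by importance sampling
    with an exponentially twisted (mean-shifted) proposal N(threshold, 1)."""
    rng = random.Random(seed)
    theta = threshold                      # twist parameter = mean shift
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(theta, 1.0)          # draw from the twisted density
        if x > threshold:
            # likelihood ratio phi(x)/phi_theta(x) = exp(theta^2/2 - theta*x)
            acc += math.exp(0.5 * theta * theta - theta * x)
    return acc / n

exact = 0.5 * math.erfc(5.0 / math.sqrt(2.0))  # P(Z > 5) ≈ 2.87e-7
est = tail_prob_is(5.0)
print(abs(est - exact) / exact < 0.05)  # small relative error at modest cost
```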
Formal Analysis of Soft Errors using Theorem Proving
Directory of Open Access Journals (Sweden)
Sofiène Tahar
2013-07-01
Full Text Available Modeling and analysis of soft errors in electronic circuits has traditionally been done using computer simulations. Computer simulations cannot guarantee correctness of analysis because they utilize approximate real number representations and pseudo-random numbers, and thus are not well suited for analyzing safety-critical applications. In this paper, we present a higher-order-logic theorem-proving-based method for modeling and analysis of soft errors in electronic circuits. Our developed infrastructure includes formalized continuous random variable pairs, their Cumulative Distribution Function (CDF) properties, and independent standard uniform and Gaussian random variables. We illustrate the usefulness of our approach by modeling and analyzing soft errors in commonly used dynamic random access memory sense amplifier circuits.
Cloud retrieval using infrared sounder data - Error analysis
Wielicki, B. A.; Coakley, J. A., Jr.
1981-01-01
An error analysis is presented for cloud-top pressure and cloud-amount retrieval using infrared sounder data. Rms and bias errors are determined for instrument noise (typical of the HIRS-2 instrument on Tiros-N) and for uncertainties in the temperature profiles and water vapor profiles used to estimate clear-sky radiances. Errors are determined for a range of test cloud amounts (0.1-1.0) and cloud-top pressures (920-100 mb). Rms errors vary by an order of magnitude depending on the cloud height and cloud amount within the satellite's field of view. Large bias errors are found for low-altitude clouds. These bias errors are shown to result from physical constraints placed on retrieved cloud properties, i.e., cloud amounts between 0.0 and 1.0 and cloud-top pressures between the ground and tropopause levels. Middle-level and high-level clouds (above 3-4 km) are retrieved with low bias and rms errors.
Application of human error analysis to aviation and space operations
Energy Technology Data Exchange (ETDEWEB)
Nelson, W.R.
1998-03-01
For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) the authors have been working to apply methods of human error analysis to the design of complex systems. They have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. They are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g., changes to system design or procedures) can be identified. The primary vehicle the authors have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. They are currently adapting their methods and tools of human error analysis to the domain of air traffic management (ATM) systems. Under the NASA-sponsored Advanced Air Traffic Technologies (AATT) program they are working to address issues of human reliability in the design of ATM systems to support the development of a free flight environment for commercial air traffic in the US. They are also currently testing the application of their human error analysis approach for space flight operations. They have developed a simplified model of the critical habitability functions for the space station Mir, and have used this model to assess the effects of system failures and human errors that have occurred in the wake of the collision incident last year. They are developing an approach so that lessons learned from Mir operations can be systematically applied to the design and operation of long-term space missions such as the International Space Station (ISS) and the manned Mars mission.
Bootstrap Standard Error Estimates in Dynamic Factor Analysis
Zhang, Guangjian; Browne, Michael W.
2010-01-01
Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…
Understanding Teamwork in Trauma Resuscitation through Analysis of Team Errors
Sarcevic, Aleksandra
2009-01-01
An analysis of human errors in complex work settings can lead to important insights into the workspace design. This type of analysis is particularly relevant to safety-critical, socio-technical systems that are highly dynamic, stressful and time-constrained, and where failures can result in catastrophic societal, economic or environmental…
Qualitative data and error measurement in input-output analysis
Nijkamp, P.; Oosterhaven, J.; Ouwersloot, H.; Rietveld, P.
1992-01-01
This paper is a contribution to the rapidly emerging field of qualitative data analysis in economics. Ordinal data techniques and error measurement in input-output analysis are here combined in order to test the reliability of a low level of measurement and precision of data by means of a stochastic
Error analysis of mechanical system and wavelength calibration of monochromator
Zhang, Fudong; Chen, Chen; Liu, Jie; Wang, Zhihong
2018-02-01
This study focuses on improving the accuracy of a grating monochromator on the basis of the grating diffraction equation in combination with an analysis of the mechanical transmission relationship between the grating, the sine bar, and the screw of the scanning mechanism. First, the relationship between the mechanical error in the monochromator with the sine drive and the wavelength error is analyzed. Second, a mathematical model of the wavelength error and mechanical error is developed, and an accurate wavelength calibration method based on the sine bar's length adjustment and error compensation is proposed. Based on the mathematical model and calibration method, experiments using a standard light source with known spectral lines and a pre-adjusted sine bar length are conducted. The model parameter equations are solved, and subsequent parameter optimization simulations are performed to determine the optimal length ratio. Lastly, the length of the sine bar is adjusted. The experimental results indicate that the wavelength accuracy is ±0.3 nm, which is better than the original accuracy of ±2.6 nm. The results confirm the validity of the error analysis of the mechanical system of the monochromator as well as the validity of the calibration method.
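A simplified version of the sine-drive model conveys why a sine-bar length error maps directly into wavelength error. The Littrow-type geometry and the numbers below are assumptions for illustration, not the paper's exact configuration:

```python
# Simplified sine-drive model (assumed geometry): the screw displacement x
# sets the grating angle through sin(theta) = x / L, and in a Littrow-type
# mounting the grating equation m*lambda = 2*d*sin(theta) makes the output
# wavelength linear in x:  lambda = 2*d*x / (m*L).

def wavelength_nm(x_mm, L_mm, d_nm, m=1):
    """Output wavelength for screw position x and sine-bar length L."""
    return 2.0 * d_nm * (x_mm / L_mm) / m

def wavelength_error_nm(x_mm, L_mm, dL_mm, d_nm, m=1):
    """Wavelength error caused by a sine-bar length error dL: since lambda
    is proportional to 1/L, d(lambda) ≈ -lambda * dL / L to first order."""
    lam = wavelength_nm(x_mm, L_mm, d_nm, m)
    return -lam * dL_mm / L_mm

# Illustrative numbers: 1200 lines/mm grating (d ≈ 833.3 nm), 100 mm sine bar;
# a 0.5 mm length error shifts a 500 nm line by about -2.5 nm.
lam = wavelength_nm(x_mm=30.0, L_mm=100.0, d_nm=833.3)
err = wavelength_error_nm(x_mm=30.0, L_mm=100.0, dL_mm=0.5, d_nm=833.3)
print(round(lam, 1), round(err, 2))  # → 500.0 -2.5
```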
Geometric error analysis for shuttle imaging spectrometer experiment
Wang, S. J.; Ih, C. H.
1984-01-01
The demand for more powerful tools for remote sensing and management of earth resources has steadily increased over the last decade. With the recent advancement of area array detectors, high-resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented for the purpose of providing information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.
Prive, N. C.; Errico, R. M.; Tai, K.-S.
2013-01-01
The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120-hour forecast, increased observation error yields only a slight decline in forecast skill in the extratropics, and no discernible degradation of forecast skill in the tropics.
Extensions of the space trajectories error analysis programs
Adams, G. L.; Bradt, A. J.; Peterson, F. M.
1971-01-01
A generalized covariance analysis technique which permits the study of the sensitivity of linear estimation algorithms to errors in a priori statistics has been developed and programmed. Several sample cases are presented to illustrate the use of this technique. Modifications to the Simulated Trajectories Error Analysis Program (STEAP) to enable targeting a multiprobe mission of the Planetary Explorer type are discussed, and the logic for the mini-probe targeting is presented. Finally, the initial phases of the conversion of the Viking mission Lander Trajectory Reconstruction (LTR) program for use on Venus missions are discussed. An integrator instability problem is discussed and a solution proposed.
Error Grid Analysis for Arterial Pressure Method Comparison Studies.
Saugel, Bernd; Grothe, Oliver; Nicklas, Julia Y
2018-04-01
The measurement of arterial pressure (AP) is a key component of hemodynamic monitoring. A variety of different innovative AP monitoring technologies became recently available. The decision to use these technologies must be based on their measurement performance in validation studies. These studies are AP method comparison studies comparing a new method ("test method") with a reference method. In these studies, different comparative statistical tests are used including correlation analysis, Bland-Altman analysis, and trending analysis. These tests provide information about the statistical agreement without adequately providing information about the clinical relevance of differences between the measurement methods. To overcome this problem, we, in this study, propose an "error grid analysis" for AP method comparison studies that allows illustrating the clinical relevance of measurement differences. We constructed smoothed consensus error grids with calibrated risk zones derived from a survey among 25 specialists in anesthesiology and intensive care medicine. Differences between measurements of the test and the reference method are classified into 5 risk levels ranging from "no risk" to "dangerous risk"; the classification depends on both the differences between the measurements and on the measurements themselves. Based on worked examples and data from the Multiparameter Intelligent Monitoring in Intensive Care II database, we show that the proposed error grids give information about the clinical relevance of AP measurement differences that cannot be obtained from Bland-Altman analysis. Our approach also offers a framework on how to adapt the error grid analysis for different clinical settings and patient populations.
A case of error disclosure: a communication privacy management analysis.
Petronio, Sandra; Helft, Paul R; Child, Jeffrey T
2013-12-01
To better understand the process of disclosing medical errors to patients, this research offers a case analysis using Petronio's theoretical frame of Communication Privacy Management (CPM). Given the resistance clinicians often feel about error disclosure, insights into the way clinicians make choices in telling patients about the mistake have the potential to address reasons for resistance. Applying the evidence-based CPM theory, developed over the last 35 years and dedicated to studying disclosure phenomena, to disclosing medical mistakes potentially has the ability to reshape thinking about the error disclosure process. Using a composite case representing a surgical mistake, an analysis based on CPM theory is offered to gain insights into the conversational routines and disclosure management choices involved in revealing a medical error. The results of this analysis show that an underlying assumption of health information ownership by the patient and family can be at odds with the way the clinician tends to control disclosure about the error. In addition, the case analysis illustrates that there are embedded patterns of disclosure that emerge out of conversations the clinician has with the patient and the patient's family members. These patterns unfold privacy management decisions on the part of the clinician that impact how the patient is told about the error and the way that patients interpret the meaning of the disclosure. These findings suggest the need for a better understanding of how patients manage their private health information in relationship to their expectations for the way they see the clinician caring for or controlling their health information about errors. Significance for public health: Much of the mission central to public health sits squarely on the ability to communicate effectively. This case analysis offers an in-depth assessment of how error disclosure is complicated by misunderstandings, assuming ownership and control over information
Unbiased bootstrap error estimation for linear discriminant analysis.
Vu, Thang; Sima, Chao; Braga-Neto, Ulisses M; Dougherty, Edward R
2014-12-01
Convex bootstrap error estimation is a popular tool for classifier error estimation in gene expression studies. A basic question is how to determine the weight for the convex combination between the basic bootstrap estimator and the resubstitution estimator such that the resulting estimator is unbiased at finite sample sizes. The well-known 0.632 bootstrap error estimator uses asymptotic arguments to propose a fixed 0.632 weight, whereas the more recent 0.632+ bootstrap error estimator attempts to set the weight adaptively. In this paper, we study the finite sample problem in the case of linear discriminant analysis under Gaussian populations. We derive exact expressions for the weight that guarantee unbiasedness of the convex bootstrap error estimator in the univariate and multivariate cases, without making asymptotic simplifications. Using exact computation in the univariate case and an accurate approximation in the multivariate case, we obtain the required weight and show that it can deviate significantly from the constant 0.632 weight, depending on the sample size and Bayes error for the problem. The methodology is illustrated by application on data from a well-known cancer classification study.
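The convex combination itself is straightforward; the paper's contribution is deriving the weight that makes it unbiased. A sketch with the classical fixed 0.632 weight and a basic two-class linear discriminant (the data and all names here are illustrative):

```python
import numpy as np

def lda_fit(X, y):
    """Fit a two-class linear discriminant with pooled covariance."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    X0c, X1c = X[y == 0] - m0, X[y == 1] - m1
    S = (X0c.T @ X0c + X1c.T @ X1c) / (len(X) - 2)   # pooled covariance
    w = np.linalg.solve(S, m1 - m0)
    b = -0.5 * w @ (m0 + m1)
    return w, b

def lda_err(w, b, X, y):
    """Misclassification rate of the discriminant (w, b) on (X, y)."""
    return float(np.mean((X @ w + b > 0).astype(int) != y))

def convex_bootstrap_error(X, y, weight=0.632, B=50, seed=0):
    """Convex combination of resubstitution and 'zero' bootstrap error.
    weight=0.632 is the classical fixed choice; the paper derives sample-size
    and Bayes-error dependent weights that make the estimator unbiased."""
    rng = np.random.default_rng(seed)
    n = len(y)
    resub = lda_err(*lda_fit(X, y), X, y)
    boot_errs = []
    for _ in range(B):
        idx = rng.integers(0, n, n)                   # bootstrap resample
        oob = np.setdiff1d(np.arange(n), idx)         # out-of-bag test points
        if oob.size == 0 or len(set(y[idx])) < 2:
            continue
        w, b = lda_fit(X[idx], y[idx])
        boot_errs.append(lda_err(w, b, X[oob], y[oob]))
    zero = float(np.mean(boot_errs))
    return (1 - weight) * resub + weight * zero

# Two Gaussian classes with unit covariance and separated means.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (25, 2)), rng.normal(1.5, 1.0, (25, 2))])
y = np.repeat([0, 1], 25)
est = convex_bootstrap_error(X, y)
print(round(est, 3))
```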
Doctors' duty to disclose error: a deontological or Kantian ethical analysis.
Bernstein, Mark; Brown, Barry
2004-05-01
Medical (surgical) error is being talked about more openly and, besides being the subject of retrospective reviews, is now the subject of prospective research. Disclosure of error has been a difficult issue because of fear of embarrassment for doctors in the eyes of their peers, and fear of punitive action by patients, consisting of medicolegal action and/or complaints to doctors' governing bodies. This paper examines physicians' and surgeons' duty to disclose error from an ethical standpoint, specifically by applying the moral philosophical theory espoused by Immanuel Kant (i.e., deontology). The purpose of this discourse is to apply moral philosophical analysis to a delicate but important issue which all physicians and surgeons will have to confront, probably numerous times, in their professional careers.
Al-Murad, Tamim M.
2011-07-01
Evaluating the reliability of wireless sensor networks is becoming more important as these networks are being used in crucial applications. The outage probability, defined as the probability that the error in the system exceeds a maximum acceptable threshold, has recently been used as a measure of the reliability of such systems. In this work we find the outage probability of a wireless sensor network in different scenarios of distributed sensing where sensors' readings are affected by spatial correlation and in the presence of channel fading. © 2011 IEEE.
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
International Nuclear Information System (INIS)
Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro
2016-01-01
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy.
Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro
2016-09-01
The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Balanced data according to the one-factor random effect model were assumed. Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
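The ANOVA computation described above is short enough to sketch directly: for balanced data, the within-patient mean square estimates the random variance, and the between-patient mean square, corrected and scaled, estimates the systematic variance. The simulated numbers below are illustrative.

```python
import numpy as np

def variance_components(data):
    """One-factor random-effect model for setup errors: data[i, j] is the
    setup error of patient i at fraction j (balanced design, n fractions per
    patient). Returns ANOVA estimates of the population mean and of the
    systematic (between-patient) and random (within-patient) SDs."""
    a, n = data.shape
    grand = data.mean()
    patient_means = data.mean(axis=1)
    msb = n * np.sum((patient_means - grand) ** 2) / (a - 1)            # between MS
    msw = np.sum((data - patient_means[:, None]) ** 2) / (a * (n - 1))  # within MS
    sigma_random = np.sqrt(msw)
    # E[MSB] = sigma_w^2 + n * sigma_b^2, so solve for the between component.
    sigma_systematic = np.sqrt(max(msb - msw, 0.0) / n)
    return grand, sigma_systematic, sigma_random

# Simulated check: true systematic SD 2.0 mm, true random SD 1.0 mm.
rng = np.random.default_rng(42)
patients, fractions = 200, 30
offsets = rng.normal(0.0, 2.0, size=(patients, 1))        # per-patient systematic
data = offsets + rng.normal(0.0, 1.0, size=(patients, fractions))
mean, s_sys, s_rand = variance_components(data)
print(round(s_sys, 2), round(s_rand, 2))  # close to the true 2.0 and 1.0
```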
BETASCAN: probable beta-amyloids identified by pairwise probabilistic analysis.
Directory of Open Access Journals (Sweden)
Allen W Bryan
2009-03-01
Full Text Available Amyloids and prion proteins are clinically and biologically important beta-structures, whose supersecondary structures are difficult to determine by standard experimental or computational means. In addition, significant conformational heterogeneity is known or suspected to exist in many amyloid fibrils. Recent work has indicated the utility of pairwise probabilistic statistics in beta-structure prediction. We develop here a new strategy for beta-structure prediction, emphasizing the determination of beta-strands and pairs of beta-strands as fundamental units of beta-structure. Our program, BETASCAN, calculates likelihood scores for potential beta-strands and strand-pairs based on correlations observed in parallel beta-sheets. The program then determines the strands and pairs with the greatest local likelihood for all of the sequence's potential beta-structures. BETASCAN suggests multiple alternate folding patterns and assigns relative a priori probabilities based solely on amino acid sequence, probability tables, and pre-chosen parameters. The algorithm compares favorably with the results of previous algorithms (BETAPRO, PASTA, SALSA, TANGO, and Zyggregator) in beta-structure prediction and amyloid propensity prediction. Accurate prediction is demonstrated for experimentally determined amyloid beta-structures, for a set of known beta-aggregates, and for the parallel beta-strands of beta-helices, amyloid-like globular proteins. BETASCAN is able both to detect beta-strands with higher sensitivity and to detect the edges of beta-strands in a richly beta-like sequence. For two proteins (Abeta and Het-s), there exist multiple sets of experimental data implying contradictory structures; BETASCAN is able to detect each competing structure as a potential structure variant. The ability to correlate multiple alternate beta-structures to experiment opens the possibility of computational investigation of prion strains and structural heterogeneity of amyloid
Error Analysis Of Clock Time (T), Declination (δ) And Latitude ...
African Journals Online (AJOL)
), latitude (Φ), longitude (λ), and azimuth (A), which are aimed at establishing fixed positions and orientations of survey points and lines on the earth's surface. The paper attempts an analysis of the individual and combined effects of error in time ...
Measurement Error, Education Production and Data Envelopment Analysis
Ruggiero, John
2006-01-01
Data Envelopment Analysis has become a popular tool for evaluating the efficiency of decision making units. The nonparametric approach has been widely applied to educational production. The approach is, however, deterministic and leads to biased estimates of performance in the presence of measurement error. Numerous simulation studies confirm the…
Error analysis and bounds in time delay estimation
Czech Academy of Sciences Publication Activity Database
Pánek, Petr
2007-01-01
Roč. 55, 7 Part I (2007), s. 3547-3549 ISSN 1053-587X Institutional research plan: CEZ:AV0Z20670512 Keywords : time measurement * delay estimation * error analysis Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 1.640, year: 2007
Analysis of possible systematic errors in the Oslo method
International Nuclear Information System (INIS)
Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.
2011-01-01
In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.
L'analyse des erreurs. Problemes et perspectives (Error Analysis. Problems and Perspectives)
Porquier, Remy
1977-01-01
Summarizes the usefulness and the disadvantage of error analysis, and discusses a reorientation of error analysis, specifically regarding grammar instruction and the significance of errors. (Text is in French.) (AM)
Directory of Open Access Journals (Sweden)
Edgar Romo-Montiel
2016-01-01
Full Text Available Wireless sensor networks are composed of a large number of autonomous nodes that monitor some environmental parameter of interest, such as temperature, humidity, or even mobile targets. This work focuses on the detection of mobile targets in wide areas, such as the surveillance of animals in a forest or the detection of vehicles in security missions. Specifically, a low-energy-consumption clustering protocol is proposed, analyzed, and studied. To this end, two communication schemes based on the well-known LEACH protocol are presented. System performance is studied by means of a mathematical model that describes the behaviour of the network in terms of its most relevant parameters: coverage radius, transmission radius, and number of nodes in the network. In addition, the transmission probability in the cluster-formation phase is studied under realistic wireless-channel conditions, in which signal detection suffers errors due to interference and noise on the access channel
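The cluster-head election used by LEACH-style protocols can be sketched as below. The fragment implements the standard LEACH threshold T(n) = P / (1 - P * (r mod 1/P)) for nodes that have not yet served as head in the current epoch; the population size and P = 0.05 are illustrative choices, and channel errors are ignored in this sketch:

```python
import random

def leach_round(nodes, p, r, rng):
    """One round of LEACH cluster-head election.
    nodes maps node_id -> True if it already served as head this epoch."""
    period = int(1 / p)
    if r % period == 0:            # new epoch: every node eligible again
        for n in nodes:
            nodes[n] = False
    threshold = p / (1 - p * (r % period))
    heads = [n for n, used in nodes.items()
             if not used and rng.random() < threshold]
    for n in heads:
        nodes[n] = True
    return heads

rng = random.Random(42)
nodes = {i: False for i in range(100)}
p = 0.05
heads_per_round = [len(leach_round(nodes, p, r, rng)) for r in range(20)]
```

Because the threshold reaches 1 in the last round of each epoch, every node serves as head exactly once per epoch, which is the rotation property that spreads the energy cost across the network.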
Analysis of soil moisture probability in a tree cropped watershed
Espejo-Perez, Antonio Jesus; Giraldez Cervera, Juan Vicente; Pedrera, Aura; Vanderlinden, Karl
2015-04-01
Probability density functions (pdfs) of soil moisture were estimated for an experimental watershed in Southern Spain, cropped with olive trees. Measurements were made using a network of capacitance sensors from June 2011 until May 2013. The network consisted of 22 profiles of sensors, installed close to the tree trunk under the canopy and in the adjacent inter-row area, at 11 locations across the watershed, to assess the influence of rain interception and root-water uptake on the soil moisture distribution. A bimodal pdf described the moisture dynamics at the 11 sites, both under and in-between the trees. Each mode represented the moisture status during either the dry or the wet period of the year. The observed histograms could be decomposed into a Lognormal pdf for the dry period and a Gaussian pdf for the wet period. The pdfs showed a larger variation among the different locations at inter-row positions, as compared to under the canopy, reflecting the strict control of the vegetation on soil moisture. At both positions this variability was smaller during the wet season than during the dry period.
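The decomposition described above can be sketched as a two-component mixture density. The weights and parameters below are invented placeholders (the study fits them per site), chosen only so that the dry-season Lognormal mode sits near a soil moisture of about 0.08 and the wet-season Gaussian mode near 0.30:

```python
import math

def lognorm_pdf(x, mu, sigma):
    # Lognormal density (dry-season component)
    if x <= 0:
        return 0.0
    z = (math.log(x) - mu) / sigma
    return math.exp(-0.5 * z * z) / (x * sigma * math.sqrt(2 * math.pi))

def norm_pdf(x, mu, sigma):
    # Gaussian density (wet-season component)
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def soil_moisture_pdf(theta, w_dry=0.5):
    """Bimodal mixture: Lognormal for the dry season, Gaussian for the wet.
    All parameter values are illustrative, not fitted."""
    return (w_dry * lognorm_pdf(theta, math.log(0.08), 0.3)
            + (1 - w_dry) * norm_pdf(theta, 0.30, 0.04))

# midpoint-rule check that the mixture integrates to ~1 on [0, 1]
total = sum(soil_moisture_pdf(0.0005 + 0.001 * i) for i in range(1000)) * 0.001
```

Evaluating the mixture at the two modes and in the valley between them shows the bimodal shape that a single Gaussian or Lognormal could not capture.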
Directory of Open Access Journals (Sweden)
Ranauli Sihombing
2016-12-01
Full Text Available Error analysis has become one of the most interesting issues in the study of Second Language Acquisition. It cannot be denied that some teachers do not know much about error analysis and the related theories of how an L1, L2 or foreign language is acquired. In addition, students often feel upset when they find a gap between themselves and their teachers regarding the errors the students make and the teachers' understanding of error correction. The present research aims to investigate what errors adult English learners make in written production of English. The significance of the study is to identify the errors students make in writing so that teachers can find solutions to them, for better English language teaching and learning, especially in teaching English for adults. The study employed a qualitative method. The research was undertaken at an airline education center in Bandung. The results showed that syntax errors are more frequent than morphology errors, especially verb phrase errors. It is recommended that teachers know the theory of second language acquisition in order to understand how students learn and produce their language. In addition, it will be advantageous for teachers to know what errors students frequently make in their learning, so that they can guide students toward better English language learning achievement. DOI: https://doi.org/10.24071/llt.2015.180205
Predicting positional error of MLC using volumetric analysis
International Nuclear Information System (INIS)
Hareram, E.S.
2008-01-01
IMRT normally uses multiple beamlets (small beam widths) for a particular field, so it is imperative to maintain the positional accuracy of the MLC in order to deliver the integrated computed dose accurately. Different manufacturers have reported high precision for MLC devices, with leaf positional accuracy nearing 0.1 mm, but measuring and rectifying errors at this level of accuracy is very difficult. Various methods are used to check MLC position, and among these, volumetric analysis is one technique. A volumetric approach was adopted in our method using a Primus machine and a 0.6 cc chamber at 5 cm depth in Perspex. A 1 mm MLC error introduces a dose error of 20%, making this method more sensitive than others.
An Error Analysis of Structured Light Scanning of Biological Tissue
DEFF Research Database (Denmark)
Jensen, Sebastian Hoppe Nesgaard; Wilm, Jakob; Aanæs, Henrik
2017-01-01
This paper presents an error analysis and correction model for four structured light methods applied to three common types of biological tissue: skin, fat and muscle. Despite its many advantages, structured light is based on the assumption of direct reflection at the object surface only. This assumption is violated by most biological material, e.g. human skin, which exhibits subsurface scattering. In this study, we find that in general, structured light scans of biological tissue deviate significantly from the ground truth. We show that a large portion of this error can be predicted with a simple, statistical linear model based on the scan geometry. As such, scans can be corrected without introducing any specially designed pattern strategy or hardware. We can effectively reduce the error in a structured light scanner applied to biological tissue by as much as a factor of two or three.
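The fit-then-subtract structure of such a statistical linear correction can be sketched as follows. The scan-geometry predictor `g`, its coefficients, and the noise-free synthetic data are all fabricated for illustration; only the overall structure mirrors the approach described:

```python
# Synthetic scan-geometry predictor and a depth error that depends on it linearly
g = [0.1 * i for i in range(20)]
err = [0.30 + 0.8 * gi for gi in g]

# Ordinary least squares fit err = a + b*g (closed form for one predictor)
n = len(g)
sx, sy = sum(g), sum(err)
sxx = sum(x * x for x in g)
sxy = sum(x * y for x, y in zip(g, err))
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

# Correction: subtract the model's predicted error from each scan point
corrected = [e - (a + b * x) for x, e in zip(g, err)]
max_residual = max(abs(c) for c in corrected)
```

On real scans the residual would of course not vanish; the point of the paper's result is that a large fraction of the deviation is captured by a model this simple.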
Risk analysis: assessing uncertainties beyond expected values and probabilities
National Research Council Canada - National Science Library
Aven, T. (Terje)
2008-01-01
... 3.2 Selection of analysis method; 3.2.1 Checklist-based approach; 3.2.2 Risk-based approach; 4 The risk analysis process: risk a...
Directory of Open Access Journals (Sweden)
Rekaya R
2016-11-01
Full Text Available Romdhane Rekaya,1–3 Shannon Smith,4 El Hamidi Hay,5 Nourhene Farhat,6 Samuel E Aggrey3,7 1Department of Animal and Dairy Science, College of Agricultural and Environmental Sciences, 2Department of Statistics, Franklin College of Arts and Sciences, 3Institute of Bioinformatics, The University of Georgia, Athens, GA, 4Zoetis, Kalamazoo, MI, 5United States Department of Agriculture, Agricultural Research Service, Beltsville, MD, 6Carolinas HealthCare System Blue Ridge, Morganton, NC, 7Department of Poultry Science, College of Agricultural and Environmental Sciences, University of Georgia, Athens, GA, USA Abstract: Errors in the binary status of some response traits are frequent in human, animal, and plant applications. These error rates tend to differ between cases and controls because diagnostic and screening tests have different sensitivity and specificity. This increases the inaccuracies of classifying individuals into correct groups, giving rise to both false-positive and false-negative cases. The analysis of these noisy binary responses due to misclassification will undoubtedly reduce the statistical power of genome-wide association studies (GWAS. A threshold model that accommodates varying diagnostic errors between cases and controls was investigated. A simulation study was carried out where several binary data sets (case–control were generated with varying effects for the most influential single nucleotide polymorphisms (SNPs and different diagnostic error rate for cases and controls. Each simulated data set consisted of 2000 individuals. Ignoring misclassification resulted in biased estimates of true influential SNP effects and inflated estimates for true noninfluential markers. A substantial reduction in bias and increase in accuracy ranging from 12% to 32% was observed when the misclassification procedure was invoked. In fact, the majority of influential SNPs that were not identified using the noisy data were captured using the
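The asymmetric diagnostic-error setup described in this abstract can be sketched with a small simulation. The sensitivity of 0.90 and specificity of 0.95 below are illustrative stand-ins, not values from the study:

```python
import random

def observe(true_status, sens, spec, rng):
    """Apply an imperfect diagnostic test to a true binary status.
    Cases are detected with probability `sens`, controls are
    correctly cleared with probability `spec`."""
    if true_status == 1:
        return 1 if rng.random() < sens else 0  # miss -> false negative
    return 0 if rng.random() < spec else 1      # alarm -> false positive

rng = random.Random(1)
true = [1] * 1000 + [0] * 1000
obs = [observe(t, sens=0.90, spec=0.95, rng=rng) for t in true]

fn_rate = sum(1 for t, o in zip(true, obs) if t == 1 and o == 0) / 1000
fp_rate = sum(1 for t, o in zip(true, obs) if t == 0 and o == 1) / 1000
```

Because sensitivity and specificity differ, the false-negative and false-positive rates differ between cases and controls, which is exactly the asymmetry the threshold model in the abstract is built to accommodate.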
Consolidity analysis for fully fuzzy functions, matrices, probability and statistics
Directory of Open Access Journals (Sweden)
Walaa Ibrahim Gabr
2015-03-01
Full Text Available The paper presents a comprehensive review of the know-how for developing the systems consolidity theory for modeling, analysis, optimization and design in fully fuzzy environment. The solving of systems consolidity theory included its development for handling new functions of different dimensionalities, fuzzy analytic geometry, fuzzy vector analysis, functions of fuzzy complex variables, ordinary differentiation of fuzzy functions and partial fraction of fuzzy polynomials. On the other hand, the handling of fuzzy matrices covered determinants of fuzzy matrices, the eigenvalues of fuzzy matrices, and solving least-squares fuzzy linear equations. The approach demonstrated to be also applicable in a systematic way in handling new fuzzy probabilistic and statistical problems. This included extending the conventional probabilistic and statistical analysis for handling fuzzy random data. Application also covered the consolidity of fuzzy optimization problems. Various numerical examples solved have demonstrated that the new consolidity concept is highly effective in solving in a compact form the propagation of fuzziness in linear, nonlinear, multivariable and dynamic problems with different types of complexities. Finally, it is demonstrated that the implementation of the suggested fuzzy mathematics can be easily embedded within normal mathematics through building special fuzzy functions library inside the computational Matlab Toolbox or using other similar software languages.
Snow, L. S.; Kuhn, A. E.
1975-01-01
Previous error analyses conducted by the Guidance and Dynamics Branch of NASA have used the Guidance Analysis Program (GAP) as the trajectory simulation tool. Plans are made to conduct all future error analyses using the Space Vehicle Dynamics Simulation (SVDS) program. A study was conducted to compare the inertial measurement unit (IMU) error simulations of the two programs. Results of the GAP/SVDS comparison are presented and problem areas encountered while attempting to simulate IMU errors, vehicle performance uncertainties and environmental uncertainties using SVDS are defined. An evaluation of the SVDS linear error analysis capability is also included.
Risk prediction, safety analysis and quantitative probability methods - a caveat
International Nuclear Information System (INIS)
Critchley, O.H.
1976-01-01
Views are expressed on the use of quantitative techniques for the determination of value judgements in nuclear safety assessments, hazard evaluation, and risk prediction. Caution is urged when attempts are made to quantify value judgements in the field of nuclear safety. Criteria are given for the meaningful application of reliability methods, but doubts are expressed about their application to safety analysis, risk prediction and design guidance for experimental or prototype plant. Doubts are also expressed about some concomitant methods of population dose evaluation. The complexities of new designs of nuclear power plants make the problem of safety assessment more difficult, but some possible approaches are suggested as alternatives to the quantitative techniques criticized. (U.K.)
Acquisition of case in Lithuanian as L2: Error analysis
Directory of Open Access Journals (Sweden)
Laura Cubajevaite
2009-05-01
Full Text Available Although teaching Lithuanian as a foreign language is not a new subject, there has not been much research in this field. The paper presents a study based on an analysis of grammatical errors which was carried out at Vytautas Magnus University. The data was selected randomly by analysing written assignments of beginner to advanced level students.DOI: http://dx.doi.org/10.5128/ERYa5.04
Detecting errors in micro and trace analysis by using statistics
DEFF Research Database (Denmark)
Heydorn, K.
1993-01-01
By assigning a standard deviation to each step in an analytical method it is possible to predict the standard deviation of each analytical result obtained by this method. If the actual variability of replicate analytical results agrees with the expected, the analytical method is said to be in statistical control. The approach was applied to results for chlorine in freshwater from BCR certification analyses by highly competent analytical laboratories in the EC. Titration showed systematic errors of several percent, while radiochemical neutron activation analysis produced results without detectable bias.
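The prediction-versus-observation comparison can be sketched with the usual chi-square style statistic, T = sum of squared deviations divided by the predicted variance, referred to a chi-square distribution with n-1 degrees of freedom. The replicate values and the predicted standard deviation below are made up for illustration:

```python
def t_statistic(results, predicted_sd):
    """T = sum((x_i - mean)^2) / predicted_sd^2; under statistical
    control T follows chi-square with len(results) - 1 d.f."""
    mean = sum(results) / len(results)
    return sum((x - mean) ** 2 for x in results) / predicted_sd ** 2

# ten hypothetical replicate determinations and a predicted sd of 0.02
replicates = [0.98, 1.01, 1.00, 0.99, 1.02, 1.00, 0.97, 1.01, 1.00, 1.02]
T = t_statistic(replicates, predicted_sd=0.02)

# The 97.5% point of chi-square with 9 d.f. is about 19.02; exceeding it
# would signal an unaccounted source of variability (loss of control).
in_control = T < 19.02
```

A systematic error of the kind found for titration would show up as replicate scatter (or bias against a reference) incompatible with the predicted standard deviation.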
Analysis of Error Propagation Within Hierarchical Air Combat Models
2016-06-01
of the factors (variables), the other variables were fixed at their baseline levels. The red dots with the standard deviation error bars represent ... conducted an analysis to determine if the means and variances of MOEs of interest were statistically different by experimental design (Pav, 2015). To do ... summarized data. In the summarized data set, we summarize each Design Point (DP) by its mean and standard deviation, over the stochastic replications. The
Radiological error: analysis, standard setting, targeted instruction and teamworking
International Nuclear Information System (INIS)
FitzGerald, Richard
2005-01-01
Diagnostic radiology does not have objective benchmarks for acceptable levels of missed diagnoses [1]. Until now, data collection of radiological discrepancies has been very time consuming. The culture within the specialty did not encourage it. However, public concern about patient safety is increasing. There have been recent innovations in compiling radiological interpretive discrepancy rates which may facilitate radiological standard setting. However standard setting alone will not optimise radiologists' performance or patient safety. We must use these new techniques in radiological discrepancy detection to stimulate greater knowledge sharing, targeted instruction and teamworking among radiologists. Not all radiological discrepancies are errors. Radiological discrepancy programmes must not be abused as an instrument for discrediting individual radiologists. Discrepancy rates must not be distorted as a weapon in turf battles. Radiological errors may be due to many causes and are often multifactorial. A systems approach to radiological error is required. Meaningful analysis of radiological discrepancies and errors is challenging. Valid standard setting will take time. Meanwhile, we need to develop top-up training, mentoring and rehabilitation programmes. (orig.)
Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis
Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl
2009-01-01
The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.
Error analysis of compensation cutting technique for wavefront error of KH2PO4 crystal.
Tie, Guipeng; Dai, Yifan; Guan, Chaoliang; Zhu, Dengchao; Song, Bing
2013-09-20
Considering the wavefront error of KH(2)PO(4) (KDP) crystal is difficult to control through face fly cutting process because of surface shape deformation during vacuum suction, an error compensation technique based on a spiral turning method is put forward. An in situ measurement device is applied to measure the deformed surface shape after vacuum suction, and the initial surface figure error, which is obtained off-line, is added to the in situ surface shape to obtain the final surface figure to be compensated. Then a three-axis servo technique is utilized to cut the final surface shape. In traditional cutting processes, in addition to common error sources such as the error in the straightness of guide ways, spindle rotation error, and error caused by ambient environment variance, three other errors, the in situ measurement error, position deviation error, and servo-following error, are the main sources affecting compensation accuracy. This paper discusses the effect of these three errors on compensation accuracy and provides strategies to improve the final surface quality. Experimental verification was carried out on one piece of KDP crystal with the size of Φ270 mm×11 mm. After one compensation process, the peak-to-valley value of the transmitted wavefront error dropped from 1.9λ (λ=632.8 nm) to approximately 1/3λ, and the mid-spatial-frequency error does not become worse when the frequency of the cutting tool trajectory is controlled by use of a low-pass filter.
Landmarking the brain for geometric morphometric analysis: an error study.
Directory of Open Access Journals (Sweden)
Madeleine B Chollet
Full Text Available Neuroanatomic phenotypes are often assessed using volumetric analysis. Although powerful and versatile, this approach is limited in that it is unable to quantify changes in shape, to describe how regions are interrelated, or to determine whether changes in size are global or local. Statistical shape analysis using coordinate data from biologically relevant landmarks is the preferred method for testing these aspects of phenotype. To date, approximately fifty landmarks have been used to study brain shape. Of the studies that have used landmark-based statistical shape analysis of the brain, most have not published protocols for landmark identification or the results of reliability studies on these landmarks. The primary aims of this study were two-fold: (1) to collaboratively develop detailed data collection protocols for a set of brain landmarks, and (2) to complete an intra- and inter-observer validation study of the set of landmarks. Detailed protocols were developed for 29 cortical and subcortical landmarks using a sample of 10 boys aged 12 years old. Average intra-observer error for the final set of landmarks was 1.9 mm with a range of 0.72 mm-5.6 mm. Average inter-observer error was 1.1 mm with a range of 0.40 mm-3.4 mm. This study successfully establishes landmark protocols with a minimal level of error that can be used by other researchers in the assessment of neuroanatomic phenotypes.
Human reliability analysis of errors of commission: a review of methods and applications
International Nuclear Information System (INIS)
Reer, B.
2007-06-01
shortcomings in context identification and evaluation, c) providing concise and effective guidance for the identification of adverse contexts, d) providing reference (or anchor) cases to support context-specific EOC probability assessment and thus to avoid the analyst's need to make direct probability judgments, e) addressing cognitive demands and tendencies, f) applying a simple discrete scale on the correlation between qualitative findings and error probabilities, g) using screening values for initial quantification, and h) aiming at data-based EOC probabilities by means of advanced event analysis techniques. Further development work should be carried out in close connection with large-scale applications of existing EOC HRA approaches. (author)
Critical slowing down and error analysis in lattice QCD simulations
Energy Technology Data Exchange (ETDEWEB)
Virotta, Francesco
2012-02-21
In this work we investigate the critical slowing down of lattice QCD simulations. We perform a preliminary study in the quenched approximation where we find that our estimate of the exponential auto-correlation time scales as τ_exp(a) ∝ a^(-5), where a is the lattice spacing. In unquenched simulations with O(a) improved Wilson fermions we do not obtain a scaling law but find results compatible with the behavior that we find in the pure gauge theory. The discussion is supported by a large set of ensembles both in pure gauge and in the theory with two degenerate sea quarks. We have moreover investigated the effect of slow algorithmic modes in the error analysis of the expectation value of typical lattice QCD observables (hadronic matrix elements and masses). In the context of simulations affected by slow modes we propose and test a method to obtain reliable estimates of statistical errors. The method is supposed to help in the typical algorithmic setup of lattice QCD, namely when the total statistics collected is of O(10)·τ_exp. This is the typical case when simulating close to the continuum limit where the computational costs for producing two independent data points can be extremely large. We finally discuss the scale setting in N_f = 2 simulations using the Kaon decay constant f_K as physical input. The method is explained together with a thorough discussion of the error analysis employed. A description of the publicly available code used for the error analysis is included.
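The role of autocorrelation in error analysis can be sketched on a synthetic AR(1) chain rather than lattice data; the chain length, the coupling ρ = 0.9, and the summation window are arbitrary illustrative choices. For an AR(1) process the exact integrated autocorrelation time is 0.5·(1+ρ)/(1−ρ) = 9.5, and the naive error bar must be inflated by the square root of 2·τ_int:

```python
import random

def tau_int(x, window=100):
    """Integrated autocorrelation time, summed over a fixed window."""
    n = len(x)
    mean = sum(x) / n
    var = sum((xi - mean) ** 2 for xi in x) / n
    tau = 0.5
    for t in range(1, window):
        cov = sum((x[i] - mean) * (x[i + t] - mean)
                  for i in range(n - t)) / (n - t)
        tau += cov / var
    return tau

# synthetic AR(1) chain standing in for a Monte Carlo history
rng = random.Random(3)
rho = 0.9
x = [rng.gauss(0.0, 1.0)]
for _ in range(20000):
    x.append(rho * x[-1] + (1 - rho ** 2) ** 0.5 * rng.gauss(0.0, 1.0))

tau = tau_int(x)
n = len(x)
mean_x = sum(x) / n
var = sum((xi - mean_x) ** 2 for xi in x) / n
naive_error = (var / n) ** 0.5             # ignores autocorrelation
true_error = (2.0 * tau * var / n) ** 0.5  # inflated by sqrt(2*tau_int)
```

When the total statistics is only a few exponential autocorrelation times, the window-summed estimator itself becomes unreliable, which is the regime the abstract's proposed method targets.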
Optimal alpha reduces error rates in gene expression studies: a meta-analysis approach.
Mudge, J F; Martyniuk, C J; Houlahan, J E
2017-06-21
Transcriptomic approaches (microarray and RNA-seq) have been a tremendous advance for molecular science in all disciplines, but they have made interpretation of hypothesis testing more difficult because of the large number of comparisons that are done within an experiment. The result has been a proliferation of techniques aimed at solving the multiple comparisons problem, techniques that have focused primarily on minimizing Type I error with little or no concern about concomitant increases in Type II errors. We have previously proposed a novel approach for setting statistical thresholds with applications for high throughput omics-data, optimal α, which minimizes the probability of making either error (i.e. Type I or II) and eliminates the need for post-hoc adjustments. A meta-analysis of 242 microarray studies extracted from the peer-reviewed literature found that current practices for setting statistical thresholds led to very high Type II error rates. Further, we demonstrate that applying the optimal α approach results in error rates as low or lower than error rates obtained when using (i) no post-hoc adjustment, (ii) a Bonferroni adjustment and (iii) a false discovery rate (FDR) adjustment which is widely used in transcriptome studies. We conclude that optimal α can reduce error rates associated with transcripts in both microarray and RNA-seq experiments, but point out that improved statistical techniques alone cannot solve the problems associated with high throughput datasets - these approaches need to be coupled with improved experimental design that considers larger sample sizes and/or greater study replication.
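The optimal-α idea can be sketched for a one-sided z-test: pick the α that minimizes the combined error ω = α + β rather than fixing α = 0.05. The effect size and sample size below are arbitrary; for a shift of 0.5 SD with n = 25, the continuous optimum lies near α ≈ 0.106, where the two error probabilities balance:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_quantile(p):
    """Inverse of phi by bisection (p strictly between 0 and 1)."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def combined_error(alpha, effect, n):
    """alpha + beta for a one-sided z-test of a mean shift (in SD units)."""
    beta = phi(z_quantile(1.0 - alpha) - effect * math.sqrt(n))
    return alpha + beta

effect, n = 0.5, 25
alphas = [i / 1000.0 for i in range(1, 500)]
best = min(alphas, key=lambda a: combined_error(a, effect, n))
```

Note that the conventional α = 0.05 gives a larger combined error here than the optimum, which is the abstract's central point about Type II error rates under current practice.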
Analytical sensitivity analysis of geometric errors in a three axis machine tool
International Nuclear Information System (INIS)
Park, Sung Ryung; Yang, Seung Han
2012-01-01
In this paper, an analytical method is used to perform a sensitivity analysis of geometric errors in a three axis machine tool. First, an error synthesis model is constructed for evaluating the position volumetric error due to the geometric errors, and then an output variable is defined, such as the magnitude of the position volumetric error. Next, the global sensitivity analysis is executed using an analytical method. Finally, the sensitivity indices are calculated using the quantitative values of the geometric errors
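A toy version of such a sensitivity analysis is shown below. The error-synthesis model is deliberately simplified (three linear positioning errors plus two squareness errors evaluated at a single probe point; the paper's model is far more complete), and all magnitudes are invented:

```python
import math

def volumetric_error(errs, x=300.0, y=200.0, z=100.0):
    """Toy error-synthesis model at one probe point (mm): three linear
    positioning errors plus two squareness errors whose effect scales
    with the travelled distance along the relevant axis."""
    dx, dy, dz, sxy, sxz = errs
    ex = dx + sxy * y + sxz * z   # squareness errors couple into X
    ey = dy
    ez = dz
    return math.sqrt(ex * ex + ey * ey + ez * ez)

names = ["dx", "dy", "dz", "sxy", "sxz"]
# perturbation sizes: 1 um for linear errors, 1 urad for squareness
delta = {"dx": 1e-3, "dy": 1e-3, "dz": 1e-3, "sxy": 1e-6, "sxz": 1e-6}

sens = {}
for i, nm in enumerate(names):
    e = [0.0] * 5
    e[i] = delta[nm]
    sens[nm] = volumetric_error(e) / delta[nm]  # output change per unit error
```

Even this crude one-at-a-time version reproduces the qualitative conclusion of sensitivity studies on machine tools: angular and squareness errors, amplified by the travel distances, dominate the purely linear terms.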
Analysis of the “naming game” with learning errors in communications
Yang Lou; Guanrong Chen
2015-01-01
The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is ...
Error analysis of short term wind power prediction models
International Nuclear Information System (INIS)
De Giorgi, Maria Grazia; Ficarella, Antonio; Tarantino, Marco
2011-01-01
The integration of wind farms in power networks has become an important problem. This is because the electricity produced cannot be stored, given the high cost of storage, and electricity production must follow market demand. Short- to long-range wind forecasting over different periods of time is becoming an important process for the management of wind farms. Time series modelling of wind speeds is based upon the valid assumption that all the causative factors are implicitly accounted for in the sequence of occurrence of the process itself. Hence time series modelling is equivalent to physical modelling. Auto Regressive Moving Average (ARMA) models, which perform a linear mapping between inputs and outputs, and Artificial Neural Networks (ANNs) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS), which perform a non-linear mapping, provide a robust approach to wind power prediction. In this work, these models are developed in order to forecast power production of a wind farm with three wind turbines, using real load data and comparing different time prediction periods. This comparative analysis considers, for the first time, various forecasting methods and time horizons together with a deep performance analysis focused upon the normalised mean error and its statistical distribution, in order to identify forecasting methods for which the error distribution lies within a narrower curve and prediction errors are therefore less probable. (author)
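The normalised mean error criterion can be sketched as below. The toy power series, the 2 MW capacity, and the two baseline predictors (persistence and a two-point moving average) are illustrative only, not the ARMA/ANN/ANFIS models of the paper:

```python
import math

def nmae(actual, forecast, capacity):
    """Mean absolute error normalised by installed capacity."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) \
        / (len(actual) * capacity)

capacity = 2.0  # MW, a made-up figure for a small farm
actual = [1.0 + 0.5 * math.sin(0.3 * t) for t in range(48)]  # toy hourly power

# persistence: the forecast for hour t is the observation at t-1
persistence = [actual[0]] + actual[:-1]
# two-point moving average of the two previous observations
sma = actual[:2] + [(actual[t - 1] + actual[t - 2]) / 2 for t in range(2, 48)]

err_persistence = nmae(actual, persistence, capacity)
err_sma = nmae(actual, sma, capacity)
```

Comparing candidate models against such simple baselines, horizon by horizon, is the standard way to judge whether a forecasting method genuinely narrows the error distribution.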
Critical slowing down and error analysis in lattice QCD simulations
International Nuclear Information System (INIS)
Schaefer, Stefan; Sommer, Rainer; Virotta, Francesco
2010-09-01
We study the critical slowing down towards the continuum limit of lattice QCD simulations with Hybrid Monte Carlo type algorithms. In particular for the squared topological charge we find it to be very severe with an effective dynamical critical exponent of about 5 in pure gauge theory. We also consider Wilson loops which we can demonstrate to decouple from the modes which slow down the topological charge. Quenched observables are studied and a comparison to simulations of full QCD is made. In order to deal with the slow modes in the simulation, we propose a method to incorporate the information from slow observables into the error analysis of physical observables and arrive at safer error estimates. (orig.)
International Nuclear Information System (INIS)
Gilmore, W.E.; Gertman, D.I.; Gilbert, B.G.; Reece, W.J.
1988-11-01
The Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR) is an automated data base management system for processing and storing human error probability (HEP) and hardware component failure data (HCFD). The NUCLARR system software resides on an IBM (or compatible) personal micro-computer. Users can perform data base searches to furnish HEP estimates and HCFD rates. In this manner, the NUCLARR system can be used to support a variety of risk assessment activities. This volume, Volume 3 of a 5-volume series, presents the procedures used to process HEP and HCFD for entry in NUCLARR and describes how to modify the existing NUCLARR taxonomy in order to add either equipment types or action verbs. Volume 3 also specifies the various roles of the administrative staff on assignment to the NUCLARR Clearinghouse who are tasked with maintaining the data base, dealing with user requests, and processing NUCLARR data. 5 refs., 34 figs., 3 tabs
Analysis of tap weight errors in CCD transversal filters
Ricco, Bruno; Wallinga, Hans
1978-01-01
A method is presented to determine and evaluate the actual tap weight errors in CCD split-electrode transversal filters. It is concluded that the correlated part in the tap weight errors dominates the random errors.
International Nuclear Information System (INIS)
Watson, Stephen R.
1995-01-01
In their comment on a recent contribution of mine, [Watson, S., The meaning of probability in probabilistic safety analysis. Reliab. Engng and System Safety, 45 (1994) 261-269.] Yellman and Murray assert that (1) I argue in favour of a realistic interpretation of probability for PSAs; (2) that the only satisfactory philosophical theory of probability is the relative frequency theory; (3) that I mean the same thing by the words 'uncertainty' and 'probability'; (4) that my argument can easily lead to the belief that the outputs of PSAs are meaningless. I take issue with all these points, and in this response I set out my arguments
Error performance analysis in downlink cellular networks with interference management
Afify, Laila H.
2015-05-01
Modeling aggregate network interference in cellular networks has recently gained immense attention both in academia and industry. While stochastic geometry based models have succeeded in accounting for the cellular network geometry, they mostly abstract away many important wireless communication system aspects (e.g., modulation techniques, signal recovery techniques). Recently, a novel stochastic geometry model, based on the Equivalent-in-Distribution (EiD) approach, succeeded in capturing the aforementioned communication system aspects and extending the analysis to averaged error performance, albeit at the expense of increased modeling complexity. Inspired by the EiD approach, the analysis developed in [1] takes into consideration the key system parameters, while providing a simple tractable analysis. In this paper, we extend this framework to study the effect of different interference management techniques in downlink cellular networks. The accuracy of the proposed analysis is verified via Monte Carlo simulations.
International Nuclear Information System (INIS)
Detrano, R.; Yiannikas, J.; Salcedo, E.E.; Rincon, G.; Go, R.T.; Williams, G.; Leatherman, J.
1984-01-01
One hundred fifty-four patients referred for coronary arteriography were prospectively studied with stress electrocardiography, stress thallium scintigraphy, cine fluoroscopy (for coronary calcifications), and coronary angiography. Pretest probabilities of coronary disease were determined based on age, sex, and type of chest pain. These and pooled literature values for the conditional probabilities of test results based on disease state were used in Bayes theorem to calculate posttest probabilities of disease. The results of the three noninvasive tests were compared for statistical independence, a necessary condition for their simultaneous use in Bayes theorem. The test results were found to demonstrate pairwise independence in patients with and those without disease. Some dependencies that were observed between the test results and the clinical variables of age and sex were not sufficient to invalidate application of the theorem. Sixty-eight of the study patients had at least one major coronary artery obstruction of greater than 50%. When these patients were divided into low-, intermediate-, and high-probability subgroups according to their pretest probabilities, noninvasive test results analyzed by Bayesian probability analysis appropriately advanced 17 of them by at least one probability subgroup while only seven were moved backward. Of the 76 patients without disease, 34 were appropriately moved into a lower probability subgroup while 10 were incorrectly moved up. We conclude that posttest probabilities calculated from Bayes theorem more accurately classified patients with and without disease than did pretest probabilities, thus demonstrating the utility of the theorem in this application
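The Bayesian updating described above can be sketched as follows. The sensitivity/specificity values are hypothetical placeholders, not the pooled literature values the study used, and chaining test results sequentially is valid only under the conditional independence that the authors verified.

```python
def posttest_probability(pretest, sensitivity, specificity, positive=True):
    """Bayes' theorem for a dichotomous test result.

    P(D|T) = P(T|D) * P(D) / (P(T|D) * P(D) + P(T|~D) * P(~D))
    """
    p_t_d = sensitivity if positive else 1.0 - sensitivity        # P(T|D)
    p_t_nd = (1.0 - specificity) if positive else specificity     # P(T|~D)
    num = p_t_d * pretest
    return num / (num + p_t_nd * (1.0 - pretest))

# chaining conditionally independent tests: the posttest probability of
# one test becomes the pretest probability of the next (values illustrative)
p = 0.30                                                  # pretest, from age/sex/pain type
p = posttest_probability(p, 0.80, 0.90, positive=True)    # e.g. positive stress ECG
p = posttest_probability(p, 0.85, 0.90, positive=False)   # e.g. negative thallium scan
```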
Error Analysis for Fourier Methods for Option Pricing
Häppölä, Juho
2016-01-06
We provide a bound for the error committed when using a Fourier method to price European options when the underlying follows an exponential Lévy dynamic. The price of the option is described by a partial integro-differential equation (PIDE). Applying a Fourier transformation to the PIDE yields an ordinary differential equation that can be solved analytically in terms of the characteristic exponent of the Lévy process. Then, a numerical inverse Fourier transform allows us to obtain the option price. We present a novel bound for the error and use this bound to set the parameters for the numerical method. We analyze the properties of the bound for a dissipative and pure-jump example. The bound presented is independent of the asymptotic behaviour of option prices at extreme asset prices. The error bound can be decomposed into a product of terms resulting from the dynamics and the option payoff, respectively. The analysis is supplemented by numerical examples that demonstrate results comparable to and superior to the existing literature.
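As a minimal, self-contained illustration of Fourier-based option pricing (not the authors' PIDE method or their error bound), the sketch below prices a European call by Gil-Pelaez-style inversion of the characteristic function, using geometric Brownian motion as the simplest exponential Lévy example. The quadrature grid parameters are arbitrary choices, exactly the kind of parameters an error bound like the paper's would help set.

```python
import numpy as np

def char_fn(u, s0, r, sigma, T):
    """Characteristic function of ln(S_T) for geometric Brownian motion,
    the simplest exponential Levy dynamic (Gaussian component only)."""
    mu = np.log(s0) + (r - 0.5 * sigma ** 2) * T
    return np.exp(1j * u * mu - 0.5 * sigma ** 2 * u ** 2 * T)

def _trapz(y, x):
    # explicit trapezoidal rule (avoids the np.trapz / np.trapezoid rename)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def call_price_fourier(s0, K, r, sigma, T, u_max=200.0, n=200_001):
    """European call C = S0*Pi1 - K*exp(-rT)*Pi2 via Fourier inversion."""
    u = np.linspace(1e-8, u_max, n)        # grid avoids the u = 0 point
    k = np.log(K)
    phi_u = char_fn(u, s0, r, sigma, T)
    phi_shift = char_fn(u - 1j, s0, r, sigma, T)
    norm = char_fn(np.array([-1j]), s0, r, sigma, T)[0]  # = S0 * exp(rT)
    f1 = (np.exp(-1j * u * k) * phi_shift / (1j * u * norm)).real
    f2 = (np.exp(-1j * u * k) * phi_u / (1j * u)).real
    Pi1 = 0.5 + _trapz(f1, u) / np.pi      # probability under stock numeraire
    Pi2 = 0.5 + _trapz(f2, u) / np.pi      # risk-neutral P(S_T > K)
    return s0 * Pi1 - K * np.exp(-r * T) * Pi2

price = call_price_fourier(s0=100.0, K=110.0, r=0.05, sigma=0.2, T=1.0)
```

For this Gaussian special case the result can be checked against the Black-Scholes closed form, which is how the truncation and grid errors of the inversion become visible.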
ERROR ANALYSIS FOR THE AIRBORNE DIRECT GEOREFERENCING TECHNIQUE
Directory of Open Access Journals (Sweden)
A. S. Elsharkawy
2016-10-01
Direct Georeferencing was shown to be an important alternative to standard indirect image orientation using classical or GPS-supported aerial triangulation. Since direct Georeferencing without ground control relies on an extrapolation process only, particular focus has to be laid on the overall system calibration procedure. The accuracy performance of integrated GPS/inertial systems for direct Georeferencing in airborne photogrammetric environments has been tested extensively in recent years. In this approach, the limiting factor is a correct overall system calibration including the GPS/inertial component as well as the imaging sensor itself; remaining errors in the system calibration will therefore significantly decrease the quality of object point determination. This research paper presents an error analysis for the airborne direct Georeferencing technique, in which integrated GPS/IMU positioning and navigation systems are used in conjunction with aerial cameras for airborne mapping, compared with GPS/INS-supported AT, through the implementation of a certain amount of error on the EOP and boresight parameters and a study of the effect of these errors on the final ground coordinates. The data set is a block of 32 images distributed over six flight lines; the interior orientation parameters (IOP) are known through a careful camera calibration procedure, and 37 ground control points are known through a terrestrial surveying procedure. The exact location of the camera station at the time of exposure, the exterior orientation parameters (EOP), is known through the GPS/INS integration process. The preliminary results show that, firstly, DG and GPS-supported AT have similar accuracy and, compared with the conventional aerial photography method, the two technologies reduce the dependence on ground control (used only for quality control purposes). Secondly, in the DG, correcting overall system calibration including the GPS/inertial component as well as the
Hebbian errors in learning: an analysis using the Oja model.
Rădulescu, Anca; Cox, Kingsley; Adams, Paul
2009-06-21
Recent work on long term potentiation in brain slices shows that Hebb's rule is not completely synapse-specific, probably due to intersynapse diffusion of calcium or other factors. We previously suggested that such errors in Hebbian learning might be analogous to mutations in evolution. We examine this proposal quantitatively, extending the classical Oja unsupervised model of learning by a single linear neuron to include Hebbian inspecificity. We introduce an error matrix E, which expresses possible crosstalk between updating at different connections. When there is no inspecificity, this gives the classical result of convergence to the first principal component of the input distribution (PC1). We show the modified algorithm converges to the leading eigenvector of the matrix EC, where C is the input covariance matrix. In the most biologically plausible case when there are no intrinsically privileged connections, E has diagonal elements Q and off-diagonal elements (1-Q)/(n-1), where Q, the quality, is expected to decrease with the number of inputs n and with a synaptic parameter b that reflects synapse density, calcium diffusion, etc. We study the dependence of the learning accuracy on b, n and the amount of input activity or correlation (analytically and computationally). We find that inaccuracy increases (learning becomes gradually less useful) with increases in b, particularly for intermediate (i.e., biologically realistic) correlation strength, although some useful learning always occurs up to the trivial limit Q=1/n. We discuss the relation of our results to Hebbian unsupervised learning in the brain. When the mechanism lacks specificity, the network fails to learn the expected, and typically most useful, result, especially when the input correlation is weak. Hebbian crosstalk would reflect the very high density of synapses along dendrites, and inevitably degrades learning.
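A minimal simulation of the modified Oja model described above, assuming the stated form of E (diagonal Q, off-diagonal (1-Q)/(n-1)) and applying the crosstalk to the Hebbian potentiation term. The covariance spectrum, learning rate and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, Q = 5, 0.8

# error (crosstalk) matrix E: diagonal Q, off-diagonal (1 - Q)/(n - 1)
E = np.full((n, n), (1.0 - Q) / (n - 1))
np.fill_diagonal(E, Q)

# input covariance C with a clearly dominant principal component
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
C = U @ np.diag([5.0, 2.0, 1.5, 1.0, 0.5]) @ U.T
L = np.linalg.cholesky(C)

# Oja's rule with crosstalk on the Hebbian term: dw = eta*(y*E@x - y^2*w)
w = rng.normal(size=n)
w /= np.linalg.norm(w)
eta = 5e-4
for _ in range(150_000):
    x = L @ rng.normal(size=n)        # zero-mean input with covariance C
    y = w @ x
    w += eta * (y * (E @ x) - y * y * w)

# theory: w aligns with the leading eigenvector of E @ C
vals, vecs = np.linalg.eig(E @ C)
v = np.real(vecs[:, np.argmax(np.real(vals))])
alignment = abs(w @ v) / (np.linalg.norm(w) * np.linalg.norm(v))
```

With Q = 1 (perfect specificity) the same code recovers the classical convergence to PC1 of C.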
Fuzzy probability based fault tree analysis to propagate and quantify epistemic uncertainty
International Nuclear Information System (INIS)
Purba, Julwan Hendry; Sony Tjahyani, D.T.; Ekariansyah, Andi Sofrany; Tjahjono, Hendro
2015-01-01
Highlights: • Fuzzy probability based fault tree analysis is proposed to evaluate epistemic uncertainty in fuzzy fault tree analysis. • Fuzzy probabilities represent the likelihood of occurrence of all events in a fault tree. • A fuzzy multiplication rule quantifies the epistemic uncertainty of minimal cut sets. • A fuzzy complement rule estimates the epistemic uncertainty of the top event. • The proposed FPFTA has successfully evaluated the U.S. Combustion Engineering RPS. - Abstract: A number of fuzzy fault tree analysis approaches, which integrate fuzzy concepts into the quantitative phase of conventional fault tree analysis, have been proposed to study the reliability of engineering systems. These new approaches apply expert judgments to overcome the limitation of conventional fault tree analysis when basic events do not have probability distributions. Since expert judgments may come with epistemic uncertainty, it is important to quantify the overall uncertainties of the fuzzy fault tree analysis. Monte Carlo simulation is commonly used to quantify the overall uncertainties of conventional fault tree analysis. However, since Monte Carlo simulation is based on probability distributions, this technique is not appropriate for fuzzy fault tree analysis, which is based on fuzzy probabilities. The objective of this study is to develop a fuzzy probability based fault tree analysis that overcomes this limitation. To demonstrate the applicability of the proposed approach, a case study is performed and its results are compared to those of a conventional fault tree analysis. The results confirm that the proposed fuzzy probability based fault tree analysis can propagate and quantify epistemic uncertainties in fault tree analysis
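A toy sketch of the gate rules named in the highlights, assuming triangular fuzzy probabilities and the common vertex-wise approximation for fuzzy multiplication; the paper's exact fuzzy arithmetic may differ, and the numeric values are invented.

```python
# Triangular fuzzy probability as (low, mode, high).
def f_and(p, q):
    """AND gate: fuzzy multiplication rule for a minimal cut set
    (vertex-wise approximation of triangular-number multiplication)."""
    return tuple(pi * qi for pi, qi in zip(p, q))

def f_not(p):
    """Fuzzy complement rule: 1 - p, with the bounds reversed."""
    a, b, c = p
    return (1.0 - c, 1.0 - b, 1.0 - a)

def f_or(p, q):
    """OR gate via De Morgan: 1 - (1 - p)(1 - q)."""
    return f_not(f_and(f_not(p), f_not(q)))

basic = (0.1, 0.2, 0.3)            # expert-judged fuzzy probability of a basic event
cut_set = f_and(basic, basic)      # two basic events in one minimal cut set
top = f_or(cut_set, (0.01, 0.02, 0.05))  # top event from two contributions
```

The spread `top[2] - top[0]` is the epistemic uncertainty band propagated to the top event.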
A Conceptual Framework of Human Reliability Analysis for Execution Human Error in NPP Advanced MCRs
International Nuclear Information System (INIS)
Jang, In Seok; Kim, Ar Ryum; Seong, Poong Hyun; Jung, Won Dea
2014-01-01
The operation environment of Main Control Rooms (MCRs) in Nuclear Power Plants (NPPs) has changed with the adoption of new human-system interfaces that are based on computer-based technologies. MCRs that include these digital and computer technologies, such as large display panels, computerized procedures, and soft controls, are called Advanced MCRs. Among the many features of Advanced MCRs, soft controls are particularly important because operation actions in NPP Advanced MCRs are performed through them. Using soft controls such as mouse control and touch screens, operators can select a specific screen, then choose the controller, and finally manipulate the given devices. Due to the different interfaces of soft controls and hardwired conventional controls, different human error probabilities and a new Human Reliability Analysis (HRA) framework should be considered in the HRA for Advanced MCRs. In other words, new human error modes should be considered for interface management tasks such as navigation tasks and icon (device) selection tasks in monitors, and a new HRA framework taking these newly generated human error modes into account should be developed. In this paper, a conceptual framework for an HRA method for the evaluation of soft control execution human error in Advanced MCRs is suggested by analyzing soft control tasks
Post-Error Slowing in Patients With ADHD: A Meta-Analysis.
Balogh, Lívia; Czobor, Pál
2016-12-01
Post-error slowing (PES) is a cognitive mechanism for adaptive responses that reduces the probability of error in trials following an error. To date, no meta-analytic summary of individual studies has been conducted to assess whether ADHD patients differ from controls in PES. We identified 15 relevant publications, reporting 26 pairs of comparisons (ADHD, n = 1,053; healthy control, n = 614). Random-effects meta-analysis was used to determine the statistical effect size (ES) for PES. PES was diminished in the ADHD group as compared with controls, with an ES in the medium range (Cohen's d = 0.42). A significant group difference was observed in relation to the inter-stimulus interval (ISI): while healthy participants slowed down after an error during long (3,500 ms) compared with short ISIs (1,500 ms), ADHD participants sustained or even increased their speed. The pronounced group difference suggests that PES may be considered a behavioral indicator for differentiating ADHD patients from healthy participants. © The Author(s) 2014.
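The PES measure itself can be sketched as the simple post-error minus post-correct reaction-time difference; this is a per-subject computation on invented toy data, not the meta-analytic procedure of the paper.

```python
def post_error_slowing(rts, correct):
    """PES: mean RT on trials following an error minus mean RT on trials
    following a correct response (single block, milliseconds)."""
    post_err = [rts[i + 1] for i in range(len(rts) - 1) if not correct[i]]
    post_cor = [rts[i + 1] for i in range(len(rts) - 1) if correct[i]]
    return (sum(post_err) / len(post_err)) - (sum(post_cor) / len(post_cor))

# toy reaction-time sequence: slowing is visible after the two error trials
rts = [300, 320, 400, 310, 305, 390]
correct = [True, False, True, True, False, True]
pes = post_error_slowing(rts, correct)  # positive value => slowing after errors
```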
Analysis of Random Segment Errors on Coronagraph Performance
Stahl, Mark T.; Stahl, H. Philip; Shaklan, Stuart B.; N'Diaye, Mamadou
2016-01-01
At 2015 SPIE O&P we presented "Preliminary Analysis of Random Segment Errors on Coronagraph Performance". Key findings: contrast leakage for a 4th-order Sinc²(X) coronagraph is 10X more sensitive to random segment piston than to random tip/tilt; apertures with fewer segments (i.e. 1 ring) or very many segments (>16 rings) have less contrast leakage as a function of piston or tip/tilt than apertures with 2 to 4 rings of segments. Revised finding: piston is only 2.5X more sensitive than tip/tilt
Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students
Priyani, H. A.; Ekawati, R.
2018-01-01
Indonesian students’ competence in solving mathematical problems is still considered weak, as pointed out by the results of international assessments such as TIMSS. This might be caused by the various types of errors made. Hence, this study aimed at identifying students’ errors in solving mathematical problems in TIMSS in the topic of numbers, which is considered a fundamental concept in mathematics. This study applied descriptive qualitative analysis. The subjects were the three students who made the most errors on the test indicators, selected from 34 8th-grade students. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving Applying-level problems, the type of error that students made was operational errors. In addition, for Reasoning-level problems, three types of errors were made: conceptual errors, operational errors and principal errors. Meanwhile, analysis of the causes of students’ errors showed that students did not comprehend the mathematical problems given.
Kozak, J; Krysztoforski, K; Kroll, T; Helbig, S; Helbig, M
2009-01-01
The use of conventional CT- or MRI-based navigation systems for head and neck surgery is unsatisfactory due to tissue shift. Moreover, changes occurring during surgical procedures cannot be visualized. To overcome these drawbacks, we developed a novel ultrasound-guided navigation system for head and neck surgery. A comprehensive error analysis was undertaken to determine the accuracy of this new system. The evaluation of the system accuracy was essentially based on the method of error definition for well-established fiducial marker registration methods (point-pair matching) as used in, for example, CT- or MRI-based navigation. This method was modified in accordance with the specific requirements of ultrasound-guided navigation. The Fiducial Localization Error (FLE), Fiducial Registration Error (FRE) and Target Registration Error (TRE) were determined. In our navigation system, the real error (the TRE actually measured) did not exceed a volume of 1.58 mm³ with a probability of 0.9. A mean value of 0.8 mm (standard deviation: 0.25 mm) was found for the FRE. The quality of the coordinate tracking system (Polaris localizer) could be defined with an FLE of 0.4 ± 0.11 mm (mean ± standard deviation). The quality of the coordinates of the crosshairs of the phantom was determined with a deviation of 0.5 mm (standard deviation: 0.07 mm). The results demonstrate that our newly developed ultrasound-guided navigation system shows only very small system deviations and therefore provides very accurate data for practical applications.
Berg, M. D.; Kim, H. S.; Friendlich, M. A.; Perez, C. E.; Seidlick, C. M.; LaBel, K. A.
2011-01-01
We present SEU test and analysis of the Microsemi ProASIC3 FPGA. SEU Probability models are incorporated for device evaluation. Included is a comparison to the RTAXS FPGA illustrating the effectiveness of the overall testing methodology.
Analysis of en route operational errors : probability of resolution and time-on-position.
2012-02-01
The Federal Aviation Administration's Air Traffic Control Organization Safety Management System (SMS) is designed to prevent the introduction of unacceptable safety risk into the National Airspace System. One of the most important safety metrics used...
Close-range radar rainfall estimation and error analysis
van de Beek, C. Z.; Leijnse, H.; Hazenberg, P.; Uijlenhoet, R.
2016-08-01
Quantitative precipitation estimation (QPE) using ground-based weather radar is affected by many sources of error. The most important of these are (1) radar calibration, (2) ground clutter, (3) wet-radome attenuation, (4) rain-induced attenuation, (5) vertical variability in rain drop size distribution (DSD), (6) non-uniform beam filling and (7) variations in DSD. This study presents an attempt to separate and quantify these sources of error in flat terrain very close to the radar (1-2 km), where (4), (5) and (6) only play a minor role. Other important errors exist, like beam blockage, WLAN interferences and hail contamination and are briefly mentioned, but not considered in the analysis. A 3-day rainfall event (25-27 August 2010) that produced more than 50 mm of precipitation in De Bilt, the Netherlands, is analyzed using radar, rain gauge and disdrometer data. Without any correction, it is found that the radar severely underestimates the total rain amount (by more than 50 %). The calibration of the radar receiver is operationally monitored by analyzing the received power from the sun. This turns out to cause a 1 dB underestimation. The operational clutter filter applied by KNMI is found to incorrectly identify precipitation as clutter, especially at near-zero Doppler velocities. An alternative simple clutter removal scheme using a clear sky clutter map improves the rainfall estimation slightly. To investigate the effect of wet-radome attenuation, stable returns from buildings close to the radar are analyzed. It is shown that this may have caused an underestimation of up to 4 dB. Finally, a disdrometer is used to derive event and intra-event specific Z-R relations due to variations in the observed DSDs. Such variations may result in errors when applying the operational Marshall-Palmer Z-R relation. Correcting for all of these effects has a large positive impact on the radar-derived precipitation estimates and yields a good match between radar QPE and gauge
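The Z-R step described above can be sketched as follows. Z = 200 R^1.6 is the standard Marshall-Palmer relation mentioned in the text; the alternative coefficients are illustrative stand-ins for the disdrometer-derived, event-specific relations, not values from this study.

```python
import math

def rain_rate(dbz, a=200.0, b=1.6):
    """Invert a Z-R power law Z = a * R**b; defaults are Marshall-Palmer."""
    z = 10.0 ** (dbz / 10.0)        # dBZ -> linear reflectivity (mm^6 m^-3)
    return (z / a) ** (1.0 / b)     # rain rate in mm/h

# event-specific (a, b) pairs from a disdrometer would replace the defaults;
# the spread below shows how sensitive QPE is to the chosen relation
estimates = {(a, b): rain_rate(30.0, a, b)
             for a, b in [(200.0, 1.6), (300.0, 1.4)]}  # same 30 dBZ echo
```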
Error Analysis of CM Data Products Sources of Uncertainty
Energy Technology Data Exchange (ETDEWEB)
Hunt, Brian D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eckert-Gallup, Aubrey Celia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cochran, Lainy Dromgoole [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kraus, Terrence D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Allen, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Beal, Bill [National Security Technologies, Joint Base Andrews, MD (United States); Okada, Colin [National Security Technologies, LLC. (NSTec), Las Vegas, NV (United States); Simpson, Mathew [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-02-01
The goal of this project is to address the current inability to assess the overall error and uncertainty of data products developed and distributed by DOE’s Consequence Management (CM) Program. This is a widely recognized shortfall, the resolution of which would provide a great deal of value and defensibility to the analysis results, data products, and the decision making process that follows this work. A global approach to this problem is necessary because multiple sources of error and uncertainty contribute to the ultimate production of CM data products. Therefore, this project will require collaboration with subject matter experts across a wide range of FRMAC skill sets in order to quantify the types of uncertainty that each area of the CM process might contain and to understand how variations in these uncertainty sources contribute to the aggregated uncertainty present in CM data products. The ultimate goal of this project is to quantify the confidence level of CM products to ensure that appropriate public and worker protection decisions are supported by defensible analysis.
SIRTF Focal Plane Survey: A Pre-flight Error Analysis
Bayard, David S.; Brugarolas, Paul B.; Boussalis, Dhemetrios; Kang, Bryan H.
2003-01-01
This report contains a pre-flight error analysis of the calibration accuracies expected from implementing the currently planned SIRTF focal plane survey strategy. The main purpose of this study is to verify that the planned strategy will meet focal plane survey calibration requirements (as put forth in the SIRTF IOC-SV Mission Plan [4]), and to quantify the actual accuracies expected. The error analysis was performed by running the Instrument Pointing Frame (IPF) Kalman filter on a complete set of simulated IOC-SV survey data, and studying the resulting propagated covariances. The main conclusion of this study is that all focal plane calibration requirements can be met with the currently planned survey strategy. The associated margins range from 3 to 95 percent, and tend to be smallest for frames having a 0.14" requirement, and largest for frames having a more generous 0.28" (or larger) requirement. The smallest margin of 3 percent is associated with the IRAC 3.6 and 5.8 micron array centers (frames 068 and 069), and the largest margin of 95 percent is associated with the MIPS 160 micron array center (frame 087). For pointing purposes, the most critical calibrations are for the IRS Peakup sweet spots and short wavelength slit centers (frames 019, 023, 052, 028, 034). Results show that these frames are meeting their 0.14" requirements with an expected accuracy of approximately 0.1", which corresponds to a 28 percent margin.
Improving patient safety in radiotherapy through error reporting and analysis
International Nuclear Information System (INIS)
Findlay, Ú.; Best, H.; Ottrey, M.
2016-01-01
Aim: To improve patient safety in radiotherapy (RT) through the analysis and publication of radiotherapy errors and near misses (RTE). Materials and methods: RTE are submitted on a voluntary basis by NHS RT departments throughout the UK to the National Reporting and Learning System (NRLS) or directly to Public Health England (PHE). RTE are analysed by PHE staff using frequency trend analysis based on the classification and pathway coding from Towards Safer Radiotherapy (TSRT). PHE, in conjunction with the Patient Safety in Radiotherapy Steering Group, publish learning from these events three times a year, with a biennial summary, so that their occurrence might be mitigated. Results: Since the introduction of this initiative in 2010, over 30,000 RTE reports have been submitted. The number of RTE reported in each biennial cycle has grown, ranging from 680 (2010) to 12,691 (2016). The vast majority of the RTE reported are lower level events that do not affect the outcome of patient care. Of the level 1 and 2 incidents reported, the majority are known to have affected only one fraction of a course of treatment, which means that corrective action could be taken over the remaining treatment fractions, so the incident did not have a significant impact on the patient or the outcome of their treatment. Analysis of the RTE reports demonstrates that generation of error is not confined to one professional group or to any particular point in the pathway. It also indicates that the pattern of errors is replicated across service providers in the UK. Conclusion: Use of the terminology, classification and coding of TSRT, together with implementation of the national voluntary reporting system described within this report, allows clinical departments to compare their local analysis to the national picture. Further opportunities to improve learning from this dataset must be exploited through development of the analysis and of proactive risk management strategies
Alignment error analysis of detector array for spatial heterodyne spectrometer.
Jin, Wei; Chen, Di-Hu; Li, Zhi-Wei; Luo, Hai-Yan; Hong, Jin
2017-12-10
Spatial heterodyne spectroscopy (SHS) is a new spatial interference spectroscopy that can achieve high spectral resolution. Alignment error of the detector array can significantly degrade the spectral resolution of an SHS system. Theoretical models for analyzing the alignment errors, which are divided into three kinds, are presented in this paper. Based on these models, the tolerance angle for each error has been given. The results of simulation experiments show that when the angles of slope error, tilt error, and rotation error are less than 1.21°, 1.21°, and 0.066°, respectively, the alignment reaches an acceptable level.
Effects of Correlated Errors on the Analysis of Space Geodetic Data
Romero-Wolf, Andres; Jacobs, C. S.
2011-01-01
As thermal errors are reduced, instrumental and troposphere correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.
A Framework for Examining Mathematics Teacher Knowledge as Used in Error Analysis
Peng, Aihui; Luo, Zengru
2009-01-01
Error analysis is a basic and important task for mathematics teachers. Unfortunately, in the present literature there is a lack of detailed understanding about teacher knowledge as used in it. Based on a synthesis of the literature in error analysis, a framework for prescribing and assessing mathematics teacher knowledge in error analysis was…
Error Analysis and Compensation Method Of 6-axis Industrial Robot
Zhang, Jianhao; Cai, Jinda
2017-01-01
A compensation method is proposed based on an error model built from the robot's kinematic structural parameters and joint angles. Using the robot kinematics equations derived from the D-H convention, a kinematic error model relative to the end effector of the robot is deduced, and a comprehensive compensation method for kinematic parameter errors, which maps the structural parameters to the joint angle parameters, is proposed. In order to solve the angular error problem in the compensation process of ea...
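A minimal sketch of the D-H forward kinematics underlying such an error model; the link parameters and perturbations below are hypothetical, chosen only to show how small structural-parameter errors propagate to the end effector, and this is not the paper's 6-axis model or compensation algorithm.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one link, standard D-H convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows, joint_angles):
    """Chain the per-link transforms; dh_rows holds (d, a, alpha) per joint."""
    T = np.eye(4)
    for (d, a, alpha), theta in zip(dh_rows, joint_angles):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# a nominal 2-link planar arm, and the same arm with small parameter errors
nominal = [(0.0, 0.5, 0.0), (0.0, 0.4, 0.0)]
perturbed = [(0.0, 0.5 + 1e-3, 0.0), (0.0, 0.4, 1e-3)]
q = [0.3, -0.2]
tcp_error = (forward_kinematics(perturbed, q)[:3, 3]
             - forward_kinematics(nominal, q)[:3, 3])  # end-effector position error
```

A compensation scheme of the kind described above would adjust the joint angles so that the perturbed model reproduces the nominal end-effector pose.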
Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMS
International Nuclear Information System (INIS)
Diehl, S.E.; Ochoa, A. Jr.; Dressendorfer, P.V.; Koga, R.; Kolasinski, W.A.
1982-06-01
Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors
Error treatment in students' written assignments in Discourse Analysis
African Journals Online (AJOL)
... is generally no consensus on how lecturers should treat students' errors in written assignments, observations in this study enabled the researcher to provide certain strategies that lecturers can adopt. Key words: Error treatment; error handling; corrective feedback, positive cognitive feedback; negative cognitive feedback; ...
Error Analysis in Composition of Iranian Lower Intermediate Students
Taghavi, Mehdi
2012-01-01
Learners make errors during the process of learning languages. This study examines errors in a writing task of twenty Iranian lower-intermediate male students aged between 13 and 15. The subject given to the participants was a composition about the seasons of the year. All of the errors were identified and classified. Corder's classification (1967)…
Estimating Probability of Default on Peer to Peer Market – Survival Analysis Approach
Directory of Open Access Journals (Sweden)
Đurović Andrija
2017-05-01
Arguably a cornerstone of credit risk modelling is the probability of default. This article searches for evidence of a relationship between loan characteristics and probability of default on the peer-to-peer (P2P) market. In line with that, two loan characteristics are analysed: (1) loan term length and (2) loan purpose. The analysis is conducted using a survival analysis approach within the vintage framework. Firstly, the through-the-cycle 12-month probability of default is used to compare the riskiness of the analysed loan characteristics. Secondly, the log-rank test is employed in order to compare the complete survival period of the cohorts. The findings of the paper suggest that there is clear evidence of a relationship between the analysed loan characteristics and the probability of default: longer-term loans are riskier than shorter-term ones, and the least risky loans are those used for credit card payoff.
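The Kaplan-Meier estimator underlying this survival approach can be sketched as follows; the toy cohort below is invented, and a real vintage analysis would build such a curve per cohort (and per loan characteristic) before applying the log-rank test.

```python
import numpy as np

def kaplan_meier(durations, events):
    """Kaplan-Meier survival curve; events: 1 = default observed, 0 = censored."""
    durations = np.asarray(durations)
    events = np.asarray(events)
    surv = 1.0
    curve = []
    for t in np.unique(durations[events == 1]):   # distinct default times
        at_risk = np.sum(durations >= t)          # loans still on book at t
        defaults = np.sum((durations == t) & (events == 1))
        surv *= 1.0 - defaults / at_risk
        curve.append((float(t), float(surv)))
    return curve

def pd_by_horizon(curve, horizon):
    """Probability of default up to 'horizon' months: 1 - S(horizon)."""
    s = 1.0
    for t, surv in curve:
        if t <= horizon:
            s = surv
    return 1.0 - s

# toy loan cohort: months on book and default indicator (1 = defaulted)
months = [3, 6, 6, 12, 15]
defaulted = [1, 0, 1, 1, 0]
curve = kaplan_meier(months, defaulted)
pd12 = pd_by_horizon(curve, 12)   # 12-month probability of default
```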
Spectral analysis of growing graphs a quantum probability point of view
Obata, Nobuaki
2017-01-01
This book is designed as a concise introduction to the recent achievements on spectral analysis of graphs or networks from the point of view of quantum (or non-commutative) probability theory. The main topics are spectral distributions of the adjacency matrices of finite or infinite graphs and their limit distributions for growing graphs. The main vehicle is quantum probability, an algebraic extension of the traditional probability theory, which provides a new framework for the analysis of adjacency matrices revealing their non-commutative nature. For example, the method of quantum decomposition makes it possible to study spectral distributions by means of interacting Fock spaces or equivalently by orthogonal polynomials. Various concepts of independence in quantum probability and corresponding central limit theorems are used for the asymptotic study of spectral distributions for product graphs. This book is written for researchers, teachers, and students interested in graph spectra, their (asymptotic) spectr...
Errors in practical measurement in surveying, engineering, and technology
International Nuclear Information System (INIS)
Barry, B.A.; Morris, M.D.
1991-01-01
This book discusses statistical measurement, error theory, and statistical error analysis. The topics of the book include an introduction to measurement, measurement errors, the reliability of measurements, probability theory of errors, measures of reliability, reliability of repeated measurements, propagation of errors in computing, errors and weights, practical application of the theory of errors in measurement, two-dimensional errors and includes a bibliography. Appendices are included which address significant figures in measurement, basic concepts of probability and the normal probability curve, writing a sample specification for a procedure, classification, standards of accuracy, and general specifications of geodetic control surveys, the geoid, the frequency distribution curve and the computer and calculator solution of problems
Moqimipour, Kourosh; Shahrokhi, Mohsen
2015-01-01
The present study aimed at analyzing writing errors caused by the interference of the Persian language, regarded as the first language (L1), in three writing genres, namely narration, description, and comparison/contrast by Iranian EFL students. 65 English paragraphs written by the participants, who were at the intermediate level based on their…
Analysis of personnel error occurrence reports across Defense Program facilities
Energy Technology Data Exchange (ETDEWEB)
Stock, D.A.; Shurberg, D.A.; O'Brien, J.N.
1994-05-01
More than 2,000 reports from the Occurrence Reporting and Processing System (ORPS) database were examined in order to identify weaknesses in the implementation of the guidance for the Conduct of Operations (DOE Order 5480.19) at Defense Program (DP) facilities. The analysis revealed recurrent problems involving procedures, training of employees, the occurrence of accidents, planning and scheduling of daily operations, and communications. Changes to DOE 5480.19 and modifications of the Occurrence Reporting and Processing System are recommended to reduce the frequency of these problems. The primary tool used in this analysis was a coding scheme based on the guidelines in 5480.19, which was used to classify the textual content of occurrence reports. The occurrence reports selected for analysis came from across all DP facilities, and listed personnel error as a cause of the event. A number of additional reports, specifically from the Plutonium Processing and Handling Facility (TA55), and the Chemistry and Metallurgy Research Facility (CMR), at Los Alamos National Laboratory, were analyzed separately as a case study. In total, 2070 occurrence reports were examined for this analysis. A number of core issues were consistently found in all analyses conducted, and all subsets of data examined. When individual DP sites were analyzed, including some sites which have since been transferred, only minor variations were found in the importance of these core issues. The same issues also appeared in different time periods, in different types of reports, and at the two Los Alamos facilities selected for the case study.
The ranking probability approach and its usage in design and analysis of large-scale studies.
Kuo, Chia-Ling; Zaykin, Dmitri
2013-01-01
In experiments with many statistical tests there is a need to balance type I and type II error rates while taking multiplicity into account. In the traditional approach, the nominal [Formula: see text]-level such as 0.05 is adjusted by the number of tests, [Formula: see text], i.e., as 0.05/[Formula: see text]. Assuming that some proportion of tests represent "true signals", that is, originate from a scenario where the null hypothesis is false, power depends on the number of true signals and the respective distribution of effect sizes. One way to define power is for it to be the probability of making at least one correct rejection at the assumed [Formula: see text]-level. We advocate an alternative way of establishing how "well-powered" a study is. In our approach, useful for studies with multiple tests, the ranking probability [Formula: see text] is controlled, defined as the probability of making at least [Formula: see text] correct rejections while rejecting hypotheses with the [Formula: see text] smallest P-values. The two approaches are statistically related. The probability that the smallest P-value is a true signal (i.e., [Formula: see text]) is equal to the power at the level [Formula: see text], to a very good approximation. Ranking probabilities are also related to the false discovery rate and to the Bayesian posterior probability of the null hypothesis. We study properties of our approach when the effect size distribution is replaced for convenience by a single "typical" value taken to be the mean of the underlying distribution. We conclude that its performance is often satisfactory under this simplification; however, substantial imprecision is to be expected when [Formula: see text] is very large and [Formula: see text] is small. Precision is largely restored when three values with the respective abundances are used instead of a single typical effect size value.
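The effect of a multiplicity adjustment on power can be illustrated with a two-sided z-test; the effect size (expected z-score of 5) and test count below are arbitrary illustration values, not figures from the paper.

```python
from statistics import NormalDist

def power_at_level(mu, alpha):
    """Power of a two-sided z-test for a true effect with expected z-score `mu`,
    tested at significance level `alpha` (the negligible far-tail term is dropped)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return 1 - NormalDist().cdf(z_crit - mu)

m = 1_000_000           # number of tests, e.g. a genome-scale screen
alpha = 0.05
bonferroni = alpha / m  # per-test threshold 5e-8

single = power_at_level(5.0, alpha)        # one test at the nominal level
adjusted = power_at_level(5.0, bonferroni)  # same effect after adjustment
```

The drop from `single` to `adjusted` is the price of controlling the family-wise error rate over all `m` tests.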
2001-02-01
The Human Factors Analysis and Classification System (HFACS) is a general human error framework : originally developed and tested within the U.S. military as a tool for investigating and analyzing the human : causes of aviation accidents. Based upon ...
Error analysis of acceleration control loops of a synchrotron
International Nuclear Information System (INIS)
Zhang, S.Y.; Weng, W.T.
1991-01-01
For beam control during acceleration, it is conventional to derive the frequency from an external reference, be it a field marker or an external oscillator, to provide phase and radius feedback loops to ensure the phase stability, radial position and emittance integrity of the beam. The open and closed loop behaviors of both feedback control and their response under the possible frequency, phase and radius errors are derived from fundamental principles and equations. The stability of the loops is investigated under a wide range of variations of the gain and time delays. Actual system performance of the AGS Booster is analyzed and compared to commissioning experiences. Such analysis is useful for setting design criteria and tolerances for new proton synchrotrons. 4 refs., 13 figs
Kitchen Physics: Lessons in Fluid Pressure and Error Analysis
Vieyra, Rebecca Elizabeth; Vieyra, Chrystian; Macchia, Stefano
2017-02-01
Although the advent and popularization of the "flipped classroom" tends to center around at-home video lectures, teachers are increasingly turning to at-home labs for enhanced student engagement. This paper describes two simple at-home experiments that can be accomplished in the kitchen. The first experiment analyzes the density of four liquids using a waterproof case and a smartphone barometer in a container, sink, or tub. The second experiment determines the relationship between pressure and temperature of an ideal gas in a constant volume container placed momentarily in a refrigerator freezer. These experiences provide a ripe opportunity both for learning fundamental physics concepts as well as to investigate a variety of error analysis techniques that are frequently overlooked in introductory physics courses.
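The second experiment lends itself to a simple least-squares treatment: fitting pressure against Celsius temperature and extrapolating to zero pressure estimates absolute zero. The data below are synthetic, generated exactly from the ideal-gas law under an assumed 101.3 kPa starting pressure; they are not measurements from the paper.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# synthetic freezer-experiment data: pressure (kPa) of a fixed volume of air
temps_c = [-18, -5, 4, 12, 21]
p_ref = 101.3  # assumed pressure at 21 degrees C
pressures = [p_ref * (t + 273.15) / (21 + 273.15) for t in temps_c]

a, b = fit_line(temps_c, pressures)
absolute_zero = -b / a  # extrapolated x-intercept, in degrees C
```

With real smartphone-barometer readings the x-intercept will miss -273.15 °C, and the residuals of the fit become a natural entry point for the error analysis discussion.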
An analysis of tracking error in image-guided neurosurgery.
Gerard, Ian J; Collins, D Louis
2015-10-01
This study quantifies some of the technical and physical factors that contribute to error in image-guided interventions. Errors associated with tracking, tool calibration and registration between a physical object and its corresponding image were investigated and compared with theoretical descriptions of these errors. A precision-milled linear testing apparatus was constructed to perform the measurements. The tracking error was shown to increase in a linear fashion with distance normal to the camera, ranging between 0.15 and 0.6 mm. The tool calibration error increased as a function of distance from the camera and the reference tool (0.2-0.8 mm). The fiducial registration error was shown to improve as more points were used, up until a plateau value was reached which corresponded to the total fiducial localization error ([Formula: see text]0.8 mm). The target registration error distributions followed a [Formula: see text] distribution with the largest error and variation around fiducial points. To minimize errors, tools should be calibrated as close as possible to the reference tool and camera, and should be used as close to the front edge of the camera as possible throughout the intervention, with the camera pointed in the direction where accuracy is least needed during surgery.
Fixed-point error analysis of Winograd Fourier transform algorithms
Patterson, R. W.; Mcclellan, J. H.
1978-01-01
The quantization error introduced by the Winograd Fourier transform algorithm (WFTA) when implemented in fixed-point arithmetic is studied and compared with that of the fast Fourier transform (FFT). The effect of ordering the computational modules and the relative contributions of data quantization error and coefficient quantization error are determined. In addition, the quantization error introduced by the Good-Winograd (GW) algorithm, which uses Good's prime-factor decomposition for the discrete Fourier transform (DFT) together with Winograd's short length DFT algorithms, is studied. Error introduced by the WFTA is, in all cases, worse than that of the FFT. In general, the WFTA requires one or two more bits for data representation to give an error similar to that of the FFT. Error introduced by the GW algorithm is approximately the same as that of the FFT.
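The flavor of such a study can be sketched by rounding the input data to a given number of fractional bits and comparing a direct DFT of the quantized signal against the exact one. This quantizes only the data, not the twiddle factors or intermediate results, so it understates the full fixed-point error the paper analyzes.

```python
import cmath
import random

def dft(x):
    """Direct discrete Fourier transform (double-precision reference)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def quantize(x, bits):
    """Round each sample to `bits` fractional bits (fixed-point rounding)."""
    scale = 2 ** bits
    return [complex(round(v.real * scale) / scale, round(v.imag * scale) / scale)
            for v in x]

random.seed(1)
signal = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(64)]
exact = dft(signal)

errs = {}
for bits in (8, 10, 12):
    approx = dft(quantize(signal, bits))
    errs[bits] = max(abs(a - b) for a, b in zip(exact, approx))
```

Each extra pair of data bits shrinks the quantization step by a factor of four, which is the kind of "one or two more bits" trade-off the comparison between the WFTA and the FFT rests on.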
Björkstén, Karin Sparring; Bergqvist, Monica; Andersén-Karlsson, Eva; Benson, Lina; Ulfvarson, Johanna
2016-08-24
Many studies address the prevalence of medication errors, but few address medication errors serious enough to be regarded as malpractice. Other studies have analyzed the individual and system contributory factors leading to a medication error. Nurses have a key role in medication administration, and there are contradictory reports on the nurses' work experience in relation to the risk and type of medication errors. All medication errors where a nurse was held responsible for malpractice (n = 585) during 11 years in Sweden were included. A qualitative content analysis and classification according to the type and the individual and system contributory factors was made. In order to test for possible differences between nurses' work experience and associations within and between the errors and contributory factors, Fisher's exact test was used, and Cohen's kappa (k) was performed to estimate the magnitude and direction of the associations. There were a total of 613 medication errors in the 585 cases, the most common being "Wrong dose" (41 %), "Wrong patient" (13 %) and "Omission of drug" (12 %). In 95 % of the cases, an average of 1.4 individual contributory factors was found; the most common being "Negligence, forgetfulness or lack of attentiveness" (68 %), "Proper protocol not followed" (25 %), "Lack of knowledge" (13 %) and "Practice beyond scope" (12 %). In 78 % of the cases, an average of 1.7 system contributory factors was found; the most common being "Role overload" (36 %), "Unclear communication or orders" (30 %) and "Lack of adequate access to guidelines or unclear organisational routines" (30 %). The errors "Wrong patient due to mix-up of patients" and "Wrong route" and the contributory factors "Lack of knowledge" and "Negligence, forgetfulness or lack of attentiveness" were more common in less experienced nurses. The experienced nurses were more prone to "Practice beyond scope of practice" and to make errors in spite of "Lack of adequate
Analysis of error functions in speckle shearing interferometry
International Nuclear Information System (INIS)
Wan Saffiey Wan Abdullah
2001-01-01
Electronic Speckle Pattern Shearing Interferometry (ESPSI), or shearography, has successfully been used in NDT for slope (∂w/∂x and/or ∂w/∂y) measurement, while strain measurement (∂u/∂x, ∂v/∂y, ∂u/∂y and ∂v/∂x) is still under investigation. This method is well accepted in industrial applications, especially in the aerospace industry. Demand for this method is increasing due to the complexity of the test materials and objects. ESPSI has successfully performed in NDT only for qualitative measurement, whilst quantitative measurement is the current aim of many manufacturers. Industrial use of such equipment proceeds without considering the errors arising from numerous sources, including wavefront divergence. The majority of commercial systems are operated with diverging object illumination wavefronts, without considering the curvature of the object illumination wavefront or the object geometry when calculating the interferometer fringe function and quantifying data. This thesis reports a novel approach to quantified maximum phase change difference analysis for derivative out-of-plane (OOP) and in-plane (IP) cases that propagate from the divergent illumination wavefront compared to collimated illumination. The theoretical maximum phase difference is formulated by means of the dependent variables: object distance, illuminated diameter, centre of illuminated area, camera distance and illumination angle. The relative maximum phase change difference that may contribute to the error in the measurement analysis in this scope of research is defined as the difference between the maximum phase difference value measured with a divergent illumination wavefront and that measured with a collimated illumination wavefront, taken at the edge of the illuminated area. Experimental validation using test objects for derivative out-of-plane and derivative in-plane deformation, using a single illumination wavefront
Earth Data Analysis Center, University of New Mexico — USFS, State Forestry, BLM, and DOI fire occurrence point locations from 1987 to 2008 were combined and converted into a fire occurrence probability or density grid...
Error performance analysis in K-tier uplink cellular networks using a stochastic geometric approach
Afify, Laila H.
2015-09-14
In this work, we develop an analytical paradigm to analyze the average symbol error probability (ASEP) performance of uplink traffic in a multi-tier cellular network. The analysis is based on the recently developed Equivalent-in-Distribution approach that utilizes stochastic geometric tools to account for the network geometry in the performance characterization. Different from the other stochastic geometry models adopted in the literature, the developed analysis accounts for important communication system parameters and goes beyond signal-to-interference-plus-noise ratio characterization. That is, the presented model accounts for the modulation scheme, constellation type, and signal recovery techniques to model the ASEP. To this end, we derive single integral expressions for the ASEP for different modulation schemes due to aggregate network interference. Finally, all theoretical findings of the paper are verified via Monte Carlo simulations.
Review of advances in human reliability analysis of errors of commission-Part 2: EOC quantification
International Nuclear Information System (INIS)
Reer, Bernhard
2008-01-01
In close connection with examples relevant to contemporary probabilistic safety assessment (PSA), a review of advances in human reliability analysis (HRA) of post-initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions, has been carried out. The review comprises both EOC identification (part 1) and quantification (part 2); part 2 is presented in this article. Emerging HRA methods in this field are: ATHEANA, MERMOS, the EOC HRA method developed by Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS), the MDTA method and CREAM. The essential advanced features are on the conceptual side, especially to envisage the modeling of multiple contexts for an EOC to be quantified (ATHEANA, MERMOS and MDTA), in order to explicitly address adverse conditions. There is promising progress in providing systematic guidance to better account for cognitive demands and tendencies (GRS, CREAM), and EOC recovery (MDTA). Problematic issues are associated with the implementation of multiple context modeling and the assessment of context-specific error probabilities. Approaches for task or error opportunity scaling (CREAM, GRS) and the concept of reference cases (ATHEANA outlook) provide promising orientations for achieving progress towards data-based quantification. Further development work is needed and should be carried out in close connection with large-scale applications of existing approaches
On the effects of systematic errors in analysis of nuclear scattering data.
Energy Technology Data Exchange (ETDEWEB)
Bennett, M.T.; Steward, C.; Amos, K.; Allen, L.J.
1995-07-05
The effects of systematic errors on elastic scattering differential cross-section data upon the assessment of quality fits to that data have been studied. Three cases are studied, namely the differential cross-section data sets from elastic scattering of 200 MeV protons from {sup 12}C, of 350 MeV {sup 16}O-{sup 16}O scattering and of 288.6 MeV {sup 12}C-{sup 12}C scattering. First, to estimate the probability of any unknown systematic errors, select sets of data have been processed using the method of generalized cross validation; a method based upon the premise that any data set should satisfy an optimal smoothness criterion. In another case, the S function that provided a statistically significant fit to data, upon allowance for angle variation, became overdetermined. A far simpler S function form could then be found to describe the scattering process. The S functions so obtained have been used in a fixed energy inverse scattering study to specify effective, local, Schroedinger potentials for the collisions. An error analysis has been performed on the results to specify confidence levels for those interactions. 19 refs., 6 tabs., 15 figs.
On the error analysis of the meshless FDM and its multipoint extension
Jaworska, Irena
2018-01-01
The error analysis for the meshless methods, especially for the Meshless Finite Difference Method (MFDM), is discussed in the paper. Both a priori and a posteriori error estimations are considered. Experimental order of convergence confirms the theoretically developed a priori error bound. The higher order extension of the MFDM - the multipoint approach may be used as a source of the improved reference solution, instead of the true analytical one, for the global and local error estimation of the solution and residual errors. Several types of a posteriori error estimators are described. A variety of performed tests confirm high quality of a posteriori error estimation based on the multipoint MFDM.
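The idea of using a higher-quality numerical solution as the reference for a posteriori error estimation, in place of the unknown analytical one, can be illustrated with an ordinary finite-difference derivative refined by halving the step. This is a generic stand-in for the role the multipoint approach plays, not the MFDM itself.

```python
import math

def fd_derivative(f, x, h):
    """Central finite-difference derivative, accuracy O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def estimated_error(f, x, h):
    """A posteriori estimate: treat the refined (h/2) result as the improved
    reference solution standing in for the unknown exact derivative."""
    return abs(fd_derivative(f, x, h) - fd_derivative(f, x, h / 2))

# exact derivative of sin is cos, so the true error is available for comparison
true_error = abs(fd_derivative(math.sin, 1.0, 0.1) - math.cos(1.0))
estimate = estimated_error(math.sin, 1.0, 0.1)
```

For this O(h²) scheme the h/2 result carries a quarter of the coarse error, so the estimate recovers about three quarters of the true error; a higher-order reference, like the multipoint extension relative to the MFDM, recovers correspondingly more.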
International Nuclear Information System (INIS)
Lopes, Valdir Maciel
2010-01-01
This study aims to evaluate the potential risks submitted by the incidents in nuclear research reactors. For its development, two databases of the International Atomic Energy Agency, IAEA, were used, the Incident Report System for Research Reactor and Research Reactor Data Base. For this type of assessment was used the Probabilistic Safety Analysis (PSA), within a confidence level of 90% and the Deterministic Probability Analysis (DPA). To obtain the results of calculations of probabilities for PSA, were used the theory and equations in the paper IAEA TECDOC - 636. The development of the calculations of probabilities for PSA was used the program Scilab version 5.1.1, free access, executable on Windows and Linux platforms. A specific program to get the results of probability was developed within the main program Scilab 5.1.1., for two distributions Fischer and Chi-square, both with the confidence level of 90%. Using the Sordi equations and Origin 6.0 program, were obtained the maximum admissible doses related to satisfy the risk limits established by the International Commission on Radiological Protection, ICRP, and were also obtained these maximum doses graphically (figure 1) resulting from the calculations of probabilities x maximum admissible doses. It was found that the reliability of the results of probability is related to the operational experience (reactor x year and fractions) and that the larger it is, greater the confidence in the outcome. Finally, a suggested list of future work to complement this paper was gathered. (author)
Space Shuttle Launch Probability Analysis: Understanding History so We Can Predict the Future
Cates, Grant R.
2014-01-01
The Space Shuttle was launched 135 times and nearly half of those launches required 2 or more launch attempts. The Space Shuttle launch countdown historical data of 250 launch attempts provides a wealth of data that is important to analyze for strictly historical purposes as well as for use in predicting future launch vehicle launch countdown performance. This paper provides a statistical analysis of all Space Shuttle launch attempts including the empirical probability of launch on any given attempt and the cumulative probability of launch relative to the planned launch date at the start of the initial launch countdown. This information can be used to facilitate launch probability predictions of future launch vehicles such as NASA's Space Shuttle derived SLS. Understanding the cumulative probability of launch is particularly important for missions to Mars since the launch opportunities are relatively short in duration and one must wait for 2 years before a subsequent attempt can begin.
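The two statistics named above are straightforward to compute from per-mission attempt counts. The counts below are invented so that the totals roughly match the figures quoted in the abstract (135 launches, roughly 250 attempts, nearly half needing 2 or more attempts); they are not the actual Shuttle manifest.

```python
def launch_stats(attempts_per_launch):
    """attempts_per_launch[i]: countdown attempts mission i needed to launch."""
    launches = len(attempts_per_launch)
    total_attempts = sum(attempts_per_launch)
    p_any_attempt = launches / total_attempts  # empirical P(launch | attempt)
    # cumulative P(launched within k attempts), k = 1 .. max attempts observed
    cumulative = [sum(1 for a in attempts_per_launch if a <= k) / launches
                  for k in range(1, max(attempts_per_launch) + 1)]
    return p_any_attempt, cumulative

# hypothetical per-mission attempt counts summing to 135 launches
data = [1] * 70 + [2] * 40 + [3] * 15 + [4] * 10
p, cum = launch_stats(data)
```

The cumulative curve is the quantity of interest for a Mars window: it answers how likely the vehicle is to be off the pad within the k attempts the window allows.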
Probability problems in seismic risk analysis and load combinations for nuclear power plants
International Nuclear Information System (INIS)
George, L.L.
1983-01-01
This paper describes seismic risk, load combination, and probabilistic risk problems in power plant reliability, and it suggests applications of extreme value theory. Seismic risk analysis computes the probability of power plant failure in an earthquake and the resulting risk. Components fail if their peak responses to an earthquake exceed their strengths. Dependent stochastic processes represent responses, and peak responses are maxima. A Boolean function of component failures and survivals represents plant failure. Load combinations analysis computes the cdf of the peak of the superposition of stochastic processes that represent earthquake and operating loads. It also computes the probability of pipe fracture due to crack growth, a Markov process, caused by loads. Pipe fracture is an absorbing state. Probabilistic risk analysis computes the cdf's of probabilities which represent uncertainty. These cdf's are induced by randomizing parameters of cdf's and by randomizing properties of stochastic processes such as initial crack size distributions, marginal cdf's, and failure criteria
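A toy Monte Carlo version of the seismic-risk calculation can make the ingredients concrete: the component fragilities, the shared severity factor that induces dependence between responses, and the Boolean plant-failure function below are all invented for illustration, with lognormal responses and strengths as a conventional modeling assumption.

```python
import math
import random

def plant_failure_prob(trials=100_000, seed=0):
    """Monte Carlo P(plant failure | earthquake) for a toy plant that fails if
    component A fails OR both B and C fail (a Boolean function of component
    states). A component fails when its peak response exceeds its strength;
    a shared severity factor makes the component responses dependent."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        severity = rng.lognormvariate(0.0, 0.5)  # common earthquake load factor

        def fails(median_strength):
            response = severity * rng.lognormvariate(0.0, 0.3)
            strength = rng.lognormvariate(math.log(median_strength), 0.2)
            return response > strength

        a, b, c = fails(3.0), fails(2.0), fails(2.0)
        failures += a or (b and c)
    return failures / trials

p_fail = plant_failure_prob()
```

The shared `severity` draw is what makes B and C fail together far more often than independence would predict, which is the dependence the abstract emphasizes.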
Error Analysis of Ia Supernova and Query on Cosmic Dark Energy
Indian Academy of Sciences (India)
2016-01-27
Some serious faults have been found in the error analysis of observations for SNIa. Redoing the same error analysis of SNIa with our approach, it is found that the average total observational error of SNIa is clearly greater than 0.55, so we cannot decide whether the Universe is in accelerating expansion or not.
An error taxonomy system for analysis of haemodialysis incidents.
Gu, Xiuzhu; Itoh, Kenji; Suzuki, Satoshi
2014-12-01
This paper describes the development of a haemodialysis error taxonomy system for analysing incidents and predicting the safety status of a dialysis organisation. The error taxonomy system was developed by adapting an error taxonomy system which assumed no specific specialty to haemodialysis situations. Its application was conducted with 1,909 incident reports collected from two dialysis facilities in Japan. Over 70% of haemodialysis incidents were reported as problems or complications related to dialyser, circuit, medication and setting of dialysis condition. Approximately 70% of errors took place immediately before and after the four hours of haemodialysis therapy. Error types most frequently made in the dialysis unit were omission and qualitative errors. Failures or complications classified to staff human factors, communication, task and organisational factors were found in most dialysis incidents. Device/equipment/materials, medicine and clinical documents were most likely to be involved in errors. Haemodialysis nurses were involved in more incidents related to medicine and documents, whereas dialysis technologists made more errors with device/equipment/materials. This error taxonomy system is able to investigate incidents and adverse events occurring in the dialysis setting but is also able to estimate safety-related status of an organisation, such as reporting culture. © 2014 European Dialysis and Transplant Nurses Association/European Renal Care Association.
Probability problems in seismic risk analysis and load combinations for nuclear power plants
International Nuclear Information System (INIS)
George, L.L.
1983-01-01
This workshop describes some probability problems in power plant reliability and maintenance analysis. The problems are seismic risk analysis, loss of load probability, load combinations, and load sharing. The seismic risk problem is to compute power plant reliability given an earthquake and the resulting risk. Component survival occurs if its peak random response to the earthquake does not exceed its strength. Power plant survival is a complicated Boolean function of component failures and survivals. The responses and strengths of components are dependent random processes, and the peak responses are maxima of random processes. The resulting risk is the expected cost of power plant failure
THE PRACTICAL ANALYSIS OF FINITE ELEMENTS METHOD ERRORS
Directory of Open Access Journals (Sweden)
Natalia Bakhova
2011-03-01
Full Text Available Abstract. The most important practical questions of reliable estimation of finite element method errors are considered. Rules for defining the required calculation accuracy are developed. Methods and ways of calculation are offered that allow obtaining the best final results at economical expenditure of computing work. Keywords: error, given accuracy, finite element method, Lagrangian and Hermitian elements.
Error Analysis for Interferometric SAR Measurements of Ice Sheet Flow
DEFF Research Database (Denmark)
Mohr, Johan Jacob; Madsen, Søren Nørvang
1999-01-01
and slope errors in conjunction with a surface parallel flow assumption. The most surprising result is that assuming a stationary flow the east component of the three-dimensional flow derived from ascending and descending orbit data is independent of slope errors and of the vertical flow....
Factor Rotation and Standard Errors in Exploratory Factor Analysis
Zhang, Guangjian; Preacher, Kristopher J.
2015-01-01
In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…
Evaluation and Error Analysis for a Solar Thermal Receiver
International Nuclear Information System (INIS)
Pfander, M.
2001-01-01
In the following study a complete balance over the REFOS receiver module, mounted on the tower power plant CESA-1 at the Plataforma Solar de Almeria (PSA), is carried out. Additionally an error inspection of the various measurement techniques used in the REFOS project is made. Especially the flux measurement system Prohermes, which is used to determine the total entry power of the receiver module and known as a major error source, is analysed in detail. Simulations and experiments on the particular instruments are used to determine and quantify possible error sources. After discovering the origin of the errors they are reduced and included in the error calculation. The ultimate result is presented as an overall efficiency of the receiver module in dependence on the flux density at the receiver module's entry plane and the receiver operating temperature. (Author) 26 refs
Evaluation and Error Analysis for a Solar thermal Receiver
Energy Technology Data Exchange (ETDEWEB)
Pfander, M.
2001-07-01
In the following study a complete balance over the REFOS receiver module, mounted on the tower power plant CESA-1 at the Plataforma Solar de Almeria (PSA), is carried out. Additionally an error inspection of the various measurement techniques used in the REFOS project is made. Especially the flux measurement system Prohermes, which is used to determine the total entry power of the receiver module and known as a major error source, is analysed in detail. Simulations and experiments on the particular instruments are used to determine and quantify possible error sources. After discovering the origin of the errors they are reduced and included in the error calculation. The ultimate result is presented as an overall efficiency of the receiver module in dependence on the flux density at the receiver module's entry plane and the receiver operating temperature. (Author) 26 refs.
Chiu, Ming-Chuan; Hsieh, Min-Chih
2016-05-01
The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process complements shortages in existing methodologies by incorporating improvement efficiency, and it enhances the depth and broadness of human error analysis methodology. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
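A sketch of the ranking step, using crisp TOPSIS in place of the paper's fuzzy TOPSIS (the fuzzy variant aggregates fuzzy scores before the same ideal-solution distance logic); the error-factor scores and criterion weights below are invented, not taken from the study.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with (crisp) TOPSIS.
    matrix[i][j]: score of alternative i on criterion j;
    benefit[j]: True if larger is better on criterion j."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]

    def dist(row, ref):
        return math.sqrt(sum((row[j] - ref[j]) ** 2 for j in range(n)))

    # closeness to the ideal solution, in [0, 1]; higher ranks first
    return [dist(v[i], worst) / (dist(v[i], worst) + dist(v[i], ideal))
            for i in range(m)]

# hypothetical error factors scored on four criteria (all benefit-type here)
scores = topsis(
    [[7, 8, 6, 7],   # e.g. adverse physiological states
     [6, 7, 7, 8],   # e.g. physical/mental limitations
     [5, 5, 4, 5]],  # a dominated third factor
    weights=[0.3, 0.3, 0.2, 0.2],
    benefit=[True, True, True, True],
)
```

The third alternative is dominated on every criterion, so its closeness collapses to zero; in the study's setting the criteria would carry fuzzy memberships before this distance computation.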
Liang Yang
2013-06-01
In this paper, we consider the performance of a two-way amplify-and-forward relaying network (AF TWRN) in the presence of unequal-power co-channel interferers (CCI). Specifically, we first consider an AF TWRN with an interference-limited relay and two noisy nodes with channel estimation errors and CCI. We derive the approximate signal-to-interference plus noise ratio expressions and then use them to evaluate the outage probability, error probability, and achievable rate. Subsequently, to investigate the joint effects of the channel estimation error and CCI on the system performance, we extend our analysis to a multiple-relay network and derive several asymptotic performance expressions. For comparison purposes, we also provide the analysis for the relay selection scheme under the total power constraint at the relays. For an AF TWRN with channel estimation error and CCI, numerical results show that the performance of the relay selection scheme is not always better than that of the all-relay participating case. In particular, the relay selection scheme can improve the system performance in the case of high power levels at the sources and small powers at the relays.
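A much-simplified Monte Carlo companion to this kind of analysis: the outage probability of a one-way dual-hop AF link in Rayleigh fading, without the CCI and channel-estimation-error terms the paper models, using the standard end-to-end SNR form for a non-regenerative relay.

```python
import random

def outage_probability(avg_snr1, avg_snr2, threshold, trials=50_000, seed=3):
    """Monte Carlo outage probability of a dual-hop AF relay link in Rayleigh
    fading, with the end-to-end SNR g1*g2 / (g1 + g2 + 1)."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        g1 = rng.expovariate(1 / avg_snr1)  # per-hop SNRs are exponential
        g2 = rng.expovariate(1 / avg_snr2)  # under Rayleigh fading
        outages += g1 * g2 / (g1 + g2 + 1) < threshold
    return outages / trials

p_low_snr = outage_probability(10.0, 10.0, threshold=1.0)
p_high_snr = outage_probability(100.0, 100.0, threshold=1.0)
```

Adding interference and estimation-error terms to the SINR inside the loop is how such a simulation would be extended toward the system the paper analyzes.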
Analysis of intra-pulse frequency-modulated, low probability of ...
Indian Academy of Sciences (India)
In this paper, we investigate the problem of analysis of low probability of interception (LPI) radar signals with intra-pulse frequency modulation (FM) under low signal-to-noise ratio conditions from the perspective of an airborne electronic warfare (EW) digital receiver. EW receivers are designed to intercept and analyse threat ...
Four-dimensional targeting error analysis in image-guided radiotherapy
International Nuclear Information System (INIS)
Riboldi, M; Baroni, G; Sharp, G C; Chen, G T Y
2009-01-01
Image-guided therapy (IGT) involves acquisition and processing of biomedical images to actively guide medical interventions. The proliferation of IGT technologies has been particularly significant in image-guided radiotherapy (IGRT), as a way to increase tumor targeting accuracy. When IGRT is applied to moving tumors, image guidance becomes challenging, as motion leads to increased uncertainty. Different strategies may be applied to mitigate the effects of motion; each technique entails a different technological effort and complexity in treatment planning and delivery. An objective comparison of different motion mitigation strategies can be achieved by quantifying the residual uncertainties in tumor targeting, to be detected by means of IGRT technologies. Such quantification requires an extension of targeting error theory to a 4D space, where the 3D tumor trajectory is measured as a function of time (4D Targeting Error, 4DTE). 4DTE analysis can be represented by a motion probability density function describing the statistical fluctuations of the tumor trajectory. We illustrate the application of 4DTE analysis through examples, including weekly variations in tumor trajectory as detected by 4DCT, respiratory gating via external surrogates, and real-time tumor tracking.
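A motion probability density function of the kind described can be estimated from a sampled trajectory with a normalized histogram. The trace below is a synthetic breathing-like signal, not clinical data, and all amplitudes and rates are assumed:

```python
import numpy as np

# synthetic 1D tumor trace: breathing-like motion plus slow baseline drift (all values assumed)
t = np.linspace(0.0, 60.0, 6000)                           # 60 s of motion sampled at 100 Hz
z = 8.0 * np.sin(2.0 * np.pi * 0.25 * t) ** 2 + 0.02 * t   # superior-inferior position in mm

# motion PDF: normalized histogram of position occupancy along the trace
counts, edges = np.histogram(z, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# the PDF integrates to 1; its spread quantifies the residual targeting uncertainty
print(np.sum(counts * np.diff(edges)))
```

The spread (or any chosen quantile) of this density is what a margin or gating-window design would consume downstream.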
Spectrogram Image Analysis of Error Signals for Minimizing Impulse Noise
Directory of Open Access Journals (Sweden)
Jeakwan Kim
2016-01-01
This paper presents a theoretical and experimental study on spectrogram image analysis of error signals for minimizing impulse noise in active noise suppression. Impulse inputs with specific wave patterns are applied as primary noises to a one-dimensional duct 1800 mm in length. The convergence speed of the adaptive feedforward algorithm, based on the least-mean-square approach, is controlled by a normalized step size incorporated into the algorithm. The step size governs the stability as well as the convergence speed; for this reason, a normalized step size is introduced as a new method for the control of impulse noise. Spectrogram images, which indicate the degree of attenuation of the impulse input noises, are used to represent the attenuation achieved with the new method. The algorithm is extensively investigated in both simulation and real-time control experiments. It is demonstrated that the suggested algorithm works with good stability and performance against impulse noises. The results of this study can be used for practical active noise control systems.
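The normalized-step-size least-mean-square update described above can be sketched as follows. This is a simplified single-channel sketch that ignores the secondary acoustic path of a real ANC system; the filter order, step size, and primary-path coefficients are all assumed:

```python
import numpy as np

def nlms_cancel(x, d, order=32, mu=0.5, eps=1e-8):
    """Normalized-LMS adaptive filter: predict d from reference x, return the error signal."""
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order, len(x)):
        xn = x[n - order + 1:n + 1][::-1]          # most recent `order` samples, newest first
        y = w @ xn                                 # filter output (anti-noise estimate)
        e[n] = d[n] - y                            # residual after cancellation
        w += (mu / (eps + xn @ xn)) * e[n] * xn    # normalized step keeps adaptation stable
    return e

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)                      # reference noise
h = np.array([0.6, -0.3, 0.1])                     # unknown primary path (assumed)
d = np.convolve(x, h, mode="full")[:len(x)]        # noise reaching the error microphone
e = nlms_cancel(x, d)
print(np.mean(e[-500:] ** 2))                      # residual noise power after convergence
```

Dividing the step by the instantaneous input power (`xn @ xn`) is what decouples convergence speed from the amplitude of impulsive inputs, which is the stability property the abstract emphasizes.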
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-03-13
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
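The attenuation caused by classical measurement error, and its method-of-moments correction, can be illustrated with a simulation. This is a deliberately simple linear-regression sketch without the mixed-model structure, autocorrelation, or Berkson components of the study, and the error variance is treated as known:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200000
beta = 2.0
x = rng.normal(0.0, 1.0, n)            # true exposure, variance 1
u = rng.normal(0.0, 0.5, n)            # classical measurement error, variance 0.25
w = x + u                              # observed, error-prone exposure
y = beta * x + rng.normal(0.0, 1.0, n)

c = np.cov(w, y)
beta_naive = c[0, 1] / c[0, 0]         # attenuated OLS slope from error-prone exposure
lam = 1.0 / (1.0 + 0.25)               # reliability ratio sigma_x^2 / (sigma_x^2 + sigma_u^2)
beta_mom = beta_naive / lam            # method-of-moments correction
print(beta_naive, beta_mom)            # approximately 1.6 (attenuated) vs approximately 2.0
```

In practice the reliability ratio must itself be estimated (e.g. from replicate measurements), which is where the delta-method and bootstrap confidence intervals compared in the abstract come in.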
Effective training based on the cause analysis of operation errors
International Nuclear Information System (INIS)
Fujita, Eimitsu; Noji, Kunio; Kobayashi, Akira.
1991-01-01
The authors have investigated typical error types through their training experience and analyzed their causes. Error types observed in simulator training are: (1) lack of knowledge, or lack of the ability to apply it in actual operation; (2) defective mastery of skill-based operation; (3) rote or stereotyped operation; (4) mind-set, or lack of redundant verification; (5) lack of teamwork; (6) misjudgement of overall plant conditions by the operation chief, who directs a reactor operator and a turbine operator in the training. The paper describes training methods used in Japan by BWR utilities to overcome these error types
Zernikow, B; Michel, E; Fleischhack, G; Bode, U
1999-07-01
Drug errors are quite common. Many of them become harmful only if they remain undetected, ultimately resulting in injury to the patient. Errors with cytotoxic drugs are especially dangerous because of the highly toxic potential of the drugs involved. For medico-legal reasons, only 1 case of accidental iatrogenic intoxication by cytotoxic drugs tends to be investigated at a time, because the focus is placed on individual responsibility rather than on system errors. The aim of our study was to investigate whether accidental iatrogenic intoxications by cytotoxic drugs are faults of either the individual or the system. The statistical analysis of distribution and quality of such errors, and the in-depth analysis of contributing factors delivered a rational basis for the development of practical preventive strategies. A total of 134 cases of accidental iatrogenic intoxication by a cytotoxic drug (from literature reports since 1966 identified by an electronic literature survey, as well as our own unpublished cases) underwent a systematic error analysis based on a 2-dimensional model of error generation. Incidents were classified by error characteristics and point in time of occurrence, and their distribution was statistically evaluated. The theories of error research, informatics, sensory physiology, cognitive psychology, occupational medicine and management have helped to classify and depict potential sources of error as well as reveal clues for error prevention. Monocausal errors were the exception. In the majority of cases, a confluence of unfavourable circumstances either brought about the error, or prevented its timely interception. Most cases with a fatal outcome involved erroneous drug administration. Object-inherent factors were the predominant causes. A lack of expert as well as general knowledge was a contributing element. In error detection and prevention of error sequelae, supervision and back-checking are essential. Improvement of both the individual
Tao, Guocai; Chen, Yan; Wen, Changyun; Bi, Min
2011-12-01
Although a validated oscillometric sphygmomanometer satisfies the accuracy criteria of the Association for the Advancement of Medical Instrumentation (AAMI), its long-term blood pressure (BP) measurement error during operations remains to be determined. We aim to (a) compare the error range throughout surgical operations with the accuracy criteria of AAMI, and (b) investigate the probabilities of occurrence of abnormally large errors and clinically meaningful errors. BP levels were measured from 270 participants using oscillometry and arterial cannulation (the invasive method) in the same BP monitor throughout surgeries. The mean deviation and SD (oscillometry vs. invasive method) were calculated from 6640 sets of data and presented in Bland-Altman plots. Also, the average, largest, and smallest measurement errors (error_mean, error_max, and error_min) per patient were obtained. The probability distributions of the three types of errors were shown using histograms (percentage vs. SD). In addition, the clinically meaningful large errors (≥ 10 mmHg) of the adult patients when their systolic blood pressure (SBP) values were around 90 mmHg were investigated. The mean deviation (1.98 mmHg for SBP and 4.31 mmHg for diastolic blood pressure (DBP)) satisfies the AAMI criterion (≤ 5 mmHg), but the SD (14.87 mmHg for SBP and 11.21 mmHg for DBP) exceeds the AAMI criterion (≤ 8 mmHg). The probability of error_max exceeding 40 mmHg is 14% for SBP and 6% for DBP. The probability of error_mean exceeding 24 mmHg (4.07% for SBP and 1.48% for DBP), and that of error_min exceeding 24 mmHg (0.37% for SBP and 0.37% for DBP), are all greater than the criterion of 0.26%. Clinically meaningful errors are found in 28.78% of the adult patients. The SD of long-term BP measurement by our oscillometric method during operations exceeds the AAMI accuracy criteria. It is also important to be aware of the abnormally large errors and clinically meaningful errors, as their probabilities are rather significant. We analyze the
El-khateeb, Mahmoud M. A.
2016-01-01
This study aims to investigate the classes of errors made by preparatory-year students at King Saud University, through analysis of student responses to the items of the study test, and to identify the varieties and ratios of the common errors that occurred in solving inequalities. In the collection of the data,…
Error Analysis of Brailled Instructional Materials Produced by Public School Personnel in Texas
Herzberg, Tina
2010-01-01
In this study, a detailed error analysis was performed to determine if patterns of errors existed in braille transcriptions. The most frequently occurring errors were the insertion of letters or words that were not contained in the original print material; the incorrect usage of the emphasis indicator; and the incorrect formatting of titles,…
US-LHC IR magnet error analysis and compensation
International Nuclear Information System (INIS)
Wei, J.; Ptitsin, V.; Pilat, F.; Tepikian, S.; Gelfand, N.; Wan, W.; Holt, J.
1998-01-01
This paper studies the impact of the insertion-region (IR) magnet field errors on LHC collision performance. Compensation schemes including magnet orientation optimization, body-end compensation, tuning shims, and local nonlinear correction are shown to be highly effective
Applying hierarchical task analysis to medication administration errors
Lane, R; Stanton, NA; Harrison, DJ
2006-01-01
Medication use in hospitals is a complex process and depends on the successful interaction of health professionals working within different disciplines. Errors can occur at any of the five main stages: prescribing, documenting, dispensing or preparation, administering, and monitoring. Responsibility for an error is often placed on the nurse, as he or she is the last person in the drug administration chain, whilst more pressing underlying causal factors remain unresolved. ...
Analysis of Random Errors in Horizontal Sextant Angles
1980-09-01
sea horizon, bringing the direct and reflected images into coincidence, and reading the micrometer and vernier. This is repeated several times ... Differences due to the direction of rotation of the micrometer drum were examined, as well as the variability in the determination of sextant index error. ... minutes of arc respectively. In addition, systematic errors resulting from angular differences due to the direction of rotation of the micrometer drum
WORKING MEMORY STRUCTURE REVEALED IN ANALYSIS OF RECALL ERRORS
Directory of Open Access Journals (Sweden)
Regina V Ershova
2017-12-01
We analyzed working memory errors from 193 Russian college students taking the Tarnow Unchunkable Test, which uses double-digit items on a visual display. In three-item trials with at most one error per trial, single incorrect tens and ones digits (“singlets”) were overrepresented and made up the majority of errors, indicating a base-10 organization. These errors indicate that there are separate memory maps for each position and that there are pointers that move primarily within these maps. Several pointers make up a pointer collection, and the number of possible pointer collections is the working memory capacity limit. A model for self-organizing maps is constructed in which the organization is created by turning common pointer collections into maps, thereby replacing a pointer collection with a single pointer. The factors 5 and 11 were underrepresented in the errors, presumably because base-10 properties beyond positional order were used for error correction, perhaps reflecting the existence of additional maps of integers divisible by 5 and integers divisible by 11.
Analysis and research on curved surface's prototyping error based on FDM process
Gong, Y. D.; Zhang, Y. C.; Yang, T. B.; Wang, W. S.
2008-12-01
Methods for analyzing a curved surface's prototyping error in the FDM (Fused Deposition Modeling) process are introduced in this paper. The experimental results on curved-surface prototyping error are then analyzed, and the integrity of the point-cloud information and the surface-fitting method are discussed, as well as the influence of different software on the prototyping error. Finally, qualitative and quantitative conclusions on curved-surface prototyping error are drawn.
Error analysis of the freshmen Criminology students’ grammar in the written English
Directory of Open Access Journals (Sweden)
Maico Demi Banate Aperocho
2017-12-01
This study identifies the various syntactical errors of fifty (50) freshman B.S. Criminology students of the University of Mindanao in Davao City. Specifically, it aims to answer the following: (1) What are the common errors present in the argumentative essays of the respondents? (2) What are the reasons for the existence of these errors? The study is descriptive-qualitative and uses error analysis to point out the syntactical errors present in the participants' compositions. The fifty essays were subjected to error analysis, with errors classified based on Chanquoy's Classification of Writing Errors. Furthermore, Hourani's Common Reasons of Grammatical Errors Checklist was used to determine the common reasons for the identified syntactical errors. To create a meaningful interpretation of the data and to solicit further ideas from the participants, a focus group discussion was also conducted. Findings show that the students' most common errors are grammatical: errors were committed more frequently in the verb aspect (tense, subject agreement, and auxiliary and linker choice) than in spelling and punctuation. Moreover, there are three main reasons for committing errors in the paragraph: mother-tongue interference, incomprehensibility of the grammar rules, and incomprehensibility of the writing mechanics. Despite the difficulty of learning English as a second language, students remain highly motivated to master the concepts and applications of the language.
Error analysis of 3D-PTV through unsteady interfaces
Akutina, Yulia; Mydlarski, Laurent; Gaskin, Susan; Eiff, Olivier
2018-03-01
The feasibility of stereoscopic flow measurements through an unsteady optical interface is investigated. Position errors produced by a wavy optical surface are determined analytically, as are the optimal viewing angles of the cameras to minimize such errors. Two methods of measuring the resulting velocity errors are proposed. These methods are applied to 3D particle tracking velocimetry (3D-PTV) data obtained through the free surface of a water flow within a cavity adjacent to a shallow channel. The experiments were performed using two sets of conditions, one having no strong surface perturbations, and the other exhibiting surface gravity waves. In the latter case, the amplitude of the gravity waves was 6% of the water depth, resulting in water surface inclinations of about 0.2°. (The water depth is used herein as a relevant length scale, because the measurements are performed in the entire water column. In a more general case, the relevant scale is the maximum distance from the interface to the measurement plane, H, which here is the same as the water depth.) It was found that the contribution of the waves to the overall measurement error is low. The absolute position errors of the system were moderate (1.2% of H). However, given that the velocity is calculated from the relative displacement of a particle between two frames, the errors in the measured water velocities were reasonably small, because the error in the velocity is the relative position error over the average displacement distance. The relative position error was measured to be 0.04% of H, resulting in small velocity errors of 0.3% of the free-stream velocity (equivalent to 1.1% of the average velocity in the domain). It is concluded that even though the absolute positions to which the velocity vectors are assigned is distorted by the unsteady interface, the magnitude of the velocity vectors themselves remains accurate as long as the waves are slowly varying (have low curvature). The stronger the
Ansari, Imran Shafique
2015-08-12
In this work, we present a unified performance analysis of a free-space optical (FSO) link that accounts for pointing errors and both types of detection techniques (i.e., intensity modulation/direct detection (IM/DD) as well as heterodyne detection). More specifically, we present unified exact closed-form expressions for the cumulative distribution function, the probability density function, the moment generating function, and the moments of the end-to-end signal-to-noise ratio (SNR) of a single-link FSO transmission system, all in terms of the Meijer G function except for the moments, which are in terms of simple elementary functions. We then capitalize on these unified results to offer unified exact closed-form expressions for various performance metrics of FSO link transmission systems, such as the outage probability, the scintillation index (SI), the average error rate for binary and M-ary modulation schemes, and the ergodic capacity (except for the IM/DD technique, where we present closed-form lower-bound results), all in terms of Meijer G functions except for the SI, which is in terms of simple elementary functions. Additionally, we derive asymptotic results for all the expressions derived earlier in terms of the Meijer G function in the high-SNR regime in terms of simple elementary functions via an asymptotic expansion of the Meijer G function. We also derive new asymptotic expressions for the ergodic capacity in the low- as well as high-SNR regimes in terms of simple elementary functions by utilizing moments. All the presented results are verified via computer-based Monte-Carlo simulations.
Error Consistency in Acquired Apraxia of Speech With Aphasia: Effects of the Analysis Unit.
Haley, Katarina L; Cunningham, Kevin T; Eaton, Catherine Torrington; Jacks, Adam
2018-02-15
Diagnostic recommendations for acquired apraxia of speech (AOS) have been contradictory concerning whether speech sound errors are consistent or variable. Studies have reported divergent findings that, on face value, could argue either for or against error consistency as a diagnostic criterion. The purpose of this study was to explain discrepancies in error consistency results based on the unit of analysis (segment, syllable, or word) to help determine which diagnostic recommendation is most appropriate. We analyzed speech samples from 14 left-hemisphere stroke survivors with clinical diagnoses of AOS and aphasia. Each participant produced 3 multisyllabic words 5 times in succession. Broad phonetic transcriptions of these productions were coded for consistency of error location and type using the word and its constituent syllables and sound segments as units of analysis. Consistency of error type varied systematically with the unit of analysis, showing progressively greater consistency as the analysis unit changed from the word to the syllable and then to the sound segment. Consistency of error location varied considerably across participants and correlated positively with error frequency. Low to moderate consistency of error type at the word level confirms original diagnostic accounts of speech output and sound errors in AOS as variable in form. Moderate to high error type consistency at the syllable and sound levels indicate that phonetic error patterns are present. The results are complementary and logically compatible with each other and with the literature.
An advanced human reliability analysis methodology: analysis of cognitive errors focused on
International Nuclear Information System (INIS)
Kim, J. H.; Jeong, W. D.
2001-01-01
Conventional Human Reliability Analysis (HRA) methods such as THERP/ASEP, HCR, and SLIM have been criticised for their deficiency in analysing cognitive errors that occur during the operator's decision-making process. To overcome this limitation of the conventional methods, an advanced HRA method, the so-called 2nd-generation HRA method, covering both qualitative analysis and quantitative assessment of cognitive errors, is being developed based on the state-of-the-art theory of cognitive systems engineering and error psychology. The method was developed on the basis of a human decision-making model and the relation between cognitive functions and performance influencing factors. The application of the proposed method to two emergency operation tasks is presented
2016-12-01
and identifying sources of smuggled nuclear material; however, it may also be used to determine a material's origin in analysis of post detonation ... RIMS analysis. Within this equation from [10], the desired cross section for ionization is contained: N_ion/N_A = 1 - e^(-σ ω_ex U). After the curve fitting was complete, the ionization probability model was executed and the results
Preliminary Analysis of Effect of Random Segment Errors on Coronagraph Performance
Stahl, Mark T.; Shaklan, Stuart B.; Stahl, H. Philip
2015-01-01
“Are we alone in the Universe?” is probably the most compelling science question of our generation. To answer it requires a large-aperture telescope with extreme wavefront stability. To image and characterize Earth-like planets requires the ability to block 10^10 of the host star's light with a 10^-11 stability. For an internal coronagraph, this requires correcting wavefront errors and keeping that correction stable to a few picometers rms for the duration of the science observation. This requirement places severe specifications upon the performance of the observatory, telescope, and primary mirror. A key task of the AMTD project (initiated in FY12) is to define telescope-level specifications traceable to science requirements and flow those specifications to the primary mirror. From a systems perspective, probably the most important question is: what is the telescope wavefront stability specification? Previously, we suggested this specification should be 10 picometers per 10 minutes; considered how this specification relates to architecture, i.e. monolithic or segmented primary mirror; and asked whether it was better to have few or many segments. This paper reviews the 10-picometers-per-10-minutes specification; provides analysis related to the application of this specification to segmented apertures; and suggests that a 3- or 4-ring segmented aperture is more sensitive to segment rigid-body motion than an aperture with fewer or more segments.
Semiparametric analysis of linear transformation models with covariate measurement errors.
Sinha, Samiran; Ma, Yanyuan
2014-03-01
We take a semiparametric approach in fitting a linear transformation model to right-censored data when predictor variables are subject to measurement errors. We construct consistent estimating equations when repeated measurements of a surrogate of the unobserved true predictor are available. The proposed approach applies under minimal assumptions on the distributions of the true covariate or the measurement errors. We derive the asymptotic properties of the estimator and illustrate its finite-sample performance via simulation studies. We apply the method to analyze an AIDS clinical trial data set that motivated the work. © 2013, The International Biometric Society.
Secondary data analysis of large data sets in urology: successes and errors to avoid.
Schlomer, Bruce J; Copp, Hillary L
2014-03-01
Secondary data analysis is the use of data collected for research by someone other than the investigator. In the last several years there has been a dramatic increase in the number of these studies being published in urological journals and presented at urological meetings, especially those involving secondary analysis of large administrative data sets. Along with this expansion, skepticism toward secondary data analysis studies has increased among many urologists. In this narrative review we discuss the types of large data sets that are commonly used for secondary data analysis in urology, and the advantages and disadvantages of secondary data analysis. A literature search was performed to identify urological secondary data analysis studies published since 2008 using commonly used large data sets, and examples of high-quality studies published in high-impact journals are given. We outline an approach for performing a successful hypothesis- or goal-driven secondary data analysis study and highlight common errors to avoid. More than 350 secondary data analysis studies using large data sets have been published on urological topics since 2008, with likely many more presented at meetings but never published. Studies that were not hypothesis- or goal-driven have likely constituted some of these and have probably contributed to the increased skepticism toward this type of research. However, many high-quality, hypothesis-driven studies addressing research questions that would have been difficult to study with other methods have been performed in the last few years. Secondary data analysis is a powerful tool that can address questions which could not be adequately studied by another method. Knowledge of the limitations of secondary data analysis and of the data sets used is critical for a successful study. There are also important errors to avoid when planning and performing a secondary data analysis study. Investigators and the urological community need to strive to use
Processing and Probability Analysis of Pulsed Terahertz NDE of Corrosion under Shuttle Tile Data
Anastasi, Robert F.; Madaras, Eric I.; Seebo, Jeffrey P.; Ely, Thomas M.
2009-01-01
This paper examines data processing and probability analysis of pulsed terahertz NDE scans of corrosion defects under a Shuttle tile. Pulsed terahertz data collected from an aluminum plate with fabricated corrosion defects and covered with a Shuttle tile is presented. The corrosion defects imaged were fabricated by electrochemically etching areas of various diameter and depth in the plate. In this work, the aluminum plate echo signal is located in the terahertz time-of-flight data and a threshold is applied to produce a binary image of sample features. Feature location and area are examined and identified as corrosion through comparison with the known defect layout. The results are tabulated with hit, miss, or false call information for a probability of detection analysis that is used to identify an optimal processing threshold.
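The threshold-and-tally step described above can be sketched as follows. This uses a toy synthetic C-scan rather than terahertz data, and the defect layout and threshold value are assumed:

```python
import numpy as np

def pod_counts(amplitude_map, defect_mask, threshold):
    """Binarize a scan at `threshold` and tally hits, misses, and false calls per pixel."""
    detected = amplitude_map >= threshold
    hits = int(np.sum(detected & defect_mask))          # defect pixels correctly flagged
    misses = int(np.sum(~detected & defect_mask))       # defect pixels not flagged
    false_calls = int(np.sum(detected & ~defect_mask))  # clean pixels wrongly flagged
    return hits, misses, false_calls

# toy 2D scan: background noise plus one stronger square "corrosion" indication (synthetic)
rng = np.random.default_rng(7)
scan = rng.normal(0.0, 1.0, (64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 20:30] = True          # known defect layout, as in the fabricated plate
scan[mask] += 4.0

hits, misses, false_calls = pod_counts(scan, mask, threshold=2.0)
print(hits, misses, false_calls)
```

Sweeping the threshold and re-tallying these counts is what traces out the hit/false-call trade-off behind an optimal-threshold choice for probability-of-detection analysis.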
Water flux in animals: analysis of potential errors in the tritiated water method
International Nuclear Information System (INIS)
Nagy, K.A.; Costa, D.
1979-03-01
Laboratory studies indicate that tritiated water measurements of water flux are accurate to within -7 to +4% in mammals, but errors are larger in some reptiles. However, under conditions that can occur in field studies, errors may be much greater. Influx of environmental water vapor via lungs and skin can cause errors exceeding ±50% in some circumstances. If water flux rates in an animal vary through time, errors approach ±15% in extreme situations, but are near ±3% in more typical circumstances. Errors due to fractional evaporation of tritiated water may approach -9%. This error probably varies between species. Use of an inappropriate equation for calculating water flux from isotope data can cause errors exceeding ±100%. The following sources of error are either negligible or avoidable: use of isotope dilution space as a measure of body water volume, loss of nonaqueous tritium bound to excreta, binding of tritium with nonaqueous substances in the body, radiation toxicity effects, and small analytical errors in isotope measurements. Water flux rates measured with tritiated water should be within ±10% of actual flux rates in most situations
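The simplest commonly used turnover equation, which assumes a constant body-water pool and first-order isotope washout, can be sketched as follows. The pool size, activities, and interval are hypothetical, and, as the abstract notes, applying a constant-pool equation when pool size or flux actually varies is itself a source of large error:

```python
import math

def water_flux_ml_per_day(body_water_ml, c1, c2, days):
    """Water efflux under a constant pool W and first-order washout:
    C(t) = C1 * exp(-(r/W) * t)  =>  r = W * ln(C1/C2) / t."""
    return body_water_ml * math.log(c1 / c2) / days

# hypothetical field sample: 2 L body-water pool, specific activity halves over 7 days
r = water_flux_ml_per_day(2000.0, c1=100.0, c2=50.0, days=7.0)
print(round(r, 1))   # approximately 198 ml/day
```

Variants of this equation that allow the pool or the flux to change linearly over the interval are what distinguish the "appropriate" from the "inappropriate" calculations the abstract refers to.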
International Nuclear Information System (INIS)
Kang Kejun; Wang Xuewu; Gao Wenhuan
1999-01-01
After several decades of development, nanoscience/nanotechnology has become a scientific and technical frontier, with major trends foreseen in several disciplines. Considering the development of nanoscience/nanotechnology and the human-body environment in which nano systems are applicable, the author analyzes the feasibility of integrating present nuclear detection technologies with the monitoring of nano systems, and presents an analysis of the optimal choice
Pinkerton, David K; Reaser, Brooke C; Berrier, Kelsey L; Synovec, Robert E
2017-09-19
A new approach is presented to determine the probability of achieving a successful quantitative analysis for gas chromatography coupled with mass spectrometry (GC-MS). The proposed theory is based upon a probabilistic description of peak overlap in GC-MS separations to determine the probability of obtaining a successful quantitative analysis, which has as the lower limit of chromatographic resolution R_s some minimum chemometric resolution, R_s*; that is to say, successful quantitative analysis can be achieved when R_s ≥ R_s*. The value of R_s* must be experimentally determined and is dependent on the chemometric method to be applied. The approach presented makes use of the assumption that analyte peaks are independent and randomly distributed across the separation space or are at least locally random, namely, that each analyte represents an independent Bernoulli random variable, which is then used to predict the binomial probability of successful quantitative analysis. The theoretical framework is based on the chromatographic-saturation factor and chemometric-enhanced peak capacity. For a given separation, the probability of quantitative success can be improved via two pathways: a chromatographic-efficiency pathway that reduces the saturation of the sample, and a chemometric pathway that reduces R_s* and improves the chemometric-enhanced peak capacity. This theory is demonstrated through a simulation-based study to approximate the resolution limit, R_s*, of multivariate curve resolution-alternating least-squares (MCR-ALS). For this study, R_s* was determined to be ∼0.3, and depending on the analytical expectations for the quantitative bias and the obtained mass-spectral match value, a lower value of R_s* ∼ 0.2 may be achievable.
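The binomial reasoning can be sketched numerically. The following is a rough illustration under statistical-overlap assumptions (analytes independent and uniform on the separation axis); the specific formulas, names, and the rescaling of peak capacity by R_s* are simplifications for illustration, not the authors' derivation.

```python
import math

def p_quant_success(m_analytes, peak_capacity, rs_star, rs_ref=0.5):
    """Rough sketch of the probability of fully quantitative GC-MS analysis.

    Assumes analytes fall independently and uniformly on the separation
    axis (statistical-overlap reasoning).  Chemometrics relaxes the
    resolution needed for quantitation from rs_ref to rs_star, which
    inflates the effective (chemometric-enhanced) peak capacity; each
    analyte is then adequately resolved with probability exp(-2*alpha),
    and all m analytes succeed with the binomial product.
    """
    n_eff = peak_capacity * (rs_ref / rs_star)  # chemometric-enhanced capacity
    alpha = m_analytes / n_eff                  # effective saturation
    p_single = math.exp(-2.0 * alpha)           # one analyte resolvable
    return p_single ** m_analytes               # all analytes quantifiable

# Lowering rs_star (a more tolerant chemometric method such as MCR-ALS)
# raises the probability of quantitative success on the same column.
p_med = p_quant_success(30, 500, rs_star=0.3)
p_good = p_quant_success(30, 500, rs_star=0.2)
```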
Human error in strabismus surgery: Quantification with a sensitivity analysis
S. Schutte (Sander); J.R. Polling (Jan Roelof); F.C.T. van der Helm (Frans); H.J. Simonsz (Huib)
2009-01-01
Background: Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods: We identified the primary factors that influence
Human error in strabismus surgery : Quantification with a sensitivity analysis
Schutte, S.; Polling, J.R.; Van der Helm, F.C.T.; Simonsz, H.J.
2008-01-01
Background- Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods- We identified the primary factors that influence the outcome of
Geometric Error Analysis in Applied Calculus Problem Solving
Usman, Ahmed Ibrahim
2017-01-01
The paper investigates geometric errors students made as they tried to use their basic geometric knowledge in the solution of the Applied Calculus Optimization Problem (ACOP). Inaccuracies related to the drawing of geometric diagrams (visualization skills) and those associated with the application of basic differentiation concepts into ACOP…
Error analysis to improve the speech recognition accuracy on ...
Indian Academy of Sciences (India)
Telugu language is one of the most widely spoken south Indian languages. In the proposed Telugu speech recognition system, errors obtained from decoder are analysed to improve the performance of the speech recognition system. Static pronunciation dictionary plays a key role in the speech recognition accuracy.
Reading and Spelling Error Analysis of Native Arabic Dyslexic Readers
Abu-Rabia, Salim; Taha, Haitham
2004-01-01
This study was an investigation of reading and spelling errors of dyslexic Arabic readers ("n"=20) compared with two groups of normal readers: a young readers group, matched with the dyslexics by reading level ("n"=20) and an age-matched group ("n"=20). They were tested on reading and spelling of texts, isolated…
Linguistic Error Analysis on Students' Thesis Proposals
Pescante-Malimas, Mary Ann; Samson, Sonrisa C.
2017-01-01
This study identified and analyzed the common linguistic errors encountered by Linguistics, Literature, and Advertising Arts majors in their Thesis Proposal classes in the First Semester 2016-2017. The data were the drafts of the thesis proposals of the students from the three different programs. A total of 32 manuscripts were analyzed which was…
Pitch Error Analysis of Young Piano Students' Music Reading Performances
Rut Gudmundsdottir, Helga
2010-01-01
This study analyzed the music reading performances of 6-13-year-old piano students (N = 35) in their second year of piano study. The stimuli consisted of three piano pieces, systematically constructed to vary in terms of left-hand complexity and input simultaneity. The music reading performances were recorded digitally and a code of error analysis…
Young Children's Mental Arithmetic Errors: A Working-Memory Analysis.
Brainerd, Charles J.
1983-01-01
Presents a stochastic model for distinguishing mental arithmetic errors according to causes of failure. A series of experiments (1) studied questions of goodness of fit and model validity among four and five year olds and (2) used the model to measure the relative contributions of developmental improvements in short-term memory and arithmetical…
Oral Definitions of Newly Learned Words: An Error Analysis
Steele, Sara C.
2012-01-01
This study examined and compared patterns of errors in the oral definitions of newly learned words. Fifteen 9- to 11-year-old children with language learning disability (LLD) and 15 typically developing age-matched peers inferred the meanings of 20 nonsense words from four novel reading passages. After reading, children provided oral definitions…
Analysis of Students' Error in Learning of Quadratic Equations
Zakaria, Effandi; Ibrahim; Maat, Siti Mistima
2010-01-01
The purpose of the study was to determine the students' error in learning quadratic equation. The samples were 30 form three students from a secondary school in Jambi, Indonesia. Diagnostic test was used as the instrument of this study that included three components: factorization, completing the square and quadratic formula. Diagnostic interview…
International Nuclear Information System (INIS)
Jang, Inseok; Kim, Ar Ryum; Jung, Wondea; Seong, Poong Hyun
2016-01-01
Highlights: • The operation environment of MCRs in NPPs has changed by adopting new HSIs. • The operation action in NPP Advanced MCRs is performed by soft control. • New HRA framework should be considered in the HRA for advanced MCRs. • HRA framework for evaluation of soft control execution human error is suggested. • Suggested method will be helpful to analyze human reliability in advanced MCRs. - Abstract: Since the Three Mile Island (TMI)-2 accident, human error has been recognized as one of the main causes of Nuclear Power Plant (NPP) accidents, and numerous studies related to Human Reliability Analysis (HRA) have been carried out. Most of these methods were developed considering the conventional type of Main Control Rooms (MCRs). However, the operating environment of MCRs in NPPs has changed with the adoption of new Human-System Interfaces (HSIs) that are based on computer-based technologies. The MCRs that include these digital technologies, such as large display panels, computerized procedures, and soft controls, are called advanced MCRs. Among the many features of advanced MCRs, soft controls are a particularly important feature because operating actions in NPP advanced MCRs are performed by soft control. Due to the differences in interfaces between soft control and hardwired conventional type control, different Human Error Probabilities (HEPs) and a new HRA framework should be considered in the HRA for advanced MCRs. To this end, a new framework of an HRA method for evaluating soft control execution human error is suggested by performing a soft control task analysis and reviewing the literature regarding widely accepted human error taxonomies. Moreover, since most current HRA databases deal with operation in conventional MCRs and are not explicitly designed to deal with digital HSIs, empirical analysis of human error and error recovery considering soft controls under an advanced MCR mockup is carried out to collect human error data, which is
Dynamic Error Analysis Method for Vibration Shape Reconstruction of Smart FBG Plate Structure
Directory of Open Access Journals (Sweden)
Hesheng Zhang
2016-01-01
Shape reconstruction of aerospace plate structures is an important issue for the safe operation of aerospace vehicles. One way to achieve such reconstruction is by constructing a smart fiber Bragg grating (FBG) plate structure with discrete distributed FBG sensor arrays, using reconstruction algorithms in which error analysis of the reconstruction algorithm is a key link. Considering that traditional error analysis methods can only deal with static data, a new dynamic data error analysis method is proposed based on the LMS algorithm for shape reconstruction of smart FBG plate structures. Firstly, the smart FBG structure and the orthogonal curved network based reconstruction method are introduced. Then, a dynamic error analysis model is proposed for dynamic reconstruction error analysis. Thirdly, parameter identification is done for the proposed dynamic error analysis model based on the least mean square (LMS) algorithm. Finally, an experimental verification platform is constructed and experimental dynamic reconstruction analysis is done. Experimental results show that the dynamic characteristics of the reconstruction performance for the plate structure can be obtained accurately based on the proposed dynamic error analysis method. The proposed method can also be used for other data acquisition systems and data processing systems as a general error analysis method.
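The LMS parameter-identification step mentioned above can be illustrated on synthetic data. This sketch identifies a short FIR error model with the standard LMS update; the tap count, step size, and test system are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np

def lms_identify(x, d, n_taps=4, mu=0.05):
    """Identify an FIR model d[n] ~ w0*x[n] + w1*x[n-1] + ... with LMS."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1 : n + 1][::-1]  # x[n], x[n-1], ..., newest first
        e = d[n] - w @ u                     # instantaneous prediction error
        w += mu * e * u                      # stochastic-gradient step
    return w

# Synthetic check: recover a known 2-tap system from noiseless data
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
d = np.convolve(x, [0.7, -0.3])[: len(x)]    # d[n] = 0.7 x[n] - 0.3 x[n-1]
w_hat = lms_identify(x, d)                   # converges toward [0.7, -0.3, 0, 0]
```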
The application of two recently developed human reliability techniques to cognitive error analysis
International Nuclear Information System (INIS)
Gall, W.
1990-01-01
Cognitive error can lead to catastrophic consequences for manned systems, including those whose design renders them immune to the effects of physical slips made by operators. Four such events, pressurized water and boiling water reactor accidents which occurred recently, were analysed. The analysis identifies the factors which contributed to the errors and suggests practical strategies for error recovery or prevention. Two types of analysis were conducted: an unstructured analysis based on the analyst's knowledge of psychological theory, and a structured analysis using two recently-developed human reliability analysis techniques. In general, the structured techniques required less effort to produce results and these were comparable to those of the unstructured analysis. (author)
Spectral analysis of forecast error investigated with an observing system simulation experiment
Directory of Open Access Journals (Sweden)
Nikki C. Privé
2015-02-01
The spectra of analysis and forecast error are examined using the observing system simulation experiment framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office. A global numerical weather prediction model, the Global Earth Observing System version 5 with Gridpoint Statistical Interpolation data assimilation, is cycled for 2 months with once-daily forecasts to 336 hours to generate a Control case. Verification of forecast errors using the nature run (NR) as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mis-characterising the spatial scales at which the strongest growth occurs. The NR-verified error variances exhibit a complicated progression of growth, particularly for low wavenumber errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realisation of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.
Error analysis of terrestrial laser scanning data by means of spherical statistics and 3D graphs.
Cuartero, Aurora; Armesto, Julia; Rodríguez, Pablo G; Arias, Pedro
2010-01-01
This paper presents a complete analysis of the positional errors of terrestrial laser scanning (TLS) data based on spherical statistics and 3D graphs. Spherical statistics are preferred because of the 3D vectorial nature of the spatial error. Error vectors have three metric elements (one module and two angles) that were analyzed by spherical statistics. A study case has been presented and discussed in detail. Errors were calculated using 53 check points (CPs), whose coordinates were measured by a digitizer with submillimetre accuracy. The positional accuracy was analyzed by both the conventional method (modular error analysis) and the proposed method (angular error analysis) by 3D graphics and numerical spherical statistics. Two packages in the R programming language were developed to obtain the graphics automatically. The results indicated that the proposed method is advantageous as it offers a more complete analysis of the positional accuracy, such as the angular error component, uniformity of the vector distribution, and error isotropy, in addition to the modular error component obtained by linear statistics.
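The two analyses can be sketched together: modular errors by linear statistics and directional behaviour by spherical statistics. This is a minimal illustration on synthetic check points (the R packages from the paper are not reproduced); the isotropy summary here is simply the mean resultant length of the unit error vectors, an assumed stand-in for the fuller spherical tests.

```python
import numpy as np

def spherical_error_stats(measured, truth):
    """Split 3D positional errors into a module and two angles, and
    summarize directional uniformity with the mean resultant length R
    (R near 0: roughly isotropic errors; R near 1: a systematic bias)."""
    v = measured - truth                                  # error vectors (n, 3)
    module = np.linalg.norm(v, axis=1)                    # metric component
    theta = np.arccos(np.clip(v[:, 2] / module, -1, 1))   # colatitude
    phi = np.arctan2(v[:, 1], v[:, 0])                    # azimuth
    unit = v / module[:, None]
    r_bar = np.linalg.norm(unit.mean(axis=0))             # mean resultant length
    return module, theta, phi, r_bar

# 53 synthetic check points with ~2 mm isotropic noise (units: metres)
rng = np.random.default_rng(0)
truth = rng.uniform(0.0, 10.0, size=(53, 3))
measured = truth + rng.normal(0.0, 0.002, size=(53, 3))
module, theta, phi, r_bar = spherical_error_stats(measured, truth)
```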
Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment
Prive, N. C.; Errico, Ronald M.
2015-01-01
The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASAGMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low wave number errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.
Probability of Failure Analysis Standards and Guidelines for Expendable Launch Vehicles
Wilde, Paul D.; Morse, Elisabeth L.; Rosati, Paul; Cather, Corey
2013-09-01
Recognizing the central importance of probability of failure estimates to ensuring public safety for launches, the Federal Aviation Administration (FAA), Office of Commercial Space Transportation (AST), the National Aeronautics and Space Administration (NASA), and U.S. Air Force (USAF), through the Common Standards Working Group (CSWG), developed a guide for conducting valid probability of failure (POF) analyses for expendable launch vehicles (ELV), with an emphasis on POF analysis for new ELVs. A probability of failure analysis for an ELV produces estimates of the likelihood of occurrence of potentially hazardous events, which are critical inputs to launch risk analysis of debris, toxic, or explosive hazards. This guide is intended to document a framework for POF analyses commonly accepted in the US, and should be useful to anyone who performs or evaluates launch risk analyses for new ELVs. The CSWG guidelines provide performance standards and definitions of key terms, and are being revised to address allocation to flight times and vehicle response modes. The POF performance standard allows a launch operator to employ alternative, potentially innovative methodologies so long as the results satisfy the performance standard. Current POF analysis practice at US ranges includes multiple methodologies described in the guidelines as accepted methods, but not necessarily the only methods available to demonstrate compliance with the performance standard. The guidelines include illustrative examples for each POF analysis method, which are intended to illustrate an acceptable level of fidelity for ELV POF analyses used to ensure public safety. The focus is on providing guiding principles rather than "recipe lists." Independent reviews of these guidelines were performed to assess their logic, completeness, accuracy, self- consistency, consistency with risk analysis practices, use of available information, and ease of applicability. The independent reviews confirmed the
Study on error analysis and accuracy improvement for aspheric profile measurement
Gao, Huimin; Zhang, Xiaodong; Fang, Fengzhou
2017-06-01
Aspheric surfaces are important to optical systems and need high-precision surface metrology. Stylus profilometry is currently the most common approach to measuring axially symmetric elements. However, if the asphere has rotational alignment errors, the wrong cresting point will be located, yielding significantly incorrect surface errors. This paper studied the simulated results of an asphere with rotational angles around the X-axis and Y-axis, and the stylus tip shift in the X, Y and Z directions. Results show that the same absolute value of rotational error around the X-axis causes the same profile errors, and different values of rotational error around the Y-axis cause profile errors with different tilt angles. Moreover, the greater the rotational errors, the bigger the peak-to-valley value of the profile errors. To identify the rotational angles about the X-axis and Y-axis, algorithms are performed to analyze the X-axis and Y-axis rotational angles respectively. Then the actual profile errors with multiple profile measurements around the X-axis are calculated according to the proposed analysis flow chart. The aim of the multiple-measurement strategy is to achieve the zero position of the X-axis rotational error. Experimental results prove that the proposed algorithms achieve accurate profile errors for aspheric surfaces, avoiding both X-axis and Y-axis rotational errors. Finally, a measurement strategy for aspheric surfaces is presented systematically.
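The effect of a rotational alignment error can be illustrated with a toy profile. The sketch below tilts a parabolic profile (standing in for an asphere) about the Y-axis, resamples the rotated trace on the nominal grid, and reports the peak-to-valley residual; the surface, angles, and grid are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def profile_pv_after_tilt(angle_deg, c=0.01, half_width=10.0, n=2001):
    """Tilt a parabolic profile z = c*x^2 about the Y-axis, resample the
    rotated trace on the nominal x grid, and return the peak-to-valley
    of the residual against the nominal profile."""
    x = np.linspace(-half_width, half_width, n)
    z = c * x**2
    t = np.radians(angle_deg)
    xr = x * np.cos(t) + z * np.sin(t)       # rotated coordinates
    zr = -x * np.sin(t) + z * np.cos(t)
    z_meas = np.interp(x, xr, zr)            # resample on the nominal grid
    resid = z_meas - z
    return resid.max() - resid.min()

# Larger rotational error -> larger peak-to-valley profile error
pv_small = profile_pv_after_tilt(0.1)
pv_large = profile_pv_after_tilt(1.0)
```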
Slow Learner Errors Analysis in Solving Fractions Problems in Inclusive Junior High School Class
Novitasari, N.; Lukito, A.; Ekawati, R.
2018-01-01
A slow learner, whose IQ is between 71 and 89, will have difficulties in solving mathematics problems that often lead to errors. These errors can be analyzed to determine where they occur and their types. This research is a qualitative descriptive study which aims to describe the locations, types, and causes of the errors a slow learner makes in solving fraction problems in an inclusive junior high school class. The subject of this research is one slow learner, a seventh-grade student, who was selected through direct observation by the researcher and through discussion with the mathematics teacher and the special tutor who handles the slow learner students. Data collection methods used in this study are written tasks and semi-structured interviews. The collected data was analyzed with Newman's Error Analysis (NEA). Results show that there are four locations of errors, namely comprehension, transformation, process skills, and encoding errors. There are four types of errors, namely concept, principle, algorithm, and counting errors. The results of this error analysis will help teachers to identify the causes of the errors made by the slow learner.
English Language Error Analysis of the Written Texts Produced by Ukrainian Learners: Data Collection
Directory of Open Access Journals (Sweden)
Lessia Mykolayivna Kotsyuk
2015-12-01
Recently, studies of second language acquisition have tended to focus on learners' errors, as they help to predict the difficulties involved in acquiring a second language. Thus, teachers can be made aware of the difficult areas to be encountered by the students and pay special attention and devote emphasis to them. The research goals of the article are to define what error analysis is and why it is important in the L2 teaching process, to state the significance of corpus studies in identifying different types of errors and mistakes, and to provide the results of an error analysis of the corpus of written texts produced by Ukrainian learners. In this article, major types of errors in English as a second language for Ukrainian students are mentioned.
McGuire, Patrick
2013-01-01
This article describes how a free, web-based intelligent tutoring system, (ASSISTment), was used to create online error analysis items for preservice elementary and secondary mathematics teachers. The online error analysis items challenged preservice teachers to analyze, diagnose, and provide targeted instructional remediation intended to help…
Directory of Open Access Journals (Sweden)
Ngoc Phuc Le
2017-01-01
We study the performance of the secondary relay system in a power-beacon (PB) assisted energy harvesting cognitive relay wireless network. In our system model, a secondary source node and a relay node first harvest energy from distributed PBs. Then, the source node transmits its data to the destination node with the help of the relay node. Also, fading coefficients of the links from the PBs to the source node and relay node are assumed independent but not necessarily identically distributed (i.n.i.d.) Nakagami-m random variables. We derive exact expressions for the power outage probability and the channel outage probability. Based on that, we analyze the total outage probability of the secondary relay system. Asymptotic analysis is also performed, which provides insights into the system behavior. Moreover, we evaluate impacts of the primary network on the performance of the secondary network with respect to the tolerant interference threshold at the primary receiver as well as the interference introduced by the primary transmitter at the secondary source and relay nodes. Simulation results are provided to validate the analysis.
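Outage probabilities of this kind are routinely cross-checked by Monte Carlo simulation. The sketch below estimates the channel outage probability of a single Nakagami-m link (power gain Gamma-distributed) and compares the m = 1 case against the well-known Rayleigh closed form; this single-link model is a deliberate simplification of the relay network analyzed in the paper.

```python
import numpy as np

def outage_probability(m, omega, snr_db, rate, n_trials=200_000, seed=0):
    """Monte Carlo channel outage probability of one Nakagami-m link.

    The channel power gain |h|^2 is Gamma(m, omega/m); an outage occurs
    when log2(1 + SNR*|h|^2) falls below the target rate (bits/s/Hz)."""
    rng = np.random.default_rng(seed)
    gain = rng.gamma(shape=m, scale=omega / m, size=n_trials)
    snr = 10.0 ** (snr_db / 10.0)
    return float(np.mean(np.log2(1.0 + snr * gain) < rate))

# m = 1 reduces to Rayleigh fading, which has the closed form
# P_out = 1 - exp(-(2^R - 1)/SNR) to check against.
p_mc = outage_probability(m=1.0, omega=1.0, snr_db=10.0, rate=1.0)
p_exact = 1.0 - np.exp(-(2.0**1.0 - 1.0) / 10.0)
```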
Error analysis of pupils in calculating with fractions
Uranič, Petra
2016-01-01
In this thesis I examine the correlation between the frequency of errors that seventh grade pupils make in their calculations with fractions and their level of understanding of fractions. Fractions are a relevant and demanding theme in the mathematics curriculum. Although we use fractions on a daily basis, pupils find learning fractions to be very difficult. They generally do not struggle with the concept of fractions itself, but they frequently have problems with mathematical operations ...
Analysis of Periodic Errors for Synthesized-Reference-Wave Holography
Directory of Open Access Journals (Sweden)
V. Schejbal
2009-12-01
Synthesized-reference-wave holographic techniques offer relatively simple and cost-effective measurement of antenna radiation characteristics and reconstruction of complex aperture fields using near-field intensity-pattern measurement. These methods allow utilization of the advantages of probe-compensation methods for amplitude and phase near-field measurements for planar and cylindrical scanning, including accuracy analyses. The paper analyzes periodic errors, which can be created during scanning, using both theoretical results and numerical simulations.
DEFF Research Database (Denmark)
Nielsen, Søren R. K.; Peng, Yongbo; Sichani, Mahdi Teimouri
2016-01-01
The paper deals with the response and reliability analysis of hysteretic or geometric nonlinear uncertain dynamical systems of arbitrary dimensionality driven by stochastic processes. The approach is based on the probability density evolution method proposed by Li and Chen (Stochastic dynamics of structures, 1st edn. Wiley, London, 2009; Probab Eng Mech 20(1):33–44, 2005), which circumvents the dimensional curse of traditional methods for the determination of non-stationary probability densities based on Markov process assumptions and the numerical solution of the related Fokker–Planck and Kolmogorov–Feller equations. The main obstacle of the method is that a multi-dimensional convolution integral needs to be carried out over the sample space of a set of basic random variables, for which reason the number of these needs to be relatively low. In order to handle this problem an approach is suggested, which
Rank-Ordered Multifractal Analysis (ROMA) of probability distributions in fluid turbulence
Directory of Open Access Journals (Sweden)
C. C. Wu
2011-04-01
Rank-Ordered Multifractal Analysis (ROMA) was introduced by Chang and Wu (2008) to describe the multifractal characteristic of intermittent events. The procedure provides a natural connection between the rank-ordered spectrum and the idea of one-parameter scaling for monofractals. This technique has successfully been applied to MHD turbulence simulations and turbulence data observed in various space plasmas. In this paper, the technique is applied to the probability distributions in the inertial range of the turbulent fluid flow, as given in the vast Johns Hopkins University (JHU) turbulence database. In addition, a new way of finding the continuous ROMA spectrum and the scaled probability distribution function (PDF) simultaneously is introduced.
Analysis of error type and frequency in apraxia of speech among Portuguese speakers
Directory of Open Access Journals (Sweden)
Maysa Luchesi Cera
Most studies characterizing errors in the speech of patients with apraxia involve English language. Objectives: To analyze the types and frequency of errors produced by patients with apraxia of speech whose mother tongue was Brazilian Portuguese. Methods: 20 adults with apraxia of speech caused by stroke were assessed. The types of error committed by patients were analyzed both quantitatively and qualitatively, and frequencies compared. Results: We observed the presence of substitution, omission, trial-and-error, repetition, self-correction, anticipation, addition, reiteration and metathesis, in descending order of frequency, respectively. Omission type errors were one of the most commonly occurring whereas addition errors were infrequent. These findings differed to those reported in English speaking patients, probably owing to differences in the methodologies used for classifying error types; the inclusion of speakers with apraxia secondary to aphasia; and the difference in the structure of Portuguese language to English in terms of syllable onset complexity and effect on motor control. Conclusions: The frequency of omission and addition errors observed differed to the frequency reported for speakers of English.
Analysis of error type and frequency in apraxia of speech among Portuguese speakers.
Cera, Maysa Luchesi; Minett, Thaís Soares Cianciarullo; Ortiz, Karin Zazo
2010-01-01
Most studies characterizing errors in the speech of patients with apraxia involve English language. To analyze the types and frequency of errors produced by patients with apraxia of speech whose mother tongue was Brazilian Portuguese. 20 adults with apraxia of speech caused by stroke were assessed. The types of error committed by patients were analyzed both quantitatively and qualitatively, and frequencies compared. We observed the presence of substitution, omission, trial-and-error, repetition, self-correction, anticipation, addition, reiteration and metathesis, in descending order of frequency, respectively. Omission type errors were one of the most commonly occurring whereas addition errors were infrequent. These findings differed to those reported in English speaking patients, probably owing to differences in the methodologies used for classifying error types; the inclusion of speakers with apraxia secondary to aphasia; and the difference in the structure of Portuguese language to English in terms of syllable onset complexity and effect on motor control. The frequency of omission and addition errors observed differed to the frequency reported for speakers of English.
Analysis of Sources of Large Positioning Errors in Deterministic Fingerprinting.
Torres-Sospedra, Joaquín; Moreira, Adriano
2017-11-27
Wi-Fi fingerprinting is widely used for indoor positioning and indoor navigation due to the ubiquity of wireless networks, high proliferation of Wi-Fi-enabled mobile devices, and its reasonable positioning accuracy. The assumption is that the position can be estimated based on the received signal strength intensity from multiple wireless access points at a given point. The positioning accuracy, within a few meters, enables the use of Wi-Fi fingerprinting in many different applications. However, it has been detected that the positioning error might be very large in a few cases, which might prevent its use in applications with high accuracy positioning requirements. Hybrid methods are the new trend in indoor positioning since they benefit from multiple diverse technologies (Wi-Fi, Bluetooth, and Inertial Sensors, among many others) and, therefore, they can provide a more robust positioning accuracy. In order to have an optimal combination of technologies, it is crucial to identify when large errors occur and prevent the use of extremely bad positioning estimations in hybrid algorithms. This paper investigates why large positioning errors occur in Wi-Fi fingerprinting and how to detect them by using the received signal strength intensities.
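A deterministic fingerprinting estimator of the kind analyzed above can be sketched in a few lines: k-NN in RSS space over a toy radio map built from a log-distance path-loss model. All numbers (AP positions, path-loss constants, noise level) are illustrative assumptions, not from the paper's datasets.

```python
import numpy as np

def knn_position(rss_query, rss_db, positions, k=3):
    """Deterministic k-NN fingerprinting: average the positions of the k
    reference fingerprints closest to the query in RSS space."""
    dist = np.linalg.norm(rss_db - rss_query, axis=1)
    nearest = np.argsort(dist)[:k]
    return positions[nearest].mean(axis=0)

# Toy radio map: two access points and a log-distance path-loss model
aps = np.array([[0.0, 0.0], [10.0, 0.0]])

def rss(p):
    d = np.linalg.norm(aps - p, axis=1) + 0.1   # avoid log of zero
    return -40.0 - 20.0 * np.log10(d)

grid = np.array([(x, y) for x in range(11) for y in range(11)], dtype=float)
rss_db = np.array([rss(p) for p in grid])

rng = np.random.default_rng(0)
true_pos = np.array([5.0, 5.0])
est = knn_position(rss(true_pos) + rng.normal(0.0, 1.0, 2), rss_db, grid)
err = float(np.linalg.norm(est - true_pos))      # positioning error (m)
```

With only two APs, symmetric geometries can make distant fingerprints nearly indistinguishable in RSS space, which is one simple way the large-error cases studied in the paper arise.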
Simultaneous control of error rates in fMRI data analysis.
Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David
2015-12-01
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to "cleaner"-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. Copyright © 2015 Elsevier Inc. All rights reserved.
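The voxel-wise likelihood ratio at the heart of this approach is simple to compute for a Gaussian test statistic. The sketch below evaluates L = f(y | mu=delta) / f(y | mu=0); by the universal bound, P(L >= k | null) <= 1/k, which is why both error rates can shrink together as the evidence threshold k grows. The particular numbers are illustrative assumptions.

```python
import numpy as np

def voxel_likelihood_ratio(y, sigma, delta):
    """L = f(y | mu=delta) / f(y | mu=0) for a Gaussian voxel statistic.

    log L = (y*delta - delta^2/2) / sigma^2; evidence favours activation
    when L exceeds a chosen threshold k, with P(L >= k | null) <= 1/k."""
    return np.exp((y * delta - delta**2 / 2.0) / sigma**2)

# A voxel whose statistic sits at the alternative mean gives strong
# evidence for activation; one at the null mean, strong evidence against.
lr_active = voxel_likelihood_ratio(y=3.0, sigma=1.0, delta=3.0)  # exp(4.5)
lr_null = voxel_likelihood_ratio(y=0.0, sigma=1.0, delta=3.0)    # exp(-4.5)
```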
Royle, J. Andrew; Chandler, Richard B.; Yackulic, Charles; Nichols, James D.
2012-01-01
1. Understanding the factors affecting species occurrence is a pre-eminent focus of applied ecological research. However, direct information about species occurrence is lacking for many species. Instead, researchers sometimes have to rely on so-called presence-only data (i.e. when no direct information about absences is available), which often results from opportunistic, unstructured sampling. MAXENT is a widely used software program designed to model and map species distribution using presence-only data. 2. We provide a critical review of MAXENT as applied to species distribution modelling and discuss how it can lead to inferential errors. A chief concern is that MAXENT produces a number of poorly defined indices that are not directly related to the actual parameter of interest – the probability of occurrence (ψ). This focus on an index was motivated by the belief that it is not possible to estimate ψ from presence-only data; however, we demonstrate that ψ is identifiable using conventional likelihood methods under the assumptions of random sampling and constant probability of species detection. 3. The model is implemented in a convenient r package which we use to apply the model to simulated data and data from the North American Breeding Bird Survey. We demonstrate that MAXENT produces extreme under-predictions when compared to estimates produced by logistic regression which uses the full (presence/absence) data set. We note that MAXENT predictions are extremely sensitive to specification of the background prevalence, which is not objectively estimated using the MAXENT method. 4. As with MAXENT, formal model-based inference requires a random sample of presence locations. Many presence-only data sets, such as those based on museum records and herbarium collections, may not satisfy this assumption. However, when sampling is random, we believe that inference should be based on formal methods that facilitate inference about interpretable ecological quantities
Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis
Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.
2017-01-01
This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
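A first-order sketch of the kind of round-off error propagation such tools bound formally; this is a plain numerical accumulation, not PRECiSA's certified symbolic semantics:

```python
import sys

# Unit roundoff u for IEEE 754 binary64: each arithmetic result is
# correctly rounded, so fl(a + b) = (a + b) * (1 + d) with |d| <= u.
U = sys.float_info.epsilon / 2

def add_with_bound(a, ea, b, eb):
    """Propagate a first-order round-off error bound through addition:
    the result inherits the operands' accumulated bounds ea and eb and
    picks up a fresh rounding error of at most U * |result|."""
    s = a + b
    return s, ea + eb + abs(s) * U
```

Chaining such rules over an expression tree yields a conservative bound on the total round-off error of the computed value relative to the exact real-arithmetic result.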
Error Floor Analysis of Coded Slotted ALOHA over Packet Erasure Channels
DEFF Research Database (Denmark)
Ivanov, Mikhail; Graell i Amat, Alexandre; Brannstrom, F.
2014-01-01
We present a framework for the analysis of the error floor of coded slotted ALOHA (CSA) for finite frame lengths over the packet erasure channel. The error floor is caused by stopping sets in the corresponding bipartite graph, whose enumeration is, in general, not a trivial problem. We therefore identify the most dominant stopping sets for the distributions of practical interest. The derived analytical expressions allow us to accurately predict the error floor at low to moderate channel loads and characterize the unequal error protection inherent in CSA.
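A Monte Carlo sketch of the CSA error floor: users left unresolved after iterative interference cancellation correspond to stopping sets in the bipartite graph. The frame parameters and the fixed repetition degree are illustrative (the paper treats general degree distributions analytically):

```python
import random

def simulate_csa(n_users=50, n_slots=100, degree=2, trials=100, seed=1):
    """Monte Carlo packet loss rate of regular-degree coded slotted
    ALOHA with ideal successive interference cancellation: users left
    unresolved when no degree-1 slot remains form a stopping set."""
    rng = random.Random(seed)
    lost = 0
    for _ in range(trials):
        slots = [set() for _ in range(n_slots)]
        for u in range(n_users):
            for s in rng.sample(range(n_slots), degree):  # replica slots
                slots[s].add(u)
        resolved = set()
        progress = True
        while progress:  # peel slots containing a single unresolved user
            progress = False
            for slot in slots:
                pending = slot - resolved
                if len(pending) == 1:
                    resolved.add(pending.pop())
                    progress = True
        lost += n_users - len(resolved)
    return lost / (n_users * trials)
```

At low loads the residual loss rate flattens into the error floor; raising the load toward the decoding threshold makes the loss rate climb steeply.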
ERROR ANALYSIS IN THE TRAVEL WRITING MADE BY THE STUDENTS OF ENGLISH STUDY PROGRAM
Directory of Open Access Journals (Sweden)
Vika Agustina
2015-05-01
This study was conducted to identify the kinds of errors in surface strategy taxonomy and to determine the dominant type of errors made by fifth-semester students of the English Department of one state university in Malang, Indonesia, in producing their travel writing. The study is a document analysis, since it analyses written materials, in this case travel writing texts. The analysis finds that the grammatical errors made by the students, based on surface strategy taxonomy theory, consist of four types: (1) omission, (2) addition, (3) misformation and (4) misordering. The most frequent misformation errors occur in the use of tense form; the second most frequent are omissions of noun/verb inflection. In addition, many clauses contain unnecessary added phrases.
The high order dispersion analysis based on first-passage-time probability in financial markets
Liu, Chenggong; Shang, Pengjian; Feng, Guochen
2017-04-01
The study of first-passage-time (FPT) events in financial time series has attracted broad research interest recently, as it can provide reference for risk management and investment. In this paper, a new measure, high order dispersion (HOD), is developed based on FPT probability to explore financial time series. The tick-by-tick data of three Chinese stock markets and three American stock markets are investigated. We classify the financial markets successfully by analyzing the scaling properties of the FPT probabilities of the six stock markets and employing the HOD method to compare the differences of the FPT decay curves. Applying the HOD method, we conclude that long-range correlation, a fat-tailed broad probability density function and its coupling with nonlinearity mainly lead to the multifractality of financial time series. Furthermore, we use the fluctuation function of multifractal detrended fluctuation analysis (MF-DFA) to distinguish markets and obtain results consistent with the HOD method, whereas the HOD method is capable of fractionizing the stock markets effectively in the same region. We believe that such explorations are relevant for a better understanding of financial market mechanisms.
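A minimal illustration of extracting first-passage times from a series, the quantity underlying the FPT probabilities analysed above (the deviation threshold is an illustrative parameter):

```python
def first_passage_times(series, threshold):
    """For each starting index, the number of steps until the series
    first deviates from its starting value by at least `threshold`;
    starts that never reach the threshold contribute nothing."""
    fpts = []
    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            if abs(series[j] - series[i]) >= threshold:
                fpts.append(j - i)
                break
    return fpts
```

The empirical distribution of these times, computed over a range of thresholds, gives the FPT probability whose decay curves the HOD measure compares across markets.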
Geospatial Analysis of Earthquake Damage Probability of Water Pipelines Due to Multi-Hazard Failure
Directory of Open Access Journals (Sweden)
Mohammad Eskandari
2017-06-01
The main purpose of this study is to develop a Geospatial Information System (GIS) model with the ability to assess the seismic damage to pipelines for two well-known hazards, ground shaking and ground failure, simultaneously. The model developed and used in this study includes four main parts: database implementation, seismic hazard analysis, vulnerability assessment and seismic damage assessment to determine the pipelines' damage probability. This model was implemented for the main water distribution pipelines of Iran and tested for two different earthquake scenarios. The final damage probability was estimated to be about 74% for the water distribution pipelines of Mashhad, comprising 40% for leaks and 34% for breaks. In the next step, the impact of each earthquake input parameter on the model was extracted, and each of the three parameters had a huge impact on the resulting pipeline damage probability. Finally, the dependency of the model on liquefaction susceptibility, landslide susceptibility, vulnerability functions and segment length was examined, showing that the model is sensitive only to liquefaction susceptibility and vulnerability functions.
LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.
2011-01-01
This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.
Oflazer, Kemal
1995-01-01
Error-tolerant recognition enables the recognition of strings that deviate mildly from any string in the regular set recognized by the underlying finite state recognizer. Such recognition has applications in error-tolerant morphological processing, spelling correction, and approximate string matching in information retrieval. After a description of the concepts and algorithms involved, we give examples from two applications: In the context of morphological analysis, error-tolerant recognition...
Analysis of Free-Space Coupling to Photonic Lanterns in the Presence of Tilt Errors
2017-05-01
Yarnall, Timothy M.; Geisler, David J.; Schieler, Curt M.
Free-space coupling to photonic lanterns is more tolerant to tilt errors and F-number mismatch than ... these errors. Photonic lanterns provide a means for transitioning from the free-space regime to the single-mode fiber (SMF) regime by
International Nuclear Information System (INIS)
Halliwell, J. J.
2009-01-01
In the quantization of simple cosmological models (minisuperspace models) described by the Wheeler-DeWitt equation, an important step is the construction, from the wave function, of a probability distribution answering various questions of physical interest, such as the probability of the system entering a given region of configuration space at any stage in its entire history. A standard but heuristic procedure is to use the flux of (components of) the wave function in a WKB approximation. This gives sensible semiclassical results but lacks an underlying operator formalism. In this paper, we address the issue of constructing probability distributions linked to the Wheeler-DeWitt equation using the decoherent histories approach to quantum theory. The key step is the construction of class operators characterizing questions of physical interest. Taking advantage of a recent decoherent histories analysis of the arrival time problem in nonrelativistic quantum mechanics, we show that the appropriate class operators in quantum cosmology are readily constructed using a complex potential. The class operator for not entering a region of configuration space is given by the S matrix for scattering off a complex potential localized in that region. We thus derive the class operators for entering one or more regions in configuration space. The class operators commute with the Hamiltonian, have a sensible classical limit, and are closely related to an intersection number operator. The definitions of class operators given here handle the key case in which the underlying classical system has multiple crossings of the boundaries of the regions of interest. We show that oscillatory WKB solutions to the Wheeler-DeWitt equation give approximate decoherence of histories, as do superpositions of WKB solutions, as long as the regions of configuration space are sufficiently large. The corresponding probabilities coincide, in a semiclassical approximation, with standard heuristic procedures
Dependence in probabilistic modeling Dempster-Shafer theory and probability bounds analysis
Energy Technology Data Exchange (ETDEWEB)
Ferson, Scott [Applied Biomathematics, Setauket, NY (United States); Nelsen, Roger B. [Lewis & Clark College, Portland OR (United States); Hajagos, Janos [Applied Biomathematics, Setauket, NY (United States); Berleant, Daniel J. [Iowa State Univ., Ames, IA (United States); Zhang, Jianzhong [Iowa State Univ., Ames, IA (United States); Tucker, W. Troy [Applied Biomathematics, Setauket, NY (United States); Ginzburg, Lev R. [Applied Biomathematics, Setauket, NY (United States); Oberkampf, William L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-05-01
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
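The best-possible bounds under unknown dependence reviewed in the report are the Fréchet–Hoeffding bounds; a minimal sketch for two events with known marginal probabilities:

```python
def frechet_and(p, q):
    """Best-possible bounds on P(A and B) from the marginals alone,
    i.e. when nothing is known about the dependence of A and B."""
    return max(0.0, p + q - 1.0), min(p, q)

def frechet_or(p, q):
    """Best-possible bounds on P(A or B) under unknown dependence."""
    return max(p, q), min(1.0, p + q)
```

These event-level bounds are the simplest case of the distribution-function bounds the report develops; assuming independence instead would collapse each interval to a single point, which is exactly the kind of unjustified precision the report warns against.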
Analysis of Student Errors on Division of Fractions
Maelasari, E.; Jupri, A.
2017-02-01
This study aims to describe the type of student errors that typically occurs at the completion of the division arithmetic operations on fractions, and to describe the causes of students’ mistakes. This research used a descriptive qualitative method, and involved 22 fifth grade students at one particular elementary school in Kuningan, Indonesia. The results of this study showed that students’ error answers caused by students changing their way of thinking to solve multiplication and division operations on the same procedures, the changing of mix fractions to common fraction have made students confused, and students are careless in doing calculation. From student written work, in solving the fraction problems, we found that there is influence between the uses of learning methods and student response, and some of student responses beyond researchers’ prediction. We conclude that the teaching method is not only the important thing that must be prepared, but the teacher should also prepare about predictions of students’ answers to the problems that will be given in the learning process. This could be a reflection for teachers to be better and to achieve the expected learning goals.
Yang, Liang
2014-12-01
In this study, we consider a relay-assisted free-space optical communication scheme over strong atmospheric turbulence channels with misalignment-induced pointing errors. The links from the source to the destination are assumed to be all-optical links. Assuming a variable-gain relay with the amplify-and-forward protocol, the electrical signal at the source is forwarded to the destination with the help of this relay through all-optical links. More specifically, we first present a cumulative distribution function (CDF) analysis for the end-to-end signal-to-noise ratio. Based on this CDF, the outage probability, bit-error rate, and average capacity of our proposed system are derived. Results show that the system diversity order is related to the minimum value of the channel parameters.
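A Monte Carlo sketch of the outage probability of such a dual-hop variable-gain AF link; lognormal fading is used here as a simplification of the paper's strong-turbulence (gamma-gamma with pointing errors) model, and all parameter values are illustrative:

```python
import math
import random

def outage_probability(avg_snr_db=20.0, threshold_db=5.0, sigma=0.8,
                       n=20000, seed=7):
    """Monte Carlo outage probability of a dual-hop variable-gain AF
    link with end-to-end SNR g1*g2 / (g1 + g2 + 1). Each hop's SNR is
    drawn from a mean-preserving lognormal fading model."""
    rng = random.Random(seed)
    g_avg = 10 ** (avg_snr_db / 10)
    g_th = 10 ** (threshold_db / 10)
    outages = 0
    for _ in range(n):
        g1 = g_avg * math.exp(sigma * rng.gauss(0, 1) - sigma ** 2 / 2)
        g2 = g_avg * math.exp(sigma * rng.gauss(0, 1) - sigma ** 2 / 2)
        if g1 * g2 / (g1 + g2 + 1) < g_th:
            outages += 1
    return outages / n
```

The outage probability is simply the CDF of the end-to-end SNR evaluated at the threshold, which is why the paper's analysis starts from the CDF.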
Impact of Non-Gaussian Error Volumes on Conjunction Assessment Risk Analysis
Ghrist, Richard W.; Plakalovic, Dragan
2012-01-01
An understanding of how an initially Gaussian error volume becomes non-Gaussian over time is an important consideration for space-vehicle conjunction assessment. Traditional assumptions applied to the error volume artificially suppress the true non-Gaussian nature of the space-vehicle position uncertainties. For typical conjunction assessment objects, representation of the error volume by a state error covariance matrix in a Cartesian reference frame is a more significant limitation than is the assumption of linearized dynamics for propagating the error volume. In this study, the impact of each assumption is examined and isolated for each point in the volume. Limitations arising from representing the error volume in a Cartesian reference frame are corrected by employing a Monte Carlo approach to the probability of collision (Pc), using equinoctial samples from the Cartesian position covariance at the time of closest approach (TCA) between the pair of space objects. A set of actual, higher-risk (Pc >= 10^-4) conjunction events in various low-Earth orbits is analyzed using Monte Carlo methods. The impact of non-Gaussian error volumes on Pc for these cases is minimal, even when the deviation from a Gaussian distribution is significant.
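A simplified Monte Carlo probability-of-collision estimate in the spirit of the study; this samples an axis-aligned Cartesian Gaussian with illustrative numbers, whereas the paper samples the full position covariance in equinoctial elements:

```python
import random

def monte_carlo_pc(miss=(120.0, 0.0, 0.0), sigma=(80.0, 60.0, 40.0),
                   hbr=20.0, n=100000, seed=3):
    """Monte Carlo probability of collision: sample the relative
    position at TCA from an axis-aligned Gaussian (mean = miss vector,
    per-axis standard deviations = sigma, all in meters) and count
    samples falling inside the combined hard-body radius."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        d2 = sum((m + s * rng.gauss(0, 1)) ** 2
                 for m, s in zip(miss, sigma))
        if d2 < hbr * hbr:
            hits += 1
    return hits / n
```

Replacing the Gaussian sampler with samples propagated in equinoctial elements is what lets the Monte Carlo estimate capture the non-Gaussian shape of the true error volume.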
Incremental Volumetric Remapping Method: Analysis and Error Evaluation
International Nuclear Information System (INIS)
Baptista, A. J.; Oliveira, M. C.; Rodrigues, D. M.; Menezes, L. F.; Alves, J. L.
2007-01-01
In this paper the error associated with the remapping problem is analyzed. A range of numerical results that assess the performance of three different remapping strategies, applied to FE meshes that are typically used in sheet metal forming simulation, is evaluated. One of the selected strategies is the previously presented Incremental Volumetric Remapping (IVR) method, which was implemented in the in-house code DD3TRIM. The IVR method is founded on the premise that the state variables at all points associated with a Gauss volume of a given element are equal to the state variable quantities at the corresponding Gauss point. Hence, given a typical remapping procedure between a donor and a target mesh, the variables to be associated with a target Gauss volume (and point) are determined by a weighted average. The weight function is the percentage of the Gauss volume of each donor element that is located inside the target Gauss volume. The calculation of the intersecting volumes between the donor and target Gauss volumes is attained incrementally, for each target Gauss volume, by means of a discrete approach. The other two remapping strategies selected are based on the interpolation/extrapolation of variables using the finite element shape functions or moving least squares interpolants. The performance of the three remapping strategies is assessed with two tests. The first remapping test was taken from the literature; it consists in successively remapping a rotating symmetrical mesh, throughout N increments, over an angular span of 90 deg. The second remapping error evaluation test consists of remapping an irregular-element-shape target mesh from a given regular-element-shape donor mesh and proceeding with the inverse operation. In this second test the computational effort is also measured. The results showed that the error level associated with IVR can be very low, with a stable evolution along the number of remapping procedures, when compared with the
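The IVR weighted average described above can be sketched in a few lines; the donor values and intersection volumes are illustrative inputs, and computing the intersections themselves is the incremental discrete step the paper describes:

```python
def remap_state(donor_values, intersect_volumes):
    """IVR-style transfer: the value assigned to a target Gauss volume
    is the average of donor Gauss-point values weighted by the volume
    of each donor Gauss volume lying inside the target one."""
    total = sum(intersect_volumes)
    if total == 0.0:
        raise ValueError("target Gauss volume intersects no donor element")
    return sum(v * w for v, w in zip(donor_values, intersect_volumes)) / total
```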
van der Eijk, Cees; Rose, Jonathan
2015-01-01
This paper undertakes a systematic assessment of the extent to which factor analysis recovers the correct number of latent dimensions (factors) when applied to ordered-categorical survey items (so-called Likert items). We simulate 2400 data sets of uni-dimensional Likert items that vary systematically over a range of conditions such as the underlying population distribution, the number of items, the level of random error, and characteristics of items and item-sets. Each of these datasets is factor analysed in a variety of ways that are frequently used in the extant literature, or that are recommended in current methodological texts. These include exploratory factor retention heuristics such as Kaiser's criterion, Parallel Analysis and a non-graphical scree test, and (for exploratory and confirmatory analyses) evaluations of model fit. These analyses are conducted on the basis of Pearson and polychoric correlations. We find that, irrespective of the particular mode of analysis, factor analysis applied to ordered-categorical survey data very often leads to over-dimensionalisation. The magnitude of this risk depends on the specific way in which factor analysis is conducted, the number of items, the properties of the set of items, and the underlying population distribution. The paper concludes with a discussion of the consequences of over-dimensionalisation, and a brief mention of alternative modes of analysis that are much less prone to such problems.
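A minimal sketch of two of the factor-retention heuristics compared in the paper, given eigenvalues of the item correlation matrix (the eigenvalue lists are illustrative inputs):

```python
def kaiser_retain(eigenvalues):
    """Kaiser's criterion: retain factors whose eigenvalue exceeds 1."""
    return sum(1 for ev in eigenvalues if ev > 1.0)

def parallel_analysis_retain(eigenvalues, random_eigenvalues):
    """Parallel analysis: retain factors whose observed eigenvalue
    exceeds the corresponding eigenvalue obtained from random,
    uncorrelated data of the same dimensions."""
    return sum(1 for obs, rnd in zip(eigenvalues, random_eigenvalues)
               if obs > rnd)
```

The paper's finding is that even these widely recommended heuristics frequently retain too many factors when the items are ordered-categorical.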
Pseudorange error analysis for precise indoor positioning system
Pola, Marek; Bezoušek, Pavel
2017-05-01
A system for indoor localization of a transmitter, intended for fire fighters or members of rescue corps, is currently under development. In this system, the position of a transmitter of an ultra-wideband orthogonal frequency-division multiplexing signal is determined by the time-difference-of-arrival method. The position measurement accuracy highly depends on the accuracy of the direct-path signal time-of-arrival estimation, which is degraded by severe multipath in complicated environments such as buildings. The aim of this article is to assess errors in the determination of the direct-path signal time of arrival caused by multipath signal propagation and noise. Two methods of direct-path signal time-of-arrival estimation are compared here: the cross-correlation method and the spectral estimation method.
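A minimal sketch of the cross-correlation method for direct-path time-of-arrival estimation (integer-sample lags only; real systems interpolate to sub-sample resolution, and multipath can pull the peak away from the direct path, which is the error the paper assesses):

```python
def cross_correlation_delay(received, template):
    """Direct-path time-of-arrival estimate: the integer lag that
    maximises the cross-correlation with the known template."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(received) - len(template) + 1):
        window = received[lag:lag + len(template)]
        val = sum(r * t for r, t in zip(window, template))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag
```

With arrival times at several receivers, the time differences between receiver pairs feed the TDOA position solver.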
Contribution of Error Analysis to Foreign Language Teaching
Directory of Open Access Journals (Sweden)
Vacide ERDOĞAN
2014-01-01
It is inevitable that learners make mistakes in the process of foreign language learning. However, what is questioned by language teachers is why students go on making the same mistakes even when such mistakes have been repeatedly pointed out to them. Yet not all mistakes are the same; sometimes they seem to be deeply ingrained, but at other times students correct themselves with ease. Thus, researchers and teachers of foreign language came to realize that the mistakes a person makes in the process of constructing a new system of language need to be analyzed carefully, for they possibly hold some of the keys to the understanding of second language acquisition. In this respect, the aim of this study is to point out the significance of learners' errors, for they provide evidence of how language is learned and what strategies or procedures the learners are employing in the discovery of language.
Error Analysis of Remotely-Acquired Mossbauer Spectra
Schaefer, Martha W.; Dyar, M. Darby; Agresti, David G.; Schaefer, Bradley E.
2005-01-01
On the Mars Exploration Rovers, Mossbauer spectroscopy has recently been called upon to assist in the task of mineral identification, a job for which it is rarely used in terrestrial studies. For example, Mossbauer data were used to support the presence of olivine in Martian soil at Gusev and jarosite in the outcrop at Meridiani. The strength (and uniqueness) of these interpretations lies in the assumption that peak positions can be determined with high degrees of both accuracy and precision. We summarize here what we believe to be the major sources of error associated with peak positions in remotely-acquired spectra, and speculate on their magnitudes. Our discussion here is largely qualitative because the necessary background information on MER calibration sources, geometries, etc., has not yet been released to the PDS; we anticipate that a more quantitative discussion can be presented by March 2005.
An Analysis of College Students' Attitudes towards Error Correction in EFL Context
Zhu, Honglin
2010-01-01
This article is based on a survey of college students' attitudes towards error correction by their teachers in the process of teaching and learning, and it is intended to improve language teachers' understanding of the nature of error correction. Based on the analysis, the article expounds some principles and techniques that can be applied in the process…
Analysis of Errors and Misconceptions in the Learning of Calculus by Undergraduate Students
Muzangwa, Jonatan; Chifamba, Peter
2012-01-01
This paper analyses errors and misconceptions in an undergraduate course in Calculus. The study is based on a group of 10 BEd. Mathematics students at Great Zimbabwe University. Data were gathered through the use of two exercises on Calculus 1 & 2. The analysis of the results from the tests showed that a majority of the errors were due…
Kingsdorf, Sheri; Krawec, Jennifer
2014-01-01
Solving word problems is a common area of struggle for students with learning disabilities (LD). In order for instruction to be effective, we first need to have a clear understanding of the specific errors exhibited by students with LD during problem solving. Error analysis has proven to be an effective tool in other areas of math but has had…
A Linguistic Analysis of Errors in the Compositions of Arba Minch University Students
Tizazu, Yoseph
2014-01-01
This study reports the dominant linguistic errors that occur in the written productions of Arba Minch University (hereafter AMU) students. A sample of paragraphs was collected for two years from students ranging from freshmen to graduating level. The sampled compositions were then coded, described, and explained using error analysis method. Both…
Boundary error analysis and categorization in the TRECVID news story segmentation task
Arlandis, J.; Over, P.; Kraaij, W.
2005-01-01
In this paper, an error analysis based on boundary error popularity (frequency), including semantic boundary categorization, is applied in the context of the news story segmentation task from TRECVID. Clusters of systems were defined based on the input resources they used, including video, audio and
Perceptual Error Analysis of Human and Synthesized Voices.
Englert, Marina; Madazio, Glaucya; Gielow, Ingrid; Lucero, Jorge; Behlau, Mara
2017-07-01
To assess the quality of synthesized voices through listeners' skills in discriminating human and synthesized voices. Prospective study. Eighteen human voices with different types and degrees of deviation (roughness, breathiness, and strain, with three degrees of deviation: mild, moderate, and severe) were selected by three voice specialists. Synthesized samples with the same deviations as the human voices were produced by the VoiceSim system. The manipulated parameters were vocal frequency perturbation (roughness), additive noise (breathiness), increasing tension, subglottal pressure, and decreasing vocal folds separation (strain). Two hundred sixty-nine listeners were divided into three groups: voice specialist speech language pathologists (V-SLPs), general clinician SLPs (G-SLPs), and naive listeners (NLs). The SLP listeners also indicated the type and degree of deviation. The listeners misclassified 39.3% of the voices, both synthesized (42.3%) and human (36.4%) samples (P = 0.001). V-SLPs presented the lowest error percentage considering the voice nature (34.6%); G-SLPs and NLs identified almost half of the synthesized samples as human (46.9%, 45.6%). The male voices were more susceptible to misidentification. The synthesized breathy samples generated greater perceptual confusion. The samples with severe deviation seemed to be more susceptible to errors. The synthesized female deviations were correctly classified. The male breathiness and strain were identified as roughness. VoiceSim produced stimuli very similar to the voices of patients with dysphonia. V-SLPs had a better ability to classify human and synthesized voices. VoiceSim is better at simulating vocal breathiness and female deviations; the male samples need adjustment. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
The treatment of commission errors in first generation human reliability analysis methods
International Nuclear Information System (INIS)
Alvarengga, Marco Antonio Bayout; Fonseca, Renato Alves da; Melo, Paulo Fernando Frutuoso e
2011-01-01
Human errors in human reliability analysis can be classified generically as errors of omission and errors of commission. Errors of omission are related to the omission of any human action that should have been performed but does not occur. Errors of commission are those related to human actions that should not be performed but which in fact are performed. Both involve specific types of cognitive error mechanisms; however, errors of commission are more difficult to model because they are characterized by non-anticipated actions that are performed instead of others that are omitted (omission errors), or are introduced into an operational task without being part of the normal sequence of that task. The identification of actions that are not supposed to occur depends on the operational context, which will influence or facilitate certain unsafe actions of the operator depending on the performance of its parameters and variables. The survey of operational contexts and associated unsafe actions is a characteristic of second-generation models, unlike first-generation models. This paper discusses how first-generation models can treat errors of commission in the steps of detection, diagnosis, decision-making and implementation in human information processing, particularly with the use of the THERP tables of error quantification. (author)
ERROR ANALYSIS OF ENGLISH WRITTEN ESSAY OF HIGHER EFL LEARNERS: A CASE STUDY
Directory of Open Access Journals (Sweden)
Rina Husnaini Febriyanti
2016-09-01
The aim of the research is to identify grammatical errors and to investigate the most and the least frequent grammatical errors occurring in the students' English written essays. The approach of the research is qualitative descriptive with descriptive analysis. The samples were taken from essays written by 34 students in a writing class. The findings are as follows: the most common error was subject-verb agreement, at 28.25%. The second most frequent error was in verb tense and form, at 24.66%. The third was spelling errors, at 17.94%. The fourth was errors in using auxiliaries, at 9.87%. The fifth was errors in word order, at 8.07%. The remaining errors were in applying the passive voice (4.93%), articles (3.59%), prepositions (1.79%), and pronouns and run-on sentences (0.45% each). This may indicate that most students still make errors even in the usage of basic grammar rules in their writing.
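The frequency ranking reported above can be reproduced from a tagged error list with a few lines (the tag names are illustrative):

```python
from collections import Counter

def error_distribution(errors):
    """Percentage of each error tag in a corpus, highest first, as in
    the study's ranking of grammatical error types."""
    counts = Counter(errors)
    total = sum(counts.values())
    return {tag: round(100 * c / total, 2)
            for tag, c in counts.most_common()}
```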
Shimazoe, Toshiyuki; Ishikawa, Hideto; Takei, Tsuyoshi; Tanaka, Kenji
Recent types of train protection systems, such as ATC, require large amounts of low-level configuration data compared to conventional systems. Hence, management of the configuration data is becoming more important than before. Because of this, the authors developed an error-proof system focusing on human operations in configuration data management. This error-proof system has already been introduced to the Tokaido Shinkansen ATC data management system. However, as the effectiveness of the system has not been presented objectively, its full picture is not clear. To clarify the effectiveness, this paper analyses error-proofing cases introduced to the system, using the concept of QFD and the error-proofing principles. From this analysis, the following methods of evaluation for error-proof systems are proposed: metrics to review the rationality of required qualities are provided by arranging the required qualities according to hazard levels and work phases; metrics to evaluate error-proof systems are provided to improve their reliability effectively by mapping the error-proofing principles onto the error-proofing cases, which are applied according to the required qualities and the corresponding hazard levels. In addition, these objectively analysed error-proofing cases can be used as an error-proofing case database or as guidelines for safer HMI design, especially for data management.
Error Analysis on the Use of “Be” in the Students’ Composition
Directory of Open Access Journals (Sweden)
Rochmat Budi Santosa
2016-07-01
Full Text Available This study aims to identify, analyze and describe errors in the use of 'be' in English sentences written by third-semester students of the English Department at STAIN Surakarta. The researcher describes erroneous uses of 'be' both as a linking verb and as an auxiliary verb. This is qualitative-descriptive research. The data source is a set of writing assignments produced by the students taking the Writing course; the writing tasks are in narrative, descriptive, expository, and argumentative forms. To analyze the data, the researcher uses intralingual and extralingual methods, connecting the linguistic elements in the sentences, especially occurrences of 'be' as a linking verb or auxiliary verb. Based on the error analysis regarding the use of 'be', it can be concluded that there are five types of errors made by students: omission of 'be', addition of 'be', misapplication of 'be', misplacement of 'be', and complex errors in the use of 'be'. These errors occur due to interlingual transfer, intralingual transfer and the learning context.
The treatment of commission errors in first generation human reliability analysis methods
Energy Technology Data Exchange (ETDEWEB)
Alvarengga, Marco Antonio Bayout; Fonseca, Renato Alves da, E-mail: bayout@cnen.gov.b, E-mail: rfonseca@cnen.gov.b [Comissao Nacional de Energia Nuclear (CNEN) Rio de Janeiro, RJ (Brazil); Melo, Paulo Fernando Frutuoso e, E-mail: frutuoso@nuclear.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear
2011-07-01
Human errors in human reliability analysis can be classified generically as errors of omission and errors of commission. Errors of omission relate to the omission of a human action that should have been performed but does not occur. Errors of commission are related to human actions that should not be performed but in fact are performed. Both involve specific types of cognitive error mechanisms; however, errors of commission are more difficult to model because they are characterized by non-anticipated actions that are performed instead of others that are omitted (errors of omission), or that enter an operational task without being part of its normal sequence. The identification of actions that are not supposed to occur depends on the operational context, which will influence or facilitate certain unsafe actions of the operator depending on the behaviour of its parameters and variables. The survey of operational contexts and associated unsafe actions is a characteristic of second-generation models, unlike first-generation models. This paper discusses how first-generation models can treat errors of commission in the steps of detection, diagnosis, decision-making and implementation in human information processing, particularly with the use of THERP error-quantification tables. (author)
International Nuclear Information System (INIS)
Cooper, S.E.; Wreathall, J.; Thompson, C.M.; Drouin, M.; Bley, D.C.
1996-01-01
This paper describes the knowledge base for the application of the new human reliability analysis (HRA) method, ''A Technique for Human Error Analysis'' (ATHEANA). Since application of ATHEANA requires the identification of previously unmodeled human failure events, especially errors of commission, and associated error-forcing contexts (i.e., combinations of plant conditions and performance shaping factors), this knowledge base is an essential aid for the HRA analyst.
Analysis technique for controlling system wavefront error with active/adaptive optics
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
Hu, Juju; Hu, Haijiang; Ji, Yinghua
2010-03-15
Periodic nonlinearity, ranging from a few nanometers to tens of nanometers, limits the use of the heterodyne interferometer in high-accuracy measurement. A novel method is studied to detect the nonlinearity errors, based on electrical subdivision and statistical signal analysis in a heterodyne Michelson interferometer. With the micropositioning platform moving at uniform velocity, the method detects the nonlinearity errors using regression analysis and jackknife estimation. Based on analysis of the simulations, the method can estimate the influence of nonlinearity errors and other noises on dimensional measurement in a heterodyne Michelson interferometer.
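The jackknife estimation mentioned in the abstract can be sketched in a few lines. The example below is a generic leave-one-out jackknife for the slope of a least-squares fit, not the authors' specific nonlinearity-detection procedure; the data shape is assumed.

```python
import statistics

def slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def jackknife_slope(xs, ys):
    """Leave-one-out jackknife: bias-corrected slope and its standard error."""
    n = len(xs)
    full = slope(xs, ys)
    loo = [slope(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]) for i in range(n)]
    mean_loo = statistics.fmean(loo)
    bias = (n - 1) * (mean_loo - full)
    var = (n - 1) / n * sum((t - mean_loo) ** 2 for t in loo)
    return full - bias, var ** 0.5
```

On exactly linear data the jackknife reproduces the true slope with zero standard error; on noisy interferometer data the standard error quantifies the uncertainty of the fitted trend.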
Error rates in forensic DNA analysis: definition, numbers, impact and communication.
Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid
2014-09-01
Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed
Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.
Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F
2001-01-01
When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
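The regression-calibration idea used in the abstract can be illustrated with a textbook sketch: with two replicate measurements per subject and a classical additive error model W = X + U, the reliability ratio estimated from within-pair differences shrinks observed values toward the mean. This is an assumption-laden simplification, not the authors' likelihood-based approach, and it ignores the correlation between replicates that their extension handles.

```python
import statistics

def regression_calibration(replicates):
    """Given two replicate measurements (w1, w2) per subject of an
    error-prone covariate, return calibrated values E[X | W] under a
    classical additive error model W = X + U (textbook sketch)."""
    wbar = [(a + b) / 2 for a, b in replicates]
    mu = statistics.fmean(wbar)
    # Var(W1 - W2) = 2 * Var(U); the mean of two replicates has
    # error variance Var(U) / 2.
    var_u = statistics.variance([a - b for a, b in replicates]) / 2
    var_x = statistics.variance(wbar) - var_u / 2
    lam = var_x / (var_x + var_u / 2)  # reliability ratio of the mean
    return [mu + lam * (w - mu) for w in wbar]
```

The calibrated values would then replace the observed covariate (here, log(total REM) counts) in the Cox model fit.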
Research on Human-Error Factors of Civil Aircraft Pilots Based On Grey Relational Analysis
Directory of Open Access Journals (Sweden)
Guo Yundong
2018-01-01
Full Text Available Considering that civil aviation accidents involve many human-error factors and show the features of typical grey systems, an index system of civil aviation accident human-error factors is built using the Human Factors Analysis and Classification System (HFACS) model. With data from accidents worldwide between 2008 and 2011, the correlations between human-error factors are analyzed quantitatively using grey relational analysis. The results show that the main factors affecting pilot human error are, in order, preconditions for unsafe acts, unsafe supervision, organizational influences and unsafe acts. The second-level index most closely related to pilot human error is the physical/mental limitations of pilots, followed by supervisory violations. The relevancy between the first-level indexes and the corresponding second-level indexes, and among the second-level indexes, can also be analyzed quantitatively.
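Grey relational analysis, as used above, ranks comparison series by their similarity to a reference series. The sketch below is Deng's classic formulation with the usual distinguishing coefficient rho = 0.5; the index data are illustrative, not the accident statistics from the study.

```python
def grey_relational_grades(reference, series, rho=0.5):
    """Grey relational grade of each comparison series against a
    reference series (Deng's formulation; assumes non-constant series)."""
    def norm(xs):  # min-max normalisation to [0, 1]
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) for x in xs]

    ref = norm(reference)
    deltas = [[abs(r - x) for r, x in zip(ref, norm(s))] for s in series]
    dmin = min(min(d) for d in deltas)
    dmax = max(max(d) for d in deltas)
    # Grey relational coefficient at each point, then the grade as its mean.
    coeff = [[(dmin + rho * dmax) / (d + rho * dmax) for d in row]
             for row in deltas]
    return [sum(row) / len(row) for row in coeff]
```

A series identical to the reference gets grade 1.0; the further a factor's series departs from the reference, the lower its grade, which is what produces the ranking of human-error factors.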
Quality of IT service delivery — Analysis and framework for human error prevention
Shwartz, L.
2010-12-01
In this paper, we address the problem of reducing the occurrence of human errors that cause service interruptions in IT service support and delivery operations. Analysis of a large volume of service-interruption records revealed that more than 21% of interruptions were caused by human error. We focus on Change Management, the process with the largest risk of human error, and identify the main instances of human error as the 4 Wrongs: wrong request, wrong time, wrong configuration item, and wrong command. Analysis of change records revealed that human-error prevention by partial automation is highly relevant. We propose the HEP Framework, a framework for the execution of IT service delivery operations that reduces human error by addressing the 4 Wrongs using content integration, contextualization of operation patterns, partial automation of command execution, and controlled access to resources.
International Nuclear Information System (INIS)
Castillo, Enrique; Conejo, Antonio J.; Minguez, Roberto; Castillo, Carmen
2003-01-01
The paper introduces a method for solving the failure probability-safety factor problem for designing engineering works proposed by Castillo et al. that optimizes an objective function subject to the standard geometric and code constraints, and two more sets of constraints that simultaneously guarantee given safety factors and failure probability bounds associated with a given set of failure modes. The method uses the dual variables and is especially convenient to perform a sensitivity analysis, because sensitivities of the objective function and the reliability indices can be obtained with respect to all data values. To this end, the optimization problems are transformed into other equivalent ones, in which the data parameters are converted into artificial variables, and locked to their actual values. In this way, some variables of the associated dual problems become the desired sensitivities. In addition, using the proposed methodology, calibration of codes based on partial safety factors can be done. The method is illustrated by its application to the design of a simple rubble mound breakwater and a bridge crane
Potentialities of Program Analysis of Probable Outcome in Patients with Acute Poisoning by Mushrooms
Directory of Open Access Journals (Sweden)
V. I. Cherniy
2006-01-01
Full Text Available The authors have developed a method for predicting the probable outcome in patients with acute mushroom poisoning, by examining data from routine laboratory studies and dynamic interphase tensiometry and rheometry of patient sera. Sixty-eight patients with acute mushroom intoxication were followed up. According to the outcome of the disease, they were divided into two groups: A (survivors) and B (deceased). In the sera of the deceased, the monolayer relaxation time T and the tensiogram slope angle L1 were 2.5 times greater, and PN3-PN4 and L2/L1 were 6 and 3.7 times less, respectively, than in the survivors. Based on these data and using discriminant analysis, the authors derived a classification rule that predicts the outcome of the disease with a high degree of probability from the results of dynamic interphase tensiometry and routine laboratory blood studies in patients with acute mushroom poisoning. The derived rule is highly significant (p<0.05).
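A two-group classification rule of the kind derived above can be sketched with Fisher's linear discriminant on two features (for example, relaxation time and tensiogram slope angle). This is a generic sketch, not the published rule, and the observations below are illustrative, not patient data.

```python
def fisher_discriminant(group_a, group_b):
    """Fisher's linear discriminant for two groups of 2-feature rows.
    Returns (w, c): classify into group_b when w[0]*x + w[1]*y > c."""
    def mean(rows):
        n = len(rows)
        return [sum(r[0] for r in rows) / n, sum(r[1] for r in rows) / n]

    def scatter(rows, m):
        sxx = sxy = syy = 0.0
        for x, y in rows:
            dx, dy = x - m[0], y - m[1]
            sxx += dx * dx; sxy += dx * dy; syy += dy * dy
        return sxx, sxy, syy

    m0, m1 = mean(group_a), mean(group_b)
    a0, a1 = scatter(group_a, m0), scatter(group_b, m1)
    sxx, sxy, syy = a0[0] + a1[0], a0[1] + a1[1], a0[2] + a1[2]
    det = sxx * syy - sxy * sxy
    d = [m1[0] - m0[0], m1[1] - m0[1]]
    # w = S^-1 (m1 - m0) for the pooled within-class scatter S (2x2 inverse).
    w = [(syy * d[0] - sxy * d[1]) / det, (-sxy * d[0] + sxx * d[1]) / det]
    mid = [(m0[0] + m1[0]) / 2, (m0[1] + m1[1]) / 2]
    c = w[0] * mid[0] + w[1] * mid[1]  # threshold at the midpoint
    return w, c
```

New patients would be scored by the linear combination w . x and assigned to the group their score falls nearest.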
Mixed Methods Analysis of Medical Error Event Reports: A Report from the ASIPS Collaborative
National Research Council Canada - National Science Library
Harris, Daniel M; Westfall, John M; Fernald, Douglas H; Duclos, Christine W; West, David R; Niebauer, Linda; Marr, Linda; Quintela, Javan; Main, Deborah S
2005-01-01
.... This paper presents a mixed methods approach to analyzing narrative error event reports. Mixed methods studies integrate one or more qualitative and quantitative techniques for data collection and analysis...
J. McDonnell; A.J. Goverde (Angelique); J.P.W. Vermeiden; F.F.H. Rutten (Frans)
2002-01-01
BACKGROUND: Estimating the probability of pregnancy leading to delivery and the influence of clinical factors on that probability is of fundamental importance in the treatment counselling of infertile couples. A variety of statistical techniques have been used to
Error Patterns Analysis of Hearing Aid and Cochlear Implant Users as a Function of Noise.
Chun, Hyungi; Ma, Sunmi; Han, Woojae; Chun, Youngmyoung
2015-12-01
Not all hearing-impaired listeners have the same speech perception ability, even when they have similar pure-tone thresholds and configurations. For this reason, the present study analyzes error patterns in hearing-impaired listeners compared to normal-hearing (NH) listeners as a function of signal-to-noise ratio (SNR). Forty-four adults participated: 10 NH listeners, 20 hearing aid (HA) users and 14 cochlear implant (CI) users. Korean standardized monosyllables were presented as stimuli in quiet and at three different SNRs. Errors were classified into substitution, omission, addition, fail, and no response, using stacked bar plots. Total error percentages for the three groups increased significantly as the SNR decreased. In the error pattern analysis, the NH group showed predominantly substitution errors regardless of SNR, compared to the other groups. In both the HA and CI groups, substitution errors declined while no-response errors appeared as the SNRs increased. The CI group was characterized by fewer substitution and more fail errors than the HA group. Substitutions of initial and final phonemes in the HA and CI groups were dominated by place-of-articulation errors. However, the HA group missed consonant place cues, such as formant transitions and stop consonant bursts, whereas the CI group usually showed confusions limited to nasal consonants with low-frequency characteristics. Interestingly, all three groups showed /k/ addition in the final phoneme, a trend that was magnified as noise increased. The HA and CI groups had their own distinctive error patterns even though the aided thresholds of the two groups were similar. We expect these results to inform auditory training for hearing-impaired listeners by focusing on frequent error patterns, thereby reducing those errors and improving speech perception ability.
Generalized multiplicative error models: Asymptotic inference and empirical analysis
Li, Qian
This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures the nontrivial fraction of zero outcomes feature in a series and combines a so-called Zero-Augmented general F distribution with linear MEM(p,q). Under certain strict stationary and moment conditions, we establish a consistency and asymptotic normality of the semiparametric estimation for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed in the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, interaction between trading variables, and the time needed for price equilibrium after a perturbation for each market. The clustering effect is studied through the use of univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we exploit the Impulse Response function to compute the calendar time for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify the difference between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.
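The baseline MEM(1,1) recursion used in the dissertation can be simulated in a few lines. The sketch below uses unit-mean exponential innovations as a minimal assumption; the dissertation's extended models additionally handle a location shift (positive lower bound) and zero augmentation, which are omitted here.

```python
import random

def simulate_mem11(n, omega=0.1, alpha=0.2, beta=0.7, seed=42):
    """Simulate x_t = mu_t * eps_t with
    mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1}
    and eps ~ Exponential(1), so E[eps] = 1 (minimal MEM(1,1) sketch)."""
    rng = random.Random(seed)
    mu = omega / (1 - alpha - beta)   # start at the unconditional mean
    x, xs = mu, []
    for _ in range(n):
        mu = omega + alpha * x + beta * mu
        x = mu * rng.expovariate(1.0)  # nonnegative by construction
        xs.append(x)
    return xs
```

With alpha + beta = 0.9 the simulated durations (or volumes, or volatilities) cluster, mimicking the persistence the dissertation estimates by QMLE on cross-listed stock data.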
Error Analysis for Discontinuous Galerkin Method for Parabolic Problems
Kaneko, Hideaki
2004-01-01
In the proposal, the following three objectives are stated: (1) A p-version of the discontinuous Galerkin method for a one-dimensional parabolic problem will be established. It should be recalled that the h-version in space was used for the discontinuous Galerkin method. An a priori error estimate as well as a posteriori estimate of this p-finite element discontinuous Galerkin method will be given. (2) The parameter alpha that describes the behavior of ||u_t(t)||_2 was computed exactly. This was made feasible because of the explicitly specified initial condition. For practical heat transfer problems, the initial condition may have to be approximated. Also, if the parabolic problem is posed on a multi-dimensional region, the parameter alpha would, in most cases, be difficult to compute exactly even when the initial condition is known exactly. The second objective of this proposed research is to establish a method to estimate this parameter. This will be done by computing two discontinuous Galerkin approximate solutions at two different time steps starting from the initial time and using them to derive alpha. (3) The third objective is to consider the heat transfer problem over a two-dimensional thin plate. The technique developed by Vogelius and Babuska will be used to establish a discontinuous Galerkin method in which the p-element will be used for through-thickness approximation. This h-p finite element approach, which results in a dimensional reduction method, was used for elliptic problems, but the application appears new for the parabolic problem. The dimension reduction method will be discussed together with the time discretization method.
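If one assumes, purely for illustration, that the norm behaves like a power law ||u_t(t)||_2 ~ C * t**(-alpha) near t = 0, then two sampled norms determine alpha, which is the spirit of deriving alpha from two approximate solutions at different times. The power-law form is a hypothesis of this sketch; the proposal does not state the exact behaviour being fitted.

```python
import math

def estimate_alpha(t1, n1, t2, n2):
    """Solve n(t) = C * t**(-alpha) for alpha from two samples
    (t1, n1) and (t2, n2) of the norm (hypothetical power-law form)."""
    return math.log(n1 / n2) / math.log(t2 / t1)
```

For n(t) = 3 * t**(-0.5), samples at t = 1 and t = 4 recover alpha = 0.5 exactly.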
Energy Technology Data Exchange (ETDEWEB)
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
Kinematic Analysis of Speech Sound Sequencing Errors Induced by Delayed Auditory Feedback.
Cler, Gabriel J; Lee, Jackson C; Mittelman, Talia; Stepp, Cara E; Bohland, Jason W
2017-06-22
Delayed auditory feedback (DAF) causes speakers to become disfluent and make phonological errors. Methods for assessing the kinematics of speech errors are lacking, with most DAF studies relying on auditory perceptual analyses, which may be problematic, as errors judged to be categorical may actually represent blends of sounds or articulatory errors. Eight typical speakers produced nonsense syllable sequences under normal and DAF (200 ms). Lip and tongue kinematics were captured with electromagnetic articulography. Time-locked acoustic recordings were transcribed, and the kinematics of utterances with and without perceived errors were analyzed with existing and novel quantitative methods. New multivariate measures showed that for 5 participants, kinematic variability for productions perceived to be error free was significantly increased under delay; these results were validated by using the spatiotemporal index measure. Analysis of error trials revealed both typical productions of a nontarget syllable and productions with articulatory kinematics that incorporated aspects of both the target and the perceived utterance. This study is among the first to characterize articulatory changes under DAF and provides evidence for different classes of speech errors, which may not be perceptually salient. New methods were developed that may aid visualization and analysis of large kinematic data sets. https://doi.org/10.23641/asha.5103067.
Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E
2013-12-01
In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.
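The SIMEX idea compared above can be illustrated on a toy problem: deliberately add extra measurement noise, watch the naive estimate attenuate further, and extrapolate back to the error-free point. The sketch below uses a simple linear regression slope and the linear extrapolant rather than the survival-model setting and quadratic extrapolation of the paper; all numbers are illustrative.

```python
import random
import statistics

def _slope(xs, ys):
    """Least-squares slope of ys on xs."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    return (sum((x - mx) * (v - my) for x, v in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def simex_slope(w, y, var_u, b=200, seed=1):
    """Linear-extrapolant SIMEX for a regression slope when predictor w
    carries additive error of known variance var_u.
    Returns (naive, corrected)."""
    rng = random.Random(seed)
    naive = _slope(w, y)                      # lambda = 0: observed data
    sd = var_u ** 0.5
    s1 = statistics.fmean(                    # lambda = 1: error doubled
        _slope([wi + rng.gauss(0.0, sd) for wi in w], y) for _ in range(b))
    # Extrapolate the line through (0, naive) and (1, s1) to lambda = -1,
    # the hypothetical error-free point.
    return naive, 2.0 * naive - s1
```

As the paper's simulations suggest, such a correction reduces, but need not eliminate, the attenuation bias of the naive fit.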
Bursuk, Laura; Matteoni, Louise
This module is the second in a two-module cluster. Together, the modules are designed to enable students to recognize and identify by type the errors that occur in recorded samples of oral reading. This one--Module B--focuses on the actual analysis of oral reading errors. Using the understanding of the phonemic and morphemic elements of English…
Bayesian operational modal analysis with asynchronous data, part I: Most probable value
Zhu, Yi-Chen; Au, Siu-Kui
2018-01-01
In vibration tests, multiple sensors are used to obtain detailed mode shape information about the tested structure. Time synchronisation among data channels is required in conventional modal identification approaches. Modal identification can be more flexibly conducted if this is not required. Motivated by the potential gain in feasibility and economy, this work proposes a Bayesian frequency domain method for modal identification using asynchronous 'output-only' ambient data, i.e. 'operational modal analysis'. It provides a rigorous means for identifying the global mode shape taking into account the quality of the measured data and their asynchronous nature. This paper (Part I) proposes an efficient algorithm for determining the most probable values of modal properties. The method is validated using synthetic and laboratory data. The companion paper (Part II) investigates identification uncertainty and challenges in applications to field vibration data.
Geometry, analysis and probability in honor of Jean-Michel Bismut
Hofer, Helmut; Labourie, François; Jan, Yves; Ma, Xiaonan; Zhang, Weiping
2017-01-01
This volume presents original research articles and extended surveys related to the mathematical interest and work of Jean-Michel Bismut. His outstanding contributions to probability theory and global analysis on manifolds have had a profound impact on several branches of mathematics in the areas of control theory, mathematical physics and arithmetic geometry. Contributions by: K. Behrend N. Bergeron S. K. Donaldson J. Dubédat B. Duplantier G. Faltings E. Getzler G. Kings R. Mazzeo J. Millson C. Moeglin W. Müller R. Rhodes D. Rössler S. Sheffield A. Teleman G. Tian K-I. Yoshikawa H. Weiss W. Werner The collection is a valuable resource for graduate students and researchers in these fields.
Directory of Open Access Journals (Sweden)
Osmar Abílio de Carvalho Júnior
2014-04-01
Full Text Available Speckle noise (salt and pepper) is inherent to synthetic aperture radar (SAR), causing a granular, noise-like aspect that complicates image classification. In SAR image analysis, spatial information can be a particular benefit for denoising and for mapping classes characterized by a statistical distribution of pixel intensities from a complex and heterogeneous spectral response. This paper proposes Probability Density Components Analysis (PDCA), a new alternative that combines filtering and frequency histograms to improve the classification procedure for single-channel SAR images. The method was tested on L-band SAR data from the Advanced Land Observing Satellite (ALOS) Phased-Array Synthetic-Aperture Radar (PALSAR) sensor. The study area is located in the Brazilian Amazon rainforest, in northern Rondônia State (municipality of Candeias do Jamari), containing forest and land-use patterns. The proposed algorithm moves a window over the image, estimating the probability density curve in different image components; therefore, a single input image generates a multi-component output. Initially, the multi-components should be treated by noise-reduction methods such as maximum noise fraction (MNF) or noise-adjusted principal components (NAPC). Both methods enable noise reduction as well as ordering of the multi-component data in terms of image quality. In this paper, NAPC applied to the multi-components provided large reductions in noise levels, and color composites based on the first NAPCs enhance the classification of different surface features. For the spectral classification, the Spectral Correlation Mapper and Minimum Distance were used. The results obtained were similar to the visual interpretation of optical images from TM-Landsat and Google Maps.
Error Modeling and Sensitivity Analysis of a Five-Axis Machine Tool
Directory of Open Access Journals (Sweden)
Wenjie Tian
2014-01-01
Full Text Available Geometric error modeling and its sensitivity analysis are carried out in this paper, which is helpful for precision design of machine tools. Screw theory and rigid body kinematics are used to establish the error model of an RRTTT-type five-axis machine tool, which enables the source errors affecting the compensable and uncompensable pose accuracy of the machine tool to be explicitly separated, thereby providing designers and/or field engineers with an informative guideline for the accuracy improvement by suitable measures, that is, component tolerancing in design, manufacturing, and assembly processes, and error compensation. The sensitivity analysis method is proposed, and the sensitivities of compensable and uncompensable pose accuracies are analyzed. The analysis results will be used for the precision design of the machine tool.
Nitz, D. E.; Curry, J. J.; Buuck, M.; DeMann, A.; Mitchell, N.; Shull, W.
2018-02-01
We report radiative transition probabilities for 5029 emission lines of neutral cerium within the wavelength range 417-1110 nm. Transition probabilities for only 4% of these lines have been previously measured. These results are obtained from a Boltzmann analysis of two high resolution Fourier transform emission spectra used in previous studies of cerium, obtained from the digital archives of the National Solar Observatory at Kitt Peak. The set of transition probabilities used for the Boltzmann analysis are those published by Lawler et al (2010 J. Phys. B: At. Mol. Opt. Phys. 43 085701). Comparisons of branching ratios and transition probabilities for lines common to the two spectra provide important self-consistency checks and test for the presence of self-absorption effects. Estimated 1σ uncertainties for our transition probability results range from 10% to 18%.
Error analysis for pesticide detection performed on paper-based microfluidic chip devices
Yang, Ning; Shen, Kai; Guo, Jianjiang; Tao, Xinyi; Xu, Peifeng; Mao, Hanping
2017-07-01
Paper chips are efficient and inexpensive devices for pesticide residue detection. However, the causes of detection error are not well understood, which is the main obstacle to the development of pesticide residue detection. This paper focuses on error analysis for pesticide detection performed on paper-based microfluidic chip devices, testing every plausible factor to build mathematical models of detection error. As a result, a double-channel structure is selected as the optimal chip structure to reduce detection error effectively. The wavelength of 599.753 nm is chosen because it is the detection wavelength most sensitive to variation in pesticide concentration. Finally, mathematical models of the detection error due to detection temperature and preparation time are derived. This research lays a theoretical foundation for accurate pesticide residue detection based on paper-based microfluidic chip devices.
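Selecting the most sensitive detection wavelength, as described above, amounts to finding the wavelength whose reading changes fastest with concentration. A minimal sketch, with illustrative readings rather than the study's measurements (599.753 nm is the study's result, not derivable from this toy data):

```python
def most_sensitive_wavelength(concentrations, readings):
    """Return the wavelength whose reading has the largest absolute
    least-squares slope against concentration.
    `readings` maps wavelength (nm) -> one reading per concentration."""
    def slope(xs, ys):
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))
    return max(readings,
               key=lambda wl: abs(slope(concentrations, readings[wl])))
```

In practice the readings at each candidate wavelength would come from calibration runs at known concentrations.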
An Analysis of Errors in a Reuse-Oriented Development Environment
Thomas, William M.; Delis, Alex; Basili, Victor R.
1995-01-01
Component reuse is widely considered vital for obtaining significant improvement in development productivity. However, as an organization adopts a reuse-oriented development process, the nature of the problems in development is likely to change. In this paper, we use a measurement-based approach to better understand and evaluate an evolving reuse process. More specifically, we study the effects of reuse across seven projects in a narrow domain from a single development organization. An analysis of the errors that occur in new and reused components across all phases of system development provides insight into the factors influencing the reuse process. We found significant differences between errors associated with new and various types of reused components in terms of the types of errors committed, when errors are introduced, and the effect that the errors have on the development process.
Phonological analysis of substitution errors of patients with apraxia of speech
Directory of Open Access Journals (Sweden)
Maysa Luchesi Cera
Full Text Available Abstract The literature on apraxia of speech describes the types and characteristics of phonological errors in this disorder. In general, phonemes affected by errors are described, but the distinctive features involved have not yet been investigated. Objective: To analyze the features involved in substitution errors produced by Brazilian-Portuguese speakers with apraxia of speech. Methods: 20 adults with apraxia of speech were assessed. Phonological analysis of the distinctive features involved in substitution type errors was carried out using the protocol for the evaluation of verbal and non-verbal apraxia. Results: The most affected features were: voiced, continuant, high, anterior, coronal, posterior. Moreover, the mean of the substitutions of marked to markedness features was statistically greater than the markedness to marked features. Conclusions: This study contributes toward a better characterization of the phonological errors found in apraxia of speech, thereby helping to diagnose communication disorders and the selection criteria of phonemes for rehabilitation in these patients.
Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors
Boussalis, Dhemetrios; Bayard, David S.
2013-01-01
G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the desired fast-turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run, in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, mascons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams). G-CAT is a standalone MATLAB-based tool intended to
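The abstract does not give G-CAT's internals, but the two operations it names, linearized covariance propagation and error-ellipse computation, can be sketched generically. The state model and numbers below are illustrative assumptions, not G-CAT's actual 120-state formulation:

```python
import numpy as np

def propagate_covariance(P, F, Q):
    """One linearized-filter time update: P' = F P F^T + Q."""
    return F @ P @ F.T + Q

def error_ellipse_axes(P2, prob=0.99):
    """Semi-axes of the 2D error ellipse enclosing `prob` probability mass.
    For 2 degrees of freedom the chi-square quantile is exactly -2*ln(1-prob)."""
    k = np.sqrt(-2.0 * np.log(1.0 - prob))
    eigvals = np.linalg.eigvalsh(P2)        # ascending eigenvalues
    return k * np.sqrt(eigvals)             # [semi-minor, semi-major]

# Illustrative constant-velocity state [position, velocity] over one step dt
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition
P = np.diag([4.0, 0.25])                    # m^2 and (m/s)^2
Q = np.diag([0.01, 0.01])                   # process noise
P_next = propagate_covariance(P, F, Q)

# 99% ellipse for an assumed 2x2 position covariance
axes = error_ellipse_axes(np.diag([4.0, 1.0]))
```

This is the "single run" advantage the abstract describes: the ellipse comes straight from the propagated covariance, with no Monte Carlo sampling.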
Minimizing treatment planning errors in proton therapy using failure mode and effects analysis
International Nuclear Information System (INIS)
Zheng, Yuanshui; Johnson, Randall; Larson, Gary
2016-01-01
Purpose: Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. Methods: The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. Results: In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. Conclusions: The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of FMEA framework and the implementation of an ongoing error tracking system at their
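The RPN computation the abstract refers to is the standard FMEA product of occurrence, severity, and detectability ratings. The failure modes and ratings below are illustrative placeholders, not the 36 modes identified in the paper:

```python
# Standard FMEA ranking: RPN = occurrence x severity x detectability,
# each rated on a 1-10 scale. Ratings here are made up for illustration.
failure_modes = {
    "wrong CT-MR image fusion":     {"occ": 3, "sev": 8,  "det": 4},
    "incorrect beam arrangement":   {"occ": 2, "sev": 9,  "det": 3},
    "plan export to wrong patient": {"occ": 1, "sev": 10, "det": 2},
}

def rpn(mode):
    return mode["occ"] * mode["sev"] * mode["det"]

# Rank failure modes by risk, highest RPN first
ranked = sorted(failure_modes.items(), key=lambda kv: rpn(kv[1]), reverse=True)
for name, mode in ranked:
    print(f"{name}: RPN = {rpn(mode)}")
```

An ongoing error-tracking database, as the authors describe, would periodically replace the `occ` ratings with observed frequencies and re-rank.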
Wagar, Elizabeth A; Tamashiro, Lorraine; Yasin, Bushra; Hilborne, Lee; Bruckner, David A
2006-11-01
Patient safety is an increasingly visible and important mission for clinical laboratories. Accreditation and regulatory organizations are paying close attention to improving processes related to patient identification and specimen labeling, because errors in these areas that jeopardize patient safety are common and avoidable through improvement of the total testing process. To assess patient identification and specimen labeling improvement after multiple implementation projects using longitudinal statistical tools. Specimen errors were categorized by a multidisciplinary health care team. Patient identification errors were grouped into 3 categories: (1) specimen/requisition mismatch, (2) unlabeled specimens, and (3) mislabeled specimens. Specimens with these types of identification errors were compared preimplementation and postimplementation for 3 patient safety projects: (1) reorganization of phlebotomy (4 months); (2) introduction of an electronic event reporting system (10 months); and (3) activation of an automated processing system (14 months) over a 24-month period, using trend analysis and Student t test statistics. Of 16,632 total specimen errors, mislabeled specimens, requisition mismatches, and unlabeled specimens represented 1.0%, 6.3%, and 4.6% of errors, respectively. The Student t test showed a significant decrease in the most serious error type, mislabeled specimens (P < .001), compared with before implementation of the 3 patient safety projects. Trend analysis demonstrated decreases in all 3 error types over 26 months. Applying performance-improvement strategies that focus longitudinally on specimen labeling errors can significantly reduce errors and thereby improve patient safety. This is an important area in which laboratory professionals, working in interdisciplinary teams, can improve safety and outcomes of care.
Single trial time-frequency domain analysis of error processing in post-traumatic stress disorder.
Clemans, Zachary A; El-Baz, Ayman S; Hollifield, Michael; Sokhadze, Estate M
2012-09-13
Error processing studies in psychology and psychiatry are relatively common. Event-related potentials (ERPs) are often used as measures of error processing, two such response-locked ERPs being the error-related negativity (ERN) and the error-related positivity (Pe). The ERN and Pe occur following committed errors in reaction time tasks as low-frequency (4-8 Hz) electroencephalographic (EEG) oscillations registered at the midline fronto-central sites. We created an alternative method for analyzing error processing using time-frequency analysis in the form of a wavelet transform. A study was conducted in which subjects with PTSD and healthy controls completed a forced-choice task. Single trial EEG data from errors in the task were processed using a continuous wavelet transform. Coefficients from the transform that corresponded to the theta range were averaged to isolate a theta waveform in the time-frequency domain. Measures called the time-frequency ERN and Pe were obtained from these waveforms for five different channels and then averaged to obtain a single time-frequency ERN and Pe for each error trial. A comparison of the amplitude and latency of the time-frequency ERN and Pe between the PTSD and control groups was performed. A significant group effect was found on the amplitude of both measures. These results indicate that the developed single trial time-frequency error analysis method is suitable for examining error processing in PTSD and possibly other psychiatric disorders. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
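The core step, averaging continuous-wavelet-transform coefficients in the theta band to get a time-frequency waveform, can be sketched with a hand-rolled complex Morlet CWT. The sampling rate, epoch, wavelet parameters, and synthetic "error trial" below are all assumptions for illustration; they are not the study's recording parameters:

```python
import numpy as np

fs = 250.0                               # assumed EEG sampling rate, Hz
t = np.arange(-0.5, 1.0, 1.0 / fs)       # epoch around the erroneous response
# Synthetic single error trial: a 6 Hz theta burst ~100 ms post-response + noise
rng = np.random.default_rng(0)
eeg = (np.exp(-((t - 0.1) / 0.1) ** 2) * np.sin(2 * np.pi * 6 * t)
       + 0.1 * rng.standard_normal(t.size))

def morlet_power(signal, fs, freqs, n_cycles=5.0):
    """Power of a complex Morlet CWT at each frequency (one row per freq)."""
    out = np.empty((len(freqs), signal.size))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)              # Gaussian width, s
        tw = np.arange(-4 * sigma, 4 * sigma, 1.0 / fs)
        wavelet = np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * sigma**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit energy
        conv = np.convolve(signal, wavelet, mode="full")
        start = (conv.size - signal.size) // 2            # centered slice
        out[i] = np.abs(conv[start:start + signal.size]) ** 2
    return out

theta_freqs = np.arange(4.0, 8.5, 0.5)                    # 4-8 Hz band
theta_tf = morlet_power(eeg, fs, theta_freqs).mean(axis=0)
ern_amplitude = theta_tf.max()                            # time-frequency measure
ern_latency = t[np.argmax(theta_tf)]                      # peaks near the burst
```

In the study's pipeline this would be repeated per channel and averaged across five channels per error trial.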
Dealing with Uncertainties A Guide to Error Analysis
Drosg, Manfred
2007-01-01
Dealing with Uncertainties proposes and explains a new approach for the analysis of uncertainties. Firstly, it is shown that uncertainties are the consequence of modern science rather than of measurements. Secondly, it stresses the importance of the deductive approach to uncertainties. This perspective has the potential of dealing with the uncertainty of a single data point and with data of a set having differing weights. Neither case can be handled by the inductive approach that is usually taken. This innovative monograph also fully covers both uncorrelated and correlated uncertainties. The weakness of using statistical weights in regression analysis is discussed. Abundant examples are given for correlation in and between data sets and for the feedback of uncertainties on experiment design.
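The "data of a set having differing weights" case the abstract mentions is conventionally handled with an inverse-variance weighted mean, where each point's deduced uncertainty sets its weight. The data values below are illustrative:

```python
import numpy as np

# Illustrative data points with individual (deduced) one-sigma uncertainties
x = np.array([9.8, 10.1, 10.4])
sigma = np.array([0.2, 0.1, 0.4])

w = 1.0 / sigma**2                      # inverse-variance weights
mean = np.sum(w * x) / np.sum(w)        # weighted mean, pulled toward precise points
mean_sigma = 1.0 / np.sqrt(np.sum(w))   # deduced uncertainty of the mean
```

Note the mean lands closest to the most precise point (10.1), and its uncertainty is smaller than any individual point's, which is exactly what differing weights buy.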
DEFF Research Database (Denmark)
Chen, Yangyang; Yang, Ming; Long, Jiang
2017-01-01
For motor control applications, speed loop performance depends largely on the accuracy of the speed feedback signal. The M/T method, due to its high theoretical accuracy, is the most widely used for incremental-encoder-based speed measurement. However, the inherent encoder optical grating error...... and A/D conversion error make it hard to achieve the theoretical speed measurement accuracy. In this paper, hardware-caused speed measurement errors are analyzed and modeled in detail; a Single-Phase Self-adaptive M/T method is proposed to ideally suppress speed measurement error. In the end, simulation...
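For readers unfamiliar with the M/T method, the basic (non-self-adaptive) computation can be sketched as follows: over one detection interval the counter records M1 encoder pulses and M2 high-frequency clock pulses, so the interval length is exactly M2 clock periods and the speed follows from the pulse count. This is a textbook sketch, not the paper's Single-Phase Self-adaptive variant:

```python
def mt_speed_rpm(m1, m2, ppr, f_clk):
    """Speed in rpm from M/T counts.
    m1: encoder pulses counted, m2: high-frequency clock pulses counted,
    ppr: encoder pulses per revolution, f_clk: clock frequency in Hz."""
    t_detect = m2 / f_clk          # actual detection time, s
    revs = m1 / ppr                # revolutions during the interval
    return 60.0 * revs / t_detect

# Illustrative: 2500-line encoder, 1 MHz clock, 50 pulses in 10000 clock ticks
speed = mt_speed_rpm(m1=50, m2=10_000, ppr=2500, f_clk=1_000_000)
```

The method's accuracy advantage is that the detection time is measured by the fast clock (m2) rather than assumed fixed, so quantization of the encoder pulse train is largely removed; the residual errors the paper targets come from grating non-uniformity and A/D conversion, which this idealized formula ignores.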
Thermal error analysis and compensation for digital image/volume correlation
Pan, Bing
2018-02-01
Digital image/volume correlation (DIC/DVC) rely on the digital images acquired by digital cameras and x-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems, whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating effect or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermal-induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, thus its originality is not claimed. Instead, this paper aims to give a comprehensive overview and more insights of our work on thermal error analysis and compensation for DIC/DVC measurements.
Verb retrieval in brain-damaged subjects: 2. Analysis of errors.
Kemmerer, D; Tranel, D
2000-07-01
Verb retrieval for action naming was assessed in 53 brain-damaged subjects by administering a standardized test with 100 items. In a companion paper (Kemmerer & Tranel, 2000), it was shown that impaired and unimpaired subjects did not differ as groups in their sensitivity to a variety of stimulus, lexical, and conceptual factors relevant to the test. For this reason, the main goal of the present study was to determine whether the two groups of subjects manifested theoretically interesting differences in the kinds of errors that they made. All of the subjects' errors were classified according to an error coding system that contains 27 distinct types of errors belonging to five broad categories: verbs, phrases, nouns, adpositional words, and "other" responses. Errors involving the production of verbs that are semantically related to the target were especially prevalent for the unimpaired group, which is similar to the performance of normal control subjects. By contrast, the impaired group had a significantly smaller proportion of errors in the verb category and a significantly larger proportion of errors in each of the nonverb categories. This relationship between error rate and error type is consistent with previous research on both object and action naming errors, and it suggests that subjects with only mild damage to putative lexical systems retain an appreciation of most of the semantic, phonological, and grammatical category features of words, whereas subjects with more severe damage retain a much smaller set of features. At the level of individual subjects, a wide range of "predominant error types" was found, especially among the impaired subjects, which may reflect either different action naming strategies or perhaps different patterns of preservation and impairment of various lexical components. Overall, this study provides a novel addition to the existing literature on the analysis of naming errors made by brain-damaged subjects. Not only does the study
Sari, Dwi Ivayana; Budayasa, I. Ketut; Juniati, Dwi
2017-08-01
Formulation of mathematical learning goals is now oriented not only toward cognitive products but also toward cognitive processes, one of which is probabilistic thinking. Probabilistic thinking is needed by students to make decisions. Elementary school students are required to develop probabilistic thinking as a foundation for learning probability at higher levels. A framework of students' probabilistic thinking had been developed using the SOLO taxonomy, consisting of prestructural, unistructural, multistructural and relational probabilistic thinking. This study aimed to analyze probability task completion based on this taxonomy of probabilistic thinking. The subjects were two fifth-grade students, a boy and a girl, selected by a test of mathematical ability on the basis of high math ability. The subjects were given probability tasks covering sample space, probability of an event and probability comparison. The data analysis consisted of categorization, reduction, interpretation and conclusion. Data credibility was established through time triangulation. The results showed that the boy's probabilistic thinking in completing the probability tasks was at the multistructural level, while the girl's was at the unistructural level; that is, the boy's level of probabilistic thinking was higher than the girl's. These results could help curriculum developers in formulating probability learning goals for elementary school students. Indeed, teachers could teach probability with regard to gender differences.
International Nuclear Information System (INIS)
Buergisser, H.M.; Herrnberger, V.
1981-01-01
The literature study assesses, in the form of expert analysis, geological processes and events for a 1200 km² area of northern Switzerland, with regard to repositories for medium- and high-active waste (depth 100 to 600 m and 600 to 2500 m, respectively) over the next 10⁶ years. The area, which comprises parts of the Tabular Jura, the Folded Jura and the Molasse Basin, the latter two being parts of the Alpine Orogene, has undergone a non-uniform geologic development since the Oligocene. Within the next 10⁴ to 10⁵ years a maximum earthquake intensity of VIII-IX (MSK scale) has been predicted. After this period, particularly in the southern and eastern parts of the area, glaciations will probably occur, with associated erosion of possibly 200 to 300 m. Fluvial erosion as a response to an uplift could reach similar values after 10⁵ to 10⁶ years; however, there are no data on the recent relative vertical crustal movements of the area. The risk of a meteorite impact is considered small compared to that of these factors. Seismic activity and the position and extent of faults are so poorly known within the area that the faulting probability cannot be derived at present. Flooding by the sea, intrusion of magma, diapirism, metamorphism and volcanic eruptions are not considered to be risk factors for final repositories in northern Switzerland. For the shallow-type repositories, the risks of denudation and landslides have to be judged when locality-bound projects have been proposed. (Auth.)
A meta-analysis of nonsystematic responding in delay and probability reward discounting.
Smith, Kathleen R; Lawyer, Steven R; Swift, Joshua K
2018-02-01
Delay discounting (DD) and probability discounting (PD) are behavioral measures of choice that index sensitivity to delayed and probabilistic outcomes, which are associated with a range of negative health-related outcomes. Patterns of discounting tend to be predictable, where preferences for immediate (vs. delayed) and certain (vs. probabilistic) rewards change as a function of delay and probability. However, some participants yield nonsystematic response patterns (NSR) that cannot be accounted for by theories of choice and could have implications for the validity of discounting-related experiments. Johnson and Bickel (2008) outline an algorithm for identifying NSR patterns in discounting, but the typical frequency of and methodological predictors of NSR patterns are not yet established in the extant literature. In this meta-analytic review, we identified papers for analysis by searching Web of Science, PubMed, and PsycInfo databases until November 8, 2015 for experiments identifying nonsystematic responders using Johnson and Bickel's algorithm. This yielded 114 experiments with nonsystematic data reported. The overall frequency of NSR across DD and PD studies was 18% and 19%, respectively. Nonmonetary outcomes (e.g., drugs, food, sex) yielded significantly more NSR patterns than did discounting for monetary outcomes. Participants recruited from a university setting had significantly more NSR patterns than did participants recruited from nonuniversity settings. Our review also indicates that researchers are inconsistent in whether or how they report NSR in discounting studies, which is relevant for a clearer understanding of the behavioral mechanisms that underlie impulsive choice. We make several recommendations regarding the assessment of NSR in discounting research. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
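The Johnson and Bickel screening algorithm the meta-analysis relies on can be sketched from its common description in the discounting literature: a series of indifference points is flagged as nonsystematic if any point rises above its predecessor by more than 20% of the larger later reward, or if the last point is not below the first by at least 10% of it. The thresholds and example series below are my paraphrase of that description, not values taken from this abstract:

```python
def nonsystematic(indiff_points, larger_reward=100.0):
    """Flag a discounting series per the commonly described J&B (2008) criteria."""
    # Criterion 1: a later indifference point jumps up by >20% of the reward
    c1 = any(b - a > 0.2 * larger_reward
             for a, b in zip(indiff_points, indiff_points[1:]))
    # Criterion 2: no overall discounting (last not below first by >=10%)
    c2 = (indiff_points[0] - indiff_points[-1]) < 0.1 * larger_reward
    return c1 or c2

systematic_series = [95, 80, 60, 40, 20, 10]   # orderly decline: passes
jumpy_series      = [95, 40, 90, 30, 20, 10]   # large mid-series rise: flagged
flat_series       = [95, 94, 93, 92, 91, 90]   # no discounting: flagged
```

Applied across a study's participants, the proportion of flagged series is the NSR frequency the meta-analysis pools (about 18-19% overall, per the abstract).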
Use of error files in uncertainty analysis and data adjustment
International Nuclear Information System (INIS)
Chestnutt, M.M.; McCracken, A.K.; McCracken, A.K.
1979-01-01
Some results are given from uncertainty analyses on Pressurized Water Reactor (PWR) and Fast Reactor Theoretical Benchmarks. Upper limit estimates of calculated quantities are shown to be significantly reduced by the use of ENDF/B data covariance files and recently published few-group covariance matrices. Some problems in the analysis of single-material benchmark experiments are discussed with reference to the Winfrith iron benchmark experiment. Particular attention is given to the difficulty of making use of very extensive measurements which are likely to be a feature of this type of experiment. Preliminary results of an adjustment in iron are shown
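The mechanics behind using data covariance files in such uncertainty analyses is the standard "sandwich rule": for a calculated response R with sensitivity vector s (components dR/dx_i) and data covariance matrix C, var(R) = s^T C s. The sensitivities and covariance values below are illustrative, not the ENDF/B or Winfrith numbers:

```python
import numpy as np

# Illustrative sensitivity profile of a response to three nuclear-data parameters
s = np.array([0.8, -0.3, 0.5])
# Illustrative relative covariance matrix (diagonal: variances; off-diagonal:
# correlations between the data parameters, as supplied by a covariance file)
C = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])

var_R = s @ C @ s                 # sandwich rule: var(R) = s^T C s
rel_uncertainty = np.sqrt(var_R)  # relative standard uncertainty of the response
```

Replacing a diagonal-only C with the full correlated matrix is what lets the covariance files shrink (or honestly inflate) the upper-limit estimates the abstract mentions.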
Development of safety analysis and constraint detection techniques for process interaction errors
International Nuclear Information System (INIS)
Fan, Chin-Feng; Tsai, Shang-Lin; Tseng, Wan-Hui
2011-01-01
Among the new failure modes introduced by computers into safety systems, the process interaction error is the most unpredictable and complicated failure mode, and it may cause disastrous consequences. This paper presents safety analysis and constraint detection techniques for process interaction errors among hardware, software, and human processes. Among interaction errors, the most dreadful ones are those that involve run-time misinterpretation by a logic process; we call them 'semantic interaction errors'. Such abnormal interaction is not adequately emphasized in current research. In our static analysis, we provide a fault tree template focusing on semantic interaction errors by checking conflicting pre-conditions and post-conditions among interacting processes. Thus, far-fetched but highly risky interaction scenarios involving interpretation errors can be identified. For run-time monitoring, a range of constraint types is proposed for checking abnormal signs at run time. We extend current constraints to a broader relational level and a global level, considering process/device dependencies and physical conservation rules in order to detect process interaction errors. The proposed techniques can reduce abnormal interactions; they can also be used to assist in safety-case construction.
Dumas, Raphael; Branemark, Rickard; Frossard, Laurent
2017-06-01
Quantitative assessments of prosthesis performance rely more and more frequently on gait analysis focusing on prosthetic knee joint forces and moments computed by inverse dynamics. However, this method is prone to errors, as demonstrated in comparisons with direct measurements of these forces and moments. The magnitude of errors reported in the literature seems to vary depending on the prosthetic components. Therefore, the purposes of this study were (A) to quantify and compare the magnitude of errors in knee joint forces and moments obtained with inverse dynamics and direct measurements on ten participants with transfemoral amputation during walking and (B) to investigate whether these errors can be characterised for different prosthetic knees. Knee joint forces and moments computed by inverse dynamics presented substantial errors, especially during the swing phase of gait. Indeed, the median errors in percentage of the moment magnitude were 4% and 26% in extension/flexion, 6% and 19% in adduction/abduction, and 14% and 27% in internal/external rotation during the stance and swing phases, respectively. Moreover, errors varied depending on whether the prosthetic limb was fitted with a mechanical or a microprocessor-controlled knee. This study confirmed that inverse dynamics should be used cautiously when performing gait analysis of amputees. Alternatively, direct measurement of joint forces and moments could be relevant for the mechanical characterisation of components and alignments of prosthetic limbs.
CO2 production in animals: analysis of potential errors in the doubly labeled water method
International Nuclear Information System (INIS)
Nagy, K.A.
1979-03-01
Laboratory validation studies indicate that doubly labeled water (³HH¹⁸O and ²HH¹⁸O) measurements of CO₂ production are accurate to within ±9% in nine species of mammals and reptiles, a bird, and an insect. However, in field studies, errors can be much larger under certain circumstances. Isotopic fractionation of labeled water can cause large errors in animals whose evaporative water loss comprises a major proportion of total water efflux. Input of CO₂ across lungs and skin caused errors exceeding +80% in kangaroo rats exposed to air containing 3.4% unlabeled CO₂. Analytical errors of ±1% in isotope concentrations can cause calculated rates of CO₂ production to contain errors exceeding ±70% in some circumstances. These occur: 1) when little decline in isotope concentrations has occurred during the measurement period; 2) when final isotope concentrations closely approach background levels; and 3) when the rate of water flux in an animal is high relative to its rate of CO₂ production. The following sources of error are probably negligible in most situations: 1) use of an inappropriate equation for calculating CO₂ production, 2) variations in rates of water or CO₂ flux through time, 3) use of the H₂¹⁸O dilution space as a measure of body water volume, 4) exchange of ¹⁸O between water and nonaqueous compounds in animals (including excrement), 5) incomplete mixing of isotopes in the animal, and 6) input of unlabeled water via lungs and skin. Errors in field measurements of CO₂ production can be reduced to acceptable levels (< 10%) by appropriate selection of study subjects and recapture intervals
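To make the error sensitivities concrete, the underlying calculation can be sketched with a commonly quoted form of the Lifson-McClintock equation, where N is the body water pool (mol) and kO, kH are the fractional turnover rates of the oxygen and hydrogen labels. The exact constants vary between validation studies and the paper may use a different variant, so treat this as an assumption-laden illustration:

```python
# Commonly quoted Lifson-McClintock form (an assumption; variants exist):
# rCO2 = (N / 2.08) * (kO - kH) - 0.015 * kH * N,  in mol CO2 per day
def co2_production(N, kO, kH):
    """N: body water pool (mol); kO, kH: label turnover rates (1/day)."""
    return (N / 2.08) * (kO - kH) - 0.015 * kH * N

# Illustrative small mammal: 0.5 mol body water, kO = 0.30/day, kH = 0.20/day
r = co2_production(N=0.5, kO=0.30, kH=0.20)
```

Because the estimate rests on the small difference kO - kH, the abstract's point follows directly: when isotope concentrations have barely declined, or water flux (kH) is large relative to CO₂ production, a ±1% analytical error in either rate is amplified enormously in the difference.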
SYNTACTIC ERROR ANALYSIS IN THE CASUAL CONVERSATION COMMITTED BY TWO SENIOR HIGH STUDENTS
Directory of Open Access Journals (Sweden)
Anjar Setiawan
2017-12-01
Full Text Available Syntactic structures are the basis of English grammar. This study was aimed at analyzing the syntactic errors committed in casual conversation by two senior high students of MAN 2 Semarang. The researcher used a qualitative approach to analyze and interpret the meaning of the casual conversation. Furthermore, the data collection was transcribed and analyzed based on the areas of syntactic error analysis. The findings of the study showed that errors in all syntactic areas occurred during the conversation, including auxiliaries, tenses, articles, prepositions, and conjunctions. Both speakers also had a relatively weak vocabulary and produced sentences that were sometimes incomprehensible to the interlocutor.
Directory of Open Access Journals (Sweden)
Zhigao Zeng
2016-01-01
Full Text Available This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design class feature matrices, after extracting image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroid classifier. As demonstrated by the experimental results, our method is fast and can achieve a high classification accuracy rate, with an added benefit of robustness in tackling noise.
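The final stage the abstract names, assigning each dimension-reduced feature vector to the class with the nearest centroid, can be sketched as follows. The toy 2-D features stand in for the reduced halftone features; the feature extraction and spectral regression steps are not reproduced here:

```python
import numpy as np

def fit_centroids(X, y):
    """Mean feature vector per class label."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(X, centroids):
    """Assign each row of X to the class with the nearest centroid."""
    labels = list(centroids)
    D = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array([labels[i] for i in D.argmin(axis=0)])

# Toy stand-in for reduced features of two halftone classes
rng = np.random.default_rng(1)
X0 = rng.normal([0.0, 0.0], 0.3, size=(20, 2))
X1 = rng.normal([2.0, 2.0], 0.3, size=(20, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 20 + [1] * 20)

cents = fit_centroids(X, y)
acc = (predict(X, cents) == y).mean()   # near 1.0 on this well-separated toy set
```

The appeal of this classifier, and presumably why the paper's method is fast, is that prediction costs one distance per class regardless of training-set size.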
Analysis of Factors Influencing Mayo Adhesive Probability Score in Partial Nephrectomy
Ji, Chaoyue; Tang, Shiying; Yang, Kunlin; Xiong, Gengyan; Fang, Dong; Zhang, Cuijian; Li, Xuesong; Zhou, Liqun
2017-01-01
Background To retrospectively explore the factors influencing Mayo Adhesive Probability (MAP) score in the setting of partial nephrectomy. Material/Methods Data of 93 consecutive patients who underwent laparoscopic and open partial nephrectomy from September 2015 to June 2016 were collected and analyzed retrospectively. Preoperative radiological elements were independently assessed by 2 readers. Ordinal logistic regression analyses were performed to evaluate radiological and clinicopathologic influencing factors of MAP score. Results On univariate analysis, MAP score was associated with male sex, older age, higher body mass index (BMI), history of hypertension and diabetes mellitus, and perirenal fat thickness (posterolateral, lateral, anterior, anterolateral, and medial). On multivariate analysis, only posterolateral perirenal fat thickness (odds ratio [OR]=0.88 [0.82–0.95], p=0.001), medial perirenal fat thickness (OR=0.90 [0.83–0.98], p=0.01), and history of diabetes mellitus (OR=5.42 [1.74–16.86], p=0.004) remained statistically significant. Tumor type (malignant vs. benign) was not statistically different. In patients with renal cell carcinoma (RCC), there was no difference in tumor stage or grade. Conclusions MAP score is significantly correlated with some preoperative factors such as posterolateral and medial perirenal fat thickness and diabetes mellitus. A new radioclinical scoring system including these patient-specific factors may become a better predictive tool than MAP score alone. PMID:29261641
Wavefront-error evaluation by mathematical analysis of experimental Foucault-test data
Wilson, R. G.
1975-01-01
The diffraction theory of the Foucault test provides an integral formula expressing the complex amplitude and irradiance distribution in the Foucault pattern of a test mirror (lens) as a function of wavefront error. Recent literature presents methods of inverting this formula to express wavefront error in terms of irradiance in the Foucault pattern. The present paper describes a study in which the inversion formulation was applied to photometric Foucault-test measurements on a nearly diffraction-limited mirror to determine wavefront errors for direct comparison with ones determined from scatter-plate interferometer measurements. The results affirm the practicability of the Foucault test for quantitative wavefront analysis of very small errors, and they reveal the fallacy of the prevalent belief that the test is limited to qualitative use only. Implications of the results with regard to optical testing and the potential use of the Foucault test for wavefront analysis in orbital space telescopes are discussed.
Study on Network Error Analysis and Locating based on Integrated Information Decision System
Yang, F.; Dong, Z. H.
2017-10-01
Integrated information decision system (IIDS) integrates multiple sub-systems developed by many facilities, comprising almost a hundred kinds of software and providing various services such as email, short messages, drawing and sharing. Because the underlying protocols differ and user standards are not unified, many errors occur during setup, configuration, and operation, which seriously affects usage. Because these errors are varied and may occur in different operation phases, stages, TCP/IP protocol layers, and sub-system software, it is necessary to design a network error analysis and locating tool for IIDS to solve the above problems. This paper studies network error analysis and locating based on IIDS, which provides strong theoretical and technological support for the running and communication of IIDS.
Why Is Rainfall Error Analysis Requisite for Data Assimilation and Climate Modeling?
Hou, Arthur Y.; Zhang, Sara Q.
2004-01-01
Given the large temporal and spatial variability of precipitation processes, errors in rainfall observations are difficult to quantify yet crucial to making effective use of rainfall data for improving atmospheric analysis, weather forecasting, and climate modeling. We highlight the need for developing a quantitative understanding of systematic and random errors in precipitation observations by examining explicit examples of how each type of errors can affect forecasts and analyses in global data assimilation. We characterize the error information needed from the precipitation measurement community and how it may be used to improve data usage within the general framework of analysis techniques, as well as accuracy requirements from the perspective of climate modeling and global data assimilation.
Human error and the problem of causality in analysis of accidents
DEFF Research Database (Denmark)
Rasmussen, Jens
1990-01-01
Present technology is characterized by complexity, rapid change and growing size of technical systems. This has caused increasing concern with the human involvement in system safety. Analyses of the major accidents during recent decades have concluded that human errors on the part of operators, designers or managers have played a major role. There are, however, several basic problems in analysis of accidents and identification of human error. This paper addresses the nature of causal explanations and the ambiguity of the rules applied for identification of the events to include in analysis and for termination of the search for 'causes'. In addition, the concept of human error is analysed and its intimate relation with human adaptation and learning is discussed. It is concluded that identification of errors as a separate class of behaviour is becoming increasingly difficult in modern work environments.
Mars Entry Atmospheric Data System Modeling, Calibration, and Error Analysis
Karlgaard, Christopher D.; VanNorman, John; Siemers, Paul M.; Schoenenberger, Mark; Munk, Michelle M.
2014-01-01
The Mars Science Laboratory (MSL) Entry, Descent, and Landing Instrumentation (MEDLI)/Mars Entry Atmospheric Data System (MEADS) project installed seven pressure ports through the MSL Phenolic Impregnated Carbon Ablator (PICA) heatshield to measure heatshield surface pressures during entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the dynamic pressure, angle of attack, and angle of sideslip. This report describes the calibration of the pressure transducers utilized to reconstruct the atmospheric data and associated uncertainty models, pressure modeling and uncertainty analysis, and system performance results. The results indicate that the MEADS pressure measurement system hardware meets the project requirements.
International Nuclear Information System (INIS)
Bickel, J.H.
1983-01-01
The quantification of the containment event trees in the Millstone Unit 3 Probabilistic Safety Study utilizes a conditional probability of failure given high pressure which is based on a new approach. The generation of this conditional probability was based on a weakest link failure mode model which considered contributions from a number of overlapping failure modes. This overlap effect was due to a number of failure modes whose mean failure pressures were clustered within a 5 psi range and which had uncertainties due to variances in material strengths and analytical uncertainties which were between 9 and 15 psi. Based on a review of possible probability laws to describe the failure probability of individual structural failure modes, it was determined that a Weibull probability law most adequately described the randomness in the physical process of interest. The resultant conditional probability of failure is found to have a median failure pressure of 132.4 psia. The corresponding 5-95 percentile values are 112 psia and 146.7 psia respectively. The skewed nature of the conditional probability of failure vs. pressure results in a lower overall containment failure probability for an appreciable number of the severe accident sequences of interest, but also probabilities which are more rigorously traceable from first principles
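The reported quantiles pin down a two-parameter Weibull almost completely. As an illustrative sketch (the study's actual curve came from a weakest-link model over several overlapping failure modes, so this simplification is only approximate), the following solves for the Weibull shape and scale that reproduce the reported median (132.4 psia) and 5th percentile (112 psia), then checks the implied 95th percentile:

```python
import math

def weibull_from_quantiles(q50, q05):
    """Solve for two-parameter Weibull shape k and scale c such that
    the 50th and 5th percentiles match the given values.
    Quantile function: q(u) = c * (-ln(1 - u))**(1/k)."""
    a50 = -math.log(1 - 0.50)
    a05 = -math.log(1 - 0.05)
    k = math.log(a50 / a05) / math.log(q50 / q05)
    c = q50 / a50 ** (1.0 / k)
    return k, c

def weibull_quantile(u, k, c):
    return c * (-math.log(1 - u)) ** (1.0 / k)

k, c = weibull_from_quantiles(132.4, 112.0)
q95 = weibull_quantile(0.95, k, c)
print(f"shape k = {k:.1f}, scale c = {c:.1f} psia, implied 95th pct = {q95:.1f} psia")
```

The implied 95th percentile comes out close to the reported 146.7 psia; the residual gap reflects the fuller weakest-link treatment of overlapping failure modes.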
Ebhardt, H. Alexander; Tsang, Herbert H.; Dai, Denny C.; Liu, Yifeng; Bostan, Babak; Fahlman, Richard P.
2009-01-01
Recent advances in DNA-sequencing technology have made it possible to obtain large datasets of small RNA sequences. Here we demonstrate that not all non-perfectly matched small RNA sequences are simple technological sequencing errors, but many hold valuable biological information. Analysis of three small RNA datasets originating from Oryza sativa and Arabidopsis thaliana small RNA-sequencing projects demonstrates that many single nucleotide substitution errors overlap when aligning homologous...
Fiske, David R.
2004-01-01
In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently on data with explicit reflection symmetries.
Radin, Rose G; Rothman, Kenneth J; Hatch, Elizabeth E; Mikkelsen, Ellen M; Sorensen, Henrik T; Riis, Anders H; Fox, Matthew P; Wise, Lauren A
2015-11-01
Epidemiologic studies of fecundability often use retrospectively measured time-to-pregnancy (TTP), thereby introducing potential for recall error. Little is known about how recall error affects the bias and precision of the fecundability odds ratio (FOR) in such studies. Using data from the Danish Snart-Gravid Study (2007-12), we quantified error for TTP recalled in the first trimester of pregnancy relative to prospectively measured TTP among 421 women who enrolled at the start of their pregnancy attempt and became pregnant within 12 months. We defined recall error as retrospectively measured TTP minus prospectively measured TTP. Using linear regression, we assessed mean differences in recall error by maternal characteristics. We evaluated the resulting bias in the FOR and 95% confidence interval (CI) using simulation analyses that compared corrected and uncorrected retrospectively measured TTP values. Recall error (mean = -0.11 months, 95% CI -0.25, 0.04) was not appreciably associated with maternal age, gravidity, or recent oral contraceptive use. Women with TTP > 2 months were more likely to underestimate their TTP than women with TTP ≤ 2 months (unadjusted mean difference in error: -0.40 months, 95% CI -0.71, -0.09). FORs of recent oral contraceptive use calculated from prospectively measured, retrospectively measured, and corrected TTPs were 0.82 (95% CI 0.67, 0.99), 0.74 (95% CI 0.61, 0.90), and 0.77 (95% CI 0.62, 0.96), respectively. Recall error was small on average among pregnancy planners who became pregnant within 12 months. Recall error biased the FOR of recent oral contraceptive use away from the null by 10%. Quantitative bias analysis of the FOR can help researchers quantify the bias from recall error. © 2015 John Wiley & Sons Ltd.
Failure probability under parameter uncertainty.
Gerrard, R; Tsanakas, A
2011-05-01
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
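The paper's central effect can be reproduced with a short Monte Carlo sketch (all numbers below are hypothetical, not taken from the article): a decision-maker sets the threshold at the plug-in 99th percentile estimated from a small lognormal sample, and the expected frequency of exceedances comes out above the nominal 1%:

```python
import math
import random
import statistics

random.seed(7)
MU, SIGMA = 0.0, 1.0        # true (unknown to the decision-maker) lognormal parameters
NOMINAL = 0.01              # required failure probability
N, TRIALS = 30, 4000        # sample size per trial, Monte Carlo trials
Z99 = 2.3263478740408408    # standard normal 99th-percentile quantile

def true_exceedance(threshold):
    """P(X > t) for X ~ Lognormal(MU, SIGMA), via the normal tail."""
    z = (math.log(threshold) - MU) / SIGMA
    return 0.5 * math.erfc(z / math.sqrt(2))

freqs = []
for _ in range(TRIALS):
    logs = [random.gauss(MU, SIGMA) for _ in range(N)]  # observed log-data sample
    mu_hat = statistics.fmean(logs)
    sd_hat = statistics.pstdev(logs)                    # MLE scale estimate
    t_hat = math.exp(mu_hat + Z99 * sd_hat)             # plug-in 99% threshold
    freqs.append(true_exceedance(t_hat))

expected_failure = statistics.fmean(freqs)
print(f"nominal {NOMINAL:.3f} vs expected failure frequency {expected_failure:.3f}")
```

With parameters estimated from only 30 observations, the realized expected failure frequency exceeds the nominal 1%, which is the phenomenon the article quantifies exactly for location-scale families.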
Detecting medication errors in the New Zealand pharmacovigilance database: a retrospective analysis.
Kunac, Desireé L; Tatley, Michael V
2011-01-01
Despite the traditional focus being adverse drug reactions (ADRs), pharmacovigilance centres have recently been identified as a potentially rich and important source of medication error data. To identify medication errors in the New Zealand Pharmacovigilance database (Centre for Adverse Reactions Monitoring [CARM]), and to describe the frequency and characteristics of these events. A retrospective analysis of the CARM pharmacovigilance database operated by the New Zealand Pharmacovigilance Centre was undertaken for the year 1 January-31 December 2007. All reports, excluding those relating to vaccines, clinical trials and pharmaceutical company reports, underwent a preventability assessment using predetermined criteria. Those events deemed preventable were subsequently classified to identify the degree of patient harm, type of error, stage of medication use process where the error occurred and origin of the error. A total of 1412 reports met the inclusion criteria and were reviewed, of which 4.3% (61/1412) were deemed preventable. Not all errors resulted in patient harm: 29.5% (18/61) were 'no harm' errors but 65.5% (40/61) of errors were deemed to have been associated with some degree of patient harm (preventable adverse drug events [ADEs]). For 5.0% (3/61) of events, the degree of patient harm was unable to be determined as the patient outcome was unknown. The majority of preventable ADEs (62.5% [25/40]) occurred in adults aged 65 years and older. The medication classes most involved in preventable ADEs were antibacterials for systemic use and anti-inflammatory agents, with gastrointestinal and respiratory system disorders the most common adverse events reported. For both preventable ADEs and 'no harm' events, most errors were incorrect dose and drug therapy monitoring problems consisting of failures in detection of significant drug interactions, past allergies or lack of necessary clinical monitoring. Preventable events were mostly related to the prescribing and
Review of human error analysis methodologies and case study for accident management
International Nuclear Information System (INIS)
Jung, Won Dae; Kim, Jae Whan; Lee, Yong Hee; Ha, Jae Joo
1998-03-01
In this research, we tried to establish the requirements for the development of a new human error analysis method. To achieve this goal, we performed a case study in the following steps: 1. review of the existing HEA methods; 2. selection of those methods considered appropriate for the analysis of operators' tasks in NPPs; 3. choice of tasks for the application. The methods selected for the case study were HRMS (Human Reliability Management System), PHECA (Potential Human Error Cause Analysis), and CREAM (Cognitive Reliability and Error Analysis Method); the tasks chosen for the application were the 'bleed and feed operation' and 'decision-making for the reactor cavity flooding' tasks. We measured the applicability of the selected methods to the NPP tasks and evaluated the advantages and disadvantages of each method. All three methods turned out to be applicable for the prediction of human error. We concluded that both CREAM and HRMS have sufficient applicability for NPP tasks; however, comparing the two methods, CREAM is considered more appropriate than HRMS from the viewpoint of overall requirements. The requirements for the new HEA method obtained from the study can be summarized as follows: firstly, it should deal with cognitive error analysis; secondly, it should have an adequate classification system for NPP tasks; thirdly, the description of error causes and error mechanisms should be explicit; fourthly, it should maintain the consistency of the result by minimizing ambiguity in each step of the analysis procedure; fifthly, it should be achievable with acceptable human resources. (author). 25 refs., 30 tabs., 4 figs
De-noising of GPS structural monitoring observation error using wavelet analysis
Directory of Open Access Journals (Sweden)
Mosbeh R. Kaloop
2016-03-01
Full Text Available In the process of continuously monitoring a structure's state properties, such as static and dynamic responses, using the Global Positioning System (GPS), there are unavoidable errors in the observation data. These GPS errors and measurement noises are detrimental in precise monitoring applications because they obscure the signals of interest. The current study applies three methods, widely used to mitigate sensor observation errors, all based on wavelet analysis: the principal component analysis method, the wavelet compression method, and the de-noising method. These methods are used to de-noise the GPS observation errors and to prove their performance using GPS measurements collected from the short-time monitoring system designed for the Mansoura Railway Bridge in Egypt. The results show that GPS errors can be effectively removed, while the full movement components of the structure can be extracted from the original signals using wavelet analysis.
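A minimal one-level Haar sketch illustrates the de-noising idea: transform, soft-threshold the detail coefficients, and reconstruct. The study used real GPS records and richer multi-level wavelet tools; the signal, noise level, and threshold below are hypothetical stand-ins:

```python
import math
import random

def haar_decompose(x):
    """One-level orthonormal Haar transform: approximation and detail coefficients."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_reconstruct(a, d):
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / math.sqrt(2))
        x.append((ai - di) / math.sqrt(2))
    return x

def soft(v, t):
    """Soft thresholding: shrink toward zero by t."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def denoise(x, threshold):
    a, d = haar_decompose(x)
    return haar_reconstruct(a, [soft(di, threshold) for di in d])

# Synthetic "GPS displacement" record: slow structural response plus white noise.
random.seed(1)
n = 512
truth = [5e-3 * math.sin(2 * math.pi * i / 128) for i in range(n)]  # metres
noisy = [t + random.gauss(0, 1e-3) for t in truth]
clean = denoise(noisy, threshold=2e-3)

rmse_noisy = math.sqrt(sum((a - b) ** 2 for a, b in zip(noisy, truth)) / n)
rmse_clean = math.sqrt(sum((a - b) ** 2 for a, b in zip(clean, truth)) / n)
print(f"RMSE noisy = {rmse_noisy:.2e} m, de-noised = {rmse_clean:.2e} m")
```

The detail band carries mostly noise for a slowly varying structural response, so thresholding it reduces the error while preserving the movement component held in the approximation coefficients.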
Analysis of the probability of channel satisfactory state in P2P live ...
African Journals Online (AJOL)
In this paper, a model based on user behaviour of P2P live streaming systems was developed in order to analyse one of the key QoS parameters of such systems, i.e. the probability of the channel-satisfactory state; the impact of upload bandwidths and channels' popularity on the probability of the channel-satisfactory state was also ...
Influencing the Probability for Graduation at Four-Year Institutions: A Multi-Model Analysis
Cragg, Kristina M.
2009-01-01
The purpose of this study is to identify student and institutional characteristics that influence the probability for graduation. The study delves further into the probability for graduation by examining how far the student deviates from the institutional mean with respect to academics and affordability; this concept is referred to as the "match."…
The Analysis of Insulation Breakdown Probabilities by the Up-And-Down Method
DEFF Research Database (Denmark)
Vibholm (fratrådt), Svend; Thyregod, Poul
1986-01-01
This paper discusses the assessment of breakdown probability by means of the up-and-down method. The Dixon and Mood approximation to the maximum-likelihood estimate is compared with the exact maximum-likelihood estimate for a number of response patterns. Estimates of the 50% breakdown probability...
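The procedure itself is simple to simulate. The sketch below uses a hypothetical probit response curve and step size, and a plain mean-of-levels estimator as a simplified stand-in for the Dixon and Mood maximum-likelihood approximation that the paper compares against the exact estimate:

```python
import math
import random

random.seed(3)
V50_TRUE, SIGMA = 100.0, 4.0   # hypothetical true 50% breakdown voltage (kV) and spread
STEP = 2.0                     # up-and-down voltage step (kV)

def breaks_down(v):
    """Bernoulli breakdown with a probit (cumulative normal) response curve."""
    p = 0.5 * math.erfc((V50_TRUE - v) / (SIGMA * math.sqrt(2)))
    return random.random() < p

levels = []
v = 94.0                       # deliberately low starting level
for _ in range(300):
    levels.append(v)
    # Step down after a breakdown, up after a withstand.
    v = v - STEP if breaks_down(v) else v + STEP

# Discard the initial transient, then average the applied levels.
v50_hat = sum(levels[50:]) / len(levels[50:])
print(f"estimated 50% breakdown level = {v50_hat:.1f} kV (true {V50_TRUE} kV)")
```

The staircase concentrates test levels around the 50% point, which is what makes the up-and-down design statistically efficient for estimating it.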
Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers
Directory of Open Access Journals (Sweden)
Zheng You
2013-04-01
Full Text Available The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, however, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method prove to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.
Optical system error analysis and calibration method of high-accuracy star trackers.
Sun, Ting; Xing, Fei; You, Zheng
2013-04-08
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, however, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method prove to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.
Outcomes of a Failure Mode and Effects Analysis for medication errors in pediatric anesthesia.
Martin, Lizabeth D; Grigg, Eliot B; Verma, Shilpa; Latham, Gregory J; Rampersad, Sally E; Martin, Lynn D
2017-06-01
The Institute of Medicine has called for development of strategies to prevent medication errors, which are one important cause of preventable harm. Although the field of anesthesiology is considered a leader in patient safety, recent data suggest high medication error rates in anesthesia practice. Unfortunately, few error prevention strategies for anesthesia providers have been implemented. Using Toyota Production System quality improvement methodology, a multidisciplinary team observed 133 h of medication practice in the operating room at a tertiary care freestanding children's hospital. A failure mode and effects analysis was conducted to systematically deconstruct and evaluate each medication handling process step and score possible failure modes to quantify areas of risk. A bundle of five targeted countermeasures was identified and implemented over 12 months. Improvements in syringe labeling (73 to 96%), standardization of medication organization in the anesthesia workspace (0 to 100%), and two-provider infusion checks (23 to 59%) were observed. Medication error reporting improved during the project and was subsequently maintained. After intervention, the median medication error rate decreased from 1.56 to 0.95 per 1000 anesthetics. The frequency of medication error harm events reaching the patient also decreased. Systematic evaluation and standardization of medication handling processes by anesthesia providers in the operating room can decrease medication errors and improve patient safety. © 2017 John Wiley & Sons Ltd.
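A failure mode and effects analysis scores each failure mode on several axes and ranks the results to prioritize countermeasures. The abstract does not give the team's exact scoring scheme, so the sketch below uses the classic Risk Priority Number (severity × occurrence × detection difficulty) with hypothetical medication-handling failure modes:

```python
# Hypothetical failure modes for medication handling; each axis scored 1-10.
failure_modes = [
    # (process step, failure mode, severity, occurrence, detection difficulty)
    ("draw-up",  "syringe unlabeled",         7, 6, 5),
    ("infusion", "pump programmed 10x dose",  9, 3, 6),
    ("storage",  "look-alike vials adjacent", 8, 4, 7),
]

def rpn(severity, occurrence, detection):
    """Classic FMEA Risk Priority Number."""
    return severity * occurrence * detection

# Rank failure modes from highest to lowest risk.
ranked = sorted(failure_modes, key=lambda m: rpn(*m[2:]), reverse=True)
for step, mode, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):3d}  {step:8s} {mode}")
```

The highest-RPN modes are the ones that would attract targeted countermeasures first, mirroring how the team quantified "areas of risk" before designing its bundle.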
A Linguistic Analysis of Errors in the Compositions of Arba Minch University Students
Directory of Open Access Journals (Sweden)
Yoseph Tizazu
2014-06-01
Full Text Available This study reports the dominant linguistic errors that occur in the written productions of Arba Minch University (hereafter AMU) students. A sample of paragraphs was collected over two years from students ranging from freshman to graduating level. The sampled compositions were then coded, described, and explained using the error analysis method. Both quantitative and qualitative analyses showed that almost all components of the English language (such as orthography, morphology, syntax, mechanics, and semantics) in learners' compositions have been affected by the errors. On the basis of the surface structures affected by the errors, the following kinds of errors were identified: addition of an auxiliary (*I was read by gass light), omission of a verb (*Sex before marriage ^ many disadvantages), misformation in word class (*riskable for risky) and misordering of major constituents in utterances (*I joined in 2003 Arba minch university). The study also identified two causes that triggered learners' errors: intralingual and interlingual. The majority of the errors, however, were attributed to intralingual causes, mainly resulting from a lack of full mastery of the basics of the English language.
Error Analysis of Satellite Precipitation-Driven Modeling of Flood Events in Complex Alpine Terrain
Directory of Open Access Journals (Sweden)
Yiwen Mei
2016-03-01
Full Text Available The error in satellite precipitation-driven complex terrain flood simulations is characterized in this study for eight different global satellite products and 128 flood events over the Eastern Italian Alps. The flood events are grouped according to two flood types: rain floods and flash floods. The satellite precipitation products and runoff simulations are evaluated based on systematic and random error metrics applied on the matched event pairs and basin-scale event properties (i.e., rainfall and runoff cumulative depth and time series shape. Overall, error characteristics exhibit dependency on the flood type. Generally, timing of the event precipitation mass center and dispersion of the time series derived from satellite precipitation exhibit good agreement with the reference; the cumulative depth is mostly underestimated. The study shows a dampening effect in both systematic and random error components of the satellite-driven hydrograph relative to the satellite-retrieved hyetograph. The systematic error in shape of the time series shows a significant dampening effect. The random error dampening effect is less pronounced for the flash flood events and the rain flood events with a high runoff coefficient. This event-based analysis of the satellite precipitation error propagation in flood modeling sheds light on the application of satellite precipitation in mountain flood hydrology.
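A systematic/random split of this kind is commonly computed as a mean bias plus a centred (bias-removed) RMSE. A minimal sketch with hypothetical event depths (not the study's data, and not necessarily its exact metric definitions):

```python
import math

# Hypothetical event rainfall depths (mm): reference gauges vs a satellite product.
reference = [42.0, 18.5, 60.2, 33.1, 25.7, 48.9]
satellite = [35.4, 16.0, 49.8, 30.2, 24.1, 40.5]

errors = [s - r for s, r in zip(satellite, reference)]
n = len(errors)

bias = sum(errors) / n                                       # systematic component
crmse = math.sqrt(sum((e - bias) ** 2 for e in errors) / n)  # random component
rel_bias = bias / (sum(reference) / n)

print(f"systematic error (bias): {bias:.2f} mm ({100 * rel_bias:.0f}%)")
print(f"random error (centred RMSE): {crmse:.2f} mm")
```

In this toy set every satellite depth is below the gauge value, so the bias is negative, echoing the abstract's finding that cumulative depth is mostly underestimated.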
LEARNING FROM MISTAKES Error Analysis in the English Speech of Indonesian Tertiary Students
Directory of Open Access Journals (Sweden)
Imelda Gozali
2017-12-01
Full Text Available This study is part of a series of Classroom Action Research conducted with the aim of improving the English speech of students in one of the tertiary institutes in Indonesia. After some years of teaching English conversation, the writer noted that students made various types of errors in their speech, which can be classified generally into morphological, phonological, and lexical. While some of the errors are still generally acceptable, others elicit laughter or inhibit comprehension altogether. Therefore, the writer was keen to analyze the more common errors made by the students, so as to be able to compile teaching material that could be used to address those errors more effectively in future classes. This research used Error Analysis by Richards (1971) as the basis of classification. It was carried out in five classes with a total of 80 students over a period of one semester (14 weeks). The results showed that most of the errors were phonological (errors in pronunciation), while others were morphological or grammatical in nature. This prompted the writer to design simple Phonics lessons for future classes.
An analysis of problems in statistics and probability in second year educational text books
Directory of Open Access Journals (Sweden)
Nicolás Andrés Sánchez
2017-09-01
Full Text Available At present, society demands that every citizen develop the capacity to interpret and question the various phenomena presented in tables, graphs and data, capacities that must be developed progressively from the earliest years of education. For this, it is also necessary that resources such as the Mathematics textbook support the development of these skills. The objective of the present work is to analyze the types of problems proposed in two secondary Mathematics textbooks in the thematic area of Statistics and Probability. Both texts were tendered and distributed free of charge and correspond to two different curricular periods: (1) the one in which the old curriculum bases were in force, and (2) the one in which the current curricula were implemented. The use of the school textbook by students and teachers assumes the premise that the various tasks proposed should tend toward problem solving. The research was carried out with a qualitative methodology through content analysis, using the theoretical categories proposed by Díaz and Poblete (2001). Among the results, mostly routine problems that serve to mechanize processes are identified; non-routine or real-context problems appear in very few cases.
A joint probability density function of wind speed and direction for wind energy analysis
International Nuclear Information System (INIS)
Carta, Jose A.; Ramirez, Penelope; Bueno, Celia
2008-01-01
A very flexible joint probability density function of wind speed and direction is presented in this paper for use in wind energy analysis. A method that enables angular-linear distributions to be obtained with specified marginal distributions has been used for this purpose. For the marginal distribution of wind speed we use a singly truncated from below Normal-Weibull mixture distribution. The marginal distribution of wind direction comprises a finite mixture of von Mises distributions. The proposed model is applied in this paper to wind direction and wind speed hourly data recorded at several weather stations located in the Canary Islands (Spain). The suitability of the distributions is judged from the coefficient of determination R². The conclusions reached are that the joint distribution proposed in this paper: (a) can represent unimodal, bimodal and bitangential wind speed frequency distributions, (b) takes into account the frequency of null winds, (c) represents the wind direction regimes in zones with several modes or prevailing wind directions, (d) takes into account the correlation between wind speeds and its directions. It can therefore be used in several tasks involved in the evaluation process of the wind resources available at a potential site. We also conclude that, in the case of the Canary Islands, the proposed model provides better fits in all the cases analysed than those obtained with the models used in the specialised literature on wind energy.
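A stripped-down version of such a joint density can be sketched as a Weibull speed marginal times a finite von Mises mixture for direction. Note the simplifications: the paper's speed marginal is a truncated Normal-Weibull mixture and the marginals are coupled through an angular-linear construction, whereas this sketch assumes independence; all parameter values are hypothetical:

```python
import math

def bessel_i0(x, terms=30):
    """Modified Bessel function I0 via its power series (normalizes von Mises)."""
    return sum((x / 2.0) ** (2 * m) / math.factorial(m) ** 2 for m in range(terms))

def weibull_pdf(v, k, c):
    if v < 0:
        return 0.0
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def von_mises_pdf(theta, mu, kappa):
    return math.exp(kappa * math.cos(theta - mu)) / (2 * math.pi * bessel_i0(kappa))

def direction_pdf(theta, components):
    """Finite von Mises mixture: components = [(weight, mu, kappa), ...]."""
    return sum(w * von_mises_pdf(theta, mu, kappa) for w, mu, kappa in components)

# Hypothetical trade-wind site: Weibull speed marginal, two prevailing directions.
SPEED = (2.1, 8.5)  # Weibull shape k and scale c (m/s)
DIRS = [(0.7, math.radians(45), 4.0), (0.3, math.radians(200), 2.0)]

def joint_pdf(v, theta):
    # Independence is assumed here for brevity; the paper couples the marginals
    # through an angular-linear construction to capture speed-direction correlation.
    return weibull_pdf(v, *SPEED) * direction_pdf(theta, DIRS)

print(f"f(v = 7 m/s, theta = 45 deg) = {joint_pdf(7.0, math.radians(45)):.4f}")
```

Replacing the independent product with an angular-linear coupling (and the Weibull with the truncated Normal-Weibull mixture) recovers the flexibility properties (a)-(d) claimed in the abstract.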
Directory of Open Access Journals (Sweden)
Yun Shi
2014-01-01
Full Text Available Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.
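One practical consequence of multiplicative errors is that the error variance grows with the true value, so an ordinary least squares fit (which implicitly assumes additive homoscedastic errors) is less efficient than one weighted inversely to the squared signal. The Monte Carlo sketch below illustrates this for a line through the origin, a toy setting rather than the paper's actual derivations:

```python
import random
import statistics

random.seed(11)
B_TRUE, SIGMA = 2.0, 0.05  # true slope; 5% multiplicative noise
xs = [float(x) for x in range(1, 11)]

def simulate():
    """One survey: y_i = b * x_i * (1 + eps_i), eps_i ~ N(0, SIGMA^2)."""
    return [B_TRUE * x * (1 + random.gauss(0, SIGMA)) for x in xs]

def fit_ols(ys):
    # Ordinary LS through the origin: pretends the errors are additive.
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def fit_weighted(ys):
    # LS with weights 1/x^2, matching the multiplicative variance x^2 * SIGMA^2.
    return statistics.fmean(y / x for x, y in zip(xs, ys))

ols = [fit_ols(simulate()) for _ in range(3000)]
wls = [fit_weighted(simulate()) for _ in range(3000)]
print(f"slope std: OLS {statistics.stdev(ols):.4f} vs weighted {statistics.stdev(wls):.4f}")
```

Both estimators are unbiased here, but the properly weighted one has visibly smaller spread; ignoring the multiplicative structure, as in naive DEM construction, pays a precision penalty.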
Practical Implementation and Error Analysis of PSCPWM-Based Switching Audio Power Amplifiers
DEFF Research Database (Denmark)
Christensen, Frank Schwartz; Frederiksen, Thomas Mansachs; Andersen, Michael Andreas E.
1999-01-01
The paper presents an in-depth analysis of practical results for Parallel Phase-Shifted Carrier Pulse-Width Modulation (PSCPWM) - amplifier. Spectral analyses of error sources involved in PSCPWM are presented. The analysis is performed both by numerical means in MATLAB and by simulation in PSPICE...
Statistical design and analysis for plant cover studies with multiple sources of observation errors
Wright, Wilson; Irvine, Kathryn M.; Warren, Jeffrey M .; Barnett, Jenny K.
2017-01-01
Effective wildlife habitat management and conservation requires understanding the factors influencing distribution and abundance of plant species. Field studies, however, have documented observation errors in visually estimated plant cover including measurements which differ from the true value (measurement error) and not observing a species that is present within a plot (detection error). Unlike the rapid expansion of occupancy and N-mixture models for analysing wildlife surveys, development of statistical models accounting for observation error in plants has not progressed quickly. Our work informs development of a monitoring protocol for managed wetlands within the National Wildlife Refuge System.Zero-augmented beta (ZAB) regression is the most suitable method for analysing areal plant cover recorded as a continuous proportion but assumes no observation errors. We present a model extension that explicitly includes the observation process thereby accounting for both measurement and detection errors. Using simulations, we compare our approach to a ZAB regression that ignores observation errors (naïve model) and an “ad hoc” approach using a composite of multiple observations per plot within the naïve model. We explore how sample size and within-season revisit design affect the ability to detect a change in mean plant cover between 2 years using our model.Explicitly modelling the observation process within our framework produced unbiased estimates and nominal coverage of model parameters. The naïve and “ad hoc” approaches resulted in underestimation of occurrence and overestimation of mean cover. The degree of bias was primarily driven by imperfect detection and its relationship with cover within a plot. Conversely, measurement error had minimal impacts on inferences. We found >30 plots with at least three within-season revisits achieved reasonable posterior probabilities for assessing change in mean plant cover.For rapid adoption and application, code
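The direction of the naive biases reported above is easy to reproduce in a toy simulation: if detection probability rises with cover, the plots that get missed are mostly sparse ones, so occurrence is underestimated while mean cover given presence is overestimated. All numbers below are hypothetical:

```python
import random
import statistics

random.seed(5)

# Hypothetical true cover proportions for 200 plots (zeros = species absent).
true_cover = [0.0, 0.0, 0.15, 0.40, 0.05, 0.60, 0.25, 0.0, 0.10, 0.35] * 20

def detect_prob(cover):
    """Detection improves with cover: sparse patches are easy to miss."""
    return min(1.0, cover / 0.30)

# One visit per plot; an undetected species is recorded as absent (cover 0).
observed = [c if random.random() < detect_prob(c) else 0.0 for c in true_cover]

true_occ = sum(c > 0 for c in true_cover) / len(true_cover)
naive_occ = sum(c > 0 for c in observed) / len(observed)
true_cond = statistics.fmean(c for c in true_cover if c > 0)
naive_cond = statistics.fmean(c for c in observed if c > 0)

print(f"occupancy: true {true_occ:.2f}, naive {naive_occ:.2f} (underestimated)")
print(f"mean cover given presence: true {true_cond:.3f}, naive {naive_cond:.3f} (overestimated)")
```

Modelling the detection process explicitly, as the zero-augmented beta extension does, is what removes this pair of biases; repeated within-season visits supply the information needed to estimate detection.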
Gharouni-Nik, Morteza; Naeimi, Meysam; Ahadi, Sodayf; Alimoradi, Zahra
2014-06-01
In order to determine the overall safety of a tunnel support lining, a reliability-based approach is presented in this paper. Support elements in jointed rock tunnels are provided to control the ground movement caused by stress redistribution during the tunnel drive. The main support elements contributing to the stability of the tunnel structure are identified so that various aspects of reliability and sustainability in the system can be examined. The selection of efficient support methods for rock tunneling is a key factor in reducing the number of problems during construction and keeping the project cost and time within the limited budget and planned schedule. This paper introduces an approach by which decision-makers can determine the overall reliability of a tunnel support system before selecting the final scheme of the lining system. In line with this focus, engineering reliability, a branch of statistics and probability, is applied to the field, and much effort has been made to use it in tunneling while investigating the reliability of the lining support system for the tunnel structure. Reliability analysis for evaluating tunnel support performance is therefore the main idea of this research. Decomposition approaches are used to produce the system block diagram and to determine the failure probability of the whole system. The effectiveness of the proposed reliability model of the tunnel lining, together with the recommended approaches, is examined using several case studies, and the final value of reliability is obtained for different design scenarios. Based on the idea of a linear correlation between safety factors and reliability parameters, the values of the isolated reliabilities are determined for the different structural components of the tunnel support system. In order to determine individual safety factors, finite element modeling is employed for the different structural subsystems and the results of numerical analyses are obtained in
DEFF Research Database (Denmark)
Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
2008-01-01
is 2- to 15-fold more efficient than the common systematic, uniformly random sampling. The simulations also indicate that the lack of a simple predictor of the coefficient of error (CE) due to field-to-field variation is a more severe problem for uniform sampling strategies than anticipated. Because of its entirely different sampling strategy, based on known but non-uniform sampling probabilities, the proportionator for the first time allows the real CE at the section level to be automatically estimated (not just predicted), unbiased, for all estimators and at no extra cost to the user.
Parameter Analysis of the VPIN (Volume synchronized Probability of Informed Trading) Metric
Energy Technology Data Exchange (ETDEWEB)
Song, Jung Heon; Wu, Kesheng; Simon, Horst D.
2014-03-01
VPIN (Volume synchronized Probability of Informed trading) is a leading indicator of liquidity-induced volatility. It is best known for having produced a signal more than an hour before the Flash Crash of 2010. On that day, the market saw the biggest one-day point decline in the Dow Jones Industrial Average, in which roughly $1 trillion of market value disappeared, only to be recovered twenty minutes later (Lauricella 2010). The computation of VPIN requires the user to set a handful of free parameters. The values of these parameters significantly affect the effectiveness of VPIN as measured by the false positive rate (FPR). An earlier publication reported that a brute-force search of simple parameter combinations yielded a number of combinations with an FPR of 7%. This work is a systematic attempt to find an optimal parameter set using an optimization package, NOMAD (Nonlinear Optimization by Mesh Adaptive Direct Search) by Audet, Le Digabel, and Tribes (2009) and Le Digabel (2011). We have implemented a number of techniques to reduce the computation time with NOMAD. Tests show that we can reduce the FPR to only 2%. To better understand the parameter choices, we have conducted a series of sensitivity analyses via uncertainty quantification on the parameter spaces using UQTk (Uncertainty Quantification Toolkit). The results show the dominance of two parameters in the computation of the FPR. Using the outputs from the NOMAD optimization and the sensitivity analysis, we recommend a range of values for each of the free parameters that perform well on a large set of futures trading records.
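To make the "handful of free parameters" concrete, here is a toy VPIN computation: trades are poured into equal-volume buckets, each bucket's volume is split into buys and sells by bulk volume classification, and the normalized order-flow imbalance is averaged over a rolling window of buckets. The parameter names `bucket_volume` and `window` stand in for the kind of free parameters the NOMAD search tunes; the whole function is an illustrative sketch, not the paper's implementation.

```python
import math
import numpy as np

def vpin(prices, volumes, bucket_volume, window):
    """Toy VPIN over a trade series. Free parameters (bucket_volume, window)
    are illustrative stand-ins for those tuned in the study."""
    dp = np.diff(np.log(prices))
    sigma = dp.std() or 1.0
    buckets, filled, imbalance = [], 0.0, 0.0
    for ret, vol in zip(dp, volumes[1:]):
        # Bulk volume classification: fraction of the volume tagged as buys
        z = 0.5 * (1.0 + math.erf(ret / (sigma * math.sqrt(2.0))))
        while vol > 0:
            take = min(vol, bucket_volume - filled)
            imbalance += take * (2.0 * z - 1.0)   # buy volume minus sell volume
            filled += take
            vol -= take
            if filled >= bucket_volume:           # bucket complete
                buckets.append(abs(imbalance) / bucket_volume)
                filled, imbalance = 0.0, 0.0
    b = np.asarray(buckets)
    return np.convolve(b, np.ones(window) / window, mode="valid")

rng = np.random.default_rng(0)
prices = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 1e-3, 5000)))
volumes = rng.integers(1, 100, 5000).astype(float)
series = vpin(prices, volumes, bucket_volume=1000.0, window=50)
```

Varying `bucket_volume` and `window` changes how quickly the indicator reacts, which is exactly why their choice drives the false positive rate studied above.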
Improving the PSA quality in the human reliability analysis of pre-accident human errors
International Nuclear Information System (INIS)
Kang, D.-I.; Jung, W.-D.; Yang, J.-E.
2004-01-01
This paper describes the activities for improving the Probabilistic Safety Assessment (PSA) quality in the human reliability analysis (HRA) of the pre-accident human errors for the Korea Standard Nuclear Power Plant (KSNP). We evaluate the HRA results of the PSA for the KSNP and identify the items to be improved using the ASME PRA Standard. Evaluation results show that the ratio of items to be improved for pre-accident human errors is relatively high when compared with the ratio of those for post-accident human errors. They also show that more than 50% of the items to be improved for pre-accident human errors are related to the identification and screening analysis for them. In this paper, we develop the modeling guidelines for pre-accident human errors and apply them to the auxiliary feedwater system of the KSNP. Application results show that more than 50% of the items to be improved for the pre-accident human errors of the auxiliary feedwater system are resolved. (author)
Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T
2016-02-01
The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.
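The CUSUM tally over logistic-regression outputs can be sketched as follows. The monitor accumulates per-result error probabilities (as would come from the paper's logistic model of measured vs. predicted results) and signals when the cumulative excess over an in-control rate crosses a threshold; the `target` and `threshold` values are illustrative, not the study's calibrated settings.

```python
import numpy as np

def cusum_error_monitor(error_probs, target=0.1, threshold=3.0):
    """One-sided CUSUM over per-result error probabilities (e.g. from a
    logistic regression on measured vs. predicted panel results).
    target: assumed in-control error probability; threshold: alarm level.
    Both values are illustrative."""
    s, alarms = 0.0, []
    for p in error_probs:
        s = max(0.0, s + (p - target))   # accumulate only the excess
        alarms.append(s > threshold)
    return np.array(alarms)

# Fifty in-control results followed by a sustained error condition:
probs = np.concatenate([np.full(50, 0.05), np.full(50, 0.6)])
alarms = cusum_error_monitor(probs)
print("first alarm at index:", int(np.argmax(alarms)))  # prints 56
```

The run length from onset (index 50) to the first alarm (index 56) is the kind of quantity the paper reports as the average run length (ARL).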
Zollanvari, Amin
2013-05-24
We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
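The optimism of the resubstitution estimator discussed here can be demonstrated numerically. The sketch below builds the plug-in LDA discriminant for two Gaussian classes with known identity covariance, computes the resubstitution error on the training data, and compares it with the actual error, which for a linear rule under Gaussians has a closed form via the normal CDF. Sample size, dimension and mean separation are illustrative choices.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def lda_error_study(n=30, p=5, delta=1.0, trials=200):
    """Monte Carlo contrast of resubstitution vs. actual error for the
    plug-in LDA discriminant (Sigma = I known; parameters illustrative)."""
    mu0 = np.zeros(p)
    mu1 = np.r_[delta, np.zeros(p - 1)]
    resub, actual = [], []
    for _ in range(trials):
        x0 = rng.normal(size=(n, p)) + mu0
        x1 = rng.normal(size=(n, p)) + mu1
        m0, m1 = x0.mean(0), x1.mean(0)
        w = m1 - m0                     # plug-in discriminant direction
        c = 0.5 * (m0 + m1) @ w         # midpoint threshold
        resub.append(0.5 * ((x0 @ w > c).mean() + (x1 @ w <= c).mean()))
        s = np.linalg.norm(w)           # actual error of this linear rule
        actual.append(0.5 * ((1.0 - Phi((c - mu0 @ w) / s))
                             + Phi((c - mu1 @ w) / s)))
    return float(np.mean(resub)), float(np.mean(actual))

r, a = lda_error_study()
print(f"resubstitution {r:.3f}  vs  actual {a:.3f}")
```

The gap between the two averages is the bias that smoothing (with the optimal parameter derived in the paper) is designed to remove.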
Longwave surface radiation over the globe from satellite data - An error analysis
Gupta, S. K.; Wilber, A. C.; Darnell, W. L.; Suttles, J. T.
1993-01-01
Errors have been analyzed for monthly-average downward and net longwave surface fluxes derived on a 5-deg equal-area grid over the globe, using a satellite technique. Meteorological data used in this technique are available from the TIROS Operational Vertical Sounder (TOVS) system flown aboard NOAA's operational sun-synchronous satellites. The data used are for February 1982 from NOAA-6 and NOAA-7 satellites. The errors in the parametrized equations were estimated by comparing their results with those from a detailed radiative transfer model. The errors in the TOVS-derived surface temperature, water vapor burden, and cloud cover were estimated by comparing these meteorological parameters with independent measurements obtained from other satellite sources. Analysis of the overall errors shows that the present technique could lead to underestimation of downward fluxes by 5 to 15 W/sq m and net fluxes by 4 to 12 W/sq m.
Statistical analysis with measurement error or misclassification strategy, method and application
Yi, Grace Y
2017-01-01
This monograph on measurement error and misclassification covers a broad range of problems and emphasizes unique features in modeling and analyzing problems arising from medical research and epidemiological studies. Many measurement error and misclassification problems have been addressed in various fields over the years as well as with a wide spectrum of data, including event history data (such as survival data and recurrent event data), correlated data (such as longitudinal data and clustered data), multi-state event data, and data arising from case-control studies. Statistical Analysis with Measurement Error or Misclassification: Strategy, Method and Application brings together assorted methods in a single text and provides an update of recent developments for a variety of settings. Measurement error effects and strategies of handling mismeasurement for different models are closely examined in combination with applications to specific problems. Readers with diverse backgrounds and objectives can utilize th...
Error Analysis of the K-Rb-21Ne Comagnetometer Space-Stable Inertial Navigation System
Directory of Open Access Journals (Sweden)
Qingzhong Cai
2018-02-01
According to the application characteristics of the K-Rb-21Ne comagnetometer, a space-stable navigation mechanization is designed and the requirements of the comagnetometer prototype are presented. By analysing the error propagation rule of the space-stable Inertial Navigation System (INS), the three biases, the scale factor of the z-axis, and the misalignment of the x- and y-axes non-orthogonal with the z-axis are confirmed to be the main error sources. A numerical simulation of the mathematical model for each single error verified the theoretical analysis result of the system's error propagation rule. Thus, numerical simulation based on the semi-physical data result proves the feasibility of the navigation scheme proposed in this paper.
A Posteriori Error Analysis of Stochastic Differential Equations Using Polynomial Chaos Expansions
Butler, T.
2011-01-01
We develop computable a posteriori error estimates for linear functionals of a solution to a general nonlinear stochastic differential equation with random model/source parameters. These error estimates are based on a variational analysis applied to stochastic Galerkin methods for forward and adjoint problems. The result is a representation for the error estimate as a polynomial in the random model/source parameter. The advantage of this method is that we use polynomial chaos representations for the forward and adjoint systems to cheaply produce error estimates by simple evaluation of a polynomial. By comparison, the typical method of producing such estimates requires repeated forward/adjoint solves for each new choice of random parameter. We present numerical examples showing that there is excellent agreement between these methods. © 2011 Society for Industrial and Applied Mathematics.
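The key computational point of this abstract, that the error estimate becomes a polynomial in the random parameter and can be evaluated cheaply instead of re-running forward/adjoint solves, can be illustrated on a stand-in model. Below, the discretization error of forward Euler on u' = -k(ξ)u, u(0) = 1 is fitted once as a Legendre polynomial in ξ and then evaluated at a new parameter value. The ODE, the map k(ξ) and the polynomial degree are all illustrative choices, not the paper's setup.

```python
import numpy as np

def coarse_solve(k, dt=0.1, T=1.0):
    """Forward Euler for u' = -k*u, u(0) = 1: the source of the error."""
    u = 1.0
    for _ in range(int(round(T / dt))):
        u += dt * (-k * u)
    return u

def exact(k, T=1.0):
    return np.exp(-k * T)

def k_of(xi):
    return 1.0 + 0.3 * xi   # illustrative dependence on the random parameter

# Fit the error functional once as a degree-4 Legendre polynomial in xi:
xi_train = np.linspace(-1.0, 1.0, 9)
err_train = [exact(k_of(x)) - coarse_solve(k_of(x)) for x in xi_train]
coeffs = np.polynomial.legendre.legfit(xi_train, err_train, deg=4)

# For a new parameter value, the error estimate is just polynomial evaluation:
xi_new = 0.37
estimate = np.polynomial.legendre.legval(xi_new, coeffs)
reference = exact(k_of(xi_new)) - coarse_solve(k_of(xi_new))
print(abs(estimate - reference))
```

The single fit replaces a fresh solve per parameter value, which is the cost advantage the abstract claims for the polynomial chaos representation.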
Error Analysis of the K-Rb-21Ne Comagnetometer Space-Stable Inertial Navigation System.
Cai, Qingzhong; Yang, Gongliu; Quan, Wei; Song, Ningfang; Tu, Yongqiang; Liu, Yiliang
2018-02-24
According to the application characteristics of the K-Rb-21Ne comagnetometer, a space-stable navigation mechanization is designed and the requirements of the comagnetometer prototype are presented. By analysing the error propagation rule of the space-stable Inertial Navigation System (INS), the three biases, the scale factor of the z-axis, and the misalignment of the x- and y-axis non-orthogonal with the z-axis, are confirmed to be the main error source. A numerical simulation of the mathematical model for each single error verified the theoretical analysis result of the system's error propagation rule. Thus, numerical simulation based on the semi-physical data result proves the feasibility of the navigation scheme proposed in this paper.
Zollanvari, Amin; Genton, Marc G
2013-08-01
We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
Error-rate performance analysis of incremental decode-and-forward opportunistic relaying
Tourki, Kamel
2011-06-01
In this paper, we investigate an incremental opportunistic relaying scheme in which the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of possibly erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and deduce the diversity order. We show that the simulation results coincide with our analytical results. © 2011 IEEE.
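The relaying scheme described here can be checked by Monte Carlo simulation. The sketch below estimates the end-to-end BER of BPSK over Rayleigh fading with incremental decode-and-forward: the relay retransmits only when the source-destination SNR falls below a threshold, and the relay's own (possibly wrong) hard decisions propagate, as the abstract emphasizes. The SNR, threshold and combining details are illustrative assumptions, not the paper's exact system model.

```python
import numpy as np

rng = np.random.default_rng(2)

def ber_incremental_df(snr_db=10.0, gamma_th=2.0, n=200_000):
    """Monte Carlo BER of BPSK with incremental decode-and-forward over
    Rayleigh fading (illustrative parameters)."""
    snr = 10.0 ** (snr_db / 10.0)
    bits = rng.integers(0, 2, n)
    s = 1.0 - 2.0 * bits                         # BPSK mapping: 0 -> +1, 1 -> -1
    # Unit-power Rayleigh channel magnitudes for S-D, S-R and R-D links:
    h_sd, h_sr, h_rd = (rng.rayleigh(np.sqrt(0.5), n) for _ in range(3))
    noise = lambda: rng.normal(0.0, np.sqrt(0.5 / snr), n)
    y_sd = h_sd * s + noise()
    direct_ok = h_sd**2 * snr >= gamma_th        # incremental-relaying test
    s_relay = np.sign(h_sr * s + noise())        # relay's hard decision (may err)
    y_rd = h_rd * s_relay + noise()
    # Use the direct observation alone when its SNR is acceptable,
    # otherwise combine direct and relayed observations (MRC-style):
    z = np.where(direct_ok, h_sd * y_sd, h_sd * y_sd + h_rd * y_rd)
    return float(((z < 0).astype(int) != bits).mean())

print(f"BER at 10 dB: {ber_incremental_df():.4f}")
```

Sweeping `snr_db` in such a simulation is how one would reproduce the coincidence of simulated and analytical BER curves the abstract reports.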
Errors in accident data, its types, causes and methods of rectification-analysis of the literature.
Ahmed, Ashar; Sadullah, Ahmad Farhan Mohd; Yahya, Ahmad Shukri
2017-07-29
Most of the decisions taken to improve road safety are based on accident data, which makes such data the backbone of any country's road safety system. Errors in these data lead to the misidentification of black spots and hazardous road segments, the projection of false estimates of accident and fatality rates, and the detection of the wrong parameters as responsible for accident occurrence, thereby making the entire road safety exercise ineffective. The extent of error varies from country to country depending upon various factors. Knowing the type of error in the accident data and the factors causing it enables the application of the correct method for its rectification. There is therefore a need for a systematic literature review that addresses the topic at a global level. This paper fills the above research gap by providing a synthesis of the literature on the different types of errors found in the accident data of 46 countries across the six regions of the world. The errors are classified and discussed with respect to each type and analysed with respect to income level; an assessment of the magnitude of each type is provided, followed by the different causes that result in their occurrence and the various methods used to address each type of error. Among high-income countries the extent of error in reporting slight, severe, non-fatal and fatal injury accidents varied between 39-82%, 16-52%, 12-84%, and 0-31% respectively. For middle-income countries the error for the same categories varied between 93-98%, 32.5-96%, 34-99% and 0.5-89.5% respectively. The only four studies available for low-income countries showed that the error in reporting non-fatal and fatal accidents varied between 69-80% and 0-61% respectively. A logistic model of error in accident data reporting, dichotomised at 50%, indicated that as the income level of a country increases, the probability of having less error in accident data also increases. Average error in recording information related to the
International Nuclear Information System (INIS)
Hirotsu, Yuko; Suzuki, Kunihiko; Takano, Kenichi; Kojima, Mitsuhiro
2000-01-01
It is essential for preventing the recurrence of human error incidents to analyze and evaluate them with an emphasis on human factors. Detailed and structured analyses of all incidents at domestic nuclear power plants (NPPs) reported during the last 31 years have been conducted based on J-HPES, in which a total of 193 human error cases were identified. The results obtained by the analyses have been stored in the J-HPES database. In a previous study, applying multivariate analysis to the above case studies suggested that there were several identifiable patterns of how errors occur at NPPs. It was also clarified that the causes related to each human error differ depending on the period of occurrence. This paper describes the results obtained with respect to the periodical transition of human error occurrence patterns. By applying multivariate analysis to the above data, two types of occurrence pattern were suggested for each human error type: the first is a common occurrence pattern that does not depend on the period, and the second is a pattern influenced by periodical characteristics. (author)
Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua
2014-07-01
Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely used in the field of high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures included in the robots as a single link. As such an error model fails to reflect the error features of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on the error model is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including the joints of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and some sensitivity indices defined in the statistical sense, a sensitivity analysis is carried out. Accordingly, atlases are depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have a greater impact on the accuracy of the moving platform are identified, and some sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also figured out. By taking into account the error factors that are generally neglected in existing modeling methods, the proposed modeling method can thoroughly disclose the process of error transmission and enhance the efficacy of accuracy design and calibration.
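The sensitivity analysis this abstract describes, differentiating the platform pose with respect to each geometric error, can be sketched on a much simpler stand-in mechanism. Below, a planar two-link arm takes the place of one limb of the parallel robot, and the sensitivity of the end-point position to each geometric parameter (two link lengths and a joint-axis offset angle) is obtained by numerical differentiation. The mechanism and all values are illustrative, not the paper's model.

```python
import numpy as np

def fk(q, geom):
    """Forward kinematics of a planar 2-link arm.
    geom = (l1, l2, d): link lengths and a joint-axis offset angle (the
    kind of geometric error a parallelogram joint could introduce)."""
    l1, l2, d = geom
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1] + d)
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1] + d)
    return np.array([x, y])

q = np.array([0.4, 0.8])                   # an arbitrary pose
nominal = np.array([0.35, 0.30, 0.0])      # l1, l2 in metres, offset in rad

# Central-difference Jacobian of the end-point w.r.t. each geometric error:
eps = 1e-6
J = np.zeros((2, 3))
for j in range(3):
    dg = np.zeros(3)
    dg[j] = eps
    J[:, j] = (fk(q, nominal + dg) - fk(q, nominal - dg)) / (2.0 * eps)

sensitivity = np.linalg.norm(J, axis=0)    # per-error influence on pose error
print(sensitivity)
```

Ranking the columns of such a Jacobian over the workspace is, in miniature, how the paper's atlases identify which geometric errors matter most for calibration.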
Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on-chip
Energy Technology Data Exchange (ETDEWEB)
Du, Xuecheng; He, Chaohui; Liu, Shuhuan, E-mail: liushuhuan@mail.xjtu.edu.cn; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang
2016-09-21
Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, some parameters used to evaluate the system's reliability and safety were calculated using Isograph Reliability Workbench 11.0, such as the failure rate, unavailability and mean time to failure (MTTF). Through this fault tree analysis of the system-on-chip, the critical blocks and the system reliability were evaluated qualitatively and quantitatively.
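A minimal series/parallel fault-tree evaluation in the spirit of the analysis above can be written in a few lines. The block structure and the failure rates below are assumed for illustration; they are not measured Zynq-7010 values.

```python
import numpy as np

def series(*p):
    """OR gate: the system fails if any block fails."""
    q = 1.0
    for pi in p:
        q *= 1.0 - pi
    return 1.0 - q

def parallel(*p):
    """AND gate: the system fails only if all redundant blocks fail."""
    q = 1.0
    for pi in p:
        q *= pi
    return q

# Assumed per-hour failure rates for three hypothetical blocks:
lam = {"cpu": 2e-6, "bram": 5e-6, "config": 1e-5}
t = 1000.0                                           # mission time in hours
pf = {k: 1.0 - np.exp(-v * t) for k, v in lam.items()}

# Configuration memory protected by a redundant copy, in series with CPU/BRAM:
p_sys = series(pf["cpu"], pf["bram"], parallel(pf["config"], pf["config"]))
mttf_cpu = 1.0 / lam["cpu"]   # MTTF of an exponentially failing block
print(f"P(system failure within {t:.0f} h) = {p_sys:.3e}, CPU MTTF = {mttf_cpu:.0f} h")
```

Tools such as the reliability workbench mentioned above evaluate the same gate algebra on much larger trees, along with importance measures for identifying the critical blocks.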
On the Optimal Detection and Error Performance Analysis of the Hardware Impaired Systems
Javed, Sidrah
2018-01-15
The conventional minimum Euclidean distance (MED) receiver design is based on the assumption of ideal hardware transceivers and proper Gaussian noise in communication systems. Throughout this study, an accurate statistical model of various hardware impairments (HWIs) is presented. Then, an optimal maximum likelihood (ML) receiver is derived considering the distinct characteristics of the HWIs comprised of additive improper Gaussian noise and signal distortion. Next, the average error probability performance of the proposed optimal ML receiver is analyzed and tight bounds are derived. Finally, different numerical and simulation results are presented to support the superiority of the proposed ML receiver over MED receiver and the tightness of the derived bounds.
The distributed failure probability approach to dependent failure analysis, and its application
International Nuclear Information System (INIS)
Hughes, R.P.
1989-01-01
The Distributed Failure Probability (DFP) approach to the problem of dependent failures in systems is presented. The basis of the approach is that the failure probability of a component is a variable. The source of this variability is the change in the 'environment' of the component, where the term 'environment' is used to mean not only obvious environmental factors such as temperature etc., but also such factors as the quality of maintenance and manufacture. The failure probability is distributed among these various 'environments' giving rise to the Distributed Failure Probability method. Within the framework which this method represents, modelling assumptions can be made, based both on engineering judgment and on the data directly. As such, this DFP approach provides a soundly based and scrutable technique by which dependent failures can be quantitatively assessed. (orig.)
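The DFP idea can be made concrete with a tiny numerical example: a component's failure probability is distributed over 'environments', and dependence between two nominally independent components arises from sharing the same environment. The environment classes and probabilities below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# P(environment) and P(component fails | environment), illustrative values:
envs = {
    "good maintenance": (0.7, 0.001),
    "poor maintenance": (0.2, 0.01),
    "harsh conditions": (0.1, 0.05),
}
w = np.array([v[0] for v in envs.values()])   # environment probabilities
p = np.array([v[1] for v in envs.values()])   # conditional failure probabilities

p_single = float((w * p).sum())       # marginal failure probability of one component
p_both = float((w * p * p).sum())     # both fail, sharing the same environment
p_indep = p_single ** 2               # what full independence would predict

print(f"P(both fail) = {p_both:.3e}  vs  independence: {p_indep:.3e}")
```

Because failures concentrate in the unfavourable environments, P(both fail) exceeds the independence prediction, which is exactly the dependent-failure effect the DFP framework quantifies.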
Fischer, Egil Andreas Joor; Martínez López, Evelyn Pamela; De Vos, Clazien J; Faverjon, Céline
2016-09-01
Equine encephalosis is a midge-borne viral disease of equines caused by equine encephalosis virus (EEV, Orbivirus, Reoviridae) and closely related to African horse sickness virus (AHSV). EEV and AHSV share common vectors and show similar transmission patterns. Until now, EEV has caused outbreaks in Africa and Israel. This study aimed to provide insight into the probability of an EEV outbreak in The Netherlands caused by infected vectors or hosts, the contribution of potential source areas (risk regions) to this probability, and the effectiveness of preventive measures (sanitary regimes). A stochastic risk model constructed for the risk assessment of AHSV introduction was adapted to EEV. Source areas were categorized into risk regions (high, low, and very low risk) based on EEV history and the presence of competent vectors. Two possible EEV introduction pathways were considered: importation of infected equines and importation of infected vectors along with their vertebrate hosts. The probability of EEV introduction (PEEV) was calculated by combining the probability of EEV release by either pathway and the probability of EEV establishment. The median current annual probability of EEV introduction by an infected equine was estimated at 0.012 (90% uncertainty interval 0.002-0.020), and by an infected vector at 4.0 × 10^-5 (90% uncertainty interval 5.3 × 10^-6 to 2.0 × 10^-4). Equines from high-risk regions contributed most to the probability of EEV introduction, accounting for 74% of introductions by equines, whereas low and very low risk regions contributed 18% and 8%, respectively. International movements of horses participating in equestrian events contributed most to the probability of EEV introduction by equines from high-risk regions (86%), but also contributed substantially for low and very low risk regions, with 47% and 56%. The probability of introducing EEV into The Netherlands is much higher than the probability of introducing AHSV with equines from high risk countries
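The structure of such a stochastic import-risk model, combining release and establishment probabilities per risk region under parameter uncertainty, can be sketched with a Monte Carlo loop. All distributions, import volumes and per-import release probabilities below are invented for illustration and bear no relation to the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 100_000   # Monte Carlo samples over uncertain inputs

# (mean annual imports ~ Poisson, P(release per import)); values illustrative:
regions = {
    "high":     (40, 2e-4),
    "low":      (400, 5e-6),
    "very low": (4000, 5e-8),
}
p_establish = rng.beta(5, 45, n)   # uncertain establishment probability

p_intro = np.zeros(n)
contrib = {}
for name, (m, p_rel) in regions.items():
    imports = rng.poisson(m, n)
    # P(at least one release from this region), imports assumed independent:
    p_region = 1.0 - (1.0 - p_rel) ** imports
    contrib[name] = float((p_region * p_establish).mean())
    # Combine regions: introduction occurs if any region causes one.
    p_intro = 1.0 - (1.0 - p_intro) * (1.0 - p_region * p_establish)

print("median annual P(introduction):", float(np.median(p_intro)))
total = sum(contrib.values())
print({k: round(v / total, 2) for k, v in contrib.items()})
```

Reporting the median and a 90% interval of `p_intro`, and the per-region shares, mirrors the way the study summarizes its results.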
D'Astolfo, Lisa; Rief, Winfried
2017-01-01
Modifying patients' expectations by exposing them to expectation violation situations (thus maximizing the difference between the expected and the actual situational outcome) is proposed to be a crucial mechanism of therapeutic success for a variety of different mental disorders. However, clinical observations suggest that patients often maintain their expectations regardless of experiences contradicting them. It remains unclear which information processing mechanisms lead to modification or persistence of patients' expectations. Insight into this processing could be provided by neuroimaging studies investigating the prediction error (PE, i.e., neuronal reactions to non-expected stimuli). Two methods are often used to investigate the PE: (1) paradigms in which participants passively observe PEs ("passive" paradigms) and (2) paradigms which encourage a behavioral adaptation following a PE ("active" paradigms). These paradigms are similar to the methods used to induce expectation violations in clinical settings: (1) the confrontation with an expectation violation situation and (2) an enhanced confrontation in which the patient actively challenges their expectation. We used this similarity to gain insight into the different neuronal processing of the two PE paradigms. We performed a meta-analysis contrasting the neuronal activity of PE paradigms encouraging a behavioral adaptation following a PE and paradigms enforcing passiveness following a PE. We found more neuronal activity in the striatum, the insula and the fusiform gyrus in studies encouraging behavioral adaptation following a PE. Given the involvement of reward assessment and avoidance learning associated with the striatum and the insula, we propose that the deliberate execution of action alternatives following a PE is associated with the integration of new information into previously existing expectations, therefore leading to an expectation change. While further research is needed to directly assess