Systematic sampling with errors in sample locations
DEFF Research Database (Denmark)
Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton
2010-01-01
Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung...
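The variance inflation described in this abstract can be illustrated with a short Monte Carlo sketch; the integrand, grid spacing, and error magnitudes below are invented for illustration, not taken from the paper. A smooth function is integrated by one-dimensional systematic sampling with a random start, and i.i.d. Gaussian errors are added to the sample locations.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_integral(f, spacing, jitter_sd, n_rep=20000):
    """Estimate the integral of f over [0, 1) by systematic sampling with
    i.i.d. Gaussian errors in the sample point locations (wrapped mod 1)."""
    est = np.empty(n_rep)
    grid = np.arange(0.0, 1.0, spacing)
    for i in range(n_rep):
        u = rng.uniform(0, spacing)            # random start of the grid
        pts = (grid + u + rng.normal(0, jitter_sd, grid.size)) % 1.0
        est[i] = spacing * f(pts).sum()        # Cavalieri-type estimator
    return est

f = lambda x: np.sin(2 * np.pi * x) ** 2       # true integral = 0.5
for sd in (0.0, 0.02, 0.05):
    e = estimate_integral(f, spacing=0.1, jitter_sd=sd)
    print(f"jitter sd={sd:.2f}  mean={e.mean():.4f}  var={e.var():.2e}")
```

With zero jitter the estimator is essentially exact for this integrand; increasing the location-error standard deviation visibly inflates the variance, in line with the abstract's conclusion.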
Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G
2014-10-01
Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reported a large impact of shipping and delays in sample processing on Legionella sample results. Specifically, that study, without accounting for measurement error, reported that more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attributed this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which had not previously been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed, including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Simulating systematic errors in X-ray absorption spectroscopy experiments: Sample and beam effects
Energy Technology Data Exchange (ETDEWEB)
Curis, Emmanuel [Laboratoire de Biomathematiques, Faculte de Pharmacie, Universite Rene, Descartes (Paris V)-4, Avenue de l' Observatoire, 75006 Paris (France)]. E-mail: emmanuel.curis@univ-paris5.fr; Osan, Janos [KFKI Atomic Energy Research Institute (AEKI)-P.O. Box 49, H-1525 Budapest (Hungary); Falkenberg, Gerald [Hamburger Synchrotronstrahlungslabor (HASYLAB), Deutsches Elektronen-Synchrotron (DESY)-Notkestrasse 85, 22607 Hamburg (Germany); Benazeth, Simone [Laboratoire de Biomathematiques, Faculte de Pharmacie, Universite Rene, Descartes (Paris V)-4, Avenue de l' Observatoire, 75006 Paris (France); Laboratoire d' Utilisation du Rayonnement Electromagnetique (LURE)-Ba-hat timent 209D, Campus d' Orsay, 91406 Orsay (France); Toeroek, Szabina [KFKI Atomic Energy Research Institute (AEKI)-P.O. Box 49, H-1525 Budapest (Hungary)
2005-07-15
The article presents an analytical model to simulate experimental imperfections in the realization of an X-ray absorption spectroscopy experiment, performed in transmission or fluorescence mode. Distinction is made between sources of systematic errors on a time-scale basis, to select the more appropriate model for their handling. For short time-scale, statistical models are the most suited. For large time-scale, the model is developed for sample and beam imperfections: mainly sample inhomogeneity, sample self-absorption, beam achromaticity. The ability of this model to reproduce the effects of these imperfections is exemplified, and the model is validated on real samples. Various potential application fields of the model are then presented.
Spatial effects, sampling errors, and task specialization in the honey bee.
Johnson, B R
2010-05-01
Task allocation patterns should depend on the spatial distribution of work within the nest, variation in task demand, and the movement patterns of workers; however, relatively little research has focused on these topics. This study uses a spatially explicit agent-based model to determine whether such factors alone can generate biases in task performance at the individual level in the honey bee, Apis mellifera. Specialization (bias in task performance) is shown to result from strong sampling error due to localized task demand, slow worker movement relative to nest size, and strong spatial variation in task demand. To date, specialization has been primarily interpreted with the response threshold concept, which is focused on intrinsic (typically genotypic) differences between workers. Response threshold variation and sampling error due to spatial effects are not mutually exclusive, however, and this study suggests that both contribute to patterns of task bias at the individual level. While spatial effects are strong enough to explain some documented cases of specialization, they are relatively short term and do not explain long-term cases of specialization. In general, this study suggests that the spatial layout of tasks and fluctuations in their demand must be explicitly controlled for in studies aimed at identifying genotypic specialists.
Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A
2013-01-01
Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (Mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. We show that when uncertainty in assessments is considered, the lowest error rates are with dichotomization. While using the full range of mRS is conceptually appealing, a gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
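The core computation the authors describe, combining an outcome distribution with an inter-rater noise distribution to obtain an error rate, can be sketched as follows. The mRS distribution and confusion matrix here are made-up numbers for illustration, not the published ones.

```python
import numpy as np

# Hypothetical mRS distribution (scores 0-6) for one trial, and an assumed
# inter-rater confusion matrix C[i, j] = P(rated j | true score i).
p_mrs = np.array([0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.10])
confusion = np.zeros((7, 7))
for i in range(7):
    confusion[i, i] = 0.80                      # assumed agreement rate
    # assumed 10% chance of drifting one score in each direction,
    # reflected back at the ends of the scale
    if i > 0:  confusion[i, i - 1] += 0.10
    else:      confusion[i, i] += 0.10
    if i < 6:  confusion[i, i + 1] += 0.10
    else:      confusion[i, i] += 0.10

def error_rate_full(p, C):
    """P(observed score != true score) over the full ordinal scale."""
    return 1.0 - np.sum(p * np.diag(C))

def error_rate_dichotomized(p, C, cut):
    """P(observed score falls on the wrong side of the cut-point)."""
    good = np.arange(7) <= cut
    err = 0.0
    for i in range(7):
        wrong_side = ~good if good[i] else good
        err += p[i] * C[i, wrong_side].sum()
    return err

print(f"full-scale error rate:   {error_rate_full(p_mrs, confusion):.3f}")
print(f"mRS<=1 cut-point error:  {error_rate_dichotomized(p_mrs, confusion, 1):.3f}")
```

Even with these invented numbers, dichotomization yields a lower error rate than the full scale, because only misclassifications that cross the cut-point count, which mirrors the abstract's finding qualitatively.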
Directory of Open Access Journals (Sweden)
Pitchaiah Mandava
Full Text Available OBJECTIVE: Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. METHODS: We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. RESULTS: Considering the full mRS range, the error rate was 26.1%±5.31 (Mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. CONCLUSION: We show that when uncertainty in assessments is considered, the lowest error rates are with dichotomization. While using the full range of mRS is conceptually appealing, a gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
Effect of measurement error budgets and hybrid metrology on qualification metrology sampling
Sendelbach, Matthew; Sarig, Niv; Wakamoto, Koichi; Kim, Hyang Kyun (Helen); Isbester, Paul; Asano, Masafumi; Matsuki, Kazuto; Osorio, Carmen; Archie, Chas
2014-10-01
Until now, metrologists had no statistics-based method to determine, before the start of an accuracy experiment, the sampling it will need. We show a solution to this problem, called inverse total measurement uncertainty (TMU) analysis, by presenting statistically based equations that allow the user to estimate the needed sampling after providing appropriate inputs, allowing important "risk versus reward" sampling, cost, and equipment decisions. Application examples using experimental data from scatterometry and critical dimension scanning electron microscope tools are used first to demonstrate how inverse TMU analysis can be used to make intelligent sampling decisions, and then to reveal why low sampling can lead to unstable and misleading results. One model is developed that can help experimenters minimize sampling costs. A second cost model reveals the inadequacy of some current sampling practices, and the enormous costs associated with sampling that provides reasonable levels of certainty in the result. We introduce strategies for managing and mitigating these costs and begin the discussion of how fabs can manufacture devices using minimal reference sampling when qualifying metrology steps. Finally, the relationship between inverse TMU analysis and hybrid metrology is explored.
Parés-Pollán, L; Gonzalez-Quintana, A; Docampo-Cordeiro, J; Vargas-Gallego, C; García-Álvarez, G; Ramos-Rodríguez, V; Diaz Rubio-García, M P
2014-01-01
Owing to decreased values of the biochemical glucose parameter in some samples from external collection centres, and the risk this implies for patient safety, it was decided to apply an adaptation of the «Health Services Failure Mode and Effects Analysis» (HFMEA) to manage risk during the pre-analytical phase of sample transportation from external centres to clinical laboratories. A retrospective study of the glucose parameter was conducted over two consecutive months. The analysis was performed in its different phases: define the HFMEA topic, assemble the team, graphically describe the process, conduct a hazard analysis, design the intervention and indicators, and identify a person responsible for ensuring completion of each action. The glucose results from one of the transport routes were significantly lower (P=.006). The errors and potential causes of this problem were analysed, and criteria of criticality and detectability were applied (score≥8) in the decision tree. It was decided to: develop a document management system; reorganise blood collections and transport routes in some centres; and apply quality control to the sample container ice-packs and to the time and temperature during transportation. This work proposes quality indicators for controlling the time and temperature of transported samples in the pre-analytical phase. Periodic review of certain laboratory parameters can help to detect problems in transporting samples. The HFMEA technique is useful for the clinical laboratory. Copyright © 2013 SECA. Published by Elsevier Espana. All rights reserved.
Preanalytical Blood Sampling Errors in Clinical Settings
International Nuclear Information System (INIS)
Zehra, N.; Malik, A. H.; Arshad, Q.; Sarwar, S.; Aslam, S.
2016-01-01
Background: Blood sampling is one of the most common procedures done in every ward for disease diagnosis and prognosis. Hundreds of samples are collected daily from different wards, but lack of appropriate knowledge of blood sampling among paramedical staff and accidental errors make the samples inappropriate for testing. Thus the need to avoid these errors for better results still remains. We carried out this research with the aim of determining the common errors during blood sampling, finding the factors responsible, and proposing ways to reduce these errors. Methods: A cross-sectional descriptive study was carried out at the Military and Combined Military Hospital Rawalpindi during February and March 2014. A Venous Blood Sampling questionnaire (VBSQ) was filled in by the staff on a voluntary basis in front of the researchers. The staff were briefed on the purpose of the survey before filling in the questionnaire. The sample size was 228. Results were analysed using SPSS-21. Results: Around 61.6 percent of the paramedical staff stated that they cleaned the vein by moving the alcohol swab from inward to outward, while 20.8 percent reported that they felt the vein after disinfection. Contrary to WHO guidelines, 89.6 percent reported a habit of placing blood in the test tube while holding it in the other hand, when the tube should first be inserted into the stand. Conclusion: Pre-analytical blood sampling errors are common in our setup. Although 86 percent of participants thought that they had adequate knowledge regarding blood sampling, most were not adhering to standard protocols. There is a need for continued education and refresher courses. (author)
Creel, Scott; Creel, Michael
2009-11-01
1. Sampling error in annual estimates of population size creates two widely recognized problems for the analysis of population growth. First, if sampling error is mistakenly treated as process error, one obtains inflated estimates of the variation in true population trajectories (Staples, Taper & Dennis 2004). Second, treating sampling error as process error is thought to overestimate the importance of density dependence in population growth (Viljugrein et al. 2005; Dennis et al. 2006). 2. In ecology, state-space models are used to account for sampling error when estimating the effects of density and other variables on population growth (Staples et al. 2004; Dennis et al. 2006). In econometrics, regression with instrumental variables is a well-established method that addresses the problem of correlation between regressors and the error term, but requires fewer assumptions than state-space models (Davidson & MacKinnon 1993; Cameron & Trivedi 2005). 3. We used instrumental variables to account for sampling error and fit a generalized linear model to 472 annual observations of population size for 35 Elk Management Units in Montana, from 1928 to 2004. We compared this model with state-space models fit with the likelihood function of Dennis et al. (2006). We discuss the general advantages and disadvantages of each method. Briefly, regression with instrumental variables is valid with fewer distributional assumptions, but state-space models are more efficient when their distributional assumptions are met. 4. Both methods found that population growth was negatively related to population density and winter snow accumulation. Summer rainfall and wolf (Canis lupus) presence had much weaker effects on elk (Cervus elaphus) dynamics [though limitation by wolves is strong in some elk populations with well-established wolf populations (Creel et al. 2007; Creel & Christianson 2008)]. 5. Coupled with predictions for Montana from global and regional climate models, our results
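The instrumental-variables idea described in points 2 and 3 can be sketched on simulated data; the dynamics, noise magnitudes, and sample size below are hypothetical, not the Montana elk data. Sampling error in the observed counts biases the naive regression toward spuriously strong density dependence, while lagged observed counts serve as an instrument for current density.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate Gompertz-type dynamics: growth = a + b*log(N) + process noise;
# observed log-counts carry additive sampling error.
T, a, b = 1000, 1.0, -0.2
logN = np.empty(T); logN[0] = 4.0
for i in range(T - 1):
    logN[i + 1] = logN[i] + a + b * logN[i] + rng.normal(0, 0.1)
obs = logN + rng.normal(0, 0.2, T)             # sampling error in counts

r = obs[2:] - obs[1:-1]                        # observed growth rate
x = obs[1:-1]                                  # error-laden density regressor
z = obs[:-2]                                   # instrument: lagged count

def two_sls(r, x, z):
    """Single-regressor two-stage least squares with intercepts."""
    Z = np.column_stack([np.ones_like(z), z])
    xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]    # first stage
    X = np.column_stack([np.ones_like(xhat), xhat])
    return np.linalg.lstsq(X, r, rcond=None)[0]        # second stage

b_ols = np.polyfit(x, r, 1)[0]
b_iv = two_sls(r, x, z)[1]
print(f"true b={b:.2f}  naive OLS b={b_ols:.2f}  IV b={b_iv:.2f}")
```

The naive estimate is strongly biased negative because the measurement error in this year's count enters both the regressor and the growth rate; instrumenting with the previous count removes that correlation.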
International Nuclear Information System (INIS)
Meor Yusoff Meor Sulaiman; Masliana Muhammad; Wilfred, P.
2013-01-01
Although EDXRF analysis has major advantages for stainless steel samples, such as simultaneous determination of the minor elements, analysis without sample preparation, and non-destructive analysis, matrix effects arising from inter-element interactions can make the final quantitative result inaccurate. This paper presents a comparative quantitative analysis using standard and standardless methods for the determination of these elements. The standard method was carried out by plotting regression calibration graphs of the elements of interest using BCS certified stainless steel standards. Different calibration plots were developed based on the available certified standards; the stainless steel grades covered include low alloy steel, austenitic, ferritic and high speed. The standardless method, on the other hand, uses a mathematical model with matrix effect correction derived from the Lucas-Tooth and Price model. The accuracy of the standardless method was further improved by including pure elements in the development of the model. Discrepancy tests were then carried out for these quantitative methods on different certified samples, and the results show that the high speed method is most reliable for determining Ni and the standardless method for Mn. (Author)
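The Lucas-Tooth and Price correction referred to above expresses concentration as a function of the analyte intensity and inter-element intensity terms, which makes it linear in a small set of products of intensities, so its coefficients can be fitted by ordinary least squares. The two-element system and all numbers below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical standards: measured intensities I (n_std x 2 elements) and
# certified concentrations c for element 0, generated noise-free here so the
# fit recovers the generating coefficients exactly.
n = 12
I = rng.uniform(0.5, 2.0, (n, 2))
c_true = 1.0 + I[:, 0] * (2.0 - 0.3 * I[:, 1])

# Lucas-Tooth/Price-type form: c = a0 + I0*(a1 + a2*I1),
# i.e. linear in the regressors (1, I0, I0*I1)
X = np.column_stack([np.ones(n), I[:, 0], I[:, 0] * I[:, 1]])
coef, *_ = np.linalg.lstsq(X, c_true, rcond=None)
print("fitted coefficients:", np.round(coef, 3))
```

With real standards the certified concentrations carry uncertainty, so the fitted coefficients absorb both the matrix correction and calibration noise; adding pure-element measurements, as the abstract describes, constrains the fit further.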
Blood transfusion sampling and a greater role for error recovery.
Oldham, Jane
Patient identification errors in pre-transfusion blood sampling ('wrong blood in tube') are a persistent area of risk. These errors can potentially result in life-threatening complications. Current measures to address root causes of incidents and near misses have not resolved this problem and there is a need to look afresh at this issue. PROJECT PURPOSE: This narrative review of the literature is part of a wider system-improvement project designed to explore and seek a better understanding of the factors that contribute to transfusion sampling error as a prerequisite to examining current and potential approaches to error reduction. A broad search of the literature was undertaken to identify themes relating to this phenomenon. KEY DISCOVERIES: Two key themes emerged from the literature. Firstly, despite multi-faceted causes of error, the consistent element is the ever-present potential for human error. Secondly, current focus on error prevention could potentially be augmented with greater attention to error recovery. Exploring ways in which clinical staff taking samples might learn how to better identify their own errors is proposed to add to current safety initiatives.
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
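The three methods compared in this simulation can be sketched compactly; the session length, interval size, and event parameters below are arbitrary choices, not those of the study. Partial-interval recording (PIR) scores an interval if the behavior occurs at any point within it, whole-interval recording (WIR) only if it occupies the whole interval, and momentary time sampling (MTS) checks a single moment.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(session_s=600, interval_s=10, n_events=20, event_s=5):
    """Score one session with three interval sampling methods and return
    (true proportion, MTS, PIR, WIR) estimates of time engaged."""
    # occupancy[t] is True if the target behavior occurs during second t
    occupancy = np.zeros(session_s, dtype=bool)
    for s in rng.integers(0, session_s - event_s, n_events):
        occupancy[s:s + event_s] = True
    edges = np.arange(0, session_s, interval_s)
    mts = occupancy[edges + interval_s - 1].mean()   # moment at interval end
    pir = np.array([occupancy[e:e + interval_s].any() for e in edges]).mean()
    wir = np.array([occupancy[e:e + interval_s].all() for e in edges]).mean()
    return occupancy.mean(), mts, pir, wir

true_p, mts, pir, wir = simulate()
print(f"true={true_p:.2f}  MTS={mts:.2f}  PIR={pir:.2f}  WIR={wir:.2f}")
```

The sketch reproduces the methods' well-known directional biases: PIR can never underestimate and WIR can never overestimate the true proportion of time engaged, while MTS is unbiased but noisy.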
Using snowball sampling method with nurses to understand medication administration errors.
Sheu, Shuh-Jen; Wei, Ien-Lan; Chen, Ching-Huey; Yu, Shu; Tang, Fu-In
2009-02-01
We aimed to encourage nurses to release information about drug administration errors to increase understanding of error-related circumstances and to identify high-alert situations. Drug administration errors represent the majority of medication errors, but errors are underreported. Effective ways are lacking to encourage nurses to actively report errors. Snowball sampling was conducted to recruit participants. A semi-structured questionnaire was used to record types of error, hospital and nurse backgrounds, patient consequences, error discovery mechanisms and reporting rates. Eighty-five nurses participated, reporting 328 administration errors (259 actual, 69 near misses). Most errors occurred in medical surgical wards of teaching hospitals, during day shifts, committed by nurses working fewer than two years. Leading errors were wrong drugs and doses, each accounting for about one-third of total errors. Among 259 actual errors, 83.8% resulted in no adverse effects; among remaining 16.2%, 6.6% had mild consequences and 9.6% had serious consequences (severe reaction, coma, death). Actual errors and near misses were discovered mainly through double-check procedures by colleagues and nurses responsible for errors; reporting rates were 62.5% (162/259) vs. 50.7% (35/69) and only 3.5% (9/259) vs. 0% (0/69) were disclosed to patients and families. High-alert situations included administration of 15% KCl, insulin and Pitocin; using intravenous pumps; and implementation of cardiopulmonary resuscitation (CPR). Snowball sampling proved to be an effective way to encourage nurses to release details concerning medication errors. Using empirical data, we identified high-alert situations. Strategies for reducing drug administration errors by nurses are suggested. Survey results suggest that nurses should double check medication administration in known high-alert situations. Nursing management can use snowball sampling to gather error details from nurses in a non
Directory of Open Access Journals (Sweden)
Qin Guo-jie
2014-08-01
Full Text Available Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite impulse response (FIR) filter structure, and the correction method for the compensation filter coefficients is derived. A 4 GS/s two-channel, time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results show that the correction technique is effective in attenuating spurious spurs and improving the dynamic performance of the system.
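The compensation idea, resampling the skewed channel onto the nominal time grid by cubic spline interpolation, can be sketched offline as follows. This is a simplified floating-point model of the technique, not the paper's FIR hardware implementation, and all signal parameters are invented.

```python
import numpy as np
from scipy.interpolate import CubicSpline

n, skew, f0 = 4096, 0.05, 0.05137   # samples, skew (in samples), tone freq
t = np.arange(n, dtype=float)
sig = lambda tt: np.sin(2 * np.pi * f0 * tt)

# Two interleaved channels; channel B (odd samples) samples late by `skew`,
# which creates the classic interleaving image spur
x = sig(t)
x[1::2] = sig(t[1::2] + skew)

# Correction: fit a spline through the *actual* sampling instants, then
# evaluate it on the nominal uniform grid
t_actual = t.copy()
t_actual[1::2] += skew
x_corr = CubicSpline(t_actual, x)(t)

rms = lambda e: np.sqrt(np.mean(e ** 2))
print(f"rms error before correction: {rms(x - sig(t)):.2e}")
print(f"rms error after correction:  {rms(x_corr - sig(t)):.2e}")
```

For a tone well below Nyquist, the residual after spline resampling is dominated by the fourth-order interpolation error, which is far smaller than the skew-induced error it replaces.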
Assessing human error during collecting a hydrocarbon sample of ...
African Journals Online (AJOL)
This paper reports an assessment of the hydrocarbon sample collection standard operating procedure (SOP) using THERP. The Performance Shaping Factors (PSFs) from THERP were analyzed and used to assess the human errors during collection of a hydrocarbon sample at a petrochemical refinery plant. Twenty-two ...
GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS
Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...
Quantification and handling of sampling errors in instrumental measurements: a case study
DEFF Research Database (Denmark)
Andersen, Charlotte Møller; Bro, R.
2004-01-01
... in certain situations, the effect of systematic errors is also considerable. The relevant errors contributing to the prediction error are: error in instrumental measurements (x-error), error in reference measurements (y-error), error in the estimated calibration model (regression coefficient error) and model...
The Impact of Soil Sampling Errors on Variable Rate Fertilization
Energy Technology Data Exchange (ETDEWEB)
R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink
2004-07-01
Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, the predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag-recommended fertilizer application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre, and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences
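The comparison at the heart of this result, laboratory analytical variance versus across-field fertility variance, can be estimated from duplicate lab analyses. The numbers below are invented for illustration, not INEEL data: duplicate disagreement estimates the lab variance, and the field variance follows by subtracting the lab contribution from the variance of the site means.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical soil fertility at 50 grid points, each analyzed twice in the lab
n_sites, field_sd, lab_sd = 50, 2.5, 6.0
truth = 20 + rng.normal(0, field_sd, n_sites)
dup = truth[:, None] + rng.normal(0, lab_sd, (n_sites, 2))

# Within-site (lab) variance from duplicate disagreement: Var(d1 - d2)/2.
# The site means have variance field_var + lab_var/2, so subtract lab_var/2.
lab_var = np.mean((dup[:, 0] - dup[:, 1]) ** 2) / 2.0
field_var = max(dup.mean(axis=1).var(ddof=1) - lab_var / 2.0, 0.0)
print(f"lab variance={lab_var:.1f}  field variance={field_var:.1f}")
```

When the lab variance dominates, as in this synthetic setup and in the abstract's finding, the spatial interpolation is fitting mostly measurement noise.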
On the Spatial and Temporal Sampling Errors of Remotely Sensed Precipitation Products
Directory of Open Access Journals (Sweden)
Ali Behrangi
2017-11-01
Full Text Available Observation with coarse spatial and temporal sampling can cause large errors in quantification of the amount, intensity, and duration of precipitation events. In this study, the errors resulting from temporal and spatial sampling of precipitation events were quantified and examined using the latest version (V4) of the Global Precipitation Measurement (GPM) mission Integrated Multi-satellitE Retrievals for GPM (IMERG), which has been available since spring 2014. Relative mean square error was calculated at 0.1° × 0.1° every 0.5 h between the degraded (temporally and spatially) and original IMERG products. The temporal and spatial degradation was performed by producing three-hour (T3), six-hour (T6), 0.5° × 0.5° (S5), and 1.0° × 1.0° (S10) maps. The results show generally larger errors over land than ocean, especially over mountainous regions. The relative error of T6 is almost 20% larger than T3 over tropical land, but is smaller at higher latitudes. Over land, the relative error of T6 is larger than S5 across all latitudes, while T6 has a larger relative error than S10 poleward of 20°S–20°N. Similarly, the relative error of T3 exceeds S5 poleward of 20°S–20°N, but does not exceed S10, except at very high latitudes. Similar results are also seen over ocean, but the error ratios are generally less sensitive to seasonal changes. The results also show that the spatial and temporal relative errors are not highly correlated. Overall, lower correlations between the spatial and temporal relative errors are observed over ocean than over land. Quantification of such spatiotemporal effects provides additional insights into evaluation studies, especially when different products are cross-compared at a range of spatiotemporal scales.
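The degradation-and-comparison procedure is easy to sketch on synthetic data; the field below is random noise with an invented wet fraction, not IMERG, and the degradation factors loosely mirror the T3/T6/S5/S10 cases.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic half-hourly "precipitation" on a 40 x 40 grid: mostly dry cells,
# gamma-distributed wet amounts (shape/scale/wet-fraction are invented)
rain = rng.gamma(0.3, 2.0, (48, 40, 40)) * (rng.random((48, 40, 40)) < 0.2)

def degrade_time(p, k):
    """Replace each run of k time steps by its mean (coarser sampling)."""
    m = p.reshape(-1, k, *p.shape[1:]).mean(axis=1)
    return np.repeat(m, k, axis=0)

def degrade_space(p, k):
    """Replace each k x k spatial block by its mean."""
    t, ny, nx = p.shape
    m = p.reshape(t, ny // k, k, nx // k, k).mean(axis=(2, 4))
    return np.repeat(np.repeat(m, k, axis=1), k, axis=2)

def rel_mse(deg, ref):
    """Relative mean square error of the degraded field."""
    return np.mean((deg - ref) ** 2) / np.mean(ref ** 2)

for name, deg in [("T3", degrade_time(rain, 6)), ("T6", degrade_time(rain, 12)),
                  ("S5", degrade_space(rain, 5)), ("S10", degrade_space(rain, 10))]:
    print(f"{name}: relative MSE = {rel_mse(deg, rain):.3f}")
```

Because replacing values by block means discards within-block variance, coarser averaging always raises the relative MSE; the interesting geographic and seasonal structure in the paper comes from applying the same bookkeeping to real fields.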
Determination of optimal samples for robot calibration based on error similarity
Directory of Open Access Journals (Sweden)
Tian Wei
2015-06-01
Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute positioning accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of implementing accuracy compensation are closely related to the choice of sampling points. Therefore, based on the error-similarity compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps for a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.
Principal Stratification in sample selection problems with non normal error terms
DEFF Research Database (Denmark)
Rocci, Roberto; Mellace, Giovanni
The aim of the paper is to relax distributional assumptions on the error terms, often imposed in parametric sample selection models to estimate causal effects, when plausible exclusion restrictions are not available. Within the principal stratification framework, we approximate the true distribution...... an application to the Job Corps training program.
Spanish exit polls. Sampling error or nonresponse bias?
Directory of Open Access Journals (Sweden)
Pavía, Jose M.
2016-09-01
Full Text Available Countless examples of misleading forecasts on behalf of both pre-election and exit polls can be found all over the world. Non-representative samples due to differential nonresponse have been claimed as being the main reason for inaccurate exit-poll projections. In real inference problems, it is seldom possible to compare estimates and true values. Electoral forecasts are an exception. Comparisons between estimates and final outcomes can be carried out once votes have been tallied. In this paper, we examine the raw data collected in seven exit polls conducted in Spain and test the likelihood that the data collected in each sampled voting location can be considered as a random sample of actual results. Knowing the answer to this is relevant for both electoral analysts and forecasters as, if the hypothesis is rejected, the shortcomings of the collected data would need amending. Analysts could improve the quality of their computations by implementing local correction strategies. We find strong evidence of nonsampling error in Spanish exit polls and evidence that the political context matters. Nonresponse bias is larger in polarized elections and in a climate of fear.
Type-II generalized family-wise error rate formulas with application to sample size determination.
Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie
2016-07-20
Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
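An illustrative Monte Carlo estimate of the r-power (the probability of rejecting at least r of m false null hypotheses) for a single-step Bonferroni rule, in the spirit of the Monte Carlo comparison mentioned in this record. The effect size, alpha, and the Bonferroni choice are assumptions for the sketch; the paper's analytic formulas cover general single-step and step-wise procedures:

```python
import numpy as np
from scipy import stats

def r_power_mc(n, m, r, effect, alpha=0.05, n_sim=20000, seed=1):
    """Monte Carlo r-power of a one-sided Bonferroni test with m endpoints."""
    rng = np.random.default_rng(seed)
    crit = stats.norm.ppf(1 - alpha / m)                        # Bonferroni critical value
    z = rng.normal(effect * np.sqrt(n), 1.0, size=(n_sim, m))   # m endpoint z-statistics
    n_rejected = (z > crit).sum(axis=1)
    return (n_rejected >= r).mean()

# Smallest n giving r-power >= 0.8 for rejecting at least 2 of 3 endpoints
n = 10
while r_power_mc(n, m=3, r=2, effect=0.3) < 0.8:
    n += 5
```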
Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.
Sztepanacz, Jacqueline L; Blows, Mark W
2017-07-01
The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
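A sketch of the guard against sampling error described in this record: compare the observed leading eigenvalue of a sample covariance matrix against its null sampling distribution. Here the null critical value is simulated directly by Monte Carlo as a stand-in for the scaled Tracy-Widom critical value used in the paper; sample sizes and the planted factor structure are illustrative:

```python
import numpy as np

def top_eigenvalue(x):
    """Largest eigenvalue of the sample covariance matrix of (n x p) data."""
    return np.linalg.eigvalsh(np.cov(x, rowvar=False))[-1]

def null_critical_value(n, p, n_sim=500, q=0.95, seed=2):
    """95th percentile of the largest eigenvalue under no true covariance structure."""
    rng = np.random.default_rng(seed)
    sims = [top_eigenvalue(rng.standard_normal((n, p))) for _ in range(n_sim)]
    return float(np.quantile(sims, q))

rng = np.random.default_rng(3)
n, p = 100, 5
crit = null_critical_value(n, p)
null_data = rng.standard_normal((n, p))               # pure sampling error
factor = rng.standard_normal((n, 1)) * np.ones(p)     # one strong shared factor
structured = null_data + 2.0 * factor                 # genuine leading eigenvalue
```

An eigenvalue exceeding `crit` is unlikely to be sampling error alone; one below it cannot be distinguished from noise.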
International Nuclear Information System (INIS)
Hadjiloucas, S; Walker, G C; Bowen, J W; Zafiropoulos, A
2009-01-01
The THz water content index of a sample is defined and advantages in using such metric in estimating a sample's relative water content are discussed. The errors from reflectance measurements performed at two different THz frequencies using a quasi-optical null-balance reflectometer are propagated to the errors in estimating the sample water content index.
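A first-order error-propagation sketch for an index computed from two reflectance measurements, in the spirit of this record. The index definition used here (a normalized difference of the two reflectances) and the measurement values are assumptions for illustration only; the paper defines its own THz water content index:

```python
import numpy as np

def index(r1, r2):
    """Assumed normalized-difference index of two reflectance measurements."""
    return (r1 - r2) / (r1 + r2)

def index_error(r1, r2, dr1, dr2):
    """Propagate independent reflectance errors via partial derivatives."""
    d1 = 2.0 * r2 / (r1 + r2) ** 2     # d(index)/d(r1)
    d2 = -2.0 * r1 / (r1 + r2) ** 2    # d(index)/d(r2)
    return np.sqrt((d1 * dr1) ** 2 + (d2 * dr2) ** 2)

err = index_error(0.45, 0.30, dr1=0.01, dr2=0.01)   # illustrative reflectances
```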
International Nuclear Information System (INIS)
Liu Xiaoxue; Li Ziqiang; Zhao Hongsheng; Zhang Kaihong; Tang Chunhe
2014-01-01
The thicknesses of the four coatings of an HTR coated fuel particle are very important parameters. Controlling the thickness of the four coatings of coated fuel particles is indispensable for the safety of an HTR. A measurement method, the ceramographic sample-microanalysis method, was developed to analyze the thickness of the coatings. The ceramographic sample-microanalysis process involves two main errors: the ceramographic sample preparation error and the thickness measurement error. With the development of microscopic techniques, the thickness measurement error can easily be controlled to meet the design requirements. However, because the coated particles are spheres with diameters ranging from 850 to 1000 μm, the sample preparation process introduces an error. This error differs from one sample to another, and from one particle to another within the same sample. In this article, the error of the ceramographic sample preparation was calculated and analyzed. Results show that the error introduced by sample preparation is minor. The minor error of sample preparation guarantees the high accuracy of the mentioned method, which indicates that this method is a proper method to measure the thickness of the four coatings of coated particles. (author)
An Investigation of effective factors on nurses' speech errors
Directory of Open Access Journals (Sweden)
Maryam Tafaroji yeganeh
2017-03-01
Full Text Available Background: Speech errors are a branch of psycholinguistic science. A speech error, or slip of the tongue, is a natural process that happens to everyone. This research is important because of the sensitivity of nursing, in which speech errors may interfere with the treatment of patients; unfortunately, no research had yet been done in this field. This research was conducted to study the factors (personality, stress, fatigue and insomnia) that cause speech errors among nurses of Ilam province. Materials and Methods: The sample of this descriptive-correlational research consists of 50 nurses working in Mustafa Khomeini Hospital of Ilam province, who were selected randomly. Our data were collected using the Minnesota Multiphasic Personality Inventory, the NEO Five-Factor Inventory and the Expanded Nursing Stress Scale, and were analyzed using SPSS version 20 with descriptive, inferential and multivariate linear regression or two-variable statistical methods (significance level: p ≤ 0.05). Results: 30 (60%) of the nurses participating in the study were female and 19 (38%) were male. Conclusion: In this study, all three factors (personality type, stress and fatigue) had significant effects on nurses' speech errors.
Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.
Reddon, John R.; And Others
1985-01-01
Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)
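A toy version of the Monte Carlo evaluation described in this record: sample repeatedly from a spherical (identity-correlation) multivariate normal and track the determinant of the sample correlation matrix. The rejection threshold here is itself simulated from the null draws, which is an assumption for the sketch; the study evaluates a published test of sphericity:

```python
import numpy as np

def corr_det(x):
    """Determinant of the sample correlation matrix of (n x p) data."""
    return np.linalg.det(np.corrcoef(x, rowvar=False))

rng = np.random.default_rng(4)
n, p, n_sim = 50, 4, 2000
dets = np.array([corr_det(rng.standard_normal((n, p))) for _ in range(n_sim)])
crit = np.quantile(dets, 0.05)       # reject sphericity when the determinant falls below this
type_one = (dets < crit).mean()      # empirical type-one error rate on null samples
```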
Carroll, Raymond J.
2010-05-01
This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.
The predicted CLARREO sampling error of the inter-annual SW variability
Doelling, D. R.; Keyes, D. F.; Nguyen, C.; Macdonnell, D.; Young, D. F.
2009-12-01
The NRC Decadal Survey has called for SI traceability of long-term hyper-spectral flux measurements in order to monitor climate variability. This mission is called the Climate Absolute Radiance and Refractivity Observatory (CLARREO) and is currently defining its mission requirements. The requirements are focused on the ability to measure decadal change of key climate variables at very high accuracy. The accuracy goals are set using anticipated climate change magnitudes, but the accuracy achieved for any given climate variable must take into account the temporal and spatial sampling errors based on satellite orbits and calibration accuracy. The time period to detect a significant trend in the CLARREO record depends on the magnitude of the sampling calibration errors relative to the current inter-annual variability. The largest uncertainty in climate feedbacks remains the effect of changing clouds on planetary energy balance. Some regions on earth have strong diurnal cycles, such as maritime stratus and afternoon land convection; other regions have strong seasonal cycles, such as the monsoon. However, when monitoring inter-annual variability these cycles are only important if the strength of these cycles vary on decadal time scales. This study will attempt to determine the best satellite constellations to reduce sampling error and to compare the error with the current inter-annual variability signal to ensure the viability of the mission. The study will incorporate Clouds and the Earth's Radiant Energy System (CERES) (Monthly TOA/Surface Averages) SRBAVG product TOA LW and SW climate quality fluxes. The fluxes are derived by combining Terra (10:30 local equator crossing time) CERES fluxes with 3-hourly 5-geostationary satellite estimated broadband fluxes, which are normalized using the CERES fluxes, to complete the diurnal cycle. These fluxes were saved hourly during processing and considered the truth dataset. 90°, 83° and 74° inclination precessionary orbits as
In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample
Wang, B.; Pan, B.; Lubineau, Gilles
2017-11-27
The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.
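A sketch of the compensation step described in this record: fit a low-order parametric model to the artificial displacement field measured on the stationary reference sample, then subtract the fitted model from the test-sample field. A linear model in the axial coordinate (translation plus dilatation) and the synthetic numbers are assumptions for illustration:

```python
import numpy as np

def fit_artificial_displacement(z, u_ref):
    """Least-squares fit u = a + b*z (translation a, dilatation b) to the
    displacement measured on the stationary reference sample."""
    A = np.column_stack([np.ones_like(z), z])
    coef, *_ = np.linalg.lstsq(A, u_ref, rcond=None)
    return coef

z = np.linspace(0.0, 10.0, 50)                     # mm, axial position in the volume
rng = np.random.default_rng(9)
u_ref = 0.02 + 0.001 * z + rng.normal(0, 1e-4, z.size)   # synthetic artificial field
a, b = fit_artificial_displacement(z, u_ref)

def correct(u_test):
    """Remove the fitted artificial deformation from a test-sample field."""
    return u_test - (a + b * z)
```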
Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
International Nuclear Information System (INIS)
Helgesson, P.; Sjöstrand, H.; Koning, A.J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.
2016-01-01
In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
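A sketch of the two likelihood computations compared in this record: (i) the exact multivariate-Gaussian likelihood with covariance sigma_r^2 I + sigma_s^2 J (J the all-ones matrix), via the usual matrix route, and (ii) a Monte Carlo estimate that samples the shared systematic error instead of working with the full covariance matrix. The residuals and uncertainties are illustrative numbers:

```python
import numpy as np
from scipy import stats

def likelihood_exact(resid, sig_r, sig_s):
    """Conventional likelihood: multivariate normal with a systematic-error term."""
    n = len(resid)
    cov = sig_r**2 * np.eye(n) + sig_s**2 * np.ones((n, n))
    return stats.multivariate_normal(mean=np.zeros(n), cov=cov).pdf(resid)

def likelihood_sampled(resid, sig_r, sig_s, n_samp=200000, seed=5):
    """Estimate the likelihood by sampling the shared systematic error s:
    given s, the experimental points are independent normals."""
    rng = np.random.default_rng(seed)
    s = rng.normal(0.0, sig_s, size=(n_samp, 1))
    dens = stats.norm.pdf(resid, loc=s, scale=sig_r)
    return dens.prod(axis=1).mean()

resid = np.array([0.3, -0.1, 0.2])     # model-minus-data residuals
L_exact = likelihood_exact(resid, sig_r=0.5, sig_s=0.3)
L_mc = likelihood_sampled(resid, sig_r=0.5, sig_s=0.3)
```

As the record notes, the sampled estimate converges to the matrix-based value as the number of systematic-error samples grows, but the convergence can be slow.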
Estimation of sampling error uncertainties in observed surface air temperature change in China
Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun
2017-08-01
This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with maximum values exceeding 2.0 K², while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K². In general, negative temperature anomalies existed in each month prior to the 1980s, and a warming began thereafter, which accelerated in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.
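A minimal sketch of reporting a trend with its uncertainty, as in the "0.51 ± 0.29 K (10 year)⁻¹" figures in this record: an ordinary least-squares slope and its standard error on a synthetic annual temperature anomaly series (the data here are made up, not from the study):

```python
import numpy as np

def trend_with_se(years, temps):
    """OLS slope (per year) and its standard error for a time series."""
    x = years - years.mean()
    slope = (x * (temps - temps.mean())).sum() / (x**2).sum()
    resid = temps - temps.mean() - slope * x
    se = np.sqrt((resid**2).sum() / (len(x) - 2) / (x**2).sum())
    return slope, se

rng = np.random.default_rng(6)
years = np.arange(1960, 2010, dtype=float)
temps = 0.02 * (years - 1960) + rng.normal(0, 0.15, size=years.size)  # ~0.2 K per decade
slope, se = trend_with_se(years, temps)
```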
Data error effects on net radiation and evapotranspiration estimation
International Nuclear Information System (INIS)
Llasat, M.C.; Snyder, R.L.
1998-01-01
The objective of this paper is to evaluate the potential error in estimating the net radiation and reference evapotranspiration resulting from errors in the measurement or estimation of weather parameters. A methodology for estimating the net radiation using hourly weather variables measured at a typical agrometeorological station (e.g., solar radiation, temperature and relative humidity) is presented. Then the error propagation analysis is made for net radiation and for reference evapotranspiration. Data from the Raimat weather station, which is located in the Catalonia region of Spain, are used to illustrate the error relationships. The results show that temperature, relative humidity and cloud cover errors have little effect on the net radiation or reference evapotranspiration. A 5°C error in estimating surface temperature leads to errors as big as 30 W m⁻² at high temperature. A 4% solar radiation (Rs) error can cause a net radiation error as big as 26 W m⁻² when Rs ≈ 1000 W m⁻². However, the error is less when cloud cover is calculated as a function of the solar radiation. The absolute error in reference evapotranspiration (ETo) equals the product of the net radiation error and the radiation term weighting factor [W = Δ/(Δ + γ)] in the ETo equation. Therefore, the ETo error varies between 65 and 85% of the Rn error as air temperature increases from about 20° to 40°C. (author)
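A worked illustration of the weighting factor W = Δ/(Δ + γ) that maps a net-radiation error into a reference-evapotranspiration error. Δ is the slope of the saturation vapour pressure curve (standard FAO-56 form) and γ the psychrometric constant; the 26 W m⁻² Rn error is taken from the record, while the near-sea-level γ is an assumed standard value:

```python
import numpy as np

def svp_slope(t_celsius):
    """Slope of the saturation vapour pressure curve, kPa per deg C (FAO-56 form)."""
    es = 0.6108 * np.exp(17.27 * t_celsius / (t_celsius + 237.3))
    return 4098.0 * es / (t_celsius + 237.3) ** 2

gamma = 0.066          # psychrometric constant, kPa per deg C, near sea level
for t in (20.0, 40.0):
    w = svp_slope(t) / (svp_slope(t) + gamma)   # radiation term weighting factor
    eto_error = w * 26.0                        # W m^-2, propagating the 26 W m^-2 Rn error
```

At 20°C the factor is about 0.69 and at 40°C about 0.86, matching the 65-85% range quoted in the abstract.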
International Nuclear Information System (INIS)
Hirschfeld, T.; Honigs, D.; Hieftje, G.
1985-01-01
Optimal absorbance levels for quantitative analysis in the presence of photometric error have been described in the past. In newer instrumentation, such as FT-IR and NIRA spectrometers, the photometric error is no longer limiting. In these instruments, pathlength error due to cell or sampling irreproducibility is often a major concern. One can derive the optimal absorbance by taking both pathlength and photometric errors into account. This paper analyzes the case of pathlength error >> photometric error (trivial) and various cases in which the pathlength error and the photometric error are of the same order: adjustable concentration (trivial until dilution errors are considered), constant relative pathlength error (trivial), and constant absolute pathlength error. The latter, in particular, is analyzed in detail to give the behavior of the error, the behavior of the optimal absorbance in its presence, and the total error levels attainable.
Shipitsin, M; Small, C; Choudhury, S; Giladi, E; Friedlander, S; Nardone, J; Hussain, S; Hurley, A D; Ernst, C; Huang, Y E; Chang, H; Nifong, T P; Rimm, D L; Dunyak, J; Loda, M; Berman, D M; Blume-Jensen, P
2014-09-09
Key challenges of biopsy-based determination of prostate cancer aggressiveness include tumour heterogeneity, biopsy-sampling error, and variations in biopsy interpretation. The resulting uncertainty in risk assessment leads to significant overtreatment, with associated costs and morbidity. We developed a performance-based strategy to identify protein biomarkers predictive of prostate cancer aggressiveness and lethality regardless of biopsy-sampling variation. Prostatectomy samples from a large patient cohort with long follow-up were blindly assessed by expert pathologists who identified the tissue regions with the highest and lowest Gleason grade from each patient. To simulate biopsy-sampling error, a core from a high- and a low-Gleason area from each patient sample was used to generate a 'high' and a 'low' tumour microarray, respectively. Using a quantitative proteomics approach, we identified from 160 candidates 12 biomarkers that predicted prostate cancer aggressiveness (surgical Gleason and TNM stage) and lethal outcome robustly in both high- and low-Gleason areas. Conversely, a previously reported lethal outcome-predictive marker signature for prostatectomy tissue was unable to perform under circumstances of maximal sampling error. Our results have important implications for cancer biomarker discovery in general and development of a sampling error-resistant clinical biopsy test for prediction of prostate cancer aggressiveness.
Error framing effects on performance: cognitive, motivational, and affective pathways.
Steele-Johnson, Debra; Kalinoski, Zachary T
2014-01-01
Our purpose was to examine whether positive error framing, that is, making errors salient and cuing individuals to see errors as useful, can benefit learning when task exploration is constrained. Recent research has demonstrated the benefits of a newer approach to training, that is, error management training, that includes the opportunity to actively explore the task and framing errors as beneficial to learning complex tasks (Keith & Frese, 2008). Other research has highlighted the important role of errors in on-the-job learning in complex domains (Hutchins, 1995). Participants (N = 168) from a large undergraduate university performed a class scheduling task. Results provided support for a hypothesized path model in which error framing influenced cognitive, motivational, and affective factors which in turn differentially affected performance quantity and quality. Within this model, error framing had significant direct effects on metacognition and self-efficacy. Our results suggest that positive error framing can have beneficial effects even when tasks cannot be structured to support extensive exploration. Whereas future research can expand our understanding of error framing effects on outcomes, results from the current study suggest that positive error framing can facilitate learning from errors in real-time performance of tasks.
Bhadra, Anindya; Carroll, Raymond J
2016-07-01
In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show that, for truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62% and 54% increases in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines, respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
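A sketch of the key computational ingredient in this record: drawing from a two-component mixture of double-truncated normals, as needed for the Gibbs update of a covariate measured with error under a degree-one spline. The component weights, means, standard deviations, and the knot at 0.5 are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.stats import truncnorm

def sample_truncnorm_mixture(weights, means, sds, bounds, size, rng):
    """Draw from a mixture of double-truncated normals: pick a component,
    then draw from that component's truncated normal."""
    comp = rng.choice(len(weights), p=weights, size=size)
    out = np.empty(size)
    for k, (lo, hi) in enumerate(bounds):
        idx = comp == k
        a, b = (lo - means[k]) / sds[k], (hi - means[k]) / sds[k]
        out[idx] = truncnorm.rvs(a, b, loc=means[k], scale=sds[k],
                                 size=idx.sum(), random_state=rng)
    return out

rng = np.random.default_rng(7)
draws = sample_truncnorm_mixture(
    weights=[0.4, 0.6], means=[0.2, 0.8], sds=[0.1, 0.1],
    bounds=[(0.0, 0.5), (0.5, 1.0)], size=5000, rng=rng)
```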
Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz
2014-07-01
Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if in addition a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Error baseline rates of five sample preparation methods used to characterize RNA virus populations.
Directory of Open Access Journals (Sweden)
Jeffrey R Kugelman
Full Text Available Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4-5) of all compared methods.
Quantifying Configuration-Sampling Error in Langevin Simulations of Complex Molecular Systems
Directory of Open Access Journals (Sweden)
Josh Fass
2018-04-01
Full Text Available While Langevin integrators are popular in the study of equilibrium properties of complex systems, it is challenging to estimate the timestep-induced discretization error: the degree to which the sampled phase-space or configuration-space probability density departs from the desired target density due to the use of a finite integration timestep. Sivak et al. introduced a convenient approach to approximating a natural measure of error between the sampled density and the target equilibrium density, the Kullback-Leibler (KL) divergence, in phase space, but did not specifically address the issue of configuration-space properties, which are much more commonly of interest in molecular simulations. Here, we introduce a variant of this near-equilibrium estimator capable of measuring the error in the configuration-space marginal density, validating it against a complex but exact nested Monte Carlo estimator to show that it reproduces the KL divergence with high fidelity. To illustrate its utility, we employ this new near-equilibrium estimator to assess a claim that a recently proposed Langevin integrator introduces extremely small configuration-space density errors up to the stability limit at no extra computational expense. Finally, we show how this approach to quantifying sampling bias can be applied to a wide variety of stochastic integrators by following a straightforward procedure to compute the appropriate shadow work, and describe how it can be extended to quantify the error in arbitrary marginal or conditional distributions of interest.
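A toy version of the quantity estimated in this record: the KL divergence from the configuration-space density actually sampled by a discretized integrator to the target equilibrium density. Here both densities are plug-in histograms built from synthetic draws; the paper's near-equilibrium estimator avoids histograms entirely, so this sketch only illustrates what the KL divergence measures:

```python
import numpy as np

def kl_divergence(p_samples, q_samples, bins):
    """Plug-in KL(p || q) between two histogram densities (small smoothing
    constant avoids log(0) in empty bins)."""
    p, _ = np.histogram(p_samples, bins=bins)
    q, _ = np.histogram(q_samples, bins=bins)
    p = (p + 1e-10) / p.sum()
    q = (q + 1e-10) / q.sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(8)
bins = np.linspace(-5, 5, 51)
target = rng.normal(0.0, 1.0, 100000)            # exact equilibrium samples
slightly_off = rng.normal(0.0, 1.05, 100000)     # small discretization bias
badly_off = rng.normal(0.0, 1.5, 100000)         # large discretization bias
kl_small = kl_divergence(slightly_off, target, bins)
kl_large = kl_divergence(badly_off, target, bins)
```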
International Nuclear Information System (INIS)
Rieger, J.T.; Bryce, R.W.
1990-01-01
Ground-water samples collected for hazardous-waste and radiological monitoring have come under strict regulatory and quality assurance requirements as a result of laws such as the Resource Conservation and Recovery Act. To comply with these laws, the labeling system used to identify environmental samples had to be upgraded to ensure proper handling and to protect collection personnel from exposure to sample contaminants and sample preservatives. The sample label now used at the Pacific Northwest Laboratory is a complete sample document. In the event other paperwork on a labeled sample were lost, the necessary information could be found on the label.
The effect of errors in charged particle beams
International Nuclear Information System (INIS)
Carey, D.C.
1987-01-01
Residual errors in a charged particle optical system determine how well the performance of the system conforms to the theory on which it is based. Mathematically possible optical modes can sometimes be eliminated as requiring precisions not attainable. Other plans may require introduction of means of correction for the occurrence of various errors. Error types include misalignments, magnet fabrication precision limitations, and magnet current regulation errors. A thorough analysis of a beam optical system requires computer simulation of all these effects. A unified scheme for the simulation of errors and their correction is discussed
Shipitsin, M; Small, C; Choudhury, S; Giladi, E; Friedlander, S; Nardone, J; Hussain, S; Hurley, A D; Ernst, C; Huang, Y E; Chang, H; Nifong, T P; Rimm, D L; Dunyak, J; Loda, M
2014-01-01
Background: Key challenges of biopsy-based determination of prostate cancer aggressiveness include tumour heterogeneity, biopsy-sampling error, and variations in biopsy interpretation. The resulting uncertainty in risk assessment leads to significant overtreatment, with associated costs and morbidity. We developed a performance-based strategy to identify protein biomarkers predictive of prostate cancer aggressiveness and lethality regardless of biopsy-sampling variation. Methods: Prostatectom...
Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem
Energy Technology Data Exchange (ETDEWEB)
Reer, B
2004-03-01
The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Using a sample of 180 abnormal event sequences as an example, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) is outlined. (author)
Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem
International Nuclear Information System (INIS)
Reer, B.
2004-01-01
The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Using a sample of 180 abnormal event sequences as an example, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) is outlined. (author)
Effect of GPS errors on Emission model
DEFF Research Database (Denmark)
Lehmann, Anders; Gross, Allan
In this paper we will show how Global Positioning System (GPS) data obtained from smartphones can be used to model air quality in urban settings. The paper examines the uncertainty of smartphone location utilising GPS, and ties this location uncertainty to air quality models. The results presented...... in this paper indicate that the location error from using smartphones is within the accuracy needed to use the location data in air quality modelling. The nature of smartphone location data enables more accurate and near real time air quality modelling and monitoring. The location data is harvested from user...
On the errors on Omega(0): Monte Carlo simulations of the EMSS cluster sample
DEFF Research Database (Denmark)
Oukbir, J.; Arnaud, M.
2001-01-01
We perform Monte Carlo simulations of synthetic EMSS cluster samples, to quantify the systematic errors and the statistical uncertainties on the estimate of Omega(0) derived from fits to the cluster number density evolution and to the X-ray temperature distribution up to z=0.83. We identify...... the scatter around the relation between cluster X-ray luminosity and temperature to be a source of systematic error, of the order of Delta(syst)Omega(0) = 0.09, if not properly taken into account in the modelling. After correcting for this bias, our best Omega(0) is 0.66. The uncertainties on the shape......
Mitigating Observation Perturbation Sampling Errors in the Stochastic EnKF
Hoteit, Ibrahim
2015-03-17
The stochastic ensemble Kalman filter (EnKF) updates its ensemble members with observations perturbed with noise sampled from the distribution of the observational errors. This was shown to introduce noise into the system and may become pronounced when the ensemble size is smaller than the rank of the observational error covariance, which is often the case in real oceanic and atmospheric data assimilation applications. This work introduces an efficient serial scheme to mitigate the impact of observations’ perturbations sampling in the analysis step of the EnKF, which should provide more accurate ensemble estimates of the analysis error covariance matrices. The new scheme is simple to implement within the serial EnKF algorithm, requiring only the approximation of the EnKF sample forecast error covariance matrix by a matrix with one rank less. The new EnKF scheme is implemented and tested with the Lorenz-96 model. Results from numerical experiments are conducted to compare its performance with the EnKF and two standard deterministic EnKFs. This study shows that the new scheme enhances the behavior of the EnKF and may lead to better performance than the deterministic EnKFs even when implemented with relatively small ensembles.
Mitigating Observation Perturbation Sampling Errors in the Stochastic EnKF
Hoteit, Ibrahim; Pham, D.-T.; El Gharamti, Mohamad; Luo, X.
2015-01-01
The stochastic ensemble Kalman filter (EnKF) updates its ensemble members with observations perturbed with noise sampled from the distribution of the observational errors. This was shown to introduce noise into the system and may become pronounced when the ensemble size is smaller than the rank of the observational error covariance, which is often the case in real oceanic and atmospheric data assimilation applications. This work introduces an efficient serial scheme to mitigate the impact of observations’ perturbations sampling in the analysis step of the EnKF, which should provide more accurate ensemble estimates of the analysis error covariance matrices. The new scheme is simple to implement within the serial EnKF algorithm, requiring only the approximation of the EnKF sample forecast error covariance matrix by a matrix with one rank less. The new EnKF scheme is implemented and tested with the Lorenz-96 model. Results from numerical experiments are conducted to compare its performance with the EnKF and two standard deterministic EnKFs. This study shows that the new scheme enhances the behavior of the EnKF and may lead to better performance than the deterministic EnKFs even when implemented with relatively small ensembles.
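The perturbed-observation update that introduces this sampling noise is easy to state. Below is a minimal textbook stochastic EnKF analysis step in NumPy, not the serial mitigation scheme proposed in the paper; the dimensions, observation operator, and covariances are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal stochastic EnKF analysis step (textbook form).
n, m, Ne = 3, 2, 50            # state dim, obs dim, ensemble size
H = np.array([[1., 0., 0.],
              [0., 1., 0.]])   # linear observation operator
R = 0.25 * np.eye(m)           # observation-error covariance

Xf = rng.normal(0.0, 1.0, size=(n, Ne))        # forecast ensemble
y = np.array([0.5, -0.2])                      # observation

Xm = Xf.mean(axis=1, keepdims=True)
A = Xf - Xm
Pf = A @ A.T / (Ne - 1)                        # sample forecast covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R) # Kalman gain

# Perturbed observations: the noise source the paper's scheme mitigates.
Yp = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=Ne).T
Xa = Xf + K @ (Yp - H @ Xf)                    # analysis ensemble
print("analysis mean:", Xa.mean(axis=1))
```

The ensemble spread in the observed components shrinks after the update; the sampling error at issue here comes from the finite-size draw of the perturbations `Yp`.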
Directory of Open Access Journals (Sweden)
M. Toohey
2013-04-01
Full Text Available Climatologies of atmospheric observations are often produced by binning measurements according to latitude and calculating zonal means. The uncertainty in these climatological means is characterised by the standard error of the mean (SEM. However, the usual estimator of the SEM, i.e., the sample standard deviation divided by the square root of the sample size, holds only for uncorrelated randomly sampled measurements. Measurements of the atmospheric state along a satellite orbit cannot always be considered as independent because (a the time-space interval between two nearest observations is often smaller than the typical scale of variations in the atmospheric state, and (b the regular time-space sampling pattern of a satellite instrument strongly deviates from random sampling. We have developed a numerical experiment where global chemical fields from a chemistry climate model are sampled according to real sampling patterns of satellite-borne instruments. As case studies, the model fields are sampled using sampling patterns of the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS and Atmospheric Chemistry Experiment Fourier-Transform Spectrometer (ACE-FTS satellite instruments. Through an iterative subsampling technique, and by incorporating information on the random errors of the MIPAS and ACE-FTS measurements, we produce empirical estimates of the standard error of monthly mean zonal mean model O3 in 5° latitude bins. We find that generally the classic SEM estimator is a conservative estimate of the SEM, i.e., the empirical SEM is often less than or approximately equal to the classic estimate. Exceptions occur only when natural variability is larger than the random measurement error, and specifically in instances where the zonal sampling distribution shows non-uniformity with a similar zonal structure as variations in the sampled field, leading to maximum sensitivity to arbitrary phase shifts between the sample distribution and
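The failure of the classic SEM estimator under correlated sampling can be demonstrated numerically. The sketch below is illustrative only: it uses an AR(1) process as a stand-in for along-orbit correlation and, unlike the study's regime, has no dominating random measurement error, so here the classic estimator underestimates rather than acting conservatively.

```python
import numpy as np

rng = np.random.default_rng(2)

# Compare the classic SEM estimator (s / sqrt(n)) with the empirical
# spread of the sample mean when measurements are serially correlated.
n, reps, phi = 100, 2000, 0.8
means, classic = [], []
for _ in range(reps):
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):                      # AR(1) with unit variance
        x[t] = phi * x[t - 1] + rng.normal(0, np.sqrt(1 - phi**2))
    means.append(x.mean())
    classic.append(x.std(ddof=1) / np.sqrt(n))

empirical_sem = np.std(means)
print(f"classic SEM ~ {np.mean(classic):.3f}, empirical SEM ~ {empirical_sem:.3f}")
```

With positive serial correlation the empirical spread of the mean is several times the classic estimate, which is the basic reason the usual s/sqrt(n) formula cannot be trusted for structured satellite sampling.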
Effects of Target Positioning Error on Motion Compensation for Airborne Interferometric SAR
Directory of Open Access Journals (Sweden)
Li Yin-wei
2013-12-01
Full Text Available The measurement inaccuracies of the Inertial Measurement Unit/Global Positioning System (IMU/GPS), as well as the positioning error of the target, may contribute to residual uncompensated motion errors in the MOtion COmpensation (MOCO) approach based on IMU/GPS measurements. Addressing the effects of target positioning error on MOCO for airborne interferometric SAR, the paper first derives a mathematical model of the residual motion error brought about by target positioning error under squint conditions. Based on this model, it analyzes the contributions to the residual motion error from the system sampling delay error, the Doppler center frequency error, and the reference DEM error, all of which result in target positioning error. The paper then discusses the effects of the reference DEM error on interferometric SAR image quality, the interferometric phase, and the coherence coefficient. The research provides a theoretical basis for MOCO precision in the signal processing of airborne high-precision SAR and airborne repeat-pass interferometric SAR.
Death Certification Errors and the Effect on Mortality Statistics.
McGivern, Lauri; Shulman, Leanne; Carney, Jan K; Shapiro, Steven; Bundock, Elizabeth
Errors in cause and manner of death on death certificates are common and affect families, mortality statistics, and public health research. The primary objective of this study was to characterize errors in the cause and manner of death on death certificates completed by non-Medical Examiners. A secondary objective was to determine the effects of errors on national mortality statistics. We retrospectively compared 601 death certificates completed between July 1, 2015, and January 31, 2016, from the Vermont Electronic Death Registration System with clinical summaries from medical records. Medical Examiners, blinded to original certificates, reviewed summaries, generated mock certificates, and compared mock certificates with original certificates. They then graded errors using a scale from 1 to 4 (higher numbers indicated increased impact on interpretation of the cause) to determine the prevalence of minor and major errors. They also compared International Classification of Diseases, 10th Revision (ICD-10) codes on original certificates with those on mock certificates. Of 601 original death certificates, 319 (53%) had errors; 305 (51%) had major errors; and 59 (10%) had minor errors. We found no significant differences by certifier type (physician vs nonphysician). We did find significant differences in major errors in place of death (P statistics. Surveillance and certifier education must expand beyond local and state efforts. Simplifying and standardizing underlying literal text for cause of death may improve accuracy, decrease coding errors, and improve national mortality statistics.
Error baseline rates of five sample preparation methods used to characterize RNA virus populations
Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.
2017-01-01
Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods. PMID:28182717
The Effect of Divided Attention on Inhibiting the Gravity Error
Hood, Bruce M.; Wilson, Alice; Dyson, Sally
2006-01-01
Children who could overcome the gravity error on Hood's (1995) tubes task were tested in a condition where they had to monitor two falling balls. This condition significantly impaired search performance with the majority of mistakes being gravity errors. In a second experiment, the effect of monitoring two balls was compared in the tubes task and…
Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.
Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko
2017-06-01
Cyst nematodes are serious plant-parasitic pests which can cause severe yield losses and extensive damage. Since there is still very little information about the error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per 1-m² plot, if the average cyst count per examined plot exceeds 75 cysts per 100 g of soil. Goodness of fit of the data to probability distributions, tested with the χ² test, confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended measure of sampling precision of 17%, expressed through the coefficient of variation (cv), was achieved if plots of 1 m² contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with less than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave a cv higher than 23%. This study indicates that more attention should be paid to the estimation of sampling error in experimental field plots to ensure more reliable estimation of the population density of cyst nematodes.
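The link between negative binomial counts, repeated bulk samples, and the cv measure of precision can be sketched as follows. The parameter values, including the aggregation parameter k, are assumptions for illustration, not estimates from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Draw negative-binomially distributed cyst counts (the distribution the
# chi-square test supported) and express sampling precision as the
# coefficient of variation (cv) of repeated bulk-sample means per plot.
mean_count, k = 90.0, 5.0      # mean cysts/100 g soil; aggregation k (assumed)
p = k / (k + mean_count)       # NumPy's negative_binomial parameterisation

reps = 5                       # bulk samples per plot
counts = rng.negative_binomial(k, p, size=(10_000, reps))
plot_means = counts.mean(axis=1)
cv = plot_means.std() / plot_means.mean()
print(f"cv of {reps}-sample plot means: {100 * cv:.1f}%")
```

Increasing `reps` shrinks the cv roughly as 1/sqrt(reps), which is the mechanism behind the study's recommendation of more repetitions for plots with lower contamination.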
An analytical examination of distortions in power spectra due to sampling errors
International Nuclear Information System (INIS)
Njau, E.C.
1982-06-01
Distortions introduced into spectral energy densities of sinusoid signals as well as those of more complex signals through different forms of errors in signal sampling are developed and shown analytically. The approach we have adopted in doing this involves, firstly, developing for each type of signal and for the corresponding form of sampling errors an analytical expression that gives the faulty digitization process involved in terms of the features of the particular signal. Secondly, we take advantage of a method described elsewhere [IC/82/44] to relate, as much as possible, the true spectral energy density of the signal and the corresponding spectral energy density of the faulty digitization process. Thirdly, we then develop expressions which reveal the distortions that are formed in the directly computed spectral energy density of the digitized signal. It is evident from the formulations developed herein that the types of sampling errors taken into consideration may create false peaks and other distortions that are of non-negligible concern in computed power spectra. (author)
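A numerical counterpart to this analytical treatment: the sketch below samples a pure sinusoid at jittered positions and shows how position errors move energy from the true spectral line into spurious broadband content. The jitter model and magnitudes are assumptions for illustration, not the error forms analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Power spectrum of a sinusoid sampled at exact vs jittered positions.
N, f0 = 1024, 50               # samples; sinusoid frequency (cycles/record)
t = np.arange(N) / N
jitter = rng.normal(0, 0.2 / N, size=N)   # timing error, fraction of record

exact = np.sin(2 * np.pi * f0 * t)
jittered = np.sin(2 * np.pi * f0 * (t + jitter))

P_exact = np.abs(np.fft.rfft(exact)) ** 2
P_jitt = np.abs(np.fft.rfft(jittered)) ** 2

# Off-peak energy: everything except DC and the true spectral line.
off = np.ones(len(P_exact), bool)
off[[0, f0]] = False
print("off-peak energy, exact sampling:   ", P_exact[off].sum())
print("off-peak energy, jittered sampling:", P_jitt[off].sum())
```

The exactly sampled sinusoid concentrates essentially all its energy at the true line, while the jittered version leaks non-negligible energy elsewhere in the spectrum, the kind of distortion the formulations above quantify analytically.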
Refractive Error in a Sample of Black High School Children in South Africa.
Wajuihian, Samuel Otabor; Hansraj, Rekha
2017-12-01
This study focused on a cohort that has not been studied and who currently have limited access to eye care services. The findings, while improving the understanding of the distribution of refractive errors, also enabled identification of children requiring intervention and provided a guide for future resource allocation. The aim of conducting the study was to determine the prevalence and distribution of refractive error and its association with gender, age, and school grade level. Using a multistage random cluster sampling, 1586 children, 632 males (40%) and 954 females (60%), were selected. Their ages ranged between 13 and 18 years with a mean of 15.81 ± 1.56 years. The visual functions evaluated included visual acuity using the logarithm of minimum angle of resolution chart and refractive error measured using the autorefractor and then refined subjectively. Axis astigmatism was presented in the vector method where positive values of J0 indicated with-the-rule astigmatism, negative values indicated against-the-rule astigmatism, whereas J45 represented oblique astigmatism. Overall, patients were myopic with a mean spherical power for right eye of -0.02 ± 0.47; mean astigmatic cylinder power was -0.09 ± 0.27 with mainly with-the-rule astigmatism (J0 = 0.01 ± 0.11). The prevalence estimates were as follows: myopia (at least -0.50) 7% (95% confidence interval [CI], 6 to 9%), hyperopia (at least 0.5) 5% (95% CI, 4 to 6%), astigmatism (at least -0.75 cylinder) 3% (95% CI, 2 to 4%), and anisometropia 3% (95% CI, 2 to 4%). There was no significant association between refractive error and any of the categories (gender, age, and grade levels). The prevalence of refractive error in the sample of high school children was relatively low. Myopia was the most prevalent, and findings on its association with age suggest that the prevalence of myopia may be stabilizing at late teenage years.
Slattery, Jim
2005-01-01
To evaluate various designs for a quality assurance system to detect and control human errors in a national screening programme for diabetic retinopathy. A computer simulation was performed of some possible ways of sampling the referral decisions made during grading and of different criteria for initiating more intensive QA investigations. The effectiveness of QA systems was assessed by the ability to detect a grader making occasional errors in referral. Substantial QA sample sizes are needed to ensure against inappropriate failure to refer. Detection of a grader who failed to refer one in ten cases can be achieved with a probability of 0.58 using an annual sample size of 300 and 0.77 using a sample size of 500. An unmasked verification of a sample of non-referrals by a specialist is the most effective method of internal QA for the diabetic retinopathy screening programme. Preferential sampling of those with some degree of disease may improve the efficiency of the system.
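The dependence of detection probability on QA sample size can be illustrated with a small Monte Carlo sketch. The prevalence, miss rate, and detection criterion below are assumptions for illustration, not the parameters of the simulation reported above, so the probabilities will not match the 0.58 and 0.77 figures.

```python
import numpy as np

rng = np.random.default_rng(5)

def detection_prob(sample_size, prevalence=0.01, miss_rate=0.1,
                   threshold=1, trials=20_000):
    """P(a QA sample contains >= threshold missed referrals).

    prevalence: fraction of graded cases that are truly referable (assumed);
    miss_rate:  fraction of referable cases the grader fails to refer.
    """
    p_missed = prevalence * miss_rate     # missed referral per sampled decision
    found = rng.binomial(sample_size, p_missed, size=trials)
    return np.mean(found >= threshold)

for n in (300, 500):
    print(n, detection_prob(n))
```

The qualitative conclusion survives any reasonable parameter choice: detection probability rises with sample size, and substantial samples are needed before an occasional failure to refer becomes likely to surface.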
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Software error masking effect on hardware faults
International Nuclear Information System (INIS)
Choi, Jong Gyun; Seong, Poong Hyun
1999-01-01
Based on the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL), in this work, a simulation model for fault injection is developed to estimate the dependability of a digital system in the operational phase. We investigated the software masking effect on hardware faults through single bit-flip and stuck-at-x fault injection into the internal registers of the processor and memory cells. The fault locations cover all registers and memory cells, and the fault distribution over locations is chosen randomly from a uniform probability distribution. Using this model, we predicted the reliability and the masking effect of application software in a digital system, the Interposing Logic System (ILS), in a nuclear power plant. We considered four software operational profiles. The results show that the software masking effect on hardware faults should be properly considered to predict system dependability accurately in the operational phase, because the masking effect takes different values depending on the operational profile.
DEFF Research Database (Denmark)
Warnecke, Solveig
of the pharmaceutical drug, both during development and throughout the production. The usage of PAT tools is encouraged by the regulatory authorities, and therefore the interest in new and improved PAT tools is increasing. The main purpose of introducing Quality by Design (QbD) and PAT in pharmaceutical production...... practice in traditional pharmaceutical industry. In this thesis, three spectroscopic PAT tools are investigated: near-infrared, terahertz, and fluorescence spectroscopy. These techniques have been evaluated with chemometrics and the theory of sampling. The first study focused on the critical but rather...... overlooked sampling uncertainty that exists in all analytical measurements. The sampling error was studied using an example involving near infrared transmission (NIT) spectroscopy to study the content uniformity of five batches of escitalopram tablets, produced at different active pharmaceutical ingredients...
Effects of variable transformations on errors in FORM results
International Nuclear Information System (INIS)
Qin Quan; Lin Daojin; Mei Gang; Chen Hao
2006-01-01
On the basis of studies of the second partial derivatives of the variable transformation functions for nine different non-normal variables, the paper comprehensively discusses the effects of the transformation on FORM results. It shows that the signs and magnitudes of the errors in FORM results depend on the distributions of the basic variables, on whether the basic variables represent resistances or actions, and on the design point locations in the standard normal space. The transformations of exponential or Gamma resistance variables can generate +24% errors in the FORM failure probability, and the transformation of Frechet action variables can generate -31% errors.
Error correcting circuit design with carbon nanotube field effect transistors
Liu, Xiaoqiang; Cai, Li; Yang, Xiaokuo; Liu, Baojun; Liu, Zhongyong
2018-03-01
In this work, a parallel error correcting circuit based on (7, 4) Hamming code is designed and implemented with carbon nanotube field effect transistors, and its function is validated by simulation in HSpice with the Stanford model. A grouping method which is able to correct multiple bit errors in 16-bit and 32-bit application is proposed, and its error correction capability is analyzed. Performance of circuits implemented with CNTFETs and traditional MOSFETs respectively is also compared, and the former shows a 34.4% decrement of layout area and a 56.9% decrement of power consumption.
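The (7, 4) Hamming logic that the circuit implements can be sketched in software. This is plain NumPy with a conventional systematic generator/parity-check pair, not a model of the CNTFET circuit itself.

```python
import numpy as np

# (7, 4) Hamming code: 4 data bits, 3 parity bits, corrects any single-bit error.
G = np.array([[1, 0, 0, 0, 1, 1, 0],   # generator matrix (systematic form)
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],   # parity-check matrix (G @ H.T = 0 mod 2)
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    """Encode 4 data bits into a 7-bit codeword."""
    return (np.array(data4) @ G) % 2

def correct(word7):
    """Correct a single-bit error using the syndrome."""
    syndrome = (H @ word7) % 2
    if syndrome.any():
        # The syndrome equals the column of H at the error position.
        pos = int(np.where((H.T == syndrome).all(axis=1))[0][0])
        word7 = word7.copy()
        word7[pos] ^= 1
    return word7

code = encode([1, 0, 1, 1])
corrupted = code.copy()
corrupted[2] ^= 1                      # inject a single-bit error
assert (correct(corrupted) == code).all()
print("single-bit error corrected")
```

The grouping method described in the paper extends this idea to 16-bit and 32-bit words by applying such a block code per group, so multiple errors can be corrected as long as each group contains at most one.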
Bandwagon effects and error bars in particle physics
Jeng, Monwhea
2007-02-01
We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit "bandwagon effects": reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations.
Bandwagon effects and error bars in particle physics
International Nuclear Information System (INIS)
Jeng, Monwhea
2007-01-01
We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit 'bandwagon effects': reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations.
Rach, Stefanie; Ufer, Stefan; Heinze, Aiso
2013-01-01
Constructive error handling is considered an important factor for individual learning processes. In a quasi-experimental study with Grades 6 to 9 students, we investigate effects on students' attitudes towards errors as learning opportunities in two conditions: an error-tolerant classroom culture, and the first condition along with additional…
Understanding reliance on automation: effects of error type, error distribution, age and experience
Sanchez, Julian; Rogers, Wendy A.; Fisk, Arthur D.; Rovira, Ericka
2015-01-01
An obstacle detection task supported by “imperfect” automation was used with the goal of understanding the effects of automation error types and age on automation reliance. Sixty younger and sixty older adults interacted with a multi-task simulation of an agricultural vehicle (i.e. a virtual harvesting combine). The simulator included an obstacle detection task and a fully manual tracking task. A micro-level analysis provided insight into the way reliance patterns change over time. The results indicated that there are distinct patterns of reliance that develop as a function of error type. A prevalence of automation false alarms led participants to under-rely on the automation during alarm states while over-relying on it during non-alarm states. Conversely, a prevalence of automation misses led participants to over-rely on automated alarms and under-rely on the automation during non-alarm states. Older adults adjusted their behavior according to the characteristics of the automation similarly to younger adults, although it took them longer to do so. The results of this study suggest the relationship between automation reliability and reliance depends on the prevalence of specific errors and on the state of the system. Understanding the effects of automation detection criterion settings on human-automation interaction can help designers of automated systems make predictions about human behavior and system performance as a function of the characteristics of the automation. PMID:25642142
The Attraction Effect Modulates Reward Prediction Errors and Intertemporal Choices.
Gluth, Sebastian; Hotaling, Jared M; Rieskamp, Jörg
2017-01-11
Classical economic theory contends that the utility of a choice option should be independent of other options. This view is challenged by the attraction effect, in which the relative preference between two options is altered by the addition of a third, asymmetrically dominated option. Here, we leveraged the attraction effect in the context of intertemporal choices to test whether both decisions and reward prediction errors (RPE) in the absence of choice violate the independence of irrelevant alternatives principle. We first demonstrate that intertemporal decision making is prone to the attraction effect in humans. In an independent group of participants, we then investigated how this affects the neural and behavioral valuation of outcomes using a novel intertemporal lottery task and fMRI. Participants' behavioral responses (i.e., satisfaction ratings) were modulated systematically by the attraction effect and this modulation was correlated across participants with the respective change of the RPE signal in the nucleus accumbens. Furthermore, we show that, because exponential and hyperbolic discounting models are unable to account for the attraction effect, recently proposed sequential sampling models might be more appropriate to describe intertemporal choices. Our findings demonstrate for the first time that the attraction effect modulates subjective valuation even in the absence of choice. The findings also challenge the prospect of using neuroscientific methods to measure utility in a context-free manner and have important implications for theories of reinforcement learning and delay discounting. Many theories of value-based decision making assume that people first assess the attractiveness of each option independently of each other and then pick the option with the highest subjective value. The attraction effect, however, shows that adding a new option to a choice set can change the relative value of the existing options, which is a violation of the independence of irrelevant alternatives principle.
Impact of shrinking measurement error budgets on qualification metrology sampling and cost
Sendelbach, Matthew; Sarig, Niv; Wakamoto, Koichi; Kim, Hyang Kyun (Helen); Isbester, Paul; Asano, Masafumi; Matsuki, Kazuto; Vaid, Alok; Osorio, Carmen; Archie, Chas
2014-04-01
When designing an experiment to assess the accuracy of a tool as compared to a reference tool, semiconductor metrologists are often confronted with the situation that they must decide on the sampling strategy before the measurements begin. This decision is usually based largely on the previous experience of the metrologist and the available resources, and not on the statistics that are needed to achieve acceptable confidence limits on the final result. This paper shows a solution to this problem, called inverse TMU analysis, by presenting statistically-based equations that allow the user to estimate the needed sampling after providing appropriate inputs, allowing him to make important "risk vs. reward" sampling, cost, and equipment decisions. Application examples using experimental data from scatterometry and critical dimension scanning electron microscope (CD-SEM) tools are used first to demonstrate how the inverse TMU analysis methodology can be used to make intelligent sampling decisions before the start of the experiment, and then to reveal why low sampling can lead to unstable and misleading results. A model is developed that can help an experimenter minimize the costs associated both with increased sampling and with making wrong decisions caused by insufficient sampling. A second cost model is described that reveals the inadequacy of current TEM (Transmission Electron Microscopy) sampling practices and the enormous costs associated with TEM sampling that is needed to provide reasonable levels of certainty in the result. These high costs reach into the tens of millions of dollars for TEM reference metrology as the measurement error budgets reach angstrom levels. The paper concludes with strategies on how to manage and mitigate these costs.
Li, Zipeng; Lai, Kelvin Yi-Tse; Chakrabarty, Krishnendu; Ho, Tsung-Yi; Lee, Chen-Yi
2017-12-01
Sample preparation in digital microfluidics refers to the generation of droplets with target concentrations for on-chip biochemical applications. In recent years, digital microfluidic biochips (DMFBs) have been adopted as a platform for sample preparation. However, there remain two major problems associated with sample preparation on a conventional DMFB. First, only a (1:1) mixing/splitting model can be used, leading to an increase in the number of fluidic operations required for sample preparation. Second, only a limited number of sensors can be integrated on a conventional DMFB; as a result, the latency for error detection during sample preparation is significant. To overcome these drawbacks, we adopt a next generation DMFB platform, referred to as micro-electrode-dot-array (MEDA), for sample preparation. We propose the first sample-preparation method that exploits the MEDA-specific advantages of fine-grained control of droplet sizes and real-time droplet sensing. Experimental demonstration using a fabricated MEDA biochip and simulation results highlight the effectiveness of the proposed sample-preparation method.
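For intuition about the (1:1) constraint the abstract criticizes, the conventional DMFB dilution scheme can be sketched as bit-scanning: the binary expansion of the target concentration determines whether each (1:1) mix step adds a raw-sample droplet or a buffer droplet. This is a hedged illustration of the conventional model only (the function name and parameters are invented here), not the MEDA-based method the paper proposes.

```python
def one_to_one_mix(target, bits=8):
    """Reach a target concentration (0 <= target < 1) using only (1:1)
    mix/split steps by scanning the binary expansion of the target from
    least- to most-significant bit. Returns (droplet sequence, achieved
    concentration); the achieved concentration is within 2**-bits of target."""
    scaled = round(target * (1 << bits))  # target rounded to 'bits' binary places
    conc = 0.0                            # start from pure buffer (concentration 0)
    seq = []
    for i in range(bits):
        drop = 1.0 if (scaled >> i) & 1 else 0.0  # 1.0 = raw sample, 0.0 = buffer
        conc = (conc + drop) / 2.0                # one (1:1) mixing operation
        seq.append(drop)
    return seq, conc

seq, conc = one_to_one_mix(0.375, bits=3)
print(seq, conc)  # prints [1.0, 1.0, 0.0] 0.375
```

Each extra bit of precision costs another mixing operation and more waste droplets, which is exactly the overhead that MEDA's fine-grained control of droplet sizes avoids.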
Effect of refractive error on temperament and character properties
Institute of Scientific and Technical Information of China (English)
Emine Kalkan Akcay; Fatih Canan; Huseyin Simavli; Derya Dal; Hacer Yalniz; Nagihan Ugurlu; Omer Gecici; Nurullah Cagil
2015-01-01
AIM: To determine the effect of refractive error on temperament and character properties using Cloninger's psychobiological model of personality. METHODS: Using the Temperament and Character Inventory (TCI), the temperament and character profiles of 41 participants with refractive errors (17 with myopia, 12 with hyperopia, and 12 with myopic astigmatism) were compared to those of 30 healthy control participants. Here, temperament comprised the traits of novelty seeking, harm avoidance, and reward dependence, while character comprised the traits of self-directedness, cooperativeness, and self-transcendence. RESULTS: Participants with refractive error showed significantly lower scores on purposefulness, cooperativeness, empathy, helpfulness, and compassion (P<0.05, P<0.01, P<0.05, P<0.05, and P<0.01, respectively). CONCLUSION: Refractive error might have a negative influence on some character traits, and different types of refractive error might be associated with different temperament and character properties. These personality traits may be implicated in the onset and/or perpetuation of refractive errors and may be a productive focus for psychotherapy.
Ironic Effects of Drawing Attention to Story Errors
Eslick, Andrea N.; Fazio, Lisa K.; Marsh, Elizabeth J.
2014-01-01
Readers learn errors embedded in fictional stories and use them to answer later general knowledge questions (Marsh, Meade, & Roediger, 2003). Suggestibility is robust and occurs even when story errors contradict well-known facts. The current study evaluated whether suggestibility is linked to participants’ inability to judge story content as correct versus incorrect. Specifically, participants read stories containing correct and misleading information about the world; some information was familiar (making error discovery possible), while some was more obscure. To improve participants’ monitoring ability, we highlighted (in red font) a subset of story phrases requiring evaluation; readers no longer needed to find factual information. Rather, they simply needed to evaluate its correctness. Readers were more likely to answer questions with story errors if they were highlighted in red font, even if they contradicted well-known facts. Though highlighting to-be-evaluated information freed cognitive resources for monitoring, an ironic effect occurred: Drawing attention to specific errors increased rather than decreased later suggestibility. Failure to monitor for errors, not failure to identify the information requiring evaluation, leads to suggestibility. PMID:21294039
2016-01-01
Background It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will therefore lead to an inflated prevalence of that condition if there are measurement errors. Methods and results A Monte Carlo simulation of the anthropometric assessment of children with malnutrition was conducted. Random errors of increasing magnitude were imposed upon the populations; these produced an increase in the standard deviation that became exponentially greater with the magnitude of the error. The potential magnitude of the resulting error in reported prevalence of malnutrition was compared with published international data and found to be sufficient to make a number of surveys, and the numerous reports and analyses that used these data, unreliable. Conclusions The effect of random error in public health surveys, and in the data upon which diagnostic cut-off points are derived to define "health", has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments, measurer selection, training and supervision, routine estimation of the likely magnitude of errors using standardization tests, use of the statistical likelihood of error to exclude data from analysis, and full reporting of these procedures in order to judge the reliability of survey reports. PMID:28030627
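The core mechanism is easy to reproduce: unbiased random error added to anthropometric z-scores widens the observed distribution, so the fraction falling below a fixed cut-off (e.g., z < -2) grows even though the true prevalence is unchanged. A minimal sketch with an illustrative reference population (not the survey data used in the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
true_z = rng.normal(0.0, 1.0, n)  # true z-scores of a healthy reference population
cutoff = -2.0                     # conventional malnutrition cut-off

for error_sd in [0.0, 0.25, 0.5, 1.0]:
    observed = true_z + rng.normal(0.0, error_sd, n)  # unbiased measurement error
    prevalence = 100.0 * np.mean(observed < cutoff)
    print(f"error SD {error_sd:.2f}: observed SD {observed.std():.2f}, "
          f"prevalence {prevalence:.2f}%")
```

With these settings the reported prevalence roughly triples at an error SD of 1 (from about 2.3% to about 7.9%), mirroring the abstract's point that modest random errors can more than double reported prevalence.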
Error Estimates for the Approximation of the Effective Hamiltonian
International Nuclear Information System (INIS)
Camilli, Fabio; Capuzzo Dolcetta, Italo; Gomes, Diogo A.
2008-01-01
We study approximation schemes for the cell problem arising in homogenization of Hamilton-Jacobi equations. We prove several error estimates concerning the rate of convergence of the approximation scheme to the effective Hamiltonian, both in the optimal control setting and in the calculus of variations setting.
Quality assurance and human error effects on the structural safety
International Nuclear Information System (INIS)
Bertero, R.; Lopez, R.; Sarrate, M.
1991-01-01
Statistical surveys show that the frequency of failure of structures is much larger than that expected by the codes. Evidence exists that human errors (especially during the design process) are the main cause of the difference between the failure probability admitted by codes and reality. In this paper, the attenuation of human error effects using tools of quality assurance is analyzed. In particular, the importance of the independent design review is highlighted, and different approaches are discussed. The experience from the Atucha II project, as well as US and German practice on independent design review, is summarized. (Author)
Directory of Open Access Journals (Sweden)
Marlina A. Turnodihardjo
2016-09-01
Full Text Available Patient safety is now a prominent issue in pharmaceutical care because adverse drug events are common in hospitalized patients. The majority of errors occur during prescribing, which is the first stage of the pharmacy process. Prescription errors occur most often in the Intensive Care Unit (ICU), owing to the severity of illness of its patients as well as the large number of medications prescribed. Pharmacist participation can reduce prescribing errors made by doctors. The main objective of this study was to determine the effect of pharmacist participation during physician rounds on prescription errors in the ICU. This study used a quasi-experimental design with one-group pre-post testing. A prospective study was conducted from April to May 2015 by screening 110 samples of orders. Screening was done to identify types of prescription errors. A prescription error was defined as an error in the prescription writing process: incomplete information or deviation from agreed standards. The Mann-Whitney test was used to analyze differences in prescribing errors. The results showed a difference between prescription errors before and during pharmacist participation (p<0.05). There was also a significant negative correlation between the frequency of pharmacist recommendations on drug ordering and prescription errors (r = -0.638; p<0.05). Pharmacist participation is thus one strategy that can be adopted to prevent prescribing errors, alongside implementation of collaboration between doctors and pharmacists. In other words, a supportive hospital management system that encourages interpersonal communication among health care professionals is needed.
Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E
2011-06-22
Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling
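The classical-error attenuation described above can be reproduced in a few lines: when independent noise is added to the true exposure, the fitted log-linear slope shrinks toward the null by roughly the reliability ratio var(X)/(var(X)+var(U)). A hedged sketch with synthetic data and a hand-rolled Poisson fit (not the Atlanta study's model; all parameters are illustrative):

```python
import numpy as np

def poisson_slope(x, y, iters=30):
    """Fit log E[y] = b0 + b1*x by Newton-Raphson and return b1."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.zeros(2)
    for _ in range(iters):
        mu = np.exp(X @ b)
        # Newton step for the Poisson log-likelihood: (X' diag(mu) X)^-1 X'(y - mu)
        b = b + np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    return b[1]

rng = np.random.default_rng(0)
n = 200_000
x_true = rng.normal(0.0, 0.4, n)        # true (log-scale) exposure
y = rng.poisson(np.exp(0.25 * x_true))  # counts with true log-risk-ratio slope 0.25

# Classical-type error: measured exposure = truth + independent noise
x_meas = x_true + rng.normal(0.0, 0.4, n)

print(f"slope, true exposure:    {poisson_slope(x_true, y):.3f}")
print(f"slope, classical error:  {poisson_slope(x_meas, y):.3f}  (attenuated ~50%)")
```

Repeating the experiment with Berkson-type error (truth = measurement + independent noise) shows the contrasting behaviour the abstract reports for the two error types.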
Smith, G. L.; Bess, T. D.; Minnis, P.
1983-01-01
The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.
Swan, B.; Laverdiere, M.; Yang, L.
2017-12-01
In the past five years, deep Convolutional Neural Networks (CNN) have been increasingly favored for computer vision applications due to their high accuracy and ability to generalize well in very complex problems; however, details of how they function, and in turn how they may be optimized, are still imperfectly understood. In particular, their complex and highly nonlinear network architecture, including many hidden layers and self-learned parameters, as well as its mathematical implications, presents open questions about how to effectively select training data. Without knowledge of the exact ways the model processes and transforms its inputs, intuition alone may fail as a guide to selecting highly relevant training samples. Working in the context of improving a CNN-based building extraction model used for the LandScan USA gridded population dataset, we have approached this problem by developing a semi-supervised, highly scalable approach to select training samples from a dataset of identified commission errors. Due to the large scope of this project, tens of thousands of potential samples could be derived from identified commission errors. To efficiently trim those samples down to a manageable and effective set for creating additional training samples, we statistically summarized the spectral characteristics of areas with high rates of commission errors at the image tile level and grouped these tiles using affinity propagation. Highly representative members of each commission error cluster were then used to select sites for training sample creation. The model will be incrementally re-trained with the new training data to allow for an assessment of how the addition of different types of samples affects model performance, such as precision and recall rates. By using quantitative analysis and data clustering techniques to select highly relevant training samples, we hope to improve model performance in a manner that is resource efficient, both in terms of training process
The effect of experimental sleep fragmentation on error monitoring.
Ko, Cheng-Hung; Fang, Ya-Wen; Tsai, Ling-Ling; Hsieh, Shulan
2015-01-01
Experimental sleep fragmentation (SF) is characterized by frequent brief arousals without reduced total sleep time and causes daytime sleepiness and impaired neurocognitive processes. This study explored the impact of SF on error monitoring. Thirteen adults underwent auditory stimuli-induced high-level (H) and low-level (L) SF nights. Flanker task performance and electroencephalogram data were collected in the morning following SF nights. Compared to LSF, HSF induced more arousals and stage N1 sleep, decreased slow wave sleep and rapid-eye-movement sleep (REMS), decreased subjective sleep quality, increased daytime sleepiness, and decreased amplitudes of P300 and error-related positivity (Pe). SF effects on N1 sleep were negatively correlated with SF effects on the Pe amplitude. Furthermore, as REMS was reduced by SF, post-error accuracy compensations were greatly reduced. In conclusion, attentional processes and error monitoring were impaired following one night of frequent sleep disruptions, even when total sleep time was not reduced. Copyright © 2014 Elsevier B.V. All rights reserved.
Negative Input for Grammatical Errors: Effects after a Lag of 12 Weeks
Saxton, Matthew; Backley, Phillip; Gallaway, Clare
2005-01-01
Effects of negative input for 13 categories of grammatical error were assessed in a longitudinal study of naturalistic adult-child discourse. Two-hour samples of conversational interaction were obtained at two points in time, separated by a lag of 12 weeks, for 12 children (mean age 2;0 at the start). The data were interpreted within the framework…
Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators
International Nuclear Information System (INIS)
Flammia, Steven T; Gross, David; Liu, Yi-Kai; Eisert, Jens
2012-01-01
Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. Firstly, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e. the sample complexity of tomography decreases with the rank. Secondly, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. In this paper, we present a new theoretical analysis of compressed tomography, based on the restricted isometry property for low-rank matrices. Using these tools, we obtain near-optimal error bounds for the realistic situation where the data contain noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators—the matrix Dantzig selector and the matrix Lasso—with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and describe a method for compressed quantum process tomography that works for processes with small Kraus rank and requires only Pauli eigenstate preparations
Graf, Alexandra C; Bauer, Peter
2011-06-30
We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can arise when the sample size and allocation rate to the treatment arms can be modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
Errors and Understanding: The Effects of Error-Management Training on Creative Problem-Solving
Robledo, Issac C.; Hester, Kimberly S.; Peterson, David R.; Barrett, Jamie D.; Day, Eric A.; Hougen, Dean P.; Mumford, Michael D.
2012-01-01
People make errors in their creative problem-solving efforts. The intent of this article was to assess whether error-management training would improve performance on creative problem-solving tasks. Undergraduates were asked to solve an educational leadership problem known to call for creative thought where problem solutions were scored for…
Perceived Effects of Prevalent Errors in Contract Documents on Construction Projects
Directory of Open Access Journals (Sweden)
Oluwaseun Sunday Dosumu
2018-03-01
Full Text Available One of the most highly rated causes of poor performance is errors in contract documents. The objectives of this study are to investigate the prevalent errors in contract documents and their effects on construction projects. A questionnaire survey and 51 case-study projects (mixed methods) were adopted for the study. The study also involved the use of the Delphi technique to extract the possible errors that may be contained in contract documents; this did not, however, constitute the empirical data for the study. The sample of the study consists of 985 consulting and 275 contracting firms that engaged in the construction of building projects above ground-floor level that were completed between 2013 and 2016. The two-stage stratified random sampling technique was adopted for the study. The data for the study were analysed with descriptive and inferential statistics (based on the Shapiro-Wilk test). The results of the study indicate that errors in contract documents were moderately prevalent. However, overmeasurement in bills of quantities was prevalent in private, institutional and management-procured projects. Traditionally procured projects contain 68% of the errors in contract documents among the procurement methods. Drawings contain the highest number of errors, followed by bills of quantities and specifications. The severe effects of errors in contract documents include structural collapse, deterioration of buildings and contractors' claims, among others. The results of the study imply that the management procurement method is the route to error minimization in developing countries, but it may need to be backed by law and guarded against overmeasurement.
Directory of Open Access Journals (Sweden)
Roberta Ronchi
Full Text Available During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual target location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person-perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right side of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, being exposed to such a constant visual error induced in the observer the illusion of "feeling" the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors.
Directory of Open Access Journals (Sweden)
Hugues Santin-Janin
Full Text Available BACKGROUND: Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., of density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. METHODOLOGY/PRINCIPAL FINDINGS: The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplify our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value and that the common practice of averaging a few replicates of population size estimates performs poorly at decreasing the bias of the classical estimator of the synchrony strength. CONCLUSION/SIGNIFICANCE: The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for
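The downward bias of the naive synchrony estimator is straightforward to demonstrate: independent sampling errors inflate each series' variance but contribute nothing to their covariance, so the zero-lag correlation shrinks. A minimal synthetic sketch (not the paper's state-space estimator; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
t = 5_000
# Two populations sharing a common environmental driver (a Moran effect)
common = rng.normal(0.0, 1.0, t)
pop1 = common + rng.normal(0.0, 0.5, t)  # true log-abundance series
pop2 = common + rng.normal(0.0, 0.5, t)
true_sync = np.corrcoef(pop1, pop2)[0, 1]

# Observed series: truth plus independent sampling error
obs1 = pop1 + rng.normal(0.0, 1.0, t)
obs2 = pop2 + rng.normal(0.0, 1.0, t)
naive_sync = np.corrcoef(obs1, obs2)[0, 1]

print(f"true synchrony:  {true_sync:.2f}")   # about 0.8
print(f"naive estimate:  {naive_sync:.2f}")  # about 0.44, biased toward zero
```

A state-space model removes this bias by treating the true series as latent states and estimating the sampling variance explicitly instead of absorbing it into the correlation.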
MEASUREMENT ERROR EFFECT ON THE POWER OF CONTROL CHART FOR ZERO-TRUNCATED POISSON DISTRIBUTION
Directory of Open Access Journals (Sweden)
Ashit Chakraborty
2013-09-01
Full Text Available Measurement error is the difference between the true value and the measured value of a quantity; it exists in practice and may considerably affect the performance of control charts in some cases. Measurement error variability has uncertainty which can arise from several sources. In this paper, we have studied the effect of these sources of variability on the power characteristics of the control chart and obtained values of the average run length (ARL) for the zero-truncated Poisson distribution (ZTPD). An expression for the power of the control chart for variable sample size under a standardized normal variate for the ZTPD is also derived.
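The link between the ZTPD and chart performance can be made concrete: the in-control average run length is the reciprocal of the per-sample signal probability computed from the truncated pmf. A minimal sketch (the control limit and parameter values here are illustrative, not those derived in the paper):

```python
import math

def ztpd_pmf(k, lam):
    """P(K = k) for the zero-truncated Poisson(lam), defined for k >= 1."""
    return lam ** k * math.exp(-lam) / (math.factorial(k) * (1.0 - math.exp(-lam)))

def in_control_arl(lam, ucl):
    """ARL of a chart that signals when the count exceeds the upper control
    limit: ARL = 1 / P(signal per sample)."""
    p_signal = 1.0 - sum(ztpd_pmf(k, lam) for k in range(1, ucl + 1))
    return 1.0 / p_signal

print(f"in-control ARL (lambda=2, UCL=6): {in_control_arl(2.0, 6):.0f}")
```

Measurement error effectively shifts or inflates the parameter lambda; re-evaluating the ARL at the error-contaminated value shows how the chart's power degrades, which is the effect the paper quantifies.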
Theunissen, Raf; Kadosh, Jesse S.; Allen, Christian B.
2015-06-01
Spatially varying signals are typically sampled by collecting uniformly spaced samples irrespective of the signal content. For signals with inhomogeneous information content, this leads to unnecessarily dense sampling in regions of low interest, insufficient sample density at important features, or both. A new adaptive sampling technique is presented that directs sample collection in proportion to local information content, capturing the short-period features adequately while sparsely sampling less dynamic regions. The proposed method incorporates a data-adapted sampling strategy on the basis of signal curvature, sample space-filling, variable experimental uncertainty and iterative improvement. Numerical assessment has indicated a reduction in the number of samples required to achieve a predefined overall uncertainty level while improving local accuracy for important features. The potential of the proposed method has been further demonstrated on the basis of Laser Doppler Anemometry experiments examining the wake behind a NACA0012 airfoil and the boundary-layer characterisation of a flat plate.
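The central idea, allocating samples in proportion to local information content, can be sketched by weighting candidate locations by signal curvature. This toy 1-D version ignores the space-filling and uncertainty terms of the full method, and every name in it is illustrative:

```python
import numpy as np

def curvature_weighted_samples(x, y, n_new, seed=0):
    """Draw n_new sample locations with probability proportional to the
    local curvature (|second derivative|) of a coarse signal estimate."""
    curvature = np.abs(np.gradient(np.gradient(y, x), x))
    weights = curvature + 1e-12  # keep flat regions barely reachable
    weights /= weights.sum()
    idx = np.random.default_rng(seed).choice(len(x), size=n_new, p=weights)
    return np.sort(x[idx])

x = np.linspace(0.0, 1.0, 200)
y = np.exp(-((x - 0.3) / 0.02) ** 2)  # sharp feature near x = 0.3
pts = curvature_weighted_samples(x, y, 100)
# Nearly all new samples land on the short-period feature
print(f"fraction within 0.1 of the feature: {np.mean(np.abs(pts - 0.3) < 0.1):.2f}")
```

In practice the weights would be re-estimated after each batch of measurements, giving the iterative improvement the abstract describes.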
Bakker, Marjan; Wicherts, Jelte M
2014-09-01
In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
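The inflation reported above is easy to reproduce under the null hypothesis: with skewed sum scores, a Z > 2 rule removes observations from one tail only, and doing so separately in each group pushes the t test past its nominal level. A rough simulation sketch (binomial scores as a stand-in for a short, difficult test; normal critical value as an approximation):

```python
import numpy as np

rng = np.random.default_rng(7)

def type1_rate(remove_outliers, reps=4000, n=50):
    """Monte Carlo Type I error rate of a two-sample t test under H0."""
    rejections = 0
    for _ in range(reps):
        # Both groups drawn from the same skewed score distribution
        a = rng.binomial(8, 0.2, n).astype(float)
        b = rng.binomial(8, 0.2, n).astype(float)
        if remove_outliers:
            # Common practice: drop scores more than 2 SDs from the group mean
            a = a[np.abs(a - a.mean()) <= 2 * a.std(ddof=1)]
            b = b[np.abs(b - b.mean()) <= 2 * b.std(ddof=1)]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        rejections += abs(a.mean() - b.mean()) / se > 1.98  # ~5% two-sided cut
    return rejections / reps

print(f"Type I error, no removal:   {type1_rate(False):.3f}")
print(f"Type I error, Z>2 removal:  {type1_rate(True):.3f}")
```

The rank-based and robust alternatives recommended in the article avoid this inflation because they do not rely on data-dependent removal decisions.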
Calculation error of collective effective dose of external exposure during works at 'Shelter' object
International Nuclear Information System (INIS)
Batij, V.G.; Derengovskij, V.V.; Kochnev, N.A.; Sizov, A.A.
2001-01-01
Collective effective dose (CED) error assessment is the most important task for optimal planning of works under 'Shelter' object conditions. The main components of CED error are as follows: the error in the conversion factor from exposure dose to equivalent dose; the error in determining working hours under 'Shelter' object conditions; the error in determining dose rates at workplaces; and the additional CED error introduced by shielding of workplaces
Quantification of the effects of dependence on human error probabilities
International Nuclear Information System (INIS)
Bell, B.J.; Swain, A.D.
1980-01-01
In estimating the probabilities of human error in the performance of a series of tasks in a nuclear power plant, the situation-specific characteristics of the series must be considered. A critical factor not to be overlooked in this estimation is the dependence or independence that pertains to any of the several pairs of task performances. In discussing the quantification of the effects of dependence, the event tree symbology described will be used. In any series of tasks, the only dependence considered for quantification in this document will be that existing between the task of interest and the immediately preceding task. Tasks performed earlier in the series may have some effect on the end task, but this effect is considered negligible
Tyo, J Scott; LaCasse, Charles F; Ratliff, Bradley M
2009-10-15
Microgrid polarimeters operate by integrating a focal plane array with an array of micropolarizers. The Stokes parameters are estimated by comparing polarization measurements from pixels in a neighborhood around the point of interest. The main drawback is that the measurements used to estimate the Stokes vector are made at different locations, leading to a false polarization signature owing to instantaneous field-of-view (IFOV) errors. We demonstrate for the first time, to our knowledge, that spatially band limited polarization images can be ideally reconstructed with no IFOV error by using a linear system framework.
Directory of Open Access Journals (Sweden)
Semanur Adalı
2017-09-01
Full Text Available In this study, the ethical dimensions of accounting professionals related to accounting errors and fraud were examined. Firstly, general and technical information about accounting was provided. Then, terminology on error, fraud, and ethics in accounting was discussed. The study also included recent statistics about accounting errors and fraud, as well as a literature review. As the research methodology, a questionnaire was distributed to 36 accounting professionals residing in the city of Edirne, Turkey. The collected data were then entered into the SPSS package program for analysis. The study revealed very important results. Accounting professionals think that accounting chambers do not organize enough seminars/conferences on errors and fraud. They also believe that the supervision and disciplinary boards of professional accounting chambers only partially fulfill their responsibilities. The attitude of professional accounting chambers towards errors, fraud, and ethics is considered neither strict nor lenient, yet most accounting professionals are aware of colleagues who have received disciplinary penalties. The most important and effective tool to prevent errors and fraud is indicated as external audit, but internal audit and internal control are valued as well. According to accounting professionals, most errors occur due to incorrect data received from clients and mistakes made during recording. Fraud is generally committed in order to obtain credit from banks and to benefit the organization by not showing the real situation of the firm. Finally, accounting professionals state that being honest, trustworthy, and impartial is the basis of the accounting profession and that accountants must adhere to ethical rules.
The Effect of Type-1 Error on Deterrence
DEFF Research Database (Denmark)
Lando, Henrik; Mungan, Murat C.
2018-01-01
According to a conventional view, type-1 error (wrongful conviction) is as detrimental to deterrence as type-2 error (wrongful acquittal), because type-1 error lowers the pay-off from acting within the law. This view has led to the claim that the pro-defendant bias of criminal procedure, and its ...
Effective connectivity of visual word recognition and homophone orthographic errors
Guàrdia-Olmos, Joan; Peró-Cebollero, Maribel; Zarabozo-Hurtado, Daniel; González-Garrido, Andrés A.; Gudayol-Ferré, Esteve
2015-01-01
The study of orthographic errors in a transparent language like Spanish is an important topic in relation to writing acquisition. The development of neuroimaging techniques, particularly functional magnetic resonance imaging (fMRI), has enabled the study of such relationships between brain areas. The main objective of the present study was to explore the patterns of effective connectivity by processing pseudohomophone orthographic errors among subjects with high and low spelling skills. Two groups of 12 Mexican subjects each, matched by age, were formed based on their results in a series of ad hoc spelling-related out-scanner tests: a high spelling skills (HSSs) group and a low spelling skills (LSSs) group. During the fMRI session, two experimental tasks were applied (spelling recognition task and visuoperceptual recognition task). Regions of Interest and their signal values were obtained for both tasks. Based on these values, structural equation models (SEMs) were obtained for each group of spelling competence (HSS and LSS) and task through maximum likelihood estimation, and the model with the best fit was chosen in each case. Likewise, dynamic causal models (DCMs) were estimated for all the conditions across tasks and groups. The HSS group’s SEM results suggest that, in the spelling recognition task, the right middle temporal gyrus, and, to a lesser extent, the left parahippocampal gyrus receive most of the significant effects, whereas the DCM results in the visuoperceptual recognition task show less complex effects, but still congruent with the previous results, with an important role in several areas. In general, these results are consistent with the major findings in partial studies about linguistic activities but they are the first analyses of statistical effective brain connectivity in transparent languages. PMID:26042070
The Effects of Discrete-Trial Training Commission Errors on Learner Outcomes: An Extension
Jenkins, Sarah R.; Hirst, Jason M.; DiGennaro Reed, Florence D.
2015-01-01
We conducted a parametric analysis of treatment integrity errors during discrete-trial training and investigated the effects of three integrity conditions (0, 50, or 100 % errors of commission) on performance in the presence and absence of programmed errors. The presence of commission errors impaired acquisition for three of four participants.…
Broberg, Per
2013-07-19
One major concern with adaptive designs, such as the sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is however proven that when observations follow a normal distribution and the interim result shows promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit an increase. The main result states that for normally distributed observations raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
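The "promise" criterion discussed in this abstract is usually stated via conditional power under the currently observed trend. A minimal sketch, assuming normally distributed data and the standard Brownian-motion formulation of the test statistic (this is the textbook formula, not the paper's exact derivation):

```python
import math
from statistics import NormalDist

def conditional_power(z_t, t, alpha=0.025):
    """Conditional power at information fraction t given interim statistic
    z_t, assuming the currently observed trend continues (one-sided alpha)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1.0 - alpha)
    b = z_t * math.sqrt(t)            # Brownian value B(t) of the score process
    drift = (b / t) * (1.0 - t)       # expected further drift under the trend
    return 1.0 - nd.cdf((z_alpha - b - drift) / math.sqrt(1.0 - t))

# At t = 0.5 the 50% conditional-power boundary is z_t = z_alpha / (2*sqrt(0.5))
z_boundary = NormalDist().inv_cdf(0.975) / (2 * math.sqrt(0.5))
print(round(conditional_power(z_boundary, 0.5), 3))  # 0.5
```

Expressing the rule through the interim test statistic `z_t`, as the article advocates, makes it directly usable at a data monitoring meeting: one checks whether the observed `z_t` exceeds the boundary rather than recomputing conditional power from scratch.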
Directory of Open Access Journals (Sweden)
Finch Stephen J
2005-04-01
Full Text Available Abstract Background Phenotype error causes reduction in power to detect genetic association. We present a quantification of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor Series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected as a case becomes infinitely large while the cost of misclassifying an affected as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
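The qualitative finding above, in particular the exploding cost of misclassifying unaffected subjects as cases at low prevalence, can be illustrated with a simplified mixture model and a 1-df two-proportion approximation to the Pearson test. All frequencies, sample sizes, and misclassification rates below are invented for illustration; this is not the paper's exact non-centrality formula:

```python
import math
from statistics import NormalDist

def observed_freqs(p_case, p_ctrl, prevalence, theta, phi):
    """Observed allele frequencies when a true affected is mislabelled a
    control with probability theta, and a true unaffected is mislabelled
    a case with probability phi (illustrative mixture model)."""
    k = prevalence
    # composition of the labelled groups
    w_case = k * (1 - theta) / (k * (1 - theta) + (1 - k) * phi)
    w_ctrl = (1 - k) * (1 - phi) / ((1 - k) * (1 - phi) + k * theta)
    q_case = w_case * p_case + (1 - w_case) * p_ctrl
    q_ctrl = w_ctrl * p_ctrl + (1 - w_ctrl) * p_case
    return q_case, q_ctrl

def power_two_prop(q1, q0, n_per_group, alpha=0.05):
    """Asymptotic power of the two-proportion z test (1-df chi-square)."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)
    se = math.sqrt(q1 * (1 - q1) / n_per_group + q0 * (1 - q0) / n_per_group)
    ncp = abs(q1 - q0) / se   # square this for the chi-square non-centrality
    return 1 - nd.cdf(z_a - ncp)

for prevalence in (0.10, 0.01, 0.001):
    q1, q0 = observed_freqs(0.30, 0.20, prevalence, theta=0.0, phi=0.05)
    print(prevalence, round(power_two_prop(q1, q0, 500), 3))
    # power falls toward alpha as prevalence -> 0: labelled "cases"
    # become almost entirely misclassified controls
```

With a fixed 5% unaffected-to-case error rate, the labelled case group is diluted more and more as the disease gets rarer, which is exactly why the misclassification cost diverges in the paper's limit.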
1983-08-01
The effect of monetary punishment on error evaluation in a Go/No-go task.
Maruo, Yuya; Sommer, Werner; Masaki, Hiroaki
2017-10-01
Little is known about the effects of the motivational significance of errors in Go/No-go tasks. We investigated the impact of monetary punishment on the error-related negativity (ERN) and error positivity (Pe) for both overt errors and partial errors, that is, no-go trials without overt responses but with covert muscle activities. We compared high and low punishment conditions where errors were penalized with 50 or 5 yen, respectively, and a control condition without monetary consequences for errors. Because we hypothesized that the partial-error ERN might overlap with the no-go N2, we compared ERPs between correct rejections (i.e., successful no-go trials) and partial errors in no-go trials. We also expected that Pe amplitudes should increase with the severity of the penalty for errors. Mean error rates were significantly lower in the high punishment than in the control condition. Monetary punishment did not influence the overt-error ERN and partial-error ERN in no-go trials. The ERN in no-go trials did not differ between partial errors and overt errors; in addition, ERPs for correct rejections in no-go trials without partial errors were of the same size as in go trials. Therefore, the overt-error ERN and the partial-error ERN may share similar error monitoring processes. Monetary punishment increased Pe amplitudes for overt errors, suggesting enhanced error evaluation processes. For partial errors an early Pe was observed, presumably representing inhibition processes. Interestingly, even partial errors elicited the Pe, suggesting that covert erroneous activities could be detected in Go/No-go tasks. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Sample sizes to control error estimates in determining soil bulk density in California forest soils
Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber
2016-01-01
Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observation (n), for predicting the soil bulk density with a...
Sample positioning effects in x-ray spectrometry
International Nuclear Information System (INIS)
Carpenter, D.
Instrument error due to variation in sample position in a crystal x-ray spectrometer can easily exceed the total instrumental error. Lack of reproducibility in sample position in the x-ray optics is the single largest source of system error. The factors that account for sample positioning error are described, and many of the details of flat crystal x-ray optics are discussed
DEFF Research Database (Denmark)
Picchini, Umberto; Forman, Julie Lyng
2016-01-01
a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm......In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers...... applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter resulting two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large size protein data. The suggested methodology is fairly general...
Effective training based on the cause analysis of operation errors
International Nuclear Information System (INIS)
Fujita, Eimitsu; Noji, Kunio; Kobayashi, Akira.
1991-01-01
The authors have investigated typical error types through their training experience and analyzed their causes. Error types observed in simulator training are: (1) lack of knowledge or inability to apply it to actual operation; (2) defective mastery of skill-based operation; (3) rote or stereotyped operation; (4) mind-set or lack of redundant verification; (5) lack of teamwork; (6) misjudgement of overall plant conditions by the operation chief, who directs a reactor operator and a turbine operator in the training. The paper describes training methods used in Japan by BWR utilities to overcome these error types
Sequential effects in pigeon delayed matching-to-sample performance.
Roitblat, H L; Scopatz, R A
1983-04-01
Pigeons were tested in a three-alternative delayed matching-to-sample task in which second-choices were permitted following first-choice errors. Sequences of responses both within and between trials were examined in three experiments. The first experiment demonstrates that the sample information contained in first-choice errors is not sufficient to account for the observed pattern of second choices. This result implies that second-choices following first-choice errors are based on a second examination of the contents of working memory. Proactive interference was found in the second experiment in the form of a dependency, beyond that expected on the basis of trial independent response bias, of first-choices from one trial on the first-choice emitted on the previous trial. Samples from the previous trial were not found to exert a significant influence on later trials. The magnitude of the intertrial association (Experiment 3) did not depend on the duration of the intertrial interval. In contrast, longer intertrial intervals and longer sample durations did facilitate choice accuracy, by strengthening the association between current samples and choices. These results are incompatible with a trace-decay and competition model; they suggest strongly that multiple influences act simultaneously and independently to control delayed matching-to-sample responding. These multiple influences include memory for the choice occurring on the previous trial, memory for the sample, and general effects of trial spacing.
Effect of video decoder errors on video interpretability
Young, Darrell L.
2014-06-01
The advancement in video compression technology can result in more sensitivity to bit errors. Bit errors can propagate, causing sustained loss of interpretability. In the worst case, the decoder "freezes" until it can re-synchronize with the stream. Detection of artifacts enables downstream processes to avoid corrupted frames. A simple template approach to detect block stripes and a more advanced cascade approach to detect compression artifacts were shown to correlate with the presence of artifacts and decoder messages.
Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)
2000-01-01
Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics, arising 1) from evolution of the official algorithms used to process the data, and 2) from differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
Directory of Open Access Journals (Sweden)
Githure John I
2009-09-01
Full Text Available Abstract Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecologically sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected during July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations, distributions, and to generate global autocorrelation statistics from the ecologically sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e., negative binomial regression). The eigenfunction
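The autocorrelation analysis described above builds on the global Moran's I statistic. A minimal sketch of the statistic itself; the toy transect values and binary adjacency weights are invented for illustration, not the Mwea data:

```python
def morans_i(values, weights):
    """Global Moran's I.  `weights` maps (i, j) index pairs to spatial
    weights w_ij (a simple binary adjacency here)."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_total = sum(weights.values())
    cross = sum(w * dev[i] * dev[j] for (i, j), w in weights.items())
    ss = sum(d * d for d in dev)
    return (n / w_total) * (cross / ss)

# 5 habitats along a transect, adjacent sites as neighbours (both directions)
values = [1, 2, 3, 4, 5]                       # e.g. larval counts
weights = {(i, i + 1): 1.0 for i in range(4)}
weights.update({(i + 1, i): 1.0 for i in range(4)})
print(morans_i(values, weights))  # 0.5: positive spatial autocorrelation
```

Values near +1 indicate clustering of similar habitats, values near 0 spatial randomness; the decomposition into eigenvector map-pattern components mentioned in the abstract refines this single global number into local structure.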
DEFF Research Database (Denmark)
Ji, Hua; Pu, Minhao; Hu, Hao
2011-01-01
This paper presents the experimental demonstrations of using a pure nanoengineered silicon waveguide for 1.28 Tb/s serial data optical waveform sampling and 1.28 Tb/s–10 Gb/s error free demultiplexing. The 330-fs pulses are resolved in each 780-fs time slot in waveform sampling. Error...
Effects of human errors on the determination of surveillance test interval
International Nuclear Information System (INIS)
Chung, Dae Wook; Koo, Bon Hyun
1990-01-01
This paper incorporates the effects of human error relevant to the periodic test on the unavailability of the safety system as well as the component unavailability. Two types of possible human error during the test are considered. One is the possibility that a good safety system is inadvertently left in a bad state after the test (Type A human error) and the other is the possibility that a bad safety system is undetected upon the test (Type B human error). An event tree model is developed for the steady-state unavailability of the safety system to determine the effects of human errors on the component unavailability and the test interval. We perform the reliability analysis of the safety injection system (SIS) by applying the aforementioned two types of human error to the safety injection pumps. Results of various sensitivity analyses show that: 1) the appropriate test interval decreases and steady-state unavailability increases as the probabilities of both types of human errors increase, and they are far more sensitive to Type A human error than to Type B; and 2) the SIS unavailability increases slightly as the probability of Type B human error increases, and significantly as the probability of Type A human error increases. Therefore, to avoid underestimation, the effects of human error should be incorporated in the system reliability analysis which aims at the relaxation of surveillance test intervals, and Type A human error has a more important effect on the unavailability and the surveillance test interval
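The asymmetry between the two error types can be made concrete with a first-order unavailability sketch. The formula below is a simplified textbook-style approximation, not the paper's event-tree model, and λ, T, and the error probabilities are arbitrary illustrative numbers:

```python
def mean_unavailability(lam, T, p_a=0.0, p_b=0.0):
    """Mean unavailability over a test interval T for a standby component
    with failure rate lam (per hour), Type A error probability p_a (good
    component left in a bad state after the test) and Type B error
    probability p_b (failed component not detected by the test).
    First-order sketch:
      - random failures contribute lam*T/2 on average,
      - a Type A error leaves the train down for a whole interval,
      - a Type B error carries an undetected failure (prob ~ lam*T)
        through one more interval."""
    return lam * T / 2 + p_a + p_b * lam * T

base = mean_unavailability(1e-5, 720)
print(base)                                      # ~3.6e-3 from random failures
print(mean_unavailability(1e-5, 720, p_a=1e-3))  # Type A adds its full p_a
print(mean_unavailability(1e-5, 720, p_b=1e-3))  # Type B adds only p_b*lam*T
```

In this toy model the sensitivity to `p_a` is 1 while the sensitivity to `p_b` is only λT, which mirrors the paper's finding that Type A error dominates the unavailability and the admissible test interval.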
Undesirable effects of covariance matrix techniques for error analysis
International Nuclear Information System (INIS)
Seibert, D.
1994-01-01
Regression with χ2 constructed from covariance matrices should not be used for some combinations of covariance matrices and fitting functions. Using the technique for unsuitable combinations can amplify systematic errors. This amplification is uncontrolled, and can produce arbitrarily inaccurate results that might not be ruled out by a χ2 test. In addition, this technique can give incorrect (artificially small) errors for fit parameters. I give a test for this instability and a more robust (but computationally more intensive) method for fitting correlated data
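The instability has a classic two-point illustration, often discussed as Peelle's Pertinent Puzzle: two consistent measurements sharing a fully correlated normalization error, fitted by a constant via covariance-matrix χ2, yield a best fit below both data points. The numbers below are the usual illustrative ones, not from this paper:

```python
def fit_constant(y, cov):
    """Best-fit constant c minimising chi2 = (y-c)^T V^{-1} (y-c), two points."""
    (a, b), (c_, d) = cov
    det = a * d - b * c_
    inv = [[d / det, -b / det], [-c_ / det, a / det]]
    num = sum(inv[i][j] * y[j] for i in range(2) for j in range(2))  # 1^T V^-1 y
    den = sum(inv[i][j] for i in range(2) for j in range(2))         # 1^T V^-1 1
    return num / den

y = [1.0, 1.5]        # two measurements of the same quantity
stat = [0.10, 0.15]   # 10% independent (statistical) errors
f = 0.20              # 20% fully correlated normalisation error
cov = [[(stat[i] ** 2 if i == j else 0.0) + f * f * y[i] * y[j]
        for j in range(2)] for i in range(2)]
print(round(fit_constant(y, cov), 3))  # 0.882 -- below BOTH measurements
```

The off-diagonal terms let the fit "trade" the common normalization against the data, pulling the answer outside the range of the measurements, which is exactly the uncontrolled amplification the abstract warns about.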
Effects of learning climate and registered nurse staffing on medication errors.
Chang, YunKyung; Mark, Barbara
2011-01-01
Despite increasing recognition of the significance of learning from errors, little is known about how learning climate contributes to error reduction. The purpose of this study was to investigate whether learning climate moderates the relationship between error-producing conditions and medication errors. A cross-sectional descriptive study was done using data from 279 nursing units in 146 randomly selected hospitals in the United States. Error-producing conditions included work environment factors (work dynamics and nurse mix), team factors (communication with physicians and nurses' expertise), personal factors (nurses' education and experience), patient factors (age, health status, and previous hospitalization), and medication-related support services. Poisson models with random effects were used with the nursing unit as the unit of analysis. A significant negative relationship was found between learning climate and medication errors. It also moderated the relationship between nurse mix and medication errors: When learning climate was negative, having more registered nurses was associated with fewer medication errors. However, no relationship was found between nurse mix and medication errors at either positive or average levels of learning climate. Learning climate did not moderate the relationship between work dynamics and medication errors. The way nurse mix affects medication errors depends on the level of learning climate. Nursing units with fewer registered nurses and frequent medication errors should examine their learning climate. Future research should be focused on the role of learning climate as related to the relationships between nurse mix and medication errors.
Mao, Jiening; Gao, Zhen; Wu, Yongpeng; Alouini, Mohamed-Slim
2018-01-01
Hybrid precoding design is challenging for millimeter-wave (mmWave) massive MIMO. Most prior hybrid precoding schemes are designed to maximize the sum spectral efficiency (SSE), while seldom investigate the bit-error-rate (BER). Therefore, this letter designs an over-sampling codebook (OSC)-based hybrid minimum sum-mean-square-error (min-SMSE) precoding to optimize the BER. Specifically, given the effective baseband channel consisting of the real channel and analog precoding, we first design the digital precoder/combiner based on min-SMSE criterion to optimize the BER. To further reduce the SMSE between the transmit and receive signals, we propose an OSC-based joint analog precoder/combiner (JAPC) design. Simulation results show that the proposed scheme can achieve the better performance than its conventional counterparts.
Effect of error propagation of nuclide number densities on Monte Carlo burn-up calculations
International Nuclear Information System (INIS)
Tohjoh, Masayuki; Endo, Tomohiro; Watanabe, Masato; Yamamoto, Akio
2006-01-01
As a result of improvements in computer technology, the continuous energy Monte Carlo burn-up calculation has received attention as a good candidate for an assembly calculation method. However, the results of Monte Carlo calculations contain statistical errors. The results of Monte Carlo burn-up calculations, in particular, include statistical errors propagated through the variance of the nuclide number densities. Therefore, if statistical error alone is evaluated, the errors in Monte Carlo burn-up calculations may be underestimated. To clarify this effect of error propagation on Monte Carlo burn-up calculations, we propose an equation that can predict the variance of nuclide number densities after burn-up calculations, and we verify this equation using a large number of Monte Carlo burn-up calculations in which only the initial random numbers are changed. We also verify the effect of the number of burn-up calculation points on Monte Carlo burn-up calculations. From these verifications, we estimate the errors in Monte Carlo burn-up calculations including both statistical and propagated errors. Finally, we clarify the effects of error propagation on Monte Carlo burn-up calculations by comparing statistical errors alone versus both statistical and propagated errors. The results reveal that the effects of error propagation on the Monte Carlo burn-up calculations of an 8 x 8 BWR fuel assembly are low up to 60 GWd/t
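The propagation mechanism can be mimicked with a toy one-nuclide depletion chain in which each step's reaction rate carries independent statistical noise; comparing the spread of the final number density after 1 step and after 10 steps exposes the accumulated (propagated) component. All rates and error magnitudes are invented, and this is a sketch of the mechanism, not of any lattice code:

```python
import math
import random

def burnup_chain(n0, step_rate, rel_err, steps, rng):
    """Deplete one nuclide over `steps` burn-up steps; each step's lumped
    reaction rate (sigma*phi*dt) carries independent Monte Carlo noise of
    relative size rel_err."""
    n = n0
    for _ in range(steps):
        noisy_rate = step_rate * (1.0 + rng.gauss(0.0, rel_err))
        n *= math.exp(-noisy_rate)
    return n

def relative_spread(steps, replicas=2000, rel_err=0.01):
    """Relative std. dev. of the final number density over many replicas
    that differ only in their random numbers (cf. the paper's procedure)."""
    rng = random.Random(7)
    finals = [burnup_chain(1.0, 0.05, rel_err, steps, rng)
              for _ in range(replicas)]
    m = sum(finals) / replicas
    return math.sqrt(sum((x - m) ** 2 for x in finals) / replicas) / m

print(relative_spread(1))    # ~ one step's statistical error
print(relative_spread(10))   # ~ sqrt(10) larger: errors accumulate over steps
```

A single end-of-life statistical error estimate would miss the roughly √(steps) growth seen here, which is the underestimation the abstract warns against.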
Efendiev, Y.
2009-11-01
The Markov chain Monte Carlo (MCMC) is a rigorous sampling method to quantify uncertainty in subsurface characterization. However, the MCMC usually requires many flow and transport simulations in evaluating the posterior distribution and can be computationally expensive for fine-scale geological models. We propose a methodology that combines coarse- and fine-scale information to improve the efficiency of MCMC methods. The proposed method employs off-line computations for modeling the relation between coarse- and fine-scale error responses. This relation is modeled using nonlinear functions with prescribed error precisions which are used in efficient sampling within the MCMC framework. We propose a two-stage MCMC where inexpensive coarse-scale simulations are performed to determine whether or not to run the fine-scale (resolved) simulations. The latter is determined on the basis of a statistical model developed off line. The proposed method is an extension of the approaches considered earlier where linear relations are used for modeling the response between coarse-scale and fine-scale models. The approach considered here does not rely on the proximity of approximate and resolved models and can employ much coarser and more inexpensive models to guide the fine-scale simulations. Numerical results for three-phase flow and transport demonstrate the advantages, efficiency, and utility of the method for uncertainty assessment in the history matching. Copyright 2009 by the American Geophysical Union.
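The two-stage screening described above can be sketched as a delayed-acceptance Metropolis algorithm: a cheap coarse likelihood filters proposals, and the expensive fine likelihood is evaluated only for survivors, with a second-stage ratio that corrects for the coarse approximation so the chain still targets the fine posterior. The toy Gaussian targets below stand in for the coarse- and fine-scale flow simulations:

```python
import math
import random

def two_stage_mcmc(log_fine, log_coarse, n_iter=20000, step=1.0, seed=1):
    """Delayed-acceptance Metropolis (toy sketch of a two-stage MCMC)."""
    rng = random.Random(seed)
    x = 0.0
    fine_evals = 0
    samples = []
    for _ in range(n_iter):
        y = x + rng.gauss(0.0, step)
        # Stage 1: coarse-model screening (cheap)
        if math.log(rng.random()) < log_coarse(y) - log_coarse(x):
            # Stage 2: fine-model correction (expensive, rarely wasted)
            fine_evals += 1
            log_a2 = (log_fine(y) - log_fine(x)) - (log_coarse(y) - log_coarse(x))
            if math.log(rng.random()) < log_a2:
                x = y
        samples.append(x)
    return samples, fine_evals

# Toy example: fine target N(0, 1); coarse approximation N(0, 1.5^2)
fine = lambda v: -0.5 * v * v
coarse = lambda v: -0.5 * v * v / 2.25
samples, fine_evals = two_stage_mcmc(fine, coarse)
mean = sum(samples) / len(samples)
print(round(mean, 2), fine_evals / len(samples))
# mean near 0; the fine model is evaluated on only a fraction of iterations
```

Proposals rejected at the first stage never touch the fine model, which is where the computational saving for resolved subsurface simulations comes from; the second-stage ratio is what keeps the stationary distribution exact.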
Cuadrado-Cenzual, M A; García Briñón, M; de Gracia Hills, Y; González Estecha, M; Collado Yurrita, L; de Pedro Moro, J A; Fernández Pérez, C; Arroyo Fernández, M
2015-01-01
Errors in identifying patients and biological samples are among the problems with the highest risk of causing an adverse event for the patient. The aims were to detect and analyse the causes of patient identification errors in analytical requests (PIEAR) from emergency departments, and to develop improvement strategies. A process and protocol were designed, to be followed by all professionals involved in the requesting and performing of laboratory tests. Evaluation and monitoring indicators of PIEAR were determined before and after the implementation of these improvement measures (years 2010-2014). A total of 316 PIEAR were detected among 483,254 emergency service requests during the study period, representing a mean of 6.80/10,000 requests. Patient identification failure was the most frequent error type in all the 6-monthly periods assessed, with a statistically significant difference between periods. We must continue working with this strategy, promoting a culture of safety among all the professionals involved, and trying to achieve the goal that 100% of the analytical requests and samples are properly identified. Copyright © 2015 SECA. Published by Elsevier España. All rights reserved.
Effect of neoclassical toroidal viscosity on error-field penetration thresholds in tokamak plasmas.
Cole, A J; Hegna, C C; Callen, J D
2007-08-10
A model for field-error penetration is developed that includes nonresonant as well as the usual resonant field-error effects. The nonresonant components cause a neoclassical toroidal viscous torque that keeps the plasma rotating at a rate comparable to the ion diamagnetic frequency. The new theory is used to examine resonant error-field penetration threshold scaling in Ohmic tokamak plasmas. Compared to previous theoretical results, we find the plasma is less susceptible to error-field penetration and locking, by a factor that depends on the nonresonant error-field amplitude.
Effects of Measurement Error on the Output Gap in Japan
Koichiro Kamada; Kazuto Masuda
2000-01-01
Potential output is the largest amount of products that can be produced by fully utilizing available labor and capital stock; the output gap is defined as the discrepancy between actual and potential output. If data on production factors contain measurement errors, total factor productivity (TFP) cannot be estimated accurately from the Solow residual (i.e., the portion of output that is not attributable to labor and capital inputs). This may give rise to distortions in the estimation of potent...
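As a minimal illustration of the mechanism (the numbers and the Cobb-Douglas capital share α = 0.3 are hypothetical), mismeasured capital feeds directly into the Solow residual and hence into measured TFP:

```python
import math

ALPHA = 0.3  # assumed Cobb-Douglas capital share (illustrative)

def solow_residual(Y, K, L, alpha=ALPHA):
    """Log TFP: the part of (log) output not attributable to capital and labor."""
    return math.log(Y) - alpha * math.log(K) - (1 - alpha) * math.log(L)

# A toy economy whose true TFP is 1.05.
A_true, K, L = 1.05, 200.0, 100.0
Y = A_true * K ** ALPHA * L ** (1 - ALPHA)

lnA_clean = solow_residual(Y, K, L)          # recovers log(1.05)
lnA_noisy = solow_residual(Y, 1.10 * K, L)   # capital stock over-measured by 10%

# The capital measurement error distorts measured TFP by -alpha * log(1.10).
bias = lnA_noisy - lnA_clean
```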
Effects of errors on the dynamic aperture of the Advanced Photon Source storage ring
International Nuclear Information System (INIS)
Bizek, H.; Crosbie, E.; Lessner, E.; Teng, L.; Wirsbinski, J.
1991-01-01
The individual tolerance limits for alignment errors and magnet fabrication errors in the 7-GeV Advanced Photon Source storage ring are determined by computer-simulated tracking. Limits are established for dipole strength and roll errors, quadrupole strength and alignment errors, sextupole strength and alignment errors, as well as higher order multipole strengths in dipole and quadrupole magnets. The effects of girder misalignments on the dynamic aperture are also studied. Computer simulations are obtained with the tracking program RACETRACK, with errors introduced from a user-defined Gaussian distribution, truncated at ±5 standard deviation units. For each error, the average and rms spread of the stable amplitudes are determined for ten distinct machines, defined as ten different seeds to the random distribution, and for five distinct initial directions of the tracking particle. 4 refs., 4 figs., 1 tab
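The error model described, a Gaussian truncated at ±5 standard deviation units, can be sampled by simple redrawing. This sketch is generic (the RACETRACK internals are not reproduced here), and the 0.1 mm rms alignment error is assumed purely for illustration:

```python
import random

def truncated_gauss(rng, sigma, cut=5.0):
    """Draw a zero-mean Gaussian error, redrawing until it lies within ±cut·sigma."""
    while True:
        e = rng.gauss(0.0, sigma)
        if abs(e) <= cut * sigma:
            return e

rng = random.Random(42)
SIGMA = 0.1e-3  # assumed 0.1 mm rms alignment error (illustrative)
errors = [truncated_gauss(rng, SIGMA) for _ in range(10000)]
```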
The effect of misclassification errors on case mix measurement.
Sutherland, Jason M; Botz, Chas K
2006-12-01
Case mix systems have been implemented for hospital reimbursement and performance measurement across Europe and North America. Case mix categorizes patients into discrete groups based on clinical information obtained from patient charts, in an attempt to identify clinical or cost differences amongst these groups. The diagnosis related group (DRG) case mix system is the most common methodology, with variants adopted in many countries. External validation studies of coding quality have confirmed that widespread variability exists between originally recorded diagnoses and re-abstracted clinical information. DRG assignment errors in hospitals that share patient-level cost data for the purpose of establishing cost weights affect cost weight accuracy. The purpose of this study is to estimate bias in cost weights due to measurement error of reported clinical information. DRG assignment error rates are simulated based on recent clinical re-abstraction study results. Our simulation study estimates that 47% of cost weights representing the least severe cases are overweighted by 10%, while 32% of cost weights representing the most severe cases are underweighted by 10%. Applying the simulated weights to a cross-section of hospitals, we find that teaching hospitals tend to be underweighted. Since inaccurate cost weights challenge the ability of case mix systems to accurately reflect patient mix and may lead to potential distortions in hospital funding, bias in hospital case mix measurement highlights the role clinical data quality plays in hospital funding in countries that use DRG-type case mix systems. The quality of clinical information should be carefully considered from hospitals that contribute financial data for establishing cost weights.
Yang, Chunliang; Potts, Rosalind; Shanks, David R.
2017-01-01
Generating errors followed by corrective feedback enhances retention more effectively than does reading--the benefit of errorful generation--but people tend to be unaware of this benefit. The current research explored this metacognitive unawareness, its effect on self-regulated learning, and how to alleviate or reverse it. People's beliefs about…
Intra- and interobserver error of the Greulich-Pyle method as used on a Danish forensic sample
DEFF Research Database (Denmark)
Lynnerup, N; Belard, E; Buch-Olsen, K
2008-01-01
that atlas-based techniques are obsolete and ought to be replaced by other methods. Specifically, the GPA test sample consisted of American "white" children "above average in economic and educational status", leading to the question as to how comparable subjects being scored by the GPA method today...... and intraoral dental radiographs. Different methods are used depending on the maturity of the individual examined; and (3) a carpal X-ray examination, using the Greulich and Pyle Atlas (GPA) method. We present the results of intra- and interobserver tests of carpal X-rays in blind trials, and a comparison...... of the age estimations by carpal X-rays and odontological age estimation. We retrieved 159 cases from the years 2000-2002 (inclusive). The intra- and interobserver errors are overall small. We found full agreement in 126/159 cases, and this was between experienced users and novices. Overall, the mean...
Cozzolino, Daniel
2015-03-30
Vibrational spectroscopy encompasses a number of techniques and methods, including ultraviolet, visible, Fourier transform infrared or mid-infrared, near-infrared, and Raman spectroscopy. The use and application of spectroscopy generates spectra containing hundreds of variables (absorbances at each wavenumber or wavelength), resulting in large data sets representing the chemical and biochemical wine fingerprint. Multivariate data analysis techniques are then required to handle the large amount of data generated and to interpret the spectra in a meaningful way when developing a specific application. This paper focuses on developments in sample presentation and the main sources of error when vibrational spectroscopy methods are applied in wine analysis. Recent and novel applications are discussed as examples of these developments. © 2014 Society of Chemical Industry.
Effects of error feedback on a nonlinear bistable system with stochastic resonance
International Nuclear Information System (INIS)
Li Jian-Long; Zhou Hui
2012-01-01
In this paper, we discuss the effects of error feedback on the output of a nonlinear bistable system with stochastic resonance. The bit error rate is employed to quantify the performance of the system. The theoretical analysis and the numerical simulation are presented. By investigating the performances of the nonlinear systems with different strengths of error feedback, we argue that the presented system may provide guidance for practical nonlinear signal processing
Safe and effective error rate monitors for SS7 signaling links
Schmidt, Douglas C.
1994-04-01
This paper describes SS7 error monitor characteristics, discusses the existing SUERM (Signal Unit Error Rate Monitor), and develops the recently proposed EIM (Error Interval Monitor) for higher-speed SS7 links. An SS7 error monitor is considered safe if it ensures acceptable link quality and effective if it is tolerant of short-term phenomena. Formal criteria for safe and effective error monitors are formulated in this paper. The paper develops models of changeover transients, the unstable component of queue length resulting from errors. These models take the form of recursive digital filters: time is divided into sequential intervals, the filter's input is the number of errors that occurred in each interval, and the output is the corresponding change in transmit queue length. Engineered EIMs are constructed by comparing an estimated changeover transient with a threshold T, using a transient model modified to enforce SS7 standards. When this estimate exceeds T, a changeover is initiated and the link is removed from service. EIMs differ from SUERMs in that EIMs monitor errors over an interval while SUERMs count errored messages. EIMs offer several advantages over SUERMs: they are safe and effective, impose uniform standards of link quality, are easily implemented, and make minimal use of real-time resources.
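The interval-monitor idea can be reduced to a toy first-order recursive filter; the decay, gain, and threshold values below are invented for illustration and are not the engineered SS7 parameters:

```python
def error_interval_monitor(errors_per_interval, decay=0.9, gain=1.0, threshold=10.0):
    """Recursive (first-order IIR) estimate of the changeover transient.

    errors_per_interval: number of errors seen in each successive interval.
    The filter output stands in for the induced change in transmit queue
    length. Returns the interval index at which the link would be removed
    from service, or None if the threshold is never exceeded.
    """
    transient = 0.0
    for i, n_err in enumerate(errors_per_interval):
        transient = decay * transient + gain * n_err
        if transient > threshold:
            return i
    return None

# A short burst of errors trips the monitor; scattered single errors do not.
burst = [0, 0, 5, 6, 7, 0, 0]
sparse = [1, 0, 0, 1, 0, 0, 1]
```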
ADC border effect and suppression of quantization error in the digital dynamic measurement
International Nuclear Information System (INIS)
Bai Li-Na; Liu Hai-Dong; Zhou Wei; Zhai Hong-Qi; Cui Zhen-Jian; Zhao Ming-Ying; Gu Xiao-Qian; Liu Bei-Ling; Huang Li-Bei; Zhang Yong
2017-01-01
Digital measurement and processing is an important direction in the measurement and control field. The quantization error that pervades digital processing is usually the decisive factor restricting the development and application of digital technology. In this paper, we find that the stability of a digital quantization system can be considerably better than its quantization resolution. Exploiting a border effect in digital quantization can greatly improve the accuracy of digital processing: the effective precision is not tied to the number of quantization bits, but only to the stability of the quantization system. The high-precision measurement results obtained in a low-level quantization system with a high sampling rate have important application value for progress in the digital measurement and processing field. (paper)
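The claim that effective precision follows system stability rather than bit count can be illustrated with a coarse quantizer whose input noise straddles the quantization borders; the signal level, LSB, and noise figures below are arbitrary stand-ins:

```python
import random

def quantize(x, lsb):
    """Ideal uniform quantizer with step size lsb."""
    return round(x / lsb) * lsb

rng = random.Random(1)
TRUE_VALUE = 0.3137   # signal level (arbitrary units, illustrative)
LSB = 0.1             # deliberately coarse quantization step
NOISE = 0.05          # system instability straddling the quantization borders

# A single quantized reading is off by up to half an LSB...
single = quantize(TRUE_VALUE, LSB)

# ...but averaging many readings whose noise crosses the borders recovers the
# level far below one LSB: precision set by stability, not by bit count.
raw = [quantize(TRUE_VALUE + rng.gauss(0.0, NOISE), LSB) for _ in range(100000)]
estimate = sum(raw) / len(raw)
```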
The effectiveness of risk management program on pediatric nurses' medication error.
Dehghan-Nayeri, Nahid; Bayat, Fariba; Salehi, Tahmineh; Faghihzadeh, Soghrat
2013-09-01
Medication therapy is one of the most complex and high-risk clinical processes that nurses deal with. Medication error is the most common type of error that brings about damage and death to patients, especially pediatric ones. However, these errors are preventable. Identifying and preventing undesirable events leading to medication errors are the main risk management activities. The aim of this study was to investigate the effectiveness of a risk management program on the pediatric nurses' medication error rate. This study is a quasi-experimental one with a comparison group. In this study, 200 nurses were recruited from two main pediatric hospitals in Tehran. In the experimental hospital, we applied the risk management program for a period of 6 months, while nurses at the control hospital followed the hospital's routine schedule. A pre- and post-test was performed to measure the frequency of medication error events. SPSS software, t-tests, and regression analysis were used for data analysis. After the intervention, the medication error rate of nurses at the experimental hospital was significantly lower (P…) and the error-reporting rate was higher (P…). In the medical environment, applying quality-control programs such as risk management can effectively prevent the occurrence of undesirable hospital events. Nursing managers can reduce the medication error rate by applying risk management programs. However, this program cannot succeed without nurses' cooperation.
Research on effects of phase error in phase-shifting interferometer
Wang, Hongjun; Wang, Zhao; Zhao, Hong; Tian, Ailing; Liu, Bingcai
2007-12-01
In phase-shifting interferometry, the phase-shifting error introduced by the phase shifter is the main factor that directly affects the measurement accuracy of the interferometer. In this paper, the sources and types of phase-shifting error are introduced, and methods to eliminate these errors are reviewed. Based on the theory of phase-shifting interferometry, the effects of phase-shifting error are analyzed in detail. A liquid crystal display (LCD) used as a phase shifter has the advantage that the phase shift can be controlled digitally, without any mechanically moving or rotating element: the phase shift is induced by changing the coded image displayed on the LCD. The phase-modulation characteristic of the LCD is analyzed theoretically and tested. Based on the Fourier transform, a model of the effect of the phase error arising from the LCD in four-step phase-shifting interferometry is established, and the error range is obtained. To reduce the error, a new error-compensation algorithm is put forward, in which the error is obtained by processing the interferogram; the interferogram can then be compensated, and the measurement result obtained from the four-step phase-shifting interferograms. Theoretical analysis and simulation results demonstrate the feasibility of this approach to improve measurement accuracy.
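For reference, the standard four-step algorithm recovers the phase from intensities shifted by 0, π/2, π, 3π/2, and a small calibration error in the shifter propagates into a phase bias; the fringe parameters here are illustrative:

```python
import math

def four_step_phase(I1, I2, I3, I4):
    """Recover phase from four frames shifted by 0, pi/2, pi, 3*pi/2."""
    return math.atan2(I4 - I2, I1 - I3)

def intensity(phi, shift, A=1.0, B=0.5):
    """Two-beam fringe intensity with bias A and modulation B (illustrative)."""
    return A + B * math.cos(phi + shift)

PHI_TRUE = 0.7
ideal = [intensity(PHI_TRUE, k * math.pi / 2) for k in range(4)]
phi_ideal = four_step_phase(*ideal)     # exact recovery

# A shifter miscalibrated by EPS per step biases the recovered phase.
EPS = 0.05
erred = [intensity(PHI_TRUE, k * (math.pi / 2 + EPS)) for k in range(4)]
phi_err = four_step_phase(*erred)
```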
Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans
International Nuclear Information System (INIS)
Siebers, Jeffrey V.; Keall, Paul J.; Wu Qiuwen; Williamson, Jeffrey F.; Schmidt-Ullrich, Rupert K.
2005-01-01
Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for σ = Σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the dose received by 98% of the gross tumor volume (GTV D98), the clinical target volume D90 (CTV D90), nodes D90, cord D2, parotid D50, and parotid mean dose were evaluated with respect to the plan used for treatment, both for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: Simultaneous integrated boost-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, dose errors exceeded 3% only when the random setup error σ exceeded 3 mm. Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having more than a 3% dose error and 28% having a 5% dose error.
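The random-error simulation described, convolving the fluence with the setup-error probability density, can be sketched in one dimension; the square "field" and 1 mm grid below are stand-ins for a real fluence map:

```python
import math

def gauss_kernel(sigma_mm, step_mm=1.0, half_width=5):
    """Discrete, normalized Gaussian setup-error probability density."""
    xs = [i * step_mm for i in range(-half_width, half_width + 1)]
    w = [math.exp(-0.5 * (x / sigma_mm) ** 2) for x in xs]
    total = sum(w)
    return [v / total for v in w]

def convolve(profile, kernel):
    h = len(kernel) // 2
    out = []
    for i in range(len(profile)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - h, 0), len(profile) - 1)  # clamp at the edges
            acc += k * profile[idx]
        out.append(acc)
    return out

# Idealized 1D fluence: 100% inside a 40 mm field on a 1 mm grid, 0 outside.
profile = [100.0 if 30 <= x < 70 else 0.0 for x in range(100)]
blurred = convolve(profile, gauss_kernel(sigma_mm=3.0))
# The field centre is untouched; the penumbra is smeared over roughly 2 sigma.
```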
Effects of structural error on the estimates of parameters of dynamical systems
Hadaegh, F. Y.; Bekey, G. A.
1986-01-01
In this paper, the notion of 'near-equivalence in probability' is introduced for identifying a system in the presence of several error sources. Following some basic definitions, necessary and sufficient conditions for the identifiability of parameters are given. The effects of structural error on the parameter estimates for both the deterministic and stochastic cases are considered.
Effects of Shame and Guilt on Error Reporting Among Obstetric Clinicians.
Zabari, Mara Lynne; Southern, Nancy L
2018-04-17
To understand how the experiences of shame and guilt, coupled with organizational factors, affect error reporting by obstetric clinicians. Descriptive cross-sectional. A sample of 84 obstetric clinicians from three maternity units in Washington State. In this quantitative inquiry, a variant of the Test of Self-Conscious Affect was used to measure proneness to guilt and shame. In addition, we developed questions to assess attitudes regarding concerns about damaging one's reputation if an error was reported and the choice to keep an error to oneself. Both assessments were analyzed separately and then correlated to identify relationships between constructs. Interviews were used to identify organizational factors that affect error reporting. As a group, mean scores indicated that obstetric clinicians would not choose to keep errors to themselves. However, bivariate correlations showed that proneness to shame was positively correlated to concerns about one's reputation if an error was reported, and proneness to guilt was negatively correlated with keeping errors to oneself. Interview data analysis showed that Past Experience with Responses to Errors, Management and Leadership Styles, Professional Hierarchy, and Relationships With Colleagues were influential factors in error reporting. Although obstetric clinicians want to report errors, their decisions to report are influenced by their proneness to guilt and shame and perceptions of the degree to which organizational factors facilitate or create barriers to restore their self-images. Findings underscore the influence of the organizational context on clinicians' decisions to report errors. Copyright © 2018 AWHONN, the Association of Women’s Health, Obstetric and Neonatal Nurses. Published by Elsevier Inc. All rights reserved.
Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D
2012-09-01
It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transformed domain, e.g. spatial finite differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying current compressive sampling theory. The paper presents a model-based method for the restoration of MRIs. A reduced-order model, in which the full system response is projected onto a subspace of lower dimensionality, has been used to accelerate image reconstruction by reducing the size of the linear system involved. In this paper, the singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs with SVT denoising show reduced sampling errors compared with direct MRI restoration via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems. The sparsity implicit in MRIs can be exploited to reconstruct an image from significantly undersampled k-space. The challenge, however, is that random undersampling produces incoherent artifacts, adding noise-like interference to the image in its sparse representation, and the recovery algorithms in the literature are not capable of fully removing these artifacts. It is therefore necessary to introduce a denoising procedure to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of the image basis functions, which allows further improvement of the quality of image reconstruction with removal of noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally, with a lower rank, by selecting a smaller number of dominant singular values. The singular value threshold algorithm is performed
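A minimal sketch of the SVT step on its own, assuming (for illustration) a low-rank "image" corrupted by noise-like undersampling artifacts; soft-thresholding the singular values keeps only the dominant structure:

```python
import numpy as np

def svt_denoise(M, tau):
    """Soft-threshold the singular values of M, keeping only dominant structure."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
# Rank-2 stand-in for the image, plus noise-like undersampling artifacts.
low_rank = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
noisy = low_rank + 0.1 * rng.standard_normal((50, 50))

denoised = svt_denoise(noisy, tau=2.0)
err_noisy = np.linalg.norm(noisy - low_rank)        # error before denoising
err_denoised = np.linalg.norm(denoised - low_rank)  # error after SVT
```

The threshold trades a small shrinkage bias on the dominant singular values for the removal of the noise directions entirely.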
Analysis of dose distribution and DVH changes owing to the effect of patient setup error
International Nuclear Information System (INIS)
Kim, Kyung Tae; Ju, Sang Gyu; Ahn, Jae Hong; Park, Young Hwan
2004-01-01
Setup errors caused by the patient or by staff during radiation treatment can have an important effect on the delivered treatment, so we analyzed the effect of patient setup error on the dose distribution and the DVH. A human phantom was CT-scanned: in addition to a standard (error-free) scan, deliberate setup errors were introduced by rotating and translating the phantom to the left by 3 mm, 5 mm, 7 mm, 10 mm, 15 mm, and 20 mm. For each resulting CT image set, plans in common clinical use were generated with RTP equipment: a box plan and a three-dimensional plan (five beams at identical angular spacing), each with margins of CTV + 1 cm, CTV + 0.5 cm, and CTV + 0.3 cm defining the PTV. The standard plan and each setup-error plan were then compared in terms of dose distribution and DVH. For both the box plan and the three-dimensional plan, rotational and translational errors of 3 mm and 5 mm gave dose distributions and DVHs similar to the standard plan (0%-2% change), whereas errors of 7 mm, 10 mm, 15 mm, and 20 mm produced changes large enough to affect treatment (2%-11%). To diminish the effect of setup error, patient movement and tension must be reduced; the development and supply of accessories that improve reproducibility and limit movement are also important.
White, Simon R; Muniz-Terrera, Graciela; Matthews, Fiona E
2018-05-01
Many medical (and ecological) processes involve a change of shape, whereby one trajectory changes into another at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed-effect change-point models with an underlying shape comprising two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point, a change class and a no-change class, and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider how sample size relates to the estimates of the underlying shape, the existence of a change-point, and the classification error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate due to the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size (n = 500, with 50 individuals observed past the change-point) in the fixed-effect setting, a change-point can be detected and reliably estimated across a range of observation errors.
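A broken-stick trajectory and a profile-likelihood search for its change-point can be sketched as follows; the MMSE-like numbers, visit schedule, and grid are invented, and the sketch fits a single-class, fully observed version rather than the paper's Bayesian mixture:

```python
import numpy as np

def broken_stick(t, b0, b1, b2, cp):
    """Two joined linear segments: slope b1 before cp, b1 + b2 after."""
    return b0 + b1 * t + b2 * np.maximum(t - cp, 0.0)

rng = np.random.default_rng(0)
t = np.arange(0.0, 6.5, 0.5)                       # 13 visits over six "years"
TRUTH = dict(b0=28.0, b1=-0.2, b2=-2.0, cp=4.0)    # MMSE-like accelerated decline
y = broken_stick(t, **TRUTH) + rng.normal(0.0, 0.3, t.size)

def sse(cp):
    """Least-squares fit of the linear parts for a fixed change-point."""
    X = np.column_stack([np.ones_like(t), t, np.maximum(t - cp, 0.0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

# Profile the change-point over a grid and keep the best-fitting value.
grid = np.arange(1.0, 5.6, 0.1)
cp_hat = float(grid[int(np.argmin([sse(c) for c in grid]))])
```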
Temperature Dependence of Faraday Effect-Induced Bias Error in a Fiber Optic Gyroscope.
Li, Xuyou; Liu, Pan; Guang, Xingxing; Xu, Zhenlong; Guan, Lianwu; Li, Guangchun
2017-09-07
Improving the performance of the interferometric fiber optic gyroscope (IFOG) in harsh environments, such as varying magnetic and temperature fields, is necessary for its practical applications. This paper presents an investigation of the Faraday effect-induced bias error of an IFOG under varying temperature. The Jones matrix method is utilized to formulate the temperature dependence of the Faraday effect-induced bias error. Theoretical results show that the Faraday effect-induced bias error changes with temperature in a non-skeleton polarization-maintaining (PM) fiber coil. This phenomenon is caused by the temperature dependence of the linear birefringence and the Verdet constant of the PM fiber. In particular, the Faraday effect-induced bias errors of the two polarizations always have opposite signs and can therefore be compensated optically regardless of temperature changes. Two experiments with a 1000 m non-skeleton PM fiber coil were performed, and the experimental results support these theoretical predictions. This study is promising for improving the bias stability of the IFOG.
International Nuclear Information System (INIS)
Knuefer; Lindauer
1980-01-01
In addition, at spectacular events a combination of component failure and human error is often found. The Rasmussen Report and the German Risk Assessment Study in particular show, for pressurised water reactors, that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)
Effects of systematic phase errors on optimized quantum random-walk search algorithm
International Nuclear Information System (INIS)
Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun
2015-01-01
This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. (paper)
Sethi, Suresh; Linden, Daniel; Wenburg, John; Lewis, Cara; Lemons, Patrick R.; Fuller, Angela K.; Hare, Matthew P.
2016-01-01
Error-tolerant likelihood-based match calling presents a promising technique to accurately identify recapture events in genetic mark–recapture studies by combining probabilities of latent genotypes and probabilities of observed genotypes, which may contain genotyping errors. Combined with clustering algorithms to group samples into sets of recaptures based upon pairwise match calls, these tools can be used to reconstruct accurate capture histories for mark–recapture modelling. Here, we assess the performance of a recently introduced error-tolerant likelihood-based match-calling model and sample clustering algorithm for genetic mark–recapture studies. We assessed both biallelic (i.e. single nucleotide polymorphisms; SNP) and multiallelic (i.e. microsatellite; MSAT) markers using a combination of simulation analyses and case study data on Pacific walrus (Odobenus rosmarus divergens) and fishers (Pekania pennanti). A novel two-stage clustering approach is demonstrated for genetic mark–recapture applications. First, repeat captures within a sampling occasion are identified. Subsequently, recaptures across sampling occasions are identified. The likelihood-based matching protocol performed well in simulation trials, demonstrating utility for use in a wide range of genetic mark–recapture studies. Moderately sized SNP (64+) and MSAT (10–15) panels produced accurate match calls for recaptures and accurate non-match calls for samples from closely related individuals in the face of low to moderate genotyping error. Furthermore, matching performance remained stable or increased as the number of genetic markers increased, genotyping error notwithstanding.
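The core of error-tolerant match calling, summing over latent genotypes under the two hypotheses, can be sketched for biallelic SNPs; the Hardy-Weinberg prior, symmetric 2% error model, and 0/1/2 coding are simplifying assumptions, not the published model:

```python
import math

def match_log_likelihood_ratio(g1, g2, freqs, err=0.02):
    """Log-LR that two SNP genotype vectors (coded 0/1/2 alternate-allele counts)
    come from the same individual versus two unrelated individuals, allowing a
    per-locus genotyping error rate err (uniform over the other two calls)."""
    def prior(g, p):                     # Hardy-Weinberg genotype prior
        return ((1 - p) ** 2, 2 * p * (1 - p), p * p)[g]

    def obs(o, true):                    # symmetric misclassification model
        return 1.0 - err if o == true else err / 2.0

    llr = 0.0
    for a, b, p in zip(g1, g2, freqs):
        # Same individual: sum over the shared latent genotype.
        same = sum(prior(t, p) * obs(a, t) * obs(b, t) for t in range(3))
        # Different individuals: the two observations are independent.
        diff = (sum(prior(t, p) * obs(a, t) for t in range(3))
                * sum(prior(t, p) * obs(b, t) for t in range(3)))
        llr += math.log(same / diff)
    return llr

freqs = [0.5] * 64                       # a 64-SNP panel, all at 50% frequency
llr_match = match_log_likelihood_ratio([1] * 64, [1] * 64, freqs)
llr_nonmatch = match_log_likelihood_ratio([0] * 64, [2] * 64, freqs)
```

A positive log-LR supports a recapture call; clustering then groups samples whose pairwise calls exceed a chosen threshold.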
Directory of Open Access Journals (Sweden)
Yun Shi
2014-01-01
Full Text Available Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS, VLBI baselines, and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements, and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and on the estimate of landslide mass volume from the constructed DEM.
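A minimal numerical sketch of a multiplicative error model and one weighted LS adjustment for it; the line-fit setup, the 5% proportional error, and this particular unit-weight-variance estimator are illustrative choices, not the paper's three adjustments and five estimators:

```python
import random

# Multiplicative-error observations: y_i = (beta * x_i) * (1 + eps_i),
# eps ~ N(0, s^2), so the error sd is proportional to the true value
# (as with LiDAR ranges or GPS/VLBI baselines).
rng = random.Random(7)
BETA, S = 2.0, 0.05
x = [1.0 + 0.01 * i for i in range(2000)]
y = [BETA * xi * (1.0 + rng.gauss(0.0, S)) for xi in x]

# LS with weights w_i = 1 / x_i^2 (inverse error variance up to a constant)
# reduces to averaging the ratios y_i / x_i for this one-parameter model.
beta_hat = sum(yi / xi for xi, yi in zip(x, y)) / len(x)

# An estimator of the variance of unit weight from the weighted residuals;
# its expectation is (BETA * S)^2 under the model above.
resid = [(yi - beta_hat * xi) / xi for xi, yi in zip(x, y)]
s2_hat = sum(r * r for r in resid) / (len(x) - 1)
```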
Nascimento, Eduarda Helena Leandro; Gaêta-Araujo, Hugo; Andrade, Maria Fernanda Silva; Freitas, Deborah Queiroz
2018-01-21
The aims of this study are to identify the most frequent technical errors in endodontically treated teeth and to determine which root canals were most often associated with those errors, as well as to relate endodontic technical errors and the presence of coronal restorations to periapical status by means of cone-beam computed tomography images. Six hundred eighteen endodontically treated teeth (1146 root canals) were evaluated for the quality of their endodontic treatment and for the presence of coronal restorations and periapical lesions. Each root canal was classified according to dental group, and the endodontic technical errors were recorded. Chi-squared tests and descriptive analyses were performed. Six hundred eighty root canals (59.3%) had periapical lesions. Maxillary molars and anterior teeth showed a higher prevalence of periapical lesions (p…). Underfilling was the most frequent technical error in all root canals, except for the second mesiobuccal root canal of maxillary molars and the distobuccal root canal of mandibular molars, which were non-filled in 78.4% and 30% of the cases, respectively. There is a high prevalence of apical radiolucencies, which increased in the presence of poor coronal restorations, endodontic technical errors, and when both conditions were concomitant. Underfilling was the most frequent technical error, followed by non-homogeneous and non-filled canals. Evaluation of endodontic treatment quality that considers every single root canal aims at warning dental practitioners of the prevalence of technical errors that could be avoided with careful treatment planning and execution.
Effects of averaging over motion and the resulting systematic errors in radiation therapy
International Nuclear Information System (INIS)
Evans, Philip M; Coolens, Catherine; Nioutsikou, Elena
2006-01-01
The potential for systematic errors in radiotherapy of a breathing patient is considered using the statistical model of Bortfeld et al (2002 Phys. Med. Biol. 47 2203-20). It is shown that although averaging over 30 fractions does result in a narrow Gaussian distribution of errors, as predicted by the central limit theorem, the fact that one or a few samples of the breathing patient's motion distribution are used for treatment planning (in contrast to the many treatment fractions that are likely to be delivered) may result in a much larger error with a systematic component. The error distribution may be particularly large if a scan at breath-hold is used for planning. (note)
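The argument can be reproduced numerically: averaging over 30 fractions narrows the delivered-position distribution, but planning from a single motion snapshot leaves a systematic component. The 5 mm motion sd and the Gaussian motion model are illustrative assumptions, not the paper's statistical model:

```python
import random
import statistics

rng = random.Random(3)
SIGMA = 5.0   # mm, sd of the breathing-induced target position (illustrative)

def delivered_mean(n_fractions):
    """Average target position actually realized over a course of fractions."""
    return sum(rng.gauss(0.0, SIGMA) for _ in range(n_fractions)) / n_fractions

# Over 30 fractions the delivered mean clusters tightly (central limit theorem)...
delivered = [delivered_mean(30) for _ in range(2000)]
# ...but the plan is built from a single snapshot of the same motion, so the
# plan-vs-delivery mismatch keeps a systematic component of order SIGMA.
planning = [rng.gauss(0.0, SIGMA) for _ in range(2000)]
mismatch = [p - d for p, d in zip(planning, delivered)]

sd_delivered = statistics.pstdev(delivered)  # about SIGMA / sqrt(30), ~0.9 mm
sd_mismatch = statistics.pstdev(mismatch)    # about SIGMA * sqrt(1 + 1/30), ~5.1 mm
```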
Minimizing treatment planning errors in proton therapy using failure mode and effects analysis
Energy Technology Data Exchange (ETDEWEB)
Zheng, Yuanshui, E-mail: yuanshui.zheng@okc.procure.com [ProCure Proton Therapy Center, 5901 W Memorial Road, Oklahoma City, Oklahoma 73142 and Department of Physics, Oklahoma State University, Stillwater, Oklahoma 74078-3072 (United States); Johnson, Randall; Larson, Gary [ProCure Proton Therapy Center, 5901 W Memorial Road, Oklahoma City, Oklahoma 73142 (United States)
2016-06-15
Purpose: Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. Methods: The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. Results: In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. Conclusions: The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of FMEA framework and the implementation of an ongoing error tracking system at their
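The RPN bookkeeping described above is straightforward to mechanize. A minimal sketch follows; the failure modes and their 1-10 scores are hypothetical examples, not entries from the authors' actual FMEA table.

```python
# Hypothetical failure modes, each scored 1-10 for occurrence (O),
# detectability (D, higher = harder to detect), and severity (S).
failure_modes = [
    ("wrong CT-MR image fusion",       3, 4, 8),
    ("beam misordered in plan export", 2, 3, 9),
    ("billing code mismatch",          5, 2, 2),
]

def rpn(occurrence, detectability, severity):
    """Risk priority number: product of the three 1-10 scores."""
    return occurrence * detectability * severity

# Rank failure modes so mitigation effort targets the highest RPN first.
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
for name, o, d, s in ranked:
    print(f"RPN {rpn(o, d, s):3d}  {name}")
```

Re-scoring occurrence from an ongoing error database, as the authors describe, amounts to updating the O column and re-sorting.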
The effects of error augmentation on learning to walk on a narrow balance beam.
Domingo, Antoinette; Ferris, Daniel P
2010-10-01
Error augmentation during training has been proposed as a means to facilitate motor learning due to the human nervous system's reliance on performance errors to shape motor commands. We studied the effects of error augmentation on short-term learning of walking on a balance beam to determine whether it had beneficial effects on motor performance. Four groups of able-bodied subjects walked on a treadmill-mounted balance beam (2.5-cm wide) before and after 30 min of training. During training, two groups walked on the beam with a destabilization device that augmented error (Medium and High Destabilization groups). A third group walked on a narrower beam (1.27-cm) to augment error (Narrow). The fourth group practiced walking on the 2.5-cm balance beam (Wide). Subjects in the Wide group had significantly greater improvements after training than the error augmentation groups. The High Destabilization group had significantly less performance gains than the Narrow group in spite of similar failures per minute during training. In a follow-up experiment, a fifth group of subjects (Assisted) practiced with a device that greatly reduced catastrophic errors (i.e., stepping off the beam) but maintained similar pelvic movement variability. Performance gains were significantly greater in the Wide group than the Assisted group, indicating that catastrophic errors were important for short-term learning. We conclude that increasing errors during practice via destabilization and a narrower balance beam did not improve short-term learning of beam walking. In addition, the presence of qualitatively catastrophic errors seems to improve short-term learning of walking balance.
Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions
McCullough, Christopher; Bettadpur, Srinivas
2015-04-01
In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
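The concern about double-precision arithmetic can be made concrete with a toy accumulation experiment, not the GRACE processing chain itself: naive summation lets round-off grow with the number of operations, much as error accumulates over a long trajectory integration, while a compensated (exactly rounded) sum stays at the level of a single rounding error.

```python
import math
from fractions import Fraction

# Accumulate the double 0.1 a million times, mimicking how round-off
# grows over many operations in a long numerical computation.
vals = [0.1] * 1_000_000

naive = 0.0
for v in vals:
    naive += v                 # one rounding error per addition

compensated = math.fsum(vals)  # exactly rounded (compensated) summation

# Exact reference: every Python float has an exact rational value.
exact = Fraction(vals[0]) * len(vals)
err_naive = abs(Fraction(naive) - exact)
err_comp = abs(Fraction(compensated) - exact)

print(f"naive error       {float(err_naive):.3e}")
print(f"compensated error {float(err_comp):.3e}")
```

The same principle motivates the abstract's suggestion of double-extended or quadruple precision: the per-operation rounding error shrinks, so the accumulated error over the integration and least-squares steps shrinks with it.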
Effects of digital human-machine interface characteristics on human error in nuclear power plants
International Nuclear Information System (INIS)
Li Pengcheng; Zhang Li; Dai Licao; Huang Weigang
2011-01-01
To identify the effects of digital human-machine interface characteristics on human error in nuclear power plants, the new characteristics of digital human-machine interfaces are first identified by comparison with traditional analog control systems in the areas of information display, user-interface interaction and management, control systems, alarm systems, and procedure systems. The negative effects of these characteristics on human error, such as increased cognitive load and workload, mode confusion, and loss of situation awareness, are then identified through field research and interviews with operators. For these adverse effects, corresponding prevention and control measures are provided to support the prevention and minimization of human errors and the optimization of human-machine interface design. (authors)
Simulation of sampling effects in FPAs
Cook, Thomas H.; Hall, Charles S.; Smith, Frederick G.; Rogne, Timothy J.
1991-09-01
The use of multiplexers and large focal plane arrays in advanced thermal imaging systems has drawn renewed attention to sampling and aliasing issues in imaging applications. As evidenced by discussions in a recent workshop, there is no clear consensus among experts whether aliasing in sensor designs can be readily tolerated or must be avoided at all costs. Further, there is no straightforward analytical method that can answer the question, particularly when considering image interpreters as different as humans and autonomous target recognizers (ATRs). However, the means exist for investigating sampling and aliasing issues through computer simulation. The U.S. Army Tank-Automotive Command (TACOM) Thermal Image Model (TTIM) provides realistic sensor imagery that can be evaluated by both human observers and ATRs. This paper briefly describes the history and current status of TTIM, explains the simulation of FPA sampling effects, presents validation results of the FPA sensor model, and demonstrates the utility of TTIM for investigating sampling effects in imagery.
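The aliasing at issue in undersampled FPAs is the classic sampling-theorem effect. A one-dimensional sketch (frequencies chosen purely for illustration): sampling a 9-cycle/unit sinusoid at 12 samples/unit, below the Nyquist rate of 18, yields samples indistinguishable from a 3-cycle/unit alias.

```python
import numpy as np

f_signal = 9.0    # cycles per unit (illustrative)
f_sample = 12.0   # samples per unit -- below the Nyquist rate 2 * f_signal

t = np.arange(0, 4, 1 / f_sample)
samples = np.sin(2 * np.pi * f_signal * t)

# At this sampling rate the 9-cycle signal folds down to an apparent
# frequency of f_signal - f_sample = -3 cycles per unit.
apparent = np.sin(2 * np.pi * (f_signal - f_sample) * t)
print(np.abs(samples - apparent).max())
```

Because the sampled values are bit-for-bit consistent with the lower-frequency alias, no post-hoc processing can separate the two, which is why the tolerability of aliasing must be judged by the end interpreter, human or ATR.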
Errors and Uncertainties in Dose Reconstruction for Radiation Effects Research
Energy Technology Data Exchange (ETDEWEB)
Strom, Daniel J.
2008-04-14
Dose reconstruction for studies of the health effects of ionizing radiation has been carried out for many decades. Major studies have included Japanese bomb survivors, atomic veterans, downwinders of the Nevada Test Site and Hanford, underground uranium miners, and populations of nuclear workers. For such studies to be credible, significant effort must be put into applying the best science to reconstructing unbiased absorbed doses to tissues and organs as a function of time. In many cases, more and more sophisticated dose reconstruction methods have been developed as studies progressed. For the example of the Japanese bomb survivors, the dose surrogate “distance from the hypocenter” was replaced by slant range, and then by TD65 doses, DS86 doses, and more recently DS02 doses. Over the years, it has become increasingly clear that an equal level of effort must be expended on the quantitative assessment of uncertainty in such doses and on reducing and managing that uncertainty. In this context, this paper reviews difficulties in terminology, explores the nature of Berkson and classical uncertainties in dose reconstruction through examples, and proposes a path forward for Joint Coordinating Committee for Radiation Effects Research (JCCRER) Project 2.4 that requires a reasonably small level of effort for DOSES-2008.
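The Berkson/classical distinction reviewed here has a concrete statistical fingerprint that a short simulation can show (all numbers below are illustrative, not from any dose-reconstruction study): classical error in an exposure attenuates a fitted dose-response slope toward zero, whereas Berkson error, where the true dose scatters around an assigned group dose, leaves the slope approximately unbiased but inflates residual scatter.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 200_000, 2.0  # illustrative sample size and true slope

# Classical error: the *measured* dose is the true dose plus noise.
true_x = rng.normal(10, 2, n)
y = beta * true_x + rng.normal(0, 1, n)
x_measured = true_x + rng.normal(0, 2, n)
slope_classical = np.polyfit(x_measured, y, 1)[0]   # attenuated toward 0

# Berkson error: the *true* dose scatters around the assigned group dose.
x_assigned = rng.normal(10, 2, n)
x_true = x_assigned + rng.normal(0, 2, n)
y_berkson = beta * x_true + rng.normal(0, 1, n)
slope_berkson = np.polyfit(x_assigned, y_berkson, 1)[0]  # roughly unbiased

print(slope_classical, slope_berkson)
```

With equal true-dose and error variances, the classical slope is attenuated by the reliability ratio var(X)/(var(X)+var(error)) = 1/2, which is why distinguishing the two error structures matters for risk estimation.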
Rausch, R; MacDonald, K
1997-03-01
We used a protocol consisting of a continuous presentation of stimuli with associated response requests during an intracarotid sodium amobarbital procedure (IAP) to study the effects of hemisphere injected (speech dominant vs. nondominant) and seizure focus (left temporal lobe vs. right temporal lobe) on the pattern of behavioral response errors for three types of visual stimuli (pictures of common objects, words, and abstract forms). Injection of the left speech dominant hemisphere compared to the right nondominant hemisphere increased overall errors and affected the pattern of behavioral errors. The presence of a seizure focus in the contralateral hemisphere increased overall errors, particularly for the right temporal lobe seizure patients, but did not affect the pattern of behavioral errors. Left hemisphere injections disrupted both naming and reading responses at a rate similar to that of matching-to-sample performance. Also, a short-term memory deficit was observed with all three stimuli. Long-term memory testing following the left hemisphere injection indicated that only for pictures of common objects were there fewer errors during the early postinjection period than for the later long-term memory testing. Therefore, despite the inability to respond to picture stimuli, picture items, but not words or forms, could be sufficiently encoded for later recall. In contrast, right hemisphere injections resulted in few errors, with a pattern suggesting a mild general cognitive decrease. A selective weakness in learning unfamiliar forms was found. Our findings indicate that different patterns of behavioral deficits occur following the left vs. right hemisphere injections, with selective patterns specific to stimulus type.
Sample design effects in landscape genetics
Oyler-McCance, Sara J.; Fedy, Bradley C.; Landguth, Erin L.
2012-01-01
An important research gap in landscape genetics is the impact of different field sampling designs on the ability to detect the effects of landscape pattern on gene flow. We evaluated how five different sampling regimes (random, linear, systematic, cluster, and single study site) affected the probability of correctly identifying the generating landscape process of population structure. Sampling regimes were chosen to represent a suite of designs common in field studies. We used genetic data generated from a spatially explicit, individual-based program and simulated gene flow in a continuous population across a landscape with gradual spatial changes in resistance to movement. Additionally, we evaluated the sampling regimes using realistic and obtainable numbers of loci (10 and 20), numbers of alleles per locus (5 and 10), numbers of individuals sampled (10-300), and generational times after the landscape was introduced (20 and 400). For a simulated continuously distributed species, we found that random, linear, and systematic sampling regimes performed well with high sample sizes (>200), levels of polymorphism (10 alleles per locus), and number of molecular markers (20). The cluster and single study site sampling regimes were not able to correctly identify the generating process under any conditions and thus are not advisable strategies for scenarios similar to our simulations. Our research emphasizes the importance of sampling data at ecologically appropriate spatial and temporal scales and suggests careful consideration for sampling near landscape components that are likely to most influence the genetic structure of the species. In addition, simulating sampling designs a priori could help guide field data collection efforts.
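The point-pattern regimes compared in the study can be sketched as simple coordinate generators. Landscape size, sample size, and cluster spread below are arbitrary choices for illustration, not the simulation parameters used by the authors.

```python
import numpy as np

rng = np.random.default_rng(4)
n, L = 100, 1000.0  # individuals sampled; landscape side length (arbitrary)

# Random: uniform over the landscape.
random_pts = rng.uniform(0, L, (n, 2))

# Systematic: a regular 10 x 10 grid.
g = np.linspace(50, 950, 10)
gx, gy = np.meshgrid(g, g)
systematic_pts = np.column_stack([gx.ravel(), gy.ravel()])

# Cluster: 20 individuals scattered around each of 5 random centres.
centres = rng.uniform(100, 900, (5, 2))
cluster_pts = np.repeat(centres, 20, axis=0) + rng.normal(0, 25, (n, 2))

# Linear: a transect across the middle of the landscape.
tx = np.linspace(0, L, n)
linear_pts = np.column_stack([tx, np.full(n, L / 2)])
```

Generating candidate designs like these against a simulated landscape before fieldwork is exactly the a priori evaluation the abstract recommends.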
The effect of reporting speed on plain film reporting errors
International Nuclear Information System (INIS)
Edwards, A.J.; Ricketts, C.; Dubbins, P.A.; Roobottom, C.A.; Wells, I.P.
2003-01-01
AIM: To determine whether reporting plain films at faster rates leads to a deterioration in accuracy. METHODS: Fourteen consultant radiologists were asked to report a total of 90 radiographs in three sets of 30. They reported the first set at the rate they would report normally and the subsequent two sets in two-thirds and one-half of the original time. The 90 radiographs were the same for each radiologist; however, the order was randomly generated for each. RESULTS: There was no significant difference in overall accuracy for each of the three film sets (p=0.74). Additionally, no significant difference in the total number of false-negatives for each film set was detected (p=0.14). However, there was a significant decrease in the number of false-positive reports when the radiologists were asked to report at higher speeds (p=0.003). CONCLUSIONS: When reporting accident and emergency radiographs, increasing reporting speed has no overall effect upon accuracy; however, it does lead to fewer false-positive reports
Galaxy Cluster Shapes and Systematic Errors in H_0 as Determined by the Sunyaev-Zel'dovich Effect
Sulkanen, Martin E.; Patel, Sandeep K.
1998-01-01
Imaging of the Sunyaev-Zel'dovich (SZ) effect in galaxy clusters combined with cluster plasma x-ray diagnostics promises to measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. In this paper we present a study of the systematic errors in the value of H_0, as determined by the x-ray and SZ properties of theoretical samples of triaxial isothermal "beta-model" clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. We calculate three estimates for H_0 for each cluster, based on their large and small apparent angular core radii and their arithmetic mean. We average the estimates for H_0 for a sample of 25 clusters and find that the estimates have limited systematic error: the 99.7% confidence intervals for the mean estimated H_0, analyzing the clusters using either their large or mean angular core radius, are within 14% of the "true" (assumed) value of H_0 (and enclose it), for a triaxial beta-model cluster sample possessing a distribution of apparent x-ray cluster ellipticities consistent with that of observed x-ray clusters.
Outcomes of a Failure Mode and Effects Analysis for medication errors in pediatric anesthesia.
Martin, Lizabeth D; Grigg, Eliot B; Verma, Shilpa; Latham, Gregory J; Rampersad, Sally E; Martin, Lynn D
2017-06-01
The Institute of Medicine has called for development of strategies to prevent medication errors, which are one important cause of preventable harm. Although the field of anesthesiology is considered a leader in patient safety, recent data suggest high medication error rates in anesthesia practice. Unfortunately, few error prevention strategies for anesthesia providers have been implemented. Using Toyota Production System quality improvement methodology, a multidisciplinary team observed 133 h of medication practice in the operating room at a tertiary care freestanding children's hospital. A failure mode and effects analysis was conducted to systematically deconstruct and evaluate each medication handling process step and score possible failure modes to quantify areas of risk. A bundle of five targeted countermeasures was identified and implemented over 12 months. Improvements in syringe labeling (73 to 96%), standardization of medication organization in the anesthesia workspace (0 to 100%), and two-provider infusion checks (23 to 59%) were observed. Medication error reporting improved during the project and was subsequently maintained. After intervention, the median medication error rate decreased from 1.56 to 0.95 per 1000 anesthetics. The frequency of medication error harm events reaching the patient also decreased. Systematic evaluation and standardization of medication handling processes by anesthesia providers in the operating room can decrease medication errors and improve patient safety. © 2017 John Wiley & Sons Ltd.
An estimate and evaluation of design error effects on nuclear power plant design adequacy
International Nuclear Information System (INIS)
Stevenson, J.D.
1984-01-01
An area of considerable concern in evaluating Design Control Quality Assurance procedures applied to the design and analysis of nuclear power plants is the level of design error expected or encountered. There is very little published data on the level of error typically found in nuclear power plant design calculations, and even less on the impact such errors would be expected to have on the overall design adequacy of the plant. This paper is concerned with design error associated with civil and mechanical structural design and analysis found in calculations which form part of the Design or Stress reports. These reports are meant to document the design basis and adequacy of the plant. The estimates contained in this paper are based on the personal experiences of the author. Table 1 gives a partial listing of the design documentation reviews performed by the author on which the observations contained in this paper are based. In the preparation of any design calculations, it is a utopian dream to presume such calculations can be made error free. The intent of this paper is to define error levels which might be expected in a competent engineering organization employing technically qualified engineers and accepted methods of Design Control. In addition, the effects of these errors on the probability of failure to meet applicable design code requirements are also estimated
Sample Lesson Plans. Management for Effective Teaching.
Fairfax County Public Schools, VA. Dept. of Instructional Services.
This guide is part of the Management for Effective Teaching (MET) support kit, a pilot project developed by the Fairfax County (Virginia) Public Schools to assist elementary school teachers in planning, managing, and implementing the county's curriculum, Program of Studies (POS). In this guide, a sample lesson plan of a teaching-learning activity…
Effect of lethality on the extinction and on the error threshold of quasispecies.
Tejero, Hector; Marín, Arturo; Montero, Francisco
2010-02-21
In this paper the effect of lethality on the error threshold and on extinction has been studied in a population of error-prone self-replicating molecules. For a given lethality and a simple fitness landscape, three dynamic regimes can be obtained: quasispecies, error catastrophe, and extinction. Using a simple model in which molecules are classified as master, lethal, and non-lethal mutants, it is possible to obtain the mutation rates of the transitions between the three regimes analytically. The numerical resolution of the extended model, in which molecules are classified depending on their Hamming distance to the master sequence, confirms the results obtained in the simple model and shows how an error catastrophe regime changes when lethality is taken into account. (c) 2009 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Menelaou, Evdokia; Paul, Latoya T.; Perera, Surangi N.; Svoboda, Kurt R.
2015-01-01
Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects onto SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independent of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects onto subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors, in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the 3 subpopulations of SMN axons differently, but the dorsal projecting SMN axons were primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early born primary motoneuron (PMN), we performed dual labeling studies, where both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the levels of nicotine and developmental exposure window. - Highlights: • Embryonic nicotine exposure can specifically affect secondary motoneuron axons in a dose-dependent manner.
Effect of Antenna Pointing Errors on SAR Imaging Considering the Change of the Point Target Location
Zhang, Xin; Liu, Shijie; Yu, Haifeng; Tong, Xiaohua; Huang, Guoman
2018-04-01
In spaceborne spotlight SAR, the antenna is steered by the SAR system in a specific pattern, so shaking of the internal mechanism is inevitable. Moreover, the external environment also affects the stability of the SAR platform. Both cause jitter in the SAR platform attitude. Platform attitude instability introduces antenna pointing errors in both the azimuth and range directions and influences the acquisition of the SAR raw data and the ultimate imaging quality. In this paper, the relations between the antenna pointing errors and the three-axis attitude errors are derived, and the relations between spaceborne spotlight SAR imaging of a point target and the antenna pointing errors are analysed based on paired-echo theory; the change of the azimuth antenna gain as the spotlight SAR platform moves ahead is also taken into account. The simulation experiments show that the effects of antenna pointing errors on spotlight SAR imaging depend on the target location; that is, the pointing errors of the antenna beam most severely affect the part of the illuminated scene far from the scene centre in the azimuth direction.
The Combined Effects of Measurement Error and Omitting Confounders in the Single-Mediator Model.
Fritz, Matthew S; Kenny, David A; MacKinnon, David P
2016-01-01
Mediation analysis requires a number of strong assumptions be met in order to make valid causal inferences. Failing to account for violations of these assumptions, such as not modeling measurement error or omitting a common cause of the effects in the model, can bias the parameter estimates of the mediated effect. When the independent variable is perfectly reliable, for example when participants are randomly assigned to levels of treatment, measurement error in the mediator tends to underestimate the mediated effect, while the omission of a confounding variable of the mediator-to-outcome relation tends to overestimate the mediated effect. Violations of these two assumptions often co-occur, however, in which case the mediated effect could be overestimated, underestimated, or even, in very rare circumstances, unbiased. To explore the combined effect of measurement error and omitted confounders in the same model, the effect of each violation on the single-mediator model is first examined individually. Then the combined effect of having measurement error and omitted confounders in the same model is discussed. Throughout, an empirical example is provided to illustrate the effect of violating these assumptions on the mediated effect.
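The attenuation effect described for an error-prone mediator is easy to reproduce by simulation (the path coefficients a = 0.5 and b = 0.4 and the error variance below are illustrative, not from the article): with X randomized, adding measurement error to M leaves the estimate of a intact but shrinks the estimate of b, so the product a·b is underestimated.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x = rng.normal(size=n)                 # randomized treatment (reliable)
m = 0.5 * x + rng.normal(size=n)       # mediator, true a = 0.5
y = 0.4 * m + rng.normal(size=n)       # outcome, true b = 0.4, no direct effect

m_obs = m + rng.normal(0, 1.0, n)      # error-prone mediator measurement

def mediated_effect(x, m, y):
    """a-hat from m ~ x; b-hat from y ~ m + x; mediated effect = a * b."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([m, x, np.ones(len(x))])
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]
    return a * b

true_ab = mediated_effect(x, m, y)       # close to 0.5 * 0.4 = 0.20
atten_ab = mediated_effect(x, m_obs, y)  # shrunk by mediator unreliability
print(true_ab, atten_ab)
```

Adding an omitted confounder of the M-to-Y path to this simulation would push the estimate in the opposite direction, which is the article's point about the two violations partially offsetting each other.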
Joch, Michael; Hegele, Mathias; Maurer, Heiko; Müller, Hermann; Maurer, Lisa Katharina
2017-07-01
The error (related) negativity (Ne/ERN) is an event-related potential in the electroencephalogram (EEG) correlating with error processing. Its conditions of appearance before terminal external error information suggest that the Ne/ERN is indicative of predictive processes in the evaluation of errors. The aim of the present study was to examine the Ne/ERN in a complex motor task and, in particular, to rule out explanations of the Ne/ERN other than error prediction processes. To this end, we focused on the dependency of the Ne/ERN on visual monitoring of the action outcome after movement termination but before result feedback (action effect monitoring). Participants performed a semi-virtual throwing task, using a manipulandum to throw a virtual ball displayed on a computer screen at a target object. Visual feedback about the ball flying to the target was masked to prevent action effect monitoring. Participants received static feedback about the action outcome (850 ms) after each trial. We found a significant negative deflection in the average EEG curves of the error trials peaking at ~250 ms after ball release, i.e., before error feedback. Furthermore, this Ne/ERN signal did not depend on visual ball-flight monitoring after release. We conclude that the Ne/ERN has the potential to indicate error prediction in motor tasks and that it exists even in the absence of action effect monitoring. NEW & NOTEWORTHY In this study, we are separating different kinds of possible contributors to an electroencephalogram (EEG) error correlate (Ne/ERN) in a throwing task. We tested the influence of action effect monitoring on the Ne/ERN amplitude in the EEG. We used a task that allows us to restrict movement correction and action effect monitoring and to control the onset of result feedback. We ascribe the Ne/ERN to predictive error processing where a conscious feeling of failure is not a prerequisite. Copyright © 2017 the American Physiological Society.
Effects and Correction of Closed Orbit Magnet Errors in the SNS Ring
Energy Technology Data Exchange (ETDEWEB)
Bunch, S.C.; Holmes, J.
2004-01-01
We consider the effect and correction of three types of orbit errors in SNS: quadrupole displacement errors, dipole displacement errors, and dipole field errors. Using the ORBIT beam dynamics code, we focus on orbit deflection of a standard pencil beam and on beam losses in a high intensity injection simulation. We study the correction of these orbit errors using the proposed system of 88 (44 horizontal and 44 vertical) ring beam position monitors (BPMs) and 52 (24 horizontal and 28 vertical) dipole corrector magnets. Correction is carried out numerically by adjusting the kick strengths of the dipole corrector magnets to minimize the sum of the squares of the BPM signals for the pencil beam. In addition to using the exact BPM signals as input to the correction algorithm, we also consider the effect of random BPM signal errors. For all three types of error and for perturbations of individual magnets, the correction algorithm always chooses the three-bump method to localize the orbit displacement to the region between the magnet and its adjacent correctors. The values of the BPM signals resulting from specified settings of the dipole corrector kick strengths can be used to set up the orbit response matrix, which can then be applied to the correction in the limit that the signals from the separate errors add linearly. When high intensity calculations are carried out to study beam losses, it is seen that the SNS orbit correction system, even with BPM uncertainties, is sufficient to correct losses to less than 10⁻⁴ in nearly all cases, even those for which uncorrected losses constitute a large portion of the beam.
Goedeme, Tim
2013-01-01
If estimates are based on samples, they should be accompanied by appropriate standard errors and confidence intervals. This is true for scientific research in general, and is even more important if estimates are used to inform and evaluate policy measures such as those aimed at attaining the Europe 2020 poverty reduction target. In this article I…
The Combined Effects of Measurement Error and Omitting Confounders in the Single-Mediator Model
Fritz, Matthew S.; Kenny, David A.; MacKinnon, David P.
2016-01-01
Mediation analysis requires a number of strong assumptions be met in order to make valid causal inferences. Failing to account for violations of these assumptions, such as not modeling measurement error or omitting a common cause of the effects in the model, can bias the parameter estimates of the mediated effect. When the independent variable is perfectly reliable, for example when participants are randomly assigned to levels of treatment, measurement error in the mediator tends to underestimate the mediated effect, while the omission of a confounding variable of the mediator to outcome relation tends to overestimate the mediated effect. Violations of these two assumptions often co-occur, however, in which case the mediated effect could be overestimated, underestimated, or even, in very rare circumstances, unbiased. In order to explore the combined effect of measurement error and omitted confounders in the same model, the impact of each violation on the single-mediator model is first examined individually. Then the combined effect of having measurement error and omitted confounders in the same model is discussed. Throughout, an empirical example is provided to illustrate the effect of violating these assumptions on the mediated effect. PMID:27739903
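The opposing biases described above can be reproduced in a small simulation. The sketch below uses a linear single-mediator model with a randomized X; all parameter values are illustrative assumptions, not taken from the article. It shows that measurement error in the mediator attenuates the estimated mediated effect, while an omitted confounder of the M-to-Y relation inflates it:

```python
import numpy as np

rng = np.random.default_rng(0)
n, a, b = 200_000, 0.5, 0.5            # true paths: X -a-> M -b-> Y, so ab = 0.25

def slope(dep, *preds):
    """OLS coefficient of the first listed predictor."""
    X = np.column_stack([np.ones(len(dep))] + list(preds))
    return np.linalg.lstsq(X, dep, rcond=None)[0][1]

x = rng.standard_normal(n)             # randomized, hence perfectly reliable
u = rng.standard_normal(n)             # unmodeled confounder of the M -> Y path
e_m, e_y = rng.standard_normal(n), rng.standard_normal(n)

# Case 1: measurement error in the mediator only -> attenuation
m1 = a * x + e_m
y1 = b * m1 + e_y
m1_obs = m1 + rng.standard_normal(n)   # unreliable measure of the mediator
ab_meas = slope(m1_obs, x) * slope(y1, m1_obs, x)

# Case 2: omitted confounder only -> inflation
m2 = a * x + 0.6 * u + e_m
y2 = b * m2 + 0.6 * u + e_y
ab_conf = slope(m2, x) * slope(y2, m2, x)

ab_true = a * b
```

With these settings the noisy mediator roughly halves the estimated b path, while the confounder inflates it, so the two estimates bracket the true mediated effect of 0.25.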
A new method for weakening the combined effect of residual errors on multibeam bathymetric data
Zhao, Jianhu; Yan, Jun; Zhang, Hongmei; Zhang, Yuqing; Wang, Aixue
2014-12-01
Multibeam bathymetric systems (MBS) have been widely applied in marine surveying to provide high-resolution seabed topography. However, some factors degrade the precision of bathymetry, including the sound velocity, the vessel attitude, the misalignment angle of the transducer and so on. Although these factors are corrected strictly in bathymetric data processing, the final bathymetric result is still affected by their residual errors. In deep water, the result usually cannot meet the requirements of high-precision seabed topography. The combined effect of these residual errors is systematic, and it is difficult to separate and weaken the effect using traditional single-error correction methods. Therefore, this paper puts forward a new method for weakening the effect of residual errors based on the frequency-spectrum characteristics of seabed topography and multibeam bathymetric data. Four steps are involved in the method: the separation of the low-frequency and the high-frequency part of the bathymetric data, the reconstruction of the trend of the actual seabed topography, the merging of the actual trend and the extracted microtopography, and the accuracy evaluation. Experimental results show that the proposed method can weaken the combined effect of residual errors on multibeam bathymetric data and efficiently improve the accuracy of the final post-processing results. We suggest that the method should be widely applied to MBS data processing in deep water.
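The core of the method — splitting the bathymetry into a low-frequency part (trend plus residual-error bias) and a high-frequency part (microtopography), then re-merging a cleaned trend with the extracted microtopography — can be sketched on a synthetic profile. Everything below (the profile shape, the moving-average filter width, the linear trend model) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

# Hypothetical depth profile: true trend + microtopography, contaminated by a
# slowly varying bias standing in for the combined residual errors.
x = np.linspace(0.0, 1.0, 512)
micro = 0.3 * np.sin(2 * np.pi * 40 * x)     # high-frequency seabed detail
trend = 50.0 + 5.0 * x                       # true large-scale trend
bias = 0.8 * np.sin(2 * np.pi * 1.5 * x)     # low-frequency residual-error ripple
depth = trend + bias + micro

def lowpass(sig, width=101):
    """Moving-average low-pass filter (edges handled by reflection)."""
    pad = width // 2
    padded = np.pad(sig, pad, mode="reflect")
    return np.convolve(padded, np.ones(width) / width, mode="valid")

low = lowpass(depth)                          # trend + residual-error bias
high = depth - low                            # extracted microtopography
clean_trend = np.polyval(np.polyfit(x, low, 1), x)  # reconstructed trend
corrected = clean_trend + high                # merged final result

truth = trend + micro
rms_before = np.sqrt(np.mean((depth - truth) ** 2))
rms_after = np.sqrt(np.mean((corrected - truth) ** 2))
```

In this toy setup the merged result removes most of the low-frequency bias while preserving the microtopography, which is the qualitative behavior the paper reports for real MBS data.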
Gold price effect on stock market: A Markov switching vector error correction approach
Wai, Phoong Seuk; Ismail, Mohd Tahir; Kun, Sek Siok
2014-06-01
Gold is a popular precious metal for which demand is driven not only by practical use but also by its role as a popular investment commodity. Since the stock market reflects a country's growth, the effect of the gold price on stock market behavior is of interest in this study. Markov switching vector error correction models are applied to analyze the relationship between gold price and stock market changes, since real financial data always exhibit regime switching, jumps or missing data through time. There are numerous specifications of Markov switching vector error correction models, and this paper compares the intercept-adjusted Markov switching vector error correction model with the intercept-adjusted heteroskedasticity Markov switching vector error correction model to determine the best model representation for capturing the transitions of the time series. Results show that the gold price has a positive relationship with the Malaysian, Thai and Indonesian stock markets, and that a two-regime intercept-adjusted heteroskedasticity Markov switching vector error correction model provides more significant and reliable results than the intercept-adjusted Markov switching vector error correction model.
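A minimal self-contained sketch of the kind of process an MS-VECM is meant to capture: a two-state Markov chain switches the intercept and the volatility of an error-correction equation. All parameters are invented for illustration; the paper estimates such models from actual gold and stock data rather than simulating them:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
P = np.array([[0.95, 0.05],         # regime transition probabilities
              [0.10, 0.90]])
mu = np.array([0.0005, -0.001])     # regime-dependent intercepts
sigma = np.array([0.005, 0.02])     # regime-dependent volatilities
alpha = -0.1                        # error-correction loading

s = np.zeros(T, dtype=int)          # simulate the hidden regime path
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])

z = np.zeros(T)                     # deviation from the long-run equilibrium
dy = np.zeros(T)                    # period-to-period change
for t in range(1, T):
    dy[t] = mu[s[t]] + alpha * z[t - 1] + sigma[s[t]] * rng.standard_normal()
    z[t] = z[t - 1] + dy[t]         # deviation mean-reverts via alpha

vol_calm = dy[s == 0].std()         # regime 0: calm market
vol_turbulent = dy[s == 1].std()    # regime 1: turbulent market
```

The simulated changes are much more volatile in the second regime, which is the heteroskedasticity that the preferred two-regime specification is designed to capture.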
On the effect of numerical errors in large eddy simulations of turbulent flows
International Nuclear Information System (INIS)
Kravchenko, A.G.; Moin, P.
1997-01-01
Aliased and dealiased numerical simulations of a turbulent channel flow are performed using spectral and finite difference methods. Analytical and numerical studies show that aliasing errors are more destructive for spectral and high-order finite-difference calculations than for low-order finite-difference simulations. Numerical errors have different effects for different forms of the nonlinear terms in the Navier-Stokes equations. For divergence and convective forms, spectral methods are energy-conserving only if dealiasing is performed. For skew-symmetric and rotational forms, both spectral and finite-difference methods are energy-conserving even in the presence of aliasing errors. It is shown that discrepancies between the results of dealiased spectral and standard nondealiased finite-difference methods are due to both aliasing and truncation errors with the latter being the leading source of differences. The relative importance of aliasing and truncation errors as compared to subgrid scale model terms in large eddy simulations is analyzed and discussed. For low-order finite-difference simulations, truncation errors can exceed the magnitude of the subgrid scale term. 25 refs., 17 figs., 1 tab
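The aliasing mechanism is easy to demonstrate in one dimension: squaring a resolved Fourier mode produces a wavenumber above the grid's Nyquist limit, which folds back onto a spurious low wavenumber unless the product is dealiased. The sketch below uses the standard 3/2-rule zero padding; the grid size and mode number are arbitrary choices, not taken from the paper:

```python
import numpy as np

N = 16
x = np.arange(N) * 2.0 * np.pi / N
u = np.sin(6 * x)                     # mode k = 6; u*u contains k = 12 > N/2 = 8

# Pseudo-spectral product without dealiasing: k = 12 folds onto k = 16 - 12 = 4
prod_alias = np.fft.rfft(u * u) / N

# 3/2 rule: evaluate the product on a 3N/2 grid, then truncate the spectrum
M = 3 * N // 2
uh = np.fft.rfft(u) / N
padded = np.zeros(M // 2 + 1, dtype=complex)
padded[: uh.size] = uh
u_fine = np.fft.irfft(padded, n=M) * M          # same signal on the finer grid
prod_dealias = (np.fft.rfft(u_fine * u_fine) / M)[: N // 2 + 1]

aliased_k4 = abs(prod_alias[4])       # spurious energy at k = 4
dealiased_k4 = abs(prod_dealias[4])   # ~0 once dealiased
```

On the coarse grid the k = 12 content of u² is indistinguishable from k = 4, contaminating a resolved mode; after padding, the product is computed exactly and the truncated spectrum carries no spurious energy.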
Effects of a direct refill program for automated dispensing cabinets on medication-refill errors.
Helmons, Pieter J; Dalton, Ashley J; Daniels, Charles E
2012-10-01
The effects of a direct refill program for automated dispensing cabinets (ADCs) on medication-refill errors were studied. This study was conducted in designated acute care areas of a 386-bed academic medical center. A wholesaler-to-ADC direct refill program, consisting of prepackaged delivery of medications and bar-code-assisted ADC refilling, was implemented in the inpatient pharmacy of the medical center in September 2009. Medication-refill errors in 26 ADCs from the general medicine units, the infant special care unit, the surgical and burn intensive care units, and intermediate units were assessed before and after the implementation of this program. Medication-refill errors were defined as an ADC pocket containing the wrong drug, wrong strength, or wrong dosage form. ADC refill errors decreased by 77%, from 62 errors per 6829 refilled pockets (0.91%) to 8 errors per 3855 refilled pockets (0.21%), a statistically significant reduction. The most common error type detected before the intervention was an incorrect medication (wrong drug, wrong strength, or wrong dosage form) in the ADC pocket. Of the 54 incorrect medications found before the intervention, 38 (70%) were loaded in a multiple-drug drawer. After the implementation of the new refill process, 3 of the 5 incorrect medications were loaded in a multiple-drug drawer. There were 3 instances of expired medications before and only 1 expired medication after implementation of the program. A redesign of the ADC refill process using a wholesaler-to-ADC direct refill program that included delivery of prepackaged medication and bar-code-assisted refill significantly decreased the occurrence of ADC refill errors.
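As a rough check on the reported effect, a simple two-proportion z-test (which the study itself may not have used) can be applied to the before/after counts quoted above:

```python
import math

err_pre, n_pre = 62, 6829       # 0.91% error rate before the program
err_post, n_post = 8, 3855      # 0.21% error rate after

p1, p2 = err_pre / n_pre, err_post / n_post
p_pool = (err_pre + err_post) / (n_pre + n_post)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_pre + 1 / n_post))
z = (p1 - p2) / se              # well above the 0.1% significance threshold
reduction = 1.0 - p2 / p1       # matches the reported 77% reduction
```

Even this crude test puts the difference far beyond chance, consistent with the study's conclusion of a significant decrease.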
Leniency programs and socially beneficial cooperation: Effects of type I errors
Directory of Open Access Journals (Sweden)
Natalia Pavlova
2016-12-01
This study operationalizes the concept of the hostility tradition in antitrust, as mentioned by Oliver Williamson and Ronald Coase, through erroneous law enforcement effects. The antitrust agency may commit type I, not just type II, errors when evaluating an agreement in terms of cartels. Moreover, firms can compete in a standard way, collude or engage in cooperative agreements that improve efficiency. The antitrust agency may misinterpret such cooperative agreements, committing a type I error (over-enforcement). The model set-up is drawn from Motta and Polo (2003) and is extended as described above using the findings of Ghebrihiwet and Motchenkova (2010). Three effects play a role in this environment. Type I errors may induce firms that would engage in socially efficient cooperation absent errors to opt for collusion (the deserved punishment effect). For other parameter configurations, type I errors may interrupt ongoing cooperation when investigated. In this case, the firms falsely report collusion and apply for leniency, fearing being erroneously fined (the disrupted cooperation effect). Finally, over-enforcement may prevent beneficial cooperation from starting given the threat of being mistakenly fined (the prevented cooperation effect). The results help us understand the negative impact that a hostility tradition in antitrust — which is more likely for inexperienced regimes and regimes with low standards of evidence — and the resulting type I enforcement errors can have on social welfare when applied to the regulation of horizontal agreements. Additional interpretations are discussed in light of leniency programs for corruption and compliance policies for antitrust violations.
A study of the effect of measurement error in predictor variables in nondestructive assay
International Nuclear Information System (INIS)
Burr, Tom L.; Knepper, Paula L.
2000-01-01
It is not widely known that ordinary least squares estimates exhibit bias if there are errors in the predictor variables. For example, enrichment measurements are often fit to two predictors: Poisson-distributed count rates in the region of interest and in the background. Both count rates have at least random variation due to counting statistics. Therefore, the parameter estimates will be biased. In this case, the effect of bias is a minor issue because there is almost no interest in the parameters themselves. Instead, the parameters will be used to convert count rates into estimated enrichment. In other cases, this bias source is potentially more important. For example, in tomographic gamma scanning, there is an emission stage which depends on predictors (the 'system matrix') that are estimated with error during the transmission stage. In this paper, we provide background information on the impact and treatment of errors in predictors, present results of candidate methods of compensating for the effect, review some of the nondestructive assay situations where errors in predictors occur, and provide guidance for when errors in predictors should be considered in nondestructive assay.
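The attenuation bias from errors in a predictor is easy to reproduce. The sketch below (arbitrary slope and noise levels, not the enrichment application itself) shows that when the predictor's measurement-error variance equals the predictor's variance, the OLS slope shrinks toward half its true value:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0

x_true = rng.standard_normal(n)
y = beta * x_true + 0.1 * rng.standard_normal(n)
x_obs = x_true + rng.standard_normal(n)   # predictor observed with unit-variance error

slope_true = np.polyfit(x_true, y, 1)[0]  # ~ beta
slope_obs = np.polyfit(x_obs, y, 1)[0]    # ~ beta * var(x) / (var(x) + 1) = beta / 2
```

The attenuation factor var(x)/(var(x) + var(error)) is the classical errors-in-variables result: the fit through the error-free predictor recovers the true slope, while the fit through the noisy predictor is biased toward zero.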
Effects of systematic sampling on satellite estimates of deforestation rates
International Nuclear Information System (INIS)
Steininger, M K; Godoy, F; Harper, G
2009-01-01
Options for satellite monitoring of deforestation rates over large areas include the use of sampling. Sampling may reduce the cost of monitoring but is also a source of error in estimates of areas and rates. A common sampling approach is systematic sampling, in which sample units of a constant size are distributed in some regular manner, such as a grid. The proposed approach for the 2010 Forest Resources Assessment (FRA) of the UN Food and Agriculture Organization (FAO) is a systematic sample of 10 km wide squares at every 1 deg. intersection of latitude and longitude. We assessed the outcome of this and other systematic samples for estimating deforestation at national, sub-national and continental levels. The study is based on digital data on deforestation patterns for the five Amazonian countries outside Brazil plus the Brazilian Amazon. We tested these schemes by varying sample-unit size and frequency. We calculated two estimates of sampling error. First we calculated the standard errors, based on the size, variance and covariance of the samples, and from this calculated the 95% confidence intervals (CI). Second, we calculated the actual errors, based on the difference between the sample-based estimates and the estimates from the full-coverage maps. At the continental level, the 1 deg., 10 km scheme had a CI of 21% and an actual error of 8%. At the national level, this scheme had CIs of 126% for Ecuador and up to 67% for other countries. At this level, increasing sampling density to every 0.25 deg. produced a CI of 32% for Ecuador and CIs of up to 25% for other countries, with only Brazil having a CI of less than 10%. Actual errors were within the limits of the CIs in all but two of the 56 cases. Actual errors were half or less of the CIs in all but eight of these cases. These results indicate that the FRA 2010 should have CIs of smaller than or close to 10% at the continental level. However, systematic sampling at the national level yields large CIs unless the
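The contrast between a model-based confidence interval and the actual error can be explored on a synthetic map. The sketch below uses an invented clustered "deforestation" field (not the Amazon data) and draws systematic grids of 10-unit sample squares at two spacings, comparing their standard errors:

```python
import numpy as np

rng = np.random.default_rng(2)

side = 1000
field = np.zeros((side, side), dtype=bool)
yy, xx = np.ogrid[:side, :side]
for _ in range(60):                          # 60 random circular patches
    cx, cy = rng.integers(0, side, size=2)
    r = rng.integers(10, 60)
    field |= (xx - cx) ** 2 + (yy - cy) ** 2 < r ** 2

true_rate = field.mean()                     # "full-coverage map" answer

def systematic_estimate(spacing, unit=10):
    """Mean and standard error over unit x unit squares on a regular grid."""
    vals = [field[i:i + unit, j:j + unit].mean()
            for i in range(0, side - unit, spacing)
            for j in range(0, side - unit, spacing)]
    return np.mean(vals), np.std(vals, ddof=1) / np.sqrt(len(vals))

est_dense, se_dense = systematic_estimate(50)     # dense grid: 400 sample units
est_sparse, se_sparse = systematic_estimate(200)  # sparse grid: 25 sample units
err_dense = abs(est_dense - true_rate)            # "actual error" vs the full map
```

As in the paper, densifying the grid shrinks the standard error, and the actual error of the dense estimate can be checked directly against the full-coverage truth.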
International Nuclear Information System (INIS)
Burr, T.L.; Mercer, D.J.; Prettyman, T.H.
1998-01-01
Field experience with the tomographic gamma scanner to assay nuclear material suggests that the analysis techniques can significantly impact the assay uncertainty. For example, currently implemented image reconstruction methods exhibit a positive bias for low-activity samples. Preliminary studies indicate that bias reduction could be achieved at the expense of increased random error variance. In this paper, the authors examine three possible bias sources: (1) measurement error in the estimated transmission matrix, (2) the positivity constraint on the estimated mass of nuclear material, and (3) improper treatment of the measurement error structure. The authors present results from many small-scale simulation studies to examine this bias/variance tradeoff for a few image reconstruction methods in the presence of the three possible bias sources
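Bias source (2), the positivity constraint, can be illustrated directly: clipping an unbiased but noisy mass estimate at zero shifts its mean upward, and for a true value of zero the shift equals σ/√(2π). The numbers below are generic, not calibrated to any assay system:

```python
import numpy as np

rng = np.random.default_rng(3)

true_mass = 0.0                                    # a blank, low-activity sample
raw = true_mass + rng.normal(0.0, 1.0, 100_000)    # unbiased noisy estimates
constrained = np.clip(raw, 0.0, None)              # enforce physical positivity

bias_raw = raw.mean() - true_mass                  # ~ 0
bias_constrained = constrained.mean() - true_mass  # ~ 1/sqrt(2*pi) ≈ 0.399
```

This is the mechanism behind the positive bias reported for low-activity samples: the constraint discards the negative half of the noise distribution while keeping the positive half.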
Directory of Open Access Journals (Sweden)
Shuting Wan
2015-06-01
Natural wind is stochastic: its speed and direction change randomly and frequently. Because of the lag in the control system and in the yaw body itself, wind turbines cannot be accurately aligned with the wind direction when the wind speed and direction change frequently. Thus, wind turbines often suffer from a series of engineering issues during operation, including frequent yawing, vibration overruns and downtime. This paper studies the effects of yaw error on wind turbine running characteristics at different wind speeds and control stages by establishing a wind turbine model, a yaw error model and an equivalent wind speed model that includes the wind shear and tower shadow effects. Formulas for the relevant effect coefficients Tc, Sc and Pc are derived. The simulation results indicate that the effects of yaw error on aerodynamic torque, rotor speed and power output differ across running stages and that the effect rules for each coefficient are not identical as the yaw error varies. These results may provide theoretical support for optimizing the yaw control strategies for each stage to increase the running stability of wind turbines and the utilization rate of wind energy.
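As a rough illustration of why yaw error matters for energy capture, a commonly used first-order approximation (not derived in this paper, whose coefficients Tc, Sc and Pc are stage-dependent) takes the power coefficient to fall off as cos³ of the yaw error:

```python
import numpy as np

# Illustrative cos^3 rule of thumb for yaw-misalignment power loss
yaw_deg = np.array([0.0, 5.0, 10.0, 20.0, 30.0])
pc = np.cos(np.radians(yaw_deg)) ** 3   # power ratio relative to perfect alignment

loss_at_10 = 1.0 - pc[2]   # ~4.5% of power lost at 10 degrees of yaw error
loss_at_30 = 1.0 - pc[4]   # ~35% lost at 30 degrees
```

Even this crude rule shows that small persistent yaw errors cost a few percent of output, while large transient errors during frequent wind-direction changes are far more damaging.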
Effect of Error Augmentation on Brain Activation and Motor Learning of a Complex Locomotor Task
Directory of Open Access Journals (Sweden)
Laura Marchal-Crespo
2017-09-01
To date, the functional gains obtained after robot-aided gait rehabilitation training are limited. Error-augmenting strategies have great potential to enhance motor learning of simple motor tasks. However, little is known about the effect of these error-modulating strategies on complex tasks, such as relearning to walk after a neurologic accident. Additionally, neuroimaging evaluation of brain regions involved in learning processes could provide valuable information on behavioral outcomes. We investigated the effect of robotic training strategies that augment errors — error amplification and random force disturbance — and training without perturbations on brain activation and motor learning of a complex locomotor task. Thirty-four healthy subjects performed the experiment with a robotic stepper (MARCOS) in a 1.5 T MR scanner. The task consisted of tracking a Lissajous figure presented on a display by coordinating the legs in a gait-like movement pattern. Behavioral results showed that training without perturbations enhanced motor learning in initially less skilled subjects, while error amplification benefited better-skilled subjects. Training with error amplification, however, hampered transfer of learning. Randomly disturbing forces induced learning and promoted transfer in all subjects, probably because the unexpected forces increased subjects' attention. Functional MRI revealed main effects of training strategy and skill level during training. A main effect of training strategy was seen in brain regions typically associated with motor control and learning, such as the basal ganglia, cerebellum, intraparietal sulcus (IPS), and angular gyrus. In particular, random disturbance and no perturbation led to stronger brain activation than error amplification in similar brain regions. Skill-level-related effects were observed in the IPS, in parts of the superior parietal lobe (SPL), i.e., the precuneus, and in the temporal cortex. These neuroimaging findings
Directory of Open Access Journals (Sweden)
I Alimohammadi
2012-12-01
Background and Aims: Traffic noise is one of the most important sources of urban noise pollution; it causes various physical and mental effects, impairs daily activities, disturbs sleep, damages hearing and degrades job performance. It can also significantly reduce concentration and increase the rate of traffic accidents. Individual differences, such as personality type, can influence the effects of noise. Methods: Traffic noise was measured and recorded in 10 arterial streets in Tehran; the average sound pressure level was 72.9 dB. The recording was played to participants for two hours in an acoustic room. The sample consisted of 80 subjects (40 cases and 40 controls) who were students of Tehran University of Medical Sciences. Personality type was determined using the Eysenck Personality Inventory (EPI) questionnaire. The time error of movement anticipation before and after exposure to traffic noise was measured with a computerized ZBA test. Results: Before exposure to traffic noise, the time error of movement anticipation differed significantly between introverts and extraverts: introverts made smaller errors than extraverts, whereas after exposure extraverts made smaller errors than introverts. Conclusion: According to the results, noise affected performance differently depending on personality type. Extraverts may be expected to adapt better to noise during mental performance than people with the opposite personality trait.
Effects of OCR Errors on Ranking and Feedback Using the Vector Space Model.
Taghva, Kazem; And Others
1996-01-01
Reports on the performance of the vector space model in the presence of OCR (optical character recognition) errors in information retrieval. Highlights include precision and recall, a full-text test collection, smart vector representation, impact of weighting parameters, ranking variability, and the effect of relevance feedback. (Author/LRW)
Analysis of interactive fixed effects dynamic linear panel regression with measurement error
Nayoung Lee; Hyungsik Roger Moon; Martin Weidner
2011-01-01
This paper studies a simple dynamic panel linear regression model with interactive fixed effects in which the variable of interest is measured with error. To estimate the dynamic coefficient, we consider the least-squares minimum distance (LS-MD) estimation method.
The Effect of Error in Item Parameter Estimates on the Test Response Function Method of Linking.
Kaskowitz, Gary S.; De Ayala, R. J.
2001-01-01
Studied the effect of item parameter estimation for computation of linking coefficients for the test response function (TRF) linking/equating method. Simulation results showed that linking was more accurate when there was less error in the parameter estimates, and that 15 or 25 common items provided better results than 5 common items under both…
A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes
D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)
2005-01-01
The authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format which allows to disentangle the immediate
Effects of Crew Resource Management Training on Medical Errors in a Simulated Prehospital Setting
Carhart, Elliot D.
2012-01-01
This applied dissertation investigated the effect of crew resource management (CRM) training on medical errors in a simulated prehospital setting. Specific areas addressed by this program included situational awareness, decision making, task management, teamwork, and communication. This study is believed to be the first investigation of CRM…
Can i just check...? Effects of edit check questions on measurement error and survey estimates
Lugtig, Peter; Jäckle, Annette
2014-01-01
Household income is difficult to measure, since it requires the collection of information about all potential income sources for each member of a household. We assess the effects of two types of edit check questions on measurement error and survey estimates: within-wave edit checks use responses to
Hsiao, Yu-Yu; Kwok, Oi-Man; Lai, Mark H. C.
2018-01-01
Path models with observed composites based on multiple items (e.g., mean or sum score of the items) are commonly used to test interaction effects. Under this practice, researchers generally assume that the observed composites are measured without errors. In this study, we reviewed and evaluated two alternative methods within the structural…
Learning Correct Responses and Errors in the Hebb Repetition Effect: Two Faces of the Same Coin
Couture, Mathieu; Lafond, Daniel; Tremblay, Sebastien
2008-01-01
In a serial recall task, the "Hebb repetition effect" occurs when recall performance improves for a sequence repeated throughout the experimental session. This phenomenon has been replicated many times. Nevertheless, such cumulative learning seldom leads to perfect recall of the whole sequence, and errors persist. Here the authors report…
International Nuclear Information System (INIS)
Liang, Fusheng; Zhao, Ji; Ji, Shijun; Zhang, Bing; Fan, Cheng
2017-01-01
The B-spline curve has been widely used in the reconstruction of measurement data. Error-bounded reconstruction of sampling points can be achieved by knot addition method (KAM) based B-spline curve fitting. In KAM, the selection pattern of the initial knot vector has been associated with the ultimately necessary number of knots. This paper provides a novel initial knot selection method to condense the knot vector required for error-bounded B-spline curve fitting. The initial knots are determined by the distribution of features, which include the chord length (arc length) and bending degree (curvature) contained in the discrete sampling points. Firstly, the sampling points are fitted into an approximate B-spline curve Gs with an intensively uniform knot vector to substitute for the description of the features of the sampling points. The feature integral of Gs is built as a monotonically increasing function in analytic form. Then, the initial knots are selected according to constant increments of the feature integral. After that, an iterative knot insertion (IKI) process starting from the initial knots is introduced to improve the fitting precision, and the ultimate knot vector for the error-bounded B-spline curve fitting is achieved. Lastly, two simulations and a measurement experiment are provided, and the results indicate that the proposed knot selection method can reduce the number of knots ultimately required. (paper)
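The knot-selection idea can be sketched with SciPy. The test curve, the smoothing-spline stand-in for Gs, and the feature weights below are all illustrative assumptions rather than the paper's exact construction; the point is that knots placed at constant increments of the feature integral concentrate where the curve bends:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline, UnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 400)
y = np.tanh(5.0 * (x - 2.0)) + 0.01 * rng.standard_normal(x.size)  # bend near x = 2

# Step 1: approximate smoothing spline standing in for Gs; its derivatives
# give a noise-robust description of the data's features.
gs = UnivariateSpline(x, y, s=x.size * 0.01 ** 2)
dy, d2y = gs.derivative(1)(x), gs.derivative(2)(x)

# Step 2: feature = chord length + bending degree (the weight 2.0 is arbitrary)
feature = np.sqrt(1.0 + dy ** 2) + 2.0 * np.abs(d2y) / (1.0 + dy ** 2) ** 1.5
F = np.concatenate([[0.0],
                    np.cumsum(0.5 * (feature[1:] + feature[:-1]) * np.diff(x))])

# Step 3: initial knots at constant increments of the feature integral
n_knots = 12
targets = np.linspace(0.0, F[-1], n_knots + 2)[1:-1]
knots = np.interp(targets, F, x)

spl = LSQUnivariateSpline(x, y, knots)         # least-squares fit on those knots
rmse = np.sqrt(np.mean((spl(x) - y) ** 2))
```

Because most of the feature mass sits in the steep transition, most knots land near x = 2, and a small knot budget already fits the data down to roughly the noise level.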
On the effect of systematic errors in near real time accountancy
International Nuclear Information System (INIS)
Avenhaus, R.
1987-01-01
Systematic measurement errors have a decisive impact on nuclear materials accountancy. This has been demonstrated on various occasions for a fixed number of inventory periods, i.e. for situations where the overall probability of detection is taken as the measure of effectiveness. In the framework of Near Real Time Accountancy (NRTA), however, such analyses have not yet been performed. In this paper sequential test procedures are considered which are based on the so-called MUF residuals. It is shown that, if the decision maker does not know the systematic error variance, the average run lengths tend towards infinity if this variance is equal to or larger than that of the random error. Furthermore, if the decision maker knows this variance, the average run length for constant loss or diversion is not shorter than that without loss or diversion. These results cast some doubt on the present practice of data evaluation, where systematic errors are tacitly assumed to persist for an infinite time. In fact, information about the time dependence of the variances of these errors has to be gathered in order that the efficiency of NRTA evaluation methods can be estimated realistically.
Effect of DM Actuator Errors on the WFIRST/AFTA Coronagraph Contrast Performance
Sidick, Erkin; Shi, Fang
2015-01-01
The WFIRST/AFTA 2.4 m space telescope currently under study includes a stellar coronagraph for the imaging and spectral characterization of extrasolar planets. The coronagraph employs two sequential deformable mirrors (DMs) to compensate for phase and amplitude errors in creating dark holes. DMs are critical elements in high-contrast coronagraphs, requiring precision and stability measured in picometers to enable detection of Earth-like exoplanets. Working with a low-order wavefront sensor, the DM that is conjugate to a pupil can also be used to correct low-order wavefront drift during a scientific observation. However, not all actuators in a DM have the same gain. When such a DM is used in the low-order wavefront sensing and control subsystem, the actuator gain errors introduce high-spatial-frequency errors to the DM surface and thus worsen the contrast performance of the coronagraph. We have investigated the effects of actuator gain errors and actuator command digitization errors on the contrast performance of the coronagraph through modeling and simulations, and we present our results in this paper.
[Nature or nurture: effects of parental ametropia on children's refractive errors].
Landmann, A; Bechrakis, E
2013-12-01
The aim of this study was to quantify the degree of association between juvenile refraction errors and parental refraction status. Using a simple questionnaire we conducted a cross-sectional study to determine the prevalence and magnitudes of refractive errors and of parental refraction status in a sample (n=728) of 10- to 18-year-old Austrian grammar school students. Students with myopia or hyperopia were more likely to have ametropic parents and refraction was more myopic in juveniles with one or two parents being ametropic. The prevalence of myopia in children with 2 ametropic parents was 54%, decreasing to 35% in pupils with 1 and to 13% in children with no ametropic parents. The odds ratio for 1 and 2 compared with no ametropic parents was 8.3 and 3.7 for myopia and 1.3 and 1.6 for hyperopia, respectively. Furthermore, the data indicate a stronger influence of the maternal ametropia on children's refractive errors than paternal ametropia. Genetic factors play a significant role in refractive error and may be of dominant influence for school myopia under conditions of low environmental variation.
Effects of errors and gaps in spatial data sets on assessment of conservation progress.
Visconti, P; Di Marco, M; Álvarez-Romero, J G; Januchowski-Hartley, S R; Pressey, R L; Weeks, R; Rondinini, C
2013-10-01
Data on the location and extent of protected areas, ecosystems, and species' distributions are essential for determining gaps in biodiversity protection and identifying future conservation priorities. However, these data sets always come with errors in the maps and associated metadata. Errors are often overlooked in conservation studies, despite their potential negative effects on the reported extent of protection of species and ecosystems. We used 3 case studies to illustrate the implications of 3 sources of errors in reporting progress toward conservation objectives: protected areas with unknown boundaries that are replaced by buffered centroids, propagation of multiple errors in spatial data, and incomplete protected-area data sets. As of 2010, the frequency of protected areas with unknown boundaries in the World Database on Protected Areas (WDPA) caused the estimated extent of protection of 37.1% of the terrestrial Neotropical mammals to be overestimated by an average 402.8% and of 62.6% of species to be underestimated by an average 10.9%. Estimated level of protection of the world's coral reefs was 25% higher when using recent finer-resolution data on coral reefs as opposed to globally available coarse-resolution data. Accounting for additional data sets not yet incorporated into WDPA contributed up to 6.7% of additional protection to marine ecosystems in the Philippines. We suggest ways for data providers to reduce the errors in spatial and ancillary data and ways for data users to mitigate the effects of these errors on biodiversity assessments. © 2013 Society for Conservation Biology.
The effects of field errors on low-gain free-electron lasers
International Nuclear Information System (INIS)
Esarey, E.; Tang, C.M.; Marable, W.P.
1991-01-01
This paper examines, analytically and numerically using ensemble averaging techniques, the effects of random wiggler magnetic field errors on low-gain free-electron lasers. Wiggler field errors perturb the electron beam as it propagates and lead to a random walk of the beam centroid δx, variations in the axial beam energy δγ_z, and deviations in the relative phase of the electrons in the ponderomotive wave δψ. In principle, the random walk may be kept as small as desired through the use of transverse focusing and beam steering. Transverse focusing of the electron beam is shown to be ineffective in reducing the phase deviation. Furthermore, it is shown that beam steering at the wiggler entrance reduces the average phase deviation at the end of the wiggler by 1/3. The effect of the field errors (via the phase deviation) on the gain in the low-gain regime is calculated. To avoid significant reduction in gain, the phase deviation must be small compared with 2π. The detrimental effects of wiggler errors on low-gain free-electron lasers may be reduced by arranging the magnet poles in an optimal ordering such that the magnitude of the phase deviation is minimized.
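The random-walk mechanism described above can be illustrated with a minimal simulation, assuming an independent random angular kick per wiggler period (all magnitudes illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid_variance(n_periods: int, kick_rms: float = 1e-6,
                      trials: int = 4000) -> float:
    """Variance of the beam-centroid displacement after n_periods,
    with an independent random angular kick per period."""
    kicks = rng.normal(0.0, kick_rms, size=(trials, n_periods))
    angles = np.cumsum(kicks, axis=1)      # angle x' performs a random walk
    positions = np.cumsum(angles, axis=1)  # position x integrates the angle
    return positions[:, -1].var()

v50, v100 = centroid_variance(50), centroid_variance(100)
# Double integration of white noise: var(x) grows roughly as N^3,
# so doubling the wiggler length multiplies the variance by ~8.
print(v100 / v50)
```

This growth is why the paper emphasizes transverse focusing and steering: without them, the centroid walk compounds rapidly with wiggler length.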
Radiation effects and soft errors in integrated circuits and electronic devices
Fleetwood, D M
2004-01-01
This book provides a detailed treatment of radiation effects in electronic devices, including effects at the material, device, and circuit levels. The emphasis is on transient effects caused by single ionizing particles (single-event effects and soft errors) and effects produced by the cumulative energy deposited by the radiation (total ionizing dose effects). Bipolar (Si and SiGe), metal-oxide-semiconductor (MOS), and compound semiconductor technologies are discussed. In addition to considering the specific issues associated with high-performance devices and technologies, the book includes th
DEFF Research Database (Denmark)
Ji, Hua; Hu, Hao; Galili, Michael
2010-01-01
We experimentally demonstrate 640 Gbit/s and 1.28 Tbit/s serial data optical waveform sampling and 640-to-10 Gbit/s and 1.28 Tbit/s-to-10 Gbit/s error-free demultiplexing using four-wave mixing in a 300 nm × 450 nm × 5 mm silicon nanowire.
International Nuclear Information System (INIS)
Hardcastle, Nicholas; Bender, Edward T.; Tomé, Wolfgang A.
2014-01-01
It has previously been shown that deformable image registrations (DIRs) often result in deformation maps that are neither inverse-consistent nor transitive, and that dose accumulation based on these deformation maps can be inconsistent if different image pathways are used. A previously presented method to reduce inverse consistency and transitivity errors has been shown to result in more consistent dose accumulation, regardless of the image pathway selected. The present study investigates the effect on dose accumulation accuracy of deformation maps processed to reduce inverse consistency and transitivity errors. A set of lung 4DCT phases was analysed, consisting of four images on which a dose grid was created. Dose to 75 corresponding anatomical locations was manually tracked. Dose accumulation was performed between all image sets with Demons-derived deformation maps as well as with deformation maps processed to reduce inverse consistency and transitivity errors. The ground truth accumulated dose was then compared with the accumulated dose derived from DIR. Two dose accumulation image pathways were considered. The post-processing method to reduce inverse consistency and transitivity errors had minimal effect on dose accumulation accuracy. There was a statistically significant improvement in dose accumulation accuracy for one pathway, but no statistically significant difference for the other. A post-processing technique to reduce inverse consistency and transitivity errors thus has a positive, yet minimal, effect on dose accumulation accuracy: it improves the consistency of dose accumulation with minimal effect on its accuracy.
Estimation of error on the cross-correlation, phase and time lag between evenly sampled light curves
Misra, R.; Bora, A.; Dewangan, G.
2018-04-01
Temporal analysis of radiation from astrophysical sources such as active galactic nuclei, X-ray binaries and gamma-ray bursts provides information on the geometry and sizes of the emitting regions. Establishing that two light curves in different energy bands are correlated, and measuring the phase and time lag between them, is an important and frequently used temporal diagnostic. Generally, the estimates are made by dividing the light curves into a large number of adjacent intervals to find the variance, or by using numerically expensive simulations. In this work we present alternative expressions for estimating the errors on the cross-correlation, phase and time lag between two shorter light curves that cannot be divided into segments. The estimates presented here thus allow analysis of light curves with a relatively small number of points, as well as access to the longest time-scales available. The expressions have been tested using 200 light curves simulated from both white and 1/f stochastic processes with measurement errors. We also present an application to the XMM-Newton light curves of the active galactic nucleus Akn 564. The example shows that the estimates presented here allow analysis of light curves with a relatively small (∼1000) number of points.
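For contrast, the conventional segment-based estimate that the authors improve upon can be sketched as follows (the paper's closed-form error expressions are not reproduced here; the mock light curves and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def ccf_with_error(x, y, n_seg=8):
    """Zero-lag cross-correlation coefficient and its error, estimated by
    splitting the light curves into n_seg adjacent segments."""
    xs, ys = np.array_split(x, n_seg), np.array_split(y, n_seg)
    r_seg = [np.corrcoef(a, b)[0, 1] for a, b in zip(xs, ys)]
    return np.mean(r_seg), np.std(r_seg, ddof=1) / np.sqrt(n_seg)

# Two correlated mock light curves with independent measurement noise
signal = rng.normal(size=4096)
x = signal + 0.5 * rng.normal(size=4096)
y = signal + 0.5 * rng.normal(size=4096)

r, dr = ccf_with_error(x, y)
print(f"r = {r:.2f} +/- {dr:.2f}")
```

Splitting into segments sacrifices the longest time-scales, which is exactly the limitation the paper's direct error estimates are designed to avoid.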
Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A
2015-03-15
The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often approaching 100%). The only exception was when the categorized variable was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
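A minimal Monte Carlo version of the effect described above, assuming linear regression with a median-dichotomized normal confounder and illustrative effect sizes (not the paper's exact scenarios):

```python
import numpy as np

rng = np.random.default_rng(2)

def type1_rate(n=500, sims=400, categorize=True):
    """Fraction of simulations in which the exposure is declared significant
    although it has NO true effect on the outcome (Type-I error rate)."""
    rejections = 0
    for _ in range(sims):
        c = rng.normal(size=n)          # continuous confounder
        x = c + rng.normal(size=n)      # exposure, confounded by c
        y = c + rng.normal(size=n)      # outcome depends only on c, not on x
        c_adj = (c > np.median(c)).astype(float) if categorize else c
        X = np.column_stack([np.ones(n), x, c_adj])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (n - X.shape[1])
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        rejections += abs(beta[1] / se) > 1.96
    return rejections / sims

r_cont = type1_rate(categorize=False)  # ~0.05: adjusting for the true confounder
r_cat = type1_rate(categorize=True)    # far above 0.05: residual confounding
print(r_cont, r_cat)
```

Dichotomizing the confounder leaves residual confounding that does not shrink with sample size, which is why the abstract reports inflation growing with n.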
Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.
2012-01-01
The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery to achieve the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors, including those associated with LiDAR footprint sampling over regional to global extents. A general framework for mapping above-ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100 m, 250 m, 500 m, and 1 km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data are most useful for estimating AGB when measurements from LiDAR are limited because they minimized
The Effect of an Electronic Checklist on Critical Care Provider Workload, Errors, and Performance.
Thongprayoon, Charat; Harrison, Andrew M; O'Horo, John C; Berrios, Ronaldo A Sevilla; Pickering, Brian W; Herasevich, Vitaly
2016-03-01
The strategy used to improve effective checklist use in the intensive care unit (ICU) setting is essential for checklist success. This study aimed to test the hypothesis that an electronic checklist could reduce ICU provider workload, errors, and time to checklist completion, as compared to a paper checklist. This was a simulation-based study conducted at an academic tertiary hospital. All participants completed checklists for 6 ICU patients: 3 using an electronic checklist and 3 using an identical paper checklist. In both scenarios, participants had full access to the existing electronic medical record system. The outcomes measured were workload (defined using the National Aeronautics and Space Administration task load index [NASA-TLX]), the number of checklist errors, and time to checklist completion. Two independent clinician reviewers, blinded to participant results, served as the reference standard for checklist error calculation. Twenty-one ICU providers participated in this study. This resulted in the generation of 63 simulated electronic checklists and 63 simulated paper checklists. The median NASA-TLX score was 39 for the electronic checklist and 50 for the paper checklist (P = .005). The median number of checklist errors for the electronic checklist was 5, while the median number of checklist errors for the paper checklist was 8 (P = .003). The time to checklist completion was not significantly different between the 2 checklist formats (P = .76). The electronic checklist significantly reduced provider workload and errors without any measurable difference in the amount of time required for checklist completion. This demonstrates that electronic checklists are feasible and desirable in the ICU setting. © The Author(s) 2014.
The effects of forecast errors on the merchandising of wind power
International Nuclear Information System (INIS)
Roon, Serafin von
2012-01-01
A permanent balance between consumption and generation is essential for a stable supply of electricity. In order to ensure this balance, all relevant load data have to be announced for the following day. Consequently, a day-ahead forecast of wind power generation is required, which also forms the basis for the sale of the wind power on the wholesale market. The main subject of the study is short-term balancing power, which compensates wind power forecast errors at short notice. These forecast errors affect the revenues and expenses from selling and buying power in the day-ahead, intraday and balancing energy markets. The price effects resulting from the forecast errors are derived from an empirical analysis. In a scenario for the year 2020, the potential of conventional power plants to supply power at short notice is evaluated from a technical and economic point of view by a time series analysis and a unit commitment simulation.
Nee, Derek Evan; Kastner, Sabine; Brown, Joshua W
2011-01-01
The last decade has seen considerable discussion regarding a theoretical account of medial prefrontal cortex (mPFC) function with particular focus on the anterior cingulate cortex. The proposed theories have included conflict detection, error likelihood prediction, volatility monitoring, and several distinct theories of error detection. Arguments for and against particular theories often treat mPFC as functionally homogeneous, or at least nearly so, despite some evidence for distinct functional subregions. Here we used functional magnetic resonance imaging (fMRI) to simultaneously contrast multiple effects of error, conflict, and task-switching that have been individually construed in support of various theories. We found overlapping yet functionally distinct subregions of mPFC, with activations related to dominant error, conflict, and task-switching effects successively found along a rostral-ventral to caudal-dorsal gradient within medial prefrontal cortex. Activations in the rostral cingulate zone (RCZ) were strongly correlated with the unexpectedness of outcomes suggesting a role in outcome prediction and preparing control systems to deal with anticipated outcomes. The results as a whole support a resolution of some ongoing debates in that distinct theories may each pertain to corresponding distinct yet overlapping subregions of mPFC. Copyright © 2010 Elsevier Inc. All rights reserved.
Sanderson, Eleanor; Macdonald-Wallis, Corrie; Davey Smith, George
2018-04-01
Negative control exposure studies are increasingly being used in epidemiological studies to strengthen causal inference regarding an exposure-outcome association when unobserved confounding is thought to be present. A negative control is a variable that has no causal effect on the outcome but is associated with the unmeasured confounders in the same way as the exposure; such studies contrast the magnitude of the negative control-outcome association with that of the exposure-outcome association. A markedly larger effect of the exposure on the outcome than of the negative control on the outcome strengthens the inference that the exposure has a causal effect on the outcome. We investigate the effect of measurement error in the exposure and negative control variables on the results obtained from a negative control exposure study. We do this in models with continuous and binary exposure and negative control variables, using analysis of the bias of the estimated coefficients and Monte Carlo simulations. Our results show that measurement error in either the exposure or negative control variables can bias the estimated results of the negative control exposure study. Measurement error is common in the variables used in epidemiological studies; these results show that negative control exposure studies cannot be used to precisely determine the size of the effect of the exposure variable, or to adequately adjust for unobserved confounding; however, they can be used as part of a body of evidence to aid inference as to whether a causal effect of the exposure on the outcome is present.
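The attenuating effect of classical measurement error on a negative control association can be sketched with a small simulation (continuous variables, illustrative parameters; this is not the authors' exact model):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

u = rng.normal(size=n)                # unmeasured confounder
x = u + rng.normal(size=n)            # exposure (measured without error here)
nc_true = u + rng.normal(size=n)      # negative control: confounded like x,
                                      # but with no causal effect on y
y = 0.5 * x + u + rng.normal(size=n)  # outcome: true causal effect of x is 0.5

def slope(a, b):
    """OLS slope of b regressed on a."""
    return np.cov(a, b)[0, 1] / np.var(a)

nc_noisy = nc_true + rng.normal(scale=2.0, size=n)  # heavy measurement error

s_x = slope(x, y)            # exposure association (causal + confounded)
s_nc = slope(nc_true, y)     # negative-control association (confounding only)
s_noisy = slope(nc_noisy, y) # attenuated by measurement error, exaggerating
                             # the exposure vs. negative-control contrast
print(s_x, s_nc, s_noisy)
```

The noisy negative control understates the shared confounding, which is one route by which measurement error can mislead the comparison the abstract describes.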
International Nuclear Information System (INIS)
Zhang Wan-Zhen; Chen Zhe-Bo; Xia Bin-Feng; Lin Bin; Cao Xiang-Qun
2014-01-01
Digital structured light (SL) profilometry is increasingly used in three-dimensional (3D) measurement technology. However, the nonlinearity of the off-the-shelf projectors and cameras seriously reduces the measurement accuracy. In this paper, first, we review the nonlinear effects of the projector–camera system in the phase-shifting structured light depth measurement method. We show that high order harmonic wave components lead to phase error in the phase-shifting method. Then a practical method based on frequency domain filtering is proposed for nonlinear error reduction. By using this method, the nonlinear calibration of the SL system is not required. Moreover, both the nonlinear effects of the projector and the camera can be effectively reduced. The simulations and experiments have verified our nonlinear correction method.
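The phase error produced by higher-order harmonics can be reproduced with a toy four-step phase-shifting calculation (nonlinearity modelled here as a single third harmonic of illustrative amplitude):

```python
import numpy as np

def four_step_phase(phi, harmonic3=0.0):
    """Recover phi from four phase-shifted intensity samples.
    A third-harmonic term models projector/camera nonlinearity."""
    shifts = np.arange(4) * np.pi / 2
    i = 1.0 + np.cos(phi + shifts) + harmonic3 * np.cos(3 * (phi + shifts))
    return np.arctan2(i[3] - i[1], i[0] - i[2])

phis = np.linspace(-2.5, 2.5, 200)
err_clean = [abs(four_step_phase(p) - p) for p in phis]
err_dist = [abs(four_step_phase(p, harmonic3=0.1) - p) for p in phis]

print(max(err_clean))  # numerically zero: exact recovery for a pure sinusoid
print(max(err_dist))   # periodic phase error introduced by the 3rd harmonic
```

For a pure sinusoid the four-step estimator is exact; the distorted case shows the harmonic-induced ripple that the paper's frequency-domain filtering is designed to suppress.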
Analysis of Wind Speed Forecasting Error Effects on Automatic Generation Control Performance
Directory of Open Access Journals (Sweden)
H. Rajabi Mashhadi
2014-09-01
Full Text Available The main goal of this paper is to study statistical indices and evaluate automatic generation control (AGC) performance indices in a power system with large penetration of wind turbine generation (WTG). The increasing penetration of wind turbine generation calls for further study of its impact on power system frequency control. Frequency control is challenged by real-time imbalances between system generation and load, and wind turbine generation fluctuates more, making the system more unbalanced. The AGC loop then helps to adjust system frequency and the scheduled tie-line powers. The quality of the AGC loop is measured by several indices; a good index is a proper measure of AGC performance as the power system actually operates. One well-known measure in the literature, introduced by NERC, is the Control Performance Standard (CPS). It has previously been claimed that a key factor in the CPS index is related to the standard deviation of generation error, installed power and frequency response. This paper focuses on the impact of a several-hours-ahead wind speed forecast error on this factor. Furthermore, the performance of conventional control in power systems with large-scale wind turbine penetration is evaluated. The effects of wind speed standard deviation and of the degree of wind farm penetration are analyzed, and the importance of the mentioned factor is examined. In addition, the influence of mean wind speed forecast error on this factor is investigated. The study system is a two-area system with a significant wind farm in one area. The results show that mean wind speed forecast error has a considerable effect on AGC performance, while the mentioned key factor is insensitive to this mean error.
Fensham, J R; Bubner, E; D'Antignana, T; Landos, M; Caraguel, C G B
2018-05-01
The Australian farmed yellowtail kingfish (Seriola lalandi, YTK) industry monitors skin fluke (Benedenia seriolae) and gill fluke (Zeuxapta seriolae) burden by pooling the fluke counts of 10 hooked YTK. The random and systematic error of this sampling strategy was evaluated to assess its potential impact on treatment decisions. Fluke abundance (fluke count per fish) in a study cage (estimated 30,502 fish) was assessed five times using the current sampling protocol, and its repeatability was estimated using the repeatability coefficient (CR) and the coefficient of variation (CV). Individual body weight, fork length, fluke abundance, prevalence, intensity (fluke count per infested fish) and density (fluke count per kg of fish) were compared between 100 hooked and 100 seined YTK (assumed representative of the entire population) to estimate potential selection bias. Depending on the fluke species and age category, CR (the expected difference in parasite count between 2 sampling iterations) ranged from 0.78 to 114 flukes per fish. Capturing YTK by hooking increased the selection of fish of a weight and length in the lowest 5th percentile of the cage (RR = 5.75, 95% CI: 2.06-16.03, P-value = 0.0001). These lower-end YTK had on average an extra 31 juvenile and 6 adult Z. seriolae per kg of fish and an extra 3 juvenile and 0.4 adult B. seriolae per kg of fish, compared to the rest of the cage population. Hooking thus biased sampling towards the smallest and most heavily infested fish in the population, resulting in poor repeatability (more variability amongst sampled fish) and an overestimation of parasite burden in the population. In this particular commercial situation these findings supported the health management program, whereas an underestimation of parasite burden could have had a production impact on the study population. In instances where fish populations and parasite burdens are more homogenous, sampling error may be less severe. Sampling error when capturing fish
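The repeatability statistics mentioned above can be sketched with standard Bland-Altman-style formulas, assuming CR = 1.96·√2·s_w with s_w the standard deviation across sampling iterations (the paper's exact estimator may differ; the counts below are hypothetical):

```python
import numpy as np

def repeatability(counts):
    """counts: repeated mean-fluke-count estimates from the same cage.
    Returns (CR, CV): repeatability coefficient and coefficient of variation."""
    counts = np.asarray(counts, dtype=float)
    s_w = counts.std(ddof=1)        # between-iteration standard deviation
    cr = 1.96 * np.sqrt(2.0) * s_w  # 95% of paired iterations are expected
                                    # to differ by less than CR
    cv = s_w / counts.mean()        # relative variability
    return cr, cv

# Five hypothetical sampling iterations of pooled counts from one cage
cr, cv = repeatability([42.0, 55.0, 38.0, 61.0, 47.0])
print(round(cr, 1), round(cv, 2))
```

A large CR relative to the treatment-decision threshold means a single pooled sample of 10 fish cannot reliably support that decision.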
Schofield, Jonathon S; Evans, Katherine R; Hebert, Jacqueline S; Marasco, Paul D; Carey, Jason P
2016-03-21
Force Sensitive Resistors (FSRs) are commercially available thin film polymer sensors commonly employed in a multitude of biomechanical measurement environments. Reasons for such wide spread usage lie in the versatility, small profile, and low cost of these sensors. Yet FSRs have limitations. It is commonly accepted that temperature, curvature and biological tissue compliance may impact sensor conductance and resulting force readings. The effect of these variables and degree to which they interact has yet to be comprehensively investigated and quantified. This work systematically assesses varying levels of temperature, sensor curvature and surface compliance using a full factorial design-of-experiments approach. Three models of Interlink FSRs were evaluated. Calibration equations under 12 unique combinations of temperature, curvature and compliance were determined for each sensor. Root mean squared error, mean absolute error, and maximum error were quantified as measures of the impact these thermo/mechanical factors have on sensor performance. It was found that all three variables have the potential to affect FSR calibration curves. The FSR model and corresponding sensor geometry are sensitive to these three mechanical factors at varying levels. Experimental results suggest that reducing sensor error requires calibration of each sensor in an environment as close to its intended use as possible and if multiple FSRs are used in a system, they must be calibrated independently. Copyright © 2016 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Sharmila Vaz
Full Text Available The social skills rating system (SSRS is used to assess social skills and competence in children and adolescents. While its characteristics based on United States samples (US are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME of the SSRS secondary student form (SSF in a sample of Year 7 students (N = 187, from five randomly selected public schools in Perth, western Australia. Internal consistency (IC of the total scale and most subscale scores (except empathy on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports, not just student as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID.
Vaz, Sharmila; Parsons, Richard; Passmore, Anne Elizabeth; Andreou, Pantelis; Falkmer, Torbjörn
2013-01-01
The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States samples (US) are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187), from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports), not just student as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).
Modeling Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter
Stephenson, Edward; Imig, Astrid
2009-10-01
The Storage Ring EDM Collaboration has obtained a set of measurements detailing the sensitivity of a storage ring polarimeter for deuterons to small geometrical and rate changes. Various schemes, such as the calculation of the cross ratio [1], can cancel effects due to detector acceptance differences and luminosity differences for states of opposite polarization. Such schemes fail at second order in the errors, becoming sensitive to geometrical changes, polarization magnitude differences between opposite polarization states, and changes to the detector response with changing data rates. An expansion of the polarimeter response in a Taylor series based on small errors about the polarimeter operating point can parametrize such effects, primarily in terms of the logarithmic derivatives of the cross section and analyzing power. A comparison will be made to measurements obtained with the EDDA detector at COSY-Jülich. [1] G.G. Ohlsen and P.W. Keaton, Jr., NIM 109, 41 (1973).
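The cross-ratio scheme of reference [1] can be illustrated with synthetic counts; to the order shown it cancels arbitrary detector-acceptance and luminosity differences exactly (acceptances and luminosities below are illustrative):

```python
import math

def cross_ratio_asymmetry(nl_up, nr_up, nl_dn, nr_dn):
    """Left/right counts for spin-up/down states -> asymmetry epsilon = p*A."""
    r = (nl_up * nr_dn) / (nr_up * nl_dn)
    return (math.sqrt(r) - 1.0) / (math.sqrt(r) + 1.0)

pA = 0.12                  # polarization times analyzing power (to recover)
aL, aR = 1.0, 0.7          # unequal left/right detector acceptances
lum_up, lum_dn = 1.0, 1.3  # unequal luminosities for the two spin states

nl_up = lum_up * aL * (1 + pA)
nr_up = lum_up * aR * (1 - pA)
nl_dn = lum_dn * aL * (1 - pA)
nr_dn = lum_dn * aR * (1 + pA)

eps = cross_ratio_asymmetry(nl_up, nr_up, nl_dn, nr_dn)
print(eps)  # recovers pA despite acceptance/luminosity mismatches
```

In this idealized first-order model the acceptances and luminosities divide out exactly; the second-order effects the abstract discusses (geometry shifts, rate-dependent response) are what break this cancellation in practice.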
International Nuclear Information System (INIS)
McWilliams, T.P.; Martz, H.F.
1981-01-01
This paper incorporates the effects of four types of human error in a model for determining the optimal time between periodic inspections which maximizes the steady-state availability of standby safety systems. Such safety systems are characteristic of nuclear power plant operations. The system is modeled by means of an infinite state-space Markov chain. The purpose of the paper is to demonstrate techniques for computing the steady-state availability A and the optimal periodic inspection interval tau* for the system. The model can be used to investigate the effects of human error probabilities on optimal availability, to study the benefits of annunciating the standby system, and to determine optimal inspection intervals. Several examples representative of nuclear power plant applications are presented.
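Ignoring the human-error terms, the basic trade-off behind the optimal inspection interval can be sketched with the classical approximation that unavailability ≈ λτ/2 + t_i/τ for failure rate λ and inspection downtime t_i (the paper's Markov-chain model is far richer; parameters are illustrative):

```python
import math

lam = 1.0e-4  # standby failure rate per hour (illustrative)
t_i = 2.0     # inspection/repair downtime per inspection, hours

def unavailability(tau):
    """Approximate steady-state unavailability for inspection interval tau:
    average undetected-failure downtime + inspection downtime fraction."""
    return lam * tau / 2.0 + t_i / tau

# Grid search for the optimal interval
taus = [t / 10.0 for t in range(100, 8000)]
tau_star = min(taus, key=unavailability)

print(tau_star)                    # numerical optimum
print(math.sqrt(2.0 * t_i / lam))  # analytic optimum sqrt(2*t_i/lam)
```

Inspecting too often wastes availability on inspection downtime; inspecting too rarely leaves undetected failures standing. Human errors shift this optimum, which is what the paper's fuller model quantifies.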
International Nuclear Information System (INIS)
Hirotsu, Yuko; Ebisu, Mitsuhiro; Aikawa, Takeshi; Matsubara, Katsuyuki
2006-01-01
This paper describes methods for analyzing human error events that have been accumulated at individual plants and for utilizing the results to prevent accidents proactively. Firstly, a categorization framework for the trigger actions and causal factors of human error events was reexamined, and the procedure to analyze human error events was reviewed based on the framework. Secondly, a method for identifying the common characteristics of the trigger action data and causal factor data accumulated by analyzing human error events was clarified. In addition, to utilize the results of trend analysis effectively, methods to develop teaching material for safety education, to develop checkpoints for error prevention and to introduce an error management process for strategic error prevention were proposed. (author)
International Nuclear Information System (INIS)
Gopan, O; Kalet, A; Smith, W; Hendrickson, K; Kim, M; Young, L; Nyflot, M; Chvetsov, A; Phillips, M; Ford, E
2016-01-01
Purpose: A standard tool for ensuring the quality of radiation therapy treatments is the initial physics plan review. However, little is known about its performance in practice. The goal of this study is to measure the effectiveness of physics plan review by introducing simulated errors into “mock” treatment plans and measuring the performance of plan review by physicists. Methods: We generated six mock treatment plans containing multiple errors. These errors were based on incident learning system data both within the department and internationally (SAFRON). These errors were scored for severity and frequency. Those with the highest scores were included in the simulations (13 errors total). Observer bias was minimized using a multiple co-correlated distractor approach. Eight physicists reviewed these plans for errors, with each physicist reviewing, on average, 3/6 plans. The confidence interval for the proportion of errors detected was computed using the Wilson score interval. Results: Simulated errors were detected in 65% of reviews [51–75%] (95% confidence interval [CI] in brackets). The following error scenarios had the highest detection rates: incorrect isocenter in DRRs/CBCT (91% [73–98%]) and a planned dose different from the prescribed dose (100% [61–100%]). Errors with low detection rates involved incorrect field parameters in the record and verify system (38% [18–61%]) and incorrect isocenter localization in the planning system (29% [8–64%]). Though pre-treatment QA failure was reliably identified (100%), less than 20% of participants reported the error that caused the failure. Conclusion: This is one of the first quantitative studies of error detection. Although physics plan review is a key safety measure and can identify some errors with high fidelity, other errors are more challenging to detect. This data will guide future work on standardization and automation. Creating new checks or improving existing ones (i.e., via automation) will help in
Energy Technology Data Exchange (ETDEWEB)
Gopan, O; Kalet, A; Smith, W; Hendrickson, K; Kim, M; Young, L; Nyflot, M; Chvetsov, A; Phillips, M; Ford, E [University of Washington, Seattle, WA (United States)
2016-06-15
Purpose: A standard tool for ensuring the quality of radiation therapy treatments is the initial physics plan review. However, little is known about its performance in practice. The goal of this study is to measure the effectiveness of physics plan review by introducing simulated errors into “mock” treatment plans and measuring the performance of plan review by physicists. Methods: We generated six mock treatment plans containing multiple errors. These errors were based on incident learning system data both within the department and internationally (SAFRON). These errors were scored for severity and frequency. Those with the highest scores were included in the simulations (13 errors total). Observer bias was minimized using a multiple co-correlated distractor approach. Eight physicists reviewed these plans for errors, with each physicist reviewing, on average, 3/6 plans. The confidence interval for the proportion of errors detected was computed using the Wilson score interval. Results: Simulated errors were detected in 65% of reviews [51–75%] (95% confidence interval [CI] in brackets). The following error scenarios had the highest detection rates: incorrect isocenter in DRRs/CBCT (91% [73–98%]) and a planned dose different from the prescribed dose (100% [61–100%]). Errors with low detection rates involved incorrect field parameters in the record and verify system (38% [18–61%]) and incorrect isocenter localization in the planning system (29% [8–64%]). Though pre-treatment QA failure was reliably identified (100%), less than 20% of participants reported the error that caused the failure. Conclusion: This is one of the first quantitative studies of error detection. Although physics plan review is a key safety measure and can identify some errors with high fidelity, other errors are more challenging to detect. This data will guide future work on standardization and automation. Creating new checks or improving existing ones (i.e., via automation) will help in
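The Wilson score interval used for the detection proportions above is straightforward to implement (a sketch; the counts in the example are illustrative, not taken from the study):

```python
import math

def wilson_interval(k: int, n: int, z: float = 1.96):
    """Wilson score confidence interval for a binomial proportion k/n."""
    p = k / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# e.g. 5 detections in 10 reviews
lo, hi = wilson_interval(5, 10)
print(f"{lo:.3f} - {hi:.3f}")  # 0.237 - 0.763
```

Unlike the naive normal-approximation interval, the Wilson interval stays within [0, 1] and behaves sensibly at the small review counts seen per error scenario here.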
Statistical errors in Monte Carlo estimates of systematic errors
Roe, Byron P.
2007-01-01
For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas, and formulas for the simple toy models, are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
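The two variation schemes can be contrasted on a toy linear model. This is a sketch under stated assumptions, not the paper's calculation: the sensitivities, sigma values, and run counts below are invented for illustration, and the model is noiseless, so the unisim sum of squares is exact here.

```python
import random
import statistics

random.seed(42)

# Toy model: the observable shifts linearly with each systematic parameter s_i.
SENS = [0.5, 1.2, -0.8]      # sensitivities a_i (assumed values)
SIGMA = [1.0, 1.0, 1.0]      # 1-sigma size of each systematic (assumed values)

def observable(shifts):
    return sum(a * s for a, s in zip(SENS, shifts))

# Unisim: one MC run per parameter, that parameter varied by +1 sigma.
unisim_var = sum(
    observable([SIGMA[i] if j == i else 0.0 for j in range(3)]) ** 2
    for i in range(3)
)

# Multisim: every run varies all parameters, drawn from their distributions.
multisim_runs = [
    observable([random.gauss(0.0, sd) for sd in SIGMA]) for _ in range(20000)
]
multisim_var = statistics.pvariance(multisim_runs)

# In the linear, independent-parameter case both estimate sum_i (a_i * sigma_i)^2.
true_var = sum((a * sd) ** 2 for a, sd in zip(SENS, SIGMA))
```

With MC statistical noise added to each run, the trade-off described in the abstract appears: the multisim average beats one-at-a-time variations when per-run noise dominates individual systematics.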
CSIR Research Space (South Africa)
Kruger, OA
2000-01-01
Full Text Available on face-to-face angle measurements. The results show that flatness and eccentricity deviations have less effect on angle measurements than do pyramidal errors. 1. Introduction Polygons and angle blocks are the most important transfer standards in the field of angle metrology. Polygons are used by national metrology institutes (NMIs) as transfer standards to industry, where they are used in conjunction with autocollimators to calibrate index tables, rotary tables and other forms of angle-measuring equipment...
International Nuclear Information System (INIS)
Wada, Y.
1981-01-01
The surface-ionization type mass spectrometer is widely used for quality assurance, accountability and safeguarding of nuclear materials, and for this analysis it has become important to statistically evaluate the analytical error, which consists of a random error and a systematic error. The major factor in this systematic error was the mass-discrimination effect. In this paper, various assays for evaluating the sources of variation in the mass-discrimination effect were studied and the data obtained were statistically evaluated. These analyses showed that the variation in the mass-discrimination effect was attributable not to the acid concentration of the sample, the sample size on the filament, or the voltage supplied to the multiplier, but mainly to the filament temperature during the mass-spectrometric analysis. The mass-discrimination effect values β, usually calculated from measured data for uranium, plutonium or boron isotopic standard samples, did not depend significantly on the U-235, Pu-239 or B-10 isotopic abundance. Furthermore, in the case of U and Pu, the measurement conditions and the mass ranges of the isotopes were almost the same, and the β values did not differ statistically between U and Pu. On the other hand, the β value for boron was about a third of that for U or Pu; however, when compared as correction coefficients for the mass-discrimination effect per unit difference of mass number, ΔM, the coefficients were almost the same for U, Pu and B. As for the isotopic analysis errors of U, Pu, Nd and B, the isotopic abundance of these elements and the isotopic analysis error were related by quadratic curves on a log-log scale
Effect of sample size on bias correction performance
Reiter, Philipp; Gutjahr, Oliver; Schefczyk, Lukas; Heinemann, Günther; Casper, Markus C.
2014-05-01
The output of climate models often shows a bias when compared to observed data, so that preprocessing is necessary before using it as climate forcing in impact modeling (e.g. hydrology, species distribution). A common bias correction method is the quantile matching approach, which adapts the cumulative distribution function of the model output to that of the observed data by means of a transfer function. Especially for precipitation we expect the bias correction performance to depend strongly on sample size, i.e. the length of the period used for calibration of the transfer function. We carry out experiments using the precipitation output of ten regional climate model (RCM) hindcast runs from the EU-ENSEMBLES project and the E-OBS observational dataset for the period 1961 to 2000. The 40 years are split into a 30-year calibration period and a 10-year validation period. In the first step, transfer functions are set up cell-by-cell for each RCM, using the complete 30-year calibration period. The derived transfer functions are applied to the validation period of the respective RCM precipitation output and the mean absolute errors with reference to the observational dataset are calculated. These values are treated as the "best fit" for the respective RCM. In the next step, this procedure is redone using subperiods of the 30-year calibration period. The lengths of these subperiods are reduced from 29 years down to a minimum of 1 year, considering only subperiods of consecutive years. This leads to an increasing number of repetitions for smaller sample sizes (e.g. 2 for a length of 29 years). In the last step, the mean absolute errors are statistically tested against the "best fit" of the respective RCM to compare the performances. In order to analyze whether the intensity of the sample size effect depends on the chosen correction method, four variations of the quantile matching approach (PTF, QUANT/eQM, gQM, GQM) are applied in this study. The experiments are further
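The core of the empirical quantile matching (QUANT/eQM) variant can be sketched as a rank-based transfer function. This is a minimal illustration, not the study's implementation: the plotting positions and example data are assumptions, and the parametric variants (PTF, gQM, GQM) fit analytic transfer functions instead of this empirical lookup.

```python
import bisect

def quantile_map(value, model_cal_sorted, obs_cal_sorted):
    """Map a model value to the observed value at the same empirical quantile.

    Both calibration samples must be sorted ascending.
    """
    n = len(model_cal_sorted)
    # Empirical quantile of the value within the model calibration sample.
    rank = bisect.bisect_left(model_cal_sorted, value)
    p = (rank + 0.5) / n if rank < n else 1.0 - 0.5 / n
    # Read off the observed calibration value at that quantile.
    m = len(obs_cal_sorted)
    return obs_cal_sorted[min(m - 1, int(p * m))]

# Hypothetical calibration samples (e.g. daily precipitation, mm/day).
model_cal = [1.0, 2.0, 4.0, 7.0, 11.0]
obs_cal = [0.5, 1.5, 3.0, 6.0, 9.0]
corrected = quantile_map(4.0, model_cal, obs_cal)  # median maps to median: 3.0
```

The sample-size effect the study probes arises because short calibration periods give a coarse, noisy estimate of the two CDFs this lookup relies on.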
On the effects of systematic errors in analysis of nuclear scattering data
International Nuclear Information System (INIS)
Bennett, M.T.; Steward, C.; Amos, K.; Allen, L.J.
1995-01-01
The effects of systematic errors in elastic scattering differential cross-section data upon the assessment of the quality of fits to those data have been studied. Three cases are studied, namely the differential cross-section data sets from elastic scattering of 200 MeV protons from 12C, from 350 MeV 16O-16O scattering and from 288.6 MeV 12C-12C scattering. First, to estimate the probability of any unknown systematic errors, select sets of data have been processed using the method of generalized cross-validation; a method based upon the premise that any data set should satisfy an optimal smoothness criterion. In another case, the S function that provided a statistically significant fit to the data, upon allowance for angle variation, became overdetermined. A far simpler S-function form could then be found to describe the scattering process. The S functions so obtained have been used in a fixed-energy inverse scattering study to specify effective, local, Schroedinger potentials for the collisions. An error analysis has been performed on the results to specify confidence levels for those interactions. 19 refs., 6 tabs., 15 figs
The Effect of Explicit and Implicit Corrective Feedback on Segmental Word-Level Pronunciation Errors
Directory of Open Access Journals (Sweden)
Mohammad Zohrabi
2017-04-01
Full Text Available Over the last few years, the realm of foreign language learning has witnessed an abundance of research concerning the effectiveness of corrective feedback on the acquisition of grammatical features, with studies of other target language subsystems, such as pronunciation, being few and far between. In order to bridge this gap, the present study investigated and compared the immediate and delayed effects of explicit (overt) and implicit (covert) corrective feedback (CF) on treating segmental word-level pronunciation errors committed by adult EFL learners of an institute in Tabriz named ALC. To this end, through a quasi-experimental study and random sampling, three groups were formed, an explicit, an implicit and a control group, each consisting of 20 low-proficiency EFL learners. In addition to the levels the learners had been assigned to based on the institute’s criteria, a Preliminary English Test (PET) was administered in order to determine the proficiency level of the learners. Having administered the pretest before treatment, to measure the longer-term effect of explicit vs. implicit CF on segmental word-level pronunciation errors, the study included delayed posttests in addition to immediate posttests, all of which included reading passages containing 40 problematic words. The collected data were analyzed by ANCOVA, and the obtained findings revealed that both explicit and implicit corrective feedback are effective in reducing pronunciation errors, showing significant differences between the experimental and control groups. Additionally, the outcomes showed that immediate implicit and immediate explicit corrective feedback have similar effects on the reduction of pronunciation errors. The same result holds for the delayed effect of explicit feedback in comparison with the delayed effect of implicit feedback. However, the delayed effects of both explicit and implicit CF were weaker than their immediate effects, owing to the passage of time. Pedagogically
Some effects of random dose measurement errors on analysis of atomic bomb survivor data
International Nuclear Information System (INIS)
Gilbert, E.S.
1985-01-01
The effects of random dose measurement errors on analyses of atomic bomb survivor data are described and quantified for several procedures. It is found that the ways in which measurement error is most likely to mislead are through downward bias in the estimated regression coefficients and through distortion of the shape of the dose-response curve. The magnitude of the bias with simple linear regression is evaluated for several dose treatments, including the use of grouped and ungrouped data, analyses with and without truncation at 600 rad, and analyses which exclude doses exceeding 200 rad. Limited calculations have also been made for maximum likelihood estimation based on Poisson regression. 16 refs., 6 tabs
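The downward bias described here is the classical attenuation (regression dilution) effect: with nondifferential random error of variance σ_u² added to a true dose of variance σ_x², the expected OLS slope shrinks by the factor σ_x²/(σ_x² + σ_u²). A simulation sketch, with all variances and the sample size chosen purely for illustration:

```python
import random
import statistics

random.seed(1)

N = 20000
beta = 1.0  # true slope (assumed)
true_dose = [random.gauss(0.0, 1.0) for _ in range(N)]
response = [beta * d + random.gauss(0.0, 0.5) for d in true_dose]
# Doses observed with classical (random, nondifferential) measurement error.
noisy_dose = [d + random.gauss(0.0, 1.0) for d in true_dose]

def ols_slope(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

slope_true = ols_slope(true_dose, response)    # near the true slope of 1.0
slope_noisy = ols_slope(noisy_dose, response)  # attenuated toward 0.5
```

Here σ_x² = σ_u² = 1, so the attenuation factor is 0.5; halving the slope is exactly the kind of downward bias the abstract warns about.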
Modified Redundancy based Technique—a New Approach to Combat Error Propagation Effect of AES
Sarkar, B.; Bhunia, C. T.; Maulik, U.
2012-06-01
The Advanced Encryption Standard (AES) is a great research challenge. It was developed to replace the Data Encryption Standard (DES). AES suffers from a major limitation: the error propagation effect. Two methods are available to tackle this limitation: the redundancy-based technique and the bit-based parity technique. The first has the definite advantage over the second of correcting any error, but at the cost of a higher level of overhead and hence a lower processing speed. In this paper, a new approach based on the redundancy-based technique is proposed that speeds up reliable encryption and hence secured communication.
The Effect of Antenna Position Errors on Redundant-Baseline Calibration of HERA
Orosz, Naomi; Dillon, Joshua; Ewall-Wice, Aaron; Parsons, Aaron; HERA Collaboration
2018-01-01
HERA (the Hydrogen Epoch of Reionization Array) is a large, highly-redundant radio interferometer in South Africa currently being built out to 350 14-m dishes. Its mission is to probe large scale structure during and prior to the epoch of reionization using the 21 cm hyperfine transition of neutral hydrogen. The array is designed to be calibrated using redundant baselines of known lengths. However, the dishes can deviate from ideal positions, with errors on the order of a few centimeters. This potentially increases foreground contamination of the 21 cm power spectrum in the cleanest part of Fourier space. The calibration algorithm treats groups of baselines that should be redundant, but are not due to position errors, as if they actually are. Accurate, precise calibration is critical because the foreground signals are 100,000 times stronger than the reionization signal. We explain the origin of this effect and discuss weighting strategies to mitigate it.
Edge Effects in Line Intersect Sampling With
David L. R. Affleck; Timothy G. Gregoire; Harry T. Valentine
2005-01-01
Transects consisting of multiple, connected segments with a prescribed configuration are commonly used in ecological applications of line intersect sampling. The transect configuration has implications for the probability with which population elements are selected and for how the selection probabilities can be modified by the boundary of the tract being sampled. As...
Sulkanen, Martin E.; Joy, M. K.; Patel, S. K.
1998-01-01
Imaging of the Sunyaev-Zel'dovich (S-Z) effect in galaxy clusters combined with the cluster plasma x-ray diagnostics can measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. This effect remains present for clusters that are otherwise chosen to avoid complications for the S-Z and x-ray analysis, such as plasma temperature variations, cluster substructure, or cluster dynamical evolution. In this paper we present a study of the systematic errors in the value of H_0, as determined by the x-ray and S-Z properties of a theoretical sample of triaxial isothermal 'beta-model' clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. The model clusters are not generated as ellipsoids of rotation, but have three independent 'core radii', as well as a random orientation to the plane of the sky.
Wang, Wentao
2012-03-01
Both theoretical analysis and nonlinear 2D numerical simulations are used to study the concentration difference and Peclet number effect on the measurement error of electroosmotic mobility in microchannels. We propose a compact analytical model for this error as a function of normalized concentration difference and Peclet number in micro electroosmotic flow. The analytical predictions of the errors are consistent with the numerical simulations. © 2012 IEEE.
International Nuclear Information System (INIS)
Gilbert, R.O.; Baker, K.R.; Nelson, R.A.; Miller, R.H.; Miller, M.L.
1987-07-01
The decision whether to take additional remedial action (removal of soil) from regions contaminated by uranium mill tailings involves collecting 20 plugs of soil from each 10-m by 10-m plot in the region and analyzing a 500-g portion of the mixed soil for 226Ra. A soil sampling study was conducted in the windblown mill-tailings flood plain area at Shiprock, New Mexico, to evaluate whether reducing the number of soil plugs to 9 would have any appreciable impact on remedial-action decisions. The results of the Shiprock study are described and used in this paper to develop a simple model of the standard deviation of 226Ra measurements on composite samples formed from 21 or fewer plugs. This model is used to predict, as a function of the number of soil plugs per composite, the percent accuracy with which the mean 226Ra concentration in surface soil can be estimated, and the probability of making incorrect remedial action decisions on the basis of statistical tests. 8 refs., 15 figs., 9 tabs
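A variance model of the general kind described can be sketched as follows: compositing n plugs averages down between-plug spatial variability but leaves the analytical measurement error untouched. This is a hedged illustration, not the paper's fitted model; both standard-deviation components below are assumed values.

```python
import math

def composite_sd(n_plugs, sd_between=0.30, sd_meas=0.10):
    """SD of a 226Ra measurement on a composite of n_plugs soil plugs.

    Between-plug spatial variability (sd_between) is reduced by compositing;
    the analytical measurement error (sd_meas) is not. Both SDs are assumed,
    in arbitrary concentration units.
    """
    return math.sqrt(sd_between ** 2 / n_plugs + sd_meas ** 2)
```

Under these assumed components, dropping from 21 plugs to 9 raises the SD only from about 0.12 to 0.14, i.e. much less than the 21/9 ratio might suggest, which is the kind of result that makes a reduced sampling effort defensible.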
Bias Errors due to Leakage Effects When Estimating Frequency Response Functions
Directory of Open Access Journals (Sweden)
Andreas Josefsson
2012-01-01
Full Text Available Frequency response functions are often utilized to characterize a system's dynamic response. For a wide range of engineering applications, it is desirable to determine frequency response functions for a system under stochastic excitation. In practice, the measurement data is contaminated by noise and some form of averaging is needed in order to obtain a consistent estimator. With Welch's method, the discrete Fourier transform is used and the data is segmented into smaller blocks so that averaging can be performed when estimating the spectrum. However, this segmentation introduces leakage effects. As a result, the estimated frequency response function suffers from both systematic (bias) and random errors due to leakage. In this paper the bias errors in the H1- and H2-estimates are studied and a new method is proposed to derive an approximate expression for the relative bias error at the resonance frequency with different window functions. The method is based on using a sum of real exponentials to describe the window's deterministic autocorrelation function. Simple expressions are derived for a rectangular window and a Hanning window. The theoretical expressions are verified with numerical simulations and a very good agreement is found between the results from the proposed bias expressions and the empirical results.
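The leakage effect at the heart of this bias can be demonstrated directly on the DFT of a sinusoid that does not complete an integer number of cycles in a block, which is exactly what segmenting continuous data produces. This is a stdlib-only sketch; the block length, tone frequency, and probe bin are arbitrary choices, and the Hanning window is the same one discussed in the abstract.

```python
import cmath
import math

N = 64
f0 = 10.5  # cycles per block: non-integer, so the tone falls between DFT bins
x = [math.sin(2 * math.pi * f0 * n / N) for n in range(N)]
hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]

def dft_mag(signal, k):
    """Magnitude of DFT bin k of a length-N signal."""
    return abs(sum(s * cmath.exp(-2j * math.pi * k * n / N)
                   for n, s in enumerate(signal)))

# Leakage far from the tone: the Hanning window suppresses it strongly,
# because its sidelobes fall off much faster than the rectangular window's.
leak_rect = dft_mag(x, 25)
leak_hann = dft_mag([s * w for s, w in zip(x, hann)], 25)
```

The same mechanism smears energy across the resonance peak in an averaged FRF estimate, which is the systematic (bias) error the paper quantifies.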
Directory of Open Access Journals (Sweden)
Mostafa Bizhani
2013-06-01
Full Text Available Background and Objective: The incidence of medical errors is regarded as an unavoidable yet serious threat to the health and safety of patients. This study aimed to determine the factors influencing medication errors from the perspective of the nursing staff. Materials and Methods: This descriptive-analytic study recruited 80 nurses working in various wards of Fasa Hospital. The nurses were selected via the availability sampling method, and their perspective on factors affecting medication errors was gathered using a questionnaire designed for this study. The data were analyzed with SPSS-15 software. Results: The most important causes of medication errors were work fatigue, a low nurse-to-patient ratio, long working hours, a high density of work in units, and doing other tasks. Other variables, such as age and gender, as well as further factors affecting the incidence of medication errors, are discussed in the full text. Conclusion: From the nurses’ standpoint, workload and the patient-to-nurse ratio were the most significant factors leading to medication errors.
Effects of sterilization treatments on the analysis of TOC in water samples.
Shi, Yiming; Xu, Lingfeng; Gong, Dongqin; Lu, Jun
2010-01-01
Decomposition experiments conducted with and without microbial processes are commonly used to study the effects of environmental microorganisms on the degradation of organic pollutants. However, the effects of biological pretreatment (sterilization) on organic matter often have a negative impact on such experiments. Based on the principle of water total organic carbon (TOC) analysis, the effects of physical sterilization treatments on the determination of TOC and other water quality parameters were investigated. The results revealed that two conventional physical sterilization treatments, autoclaving and 60Co gamma-radiation sterilization, led to the direct decomposition of some organic pollutants, resulting in substantial errors in the analysis of TOC in water samples. Furthermore, the extent of the errors varied with the intensity and duration of the sterilization treatments. Accordingly, a novel sterilization method for water samples, 0.45 μm microfiltration coupled with ultraviolet radiation (MCUR), was developed in the present study. The results indicated that the MCUR method was capable of exerting a high bactericidal effect on the water sample while significantly decreasing the negative impact on the analysis of TOC and other water quality parameters. Before and after sterilization treatments, the relative errors of TOC determination could be kept below 3% for water samples with different categories and concentrations of organic pollutants by using MCUR.
Nguyen, Huong; Pham, Hong-Tham; Vo, Dang-Khoa; Nguyen, Tuan-Dung; van den Heuvel, Edwin R.; Haaijer-Ruskamp, Flora M.; Taxis, Katja
Background Little is known about interventions to reduce intravenous medication administration errors in hospitals, especially in low- and middle-income countries. Objective To assess the effect of a clinical pharmacist-led training programme on clinically relevant errors during intravenous
Action errors, error management, and learning in organizations.
Frese, Michael; Keith, Nina
2015-01-03
Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.
Press, M F; Hung, G; Godolphin, W; Slamon, D J
1994-05-15
HER-2/neu oncogene amplification and overexpression in breast cancer tissue has been correlated with poor prognosis in women with both node-positive and node-negative disease. However, several studies have not confirmed this association. Review of these studies reveals the presence of considerable methodological variability, including differences in study size, follow-up time, techniques and reagents. The majority of papers with clinical follow-up information are immunohistochemical studies using archival, paraffin-embedded breast cancers, and a variety of HER-2/neu antibodies have been used in these studies. Very little information, however, is available about the ability of the antibodies to detect overexpression following tissue processing for paraffin-embedding. Therefore, a series of antibodies, reported in the literature or commercially available, were evaluated to assess their sensitivity and specificity as immunohistochemical reagents. Paraffin-embedded samples of 187 breast cancers, previously characterized as frozen specimens for HER-2/neu amplification by Southern blot and for overexpression by Northern blot, Western blot, and immunohistochemistry, were used. Two multitumor paraffin-embedded tissue blocks were prepared from the previously analyzed breast cancers as a panel of cases to test a series of previously studied and/or commercially available anti-HER-2/neu antibodies. Immunohistochemical staining results obtained with 7 polyclonal and 21 monoclonal antibodies in sections from paraffin-embedded blocks of these breast cancers were compared. The ability of these antibodies to detect overexpression was extremely variable, providing an important explanation for the variable overexpression rate reported in the literature.
A spatial error model with continuous random effects and an application to growth convergence
Laurini, Márcio Poletti
2017-10-01
We propose a spatial error model with continuous random effects based on Matérn covariance functions and apply this model for the analysis of income convergence processes (β-convergence). The use of a model with continuous random effects permits a clearer visualization and interpretation of the spatial dependency patterns, avoids the problems of defining neighborhoods in spatial econometrics models, and allows projecting the spatial effects for every possible location in the continuous space, circumventing the existing aggregations in discrete lattice representations. We apply this model approach to analyze the economic growth of Brazilian municipalities between 1991 and 2010 using unconditional and conditional formulations and a spatiotemporal model of convergence. The results indicate that the estimated spatial random effects are consistent with the existence of income convergence clubs for Brazilian municipalities in this period.
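For reference, the Matérn covariance family underlying such continuous random effects has simple closed forms at half-integer smoothness values. This sketch is generic background, not the paper's fitted model; the range (rho) and variance (sigma2) defaults are illustrative assumptions.

```python
import math

def matern(d, rho=1.0, sigma2=1.0, nu=0.5):
    """Matérn covariance at distance d, for smoothness nu in {0.5, 1.5, 2.5}."""
    if d == 0:
        return sigma2
    if nu == 0.5:  # exponential covariance (rough fields)
        return sigma2 * math.exp(-d / rho)
    if nu == 1.5:
        a = math.sqrt(3.0) * d / rho
        return sigma2 * (1 + a) * math.exp(-a)
    if nu == 2.5:  # smoother fields
        a = math.sqrt(5.0) * d / rho
        return sigma2 * (1 + a + a * a / 3.0) * math.exp(-a)
    raise ValueError("only half-integer nu = 0.5, 1.5, 2.5 implemented")
```

Because the covariance is a function of continuous distance, spatial effects can be predicted at any coordinate, which is what lets the model dispense with the neighborhood matrices of lattice-based spatial econometrics.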
2000-08-01
Many aviators develop ametropias (refractive errors) during their careers, with comparable effects on luminance performance. ... The difference was statistically significant (0.04 logMAR, p=0.01) but not clinically significant (<1/2 line) relative to the non-aviator group; separate investigators at different research facilities ... Contrast sensitivity on the SLCT decreased for the aviator group, a statistically significant difference (0.11 ± 0.1 logCS, t=4.0, p<0.001), yet there is significant overlap between the groups ...
Decision-making and sampling size effect
Ismariah Ahmad; Rohana Abd Rahman; Roda Jean-Marc; Lim Hin Fui; Mohd Parid Mamat
2010-01-01
Sound decision-making requires quality information. Poor information does not help in decision making. Among the sources of low quality information, an important cause is inadequate and inappropriate sampling. In this paper we illustrate the case of information collected on timber prices.
Heterogeneous Causal Effects and Sample Selection Bias
DEFF Research Database (Denmark)
Breen, Richard; Choi, Seongsoo; Holm, Anders
2015-01-01
The role of education in the process of socioeconomic attainment is a topic of long-standing interest to sociologists and economists. Recently there has been growing interest not only in estimating the average causal effect of education on outcomes such as earnings, but also in estimating how causal effects might vary over individuals or groups. In this paper we point out one of the under-appreciated hazards of seeking to estimate heterogeneous causal effects: conventional selection bias (that is, selection on baseline differences) can easily be mistaken for heterogeneity of causal effects. This might lead us to find heterogeneous effects when the true effect is homogeneous, or to wrongly estimate not only the magnitude but also the sign of heterogeneous effects. We apply a test for the robustness of heterogeneous causal effects in the face of varying degrees and patterns of selection bias.
Arba-Mosquera, Samuel; Aslanides, Ioannis M.
2012-01-01
Purpose To analyze the effects of eye-tracker performance on pulse positioning errors during refractive surgery. Methods A comprehensive model has been developed that directly considers eye movements (including saccades, vestibular, optokinetic, vergence, and miniature movements) as well as eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay. Results Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times of up to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times of up to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than eye-tracker acquisition rates essentially duplicate pulse-positioning errors. Laser trigger delays of up to about 300 μs have minor to no impact on pulse-positioning errors. Conclusions The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than the decentrations observed in clinical settings. There is no single parameter that alone minimizes the positioning error; it is the optimal combination of the several parameters that minimizes the error. The results of this analysis are important for understanding the limitations of correcting very irregular ablation patterns.
Differential Effects of Visual-Acoustic Biofeedback Intervention for Residual Speech Errors
McAllister Byun, Tara; Campbell, Heather
2016-01-01
Recent evidence suggests that the incorporation of visual biofeedback technologies may enhance response to treatment in individuals with residual speech errors. However, there is a need for controlled research systematically comparing biofeedback versus non-biofeedback intervention approaches. This study implemented a single-subject experimental design with a crossover component to investigate the relative efficacy of visual-acoustic biofeedback and traditional articulatory treatment for residual rhotic errors. Eleven child/adolescent participants received 10 sessions of visual-acoustic biofeedback and 10 sessions of traditional treatment, with the order of biofeedback and traditional phases counterbalanced across participants. Probe measures eliciting untreated rhotic words were administered in at least three sessions prior to the start of treatment (baseline), between the two treatment phases (midpoint), and after treatment ended (maintenance), as well as before and after each treatment session. Perceptual accuracy of rhotic production was assessed by outside listeners in a blinded, randomized fashion. Results were analyzed using a combination of visual inspection of treatment trajectories, individual effect sizes, and logistic mixed-effects regression. Effect sizes and visual inspection revealed that participants could be divided into categories of strong responders (n = 4), mixed/moderate responders (n = 3), and non-responders (n = 4). Individual results did not reveal a reliable pattern of stronger performance in biofeedback versus traditional blocks, or vice versa. Moreover, biofeedback versus traditional treatment was not a significant predictor of accuracy in the logistic mixed-effects model examining all within-treatment word probes. However, the interaction between treatment condition and treatment order was significant: biofeedback was more effective than traditional treatment in the first phase of treatment, and traditional treatment was more effective
Ground-Wave Propagation Effects on Transmission Lines through Error Images
Directory of Open Access Journals (Sweden)
Uribe-Campos Felipe Alejandro
2014-07-01
Full Text Available Electromagnetic transient calculation of overhead transmission lines is strongly influenced by the natural resistivity of the ground. This varies from 1 Ω·m to 10 kΩ·m, depending on several media factors and on the physical composition of the ground. The accuracy of the calculation of a system's transient response depends in part on the ground-return model, which should consider the line geometry, the electrical resistivity, and the frequency dependence of the power source. To date, there are only a few reports in the specialized literature analyzing the effects produced by the presence of an imperfectly conducting ground on transmission lines in a transient state. A broad-range analysis of three of the most often used ground-return models for calculating electromagnetic transients of overhead transmission lines is performed in this paper. The behavior of modal propagation in the ground is analyzed here in terms of first- and second-order effects. Finally, a numerical tool based on relative-error images is proposed in this paper as an aid for the analyst engineer to estimate the error incurred by using approximate ground-return models when calculating transients of overhead transmission lines.
Chang, Howard H; Peng, Roger D; Dominici, Francesca
2011-10-01
In air pollution epidemiology, there is a growing interest in estimating the health effects of coarse particulate matter (PM) with aerodynamic diameter between 2.5 and 10 μm. Coarse PM concentrations can exhibit considerable spatial heterogeneity because the particles travel shorter distances and do not remain suspended in the atmosphere for an extended period of time. In this paper, we develop a modeling approach for estimating the short-term effects of air pollution in time series analysis when the ambient concentrations vary spatially within the study region. Specifically, our approach quantifies the error in the exposure variable by characterizing, on any given day, the disagreement in ambient concentrations measured across monitoring stations. This is accomplished by viewing monitor-level measurements as error-prone repeated measurements of the unobserved population average exposure. Inference is carried out in a Bayesian framework to fully account for uncertainty in the estimation of model parameters. Finally, by using different exposure indicators, we investigate the sensitivity of the association between coarse PM and daily hospital admissions based on a recent national multisite time series analysis. Among Medicare enrollees from 59 US counties between the period 1999 and 2005, we find a consistent positive association between coarse PM and same-day admission for cardiovascular diseases.
Concha Larrauri, P.
2015-12-01
Orange production in Florida has experienced a decline over the past decade. Hurricanes in 2004 and 2005 greatly affected production, almost to the same degree as the strong freezes that occurred in the 1980s. The spread of citrus greening disease after the hurricanes has also contributed to the reduction in orange production in Florida. The occurrence of hurricanes and diseases cannot easily be predicted, but the additional effects of climate on orange yield can be studied and incorporated into existing production forecasts that are based on physical surveys, such as the October citrus forecast issued every year by the USDA. Specific climate variables occurring before and after the October forecast is issued can affect flowering, orange drop rates, growth, and maturation, and can contribute to the forecast error. Here we present a methodology to incorporate local climate variables to predict the error of the USDA's orange production forecast, and we study the local effects of climate on yield in different counties in Florida. This information can help farmers gain insight into what to expect during the orange production cycle, and can help supply chain managers better plan their strategy.
Mistakes as Stepping Stones: Effects of Errors on Episodic Memory among Younger and Older Adults
Cyr, Andrée-Ann; Anderson, Nicole D.
2015-01-01
The memorial costs and benefits of trial-and-error learning have clear pedagogical implications for students, and increasing evidence shows that generating errors during episodic learning can improve memory among younger adults. Conversely, the aging literature has found that errors impair memory among healthy older adults and has advocated for…
Post-error adjustments and ADHD symptoms in adults : The effect of laterality and state regulation
Mohamed, Saleh M.H.; Borger, Norbert A.; Geuze, Reint H.; van der Meere, Jaap J.
2016-01-01
Evidence is accumulating that individuals with Attention-Deficit/Hyperactivity Disorder (ADHD) do not adjust their responses after committing errors. Post-error response adjustments are taken to reflect, among others, error monitoring that is essential for learning, flexible behavioural adaptation,
The Effect of Error Correlation on Interfactor Correlation in Psychometric Measurement
Westfall, Peter H.; Henning, Kevin S. S.; Howell, Roy D.
2012-01-01
This article shows how interfactor correlation is affected by error correlations. Theoretical and practical justifications for error correlations are given, and a new equivalence class of models is presented to explain the relationship between interfactor correlation and error correlations. The class allows simple, parsimonious modeling of error…
The Effect of In-Game Errors on Learning Outcomes. CRESST Report 835
Kerr, Deirdre; Chung, Gregory K. W. K.
2013-01-01
Student mathematical errors are rarely random and often occur because students are applying procedures that they believe to be accurate. Traditional approaches often view such errors as indicators of students' failure to understand the construct in question, but some theorists view errors as opportunities for students to expand their mental model…
Perlee, Caroline J.; Casasent, David P.
1990-09-01
Error sources in an optical matrix-vector processor are analyzed in terms of their effect on the performance of the algorithms used to solve a set of nonlinear and linear algebraic equations. A direct and an iterative algorithm are used to solve a nonlinear, time-dependent case study from computational fluid dynamics. A simulator that emulates the data flow and number representation of the OLAP is used to study these error effects. The ability of each algorithm to tolerate or correct the error sources is quantified. These results are extended to the general case of solving nonlinear and linear algebraic equations on the optical system.
Chedoe, Indra; Molendijk, Harry; Hospes, Wobbe; Van den Heuvel, Edwin B.; Taxis, Katja
Objective To examine the effect of a multifaceted educational intervention on the incidence of medication preparation and administration errors in a neonatal intensive care unit (NICU). Design Prospective study with a preintervention and postintervention measurement using direct observation. Setting
Wang, Wentao; Lee, Yi Kuen
2012-01-01
Both theoretical analysis and nonlinear 2D numerical simulations are used to study the concentration difference and Peclet number effect on the measurement error of electroosmotic mobility in microchannels. We propose a compact analytical model
International Nuclear Information System (INIS)
Lee, Yong-Hee; Jang, Tong-Il; Lee, Soo-Kil
2007-01-01
The management of human factors in nuclear power plants (NPPs) has become a burdensome task during their operating period, after the design and construction period. Almost every study of major accidents emphasizes the prominent importance of human errors. Regardless of regulatory requirements such as the Periodic Safety Review, the management of human factors remains a main issue in reducing human errors and enhancing plant performance. However, it is not easy to find a more effective perspective on human errors from which to establish an engineering implementation plan for preventing them. This paper describes a system engineer's perspective on human errors and discusses its application to a recent study of human error events in Korean NPPs
The fundamental attribution error in detecting deception: the boy-who-cried-wolf effect.
O'Sullivan, Maureen
2003-10-01
Most people are unable to detect accurately when others are lying. Many explanations for this inability have been suggested but the cognitive heuristics involved in lie detection have received little attention. The present study offers evidence from two experiments, based on two different groups of observers, judging two different kinds of lies, presented in two different testing situations, that the fundamental attribution error significantly undermines the ability to detect honesty and deception accurately. Trait judgments of trustworthiness were highly correlated with state judgments of truthfulness, leading, as predicted, to positive correlations with honest detection accuracy and negative correlations with deception detection accuracy. More accurate lie detectors were significantly more likely than less accurate lie detectors to separate state and trait judgments of honesty. The effect of other biases, such as the halo effect and the truthfulness bias, also are examined. Implications for future research and practice are discussed.
The Effects of Lever Arm (Instrument Offset) Error on GRAV-D Airborne Gravity Data
Johnson, J. A.; Youngman, M.; Damiani, T.
2017-12-01
High quality airborne gravity collection with a 2-axis, stabilized platform gravity instrument, such as with a Micro-g LaCoste Turnkey Airborne Gravity System (TAGS), is dependent on the aircraft's ability to maintain "straight and level" flight. However, during flight there is constant rotation about the aircraft's center of gravity. Standard practice is to install the scientific equipment close to the aircraft's estimated center of gravity to minimize the relative rotations with aircraft motion. However, there remain small offsets between the instruments. These distance offsets, the lever arm, are used to define the rigid-body, spatial relationship between the IMU, GPS antenna, and airborne gravimeter within the aircraft body frame. The Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, which is collecting airborne gravity data across the U.S., uses a commercial software package for coupled IMU-GNSS aircraft positioning. This software incorporates a lever arm correction to calculate a precise position for the airborne gravimeter. The positioning software must do a coordinate transformation to relate each epoch of the coupled GNSS-IMU derived position to the position of the gravimeter within the constantly-rotating aircraft. This transformation requires three inputs: accurate IMU-measured aircraft rotations, GNSS positions, and lever arm distances between instruments. Previous studies show that correcting for the lever arm distances improves gravity results, but no sensitivity tests have been done to investigate how error in the lever arm distances affects the final airborne gravity products. This research investigates the effects of lever arm measurement error on airborne gravity data. GRAV-D lever arms are nominally measured to the cm-level using surveying equipment. "Truth" data sets will be created by processing GRAV-D flight lines with both relatively small lever arms and large lever arms. Then negative and positive incremental
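The lever-arm correction described above amounts to a rigid-body transformation: rotate the body-frame lever arm into the navigation frame using the IMU-measured attitude, then add it to the GNSS-derived position. The sketch below is a minimal illustration assuming a standard ZYX (yaw-pitch-roll) Euler convention; the frames and conventions used by the commercial positioning software may differ.

```python
import numpy as np

def body_to_nav(roll, pitch, yaw):
    """Direction cosine matrix from aircraft body frame to the local-level
    navigation frame (assumed aerospace ZYX Euler sequence, angles in rad)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def gravimeter_position(antenna_pos_nav, lever_arm_body, roll, pitch, yaw):
    """Translate the GNSS antenna position to the gravimeter by rotating
    the body-frame lever arm into the navigation frame."""
    return antenna_pos_nav + body_to_nav(roll, pitch, yaw) @ lever_arm_body
```

An error in `lever_arm_body` propagates through the rotation at every epoch, which is why the sensitivity tests described above vary the lever arm incrementally.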
TH-A-9A-03: Dosimetric Effect of Rotational Errors for Lung Stereotactic Body Radiotherapy
International Nuclear Information System (INIS)
Lee, J; Kim, H; Park, J; Kim, J; Kim, H; Ye, S
2014-01-01
Purpose: To evaluate the dosimetric effects on target volume and organs at risk (OARs) due to roll rotational errors in treatment setup of stereotactic body radiation therapy (SBRT) for lung cancer. Methods: There were a total of 23 volumetric modulated arc therapy (VMAT) plans for lung SBRT examined in this retrospective study. Each CT image of VMAT plans was intentionally rotated by ±1°, ±2°, and ±3° to simulate roll rotational setup errors. The axis of rotation was set at the center of T-spine. The target volume and OARs in the rotated CT images were re-defined by deformable registration of original contours. The dose distributions on each set of rotated images were re-calculated to cover the planning target volume (PTV) with the prescription dose before and after the couch translational correction. The dose-volumetric changes of PTVs and spinal cords were analyzed. Results: The differences in D95% of PTVs by −3°, −2°, −1°, 1°, 2°, and 3° roll rotations before the couch translational correction were on average −11.3±11.4%, −5.46±7.24%, −1.11±1.38% −3.34±3.97%, −9.64±10.3%, and −16.3±14.7%, respectively. After the couch translational correction, those values were −0.195±0.544%, −0.159±0.391%, −0.188±0.262%, −0.310±0.270%, −0.407±0.331%, and −0.433±0.401%, respectively. The maximum dose difference of spinal cord among the 23 plans even after the couch translational correction was 25.9% at −3° rotation. Conclusions: Roll rotational setup errors in lung SBRT significantly influenced the coverage of target volume using VMAT technique. This could be in part compensated by the translational couch correction. However, in spite of the translational correction, the delivered doses to the spinal cord could be more than the calculated doses. Therefore if rotational setup errors exist during lung SBRT using VMAT technique, the rotational correction would rather be considered to prevent over-irradiation of normal
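The geometry behind the residual spinal-cord error can be illustrated with a toy calculation: a roll rotation about a fixed axis displaces every structure, and a couch translation that re-centres the target leaves a residual displacement at any structure lying a different distance from the rotation axis. The distances and axis placement below are illustrative assumptions, not values from the study.

```python
import numpy as np

def roll_rotate(points, angle_deg, axis_center):
    """Rotate points (N x 3) about the superior-inferior axis through
    axis_center, mimicking a roll setup error (assumed axis convention)."""
    a = np.radians(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    return (points - axis_center) @ R.T + axis_center

# Illustrative geometry (mm): target 50 mm from the rotation axis,
# cord 100 mm away on the opposite side.
center = np.array([0.0, 0.0, 0.0])
target = np.array([[50.0, 0.0, 0.0]])
cord   = np.array([[-100.0, 0.0, 0.0]])

t_rot = roll_rotate(target, 3.0, center)
c_rot = roll_rotate(cord, 3.0, center)

# A couch translation that re-centres the target...
shift = target - t_rot
# ...leaves a residual displacement at the cord:
residual = np.linalg.norm(c_rot + shift - cord)
```

For a 3° roll with this geometry the residual at the cord is several millimetres, consistent with the abstract's point that translational correction alone cannot protect distant organs at risk.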
FUNCTIONAL AND EFFECTIVE CONNECTIVITY OF VISUAL WORD RECOGNITION AND HOMOPHONE ORTHOGRAPHIC ERRORS.
Directory of Open Access Journals (Sweden)
JOAN eGUÀRDIA-OLMOS
2015-05-01
Full Text Available The study of orthographic errors in a transparent language like Spanish is an important topic in relation to writing acquisition. The development of neuroimaging techniques, particularly functional magnetic resonance imaging (fMRI), has enabled the study of such relationships between brain areas. The main objective of the present study was to explore the patterns of effective connectivity involved in processing pseudohomophone orthographic errors among subjects with high and low spelling skills. Two groups of 12 Mexican subjects each, matched by age, were formed based on their results in a series of ad-hoc spelling-related out-of-scanner tests: a High Spelling Skills group (HSS) and a Low Spelling Skills group (LSS). During the fMRI session, two experimental tasks were applied (a spelling recognition task and a visuoperceptual recognition task). Regions of interest (ROIs) and their signal values were obtained for both tasks. Based on these values, structural equation models (SEMs) were obtained for each group of spelling competence (HSS and LSS) and each task through maximum likelihood (ML) estimation, and the model with the best fit was chosen in each case. Likewise, dynamic causal models (DCMs) were estimated for all the conditions across tasks and groups. The HSS group's SEM results suggest that, in the spelling recognition task, the right middle temporal gyrus and, to a lesser extent, the left parahippocampal gyrus receive most of the significant effects, whereas the DCM results in the visuoperceptual recognition task show less complex effects, but still congruent with the previous results, with an important role for several areas. In general, these results are consistent with the major findings of partial studies of linguistic activities, but they are the first analyses of statistical effective brain connectivity in transparent languages.
Directory of Open Access Journals (Sweden)
Jian Yan
2018-01-01
Full Text Available In this paper, a flux distribution model of the focal plane in a dish concentrator system is established based on the ray-tracing method. This model was adopted to investigate the influence of mirror slope error, solar direct normal irradiance, and the tracking error of the elevation-azimuth tracking device (EATD) on the focal spot characteristics (i.e., flux distribution, geometrical shape, centroid position, and intercept factor). The law governing the transmission of EATD tracking error to the dish concentrator was also studied. The results show that the azimuth tracking error of the concentrator decreases with increasing concentrator elevation angle, falling to 0 mrad when the elevation angle is 90°. The centroid position of the focal spot along the x-axis and y-axis has a linear relationship with the azimuth and elevation tracking errors of the EATD, respectively, which could be used to evaluate and calibrate the tracking error of the dish concentrator. Finally, the transmission law of the EATD azimuth tracking error in solar heliostats is analyzed, and a dish concentrator using a spin-elevation tracking device is proposed, which can reduce the effect of spin tracking error on the dish concentrator. This work could provide a basis for allocating the manufacturing precision of tracking devices and for developing a new type of tracking device.
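The vanishing of the azimuth tracking error at 90° elevation is consistent with a simple small-angle pointing model, sketched below under the assumption that an azimuth-drive error deflects the optical axis in proportion to the cosine of the elevation angle. This is an illustration of the stated trend, not the paper's ray-tracing computation.

```python
import math

def effective_azimuth_error(drive_error_mrad, elevation_deg):
    """Assumed small-angle model: an azimuth-drive error moves the optical
    axis by drive_error * cos(elevation). At 90 deg elevation the azimuth
    axis coincides with the optical axis, so the pointing error vanishes."""
    return drive_error_mrad * math.cos(math.radians(elevation_deg))

# A 2 mrad azimuth-drive error at several elevation angles:
errors = {el: effective_azimuth_error(2.0, el) for el in (0, 30, 60, 90)}
```

The dictionary shows the monotonic decrease with elevation that the abstract reports, reaching zero at 90°.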
Effect of cooling on thixotropic position-sense error in human biceps muscle.
Sekihara, Chikara; Izumizaki, Masahiko; Yasuda, Tomohiro; Nakajima, Takayuki; Atsumi, Takashi; Homma, Ikuo
2007-06-01
Muscle temperature affects muscle thixotropy. However, it is unclear whether changes in muscle temperature affect thixotropic position-sense errors. We studied the effect of cooling on thixotropic position-sense errors induced by short-length muscle contraction (hold-short conditioning) in the biceps of 12 healthy men. After hold-short conditioning of the right biceps muscle in a cooled (5.0 degrees C) or control (36.5 degrees C) environment, subjects perceived greater extension of the conditioned forearm at 5.0 degrees C. The angle differences between the two forearms following hold-short conditioning of the right biceps muscle in normal or cooled conditions were significantly different (-3.335 +/- 1.680 degrees at 36.5 degrees C vs. -5.317 +/- 1.096 degrees at 5.0 degrees C; P=0.043). Induction of a tonic vibration reflex in the biceps muscle elicited involuntary forearm elevation, and the angular velocities of the elevation differed significantly between arms conditioned in normal and cooled environments (1.583 +/- 0.326 degrees /s at 36.5 degrees C vs. 3.100 +/- 0.555 degrees /s at 5.0 degrees C, P=0.0039). Thus, a cooled environment impairs a muscle's ability to provide positional information, potentially leading to poor muscle performance.
Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli
2017-11-01
The heteroscedasticity treatment in residual error models directly impacts the model calibration and prediction uncertainty estimation. This study compares three methods to deal with the heteroscedasticity, including the explicit linear modeling (LM) method and nonlinear modeling (NL) method using hyperbolic tangent function, as well as the implicit Box-Cox transformation (BC). Then a combined approach (CA) combining the advantages of both LM and BC methods has been proposed. In conjunction with the first order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that the LM-SEP yields the poorest streamflow predictions with the widest uncertainty band and unrealistic negative flows. The NL and BC methods can better deal with the heteroscedasticity and hence their corresponding predictive performances are improved, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids the negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
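The two explicit heteroscedasticity treatments can be sketched on synthetic data: the LM method standardizes raw residuals by a flow-dependent standard deviation, while the BC method transforms flows before differencing so that residual variance becomes more nearly constant. All coefficients and the Box-Cox exponent below are illustrative assumptions, not calibrated values from the VIC application.

```python
import numpy as np

def boxcox(y, lam=0.3):
    """Box-Cox transform (lam != 0); compresses large flows so that the
    residual variance becomes closer to constant (implicit treatment)."""
    return (np.power(y, lam) - 1.0) / lam

def linear_sigma(y_sim, a=0.1, b=0.2):
    """Explicit linear model of the residual standard deviation:
    sigma_t = a + b * simulated flow (the LM method)."""
    return a + b * y_sim

# Synthetic flows with flow-proportional noise (illustrative data):
rng = np.random.default_rng(0)
y_sim = rng.uniform(1.0, 100.0, size=500)
y_obs = y_sim + linear_sigma(y_sim) * rng.standard_normal(500)

# Standardized residuals under each treatment:
eta_lm = (y_obs - y_sim) / linear_sigma(y_sim)               # LM: divide by sigma_t
eta_bc = boxcox(np.clip(y_obs, 1e-6, None)) - boxcox(y_sim)  # BC: transform first
```

With the noise generated to match the LM assumption, `eta_lm` is close to unit variance by construction; in a real calibration the residual-model parameters would be inferred jointly with the hydrologic model.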
Evaluating the effects of modeling errors for isolated finite three-dimensional targets
Henn, Mark-Alexander; Barnes, Bryan M.; Zhou, Hui
2017-10-01
Optical three-dimensional (3-D) nanostructure metrology utilizes a model-based metrology approach to determine critical dimensions (CDs) that are well below the inspection wavelength. Our project at the National Institute of Standards and Technology is evaluating how to attain key CD and shape parameters from engineered in-die capable metrology targets. More specifically, the quantities of interest are determined by varying the input parameters for a physical model until the simulations agree with the actual measurements within acceptable error bounds. As in most applications, establishing a reasonable balance between model accuracy and time efficiency is a complicated task. A well-established simplification is to model the intrinsically finite 3-D nanostructures as either periodic or infinite in one direction, reducing the computationally expensive 3-D simulations to usually less complex two-dimensional (2-D) problems. Systematic errors caused by this simplified model can directly influence the fitting of the model to the measurement data and are expected to become more apparent with decreasing lengths of the structures. We identify these effects using selected simulation results and present experimental setups, e.g., illumination numerical apertures and focal ranges, that can increase the validity of the 2-D approach.
Effective use of pre-job briefing as tool for the prevention of human error
International Nuclear Information System (INIS)
Schlump, Ansgar
2015-01-01
There is a fundamental demand to minimise the risks for workers and facilities while executing maintenance work. To ensure that facilities are secure and reliable, any deviation from normal operating behaviour has to be avoided. Accurate planning is the basis for minimising mistakes and making work more secure. All workers involved should understand how the work is to be done and what is expected of them in order to avoid human errors. Especially in nuclear power plants, human performance tools (HPT) have proved to be an effective instrument for minimising human errors. These human performance tools consist of numerous different tools that complement each other (e.g. the pre-job briefing). The safety culture of the plants is also characterised by these tools. Choosing the right HP tool is often a difficult task for the work planner: on the one hand, he wants to avoid mistakes during the execution of work, but on the other hand he does not want to burden the workers with unnecessary requirements. The proposed concept uses a simple risk analysis that takes into account the complexity of the task, past experience, and the consequences of failure. One main result of this risk analysis is a recommendation on the level of detail of the pre-job briefing, in order to reduce the risks for the staff involved to a minimum.
Practical continuous-variable quantum key distribution without finite sampling bandwidth effects.
Li, Huasheng; Wang, Chao; Huang, Peng; Huang, Duan; Wang, Tao; Zeng, Guihua
2016-09-05
In a practical continuous-variable quantum key distribution system, the finite sampling bandwidth of the analog-to-digital converter employed at the receiver's side may lead to inaccurate pulse peak sampling, which in turn introduces errors into parameter estimation. Subsequently, the system performance decreases and security loopholes are exposed to eavesdroppers. In this paper, we propose a novel data acquisition scheme which consists of two parts, i.e., a dynamic delay adjusting module and a statistical power feedback-control algorithm. The proposed scheme may dramatically improve the data acquisition precision of pulse peak sampling and remove the finite sampling bandwidth effects. Moreover, the optimal peak sampling position of a pulse signal can be dynamically calibrated by monitoring the change in the statistical power of the sampled data. This helps to resist some practical attacks, such as the well-known local oscillator calibration attack.
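The role of the dynamic delay adjusting module can be illustrated with a toy model of peak sampling: a pulse sampled on a fixed grid loses amplitude according to the offset between the nearest sample instant and the true peak, and scanning the delay finds the phase that minimizes this loss. The Gaussian pulse shape and the timing values are illustrative assumptions, not the scheme's actual signal model.

```python
import numpy as np

def peak_sampling_error(pulse_width_ns, sample_period_ns, delay_ns):
    """Relative amplitude loss when a Gaussian pulse is sampled on a fixed
    grid offset by delay_ns from the true peak (illustrative model)."""
    # Signed offset from the peak to the nearest sample instant:
    offset = ((delay_ns + sample_period_ns / 2) % sample_period_ns) - sample_period_ns / 2
    return 1.0 - np.exp(-0.5 * (offset / pulse_width_ns) ** 2)

# Scanning the delay (the role of the dynamic delay adjusting module)
# finds the phase with minimal peak-sampling loss:
delays = np.linspace(0.0, 10.0, 101)
losses = [peak_sampling_error(4.0, 10.0, d) for d in delays]
best_delay = delays[int(np.argmin(losses))]
```

In this toy setup the worst-case phase (sampling half a period away from the peak) loses over half the pulse amplitude, which is the kind of systematic underestimate that would bias parameter estimation.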
Naik, Aanand Dinkar; Rao, Raghuram; Petersen, Laura Ann
2008-01-01
Diagnostic errors are poorly understood despite being a frequent cause of medical errors. Recent efforts have aimed to advance the "basic science" of diagnostic error prevention by tracing errors to their most basic origins. Although a refined theory of diagnostic error prevention will take years to formulate, we focus on communication breakdown, a major contributor to diagnostic errors and an increasingly recognized preventable factor in medical mishaps. We describe a comprehensive framework that integrates the potential sources of communication breakdowns within the diagnostic process and identifies vulnerable steps in the diagnostic process where various types of communication breakdowns can precipitate error. We then discuss potential information technology-based interventions that may have efficacy in preventing one or more forms of these breakdowns. These possible intervention strategies include using new technologies to enhance communication between health providers and health systems, improve patient involvement, and facilitate management of information in the medical record. PMID:18373151
Trends and Correlation Estimation in Climate Sciences: Effects of Timescale Errors
Mudelsee, M.; Bermejo, M. A.; Bickert, T.; Chirila, D.; Fohlmeister, J.; Köhler, P.; Lohmann, G.; Olafsdottir, K.; Scholz, D.
2012-12-01
Trend describes time-dependence in the first moment of a stochastic process, and correlation measures the linear relation between two random variables. Accurately estimating the trend and correlation, including uncertainties, from climate time series data in the uni- and bivariate domain, respectively, allows first-order insights into the geophysical process that generated the data. Timescale errors, ubiquitous in paleoclimatology, where archives are sampled for proxy measurements and dated, pose a problem for this estimation. Statistical science and the various applied research fields, including geophysics, have almost completely ignored this problem owing to its near-intractability in theory. However, computational adaptations or replacements of traditional error formulas have become technically feasible. This contribution gives a short overview of such an adaptation package: bootstrap resampling combined with parametric timescale simulation. We study linear regression, parametric change-point models, and nonparametric smoothing for trend estimation. We introduce pairwise moving-block bootstrap resampling for correlation estimation. Both methods share robustness against autocorrelation and non-Gaussian distributional shape. We briefly touch on computing-intensive calibration of bootstrap confidence intervals and consider options to parallelize the related computer code. The following examples serve not only to illustrate the methods but also tell their own climate stories: (1) the search for climate drivers of the Agulhas Current on recent timescales, (2) the comparison of three stalagmite-based proxy series of regional, western German climate over the later part of the Holocene, and (3) trends and transitions in benthic oxygen isotope time series from the Cenozoic. Financial support by Deutsche Forschungsgemeinschaft (FOR 668, FOR 1070, MU 1595/4-1) and the European Commission (MC ITN 238512, MC ITN 289447) is acknowledged.
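The pairwise moving-block bootstrap mentioned above can be sketched as follows: aligned blocks of the two series are resampled jointly, preserving both the cross-correlation and the within-series autocorrelation, and the spread of the resampled correlation coefficients estimates the standard error. The block length and replicate count below are illustrative choices; the full method adds parametric timescale simulation on top of this resampling.

```python
import numpy as np

def pairwise_mbb_corr_se(x, y, block_len=10, n_boot=2000, seed=1):
    """Pairwise moving-block bootstrap for Pearson's r between two series:
    resample aligned (x, y) blocks so autocorrelation is preserved, and
    return the bootstrap standard error of the correlation coefficient."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    r_boot = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:n]
        r_boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    return r_boot.std(ddof=1)

# Example on synthetic correlated series:
rng = np.random.default_rng(42)
x = rng.standard_normal(300)
y = 0.5 * x + rng.standard_normal(300)
se = pairwise_mbb_corr_se(x, y)
```

Because whole (x, y) blocks move together, serial dependence within each series is carried into every bootstrap replicate, which is the source of the robustness against autocorrelation noted above.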
False memory ≠ false memory: DRM errors are unrelated to the misinformation effect.
Directory of Open Access Journals (Sweden)
James Ost
Full Text Available The DRM method has proved to be a popular and powerful, if controversial, way to study 'false memories'. One reason for the controversy is that the extent to which the DRM effect generalises to other kinds of memory error has been neither satisfactorily established nor subject to much empirical attention. In the present paper we contribute data to this ongoing debate. One hundred and twenty participants took part in a standard misinformation effect experiment, in which they watched some CCTV footage, were exposed to misleading post-event information about events depicted in the footage, and then completed free recall and recognition tests. Participants also completed a DRM test as an ostensibly unrelated filler task. Despite obtaining robust misinformation and DRM effects, there were no correlations between a broad range of misinformation and DRM effect measures (mean r = -.01. This was not due to reliability issues with our measures or a lack of power. Thus DRM 'false memories' and misinformation effect 'false memories' do not appear to be equivalent.
Nicewander, W. Alan
2018-01-01
Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either-or-both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient which translates into "increasing the reliability of…
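Spearman's correction divides the observed correlation by the square root of the product of the two reliabilities: r_true = r_xy / sqrt(r_xx * r_yy). A minimal sketch with illustrative numbers:

```python
def disattenuate(r_xy, rel_x, rel_y):
    """Spearman's correction for attenuation: the correlation between
    true scores equals the observed correlation divided by the square
    root of the product of the two reliabilities."""
    return r_xy / (rel_x * rel_y) ** 0.5

# Observed r = 0.30 with reliabilities 0.80 and 0.72 (illustrative values):
r_true = disattenuate(0.30, 0.80, 0.72)
```

Because the divisor is at most 1, the corrected correlation is always at least as large in magnitude as the observed one, which is the "removing all measurement error" property the abstract describes.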
Energy Technology Data Exchange (ETDEWEB)
Bard, D.; Chang, C.; Kahn, S. M.; Gilmore, K.; Marshall, S. [KIPAC, Stanford University, 452 Lomita Mall, Stanford, CA 94309 (United States); Kratochvil, J. M.; Huffenberger, K. M. [Department of Physics, University of Miami, Coral Gables, FL 33124 (United States); May, M. [Physics Department, Brookhaven National Laboratory, Upton, NY 11973 (United States); AlSayyad, Y.; Connolly, A.; Gibson, R. R.; Jones, L.; Krughoff, S. [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Ahmad, Z.; Bankert, J.; Grace, E.; Hannel, M.; Lorenz, S. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Haiman, Z.; Jernigan, J. G., E-mail: djbard@slac.stanford.edu [Department of Astronomy and Astrophysics, Columbia University, New York, NY 10027 (United States); and others
2013-09-01
We study the effect of galaxy shape measurement errors on predicted cosmological constraints from the statistics of shear peak counts with the Large Synoptic Survey Telescope (LSST). We use the LSST Image Simulator in combination with cosmological N-body simulations to model realistic shear maps for different cosmological models. We include both galaxy shape noise and, for the first time, measurement errors on galaxy shapes. We find that the measurement errors considered have relatively little impact on the constraining power of shear peak counts for LSST.
Li, Yue (Inventor); Bruck, Jehoshua (Inventor)
2018-01-01
A data device includes a memory having a plurality of memory cells configured to store data values in accordance with a predetermined rank modulation scheme, and a memory controller that receives a current error count from an error decoder of the data device for one or more data operations of the flash memory device and selects an operating mode for data scrubbing in accordance with the received error count and a program cycles count.
Chedoe, Indra; Molendijk, Harry; Hospes, Wobbe; Van den Heuvel, Edwin R; Taxis, Katja
2012-11-01
To examine the effect of a multifaceted educational intervention on the incidence of medication preparation and administration errors in a neonatal intensive care unit (NICU). Prospective study with a preintervention and postintervention measurement using direct observation. NICU in a tertiary hospital in the Netherlands. A multifaceted educational intervention including teaching and self-study. The incidence of medication preparation and administration errors. Clinical importance was assessed by three experts. The incidence of errors decreased from 49% (43-54%; 151 of 311 observed medications with one or more errors) to 31% (25-36%; 87 of 284). Preintervention, 0.3% (0-2%) of medications contained severe errors, 26% (21-31%) moderate and 23% (18-28%) minor errors; postintervention, 0% (0-2%) were severe, 23% (18-28%) moderate and 8% (5-12%) minor. A generalised estimating equations analysis provided an OR of 0.49 (0.29-0.84) for period (p=0.032; route of administration p=0.001; observer within period p=0.036). The multifaceted educational intervention seemed to have contributed to a significant reduction of the preparation and administration error rate, but other measures are needed to improve medication safety further.
Hoede, C.; Li, Z.
2001-01-01
In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
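The comparison step described above amounts to minimum-distance decoding: form the error vector between the received word and each codeword, and pick the codeword whose error vector has the smallest weight. A toy sketch over $(0,1)$-vectors (the codebook and example are ours):

```python
def hamming_distance(u, v):
    """Weight of the error vector between two (0,1)-vectors."""
    return sum(a != b for a, b in zip(u, v))

def decode(received, codebook):
    """Minimum-distance decoding: choose the codeword closest to the
    received word, i.e. the one whose error vector has least weight."""
    return min(codebook, key=lambda c: hamming_distance(received, c))

# A 4-bit repetition-style toy code with two codewords.
codebook = [(0, 0, 0, 0), (1, 1, 1, 1)]
received = (1, 0, 1, 1)            # one bit flipped from (1,1,1,1)
print(decode(received, codebook))  # → (1, 1, 1, 1)
```

With this code, any single-bit error yields a unique nearest codeword; ties (two-bit errors here) are where the decision on the original code word becomes ambiguous.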
The effectiveness of pretreatment physics plan review for detecting errors in radiation therapy
International Nuclear Information System (INIS)
Gopan, Olga; Zeng, Jing; Novak, Avrey; Nyflot, Matthew; Ford, Eric
2016-01-01
Purpose: The pretreatment physics plan review is a standard tool for ensuring treatment quality. Studies have shown that the majority of errors in radiation oncology originate in treatment planning, which underscores the importance of the pretreatment physics plan review. This quality assurance measure is fundamentally important and central to the safety of patients and the quality of care that they receive. However, little is known about its effectiveness. The purpose of this study was to analyze reported incidents to quantify the effectiveness of the pretreatment physics plan review with the goal of improving it. Methods: This study analyzed 522 potentially severe or critical near-miss events within an institutional incident learning system collected over a three-year period. Of these 522 events, 356 originated at a workflow point that was prior to the pretreatment physics plan review. The remaining 166 events originated after the pretreatment physics plan review and were not considered in the study. The applicable 356 events were classified into one of the three categories: (1) events detected by the pretreatment physics plan review, (2) events not detected but “potentially detectable” by the physics review, and (3) events “not detectable” by the physics review. Potentially detectable events were further classified by which specific checks performed during the pretreatment physics plan review detected or could have detected the event. For these events, the associated specific check was also evaluated as to the possibility of automating that check given current data structures. For comparison, a similar analysis was carried out on 81 events from the international SAFRON radiation oncology incident learning system. Results: Of the 356 applicable events from the institutional database, 180/356 (51%) were detected or could have been detected by the pretreatment physics plan review. Of these events, 125 actually passed through the physics review; however
Gyarmathy, V Anna; Johnston, Lisa G; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A
2014-02-01
Respondent driven sampling (RDS) and incentivized snowball sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania ("original sample") to assess if the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1-12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called "strudel effect" is discussed in the paper. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
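The coverage check reported above (simulated point estimates falling within the confidence interval of the original prevalence) can be illustrated with a simple Wald interval. This is our illustration of the idea, not the authors' method; apart from the 9.8% HIV prevalence, the sample size and simulated estimates below are hypothetical:

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a prevalence estimate."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

def coverage(original_p, n, simulated_estimates):
    """Fraction of simulated point estimates that fall inside the
    confidence interval of the original-sample prevalence."""
    lo, hi = wald_ci(original_p, n)
    inside = sum(lo <= p <= hi for p in simulated_estimates)
    return inside / len(simulated_estimates)

# Hypothetical: HIV prevalence 9.8% in a sample of ~300, with five
# simulated RDS point estimates, one of which falls outside the CI.
sims = [0.092, 0.101, 0.110, 0.085, 0.160]
print(coverage(0.098, 299, sims))  # → 0.8
```

In the study's terms, a coverage close to 1 across characteristics is what supports the conclusion that RDS and ISS produce statistically non-different prevalence values.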
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat
Ambros Berger; Thomas Gschwantner; Ronald E. McRoberts; Klemens. Schadauer
2014-01-01
National forest inventories typically estimate individual tree volumes using models that rely on measurements of predictor variables such as tree height and diameter, both of which are subject to measurement error. The aim of this study was to quantify the impacts of these measurement errors on the uncertainty of the model-based tree stem volume estimates. The impacts...
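A standard way to quantify such impacts is Monte Carlo propagation of the measurement errors through the volume model. The sketch below uses a generic form-factor model and assumed error magnitudes, not the inventory-specific models or error levels of the study:

```python
import math
import random

def stem_volume(d, h, form_factor=0.5):
    """Illustrative stem volume model: v = f * (pi/4) * d^2 * h,
    a cylinder scaled by an assumed form factor f."""
    return form_factor * math.pi / 4 * d ** 2 * h

def mc_volume_uncertainty(d, h, sd_d, sd_h, n=100_000, seed=1):
    """Propagate Gaussian diameter/height measurement errors through
    the volume model by Monte Carlo; returns (mean, sd) of volume."""
    rng = random.Random(seed)
    vols = [stem_volume(d + rng.gauss(0, sd_d), h + rng.gauss(0, sd_h))
            for _ in range(n)]
    mean = sum(vols) / n
    sd = (sum((v - mean) ** 2 for v in vols) / (n - 1)) ** 0.5
    return mean, sd

# Assumed measurements: d = 30 cm (±0.5 cm), h = 25 m (±1 m).
mean, sd = mc_volume_uncertainty(d=0.30, h=25.0, sd_d=0.005, sd_h=1.0)
print(f"volume ≈ {mean:.3f} m^3, sd ≈ {sd:.3f} m^3")
```

Because volume depends on the square of diameter, the diameter error contributes roughly twice its relative magnitude to the volume uncertainty, which is why diameter measurement quality usually dominates in such analyses.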
Error Consistency in Acquired Apraxia of Speech with Aphasia: Effects of the Analysis Unit
Haley, Katarina L.; Cunningham, Kevin T.; Eaton, Catherine Torrington; Jacks, Adam
2018-01-01
Purpose: Diagnostic recommendations for acquired apraxia of speech (AOS) have been contradictory concerning whether speech sound errors are consistent or variable. Studies have reported divergent findings that, on face value, could argue either for or against error consistency as a diagnostic criterion. The purpose of this study was to explain…
Sampling strategies for estimating brook trout effective population size
Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher
2012-01-01
The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...
Energy Technology Data Exchange (ETDEWEB)
Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-10-04
We calculate opacity from k(hν) = -ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. The error is Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B₀, where B is the transmitted backlighter (BL) signal and B₀ is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB₀/B₀, and consequently Δk/k = (1/ln T)(ΔB/B + ΔB₀/B₀) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
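The fractional-error propagation for the opacity can be checked numerically. The sketch below implements the relation Δk/k = (1/|ln T|)(ΔB/B + ΔB₀/B₀) + Δ(ρL)/(ρL) with illustrative values; the signal errors and areal density used are our assumptions, not the paper's data:

```python
import math

def opacity(T, rho_L):
    """k = -ln(T) / (rho * L), with rho_L the measured areal density."""
    return -math.log(T) / rho_L

def fractional_opacity_error(B, dB, B0, dB0, rho_L, d_rho_L):
    """Fractional opacity error from backlighter-signal errors and the
    areal-density error, with T = B / B0 (backgrounds subtracted)."""
    T = B / B0
    return (dB / B + dB0 / B0) / abs(math.log(T)) + d_rho_L / rho_L

# Illustrative: T = 0.2, 2% errors on each signal, 3% areal-density error.
err = fractional_opacity_error(B=0.2, dB=0.004, B0=1.0, dB0=0.02,
                               rho_L=1.0e-3, d_rho_L=3.0e-5)
print(f"dk/k ≈ {err:.3f}")
```

The 1/|ln T| factor shows why low transmission helps: at T = 0.2 the signal errors are divided by about 1.6, whereas near T = 1 they would be amplified enormously.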
Joo, Yeon Kyoung; Lee-Won, Roselyn J
2016-10-01
For members of a group negatively stereotyped in a domain, making mistakes can aggravate the influence of stereotype threat because negative stereotypes often blame target individuals and attribute the outcome to their lack of ability. Virtual agents offering real-time error feedback may influence performance under stereotype threat by shaping the performers' attributional perception of errors they commit. We explored this possibility with female drivers, considering the prevalence of the "women-are-bad-drivers" stereotype. Specifically, we investigated how in-vehicle voice agents offering error feedback based on responsibility attribution (internal vs. external) and outcome attribution (ability vs. effort) influence female drivers' performance under stereotype threat. In addressing this question, we conducted an experiment in a virtual driving simulation environment that provided moment-to-moment error feedback messages. Participants performed a challenging driving task and made mistakes preprogrammed to occur. Results showed that the agent's error feedback with outcome attribution moderated the stereotype threat effect on driving performance. Participants under stereotype threat had a smaller number of collisions when the errors were attributed to effort than to ability. In addition, outcome attribution feedback moderated the effect of responsibility attribution on driving performance. Implications of these findings are discussed.
Energy Technology Data Exchange (ETDEWEB)
Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))
1990-01-01
The level of field errors in an FEL is an important determinant of its performance. We have computed the 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code, which now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of performance versus error level for cases with multiple seeds illustrates the variations attributable to the stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
Edge placement error control and Mask3D effects in High-NA anamorphic EUV lithography
van Setten, Eelco; Bottiglieri, Gerardo; de Winter, Laurens; McNamara, John; Rusu, Paul; Lubkoll, Jan; Rispens, Gijsbert; van Schoot, Jan; Neumann, Jens Timo; Roesch, Matthias; Kneer, Bernhard
2017-10-01
To enable cost-effective shrink at the 3nm node and beyond, and to extend Moore's law into the next decade, ASML is developing a new high-NA EUV platform. The high-NA system is targeted to feature a numerical aperture (NA) of 0.55 to extend the single exposure resolution limit to 8nm half pitch. The system is being designed to achieve an on-product-overlay (OPO) performance well below 2nm, a high image contrast to drive down local CD errors and to obtain global CDU at sub-1nm level to be able to meet customer edge placement error (EPE) requirements for the devices of the future. EUV scanners employ reflective Bragg multi-layer mirrors in the mask and in the Projection Optics Box (POB) that is used to project the mask pattern into the photoresist on the silicon wafer. These MoSi multi-layer mirrors are tuned for maximum reflectivity, and thus productivity, at 13.5nm wavelength. The angular range of incident light for which a high reflectivity at the reticle can be obtained is limited to ±11°, exceeding the maximum angle occurring in current 0.33NA scanners at 4x demagnification. At 0.55NA the maximum angle at reticle level would extend up to 17° in the critical (scanning) direction and compromise the imaging performance of horizontal features severely. To circumvent this issue a novel anamorphic optics design has been introduced, which has a 4x demagnification in the X- (slit) direction and 8x demagnification in the Y- (scanning) direction as well as a central obscuration in the exit pupil. In this work we will show that the EUV high-NA anamorphic concept can successfully solve the angular reflectivity issues and provide good imaging performance in both directions. Several unique imaging challenges in comparison to the 0.33NA isomorphic baseline are being studied, such as the impact of the central obscuration in the POB and Mask-3D effects at increased NA that seem most pronounced for vertical features. These include M3D induced contrast loss and non
Clarke, D L; Kong, V Y; Naidoo, L C; Furlong, H; Aldous, C
2013-01-01
Acute surgical patients are particularly vulnerable to human error. The Acute Physiological Support Team (APST) was created with the twin objectives of identifying high-risk acute surgical patients in the general wards and reducing both the incidence of error and the impact of error on these patients. A number of error taxonomies were used to understand the causes of human error, and a simple risk stratification system was adopted to identify patients who are particularly at risk of error. During the period November 2012-January 2013, a total of 101 surgical patients were cared for by the APST at Edendale Hospital. The average age was forty years. There were 36 females and 65 males. There were 66 general surgical patients and 35 trauma patients. Fifty-six patients were referred on the day of their admission. The average length of stay in the APST was four days. Eleven patients were haemodynamically unstable on presentation and twelve were clinically septic. The reasons for referral were sepsis (4), respiratory distress (3), acute kidney injury (AKI) (38), post-operative monitoring (39), pancreatitis (3), ICU down-referral (7), hypoxia (5), low GCS (1) and coagulopathy (1). The mortality rate was 13%. A total of thirty-six patients experienced 56 errors. A total of 143 interventions were initiated by the APST. These included institution or adjustment of intravenous fluids (101), blood transfusion (12), antibiotics (9), management of neutropenic sepsis (1), central line insertion (3), optimization of oxygen therapy (7), correction of electrolyte abnormality (8) and correction of coagulopathy (2). CONCLUSION: Our intervention combined current taxonomies of error with a simple risk stratification system and is a variant of the defence-in-depth strategy of error reduction. We effectively identified and corrected a significant number of human errors in high-risk acute surgical patients. This audit has helped understand the common sources of error in the general surgical wards and will inform
Effects of sample size on the second magnetization peak in ...
Indian Academy of Sciences (India)
the sample size decreases – a result that could be interpreted as a size effect in the order–disorder vortex matter phase transition. However, local magnetic measurements trace this effect to metastable disordered vortex states, revealing the same order–disorder transition induction in samples of different size. Keywords.
Vlasceanu, Madalina; Drach, Rae; Coman, Alin
2018-05-03
The mind is a prediction machine. In most situations, it has expectations as to what might happen. But when predictions are invalidated by experience (i.e., prediction errors), the memories that generate these predictions are suppressed. Here, we explore the effect of prediction error on listeners' memories following social interaction. We find that listening to a speaker recounting experiences similar to one's own triggers prediction errors on the part of the listener that lead to the suppression of her memories. This effect, we show, is sensitive to a perspective-taking manipulation, such that individuals who are instructed to take the perspective of the speaker experience memory suppression, whereas individuals who undergo a low-perspective-taking manipulation fail to show a mnemonic suppression effect. We discuss the relevance of these findings for our understanding of the bidirectional influences between cognition and social contexts, as well as for the real-world situations that involve memory-based predictions.
Haverkamp, Nicolas; Beauducel, André
2017-01-01
We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodical approaches of repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations without any between-group effect or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS) and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered, as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results argue for the use of rANOVA with the Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The
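The corrections compared here (Greenhouse-Geisser and Huynh-Feldt) both rescale degrees of freedom by an estimate of the sphericity parameter epsilon. A minimal sketch of the Greenhouse-Geisser epsilon computed from a population covariance matrix; the compound-symmetry example mirrors the MLM-CS structure above (our code, not the authors' simulation):

```python
import numpy as np

def greenhouse_geisser_epsilon(S):
    """Greenhouse-Geisser epsilon from the m x m covariance matrix of
    the m measurement occasions. epsilon = 1 under perfect sphericity;
    the lower bound is 1/(m - 1)."""
    m = S.shape[0]
    C = np.eye(m) - np.ones((m, m)) / m   # centering matrix
    Sc = C @ S @ C                        # double-centered covariance
    return np.trace(Sc) ** 2 / ((m - 1) * np.trace(Sc @ Sc))

# Compound symmetry (equal variances, equal correlations) satisfies
# sphericity, so epsilon equals 1.
m, rho = 6, 0.3
S_cs = np.full((m, m), rho) + np.eye(m) * (1 - rho)
print(round(greenhouse_geisser_epsilon(S_cs), 3))  # → 1.0
```

Mixing low and high inter-correlations between occasions, as in the violation populations above, drives epsilon below 1, which is what the corrections then use to shrink the rANOVA degrees of freedom.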
Effect of Pointing Error on the BER Performance of an Optical CDMA FSO Link with SIK Receiver
Nazrul Islam, A. K. M.; Majumder, S. P.
2017-12-01
An analytical approach is presented for an optical code division multiple access (OCDMA) system over a free space optical (FSO) channel considering the effect of pointing error between the transmitter and the receiver. Analysis is carried out with an optical sequence inverse keying (SIK) correlator receiver with intensity modulation and direct detection (IM/DD) to find the bit error rate (BER) with pointing error. The results are evaluated numerically in terms of signal-to-noise plus multi-access interference (MAI) ratio, BER and power penalty due to pointing error. It is noticed that the OCDMA FSO system is highly affected by pointing error, with significant power penalty at BERs of 10⁻⁶ and 10⁻⁹. For example, the penalty at a BER of 10⁻⁹ is found to be 9 dB corresponding to a normalized pointing error of 1.4 for 16 users with a processing gain of 256, and is reduced to 6.9 dB when the processing gain is increased to 1,024.
Hoogeveen, Suzanne; Schjoedt, Uffe; van Elk, Michiel
2018-06-19
This study examines the effects of expected transcranial stimulation on the error(-related) negativity (Ne or ERN) and the sense of agency in participants who perform a cognitive control task. Placebo transcranial direct current stimulation was used to elicit expectations of transcranially induced cognitive improvement or impairment. The improvement/impairment manipulation affected both the Ne/ERN and the sense of agency (i.e., whether participants attributed errors to oneself or the brain stimulation device): Expected improvement increased the ERN in response to errors compared with both impairment and control conditions. Expected impairment made participants falsely attribute errors to the transcranial stimulation. This decrease in sense of agency was correlated with a reduced ERN amplitude. These results show that expectations about transcranial stimulation impact users' neural response to self-generated errors and the attribution of responsibility-especially when actions lead to negative outcomes. We discuss our findings in relation to predictive processing theory according to which the effect of prior expectations on the ERN reflects the brain's attempt to generate predictive models of incoming information. By demonstrating that induced expectations about transcranial stimulation can have effects at a neural level, that is, beyond mere demand characteristics, our findings highlight the potential for placebo brain stimulation as a promising tool for research.
Measurement of Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter
Imig, Astrid; Stephenson, Edward
2009-10-01
The Storage Ring EDM Collaboration used the Cooler Synchrotron (COSY) and the EDDA detector at the Forschungszentrum Jülich to explore systematic errors in very sensitive storage-ring polarization measurements. Polarized deuterons of 235 MeV were used. The analyzer target was a block of 17 mm thick carbon placed close to the beam so that white noise applied to upstream electrostatic plates increases the vertical phase space of the beam, allowing deuterons to strike the front face of the block. For a detector acceptance that covers laboratory angles larger than 9°, the efficiency for particles to scatter into the polarimeter detectors was about 0.1% (all directions) and the vector analyzing power was about 0.2. Measurements were made of the sensitivity of the polarization measurement to beam position and angle. Both vector and tensor asymmetries were measured using beams with both vector and tensor polarization. Effects were seen that depend upon both the beam geometry and the data rate in the detectors.
Effectiveness of Toyota process redesign in reducing thyroid gland fine-needle aspiration error.
Raab, Stephen S; Grzybicki, Dana Marie; Sudilovsky, Daniel; Balassanian, Ronald; Janosky, Janine E; Vrbin, Colleen M
2006-10-01
Our objective was to determine whether the Toyota Production System process redesign resulted in diagnostic error reduction for patients who underwent cytologic evaluation of thyroid nodules. In this longitudinal, nonconcurrent cohort study, we compared the diagnostic error frequency of a thyroid aspiration service before and after implementation of error reduction initiatives consisting of adoption of a standardized diagnostic terminology scheme and an immediate interpretation service. A total of 2,424 patients underwent aspiration. Following terminology standardization, the false-negative rate decreased from 41.8% to 19.1% (P = .006), and the specimen nondiagnostic rate increased from 5.8% to 19.8% (P … The Toyota process change led to significantly fewer diagnostic errors for patients who underwent thyroid fine-needle aspiration.
Directory of Open Access Journals (Sweden)
Endang Fauziati
2009-06-01
months afterwards. They were analyzed quantitatively and qualitatively. The result indicates that the ET changed the state of the learners' ungrammatical items, which became dynamic. At a certain period, some appeared; then, due to the ET, some were destabilized, some fluctuated, and others remained stabilized. New errors appeared as the learners started learning to use new grammatical items. The conclusion drawn from this study is that ET can change the state of the learners' IL errors; ET contributes to the destabilization process. Errors may persist momentarily, but they can be destabilized. ET still works for learners who are past puberty. Thus, there is a great possibility for the learners to acquire complete TL grammar, since their ungrammatical items are dynamic. Keywords: error treatment, interlanguage, fossilization, stabilization, destabilization.
Some Considerations Regarding Plane to Plane Parallelism Error Effects in Robotic Systems
Directory of Open Access Journals (Sweden)
Stelian Alaci
2015-06-01
Full Text Available The paper shows that by imposing the parallelism constraint between the measured plane and the reference plane, the position of the current plane is not univocally specified, and it is impossible to specify how to attain the parallelism errors imposed by accuracy constraints. The parameters involved in the calculation of the plane-to-plane parallelism error can be used to specify univocally the relative position between the two planes.
The effect of phase advance errors between interaction points on beam halos
International Nuclear Information System (INIS)
Chen, T.; Irwin, J.; Siemann, R.H.
1995-01-01
Phase advance errors between interaction points (IP) break the symmetry of multi-IP colliders. This symmetry breaking introduces new, lower order resonances which may change the halo from the beam-beam interaction dramatically. In this paper, the mechanism of introducing new resonances is discussed. Simulation results showing the changes due to phase advance errors are presented. Simulation results are compared with experimental measurements at VEPP-2M.
Effects of Lexico-syntactic Errors on Teaching Materials: A Study of Textbooks Written by Nigerians
Directory of Open Access Journals (Sweden)
Peace Chinwendu Israel
2014-01-01
Full Text Available This study examined lexico-syntactic errors in selected textbooks written by Nigerians. Our focus was on educated bilinguals (acrolect) who acquired their primary, secondary and tertiary education in Nigeria, and the selected textbooks were textbooks published by Vanity Publishers/Press. The participants (authors) cut across the three major ethnic groups in Nigeria – Hausa, Igbo and Yoruba – and the selection of the textbooks covered the major disciplines of study. We adopted the descriptive research design and specifically employed the survey method to accomplish the purpose of our exploratory research. The lexico-syntactic errors in the selected textbooks were identified and classified into various categories. These errors were not different from those identified over the years in students' essays and exam scripts. This buttressed our argument that students are merely the conveyor belt of errors contained in the teaching material and that we can analyse the students' lexico-syntactic errors in tandem with errors contained in the material used in teaching.
International Nuclear Information System (INIS)
Thiel, K.
1975-01-01
Using the fission track dating method by means of uranium fission tracks in meteorites and moon samples (from the successful Apollo and Luna missions), special problems arise, as the samples frequently have a very great age and were subjected to the immediate effect of primary cosmic radiation. To determine the share of induced fission tracks, an extended 'cosmic ray' simulation experiment was carried out on the p-synchrocyclotron at CERN, Geneva; the performance and results of the test with the proton flux and U fission track measurements are dealt with in detail. (HK/LH)
The effect of TWD estimation error on the geometry of machined surfaces in micro-EDM milling
DEFF Research Database (Denmark)
Puthumana, Govindan; Bissacco, Giuliano; Hansen, Hans Nørgaard
In micro EDM (electrical discharge machining) milling, tool electrode wear must be effectively compensated in order to achieve high accuracy of machined features [1]. Tool wear compensation in micro-EDM milling can be based on off-line techniques with limited accuracy such as estimation...... and statistical characterization of the discharge population [3]. The TWD based approach permits the direct control of the position of the tool electrode front surface. However, TWD estimation errors will generate a self-amplifying error on the tool electrode axial depth during micro-EDM milling. Therefore....... The error propagation effect is demonstrated through a software simulation tool developed by the authors for determination of the correct TWD for subsequent use in compensation of electrode wear in EDM milling. The implemented model uses an initial arbitrary estimation of TWD and a single experiment...
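The self-amplifying error described above can be illustrated with a toy iteration. The coupling factor and the linear wear model below are assumptions for illustration only, not the authors' simulation tool:

```python
def depth_errors(twd_true, twd_est, discharges_per_layer, n_layers, k=0.05):
    """Toy model of error propagation in micro-EDM milling: each layer adds
    an axial error proportional to the TWD (tool wear per discharge)
    misestimate, and the error already present is amplified by an assumed
    coupling factor k (a deeper or shallower front changes engagement).
    Returns the accumulated axial error after each layer."""
    err, history = 0.0, []
    for _ in range(n_layers):
        err = err * (1.0 + k) + (twd_true - twd_est) * discharges_per_layer
        history.append(err)
    return history
```

With a perfect estimate the error stays at zero; any bias grows faster than linearly because of the amplification term, which is the qualitative behavior the record describes.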
Methods of human body odor sampling: the effect of freezing.
Lenochova, Pavlina; Roberts, S Craig; Havlicek, Jan
2009-02-01
Body odor sampling is an essential tool in human chemical ecology research. However, methodologies of individual studies vary widely in terms of sampling material, length of sampling, and sample processing. Although these differences might have a critical impact on results obtained, almost no studies test validity of current methods. Here, we focused on the effect of freezing samples between collection and use in experiments involving body odor perception. In 2 experiments, we tested whether axillary odors were perceived differently by raters when presented fresh or having been frozen and whether several freeze-thaw cycles affected sample quality. In the first experiment, samples were frozen for 2 weeks, 1 month, or 4 months. We found no differences in ratings of pleasantness, attractiveness, or masculinity between fresh and frozen samples. Similarly, almost no differences between repeatedly thawed and fresh samples were found. We found some variations in intensity; however, this was unrelated to length of storage. The second experiment tested differences between fresh samples and those frozen for 6 months. Again no differences in subjective ratings were observed. These results suggest that freezing has no significant effect on perceived odor hedonicity and that samples can be reliably used after storage for relatively long periods.
Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B
2000-12-01
Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. Phase 1 experiments supported the two-compartment DXA soft tissue model and established that the pixel ratio of low to high energy (the R value) is a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors based on theoretical calculations is small and may not be of clinical or research significance.
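The two-compartment model referred to above can be sketched as a linear interpolation in R. The calibration R values below are illustrative placeholders, not instrument constants:

```python
def percent_fat(r_obs, r_fat=1.21, r_lean=1.40):
    """Two-compartment DXA model: %fat is linear in the measured pixel
    R value between the pure-fat and pure-lean calibration values."""
    return 100.0 * (r_lean - r_obs) / (r_lean - r_fat)

def hydration_error(r_obs, r_lean_adult=1.40, r_lean_child=1.41, r_fat=1.21):
    """%fat error incurred by applying a constant adult lean R value to
    more hydrated (pediatric) lean tissue whose true R differs slightly."""
    return (percent_fat(r_obs, r_fat, r_lean_adult)
            - percent_fat(r_obs, r_fat, r_lean_child))
```

A small shift in the assumed lean R value produces a %fat error of a few percent at most, consistent with the small error magnitudes the study reports.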
Laurier, Dominique; Rage, Estelle
2018-01-01
Exposure measurement error represents one of the most important sources of uncertainty in epidemiology. When exposure uncertainty is not or only poorly accounted for, it can lead to biased risk estimates and a distortion of the shape of the exposure-response relationship. In occupational cohort studies, the time-dependent nature of exposure and changes in the method of exposure assessment may create complex error structures. When a method of group-level exposure assessment is used, individual worker practices and the imprecision of the instrument used to measure the average exposure for a group of workers may give rise to errors that are shared between workers, within workers or both. In contrast to unshared measurement error, the effects of shared errors remain largely unknown. Moreover, exposure uncertainty and magnitude of exposure are typically highest for the earliest years of exposure. We conduct a simulation study based on exposure data of the French cohort of uranium miners to compare the effects of shared and unshared exposure uncertainty on risk estimation and on the shape of the exposure-response curve in proportional hazards models. Our results indicate that uncertainty components shared within workers cause more bias in risk estimation and a more severe attenuation of the exposure-response relationship than unshared exposure uncertainty or exposure uncertainty shared between individuals. These findings underline the importance of careful characterisation and modeling of exposure uncertainty in observational studies. PMID:29408862
Gardner, Aimee K; Abdelfattah, Kareem; Wiersch, John; Ahmed, Rami A; Willis, Ross E
2015-01-01
Error management training is an approach that encourages exposure to errors during initial skill acquisition so that learners can be equipped with important error identification, management, and metacognitive skills. The purpose of this study was to determine how an error-focused training program affected performance, retention, and transfer of central venous catheter (CVC) placement skills when compared with traditional training methodologies. Surgical interns (N = 30) participated in a 1-hour session featuring an instructional video and practice performing internal jugular (IJ) and subclavian (SC) CVC placement with guided instruction. All interns underwent baseline knowledge and skill assessment for IJ and SC (pretest) CVC placement; watched a "correct-only" (CO) or "correct + error" (CE) instructional video; practiced for 30 minutes; and were posttested on knowledge and IJ and SC CVC placement. Skill retention and transfer (femoral CVC placement) were assessed 30 days later. All skills tests (pretest, posttest, and transfer) were videorecorded and deidentified for evaluation by a single blinded instructor using a validated 17-item checklist. Both groups exhibited significant improvements from pretest to posttest. Incorporating error-based activities and discussions into training programs can be beneficial for skill retention and transfer. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Bai, Sen; Li, Guangjun; Wang, Maojie; Jiang, Qinfeng; Zhang, Yingjie [State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan (China); Wei, Yuquan, E-mail: yuquawei@vip.sina.com [State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan (China)
2013-07-01
The purpose of this study was to investigate the effect of multileaf collimator (MLC) leaf position, collimator rotation angle, and accelerator gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma. To compare dosimetric differences between the simulated plans and the clinical plans using evaluation parameters, 6 patients with nasopharyngeal carcinoma were selected for simulation of systematic and random MLC leaf position errors, collimator rotation angle errors, and accelerator gantry rotation angle errors. Dose distribution was highly sensitive to systematic MLC leaf position errors, with the effect varying with field size. When the systematic MLC position errors were 0.5, 1, and 2 mm, respectively, the maximum values of the mean dose deviation, observed in the parotid glands, were 4.63%, 8.69%, and 18.32%, respectively. The dosimetric effect was comparatively small for systematic MLC shift errors. For random MLC errors up to 2 mm and collimator and gantry rotation angle errors up to 0.5°, the dosimetric effect was negligible. We suggest that quality control be regularly conducted for MLC leaves, so as to ensure that systematic MLC leaf position errors are within 0.5 mm. Because the dosimetric effect of 0.5° collimator and gantry rotation angle errors is negligible, setting a proper threshold for allowed errors of collimator and gantry rotation angle may increase treatment efficacy and reduce treatment time.
Ziebart, Christina; Giangregorio, Lora M; Gibbs, Jenna C; Levine, Iris C; Tung, James; Laing, Andrew C
2017-06-14
A wide variety of accelerometer systems, with differing sensor characteristics, are used to detect impact loading during physical activities. The study examined the effects of system characteristics on measured peak impact loading during a variety of activities by comparing outputs from three separate accelerometer systems, and by assessing the influence of simulated reductions in operating range and sampling rate. Twelve healthy young adults performed seven tasks (vertical jump, box drop, heel drop, and bilateral single leg and lateral jumps) while simultaneously wearing three tri-axial accelerometers including a criterion standard laboratory-grade unit (Endevco 7267A) and two systems primarily used for activity-monitoring (ActiGraph GT3X+, GCDC X6-2mini). Peak acceleration (gmax) was compared across accelerometers, and errors resulting from down-sampling (from 640 to 100 Hz) and range-limiting (to ±6 g) the criterion standard output were characterized. The Actigraph activity-monitoring accelerometer underestimated gmax by an average of 30.2%; underestimation by the X6-2mini was not significant. Underestimation error was greater for tasks with greater impact magnitudes. gmax was underestimated when the criterion standard signal was down-sampled (by an average of 11%), range limited (by 11%), and by combined down-sampling and range-limiting (by 18%). These effects explained 89% of the variance in gmax error for the Actigraph system. This study illustrates that both the type and intensity of activity should be considered when selecting an accelerometer for characterizing impact events. In addition, caution may be warranted when comparing impact magnitudes from studies that use different accelerometers, and when comparing accelerometer outputs to osteogenic impact thresholds proposed in literature. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
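The two simulated degradations above can be reproduced on a synthetic impact; the pulse shape, timing, and naive decimation below are illustrative assumptions, not the study's processing pipeline:

```python
import numpy as np

def peak_g(signal):
    """Peak absolute acceleration of a sampled signal, in g."""
    return float(np.max(np.abs(signal)))

fs_hi = 640                                  # criterion sampling rate, Hz
t = np.arange(0.0, 0.5, 1.0 / fs_hi)
impact = np.zeros_like(t)
dur = int(0.008 * fs_hi)                     # ~8 ms half-sine impact pulse
impact[101:101 + dur] = 10.0 * np.sin(np.linspace(0.0, np.pi, dur))

range_limited = np.clip(impact, -6.0, 6.0)   # simulated ±6 g operating range
down_sampled = impact[:: fs_hi // 100]       # naive 640 -> ~100 Hz decimation
```

Clipping caps the 10 g peak at 6 g, and decimation can miss the true maximum entirely, so both mechanisms bias gmax low, consistent with the reported underestimation.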
Response of residential electricity demand to price: The effect of measurement error
International Nuclear Information System (INIS)
Alberini, Anna; Filippini, Massimo
2011-01-01
In this paper we present an empirical analysis of the residential demand for electricity using annual aggregate data at the state level for 48 US states from 1995 to 2007. Earlier literature has examined residential energy consumption at the state level using annual or monthly data, focusing on the variation in price elasticities of demand across states or regions, but has failed to recognize or address two major issues. The first is that, when fitting dynamic panel models, the lagged consumption term in the right-hand side of the demand equation is endogenous. This has resulted in potentially inconsistent estimates of the long-run price elasticity of demand. The second is that energy price is likely mismeasured. To address these issues, we estimate a dynamic partial adjustment model using the Kiviet corrected Least Square Dummy Variables (LSDV) (1995) and the Blundell-Bond (1998) estimators. We find that the long-term elasticities produced by the Blundell-Bond system GMM methods are largest, and that from the bias-corrected LSDV are greater than that from the conventional LSDV. From an energy policy point of view, the results obtained using the Blundell-Bond estimator where we instrument for price imply that a carbon tax or other price-based policy may be effective in discouraging residential electricity consumption and hence curbing greenhouse gas emissions in an electricity system mainly based on coal and gas power plants. - Research Highlights: → Updated information on price elasticities for the US energy policy. → Taking into account measurement error in the price variable increases the price elasticity. → Room for discouraging residential electricity consumption using price increases.
Response of residential electricity demand to price: The effect of measurement error
Energy Technology Data Exchange (ETDEWEB)
Alberini, Anna [Department of Agricultural Economics, University of Maryland (United States); Centre for Energy Policy and Economics (CEPE), ETH Zurich (Switzerland); Gibson Institute and Institute for a Sustainable World, School of Biological Sciences, Queen's University Belfast, Northern Ireland (United Kingdom); Filippini, Massimo, E-mail: mfilippini@ethz.ch [Centre for Energy Policy and Economics (CEPE), ETH Zurich (Switzerland); Department of Economics, University of Lugano (Switzerland)
2011-09-15
In this paper we present an empirical analysis of the residential demand for electricity using annual aggregate data at the state level for 48 US states from 1995 to 2007. Earlier literature has examined residential energy consumption at the state level using annual or monthly data, focusing on the variation in price elasticities of demand across states or regions, but has failed to recognize or address two major issues. The first is that, when fitting dynamic panel models, the lagged consumption term in the right-hand side of the demand equation is endogenous. This has resulted in potentially inconsistent estimates of the long-run price elasticity of demand. The second is that energy price is likely mismeasured. To address these issues, we estimate a dynamic partial adjustment model using the Kiviet corrected Least Square Dummy Variables (LSDV) (1995) and the Blundell-Bond (1998) estimators. We find that the long-term elasticities produced by the Blundell-Bond system GMM methods are largest, and that from the bias-corrected LSDV are greater than that from the conventional LSDV. From an energy policy point of view, the results obtained using the Blundell-Bond estimator where we instrument for price imply that a carbon tax or other price-based policy may be effective in discouraging residential electricity consumption and hence curbing greenhouse gas emissions in an electricity system mainly based on coal and gas power plants. - Research Highlights: → Updated information on price elasticities for the US energy policy. → Taking into account measurement error in the price variable increases the price elasticity. → Room for discouraging residential electricity consumption using price increases.
International Nuclear Information System (INIS)
McDonald, D.W.
1977-01-01
Thermocouples with ferromagnetic thermoelements (iron, Alumel, Nisil) are used extensively in industry. We have observed the generation of voltage spikes within ferromagnetic wires when the wires are placed in an alternating magnetic field. This effect has implications for thermocouple thermometry, where it was first observed. For example, the voltage generated by this phenomenon will contaminate the thermocouple thermal emf, resulting in temperature measurement error.
Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim
2017-09-01
Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10⁻⁸ m/s² ≈ 10⁻⁹ g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss Raman duration and frequency step-size-dependent magnetic field measurement uncertainty, present vector light shift and tensor light shift induced magnetic field measurement offset, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and reducing the quadratic Zeeman-effect-induced systematic error in Raman transition-based precision measurements, such as atomic interferometer gravimeters.
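The order of magnitude of a quadratic-Zeeman-induced gravity bias can be estimated with a simplified phase integral. The coefficient is the standard 87Rb clock-transition value; the uniform weighting and the flat field profile are simplifying assumptions, not the paper's full sensitivity-function treatment:

```python
import numpy as np

K_B2 = 575.15                 # 87Rb quadratic Zeeman coefficient, Hz/G^2
K_EFF = 4.0 * np.pi / 780e-9  # two-photon Raman wave number, rad/m
T = 0.1                       # interferometer pulse separation time, s

def gravity_bias(b_gauss, times):
    """Integrate the quadratic Zeeman frequency shift over the sequence
    (trapezoidal rule, uniform weighting) and convert the accumulated
    phase to an equivalent gravity error via delta_g = phi / (k_eff T^2)."""
    dnu = K_B2 * np.asarray(b_gauss) ** 2                    # shift, Hz
    dt = np.diff(times)
    phi = 2.0 * np.pi * float(np.sum(0.5 * (dnu[1:] + dnu[:-1]) * dt))
    return phi / (K_EFF * T ** 2)                            # m/s^2

times = np.linspace(0.0, 2 * T, 200)
bias = gravity_bias(np.full_like(times, 3e-3), times)        # 3 mG field
```

A constant 3 mG field gives a bias of a few times 10⁻⁸ m/s², i.e. a few μGal, the same order as the 2.04 μGal evaluated for GAIN. (In a real Mach-Zehnder sequence a spatially uniform shift largely cancels; it is the mapped field inhomogeneity that sets the residual error.)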
Effects of XPS operational parameters on investigated sample surfaces
International Nuclear Information System (INIS)
Mrad, O.; Ismail, I.
2013-04-01
In this work, we studied the effects of the operating conditions of the X-ray photoelectron spectroscopy (XPS) analysis technique on the investigated samples. First, the performance of the whole system was verified, as well as the accuracy of the analysis. Afterwards, the problem of analyzing insulating samples, caused by charge buildup on the surface, was studied. A low-energy electron beam (<100 eV) was applied to compensate the surface charge. The effect of X-rays on the samples was assessed and found to be nondestructive within the analysis time. The effects of low- and high-energy electron beams on the sample surface were investigated; high-energy electrons were found to have a destructive effect on organic samples. The sample heating procedure was tested and its effect on the chemical state of the surface was followed. Finally, the ion source was used to determine the element distribution and the chemical state at different depths of the sample, and a method has been proposed to determine these depths. (author)
The effect of biotope-specific sampling for aquatic ...
African Journals Online (AJOL)
The effect of biotope-specific sampling for aquatic macroinvertebrates on ... riffle), depth, and quality (deposition of silt on stones), were important at habitat scale. ... Geological type, which affects overall water chemistry, was important in the ...
Schmid, Tobias; Rolland, Jannick P; Rakich, Andrew; Thompson, Kevin P
2010-08-02
We present the nodal aberration field response of Ritchey-Chrétien telescopes to a combination of optical component misalignments and astigmatic figure error on the primary mirror. It is shown that both astigmatic figure error and secondary mirror misalignments lead to binodal astigmatism, but that each type has unique, characteristic locations for the astigmatic nodes. Specifically, the characteristic node locations in the presence of astigmatic figure error (at the pupil) in an otherwise aligned telescope exhibit symmetry with respect to the field center, i.e. the midpoint between the astigmatic nodes remains at the field center. For the case of secondary mirror misalignments, one of the astigmatic nodes remains nearly at the field center (in a coma compensated state) as presented in Optics Express 18, 5282-5288 (2010), while the second astigmatic node moves away from the field center. This distinction leads directly to alignment methods that preserve the dynamic range of the active wavefront compensation component.
Energy Technology Data Exchange (ETDEWEB)
Liu, Dong'an; Peng, Linfa; Lai, Xinmin [State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai 200240 (China)
2009-01-15
Recently, the metallic bipolar plate (BPP) has received considerable attention because of its advantageous electrical and mechanical properties. In this study, a methodology based on an FEA model and Monte Carlo simulation is developed to investigate the effect of dimensional error of the metallic BPP on the pressure distribution of the gas diffusion layer (GDL). First, a parameterized FEA model of the metallic BPP/GDL assembly is established, in which the channel and rib heights are treated as normally distributed random parameters representing the dimensional error. Then, GDL pressure distributions for different dimensional errors are obtained from the Monte Carlo simulation and evaluated with the desirability function method. Finally, a regression equation between the GDL pressure distribution and the dimensional error is modeled. With the regression equation, the maximum allowable dimensional error for the metallic BPP is calculated. The methodology in this study can be applied to guide the design and manufacturing of the metallic BPP. (author)
Sosic-Vasic, Zrinka; Ulrich, Martin; Ruchsow, Martin; Vasic, Nenad; Grön, Georg
2012-01-01
The present study investigated the association between traits of the Five Factor Model of Personality (Neuroticism, Extraversion, Openness for Experiences, Agreeableness, and Conscientiousness) and neural correlates of error monitoring obtained from a combined Eriksen-Flanker-Go/NoGo task during event-related functional magnetic resonance imaging in 27 healthy subjects. Individual expressions of personality traits were measured using the NEO-PI-R questionnaire. Conscientiousness correlated positively with error signaling in the left inferior frontal gyrus and adjacent anterior insula (IFG/aI). A second strong positive correlation was observed in the anterior cingulate gyrus (ACC). Neuroticism was negatively correlated with error signaling in the inferior frontal cortex, possibly reflecting the negative inter-correlation between both scales observed on the behavioral level. Under present statistical thresholds no significant results were obtained for the remaining scales. Aligning the personality trait of Conscientiousness with task-accomplishment striving, the correlation in the left IFG/aI possibly reflects an inter-individually different involvement whenever task-set related memory representations are violated by the occurrence of errors. The strong correlations in the ACC may indicate that more conscientious subjects were more strongly affected by these violations of a given task-set, expressed by individually different, negatively valenced signals conveyed by the ACC upon occurrence of an error. Present results illustrate that for predicting individual responses to errors underlying personality traits should be taken into account, and also lend external validity to the personality trait approach, suggesting that personality constructs reflect more than mere descriptive taxonomies.
Directory of Open Access Journals (Sweden)
Zrinka Sosic-Vasic
Full Text Available The present study investigated the association between traits of the Five Factor Model of Personality (Neuroticism, Extraversion, Openness for Experiences, Agreeableness, and Conscientiousness) and neural correlates of error monitoring obtained from a combined Eriksen-Flanker-Go/NoGo task during event-related functional magnetic resonance imaging in 27 healthy subjects. Individual expressions of personality traits were measured using the NEO-PI-R questionnaire. Conscientiousness correlated positively with error signaling in the left inferior frontal gyrus and adjacent anterior insula (IFG/aI). A second strong positive correlation was observed in the anterior cingulate gyrus (ACC). Neuroticism was negatively correlated with error signaling in the inferior frontal cortex, possibly reflecting the negative inter-correlation between both scales observed on the behavioral level. Under present statistical thresholds no significant results were obtained for the remaining scales. Aligning the personality trait of Conscientiousness with task-accomplishment striving, the correlation in the left IFG/aI possibly reflects an inter-individually different involvement whenever task-set related memory representations are violated by the occurrence of errors. The strong correlations in the ACC may indicate that more conscientious subjects were more strongly affected by these violations of a given task-set, expressed by individually different, negatively valenced signals conveyed by the ACC upon occurrence of an error. Present results illustrate that for predicting individual responses to errors underlying personality traits should be taken into account, and also lend external validity to the personality trait approach, suggesting that personality constructs reflect more than mere descriptive taxonomies.
Effect of manufacturing errors on field quality of the LBL SSC dipoles
International Nuclear Information System (INIS)
Meuser, R.B.
1984-01-01
A method is developed for determining the field aberrations resulting from specific kinds of manufacturing errors. This method is applied to the 40-mm i.d. dipoles under consideration at LBL, and also to similar ones with 30 and 50 mm i.d. The method is also applied to the CBA and Doubler/Saver magnets and the results compared with the measurements. The results obtained by this method are also compared with those obtained by assigning identical errors to the positions of the edges of all the coil sectors
Lewis, Matthew S; Maruff, Paul; Silbert, Brendan S; Evered, Lis A; Scott, David A
2007-02-01
The reliable change index (RCI) expresses change relative to its associated error, and is useful in the identification of postoperative cognitive dysfunction (POCD). This paper examines four common RCIs that each account for error in different ways. Three rules incorporate a constant correction for practice effects and are contrasted with the standard RCI that had no correction for practice. These rules are applied to 160 patients undergoing coronary artery bypass graft (CABG) surgery who completed neuropsychological assessments preoperatively and 1 week postoperatively using error and reliability data from a comparable healthy nonsurgical control group. The rules all identify POCD in a similar proportion of patients, but the use of the within-subject standard deviation (WSD), expressing the effects of random error, as an error estimate is a theoretically appropriate denominator when a constant error correction, removing the effects of systematic error, is deducted from the numerator in a RCI.
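The four rules share the same arithmetic skeleton; below is a minimal sketch of a practice-corrected RCI using the within-subject standard deviation (WSD) as the error term. The √2 scaling and the ±1.96 cutoff are the conventional choices, and all numbers in the usage example are hypothetical:

```python
import math

def rci(pre, post, practice_effect=0.0, wsd=1.0):
    """Reliable change index: the change score minus a constant practice
    correction (mean change in healthy controls), divided by the standard
    deviation of a difference of two measurements, sqrt(2) * WSD."""
    return (post - pre - practice_effect) / (wsd * math.sqrt(2.0))

def flag_pocd(pre, post, practice_effect, wsd, cutoff=-1.96):
    """Flag POCD when the RCI indicates reliable decline on a test where
    lower scores are worse."""
    return rci(pre, post, practice_effect, wsd) <= cutoff
```

For example, a drop from 10 to 7 against a control practice gain of 1 and a WSD of 1 gives RCI ≈ -2.83, a reliable decline.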
The effectiveness of cooling conditions on temperature of canine EDTA whole blood samples.
Tobias, Karen M; Serrano, Leslie; Sun, Xiaocun; Flatland, Bente
2016-01-01
Preanalytic factors such as time and temperature can have significant effects on laboratory test results. For example, ammonium concentration will increase 31% in blood samples stored at room temperature for 30 min before centrifugation. To reduce preanalytic error, blood samples may be placed in precooled tubes and chilled on ice or in ice water baths; however, the effectiveness of these modalities in cooling blood samples has not been formally evaluated. The purpose of this study was to evaluate the effectiveness of various cooling modalities on reducing temperature of EDTA whole blood samples. Pooled samples of canine EDTA whole blood were divided into two aliquots. Saline was added to one aliquot to produce a packed cell volume (PCV) of 40% and to the second aliquot to produce a PCV of 20% (simulated anemia). Thirty samples from each aliquot were warmed to 37.7 °C and cooled in 2 ml allotments under one of three conditions: in ice, in ice after transfer to a precooled tube, or in an ice water bath. Temperature of each sample was recorded at one minute intervals for 15 min. Within treatment conditions, sample PCV had no significant effect on cooling. Cooling in ice water was significantly faster than cooling in ice only or transferring the sample to a precooled tube and cooling it on ice. Mean temperature of samples cooled in ice water was significantly lower at 15 min than mean temperatures of those cooled in ice, whether or not the tube was precooled. By 4 min, samples cooled in an ice water bath had reached mean temperatures less than 4 °C (refrigeration temperature), while samples cooled in other conditions remained above 4.0 °C for at least 11 min. For samples with a PCV of 40%, precooling the tube had no significant effect on rate of cooling on ice. For samples with a PCV of 20%, transfer to a precooled tube resulted in a significantly faster rate of cooling than direct placement of the warmed tube onto ice. Canine EDTA whole blood samples cool most
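The reported ranking of cooling modalities can be mimicked with Newton's law of cooling; the rate constants below are assumptions chosen to reproduce the qualitative pattern, not values fitted to the study data:

```python
import math

T0, T_ENV = 37.7, 0.0            # initial sample and coolant temperature, C

def temperature(t_min, k_per_min, t0=T0, t_env=T_ENV):
    """Newton's-law cooling curve sampled at t minutes."""
    return t_env + (t0 - t_env) * math.exp(-k_per_min * t_min)

def minutes_to_reach(target, k_per_min, t0=T0, t_env=T_ENV):
    """Time to cool to a target temperature (e.g. 4 C refrigeration)."""
    return math.log((t0 - t_env) / (target - t_env)) / k_per_min

# Assumed rate constants, per minute, for the three modalities:
K_ICE, K_PRECOOLED, K_ICE_WATER = 0.12, 0.14, 0.60
```

With these assumed rates, the ice-water bath reaches 4 °C before the 4-minute mark while ice alone stays above 4 °C past 11 minutes, matching the pattern described in the abstract; the much higher rate for ice water reflects the better thermal contact of immersion.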
Are Divorce Studies Trustworthy? The Effects of Survey Nonresponse and Response Errors
Mitchell, Colter
2010-01-01
Researchers rely on relationship data to measure the multifaceted nature of families. This article speaks to relationship data quality by examining the ramifications of different types of error on divorce estimates, models predicting divorce behavior, and models employing divorce as a predictor. Comparing matched survey and divorce certificate…
Limited Effects of Agreement Errors on Word Monitoring in 5-year-olds
Czech Academy of Sciences Publication Activity Database
Smolík, Filip
2011-01-01
Roč. 2, č. 1 (2011), s. 17-28 ISSN 1804-3240 R&D Projects: GA ČR GAP407/10/2047 Institutional research plan: CEZ:AV0Z70250504 Keywords : language acquisition * morphosyntactic error * word monitoring Subject RIV: AN - Psychology
Individual differences in political ideology are effects of adaptive error management.
Petersen, Michael Bang; Aarøe, Lene
2014-06-01
We apply error management theory to the analysis of individual differences in the negativity bias and political ideology. Using principles from evolutionary psychology, we propose a coherent theoretical framework for understanding (1) why individuals differ in their political ideology and (2) the conditions under which these individual differences influence and fail to influence the political choices people make.
Sirriyeh, Reema; Lawton, Rebecca; Gardner, Peter; Armitage, Gerry
2010-12-01
Previous research has established health professionals as secondary victims of medical error, with the identification of a range of emotional and psychological repercussions that may occur as a result of involvement in error.[2,3] Due to the vast range of emotional and psychological outcomes, research to date has been inconsistent in the variables measured and tools used. Therefore, differing conclusions have been drawn as to the nature of the impact of error on professionals and the subsequent repercussions for their team, patients and healthcare institution. A systematic review was conducted. Data sources were identified using database searches, with additional reference and hand searching. Eligibility criteria were applied to all studies identified, resulting in a total of 24 included studies. Quality assessment was conducted with the included studies using a tool that was developed as part of this research, but due to the limited number and diverse nature of studies, no exclusions were made on this basis. Review findings suggest that there is consistent evidence for the widespread impact of medical error on health professionals. Psychological repercussions may include negative states such as shame, self-doubt, anxiety and guilt. Despite much attention devoted to the assessment of negative outcomes, the potential for positive outcomes resulting from error also became apparent, with increased assertiveness, confidence and improved colleague relationships reported. It is evident that involvement in a medical error can elicit a significant psychological response from the health professional involved. However, a lack of literature around coping and support, coupled with inconsistencies and weaknesses in methodology, may need to be addressed in future work.
International Nuclear Information System (INIS)
Kim, Yochan; Park, Jinkyun; Jung, Wondea; Jang, Inseok; Hyun Seong, Poong
2015-01-01
Despite recent efforts toward data collection for supporting human reliability analysis, there remains a lack of empirical basis in determining the effects of performance shaping factors (PSFs) on human error probabilities (HEPs). To enhance the empirical basis regarding the effects of the PSFs, a statistical methodology using a logistic regression and stepwise variable selection was proposed, and the effects of the PSFs on HEPs related to soft controls were estimated through the methodology. For this estimation, more than 600 human error opportunities related to soft controls in a computerized control room were obtained through laboratory experiments. From the eight PSF surrogates and combinations of these variables, the procedure quality, practice level, and operation type were identified as significant factors for screen switch and mode conversion errors. The contributions of these significant factors to HEPs were also estimated in terms of a multiplicative form. The usefulness and limitations of the experimental data and the techniques employed are discussed herein, and we believe that the logistic regression and stepwise variable selection methods will provide a way to estimate the effects of PSFs on HEPs in an objective manner. - Highlights: • It is necessary to develop an empirical basis for the effects of the PSFs on the HEPs. • A statistical method using a logistic regression and variable selection was proposed. • The effects of PSFs on the HEPs of soft controls were empirically investigated. • The significant factors were identified and their effects were estimated.
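The procedure this abstract describes (logistic regression over PSF surrogates with stepwise variable selection, effects reported in multiplicative form) can be sketched as follows. This is a minimal illustration on simulated data; the PSF names, effect sizes, and the chi-square entry cutoff are assumptions, not the study's actual values.

```python
import numpy as np

def fit_logit(X, y, iters=50):
    """Logistic regression by Newton-Raphson; returns [intercept, coefs]."""
    X1 = np.column_stack([np.ones(len(y)), X])
    b = np.zeros(X1.shape[1])
    for _ in range(iters):
        eta = np.clip(X1 @ b, -30, 30)
        p = 1.0 / (1.0 + np.exp(-eta))
        W = p * (1 - p)
        H = X1.T @ (X1 * W[:, None]) + 1e-8 * np.eye(X1.shape[1])
        b = b + np.linalg.solve(H, X1.T @ (y - p))
    return b

def deviance(X, y, b):
    X1 = np.column_stack([np.ones(len(y)), X])
    p = 1.0 / (1.0 + np.exp(-np.clip(X1 @ b, -30, 30)))
    eps = 1e-12
    return -2.0 * np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def forward_select(X, y, names, chi2_in=3.84):
    """Greedy forward selection: repeatedly add the PSF giving the largest
    deviance drop; stop when the drop falls below the 1-df chi-square cutoff."""
    chosen, remaining = [], list(range(X.shape[1]))
    empty = np.empty((len(y), 0))
    cur = deviance(empty, y, fit_logit(empty, y))
    while remaining:
        best_dev, best_j = min(
            (deviance(X[:, chosen + [j]], y, fit_logit(X[:, chosen + [j]], y)), j)
            for j in remaining)
        if cur - best_dev < chi2_in:
            break
        chosen.append(best_j); remaining.remove(best_j); cur = best_dev
    return [names[j] for j in chosen]

# Hypothetical demo: ~600 error opportunities, binary PSF surrogates.
rng = np.random.default_rng(7)
n = 600
psf = rng.integers(0, 2, size=(n, 3))  # two real PSFs plus one null PSF
logit_p = -3.0 + 1.5 * psf[:, 0] + 1.2 * psf[:, 1]  # null PSF has no effect
y = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

selected = forward_select(psf, y, ["procedure_quality", "practice_level", "null_psf"])
b = fit_logit(psf[:, [0, 1]], y)
# exp(beta) is the multiplicative effect of each PSF on the error odds
```

With these simulated effect sizes, the two real PSFs produce large deviance drops and the null PSF does not, mirroring how the study screens eight surrogates down to a few significant factors.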
International Nuclear Information System (INIS)
Winterflood, A.H.
1980-01-01
In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)
Analysis Of The Effect Of Fuel Enrichment Error On Neutronic Properties Of The RSG-GAS Core
International Nuclear Information System (INIS)
Saragih, Tukiran; Pinem, Surian
2002-01-01
An analysis of the effect of fuel enrichment error on neutronic properties has been carried out. The fuel enrichment could be improperly specified because of a fabrication error, so it is necessary to analyze the effect of fuel enrichment error and determine the maximum enrichment percentage that can be accepted in the core. The analysis was done by a simulation method: the RSG-GAS core was simulated with 5 standard fuel elements and 1 control element having the wrong enrichment when inserted into the core. Fuel enrichment errors of 20%, 25% and 30% were then simulated using the WIMSD/4 and Batan-2DIFF codes. The cross sections of the RSG-GAS core materials were generated by the WIMSD/4 code in 1-D, X-Y geometry with 10 neutron energy groups. Two-dimensional diffusion calculations based on the finite element method were done using the Batan-2DIFF code. Five fuel elements and one control element with changed enrichment were finally arranged as a new core of the RSG-GAS reactor. The neutronic properties can be seen from the eigenvalues (k_eff) as well as from the kinetic properties based on the moderator void reactivity coefficient. The calculated results showed that the error is still acceptable (k_eff = 1.097) up to 25% fuel enrichment, but not beyond 25.5%.
Effect of temperature on surface error and laser damage threshold for self-healing BK7 glass.
Wang, Chu; Wang, Hongxiang; Shen, Lu; Hou, Jing; Xu, Qiao; Wang, Jian; Chen, Xianhua; Liu, Zhichao
2018-03-20
Cracks caused during the lapping and polishing process can decrease the laser-induced damage threshold (LIDT) of BK7 glass optical elements, which shortens the lifetime and limits the output power of high-energy laser systems. When BK7 glass is heated under appropriate conditions, the surface cracks can exhibit a self-healing phenomenon. In this paper, based on thermodynamics and viscous fluid mechanics theory, the mechanisms of crack self-healing are explained. A heat-healing experiment was carried out, and the effect of water was analyzed. Multi-spatial-frequency analysis was used to investigate the effect of temperature on surface error for self-healing BK7 glass, and the lapped BK7 glass specimens before and after heat healing were measured with an interferometer and atomic force microscopy. The low-spatial-frequency error was analyzed by peak to valley and root mean square, the mid-spatial-frequency error was analyzed by power spectral density, and the high-spatial-frequency error was analyzed by surface roughness. The results showed that the optimal heating temperature for BK7 was 450°C, and when the heating temperature was higher than the glass transition temperature (555°C), the surface quality decreased markedly. The laser damage test was performed, and the specimen heated at 450°C showed an improvement in LIDT.
Energy Technology Data Exchange (ETDEWEB)
Roon, Serafin von
2012-02-28
A permanent balance between consumption and generation is essential for a stable supply of electricity. In order to ensure this balance, all relevant load data have to be announced for the following day. Consequently, a day-ahead forecast of the wind power generation is required, which also forms the basis for the sale of the wind power at the wholesale market. The main subject of the study is the short-term power supply that compensates wind power forecast errors at short notice. These forecast errors affect the revenues and expenses from selling and buying power in the day-ahead, intraday and balancing energy markets. The price effects resulting from the forecast errors are derived from an empirical analysis. In a scenario for the year 2020, the potential of conventional power plants to supply power at short notice is evaluated from a technical and economic point of view by a time series analysis and a unit commitment simulation.
Local heterogeneity effects on small-sample worths
International Nuclear Information System (INIS)
Schaefer, R.W.
1986-01-01
One of the parameters usually measured in a fast reactor critical assembly is the reactivity associated with inserting a small sample of a material into the core (sample worth). Local heterogeneities introduced by the worth measurement techniques can have a significant effect on the sample worth. Unfortunately, the capability is lacking to model some of the heterogeneity effects associated with the experimental technique traditionally used at ANL (the radial tube technique). It has been suggested that these effects could account for a large portion of what remains of the longstanding central worth discrepancy. The purpose of this paper is to describe a large body of experimental data - most of which has never been reported - that shows the effect of radial tube-related local heterogeneities
Payne, Velma L; Medvedeva, Olga; Legowski, Elizabeth; Castine, Melissa; Tseytlin, Eugene; Jukic, Drazen; Crowley, Rebecca S
2009-11-01
Determine effects of a limited-enforcement intelligent tutoring system in dermatopathology on student errors, goals and solution paths. Determine if limited enforcement in a medical tutoring system inhibits students from learning the optimal and most efficient solution path. Describe the type of deviations from the optimal solution path that occur during tutoring, and how these deviations change over time. Determine if the size of the problem-space (domain scope) has an effect on learning gains when using a tutor with limited enforcement. We analyzed data mined from 44 pathology residents using SlideTutor, a medical intelligent tutoring system in dermatopathology that teaches histopathologic diagnosis and reporting skills based on commonly used diagnostic algorithms. Two subdomains were included in the study representing sub-algorithms of different sizes and complexities. Effects of the tutoring system on student errors, goal states and solution paths were determined. Students gradually increase the frequency of steps that match the tutoring system's expectation of expert performance. Frequency of errors gradually declines in all categories of error significance. Student performance frequently differs from the tutor-defined optimal path. However, as students continue to be tutored, they approach the optimal solution path. Performance in both subdomains was similar for both errors and goal differences. However, the rate at which students progress toward the optimal solution path differs between the two domains. Tutoring in superficial perivascular dermatitis, the larger and more complex domain, was associated with a slower rate of approximation towards the optimal solution path. Students benefit from a limited-enforcement tutoring system that leverages diagnostic algorithms but does not prevent alternative strategies. Even with limited enforcement, students converge toward the optimal solution path.
Learning from errors in super-resolution.
Tang, Yi; Yuan, Yuan
2014-11-01
A novel framework of learning-based super-resolution is proposed by employing the process of learning from the estimation errors. The estimation errors generated by different learning-based super-resolution algorithms are statistically shown to be sparse and uncertain. The sparsity of the estimation errors means that most of the estimation errors are small; the uncertainty means that the locations of the pixels with larger estimation errors are random. Exploiting this prior information about the estimation errors, a nonlinear boosting process of learning from these estimation errors is introduced into the general framework of learning-based super-resolution. Within this framework, a low-rank decomposition technique is used to share the information of different super-resolution estimations and to remove the sparse estimation errors from different learning algorithms or training samples. The experimental results show the effectiveness and the efficiency of the proposed framework in enhancing the performance of different learning-based algorithms.
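The core idea above (stack estimates from different algorithms, model the shared information as a low-rank component, and strip off the sparse estimation errors) can be illustrated with a toy decomposition. The paper's exact algorithm is not specified here, so this sketch uses a simple alternating truncated-SVD / hard-thresholding scheme; the rank, threshold, and spike positions are all made-up assumptions.

```python
import numpy as np

def split_low_rank_sparse(M, rank=1, thresh=2.0, iters=30):
    """Alternate two steps: L <- best rank-r approximation of (M - S) via SVD;
    S <- entries of the residual M - L whose magnitude exceeds `thresh`
    (these play the role of the sparse estimation errors)."""
    S = np.zeros_like(M)
    L = M.copy()
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        R = M - L
        S = np.where(np.abs(R) > thresh, R, 0.0)
    return L, S

# Toy stand-in for stacked super-resolution estimates: a rank-1 "shared
# information" matrix plus a few large, randomly placed errors.
a = np.linspace(1.0, 2.0, 20)
b = np.linspace(1.0, 2.0, 10)
M0 = np.outer(a, b)            # clean shared structure
M = M0.copy()
spikes = [(0, 0), (5, 3), (12, 7)]
for i, j in spikes:
    M[i, j] += 8.0             # sparse estimation errors

L, S = split_low_rank_sparse(M, rank=1, thresh=2.0)
```

On this toy input, the low-rank part recovers the clean structure and the sparse part isolates the injected errors, which is the behavior the framework relies on.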
Drawing conclusions: The effect of instructions on children's confabulation and fantasy errors.
Macleod, Emily; Gross, Julien; Hayne, Harlene
2016-01-01
Drawing is commonly used in forensic and clinical interviews with children. In these interviews, children are often allowed to draw without specific instructions about the purpose of the drawing materials. Here, we examined whether this practice influenced the accuracy of children's reports. Seventy-four 5- and 6-year-old children were interviewed one to two days after they took part in an interactive event. Some children were given drawing materials to use during the interview. Of these children, some were instructed to draw about the event, and some were given no additional instructions at all. Children who were instructed to draw about the event, or who were interviewed without drawing, made few errors. In contrast, children who drew without being given specific instructions reported more errors that were associated with both confabulation and fantasy. We conclude that, to maximise accuracy during interviews involving drawing, children should be directed to draw specifically about the interview topic.
Effect of Slice Error of Glass on Zero Offset of Capacitive Accelerometer
Hao, R.; Yu, H. J.; Zhou, W.; Peng, B.; Guo, J.
2018-03-01
Packaging process had been studied on capacitance accelerometer. The silicon-glass bonding process had been adopted on sensor chip and glass, and sensor chip and glass was adhered on ceramic substrate, the three-layer structure was curved due to the thermal mismatch, the slice error of glass lead to asymmetrical curve of sensor chip. Thus, the sensitive mass of accelerometer deviated along the sensitive direction, which was caused in zero offset drift. It was meaningful to confirm the influence of slice error of glass, the simulation results showed that the zero output drift was 12.3×10-3 m/s2 when the deviation was 40μm.
Effects of dating errors on nonparametric trend analyses of speleothem time series
Directory of Open Access Journals (Sweden)
M. Mudelsee
2012-10-01
A fundamental problem in paleoclimatology is to take fully into account the various error sources when examining proxy records with quantitative methods of statistical time series analysis. Records from dated climate archives such as speleothems add extra uncertainty from the age determination to the other sources, which consist of measurement and proxy errors. This paper examines three stalagmite time series of oxygen isotopic composition (δ¹⁸O) from two caves in western Germany: the series AH-1 from the Atta Cave and the series Bu1 and Bu4 from the Bunker Cave. These records carry regional information about past changes in winter precipitation and temperature. U/Th and radiocarbon dating reveal that they cover the later part of the Holocene, the past 8.6 thousand years (ka). We analyse centennial- to millennial-scale climate trends by means of nonparametric Gasser–Müller kernel regression. Error bands around fitted trend curves are determined by combining (1) block bootstrap resampling to preserve the noise properties (shape, autocorrelation) of the δ¹⁸O residuals and (2) timescale simulations (models StalAge and iscam). The timescale error influences on centennial- to millennial-scale trend estimation are not excessively large. We find a "mid-Holocene climate double-swing", from warm to cold to warm winter conditions (6.5 ka to 6.0 ka to 5.1 ka), with warm–cold amplitudes of around 0.5‰ δ¹⁸O; this finding is documented by all three records with high confidence. We also quantify the Medieval Warm Period (MWP), the Little Ice Age (LIA) and the current warmth. Our analyses cannot unequivocally support the conclusion that current regional winter climate is warmer than that during the MWP.
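The error-band construction described in this abstract (kernel trend fit plus block bootstrap resampling of residuals to preserve autocorrelation) can be sketched on synthetic data. This illustration substitutes a Nadaraya-Watson smoother for the Gasser-Müller estimator (both are kernel trend smoothers) and omits the timescale simulations (StalAge, iscam) entirely; all data values below are invented.

```python
import numpy as np

def nw_trend(t, x, t_eval, bw):
    """Nadaraya-Watson kernel regression with a Gaussian kernel
    (a simple stand-in for the Gasser-Mueller trend estimator)."""
    w = np.exp(-0.5 * ((t_eval[:, None] - t[None, :]) / bw) ** 2)
    return (w @ x) / w.sum(axis=1)

def moving_block_bootstrap(resid, block_len, rng):
    """Resample residuals in overlapping blocks so that short-range
    autocorrelation (the 'noise shape') is preserved."""
    n = len(resid)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([resid[s:s + block_len] for s in starts])[:n]

rng = np.random.default_rng(0)
t = np.linspace(0.0, 8.6, 300)                    # age in ka (toy grid)
true_trend = 0.3 * np.sin(2 * np.pi * t / 6.0)    # toy d18O trend
noise = np.zeros_like(t)                          # AR(1) noise, phi = 0.6
for i in range(1, len(t)):
    noise[i] = 0.6 * noise[i - 1] + 0.05 * rng.standard_normal()
x = true_trend + noise

fit = nw_trend(t, x, t, bw=0.5)
resid = x - fit
boots = np.array([nw_trend(t, fit + moving_block_bootstrap(resid, 25, rng), t, 0.5)
                  for _ in range(200)])
lo, hi = np.percentile(boots, [2.5, 97.5], axis=0)  # pointwise error band
```

The band (lo, hi) is the bootstrap analogue of the paper's error bands; in the real analysis each bootstrap replicate would additionally draw a simulated timescale.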
Minimization of the effect of errors in approximate radiation view factors
International Nuclear Information System (INIS)
Clarksean, R.; Solbrig, C.
1993-01-01
The maximum temperature of irradiated fuel rods in storage containers was investigated taking credit only for radiation heat transfer. Estimating view factors is often easy, but in many references the emphasis is placed on calculating the quadruple integrals exactly. Selecting different view factors in the view factor matrix as independent yields somewhat different view factor matrices. In this study, ten to twenty percent errors in view factors produced small errors in the temperature, well within the uncertainty due to the surface emissivity uncertainty. However, the enclosure and reciprocity principles must be strictly observed, or large errors in the temperatures and wall heat flux result (up to a factor of 3). More than just being an aid for calculating the dependent view factors, satisfying these principles, particularly reciprocity, is more important than the calculation accuracy of the view factors. Comparison to experiment showed that the result of the radiation calculation was definitely conservative, as desired, in spite of the approximations to the view factors.
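The reciprocity and enclosure principles stressed above can be imposed on an approximate view factor matrix with a simple alternating scheme: symmetrize A_i F_ij (reciprocity), then renormalize each row of F to sum to 1 (enclosure), and repeat. The iteration below is one plausible way to do this, not the paper's method; the surface areas and raw view factors are made up.

```python
import numpy as np

def enforce_enclosure(F, A, iters=2000):
    """Iteratively impose reciprocity (A_i F_ij = A_j F_ji) and the
    enclosure rule (each row of F sums to 1) on an approximate
    view factor matrix F with surface areas A."""
    F = np.asarray(F, dtype=float).copy()
    A = np.asarray(A, dtype=float)
    for _ in range(iters):
        G = A[:, None] * F                    # G_ij = A_i F_ij
        G = 0.5 * (G + G.T)                   # reciprocity: symmetrize
        F = G / A[:, None]
        F /= F.sum(axis=1, keepdims=True)     # enclosure: rows sum to 1
    return F

# Hypothetical 3-surface enclosure with 10-20% errors in the raw estimates.
A = np.array([1.0, 2.0, 3.0])
F_raw = np.array([[0.00, 0.45, 0.55],
                  [0.20, 0.10, 0.70],
                  [0.15, 0.50, 0.35]])
F = enforce_enclosure(F_raw, A)
```

After the iteration both principles hold to numerical precision, which is exactly the property the study found to matter more than the accuracy of the individual view factors.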
International Nuclear Information System (INIS)
Silverman, J.A.; Mehta, J.; Brocher, S.; Amenta, J.S.
1985-01-01
Previous studies on protein turnover in ³H-labelled L-cell cultures have shown recovery of total ³H at the end of a three-day experiment to be always significantly in excess of the ³H recovered at the beginning of the experiment. A number of possible sources for this error in measuring radioactivity in cell proteins have been reviewed. ³H-labelled proteins, when dissolved in NaOH and counted for radioactivity in a liquid-scintillation spectrometer, showed losses of 30-40% of the radioactivity; neither external nor internal standardization compensated for this loss. Hydrolysis of these proteins with either Pronase or concentrated HCl significantly increased the measured radioactivity. In addition, 5-10% of the cell protein is left on the plastic culture dish when cells are recovered in phosphate-buffered saline. Furthermore, this surface-adherent protein, after pulse labelling, contains proteins of high radioactivity that turn over rapidly and make a major contribution to the accumulating radioactivity in the medium. These combined errors can account for up to 60% of the total radioactivity in the cell culture. Similar analytical errors have been found in studies of other cell cultures. The effect of these analytical errors on estimates of protein turnover in cell cultures is discussed. (author)
Vélez-Díaz-Pallarés, Manuel; Delgado-Silveira, Eva; Carretero-Accame, María Emilia; Bermejo-Vicedo, Teresa
2013-01-01
To identify actions to reduce medication errors in the process of drug prescription, validation and dispensing, and to evaluate the impact of their implementation. A Health Care Failure Mode and Effect Analysis (HFMEA) was supported by a before-and-after medication error study to measure the actual impact on error rate after the implementation of corrective actions in the process of drug prescription, validation and dispensing in wards equipped with computerised physician order entry (CPOE) and unit-dose distribution system (788 beds out of 1080) in a Spanish university hospital. The error study was carried out by two observers who reviewed medication orders on a daily basis to register prescription errors by physicians and validation errors by pharmacists. Drugs dispensed in the unit-dose trolleys were reviewed for dispensing errors. Error rates were expressed as the number of errors for each process divided by the total opportunities for error in that process times 100. A reduction in prescription errors was achieved by providing training for prescribers on CPOE, updating prescription procedures, improving clinical decision support and automating the software connection to the hospital census (relative risk reduction (RRR), 22.0%; 95% CI 12.1% to 31.8%). Validation errors were reduced after optimising time spent in educating pharmacy residents on patient safety, developing standardised validation procedures and improving aspects of the software's database (RRR, 19.4%; 95% CI 2.3% to 36.5%). Two actions reduced dispensing errors: reorganising the process of filling trolleys and drawing up a protocol for drug pharmacy checking before delivery (RRR, 38.5%; 95% CI 14.1% to 62.9%). HFMEA facilitated the identification of actions aimed at reducing medication errors in a healthcare setting, as the implementation of several of these led to a reduction in errors in the process of drug prescription, validation and dispensing.
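The error-rate and relative risk reduction (RRR) arithmetic in this abstract (rate = errors / opportunities × 100; RRR with a 95% CI) can be reproduced generically. The sketch below uses the standard delta-method CI on the log relative risk, which may differ from the authors' exact interval method, and the error/opportunity counts are hypothetical.

```python
import math

def relative_risk_reduction(err_before, n_before, err_after, n_after, z=1.96):
    """RRR = 1 - (p_after / p_before), where each p is errors divided by
    opportunities; the CI comes from the usual log-relative-risk SE."""
    p1 = err_before / n_before
    p2 = err_after / n_after
    rr = p2 / p1
    se = math.sqrt(1 / err_before - 1 / n_before + 1 / err_after - 1 / n_after)
    lo_rr = math.exp(math.log(rr) - z * se)
    hi_rr = math.exp(math.log(rr) + z * se)
    # RRR point estimate and CI (note the bound flip: high RR -> low RRR)
    return 1 - rr, 1 - hi_rr, 1 - lo_rr

# Hypothetical counts: 200 errors in 5000 opportunities before the
# interventions, 156 in 5000 after (a 22% relative reduction).
rrr, ci_lo, ci_hi = relative_risk_reduction(200, 5000, 156, 5000)
```

This mirrors how each RRR in the abstract pairs a point estimate with a 95% CI; the actual denominators in the study were the per-process opportunity counts.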
International Nuclear Information System (INIS)
Kamiya, Yukihide.
1980-05-01
A computational method has been developed for the astral survey procedure of the primary monuments, which consists of measurements of short chords and perpendicular distances. This method can be applied to any astral polygon whose chord lengths and vertical angles differ from each other. We study the propagation of measurement errors for the KEK-PF storage ring and also examine its effect on the closed orbit distortion. (author)
The error analysis of coke moisture measured by neutron moisture gauge
International Nuclear Information System (INIS)
Tian Huixing
1995-01-01
The error of coke moisture measured by the neutron method in the iron and steel industry is analyzed. The errors are caused by inaccurate sampling locations during the on-site calibration procedure. By comparison, the instrument error and the statistical fluctuation error are smaller. Therefore, the sampling proportion should be made as large as possible in the on-site calibration procedure, so that a satisfactory calibration result can be obtained with a suitably sized hopper.
Peer Effects on Obesity in a Sample of European Children
DEFF Research Database (Denmark)
Gwozdz, Wencke; Sousa-Poza, Alfonso; Reisch, Lucia A.
2015-01-01
This study analyzes peer effects on childhood obesity using data from the first two waves of the IDEFICS study, which applies several anthropometric and other measures of fatness to approximately 14,000 children aged two to nine participating in both waves in 16 regions of eight European countries....... Peers are defined as same-sex children in the same school and age group. The results show that peer effects do exist in this European sample but that they differ among both regions and different fatness measures. Peer effects are larger in Spain, Italy, and Cyprus – the more collectivist regions in our...... sample – while waist circumference generally gives rise to larger peer effects than BMI. We also provide evidence that parental misperceptions of their own children's weight goes hand in hand with fatter peer groups, supporting the notion that in making such assessments, parents compare their children...
Pruitt, Sandi L; Jeffe, Donna B; Yan, Yan; Schootman, Mario
2012-04-01
Limited psychometric research has examined the reliability of self-reported measures of neighbourhood conditions, the effect of measurement error on associations between neighbourhood conditions and health, and potential differences in the reliabilities between neighbourhood strata (urban vs rural and low vs high poverty). We assessed overall and stratified reliability of self-reported perceived neighbourhood conditions using five scales (social and physical disorder, social control, social cohesion, fear) and four single items (multidimensional neighbouring). We also assessed measurement error-corrected associations of these conditions with self-rated health. Using random-digit dialling, 367 women without breast cancer (matched controls from a larger study) were interviewed twice, 2-3 weeks apart. Test-retest (intraclass correlation coefficients (ICC)/weighted κ) and internal consistency reliability (Cronbach's α) were assessed. Differences in reliability across neighbourhood strata were tested using bootstrap methods. Regression calibration corrected estimates for measurement error. All measures demonstrated satisfactory internal consistency (α ≥ 0.70) and either moderate (ICC/κ=0.41-0.60) or substantial (ICC/κ=0.61-0.80) test-retest reliability in the full sample. Internal consistency did not differ by neighbourhood strata. Test-retest reliability was significantly lower among rural (vs urban) residents for two scales (social control, physical disorder) and two multidimensional neighbouring items; test-retest reliability was higher for physical disorder and lower for one multidimensional neighbouring item among the high (vs low) poverty strata. After measurement error correction, the magnitude of associations between neighbourhood conditions and self-rated health were larger, particularly in the rural population. Research is needed to develop and test reliable measures of perceived neighbourhood conditions relevant to the health of rural populations.
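The two reliability measures used in this study, Cronbach's alpha for internal consistency and the intraclass correlation (ICC) for test-retest reliability, are straightforward to compute. A sketch on simulated data follows; the one-way ICC(1,1) form below is one of several ICC variants and may not be the exact one the authors used, and all data are invented.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_subjects, k_items). alpha = k/(k-1) * (1 - sum of item
    variances / variance of the total score)."""
    n, k = items.shape
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def icc_oneway(scores):
    """scores: (n_subjects, k_repeats). One-way random-effects ICC(1,1)
    from the between-subject and within-subject mean squares."""
    n, k = scores.shape
    ms_b = k * ((scores.mean(axis=1) - scores.mean()) ** 2).sum() / (n - 1)
    ms_w = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_b - ms_w) / (ms_b + (k - 1) * ms_w)

# Hypothetical simulated scale: 5 items per subject, plus 2 test-retest
# occasions 2-3 weeks apart, for 300 subjects.
rng = np.random.default_rng(3)
true = rng.normal(0.0, 1.0, 300)
items = true[:, None] + 0.5 * rng.standard_normal((300, 5))
retest = true[:, None] + 0.7 * rng.standard_normal((300, 2))

alpha = cronbach_alpha(items)   # expect alpha around 0.95 here
icc = icc_oneway(retest)        # expect a "substantial" ICC around 0.67 here
```

With these noise levels the simulated scale lands in the "satisfactory alpha, substantial test-retest" region the abstract describes for the full sample.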
Directory of Open Access Journals (Sweden)
Fatemeh Donboli Miandoab
2017-12-01
Background: Professionalism and adherence to ethics and professional standards are among the most important topics in medical ethics and can play a role in reducing medical errors. This paper examines and evaluates the effect of professional ethics on reducing medical errors from the viewpoint of faculty members of the medical school of the Tabriz University of Medical Sciences. Methods: In this cross-sectional descriptive study, faculty members of the Tabriz University of Medical Sciences were the statistical population, from whom 105 participants were selected through simple random sampling. A questionnaire was used to examine and compare the self-assessed opinions of faculty members in the internal, surgical, pediatric, gynecological, and psychiatric departments. The questionnaires were completed by a self-assessment method and the collected data were analyzed using SPSS 21. Results: Based on physicians' opinions, professional ethical considerations and their three domains have a significant role in reducing medical errors and crimes. The mean scores (standard deviations) of the managerial, knowledge and communication skills, and environmental variables were, respectively, 46.7 (5.64), 64.6 (8.14) and 16.2 (2.97) from the physicians' viewpoints. The significant factors with the highest scores on the reduction of medical errors and crimes in all three domains were as follows: in the managerial skills variable, trust, the physician's sense of responsibility toward the patient and his/her respect for patients' rights; in the knowledge and communication skills domain, general competence and eligibility as a physician and examination and diagnosis skills; and, last, in the environmental domain, the sufficiency of training in ethical issues during education and satisfaction with basic needs. Conclusion: Based on the findings of this research, attention to the improvement of communication, management and environment skills should
Comparison of dechlorination rates for field DNAPL vs synthetic samples: effect of sample matrix
O'Carroll, D. M.; Sakulchaicharoen, N.; Herrera, J. E.
2015-12-01
Nanometals have received significant attention in recent years due to their ability to rapidly destroy numerous priority source zone contaminants in controlled laboratory studies. This has led to great optimism surrounding nanometal particle injection for in situ remediation. Reported dechlorination rates vary widely among different investigators. These differences have been ascribed to differences in the iron types (granular, micro, or nano-sized iron), matrix solution chemistry, and the morphology of the nZVI surface. Among these, the effects of solution chemistry on rates of reductive dechlorination of various chlorinated compounds have been investigated in several short-term laboratory studies. Variables investigated include the effect of anions or groundwater solutes such as SO4²⁻, Cl⁻, NO3⁻, pH, natural organic matter (NOM), surfactants, and humic acid on the dechlorination of various chlorinated compounds such as TCE, carbon tetrachloride (CT), and chloroform (CF). These studies have normally centered on the assessment of nZVI reactivity toward dechlorination of an isolated individual contaminant spiked into a groundwater sample under ideal conditions, with limited work conducted using real field samples. In this work, the DNAPL used for the dechlorination study was obtained from a contaminated site. This approach was selected to adequately simulate a condition where the nZVI suspension is in direct contact with DNAPL and to isolate the dechlorination activity shown by the nZVI from groundwater matrix effects. An ideal system, a "synthetic DNAPL" composed of a mixture of chlorinated compounds mimicking the composition of the actual DNAPL, was also dechlorinated to evaluate the DNAPL "matrix effect" on nZVI dechlorination activity. This approach allowed us to evaluate the effect of the presence of different types of organic compounds (volatile fatty acids and humic acids) found in the actual DNAPL on nZVI dechlorination activity. This presentation will
Kim, Min-A; Sim, Hye-Min; Lee, Hye-Seong
2016-11-01
As reformulations and processing changes are increasingly needed in the food industry to produce healthier, more sustainable, and cost-effective products while maintaining superior quality, reliable measurements of consumers' sensory perception and discrimination are becoming more critical. Consumer discrimination methods using a preferred-reference duo-trio test design have been shown to be effective in improving discrimination performance by customizing sample presentation sequences. However, this design can add complexity to the discrimination task for some consumers, resulting in more errors in sensory discrimination. The objective of the present study was to investigate the effects of different types of test instructions using the preferred-reference duo-trio test design, where a paired-preference test is followed by 6 repeated preferred-reference duo-trio tests, in comparison to the analytical method using the balanced-reference duo-trio. Analyses of d' estimates (product-related measure) and probabilistic sensory discriminators in momentary numbers of subjects showing statistical significance (subject-related measure) revealed that only the preferred-reference duo-trio test using affective reference-framing, either by providing no information about the reference or information on a previously preferred sample, improved sensory discrimination more than the analytical method. No decrease in discrimination performance was observed with any type of instruction, confirming that consumers could handle the test methods. These results suggest that when repeated tests are feasible, using the affective discrimination method would be operationally more efficient as well as ecologically more reliable for measuring consumers' sensory discrimination ability. Copyright © 2016 Elsevier Ltd. All rights reserved.
Effects of sample size on the second magnetization peak in ...
Indian Academy of Sciences (India)
8+ crystals are observed at low temperatures, above the temperature where the SMP totally disappears. In particular, the onset of the SMP shifts to lower fields as the sample size decreases - a result that could be interpreted as a size effect in ...
Effects-Driven Participatory Design: Learning from Sampling Interruptions
DEFF Research Database (Denmark)
Brandrup, Morten; Østergaard, Kija Lin; Hertzum, Morten
2017-01-01
a sustained focus on pursued effects and uses the experience sampling method (ESM) to collect real-use feedback. To illustrate the use of the method we analyze a case that involves the organizational implementation of electronic whiteboards at a Danish hospital to support the clinicians’ intra...
Bergen, Silas; Sheppard, Lianne; Sampson, Paul D; Kim, Sun-Young; Richards, Mark; Vedal, Sverre; Kaufman, Joel D; Szpiro, Adam A
2013-09-01
Studies estimating health effects of long-term air pollution exposure often use a two-stage approach: building exposure models to assign individual-level exposures, which are then used in regression analyses. This requires accurate exposure modeling and careful treatment of exposure measurement error. To illustrate the importance of accounting for exposure model characteristics in two-stage air pollution studies, we considered a case study based on data from the Multi-Ethnic Study of Atherosclerosis (MESA). We built national spatial exposure models that used partial least squares and universal kriging to estimate annual average concentrations of four PM2.5 components: elemental carbon (EC), organic carbon (OC), silicon (Si), and sulfur (S). We predicted PM2.5 component exposures for the MESA cohort and estimated cross-sectional associations with carotid intima-media thickness (CIMT), adjusting for subject-specific covariates. We corrected for measurement error using recently developed methods that account for the spatial structure of predicted exposures. Our models performed well, with cross-validated R2 values ranging from 0.62 to 0.95. Naïve analyses that did not account for measurement error indicated statistically significant associations between CIMT and exposure to OC, Si, and S. EC and OC exhibited little spatial correlation, and the corrected inference was unchanged from the naïve analysis. The Si and S exposure surfaces displayed notable spatial correlation, resulting in corrected confidence intervals (CIs) that were 50% wider than the naïve CIs, but that were still statistically significant. The impact of correcting for measurement error on health effect inference is concordant with the degree of spatial correlation in the exposure surfaces. Exposure model characteristics must be considered when performing two-stage air pollution epidemiologic analyses because naïve health effect inference may be inappropriate.
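The measurement-error correction idea in this two-stage design is easiest to see in the classical regression-calibration special case: an error-prone exposure proxy attenuates the naive slope by the reliability ratio, and dividing by an estimate of that ratio (obtained from replicate measurements) undoes the attenuation. The paper's spatially correlated correction is considerably more involved; this sketch shows only the classical scalar case on simulated data, with all values invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(0.0, 1.0, n)                  # true long-term exposure
w1 = x + 0.7 * rng.standard_normal(n)        # error-prone replicate 1
w2 = x + 0.7 * rng.standard_normal(n)        # error-prone replicate 2
y = 1.0 + 0.5 * x + 0.3 * rng.standard_normal(n)  # outcome, true slope 0.5

w = 0.5 * (w1 + w2)                          # averaged exposure proxy
var_u = np.var(w1 - w2, ddof=1) / 2.0        # error variance of one replicate
var_u_mean = var_u / 2.0                     # error variance of the average
lam = (np.var(w, ddof=1) - var_u_mean) / np.var(w, ddof=1)  # reliability ratio

beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)  # attenuated slope
beta_corrected = beta_naive / lam            # regression calibration correction
```

The naive slope is biased toward zero while the corrected slope recovers the true effect; the spatial methods cited in the abstract additionally widen the confidence interval when predicted exposures are spatially correlated, which a scalar reliability ratio cannot capture.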
The Effect of Asymmetrical Sample Training on Retention Functions for Hedonic Samples in Rats
Simmons, Sabrina; Santi, Angelo
2012-01-01
Rats were trained in a symbolic delayed matching-to-sample task to discriminate sample stimuli that consisted of the presence of food or the absence of food. Asymmetrical sample training was provided in which one group was initially trained with only the food sample and the other group was initially trained with only the no-food sample. In…
Bao, Guzhi; Wickenbrock, Arne; Rochester, Simon; Zhang, Weiping; Budker, Dmitry
2018-01-19
The nonlinear Zeeman effect can induce splitting and asymmetries of magnetic-resonance lines in the geophysical magnetic-field range. This is a major source of "heading error" for scalar atomic magnetometers. We demonstrate a method to suppress the nonlinear Zeeman effect and heading error based on spin locking. In an all-optical synchronously pumped magnetometer with separate pump and probe beams, we apply a radio-frequency field which is in phase with the precessing magnetization. This results in the collapse of the multicomponent asymmetric magnetic-resonance line with ∼100 Hz width in the Earth-field range into a single peak with a width of 22 Hz, whose position is largely independent of the orientation of the sensor within a range of orientation angles. The technique is expected to be broadly applicable in practical magnetometry, potentially boosting the sensitivity and accuracy of Earth-surveying magnetometers by increasing the magnetic-resonance amplitude, decreasing its width, and removing the important and limiting heading-error systematic.
International Nuclear Information System (INIS)
Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.
1986-01-01
We investigate the effect of interpolation error in the pre-processing codes LINEAR, RECENT and SIGMA1 on calculations of self-shielding factors and their temperature derivatives. We consider the 2.0347 to 3.3546 keV energy region for ²³⁸U capture, which is the NEACRP benchmark exercise on unresolved parameters. The calculated values of temperature derivatives of self-shielding factors are significantly affected by interpolation error. The sources of problems in both evaluated data and codes are identified and eliminated in the 1985 version of these codes. This paper helps to (1) inform code users to use only 1985 versions of LINEAR, RECENT, and SIGMA1 and (2) inform designers of other code systems where they may have problems and what to do to eliminate their problems. (author)
International Nuclear Information System (INIS)
Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.
1985-01-01
The authors investigate the effect of interpolation error in the pre-processing codes LINEAR, RECENT and SIGMA1 on calculations of self-shielding factors and their temperature derivatives. They consider the 2.0347 to 3.3546 keV energy region for ²³⁸U capture, which is the NEACRP benchmark exercise on unresolved parameters. The calculated values of temperature derivatives of self-shielding factors are significantly affected by interpolation error. The sources of problems in both evaluated data and codes are identified and eliminated in the 1985 version of these codes. This paper helps to (1) inform code users to use only 1985 versions of LINEAR, RECENT, and SIGMA1 and (2) inform designers of other code systems where they may have problems and what to do to eliminate their problems.
International Nuclear Information System (INIS)
Oliveira, G.M. de; Leitao, M. de M.V.B.R.
2000-01-01
The objective of this study was to analyze the consequences for evapotranspiration estimates (ET) during the growing cycle of a peanut crop of errors committed in the determination of the radiation balance (Rn), as well as those caused by advective effects. This research was conducted at the Experimental Station of CODEVASF in an irrigated perimeter located in the city of Rodelas, BA, during the period of September to December of 1996. The results showed that errors of the order of 2.2 MJ m⁻² d⁻¹ in the calculation of Rn, and consequently in the estimate of ET, can occur depending on the time considered for the daily total of Rn. It was verified that the surrounding areas of the experimental field, as well as the areas of exposed soil within the field, contributed significantly to the generation of local advection of sensible heat, which resulted in an increase of the evapotranspiration.
Analysis of causes and effects errors in calculation of rolling slewing bearings capacity
Directory of Open Access Journals (Sweden)
Marek Krynke
2016-09-01
In the paper the basic design features and essential assumptions of the calculation models, as well as the factors influencing quality improvement and improvement of the calculation process for the load capacity of rolling slewing bearings, are discussed. The aim of the conducted research is the identification and elimination of sources of errors in determining the characteristics of slewing bearing capacity. The result of the research aims at determining the risk of making mistakes and at specifying tips for designers of slewing bearings. It is shown that a numerical method must be applied and that real conditions of bearing work, e.g. deformations of the carrying structure, must be taken into account.
Isolating Numerical Error Effects in LES Using DNS-Derived Sub-Grid Closures
Edoh, Ayaboe; Karagozian, Ann
2017-11-01
The prospect of employing an explicitly-defined filter in Large-Eddy Simulations (LES) provides the opportunity to reduce the interaction of numerical/modeling errors and offers the chance to carry out grid-converged assessments, important for model development. By utilizing a quasi a priori evaluation method - wherein the LES is assisted by closures derived from a fully-resolved computation - it then becomes possible to understand the combined impacts of filter construction (e.g., filter width, spectral sharpness) and discretization choice on the solution accuracy. The present work looks at calculations of the compressible LES Navier-Stokes system and considers discrete filtering formulations in conjunction with high-order finite differencing schemes. Accuracy of the overall method construction is compared to a consistently-filtered exact solution, and lessons are extended to a posteriori (i.e., non-assisted) evaluations. Supported by ERC, Inc. (PS150006) and AFOSR (Dr. Chiping Li).
DEFF Research Database (Denmark)
Kressner, Abigail Anne; May, Tobias; Malik Thaarup Høegh, Rasmus
2017-01-01
A recent study suggested that the most important factor for obtaining high speech intelligibility in noise with cochlear implant recipients is to preserve the low-frequency amplitude modulations of speech across time and frequency by, for example, minimizing the amount of noise in speech gaps. In contrast, other studies have argued that the transients provide the most information. Thus, the present study investigates the relative impact of these two factors in the framework of noise reduction by systematically correcting noise-estimation errors within speech segments, speech gaps, and the transitions between them. Speech intelligibility in noise was measured using a cochlear implant simulation tested on normal-hearing listeners. The results suggest that minimizing noise in the speech gaps can substantially improve intelligibility, especially in modulated noise. However, significantly larger…
Effects of exposure estimation errors on estimated exposure-response relations for PM2.5.
Cox, Louis Anthony (Tony)
2018-07-01
Associations between fine particulate matter (PM2.5) exposure concentrations and a wide variety of undesirable outcomes, from autism and auto theft to elderly mortality, suicide, and violent crime, have been widely reported. Influential articles have argued that reducing National Ambient Air Quality Standards for PM2.5 is desirable to reduce these outcomes. Yet, other studies have found that reducing black smoke and other particulate matter by as much as 70% and dozens of micrograms per cubic meter has not detectably affected all-cause mortality rates even after decades, despite strong, statistically significant positive exposure concentration-response (C-R) associations between them. This paper examines whether this disconnect between association and causation might be explained in part by ignored estimation errors in estimated exposure concentrations. We use EPA air quality monitor data from the Los Angeles area of California to examine the shapes of estimated C-R functions for PM2.5 when the true C-R functions are assumed to be step functions with well-defined response thresholds. The estimated C-R functions mistakenly show risk as smoothly increasing with concentrations even well below the response thresholds, thus incorrectly predicting substantial risk reductions from reductions in concentrations that do not affect health risks. We conclude that ignored estimation errors obscure the shapes of true C-R functions, including possible thresholds, possibly leading to unrealistic predictions of the changes in risk caused by changing exposures. Instead of estimating improvements in public health per unit reduction (e.g., per 10 µg/m³ decrease) in average PM2.5 concentrations, it may be essential to consider how interventions change the distributions of exposure concentrations. Copyright © 2018 Elsevier Inc. All rights reserved.
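The mechanism this abstract describes is easy to reproduce with a small simulation. The sketch below is illustrative only: the threshold, error magnitude, and risk level are invented, not taken from the paper. It draws subjects whose true concentration-response relation is a step function, adds classical estimation error to the exposures, and shows that risk then appears to rise smoothly even in exposure bands well below the true threshold.

```python
import random
import statistics

random.seed(0)

THRESHOLD = 12.0   # hypothetical true response threshold (µg/m³)
SIGMA_ERR = 4.0    # hypothetical exposure-estimation error SD (µg/m³)

# True C-R function: a step -- zero risk below the threshold, constant above.
def true_risk(c):
    return 0.05 if c >= THRESHOLD else 0.0

# Simulate subjects with true exposures and classical estimation error.
n = 200_000
records = []
for _ in range(n):
    c_true = random.uniform(0.0, 25.0)
    c_est = c_true + random.gauss(0.0, SIGMA_ERR)  # estimated exposure
    sick = random.random() < true_risk(c_true)     # outcome driven by TRUE exposure
    records.append((c_est, sick))

# Estimated C-R: mean risk within bands of *estimated* exposure.
def risk_in_band(lo, hi):
    hits = [s for c, s in records if lo <= c < hi]
    return statistics.mean(hits) if hits else 0.0

below = risk_in_band(4.0, 8.0)    # well below the true threshold
above = risk_in_band(16.0, 20.0)  # well above it

print(f"apparent risk at 4-8 µg/m³:   {below:.4f} (true risk there is 0)")
print(f"apparent risk at 16-20 µg/m³: {above:.4f}")
```

Because some subjects with true exposure above the threshold get estimates below it, the sub-threshold bands inherit nonzero apparent risk, which is the smoothing effect the paper attributes to estimation error.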
Modulated error diffusion CGHs for neural nets
Vermeulen, Pieter J. E.; Casasent, David P.
1990-05-01
New modulated error diffusion CGHs (computer generated holograms) for optical computing are considered. Specific attention is given to their use in optical matrix-vector, associative processor, neural net and optical interconnection architectures. We consider lensless CGH systems (many CGHs use an external Fourier transform (FT) lens), the Fresnel sampling requirements, the effects of finite CGH apertures (sample and hold inputs), dot size correction (for laser recorders), and new applications for this novel encoding method (that devotes attention to quantization noise effects).
Errors in causal inference: an organizational schema for systematic error and random error.
Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji
2016-11-01
To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.
Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello
2013-10-26
Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
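As a rough illustration of the beta-binomial calculation mentioned here, the sketch below computes the two misclassification risks for a hypothetical lot-acceptance rule. All numbers (the sample size n, the decision rule d, the two prevalences, and the intra-cluster correlation parameterisation) are assumptions for illustration, and the n draws are treated as one exchangeable beta-binomial pool rather than explicit clusters; the paper's own designs and supplemental code should be consulted for real applications.

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    # log of the Beta function via log-gamma, for numerical stability
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, a, b):
    """Beta-binomial pmf: binomial counts whose rate is Beta(a, b)-distributed."""
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))

def classification_risk(n, d, p, rho):
    """P(observing <= d 'failures' in n draws) when the true failure rate is p
    and the overdispersion/correlation is rho (illustrative parameterisation:
    alpha = p(1-rho)/rho, beta = (1-p)(1-rho)/rho)."""
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    return sum(betabinom_pmf(k, n, a, b) for k in range(d + 1))

# Hypothetical design: n = 60 observations, accept the lot if <= 6 failures.
n, d = 60, 6
alpha_risk = 1 - classification_risk(n, d, p=0.05, rho=0.1)  # reject a good lot
beta_risk = classification_risk(n, d, p=0.30, rho=0.1)       # accept a bad lot
print(f"P(reject | p=0.05) = {alpha_risk:.3f}")
print(f"P(accept | p=0.30) = {beta_risk:.3f}")
```

A design search would sweep n and d until both risks fall below user-specified limits, which is the constraint the paper's sample-size tables enforce.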
The effect of sample preparation methods on glass performance
International Nuclear Information System (INIS)
Oh, M.S.; Oversby, V.M.
1990-01-01
A series of experiments was conducted using SRL 165 synthetic waste glass to investigate the effects of surface preparation and leaching solution composition on the alteration of the glass. Samples of glass with as-cast surfaces produced smooth reaction layers and some evidence for precipitation of secondary phases from solution. Secondary phases were more abundant in samples reacted in deionized water than for those reacted in a silicate solution. Samples with saw-cut surfaces showed a large reduction in surface roughness after 7 days of reaction in either solution. Reaction in silicate solution for up to 91 days produced no further change in surface morphology, while reaction in DIW produced a spongy surface that formed the substrate for further surface layer development. The differences in the surface morphology of the samples may create microclimates that control the details of development of alteration layers on the glass; however, the concentrations of elements in leaching solutions show differences of 50% or less between samples prepared with different surface conditions for tests of a few months duration. 6 refs., 7 figs., 1 tab
Perry, K Allison; O'Connell, Heather A; Rose, Laura J; Noble-Wang, Judith A; Arduino, Matthew J
The effect of packaging, shipping temperatures and storage times on recovery of Bacillus anthracis Sterne spores from swabs was investigated. Macrofoam swabs were pre-moistened, inoculated with Bacillus anthracis spores, and packaged in primary containment or secondary containment before storage at -15°C, 5°C, 21°C, or 35°C for 0-7 days. Swabs were processed according to validated Centers for Disease Control/Laboratory Response Network culture protocols, and the percent recovery relative to a reference sample (T₀) was determined for each variable. No differences were observed in recovery between swabs held at -15°C and 5°C (p ≥ 0.23). These two temperatures provided significantly better recovery than swabs held at 21°C or 35°C (all 7 days pooled, p ≤ 0.04). The percent recovery at 5°C was not significantly different if processed on days 1, 2 or 4, but was significantly lower on day 7 (day 2 vs. 7, 5°C, 10², p=0.03). Secondary containment provided significantly better percent recovery than primary containment, regardless of storage time (5°C data, p ≤ 0.008). The integrity of environmental swab samples containing Bacillus anthracis spores shipped in secondary containment was maintained when stored at -15°C or 5°C and processed within 4 days to yield the optimum percent recovery of spores.
Armbrecht, Anne-Simone; Wöhrmann, Anne; Gibbons, Henning; Stahl, Jutta
2010-09-01
The present electrophysiological study investigated the temporal development of response conflict and the effects of diverging conflict sources on error(-related) negativity (Ne). Eighteen participants performed a combined stop-signal flanker task, which was comprised of two different conflict sources: a left-right and a go-stop response conflict. It is assumed that the Ne reflects the activity of a conflict monitoring system and thus increases according to (i) the number of conflict sources and (ii) the temporal development of the conflict activity. No increase of the Ne amplitude after double errors (comprising two conflict sources) as compared to hand- and stop-errors (comprising one conflict source) was found, whereas a higher Ne amplitude was observed after a delayed stop-signal onset. The results suggest that the Ne is not sensitive to an increase in the number of conflict sources, but to the temporal dynamics of a go-stop response conflict. Copyright (c) 2010 Elsevier B.V. All rights reserved.
Mitchell, Jason P; Dodson, Chad S; Schacter, Daniel L
2005-05-01
Misattribution refers to the act of attributing a memory or idea to an incorrect source, such as successfully remembering a bit of information but linking it to an inappropriate person or time [Jacoby, L. L., Kelley, C., Brown, J., & Jasechko, J. (1989). Becoming famous overnight: Limits on the ability to avoid unconscious influences of the past. Journal of Personality and Social Psychology, 56, 326-338; Schacter, D. L. (1999). The seven sins of memory: Insights from psychology and cognitive neuroscience. American Psychologist, 54, 182-203; Schacter, D. L. (2001). The seven sins of memory: How the mind forgets and remembers. Boston: Houghton Mifflin]. Cognitive studies have suggested that misattribution errors may occur in the absence of recollection for the details of an initial encounter with a stimulus, but little is known about the neural basis of this memory phenomenon. Here we used functional magnetic resonance imaging (fMRI) to examine the hypothesized role of recollection in counteracting the illusory truth effect, a misattribution error whereby perceivers systematically overrate the truth of previously presented information. Imaging was conducted during the encoding and subsequent judgment of unfamiliar statements that were presented as true or false. Event-related fMRI analyses were conditionalized as a function of subsequent performance. Results demonstrated that encoding activation in regions previously associated with successful recollection--including the hippocampus and the ventrolateral prefrontal cortex (PFC)--correlated with the successful avoidance of misattribution errors, providing initial neuroimaging support for earlier cognitive accounts of misattribution.
Directory of Open Access Journals (Sweden)
Ehsanollah Habibi
2013-01-01
Background: Among the most important and effective factors affecting the efficiency of the human workforce are accuracy, promptness, and ability. In the context of promoting levels and quality of productivity, the aim of this study was to investigate the effects of exposure to noise on the rate of errors, speed of work, and capability in performing manual activities. Methods: This experimental study was conducted on 96 students (52 female and 44 male) of the Isfahan Medical Science University, with means (standard deviations) of age, height, and weight of 22.81 (3.04) years, 171.67 (8.51) cm, and 65.05 (13.13) kg, respectively. Sampling was conducted with a randomized block design. Along with controlling for intervening factors, a combination of sound pressure levels [65 dB(A), 85 dB(A), and 95 dB(A)] and exposure times (0, 20, and 40) was used for evaluation of precision and speed of action of the participants in the ergonomic two-hand coordination test. Data were analyzed with SPSS 18 software using descriptive and analytical statistics, with repeated-measures analysis of covariance (ANCOVA). Results: The results of this study showed that increasing the sound pressure level from 65 to 95 dB(A) increased the speed of work (P < 0.05). Male participants were more annoyed by the noise than females. Also, an increase in sound pressure level increased the rate of error (P < 0.05). Conclusions: According to the results of this research, increasing the sound pressure level decreased efficiency and increased errors; in exposure to sounds below 85 dB(A), efficiency initially decreased and then increased with a mild slope.
Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu
2017-05-25
Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of the errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error caused by downward continuation. In order to improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect.
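To make the role of regularization in downward continuation concrete, the following toy sketch (not the authors' semi-parametric method) models continuation to altitude as a Gaussian smoothing operator acting on a 1-D ground profile, then compares a naive least-squares inversion with a Tikhonov-regularized one. The operator, grid, signal, and noise level are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D toy "gravity" profile on the ground (the signal we want to recover).
x_grid = np.linspace(0.0, 100.0, 200)
g_ground = (np.exp(-((x_grid - 40.0) / 8.0) ** 2)
            + 0.6 * np.exp(-((x_grid - 70.0) / 5.0) ** 2))

# Upward continuation acts as a smoothing operator; model it here as a
# (hypothetical) row-normalized Gaussian blur matrix A.
def continuation_matrix(grid, altitude):
    d = grid[:, None] - grid[None, :]
    k = np.exp(-(d / altitude) ** 2)
    return k / k.sum(axis=1, keepdims=True)

A = continuation_matrix(x_grid, altitude=6.0)
g_air = A @ g_ground + rng.normal(0.0, 0.01, x_grid.size)  # noisy airborne data

# Naive downward continuation: un-regularized least squares (noise blows up
# because the smoothing operator is severely ill-conditioned).
naive = np.linalg.lstsq(A, g_air, rcond=None)[0]

# Tikhonov-regularized downward continuation: (A^T A + lam I) x = A^T y.
lam = 1e-3
tik = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[0]), A.T @ g_air)

def rms(v):
    return float(np.sqrt(np.mean(v ** 2)))

print("naive RMS error:      ", rms(naive - g_ground))
print("regularized RMS error:", rms(tik - g_ground))
```

The regularization parameter trades noise amplification against bias, which is the same trade-off the paper manages when suppressing random errors in the Tibetan Plateau setting.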
The non-trivial effects of trivial errors in scientific communication and evaluation
Tüür-Fröhlich, Terje
2016-01-01
Thomson Reuters’ citation indexes, i.e. SCI, SSCI and AHCI, are said to be “authoritative”. Due to the huge influence of these databases on global academic evaluation of productivity and impact, Terje Tüür-Fröhlich decided to conduct case studies on the data quality of Social Sciences Citation Index (SSCI) records. Tüür-Fröhlich investigated articles from social science and law. The main findings: SSCI records contain tremendous amounts of “trivial errors”, not only the misspellings and typos previously mentioned in the bibliometrics and scientometrics literature. Tüür-Fröhlich's research also documented fatal errors which have not been mentioned in the scientometrics literature at all. Tüür-Fröhlich found more than 80 fatal mutations and mutilations of Pierre Bourdieu (e.g. “Atkinson”, “Pierre, B.” and “Pierri, B.”). SSCI even generated zombie references (phantom authors and works) through confusion of data fields (a deadly sin for a database producer), such as fragments of Patent Laws...
Schmidt, Brandy; Papale, Andrew; Redish, A David; Markus, Etan J
2013-02-15
Navigation can be accomplished through multiple decision-making strategies, using different information-processing computations. A well-studied dichotomy in these decision-making strategies compares hippocampal-dependent "place" and dorsal-lateral striatal-dependent "response" strategies. A place strategy depends on the ability to flexibly respond to environmental cues, while a response strategy depends on the ability to quickly recognize and react to situations with well-learned action-outcome relationships. When rats reach decision points, they sometimes pause and orient toward the potential routes of travel, a process termed vicarious trial and error (VTE). VTE co-occurs with neurophysiological information processing, including sweeps of representation ahead of the animal in the hippocampus and transient representations of reward in the ventral striatum and orbitofrontal cortex. To examine the relationship between VTE and the place/response strategy dichotomy, we analyzed data in which rats were cued to switch between place and response strategies on a plus maze. The configuration of the maze allowed for place and response strategies to work competitively or cooperatively. Animals showed increased VTE on trials entailing competition between navigational systems, linking VTE with deliberative decision-making. Even in a well-learned task, VTE was preferentially exhibited when a spatial selection was required, further linking VTE behavior with decision-making associated with hippocampal processing.
Hall effect measurements on proton-irradiated ROSE samples
International Nuclear Information System (INIS)
Biggeri, U.; Bruzzi, M.; Borchi, E.
1997-01-01
Bulk samples obtained from two wafers of a silicon monocrystal material produced by Float-Zone refinement have been analyzed using the four-point probe method. One of the wafers comes from an oxygenated ingot; two sets of pure and oxygenated samples have been irradiated with 24 GeV/c protons in the fluence range from 10¹³ p/cm² to 2×10¹⁴ p/cm². Van der Pauw resistivity and Hall coefficient have been measured before and after irradiation as a function of the temperature. A thermal treatment (30 minutes at 100°C) has been performed to accelerate the reverse annealing effect in the irradiated silicon. The irradiated samples show the same exponential dependence of the resistivity and of the Hall coefficient on the temperature from 370 K to 100 K, corresponding to the presence of radiation-induced deep energy levels around 0.6-0.7 eV in the silicon gap. The free carrier concentrations (n, p) have been evaluated in the investigated fluence range. The inversion of the conductivity type from n to p occurred at 7×10¹³ p/cm² and at 4×10¹³ p/cm² before and after the annealing treatment, respectively, for both sets. Only slight differences have been detected between the pure and oxygenated samples.
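The single-carrier relations behind such evaluations can be sketched in a few lines. The Hall coefficient and resistivity values below are hypothetical, chosen only to illustrate the arithmetic (n = 1/(q|R_H|), μ_H = |R_H|/ρ), not taken from the ROSE measurements.

```python
# Single-carrier relations used to turn Van der Pauw / Hall data into
# carrier concentration and Hall mobility.
Q_E = 1.602176634e-19  # elementary charge, C

def carrier_concentration(hall_coeff_cm3_per_C):
    """n = 1 / (q |R_H|), in cm^-3, assuming one dominant carrier type."""
    return 1.0 / (Q_E * abs(hall_coeff_cm3_per_C))

def hall_mobility(hall_coeff_cm3_per_C, resistivity_ohm_cm):
    """mu_H = |R_H| / rho, in cm^2 V^-1 s^-1."""
    return abs(hall_coeff_cm3_per_C) / resistivity_ohm_cm

# Hypothetical n-type sample: R_H = -5.2e5 cm^3/C, rho = 3.0e3 ohm*cm.
r_h, rho = -5.2e5, 3.0e3
n = carrier_concentration(r_h)
mu = hall_mobility(r_h, rho)
carrier_type = "p" if r_h > 0 else "n"  # sign of R_H gives the carrier type
print(f"{carrier_type}-type, n = {n:.2e} cm^-3, mu_H = {mu:.0f} cm^2/Vs")
```

The sign flip of R_H with increasing fluence is what signals the n-to-p type inversion reported in the abstract.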
Yuma, Yoshikazu
2010-08-01
This research examined the effect of prison population densities (PPD) on inmate-inmate prison violence rates (PVR) in Japan using one-year-interval time-series data (1972-2006). Cointegration regressions revealed a long-run equilibrium relationship between PPD and PVR. PPD had a significant and increasing effect on PVR in the long-term. Error correction models showed that in the short-term, the effect of PPD was significant and positive on PVR, even after controlling for the effects of the proportions of males, age younger than 30 years, less than one-year incarceration, and prisoner/staff ratio. The results were discussed in regard to (a) differences between Japanese prisons and prisons in the United States, and (b) methodological problems found in previous research.
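The two-step cointegration/error-correction approach referred to here can be sketched with ordinary least squares on synthetic data. The series below are stand-ins (a random walk for density and a cointegrated violence rate), not the Japanese prison data; a real analysis would also test for unit roots and cointegration first.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic annual series: density (ppd) a random walk, violence rate (pvr)
# cointegrated with it plus stationary noise.
T = 500
ppd = np.cumsum(rng.normal(0.0, 1.0, T)) + 100.0
pvr = 0.5 * ppd + rng.normal(0.0, 1.0, T)

# Step 1: long-run (cointegrating) regression pvr = a + b*ppd + u.
X = np.column_stack([np.ones(T), ppd])
a_hat, b_hat = np.linalg.lstsq(X, pvr, rcond=None)[0]
u = pvr - (a_hat + b_hat * ppd)  # equilibrium error

# Step 2: error-correction model on first differences,
#   d(pvr)_t = c + g*d(ppd)_t + lam*u_{t-1} + e_t.
d_pvr, d_ppd = np.diff(pvr), np.diff(ppd)
Z = np.column_stack([np.ones(T - 1), d_ppd, u[:-1]])
c_hat, g_hat, lam_hat = np.linalg.lstsq(Z, d_pvr, rcond=None)[0]

print(f"long-run effect b = {b_hat:.3f}")
print(f"short-run effect g = {g_hat:.3f}")
print(f"error-correction coefficient lam = {lam_hat:.3f}  (negative => reverts)")
```

The negative error-correction coefficient captures the reversion toward the long-run equilibrium that distinguishes the long-term and short-term effects discussed in the abstract.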
The effects of spatial sampling choices on MR temperature measurements.
Todd, Nick; Vyas, Urvi; de Bever, Josh; Payne, Allison; Parker, Dennis L
2011-02-01
The purpose of this article is to quantify the effects that spatial sampling parameters have on the accuracy of magnetic resonance temperature measurements during high intensity focused ultrasound treatments. Spatial resolution and position of the sampling grid were considered using experimental and simulated data for two different types of high intensity focused ultrasound heating trajectories (a single point and a 4-mm circle), with maximum measured temperature and thermal dose volume as the metrics. It is demonstrated that measurement accuracy is related to the curvature of the temperature distribution, where regions with larger spatial second derivatives require higher resolution. The location of the sampling grid relative to the temperature distribution has a significant effect on the measured values. When imaging at 1.0 × 1.0 × 3.0 mm³ resolution, the measured values for maximum temperature and volume dosed to 240 cumulative equivalent minutes (CEM) or greater varied by 17% and 33%, respectively, for the single-point heating case, and by 5% and 18%, respectively, for the 4-mm circle heating case. Accurate measurement of the maximum temperature required imaging at 1.0 × 1.0 × 3.0 mm³ resolution for the single-point heating case and 2.0 × 2.0 × 5.0 mm³ resolution for the 4-mm circle heating case. Copyright © 2010 Wiley-Liss, Inc.
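The dependence of the measured maximum on voxel size and grid position can be illustrated with a tiny 1-D model. The Gaussian hot-spot width and peak below are invented numbers, not the paper's data; the point is only that voxel averaging (partial-volume effect) plus grid placement changes the apparent maximum.

```python
import math

# Hypothetical true temperature rise: a Gaussian hot spot.
SIGMA = 1.5   # hot-spot width, mm
PEAK = 20.0   # peak temperature rise, °C

def temp(x):
    return PEAK * math.exp(-(x / SIGMA) ** 2)

def measured_max(voxel_mm, offset_mm):
    """Max of voxel-averaged samples on a grid shifted by offset_mm."""
    best = 0.0
    for i in range(-40, 41):
        lo = offset_mm + i * voxel_mm - voxel_mm / 2.0
        # voxel average via a fine Riemann sum (models partial-volume averaging)
        steps = 50
        avg = sum(temp(lo + (j + 0.5) * voxel_mm / steps)
                  for j in range(steps)) / steps
        best = max(best, avg)
    return best

for voxel in (1.0, 2.0, 3.0):
    vals = [measured_max(voxel, off) for off in (0.0, voxel / 4, voxel / 2)]
    spread = (max(vals) - min(vals)) / PEAK * 100.0
    print(f"voxel {voxel:.0f} mm: measured max {min(vals):.1f}-{max(vals):.1f} °C "
          f"({spread:.0f}% variation with grid position)")
```

Coarser voxels both underestimate the true peak and make the result more sensitive to where the grid happens to fall relative to the hot spot, mirroring the resolution and grid-position effects reported in the abstract.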
The perils of straying from protocol: sampling bias and interviewer effects.
Directory of Open Access Journals (Sweden)
Carrie J Ngongo
Full Text Available Fidelity to research protocol is critical. In a contingent valuation study in an informal urban settlement in Nairobi, Kenya, participants responded differently to the three trained interviewers. Interviewer effects were present during the survey pilot, then magnified at the start of the main survey after a seemingly slight adaptation of the survey sampling protocol allowed interviewers to speak with the "closest neighbor" in the event that no one was home at a selected household. This slight degree of interviewer choice introduced sampling bias. Multinomial logistic regression and post-estimation tests revealed that the three interviewers' samples differed significantly from one another according to six demographic characteristics. The two female interviewers were 2.8 and 7.7 times less likely to talk with respondents of low socio-economic status than the male interviewer. Systematic error renders it impossible to determine which of the survey responses might be "correct." This experience demonstrates why researchers must take care to strictly follow sampling protocols, consistently train interviewers, and monitor responses by interviewer to ensure similarity between interviewers' groups and produce unbiased estimates of the parameters of interest.
Lucke, Robert L.; Sirlin, Samuel W.; San Martin, A. M.
1992-01-01
For most imaging sensors, a constant (dc) pointing error is unimportant (unless large), but time-dependent (ac) errors degrade performance by either distorting or smearing the image. When properly quantified, the separation of the root-mean-square effects of random line-of-sight motions into dc and ac components can be used to obtain the minimum necessary line-of-sight stability specifications. The relation between stability requirements and sensor resolution is discussed, with a view to improving communication between the data analyst and the control systems engineer.
Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra
2016-03-01
Sound is among the significant environmental factors affecting people's health: it plays a role in both physical and psychological injury, and it also affects individuals' performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on performance and error rate in manual activities. This was an interventional study conducted on 50 students at Shiraz University of Medical Sciences (25 males and 25 females) in which each person served as his or her own control, with performance assessed at sound levels of 70, 90, and 110 dB by using two factors of physical features and the creation of different conditions of the sound source, and by applying the Two-Arm Coordination Test. The data were analyzed using SPSS version 16. Repeated-measures analyses were used to compare the duration of performance as well as the errors measured in the test. Based on the results, we found a direct and significant association between sound level and duration of performance. Moreover, participants' performance differed significantly across sound levels (at 110 dB as opposed to 70 and 90 dB, p < 0.05 and p < 0.001, respectively). This study found that a sound level of 110 dB had an important effect on individuals' performance, i.e., performance decreased.
Effect of granulation of geological samples in neutron transport measurements
International Nuclear Information System (INIS)
Woznicka, Urszula; Drozdowicz, Krzysztof; Gabanska, Barbara; Krynicka, Ewa; Igielski, Andrzej
2001-01-01
The thermal neutron absorption cross section is one of the parameters describing the transport of thermal neutrons in a medium. Theoretical descriptions and experiments for determining the absorption cross section are well documented for homogeneous media; this applies, e.g., to fluids or amorphous solids. Many other media, among them geological materials, should be treated as heterogeneous. Material heterogeneity with respect to thermal neutron transport in a considered volume is understood here as the existence of many small regions which differ significantly in their macroscopic neutron diffusion parameters (defined by the absorption and transport cross sections). The net difference that influences the neutron transport arises from a combination of the absolute differences between the parameters and the sizes of the regions (relative to the neutron mean free paths). A rock can be naturally heterogeneous in the above sense. Moreover, preparation of a rock sample for a neutron measurement can increase its natural heterogeneity (for example, when the rock material is crushed and the measured sample consists of the obtained grains). The question is up to which granulation the sample material may still be treated as homogeneous, and above which grain size it must be considered a two-component medium. It has been experimentally proved that the effective absorption of thermal neutrons in a heterogeneous two-component material can differ significantly from the absorption in a homogeneous material consisting of the same elements. The final effect depends on a few factors: the macroscopic absorption cross sections of the components, their total mass contributions, and the size of the grains. The ratio of the effective absorption cross section of the heterogeneous material to that of the equivalent homogeneous one is a measure of the heterogeneity effect on the thermal neutron absorption
STAR FORMATION LAWS: THE EFFECTS OF GAS CLOUD SAMPLING
International Nuclear Information System (INIS)
Calzetti, D.; Liu, G.; Koda, J.
2012-01-01
Recent observational results indicate that the functional shape of the spatially resolved star formation-molecular gas density relation depends on the spatial scale considered. These results may indicate a fundamental role of sampling effects on scales that are typically only a few times larger than those of the largest molecular clouds. To investigate the impact of this effect, we construct simple models for the distribution of molecular clouds in a typical star-forming spiral galaxy and, assuming a power-law relation between star formation rate (SFR) and cloud mass, explore a range of input parameters. We confirm that the slope and the scatter of the simulated SFR-molecular gas surface density relation depend on the size of the sub-galactic region considered, due to stochastic sampling of the molecular cloud mass function, and the effect is larger for steeper relations between SFR and molecular gas. There is a general trend for all slope values to tend to ∼unity for region sizes larger than 1-2 kpc, irrespective of the input SFR-cloud relation. The region size of 1-2 kpc corresponds to the area where the cloud mass function becomes fully sampled. We quantify the effects of selection biases in data tracing the SFR, either as thresholds (i.e., clouds smaller than a given mass value do not form stars) or as backgrounds (e.g., diffuse emission unrelated to current star formation is counted toward the SFR). Apparently discordant observational results are brought into agreement via this simple model, and the comparison of our simulations with data for a few galaxies supports a steep (>1) power-law index between SFR and molecular gas.
Support vector regression to predict porosity and permeability: Effect of sample size
Al-Anazi, A. F.; Gates, I. D.
2012-02-01
Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. Particularly, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. Also, the performance of SVR depends on both kernel function
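Vapnik's ɛ-insensitive loss mentioned in the abstract can be written compactly. This is a generic sketch of the loss function itself, not the authors' SVR implementation, and the default tube width is an arbitrary choice:

```python
import numpy as np

def eps_insensitive(residuals, eps=0.1):
    """Vapnik's epsilon-insensitive loss: zero inside the eps tube,
    linear outside it (contrast with the squared loss typically
    minimized under the ERM principle)."""
    return np.maximum(np.abs(residuals) - eps, 0.0)

# Residuals inside the tube cost nothing; outliers grow only linearly.
print(eps_insensitive(np.array([0.05, -0.05, 0.3])))
```

Ignoring small residuals and penalizing large ones only linearly is what makes the fit less sensitive to noise when training samples are few.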
Hühn, M; Piepho, H P
2003-03-01
Tests for linkage are usually performed using the lod score method. A critical question in linkage analyses is the choice of sample size. The appropriate sample size depends on the desired type-I error and power of the test. This paper investigates the exact type-I error and power of the lod score method in a segregating F(2) population with co-dominant markers and a qualitative monogenic dominant-recessive trait. For illustration, a disease-resistance trait is considered, where the susceptible allele is recessive. A procedure is suggested for finding the appropriate sample size. It is shown that recessive plants have about twice the information content of dominant plants, so the former should be preferred for linkage detection. In some cases the exact alpha-values for a given nominal alpha may be rather small due to the discrete nature of the sampling distribution in small samples. We show that a gain in power is possible by using exact methods.
Paper Capillary Enables Effective Sampling for Microfluidic Paper Analytical Devices.
Shangguan, Jin-Wen; Liu, Yu; Wang, Sha; Hou, Yun-Xuan; Xu, Bi-Yi; Xu, Jing-Juan; Chen, Hong-Yuan
2018-06-06
Paper capillary is introduced to enable effective sampling on microfluidic paper analytical devices. By coupling the macroscale capillary force of the paper capillary and the microscale capillary forces of native paper, fluid transport can be flexibly tailored with proper design. Subsequently, a hybrid-fluid-mode paper capillary device was proposed, which enables fast and reliable sampling in an arrayed form, with less surface adsorption and bias for different components. The resulting device thus well supports high-throughput, quantitative, and repeatable assays performed entirely by hand. With all these merits, multiplex analysis of ions, proteins, and microbes has been realized on this platform, paving the way to higher-level analysis on μPADs.
Empirical method for matrix effects correction in liquid samples
International Nuclear Information System (INIS)
Vigoda de Leyt, Dora; Vazquez, Cristina
1987-01-01
A simple method for the determination of Cr, Ni and Mo in stainless steels is presented. In order to minimize matrix effects, a liquid system for dissolving stainless-steel chips was developed. Pure-element solutions were used as standards. Preparation of synthetic solutions containing all the elements of the steel, as well as mathematical corrections, are avoided; the result is a simple chemical procedure which simplifies the method of analysis. Variance analysis of the results obtained with steel samples shows that the three elements may be determined by comparison with the analytical curves obtained with the pure elements if the same parameters are used in the calibration curves. Accuracy and precision were checked against other techniques using the British Chemical Standards of the Bureau of Analysed Samples Ltd. (England). (M.E.L.) [es
Diagnostic errors in pediatric radiology
International Nuclear Information System (INIS)
Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.
2011-01-01
Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean = 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case), of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)
Galaxy power-spectrum responses and redshift-space super-sample effect
Li, Yin; Schmittfull, Marcel; Seljak, Uroš
2018-02-01
As a major source of cosmological information, galaxy clustering is susceptible to long-wavelength density and tidal fluctuations. These long modes modulate the growth and expansion rate of local structures, shifting them in both amplitude and scale. These effects are often named the growth and dilation effects, respectively. In particular the dilation shifts the baryon acoustic oscillation (BAO) peak and breaks the assumption of the Alcock-Paczynski (AP) test. This cannot be removed with reconstruction techniques because the effect originates from long modes outside the survey. In redshift space, the long modes generate a large-scale radial peculiar velocity that affects the redshift-space distortion (RSD) signal. We compute the redshift-space response functions of the galaxy power spectrum to long density and tidal modes at leading order in perturbation theory, including both the growth and dilation terms. We validate these response functions against measurements from simulated galaxy mock catalogs. As one application, long density and tidal modes beyond the scale of a survey correlate various observables leading to an excess error known as the super-sample covariance, and thus weaken their constraining power. We quantify the super-sample effect on BAO, AP, and RSD measurements, and study its impact on current and future surveys.
Directory of Open Access Journals (Sweden)
Demirhan Erdal
2015-01-01
Full Text Available This paper aims to investigate the effect of exchange-rate stability on real export volume in Turkey, using monthly data for the period February 2001 to January 2010. The Johansen multivariate cointegration method and the parsimonious error-correction model are applied to determine long-run and short-run relationships between real export volume and its determinants. In this study, the conditional variance of the GARCH(1,1) model is taken as a proxy for exchange-rate stability, and generalized impulse-response functions and variance-decomposition analyses are applied to analyze the dynamic effects of variables on real export volume. The empirical findings suggest that exchange-rate stability has a significant positive effect on real export volume, both in the short and the long run.
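The GARCH(1,1) conditional variance used here as a volatility proxy follows a simple recursion. The sketch below uses synthetic returns and illustrative parameters (omega, alpha, beta are not estimates from the Turkish data):

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:
    h_t = omega + alpha * r_{t-1}**2 + beta * h_{t-1}.
    The series h is the volatility (instability) proxy."""
    h = np.empty_like(returns, dtype=float)
    h[0] = returns.var()                       # initialize at sample variance
    for t in range(1, len(returns)):
        h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1]
    return h

rng = np.random.default_rng(0)
r = rng.normal(scale=0.01, size=108)           # 108 synthetic monthly log returns
h = garch11_variance(r, omega=1e-6, alpha=0.1, beta=0.85)
```

With alpha + beta < 1 the recursion is stationary and the variance stays positive, so h can serve directly as a regressor proxying exchange-rate stability.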
Jiang, Yan-xiu; Bayanheshig; Yang, Shuo; Zhao, Xu-long; Wu, Na; Li, Wen-hao
2016-03-01
To support the fabrication of high-resolution gratings, a numerical calculation was used to analyze the effect of recording-parameter errors on the groove density, focal curve and imaging performance of the grating, and their compensation. Based on Fermat's principle, the light path function and aberration theory, the effect on the imaging performance of the grating was analyzed. For fixed use parameters, an error in the recording angle has the greatest influence on imaging performance; therefore, increasing the weight of the recording angle in the optimization can improve the accuracy of the recording-angle values. The recording distance has little influence on imaging performance. The relative errors of the recording parameters cause changes in the imaging performance of the grating, and the results indicate that recording-parameter errors can be compensated by adjusting the corresponding parameter. By analyzing the main errors affecting imaging performance and proposing a compensation method, the study provides theoretical guidance for the fabrication of high-resolution varied-line-space plane holographic gratings for on-line spectral diagnostics and reduces alignment difficulty.
Vinay BC; Nikhitha MK; Patel Sunil B
2015-01-01
This review article explains medication errors clearly and concisely: their definition, the scope of the medication error problem, types of medication errors, their common causes, the monitoring and consequences of medication errors, and their prevention and management, supported by easy-to-understand tables.
Studying the effect of perceptual errors on the decisions made by the ...
African Journals Online (AJOL)
Investors often make investment decisions while unaware of the factors that influence them. The main purpose of the present research is to study the perceptual factors affecting the decision-making process of investors and the effect of information on these factors. For this aim, 385 investors of Tehran Stock ...
Effects of several feedback methods for correcting reading errors by computer-assisted instruction
Spaai, G.W.G.; Reitsma, P.; Ellermann, H.H.
1987-01-01
As modern technology facilitates the presentation of various forms of feedback in instructional systems, it is important to investigate their relative effects. An experiment was performed to investigate the learning effects of three forms of feedback. Sixty novice readers participated in the
Corrected RMS Error and Effective Number of Bits for Sinewave ADC Tests
International Nuclear Information System (INIS)
Jerome J. Blair
2002-01-01
A new definition is proposed for the effective number of bits of an ADC. This definition removes the variation in the calculated effective bits when the amplitude and offset of the sinewave test signal are slightly varied. This variation is most pronounced when test signals with amplitudes of a small number of code bin widths are applied to very low-noise ADCs. The effectiveness of the proposed definition is compared with that of other proposed definitions over a range of signal amplitudes and noise levels.
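A common way to compute effective bits from a sinewave test compares the RMS fit error to the ideal quantization noise q/sqrt(12). The sketch below illustrates that baseline definition only (the paper's corrected definition differs in detail); the full-scale range, amplitude and frequency are arbitrary assumptions:

```python
import numpy as np

def enob(nbits, signal, measured):
    """Effective number of bits from sinewave-test RMS error: compares the
    measured RMS error to the ideal quantization noise q/sqrt(12) of an
    nbits converter (full-scale range assumed to be 2.0 here)."""
    q = 2.0 / 2**nbits                          # one code bin width
    rms_err = np.sqrt(np.mean((measured - signal) ** 2))
    return nbits - np.log2(rms_err / (q / np.sqrt(12.0)))

nbits = 8
q = 2.0 / 2**nbits
t = np.linspace(0.0, 1.0, 10_000, endpoint=False)
s = 0.9 * np.sin(2 * np.pi * 7 * t)             # sinewave test signal
meas = np.round(s / q) * q                      # ideal noiseless 8-bit quantizer
print(enob(nbits, s, meas))                     # close to 8 for an ideal ADC
```

For an ideal quantizer the rounding error is roughly uniform over one code bin, so the computed value lands near the nominal 8 bits; added noise or distortion pulls it below.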
International Nuclear Information System (INIS)
Singh, M.R.; Mukund, R.; Sahni, V.C.
1999-01-01
The influence of geometrical shape errors and surface errors on the characteristics and performance of grazing incidence optics used in the design of beamlines at synchrotron radiation facilities is considered. The methodology adopted for the simulation of slope errors is described and results presented for the ellipsoidal focussing mirror used in the design of PGM beamline at Indus-1. (author)
Managing Sensitive Information: DOD Can More Effectively Reduce the Risk of Classification Errors
National Research Council Canada - National Science Library
D'Agostino, Davi M; Borseth, Ann; Fenton, Mattias; Hatton, Adam; Hills, Barbara; Keefer, David; Mayfield, David; Reid, Jim; Richardson, Terry; Schwartz, Marc
2006-01-01
.... While some DoD components and their subordinate commands appear to manage effective programs, GAO identified weaknesses in others in the areas of classification management training, self-inspections...
AFM tip-sample convolution effects for cylinder protrusions
Shen, Jian; Zhang, Dan; Zhang, Fei-Hu; Gan, Yang
2017-11-01
A thorough understanding of AFM tip-geometry-dependent artifacts and the tip-sample convolution effect is essential for reliable AFM topographic characterization and dimensional metrology. Using rigid sapphire cylinder protrusions (diameter: 2.25 μm, height: 575 nm) as the model system, a systematic and quantitative study of the imaging artifacts of four types of tips (two different pyramidal tips, one tetrahedral tip and one super-sharp whisker tip) is carried out by comparing tip-geometry-dependent variations in the AFM topography of the cylinders and constructing rigid tip-cylinder convolution models. We found that the imaging artifacts and the tip-sample convolution effect are critically related to the actual inclination of the working cantilever, the tip geometry, and obstructive contacts between the working tip's planes/edges and the cylinder. Artifact-free images can only be obtained provided that all planes and edges of the working tip are steeper than the cylinder sidewalls. The findings reported here will contribute to reliable AFM characterization of surface features of micron or hundreds-of-nanometers height that are frequently met in the semiconductor, biology and materials fields.
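Rigid tip-sample "convolution" is mathematically a grey-scale dilation of the surface by the tip shape. The 1-D sketch below is a generic illustration of that operation, not the authors' 3-D cylinder model; the spike and parabolic tip profile are invented for demonstration:

```python
import numpy as np

def afm_image(surface, tip):
    """Rigid tip-sample 'convolution' (grey-scale dilation) in 1-D.

    tip[j] is the height of each tip point above the apex (apex = 0).
    The recorded height at each position is the lowest apex height at
    which no tip point penetrates the surface.
    """
    m = len(tip)
    half = m // 2
    padded = np.pad(surface.astype(float), half, constant_values=-np.inf)
    return np.array([np.max(padded[i:i + m] - tip)
                     for i in range(len(surface))])

surface = np.zeros(11)
surface[5] = 1.0                          # a narrow spike
tip = 0.1 * np.arange(-2, 3) ** 2         # blunt parabolic tip profile
img = afm_image(surface, tip)
# The spike keeps its height but appears broadened by the tip shape.
```

The broadening of the spike without a change in its peak height is the 1-D analogue of the sidewall artifacts discussed above: features steeper than the tip are imaged with the tip's own slope.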
Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman
2017-01-01
Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of MIMIC model for detecting uniform-DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, the number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to a decrease of 0.33% and 0.47% power of MIMIC model for detecting uniform-DIF, respectively. The findings indicated that, by increasing the scale length, the number of response categories and magnitude DIF improved the power of MIMIC model, by 3.47%, 4.83%, and 20.35%, respectively; it also decreased Type I error of MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that power of MIMIC model was at an acceptable level when latent trait distributions were skewed. However, empirical Type I error rate was slightly greater than nominal significance level. Consequently, the MIMIC was recommended for detection of uniform-DIF when latent construct distribution is nonnormal and the focal group sample size is small.
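The Monte Carlo logic behind such empirical Type I error estimates can be illustrated with a much simpler stand-in: a two-group mean test under a skewed trait with no true group difference, so every rejection is a false positive. This is not the MIMIC model itself, and all sample sizes and distributions below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_type1(sim_draws=2000, n_ref=200, n_focal=50, skewed=True):
    """Monte Carlo estimate of the empirical Type I error rate for a
    nominal 5% two-sided test, with unequal reference/focal group sizes
    and (optionally) a skewed trait distribution, mimicking the design
    dimensions varied in the study."""
    rejections = 0
    for _ in range(sim_draws):
        draw = rng.lognormal if skewed else rng.normal
        ref, focal = draw(size=n_ref), draw(size=n_focal)
        # Welch-type z statistic for the group mean difference
        se = np.sqrt(ref.var(ddof=1) / n_ref + focal.var(ddof=1) / n_focal)
        z = (ref.mean() - focal.mean()) / se
        rejections += abs(z) > 1.96
    return rejections / sim_draws

print(empirical_type1())
```

Comparing the returned rate to the nominal 0.05 under skewed versus normal draws shows the same kind of slight Type I inflation the abstract reports for nonnormal latent traits.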
Effect of sample stratification on dairy GWAS results
Directory of Open Access Journals (Sweden)
Ma Li
2012-10-01
Full Text Available Background: Artificial insemination and genetic selection are major factors contributing to population stratification in dairy cattle. In this study, we analyzed the effect of sample stratification and of stratification correction on the results of a dairy genome-wide association study (GWAS). Three methods for stratification correction were used: the efficient mixed-model association expedited (EMMAX) method accounting for correlation among all individuals, a generalized least squares (GLS) method based on half-sib intraclass correlation, and a principal component analysis (PCA) approach. Results: Historical pedigree data revealed that the 1,654 contemporary cows in the GWAS were all related when traced through approximately 10-15 generations of ancestors. Genome and phenotype stratifications had a striking overlap with the half-sib structure. A large elite half-sib family of cows contributed to the detection of favorable alleles that had low frequencies in the general population and high frequencies in the elite cows, and contributed to the detection of X-chromosome effects. All three methods for stratification correction reduced the number of significant effects. The EMMAX method had the most severe reduction in the number of significant effects, while the PCA method using 20 principal components and GLS had similar significance levels. Removal of the elite cows from the analysis without stratification correction removed many effects that were also removed by the three correction methods, indicating that stratification correction could have removed some true effects due to the elite cows. SNP effects with good consensus between different methods and effect-size distributions from USDA's Holstein genomic evaluation included the DGAT1-NIBP region of BTA14 for production traits, a SNP 45 kb upstream from PIGY on BTA6 and two SNPs in NIBP on BTA14 for protein percentage. However, most of these consensus effects had
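The PCA correction mentioned above amounts to residualizing the phenotype on the top genotype principal components before single-SNP tests. The sketch below uses synthetic genotypes with no true signal; the dimensions are invented, though 20 components matches the number reported in the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 400, 60, 20                     # cows, SNPs, principal components
G = rng.integers(0, 3, size=(n, m)).astype(float)  # synthetic 0/1/2 genotypes
y = rng.normal(size=n)                    # synthetic phenotype, no true signal

# Principal components of the centered genotype matrix capture stratification.
Gc = G - G.mean(axis=0)
U, S, Vt = np.linalg.svd(Gc, full_matrices=False)
PCs = U[:, :k] * S[:k]                    # top-k sample scores

# Residualize the phenotype on the PCs before single-SNP association tests.
coef, *_ = np.linalg.lstsq(PCs, y - y.mean(), rcond=None)
y_adj = y - y.mean() - PCs @ coef
```

Because the adjusted phenotype is orthogonal to the leading axes of genetic structure, associations driven purely by family stratification (such as a single elite half-sib family) are damped, which is why correction can also remove some true effects carried by that family.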
Directory of Open Access Journals (Sweden)
María Dolores Arenas Jiménez
2017-11-01
Full Text Available Background: Haemodialysis (HD) patients are a high-risk population group. For these patients, an error could have catastrophic consequences. Therefore, a system that ensures the safety of these patients in an environment with high technology and substantial human-factor interaction is a requirement. Objectives: To show a systematic working approach, reproducible in any HD unit, which consists of recording the complications and errors that occurred during the HD session; defining which of those complications could be considered adverse events (AEs), and therefore preventable; and carrying out a systematic analysis of them, as well as of underlying real or potential errors, evaluating their severity, frequency and detection, and establishing priorities for action (Failure Mode and Effects Analysis [FMEA] system). Methods: Retrospective analysis of the charts of all HD sessions performed during one month (October 2015) on 97 patients, analysing all recorded complications. The classification of these complications as AEs was based on a consensus among 13 health professionals and 2 patients. The severity, frequency and detection of each AE were evaluated with the FMEA system. Results: We analysed 1,303 HD treatments in 97 patients. A total of 383 complications (1 every 3.4 HD treatments) were recorded. Approximately 87.9% of them were deemed AEs and 23.7% were complications related to patients' underlying pathology. There was one AE every 3.8 HD treatments. Hypertension and hypotension were the most frequent AEs (42.7% and 27.5% of all AEs recorded, respectively). Vascular-access-related AEs occurred once every 68.5 HD treatments. A total of 21 errors (1 every 62 HD treatments), mainly related to the HD technique and to the administration of prescribed medication, were registered. The highest risk priority number, according to the FMEA, corresponded to errors related to patient body weight; dysfunction/rupture of the catheter; and needle extravasation
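The risk priority number (RPN) used to rank failure modes in an FMEA is simply the product of severity, occurrence and detection scores. The scores below are invented for illustration and are not the study's ratings:

```python
# Risk priority number (RPN) in an FMEA: severity x occurrence x detection,
# each typically scored 1-10. Higher RPN means higher priority for action.
failure_modes = {
    "patient body weight error":    (7, 4, 6),   # hypothetical scores
    "catheter dysfunction/rupture": (8, 3, 5),
    "needle extravasation":         (8, 3, 4),
}
rpn = {mode: s * o * d for mode, (s, o, d) in failure_modes.items()}
ranked = sorted(rpn, key=rpn.get, reverse=True)
print(ranked[0], rpn[ranked[0]])          # highest-priority failure mode
```

Ranking by RPN rather than by raw frequency is what lets a unit prioritize rare but hard-to-detect, high-severity errors.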
Baltussen, Rob; Naus, Jeroen; Limburg, Hans
2009-02-01
To estimate the costs and effects of alternative strategies for annual screening of school children for refractive errors, and the provision of spectacles, in different WHO sub-regions in Africa, Asia, America and Europe. We developed a mathematical simulation model for uncorrected refractive error, using prevailing prevalence and incidence rates. Remission rates reflected the absence or presence of screening strategies for school children. All screening strategies were implemented for a period of 10 years and were compared to a situation where no screening was implemented. Outcome measures were disability-adjusted life years (DALYs), costs of screening, provision of spectacles and follow-up for six different screening strategies, and cost-effectiveness in international dollars per DALY averted. Epidemiological information was derived from the burden of disease study of the World Health Organization (WHO), and cost data were derived from large WHO databases. Both univariate and multivariate sensitivity analyses were performed on key parameters to determine the robustness of the model results. In all regions, screening of 5-15-year-old children yields the most health effects, followed by screening of 11-15 year olds, 5-10 year olds, and screening of 8- and 13-year-olds. Screening of broad age intervals is always more costly than screening of single-age intervals, and there are important economies of scale for simultaneous screening of both 5-10 and 11-15-year-old children. In all regions, screening of 11-15 year olds is the most cost-effective intervention, with the cost per DALY averted ranging from I$67 in the Asian sub-region to I$458 in the European sub-region. The incremental cost per DALY averted of screening 5-15 year olds ranges between I$111 in the Asian sub-region and I$672 in the European sub-region. Considering the conservative study assumptions and the robustness of study conclusions towards changes in these
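The distinction between the average cost-effectiveness ratio and the incremental ratio quoted above is just two divisions. All numbers in this sketch are hypothetical, not the study's estimates:

```python
# Average and incremental cost-effectiveness ratios (I$ per DALY averted).
# Costs and DALYs below are invented for illustration only.
costs = {"screen_11_15": 1.2e6, "screen_5_15": 2.6e6}     # 10-year programme cost
dalys = {"screen_11_15": 9000.0, "screen_5_15": 12000.0}  # DALYs averted

cer = costs["screen_11_15"] / dalys["screen_11_15"]       # average ratio
# Incremental ratio of widening the age range from 11-15 to 5-15:
icer = (costs["screen_5_15"] - costs["screen_11_15"]) / (
    dalys["screen_5_15"] - dalys["screen_11_15"])
print(round(cer), round(icer))
```

The incremental ratio is always the relevant one when deciding whether to broaden an already-funded strategy, which is why the abstract reports both.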
Liu, Yan; Salvendy, Gavriel
2009-05-01
This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools have been discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It has been shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanatory contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussion is restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement errors, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experiment results, which the authors believe is critical to research progress in theory development and cumulative knowledge in the ergonomics field.
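The attenuation of correlation by measurement error is easy to demonstrate by simulation. In the classical model the observed correlation is roughly the true correlation times the square root of the product of the two reliabilities; the true correlation (0.8) and reliability (0.6) below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
true_x = rng.normal(size=n)
true_y = 0.8 * true_x + np.sqrt(1 - 0.8**2) * rng.normal(size=n)  # true r = 0.8

rel = 0.6                                    # reliability of each scale
noise_sd = np.sqrt((1 - rel) / rel)          # makes var(true)/var(observed) = rel
obs_x = true_x + noise_sd * rng.normal(size=n)
obs_y = true_y + noise_sd * rng.normal(size=n)

r_true = np.corrcoef(true_x, true_y)[0, 1]
r_obs = np.corrcoef(obs_x, obs_y)[0, 1]
# Classical attenuation: r_obs ~ r_true * sqrt(rel_x * rel_y) = 0.8 * 0.6
print(r_true, r_obs)
```

With reliability 0.6 on both scales, a true correlation of 0.8 is observed as roughly 0.48, which is the kind of distortion the paper warns can lead investigators to erroneous conclusions.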
Energy Technology Data Exchange (ETDEWEB)
Liu, Dong' an; Peng, Linfa; Lai, Xinmin [State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai 200240 (China)
2010-07-01
In practice, assembly error of the bipolar plate (BPP) in a PEM fuel cell stack is unavoidable with the current assembly process; however, its effect on the performance of the stack has not yet been reported. In this study, a methodology based on a finite element analysis (FEA) model, least squares-support vector machine (LS-SVM) simulation, and statistical analysis is developed to investigate the effect of BPP assembly error on the pressure distribution and stress failure of the membrane electrode assembly (MEA). First, a parameterized FEA model of a metallic BPP/MEA assembly is established. Then, the LS-SVM simulation process is conducted based on the FEA model, and datasets for the pressure distribution and Von Mises stress of the MEA are obtained for each assembly error. Finally, the effect of the assembly error is obtained by applying statistical analysis to the LS-SVM results. A regression equation between the stress failure and the assembly error is also built, and the allowed maximum assembly error is calculated from the equation. The methodology in this study helps in understanding the mechanism of the assembly error and can be applied to guide the assembly process for PEM fuel cell stacks. (author)
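The final step above, inverting a fitted regression between assembly error and stress to obtain the allowed maximum error, can be sketched as follows. The data points and the 15 MPa failure threshold are hypothetical, not taken from the study.

```python
# Sketch: fit a linear relation between BPP assembly error and peak MEA
# stress, then invert it for the allowed maximum assembly error.
# All values are hypothetical illustrations.
import numpy as np

error_mm = np.array([0.0, 0.1, 0.2, 0.3, 0.4])          # BPP assembly error
peak_stress = np.array([10.0, 12.0, 14.0, 16.0, 18.0])  # peak MEA stress, MPa

slope, intercept = np.polyfit(error_mm, peak_stress, 1)  # linear regression
stress_limit = 15.0                                      # assumed threshold
max_allowed_error = (stress_limit - intercept) / slope   # invert the fit
```

With these placeholder numbers the fit is exact (stress = 10 + 20 * error), so the allowed maximum error comes out at 0.25 mm.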
International Nuclear Information System (INIS)
Lausch, A; Lee, T Y; Wong, E; Jensen, N K G; Chen, J; Lock, M
2014-01-01
Purpose: To investigate the effects of registration error (RE) on parametric response map (PRM) analysis of pre- and post-radiotherapy (RT) functional images. Methods: Arterial blood flow (ABF) maps were generated from the CT-perfusion scans of 5 patients with hepatocellular carcinoma. ABF values within each patient map were modified to produce seven new ABF maps simulating 7 distinct post-RT functional change scenarios. Ground truth PRMs were generated for each patient by comparing the simulated and original ABF maps. Each simulated ABF map was then deformed by different magnitudes of realistic respiratory motion in order to simulate RE. PRMs were generated for each of the deformed maps and then compared to the ground truth PRMs to produce estimates of RE-induced misclassification. Main findings: The percentage of voxels misclassified as decreasing, no change, and increasing increased with RE. For all patients, increasing RE was observed to increase the number of high post-RT ABF voxels associated with low pre-RT ABF voxels and vice versa. 3 mm of average tumour RE resulted in 18-45% tumour voxel misclassification rates. Conclusions: RE-induced misclassification poses challenges for PRM analysis in the liver, where registration accuracy tends to be lower. Quantitative understanding of the sensitivity of the PRM method to registration error is required if PRMs are to be used to guide radiation therapy dose painting techniques.
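A minimal sketch of the PRM workflow described above: classify each voxel's pre-to-post change, then compare against ground truth after a simulated registration error. The +/-20 change threshold, the toy ABF values, and the one-voxel shift standing in for registration error are all assumptions for illustration.

```python
# PRM-style voxel classification with a simulated registration error.
# Threshold and data are illustrative assumptions, not study values.
import numpy as np

def prm_classify(pre, post, thresh):
    """Label each voxel: +1 increase, -1 decrease, 0 no change."""
    diff = post - pre
    return np.sign(diff) * (np.abs(diff) > thresh)

pre = np.array([100.0, 100.0, 100.0, 100.0])   # pre-RT ABF (illustrative)
post = np.array([130.0, 70.0, 105.0, 100.0])   # simulated post-RT ABF

truth = prm_classify(pre, post, thresh=20.0)   # ground-truth PRM labels

# Registration error simulated as a one-voxel shift of the post map.
noisy = prm_classify(pre, np.roll(post, 1), thresh=20.0)
misclassified = np.mean(noisy != truth)        # fraction of voxels mislabeled
```

Even this tiny example shows the failure mode the abstract describes: misregistration pairs post-RT voxels with the wrong pre-RT voxels, flipping increase/decrease labels without any real functional change.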
Griesbach, G S; Hu, D; Amsel, A
1998-12-01
The effects of dizocilpine maleate (MK-801) on vicarious trial-and-error (VTE), and on simultaneous olfactory discrimination learning and its reversal, were observed in weanling rats. The term VTE was used by Tolman (The determiners of behavior at a choice point. Psychol. Rev. 1938;46:318-336), who described it as conflict-like behavior at a choice-point in simultaneous discrimination learning. It takes the form of head movements from one stimulus to the other, and has recently been proposed by Amsel (Hippocampal function in the rat: cognitive mapping or vicarious trial-and-error? Hippocampus, 1993;3:251-256) as related to hippocampal, nonspatial function during this learning. Weanling male rats received systemic MK-801 either 30 min before the onset of olfactory discrimination training and its reversal, or only before its reversal. The MK-801-treated animals needed significantly more sessions to acquire the discrimination and showed significantly fewer VTEs in the acquisition phase of learning. Impaired reversal learning was shown only when MK-801 was administered during the reversal-learning phase, itself, and not when it was administered throughout both phases.
Goldsmith, K A; Chalder, T; White, P D; Sharpe, M; Pickles, A
2018-06-01
Clinical trials are expensive and time-consuming and so should also be used to study how treatments work, allowing for the evaluation of theoretical treatment models and refinement and improvement of treatments. These treatment processes can be studied using mediation analysis. Randomised treatment makes some of the assumptions of mediation models plausible, but the mediator-outcome relationship could remain subject to bias. In addition, mediation is assumed to be a temporally ordered longitudinal process, but estimation in most mediation studies to date has been cross-sectional and unable to explore this assumption. This study used longitudinal structural equation modelling of mediator and outcome measurements from the PACE trial of rehabilitative treatments for chronic fatigue syndrome (ISRCTN 54285094) to address these issues. In particular, autoregressive and simplex models were used to study measurement error in the mediator, different time lags in the mediator-outcome relationship, unmeasured confounding of the mediator and outcome, and the assumption of a constant mediator-outcome relationship over time. Results showed that allowing for measurement error and unmeasured confounding were important. Contemporaneous rather than lagged mediator-outcome effects were more consistent with the data, possibly due to the wide spacing of measurements. Assuming a constant mediator-outcome relationship over time increased precision.
The Effect of Direct and Indirect Corrective Feedback on Iranian EFL Learners' Spelling Errors
Ghandi, Maryam; Maghsoudi, Mojtaba
2014-01-01
The aim of the current study was to investigate the impact of indirect corrective feedback on promoting Iranian high school students' spelling accuracy in English (as a foreign language). It compared the effect of direct feedback with indirect feedback on students' written work dictated by their teacher from Chicken Soup for the Mother and…
The Homophone Effect in Written French: The Case of Verb-Noun Inflection Errors.
Largy, Pierre; Fayol, Michel
1996-01-01
Focuses on understanding the mechanisms that underlie the production of homophone confusions in writing. The article overviews five experiments demonstrating that the homophone effect can be experimentally induced in French adults. Findings are interpreted in the framework of an activation model. (45 references) (Author/CK)
International Nuclear Information System (INIS)
Vidmar, T.; Korun, M.
2004-01-01
When cylindrical samples placed coaxially with the detector are measured on a gamma-ray spectrometer, the position of the sample very often deviates from the ideal one, with the axes of the sample and the detector less than perfectly aligned. If a calibrated source is used prior to the measurement and is presumed to have been positioned correctly, one might conclude that the misalignment of the measured sample should result in an uncertainty of the reported nuclide activity, since the efficiencies of the sample and the calibrated source effectively differ due to the difference in placement. The efficiency of a displaced cylindrical sample, however, is always lower than that of a perfectly aligned sample. The net effect of misalignment can therefore be not only an increase in the uncertainty of the activity, but also a systematic error in its evaluation. Since the Guide to the Expression of Uncertainty in Measurement requires that all such systematic effects be corrected for, we have developed a method to assess the change in efficiency resulting from misalignment and to introduce the required correction. The calculation of this correction requires only knowledge of basic sample and detector data. The uncertainty of the reported activity can then also be assessed and is influenced by the uncertainty of the efficiency evaluated around its new, corrected value. An appropriate expression for this uncertainty has been derived.
2008-01-01
One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177
Thermodynamics of Error Correction
Directory of Open Access Journals (Sweden)
Pablo Sartori
2015-12-01
Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
DEFF Research Database (Denmark)
Puthumana, Govindan; Bissacco, Giuliano; Hansen, Hans Nørgaard
2017-01-01
In micro-EDM milling, real-time electrode wear compensation based on tool wear per discharge (TWD) estimation permits the direct control of the position of the tool electrode frontal surface. However, TWD estimation errors will cause errors on the tool electrode axial depth. A simulation tool...... is developed to determine the effects of errors in the initial estimation of TWD and its propagation effect with respect to the error on the depth of the cavity generated. Simulations were applied to micro-EDM milling of a slot of 5000 μm length and 50 μm depth and validated through slot milling experiments...... performed on a micro-EDM machine. Simulations and experimental results were found to be in good agreement, showing the effect of error amplification through the cavity depth....
DEFF Research Database (Denmark)
Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng
2017-01-01
and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving the external gravity data cannot be used. We propose a semi-parametric downward continuation method...... in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau; i.e., without the use of the external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA......) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using the synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with the synthetic errors. The estimated systematic...
Study of the Gamma Radiation Effect on Tannins Samples
International Nuclear Information System (INIS)
Coto Hernandez, I.; Barroso Solares, S.; Martinez Luzardo, F.; Guzman Martinez, F.; Diaz Rizo, O.; Arado Lopez, J.O.; Santana Romero, J.L.; Baeza Fonte, A.; Rapado Paneque, M.; Garcia, F.
2011-01-01
Vegetable tannins are polyphenolic substances of different chemical mixtures, corresponding to the characteristics of their polyphenol groups. Considering their composition, different types of flavonoids can be found, mainly in the so-called condensed tannins. In general, many applications have been explored, including medical ones, due to their proven antiviral and antibacterial biological activity, among other characteristics derived from their reactions with metal ions and the amino acids of protein components. It is therefore promising to examine the effects of gamma radiation on the structure of tannins, looking for possible modification of their biological activity. To this end, tannin samples were irradiated at different doses (maximum dose 35 kGy) using a Cobalt-60 irradiator. Scanning electron microscopy (SEM) was used to characterize the morphology and composition of the samples. The changes were analyzed using Fourier transform infrared spectroscopy (FT-IR) and high-performance liquid chromatography (HPLC). Finally, we discuss the implications of the results for doses above 5 kGy. (Author)
Effective traffic features selection algorithm for cyber-attacks samples
Li, Yihong; Liu, Fangzheng; Du, Zhenyu
2018-05-01
By studying defense schemes against network attacks, this paper proposes an effective traffic-feature selection algorithm based on k-means++ clustering to deal with the high dimensionality of the traffic features extracted from cyber-attack samples. First, the algorithm divides the original feature set into an attack traffic feature set and a background traffic feature set by clustering. Then, it calculates the variation in clustering performance after removing a certain feature. Finally, the distinctiveness of each feature vector is evaluated according to this variation; the effective feature vectors are those whose distinctiveness exceeds a set threshold. The purpose of this paper is to select the effective features from the extracted original feature set, reducing the dimensionality of the features and thus the space-time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has advantages over other selection algorithms.
SU-E-T-631: Preliminary Results for Analytical Investigation Into Effects of ArcCHECK Setup Errors
International Nuclear Information System (INIS)
Kar, S; Tien, C
2015-01-01
Purpose: As three-dimensional diode arrays increase in popularity for patient-specific quality assurance for intensity-modulated radiation therapy (IMRT), it is important to evaluate an array's susceptibility to setup errors. The ArcCHECK phantom is set up by manually aligning its outside marks with the linear accelerator's lasers and light-field. If done correctly, this aligns the ArcCHECK cylinder's central axis (CAX) with the linear accelerator's axis of rotation. However, this process is prone to error. This project has developed an analytical expression including a perturbation factor to quantify the effect of shifts. Methods: The ArcCHECK is set up by aligning its machine marks with either the sagittal room lasers or the light-field of the linear accelerator at gantry zero (IEC). The ArcCHECK has sixty-six evenly-spaced SunPoint diodes aligned radially in a ring 14.4 cm from the CAX. The detector response function (DRF) was measured and combined with an inverse-square correction to develop an analytical expression for output. The output was calculated using shifts of 0 (perfect alignment), ±1, ±2, and ±5 mm. The effect on a series of simple inputs was determined: unity, 1-D ramp, steps, and hat-function, representing a uniform field, a wedge, evenly-spaced modulation, and a single sharp modulation, respectively. Results: Geometric expressions were developed with the perturbation factor included to represent shifts. The DRF was modeled using sixth-degree polynomials with correlation coefficient 0.9997. The output was calculated for the simple inputs (unity, 1-D ramp, steps, and hat-function) with perturbation factors of 0, ±1, ±2, and ±5 mm. Discrepancies were observed, but large fluctuations were somewhat mitigated by aliasing arising from discrete diode placement. Conclusion: An analytical expression with perturbation factors was developed to estimate the impact of setup errors on an ArcCHECK phantom. Presently, this has been applied to
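The inverse-square ingredient of the analytical model can be sketched for a lateral shift of the diode ring. The point-source geometry and the 100 cm source-axis distance are simplifying assumptions made here; the abstract's full model also includes the measured detector response function, which is omitted.

```python
# Sketch: inverse-square perturbation of diode readings when the
# ArcCHECK ring is laterally shifted. Point source at an assumed
# 100 cm source-axis distance; DRF omitted.
import numpy as np

SAD = 100.0          # cm, source-axis distance (assumed)
RING_RADIUS = 14.4   # cm, diode ring radius (from the abstract)
N_DIODES = 66

angles = 2 * np.pi * np.arange(N_DIODES) / N_DIODES
toward_source = RING_RADIUS * np.sin(angles)   # diode offset along the beam

def source_to_diode(lateral_shift_cm):
    """Source-to-diode distance for each diode after a lateral shift."""
    lateral = RING_RADIUS * np.cos(angles) + lateral_shift_cm
    return np.hypot(SAD - toward_source, lateral)

# Inverse-square perturbation of each diode reading for a 5 mm shift.
perturbation = (source_to_diode(0.0) / source_to_diode(0.5)) ** 2
```

In this simplified geometry a 5 mm shift perturbs individual diode readings by well under 1%, consistent with the paper's point that setup-error effects are subtle and need an explicit perturbation term to quantify.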
Energy Technology Data Exchange (ETDEWEB)
Wikstroem-Blomqvist, Evalena; Franke, Jolanta; Johansson, Ingvar
2007-12-15
The aim of the project is to evaluate the possibilities of simplifying the methods used during sampling and laboratory preparation of heterogeneous waste materials. Existing methods for solid fuel materials are summarized and evaluated in the project. As a result, two new simplified methods are suggested, one for field sampling and one for laboratory preparation work. One large challenge in waste sampling is to achieve a representative sample, due to the considerable heterogeneity of the material. How do you perform a sampling campaign that gives representative results without excessive cost? The single largest source of error is the sampling procedure, accounting for about 80% of the total error, while the sample reduction and laboratory work represent only about 15% and 5%, respectively. Thus, to minimize the total error it is very important that the sampling is well planned in a testing program. In the end, a very small analytical sample (1 gram) should reflect a large, heterogeneous sample population of thousands of tons. In this project two sampling campaigns, in the fall of 2006 and early winter 2007, were conducted at the Renova waste power plant in Gothenburg, Sweden. The first campaign consisted of three different sample sizes with different numbers of sub-samples: one reference sample (50 tons and 48 sub-samples), two samples of 16 tons and 8 sub-samples, and finally two 4-ton samples of 2 sub-samples each. During the second sampling campaign, four additional 4-ton samples were taken to repeat and thus evaluate the simplified sampling method. This project concludes that the simplified sampling method, consisting of only two sub-samples and a total sample volume of 4 tons, gives results of as good quality and precision as the more complicated methods tested. Moreover, the two sampling campaigns generated equivalent results. The preparation methods used in the laboratory can as well be
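The 80/15/5 error budget above can be illustrated by combining independent error components in quadrature. Treating the percentages as shares of the error variance, and assuming a 20% total relative standard deviation, are choices made here purely for illustration.

```python
# Illustration: independent error sources combine as root-sum-of-squares,
# so reducing the dominant (sampling) component pays off most.
# The 20% total and the variance-share interpretation are assumptions.
import math

total_rsd = 0.20   # assumed total relative standard deviation (20%)
shares = {"sampling": 0.80, "reduction": 0.15, "analysis": 0.05}

# Component relative standard deviations implied by the variance shares.
components = {k: total_rsd * math.sqrt(v) for k, v in shares.items()}

# Halving the sampling error shrinks the total far more than removing
# the analysis error entirely:
halved_sampling = math.sqrt((components["sampling"] / 2) ** 2
                            + components["reduction"] ** 2
                            + components["analysis"] ** 2)
no_analysis = math.sqrt(components["sampling"] ** 2
                        + components["reduction"] ** 2)
```

Under these assumptions, halving the sampling error cuts the total from 20% to about 12.6%, while eliminating the analysis error entirely only brings it to about 19.5%, which is why the project focuses its effort on the sampling step.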
Sotiropoulou, Rafaella-Eleni P.; Nenes, Athanasios; Adams, Peter J.; Seinfeld, John H.
2007-01-01
In situ observations of aerosol and cloud condensation nuclei (CCN) and the GISS GCM Model II' with an online aerosol simulation and explicit aerosol-cloud interactions are used to quantify the uncertainty in radiative forcing and autoconversion rate arising from the application of Köhler theory. Simulations suggest that application of Köhler theory introduces a 10-20% uncertainty in global average indirect forcing and a 2-11% uncertainty in autoconversion. Regionally, the uncertainty in indirect forcing ranges between 10 and 20%, and between 5 and 50% for autoconversion. These results are insensitive to the range of updraft velocity and water vapor uptake coefficient considered. This study suggests that Köhler theory (as implemented in climate models) is not a significant source of uncertainty for aerosol indirect forcing but can be substantial for assessments of aerosol effects on the hydrological cycle in climatically sensitive regions of the globe. This implies that improvements in the representation of GCM subgrid processes and aerosol size distribution will mostly benefit indirect forcing assessments. Predictions of autoconversion, by nature, will be subject to considerable uncertainty; its reduction may require explicit representation of size-resolved aerosol composition and mixing state.
Competition increases binding errors in visual working memory.
Emrich, Stephen M; Ferber, Susanne
2012-04-20
When faced with maintaining multiple objects in visual working memory, item information must be bound to the correct object in order to be correctly recalled. Sometimes, however, binding errors occur, and participants report the feature (e.g., color) of an unprobed, non-target item. In the present study, we examine whether the configuration of sample stimuli affects the proportion of these binding errors. The results demonstrate that participants mistakenly report the identity of the unprobed item (i.e., they make a non-target response) when sample items are presented close together in space, suggesting that binding errors can increase independent of increases in memory load. Moreover, the proportion of these non-target responses is linearly related to the distance between sample items, suggesting that these errors are spatially specific. Finally, presenting sample items sequentially decreases non-target responses, suggesting that reducing competition between sample stimuli reduces the number of binding errors. Importantly, these effects all occurred without increases in the amount of error in the memory representation. These results suggest that competition during encoding can account for some of the binding errors made during VWM recall.
Holm-Alwmark, S.; Ferrière, L.; Alwmark, C.; Poelchau, M. H.
2018-01-01
Planar deformation features (PDFs) in quartz are the most widely used indicator of shock metamorphism in terrestrial rocks. They can also be used for estimating average shock pressures that quartz-bearing rocks have been subjected to. Here we report on a number of observations and problems that we have encountered when performing universal-stage measurements and crystallographic indexing of PDF orientations in quartz. These include a comparison between manual and automated methods of indexing PDFs, an evaluation of the new stereographic projection template, and observations regarding the PDF statistics related to the c-axis position and rhombohedral plane symmetry. We further discuss the implications that our findings have for shock barometry studies. Our study shows that the currently used stereographic projection template for indexing PDFs in quartz might induce an overestimation of rhombohedral planes with low Miller-Bravais indices. We suggest, based on a comparison of different shock barometry methods, that a unified method of assigning shock pressures to samples based on PDFs in quartz is necessary to allow comparison of data sets. This method needs to take into account not only the average number of PDF sets/grain but also the number of high Miller-Bravais index planes, both of which are important factors according to our study. Finally, we present a suggestion for such a method (which is valid for nonporous quartz-bearing rock types), which consists of assigning quartz grains into types (A-E) based on the PDF orientation pattern, and then calculation of a mean shock pressure for each sample.
Effectiveness of laser sources for contactless sampling of explosives
Akmalov, Artem E.; Chistyakov, Alexander A.; Kotkovskii, Gennadii E.
2016-05-01
A mass-spectrometric study of photo processes initiated by ultraviolet (UV) laser radiation in explosives adsorbed on metal and dielectric substrates has been performed. A calibrated quadrupole mass spectrometer was used to determine the activation energy of desorption and the quantity of explosives desorbed by laser radiation. A special vacuum-optical module was elaborated and integrated into a vacuum mass-spectrometric system to focus the laser beam on a sample. It has been shown that the action of nanosecond laser radiation at q = 10⁷-10⁸ W/cm², λ = 266 nm on adsorbed layers of trinitrotoluene (TNT) and pentaerythritol tetranitrate (PETN) molecules leads not only to effective desorption, but also to the non-equilibrium dissociation of molecules with the formation of nitrogen oxide NO. The cyclotrimethylenetrinitramine (RDX) dissociation products are observed only at high laser intensities (q > 10⁹ W/cm²), indicating the thermal nature of dissociation, whereas desorption of RDX is observed even at q > 10⁷ W/cm² from all substrates. Desorption is not observed for cyclotetramethylenetetranitramine (HMX) under single-pulse action: only the dissociation products NO and NO₂ are registered, whereas irradiation at 10 Hz is quite effective for HMX desorption. The results clearly demonstrate the high efficiency of nanosecond laser radiation with λ = 266 nm, q ~ 10⁷-10⁸ W/cm², E_pulse = 1 mJ for desorption of explosive molecules from various surfaces.
Effective sampling strategy to detect food and feed contamination
Bouzembrak, Yamine; Fels, van der Ine
2018-01-01
Sampling plans for food safety hazards are used to determine whether or not a lot of food is contaminated (with microbiological or chemical hazards). One of the components of sampling plans is the sampling strategy. The aim of this study was to compare the performance of three
Effects of Blood Sample Collection Pre- and Post- Slaughter, Edta ...
African Journals Online (AJOL)
The samples were immediately subjected to Wet mount (WM), Haematocrit centrifugation test (HCT) and Thin smear (TS) tests. The results revealed that, of the 100 samples examined, 19 (19%) were positive for the presence of Microfilaria spp while 6 (6%) yielded Trypanosoma spp. Of the 19 samples detected having ...
International Nuclear Information System (INIS)
Wang, B; Pan, B; Tao, R; Lubineau, G
2017-01-01
The use of digital volume correlation (DVC) in combination with laboratory x-ray computed tomography (CT) for full-field internal 3D deformation measurement of opaque materials has flourished in recent years. During x-ray tomographic imaging, the heat generated by the x-ray tube changes the imaging geometry of the x-ray scanner and introduces noticeable errors in DVC measurements. In this work, to provide practical guidance for high-accuracy DVC measurement, the errors in displacements and strains measured by DVC due to the self-heating effect of a commercially available x-ray scanner were experimentally investigated. The errors were characterized by performing simple rescan tests with different scan durations. The results indicate that the maximum strain errors associated with the self-heating of the x-ray scanner exceed 400 µε. Possible approaches for minimizing or correcting these displacement and strain errors are discussed. Finally, a series of translation and uniaxial compression tests were performed, in which strain errors were detected and then removed using a pre-established artificial dilatational strain-time curve. Experimental results demonstrate the efficacy and accuracy of the proposed strain error correction approach. (paper)
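The correction approach in the final sentences can be sketched as subtracting a pre-established apparent strain-time curve from measured strains. The drift data and the quadratic model below are hypothetical illustrations, not values from the paper.

```python
# Sketch: remove self-heating-induced apparent strain using a drift
# curve fitted to rescan tests. Data and model are hypothetical.
import numpy as np

# Apparent strain vs. scan time from rescan tests of a nominally
# stationary sample (illustrative values, in microstrain).
drift_time = np.array([0.0, 10.0, 20.0, 30.0, 40.0])        # minutes
drift_strain = np.array([0.0, 150.0, 260.0, 340.0, 400.0])  # microstrain

coeffs = np.polyfit(drift_time, drift_strain, 2)  # fitted drift curve

def correct_strain(measured_ue, scan_time_min):
    """Subtract the apparent (self-heating) strain at a given scan time."""
    return measured_ue - np.polyval(coeffs, scan_time_min)
```

A measurement of 500 microstrain taken 20 minutes into a scan would, under these placeholder numbers, be corrected down to about 240 microstrain; the real correction curve would come from the scanner-specific rescan tests described above.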
Directory of Open Access Journals (Sweden)
Rosa M. Manchón
2010-06-01
Full Text Available Framed in a cognitively oriented strand of research on corrective feedback (CF) in SLA, the controlled three-stage (composition/comparison-noticing/revision) study reported in this paper investigated the effects of two forms of direct CF (error correction and reformulation) on noticing and uptake, as evidenced in the written output produced by a group of 8 secondary school EFL learners. Noticing was operationalized as the amount of corrections noticed in the comparison stage of the writing task, whereas uptake was operationally defined as the type and amount of accurate revisions incorporated in the participants' revised versions of their original texts. Results support previous research findings on the positive effects of written CF on noticing and uptake, with a clear advantage of error correction over reformulation as far as uptake was concerned. Data also point to the existence of individual differences in the way EFL learners process and make use of CF in their writing. These findings are discussed from the perspective of the light they shed on the learning potential of CF in instructed SLA, and suggestions for future research are put forward.
Adams, Megan A; Elmunzer, B Joseph; Scheiman, James M
2014-04-01
In 2001, the University of Michigan Health System (UMHS) implemented a novel medical error disclosure program. This study analyzes the effect of this program on gastroenterology (GI)-related claims and costs. This was a review of claims in the UMHS Risk Management Database (1990-2010), naming a gastroenterologist. Claims were classified according to pre-determined categories. Claims data, including incident date, date of resolution, and total liability dollars, were reviewed. Mean total liability incurred per claim in the pre- and post-implementation eras was compared. Patient encounter data from the Division of Gastroenterology was also reviewed in order to benchmark claims data with changes in clinical volume. There were 238,911 GI encounters in the pre-implementation era and 411,944 in the post-implementation era. A total of 66 encounters resulted in claims: 38 in the pre-implementation era and 28 in the post-implementation era. Of the total number of claims, 15.2% alleged delay in diagnosis/misdiagnosis, 42.4% related to a procedure, and 42.4% involved improper management, treatment, or monitoring. The reduction in the proportion of encounters resulting in claims was statistically significant (P=0.001), as was the reduction in time to claim resolution (1,000 vs. 460 days) (P<0.0001). There was also a reduction in the mean total liability per claim ($167,309 pre vs. $81,107 post; 95% confidence intervals: $33,682-$300,936 pre vs. $1,688-$160,527 post). Implementation of a novel medical error disclosure program, promoting transparency and quality improvement, not only decreased the number of GI-related claims per patient encounter, but also dramatically shortened the time to claim resolution.
Directory of Open Access Journals (Sweden)
David P Piñero
2015-01-01
Full Text Available Purpose: To evaluate the predictability of the refractive correction achieved with a positional accommodating intraocular lens (IOL) and to develop a potential optimization of it by minimizing the error associated with the keratometric estimation of the corneal power and by developing a predictive formula for the effective lens position (ELP). Materials and Methods: Clinical data from 25 eyes of 14 patients (age range, 52-77 years) undergoing cataract surgery with implantation of the accommodating IOL Crystalens HD (Bausch and Lomb) were retrospectively reviewed. In all cases, an adjusted IOL power (P_IOLadj) based on Gaussian optics and accounting for the residual refractive error was calculated using a variable keratometric index value (n_kadj) for corneal power estimation, with and without an estimation algorithm for ELP obtained by multiple regression analysis (ELP_adj). P_IOLadj was compared to the IOL power actually implanted (P_IOLReal, calculated with the SRK/T formula) and also to the values estimated by the Haigis, Hoffer Q, and Holladay I formulas. Results: No statistically significant differences were found between P_IOLReal and P_IOLadj when ELP_adj was used (P = 0.10), with a range of agreement between calculations of 1.23 D. In contrast, P_IOLReal was significantly higher when compared to P_IOLadj without using ELP_adj and also compared to the values estimated by the other formulas. Conclusions: Predictable refractive outcomes can be obtained with the accommodating IOL Crystalens HD using a variable keratometric index for corneal power estimation and by estimating ELP with an algorithm dependent on anatomical factors and age.
DEFF Research Database (Denmark)
Skogstrand, K.; Thorsen, P.; Vogel, I.
2008-01-01
The interest in monitoring inflammation by immunoassay determination of blood inflammatory markers calls for information on the stability of these markers in relation to the handling of blood samples. The increasing use of stored biobank samples for such ventures, which may have been collected and stored for other purposes, justifies the study hereof. Blood samples were stored for 0, 4, 24, and 48 h at 4 °C, room temperature (RT), and at 35 °C, respectively, before they were separated into serum or plasma and frozen. Dried blood spot samples (DBSS) were stored for 0, 1, 2, 3, 7 … of whole blood samples at low temperatures and rapid isolation of plasma and serum. Effects of different handling procedures for all markers studied are given. DBSS proved to be a robust and convenient way to handle samples for immunoassay analysis of inflammatory markers in whole blood.
Directory of Open Access Journals (Sweden)
Michael B.C. Khoo
2013-11-01
Full Text Available The double sampling (DS) X̄ chart, one of the most widely-used charting methods, is superior for detecting small and moderate shifts in the process mean. In a right-skewed run length distribution, the median run length (MRL) provides a more credible representation of the central tendency than the average run length (ARL), as the mean is greater than the median. In this paper, therefore, MRL is used as the performance criterion instead of the traditional ARL. Generally, the performance of the DS X̄ chart is investigated under the assumption of known process parameters. In practice, these parameters are usually estimated from an in-control reference Phase-I dataset. Since the performance of the DS X̄ chart is significantly affected by estimation errors, we study the effects of parameter estimation on the MRL-based DS X̄ chart when the in-control average sample size is minimised. This study reveals that more than 80 samples are required for the MRL-based DS X̄ chart with estimated parameters to perform more favourably than the corresponding chart with known parameters.
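The skew that motivates using MRL instead of ARL is easy to see for the simplest case of a chart whose run length is geometric; this toy calculation uses a classical 3-sigma Shewhart signal probability for illustration, not the DS chart studied in the paper.

```python
import math

def run_length_stats(p):
    """For a chart that signals on each sample independently with
    probability p, the run length is geometric: ARL = 1/p, and the
    MRL is the smallest L with P(RL <= L) >= 0.5."""
    arl = 1.0 / p
    mrl = math.ceil(math.log(0.5) / math.log(1.0 - p))
    return arl, mrl

# In-control false-alarm probability of a classical 3-sigma chart
arl, mrl = run_length_stats(0.0027)
print(f"ARL = {arl:.1f}, MRL = {mrl}")  # mean well above median: right skew
```

The median sits well below the mean, which is why an ARL-based comparison can overstate how long a skewed-run-length chart typically runs before signalling.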
DEFF Research Database (Denmark)
Itoh, Kenji; Omata, N.; Andersen, Henning Boje
2009-01-01
The present paper reports on a human error taxonomy system developed for healthcare risk management and on its application to evaluating safety performance and reporting culture. The taxonomy comprises dimensions for classifying errors, for performance-shaping factors, and for the maturity...
Zhao, Chen-Guang; Tan, Jiu-Bin; Liu, Tao
2010-09-01
The mechanism by which a non-polarizing beam splitter (NPBS) with asymmetrical transfer coefficients causes rotation of the polarization direction is explained in principle, and the measurement nonlinear error caused by the NPBS is analyzed based on Jones matrix theory. Theoretical calculations show that the nonlinear error changes periodically, and that the error period and peak values increase with the deviation between the transmissivities of the p-polarization and s-polarization states. When the transmissivity of p-polarization is 53% and that of s-polarization is 48%, the maximum error reaches 2.7 nm. The imperfection of the NPBS is one of the main error sources in a simultaneous phase-shifting polarization interferometer, and its influence cannot be neglected in nanoscale ultra-precision measurement.
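The polarization rotation at the heart of this effect can be sketched with a diagonal Jones matrix whose entries are the amplitude transmissivities. The transmissivity values below come from the abstract; the lossless diagonal model and the 45° input state are simplifying assumptions for illustration, not the paper's full interferometer model.

```python
import numpy as np

# Intensity transmissivities from the abstract
Tp, Ts = 0.53, 0.48

# Diagonal Jones matrix of the NPBS in transmission (amplitude terms)
J = np.diag([np.sqrt(Tp), np.sqrt(Ts)])

# Incident beam linearly polarized at 45 degrees
E_in = np.array([1.0, 1.0]) / np.sqrt(2)
E_out = J @ E_in

# Polarization direction of the transmitted beam, and its rotation
theta_out = np.degrees(np.arctan2(E_out[1], E_out[0]))
rotation = 45.0 - theta_out
print(f"polarization rotated by {rotation:.2f} degrees")
```

Even this ~5% transmissivity asymmetry rotates the polarization direction by over a degree, which is how the periodic nonlinear error enters a phase-shifting interferometer.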
Counting OCR errors in typeset text
Sandberg, Jonathan S.
1995-03-01
Frequently, object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and they are not strictly comparable, due to larger variances in the counts than would be expected from the sampling variance. Naturally, since OCR accuracy is based on a ratio of the number of OCR errors over the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance, but omit critical implementation details (such as the existence of suspect markers in the OCR-generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods are significantly different. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
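The string-matching approach mentioned above can be sketched as a standard Levenshtein edit-distance dynamic program. The uniform unit weights used here are an assumption; as the abstract notes, published results often leave the actual weights (and the handling of suspect markers) unspecified, which is exactly why counts differ across studies.

```python
def levenshtein(ref: str, ocr: str) -> int:
    """Minimum number of insertions, deletions and substitutions
    (all with unit weight) turning `ref` into `ocr`."""
    prev = list(range(len(ocr) + 1))
    for i, rc in enumerate(ref, start=1):
        curr = [i]
        for j, oc in enumerate(ocr, start=1):
            cost = 0 if rc == oc else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# A classic OCR confusion: each 'm' misread as 'rn' costs two edits
print(levenshtein("systematic sampling", "systernatic sarnpling"))
```

Changing the substitution weights, or collapsing a suspect-marker character to a wildcard, changes the count this function returns, which illustrates why unreported implementation details make published error counts hard to reproduce.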
International Nuclear Information System (INIS)
Saengkul, C.; Sawangwong, P.; Pakkong, P.
2014-01-01
Contamination of 137Cs in sediment is a far more serious problem than in water because sediment is a main transport factor of 137Cs to the aquatic environment. Most of the 137Cs in water can be accumulated in sediment, which has a direct effect on benthos. This study focused on factors affecting the adsorption of 137Cs in marine sediment samples collected from four different estuary sites, to assess the transfer direction of 137Cs from water to sediment. The study method was to add 137Cs to seawater and mix it with the different sediment samples for 4 days. The results indicated that properties of the marine sediment (cation exchange capacity (CEC), organic matter, clay content, texture, type of clay mineral and size of soil particle) had effects on 137Cs adsorption. CEC and clay content correlated positively with the accumulation of 137Cs in the marine sediment samples. On the other hand, organic matter in sediment correlated negatively with the accumulation of 137Cs in the samples. The study of environmental effects (pH and potassium) found that 137Cs adsorption decreased when the concentration of potassium increased. The pH effect remains unclear in this study because the different pH levels tested (6, 7, 8.3) did not have an effect on 137Cs adsorption in the samples.
Effects of GPS sampling intensity on home range analyses
Jeffrey J. Kolodzinski; Lawrence V. Tannenbaum; David A. Osborn; Mark C. Conner; W. Mark Ford; Karl V. Miller
2010-01-01
The two most common methods for determining home ranges, minimum convex polygon (MCP) and kernel analyses, can be affected by sampling intensity. Despite prior research, it remains unclear how high-intensity sampling regimes affect home range estimations. We used datasets from 14 GPS-collared, white-tailed deer (Odocoileus virginianus) to describe...
The study of CD side to side error in line/space pattern caused by post-exposure bake effect
Huang, Jin; Guo, Eric; Ge, Haiming; Lu, Max; Wu, Yijun; Tian, Mingjing; Yan, Shichuan; Wang, Ran
2016-10-01
In semiconductor manufacturing, as the design rule has decreased, the ITRS roadmap requires increasingly tight critical dimension (CD) control. CD uniformity is one of the necessary parameters to assure good performance and reliable functionality of any integrated circuit (IC) [1] [2], and towards the advanced technology nodes it is a challenge to control CD uniformity well. The study of CD uniformity by tuning the post-exposure bake (PEB) and develop processes has made significant progress [3], but CD side-to-side errors in some line/space patterns are still found in practical application, and the error has approached or exceeded the uniformity tolerance. Detailed analysis showed that, even when several developer types were used, the CD side-to-side error had no significant relationship to the develop process. In addition, it is impossible to correct the CD side-to-side error by electron beam correction, as the error does not appear in all line/space pattern masks. In this paper the root cause of the CD side-to-side error is analyzed, and the PEB module process is optimized as a main factor for improvement of the CD side-to-side error.
Energy Technology Data Exchange (ETDEWEB)
Romero-Gomez, P.; Harding, S. F.; Richmond, M. C.
2017-01-01
Standards provide recommendations for best practices when installing current meters to measure fluid flow in closed conduits. A central guideline requires the velocity distribution to be regular and the flow steady. Because of the nature of the short converging intakes typical of low-head hydroturbines, these assumptions may be invalid if current meters are intended to be used to estimate discharge. Usual concerns are (1) the effects of the number of devices, (2) the sampling location and (3) the high turbulence caused by blockage from submersible traveling screens usually deployed for safe downstream fish passage. These three effects were examined in the present study by using 3D simulated flow fields in both steady-state and transient modes. In the process of describing an application at an existing hydroturbine intake at Ice Harbor Dam, the present work outlined the methods involved, which combined computational fluid dynamics, laboratory measurements in physical models of the hydroturbine, and current meter performance evaluations in experimental settings. The main conclusions in this specific application were that a steady-state flow field sufficed to determine the adequate number of meters and their location, and that both the transverse velocity and turbulence intensity had a small impact on estimate errors. However, while it may not be possible to extrapolate these findings to other field conditions and measuring devices, the study laid out a path to conduct similar assessments in other applications.
Architecture design for soft errors
Mukherjee, Shubu
2008-01-01
This book provides a comprehensive description of the architectural techniques to tackle the soft error problem. It covers the new methodologies for quantitative analysis of soft errors as well as novel, cost-effective architectural techniques to mitigate them. To provide readers with a better grasp of the broader problem definition and solution space, this book also delves into the physics of soft errors and reviews current circuit and software mitigation techniques.
Directory of Open Access Journals (Sweden)
Antonio Boldrini
2013-06-01
Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients, and the mass media echo the whole scenario. Methods: We reviewed recent papers in the PubMed database to focus on the evidence and management of errors in medical practice in general, and in neonatology in particular. We compared the results of the literature with our specific experience at the Nina Simulation Centre (Pisa, Italy). Results: In neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that produce fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk of errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research
Tridandapani, Srini; Ramamurthy, Senthil; Provenzale, James; Obuchowski, Nancy A; Evanoff, Michael G; Bhatti, Pamela
2014-08-01
To evaluate whether the presence of facial photographs obtained at the point-of-care of portable radiography leads to increased detection of wrong-patient errors. In this institutional review board-approved study, 166 radiograph-photograph combinations were obtained from 30 patients. Consecutive radiographs from the same patients resulted in 83 unique pairs (ie, a new radiograph and prior, comparison radiograph) for interpretation. To simulate wrong-patient errors, mismatched pairs were generated by pairing radiographs from different patients chosen randomly from the sample. Ninety radiologists each interpreted a unique randomly chosen set of 10 radiographic pairs, containing up to 10% mismatches (ie, error pairs). Radiologists were randomly assigned to interpret radiographs with or without photographs. The number of mismatches was identified, and interpretation times were recorded. Ninety radiologists with 21 ± 10 (mean ± standard deviation) years of experience were recruited to participate in this observer study. With the introduction of photographs, the proportion of errors detected increased from 31% (9 of 29) to 77% (23 of 30; P = .006). The odds ratio for detection of error with photographs to detection without photographs was 7.3 (95% confidence interval: 2.29-23.18). Observer qualifications, training, or practice in cardiothoracic radiology did not influence sensitivity for error detection. There is no significant difference in interpretation time for studies without photographs and those with photographs (60 ± 22 vs. 61 ± 25 seconds; P = .77). In this observer study, facial photographs obtained simultaneously with portable chest radiographs increased the identification of any wrong-patient errors, without substantial increase in interpretation time. This technique offers a potential means to increase patient safety through correct patient identification. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
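The odds ratio reported above can be reproduced from the stated counts (23 of 30 errors detected with photographs, 9 of 29 without). The sketch below uses a standard Wald log-scale confidence interval; this is an illustration of how such a ratio is computed, not necessarily the exact procedure the authors used.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] with a Wald
    (log-scale) 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Detected vs missed errors: with photographs (23 vs 7), without (9 vs 20)
or_, lo, hi = odds_ratio_ci(23, 7, 9, 20)
print(f"OR = {or_:.1f}, 95% CI: {lo:.2f}-{hi:.2f}")  # close to the reported 7.3 (2.29-23.18)
```

That the hand computation lands on the published interval is a useful sanity check on the 2x2 counts recoverable from the abstract.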
Directory of Open Access Journals (Sweden)
Samuel Arba-Mosquera
2012-01-01
Conclusions: The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than observed decentrations in the clinical settings. There is no single parameter that ‘alone’ minimizes the positioning error. It is the optimal combination of the several parameters that minimizes the error. The results of this analysis are important to understand the limitations of correcting very irregular ablation patterns.
Validity of a portable urine refractometer: the effects of sample freezing.
Sparks, S Andy; Close, Graeme L
2013-01-01
The use of portable urine osmometers is widespread, but no studies have assessed the validity of this measurement technique. Furthermore, it is unclear what effect freezing has on osmolality. One-hundred participants of mean (±SD) age 25.1 ± 7.6 years, height 1.77 ± 0.1 m and weight 77.1 ± 10.8 kg provided single urine samples that were analysed using freeze point depression (FPD) and refractometry (RI). Samples were then frozen at -80°C (n = 81) and thawed prior to re-analysis. Differences between methods and freezing were determined using Wilcoxon's signed rank test. Relationships between measurements were assessed using intraclass correlation coefficients (ICC) and typical error of estimate (TE). Osmolality was lower (P = 0.001) using RI (634.2 ± 339.8 mOsm · kgH2O(-1)) compared with FPD (656.7 ± 334.1 mOsm · kgH2O(-1)) but the TE was trivial (0.17). Freezing significantly reduced mean osmolality using FPD (656.7 ± 341.1 to 606.5 ± 333.4 mOsm · kgH2O(-1); P < 0.001), but samples were still highly related following freezing (ICC, r = 0.979, P < 0.001, CI = 0.993-0.997; TE = 0.15; and r=0.995, P < 0.001, CI = 0.967-0.986; TE = 0.07 for RI and FPD respectively). Despite mean differences between methods and as a result of freezing, such differences are physiologically trivial. Therefore, the use of RI appears to be a valid measurement tool to determine urine osmolality.
Modeling coherent errors in quantum error correction
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
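The qualitative difference between coherent and Pauli (stochastic) errors can be illustrated on a single unencoded qubit: N successive coherent over-rotations by ε compose into one rotation by Nε, so the flip probability sin²(Nε) grows quadratically at first, whereas the Pauli-twirled version flips independently with probability sin²(ε) per step and accumulates only linearly. This toy calculation illustrates the general point; it is not the repetition-code analysis of the paper.

```python
import math

def coherent_flip_prob(eps, n):
    # n coherent over-rotations by angle eps compose to one rotation by n*eps
    return math.sin(n * eps) ** 2

def pauli_flip_prob(eps, n):
    # Pauli twirl: flip with p = sin^2(eps) at each step, independently;
    # the qubit ends flipped iff an odd number of flips occurred
    p = math.sin(eps) ** 2
    return (1 - (1 - 2 * p) ** n) / 2

eps, n = 0.01, 50
print(coherent_flip_prob(eps, n))  # ~0.23: amplitudes add coherently
print(pauli_flip_prob(eps, n))     # ~0.005: probabilities add incoherently
```

After 50 steps the coherent channel is roughly 46 times more damaging than its Pauli approximation, which is the single-qubit analogue of the logical-failure acceleration described in the abstract.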
Hotspot and sampling analysis for effective maintenance and performance monitoring.
2017-05-01
In this project, we propose two sampling methods addressing how much and where agencies need to collect infrastructure condition data for accurate Level-of-Maintenance (LOM) estimation in a maintenance network with single type or multiple ty...
Garrison, Jane R; Bond, Rebecca; Gibbard, Emma; Johnson, Marcia K; Simons, Jon S
2017-02-01
Reality monitoring refers to processes involved in distinguishing internally generated information from information presented in the external world, an activity thought to be based, in part, on assessment of activated features such as the amount and type of cognitive operations and perceptual content. Impairment in reality monitoring has been implicated in symptoms of mental illness and associated more widely with the occurrence of anomalous perceptions as well as false memories and beliefs. In the present experiment, the cognitive mechanisms of reality monitoring were probed in healthy individuals using a task that investigated the effects of stimulus modality (auditory vs visual) and the type of action undertaken during encoding (thought vs speech) on subsequent source memory. There was reduced source accuracy for auditory stimuli compared with visual, and when encoding was accompanied by thought as opposed to speech, and a greater rate of externalization than internalization errors that was stable across factors. Interpreted within the source monitoring framework (Johnson, Hashtroudi, & Lindsay, 1993), the results are consistent with the greater prevalence of clinically observed auditory than visual reality discrimination failures. The significance of these findings is discussed in light of theories of hallucinations, delusions and confabulation. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
Perfect, Timothy J; Field, Ian; Jones, Robert
2009-01-01
Unconscious plagiarism occurs when people try to generate new ideas or when they try to recall their own ideas from among a set generated by a group. In this study, the factors that independently influence these two forms of plagiarism error were examined. Participants initially generated solutions to real-world problems in 2 domains of knowledge in collaboration with a confederate presented as an expert in 1 domain. Subsequently, the participant generated improvements to half of the ideas from each person. Participants returned 1 day later to recall either their own ideas or their partner's ideas and to complete a generate-new task. A double dissociation was observed. Generate-new plagiarism was driven by partner expertise but not by idea improvement, whereas recall plagiarism was driven by improvement but not expertise. This improvement effect on recall plagiarism was seen for the recall-own but not the recall-partner task, suggesting that the increase in recall-own plagiarism is due to mistaken idea ownership, not source confusion.
Aradhya, Sriharsha; Rowlands, Graham; Shi, Shengjie; Oh, Junseok; Ralph, D. C.; Buhrman, Robert
Magnetic random access memory (MRAM) using spin transfer torques (STT) holds great promise for replacing existing best-in-class memory technologies in several application domains. Research on conventional two-terminal STT-MRAM thus far has revealed the existence of limitations that constrain switching reliability and speed for both in-plane and perpendicularly magnetized devices. Recently, spin torque arising from the giant spin-Hall effect in Ta, W and Pt has been shown to be an efficient mechanism to switch magnetic bits in a three-terminal geometry. Here we report highly reliable, nanosecond-timescale pulse switching of three-terminal devices with in-plane magnetized magnetic tunnel junctions. We obtain write error rates (WER) down to ~10^-5 using pulses as short as 2 ns, in contrast to conventional in-plane STT-MRAM devices where write speeds were limited to a few tens of nanoseconds for comparable WER. Utilizing micro-magnetic simulations, we discuss the differences from conventional MRAM that allow for this unanticipated and significant performance improvement. Finally, we highlight the path towards practical application enabled by the ability to separately optimize the read and write pathways in three-terminal devices.
Chen, Zhaocong; Wong, Francis C K; Jones, Jeffery A; Li, Weifeng; Liu, Peng; Chen, Xi; Liu, Hanjun
2015-08-17
Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. As compared to the pre-training session, the magnitude of vocal compensation significantly decreased for the control group, but remained consistent for the trained group at the post-training session. However, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can produce transfer effects to facilitating the neural mechanisms underlying the online monitoring of auditory feedback regarding vocal production.
Energy Technology Data Exchange (ETDEWEB)
Cotter, Simon L., E-mail: simon.cotter@manchester.ac.uk
2016-10-15
Efficient analysis and simulation of multiscale stochastic systems of chemical kinetics is an ongoing area for research, and is the source of many theoretical and computational challenges. In this paper, we present a significant improvement to the constrained approach, which is a method for computing effective dynamics of slowly changing quantities in these systems, but which does not rely on the quasi-steady-state assumption (QSSA). The QSSA can cause errors in the estimation of effective dynamics for systems where the difference in timescales between the “fast” and “slow” variables is not so pronounced. This new application of the constrained approach allows us to compute the effective generator of the slow variables, without the need for expensive stochastic simulations. This is achieved by finding the null space of the generator of the constrained system. For complex systems where this is not possible, or where the constrained subsystem is itself multiscale, the constrained approach can then be applied iteratively. This results in breaking the problem down into finding the solutions to many small eigenvalue problems, which can be efficiently solved using standard methods. Since this methodology does not rely on the quasi steady-state assumption, the effective dynamics that are approximated are highly accurate, and in the case of systems with only monomolecular reactions, are exact. We will demonstrate this with some numerics, and also use the effective generators to sample paths of the slow variables which are conditioned on their endpoints, a task which would be computationally intractable for the generator of the full system.
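The core linear-algebra step described above — obtaining the stationary behaviour of the constrained (fast) subsystem from the null space of its generator, then averaging the slow dynamics against it — can be sketched on a toy two-state fast process. The rates and the slow-reaction coupling below are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.linalg import null_space

# Generator of a fast two-state switching process (rows sum to zero):
# state 0 -> 1 at rate a, state 1 -> 0 at rate b
a, b = 10.0, 5.0
Q = np.array([[-a, a],
              [b, -b]])

# The stationary distribution pi solves pi @ Q = 0, i.e. Q.T @ pi = 0,
# so take the null space of Q.T and normalise it to sum to 1
pi = null_space(Q.T).ravel()
pi = pi / pi.sum()

# Suppose a slow reaction fires at rate k[i] while the fast process is
# in state i; the effective slow rate is the stationary average of k
k = np.array([1.0, 4.0])
effective_rate = float(pi @ k)
print(pi, effective_rate)  # pi = [1/3, 2/3], effective rate = 3.0
```

For this two-state example the null space is one small eigenvalue problem, mirroring the paper's point that the iterative constrained approach reduces the multiscale problem to many such small, cheaply solvable problems.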
International Nuclear Information System (INIS)
Kikuchi, T.; Kawata, S.; Kawata, S.; Nakajima, M.; Horioka, K.
2006-01-01
Emittance growth due to the transverse focusing field error is investigated during the final beam bunching in the energy driver system of heavy ion inertial fusion. The beam bunch is longitudinally compressed during transport with the field error in the continuous focusing (CF) or the alternating gradient (AG) field lattices. Numerical calculation results show only a 2% difference in emittance growth between the cases with and without field error in the CF lattice. In the case of the AG lattice model with a field error of 10%, an emittance growth of 2.4 times is estimated, and a major difference between the CF and AG models is indicated by the numerical simulations. (author)
Irregular analytical errors in diagnostic testing - a novel concept.
Vogeser, Michael; Seger, Christoph
2018-02-23
In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control; the quality control measurements act only at the batch level. Many effects and interferences associated with an individual diagnostic sample can compromise any analyte, whether measured quantitatively or qualitatively. It is obvious that a quality-control-sample-based approach to quality assurance is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term: the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample, an irregular analytical error is defined as an inaccuracy (a deviation from a reference measurement procedure result) of a test result that is so large that it cannot be explained by the measurement uncertainty of the utilized routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or by an individual single-sample-associated processing error in the analytical process. Currently, the availability of reference measurement procedures is still highly limited, but LC
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-09-01
In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
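The method-of-moments (Matheron) estimator discussed above can be sketched for one-dimensional, throughfall-like transect data. The sampling locations, values and lag tolerance below are invented for illustration; for the skewed, outlier-prone data the study considers, robust or residual-maximum-likelihood estimators would be preferred, as its findings suggest.

```python
import numpy as np

def empirical_variogram(x, z, lags, tol):
    """Matheron method-of-moments estimator:
    gamma(h) = (1 / (2 N(h))) * sum of (z_i - z_j)^2 over the N(h)
    pairs whose separation |x_i - x_j| lies within tol of lag h."""
    x, z = np.asarray(x, float), np.asarray(z, float)
    d = np.abs(x[:, None] - x[None, :])      # pairwise separations
    sq = (z[:, None] - z[None, :]) ** 2      # pairwise squared differences
    gam = []
    for h in lags:
        mask = np.triu(np.abs(d - h) <= tol, k=1)  # count each pair once
        gam.append(sq[mask].sum() / (2 * mask.sum()))
    return np.array(gam)

# Hypothetical measurements every metre along a short transect
x = np.arange(5)
z = [0.0, 1.0, 0.0, 1.0, 0.0]
print(empirical_variogram(x, z, lags=[1, 2], tol=0.1))  # [0.5, 0.0]
```

Because each semivariance is an average over the pairs available at that lag, small samples and layouts that under-represent short separations yield noisy estimates near the origin, which is exactly the design concern the study quantifies.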
The effect of sample preparation on uranium hydriding
International Nuclear Information System (INIS)
Banos, A.; Stitt, C.A.; Scott, T.B.
2016-01-01
Highlights: • Distinct differences in uranium hydride growth rates and characteristics between different surface preparation methods. • The primary difference between the categories of sample preparation is the level of strain present in the surface. • Greater surface strain leads to higher nucleation number density, implying a preferred attack of strained over unstrained metal. • As strain is reduced, surface features such as carbides and grain boundaries become more important in controlling the UH3 location. - Abstract: The influence of sample cleaning preparation on the early stages of uranium hydriding has been examined using four identical samples prepared concurrently by four different methods. The samples were reacted together in the same corrosion cell to ensure identical exposure conditions. The analysis showed that the hydride nucleation rate was proportional to the level of strain, with higher number density on the more strained surfaces. Additionally, the microstructure of the metal plays only a secondary role in the initial hydrogen attack on highly strained surfaces, yet it comes to dominate the system for more pristine samples.
Kim, Myoungsoo
2010-04-01
The purpose of this study was to examine the impact of strategies to promote reporting of errors on nurses' attitude to reporting errors, organizational culture related to patient safety, intention to report and reporting rate in hospital nurses. A nonequivalent control group non-synchronized design was used for this study. The program was developed and then administered to the experimental group for 12 weeks. Data were analyzed using descriptive analysis, χ²-test, t-test, and ANCOVA with the SPSS 12.0 program. After the intervention, the experimental group showed significantly higher scores for nurses' attitude to reporting errors (experimental: 20.73 vs control: 20.52, F=5.483, p=.021) and reporting rate (experimental: 3.40 vs control: 1.33, F=1998.083, p<.001), but no significant differences were found in organizational culture and intention to report. The study findings indicate that strategies that promote reporting of errors play an important role in producing positive attitudes to reporting errors and improving reporting behavior. Further advanced strategies for reporting errors that can lead to improved patient safety should be developed and applied in a broad range of hospitals.
Dopamine reward prediction error coding.
Schultz, Wolfram
2016-03-01
Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
Rovno Amber Ant Assemblage: Bias toward Arboreal Strata or Sampling Effect?
Directory of Open Access Journals (Sweden)
Perkovsky E. E.
2016-06-01
Full Text Available In 2015, B. Guenard and co-authors indicated that the Rovno amber ant assemblage, as described by G. Dlussky and A. Rasnitsyn (2009), showed modest support for a bias towards arboreal origin compared with the Baltic and Bitterfeld assemblages, although it was not clear whether this reflects a sampling error or a signal of real deviation. Since 2009, the Rovno ant collection has more than doubled in volume, which makes it possible to check whether the above inference about the essentially arboreal character of the assemblage is real or due to a sampling error. The comparison provided here argues for the latter explanation of the bias revealed by B. Guenard and co-authors. The new and larger data on the Rovno assemblage show that the share of non-arboreal ants is now well comparable with that of the Baltic and Bitterfeld assemblages. This holds true both for the total assemblages and for the subassemblages of worker ants only.
Effect of method of sample preparation on ruminal in situ ...
African Journals Online (AJOL)
Midmar) was harvested at three and four weeks after cutting and fertilizing with 200 kg nitrogen (N)/ha. Freshly cut herbage was used to investigate the following four sample preparation methods. In trial 1, herbage was (1) chopped with a paper-cutting guillotine into 5-10 mm lengths, representing fresh (FR) herbage; ...
Field testing for cosmic ray soft errors in semiconductor memories
International Nuclear Information System (INIS)
O'Gorman, T.J.; Ross, J.M.; Taber, A.H.; Ziegler, J.F.; Muhlfeld, H.P.; Montrose, C.J.; Curtis, H.W.; Walsh, J.L.
1996-01-01
This paper presents a review of experiments performed by IBM to investigate the causes of soft errors in semiconductor memory chips under field test conditions. The effects of alpha-particles and cosmic rays are separated by comparing multiple measurements of the soft-error rate (SER) of samples of memory chips deep underground and at various altitudes above the earth. The results of case studies on four different memory chips show that cosmic rays are an important source of the ionizing radiation that causes soft errors. The results of field testing are used to confirm the accuracy of the modeling and the accelerated testing of chips.
Fossett, Tepanta R D; McNeil, Malcolm R; Pratt, Sheila R; Tompkins, Connie A; Shuster, Linda I
Although many speech errors can be generated at either a linguistic or motoric level of production, phonetically well-formed sound-level serial-order errors are generally assumed to result from disruption of phonologic encoding (PE) processes. An influential model of PE (Dell, 1986; Dell, Burger & Svec, 1997) predicts that speaking rate should affect the relative proportion of these serial-order sound errors (anticipations, perseverations, exchanges). These predictions have been extended to, and have special relevance for, persons with aphasia (PWA) because of the increased frequency with which speech errors occur and because their localization within the functional linguistic architecture may help in diagnosis and treatment. Supporting evidence regarding the effect of speaking rate on phonological encoding has been provided by studies using young normal language (NL) speakers and computer simulations. Limited data exist for older NL users, and no group data exist for PWA. This study tested the phonologic encoding properties of Dell's model of speech production (Dell, 1986; Dell et al., 1997), which predicts that increasing speaking rate affects the relative proportion of serial-order sound errors (i.e., anticipations, perseverations, and exchanges). The effects of speech rate on the error ratios of anticipation/exchange (AE), anticipation/perseveration (AP) and vocal reaction time (VRT) were examined in 16 normal healthy controls (NHC) and 16 PWA without concomitant motor speech disorders. The participants were recorded performing a phonologically challenging (tongue twister) speech production task at their typical and two faster speaking rates. A significant effect of increased rate was obtained for the AP but not the AE ratio. Significant effects of group and rate were obtained for VRT. Although the significant effect of rate for the AP ratio provided evidence that changes in speaking rate did affect PE, the results failed to support the model-derived predictions.
Approximation errors during variance propagation
International Nuclear Information System (INIS)
Dinsmore, Stephen
1986-01-01
Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurrence are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
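As a hedged illustration of the kind of comparison described, the sketch below builds a toy top event, P_top = P(A)·[P(B) + P(C) − P(B)P(C)] for the gate A AND (B OR C), and compares first-order (Taylor) variance propagation against a Monte Carlo reference; the tree and the input distributions are invented, not those of the paper:

```python
import numpy as np

def top(pa, pb, pc):
    # Top event probability for the toy tree: A AND (B OR C)
    return pa * (pb + pc - pb * pc)

means = np.array([0.01, 0.02, 0.03])   # input failure probabilities (means)
sds = np.array([0.002, 0.004, 0.006])  # input standard deviations

# First-order approximation: Var(top) ~ sum_i (d top / d p_i)^2 * Var(p_i)
eps = 1e-6
grad = np.empty(3)
for i in range(3):
    hi, lo = means.copy(), means.copy()
    hi[i] += eps
    lo[i] -= eps
    grad[i] = (top(*hi) - top(*lo)) / (2 * eps)
var_taylor = float(np.sum(grad**2 * sds**2))

# Monte Carlo reference for the top event's variance
rng = np.random.default_rng(0)
pa, pb, pc = (rng.normal(m, s, 200_000) for m, s in zip(means, sds))
var_mc = float(top(pa, pb, pc).var())
```

For small input variances the two agree closely; the approximation error the paper quantifies grows as the input variances grow and higher-order terms start to matter.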
Directory of Open Access Journals (Sweden)
Lya Aklimawati
2013-12-01
Full Text Available The high volatility of cocoa price movements is a consequence of the imbalance between demand and supply in the commodity market. World economy expectations and market liberalization lead to instability of cocoa prices in international commerce. Dynamic prices moving erratically influence the benefit of market players, particularly producers. The aims of this research were (1) to estimate an empirical cocoa price model responding to market dynamics and (2) to analyze the short-term and long-term effects of price determinant variables on cocoa prices. The research was carried out by analyzing annual data from 1980 to 2011, based on secondary data. An error correction mechanism (ECM) approach was used to estimate the econometric model of the cocoa price. The estimation results indicated that the cocoa price was significantly affected by the IDR-USD exchange rate, world gross domestic product, world inflation, world cocoa production, world cocoa consumption, world cocoa stock and Robusta prices, at significance levels varying from 1-10%. All of these variables have a long-run equilibrium relationship. In the long run, world gross domestic product, world cocoa consumption and world cocoa stock were elastic (E > 1), while the other variables were inelastic (E < 1). Variables affecting cocoa prices in the short-run equilibrium were the IDR-USD exchange rate, world gross domestic product, world inflation, world cocoa consumption and world cocoa stock. The analysis showed that world gross domestic product, world cocoa consumption and world cocoa stock were elastic (E > 1) to cocoa prices in the short term, whereas the response of cocoa prices was inelastic to changes in the IDR-USD exchange rate and world inflation. Key words: Price
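The mechanics of the ECM approach named above can be shown with a minimal two-step (Engle-Granger style) sketch on simulated data; the series and coefficients are invented, not the study's cocoa data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = np.cumsum(rng.normal(size=n))    # I(1) driver (e.g., a log price index)
y = 2.0 * x + rng.normal(size=n)     # cointegrated with x, long-run slope 2

# Step 1: long-run (cointegrating) regression; keep the residuals
X1 = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X1, y, rcond=None)[0]
u = y - X1 @ beta                    # deviation from long-run equilibrium

# Step 2: ECM - regress the short-run change on the driver's change
# and the lagged equilibrium error
dy, dx = np.diff(y), np.diff(x)
X2 = np.column_stack([np.ones(n - 1), dx, u[:-1]])
alpha0, alpha1, lam = np.linalg.lstsq(X2, dy, rcond=None)[0]
# lam < 0: deviations from the long-run relation are corrected over time
```

The negative coefficient on the lagged residual is the "error correction" term: it measures how quickly the price returns to its long-run equilibrium after a shock.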
Learning from prescribing errors
Dean, B
2002-01-01
The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1981-01-01
Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.
Effect of slope errors on the performance of mirrors for x-ray free electron laser applications.
Pardini, Tom; Cocco, Daniele; Hau-Riege, Stefan P
2015-12-14
In this work we point out that slope errors play only a minor role in the performance of a certain class of x-ray optics for X-ray Free Electron Laser (XFEL) applications. Using physical optics propagation simulations and the formalism of Church and Takacs [Opt. Eng. 34, 353 (1995)], we show that diffraction-limited optics commonly found at XFEL facilities possess a critical spatial wavelength that makes them less sensitive to slope errors and more sensitive to height errors. Given the number of XFELs currently operating or under construction across the world, we hope that this simple observation will help to correctly define specifications for x-ray optics to be deployed at XFELs, possibly reducing the budget and the timeframe needed to complete the optical manufacturing and metrology.
International Nuclear Information System (INIS)
Salas, P.J.; Sanz, A.L.
2004-01-01
In this work we discuss the ability of different types of ancillas to control the decoherence of a qubit interacting with an environment. The error is introduced into the numerical simulation via a depolarizing isotropic channel. The ranges of values considered are 10^-4 ≤ ε ≤ 10^-2 for memory errors and 3×10^-5 ≤ γ/7 ≤ 10^-2 for gate errors. After the correction we calculate the fidelity as a quality criterion for the recovered qubit. We observe that a recovery method with a three-qubit ancilla provides reasonably good results bearing in mind its economy. If we want to go further, we have to use fault-tolerant ancillas with a high degree of parallelism, even if this condition implies introducing additional ancilla verification qubits.
Duda, David P.; Minnis, Patrick
2009-01-01
Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
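The two accuracy measures and the two probability thresholds discussed above can be computed as below; the tiny forecast sample is invented for illustration, not the ARPS-derived data:

```python
import numpy as np

def pc_hkd(obs, prob, threshold):
    """Percent correct (PC) and Hanssen-Kuipers discriminant (HKD) for a
    probabilistic forecast made dichotomous at `threshold`."""
    fcst = prob >= threshold
    a = np.sum(fcst & obs)          # hits
    b = np.sum(fcst & ~obs)         # false alarms
    c = np.sum(~fcst & obs)         # misses
    d = np.sum(~fcst & ~obs)        # correct negatives
    pc = (a + d) / obs.size
    hkd = a / (a + c) - b / (b + d)  # hit rate minus false-alarm rate
    return pc, hkd

# Toy contrail observations (True = occurrence) and model probabilities
obs = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0], dtype=bool)
prob = np.array([0.9, 0.6, 0.4, 0.7, 0.3, 0.2, 0.2, 0.1, 0.1, 0.05])

pc_half, hkd_half = pc_hkd(obs, prob, 0.5)         # fixed 0.5 threshold
pc_clim, hkd_clim = pc_hkd(obs, prob, obs.mean())  # climatological frequency
```

In this toy sample, as in the study, lowering the threshold to the climatological frequency raises the hit rate and thus the HKD score relative to the 0.5 threshold.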
Sample-size effects in fast-neutron gamma-ray production measurements: solid-cylinder samples
International Nuclear Information System (INIS)
Smith, D.L.
1975-09-01
The effects of geometry, absorption and multiple scattering in (n,Xγ) reaction measurements with solid-cylinder samples are investigated. Both analytical and Monte-Carlo methods are employed in the analysis. Geometric effects are shown to be relatively insignificant except in definition of the scattering angles. However, absorption and multiple-scattering effects are quite important; accurate microscopic differential cross sections can be extracted from experimental data only after a careful determination of corrections for these processes. The results of measurements performed using several natural iron samples (covering a wide range of sizes) confirm validity of the correction procedures described herein. It is concluded that these procedures are reliable whenever sufficiently accurate neutron and photon cross section and angular distribution information is available for the analysis. (13 figures, 5 tables) (auth)
Mobility, bioavailability, and toxic effects of cadmium in soil samples
International Nuclear Information System (INIS)
Prokop, Z.; Cupr, P.; Zlevorova-Zlamalikova V.; Komarek, J.; Dusek, L.; Holoubek, I.
2003-01-01
Total concentration is not a reliable indicator of metal mobility or bioavailability in soils. The physicochemical form determines the behavior of metals in soils and hence the toxicity toward terrestrial biota. The main objectives of this study were the application and comparison of three approaches for the evaluation of cadmium behavior in soil samples. The mobility and bioavailability of cadmium in five selected soil samples were evaluated using equilibrium speciation (Windermere humic aqueous model (WHAM)), extraction procedures (Milli-Q water, DMSO, and DTPA), and a number of bioassays (Microtox, growth inhibition test, contact toxicity test, and respiration). The mobility, represented by the water-extractable fraction, corresponded well with the amount of cadmium in the soil solution calculated using the WHAM (r² = 0.96, P < 0.001). The results of the ecotoxicological evaluation, which represent the bioavailable fraction of cadmium, correlated well with DTPA extractability and also with the concentration of free cadmium ion, which is recognized as the most bioavailable metal form. The results of the WHAM as well as the results of the extraction experiments showed a strong binding of cadmium to organic matter and a weak sorption of cadmium to clay minerals.
Wei, Xiaobo; Liu, Mengjiao; Ding, Yun; Li, Qilin; Cheng, Changhai; Zong, Xian; Yin, Wenming; Chen, Jie; Gu, Wendong
2018-05-08
Breast-conserving surgery (BCS) plus postoperative radiotherapy has become the standard treatment for early-stage breast cancer. The aim of this study was to compare the setup accuracy of optical surface imaging by the Sentinel system with cone-beam computerized tomography (CBCT) imaging currently used in our clinic for patients who received BCS. Two optical surface scans were acquired before and immediately after couch movement correction. The correlation between the setup errors as determined by the initial optical surface scan and CBCT was analyzed. The deviation of the second optical surface scan from the reference planning CT was considered an estimate of the residual errors of the new method for patient setup correction. The consequences in terms of the necessary planning target volume (PTV) margins were evaluated for treatment sessions without setup correction applied. We analyzed 145 scans in 27 patients treated for early-stage breast cancer. The setup errors of skin-marker-based patient alignment by optical surface scan and CBCT were correlated, and the residual setup errors as determined by the optical surface scan after couch movement correction were reduced. Optical surface imaging provides a convenient method for improving the setup accuracy for breast cancer patients without unnecessary imaging dose.
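One common way to turn population setup errors into PTV margins is the van Herk recipe M = 2.5Σ + 0.7σ, where Σ is the systematic and σ the random error component. This is a widely used formula, not necessarily the margin calculation performed in this study, and the per-patient numbers below are invented:

```python
import numpy as np

# Per-patient setup errors (mm) along one axis over several fractions
# (invented for illustration)
errors = {
    "pt1": [1.2, 0.8, 1.5, 0.9],
    "pt2": [-0.5, -1.1, -0.2, -0.8],
    "pt3": [0.3, 0.6, -0.1, 0.2],
}
per_patient_mean = np.array([np.mean(v) for v in errors.values()])
per_patient_sd = np.array([np.std(v, ddof=1) for v in errors.values()])

Sigma = np.std(per_patient_mean, ddof=1)     # systematic component (SD of means)
sigma = np.sqrt(np.mean(per_patient_sd**2))  # random component (RMS of SDs)

# van Herk margin recipe: M = 2.5*Sigma + 0.7*sigma
margin = 2.5 * Sigma + 0.7 * sigma
```

Reducing residual setup errors (e.g., by optical surface guidance) shrinks both components and thus the margin, which is the practical payoff described in the abstract.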
Lugtig, Peter; Toepoel, Vera
2016-01-01
Respondents in an Internet panel survey can often choose which device they use to complete questionnaires: a traditional PC, laptop, tablet computer, or a smartphone. Because all these devices have different screen sizes and modes of data entry, measurement errors may differ between devices. Using
Petrova, Natalia; Kocoulin, Valerii; Nefediev, Yurii
2016-07-01
the time of observation), this error is reduced by an order of magnitude, i.e. it does not exceed the error of the observed selenographic coordinates. 2. The worst effect: errors in the coordinates of the catalogue cause a small but constant shift in ρ and Iσ. When Δα, Δδ ˜ 0.01", the shift reaches 0.0025". Moreover, there is a trend with a slight but noticeable slope. 3. The effect of an error in the declination of a star is substantially stronger than that of an error in right ascension; perhaps this is characteristic only of polar observations. For the required accuracy in determination of the physical libration, these phenomena must be taken into account when processing the planned observations. References: Nefediev et al., 2013. Uchenye zapiski Kazanskogo universiteta, v. 155, 1, p. 188-194. Petrova, N., Abdulmyanov, T., Hanada, H. Some qualitative manifestations of the physical libration of the Moon by observing stars from the lunar surface. J. Adv. Space Res., 2012a, v. 50, p. 1702-1711.
Energy Technology Data Exchange (ETDEWEB)
Kaganovich, Igor D.; Massidda, Scottt; Startsev, Edward A.; Davidson, Ronald C.; Vay, Jean-Luc; Friedman, Alex
2012-06-21
Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to overlaying of different parts of the pulse near the focal plane. Examples of slowly varying and rapidly varying errors compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both thermal spread and the velocity errors. The effects of the
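The headline scaling above, compression ratio ≈ 1/(relative tilt error), can be seen in a toy kinematic sketch; the parameters are invented, and the thermal spread that ultimately limits the well-corrected part of the pulse is ignored:

```python
import numpy as np

L, v0, T = 1.0, 10.0, 1.0        # pulse length (m), head velocity, drift time
f = 100.0                        # focal-plane position (m)
z = np.linspace(0.0, L, 10_001)  # initial positions of beam slices

# Ideal linear velocity tilt: every slice would arrive at z = f at time T
v_ideal = v0 + (f - v0 * T - z) / T

ratios = {}
for delta in (0.01, 0.001):      # relative error in the applied tilt
    v = v0 + (1 + delta) * (v_ideal - v0)
    z_final = z + v * T
    # Longitudinal compression ratio: initial length / final length
    ratios[delta] = L / (z_final.max() - z_final.min())
```

With a relative tilt error δ, the residual pulse length at the focal plane is δ·L, so the compression ratio is 1/δ: one-percent errors cap the compression at about one hundred, as stated in the abstract.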
Cognitive aspect of diagnostic errors.
Phua, Dong Haur; Tan, Nigel C K
2013-01-01
Diagnostic errors can result in tangible harm to patients. Despite our advances in medicine, the mental processes required to make a diagnosis exhibits shortcomings, causing diagnostic errors. Cognitive factors are found to be an important cause of diagnostic errors. With new understanding from psychology and social sciences, clinical medicine is now beginning to appreciate that our clinical reasoning can take the form of analytical reasoning or heuristics. Different factors like cognitive biases and affective influences can also impel unwary clinicians to make diagnostic errors. Various strategies have been proposed to reduce the effect of cognitive biases and affective influences when clinicians make diagnoses; however evidence for the efficacy of these methods is still sparse. This paper aims to introduce the reader to the cognitive aspect of diagnostic errors, in the hope that clinicians can use this knowledge to improve diagnostic accuracy and patient outcomes.
International Nuclear Information System (INIS)
Anon.
1991-01-01
This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
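For a zero-mean bivariate normal error, the probability ellipse and probability circle mentioned above follow directly from the error covariance matrix; a small sketch (illustrative notation, not the chapter's own):

```python
import numpy as np

def error_ellipse(cov, p=0.5):
    """Semi-axes and orientation of the ellipse containing a fraction p
    of a zero-mean bivariate normal with covariance `cov`."""
    k = np.sqrt(-2.0 * np.log(1.0 - p))  # radial quantile (Rayleigh form)
    eigvals, eigvecs = np.linalg.eigh(cov)
    semi_axes = k * np.sqrt(eigvals)     # minor axis, major axis
    angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])  # major-axis direction
    return semi_axes, angle

# Circular (isotropic) case: the 50% probability circle has radius
# sigma * sqrt(2 ln 2) ~ 1.1774 * sigma, the circular error probable (CEP)
sigma = 3.0
axes, _ = error_ellipse(sigma**2 * np.eye(2), p=0.5)
```

In the isotropic case both semi-axes collapse to the same radius, which is how the probability circle arises as a special case of the probability ellipse.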
International Nuclear Information System (INIS)
Picard, R.R.
1989-01-01
Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
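A generic first-order propagation-of-error helper matching the single-value and several-value cases listed above; the UF6-conversion-style net-weight numbers are invented for illustration:

```python
import numpy as np

def propagate(f, x, sx):
    """First-order propagation of independent measurement errors:
    Var(f) ~ sum_i (df/dx_i)^2 * Var(x_i), gradient by central differences."""
    x, sx = np.asarray(x, float), np.asarray(sx, float)
    grad = np.empty_like(x)
    for i in range(x.size):
        h = np.zeros_like(x)
        h[i] = 1e-6 * max(1.0, abs(x[i]))
        grad[i] = (f(x + h) - f(x - h)) / (2 * h[i])
    return float(np.sqrt(np.sum(grad**2 * sx**2)))

# Element mass from gross and tare weighings and a measured factor:
# f = (gross - tare) * factor  (numbers are illustrative only)
f = lambda v: (v[0] - v[1]) * v[2]
sd = propagate(f, x=[120.0, 20.0, 0.68], sx=[0.05, 0.05, 0.001])
```

The same helper extends to a materials-balance expression by summing the squared sensitivity-weighted variances of all measured terms, which is the structure of the error-propagation results the chapter develops.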
Martínez-Legaz, Juan Enrique; Soubeyran, Antoine
2003-01-01
We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.
Effects of radiation on lithium aluminate samples properties
Energy Technology Data Exchange (ETDEWEB)
Botter, F.; Lefevre, F.; Rasneur, B.; Trotabas, M.; Roth, E.
The irradiation behaviour of lithium aluminate, a candidate material for a fusion reactor blanket, has been investigated. About 130 samples of 7.5% WLi content el-LiAlO2 have been loaded in a 6 level device, and were irradiated for 25.7 FPD in the core of the Osiris reactor at Saclay at the end of 1984, within an experiment named ALICE 1. The properties of several textural groups have been examined before and after irradiation and the correlation of the results observed as a funcion of the irradiation conditions is given. No significant variation of the properties, as a whole, was shown at 400C under fluences of 4.7x10S n cm S fast neutrons (>1 MeV) and 1.48x10S n cm S thermal neutrons. At 600C, under the highest flux, weight losses less than 1%, and decreases of 2 to 8% of the sound velocity were measured. Generally, neither swelling nor breakage, except those due to combined mechanical and thermal shocks, were observed.
Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use
Arthur, Steve M.; Schwartz, Charles C.
1999-01-01
We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km2 (mean = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km2 (mean = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km2 (mean = 224) for radiotracking data and 16-130 km2 (mean = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate and precise. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
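The sample-size effect on MCP home ranges can be reproduced in miniature: estimate the convex-hull area from growing subsets of simulated locations. The bivariate-normal "home range" below is a toy stand-in, not the bear data:

```python
import numpy as np

def mcp_area(points):
    """Minimum convex polygon (convex hull) area: Andrew's monotone chain
    to build the hull, then the shoelace formula for its area."""
    pts = sorted(map(tuple, points))
    def half(seq):
        chain = []
        for p in seq:
            # Pop while the last two hull points and p make a non-left turn
            while len(chain) >= 2 and (
                (chain[-1][0] - chain[-2][0]) * (p[1] - chain[-2][1])
                - (chain[-1][1] - chain[-2][1]) * (p[0] - chain[-2][0])) <= 0:
                chain.pop()
            chain.append(p)
        return chain[:-1]
    hull = half(pts) + half(pts[::-1])      # lower + upper hull
    x, y = np.array(hull).T
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Simulated locations from a bivariate normal "home range" (km)
rng = np.random.default_rng(7)
locs = rng.normal(scale=2.0, size=(500, 2))
areas = {n: mcp_area(locs[:n]) for n in (15, 60, 500)}
# MCP area grows with the number of locations, as in the radiotracking
# vs GPS comparison above
```

Because the hull of a subset is contained in the hull of the full sample, MCP area can only grow with more locations, which is why small radiotracking samples underestimate home range size.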
Generalized Gaussian Error Calculus
Grabe, Michael
2010-01-01
For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widespread used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...
Barriers to medical error reporting
Directory of Open Access Journals (Sweden)
Jalal Poorolajal
2015-01-01
Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences, in Hamadan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), those aged 40-50 years (67.6%), less-experienced personnel (58.7%), those with an educational level of MSc (87.5%), and staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.
Quantile Regression With Measurement Error
Wei, Ying
2009-08-27
Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
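The bias being corrected can be demonstrated by simulation: median regression fit against an error-prone covariate attenuates the slope, which is what regression calibration and the proposed joint estimating equations aim to fix. The data-generating process and the simple grid-search fit below are illustrative assumptions, not the paper's estimator:

```python
import numpy as np

def median_slope(x, y):
    """Median (tau = 0.5) regression slope through the origin by grid
    search on the check loss, which at the median is the absolute error."""
    grid = np.linspace(0.0, 3.0, 301)
    losses = [np.abs(y - b * x).mean() for b in grid]
    return float(grid[int(np.argmin(losses))])

rng = np.random.default_rng(42)
n = 20_000
x_true = rng.normal(size=n)
y = 2.0 * x_true + rng.normal(size=n)   # true median slope = 2
w = x_true + rng.normal(size=n)         # covariate observed with error

b_clean = median_slope(x_true, y)       # close to 2
b_naive = median_slope(w, y)            # attenuated toward 0
```

With equal covariate and error variances the naive slope shrinks by roughly the reliability ratio var(x)/(var(x)+var(u)), here about one half, mirroring the attenuation the paper's method corrects.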
Medication errors: prescribing faults and prescription errors.
Velo, Giampaolo P; Minuz, Pietro
2009-06-01
1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.
International Nuclear Information System (INIS)
Kobayashi, H.; Matsunaga, T.; Hoyano, A.
2002-01-01
Absorbed photosynthetically active radiation (APAR), which is defined as downward solar radiation in 400-700 nm absorbed by vegetation, is one of the significant variables for Net Primary Production (NPP) estimation from satellite data. Toward the reduction of the uncertainties in the global NPP estimation, it is necessary to clarify the APAR accuracy. In this paper, first we proposed an improved PAR estimation method based on Eck and Dye's method, in which the ultraviolet (UV) reflectivity data derived from the Total Ozone Mapping Spectrometer (TOMS) at the top of the atmosphere were used for cloud transmittance estimation. The proposed method considered the variable effects of land surface UV reflectivity on the satellite-observed UV data. Monthly mean PAR comparisons between satellite-derived and ground-based data at various meteorological stations in Japan indicated that the improved PAR estimation method reduced the bias errors in the summer season. Assuming the relative error of the fraction of PAR (FPAR) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) to be 10%, we estimated APAR relative errors to be 10-15%. Annual NPP is calculated using APAR derived from MODIS FPAR and the improved PAR estimation method. It is shown that random and bias errors of annual NPP in a 1 km resolution pixel are less than 4% and 6%, respectively. The APAR bias errors due to the PAR bias errors also affect the estimated total NPP. We estimated the most probable total annual NPP in Japan by subtracting the bias PAR errors. It amounts to about 248 MtC/yr. Using the improved PAR estimation method and Eck and Dye's method, total annual NPP differs from the most probable value by 4% and 9%, respectively. A previous intercomparison study among fifteen NPP models showed that global NPP estimations among NPP models are 44.4-66.3 GtC/yr (coefficient of variation = 14%). Hence we conclude that the NPP estimation uncertainty due to APAR estimation error is small
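The 10-15% APAR error range follows from standard propagation of independent random relative errors through the product APAR = PAR × FPAR: the relative errors combine in quadrature. A minimal sketch (the specific PAR error values are illustrative, not taken from the paper):

```python
import math

def product_rel_error(rel_a: float, rel_b: float) -> float:
    """Relative error of a product of two quantities with independent
    random relative errors: the errors combine in quadrature."""
    return math.sqrt(rel_a**2 + rel_b**2)

# With a fixed 10% FPAR error, PAR errors of roughly 0-11% keep the
# combined APAR error inside the 10-15% range quoted above.
apar_err = product_rel_error(0.10, 0.10)   # ~0.141, i.e. about 14%
```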
2012-01-01
Background and Aim: According to biorhythm theory, when a cycle's phase shifts from positive to negative or vice versa, people experience a critical, unstable day that makes them prone to errors and accidents. The purpose of this study is to examine this relationship in an automobile manufacturing industry. Materials and Methods: First, 1280 incidents entered into the study were reviewed, and then the critical days of each biological cycle were determined using the software Easy Biorh...
Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.
2008-07-01
We analyze the long-time behavior of a quantum computer running a quantum error correction (QEC) code in the presence of a correlated environment. Starting from a Hamiltonian formulation of realistic noise models, and assuming that QEC is indeed possible, we find formal expressions for the probability of a given syndrome history and the associated residual decoherence encoded in the reduced density matrix. Systems with nonzero gate times (“long gates”) are included in our analysis by using an upper bound on the noise. In order to introduce the local error probability for a qubit, we assume that propagation of signals through the environment is slower than the QEC period (hypercube assumption). This allows an explicit calculation in the case of a generalized spin-boson model and a quantum frustration model. The key result is a dimensional criterion: If the correlations decay sufficiently fast, the system evolves toward a stochastic error model for which the threshold theorem of fault-tolerant quantum computation has been proven. On the other hand, if the correlations decay slowly, the traditional proof of this threshold theorem does not hold. This dimensional criterion bears many similarities to criteria that occur in the theory of quantum phase transitions.
Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas
2013-01-01
In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.
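The additive property reported above holds exactly whenever the retrieval responds linearly to perturbations of its optical inputs. A toy least-squares retrieval illustrates the mechanism (this is not the PIC regularization code; the forward model, channel count, and bias size are made up):

```python
# Toy linear retrieval: deviations caused by biasing each optical channel
# individually sum to the deviation caused by biasing all channels at once.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 3))          # made-up forward model: 5 optical
                                     # channels -> 3 microphysical parameters
x_true = np.array([1.0, 2.0, 0.5])
d = A @ x_true                       # unperturbed optical data

retrieve = lambda data: np.linalg.lstsq(A, data, rcond=None)[0]
base = retrieve(d)

# deviation from a 5% bias applied to each channel individually
individual = []
for i in range(len(d)):
    d_i = d.copy()
    d_i[i] *= 1.05
    individual.append(retrieve(d_i) - base)

# deviation when all channels are biased simultaneously
joint = retrieve(d * 1.05) - base

# for a linear retrieval the individual deviations add up exactly
additive_ok = np.allclose(joint, np.sum(individual, axis=0))
```

For the regularized inversion studied in the paper the retrieval is only approximately linear, so the additivity is an empirical finding there rather than an identity as in this linear toy.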
Krypton and xenon in Apollo 14 samples - Fission and neutron capture effects in gas-rich samples
Drozd, R.; Hohenberg, C.; Morgan, C.
1975-01-01
Gas-rich Apollo 14 breccias and trench soil are examined for fission xenon from the decay of the extinct isotopes Pu-244 and I-129, and some samples have been found to have an excess fission component which apparently was incorporated after decay elsewhere and was not produced by in situ decay. Two samples have excess Xe-129 resulting from the decay of I-129. The excess is correlated at low temperatures with excess Xe-128 resulting from neutron capture on I-127. This neutron capture effect is accompanied by related low-temperature excesses of Kr-80 and Kr-82 from neutron capture on the bromine isotopes. Surface correlated concentrations of iodine and bromine are calculated from the neutron capture excesses.
Orfanos, Philippos; Knüppel, Sven; Naska, Androniki; Haubrock, Jennifer; Trichopoulou, Antonia; Boeing, Heiner
2013-09-28
Eating out is often recorded through short-term measurements and the large within-person variability in intakes may not be adequately captured. The present study aimed to understand the effect of measurement error when using eating-out data from one or two 24 h dietary recalls (24hDR), in order to describe intakes and assess associations between eating out and personal characteristics. In a sample of 366 adults from Potsdam, Germany, two 24hDR and a FFQ were collected. Out-of-home intakes were estimated based on either one 24hDR or two 24hDR or the Multiple Source Method (MSM) combining the two 24hDR and the questionnaire. The distribution of out-of-home intakes of energy, macronutrients and selected foods was described. Multiple linear regression and partial correlation coefficients were estimated to assess associations between out-of-home energy intake and participants' characteristics. The mean daily out-of-home intakes estimated from the two 24hDR were similar to the usual intakes estimated through the MSM. The out-of-home energy intake, estimated through either one or two 24hDR, was positively associated with total energy intake, inversely with age and associations were stronger when using the two 24hDR. A marginally significant inverse association between out-of-home energy intake and physical activity at work was observed only on the basis of the two 24hDR. After applying the MSM, all significant associations remained and were more precise. Data on eating out collected through one or two 24hDR may not adequately describe intake distributions, but significant associations between eating out and participants' characteristics are highly unlikely to appear when in reality these do not exist.
Influence of random setup error on dose distribution
International Nuclear Information System (INIS)
Zhai Zhenyu
2008-01-01
Objective: To investigate the influence of random setup error on dose distribution in radiotherapy and to determine the margin from ITV to PTV. Methods: A random sampling approach was used to simulate field positions in the target coordinate system. The cumulative effect of random setup error was the sum of the dose distributions of all individual treatment fractions. A study of 100 cumulative effects yielded the shift sizes of the 90% dose point position. Margins from ITV to PTV caused by random setup error were chosen at the 95% probability level. Spearman's correlation was used to analyze the influence of each factor. Results: The average shift sizes of the 90% dose point position were 0.62, 1.84, 3.13, 4.78, 6.34 and 8.03 mm for random setup errors of 1, 2, 3, 4, 5 and 6 mm, respectively. Univariate analysis showed that the size of the margin was associated only with the size of the random setup error. Conclusions: The margin from ITV to PTV is 1.2 times the random setup error for head-and-neck cancer and 1.5 times for thoracic and abdominal cancer. Field size, energy and target depth, unlike random setup error, have no relation to the size of the margin. (authors)
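The simulation approach described above can be sketched in one dimension: draw a random setup shift per fraction, average the shifted dose profiles into a cumulative dose, locate the 90% dose point, and repeat over many simulated courses. The Gaussian profile, fraction count, and grid here are illustrative toys, so the numbers will not reproduce the planning-system results in the abstract.

```python
# Toy 1-D Monte Carlo of the margin-estimation procedure described above.
import numpy as np

rng = np.random.default_rng(0)
z = np.linspace(-30.0, 30.0, 601)                 # position grid, mm
profile = lambda shift: np.exp(-0.5 * ((z - shift) / 10.0) ** 2)

def dose90_position(sigma_mm, fractions=30):
    """Right-side position of the 90% dose level of the cumulative dose."""
    cum = np.mean([profile(rng.normal(0.0, sigma_mm))
                   for _ in range(fractions)], axis=0)
    right, prof = z[z >= 0], cum[z >= 0]
    return right[np.argmin(np.abs(prof - 0.9 * prof.max()))]

baseline = dose90_position(0.0)                   # no setup error
shifts = np.abs([dose90_position(3.0) - baseline for _ in range(100)])
margin = np.percentile(shifts, 95)    # margin covering 95% of courses
```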
Sampling protein motion and solvent effect during ligand binding
Limongelli, Vittorio; Marinelli, Luciana; Cosconati, Sandro; La Motta, Concettina; Sartini, Stefania; Mugnaini, Laura; Da Settimo, Federico; Novellino, Ettore; Parrinello, Michele
2012-01-01
An exhaustive description of the molecular recognition mechanism between a ligand and its biological target is of great value because it provides the opportunity for an exogenous control of the related process. Very often this aim can be pursued using high resolution structures of the complex in combination with inexpensive computational protocols such as docking algorithms. Unfortunately, in many other cases a number of factors, like protein flexibility or solvent effects, increase the degree of complexity of ligand/protein interaction and these standard techniques are no longer sufficient to describe the binding event. We have experienced and tested these limits in the present study in which we have developed and revealed the mechanism of binding of a new series of potent inhibitors of Adenosine Deaminase. We have first performed a large number of docking calculations, which unfortunately failed to yield reliable results due to the dynamical character of the enzyme and the complex role of the solvent. Thus, we have stepped up the computational strategy using a protocol based on metadynamics. Our approach has allowed dealing with protein motion and solvation during ligand binding and finally identifying the lowest energy binding modes of the most potent compound of the series, 4-decyl-pyrazolo[1,5-a]pyrimidin-7-one. PMID:22238423
An Empirical State Error Covariance Matrix for Batch State Estimation
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the
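One common concrete realization of the steps described above is the "variance of unit weight" scaling familiar from least-squares adjustment: the theoretical covariance is rescaled by the average weighted residual variance, so unmodeled error sources visible in the residuals inflate the reported uncertainty. The toy linear batch problem below is a sketch under that interpretation (problem sizes and noise levels are made up, and this simple scalar rescaling is only one reading of the method outlined in the abstract):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 50, 2
A = np.column_stack([np.ones(m), np.linspace(0.0, 1.0, m)])
sigma_assumed = 0.2
W = np.eye(m) / sigma_assumed**2            # weights from assumed obs errors
x_true = np.array([1.0, -0.5])
y = A @ x_true + rng.normal(0.0, 0.5, m)    # actual noise larger than assumed

N = A.T @ W @ A                             # normal matrix
x_hat = np.linalg.solve(N, A.T @ W @ y)     # weighted least-squares estimate
P_theory = np.linalg.inv(N)                 # traditional covariance: maps only
                                            # the assumed observation errors

r = y - A @ x_hat                           # measurement residuals
s2 = (r @ W @ r) / (m - n)                  # average weighted residual variance
P_emp = s2 * P_theory                       # residual-scaled covariance picks
                                            # up the unmodeled error
```

Because the actual noise (0.5) exceeds the assumed noise (0.2), s2 comes out well above 1 and the empirical covariance is correspondingly larger than the optimistic theoretical one.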
Directory of Open Access Journals (Sweden)
Scheid Anika
2012-07-01
Full Text Available Abstract Background Over the past years, statistical and Bayesian approaches have become increasingly appreciated to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied which is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent stochastic context-free grammar (SCFG that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples, where neither of these two competing approaches generally outperforms the other. Results In this work, we will consider the SCFG based approach in order to perform an analysis on how the quality of generated sample sets and the corresponding prediction accuracy changes when different degrees of disturbances are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors on the distinct sampling probabilities (compared to the exact ones, then it will be an indication that these probabilities do not need to be computed exactly, but it may be sufficient and more efficient to approximate them. Thus, it might then be possible to decrease the worst
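The robustness question posed above can be pictured on a toy distribution: disturb the exact sampling probabilities by a bounded relative error, renormalize, and measure how far the resulting sampling distribution moves. The four-outcome distribution here is purely illustrative, standing in for the SCFG rule probabilities, not derived from any grammar:

```python
import numpy as np

rng = np.random.default_rng(3)
p = np.array([0.5, 0.3, 0.15, 0.05])     # exact sampling probabilities (toy)

def perturb(probs, rel_err):
    """Apply a uniform relative disturbance to each probability,
    then renormalize so the result is again a distribution."""
    q = probs * (1.0 + rng.uniform(-rel_err, rel_err, probs.size))
    return q / q.sum()

# total variation distance between the exact and disturbed distributions
tv = lambda a, b: 0.5 * np.abs(a - b).sum()
tv_small = tv(p, perturb(p, 0.10))       # stays small for small disturbances
```

If sample quality degrades gracefully with this distance, approximate (cheaper) probability computations suffice; that is the hypothesis the study above tests for the SCFG sampler.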
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.
Dissociable genetic contributions to error processing: a multimodal neuroimaging study.
Directory of Open Access Journals (Sweden)
Yigal Agam
Full Text Available Neuroimaging studies reliably identify two markers of error commission: the error-related negativity (ERN, an event-related potential