WorldWideScience

Sample records for account positioning errors

  1. Taking into account positioning errors during three-dimensional conformal radiotherapy for non-small cell lung cancer

    International Nuclear Information System (INIS)

    Fernandez, D.; Maisonobe, J.A.; Leignel, D.; Durdux, C.; Henni, M.; Dessard-Diana, B.; Housset, M.; Giraud, P.

    2009-01-01

    Purpose: According to Report 62 of the International Commission on Radiation Units and Measurements (ICRU), the planning target volume is obtained by adding to the internal margin, which accounts for movements of the target volume during breathing, a set-up (external) margin that accounts for uncertainties in beam positioning. Our objective was to describe a method of planning target volume calculation that takes into account the irradiation technique chosen in the department. (N.C.)
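
    As an illustration of the ICRU 62 margin chain described above, here is a minimal sketch (not the paper's method) in which independent setup uncertainties are assumed to combine in quadrature; all numerical margins are hypothetical:

```python
import math

def setup_margin_quadrature(uncertainties_mm):
    """Combine independent setup uncertainties (1 SD each, in mm) in
    quadrature. The quadrature rule is an illustrative assumption,
    not the method described in the paper."""
    return math.sqrt(sum(u * u for u in uncertainties_mm))

# ICRU 62 volume chain: CTV + internal margin (breathing motion) -> ITV,
# then ITV + setup margin (beam/patient positioning) -> PTV.
internal_margin = 5.0                               # mm, hypothetical motion margin
setup = setup_margin_quadrature([2.0, 1.5, 1.0])    # hypothetical setup uncertainties
ptv_margin = internal_margin + setup
print(f"setup margin = {setup:.2f} mm, total CTV->PTV margin = {ptv_margin:.2f} mm")
```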

  2. 40 CFR 73.37 - Account error.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Account error. 73.37 Section 73.37 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) SULFUR DIOXIDE ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole...

  3. THE INFLUENCE OF ACCOUNTANCY ERRORS ON FINANCIAL AND TAX REPORTS

    Directory of Open Access Journals (Sweden)

    Mariana GURĂU

    2016-06-01

    Full Text Available To make mistakes is human, and an accountant may make mistakes too. Accountancy errors are defined and classified by accounting regulations, which set out the accounting treatment for correcting them. However, even though one of the objectives of accounting normalization is the disconnection between accountancy and taxation, accountancy errors particularly influence tax reports. We will further point out the impact of accountancy errors on financial and tax reports. We will also discuss the accountancy principles that underlie the rules described for correcting such errors.

  4. Error evaluation method for material accountancy measurement. Evaluation of random and systematic errors based on material accountancy data

    International Nuclear Information System (INIS)

    Nidaira, Kazuo

    2008-01-01

    The International Target Values (ITV) give random and systematic measurement uncertainty components as a reference for routinely achievable measurement quality in accountancy measurement. The measurement uncertainty, henceforth called error, needs to be periodically evaluated and checked against the ITV for consistency, since the error varies with measurement methods, instruments, operators, certified reference samples, frequency of calibration, and so on. In this paper an error evaluation method was developed with focus on (1) specifying the error calculation model clearly, (2) always obtaining positive random and systematic error variances, (3) obtaining the probability density distribution of an error variance, and (4) confirming the evaluation method by simulation. In addition, the method was demonstrated by applying it to real data. (author)
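
    A minimal sketch of the kind of decomposition the abstract describes, assuming a balanced layout in which replicates are grouped by, e.g., calibration period; the flooring at zero mirrors the stated goal of always-positive variances, but the estimator itself is a generic ANOVA-style illustration, not the author's model:

```python
import numpy as np

def error_components(groups):
    """Split measurement error into random and systematic variances from
    replicate measurements grouped by calibration period (balanced groups).
    Random variance: pooled within-group variance. Systematic variance:
    between-group variance of the means minus its random contribution,
    floored at zero so both variances stay non-negative."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = len(groups[0])                                   # replicates per group
    random_var = float(np.mean([g.var(ddof=1) for g in groups]))
    means = np.array([g.mean() for g in groups])
    systematic_var = max(float(means.var(ddof=1)) - random_var / n, 0.0)
    return random_var, systematic_var

# simulated data: systematic SD 0.5 between periods, random SD 1.0 within
rng = np.random.default_rng(0)
data = [rng.normal(loc=rng.normal(0.0, 0.5), scale=1.0, size=50) for _ in range(20)]
rand_var, sys_var = error_components(data)
print(f"random variance ~ {rand_var:.2f}, systematic variance ~ {sys_var:.2f}")
```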

  5. Position Error Covariance Matrix Validation and Correction

    Science.gov (United States)

    Frisbee, Joe, Jr.

    2016-01-01

    In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
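
    One standard way to test whether a predicted Gaussian covariance matches observed errors is to compare squared Mahalanobis distances against their chi-square expectation; the sketch below is an illustrative consistency check, not the presentation's actual procedure:

```python
import numpy as np

def covariance_scale_factor(errors, P):
    """Consistency check of a predicted 3x3 position covariance P against
    observed errors (N x 3): if P is correct, the squared Mahalanobis
    distances are chi-square with 3 dof (mean 3), so the returned ratio
    is ~1 for a well-scaled covariance."""
    d2 = np.einsum('ni,ij,nj->n', errors, np.linalg.inv(P), errors)
    return float(d2.mean() / 3.0)

rng = np.random.default_rng(1)
P_true = np.diag([4.0, 1.0, 0.25])       # hypothetical covariance (km^2)
errs = rng.multivariate_normal(np.zeros(3), P_true, size=5000)
print(f"scale vs. true P: {covariance_scale_factor(errs, P_true):.2f}")      # ~1
print(f"scale vs. P/4:    {covariance_scale_factor(errs, P_true / 4):.2f}")  # ~4
```

    A ratio well above 1 flags an overly optimistic covariance; in practice the ratio (or the full distribution of the distances) can drive a scale correction.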

  6. Pendulum Shifts, Context, Error, and Personal Accountability

    Energy Technology Data Exchange (ETDEWEB)

    Harold Blackman; Oren Hester

    2011-09-01

    This paper describes a series of tools that were developed to achieve a balance in understanding LOWs and the human component of events (including accountability) as the INL continues its shift to a learning culture in which people report, are accountable, and are interested in making a positive difference - and want to report because information is handled correctly and the result benefits both the reporting individual and the organization. We present our model for understanding these interrelationships and the initiatives that were undertaken to improve overall performance.

  7. Reward positivity: Reward prediction error or salience prediction error?

    Science.gov (United States)

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.

  8. Sources of Error in Satellite Navigation Positioning

    Directory of Open Access Journals (Sweden)

    Jacek Januszewski

    2017-09-01

    Full Text Available Uninterrupted information about the user’s position can generally be obtained from a satellite navigation system (SNS). At the time of this writing (January 2017), two global SNSs, GPS and GLONASS, are fully operational; two more global systems, Galileo and BeiDou, are under construction. In each SNS the accuracy of the user’s position is affected by three main factors: the accuracy of each satellite position, the accuracy of the pseudorange measurement, and the satellite geometry. The user’s position error is a function of both the pseudorange error, called UERE (User Equivalent Range Error), and the user/satellite geometry, expressed by the appropriate Dilution Of Precision (DOP) coefficient. The pseudorange error is decomposed into two types: the signal-in-space ranging error, called URE (User Range Error), and the user equipment error, UEE. Detailed analyses of URE, UEE, UERE and the DOP coefficients, and of the changes of the DOP coefficients on different days, are presented in this paper.
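
    The UERE x DOP error budget mentioned in the abstract can be sketched as follows; the satellite geometry and the 6 m UERE value are hypothetical:

```python
import numpy as np

def pdop(sat_unit_vectors):
    """Position Dilution Of Precision from receiver-to-satellite unit
    vectors in a local ENU frame (standard geometry-matrix construction)."""
    u = np.asarray(sat_unit_vectors, dtype=float)
    G = np.hstack([-u, np.ones((len(u), 1))])    # rows: [-e, -n, -u, 1]
    Q = np.linalg.inv(G.T @ G)
    return float(np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2]))

# four hypothetical satellites: three at 30 deg elevation spread in azimuth,
# plus one near zenith
els = np.radians([30.0, 30.0, 30.0, 80.0])
azs = np.radians([0.0, 120.0, 240.0, 0.0])
units = np.column_stack([np.cos(els) * np.sin(azs),   # east
                         np.cos(els) * np.cos(azs),   # north
                         np.sin(els)])                # up
uere = 6.0                                            # metres, illustrative budget
print(f"PDOP = {pdop(units):.2f}, 1-sigma position error ~ {uere * pdop(units):.1f} m")
```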

  9. Accounting for optical errors in microtensiometry.

    Science.gov (United States)

    Hinton, Zachary R; Alvarez, Nicolas J

    2018-09-15

    Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate using microtensiometry the contributions of these sources to measured errors in radius and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications for all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane to measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveals a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equal to the capillary radius, i.e. a hemispherical interface. An understanding of these errors allows correct measurement of interfacial curvature and interfacial tension regardless of experimental setup. For the case of microtensiometry, this greatly decreases the time for experimental setup.

  10. IAS 8, Accounting Policies, Changes in Accounting Estimates and Errors – A Closer Look

    OpenAIRE

    Muthupandian, K S

    2008-01-01

    The International Accounting Standards Board issued the revised version of the International Accounting Standard 8, Accounting Policies, Changes in Accounting Estimates and Errors. The objective of IAS 8 is to prescribe the criteria for selecting, applying and changing accounting policies, together with the accounting treatment and disclosure of changes in accounting policies, changes in accounting estimates and the corrections of errors. This article presents a closer look at the standard (o...

  11. Error Analysis of Determining Airplane Location by Global Positioning System

    OpenAIRE

    Hajiyev, Chingiz; Burat, Alper

    1999-01-01

    This paper studies the error analysis of determining airplane location by the Global Positioning System (GPS) using a statistical testing method. The Newton-Raphson method positions the airplane at the intersection point of four spheres. Absolute errors, relative errors and standard deviations have been calculated. The results show that the positioning error of the airplane varies with the coordinates of the GPS satellites and of the airplane.
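
    A minimal sketch of the Newton-Raphson pseudorange solution the abstract refers to (four unknowns: three coordinates plus receiver clock bias); the satellite positions, ground truth, and starting guess are hypothetical:

```python
import numpy as np

def gps_fix(sat_pos, pseudoranges, x0, iters=20):
    """Newton-Raphson solution of rho_i = |p - s_i| + b for the receiver
    position p (ECEF, metres) and clock bias b (metres); exactly four
    satellites give a square 4x4 system per iteration."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        diffs = x[:3] - sat_pos                      # (4, 3)
        ranges = np.linalg.norm(diffs, axis=1)
        residuals = ranges + x[3] - pseudoranges
        H = np.hstack([diffs / ranges[:, None], np.ones((len(sat_pos), 1))])
        x -= np.linalg.solve(H, residuals)           # use lstsq for >4 satellites
    return x[:3], x[3]

# hypothetical satellite positions (~26,000 km from Earth's centre)
sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3,   610e3, 18390e3]])
truth, bias = np.array([1.0e6, 2.0e6, 6.2e6]), 300.0   # synthetic ground truth
rho = np.linalg.norm(sats - truth, axis=1) + bias      # error-free pseudoranges
pos, b = gps_fix(sats, rho, x0=[0.0, 0.0, 6.37e6, 0.0])
print(np.round(pos, 3), round(b, 3))
```

    With noise-free pseudoranges the iteration recovers the synthetic truth; adding pseudorange noise and comparing the fixes against the truth is how the error statistics discussed in the paper could be assembled.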

  12. Accounting for measurement error: a critical but often overlooked process.

    Science.gov (United States)

    Harris, Edward F; Smith, Richard N

    2009-12-01

    Due to instrument imprecision and human inconsistencies, measurements are not free of error. Technical error of measurement (TEM) is the variability encountered between dimensions when the same specimens are measured at multiple sessions. A goal of a data collection regimen is to minimise TEM. The few studies that actually quantify TEM, regardless of discipline, report that it is substantial and can affect results and inferences. This paper reviews some statistical approaches for identifying and controlling TEM. Statistically, TEM is part of the residual ('unexplained') variance in a statistical test, so accounting for TEM, which requires repeated measurements, enhances the chances of finding a statistically significant difference if one exists. The aim of this paper was to review and discuss common statistical designs relating to types of error and statistical approaches to error accountability. This paper addresses issues of landmark location, validity, technical and systematic error, analysis of variance, scaled measures and correlation coefficients in order to guide the reader towards correct identification of true experimental differences. Researchers commonly infer characteristics about populations from comparatively restricted study samples. Most inferences are statistical and, aside from concerns about adequate accounting for known sources of variation with the research design, an important source of variability is measurement error. Variability in locating landmarks that define variables is obvious in odontometrics, cephalometrics and anthropometry, but the same concerns about measurement accuracy and precision extend to all disciplines. With increasing accessibility to computer-assisted methods of data collection, the ease of incorporating repeated measures into statistical designs has improved. Accounting for this technical source of variation increases the chance of finding biologically true differences when they exist.
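
    The TEM for a pair of measurement sessions is conventionally computed with Dahlberg's formula; a minimal sketch with hypothetical data:

```python
import math

def tem(session1, session2):
    """Dahlberg's technical error of measurement for double determinations:
    TEM = sqrt(sum(d_i^2) / (2 * n)), where d_i is the difference between
    the two sessions for specimen i."""
    assert len(session1) == len(session2)
    d2 = sum((a - b) ** 2 for a, b in zip(session1, session2))
    return math.sqrt(d2 / (2 * len(session1)))

# hypothetical repeated measurements of the same five specimens (mm)
s1 = [10.1, 12.4, 9.8, 11.0, 10.6]
s2 = [10.3, 12.1, 9.9, 11.2, 10.5]
print(f"TEM = {tem(s1, s2):.3f} mm")
```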

  13. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  14. A GLIMPSE OF POSITIVE ACCOUNTING THEORY (PAT)

    Directory of Open Access Journals (Sweden)

    Muhammad Rifky Santoso

    2017-11-01

    Full Text Available Positive accounting theory (PAT) has been more developed than normative accounting theory in this era. PAT research has examined the factors that influence management's earnings reporting. The literature contains many studies of external factors that influence management to report earnings, such as bonuses, the debt-equity ratio, political costs, and good governance. Other studies have examined the association between earnings and stock prices. There are still few discussions of the self-motivation of directors or managers in choosing a certain accounting method. Differences in environment, types of industry, and timing of financial statement reporting could be subjects of further research. Judged by the theories introduced by Popper, Kuhn, and Lakatos, PAT has elements of all three; however, PAT has not been categorized as science.

  15. Robust topology optimization accounting for spatially varying manufacturing errors

    DEFF Research Database (Denmark)

    Schevenels, M.; Lazarov, Boyan Stefanov; Sigmund, Ole

    2011-01-01

    This paper presents a robust approach for the design of macro-, micro-, or nano-structures by means of topology optimization, accounting for spatially varying manufacturing errors. The focus is on structures produced by milling or etching; in this case over- or under-etching may cause parts...... optimization problem is formulated in a probabilistic way: the objective function is defined as a weighted sum of the mean value and the standard deviation of the structural performance. The optimization problem is solved by means of a Monte Carlo method: in each iteration of the optimization scheme, a Monte...

  16. Influence of Ephemeris Error on GPS Single Point Positioning Accuracy

    Science.gov (United States)

    Lihua, Ma; Wang, Meng

    2013-09-01

    The Global Positioning System (GPS) user makes use of the navigation message transmitted from GPS satellites to determine its location. Because the receiver uses the satellite's location in position calculations, an ephemeris error, a difference between the expected and actual orbital position of a GPS satellite, reduces user accuracy. The extent of the influence is determined by the precision of the broadcast ephemeris uploaded from the control station. Simulation analysis with the Yuma almanac shows that the maximum positioning error occurs when the ephemeris error lies along the line-of-sight (LOS) direction. Meanwhile, the error depends on the relationship between the observer and the spatial constellation at a given time.
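
    The LOS effect described above follows from projecting the ephemeris error vector onto the user-satellite line of sight; a minimal sketch with hypothetical geometry:

```python
import numpy as np

def range_error_from_ephemeris(sat_pos, user_pos, ephem_error):
    """First-order pseudorange error caused by an ephemeris error: only the
    component of the orbit error along the user-satellite line of sight
    (LOS) maps into the measured range."""
    los = sat_pos - user_pos
    los = los / np.linalg.norm(los)
    return float(np.dot(ephem_error, los))

user = np.array([6378e3, 0.0, 0.0])    # user on the equator (ECEF, metres)
sat = np.array([26560e3, 0.0, 0.0])    # satellite at the user's zenith
err_along = np.array([5.0, 0.0, 0.0])  # 5 m ephemeris error along the LOS
err_cross = np.array([0.0, 5.0, 0.0])  # 5 m cross-track ephemeris error
print(range_error_from_ephemeris(sat, user, err_along))   # -> 5.0
print(range_error_from_ephemeris(sat, user, err_cross))   # -> 0.0
```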

  17. Approaches to relativistic positioning around Earth and error estimations

    Science.gov (United States)

    Puchades, Neus; Sáez, Diego

    2016-01-01

    In the context of relativistic positioning, the coordinates of a given user may be calculated by using suitable information broadcast by a 4-tuple of satellites. Our 4-tuples belong to the Galileo constellation. Recently, we estimated the positioning errors due to uncertainties in the satellite world lines (U-errors). A distribution of U-errors was obtained, at various times, in a set of points covering a large region surrounding Earth. Here, the positioning errors associated with the simplifying assumption that photons move in Minkowski space-time (S-errors) are estimated and compared with the U-errors. Both errors have been calculated for the same points and times to make comparisons possible. For a certain realistic modeling of the world line uncertainties, the estimated S-errors have proved to be smaller than the U-errors, which shows that the approach based on the assumption that the Earth's gravitational field produces negligible effects on photons may be used in a large region surrounding Earth. The applicability of this approach - which simplifies numerical calculations - to positioning problems, and the usefulness of our S-error maps, are pointed out. A better approach, based on the assumption that photons move in the Schwarzschild space-time governed by an idealized Earth, is also analyzed. More accurate descriptions of photon propagation involving non-symmetric space-time structures are not necessary for ordinary positioning and spacecraft navigation around Earth.

  18. Seeing the conflict: an attentional account of reasoning errors.

    Science.gov (United States)

    Mata, André; Ferreira, Mário B; Voss, Andreas; Kollei, Tanja

    2017-12-01

    In judgment and reasoning, intuition and deliberation can agree on the same responses, or they can be in conflict and suggest different responses. Incorrect responses to conflict problems have traditionally been interpreted as a sign of faulty problem solving - an inability to solve the conflict. However, such errors might emerge earlier, from insufficient attention to the conflict. To test this attentional hypothesis, we manipulated the conflict in reasoning problems and used eye-tracking to measure attention. Across several measures, correct responders paid more attention than incorrect responders to conflict problems, and they discriminated between conflict and no-conflict problems better than incorrect responders. These results are consistent with a two-stage account of reasoning, whereby sound problem solving in the second stage can only lead to accurate responses when sufficient attention is paid in the first stage.

  19. A Research on the Responsibility of Accounting Professionals to Determine and Prevent Accounting Errors and Frauds: Edirne Sample

    Directory of Open Access Journals (Sweden)

    Semanur Adalı

    2017-09-01

    Full Text Available In this study, the ethical dimensions of accounting professionals in relation to accounting errors and fraud were examined. First, general and technical information about accounting is provided. Then, terminology on error, fraud and ethics in accounting is discussed. The study also includes recent statistics on accounting errors and fraud, as well as a literature review. As the research methodology, a questionnaire was distributed to 36 accounting professionals residing in Edirne, Turkey. The collected data were then entered into the SPSS package program for analysis. The study revealed very important results. Accounting professionals think that accounting chambers do not organize enough seminars/conferences on errors and fraud. They also believe that the supervision and disciplinary boards of professional accounting chambers fulfill their responsibilities only partially. The attitude of professional accounting chambers towards errors, fraud and ethics is considered neither strict nor lenient, yet most accounting professionals are aware of colleagues who have received disciplinary penalties. External audit is indicated as the most important and effective tool to prevent errors and fraud, but internal audit and internal control are valued as well. According to accounting professionals, most errors occur because of incorrect data received from clients and during recording. Fraud is generally committed in order to obtain credit from banks and to benefit the organization by concealing the firm's real situation. Finally, accounting professionals state that being honest, trustworthy and impartial is the basis of the accounting profession and that accountants must adhere to ethical rules.

  20. Aliasing errors in measurements of beam position and ellipticity

    International Nuclear Information System (INIS)

    Ekdahl, Carl

    2005-01-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
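
    The aliasing mechanism can be sketched with the standard image-current model of a round beam pipe: a first-harmonic position estimate from N discrete pickups is contaminated by higher multipoles that alias into the dipole term, so a four-electrode estimate is biased for an off-center beam while more electrodes reduce the error (an illustrative model, not the DARHT-II simulation):

```python
import numpy as np

def wall_signal(theta, r, phi, b=1.0):
    """Image-current density (up to a constant) on a round pipe of radius b
    for a pencil beam at polar position (r, phi) - standard 2-D result."""
    return (b**2 - r**2) / (b**2 + r**2 - 2 * b * r * np.cos(theta - phi))

def bpm_estimate(n_electrodes, r, phi, b=1.0):
    """First-harmonic (dipole) x-position estimate from n discrete
    electrodes; higher multipoles alias into the dipole term."""
    th = 2 * np.pi * np.arange(n_electrodes) / n_electrodes
    s = wall_signal(th, r, phi, b)
    return float(b * np.sum(s * np.cos(th)) / np.sum(s))

# beam well off-centre at x = 0.5 b: four electrodes are visibly biased,
# sixteen electrodes all but remove the aliasing error
print(f"x estimate, N=4:  {bpm_estimate(4, 0.5, 0.0):.4f}")   # biased (~0.59)
print(f"x estimate, N=16: {bpm_estimate(16, 0.5, 0.0):.4f}")  # ~0.5000
```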

  2. Performance of muon reconstruction including Alignment Position Errors for 2016 Collision Data

    CERN Document Server

    CMS Collaboration

    2016-01-01

    Since the start of the 2016 run, muon reconstruction has used non-zero Alignment Position Errors to account for the residual uncertainties in the muon chambers' positions. Significant improvements are obtained, in particular for the startup phase after opening/closing the muon detector. Performance results are presented for real data and MC simulations, for both the offline reconstruction and the High-Level Trigger.

  3. Positional error in automated geocoding of residential addresses

    Directory of Open Access Journals (Sweden)

    Talbot Thomas O

    2003-12-01

    Full Text Available Abstract Background Public health applications using geographic information system (GIS technology are steadily increasing. Many of these rely on the ability to locate where people live with respect to areas of exposure from environmental contaminants. Automated geocoding is a method used to assign geographic coordinates to an individual based on their street address. This method often relies on street centerline files as a geographic reference. Such a process introduces positional error in the geocoded point. Our study evaluated the positional error caused during automated geocoding of residential addresses and how this error varies between population densities. We also evaluated an alternative method of geocoding using residential property parcel data. Results Positional error was determined for 3,000 residential addresses using the distance between each geocoded point and its true location as determined with aerial imagery. Error was found to increase as population density decreased. In rural areas of an upstate New York study area, 95 percent of the addresses geocoded to within 2,872 m of their true location. Suburban areas revealed less error where 95 percent of the addresses geocoded to within 421 m. Urban areas demonstrated the least error where 95 percent of the addresses geocoded to within 152 m of their true location. As an alternative to using street centerline files for geocoding, we used residential property parcel points to locate the addresses. In the rural areas, 95 percent of the parcel points were within 195 m of the true location. In suburban areas, this distance was 39 m while in urban areas 95 percent of the parcel points were within 21 m of the true location. Conclusion Researchers need to determine if the level of error caused by a chosen method of geocoding may affect the results of their project. As an alternative method, property data can be used for geocoding addresses if the error caused by traditional methods is
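
    The distance-percentile summaries reported above can be reproduced with a great-circle distance and a nearest-rank percentile; the coordinate pairs below are hypothetical, not the study's data:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (spherical Earth, R = 6371 km)."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def p95(values):
    """95th-percentile error by the nearest-rank method."""
    v = sorted(values)
    return v[max(0, math.ceil(0.95 * len(v)) - 1)]

# hypothetical (geocoded, true) coordinate pairs: lat1, lon1, lat2, lon2
pairs = [(42.650, -73.750, 42.6505, -73.7498),
         (42.700, -73.800, 42.7002, -73.8010),
         (42.750, -73.900, 42.7530, -73.9000)]
errors = [haversine_m(*p) for p in pairs]
print([round(e, 1) for e in errors], "p95 =", round(p95(errors), 1), "m")
```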

  4. Perceptual learning eases crowding by reducing recognition errors but not position errors.

    Science.gov (United States)

    Xiong, Ying-Zi; Yu, Cong; Zhang, Jun-Yun

    2015-08-01

    When an observer reports a letter flanked by additional letters in the visual periphery, the response errors (the crowding effect) may result from failure to recognize the target letter (recognition errors), from mislocating a correctly recognized target letter at a flanker location (target misplacement errors), or from reporting a flanker as the target letter (flanker substitution errors). Crowding can be reduced through perceptual learning. However, it is not known how perceptual learning operates to reduce crowding. In this study we trained observers with a partial-report task (Experiment 1), in which they reported the central target letter of a three-letter string presented in the visual periphery, or a whole-report task (Experiment 2), in which they reported all three letters in order. We then assessed the impact of training on recognition of both unflanked and flanked targets, with particular attention to how perceptual learning affected the types of errors. Our results show that training improved target recognition but not single-letter recognition, indicating that training indeed affected crowding. However, training did not reduce target misplacement errors or flanker substitution errors. This dissociation between target recognition and flanker substitution errors supports the view that flanker substitution may be more likely a by-product (due to response bias), rather than a cause, of crowding. Moreover, the dissociation is not consistent with hypothesized mechanisms of crowding that would predict reduced positional errors.

  5. Force Reproduction Error Depends on Force Level, whereas the Position Reproduction Error Does Not

    NARCIS (Netherlands)

    Onneweer, B.; Mugge, W.; Schouten, Alfred Christiaan

    2016-01-01

    When reproducing a previously perceived force or position humans make systematic errors. This study determined the effect of force level on force and position reproduction, when both target and reproduction force are self-generated with the same hand. Subjects performed force reproduction tasks at

  6. Predicting positional error of MLC using volumetric analysis

    International Nuclear Information System (INIS)

    Hareram, E.S.

    2008-01-01

    IMRT normally uses multiple beamlets (small beam widths) for a particular field, so it is imperative to maintain the positional accuracy of the MLC in order to deliver the integrated computed dose accurately. Different manufacturers have reported high precision for MLC devices, with leaf positional accuracy nearing 0.1 mm, but measuring and rectifying errors at this accuracy is very difficult. Various methods are used to check MLC position, and volumetric analysis is one such technique. A volumetric approach was adopted in our method using a Primus machine and a 0.6 cc chamber at 5 cm depth in Perspex. An MLC positional error of 1 mm introduces a dose error of 20%, making this method more sensitive than the others.

  7. The error model and experiment of measuring angular position error based on laser collimation

    Science.gov (United States)

    Cai, Yangyang; Yang, Jing; Li, Jiakun; Feng, Qibo

    2018-01-01

    The rotary axis is the reference component of rotational motion. Among the six degree-of-freedom (DOF) geometric errors of a rotary axis, angular position error is the most critical factor impairing machining precision. In this paper, a method of measuring the angular position error of a rotary axis based on laser collimation is thoroughly researched; the error model is established, and 360° full-range measurement is realized by using a high-precision servo turntable. The change in the spatial attitude of each moving part is described accurately by 3×3 transformation matrices, and the influences of various factors on the measurement results are analyzed in detail. Experimental results show that the measurement method can achieve high measurement accuracy over a large measurement range.
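
    The 3×3 transformation-matrix bookkeeping mentioned in the abstract can be sketched as follows: the angular position error is recovered from the residual rotation between commanded and actual attitudes (the angles are hypothetical):

```python
import numpy as np

def rot_z(theta):
    """3x3 rotation matrix about the z (rotary) axis - the building block
    of the transformation matrices describing each moving part."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

commanded = np.radians(30.0)
actual = np.radians(30.002)                  # hypothetical 7.2 arcsec error
R_err = rot_z(actual) @ rot_z(commanded).T   # residual rotation
err = np.arctan2(R_err[1, 0], R_err[0, 0])   # recover the residual angle
print(f"angular position error = {np.degrees(err) * 3600:.2f} arcsec")
```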

  8. Negative cognitive errors and positive illusions for negative divorce events: predictors of children's psychological adjustment.

    Science.gov (United States)

    Mazur, E; Wolchik, S A; Sandler, I N

    1992-12-01

    This study examined the relations among negative cognitive errors regarding hypothetical negative divorce events, positive illusions about those same events, actual divorce events, and psychological adjustment in 38 8- to 12-year-old children whose parents had divorced within the previous 2 years. Children's scores on a scale of negative cognitive errors (catastrophizing, overgeneralizing, and personalizing) correlated significantly with self-reported symptoms of anxiety and self-esteem, and with maternal reports of behavior problems. Children's scores on a scale measuring positive illusions (high self-regard, illusion of personal control, and optimism for the future) correlated significantly with less self-reported aggression. Both appraisal types accounted for variance in some measures of symptomatology beyond that explained by actual events. There was no significant association between children's negative cognitive errors and positive illusions. The implications of these results for theories of negative cognitive errors and of positive illusions, as well as for future research, are discussed.

  9. Positional Velar Fronting: An Updated Articulatory Account

    Science.gov (United States)

    Byun, Tara McAllister

    2012-01-01

    This study develops the hypothesis that the child-specific phenomenon of positional velar fronting can be modeled as the product of phonologically encoded articulatory limitations unique to immature speakers. Children have difficulty executing discrete tongue movements, preferring to move the tongue and jaw as a single unit. This predisposes the…

  10. Source position error influence on industry CT image quality

    International Nuclear Information System (INIS)

    Cong Peng; Li Zhipeng; Wu Haifeng

    2004-01-01

    Based on a simulation study, the influence of source position error on industrial CT (ICT) image quality was investigated, and parameters valuable for ICT design were obtained. A vivid container CT image was also acquired from the CT testing system. (authors)

  11. Accounting for model error due to unresolved scales within ensemble Kalman filtering

    OpenAIRE

    Mitchell, Lewis; Carrassi, Alberto

    2014-01-01

    We propose a method to account for model error due to unresolved scales in the context of the ensemble transform Kalman filter (ETKF). The approach extends to this class of algorithms the deterministic model error formulation recently explored for variational schemes and extended Kalman filter. The model error statistic required in the analysis update is estimated using historical reanalysis increments and a suitable model error evolution law. Two different versions of the method are describe...

  12. Slide-position errors degrade machined optical component quality

    International Nuclear Information System (INIS)

    Arnold, J.B.; Steger, P.J.; Burleson, R.R.

    1975-01-01

    An ultraprecision lathe is being developed at the Oak Ridge Y-12 Plant to fabricate optical components for use in high-energy laser systems. The lathe can produce virtually any mirror shape that is symmetrical about an axis of revolution. Two basic types of mirrors are fabricated on the lathe: (1) mirrors machined using a single slide motion (such as flats and cylinders), and (2) mirrors produced by two coordinated slide motions (such as hyperbolic reflectors; large, true-radius reflectors; and other contoured-surface reflectors). The surface-finish quality of typical mirrors machined by a single axis of motion is better than 13 nm, peak to valley, an order of magnitude better than the surface finishes of mirrors produced by two axes of motion. Surface finish refers to short-wavelength figure errors that are visibly detectable. The primary cause of the inability to produce significantly better surface finishes on contoured mirrors has been determined to be positional errors in the slide positioning systems. These errors must be corrected before contoured surface finishes comparable to those of flats and cylinders can be machined on the lathe.

  13. Common positioning errors in panoramic radiography: A review

    Energy Technology Data Exchange (ETDEWEB)

    Randon, Rafael Henrique Nunes [Stomathology and Oral Diagnostic Program, School of Dentistry of Sao Paulo, University of Sao Paulo, Sao Paulo (Brazil); Pereira, Yamba Carla Lara [Biology Dental Buco Graduate Program, School of Dentistry of Piracicaba, University of Campinas, Piracicaba (Brazil); Nascimento, Glauce Crivelaro do [Psychobiology Graduate Program, School of Philosophy, Science and Literature of Ribeirao Preto, University of Sao Paulo, Ribeirao Preto (Brazil)

    2014-03-15

    Professionals performing radiographic examinations are responsible for maintaining optimal image quality for accurate diagnoses. These professionals must competently execute techniques such as film manipulation and processing to minimize patient exposure to radiation. Improper performance by the professional and/or patient may result in a radiographic image of unsatisfactory quality, which can lead to a misdiagnosis and the development of an inadequate treatment plan. Currently, the most commonly performed extraoral examination is panoramic radiography. The invention of panoramic radiography has improved image quality while decreasing exposure to radiation, at a low cost. However, this technique requires careful, accurate positioning of the patient's teeth and surrounding maxillofacial bone structure within the focal trough. Therefore, we reviewed the literature on the most common types of positioning errors in panoramic radiography in order to suggest the correct techniques. We also discuss how to recognize the most common positioning errors, such as errors in the positioning of the patient's head, tongue, chin, or body.

  14. Common positioning errors in panoramic radiography: A review

    International Nuclear Information System (INIS)

    Randon, Rafael Henrique Nunes; Pereira, Yamba Carla Lara; Nascimento, Glauce Crivelaro do

    2014-01-01

    Professionals performing radiographic examinations are responsible for maintaining optimal image quality for accurate diagnoses. These professionals must competently execute techniques such as film manipulation and processing to minimize patient exposure to radiation. Improper performance by the professional and/or patient may result in a radiographic image of unsatisfactory quality, which can lead to a misdiagnosis and the development of an inadequate treatment plan. Currently, the most commonly performed extraoral examination is panoramic radiography. The invention of panoramic radiography has improved image quality while decreasing exposure to radiation, at a low cost. However, this technique requires careful, accurate positioning of the patient's teeth and surrounding maxillofacial bone structure within the focal trough. Therefore, we reviewed the literature on the most common types of positioning errors in panoramic radiography in order to suggest the correct techniques. We also discuss how to recognize the most common positioning errors, such as errors in the positioning of the patient's head, tongue, chin, or body.

  15. ERRORS AND FRAUD IN ACCOUNTING. THE ROLE OF EXTERNAL AUDIT IN FIGHTING CORRUPTION

    Directory of Open Access Journals (Sweden)

    Luminita Ionescu

    2017-12-01

    Full Text Available Accounting errors and fraud are common in most businesses, but there is a difference between fraud and misinterpretation of communication or accounting regulations. The role of management in preventing fraud has become more important over the last decades, and the importance of auditing in curbing corruption is increasingly recognized. There is a strong connection between fraud and corruption, accelerated by electronic systems and modern platforms. The most recent developments tend to confirm that external auditing curbs corruption, owing to international accounting and auditing standards at national and regional levels. Thus, better implementation of accounting standards and high-quality external control could prevent errors and fraud in accounting, and reduce corruption as well. The aim of this paper is to present some particular aspects of errors and fraud in accounting, and how external audit could ensure accuracy and accountability in financial reporting.

  16. Accounting for L2 learners’ errors in word stress placement

    Directory of Open Access Journals (Sweden)

    Clara Herlina Karjo

    2016-01-01

    Full Text Available Stress placement in English words is governed by highly complicated rules. Thus, assigning stress correctly in English words has been a challenging task for L2 learners, especially Indonesian learners, since their L1 does not recognize such a stress system. This study explores the production of English word stress by 30 university students. The method used for this study is an immediate repetition task. Participants are instructed to identify the stress placement of 80 English words which are auditorily presented as stimuli and to immediately repeat the words with correct stress placement. The objectives of this study are to find out whether English word stress placement is problematic for L2 learners and to investigate the phonological factors which account for these problems. The research reveals that L2 learners differ in their ability to produce the stress, but three-syllable words are more problematic than two-syllable words. Moreover, misplacement of stress is caused by, among other factors, the influence of vowel length and vowel height.

  17. Coordinated joint motion control system with position error correction

    Science.gov (United States)

    Danko, George L.

    2016-04-05

    Disclosed are an articulated hydraulic machine, a supporting control system, and a control method for the same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between the actual end effector trajectory and the desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.

  18. CORRECTING ACCOUNTING ERRORS AND ACKNOWLEDGING THEM IN THE EARNINGS TO THE PERIOD

    Directory of Open Access Journals (Sweden)

    BUSUIOCEANU STELIANA

    2013-08-01

    Full Text Available The accounting information is reliable when it does not contain significant errors, is not biased and accurately represents the transactions and events. In the light of the regulations complying with European directives, the information is significant if its omission or wrong presentation may influence the decisions users make based on annual financial statements. Given that the professional practice sees errors in registering or interpreting information, as well as omissions and wrong calculations, the Romanian accounting regulations stipulate treatments for correcting errors in compliance with international references. Thus, the correction of the errors corresponding to the current period is accomplished based on the retained earnings in the case of significant errors or on the current earnings when the errors are insignificant. The different situations in the professional practice triggered by errors require both knowledge of regulations and professional rationale to be addressed.

  19. Nonlinear control of ships minimizing the position tracking errors

    Directory of Open Access Journals (Sweden)

    Svein P. Berge

    1999-07-01

    Full Text Available In this paper, a nonlinear tracking controller with integral action for ships is presented. The controller is based on state feedback linearization. Exponential convergence of the vessel-fixed position and velocity errors is proven by using Lyapunov stability theory. Since we only have two control devices, a rudder and a propeller, we choose to control the longship and sideship position errors to zero while the heading is stabilized indirectly. A Virtual Reference Point (VRP) is defined at the bow or ahead of the ship. The VRP is used for tracking control. It is shown that the distance from the center of rotation to the VRP influences the stability of the zero dynamics. By selecting the VRP at the bow or even ahead of the bow, the damping in yaw can be increased and the zero dynamics is stabilized. Hence, the heading angle will be less sensitive to wind, currents and waves. The control law is simulated using a nonlinear model of the Japanese training ship Shiojimaru with excellent results. Wind forces are added to demonstrate the robustness and performance of the integral controller.

  20. SU-E-T-377: Inaccurate Positioning Might Introduce Significant MapCheck Calibration Error in Flatten Filter Free Beams

    International Nuclear Information System (INIS)

    Wang, S; Chao, C; Chang, J

    2014-01-01

    Purpose: This study investigates the calibration error of detector sensitivity for MapCheck due to inaccurate positioning of the device, which is not taken into account by the current commercial iterative calibration algorithm. We hypothesize the calibration is more vulnerable to the positioning error for the flatten filter free (FFF) beams than the conventional flatten filter flattened beams. Methods: MapCheck2 was calibrated with 10MV conventional and FFF beams, with careful alignment and with 1cm positioning error during calibration, respectively. Open fields of 37cmx37cm were delivered to gauge the impact of resultant calibration errors. The local calibration error was modeled as a detector independent multiplication factor, with which propagation error was estimated with positioning error from 1mm to 1cm. The calibrated sensitivities, without positioning error, were compared between the conventional and FFF beams to evaluate the dependence on the beam type. Results: The 1cm positioning error leads to 0.39% and 5.24% local calibration error in the conventional and FFF beams respectively. After propagating to the edges of MapCheck, the calibration errors become 6.5% and 57.7%, respectively. The propagation error increases almost linearly with respect to the positioning error. The difference of sensitivities between the conventional and FFF beams was small (0.11 ± 0.49%). Conclusion: The results demonstrate that the positioning error is not handled by the current commercial calibration algorithm of MapCheck. Particularly, the calibration errors for the FFF beams are ~9 times greater than those for the conventional beams with identical positioning error, and a small 1mm positioning error might lead to up to 8% calibration error. Since the sensitivities are only slightly dependent of the beam type and the conventional beam is less affected by the positioning error, it is advisable to cross-check the sensitivities between the conventional and FFF beams to detect
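The roughly linear propagation described above can be illustrated with a toy multiplicative model (a sketch, not the study's method: the 5.24% local error is taken from the abstract, but the step count of 9 from centre to edge is an assumption chosen for illustration):

```python
# Toy model: if each neighbour-to-neighbour sensitivity comparison carries the
# same multiplicative local calibration error, the error compounds over the
# chain of detectors from the centre to the edge of the array.
local_error = 0.0524   # 5.24% local calibration error reported for the FFF beam
steps = 9              # assumed number of compounding steps to the array edge
compounded = (1 + local_error) ** steps - 1
print(f"{compounded:.1%}")   # of the order of the reported 57.7% edge error
```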

  1. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

    In this paper, we propose a new empirical version of the Fama and French model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models of measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance and the protection of investors.

  2. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    Science.gov (United States)

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
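Regression calibration, one of the approaches named above, can be sketched in a simplified linear-model setting (a Cox fit would need a survival library; the slope, noise levels, and use of replicate measurements to estimate reliability are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(0.0, 1.0, n)               # true covariate, e.g. log(total REM)
y = 0.5 * x + rng.normal(0.0, 1.0, n)     # outcome with true slope 0.5

# Two error-prone replicate measurements of x.
w1 = x + rng.normal(0.0, 1.0, n)
w2 = x + rng.normal(0.0, 1.0, n)

# Naive fit on one replicate: attenuated toward zero by measurement error.
naive = np.polyfit(w1, y, 1)[0]

# Regression calibration: rescale the fit on the replicate mean by its
# reliability, estimated from the covariance of the replicates.
wbar = (w1 + w2) / 2.0
reliability = np.cov(w1, w2)[0, 1] / np.var(wbar, ddof=1)
calibrated = np.polyfit(wbar, y, 1)[0] / reliability
```

As the abstract notes, using the replicate data both reduces bias (here, `calibrated` recovers the true slope of 0.5 while `naive` is attenuated toward 0.25) and tightens the estimate.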

  3. Estimating the acute health effects of coarse particulate matter accounting for exposure measurement error.

    Science.gov (United States)

    Chang, Howard H; Peng, Roger D; Dominici, Francesca

    2011-10-01

    In air pollution epidemiology, there is a growing interest in estimating the health effects of coarse particulate matter (PM) with aerodynamic diameter between 2.5 and 10 μm. Coarse PM concentrations can exhibit considerable spatial heterogeneity because the particles travel shorter distances and do not remain suspended in the atmosphere for an extended period of time. In this paper, we develop a modeling approach for estimating the short-term effects of air pollution in time series analysis when the ambient concentrations vary spatially within the study region. Specifically, our approach quantifies the error in the exposure variable by characterizing, on any given day, the disagreement in ambient concentrations measured across monitoring stations. This is accomplished by viewing monitor-level measurements as error-prone repeated measurements of the unobserved population average exposure. Inference is carried out in a Bayesian framework to fully account for uncertainty in the estimation of model parameters. Finally, by using different exposure indicators, we investigate the sensitivity of the association between coarse PM and daily hospital admissions based on a recent national multisite time series analysis. Among Medicare enrollees from 59 US counties between the period 1999 and 2005, we find a consistent positive association between coarse PM and same-day admission for cardiovascular diseases.

  4. Sensorless SPMSM Position Estimation Using Position Estimation Error Suppression Control and EKF in Wide Speed Range

    Directory of Open Access Journals (Sweden)

    Zhanshan Wang

    2014-01-01

    Full Text Available The control of a high-performance alternating current (AC) motor drive under sensorless operation needs accurate estimation of the rotor position. In this paper, a method of accurately estimating the rotor position by using both position estimation based on a complex-number motor model and a position estimation error suppression proportional-integral (PI) controller is proposed for the sensorless control of the surface permanent magnet synchronous motor (SPMSM). In order to guarantee the accuracy of rotor position estimation in the flux-weakening region, a scheme for identifying the permanent magnet flux of the SPMSM by an extended Kalman filter (EKF) is also proposed; together these form an effective combined method to realize the sensorless control of the SPMSM with high accuracy. The simulation results demonstrate the validity and feasibility of the proposed position/speed estimation system.
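The error-suppression PI idea can be sketched as a phase-locked-loop style observer that drives the position estimation error to zero (a generic building block of sensorless PMSM drives, not the paper's exact scheme; the gains, time step, and constant-speed assumption are illustrative):

```python
import math

kp, ki = 200.0, 8000.0                 # assumed PI gains
dt = 1e-4                              # assumed 10 kHz update rate
theta_true, omega_true = 0.0, 100.0    # true rotor angle (rad) and speed (rad/s)
theta_hat, omega_hat, integ = 0.0, 0.0, 0.0

for _ in range(20000):                 # simulate 2 s
    theta_true += omega_true * dt
    # Position estimation error, bounded via sin() as in back-EMF observers.
    e = math.sin(theta_true - theta_hat)
    integ += ki * e * dt
    omega_hat = kp * e + integ         # PI output doubles as the speed estimate
    theta_hat += omega_hat * dt        # integrate to get the position estimate
```

After the transient, `theta_hat` tracks the rotor angle and `omega_hat` converges to the true speed; the integral term removes the steady-state error at constant speed.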

  5. Propagation of positional error in 3D GIS

    NARCIS (Netherlands)

    Biljecki, Filip; Heuvelink, Gerard B.M.; Ledoux, Hugo; Stoter, Jantien

    2015-01-01

    While error propagation in GIS is a topic that has received a lot of attention, it has not been researched with 3D GIS data. We extend error propagation to 3D city models using a Monte Carlo simulation on a use case of annual solar irradiation estimation of building rooftops for assessing the
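The Monte Carlo approach can be sketched on a toy rooftop-area computation (the dimensions and the 0.2 m positional standard deviation are assumed; the actual use case propagates error through a solar irradiation model on 3D city geometry):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Nominal roof footprint (metres), perturbed by assumed positional error (std 0.2 m).
width = rng.normal(10.0, 0.2, n)
depth = rng.normal(8.0, 0.2, n)

# Propagate through the model by evaluating it per realisation; the spread of
# the outputs is the propagated positional uncertainty.
areas = width * depth
print(areas.mean(), areas.std())
```

The sample standard deviation matches the first-order analytic propagation sqrt((8·0.2)² + (10·0.2)²) ≈ 2.56 m², which is the kind of check that validates a Monte Carlo setup.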

  6. Determination of global positioning system (GPS) receiver clock errors: impact on positioning accuracy

    International Nuclear Information System (INIS)

    Yeh, Ta-Kang; Hwang, Cheinway; Xu, Guochang; Wang, Chuan-Sheng; Lee, Chien-Chih

    2009-01-01

    Enhancing positioning precision is the primary pursuit of global positioning system (GPS) users. To achieve this goal, most studies have focused on the relationship between GPS receiver clock errors and GPS positioning precision. This study utilizes undifferenced phase data to calculate GPS clock errors and compares them directly with the frequency of a cesium clock, to verify the clock errors estimated by the method used in this paper. The frequency stability calculated in this paper (the indirect method) and that measured by the National Standard Time and Frequency Laboratory (NSTFL) of Taiwan (the direct method) match to 1.5 × 10⁻¹² (the value from this study was smaller than that from NSTFL), suggesting that the proposed technique has reached a certain level of quality. The built-in quartz clocks in the GPS receivers yield relative frequency offsets that are 3-4 orders of magnitude higher than those of rubidium clocks. The frequency stability of the quartz clocks is on average two orders of magnitude worse than that of the rubidium clock. Using the rubidium clock instead of the quartz clock, the horizontal and vertical positioning accuracies were improved by 26-78% (0.6-3.6 mm) and 20-34% (1.3-3.0 mm), respectively, for a short baseline. These improvements are 7-25% (0.3-1.7 mm) and 11% (1.7 mm) for a long baseline. Our experiments show that the frequency stability of the clock, rather than the relative frequency offset, is the governing factor of positioning accuracy.
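Frequency stability at the level quoted above is conventionally quantified with the Allan deviation; a minimal sketch for simulated white frequency noise (the 1 × 10⁻¹² noise level and single averaging time are assumptions for illustration):

```python
import numpy as np

def allan_deviation(y):
    """Non-overlapping Allan deviation of fractional-frequency samples y,
    evaluated at the basic averaging time of the data."""
    d = np.diff(np.asarray(y))
    return np.sqrt(0.5 * np.mean(d * d))

rng = np.random.default_rng(3)
# Simulated white frequency noise at the 1e-12 level (assumed).
y = rng.normal(0.0, 1.0e-12, 200_000)
adev = allan_deviation(y)
```

For white frequency noise the Allan deviation at the basic averaging time equals the per-sample noise level, so `adev` recovers ≈ 1 × 10⁻¹² here; a full analysis would sweep the averaging time.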

  7. Dependence of fluence errors in dynamic IMRT on leaf-positional errors varying with time and leaf number

    International Nuclear Information System (INIS)

    Zygmanski, Piotr; Kung, Jong H.; Jiang, Steve B.; Chin, Lee

    2003-01-01

    In d-MLC based IMRT, leaves move along a trajectory that lies within a user-defined tolerance (TOL) about the ideal trajectory specified in a d-MLC sequence file. The MLC controller measures leaf positions multiple times per second and corrects them if they deviate from ideal positions by a value greater than TOL. The magnitude of leaf-positional errors resulting from finite mechanical precision depends on the performance of the MLC motors executing leaf motions and is generally larger if leaves are forced to move at higher speeds. The maximum value of leaf-positional errors can be limited by decreasing TOL. However, due to the inherent time delay in the MLC controller, this may not happen at all times. Furthermore, decreasing the leaf tolerance results in a larger number of beam hold-offs, which, in turn, leads to a longer delivery time and, paradoxically, to higher chances of leaf-positional errors (≤TOL). On the other hand, the magnitude of leaf-positional errors depends on the complexity of the fluence map to be delivered. Recently, it has been shown that it is possible to determine the actual distribution of leaf-positional errors either by imaging moving MLC apertures with a digital imager or by analysis of an MLC log file saved by the MLC controller. This leads to an important question: what is the relation between the distribution of leaf-positional errors and fluence errors? In this work, we introduce an analytical method to determine this relation in dynamic IMRT delivery. We model MLC errors as random leaf-positional (RLP) errors described by a truncated normal distribution defined by two characteristic parameters: a standard deviation σ and a cut-off value Δx₀ (Δx₀ ∼ TOL). We quantify fluence errors for two cases: (i) Δx₀ ≫ σ (unrestricted normal distribution) and (ii) Δx₀ ≪ σ (Δx₀-limited normal distribution). We show that the average fluence error of an IMRT field is proportional to (i) σ/ALPO and (ii) Δx₀/ALPO, respectively, where
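The RLP error model can be sketched by sampling from a truncated normal and checking the unrestricted-case scaling (the values of σ, the cut-off, and ALPO, presumably the average leaf-pair opening whose definition is truncated in the abstract, are all assumed numbers):

```python
import numpy as np

rng = np.random.default_rng(1)

def rlp_errors(sigma, cutoff, n):
    """Leaf-positional errors: normal(0, sigma) truncated to |error| <= cutoff."""
    e = rng.normal(0.0, sigma, n)
    while (bad := np.abs(e) > cutoff).any():   # resample out-of-tolerance draws
        e[bad] = rng.normal(0.0, sigma, bad.sum())
    return e

sigma, cutoff, alpo = 0.5, 5.0, 20.0           # mm; cutoff >> sigma, i.e. case (i)
e = rlp_errors(sigma, cutoff, 500_000)

# In case (i) the mean absolute leaf error is sigma * sqrt(2/pi), so the
# relative fluence error scales like sigma / ALPO.
rel_fluence_error = np.abs(e).mean() / alpo
```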

  8. On a Test of Hypothesis to Verify the Operating Risk Due to Accountancy Errors

    Directory of Open Access Journals (Sweden)

    Paola Maddalena Chiodini

    2014-12-01

    Full Text Available According to the Statement on Auditing Standards (SAS) No. 39 (AU 350.01), audit sampling is defined as “the application of an audit procedure to less than 100% of the items within an account balance or class of transactions for the purpose of evaluating some characteristic of the balance or class”. The audit process develops in different steps: some are not susceptible to sampling procedures, while others may be carried out using sampling techniques. The auditor may also be interested in two types of accounting error: the number of incorrect records in the sample that exceeds a given threshold (the natural error rate), which may be indicative of possible fraud, and the mean amount of monetary errors found in incorrect records. The aim of this study is to monitor both types of errors jointly through an appropriate system of hypotheses, with particular attention to the type II error, which indicates the risk of failing to report errors that exceed the upper precision limits.
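One way to operationalise a test on the error rate in attribute sampling is a one-sided binomial test on the number of incorrect records found in the sample (a generic sketch, not the paper's system of hypotheses; the sample size, error count, and 8% tolerable rate below are hypothetical):

```python
from math import comb

def p_value_upper(n, k, p0):
    """P(X <= k) for X ~ Binomial(n, p0): a small value is evidence that the
    population error rate is below the tolerable rate p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k + 1))

# Example: 2 incorrect records found in 100 sampled; tolerable error rate 8%.
pv = p_value_upper(100, 2, 0.08)
print(pv)   # small p-value: the sample supports an error rate below 8%
```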

  9. Accounting for measurement error in human life history trade-offs using structural equation modeling.

    Science.gov (United States)

    Helle, Samuli

    2018-03-01

    Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research owing to the lack of experimental approach. Problems with causal inference arising from measurement error in independent variables, whether related either to inaccurate measurement technique or validity of measurements, seem not well-known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated in reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in the underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.

  10. POSSIBILITIES TO CORRECT ACCOUNTING ERRORS IN THE CONTEXT OF COMPLYING WITH THE OPENING BALANCE SHEET INTANGIBILITY PRINCIPLE

    Directory of Open Access Journals (Sweden)

    PALIU – POPA LUCIA

    2017-12-01

    Full Text Available There are still different views at the global level on the intangibility of the opening balance sheet in the process of convergence and accounting harmonization, with a clear difference between the Anglo-Saxon accounting system and that of Western European continental influence, in the sense that the former is less rigid with regard to the application of the principle of intangibility, whereas the continental system applies the provisions of this principle in their entirety. Looking from this perspective, and taking into account the major importance of the financial statements, which are intended to provide information for all categories of users, i.e. both for managers and for users external to the entity whose position does not allow them to request specific reports, we considered it useful to conduct a study aimed at correcting errors in the context of compliance with the opening balance sheet intangibility principle versus the need to adjust the comparative information on the financial position, financial performance and changes in the financial position generated by the correction of errors from previous years. In this regard, we perform a comparative analysis of the application of the intangibility principle both in the two major accounting systems and at the international level, and we approach issues related to the correction of errors in terms of the main differences between the provisions of the continental accounting regulations (represented by the European and national ones in our approach), the Anglo-Saxon regulations, and those of the international referential on the opening balance sheet intangibility.

  11. Multiple imputation to account for measurement error in marginal structural models

    Science.gov (United States)

    Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.

    2015-01-01

    Background Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2 (95% CI: 0.6, 2.3)]. The HR for current smoking and therapy (0.4 (95% CI: 0.2, 0.7)) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Conclusions Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338

  12. Multiple Imputation to Account for Measurement Error in Marginal Structural Models.

    Science.gov (United States)

    Edwards, Jessie K; Cole, Stephen R; Westreich, Daniel; Crane, Heidi; Eron, Joseph J; Mathews, W Christopher; Moore, Richard; Boswell, Stephen L; Lesko, Catherine R; Mugavero, Michael J

    2015-09-01

    Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and nondifferential measurement error in a marginal structural model. We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3,686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality (hazard ratio [HR]: 1.2 [95% confidence interval [CI] = 0.6, 2.3]). The HR for current smoking and therapy [0.4 (95% CI = 0.2, 0.7)] was similar to the HR for no smoking and therapy (0.4; 95% CI = 0.2, 0.6). Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies.
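The misclassification-correction idea can be sketched with a toy multiple imputation of a binary exposure using an internal validation subgroup (the 30% prevalence, 15% misclassification rate, validation fraction, and number of imputations are all assumed; the real analysis imputes inside an inverse-probability-weighted Cox model rather than estimating a prevalence):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
true_smoke = rng.binomial(1, 0.3, n)        # true status (assumed 30% prevalence)
# Observed status misclassified with assumed 15% error probability.
obs = np.where(rng.random(n) < 0.85, true_smoke, 1 - true_smoke)
val = np.zeros(n, dtype=bool)
val[:1000] = True                           # internal validation subset

# Estimate P(true = 1 | observed) from the validation data.
p1 = {o: true_smoke[val & (obs == o)].mean() for o in (0, 1)}

# Draw m completed datasets and pool the quantity of interest (here, prevalence).
m = 20
pooled = np.mean([
    np.where(val, true_smoke, rng.random(n) < np.array([p1[o] for o in obs])).mean()
    for _ in range(m)
])
```

The naive observed prevalence is inflated by false positives (≈ 36% here), while the pooled imputed estimate recovers the assumed 30%; in a full analysis, Rubin's rules would also combine the within- and between-imputation variances.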

  13. Accounting for False Positive HIV Tests: Is Visceral Leishmaniasis Responsible?

    Science.gov (United States)

    Shanks, Leslie; Ritmeijer, Koert; Piriou, Erwan; Siddiqui, M Ruby; Kliescikova, Jarmila; Pearce, Neil; Ariti, Cono; Muluneh, Libsework; Masiga, Johnson; Abebe, Almaz

    2015-01-01

    Co-infection with HIV and visceral leishmaniasis is an important consideration in treatment of either disease in endemic areas. Diagnosis of HIV in resource-limited settings relies on rapid diagnostic tests used together in an algorithm. A limitation of the HIV diagnostic algorithm is that it is vulnerable to falsely positive reactions due to cross reactivity. It has been postulated that visceral leishmaniasis (VL) infection can increase this risk of false positive HIV results. This cross-sectional study compared the risk of false positive HIV results in VL patients with non-VL individuals. Participants were recruited from 2 sites in Ethiopia. The Ethiopian algorithm of a tiebreaker using 3 rapid diagnostic tests (RDTs) was used to test for HIV. The gold standard test was the Western Blot, with indeterminate results resolved by PCR testing. Every RDT screen positive individual was included for testing with the gold standard along with 10% of all negatives. The final analysis included 89 VL and 405 non-VL patients. HIV prevalence was found to be 12.8% (47/367) in the VL group compared to 7.9% (200/2526) in the non-VL group. The RDT algorithm in the VL group yielded 47 positives, 4 false positives, and 38 negatives. The same algorithm for those without VL had 200 positives, 14 false positives, and 191 negatives. Specificity and positive predictive value for the group with VL was less than the non-VL group; however, the difference was not found to be significant (p = 0.52 and p = 0.76, respectively). The test algorithm yielded a high number of HIV false positive results. However, we were unable to demonstrate a significant difference between groups with and without VL disease. This suggests that the presence of endemic visceral leishmaniasis alone cannot account for the high number of false positive HIV results in our study.
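
    The specificity and positive predictive value comparison in this record reduces to arithmetic on the reported counts (reading the reported "positives" as true positives, which is our interpretation of the abstract):

```python
def specificity(tn, fp):
    """True negatives over all truly negative subjects tested."""
    return tn / (tn + fp)

def ppv(tp, fp):
    """Positive predictive value: true positives over all test positives."""
    return tp / (tp + fp)

# Counts as reported for the VL and non-VL groups.
vl     = dict(tp=47,  fp=4,  tn=38)
non_vl = dict(tp=200, fp=14, tn=191)

for name, g in [("VL", vl), ("non-VL", non_vl)]:
    print(name,
          round(specificity(g["tn"], g["fp"]), 3),
          round(ppv(g["tp"], g["fp"]), 3))
```

    This reproduces the direction of the reported finding: both measures are slightly lower in the VL group, though the abstract notes the difference is not statistically significant.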

  14. Accounting for False Positive HIV Tests: Is Visceral Leishmaniasis Responsible?

    Directory of Open Access Journals (Sweden)

    Leslie Shanks

    Full Text Available Co-infection with HIV and visceral leishmaniasis is an important consideration in treatment of either disease in endemic areas. Diagnosis of HIV in resource-limited settings relies on rapid diagnostic tests used together in an algorithm. A limitation of the HIV diagnostic algorithm is that it is vulnerable to falsely positive reactions due to cross reactivity. It has been postulated that visceral leishmaniasis (VL infection can increase this risk of false positive HIV results. This cross sectional study compared the risk of false positive HIV results in VL patients with non-VL individuals.Participants were recruited from 2 sites in Ethiopia. The Ethiopian algorithm of a tiebreaker using 3 rapid diagnostic tests (RDTs was used to test for HIV. The gold standard test was the Western Blot, with indeterminate results resolved by PCR testing. Every RDT screen positive individual was included for testing with the gold standard along with 10% of all negatives. The final analysis included 89 VL and 405 non-VL patients. HIV prevalence was found to be 12.8% (47/ 367 in the VL group compared to 7.9% (200/2526 in the non-VL group. The RDT algorithm in the VL group yielded 47 positives, 4 false positives, and 38 negatives. The same algorithm for those without VL had 200 positives, 14 false positives, and 191 negatives. Specificity and positive predictive value for the group with VL was less than the non-VL group; however, the difference was not found to be significant (p = 0.52 and p = 0.76, respectively.The test algorithm yielded a high number of HIV false positive results. However, we were unable to demonstrate a significant difference between groups with and without VL disease. This suggests that the presence of endemic visceral leishmaniasis alone cannot account for the high number of false positive HIV results in our study.

  15. On the effect of systematic errors in near real time accountancy

    International Nuclear Information System (INIS)

    Avenhaus, R.

    1987-01-01

    Systematic measurement errors have a decisive impact on nuclear materials accountancy. This has been demonstrated on various occasions for a fixed number of inventory periods, i.e. for situations where the overall probability of detection is taken as the measure of effectiveness. In the framework of Near Real Time Accountancy (NRTA), however, such analyses have not yet been performed. In this paper, sequential test procedures are considered which are based on the so-called MUF residuals. It is shown that, if the decision maker does not know the systematic error variance, the average run lengths tend towards infinity if this variance is equal to or larger than that of the random error. Furthermore, if the decision maker knows this variance, the average run length for constant loss or diversion is not shorter than that without loss or diversion. These results cast some doubt on the present practice of data evaluation, where systematic errors are tacitly assumed to persist for an infinite time. In fact, information about the time dependence of the variances of these errors has to be gathered so that the efficiency of NRTA evaluation methods can be estimated realistically.
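
    A sequential test on MUF residuals of the kind discussed here can be illustrated with a one-sided CUSUM and its run length. The reference value k, the alarm threshold h, and the simulated residuals are illustrative choices on our part, not parameters from the paper:

```python
import random

random.seed(7)

def cusum_run_length(residuals, k=0.5, h=5.0):
    """One-sided CUSUM: accumulate positive drift above the reference
    value k; return the period of the first alarm (statistic > h),
    or None if no alarm occurs."""
    s = 0.0
    for n, r in enumerate(residuals, start=1):
        s = max(0.0, s + r - k)
        if s > h:
            return n
    return None

# No loss: standardized residuals around zero.
no_loss = [random.gauss(0.0, 1.0) for _ in range(200)]
# Constant loss/diversion: residuals shifted to a positive mean.
loss = [random.gauss(1.5, 1.0) for _ in range(200)]
```

    Under a constant shift the statistic drifts upward and the run length is short; the paper's point is that unknown or persistent systematic error variance can push the no-loss average run length toward infinity, undermining this kind of evaluation.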

  16. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    Science.gov (United States)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. 
The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion.
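
    The local model-error basis idea can be sketched as follows, reduced to a single nearest neighbour (K = 1) and hard-coded dictionary entries for illustration; the paper grows the dictionary during the MCMC run and uses the K nearest neighbours to form the basis:

```python
import math

# Dictionary of stored runs: parameter vector -> model-error vector
# (detailed minus approximate forward response). Entries are illustrative.
dictionary = {
    (1.0, 2.0): [0.5, -0.2, 0.1],
    (3.0, 1.0): [0.1,  0.4, -0.3],
    (2.0, 2.5): [0.6, -0.1, 0.2],
}

def nearest_error(theta):
    """Model-error vector of the nearest stored parameter set (K = 1)."""
    key = min(dictionary, key=lambda p: math.dist(p, theta))
    return dictionary[key]

def remove_model_error(residual, theta):
    """Project the residual onto the local error basis and subtract
    that component before the likelihood is evaluated."""
    e = nearest_error(theta)
    coef = sum(r * b for r, b in zip(residual, e)) / sum(b * b for b in e)
    return [r - coef * b for r, b in zip(residual, e)]
```

    With K > 1 the projection becomes a least-squares fit onto the span of several neighbouring error vectors; the single-neighbour case above keeps the arithmetic explicit.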

  17. Detecting errors and anomalies in computerized materials control and accountability databases

    International Nuclear Information System (INIS)

    Whiteson, R.; Hench, K.; Yarbro, T.; Baumgart, C.

    1998-01-01

    The Automated MC and A Database Assessment project is aimed at improving anomaly and error detection in materials control and accountability (MC and A) databases and increasing confidence in the data that they contain. Anomalous data resulting in poor categorization of nuclear material inventories greatly reduces the value of the database information to users. Therefore it is essential that MC and A data be assessed periodically for anomalies or errors. Anomaly detection can identify errors in databases and thus provide assurance of the integrity of data. An expert system has been developed at Los Alamos National Laboratory that examines these large databases for anomalous or erroneous data. For several years, MC and A subject matter experts at Los Alamos have been using this automated system to examine the large amounts of accountability data that the Los Alamos Plutonium Facility generates. These data are collected and managed by the Material Accountability and Safeguards System, a near-real-time computerized nuclear material accountability and safeguards system. This year they have expanded the user base, customizing the anomaly detector for the varying requirements of different groups of users. This paper describes the progress in customizing the expert systems to the needs of the users of the data and reports on their results.

  18. Accounting for response misclassification and covariate measurement error improves power and reduces bias in epidemiologic studies.

    Science.gov (United States)

    Cheng, Dunlei; Branscum, Adam J; Stamey, James D

    2010-07-01

    To quantify the impact of ignoring misclassification of a response variable and measurement error in a covariate on statistical power, and to develop software for sample size and power analysis that accounts for these flaws in epidemiologic data. A Monte Carlo simulation-based procedure is developed to illustrate the differences in design requirements and inferences between analytic methods that properly account for misclassification and measurement error and those that do not in regression models for cross-sectional and cohort data. We found that failure to account for these flaws in epidemiologic data can lead to a substantial reduction in statistical power, over 25% in some cases. The proposed method substantially reduced bias by up to a ten-fold margin compared to naive estimates obtained by ignoring misclassification and mismeasurement. We recommend as routine practice that researchers account for errors in measurement of both response and covariate data when determining sample size, performing power calculations, or analyzing data from epidemiological studies. 2010 Elsevier Inc. All rights reserved.
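
    The power loss from outcome misclassification that this study quantifies can be reproduced in miniature with a Monte Carlo simulation of a two-sample proportion test. The sample size, event probabilities, sensitivity, and specificity below are illustrative assumptions, not values from the paper:

```python
import random, math

random.seed(3)

def simulate_power(n, p_exposed, p_unexposed, sens, spec, reps=500):
    """Estimated power of a two-sample z-test when the binary outcome
    is recorded with imperfect sensitivity and specificity."""
    hits = 0
    for _ in range(reps):
        def draw(p):
            y = random.random() < p  # true outcome
            # Observed positive w.p. sens if truly positive,
            # w.p. (1 - spec) if truly negative.
            return (random.random() < sens) if y else (random.random() > spec)
        a = sum(draw(p_exposed) for _ in range(n))
        b = sum(draw(p_unexposed) for _ in range(n))
        p1, p2 = a / n, b / n
        pbar = (a + b) / (2 * n)
        se = math.sqrt(2 * pbar * (1 - pbar) / n)
        if se > 0 and abs(p1 - p2) / se > 1.96:
            hits += 1
    return hits / reps

perfect = simulate_power(300, 0.30, 0.15, sens=1.0, spec=1.0)
noisy   = simulate_power(300, 0.30, 0.15, sens=0.8, spec=0.9)
```

    Misclassification attenuates the observed difference in proportions, so the noisy scenario needs a larger sample to reach the same power — the design consequence the paper's software is built to quantify.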

  19. Accounting for measurement error in log regression models with applications to accelerated testing.

    Directory of Open Access Journals (Sweden)

    Robert Richardson

    Full Text Available In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.

  20. Accounting for measurement error in log regression models with applications to accelerated testing.

    Science.gov (United States)

    Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M

    2018-01-01

    In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.
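
    The weighted-regression approximation estimated by Iteratively Re-weighted Least Squares can be sketched for one predictor. The weight function below (inverse squared residual with a floor) is a generic robust choice used for illustration; the paper derives its weights from the measurement-error and additive-error model:

```python
def wls(x, y, w):
    """Closed-form weighted least squares for one predictor."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
    a = ybar - b * xbar
    return a, b

def irls(x, y, iters=10):
    """IRLS: refit with weights recomputed from the current residuals."""
    w = [1.0] * len(x)
    for _ in range(iters):
        a, b = wls(x, y, w)
        w = [1.0 / max(abs(yi - (a + b * xi)), 0.1) ** 2
             for xi, yi in zip(x, y)]
    return a, b

x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 8.0, 9.8, 30.0]   # last point is a gross outlier
a, b = irls(x, y)                      # converges near slope 2
```

    Each pass downweights poorly fitted points, so the outlier's influence shrinks and the fit settles near the trend of the remaining data.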

  1. Error Analysis System for Spacecraft Navigation Using the Global Positioning System (GPS)

    Science.gov (United States)

    Truong, S. H.; Hart, R. C.; Hartman, K. R.; Tomcsik, T. L.; Searl, J. E.; Bernstein, A.

    1997-01-01

    The Flight Dynamics Division (FDD) at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) is currently developing improved space-navigation filtering algorithms to use the Global Positioning System (GPS) for autonomous real-time onboard orbit determination. In connection with a GPS technology demonstration on the Small Satellite Technology Initiative (SSTI)/Lewis spacecraft, FDD analysts and programmers have teamed with the GSFC Guidance, Navigation, and Control Branch to develop the GPS Enhanced Orbit Determination Experiment (GEODE) system. The GEODE system consists of a Kalman filter operating as a navigation tool for estimating the position, velocity, and additional states required to accurately navigate the orbiting Lewis spacecraft by using astrodynamic modeling and GPS measurements from the receiver. A parallel effort at the FDD is the development of a GPS Error Analysis System (GEAS) that will be used to analyze and improve navigation filtering algorithms during development phases and during in-flight calibration. For GEAS, the Kalman filter theory is extended to estimate the errors in position, velocity, and other error states of interest. The estimation of errors in physical variables at regular intervals will allow the time, cause, and effect of navigation system weaknesses to be identified. In addition, by modeling a sufficient set of navigation system errors, a system failure that causes an observed error anomaly can be traced and accounted for. The GEAS software is formulated using Object Oriented Design (OOD) techniques implemented in the C++ programming language on a Sun SPARC workstation. Phase 1 of this effort is the development of a basic system to be used to evaluate navigation algorithms implemented in the GEODE system. This paper presents the GEAS mathematical methodology, systems and operations concepts, and software design and implementation. Results from the use of the basic system to evaluate

  2. SU-E-T-195: Gantry Angle Dependency of MLC Leaf Position Error

    Energy Technology Data Exchange (ETDEWEB)

    Ju, S; Hong, C; Kim, M; Chung, K; Kim, J; Han, Y; Ahn, S; Chung, S; Shin, E; Shin, J; Kim, H; Kim, D; Choi, D [Department of Radiation Oncology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of)

    2014-06-01

    Purpose: The aim of this study was to investigate the gantry angle dependency of the multileaf collimator (MLC) leaf position error. Methods: An automatic MLC quality assurance system (AutoMLCQA) was developed to evaluate the gantry angle dependency of the MLC leaf position error using an electronic portal imaging device (EPID). To eliminate the EPID position error due to gantry rotation, we designed a reference marker (RM) that could be inserted into the wedge mount. After setting up the EPID, a reference image was taken of the RM using an open field. Next, an EPID-based picket-fence test (PFT) was performed without the RM. These procedures were repeated at 45° intervals of the gantry angle. A total of eight reference images and PFT image sets were analyzed using in-house software. The average MLC leaf position error was calculated at five pickets (-10, -5, 0, 5, and 10 cm) in accordance with general PFT guidelines using in-house software. This test was carried out for four linear accelerators. Results: The average MLC leaf position errors were within the set criterion of <1 mm (actual errors ranged from -0.7 to 0.8 mm) for all gantry angles, but significant gantry angle dependency was observed in all machines. The error was smaller at a gantry angle of 0° but increased toward the positive direction with gantry angle increments in the clockwise direction. The error reached a maximum value at a gantry angle of 90° and then gradually decreased until 180°. In the counter-clockwise rotation of the gantry, the same pattern of error was observed but the error increased in the negative direction. Conclusion: The AutoMLCQA system was useful for evaluating the MLC leaf position error at various gantry angles without the EPID position error. The gantry angle dependency should be considered during MLC leaf position error analysis.

  3. Effects of Target Positioning Error on Motion Compensation for Airborne Interferometric SAR

    Directory of Open Access Journals (Sweden)

    Li Yin-wei

    2013-12-01

    Full Text Available The measurement inaccuracies of the Inertial Measurement Unit/Global Positioning System (IMU/GPS) as well as the positioning error of the target may contribute to the residual uncompensated motion errors in the MOtion COmpensation (MOCO) approach based on the measurement of IMU/GPS. Aiming at the effects of target positioning error on MOCO for airborne interferometric SAR, the paper first deduces a mathematical model of the residual motion error brought about by target positioning error under squint conditions. Based on this model, the paper analyzes the effects on the residual motion error caused by system sampling delay error, Doppler center frequency error, and reference DEM error, which result in target positioning error. Then, the paper discusses the effects of the reference DEM error on the interferometric SAR image quality, the interferometric phase, and the coherence coefficient. The research provides theoretical bases for the MOCO precision in signal processing of airborne high precision SAR and airborne repeat-pass interferometric SAR.

  4. Positioning errors assessed with kV cone-beam CT for image-guided prostate radiotherapy

    International Nuclear Information System (INIS)

    Li Jiongyan; Guo Xiaomao; Yao Weiqiang; Wang Yanyang; Ma Jinli; Chen Jiayi; Zhang Zhen; Feng Yan

    2010-01-01

    Objective: To assess set-up errors measured with kilovoltage cone-beam CT (KV-CBCT), and the impact of online corrections on margins required to account for set-up variability during IMRT for patients with prostate cancer. Methods: Seven patients with prostate cancer undergoing IMRT were enrolled onto the study. The KV-CBCT scans were acquired at least twice weekly. After initial set-up using the skin marks, a CBCT scan was acquired and registered with the planning CT to determine the set-up errors using an auto grey-scale registration software. Corrections would be made by moving the table if the set-up errors were considered clinically significant (i.e., >2 mm). A second CBCT scan was acquired immediately after the corrections to evaluate the residual error. PTV margins were derived to account for the measured set-up errors and residual errors determined for this group of patients. Results: 197 KV-CBCT images in total were acquired. The random and systematic positioning errors and calculated PTV margins without correction, in mm, were: a) lateral 3.1, 2.1, 9.3; b) longitudinal 1.5, 1.8, 5.1; c) vertical 4.2, 3.7, 13.0. The random and systematic positioning errors and calculated PTV margins with correction, in mm, were: a) lateral 1.1, 0.9, 3.4; b) longitudinal 0.7, 1.1, 2.5; c) vertical 1.1, 1.3, 3.7. Conclusions: With the guidance of online KV-CBCT, set-up errors could be reduced significantly for patients with prostate cancer receiving IMRT. The margin required after online CBCT correction for the patients enrolled in the study would be approximately 3-4 mm. (authors)
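
    The quoted margins are consistent with the widely used van Herk population recipe, M = 2.5Σ + 0.7σ, if the first number of each triple is read as the systematic component Σ and the second as the random component σ. That reading is our assumption; the abstract does not state which formula was used:

```python
def ptv_margin(sigma_sys, sigma_rand):
    """van Herk population margin: M = 2.5*Sigma + 0.7*sigma (mm)."""
    return 2.5 * sigma_sys + 0.7 * sigma_rand

# Triples from the abstract without correction, read as (Sigma, sigma).
without_correction = {
    "lateral":      (3.1, 2.1),
    "longitudinal": (1.5, 1.8),
    "vertical":     (4.2, 3.7),
}

for axis, (s, r) in without_correction.items():
    print(axis, round(ptv_margin(s, r), 1))
# -> 9.2, 5.0, 13.1 mm, close to the quoted 9.3, 5.1, and 13.0 mm
```

    The with-correction triples reproduce the quoted 3.4/2.5/3.7 mm margins the same way, which supports this reading of the numbers.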

  5. Common Positioning Errors in Digital Panoramic Radiographies Taken In Mashhad Dental School

    Directory of Open Access Journals (Sweden)

    Ali Bagherpour

    2018-06-01

    Full Text Available Introduction: The present study was aimed at evaluating common positioning errors on panoramic radiographs taken in the Radiology Department of Mashhad Dental School. Materials and methods: The study sample included 1,990 digital panoramic radiographs taken in the Radiology Department of Mashhad Dental School by a Planmeca Promax (Planmeca Oy, Helsinki, Finland) during a 2-year period (2010–2012). All radiographs, according to dentition and sex, were evaluated for positioning errors. Results: There were 1,927 (96.8%) panoramic radiographs with one or more errors. While the number of errors in each image varied between one and five, most images had one error (48.4%). The most common error was that the tongue was not in contact with the hard palate (94.8%). "Open lips" was an error not seen in any patients. Conclusions: Positioning errors are common in panoramic radiography. The most common error observed in this study was a failure to place the tongue on the palate. This error and the other errors reported in this study can be reduced by training the technicians, spending a little more time on patient positioning, and communicating more effectively with the patients.

  6. Evaluation of positioning errors of the patient using cone beam CT megavoltage

    International Nuclear Information System (INIS)

    Garcia Ruiz-Zorrilla, J.; Fernandez Leton, J. P.; Zucca Aparicio, D.; Perez Moreno, J. M.; Minambres Moro, A.

    2013-01-01

    Image-guided radiation therapy allows the positioning of the patient in the treatment unit to be assessed and corrected, thus reducing the uncertainties due to patient set-up. This work assesses the systematic and random errors derived from the corrections made to a series of patients with different diseases through an off-line megavoltage cone beam CT (CBCT) protocol. (Author)

  7. Unification of behavioural, computational and neural accounts of word production errors in post-stroke aphasia

    Directory of Open Access Journals (Sweden)

    Marija Tochadse

    Full Text Available Neuropsychological assessment, brain imaging and computational modelling have augmented our understanding of the multifaceted functional deficits in people with language disorders after stroke. Despite the volume of research using each technique, no studies have attempted to assimilate all three approaches in order to generate a unified behavioural-computational-neural model of post-stroke aphasia. The present study included data from 53 participants with chronic post-stroke aphasia and merged: aphasiological profiles based on a detailed neuropsychological assessment battery which was analysed with principal component and correlational analyses; measures of the impairment taken from Dell's computational model of word production; and the neural correlates of both behavioural and computational accounts analysed by voxel-based correlational methodology. As a result, all three strands coincide with the separation of semantic and phonological stages of aphasic naming, revealing the prominence of these dimensions for the explanation of aphasic performance. Over and above three previously described principal components (phonological ability, semantic ability, executive-demand), we observed auditory working memory as a novel factor. While the phonological Dell parameter was uniquely related to phonological errors/factor, the semantic parameter was less clear-cut, being related to both semantic errors and omissions, and loading heavily with semantic ability and auditory working memory factors. The close relationship between the semantic Dell parameter and omission errors recurred in their high lesion-correlate overlap in the anterior middle temporal gyrus. In addition, the simultaneous overlap of the lesion correlate of omission errors with more dorsal temporal regions, associated with the phonological parameter, highlights the multiple drivers that underpin this error type. The novel auditory working memory factor was located along left superior

  8. Development of a simulation program to study error propagation in the reprocessing input accountancy measurements

    International Nuclear Information System (INIS)

    Sanfilippo, L.

    1987-01-01

    A physical model and a computer program have been developed to simulate all the measurement operations involved in the Isotopic Dilution Analysis technique currently applied in the Volume-Concentration method for Reprocessing Input Accountancy, together with their errors or uncertainties. The simulator can readily solve a number of problems related to the measurement activities of the plant operator and the inspector. The program, written in Fortran 77, is based on a particular Monte Carlo technique named 'Random Sampling'; a full description of the code is reported.

  9. Differences among Job Positions Related to Communication Errors at Construction Sites

    Science.gov (United States)

    Takahashi, Akiko; Ishida, Toshiro

    In a previous study, we classified the communication errors at construction sites as faulty intention and message pattern, inadequate channel pattern, and faulty comprehension pattern. This study seeks to evaluate the degree of risk of communication errors and to investigate differences among people in various job positions in perception of communication error risk. Questionnaires based on the previous study were administered to construction workers (n=811; 149 administrators, 208 foremen and 454 workers). Administrators evaluated all patterns of communication error risk equally. However, foremen and workers evaluated communication error risk differently in each pattern. The common contributing factors to all patterns were inadequate arrangements before work and inadequate confirmation. Some factors were common among patterns but other factors were particular to a specific pattern. To help prevent future accidents at construction sites, administrators should understand how people in various job positions perceive communication errors and propose human factors measures to prevent such errors.

  10. Modeling the probability distribution of positional errors incurred by residential address geocoding

    Directory of Open Access Journals (Sweden)

    Mazumdar Soumya

    2007-01-01

    Full Text Available Background The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Results Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (>15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that were not capable of being fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Conclusion Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.
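
    The basic quantity being modelled, the positional error as a vector difference between a geocode and the true location, together with the median error length, is simple to compute. The coordinates below are illustrative, not data from the study:

```python
import math

# Geocoded vs. true positions (planar coordinates in metres; illustrative).
geocodes = [(100.0, 250.0), (480.0, 90.0), (10.0, 35.0)]
truth    = [(140.0, 220.0), (480.0, 60.0), (12.0, 35.0)]

# Positional error: vector difference geocode - truth.
errors = [(gx - tx, gy - ty) for (gx, gy), (tx, ty) in zip(geocodes, truth)]

# Error lengths and their median, the summary statistic quoted above.
lengths = sorted(math.hypot(dx, dy) for dx, dy in errors)
median_error = lengths[len(lengths) // 2]
```

    The study goes further, fitting mixtures of bivariate t distributions to the full two-dimensional error cloud rather than just summarising error lengths.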

  11. MATLAB implementation of satellite positioning error overbounding by generalized Pareto distribution

    Science.gov (United States)

    Ahmad, Khairol Amali; Ahmad, Shahril; Hashim, Fakroul Ridzuan

    2018-02-01

    In the satellite navigation community, error overbounding has been implemented in the process of integrity monitoring. In this work, MATLAB programming is used to implement the overbounding of the satellite positioning error CDF. Using a reference trajectory, the horizontal position errors (HPE) are computed and their non-parametric distribution function is given by the empirical cumulative distribution function (ECDF). According to the results, these errors have a heavy-tailed distribution. Since the ECDF of the HPE in an urban environment is not Gaussian distributed, the ECDF is overbounded with the CDF of the generalized Pareto distribution (GPD).
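
    Overbounding in the CDF sense can be checked directly: a candidate GPD must sit below the empirical CDF of the threshold exceedances at every sample point, so that it over-states the tail risk. The shape and scale values and the sample exceedances below are illustrative on our part, not fitted values from the paper:

```python
import math

def gpd_cdf(x, shape, scale):
    """CDF of the generalized Pareto distribution (x >= 0)."""
    if shape == 0:
        return 1 - math.exp(-x / scale)
    return 1 - (1 + shape * x / scale) ** (-1 / shape)

# HPE exceedances above a chosen threshold u, in metres (illustrative).
exceedances = sorted([0.2, 0.5, 0.9, 1.4, 2.8])

def overbounds(shape, scale):
    """True if the candidate GPD CDF lies below the ECDF at every
    sample point, i.e. it overbounds the tail."""
    n = len(exceedances)
    ecdf = [(i + 1) / n for i in range(n)]
    return all(gpd_cdf(x, shape, scale) <= e
               for x, e in zip(exceedances, ecdf))
```

    A heavier-tailed, larger-scale candidate overbounds this sample while a lighter one does not, which is why the fit must be checked pointwise against the ECDF rather than accepted on goodness-of-fit alone.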

  12. CALIBRATION ERRORS IN THE CAVITY BEAM POSITION MONITOR SYSTEM AT THE ATF2

    CERN Document Server

    Cullinan, F; Joshi, N; Lyapin, A

    2011-01-01

    It has been shown at the Accelerator Test Facility at KEK that it is possible to run a system of 37 cavity beam position monitors (BPMs) and achieve high working resolution. However, stability of the calibration constants (position scale and radio frequency (RF) phase) over a three- to four-week running period is yet to be demonstrated. During the calibration procedure, random beam jitter gives rise to a statistical error in the position scale, and slow orbit drift in position and tilt causes systematic errors in both the position scale and RF phase. These errors are dominant and have been evaluated for each BPM. The results are compared with the errors expected after a tested method of beam jitter subtraction has been applied.

  13. Correcting a fundamental error in greenhouse gas accounting related to bioenergy

    International Nuclear Information System (INIS)

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K.; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; Hove, Sybille van den

    2012-01-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.

  14. Correcting a fundamental error in greenhouse gas accounting related to bioenergy.

    Science.gov (United States)

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy

    2012-06-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of 'additional biomass' - biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy - can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.

  15. Correcting a fundamental error in greenhouse gas accounting related to bioenergy

    DEFF Research Database (Denmark)

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc

    2012-01-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.

  16. Accounting for baseline differences and measurement error in the analysis of change over time.

    Science.gov (United States)

    Braun, Julia; Held, Leonhard; Ledergerber, Bruno

    2014-01-15

    If change over time is compared in several groups, it is important to take baseline values into account so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as a covariate. A solution to this problem has recently been provided: by fitting a longitudinal mixed-effects model to all data including the baseline observations and subsequently calculating the expected change conditional on the underlying baseline value, groups with the same baseline characteristics can be compared. In this article, we present an extended approach in which a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and time-dependent covariates can also be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question of whether a joint infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy. Copyright © 2013 John Wiley & Sons, Ltd.
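The distortion described here is the classical errors-in-variables attenuation. A minimal simulation (with invented variances, not the study's model) sketches why naively regressing change on the observed baseline biases the estimated dependence toward zero:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
true_base = rng.normal(0.0, 1.0, n)                  # true baseline values
obs_base = true_base + rng.normal(0.0, 1.0, n)       # observed, with error
change = -0.5 * true_base + rng.normal(0.0, 0.1, n)  # change depends on truth

# Regressing change on the *observed* baseline attenuates the slope
# (errors-in-variables bias). With equal signal and error variances, the
# expected slope is -0.5 * 1/(1+1) = -0.25, not the true -0.5.
X = np.column_stack([obs_base, np.ones(n)])
slope, intercept = np.linalg.lstsq(X, change, rcond=None)[0]
print(f"naive slope: {slope:.2f} (true effect: -0.50)")
```

A mixed-effects model of the kind the authors use corrects this by modelling the baseline as a latent quantity rather than a fixed covariate.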

  17. An IMU-Aided Body-Shadowing Error Compensation Method for Indoor Bluetooth Positioning.

    Science.gov (United States)

    Deng, Zhongliang; Fu, Xiao; Wang, Hanhua

    2018-01-20

    Research on indoor positioning technologies has recently become a hotspot because of the huge social and economic potential of indoor location-based services (ILBS). Wireless positioning signals have a considerable attenuation in received signal strength (RSS) when transmitting through human bodies, which would cause significant ranging and positioning errors in RSS-based systems. This paper mainly focuses on the body-shadowing impairment of RSS-based ranging and positioning, and derives a mathematical expression of the relation between the body-shadowing effect and the positioning error. In addition, an inertial measurement unit-aided (IMU-aided) body-shadowing detection strategy is designed, and an error compensation model is established to mitigate the effect of body-shadowing. A Bluetooth positioning algorithm with body-shadowing error compensation (BP-BEC) is then proposed to improve both the positioning accuracy and the robustness in indoor body-shadowing environments. Experiments are conducted in two indoor test beds, and the performance of both the BP-BEC algorithm and the algorithms without body-shadowing error compensation (named no-BEC) is evaluated. The results show that the BP-BEC outperforms the no-BEC by about 60.1% and 73.6% in terms of positioning accuracy and robustness, respectively. Moreover, the execution time of the BP-BEC algorithm is also evaluated, and results show that the convergence speed of the proposed algorithm has an insignificant effect on real-time localization.
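As an illustration of how body-shadowing attenuation corrupts RSS-based ranging (the starting point for the compensation model above), here is a minimal sketch using the standard log-distance path-loss model; the reference power, path-loss exponent, and 8 dB attenuation figure are illustrative assumptions, not values from the paper:

```python
import math

def rss_to_distance(rss_dbm, p0_dbm=-45.0, n=2.5):
    """Invert the log-distance path-loss model RSS = P0 - 10*n*log10(d)."""
    return 10.0 ** ((p0_dbm - rss_dbm) / (10.0 * n))

# A tag 5 m away; a human body in the path attenuates the signal by ~8 dB
# (illustrative figure).
true_rss = -45.0 - 10.0 * 2.5 * math.log10(5.0)
shadowed_rss = true_rss - 8.0

d_uncompensated = rss_to_distance(shadowed_rss)
# If an IMU-based detector flags the body-shadowing condition, the known
# attenuation can be added back before ranging (a stand-in for the
# paper's compensation model):
d_compensated = rss_to_distance(shadowed_rss + 8.0)
print(f"without compensation: {d_uncompensated:.2f} m, with: {d_compensated:.2f} m")
```

Even a modest attenuation roughly doubles the estimated distance here, which is why detecting and compensating the shadowing condition matters so much for positioning accuracy.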

  18. An IMU-Aided Body-Shadowing Error Compensation Method for Indoor Bluetooth Positioning

    Directory of Open Access Journals (Sweden)

    Zhongliang Deng

    2018-01-01

    Full Text Available Research on indoor positioning technologies has recently become a hotspot because of the huge social and economic potential of indoor location-based services (ILBS). Wireless positioning signals have a considerable attenuation in received signal strength (RSS) when transmitting through human bodies, which would cause significant ranging and positioning errors in RSS-based systems. This paper mainly focuses on the body-shadowing impairment of RSS-based ranging and positioning, and derives a mathematical expression of the relation between the body-shadowing effect and the positioning error. In addition, an inertial measurement unit-aided (IMU-aided) body-shadowing detection strategy is designed, and an error compensation model is established to mitigate the effect of body-shadowing. A Bluetooth positioning algorithm with body-shadowing error compensation (BP-BEC) is then proposed to improve both the positioning accuracy and the robustness in indoor body-shadowing environments. Experiments are conducted in two indoor test beds, and the performance of both the BP-BEC algorithm and the algorithms without body-shadowing error compensation (named no-BEC) is evaluated. The results show that the BP-BEC outperforms the no-BEC by about 60.1% and 73.6% in terms of positioning accuracy and robustness, respectively. Moreover, the execution time of the BP-BEC algorithm is also evaluated, and results show that the convergence speed of the proposed algorithm has an insignificant effect on real-time localization.

  19. Reduction of digital errors of digital charge division type position-sensitive detectors

    International Nuclear Information System (INIS)

    Uritani, A.; Yoshimura, K.; Takenaka, Y.; Mori, C.

    1994-01-01

    It is well known that ''digital errors'', i.e. differential non-linearity, appear in a position profile of radiation interactions when the profile is obtained with a digital charge-division-type position-sensitive detector. Two methods are presented to reduce these digital errors: one using logarithmic amplifiers and one using a weighting function. The validity of these two methods has been evaluated mainly by computer simulation. Both methods can considerably reduce the digital errors, and the best results are obtained when they are applied together. (orig.)
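The origin of these digital errors can be sketched numerically: once the two electrode charges are digitized to integer ADC counts, the charge-division ratio takes only a discrete set of values, and histogramming the reconstructed positions produces spikes and dips even for uniform illumination. This is an illustrative simulation of the effect, not of the authors' correction methods; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 100.0            # detector length (arbitrary units)
n_events = 200_000
true_pos = rng.uniform(0.0, L, n_events)

q_total = 1000.0     # total collected charge per event, in ADC counts
qa = np.round(q_total * true_pos / L)          # digitized charge, electrode A
qb = np.round(q_total * (1.0 - true_pos / L))  # digitized charge, electrode B
est = L * qa / (qa + qb)                       # charge-division position estimate

# The digitized ratio takes only discrete values that are incommensurate
# with the histogram bins, so the counts per bin oscillate (differential
# non-linearity) even though true_pos is uniform.
hist, _ = np.histogram(est, bins=300, range=(0.0, L))
dnl = hist.std() / hist.mean()
print(f"relative bin-to-bin spread: {dnl:.2f}")
```

With these settings the bin-to-bin spread is an order of magnitude larger than the Poisson statistics of a uniform profile would give, which is the artifact the logarithmic-amplifier and weighting-function methods are designed to suppress.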

  20. Analysis of positioning errors in radiotherapy; Analyse des erreurs de positionnement en radiotherapie

    Energy Technology Data Exchange (ETDEWEB)

    Josset-Gaudaire, S.; Lisbona, A.; Llagostera, C.; Delpon, G.; Chiavassa, S.; Brunet, G. [Service de physique medicale, ICO Rene-Gauducheau, Saint Herblain (France); Rousset, S.; Nerriere, E.; Leblanc, M. [Service de radiotherapie, ICO Rene-Gauducheau, Saint Herblain (France)

    2011-10-15

    Within the framework of a study of control imagery management in radiotherapy, the authors report a study of positioning errors associated with control imagery, in order to give an overview of practice and to help adjust or define action levels for clinical practice. Twenty groups of patients were defined by considering tumour locations (head, ENT, thorax, breast, abdomen and pelvis), treatment positions, immobilization systems and imaging systems. Positioning errors were thus analyzed for 340 patients. Aspects of practice to be improved are identified. Short communication

  1. Consonant acquisition: a first approach to the distribution of errors in four positions in the word

    Directory of Open Access Journals (Sweden)

    Silvia Llach

    2012-12-01

    Full Text Available The goal of this study is to describe the behavior of errors in two types of onsets (initial and intervocalic) and two types of codas (in the middle and at the end of the word) in order to determine if any of these positions are more prone to specific types of errors than the others. We have looked into the errors that are frequently produced in these four contexts during the acquisition of consonant sounds in the Catalan language. The data were taken from a study on the acquisition of consonants in Catalan, carried out on 90 children between the ages of 3 and 5 years from several kindergarten schools. The results do show that there are characteristic errors depending on the position within the word.

  2. Compensation for positioning error of industrial robot for flexible vision measuring system

    Science.gov (United States)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    The positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods for positioning error based on the kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for the positioning error of the robot, based on vision measuring techniques, is presented. One approach sets global control points in the measured field and attaches an orientation camera to the vision sensor; the global control points are measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach sets control points on the vision sensor and places two large-field cameras behind the sensor; the three-dimensional coordinates of the control points are measured, and the pose and position of the sensor are calculated in real time. Experimental results show an RMS spatial positioning error of 3.422 mm with the single camera and 0.031 mm with the dual cameras. It is concluded that the single-camera method needs further improvement for higher accuracy, while the accuracy of the dual-camera method is suitable for application.

  3. The method to evaluate the position error in graphic positioning technology

    Institute of Scientific and Technical Information of China (English)

    Huiqing Lu(卢慧卿); Baoguang Wang(王宝光); Lishuang Liu(刘力双); Yabiao Li(李亚标)

    2004-01-01

    In the measurement of automobile body-in-white, positioning two-dimensional (2D) visual sensors with high precision has been widely studied. In this paper a graphic positioning method is proposed: a hollow tetrahedron is used as a positioning target to replace all the edges of a standard automobile body. A 2D visual sensor can be positioned by adjusting two triangles until they are superposed on the computer screen, so it is very important to evaluate the superposition precision of the two triangles. Several methods are discussed, and the least squares method is finally adopted; it makes the adjustment easier and more intuitive while maintaining high precision.
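The superposition-precision evaluation can be sketched as a 2D least-squares rigid alignment (the classic Kabsch/Procrustes fit): find the rotation and translation that best superpose one triangle on the other, and report the residual RMS vertex distance. This is an illustrative reconstruction under that assumption, not the authors' exact formulation:

```python
import numpy as np

def align_rms(tri_a, tri_b):
    """Least-squares rigid alignment (2D Kabsch/Procrustes) of triangle A
    onto triangle B; returns the residual RMS vertex distance."""
    a = tri_a - tri_a.mean(axis=0)            # remove translation
    b = tri_b - tri_b.mean(axis=0)
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(u @ vt))        # guard against reflections
    rot = u @ np.diag([1.0, d]) @ vt          # optimal rotation
    resid = a @ rot - b
    return float(np.sqrt((resid ** 2).sum(axis=1).mean()))

tri = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
moved = tri @ R.T + np.array([2.0, -1.0])     # rotated and translated copy

print(f"residual RMS: {align_rms(tri, moved):.1e}")  # ~0: exact superposition
```

A residual near zero indicates the two triangles can be superposed exactly by a rigid motion; any remaining RMS quantifies the superposition error to be minimized during sensor adjustment.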

  4. Position-Sensitive Organic Scintillation Detectors for Nuclear Material Accountancy

    International Nuclear Information System (INIS)

    Hausladen, P.; Newby, J.; Blackston, M.

    2015-01-01

    Recent years have seen renewed interest in fast organic scintillators with pulse-shape properties that enable neutron-gamma discrimination, in part because of the present shortage of 3He, but primarily because of the diagnostic value of the timing and pulse-height information available from such scintillators. Effort at Oak Ridge National Laboratory (ORNL) on fast organic scintillators has concentrated on the development of position-sensitive fast-neutron detectors for imaging applications. Two aspects of this effort are of interest. First, the development has revisited the fundamental limitations on pulse-shape measurement imposed by photon-counting statistics, the properties of the scintillator, and the properties of photomultiplier amplification. This idealized limit can then be used to evaluate the performance of the detector combined with data acquisition and analysis, such as free-running digitizers with embedded algorithms. Second, the development of position-sensitive detectors has enabled a new generation of fast-neutron imaging instruments and techniques with sufficient resolution to give new capabilities relevant to safeguards. Toward this end, ORNL has built and demonstrated a number of passive and active fast-neutron imagers, including a proof-of-concept passive imager capable of resolving individual fuel pins in an assembly via their neutron emanations. This presentation describes the performance and construction of position-sensing fast-neutron detectors and presents results of imaging measurements. (author)

  5. Circular Array of Magnetic Sensors for Current Measurement: Analysis for Error Caused by Position of Conductor.

    Science.gov (United States)

    Yu, Hao; Qian, Zheng; Liu, Huayi; Qu, Jiaqi

    2018-02-14

    This paper analyzes the measurement error, caused by the position of the current-carrying conductor, of a circular array of magnetic sensors for current measurement. The circular array of magnetic sensors is an effective approach for non-contact AC or DC measurement, as it is low-cost and lightweight, and has a large linear range, wide bandwidth, and low noise. In particular, it has been claimed that such a structure has an excellent ability to reduce errors caused by the position of the current-carrying conductor, crosstalk current interference, the shape of the conductor cross-section, and the Earth's magnetic field. However, the position of the current-carrying conductor (including un-centeredness and un-perpendicularity) had not been analyzed in detail until now. In this paper, for the purpose of achieving minimum measurement error, a theoretical analysis is proposed based on vector inner and exterior products. In the presented mathematical model of relative error, the un-centered offset distance, the un-perpendicular angle, the radius of the circle, and the number of magnetic sensors are expressed in one equation. The relative error caused by the position of the current-carrying conductor is compared between four and eight sensors. Tunnel magnetoresistance (TMR) sensors are used in the experimental prototype to verify the mathematical model. The analysis results can serve as a reference for designing the details of circular arrays of magnetic sensors for current measurement in practical situations.
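The geometry behind the off-center error can be sketched directly: summing the tangential field at N equally spaced sensors is a discrete approximation of Ampère's law, exact for a centered conductor but increasingly biased as the conductor moves off-centre, with the bias shrinking rapidly as N grows. The radius, offset, and current below are illustrative values, not the paper's:

```python
import math

MU0 = 4.0 * math.pi * 1e-7

def estimated_current(n_sensors, radius, offset, current=100.0):
    """Estimate I by summing the tangential field at n equally spaced
    sensors on a circle of the given radius; the conductor is displaced
    by `offset` along the x-axis from the circle centre."""
    total = 0.0
    for i in range(n_sensors):
        ang = 2.0 * math.pi * i / n_sensors
        px, py = radius * math.cos(ang), radius * math.sin(ang)
        rx, ry = px - offset, py           # vector from conductor to sensor
        r2 = rx * rx + ry * ry
        # Field of an infinite straight wire: |B| = mu0*I/(2*pi*r),
        # directed perpendicular to the radius vector from the wire.
        bx = -MU0 * current * ry / (2.0 * math.pi * r2)
        by = MU0 * current * rx / (2.0 * math.pi * r2)
        # Project onto the array circle's tangent direction (-sin, cos):
        total += bx * -math.sin(ang) + by * math.cos(ang)
    return total * 2.0 * math.pi * radius / (MU0 * n_sensors)

for n in (4, 8):
    err = abs(estimated_current(n, radius=0.05, offset=0.02) - 100.0) / 100.0
    print(f"{n} sensors: relative error {err:.1e}")
```

For a line current the discrete-sum error scales roughly as (offset/radius)^N, which is why doubling the sensor count from four to eight collapses the position-induced error by orders of magnitude.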

  6. Evaluation of patient positioning errors using megavoltage cone beam CT; Evaluacion de errores de posicionamiento del paciente mediante Cone Beam CT de megavoltaje

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Ruiz-Zorrilla, J.; Fernandez Leton, J. P.; Zucca Aparicio, D.; Perez Moreno, J. M.; Minambres Moro, A.

    2013-07-01

    Image-guided radiation therapy allows the positioning of the patient in the treatment unit to be assessed and corrected, thus reducing the uncertainties due to patient set-up. This work assesses systematic and random errors from the corrections made to a series of patients with different diseases using an off-line megavoltage cone beam CT (CBCT) protocol. (Author)

  7. Reduction in specimen labeling errors after implementation of a positive patient identification system in phlebotomy.

    Science.gov (United States)

    Morrison, Aileen P; Tanasijevic, Milenko J; Goonan, Ellen M; Lobo, Margaret M; Bates, Michael M; Lipsitz, Stuart R; Bates, David W; Melanson, Stacy E F

    2010-06-01

    Ensuring accurate patient identification is central to preventing medical errors, but it can be challenging. We implemented a bar code-based positive patient identification system for use in inpatient phlebotomy. A before-after design was used to evaluate the impact of the identification system on the frequency of mislabeled and unlabeled samples reported in our laboratory. Labeling errors fell from 5.45 in 10,000 before implementation to 3.2 in 10,000 afterward (P = .0013). An estimated 108 mislabeling events were prevented by the identification system in 1 year. Furthermore, a workflow step requiring manual preprinting of labels, which was accompanied by potential labeling errors in about one quarter of blood "draws," was removed as a result of the new system. After implementation, a higher percentage of patients reported having their wristband checked before phlebotomy. Bar code technology significantly reduced the rate of specimen identification errors.
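A quick consistency check on the reported figures: the drop from 5.45 to 3.2 mislabels per 10,000 samples, combined with the estimated 108 prevented events in one year, implies an annual volume of roughly 480,000 samples. This is an inference from the abstract's numbers, not a figure it states:

```python
# Rates reported in the abstract, per 10,000 inpatient phlebotomy samples:
rate_before = 5.45 / 10_000
rate_after = 3.2 / 10_000

prevented_per_sample = rate_before - rate_after   # 2.25 per 10,000
annual_volume = 108 / prevented_per_sample        # prevented events / rate drop
print(f"implied annual sample volume: {annual_volume:,.0f}")
```

The implied volume is plausible for a large hospital laboratory, so the reported rate reduction and prevented-event estimate are internally consistent.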

  8. Mitigating Errors in External Respiratory Surrogate-Based Models of Tumor Position

    International Nuclear Information System (INIS)

    Malinowski, Kathleen T.; McAvoy, Thomas J.; George, Rohini; Dieterich, Sonja; D'Souza, Warren D.

    2012-01-01

    Purpose: To investigate the effect of tumor site, measurement precision, tumor–surrogate correlation, training data selection, model design, and interpatient and interfraction variations on the accuracy of external marker-based models of tumor position. Methods and Materials: Cyberknife Synchrony system log files comprising synchronously acquired positions of external markers and the tumor from 167 treatment fractions were analyzed. The accuracy of Synchrony, ordinary-least-squares regression, and partial-least-squares regression models for predicting the tumor position from the external markers was evaluated. The quantity and timing of the data used to build the predictive model were varied. The effects of tumor–surrogate correlation and the precision in both the tumor and the external surrogate position measurements were explored by adding noise to the data. Results: The tumor position prediction errors increased during the duration of a fraction. Increasing the training data quantities did not always lead to more accurate models. Adding uncorrelated noise to the external marker-based inputs degraded the tumor–surrogate correlation models by 16% for partial-least-squares and 57% for ordinary-least-squares. External marker and tumor position measurement errors led to tumor position prediction changes 0.3–3.6 times the magnitude of the measurement errors, varying widely with model algorithm. The tumor position prediction errors were significantly associated with the patient index but not with the fraction index or tumor site. Partial-least-squares was as accurate as Synchrony and more accurate than ordinary-least-squares. Conclusions: The accuracy of surrogate-based inferential models of tumor position was affected by all the investigated factors, except for the tumor site and fraction index.
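The ordinary-least-squares branch of the comparison can be sketched with synthetic data: external markers driven by a common breathing signal make the OLS fit exact on clean inputs, while uncorrelated noise added to the marker inputs propagates directly into the tumor-position predictions. This is a toy illustration with invented signal and noise levels, not the Synchrony log-file analysis (and it omits the partial-least-squares variant):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# A shared latent breathing signal drives three external markers and the tumor.
breath = np.sin(np.linspace(0.0, 20.0 * np.pi, n))
markers = np.column_stack([a * breath for a in (1.0, 0.8, 1.2)])
tumor = 2.0 * breath

# Ordinary-least-squares model: tumor position from markers + intercept.
X = np.column_stack([markers, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, tumor, rcond=None)
pred_clean = X @ coef

# Add uncorrelated measurement noise to the marker inputs only, mirroring
# the paper's noise experiments, and reuse the clean-data model:
noisy = markers + rng.normal(0.0, 0.1, markers.shape)
pred_noisy = np.column_stack([noisy, np.ones(n)]) @ coef

rmse_clean = float(np.sqrt(np.mean((pred_clean - tumor) ** 2)))
rmse_noisy = float(np.sqrt(np.mean((pred_noisy - tumor) ** 2)))
print(f"clean RMSE: {rmse_clean:.1e}, noisy-input RMSE: {rmse_noisy:.2f}")
```

The amplification of input noise into prediction error depends on the magnitude of the regression weights, which is one reason the paper finds the two regression algorithms degrade so differently under the same added noise.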

  9. Mitigating Errors in External Respiratory Surrogate-Based Models of Tumor Position

    Energy Technology Data Exchange (ETDEWEB)

    Malinowski, Kathleen T. [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States); Fischell Department of Bioengineering, University of Maryland, College Park, MD (United States); McAvoy, Thomas J. [Fischell Department of Bioengineering, University of Maryland, College Park, MD (United States); Department of Chemical and Biomolecular Engineering and Institute of Systems Research, University of Maryland, College Park, MD (United States); George, Rohini [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States); Dieterich, Sonja [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States); D' Souza, Warren D., E-mail: wdsou001@umaryland.edu [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States); Fischell Department of Bioengineering, University of Maryland, College Park, MD (United States)

    2012-04-01

    Purpose: To investigate the effect of tumor site, measurement precision, tumor-surrogate correlation, training data selection, model design, and interpatient and interfraction variations on the accuracy of external marker-based models of tumor position. Methods and Materials: Cyberknife Synchrony system log files comprising synchronously acquired positions of external markers and the tumor from 167 treatment fractions were analyzed. The accuracy of Synchrony, ordinary-least-squares regression, and partial-least-squares regression models for predicting the tumor position from the external markers was evaluated. The quantity and timing of the data used to build the predictive model were varied. The effects of tumor-surrogate correlation and the precision in both the tumor and the external surrogate position measurements were explored by adding noise to the data. Results: The tumor position prediction errors increased during the duration of a fraction. Increasing the training data quantities did not always lead to more accurate models. Adding uncorrelated noise to the external marker-based inputs degraded the tumor-surrogate correlation models by 16% for partial-least-squares and 57% for ordinary-least-squares. External marker and tumor position measurement errors led to tumor position prediction changes 0.3-3.6 times the magnitude of the measurement errors, varying widely with model algorithm. The tumor position prediction errors were significantly associated with the patient index but not with the fraction index or tumor site. Partial-least-squares was as accurate as Synchrony and more accurate than ordinary-least-squares. Conclusions: The accuracy of surrogate-based inferential models of tumor position was affected by all the investigated factors, except for the tumor site and fraction index.

  10. Clinical measuring system for the form and position errors of circular workpieces using optical fiber sensors

    Science.gov (United States)

    Tan, Jiubin; Qiang, Xifu; Ding, Xuemei

    1991-08-01

    Optical sensors have two notable advantages in modern precision measurement. One is that they can be used in nondestructive measurement, because the sensors need not touch the workpiece surface during measuring. The other is that they strongly resist electromagnetic interference, vibration, and noise, so they are suitable for use at machining sites. However, the drift of light intensity and changes of the reflection coefficient at different measuring positions of a workpiece may have great influence on measured results. To solve this problem, a spectroscopic differential characteristic compensating method is put forward. The method can be used effectively not only to compensate the measuring errors resulting from the drift of light intensity but also to eliminate the influence on measured results caused by changes of the reflection coefficient. The article also analyzes the possibility of, and the means of, separating data errors in a clinical measuring system for form and position errors of circular workpieces.

  11. Rail-guided robotic end-effector position error due to rail compliance and ship motion

    NARCIS (Netherlands)

    Borgerink, Dian; Stegenga, J.; Brouwer, Dannis Michel; Woertche, H.J.; Stramigioli, Stefano

    2014-01-01

    A rail-guided robotic system is currently being designed for the inspection of ballast water tanks in ships. This robotic system will manipulate sensors toward the interior walls of the tank. In this paper, the influence of rail compliance on the end-effector position error due to ship movement is investigated.

  12. The role of positional errors while interpolating soil organic carbon contents using satellite imagery

    NARCIS (Netherlands)

    Samsonova, V.P.; Meshalkina, J.L.; Blagoveschensky, Y.N.; Yaroslavtsev, A.M.; Stoorvogel, J.J.

    2018-01-01

    Increasingly, soil surveys make use of a combination of legacy data, ancillary data and new field data. While combining the different sources of information, positional errors can play a large role. For example, the spatial discrepancy between remote sensing images and field data can depend on

  13. Compensation of position errors in passivity based teleoperation over packet switched communication networks

    NARCIS (Netherlands)

    Secchi, C; Stramigioli, Stefano; Fantuzzi, C.

    Because of the use of scattering based communication channels, passivity based telemanipulation systems can be subject to a steady state position error between master and slave robots. In this paper, we consider the case in which the passive master and slave sides communicate through a packet switched communication network.

  14. Robust Adaptive Beamforming with Sensor Position Errors Using Weighted Subspace Fitting-Based Covariance Matrix Reconstruction.

    Science.gov (United States)

    Chen, Peng; Yang, Yixin; Wang, Yong; Ma, Yuanliang

    2018-05-08

    When sensor position errors exist, the performance of recently proposed interference-plus-noise covariance matrix (INCM)-based adaptive beamformers may be severely degraded. In this paper, we propose a weighted subspace fitting-based INCM reconstruction algorithm to overcome sensor displacement for linear arrays. By estimating the rough signal directions, we construct a novel possible mismatched steering vector (SV) set. We analyze the proximity of the signal subspace from the sample covariance matrix (SCM) and the space spanned by the possible mismatched SV set. After solving an iterative optimization problem, we reconstruct the INCM using the estimated sensor position errors. Then we estimate the SV of the desired signal by solving an optimization problem with the reconstructed INCM. The main advantage of the proposed algorithm is its robustness against SV mismatches dominated by unknown sensor position errors. Numerical examples show that even if the position errors are up to half of the assumed sensor spacing, the output signal-to-interference-plus-noise ratio is only reduced by 4 dB. Beam patterns plotted using experiment data show that the interference suppression capability of the proposed beamformer outperforms other tested beamformers.

  15. Positive Beliefs about Errors as an Important Element of Adaptive Individual Dealing with Errors during Academic Learning

    Science.gov (United States)

    Tulis, Maria; Steuer, Gabriele; Dresel, Markus

    2018-01-01

    Research on learning from errors gives reason to assume that errors provide a high potential to facilitate deep learning if students are willing and able to take these learning opportunities. The first aim of this study was to analyse whether beliefs about errors as learning opportunities can be theoretically and empirically distinguished from…

  16. Quantitative analysis of the positioning errors of a multileaf collimator for volumetric arc therapy treatments

    International Nuclear Information System (INIS)

    Gomez Gonzalez, N.; Garcia Repiso, S.; Martin Rincon, C.; Cons Perez, N.; Saez Beltran, M.; Delgado Aparicio, J. M.; Perez alvarez, M. E.; Verde Velasco, J. M.; Ramos Pacho, J. A.; Sena Espinel, E. de

    2013-01-01

    The precision of the positioning of the multileaf collimation system of a linear accelerator is critical, especially in IMRT treatments, where small errors can cause relevant dosimetric discrepancies with respect to the calculated plan. The accuracy and repeatability of leaf positioning can be assessed with quality controls, including the one known as the fence test, whose image pattern allows anomalies to be found visually. The objective of this study is to develop a method to quantify the positioning errors of the multileaf collimator from this test. (Author)

  17. Observations of geographically correlated orbit errors for TOPEX/Poseidon using the global positioning system

    Science.gov (United States)

    Christensen, E. J.; Haines, B. J.; Mccoll, K. C.; Nerem, R. S.

    1994-01-01

    We have compared Global Positioning System (GPS)-based dynamic and reduced-dynamic TOPEX/Poseidon orbits over three 10-day repeat cycles of the ground track. The results suggest that the prelaunch joint gravity model (JGM-1) introduces geographically correlated errors (GCEs) which have a strong meridional dependence. The global distribution and magnitude of these GCEs are consistent with a prelaunch covariance analysis, with estimated and predicted global rms error statistics of 2.3 and 2.4 cm, respectively. Repeating the analysis with the post-launch joint gravity model (JGM-2) suggests that a portion of the meridional dependence observed with JGM-1 remains, with a global rms error of 1.2 cm.

  18. Positive phase error from parallel conductance in tetrapolar bio-impedance measurements and its compensation

    Directory of Open Access Journals (Sweden)

    Ivan M Roitt

    2010-01-01

    Full Text Available Bioimpedance measurements are of great use and can provide considerable insight into biological processes.  However, there are a number of possible sources of measurement error that must be considered.  The most dominant source of error is found in bipolar measurements where electrode polarisation effects are superimposed on the true impedance of the sample.  Even with the tetrapolar approach that is commonly used to circumvent this issue, other errors can persist. Here we characterise the positive phase and rise in impedance magnitude with frequency that can result from the presence of any parallel conductive pathways in the measurement set-up.  It is shown that fitting experimental data to an equivalent electrical circuit model allows for accurate determination of the true sample impedance as validated through finite element modelling (FEM) of the measurement chamber.  Finally, the model is used to extract dispersion information from cell cultures to characterise their growth.

  19. Early math and reading achievement are associated with the error positivity

    Directory of Open Access Journals (Sweden)

    Matthew H. Kim

    2016-12-01

    Full Text Available Executive functioning (EF) and motivation are associated with academic achievement and error-related ERPs. The present study explores whether early academic skills predict variability in the error-related negativity (ERN) and error positivity (Pe). Data from 113 three- to seven-year-old children in a Go/No-Go task revealed that stronger early reading and math skills predicted a larger Pe. Closer examination revealed that this relation was quadratic and significant for children performing at or near grade level, but not significant for above-average achievers. Early academics did not predict the ERN. These findings suggest that the Pe – which reflects individual differences in motivational processes as well as attention – may be associated with early academic achievement.

  20. Accounting for the measurement error of spectroscopically inferred soil carbon data for improved precision of spatial predictions.

    Science.gov (United States)

    Somarathna, P D S N; Minasny, Budiman; Malone, Brendan P; Stockmann, Uta; McBratney, Alex B

    2018-08-01

    Spatial modelling of environmental data commonly considers spatial variability as the single source of uncertainty. In reality, however, measurement errors should also be accounted for. In recent years, infrared spectroscopy has been shown to offer low-cost, yet invaluable, information needed for digital soil mapping at meaningful spatial scales for land management. However, spectrally inferred soil carbon data are known to be less accurate than laboratory-analysed measurements. This study establishes a methodology to filter out the measurement error variability by incorporating the measurement error variance in the spatial covariance structure of the model. The study was carried out in the Lower Hunter Valley, New South Wales, Australia, where a combination of laboratory-measured, and vis-NIR- and MIR-inferred, topsoil and subsoil soil carbon data are available. We investigated the applicability of residual maximum likelihood (REML) and Markov Chain Monte Carlo (MCMC) simulation methods to generate parameters of the Matérn covariance function directly from the data in the presence of measurement error. The results revealed that the measurement error can be effectively filtered out through the proposed technique. When the measurement error was filtered from the data, the prediction variance almost halved, which ultimately yielded greater certainty in spatial predictions of soil carbon. Further, the MCMC technique was successfully used to define the posterior distribution of measurement error. This is an important outcome, as the MCMC technique can be used to estimate the measurement error if it is not explicitly quantified. Although this study dealt with soil carbon data, this method is amenable to filtering the measurement error of any kind of continuous spatial environmental data. Copyright © 2018 Elsevier B.V. All rights reserved.
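The key idea — adding the measurement error variance only to the diagonal of the data covariance so that predictions smooth it out — can be illustrated with a minimal one-dimensional simple-kriging sketch. An exponential covariance stands in for the Matérn, the parameters are fixed rather than REML/MCMC-estimated, and all values are hypothetical:

```python
import numpy as np

def expcov(d, sill, rng_par):
    """Exponential covariance (the Matern family with nu = 0.5)."""
    return sill * np.exp(-d / rng_par)

def krige_with_me(x_obs, z_obs, me_var, x_new, sill=1.0, rng_par=1.0):
    """Simple kriging of a zero-mean field. The per-observation measurement
    error variance `me_var` is added only to the diagonal of the data
    covariance, so predictions filter it out rather than honouring it."""
    c_data = expcov(np.abs(x_obs[:, None] - x_obs[None, :]), sill, rng_par)
    c_data += np.diag(me_var)                        # nugget from meas. error
    c_cross = expcov(np.abs(x_new[:, None] - x_obs[None, :]), sill, rng_par)
    w = np.linalg.solve(c_data, c_cross.T)           # kriging weights
    pred = w.T @ z_obs
    pred_var = sill - np.einsum('ij,ji->i', c_cross, w)
    return pred, pred_var
```

With zero measurement error the predictor interpolates the data exactly; with a nonzero error variance the prediction variance honestly grows, which is the behaviour the paper exploits to separate spatial signal from spectroscopic noise.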

  1. Direct focusing error correction with ring-wide TBT beam position data

    International Nuclear Information System (INIS)

    Yang, M.J.

    2011-01-01

    Turn-By-Turn (TBT) betatron oscillation data is a very powerful tool for studying machine optics. Hundreds to thousands of turns of free oscillations are taken in just a few tens of milliseconds. With the beam covering all positions and angles at every location, TBT data can be used to diagnose focusing errors almost instantly. This paper describes a new approach that observes focusing errors collectively over all available TBT data to find the optimized quadrupole strength, one location at a time. Examples are shown and related issues discussed. The procedure presented has clearly helped to reduce overall deviations significantly, with relative ease. Sextupoles, being a permanent feature of the ring, will need to be incorporated into the model. While the cumulative effect from all sextupoles around the ring may be negligible on a turn-to-turn basis, it is not so in this transfer-line analysis. It should be noted that this procedure is not limited to looking for quadrupole errors. By modifying the target of minimization it could in principle be used to look for skew quadrupole errors and sextupole errors as well.

  2. A new stochastic model considering satellite clock interpolation errors in precise point positioning

    Science.gov (United States)

    Wang, Shengli; Yang, Fanlin; Gao, Wang; Yan, Lizi; Ge, Yulong

    2018-03-01

    Precise clock products are typically interpolated to the sampling interval of the observational data when they are used in precise point positioning. However, due to the white noise present in atomic clocks, a residual component of that noise will inevitably reside in the observations when clock errors are interpolated, and it will degrade the positioning results. In this paper, based on a twenty-one-week analysis of the atomic clock noise characteristics of numerous satellites, a new stochastic observation model that considers satellite clock interpolation errors is proposed. First, the systematic error of each satellite in the IGR clock product was extracted using a wavelet de-noising method to obtain the empirical characteristics of atomic clock noise within each clock product. Then, based on those empirical characteristics, a stochastic observation model was structured that considered the satellite clock interpolation errors. Subsequently, the IGR and IGS clock products at different time intervals were used for experimental validation. A verification using 179 stations worldwide from the IGS showed that, compared with the conventional model, the convergence times using the stochastic model proposed in this study were respectively shortened by 4.8% and 4.0% when the IGR and IGS 300-s-interval clock products were used, and by 19.1% and 19.4% when the 900-s-interval clock products were used. Furthermore, the disturbances during the initial phase of the calculation were also effectively reduced.
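In practice such a stochastic model enters the solution through the observation weights. The sketch below shows a generic weighted least-squares step in which each observation's variance is the sum of an elevation-dependent term and a clock-interpolation error variance; the 3 mm zenith sigma and the 1/sin(e) mapping are assumptions for illustration, not the model derived in the paper:

```python
import numpy as np

def wls(A, y, var):
    """Weighted least squares; weight = 1 / observation variance."""
    W = np.diag(1.0 / var)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

def obs_variance(elev_deg, clock_interp_var):
    """Observation variance as an elevation-dependent term plus the satellite
    clock interpolation error variance. The zenith sigma and elevation
    mapping below are illustrative assumptions."""
    sigma0 = 0.003  # m, assumed zenith noise level
    return (sigma0 / np.sin(np.radians(elev_deg))) ** 2 + clock_interp_var
```

Satellites whose interpolated clocks are noisier simply receive larger variances and hence smaller weights, which is how the proposed model shortens convergence without discarding data.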

  3. Characterization of positional errors and their influence on micro four-point probe measurements on a 100 nm Ru film

    DEFF Research Database (Denmark)

    Kjær, Daniel; Hansen, Ole; Østerberg, Frederik Westergaard

    2015-01-01

    Thin-film sheet resistance measurements at high spatial resolution and on small pads are important and can be realized with micrometer-scale four-point probes. As a result of the small scale, the measurements are affected by electrode position errors. We have characterized the electrode position errors in measurements on a Ru thin film using an Au-coated 12-point probe. We show that the standard deviation of the static electrode position error is on the order of 5 nm, which significantly affects the results of single-configuration measurements. Position-error-corrected dual-configuration measurements, however, are shown to eliminate the effect of position errors to a level limited either by electrical measurement noise or dynamic position errors. We show that the probe contact points remain almost static on the surface during the measurements (measured on an atomic scale) with a standard...

  4. Analysis of the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery

    Science.gov (United States)

    Arba-Mosquera, Samuel; Aslanides, Ioannis M.

    2012-01-01

    Purpose: To analyze the effects of eye-tracker performance on the pulse positioning errors during refractive surgery. Methods: A comprehensive model has been developed that directly considers eye movements (saccades, vestibular, optokinetic, vergence, and miniature movements) as well as eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay. Results: Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times of up to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times of up to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than the eye-tracker acquisition rate essentially double the pulse positioning errors. Laser trigger delays of up to about 300 μs have minor to no impact on pulse positioning errors. Conclusions: The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than the decentrations observed in clinical settings. There is no single parameter that alone minimizes the positioning error; it is the optimal combination of the several parameters that minimizes the error. The results of this analysis are important for understanding the limitations of correcting very irregular ablation patterns.
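The orders of magnitude quoted above follow from a simple kinematic bound: during the tracker latency plus the scanner repositioning time, an eye moving at saccadic speed carries the intended ablation point with it. A back-of-the-envelope version, where the conversion factor from eye rotation to corneal-plane displacement is an assumed round number rather than a value from the paper:

```python
def pulse_position_error_mm(eye_speed_deg_s, latency_ms, scanner_ms,
                            mm_per_deg=0.3):
    """Worst-case pulse decentration if the eye moves at a constant angular
    speed for the whole latency + repositioning window. mm_per_deg, the
    corneal-plane displacement per degree of eye rotation, is an assumed
    round number for illustration."""
    dt_s = (latency_ms + scanner_ms) / 1000.0
    return eye_speed_deg_s * dt_s * mm_per_deg
```

For a 500 deg/s saccade with 15 ms latency and 9 ms scanner time this gives 3.6 mm — the same millimetre scale as the worst-case errors quoted in the abstract, illustrating why latency and repositioning time dominate.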

  5. The Effect of Antenna Position Errors on Redundant-Baseline Calibration of HERA

    Science.gov (United States)

    Orosz, Naomi; Dillon, Joshua; Ewall-Wice, Aaron; Parsons, Aaron; HERA Collaboration

    2018-01-01

    HERA (the Hydrogen Epoch of Reionization Array) is a large, highly-redundant radio interferometer in South Africa currently being built out to 350 14-m dishes. Its mission is to probe large scale structure during and prior to the epoch of reionization using the 21 cm hyperfine transition of neutral hydrogen. The array is designed to be calibrated using redundant baselines of known lengths. However, the dishes can deviate from ideal positions, with errors on the order of a few centimeters. This potentially increases foreground contamination of the 21 cm power spectrum in the cleanest part of Fourier space. The calibration algorithm treats groups of baselines that should be redundant, but are not due to position errors, as if they actually are. Accurate, precise calibration is critical because the foreground signals are 100,000 times stronger than the reionization signal. We explain the origin of this effect and discuss weighting strategies to mitigate it.

  6. Automatic detection of patient identification and positioning errors in radiotherapy treatment using 3D setup images

    OpenAIRE

    Jani, Shyam

    2015-01-01

    The success of modern radiotherapy treatment depends on the correct alignment of the radiation beams with the target region in the patient. In the conventional paradigm of image-guided radiation therapy, 2D or 3D setup images are taken immediately prior to treatment and are used by radiation therapy technologists to localize the patient to the same position as defined from the reference planning CT dataset. However, numerous reports in the literature have described errors during this step, wh...

  7. Taking human error into account in the design of nuclear reactor centres

    International Nuclear Information System (INIS)

    Prouillac; Lerat; Janoir.

    1982-05-01

    The role of the operator in the centralized management of pressurized water reactors is studied. The different types of human error likely to arise, the means of preventing them, and methods of mitigating their consequences are presented. Some possible improvements are outlined.

  8. A machine learning approach to the accurate prediction of multi-leaf collimator positional errors

    Science.gov (United States)

    Carlson, Joel N. K.; Park, Jong Min; Park, So-Yeon; In Park, Jong; Choi, Yunseok; Ye, Sung-Joon

    2016-03-01

    Discrepancies between planned and delivered movements of multi-leaf collimators (MLCs) are an important source of errors in dose distributions during radiotherapy. In this work we used machine learning techniques to train models to predict these discrepancies, assessed the accuracy of the model predictions, and examined the impact these errors have on quality assurance (QA) procedures and dosimetry. Predictive leaf motion parameters for the models were calculated from the plan files, such as leaf position and velocity, whether the leaf was moving towards or away from the isocenter of the MLC, and many others. Differences in positions between synchronized DICOM-RT planning files and DynaLog files reported during QA delivery were used as a target response for training of the models. The final model is capable of predicting MLC positions during delivery to a high degree of accuracy. For moving MLC leaves, predicted positions were shown to be significantly closer to delivered positions than were planned positions. By incorporating predicted positions into dose calculations in the TPS, increases were shown in gamma passing rates against measured dose distributions recorded during QA delivery. For instance, head and neck plans with 1%/2 mm gamma criteria had an average increase in passing rate of 4.17% (SD = 1.54%). This indicates that the inclusion of predictions during dose calculation leads to a more realistic representation of plan delivery. To assess the impact on the patient, dose volume histograms (DVHs) using delivered positions were calculated for comparison with planned and predicted DVHs. In all cases, predicted dose volume parameters were in closer agreement with the delivered parameters than were the planned parameters, particularly for organs at risk on the periphery of the treatment area. By incorporating the predicted positions into the TPS, the treatment planner is given a more realistic view of the dose distribution as it will truly be delivered.
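A stripped-down version of this pipeline: features derived from the plan predict the delivery error for each leaf, and the predicted error is added to the planned position before dose calculation. Plain linear least squares stands in for the paper's machine-learning models, and the feature names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_leaf_error_model(X, error):
    """Least-squares map from plan-derived leaf features to delivery error.
    A linear stand-in for the paper's machine-learning models."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # add bias column
    coef, *_ = np.linalg.lstsq(Xb, error, rcond=None)
    return coef

def predict_delivered(X, planned, coef):
    """Predicted delivered position = planned position + predicted error."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return planned + Xb @ coef

# hypothetical features: planned position, leaf velocity, direction flag
X = rng.normal(size=(200, 3))
true_err = X @ np.array([0.02, -0.01, 0.005]) + 0.001   # synthetic truth
planned = rng.normal(size=200)
delivered = planned + true_err
coef = fit_leaf_error_model(X, delivered - planned)
predicted = predict_delivered(X, planned, coef)
```

On this synthetic data the predicted positions land closer to the "delivered" positions than the planned ones do, which mirrors the paper's finding that feeding predictions into the TPS yields a more realistic dose calculation.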

  9. Accounting for spatiotemporal errors of gauges: A critical step to evaluate gridded precipitation products

    Science.gov (United States)

    Tang, Guoqiang; Behrangi, Ali; Long, Di; Li, Changming; Hong, Yang

    2018-04-01

    Rain gauge observations are commonly used to evaluate the quality of satellite precipitation products. However, the inherent difference between point-scale gauge measurements and areal satellite precipitation, i.e. a point of space accumulated in time vs. a snapshot of time aggregated in space, has an important effect on the accuracy and precision of qualitative and quantitative evaluation results. This study aims to quantify the uncertainty caused by various combinations of spatiotemporal scales (0.1°-0.8° and 1-24 h) of gauge network designs in the densely gauged and relatively flat Ganjiang River basin, South China, in order to evaluate the state-of-the-art satellite precipitation product, the Integrated Multi-satellite Retrievals for Global Precipitation Measurement (IMERG). For comparison with the dense gauge network serving as "ground truth", 500 sparse gauge networks are generated through random combinations of gauge numbers at each set of spatiotemporal scales. Results show that all sparse gauge networks persistently underestimate the performance of IMERG according to most metrics. However, the probability of detection is overestimated because hit and miss events are more likely fewer than the reference numbers derived from dense gauge networks. A nonlinear error function of spatiotemporal scales and the number of gauges in each grid pixel is developed to estimate the errors of using gauges to evaluate satellite precipitation. Coefficients of determination of the fitting are above 0.9 for most metrics. The error function can also be used to estimate the required minimum number of gauges in each grid pixel to meet a predefined error level.
This study suggests that the actual quality of satellite precipitation products could be better than conventionally evaluated or expected, and hopefully enables non-subject-matter-expert researchers to have better understanding of the explicit uncertainties when using point-scale gauge observations to evaluate areal products.
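The fitted error function can be sketched as a nonlinear regression of an evaluation-metric error on spatial scale, temporal scale, and gauge count, followed by inversion for the minimum gauge count. The functional form below (power laws in the scales, inverse square root in gauge count) is a hypothetical stand-in for the paper's function, chosen only to show the fitting and inversion steps:

```python
import numpy as np
from scipy.optimize import curve_fit

def error_fn(st_n, a, b, c):
    """Hypothetical error function: grows with spatial scale s (deg) and
    temporal scale t (h), shrinks with the number of gauges n per pixel.
    The paper's actual functional form is not reproduced here."""
    s, t, n = st_n
    return a * s**b * t**c / np.sqrt(n)

def min_gauges(s, t, err_target, a, b, c):
    """Invert the fitted function for the gauge count meeting a target error."""
    return int(np.ceil((a * s**b * t**c / err_target) ** 2))

# fit to synthetic (scale, duration, gauge-count) -> error samples
S, T, N = np.meshgrid([0.1, 0.2, 0.4, 0.8], [1.0, 3.0, 6.0, 24.0], [2, 4, 8])
s, t, n = S.ravel(), T.ravel(), N.ravel().astype(float)
err = error_fn((s, t, n), 0.5, 0.7, 0.3)        # synthetic "truth"
popt, _ = curve_fit(error_fn, (s, t, n), err, p0=(1.0, 1.0, 1.0))
```

Once fitted, the same function answers the design question in the abstract: how many gauges per pixel are needed before gauge sampling error drops below a chosen threshold.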

  10. Using the Bootstrap to Account for Linkage Errors when Analysing Probabilistically Linked Categorical Data

    Directory of Open Access Journals (Sweden)

    Chipperfield James O.

    2015-09-01

    Full Text Available Record linkage is the act of bringing together records that are believed to belong to the same unit (e.g., person or business) from two or more files. Record linkage is not an error-free process and can lead to linking a pair of records that do not belong to the same unit. This occurs because linking fields on the files, which ideally would uniquely identify each unit, are often imperfect. There has been an explosion of record linkage applications, particularly involving government agencies and in the field of health, yet there has been little work on making correct inference using such linked files. Naively treating a linked file as if it were linked without errors can lead to biased inferences. This article develops a method of making inferences for cross-tabulated variables when record linkage is not an error-free process. In particular, it develops a parametric bootstrap approach to estimation which can accommodate the sophisticated probabilistic record linkage techniques that are widely used in practice (e.g., 1-1 linkage). The article demonstrates the effectiveness of this method in a simulation and in a real application.
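A toy version of the parametric bootstrap idea: re-impose the known linkage error mechanism on the linked file, measure how much the statistic of interest moves, and subtract that bias from the naive estimate. The agreement-rate statistic and the simple random-mislink mechanism below are illustrative assumptions; the paper handles full probabilistic 1-1 linkage:

```python
import numpy as np

rng = np.random.default_rng(0)

def agreement(a, b):
    """Statistic of interest for the linked file: pairwise agreement rate."""
    return np.mean(a == b)

def bootstrap_corrected(x, y, p_correct, n_boot=200):
    """Parametric bootstrap bias correction. In each replicate, each link is
    broken with probability 1 - p_correct and re-paired with a random record;
    the average shift of the statistic estimates the linkage-error bias."""
    naive = agreement(x, y)
    reps = np.empty(n_boot)
    for i in range(n_boot):
        y_b = y.copy()
        bad = rng.random(y.size) > p_correct
        y_b[bad] = rng.permutation(y)[bad]       # mislinked pairs
        reps[i] = agreement(x, y_b)
    return naive - (reps.mean() - naive)         # bias-corrected estimate
```

If the linked file already carries, say, 20% mislinks, the naive agreement rate is attenuated toward chance; the corrected estimate moves back toward the error-free value.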

  11. Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2017-07-01

    The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
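The practical test can be emulated by simulation when a Tracy-Widom table is not at hand: build the null distribution of the leading eigenvalue under pure sampling error (identity covariance) and ask whether an observed eigenvalue clears its upper quantile. The sketch below uses sample covariances rather than REML genetic eigenvalues, and a Monte Carlo null in place of the scaled TW distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

def leading_eig_null(n_traits, n_obs, n_sims=500):
    """Null distribution of the largest sample-covariance eigenvalue when the
    true covariance is the identity, i.e. every apparent eigenvalue is pure
    sampling error. A Monte Carlo stand-in for the scaled TW critical value."""
    out = np.empty(n_sims)
    for i in range(n_sims):
        x = rng.standard_normal((n_obs, n_traits))
        out[i] = np.linalg.eigvalsh(np.cov(x, rowvar=False))[-1]
    return out

def exceeds_sampling_error(observed, n_traits, n_obs, alpha=0.05):
    """True if the observed leading eigenvalue clears the null quantile."""
    null = leading_eig_null(n_traits, n_obs)
    return bool(observed > np.quantile(null, 1.0 - alpha))
```

Note how, even with a true covariance of the identity, the leading sample eigenvalue sits well above 1 — exactly the upward bias from sampling error the abstract warns against mistaking for genetic variance.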

  12. Influence of Marker Movement Errors on Measuring 3-Dimensional Scapular Position and Orientation

    Directory of Open Access Journals (Sweden)

    Afsoun Nodehi-Moghaddam

    2003-12-01

    Full Text Available Objective: Scapulothoracic muscle weakness or fatigue can result in abnormal scapular positioning, compromising scapulohumeral rhythm and causing shoulder dysfunction. The scapula moves in a 3-dimensional fashion, so 2-dimensional techniques cannot fully capture scapular motion. One approach to positioning the markers of kinematic systems is to mount each marker directly on the skin, generally over a bony anatomical landmark. However, skin movement and the motion of the underlying bony structures are not necessarily identical, and substantial errors may be introduced into the description of bone movement when using skin-mounted markers. The objective was to evaluate the influence of marker movement errors on 3-dimensional scapular position and orientation. Materials & Methods: Ten healthy subjects with a mean age of 30.5 years participated in the study. They were tested in three sessions. A 3-dimensional electromechanical digitizer was used to measure scapular position and orientation. Measures were obtained with the arm placed at the side of the body and elevated to 45, 90, and 120 degrees and through the full range of motion in the scapular plane. At each test position, six bony landmarks were palpated and skin markers were mounted on them. This procedure was repeated in the second test session; in the third session, the markers were not removed while the entire range of motion was obtained after mounting. Results: The intraclass correlation coefficients (ICCs) for the scapular variables were higher (0.92-0.84) when the markers were replaced and re-mounted on the bony landmarks with increasing angles of elevation. Conclusion: Our findings suggest a significant marker movement error in measuring the upward rotation and posterior tilt angles of the scapula.

  13. Analysis of alpha spectrum instrumental errors accounting for the low energy part of semiconductor detector response function

    International Nuclear Information System (INIS)

    Gurbich, A.F.

    1981-01-01

    A technique for processing instrumental spectra of charged particles that takes account of the low-energy part of the spectrometer line shape, improves accuracy, and estimates detection efficiency is presented, using the 226Ra alpha spectrum as an example. The results obtained show that the relative intensities of the alpha lines coincide, within statistical errors, with the known values, with line "tails" constituting up to 3% of the total line area. Taking account of the line tail shifts the peak centers of gravity by 10-20 keV. Thus the low-energy part of the alpha spectrometer line, which is usually not taken into account during spectrum processing, markedly affects the results [ru

  14. Effect of cooling on thixotropic position-sense error in human biceps muscle.

    Science.gov (United States)

    Sekihara, Chikara; Izumizaki, Masahiko; Yasuda, Tomohiro; Nakajima, Takayuki; Atsumi, Takashi; Homma, Ikuo

    2007-06-01

    Muscle temperature affects muscle thixotropy. However, it is unclear whether changes in muscle temperature affect thixotropic position-sense errors. We studied the effect of cooling on thixotropic position-sense errors induced by short-length muscle contraction (hold-short conditioning) in the biceps of 12 healthy men. After hold-short conditioning of the right biceps muscle in a cooled (5.0 degrees C) or control (36.5 degrees C) environment, subjects perceived greater extension of the conditioned forearm at 5.0 degrees C. The angle differences between the two forearms following hold-short conditioning of the right biceps muscle in normal or cooled conditions were significantly different (-3.335 +/- 1.680 degrees at 36.5 degrees C vs. -5.317 +/- 1.096 degrees at 5.0 degrees C; P=0.043). Induction of a tonic vibration reflex in the biceps muscle elicited involuntary forearm elevation, and the angular velocities of the elevation differed significantly between arms conditioned in normal and cooled environments (1.583 +/- 0.326 degrees/s at 36.5 degrees C vs. 3.100 +/- 0.555 degrees/s at 5.0 degrees C, P=0.0039). Thus, a cooled environment impairs a muscle's ability to provide positional information, potentially leading to poor muscle performance.

  15. Positional accommodative intraocular lens power error induced by the estimation of the corneal power and the effective lens position

    Directory of Open Access Journals (Sweden)

    David P Piñero

    2015-01-01

    Full Text Available Purpose: To evaluate the predictability of the refractive correction achieved with a positional accommodating intraocular lens (IOL) and to develop a potential optimization of it by minimizing the error associated with the keratometric estimation of the corneal power and by developing a predictive formula for the effective lens position (ELP). Materials and Methods: Clinical data from 25 eyes of 14 patients (age range, 52-77 years) undergoing cataract surgery with implantation of the accommodating IOL Crystalens HD (Bausch and Lomb) were retrospectively reviewed. In all cases, the calculation of an adjusted IOL power (PIOLadj) based on Gaussian optics considering the residual refractive error was done using a variable keratometric index value (nkadj) for corneal power estimation, with and without an estimation algorithm for the ELP obtained by multiple regression analysis (ELPadj). PIOLadj was compared to the real IOL power implanted (PIOLReal, calculated with the SRK-T formula) and also to the values estimated by the Haigis, Hoffer Q, and Holladay I formulas. Results: No statistically significant differences were found between PIOLReal and PIOLadj when ELPadj was used (P = 0.10), with a range of agreement between calculations of 1.23 D. In contrast, PIOLReal was significantly higher when compared to PIOLadj without using ELPadj and also compared to the values estimated by the other formulas. Conclusions: Predictable refractive outcomes can be obtained with the accommodating IOL Crystalens HD using a variable keratometric index for corneal power estimation and by estimating the ELP with an algorithm dependent on anatomical factors and age.

  16. Models and error analyses of measuring instruments in accountability systems in safeguards control

    International Nuclear Information System (INIS)

    Dattatreya, E.S.

    1977-05-01

    Essentially three types of measuring instruments are used in plutonium accountability systems: (1) bubblers, for measuring the total volume of liquid in the holding tanks; (2) coulometers, titration apparatus, and calorimeters, for measuring the concentration of plutonium; and (3) spectrometers, for measuring isotopic composition. These three classes of instruments are modeled and analyzed. Finally, the uncertainty in the estimation of the total plutonium in the holding tank is determined.

  17. Testing the Motor Simulation Account of Source Errors for Actions in Recall

    Directory of Open Access Journals (Sweden)

    Nicholas Lange

    2017-09-01

    Full Text Available Observing someone else perform an action can lead to false memories of self-performance – the observation inflation effect. One explanation is that action simulation via mirror neuron activation during action observation is responsible for observation inflation by enriching memories of observed actions with motor representations. In three experiments we investigated this account of source memory failures, using a novel paradigm that minimized influences of verbalization and prior object knowledge. Participants worked in pairs to take turns acting out geometric shapes and letters. The next day, participants recalled either actions they had performed or those they had observed. Experiment 1 showed that participants falsely retrieved observed actions as self-performed, but also retrieved self-performed actions as observed. Experiment 2 showed that preventing participants from encoding observed actions motorically, by taxing their motor system with a concurrent motor task, did not lead to the predicted decrease in false claims of self-performance. Indeed, Experiment 3 showed that this was the case even when participants were asked to carefully monitor their recall. Because our data provide no evidence for a motor activation account, we also discuss our results in light of a source monitoring account.

  18. Accounting for measurement error in biomarker data and misclassification of subtypes in the analysis of tumor data.

    Science.gov (United States)

    Nevo, Daniel; Zucker, David M; Tamimi, Rulla M; Wang, Molin

    2016-12-30

    A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes and to use the results for personalized preventive treatment. This task is usually carried out in two steps: clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses' Health Study to demonstrate the utility of our method. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Accounting for standard errors of vision-specific latent trait in regression models.

    Science.gov (United States)

    Wong, Wan Ling; Li, Xiang; Li, Jialiang; Wong, Tien Yin; Cheng, Ching-Yu; Lamoureux, Ecosse L

    2014-07-11

    To demonstrate the effectiveness of a Hierarchical Bayesian (HB) approach in a modeling framework for association effects that accounts for SEs of vision-specific latent traits assessed using Rasch analysis. A systematic literature review was conducted in four major ophthalmic journals to evaluate Rasch analyses performed on vision-specific instruments. The HB approach was used to synthesize the Rasch model and the multiple linear regression model for the assessment of association effects related to vision-specific latent traits. This novel HB one-stage "joint-analysis" approach allows all model parameters to be estimated simultaneously; its effectiveness was compared in our simulation study with the frequently used two-stage "separate-analysis" approach (Rasch analysis followed by traditional statistical analyses without adjustment for the SE of the latent trait). Sixty-six reviewed articles performed evaluation and validation of vision-specific instruments using Rasch analysis, and 86.4% (n = 57) performed further statistical analyses on the Rasch-scaled data using traditional statistical methods; none took into consideration the SEs of the estimated Rasch-scaled scores. The two models on real data differed in effect size estimations and in the identification of "independent risk factors." Simulation results showed that our proposed HB one-stage "joint-analysis" approach produces greater accuracy (an average 5-fold decrease in bias) with comparable power and precision in the estimation of associations when compared with the frequently used two-stage "separate-analysis" procedure, despite accounting for greater uncertainty due to the latent trait. Patient-reported data, analyzed using Rasch analysis techniques, do not take into account the SE of the latent trait in association analyses. The HB one-stage "joint-analysis" is a better approach, producing accurate effect size estimations and information about the independent association of exposure variables with vision-specific latent traits.

  20. Square-Wave Voltage Injection Algorithm for PMSM Position Sensorless Control With High Robustness to Voltage Errors

    DEFF Research Database (Denmark)

    Ni, Ronggang; Xu, Dianguo; Blaabjerg, Frede

    2017-01-01

    Rotor position estimated with high-frequency (HF) voltage injection methods can be distorted by voltage errors due to inverter nonlinearities, motor resistance, and rotational voltage drops, etc. This paper proposes an improved HF square-wave voltage injection algorithm, which is robust to voltage errors without any compensation and has less fluctuation in the position estimation error. The average position estimation error is investigated based on the analysis of phase harmonic inductances, and deduced in the form of the phase shift of the second-order harmonic inductances to derive its relationship with the magnetic field distortion. Position estimation errors caused by higher-order harmonic inductances and voltage harmonics generated by the SVPWM are also discussed. Both simulations and experiments are carried out on a commercial PMSM to verify the superiority of the proposed method.

  1. A Multipixel Time Series Analysis Method Accounting for Ground Motion, Atmospheric Noise, and Orbital Errors

    Science.gov (United States)

    Jolivet, R.; Simons, M.

    2018-02-01

    Interferometric synthetic aperture radar time series methods aim to reconstruct time-dependent ground displacements over large areas from sets of interferograms in order to detect transient, periodic, or small-amplitude deformation. Because of computational limitations, most existing methods consider each pixel independently, ignoring important spatial covariances between observations. We describe a framework to reconstruct time series of ground deformation while considering all pixels simultaneously, allowing us to account for spatial covariances, imprecise orbits, and residual atmospheric perturbations. We describe spatial covariances by an exponential decay function dependent on pixel-to-pixel distance. We approximate the impact of imprecise orbit information and residual long-wavelength atmosphere as a low-order polynomial function. Tests on synthetic data illustrate the importance of incorporating full covariances between pixels in order to avoid biased parameter reconstruction. An example of application to the northern Chilean subduction zone highlights the potential of this method.
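The ingredients described — an exponential spatial covariance and simultaneous estimation over all pixels — reduce to a generalized least-squares problem. A minimal sketch, in which the design matrix (which in the paper would hold displacement rates plus a low-order orbital/atmospheric ramp) is left generic and all values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

def exp_covariance(coords, sigma, lam):
    """Covariance decaying exponentially with pixel-to-pixel distance."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sigma**2 * np.exp(-d / lam)

def gls(G, d, C):
    """Generalized least squares over all pixels at once, so the spatial
    covariance C is honoured instead of ignored (pixel-by-pixel fitting
    amounts to pretending C is diagonal)."""
    Ci = np.linalg.inv(C)
    return np.linalg.solve(G.T @ Ci @ G, G.T @ Ci @ d)

# synthetic check: 30 pixels, spatially correlated noise, offset + ramp design
coords = rng.uniform(size=(30, 2))
C = exp_covariance(coords, 0.01, 0.5) + 1e-8 * np.eye(30)  # small nugget
noise = np.linalg.cholesky(C) @ rng.standard_normal(30)
G = np.column_stack([np.ones(30), coords[:, 0]])   # offset + east ramp (assumed)
m_true = np.array([1.0, 2.0])
d = G @ m_true + noise
m_hat = gls(G, d, C)
```

Because the noise covariance is supplied in full, correlated atmospheric-style noise is properly downweighted rather than being absorbed into the estimated parameters, which is the bias the synthetic tests in the abstract demonstrate.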

  2. Evidence, exaggeration, and error in historical accounts of chaparral wildfires in California.

    Science.gov (United States)

    Goforth, Brett R; Minnich, Richard A

    2007-04-01

    For more than half a century, ecologists and historians have been integrating the contemporary study of ecosystems with data gathered from historical sources to evaluate change over broad temporal and spatial scales. This approach is especially useful where ecosystems were altered before formal study as a result of natural resources management, land development, environmental pollution, and climate change. Yet, in many places, historical documents do not provide precise information, and pre-historical evidence is unavailable or has ambiguous interpretation. There are similar challenges in evaluating how the fire regime of chaparral in California has changed as a result of fire suppression management initiated at the beginning of the 20th century. Although the firestorm of October 2003 was the largest officially recorded in California (approximately 300,000 ha), historical accounts of pre-suppression wildfires have been cited as evidence that such a scale of burning was not unprecedented, suggesting the fire regime and patch mosaic in chaparral have not substantially changed. We find that the data do not support pre-suppression megafires, and that the impression of large historical wildfires is a result of imprecision and inaccuracy in the original reports, as well as a parlance that is beset with hyperbole. We underscore themes of importance for critically analyzing historical documents to evaluate ecological change. A putative 100 mile long by 10 mile wide (160 x 16 km) wildfire reported in 1889 was reconstructed to an area of chaparral approximately 40 times smaller by linking local accounts to property tax records, voter registration rolls, claimed insurance, and place names mapped with a geographical information system (GIS) which includes data from historical vegetation surveys. 
We also show that historical sources cited as evidence of other large chaparral wildfires are either demonstrably inaccurate or provide anecdotal information that is immaterial in the

  3. Positivity, discontinuity, finite resources, and nonzero error for arbitrarily varying quantum channels

    International Nuclear Information System (INIS)

    Boche, H.; Nötzel, J.

    2014-01-01

    This work is motivated by a quite general question: Under which circumstances are the capacities of information transmission systems continuous? The research is explicitly carried out on finite arbitrarily varying quantum channels (AVQCs). We give an explicit example that answers the recent question whether the transmission of messages over AVQCs can benefit from assistance by distribution of randomness between the legitimate sender and receiver in the affirmative. The specific class of channels introduced in that example is then extended to show that the unassisted capacity does have discontinuity points, while it is known that the randomness-assisted capacity is always continuous in the channel. We characterize the discontinuity points and prove that the unassisted capacity is always continuous around its positivity points. After having established shared randomness as an important resource, we quantify the interplay between the distribution of finite amounts of randomness between the legitimate sender and receiver, the (nonzero) probability of a decoding error with respect to the average error criterion and the number of messages that can be sent over a finite number of channel uses. We relate our results to the entanglement transmission capacities of finite AVQCs, where the role of shared randomness is not yet well understood, and give a new sufficient criterion for the entanglement transmission capacity with randomness assistance to vanish

  4. Mobility and Position Error Analysis of a Complex Planar Mechanism with Redundant Constraints

    Science.gov (United States)

    Sun, Qipeng; Li, Gangyan

    2018-03-01

    Mechanisms with redundant constraints have been created and have attracted much attention for their merits. This paper analyzes the role of redundant constraints in a mechanical system. An analysis method for planar linkages with a repetitive structure is proposed to obtain the number and type of constraints. According to differences in application and constraint characteristics, redundant constraints are divided into theoretical planar redundant constraints and space-planar redundant constraints. A formula for calculating the number of redundant constraints and a method for judging their type are derived. The influence of redundant constraints on the mechanical performance of a complex mechanism is then analyzed. Combining theoretical derivation with simulation, an analysis method is put forward for the position error of a complex mechanism with redundant constraints, pointing the way toward eliminating or reducing the influence of redundant constraints.

  5. Impact of MLC leaf position errors on simple and complex IMRT plans for head and neck cancer

    International Nuclear Information System (INIS)

    Mu, G; Ludlum, E; Xia, P

    2008-01-01

    The dosimetric impact of random and systematic multi-leaf collimator (MLC) leaf position errors is relatively unknown for head and neck intensity-modulated radiotherapy (IMRT) patients. In this report we studied 17 head and neck IMRT patients, including 12 treated with simple plans (<100 segments) and 5 treated with complex plans (>100 segments). Random errors (-2 to +2 mm) and systematic errors (±0.5 mm and ±1 mm) in MLC leaf positions were introduced into the clinical plans and the resultant dose distributions were analyzed based on defined endpoint doses. The dosimetric effect was insignificant for random MLC leaf position errors up to 2 mm for both simple and complex plans. However, for systematic MLC leaf position errors, we found significant dosimetric differences between the simple and complex IMRT plans. For a 1 mm systematic error, the average changes in D95% were 4% in simple plans versus 8% in complex plans. The average changes in D0.1cc of the spinal cord and brain stem were 4% in simple plans versus 12% in complex plans. The average changes in the parotid glands were 9% in simple plans versus 13% in complex plans. Overall, simple IMRT plans are less sensitive to leaf position errors than complex IMRT plans.
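
The two error types compared in the study can be simulated in a few lines. The sketch below uses the abstract's error magnitudes (±2 mm random, +1 mm systematic) on a hypothetical toy leaf bank:

```python
import numpy as np

rng = np.random.default_rng(7)

def perturb_leaves(positions, systematic=0.0, random_max=0.0):
    """Add a fixed systematic offset and uniform random errors drawn
    from [-random_max, +random_max] (both in mm) to MLC leaf positions."""
    noise = rng.uniform(-random_max, random_max, size=positions.shape)
    return positions + systematic + noise

nominal = np.linspace(-30.0, 30.0, 60)                     # toy leaf bank (mm)
random_plan = perturb_leaves(nominal, random_max=2.0)      # ±2 mm random
systematic_plan = perturb_leaves(nominal, systematic=1.0)  # +1 mm systematic
```

Random errors tend to average out across many segments, whereas a systematic offset displaces every segment in the same direction, which is consistent with the finding that the many-segment (complex) plans were the more sensitive to systematic errors.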

  6. Implication of spot position error on plan quality and patient safety in pencil-beam-scanning proton therapy

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Juan; Beltran, Chris J., E-mail: beltran.chris@mayo.edu; Herman, Michael G. [Division of Medical Physics, Department of Radiation Oncology, Mayo Clinic, Rochester, Minnesota 55905 (United States)

    2014-08-15

    Purpose: To quantitatively and systematically assess dosimetric effects induced by spot positioning error as a function of spot spacing (SS) on intensity-modulated proton therapy (IMPT) plan quality and to facilitate evaluation of safety tolerance limits on spot position. Methods: Spot position errors (PE) ranging from 1 to 2 mm were simulated. Simple plans were created on a water phantom, and IMPT plans were calculated on two pediatric patients with a brain tumor of 28 and 3 cc, respectively, using a commercial planning system. For the phantom, a uniform dose was delivered to targets located at different depths from 10 to 20 cm with various field sizes from 2² to 15² cm². Two nominal spot sizes, 4.0 and 6.6 mm of 1 σ in water at isocenter, were used for treatment planning. The SS ranged from 0.5 σ to 1.5 σ, which is 2–6 mm for the small spot size and 3.3–9.9 mm for the large spot size. Various perturbation scenarios of a single spot error and systematic and random multiple spot errors were studied. To quantify the dosimetric effects, percent dose error (PDE) depth profiles and the value of percent dose error at the maximum dose difference (PDE[ΔDmax]) were used for evaluation. Results: A pair of hot and cold spots was created per spot shift. PDE[ΔDmax] is found to be a complex function of PE, SS, spot size, depth, and global spot distribution that can be well defined in simple models. For volumetric targets, the PDE[ΔDmax] is not noticeably affected by the change of field size or target volume within the studied ranges. In general, reducing SS decreased the dose error. For the facility studied, given a single spot error with a PE of 1.2 mm and for both spot sizes, a SS of 1 σ resulted in a 2% maximum dose error; a SS larger than 1.25 σ substantially increased the dose error and its sensitivity to PE. A similar trend was observed in multiple spot errors (both systematic and random errors). Systematic PE can lead to noticeable hot
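
A toy one-dimensional version of the perturbation study illustrates how a single spot shift produces the hot/cold pair described above. The Gaussian spot model, spacing of 1 σ, and the 1.2 mm shift mirror the abstract's parameters, but the 1D geometry and normalization are simplifying assumptions, so the resulting percent dose error should not be compared directly with the quoted 2% figure:

```python
import numpy as np

def dose_profile(x, spot_centers, sigma):
    """Lateral dose as a sum of unit-amplitude Gaussian spots."""
    return sum(np.exp(-(x - c) ** 2 / (2 * sigma ** 2)) for c in spot_centers)

sigma = 4.0            # nominal spot size, 1 sigma (mm)
spacing = 1.0 * sigma  # spot spacing of 1 sigma
spots = np.arange(-20.0, 20.0 + spacing, spacing)
x = np.linspace(-10.0, 10.0, 2001)

nominal = dose_profile(x, spots, sigma)
shifted = spots.copy()
shifted[np.argmin(np.abs(shifted))] += 1.2  # 1.2 mm error on the central spot
perturbed = dose_profile(x, shifted, sigma)

# percent dose error relative to the nominal maximum dose
pde = 100.0 * (perturbed - nominal) / nominal.max()
```

The perturbed profile shows a hot region on one side of the shifted spot and a cold region on the other; in this toy model too, widening `spacing` increases the relative error produced by a fixed shift, mirroring the trend reported above.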

  7. Analysis of the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery

    Directory of Open Access Journals (Sweden)

    Samuel Arba-Mosquera

    2012-01-01

    Conclusions: The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than the decentrations observed in clinical settings. No single parameter alone minimizes the positioning error; rather, it is the optimal combination of several parameters that minimizes it. The results of this analysis are important for understanding the limitations of correcting very irregular ablation patterns.

  8. Exceptional suffering? Enumeration and vernacular accounting in the HIV-positive experience.

    Science.gov (United States)

    Benton, Adia

    2012-01-01

    Drawing on 17 months of ethnographic fieldwork in Freetown, Sierra Leone, I highlight the recursive relationship between Sierra Leone as an exemplary setting and HIV as an exceptional disease. Through this relationship, I examine how HIV-positive individuals rely on both enumerative knowledge (seroprevalence rates) and vernacular accounting (NGO narratives of vulnerability) to communicate the uniqueness of their experience as HIV sufferers and to demarcate the boundaries of their status. Various observers' enumerative and vernacular accounts of Sierra Leone's decade-long civil conflict, coupled with global health accounts of HIV as exceptional, reveal the calculus of power through which global health projects operate. The contradictions between the exemplary and the exceptional-and the accompanying tension between quantitative and qualitative facts-are mutually constituted in performances and claims made by HIV-positive individuals themselves.

  9. Accountability

    Science.gov (United States)

    Fielding, Michael; Inglis, Fred

    2017-01-01

    This contribution republishes extracts from two important articles published around 2000 concerning the punitive accountability system suffered by English primary and secondary schools. The first concerns the inspection agency Ofsted, and the second managerialism. Though they do not directly address assessment, they are highly relevant to this…

  10. SU-F-J-208: Prompt Gamma Imaging-Based Prediction of Bragg Peak Position for Realistic Treatment Error Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Y; Macq, B; Bondar, L [Universite catholique de Louvain, Louvain-la-Neuve (Belgium); Janssens, G [IBA, Louvain-la-Neuve (Belgium)

    2016-06-15

    Purpose: To quantify the accuracy in predicting the Bragg peak position using simulated in-room measurements of prompt gamma (PG) emissions for realistic treatment error scenarios that combine several sources of errors. Methods: Prompt gamma measurements by a knife-edge slit camera were simulated using an experimentally validated analytical simulation tool. Simulations were performed, for 143 treatment error scenarios, on an anthropomorphic phantom and a pencil beam scanning plan for the nasal cavity. Three types of errors were considered: translation along each axis, rotation around each axis, and CT-calibration errors, with magnitudes ranging, respectively, between −3 and 3 mm, −5 and 5 degrees, and −5 and +5%. We investigated the correlation between the Bragg peak (BP) shift and the horizontal shift of PG profiles. The shifts were calculated between the planned (reference) position and the position given by the error scenario. The prediction error for one spot was calculated as the absolute difference between the PG profile shift and the BP shift. Results: The PG shift was significantly and strongly correlated with the BP shift for 92% of the cases (p<0.0001, Pearson correlation coefficient R>0.8). Moderate but significant correlations were obtained for all cases that considered only CT-calibration errors and for 1 case that combined translation and CT-errors (p<0.0001, R ranged between 0.61 and 0.8). The average prediction errors for the simulated scenarios ranged between 0.08±0.07 and 1.67±1.3 mm (grand mean 0.66±0.76 mm). The prediction error was moderately correlated with the value of the BP shift (p=0, R=0.64). For the simulated scenarios the average BP shift ranged between −8±6.5 mm and 3±1.1 mm. Scenarios that considered combinations of the largest treatment errors were associated with large BP shifts. Conclusion: Simulations of in-room measurements demonstrate that prompt gamma profiles provide reliable estimation of the Bragg peak position for

  11. Impact of habitat-specific GPS positional error on detection of movement scales by first-passage time analysis.

    Directory of Open Access Journals (Sweden)

    David M Williams

    Advances in animal tracking technologies have reduced but not eliminated positional error. While aware of such inherent error, scientists often proceed with analyses that assume exact locations. The results of such analyses then represent one realization in a distribution of possible outcomes. Evaluating results within the context of that distribution can strengthen or weaken our confidence in conclusions drawn from the analysis in question. We evaluated the habitat-specific positional error of stationary GPS collars placed under a range of vegetation conditions that produced a gradient of canopy cover. We explored how variation of positional error in different vegetation cover types affects a researcher's ability to discern scales of movement in analyses of first-passage time for white-tailed deer (Odocoileus virginianus). We placed 11 GPS collars in 4 different vegetative canopy cover types classified as the proportion of cover above the collar (0-25%, 26-50%, 51-75%, and 76-100%). We simulated the effect of positional error on individual movement paths using cover-specific error distributions at each location. The different cover classes did not introduce any directional bias in positional observations (1 m ≤ mean ≤ 6.51 m, 0.24 ≤ p ≤ 0.47), but the standard deviation of positional error of fixes increased significantly with increasing canopy cover class for the 0-25%, 26-50%, and 51-75% classes (SD = 2.18 m, 3.07 m, and 4.61 m, respectively) and then leveled off in the 76-100% cover class (SD = 4.43 m). We then added cover-specific positional errors to individual deer movement paths and conducted first-passage time analyses on the noisy and original paths. First-passage time analyses were robust to habitat-specific error in a forest-agriculture landscape. For deer in a fragmented forest-agriculture environment, and species that move across similar geographic extents, we suggest that first-passage time analysis is robust with regard to
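
The cover-specific error simulation can be sketched as below. The per-class standard deviations are the values reported in the abstract, while the example path and fix count are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# standard deviations of positional error (m) per canopy-cover class,
# as reported above for the stationary-collar test
error_sd = {"0-25%": 2.18, "26-50%": 3.07, "51-75%": 4.61, "76-100%": 4.43}

def add_positional_error(path_xy, cover_classes):
    """Add zero-mean, cover-specific Gaussian noise to each GPS fix.

    path_xy: (n, 2) array of true x/y positions (m)
    cover_classes: length-n sequence of canopy-cover class labels
    """
    sds = np.array([error_sd[c] for c in cover_classes])
    return path_xy + rng.normal(0.0, sds[:, None], size=path_xy.shape)

# hypothetical 50-fix path crossing dense cover
path = np.column_stack([np.linspace(0, 100, 50), np.linspace(0, 50, 50)])
noisy = add_positional_error(path, ["76-100%"] * 50)
```

Re-running a movement analysis (here, first-passage time) on many such noisy realizations gives the distribution of outcomes against which the error-free result can be judged.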

  12. Errors in the calculation of new salary positions and performance premiums – 2017 MERIT exercise

    CERN Multimedia

    Staff Association

    2017-01-01

    Following the receipt of the letters dated May 12th announcing the qualification of their performance (MERIT 2017), and the notification of their salary slips for the month of May, several colleagues have come to us to enquire about the calculation of salary increases and performance premiums. After verification, the Staff Association has informed the Management, in a meeting of the Standing Concertation Committee on June 1st, about errors owing to rounding in the applied formulas. James Purvis, Head of HR department, has published in the CERN Bulletin dated July 18th an article, under the heading “Better precision (rounding)”, that gives a short explanation of these rounding effects. But we want to further bring you more precise explanations. Advancement On the salary slips for the month of May, the calculations of the advancement and new salary positions were done, by the services of administrative computing in the FAP department, on the basis of the salary, rounded to the nearest franc...

  13. Joint position sense error in people with neck pain: A systematic review.

    Science.gov (United States)

    de Vries, J; Ischebeck, B K; Voogt, L P; van der Geest, J N; Janssen, M; Frens, M A; Kleinrensink, G J

    2015-12-01

    Several studies in recent decades have examined the relationship between proprioceptive deficits and neck pain. However, there is no uniform conclusion on the relationship between the two. Clinically, proprioception is evaluated using the Joint Position Sense Error (JPSE), which reflects a person's ability to accurately return his head to a predefined target after a cervical movement. We focused on differentiating the JPSE of people with neck pain from that of healthy controls. Systematic review according to the PRISMA guidelines. Our data sources were Embase, Medline OvidSP, Web of Science, Cochrane Central, CINAHL and Pubmed Publisher. To be included, studies had to compare JPSE of the neck (O) in people with neck pain (P) with JPSE of the neck in healthy controls (C). Fourteen studies were included. Four studies reported that participants with traumatic neck pain had a significantly higher JPSE than healthy controls. Of the eight studies involving people with non-traumatic neck pain, four reported significant differences between the groups. The JPSE did not vary between neck-pain groups. Current literature shows the JPSE to be a relevant measure when it is used correctly. All studies which calculated the JPSE over at least six trials showed a significantly increased JPSE in the neck pain group. This strongly suggests that 'number of repetitions' is a major element in correctly performing the JPSE test. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Positioning performance analysis of the time sum of arrival algorithm with error features

    Science.gov (United States)

    Gong, Feng-xun; Ma, Yan-qiu

    2018-03-01

    The theoretical positioning accuracy of multilateration (MLAT) with the time difference of arrival (TDOA) algorithm is very high. However, there are some problems in practical applications. Here we analyze the localization performance of the time sum of arrival (TSOA) algorithm in terms of the root mean square error (RMSE) and geometric dilution of precision (GDOP) in an additive white Gaussian noise (AWGN) environment. The TSOA localization model is constructed and used to present the distribution of the location ambiguity region with 4 base stations. The location performance analysis then starts from the 4-base-station case by calculating the RMSE and GDOP variation. Subsequently, as the location parameters are changed (number of base stations, base station layout, and so on), the performance patterns of the TSOA location algorithm are shown, revealing the TSOA location characteristics and performance. The trends of the RMSE and GDOP demonstrate the anti-noise performance and robustness of the TSOA localization algorithm, which can be used to reduce the blind zone and the false location rate of MLAT systems.
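
The GDOP figure of merit above can be computed from the Jacobian of the measurement model. A minimal sketch for a range-sum (TSOA) model is given below; the four-station square layout and the particular station pairing are hypothetical choices, with distances kept in metres so that GDOP = sqrt(trace((HᵀH)⁻¹)):

```python
import numpy as np

def tsoa_jacobian(p, station_pairs):
    """Each row is the gradient of a range-sum |p-a| + |p-b| w.r.t. p."""
    rows = []
    for a, b in station_pairs:
        ua = (p - a) / np.linalg.norm(p - a)
        ub = (p - b) / np.linalg.norm(p - b)
        rows.append(ua + ub)
    return np.array(rows)

def gdop(p, station_pairs):
    """Geometric dilution of precision, sqrt(trace((H^T H)^-1))."""
    H = tsoa_jacobian(np.asarray(p, dtype=float), station_pairs)
    return np.sqrt(np.trace(np.linalg.inv(H.T @ H)))

# four base stations on the corners of a 100 m square, paired around the edges
s = [np.array(v, dtype=float) for v in [(0, 0), (100, 0), (0, 100), (100, 100)]]
pairs = [(s[0], s[1]), (s[1], s[2]), (s[2], s[3]), (s[3], s[0])]
g = gdop(np.array([50.0, 40.0]), pairs)
```

With this particular pairing the geometry degenerates at the exact centre of the square (HᵀH becomes singular there), a simple instance of the location ambiguity region mentioned above.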

  15. SU-E-P-36: Evaluation of MLC Positioning Errors in Dynamic IMRT Treatments by Analyzing Dynalog Files

    International Nuclear Information System (INIS)

    Olasolo, J; Pellejero, S; Gracia, M; Gallardo, N; Martin, M; Lozares, S; Maneru, F; Bragado, L; Miquelez, S; Rubio, A

    2015-01-01

    Purpose: To assess the accuracy of MLC positioning in Varian linear accelerators for the dynamic IMRT technique, from the analysis of dynalog files generated by the MLC controller. Methods: In Clinac accelerators (pre-TrueBeam technology), the control system has an approximately 50 ms delay (one control cycle time): the system compares the measured position to the planned position corresponding to the next control cycle. As confirmed by Varian technical support, this causes measured positions to appear in dynalogs one cycle out of phase with respect to the planned positions. Around 9000 dynalogs were analyzed, coming from the three linear accelerators of our center (one Trilogy and two Clinac 21EX) equipped with a Millennium 120 MLC. In order to compare our results to recent publications, leaf positioning errors (RMS and 95th percentile) were calculated with and without the delay effect. Dynalogs were analyzed using in-house Matlab software. Results: The RMS errors were 0.341, 0.339 and 0.348 mm for each linac, with an average error of 0.343 mm. The 95th percentiles of the error were 0.617, 0.607 and 0.625 mm, with an average of 0.617 mm. A recent multi-institution study carried out by Kerns et al. found a mean leaf RMS error of 0.32 mm and a 95th percentile error value of 0.64 mm. Without the delay effect, the mean leaf RMS errors obtained were 0.040, 0.042 and 0.038 mm for each treatment machine, with an average of 0.040 mm. The 95th percentile error values obtained were 0.057, 0.058 and 0.054 mm, with an average of 0.056 mm. Conclusion: Results obtained for the mean leaf RMS error and the mean 95th percentile were consistent with the multi-institution study. Error statistics calculated with the delay effect are significantly larger due to the speed-proportional and systematic leaf offset. Consequently, it is proposed to correct this effect in dynalog analysis to determine the MLC performance.
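
The delay correction described above amounts to comparing each measured sample against the planned position of the previous control cycle. A sketch with a hypothetical dynalog-like array follows; only the one-cycle (~50 ms) lag is taken from the abstract:

```python
import numpy as np

def leaf_error_stats(planned, measured, correct_delay=True):
    """RMS and 95th-percentile absolute leaf position error.

    planned, measured: (n_samples, n_leaves) arrays of positions (mm).
    With correct_delay, each measured sample is compared against the
    planned position of the previous control cycle, removing the
    controller's one-cycle reporting lag."""
    if correct_delay:
        planned, measured = planned[:-1], measured[1:]
    err = np.abs(measured - planned)
    return np.sqrt(np.mean(err ** 2)), np.percentile(err, 95)

# toy data: 4 leaves sweeping at 2 mm per cycle, reported one cycle late
rng = np.random.default_rng(0)
planned = np.cumsum(np.full((200, 4), 2.0), axis=0)
measured = planned.copy()
measured[1:] = planned[:-1]                 # one-cycle reporting delay
measured += rng.normal(0.0, 0.04, measured.shape)

rms_raw, _ = leaf_error_stats(planned, measured, correct_delay=False)
rms_corrected, _ = leaf_error_stats(planned, measured, correct_delay=True)
```

The uncorrected RMS is dominated by the speed-proportional lag term (here the full 2 mm per-cycle sweep), while the corrected RMS reflects only the genuine positioning noise, which is the distinction drawn in the conclusion above.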

  16. Errors in measuring transverse and energy jitter by beam position monitors

    Energy Technology Data Exchange (ETDEWEB)

    Balandin, V.; Decking, W.; Golubeva, N.

    2010-02-15

    The problem of errors in the difference orbit parameters, arising from finite BPM resolution when the parameters are found as a least squares fit to the BPM data, is one of the standard and important problems of accelerator physics. Even though, for the case of transversely uncoupled motion, the covariance matrix of reconstruction errors can be calculated "by hand", the direct usage of the obtained solution as a tool for designing a "good measurement system" is not entirely straightforward, and a better understanding of the nature of the problem is still desirable. We make a step in this direction by introducing dynamics into this problem, which at first glance seems to be static. We consider a virtual beam consisting of virtual particles obtained by applying the reconstruction procedure to "all possible values" of the BPM reading errors. This beam propagates along the beam line according to the same rules as any real beam and has all the usual beam-dynamical characteristics, such as emittances, energy spread, dispersions, betatron functions, etc. All these values become properties of the BPM measurement system. One can compare two BPM systems by comparing their error emittances and rms error energy spreads, or, for a given measurement system, one can achieve the needed balance between coordinate and momentum reconstruction errors by matching the error betatron functions at the point of interest to the desired values. (orig.)
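
The "virtual beam" construction can be sketched in one transverse dimension: draw many realizations of BPM reading noise, push each through the least-squares reconstruction, and treat the resulting parameter errors as particles. The drift-space optics, BPM placement, and resolution below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# drift-space model: BPM reading = x0 + s * x0' for BPMs at positions s
s_bpm = np.array([1.0, 3.0, 5.0, 8.0])   # BPM locations along the line (m)
A = np.column_stack([np.ones_like(s_bpm), s_bpm])
sigma = 0.1e-3                            # BPM resolution (m)

# analytic covariance of the least-squares estimate of (x0, x0')
cov_analytic = sigma ** 2 * np.linalg.inv(A.T @ A)

# "virtual beam": the reconstruction applied to many draws of BPM noise
readings = rng.normal(0.0, sigma, size=(20000, s_bpm.size))
particles = readings @ np.linalg.pinv(A).T   # rows are (x0, x0') errors
cov_mc = np.cov(particles, rowvar=False)

# the determinant of the error covariance plays the role of an "error emittance"
err_emittance = np.sqrt(np.linalg.det(cov_analytic))
```

The Monte Carlo covariance of the virtual particles converges to the analytic least-squares covariance, so quantities such as the error emittance can be read off either way.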

  18. The role of errors in the measurements performed at the reprocessing plant head-end for material accountancy purposes

    International Nuclear Information System (INIS)

    Foggi, C.; Liebetrau, A.M.; Petraglia, E.

    1999-01-01

    One of the most common procedures used in determining the amount of nuclear material contained in solutions consists of first measuring the volume and the density of the solution, and then determining the concentration of this material. This presentation focuses on errors generated at the process line in the measurement of volume and density. These errors and their associated uncertainties can be grouped into distinct categories depending on their origin: those attributable to measuring instruments; those attributable to operational procedures; variability in measurement conditions; and errors in the analysis and interpretation of results. Possible error sources, their relative magnitudes, and an error propagation rationale are discussed, with emphasis placed on biases and errors of the last three types, called systematic errors.

  19. Decisions to shoot in a weapon identification task: The influence of cultural stereotypes and perceived threat on false positive errors.

    Science.gov (United States)

    Fleming, Kevin K; Bandy, Carole L; Kimble, Matthew O

    2010-01-01

    The decision to shoot a gun engages executive control processes that can be biased by cultural stereotypes and perceived threat. The neural locus of the decision to shoot is likely to be found in the anterior cingulate cortex (ACC), where cognition and affect converge. Male military cadets at Norwich University (N=37) performed a weapon identification task in which they made rapid decisions to shoot when images of guns appeared briefly on a computer screen. Reaction times, error rates, and electroencephalogram (EEG) activity were recorded. Cadets reacted more quickly and accurately when guns were primed by images of Middle-Eastern males wearing traditional clothing. However, cadets also made more false positive errors when tools were primed by these images. Error-related negativity (ERN) was measured for each response. Deeper ERNs were found in the medial-frontal cortex following false positive responses. Cadets who made fewer errors also produced deeper ERNs, indicating stronger executive control. Pupil size was used to measure autonomic arousal related to perceived threat. Images of Middle-Eastern males in traditional clothing produced larger pupil sizes. An image of Osama bin Laden induced the largest pupil size, as would be predicted for the exemplar of Middle East terrorism. Cadets who showed greater increases in pupil size also made more false positive errors. Regression analyses were performed to evaluate predictions based on current models of perceived threat, stereotype activation, and cognitive control. Measures of pupil size (perceived threat) and ERN (cognitive control) explained significant proportions of the variance in false positive errors to Middle-Eastern males in traditional clothing, while measures of reaction time, signal detection response bias, and stimulus discriminability explained most of the remaining variance.

  20. SU-G-BRB-03: Assessing the Sensitivity and False Positive Rate of the Integrated Quality Monitor (IQM) Large Area Ion Chamber to MLC Positioning Errors

    Energy Technology Data Exchange (ETDEWEB)

    Boehnke, E McKenzie; DeMarco, J; Steers, J; Fraass, B [Cedars-Sinai Medical Center, Los Angeles, CA (United States)

    2016-06-15

    Purpose: To examine both the IQM’s sensitivity and false positive rate to varying MLC errors. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An un-modified SBRT Liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1mm to ±5mm). These unmodified and modified plans were measured multiple times each by the IQM (a large area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field’s delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed significant (p <0.005) abilities to predict MLC errors. Using the area under the curve, we show the IQM’s ability to detect errors increases with increasing MLC error (Spearman’s Rho=0.8056, p<0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven to be able to detect not only MLC errors, but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.

  1. 2D position sensitive microstrip sensors with charge division along the strip Studies on the position measurement error

    CERN Document Server

    Bassignana, D; Fernandez, M; Jaramillo, R; Lozano, M; Munoz, F.J; Pellegrini, G; Quirion, D; Vila, I; Vitorero, F

    2013-01-01

    Position sensitivity in semiconductor detectors of ionizing radiation is usually achieved by segmenting the sensing diode junction into many small sensing elements read out separately, as in conventional microstrip and pixel detectors. Alternatively, position sensitivity can be obtained by splitting the ionization signal collected by one single electrode among more than one readout channel, with the ratio of the collected charges depending on the position where the signal was primarily generated. Following this latter approach, we implemented the charge division method in a conventional microstrip detector to obtain position sensitivity along the strip. We manufactured a proof-of-concept demonstrator where the conventional aluminum electrodes were replaced by slightly resistive electrodes made of strongly doped poly-crystalline silicon and read out at both strip ends. Here, we partially summarize the laser characterization of this first proof-of-concept demonstrator with special emphasis ...
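
The charge-division principle described above reduces to a ratio of end-of-strip charges: with a uniformly resistive electrode read out at both ends, the charge splits like a resistive divider, so the fraction collected at one end is linear in the hit position. A minimal sketch (the strip length and charge values are hypothetical, and a real readout must also handle noise and calibration):

```python
def charge_division_position(q_left, q_right, strip_length):
    """Hit position along a resistive strip, measured from the left end.

    For a uniformly resistive electrode read out at both ends,
    Q_right / (Q_left + Q_right) equals x / L, so the position is
    recovered directly from the charge ratio."""
    return strip_length * q_right / (q_left + q_right)

# a hit at one quarter of a 20 mm strip sends 3/4 of the charge to the
# nearer (left) end and 1/4 to the far end
x = charge_division_position(q_left=7.5, q_right=2.5, strip_length=20.0)
```

Using the charge ratio rather than either charge alone makes the reconstruction insensitive to the total deposited charge.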

  2. A false positive food chain error associated with a generic predator gut content ELISA

    Science.gov (United States)

    Conventional prey-specific gut content ELISA and PCR assays are useful for identifying predators of insect pests in nature. However, these assays are prone to yielding certain types of food chain errors. For instance, it is possible that prey remains can pass through the food chain as the result of ...

  3. On minimizing assignment errors and the trade-off between false positives and negatives in parentage analysis

    KAUST Repository

    Harrison, Hugo B.

    2013-11-04

    Genetic parentage analyses provide a practical means with which to identify parent-offspring relationships in the wild. In Harrison et al.'s study (2013a), we compared three methods of parentage analysis and showed that the number and diversity of microsatellite loci were the most important factors defining the accuracy of assignments. Our simulations revealed that an exclusion-Bayes theorem method was more susceptible to false-positive and false-negative assignments than the other methods tested. Here, we analyse and discuss the trade-off between type I and type II errors in parentage analyses. We show that controlling for false-positive assignments, without reporting type II errors, can be misleading. Our findings illustrate the need to estimate and report both the rate of false-positive and false-negative assignments in parentage analyses. © 2013 John Wiley & Sons Ltd.

  4. On minimizing assignment errors and the trade-off between false positives and negatives in parentage analysis

    KAUST Repository

    Harrison, Hugo B.; Saenz Agudelo, Pablo; Planes, Serge; Jones, Geoffrey P.; Berumen, Michael L.

    2013-01-01

    Genetic parentage analyses provide a practical means with which to identify parent-offspring relationships in the wild. In Harrison et al.'s study (2013a), we compared three methods of parentage analysis and showed that the number and diversity of microsatellite loci were the most important factors defining the accuracy of assignments. Our simulations revealed that an exclusion-Bayes theorem method was more susceptible to false-positive and false-negative assignments than the other methods tested. Here, we analyse and discuss the trade-off between type I and type II errors in parentage analyses. We show that controlling for false-positive assignments, without reporting type II errors, can be misleading. Our findings illustrate the need to estimate and report both the rate of false-positive and false-negative assignments in parentage analyses. © 2013 John Wiley & Sons Ltd.
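
    The type I/type II trade-off these two records discuss can be made concrete with a toy assignment-score model. The score distributions below are hypothetical (not derived from the study's microsatellite data); the point is only that raising the acceptance threshold trades false positives for false negatives, so both rates must be reported.

```python
import random

random.seed(1)

# Hypothetical assignment confidence scores (e.g., LOD-like) for true
# parent-offspring pairs and for unrelated pairs.
true_pairs = [random.gauss(5.0, 2.0) for _ in range(500)]
unrelated_pairs = [random.gauss(0.0, 2.0) for _ in range(500)]

def error_rates(threshold):
    """Type I (false positive): unrelated pair accepted as a match.
       Type II (false negative): true pair rejected."""
    fpr = sum(s >= threshold for s in unrelated_pairs) / len(unrelated_pairs)
    fnr = sum(s < threshold for s in true_pairs) / len(true_pairs)
    return fpr, fnr

for t in (0.0, 2.5, 5.0):
    fpr, fnr = error_rates(t)
    print(f"threshold={t:>4}: false positives={fpr:.2%}, false negatives={fnr:.2%}")
```

    Reporting only the false-positive rate at a strict threshold would hide the large false-negative rate that comes with it, which is exactly the caveat the abstract raises.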

  5. SU-E-J-94: Positioning Errors Resulting From Using Bony Anatomy Alignment for Treating SBRT Lung Tumor

    International Nuclear Information System (INIS)

    Frame, C; Ding, G

    2014-01-01

    Purpose: To quantify patient setup errors based on bony anatomy registration rather than 3D tumor alignment for SBRT lung treatments. Method: A retrospective study was performed for patients treated with lung SBRT and imaged with kV cone-beam computed tomography (kV-CBCT) image guidance. Daily CBCT images were registered to treatment planning CTs based on bony anatomy alignment, and inter-fraction tumor movement was then evaluated by comparing the shift in the tumor center in the medial-lateral, anterior-posterior, and superior-inferior directions. The PTV V100% was evaluated for each patient based on the average daily tumor displacement to assess the impact of the positioning error on target coverage when registrations were based on bony anatomy. Of the 35 patients studied, 15 were free-breathing treatments, 10 used abdominal compression with a stereotactic body frame, and the remaining 10 were performed with BodyFIX vacuum bags. Results: For free-breathing treatments, the range of tumor displacement error is 1–6 mm in the medial-lateral, 1–13 mm in the anterior-posterior, and 1–7 mm in the superior-inferior directions. These positioning errors lead to a 6–22% underdose in PTV V100%. Patients treated with abdominal compression immobilization showed positional errors of 0–4 mm medial-laterally, 0–3 mm anterior-posteriorly, and 0–2 mm inferior-superiorly, with PTV V100% underdose ranging between 6% and 17%. For patients immobilized with the vacuum bags, the positional errors were 0–1 mm medial-laterally, 0–1 mm anterior-posteriorly, and 0–2 mm inferior-superiorly, with PTV V100% underdose ranging between 5% and 6% only. Conclusion: It is necessary to align the tumor target by using 3D image guidance to ensure adequate tumor coverage before performing SBRT lung treatments. The BodyFIX vacuum bag immobilization method has the least positioning errors among the three methods studied when bony anatomy is used for

  6. Motivational processes from expectancy-value theory are associated with variability in the error positivity in young children.

    Science.gov (United States)

    Kim, Matthew H; Marulis, Loren M; Grammer, Jennie K; Morrison, Frederick J; Gehring, William J

    2017-03-01

    Motivational beliefs and values influence how children approach challenging activities. The current study explored motivational processes from an expectancy-value theory framework by studying children's mistakes and their responses to them by focusing on two event-related potential (ERP) components: the error-related negativity (ERN) and the error positivity (Pe). Motivation was assessed using a child-friendly challenge puzzle task and a brief interview measure prior to ERP testing. Data from 50 4- to 6-year-old children revealed that greater perceived competence beliefs were related to a larger Pe, whereas stronger intrinsic task value beliefs were associated with a smaller Pe. Motivation was unrelated to the ERN. Individual differences in early motivational processes may reflect electrophysiological activity related to conscious error awareness. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. The impact of sensor errors and building structures on particle filter-based inertial positioning

    DEFF Research Database (Denmark)

    Toftkjær, Thomas; Kjærgaard, Mikkel Baun

    2012-01-01

    Positioning systems that do not depend on in-building infrastructures are critical for enabling a range of applications within pervasive computing. Particle filter-based inertial positioning promises infrastructure-less positioning, but previous research has not provided an understanding of how t...

  8. Hand position-dependent modulation of errors in vibrotactile temporal order judgments

    DEFF Research Database (Denmark)

    Ritterband-Rosenbaum, Anina; Hermosillo, Robert; Kroliczak, Gregory

    2014-01-01

    this confounded information is processed in the brain is poorly understood. In the present set of experiments, we addressed this knowledge gap by using single-pulse transcranial magnetic stimulation (TMS) to disrupt processing in the right or left posterior parietal cortex (PPC) during a vibrotactile TOJ task...... with stimuli applied to the right and left index fingers. In the first experiment, participants held their hands in an uncrossed configuration, and we found that when the index finger contralateral to the site of TMS was stimulated first, there was a significant increase in TOJ errors. This increase did...... that these TMS-induced changes in TOJ errors were not due to a reduced ability to detect the timing of the vibrotactile stimuli. Taken together, these results demonstrate that both the right and left PPC contribute to the processing underlying vibrotactile TOJs by integrating vibrotactile information...

  9. [Positioning errors of CT common rail technique in intensity-modulated radiotherapy for nasopharyngeal carcinoma].

    Science.gov (United States)

    Tian, Fei; Xu, Zihai; Mo, Li; Zhu, Chaohua; Chen, Chaomin

    2012-11-01

    To evaluate the value of the CT common rail technique for application in intensity-modulated radiotherapy for nasopharyngeal carcinoma (NPC). Twenty-seven NPC patients underwent Somatom CT scans using the Siemens CTVision system prior to the commencement of the radiotherapy sessions. The acquired CT images were registered with the planning CT images using the matching function of the system to obtain the linear set-up errors in 3 directions, namely X (left to right), Y (superior to inferior), and Z (anterior to posterior). The errors were then corrected online on the moving couch. The 27 NPC patients underwent a total of 110 CT scans, and the displacement deviations in the X, Y and Z directions were -0.16±1.68 mm, 0.25±1.66 mm, and 0.33±1.09 mm, respectively. The CT common rail technique can accurately and rapidly measure the spatial error between the patient posture and the target area to improve the set-up precision of intensity-modulated radiotherapy for NPC.

  10. Measurement errors in network load measurement: Effects on load management and accounting. Messfehler bei der Netzlasterfassung: Einfluss auf Lastregelung und Leistungsverrechnung

    Energy Technology Data Exchange (ETDEWEB)

    Bunten, B. (Teilbereich Lastfuehrung, ABB Netzleittechnik GmbH, Ladenburg (Germany)); Dib, R.N. (Fachhochschule Giessen-Friedberg, Bereich Elektrische Energietechnik, Friedberg (Germany))

    1994-05-16

    In electric power supply systems, continuous power measurement at the delivery points is necessary both for load management and for energy and power accounting. Electricity meters with pulse outputs are commonly used for both applications today. The authors quantify the resulting errors in peak load measurement and load management as a function of the main influencing factors. (orig.)

  11. Effect of MLC leaf position, collimator rotation angle, and gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Sen; Li, Guangjun; Wang, Maojie; Jiang, Qinfeng; Zhang, Yingjie [State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan (China); Wei, Yuquan, E-mail: yuquawei@vip.sina.com [State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan (China)

    2013-07-01

    The purpose of this study was to investigate the effect of multileaf collimator (MLC) leaf position, collimator rotation angle, and accelerator gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma. To compare dosimetric differences between the simulating plans and the clinical plans with evaluation parameters, 6 patients with nasopharyngeal carcinoma were selected for simulation of systematic and random MLC leaf position errors, collimator rotation angle errors, and accelerator gantry rotation angle errors. Dose distribution was highly sensitive to systematic MLC leaf position errors, with the sensitivity depending on field size. When the systematic MLC position errors were 0.5, 1, and 2 mm, respectively, the maximum values of the mean dose deviation, observed in the parotid glands, were 4.63%, 8.69%, and 18.32%, respectively. The dosimetric effect was comparatively small for systematic MLC shift errors. For random MLC errors up to 2 mm and collimator and gantry rotation angle errors up to 0.5°, the dosimetric effect was negligible. We suggest that quality control be regularly conducted for MLC leaves, so as to ensure that systematic MLC leaf position errors are within 0.5 mm. Because the dosimetric effect of 0.5° collimator and gantry rotation angle errors is negligible, it can be concluded that setting a proper threshold for allowed errors of collimator and gantry rotation angle may increase treatment efficacy and reduce treatment time.
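
    The field-size dependence this abstract reports has a simple geometric core, sketched below under an idealized assumption (rectangular aperture, uniform fluence, no penumbra modeling): a systematic error that shifts both opposing leaf banks outward widens the aperture by twice the per-bank error, so the fractional fluence change scales inversely with field size.

```python
def aperture_width_change(nominal_width_mm, per_bank_error_mm):
    """A systematic error opening (or closing) both opposing leaf banks
    changes the aperture width by twice the per-bank error; return the
    new width and the fractional change."""
    new_width = nominal_width_mm + 2.0 * per_bank_error_mm
    return new_width, (new_width - nominal_width_mm) / nominal_width_mm

for width in (10.0, 50.0, 100.0):
    w, frac = aperture_width_change(width, 0.5)  # 0.5 mm systematic error
    print(f"{width:5.0f} mm aperture -> {w:5.0f} mm ({frac:+.1%})")
```

    A 0.5 mm per-bank error is a 10% width change for a 10 mm segment but only 1% for a 100 mm one, which is why small IMRT segments are the sensitive case.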

  12. An Implementation of Error Minimization Position Estimate in Wireless Inertial Measurement Unit using Modification ZUPT

    Directory of Open Access Journals (Sweden)

    Adytia Darmawan

    2016-12-01

    Full Text Available Position estimation using a WIMU (Wireless Inertial Measurement Unit) is one of the emerging technologies in the field of indoor positioning systems. A WIMU can detect movement and does not depend on GPS signals. The position is then estimated using a modified ZUPT (Zero Velocity Update) method combining Filter Magnitude Acceleration (FMA), Variance Magnitude Acceleration (VMA) and Angular Rate (AR) estimation. Performance of this method was evaluated on a six-legged robot navigation system. Experimental results show that the VMA-AR combination gives the best position estimation.
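
    The core ZUPT idea can be sketched generically. This is not the paper's FMA/VMA/AR detector; it is a minimal 1-D variance-based stance detector on synthetic data (all parameters are illustrative assumptions), showing how zero-velocity resets suppress the drift that plain integration of a biased, noisy accelerometer accumulates.

```python
import math
import random

random.seed(2)

DT = 0.01           # s, sample period
BIAS = 0.02         # m/s^2 constant accelerometer bias (assumed)
NOISE = 0.05        # m/s^2 white accelerometer noise (assumed)
WINDOW = 20         # samples in the stance-detector window
VAR_THRESH = 0.004  # (m/s^2)^2 variance threshold for "stationary"

# Synthetic 1-D trace: a 1 s oscillatory motion burst (net velocity zero),
# then 5 s stationary. The biased, noisy measurement drifts when integrated.
true_accel = [math.sin(2 * math.pi * k / 50) for k in range(100)] + [0.0] * 500
meas = [a + BIAS + random.gauss(0, NOISE) for a in true_accel]

def integrate_velocity(accel, zupt):
    """Integrate acceleration to velocity; if zupt, reset velocity to zero
    whenever a sliding-window variance detector flags a stationary phase."""
    v, out = 0.0, []
    for k, a in enumerate(accel):
        v += a * DT
        if zupt and k >= WINDOW:
            win = accel[k - WINDOW:k]
            m = sum(win) / WINDOW
            if sum((x - m) ** 2 for x in win) / WINDOW < VAR_THRESH:
                v = 0.0  # zero-velocity update
        out.append(v)
    return out

drift_naive = abs(integrate_velocity(meas, zupt=False)[-1])
drift_zupt = abs(integrate_velocity(meas, zupt=True)[-1])
print(f"final velocity error: naive={drift_naive:.3f} m/s, ZUPT={drift_zupt:.4f} m/s")
```

    The variance detector stays quiet during the motion burst (high window variance) and fires during stance, which is the same role the paper's VMA criterion plays.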

  13. What Do Letter Migration Errors Reveal About Letter Position Coding in Visual Word Recognition?

    Science.gov (United States)

    Davis, Colin J.; Bowers, Jeffrey S.

    2004-01-01

    Dividing attention across multiple words occasionally results in misidentifications whereby letters apparently migrate between words. Previous studies have found that letter migrations preserve within-word letter position, which has been interpreted as support for position-specific letter coding. To investigate this issue, the authors used word…

  14. Residual position errors of lymph node surrogates in breast cancer adjuvant radiotherapy: Comparison of two arm fixation devices and the effect of arm position correction

    International Nuclear Information System (INIS)

    Kapanen, Mika; Laaksomaa, Marko; Skyttä, Tanja; Haltamo, Mikko; Pehkonen, Jani; Lehtonen, Turkka; Kellokumpu-Lehtinen, Pirkko-Liisa; Hyödynmaa, Simo

    2016-01-01

    Residual position errors of the lymph node (LN) surrogates and humeral head (HH) were determined for 2 different arm fixation devices in radiotherapy (RT) of breast cancer: a standard wrist-hold (WH) and a house-made rod-hold (RH). The effect of arm position correction (APC) based on setup images was also investigated. A total of 113 consecutive patients with early-stage breast cancer with LN irradiation were retrospectively analyzed (53 and 60 using the WH and RH, respectively). Residual position errors of the LN surrogates (Th1-2 and clavicle) and the HH were investigated to compare the 2 fixation devices. The position errors and setup margins were determined before and after the APC to investigate the efficacy of the APC in the treatment situation. A threshold of 5 mm was used for the residual errors of the clavicle and Th1-2 to perform the APC, and a threshold of 7 mm was used for the HH. The setup margins were calculated with the van Herk formula. Irradiated volumes of the HH were determined from RT treatment plans. With the WH and the RH, setup margins up to 8.1 and 6.7 mm should be used for the LN surrogates, and margins up to 4.6 and 3.6 mm should be used to spare the HH, respectively, without the APC. After the APC, the margins of the LN surrogates were equal to or less than 7.5/6.0 mm with the WH/RH, but margins up to 4.2/2.9 mm were required for the HH. The APC was needed at least once with both the devices for approximately 60% of the patients. With the RH, irradiated volume of the HH was approximately 2 times more than with the WH, without any dose constraints. Use of the RH together with the APC resulted in minimal residual position errors and setup margins for all the investigated bony landmarks. Based on the obtained results, we prefer the house-made RH. However, more attention should be given to minimize the irradiation of the HH with the RH than with the WH.
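
    The van Herk formula named in this abstract is the standard published margin recipe M = 2.5Σ + 0.7σ, where Σ is the standard deviation of the systematic setup errors and σ that of the random errors. A one-line sketch (the input values below are illustrative, not the study's data):

```python
def van_herk_margin(systematic_sd_mm, random_sd_mm):
    """CTV-to-PTV margin recipe of van Herk et al.: M = 2.5*Sigma + 0.7*sigma,
    chosen so that 90% of patients receive at least 95% of the prescribed
    dose to the CTV. Inputs and output in mm."""
    return 2.5 * systematic_sd_mm + 0.7 * random_sd_mm

# Illustrative values only: Sigma = 2 mm systematic, sigma = 3 mm random.
print(f"margin = {van_herk_margin(2.0, 3.0):.1f} mm")
```

    The heavy 2.5 weight on Σ explains why correcting systematic components, as the arm position correction above does, shrinks margins so effectively.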

  15. Dosimetric impact of systematic MLC positional errors on step and shoot IMRT for prostate cancer: a planning study

    International Nuclear Information System (INIS)

    Ung, N.M.; Harper, C.S.; Wee, L.

    2011-01-01

    Full text: The positional accuracy of multileaf collimators (MLC) is crucial in ensuring precise delivery of intensity-modulated radiotherapy (IMRT). The aim of this planning study was to investigate the dosimetric impact of systematic MLC positional errors on step and shoot IMRT of prostate cancer. A total of 12 perturbations of MLC leaf banks were introduced to six prostate IMRT treatment plans to simulate MLC systematic positional errors. Dose volume histograms (DVHs) were generated for the extraction of dose endpoint parameters. Plans were evaluated in terms of changes to the defined endpoint dose parameters, conformity index (CI) and healthy tissue avoidance (HTA) for the planning target volume (PTV), rectum and bladder. Negative perturbations of MLC had been found to produce greater changes to endpoint dose parameters than positive perturbations of MLC (p < 0.05). Asynchronised MLC perturbations of 1 mm resulted in median changes in D95 of -1.2 and 0.9% respectively. Negative and positive synchronised MLC perturbations of 1 mm in one direction resulted in median changes in D95 of -2.3 and 1.8% respectively. Doses to rectum were generally more sensitive to systematic MLC errors compared to bladder (p < 0.01). Negative and positive synchronised MLC perturbations of 1 mm in one direction resulted in median changes in endpoint dose parameters of rectum and bladder from 1.0 to 2.5%. Maximum reductions of -4.4 and -7.3% were recorded for conformity index (CI) and healthy tissue avoidance (HTA) respectively due to synchronised MLC perturbation of 1 mm. MLC errors resulted in dosimetric changes in IMRT plans for prostate. (author)

  16. GPS Users Positioning Errors during Disturbed Near-Earth Space Conditions

    National Research Council Canada - National Science Library

    Afraimovich, E. L; Demyanov, V. V; Tatarinov, P. V; Astafieva, E. I; Zhivetiev, I. V

    2006-01-01

    .... (GPS Solutions, 2003, V7, N2, 109) showed that during geomagnetic disturbances in near-Earth space the quality of GNSS operation deteriorates and, as a consequence, positioning accuracy is reduced and failures occur...

  17. Development of a Simple Radioactive marker System to Reduce Positioning Errors in Radiation Treatment

    International Nuclear Information System (INIS)

    William H. Miller; Dr. Jatinder Palta

    2007-01-01

    The objective of this research is to implement an inexpensive, quick and simple monitor that provides an accurate indication of proper patient position during the treatment of cancer by external-beam X-ray radiation and also checks for any significant changes in patient anatomy. It is believed that this system will significantly reduce the treatment margin, provide an additional, independent quality assurance check of positioning accuracy prior to all treatments, and reduce the probability of misadministration of the therapeutic dose.

  18. Dosimetric impact of systematic MLC positional errors on step and shoot IMRT for prostate cancer: a planning study

    International Nuclear Information System (INIS)

    Ung, N.M.; Wee, L.; Harper, C.S.

    2010-01-01

    Full text: The positional accuracy of multileaf collimators (MLC) is crucial in ensuring precise delivery of intensity-modulated radiotherapy (IMRT). The aim of this planning study was to investigate the dosimetric impact of systematic MLC errors on step and shoot IMRT of prostate cancer. Twelve MLC leaf bank perturbations were introduced to six prostate IMRT treatment plans to simulate MLC systematic errors. Dose volume histograms (DVHs) were generated for the extraction of dose endpoint parameters. Plans were evaluated in terms of changes to the defined endpoint dose parameters, conformity index (CI) and healthy tissue avoidance (HTA) for the planning target volume (PTV), rectum and bladder. Negative perturbations of MLC had been found to produce greater changes to endpoint dose parameters than positive perturbations of MLC (p < 0.05). Negative and positive synchronized MLC perturbations of 1 mm resulted in median changes of -2.32 and 1.78%, respectively, to D95% of PTV, whereas asynchronized MLC perturbations of the same direction and magnitude resulted in median changes of 1.18 and 0.90%, respectively. Doses to rectum were generally more sensitive to systematic MLC errors compared to bladder. Synchronized MLC perturbations of 1 mm resulted in median changes of endpoint dose parameters to both rectum and bladder from about 1 to 3%. Maximum reductions of -4.44 and -7.29% were recorded for CI and HTA, respectively, due to synchronized MLC perturbation of 1 mm. In summary, MLC errors resulted in measurable dose changes to PTV and surrounding critical structures in prostate IMRT. (author)

  19. Position error compensation via a variable reluctance sensor applied to a Hybrid Vehicle Electric machine.

    Science.gov (United States)

    Bucak, Ihsan Ömür

    2010-01-01

    In the automotive industry, electromagnetic variable reluctance (VR) sensors have been extensively used to measure engine position and speed through a toothed wheel mounted on the crankshaft. In this work, an application that already uses the VR sensing unit for engine and/or transmission has been chosen to infer, this time, the indirect position of the electric machine in a parallel Hybrid Electric Vehicle (HEV) system. A VR sensor has been chosen to correct the position of the electric machine, mainly because it may still become critical in the operation of HEVs to avoid possible vehicle failures during start-up and on the road, especially when the machine is used with an internal combustion engine. The proposed method uses a Chi-square test and is adaptive in the sense that it derives the compensation factors during shaft operation and updates them in a timely fashion.

  20. Position Error Compensation via a Variable Reluctance Sensor Applied to a Hybrid Vehicle Electric Machine

    Directory of Open Access Journals (Sweden)

    İhsan Ömür Bucak

    2010-03-01

    Full Text Available In the automotive industry, electromagnetic variable reluctance (VR) sensors have been extensively used to measure engine position and speed through a toothed wheel mounted on the crankshaft. In this work, an application that already uses the VR sensing unit for engine and/or transmission has been chosen to infer, this time, the indirect position of the electric machine in a parallel Hybrid Electric Vehicle (HEV) system. A VR sensor has been chosen to correct the position of the electric machine, mainly because it may still become critical in the operation of HEVs to avoid possible vehicle failures during start-up and on the road, especially when the machine is used with an internal combustion engine. The proposed method uses a Chi-square test and is adaptive in the sense that it derives the compensation factors during shaft operation and updates them in a timely fashion.

  1. Positioning of Emotional Intelligence Skills within the Overall Skillset of Practice-Based Accountants: Employer and Graduate Requirements

    Science.gov (United States)

    Coady, Peggy; Byrne, Seán; Casey, John

    2018-01-01

    This paper presents evidence of employer and graduate attitudes on the skill set requirements for professional accountants, and whether university accounting programs develop these skills, and in particular emotional intelligence (EI) skills. We use priority indices and strategic mapping to evaluate the positioning of 31 skills. This analysis…

  2. Causes and consequences of timing errors associated with global positioning system collar accelerometer activity monitors

    Science.gov (United States)

    Adam J. Gaylord; Dana M. Sanchez

    2014-01-01

    Direct behavioral observations of multiple free-ranging animals over long periods of time and large geographic areas is prohibitively difficult. However, recent improvements in technology, such as Global Positioning System (GPS) collars equipped with motion-sensitive activity monitors, create the potential to remotely monitor animal behavior. Accelerometer-equipped...

  3. Accounting for sampling error when inferring population synchrony from time-series data: a Bayesian state-space modelling approach with applications.

    Directory of Open Access Journals (Sweden)

    Hugues Santin-Janin

    Full Text Available BACKGROUND: Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. METHODOLOGY/PRINCIPAL FINDINGS: The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplified our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging few replicates of population size estimates performed poorly at reducing the bias of the classical estimator of the synchrony strength. CONCLUSION/SIGNIFICANCE: The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data.
We provided a user-friendly R-program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for
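
    The downward bias of the zero-lag correlation under independent sampling error, which motivates this state-space approach, can be reproduced in a few lines. The variances below are arbitrary illustration values, not the paper's data: two populations share a common driver, and independent observation noise attenuates their measured correlation.

```python
import random

random.seed(3)

N = 5000
# Two populations driven by a shared environmental signal (a Moran-like
# effect), observed with independent sampling error of similar magnitude.
shared = [random.gauss(0, 1) for _ in range(N)]
pop1_true = [s + random.gauss(0, 0.5) for s in shared]
pop2_true = [s + random.gauss(0, 0.5) for s in shared]
pop1_obs = [x + random.gauss(0, 1) for x in pop1_true]  # sampling error
pop2_obs = [x + random.gauss(0, 1) for x in pop2_true]

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5

print(f"true synchrony:     {corr(pop1_true, pop2_true):.2f}")
print(f"observed synchrony: {corr(pop1_obs, pop2_obs):.2f}")
```

    The noise inflates each series' variance without adding covariance, so the naive estimate understates the true synchrony, exactly the bias the state-space model is built to remove.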

  4. Compensation for geometric modeling errors by positioning of electrodes in electrical impedance tomography

    International Nuclear Information System (INIS)

    Hyvönen, N; Majander, H; Staboulis, S

    2017-01-01

    Electrical impedance tomography aims at reconstructing the conductivity inside a physical body from boundary measurements of current and voltage at a finite number of contact electrodes. In many practical applications, the shape of the imaged object is subject to considerable uncertainties that render reconstructing the internal conductivity impossible if they are not taken into account. This work numerically demonstrates that one can compensate for inaccurate modeling of the object boundary in two spatial dimensions by finding compatible locations and sizes for the electrodes as a part of a reconstruction algorithm. The numerical studies, which are based on both simulated and experimental data, are complemented by proving that the employed complete electrode model is approximately conformally invariant, which suggests that the obtained reconstructions in mismodeled domains reflect conformal images of the true targets. The numerical experiments also confirm that a similar approach does not, in general, lead to a functional algorithm in three dimensions. (paper)

  5. Methods for Estimation of Radiation Risk in Epidemiological Studies Accounting for Classical and Berkson Errors in Doses

    KAUST Repository

    Kukush, Alexander

    2011-01-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R(1+R)^(-1), R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^(mes) = f_i Q_i^(mes) / M_i^(mes). Here, Q_i^(mes) is the measured content of radioiodine in the thyroid gland of person i at time t^(mes), M_i^(mes) is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^(Q) and V_i^(M), so that Q_i^(mes) = Q_i^(tr) V_i^(Q) (a classical measurement error model) and M_i^(tr) = M_i^(mes) V_i^(M) (a Berkson measurement error model). Here, Q_i^(tr) is the true content of radioactivity in the thyroid gland, and M_i^(tr) is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^(mes), M_i^(mes)) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.

  6. Methods for estimation of radiation risk in epidemiological studies accounting for classical and Berkson errors in doses.

    Science.gov (United States)

    Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre

    2011-02-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R(1+R)^(-1), R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^(mes) = f_i Q_i^(mes) / M_i^(mes). Here, Q_i^(mes) is the measured content of radioiodine in the thyroid gland of person i at time t^(mes), M_i^(mes) is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^(Q) and V_i^(M), so that Q_i^(mes) = Q_i^(tr) V_i^(Q) (a classical measurement error model) and M_i^(tr) = M_i^(mes) V_i^(M) (a Berkson measurement error model). Here, Q_i^(tr) is the true content of radioactivity in the thyroid gland, and M_i^(tr) is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^(mes), M_i^(mes)) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
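
    The classical/Berkson distinction these two records rely on can be illustrated with a deliberately simplified linear (not logistic) Monte Carlo, using arbitrary parameters rather than the study's dose model: classical error in the covariate attenuates the estimated slope, whereas pure Berkson error leaves a linear slope estimate unbiased.

```python
import random

random.seed(4)

N = 20000
BETA = 2.0          # true slope of a toy linear dose-response y = BETA * dose
SD_X, SD_U = 1.0, 1.0

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

# Classical error: the true dose varies, and we observe dose + noise.
x_true = [random.gauss(0, SD_X) for _ in range(N)]
y = [BETA * x + random.gauss(0, 0.5) for x in x_true]
x_classical = [x + random.gauss(0, SD_U) for x in x_true]

# Berkson error: an assigned (e.g., group-average) dose is recorded, and the
# true individual dose scatters around the assigned value.
x_assigned = [random.gauss(0, SD_X) for _ in range(N)]
x_berkson_true = [w + random.gauss(0, SD_U) for w in x_assigned]
y_b = [BETA * x + random.gauss(0, 0.5) for x in x_berkson_true]

print(f"classical-error slope: {ols_slope(x_classical, y):.2f}  (attenuated)")
print(f"Berkson-error slope:   {ols_slope(x_assigned, y_b):.2f}  (unbiased here)")
```

    With equal signal and noise variances, the classical-error slope shrinks toward BETA/2, which is why methods such as regression calibration and SIMEX, named in the abstract, are needed for the classical component.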

  7. Influence of chronic neck pain on cervical joint position error (JPE): Comparison between young and elderly subjects.

    Science.gov (United States)

    Alahmari, Khalid A; Reddy, Ravi Shankar; Silvian, Paul; Ahmad, Irshad; Nagaraj, Venkat; Mahtab, Mohammad

    2017-11-06

    Evaluation of cervical joint position sense in subjects with chronic neck pain has gained importance in recent times. Different authors have established increased joint position error (JPE) in subjects with acute neck pain. However, there is a paucity of studies establishing the influence of chronic neck pain on cervical JPE. The objective of the study was to understand the influence of chronic neck pain on cervical JPE, and to examine the differences in cervical JPE between young and elderly subjects with chronic neck pain. Forty-two chronic neck pain patients (mean age 47.4) were compared for cervical JPE with 42 age-matched healthy subjects (mean age 47.8), using a digital inclinometer. The cervical JPE were measured in flexion, extension, and rotation in right and left movement directions. The comparison showed significantly larger errors in subjects with chronic neck pain than in healthy subjects. The comparison between young and elderly subjects with chronic neck pain revealed no significant differences (p > 0.05) in cervical JPE. Cervical joint position sense is impaired in subjects with chronic neck pain.

  8. [Medical errors from positions of mutual relations of patient-lawyer-doctor].

    Science.gov (United States)

    Radysh, Ia F; Tsema, Ie V; Mehed', V P

    2013-01-01

    The basic theoretical and practical aspects of the problem of malpractice in the health protection system of Ukraine are presented in the article. The essence of the term "malpractice" is expounded with specific examples. Types of malpractice, the conditions under which they arise, and the kinds of responsibility for committing malpractice are considered. Special attention is paid to the legal, mental, and ethical aspects of the problem from the standpoint of protecting the rights of both the patient and the medical worker. The necessity of classifying malpractice as intentional or unintentional, and as excusable or impermissible, is substantiated.

  9. A new approach to the form and position error measurement of the auto frame surface based on laser

    Science.gov (United States)

    Wang, Hua; Li, Wei

    2013-03-01

    The auto frame is a very large workpiece, with a length of up to 12 meters and a width of up to 2 meters, so measuring it by independent manual operation is inconvenient and not automatic. In this paper we propose a new approach to reconstructing the 3D model of such a large workpiece, especially the auto truck frame, based on multiple pulsed lasers, for the purpose of measuring form and position errors. Within each area of interest, only one high-speed camera and two lasers are needed. The approach is fast, high-precision, and economical.

  10. Automatic detection of patient identification and positioning errors in radiation therapy treatment using 3-dimensional setup images.

    Science.gov (United States)

    Jani, Shyam S; Low, Daniel A; Lamb, James M

    2015-01-01

    To develop an automated system that detects patient identification and positioning errors between 3-dimensional computed tomography (CT) and kilovoltage CT planning images. Planning kilovoltage CT images were collected for head and neck (H&N), pelvis, and spine treatments with corresponding 3-dimensional cone beam CT and megavoltage CT setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. For positioning errors, setup and planning images were misaligned by 1 to 5 cm in the 6 anatomical directions for H&N and pelvis patients. Spinal misalignments were simulated by misaligning to adjacent vertebral bodies. Image pairs were assessed using commonly used image similarity metrics as well as custom-designed metrics. Linear discriminant analysis classification models were trained and tested on the imaging datasets, and misclassification error (MCE), sensitivity, and specificity parameters were estimated using 10-fold cross-validation. For patient identification, our workflow produced MCE estimates of 0.66%, 1.67%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivity and specificity ranged from 97.5% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 95.4% and 97.7%. MCEs for 1-cm H&N/pelvis misalignments were 1.3%/5.1% and 9.1%/8.6% for TomoTherapy and TrueBeam images, respectively. Two-centimeter MCE estimates were 0.4%/1.6% and 3.1%/3.2%, respectively. MCEs for vertebral body misalignments were 4.8% and 3.6% for TomoTherapy and TrueBeam images, respectively. Patient identification and gross misalignment errors can be robustly and automatically detected using 3-dimensional setup images of different energies across 3 commonly treated anatomical sites. Copyright © 2015 American Society for Radiation Oncology.
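The classification step described (linear discriminant analysis scored with 10-fold cross-validation, reporting MCE, sensitivity, and specificity) can be sketched with a hand-rolled two-class LDA. The two synthetic features below merely stand in for the image-similarity metrics, which are not specified in the abstract.

```python
import numpy as np

def fit_lda(X, y):
    """Two-class LDA: class means plus a pooled covariance give a linear boundary."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class covariance
    S = (np.cov(X0, rowvar=False) * (len(X0) - 1) +
         np.cov(X1, rowvar=False) * (len(X1) - 1)) / (len(X) - 2)
    w = np.linalg.solve(S, m1 - m0)
    b = -0.5 * (m0 + m1) @ w + np.log(len(X1) / len(X0))
    return w, b

def predict_lda(w, b, X):
    return (X @ w + b > 0).astype(int)

def confusion_stats(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    mce = (fp + fn) / len(y_true)
    return mce, tp / (tp + fn), tn / (tn + fp)  # MCE, sensitivity, specificity

# 10-fold cross-validation on synthetic "similarity metric" features
rng = np.random.default_rng(1)
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 2)) + 2.5 * y[:, None]  # well-separated classes
folds = np.array_split(rng.permutation(n), 10)
preds = np.empty(n, dtype=int)
for fold in folds:
    train = np.setdiff1d(np.arange(n), fold)
    w, b = fit_lda(X[train], y[train])
    preds[fold] = predict_lda(w, b, X[fold])
mce, sens, spec = confusion_stats(y, preds)
```

Each fold is held out once, so every sample receives an out-of-fold prediction before the three statistics are computed, mirroring the abstract's estimation procedure.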

  11. Accounting for representativeness errors in the inversion of atmospheric constituent emissions: application to the retrieval of regional carbon monoxide fluxes

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Koohkan

    2012-07-01

    Full Text Available A four-dimensional variational data assimilation system (4D-Var is developed to retrieve carbon monoxide (CO fluxes at regional scale, using an air quality network. The air quality stations that monitor CO are proximity stations located close to industrial, urban or traffic sources. The mismatch between the coarsely discretised Eulerian transport model and the observations, inferred to be mainly due to representativeness errors in this context, leads to a bias (average simulated concentrations minus observed concentrations of the same order of magnitude as the concentrations. 4D-Var leads to only a mild improvement in the bias because it does not adequately handle the representativeness issue. For this reason, a simple statistical subgrid model is introduced and is coupled to 4D-Var. In addition to CO fluxes, the optimisation seeks to jointly retrieve influence coefficients, which quantify each station's representativeness. The method leads to a much better representation of the CO concentration variability, with a significant improvement of statistical indicators. The resulting increase in the total inventory estimate is close to the one obtained from remote sensing data assimilation. This methodology and these experiments suggest that information useful at coarse scales can be better extracted from atmospheric constituent observations strongly impacted by representativeness errors.

  12. Accounting for model error in air quality forecasts: an application of 4DEnVar to the assimilation of atmospheric composition using QG-Chem 1.0

    Directory of Open Access Journals (Sweden)

    E. Emili

    2016-11-01

    Full Text Available Model errors play a significant role in air quality forecasts. Accounting for them in the data assimilation (DA procedures is decisive for obtaining improved forecasts. We address this issue using a reduced-order coupled chemistry–meteorology model based on quasi-geostrophic dynamics and a detailed tropospheric chemistry mechanism, which we name QG-Chem. This model has been coupled to the Object Oriented Prediction System (OOPS data assimilation software library and used to assess the potential of the 4DEnVar algorithm for air quality analyses and forecasts. The assets of 4DEnVar include the possibility to deal with multivariate aspects of atmospheric chemistry and to account for model errors of a generic type. A simple diagnostic procedure for detecting model errors is proposed, based on the 4DEnVar analysis and one additional model forecast. A large number of idealized data assimilation experiments are shown for several chemical species of relevance for air quality forecasts (O3, NOx, CO and CO2 with very different atmospheric lifetimes and chemical couplings. Experiments are done both under a perfect model hypothesis and including model error through perturbation of surface chemical emissions. Some key elements of the 4DEnVar algorithm, such as the ensemble size and localization, are also discussed. A comparison with results of 3D-Var, widely used in operational centers, shows that, for some species, analysis and next-day forecast errors can be halved when model error is taken into account. This result was obtained using a small ensemble size, which remains affordable for most operational centers. We conclude that 4DEnVar has a promising potential for operational air quality models. We finally highlight areas that deserve further research for applying 4DEnVar to large-scale chemistry models, i.e., localization techniques, propagation of analysis covariance between DA cycles and treatment of chemical nonlinearities.

  13. Modification of an impulse-factoring orbital transfer technique to account for orbit determination and maneuver execution errors

    Science.gov (United States)

    Kibler, J. F.; Green, R. N.; Young, G. R.; Kelly, M. G.

    1974-01-01

    A method has previously been developed to satisfy terminal rendezvous and intermediate timing constraints for planetary missions involving orbital operations. The method uses impulse factoring, in which a two-impulse transfer is divided into three or four impulses which add one or two intermediate orbits. The periods of the intermediate orbits and the number of revolutions in each orbit are varied to satisfy timing constraints. Techniques are developed to retarget the orbital transfer in the presence of orbit-determination and maneuver-execution errors. Sample results indicate that the nominal transfer can be retargeted with little change in either the magnitude (Delta V) or location of the individual impulses. Additionally, the total Delta V required for the retargeted transfer is little different from that required for the nominal transfer. A digital computer program developed to implement the techniques is described.

  14. Setup accuracy of stereoscopic X-ray positioning with automated correction for rotational errors in patients treated with conformal arc radiotherapy for prostate cancer

    International Nuclear Information System (INIS)

    Soete, Guy; Verellen, Dirk; Tournel, Koen; Storme, Guy

    2006-01-01

    We evaluated setup accuracy of NovalisBody stereoscopic X-ray positioning with automated correction for rotational errors with the Robotics Tilt Module in patients treated with conformal arc radiotherapy for prostate cancer. The correction of rotational errors was shown to reduce random and systematic errors in all directions. (NovalisBody TM and Robotics Tilt Module TM are products of BrainLAB A.G., Heimstetten, Germany)

  15. Cognitive moderators of children's adjustment to stressful divorce events: the role of negative cognitive errors and positive illusions.

    Science.gov (United States)

    Mazur, E; Wolchik, S A; Virdin, L; Sandler, I N; West, S G

    1999-01-01

    This study examined whether children's cognitive appraisal biases moderate the impact of stressful divorce-related events on psychological adjustment in 355 children ages 9 to 12, whose families had experienced divorce within the past 2 years. Multiple regression indicated that endorsement of negative cognitive errors for hypothetical divorce events moderates the relations between stressful divorce events and self- and maternal reports of internalizing and externalizing symptoms, but only for older children. Positive illusions buffer the effects of stressful divorce events on child-reported depression and mother-reported externalizing problems. Implications of these results for theories of stress and coping, as well as for interventions for children of divorced families, are discussed.

  16. Branch-based model for the diameters of the pulmonary airways: accounting for departures from self-consistency and registration errors.

    Science.gov (United States)

    Neradilek, Moni B; Polissar, Nayak L; Einstein, Daniel R; Glenny, Robb W; Minard, Kevin R; Carson, James P; Jiao, Xiangmin; Jacob, Richard E; Cox, Timothy C; Postlethwait, Edward M; Corley, Richard A

    2012-06-01

    We examine a previously published branch-based approach for modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that take account of error. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys, and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist, we do not recommend using the self-consistency model, even as an approximation, as we have shown that it may likely lead to an incorrect representation of the diameter geometry. The new variance model can be used instead. Measurement error has an important impact on the estimated morphometry models and needs to be addressed in the analysis. Copyright © 2012 Wiley Periodicals, Inc.

  17. An information theory account of late frontoparietal ERP positivities in cognitive control.

    Science.gov (United States)

    Barceló, Francisco; Cooper, Patrick S

    2018-03-01

    ERP research on task switching has revealed distinct transient and sustained positive waveforms (latency circa 300-900 ms) while shifting task rules or stimulus-response (S-R) mappings. However, it remains unclear whether such switch-related positivities show similar scalp topography and index context-updating mechanisms akin to those posed for domain-general (i.e., classic P300) positivities in many task domains. To examine this question, ERPs were recorded from 31 young adults (18-30 years) while they were intermittently cued to switch or repeat their perceptual categorization of Gabor gratings varying in color and thickness (switch task), or else they performed two visually identical control tasks (go/no-go and oddball). Our task cueing paradigm examined two temporally distinct stages of proactive rule updating and reactive rule execution. A simple information theory model helped us gauge cognitive demands under distinct temporal and task contexts in terms of low-level S-R pathways and higher-order rule updating operations. Task demands modulated domain-general positivities (indexed by the classic oddball P3) and switch positivities, indexed by both a cue-locked late positive complex and a sustained positivity following task transitions. Topographic scalp analyses confirmed subtle yet significant split-second changes in the configuration of neural sources for both domain-general P3s and switch positivities as a function of both the temporal and task context. These findings partly meet predictions from information estimates, and are compatible with a family of P3-like potentials indexing functionally distinct neural operations within a common frontoparietal "multiple demand" system during the preparation and execution of simple task rules. © 2016 Society for Psychophysiological Research.

  18. Modelling of the X , Y , Z positioning errors and uncertainty evaluation for the LNE’s mAFM using the Monte Carlo method

    International Nuclear Information System (INIS)

    Ceria, Paul; Ducourtieux, Sebastien; Boukellal, Younes; Feltin, Nicolas; Allard, Alexandre; Fischer, Nicolas

    2017-01-01

    In order to evaluate the uncertainty budget of the LNE’s mAFM, a reference instrument dedicated to the calibration of nanoscale dimensional standards, a numerical model has been developed to evaluate the measurement uncertainty of the metrology loop involved in the XYZ positioning of the tip relative to the sample. The objective of this model is to overcome difficulties experienced when trying to evaluate some uncertainty components which cannot be experimentally determined and more specifically, the one linked to the geometry of the metrology loop. The model is based on object-oriented programming and developed under Matlab. It integrates one hundred parameters that allow the control of the geometry of the metrology loop without using analytical formulae. The created objects, mainly the reference and the mobile prism and their mirrors, the interferometers and their laser beams, can be moved and deformed freely to take into account several error sources. The Monte Carlo method is then used to determine the positioning uncertainty of the instrument by randomly drawing the parameters according to their associated tolerances and their probability density functions (PDFs). The whole process follows Supplement 2 to ‘The Guide to the Expression of the Uncertainty in Measurement’ (GUM). Some advanced statistical tools like Morris design and Sobol indices are also used to provide a sensitivity analysis by identifying the most influential parameters and quantifying their contribution to the XYZ positioning uncertainty. The approach validated in the paper shows that the actual positioning uncertainty is about 6 nm. As the final objective is to reach 1 nm, we engage in a discussion to estimate the most effective way to reduce the uncertainty. (paper)
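The Monte Carlo procedure described (drawing the model parameters from their tolerance PDFs and propagating them through the metrology-loop model, per GUM Supplement 2) can be sketched as follows. The `position_model` function and every tolerance below are invented stand-ins: the actual LNE model integrates about one hundred geometric parameters, none of which are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo trials

def position_model(mirror_tilt_rad, dead_path_nm, cosine_err_rad):
    """Toy stand-in for the metrology-loop model: maps a few geometry
    parameters to an X-position error (nm) for a nominal 1 um displacement."""
    L = 1_000_000.0                         # nominal displacement, nm
    abbe = 50_000.0 * mirror_tilt_rad       # Abbe error: 50 um offset times tilt
    cosine = -0.5 * L * cosine_err_rad**2   # cosine (misalignment) error
    return abbe + cosine + dead_path_nm

# draw each parameter from its assumed PDF (tolerances are illustrative)
tilt = rng.normal(0.0, 1e-4, N)       # rad, Gaussian
dead = rng.uniform(-2.0, 2.0, N)      # nm, rectangular
cos_err = rng.normal(0.0, 5e-5, N)    # rad, Gaussian

errors = position_model(tilt, dead, cos_err)
u = errors.std(ddof=1)  # standard positioning uncertainty, nm
```

The standard deviation of the propagated output distribution is reported as the standard uncertainty; a sensitivity analysis (Morris, Sobol) would then rank which input PDFs dominate `u`.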

  20. The position of IAEA safeguards relative to nuclear material control accountancy by States

    International Nuclear Information System (INIS)

    Rometsch, R.; Hough, G.

    1977-01-01

    IAEA Safeguards, which are always implemented on the basis of agreements concluded between one or more Governments and the IAEA, lay down the rights and obligations of the parties; the more modern types of agreement, in particular those in connection with the Treaty on the Non-Proliferation of Nuclear Weapons, do this in quite some detail. Several articles, for instance, regulate the working relations between the States and the IAEA inspectorate. These are based on two basic obligations: that of the State to establish and maintain a "System of Accountancy for and Control of Nuclear Material", and that of the IAEA to ascertain the absence of diversion of nuclear material by verifying the findings of the States' systems, inter alia through independent measurements and observations. Other articles dealing with the working relations between States and the IAEA rule that the IAEA should take due account of the technical effectiveness of the States' systems, and mention among the criteria for determining the inspection effort the extent of functional dependence of the State's accountancy on that of the facility operator. However, quantitative relationships in this respect are left to be worked out in practice. With the help of consultants and expert advisory groups a rationale has been developed and possible practical arrangements discussed with several States concerned. The rationale for co-ordinating the work of the States' inspectorates with that of the IAEA was to use a factor by which the significant quantity used for calculating verification sampling plans would be adjusted, so as to reduce to a certain extent the IAEA's independent verification work in cases where the States would themselves do extensive verifications in a manner transparent to the IAEA. However, in practice it proved that there are a number of points in the fuel cycle where such adaptations would have little or no effect on the inspection effort necessary to achieve the safeguards objective.

  2. Positive implications from socially accountable, community-engaged medical education across two Philippines regions.

    Science.gov (United States)

    Woolley, Torres; Cristobal, Fortunato; Siega-Sur, Jusie; Ross, Simone; Neusy, Andre-Jacques; Halili, Servando; Reeve, Carole

    2018-02-01

    Hundreds of millions of people worldwide lack access to quality health services, largely because of geographic and socioeconomic maldistribution of qualified practitioners. This study describes differences between the practice locations of Philippines medical graduates from two 'socially accountable, community-engaged' health professional education (SAHPE) schools and the practice locations of graduates from two 'conventionally trained' medical schools located in the same respective geographic regions. Licensed medical graduates were currently practising in the Philippines and had been practising for at least 6 months. Graduates were from two Philippines SAHPE schools, the Ateneo de Zamboanga University-School of Medicine (ADZU-SOM) on the Zamboanga Peninsula (n=212) and the University of the Philippines Manila-School of Health Sciences (SHS-Palo) in Eastern Visayas (n=71), and from two 'conventional' medical schools in the same regions. Methods: Current graduate practice locations in municipalities or cities were linked with their respective population size and socioeconomic income class, and geocoded using Geographical Information System software onto a geospatial map of the Philippines. Bivariate analysis compared the population size and socioeconomic class of communities where the SAHPE medical graduates practised with those where 'conventional' medical school graduates practised. Thirty-one percent of ADZU-SOM medical graduates practised in rural and/or economically disadvantaged communities; SAHPE appears to play a significant role in graduates choosing to practise in such communities. Governments experiencing medical workforce maldistributions similar to those in the Philippines should consider SAHPE as a potentially cost-effective strategy for recruiting and retaining health graduates to underserved areas.

  3. Overcoming function annotation errors in the Gram-positive pathogen Streptococcus suis by a proteomics-driven approach

    Directory of Open Access Journals (Sweden)

    Bárcena José A

    2008-12-01

    Full Text Available Abstract. Background: Annotation of protein-coding genes is a key step in sequencing projects. Protein functions are mainly assigned on the basis of the amino acid sequence alone, by searching for homologous proteins. However, fully automated annotation processes often lead to wrong prediction of protein functions, and therefore time-intensive manual curation is often essential. Here we describe a fast and reliable way to correct function annotation in sequencing projects, focusing on surface proteomes. We use a proteomics approach, previously proven to be very powerful for identifying new vaccine candidates against Gram-positive pathogens. It consists of shaving the surface of intact cells with two proteases, trypsin (with its specific cleavage site) and the unspecific proteinase K, followed by LC/MS/MS analysis of the resulting peptides. The identified proteins are contrasted by computational analysis and their sequences are inspected to correct possible errors in function prediction. Results: When applied to the zoonotic pathogen Streptococcus suis, of which two strains have been recently sequenced and annotated, we identified a set of surface proteins without cytoplasmic contamination: all the proteins identified had export or retention signals towards the outside and/or the cell surface, and viability of protease-treated cells was not affected. The combination of both experimental evidence and computational methods allowed us to determine that two of these proteins are putative extracellular new adhesins that had previously been attributed a wrong cytoplasmic function. One of them is a putative component of the pilus of this bacterium. Conclusion: We illustrate the complementary nature of laboratory-based and computational methods to examine in concert the localization of a set of proteins in the cell, and demonstrate the utility of this proteomics-based strategy to experimentally correct function annotation errors in sequencing projects.

  4. [Transposition errors during learning to reproduce a sequence by the right- and the left-hand movements: simulation of positional and movement coding].

    Science.gov (United States)

    Liakhovetskiĭ, V A; Bobrova, E V; Skopin, G N

    2012-01-01

    Transposition errors during the reproduction of a hand movement sequence make it possible to obtain important information on the internal representation of this sequence in motor working memory. Analysis of such errors showed that learning to reproduce sequences of left-hand movements improves the system of positional coding (coding of positions), while learning of right-hand movements improves the system of vector coding (coding of movements). Learning of right-hand movements after left-hand performance involved the system of positional coding "imposed" by the left hand. Learning of left-hand movements after right-hand performance activated the system of vector coding. Transposition errors during learning to reproduce movement sequences can be explained by a neural network using either vector coding or both vector and positional coding.
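The distinction between the two coding schemes can be illustrated with a toy sequence: positional coding stores absolute target positions, while vector coding stores successive displacements, so an error in one stored vector shifts every later reconstructed position, whereas a positional-code error stays local. A minimal sketch:

```python
import numpy as np

sequence = np.array([[0, 0], [2, 1], [3, 3], [1, 4]])  # hand positions over time

# Positional coding: store the absolute positions directly.
positional_code = sequence.copy()

# Vector coding: store the movement (displacement) between successive positions.
vector_code = np.diff(sequence, axis=0)

# Reconstruction from the vector code accumulates displacements from the start
# point; corrupting one stored vector would propagate to all later positions.
reconstructed = np.vstack(
    [sequence[0], sequence[0] + np.cumsum(vector_code, axis=0)]
)
```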

  5. Condom Use Errors and Problems: A Comparative Study of HIV-Positive Versus HIV-Negative Young Black Men Who Have Sex With Men.

    Science.gov (United States)

    Crosby, Richard; Mena, Leandro; Yarber, William L; Graham, Cynthia A; Sanders, Stephanie A; Milhausen, Robin R

    2015-11-01

    To describe self-reported frequencies of selected condom use errors and problems among young (age, 15-29 years) black men who have sex with men (YBMSM) and to compare the observed prevalence of these errors/problems by HIV serostatus. Between September 2012 and October 2014, electronic interview data were collected from 369 YBMSM attending a federally supported sexually transmitted infection clinic located in the southern United States. Seventeen condom use errors and problems were assessed. χ² tests were used to detect significant differences in the prevalence of these 17 errors and problems between HIV-negative and HIV-positive men. The recall period was the past 90 days. The overall mean (SD) number of errors/problems was 2.98 (2.29). The mean (SD) for HIV-negative men was 2.91 (2.15), and the mean (SD) for HIV-positive men was 3.18 (2.57). These means were not significantly different (t = 1.02, df = 367, P = 0.31). Only 2 significant differences were observed between HIV-negative and HIV-positive men. Breakage (P = 0.002) and slippage (P = 0.005) were about twice as likely among HIV-positive men. Breakage occurred for nearly 30% of the HIV-positive men compared with approximately 15% among HIV-negative men. Slippage occurred for approximately 16% of the HIV-positive men compared with approximately 9% among HIV-negative men. A need exists to help YBMSM acquire the skills needed to avert breakage and slippage issues that could lead to HIV transmission. Beyond these 2 exceptions, condom use errors and problems were ubiquitous in this population regardless of HIV serostatus. Clinic-based intervention is warranted for these young men, including education about correct condom use and provision of free condoms and long-lasting lubricants.

  6. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    Science.gov (United States)

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). 
Problems with
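The broken-line linear fit and BIC comparison described above can be sketched with ordinary nonlinear least squares. This is a simplified fixed-effects illustration on synthetic data, not the authors' SAS mixed model (no random block effects, no heteroskedastic variance specification):

```python
import numpy as np
from scipy.optimize import curve_fit

# Broken-line linear (BLL) ascending model: response rises linearly with the
# dose x until a breakpoint b, then plateaus at the asymptote a.
def bll(x, a, slope, b):
    return a - slope * np.maximum(b - x, 0.0)

rng = np.random.default_rng(7)
x = np.repeat(np.array([14.5, 15.5, 16.5, 17.5, 18.5]), 8)  # SID Trp:Lys ratios (%)
y = bll(x, a=0.72, slope=0.04, b=16.5) + rng.normal(0, 0.01, x.size)

popt, _ = curve_fit(bll, x, y, p0=[0.7, 0.03, 16.0])
resid = y - bll(x, *popt)
n, k = x.size, 4                       # 3 mean parameters + 1 residual variance
sigma2 = np.mean(resid**2)             # ML estimate of the error variance
loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
bic = -2 * loglik + k * np.log(n)      # smaller BIC = better fit
print(f"breakpoint = {popt[2]:.2f}, BIC = {bic:.1f}")
```

The NLMIXED grid search for starting values corresponds here to the choice of `p0`; in practice several starting points would be tried and the fit with the best likelihood kept.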

  7. Quantitative analysis of the positioning errors of a multileaf collimator for volumetric arc therapy treatments; Analisis cuantitativo de los errores de posicionamiento de un colimador multilaminas para tratamientos de arcoterapia volumetrica

    Energy Technology Data Exchange (ETDEWEB)

    Gomez Gonzalez, N.; Garcia Repiso, S.; Martin Rincon, C.; Cons Perez, N.; Saez Beltran, M.; Delgado Aparicio, J. M.; Perez alvarez, M. E.; Verde Velasco, J. M.; Ramos Pacho, J. A.; Sena Espinel, E. de

    2013-07-01

    The precision of the positioning of the multileaf collimation system of a linear accelerator is critical, especially in IMRT treatments, where small errors can cause relevant dosimetric discrepancies with respect to the calculated plan. The accuracy and repeatability of leaf positioning can be assessed with quality-control tests, including the so-called fence test, whose image pattern reveals anomalies visually. The objective of this study is to develop a method to quantify the positioning errors of the multileaf collimator from this test. (Author)

  8. A study of the positioning errors of head and neck in the process of intensity modulation radiated therapy of nasopharyngeal carcinoma

    International Nuclear Information System (INIS)

    Lin Chengguang; Lin Liuwen; Liu Bingti; Liu Xiaomao; Li Guowen

    2011-01-01

    Objective: To investigate the positioning errors of the head and neck during intensity-modulated radiation therapy of nasopharyngeal carcinoma. Methods: Nineteen patients with middle-advanced nasopharyngeal carcinoma (T2-4N1-3M0), treated by intensity-modulated radiation therapy, underwent repeated CT during their 6-week treatment course. All patients were immobilized with a head-neck-shoulder thermoplastic mask. We evaluated their anatomic landmark coordinates in a total of 66 repeated CT data sets and the respective x, y, z shifts relative to their position in the planning CT. Results: The positioning error of the neck was 2.44 mm ± 2.24 mm, 2.05 mm ± 1.42 mm and 1.83 mm ± 1.53 mm in x, y and z, respectively; that of the head was 1.05 mm ± 0.87 mm, 1.23 mm ± 1.05 mm and 1.17 mm ± 1.55 mm. The differences in positioning error between neck and head were statistically significant in each direction (t=-6.58, -5.28, -3.42; P=0.000, 0.000, 0.001). The systematic error of the neck was 2.33, 1.67 and 1.56 times that of the head in the left-right, vertical and head-foot directions, respectively; the random error of the neck was 2.57, 1.34 and 0.99 times that of the head. Conclusions: In intensity-modulated radiation therapy of nasopharyngeal carcinoma with immobilization by a head-neck-shoulder thermoplastic mask, the positioning error of the neck is larger than that of the head. (authors)
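Systematic and random components of the kind reported above are conventionally obtained by decomposing repeated shift measurements per patient: the systematic error is the standard deviation of the patient means, and the random error is the root-mean-square of the patient standard deviations. A minimal sketch with hypothetical shift data (not the study's numbers):

```python
import numpy as np

# Shifts (mm) in one direction for several patients over repeated CTs
# (hypothetical values for illustration).
shifts = {
    "p1": [2.1, 1.5, 3.0, 2.4],
    "p2": [0.4, 1.1, 0.8, 1.6],
    "p3": [2.9, 2.2, 3.4, 2.7],
}
means = np.array([np.mean(v) for v in shifts.values()])
sds = np.array([np.std(v, ddof=1) for v in shifts.values()])

group_mean = means.mean()          # overall systematic offset M
Sigma = means.std(ddof=1)          # systematic error: SD of patient means
sigma = np.sqrt(np.mean(sds**2))   # random error: RMS of patient SDs
print(group_mean, Sigma, sigma)
```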

  9. Dosimetric implications of inter- and intrafractional prostate positioning errors during tomotherapy : Comparison of gold marker-based registrations with native MVCT.

    Science.gov (United States)

    Wust, Peter; Joswig, Marc; Graf, Reinhold; Böhmer, Dirk; Beck, Marcus; Barelkowski, Thomasz; Budach, Volker; Ghadjar, Pirus

    2017-09-01

    For high-dose radiation therapy (RT) of prostate cancer, image-guided (IGRT) and intensity-modulated RT (IMRT) approaches are standard. Less is known regarding comparisons of different IGRT techniques and the resulting residual errors, as well as regarding their influences on dose distributions. A total of 58 patients who received tomotherapy-based RT up to 84 Gy for high-risk prostate cancer underwent IGRT based either on daily megavoltage CT (MVCT) alone (n = 43) or the additional use of gold markers (n = 15) under routine conditions. Planned Adaptive (Accuray Inc., Madison, WI, USA) software was used for elaborated offline analysis to quantify residual interfractional prostate positioning errors, along with systematic and random errors and the resulting safety margins after both IGRT approaches. Dosimetric parameters for clinical target volume (CTV) coverage and exposition of organs at risk (OAR) were also analyzed and compared. Interfractional as well as intrafractional displacements were determined. Particularly in the vertical direction, residual interfractional positioning errors were reduced using the gold marker-based approach, but dosimetric differences were moderate and the clinical relevance relatively small. Intrafractional prostate motion proved to be quite high, with displacements of 1-3 mm; however, these did not result in additional dosimetric impairments. Residual interfractional positioning errors were reduced using gold marker-based IGRT; however, this resulted in only slightly different final dose distributions. Therefore, daily MVCT-based IGRT without markers might be a valid alternative.

  10. SU-F-T-381: Fast Calculation of Three-Dimensional Dose Considering MLC Leaf Positional Errors for VMAT Plans

    Energy Technology Data Exchange (ETDEWEB)

    Katsuta, Y [Takeda General Hospital, Aizuwakamatsu City, Fukushima (Japan); Tohoku University Graduate School of Medicine, Sendai, Miyagi (Japan)]; Kadoya, N; Jingu, K [Tohoku University Graduate School of Medicine, Sendai, Miyagi (Japan)]; Shimizu, E; Majima, K [Takeda General Hospital, Aizuwakamatsu City, Fukushima (Japan)

    2016-06-15

    Purpose: In this study, we developed a system to calculate, in real time and without an additional treatment planning system (TPS) calculation, a three-dimensional (3D) dose that reflects the dosimetric error caused by leaf miscalibration in head and neck and prostate volumetric modulated arc therapy (VMAT). Methods: An original program based on Clarkson dose calculation was developed in MATLAB (MathWorks, Natick, MA) to calculate the dosimetric error caused by leaf miscalibration. The program first calculates point doses at the isocenter, using the Clarkson method, for the baseline VMAT plan and for a modified plan generated by introducing MLC errors that enlarge the aperture size by 1.0 mm. Second, the error-induced 3D dose is generated by transforming the TPS baseline 3D dose using the calculated point doses. Results: Mean computing time was less than 5 seconds. For seven head and neck and prostate plans, the 3D gamma passing rates (0.5%/2 mm, global) between our method and the TPS-calculated error-induced 3D dose were 97.6±0.6% and 98.0±0.4%. The percentage changes in the dose-volume histogram parameter of mean dose on the target volume were 0.1±0.5% and 0.4±0.3%, and in the generalized equivalent uniform dose on the target volume were −0.2±0.5% and 0.2±0.3%. Conclusion: The error-induced 3D dose calculated by our method is useful for checking the dosimetric error caused by leaf miscalibration before pretreatment patient QA dosimetry checks.
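The transformation step can be read as a first-order rescaling of the baseline 3D dose by the ratio of the Clarkson point doses at the isocentre. The sketch below illustrates that interpretation with hypothetical numbers; the transform actually used in the paper may be more elaborate:

```python
import numpy as np

# Hypothetical baseline 3D dose grid from the TPS (Gy) and Clarkson-style
# point doses at the isocentre for the baseline and error-induced plans.
baseline_dose = np.full((4, 4, 4), 2.00)
d_point_baseline = 2.00   # Gy, baseline plan at isocentre
d_point_error = 2.06      # Gy, plan with 1.0 mm aperture enlargement

# First-order transform: scale the whole grid by the point-dose ratio.
error_dose = baseline_dose * (d_point_error / d_point_baseline)
print(error_dose.max())
```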

  11. Population and patient-specific target margins for 4D adaptive radiotherapy to account for intra- and inter-fraction variation in lung tumour position

    International Nuclear Information System (INIS)

    Hugo, Geoffrey D; Di Yan; Jian Liang

    2007-01-01

    In this work, five 4D image-guidance strategies (two population, an offline adaptive and two online strategies) were evaluated that compensated for both inter- and intra-fraction variability such as changes to the baseline tumour position and respiratory pattern. None of the strategies required active motion compensation such as gating or tracking; all strategies simulated a free-breathing-based treatment technique. Online kilovoltage fluoroscopy was acquired for eight patients with lung tumours, and used to construct inter- and intra-fraction tumour position variability models. Planning was performed on a mid-ventilation image acquired from a respiration-correlated CT scan. The blurring effect of tumour position variability was included in the dose calculation by convolution. CTV to PTV margins were calculated for variability in the cranio-caudal direction. A population margin of 9.0 ± 0.7 mm was required to account for setup error and respiration in the study population without the use of image-guidance. The greatest mean margin reduction was introduced by the offline adaptive strategy. A daily online correction strategy produced a small reduction (1.6 mm) in the mean margin from the offline strategy. Adaptively correcting for an inter-fraction change in the respiratory pattern had little effect on margin size due to most patients having only small daily changes in the respiratory pattern. A daily online correction strategy would be useful for patients who exhibit large variations in the daily mean tumour position, while an offline adaptive strategy is more applicable to patients with less variation
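Margins of the magnitude quoted above are often summarized with the van Herk population recipe, M = 2.5Σ + 0.7σ, where Σ and σ are the systematic and random error standard deviations. The paper's margins come from a convolution-based calculation, so the recipe below is only an order-of-magnitude sketch with hypothetical inputs:

```python
def van_herk_margin(Sigma, sigma):
    """Population CTV-to-PTV margin (mm): 2.5*Sigma + 0.7*sigma."""
    return 2.5 * Sigma + 0.7 * sigma

# Hypothetical cranio-caudal systematic and random errors (mm), without
# image guidance vs. after an online correction of the systematic component.
print(van_herk_margin(2.8, 2.9))   # no correction
print(van_herk_margin(1.2, 2.9))   # residual systematic error reduced
```

The recipe makes explicit why correcting systematic errors (weight 2.5) shrinks margins far more than reducing random errors (weight 0.7), matching the abstract's finding that the offline adaptive strategy yielded the greatest margin reduction.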

  12. Development of an iterative reconstruction method to overcome 2D detector low resolution limitations in MLC leaf position error detection for 3D dose verification in IMRT

    NARCIS (Netherlands)

    Visser, Ruurd; J., Godart; Wauben, D.J.L.; Langendijk, J.; van 't Veld, A.A.; Korevaar, E.W.

    2016-01-01

    The objective of this study was to introduce a new iterative method to reconstruct multi leaf collimator (MLC) positions based on low resolution ionization detector array measurements and to evaluate its error detection performance. The iterative reconstruction method consists of a fluence model, a

  13. SU-E-P-21: Impact of MLC Position Errors On Simultaneous Integrated Boost Intensity-Modulated Radiotherapy for Nasopharyngeal Carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Chengqiang, L; Yin, Y; Chen, L [Shandong Cancer Hospital and Institute, 440 Jiyan Road, Jinan, 250117 (China)

    2015-06-15

    Purpose: To investigate the impact of MLC position errors on simultaneous integrated boost intensity-modulated radiotherapy (SIB-IMRT) for patients with nasopharyngeal carcinoma. Methods: To compare the dosimetric differences between the simulated plans and the clinical plans, ten patients with locally advanced NPC treated with SIB-IMRT were enrolled in this study. All plans were calculated with an inverse planning system (Pinnacle3, Philips Medical Systems). Random errors (−2 mm to 2 mm), shift errors (2 mm, 1 mm and 0.5 mm) and systematic extension/contraction errors (±2 mm, ±1 mm and ±0.5 mm) of the MLC leaf positions were introduced into the original plans to create the simulated plans. Dosimetric factors were compared between the original and the simulated plans. Results: The dosimetric impact of the random and systematic shift errors of MLC position was insignificant within 2 mm: the maximum changes in D95% of PGTV, PTV1 and PTV2 were −0.92±0.51%, 1.00±0.24% and 0.62±0.17%; the maximum changes in D0.1cc of the spinal cord and brainstem were 1.90±2.80% and −1.78±1.42%; and the maximum changes in the mean dose of the parotids were 1.36±1.23% and −2.25±2.04%. However, the impact of MLC extension or contraction errors was significant. For 2 mm leaf extension errors, the average changes in D95% of PGTV, PTV1 and PTV2 were 4.31±0.67%, 4.29±0.65% and 4.79±0.82%; the average D0.1cc of the spinal cord and brainstem increased by 7.39±5.25% and 6.32±2.28%; and the average mean dose to the left and right parotids increased by 12.75±2.02% and 13.39±2.17%, respectively. Conclusion: The dosimetric effect was insignificant for random MLC leaf position errors up to 2 mm, but dose distributions were highly sensitive to MLC extension or contraction errors. Attention should be paid to anatomic changes in target volumes and other anatomical structures during the treatment course, and individualized adaptive radiotherapy is recommended to ensure adequate dose delivery.
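The asymmetry between random leaf errors and systematic extension errors can be illustrated geometrically: random per-leaf shifts largely cancel over an aperture, while a uniform extension widens every leaf gap and biases the open area (and hence the delivered dose) in one direction. A toy sketch with hypothetical leaf positions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 40
left = rng.uniform(-30, -5, n_pairs)    # left-bank leaf positions (mm)
right = rng.uniform(5, 30, n_pairs)     # right-bank leaf positions (mm)
leaf_width = 5.0                        # mm

def aperture_area(l, r):
    # Total open area: leaf width times the sum of non-negative gaps.
    return leaf_width * np.sum(np.clip(r - l, 0, None))

base = aperture_area(left, right)

# Random errors: each leaf shifts independently; effects largely cancel.
dl = rng.uniform(-2, 2, n_pairs)
dr = rng.uniform(-2, 2, n_pairs)
random_change = aperture_area(left + dl, right + dr) / base - 1

# Systematic 2 mm extension: every gap widens by 4 mm, a consistent bias.
ext_change = aperture_area(left - 2, right + 2) / base - 1
print(f"random: {random_change:+.2%}, extension: {ext_change:+.2%}")
```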

  14. Effects of past and recent blood pressure and cholesterol level on coronary heart disease and stroke mortality, accounting for measurement error.

    Science.gov (United States)

    Boshuizen, Hendriek C; Lanti, Mariapaola; Menotti, Alessandro; Moschandreas, Joanna; Tolonen, Hanna; Nissinen, Aulikki; Nedeljkovic, Srecko; Kafatos, Anthony; Kromhout, Daan

    2007-02-15

    The authors aimed to quantify the effects of current systolic blood pressure (SBP) and serum total cholesterol on the risk of mortality in comparison with SBP or serum cholesterol 25 years previously, taking measurement error into account. The authors reanalyzed 35-year follow-up data on mortality due to coronary heart disease and stroke among subjects aged 65 years or more from nine cohorts of the Seven Countries Study. The two-step method of Tsiatis et al. (J Am Stat Assoc 1995;90:27-37) was used to adjust for regression dilution bias, and results were compared with those obtained using more commonly applied methods of adjustment for regression dilution bias. It was found that the commonly used univariate adjustment for regression dilution bias overestimates the effects of both SBP and cholesterol compared with multivariate methods. Also, the two-step method makes better use of the information available, resulting in smaller confidence intervals. Results comparing recent and past exposure indicated that past SBP is more important than recent SBP in terms of its effect on coronary heart disease mortality, while both recent and past values seem to be important for effects of cholesterol on coronary heart disease mortality and effects of SBP on stroke mortality. Associations between serum cholesterol concentration and risk of stroke mortality are weak.
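The "commonly applied" univariate correction for regression dilution divides the naive slope by the reliability ratio λ = var(true)/var(observed). A simulation sketch of that simpler correction (not the two-step Tsiatis method used in the paper; all numbers synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
true_sbp = rng.normal(140, 15, n)            # long-term "usual" SBP (mmHg)
observed = true_sbp + rng.normal(0, 10, n)   # single measurement with error
risk = 0.03 * true_sbp + rng.normal(0, 2, n) # outcome driven by usual SBP

naive = np.polyfit(observed, risk, 1)[0]     # attenuated slope
lam = 15**2 / (15**2 + 10**2)                # reliability ratio
corrected = naive / lam
print(naive, corrected)                      # corrected slope is near 0.03
```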

  15. Cognitive Moderators of Children's Adjustment to Stressful Divorce Events: The Role of Negative Cognitive Errors and Positive Illusions.

    Science.gov (United States)

    Mazur, Elizabeth; Wolchik, Sharlene A.; Virdin, Lynn; Sandler, Irwin N.; West, Stephen G.

    1999-01-01

    Examined whether children's cognitive biases moderated impact of stressful divorce-related events on adjustment in 9- to 12-year olds. Found that endorsing negative cognitive errors for hypothetical divorce events moderated relations between stressful divorce events and self- and maternal-reports of internalizing and externalizing symptoms for…

  16. Set-up error in supine-positioned patients immobilized with two different modalities during conformal radiotherapy of prostate cancer

    International Nuclear Information System (INIS)

    Fiorino, C.; Cattaneo, G.M.; Calandrino, R.; Reni, M.; Bolognesi, A.; Bonini, A.

    1998-01-01

    Background: Conformal radiotherapy requires reduced margins around the clinical target volume (CTV) with respect to traditional radiotherapy techniques. Therefore, high set-up accuracy and reproducibility are mandatory. Purpose: To investigate the effectiveness of two different immobilization techniques during conformal radiotherapy of prostate cancer with small fields. Materials and methods: 52 patients with prostate cancer were treated by conformal three- or four-field techniques with radical or adjuvant intent between November 1996 and March 1998. In total, 539 portal images were collected on a weekly basis for at least the first 4 weeks of the treatment on lateral and anterior 18 MV X-ray fields. The average number of sessions monitored per patient was 5.7 (range 4-10). All patients were immobilized with an alpha-cradle system; 25 of them were immobilized at the pelvis level (group A) and the remaining 27 patients were immobilized at the legs (group B). The shifts with respect to the simulation condition were assessed by measuring the distances between the same bony landmarks and the field edges. The global distributions of cranio-caudal (CC), posterior-anterior (PA) and left-right (LR) shifts were considered; for each patient, random and systematic error components were assessed by following the procedure suggested by Bijhold et al. (Bijhold J, Lebesque JV, Hart AAM, Vijlbrief RE. Maximising set-up accuracy using portal images as applied to a conformal boost technique for prostatic cancer. Radiother. Oncol. 1992;24:261-271). For each patient the average isocentre (3D) shift was assessed as the quadratic sum of the average shifts in the three directions. Results: The fraction of shifts larger than 5 mm was 4.4% in group B, compared with 21.6% in group A (P<0.0001). This value was also better than the corresponding value found in a previously investigated group of 21 non-immobilized patients (Italia C, Fiorino C, Ciocca M, et al.
Quality control by portal film analysis of the conformal radiotherapy

  17. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
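For independent measurements, propagation of error through a materials balance reduces to summing variances: the balance (inputs minus outputs minus inventory change) has a standard deviation equal to the quadratic sum of the component standard deviations. A minimal sketch with hypothetical figures:

```python
import math

# Hypothetical uranium mass measurements (kg) given as (value, standard deviation).
inputs = (1000.0, 4.0)
outputs = (960.0, 3.0)
inventory = (38.0, 1.5)   # ending minus beginning inventory

muf = inputs[0] - outputs[0] - inventory[0]   # material unaccounted for
sd_muf = math.sqrt(inputs[1]**2 + outputs[1]**2 + inventory[1]**2)
print(muf, sd_muf)   # MUF and its 1-sigma uncertainty
```

A MUF of 2 kg with a 1-sigma uncertainty of about 5.2 kg would not, by itself, indicate a loss; the point of the propagation is to put the balance on a statistical footing.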

  18. A Two-Step Method to Identify Positive Deviant Physician Organizations of Accountable Care Organizations with Robust Performance Management Systems.

    Science.gov (United States)

    Pimperl, Alexander F; Rodriguez, Hector P; Schmittdiel, Julie A; Shortell, Stephen M

    2018-06-01

    To identify positive deviant (PD) physician organizations of Accountable Care Organizations (ACOs) with robust performance management systems (PMSYS), data from the Third National Survey of Physician Organizations (NSPO3, n = 1,398) were used. Organizational and external factors from NSPO3 were analyzed. Linear regression estimated the association of internal and contextual factors with PMSYS. Two cutpoints (75th/90th percentiles) identified PDs with the largest residuals and highest PMSYS scores. A total of 65 and 41 PDs were identified using the 75th and 90th percentile cutpoints, respectively. The 90th percentile more strongly differentiated PDs from non-PDs. Having a high proportion of vulnerable patients appears to constrain PMSYS development. Our PD identification method increases the likelihood that PD organizations selected for in-depth inquiry are high-performing organizations that exceed expectations. © Health Research and Educational Trust.
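The two-step selection can be paraphrased as: regress the PMSYS score on organizational covariates, then flag organizations that sit above a chosen percentile on both the regression residual (performing above expectation) and the raw score (performing well overall). A sketch on synthetic data with the 90th-percentile cutpoint; variable names are illustrative, not the survey's:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
x = rng.normal(size=(n, 2))                                   # organizational covariates
score = 5 + x @ np.array([1.0, -0.5]) + rng.normal(0, 1, n)   # PMSYS score

# Step 1: residuals from a linear regression of score on covariates.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
resid = score - X @ beta

# Step 2: positive deviants exceed the 90th percentile on BOTH the residual
# and the raw score.
cut_r = np.percentile(resid, 90)
cut_s = np.percentile(score, 90)
pd_idx = np.where((resid > cut_r) & (score > cut_s))[0]
print(len(pd_idx))
```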

  19. Decisions to Shoot in a Weapon Identification Task: The Influence of Cultural Stereotypes and Perceived Threat on False Positive Errors

    OpenAIRE

    Fleming, Kevin K.; Bandy, Carole L.; Kimble, Matthew O.

    2009-01-01

    The decision to shoot engages executive control processes that can be biased by cultural stereotypes and perceived threat. The neural locus of the decision to shoot is likely to be found in the anterior cingulate cortex (ACC) where cognition and affect converge. Male military cadets at Norwich University (N=37) performed a weapon identification task in which they made rapid decisions to shoot when images of guns appeared briefly on a computer screen. Reaction times, error rates, and EEG activ...

  20. Accounting for Berkson and Classical Measurement Error in Radon Exposure Using a Bayesian Structural Approach in the Analysis of Lung Cancer Mortality in the French Cohort of Uranium Miners.

    Science.gov (United States)

    Hoffmann, Sabine; Rage, Estelle; Laurier, Dominique; Laroche, Pierre; Guihenneuc, Chantal; Ancelet, Sophie

    2017-02-01

    Many occupational cohort studies on underground miners have demonstrated that radon exposure is associated with an increased risk of lung cancer mortality. However, despite the deleterious consequences of exposure measurement error on statistical inference, these analyses traditionally do not account for exposure uncertainty. This might be due to the challenging nature of measurement error resulting from imperfect surrogate measures of radon exposure. Indeed, we are typically faced with exposure uncertainty in a time-varying exposure variable where both the type and the magnitude of error may depend on period of exposure. To address the challenge of accounting for multiplicative and heteroscedastic measurement error that may be of Berkson or classical nature, depending on the year of exposure, we opted for a Bayesian structural approach, which is arguably the most flexible method to account for uncertainty in exposure assessment. We assessed the association between occupational radon exposure and lung cancer mortality in the French cohort of uranium miners and found the impact of uncorrelated multiplicative measurement error to be of marginal importance. However, our findings indicate that the retrospective nature of exposure assessment that occurred in the earliest years of mining of this cohort as well as many other cohorts of underground miners might lead to an attenuation of the exposure-risk relationship. More research is needed to address further uncertainties in the calculation of lung dose, since this step will likely introduce important sources of shared uncertainty.
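The distinction drawn above matters because the two error types bias regression differently: classical error (observed = true + noise) attenuates the estimated slope, while Berkson error (true value scatters around the assigned value) leaves it approximately unbiased. A simulation sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(5)
n, beta = 50000, 2.0

# Classical error: we observe the true exposure plus noise.
true_x = rng.normal(10, 2, n)
obs_classical = true_x + rng.normal(0, 2, n)
y = beta * true_x + rng.normal(0, 1, n)
slope_classical = np.polyfit(obs_classical, y, 1)[0]

# Berkson error: the true exposure scatters around the assigned value.
assigned = rng.normal(10, 2, n)
true_b = assigned + rng.normal(0, 2, n)
y_b = beta * true_b + rng.normal(0, 1, n)
slope_berkson = np.polyfit(assigned, y_b, 1)[0]

print(slope_classical, slope_berkson)  # attenuated (~beta/2) vs. unbiased (~beta)
```

With equal error and exposure variances, the classical slope is attenuated by the factor var(true)/var(observed) = 1/2, which is why ignoring classical error in the miner cohorts tends to flatten the exposure-risk relationship.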

  1. The input ambiguity hypothesis and case blindness: an account of cross-linguistic and intra-linguistic differences in case errors.

    Science.gov (United States)

    Pelham, Sabra D

    2011-03-01

    English-acquiring children frequently make pronoun case errors, while German-acquiring children rarely do. Nonetheless, German-acquiring children frequently make article case errors. It is proposed that when child-directed speech contains a high percentage of case-ambiguous forms, case errors are common in child language; when percentages are low, case errors are rare. Input to English and German children was analyzed for percentage of case-ambiguous personal pronouns on adult tiers of corpora from 24 English-acquiring and 24 German-acquiring children. Also analyzed for German was the percentage of case-ambiguous articles. Case-ambiguous pronouns averaged 63·3% in English, compared with 7·6% in German. The percentage of case-ambiguous articles in German was 77·0%. These percentages align with the children's errors reported in the literature. It appears children may be sensitive to levels of ambiguity such that low ambiguity may aid error-free acquisition, while high ambiguity may blind children to case distinctions, resulting in errors.

  2. Measurement Errors Arising When Using Distances in Microeconometric Modelling and the Individuals’ Position Is Geo-Masked for Confidentiality

    Directory of Open Access Journals (Sweden)

    Giuseppe Arbia

    2015-10-01

    In many microeconometric models we use distances. For instance, in modelling individual behavior in labor economics or in health studies, the distance from a relevant point of interest (such as a hospital or a workplace) is often used as a predictor in a regression framework. However, in order to preserve confidentiality, spatial micro-data are often geo-masked, thus reducing their quality and dramatically distorting the inferential conclusions. In particular, in this case a measurement error is introduced in the independent variable, which negatively affects the properties of the estimators. This paper studies these negative effects, discusses their consequences, and suggests possible interpretations and directions to data producers, end users, and practitioners.
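The mechanism can be demonstrated with a small simulation: perturbing the released coordinates induces error in the computed distance, which biases the regression coefficient toward zero. All scales below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 20000
poi = np.array([0.0, 0.0])              # point of interest (e.g. a hospital)
xy = rng.uniform(-10, 10, (n, 2))       # true individual locations (km)
d_true = np.linalg.norm(xy - poi, axis=1)

# Outcome depends on the TRUE distance.
y = 1.5 * d_true + rng.normal(0, 1, n)

# Geo-masking: displace each coordinate by up to 2 km before release.
masked = xy + rng.uniform(-2, 2, (n, 2))
d_masked = np.linalg.norm(masked - poi, axis=1)

b_true = np.polyfit(d_true, y, 1)[0]
b_masked = np.polyfit(d_masked, y, 1)[0]
print(b_true, b_masked)   # the coefficient from masked distances is attenuated
```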

  3. Translational and rotational intra- and inter-fractional errors in patient and target position during a short course of frameless stereotactic body radiotherapy

    International Nuclear Information System (INIS)

    Josipovic, Mirjana; Fredberg Persson, Gitte; Logadottir, Aashildur; Smulders, Bob; Westmann, Gunnar; Bangsgaard, Jens Peter

    2012-01-01

    Background. Implementation of cone beam computed tomography (CBCT) in frameless stereotactic body radiotherapy (SBRT) of lung tumours enables setup correction based on tumour position. The aim of this study was to compare setup accuracy with daily soft tissue matching to bony anatomy matching and to evaluate intra- and inter-fractional translational and rotational errors in patient and target positions. Material and methods. Fifteen consecutive SBRT patients were included in the study. Vacuum cushions were used for immobilisation. SBRT plans were based on the midventilation phase of four-dimensional (4D)-CT or three-dimensional (3D)-CT from PET/CT. Margins of 5 mm in the transversal plane and 10 mm in the cranio-caudal (CC) direction were applied. SBRT was delivered in three fractions within a week. At each fraction, CBCT was performed before and after the treatment. Setup accuracy comparison between soft tissue matching and bony anatomy matching was evaluated on pretreatment CBCTs. From differences in pre- and post-treatment CBCTs, we evaluated the extent of translational and rotational intra-fractional changes in patient position, tumour position and tumour baseline shift. All image registration was rigid with six degrees of freedom. Results. The median 3D difference between patient position based on bony anatomy matching and soft tissue matching was 3.0 mm (0-8.3 mm). The median 3D intra-fractional change in patient position was 1.4 mm (0-12.2 mm) and 2.2 mm (0-13.2 mm) in tumour position. The median 3D intra-fractional baseline shift was 2.2 mm (0-4.7 mm). With correction of translational errors, the remaining systematic and random errors were approximately 1°. Conclusion. Soft tissue tumour matching improved the precision of treatment delivery in frameless SBRT of lung tumours compared to image guidance using bone matching. The intra-fractional displacement of the target position was affected by both translational and rotational changes in tumour baseline position.

  4. Estimating the population distribution of usual 24-hour sodium excretion from timed urine void specimens using a statistical approach accounting for correlated measurement errors.

    Science.gov (United States)

    Wang, Chia-Yih; Carriquiry, Alicia L; Chen, Te-Ching; Loria, Catherine M; Pfeiffer, Christine M; Liu, Kiang; Sempos, Christopher T; Perrine, Cria G; Cogswell, Mary E

    2015-05-01

    High US sodium intake and national reduction efforts necessitate developing a feasible and valid monitoring method across the distribution of low-to-high sodium intake. We examined a statistical approach using timed urine voids to estimate the population distribution of usual 24-h sodium excretion. A sample of 407 adults, aged 18-39 y (54% female, 48% black), collected each void in a separate container for 24 h; 133 repeated the procedure 4-11 d later. Four timed voids (morning, afternoon, evening, overnight) were selected from each 24-h collection. We developed gender-specific equations to calibrate total sodium excreted in each of the one-void (e.g., morning) and combined two-void (e.g., morning + afternoon) urines to 24-h sodium excretion. The calibrated sodium excretions were used to estimate the population distribution of usual 24-h sodium excretion. Participants were then randomly assigned to modeling (n = 160) or validation (n = 247) groups to examine the bias in estimated population percentiles. Median bias in predicting selected percentiles (5th, 25th, 50th, 75th, 95th) of usual 24-h sodium excretion with one-void urines ranged from -367 to 284 mg (-7.7 to 12.2% of the observed usual excretions) for men and -604 to 486 mg (-14.6 to 23.7%) for women, and with two-void urines from -338 to 263 mg (-6.9 to 10.4%) and -166 to 153 mg (-4.1 to 8.1%), respectively. Four of the 6 two-void urine combinations produced no significant bias in predicting selected percentiles. Our approach to estimate the population usual 24-h sodium excretion, which uses calibrated timed-void sodium to account for day-to-day variation and covariance between measurement errors, produced percentile estimates with relatively low biases across low-to-high sodium excretions. This may provide a low-burden, low-cost alternative to 24-h collections in monitoring population sodium intake among healthy young adults and merits further investigation in other population subgroups. © 2015 American
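Stripped of the survey-specific machinery, the calibration step amounts to regressing observed 24-h excretion on the timed-void measurement and then applying that equation to voids alone. A much-simplified sketch on synthetic data (the paper's equations are gender-specific and additionally account for correlated measurement errors):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
na_24h = rng.normal(3400, 800, n)                  # usual 24-h sodium (mg)
# Morning-void sodium: roughly proportional to 24-h excretion, plus noise.
morning = 0.22 * na_24h + rng.normal(0, 150, n)

# Calibration equation: regress 24-h excretion on the timed-void measure...
slope, intercept = np.polyfit(morning, na_24h, 1)

# ...then apply it to new morning voids to predict 24-h excretion.
new_morning = np.array([600.0, 800.0])
predicted = slope * new_morning + intercept
print(predicted)
```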

  5. SU-E-T-132: Dosimetric Impact of Positioning Errors in Hypo-Fractionated Cranial Radiation Therapy Using Frameless Stereotactic BrainLAB System

    International Nuclear Information System (INIS)

    Keeling, V; Jin, H; Ali, I; Ahmad, S

    2014-01-01

    Purpose: To determine the dosimetric impact of positioning errors in the stereotactic hypo-fractionated treatment of intracranial lesions using 3D-translational and 3D-rotational (6D) corrections with the frameless BrainLAB ExacTrac X-Ray system. Methods: 20 cranial lesions, treated in 3 or 5 fractions, were selected. An infrared (IR) optical positioning system was employed for initial patient setup, followed by stereoscopic kV X-ray radiographs for position verification. 6D translational and rotational shifts were determined to correct patient position. If these shifts were above tolerance (0.7 mm translational and 1° rotational), corrections were applied and another set of X-rays was taken to verify patient position. The dosimetric impact (D95, Dmin, Dmax, and Dmean of the planning target volume (PTV) compared to original plans) of positioning errors for the initial IR setup (XC: X-ray Correction) and post-correction (XV: X-ray Verification) was determined in a treatment planning system using a method proposed by Yue et al. (Med. Phys. 33, 21-31 (2006)) with 3D-translational errors only and with 6D translational and rotational errors. Results: Absolute mean translational errors (±standard deviation) for a total of 92 fractions (XC/XV) were 0.79±0.88/0.19±0.15 mm (lateral), 1.66±1.71/0.18±0.16 mm (longitudinal), 1.95±1.18/0.15±0.14 mm (vertical), and rotational errors were 0.61±0.47/0.17±0.15° (pitch), 0.55±0.49/0.16±0.24° (roll), and 0.68±0.73/0.16±0.15° (yaw). The average changes (loss of coverage) in D95, Dmin, Dmax, and Dmean were 4.5±7.3/0.1±0.2%, 17.8±22.5/1.1±2.5%, 0.4±1.4/0.1±0.3%, and 0.9±1.7/0.0±0.1% using 6D shifts, and 3.1±5.5/0.0±0.1%, 14.2±20.3/0.8±1.7%, 0.0±1.2/0.1±0.3%, and 0.7±1.4/0.0±0.1% using 3D-translational shifts only. The setup corrections (XC-XV) improved the PTV coverage by 4.4±7.3% (D95) and 16.7±23.5% (Dmin) using 6D adjustment. Strong correlations were observed between translational errors and deviations in dose coverage for XC. Conclusion

  6. The Relationship Between Implementation of School-Wide Positive Behavior Intervention and Supports and Performance on State Accountability Measures

    Directory of Open Access Journals (Sweden)

    Adriana M. Marin

    2013-10-01

    This study examined data from 96 schools in a Southeastern U.S. state participating in training and/or coaching on School-Wide Positive Behavioral Interventions and Supports (SWPBIS) provided by the State Personnel Development Grant (SPDG) in their state. Schools studied either received training only ("non-intensive" sites) or training and on-site coaching ("intensive" sites). Fidelity of implementation was self-evaluated by both types of schools using the Benchmarks of Quality (BOQ). Some schools were also externally evaluated using the School-Wide Evaluation Tool (SET), with those scoring 80% or higher designated "model sites." Using an independent-samples t-test, analyses revealed statistically significant differences between intensive and non-intensive schools' Quality of Distribution Index (QDI) scores and between model sites and non-model sites on QDI scores. Correlations were performed to determine whether fidelity of implementation of SWPBIS, as measured by the BOQ, was related to any of the state's accountability measures: performance classification, QDI, or growth.

  7. The influence of the analog-to-digital conversion error on the JT-60 plasma position/shape feedback control system

    International Nuclear Information System (INIS)

    Yoshida, Michiharu; Kurihara, Kenichi

    1995-12-01

    In the plasma feedback control system (PFCS) and the direct digital controller (DDC) for the poloidal field coil power supply in the JT-60 tokamak, it is necessary to observe the signals of all the poloidal field coil currents. Each signal, originally measured by a single sensor, is distributed to the PFCS and DDC through different cable routes and different analog-to-digital converters. This produces conversion errors amounting to several bits. Consequently, the voltage computed by the feedback calculation cannot be applied to the coil exactly, and the control performance may deteriorate to a certain extent. This paper describes how this error influences the plasma horizontal position control and how to improve the deteriorated control performance. (author)
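The several-bit disagreement can be reproduced with a toy model of two ADC chains digitizing the same coil-current signal with slightly different offsets; all ranges, resolutions, and offsets below are hypothetical, chosen only to illustrate the mechanism:

```python
import numpy as np

def adc(signal, full_scale=100_000.0, bits=12, offset=0.0):
    """Quantize a bipolar signal (A) to an n-bit code and convert back to amperes."""
    lsb = 2 * full_scale / 2**bits       # volts-per-code analogue: amps per LSB
    return np.round((signal + offset) / lsb) * lsb

current = 42_357.0                        # poloidal coil current (A)
lsb = 2 * 100_000.0 / 2**12               # ~48.8 A per LSB
a = adc(current)                          # chain feeding the PFCS
b = adc(current, offset=2.5 * lsb)        # chain feeding the DDC, offset drift
print(abs(a - b) / lsb)                   # disagreement, in LSBs
```

Because both controllers act on what they believe is the same current, a disagreement of a few LSBs translates directly into an error in the voltage command derived from the feedback calculation.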

  8. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements.
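
    The probability-ellipse construction described in the chapter can be sketched numerically: given a 2×2 covariance matrix of horizontal position errors, the axes of the 95% probability ellipse follow from an eigen-decomposition. The covariance values below are illustrative, not from the chapter.

```python
import numpy as np

# Hypothetical 2x2 covariance of horizontal position errors (m^2)
cov = np.array([[4.0, 1.2],
                [1.2, 2.0]])

# Eigen-decomposition gives the principal axes of the error ellipse
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

# Scale factor for a 95% probability ellipse: chi-square quantile with 2 dof
k95 = 5.991

semi_minor, semi_major = np.sqrt(k95 * eigvals)
# Orientation of the major axis, measured from the x-axis (sign of an
# eigenvector is arbitrary, so the angle may come out shifted by 180 deg)
theta = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))

print(f"95% ellipse semi-axes: {semi_major:.2f} m x {semi_minor:.2f} m")
print(f"major-axis orientation: {theta:.1f} deg")
```

    The probability circle of the chapter corresponds to the special case where the two eigenvalues coincide and the ellipse collapses to a circle.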

  9. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld’s method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton’s meth...
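
    The book's central claim — that an algorithm still reaches a good approximate solution when computational errors are bounded by a small positive constant — can be illustrated with a gradient projection sketch. The objective, box constraint, and error bound below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimize f(x) = 0.5*||x - c||^2 over the box [0, 1]^2.
c = np.array([1.5, -0.3])          # unconstrained minimizer (outside the box)
x_star = np.clip(c, 0.0, 1.0)      # exact constrained minimizer: (1.0, 0.0)

delta = 1e-3                       # bound on the computational (gradient) error
x = np.zeros(2)
step = 0.5
for _ in range(200):
    grad = x - c                               # exact gradient of f
    noise = rng.uniform(-delta, delta, size=2) # bounded computational error
    x = np.clip(x - step * (grad + noise), 0.0, 1.0)  # projection onto the box

err = np.linalg.norm(x - x_star)
print(f"distance to true minimizer: {err:.2e}")
```

    The final distance to the true minimizer stays on the order of the error bound delta, matching the qualitative statement in the abstract.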

  10. Dosimetric implications of inter- and intrafractional prostate positioning errors during tomotherapy. Comparison of gold marker-based registrations with native MVCT

    Energy Technology Data Exchange (ETDEWEB)

    Wust, Peter; Joswig, Marc; Graf, Reinhold; Boehmer, Dirk; Beck, Marcus; Barelkowski, Thomasz; Budach, Volker; Ghadjar, Pirus [Charite Universitaetsmedizin Berlin, Department of Radiation Oncology and Radiotherapy, Berlin (Germany)

    2017-09-15

    For high-dose radiation therapy (RT) of prostate cancer, image-guided (IGRT) and intensity-modulated RT (IMRT) approaches are standard. Less is known regarding comparisons of different IGRT techniques and the resulting residual errors, as well as regarding their influences on dose distributions. A total of 58 patients who received tomotherapy-based RT up to 84 Gy for high-risk prostate cancer underwent IGRT based either on daily megavoltage CT (MVCT) alone (n = 43) or the additional use of gold markers (n = 15) under routine conditions. Planned Adaptive (Accuray Inc., Madison, WI, USA) software was used for elaborated offline analysis to quantify residual interfractional prostate positioning errors, along with systematic and random errors and the resulting safety margins after both IGRT approaches. Dosimetric parameters for clinical target volume (CTV) coverage and exposition of organs at risk (OAR) were also analyzed and compared. Interfractional as well as intrafractional displacements were determined. Particularly in the vertical direction, residual interfractional positioning errors were reduced using the gold marker-based approach, but dosimetric differences were moderate and the clinical relevance relatively small. Intrafractional prostate motion proved to be quite high, with displacements of 1-3 mm; however, these did not result in additional dosimetric impairments. Residual interfractional positioning errors were reduced using gold marker-based IGRT; however, this resulted in only slightly different final dose distributions. Therefore, daily MVCT-based IGRT without markers might be a valid alternative. (orig.)

  11. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  12. Assessment of long-range kinematic GPS positioning errors by comparison with airborne laser altimetry and satellite altimetry

    DEFF Research Database (Denmark)

    Zhang, X.H.; Forsberg, René

    2007-01-01

    Long-range airborne laser altimetry and laser scanning (LIDAR) or airborne gravity surveys in, for example, polar or oceanic areas require airborne kinematic GPS baselines of many hundreds of kilometers in length. In such instances, with the complications of ionospheric biases, it can be a real challenge for traditional differential kinematic GPS software to obtain reasonable solutions. In this paper, we will describe attempts to validate an implementation of the precise point positioning (PPP) technique on an aircraft without the use of a local GPS reference station. We will compare PPP solutions … of the Arctic Ocean north of Greenland, near-coincident in time and space with the ICESat satellite laser altimeter. Both of these flights were more than 800 km long. Comparisons between different GPS methods and four different software packages do not suggest a clear preference for any one, with the heights…

  13. SU-E-T-261: Development of An Automated System to Detect Patient Identification and Positioning Errors Prior to Radiotherapy Treatment

    Energy Technology Data Exchange (ETDEWEB)

    Jani, S; Low, D; Lamb, J [UCLA, Los Angeles, CA (United States)

    2015-06-15

    Purpose: To develop a system that can automatically detect patient identification and positioning errors using 3D computed tomography (CT) setup images and kilovoltage CT (kVCT) planning images. Methods: Planning kVCT images were collected for head-and-neck (H&N), pelvis, and spine treatments with corresponding 3D cone-beam CT (CBCT) and megavoltage CT (MVCT) setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. Positioning errors were simulated by misaligning the setup image by 1cm to 5cm in the six anatomical directions for H&N and pelvis patients. Misalignments for spine treatments were simulated by registering the setup image to adjacent vertebral bodies on the planning kVCT. A body contour of the setup image was used as an initial mask for image comparison. Images were pre-processed by image filtering and air voxel thresholding, and image pairs were assessed using commonly-used image similarity metrics as well as custom-designed metrics. A linear discriminant analysis classifier was trained and tested on the datasets, and misclassification error (MCE), sensitivity, and specificity estimates were generated using 10-fold cross validation. Results: Our workflow produced MCE estimates of 0.7%, 1.7%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivities and specificities ranged from 98.0% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 96.2% and 98.4%. MCEs for 1cm H&N/pelvis misalignments were 1.3/5.1% and 9.1/8.6% for TomoTherapy and TrueBeam images, respectively. 2cm MCE estimates were 0.4%/1.6% and 3.1/3.2%, respectively. Vertebral misalignment MCEs were 4.8% and 4.9% for TomoTherapy and TrueBeam images, respectively. Conclusion: Patient identification and gross misalignment errors can be robustly and
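
    The classification workflow described here — a linear discriminant analysis classifier evaluated by 10-fold cross-validation — can be sketched on synthetic stand-in features. The data, class separation, and fold logic below are illustrative, not the study's actual image-similarity metrics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for image-similarity features: class 0 = correct setup,
# class 1 = wrong patient / gross misalignment (well separated on purpose).
n, d = 200, 4
X0 = rng.normal(0.0, 1.0, size=(n, d))
X1 = rng.normal(2.0, 1.0, size=(n, d))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

def lda_fit(Xtr, ytr):
    """Two-class Fisher LDA: discriminant direction plus midpoint threshold."""
    mu0, mu1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    Sw = np.cov(Xtr[ytr == 0].T) + np.cov(Xtr[ytr == 1].T)  # pooled scatter
    w = np.linalg.solve(Sw, mu1 - mu0)
    b = w @ (mu0 + mu1) / 2.0
    return w, b

def lda_predict(w, b, Xte):
    return (Xte @ w > b).astype(int)

# 10-fold cross-validation of the misclassification error (MCE)
idx = rng.permutation(len(y))
folds = np.array_split(idx, 10)
errs = []
for k in range(10):
    te = folds[k]
    tr = np.concatenate([folds[j] for j in range(10) if j != k])
    w, b = lda_fit(X[tr], y[tr])
    errs.append(np.mean(lda_predict(w, b, X[te]) != y[te]))

mce = float(np.mean(errs))
print(f"10-fold misclassification error: {mce:.1%}")
```

    With well-separated synthetic classes the cross-validated MCE lands in the low single-digit percent range, the same regime the abstract reports for its real feature sets.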

  14. SU-E-T-261: Development of An Automated System to Detect Patient Identification and Positioning Errors Prior to Radiotherapy Treatment

    International Nuclear Information System (INIS)

    Jani, S; Low, D; Lamb, J

    2015-01-01

    Purpose: To develop a system that can automatically detect patient identification and positioning errors using 3D computed tomography (CT) setup images and kilovoltage CT (kVCT) planning images. Methods: Planning kVCT images were collected for head-and-neck (H&N), pelvis, and spine treatments with corresponding 3D cone-beam CT (CBCT) and megavoltage CT (MVCT) setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. Positioning errors were simulated by misaligning the setup image by 1cm to 5cm in the six anatomical directions for H&N and pelvis patients. Misalignments for spine treatments were simulated by registering the setup image to adjacent vertebral bodies on the planning kVCT. A body contour of the setup image was used as an initial mask for image comparison. Images were pre-processed by image filtering and air voxel thresholding, and image pairs were assessed using commonly-used image similarity metrics as well as custom-designed metrics. A linear discriminant analysis classifier was trained and tested on the datasets, and misclassification error (MCE), sensitivity, and specificity estimates were generated using 10-fold cross validation. Results: Our workflow produced MCE estimates of 0.7%, 1.7%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivities and specificities ranged from 98.0% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 96.2% and 98.4%. MCEs for 1cm H&N/pelvis misalignments were 1.3/5.1% and 9.1/8.6% for TomoTherapy and TrueBeam images, respectively. 2cm MCE estimates were 0.4%/1.6% and 3.1/3.2%, respectively. Vertebral misalignment MCEs were 4.8% and 4.9% for TomoTherapy and TrueBeam images, respectively. Conclusion: Patient identification and gross misalignment errors can be robustly and

  15. Classification of radiological errors in chest radiographs, using support vector machine on the spatial frequency features of false-negative and false-positive regions

    Science.gov (United States)

    Pietrzyk, Mariusz W.; Donovan, Tim; Brennan, Patrick C.; Dix, Alan; Manning, David J.

    2011-03-01

    Aim: To optimize automated classification of radiological errors during lung nodule detection from chest radiographs (CxR) using a support vector machine (SVM) run on the spatial frequency features extracted from the local background of selected regions. Background: The majority of unreported pulmonary nodules are visually detected but not recognized, as shown by the prolonged dwell time values at false-negative regions. Similarly, overestimated nodule locations capture substantial amounts of foveal attention. Spatial frequency properties of selected local backgrounds are correlated with human observer responses, either in terms of accuracy in indicating abnormality position or in the precision of visual sampling of the medical images. Methods: Seven radiologists participated in eye tracking experiments conducted under conditions of pulmonary nodule detection from a set of 20 postero-anterior CxR. The most dwelled locations were identified and subjected to spatial frequency (SF) analysis. The image-based features of selected ROIs were extracted with the un-decimated Wavelet Packet Transform. An analysis of variance was run to select SF features, and an SVM scheme was implemented to classify False-Negative and False-Positive from all ROIs. Results: A relatively high overall accuracy was obtained for each individually developed Wavelet-SVM algorithm, with over 90% average correct ratio for error recognition from all prolonged dwell locations. Conclusion: The preliminary results show that combined eye-tracking and image-based features can be used for automated detection of radiological errors with SVM. The work is still in progress and not all analytical procedures have been completed, which might have an effect on the specificity of the algorithm.
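
    As a rough sketch of the final classification stage, a minimal linear SVM can separate synthetic stand-ins for the wavelet-energy features of false-negative and false-positive regions. The training scheme used here (Pegasos sub-gradient descent) and all data below are assumptions for illustration; the paper does not specify its SVM implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for spatial-frequency (wavelet-energy) features of
# false-negative (y = -1) and false-positive (y = +1) regions of interest.
n, d = 150, 6
Xneg = rng.normal(-1.0, 1.0, size=(n, d))
Xpos = rng.normal(+1.0, 1.0, size=(n, d))
X = np.vstack([Xneg, Xpos])
y = np.array([-1] * n + [+1] * n)

# Minimal linear SVM trained with the Pegasos sub-gradient scheme
lam, epochs = 0.01, 20
w, b, t = np.zeros(d), 0.0, 0
for _ in range(epochs):
    for i in rng.permutation(len(y)):
        t += 1
        eta = 1.0 / (lam * t)
        margin = y[i] * (X[i] @ w + b)
        w *= (1.0 - eta * lam)          # shrink (regularization step)
        if margin < 1:                  # hinge loss is active
            w += eta * y[i] * X[i]
            b += eta * y[i]

acc = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {acc:.1%}")
```

    On clearly separated synthetic features the linear SVM exceeds 90% accuracy, mirroring the "over 90% average correct ratio" regime the abstract reports.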

  16. Sample positioning effects in x-ray spectrometry

    International Nuclear Information System (INIS)

    Carpenter, D.

    Instrument error due to variation in sample position in a crystal x-ray spectrometer can easily exceed the total instrumental error. Lack of reproducibility in sample position in the x-ray optics is the single largest source of system error. The factors that account for sample positioning error are described, and many of the details of flat crystal x-ray optics are discussed.

  17. Error induced by the estimation of the corneal power and the effective lens position with a rotationally asymmetric refractive multifocal intraocular lens.

    Science.gov (United States)

    Piñero, David P; Camps, Vicente J; Ramón, María L; Mateo, Verónica; Pérez-Cambrodí, Rafael J

    2015-01-01

    To evaluate the prediction error in intraocular lens (IOL) power calculation for a rotationally asymmetric refractive multifocal IOL and the impact on this error of the optimization of the keratometric estimation of the corneal power and the prediction of the effective lens position (ELP). Retrospective study including a total of 25 eyes of 13 patients (age, 50 to 83y) with previous cataract surgery with implantation of the Lentis Mplus LS-312 IOL (Oculentis GmbH, Germany). In all cases, an adjusted IOL power (PIOLadj) was calculated based on Gaussian optics using a variable keratometric index value (nkadj) for the estimation of the corneal power (Pkadj) and on a new value for ELP (ELPadj) obtained by multiple regression analysis. This PIOLadj was compared with the IOL power implanted (PIOLReal) and the value proposed by three conventional formulas (Haigis, Hoffer Q and Holladay I). PIOLReal was not significantly different than PIOLadj and Holladay IOL power (P>0.05). In the Bland and Altman analysis, PIOLadj showed lower mean difference (-0.07 D) and limits of agreement (of 1.47 and -1.61 D) when compared to PIOLReal than the IOL power value obtained with the Holladay formula. Furthermore, ELPadj was significantly lower than ELP calculated with other conventional formulas (P<0.01) and was found to be dependent on axial length, anterior chamber depth and Pkadj. Refractive outcomes after cataract surgery with implantation of the multifocal IOL Lentis Mplus LS-312 can be optimized by minimizing the keratometric error and by estimating ELP using a mathematical expression dependent on anatomical factors.
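
    The ELP prediction described here — a linear model in axial length, anterior chamber depth, and Pkadj obtained by multiple regression — can be sketched as an ordinary least-squares fit. The data and coefficients below are synthetic placeholders, not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in data for 25 eyes; coefficients are illustrative only.
n = 25
AL  = rng.normal(23.5, 1.0, n)   # axial length (mm)
ACD = rng.normal(3.1, 0.3, n)    # anterior chamber depth (mm)
Pk  = rng.normal(43.5, 1.5, n)   # adjusted corneal power Pkadj (D)
elp = 0.20 * AL + 0.50 * ACD - 0.02 * Pk + rng.normal(0, 0.05, n)

# Least-squares fit of ELPadj = c0 + c1*AL + c2*ACD + c3*Pkadj
A = np.column_stack([np.ones(n), AL, ACD, Pk])
coef, *_ = np.linalg.lstsq(A, elp, rcond=None)
resid = elp - A @ coef

print("coefficients:", np.round(coef, 3))
print("RMS residual (mm):", round(float(np.sqrt(np.mean(resid ** 2))), 3))
```

    The fitted adjusted ELP would then feed back into the Gaussian-optics IOL power calculation in place of the ELP from a conventional formula.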

  18. Error induced by the estimation of the corneal power and the effective lens position with a rotationally asymmetric refractive multifocal intraocular lens

    Directory of Open Access Journals (Sweden)

    David P. Piñero

    2015-06-01

    Full Text Available AIM: To evaluate the prediction error in intraocular lens (IOL) power calculation for a rotationally asymmetric refractive multifocal IOL and the impact on this error of the optimization of the keratometric estimation of the corneal power and the prediction of the effective lens position (ELP). METHODS: Retrospective study including a total of 25 eyes of 13 patients (age, 50 to 83y) with previous cataract surgery with implantation of the Lentis Mplus LS-312 IOL (Oculentis GmbH, Germany). In all cases, an adjusted IOL power (PIOLadj) was calculated based on Gaussian optics using a variable keratometric index value (nkadj) for the estimation of the corneal power (Pkadj) and on a new value for ELP (ELPadj) obtained by multiple regression analysis. This PIOLadj was compared with the IOL power implanted (PIOLReal) and the value proposed by three conventional formulas (Haigis, Hoffer Q and Holladay I). RESULTS: PIOLReal was not significantly different than PIOLadj and Holladay IOL power (P>0.05). In the Bland and Altman analysis, PIOLadj showed lower mean difference (-0.07 D) and limits of agreement (of 1.47 and -1.61 D) when compared to PIOLReal than the IOL power value obtained with the Holladay formula. Furthermore, ELPadj was significantly lower than ELP calculated with other conventional formulas (P<0.01) and was found to be dependent on axial length, anterior chamber depth and Pkadj. CONCLUSION: Refractive outcomes after cataract surgery with implantation of the multifocal IOL Lentis Mplus LS-312 can be optimized by minimizing the keratometric error and by estimating ELP using a mathematical expression dependent on anatomical factors.

  19. Lung cancer mortality in the European uranium miners cohorts analyzed with a biologically based model taking into account radon measurement error

    Energy Technology Data Exchange (ETDEWEB)

    Heidenreich, W.F. [German Research Center for Environmental Health (GmbH), Institute for Radiation Protection, Neuherberg (Germany); Helmholtz Zentrum Muenchen, German Research Center for Environmental Health (GmbH), Institute for Radiation Biology, Neuherberg (Germany); Tomasek, L. [National Radiation Protection Institute, Prague (Czech Republic); Grosche, B. [BfS Bundesamt fuer Strahlenschutz, Neuherberg (Germany); Leuraud, K.; Laurier, D. [DRPH, SRBE, LEPID, Institut de Radioprotection et de Surete Nucleaire (IRSN), Fontenay-aux-Roses (France)

    2012-08-15

    The biologically based two-stage clonal expansion (TSCE) model is used to analyze lung cancer mortality of European miners from the Czech Republic, France, and Germany. All three cohorts indicate a highly significant action of exposure to radon and its progeny on promotion. The action on initiation is not significant in the French cohort. An action on transformation was tested but not found significant. In a pooled analysis, the results based on the French and German datasets do not differ significantly in any of the used parameters. For the Czech dataset, only lag time and two parameters that determine the clonal expansion without exposure and with low exposure rates (promotion) are consistent with the other studies. For low exposure rates, the resulting relative risks are quite similar. Exposure estimates for each calendar year are used. A model for random errors in each of these yearly exposures is presented. Depending on the used technique of exposure estimate, Berkson and classical errors are used. The consequences for the model parameters are calculated and found to be mostly of minor importance, except that the large difference in the exposure-induced initiation between the studies is decreased substantially. (orig.)
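
    The distinction between classical and Berkson exposure errors, central to the abstract's measurement-error model, can be demonstrated with a small simulation: classical error (observed = true + noise) attenuates a fitted exposure-response slope, while Berkson error (true = assigned + noise) leaves it essentially unbiased. All numbers below are illustrative, not from the miner cohorts:

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta, sd = 100_000, 2.0, 1.0

def fitted_slope(x_used, y):
    # OLS slope of the response on the exposure actually used in the analysis
    x_c = x_used - x_used.mean()
    return float((x_c @ (y - y.mean())) / (x_c @ x_c))

# Classical error: observed exposure = true + noise  ->  slope attenuated
x_true = rng.normal(10, 1.0, n)
y = beta * x_true + rng.normal(0, 0.5, n)
x_obs = x_true + rng.normal(0, sd, n)
slope_classical = fitted_slope(x_obs, y)       # attenuated toward beta/2 here

# Berkson error: true exposure = assigned + noise  ->  slope unbiased
x_assigned = rng.normal(10, 1.0, n)
x_true_b = x_assigned + rng.normal(0, sd, n)
y_b = beta * x_true_b + rng.normal(0, 0.5, n)
slope_berkson = fitted_slope(x_assigned, y_b)  # stays near beta

print(f"classical: {slope_classical:.2f}, Berkson: {slope_berkson:.2f}")
```

    With equal true-exposure and error variances the classical attenuation factor is 1/2, which is why which error type applies to a given exposure-estimation technique matters for the fitted model parameters.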

  20. SU-F-J-131: Reproducibility of Positioning Error Due to Temporarily Indwelled Urethral Catheter for Urethra-Sparing Prostate IMRT

    International Nuclear Information System (INIS)

    Hirose, K; Takai, Y; Sato, M; Hatayama, Y; Kawaguchi, H; Aoki, M; Akimoto, H; Komai, F; Souma, M; Obara, H; Suzuki, M

    2016-01-01

    Purpose: The purpose of this study was to prospectively assess the reproducibility of positioning errors due to a temporarily indwelled catheter in urethra-sparing image-guided (IG) IMRT. Methods: Ten patients received urethra-sparing prostate IG-IMRT with implanted fiducials. After the first CT scan was performed in the supine position, a 6-Fr catheter was indwelled into the urethra, and the second CT images were taken for planning. While the PTV received 80 Gy, a 5% dose reduction was applied for the urethral PRV along the catheter. Additional CT scans were also performed at the 5th and 30th fractions. Points of interest (POIs) were set on the posterior edge of the prostate at beam isocenter level (POI1) and the cranial and caudal edges of the prostatic urethra on the post-indwelled CT images. POIs were copied into the pre-indwelled, 5th and 30th fraction’s CT images after fiducial matching on these CT images. The deviation of each POI between pre- and post-indwelled CT and the reproducibility of prostate displacement due to the catheter were evaluated. Results: The deviation of POI1 caused by the indwelled catheter in the RL/AP/SI directions (mm) was 0.20±0.27/−0.64±2.43/1.02±2.31, respectively, and the absolute distance (mm) was 3.15±1.41. The deviation tends to be larger closer to the caudal edge of the prostate. Compared with the pre-indwelled CT scan, the median displacements of all POIs (mm) were 0.3±0.2/2.2±1.1/2.0±2.6 in the post-indwelled, 0.4±0.4/3.4±2.1/2.3±2.6 in the 5th, and 0.5±0.5/1.7±2.2/1.9±3.1 in the 30th fraction’s CT scan, with a similar data distribution. There were 6 patients with displacements over 5 mm in the AP and/or CC directions. Conclusion: Reproducibility of positioning errors due to the temporarily indwelled catheter was observed. Especially in the case of patients with unusually large shifts caused by the indwelled catheter at the planning process, treatment planning should be performed by using the pre-indwelled CT images with the transferred contour of the urethra identified by

  1. SU-F-J-131: Reproducibility of Positioning Error Due to Temporarily Indwelled Urethral Catheter for Urethra-Sparing Prostate IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Hirose, K; Takai, Y [Hirosaki University, Hirosaki (Japan); Southern Tohoku BNCT Research Center, Koriyama (Japan); Sato, M; Hatayama, Y; Kawaguchi, H; Aoki, M; Akimoto, H [Hirosaki University, Hirosaki (Japan); Komai, F; Souma, M; Obara, H; Suzuki, M [Hirosaki University Hospital, Hirosaki (Japan)

    2016-06-15

    Purpose: The purpose of this study was to prospectively assess the reproducibility of positioning errors due to a temporarily indwelled catheter in urethra-sparing image-guided (IG) IMRT. Methods: Ten patients received urethra-sparing prostate IG-IMRT with implanted fiducials. After the first CT scan was performed in the supine position, a 6-Fr catheter was indwelled into the urethra, and the second CT images were taken for planning. While the PTV received 80 Gy, a 5% dose reduction was applied for the urethral PRV along the catheter. Additional CT scans were also performed at the 5th and 30th fractions. Points of interest (POIs) were set on the posterior edge of the prostate at beam isocenter level (POI1) and the cranial and caudal edges of the prostatic urethra on the post-indwelled CT images. POIs were copied into the pre-indwelled, 5th and 30th fraction’s CT images after fiducial matching on these CT images. The deviation of each POI between pre- and post-indwelled CT and the reproducibility of prostate displacement due to the catheter were evaluated. Results: The deviation of POI1 caused by the indwelled catheter in the RL/AP/SI directions (mm) was 0.20±0.27/−0.64±2.43/1.02±2.31, respectively, and the absolute distance (mm) was 3.15±1.41. The deviation tends to be larger closer to the caudal edge of the prostate. Compared with the pre-indwelled CT scan, the median displacements of all POIs (mm) were 0.3±0.2/2.2±1.1/2.0±2.6 in the post-indwelled, 0.4±0.4/3.4±2.1/2.3±2.6 in the 5th, and 0.5±0.5/1.7±2.2/1.9±3.1 in the 30th fraction’s CT scan, with a similar data distribution. There were 6 patients with displacements over 5 mm in the AP and/or CC directions. Conclusion: Reproducibility of positioning errors due to the temporarily indwelled catheter was observed. Especially in the case of patients with unusually large shifts caused by the indwelled catheter at the planning process, treatment planning should be performed by using the pre-indwelled CT images with the transferred contour of the urethra identified by

  2. TU-AB-201-05: Automatic Adaptive Per-Operative Re-Planning for HDR Prostate Brachytherapy - a Simulation Study On Errors in Needle Positioning

    International Nuclear Information System (INIS)

    Borot de Battisti, M; Maenhout, M; Lagendijk, J J W; Van Vulpen, M; Moerland, M A; Senneville, B Denis de; Hautvast, G; Binnekamp, D

    2015-01-01

    Purpose: To develop adaptive planning with feedback for MRI-guided focal HDR prostate brachytherapy with a single divergent needle robotic implant device. After each needle insertion, the dwell positions for that needle are calculated and the positioning of remaining needles and dosimetry are both updated based on MR imaging. Methods: Errors in needle positioning may occur due to inaccurate needle insertion (caused by e.g. the needle’s bending) and unpredictable changes in patient anatomy. Consequently, the dose plan quality might dramatically decrease compared to the preplan. In this study, a procedure was developed to re-optimize, after each needle insertion, the remaining needle angulations, source positions and dwell times in order to obtain an optimal coverage (D95% PTV>19 Gy) without exceeding the constraints of the organs at risk (OAR) (D10% urethra<21 Gy, D1cc bladder<12 Gy and D1cc rectum<12 Gy). Complete HDR procedures with 6 needle insertions were simulated for a patient MR-image set with PTV, prostate, urethra, bladder and rectum delineated. Random angulation errors, modeled by a Gaussian distribution (standard deviation of 3 mm at the needle’s tip), were generated for each needle insertion. We compared the final dose parameters for the situations (I) without re-optimization and (II) with the automatic feedback. Results: The computation time of replanning was below 100 seconds on a current desktop computer. For the patient tested, a clinically acceptable dose plan was achieved while applying the automatic feedback (median(range) in Gy, D95% PTV: 19.9(19.3–20.3), D10% urethra: 13.4(11.9–18.0), D1cc rectum: 11.0(10.7–11.6), D1cc bladder: 4.9(3.6–6.8)). This was not the case without re-optimization (median(range) in Gy, D95% PTV: 19.4(14.9–21.3), D10% urethra: 12.6(11.0–15.7), D1cc rectum: 10.9(8.9–14.1), D1cc bladder: 4.8(4.4–5.2)). Conclusion: An automatic guidance strategy for HDR prostate brachytherapy was developed to compensate
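
    The abstract's error model — Gaussian needle-tip displacement with a 3 mm standard deviation per lateral axis — implies a Rayleigh-distributed 2D tip offset, which a quick Monte Carlo sketch makes concrete. The sample size and the 5 mm threshold below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Gaussian needle-tip error: sigma = 3 mm per lateral axis, as in the abstract;
# the 2D displacement magnitude then follows a Rayleigh distribution.
sigma_mm, n = 3.0, 100_000
dx = rng.normal(0, sigma_mm, n)
dy = rng.normal(0, sigma_mm, n)
r = np.hypot(dx, dy)

mean_r = float(r.mean())                 # Rayleigh mean = sigma*sqrt(pi/2)
frac_over_5mm = float(np.mean(r > 5.0))  # analytic value: exp(-25/18)

print(f"mean tip displacement: {mean_r:.2f} mm")
print(f"fraction exceeding 5 mm: {frac_over_5mm:.1%}")
```

    Roughly a quarter of simulated insertions miss by more than 5 mm under this model, which motivates the re-optimization of the remaining needles after each insertion.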

  3. Time-order errors and standard-position effects in duration discrimination: An experimental study and an analysis by the sensation-weighting model.

    Science.gov (United States)

    Hellström, Åke; Rammsayer, Thomas H

    2015-10-01

    Studies have shown that the discriminability of successive time intervals depends on the presentation order of the standard (St) and the comparison (Co) stimuli. Also, this order affects the point of subjective equality. The first effect is here called the standard-position effect (SPE); the latter is known as the time-order error. In the present study, we investigated how these two effects vary across interval types and standard durations, using Hellström's sensation-weighting model to describe the results and relate them to stimulus comparison mechanisms. In Experiment 1, four modes of interval presentation were used, factorially combining interval type (filled, empty) and sensory modality (auditory, visual). For each mode, two presentation orders (St-Co, Co-St) and two standard durations (100 ms, 1,000 ms) were used; half of the participants received correctness feedback, and half of them did not. The interstimulus interval was 900 ms. The SPEs were negative (i.e., a smaller difference limen for St-Co than for Co-St), except for the filled-auditory and empty-visual 100-ms standards, for which a positive effect was obtained. In Experiment 2, duration discrimination was investigated for filled auditory intervals with four standards between 100 and 1,000 ms, an interstimulus interval of 900 ms, and no feedback. Standard duration interacted with presentation order, here yielding SPEs that were negative for standards of 100 and 1,000 ms, but positive for 215 and 464 ms. Our findings indicate that the SPE can be positive as well as negative, depending on the interval type and standard duration, reflecting the relative weighting of the stimulus information, as is described by the sensation-weighting model.

  4. Identity, small stories and interpretative repertoires in research interviews. An account of market researchers’ discursive positioning strategies

    Directory of Open Access Journals (Sweden)

    Cosmin Toth

    2014-12-01

    Full Text Available My main purpose in this paper is to illustrate how participants in conversation occasioned by a research interview make use of two important discursive devices, namely small stories and interpretative repertoires, for positioning during interaction in order to foster certain situated identity claims. The premises I work with in this paper are that identity is a practiced, situated accomplishment; that small stories are devices frequently employed for identity work that are no less important than extended autobiographical expositions; and that interpretative repertoires are practiced ways of speaking that allow participants to manage their positions in certain ways. Moreover, I will try to show that positioning by means of small stories and interpretative repertoires should be understood in direct relation with the identities and other membership categories made relevant by the interviewer. When participants’ positions are conflicting or misaligned, a more pronounced identity work is employed on the part of the interviewee, sustained by certain repertoire-management strategies: alternation, nuancing, or rejecting certain repertoires.

  5. Can the Pro-Drop Parameter Account for All the Errors in the Acquisition of Non-Referential "It" in L2 English?

    Science.gov (United States)

    Antonova-Ünlü, Elena

    2015-01-01

    Numerous studies, examining the acquisition of non-referential it in [-pro-drop] English by learners of [+pro-drop] languages, have revealed that their participants omit non-referential subjects in English if their L1 allows null-subject position. However, due to the specificity of their focus, these studies have not considered other difficulties…

  6. Hemispheric Asymmetries in Striatal Reward Responses Relate to Approach-Avoidance Learning and Encoding of Positive-Negative Prediction Errors in Dopaminergic Midbrain Regions.

    Science.gov (United States)

    Aberg, Kristoffer Carl; Doell, Kimberly C; Schwartz, Sophie

    2015-10-28

    Some individuals are better at learning about rewarding situations, whereas others are inclined to avoid punishments (i.e., enhanced approach or avoidance learning, respectively). In reinforcement learning, action values are increased when outcomes are better than predicted (positive prediction errors [PEs]) and decreased for worse than predicted outcomes (negative PEs). Because actions with high and low values are approached and avoided, respectively, individual differences in the neural encoding of PEs may influence the balance between approach-avoidance learning. Recent correlational approaches also indicate that biases in approach-avoidance learning involve hemispheric asymmetries in dopamine function. However, the computational and neural mechanisms underpinning such learning biases remain unknown. Here we assessed hemispheric reward asymmetry in striatal activity in 34 human participants who performed a task involving rewards and punishments. We show that the relative difference in reward response between hemispheres relates to individual biases in approach-avoidance learning. Moreover, using a computational modeling approach, we demonstrate that better encoding of positive (vs negative) PEs in dopaminergic midbrain regions is associated with better approach (vs avoidance) learning, specifically in participants with larger reward responses in the left (vs right) ventral striatum. Thus, individual dispositions or traits may be determined by neural processes acting to constrain learning about specific aspects of the world. Copyright © 2015 the authors 0270-6474/15/3514491-10$15.00/0.
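
    The asymmetric encoding of positive versus negative prediction errors described here can be sketched with a Rescorla-Wagner-style value update using separate learning rates for the two PE signs. The alpha values and the two-outcome task below are illustrative assumptions, not the study's computational model:

```python
import random

random.seed(0)

def learned_value(alpha_pos, alpha_neg, trials=1000):
    """Value learned for an option that yields reward (+1) or punishment (-1),
    each with probability 0.5, using sign-dependent learning rates."""
    v = 0.0
    for _ in range(trials):
        outcome = 1.0 if random.random() < 0.5 else -1.0
        pe = outcome - v                                  # prediction error
        v += (alpha_pos if pe > 0 else alpha_neg) * pe    # asymmetric update
    return v

def mean_value(alpha_pos, alpha_neg, runs=300):
    # Average over many independent learners to smooth out run-to-run noise
    return sum(learned_value(alpha_pos, alpha_neg) for _ in range(runs)) / runs

v_approach = mean_value(0.10, 0.05)  # better encoding of positive PEs
v_avoid    = mean_value(0.05, 0.10)  # better encoding of negative PEs

print(f"approach learner: {v_approach:+.2f}, avoidance learner: {v_avoid:+.2f}")
```

    At the fixed point the learned value settles near (alpha_pos - alpha_neg)/(alpha_pos + alpha_neg), so stronger positive-PE encoding inflates action values (an approach bias) and stronger negative-PE encoding deflates them, mirroring the asymmetry the study links to striatal reward responses.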

  7. A study of respiration-correlated cone-beam CT scans to correct target positioning errors in radiotherapy of thoracic cancer

    Energy Technology Data Exchange (ETDEWEB)

    Santoro, J. P.; McNamara, J.; Yorke, E.; Pham, H.; Rimner, A.; Rosenzweig, K. E.; Mageras, G. S. [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States)]

    2012-10-15

    Purpose: There is increasingly widespread usage of cone-beam CT (CBCT) for guiding radiation treatment in advanced-stage lung tumors, but difficulties associated with daily CBCT in conventionally fractionated treatments include imaging dose to the patient, increased workload and longer treatment times. Respiration-correlated cone-beam CT (RC-CBCT) can improve localization accuracy in mobile lung tumors, but further increases the time and workload for conventionally fractionated treatments. This study investigates whether RC-CBCT-guided correction of systematic tumor deviations in standard fractionated lung tumor radiation treatments is more effective than 2D image-based correction of skeletal deviations alone. A second study goal compares respiration-correlated vs respiration-averaged images for determining tumor deviations. Methods: Eleven stage II-IV nonsmall cell lung cancer patients are enrolled in an IRB-approved prospective off-line protocol using RC-CBCT guidance to correct for systematic errors in GTV position. Patients receive a respiration-correlated planning CT (RCCT) at simulation, daily kilovoltage RC-CBCT scans during the first week of treatment and weekly scans thereafter. Four types of correction methods are compared: (1) systematic error in gross tumor volume (GTV) position, (2) systematic error in skeletal anatomy, (3) daily skeletal corrections, and (4) weekly skeletal corrections. The comparison is in terms of weighted average of the residual GTV deviations measured from the RC-CBCT scans and representing the estimated residual deviation over the treatment course. In the second study goal, GTV deviations computed from matching RCCT and RC-CBCT are compared to deviations computed from matching respiration-averaged images consisting of a CBCT reconstructed using all projections and an average-intensity-projection CT computed from the RCCT. 
Results: Of the eleven patients in the GTV-based systematic correction protocol, two required no correction

  8. A study of respiration-correlated cone-beam CT scans to correct target positioning errors in radiotherapy of thoracic cancer

    International Nuclear Information System (INIS)

    Santoro, J. P.; McNamara, J.; Yorke, E.; Pham, H.; Rimner, A.; Rosenzweig, K. E.; Mageras, G. S.

    2012-01-01

    Purpose: There is increasingly widespread usage of cone-beam CT (CBCT) for guiding radiation treatment in advanced-stage lung tumors, but difficulties associated with daily CBCT in conventionally fractionated treatments include imaging dose to the patient, increased workload and longer treatment times. Respiration-correlated cone-beam CT (RC-CBCT) can improve localization accuracy in mobile lung tumors, but further increases the time and workload for conventionally fractionated treatments. This study investigates whether RC-CBCT-guided correction of systematic tumor deviations in standard fractionated lung tumor radiation treatments is more effective than 2D image-based correction of skeletal deviations alone. A second study goal compares respiration-correlated vs respiration-averaged images for determining tumor deviations. Methods: Eleven stage II–IV nonsmall cell lung cancer patients are enrolled in an IRB-approved prospective off-line protocol using RC-CBCT guidance to correct for systematic errors in GTV position. Patients receive a respiration-correlated planning CT (RCCT) at simulation, daily kilovoltage RC-CBCT scans during the first week of treatment and weekly scans thereafter. Four types of correction methods are compared: (1) systematic error in gross tumor volume (GTV) position, (2) systematic error in skeletal anatomy, (3) daily skeletal corrections, and (4) weekly skeletal corrections. The comparison is in terms of weighted average of the residual GTV deviations measured from the RC-CBCT scans and representing the estimated residual deviation over the treatment course. In the second study goal, GTV deviations computed from matching RCCT and RC-CBCT are compared to deviations computed from matching respiration-averaged images consisting of a CBCT reconstructed using all projections and an average-intensity-projection CT computed from the RCCT. 
Results: Of the eleven patients in the GTV-based systematic correction protocol, two required no correction
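
    The off-line correction logic described in this record can be sketched as follows; the action threshold and the deviation values are illustrative only, not the study's protocol parameters:

```python
import statistics

def systematic_correction(deviations_mm, action_threshold_mm=3.0):
    """Estimate the systematic setup error as the mean of daily GTV deviations
    from the first treatment week, and return the couch shift (mm) that cancels
    it, if it exceeds the action threshold; otherwise apply no correction."""
    systematic = statistics.mean(deviations_mm)
    if abs(systematic) >= action_threshold_mm:
        return -systematic
    return 0.0

# Example: consistent 4 mm deviations trigger a -4 mm correction, while small
# random day-to-day scatter around zero triggers none.
```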

  9. Assessment on tracking error performance of Cascade P/PI, NPID and N-Cascade controller for precise positioning of xy table ballscrew drive system

    International Nuclear Information System (INIS)

    Abdullah, L; Jamaludin, Z; Rafan, N A; Jamaludin, J; Chiew, T H

    2013-01-01

    At present, positioning plants in machine tools demand a high degree of accuracy and robustness in order to compensate for various disturbance forces. The objective of this paper is to assess the tracking performance of the Cascade P/PI, nonlinear PID (NPID) and nonlinear cascade (N-Cascade) controllers in the presence of disturbance forces in the form of cutting forces. Cutting force characteristics at different cutting parameters, such as spindle speed, are analysed using the Fast Fourier Transform. The tracking performance of the N-Cascade controller in the presence of these cutting forces is compared with that of the NPID and Cascade P/PI controllers. The robustness of these controllers in compensating different cutting characteristics is compared on the basis of the reduction in the amplitudes of cutting force harmonics, obtained using the Fast Fourier Transform. The N-Cascade controller is found to perform better than both the NPID and Cascade P/PI controllers: the average percentage error reduction between the N-Cascade and Cascade P/PI controllers is about 65%, whereas that between the cascade controller and the NPID controller is about 82%, at a spindle speed of 3000 rpm. The finalized N-Cascade controller design could be utilized further for machining applications such as milling. Implementing N-Cascade control in machine tools will increase end-product quality and industrial productivity by saving machining time. It is suggested that the range of spindle speeds be widened to accommodate the needs of high-speed machining.
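
    The harmonic-analysis step (estimating cutting-force harmonic amplitudes via the Fourier transform) can be sketched with a single-bin discrete Fourier transform. The 50 Hz fundamental corresponds to the 3000 rpm spindle speed mentioned in the abstract, but the force signal below is synthetic:

```python
import math

def harmonic_amplitude(samples, sample_rate_hz, freq_hz):
    """Amplitude of the sinusoidal component of `samples` at `freq_hz`
    (single DFT bin; exact when freq_hz is a multiple of sample_rate_hz/N)."""
    n = len(samples)
    w = 2 * math.pi * freq_hz / sample_rate_hz
    re = sum(s * math.cos(w * k) for k, s in enumerate(samples))
    im = sum(s * math.sin(w * k) for k, s in enumerate(samples))
    return 2 * math.hypot(re, im) / n

# One second of 1 kHz force samples: a 1.0 N component at the 50 Hz spindle
# fundamental (3000 rpm) plus a weaker second harmonic at 100 Hz.
force = [math.sin(2 * math.pi * 50 * k / 1000)
         + 0.3 * math.sin(2 * math.pi * 100 * k / 1000) for k in range(1000)]
```

Comparing such harmonic amplitudes before and after compensation is one way to quantify the percentage error reductions the paper reports.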

  10. Inhibiting HER3-mediated tumor cell growth with affibody molecules engineered to low picomolar affinity by position-directed error-prone PCR-like diversification.

    Science.gov (United States)

    Malm, Magdalena; Kronqvist, Nina; Lindberg, Hanna; Gudmundsdotter, Lindvi; Bass, Tarek; Frejd, Fredrik Y; Höidén-Guthenberg, Ingmarie; Varasteh, Zohreh; Orlova, Anna; Tolmachev, Vladimir; Ståhl, Stefan; Löfblom, John

    2013-01-01

    The HER3 receptor is implicated in the progression of various cancers as well as in resistance to several currently used drugs, and is hence a potential target for development of new therapies. We have previously generated Affibody molecules that inhibit heregulin-induced signaling of the HER3 pathways. The aim of this study was to improve the affinity of the binders to hopefully increase receptor inhibition efficacy and enable a high receptor-mediated uptake in tumors. We explored a novel strategy for affinity maturation of Affibody molecules that is based on alanine scanning followed by design of library diversification to mimic the result from an error-prone PCR reaction, but with full control over mutated positions and thus fewer biases. Using bacterial surface display and flow-cytometric sorting of the maturation library, the affinity for HER3 was improved more than 30-fold down to 21 pM. The affinity is among the highest reported for Affibody molecules, and we believe that the maturation strategy should be generally applicable for improvement of affinity proteins. The new binders also demonstrated an improved thermal stability as well as complete refolding after denaturation. Moreover, inhibition of ligand-induced proliferation of HER3-positive breast cancer cells was improved more than two orders of magnitude compared to the previously best-performing clone. Radiolabeled Affibody molecules showed specific targeting of a number of HER3-positive cell lines in vitro as well as targeting of HER3 in in vivo mouse models and represent promising candidates for future development of targeted therapies and diagnostics.

  11. Accounting for horizontal gene transfers explains conflicting hypotheses regarding the position of aquificales in the phylogeny of Bacteria

    Directory of Open Access Journals (Sweden)

    Gouy Manolo

    2008-10-01

    Full Text Available Abstract Background Despite a large agreement between ribosomal RNA and concatenated protein phylogenies, the phylogenetic tree of the bacterial domain remains uncertain in its deepest nodes. For instance, the position of the hyperthermophilic Aquificales is debated, as their commonly observed position close to Thermotogales may proceed from horizontal gene transfers, long branch attraction or compositional biases, and may not represent vertical descent. Indeed, another view, based on the analysis of rare genomic changes, places Aquificales close to epsilon-Proteobacteria. Results To get a whole genome view of Aquifex relationships, all trees containing sequences from Aquifex in the HOGENOM database were surveyed. This study revealed that Aquifex is most often found as a neighbour to Thermotogales. Moreover, informational genes, which appeared to be less often transferred to the Aquifex lineage than non-informational genes, most often placed Aquificales close to Thermotogales. To ensure these results did not come from long branch attraction or compositional artefacts, a subset of carefully chosen proteins from a wide range of bacterial species was selected for further scrutiny. Among these genes, two phylogenetic hypotheses were found to be significantly more likely than the others: the most likely hypothesis placed Aquificales as a neighbour to Thermotogales, and the second one with epsilon-Proteobacteria. We characterized the genes that supported each of these two hypotheses, and found that differences in rates of evolution or in amino-acid compositions could not explain the presence of two incongruent phylogenetic signals in the alignment. Instead, evidence for a large Horizontal Gene Transfer between Aquificales and epsilon-Proteobacteria was found. 
Conclusion Methods based on concatenated informational proteins and methods based on character cladistics led to different conclusions regarding the position of Aquificales because this lineage

  12. Detecting and accounting for multiple sources of positional variance in peak list registration analysis and spin system grouping.

    Science.gov (United States)

    Smelter, Andrey; Rouchka, Eric C; Moseley, Hunter N B

    2017-08-01

    Peak lists derived from nuclear magnetic resonance (NMR) spectra are commonly used as input data for a variety of computer-assisted and automated analyses. These include automated protein resonance assignment and protein structure calculation software tools. Prior to these analyses, peak lists must be aligned to each other and sets of related peaks must be grouped based on common chemical shift dimensions. Even when programs can perform peak grouping, they require the user to provide uniform match tolerances or use default values. However, peak grouping is further complicated by multiple sources of variance in peak position, limiting the effectiveness of grouping methods that utilize uniform match tolerances. In addition, no method currently exists for deriving peak positional variances from single peak lists for grouping peaks into spin systems, i.e. spin system grouping within a single peak list. Therefore, we developed a complementary pair of peak list registration analysis and spin system grouping algorithms designed to overcome these limitations. We have implemented these algorithms into an approach that can identify multiple dimension-specific positional variances that exist in a single peak list and group peaks from a single peak list into spin systems. The resulting software tools generate a variety of useful statistics on both a single peak list and pairwise peak list alignment, especially for quality assessment of peak list datasets. We used a range of low and high quality experimental solution NMR and solid-state NMR peak lists to assess performance of our registration analysis and grouping algorithms. Analyses show that an algorithm using a single iteration with uniform match tolerances is only able to recover from 50 to 80% of the spin systems due to the presence of multiple sources of variance. Our algorithm recovers additional spin systems by reevaluating match tolerances in multiple iterations. To facilitate evaluation of the
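
    The tolerance-based grouping step described above can be sketched greedily. The peak tuples and per-dimension tolerances below are invented for illustration; the authors' tool additionally derives the tolerances from the peak list itself rather than taking them as fixed inputs:

```python
def group_peaks(peaks, tolerances):
    """Greedily group peak tuples whose chemical shifts agree with a group's
    first member within per-dimension match tolerances."""
    groups = []
    for peak in peaks:
        for group in groups:
            ref = group[0]
            if all(abs(p - r) <= t for p, r, t in zip(peak, ref, tolerances)):
                group.append(peak)
                break
        else:
            groups.append([peak])
    return groups

# Example 1H/15N shifts (ppm): the first two peaks share a root resonance and
# fall into one spin-system group; the third starts its own.
peaks = [(8.20, 120.1), (8.21, 120.2), (7.50, 115.0)]
```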

  13. Gas-cooled reactor programs. Fuel-management positioning and accounting module: FUELMANG Version V1. 11, September 1981

    Energy Technology Data Exchange (ETDEWEB)

    Medlin, T.W.; Hill, K.L.; Johnson, G.L.; Jones, J.E.; Vondy, D.R.

    1982-01-01

    This report documents the code module FUELMANG for fuel management of a reactor. This code may be used to position fuel during the calculation of a reactor history, maintain a mass balance history of the fuel movement, and calculate the unit fuel cycle component of the electrical generation cost. In addition to handling fixed feed fuel without recycle, provision has been made for fuel recycle with various options applied to the recycled fuel. A continuous fueling option is also available with the code. A major edit produced by the code is a detailed summary of the mass balance history of the reactor and a fuel cost analysis of that mass balance history. This code is incorporated in the system containing the VENTURE diffusion theory neutronics code for routine use. Fuel movement according to prescribed instructions is performed without the access of additional user input data during the calculation of a reactor operating history. Local application has been primarily for analysis of the performance of gas-cooled thermal reactor core concepts.

  14. Gas-cooled reactor programs. Fuel-management positioning and accounting module: FUELMANG Version V1.11, September 1981

    International Nuclear Information System (INIS)

    Medlin, T.W.; Hill, K.L.; Johnson, G.L.; Jones, J.E.; Vondy, D.R.

    1982-01-01

    This report documents the code module FUELMANG for fuel management of a reactor. This code may be used to position fuel during the calculation of a reactor history, maintain a mass balance history of the fuel movement, and calculate the unit fuel cycle component of the electrical generation cost. In addition to handling fixed feed fuel without recycle, provision has been made for fuel recycle with various options applied to the recycled fuel. A continuous fueling option is also available with the code. A major edit produced by the code is a detailed summary of the mass balance history of the reactor and a fuel cost analysis of that mass balance history. This code is incorporated in the system containing the VENTURE diffusion theory neutronics code for routine use. Fuel movement according to prescribed instructions is performed without the access of additional user input data during the calculation of a reactor operating history. Local application has been primarily for analysis of the performance of gas-cooled thermal reactor core concepts

  15. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
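
    The seed-dependence point can be illustrated with a toy model of our own (not FELEX): relative gain under random field errors is computed for several random-number seeds, and the spread across seeds quantifies the stochastic variation at a given error level.

```python
import random
import statistics

def relative_gain(error_level, seed, n_periods=100):
    """Toy model: each wiggler period loses a small, random fraction of gain
    proportional to the local field error (illustrative, not FEL physics)."""
    rng = random.Random(seed)
    gain = 1.0
    for _ in range(n_periods):
        gain *= 1.0 - 0.01 * abs(rng.gauss(0.0, error_level))
    return gain

def seed_spread(error_level, seeds):
    """Mean relative gain and the spread attributable to seed choice alone."""
    gains = [relative_gain(error_level, s) for s in seeds]
    return statistics.mean(gains), statistics.pstdev(gains)
```

Plotting mean and spread against `error_level` is the kind of multi-seed display the abstract describes for building an error budget.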

  16. Analyzing temozolomide medication errors: potentially fatal.

    Science.gov (United States)

    Letarte, Nathalie; Gabay, Michael P; Bressler, Linda R; Long, Katie E; Stachnik, Joan M; Villano, J Lee

    2014-10-01

    The EORTC-NCIC regimen for glioblastoma requires different dosing of temozolomide (TMZ) during radiation and maintenance therapy. This complexity is exacerbated by the availability of multiple TMZ capsule strengths. TMZ is an alkylating agent and the major toxicity of this class is dose-related myelosuppression. Inadvertent overdose can be fatal. The websites of the Institute for Safe Medication Practices (ISMP), and the Food and Drug Administration (FDA) MedWatch database were reviewed. We searched the MedWatch database for adverse events associated with TMZ and obtained all reports including hematologic toxicity submitted from 1st November 1997 to 30th May 2012. The ISMP describes errors with TMZ resulting from the positioning of information on the label of the commercial product. The strength and quantity of capsules on the label were in close proximity to each other, and this has been changed by the manufacturer. MedWatch identified 45 medication errors. Patient errors were the most common, accounting for 21 or 47% of errors, followed by dispensing errors, which accounted for 13 or 29%. Seven reports or 16% were errors in the prescribing of TMZ. Reported outcomes ranged from reversible hematological adverse events (13%), to hospitalization for other adverse events (13%) or death (18%). Four error reports lacked detail and could not be categorized. Although the FDA issued a warning in 2003 regarding fatal medication errors and the product label warns of overdosing, errors in TMZ dosing occur for various reasons and involve both healthcare professionals and patients. Overdosing errors can be fatal.

  17. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
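
    The comparison just described — computing an error vector against each code word, then deciding in favour of the lightest one — can be sketched as minimum-distance decoding (an illustration under our assumptions, not the paper's formalism):

```python
def error_vector(received, codeword):
    """Component-wise XOR: where the received (0,1)-word differs from the code word."""
    return [r ^ c for r, c in zip(received, codeword)]

def decode(received, code):
    """Minimum-distance decoding: pick the code word whose error vector has
    the smallest Hamming weight."""
    return min(code, key=lambda cw: sum(error_vector(received, cw)))

# Length-3 repetition code: any single bit error is corrected.
code = [[0, 0, 0], [1, 1, 1]]
```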

  18. Impact of error management culture on knowledge performance in professional service firms

    Directory of Open Access Journals (Sweden)

    Tabea Scheel

    2014-01-01

    Full Text Available Knowledge is the most crucial resource of the 21st century. For professional service firms (PSFs), knowledge represents the input as well as the output, and thus the fundamental base for performance. Like every organization, PSFs have to deal with errors – and how they do that indicates their error culture. Considering the positive potential of errors (e.g., innovation), error management culture is positively related to organizational performance. This longitudinal quantitative study investigates the impact of error management culture on knowledge performance in four waves. The study was conducted in 131 PSFs, i.e. tax accounting offices. As a standard quality management system (QMS) was assumed to moderate the relationship between error management culture and knowledge performance, offices' ISO 9000 certification was assessed. Error management culture correlated positively with knowledge performance at a significant level and predicted knowledge performance one year later. While the ISO 9000 certification correlated positively with knowledge performance, its assumed moderation of the relationship between error management culture and knowledge performance was not consistent. The process-oriented QMS seems to function as a facilitator for the more behavior-oriented error management culture. However, the benefit of ISO 9000 certification for tax accounting remains to be proven. Given the impact of error management culture on knowledge performance, PSFs should focus on actively promoting positive attitudes towards errors.

  19. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Moreover, a combination of component failure and human error is often found in spectacular events. The Rasmussen Report and the German Risk Assessment Study in particular show for pressurised water reactors that human error must not be underestimated. Although operator errors as a form of human error can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if a thorough training of personnel is combined with an adequate design of the plant against accidents. Contrary to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  20. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  1. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
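
    The interplay between rounding and weighing error can be illustrated numerically (a simulation of ours, not the MERDA program): with fine grouping, the observed variance is roughly σ² + d²/12, the familiar uniform-rounding approximation, whereas with coarse grouping the rounding error becomes correlated with the weighing error and that additive approximation fails, as the record notes.

```python
import random
import statistics

def rounded_variance(sigma, d, n=200_000, seed=7):
    """Population variance of n simulated weighings with weighing error
    sigma, each rounded to the nearest multiple of the step d."""
    rng = random.Random(seed)
    data = [round(rng.gauss(100.0, sigma) / d) * d for _ in range(n)]
    return statistics.pvariance(data)

fine = rounded_variance(sigma=0.2, d=0.1)    # rounding step small vs sigma
coarse = rounded_variance(sigma=0.2, d=1.0)  # rounding step large vs sigma
```

Here `fine` lands near 0.2² + 0.1²/12 ≈ 0.0408, while `coarse` falls far below 0.2² + 1²/12 ≈ 0.123 because nearly all readings collapse onto one rounded cell.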

  2. Comparison of measurement methods with a mixed effects procedure accounting for replicated evaluations (COM3PARE): method comparison algorithm implementation for head and neck IGRT positional verification.

    Science.gov (United States)

    Roy, Anuradha; Fuller, Clifton D; Rosenthal, David I; Thomas, Charles R

    2015-08-28

    Comparison of imaging measurement devices in the absence of a gold-standard comparator remains a vexing problem, especially in scenarios where multiple, non-paired, replicated measurements occur, as in image-guided radiotherapy (IGRT). As the number of commercially available IGRT systems presents a challenge in determining whether different IGRT methods may be used interchangeably, there is an unmet need for a conceptually parsimonious and statistically robust method to evaluate the agreement between two methods with replicated observations. Consequently, we sought to determine, using a previously reported head and neck positional verification dataset, the feasibility and utility of a Comparison of Measurement Methods with the Mixed Effects Procedure Accounting for Replicated Evaluations (COM3PARE), a unified conceptual schema and analytic algorithm based upon Roy's linear mixed effects (LME) model with Kronecker product covariance structure in a doubly multivariate set-up, for IGRT method comparison. An anonymized dataset consisting of 100 paired coordinate (X/Y/Z) measurements from a sequential series of head and neck cancer patients imaged near-simultaneously with cone beam CT (CBCT) and kilovoltage X-ray (KVX) imaging was used for model implementation. Software-suggested CBCT and KVX shifts for the lateral (X), vertical (Y) and longitudinal (Z) dimensions were evaluated for bias, inter-method (between-subject variation), intra-method (within-subject variation), and overall agreement using a script implementing COM3PARE with the MIXED procedure of the statistical software package SAS (SAS Institute, Cary, NC, USA). COM3PARE showed statistically significant differences in bias and in inter-method agreement between CBCT and KVX in the Z-axis (both p < 0.01). Intra-method and overall agreement differences were noted as statistically significant for both the X- and Z-axes (all p < 0.01). Using pre-specified criteria based on intra-method agreement, CBCT was deemed

  3. The Pupillary Orienting Response Predicts Adaptive Behavioral Adjustment after Errors.

    Directory of Open Access Journals (Sweden)

    Peter R Murphy

    Full Text Available Reaction time (RT) is commonly observed to slow down after an error. This post-error slowing (PES) has been thought to arise from the strategic adoption of a more cautious response mode following deployment of cognitive control. Recently, an alternative account has suggested that PES results from interference due to an error-evoked orienting response. We investigated whether error-related orienting may in fact be a precursor to adaptive post-error behavioral adjustment when the orienting response resolves before subsequent trial onset. We measured pupil dilation, a prototypical measure of autonomic orienting, during performance of a choice RT task with long inter-stimulus intervals, and found that the trial-by-trial magnitude of the error-evoked pupil response positively predicted both PES magnitude and the likelihood that the following response would be correct. These combined findings suggest that the magnitude of the error-related orienting response predicts an adaptive change of response strategy following errors, and thereby promote a reconciliation of the orienting and adaptive control accounts of PES.

  4. Determination of locational error associated with global positioning system (GPS) radio collars in relation to vegetation and topography in north-central New Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, K.; Biggs, J.; Fresquez, P.R.

    1997-02-01

    In 1996, a study was initiated to assess seasonal habitat use and movement patterns of Rocky Mountain elk (Cervus elaphus nelsoni) using global positioning system (GPS) radio collars. As part of this study, the authors attempted to assess the accuracies of GPS (non-differentially corrected) positions under various vegetation canopies and terrain conditions with the use of a GPS "test" collar. The test collar was activated every twenty minutes to obtain a position location and continuously uplinked to Argos satellites to transfer position data files. They used a Telonics, Inc. uplink receiver to intercept the transmission and view the results of the collar in real time. They placed the collar on a stand equivalent to the neck height of an adult elk and then placed the stand within three different treatment categories: (1) topographical influence (canyon and mesa tops), (2) canopy influence (open and closed canopy), and (3) vegetation type influence (ponderosa pine and pinyon pine-juniper). The collar was kept at each location for one hour (usually obtaining three fixes). In addition, the authors used a hand-held GPS to obtain a position of the test collar at the same time and location.

  5. A mode of error: Immunoglobulin binding protein (a subset of anti-citrullinated proteins) can cause false positive tuberculosis test results in rheumatoid arthritis

    Directory of Open Access Journals (Sweden)

    Maria Greenwald

    2017-12-01

    Full Text Available Citrullinated Immunoglobulin Binding Protein (BiP) is a newly described autoimmune target in rheumatoid arthritis (RA), one of many cyclic citrullinated peptides (CCP or ACPA). BiP is over-expressed in RA patients, causing T cell expansion and increased interferon levels during incubation for the QuantiFERON-Gold tuberculosis test (QFT-G TB). The QFT-G TB has never been validated where interferon is increased by underlying disease, as, for example, in RA. Of ACPA-positive RA patients (n = 126), we found a 13% false-positive TB test rate by QFT-G TB. Despite subsequent biologic therapy for 3 years in all 126 RA patients, none showed evidence of TB without INH. Most of the false-positive RA patients reverted to a negative QFT-G test after treatment with biologic therapy. False TB tests correlated with ACPA level (p < 0.02). Three healthy women without arthritis or TB exposure had negative QFT-G TB results. In vitro, all three tested positive for TB every time, in correlation with the dose of BiP or anti-BiP added, at 2 µg/ml, 5 µg/ml, 10 µg/ml, and 20 µg/ml. BiP, naturally found in the majority of ACPA-positive RA patients, can result in a false-positive QFT-G TB. Subsequent undertreatment of RA, if biologic therapy is withheld, and overtreatment of presumed latent TB may harm patients. Keywords: Tuberculosis, IGRA, Rheumatoid arthritis, Interferon, Anti-citrullinated peptide antibody (ACPA), Immunoglobulin binding protein (BiP)

  6. The sensitivity of gamma-index method to the positioning errors of high-definition MLC in patient-specific VMAT QA for SBRT

    International Nuclear Information System (INIS)

    Kim, Jung-in; Park, So-Yeon; Kim, Hak Jae; Kim, Jin Ho; Ye, Sung-Joon; Park, Jong Min

    2014-01-01

    To investigate the sensitivity of various gamma criteria used in the gamma-index method for patient-specific volumetric modulated arc therapy (VMAT) quality assurance (QA) for stereotactic body radiation therapy (SBRT) using a flattening filter free (FFF) photon beam. Three types of intentional misalignments were introduced to original high-definition multi-leaf collimator (HD-MLC) plans. The first type, referred to as Class Out, involved the opening of each bank of leaves. The second type, Class In, involved the closing of each bank of leaves. The third type, Class Shift, involved the shifting of each bank of leaves towards the ground. Patient-specific QAs for the original and the modified plans were performed with MapCHECK2 and EBT2 films. The sensitivity of the gamma-index method using criteria of 1%/1 mm, 1.5%/1.5 mm, 1%/2 mm, 2%/1 mm and 2%/2 mm was investigated with absolute passing rates according to the magnitudes of MLC misalignments. In addition, the changes in dose-volumetric indicators due to the magnitudes of MLC misalignments were investigated. The correlations between passing rates and the changes in dose-volumetric indicators were also investigated using Spearman's rank correlation coefficient (γ). The criterion of 2%/1 mm was able to detect Class Out and Class In MLC misalignments of 0.5 mm and Class Shift misalignments of 1 mm. The widely adopted clinical criterion of 2%/2 mm was not able to detect 0.5 mm MLC errors of the Class Out or Class In types, and was also unable to detect 3 mm Class Shift errors. No correlations were observed between dose-volumetric changes and gamma passing rates (γ < 0.8). A gamma criterion of 2%/1 mm was found to be suitable as a tolerance level, with passing rates of 90% and 80% for patient-specific VMAT QA for SBRT when using MapCHECK2 and EBT2 film, respectively
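
    The gamma-index evaluation itself can be sketched in one dimension (a generic illustration of the method, not the MapCHECK2/EBT2 analysis software): a measured point passes when some reference point jointly satisfies the dose-difference and distance-to-agreement criteria.

```python
import math

def gamma_pass_rate(reference, measured, spacing_mm, dose_crit, dta_mm):
    """1-D global gamma: dose_crit is in the same (absolute) units as the dose
    samples, e.g. 0.02 for a 2% criterion on doses normalized to 1.0."""
    passed = 0
    for i, dm in enumerate(measured):
        gamma = min(
            math.hypot((dm - dr) / dose_crit, (i - j) * spacing_mm / dta_mm)
            for j, dr in enumerate(reference)
        )
        if gamma <= 1.0:
            passed += 1
    return passed / len(measured)

# A 1 mm profile shift sits exactly at tolerance for a 2%/1 mm criterion,
# while a uniform 5% dose error fails every point.
profile = [0.0, 0.0, 1.0, 0.0, 0.0]
```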

  7. Translational and rotational intra- and inter-fractional errors in patient and target position during a short course of frameless stereotactic body radiotherapy

    DEFF Research Database (Denmark)

    Josipovic, Mirjana; Persson, Gitte Fredberg; Logadottir, Ashildur

    2012-01-01

    Implementation of cone beam computed tomography (CBCT) in frameless stereotactic body radiotherapy (SBRT) of lung tumours enables setup correction based on tumour position. The aim of this study was to compare setup accuracy with daily soft tissue matching to bony anatomy matching and evaluate...

  8. Errors and mistakes in breast ultrasound diagnostics

    Directory of Open Access Journals (Sweden)

    Wiesław Jakubowski

    2012-09-01

    Full Text Available Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high-frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved the diagnostics of breast disease. Nevertheless, as in each imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), and insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those that are impossible to avoid and those that can potentially be reduced. In this article the most frequently made errors in ultrasound have been presented, including the ones caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper setting of general enhancement, time gain curve or range. Errors dependent on the examiner, resulting in a wrong BIRADS-usg classification, are divided into negative and positive errors. The sources of these errors have been listed. The methods of minimizing the number of errors are discussed, including those related to appropriate examination technique, taking into account data from the case history, and the use of the greatest possible number of additional options such as harmonic imaging, color and power Doppler, and elastography. Examples are presented of errors resulting from the technical conditions of the method and of examiner-dependent errors related to the great diversity and variation of ultrasound images of pathological breast lesions.

  9. Process Accounting

    OpenAIRE

    Gilbertson, Keith

    2002-01-01

    Standard utilities can help you collect and interpret your Linux system's process accounting data. Describes the uses of process accounting, standard process accounting commands, and example code that makes use of process accounting utilities.

  10. SU-G-JeP3-02: Comparison of Magnitude and Frequency of Patient Positioning Errors in Breast Irradiation Using AlignRT 3D Optical Surface Imaging and Skin Mark Techniques

    International Nuclear Information System (INIS)

    Yao, R; Chisela, W; Dorbu, G

    2016-01-01

    Purpose: To evaluate clinical usefulness of AlignRT (Vision RT Ltd., London, UK) in reducing patient positioning errors in breast irradiation. Methods: 60 patients undergoing whole breast irradiation were selected for this study. Patients were treated to the left or right breast lying on Qfix Access breast board (Qfix, Avondale, PA) in supine position for 28 fractions using tangential fields. 30 patients were aligned using AlignRT by aligning a breast surface region of interest (ROI) to the same area from a reference surface image extracted from planning CT. When the patient’s surface image deviated from the reference by more than 3mm on one or more translational and rotational directions, a new reference was acquired using AlignRT in-room cameras. The other 30 patients were aligned to the skin marks with room lasers. On-Board MV portal images of medial field were taken daily and matched to the DRRs. The magnitude and frequency of positioning errors were determined from measured translational shifts. Kolmogorov-Smirnov test was used to evaluate statistical differences of positional accuracy and precision between AlignRT and non-AlignRT patients. Results: The percentage of port images with no shift required was 46.5% and 27.0% in vertical, 49.8% and 25.8% in longitudinal, 47.6% and 28.5% in lateral for AlignRT and non-AlignRT patients, respectively. The percentage of port images requiring more than 3mm shifts was 18.1% and 35.1% in vertical, 28.6% and 50.8% in longitudinal, 11.3% and 24.2% in lateral for AlignRT and non-AlignRT patients, respectively. Kolmogorov-Smirnov test showed that there were significant differences between the frequency distributions of AlignRT and non-AlignRT in vertical, longitudinal, and lateral shifts. Conclusion: As confirmed by port images, AlignRT-assisted patient positioning can significantly reduce the frequency and magnitude of patient setup errors in breast irradiation compared to the use of lasers and skin marks.
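
    The Kolmogorov-Smirnov comparison of shift-frequency distributions used above can be sketched as follows. The shift samples below are simulated placeholders, not the study's port-image data, and the asymptotic p-value uses the standard series approximation with Stephens' small-sample correction:

```python
import numpy as np

def ks_two_sample(a, b):
    """Two-sample Kolmogorov-Smirnov statistic with the standard
    asymptotic p-value approximation (Stephens' correction)."""
    a, b = np.sort(a), np.sort(b)
    both = np.concatenate([a, b])
    # Empirical CDFs of both samples evaluated at all observed points
    cdf_a = np.searchsorted(a, both, side="right") / len(a)
    cdf_b = np.searchsorted(b, both, side="right") / len(b)
    d = np.abs(cdf_a - cdf_b).max()
    ne = len(a) * len(b) / (len(a) + len(b))        # effective sample size
    lam = (np.sqrt(ne) + 0.12 + 0.11 / np.sqrt(ne)) * d
    k = np.arange(1, 101)
    p = 2.0 * np.sum((-1.0) ** (k - 1) * np.exp(-2.0 * (k * lam) ** 2))
    return d, float(min(max(p, 0.0), 1.0))

# Simulated daily lateral setup shifts (mm) under two alignment methods;
# the spreads and sample sizes are invented for illustration only.
rng = np.random.default_rng(0)
shifts_surface = rng.normal(0.0, 1.5, 1000)   # surface-imaging alignment
shifts_marks = rng.normal(0.0, 3.0, 1000)     # skin-mark alignment
d, p = ks_two_sample(shifts_surface, shifts_marks)
```

    A small p-value, as here, indicates the two shift distributions differ significantly, mirroring the record's finding for the vertical, longitudinal, and lateral directions.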

  11. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review article explains the definition of medication errors, the scope of the medication error problem, the types of medication errors, their common causes, the monitoring and consequences of medication errors, and the prevention and management of medication errors, with tables that make the material easy to follow.

  12. Dopamine reward prediction error coding

    OpenAIRE

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less...

  13. Human Error Mechanisms in Complex Work Environments

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1988-01-01

    will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations...

  14. Internet accounting

    NARCIS (Netherlands)

    Pras, Aiko; van Beijnum, Bernhard J.F.; Sprenkels, Ron; Parhonyi, R.

    2001-01-01

    This article provides an introduction to Internet accounting and discusses the status of related work within the IETF and IRTF, as well as certain research projects. Internet accounting is different from accounting in POTS. To understand Internet accounting, it is important to answer questions like

  15. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

    We calculate opacity from k(hν) = -ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. The error budget follows from Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B0, where B is the transmitted backlighter (BL) signal and B0 is the unattenuated backlighter signal. Then Δln(T) = ΔT/T = ΔB/B + ΔB0/B0, and consequently Δk/k = (1/ln T)(ΔB/B + ΔB0/B0) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
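
    The fractional-error budget for the opacity can be propagated numerically; the sketch below uses illustrative placeholder values (a transmission of 0.3 and 2-3% fractional errors), not the report's measurements:

```python
import numpy as np

# Fractional-error budget for opacity k = -ln(T) / (rho * L), with
# transmission T = B / B0. All numbers below are hypothetical.
T = 0.3              # measured transmission
dB_over_B = 0.02     # fractional noise on transmitted backlighter signal
dB0_over_B0 = 0.02   # fractional noise on unattenuated backlighter signal
dpL_over_pL = 0.03   # fractional error on areal density rho * L

# d(ln T) = dT/T = dB/B + dB0/B0, and k is proportional to ln(T),
# so the signal terms enter the budget divided by |ln T|.
dT_over_T = dB_over_B + dB0_over_B0
dk_over_k = dT_over_T / abs(np.log(T)) + dpL_over_pL
```

    Note how the 1/|ln T| factor amplifies the signal-noise terms as T approaches 1 (a nearly transparent sample), which is why the transmission range quoted in the record matters for the achievable opacity accuracy.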

  16. Error mapping of high-speed AFM systems

    Science.gov (United States)

    Klapetek, Petr; Picco, Loren; Payton, Oliver; Yacoot, Andrew; Miles, Mervyn

    2013-02-01

    In recent years, there have been several advances in the development of high-speed atomic force microscopes (HSAFMs) to obtain images with nanometre vertical and lateral resolution at frame rates in excess of 1 fps. To date, these instruments are lacking in metrology for their lateral scan axes; however, by imaging a series of two-dimensional lateral calibration standards, it has been possible to obtain information about the errors associated with these HSAFM scan axes. Results from initial measurements are presented in this paper and show that the scan speed needs to be taken into account when performing a calibration as it can lead to positioning errors of up to 3%.

  17. Effect of removing the common mode errors on linear regression analysis of noise amplitudes in position time series of a regional GPS network & a case study of GPS stations in Southern California

    Science.gov (United States)

    Jiang, Weiping; Ma, Jun; Li, Zhao; Zhou, Xiaohui; Zhou, Boye

    2018-05-01

    Analysis of the correlations between the noise in different components of GPS stations helps in obtaining more accurate uncertainties for station velocities. Previous research into noise in GPS position time series focused mainly on single-component evaluation, which affects the acquisition of precise station positions, the velocity field, and its uncertainty. In this study, before and after removing the common-mode error (CME), we performed one-dimensional linear regression analysis of the noise amplitude vectors in different components of 126 GPS stations in Southern California, using a combination of white noise, flicker noise, and random walk noise. The results show that, on the one hand, there are moderate or stronger correlations between the white noise amplitude vectors in all components of the stations both before and after removal of the CME, while the correlations between flicker noise amplitude vectors in horizontal and vertical components are enhanced from uncorrelated to moderately correlated by removing the CME. On the other hand, the significance tests show that all of the obtained linear regression equations, each expressing the noise amplitude in one component as a unique function of the amplitude in another, are of practical value after removing the CME. Using the noise amplitude estimates in two components together with the linear regression equations, more accurate noise amplitudes can be acquired in both components.
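
    The one-dimensional regression between noise amplitudes in two components, and its significance test, can be sketched as follows. The amplitude vectors here are simulated with an invented relationship; the real analysis would use amplitudes estimated from the position time series of the 126 stations:

```python
import numpy as np

# Simulated noise-amplitude vectors for 126 stations (invented numbers).
rng = np.random.default_rng(1)
north = rng.gamma(shape=4.0, scale=0.5, size=126)   # mm, one component
up = 2.0 * north + rng.normal(0.0, 1.0, 126)        # mm, another component

# One-dimensional linear regression: up = a * north + b
a, b = np.polyfit(north, up, 1)
r = np.corrcoef(north, up)[0, 1]

# t statistic for H0: slope = 0, with n - 2 degrees of freedom;
# a large |t| means the regression equation is of practical value.
n = len(north)
t = r * np.sqrt((n - 2) / (1.0 - r ** 2))
```

    Given the fitted equation, an amplitude measured in one component predicts the amplitude in the other, which is how the record proposes to refine noise amplitude estimates after CME removal.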

  18. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
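
    The core quantity described here, the difference between received and predicted reward, drives simple trial-and-error learning. A minimal Rescorla-Wagner-style sketch (an illustration of the prediction-error idea, not a model of dopamine firing itself) is:

```python
# Value update driven by the reward prediction error (RPE):
#   rpe = received - predicted; positive RPE means better than expected.
def update(predicted, received, alpha=0.1):
    rpe = received - predicted
    return predicted + alpha * rpe, rpe

value, rpe = 0.0, None
for _ in range(100):              # a reliably delivered unit reward
    value, rpe = update(value, 1.0)
# After learning, the reward is fully predicted and the error
# shrinks toward zero, matching the baseline firing described above.
```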

  19. Management Accounting

    OpenAIRE

    John Burns; Martin Quinn; Liz Warren; João Oliveira

    2013-01-01

    Overview of the Book: The textbook comprises six sections which together represent a comprehensive insight into management accounting - its technical attributes, changeable wider context, and the multiple roles of management accountants. The sections cover: (1) an introduction to management accounting, (2) how organizations account for their costs, (3) the importance of tools and techniques which assist organizational planning and control, (4) the various dimensions of making business decisions...

  20. Accounting standards

    NARCIS (Netherlands)

    Stellinga, B.; Mügge, D.

    2014-01-01

    The European and global regulation of accounting standards have witnessed remarkable changes over the past twenty years. In the early 1990s, EU accounting practices were fragmented along national lines and US accounting standards were the de facto global standards. Since 2005, all EU listed

  1. Accounting outsourcing

    OpenAIRE

    Linhartová, Lucie

    2012-01-01

    This thesis gives a comprehensive view of accounting outsourcing, covering the outsourcing process from its beginning (setting the conditions of collaboration, drawing up the contract), through the collaboration itself, to its possible ending. The work defines outsourcing and indicates the main advantages, disadvantages and arguments for its use. The main focus of the thesis is the practical side of accounting outsourcing and the provision of high-quality accounting services.

  2. DC cancellation as a method of generating a t2-response and of solving the radial position error in a concentric free-falling two-sphere equivalence-principle experiment in a drag-free satellite

    International Nuclear Information System (INIS)

    Lange, Benjamin

    2010-01-01

    This paper presents a new method for doing a free-fall equivalence-principle (EP) experiment in a satellite at ambient temperature which solves two problems that have previously blocked this approach. By using large masses to change the gravity gradient at the proof masses, the orbit dynamics of a drag-free satellite may be changed in such a way that the experiment can mimic a free-fall experiment in a constant gravitational field on the earth. An experiment using a sphere surrounded by a spherical shell both completely unsupported and free falling has previously been impractical because (1) it is not possible to distinguish between a small EP violation and a slight difference in the semi-major axes of the orbits of the two proof masses and (2) the position difference in the orbit due to an EP violation only grows as t whereas the largest disturbance grows as t^(3/2). Furthermore, it has not been known how to independently measure the positions of a shell and a solid sphere with sufficient accuracy. The measurement problem can be solved by using a two-color transcollimator (see the main text), and since the radial-position-error and t-response problems arise from the earth's gravity gradient and not from its gravity field, one solution is to modify the earth's gravity gradient with local masses fixed in the satellite. Since the gravity gradient at the surface of a sphere, for example, depends only on its density, the gravity gradients of laboratory masses and of the earth, unlike their fields, are of the same order of magnitude. In a drag-free satellite spinning perpendicular to the orbit plane, two fixed spherical masses whose connecting line parallels the satellite spin axis can generate a dc gravity gradient at test masses located between them which cancels the combined gravity gradient of the earth and differential centrifugal force.
    With perfect cancellation, the position-error problem vanishes and the response grows as t^2 along a line which always points toward

  3. Accounting for Life-Course Exposures in Epigenetic Biomarker Association Studies: Early Life Socioeconomic Position, Candidate Gene DNA Methylation, and Adult Cardiometabolic Risk.

    Science.gov (United States)

    Huang, Jonathan Y; Gavin, Amelia R; Richardson, Thomas S; Rowhani-Rahbar, Ali; Siscovick, David S; Hochner, Hagit; Friedlander, Yechiel; Enquobahrie, Daniel A

    2016-10-01

    Recent studies suggest that epigenetic programming may mediate the relationship between early life environment, including parental socioeconomic position, and adult cardiometabolic health. However, interpreting associations between early environment and adult DNA methylation may be difficult because of time-dependent confounding by life-course exposures. Among 613 adult women (mean age = 32 years) of the Jerusalem Perinatal Study Family Follow-up (2007-2009), we investigated associations between early life socioeconomic position (paternal occupation and parental education) and mean adult DNA methylation at 5 frequently studied cardiometabolic and stress-response genes (ABCA1, INS-IGF2, LEP, HSD11B2, and NR3C1). We used multivariable linear regression and marginal structural models to estimate associations under 2 causal structures for life-course exposures and timing of methylation measurement. We also examined whether methylation was associated with adult cardiometabolic phenotype. Higher maternal education was consistently associated with higher HSD11B2 methylation (e.g., 0.5%-point higher in 9-12 years vs. ≤8 years, 95% confidence interval: 0.1, 0.8). Higher HSD11B2 methylation was also associated with lower adult weight and total and low-density lipoprotein cholesterol. We found that associations with early life socioeconomic position measures were insensitive to different causal assumptions; however, exploratory analysis did not find evidence for a mediating role of methylation in socioeconomic position-cardiometabolic risk associations. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. COMPUTER-ASSISTED ACCOUNTING

    Directory of Open Access Journals (Sweden)

    SORIN-CIPRIAN TEIUŞAN

    2009-01-01

    Full Text Available What is computer-assisted accounting? Where is the place, and what is the role, of the computer in financial-accounting activity? What are the position and importance of the computer in the accountant's activity? All these are questions that require scientific research in order to find the answers. The paper approaches the issue of the support the computer grants the accountant in organizing and managing the accounting activity. Starting from the notions of accounting and computer, the concept of computer-assisted accounting is introduced; it has a general character and refers to accounting performed with the help of the computer, or to using the computer to automate the procedures performed by the person doing the accounting activity, and it is used to define the computer applications of the accounting activity. The arguments for using the computer to assist accounting concern the informatization of accounting, the automation of financial-accounting activities, and the endowment of contemporary accounting with modern technology.

  5. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to be improved by employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater than unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)

  6. The Errors of Our Ways

    Science.gov (United States)

    Kane, Michael

    2011-01-01

    Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that we need some way of accounting for the variability in observed scores over these conditions of…

  7. Spacecraft and propulsion technician error

    Science.gov (United States)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  8. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^(-(dn-1)) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  9. Interpreting the results of the Semmes-Weinstein monofilament test: accounting for false-positive answers in the international consensus on the diabetic foot protocol by a new model.

    Science.gov (United States)

    Slater, Robert A; Koren, Shlomit; Ramot, Yoram; Buchs, Andreas; Rapoport, Micha J

    2014-01-01

    The Semmes-Weinstein monofilament is the most widely used test to diagnose the loss of protective sensation. The commonly used protocol of the International Consensus on the Diabetic Foot includes a 'sham' application that allows for false-positive answers. We sought to study the heretofore unexamined significance of false-positive answers. Forty-five patients with diabetes and a history of pedal ulceration (Group I) and 81 patients with diabetes but no history of ulceration (Group II) were studied. The three original sites of the International Consensus on the Diabetic Foot at the hallux, 1st metatarsal and 5th metatarsal areas were used. At each location, the test was performed three times: 2 actual and 1 "sham" applications. Scores were graded from 0 to 3 based upon correct responses. Determination of loss of protective sensation was performed with and without calculating a false-positive answer as a minus 1 score. False-positive responses were found in a significant percentage of patients with and without history of ulceration. Introducing false-positive results as minus 1 into the test outcome significantly increased the number of patients diagnosed with loss of protective sensation in both groups. False-positive answers can significantly affect Semmes-Weinstein monofilament test results and the diagnosis of LOPS. A model that accounts for false-positive answers is offered. Copyright © 2013 John Wiley & Sons, Ltd.
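
    A hypothetical encoding of the scoring rule described above: each site receives two actual monofilament applications and one sham, is graded from 0 to 3 on correct responses, and, in the proposed model, a response to the sham (a false positive) counts as minus 1. The diagnostic thresholds for loss of protective sensation are not reproduced here:

```python
# Per-site score for the Semmes-Weinstein protocol with the
# false-positive-as-minus-1 model (illustrative encoding only).
def site_score(felt_first, felt_second, responded_to_sham):
    score = int(felt_first) + int(felt_second)   # 0..2 correct responses
    if responded_to_sham:
        score -= 1       # false positive: minus 1 under the new model
    else:
        score += 1       # correctly reported nothing on the sham
    return score         # ranges from -1 to 3
```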

  10. Non-synonymous FGD3 Variant as Positional Candidate for Disproportional Tall Stature Accounting for a Carcass Weight QTL (CW-3 and Skeletal Dysplasia in Japanese Black Cattle.

    Directory of Open Access Journals (Sweden)

    Akiko Takasuga

    2015-08-01

    Full Text Available Recessive skeletal dysplasia, characterized by joint- and/or hip bone-enlargement, was mapped within the critical region for a major quantitative trait locus (QTL) influencing carcass weight, previously named CW-3, in Japanese Black cattle. The risk allele was on the same chromosome as the Q allele that increases carcass weight. Phenotypic characterization revealed that the risk allele causes disproportional tall stature and bone size that increases carcass weight in heterozygous individuals but causes disproportionately narrow chest width in homozygotes. A non-synonymous variant of FGD3 was identified as a positional candidate quantitative trait nucleotide (QTN), and the corresponding mutant protein showed reduced activity as a guanine nucleotide exchange factor for Cdc42. FGD3 is expressed in the growth plate cartilage of bovine and mouse femurs. Thus, loss of FGD3 activity may lead to subsequent loss of Cdc42 function. This would be consistent with the columnar disorganization of proliferating chondrocytes in chondrocyte-specific inactivated Cdc42 mutant mice. This is the first report showing association of FGD3 with skeletal dysplasia.

  11. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  12. Analysis of the asset position of the Hungarian pig farming sector based on the data of the Farm Accountancy Data Network (FADN

    Directory of Open Access Journals (Sweden)

    Ildikó ÁBEL

    2017-03-01

    Full Text Available The goal of the study was to examine the Hungarian pig sector, with particular attention paid to the available assets, the composition of non-current assets, the depreciation value, the value of gross and net investments, and the value of various supports. It was found that the position of individual farms was less favorable; only from 2012 did the value of their investments exceed the value of depreciation, and until then these investments did not result in farm development. Corporate farms, on the other hand, were able to increase their investments, partly because they were more successful in utilizing the various support measures. Although individual farms had an increased value of investment in the last examined year, the statement above remains valid. Companies characteristically invested in high-value fixed assets, particularly real estate, while individual farms preferred intermediate assets, particularly machinery and breeding stock. The results also show that farms keeping fewer pigs (below 50 livestock units) chose to increase the size of their breeding stock while reducing their real estate and machinery investment. In the case of medium-size pig farms (50-150 livestock units) the situation was more diverse: in 2010 the biggest investment activity occurred in increasing the size of the breeding stock, in 2011 in real estate, and from 2012 machinery investment had the biggest value. Farms with more than 150 livestock units purchased mainly breeding stock in the first two years and invested in real estate from 2012. The small and medium-size pig farms realized negative net investment, indicating decreasing productive capacity and falling behind in terms of development; these farms were not able to replace their depreciated assets. In terms of development, only the big pig farms were successful, having sufficient resources and successfully partaking in the various support

  13. Preliminary evaluation of an algorithm to minimize the power error selection of an aspheric intraocular lens by optimizing the estimation of the corneal power and the effective lens position

    Directory of Open Access Journals (Sweden)

    David P. Piñero

    2016-06-01

    Full Text Available AIM: To evaluate the refractive predictability achieved with an aspheric intraocular lens (IOL) and to develop a preliminary optimized algorithm for the calculation of its power (PIOL). METHODS: This study included 65 eyes implanted with the aspheric IOL LENTIS L-313 (Oculentis GmbH) that were divided into 2 groups: 12 eyes (8 patients) with PIOL ≥ 23.0 D (group A), and 53 eyes (35 patients) with PIOL below 23.0 D (group B). An adjusted IOL power (PIOLadj) was calculated considering a variable refractive index for corneal power estimation, the refractive outcome obtained, and an adjusted effective lens position (ELPadj) according to age and anatomical factors. RESULTS: Postoperative spherical equivalent ranged from -0.75 to +0.75 D and from -1.38 to +0.75 D in groups A and B, respectively. No statistically significant differences were found in groups A (P=0.64) and B (P=0.82) between PIOLadj and the IOL power implanted (PIOLReal). The Bland and Altman analysis showed ranges of agreement between PIOLadj and PIOLReal of +1.11 to -0.96 D and +1.14 to -1.18 D in groups A and B, respectively. Clinically and statistically significant differences were found between PIOLadj and the PIOL obtained with the Hoffer Q and Holladay I formulas. CONCLUSION: The refractive predictability of cataract surgery with implantation of an aspheric IOL can be optimized using paraxial optics combined with linear algorithms to minimize the error associated with the estimation of corneal power and ELP.

  14. Error Parsing: An alternative method of implementing social judgment theory

    OpenAIRE

    Crystal C. Hall; Daniel M. Oppenheimer

    2015-01-01

    We present a novel method of judgment analysis called Error Parsing, based upon an alternative method of implementing Social Judgment Theory (SJT). SJT and Error Parsing both posit the same three components of error in human judgment: error due to noise, error due to cue weighting, and error due to inconsistency. In that sense, the broad theory and framework are the same. However, SJT and Error Parsing were developed to answer different questions, and thus use different m...

  15. Multi-isocenter stereotactic radiotherapy: implications for target dose distributions of systematic and random localization errors

    International Nuclear Information System (INIS)

    Ebert, M.A.; Zavgorodni, S.F.; Kendrick, L.A.; Weston, S.; Harper, C.S.

    2001-01-01

    Purpose: This investigation examined the effect of alignment and localization errors on dose distributions in stereotactic radiotherapy (SRT) with arced circular fields. In particular, it was desired to determine the effect of systematic and random localization errors on multi-isocenter treatments. Methods and Materials: A research version of the FastPlan system from Surgical Navigation Technologies was used to generate a series of SRT plans of varying complexity. These plans were used to examine the influence of random setup errors by recalculating dose distributions with successive setup errors convolved into the off-axis ratio data tables used in the dose calculation. The influence of systematic errors was investigated by displacing isocenters from their planned positions. Results: For single-isocenter plans, it is found that the influences of setup error are strongly dependent on the size of the target volume, with minimum doses decreasing most significantly with increasing random and systematic alignment error. For multi-isocenter plans, similar variations in target dose are encountered, with this result benefiting from the conventional method of prescribing to a lower isodose value for multi-isocenter treatments relative to single-isocenter treatments. Conclusions: It is recommended that the systematic errors associated with target localization in SRT be tracked via a thorough quality assurance program, and that random setup errors be minimized by use of a sufficiently robust relocation system. These errors should also be accounted for by incorporating corrections into the treatment planning algorithm or, alternatively, by inclusion of sufficient margins in target definition
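The record's method of folding random setup errors into the dose calculation can be illustrated generically as convolving a 1-D dose profile with a Gaussian whose width is the random-error standard deviation. This is a simplified sketch, not the FastPlan implementation; the profile and grid are hypothetical:

```python
import numpy as np

def blur_profile(profile, grid_mm, sigma_mm):
    """Convolve a 1-D dose profile with a Gaussian random setup error."""
    dx = grid_mm[1] - grid_mm[0]
    # Gaussian kernel sampled on the same spacing, truncated at 4 sigma, sum = 1
    half = int(np.ceil(4 * sigma_mm / dx))
    x = np.arange(-half, half + 1) * dx
    kernel = np.exp(-0.5 * (x / sigma_mm) ** 2)
    kernel /= kernel.sum()
    return np.convolve(profile, kernel, mode="same")

grid = np.linspace(-30, 30, 121)               # position in mm
profile = (np.abs(grid) <= 10).astype(float)   # idealized flat field, half-width 10 mm
blurred = blur_profile(profile, grid, sigma_mm=2.0)
```

The blurring broadens the penumbra while leaving the central dose nearly unchanged, which is why the record finds the minimum target dose, not the maximum, most sensitive to alignment error, and why the effect grows as the target shrinks relative to the blur width.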

  16. Tritium accountancy

    International Nuclear Information System (INIS)

    Avenhaus, R.; Spannagel, G.

    1995-01-01

    Conventional accountancy means that for a given material balance area and a given interval of time the tritium balance is established so that at the end of that interval of time the book inventory is compared with the measured inventory. In this way, an optimal effectiveness of accountancy is achieved. However, there are still further objectives of accountancy, namely the timely detection of anomalies as well as the localization of anomalies in a major system. It can be shown that each of these objectives can be optimized only at the expense of the others. Recently, Near-Real-Time Accountancy procedures have been studied; their methodological background as well as their merits will be discussed. (orig.)
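The balance test described here (book inventory compared with measured inventory at the end of the interval) reduces to computing the material unaccounted for (MUF). A minimal sketch with hypothetical tritium amounts:

```python
def material_unaccounted_for(begin_inventory, receipts, shipments,
                             end_inventory_measured):
    """MUF = book ending inventory minus measured ending inventory."""
    book = begin_inventory + receipts - shipments
    return book - end_inventory_measured

# Hypothetical amounts in grams (illustration only)
muf = material_unaccounted_for(begin_inventory=100.0, receipts=20.0,
                               shipments=15.0, end_inventory_measured=104.2)
```

A MUF significantly different from zero, relative to measurement uncertainty, signals an anomaly; the Near-Real-Time Accountancy procedures the record mentions apply such tests repeatedly over short intervals to trade detection sensitivity against timeliness.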

  17. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  18. Accounting assessment

    Directory of Open Access Journals (Sweden)

    Kafka S.М.

    2017-03-01

    Full Text Available The proper valuation of accounting objects essentially influences the reliability of assessing a company's financial situation, so the problem of accounting estimation is quite relevant. The works of domestic and foreign scholars on the valuation of accounting objects, together with the regulatory and legal acts of Ukraine governing accounting and financial reporting, form the methodological basis for the research. The author uses theoretical methods of cognition (abstraction and generalization, analysis and synthesis, induction and deduction) and other methods producing conceptual knowledge for the synthesis of theoretical and methodological principles in the valuation of assets, liabilities and equity in accounting. The tabular presentation and information comparison methods are used for the analytical research. The article considers modern approaches to the valuation of accounting objects and financial statement items. The expedience of keeping records at historical cost is substantiated, while financial statement items are to be presented according to their valuation at the reporting date. In connection with valuation, the depreciation of fixed assets is considered as a process of systematically returning into circulation the funds advanced earlier for the purchase (production, improvement) of fixed assets and intangible assets, by including the amount of wear in production costs. It is therefore proposed to amortize only the actual costs incurred, i.e. not to depreciate fixed assets received free of charge or surpluses from revaluations of different kinds.

  19. An adaptive orienting theory of error processing.

    Science.gov (United States)

    Wessel, Jan R

    2018-03-01

    The ability to detect and correct action errors is paramount to safe and efficient goal-directed behaviors. Existing work on the neural underpinnings of error processing and post-error behavioral adaptations has led to the development of several mechanistic theories of error processing. These theories can be roughly grouped into adaptive and maladaptive theories. While adaptive theories propose that errors trigger a cascade of processes that will result in improved behavior after error commission, maladaptive theories hold that error commission momentarily impairs behavior. Neither group of theories can account for all available data, as different empirical studies find both impaired and improved post-error behavior. This article attempts a synthesis between the predictions made by prominent adaptive and maladaptive theories. Specifically, it is proposed that errors invoke a nonspecific cascade of processing that will rapidly interrupt and inhibit ongoing behavior and cognition, as well as orient attention toward the source of the error. It is proposed that this cascade follows all unexpected action outcomes, not just errors. In the case of errors, this cascade is followed by error-specific, controlled processing, which is specifically aimed at (re)tuning the existing task set. This theory combines existing predictions from maladaptive orienting and bottleneck theories with specific neural mechanisms from the wider field of cognitive control, including from error-specific theories of adaptive post-error processing. The article aims to describe the proposed framework and its implications for post-error slowing and post-error accuracy, propose mechanistic neural circuitry for post-error processing, and derive specific hypotheses for future empirical investigations. © 2017 Society for Psychophysiological Research.

  20. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  1. [Medical errors: inevitable but preventable].

    Science.gov (United States)

    Giard, R W

    2001-10-27

    Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention for and improvement of the organisational aspects of error are far more important than litigating the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.

  2. Error field considerations for BPX

    International Nuclear Information System (INIS)

    LaHaye, R.J.

    1992-01-01

    Irregularities in the position of poloidal and/or toroidal field coils in tokamaks produce resonant toroidal asymmetries in the vacuum magnetic fields. Otherwise stable tokamak discharges become non-linearly unstable to disruptive locked modes when subjected to low level error fields. Because of the field errors, magnetic islands are produced which would not otherwise occur in tearing-mode stable configurations; a concomitant reduction of the total confinement can result. Poloidal and toroidal asymmetries arise in the heat flux to the divertor target. In this paper, the field errors from perturbed BPX coils are used in a field line tracing code of the BPX equilibrium to study these deleterious effects. Limits on coil irregularities for device design and fabrication are computed along with possible correcting coils for reducing such field errors.

  3. "You Worry, 'cause You Want to Give a Reasonable Account of Yourself": Gender, Identity Management, and the Discursive Positioning of "Risk" in Men's and Women's Talk About Heterosexual Casual Sex.

    Science.gov (United States)

    Farvid, Panteá; Braun, Virginia

    2018-07-01

    Heterosexual casual sex is routinely depicted as a physically, socially, and psychologically "risky" practice. This is the case in media accounts, psychological research, and other academic work. In this article, we examine 15 men's and 15 women's talk about casual sex from a discursive psychological stance to achieve two objectives. Firstly, we confirm the categories of risk typically associated with casual sex but expand these to include a domain of risks related to (gendered) identities and representation. Men's talk of risk centered on concerns about sexual performance, whereas women's talk centered on keeping safe from violence and sexual coercion. The notion of a sexual reputation was also identified as a risk and again manifested differently for men and women. While women were concerned about being deemed promiscuous, men displayed concern about the quality of their sexual performance. Secondly, within this talk about risks of casual sex, the participants' identities were identified as "at risk" and requiring careful management within the interview context. This was demonstrated by instances of: keeping masculinity intact in accounts of no erection, negotiating a responsible subject position, and crafting agency in accounts of sexual coercion-in the participants' talk. We argue that casual sex, as situated within dominant discourses of gendered heterosexuality, is a fraught practice for both men and women and subject to the demands of identity representation within co-present interactions.

  4. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  5. Redundant measurements for controlling errors

    International Nuclear Information System (INIS)

    Ehinger, M.H.; Crawford, J.M.; Madeen, M.L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program
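The redundancy principle described above can be sketched as a simple consistency test: two independent measurements of the same quantity should agree within their combined uncertainty, and a larger discrepancy flags a measurement problem. The threshold factor and values below are illustrative assumptions, not BNFP procedure:

```python
import math

def redundancy_check(m1, sd1, m2, sd2, k=3.0):
    """Flag a measurement problem when two independent measurements of the
    same quantity differ by more than k combined standard deviations."""
    combined_sd = math.sqrt(sd1 ** 2 + sd2 ** 2)
    return abs(m1 - m2) > k * combined_sd

# Hypothetical process-control vs. accountability measurements
consistent = redundancy_check(10.00, 0.10, 10.20, 0.10)   # within tolerance
discrepant = redundancy_check(10.00, 0.10, 11.00, 0.10)   # flags a problem
```

Averaging the redundant measurements once they are shown consistent also reduces the random error of the combined result, which is the "significant increase in measurement sensitivity" the record reports.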

  6. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...
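The "building kit" idea, as described, sums a random contribution (a confidence interval for the mean) and a worst-case bound on the unknown systematic error. A minimal sketch under that reading, with hypothetical numbers:

```python
import math

def overall_uncertainty(sd_random, n, t_factor, bias_bound):
    """Overall uncertainty = confidence-interval half-width for the mean of n
    repeated readings, plus the worst-case bound on the systematic error."""
    random_part = t_factor * sd_random / math.sqrt(n)
    return random_part + bias_bound

# Hypothetical: 9 readings, Student t factor 2.31, systematic bound 0.05 units
u = overall_uncertainty(sd_random=0.12, n=9, t_factor=2.31, bias_bound=0.05)
```

Adding the systematic bound linearly, rather than in quadrature, is the distinctive choice here: an unknown but constant bias does not average out with repeated readings, so it is bounded rather than treated statistically.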

  7. Game Design Principles based on Human Error

    Directory of Open Access Journals (Sweden)

    Guilherme Zaffari

    2016-03-01

    Full Text Available This paper displays the result of the authors’ research regarding the incorporation of Human Error, through design principles, into video game design. In a general way, designers must consider Human Error factors throughout video game interface development; however, when related to its core design, adaptations are needed, since challenge is an important factor for fun, and from the perspective of Human Error, challenge can be considered a flaw in the system. The research utilized Human Error classifications, data triangulation via predictive human error analysis, and the expanded flow theory to allow the design of a set of principles in order to match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that the application of Human Error in game design has a positive effect on player experience, allowing the player to interact only with errors associated with the intended aesthetics of the game.

  8. Understanding human management of automation errors

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  9. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  10. AMERICAN ACCOUNTING

    Directory of Open Access Journals (Sweden)

    Mihaela Onica

    2005-01-01

    Full Text Available The International Accounting Standards already contribute to the generation of better and more easily comparable financial information at the international level, thus supporting a more effective allocation of investment resources in the world. Under these circumstances, a consistent application of the standards at the global level becomes necessary. The financial statements are part of the financial reporting process. A complete set of financial statements usually includes a balance sheet, a profit and loss account, a statement of changes in financial position (which can be presented in various ways, for example as a statement of cash flows or of funds flows), as well as those notes, explanatory statements and materials which are part of the financial statements.

  11. Payroll accounting

    OpenAIRE

    Hodžová, Markéta

    2009-01-01

    Abstract The main topic of my thesis is payroll accounting. The work summarizes most of the areas related to this topic and the knowledge necessary for calculating the final determination of wages. The thesis begins by citing specific chapters from the Labour Code which explain the facts about the start, changes and termination of the employment contract, then gives a more detailed description of the arrangements performed outside of the employment contract, and then working hours and mini...

  12. Human error mechanisms in complex work environments

    International Nuclear Information System (INIS)

    Rasmussen, J.

    1988-01-01

    Human error taxonomies have been developed from analysis of industrial incident reports as well as from psychological experiments. In this paper the results of the two approaches are reviewed and compared. It is found, in both cases, that a fairly small number of basic psychological mechanisms will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations. The implications for system safety are briefly mentioned, together with the implications for system design. (author)

  13. Human error mechanisms in complex work environments

    International Nuclear Information System (INIS)

    Rasmussen, Jens; Danmarks Tekniske Hoejskole, Copenhagen)

    1988-01-01

    Human error taxonomies have been developed from analysis of industrial incident reports as well as from psychological experiments. In this paper the results of the two approaches are reviewed and compared. It is found, in both cases, that a fairly small number of basic psychological mechanisms will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations. The implications for system safety are briefly mentioned, together with the implications for system design. (author)

  14. Medical Error and Moral Luck.

    Science.gov (United States)

    Hubbeling, Dieneke

    2016-09-01

    This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome.

  15. New approaches towards information materiality in accounting

    OpenAIRE

    Карзаева, Наталия Николаевна

    2015-01-01

    The paper theoretically substantiates a method for calculating the level of the materiality factor that takes into account the interests of the persons making decisions on the basis of financial indicators, together with formulas for calculating the critical level of error above which the data of the accounting reports cannot be accepted as reliable and appropriate corrections must be introduced into the accounting reports. The materials on the procedure for making corrections in accounting records and accounting repor...

  16. Counting OCR errors in typeset text

    Science.gov (United States)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and they are not strictly comparable due to larger variances in the counts than would be expected by the sampling variance. Naturally, since OCR accuracy is based on a ratio of the number of OCR errors over the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the number of errors found by different methods are significantly different. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
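The dynamic-programming error count the record refers to is the Levenshtein (edit) distance: the minimum number of insertions, deletions and substitutions turning the OCR output into the ground truth. A minimal unweighted sketch (the paper's point is precisely that real implementations vary in weights and suspect-marker handling, which this sketch omits):

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions and substitutions turning a into b."""
    prev = list(range(len(b) + 1))          # distances for the empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion from a
                           cur[j - 1] + 1,           # insertion into a
                           prev[j - 1] + (ca != cb)))  # substitution (0 if equal)
        prev = cur
    return prev[-1]

# Hypothetical OCR confusion: "ni" misread as "m"
errors = levenshtein("recognition", "recogmtion")
```

Dividing such a count by the length of the ground-truth text gives the error rate whose variability across accounting methods the record analyzes.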

  17. Infrastrukturel Accountability

    DEFF Research Database (Denmark)

    Ubbesen, Morten Bonde

    How does one credibly account for something as diffuse as an entire nation's greenhouse gas emissions? This dissertation investigates that question through an ethnographic study of how Denmark's greenhouse gas inventory is prepared, reported and controlled. The study draws on concepts and understandings from Science & Technology Studies, and with the notion of 'infrastructural accountability' it contributes new ways of understanding and thinking about the work by which highly specialized practices document and account for the quality of their work.

  18. Mental Accounting and Economic Behaviour

    NARCIS (Netherlands)

    Antonides, Gerrit; Ranyard, Rob

    2017-01-01

    This chapter first presents an overview of research into mental accounting and its effects on economic behaviour. It then considers mental accounts posited to broadly categorize financial resources across the life-cycle, and those constructed for specific transactions. Mental accounting has several

  19. Socioeconomic Position Across the Life Course and Cognitive Ability Later in Life

    DEFF Research Database (Denmark)

    Foverskov, Else; Mortensen, Erik Lykke; Holm, Anders

    2017-01-01

    OBJECTIVE: Investigate direct and indirect associations between markers of socioeconomic position (SEP) across the life course and midlife cognitive ability while addressing methodological limitations in prior work. METHOD: Longitudinal data from the Danish Metropolit cohort of men born in 1953 (N...), ... of accounting for childhood ability and measurement error. RESULTS: Associations between adult SEP measures and midlife ability decreased significantly when adjusting for childhood ability and measurement error. The association between childhood and midlife ability was by far the strongest. DISCUSSION: The impact of adult SEP on later life ability may be exaggerated when not accounting for the stability of individual differences in cognitive ability and measurement error in test scores.
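Correcting an observed association for measurement error in test scores is classically done by disattenuation, dividing by the square root of the product of the measures' reliabilities. This is a generic illustration of the idea, not the study's actual model; the correlation and reliabilities are hypothetical:

```python
import math

def disattenuate(r_observed, rel_x, rel_y):
    """Correct an observed correlation for measurement error, given the
    reliability of each measure (Spearman's disattenuation formula)."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Hypothetical: observed r = 0.45, test reliabilities 0.80 and 0.90
r_true = disattenuate(0.45, 0.80, 0.90)
```

Because measurement error attenuates observed associations, ignoring it understates the stable childhood-to-midlife link and can thereby overstate the apparent independent contribution of adult SEP, which is the record's methodological point.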

  20. Prescription Errors in Psychiatry

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

    clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. .... participation in ward rounds and adverse drug.

  1. Educational Accounting Procedures.

    Science.gov (United States)

    Tidwell, Sam B.

    This chapter of "Principles of School Business Management" reviews the functions, procedures, and reports with which school business officials must be familiar in order to interpret and make decisions regarding the school district's financial position. Among the accounting functions discussed are financial management, internal auditing,…

  2. TRUE AND FAIR VIEW – THE SUPREM TRUTH IN ACCOUNTING?

    Directory of Open Access Journals (Sweden)

    Ana-Maria MARCULESCU

    2013-06-01

    Full Text Available Acting in a social and economic environment in continuous movement and transformation, accounting is confronted with new situations as it attempts to serve all the functions and users of the information it generates, making it an important arbitration instrument in the game of those involved in the business world. The main characteristic of accounting is the provision of useful retrodictive and predictive information for efficient and economic decision making regarding an economic entity's financial position and accounting performance. Under these circumstances, the information must not contain significant errors and must not be biased, so that users can be confident that it accurately represents what it sets out to represent, or what it can reasonably be expected to represent. The true and fair view, seen as an ideal to which every professional accountant should aim, seems ever more difficult to achieve under current conditions, in which fiscal pressure is increasingly unbearable and top management is increasingly willing to make compromises that are sometimes at the limit of legality. In theory, it is well known that accounting records and reports must provide a clear, honest, fair and full image of the assets, financial position and results of the entity. In practice, contrary to ethical principles, more and more "servants" of the accounting profession are either willing, or forced, to submit distorted accounting information that gives the impression of calm to a disastrous situation.

  3. Energy accountancy

    International Nuclear Information System (INIS)

    Boer, G.A. de.

    1981-01-01

    G.A. de Boer reacts to recently published criticism of his contribution to a report entitled 'Commentaar op het boek 'Tussen Kernenergie en Kolen. Een Analyse' van ir. J.W. Storm van Leeuwen' (Commentary on the book 'Nuclear Energy versus Coal. An Analysis by ir. J.W. Storm van Leeuwen), published by the Dutch Ministry of Economic Affairs. The contribution (Appendix B) deals with energy analyses. He justifies his arguments for using energy accountancy for assessing different methods of producing electricity, and explains that it is simply an alternative to purely economic methods. The energy conversion yield (ratio of energy produced to energy required) is tabulated for different sources. De Boer emphasises that his article purposely discusses among other things, definitions, forms of energy, the limits of the systems, the conversion of money into energy and the definition of the energy yield at length, in order to prevent misunderstandings. (C.F.)

  4. Design Accountability

    DEFF Research Database (Denmark)

    Koskinen, Ilpo; Krogh, Peter

    2015-01-01

    When design research builds on design practice, it may contribute to both theory and practice of design in ways richer than research that treats design as a topic. Such research, however, faces several tensions that it has to negotiate successfully in order not to lose its character as research. This paper looks at constructive design research, which takes the entanglement of theory and practice as its hallmark, and uses it as a test case in exploring how design researchers can work with theory, methodology, and practice without losing their identity as design researchers. The crux of practice-based design research is that where classical research is interested in singling out a particular aspect and exploring it in depth, design practice is characterized by balancing numerous concerns in a heterogeneous and occasionally paradoxical product. It is on this basis the notion of design accountability...

  5. Large errors and severe conditions

    CERN Document Server

    Smith, D L; Van Wormer, L A

    2002-01-01

    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probabil...

  6. Natural Resources Accounting and Sustainable Development: The ...

    African Journals Online (AJOL)

    Natural Resources Accounting and Sustainable Development: The Challenge to Economics and Accounting Profession. ... African Research Review ... The approach used to achieve this objective is to identify the present position, limitations and challenges for the economics and accounting professions.

  7. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  8. Incorporating measurement error in n=1 psychological autoregressive modeling

    NARCIS (Netherlands)

    Schuurman, Noemi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive
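The bias the abstract refers to can be demonstrated with a small simulation (illustrative only, not the authors' code): adding independent measurement noise to an AR(1) process attenuates the estimated lag-1 autocorrelation toward zero.

```python
import random

def simulate_ar1_with_noise(n, phi, noise_sd, seed=1):
    """Simulate x_t = phi * x_{t-1} + e_t, then add measurement noise."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, 1.0))
    y = [v + rng.gauss(0.0, noise_sd) for v in x]  # observed = true + error
    return x, y

def lag1_autocorr(s):
    """Sample lag-1 autocorrelation of a series."""
    m = sum(s) / len(s)
    num = sum((s[i] - m) * (s[i + 1] - m) for i in range(len(s) - 1))
    den = sum((v - m) ** 2 for v in s)
    return num / den

true_x, observed_y = simulate_ar1_with_noise(20000, phi=0.6, noise_sd=1.0)
print(lag1_autocorr(true_x))      # close to the true phi of 0.6
print(lag1_autocorr(observed_y))  # noticeably smaller: the attenuation bias
```

The attenuation factor is var(x) / (var(x) + noise variance), which is why ignoring measurement error systematically understates the autoregressive effect.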

  9. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Full Text Available Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005, General requirements for the competence of testing and calibration laboratories, are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In the application of forensic principles and methods, Rule 702 of the Federal Rules of Evidence mandates that judges consider factors such as peer review to ensure the reliability of expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher possibility of unfair decision-making, they should receive more attention than false-negative errors.
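The two statistical error types the study distinguishes can be made concrete with a short sketch (hypothetical counts, not data from the study): the false-positive rate is the fraction of true non-matches wrongly declared matches, and the false-negative rate is the fraction of true matches that are missed.

```python
def error_rates(tp, fp, tn, fn):
    """False-positive and false-negative rates from test outcome counts."""
    fpr = fp / (fp + tn)  # non-matching samples wrongly declared a match
    fnr = fn / (fn + tp)  # genuine matches that the test missed
    return fpr, fnr

# Hypothetical forensic test results: 90 hits, 5 false alarms,
# 95 correct rejections, 10 misses
fpr, fnr = error_rates(tp=90, fp=5, tn=95, fn=10)
print(fpr)  # 0.05
print(fnr)  # 0.1
```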

  10. Documentation of Accounting Records in Light of Legislative Innovations

    Directory of Open Access Journals (Sweden)

    K. V. BEZVERKHIY

    2017-05-01

    Full Text Available Legislative reforms in accounting aim to simplify accounting records and the compilation of financial reports by business entities, thus improving the position of Ukraine in the global Doing Business ranking. This simplification is implied in the changes to the Regulation on Documentation of Accounting Records, entered into force by the Resolution of the Ukrainian Ministry of Finance. The objective of the study is to analyze the legislative innovations involved. A review of the changes in documentation of accounting records is made, followed by a comparative analysis of the changes in the Regulation on Documentation of Accounting Records by section: (1) General; (2) Primary documents; (3) Accounting records; (4) Correction of errors in primary documents and accounting records; (5) Organization of document circulation; (6) Storage of documents. Methods of analysis and synthesis are used to identify the differences between the editions of the Regulation on Documentation of Accounting Records. The result of the study has theoretical and practical value for the domestic business enterprise sector.

  11. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    Science.gov (United States)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors, with parameters conditioned on climatic conditions. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. In the last model, positive and negative errors were modeled separately: the errors were first NQT-transformed, and then a model was constructed in which the mean values were conditioned on climate, forecasted inflow and the previous day's error. To test the three models we applied three criteria: we wanted (a) the median values to be close to the observed values; (b) the forecast intervals to be narrow; (c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Models 1 and 2 gave similar results, and their main drawback is that the distributions are not correct: the 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated and larger intervals under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended; if the whole distribution is of interest, Model 3 is recommended.
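The structure of the first model can be sketched in a few lines (a minimal illustration with made-up inflow values, not the Langvatn data): transform observed and forecasted inflows with the Box-Cox transformation, form the errors, and fit a first-order autoregressive coefficient to them.

```python
import math

def box_cox(x, lam):
    """Box-Cox transformation; reduces to log when lambda is 0."""
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def fit_ar1(series):
    """Least-squares estimate of the AR(1) coefficient."""
    num = sum(series[i] * series[i + 1] for i in range(len(series) - 1))
    den = sum(e * e for e in series[:-1])
    return num / den

# Hypothetical daily inflows (m^3/s): observed vs. forecasted
obs = [12.0, 15.0, 11.0, 18.0, 16.0, 14.0]
fcst = [11.0, 14.5, 12.0, 16.0, 17.0, 13.0]
lam = 0.3  # assumed Box-Cox parameter

errors = [box_cox(o, lam) - box_cox(f, lam) for o, f in zip(obs, fcst)]
phi = fit_ar1(errors)
tomorrow = phi * errors[-1]  # one-step-ahead forecast-error prediction
print(phi, tomorrow)
```

In the paper's first model the AR(1) parameters are additionally conditioned on climatic conditions; this sketch fits a single unconditional coefficient.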

  12. MATERIAL CONTROL ACCOUNTING INMM

    Energy Technology Data Exchange (ETDEWEB)

    Hasty, T.

    2009-06-14

    Since 1996, the Mining and Chemical Combine (MCC - formerly known as K-26) and the United States Department of Energy (DOE) have been cooperating under the cooperative Nuclear Material Protection, Control and Accounting (MPC&A) Program between the Russian Federation and U.S. Governments. Since MCC continues to operate a reactor for steam and electricity production for the site and the city of Zheleznogorsk, which results in the production of weapons-grade plutonium, one of the goals of the MPC&A program is to support implementation of an expanded, comprehensive nuclear material control and accounting (MC&A) program. To date MCC has completed the upgrades identified in the initial gap analysis and documented in the site MC&A Plan, and is implementing additional upgrades identified during an update to the gap analysis. The scope of these upgrades includes implementation of the MCC organization structure relating to MC&A, establishing a material balance area structure for special nuclear material (SNM) storage and bulk processing areas, and material control functions including SNM portal monitors at target locations. Material accounting function upgrades include enhancements in the conduct of physical inventories, limit-of-error inventory difference procedure enhancements, implementation of a basic computerized accounting system for four SNM storage areas, implementation of measurement equipment for improved accountability reporting, and both new and revised site-level MC&A procedures. This paper will discuss the implementation of MC&A upgrades at MCC based on the requirements established in the comprehensive MC&A plan developed by the Mining and Chemical Combine as part of the MPC&A Program.

  13. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
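The contrast between the two learning rules can be sketched in a few lines (an illustrative delta-rule implementation, not the authors' code). Under TER every cue present on a trial is updated by the shared discrepancy between the outcome and the summed prediction; under LER each cue is compared with the outcome on its own.

```python
def train(weights, trials, alpha=0.3, rule="TER"):
    """Trial-by-trial delta-rule updates under total- or local-error reduction."""
    w = dict(weights)
    for cues, outcome in trials:
        if rule == "TER":
            error = outcome - sum(w[c] for c in cues)  # one shared error term
            for c in cues:
                w[c] += alpha * error
        else:  # LER
            for c in cues:
                w[c] += alpha * (outcome - w[c])       # one error term per cue
    return w

trials = [(("A", "B"), 1.0)] * 50  # compound AB always paired with the outcome
ter = train({"A": 0.0, "B": 0.0}, trials, rule="TER")
ler = train({"A": 0.0, "B": 0.0}, trials, rule="LER")
print(ter)  # the two cues share the outcome: about 0.5 each
print(ler)  # each cue separately approaches 1.0
```

The divergence after compound training (cue competition under TER, none under LER) is exactly the kind of prediction the modeling comparison in the paper exploits.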

  14. [Analysis of intrusion errors in free recall].

    Science.gov (United States)

    Diesfeldt, H F A

    2017-06-01

    Extra-list intrusion errors during five trials of the eight-word list-learning task of the Amsterdam Dementia Screening Test (ADST) were investigated in 823 consecutive psychogeriatric patients (87.1% suffering from major neurocognitive disorder). Almost half of the participants (45.9%) produced one or more intrusion errors on the verbal recall test. Correct responses were lower when subjects made intrusion errors, but learning slopes did not differ between subjects who committed intrusion errors and those who did not. Bivariate regression analyses revealed that participants who committed intrusion errors were more deficient on measures of eight-word recognition memory, delayed visual recognition and tests of executive control (the Behavioral Dyscontrol Scale and the ADST-Graphical Sequences as measures of response inhibition). Using hierarchical multiple regression, only free recall and delayed visual recognition retained an independent effect in the association with intrusion errors, such that deficient scores on tests of episodic memory were sufficient to explain the occurrence of intrusion errors. Measures of inhibitory control did not add significantly to the explanation of intrusion errors in free recall, which makes insufficient strength of memory traces, rather than a primary deficit in inhibition, the preferred account of intrusion errors in free recall.

  15. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

    Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general, and in Neonatology in particular. We compared the results of the literature with our specific experience at the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  16. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Full Text Available Wireless and Internet video applications are inherently subject to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video, with compression and channel errors, is optimized.

  17. Teamwork and Clinical Error Reporting among Nurses in Korean Hospitals

    Directory of Open Access Journals (Sweden)

    Jee-In Hwang, PhD

    2015-03-01

    Conclusions: Teamwork was rated as moderate and was positively associated with nurses' error reporting performance. Hospital executives and nurse managers should make substantial efforts to enhance teamwork, which will contribute to encouraging the reporting of errors and improving patient safety.

  18. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    .... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...

  19. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors make a major contribution to the risks of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but these models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic effort. The paper gives a brief summary of research on human error and concludes with suggestions for further work. (orig.)

  20. Common patterns in 558 diagnostic radiology errors.

    Science.gov (United States)

    Donald, Jennifer J; Barnard, Stuart A

    2012-04-01

    As a Quality Improvement initiative our department has held regular discrepancy meetings since 2003. We performed a retrospective analysis of the cases presented and identified the most common pattern of error. A total of 558 cases were referred for discussion over 92 months, and errors were classified as perceptual or interpretative. The most common patterns of error for each imaging modality were analysed, and the misses were scored by consensus as subtle or non-subtle. Of 558 diagnostic errors, 447 (80%) were perceptual and 111 (20%) were interpretative errors. Plain radiography and computed tomography (CT) scans were the most frequent imaging modalities accounting for 246 (44%) and 241 (43%) of the total number of errors, respectively. In the plain radiography group 120 (49%) of the errors occurred in chest X-ray reports with perceptual miss of a lung nodule occurring in 40% of this subgroup. In the axial and appendicular skeleton missed fractures occurred most frequently, and metastatic bone disease was overlooked in 12 of 50 plain X-rays of the pelvis or spine. The majority of errors within the CT group were in reports of body scans with the commonest perceptual errors identified including 16 missed significant bone lesions, 14 cases of thromboembolic disease and 14 gastrointestinal tumours. Of the 558 errors, 312 (56%) were considered subtle and 246 (44%) non-subtle. Diagnostic errors are not uncommon and are most frequently perceptual in nature. Identification of the most common patterns of error has the potential to improve the quality of reporting by improving the search behaviour of radiologists. © 2012 The Authors. Journal of Medical Imaging and Radiation Oncology © 2012 The Royal Australian and New Zealand College of Radiologists.

  1. CHANGES IN PRIOR PERIOD ERRORS

    OpenAIRE

    Alina Pietraru; Dorina Luţă

    2007-01-01

    In 2007, Romania continues the gradual implementation of the International Financial Reporting Standards, which include the IFRS, the IAS and their Interpretations as approved by the European Union, translated and published in Romanian. The financial statements must achieve one major purpose: to provide information about the financial position, financial performance and changes in financial position of the entity. In this context, the accounting policy elaborated and assumed by the m...

  2. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Full Text Available The accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the configuration RTTTR (tilting head B-axis and rotary table A′ on the workpiece side) was set up taking into consideration rigid-body kinematics and homogeneous transformation matrices, in which 43 error components are included. These 43 error components can each reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of a workpiece is determined by the position of the cutting tool center point (TCP) relative to the workpiece: when the cutting tool deviates from its ideal position relative to the workpiece, a machining error results. The compensation process consists of detecting the present tool path, analyzing the geometric error of the RTTTR five-axis CNC machine tool, translating the current positions of components to compensated positions using the kinematic error model, converting the newly created components to new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.
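The role of homogeneous transformation matrices in such a model can be illustrated with a toy sketch (simplified to a single axis with invented error values; a full model chains all 43 components): the actual pose is the nominal pose composed with a small error transform, and the volumetric error is the resulting deviation at the TCP.

```python
def matmul(a, b):
    """Product of two 4x4 homogeneous transformation matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(dx, dy, dz):
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

def small_rotation(ex, ey, ez):
    """First-order (small-angle) rotational error transform."""
    return [[1, -ez, ey, 0], [ez, 1, -ex, 0], [-ey, ex, 1, 0], [0, 0, 0, 1]]

def apply(m, p):
    """Transform a 3D point by a homogeneous matrix."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))

# Nominal axis motion, composed with hypothetical geometric error terms
nominal = translation(100.0, 0.0, 50.0)
error = matmul(translation(0.01, -0.02, 0.005), small_rotation(1e-4, 0.0, 2e-4))
actual = matmul(nominal, error)

tool_tip = (0.0, 0.0, -200.0)  # TCP offset in the spindle frame (mm)
ideal = apply(nominal, tool_tip)
real = apply(actual, tool_tip)
deviation = tuple(r - i for r, i in zip(real, ideal))
print(deviation)  # volumetric error at the TCP; compensation subtracts this
```

Note how the rotational error terms are amplified by the 200 mm tool offset, which is why angular components matter so much in volumetric error budgets.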

  3. Safeguarding on-line accountability data

    International Nuclear Information System (INIS)

    Carlson, R.L.

    1980-06-01

    HEDL is developing a computerized accountability system that addresses the concerns of data access and data protection. The system utilizes a distributed processing network with transaction oriented software. Microprocessor controlled data accumulation stations are utilized which permit data editing to reduce errors, while maintaining several levels of backup protection. Software changes are allowed only after thorough review, protected by passwording, and are overchecked by audit. Protection against misuse by unauthorized individuals utilizes extensive physical security of the computer and the remote terminals. The positive identification devices and strict software control ensure that individuals cannot subvert the system without leaving an audit trail of their actions. This serves as a deterrent against malevolent acts by the authorized user

  4. Total quality accounting

    Directory of Open Access Journals (Sweden)

    Andrijašević Maja

    2008-01-01

    Full Text Available The focus of the competitive "battle" has shifted from price towards non-price instruments, above all towards quality, which has become the key variable for increasing profitability and achieving a better competitive position for a company. Under such conditions, the management of a company that, on the basis of an established and certified system of total quality, strives towards a better market position faces the problem of measuring and determining quality costs. Management accounting, above all cost accounting, can help in solving this problem, but the question is how much of its potential is being used for that purpose.

  5. New decoding methods of interleaved burst error-correcting codes

    Science.gov (United States)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes that constitute the interleaved code, is presented. This method makes it possible to realize a high burst-error-correction capability with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of subcodes, which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
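The interleaving idea that underlies such codes can be sketched independently of any particular subcode (an illustrative block interleaver, not the authors' construction): a burst of consecutive channel errors is spread out so that each subcode word sees at most one error, which a single-error-correcting subcode can then repair.

```python
def interleave(data, depth):
    """Write the sequence into `depth` rows, then transmit column by column."""
    width = len(data) // depth
    return [data[i * width + j] for j in range(width) for i in range(depth)]

def deinterleave(data, depth):
    """Inverse permutation: recover the original row-wise order."""
    width = len(data) // depth
    return [data[j * depth + i] for i in range(depth) for j in range(width)]

message = list(range(12))            # three subcode words of length 4
tx = interleave(message, depth=3)

corrupted = tx[:]
for k in range(4, 7):                # a burst of 3 consecutive channel errors
    corrupted[k] = None

rx = deinterleave(corrupted, depth=3)
hit = [i for i, v in enumerate(rx) if v is None]
print(hit)  # the burst lands as one isolated error per length-4 subcode word
```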

  6. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  7. Verification of the Patient Positioning in the Bellyboard Pelvic Radiotherapy

    International Nuclear Information System (INIS)

    Kasabasic, M.; Faj, D.; Smilovic Radojcic, D.; Svabic, M.; Ivkovic, A.; Jurkovic, S.

    2008-01-01

    The size and shape of the treatment fields applied in radiotherapy account for uncertainties in the daily set-up of patients during treatment. We investigated the accuracy of daily patient positioning in bellyboard pelvic radiotherapy in order to determine the magnitude of patient movement during treatment. Both translational and rotational movements of the patients were explored. Film portal imaging was used to find patient positioning errors during treatment of the pelvic region. Patients were treated in the prone position using the bellyboard positioning device. Thirty-six patients were included in the study; 15 patients were followed during the whole treatment and 21 during the first 5 consecutive treatment days. Image acquisition was completed in 85 percent of cases, and systematic and random positioning errors in 453 images were analyzed. (author)
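One common way to separate the two error types from such portal-image displacements (a convention assumed here; the abstract does not spell out its formulas) is to take the spread of the per-patient mean displacements as the systematic component and the spread within patients as the random component.

```python
import math
import statistics

def setup_errors(per_patient_shifts):
    """Group mean, systematic error (SD of patient means) and random error
    (RMS of the per-patient SDs) from daily set-up displacements."""
    means = [statistics.mean(s) for s in per_patient_shifts]
    sds = [statistics.stdev(s) for s in per_patient_shifts]
    group_mean = statistics.mean(means)
    systematic = statistics.stdev(means)
    random_err = math.sqrt(statistics.mean(d * d for d in sds))
    return group_mean, systematic, random_err

# Hypothetical daily displacements (mm) for three patients
shifts = [
    [2.0, 1.5, 2.5, 1.0],
    [-1.0, -0.5, 0.0, -1.5],
    [0.5, 1.0, 0.0, 0.5],
]
m, sigma_sys, sigma_rand = setup_errors(shifts)
print(m, sigma_sys, sigma_rand)
```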

  8. Surprised at all the entropy: hippocampal, caudate and midbrain contributions to learning from prediction errors.

    Directory of Open Access Journals (Sweden)

    Anne-Marike Schiffer

    Full Text Available Influential concepts in neuroscientific research cast the brain as a predictive machine that revises its predictions when they are violated by sensory input. This relates to the predictive coding account of perception, but also to learning. Learning from prediction errors has been suggested to take place in the hippocampal memory system as well as in the basal ganglia. The present fMRI study used an action-observation paradigm to investigate the contributions of the hippocampus, caudate nucleus and midbrain dopaminergic system to different types of learning: learning in the absence of prediction errors, learning from prediction errors, and responding to the accumulation of prediction errors in unpredictable stimulus configurations. We conducted analyses of the regions of interest's BOLD responses towards these different types of learning, implementing a bootstrapping procedure to correct for false positives. We found both the caudate nucleus and the hippocampus to be activated by perceptual prediction errors. The hippocampal responses seemed to relate to the associative mismatch between a stored representation and current sensory input. Moreover, its response was significantly influenced by the average information, or Shannon entropy, of the stimulus material. In accordance with earlier results, the habenula was activated by perceptual prediction errors. Lastly, we found that the substantia nigra was activated by the novelty of sensory input. In sum, we established that the midbrain dopaminergic system, the hippocampus, and the caudate nucleus were to different degrees significantly involved in the three different types of learning: acquisition of new information, learning from prediction errors and responding to unpredictable stimulus developments. We relate learning from perceptual prediction errors to the concept of predictive coding and related information theoretic accounts.

  9. Surprised at all the entropy: hippocampal, caudate and midbrain contributions to learning from prediction errors.

    Science.gov (United States)

    Schiffer, Anne-Marike; Ahlheim, Christiane; Wurm, Moritz F; Schubotz, Ricarda I

    2012-01-01

    Influential concepts in neuroscientific research cast the brain as a predictive machine that revises its predictions when they are violated by sensory input. This relates to the predictive coding account of perception, but also to learning. Learning from prediction errors has been suggested to take place in the hippocampal memory system as well as in the basal ganglia. The present fMRI study used an action-observation paradigm to investigate the contributions of the hippocampus, caudate nucleus and midbrain dopaminergic system to different types of learning: learning in the absence of prediction errors, learning from prediction errors, and responding to the accumulation of prediction errors in unpredictable stimulus configurations. We conducted analyses of the regions of interest's BOLD responses towards these different types of learning, implementing a bootstrapping procedure to correct for false positives. We found both the caudate nucleus and the hippocampus to be activated by perceptual prediction errors. The hippocampal responses seemed to relate to the associative mismatch between a stored representation and current sensory input. Moreover, its response was significantly influenced by the average information, or Shannon entropy, of the stimulus material. In accordance with earlier results, the habenula was activated by perceptual prediction errors. Lastly, we found that the substantia nigra was activated by the novelty of sensory input. In sum, we established that the midbrain dopaminergic system, the hippocampus, and the caudate nucleus were to different degrees significantly involved in the three different types of learning: acquisition of new information, learning from prediction errors and responding to unpredictable stimulus developments. We relate learning from perceptual prediction errors to the concept of predictive coding and related information theoretic accounts.
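The Shannon entropy that modulated the hippocampal response is straightforward to compute for a discrete stimulus sequence (a generic sketch, not the study's code): it is the average information per symbol, maximal when every symbol is equally likely.

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Average information, in bits, of a discrete symbol sequence."""
    counts = Counter(sequence)
    n = len(sequence)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy("AAAA"))      # 0.0 bits: fully predictable
print(shannon_entropy("ABCD"))      # 2.0 bits: maximally unpredictable
print(shannon_entropy("AABCDABA"))  # intermediate predictability
```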

  10. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  11. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  12. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...

  13. Errors and violations

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between errors and violations, and between active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident-pathogen view of accident causation and seeks to identify the various system pathways along which errors and violations may be propagated

  14. Error Estimation for Indoor 802.11 Location Fingerprinting

    DEFF Research Database (Denmark)

    Lemelson, Hendrik; Kjærgaard, Mikkel Baun; Hansen, Rene

    2009-01-01

    providers could adapt their delivered services based on the estimated position error to achieve a higher service quality. Finally, system operators could use the information to inspect whether a location system provides satisfactory positioning accuracy throughout the covered area. For position error...

  15. Tanks for liquids: calibration and errors assessment

    International Nuclear Information System (INIS)

    Espejo, J.M.; Gutierrez Fernandez, J.; Ortiz, J.

    1980-01-01

    After a brief reference to some of the problems raised by tank calibration, two methods, one theoretical and one experimental, are presented for performing calibration while taking measurement errors into account. The method is applied to the transfer of liquid from one tank to another, and a practical example is developed. (author)

  16. Help prevent hospital errors

    Science.gov (United States)

    ... this page: //medlineplus.gov/ency/patientinstructions/000618.htm — Help prevent hospital errors. If You Are Having Surgery, Help Keep Yourself Safe: go to a hospital you ...

  17. Pedal Application Errors

    Science.gov (United States)

    2012-03-01

    This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...

  18. Spotting software errors sooner

    International Nuclear Information System (INIS)

    Munro, D.

    1989-01-01

    Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)

  19. Errors in energy bills

    International Nuclear Information System (INIS)

    Kop, L.

    2001-01-01

    On request, the Dutch Association for Energy, Environment and Water (VEMW) checks the energy bills for its customers. It appeared that in the year 2000 many small, but also big, errors were discovered in the bills of 42 businesses.

  20. Medical Errors Reduction Initiative

    National Research Council Canada - National Science Library

    Mutter, Michael L

    2005-01-01

    The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...

  1. Assessing errors related to characteristics of the items measured

    International Nuclear Information System (INIS)

    Liggett, W.

    1980-01-01

    Errors that are related to some intrinsic property of the items measured are often encountered in nuclear material accounting. An example is the error in nondestructive assay measurements caused by uncorrected matrix effects. Nuclear material accounting requires for each materials type one measurement method for which bounds on these errors can be determined. If such a method is available, a second method might be used to reduce costs or to improve precision. If the measurement error for the first method is longer-tailed than Gaussian, then precision might be improved by measuring all items by both methods. 8 refs

  2. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. 
Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  3. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  4. ACCOUNTING AND AUDIT OPERATIONS ON CURRENT ACCOUNT

    Directory of Open Access Journals (Sweden)

    Koblyanska Olena

    2018-03-01

    Full Text Available Introduction. The article is devoted to theoretical, methodical and practical issues of accounting and auditing of operations on the current account. The purpose of the study is to deepen and consolidate theoretical and practical knowledge of these issues, to identify practical problems in implementing the methodology and organization of such accounting and auditing, and to develop recommendations for eliminating deficiencies and improving both. Results. The relevance of proper accounting and auditing of transactions on the current account in the bank is considered. Typical operations on the current account were examined using practical examples of how they are reflected in the accounts. The features of auditing transactions on the current account are examined, the procedure for carrying out such an audit is presented, and the types of abuses and violations that occur in operations on the current account are identified. The legal regulation of accounting, analysis and control of cash operations on current accounts is considered. Problem issues related to organizing and conducting the audit of funds in bank accounts are analyzed, and directions for their solution are determined. Proposals are made for determining the sequence of the auditor's actions when checking cash flow on bank accounts. Conclusions. The article addresses theoretical, methodological and practical issues of accounting and auditing of operations on the current account in the bank; typical cash operations on the current account were studied using the method of their reflection in the accounts, along with the features of auditing cash on the account.

  5. The District Nursing Clinical Error Reduction Programme.

    Science.gov (United States)

    McGraw, Caroline; Topping, Claire

    2011-01-01

    The District Nursing Clinical Error Reduction (DANCER) Programme was initiated in NHS Islington following an increase in the number of reported medication errors. The objectives were to reduce the actual degree of harm and the potential risk of harm associated with medication errors and to maintain the existing positive reporting culture, while robustly addressing performance issues. One hundred medication errors reported in 2007/08 were analysed using a framework that specifies the factors that predispose to adverse medication events in domiciliary care. Various contributory factors were identified and interventions were subsequently developed to address poor drug calculation and medication problem-solving skills and incorrectly transcribed medication administration record charts. Follow-up data were obtained at 12 months and two years. The evaluation has shown that although medication errors still occur, the programme has resulted in a marked shift towards a reduction in both the actual degree of harm and the potential risk of harm.

  6. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  7. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  8. Uticaj mesta ugradnje inercijalnog mernog bloka i akcelerometara na grešku u određivanju pozicije aviona / Size effect of the inertial measurement unit and inside IMU accelerometers on aircraft position error

    Directory of Open Access Journals (Sweden)

    Slobodan Janićijević

    2003-03-01

    Full Text Available This paper analyzes the effect of the mounting location of the inertial measurement unit (IMU) in an aircraft, and of the accelerometer locations within the IMU, on the position accuracy of a strapdown inertial navigation system (SDINS). It is shown that these effects cannot always be neglected. The total position error is computed for an IMU mounted away from the aircraft's center of rotation, with accelerometers offset from the IMU center. An optimum orientation of the accelerometers inside the IMU is proposed to minimize the influence of these offsets on position accuracy, and a manner of compensating the resulting error is proposed as well.
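
The lever-arm effect at the heart of this analysis can be sketched numerically: an accelerometer mounted at offset r from the IMU reference point senses an extra specific force from angular motion, given by the standard rigid-body correction term α × r + ω × (ω × r). The numbers below are illustrative, not taken from the paper:

```python
import numpy as np

def lever_arm_error(omega, alpha, r):
    """Extra specific force sensed by an accelerometer offset by r (m) from
    the IMU reference point, given body angular rate omega (rad/s) and
    angular acceleration alpha (rad/s^2): alpha x r + omega x (omega x r)."""
    omega, alpha, r = map(np.asarray, (omega, alpha, r))
    return np.cross(alpha, r) + np.cross(omega, np.cross(omega, r))

# steady 1 rad/s yaw, accelerometer 0.1 m ahead of the IMU reference point
err = lever_arm_error([0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [0.1, 0.0, 0.0])
# → [-0.1, 0, 0]: a 0.1 m/s^2 centripetal pull toward the rotation axis
```

At 1 rad/s this 0.1 m/s² bias integrates into tens of meters of position error within minutes, which is why such offsets cannot always be neglected.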

  9. The impact of measurement errors in the identification of regulatory networks

    Directory of Open Access Journals (Sweden)

    Sato João R

    2009-12-01

    Full Text Available Abstract Background: There are several studies in the literature depicting measurement error in gene expression data, and several others about regulatory network models. However, only a small fraction combine measurement error with mathematical regulatory networks and show how to identify these networks under different rates of noise. Results: This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. To overcome these problems, we present an improved version of the ordinary least squares estimator for independent (regression) models and dependent (autoregressive) models when the variables are subject to noise; measurement-error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones that ignore measurement error. The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions: Measurement error dangerously affects the identification of regulatory network models, so it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false-positive rates identified in actual regulatory network models.
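
The bias the authors describe can be illustrated with a toy errors-in-variables regression: noise on the covariate attenuates the ordinary least squares slope toward zero, and a moment-based correction recovers it when the measurement-error variance is known. This is a minimal sketch with made-up numbers, not the paper's estimator:

```python
import numpy as np

# Toy errors-in-variables regression; sigma_u (the measurement-error SD on x)
# is assumed known, which is the simplest setting for a moment correction.
rng = np.random.default_rng(0)
n = 100_000
beta = 2.0                     # true regression slope
sigma_u = 0.5                  # SD of the measurement error on x

x_true = rng.normal(0.0, 1.0, n)
y = beta * x_true + rng.normal(0.0, 0.3, n)
x_obs = x_true + rng.normal(0.0, sigma_u, n)      # noisy covariate

cov_xy = np.cov(x_obs, y)[0, 1]
naive = cov_xy / np.var(x_obs, ddof=1)            # attenuated toward zero
corrected = cov_xy / (np.var(x_obs, ddof=1) - sigma_u**2)  # bias removed
```

Here `naive` lands near beta * 1 / (1 + sigma_u²) = 1.6 rather than 2.0, which is exactly the kind of systematic bias that makes uncorrected p-values unreliable.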

  10. Error estimation and adaptivity for incompressible hyperelasticity

    KAUST Repository

    Whiteley, J.P.

    2014-04-30

    SUMMARY: A Galerkin FEM is developed for nonlinear, incompressible (hyper) elasticity that takes account of nonlinearities in both the strain tensor and the relationship between the strain tensor and the stress tensor. By using suitably defined linearised dual problems with appropriate boundary conditions, a posteriori error estimates are then derived for both linear functionals of the solution and linear functionals of the stress on a boundary, where Dirichlet boundary conditions are applied. A second, higher order method for calculating a linear functional of the stress on a Dirichlet boundary is also presented together with an a posteriori error estimator for this approach. An implementation for a 2D model problem with known solution, where the entries of the strain tensor exhibit large, rapid variations, demonstrates the accuracy and sharpness of the error estimators. Finally, using a selection of model problems, the a posteriori error estimate is shown to provide a basis for effective mesh adaptivity. © 2014 John Wiley & Sons, Ltd.

  11. ACCOUNTING HARMONIZATION AND HISTORICAL COST ACCOUNTING

    Directory of Open Access Journals (Sweden)

    Valentin Gabriel CRISTEA

    2017-05-01

    Full Text Available There is huge interest in accounting harmonization and in historical cost accounting, and in what each offers. In this article, different valuation models are discussed. Although one notices a movement from historical cost accounting toward fair value accounting, each has its advantages.

  12. Analytical sensitivity analysis of geometric errors in a three axis machine tool

    International Nuclear Information System (INIS)

    Park, Sung Ryung; Yang, Seung Han

    2012-01-01

    In this paper, an analytical method is used to perform a sensitivity analysis of geometric errors in a three axis machine tool. First, an error synthesis model is constructed for evaluating the position volumetric error due to the geometric errors, and then an output variable is defined, such as the magnitude of the position volumetric error. Next, the global sensitivity analysis is executed using an analytical method. Finally, the sensitivity indices are calculated using the quantitative values of the geometric errors

  13. Individual differences in error monitoring in healthy adults: psychological symptoms and antisocial personality characteristics.

    Science.gov (United States)

    Chang, Wen-Pin; Davies, Patricia L; Gavin, William J

    2010-10-01

    Recent studies have investigated the relationship between psychological symptoms and personality traits and error monitoring as measured by the error-related negativity (ERN) and error positivity (Pe) event-related potential (ERP) components, yet there remains a paucity of studies examining the simultaneous, collective effects of psychological symptoms and personality traits on error monitoring. This study therefore examined whether measures of hyperactivity-impulsivity, depression, anxiety and antisocial personality characteristics could collectively account for significant interindividual variability in both ERN and Pe amplitudes in 29 healthy adults with no known disorders, ages 18-30 years. Bivariate zero-order correlation analyses found that only the anxiety measure was significantly related to both ERN and Pe amplitudes. However, multiple regression analyses that included all four characteristic measures while controlling for the number of segments in the ERP average revealed that both depression and antisocial personality characteristics were significant predictors of ERN amplitude, whereas antisocial personality was the only significant predictor of Pe amplitude. These findings suggest that psychological symptoms and personality traits are associated with individual variation in error monitoring in healthy adults, and future studies should consider these variables when comparing group differences in error monitoring between adults with and without disabilities.

  14. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est” is a well-known and widespread Latin proverb stating that to err is human: people make mistakes all the time. What counts, however, is that people learn from their mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, learners of a new language make mistakes; it is important to accept them, learn from them, discover why they were made, improve and move on. The significance of studying errors is described by Corder: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982, p. 1). The aim of this paper is thus to analyze errors in the process of second language acquisition and the ways in which teachers can use mistakes to help students improve while giving proper feedback.

  15. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high-resolution (single-byte) measurement of the error-burst and good-data-gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard-decision (i.e., 1-bit error flags) and soft-decision (i.e., 2-bit error flags) decoding information that is produced and used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
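
The first objective, single-byte measurement of error bursts and good-data gaps, amounts to run-length statistics over a stream of per-byte error flags. A minimal sketch (the flag array is hypothetical test data, not a measured read channel):

```python
import numpy as np

def burst_gap_stats(flags):
    """Run-length statistics over per-byte error flags
    (1 = byte in error, 0 = good byte)."""
    flags = np.asarray(flags)
    boundaries = np.flatnonzero(np.diff(flags)) + 1  # indices where runs change
    runs = np.split(flags, boundaries)
    bursts = [len(r) for r in runs if r[0] == 1]     # consecutive error bytes
    gaps = [len(r) for r in runs if r[0] == 0]       # consecutive good bytes
    return bursts, gaps

# hypothetical flag stream: bursts of lengths 2 and 1, gaps of 2, 1 and 3
bursts, gaps = burst_gap_stats([0, 0, 1, 1, 0, 1, 0, 0, 0])
```

Histograms of these burst and gap lengths are the inputs a decoder-output error-rate model of the kind described would consume.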

  16. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities, and in medical practice errors can lead to adverse events for patients; mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular, and compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions conducive to fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events, and education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk of errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  17. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defense of libertarianism against two accusations according to which it commits a category mistake. To that end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis of the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of committing them.

  18. Libertarismo & Error Categorial

    OpenAIRE

    PATARROYO G, CARLOS G

    2009-01-01

    This article offers a defense of libertarianism against two accusations according to which it commits a category mistake. To that end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks in physicalist indeterminism the basis of the possibili...

  19. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  20. MODELING OF MANUFACTURING ERRORS FOR PIN-GEAR ELEMENTS OF PLANETARY GEARBOX

    Directory of Open Access Journals (Sweden)

    Ivan M. Egorov

    2014-11-01

    Full Text Available The theoretical background for the calculation of k-h-v type cycloid reducers was developed relatively long ago. Recently, however, cycloid reducer design has again attracted heightened attention, because such devices are used in many complex engineering systems, particularly in mechatronic and robotic systems. Advanced manufacturing capabilities now make it possible to realize the essential features of such devices: high efficiency, high gear ratio, kinematic accuracy and smooth motion. An adequate mathematical model makes it possible to adjust the kinematic accuracy of the reducer by rational selection of manufacturing tolerances for its parts, and thereby to automate the design process for cycloid reducers with account of various factors, including technological ones. A mathematical model and technique have been developed for modeling the kinematic error of the reducer with account of multiple factors, including manufacturing errors. The errors are treated in a form convenient for predicting kinematic accuracy early in the manufacturing stage, from the results of measuring reducer parts on coordinate measuring machines. In the model, the wheel manufacturing errors are determined by the eccentricity and radius deviation of the pin-tooth centers circle and by the deviation between the pin-tooth axis positions and the centers circle; the satellite manufacturing errors are determined by the satellite eccentricity deviation and the satellite rim eccentricity. Due to collinearity, the pin-tooth and pin-tooth-hole diameter errors and the satellite tooth profile errors for a designated contact point are integrated into one deviation. A software implementation of the model makes it possible to estimate the influence of these errors on the satellite rotation angle error and

  1. Social Responsibility of Accounting

    OpenAIRE

    JINNAI, Yoshiaki

    2011-01-01

    Historical and theoretical inquiries into the function of accounting have provided fruitful insights into social responsibility of accounting, which is, and should be, based on accounts kept through everyday accounting activities. However, at the current stage of capitalist accounting, keeping accounts is often regarded as merely a preparatory process for creating financial statements at the end of an accounting period. Thus, discussions on the social responsibility of accounting tend to conc...

  2. Importance of interpolation and coincidence errors in data fusion

    Science.gov (United States)

    Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana

    2018-02-01

    The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on number of degrees of freedom and errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.
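
At its core, combining independent retrievals of the same profile weights each by its inverse covariance; the complete data fusion method generalizes this with averaging kernels and, in the upgrade described here, with interpolation and coincidence error terms. A bare inverse-covariance sketch with toy numbers, not the CDF algorithm itself:

```python
import numpy as np

def fuse(profiles, covs):
    """Inverse-covariance-weighted combination of coincident profiles on a
    common grid. Illustrative special case only: no averaging kernels and no
    interpolation/coincidence error terms."""
    precisions = [np.linalg.inv(S) for S in covs]
    S_fused = np.linalg.inv(sum(precisions))               # fused covariance
    x_fused = S_fused @ sum(P @ x for P, x in zip(precisions, profiles))
    return x_fused, S_fused

# two 2-level "profiles" with unit error covariance
x1, S1 = np.array([1.0, 2.0]), np.eye(2)
x2, S2 = np.array([3.0, 4.0]), np.eye(2)
xf, Sf = fuse([x1, x2], [S1, S2])  # → xf = [2., 3.], Sf = 0.5 * I
```

With equal covariances the fused profile is the plain mean and the fused variance halves; inflating a covariance with extra interpolation or coincidence terms correspondingly down-weights that profile, which is the mechanism the generalization exploits.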

  3. Error framing effects on performance: cognitive, motivational, and affective pathways.

    Science.gov (United States)

    Steele-Johnson, Debra; Kalinoski, Zachary T

    2014-01-01

    Our purpose was to examine whether positive error framing, that is, making errors salient and cuing individuals to see errors as useful, can benefit learning when task exploration is constrained. Recent research has demonstrated the benefits of a newer approach to training, that is, error management training, that includes the opportunity to actively explore the task and framing errors as beneficial to learning complex tasks (Keith & Frese, 2008). Other research has highlighted the important role of errors in on-the-job learning in complex domains (Hutchins, 1995). Participants (N = 168) from a large undergraduate university performed a class scheduling task. Results provided support for a hypothesized path model in which error framing influenced cognitive, motivational, and affective factors which in turn differentially affected performance quantity and quality. Within this model, error framing had significant direct effects on metacognition and self-efficacy. Our results suggest that positive error framing can have beneficial effects even when tasks cannot be structured to support extensive exploration. Whereas future research can expand our understanding of error framing effects on outcomes, results from the current study suggest that positive error framing can facilitate learning from errors in real-time performance of tasks.

  4. Accounting Fundamentals for Non-Accountants

    Science.gov (United States)

    The purpose of this module is to provide an introduction and overview of accounting fundamentals for non-accountants. The module also covers important topics such as communication, internal controls, documentation and recordkeeping.

  5. DO ACCOUNTING PRACTITIONERS USE ACCOUNTING RESEARCH RESULTS?

    OpenAIRE

    ALINA BEATTRICE VLADU

    2015-01-01

    This paper reports the results of a survey designed to explore if accounting practitioners are using as a reference point in their daily activities the opinions of academia. Since accounting research comprises various trends of research the earnings management research field is used as illustrative case. Among our respondents were accounting professional, members of professional bodies as the Chamber of Financial Auditors or Romania and also Body of Expert and Licensed Accountants...

  6. ACCOUNTING TREATMENTS USED FOR ACCOUNTING SERVICES PROVIDERS

    OpenAIRE

    ŢOGOE GRETI DANIELA; AVRAM MARIOARA; AVRAM COSTIN DANIEL

    2014-01-01

    The theme of our research is the ways of keeping accounting entities that are the object of the provision of services in the accounting profession. This paper aims to achieve a parallel between the ways of organizing financial records - accounting provided by freelancers and companies with activity in the financial - accounting. The first step in our scientific research is to establish objectives chosen area of scientific knowledge. Our scientific approach seeks to explain thr...

  7. Bootstrap-Based Improvements for Inference with Clustered Errors

    OpenAIRE

    Doug Miller; A. Colin Cameron; Jonah B. Gelbach

    2006-01-01

    Microeconometrics researchers have increasingly realized the essential need to account for any within-group dependence in estimating standard errors of regression parameter estimates. The typical preferred solution is to calculate cluster-robust or sandwich standard errors that permit quite general heteroskedasticity and within-cluster error correlation, but presume that the number of clusters is large. In applications with few (5-30) clusters, standard asymptotic tests can over-reject consid...
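    The cluster-robust "sandwich" estimator the abstract refers to can be sketched in a few lines of numpy. This is a simplified version without small-sample corrections (e.g. the G/(G-1) factor) or the bootstrap refinements the paper studies, and the simulated data are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated panel: G clusters of m observations, with a shared
    # within-cluster shock that makes errors correlated inside clusters.
    G, m = 20, 30
    cluster = np.repeat(np.arange(G), m)
    x = rng.normal(size=G * m)
    u = rng.normal(size=G * m) + np.repeat(rng.normal(size=G), m)
    y = 1.0 + 2.0 * x + u

    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    XtX_inv = np.linalg.inv(X.T @ X)

    # Classical (iid) variance estimate
    s2 = resid @ resid / (len(y) - X.shape[1])
    se_iid = np.sqrt(np.diag(s2 * XtX_inv))

    # Cluster-robust sandwich: the "meat" sums X_g' u_g u_g' X_g over clusters
    meat = np.zeros((2, 2))
    for g in range(G):
        idx = cluster == g
        score = X[idx].T @ resid[idx]
        meat += np.outer(score, score)
    se_cluster = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
    ```

    With a shared within-cluster shock, the cluster-robust standard error of the intercept comes out several times larger than the iid one; that within-group dependence is precisely what the sandwich form is designed to capture, and what the paper's bootstrap methods refine when G is small.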

  8. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ... practical codes, storing such a table is infeasible, as it is generally too large.

  9. Error Correcting Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...
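    The RS(255,223) code mentioned in these records appends 32 parity bytes and can correct up to (255 - 223)/2 = 16 byte errors, but its decoder requires finite-field arithmetic. The basic encode/corrupt/decode cycle behind all such codes can be illustrated with the simplest error-correcting code, a (3,1) repetition code:

    ```python
    def encode(bits):
        """(3,1) repetition code: send each bit three times."""
        return [b for bit in bits for b in (bit, bit, bit)]

    def decode(received):
        """Majority vote corrects any single bit-flip per 3-bit block."""
        return [1 if sum(received[i:i + 3]) >= 2 else 0
                for i in range(0, len(received), 3)]

    msg = [1, 0, 1, 1]
    sent = encode(msg)
    corrupted = sent.copy()
    corrupted[4] ^= 1                  # flip one bit in the second block
    assert decode(corrupted) == msg    # single error corrected
    ```

    Reed-Solomon achieves the same effect far more efficiently, correcting symbol (byte) errors with algebraic decoding rather than a lookup table, which is why, as the snippet above notes, storing a full syndrome table is unnecessary as well as infeasible.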

  10. Prevalence of Refractive Error and Visual Impairment among Rural ...

    African Journals Online (AJOL)

    Refractive error was the major cause of visual impairment accounting for 54% of all causes in the study group. No child was found wearing ... So, large scale community level screening for refractive error should be conducted and integrated with regular school eye screening programs. Effective strategies need to be devised ...

  11. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  12. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and their taxonomy. These notions are also applied to events that have occurred in the nuclear power, aviation and shipping industries. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication and resource/task management, an excessive authority gradient, and excessive professional courtesy can cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors

  13. Open quantum systems and error correction

    Science.gov (United States)

    Shabani Barzegar, Alireza

    Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interaction with their surrounding environment or from inaccuracy in control forces. Engineering methods to combat errors in quantum devices is therefore in high demand. In this thesis, I focus on realistic formulations of quantum error correction methods; a realistic formulation is one that incorporates experimental challenges. The thesis is presented in two sections, on open quantum systems and on quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory: it is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The section on quantum error correction spans chapters 4 through 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). 
Chapter 6 is devoted to a theory of quantum error correction (QEC

  14. Safeguards Accountability Network accountability and materials management

    International Nuclear Information System (INIS)

    Carnival, G.J.; Meredith, E.M.

    1985-01-01

    The Safeguards Accountability Network (SAN) is a computerized on-line accountability system for the safeguards accountability control of nuclear materials inventories at Rocky Flats Plant. SAN is a dedicated accountability system utilizing source documents filled out on the shop floor as its base. The system incorporates double entry accounting and is developed around the Material Balance Area (MBA) concept. MBA custodians enter transaction information from source documents prepared by personnel in the process areas directly into the SAN system. This provides a somewhat near-real time perpetual inventory system which has limited interaction with MBA custodians. MBA custodians are permitted to inquire into the system and status items on inventory. They are also responsible for the accuracy of the accountability information used as input to the system for their MBA. Monthly audits by the Nuclear Materials Control group assure the timeliness and accuracy of SAN accountability information

  15. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Science.gov (United States)

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. When uncertainty in assessments is considered, the lowest error rates are with dichotomization; while using the full range of mRS is conceptually appealing, the gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
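    The paper's central computation (multiplying an outcome distribution by an inter-rater noise distribution to obtain an error rate) can be sketched as follows. The mRS distribution and the misclassification matrix below are hypothetical, not the published ones:

    ```python
    import numpy as np

    # Hypothetical trial distribution over mRS grades 0..6
    p = np.array([0.10, 0.15, 0.15, 0.20, 0.20, 0.10, 0.10])

    # Hypothetical inter-rater noise: C[k, j] = probability that a true
    # grade k is recorded as grade j (each row sums to 1).
    C = np.zeros((7, 7))
    for k in range(7):
        C[k, k] = 0.80                          # recorded correctly
        C[k, max(k - 1, 0)] += 0.10             # recorded one grade low
        C[k, min(k + 1, 6)] += 0.10             # recorded one grade high

    # "Shift" analysis uses every grade: any misrecording is an error.
    err_shift = 1.0 - sum(p[k] * C[k, k] for k in range(7))

    # Dichotomization at mRS <= 1: only crossings of the cut-point matter.
    good = lambda k: k <= 1
    err_dich = sum(p[k] * C[k, j]
                   for k in range(7) for j in range(7)
                   if good(k) != good(j))

    assert err_dich < err_shift   # fewer categories -> fewer misclassifications
    ```

    With every grade in play, any misrecording counts as an error, while a dichotomized endpoint is only affected by misrecordings that cross the cut-point; that asymmetry is why the abstract finds the lowest error rates with dichotomization.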

  16. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung...
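    The effect the abstract describes (placement errors inflating the variance of a systematic-sampling estimator) is easy to reproduce in a one-dimensional Monte Carlo sketch. This is illustrative only, not one of the paper's exact error-process models:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def estimate(f, n, shift, jitter_sd=0.0):
        """Systematic sample on [0, 1): a grid of n points with random start
        `shift`, optionally perturbed by Gaussian placement errors."""
        pts = (shift + np.arange(n)) / n
        pts = (pts + rng.normal(0.0, jitter_sd, n)) % 1.0
        return f(pts).mean()      # estimator of the integral of f over [0, 1)

    f = lambda t: np.sin(2 * np.pi * t) ** 2   # true integral = 0.5
    n, reps = 16, 4000
    shifts = rng.uniform(size=reps)

    exact = np.array([estimate(f, n, s) for s in shifts])
    noisy = np.array([estimate(f, n, s, jitter_sd=0.02) for s in shifts])

    # Placement errors inflate the variance of the estimator.
    assert noisy.var() > exact.var()
    ```

    For this smooth periodic integrand the exactly periodic grid integrates essentially without error for every shift, so even a 2% positioning jitter produces a dramatic relative inflation of the variance, an extreme case of the effect the paper quantifies.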

  17. Analysis of Errors in a Special Perturbations Satellite Orbit Propagator

    Energy Technology Data Exchange (ETDEWEB)

    Beckerman, M.; Jones, J.P.

    1999-02-01

    We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors, and the amplitudes of the radial and cross-track errors, increase.

  18. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  19. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Directory of Open Access Journals (Sweden)

    Pitchaiah Mandava

    Full Text Available OBJECTIVE: Clinical trial outcomes often involve an ordinal scale of subjective functional assessments but the optimal way to quantify results is not clear. In stroke, the most commonly used scale, the modified Rankin Score (mRS, a range of scores ("Shift" is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. METHODS: We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated sample size required if classification uncertainty was taken into account. RESULTS: Considering the full mRS range, error rate was 26.1%±5.31 (Mean±SD. Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall p<0.001. Taking errors into account, SAINT I would have required 24% more subjects than were randomized. CONCLUSION: We show when uncertainty in assessments is considered, the lowest error rates are with dichotomization. While using the full range of mRS is conceptually appealing, a gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We

  20. ACCOUNTING TREATMENTS USED FOR ACCOUNTING SERVICES PROVIDERS

    Directory of Open Access Journals (Sweden)

    ŢOGOE GRETI DANIELA

    2014-08-01

    Full Text Available The theme of our research is the ways in which accounting is kept by entities whose object of activity is the provision of services within the accounting profession. This paper aims to draw a parallel between the ways of organizing financial-accounting records by freelancers and by companies active in the financial-accounting field. The first step in our scientific research is to establish the objectives of the chosen area of scientific knowledge. Our scientific approach seeks to explain, through a thorough and detailed treatment, the different sides (conceptual and practical) of accounting issues related to regulatory developments and practices in the field. This paper addresses various concepts, accounting treatments, and the books and accounting documents used both by freelancers providing accounting services and by legal persons authorized to practice the accounting profession. In terms of methodology and research perspective, the whole scientific approach combines quantitative and qualitative research, a theoretical perspective (descriptive-conceptual) with a practical perspective (empirical), analyzing the main contributions of various authors (Romanian and foreign) to knowledge in the field. Following the survey, we believe that the amendments to the national legislation will support entities providing accounting services by cutting red tape and administrative burdens, and consequently will increase profitability and service quality.

  1. Correction of refractive errors

    Directory of Open Access Journals (Sweden)

    Vladimir Pfeifer

    2005-10-01

    Full Text Available Background: Spectacles and contact lenses are the most frequently used, safest and cheapest ways to correct refractive errors. The development of keratorefractive surgery has brought new opportunities for the correction of refractive errors in patients who wish to be less dependent on spectacles or contact lenses. Until recently, RK was the most commonly performed refractive procedure for nearsighted patients. Conclusions: The introduction of the excimer laser in refractive surgery has opened new opportunities for remodelling the cornea. The laser energy can be delivered to the stromal surface, as in PRK, or deeper into the corneal stroma by means of lamellar surgery. In LASIK the flap is created with a microkeratome, in LASEK with ethanol, and in epi-LASIK the ultra-thin flap is created mechanically.

  2. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  3. The position of place in governing global problems: A mechanistic account of place-as-context, and analysis of transitions towards spatially explicit approaches to climate science and policy

    International Nuclear Information System (INIS)

    MacGillivray, Brian H.

    2015-01-01

    Highlights: • Place is a central yet undertheorised concept within sustainability science. • Introduces an account of place as the context in which social and environmental mechanisms operate. • Uses this account to critique historical aspatial approaches to climate science and policy. • Traces out shifts towards spatially explicit approaches to climate governance. • A focus on place, heterogeneity, and context maximizes the credibility and policy-relevance of climate science. - Abstract: Place is a central concept within the sustainability sciences, yet it remains somewhat undertheorised, and its relationship to generalisation and scale is unclear. Here, we develop a mechanistic account of place as the fundamental context in which social and environmental mechanisms operate. It is premised on the view that the social and environmental sciences are typically concerned with causal processes and their interaction with context, rather than with a search for laws. We deploy our mechanistic account to critique the neglect of place that characterised the early stages of climate governance, ranging from the highly idealised general circulation and integrated assessment models used to analyze climate change, to the global institutions and technologies designed to manage it. We implicate this neglect of place in the limited progress in tackling climate change in both public and policy spheres, before tracing out recent shifts towards more spatially explicit approaches to climate change science and policy-making. These shifts reflect a move towards an ontology which acknowledges that even where causal drivers are in a sense global in nature (e.g. atmospheric levels of greenhouse gases), their impacts are often mediated through variables that are spatially clustered at multiple scales, moderated by contextual features of the local environment, and interact with the presence of other (localised) stressors in synergistic rather than additive ways. We conclude that a

  4. Minimum Tracking Error Volatility

    OpenAIRE

    Luca RICCETTI

    2010-01-01

    Investors assign part of their funds to asset managers that are given the task of beating a benchmark. The risk management department usually imposes a maximum value of the tracking error volatility (TEV) in order to keep the risk of the portfolio near to that of the selected benchmark. However, risk management does not establish a rule on TEV which enables us to understand whether the asset manager is really active or not and, in practice, asset managers sometimes follow passively the corres...

  5. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  6. Satellite Photometric Error Determination

    Science.gov (United States)

    2015-10-18

    Satellite Photometric Error Determination. Tamara E. Payne, Philip J. Castro, Stephen A. Gregory, Applied Optimization, 714 East Monument Ave, Suite ... advocate the adoption of new techniques based on in-frame photometric calibrations enabled by newly available all-sky star catalogs that contain highly ... filter systems will likely be supplanted by the Sloan based filter systems. The Johnson photometric system is a set of filters in the optical

  7. Dose error analysis for a scanned proton beam delivery system

    International Nuclear Information System (INIS)

    Coutrakon, G; Wang, N; Miller, D W; Yang, Y

    2010-01-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm³ target of uniform water equivalent density with an 8 cm spread-out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian-shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
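    The rms-error procedure described (deliver the same plan many times with random spot errors, then take the per-voxel standard deviation) can be sketched in one dimension. All beam parameters below are hypothetical, not the measured Loma Linda values:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # 1-D toy version: a roughly uniform dose built from overlapping
    # Gaussian pencil-beam spots, delivered repeatedly with random
    # spot-position and intensity errors.
    voxels = np.linspace(0.0, 80.0, 33)    # mm, ~2.5 mm spacing
    spots = np.arange(0.0, 80.1, 2.5)      # planned spot centres, mm
    sigma = 5.0                            # pencil-beam width, mm

    def deliver(pos_sd=0.5, int_sd=0.01):
        """One delivery with Gaussian position and intensity errors."""
        centres = spots + rng.normal(0.0, pos_sd, spots.size)
        weights = 1.0 + rng.normal(0.0, int_sd, spots.size)
        d = np.zeros_like(voxels)
        for c, w in zip(centres, weights):
            d += w * np.exp(-0.5 * ((voxels - c) / sigma) ** 2)
        return d

    plan = np.zeros_like(voxels)           # error-free reference
    for c in spots:
        plan += np.exp(-0.5 * ((voxels - c) / sigma) ** 2)
    dose_scale = 2.0 / plan.mean()         # normalise to the 2 Gy prescription

    runs = np.array([deliver() for _ in range(200)]) * dose_scale
    rms_pct = 100.0 * runs.std(axis=0) / 2.0   # rms error per voxel, % of 2 Gy
    ```

    With sub-millimetre position errors and 1% intensity noise, this toy model keeps the per-voxel rms error at the few-percent level, the same order as the paper's reported 2-3% bound, though the numbers here are purely illustrative.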

  8. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. Errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information, together with several error concealment techniques in the decoder. The decoder resynchronizes more quickly, and with fewer errors, than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  9. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. Errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information, together with several error concealment techniques in the decoder. The decoder resynchronizes more quickly, and with fewer errors, than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  10. Traditional Market Accounting: Management or Financial Accounting?

    OpenAIRE

    Wiyarni, Wiyarni

    2017-01-01

    The purpose of this study is to explore the area of accounting in traditional market. There are two areas of accounting: management and financial accounting. Some of traditional market traders have prepared financial notes, whereas some of them do not. Their financial notes usually consist of receivables, payables, customer orders, inventories, sales and cost price, and salary expenses. The purpose of these financial notes is usually for decision making. It is very rare for the traditional ma...

  11. ABACC's nuclear accounting area

    International Nuclear Information System (INIS)

    Nicolas, Ruben O.

    2001-01-01

    The functions and activities of the Brazilian-Argentine Agency for the Accounting and Control of Nuclear Materials (ABACC) accounting area is outlined together with a detailed description of the nuclear accounting system used by the bilateral organization

  12. Gender Wage Gap Accounting: The Role of Selection Bias.

    Science.gov (United States)

    Bar, Michael; Kim, Seik; Leukhina, Oksana

    2015-10-01

    Mulligan and Rubinstein (2008) (MR) argued that changing selection of working females on unobservable characteristics, from negative in the 1970s to positive in the 1990s, accounted for nearly the entire closing of the gender wage gap. We argue that their female wage equation estimates are inconsistent. Correcting this error substantially weakens the role of the rising selection bias (39 % versus 78 %) and strengthens the contribution of declining discrimination (42 % versus 7 %). Our findings resonate better with related literature. We also explain why our finding of positive selection in the 1970s provides additional support for MR's main hypothesis that an exogenous rise in the market value of unobservable characteristics contributed to the closing of the gender gap.

  13. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Multiangulation position estimation performance analysis using

    African Journals Online (AJOL)

    HOD

    multiangulation PE error is 50% lower than that of the directional rotating antenna system. Furthermore, the ... system is an example of a wireless positioning system that has ..... Table 2: PE error for some selection source locations. No. Range ...

  15. Implications of Fraud and Error Risks in the Enterprise Environment and Auditor’s Work

    Directory of Open Access Journals (Sweden)

    Emil Horomnea

    2012-05-01

    Full Text Available The objective of this study is to identify and analyze the main correlations and implications of fraud and error in the business environment and in the financial scandals of the last decade. The approach envisages a synthesis and antithesis of the ideas found on this subject in the specialty literature and of the regulations issued by various international bodies. To achieve the established objectives, we used a constructive methodology to identify criticism and presentations, and developed a discourse aimed at more efficient and effective fraud and error risk management. The results of the study show that the major financial scandals, and hence the global economic crisis, are based largely on fraudulent maneuvers of significant proportions. By using "creative accounting" in fraud and error, famous companies have managed to distort the reality of their performance and market position, misleading users' perception. This study is theoretical, with implications for a future empirical study. The study contributes to the diversification of auditing literature in the field of fraud and error risk. An additional perspective is gained by addressing the financial crisis and some famous bankruptcies by way of the financial auditors' activity and the fraud and error risk.

  16. Delphi Accounts Receivable Module -

    Data.gov (United States)

    Department of Transportation — The Delphi accounts receivable module contains, but is not limited to, the following data elements: customer information, cash receipts, line of accounting details, bill...

  17. Error correction and degeneracy in surface codes suffering loss

    International Nuclear Information System (INIS)

    Stace, Thomas M.; Barrett, Sean D.

    2010-01-01

    Many proposals for quantum information processing are subject to detectable loss errors. In this paper, we give a detailed account of recent results in which we showed that topological quantum memories can simultaneously tolerate both loss errors and computational errors, with a graceful tradeoff between the threshold for each. We further discuss a number of subtleties that arise when implementing error correction on topological memories. We particularly focus on the role played by degeneracy in the matching algorithms and present a systematic study of its effects on thresholds. We also discuss some of the implications of degeneracy for estimating phase transition temperatures in the random bond Ising model.

  18. Determination of charged particle beam parameters with taking into account of space charge

    International Nuclear Information System (INIS)

    Ishkhanov, B.S.; Poseryaev, A.V.; Shvedunov, V.I.

    2005-01-01

    A procedure is described for determining the basic parameters of a paraxial, axially-symmetric beam of charged particles, taking into account the space charge contribution. The procedure is based on the general equation for the beam envelope. Data on its convergence and robustness to measurement errors are presented. The error in determining the crossover position (stretching) and the beam radius at the crossover is at most 15%, while the emittance determination error depends on the correlation between emittance and space charge. The procedure was used to determine the parameters of the 20 keV, 0.64 A electron beam of an available electron gun. The derived results agree closely with the design parameters.
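The general envelope equation that such a procedure builds on can be written, for a paraxial axisymmetric beam in a drift space, in the standard textbook form below (the notation is assumed here and may differ from the paper's):

```latex
% Envelope equation for a paraxial, axisymmetric beam in a drift space:
\[
  R''(z) \;=\; \frac{\varepsilon^2}{R^3} \;+\; \frac{K}{R},
  \qquad
  K \;=\; \frac{2I}{I_A\,\beta^3\gamma^3},
\]
% R(z): envelope radius, \varepsilon: unnormalized emittance,
% K: generalized perveance, I: beam current,
% I_A \approx 17\,\mathrm{kA}: Alfven current, \beta\gamma: relativistic factors.
```

The first term is the emittance pressure and the second the space-charge defocusing; the coupling between the two terms is why the emittance error depends on the emittance/space-charge correlation.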

  19. Forecast errors in IEA-countries' energy consumption

    DEFF Research Database (Denmark)

    Linderoth, Hans

    2002-01-01

    Every year Policy of IEA Countries includes a forecast of the energy consumption in the member countries. Forecasts concerning the years 1985, 1990 and 1995 can now be compared to actual values. The second oil crisis resulted in big positive forecast errors. The oil price drop in 1986 did not have...... the small value is often the sum of large positive and negative errors. Almost no significant correlation is found between forecast errors in the 3 years. Correspondingly, no significant correlation coefficient is found between forecast errors in the 3 main energy sectors. Therefore, a relatively small

  20. Bringing organizational factors to the fore of human error management

    International Nuclear Information System (INIS)

    Embrey, D.

    1991-01-01

    Human performance problems account for more than half of all significant events at nuclear power plants, even when these did not necessarily lead to severe accidents. In dealing with the management of human error, both technical and organizational factors need to be taken into account. Most important, a long-term commitment from senior management is needed. (author)

  1. Diagnostic errors in pediatric radiology

    International Nuclear Information System (INIS)

    Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.

    2011-01-01

    Little is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean: 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case), all of which were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  2. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners will also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using an MEE-like concept is also presented. Examples, tests, evaluation experiments and comparisons with similar machines using classic approaches complement the descriptions.
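As a concrete illustration of the MEE idea, the error distribution's Rényi quadratic entropy can be estimated through its "information potential", a Parzen-window quantity from the information-theoretic learning literature: the smaller the entropy of the errors (the more concentrated they are), the larger the potential. The kernel width `sigma` and the toy error samples below are illustrative assumptions, not examples from the book:

```python
import numpy as np

def information_potential(errors, sigma=0.5):
    """Parzen (Gaussian kernel) estimate of the quadratic information
    potential V(e); maximizing V minimizes Renyi's quadratic entropy."""
    e = np.asarray(errors, dtype=float)
    d = e[:, None] - e[None, :]                      # all pairwise differences
    k = np.exp(-d**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2)
    return float(k.mean())

# concentrated errors => higher potential (lower entropy) than spread-out ones
tight = np.full(50, 0.1)
rng = np.random.default_rng(0)
spread = rng.normal(0.0, 2.0, 50)
print(information_potential(tight) > information_potential(spread))  # True
```

An MEE-trained classifier adjusts its weights to maximize this potential of the training errors instead of minimizing a mean squared error.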

  3. Revaluation and the internal audit limits that influence the financial-accounting activity

    OpenAIRE

    Dragos Laurentiu Zaharia; Luminita Dragne; Doina Maria Tilea

    2014-01-01

    Regarding the financial-accounting system, the internal audit aims at understanding the accounting and control systems, finding and correcting errors. The auditor must inform general management about the obvious errors within the entity. The board must know everything concerning these errors.

  4. How "accountable" are accountable care organizations?

    Science.gov (United States)

    Addicott, Rachael; Shortell, Stephen M

    2014-01-01

    The establishment of accountable care organizations (ACOs) in the Affordable Care Act (ACA) was intended to support both cost savings and high-quality care. However, a key challenge will be to ensure that governance and accountability mechanisms are sufficient to support those twin ambitions. This exploratory study considers how recently developed ACOs have established governance structures and accountability mechanisms, particularly focusing on attempts at collaborative accountability and shared governance arrangements. Four case studies of ACOs across the United States were undertaken, with data collected throughout 2012. These involved 34 semistructured interviews with ACO administrative and clinical leaders, observation of nine meetings, and a review of documentary materials from each ACO. We identified very few examples of physicians being held to account as a collective and therefore only limited evidence of collaborative accountability impacting on behavior change. However, ACO leaders do have many mechanisms available to stimulate change across physicians. The challenge is to determine governance structure(s) and accountability mechanisms that facilitate the most effective combination of approaches, measures, incentives, and sanctions to achieve the goals of more accountable care. Accountability structures and processes will need to be tailored to local membership composition, historical evolution, and current stage of development. There are also some common lessons to be drawn. Shared goals and incentives should be reflected through performance criteria. It is important to align measures and thresholds across payers to ensure ACOs are not unnecessarily burdened or compromised by reporting on different and potentially disjointed measures. Finally, emphasis needs to be placed on the importance of credible, transparent data. 
This exploratory study provides early evidence regarding how ACOs are establishing their governance and accountability arrangements and

  5. Using lexical variables to predict picture-naming errors in jargon aphasia

    Directory of Open Access Journals (Sweden)

    Catherine Godbold

    2015-04-01

    Full Text Available Introduction Individuals with jargon aphasia produce fluent output which often comprises high proportions of non-word errors (e.g., maf for dog). Research has been devoted to identifying the underlying mechanisms behind such output. Some accounts posit a reduced flow of spreading activation between levels in the lexical network (e.g., Robson et al., 2003). If activation level differences across the lexical network are a cause of non-word outputs, we would predict improved performance when target items reflect an increased flow of activation between levels (e.g., more frequently-used words are often represented by higher resting levels of activation). This research investigates the effect of lexical properties of targets (e.g., frequency, imageability) on accuracy, error type (real word vs. non-word) and target-error overlap of non-word errors in a picture naming task by individuals with jargon aphasia. Method Participants were 17 individuals with Wernicke's aphasia, who produced a high proportion of non-word errors (>20% of errors) on the Philadelphia Naming Test (PNT; Roach et al., 1996). The data were retrieved from the Moss Aphasic Psycholinguistic Database Project (MAPPD; Mirman et al., 2010). We used a series of mixed models to test whether lexical variables predicted accuracy, error type (real word vs. non-word) and target-error overlap for the PNT data. As lexical variables tend to be highly correlated, we performed a principal components analysis to reduce the variables into five components representing variables associated with phonology (length, phonotactic probability, neighbourhood density and neighbourhood frequency), semantics (imageability and concreteness), usage (frequency and age-of-acquisition), name agreement and visual complexity. Results and Discussion Table 1 shows the components that made a significant contribution to each model. Individuals with jargon aphasia produced more correct responses and fewer non-word errors relative to

  6. Maintenance strategies to reduce downtime due to machine positional errors

    OpenAIRE

    Shagluf, Abubaker; Longstaff, A.P.; Fletcher, S.

    2014-01-01

    Proceedings of Maintenance Performance Measurement and Management (MPMM) Conference 2014. Manufacturing strives to reduce waste and increase Overall Equipment Effectiveness (OEE). When managing machine tool maintenance, a manufacturer must apply an appropriate decision technique in order to reveal hidden costs associated with production losses, reduce equipment downtime competently and similarly identify the machine's performance. Total productive maintenance (TPM) is a maintenance progr...

  7. The modulating effect of personality traits on neural error monitoring: evidence from event-related FMRI.

    Science.gov (United States)

    Sosic-Vasic, Zrinka; Ulrich, Martin; Ruchsow, Martin; Vasic, Nenad; Grön, Georg

    2012-01-01

    The present study investigated the association between traits of the Five Factor Model of Personality (Neuroticism, Extraversion, Openness for Experiences, Agreeableness, and Conscientiousness) and neural correlates of error monitoring obtained from a combined Eriksen-Flanker-Go/NoGo task during event-related functional magnetic resonance imaging in 27 healthy subjects. Individual expressions of personality traits were measured using the NEO-PI-R questionnaire. Conscientiousness correlated positively with error signaling in the left inferior frontal gyrus and adjacent anterior insula (IFG/aI). A second strong positive correlation was observed in the anterior cingulate gyrus (ACC). Neuroticism was negatively correlated with error signaling in the inferior frontal cortex possibly reflecting the negative inter-correlation between both scales observed on the behavioral level. Under present statistical thresholds no significant results were obtained for remaining scales. Aligning the personality trait of Conscientiousness with task accomplishment striving behavior the correlation in the left IFG/aI possibly reflects an inter-individually different involvement whenever task-set related memory representations are violated by the occurrence of errors. The strong correlations in the ACC may indicate that more conscientious subjects were stronger affected by these violations of a given task-set expressed by individually different, negatively valenced signals conveyed by the ACC upon occurrence of an error. Present results illustrate that for predicting individual responses to errors underlying personality traits should be taken into account and also lend external validity to the personality trait approach suggesting that personality constructs do reflect more than mere descriptive taxonomies.

  9. A Harmonious Accounting Duo?

    Science.gov (United States)

    Schapperle, Robert F.; Hardiman, Patrick F.

    1992-01-01

    Accountants have urged "harmonization" of standards between the Governmental Accounting Standards Board and the Financial Accounting Standards Board, recommending similar reporting of like transactions. However, varying display of similar accounting events does not necessarily indicate disharmony. The potential for problems because of…

  10. SIMULATION OF INERTIAL NAVIGATION SYSTEM ERRORS AT AERIAL PHOTOGRAPHY FROM UAV

    Directory of Open Access Journals (Sweden)

    R. Shults

    2017-05-01

    Full Text Available The problem of accuracy determination of the UAV position using INS at aerial photography can be resolved in two different ways: modelling of measurement errors or in-field calibration for INS. The paper presents the results of INS errors research by mathematical modelling. In paper were considered the following steps: developing of INS computer model; carrying out INS simulation; using reference data without errors, estimation of errors and their influence on maps creation accuracy by UAV data. It must be remembered that the values of orientation angles and the coordinates of the projection centre may change abruptly due to the influence of the atmosphere (different air density, wind, etc.. Therefore, the mathematical model of the INS was constructed taking into account the use of different models of wind gusts. For simulation were used typical characteristics of micro electromechanical (MEMS INS and parameters of standard atmosphere. According to the simulation established domination of INS systematic errors that accumulate during the execution of photographing and require compensation mechanism, especially for orientation angles. MEMS INS have a high level of noise at the system input. Thanks to the developed model, we are able to investigate separately the impact of noise in the absence of systematic errors. According to the research was found that on the interval of observations in 5 seconds the impact of random and systematic component is almost the same. The developed model of INS errors studies was implemented in Matlab software environment and without problems can be improved and enhanced with new blocks.
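A minimal sketch of the kind of simulation described, for a single gyro channel: the rate error is modeled as a constant bias (the systematic component) plus white noise (the random component) and integrated to a heading error. The sampling rate and the bias/noise magnitudes are illustrative assumptions, not MEMS datasheet values:

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 0.01                 # assumed 100 Hz IMU sampling rate
t_total = 5.0             # the 5 s observation interval from the abstract
n = int(t_total / dt)

gyro_bias = 0.05          # deg/s constant bias (systematic, illustrative)
noise_std = 0.5           # deg/s white noise per sample (random, illustrative)

rate_error = gyro_bias + rng.normal(0.0, noise_std, size=n)
heading_error = np.cumsum(rate_error) * dt   # integrate rate error to an angle

# the systematic drift grows linearly in time, the angle random walk only as
# sqrt(time), so the bias dominates on longer intervals and needs compensation
systematic_drift = gyro_bias * t_total
random_walk_sd = noise_std * np.sqrt(t_total * dt)
print(round(systematic_drift, 3), round(random_walk_sd, 3))
```

With these (made-up) magnitudes the two components are of comparable size at 5 s, matching the qualitative finding reported in the abstract.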

  11. Evaluation of statistical models for forecast errors from the HBV model

    Science.gov (United States)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

    Summary: Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order auto-regressive model was constructed for the forecast errors. The parameters were conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order auto-regressive model was constructed for the forecast errors. In the third model, positive and negative errors were modeled separately. The errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe efficiency R_eff increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that the distributions are less reliable than Model 3. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
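The first model can be sketched end-to-end: transform observed and forecasted inflows with a Box-Cox transform, take the forecast errors in the transformed space, and fit a first-order autoregression to them. The synthetic data, the λ value and the true AR coefficient below are assumptions for illustration, not the Langvatn values, and the weather-class conditioning from the paper is omitted:

```python
import numpy as np

def box_cox(x, lam):
    """Box-Cox transform; lam = 0 gives the log transform."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

def fit_ar1(errors):
    """Least-squares estimate of phi in e_t = phi * e_{t-1} + w_t."""
    e = np.asarray(errors, dtype=float)
    return float(np.dot(e[:-1], e[1:]) / np.dot(e[:-1], e[:-1]))

# synthetic inflows and forecasts with autocorrelated multiplicative error
rng = np.random.default_rng(0)
n = 500
observed = rng.gamma(shape=5.0, scale=10.0, size=n)   # positive "inflows"
noise = np.zeros(n)
for t in range(1, n):                                 # AR(1) noise, phi = 0.6
    noise[t] = 0.6 * noise[t - 1] + rng.normal(0.0, 0.1)
forecast = observed * np.exp(noise)

err = box_cox(forecast, 0.3) - box_cox(observed, 0.3)
phi = fit_ar1(err)        # recovers a value near the true 0.6
print(round(phi, 2))
```

The fitted AR(1) is what allows yesterday's observed error to sharpen today's forecast distribution, which is exactly the correction that raised R_eff from 0.77 to 0.87.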

  12. Management Accounting and Supply Chain Strategy

    OpenAIRE

    Hald, Kim S.; Thrane, Sof

    2016-01-01

    Research positioned in the intersection between management accounting and supply chain management is increasing. However, the relationship between management accounting and supply chain strategies has been neglected in extant research. This research adds to literature on management accounting and supply chain management through exploring how supply chain strategy and management accounting is related, and how supply chain relationship structure modifies this relation. Building on a contingency...

  13. Standard Errors for Matrix Correlations.

    Science.gov (United States)

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)

  14. Entropy Error Model of Planar Geometry Features in GIS

    Institute of Scientific and Technical Information of China (English)

    LI Dajun; GUAN Yunlan; GONG Jianya; DU Daosheng

    2003-01-01

    Positional error of line segments is usually described by using the "g-band"; however, its band width depends on the choice of confidence level. In fact, given different confidence levels, a series of concentric bands can be obtained. To overcome the effect of the confidence level on the error indicator, by introducing the union entropy theory, we propose an entropy error ellipse index for a point, then extend it to line segments and polygons, establishing an entropy error band for line segments and an entropy error donut for polygons. The research shows that the entropy error index can be determined uniquely, is not influenced by the confidence level, and is suitable for describing the positional uncertainty of planar geometry features.
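The appeal of an entropy-based index is that, unlike a confidence band, the differential entropy of the positional error distribution involves no confidence-level choice. For a point with bivariate normal error the quantity is closed-form; the sketch below shows only this point-level measure (the paper's band and donut constructions are more involved), and the covariance values are illustrative:

```python
import numpy as np

def position_entropy(cov):
    """Differential entropy (nats) of a 2-D Gaussian positional error:
    H = 0.5 * ln((2*pi*e)^2 * det(cov)). No confidence level appears."""
    cov = np.asarray(cov, dtype=float)
    return 0.5 * np.log((2.0 * np.pi * np.e) ** 2 * np.linalg.det(cov))

# a point with 0.5 m and 0.3 m standard errors, uncorrelated
H = position_entropy([[0.25, 0.0], [0.0, 0.09]])
print(round(H, 3))
```

Because H depends only on the error covariance, two surveying methods can be ranked by positional uncertainty without first agreeing on a 95% vs 99% band.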

  15. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat error in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty has proposed a simple technique called the packet combining scheme in which error is corrected at the receiver from the erroneous copies. Packet Combining (PC) scheme fails: (i) when bit error locations in erroneous copies are the same and (ii) when multiple bit errors occur. Both these have been addressed recently by two schemes known as Packet Reversed Packet Combining (PRPC) Scheme, and Modified Packet Combining (MPC) Scheme respectively. In the letter, two error forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)
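The core packet-combining idea can be sketched as follows: comparing two erroneous copies bit-by-bit yields the candidate error locations, and flip patterns over those locations are searched until an integrity check passes. The weighted checksum below is a toy stand-in for a real CRC, and this simplified sketch is not the exact PRPC or MPC algorithm from the letter:

```python
from itertools import combinations

def check_ok(bits, checksum):
    # toy integrity check standing in for a CRC (an assumption, not a real CRC)
    return sum((i + 1) * b for i, b in enumerate(bits)) % 251 == checksum

def combine_copies(copy1, copy2, checksum):
    """Bits where two erroneous copies differ are the candidate error
    locations; search flip patterns of copy1 until the check passes."""
    diff = [i for i in range(len(copy1)) if copy1[i] != copy2[i]]
    for r in range(len(diff) + 1):
        for subset in combinations(diff, r):
            candidate = list(copy1)
            for i in subset:
                candidate[i] ^= 1
            if check_ok(candidate, checksum):
                return candidate
    return None  # fails when both copies err in the SAME positions (diff empty)

packet = [1, 0, 1, 1, 0, 0, 1, 0]
csum = sum((i + 1) * b for i, b in enumerate(packet)) % 251
copy1 = packet.copy(); copy1[2] ^= 1     # bit error at position 2
copy2 = packet.copy(); copy2[5] ^= 1     # bit error at position 5
print(combine_copies(copy1, copy2, csum) == packet)  # True
```

The `return None` branch is exactly failure mode (i) from the abstract: identical error locations leave no observable difference between the copies, which is what the PRPC and MPC refinements address.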

  16. Evaluating a medical error taxonomy.

    OpenAIRE

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a stand...

  17. Safeguards Accountability Network accountability and materials management

    International Nuclear Information System (INIS)

    Carnival, G.J.; Meredith, E.M.

    1985-01-01

    The Safeguards Accountability Network (SAN) is an on-line accountability system used by Rocky Flats Plant to provide accountability control of its nuclear material inventory. The system is also used to monitor and evaluate the use of the nuclear material inventory against programmatic objectives for materials management. The SAN system utilizes two Harris 800 Computers as central processing units. Enhancement plans are currently being formulated to provide automated data collection from process operations on the shop floor and from non-destructive analysis safeguards instrumentation. SAN, discussed in this paper, is an excellent system for basic accountability control of nuclear materials inventories and is a quite useful tool in evaluating the efficient use of nuclear materials inventories at Rocky Flats Plant

  18. SPACE-BORNE LASER ALTIMETER GEOLOCATION ERROR ANALYSIS

    Directory of Open Access Journals (Sweden)

    Y. Wang

    2018-05-01

    Full Text Available This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESAT satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as the platform positioning error, attitude measurement error, pointing angle measurement error and range measurement error, on the geolocation accuracy of the laser spot is analysed by simulated experiments. The reasons why these influences on geolocation accuracy differ between directions are discussed, and, to meet the accuracy requirement for laser control points, a design index for each error source is put forward.
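A first-order version of such an error propagation can be sketched as below. The altitude, orbit, pointing and ranging uncertainties are made-up illustrative numbers rather than ICESat design values, and a rigorous model would propagate the full attitude and position covariances through the geolocation equations:

```python
import math

def geolocation_error(altitude_m, sigma_pos_m, sigma_point_rad,
                      sigma_range_m, off_nadir_rad=0.0):
    """First-order propagation of the main error sources to the laser
    footprint position (an illustrative sketch, not the paper's model)."""
    # a pointing/attitude error displaces the footprint by ~altitude * angle
    horiz = math.hypot(sigma_pos_m, altitude_m * sigma_point_rad)
    # the range error maps mostly to height; pointing leaks into height
    # only through the off-nadir angle
    vert = math.hypot(sigma_range_m * math.cos(off_nadir_rad),
                      altitude_m * sigma_point_rad * math.sin(off_nadir_rad))
    return horiz, vert

# assumed: 600 km orbit, 5 m orbit error, 1.5 arcsec pointing, 0.1 m ranging
arcsec = math.pi / (180 * 3600)
h, v = geolocation_error(600e3, 5.0, 1.5 * arcsec, 0.1)
print(round(h, 2), round(v, 2))
```

The sketch makes the directional asymmetry in the abstract visible: at near-nadir geometry, pointing error dominates the horizontal budget (metres per arcsecond from orbit altitude) while ranging error dominates the vertical one.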

  19. New Horizons For Accounting: Social Accounting

    OpenAIRE

    Ertuna, Özer

    2012-01-01

    Currently the financial accounting function is going through an accelerated transformation. In this transformation, the area of interest of the accounting function is expanding to serve the information needs of a greater number of interest groups and a wider spectrum of interests, with financial, economic, social and environmental data related to the performance of companies. This transformation is initiated by the developments in stakeholder, corporate social responsibility, sustainability and environm...

  20. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  1. Error Patterns in Problem Solving.

    Science.gov (United States)

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  2. Rotational error in path integration: encoding and execution errors in angle reproduction.

    Science.gov (United States)

    Chrastil, Elizabeth R; Warren, William H

    2017-06-01

    Path integration is fundamental to human navigation. When a navigator leaves home on a complex outbound path, they are able to keep track of their approximate position and orientation and return to their starting location on a direct homebound path. However, there are several sources of error during path integration. Previous research has focused almost exclusively on encoding error, the error in registering the outbound path in memory. Here, we also consider execution error, the error in the response, such as turning and walking a homebound trajectory. In two experiments conducted in ambulatory virtual environments, we examined the contribution of execution error to the rotational component of path integration using angle reproduction tasks. In the reproduction tasks, participants rotated once and then rotated again to face the original direction, either reproducing the initial turn or turning through the supplementary angle. One outstanding difficulty in disentangling encoding and execution error during a typical angle reproduction task is that as the encoding angle increases, so does the required response angle. In Experiment 1, we dissociated these two variables by asking participants to report each encoding angle using two different responses: by turning to walk on a path parallel to the initial facing direction in the same (reproduction) or opposite (supplementary angle) direction. In Experiment 2, participants reported the encoding angle by turning both rightward and leftward onto a path parallel to the initial facing direction, over a larger range of angles. The results suggest that execution error, not encoding error, is the predominant source of error in angular path integration. These findings also imply that the path integrator uses an intrinsic (action-scaled) rather than an extrinsic (objective) metric.
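The encoding/execution decomposition can be illustrated with a toy variance model: a reproduction response and a supplementary response to the same turn share one remembered (encoded) angle but have independent execution noise, so the covariance between the two response types recovers the encoding variance. All magnitudes are hypothetical, and this is a statistical sketch, not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10000
theta = 90.0                  # encoding angle in degrees (illustrative)

enc_sd, exe_sd = 3.0, 8.0     # hypothetical encoding / execution error SDs
encoded = theta + rng.normal(0, enc_sd, n)       # shared memory of the turn

# reproduction: turn back through the remembered angle
repro = encoded + rng.normal(0, exe_sd, n)
# supplementary: turn onward through 180 minus the remembered angle
supp = (180.0 - encoded) + rng.normal(0, exe_sd, n)

# encoding error is common to both responses, execution error is not:
# cov(repro, supp) = -Var(encoded), so the encoding variance is recoverable
enc_var_est = -float(np.cov(repro, supp)[0, 1])
print(round(enc_var_est, 1))
```

Because the two response types diverge in sign with respect to the remembered angle, a shared memory error pushes them in opposite directions, which is what lets the decomposition separate memory noise from motor noise.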

  3. Performance, postmodernity and errors

    DEFF Research Database (Denmark)

    Harder, Peter

    2013-01-01

    speaker’s competency (note the –y ending!) reflects adaptation to the community langue, including variations. This reversal of perspective also reverses our understanding of the relationship between structure and deviation. In the heyday of structuralism, it was tempting to confuse the invariant system...... with the prestige variety, and conflate non-standard variation with parole/performance and class both as erroneous. Nowadays the anti-structural sentiment of present-day linguistics makes it tempting to confuse the rejection of ideal abstract structure with a rejection of any distinction between grammatical...... as deviant from the perspective of function-based structure and discuss to what extent the recognition of a community langue as a source of adaptive pressure may throw light on different types of deviation, including language handicaps and learner errors....

  4. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. DETECTING AND REPORTING THE FRAUDS AND ERRORS BY THE AUDITOR

    OpenAIRE

    Ovidiu Constantin Bunget; Alin Constantin Dumitrescu

    2009-01-01

    Responsibility for preventing and detecting fraud rests with the management of entities. Although the auditor is not, and cannot be held, responsible for preventing fraud and errors, in his work he can play a positive role in preventing fraud and errors by deterring their occurrence. The auditor should plan and perform the audit with an attitude of professional skepticism, recognizing that conditions or events may be found that indicate that fraud or error may exist. Based on the audit risk assessment, aud...

  6. Accounting as Myth Maker

    Directory of Open Access Journals (Sweden)

    Kathy Rudkin

    2007-06-01

    Full Text Available Accounting is not only a technical apparatus, but also manifests a societal dimension. This paper proposes that accounting is a protean and complex form of myth making, and as such forms a cohesive tenet in societies. It is argued that there are intrinsic parallels between the theoretical attributes of myth and accounting practice, and that these mythical characteristics sustain the existence and acceptance of accounting and its consequences in societies over time. A theoretical exploration of accounting as a form of myth reveals accounting as pluralistic and culturally sensitive. Such an analysis challenges theoretical explanations of accounting that are presented as a “grand narrative” universal understanding of accounting. Manifestations of the attributes of myth are described in the calculus and artefacts of accounting practice to demonstrate how accounting stories and beliefs are used as a form of myth by individuals to inform and construe their world picture.

  7. Accountability in Health Care

    DEFF Research Database (Denmark)

    Vrangbæk, Karsten; Byrkjeflot, Haldor

    2016-01-01

    The debate on accountability within the public sector has been lively in the past decade. Significant progress has been made in developing conceptual frameworks and typologies for characterizing different features and functions of accountability. However, there is a lack of sector specific...... adjustment of such frameworks. In this article we present a framework for analyzing accountability within health care. The paper makes use of the concept of "accountability regime" to signify the combination of different accountability forms, directions and functions at any given point in time. We show...... that reforms can introduce new forms of accountability, change existing accountability relations or change the relative importance of different accountability forms. They may also change the dominant direction and shift the balance between different functions of accountability. We further suggest...

  8. Critical evidence for the prediction error theory in associative learning.

    Science.gov (United States)

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-03-10

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning.

  9. Barriers to medication error reporting among hospital nurses.

    Science.gov (United States)

    Rutledge, Dana N; Retrosi, Tina; Ostrowski, Gary

    2018-03-01

    The study purpose was to report medication error reporting barriers among hospital nurses, and to determine validity and reliability of an existing medication error reporting barriers questionnaire. Hospital medication errors typically occur between ordering of a medication to its receipt by the patient with subsequent staff monitoring. To decrease medication errors, factors surrounding medication errors must be understood; this requires reporting by employees. Under-reporting can compromise patient safety by disabling improvement efforts. This 2017 descriptive study was part of a larger workforce engagement study at a faith-based Magnet ® -accredited community hospital in California (United States). Registered nurses (~1,000) were invited to participate in the online survey via email. Reported here are sample demographics (n = 357) and responses to the 20-item medication error reporting barriers questionnaire. Using factor analysis, four factors that accounted for 67.5% of the variance were extracted. These factors (subscales) were labelled Fear, Cultural Barriers, Lack of Knowledge/Feedback and Practical/Utility Barriers; each demonstrated excellent internal consistency. The medication error reporting barriers questionnaire, originally developed in long-term care, demonstrated good validity and excellent reliability among hospital nurses. Substantial proportions of American hospital nurses (11%-48%) considered specific factors as likely reporting barriers. Average scores on most barrier items were categorised "somewhat unlikely." The highest six included two barriers concerning the time-consuming nature of medication error reporting and four related to nurses' fear of repercussions. Hospitals need to determine the presence of perceived barriers among nurses using questionnaires such as the medication error reporting barriers and work to encourage better reporting. Barriers to medication error reporting make it less likely that nurses will report medication

  10. Two-species occupancy modeling accounting for species misidentification and nondetection

    Science.gov (United States)

    Chambert, Thierry; Grant, Evan H. Campbell; Miller, David A. W.; Nichols, James; Mulder, Kevin P.; Brand, Adrianne B.

    2018-01-01

    1. In occupancy studies, species misidentification can lead to false positive detections, which can cause severe estimator biases. Currently, all models that account for false positive errors only consider omnibus sources of false detections and are limited to single species occupancy. 2. However, false detections for a given species often occur because of the misidentification with another, closely-related species. To exploit this explicit source of false positive detection error, we develop a two-species occupancy model that accounts for misidentifications between two species of interest. As with other false positive models, identifiability is greatly improved by the availability of unambiguous detections at a subset of site-occasions. Here, we consider the case where some of the field observations can be confirmed using laboratory or other independent identification methods (“confirmatory data”). 3. We performed three simulation studies to (1) assess the model’s performance under various realistic scenarios, (2) investigate the influence of the proportion of confirmatory data on estimator accuracy, and (3) compare the performance of this two-species model with that of the single-species false positive model. The model shows good performance under all scenarios, even when only small proportions of detections are confirmed (e.g., 5%). It also clearly outperforms the single-species model.
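    The core of the design described above — ambiguous field identifications of which a subset is independently confirmed — can be illustrated with a toy simulation: detections of species A are occasionally misrecorded as species B, and a small fraction of records is verified in the laboratory. All rates below are invented for illustration; the actual model in the paper is fit by likelihood methods not shown here.

    ```python
    import random

    random.seed(42)

    P_DETECT = 0.6    # detection probability given presence (invented)
    P_MISID = 0.1     # probability a detection of A is recorded as B (invented)
    P_CONFIRM = 0.05  # fraction of records verified in the lab (invented)

    # Simulate survey records at sites occupied by species A only
    records = []
    for site in range(1000):
        if random.random() < P_DETECT:
            recorded = "B" if random.random() < P_MISID else "A"
            confirmed = random.random() < P_CONFIRM
            records.append((recorded, confirmed))

    # Every "B" record here is a false positive caused by misidentification;
    # the confirmed subset is what makes the occupancy model identifiable
    false_positives_for_B = sum(1 for r, _ in records if r == "B")
    confirmed_records = sum(1 for _, c in records if c)
    print(false_positives_for_B, confirmed_records)
    ```

    Even with only ~5% of records confirmed, the confirmed subset anchors the misidentification rate, which is why the paper reports good estimator performance at small confirmation proportions.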

  11. Numerical simulation of spatial whole-body vibration behaviour of sitting man taking into account individual anthropometry and position; Numerische Simulation des raeumlichen Ganzkoerperschwingungsverhaltens des sitzenden Menschen unter Beruecksichtigung der individuellen Anthropometrie und Haltung

    Energy Technology Data Exchange (ETDEWEB)

    Pankoke, S.

    2003-07-01

    A dynamic FE model of the anatomy of humans in sitting position is presented for assessing the dynamic internal response of the human body to the effect of external vibrations. The model can be adapted to individual body measures, different positions and different spatial orientation. It was verified on the basis of extensive measured data. The problem of contact between the human body and the driver seat is solved by a simplified static description. The model comprises a sub-model of the lumbar vertebral column for assessing the spatial load distributions in this body region. [German] In der vorliegenden Arbeit wird ein dynamisches, an der menschlichen Anatomie orientiertes Finite-Elemente-Modell des sitzenden Menschen vorgestellt, das es gestaltet, dynamische innere Antworten des Koerpers auf von aussen auf den Menschen einwirkende Schwingungen zu ermitteln. Das Modell ist ueber eine Auswahl anthropometrischer Masse an das Schwingungsverhalten eines Individuums anpassbar und ermoeglicht zudem die Simulation von Schwingungseinwirkungen in unterschiedlichen Haltungen und in allen Raumrichtungen. Die Modellverifikation erfolgte an umfangreichen Messdatenbestaenden. Das Kontaktproblem des Menschen zum Fahrzeugsitz ist durch eine vereinfachte statische Beschreibung abgebildet. Ferner beinhaltet das Ganzkoerpermodell ein Submodell der Lendenwirbelsaeule, mit dessen Hilfe die aus den Ganzkoerperschwingungen folgenden raeumlichen Beanspruchungsverteilungen in der Lendenwirbelsaeule ermittelt werden koennen. (orig.)

  12. Understanding error generation in fused deposition modeling

    International Nuclear Information System (INIS)

    Bochmann, Lennart; Transchel, Robert; Wegener, Konrad; Bayley, Cindy; Helu, Moneer; Dornfeld, David

    2015-01-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08–0.30 mm) are generally greater than in the x direction (0.12–0.62 mm) and the z direction (0.21–0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology. (paper)
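    In an error budget of this kind, accuracy and precision are typically computed per axis from repeated measurements of a nominal position: accuracy as the mean deviation from nominal, precision as the spread of the deviations. A minimal sketch of that calculation (the measurement values are invented, not taken from the study):

    ```python
    import statistics

    def accuracy_and_precision(measured, nominal):
        """Accuracy: mean deviation from the nominal position.
        Precision: sample standard deviation of the deviations."""
        deviations = [m - nominal for m in measured]
        return statistics.mean(deviations), statistics.stdev(deviations)

    # Repeated measurements of a nominally 50.00 mm feature (invented data)
    x_axis = [50.12, 50.18, 50.10, 50.15, 50.14]
    acc, prec = accuracy_and_precision(x_axis, 50.0)
    print(acc, prec)  # systematic offset and repeatability, in mm
    ```

    Repeating this per axis and per axis position is what allows the axis-dependent ranges (e.g. 0.08–0.30 mm in y) to be tabulated.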

  13. Understanding error generation in fused deposition modeling

    Science.gov (United States)

    Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David

    2015-03-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.

  14. Asteroid orbital error analysis: Theory and application

    Science.gov (United States)

    Muinonen, K.; Bowell, Edward

    1992-01-01

    We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation does give the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
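    The law of error propagation invoked here maps the covariance matrix of the orbital elements into a positional covariance through the linearized relation Σ_pos = J Σ_elem Jᵀ, where J is the Jacobian of position with respect to the elements. A minimal NumPy sketch of that step (all matrix values are invented for illustration and are not taken from the paper):

    ```python
    import numpy as np

    # Hypothetical covariance of two orbital elements at epoch (invented values)
    cov_elements = np.array([[1e-6, 2e-7],
                             [2e-7, 5e-7]])

    # Hypothetical Jacobian of sky-plane position w.r.t. the elements,
    # evaluated at some prediction epoch (invented values)
    J = np.array([[3.0, 0.5],
                  [0.2, 4.0]])

    # Linearized law of error propagation: Sigma_pos = J @ Sigma_elem @ J.T
    cov_position = J @ cov_elements @ J.T

    # The 1-sigma uncertainty ellipse follows from the eigendecomposition
    eigvals, eigvecs = np.linalg.eigh(cov_position)
    semi_axes = np.sqrt(eigvals)  # 1-sigma semi-axes of the error ellipse
    print(cov_position)
    print(semi_axes)
    ```

    Propagating the same covariance with Jacobians evaluated at earlier or later epochs yields the past and future uncertainty ellipsoids the abstract mentions.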

  15. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

    Full Text Available BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: In numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R², and introduced sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity was decreased for some, but not all, rare haplotypes. The overall error rate generally increased with an increasing number of loci, increasing minor allele frequency of SNPs, decreasing correlation between the alleles, and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides the information whether a specific risk haplotype can be expected to be reconstructed with essentially no or with high misclassification, and thus on the magnitude of expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
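    Sensitivity and specificity applied per haplotype are the usual classification quantities: treating "carries haplotype h" as the positive class, both follow from the misclassification matrix of true versus reconstructed assignments. A small illustrative sketch (the counts are invented, not from the KORA data):

    ```python
    # Per-haplotype misclassification counts: true status vs. reconstructed status
    # (invented numbers for illustration)
    tp = 180  # truly carry h, reconstructed as carrying h
    fn = 20   # truly carry h, reconstructed as not carrying h
    fp = 5    # truly without h, reconstructed as carrying h
    tn = 795  # truly without h, reconstructed as not carrying h

    sensitivity = tp / (tp + fn)  # fraction of true carriers recovered
    specificity = tn / (tn + fp)  # fraction of true non-carriers correctly excluded
    error_rate = (fp + fn) / (tp + fn + fp + tn)  # overall misclassification

    print(sensitivity, specificity, error_rate)
    ```

    The two measures capture the two dimensions of the misclassification matrix: a rare haplotype can have poor sensitivity while its specificity, dominated by the many true non-carriers, stays high.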

  16. Harmonisation of agricultural accounting

    Directory of Open Access Journals (Sweden)

    Jaroslav Sedláček

    2007-01-01

    Full Text Available This paper deals with the accounting of biological assets. Two approaches are described: the Czech one and the international one. The International Accounting Standards aim at a more faithful presentation of the economic processes in agricultural activities than Czech accounting legislation does. The comparison of the two approaches revealed some differences that can influence the financial statements of enterprises. The main difference stems from the application of fair value, which the international accounting standards prescribe for biological assets. The international accounting standards give precedence to the true and fair view principle, while Czech accounting gives precedence to the prudence principle.

  17. Residual rotational set-up errors after daily cone-beam CT image guided radiotherapy of locally advanced cervical cancer

    International Nuclear Information System (INIS)

    Laursen, Louise Vagner; Elstrøm, Ulrik Vindelev; Vestergaard, Anne; Muren, Ludvig P.; Petersen, Jørgen Baltzer; Lindegaard, Jacob Christian; Grau, Cai; Tanderup, Kari

    2012-01-01

    Purpose: Due to the often quite extended treatment fields in cervical cancer radiotherapy, uncorrected rotational set-up errors result in a potential risk of target miss. This study reports on the residual rotational set-up error after using daily cone beam computed tomography (CBCT) to position cervical cancer patients for radiotherapy treatment. Methods and materials: Twenty-five patients with locally advanced cervical cancer had daily CBCT scans (650 CBCTs in total) prior to treatment delivery. We retrospectively analyzed the translational shifts made in the clinic prior to each treatment fraction as well as the residual rotational errors remaining after translational correction. Results: The CBCT-guided couch movement resulted in a mean translational 3D vector correction of 7.4 mm. Residual rotational error resulted in a target shift exceeding 5 mm in 57 of the 650 treatment fractions. Three patients alone accounted for 30 of these fractions. Nine patients had no shifts exceeding 5 mm and 13 patients had 5 or less treatment fractions with such shifts. Conclusion: Twenty-two of the 25 patients have none or few treatment fractions with target shifts larger than 5 mm due to residual rotational error. However, three patients display a significant number of shifts suggesting a more systematic set-up error.
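    The geometric reason a residual rotation can produce a several-millimetre target shift is that a point at distance d from the rotation axis moves along a chord of length 2·d·sin(θ/2). A back-of-the-envelope sketch of this relation (the numbers are illustrative, not taken from the study):

    ```python
    import math

    def shift_from_rotation(distance_mm: float, angle_deg: float) -> float:
        """Chord-length displacement of a point at `distance_mm` from the
        rotation axis after an uncorrected rotation of `angle_deg`."""
        theta = math.radians(angle_deg)
        return 2.0 * distance_mm * math.sin(theta / 2.0)

    # With the extended fields used in cervical cancer radiotherapy, points far
    # from the isocentre are sensitive to small residual rotations:
    print(shift_from_rotation(100.0, 3.0))  # ~5.2 mm at 10 cm for a 3° rotation
    ```

    This is why extended target volumes can exceed a 5 mm action level even after the translational couch correction has been applied.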

  18. Analysis of error-correction constraints in an optical disk

    Science.gov (United States)

    Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David

    1996-07-01

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
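    The final CRC mentioned above acts as a last-resort detector of residual uncorrected (or miscorrected) errors: the decoder recomputes a checksum over the sector and compares it with the stored value. A minimal sketch of the principle using Python's standard-library CRC-32 (the real CD-ROM EDC uses a different 32-bit polynomial; this only illustrates the mechanism):

    ```python
    import zlib

    sector = bytes(range(256)) * 8   # stand-in for a 2048-byte data sector
    stored_crc = zlib.crc32(sector)  # checksum written alongside the data

    # A residual error that slipped past Reed Solomon decoding flips one byte
    corrupted = bytearray(sector)
    corrupted[1000] ^= 0x01

    assert zlib.crc32(sector) == stored_crc            # clean sector passes
    assert zlib.crc32(bytes(corrupted)) != stored_crc  # residual error is caught
    ```

    A failed CRC cannot correct anything; it only flags the sector, which is the "status of the final cyclic redundancy check" reported in the simulation.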

  19. Organizational safety culture and medical error reporting by Israeli nurses.

    Science.gov (United States)

    Kagan, Ilya; Barnoy, Sivia

    2013-09-01

    To investigate the association between patient safety culture (PSC) and the incidence and reporting rate of medical errors by Israeli nurses. Self-administered structured questionnaires were distributed to a convenience sample of 247 registered nurses enrolled in training programs at Tel Aviv University (response rate = 91%). The questionnaire's three sections examined the incidence of medication mistakes in clinical practice, the reporting rate for these errors, and the participants' views and perceptions of the safety culture in their workplace at three levels (organizational, departmental, and individual performance). Pearson correlation coefficients, t tests, and multiple regression analysis were used to analyze the data. Most nurses encountered medical errors from a daily to a weekly basis. Six percent of the sample never reported their own errors, while half reported their own errors "rarely or sometimes." The level of PSC was positively and significantly correlated with the error reporting rate. PSC, place of birth, error incidence, and not having an academic nursing degree were significant predictors of error reporting, together explaining 28% of variance. This study confirms the influence of an organizational safety climate on readiness to report errors. Senior healthcare executives and managers can make a major impact on safety culture development by creating and promoting a vision and strategy for quality and safety and fostering their employees' motivation to implement improvement programs at the departmental and individual level. A positive, carefully designed organizational safety culture can encourage error reporting by staff and so improve patient safety. © 2013 Sigma Theta Tau International.

  20. Abnormal error monitoring in math-anxious individuals: evidence from error-related brain potentials.

    Directory of Open Access Journals (Sweden)

    Macarena Suárez-Pellicioni

    Full Text Available This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN.

  1. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis cart system. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Unrevised unidosis carts showed 0.9% medication errors (264) versus 0.6% (154) in unidosis carts that had been previously revised. In the carts that were not revised, 70.83% of the errors were caused when setting up the unidosis carts. The rest were due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%), or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: We conclude that unidosis carts need to be revised and that a computerized prescription system is needed to avoid transcription errors. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to the hospitalization units, the error rate diminishes to 0.3%.

  2. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    Abstract. Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J. Title: Prioritising interventions against medication errors – the importance of a definition. Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark. Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors, including an index of error types for each stage in the medication process, was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary......

  3. Social aspects of clinical errors.

    Science.gov (United States)

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors.

  4. 7 CFR 1000.77 - Adjustment of accounts.

    Science.gov (United States)

    2010-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Milk), DEPARTMENT OF AGRICULTURE GENERAL PROVISIONS OF FEDERAL MILK MARKETING ORDERS... handler's reports, books, records, or accounts, or other verification discloses errors resulting in money...

  5. A slicing-based approach for locating type errors

    NARCIS (Netherlands)

    T.B. Dinesh; F. Tip (Frank)

    1998-01-01

    The effectiveness of a type checking tool strongly depends on the accuracy of the positional information that is associated with type errors. We present an approach where the location associated with an error message e is defined as a slice P_e of the program P being type checked. We...

  6. A slicing-based approach for locating type errors

    NARCIS (Netherlands)

    T.B. Dinesh; F. Tip (Frank)

    1998-01-01

    The effectiveness of a type checking tool strongly depends on the accuracy of the positional information that is associated with type errors. We present an approach where the location associated with an error message e is defined as a slice P_e of the program P being type checked. We...

  7. The Frame Constraint on Experimentally Elicited Speech Errors in Japanese

    Science.gov (United States)

    Saito, Akie; Inoue, Tomoyoshi

    2017-01-01

    The so-called syllable position effect in speech errors has been interpreted as reflecting constraints posed by the frame structure of a given language, which is separately operating from linguistic content during speech production. The effect refers to the phenomenon that when a speech error occurs, replaced and replacing sounds tend to be in the…

  8. Accountability: A Mosaic Image

    Science.gov (United States)

    Turner, Teri

    1977-01-01

    The problems involved in definition, implementation and control of accountability processes are discussed. It is stated that "...emotional involvement in accountability is one of the most difficult aspects to deal with, the chief emotion being fear". (Author/RW)

  9. Keeping Books of Account

    OpenAIRE

    2009-01-01

    Books of account are a record of a company’s income and spending. These records may be kept in paper or electronic form. The books of account contain the information for preparing the company’s annual financial statements.

  10. Culture and Accounting Practices

    Directory of Open Access Journals (Sweden)

    Carataș Maria Alina

    2017-01-01

    Besides the financial statements, rules, and calculations, accounting also implies professional reasoning, and the organizational culture promoted within the firm influences the accounting decisions. We analyzed and identified several accounting policies determined by the influence of corporate governance and organizational culture.

  11. Making Collaborative Innovation Accountable

    DEFF Research Database (Denmark)

    Sørensen, Eva

    The public sector is increasingly expected to be innovative, but the price for a more innovative public sector might be that it becomes difficult to hold public authorities to account for their actions. The article explores the tensions between innovative and accountable governance, describes...... the foundation for these tensions in different accountability models, and suggests directions to take in analyzing the accountability of collaborative innovation processes....

  12. TIME MANAGEMENT FOR ACCOUNTANTS

    Directory of Open Access Journals (Sweden)

    Cristina Elena BIGIOI

    2016-06-01

    Full Text Available Time is money, and every accountant knows it. In our country, taxes change frequently, so accountants have to keep their fiscal knowledge up to date. The purpose of the article is to find out how accountants manage their time, taking into consideration the number of fiscal declarations and the frequency of fiscal changes. In this article we present some ways to improve time management for accountants.

  13. Accounting Applications---Introduction

    OpenAIRE

    Joshua Ronen

    1980-01-01

    Introduction to special issue on accounting applications. By publishing these papers together in one issue of Management Science we wish to accomplish the dual purpose of exposing management scientists to the application of their discipline to important accounting problems and of allowing management scientists and accountants to interact in areas of research and problem-solving, thus stimulating the interest of readers who are concerned with the problems of accounting.

  14. Cash Advance Accounting: Accounting Regulations and Practices

    Directory of Open Access Journals (Sweden)

    Aristita Rotila

    2012-12-01

    Full Text Available It is well known that entities often entrust to staff or third parties certain amounts of money in order to make payments on the entity's behalf, such sums being recorded in the accounting as cash advances. When the advances are granted in a foreign currency, the problem arises of which exchange rate to use when settling the advance, for converting into lei the payments that were carried out. In this article we signal the effect that the exchange rate used for reflecting in the accounts the settlement of cash advances in a foreign currency has on the information presented in the financial statements. We also signal some aspects of the accounting regulations concerning the definition, meaning and balance-sheet presentation of cash advances which, in our opinion, require clarification.

  15. The error sources appearing for the gamma radioactive source measurement in dynamic condition

    International Nuclear Information System (INIS)

    Sirbu, M.

    1977-01-01

    An error analysis for the measurement of gamma radioactive sources placed on the soil, carried out with the help of a helicopter, is presented. The analysis is based on a new formula that takes into account the gamma-ray attenuation factor of the helicopter walls. A complete error formula and an error diagram are given. (author)

  16. Quantification of human errors in level-1 PSA studies in NUPEC/JINS

    International Nuclear Information System (INIS)

    Hirano, M.; Hirose, M.; Sugawara, M.; Hashiba, T.

    1991-01-01

    THERP (Technique for Human Error Rate Prediction) method is mainly adopted to evaluate the pre-accident and post-accident human error rates. Performance shaping factors are derived by taking Japanese operational practice into account. Several examples of human error rates with calculational procedures are presented. The important human interventions of typical Japanese NPPs are also presented. (orig./HP)
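    The multiplicative adjustment of nominal error rates by performance shaping factors that THERP-style analyses use can be sketched as follows; all probabilities and factor values here are hypothetical, chosen only to illustrate the mechanics, not taken from the study:

    ```python
    # Illustrative THERP-style calculation (hypothetical numbers): a nominal
    # human error probability (HEP) is scaled by multiplicative performance
    # shaping factors (PSFs) reflecting, e.g., stress level or crew experience.

    def adjusted_hep(nominal_hep, psfs):
        """Scale a nominal HEP by multiplicative PSFs, capping at 1.0
        (the result must remain a probability)."""
        hep = nominal_hep
        for factor in psfs.values():
            hep *= factor
        return min(hep, 1.0)

    # Hypothetical PSFs for a post-accident diagnosis task:
    psfs = {"high_stress": 5.0, "experienced_crew": 0.5}
    hep = adjusted_hep(1e-3, psfs)  # ≈ 2.5e-3
    ```

    The cap at 1.0 matters in practice: stacking several unfavourable PSFs on a high nominal rate can otherwise push the product past a valid probability.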

  17. Article Errors in the English Writing of Saudi EFL Preparatory Year Students

    Science.gov (United States)

    Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.

    2017-01-01

    This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programe in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…

  18. Designing account management organizations

    NARCIS (Netherlands)

    Hart, van der H.W.C.; Kempeners, M.A.

    1999-01-01

    Organizational structures of account management systems are one of the most interesting and controversial parts of account management systems, because of the variety of organizational options that are available. The main focus is on the organization of account management systems and particularly on

  19. Lo Strategic Management Accounting

    OpenAIRE

    G. INVERNIZZI

    2005-01-01

    The essay investigates the information aggregates and the elements that make up strategic management accounting. It then analyzes the functions performed at the various stages of the strategic management process, observing its role within management accounting. Finally, it examines in depth the relationships between the levels of strategic management and strategic management accounting.

  20. Automated Accounting. Instructor Guide.

    Science.gov (United States)

    Moses, Duane R.

    This curriculum guide was developed to assist business instructors using Dac Easy Accounting College Edition Version 2.0 software in their accounting programs. The module consists of four units containing assignment sheets and job sheets designed to enable students to master competencies identified in the area of automated accounting. The first…

  1. Intelligent Accountability in Education

    Science.gov (United States)

    O'Neill, Onora

    2013-01-01

    Systems of accountability are "second order" ways of using evidence of the standard to which "first order" tasks are carried out for a great variety of purposes. However, more accountability is not always better, and processes of holding to account can impose high costs without securing substantial benefits. At their worst,…

  2. The Accounting Capstone Problem

    Science.gov (United States)

    Elrod, Henry; Norris, J. T.

    2012-01-01

    Capstone courses in accounting programs bring students experiences integrating across the curriculum (University of Washington, 2005) and offer unique (Sanyal, 2003) and transformative experiences (Sill, Harward, & Cooper, 2009). Students take many accounting courses without preparing complete sets of financial statements. Accountants not only…

  3. Implementing Replacement Cost Accounting

    Science.gov (United States)

    1976-12-01

    Implementing Replacement Cost Accounting. Thesis by John Ross Clickener, Naval Postgraduate School, Monterey, California. Available from the NPS Archive (Calhoun): http://hdl.handle.net/10945/17810

  4. Managerial Accounting. Study Guide.

    Science.gov (United States)

    Plachta, Leonard E.

    This self-instructional study guide is part of the materials for a college-level programmed course in managerial accounting. The study guide is intended for use by students in conjuction with a separate textbook, Horngren's "Accounting for Management Control: An Introduction," and a workbook, Curry's "Student Guide to Accounting for Management…

  5. Analysis of difficulties accounting and evaluating nuclear material of PWR fuel plant

    International Nuclear Information System (INIS)

    Zhang Min; Jue Ji; Liu Tianshu

    2013-01-01

    Background: Nuclear materials accountancy must be maintained for nuclear facilities, as required by the regulator in China. Currently, there are some unresolved problems in the nuclear materials accountancy of bulk-handling nuclear facilities. Purpose: The retention (holdup) values and measurement errors in the nuclear materials accountancy of a Pressurized Water Reactor (PWR) fuel plant are analyzed to meet the regulatory requirements. Methods: On the basis of the nuclear material accounting and evaluation data of a PWR fuel plant, an in-depth analysis is carried out of the ratios among the random error variance, the long-term systematic error variance, the short-term systematic error variance and the total error, including the evaluation of Material Unaccounted For (MUF), using the retention values measured in equipment and pipelines. Results: In the equipment and pipelines, the holdup estimation error and its proportion of the total error are not more than 5% and 1.5%, respectively, so the holdup estimate can be regarded as a constant in PWR nuclear material accountancy. The random error variance, long-term systematic error variance and short-term systematic error variance of the overall measurement, analytical and sampling methods are also obtained, providing a valuable reference for nuclear material accountancy. Conclusion: In nuclear material accountancy, the retention value can be considered a constant. The long-term systematic error is the main factor among all errors, especially in the overall measurement error and the sampling error. The proposals and measures were applied to the nuclear materials accountancy of the PWR fuel plant, and the capability of nuclear materials accountancy was improved. (authors)
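    The error-variance bookkeeping described above, with random, short-term systematic and long-term systematic components combining into a total MUF uncertainty, can be sketched as follows. The variance values are invented for illustration, since the abstract reports only their relative importance:

    ```python
    import math

    # Hypothetical error variances (arbitrary units squared) for an accounting
    # period; the long-term systematic component dominates, as concluded above.
    var_random = 0.40
    var_short_systematic = 0.25
    var_long_systematic = 1.10

    # Independent error components combine by adding variances.
    var_total = var_random + var_short_systematic + var_long_systematic
    sigma_muf = math.sqrt(var_total)

    # Share of each component in the total error variance:
    shares = {name: v / var_total for name, v in [
        ("random", var_random),
        ("short-term systematic", var_short_systematic),
        ("long-term systematic", var_long_systematic),
    ]}

    # An observed MUF is commonly tested against roughly 2*sigma_muf
    # (an approximate 95% significance threshold).
    alarm_threshold = 2 * sigma_muf
    ```

    With these illustrative numbers the long-term systematic component accounts for well over half of the total variance, which is why reducing it pays off most in improving accountancy capability.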

  6. Accounting for human factor in QC and QA inspections

    International Nuclear Information System (INIS)

    Goodman, J.

    1986-01-01

    Two types of human error during QC/QA inspection have been identified, and a method of accounting for the effects of human error in QC/QA inspections was developed. The evaluated proportion of discrepant items in the population is significantly affected by the human factor.

  7. ACCOUNTING RESPONSIBILITY FOR BUSINESS EVALUATION

    Directory of Open Access Journals (Sweden)

    DORU CÎRNU

    2015-12-01

    Full Text Available In today's world, the need to improve the quality of business management calls for significant changes in the organization and mode of business management. Establishing the appropriate level, structure and authority of business management depends in most cases on the size, the number of employees, the complexity of the technological and business processes, the market position and other factors. The development of a business requires the decentralization of operative functions, which means placing operative activities under the control of a greater number of managers. An important segment that has so far been neither sufficiently applied in Romanian practice nor sufficiently treated is the system of responsibility. One of the aims of this research is to stimulate more intensive work on initiating the modernization of accounting: first of all, improving and rationalizing legal accounting regulation, and motivating professional accountants' organizations to develop contemporary accounting principles and standards more quickly, in line with the tendencies of the European environment. Experience to date points to the necessity of a more comprehensive understanding of the place and role of management accounting, and within it of the system of accounting responsibility, in the preparation of business plans and budgets and in the creation of development and investment policy. The system of accounting responsibility should therefore enable the monitoring and control of the actual operational activities of each part of a decentralized business. The process of performance evaluation and accounting responsibility in a decentralized business organization is a significant element of an internal control system, and that fact is emphasized in this paper.

  8. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. 
In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes…

  9. Nonclassical measurements errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

    Discrete choice models, and in particular logit-type models, play an important role in understanding and quantifying individual or household behaviour in relation to transport demand. An example is the choice of travel mode for a given trip under the budget and time restrictions that the individuals face. To obtain reliable estimates of the income effect, it is of interest to investigate the magnitude of the estimation bias and, if possible, use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data that contains very detailed information about incomes. This gives a unique opportunity to learn about the magnitude and nature of the measurement error in income reported by the respondents in the Danish NTS compared to income from the administrative register (the correct measure). We find that the classical…
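    As a benchmark for the nonclassical case studied here, the attenuation caused by classical measurement error in a regressor can be illustrated with synthetic data. All numbers below are invented; this is not the authors' model or data:

    ```python
    import numpy as np

    # Classical measurement error in a linear model: noise in the income
    # regressor attenuates the OLS slope toward zero by the reliability ratio
    #   lambda = var(x) / (var(x) + var(u)).
    rng = np.random.default_rng(0)
    n = 100_000
    true_beta = 2.0
    income = rng.normal(0.0, 1.0, n)             # "register" income (correct measure)
    y = true_beta * income + rng.normal(0.0, 1.0, n)
    reported = income + rng.normal(0.0, 1.0, n)  # survey income with classical error

    def ols_slope(x, y):
        """Simple-regression OLS slope: cov(x, y) / var(x)."""
        return np.cov(x, y, bias=True)[0, 1] / np.var(x)

    slope_true = ols_slope(income, y)     # close to 2.0
    slope_noisy = ols_slope(reported, y)  # attenuated toward 2.0 * 0.5 = 1.0
    ```

    With equal signal and noise variances the reliability ratio is 0.5, so the estimated income effect is roughly halved; under nonclassical error the bias need not follow this simple pattern, which is the point of the paper.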

  10. Errors in abdominal computed tomography

    International Nuclear Information System (INIS)

    Stephens, S.; Marting, I.; Dixon, A.K.

    1989-01-01

    Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex: in 7 patients an abnormal structure was noted but interpreted as normal, whereas in 4 a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion had been ascribed (24 patients) and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small and large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab

  11. Analytical N beam position monitor method

    Directory of Open Access Journals (Sweden)

    A. Wegscheider

    2017-11-01

    Full Text Available Measurement and correction of focusing errors is of great importance for performance and machine protection of circular accelerators. Furthermore LHC needs to provide equal luminosities to the experiments ATLAS and CMS. High demands are also set on the speed of the optics commissioning, as the foreseen operation with β^{*}-leveling on luminosity will require many operational optics. A fast measurement of the β-function around a storage ring is usually done by using the measured phase advance between three consecutive beam position monitors (BPMs. A recent extension of this established technique, called the N-BPM method, was successfully applied for optics measurements at CERN, ALBA, and ESRF. We present here an improved algorithm that uses analytical calculations for both random and systematic errors and takes into account the presence of quadrupole, sextupole, and BPM misalignments, in addition to quadrupolar field errors. This new scheme, called the analytical N-BPM method, is much faster, further improves the measurement accuracy, and is applicable to very pushed beam optics where the existing numerical N-BPM method tends to fail.
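    For context, the established three-BPM technique that the analytical N-BPM method extends estimates the β-function at one BPM from the measured and model phase advances to two neighbouring BPMs. A minimal sketch with illustrative numbers follows; the function name and values are not from the paper:

    ```python
    import math

    def cot(phi):
        """Cotangent of a phase advance in radians."""
        return 1.0 / math.tan(phi)

    def beta_from_phase(beta1_model, ph12_model, ph13_model, ph12_meas, ph13_meas):
        """Classic 3-BPM estimate of beta at the first BPM: the model beta is
        rescaled by the ratio of measured to model cotangent differences of
        the phase advances to BPMs 2 and 3 (all phases in radians)."""
        return beta1_model * (cot(ph12_meas) - cot(ph13_meas)) / (
            cot(ph12_model) - cot(ph13_model))

    # With measured phase advances equal to the model, the model beta is recovered:
    beta = beta_from_phase(30.0, 1.1, 2.0, 1.1, 2.0)  # 30.0 m
    ```

    The N-BPM extensions discussed above improve on this estimator by combining many BPM triplets with an analytical error model, instead of relying on a single pair of phase advances that may sit near an unfavourable (near-degenerate) phase combination.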

  12. Analytical N beam position monitor method

    Science.gov (United States)

    Wegscheider, A.; Langner, A.; Tomás, R.; Franchi, A.

    2017-11-01

    Measurement and correction of focusing errors is of great importance for performance and machine protection of circular accelerators. Furthermore LHC needs to provide equal luminosities to the experiments ATLAS and CMS. High demands are also set on the speed of the optics commissioning, as the foreseen operation with β*-leveling on luminosity will require many operational optics. A fast measurement of the β -function around a storage ring is usually done by using the measured phase advance between three consecutive beam position monitors (BPMs). A recent extension of this established technique, called the N-BPM method, was successfully applied for optics measurements at CERN, ALBA, and ESRF. We present here an improved algorithm that uses analytical calculations for both random and systematic errors and takes into account the presence of quadrupole, sextupole, and BPM misalignments, in addition to quadrupolar field errors. This new scheme, called the analytical N-BPM method, is much faster, further improves the measurement accuracy, and is applicable to very pushed beam optics where the existing numerical N-BPM method tends to fail.

  13. The Responsibilities of Accountants

    OpenAIRE

    Ronald F Duska

    2005-01-01

    An accountant is a good accountant if in practicing his craft he is superb in handling the numbers. But a good accountant in handling the numbers can use that skill to misstate earnings to cover a multitude of problems with a company's books while staying within the law. So, the notion of a moral or ethical accountant is not the same as the notion of a good accountant. Our general principle would be that to be ethical a person has a responsibility to fulfil one's role or roles, as long as tha...

  14. Nuclear fuel lease accounting

    International Nuclear Information System (INIS)

    Danielson, A.H.

    1986-01-01

    The subject of nuclear fuel lease accounting is a controversial one that has received much attention over the years. This has occurred during a period when increasing numbers of utilities, seeking alternatives to traditional financing methods, have turned to leasing their nuclear fuel inventories. The purpose of this paper is to examine the current accounting treatment of nuclear fuel leases as prescribed by the Financial Accounting Standards Board (FASB) and the Federal Energy Regulatory Commission's (FERC's) Uniform System of Accounts. Cost accounting for leased nuclear fuel during the fuel cycle is also discussed

  15. Application of Joint Error Maximal Mutual Compensation to hexapod robots

    DEFF Research Database (Denmark)

    Veryha, Yauheni; Petersen, Henrik Gordon

    2008-01-01

    A good practice to ensure high-positioning accuracy in industrial robots is to use joint error maximum mutual compensation (JEMMC). This paper presents an application of JEMMC for positioning of hexapod robots to improve end-effector positioning accuracy. We developed an algorithm and simulation ...

  16. Accuracy of crystal structure error estimates

    International Nuclear Information System (INIS)

    Taylor, R.; Kennard, O.

    1986-01-01

    A statistical analysis of 100 crystal structures retrieved from the Cambridge Structural Database is reported. Each structure has been determined independently by two different research groups. Comparison of the independent results leads to the following conclusions: (a) The e.s.d.'s of non-hydrogen-atom positional parameters are almost invariably too small. Typically, they are underestimated by a factor of 1.4-1.45. (b) The extent to which e.s.d.'s are underestimated varies significantly from structure to structure and from atom to atom within a structure. (c) Errors in the positional parameters of atoms belonging to the same chemical residue tend to be positively correlated. (d) The e.s.d.'s of heavy-atom positions are less reliable than those of light-atom positions. (e) Experimental errors in atomic positional parameters are normally, or approximately normally, distributed. (f) The e.s.d.'s of cell parameters are grossly underestimated, by an average factor of about 5 for cell lengths and 2.5 for cell angles. There is marginal evidence that the accuracy of atomic-coordinate e.s.d.'s also depends on diffractometer geometry, refinement procedure, whether or not the structure has a centre of symmetry, and the degree of precision attained in the structure determination. (orig.)
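    The comparison underlying conclusion (a) can be sketched as follows: for a parameter determined independently twice, the RMS of the differences normalized by their combined e.s.d.'s should be 1 if the quoted e.s.d.'s are realistic, and values around 1.4 indicate the underestimation reported above. The coordinate pairs below are synthetic, not taken from the Cambridge Structural Database:

    ```python
    import math

    def esd_underestimation_factor(pairs):
        """pairs: (x1, s1, x2, s2) tuples of two independent determinations of
        the same parameter with their quoted e.s.d.'s. Returns the RMS of the
        normalized differences d = (x1 - x2) / sqrt(s1^2 + s2^2); a value
        above 1 means the quoted e.s.d.'s are too small by that factor."""
        z2 = [((x1 - x2) ** 2) / (s1 ** 2 + s2 ** 2) for x1, s1, x2, s2 in pairs]
        return math.sqrt(sum(z2) / len(z2))

    # Synthetic fractional coordinates from two hypothetical determinations:
    pairs = [
        (0.2513, 0.0004, 0.2501, 0.0004),
        (0.7322, 0.0003, 0.7310, 0.0004),
        (0.1108, 0.0005, 0.1121, 0.0005),
    ]
    factor = esd_underestimation_factor(pairs)  # > 1: e.s.d.'s underestimated
    ```

    In the study itself this kind of statistic, accumulated over many structure pairs, yields the factor of about 1.4-1.45 for non-hydrogen positional parameters and much larger factors for cell parameters.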

  17. ACCOUNTING ASPECTS OF PRICING AND TRANSFER PRICING

    Directory of Open Access Journals (Sweden)

    TÜNDE VERES

    2011-01-01

    Full Text Available Pricing methods in practice require a truly complex view of the business situation and depend on the strategy and market position of a company. The structure of a price seems simple: cost plus margin. Both categories are special areas of management accounting. Information about product costs, the allocation methodologies in cost accounting, and the analysis of revenue and of the different levels of margin all rely on information from the accounting system. This paper analyzes pricing methods from a management accounting perspective to show the role of the accounting system in short-term and long-term pricing and transfer pricing decisions.

  18. Cognitive errors: thinking clearly when it could be child maltreatment.

    Science.gov (United States)

    Laskey, Antoinette L

    2014-10-01

    Cognitive errors have been studied in a broad array of fields, including medicine. The more that is understood about how the human mind processes complex information, the more it becomes clear that certain situations are particularly susceptible to less than optimal outcomes because of these errors. This article explores how some of the known cognitive errors may influence the diagnosis of child abuse, resulting in both false-negative and false-positive diagnoses. Suggested remedies for these errors are offered. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Focusing errors in radiography - how they can be recognized and avoided. 2. rev. ed.

    International Nuclear Information System (INIS)

    Zimmer, E.A.; Zimmer-Brossy, M.

    1979-01-01

    The importance for daily practice of recognizing and judging focusing errors has led the authors to give this systematic and abundantly illustrated account of the most frequent focusing errors, as no such book had yet been published in Germany or abroad. To keep it as concise and handy as possible, the authors restrict themselves to the most important standard views and omit errors such as blurring owing to breathing or movement, as well as under- and overexposed pictures, which are easy to recognize and avoid. By contrast, they describe in detail the characteristic points and lines of orientation that must be checked to verify the technical quality of an X-ray picture. Knowledge of the typical features of a correctly focused X-ray picture is a precondition for understanding incorrectly focused pictures, which have characteristic properties of their own. Proper interpretation of an incorrectly focused picture then also permits detection of the cause of the focusing error, be it false centring or false positioning, so that quick and targeted correction becomes possible. To avoid unnecessary repeat X-rays, which are to be ruled out for reasons of radiation protection alone, each chapter ends with a remark stating in which cases the medical indication requires the repetition of an unserviceable X-ray. (orig./ORU) [de

  20. Optimization of sample absorbance for quantitative analysis in the presence of pathlength error in the IR and NIR regions

    International Nuclear Information System (INIS)

    Hirschfeld, T.; Honigs, D.; Hieftje, G.

    1985-01-01

    Optimal absorbance levels for quantitative analysis in the presence of photometric error have been described in the past. In newer instrumentation, such as FT-IR and NIRA spectrometers, the photometric error is no longer limiting. In these instruments, pathlength error due to cell or sampling irreproducibility is often a major concern. One can derive the optimal absorbance by taking both pathlength and photometric errors into account. This paper analyzes the case of pathlength error >> photometric error (trivial) and various cases in which the pathlength error and the photometric error are of the same order: adjustable concentration (trivial until dilution errors are considered), constant relative pathlength error (trivial), and constant absolute pathlength error. The latter, in particular, is analyzed in detail to give the behavior of the error, the behavior of the optimal absorbance in its presence, and the total error levels attainable
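    The trade-off described above can be sketched numerically. Assuming Beer's law and a constant photometric error, the relative concentration error is proportional to 10^A/A, minimized at the textbook A = 1/ln 10 ≈ 0.434; a constant absolute pathlength error adds a term proportional to 1/A that shifts the optimum to higher absorbance. All numerical error levels below are illustrative:

    ```python
    import math

    def rel_error(A, sigma_T=0.005, delta_b_term=0.0):
        """Relative concentration error at absorbance A.
        Photometric part: sigma_T * 10**A / (A * ln 10), from A = -log10(T).
        Pathlength part: delta_b_term / A models a constant *absolute*
        pathlength error (relative error shrinks as the cell gets longer)."""
        photometric = sigma_T * 10.0 ** A / (A * math.log(10.0))
        pathlength = delta_b_term / A
        return math.sqrt(photometric ** 2 + pathlength ** 2)

    def argmin_A(delta_b_term):
        """Grid search for the error-minimizing absorbance in (0.05, 2.05)."""
        grid = [0.05 + 0.001 * i for i in range(2000)]
        return min(grid, key=lambda A: rel_error(A, delta_b_term=delta_b_term))

    A_photometric_only = argmin_A(0.0)   # near the textbook 0.434
    A_with_pathlength = argmin_A(0.01)   # shifted to higher absorbance
    ```

    The shift of the optimum is the qualitative behavior the paper analyzes for the constant-absolute-pathlength-error case; the exact optimum depends on the ratio of the two error magnitudes.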

  1. Accounting for uncertain fault geometry in earthquake source inversions - I: theory and simplified application

    Science.gov (United States)

    Ragon, Théa; Sladen, Anthony; Simons, Mark

    2018-05-01

    The ill-posed nature of earthquake source estimation derives from several factors including the quality and quantity of available observations and the fidelity of our forward theory. Observational errors are usually accounted for in the inversion process. Epistemic errors, which stem from our simplified description of the forward problem, are rarely dealt with despite their potential to bias the estimate of a source model. In this study, we explore the impact of uncertainties related to the choice of a fault geometry in source inversion problems. The geometry of a fault structure is generally reduced to a set of parameters, such as position, strike and dip, for one or a few planar fault segments. While some of these parameters can be solved for, more often they are fixed to an uncertain value. We propose a practical framework to address this limitation by following a previously implemented method exploring the impact of uncertainties on the elastic properties of our models. We develop a sensitivity analysis to small perturbations of fault dip and position. The uncertainties in fault geometry are included in the inverse problem under the formulation of the misfit covariance matrix that combines both prediction and observation uncertainties. We validate this approach with the simplified case of a fault that extends infinitely along strike, using both Bayesian and optimization formulations of a static inversion. If epistemic errors are ignored, predictions are overconfident in the data and source parameters are not reliably estimated. In contrast, inclusion of uncertainties in fault geometry allows us to infer a robust posterior source model. Epistemic uncertainties can be many orders of magnitude larger than observational errors for great earthquakes (Mw > 8). Not accounting for uncertainties in fault geometry may partly explain observed shallow slip deficits for continental earthquakes. Similarly, ignoring the impact of epistemic errors can also bias estimates of
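    The covariance formulation described in the abstract can be sketched on a toy linear problem: a prediction covariance built from sensitivity kernels of the predictions with respect to the uncertain geometry parameters is added to the observational covariance, and the combined matrix weights the misfit. Everything below (matrix sizes, variances, the least-squares estimator) is invented for illustration and is not the authors' implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_data, n_model, n_geom = 6, 2, 2

    G = rng.normal(size=(n_data, n_model))   # forward operator (fixed fault geometry)
    K = rng.normal(size=(n_data, n_geom))    # sensitivity of predictions to geometry
    C_psi = np.diag([0.3, 0.1])              # prior variance of dip/position errors
    C_d = 0.01 * np.eye(n_data)              # observational covariance

    # Combined misfit covariance: observation noise plus epistemic prediction error.
    C_chi = C_d + K @ C_psi @ K.T

    # Weighted least-squares source estimate using the combined covariance:
    m_true = np.array([1.0, -0.5])
    d = G @ m_true + rng.multivariate_normal(np.zeros(n_data), C_chi)
    W = np.linalg.inv(C_chi)
    m_hat = np.linalg.solve(G.T @ W @ G, G.T @ W @ d)
    ```

    The point of the construction is visible even in this toy: when the epistemic term dominates C_chi, down-weighting the affected data keeps the posterior from being overconfident, which is exactly the failure mode reported when geometry uncertainty is ignored.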

  2. The image of accountants

    DEFF Research Database (Denmark)

    Baldvinsdottir, Gudrun; Burns, John; Nørreklit, Hanne

    2009-01-01

    Purpose - The aim of this paper is to investigate the extent to which a profound change in the image of accountants can be seen in the discourse used in accounting software adverts that have appeared in the professional publications of the Chartered Institute of Management Accountants over the last four decades. Design/methodology/approach - Methodologically, the paper draws from Barthes' work on the rhetoric of images and Giddens' work on modernity. By looking at accounting software adverts, an attempt is made to investigate the image of the accountant produced by the discourse of the adverts, and whether the image produced reflects a wider social change in society. Findings - It was found that in the 1970s and the 1980s the accountant was constructed as a responsible and rational person. In the 1990s, the accountant was presented as an instructed action man. However, in a recent advert…

  3. Cost accounting at GKSS

    International Nuclear Information System (INIS)

    Hinz, R.

    1979-01-01

    The GKSS has a cost accounting system comprising cost-type, cost-centre and cost-unit accounting, which permits comprehensive and detailed supervision of the accrual of costs and the use of funds, makes price setting for outside orders possible, and provides the necessary data for decision-making and planning. It fulfils the requirements of an orderly accounting system: a proper demarcation and transition between the financial accounts department and cost accounting is guaranteed, costs are accounted for fully, only on the basis of vouchers and only once, evaluation and distribution are unified, and the principle of causation is observed. Two employees are engaged in cost and services accounting. Although we strive to make adaptations as swiftly as possible, and constantly to adopt refinements and supplementary processes to improve the system, this can only occur within the scope of, and with the exactitude necessary for, the required information. (author)

  4. PARADIGM OF ACCOUNTING CHANGE

    Directory of Open Access Journals (Sweden)

    Constanta Iacob

    2016-12-01

    Full Text Available Words and phrases swap with each other, and the apparently stable meaning of a word sometimes changes over time. This explains why the generic term "accounting" is used when referring to the qualities attributed to accounting, but also when it comes to organizing the financial accounting function within the entity, and when referring concretely to keeping double-entry records with their specific means, methods and tools, that is, to accounting as a technique. Speaking about the qualities of accounting, but also about the organizational forms it takes, we note that the word "accounting" carries a manifold meaning, which is why the purpose of this article is to demonstrate that a paradigm shift entails a new set of rules, and if the rules change, then the very purpose of accounting can change.

  5. Laboratory errors and patient safety.

    Science.gov (United States)

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Programs designed to identify and reduce laboratory errors, as well as specific strategies, are therefore required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the laboratory errors commonly encountered in our laboratory practice, their hazards to patient health care, and some measures and recommendations to minimize or eliminate them. The laboratory errors encountered during May 2008 were recorded and statistically evaluated (using a simple percent distribution) in the laboratory department of one of the private hospitals in Egypt. Errors were classified according to the laboratory phase and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed a total of 14 errors (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent, respectively, of total errors), while errors in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before the test reports were submitted to the patients. On the other hand, errors in reports that had already been submitted to patients and reached the physician represented 14.3 percent of total errors; only 7.1 percent of the errors could have had an impact on patient diagnosis. The findings of this study are concordant with those published from the USA and other countries, which proves that laboratory problems are universal and need general standardization and benchmarking measures. Original being the first data published from Arabic countries that…
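    The "simple percent distribution" reported above is easy to reproduce; the per-phase counts below (5, 2 and 7 of the 14 errors) are inferred from the stated percentages rather than given explicitly in the abstract:

    ```python
    # Reproducing the abstract's percent distribution from inferred counts.
    total_tests = 1600
    errors = {"pre-analytic": 5, "analytic": 2, "post-analytic": 7}

    total_errors = sum(errors.values())            # 14 errors in all
    error_rate = 100 * total_errors / total_tests  # 0.875 %, rounded to 0.87 in the text
    shares = {phase: round(100 * n / total_errors, 1) for phase, n in errors.items()}
    # shares: pre-analytic 35.7 %, analytic 14.3 %, post-analytic 50.0 %
    ```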

  6. Implementation of random set-up errors in Monte Carlo calculated dynamic IMRT treatment plans

    International Nuclear Information System (INIS)

    Stapleton, S; Zavgorodni, S; Popescu, I A; Beckham, W A

    2005-01-01

    The fluence-convolution method for incorporating random set-up errors (RSE) into Monte Carlo treatment planning dose calculations was previously proposed by Beckham et al and validated for open-field radiotherapy treatments. This study confirms the applicability of the fluence-convolution method to dynamic intensity modulated radiotherapy (IMRT) dose calculations and evaluates the impact of set-up uncertainties on a clinical IMRT dose distribution. BEAMnrc and DOSXYZnrc codes were used for the Monte Carlo calculations. A sliding-window IMRT delivery was simulated using a dynamic multi-leaf collimator (DMLC) transport model developed by Keall et al. The dose distributions were benchmarked for dynamic IMRT fields using extended dose range (EDR) film, accumulating the dose from 16 subsequent fractions shifted randomly. Agreement between calculated and measured relative dose values was well within statistical uncertainty. A clinical seven-field sliding-window IMRT head-and-neck treatment was then simulated and the effects of random set-up errors (standard deviation of 2 mm) were evaluated. The dose-volume histograms calculated in the PTV with and without corrections for RSE showed only small differences, indicating a reduction of the volume of the high-dose region due to set-up errors; adequate coverage of the PTV was maintained when RSE was incorporated. Slice-by-slice comparison of the dose distributions revealed differences of up to 5.6%. The incorporation of set-up errors altered the position of the hot spot in the plan. This work demonstrated the validity of applying the fluence-convolution method to dynamic IMRT Monte Carlo dose calculations. It also showed that accounting for set-up errors can be essential for correct identification of the value and position of the hot spot.
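The core of the fluence-convolution idea can be illustrated in one dimension: random set-up errors with standard deviation sigma are folded into the plan by convolving the planned fluence profile with a Gaussian kernel before the dose calculation. The grid spacing, field size and idealized profile below are illustrative assumptions; only sigma = 2 mm is taken from the study.

```python
import numpy as np

# 1-D sketch of the fluence-convolution method: blur the planned fluence
# with a Gaussian whose width equals the random set-up error.
dx = 0.5                                  # mm per sample (assumed grid)
sigma = 2.0                               # mm, random set-up error (from the study)
x = np.arange(-10, 10 + dx, dx)

fluence = (np.abs(x) <= 5).astype(float)  # idealized 10 mm open field

kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()                    # normalize so total fluence is conserved

blurred = np.convolve(fluence, kernel, mode="same")
# Inside the field the fluence stays near 1; the field edges are smeared
# over a region of width ~sigma, modelling the averaged set-up shifts.
```

This blurred fluence, rather than the planned one, would then drive the dose calculation, which is how the averaging over random shifts enters the Monte Carlo plan.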

  7. Implementation of random set-up errors in Monte Carlo calculated dynamic IMRT treatment plans

    Science.gov (United States)

    Stapleton, S.; Zavgorodni, S.; Popescu, I. A.; Beckham, W. A.

    2005-02-01

    The fluence-convolution method for incorporating random set-up errors (RSE) into Monte Carlo treatment planning dose calculations was previously proposed by Beckham et al and validated for open-field radiotherapy treatments. This study confirms the applicability of the fluence-convolution method to dynamic intensity modulated radiotherapy (IMRT) dose calculations and evaluates the impact of set-up uncertainties on a clinical IMRT dose distribution. BEAMnrc and DOSXYZnrc codes were used for the Monte Carlo calculations. A sliding-window IMRT delivery was simulated using a dynamic multi-leaf collimator (DMLC) transport model developed by Keall et al. The dose distributions were benchmarked for dynamic IMRT fields using extended dose range (EDR) film, accumulating the dose from 16 subsequent fractions shifted randomly. Agreement between calculated and measured relative dose values was well within statistical uncertainty. A clinical seven-field sliding-window IMRT head-and-neck treatment was then simulated and the effects of random set-up errors (standard deviation of 2 mm) were evaluated. The dose-volume histograms calculated in the PTV with and without corrections for RSE showed only small differences, indicating a reduction of the volume of the high-dose region due to set-up errors; adequate coverage of the PTV was maintained when RSE was incorporated. Slice-by-slice comparison of the dose distributions revealed differences of up to 5.6%. The incorporation of set-up errors altered the position of the hot spot in the plan. This work demonstrated the validity of applying the fluence-convolution method to dynamic IMRT Monte Carlo dose calculations. It also showed that accounting for set-up errors can be essential for correct identification of the value and position of the hot spot.

  8. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
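For a linear observable the two schemes can be compared directly. The toy below, with made-up sensitivities and standard deviations, shows the basic bookkeeping: one run shifted by +1 sigma per parameter for unisim, versus many runs with all parameters drawn at random for multisim; in the linear regime both recover the exact systematic variance.

```python
import numpy as np

# Toy unisim vs multisim comparison on a linear observable f(p) = c . p,
# nominal p = 0, p_i ~ N(0, sigma_i^2). Coefficients and sigmas are
# illustrative assumptions, not values from the paper.
rng = np.random.default_rng(0)
c = np.array([1.0, 2.0, 0.5])        # sensitivities df/dp_i
sigma = np.array([0.3, 0.1, 0.4])    # systematic standard deviations

def f(p):
    return c @ p

nominal = f(np.zeros(3))

# Unisim: one MC run per parameter, shifted by one standard deviation.
shifts = np.array([f(s * np.eye(3)[i]) - nominal for i, s in enumerate(sigma)])
var_unisim = np.sum(shifts ** 2)

# Multisim: every run varies all parameters at once.
samples = rng.normal(0.0, sigma, size=(20000, 3))
runs = samples @ c
var_multisim = runs.var()

exact = np.sum((c * sigma) ** 2)
# In the linear case var_unisim equals the exact variance, and
# var_multisim converges to it as the number of runs grows.
```

The note's statistical-error trade-off (which scheme is better when MC statistics are limited) enters once f itself is estimated from finite MC samples rather than evaluated exactly as here.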

  9. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  10. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  11. A Justified Initial Accounting Estimate as an Integral Part of the Enterprise Accounting Policy

    Directory of Open Access Journals (Sweden)

    Marenych Tetyana H

    2016-05-01

    Full Text Available The aim of the article is to justify the need to specify in the order on accounting policy not only the elements of the accounting policy itself but also the initial accounting estimates, which will increase the reliability of financial reporting, and to develop proposals for improving these administrative documents of the enterprise. It is noted that in recent years the importance of a high-quality accounting policy has increased significantly, not only for users of financial reports but also for determining the object of levying the profits tax. Significant differences are revealed in how the consequences of changes in accounting policy and in accounting estimates are reflected in accounting. Information on accounting estimates given in orders on enterprise accounting policy has been generalized. It is proposed to provide a separate section in the order presenting the list of accounting estimates adopted and describing how the company will make changes to its accounting policy and accounting estimates, as well as correct errors.

  12. Organizational and Managerial Accounting Challenges

    OpenAIRE

    Anamaria Tepes-Bobescu

    2015-01-01

    The article focuses on the concept of value chain analysis and its positive effects on the development of an enterprise. After a review of the literature, conceptual discussions of value analysis and the classification of a company's activities (main and support), the steps that make up a value chain analysis are described. The article describes the role of the management accountant and of the enterprise's management in value chain analysis. An analysis is made between the value chain and conventiona...

  13. Architecture design for soft errors

    CERN Document Server

    Mukherjee, Shubu

    2008-01-01

    This book provides a comprehensive description of the architectural techniques to tackle the soft error problem. It covers new methodologies for quantitative analysis of soft errors as well as novel, cost-effective architectural techniques to mitigate them. To give readers a better grasp of the broader problem definition and solution space, the book also delves into the physics of soft errors and reviews current circuit and software mitigation techniques.

  14. Accounting organizing development tendencies

    Directory of Open Access Journals (Sweden)

    G.I. Lyakhovich

    2017-12-01

    Full Text Available The development of accounting takes place under the influence of many factors. The study pays special attention to the impact of technological determinants on the process of organizing accounting. An analysis of scholars' works made it possible to determine the principal tendencies in the development of accounting organization; these tendencies were expanded to take into account the development of technologies and innovations. For every tendency (the use of cloud technologies; wide use of expert systems; a social media strategy in accounting; mobility among accounting personnel; outsourcing of accounting services; the Internet of Things), the particular elements that undergo changes in the organization of accounting and the factors that prevent their further development were identified. The paper substantiates the shifts in the functional duties of an accountant (the exclusion of data recording and the intensification of analytical functions) as a result of the application of modern information and computer technologies. It was established that the amount of accountants' work with primary documents will be reduced, given the possibilities of automatic preparation of such documents. The author substantiates the importance of sound knowledge of information technologies in the training of accountants.

  15. Identifying Error in AUV Communication

    National Research Council Canada - National Science Library

    Coleman, Joseph; Merrill, Kaylani; O'Rourke, Michael; Rajala, Andrew G; Edwards, Dean B

    2006-01-01

    Mine Countermeasures (MCM) involving Autonomous Underwater Vehicles (AUVs) are especially susceptible to error, given the constraints on underwater acoustic communication and the inconstancy of the underwater communication channel...

  16. Human Errors in Decision Making

    OpenAIRE

    Mohamad, Shahriari; Aliandrina, Dessy; Feng, Yan

    2005-01-01

    The aim of this paper was to identify human errors in the decision-making process. The study focused on the research question: what human errors can potentially cause decision failure during the evaluation of alternatives in the decision-making process. Two case studies were selected from the literature and analyzed to find the human errors that contribute to decision failure. The analysis of human errors was then linked with mental models in the alternative-evaluation step. The results o...

  17. Finding beam focus errors automatically

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.

    1987-01-01

    An automated method for finding beam focus errors uses an optimization program called COMFORT-PLUS. The procedure for finding the correction factors with COMFORT-PLUS has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is to be used as an off-line program to analyze actual measured data for any SLC system. One limitation of this procedure is that it depends on the magnitude of the machine errors. Another is that the program is not totally automated, since the user must decide a priori where to look for errors.

  18. Heuristic errors in clinical reasoning.

    Science.gov (United States)

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  19. Predicting Error Bars for QSAR Models

    International Nuclear Information System (INIS)

    Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Mueller, Klaus-Robert

    2007-01-01

    Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms, based on 14,556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7,013 new measurements from recent months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble- and distance-based techniques for the other modelling approaches.
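The key property the abstract highlights, that a Gaussian Process returns a predictive standard deviation ("error bar") alongside each prediction, can be shown with a hand-rolled 1-D GP. The toy target, kernel, length scale and noise level below are illustrative assumptions standing in for descriptor vectors and measured log D values; nothing here reproduces the paper's models.

```python
import numpy as np

# Minimal GP regression: predictive mean k*^T K^-1 y and predictive
# variance k(x*,x*) + s^2 - k*^T K^-1 k* (the "error bar" squared).
def rbf(a, b, length=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

x_train = np.linspace(-3, 3, 40)
y_train = np.sin(x_train)                 # stand-in for measured values
noise = 0.1                               # assumed measurement noise

K = rbf(x_train, x_train) + noise ** 2 * np.eye(x_train.size)
K_inv = np.linalg.inv(K)

x_new = np.array([0.0, 10.0])             # inside vs far outside the data
k_star = rbf(x_train, x_new)              # shape (40, 2)

mean = k_star.T @ K_inv @ y_train
var = (rbf(x_new, x_new).diagonal() + noise ** 2
       - np.einsum("ji,jk,ki->i", k_star, K_inv, k_star))
std = np.sqrt(var)
# The error bar std[1] for the query far from the training data is much
# larger than std[0] inside it, flagging the prediction as unreliable.
```

This distance sensitivity is exactly what makes GP error bars useful for judging whether a new compound lies inside the model's domain of applicability.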

  20. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that, Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...
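The layered-protection idea can be illustrated with a deliberately simple stand-in. The paper uses Rate Compatible Punctured Convolutional codes; the repetition code over a binary symmetric channel below is only an assumption-laden sketch showing why giving the base quality layer a stronger code lowers its bit error rate relative to a lightly protected enhancement layer.

```python
import random

# Sketch of unequal error protection: the base layer gets a 5x repetition
# code with majority decoding, the enhancement layer gets no protection;
# both cross a binary symmetric channel with flip probability 0.1.
random.seed(42)

def transmit(bits, repeat, flip_p):
    """Repetition-encode, flip each copy with probability flip_p, majority-decode."""
    decoded = []
    for b in bits:
        received = [b ^ (random.random() < flip_p) for _ in range(repeat)]
        decoded.append(int(sum(received) > repeat / 2))
    return decoded

bits = [random.randint(0, 1) for _ in range(2000)]
base = transmit(bits, repeat=5, flip_p=0.1)   # heavily protected layer
enh = transmit(bits, repeat=1, flip_p=0.1)    # lightly protected layer

ber_base = sum(b != d for b, d in zip(bits, base)) / len(bits)
ber_enh = sum(b != d for b, d in zip(bits, enh)) / len(bits)
# ber_base is roughly an order of magnitude below ber_enh (~0.1),
# at the cost of five times the channel use for the base layer.
```

Punctured convolutional codes achieve the same unequal trade-off far more efficiently, by deleting different fractions of coded bits per layer instead of repeating them.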