WorldWideScience

Sample records for account form errors

  1. 40 CFR 73.37 - Account error.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Account error. 73.37 Section 73.37 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) SULFUR DIOXIDE ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole...

  2. THE INFLUENCE OF ACCOUNTANCY ERRORS ON FINANCIAL AND TAX REPORTS

    Directory of Open Access Journals (Sweden)

    Mariana GURĂU

    2016-06-01

    Full Text Available To make mistakes is human, and accountants may make mistakes too. Accountancy errors are defined and classified by accounting regulations, which also set out the accounting treatment for correcting them. However, even though one of the objectives of accounting normalization is the disconnection between accountancy and taxation, accountancy errors especially influence tax reports. We further point out the impact of accountancy errors on financial and tax reports. We also address the accountancy principles that underlie the rules described for correcting errors.

  3. IAS 8, Accounting Policies, Changes in Accounting Estimates and Errors – A Closer Look

    OpenAIRE

    Muthupandian, K S

    2008-01-01

    The International Accounting Standards Board issued the revised version of the International Accounting Standard 8, Accounting Policies, Changes in Accounting Estimates and Errors. The objective of IAS 8 is to prescribe the criteria for selecting, applying and changing accounting policies, together with the accounting treatment and disclosure of changes in accounting policies, changes in accounting estimates and the correction of errors. This article presents a closer look at the standard (o...

  4. Error evaluation method for material accountancy measurement. Evaluation of random and systematic errors based on material accountancy data

    International Nuclear Information System (INIS)

    Nidaira, Kazuo

    2008-01-01

    International Target Values (ITV) show random and systematic measurement uncertainty components as a reference for routinely achievable measurement quality in accountancy measurement. The measurement uncertainty, henceforth called error, needs to be periodically evaluated and checked against the ITV for consistency, as the error varies with measurement methods, instruments, operators, certified reference samples, frequency of calibration, and so on. In this paper an error evaluation method was developed, with a focus on (1) specifying the error calculation model clearly, (2) always obtaining positive random and systematic error variances, (3) obtaining the probability density distribution of an error variance, and (4) confirming the evaluation method by simulation. In addition, the method was demonstrated by applying it to real data. (author)

  5. A Research on the Responsibility of Accounting Professionals to Determine and Prevent Accounting Errors and Frauds: Edirne Sample

    Directory of Open Access Journals (Sweden)

    Semanur Adalı

    2017-09-01

    Full Text Available In this study, the ethical dimensions of accounting professionals related to accounting errors and frauds were examined. Firstly, general and technical information about accounting was provided. Then, terminology on error, fraud and ethics in accounting was discussed. The study also included recent statistics about accounting errors and fraud, as well as a literature review. As the research methodology, a questionnaire was distributed to 36 accounting professionals residing in the city of Edirne, Turkey. The collected data were then entered into the SPSS package program for analysis. The study revealed very important results. Accounting professionals think that accounting chambers do not organize enough seminars/conferences on errors and fraud. They also believe that the supervision and disciplinary boards of professional accounting chambers fulfill their responsibilities only partially. The attitude of professional accounting chambers towards errors, fraud and ethics is considered neither strict nor lenient, yet most accounting professionals are aware of colleagues who have received disciplinary penalties. The most important and effective tool to prevent errors and fraud is indicated to be external audit, but internal audit and internal control are valued as well. According to accounting professionals, most errors occur due to incorrect data received from clients and as a result of recording. Fraud is generally committed in order to obtain credit from banks and to benefit the organization by not showing the real situation of the firm. Finally, accounting professionals state that being honest, trustworthy and impartial is the basis of the accounting profession and that accountants must adhere to ethical rules.

  6. Accounting for measurement error: a critical but often overlooked process.

    Science.gov (United States)

    Harris, Edward F; Smith, Richard N

    2009-12-01

    Due to instrument imprecision and human inconsistencies, measurements are not free of error. Technical error of measurement (TEM) is the variability encountered between dimensions when the same specimens are measured at multiple sessions. A goal of a data collection regimen is to minimise TEM. The few studies that actually quantify TEM, regardless of discipline, report that it is substantial and can affect results and inferences. This paper reviews some statistical approaches for identifying and controlling TEM. Statistically, TEM is part of the residual ('unexplained') variance in a statistical test, so accounting for TEM, which requires repeated measurements, enhances the chances of finding a statistically significant difference if one exists. The aim of this paper was to review and discuss common statistical designs relating to types of error and statistical approaches to error accountability. This paper addresses issues of landmark location, validity, technical and systematic error, analysis of variance, scaled measures and correlation coefficients in order to guide the reader towards correct identification of true experimental differences. Researchers commonly infer characteristics about populations from comparatively restricted study samples. Most inferences are statistical and, aside from concerns about adequate accounting for known sources of variation with the research design, an important source of variability is measurement error. Variability in locating landmarks that define variables is obvious in odontometrics, cephalometrics and anthropometry, but the same concerns about measurement accuracy and precision extend to all disciplines. With increasing accessibility to computer-assisted methods of data collection, the ease of incorporating repeated measures into statistical designs has improved. Accounting for this technical source of variation increases the chance of finding biologically true differences when they exist.
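
    The TEM statistic described above is straightforward to compute from duplicate measurements. The sketch below implements Dahlberg's classic formula, TEM = sqrt(sum(d_i^2) / (2n)); the function names, the example data and the relative-TEM variant are illustrative additions, not taken from the record.

```python
import numpy as np

def technical_error_of_measurement(session1, session2):
    """Dahlberg's TEM for duplicate measurements:
    TEM = sqrt(sum(d_i^2) / (2 * n)), where d_i is the between-session difference."""
    d = np.asarray(session1) - np.asarray(session2)
    return np.sqrt(np.sum(d ** 2) / (2 * len(d)))

def relative_tem(session1, session2):
    """TEM as a percentage of the grand mean (rTEM), useful for comparing traits
    measured on different scales."""
    grand_mean = np.mean(np.concatenate([session1, session2]))
    return 100 * technical_error_of_measurement(session1, session2) / grand_mean

# Example: the same 5 specimens measured at two sessions (values in mm, made up)
s1 = np.array([10.1, 12.4, 9.8, 11.0, 10.7])
s2 = np.array([10.3, 12.1, 9.9, 11.2, 10.5])
print(technical_error_of_measurement(s1, s2))  # absolute TEM in mm
print(relative_tem(s1, s2))                    # rTEM in %
```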

  7. ERRORS AND FRAUD IN ACCOUNTING. THE ROLE OF EXTERNAL AUDIT IN FIGHTING CORRUPTION

    Directory of Open Access Journals (Sweden)

    Luminita Ionescu

    2017-12-01

    Full Text Available Accounting errors and fraud are common in most businesses, but there is a difference between fraud and the misinterpretation of communications or accounting regulations. The role of management in preventing fraud has become more important in recent decades, and the importance of auditing in curbing corruption is increasingly revealed. There is a strong connection between fraud and corruption, accelerated by electronic systems and modern platforms. Recent developments tend to confirm that external auditing curbs corruption, owing to international accounting and auditing standards at the national and regional levels. Thus, better implementation of accounting standards and high-quality external control could prevent errors and fraud in accounting and reduce corruption as well. The aim of this paper is to present some particular aspects of errors and fraud in accounting, and how external audit can ensure accuracy and accountability in financial reporting.

  8. CORRECTING ACCOUNTING ERRORS AND ACKNOWLEDGING THEM IN THE EARNINGS TO THE PERIOD

    Directory of Open Access Journals (Sweden)

    BUSUIOCEANU STELIANA

    2013-08-01

    Full Text Available The accounting information is reliable when it does not contain significant errors, is not biased and accurately represents the transactions and events. In the light of the regulations complying with European directives, information is significant if its omission or wrong presentation may influence the decisions users make based on the annual financial statements. Given that professional practice encounters errors in registering or interpreting information, as well as omissions and wrong calculations, the Romanian accounting regulations stipulate treatments for correcting errors in compliance with international references. Thus, the correction of errors relating to prior periods is accomplished through retained earnings in the case of significant errors, or through the current earnings when the errors are insignificant. The different situations triggered by errors in professional practice require both knowledge of the regulations and professional reasoning to be addressed.

  9. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    Science.gov (United States)

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
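
    For readers unfamiliar with regression calibration, the sketch below shows one common replicate-based variant: each subject's replicate mean is shrunk toward the grand mean by an estimated reliability ratio before entering the Cox model. This is a generic illustration, not the authors' exact procedure; the `lifelines` package and all data-generating values are assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # assumed available (pip install lifelines)

def regression_calibration(replicates):
    """Shrink each subject's replicate mean toward the grand mean by the
    reliability ratio lambda = var_x / (var_x + var_u / k)."""
    W = np.asarray(replicates)                 # shape (n subjects, k replicates)
    n, k = W.shape
    wbar = W.mean(axis=1)
    var_u = np.sum((W - wbar[:, None]) ** 2) / (n * (k - 1))  # within-subject error var
    var_x = np.var(wbar, ddof=1) - var_u / k                  # between-subject true var
    lam = var_x / (var_x + var_u / k)
    return wbar.mean() + lam * (wbar - wbar.mean())

# Illustrative data: log(total REM) measured twice per subject (all values made up)
rng = np.random.default_rng(0)
n = 200
true_x = rng.normal(5.0, 0.5, n)
reps = true_x[:, None] + rng.normal(0, 0.4, (n, 2))          # two error-prone replicates
time = rng.exponential(1.0 / np.exp(0.5 * (true_x - 5.0)))   # recurrence times
event = (rng.uniform(size=n) < 0.8).astype(int)

df = pd.DataFrame({"x_calibrated": regression_calibration(reps),
                   "time": time, "event": event})
CoxPHFitter().fit(df, duration_col="time", event_col="event").print_summary()
```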

  10. Multiple imputation to account for measurement error in marginal structural models

    Science.gov (United States)

    Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.

    2015-01-01

    Background Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2 (95% CI: 0.6, 2.3)]. The HR for current smoking and therapy (0.4 (95% CI: 0.2, 0.7)) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Conclusions Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338

  11. Multiple Imputation to Account for Measurement Error in Marginal Structural Models.

    Science.gov (United States)

    Edwards, Jessie K; Cole, Stephen R; Westreich, Daniel; Crane, Heidi; Eron, Joseph J; Mathews, W Christopher; Moore, Richard; Boswell, Stephen L; Lesko, Catherine R; Mugavero, Michael J

    2015-09-01

    Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and nondifferential measurement error in a marginal structural model. We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3,686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality (hazard ratio [HR]: 1.2 [95% confidence interval [CI] = 0.6, 2.3]). The HR for current smoking and therapy [0.4 (95% CI = 0.2, 0.7)] was similar to the HR for no smoking and therapy (0.4; 95% CI = 0.2, 0.6). Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies.
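
    A minimal sketch of the imputation logic follows. It fits a misclassification model in a validation subgroup, draws multiple completed datasets, and pools with Rubin's rules; for brevity a plain logistic outcome model stands in for the authors' weighted Cox marginal structural model, and all names, sizes and rates are made up.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def impute_and_pool(reported, covar, outcome, val_reported, val_true, val_covar, M=20):
    """Multiple imputation for a misclassified binary exposure:
    (1) fit P(true | reported, covariate) in the validation subgroup;
    (2) draw M completed main datasets; (3) analyze each; (4) pool (Rubin's rules)."""
    X_val = sm.add_constant(np.column_stack([val_reported, val_covar]))
    mis_model = sm.Logit(val_true, X_val).fit(disp=0)
    X_main = sm.add_constant(np.column_stack([reported, covar]))
    ests, variances = [], []
    for _ in range(M):
        # redraw imputation-model coefficients to propagate their uncertainty
        beta = rng.multivariate_normal(mis_model.params, mis_model.cov_params())
        p_true = 1.0 / (1.0 + np.exp(-X_main @ beta))
        exposure = (rng.uniform(size=len(p_true)) < p_true).astype(float)
        fit = sm.Logit(outcome, sm.add_constant(
            np.column_stack([exposure, covar]))).fit(disp=0)
        ests.append(fit.params[1])
        variances.append(fit.cov_params()[1, 1])
    q = np.mean(ests)                          # pooled log-odds ratio
    b = np.var(ests, ddof=1)                   # between-imputation variance
    t = np.mean(variances) + (1 + 1 / M) * b   # total variance (Rubin's rules)
    return q, np.sqrt(t)

# Tiny synthetic run: 15% misclassification, first 400 subjects form the validation set
n, m = 2000, 400
covar = rng.normal(size=n)
true_exp = (rng.uniform(size=n) < 0.4).astype(float)
reported = np.where(rng.uniform(size=n) < 0.85, true_exp, 1 - true_exp)
outcome = (rng.uniform(size=n) < 1 / (1 + np.exp(-(-1 + 1.0 * true_exp + 0.3 * covar)))).astype(float)
print(impute_and_pool(reported[m:], covar[m:], outcome[m:],
                      reported[:m], true_exp[:m], covar[:m]))
```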

  12. Effects of variable transformations on errors in FORM results

    International Nuclear Information System (INIS)

    Qin Quan; Lin Daojin; Mei Gang; Chen Hao

    2006-01-01

    On the basis of studies of the second partial derivatives of the variable transformation functions for nine different non-normal variables, the paper comprehensively discusses the effects of the transformation on FORM results. It shows that the sign and magnitude of the errors in FORM results depend on the distributions of the basic variables, on whether the basic variables represent resistances or actions, and on the location of the design point in the standard normal space. The transformations of exponential or Gamma resistance variables can generate +24% errors in the FORM failure probability, and the transformation of Frechet action variables can generate -31% errors.
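
    The transformation in question maps each non-normal marginal into standard normal space via x = F^-1(Phi(u)); FORM then linearizes in u-space, so the curvature of this map is what drives the errors discussed above. A small illustrative sketch (the distribution parameters are arbitrary):

```python
import numpy as np
from scipy import stats

def to_physical(u, dist):
    """Map standard-normal points u to the physical space of a non-normal
    variable via the marginal transformation x = F^-1(Phi(u))."""
    return dist.ppf(stats.norm.cdf(u))

u = np.linspace(-3, 3, 7)
print(to_physical(u, stats.expon(scale=2.0)))    # exponential resistance variable
print(to_physical(u, stats.invweibull(c=3.0)))   # Frechet-type action variable
```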

  13. Analysis and Compensation for Gear Accuracy with Setting Error in Form Grinding

    Directory of Open Access Journals (Sweden)

    Chenggang Fang

    2015-01-01

    Full Text Available In the process of form grinding, gear setting error is the main factor influencing form grinding accuracy; we propose an effective method to improve form grinding accuracy that corrects the error by controlling the machine operations. After establishing the geometric model of form grinding and representing the gear setting errors in homogeneous coordinates, a tooth mathematical model was obtained and simplified under the gear setting error. Then, according to the gear standards ISO 1328-1:1997 and ANSI/AGMA 2015-1-A01:2002, the relationship was investigated by changing the gear setting errors with respect to tooth profile deviation, helix deviation, and cumulative pitch deviation, respectively, under the conditions of gear eccentricity error, gear inclination error, and gear resultant error. An error compensation method was proposed based on solving the sensitivity coefficient matrix of the setting error in a five-axis CNC form grinding machine; simulation and experimental results demonstrated that the method can effectively correct the gear setting error and further improve form grinding accuracy.

  14. Detecting errors and anomalies in computerized materials control and accountability databases

    International Nuclear Information System (INIS)

    Whiteson, R.; Hench, K.; Yarbro, T.; Baumgart, C.

    1998-01-01

    The Automated MC&A Database Assessment project is aimed at improving anomaly and error detection in materials control and accountability (MC&A) databases and increasing confidence in the data that they contain. Anomalous data resulting in poor categorization of nuclear material inventories greatly reduces the value of the database information to users. Therefore it is essential that MC&A data be assessed periodically for anomalies or errors. Anomaly detection can identify errors in databases and thus provide assurance of the integrity of data. An expert system has been developed at Los Alamos National Laboratory that examines these large databases for anomalous or erroneous data. For several years, MC&A subject matter experts at Los Alamos have been using this automated system to examine the large amounts of accountability data that the Los Alamos Plutonium Facility generates. These data are collected and managed by the Material Accountability and Safeguards System, a near-real-time computerized nuclear material accountability and safeguards system. This year they have expanded the user base, customizing the anomaly detector for the varying requirements of different groups of users. This paper describes the progress in customizing the expert systems to the needs of the users of the data and reports on their results.

  15. Reducing Approximation Error in the Fourier Flexible Functional Form

    Directory of Open Access Journals (Sweden)

    Tristan D. Skolrud

    2017-12-01

    Full Text Available The Fourier Flexible form provides a global approximation to an unknown data generating process. In terms of limiting function specification error, this form is preferable to functional forms based on second-order Taylor series expansions. The Fourier Flexible form is a truncated Fourier series expansion appended to a second-order expansion in logarithms. By replacing the logarithmic expansion with a Box-Cox transformation, we show that the Fourier Flexible form can reduce approximation error by 25% on average in the tails of the data distribution. The new functional form allows for nested testing of a larger set of commonly implemented functional forms.
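
    For orientation, the Fourier Flexible form is often written schematically as below (notation varies across papers, so treat this as a sketch rather than the authors' exact specification); the record's proposal replaces the quadratic-in-logarithms part with the Box-Cox transform shown on the right:

$$
g(\mathbf{x}) \approx a_0 + \mathbf{b}'\mathbf{x} + \tfrac{1}{2}\,\mathbf{x}'\mathbf{C}\,\mathbf{x}
+ \sum_{\alpha=1}^{A} \sum_{j=1}^{J} \left[ u_{j\alpha}\cos\!\left(j\,\mathbf{k}_\alpha'\mathbf{x}\right)
- v_{j\alpha}\sin\!\left(j\,\mathbf{k}_\alpha'\mathbf{x}\right) \right],
\qquad
x^{(\lambda)} = \begin{cases} \dfrac{x^{\lambda}-1}{\lambda}, & \lambda \neq 0 \\[4pt] \ln x, & \lambda = 0 \end{cases}
$$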

  16. On the effect of systematic errors in near real time accountancy

    International Nuclear Information System (INIS)

    Avenhaus, R.

    1987-01-01

    Systematic measurement errors have a decisive impact on nuclear materials accountancy. This has been demonstrated on various occasions for a fixed number of inventory periods, i.e. for situations where the overall probability of detection is taken as the measure of effectiveness. In the framework of Near Real Time Accountancy (NRTA), however, such analyses have not yet been performed. In this paper, sequential test procedures are considered which are based on the so-called MUF residuals. It is shown that, if the decision maker does not know the systematic error variance, the average run lengths tend towards infinity when this variance is equal to or larger than that of the random error. Furthermore, if the decision maker knows this variance, the average run length under constant loss or diversion is not shorter than that without loss or diversion. These results cast some doubt on the present practice of data evaluation, where systematic errors are tacitly assumed to persist for an infinite time. In fact, information about the time dependence of the variances of these errors has to be gathered so that the efficiency of NRTA evaluation methods can be estimated realistically.
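
    The flavour of such an analysis can be reproduced with a generic one-sided CUSUM on simulated MUF sequences. This is only an illustration of the persistent-systematic-error effect, not the paper's specific MUF-residual test, and all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)

def average_run_length(loss, sys_sd, rand_sd, h=5.0, k=0.5, n_sims=500, horizon=2000):
    """Estimate the ARL of a one-sided CUSUM on per-period MUF values when each
    sequence carries a persistent systematic error plus independent random error."""
    lengths = []
    for _ in range(n_sims):
        sys_err = rng.normal(0, sys_sd)   # drawn once: persists over all periods
        s, t = 0.0, horizon
        for i in range(horizon):
            muf = loss + sys_err + rng.normal(0, rand_sd)
            s = max(0.0, s + muf - k)     # one-sided CUSUM recursion
            if s > h:
                t = i + 1
                break
        lengths.append(t)
    return np.mean(lengths)

# With a systematic error as large as the random error, the run lengths under
# diversion are barely shorter than under no diversion -- the abstract's point.
print(average_run_length(loss=0.0, sys_sd=1.0, rand_sd=1.0))  # no diversion
print(average_run_length(loss=0.5, sys_sd=1.0, rand_sd=1.0))  # constant diversion
```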

  17. Accounting for model error due to unresolved scales within ensemble Kalman filtering

    OpenAIRE

    Mitchell, Lewis; Carrassi, Alberto

    2014-01-01

    We propose a method to account for model error due to unresolved scales in the context of the ensemble transform Kalman filter (ETKF). The approach extends to this class of algorithms the deterministic model error formulation recently explored for variational schemes and extended Kalman filter. The model error statistic required in the analysis update is estimated using historical reanalysis increments and a suitable model error evolution law. Two different versions of the method are describe...

  18. CORRECTING ERRORS: THE RELATIVE EFFICACY OF DIFFERENT FORMS OF ERROR FEEDBACK IN SECOND LANGUAGE WRITING

    Directory of Open Access Journals (Sweden)

    Chitra Jayathilake

    2013-01-01

    Full Text Available Error correction in ESL (English as a Second Language) classes has been a focal phenomenon in SLA (Second Language Acquisition) research due to controversial research results and diverse feedback practices. This paper presents a study which explored the relative efficacy of three forms of error correction employed in ESL writing classes. Focusing on the acquisition of one grammar element in both immediate and delayed language contexts, and collecting data from university undergraduates, the study employed an experimental research design with a pretest-treatment-posttests structure. The research revealed that the degree of success in acquiring L2 (Second Language) grammar through error correction differs according to the form of the correction and to the learning context. While the findings are discussed in relation to the previous literature, the paper concludes by creating a cline of error correction forms to be promoted in Sri Lankan L2 writing contexts, particularly in ESL contexts in universities.

  19. An in-process form error measurement system for precision machining

    International Nuclear Information System (INIS)

    Gao, Y; Huang, X; Zhang, Y

    2010-01-01

    In-process form error measurement for precision machining is studied. Due to two key problems, the opaque barrier and vibration, in-process optical measurement of form error for precision machining has been a hard topic, and so far very few existing research works can be found. In this project, an in-process form error measurement device is proposed to deal with the two key problems. Based on our existing studies, a prototype system has been developed. It is the first of its kind that overcomes the two key problems. The prototype is based on a single laser sensor design of 50 nm resolution together with two techniques, a damping technique and a moving average technique, proposed for use with the device. The proposed damping technique is able to improve vibration attenuation by up to 21 times compared to the case of natural attenuation. The proposed moving average technique is able to reduce errors by seven to ten times without distortion to the form profile results. The two proposed techniques are simple but especially useful for the proposed device. For a workpiece sample, the measurement result under the coolant condition is only 2.5% larger than the one under the no-coolant condition. For a certified Wyko test sample, the overall system measurement error can be as low as 0.3 µm. The measurement repeatability error can be as low as 2.2%. The experimental results give confidence in using the proposed in-process form error measurement device. For better results, further improvements in design and testing are necessary.

  20. On a Test of Hypothesis to Verify the Operating Risk Due to Accountancy Errors

    Directory of Open Access Journals (Sweden)

    Paola Maddalena Chiodini

    2014-12-01

    Full Text Available According to the Statement on Auditing Standards (SAS) No. 39 (AU 350.01), audit sampling is defined as “the application of an audit procedure to less than 100 % of the items within an account balance or class of transactions for the purpose of evaluating some characteristic of the balance or class”. The audit process develops in different steps: some are not susceptible to sampling procedures, while others may be performed using sampling techniques. The auditor may also be interested in two types of accounting error: the number of incorrect records in the sample that exceed a given threshold (the natural error rate), which may be indicative of possible fraud, and the mean amount of monetary errors found in incorrect records. The aim of this study is to monitor jointly both types of errors through an appropriate system of hypotheses, with particular attention to the Type II error, which indicates the risk of failing to report errors that exceed the upper precision limits.

  1. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

    In this paper, we propose a new empirical version of the Fama and French Model based on the Hausman (1978) specification test, aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used to correct other financial and accounting models for measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance and the protection of investors.

  2. 12 CFR 563g.7 - Form, content, and accounting.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Form, content, and accounting. 563g.7 Section... § 563g.7 Form, content, and accounting. (a) Form and content. Any offering circular or amendment filed... which they are made, not misleading. (b) Accounting requirements. To be declared effective an offering...

  3. Accounting for response misclassification and covariate measurement error improves power and reduces bias in epidemiologic studies.

    Science.gov (United States)

    Cheng, Dunlei; Branscum, Adam J; Stamey, James D

    2010-07-01

    To quantify the impact of ignoring misclassification of a response variable and measurement error in a covariate on statistical power, and to develop software for sample size and power analysis that accounts for these flaws in epidemiologic data. A Monte Carlo simulation-based procedure is developed to illustrate the differences in design requirements and inferences between analytic methods that properly account for misclassification and measurement error and those that do not, in regression models for cross-sectional and cohort data. We found that failure to account for these flaws in epidemiologic data can lead to a substantial reduction in statistical power, over 25% in some cases. The proposed method substantially reduced bias, by up to a ten-fold margin, compared to naive estimates obtained by ignoring misclassification and mismeasurement. We recommend as routine practice that researchers account for errors in measurement of both response and covariate data when determining sample size, performing power calculations, or analyzing data from epidemiological studies. Copyright © 2010 Elsevier Inc. All rights reserved.
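
    The simulation logic is easy to reproduce in outline: generate data with a known effect, degrade it with measurement error, and count rejections. The sketch below shows the covariate-error half of the story for logistic regression (a naive analysis only; the record's corrected methods are not reproduced here, and all values are illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

def power_with_covariate_error(n, beta1, error_sd, n_sims=500, alpha=0.05):
    """Monte Carlo power for detecting beta1 in logistic regression when the
    covariate is observed with classical measurement error of SD `error_sd`."""
    rejections = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)                    # true covariate
        w = x + rng.normal(0, error_sd, size=n)   # error-prone surrogate
        y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(-0.5 + beta1 * x)))).astype(float)
        fit = sm.Logit(y, sm.add_constant(w)).fit(disp=0)
        rejections += fit.pvalues[1] < alpha
    return rejections / n_sims

print(power_with_covariate_error(300, beta1=0.5, error_sd=0.0))  # no measurement error
print(power_with_covariate_error(300, beta1=0.5, error_sd=1.0))  # substantial error
```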

  4. Accounting for optical errors in microtensiometry.

    Science.gov (United States)

    Hinton, Zachary R; Alvarez, Nicolas J

    2018-09-15

    Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate using microtensiometry the contributions of these sources to measured errors in radius, and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications for all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane to measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveals a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equivalent to the capillary radius, i.e. a hemispherical interface. The understanding of these errors allows for correct measurement of interfacial curvature and interfacial tension regardless of experimental setup. For the case of microtensiometry, this greatly decreases the time for experimental setup

  5. Robust topology optimization accounting for spatially varying manufacturing errors

    DEFF Research Database (Denmark)

    Schevenels, M.; Lazarov, Boyan Stefanov; Sigmund, Ole

    2011-01-01

    This paper presents a robust approach for the design of macro-, micro-, or nano-structures by means of topology optimization, accounting for spatially varying manufacturing errors. The focus is on structures produced by milling or etching; in this case over- or under-etching may cause parts...... optimization problem is formulated in a probabilistic way: the objective function is defined as a weighted sum of the mean value and the standard deviation of the structural performance. The optimization problem is solved by means of a Monte Carlo method: in each iteration of the optimization scheme, a Monte...

  6. Accounting for measurement error in human life history trade-offs using structural equation modeling.

    Science.gov (United States)

    Helle, Samuli

    2018-03-01

    Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research owing to the lack of experimental approach. Problems with causal inference arising from measurement error in independent variables, whether related either to inaccurate measurement technique or validity of measurements, seem not well-known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated in reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in the underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.
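
    The core idea, that multiple indicators of a latent construct let one undo the attenuation caused by measurement error, can be shown without SEM software. In the sketch below, one indicator serves as an instrument for the other, which is algebraically the two-indicator special case of the latent-variable model; the variable names and effect sizes are invented, and a real analysis would use a full SEM package:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Latent 'lifetime reproductive effort' and two noisy indicators of it
effort = rng.normal(0, 1, n)
parity = effort + rng.normal(0, 0.8, n)         # indicator 1 (e.g., parity)
lrs = effort + rng.normal(0, 0.8, n)            # indicator 2 (e.g., lifetime reproductive success)
survival = -0.5 * effort + rng.normal(0, 1, n)  # outcome: post-reproductive survival

naive = np.cov(survival, parity)[0, 1] / np.var(parity)              # attenuated slope
latent = np.cov(survival, lrs)[0, 1] / np.cov(parity, lrs)[0, 1]     # instrument-style slope

print(naive)   # biased toward zero: roughly -0.5 times the reliability of parity
print(latent)  # close to the true effect of -0.5
```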

  7. Closed Form Aliasing Probability For Q-ary Symmetric Errors

    Directory of Open Access Journals (Sweden)

    Geetani Edirisooriya

    1996-01-01

    Full Text Available In Built-In Self-Test (BIST) techniques, test data reduction can be achieved using Linear Feedback Shift Registers (LFSRs). A faulty circuit may escape detection due to the loss of information inherent to data compaction schemes; this is referred to as aliasing. The probability of aliasing in Multiple-Input Shift-Registers (MISRs) has been studied under various bit error models. By modeling the signature analyzer as a Markov process we show that the closed-form expression previously derived for the aliasing probability of MISRs with primitive polynomials under the q-ary symmetric error model holds for all MISRs irrespective of their feedback polynomials, and for group cellular automata signature analyzers as well. If the erroneous behaviour of a circuit can be modelled with q-ary symmetric errors, then the test circuit complexity and propagation delay associated with the signature analyzer can be minimized by using a set of m single-bit LFSRs without increasing the probability of aliasing.
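
    The q-ary symmetric-error result is easy to probe by simulation. The sketch below builds a toy binary MISR (q = 2) with an arbitrary, deliberately non-primitive tap set and checks that the estimated aliasing probability sits near q^-m; the construction and all parameters are illustrative, not the authors' formulation:

```python
import numpy as np

rng = np.random.default_rng(4)

def misr_signature(error_stream, taps):
    """Compact an m-bit-wide error stream with a binary MISR (q = 2).
    `taps` selects the state bits feeding the XOR feedback."""
    m = error_stream.shape[1]
    state = np.zeros(m, dtype=np.uint8)
    for word in error_stream:
        feedback = np.bitwise_xor.reduce(state[taps])
        state = np.roll(state, 1)   # shift the register by one position
        state[0] = feedback         # shift the feedback bit in
        state ^= word               # XOR in this cycle's parallel inputs
    return state

# Estimate aliasing: nonzero error streams whose final signature is all-zero
m, length, trials, aliased = 8, 64, 5000, 0
taps = np.array([3, 4, 5, 7])       # illustrative tap set, not a primitive polynomial
for _ in range(trials):
    e = (rng.uniform(size=(length, m)) < 0.1).astype(np.uint8)  # symmetric bit errors
    if e.any() and not misr_signature(e, taps).any():
        aliased += 1
print(aliased / trials, "vs q**-m =", 2.0 ** -m)  # rough agreement expected
```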

  8. Correcting a fundamental error in greenhouse gas accounting related to bioenergy

    International Nuclear Information System (INIS)

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K.; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; Hove, Sybille van den

    2012-01-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.

  9. Correcting a fundamental error in greenhouse gas accounting related to bioenergy.

    Science.gov (United States)

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy

    2012-06-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of 'additional biomass' - biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy - can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.

  10. Accounting for measurement error in log regression models with applications to accelerated testing.

    Science.gov (United States)

    Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M

    2018-01-01

    In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.

  11. Accounting for measurement error in log regression models with applications to accelerated testing.

    Directory of Open Access Journals (Sweden)

    Robert Richardson

    Full Text Available In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.
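
    Both records above lean on Iteratively Re-weighted Least Squares. A generic IRLS loop is sketched below with simple Huber-style robustness weights as a stand-in; the papers derive problem-specific weights from the measurement-error and additive-error structure, which are not reproduced here:

```python
import numpy as np

def irls(X, y, weight_fn, n_iter=25, tol=1e-8):
    """Iteratively Re-weighted Least Squares: alternate a weighted LS solve
    with recomputing observation weights from the current residuals."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        w = weight_fn(y - X @ beta)
        W = np.diag(w)
        beta_new = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Example: Huber-style weights down-weight large residuals from heavy-tailed noise
rng = np.random.default_rng(5)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=2, size=200)
huber = lambda r, c=1.345: np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
print(irls(X, y, huber))   # should be near the true coefficients (1.0, 2.0)
```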

  12. POSSIBILITIES TO CORRECT ACCOUNTING ERRORS IN THE CONTEXT OF COMPLYING WITH THE OPENING BALANCE SHEET INTANGIBILITY PRINCIPLE

    Directory of Open Access Journals (Sweden)

    PALIU – POPA LUCIA

    2017-12-01

    Full Text Available At the global level, there are still different views on the intangibility of the opening balance sheet in the process of convergence and accounting harmonization, and we find a clear difference between the Anglo-Saxon accounting system and the continental Western European one, in the sense that the former is less rigid with regard to the application of the principle of intangibility, whereas the continental system applies the provisions of this principle in their entirety. Looking from this perspective, and taking into account the major importance of the financial statements, which are intended to provide information for all categories of users, i.e. both for managers and for users external to the entity whose position does not allow them to request specific reports, we considered it useful to conduct a study aimed at correcting errors in the context of compliance with the opening balance sheet intangibility principle, versus the need to adjust the comparative information on the financial position, financial performance and changes in the financial position generated by the correction of errors from previous years. In this regard, we perform a comparative analysis of the application of the intangibility principle both in the two major accounting systems and at the international level, and we approach issues related to the correction of errors in terms of the main differences between the provisions of the continental accounting regulations (represented by the European and national ones in our approach), the Anglo-Saxon ones, and those of the international referential on the opening balance sheet intangibility.

  13. Seeing the conflict: an attentional account of reasoning errors.

    Science.gov (United States)

    Mata, André; Ferreira, Mário B; Voss, Andreas; Kollei, Tanja

    2017-12-01

    In judgment and reasoning, intuition and deliberation can agree on the same responses, or they can be in conflict and suggest different responses. Incorrect responses to conflict problems have traditionally been interpreted as a sign of faulty problem solving: an inability to solve the conflict. However, such errors might emerge earlier, from insufficient attention to the conflict. To test this attentional hypothesis, we manipulated the conflict in reasoning problems and used eye-tracking to measure attention. Across several measures, correct responders paid more attention than incorrect responders to conflict problems, and they discriminated between conflict and no-conflict problems better than incorrect responders. These results are consistent with a two-stage account of reasoning, whereby sound problem solving in the second stage can only lead to accurate responses when sufficient attention is paid in the first stage.

  14. Problematic issues of accounting reflection and accounting recognition of contributions while carrying out joint activities without forming a legal entity

    OpenAIRE

    Куришко, Лілія Анатоліївна

    2015-01-01

    The methods of accounting reflection of business transactions related to contributions to joint activities without forming a legal entity have been studied; the types of contributions defined by law, and the possibility of reflecting them in accounting, have been elucidated; the author's understanding of the essence of the types of contributions has been formed, and an approach to their identification in accounting has been offered.

  15. Unification of behavioural, computational and neural accounts of word production errors in post-stroke aphasia

    Directory of Open Access Journals (Sweden)

    Marija Tochadse

    Full Text Available Neuropsychological assessment, brain imaging and computational modelling have augmented our understanding of the multifaceted functional deficits in people with language disorders after stroke. Despite the volume of research using each technique, no studies have attempted to assimilate all three approaches in order to generate a unified behavioural-computational-neural model of post-stroke aphasia. The present study included data from 53 participants with chronic post-stroke aphasia and merged: aphasiological profiles based on a detailed neuropsychological assessment battery, which was analysed with principal component and correlational analyses; measures of the impairment taken from Dell's computational model of word production; and the neural correlates of both behavioural and computational accounts, analysed by voxel-based correlational methodology. As a result, all three strands coincide in the separation of semantic and phonological stages of aphasic naming, revealing the prominence of these dimensions for the explanation of aphasic performance. Over and above three previously described principal components (phonological ability, semantic ability, executive-demand), we observed auditory working memory as a novel factor. While the phonological Dell parameter was uniquely related to phonological errors/factor, the semantic parameter was less clear-cut, being related to both semantic errors and omissions, and loading heavily with semantic ability and auditory working memory factors. The close relationship between the semantic Dell parameter and omission errors recurred in their high lesion-correlate overlap in the anterior middle temporal gyrus. In addition, the simultaneous overlap of the lesion correlate of omission errors with more dorsal temporal regions, associated with the phonological parameter, highlights the multiple drivers that underpin this error type. The novel auditory working memory factor was located along left superior

  16. Explaining quantitative variation in the rate of Optional Infinitive errors across languages: a comparison of MOSAIC and the Variational Learning Model.

    Science.gov (United States)

    Freudenthal, Daniel; Pine, Julian; Gobet, Fernand

    2010-06-01

    In this study, we use corpus analysis and computational modelling techniques to compare two recent accounts of the OI stage: Legate & Yang's (2007) Variational Learning Model and Freudenthal, Pine & Gobet's (2006) Model of Syntax Acquisition in Children. We first assess the extent to which each of these accounts can explain the level of OI errors across five different languages (English, Dutch, German, French and Spanish). We then differentiate between the two accounts by testing their predictions about the relation between children's OI errors and the distribution of infinitival verb forms in the input language. We conclude that, although both accounts fit the cross-linguistic patterning of OI errors reasonably well, only MOSAIC is able to explain why verbs that occur more frequently as infinitives than as finite verb forms in the input also occur more frequently as OI errors than as correct finite verb forms in the children's output.

  17. A method for the quantification of model form error associated with physical systems.

    Energy Technology Data Exchange (ETDEWEB)

    Wallen, Samuel P.; Brake, Matthew Robert

    2014-03-01

    In the process of model validation, models are often declared valid when the differences between model predictions and experimental data sets are satisfactorily small. However, little consideration is given to the effectiveness of a model using parameters that deviate slightly from those that were fitted to data, such as a higher load level. Furthermore, few means exist to compare and choose between two or more models that reproduce data equally well. These issues can be addressed by analyzing model form error, which is the error associated with the differences between the physical phenomena captured by models and that of the real system. This report presents a new quantitative method for model form error analysis and applies it to data taken from experiments on tape joint bending vibrations. Two models for the tape joint system are compared, and suggestions for future improvements to the method are given. As the available data set is too small to draw any statistical conclusions, the focus of this paper is the development of a methodology that can be applied to general problems.

  18. A self-organizing learning account of number-form synaesthesia.

    Science.gov (United States)

    Makioka, Shogo

    2009-09-01

    Some people automatically and involuntarily "see" mental images of numbers in spatial arrays when they think of numbers. This phenomenon, called number forms, shares three key characteristics with the other types of synaesthesia: within-individual consistency, between-individual variety, and a mixture of regularity and randomness. A theoretical framework called SOLA (self-organizing learning account of number forms) is proposed, which explains the generation process of number forms and the origin of those three characteristics. The simulations replicated the qualitative properties of the shapes of number forms: that numbers are aligned in order of size, that discontinuity usually occurs at the point of carry, and that continuous lines tend to have many bends.

  19. Accountability: new challenges, new forms

    NARCIS (Netherlands)

    van Woerkum, C.; Aarts, N.

    2012-01-01

    The general call for more accountability, affecting all western institutions, has reached the communication professionals as well. How can they cope with this new challenge? The danger is that they focus mainly on outcomes, so on performative accountability, whereas decisional accountability,

  20. Analysis of alpha spectrum instrumental errors accounting for the low energy part of semiconductor detector response function

    International Nuclear Information System (INIS)

    Gurbich, A.F.

    1981-01-01

    A technique for processing instrumental spectra of charged particles is presented, using the 226Ra alpha spectrum as an example; it permits taking into account the low-energy part of the spectrometer line shape, improving accuracy, and estimating detection efficiency. The results obtained show that the relative intensities of the alpha lines coincide, within statistical errors, with the known values, the line 'tails' constituting up to 3% of the total line area. Taking account of the line tail results in shifts of the peak centres of gravity by 10-20 keV. Thus the low-energy part of the alpha spectrometer line, which is usually not taken into account during spectra processing, markedly affects the results [ru]

  1. Accounting for sampling error when inferring population synchrony from time-series data: a Bayesian state-space modelling approach with applications.

    Directory of Open Access Journals (Sweden)

    Hugues Santin-Janin

    Full Text Available BACKGROUND: Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. METHODOLOGY/PRINCIPAL FINDINGS: The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplified our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging few replicates of population size estimates performed poorly at decreasing the bias of the classical estimator of the synchrony strength. CONCLUSION/SIGNIFICANCE: The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R-program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for

  2. An improved triple collocation algorithm for decomposing autocorrelated and white soil moisture retrieval errors

    Science.gov (United States)

    If not properly accounted for, auto-correlated errors in observations can lead to inaccurate results in soil moisture data analysis and reanalysis. Here, we propose a more generalized form of the triple collocation algorithm (GTC) capable of decomposing the total error variance of remotely-sensed surf...
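
    For context, the classical (zero-autocorrelation) triple collocation baseline that GTC generalizes can be written in a few lines; the covariance identities below are standard, while the data are synthetic:

```python
import numpy as np

def triple_collocation(x, y, z):
    """Classical covariance-based triple collocation: given three collocated
    estimates of the same signal with mutually independent errors, recover each
    product's error variance. GTC extends this to autocorrelated errors;
    this sketch is only the zero-autocorrelation baseline."""
    x, y, z = (v - np.mean(v) for v in (x, y, z))
    var_ex = np.mean(x * x) - np.mean(x * y) * np.mean(x * z) / np.mean(y * z)
    var_ey = np.mean(y * y) - np.mean(x * y) * np.mean(y * z) / np.mean(x * z)
    var_ez = np.mean(z * z) - np.mean(x * z) * np.mean(y * z) / np.mean(x * y)
    return var_ex, var_ey, var_ez

rng = np.random.default_rng(6)
truth = rng.normal(0, 1, 10000)   # true soil moisture anomaly (synthetic)
obs = [truth + rng.normal(0, s, truth.size) for s in (0.2, 0.3, 0.5)]
print(triple_collocation(*obs))   # approximately (0.04, 0.09, 0.25)
```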

  3. Correcting a fundamental error in greenhouse gas accounting related to bioenergy

    DEFF Research Database (Denmark)

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc

    2012-01-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which...... already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants...... and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon...

  4. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    Science.gov (United States)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion
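
    A bare-bones sketch of the dictionary-and-projection step might look as follows; the interfaces, the SVD-based basis construction and the toy models are all assumptions made for illustration, not the authors' implementation:

```python
import numpy as np

def loglik_with_local_error_basis(theta, d_obs, approx_model, dictionary, k=8, sigma=0.5):
    """Local-basis idea: take the K nearest dictionary entries (parameter vectors
    paired with 'detailed minus approximate' outputs), build an orthonormal
    model-error basis from them, project the residual onto that basis, and
    evaluate a Gaussian likelihood on the remainder."""
    params = np.array([p for p, _ in dictionary])
    errors = np.array([e for _, e in dictionary])
    nearest = np.argsort(np.linalg.norm(params - theta, axis=1))[:k]
    basis, _, _ = np.linalg.svd(errors[nearest].T, full_matrices=False)
    resid = d_obs - approx_model(theta)
    resid = resid - basis @ (basis.T @ resid)   # strip the model-error component
    return -0.5 * resid @ resid / sigma**2

# Toy usage: a quadratic 'detailed' model approximated by a linear model
rng = np.random.default_rng(8)
grid = np.linspace(0, 1, 20)
detailed = lambda t: t[0] * grid + 0.3 * t[0] * grid**2
approx = lambda t: t[0] * grid
dictionary = [(np.array([a]), detailed(np.array([a])) - approx(np.array([a])))
              for a in rng.uniform(0.5, 1.5, 30)]
d_obs = detailed(np.array([1.0])) + rng.normal(0, 0.01, grid.size)
print(loglik_with_local_error_basis(np.array([1.0]), d_obs, approx, dictionary, k=5))
```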

  5. Accounting for the measurement error of spectroscopically inferred soil carbon data for improved precision of spatial predictions.

    Science.gov (United States)

    Somarathna, P D S N; Minasny, Budiman; Malone, Brendan P; Stockmann, Uta; McBratney, Alex B

    2018-08-01

    Spatial modelling of environmental data commonly considers spatial variability as the single source of uncertainty. In reality, however, measurement errors should also be accounted for. In recent years, infrared spectroscopy has been shown to offer low-cost yet invaluable information needed for digital soil mapping at spatial scales meaningful for land management. However, spectrally inferred soil carbon data are known to be less accurate than laboratory-analysed measurements. This study establishes a methodology to filter out the measurement error variability by incorporating the measurement error variance in the spatial covariance structure of the model. The study was carried out in the Lower Hunter Valley, New South Wales, Australia, where a combination of laboratory measured, and vis-NIR and MIR inferred, topsoil and subsoil soil carbon data are available. We investigated the applicability of residual maximum likelihood (REML) and Markov Chain Monte Carlo (MCMC) simulation methods to generate parameters of the Matérn covariance function directly from the data in the presence of measurement error. The results revealed that the measurement error can be effectively filtered out through the proposed technique. When the measurement error was filtered from the data, the prediction variance almost halved, which ultimately yielded greater certainty in spatial predictions of soil carbon. Further, the MCMC technique was successfully used to define the posterior distribution of measurement error. This is an important outcome, as the MCMC technique can be used to estimate the measurement error if it is not explicitly quantified. Although this study dealt with soil carbon data, the method is amenable to filtering the measurement error of any kind of continuous spatial environmental data. Copyright © 2018 Elsevier B.V. All rights reserved.
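
    The key modelling move, adding known per-observation measurement-error variances to the diagonal of the spatial covariance, is compact to write down. The sketch below uses a plain maximum-likelihood objective rather than REML or MCMC, with a constant mean subtracted for brevity; all data and parameter values are placeholders:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import kv, gamma
from scipy.optimize import minimize

def matern_cov(d, sigma2, rho, nu):
    """Matern covariance as a function of distance d."""
    d = np.where(d == 0, 1e-12, d)
    s = np.sqrt(2 * nu) * d / rho
    return sigma2 * (2 ** (1 - nu) / gamma(nu)) * s ** nu * kv(nu, s)

def neg_log_lik(params, coords, y, me_var):
    """Gaussian negative log-likelihood with per-observation measurement-error
    variances `me_var` added to the diagonal, so known assay error is
    filtered out of the spatial signal."""
    sigma2, rho, nugget = np.exp(params)   # log-parameterized for positivity
    K = matern_cov(cdist(coords, coords), sigma2, rho, nu=1.5)
    K[np.diag_indices_from(K)] = sigma2 + nugget + me_var
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L, y - y.mean())
    return np.sum(np.log(np.diag(L))) + 0.5 * alpha @ alpha

# Placeholder data: 80 sites with known spectroscopic error variances
rng = np.random.default_rng(9)
coords = rng.uniform(0, 10, (80, 2))
me_var = rng.uniform(0.01, 0.2, 80)   # per-site measurement-error variances, assumed known
y = rng.normal(0, 1, 80)
res = minimize(neg_log_lik, x0=np.log([1.0, 2.0, 0.1]), args=(coords, y, me_var),
               method="Nelder-Mead")
print(np.exp(res.x))                  # fitted sigma2, rho, nugget
```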

  6. Development of a simulation program to study error propagation in the reprocessing input accountancy measurements

    International Nuclear Information System (INIS)

    Sanfilippo, L.

    1987-01-01

    A physical model and a computer program have been developed to simulate all the measurement operations involved in the Isotopic Dilution Analysis technique currently applied in the Volume-Concentration method for Reprocessing Input Accountancy, together with their errors or uncertainties. The simulator can readily address a number of problems related to the measurement activities of the plant operator and the inspector. The program, written in Fortran 77, is based on a particular Monte Carlo technique named ''Random Sampling''; a full description of the code is reported.
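    The ''Random Sampling'' idea lends itself to a compact sketch: draw every measured quantity from its error distribution, recompute the accounted quantity many times, and read the propagated uncertainty off the resulting distribution. The nominal values and error magnitudes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Nominal values (invented): tank volume (L) and Pu concentration (g/L).
V_true, c_true = 4000.0, 2.5

# Each replicate draws both a systematic and a random relative error
# component, so the spread of the derived mass reflects both sources.
V = V_true * (1 + rng.normal(0, 0.003, N) + rng.normal(0, 0.005, N))
c = c_true * (1 + rng.normal(0, 0.002, N) + rng.normal(0, 0.004, N))

mass = V * c  # derived Pu mass (g) for each simulated measurement
print(f"mean = {mass.mean():.1f} g, relative std = {mass.std() / mass.mean():.4%}")
```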

  7. Counting OCR errors in typeset text

    Science.gov (United States)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published, and they are not strictly comparable because the variances in the counts are larger than sampling variance alone would produce. Naturally, since OCR accuracy is based on the ratio of the number of OCR errors to the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR generated output or the weights used in the dynamic programming minimization procedure). The problem with not revealing the accounting method is that the numbers of errors found by different methods differ significantly. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
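    For concreteness, here is a self-contained edit-distance error counter of the kind the surveyed papers rely on; as the abstract stresses, the resulting count depends on choices such as the weights (unit weights here) and the treatment of suspect markers (ignored here):

```python
def levenshtein(ref: str, ocr: str) -> int:
    """Edit distance with unit weights. Different weight choices or
    special handling of suspect markers would change the error count."""
    prev = list(range(len(ocr) + 1))
    for i, rc in enumerate(ref, 1):
        cur = [i]
        for j, oc in enumerate(ocr, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (rc != oc)))   # substitution
        prev = cur
    return prev[-1]

ref, ocr = "accountancy", "accovntancy"
errors = levenshtein(ref, ocr)
print(f"{errors} error(s); accuracy = {1 - errors / len(ref):.2%}")
```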

  8. The input ambiguity hypothesis and case blindness: an account of cross-linguistic and intra-linguistic differences in case errors.

    Science.gov (United States)

    Pelham, Sabra D

    2011-03-01

    English-acquiring children frequently make pronoun case errors, while German-acquiring children rarely do. Nonetheless, German-acquiring children frequently make article case errors. It is proposed that when child-directed speech contains a high percentage of case-ambiguous forms, case errors are common in child language; when percentages are low, case errors are rare. Input to English and German children was analyzed for percentage of case-ambiguous personal pronouns on adult tiers of corpora from 24 English-acquiring and 24 German-acquiring children. Also analyzed for German was the percentage of case-ambiguous articles. Case-ambiguous pronouns averaged 63·3% in English, compared with 7·6% in German. The percentage of case-ambiguous articles in German was 77·0%. These percentages align with the children's errors reported in the literature. It appears children may be sensitive to levels of ambiguity such that low ambiguity may aid error-free acquisition, while high ambiguity may blind children to case distinctions, resulting in errors.

  9. Normalization of Deviation: Quotation Error in Human Factors.

    Science.gov (United States)

    Lock, Jordan; Bearman, Chris

    2018-05-01

    Objective The objective of this paper is to examine quotation error in human factors. Background Science progresses through building on the work of previous research. This requires accurate quotation. Quotation error has a number of adverse consequences: loss of credibility, loss of confidence in the journal, and a flawed basis for academic debate and scientific progress. Quotation error has been observed in a number of domains, including marine biology and medicine, but there has been little or no previous study of this form of error in human factors, a domain that specializes in the causes and management of error. Methods A study was conducted examining quotation accuracy of 187 extracts from 118 published articles that cited a control article (Vaughan's 1996 book: The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA). Results Of the extracts studied, 12.8% (n = 24) were classed as inaccurate, with 87.2% (n = 163) being classed as accurate. A second dimension of agreement was examined, with 96.3% (n = 180) agreeing with the control article and only 3.7% (n = 7) disagreeing. The categories of accuracy and agreement form a two-by-two matrix. Conclusion Rather than simply blaming individuals for quotation error, systemic factors should also be considered. Vaughan's theory, normalization of deviance, is one systemic theory that can account for quotation error. Application Quotation error is occurring in human factors and should receive more attention. According to Vaughan's theory, the normal everyday systems that promote scholarship may also allow mistakes, mishaps, and quotation error to occur.

  10. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
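    A minimal sketch of error propagation for a materials balance: with independent terms, the variance of MUF (= beginning inventory + receipts − shipments − ending inventory) is the sum of the per-term variances. Values and error magnitudes are invented, and correlations from shared systematic errors are deliberately ignored:

```python
import numpy as np

# Illustrative material balance (kg U): value, random and systematic
# relative standard deviations for each balance term.
terms = {
    "beginning": (1200.0, 0.002, 0.001),
    "receipts":  ( 800.0, 0.003, 0.001),
    "shipments": ( 750.0, 0.003, 0.001),
    "ending":    (1240.0, 0.002, 0.001),
}
signs = {"beginning": +1, "receipts": +1, "shipments": -1, "ending": -1}

muf = sum(signs[k] * v for k, (v, _, _) in terms.items())
# Independent errors add in quadrature; a systematic error shared by the
# two inventories (same scale) would partly cancel -- ignored here.
var = sum((v * r) ** 2 + (v * s) ** 2 for v, r, s in terms.values())
print(f"MUF = {muf:.1f} kg, sigma(MUF) = {np.sqrt(var):.2f} kg")
```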

  11. 75 FR 46945 - Proposed Collection; Comment Request; the Drug Accountability Record (Form NIH 2564) (NCI)

    Science.gov (United States)

    2010-08-04

    ... Request; the Drug Accountability Record (Form NIH 2564) (NCI) SUMMARY: In compliance with the requirement... Management and Budget (OMB) for review and approval. Proposed Collection Title: The Drug Accountability... agent accountability. In order to fulfill these requirements, a standard Investigational Drug...

  12. 75 FR 61763 - Submission of OMB Review; Comment Request; Drug Accountability Record (Form NIH 2564) (NCI)

    Science.gov (United States)

    2010-10-06

    ...; Comment Request; Drug Accountability Record (Form NIH 2564) (NCI) SUMMARY: In compliance with the..., 2011, unless it displays a valid OMB control number. Proposed Collection: Title: Drug Accountability... accountability. In order to fulfill these requirements, a standard Investigational Drug Accountability Report...

  13. The roles of word-form frequency and phonological neighbourhood density in the acquisition of Lithuanian noun morphology.

    Science.gov (United States)

    Savičiūtė, Eglė; Ambridge, Ben; Pine, Julian M

    2018-05-01

    Four- and five-year-old children took part in an elicited familiar and novel Lithuanian noun production task to test predictions of input-based accounts of the acquisition of inflectional morphology. Two major findings emerged. First, as predicted by input-based accounts, correct production rates were correlated with the input frequency of the target form, and with the phonological neighbourhood density of the noun. Second, the error patterns were not compatible with the systematic substitution of target forms by either (a) the most frequent form of that noun or (b) a single morphosyntactic default form, as might be predicted by naive versions of a constructivist and generativist account, respectively. Rather, most errors reflected near-miss substitutions of singular for plural, masculine for feminine, or nominative/accusative for a less frequent case. Together, these findings provide support for an input-based approach to morphological acquisition, but are not adequately explained by any single account in its current form.

  14. Does the transformation of accounting firms’ organizational form improve audit quality? Evidence from China

    Directory of Open Access Journals (Sweden)

    Chunfei Wang

    2015-12-01

    Full Text Available In this study, we examine the effects of the transformation of accounting firms' organizational form on audit quality. We find that the transformation from limited liability companies to limited liability partnerships has a significant negative effect on the absolute value of discretionary accruals of audited companies. In particular, the transformation has a significant negative effect on positive discretionary accruals and no effect on negative discretionary accruals. We also find that CPAs are more likely to issue modified audit opinions in the year after the transformation, and that there is no evidence that accounting firm size and listed company ownership influence the relationship between the transformation and audit quality. Our conclusions provide empirical evidence for policy makers and enrich the literature on accounting firms' organizational forms.

  15. Pendulum Shifts, Context, Error, and Personal Accountability

    Energy Technology Data Exchange (ETDEWEB)

    Harold Blackman; Oren Hester

    2011-09-01

    This paper describes a series of tools that were developed to achieve a balance in understanding LOWs and the human component of events (including accountability) as the INL continues its shift to a learning culture where people report, are accountable and are interested in making a positive difference - and want to report because information is handled correctly and the result benefits both the reporting individual and the organization. We present our model for understanding these interrelationships and the initiatives that were undertaken to improve overall performance.

  16. Accounting for baseline differences and measurement error in the analysis of change over time.

    Science.gov (United States)

    Braun, Julia; Held, Leonhard; Ledergerber, Bruno

    2014-01-15

    If change over time is compared in several groups, it is important to take baseline values into account so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as a covariate. By fitting a longitudinal mixed-effects model to all data including the baseline observations and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has been provided recently so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and time-dependent covariates can also be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question of whether co-infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy. Copyright © 2013 John Wiley & Sons, Ltd.
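    A rough sketch of the modelling strategy using simulated data and statsmodels: the baseline observations enter the longitudinal mixed model as outcomes, so the subject-level random effect plays the role of the underlying (error-free) baseline value. This illustrates the general approach, not the authors' extended estimator, and all data are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: repeated CD4-like counts per subject, with the
# baseline (time 0) included as an outcome, not as a covariate.
rng = np.random.default_rng(0)
n, times = 100, np.array([0, 1, 2, 3])
group = np.repeat(rng.integers(0, 2, n), len(times))
t = np.tile(times, n)
latent = 400 + 30 * t - 10 * t * group + np.repeat(rng.normal(0, 50, n), len(times))
y = latent + rng.normal(0, 30, len(latent))        # measurement error

df = pd.DataFrame(dict(y=y, t=t, group=group,
                       subj=np.repeat(np.arange(n), len(times))))
# The random intercept absorbs the underlying baseline; the group effect
# on slope is then estimated conditional on the latent baseline value.
fit = smf.mixedlm("y ~ t * group", df, groups=df["subj"]).fit()
print(fit.summary())
```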

  17. Estimating the acute health effects of coarse particulate matter accounting for exposure measurement error.

    Science.gov (United States)

    Chang, Howard H; Peng, Roger D; Dominici, Francesca

    2011-10-01

    In air pollution epidemiology, there is a growing interest in estimating the health effects of coarse particulate matter (PM) with aerodynamic diameter between 2.5 and 10 μm. Coarse PM concentrations can exhibit considerable spatial heterogeneity because the particles travel shorter distances and do not remain suspended in the atmosphere for an extended period of time. In this paper, we develop a modeling approach for estimating the short-term effects of air pollution in time series analysis when the ambient concentrations vary spatially within the study region. Specifically, our approach quantifies the error in the exposure variable by characterizing, on any given day, the disagreement in ambient concentrations measured across monitoring stations. This is accomplished by viewing monitor-level measurements as error-prone repeated measurements of the unobserved population average exposure. Inference is carried out in a Bayesian framework to fully account for uncertainty in the estimation of model parameters. Finally, by using different exposure indicators, we investigate the sensitivity of the association between coarse PM and daily hospital admissions based on a recent national multisite time series analysis. Among Medicare enrollees from 59 US counties between the period 1999 and 2005, we find a consistent positive association between coarse PM and same-day admission for cardiovascular diseases.
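    A toy simulation of the core measurement-error mechanism, treating monitor-level values as error-prone repeated measurements of the unobserved population average and applying a simple regression-calibration correction; the paper's actual inference is Bayesian, and all numbers here are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
days, monitors = 1000, 5

# True county-average coarse-PM exposure and monitor readings around it.
x_true = rng.gamma(4.0, 2.0, days)
x_mon = x_true[:, None] + rng.normal(0, 3.0, (days, monitors))

x_bar = x_mon.mean(axis=1)                       # error-prone daily exposure
y = 0.05 * x_true + rng.normal(0, 1.0, days)     # illustrative health outcome

beta_naive = np.polyfit(x_bar, y, 1)[0]
# Between-monitor disagreement estimates the error variance of the mean.
me_var = x_mon.var(axis=1, ddof=1).mean() / monitors
lam = (np.var(x_bar) - me_var) / np.var(x_bar)   # reliability ratio
print(f"naive = {beta_naive:.4f}, corrected = {beta_naive / lam:.4f}")
```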

  18. Accounting for measurement error in biomarker data and misclassification of subtypes in the analysis of tumor data.

    Science.gov (United States)

    Nevo, Daniel; Zucker, David M; Tamimi, Rulla M; Wang, Molin

    2016-12-30

    A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes, and to use the results for personalized preventive treatment. This task is usually carried out in two steps: clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses' Health Study to demonstrate the utility of our method. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Nonparametric Second-Order Theory of Error Propagation on Motion Groups.

    Science.gov (United States)

    Wang, Yunfeng; Chirikjian, Gregory S

    2008-01-01

    Error propagation on the Euclidean motion group arises in a number of areas such as in dead reckoning errors in mobile robot navigation and joint errors that accumulate from the base to the distal end of kinematic chains such as manipulators and biological macromolecules. We address error propagation in rigid-body poses in a coordinate-free way. In this paper we show how errors propagated by convolution on the Euclidean motion group, SE(3), can be approximated to second order using the theory of Lie algebras and Lie groups. We then show how errors that are small (but not so small that linearization is valid) can be propagated by a recursive formula derived here. This formula takes into account errors to second-order, whereas prior efforts only considered the first-order case. Our formulation is nonparametric in the sense that it will work for probability density functions of any form (not only Gaussians). Numerical tests demonstrate the accuracy of this second-order theory in the context of a manipulator arm and a flexible needle with bevel tip.
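    For intuition, a first-order version of pose-error propagation on SE(2), where the composed covariance is obtained by transporting the first covariance with the adjoint of the inverse second pose; the paper's contribution is the second-order extension of exactly this kind of formula on SE(3). The conventions below (right perturbations, twist ordering (vx, vy, w)) are one common choice, assumed for the sketch:

```python
import numpy as np

def adjoint_se2(pose):
    """Adjoint of an SE(2) pose (x, y, theta) acting on twists (vx, vy, w)."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s,  y],
                     [s,  c, -x],
                     [0,  0,  1]])

def compose(p1, p2):
    """SE(2) composition of poses given as (x, y, theta)."""
    x1, y1, t1 = p1
    x2, y2, t2 = p2
    c, s = np.cos(t1), np.sin(t1)
    return np.array([x1 + c * x2 - s * y2, y1 + s * x2 + c * y2, t1 + t2])

def propagate(p1, S1, p2, S2):
    """First-order covariance of p1*p2 for right-perturbed poses:
    S = Ad(p2^-1) S1 Ad(p2^-1)^T + S2."""
    x2, y2, t2 = p2
    c, s = np.cos(-t2), np.sin(-t2)
    p2_inv = np.array([-(c * x2 - s * y2), -(s * x2 + c * y2), -t2])
    A = adjoint_se2(p2_inv)
    return compose(p1, p2), A @ S1 @ A.T + S2
```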

  20. Medication Errors in Patients with Enteral Feeding Tubes in the Intensive Care Unit.

    Science.gov (United States)

    Sohrevardi, Seyed Mojtaba; Jarahzadeh, Mohammad Hossein; Mirzaei, Ehsan; Mirjalili, Mahtabalsadat; Tafti, Arefeh Dehghani; Heydari, Behrooz

    2017-01-01

    Most patients admitted to Intensive Care Units (ICU) have problems using oral medication or ingesting solid forms of drugs. Selecting the most suitable dosage form in such patients is a challenge. The current study was conducted to assess the frequency and types of errors in oral medication administration in patients with enteral feeding tubes or suffering from swallowing problems. A cross-sectional study was performed in the ICU of Shahid Sadoughi Hospital, Yazd, Iran. Patients were assessed for the incidence and types of medication errors occurring in the process of preparation and administration of oral medicines. Ninety-four patients were involved in this study and 10,250 administrations were observed. In total, 4753 errors occurred among the studied patients. The most commonly used drugs were pantoprazole tablet, piracetam syrup, and losartan tablet. A total of 128 different types of drugs and nine different oral pharmaceutical preparations were prescribed for the patients. Forty-one (35.34%) of the 116 different solid drugs (excluding effervescent tablets and powders) could have been substituted by liquid or injectable forms. The most common error was administration at the wrong time. Errors of wrong dose preparation and wrong administration accounted for 24.04% and 25.31% of all errors, respectively. In this study, at least three-fourths of the patients experienced medication errors. The occurrence of these errors can greatly impair the quality of the patients' pharmacotherapy, and more attention should be paid to this issue.

  1. An Empirical State Error Covariance Matrix for Batch State Estimation

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the
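    A literal, minimal reading of the construction in a batch weighted-least-squares setting: scale the theoretical covariance by the average weighted residual variance, so that whatever error actually shows up in the residuals is reflected in the state covariance. This is a sketch of the idea, not the paper's algorithm:

```python
import numpy as np

def empirical_covariance(H, W, residuals):
    """Batch WLS sketch. H: (m, n) design/measurement-partials matrix,
    W: (m, m) weight matrix, residuals: (m,) post-fit residuals.
    Scales the theoretical covariance (H^T W H)^-1 by the average weighted
    residual variance, so unmodeled error sources present in the residuals
    are carried into the state error covariance."""
    m, n = H.shape
    P_theory = np.linalg.inv(H.T @ W @ H)
    scale = (residuals @ W @ residuals) / (m - n)  # average weighted residual
    return P_theory * scale
```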

  2. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Branch-based model for the diameters of the pulmonary airways: accounting for departures from self-consistency and registration errors.

    Science.gov (United States)

    Neradilek, Moni B; Polissar, Nayak L; Einstein, Daniel R; Glenny, Robb W; Minard, Kevin R; Carson, James P; Jiao, Xiangmin; Jacob, Richard E; Cox, Timothy C; Postlethwait, Edward M; Corley, Richard A

    2012-06-01

    We examine a previously published branch-based approach for modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that take account of error. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys, and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist, we do not recommend using the self-consistency model, even as an approximation, as we have shown that it may likely lead to an incorrect representation of the diameter geometry. The new variance model can be used instead. Measurement error has an important impact on the estimated morphometry models and needs to be addressed in the analysis. Copyright © 2012 Wiley Periodicals, Inc.

  4. Accounting for model error in air quality forecasts: an application of 4DEnVar to the assimilation of atmospheric composition using QG-Chem 1.0

    Directory of Open Access Journals (Sweden)

    E. Emili

    2016-11-01

    Full Text Available Model errors play a significant role in air quality forecasts. Accounting for them in the data assimilation (DA) procedures is decisive for obtaining improved forecasts. We address this issue using a reduced-order coupled chemistry–meteorology model based on quasi-geostrophic dynamics and a detailed tropospheric chemistry mechanism, which we name QG-Chem. This model has been coupled to the software library for data assimilation Object Oriented Prediction System (OOPS) and used to assess the potential of the 4DEnVar algorithm for air quality analyses and forecasts. The assets of 4DEnVar include the possibility to deal with multivariate aspects of atmospheric chemistry and to account for model errors of a generic type. A simple diagnostic procedure for detecting model errors is proposed, based on the 4DEnVar analysis and one additional model forecast. A large number of idealized data assimilation experiments are shown for several chemical species of relevance for air quality forecasts (O3, NOx, CO and CO2) with very different atmospheric lifetimes and chemical couplings. Experiments are done both under a perfect model hypothesis and including model error through perturbation of surface chemical emissions. Some key elements of the 4DEnVar algorithm such as the ensemble size and localization are also discussed. A comparison with results of 3D-Var, widely used in operational centers, shows that, for some species, analysis and next-day forecast errors can be halved when model error is taken into account. This result was obtained using a small ensemble size, which remains affordable for most operational centers. We conclude that 4DEnVar has a promising potential for operational air quality models. We finally highlight areas that deserve further research for applying 4DEnVar to large-scale chemistry models, i.e., localization techniques, propagation of analysis covariance between DA cycles and treatment of chemical nonlinearities. QG-Chem can provide a
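    The ensemble-space minimization at the heart of 4DEnVar admits a compact sketch: the increment is sought in the span of the background perturbations, which turns the variational problem into a small quadratic solve. Scaling conventions (e.g. the 1/(Ne−1) factors) and localization are omitted, and all names are illustrative:

```python
import numpy as np

def fourdenvar_analysis(xb, Xp, d, Yp, R_inv):
    """One-outer-loop 4DEnVar increment in ensemble space (no localization).
    xb: background state (n,); Xp: state perturbations (n, Ne);
    d: innovations stacked over the assimilation window (m,);
    Yp: perturbations propagated through the (approximate) model and
    observation operator (m, Ne); R_inv: observation precision (m, m).

    Minimizes J(w) = 0.5 w'w + 0.5 (d - Yp w)' R_inv (d - Yp w)."""
    Ne = Xp.shape[1]
    A = np.eye(Ne) + Yp.T @ R_inv @ Yp       # Hessian in ensemble space
    w = np.linalg.solve(A, Yp.T @ R_inv @ d)
    return xb + Xp @ w                       # analysis state
```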

  5. Analysis of difficulties accounting and evaluating nuclear material of PWR fuel plant

    International Nuclear Information System (INIS)

    Zhang Min; Jue Ji; Liu Tianshu

    2013-01-01

    Background: Nuclear material accountancy must be implemented for nuclear facilities, as required by the regulator in China. Currently, there are some unresolved problems in the nuclear material accountancy of bulk-handling facilities. Purpose: The retention values (holdup) and measurement errors in the nuclear material accountancy of a Pressurized Water Reactor (PWR) fuel plant are analyzed to meet the regulatory requirements. Methods: On the basis of the nuclear material accounting and evaluation data of the PWR fuel plant, the ratios among the random error variance, long-term systematic error variance and short-term systematic error variance, and their contributions to the total error in the Material Unaccounted For (MUF) evaluation, are analyzed, taking into account the material retained in equipment and pipelines. Results: In the equipment and pipelines, the holdup estimation error and its share of the total error are no more than 5% and 1.5%, respectively, so the holdup estimate can be regarded as a constant in PWR nuclear material accountancy. The random error variances and the long-term and short-term systematic error variances of the overall measurement, analytical and sampling methods are also obtained, providing a valuable reference for nuclear material accountancy. Conclusion: In nuclear material accountancy, the retention value can be treated as a constant. The long-term systematic error is the dominant component of all the errors, especially of the overall measurement error and the sampling error. The proposed measures have been applied to the nuclear material accountancy of the PWR fuel plant and have improved its accountancy capability. (authors)

  6. 17 CFR 274.11b - Form N-3, registration statement of separate accounts organized as management investment companies.

    Science.gov (United States)

    2010-04-01

    ... statement of separate accounts organized as management investment companies. 274.11b Section 274.11b... accounts organized as management investment companies. Form N-3 shall be used as the registration statement... offer variable annuity contracts to register as management investment companies. This form shall also be...

  7. An Empirical State Error Covariance Matrix Orbit Determination Example

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2015-01-01

    is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon to horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors. No investigation of specific orbital elements is undertaken. The total vector analyses will look at the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.

  8. User interface for MAWST limit of error program

    International Nuclear Information System (INIS)

    Crain, B. Jr.

    1991-01-01

    This paper reports on a user-friendly interface which is being developed to aid in preparation of input data for the Los Alamos National Laboratory software module MAWST (Materials Accounting With Sequential Testing) used at Savannah River Site to propagate limits of error for facility material balances. The forms-based interface is being designed using traditional software project management tools and using the Ingres family of database management and application development products (products of Relational Technology, Inc.). The software will run on VAX computers (products of Digital Equipment Corporation) on which the VMS operating system and Ingres database management software are installed. Use of the interface software will reduce time required to prepare input data for calculations and also reduce errors associated with data preparation

  9. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    At spectacular events, a combination of component failure and human error is often found. The Rasmussen Report and the German Risk Assessment Study in particular show, for pressurised water reactors, that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  10. Clinical measuring system for the form and position errors of circular workpieces using optical fiber sensors

    Science.gov (United States)

    Tan, Jiubin; Qiang, Xifu; Ding, Xuemei

    1991-08-01

    Optical sensors have two notable advantages in modern precision measurement. One is that they can be used in nondestructive measurement because the sensors need not touch the surfaces of workpieces in measuring. The other one is that they can strongly resist electromagnetic interferences, vibrations, and noises, so they are suitable to be used in machining sites. But the drift of light intensity and the changing of the reflection coefficient at different measuring positions of a workpiece may have great influence on measured results. To solve the problem, a spectroscopic differential characteristic compensating method is put forward. The method can be used effectively not only in compensating the measuring errors resulted from the drift of light intensity but also in eliminating the influence to measured results caused by the changing of the reflection coefficient. Also, the article analyzes the possibility of and the means of separating data errors of a clinical measuring system for form and position errors of circular workpieces.

  11. Measurement errors in network load measurement: Effects on load management and accounting. Messfehler bei der Netzlasterfassung: Einfluss auf Lastregelung und Leistungsverrechnung

    Energy Technology Data Exchange (ETDEWEB)

    Bunten, B. (Teilbereich Lastfuehrung, ABB Netzleittechnik GmbH, Ladenburg (Germany)); Dib, R.N. (Fachhochschule Giessen-Friedberg, Bereich Elektrische Energietechnik, Friedberg (Germany))

    1994-05-16

    In electric power supply systems, continuous power measurement at the delivery points is necessary both for load management and for energy and power accounting. Electricity meters with pulse outputs are commonly used for both applications today. The authors quantify the resulting errors in peak-load measurement and load management as a function of the main influencing factors. (orig.)

  12. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems, and the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study, different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results, we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic...

  13. Accounting for Berkson and Classical Measurement Error in Radon Exposure Using a Bayesian Structural Approach in the Analysis of Lung Cancer Mortality in the French Cohort of Uranium Miners.

    Science.gov (United States)

    Hoffmann, Sabine; Rage, Estelle; Laurier, Dominique; Laroche, Pierre; Guihenneuc, Chantal; Ancelet, Sophie

    2017-02-01

    Many occupational cohort studies on underground miners have demonstrated that radon exposure is associated with an increased risk of lung cancer mortality. However, despite the deleterious consequences of exposure measurement error on statistical inference, these analyses traditionally do not account for exposure uncertainty. This might be due to the challenging nature of measurement error resulting from imperfect surrogate measures of radon exposure. Indeed, we are typically faced with exposure uncertainty in a time-varying exposure variable where both the type and the magnitude of error may depend on period of exposure. To address the challenge of accounting for multiplicative and heteroscedastic measurement error that may be of Berkson or classical nature, depending on the year of exposure, we opted for a Bayesian structural approach, which is arguably the most flexible method to account for uncertainty in exposure assessment. We assessed the association between occupational radon exposure and lung cancer mortality in the French cohort of uranium miners and found the impact of uncorrelated multiplicative measurement error to be of marginal importance. However, our findings indicate that the retrospective nature of exposure assessment that occurred in the earliest years of mining of this cohort as well as many other cohorts of underground miners might lead to an attenuation of the exposure-risk relationship. More research is needed to address further uncertainties in the calculation of lung dose, since this step will likely introduce important sources of shared uncertainty.

  14. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    Science.gov (United States)

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with
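    As a simplified illustration of the BLL dose-response model (ignoring the block random effects and heteroskedasticity that the paper handles in NLMIXED), the breakpoint can be estimated with a plain nonlinear least-squares fit; the data below are invented, not the study's:

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_line(x, plateau, slope, bp):
    """Broken-line linear (BLL) ascending model: rises with `slope` up to
    the breakpoint `bp`, flat at `plateau` beyond it."""
    return plateau - slope * np.maximum(bp - x, 0.0)

# Illustrative G:F vs SID Trp:Lys ratio data (invented values).
x = np.array([14.5, 15.0, 15.5, 16.0, 16.5, 17.0, 18.0, 19.0])
y = np.array([0.58, 0.60, 0.62, 0.64, 0.65, 0.65, 0.66, 0.65])

popt, pcov = curve_fit(broken_line, x, y, p0=[0.65, 0.02, 16.5])
bp, bp_se = popt[2], np.sqrt(pcov[2, 2])
print(f"breakpoint = {bp:.2f} +/- {1.96 * bp_se:.2f}")
```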

  15. Human Error Mechanisms in Complex Work Environments

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1988-01-01

    will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations...

  16. Composite Reliability and Standard Errors of Measurement for a Seven-Subtest Short Form of the Wechsler Adult Intelligence Scale-Revised.

    Science.gov (United States)

    Schretlen, David; And Others

    1994-01-01

    Composite reliability and standard errors of measurement were computed for prorated Verbal, Performance, and Full-Scale intelligence quotient (IQ) scores from a seven-subtest short form of the Wechsler Adult Intelligence Scale-Revised. Results with 1,880 adults (standardization sample) indicate that this form is as reliable as the complete test.…

  17. Shouting and providing: Forms of exchange in the drinking accounts of young Australians.

    Science.gov (United States)

    Murphy, Dean A; Hart, Aaron; Moore, David

    2017-07-01

    Australian health promotion campaigns encourage people to manage their alcohol consumption by avoiding involvement in a form of round drinking known as 'shouting'. We consider this individualist advice in light of our analysis of the social relations established by young people through collective drinking, in which we conceptualise friends, family and work colleagues as participants in complex networks of exchange. Data were gathered during in-depth, semistructured interviews and ethnographic fieldwork conducted in a socioeconomically disadvantaged outer suburb of Melbourne, Australia. The interview sample comprised nine men and seven women of diverse ethnic backgrounds, with a median age of 21 years. We identified two types of exchange-'shouting' and 'providing'-enacted by round drinking and other collective drinking practices. 'Shouting' is a form of balanced reciprocity in which participants take turns buying drinks for all others in the group. It is an immediate, direct exchange of alcoholic gifts that are equivalent in value. 'Providing' is characterised by indirect reciprocity in which the social aspects of the transaction are emphasised over the value of the goods exchanged. In addition to risking social exclusion, rejecting this form of collective drinking may also risk rejecting the other resources exchanged in this form of sharing, such as food, transport and accommodation. Exchanges of alcoholic gifts complicate the straightforward application of individualist health promotion advice. Social relations need to be taken into account when designing health promotion interventions that seek to reduce alcohol-related harm. [Murphy DA, Hart A, Moore D. Shouting and providing: Forms of exchange in the drinking accounts of young Australians. Drug Alcohol Rev 2017;36:442-448]. © 2016 Australasian Professional Society on Alcohol and other Drugs.

  18. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. Due to technical reasons....... In the current work we show how the correction for measurement error in GBSeq can also be applied in whole genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction...... for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data...

  19. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables

  20. Assessing errors related to characteristics of the items measured

    International Nuclear Information System (INIS)

    Liggett, W.

    1980-01-01

    Errors that are related to some intrinsic property of the items measured are often encountered in nuclear material accounting. An example is the error in nondestructive assay measurements caused by uncorrected matrix effects. Nuclear material accounting requires for each materials type one measurement method for which bounds on these errors can be determined. If such a method is available, a second method might be used to reduce costs or to improve precision. If the measurement error for the first method is longer-tailed than Gaussian, then precision might be improved by measuring all items by both methods. 8 refs

  1. Analyzing temozolomide medication errors: potentially fatal.

    Science.gov (United States)

    Letarte, Nathalie; Gabay, Michael P; Bressler, Linda R; Long, Katie E; Stachnik, Joan M; Villano, J Lee

    2014-10-01

    The EORTC-NCIC regimen for glioblastoma requires different dosing of temozolomide (TMZ) during radiation and maintenance therapy. This complexity is exacerbated by the availability of multiple TMZ capsule strengths. TMZ is an alkylating agent and the major toxicity of this class is dose-related myelosuppression. Inadvertent overdose can be fatal. The websites of the Institute for Safe Medication Practices (ISMP), and the Food and Drug Administration (FDA) MedWatch database were reviewed. We searched the MedWatch database for adverse events associated with TMZ and obtained all reports including hematologic toxicity submitted from 1st November 1997 to 30th May 2012. The ISMP describes errors with TMZ resulting from the positioning of information on the label of the commercial product. The strength and quantity of capsules on the label were in close proximity to each other, and this has been changed by the manufacturer. MedWatch identified 45 medication errors. Patient errors were the most common, accounting for 21 or 47% of errors, followed by dispensing errors, which accounted for 13 or 29%. Seven reports or 16% were errors in the prescribing of TMZ. Reported outcomes ranged from reversible hematological adverse events (13%), to hospitalization for other adverse events (13%) or death (18%). Four error reports lacked detail and could not be categorized. Although the FDA issued a warning in 2003 regarding fatal medication errors and the product label warns of overdosing, errors in TMZ dosing occur for various reasons and involve both healthcare professionals and patients. Overdosing errors can be fatal.

  2. Pathways to extinction: beyond the error threshold.

    Science.gov (United States)

    Manrubia, Susanna C; Domingo, Esteban; Lázaro, Ester

    2010-06-27

    Since the introduction of the quasispecies and the error catastrophe concepts for molecular evolution by Eigen and their subsequent application to viral populations, increased mutagenesis has become a common strategy to cause the extinction of viral infectivity. Nevertheless, the high complexity of virus populations has shown that viral extinction can occur through several other pathways apart from crossing an error threshold. Increases in the mutation rate enhance the appearance of defective forms and promote the selection of mechanisms that are able to counteract the accelerated appearance of mutations. Current models of viral evolution take into account more realistic scenarios that consider compensatory and lethal mutations, a highly redundant genotype-to-phenotype map, rough fitness landscapes relating phenotype and fitness, and where phenotype is described as a set of interdependent traits. Further, viral populations cannot be understood without specifying the characteristics of the environment where they evolve and adapt. Altogether, it turns out that the pathways through which viral quasispecies go extinct are multiple and diverse.

  3. Spacecraft and propulsion technician error

    Science.gov (United States)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  4. Evaluation influence of machining parameters on shape form errors in turning of machine parts clamped in the chuck with adaptive jaws

    Directory of Open Access Journals (Sweden)

    I.V. Lutsiv

    2017-12-01

    Full Text Available The paper derives the dependence of the geometric form deviation of a machined part's cross section on the clamping diameter, as well as on cutting speed, feed and cutting depth, in semi-finish machining. A single-factor analysis of the dependence of circular-form deviation on the machining-condition values is performed. The laboratory experiment results are analyzed using a special software application package. A dispersion analysis, including evaluation of the main linear and quadratic effects, is given, and a simplified model of the experiment results is obtained. It yields an empirical dependence describing the influence of the cutting conditions and the clamping diameter on shape-error formation (dynamic error). It is found that, to obtain the necessary form accuracy when machining with a lathe chuck equipped with adaptive clamping jaws, it is desirable to control the most statistically significant factors, namely the cutting depth and feed.

  5. Critical evidence for the prediction error theory in associative learning.

    Science.gov (United States)

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-03-10

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning.

  6. 17 CFR 239.17a - Form N-3, registration statement for separate accounts organized as management investment companies.

    Science.gov (United States)

    2010-04-01

    ... statement for separate accounts organized as management investment companies. 239.17a Section 239.17a... accounts organized as management investment companies. Form N-3 shall be used for registration under the... register under the Investment Company Act of 1940 as management investment companies, and certain other...

  7. Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2017-07-01

    The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices have been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error as genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
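    The overdispersion being corrected for is easy to reproduce: under a null of no structure, the leading eigenvalue of an estimated covariance matrix still exceeds 1 by sampling error alone. The brute-force simulation below plays the role that the scaled TW distribution plays analytically in the paper; the dimensions are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
n_traits, n_lines, n_sims = 8, 50, 2000

# Null model: identity covariance, so no eigenvalue truly exceeds 1.
top = np.empty(n_sims)
for i in range(n_sims):
    Z = rng.normal(size=(n_lines, n_traits))
    top[i] = np.linalg.eigvalsh(np.cov(Z, rowvar=False)).max()

# 95% critical value under the null; an observed leading eigenvalue
# must exceed this before it is read as real (genetic) variance.
print(f"null 95th percentile of lambda_1: {np.quantile(top, 0.95):.2f}")
```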

  8. Incorporating measurement error in n=1 psychological autoregressive modeling

    NARCIS (Netherlands)

    Schuurman, Noemi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive

  9. Importance of interpolation and coincidence errors in data fusion

    Directory of Open Access Journals (Sweden)

    S. Ceccherini

    2018-02-01

    Full Text Available The complete data fusion (CDF method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on number of degrees of freedom and errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.

  10. Importance of interpolation and coincidence errors in data fusion

    Science.gov (United States)

    Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana

    2018-02-01

    The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on number of degrees of freedom and errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.

  11. PROBLEMATIC AREAS OF ACCOUNTING: SOME EVIDENCE FROM THE CZECH REPUBLIC

    Directory of Open Access Journals (Sweden)

    Marie Paseková

    2018-03-01

    Full Text Available Accounting is a tool of evidence for reporting assets, equities, liabilities, expenses, revenues and profits or losses of the accounting unit. This information is collectively presented in financial statements, which are an essential source of information for external subjects. Nevertheless, the resulting financial statements are relevant only to the extent to which the information charged in the accounting is correct and error-free. For this reason, the aim of this article is to examine the individual areas recorded in accounting in terms of their possible bias due to an error. The objective of this article is to determine which of the areas of accounting are riskiest in relation to the occurrence of errors, and this in connection to the existence of an important foreign partner of the accounting unit. The risk of of the error occurrence is examined from the accountants’ perspective. For this purpose, a questionnaire survey was used for data collection focusing on areas that are considered to be the most important by the accountants and the areas which are the most problematic. The receivables, expenses and revenues were indicated as the most significant. The areas of long-term assets, financial assets and inventories appear to be problematic due to tax impacts. Expenses, revenues, accruals and deferrals appear to be problematic due to issues with correct valuation. The difference in perception of risk of the error occurrence in relation to the existence of a foreign business partner was proven only for some accounting areas such as liabilities, inventories or expenses.

  12. A measurement strategy and an error-compensation model for the on-machine laser measurement of large-scale free-form surfaces

    International Nuclear Information System (INIS)

    Li, Bin; Li, Feng; Liu, Hongqi; Cai, Hui; Mao, Xinyong; Peng, Fangyu

    2014-01-01

    This study presents a novel measurement strategy and an error-compensation model for the measurement of large-scale free-form surfaces in on-machine laser measurement systems. To improve the measurement accuracy, the effects of the scan depth, surface roughness, incident angle and azimuth angle on the measurement results were investigated experimentally, and a practical measurement strategy considering the position and orientation of the sensor is presented. Also, a semi-quantitative model based on geometrical optics is proposed to compensate for the measurement error associated with the incident angle. The normal vector of the measurement point is determined using a cross-curve method from the acquired surface data. Then, the azimuth angle and incident angle are calculated to inform the measurement strategy and error-compensation model, respectively. The measurement strategy and error-compensation model are verified through the measurement of a large propeller blade on a heavy machine tool in a factory environment. The results demonstrate that the strategy and the model are effective in increasing the measurement accuracy. (paper)
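
    A minimal sketch of the cross-curve idea, under the assumption that the normal is taken as the cross product of tangents estimated along two crossing scan curves (the paper's exact formulation may differ); the sample points, and the laser axis taken along z, are hypothetical.

```python
import numpy as np

def surface_normal(p_u_prev, p_u_next, p_v_prev, p_v_next):
    """Estimate the unit surface normal at a measured point from two
    crossing scan curves: central-difference tangents, then cross product."""
    t_u = np.asarray(p_u_next, float) - np.asarray(p_u_prev, float)
    t_v = np.asarray(p_v_next, float) - np.asarray(p_v_prev, float)
    n = np.cross(t_u, t_v)
    return n / np.linalg.norm(n)

n = surface_normal([0, 0, 0], [1, 0, 0.1], [0, -1, 0.05], [0, 1, 0.0])
# Incident angle between an assumed laser axis along z and the normal,
# which would then feed the error-compensation model.
incident_angle = np.degrees(np.arccos(abs(n @ np.array([0.0, 0.0, 1.0]))))
print(f"incident angle = {incident_angle:.1f} deg")
```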

  13. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim

    2011-01-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. © 2011 IEEE.

  14. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel

    2011-06-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. © 2011 IEEE.
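
    For intuition, the Monte Carlo sketch below reproduces the per-hop building block of such analyses, the BPSK bit-error rate over an AWGN channel, and checks it against the textbook closed form Q(sqrt(2·SNR)); the end-to-end analysis in the paper additionally accounts for fading statistics and relay selection, which are not modeled here.

```python
import numpy as np
from scipy.special import erfc

def bpsk_ber_mc(snr_db, n_bits=1_000_000, seed=2):
    """Monte Carlo BER of BPSK over AWGN (sketch)."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    bits = rng.integers(0, 2, n_bits)
    tx = 2.0 * bits - 1.0                      # map {0,1} -> {-1,+1}
    rx = tx + rng.normal(scale=np.sqrt(1.0 / (2.0 * snr)), size=n_bits)
    return np.mean((rx > 0).astype(int) != bits)

snr_db = 6.0
print(bpsk_ber_mc(snr_db))                             # simulated BER
print(0.5 * erfc(np.sqrt(10.0 ** (snr_db / 10.0))))    # theory: Q(sqrt(2*SNR))
```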

  15. Learner Corpora without Error Tagging

    Directory of Open Access Journals (Sweden)

    Rastelli, Stefano

    2009-01-01

    Full Text Available The article explores the possibility of adopting a form-to-function perspective when annotating learner corpora in order to get deeper insights about systematic features of interlanguage. A split between forms and functions (or categories) is desirable in order to avoid the "comparative fallacy" and because – especially in basic varieties – forms may precede functions (e.g., what resembles a "noun" might have a different function, or a function may show up in unexpected forms). In the computer-aided error analysis tradition, all items produced by learners are traced to a grid of error tags which is based on the categories of the target language. In contrast, we believe it is possible to record and make retrievable both words and sequences of characters independently of their functional-grammatical label in the target language. For this purpose, at the University of Pavia we adapted a probabilistic POS tagger designed for L1 for use on L2 data. Despite the criticism that this operation can raise, we found that it is better to work with "virtual categories" rather than with errors. The article outlines the theoretical background of the project and shows some examples in which some potential of SLA-oriented (non-error-based) tagging will possibly be made clearer.

  16. Interpreting the change detection error matrix

    NARCIS (Netherlands)

    Oort, van P.A.J.

    2007-01-01

    Two different matrices are commonly reported in assessments of change detection accuracy: (1) single-date error matrices and (2) binary change/no-change error matrices. The third, less common form of reporting is the transition error matrix. This paper discusses the relation between these matrices.
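
    The distinction can be made concrete with a toy example: from classified maps at two dates and a reference truth, the binary change/no-change error matrix cross-tabulates mapped change against reference change. A minimal sketch with made-up one-dimensional "maps":

```python
import numpy as np

mapped_t1 = np.array([0, 0, 1, 1, 0, 1])   # classified map, date 1
mapped_t2 = np.array([0, 1, 1, 0, 0, 1])   # classified map, date 2
ref_t1    = np.array([0, 0, 1, 1, 1, 1])   # reference truth, date 1
ref_t2    = np.array([0, 1, 1, 1, 1, 1])   # reference truth, date 2

mapped_change = mapped_t1 != mapped_t2
ref_change    = ref_t1 != ref_t2

# Rows: mapped no-change/change; columns: reference no-change/change.
matrix = np.zeros((2, 2), dtype=int)
for m, r in zip(mapped_change, ref_change):
    matrix[int(m), int(r)] += 1
print(matrix)
```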

  17. Potential for medical error: Incorrectly completed request forms for ...

    African Journals Online (AJOL)

    Thyroid-stimulating hormone (TSH) is a first-line thyroid function test and, if abnormal, reflex thyroxine (T4) or tri-iodothyronine (T3) testing is requested, depending on clinical and medication data provided. Interpretative comments are added to all TFT results. Objectives. In view of the paucity of articles describing such errors ...

  18. The Pupillary Orienting Response Predicts Adaptive Behavioral Adjustment after Errors.

    Directory of Open Access Journals (Sweden)

    Peter R Murphy

    Full Text Available Reaction time (RT) is commonly observed to slow down after an error. This post-error slowing (PES) has been thought to arise from the strategic adoption of a more cautious response mode following deployment of cognitive control. Recently, an alternative account has suggested that PES results from interference due to an error-evoked orienting response. We investigated whether error-related orienting may in fact be a precursor to adaptive post-error behavioral adjustment when the orienting response resolves before subsequent trial onset. We measured pupil dilation, a prototypical measure of autonomic orienting, during performance of a choice RT task with long inter-stimulus intervals, and found that the trial-by-trial magnitude of the error-evoked pupil response positively predicted both PES magnitude and the likelihood that the following response would be correct. These combined findings suggest that the magnitude of the error-related orienting response predicts an adaptive change of response strategy following errors, and thereby promote a reconciliation of the orienting and adaptive control accounts of PES.

  19. Revaluation and the internal audit limits that influence the financial-accounting activity

    OpenAIRE

    Dragos Laurentiu Zaharia; Luminita Dragne; Doina Maria Tilea

    2014-01-01

    Regarding the financial-accounting system, the internal audit aims at understanding the accounting and control systems and at finding and correcting errors. The auditor must inform the general management about the obvious errors within the entity. The board must know everything concerning these errors.

  20. Design of an error-free nondestructive plutonium assay facility

    International Nuclear Information System (INIS)

    Moore, C.B.; Steward, W.E.

    1987-01-01

    An automated, at-line nondestructive assay (NDA) laboratory is installed in facilities recently constructed at the Savannah River Plant. The laboratory will enhance nuclear materials accounting in new plutonium scrap and waste recovery facilities. The advantages of at-line NDA operations will not be realized if results are clouded by errors in analytical procedures, sample identification, record keeping, or techniques for extracting samples from process streams. Minimization of such errors has been a primary design objective for the new facility. Concepts for achieving that objective include mechanizing the administrative tasks of scheduling activities in the laboratory, identifying samples, recording and storing assay data, and transmitting results information to process control and materials accounting functions. These concepts have been implemented in an analytical computer system that is programmed to avoid the obvious sources of error encountered in laboratory operations. The laboratory computer exchanges information with process control and materials accounting computers, transmitting results information and obtaining process data and accounting information as required to guide process operations and maintain current records of materials flow through the new facility

  1. Accountability in Health Care

    DEFF Research Database (Denmark)

    Vrangbæk, Karsten; Byrkjeflot, Haldor

    2016-01-01

    The debate on accountability within the public sector has been lively in the past decade. Significant progress has been made in developing conceptual frameworks and typologies for characterizing different features and functions of accountability. However, there is a lack of sector specific...... adjustment of such frameworks. In this article we present a framework for analyzing accountability within health care. The paper makes use of the concept of "accountability regime" to signify the combination of different accountability forms, directions and functions at any given point in time. We show...... that reforms can introduce new forms of accountability, change existing accountability relations or change the relative importance of different accountability forms. They may also change the dominant direction and shift the balance between different functions of accountability. We further suggest...

  2. Incorporating measurement error in n = 1 psychological autoregressive modeling

    Science.gov (United States)

    Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
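
    The attenuation bias the abstract describes is easy to reproduce: add white measurement noise to a latent AR(1) series and the naive lag-1 autocorrelation shrinks toward zero by the reliability factor var(x)/(var(x)+var(e)). A minimal sketch (fitting the AR+WN or ARMA corrections would require a state-space or Bayesian toolkit, not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n = 0.6, 5000                 # true autoregressive parameter, length
x = np.zeros(n)
for t in range(1, n):              # latent AR(1) process
    x[t] = phi * x[t - 1] + rng.normal()
y = x + rng.normal(size=n)         # observed series with measurement error

phi_naive = np.corrcoef(y[:-1], y[1:])[0, 1]   # attenuated estimate
print(f"true phi = {phi}, naive estimate = {phi_naive:.2f}")  # ~0.37
```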

  3. BEAM-FORMING ERRORS IN MURCHISON WIDEFIELD ARRAY PHASED ARRAY ANTENNAS AND THEIR EFFECTS ON EPOCH OF REIONIZATION SCIENCE

    Energy Technology Data Exchange (ETDEWEB)

    Neben, Abraham R.; Hewitt, Jacqueline N.; Dillon, Joshua S.; Goeke, R.; Morgan, E. [Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Bradley, Richard F. [Dept. of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA, 22904 (United States); Bernardi, G. [Square Kilometre Array South Africa (SKA SA), Cape Town 7405 (South Africa); Bowman, J. D. [School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287 (United States); Briggs, F. [Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611 (Australia); Cappallo, R. J.; Corey, B. E.; Lonsdale, C. J.; McWhirter, S. R. [MIT Haystack Observatory, Westford, MA 01886 (United States); Deshpande, A. A. [Raman Research Institute, Bangalore 560080 (India); Greenhill, L. J. [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Hazelton, B. J.; Morales, M. F. [Department of Physics, University of Washington, Seattle, WA 98195 (United States); Johnston-Hollitt, M. [School of Chemical and Physical Sciences, Victoria University of Wellington, Wellington 6140 (New Zealand); Kaplan, D. L. [Department of Physics, University of Wisconsin–Milwaukee, Milwaukee, WI 53201 (United States); Mitchell, D. A. [CSIRO Astronomy and Space Science (CASS), P.O. Box 76, Epping, NSW 1710 (Australia); and others

    2016-03-20

    Accurate antenna beam models are critical for radio observations aiming to isolate the redshifted 21 cm spectral line emission from the Dark Ages and the Epoch of Reionization (EOR) and unlock the scientific potential of 21 cm cosmology. Past work has focused on characterizing mean antenna beam models using either satellite signals or astronomical sources as calibrators, but antenna-to-antenna variation due to imperfect instrumentation has remained unexplored. We characterize this variation for the Murchison Widefield Array (MWA) through laboratory measurements and simulations, finding typical deviations of the order of ±10%–20% near the edges of the main lobe and in the sidelobes. We consider the ramifications of these results for image- and power spectrum-based science. In particular, we simulate visibilities measured by a 100 m baseline and find that using an otherwise perfect foreground model, unmodeled beam-forming errors severely limit foreground subtraction accuracy within the region of Fourier space contaminated by foreground emission (the “wedge”). This region likely contains much of the cosmological signal, and accessing it will require measurement of per-antenna beam patterns. However, unmodeled beam-forming errors do not contaminate the Fourier space region expected to be free of foreground contamination (the “EOR window”), showing that foreground avoidance remains a viable strategy.

  4. Bootstrap-Based Improvements for Inference with Clustered Errors

    OpenAIRE

    Doug Miller; A. Colin Cameron; Jonah B. Gelbach

    2006-01-01

    Microeconometrics researchers have increasingly realized the essential need to account for any within-group dependence in estimating standard errors of regression parameter estimates. The typical preferred solution is to calculate cluster-robust or sandwich standard errors that permit quite general heteroskedasticity and within-cluster error correlation, but presume that the number of clusters is large. In applications with few (5-30) clusters, standard asymptotic tests can over-reject considerably.
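
    A sketch of one bootstrap in this family, the pairs cluster bootstrap, which resamples whole clusters with replacement and re-estimates the slope each time (the wild cluster bootstrap favored in this literature perturbs residuals instead; the data and names here are made up):

```python
import numpy as np

def cluster_bootstrap_se(y, x, clusters, n_boot=999, seed=3):
    """Pairs cluster bootstrap standard error for the slope of a
    one-regressor OLS fit."""
    rng = np.random.default_rng(seed)
    ids = np.unique(clusters)
    slopes = []
    for _ in range(n_boot):
        draw = rng.choice(ids, size=len(ids), replace=True)
        idx = np.concatenate([np.flatnonzero(clusters == g) for g in draw])
        slopes.append(np.polyfit(x[idx], y[idx], 1)[0])
    return np.std(slopes, ddof=1)

rng = np.random.default_rng(0)
clusters = np.repeat(np.arange(10), 20)      # deliberately few clusters
u = rng.normal(size=10)[clusters]            # within-cluster error correlation
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + u + rng.normal(size=200)
print(f"cluster-bootstrap SE of slope: {cluster_bootstrap_se(y, x, clusters):.3f}")
```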

  5. Accounting as Myth Maker

    Directory of Open Access Journals (Sweden)

    Kathy Rudkin

    2007-06-01

    Full Text Available Accounting is not only a technical apparatus, but also manifests a societal dimension. This paper proposes that accounting is a protean and complex form of myth making, and as such forms a cohesive tenet in societies. It is argued that there are intrinsic parallels between the theoretical attributes of myth and accounting practice, and that these mythical characteristics sustain the existence and acceptance of accounting and its consequences in societies over time. A theoretical exploration of accounting as a form of myth reveals accounting as pluralistic and culturally sensitive. Such an analysis challenges theoretical explanations of accounting that are presented as a "grand narrative" universal understanding of accounting. Manifestations of the attributes of myth are described in the calculus and artefacts of accounting practice to demonstrate how accounting stories and beliefs are used as a form of myth by individuals to inform and construe their world picture.

  6. Role of Management Accounting in Accounting in General

    OpenAIRE

    Oleksandr Panadiy

    2015-01-01

    The article elucidates the main scientific approaches to understanding the essence of management accounting. From studies by domestic scientists it is concluded that most of them classify management accounting under the competence of accounting in general. It is shown that the first institutional factors concomitant with the division of the general theory of accounting into financial and management components were the development of forms of business and the implementation of accounting standards and manuals...

  7. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton's method.
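
    The book's setting can be pictured with a toy example: run gradient descent where every gradient evaluation carries a bounded computational error, and the iterates settle into a neighborhood of the minimizer whose radius scales with the error bound. An illustrative sketch, not one of the book's algorithms:

```python
import numpy as np

def noisy_gradient_descent(grad, x0, step=0.1, delta=1e-3, n_iter=200, seed=4):
    """Gradient descent with each gradient evaluation perturbed by a
    computational error bounded in magnitude by delta."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        error = rng.uniform(-delta, delta, size=x.shape)  # bounded error
        x = x - step * (grad(x) + error)
    return x

# Minimize f(x) = ||x||^2 / 2 (exact minimizer: the origin).
x_final = noisy_gradient_descent(lambda x: x, np.ones(3))
print(np.linalg.norm(x_final))   # small, on the order of delta
```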

  8. The capital structure impact on forming company’s accounting policy

    OpenAIRE

    Česnavičiūtė, Giedrė

    2011-01-01

    KEYWORDS: Accounting policy, accounting policy choice, disclosure of accounting policies, capital structure, financial leverage, legitimacy theory, agency theory, signal theory, stakeholder theory. The optimal capital structure has a major impact on achieving the company's goals and ensuring its financial stability. The company's financial condition also depends on how its accounting policies are formed. This paper investigates the correlation between the company's capital structure...

  9. Dopamine reward prediction error coding

    OpenAIRE

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards - an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less...

  10. Human error mechanisms in complex work environments

    International Nuclear Information System (INIS)

    Rasmussen, J.

    1988-01-01

    Human error taxonomies have been developed from analysis of industrial incident reports as well as from psychological experiments. In this paper the results of the two approaches are reviewed and compared. It is found, in both cases, that a fairly small number of basic psychological mechanisms will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations. The implications for system safety are briefly mentioned, together with the implications for system design. (author)

  11. Human error mechanisms in complex work environments

    International Nuclear Information System (INIS)

    Rasmussen, Jens; Danmarks Tekniske Hoejskole, Copenhagen)

    1988-01-01

    Human error taxonomies have been developed from analysis of industrial incident reports as well as from psychological experiments. In this paper the results of the two approaches are reviewed and compared. It is found, in both cases, that a fairly small number of basic psychological mechanisms will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations. The implications for system safety are briefly mentioned, together with the implications for system design. (author)

  12. Designing new nuclear chemical processing plants for safeguards accountability

    International Nuclear Information System (INIS)

    Sprouse, K.M.

    1987-01-01

    New nuclear chemical processing plants will be required to develop material accountability control limits from measurement error propagation analysis rather than historical inventory difference data as performed in the past. In order for measurement error propagation methods to be viable alternatives, process designers must ensure that two nondimensional accountability parameters are maintained below 0.1. These parameters are ratios between the material holdup increase and the variance in inventory difference measurement uncertainty. Measurement uncertainty data for use in error propagation analysis is generally available in the open literature or readily derived from instrument calibration data. However, nuclear material holdup data has not been adequately developed for use in the material accountability design process. Long duration development testing on isolated unit operations is required to generate this necessary information
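
    A toy propagation step, using assumed numbers, shows the mechanics: the inventory-difference (ID) variance is the sum of the variances of the independent measured balance terms, and a nondimensional design ratio of the kind described above is then checked against the 0.1 target. The exact definition of the ratio and all values and names here are illustrative, not the source's.

```python
import numpy as np

var_receipts, var_shipments = 0.04, 0.05        # measurement variances, kg^2
var_beginning_inv, var_ending_inv = 0.02, 0.02
var_id = var_receipts + var_shipments + var_beginning_inv + var_ending_inv

holdup_increase = 0.01                          # per balance period, assumed
ratio = holdup_increase / var_id                # design parameter (illustrative)
print(f"var(ID) = {var_id:.2f} kg^2, parameter = {ratio:.3f} (target < 0.1)")
```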

  13. What do children with specific language impairment do with multiple forms of DO?

    Science.gov (United States)

    Rice, Mabel L; Blossom, Megan

    2013-02-01

    This study was designed to examine the early usage patterns of multiple grammatical functions of DO in children with and without specific language impairment (SLI). Children's use of this plurifunctional form is informative for evaluation of theoretical accounts of the deficit in SLI. Spontaneous uses of multiple functions of DO were analyzed in language samples from 89 children: 37 children with SLI, ages 5;0-5;6 (years;months); 37 age-equivalent children; and 15 language-equivalent children, ages 2;8-4;10. Proportion correct and types of errors produced were analyzed for each function of DO. Children with SLI had significantly lower levels of proportion correct auxiliary DO use compared to both control groups, with omissions of the DO form as the primary error type. Children with SLI had near-ceiling performance on lexical DO and elliptical DO, similar to both control groups. Plurifunctionality is not problematic: Children acquire each function of DO separately. Grammatical properties of the function, rather than surface properties of the form, dictate whether children with SLI will have difficulty using the word. Overall, these results support the extended optional infinitive account of SLI and the use of auxiliary DO omissions as part of a clinical marker for SLI.

  14. Error-related anterior cingulate cortex activity and the prediction of conscious error awareness

    Directory of Open Access Journals (Sweden)

    Catherine eOrr

    2012-06-01

    Full Text Available Research examining the neural mechanisms associated with error awareness has consistently identified dorsal anterior cingulate cortex (ACC) activity as necessary but not predictive of conscious error detection. Two recent studies (Steinhauser and Yeung, 2010; Wessel et al. 2011) have found a contrary pattern of greater dorsal ACC activity (in the form of the error-related negativity) during detected errors, but suggested that the greater activity may instead reflect task influences (e.g., response conflict, error probability) and/or individual variability (e.g., statistical power). We re-analyzed fMRI BOLD data from 56 healthy participants who had previously been administered the Error Awareness Task, a motor Go/No-go response inhibition task in which subjects make errors of commission of which they are aware (Aware errors) or unaware (Unaware errors). Consistent with previous data, the activity in a number of cortical regions was predictive of error awareness, including bilateral inferior parietal and insula cortices; however, in contrast to previous studies, including our own smaller sample studies using the same task, error-related dorsal ACC activity was significantly greater during aware errors when compared to unaware errors. While the significantly faster RT for aware errors (compared to unaware) was consistent with the hypothesis of higher response conflict increasing ACC activity, we could find no relationship between dorsal ACC activity and the error RT difference. The data suggest that individual variability in error awareness is associated with error-related dorsal ACC activity, and therefore this region may be important to conscious error detection, but it remains unclear what task and individual factors influence error awareness.

  15. [Medical errors: inevitable but preventable].

    Science.gov (United States)

    Giard, R W

    2001-10-27

    Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention to and improvement of the organisational aspects of error are far more important than litigating against the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.

  16. New approaches towards information materiality in accounting

    OpenAIRE

    Карзаева, Наталия Николаевна

    2015-01-01

    A theoretically substantiated method is proposed for calculating the level of the materiality factor that takes into account the interests of the persons making decisions on the basis of financial indicators, together with formulas for calculating the critical level of error above which the data of the accounting reports cannot be accepted as reliable and appropriate corrections must be introduced into the accounting reports. Materials are also presented on the procedure for making corrections in accounting records and accounting reports...

  17. Taking human error into account in the design of nuclear reactor centres

    International Nuclear Information System (INIS)

    Prouillac; Lerat; Janoir.

    1982-05-01

    The role of the operator in the centralized management of pressurized water reactors is studied. Different types of human error likely to arise, the means of their prevention and methods of mitigating their consequences are presented. Some possible improvements are outlined

  18. Bit error rate analysis of free-space optical communication over general Malaga turbulence channels with pointing error

    KAUST Repository

    Alheadary, Wael Ghazy

    2016-12-24

    In this work, we present the bit error rate (BER) and achievable spectral efficiency (ASE) performance of a free-space optical (FSO) link with pointing errors based on intensity modulation/direct detection (IM/DD) and heterodyne detection over the general Malaga turbulence channel. More specifically, we present exact closed-form expressions for adaptive and non-adaptive transmission. The closed-form expressions are presented in terms of generalized power series of the Meijer's G-function. Moreover, asymptotic closed-form expressions are provided to validate our work. In addition, all the presented analytical results are illustrated using a selected set of numerical results.

  19. Female residents experiencing medical errors in general internal medicine: a qualitative study.

    Science.gov (United States)

    Mankaka, Cindy Ottiger; Waeber, Gérard; Gachoud, David

    2014-07-10

    Doctors, especially doctors-in-training such as residents, make errors. They have to face the consequences even though today's approach to errors emphasizes systemic factors. Doctors' individual characteristics play a role in how medical errors are experienced and dealt with. The role of gender has previously been examined in a few quantitative studies that have yielded conflicting results. In the present study, we sought to qualitatively explore the experience of female residents with respect to medical errors. In particular, we explored the coping mechanisms displayed after an error. This study took place in the internal medicine department of a Swiss university hospital. Within a phenomenological framework, semi-structured interviews were conducted with eight female residents in general internal medicine. All interviews were audiotaped, fully transcribed, and thereafter analyzed. Seven main themes emerged from the interviews: (1) A perception that there is an insufficient culture of safety and error; (2) The perceived main causes of errors, which included fatigue, work overload, inadequate level of competences in relation to assigned tasks, and dysfunctional communication; (3) Negative feelings in response to errors, which included different forms of psychological distress; (4) Variable attitudes of the hierarchy toward residents involved in an error; (5) Talking about the error, as the core coping mechanism; (6) Defensive and constructive attitudes toward one's own errors; and (7) Gender-specific experiences in relation to errors. Such experiences consisted in (a) perceptions that male residents were more confident and therefore less affected by errors than their female counterparts and (b) perceptions that sexist attitudes among male supervisors can occur and worsen an already painful experience. This study offers an in-depth account of how female residents specifically experience and cope with medical errors. Our interviews with female residents convey the

  20. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  1. The epidemiology and type of medication errors reported to the National Poisons Information Centre of Ireland.

    Science.gov (United States)

    Cassidy, Nicola; Duggan, Edel; Williams, David J P; Tracey, Joseph A

    2011-07-01

    Medication errors are widely reported for hospitalised patients, but limited data are available for medication errors that occur in community-based and clinical settings. Epidemiological data from poisons information centres enable characterisation of trends in medication errors occurring across the healthcare spectrum. The objective of this study was to characterise the epidemiology and type of medication errors reported to the National Poisons Information Centre (NPIC) of Ireland. A 3-year prospective study on medication errors reported to the NPIC was conducted from 1 January 2007 to 31 December 2009 inclusive. Data on patient demographics, enquiry source, location, pharmaceutical agent(s), type of medication error, and treatment advice were collated from standardised call report forms. Medication errors were categorised as (i) prescribing error (i.e. physician error), (ii) dispensing error (i.e. pharmacy error), and (iii) administration error involving the wrong medication, the wrong dose, wrong route, or the wrong time. Medication errors were reported for 2348 individuals, representing 9.56% of total enquiries to the NPIC over 3 years. In total, 1220 children and adolescents under 18 years of age and 1128 adults (≥ 18 years old) experienced a medication error. The majority of enquiries were received from healthcare professionals, but members of the public accounted for 31.3% (n = 736) of enquiries. Most medication errors occurred in a domestic setting (n = 2135), but a small number occurred in healthcare facilities: nursing homes (n = 110, 4.68%), hospitals (n = 53, 2.26%), and general practitioner surgeries (n = 32, 1.36%). In children, medication errors with non-prescription pharmaceuticals predominated (n = 722) and anti-pyretics and non-opioid analgesics, anti-bacterials, and cough and cold preparations were the main pharmaceutical classes involved. Medication errors with prescription medication predominated for adults (n = 866) and the major medication

  2. The epidemiology and type of medication errors reported to the National Poisons Information Centre of Ireland.

    LENUS (Irish Health Repository)

    Cassidy, Nicola

    2012-02-01

    INTRODUCTION: Medication errors are widely reported for hospitalised patients, but limited data are available for medication errors that occur in community-based and clinical settings. Epidemiological data from poisons information centres enable characterisation of trends in medication errors occurring across the healthcare spectrum. AIM: The objective of this study was to characterise the epidemiology and type of medication errors reported to the National Poisons Information Centre (NPIC) of Ireland. METHODS: A 3-year prospective study on medication errors reported to the NPIC was conducted from 1 January 2007 to 31 December 2009 inclusive. Data on patient demographics, enquiry source, location, pharmaceutical agent(s), type of medication error, and treatment advice were collated from standardised call report forms. Medication errors were categorised as (i) prescribing error (i.e. physician error), (ii) dispensing error (i.e. pharmacy error), and (iii) administration error involving the wrong medication, the wrong dose, wrong route, or the wrong time. RESULTS: Medication errors were reported for 2348 individuals, representing 9.56% of total enquiries to the NPIC over 3 years. In total, 1220 children and adolescents under 18 years of age and 1128 adults (≥ 18 years old) experienced a medication error. The majority of enquiries were received from healthcare professionals, but members of the public accounted for 31.3% (n = 736) of enquiries. Most medication errors occurred in a domestic setting (n = 2135), but a small number occurred in healthcare facilities: nursing homes (n = 110, 4.68%), hospitals (n = 53, 2.26%), and general practitioner surgeries (n = 32, 1.36%). In children, medication errors with non-prescription pharmaceuticals predominated (n = 722) and anti-pyretics and non-opioid analgesics, anti-bacterials, and cough and cold preparations were the main pharmaceutical classes involved. Medication errors with prescription medication predominated for

  3. A Justified Initial Accounting Estimate as an Integral Part of the Enterprise Accounting Policy

    Directory of Open Access Journals (Sweden)

    Marenych Tetyana H

    2016-05-01

    Full Text Available The aim of the article is to justify the need to specify in the order on accounting policies not only the elements of the accounting policy itself but also the initial accounting estimates, which will increase the reliability of financial reporting, and to develop proposals on improving these administrative documents of the enterprise. It is noted that in recent years the importance of a high-quality accounting policy has increased significantly, not only for users of financial reports but also for determining the object of levying the profits tax. Significant differences are revealed in how the consequences of changes in the accounting policy and in an accounting estimate are reflected in accounting. The information given in the order on the enterprise accounting policy with respect to accounting estimates has been generalized. It is proposed to provide a separate section in the order presenting information about the list of accounting estimates adopted, as well as about how the company will make changes in the accounting policy and accounting estimates and correct errors.

  4. Prevalence of Refractive Error and Visual Impairment among Rural ...

    African Journals Online (AJOL)

    Refractive error was the major cause of visual impairment accounting for 54% of all causes in the study group. No child was found wearing ... So, large scale community level screening for refractive error should be conducted and integrated with regular school eye screening programs. Effective strategies need to be devised ...

  5. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Full Text Available Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  6. Putting into practice error management theory: Unlearning and learning to manage action errors in construction.

    Science.gov (United States)

    Love, Peter E D; Smith, Jim; Teo, Pauline

    2018-05-01

    Error management theory is drawn upon to examine how a project-based organization, which took the form of a program alliance, was able to change its established error prevention mindset to one that enacted a learning mindfulness that provided an avenue to curtail its action errors. The program alliance was required to unlearn its existing routines and beliefs to accommodate the practices required to embrace error management. As a result of establishing an error management culture the program alliance was able to create a collective mindfulness that nurtured learning and supported innovation. The findings provide a much-needed context to demonstrate the relevance of error management theory to effectively address rework and safety problems in construction projects. The robust theoretical underpinning that is grounded in practice and presented in this paper provides a mechanism to engender learning from errors, which can be utilized by construction organizations to improve the productivity and performance of their projects. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. The role of errors in the measurements performed at the reprocessing plant head-end for material accountancy purposes

    International Nuclear Information System (INIS)

    Foggi, C.; Liebetrau, A.M.; Petraglia, E.

    1999-01-01

    One of the most common procedures used in determining the amount of nuclear material contained in solutions consists of first measuring the volume and the density of the solution, and then determining the concentrations of this material. This presentation will focus on errors generated at the process line in the measurement of volume and density. These errors and their associated uncertainties can be grouped into distinct categories depending on their origin: those attributable to measuring instruments; those attributable to operational procedures; variability in measurement conditions; and errors in the analysis and interpretation of results. Possible error sources, their relative magnitudes, and an error propagation rationale are discussed, with emphasis placed on biases and errors of the last three types, called systematic errors.
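
    For a product-form measurement model the propagation of these uncertainties is standard: with the mass determined as M = V · ρ · w from independent volume, density and mass-fraction measurements, the relative variances add to first order. A sketch with assumed values:

```python
import numpy as np

V, rho, w = 1200.0, 1.25, 0.002          # L, kg/L, mass fraction (all assumed)
rel_sd = {"V": 0.003, "rho": 0.001, "w": 0.005}   # relative std. uncertainties

M = V * rho * w                          # kg of nuclear material
rel_sd_M = np.sqrt(sum(r ** 2 for r in rel_sd.values()))  # delta method
print(f"M = {M:.3f} kg, sigma_M = {1000 * M * rel_sd_M:.1f} g "
      f"({100 * rel_sd_M:.2f}% relative)")
```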

  8. IMPACT OF THE PRINCIPLES OF FINANCIAL ACCOUNTING ON THE MANAGEMENT ACCOUNTING

    OpenAIRE

    Daniela CREŢU

    2014-01-01

    The paper studied the impact of the financial accounting principles on management accounting. There are similarities and differences between financial accounting and management accounting. The differences are numerous, but in the present paper we are more interested in the similarities, which are very deep. Not accidentally, in other accounting systems the two types of accounting information form one functional, integrated circuit (accounting monism in the U.S.A., or accounting systems of compromise...

  9. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards - an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
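
    The computational core of this idea is the temporal-difference prediction error, delta = r + gamma·V(s') - V(s): positive for more reward than predicted, zero for fully predicted reward, negative for less. A minimal learning sketch (the states, rewards and parameters are made up):

```python
import numpy as np

gamma, alpha = 0.9, 0.1
V = np.zeros(3)                               # value estimates for 3 states
episode = [(0, 1, 0.0), (1, 2, 1.0)]          # (state, next state, reward)

for _ in range(200):                          # repeated experience
    for s, s_next, r in episode:
        delta = r + gamma * V[s_next] - V[s]  # reward prediction error
        V[s] += alpha * delta                 # learn from the error
print(V)  # once the reward is predicted, later deltas shrink toward zero
```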

  10. On the performance of mixed RF/FSO variable gain dual-hop transmission systems with pointing errors

    KAUST Repository

    Ansari, Imran Shafique

    2013-09-01

    In this work, the performance analysis of a dual-hop relay transmission system composed of asymmetric radio-frequency (RF) and unified free-space optical (FSO) links subject to pointing errors is presented. These unified FSO links account for both types of detection techniques (i.e. intensity modulation/direct detection (IM/DD) as well as heterodyne detection). More specifically, we derive new exact closed-form expressions for the cumulative distribution function, probability density function, moment generating function, and moments of the end-to-end signal-to-noise ratio of these systems in terms of the Meijer's G function. We then capitalize on these results to offer new exact closed-form expressions for the outage probability, higher-order amount of fading, average error rate for binary and M-ary modulation schemes, and ergodic capacity, all in terms of Meijer's G functions. All our new analytical results are verified via computer-based Monte-Carlo simulations. Copyright © 2013 by the Institute of Electrical and Electronic Engineers, Inc.

  11. Error in the delivery of radiation therapy: Results of a quality assurance review

    International Nuclear Information System (INIS)

    Huang, Grace; Medlam, Gaylene; Lee, Justin; Billingsley, Susan; Bissonnette, Jean-Pierre; Ringash, Jolie; Kane, Gabrielle; Hodgson, David C.

    2005-01-01

    Purpose: To examine error rates in the delivery of radiation therapy (RT), technical factors associated with RT errors, and the influence of a quality improvement intervention on the RT error rate. Methods and materials: We undertook a review of all RT errors that occurred at the Princess Margaret Hospital (Toronto) from January 1, 1997, to December 31, 2002. Errors were identified according to incident report forms that were completed at the time the error occurred. Error rates were calculated per patient, per treated volume (≥1 volume per patient), and per fraction delivered. The association between tumor site and error was analyzed. Logistic regression was used to examine the association between technical factors and the risk of error. Results: Over the study interval, there were 555 errors among 28,136 patient treatments delivered (error rate per patient = 1.97%, 95% confidence interval [CI], 1.81-2.14%) and among 43,302 treated volumes (error rate per volume = 1.28%, 95% CI, 1.18-1.39%). The proportion of fractions with errors from July 1, 2000, to December 31, 2002, was 0.29% (95% CI, 0.27-0.32%). Patients with sarcoma or head-and-neck tumors experienced error rates significantly higher than average (5.54% and 4.58%, respectively); however, when the number of treated volumes was taken into account, the head-and-neck error rate was no longer higher than average (1.43%). The use of accessories was associated with an increased risk of error, and internal wedges were more likely to be associated with an error than external wedges (relative risk = 2.04; 95% CI, 1.11-3.77). Eighty-seven errors (15.6%) were directly attributed to incorrect programming of the 'record and verify' system. Changes to planning and treatment processes aimed at reducing errors within the head-and-neck site group produced a substantial reduction in the error rate. Conclusions: Errors in the delivery of RT are uncommon and usually of little clinical significance. Patient subgroups and

  12. Program Cost Accounting Manual. Form J-380/Form J-580, 1989-90.

    Science.gov (United States)

    California State Dept. of Education, Sacramento. Office of Financial Management Practices and Standards.

    In response to criticism by legislators, the business community, and other publics for an apparent lack of sound financial management, the California State Department of Education, together with representatives from the field and from state control agencies, began to develop a new program cost accounting system in 1984. After pilot testing, the…

  13. Redundant measurements for controlling errors

    International Nuclear Information System (INIS)

    Ehinger, M.H.; Crawford, J.M.; Madeen, M.L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program

  14. Non-parametric tests of productive efficiency with errors-in-variables

    NARCIS (Netherlands)

    Kuosmanen, T.K.; Post, T.; Scholtes, S.

    2007-01-01

    We develop a non-parametric test of productive efficiency that accounts for errors-in-variables, following the approach of Varian [1985. Nonparametric analysis of optimizing behavior with measurement error. Journal of Econometrics 30(1/2), 445-458]. The test is based on the general Pareto-Koopmans

  15. On the Theoretical Integration of Accounting Discipline and the Boundary of Accounting

    Institute of Scientific and Technical Information of China (English)

    CAO Wei

    2016-01-01

    The discipline of accounting has formed many branches, and some independent majors, such as financial management and auditing, have split from it. However, due to a lack of comprehensive thinking and theoretical summary, some of the basic relationships between accounting branches still cannot be explained clearly in theory, making it difficult to understand clearly the hierarchical structure of the accounting discipline and the nature and boundary of accounting. The idea of theoretical integration presented by this study is: to reconstruct (or return to) the basic theoretical structure of accounting, and on this basis to establish the basic accounting; to shape some accounting branches through the cross links between the basic accounting and other related disciplines; to form a narrow-sense accounting with the external and internal information systems of the accounting entity, which should be developed on the basis of the basic accounting; and to integrate such disciplines as the narrow-sense accounting, financial management and auditing into a generalized accounting through the value management of the accounting entity. Some interdisciplinary subjects shaped by the accounting information system together with related crossing disciplines (such as national economic accounting, forensic accounting, etc.) belong to a more generalized accounting.

  16. Automated Material Accounting Statistics System at Rockwell Hanford Operations

    International Nuclear Information System (INIS)

    Eggers, R.F.; Giese, E.W.; Kodman, G.P.

    1986-01-01

    The Automated Material Accounting Statistics System (AMASS) was developed under the sponsorship of the U.S. Nuclear Regulatory Commission. The AMASS was developed when it was realized that classical methods of error propagation, based only on measured quantities, did not properly control false alarm rate and that errors other than measurement errors affect inventory differences. The classical assumptions that (1) the mean value of the inventory difference (ID) for a particular nuclear material processing facility is zero, and (2) the variance of the inventory difference is due only to errors in measured quantities are overly simplistic. The AMASS provides a valuable statistical tool for estimating the true mean value and variance of the ID data produced by a particular material balance area. In addition it provides statistical methods of testing both individual and cumulative sums of IDs, taking into account the estimated mean value and total observed variance of the ID
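
    Conceptually, the testing step amounts to standardizing each inventory difference (ID), and each cumulative sum of IDs, by the estimated mean and variance and flagging exceedances. A toy sketch with invented numbers (AMASS itself estimates the mean and the total observed variance from facility data, which is not reproduced here):

```python
import numpy as np

ids = np.array([0.3, -0.1, 0.4, 0.2, 0.5])   # sequence of IDs, kg (assumed)
mu_hat, var_hat = 0.1, 0.04                  # estimated ID mean and variance

z_individual = (ids - mu_hat) / np.sqrt(var_hat)
cusum = np.cumsum(ids - mu_hat)              # cumulative sum of centered IDs
z_cusum = cusum / np.sqrt(var_hat * np.arange(1, len(ids) + 1))

print(np.round(z_individual, 2))
print(np.round(z_cusum, 2), np.abs(z_cusum) > 1.96)   # 5% two-sided test
```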

  17. Error Distributions on Large Entangled States with Non-Markovian Dynamics

    DEFF Research Database (Denmark)

    McCutcheon, Dara; Lindner, Netanel H.; Rudolph, Terry

    2014-01-01

    We investigate the distribution of errors on a computationally useful entangled state generated via the repeated emission from an emitter undergoing strongly non-Markovian evolution. For emitter-environment coupling of pure-dephasing form, we show that the probability that a particular pattern of errors occurs has a bound of Markovian form, and thus accuracy threshold theorems based on Markovian models should be just as effective. Beyond the pure-dephasing assumption, though complicated error structures can arise, they can still be qualitatively bounded by a Markovian error model.

  18. [Errors in laboratory daily practice].

    Science.gov (United States)

    Larrose, C; Le Carrer, D

    2007-01-01

    Legislation set by the GBEA (Guide de bonne exécution des analyses) requires that, before performing an analysis, laboratory directors check both the nature of the samples and the patient's identity. The data processing of requisition forms, which identifies key errors, was established in 2000 and in 2002 by the specialized biochemistry laboratory, with the contribution of the reception centre for biological samples. The laboratories follow strict acceptability criteria as a starting point at reception, and then check requisition forms and biological samples. All errors are logged into the laboratory database, and analysis reports are sent to the care unit specifying the problems and the consequences they have on the analysis. The data are then assessed by the laboratory directors to produce monthly or annual statistical reports. This indicates the number of errors, which are then indexed to patient files to reveal the specific problem areas, therefore allowing the laboratory directors to teach the nurses and enable corrective action.

  19. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks.

    Science.gov (United States)

    Jarama, Ángel J; López-Araquistain, Jaime; Miguel, Gonzalo de; Besada, Juan A

    2017-09-21

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle determination device. Distance bias is calculated from the delay of the signal produced by the refractivity index of the atmosphere and from clock errors, while the altitude bias is calculated taking into account the atmosphere conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.

  20. Impact of error management culture on knowledge performance in professional service firms

    Directory of Open Access Journals (Sweden)

    Tabea Scheel

    2014-01-01

    Full Text Available Knowledge is the most crucial resource of the 21st century. For professional service firms (PSFs), knowledge represents the input as well as the output, and thus the fundamental base for performance. Like every organization, PSFs have to deal with errors – and how they do that indicates their error culture. Considering the positive potential of errors (e.g., innovation), error management culture is positively related to organizational performance. This longitudinal quantitative study investigates the impact of error management culture on knowledge performance in four waves. The study was conducted in 131 PSFs, i.e. tax accounting offices. As a standard quality management system (QMS) was assumed to moderate the relationship between error management culture and knowledge performance, the offices' ISO 9000 certification was assessed. Error management culture correlated positively with knowledge performance at a significant level and predicted knowledge performance one year later. While ISO 9000 certification correlated positively with knowledge performance, its assumed moderation of the relationship between error management culture and knowledge performance was not consistent. The process-oriented QMS seems to function as a facilitator for the more behavior-oriented error management culture. However, the benefit of ISO 9000 certification for tax accounting remains to be proven. Given the impact of error management culture on knowledge performance, PSFs should focus on actively promoting positive attitudes towards errors.

  1. An adaptive orienting theory of error processing.

    Science.gov (United States)

    Wessel, Jan R

    2018-03-01

    The ability to detect and correct action errors is paramount to safe and efficient goal-directed behaviors. Existing work on the neural underpinnings of error processing and post-error behavioral adaptations has led to the development of several mechanistic theories of error processing. These theories can be roughly grouped into adaptive and maladaptive theories. While adaptive theories propose that errors trigger a cascade of processes that will result in improved behavior after error commission, maladaptive theories hold that error commission momentarily impairs behavior. Neither group of theories can account for all available data, as different empirical studies find both impaired and improved post-error behavior. This article attempts a synthesis between the predictions made by prominent adaptive and maladaptive theories. Specifically, it is proposed that errors invoke a nonspecific cascade of processing that will rapidly interrupt and inhibit ongoing behavior and cognition, as well as orient attention toward the source of the error. It is proposed that this cascade follows all unexpected action outcomes, not just errors. In the case of errors, this cascade is followed by error-specific, controlled processing, which is specifically aimed at (re)tuning the existing task set. This theory combines existing predictions from maladaptive orienting and bottleneck theories with specific neural mechanisms from the wider field of cognitive control, including from error-specific theories of adaptive post-error processing. The article aims to describe the proposed framework and its implications for post-error slowing and post-error accuracy, propose mechanistic neural circuitry for post-error processing, and derive specific hypotheses for future empirical investigations. © 2017 Society for Psychophysiological Research.

  2. [Analysis of intrusion errors in free recall].

    Science.gov (United States)

    Diesfeldt, H F A

    2017-06-01

    Extra-list intrusion errors during five trials of the eight-word list-learning task of the Amsterdam Dementia Screening Test (ADST) were investigated in 823 consecutive psychogeriatric patients (87.1% suffering from major neurocognitive disorder). Almost half of the participants (45.9%) produced one or more intrusion errors on the verbal recall test. Correct responses were lower when subjects made intrusion errors, but learning slopes did not differ between subjects who committed intrusion errors and those who did not. Bivariate regression analyses revealed that participants who committed intrusion errors were more deficient on measures of eight-word recognition memory, delayed visual recognition and tests of executive control (the Behavioral Dyscontrol Scale and the ADST-Graphical Sequences as measures of response inhibition). Using hierarchical multiple regression, only free recall and delayed visual recognition retained an independent effect in the association with intrusion errors, such that deficient scores on tests of episodic memory were sufficient to explain the occurrence of intrusion errors. Measures of inhibitory control did not add significantly to the explanation of intrusion errors in free recall, which makes insufficient strength of memory traces, rather than a primary deficit in inhibition, the preferred account for intrusion errors in free recall.

  3. Considerations for sampling nuclear materials for SNM accounting measurements. Special nuclear material accountability report

    International Nuclear Information System (INIS)

    Brouns, R.J.; Roberts, F.P.; Upson, U.L.

    1978-05-01

    This report presents principles and guidelines for sampling nuclear materials to measure the chemical and isotopic content of the material. The development of sampling plans and procedures that maintain the random and systematic errors of sampling within acceptable limits for special nuclear material (SNM) accounting purposes is emphasized.

  4. A new approach to the form and position error measurement of the auto frame surface based on laser

    Science.gov (United States)

    Wang, Hua; Li, Wei

    2013-03-01

    An auto frame is a very large workpiece, with length up to 12 meters and width up to 2 meters, so measuring such a large workpiece by independent manual operation is inconvenient and difficult to automate. In this paper we propose a new approach to reconstructing the 3D model of a large workpiece, especially the auto truck frame, based on multiple pulsed lasers, for the purpose of measuring form and position errors. Each measured area requires only one high-speed camera and two lasers. The approach is fast, high-precision and economical.

  5. Cognitive aspect of diagnostic errors.

    Science.gov (United States)

    Phua, Dong Haur; Tan, Nigel C K

    2013-01-01

    Diagnostic errors can result in tangible harm to patients. Despite our advances in medicine, the mental processes required to make a diagnosis exhibit shortcomings, causing diagnostic errors. Cognitive factors are found to be an important cause of diagnostic errors. With new understanding from psychology and the social sciences, clinical medicine is now beginning to appreciate that our clinical reasoning can take the form of analytical reasoning or heuristics. Different factors, such as cognitive biases and affective influences, can also impel unwary clinicians to make diagnostic errors. Various strategies have been proposed to reduce the effect of cognitive biases and affective influences when clinicians make diagnoses; however, evidence for the efficacy of these methods is still sparse. This paper aims to introduce the reader to the cognitive aspect of diagnostic errors, in the hope that clinicians can use this knowledge to improve diagnostic accuracy and patient outcomes.

  6. Article Errors in the English Writing of Saudi EFL Preparatory Year Students

    Science.gov (United States)

    Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.

    2017-01-01

    This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programme in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…

  7. Using Generalizability Theory to Disattenuate Correlation Coefficients for Multiple Sources of Measurement Error.

    Science.gov (United States)

    Vispoel, Walter P; Morris, Carrie A; Kilinc, Murat

    2018-05-02

    Over the years, research in the social sciences has been dominated by reporting of reliability coefficients that fail to account for key sources of measurement error. Use of these coefficients, in turn, to correct for measurement error can hinder scientific progress by misrepresenting true relationships among the underlying constructs being investigated. In the research reported here, we addressed these issues using generalizability theory (G-theory) in both traditional and new ways to account for the three key sources of measurement error (random-response, specific-factor, and transient) that affect scores from objectively scored measures. Results from 20 widely used measures of personality, self-concept, and socially desirable responding showed that conventional indices consistently misrepresented reliability and relationships among psychological constructs by failing to account for key sources of measurement error and correlated transient errors within occasions. The results further revealed that G-theory served as an effective framework for remedying these problems. We discuss possible extensions in future research and provide code from the computer package R in an online supplement to enable readers to apply the procedures we demonstrate to their own research.
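
    The correction for attenuation that this record discusses can be illustrated with the classical Spearman formula, which the G-theory approach generalizes by substituting generalizability coefficients that reflect the intended sources of error. A minimal sketch with hypothetical values:

```python
# Classical disattenuation: divide the observed correlation by the square root
# of the product of the two reliability coefficients. G-theory refines the
# choice of reliability coefficient; the formula itself is unchanged.
import math

def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Estimate the correlation between true scores from an observed correlation."""
    return r_xy / math.sqrt(rel_x * rel_y)

# A modest observed correlation implies a stronger relationship between the
# underlying constructs once measurement error is accounted for (values hypothetical).
print(disattenuate(r_xy=0.42, rel_x=0.80, rel_y=0.70))  # ~0.56
```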

  8. Accounting for human factor in QC and QA inspections

    International Nuclear Information System (INIS)

    Goodman, J.

    1986-01-01

    Two types of human error during QC/QA inspection have been identified. A method of accounting for the effects of human error in QC/QA inspections was developed. The result of evaluating the proportion of discrepant items in the population is affected significantly by the human factor.
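
    The abstract does not state the method, but one standard way to account for inspector misclassification is the Rogan-Gladen correction of the observed proportion of discrepant items; whether this matches the report's approach is an assumption, and all values below are hypothetical.

```python
# Rogan-Gladen correction: adjust an observed proportion of discrepant items for
# the inspector's sensitivity (flagging truly discrepant items) and specificity
# (passing conforming ones). Illustrative only; values are hypothetical.
def corrected_proportion(p_obs: float, sensitivity: float, specificity: float) -> float:
    p = (p_obs + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(p, 0.0), 1.0)  # clip to a valid proportion

# A 6% observed discrepancy rate under imperfect inspection maps to ~3.4% true rate.
print(corrected_proportion(p_obs=0.06, sensitivity=0.90, specificity=0.97))
```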

  9. Using the Bootstrap to Account for Linkage Errors when Analysing Probabilistically Linked Categorical Data

    Directory of Open Access Journals (Sweden)

    Chipperfield James O.

    2015-09-01

    Full Text Available Record linkage is the act of bringing together records that are believed to belong to the same unit (e.g., person or business) from two or more files. Record linkage is not an error-free process and can lead to linking a pair of records that do not belong to the same unit. This occurs because the linking fields on the files, which ideally would uniquely identify each unit, are often imperfect. There has been an explosion of record linkage applications, particularly involving government agencies and in the field of health, yet there has been little work on making correct inference using such linked files. Naively treating a linked file as if it were linked without errors can lead to biased inferences. This article develops a method of making inferences for cross-tabulated variables when record linkage is not an error-free process. In particular, it develops a parametric bootstrap approach to estimation which can accommodate the sophisticated probabilistic record linkage techniques that are widely used in practice (e.g., 1-1 linkage). The article demonstrates the effectiveness of this method in a simulation and in a real application.
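
    The gist of the parametric bootstrap can be sketched as follows. This is a deliberate simplification under stated assumptions, not the article's 1-1 linkage machinery: each linked pair is taken to be a correct link with a known probability, and incorrect links are redrawn from the marginal distribution of the second file.

```python
# Parametric-bootstrap sketch for a cross-tabulation built from a
# probabilistically linked file. Assumption: a fixed correct-link rate `lam`;
# wrongly linked records carry a value drawn from the marginal of y.
import numpy as np

rng = np.random.default_rng(1)
n, lam = 2000, 0.9                            # linked records, assumed link accuracy
x = rng.integers(0, 2, n)                     # categorical variable from file A
y = np.where(rng.random(n) < 0.7, x, 1 - x)   # linked variable from file B

def boot_table(x, y, lam, rng):
    correct = rng.random(len(x)) < lam
    y_star = np.where(correct, y, rng.permutation(y))  # wrong links ~ marginal of y
    return np.histogram2d(x, y_star, bins=2)[0]

tables = np.stack([boot_table(x, y, lam, rng) for _ in range(500)])
print("bootstrap mean 2x2 table:\n", tables.mean(axis=0))
print("bootstrap standard errors:\n", tables.std(axis=0))
```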

  10. Medication Errors in an Internal Intensive Care Unit of a Large Teaching Hospital: A Direct Observation Study

    Directory of Open Access Journals (Sweden)

    Saadat Delfani

    2012-06-01

    Full Text Available Medication errors account for about 78% of serious medical errors in the intensive care unit (ICU). So far no study has been performed in Iran to evaluate all types of possible medication errors in the ICU. Therefore, the objective of this study was to reveal the frequency, type and consequences of all types of errors in the ICU of a large teaching hospital. The prospective observational study was conducted in an 11-bed internal ICU of a university hospital in Shiraz. In each shift, all processes performed on one selected patient were observed and recorded by a trained pharmacist. The observer would intervene only if a medication error would cause substantial harm. The data were evaluated and then entered into a form designed for this purpose. The study continued for 38 shifts. During this period, a total of 442 errors per 5785 opportunities for error (7.6%) occurred. Of those, there were 9.8% administration errors, 6.8% prescribing errors, 3.3% transcription errors and 2.3% dispensing errors. In total, 45 interventions were made; 40% of interventions resulted in the correction of errors. The most common causes of errors were observed to be rule violations, slips and memory lapses, and lack of drug knowledge. According to our results, the rate of errors is alarming and requires the implementation of a serious solution. Since our system lacks a well-organized detection and reporting mechanism, there is no means of preventing errors in the first place. Hence, as a first step, we must implement a system where errors are routinely detected and reported.

  11. Review of Pre-Analytical Errors in Oral Glucose Tolerance Testing in a Tertiary Care Hospital.

    Science.gov (United States)

    Nanda, Rachita; Patel, Suprava; Sahoo, Sibashish; Mohapatra, Eli

    2018-03-13

    The pre-pre-analytical and pre-analytical phases account for a major share of the errors in a laboratory. This work considers a very common procedure, the oral glucose tolerance test, to identify pre-pre-analytical errors. Quality indicators provide evidence of quality, support accountability and help in the decision making of laboratory personnel. The aim of this research is to evaluate the pre-analytical performance of the oral glucose tolerance test procedure. An observational study was conducted over a period of three months in the phlebotomy and accessioning unit of our laboratory, using a questionnaire that examined the pre-pre-analytical errors through a scoring system. The pre-analytical phase was analyzed for each sample collected against seven quality indicators. About 25% of the population gave a wrong answer to the question that tested their knowledge of patient preparation. The appropriateness-of-test-result indicator (QI-1) showed the most errors. Although QI-5, for sample collection, had a low error rate, it is a very important indicator, as any wrongly collected sample can alter the test result. Evaluating the pre-analytical and pre-pre-analytical phases is essential and must be conducted routinely, on a yearly basis, to identify errors, take corrective action, and facilitate the gradual introduction of these indicators into routine practice.

  12. Error correction and degeneracy in surface codes suffering loss

    International Nuclear Information System (INIS)

    Stace, Thomas M.; Barrett, Sean D.

    2010-01-01

    Many proposals for quantum information processing are subject to detectable loss errors. In this paper, we give a detailed account of recent results in which we showed that topological quantum memories can simultaneously tolerate both loss errors and computational errors, with a graceful tradeoff between the threshold for each. We further discuss a number of subtleties that arise when implementing error correction on topological memories. We particularly focus on the role played by degeneracy in the matching algorithms and present a systematic study of its effects on thresholds. We also discuss some of the implications of degeneracy for estimating phase transition temperatures in the random bond Ising model.

  13. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks

    Directory of Open Access Journals (Sweden)

    Ángel J. Jarama

    2017-09-01

    Full Text Available In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle-determination device. Distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.

  14. Quantification of human errors in level-1 PSA studies in NUPEC/JINS

    International Nuclear Information System (INIS)

    Hirano, M.; Hirose, M.; Sugawara, M.; Hashiba, T.

    1991-01-01

    The THERP (Technique for Human Error Rate Prediction) method is mainly adopted to evaluate pre-accident and post-accident human error rates. Performance shaping factors are derived by taking Japanese operational practice into account. Several examples of human error rates, with their calculational procedures, are presented. The important human interventions at typical Japanese NPPs are also presented. (orig./HP)

  15. IMPACT OF THE PRINCIPLES OF FINANCIAL ACCOUNTING ON THE MANAGEMENT ACCOUNTING

    Directory of Open Access Journals (Sweden)

    Daniela CREŢU

    2014-11-01

    Full Text Available The paper studies the impact of financial accounting principles on management accounting. There are similarities and differences between financial accounting and management accounting. The differences are numerous, but in the present paper we are more interested in the similarities, which are very deep. Not accidentally, in other accounting systems the two types of accounting information form one functional, integrated circuit (accounting monism, as in the U.S.A.) or accounting systems of compromise between accounting monism and dualism. Since there is not only one accounting system but a set of accounting systems, the accounting principles are not absolute but relative. This relativity is given by the assumed objectives of the accounting.

  16. The primary organization of accounting in budgetary institutions of Ukraine: development of the work plan accounts

    Directory of Open Access Journals (Sweden)

    Svirko S.V.

    2017-03-01

    Full Text Available The article addresses the primary organization of accounting for the economic activity of budgetary institutions within the modernization of the budget accounting subsystem. The object of study is the mechanism for building and operating a working chart of accounts for the economic activity of budgetary institutions. To develop a thesaurus of budget accounting, a definition of the «working chart of accounts of the economic activity of budgetary institutions» is formulated. Approaches to determining the influences, principles and stages of constructing a working chart of accounts are considered; on this basis, a model for developing a working chart of accounts for the economic activity of budgetary institutions is formed, combining general and technical principles and allowing for certain vectors of regulatory influence. The necessity of the Ministry of Finance of Ukraine establishing three typical charts of accounts for the major public sector entities is substantiated. Based on a study of the chart of accounts in the public sector, a typical chart of accounts for budgetary institutions is generated and presented. Conclusions are drawn about the need to form a mechanism for developing analytical accounts for different public sector entities.

  17. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and the associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals affiliated with Hamadan University of Medical Sciences in Hamedan, Iran, were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), in those aged 40-50 years (67.6%), in less-experienced personnel (58.7%), at the educational level of MSc (87.5%), and among staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and the associated factors, which may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  18. Animal movement constraints improve resource selection inference in the presence of telemetry error

    Science.gov (United States)

    Brost, Brian M.; Hooten, Mevin B.; Hanks, Ephraim M.; Small, Robert J.

    2016-01-01

    Multiple factors complicate the analysis of animal telemetry location data. Recent advancements address issues such as temporal autocorrelation and telemetry measurement error, but additional challenges remain. Difficulties introduced by complicated error structures or barriers to animal movement can weaken inference. We propose an approach for obtaining resource selection inference from animal location data that accounts for complicated error structures, movement constraints, and temporally autocorrelated observations. We specify a model for telemetry data observed with error conditional on unobserved true locations that reflects prior knowledge about constraints in the animal movement process. The observed telemetry data are modeled using a flexible distribution that accommodates extreme errors and complicated error structures. Although constraints to movement are often viewed as a nuisance, we use constraints to simultaneously estimate and account for telemetry error. We apply the model to simulated data, showing that it outperforms common ad hoc approaches used when confronted with measurement error and movement constraints. We then apply our framework to an Argos satellite telemetry data set on harbor seals (Phoca vitulina) in the Gulf of Alaska, a species that is constrained to move within the marine environment and adjacent coastlines.

  19. Error-rate performance analysis of opportunistic regenerative relaying

    KAUST Repository

    Tourki, Kamel

    2011-09-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can be considered unusable, and take into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive the exact statistics of each hop in terms of the probability density function (PDF). The PDFs are then used to determine accurate closed-form expressions for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation, where the detector may use maximum ratio combining (MRC) or selection combining (SC). Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over a linear network (LN) architecture under Rayleigh fading channels. © 2011 IEEE.
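
    For intuition about the kind of closed forms involved, the single Rayleigh hop with BPSK has the textbook bit-error rate Pb = (1/2)(1 - sqrt(g/(1+g))) at average SNR g; the sketch below checks it by Monte Carlo. This is only the single-link building block, not the paper's end-to-end relay expressions.

```python
# Monte-Carlo check of BPSK BER over one Rayleigh fading hop against the
# textbook closed form Pb = 0.5 * (1 - sqrt(g / (1 + g))), g = average SNR.
import numpy as np

rng = np.random.default_rng(2)
g = 10 ** (10.0 / 10)                     # average SNR of 10 dB, in linear scale
n = 1_000_000
bits = rng.integers(0, 2, n)
s = 1 - 2 * bits                          # BPSK symbols: bit 0 -> +1, bit 1 -> -1
h = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)      # Rayleigh channel
w = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2 * g)  # AWGN
r = h * s + w
bits_hat = (np.real(np.conj(h) * r) < 0).astype(int)  # coherent detection

ber_sim = np.mean(bits_hat != bits)
ber_theory = 0.5 * (1 - np.sqrt(g / (1 + g)))
print(f"simulated BER: {ber_sim:.5f}, closed form: {ber_theory:.5f}")
```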

  20. Accounting for spatiotemporal errors of gauges: A critical step to evaluate gridded precipitation products

    Science.gov (United States)

    Tang, Guoqiang; Behrangi, Ali; Long, Di; Li, Changming; Hong, Yang

    2018-04-01

    Rain gauge observations are commonly used to evaluate the quality of satellite precipitation products. However, the inherent difference between point-scale gauge measurements and areal satellite precipitation, i.e., accumulation over time at a point in space versus aggregation over space at a snapshot in time, has an important effect on the accuracy and precision of qualitative and quantitative evaluation results. This study aims to quantify the uncertainty caused by various combinations of spatiotemporal scales (0.1°-0.8° and 1-24 h) of gauge network designs in the densely gauged and relatively flat Ganjiang River basin, South China, in order to evaluate a state-of-the-art satellite precipitation product, the Integrated Multi-satellite Retrievals for Global Precipitation Measurement (IMERG). For comparison with the dense gauge network serving as "ground truth", 500 sparse gauge networks are generated through random combinations of gauge numbers at each set of spatiotemporal scales. Results show that all sparse gauge networks persistently underestimate the performance of IMERG according to most metrics. However, the probability of detection is overestimated because hit and miss events are likely to be fewer than the reference numbers derived from dense gauge networks. A nonlinear error function of spatiotemporal scales and the number of gauges in each grid pixel is developed to estimate the errors of using gauges to evaluate satellite precipitation. Coefficients of determination of the fitting are above 0.9 for most metrics. The error function can also be used to estimate the required minimum number of gauges in each grid pixel to meet a predefined error level. This study suggests that the actual quality of satellite precipitation products could be better than conventionally evaluated or expected, and it should help researchers who are not subject-matter experts to better understand the explicit uncertainties when using point-scale gauge observations to evaluate areal products.
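
    The fitting and inversion steps described above can be sketched generically. The power-law form and every coefficient below are hypothetical stand-ins, not the paper's fitted function; the sketch only shows how such an error function is calibrated and then inverted for the minimum gauge count.

```python
# Fit a nonlinear error function e(s, t, n) of spatial scale, temporal scale and
# gauges per pixel to synthetic data, then invert it for the minimum number of
# gauges meeting a target error level. Functional form and values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def error_func(X, a, b, c, d):
    s, t, n = X                                # spatial [deg], temporal [h], gauges
    return a * s**b * t**c / n**d

rng = np.random.default_rng(3)
s = rng.uniform(0.1, 0.8, 300)
t = rng.uniform(1.0, 24.0, 300)
n = rng.integers(1, 10, 300).astype(float)
y = error_func((s, t, n), 0.5, 0.8, -0.3, 0.5) + rng.normal(0.0, 0.01, 300)

popt, _ = curve_fit(error_func, (s, t, n), y, p0=(1.0, 1.0, -0.5, 1.0))
a, b, c, d = popt
print("fitted coefficients:", np.round(popt, 3))

# Minimum gauges per 0.25-degree pixel for a 0.05 error target at 3-hour scale:
target, s0, t0 = 0.05, 0.25, 3.0
print("required gauges per pixel:", int(np.ceil((a * s0**b * t0**c / target) ** (1 / d))))
```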

  1. The Errors of Our Ways

    Science.gov (United States)

    Kane, Michael

    2011-01-01

    Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that we need some way of accounting for the variability in observed scores over these conditions of…

  2. The Influence Of Costs Accounting And Professional Accountant On The Training Of Sales Price Of Micro And Small Enterprises In Porto Velho City

    Directory of Open Access Journals (Sweden)

    César Licório

    2016-06-01

    Full Text Available The aim of this paper is to identify the contribution of cost accounting and the professional accountant, as a competitive advantage, to the formation of the selling price in micro and small enterprises in the city of Porto Velho. The methodology used was a descriptive, qualitative literature survey, together with a field study using a questionnaire. The work starts from the following problem: in a globalized economy, with such aggressive pricing, what is the importance of cost accounting and the advice of a professional accountant in the formation of the selling price? This research is justified by the increasingly competitive market, such as that of Porto Velho, where any detail can bring a great advantage in the bottom line. The main results show that the vast majority of micro and small entrepreneurs in Porto Velho use neither cost accounting nor the aid of an accountant, a qualified professional with the techniques necessary to support the formation of their prices so that they do not incur errors that bring losses; they also fail to take advantage of the information that cost accounting provides for decision making. In a globalized economy, and specifically in a growing market such as Porto Velho, any detail can make the difference between the success and failure of a micro or small business; cost accounting becomes an increasingly necessary tool, and the contribution that the accountant, as a specialized professional, can bring to the manager in forming prices, deciding marketing strategies and controlling costs is undeniable.

  3. Prediction error, ketamine and psychosis: An updated model.

    Science.gov (United States)

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-11-01

    In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.

  4. Measurement Error Correction for Predicted Spatiotemporal Air Pollution Exposures.

    Science.gov (United States)

    Keller, Joshua P; Chang, Howard H; Strickland, Matthew J; Szpiro, Adam A

    2017-05-01

    Air pollution cohort studies are frequently analyzed in two stages, first modeling exposure, then using predicted exposures to estimate health effects in a second regression model. The difference between predicted and unobserved true exposures introduces a form of measurement error in the second-stage health model. Recent methods for spatial data correct for measurement error with a bootstrap and by requiring that the study design ensure spatial compatibility, that is, that monitor and subject locations are drawn from the same spatial distribution. These methods have not previously been applied to spatiotemporal exposure data. We analyzed the association between fine particulate matter (PM2.5) and birth weight in the US state of Georgia using records with estimated date of conception during 2002-2005 (n = 403,881). We predicted trimester-specific PM2.5 exposure using a complex spatiotemporal exposure model. To improve spatial compatibility, we restricted to mothers residing in counties with a PM2.5 monitor (n = 180,440). We accounted for additional measurement error via a nonparametric bootstrap. Third-trimester PM2.5 exposure was associated with lower birth weight in the uncorrected (-2.4 g per 1 μg/m³ difference in exposure; 95% confidence interval [CI]: -3.9, -0.8) and bootstrap-corrected (-2.5 g, 95% CI: -4.2, -0.8) analyses. Results for the unrestricted analysis were attenuated (-0.66 g, 95% CI: -1.7, 0.35). This study presents a novel application of measurement error correction for spatiotemporal air pollution exposures. Our results demonstrate the importance of spatial compatibility between monitor and subject locations and provide evidence of the association between air pollution exposure and birth weight.

  5. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

    Full Text Available BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: In numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R², and introduced sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity was decreased for some, but not all, rare haplotypes. The overall error rate generally increased with an increasing number of loci, increasing minor allele frequency of the SNPs, decreasing correlation between the alleles, and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides the information on whether a specific risk haplotype can be expected to be reconstructed with low or high misclassification, and thus on the magnitude of the expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
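
    Computing the proposed per-haplotype sensitivity and specificity is a direct application of the misclassification matrix; the sketch below uses hypothetical counts.

```python
# Per-haplotype sensitivity and specificity from a true-vs-reconstructed
# contingency (misclassification) matrix. Counts are hypothetical.
import numpy as np

haplotypes = ["h1", "h2", "h3"]
# rows = true haplotype, columns = reconstructed haplotype
confusion = np.array([[480, 15,  5],
                      [ 20, 90, 10],
                      [  5,  5, 70]])

total = confusion.sum()
for i, h in enumerate(haplotypes):
    tp = confusion[i, i]
    fn = confusion[i, :].sum() - tp      # true h reconstructed as something else
    fp = confusion[:, i].sum() - tp      # other haplotypes reconstructed as h
    tn = total - tp - fn - fp
    print(f"{h}: sensitivity={tp / (tp + fn):.3f}, specificity={tn / (tn + fp):.3f}")
```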

  6. INFORMATION, KEY ELEMENT OF ACCOUNTING AND AUDIT IN THE KNOWLEDGE SOCIETY

    Directory of Open Access Journals (Sweden)

    NELUTA MITEA

    2011-04-01

    Full Text Available In a knowledge society, the advantage of nations will result not from their natural resources, nor from cheap labor, but from their ability to valorize their intellectual potential and to use information efficiently. The knowledge-based economy represents a new step in the development of human civilization that promises us a better future. The transfer of knowledge between people and generations, in order to facilitate the evolution of human society, is the basic function of information science. This paper aims to examine how, in Romania, accounting and audit use and create information under the current conditions of economic development. The purpose of this study is to offer perspectives for improving information quality. Information is of high quality when, by its form and content, it corresponds fully to all the needs, exigencies and expectations of its user, without sacrificing reality. A number of errors made by the accounting profession have been identified in this paper. These errors have led to a decrease in the credibility of information. The study proposes some changes in order to restore the image of the profession; the changes are sustained by the advantages of the knowledge economy and the information society. The research method consists in studying a rich background of material, including reference items such as works of applied and fundamental research. The originality of this work is given by the identification of the knowledge society's challenges, which could be used as a lever of revival for accounting and audit in Romania.

  7. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to improve when employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater-than-unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement-breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective and examine the implications for the key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)
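
    The conventional modification that the paper critiques scales the key rate by the frame success probability; a minimal sketch, with the usual reconciliation-efficiency form assumed and all numbers illustrative:

```python
# Conventional (critiqued) secret key rate model with a non-zero word error
# rate: K = (1 - WER) * (beta * I_AB - chi_BE), where beta is the
# reconciliation efficiency, I_AB the Alice-Bob mutual information and chi_BE
# Eve's Holevo bound. Whether a given protocol uses exactly this form is an
# assumption; the numbers below are purely illustrative.
def secret_key_rate(wer: float, beta: float, i_ab: float, chi_be: float) -> float:
    return (1.0 - wer) * (beta * i_ab - chi_be)

# Trade-off: a more aggressive code decodes fewer frames yet extracts more per frame.
print(secret_key_rate(wer=0.10, beta=0.95, i_ab=0.50, chi_be=0.40))  # 0.0675
print(secret_key_rate(wer=0.01, beta=0.90, i_ab=0.50, chi_be=0.40))  # 0.0495
```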

  8. The error sources appearing for the gamma radioactive source measurement in dynamic condition

    International Nuclear Information System (INIS)

    Sirbu, M.

    1977-01-01

    An error analysis for the measurement of gamma radioactive sources placed on the soil, carried out with the help of a helicopter, is presented. The analysis is based on a new formula that takes into account the gamma-ray attenuation factor of the helicopter walls. A complete error formula and an error diagram are given. (author)
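
    The attenuation factor mentioned here is, at its core, a Beer-Lambert correction; the sketch below shows how a small error in the assumed wall thickness propagates into the corrected count rate. The coefficient and thickness are hypothetical, and the paper's full formula (altitude, geometry) is not reproduced.

```python
# Beer-Lambert correction of a count rate observed through an absorbing wall:
# observed = true * exp(-mu * x), so true = observed / exp(-mu * x).
# mu and x below are hypothetical placeholders.
import math

def true_rate(observed_rate: float, mu_per_cm: float, thickness_cm: float) -> float:
    """Correct an observed count rate for attenuation in a wall of given thickness."""
    return observed_rate / math.exp(-mu_per_cm * thickness_cm)

mu, x = 0.15, 2.0                       # linear attenuation coeff [1/cm], wall [cm]
r = true_rate(1000.0, mu, x)
r_perturbed = true_rate(1000.0, mu, 1.01 * x)   # 1% error in assumed thickness
print(f"corrected rate: {r:.1f} counts/s")
print(f"relative error from 1% thickness error: {abs(r_perturbed - r) / r:.4f}")
```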

  9. An individual differences approach to multiple-target visual search errors: How search errors relate to different characteristics of attention.

    Science.gov (United States)

    Adamo, Stephen H; Cain, Matthew S; Mitroff, Stephen R

    2017-12-01

    A persistent problem in visual search is that searchers are more likely to miss a target if they have already found another in the same display. This phenomenon, the Subsequent Search Miss (SSM) effect, has persisted despite being a known issue for decades. Increasingly, evidence supports a resource depletion account of SSM errors: a previously detected target consumes attentional resources, leaving fewer resources available for the processing of a second target. However, "attention" is broadly defined and is composed of many different characteristics, leaving considerable uncertainty about how attention affects second-target detection. The goal of the current study was to identify which attentional characteristics (i.e., selection, limited capacity, modulation, and vigilance) relate to second-target misses. The current study compared second-target misses to an attentional blink task and a vigilance task, which both have established measures that were used to operationally define each of the four attentional characteristics. Second-target misses in the multiple-target search were correlated with (1) a measure of the time it took for the second target to recover from the blink in the attentional blink task (i.e., modulation), and (2) target sensitivity (d') in the vigilance task (i.e., vigilance). Participants with longer recovery and poorer vigilance had more second-target misses in the multiple-target visual search task. The results add further support to a resource depletion account of SSM errors and highlight that worse modulation and poor vigilance reflect a deficit in attentional resources that can account for SSM errors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Speech Errors in Progressive Non-Fluent Aphasia

    Science.gov (United States)

    Ash, Sharon; McMillan, Corey; Gunawardena, Delani; Avants, Brian; Morgan, Brianna; Khan, Alea; Moore, Peachie; Gee, James; Grossman, Murray

    2010-01-01

    The nature and frequency of speech production errors in neurodegenerative disease have not previously been precisely quantified. In the present study, 16 patients with a progressive form of non-fluent aphasia (PNFA) were asked to tell a story from a wordless children's picture book. Errors in production were classified as either phonemic,…

  11. Locked modes and magnetic field errors in MST

    International Nuclear Information System (INIS)

    Almagri, A.F.; Assadi, S.; Prager, S.C.; Sarff, J.S.; Kerst, D.W.

    1992-06-01

    In the MST reversed-field pinch, magnetic oscillations become stationary (locked) in the lab frame as a result of a process involving interactions between the modes, sawteeth, and field errors. Several helical modes become phase-locked to each other to form a rotating localized disturbance; the disturbance locks to an impulsive field error generated at a sawtooth crash; the error fields grow monotonically after locking (perhaps due to an unstable interaction between the modes and the field error); and over the tens of milliseconds of growth, confinement degrades and the discharge eventually terminates. Field error control has been partially successful in eliminating locking.

  12. Optimal measurement uncertainties for materials accounting in a fast breeder reactor spent-fuel reprocessing plant

    International Nuclear Information System (INIS)

    Dayem, H.A.; Kern, E.A.; Markin, J.T.

    1982-01-01

    Optimization techniques are used to calculate measurement uncertainties for materials accountability instruments in a fast breeder reactor spent-fuel reprocessing plant. Optimal measurement uncertainties are calculated so that performance goals for detecting materials loss are achieved while minimizing the total instrument development cost. Improved materials accounting in the chemical separations process (111 kg Pu/day) to meet 8-kg plutonium abrupt (1-day) and 40-kg plutonium protracted (6-month) loss-detection goals requires: process tank volume and concentration measurements having precisions less than or equal to 1%; accountability and plutonium sample tank volume measurements having precisions less than or equal to 0.3%, short-term correlated errors less than or equal to 0.04%, and long-term correlated errors less than or equal to 0.04%; and accountability and plutonium sample tank concentration measurements having precisions less than or equal to 0.4%, short-term correlated errors less than or equal to 0.1%, and long-term correlated errors less than or equal to 0.05%.

  13. Variance and covariance calculations for nuclear materials accounting using ''MAVARIC''

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-07-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.
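
    The variance propagation that MAVARIC automates can be sketched for a simplified balance MB = beginning + receipts - shipments - ending with independent terms: random (precision) components average down with the number of measurements, while fully correlated systematic components do not. All masses and error standard deviations below are hypothetical.

```python
# Sketch of materials-balance variance propagation. Each term contributes a
# random component (averages down over n measurements) and a systematic
# component (fully correlated, does not average down). Values are hypothetical.
import math

# name: (total SNM mass [kg], random rel. std. dev., systematic rel. std. dev., n)
terms = {
    "beginning inventory": (100.0, 0.010, 0.002, 1),
    "receipts":            ( 50.0, 0.005, 0.001, 10),
    "shipments":           ( 45.0, 0.005, 0.001, 9),
    "ending inventory":    (104.0, 0.010, 0.002, 1),
}

var_mb = 0.0
for mass, rand_rsd, sys_rsd, n in terms.values():
    var_mb += n * (mass / n * rand_rsd) ** 2   # random part: per-batch errors add
    var_mb += (mass * sys_rsd) ** 2            # systematic part: fully correlated

mb = (terms["beginning inventory"][0] + terms["receipts"][0]
      - terms["shipments"][0] - terms["ending inventory"][0])
sigma = math.sqrt(var_mb)
print(f"MB = {mb:.2f} kg, sigma(MB) = {sigma:.3f} kg")
print(f"loss detectable at ~3.3 sigma: {3.3 * sigma:.2f} kg")
```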

  14. Variance and covariance calculations for nuclear materials accounting using 'MAVARIC'

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-01-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.

  15. Medication error detection in two major teaching hospitals: What are the types of errors?

    Directory of Open Access Journals (Sweden)

    Fatemeh Saghafi

    2014-01-01

    Full Text Available Background: An increasing number of reports on medication errors and the subsequent damage, especially in medical centers, has become a growing concern for patient safety in recent decades. Patient safety, and in particular medication safety, is a major concern and challenge for health care professionals around the world. Our prospective study was designed to detect prescribing, transcribing, dispensing, and administering medication errors in two major university hospitals. Materials and Methods: After choosing 20 similar hospital wards in two large teaching hospitals in the city of Isfahan, Iran, the sequence was randomly selected. Diagrams for drug distribution were drawn with the help of the pharmacy directors. The direct observation technique was chosen as the method for detecting errors. A total of 50 doses were studied in each ward to detect prescribing, transcribing and administering errors. Dispensing errors were studied on 1000 doses dispensed in each hospital pharmacy. Results: A total of 8162 doses of medications were studied during the four stages, of which 8000 yielded complete data for analysis. 73% of prescribing orders were incomplete and did not have all six parameters (name, dosage form, dose and measuring unit, administration route, and intervals of administration). We found 15% transcribing errors. On average, one-third of medication administrations were erroneous in both hospitals. Dispensing errors ranged between 1.4% and 2.2%. Conclusion: Although prescribing and administering comprise most of the medication errors, improvements are needed in all four stages with regard to medication errors. Clear guidelines must be written and executed in both hospitals to reduce the incidence of medication errors.

  16. Spectrum of diagnostic errors in radiology.

    Science.gov (United States)

    Pinto, Antonio; Brunese, Luca

    2010-10-28

    Diagnostic errors are important in all branches of medicine because they are an indication of poor patient care. Since the early 1970s, physicians have been subjected to an increasing number of medical malpractice claims. Radiology is one of the specialties most liable to claims of medical negligence. Most often, a plaintiff's complaint against a radiologist will focus on a failure to diagnose. The etiology of radiological error is multi-factorial. Errors fall into recurrent patterns. Errors arise from poor technique, failures of perception, lack of knowledge and misjudgments. The work of diagnostic radiology consists of the complete detection of all abnormalities in an imaging examination and their accurate diagnosis. Every radiologist should understand the sources of error in diagnostic radiology as well as the elements of negligence that form the basis of malpractice litigation. Error traps need to be uncovered and highlighted, in order to prevent repetition of the same mistakes. This article focuses on the spectrum of diagnostic errors in radiology, including a classification of the errors, and stresses the malpractice issues in mammography, chest radiology and obstetric sonography. Missed fractures in emergency and communication issues between radiologists and physicians are also discussed.

  17. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing between partially aware errors (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN, but not the degree of error awareness, determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    Directory of Open Access Journals (Sweden)

    Gyungho Khim

    2015-01-01

    Full Text Available We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement.

  19. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    Science.gov (United States)

    Khim, Gyungho; Park, Chun Hong; Oh, Jeong Seok

    2015-01-01

    We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement.

  20. Self-Interaction Error in Density Functional Theory: An Appraisal.

    Science.gov (United States)

    Bao, Junwei Lucas; Gagliardi, Laura; Truhlar, Donald G

    2018-05-03

    Self-interaction error (SIE) is considered to be one of the major sources of error in most approximate exchange-correlation functionals for Kohn-Sham density-functional theory (KS-DFT), and it is large with all local exchange-correlation functionals and with some hybrid functionals. In this work, we consider systems conventionally considered to be dominated by SIE. For these systems, we demonstrate that by using multiconfiguration pair-density functional theory (MC-PDFT), the error of a translated local density-functional approximation is significantly reduced (by a factor of 3) when using an MCSCF density and on-top density, as compared to using KS-DFT with the parent functional; the error in MC-PDFT with local on-top functionals is even lower than the error in some popular KS-DFT hybrid functionals. Density-functional theory, either in MC-PDFT form with local on-top functionals or in KS-DFT form with some functionals having 50% or more nonlocal exchange, has smaller errors for SIE-prone systems than does CASSCF, which has no SIE.

  1. Ergodic Capacity Analysis of Free-Space Optical Links with Nonzero Boresight Pointing Errors

    KAUST Repository

    Ansari, Imran Shafique

    2015-04-01

    A unified capacity analysis of a free-space optical (FSO) link that accounts for nonzero boresight pointing errors and both types of detection techniques (i.e., intensity modulation/direct detection as well as heterodyne detection) is addressed in this work. More specifically, an exact closed-form expression for the moments of the end-to-end signal-to-noise ratio (SNR) of a single-link FSO transmission system is presented in terms of well-known elementary functions. Capitalizing on these new moments expressions, we present approximate and simple closed-form results for the ergodic capacity in the high and low SNR regimes. All the presented results are verified via computer-based Monte-Carlo simulations.
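
    As a sanity check on moments-based capacity results, the ergodic capacity C = E[ln(1 + SNR)] can always be estimated directly by Monte Carlo. The lognormal fading below is a hypothetical stand-in for the FSO channel with nonzero boresight pointing errors, which has no such simple form; only the estimation pattern carries over.

```python
# Monte-Carlo estimate of ergodic capacity C = E[ln(1 + SNR)] (nats per channel
# use), plus the high-SNR approximation E[ln(SNR)] that moment-based analyses
# exploit. The lognormal fading model here is a hypothetical placeholder.
import numpy as np

rng = np.random.default_rng(4)
avg_snr = 10 ** (15.0 / 10)                   # 15 dB average SNR, linear scale

sigma = 0.5                                   # lognormal fading spread
fading = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=1_000_000)
snr = avg_snr * fading                        # instantaneous SNR, mean avg_snr

print(f"Monte-Carlo capacity:   {np.mean(np.log1p(snr)):.4f} nats")
print(f"high-SNR approximation: {np.mean(np.log(snr)):.4f} nats")
```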

  2. Competition increases binding errors in visual working memory.

    Science.gov (United States)

    Emrich, Stephen M; Ferber, Susanne

    2012-04-20

    When faced with maintaining multiple objects in visual working memory, item information must be bound to the correct object in order to be correctly recalled. Sometimes, however, binding errors occur, and participants report the feature (e.g., color) of an unprobed, non-target item. In the present study, we examine whether the configuration of sample stimuli affects the proportion of these binding errors. The results demonstrate that participants mistakenly report the identity of the unprobed item (i.e., they make a non-target response) when sample items are presented close together in space, suggesting that binding errors can increase independent of increases in memory load. Moreover, the proportion of these non-target responses is linearly related to the distance between sample items, suggesting that these errors are spatially specific. Finally, presenting sample items sequentially decreases non-target responses, suggesting that reducing competition between sample stimuli reduces the number of binding errors. Importantly, these effects all occurred without increases in the amount of error in the memory representation. These results suggest that competition during encoding can account for some of the binding errors made during VWM recall.

  3. Error estimation and adaptivity for incompressible hyperelasticity

    KAUST Repository

    Whiteley, J.P.

    2014-04-30

    A Galerkin FEM is developed for nonlinear, incompressible (hyper) elasticity that takes account of nonlinearities in both the strain tensor and the relationship between the strain tensor and the stress tensor. By using suitably defined linearised dual problems with appropriate boundary conditions, a posteriori error estimates are then derived for both linear functionals of the solution and linear functionals of the stress on a boundary, where Dirichlet boundary conditions are applied. A second, higher order method for calculating a linear functional of the stress on a Dirichlet boundary is also presented together with an a posteriori error estimator for this approach. An implementation for a 2D model problem with known solution, where the entries of the strain tensor exhibit large, rapid variations, demonstrates the accuracy and sharpness of the error estimators. Finally, using a selection of model problems, the a posteriori error estimate is shown to provide a basis for effective mesh adaptivity. © 2014 John Wiley & Sons, Ltd.

  4. ERROR ANALYSIS ON INFORMATION AND TECHNOLOGY STUDENTS’ SENTENCE WRITING ASSIGNMENTS

    Directory of Open Access Journals (Sweden)

    Rentauli Mariah Silalahi

    2015-03-01

    Full Text Available Analysis of students' errors is very important for helping EFL teachers to develop their teaching materials, assessments and methods. However, it takes much time and effort for teachers to carry out such an error analysis of their students' language. This study seeks to identify the common errors made by one class of 28 freshman students studying English in their first semester at an IT university. The data were collected from their writing assignments over eight consecutive weeks. The errors found were classified into 24 types; the top ten most common errors committed by the students concerned articles, prepositions, spelling, word choice, subject-verb agreement, auxiliary verbs, plural forms, verb forms, capital letters, and meaningless sentences. The findings about the students' frequency of committing errors were then contrasted with their midterm test results, and, to find out the reasons behind the recurring errors, the students were asked to answer a questionnaire. Most of the students admitted that carelessness was the major reason for their errors, with lack of understanding coming next. This study suggests that EFL teachers devote time to continuously checking the students' language and giving corrections, so that the students can learn from their errors and stop committing the same ones.

  5. Evo-devo and accounting for Darwin's endless forms.

    Science.gov (United States)

    Brakefield, Paul M

    2011-07-27

    Evo-devo has led to dramatic advances in our understanding of how the processes of development can contribute to explaining patterns of evolutionary diversification that underlie the endless forms of animal life on the Earth. This is increasingly the case not only for the origins of evolutionary novelties that permit new functions and open up new adaptive zones, but also for the processes of evolutionary tinkering that occur within the subsequent radiations of related species. Evo-devo has time and again yielded spectacular examples of Darwin's notions of common ancestry and of descent with modification. It has also shown that the evolution of endless forms is more about the evolution of the regulatory machinery of ancient genes than the origin and elaboration of new genes. Evolvability, especially with respect to the capacity of a developmental system to evolve and to generate the variation in form for natural selection to screen, has become a pivotal focus of evo-devo. As a consequence, a balancing of the concept of endless forms in morphospace with a greater awareness of the potential for developmental constraints and bias is becoming more general. The prospect of parallel horizons opening up for the evolution of behaviour is exciting; in particular, does Sean Carroll's phrase referring to old genes learning new tricks in the evolution of endless forms apply equally well to patterns of diversity and disparity in behavioural trait-space?

  6. Testing the Motor Simulation Account of Source Errors for Actions in Recall

    Directory of Open Access Journals (Sweden)

    Nicholas Lange

    2017-09-01

    Observing someone else perform an action can lead to false memories of self-performance – the observation inflation effect. One explanation is that action simulation via mirror neuron activation during action observation is responsible for observation inflation by enriching memories of observed actions with motor representations. In three experiments we investigated this account of source memory failures, using a novel paradigm that minimized influences of verbalization and prior object knowledge. Participants worked in pairs to take turns acting out geometric shapes and letters. The next day, participants recalled either actions they had performed or those they had observed. Experiment 1 showed that participants falsely retrieved observed actions as self-performed, but also retrieved self-performed actions as observed. Experiment 2 showed that preventing participants from encoding observed actions motorically, by taxing their motor system with a concurrent motor task, did not lead to the predicted decrease in false claims of self-performance. Indeed, Experiment 3 showed that this was the case even if participants were asked to carefully monitor their recall. Because our data provide no evidence for a motor activation account, we also discuss our results in light of a source monitoring account.

  7. On the Performance of Free-Space Optical Systems over Generalized Atmospheric Turbulence Channels with Pointing Errors

    KAUST Repository

    Ansari, Imran Shafique

    2015-03-01

    Generalized fading has been an imminent part and parcel of wireless communications. It not only characterizes the wireless channel appropriately but also allows its utilization for further performance analysis of various types of wireless communication systems. Under the umbrella of generalized fading channels, a unified performance analysis of a free-space optical (FSO) link over the Malaga (M) atmospheric turbulence channel that accounts for pointing errors and both types of detection techniques (i.e. indirect modulation/direct detection (IM/DD) as well as heterodyne detection) is presented. Specifically, unified exact closed-form expressions for the probability density function (PDF), the cumulative distribution function (CDF), the moment generating function (MGF), and the moments of the end-to-end signal-to-noise ratio (SNR) of a single link FSO transmission system are presented, all in terms of the Meijer G function except for the moments, which are in terms of simple elementary functions. Then, capitalizing on these unified results, unified exact closed-form expressions for various performance metrics of FSO link transmission systems are offered, such as the outage probability (OP), the higher-order amount of fading (AF), the average error rate for binary and M-ary modulation schemes, and the ergodic capacity (except for the IM/DD technique, where closed-form lower bound results are presented), all in terms of Meijer G functions except for the higher-order AF, which is in terms of simple elementary functions. Additionally, asymptotic results are derived for all the expressions derived earlier in terms of the Meijer G function in the high SNR regime in terms of simple elementary functions via an asymptotic expansion of the Meijer G function. Furthermore, new asymptotic expressions for the ergodic capacity in the low as well as high SNR regimes are derived in terms of simple elementary functions by utilizing moments. All the presented results are
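
    Such Meijer G expressions are directly machine-computable. As a minimal, hedged illustration (not one of the paper's formulas), the Meijer G function is available in Python's mpmath and can be checked against the elementary identity exp(-z) = G^{1,0}_{0,1}(z | -; 0):

        import mpmath as mp

        # Identity: exp(-z) = G^{1,0}_{0,1}(z | -; 0). mpmath's meijerg takes the
        # parameter groups ([a_1..a_n], [a_{n+1}..a_p]), ([b_1..b_m], [b_{m+1}..b_q]), z.
        z = mp.mpf("2.5")
        print(mp.meijerg([[], []], [[0], []], z))  # 0.0820849986...
        print(mp.exp(-z))                          # matches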

  8. Augmented Automated Material Accounting Statistics System (AMASS)

    International Nuclear Information System (INIS)

    Lumb, R.F.; Messinger, M.; Tingey, F.H.

    1983-01-01

    This paper describes an extension of the AMASS methodology which was previously presented at the 1981 INMM annual meeting. The main thrust of the current effort is to develop procedures and a computer program for estimating the variance of an inventory difference when many sources of variability, other than measurement error, are admitted in the model. Procedures also are included for the estimation of the variances associated with measurement error estimates and their effect on the estimated limit of error of the inventory difference (LEID). The algorithm for the LEID measurement component uncertainty involves the propagated component measurement variance estimates as well as their associated degrees of freedom. The methodology and supporting computer software is referred to as the augmented Automated Material Accounting Statistics System (AMASS). Specifically, AMASS accommodates five source effects: (1) measurement errors; (2) known but unmeasured effects; (3) measurement adjustment effects; (4) unmeasured process hold-up effects; and (5) residual process variation. A major result of this effort is a procedure for determining the effect of bias correction on LEID, properly taking into account all the covariances that exist. This paper briefly describes the basic models that are assumed, some of the estimation procedures consistent with the model, and the data requirements, emphasizing availability and other practical considerations; discusses the implications for bias corrections; and concludes by briefly describing the supporting computer program.

  9. Bringing organizational factors to the fore of human error management

    International Nuclear Information System (INIS)

    Embrey, D.

    1991-01-01

    Human performance problems account for more than half of all significant events at nuclear power plants, even when these did not necessarily lead to severe accidents. In dealing with the management of human error, both technical and organizational factors need to be taken into account. Most important, a long-term commitment from senior management is needed. (author)

  10. Common patterns in 558 diagnostic radiology errors.

    Science.gov (United States)

    Donald, Jennifer J; Barnard, Stuart A

    2012-04-01

    As a Quality Improvement initiative our department has held regular discrepancy meetings since 2003. We performed a retrospective analysis of the cases presented and identified the most common pattern of error. A total of 558 cases were referred for discussion over 92 months, and errors were classified as perceptual or interpretative. The most common patterns of error for each imaging modality were analysed, and the misses were scored by consensus as subtle or non-subtle. Of 558 diagnostic errors, 447 (80%) were perceptual and 111 (20%) were interpretative errors. Plain radiography and computed tomography (CT) scans were the most frequent imaging modalities accounting for 246 (44%) and 241 (43%) of the total number of errors, respectively. In the plain radiography group 120 (49%) of the errors occurred in chest X-ray reports with perceptual miss of a lung nodule occurring in 40% of this subgroup. In the axial and appendicular skeleton missed fractures occurred most frequently, and metastatic bone disease was overlooked in 12 of 50 plain X-rays of the pelvis or spine. The majority of errors within the CT group were in reports of body scans with the commonest perceptual errors identified including 16 missed significant bone lesions, 14 cases of thromboembolic disease and 14 gastrointestinal tumours. Of the 558 errors, 312 (56%) were considered subtle and 246 (44%) non-subtle. Diagnostic errors are not uncommon and are most frequently perceptual in nature. Identification of the most common patterns of error has the potential to improve the quality of reporting by improving the search behaviour of radiologists. © 2012 The Authors. Journal of Medical Imaging and Radiation Oncology © 2012 The Royal Australian and New Zealand College of Radiologists.

  11. Tanks for liquids: calibration and errors assessment

    International Nuclear Information System (INIS)

    Espejo, J.M.; Gutierrez Fernandez, J.; Ortiz, J.

    1980-01-01

    After a brief reference to some of the problems raised by tank calibration, two methods, one theoretical and one experimental, are presented for carrying out the calibration while taking measurement errors into account. The approach is applied to the transfer of liquid from one tank to another, and a practical example is developed. (author)

  12. A New Form of Nondestructive Strength-Estimating Statistical Models Accounting for Uncertainty of Model and Aging Effect of Concrete

    International Nuclear Information System (INIS)

    Hong, Kee Jeung; Kim, Jee Sang

    2009-01-01

    As concrete ages, the surrounding environment is expected to have a growing influence on the concrete. As all the impacts of the environment cannot be considered in the strength-estimating model of a nondestructive concrete test, increasing concrete age leads to growing uncertainty in the strength-estimating model, and therefore the variance of the model error increases. It is necessary to include those impacts in the probability model of concrete strength obtained from nondestructive tests so as to build a more accurate reliability model for structural performance evaluation. This paper reviews and categorizes the existing strength-estimating statistical models for nondestructive concrete tests, and suggests a new form of strength-estimating statistical model that properly reflects the model uncertainty due to aging of the concrete. This new form of statistical model will lay the foundation for more accurate structural performance evaluation.

  13. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    Science.gov (United States)

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing the model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.

  14. Measurement Error in Income and Schooling and the Bias of Linear Estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    2017-01-01

    We propose a general framework for determining the extent of measurement error bias in ordinary least squares and instrumental variable (IV) estimators of linear models while allowing for measurement error in the validation source. We apply this method by validating Survey of Health, Ageing and Retirement in Europe data with Danish administrative registers. Contrary to most validation studies, we find that measurement error in income is classical once we account for imperfect validation data. We find nonclassical measurement error in schooling, causing a 38% amplification bias in IV estimators.

  15. Error vs rejection curve for the perceptron

    OpenAIRE

    PARRONDO, JMR; VAN DEN BROECK, Christian

    1993-01-01

    We calculate the generalization error epsilon for a perceptron J, trained by a teacher perceptron T, on input patterns S that form a fixed angle arccos(J·S) with the student. We show that the error is reduced from a power law to an exponentially fast decay by rejecting input patterns that lie within a given neighbourhood of the decision boundary J·S = 0. On the other hand, the error vs. rejection curve epsilon(rho), where rho is the fraction of rejected patterns, is shown to be independent ...

  16. Ethics in the Accountability Profession and their Dilemmas

    Directory of Open Access Journals (Sweden)

    Mihaela Cristina ONICA

    2017-12-01

    The current global crisis, fueled by financial scandals and political instability, has allowed the development of factors that may be attributed to deficiencies in the field of professional ethics. The accounting professional must respect certain regulated rules and principles, but continuous change in the modern business environment confronts the accounting professional with numerous challenges. That is why it is important for each accounting professional to respect the ethical requirements at all times, avoiding, mitigating and, in some cases, even foreseeing the risk of errors, thus protecting the image of the accounting entity.

  17. A Closed-Form Error Model of Straight Lines for Improved Data Association and Sensor Fusing

    Directory of Open Access Journals (Sweden)

    Volker Sommer

    2018-04-01

    Linear regression is a basic tool in mobile robotics, since it enables accurate estimation of straight lines from range-bearing scans or in digital images, which is a prerequisite for reliable data association and sensor fusing in the context of feature-based SLAM. This paper discusses, extends and compares existing algorithms for line fitting that are applicable also in the case of strong covariances between the coordinates at each single data point, which must not be neglected if range-bearing sensors are used. In particular, the determination of the covariance matrix is considered, which is required for stochastic modeling. The main contribution is a new closed-form error model of straight lines for calculating quickly and reliably the covariance matrix as a function of just a few comprehensible and easily obtainable parameters. The model can be applied widely whenever a line is fitted from a number of distinct points, even without a priori knowledge of the specific measurement noise. By means of extensive simulations, the performance and robustness of the new model in comparison to existing approaches are demonstrated.
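
    The correlated per-point covariances that the paper treats arise naturally when a range-bearing return is converted to Cartesian coordinates. As a hedged sketch of that first-order propagation step (illustrative only; the function and noise values are hypothetical, not taken from the paper):

        import numpy as np

        def cartesian_cov(r, theta, sigma_r, sigma_theta):
            """First-order propagation of range-bearing noise to the covariance
            of the Cartesian point (x, y) = (r cos(theta), r sin(theta))."""
            J = np.array([[np.cos(theta), -r * np.sin(theta)],
                          [np.sin(theta),  r * np.cos(theta)]])  # Jacobian d(x,y)/d(r,theta)
            S = np.diag([sigma_r**2, sigma_theta**2])            # sensor noise covariance
            return J @ S @ J.T                                   # full 2x2 point covariance

        # Example: a laser return at 5 m, 30 degrees; the off-diagonal terms are
        # nonzero, so the x and y errors are correlated and must not be neglected.
        print(cartesian_cov(5.0, np.radians(30), sigma_r=0.02, sigma_theta=np.radians(0.5)))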

  18. Social Errors in Four Cultures: Evidence about Universal Forms of Social Relations.

    Science.gov (United States)

    Fiske, Alan Page

    1993-01-01

    To test the cross-cultural generality of relational-models theory, 4 studies with 70 adults examined social errors of substitution of persons for Bengali, Korean, Chinese, and Vai (Liberia and Sierra Leone) subjects. In all four cultures, people tend to substitute someone with whom they have the same basic relationship. (SLD)

  19. The application of statistical techniques to nuclear materials accountancy

    International Nuclear Information System (INIS)

    Annibal, P.S.; Roberts, P.D.

    1990-02-01

    Over the past decade much theoretical research has been carried out on the development of statistical methods for nuclear materials accountancy. In practice plant operation may differ substantially from the idealized models often cited. This paper demonstrates the importance of taking account of plant operation in applying the statistical techniques, to improve the accuracy of the estimates and the knowledge of the errors. The benefits are quantified either by theoretical calculation or by simulation. Two different aspects are considered; firstly, the use of redundant measurements to reduce the error on the estimate of the mass of heavy metal in an accountancy tank is investigated. Secondly, a means of improving the knowledge of the 'Material Unaccounted For' (the difference between the inventory calculated from input/output data, and the measured inventory), using information about the plant measurement system, is developed and compared with existing general techniques. (author)

  20. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    Science.gov (United States)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, with about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, with about 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of successive satellites and to get a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error, which can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to correct for this error; with this correction the global temperature trend decreases by about 0.07 K/decade. In addition, there are systematic time-dependent errors in the data that are introduced by the drift in the satellite orbital geometry: one arises from the diurnal cycle in temperature, and the other is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in MSU Ch 1 (50.3 GHz) support this approach. The error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. In one path the

  1. 7 CFR 1000.77 - Adjustment of accounts.

    Science.gov (United States)

    2010-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Milk), DEPARTMENT OF AGRICULTURE GENERAL PROVISIONS OF FEDERAL MILK MARKETING ORDERS... handler's reports, books, records, or accounts, or other verification discloses errors resulting in money...

  2. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    Science.gov (United States)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors, with the parameters conditioned on climatic conditions. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. In the last model, positive and negative errors were modeled separately: the errors were first NQT-transformed, and then a model was constructed in which the mean values were conditioned on climate, forecasted inflow and the previous day's error. To test the three models we applied three criteria: we wanted a) the median values to be close to the observed values; b) the forecast intervals to be narrow; c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Models 1 and 2 gave similar results, and their main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
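
    As a hedged sketch of the structure of the first model (the data are synthetic and the estimation is simplified; the actual Langvatn model also conditions its parameters on climatic state): Box-Cox-transform the observed and forecasted inflows, then fit a first-order autoregressive model to the transformed forecast errors.

        import numpy as np
        from scipy import stats

        # Hypothetical positive inflows and imperfect forecasts (placeholders for real data).
        rng = np.random.default_rng(0)
        obs = rng.gamma(4.0, 20.0, size=500)
        fcst = obs * rng.lognormal(0.0, 0.1, size=500)

        obs_t, lam = stats.boxcox(obs)              # fit the Box-Cox lambda on observations
        fcst_t = stats.boxcox(fcst, lmbda=lam)      # apply the same lambda to the forecasts
        e = obs_t - fcst_t                          # forecast errors in transformed space

        # First-order autoregressive model: e[t] = phi * e[t-1] + w[t]
        phi = e[1:] @ e[:-1] / (e[:-1] @ e[:-1])    # least-squares AR(1) coefficient
        sigma_w = np.std(e[1:] - phi * e[:-1], ddof=1)
        print(f"lambda={lam:.2f}, phi={phi:.2f}, sigma_w={sigma_w:.2f}")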

  3. A memory of errors in sensorimotor learning.

    Science.gov (United States)

    Herzfeld, David J; Vaswani, Pavan A; Marko, Mollie K; Shadmehr, Reza

    2014-09-12

    The current view of motor learning suggests that when we revisit a task, the brain recalls the motor commands it previously learned. In this view, motor memory is a memory of motor commands, acquired through trial-and-error and reinforcement. Here we show that the brain controls how much it is willing to learn from the current error through a principled mechanism that depends on the history of past errors. This suggests that the brain stores a previously unknown form of memory, a memory of errors. A mathematical formulation of this idea provides insights into a host of puzzling experimental data, including savings and meta-learning, demonstrating that when we are better at a motor task, it is partly because the brain recognizes the errors it experienced before. Copyright © 2014, American Association for the Advancement of Science.

  4. The concept of error and malpractice in radiology.

    Science.gov (United States)

    Pinto, Antonio; Brunese, Luca; Pinto, Fabio; Reali, Riccardo; Daniele, Stefania; Romano, Luigia

    2012-08-01

    Since the early 1970s, physicians have been subjected to an increasing number of medical malpractice claims. Radiology is one of the specialties most liable to claims of medical negligence. The etiology of radiological error is multifactorial. Errors fall into recurrent patterns. Errors arise from poor technique, failures of perception, lack of knowledge, and misjudgments. Every radiologist should understand the sources of error in diagnostic radiology as well as the elements of negligence that form the basis of malpractice litigation. Errors are an inevitable part of human life, and every health professional has made mistakes. To improve patient safety and reduce the risk of harm, we must accept that some errors are inevitable during the delivery of health care. We must pursue a cultural change in medicine, wherein errors are actively sought, openly discussed, and aggressively addressed. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  6. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^(-(d^n - 1)) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
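
    A single-qubit toy calculation (a hedged sketch, not the paper's repetition-code analysis) makes the underlying point concrete: systematic rotations add in amplitude, so the failure probability grows quadratically with the number of cycles, whereas a Pauli channel assigned the same per-cycle error probability accumulates it only linearly.

        import numpy as np

        eps, n = 0.01, 50                    # rotation angle per cycle, number of cycles
        p_step = np.sin(eps / 2) ** 2        # per-cycle flip probability in the Pauli model

        # Coherent: n rotations R_x(eps) compose to R_x(n * eps).
        p_coherent = np.sin(n * eps / 2) ** 2
        # Pauli: probability of an odd number of independent flips.
        p_pauli = 0.5 * (1 - (1 - 2 * p_step) ** n)

        print(f"coherent: {p_coherent:.4f}, Pauli model: {p_pauli:.4f}")
        # ~0.061 vs ~0.001: the Pauli approximation underestimates coherent failure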

  7. Error mapping of high-speed AFM systems

    Science.gov (United States)

    Klapetek, Petr; Picco, Loren; Payton, Oliver; Yacoot, Andrew; Miles, Mervyn

    2013-02-01

    In recent years, there have been several advances in the development of high-speed atomic force microscopes (HSAFMs) to obtain images with nanometre vertical and lateral resolution at frame rates in excess of 1 fps. To date, these instruments are lacking in metrology for their lateral scan axes; however, by imaging a series of two-dimensional lateral calibration standards, it has been possible to obtain information about the errors associated with these HSAFM scan axes. Results from initial measurements are presented in this paper and show that the scan speed needs to be taken into account when performing a calibration as it can lead to positioning errors of up to 3%.

  8. Methods for Estimation of Radiation Risk in Epidemiological Studies Accounting for Classical and Berkson Errors in Doses

    KAUST Repository

    Kukush, Alexander

    2011-01-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor with pr(Y=1 | D) = R(1+R)^(-1), R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t^mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr V_i^Q (a classical measurement error model) and M_i^tr = M_i^mes V_i^M (a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. The simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and then the binary response was simulated according to the dose-response model.

  9. Methods for estimation of radiation risk in epidemiological studies accounting for classical and Berkson errors in doses.

    Science.gov (United States)

    Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre

    2011-02-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor with pr(Y=1 | D) = R(1+R)^(-1), R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t^mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr V_i^Q (a classical measurement error model) and M_i^tr = M_i^mes V_i^M (a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. The simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and then the binary response was simulated according to the dose-response model.
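
    A hedged simulation sketch (synthetic data, and a linear rather than logistic response) contrasts the two error structures and their opposite effects on a naive regression slope; in the paper's nonlinear dose-response setting both error types bias the risk estimates, hence the specialized estimators.

        import numpy as np

        rng = np.random.default_rng(1)
        n, beta = 200_000, 2.0

        def slope(x, y):
            # Naive least-squares slope of y on x, with no error correction
            return np.cov(x, y)[0, 1] / np.var(x)

        # Classical error: the measurement scatters around the truth, x_mes = x_tr * V.
        x_tr = rng.lognormal(0.0, 0.5, n)
        y = beta * x_tr + rng.normal(0.0, 0.5, n)
        x_mes = x_tr * rng.lognormal(-0.045, 0.3, n)   # E[V] = 1 (mu = -sigma^2/2)
        print("classical:", slope(x_mes, y))           # attenuated well below beta = 2

        # Berkson error: the truth scatters around the assigned value, x_tr = x_mes * V.
        x_mes_b = rng.lognormal(0.0, 0.5, n)
        x_tr_b = x_mes_b * rng.lognormal(-0.045, 0.3, n)
        y_b = beta * x_tr_b + rng.normal(0.0, 0.5, n)
        print("Berkson:", slope(x_mes_b, y_b))         # close to beta in a linear model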

  10. Evaluation of Data with Systematic Errors

    International Nuclear Information System (INIS)

    Froehner, F. H.

    2003-01-01

    Application-oriented evaluated nuclear data libraries such as ENDF and JEFF contain not only recommended values but also uncertainty information in the form of 'covariance' or 'error files'. These can neither be constructed nor utilized properly without a thorough understanding of uncertainties and correlations. It is shown how incomplete information about errors is described by multivariate probability distributions or, more summarily, by covariance matrices, and how correlations are caused by incompletely known common errors. Parameter estimation for the practically most important case of the Gaussian distribution with common errors is developed in close analogy to the more familiar case without. The formalism shows that, contrary to widespread belief, common ('systematic') and uncorrelated ('random' or 'statistical') errors are to be added in quadrature. It also shows explicitly that repetition of a measurement reduces mainly the statistical uncertainties but not the systematic ones. While statistical uncertainties are readily estimated from the scatter of repeatedly measured data, systematic uncertainties can only be inferred from prior information about common errors and their propagation. The optimal way to handle error-affected auxiliary quantities ('nuisance parameters') in data fitting and parameter estimation is to adjust them on the same footing as the parameters of interest and to integrate (marginalize) them out of the joint posterior distribution afterward.
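
    A hedged numerical sketch of the two central points (the uncertainty values are hypothetical): a shared systematic error places a constant block on the off-diagonals of the covariance matrix, and each point's total uncertainty is the quadrature sum of its statistical and systematic parts.

        import numpy as np

        sigma_stat = np.array([0.8, 0.5, 1.1])  # independent (statistical) uncertainties
        sigma_sys = 0.6                          # common (systematic) error shared by all

        # Covariance: independent variances on the diagonal, common error everywhere.
        cov = np.diag(sigma_stat**2) + sigma_sys**2 * np.ones((3, 3))

        # Total per-point uncertainty: the quadrature sum, as the abstract states.
        total = np.sqrt(np.diag(cov))
        print(np.allclose(total, np.hypot(sigma_stat, sigma_sys)))  # True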

  11. Descartes' embodied psychology: Descartes' or Damasio's error?

    Science.gov (United States)

    Kirkebøen, G

    2001-08-01

    Damasio (1994) claims that Descartes imagined thinking as an activity separate from the body, and that the effort to understand the mind in general biological terms was retarded as a consequence of Descartes' dualism. These claims do not hold; they are "Damasio's error". Descartes never considered what we today call thinking or cognition without taking the body into account. His new dualism required an embodied understanding of cognition. The article gives an historical overview of the development of Descartes' radically new psychology from his account of algebraic reasoning in the early Regulae (1628) to his "neurobiology of rationality" in the late Passions of the Soul (1649). The author argues that Descartes' dualism opens the way for mechanistic and mathematical explanations of all kinds of physiological and psychological phenomena, including the kind of phenomena Damasio discusses in Descartes' Error. The models of understanding Damasio puts forward can be seen as advanced versions of models which Descartes introduced in the 1640s. A far better title for his book would have been Descartes' Vision.

  12. ITCA: Inter-Task Conflict-Aware CPU accounting for CMP

    OpenAIRE

    Luque, Carlos; Moreto Planas, Miquel; Cazorla Almeida, Francisco Javier; Gioiosa, Roberto; Valero Cortés, Mateo

    2010-01-01

    Chip-MultiProcessors (CMP) introduce complexities when accounting CPU utilization to processes, because the progress made by a process during an interval of time depends strongly on the activity of the other processes it is co-scheduled with. We propose a new hardware CPU accounting mechanism to improve the accuracy when measuring CPU utilization in CMPs and compare it with previous accounting mechanisms. Our results show that currently known mechanisms lead to a 16% average error when it com...

  13. Influence of measurement errors and estimated parameters on combustion diagnosis

    International Nuclear Information System (INIS)

    Payri, F.; Molina, S.; Martin, J.; Armas, O.

    2006-01-01

    Thermodynamic diagnosis models are valuable tools for the study of Diesel combustion. Inputs required by such models comprise measured mean and instantaneous variables, together with suitable values for adjustable parameters used in different submodels. In the case of measured variables, one may estimate the uncertainty associated with measurement errors; however, the influence of errors in model parameter estimation may not be so easily established on an experimental basis. In this paper, a simulated pressure cycle has been used along with known input parameters, so that any uncertainty in the inputs is avoided. Then, the influence of errors in measured variables and in geometric and heat transmission parameters on the results of a combustion diagnosis model for direct injection diesel engines has been studied. This procedure allowed us to establish the relative importance of these parameters and to set limits on the maximal errors of the model, accounting for both the maximal expected errors in the input parameters and the sensitivity of the model to those errors.

  14. Estimation of error fields from ferromagnetic parts in ITER

    Energy Technology Data Exchange (ETDEWEB)

    Oliva, A. Bonito [Fusion for Energy (Spain); Chiariello, A.G.; Formisano, A.; Martone, R. [Ass. EURATOM/ENEA/CREATE, Dip. di Ing. Industriale e dell’Informazione, Seconda Università di Napoli, Via Roma 29, I-81031 Napoli (Italy); Portone, A., E-mail: alfredo.portone@f4e.europa.eu [Fusion for Energy (Spain); Testoni, P. [Fusion for Energy (Spain)

    2013-10-15

    Highlights: The paper deals with error fields generated in ITER by magnetic masses. The magnetization state is computed from simplified FEM models. Closed-form expressions for the flux density of magnetized parts are given. Such expressions simplify the estimation of the effect of iron pieces (or the lack of them) on the error field. Abstract: Error fields in tokamaks are small departures from the exact axisymmetry of the ideal magnetic field configuration. Their reduction below a threshold value by the error field correction coils is essential, since sufficiently large static error fields lead to discharge disruption. Error fields originate not only from magnet fabrication and installation tolerances, the joints and the busbars, but also from the presence of ferromagnetic elements. It was shown that superconducting joints, feeders and busbars have only a secondary effect; however, in order to estimate the importance of each possible error field source, rough evaluations can be very useful because they provide an order of magnitude for the corresponding effect and, therefore, a ranking of the requests for in-depth analysis. The paper proposes a two-step procedure. The first step aims to obtain the approximate magnetization state of the ferromagnetic parts; the second aims to estimate the full 3D error field over the whole volume using equivalent sources for the magnetic masses and taking advantage of well-assessed approximate closed-form expressions, well suited for far-distance effects.

  15. Keeping Books of Account

    OpenAIRE

    2009-01-01

    Books of account are a record of a company’s income and spending. These records may be kept in paper or electronic form. The books of account contain the information for preparing the company’s annual financial statements.

  16. Human error: An essential problem of nuclear power plants

    International Nuclear Information System (INIS)

    Smidt, D.

    1981-01-01

    The author first defines the part played by man in the nuclear power plant and then deals in more detail with the structure of his false behavior in tactical and strategic respects. The discussion of tactical errors and their avoidance is followed by a report on the actual state of plant technology and possible improvements. Subsequently, strategic errors are studied, including the conclusions to be drawn so far (the interface between plant and man, personnel selection and education). If the interfaces between man and machine are designed accordingly, and the physiological strengths and weaknesses of man are fully realized and taken into account, human error need not be an essential problem in nuclear power plants. (GL) [de

  17. NLO error propagation exercise: statistical results

    International Nuclear Information System (INIS)

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or 235U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random-effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
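
    As a hedged illustration of the first element of that methodology (the values are hypothetical, not NLO data): a first-order Taylor expansion propagates the measurement variances through a typical accountancy product, mass of 235U = weight x uranium concentration x enrichment.

        import numpy as np

        # Hypothetical item: m = w * c * e (weight, uranium concentration, enrichment)
        w, c, e = 100.0, 0.85, 0.045
        s_w, s_c, s_e = 0.2, 0.004, 0.0003       # standard deviations of the measurements

        m = w * c * e
        # First-order Taylor expansion: var(m) = sum_i (dm/dx_i)^2 * var(x_i)
        grad = np.array([c * e, w * e, w * c])   # partial derivatives wrt (w, c, e)
        var_m = np.sum(grad**2 * np.array([s_w, s_c, s_e])**2)
        print(f"m = {m:.4f} kg, ~2-sigma limit of error = {2 * np.sqrt(var_m):.4f} kg")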

  18. Prescribing errors in a Brazilian neonatal intensive care unit

    Directory of Open Access Journals (Sweden)

    Ana Paula Cezar Machado

    2015-12-01

    Pediatric patients, especially those admitted to the neonatal intensive care unit (ICU), are highly vulnerable to medication errors. This study aimed to measure the prescription error rate in a university hospital neonatal ICU and to identify susceptible patients, types of errors, and the medicines involved. The variables related to the medicines prescribed were compared to the Neofax prescription protocol. The study enrolled 150 newborns and analyzed 489 prescription order forms, with 1,491 medication items, corresponding to 46 drugs. The prescription error rate was 43.5%. Errors were found in dosage, intervals, diluents, and infusion time, distributed across 7 therapeutic classes. Errors were more frequent in preterm newborns. Diluents and dosing were the most frequent sources of errors. The therapeutic classes most involved in errors were antimicrobial agents and drugs that act on the nervous and cardiovascular systems.

  19. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
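
    A miniature version of such a simulation (a hedged sketch with hypothetical parameters, not the authors' program) scores momentary time sampling against the true cumulative duration of randomly placed target events:

        import numpy as np

        rng = np.random.default_rng(2)
        T, dt = 600.0, 1.0                      # 10-minute observation at 1 s resolution
        t = np.arange(0, T, dt)

        # Hypothetical target behavior: ~20 events of 5 s each at random times.
        on = np.zeros_like(t, dtype=bool)
        for start in rng.uniform(0, T - 5, size=20):
            on[(t >= start) & (t < start + 5)] = True
        true_frac = on.mean()                   # true fraction of time the behavior occurs

        for interval in (10, 30, 60):           # interval durations in seconds
            est = on[::interval].mean()         # one instantaneous observation per interval
            print(f"{interval:3d} s interval: estimate {est:.3f} vs true {true_frac:.3f}")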

  20. Statistical analysis and Kalman filtering applied to nuclear materials accountancy

    International Nuclear Information System (INIS)

    Annibal, P.S.

    1990-08-01

    Much theoretical research has been carried out on the development of statistical methods for nuclear material accountancy. In practice, physical, financial and time constraints mean that the techniques must be adapted to give an optimal performance in plant conditions. This thesis aims to bridge the gap between theory and practice, to show the benefits to be gained from a knowledge of the facility operation. Four different aspects are considered; firstly, the use of redundant measurements to reduce the error on the estimate of the mass of heavy metal in an 'accountancy tank' is investigated. Secondly, an analysis of the calibration data for the same tank is presented, establishing bounds for the error and suggesting a means of reducing them. Thirdly, a plant-specific method of producing an optimal statistic from the input, output and inventory data, to help decide between 'material loss' and 'no loss' hypotheses, is developed and compared with existing general techniques. Finally, an application of the Kalman Filter to materials accountancy is developed, to demonstrate the advantages of state-estimation techniques. The results of the analyses and comparisons illustrate the importance of taking into account a complete and accurate knowledge of the plant operation, measurement system, and calibration methods, to derive meaningful results from statistical tests on materials accountancy data, and to give a better understanding of critical random and systematic error sources. The analyses were carried out on the head-end of the Fast Reactor Reprocessing Plant, where fuel from the prototype fast reactor is cut up and dissolved. However, the techniques described are general in their application. (author)
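
    A hedged sketch of that state-estimation idea (the numbers are hypothetical, not plant data): treat the true inventory as the state, measured transfers as control inputs and physical inventories as noisy observations; the filter's innovation sequence then acts as a per-period 'Material Unaccounted For' statistic.

        import numpy as np

        rng = np.random.default_rng(3)
        n, q, r = 24, 0.05, 0.5                # periods; process and measurement variances
        transfers = rng.normal(2.0, 0.2, n)    # measured net input minus output per period
        loss = 0.3                             # hypothetical unrecorded loss per period

        x_true, x_est, P = 100.0, 100.0, 1.0
        innov = []
        for k in range(n):
            x_true += transfers[k] - loss + rng.normal(0, np.sqrt(q))
            z = x_true + rng.normal(0, np.sqrt(r))        # measured physical inventory
            x_pred, P_pred = x_est + transfers[k], P + q  # book prediction from transfers
            K = P_pred / (P_pred + r)                     # Kalman gain
            innov.append(z - x_pred)
            x_est = x_pred + K * innov[-1]
            P = (1 - K) * P_pred

        # With no loss the innovations are zero-mean; a persistently negative
        # sequence is the statistical signature of unrecorded material loss.
        print(f"mean innovation: {np.mean(innov):.2f}")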

  1. Roles of dopamine neurons in mediating the prediction error in aversive learning in insects.

    Science.gov (United States)

    Terao, Kanta; Mizunami, Makoto

    2017-10-31

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. The prediction error theory has been proposed to account for the finding of a blocking phenomenon, in which pairing of a stimulus X with an unconditioned stimulus (US) could block subsequent association of a second stimulus Y to the US when the two stimuli were paired in compound with the same US. Evidence for this theory, however, has been imperfect since blocking can also be accounted for by competitive theories. We recently reported blocking in classical conditioning of an odor with water reward in crickets. We also reported an "auto-blocking" phenomenon in appetitive learning, which supported the prediction error theory and rejected alternative theories. The presence of auto-blocking also suggested that octopamine neurons mediate reward prediction error signals. Here we show that blocking and auto-blocking occur in aversive learning to associate an odor with salt water (US) in crickets, and our results suggest that dopamine neurons mediate aversive prediction error signals. We conclude that the prediction error theory is applicable to both appetitive learning and aversive learning in insects.
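
    The prediction error theory the abstract tests is commonly formalized by the Rescorla-Wagner rule. A minimal sketch (illustrative parameter values) reproduces blocking: after pretraining on X, compound XY trials generate almost no prediction error, so Y acquires almost no associative strength.

        # Rescorla-Wagner: each present cue is updated by dV = alpha * (lam - V_total)
        alpha, lam = 0.3, 1.0
        V = {"X": 0.0, "Y": 0.0}

        for _ in range(50):                  # Phase 1: X alone predicts the US
            V["X"] += alpha * (lam - V["X"])

        for _ in range(50):                  # Phase 2: compound XY with the same US
            error = lam - (V["X"] + V["Y"])  # near zero after pretraining
            V["X"] += alpha * error
            V["Y"] += alpha * error

        print(V)                             # V['Y'] stays near 0: learning is blocked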

  2. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel

    2010-10-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of possibly erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and deduce the diversity order. We show that the simulation results coincide with our analytical results. ©2010 IEEE.

  3. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim

    2010-01-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of possibly erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and deduce the diversity order. We show that the simulation results coincide with our analytical results. ©2010 IEEE.
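
    As a hedged sanity check of the simplest ingredient of such analyses (a single Rayleigh-faded BPSK link, not the paper's end-to-end incremental-relaying expression), the classical closed form P_b = (1/2)(1 - sqrt(g/(1+g))), with g the average SNR, can be verified by Monte Carlo:

        import numpy as np

        rng = np.random.default_rng(4)
        gbar = 10.0                                        # average SNR (linear scale)
        pb_closed = 0.5 * (1 - np.sqrt(gbar / (1 + gbar))) # closed-form average BER

        n = 1_000_000
        g = rng.exponential(gbar, n)                       # Rayleigh fading: SNR ~ Exp(gbar)
        s = 2.0 * rng.integers(0, 2, n) - 1                # BPSK symbols in {-1, +1}
        r = s + rng.normal(0.0, np.sqrt(1.0 / (2.0 * g)))  # unit symbol energy, SNR g
        print(f"closed form: {pb_closed:.5f}, simulated: {(np.sign(r) != s).mean():.5f}")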

  4. Implementing Replacement Cost Accounting

    Science.gov (United States)

    1976-12-01

    Master's thesis by John Ross Clickener, Naval Postgraduate School, Monterey, California. Available from the NPS Archive (Calhoun): http://hdl.handle.net/10945/17810

  5. Impact of a reengineered electronic error-reporting system on medication event reporting and care process improvements at an urban medical center.

    Science.gov (United States)

    McKaig, Donald; Collins, Christine; Elsaid, Khaled A

    2014-09-01

    A study was conducted to evaluate the impact of a reengineered approach to electronic error reporting at a 719-bed multidisciplinary urban medical center. The main outcome of interest was the monthly reported medication errors during the preimplementation (20 months) and postimplementation (26 months) phases. An interrupted time series analysis was used to describe baseline errors, immediate change following implementation of the current electronic error-reporting system (e-ERS), and trend of error reporting during postimplementation. Errors were categorized according to severity using the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) Medication Error Index classifications. Reported errors were further analyzed by reporter and error site. During preimplementation, the monthly reported errors mean was 40.0 (95% confidence interval [CI]: 36.3-43.7). Immediately following e-ERS implementation, monthly reported errors significantly increased by 19.4 errors (95% CI: 8.4-30.5). The change in slope of reported errors trend was estimated at 0.76 (95% CI: 0.07-1.22). Near misses and no-patient-harm errors accounted for 90% of all errors, while errors that caused increased patient monitoring or temporary harm accounted for 9% and 1%, respectively. Nurses were the most frequent reporters, while physicians were more likely to report high-severity errors. Medical care units accounted for approximately half of all reported errors. Following the intervention, there was a significant increase in reporting of prevented errors and errors that reached the patient with no resultant harm. This improvement in reporting was sustained for 26 months and has contributed to designing and implementing quality improvement initiatives to enhance the safety of the medication use process.
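
    A hedged sketch of the interrupted time series model behind those numbers (the data here are simulated, and the published analysis may include further terms such as a baseline trend): baseline level, immediate step at implementation, and post-implementation slope change, fitted by ordinary least squares.

        import numpy as np

        pre, post = 20, 26                            # months before/after implementation
        t = np.arange(pre + post)
        after = (t >= pre).astype(float)              # post-implementation indicator
        t_after = np.where(t >= pre, t - pre + 1, 0)  # months since implementation

        rng = np.random.default_rng(5)
        y = 40 + 19.4 * after + 0.76 * t_after + rng.normal(0, 4, pre + post)

        X = np.column_stack([np.ones_like(t), after, t_after])  # [level, step, slope]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(coef)  # recovers roughly [40, 19.4, 0.76]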

  6. Automated Classification of Phonological Errors in Aphasic Language

    Science.gov (United States)

    Ahuja, Sanjeev B.; Reggia, James A.; Berndt, Rita S.

    1984-01-01

    Using heuristically-guided state space search, a prototype program has been developed to simulate and classify phonemic errors occurring in the speech of neurologically-impaired patients. Simulations are based on an interchangeable rule/operator set of elementary errors which represent a theory of phonemic processing faults. This work introduces and evaluates a novel approach to error simulation and classification; it provides a prototype simulation tool for neurolinguistic research; and it forms the initial phase of a larger research effort involving computer modelling of neurolinguistic processes.

  7. On the effect of numerical errors in large eddy simulations of turbulent flows

    International Nuclear Information System (INIS)

    Kravchenko, A.G.; Moin, P.

    1997-01-01

    Aliased and dealiased numerical simulations of a turbulent channel flow are performed using spectral and finite difference methods. Analytical and numerical studies show that aliasing errors are more destructive for spectral and high-order finite-difference calculations than for low-order finite-difference simulations. Numerical errors have different effects for different forms of the nonlinear terms in the Navier-Stokes equations. For divergence and convective forms, spectral methods are energy-conserving only if dealiasing is performed. For skew-symmetric and rotational forms, both spectral and finite-difference methods are energy-conserving even in the presence of aliasing errors. It is shown that discrepancies between the results of dealiased spectral and standard nondealiased finite-difference methods are due to both aliasing and truncation errors, with the latter being the leading source of differences. The relative importance of aliasing and truncation errors as compared to subgrid scale model terms in large eddy simulations is analyzed and discussed. For low-order finite-difference simulations, truncation errors can exceed the magnitude of the subgrid scale term. 25 refs., 17 figs., 1 tab

  8. Errors and mistakes in breast ultrasound diagnostics

    Directory of Open Access Journals (Sweden)

    Wiesław Jakubowski

    2012-09-01

    Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved breast disease diagnostics. Nevertheless, as in each imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling) and insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those impossible to avoid and those potentially possible to reduce. In this article the most frequently made errors in ultrasound have been presented, including the ones caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper setting of general enhancement, the time gain curve or the range. Errors dependent on the examiner, resulting in the wrong BIRADS-usg classification, are divided into negative and positive errors. The sources of these errors have been listed. The methods of minimizing the number of errors made have been discussed, including the ones related to the appropriate examination technique, taking into account data from the case history and using the greatest possible number of additional options such as harmonic imaging, color and power Doppler and elastography. In the article, examples of errors resulting from the technical conditions of the method have been presented, along with those dependent on the examiner which are related to the great diversity and variation of ultrasound images of pathological breast lesions.

  9. Non-binary unitary error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.

    1996-06-01

    Error operator bases for systems of any dimension are defined and natural generalizations of the bit-flip/sign-change error basis for qubits are given. These bases allow generalizing the construction of quantum codes based on eigenspaces of Abelian groups. As a consequence, quantum codes can be constructed from linear codes over Z_n for any n. The generalization of the punctured code construction leads to many codes which permit transversal (i.e. fault tolerant) implementations of certain operations compatible with the error basis.
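
    As a concrete illustration of the "natural generalizations" mentioned above, the standard shift/clock (generalized Pauli) construction yields a unitary error basis in any dimension n. The minimal numpy sketch below builds it and checks the defining trace-orthogonality property; the construction is standard, though the paper's full generality goes beyond it:

```python
import numpy as np

def shift_clock_basis(n):
    """Generalized bit-flip (shift) and sign-change (clock) matrices for a
    single n-dimensional system, and the n**2 products X^a Z^b they generate."""
    omega = np.exp(2j * np.pi / n)
    X = np.roll(np.eye(n), 1, axis=0)      # X|j> = |j+1 mod n>
    Z = np.diag(omega ** np.arange(n))     # Z|j> = omega^j |j>
    return [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
            for a in range(n) for b in range(n)]

# The n**2 operators are unitary and pairwise trace-orthogonal,
# which is the defining property of a unitary error basis.
basis = shift_clock_basis(3)
gram = np.array([[np.trace(E.conj().T @ F) for F in basis] for E in basis])
assert np.allclose(gram, 3 * np.eye(9))
print("9 trace-orthogonal unitaries for a qutrit")
```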

  10. Repeated speech errors: evidence for learning.

    Science.gov (United States)

    Humphreys, Karin R; Menzies, Heather; Lake, Johanna K

    2010-11-01

    Three experiments elicited phonological speech errors using the SLIP procedure to investigate whether there is a tendency for speech errors on specific words to reoccur, and whether this effect can be attributed to implicit learning of an incorrect mapping from lemma to phonology for that word. In Experiment 1, when speakers made a phonological speech error in the study phase of the experiment (e.g. saying "beg pet" in place of "peg bet") they were over four times as likely to make an error on that same item several minutes later at test. A pseudo-error condition demonstrated that the effect is not simply due to a propensity for speakers to repeat phonological forms, regardless of whether or not they have been made in error. That is, saying "beg pet" correctly at study did not induce speakers to say "beg pet" in error instead of "peg bet" at test. Instead, the effect appeared to be due to learning of the error pathway. Experiment 2 replicated this finding, but also showed that after 48 h, errors made at study were no longer more likely to reoccur. As well as providing constraints on the longevity of the effect, this provides strong evidence that the error reoccurrences observed are not due to item-specific difficulty that leads individual speakers to make habitual mistakes on certain items. Experiment 3 showed that the diminishment of the effect 48 h later is not due to specific extra practice at the task. We discuss how these results fit in with a larger view of language as a dynamic system that is constantly adapting in response to experience. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. Managerial accounting for transaction costs

    OpenAIRE

    Лабынцев, Николай Тихонович

    2015-01-01

    The essence and significance of transaction accounting and its basic concepts – transaction and transaction costs – have been determined. The main types of transaction costs and the elements of accounting for transaction expenses have been considered. Source document forms for the purpose of accounting for transaction costs have been worked out.

  12. Medical Error Types and Causes Made by Nurses in Turkey

    Directory of Open Access Journals (Sweden)

    Dilek Kucuk Alemdar

    2013-06-01

    Full Text Available AIM: This study was carried out as a descriptive study in order to determine the types, causes and prevalence of medical errors made by nurses in Turkey. METHOD: Seventy-eight (78) nurses working in a hospital randomly selected from the five hospitals in Giresun city centre were enrolled in the study. The data were collected by the researchers using the 'Information Form for Nurses' and the 'Medical Error Form'. The Medical Error Form consists of 2 parts and 40 items covering types and causes of medical errors. Nurses' socio-demographic variables, medical error types and causes were evaluated using percentage distributions and means. RESULTS: The mean age of the nurses was 25.5 years, with a standard deviation of 6.03 years. 50% of the nurses had graduated from a health professional high school. 53.8% were single, 63.1% had worked for 1-5 years, 71.8% worked both day and night shifts, and 42.3% worked in medical clinics. The most common types of medical error were hospital infections (15.4%), diagnostic errors (12.8%), and needle or cutting tool injuries and problems related to drugs with side effects (10.3%). In the study, 38.5% of the nurses reported tiredness as a highly likely cause of medical error, 36.4% increased workload and 34.6% long working hours. CONCLUSION: As a result of the present study, nurses mentioned hospital infections, diagnostic errors, and needle or cutting tool injuries as the most common medical errors, and fatigue, heavy workload and long working hours as the most common reasons for medical error. [TAF Prev Med Bull 2013; 12(3): 307-314]

  13. Safety coaches in radiology: decreasing human error and minimizing patient harm

    Energy Technology Data Exchange (ETDEWEB)

    Dickerson, Julie M.; Adams, Janet M. [Cincinnati Children's Hospital Medical Center, Department of Radiology, MLC 5031, Cincinnati, OH (United States); Koch, Bernadette L.; Donnelly, Lane F. [Cincinnati Children's Hospital Medical Center, Department of Radiology, MLC 5031, Cincinnati, OH (United States); Cincinnati Children's Hospital Medical Center, Department of Pediatrics, Cincinnati, OH (United States); Goodfriend, Martha A. [Cincinnati Children's Hospital Medical Center, Department of Quality Improvement, Cincinnati, OH (United States)

    2010-09-15

    Successful programs to improve patient safety require a component aimed at improving safety culture and environment, resulting in a reduced number of human errors that could lead to patient harm. Safety coaching provides peer accountability. It involves observing for safety behaviors and use of error prevention techniques and provides immediate feedback. For more than a decade, behavior-based safety coaching has been a successful strategy for reducing error within the context of occupational safety in industry. We describe the use of safety coaches in radiology. Safety coaches are an important component of our comprehensive patient safety program. (orig.)

  14. Safety coaches in radiology: decreasing human error and minimizing patient harm

    International Nuclear Information System (INIS)

    Dickerson, Julie M.; Adams, Janet M.; Koch, Bernadette L.; Donnelly, Lane F.; Goodfriend, Martha A.

    2010-01-01

    Successful programs to improve patient safety require a component aimed at improving safety culture and environment, resulting in a reduced number of human errors that could lead to patient harm. Safety coaching provides peer accountability. It involves observing for safety behaviors and use of error prevention techniques and provides immediate feedback. For more than a decade, behavior-based safety coaching has been a successful strategy for reducing error within the context of occupational safety in industry. We describe the use of safety coaches in radiology. Safety coaches are an important component of our comprehensive patient safety program. (orig.)

  15. Safety coaches in radiology: decreasing human error and minimizing patient harm.

    Science.gov (United States)

    Dickerson, Julie M; Koch, Bernadette L; Adams, Janet M; Goodfriend, Martha A; Donnelly, Lane F

    2010-09-01

    Successful programs to improve patient safety require a component aimed at improving safety culture and environment, resulting in a reduced number of human errors that could lead to patient harm. Safety coaching provides peer accountability. It involves observing for safety behaviors and use of error prevention techniques and provides immediate feedback. For more than a decade, behavior-based safety coaching has been a successful strategy for reducing error within the context of occupational safety in industry. We describe the use of safety coaches in radiology. Safety coaches are an important component of our comprehensive patient safety program.

  16. Barriers to medication error reporting among hospital nurses.

    Science.gov (United States)

    Rutledge, Dana N; Retrosi, Tina; Ostrowski, Gary

    2018-03-01

    The study purpose was to report medication error reporting barriers among hospital nurses, and to determine the validity and reliability of an existing medication error reporting barriers questionnaire. Hospital medication errors typically occur between the ordering of a medication and its receipt by the patient, with subsequent staff monitoring. To decrease medication errors, the factors surrounding medication errors must be understood; this requires reporting by employees. Under-reporting can compromise patient safety by disabling improvement efforts. This 2017 descriptive study was part of a larger workforce engagement study at a faith-based Magnet®-accredited community hospital in California (United States). Registered nurses (~1,000) were invited to participate in the online survey via email. Reported here are sample demographics (n = 357) and responses to the 20-item medication error reporting barriers questionnaire. Using factor analysis, four factors that accounted for 67.5% of the variance were extracted. These factors (subscales) were labelled Fear, Cultural Barriers, Lack of Knowledge/Feedback and Practical/Utility Barriers; each demonstrated excellent internal consistency. The medication error reporting barriers questionnaire, originally developed in long-term care, demonstrated good validity and excellent reliability among hospital nurses. Substantial proportions of American hospital nurses (11%-48%) considered specific factors as likely reporting barriers. Average scores on most barrier items were categorised "somewhat unlikely." The highest six included two barriers concerning the time-consuming nature of medication error reporting and four related to nurses' fear of repercussions. Hospitals need to determine the presence of perceived barriers among nurses using questionnaires such as the medication error reporting barriers questionnaire and work to encourage better reporting. Barriers to medication error reporting make it less likely that nurses will report medication errors.

  17. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is proposed and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  18. MODELING OF MANUFACTURING ERRORS FOR PIN-GEAR ELEMENTS OF PLANETARY GEARBOX

    Directory of Open Access Journals (Sweden)

    Ivan M. Egorov

    2014-11-01

    Full Text Available The theoretical background for the calculation of k-h-v type cycloid reducers was developed relatively long ago. However, cycloid reducer design has recently attracted heightened attention again, because such devices are used in many complex engineering systems, particularly in mechatronic and robotic systems. The development of advanced technological capabilities for manufacturing such reducers today makes it possible to realize the essential features of these devices: high efficiency, high gear ratio, kinematic accuracy and smooth motion. An adequate mathematical model makes it possible to adjust the kinematic accuracy of the reducer by rational selection of manufacturing tolerances for its parts, and thus to automate the design process for cycloid reducers while taking various factors, including technological ones, into account. A mathematical model and technique have been developed for modeling the kinematic error of the reducer while accounting for multiple factors, including manufacturing errors. The errors are considered in a way convenient for predicting kinematic accuracy early at the manufacturing stage, according to the results of measuring the reducer parts on coordinate measuring machines. During the modeling, the wheel manufacturing errors are described by the eccentricity and radius deviation of the circle of pin tooth centers, and the deviation between the pin tooth axis positions and that circle. The satellite manufacturing errors are described by the satellite eccentricity deviation and the satellite rim eccentricity. Due to collinearity, the pin tooth and pin tooth hole diameter errors and the satellite tooth profile errors for a designated contact point are combined into one deviation. Software implementation of the model makes it possible to estimate the influence of these errors on the satellite rotation angle error and

  19. Specification errors in estimating cost functions: the case of the nuclear-electric-generating industry

    International Nuclear Information System (INIS)

    Jorgensen, E.J.

    1987-01-01

    This study is an application of production-cost duality theory. Duality theory is reviewed for the competitive and the rate-of-return regulated firm. The cost function is developed for the nuclear electric-power-generating industry of the United States using capital, fuel, and labor factor inputs. A comparison is made between the Generalized Box-Cox (GBC) and Fourier Flexible (FF) functional forms. The GBC functional form nests the Generalized Leontief, Generalized Square Root Quadratic and Translog functional forms, and is based upon a second-order Taylor-series expansion. The FF form follows from a Fourier-series expansion in sine and cosine terms using the Sobolev norm as the goodness-of-fit measure. The Sobolev norm takes into account first and second derivatives. The cost function and two factor shares are estimated as a system of equations using maximum-likelihood techniques, with Additive Standard Normal and Logistic Normal error distributions. In summary, none of the special cases of the GBC functional form is accepted. Homotheticity of the underlying production technology can be rejected for both the GBC and FF forms, leaving only the unrestricted versions supported by the data. Residual analysis indicates a slight improvement in skewness and kurtosis for the univariate and multivariate cases when the Logistic Normal distribution is used.

  20. Hospital medication errors in a pharmacovigilance system in Colombia

    Directory of Open Access Journals (Sweden)

    Jorge Enrique Machado-Alba

    2015-11-01

    Full Text Available Objective: this study analyzes the medication errors reported to a pharmacovigilance system by 26 hospitals for patients in the healthcare system of Colombia. Methods: this retrospective study analyzed the medication errors reported to a systematized database between 1 January 2008 and 12 September 2013. The medication is dispensed by the company Audifarma S.A. to hospitals and clinics around Colombia. Data were classified according to the taxonomy of the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP). The data analysis was performed using SPSS 22.0 for Windows, considering p-values < 0.05 significant. Results: there were 9,062 medication errors in 45 hospital pharmacies. Real errors accounted for 51.9% (n = 4,707), of which 12.0% (n = 567) reached the patient (Categories C to I) and caused harm (Categories E to I) to 17 subjects (0.36%). The main process involved in errors that occurred (Categories B to I) was prescription (n = 1,758, 37.3%), followed by dispensation (n = 1,737, 36.9%), transcription (n = 970, 20.6%) and administration (n = 242, 5.1%). The errors in the administration process were 45.2 times more likely to reach the patient (95% CI: 20.2–100.9). Conclusions: medication error reporting systems and prevention strategies should be widespread in hospital settings, prioritizing efforts to address the administration process.

  1. Measurement error and timing of predictor values for multivariable risk prediction models are poorly reported.

    Science.gov (United States)

    Whittle, Rebecca; Peat, George; Belcher, John; Collins, Gary S; Riley, Richard D

    2018-05-18

    Measurement error in predictor variables may threaten the validity of clinical prediction models. We sought to evaluate the possible extent of the problem. A secondary objective was to examine whether predictors are measured at the intended moment of model use. A systematic search of Medline was used to identify a sample of articles reporting the development of a clinical prediction model published in 2015. After screening according to predefined inclusion criteria, information on predictors, strategies to control for measurement error, and the intended moment of model use was extracted. Susceptibility to measurement error for each predictor was classified as low or high risk. Thirty-three studies were reviewed, including 151 different predictors in the final prediction models. Fifty-one (33.7%) predictors were categorised as being at high risk of error; however, this was not accounted for in the model development. Only 8 (24.2%) studies explicitly stated the intended moment of model use and when the predictors were measured. Reporting of measurement error and the intended moment of model use is poor in prediction model studies. There is a need to identify circumstances where ignoring measurement error in prediction models is consequential and whether accounting for the error will improve the predictions. Copyright © 2018. Published by Elsevier Inc.

  2. Reducing errors benefits the field-based learning of a fundamental movement skill in children.

    Science.gov (United States)

    Capio, C M; Poolton, J M; Sit, C H P; Holmstrom, M; Masters, R S W

    2013-03-01

    Proficient fundamental movement skills (FMS) are believed to form the basis of more complex movement patterns in sports. This study examined the development of the FMS of overhand throwing in children through either an error-reduced (ER) or error-strewn (ES) training program. Students (n = 216), aged 8-12 years (M = 9.16, SD = 0.96), practiced overhand throwing in either a program that reduced errors during practice (ER) or one that was ES. ER program reduced errors by incrementally raising the task difficulty, while the ES program had an incremental lowering of task difficulty. Process-oriented assessment of throwing movement form (Test of Gross Motor Development-2) and product-oriented assessment of throwing accuracy (absolute error) were performed. Changes in performance were examined among children in the upper and lower quartiles of the pretest throwing accuracy scores. ER training participants showed greater gains in movement form and accuracy, and performed throwing more effectively with a concurrent secondary cognitive task. Movement form improved among girls, while throwing accuracy improved among children with low ability. Reduced performance errors in FMS training resulted in greater learning than a program that did not restrict errors. Reduced cognitive processing costs (effective dual-task performance) associated with such approach suggest its potential benefits for children with developmental conditions. © 2011 John Wiley & Sons A/S.

  3. A Human Error Analysis Procedure for Identifying Potential Error Modes and Influencing Factors for Test and Maintenance Activities

    International Nuclear Information System (INIS)

    Kim, Jae Whan; Park, Jin Kyun

    2010-01-01

    Periodic or non-periodic test and maintenance (T and M) activities in large, complex systems such as nuclear power plants (NPPs) are essential for sustaining stable and safe operation of the systems. On the other hand, it has also been pointed out that erroneous human actions occurring during T and M activities can incur unplanned reactor trips (RTs) or power derating, make safety-related systems unavailable, or degrade the reliability of components. The contribution of human errors during normal and abnormal activities of NPPs to unplanned RTs is known to be about 20% of the total events. This paper introduces a procedure for predictively analyzing human error potential when maintenance personnel perform T and M tasks based on a work procedure or their work plan. This procedure helps a plant maintenance team prepare for plausible human errors. The procedure focuses on the recurrent error forms (or modes) of execution-based errors, such as wrong object, omission, too little, and wrong action.

  4. Automorphic Forms and Mock Modular Forms in String Theory

    Science.gov (United States)

    Nazaroglu, Caner

    We study a variety of modular invariant objects in relation to string theory. First, we focus on Jacobi forms over lattices of generic rank and Siegel forms that appear in N = 2, D = 4 compactifications of the heterotic string with Wilson lines. Constraints from the low energy spectrum and modularity are employed to deduce the relevant supersymmetric partition functions entirely. This procedure is applied to models that lead to Jacobi forms of index 3, 4, 5 as well as Jacobi forms over the root lattices A2 and A3. These computations are then checked against an explicit orbifold model which can be Higgsed to the models in question. Models with a single Wilson line are then studied in detail, with their relation to the paramodular group Γ_m as T-duality group made explicit. These results on the heterotic string side are then turned into predictions for geometric invariants using Type II-heterotic duality. Secondly, we study theta functions for indefinite-signature lattices of generic signature. Building on results in the literature for signature (n-1,1) and (n-2,2) lattices, we work out the properties of generalized error functions which we call r-tuple error functions. We then use these functions to build such indefinite theta functions and describe their modular completions.

  5. Children's mathematics 4-15 learning from errors and misconceptions

    CERN Document Server

    Ryan, Julie

    2007-01-01

    Develops concepts for teachers to use in organizing their understanding and knowledge of children's mathematics. This book offers guidance for classroom teaching and concludes with theoretical accounts of learning and teaching. It transforms research on diagnostic errors into knowledge for teaching, teacher education and research on teaching.

  6. Investigating Surface Bias Errors in the Weather Research and Forecasting (WRF) Model using a Geographic Information System (GIS)

    Science.gov (United States)

    2015-02-01

    …meteorological parameters, which became our focus. We found that elevation accounts for a significant portion of the variance in the model error of surface temperature and relative humidity predictions.

  7. Estimating Classification Errors under Edit Restrictions in Composite Survey-Register Data Using Multiple Imputation Latent Class Modelling (MILC)

    NARCIS (Netherlands)

    Boeschoten, Laura; Oberski, Daniel; De Waal, Ton

    2017-01-01

    Both registers and surveys can contain classification errors. These errors can be estimated by making use of a composite data set. We propose a new method based on latent class modelling to estimate the number of classification errors across several sources while taking into account impossible

  8. On the Photometric Error Calibration for the Differential Light Curves ...

    Indian Academy of Sciences (India)

    A value of 1.75 was estimated using the DLCs derived for pairs of steady stars … apparently steady comparison stars present on the same CCD frame. … (2) on the 262 steady star–star DLCs after accounting for the photometric error.

  9. The doxastic shear pin: delusions as errors of learning and memory.

    Science.gov (United States)

    Fineberg, S K; Corlett, P R

    2016-01-01

    We reconsider delusions in terms of a "doxastic shear pin", a mechanism that errs so as to prevent the destruction of the machine (brain) and permit continued function (in an attenuated capacity). Delusions may disable flexible (but energetically expensive) inference. With each recall, delusions may be reinforced further and rendered resistant to contradiction. We aim to respond to deficit accounts of delusions - that delusions are only a problem without any benefit - by considering delusion formation and maintenance in terms of predictive coding. We posit that brains conform to a simple computational principle: to minimize prediction error (the mismatch between prior top-down expectation and current bottom-up input) across hierarchies of brain regions and psychological representation. Recent data suggest that delusions may form in the absence of constraining top-down expectations. Then, once formed, they become new priors that motivate other beliefs, perceptions, and actions by providing strong (sometimes overriding) top-down expectation. We argue that delusions form when the shear-pin breaks, permitting continued engagement with an overwhelming world, and ongoing function in the face of paralyzing difficulty. This crucial role should not be ignored when we treat delusions: we need to consider how a person will function in the world without them.

  10. Accounting for standard errors of vision-specific latent trait in regression models.

    Science.gov (United States)

    Wong, Wan Ling; Li, Xiang; Li, Jialiang; Wong, Tien Yin; Cheng, Ching-Yu; Lamoureux, Ecosse L

    2014-07-11

    To demonstrate the effectiveness of a Hierarchical Bayesian (HB) approach in a modeling framework for association effects that accounts for the SEs of vision-specific latent traits assessed using Rasch analysis. A systematic literature review was conducted in four major ophthalmic journals to evaluate Rasch analyses performed on vision-specific instruments. The HB approach was used to synthesize the Rasch model and the multiple linear regression model for the assessment of association effects related to vision-specific latent traits. This novel HB one-stage "joint-analysis" approach allows all model parameters to be estimated simultaneously; in our simulation study it was compared with the frequently used two-stage "separate-analysis" approach (Rasch analysis followed by traditional statistical analyses without adjustment for the SE of the latent trait). Sixty-six reviewed articles performed evaluation and validation of vision-specific instruments using Rasch analysis, and 86.4% (n = 57) performed further statistical analyses on the Rasch-scaled data using traditional statistical methods; none took the SEs of the estimated Rasch-scaled scores into consideration. The two models differed on real data in effect size estimates and in the identification of "independent risk factors." Simulation results showed that our proposed HB one-stage "joint-analysis" approach produces greater accuracy (an average 5-fold decrease in bias) with comparable power and precision in the estimation of associations when compared with the frequently used two-stage "separate-analysis" procedure, despite accounting for greater uncertainty due to the latent trait. Patient-reported data analysed using Rasch techniques do not usually take the SE of the latent trait into account in association analyses. The HB one-stage "joint-analysis" is a better approach, producing accurate effect size estimations and information about the independent association of exposure variables with vision-specific latent traits.

  11. Computer-Assisted Detection of 90% of EFL Student Errors

    Science.gov (United States)

    Harvey-Scholes, Calum

    2018-01-01

    Software can facilitate English as a Foreign Language (EFL) students' self-correction of their free-form writing by detecting errors; this article examines the proportion of errors which software can detect. A corpus of 13,644 words of written English was created, comprising 90 compositions written by Spanish-speaking students at levels A2-B2…

  12. Medication errors: definitions and classification

    Science.gov (United States)

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  13. Application of human error theory in case analysis of wrong procedures.

    Science.gov (United States)

    Duthie, Elizabeth A

    2010-06-01

    The goal of this study was to contribute to the emerging body of literature about the role of human behaviors and cognitive processes in the commission of wrong procedures. Case analysis of 5 wrong procedures in operative and nonoperative settings using James Reason's human error theory was performed. The case analysis showed that cognitive underspecification, cognitive flips, automode processing, and skill-based errors were contributory to wrong procedures. Wrong-site procedures accounted for the preponderance of the cases. Front-line supervisory staff used corrective actions that focused on the performance of the individual without taking into account cognitive factors. System fixes using human cognition concepts have a greater chance of achieving sustainable safety outcomes than those that are based on the traditional approach of counseling, education, and disciplinary action for staff.

  14. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. #2 - An Empirical Assessment of Exposure Measurement Error ...

    Science.gov (United States)

    Background: • Differing degrees of exposure error across pollutants. • Previous focus on quantifying and accounting for exposure error in single-pollutant models. • This work examines exposure errors for multiple pollutants and provides insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA mission to protect human health and the environment. HEASD research supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.

  16. Nonclassical measurements errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

    Discrete choice models, and in particular logit type models, play an important role in understanding and quantifying individual or household behavior in relation to transport demand. An example is the choice of travel mode for a given trip under the budget and time restrictions that the individuals … estimates of the income effect it is of interest to investigate the magnitude of the estimation bias and, if possible, use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data … that contains very detailed information about incomes. This gives a unique opportunity to learn about the magnitude and nature of the measurement error in income reported by the respondents in the Danish NTS compared to income from the administrative register (correct measure). We find that the classical …
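
    To see why the nature of the measurement error matters here, the toy simulation below (all names and parameters hypothetical, not taken from the NTS data) shows the textbook consequence of classical measurement error in a covariate: the estimated effect in a logit model is attenuated toward zero.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Classical measurement error demo: noise in a covariate (e.g., self-reported
# income) attenuates its estimated effect in a logit-type choice model.
rng = np.random.default_rng(1)
n = 50_000
income = rng.normal(0.0, 1.0, n)                 # "register" income (correct)
reported = income + rng.normal(0.0, 1.0, n)      # survey income with error
choice = rng.binomial(1, 1 / (1 + np.exp(-1.0 * income)))  # true coefficient = 1

for name, x in [("register income", income), ("reported income", reported)]:
    beta = LogisticRegression().fit(x[:, None], choice).coef_[0, 0]
    print(f"{name}: estimated coefficient = {beta:.2f}")
# register income recovers ~1.0; reported income is biased toward zero
```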

  17. Improving the usefulness of accounting data in financial analysis

    Directory of Open Access Journals (Sweden)

    A Saville

    2004-04-01

    Full Text Available Accounting practices are flawed.  As a consequence, the accounting data generated by firms are generally open to interpretation, often misleading and sometimes patently false.  Yet, financial analysts place tremendous confidence in accounting data when appraising investments and investment strategies.  The implications of financial analysis based on questionable information are numerous, and range from inexact analysis to acute investment error.  To rectify this situation, this paper identifies a set of simple, yet highly effective corrective measures, which have the capacity to move accounting practice into a realm wherein accounting starts to ‘count what counts’.  The net result would be delivery of accounting data that more accurately reflect firms’ economic realities and, as such, are more useful in the task of financial analysis.

  18. Customizable pre-printed consent forms: a solution in light of the Montgomery ruling.

    Science.gov (United States)

    Owen, Deborah; Aresti, Nick; Mulligan, Alex; Kosuge, Dennis

    2018-02-02

    This article presents an audit-cycle-supported quality improvement project addressing best practice in the consent process for lower limb arthroplasty, taking into account the new standard in surgical consent and the importance of material risks. Fifty consecutive total hip and total knee replacement consent forms over a 3-month period were reviewed for legibility and completeness. Following the introduction of a new, pre-printed but customizable consent form, the review process was repeated. The introduction of a customizable, pre-printed consent form that can be adjusted to reflect the individualized material risks of each patient increased legibility, reduced inappropriate variation arising from human error, and abolished the use of abbreviations and medical jargon. When used as part of an extended consent process, the authors feel that the use of pre-printed but customizable consent forms improves legibility, completeness and consistency, and also provides the ability to highlight those complications that are of particular importance for a given patient, satisfying the newly accepted standard in surgical consent.

  19. Composite Gauss-Legendre Quadrature with Error Control

    Science.gov (United States)

    Prentice, J. S. C.

    2011-01-01

    We describe composite Gauss-Legendre quadrature for determining definite integrals, including a means of controlling the approximation error. We compare the form and performance of the algorithm with standard Newton-Cotes quadrature. (Contains 1 table.)
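
    The abstract does not spell out the error-control mechanism, but a common scheme is to compare composite approximations on successively halved subintervals and use their difference as an error estimate; the sketch below illustrates that approach (an assumption, not necessarily the authors' exact algorithm):

```python
import numpy as np

def composite_gauss_legendre(f, a, b, n_sub, n_nodes=4):
    """Composite Gauss-Legendre rule: apply an n_nodes-point rule on each
    of n_sub equal subintervals of [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n_nodes)   # nodes/weights on [-1, 1]
    edges = np.linspace(a, b, n_sub + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)  # map [-1, 1] -> [lo, hi]
        total += half * np.sum(w * f(mid + half * x))
    return total

def integrate_with_error_control(f, a, b, tol=1e-10, n_nodes=4):
    """Halve the subinterval width until two successive composite
    approximations agree to within tol; their difference estimates the error."""
    n_sub, prev = 1, composite_gauss_legendre(f, a, b, 1, n_nodes)
    while True:
        n_sub *= 2
        curr = composite_gauss_legendre(f, a, b, n_sub, n_nodes)
        if abs(curr - prev) < tol:
            return curr, abs(curr - prev)
        prev = curr

val, err_est = integrate_with_error_control(np.sin, 0.0, np.pi)
print(val, err_est)   # ~2.0 with a tiny error estimate
```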

  20. Data-acquisition system for the NLO error-propagation exercise

    International Nuclear Information System (INIS)

    Lower, C.W.; Gessiness, B.; Bieber, A.M. Jr.; Keisch, B.; Suda, S.C.

    1983-01-01

    An automated data-acquisition system using barcoded labels was developed for an error-propagation exercise to determine the limit of error for inventory differences (LEID) for a material balance area at NLO, Inc.'s Feed Materials Production Center, Fernald, Ohio. Each discrete item of material to be measured (weighed or analyzed) was labeled with a bar-coded identification number. Automated scale terminals, portable bar-code readers, and an automated laboratory data-entry terminal were used to read identification labels and automatically record measurement and transfer information. This system is the prototype for an entire material control and accountability system
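
    For context, the quantity whose limit of error is propagated here is the inventory difference over a material balance period. In standard material accountancy (a conventional formulation, not quoted from the report),

    $$
    \mathrm{ID} = (\mathrm{BI} + A) - (R + \mathrm{EI}),\qquad \mathrm{LEID} \approx 2\,\sigma_{\mathrm{ID}},
    $$

    where BI and EI are the beginning and ending inventories, A is additions, R is removals, and $\sigma_{\mathrm{ID}}$ is obtained by propagating the random and systematic errors of the individual weighings and analyses that the bar-coded system records.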

  1. Overview of error-tolerant cockpit research

    Science.gov (United States)

    Abbott, Kathy

    1990-01-01

    The objectives of research in intelligent cockpit aids and intelligent error-tolerant systems are stated. In intelligent cockpit aids research, the objective is to provide increased aid and support to the flight crew of civil transport aircraft through the use of artificial intelligence techniques combined with traditional automation. In intelligent error-tolerant systems, the objective is to develop and evaluate cockpit systems that provide flight crews with safe and effective ways and means to manage aircraft systems, plan and replan flights, and respond to contingencies. A subsystems fault management functional diagram is given. All information is in viewgraph form.

  2. Optimized universal color palette design for error diffusion

    Science.gov (United States)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

    Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
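
    The sketch below shows the halftoning half of the scheme: Floyd-Steinberg error diffusion onto a fixed palette. It is a simplified stand-in for the paper's method, with a brute-force nearest-color search in RGB in place of the SSQ lookup tables and the opponent color space (those substitutions are assumptions made for brevity):

```python
import numpy as np

def error_diffuse(img, palette):
    """Floyd-Steinberg error diffusion of an RGB float image (H x W x 3,
    values in [0, 1]) onto a fixed palette (K x 3)."""
    out = np.zeros(img.shape[:2], dtype=int)
    work = img.astype(float).copy()
    h, w, _ = work.shape
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            idx = np.argmin(np.sum((palette - old) ** 2, axis=1))  # nearest color
            out[y, x] = idx
            err = old - palette[idx]
            # distribute the quantization error to unprocessed neighbors
            if x + 1 < w:               work[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     work[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               work[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: work[y + 1, x + 1] += err * 1 / 16
    return out  # indices into the palette

# toy example: a fixed 6-levels-per-channel "universal" palette (216 colors)
levels = np.linspace(0.0, 1.0, 6)
palette = np.array([[r, g, b] for r in levels for g in levels for b in levels])
indexed = error_diffuse(np.random.rand(32, 32, 3), palette)
```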

  3. Systematic errors in VLF direction-finding of whistler ducts

    International Nuclear Information System (INIS)

    Strangeways, H.J.; Rycroft, M.J.

    1980-01-01

    In the previous paper it was shown that the systematic error in the azimuthal bearing due to multipath propagation and incident wave polarisation (when this also constitutes an error) was given by only three different forms for all VLF direction-finders currently used to investigate the position of whistler ducts. In this paper the magnitude of this error is investigated for different ionospheric and ground parameters for these three different systematic error types. By incorporating an ionosphere for which the refractive index is given by the full Appleton-Hartree formula, the variation of the systematic error with ionospheric electron density and latitude and direction of propagation is investigated in addition to the variation with wave frequency, ground conductivity and dielectric constant and distance of propagation. The systematic bearing error is also investigated for the three methods when the azimuthal bearing is averaged over a 2 kHz bandwidth. This is found to lead to a significantly smaller bearing error which, for the crossed-loops goniometer, approximates the bearing error calculated when phase-dependent terms in the receiver response are ignored. (author)

  4. Mobility-Assisted on-Demand Routing Algorithm for MANETs in the Presence of Location Errors

    Directory of Open Access Journals (Sweden)

    Trung Kien Vu

    2014-01-01

    Full Text Available We propose a mobility-assisted on-demand routing algorithm for mobile ad hoc networks in the presence of location errors. Location awareness enables mobile nodes to predict their mobility and enhances routing performance by estimating link duration and selecting reliable routes. However, measured locations intrinsically include errors in measurement. Such errors degrade mobility prediction and have been ignored in previous work. To mitigate the impact of location errors on routing, we propose an on-demand routing algorithm taking into account location errors. To that end, we adopt the Kalman filter to estimate accurate locations and consider route confidence in discovering routes. Via simulations, we compare our algorithm and previous algorithms in various environments. Our proposed mobility prediction is robust to the location errors.
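
    The abstract states that the Kalman filter is adopted to estimate accurate locations; a minimal 1-D constant-velocity sketch of that idea follows (noise parameters and units are hypothetical):

```python
import numpy as np

def kalman_1d(positions, q=0.1, r=4.0):
    """Minimal 1-D constant-velocity Kalman filter: smooths noisy position
    fixes (measurement variance r) into location estimates a node could feed
    into mobility prediction. q is the process-noise intensity."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])       # state transition (dt = 1)
    H = np.array([[1.0, 0.0]])                   # we observe position only
    Q = q * np.array([[0.25, 0.5], [0.5, 1.0]])  # process noise
    R = np.array([[r]])                          # measurement noise
    x = np.array([[positions[0]], [0.0]])        # initial state [pos, vel]
    P = np.eye(2) * 10.0
    estimates = []
    for z in positions:
        x, P = F @ x, F @ P @ F.T + Q                    # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)            # update
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0, 0])
    return estimates

true = np.arange(30) * 1.5                        # node moving at 1.5 m/s
noisy = true + np.random.default_rng(2).normal(0, 2.0, 30)
smoothed = kalman_1d(noisy)                       # feeds link-duration estimates
```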

  5. Errors in determination of irregularity factor for distributed parameters in a reactor core

    International Nuclear Information System (INIS)

    Vlasov, V.A.; Zajtsev, M.P.; Il'ina, L.I.; Postnikov, V.V.

    1988-01-01

    Two types errors (measurement error and error of regulation of reactor core distributed parameters), offen met during high-power density reactor operation, are analyzed. Consideration is given to errors in determination of irregularity factor for radial power distribution for a hot channel under conditions of its minimization and for the conditions when the regulation of relative power distribution is absent. The first regime is investigated by the method of statistic experiment using the program of neutron-physical calculation optimization taking as an example a large channel water cooled graphite moderated reactor. It is concluded that it is necessary, to take into account the complex interaction of measurement error with the error of parameter profiling over the core both for conditions of continuous manual or automatic parameter regulation (optimization) and for the conditions without regulation namely at a priore equalized distribution. When evaluating the error of distributed parameter control

  6. Positioning errors assessed with kV cone-beam CT for image-guided prostate radiotherapy

    International Nuclear Information System (INIS)

    Li Jiongyan; Guo Xiaomao; Yao Weiqiang; Wang Yanyang; Ma Jinli; Chen Jiayi; Zhang Zhen; Feng Yan

    2010-01-01

    Objective: To assess set-up errors measured with kilovoltage cone-beam CT (KV-CBCT), and the impact of online corrections on the margins required to account for set-up variability during IMRT for patients with prostate cancer. Methods: Seven patients with prostate cancer undergoing IMRT were enrolled in the study. The KV-CBCT scans were acquired at least twice weekly. After initial set-up using the skin marks, a CBCT scan was acquired and registered with the planning CT to determine the set-up errors using auto grey-scale registration software. Corrections were made by moving the table if the set-up errors were considered clinically significant (i.e., > 2 mm). A second CBCT scan was acquired immediately after the corrections to evaluate the residual error. PTV margins were derived to account for the measured set-up errors and residual errors determined for this group of patients. Results: 197 KV-CBCT images in total were acquired. The random and systematic positioning errors and calculated PTV margins without correction, in mm, were: a) lateral 3.1, 2.1, 9.3; b) longitudinal 1.5, 1.8, 5.1; c) vertical 4.2, 3.7, 13.0. The random and systematic positioning errors and calculated PTV margins with correction, in mm, were: a) lateral 1.1, 0.9, 3.4; b) longitudinal 0.7, 1.1, 2.5; c) vertical 1.1, 1.3, 3.7. Conclusions: With the guidance of online KV-CBCT, set-up errors could be reduced significantly for patients with prostate cancer receiving IMRT. The margin required after online CBCT correction for the patients enrolled in the study would be approximately 3-4 mm. (authors)
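
    The abstract does not name its margin recipe, but the reported margins are reproduced by the widely used van Herk formula M = 2.5Σ + 0.7σ if the first value of each pair is read as the systematic component Σ (a reading inferred from the numbers, not stated in the abstract):

```python
def van_herk_margin(sigma_systematic, sigma_random):
    """PTV margin recipe M = 2.5*Sigma + 0.7*sigma (van Herk et al.).
    Assumed, not stated, by the abstract; the first value of each reported
    pair must be taken as the systematic component for the numbers to match."""
    return 2.5 * sigma_systematic + 0.7 * sigma_random

# values (mm) from the abstract, after online CBCT correction
for axis, (sys_err, rnd_err) in {"lateral": (1.1, 0.9),
                                 "longitudinal": (0.7, 1.1),
                                 "vertical": (1.1, 1.3)}.items():
    print(f"{axis}: {van_herk_margin(sys_err, rnd_err):.1f} mm")
# lateral 3.4, longitudinal 2.5, vertical 3.7 -- matching the reported margins
```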

  7. Learning from sensory and reward prediction errors during motor adaptation.

    Science.gov (United States)

    Izawa, Jun; Shadmehr, Reza

    2011-03-01

    Voluntary motor commands produce two kinds of consequences. Initially, a sensory consequence is observed in terms of activity in our primary sensory organs (e.g., vision, proprioception). Subsequently, the brain evaluates the sensory feedback and produces a subjective measure of utility or usefulness of the motor commands (e.g., reward). As a result, comparisons between predicted and observed consequences of motor commands produce two forms of prediction error. How do these errors contribute to changes in motor commands? Here, we considered a reach adaptation protocol and found that when high quality sensory feedback was available, adaptation of motor commands was driven almost exclusively by sensory prediction errors. This form of learning had a distinct signature: as motor commands adapted, the subjects altered their predictions regarding sensory consequences of motor commands, and generalized this learning broadly to neighboring motor commands. In contrast, as the quality of the sensory feedback degraded, adaptation of motor commands became more dependent on reward prediction errors. Reward prediction errors produced comparable changes in the motor commands, but produced no change in the predicted sensory consequences of motor commands, and generalized only locally. Because we found that there was a within subject correlation between generalization patterns and sensory remapping, it is plausible that during adaptation an individual's relative reliance on sensory vs. reward prediction errors could be inferred. We suggest that while motor commands change because of sensory and reward prediction errors, only sensory prediction errors produce a change in the neural system that predicts sensory consequences of motor commands.
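
    One simple way to formalize the two error signals contrasted above (a schematic, not the authors' model) is

    $$
    \mathrm{SPE}_t = y_t - \hat{y}_t,\qquad \mathrm{RPE}_t = r_t - \hat{r}_t,\qquad
    u_{t+1} = u_t + \eta_s\,\mathrm{SPE}_t + \eta_r\,\mathrm{RPE}_t\,\xi_t,
    $$

    where $y_t$ and $r_t$ are the observed sensory outcome and reward, hats denote predictions, and $\xi_t$ is the exploratory component of the motor command that reward can credit. The findings above then correspond to $\eta_s$ dominating when sensory feedback is of high quality and $\eta_r$ dominating as it degrades, with only the sensory pathway also updating $\hat{y}$.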

  8. 48 CFR 1699.70 - Cost accounting standards.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Cost accounting standards... EMPLOYEES HEALTH BENEFITS ACQUISITION REGULATION CLAUSES AND FORMS COST ACCOUNTING STANDARDS Cost Accounting Standards 1699.70 Cost accounting standards. With respect to all experience-rated contracts currently...

  9. Round-off error in long-term orbital integrations using multistep methods

    Science.gov (United States)

    Quinlan, Gerald D.

    1994-01-01

    Techniques for reducing round-off error are compared by testing them on high-order Stormer and symmetric multistep methods. The best technique for most applications is to write the equation in summed, function-evaluation form and to store the coefficients as rational numbers. A larger error reduction can be achieved by writing the equation in backward-difference form and performing some of the additions in extended precision, but this entails a larger central processing unit (CPU) cost.
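
    The summed, function-evaluation form works because the solution is accumulated as a running sum of small increments rather than recomputed from large terms. A related generic device for keeping long floating-point accumulations accurate is compensated (Kahan) summation, sketched below as an illustration of the round-off issue rather than as the paper's own technique:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: carries the low-order bits lost by
    each addition in a correction term, mimicking extended precision."""
    total, carry = 0.0, 0.0
    for v in values:
        y = v - carry            # apply the stored correction
        t = total + y            # low-order digits of y may be lost here...
        carry = (t - total) - y  # ...recover them algebraically
        total = t
    return total

# accumulating many tiny increments, as a long orbital integration does
vals = [1e-10] * 10**6
naive, compensated = sum(vals), kahan_sum(vals)
print(abs(naive - 1e-4), abs(compensated - 1e-4))
# the compensated sum is typically orders of magnitude closer to the true value
```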

  10. Accounting control and organizational behaviour

    CERN Document Server

    Otley, David

    1987-01-01

    This book goes beyond the material usually included in traditional management accounting texts and provides both managers and management accountants with a simple guide to the major issues involved in developing and using accounting systems for management control. Attention is focused particularly on budgetary control systems because these form the basis for management control in most organisation of any size.

  11. Bayesian ensemble approach to error estimation of interatomic potentials

    DEFF Research Database (Denmark)

    Frederiksen, Søren Lund; Jacobsen, Karsten Wedel; Brown, K.S.

    2004-01-01

    Using a Bayesian approach, a general method is developed to assess error bars on predictions made by models fitted to data. The error bars are estimated from fluctuations in ensembles of models sampling the model-parameter space with a probability density set by the minimum cost. The method … is applied to the development of interatomic potentials for molybdenum using various potential forms and databases based on atomic forces. The calculated error bars on elastic constants, gamma-surface energies, structural energies, and dislocation properties are shown to provide realistic estimates …

  12. Selectively Fortifying Reconfigurable Computing Device to Achieve Higher Error Resilience

    Directory of Open Access Journals (Sweden)

    Mingjie Lin

    2012-01-01

    Full Text Available With the advent of 10 nm CMOS devices and “exotic” nanodevices, the location and occurrence time of hardware defects and design faults become increasingly unpredictable, posing severe challenges to existing techniques for error-resilient computing, because most of them statically assign hardware redundancy and do not account for the error tolerance inherent in many mission-critical applications. This work proposes a novel approach to selectively fortifying a target reconfigurable computing device in order to achieve hardware-efficient error resilience for a specific target application. We intend to demonstrate that such error resilience can be significantly improved with effective hardware support. The major contributions of this work include (1) the development of a complete methodology to perform sensitivity and criticality analysis of hardware redundancy, (2) a novel problem formulation and an efficient heuristic methodology to selectively allocate hardware redundancy among a target design’s key components in order to maximize its overall error resilience, and (3) an academic prototype of an SFC computing device that illustrates a 4-times improvement of error resilience for an H.264 encoder implemented with an FPGA device.

  13. Two-component model application for error calculus in the environmental monitoring data analysis

    International Nuclear Information System (INIS)

    Carvalho, Maria Angelica G.; Hiromoto, Goro

    2002-01-01

    Analysis and interpretation of the results of an environmental monitoring program is often based on the evaluation of the mean value of a particular set of data, which is strongly affected by the analytical errors associated with each measurement. A model proposed by Rocke and Lorenzato assumes two error components, one additive and one multiplicative, to deal with lower and higher concentration values in a single model. In this communication, an application of this method for the re-evaluation of the errors reported in a large set of results of total alpha measurements in an environmental sample is presented. The results show that the mean values calculated taking the new errors into account are higher than those obtained with the original errors, indicating that the analytical errors reported before were underestimated in the region of lower concentrations. (author)
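
    The two-component model referred to here is usually written (following Rocke and Lorenzato's published form; the notation is assumed) as

    $$
    y = \alpha + \beta\,\mu\,e^{\eta} + \epsilon,\qquad \eta \sim N(0,\sigma_\eta^{2}),\quad \epsilon \sim N(0,\sigma_\epsilon^{2}),
    $$

    where $\mu$ is the true concentration. The additive component $\epsilon$ dominates near zero concentration and the multiplicative factor $e^{\eta}$ dominates at high concentrations, so the measurement standard deviation is roughly constant at low levels and roughly proportional to concentration at high levels, which is why re-evaluating errors under this model can enlarge them at the low end.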

  14. PARADIGM OF ACCOUNTING CHANGE

    Directory of Open Access Journals (Sweden)

    Constanta Iacob

    2016-12-01

    Full Text Available Words and phrases swap with one another, and the apparently stable meaning of a word sometimes changes over time. This explains why the generic term accounting is used when referring to the qualities attributed to accounting, but also when it comes to organizing the financial accounting function within the entity, and when referring concretely to keeping a double-entry record with its specific means, methods and tools, respectively seen as accounting technique. Speaking about the qualities of accounting, but also about the organizational form it takes, we note that the word accounting carries a manifold meaning, which is why the purpose of this article is to demonstrate that a paradigm shift aims at a new set of rules, and if the rules change, then the very purpose of accounting can change.

  15. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor that increases with the code distance.
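
    A minimal sketch of the idea, using synthetic syndrome statistics and scikit-learn's Gaussian process regressor as a stand-in for the paper's algorithm (the data model and kernel choices here are assumptions):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical data: in each time window t (hours), the fraction of
# error-correction cycles reporting a syndrome gives a noisy point
# estimate of the physical error rate.
rng = np.random.default_rng(0)
t = np.linspace(0, 24, 50)[:, None]
true_rate = 0.01 + 0.004 * np.sin(2 * np.pi * t.ravel() / 24)   # slow drift
observed = rng.binomial(n=2000, p=true_rate) / 2000             # per-window estimates

# GP regression smooths past estimates and extrapolates the near future,
# without interrupting the error-correction cycle itself.
kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=1e-6)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, observed)
mean, std = gp.predict(np.array([[25.0], [26.0]]), return_std=True)
print(mean, std)   # predicted error rate one and two hours ahead, with uncertainty
```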

  16. Brain mechanisms of self-control: A neurocognitive investigation of reward-based action control and error awareness

    NARCIS (Netherlands)

    Harsay, H.A.

    2014-01-01

    Motivation and the ability to detect errors are critical for the interaction with our environment. They provide us with the opportunity to engage in purposive, persistent and corrective behavior, and to take the consequences of our actions into account. Diminished motivation and error awareness have

  17. Accounting Information Systems Implementation and Management Accounting Change

    Directory of Open Access Journals (Sweden)

    Bredmar Krister

    2014-09-01

    Full Text Available Background: There is an on-going discussion within management accounting research regarding how to work with performance measures. In the process of developing new forms of performance measurement, the task of choosing business metrics is central. This process is closely connected to the implementation of IT solutions. Objectives: In order to understand how new performance measurement solutions are implemented and used, it becomes crucial to understand how measures are selected and how new accounting information systems (AIS are developed and implemented. Methods/approach: The paper builds on the case of an on-going AIS project at a large, public university in Sweden. The empirical material was collected using a semi-action research approach over a two-year period. The majority of the material comes from written documentation and minutes. Results: Even though the implementation of a new AIS triggers a change in the management accounting practice, this study shows that this happens from more than one perspective. Conclusions: As the project develops, new priorities and objectives evolve, which in the end shape what management accounting change becomes.

  18. Sensitivity of Multicarrier Two-Dimensional Spreading Schemes to Synchronization Errors

    Directory of Open Access Journals (Sweden)

    Geneviève Jourdain

    2008-06-01

    Full Text Available This paper presents the impact of synchronization errors on the performance of a downlink multicarrier two-dimensional spreading OFDM-CDMA system. This impact is measured by the degradation of the signal-to-interference-and-noise ratio (SINR) obtained after despreading and equalization. The contribution of this paper is twofold. First, we use some properties of random matrix and free probability theories to derive a new expression of the SINR. This expression is then independent of the actual values of the spreading codes while still accounting for the orthogonality between codes. This model is validated by means of Monte Carlo simulations. Second, the model is exploited to derive the SINR degradation of OFDM-CDMA systems due to synchronization errors which include a timing error, a carrier frequency offset, and a sampling frequency offset. It is also exploited to compare the sensitivities of MC-CDMA and MC-DS-CDMA systems to these errors in a frequency selective channel. This work is carried out for zero-forcing and minimum mean square error equalizers.
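
    The Monte Carlo flavour of the validation can be reproduced in miniature: the sketch below despreads one Walsh-Hadamard code of a downlink MC-CDMA symbol on an otherwise ideal channel and measures the SINR loss caused by a carrier frequency offset. All parameter values are assumptions for illustration; the paper's random-matrix SINR expression is not reproduced here.

      import numpy as np
      from scipy.linalg import hadamard

      rng = np.random.default_rng(1)
      N, K, eps = 64, 16, 0.05       # subcarriers, active codes, CFO as a
                                     # fraction of subcarrier spacing (assumed)
      C = hadamard(N) / np.sqrt(N)   # orthonormal Walsh-Hadamard codes
      noise_sd = np.sqrt(0.005)      # per-component noise sd (20 dB baseline)

      def sinr_db(eps, trials=2000):
          s_hat = np.empty(trials, dtype=complex)
          s_ref = np.empty(trials, dtype=complex)
          for t in range(trials):
              s = rng.integers(0, 2, K) * 2.0 - 1.0     # BPSK symbols
              X = C[:K].T @ s                           # spread across subcarriers
              x = np.fft.ifft(X) * np.sqrt(N)           # OFDM modulation
              x = x * np.exp(2j * np.pi * eps * np.arange(N) / N)  # apply CFO
              Y = np.fft.fft(x) / np.sqrt(N)            # OFDM demodulation
              Y = Y + noise_sd * (rng.standard_normal(N)
                                  + 1j * rng.standard_normal(N))
              s_hat[t] = C[0] @ Y                       # despread code 0
              s_ref[t] = s[0]
          g = np.mean(s_hat * np.conj(s_ref))           # effective signal gain
          return 10 * np.log10(np.abs(g)**2
                               / np.mean(np.abs(s_hat - g * s_ref)**2))

      print(f"SINR without CFO: {sinr_db(0.0):.1f} dB, "
            f"with CFO: {sinr_db(eps):.1f} dB")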

  19. Errors in Aviation Decision Making: Bad Decisions or Bad Luck?

    Science.gov (United States)

    Orasanu, Judith; Martin, Lynne; Davison, Jeannie; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    Despite efforts to design systems and procedures to support 'correct' and safe operations in aviation, errors in human judgment still occur and contribute to accidents. In this paper we examine how an NDM (naturalistic decision making) approach might help us to understand the role of decision processes in negative outcomes. Our strategy was to examine a collection of identified decision errors through the lens of an aviation decision process model and to search for common patterns. The second, and more difficult, task was to determine what might account for those patterns. The corpus we analyzed consisted of tactical decision errors identified by the NTSB (National Transportation Safety Board) from a set of accidents in which crew behavior contributed to the accident. A common pattern emerged: about three quarters of the errors represented plan-continuation errors, that is, a decision to continue with the original plan despite cues that suggested changing the course of action. Features in the context that might contribute to these errors were identified: (a) ambiguous dynamic conditions and (b) organizational and socially-induced goal conflicts. We hypothesize that 'errors' are mediated by underestimation of risk and failure to analyze the potential consequences of continuing with the initial plan. Stressors may further contribute to these effects. Suggestions for improving performance in these error-inducing contexts are discussed.

  20. 76 FR 50117 - Commission Rules and Forms Related to the FASB's Accounting Standards Codification

    Science.gov (United States)

    2011-08-12

    .... generally accepted accounting principles (``U.S. GAAP''). Statement No. 168 became effective for financial... Codification'' is a registered trademark of the Financial Accounting Foundation. DATES: Effective Date: August... accounting principles established by a standard-setting body that meets specified criteria. On April 25, 2003...

  1. International harmonization of accounting demands a new approach to accounting education

    Directory of Open Access Journals (Sweden)

    Milana Otrusinová

    2013-01-01

    Full Text Available Accounting and financial reporting are valuable sources of information about the financial position and performance of a company. The development of the international capital market has brought the need for international, globally valid and acknowledged accounting norms. Currently, IFRS are used, in accordance with the European Commission directive, for the preparation of financial statements of companies that issue securities; the other entities continue using national generally accepted accounting principles (GAAP). As the companies which apply national GAAP predominate, they form the basis of the education of future accounting professionals. However, this situation has to change because of the potential expansion of harmonization to a further group of companies (small and medium-sized entities) and also because of the increasing international cooperation among companies. Accountants should gain knowledge about all concepts of accounting; specialization narrowed down to national GAAP is limiting, as has been confirmed by recruitment agencies. The aim of the paper is to analyse the needs of accounting education in the current situation in compliance with the development trends of this field.

  2. Documentation of Accounting Records in Light of Legislative Innovations

    Directory of Open Access Journals (Sweden)

    K. V. BEZVERKHIY

    2017-05-01

    Full Text Available Legislative reforms in accounting aim to simplify accounting records and the compilation of financial reports by business entities, thus improving the position of Ukraine in the global Doing Business ranking. This simplification is embodied in the changes to the Regulation on Documentation of Accounting Records, entered into force by a Resolution of the Ukrainian Ministry of Finance. The objective of the study is to analyze the legislative innovations involved. A review of the changes in the documentation of accounting records is made. A comparative analysis of the changes in the Regulation on Documentation of Accounting Records is made by section: (1) General; (2) Primary documents; (3) Accounting records; (4) Correction of errors in primary documents and accounting records; (5) Organization of document circulation; (6) Storage of documents. Methods of analysis and synthesis are used to separate the differences between the editions of the Regulation on Documentation of Accounting Records. The result of the study has theoretical and practical value for the domestic business enterprise sector.

  3. Error propagation analysis for a sensor system

    International Nuclear Information System (INIS)

    Yeater, M.L.; Hockenbury, R.W.; Hawkins, J.; Wilkinson, J.

    1976-01-01

    As part of a program to develop reliability methods for operational use with reactor sensors and protective systems, error propagation analyses are being made for each model. An example is a sensor system computer simulation model, in which the sensor system signature is convoluted with a reactor signature to show the effect of each in revealing or obscuring information contained in the other. The error propagation analysis models the system and signature uncertainties and sensitivities, whereas the simulation models the signatures and, by extensive repetitions, reveals the effect of errors in various reactor input or sensor response data. In the approach for the example presented, the errors accumulated by the signature (a set of "noise" frequencies) are successively calculated as it is propagated stepwise through a system comprised of sensor and signal processing components. Additional modeling steps include a Fourier transform calculation to produce the usual power spectral density representation of the product signature, and some form of pattern recognition algorithm.
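
    A toy version of one propagation step plus the PSD computation mentioned above might look as follows; the first-order low-pass stage and its time constant are assumptions standing in for a real sensor or signal-processing component.

      import numpy as np

      rng = np.random.default_rng(0)
      fs, n = 1000.0, 8192                 # sampling rate (Hz), sample count
      noise = rng.standard_normal(n)       # sensor "noise" signature

      # Propagate through a first-order low-pass stage (one processing step);
      # the time constant is an assumed component parameter.
      tau, dt = 0.01, 1.0 / fs
      alpha = dt / (tau + dt)
      y = np.zeros(n)
      for i in range(1, n):
          y[i] = y[i - 1] + alpha * (noise[i] - y[i - 1])

      # Power spectral density of the propagated signature (periodogram).
      freqs = np.fft.rfftfreq(n, dt)
      psd = np.abs(np.fft.rfft(y))**2 / (fs * n)
      print(freqs[np.argmax(psd)], psd.max())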

  4. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  5. Imagine-Self Perspective-Taking and Rational Self-Interested Behavior in a Simple Experimental Normal-Form Game

    Directory of Open Access Journals (Sweden)

    Adam Karbowski

    2017-09-01

    Full Text Available The purpose of this study is to explore the link between imagine-self perspective-taking and rational self-interested behavior in experimental normal-form games. Drawing on the concept of sympathy developed by Adam Smith and further literature on perspective-taking in games, we hypothesize that introduction of imagine-self perspective-taking by decision-makers promotes rational self-interested behavior in a simple experimental normal-form game. In our study, we examined behavior of 404 undergraduate students in the two-person game, in which the participant can suffer a monetary loss only if she plays her Nash equilibrium strategy and the opponent plays her dominated strategy. Results suggest that the threat of suffering monetary losses effectively discourages the participants from choosing Nash equilibrium strategy. In general, players may take into account that opponents choose dominated strategies due to specific not self-interested motivations or errors. However, adopting imagine-self perspective by the participants leads to more Nash equilibrium choices, perhaps by alleviating participants’ attributions of susceptibility to errors or non-self-interested motivation to the opponents.

  6. Imagine-Self Perspective-Taking and Rational Self-Interested Behavior in a Simple Experimental Normal-Form Game.

    Science.gov (United States)

    Karbowski, Adam; Ramsza, Michał

    2017-01-01

    The purpose of this study is to explore the link between imagine-self perspective-taking and rational self-interested behavior in experimental normal-form games. Drawing on the concept of sympathy developed by Adam Smith and further literature on perspective-taking in games, we hypothesize that introduction of imagine-self perspective-taking by decision-makers promotes rational self-interested behavior in a simple experimental normal-form game. In our study, we examined behavior of 404 undergraduate students in the two-person game, in which the participant can suffer a monetary loss only if she plays her Nash equilibrium strategy and the opponent plays her dominated strategy. Results suggest that the threat of suffering monetary losses effectively discourages the participants from choosing Nash equilibrium strategy. In general, players may take into account that opponents choose dominated strategies due to specific not self-interested motivations or errors. However, adopting imagine-self perspective by the participants leads to more Nash equilibrium choices, perhaps by alleviating participants' attributions of susceptibility to errors or non-self-interested motivation to the opponents.
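
    The structure of such a game can be checked mechanically. The sketch below enumerates pure Nash equilibria and strictly dominated strategies for an illustrative 2x2 bimatrix game built to mirror the design described above (the row player loses only at her Nash strategy against the opponent's dominated strategy); the payoffs are assumed, not those used in the study.

      import numpy as np

      # Row player's Nash strategy R1 yields a loss (-5) only when the
      # column player picks her strictly dominated strategy C2.
      A = np.array([[2, -5],     # row player's payoffs
                    [1,  0]])
      B = np.array([[2,  0],     # column player's payoffs
                    [3,  1]])

      def pure_nash(A, B):
          eq = []
          for i in range(A.shape[0]):
              for j in range(A.shape[1]):
                  if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                      eq.append((i, j))
          return eq

      def dominated_cols(B):
          """Column-player strategies strictly dominated by another column."""
          return [j for j in range(B.shape[1])
                  if any((B[:, k] > B[:, j]).all()
                         for k in range(B.shape[1]) if k != j)]

      print("pure Nash equilibria:", pure_nash(A, B))           # [(0, 0)]
      print("dominated column strategies:", dominated_cols(B))  # [1]
      print("row payoff at Nash vs dominated play:", A[0, 1])   # -5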

  7. Optimal state estimation theory applied to safeguards accounting

    International Nuclear Information System (INIS)

    Pike, D.H.; Morrison, G.W.

    1977-01-01

    This paper presents a unified theory for the application of modern state estimation techniques to nuclear material accountability. First, a summary of the current MUF/LEMUF approach is presented. It is shown that when the inventory measurement error is large in comparison to the transfer measurement error, improved estimates of the losses can be achieved using the cumulative summation technique. However, the optimal estimator is shown to be the Kalman filter. An enhancement of the retrospective estimation of losses can be achieved using linear smoothing. State space models are developed for a mixed oxide fuel fabrication facility and examples are presented.
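
    A scalar sketch of the filtering idea: treat the per-period loss as a nearly constant state and update its estimate from each new material balance (MUF) observation. The noise variances and simulated data are assumptions; the paper develops full state-space models rather than this one-dimensional case.

      import numpy as np

      rng = np.random.default_rng(2)
      true_loss = 0.20                   # assumed constant loss per period (kg)
      r = 0.5**2                         # MUF measurement variance (assumed)
      q = 0.001                          # small process noise on the loss state
      muf = true_loss + 0.5 * rng.standard_normal(24)   # simulated MUF series

      x, p = 0.0, 1.0                    # initial loss estimate and variance
      for z in muf:
          p += q                         # predict (loss modeled as near-constant)
          k = p / (p + r)                # Kalman gain
          x += k * (z - x)               # update with the new material balance
          p *= (1 - k)
      print(f"estimated per-period loss: {x:.3f} +/- {np.sqrt(p):.3f} kg")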

  8. Public sector accounting in the education syllabi of leading chartered accountant professional bodies: A comparative study

    Directory of Open Access Journals (Sweden)

    Ahmed Mohammadali-Haji

    2016-05-01

    Full Text Available Public sector accounting has emerged as an area of concern within the sphere of professional accounting education. The International Federation of Accountants (IFAC allows its member bodies to apply discretion in the application of public sector accounting education requirements. This study explored the nature and extent to which public sector accounting features in the education syllabi of the leading chartered accountant professional bodies that form part of the IFAC contingent. By following an explorative approach, the study identifies international trends within the ambit of public sector accounting education and provides guidance for other professional bodies in assessing the nature and extent of their public sector accounting education requirements.

  9. Using snowball sampling method with nurses to understand medication administration errors.

    Science.gov (United States)

    Sheu, Shuh-Jen; Wei, Ien-Lan; Chen, Ching-Huey; Yu, Shu; Tang, Fu-In

    2009-02-01

    We aimed to encourage nurses to release information about drug administration errors to increase understanding of error-related circumstances and to identify high-alert situations. Drug administration errors represent the majority of medication errors, but errors are underreported. Effective ways are lacking to encourage nurses to actively report errors. Snowball sampling was conducted to recruit participants. A semi-structured questionnaire was used to record types of error, hospital and nurse backgrounds, patient consequences, error discovery mechanisms and reporting rates. Eighty-five nurses participated, reporting 328 administration errors (259 actual, 69 near misses). Most errors occurred in medical surgical wards of teaching hospitals, during day shifts, committed by nurses working fewer than two years. Leading errors were wrong drugs and doses, each accounting for about one-third of total errors. Among 259 actual errors, 83.8% resulted in no adverse effects; among remaining 16.2%, 6.6% had mild consequences and 9.6% had serious consequences (severe reaction, coma, death). Actual errors and near misses were discovered mainly through double-check procedures by colleagues and nurses responsible for errors; reporting rates were 62.5% (162/259) vs. 50.7% (35/69) and only 3.5% (9/259) vs. 0% (0/69) were disclosed to patients and families. High-alert situations included administration of 15% KCl, insulin and Pitocin; using intravenous pumps; and implementation of cardiopulmonary resuscitation (CPR). Snowball sampling proved to be an effective way to encourage nurses to release details concerning medication errors. Using empirical data, we identified high-alert situations. Strategies for reducing drug administration errors by nurses are suggested. Survey results suggest that nurses should double check medication administration in known high-alert situations. Nursing management can use snowball sampling to gather error details from nurses in a non

  10. Dimensioning of multiservice links taking account of soft blocking

    DEFF Research Database (Denmark)

    Iversen, Villy Bæk; Stepanov, S.N.; Kostrov, A.V.

    2006-01-01

    of a multiservice link taking into account the possibility of soft blocking. An approximate algorithm for estimation of main performance measures is constructed. The error of estimation is numerically studied for different types of soft blocking. The optimal procedure of dimensioning is suggested....
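
    One common approximate treatment, possibly different in detail from the paper's exact algorithm, generalizes the Kaufman-Roberts occupancy recursion with state-dependent acceptance probabilities to represent soft blocking; the sketch below uses assumed traffic parameters and an assumed acceptance profile.

      import numpy as np

      def occupancy(C, offered, b, accept):
          """Generalized Kaufman-Roberts recursion: accept[k][j] is the
          probability that a class-k call is admitted when j bandwidth
          units are busy (soft blocking)."""
          q = np.zeros(C + 1)
          q[0] = 1.0
          for j in range(1, C + 1):
              s = 0.0
              for k in range(len(offered)):
                  if j >= b[k]:
                      s += offered[k] * b[k] * accept[k][j - b[k]] * q[j - b[k]]
              q[j] = s / j
          return q / q.sum()

      C = 30                      # link capacity in bandwidth units (assumed)
      offered = [8.0, 3.0]        # offered traffic per class, erlang (assumed)
      b = [1, 4]                  # bandwidth units required per class
      accept = [np.ones(C + 1),   # class 1: hard blocking only
                np.clip(1.2 - np.arange(C + 1) / C, 0.0, 1.0)]  # class 2: soft

      p = occupancy(C, offered, b, accept)
      # Time blocking: a class-k arrival in state j is lost if it does not
      # fit, or (with soft blocking) if it is randomly rejected.
      B = [sum(p[j] * (1 - (accept[k][j] if j + b[k] <= C else 0.0))
               for j in range(C + 1)) for k in range(2)]
      print(f"blocking: class 1 = {B[0]:.4f}, class 2 = {B[1]:.4f}")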

  11. Considerations for sampling nuclear materials for SNM accounting measurements

    International Nuclear Information System (INIS)

    Brouns, R.J.; Roberts, F.P.; Upson, U.L.

    1978-01-01

    This report presents principles and guidelines for sampling nuclear materials to measure chemical and isotopic content of the material. Development of sampling plans and procedures that maintain the random and systematic errors of sampling within acceptable limits for SNM accounting purposes are emphasized

  12. 48 CFR 53.301-1439 - Schedule of Accounting Information.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false Schedule of Accounting Information. 53.301-1439 Section 53.301-1439 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION (CONTINUED) CLAUSES AND FORMS FORMS Illustrations of Forms 53.301-1439 Schedule of Accounting...

  13. Signed reward prediction errors drive declarative learning

    NARCIS (Netherlands)

    De Loof, E.; Ergo, K.; Naert, L.; Janssens, C.; Talsma, D.; van Opstal, F.; Verguts, T.

    2018-01-01

    Reward prediction errors (RPEs) are thought to drive learning. This has been established in procedural learning (e.g., classical and operant conditioning). However, empirical evidence on whether RPEs drive declarative learning–a quintessentially human form of learning–remains surprisingly absent. We

  14. Introduction of self-control of the enterprise information system through the accounting documentation process

    Directory of Open Access Journals (Sweden)

    K.О. Volskа

    2017-12-01

    Full Text Available The research is devoted to determining the possibility of implementing self-control of an enterprise information system and to describing the criteria for building an information system that is self-organized and capable of self-analysis. The article considers the concept of self-control and its main criteria, as well as the possibility of implementing self-control in the information system of the enterprise. The study also defines intelligent information systems and how expert knowledge can be used in them. The article establishes the place of self-control (in terms of its organization at the enterprise) in the economic activity of the enterprise and its relation to internal control; as a result, it is suggested to treat self-control of the information system as a unit within the methods of internal control. The paper compares the response to an error in the information system under conventional control (from the control subject to the person) and under self-control, which makes it possible to characterize the latter as a method of preventing errors, that is, real-time control during data entry into the information system of the enterprise. It is proposed to divide the control mechanisms in the information system into informational (protection of the information system from a technical point of view) and special (accounting, legal, technological, etc.). The special control mechanisms of the information system should initially be formulated by experts of the relevant profile in the form of algorithms for preventing possible errors, which allows IT professionals to describe them at the software level and to implement one of the criteria of information system self-control, namely self-examination. The article proposes to implement self-control at the input of the information system, when the data of primary documents are entered.
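
    In the spirit of the self-examination criterion described above, a minimal sketch of an expert-formulated rule set applied at data entry is given below; the field names, accounts and rules are hypothetical illustrations, not taken from the article.

      from datetime import date

      # Hypothetical chart of accounts and validation rules formulated by
      # domain experts, to be enforced at input time, before posting.
      CHART_OF_ACCOUNTS = {"101", "201", "301", "631"}

      def validate_entry(entry):
          """Check a journal entry at input time (self-examination step)."""
          errors = []
          if entry["account_dr"] not in CHART_OF_ACCOUNTS:
              errors.append(f"unknown debit account {entry['account_dr']}")
          if entry["account_cr"] not in CHART_OF_ACCOUNTS:
              errors.append(f"unknown credit account {entry['account_cr']}")
          if entry["amount"] <= 0:
              errors.append("amount must be positive")
          if entry["doc_date"] > date.today():
              errors.append("document dated in the future")
          return errors

      entry = {"account_dr": "631", "account_cr": "999",
               "amount": 1500.00, "doc_date": date(2017, 12, 1)}
      problems = validate_entry(entry)
      print(problems or "entry accepted")  # rejected before it enters the ledger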

  15. Accounting for L2 learners’ errors in word stress placement

    Directory of Open Access Journals (Sweden)

    Clara Herlina Karjo

    2016-01-01

    Full Text Available Stress placement in English words is governed by highly complicated rules. Thus, assigning stress correctly in English words has been a challenging task for L2 learners, especially Indonesian learners, since their L1 does not recognize such a stress system. This study explores the production of English word stress by 30 university students. The method used for this study is an immediate repetition task: participants are instructed to identify the stress placement of 80 English words which are auditorily presented as stimuli and to immediately repeat the words with correct stress placement. The objectives of this study are to find out whether English word stress placement is problematic for L2 learners and to investigate the phonological factors which account for these problems. The research reveals that L2 learners differ in their ability to produce stress, but three-syllable words are more problematic than two-syllable words. Moreover, misplacement of stress is caused by, among others, the influence of vowel length and vowel height.

  16. Learning from Errors at Work: A Replication Study in Elder Care Nursing

    Science.gov (United States)

    Leicher, Veronika; Mulder, Regina H.; Bauer, Johannes

    2013-01-01

    Learning from errors is an important way of learning at work. In this article, we analyse conditions under which elder care nurses use errors as a starting point for the engagement in social learning activities (ESLA) in the form of joint reflection with colleagues on potential causes of errors and ways to prevent them in future. The goal of our…

  17. What Information is Stored in DNA: Does it Contain Digital Error Correcting Codes?

    Science.gov (United States)

    Liebovitch, Larry

    1998-03-01

    The longest term correlations in living systems are the information stored in DNA, which reflects the evolutionary history of an organism. The 4 bases (A,T,G,C) encode sequences of amino acids as well as locations of binding sites for proteins that regulate DNA. The fidelity of this important information is maintained by ANALOG error check mechanisms. When a single strand of DNA is replicated, the complementary base is inserted in the new strand. Sometimes a wrong base is inserted that sticks out, disrupting the phosphate backbone. The new base is not yet methylated, so repair enzymes that slide along the DNA can tear out the wrong base and replace it with the right one. The bases in DNA form a sequence of 4 different symbols, and so the information is encoded in a DIGITAL form. All the digital codes in our society (ISBN book numbers, UPC product codes, bank account numbers, airline ticket numbers) use error checking codes, where some digits are functions of other digits, to maintain the fidelity of transmitted information. Does DNA also utilize a DIGITAL error checking code to maintain the fidelity of its information and increase the accuracy of replication? That is, are some bases in DNA functions of other bases upstream or downstream? This raises the interesting mathematical problem: how does one determine whether some symbols in a sequence of symbols are a function of other symbols? It also bears on the issue of determining algorithmic complexity: what is the function that generates the shortest algorithm for reproducing the symbol sequence? The error checking codes most used in our technology are linear block codes. We developed an efficient method to test for the presence of such codes in DNA. We coded the 4 bases as (0,1,2,3) and used Gaussian elimination, modified for modulus 4, to test if some bases are linear combinations of other bases. We used this method to analyze the base sequence in the genes from the lac operon and cytochrome C. We did not find
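
    The test described above can be phrased very simply for small systems. The sketch below brute-forces, over Z4, whether the base at one position is a fixed affine function of the bases at other positions; the paper's modified Gaussian elimination performs the same test at scale, and the toy alignment here is invented for illustration.

      from itertools import product

      BASE = {"A": 0, "T": 1, "G": 2, "C": 3}

      def is_linear_combination(seqs, target_pos, other_pos):
          """Return coefficients (c_1..c_k, const) with
          target = sum(c_i * other_i) + const (mod 4) across all sequences,
          or None if no such linear relation exists."""
          rows = [[BASE[s[p]] for p in other_pos] + [BASE[s[target_pos]]]
                  for s in seqs]
          k = len(other_pos)
          for coeffs in product(range(4), repeat=k + 1):  # last = constant
              if all((sum(c * v for c, v in zip(coeffs, r[:-1]))
                      + coeffs[-1]) % 4 == r[-1] for r in rows):
                  return coeffs
          return None

      # Toy alignment built so that pos2 = pos0 + 2*pos1 + 1 (mod 4).
      seqs = ["ATC", "GCT", "TTA", "CGA"]
      print(is_linear_combination(seqs, target_pos=2, other_pos=[0, 1]))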

  18. Tight Error Bounds for Fourier Methods for Option Pricing for Exponential Levy Processes

    KAUST Repository

    Crocce, Fabian

    2016-01-06

    Prices of European options whose underlying asset is driven by a Lévy process are solutions to partial integro-differential equations (PIDEs) that generalise the Black-Scholes equation by incorporating a non-local integral term to account for the discontinuities in the asset price. The Lévy-Khintchine formula provides an explicit representation of the characteristic function of a Lévy process (cf. [6]): one can derive an exact expression for the Fourier transform of the solution of the relevant PIDE. The rapid rate of convergence of the trapezoid quadrature and the speedup provide efficient methods for evaluating option prices, possibly for a range of parameter configurations simultaneously. A couple of works have been devoted to the error analysis and parameter selection for these transform-based methods. In [5] several payoff functions are considered for a rather general set of models, whose characteristic function is assumed to be known. [4] presents the framework and theoretical approach for the error analysis, and establishes polynomial convergence rates for approximations of the option prices. [1] presents FT-related methods with a curved integration contour. The classical flat FT-methods have been, on the other hand, extended for option pricing problems beyond the European framework [3]. We present a methodology for studying and bounding the error committed when using FT methods to compute option prices. We also provide a systematic way of choosing the parameters of the numerical method, minimising the error bound and guaranteeing adherence to a pre-described error tolerance. We focus on exponential Lévy processes that may be of either diffusive or pure-jump type. Our contribution is to derive a tight error bound for a Fourier transform method when pricing options under risk-neutral Lévy dynamics. We present a simplified bound that separates the contributions of the payoff and of the process in an easily processed and extensible product form that
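
    The flat-contour method with trapezoid quadrature can be sketched for the simplest exponential Lévy case, geometric Brownian motion, where a closed form is available for comparison. The damping parameter and grid are assumed values, and this is the standard Carr-Madan representation rather than the authors' specific bound-minimizing parameter selection.

      import numpy as np
      from scipy.stats import norm

      S0, K, r, sigma, T = 100.0, 95.0, 0.03, 0.25, 1.0
      alpha = 1.5                                  # damping parameter (assumed)

      def phi(u):
          """Characteristic function of the log-price under risk-neutral GBM;
          any exponential-Levy model can be swapped in here."""
          mu = np.log(S0) + (r - 0.5 * sigma**2) * T
          return np.exp(1j * u * mu - 0.5 * sigma**2 * u**2 * T)

      def call_fourier(K, n=2**12, vmax=200.0):
          v = np.linspace(1e-8, vmax, n)           # trapezoid grid (assumed)
          k = np.log(K)
          psi = np.exp(-r * T) * phi(v - (alpha + 1) * 1j) \
                / (alpha**2 + alpha - v**2 + 1j * (2 * alpha + 1) * v)
          integrand = np.real(np.exp(-1j * v * k) * psi)
          return np.exp(-alpha * k) / np.pi * np.trapz(integrand, v)

      d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
      bs = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))
      print(f"Fourier: {call_fourier(K):.4f}  closed form: {bs:.4f}")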

  19. Context Specificity of Post-Error and Post-Conflict Cognitive Control Adjustments

    Science.gov (United States)

    Forster, Sarah E.; Cho, Raymond Y.

    2014-01-01

    There has been accumulating evidence that cognitive control can be adaptively regulated by monitoring for processing conflict as an index of online control demands. However, it is not yet known whether top-down control mechanisms respond to processing conflict in a manner specific to the operative task context or confer a more generalized benefit. While previous studies have examined the taskset-specificity of conflict adaptation effects, yielding inconsistent results, control-related performance adjustments following errors have been largely overlooked. This gap in the literature underscores recent debate as to whether post-error performance represents a strategic, control-mediated mechanism or a nonstrategic consequence of attentional orienting. In the present study, evidence of generalized control following both high conflict correct trials and errors was explored in a task-switching paradigm. Conflict adaptation effects were not found to generalize across tasksets, despite a shared response set. In contrast, post-error slowing effects were found to extend to the inactive taskset and were predictive of enhanced post-error accuracy. In addition, post-error performance adjustments were found to persist for several trials and across multiple task switches, a finding inconsistent with attentional orienting accounts of post-error slowing. These findings indicate that error-related control adjustments confer a generalized performance benefit and suggest dissociable mechanisms of post-conflict and post-error control. PMID:24603900

  20. A general approach to error propagation

    International Nuclear Information System (INIS)

    Sanborn, J.B.

    1987-01-01

    A computational approach to error propagation is explained. It is shown that the application of the first-order Taylor theory to a fairly general expression representing an inventory or inventory-difference quantity leads naturally to a data structure that is useful for structuring error-propagation calculations. This data structure incorporates six types of data entities: (1) the objects in the material balance, (2) numerical parameters that describe these objects, (3) groups or sets of objects, (4) the terms which make up the material-balance equation, (5) the errors or sources of variance and (6) the functions or subroutines that represent Taylor partial derivatives. A simple algorithm based on this data structure can be defined using formulas that are sums of squares of sums. The data structures and algorithms described above have been implemented as computer software in FORTRAN for IBM PC-type machines. A free-form data-entry format allows users to separate data as they wish into separate files and enter data using a text editor. The program has been applied to the computation of limits of error for inventory differences (LEIDs) within the DOE complex. 1 ref., 3 figs
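
    A miniature of the "sums of squares of sums" computation for a limit of error on an inventory difference is sketched below: random errors add in quadrature item by item, while systematic errors are summed within each instrument group before squaring. All amounts and error values are invented for illustration.

      import numpy as np

      # Toy material balance: ID = beginning + receipts - ending - shipments.
      items = [
          # (signed coefficient in ID, amount, rel. random sd, instrument)
          (+1, 120.0, 0.01, "scale_A"),
          (+1,  80.0, 0.01, "scale_A"),
          (-1, 150.0, 0.02, "scale_B"),
          (-1,  45.0, 0.02, "scale_B"),
      ]
      sys_sd = {"scale_A": 0.005, "scale_B": 0.004}  # relative systematic sds

      # Random errors add in quadrature item by item: a sum of squares.
      var_random = sum((c * amt * sd)**2 for c, amt, sd, _ in items)

      # Systematic errors are fully correlated within an instrument group,
      # giving "sums of squares of sums": square the signed sum per group.
      var_sys = 0.0
      for inst, s in sys_sd.items():
          group_sum = sum(c * amt for c, amt, _, g in items if g == inst)
          var_sys += (s * group_sum)**2

      leid = 2 * np.sqrt(var_random + var_sys)       # ~95% limit of error
      print(f"LEID: +/- {leid:.2f} units")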

  1. Error Modeling and Experimental Study of a Flexible Joint 6-UPUR Parallel Six-Axis Force Sensor.

    Science.gov (United States)

    Zhao, Yanzhi; Cao, Yachao; Zhang, Caifeng; Zhang, Dan; Zhang, Jie

    2017-09-29

    By combining a parallel mechanism with integrated flexible joints, a sensor with a large measurement range and high accuracy is realized. However, the main errors of the sensor involve not only assembly errors, but also deformation errors of its flexible leg. Based on a flexible joint 6-UPUR (a kind of mechanism configuration where U-universal joint, P-prismatic joint, R-revolute joint) parallel six-axis force sensor developed in a previous phase of this work, assembly and deformation error modeling and analysis of the resulting sensors with a large measurement range and high accuracy are carried out in this paper. First, an assembly error model is established based on the imaginary kinematic joint method and the Denavit-Hartenberg (D-H) method. Next, a stiffness model is built to solve the stiffness matrix. The deformation error model of the sensor is obtained. Then, the first-order kinematic influence coefficient matrix when the synthetic error is taken into account is solved. Finally, measurement and calibration experiments with the sensor, composed of the hardware and software system, are performed. Forced deformation of the force-measuring platform is detected by using laser interferometry and analyzed to verify the correctness of the synthetic error model. In addition, the first-order kinematic influence coefficient matrix in actual circumstances is calculated. By comparing the condition numbers and square norms of the coefficient matrices, the conclusion is drawn theoretically that it is very important to take the synthetic error into account at the design stage of the sensor and helpful to improve the performance of the sensor in order to meet the needs of actual working environments.

  2. Error Modeling and Experimental Study of a Flexible Joint 6-UPUR Parallel Six-Axis Force Sensor

    Directory of Open Access Journals (Sweden)

    Yanzhi Zhao

    2017-09-01

    Full Text Available By combining a parallel mechanism with integrated flexible joints, a sensor with a large measurement range and high accuracy is realized. However, the main errors of the sensor involve not only assembly errors, but also deformation errors of its flexible leg. Based on a flexible joint 6-UPUR (a kind of mechanism configuration where U-universal joint, P-prismatic joint, R-revolute joint) parallel six-axis force sensor developed in a previous phase of this work, assembly and deformation error modeling and analysis of the resulting sensors with a large measurement range and high accuracy are carried out in this paper. First, an assembly error model is established based on the imaginary kinematic joint method and the Denavit-Hartenberg (D-H) method. Next, a stiffness model is built to solve the stiffness matrix. The deformation error model of the sensor is obtained. Then, the first-order kinematic influence coefficient matrix when the synthetic error is taken into account is solved. Finally, measurement and calibration experiments with the sensor, composed of the hardware and software system, are performed. Forced deformation of the force-measuring platform is detected by using laser interferometry and analyzed to verify the correctness of the synthetic error model. In addition, the first-order kinematic influence coefficient matrix in actual circumstances is calculated. By comparing the condition numbers and square norms of the coefficient matrices, the conclusion is drawn theoretically that it is very important to take the synthetic error into account at the design stage of the sensor and helpful to improve the performance of the sensor in order to meet the needs of actual working environments.
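
    The assembly-error part of such a model can be illustrated with the D-H formalism directly: compose nominal link transforms, compose perturbed ones, and read off the pose error. The link parameters and perturbations below are assumptions, not the 6-UPUR sensor's actual values.

      import numpy as np

      def dh(theta, d, a, alpha):
          """Homogeneous transform for one Denavit-Hartenberg link."""
          ct, st = np.cos(theta), np.sin(theta)
          ca, sa = np.cos(alpha), np.sin(alpha)
          return np.array([[ct, -st * ca,  st * sa, a * ct],
                           [st,  ct * ca, -ct * sa, a * st],
                           [0.0,      sa,       ca,      d],
                           [0.0,     0.0,      0.0,    1.0]])

      # Nominal D-H parameters of one limb (illustrative values).
      nominal = [(0.1, 0.05, 0.2, np.pi / 2), (0.3, 0.00, 0.15, 0.0)]
      # Small assembly errors added to each parameter.
      errors  = [(1e-3, 2e-4, 5e-4, 1e-3),    (-1e-3, 1e-4, -2e-4, 5e-4)]

      T_nom = np.eye(4)
      T_act = np.eye(4)
      for p, e in zip(nominal, errors):
          T_nom = T_nom @ dh(*p)
          T_act = T_act @ dh(*(np.array(p) + np.array(e)))

      pose_error = T_act[:3, 3] - T_nom[:3, 3]   # end-point position error
      print("position error from assembly tolerances:", pose_error)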

  3. Analysis of Medication Error Reports

    Energy Technology Data Exchange (ETDEWEB)

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  4. ERROR HANDLING IN INTEGRATION WORKFLOWS

    Directory of Open Access Journals (Sweden)

    Alexey M. Nazarenko

    2017-01-01

    Full Text Available Simulation experiments performed while solving multidisciplinary engineering and scientific problems require joint usage of multiple software tools. Further, when following a preset plan of experiment or searching for optimum solutions, the same sequence of calculations is run multiple times with various simulation parameters, input data, or conditions, while the overall workflow does not change. Automation of simulations like these requires implementing a workflow where tool execution and data exchange are usually controlled by a special type of software, an integration environment or platform. The result is an integration workflow (a platform-dependent implementation of some computing workflow) which, in the context of automation, is a composition of weakly coupled (in terms of communication intensity) typical subtasks. These compositions can then be decomposed back into a few workflow patterns (types of subtask interaction). The patterns, in their turn, can be interpreted as higher-level subtasks. This paper considers execution control and data exchange rules that should be imposed by the integration environment in the case of an error encountered by some integrated software tool. An error is defined as any abnormal behavior of a tool that invalidates its result data, thus disrupting the data flow within the integration workflow. The main requirement to the error handling mechanism implemented by the integration environment is to prevent abnormal termination of the entire workflow in case of missing intermediate results data. Error handling rules are formulated on the basic pattern level and on the level of a composite task that can combine several basic patterns as next-level subtasks. The cases where workflow behavior may be different, depending on the user's purposes, when an error takes place, and possible error handling options that can be specified by the user are also noted in the work.
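
    A minimal sketch of user-selectable error handling for one subtask is given below; the policy names (retry, skip, abort) paraphrase the options discussed above, and the API is hypothetical rather than any particular integration platform's.

      import time

      class StepError(Exception):
          """Raised when a tool invalidates its result data."""

      def run_step(step, policy="abort", retries=2, fallback=None):
          """Execute one workflow subtask under an error-handling policy:
          retry the tool, substitute fallback data so downstream steps can
          proceed, or abort the whole workflow."""
          for attempt in range(retries + 1):
              try:
                  return step()
              except StepError:
                  if policy == "retry" and attempt < retries:
                      time.sleep(0.1)      # back off before re-running the tool
                      continue
                  if policy == "skip" and fallback is not None:
                      return fallback      # keep the data flow alive
                  raise                    # "abort": propagate, stop workflow

      # Example: a flaky simulation tool wrapped with a retry policy.
      calls = {"n": 0}
      def flaky_tool():
          calls["n"] += 1
          if calls["n"] < 2:
              raise StepError("solver diverged")
          return {"result": 42}

      print(run_step(flaky_tool, policy="retry"))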

  5. Analysis and improvement of gas turbine blade temperature measurement error

    International Nuclear Information System (INIS)

    Gao, Shan; Wang, Lixin; Feng, Chi; Daniel, Ketui

    2015-01-01

    Gas turbine blade components are easily damaged; they also operate in harsh high-temperature, high-pressure environments over extended durations. Therefore, ensuring that the blade temperature remains within the design limits is very important. In this study, measurement errors in turbine blade temperatures were analyzed, taking into account detector lens contamination, the reflection of environmental energy from the target surface, the effects of the combustion gas, and the emissivity of the blade surface. In this paper, each of the above sources of measurement error is discussed, and an iterative computing method for calculating blade temperature is proposed. (paper)

  6. Analysis and improvement of gas turbine blade temperature measurement error

    Science.gov (United States)

    Gao, Shan; Wang, Lixin; Feng, Chi; Daniel, Ketui

    2015-10-01

    Gas turbine blade components are easily damaged; they also operate in harsh high-temperature, high-pressure environments over extended durations. Therefore, ensuring that the blade temperature remains within the design limits is very important. In this study, measurement errors in turbine blade temperatures were analyzed, taking into account detector lens contamination, the reflection of environmental energy from the target surface, the effects of the combustion gas, and the emissivity of the blade surface. In this paper, each of the above sources of measurement error is discussed, and an iterative computing method for calculating blade temperature is proposed.
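
    The iterative correction can be sketched as inverting a single-wavelength radiance balance, L_meas = eps*B(T) + (1 - eps)*L_env, for the blade temperature T. The wavelength, emissivity and environmental radiance below are assumed values, and the paper's full method also treats lens contamination and combustion-gas effects.

      import numpy as np

      # Planck spectral radiance at wavelength lam (m) and temperature T (K).
      h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
      def planck(lam, T):
          return 2 * h * c**2 / lam**5 / (np.exp(h * c / (lam * kB * T)) - 1)

      lam = 1.6e-6                  # pyrometer wavelength (assumed)
      eps = 0.85                    # blade surface emissivity (assumed)
      L_env = planck(lam, 1400.0)   # reflected environmental radiance (assumed)

      def blade_temperature(L_meas, lo=500.0, hi=2500.0, tol=1e-3):
          """Invert L_meas = eps*B(T) + (1-eps)*L_env for T by bisection."""
          f = lambda T: eps * planck(lam, T) + (1 - eps) * L_env - L_meas
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
          return 0.5 * (lo + hi)

      L_meas = eps * planck(lam, 1150.0) + (1 - eps) * L_env  # synthetic reading
      print(f"corrected blade temperature: {blade_temperature(L_meas):.1f} K")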

  7. A systematic framework for Monte Carlo simulation of remote sensing errors map in carbon assessments

    Science.gov (United States)

    S. Healey; P. Patterson; S. Urbanski

    2014-01-01

    Remotely sensed observations can provide unique perspective on how management and natural disturbance affect carbon stocks in forests. However, integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential remote sensing errors...
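
    A toy Monte Carlo of this kind is sketched below: per-pixel (independent) and map-wide (systematic) error draws are applied to a hypothetical biomass map, and the resulting spread of the mean yields an uncertainty interval. All error magnitudes are assumptions.

      import numpy as np

      rng = np.random.default_rng(7)
      # Hypothetical per-pixel biomass map (Mg/ha) from a remote sensing model.
      biomass = rng.uniform(40, 200, size=(100, 100))
      rel_sd = 0.25              # assumed per-pixel relative model error
      bias_sd = 0.05             # assumed map-wide systematic error component

      totals = []
      for _ in range(1000):      # Monte Carlo draws of the error map
          pixel_err = rng.normal(1.0, rel_sd, biomass.shape)  # independent
          map_bias = rng.normal(1.0, bias_sd)                 # shared across map
          totals.append((biomass * pixel_err * map_bias).mean())

      lo, hi = np.percentile(totals, [2.5, 97.5])
      print(f"mean biomass: {np.mean(totals):.1f} Mg/ha, "
            f"95% CI [{lo:.1f}, {hi:.1f}]")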

  8. Report of the Material Control and Material Accounting Task Force: summary

    International Nuclear Information System (INIS)

    1978-03-01

    A special review was made of the safeguards maintained by licensees possessing 5 kg or more of strategic special nuclear material (SSNM), i.e., plutonium, uranium-233, or uranium enriched in the uranium-235 isotope to 20 percent or more. A Task Force was formed to define the roles and objectives of material control and material accounting in the NRC safeguards program; recommend goals for material control and material accounting systems based on their roles and objectives; assess the extent to which the existing regulatory base meets or provides the capability to meet the recommended goals; and to provide direction for material control and material accounting development, including both near-term and long-term upgrades. Based on results of Task Force investigations it is recommended that licensee plans for measurement control programs be submitted in response to Section 70.57(c) of Title 10 of the Code of Federal Regulations. Other recommendations include the review and upgrading, as necessary, of measurement error propagation models used by each licensee; revision of Nuclear Materials Management and Safeguards System (NMMSS) reporting entities for SSNM licensees to be consistent with the partitioning of facilities into plants or, if appropriate, accounting units; review of NMMSS reporting entities for SSNM licensees to assure that data for high enriched uranium operations are clearly separated from low enriched uranium operations; upgrading of the editing by NMMSS of reported licensee safeguards data for accuracy and consistency; and the acquisition of (a) a secure interactive computer capability for use in collecting, storing, sorting, and analyzing special nuclear material accounting data, and (b) associated flexible computer software that presents safeguards information in a succinct and comprehensive manner

  9. A Corpus-based Study of EFL Learners’ Errors in IELTS Essay Writing

    Directory of Open Access Journals (Sweden)

    Hoda Divsar

    2017-03-01

    Full Text Available The present study analyzed different types of errors in EFL learners' IELTS essays. In order to determine the major types of errors, a corpus of 70 IELTS examinees' writings was collected, and their errors were extracted and categorized qualitatively. Errors were categorized, based on a researcher-developed error-coding scheme, into 13 aspects. Based on the descriptive statistical analyses, the frequency of each error type was calculated and the commonest errors committed by the EFL learners in IELTS essays were identified. The results indicated that the two most frequent errors that IELTS candidates committed were related to word choice and verb forms. Based on the research results, the pedagogical implications highlight the analysis of EFL learners' writing errors as a useful basis for instructional purposes, including creating pedagogical teaching materials that are in line with learners' linguistic strengths and weaknesses.

  10. Accounting Fundamentals and Variations of Stock Price: Forward Looking Information Inducement

    OpenAIRE

    Sumiyana, Sumiyana

    2011-01-01

    This study investigates a permanent issue: the low association between accounting fundamentals and variations of stock prices. It induces not only historical accounting fundamentals, but also forward looking information. Investors consider forward looking information that enables them to predict potential future cash flow, increase predictive power, lessen mispricing error, increase information content and drive future price equilibrium. The accounting fundamentals are earnings yield, book v...

  11. 31 CFR 543.505 - Entries in certain accounts for normal service charges authorized.

    Science.gov (United States)

    2010-07-01

    ... reimbursement for normal service charges owed it by the owner of that blocked account. (b) As used in this....505 Entries in certain accounts for normal service charges authorized. (a) A U.S. financial... charges to correct bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary...

  12. 31 CFR 594.505 - Entries in certain accounts for normal service charges authorized.

    Science.gov (United States)

    2010-07-01

    ... reimbursement for normal service charges owed it by the owner of that blocked account. (b) As used in this....505 Entries in certain accounts for normal service charges authorized. (a) A U.S. financial... to correct bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary and...

  13. Integrating Systems into Accounting Instruction.

    Science.gov (United States)

    Heatherington, Ralph

    1980-01-01

    By incorporating a discussion of systems into the beginning accounting class, students will have a more accurate picture of business and the role accounting plays in it. Students should understand the purpose of forms, have a basic knowledge of flowcharting principles and symbols, and know how source documents are created. (CT)

  14. Open Book Professional Accountancy Examinations

    Science.gov (United States)

    Rowlands, J. E.; Forsyth, D.

    2006-01-01

    This article describes the structure and rationale for an open-book approach in professional accountancy examinations. The concept of knowledge management and the recognition that some knowledge ought to be embedded in the minds of professional accountants while other knowledge ought to be readily accessible and capable of application forms the…

  15. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  16. Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.

    Science.gov (United States)

    Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

    2013-08-01

    Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.

  17. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently, Chakraborty has proposed a simple technique called the packet combining scheme, in which errors are corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails: (i) when bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both of these have been addressed recently by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)
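
    The basic packet combining idea the letter builds on can be sketched as follows: bit positions where two erroneous copies disagree are candidate error locations, and flipping subsets of them until a checksum passes recovers the packet (failing exactly in the identical-error-location case the letter addresses). The CRC-32 framing is an illustrative assumption.

      import zlib
      from itertools import combinations

      def crc_ok(payload, crc):
          return zlib.crc32(payload) == crc

      def packet_combine(copy1, copy2, crc):
          """Try flipping subsets of the bits where the two erroneous
          copies differ until the CRC check passes; returns None when
          both copies err at identical positions (retransmit needed)."""
          diff = [i for i in range(len(copy1) * 8)
                  if (copy1[i // 8] ^ copy2[i // 8]) >> (7 - i % 8) & 1]
          for r in range(len(diff) + 1):
              for locs in combinations(diff, r):
                  trial = bytearray(copy1)
                  for i in locs:
                      trial[i // 8] ^= 1 << (7 - i % 8)
                  if crc_ok(bytes(trial), crc):
                      return bytes(trial)
          return None

      data = b"hello-world-packet"
      crc = zlib.crc32(data)
      bad1 = bytearray(data); bad1[2] ^= 0x10   # one bit error in copy 1
      bad2 = bytearray(data); bad2[9] ^= 0x02   # different bit error in copy 2
      print(packet_combine(bytes(bad1), bytes(bad2), crc) == data)  # True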

  18. Effect of lethality on the extinction and on the error threshold of quasispecies.

    Science.gov (United States)

    Tejero, Hector; Marín, Arturo; Montero, Francisco

    2010-02-21

    In this paper the effect of lethality on the error threshold and on extinction has been studied in a population of error-prone self-replicating molecules. For given lethality and a simple fitness landscape, three dynamic regimes can be obtained: quasispecies, error catastrophe, and extinction. Using a simple model in which molecules are classified as master, lethal and non-lethal mutants, it is possible to obtain the mutation rates of the transitions between the three regimes analytically. The numerical resolution of the extended model, in which molecules are classified depending on their Hamming distance to the master sequence, confirms the results obtained in the simple model and shows how the error catastrophe regime changes when lethality is taken into account.
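
    A sketch of the reduced three-class iteration is given below; the fitness, lethality and decay parameters are assumptions chosen so that increasing the mutation rate walks through the three regimes named above, and back-mutation to the master is neglected for simplicity.

      import numpy as np

      def classify(mu, sigma=1.5, L=20, lam=0.2, decay=0.85, steps=5000):
          """Iterate a reduced quasispecies model with master, viable-mutant
          and lethal classes; a fraction lam of all mutated copies is lethal
          and removed from the population."""
          Q = (1.0 - mu)**L            # probability of an error-free copy
          x = np.array([0.5, 0.5])     # frequencies: [master, viable mutants]
          for _ in range(steps):
              w_m = sigma * Q * x[0]                      # exact master copies
              w_n = sigma * (1 - Q) * (1 - lam) * x[0] \
                  + (Q + (1 - Q) * (1 - lam)) * x[1]      # viable mutant copies
              g = w_m + w_n            # mean reproductive output per molecule
              x = np.array([w_m, w_n]) / g
          if g < decay:                # growth cannot offset degradation
              return "extinction"
          return "quasispecies" if x[0] > 0.05 else "error catastrophe"

      for mu in [0.01, 0.04, 0.15]:
          print(f"mu = {mu:.2f}: {classify(mu)}")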

  19. MATERIAL CONTROL ACCOUNTING INMM

    Energy Technology Data Exchange (ETDEWEB)

    Hasty, T.

    2009-06-14

    Since 1996, the Mining and Chemical Combine (MCC, formerly known as K-26) and the United States Department of Energy (DOE) have been cooperating under the Nuclear Material Protection, Control and Accounting (MPC&A) Program between the Russian Federation and U.S. Governments. Since MCC continues to operate a reactor for steam and electricity production for the site and city of Zheleznogorsk, which results in the production of weapons-grade plutonium, one of the goals of the MPC&A program is to support implementation of an expanded comprehensive nuclear material control and accounting (MC&A) program. To date MCC has completed upgrades identified in the initial gap analysis and documented in the site MC&A Plan and is implementing additional upgrades identified during an update to the gap analysis. The scope of these upgrades includes implementation of the MCC organization structure relating to MC&A, establishing the material balance area structure for special nuclear materials (SNM) storage and bulk processing areas, and material control functions including SNM portal monitors at target locations. Material accounting function upgrades include enhancements in the conduct of physical inventories, limit of error inventory difference procedure enhancements, implementation of a basic computerized accounting system for four SNM storage areas, implementation of measurement equipment for improved accountability reporting, and both new and revised site-level MC&A procedures. This paper will discuss the implementation of MC&A upgrades at MCC based on the requirements established in the comprehensive MC&A plan developed by the Mining and Chemical Combine as part of the MPC&A Program.

  20. The timing of spontaneous detection and repair of naming errors in aphasia.

    Science.gov (United States)

    Schuchard, Julia; Middleton, Erica L; Schwartz, Myrna F

    2017-08-01

    This study examined the timing of spontaneous self-monitoring in the naming responses of people with aphasia. Twelve people with aphasia completed a 615-item naming test twice, in separate sessions. Naming attempts were scored for accuracy and error type, and verbalizations indicating detection were coded as negation (e.g., "no, not that") or repair attempts (i.e., a changed naming attempt). Focusing on phonological and semantic errors, we measured the timing of the errors and of the utterances that provided evidence of detection. The effects of error type and detection response type on error-to-detection latencies were analyzed using mixed-effects regression modeling. We first asked whether phonological errors and semantic errors differed in the timing of the detection process or repair planning. Results suggested that the two error types primarily differed with respect to repair planning. Specifically, repair attempts for phonological errors were initiated more quickly than repair attempts for semantic errors. We next asked whether this difference between the error types could be attributed to the tendency for phonological errors to have a high degree of phonological similarity with the subsequent repair attempts, thereby speeding the programming of the repairs. Results showed that greater phonological similarity between the error and the repair was associated with faster repair times for both error types, providing evidence of error-to-repair priming in spontaneous self-monitoring. When controlling for phonological overlap, significant effects of error type and repair accuracy on repair times were also found. These effects indicated that correct repairs of phonological errors were initiated particularly quickly, whereas repairs of semantic errors were initiated relatively slowly, regardless of their accuracy. We discuss the implications of these findings for theoretical accounts of self-monitoring and the role of speech error repair in learning.

  1. 31 CFR 544.505 - Entries in certain accounts for normal service charges authorized.

    Science.gov (United States)

    2010-07-01

    ... payment or reimbursement for normal service charges owed it by the owner of that blocked account. (b) As... Licensing Policy § 544.505 Entries in certain accounts for normal service charges authorized. (a) A U.S... adjustment charges to correct bookkeeping errors; and, but not by way of limitation, minimum balance charges...

  2. 31 CFR 593.505 - Entries in certain accounts for normal service charges authorized.

    Science.gov (United States)

    2010-07-01

    ... payment or reimbursement for normal service charges owed it by the owner of that blocked account. (b) As... Licensing Policy § 593.505 Entries in certain accounts for normal service charges authorized. (a) A U.S... adjustment charges to correct bookkeeping errors; and, but not by way of limitation, minimum balance charges...

  3. 31 CFR 547.505 - Entries in certain accounts for normal service charges authorized.

    Science.gov (United States)

    2010-07-01

    ... reimbursement for normal service charges owed it by the owner of that blocked account. (b) As used in this... Policy § 547.505 Entries in certain accounts for normal service charges authorized. (a) A U.S. financial... charges to correct bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary...

  4. 31 CFR 537.505 - Entries in certain accounts for normal service charges authorized.

    Science.gov (United States)

    2010-07-01

    ... normal service charges owed it by the owner of that blocked account. (b) As used in this section, the... Entries in certain accounts for normal service charges authorized. (a) A U.S. financial institution is... bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary and protest fees, and...

  5. 31 CFR 541.505 - Entries in certain accounts for normal service charges authorized.

    Science.gov (United States)

    2010-07-01

    ... normal service charges owed it by the owner of that blocked account. (b) As used in this section, the... Entries in certain accounts for normal service charges authorized. (a) A U.S. financial institution is... bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary and protest fees, and...

  6. 31 CFR 548.505 - Entries in certain accounts for normal service charges authorized.

    Science.gov (United States)

    2010-07-01

    ... normal service charges owed it by the owner of that blocked account. (b) As used in this section, the... Entries in certain accounts for normal service charges authorized. (a) A U.S. financial institution is... bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary and protest fees, and...

  7. 31 CFR 588.505 - Entries in certain accounts for normal service charges authorized.

    Science.gov (United States)

    2010-07-01

    ... reimbursement for normal service charges owed it by the owner of that blocked account. (b) As used in this... § 588.505 Entries in certain accounts for normal service charges authorized. (a) A U.S. financial... to correct bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary and...

  8. 31 CFR 546.505 - Entries in certain accounts for normal service charges authorized.

    Science.gov (United States)

    2010-07-01

    ... normal service charges owed it by the owner of that blocked account. (b) As used in this section, the... Entries in certain accounts for normal service charges authorized. (a) A U.S. financial institution is... bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary and protest fees, and...

  9. 31 CFR 542.505 - Entries in certain accounts for normal service charges authorized.

    Science.gov (United States)

    2010-07-01

    ... normal service charges owed it by the owner of that blocked account. (b) As used in this section, the... Entries in certain accounts for normal service charges authorized. (a) A U.S. financial institution is... bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary and protest fees, and...

  10. 31 CFR 545.504 - Entries in certain accounts for normal service charges authorized.

    Science.gov (United States)

    2010-07-01

    ... reimbursement for normal service charges owed it by the owner of that blocked account. (b) As used in this... § 545.504 Entries in certain accounts for normal service charges authorized. (a) A U.S. financial... to correct bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary and...

  11. 31 CFR 551.505 - Entries in certain accounts for normal service charges authorized.

    Science.gov (United States)

    2010-07-01

    ... normal service charges owed it by the owner of that blocked account. (b) As used in this section, the... Entries in certain accounts for normal service charges authorized. (a) A U.S. financial institution is... bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary and protest fees, and...

  12. Surprised at all the entropy: hippocampal, caudate and midbrain contributions to learning from prediction errors.

    Directory of Open Access Journals (Sweden)

    Anne-Marike Schiffer

    Full Text Available Influential concepts in neuroscientific research cast the brain as a predictive machine that revises its predictions when they are violated by sensory input. This relates to the predictive coding account of perception, but also to learning. Learning from prediction errors has been suggested to take place in the hippocampal memory system as well as in the basal ganglia. The present fMRI study used an action-observation paradigm to investigate the contributions of the hippocampus, caudate nucleus and midbrain dopaminergic system to different types of learning: learning in the absence of prediction errors, learning from prediction errors, and responding to the accumulation of prediction errors in unpredictable stimulus configurations. We conducted analyses of the regions of interest's BOLD responses to these different types of learning, implementing a bootstrapping procedure to correct for false positives. We found both the caudate nucleus and the hippocampus to be activated by perceptual prediction errors. The hippocampal responses seemed to relate to the associative mismatch between a stored representation and current sensory input. Moreover, its response was significantly influenced by the average information, or Shannon entropy, of the stimulus material. In accordance with earlier results, the habenula was activated by perceptual prediction errors. Lastly, we found that the substantia nigra was activated by the novelty of sensory input. In sum, we established that the midbrain dopaminergic system, the hippocampus, and the caudate nucleus were, to different degrees, significantly involved in the three different types of learning: acquisition of new information, learning from prediction errors, and responding to unpredictable stimulus developments. We relate learning from perceptual prediction errors to the concept of predictive coding and related information theoretic accounts.

  13. Surprised at all the entropy: hippocampal, caudate and midbrain contributions to learning from prediction errors.

    Science.gov (United States)

    Schiffer, Anne-Marike; Ahlheim, Christiane; Wurm, Moritz F; Schubotz, Ricarda I

    2012-01-01

    Influential concepts in neuroscientific research cast the brain as a predictive machine that revises its predictions when they are violated by sensory input. This relates to the predictive coding account of perception, but also to learning. Learning from prediction errors has been suggested to take place in the hippocampal memory system as well as in the basal ganglia. The present fMRI study used an action-observation paradigm to investigate the contributions of the hippocampus, caudate nucleus and midbrain dopaminergic system to different types of learning: learning in the absence of prediction errors, learning from prediction errors, and responding to the accumulation of prediction errors in unpredictable stimulus configurations. We conducted analyses of the regions of interest's BOLD responses to these different types of learning, implementing a bootstrapping procedure to correct for false positives. We found both the caudate nucleus and the hippocampus to be activated by perceptual prediction errors. The hippocampal responses seemed to relate to the associative mismatch between a stored representation and current sensory input. Moreover, its response was significantly influenced by the average information, or Shannon entropy, of the stimulus material. In accordance with earlier results, the habenula was activated by perceptual prediction errors. Lastly, we found that the substantia nigra was activated by the novelty of sensory input. In sum, we established that the midbrain dopaminergic system, the hippocampus, and the caudate nucleus were, to different degrees, significantly involved in the three different types of learning: acquisition of new information, learning from prediction errors, and responding to unpredictable stimulus developments. We relate learning from perceptual prediction errors to the concept of predictive coding and related information theoretic accounts.

  14. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious, and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  15. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    Science.gov (United States)

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

    Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this same study, without accounting for measurement error, reports more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess impact of holding/shipping time. A total of 2544 samples were analyzed including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Anticipating cognitive effort: roles of perceived error-likelihood and time demands.

    Science.gov (United States)

    Dunn, Timothy L; Inzlicht, Michael; Risko, Evan F

    2017-11-13

    Why are some actions evaluated as effortful? In the present set of experiments we address this question by examining individuals' perception of effort when faced with a trade-off between two putative cognitive costs: how much time a task takes vs. how error-prone it is. Specifically, we were interested in whether individuals anticipate engaging in a small amount of hard work (i.e., low time requirement, but high error-likelihood) vs. a large amount of easy work (i.e., high time requirement, but low error-likelihood) as being more effortful. In between-subject designs, Experiments 1 through 3 demonstrated that individuals anticipate options that are high in perceived error-likelihood (yet less time consuming) as more effortful than options that are perceived to be more time consuming (yet low in error-likelihood). Further, when asked to evaluate which of the two tasks was (a) more effortful, (b) more error-prone, and (c) more time consuming, effort-based and error-based choices closely tracked one another, but this was not the case for time-based choices. Utilizing a within-subject design, Experiment 4 demonstrated an overall pattern of judgments similar to Experiments 1 through 3; however, judgments of error-likelihood and of time demand predicted effort judgments to a similar extent. Results are discussed in the context of extant accounts of cognitive control, with consideration of how error-likelihood and time demands may independently and conjunctively factor into judgments of cognitive effort.

  17. Patient identification errors: the detective in the laboratory.

    Science.gov (United States)

    Salinas, Maria; López-Garrigós, Maite; Lillo, Rosa; Gutiérrez, Mercedes; Lugo, Javier; Leiva-Salinas, Carlos

    2013-11-01

    The eradication of errors in patient identification is one of the main goals of safety improvement. As the clinical laboratory is involved in 70% of clinical decisions, laboratory safety is crucial to patient safety. We studied the number of Laboratory Information System (LIS) demographic data errors registered in our laboratory during one year. The laboratory attends a variety of inpatients and outpatients. The demographic data of outpatients are registered in the LIS when they present to the laboratory front desk. Requests from the primary care centers (PCC) are made electronically by the general practitioner, and a manual step is always performed at the PCC to reconcile the patient identification number in the electronic request with the one in the LIS. Manual registration is done through hospital information system demographic data capture when the patient's medical record number is registered in the LIS. The laboratory report is always sent out electronically to the patient's electronic medical record. Each day, all demographic data in the LIS were manually compared to the request forms to detect potential errors. Fewer errors were committed when the electronic order was used, but there was great variability in error rates between PCCs using it. LIS demographic data manual registration errors depended on patient origin and test requesting method. Even with the electronic approach, errors were detected, and the great variability between PCCs suggests that the number of errors still depends on the personnel in charge of the technology.

  18. STRATEGIC MANAGEMENT ACCOUNTING: DEFINITION AND TOOLS

    Directory of Open Access Journals (Sweden)

    Nadiia Pylypiv

    2017-08-01

    Full Text Available The article examines the essence of the definition of "strategic management accounting" in domestic and foreign literature. Strategic management accounting tools have been studied, and the constraints that affect their choice have been identified. As a result of the study, the authors formulate an understanding of strategic management accounting and define both the tools common to traditional and strategic management accounting and the specific tools necessary for the efficient implementation of strategic management accounting. Keywords: strategic management accounting, definition, tools, strategic management decisions.

  19. A Comparative Study of Accounting Entities Under Different Business Organizations

    Institute of Scientific and Technical Information of China (English)

    LUO Hong-lan; XU Guo-xin; FAN Jin

    2001-01-01

    In terms of accounting, all types of business enterprises, regardless of their organizational form, are separate accounting entities. However, different organizational forms entail remarkable differences in the establishment, legal position, liabilities, taxation obligations and accounting practices of business enterprises as accounting entities. A good knowledge of such differences is beneficial to promoting the development of all types of business enterprises in China.

  20. Material control and accountancy at EDF PWR plants

    International Nuclear Information System (INIS)

    de Cormis, F.

    1991-01-01

    The paper describes the comprehensive system developed and implemented at Electricite de France to provide a single reliable nuclear material control and accounting system for all nuclear plants. This software aims at several objectives, among which are: the control and accountancy of nuclear material at the plant; the optimization of data consistency by minimizing the possibility of transcription errors; the fulfillment of statutory requirements by automatic transfer of reports to national and international safeguards authorities; and the servicing of other EDF users of nuclear material data for technical or commercial purposes.

  1. Generalized Forecast Error Variance Decomposition for Linear and Nonlinear Multivariate Models

    DEFF Research Database (Denmark)

    Lanne, Markku; Nyberg, Henri

    We propose a new generalized forecast error variance decomposition with the property that the proportions of the impact accounted for by innovations in each variable sum to unity. Our decomposition is based on the well-established concept of the generalized impulse response function. The use of t...
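
    The record is truncated, but its defining property (forecast-error-variance shares that sum to unity) is easy to make concrete. Below is a minimal numpy sketch of the row-wise normalization step; the input matrix and its values are hypothetical stand-ins for a GFEVD computed from generalized impulse responses.

```python
import numpy as np

def normalized_gfevd(theta):
    """Normalize a generalized FEVD matrix so each row sums to one.

    theta[i, j] holds the (unnormalized) contribution of shocks in
    variable j to the h-step forecast error variance of variable i,
    e.g. computed from generalized impulse responses.
    """
    theta = np.asarray(theta, dtype=float)
    return theta / theta.sum(axis=1, keepdims=True)

# Toy example: raw contributions whose rows do not sum to one.
theta = np.array([[0.9, 0.3],
                  [0.2, 0.7]])
shares = normalized_gfevd(theta)
print(shares)              # normalized contribution shares
print(shares.sum(axis=1))  # [1. 1.] -- the decomposition's defining property
```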

  2. The Theory and Assessment of Spatial Straightness Error Matched New Generation GPS

    International Nuclear Information System (INIS)

    Zhang, X B; Sheng, X L; Jiang, X Q; Li, Z

    2006-01-01

    In order to assess spatial straightness error in a way that matches the new generation Dimensional Geometrical Product Specification and Verification (GPS), this paper proposes a theory of spatial straightness error assessment and analyzes its advantages on the basis of metrology and statistics. An assessing parameter system is then proposed and verified in a real application against the assessment results of geometric tolerance theory. The statistical parameters of this assessing system capture the different characteristics of spatial straightness error and can reveal the impact of spatial straightness error on component function more completely, complementing the single assessing parameter of geometrical tolerance for straightness error. The statistical spatial straightness tolerance and statistical spatial straightness error proposed in this paper can also be applied to the evaluation of other errors of form, orientation, location and run-out.
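
    As a rough illustration of statistical assessment of spatial straightness, the sketch below fits a least-squares 3-D reference line by SVD and reports summary statistics of the point-to-line distances. This is an assumed, simplified parameter set, not the paper's actual assessing system.

```python
import numpy as np

def straightness_stats(points):
    """Distances of 3-D measured points from a least-squares reference line.

    The line passes through the centroid along the first principal
    direction (via SVD). Returns summary statistics of the distances,
    a minimal stand-in for a statistical straightness parameter system.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # First right singular vector = direction of the best-fit line.
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    rel = pts - centroid
    # Distance from each point to the line: || r - (r.d) d ||.
    proj = np.outer(rel @ direction, direction)
    dist = np.linalg.norm(rel - proj, axis=1)
    return {"max": dist.max(), "mean": dist.mean(), "std": dist.std()}

# Hypothetical axis measurements of a nominally straight feature (mm).
axis_pts = np.array([[0.0, 0.01, -0.02], [1.0, -0.01, 0.01],
                     [2.0, 0.02, 0.00], [3.0, 0.00, -0.01]])
print(straightness_stats(axis_pts))
```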

  3. Accounting for imperfect forward modeling in geophysical inverse problems — Exemplified for crosshole tomography

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Cordua, Knud Skou; Holm Jacobsen, Bo

    2014-01-01

    forward models, can be more than an order of magnitude larger than the measurement uncertainty. We also found that the modeling error is strongly linked to the spatial variability of the assumed velocity field, i.e., the a priori velocity model.We discovered some general tools by which the modeling error...... synthetic ground-penetrating radar crosshole tomographic inverse problems. Ignoring the modeling error can lead to severe artifacts, which erroneously appear to be well resolved in the solution of the inverse problem. Accounting for the modeling error leads to a solution of the inverse problem consistent...

  4. Accounting Education Approach in the Context of New Turkish Commercial Code and Turkish Accounting Standards

    OpenAIRE

    Cevdet Kızıl; Ayşe Tansel Çetin; Ahmed Bulunmaz

    2014-01-01

    The aim of this article is to investigate the impact of the new Turkish commercial code and Turkish accounting standards on accounting education. This study takes advantage of the survey method for gathering information and running the research analysis. For this purpose, questionnaire forms are distributed to university students personally and via the internet. This paper includes significant research questions such as "Are accounting academicians informed and knowledgeable on new Turkish commerc...

  5. Advances in metal forming expert system for metal forming

    CERN Document Server

    Hingole, Rahulkumar Shivajirao

    2015-01-01

    This comprehensive book offers a clear account of the theory and applications of advanced metal forming. It provides a detailed discussion of specific forming processes, such as deep drawing, rolling, bending, extrusion and stamping. The author highlights recent developments in metal forming technologies and explains sound, new and powerful expert system techniques for solving advanced engineering problems in metal forming. In addition, the basics of expert systems, their importance and applications to metal forming processes, computer-aided analysis of metalworking processes, formability analysis, mathematical modeling and case studies of individual processes are presented.

  6. Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors

    Science.gov (United States)

    Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.

    2018-04-01

    The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of "Control and Observation" is used: a versatile multi-function laser interferometer serves as the Observer in order to measure the machine's error functions. A systematic error map of the machine's workspace is produced from these error function measurements, and the error map yields an error correction strategy. The article proposes a new method of forming the error correction strategy. The method is based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.

  7. Strategy of restraining ripple error on surface for optical fabrication.

    Science.gov (United States)

    Wang, Tan; Cheng, Haobo; Feng, Yunpeng; Tam, Honyuen

    2014-09-10

    The influence of ripple error on imaging quality is effectively reduced by restraining the ripple height. A method based on the process parameters and the surface error distribution is designed to suppress the ripple height in this paper. The generating mechanism of the ripple error is analyzed using polishing theory with a uniform removal characteristic. The relation between the processing parameters (removal functions, pitch of path, and dwell time) and the ripple error is discussed through simulations, and from these a strategy for diminishing the error is presented. A final process is designed and demonstrated on K9 work-pieces using the optimized strategy with magnetorheological jet polishing. The form error on the surface is decreased from 0.216λ PV (λ=632.8 nm) and 0.039λ RMS to 0.03λ PV and 0.004λ RMS. The ripple error is restrained well at the same time, because the ripple height is less than 6 nm on the final surface. Results indicate that these strategies are suitable for high-precision optical manufacturing.

  8. How to Avoid Errors in Error Propagation: Prediction Intervals and Confidence Intervals in Forest Biomass

    Science.gov (United States)

    Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.

    2016-12-01

    Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
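
    The distinction the abstract draws can be reproduced in a few lines. The sketch below fits a simple linear regression to synthetic (hypothetical) data and computes both the confidence interval for the mean response and the wider prediction interval for an individual at a new x.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical allometric-style data: log(mass) vs log(diameter).
x = rng.uniform(1.0, 4.0, 30)
y = 2.0 + 2.4 * x + rng.normal(0.0, 0.3, x.size)

n = x.size
b1, b0 = np.polyfit(x, y, 1)              # slope, intercept
resid = y - (b0 + b1 * x)
s = np.sqrt(resid @ resid / (n - 2))      # residual standard error
sxx = ((x - x.mean()) ** 2).sum()
tcrit = stats.t.ppf(0.975, n - 2)

x0 = 3.0
leverage = 1.0 / n + (x0 - x.mean()) ** 2 / sxx
ci = tcrit * s * np.sqrt(leverage)        # uncertainty in the mean response
pi = tcrit * s * np.sqrt(1.0 + leverage)  # uncertainty in an individual
print(f"95% CI half-width {ci:.3f}, 95% PI half-width {pi:.3f}")
```

    As in the abstract, the prediction interval dominates for small samples, while the confidence interval shrinks as n grows.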

  9. Performance of an Error Control System with Turbo Codes in Powerline Communications

    Directory of Open Access Journals (Sweden)

    Balbuena-Campuzano Carlos Alberto

    2014-07-01

    Full Text Available This paper reports the performance of turbo codes as an error control technique in PLC (Powerline Communications) data transmissions. For this system, computer simulations are used for modeling data networks based on the model classified in the technical literature as indoor, using OFDM (Orthogonal Frequency Division Multiplexing) as the modulation technique. Taking into account the channel, modulation and turbo codes, we propose a methodology to minimize the bit error rate (BER) as a function of the average received signal-to-noise ratio (SNR).
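
    A full turbo-coded OFDM/PLC simulation is beyond a short sketch, but the core methodology (estimating BER as a function of average SNR by Monte Carlo) can be illustrated with an uncoded BPSK-over-AWGN baseline, a deliberate simplification of the paper's system:

```python
import numpy as np

rng = np.random.default_rng(1)

def ber_bpsk_awgn(snr_db, n_bits=200_000):
    """Monte Carlo BER of uncoded BPSK over AWGN at a given Eb/N0 (dB)."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits           # map 0 -> +1, 1 -> -1
    ebn0 = 10.0 ** (snr_db / 10.0)
    noise_std = np.sqrt(1.0 / (2.0 * ebn0))
    received = symbols + rng.normal(0.0, noise_std, n_bits)
    decided = (received < 0).astype(int)  # hard decision
    return np.mean(decided != bits)

for snr in range(0, 9, 2):
    print(f"Eb/N0 = {snr} dB  BER = {ber_bpsk_awgn(snr):.4f}")
```

    A coded system would insert an encoder before the mapping and an iterative decoder after the channel; the BER-vs-SNR sweep itself is unchanged.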

  10. Stochastic and sensitivity analysis of shape error of inflatable antenna reflectors

    Science.gov (United States)

    San, Bingbing; Yang, Qingshan; Yin, Liwei

    2017-03-01

    Inflatable antennas are promising candidates for future satellite communications and space observations since they are lightweight, low-cost and have a small packaged volume. However, due to their high flexibility, inflatable reflectors are difficult to manufacture accurately, which may result in undesirable shape errors that affect their performance negatively. In this paper, the stochastic characteristics of shape errors induced during the manufacturing process are investigated using Latin hypercube sampling coupled with manufacture simulations. Four main random error sources are involved: errors in membrane thickness, errors in the elastic modulus of the membrane, boundary deviations and pressure variations. Using regression and correlation analysis, a global sensitivity study is conducted to rank the importance of these error sources. This global sensitivity analysis is novel in that it can take into account the random variation of, and the interaction between, error sources. Analyses are carried out parametrically with various focal-length-to-diameter ratios (F/D) and aperture sizes (D) of reflectors to investigate their effects on the significance ranking of error sources. The research reveals that the RMS (Root Mean Square) of the shape error is a random quantity with an exponential probability distribution and features great dispersion; with increasing F/D and D, both the mean value and the standard deviation of shape errors increase; in the proposed range, the significance ranking of error sources is independent of F/D and D; boundary deviation has the greatest effect, with a much higher weight than the others; pressure variation ranks second; and errors in the thickness and elastic modulus of the membrane rank last, with sensitivities very close to that of pressure variation. Finally, suggestions are given for controlling the shape accuracy of reflectors, and allowable values of the error sources are proposed from the perspective of reliability.
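
    The sampling-plus-ranking workflow can be sketched with scipy's quasi-Monte Carlo module. The surrogate model and influence weights below are purely hypothetical stand-ins for the manufacture simulation, chosen so that the ranking echoes the paper's finding that boundary deviation dominates:

```python
import numpy as np
from scipy.stats import qmc, spearmanr

# Latin hypercube sample of four normalized error-source magnitudes.
sampler = qmc.LatinHypercube(d=4, seed=2)
x = sampler.random(n=500)

names = ["thickness", "modulus", "boundary", "pressure"]
weights = np.array([0.5, 0.5, 5.0, 2.0])  # hypothetical influence weights

rng = np.random.default_rng(2)
# Surrogate RMS shape error (stand-in for a manufacture simulation).
y = x @ weights + rng.normal(0.0, 0.1, x.shape[0])

# Rank error sources by rank correlation with the response.
for name, col in zip(names, x.T):
    rho, _ = spearmanr(col, y)
    print(f"{name:9s} Spearman rho = {rho:+.2f}")
```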

  11. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
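
    For a linear observable the two methods are easy to compare numerically. The sketch below uses assumed slopes and standard deviations; in the linear region the note assumes, both estimates should agree with the analytic sum of (slope_i * sigma_i)^2.

```python
import numpy as np

rng = np.random.default_rng(3)
sigmas = np.array([0.5, 1.0, 2.0])   # std devs of three systematic parameters
slopes = np.array([0.8, -0.4, 0.3])  # linear response of the observable

def observable(params):
    return slopes @ params            # linear region, as assumed in the note

nominal = observable(np.zeros(3))

# Unisim: one MC run per parameter, shifted by +1 sigma; add shifts in quadrature.
unisim_var = sum((observable(sigmas * np.eye(3)[i]) - nominal) ** 2
                 for i in range(3))

# Multisim: every run varies all parameters at once.
draws = rng.normal(0.0, sigmas, size=(10_000, 3))
multisim_var = np.var([observable(d) for d in draws])

print(f"unisim variance   {unisim_var:.3f}")
print(f"multisim variance {multisim_var:.3f}")
print(f"analytic          {(slopes**2 * sigmas**2).sum():.3f}")
```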

  12. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  13. Methodology of sustainability accounting

    Directory of Open Access Journals (Sweden)

    O.H. Sokil

    2017-03-01

    Full Text Available Modern challenges in the theory and methodology of accounting are addressed through the formation and implementation of new concepts whose purpose is to meet users' needs for both standard and unique information. The development of a methodology for sustainability accounting is a key aspect of the management of an economic entity. The purpose of the article is to form the methodological bases of accounting for sustainable development and to determine its goals, objectives, object, subject, methods, functions and key aspects. The author analyzes the theoretical bases of the definition and considers the components of the traditional accounting methodology. A generalized structural diagram of the methodology for sustainable development accounting is offered in the article. The complex of methods and principles of sustainable development accounting has been systematized for standard and non-standard provisions. The new system of theoretical and methodological provisions of accounting for sustainable development is justified in the context of determining its purpose, objective, subject, object, methods, functions and key aspects.

  14. EPIC: an Error Propagation/Inquiry Code

    International Nuclear Information System (INIS)

    Baker, A.L.

    1985-01-01

    The use of the computer program EPIC (Error Propagation/Inquiry Code) is discussed. EPIC calculates the variance of a materials balance closed about a materials balance area (MBA) in a processing plant operated under steady-state conditions. It was designed for use in evaluating the significance of inventory differences in Department of Energy (DOE) nuclear plants. EPIC rapidly estimates the variance of a materials balance using average plant operating data. The intent is to learn as much as possible about problem areas in a process with simple, straightforward calculations, assuming the process is running in a steady-state mode. EPIC is designed to be used by plant personnel or others with little computer background; however, the user should be knowledgeable about measurement errors in the system being evaluated and have a limited knowledge of how error terms are combined in error propagation analyses. EPIC contains six variance equations; the appropriate equation is used to calculate the variance at each measurement point. After all of these variances are calculated, the total variance for the MBA is calculated as a simple algebraic sum of variances. The EPIC code runs on any computer that accepts a standard form of the BASIC language. 2 refs., 1 fig., 6 tabs
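
    The final step the abstract describes, an algebraic sum of per-measurement-point variances, can be sketched as follows. The measurement points, values and error components are hypothetical, and correlations that a full EPIC analysis might model are ignored:

```python
import numpy as np

# Hypothetical measurement points closing a materials balance:
# MB = beginning inventory + receipts - shipments - ending inventory.
# Each entry: (measured value in kg, relative random error, relative systematic error)
points = {
    "begin_inventory": (120.0, 0.002, 0.001),
    "receipts":        (300.0, 0.004, 0.002),
    "shipments":       (290.0, 0.004, 0.002),
    "end_inventory":   (125.0, 0.002, 0.001),
}

# Steady state: variances at each point combine as a simple algebraic sum.
var_mb = sum((v * r) ** 2 + (v * s) ** 2 for v, r, s in points.values())
print(f"sigma(MB) = {np.sqrt(var_mb):.3f} kg")
```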

  15. How well do we know the electromagnetic form factors of the proton?

    International Nuclear Information System (INIS)

    Arrington, J.

    2003-01-01

    Several experiments have extracted proton electromagnetic form factors from elastic cross section measurements using the Rosenbluth technique. Global analyses of these measurements indicate approximate scaling of the electric and magnetic form factors (μ_p G_E^p / G_M^p ≈ 1), in contrast to recent polarization transfer measurements from Jefferson Lab. We present here a global reanalysis of the cross section data aimed at understanding the disagreement between the Rosenbluth extraction and the polarization transfer data. We find that the individual cross section measurements are self-consistent, and that the new global analysis yields results that are still inconsistent with polarization measurements. This discrepancy indicates a fundamental problem in one of the two techniques, or a significant error in polarization transfer or cross section measurements. An error in the polarization data would imply a large error in the extracted electric form factor, while an error in the cross sections implies an uncertainty in the extracted form factors, even if the form factor ratio is measured exactly.

  16. Recognition of medical errors' reporting system dimensions in educational hospitals.

    Science.gov (United States)

    Yarmohammadian, Mohammad H; Mohammadinia, Leila; Tavakoli, Nahid; Ghalriz, Parvin; Haghshenas, Abbas

    2014-01-01

    Nowadays, medical errors are one of the serious issues in the health-care system and constitute a threat to patient safety. The most important step for achieving safety promotion is identifying errors and their causes in order to recognize, correct and eliminate them. Concern about recurring medical errors and the harm they cause has led to the design and establishment of medical error reporting systems in hospitals and centers that provide therapeutic services. The aim of this study is the recognition of the dimensions of medical error reporting systems in educational hospitals. This research is a descriptive-analytical, qualitative study carried out in the Shahid Beheshti educational therapeutic center in Isfahan during 2012. Relevant information was collected through 15 face-to-face interviews, each lasting about one hour, and five focus-group discussions of about 45 minutes each, composed of the matron, educational supervisor, health officer, health educator, and all of the head nurses. Data from the interviews and discussion sessions were coded; the results were then extracted in the presence of experts and, after their feedback was taken into account, categorized. To ensure the correctness of the information, the tables were presented to the interviewees and the final corrections were confirmed based on their views. After content analysis and subject coding, the information extracted from the interviews and discussion groups was divided into nine main categories, and their subsets were fully described. The identified dimensions comprise nine domains: the concept of medical error, error cases from the nurses' perspective, barriers to medical error reporting, employees' motivational factors for error reporting, the purposes of a medical error reporting system, the challenges and opportunities of error reporting, and a desired system

  17. Medical errors in hospitalized pediatric trauma patients with chronic health conditions

    Directory of Open Access Journals (Sweden)

    Xiaotong Liu

    2014-01-01

    Full Text Available Objective: This study compares medical errors in pediatric trauma patients with and without chronic conditions. Methods: The 2009 Kids' Inpatient Database, which included 123,303 trauma discharges, was analyzed. Medical errors were identified by International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis codes. The medical error rates per 100 discharges and per 1000 hospital days were calculated and compared between inpatients with and without chronic conditions. Results: Pediatric trauma patients with chronic conditions experienced a higher medical error rate compared with patients without chronic conditions: 4.04 (95% confidence interval: 3.75–4.33) versus 1.07 (95% confidence interval: 0.98–1.16) per 100 discharges. The rate of medical error differed by type of chronic condition. After controlling for confounding factors, the presence of a chronic condition increased the adjusted odds ratio of medical error by 37% if one chronic condition existed (adjusted odds ratio: 1.37, 95% confidence interval: 1.21–1.5) and 69% if more than one chronic condition existed (adjusted odds ratio: 1.69, 95% confidence interval: 1.48–1.53). In the adjusted model, length of stay had the strongest association with medical error, but the adjusted odds ratio for chronic conditions and medical error remained significantly elevated even when accounting for the length of stay, suggesting that medical complexity has a role in medical error. Higher adjusted odds ratios were seen in other subgroups. Conclusion: Chronic conditions are associated with a significantly higher rate of medical errors in pediatric trauma patients. Future research should evaluate interventions or guidelines for reducing the risk of medical errors in pediatric trauma patients with chronic conditions.

  18. Error Reduction in an Operating Environment - Comanche Peak Steam Electric Station

    International Nuclear Information System (INIS)

    Blevins, Mike; Gallman, Jim

    1998-01-01

    After noting that a program to manage human performance and to reduce human performance errors has achieved an 88% error reduction rate and a 99% reduction rate for significant errors, the authors present this program. It takes three cornerstones of human performance management into account: training, leadership and procedures. Other aspects are introduced: communication, corrective action programs, root cause analysis, seven steps of self-checking, trending, and a human performance enhancement program. These other aspects and their relationships are discussed. Program strengths and downsides are outlined, as well as actions needed for success. Another approach is then proposed which comprises proactive interventions and indicators for human performance. These indicators are identified and introduced by analyzing the anatomy of an event. The limitations of this model are discussed.

  19. Ways to rational management of accounts receivable at enterprises

    Directory of Open Access Journals (Sweden)

    Yevtushenko N. O.

    2015-05-01

    Full Text Available This article investigates the principal causes of problems in managing the accounts receivable of enterprises. A credit policy for managing enterprise accounts receivable is worked out, and the essence of the stages in constructing rational accounts receivable management is set out. The article describes the basic elements of an accounts receivable control system, such as mission, aims and strategy as management policy, and the basic stages of accounts receivable management policy at enterprises: analysis; organization of the formation of credit policy principles, credit terms and the procedure for collecting accounts receivable; planning of the use of modern forms of refinancing; and control.

  20. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed the 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code, which now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to the stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and these may be ameliorated by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  1. Medication administration errors in Eastern Saudi Arabia

    International Nuclear Information System (INIS)

    Mir Sadat-Ali

    2010-01-01

    To assess the prevalence and characteristics of medication errors (ME) in patients admitted to King Fahd University Hospital, Alkhobar, Kingdom of Saudi Arabia. Medication errors were documented by nurses and physicians on standard reporting forms (Hospital Based Incident Report). The study was carried out in King Fahd University Hospital, Alkhobar, Kingdom of Saudi Arabia, and all incident reports were collected during the period from January 2008 to December 2009. The incident reports were analyzed for age, gender, nationality, nursing unit, and the time at which the ME was reported. The data were analyzed, and statistically significant differences between groups were determined by Student's t-test; p-values of <0.05 at a 95% confidence interval were considered significant. There were 38 ME reported for the study period. The youngest patient was 5 days old and the oldest 70 years. There were 31 Saudi and 7 non-Saudi patients involved. The most common error was missed medication, which was seen in 15 (39.5%) patients. Fifteen (39.5%) of the errors occurred in 2 units (pediatric medicine, and obstetrics and gynecology). Nineteen (50%) of the errors occurred during the 3-11 pm shift. Our study shows that the prevalence of ME in our institution is low in comparison with the world literature. This could be due to under-reporting of errors, and we believe that ME reporting should be made less punitive so that ME can be studied and preventive measures implemented (Author).

  2. Optimization of sample absorbance for quantitative analysis in the presence of pathlength error in the IR and NIR regions

    International Nuclear Information System (INIS)

    Hirschfeld, T.; Honigs, D.; Hieftje, G.

    1985-01-01

    Optical absorbance levels for quantitative analysis in the presence of photometric error have been described in the past. In newer instrumentation, such as FT-IR and NIRA spectrometers, the photometric error is no longer limiting. In these instruments, pathlength error due to cell or sampling irreproducibility is often a major concern. One can derive the optimal absorbance by taking both pathlength and photometric errors into account. This paper analyzes the cases of pathlength error >> photometric error (trivial) and various cases in which the pathlength error and the photometric error are of the same order: adjustable concentration (trivial until dilution errors are considered), constant relative pathlength error (trivial), and constant absolute pathlength error. The latter, in particular, is analyzed in detail to give the behavior of the error, the behavior of the optimal absorbance in its presence, and the total error levels attainable.
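
    To illustrate the "constant absolute pathlength error" case, the sketch below combines a photometric (transmittance) error term, which alone gives the classic optimum A = 1/ln 10 ≈ 0.4343, with a pathlength term proportional to 1/A, and finds the combined optimum numerically. The error magnitudes are assumed values and the error model is a simplification of the paper's analysis.

```python
import numpy as np
from scipy.optimize import minimize_scalar

LN10 = np.log(10.0)
sigma_T = 0.005  # absolute transmittance (photometric) error, assumed
k = 0.01         # constant absolute pathlength error expressed as k/A, assumed

def rel_conc_error(A):
    """Relative concentration error vs absorbance (Beer's law, both error sources)."""
    photometric = sigma_T * 10.0 ** A / (A * LN10)  # from A = -log10(T)
    pathlength = k / A                              # constant absolute db at fixed c
    return np.hypot(photometric, pathlength)

res = minimize_scalar(rel_conc_error, bounds=(0.05, 2.5), method="bounded")
print(f"optimal absorbance = {res.x:.3f}, min relative error = {res.fun:.4f}")
print(f"photometric-only optimum for comparison: A = {1 / LN10:.4f}")
```

    The pathlength term grows as absorbance falls, so the combined optimum shifts above the photometric-only value, consistent with the qualitative behavior the abstract describes.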

  3. Technology development for nuclear material accountability

    International Nuclear Information System (INIS)

    Hong, Jong Sook; Lee, Byung Doo; Cha, Hong Ryul; Choi, Hyoung Nae; Park, Ho Jun.

    1990-01-01

    Neutron yields from the ¹⁹F(α,n)²²Na reaction and from neutron interactions with uranium-based materials, together with the characteristics of a shielded neutron assay probe, have been studied. On the basis of this examination, the U-235 enrichment in UF6 cylinders such as models 30B and 48Y was measured via this reaction, and the U-235 content of the containers was measured by the non-destructive total passive neutron assay method. The total measurement efficiency was found to be 6.44 × 10⁻⁴ and 1.25 × 10⁻⁴ for the model 30B and model 48Y UF6 cylinders, respectively. The uncertainty of the measured enrichment, compared to the tag value obtained from chemical analysis, approached a relative error of about 5% at the 95% confidence interval. As a follow-up to the previously developed (1988) computer system for nuclear material accounting, an error searching and treatment routine in accordance with IAEA Code 10 and the respective facility attachment has been added, easing the burden of manual error correction by the operator. In addition, a procedure for LEMUF calculation has been prepared to help bulk facility operators evaluate MUF over the material balance period. (author)

  4. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  5. Operator quantum error-correcting subsystems for self-correcting quantum memories

    International Nuclear Information System (INIS)

    Bacon, Dave

    2006-01-01

    The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. Recently this notion has led to a more general notion of quantum error correction known as operator quantum error correction. In standard quantum error-correcting codes, one requires the ability to apply a procedure which exactly reverses, on the error-correcting subspace, any correctable error. In contrast, for operator error-correcting subsystems, the correction procedure need not undo the error which has occurred; instead one must perform corrections only modulo the subsystem structure. This does not lead to codes which differ from subspace codes, but does lead to recovery routines which explicitly make use of the subsystem structure. Here we present two examples of such operator error-correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature quantum memory, robust to noise without external intervening quantum error-correction procedures.

  6. Accountability and need for cognition effects on contrast, halo, and accuracy in performance ratings.

    Science.gov (United States)

    Palmer, Jerry K; Feldman, Jack M

    2005-03-01

    In the present study, the authors investigated the effects of accountability and need for cognition on contrast errors, halo, and accuracy of performance ratings examined in good and poor performance context conditions, as well as in a context-free control condition. The accountability manipulation reduced the contrast effect and also modified rater recall of good ratee behavior. Accountability reduced halo in ratings and increased rating accuracy in a poor performance context. Accountability also interacted with need for cognition in predicting individual rater halo.

  7. Prediction Errors of Molecular Machine Learning Models Lower than Hybrid DFT Error.

    Science.gov (United States)

    Faber, Felix A; Hutchison, Luke; Huang, Bing; Gilmer, Justin; Schoenholz, Samuel S; Dahl, George E; Vinyals, Oriol; Kearnes, Steven; Riley, Patrick F; von Lilienfeld, O Anatole

    2017-11-14

    We investigate the impact of choosing regressors and molecular representations for the construction of fast machine learning (ML) models of 13 electronic ground-state properties of organic molecules. The performance of each regressor/representation/property combination is assessed using learning curves which report out-of-sample errors as a function of training set size with up to ∼118k distinct molecules. Molecular structures and properties at the hybrid density functional theory (DFT) level of theory come from the QM9 database [Ramakrishnan et al., Sci. Data 2014, 1, 140022] and include enthalpies and free energies of atomization, HOMO/LUMO energies and gap, dipole moment, polarizability, zero point vibrational energy, heat capacity, and the highest fundamental vibrational frequency. Various molecular representations have been studied (Coulomb matrix, bag of bonds, BAML and ECFP4, molecular graphs (MG)), as well as newly developed distribution based variants including histograms of distances (HD), angles (HDA/MARAD), and dihedrals (HDAD). Regressors include linear models (Bayesian ridge regression (BR) and linear regression with elastic net regularization (EN)), random forest (RF), kernel ridge regression (KRR), and two types of neural networks, graph convolutions (GC) and gated graph networks (GG). Out-of-sample errors are strongly dependent on the choice of representation, regressor and molecular property. Electronic properties are typically best accounted for by MG and GC, while energetic properties are better described by HDAD and KRR. The specific combinations with the lowest out-of-sample errors in the ∼118k training set size limit are (free) energies and enthalpies of atomization (HDAD/KRR), HOMO/LUMO eigenvalue and gap (MG/GC), dipole moment (MG/GC), static polarizability (MG/GG), zero point vibrational energy (HDAD/KRR), heat capacity at room temperature (HDAD/KRR), and highest fundamental vibrational frequency (BAML/RF). We present numerical

  8. Ergodic Capacity Analysis of Free-Space Optical Links with Nonzero Boresight Pointing Errors

    KAUST Repository

    Ansari, Imran Shafique; Alouini, Mohamed-Slim; Cheng, Julian

    2015-01-01

    A unified capacity analysis of a free-space optical (FSO) link that accounts for nonzero boresight pointing errors and both types of detection techniques (i.e., intensity modulation/direct detection as well as heterodyne detection) is addressed.

  9. Kalman filtering and smoothing for linear wave equations with model error

    International Nuclear Information System (INIS)

    Lee, Wonjung; McDougall, D; Stuart, A M

    2011-01-01

    Filtering is a widely used methodology for the incorporation of observed data into time-evolving systems. It provides an online approach to state estimation inverse problems when data are acquired sequentially. The Kalman filter plays a central role in many applications because it is exact for linear systems subject to Gaussian noise, and because it forms the basis for many approximate filters which are used in high-dimensional systems. The aim of this paper is to study the effect of model error on the Kalman filter, in the context of linear wave propagation problems. A consistency result is proved when no model error is present, showing recovery of the true signal in the large data limit. This result, however, is not robust: it is also proved that arbitrarily small model error can lead to inconsistent recovery of the signal in the large data limit. If the model error is in the form of a constant shift to the velocity, the filtering and smoothing distributions only recover a partial Fourier expansion, a phenomenon related to aliasing. On the other hand, for a class of wave velocity model errors which are time dependent, it is possible to recover the filtering distribution exactly, but not the smoothing distribution. Numerical results are presented which corroborate the theory, and also propose a computational approach which overcomes the inconsistency in the presence of model error, by relaxing the model
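
    The inconsistency result can be seen in miniature by filtering a single Fourier mode of a wave whose assumed phase speed is slightly wrong. The sketch below is an illustrative toy, not the paper's setup; the 5% velocity bias and the noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# One Fourier mode of a wave, advected at true speed c; the filter
# predicts with a (possibly wrong) speed c_model -- the "model error".
dt, n_steps = 0.1, 200
c_true, c_model = 1.0, 1.05  # 5% velocity model error (assumed)
obs_var = 0.05 ** 2

def rotation(c):
    # exp(i*c*dt) acting on (Re, Im) components of the mode.
    th = c * dt
    return np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])

x_true = np.array([1.0, 0.0])
x_est = x_true.copy()
P = np.eye(2) * 0.1
H = np.eye(2)

errs = []
for _ in range(n_steps):
    x_true = rotation(c_true) @ x_true
    y = x_true + rng.normal(0.0, np.sqrt(obs_var), 2)  # noisy observation
    # Predict with the biased model (no process noise assumed).
    F = rotation(c_model)
    x_est = F @ x_est
    P = F @ P @ F.T
    # Kalman update.
    S = H @ P @ H.T + obs_var * np.eye(2)
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (y - H @ x_est)
    P = (np.eye(2) - K @ H) @ P
    errs.append(np.linalg.norm(x_est - x_true))

print(f"mean filter error with 5% velocity bias: {np.mean(errs):.4f}")
```

    With no process noise, the gain shrinks over time and the velocity bias accumulates, echoing the paper's point that arbitrarily small model error can defeat consistency; inflating P (relaxing the model) is the kind of remedy the abstract alludes to.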

  10. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

    1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, selected according to the magnitude of x (the largest-argument region being |x| .GE. 4.0). In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function by using the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x
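
    The numerical caution at the end of this record is easy to demonstrate. The sketch below uses Python's math.erf and math.erfc (rather than the program's rational approximations) to show how the identity erfc(x) = 1.0 - erf(x) loses all significance once erf(x) rounds to 1 in double precision.

```python
# Demonstrates the caution in the record above: computing erfc(x) as
# 1 - erf(x) loses significance for large x, while a direct erfc
# implementation (here Python's math.erfc) does not.
import math

for x in (1.0, 3.0, 5.0, 8.0):
    direct = math.erfc(x)             # computed directly
    via_identity = 1.0 - math.erf(x)  # subtraction: catastrophic cancellation
    print(f"x={x}: erfc={direct:.3e}, 1-erf={via_identity:.3e}")
```

    At x = 8, erf(x) is indistinguishable from 1 in double precision, so the identity returns exactly 0.0 even though erfc(8) is of order 1e-29.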

  11. Analysis of refractive error in visual impairment among residents aged 40 years and above in Dongguan City

    Directory of Open Access Journals (Sweden)

    Shu-Hui Chen

    2015-06-01

    Full Text Available AIM: To investigate the prevalence of visual impairment caused by refractive error among residents aged 40 years and above, and the factors influencing vision correction. METHODS: We conducted an epidemiological survey of diabetes and diabetic retinopathy among residents aged 40 and above in Hengli Town, Dongguan City, Guangdong Province, from 2011 to 2012, with visual impairment classified on the basis of daily-life (presenting) vision according to World Health Organization (WHO) criteria (0.05 ≤ visual acuity < 0.3 for low vision). RESULTS: The prevalence of visual impairment was 7.90% (707/8952). The prevalence of visual impairment caused by refractive error was 5.57% (499/8952), accounting for 70.58% (499/707) of all visual impairment. The prevalence of correctable refractive error among the visually impaired was 5.36% (480/8952), accounting for 67.89% (480/707) of visual impairment, while visual impairment that could not be corrected accounted for 0.21% (19/8952), or 2.69% (19/707). By χ2 test, the rate of visual impairment caused by refractive error decreased significantly with increasing age (P<0.05), and the rate of correctable visual impairment likewise decreased with age, from 92.1% to 49.1%, a statistically significant difference (P<0.05). CONCLUSION: Active correction of refractive error can improve eyesight in about two thirds of patients with visual impairment in daily life and improve residents' quality of life.

  12. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Directory of Open Access Journals (Sweden)

    Pitchaiah Mandava

    Full Text Available OBJECTIVE: Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. METHODS: We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. RESULTS: Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. CONCLUSION: We show that when uncertainty in assessments is considered, the lowest error rates are obtained with dichotomization. While using the full range of mRS is conceptually appealing, a gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment.
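
    The following sketch captures the core computation in spirit, assuming made-up numbers throughout: given an outcome distribution over the seven mRS categories and an inter-rater confusion ("noise") matrix, it compares the misclassification probability of the full "shift" scale with that of a dichotomized cut-point.

```python
# Hedged sketch of the error-percentage idea: both distributions below are
# invented illustrative numbers, not the published mRS data.
import numpy as np

p_true = np.array([0.10, 0.15, 0.15, 0.20, 0.20, 0.10, 0.10])  # P(true mRS)
# confusion[i, j] = P(observed j | true i); mostly diagonal, some spillover
confusion = np.zeros((7, 7))
for i in range(7):
    confusion[i, i] = 0.8
    if i > 0: confusion[i, i - 1] += 0.1
    if i < 6: confusion[i, i + 1] += 0.1
confusion /= confusion.sum(axis=1, keepdims=True)

# Full-scale ("shift") error: observed category differs from true category
shift_error = sum(p_true[i] * (1 - confusion[i, i]) for i in range(7))

# Dichotomized at mRS <= 1: error only if noise crosses the cut-point
cut = 1
dich_error = sum(
    p_true[i] * confusion[i, j]
    for i in range(7) for j in range(7)
    if (i <= cut) != (j <= cut)
)
print(f"shift error: {shift_error:.1%}, dichotomized error: {dich_error:.1%}")
```

    Because a dichotomization only registers errors that cross the cut-point, its error rate is necessarily no larger than that of the full scale, which is the trade-off the abstract describes.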

  13. Using Fault Trees to Advance Understanding of Diagnostic Errors.

    Science.gov (United States)

    Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep

    2017-11-01

    Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. How factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention is demonstrated. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.
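
    A toy evaluation of a fault tree's top-event probability may clarify the quantitative side of FTA. The tree structure, the basic events, and their probabilities below are invented for illustration and are not taken from the ten reviewed cases; independence of the basic events is assumed.

```python
# Toy fault-tree evaluation: a top event fed by OR/AND gates over basic
# events with assumed independent probabilities (all numbers hypothetical).
from functools import reduce

def p_or(*ps):   # P(at least one occurs), independence assumed
    return 1 - reduce(lambda acc, p: acc * (1 - p), ps, 1.0)

def p_and(*ps):  # P(all occur), independence assumed
    return reduce(lambda acc, p: acc * p, ps, 1.0)

p_history_missed   = 0.05  # key history element not elicited
p_test_not_ordered = 0.10  # indicated test never ordered
p_result_not_seen  = 0.02  # abnormal result not reviewed
p_followup_failed  = 0.08  # follow-up safety net fails

# Diagnostic error if history is missed, OR the test is not ordered, OR
# both barriers on the result pathway fail (an AND gate models redundancy).
p_top = p_or(p_history_missed,
             p_test_not_ordered,
             p_and(p_result_not_seen, p_followup_failed))
print(f"P(top event: diagnostic error) = {p_top:.4f}")
```

    The AND gate illustrates why redundant barriers matter: the result pathway contributes only the product of its two failure probabilities, while single-point failures feed the OR gate directly.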

  14. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  15. Medical error identification, disclosure, and reporting: do emergency medicine provider groups differ?

    Science.gov (United States)

    Hobgood, Cherri; Weiner, Bryan; Tamayo-Sarver, Joshua H

    2006-04-01

    To determine if the three types of emergency medicine providers--physicians, nurses, and out-of-hospital providers (emergency medical technicians [EMTs])--differ in their identification, disclosure, and reporting of medical error. A convenience sample of providers in an academic emergency department evaluated ten case vignettes that represented two error types (medication and cognitive) and three severity levels. For each vignette, providers were asked the following: 1) Is this an error? 2) Would you tell the patient? 3) Would you report this to a hospital committee? To assess differences in identification, disclosure, and reporting by provider type, error type, and error severity, the authors constructed three-way tables with the nonparametric Somers' D clustered on participant. To assess the contribution of disclosure instruction and environmental variables, fixed-effects regression stratified by provider type was used. Of the 116 providers who were eligible, 103 (40 physicians, 26 nurses, and 35 EMTs) had complete data. Physicians were more likely to classify an event as an error (78%) than nurses (71%; p = 0.04) or EMTs (68%). Nurses were less likely to disclose the error to the patient (59%) than physicians (71%; p = 0.04). Physicians were the least likely to report the error (54%) compared with nurses (68%; p = 0.02) or EMTs (78%). Across error types, identification, disclosure, and reporting increased with increasing severity. Improving patient safety hinges on the ability of health care providers to accurately identify, disclose, and report medical errors. Interventions must account for differences in error identification, disclosure, and reporting by provider type.

  16. Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes

    Science.gov (United States)

    Costello, D. J., Jr.; Deng, H.; Lin, S.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Special high-speed decoding techniques for extended single- and double-error-correcting RS codes are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to form the error locator polynomial and solve for its roots.

  17. Functional Forms, Exogenous Shifts, and Economic Surplus Changes

    OpenAIRE

    Xueyan Zhao; John D. Mullen; Gary R. Griffith

    1997-01-01

    Conditions for exact welfare measures in equilibrium displacement modeling are examined. These relate to the functional form of supply and demand, the nature of the exogenous shift, and the definition of percentage changes. Approximation errors arising when these conditions are not met in empirical applications are investigated, and analytical expressions for the errors are derived. Significant errors are possible when a proportional shift is assumed. The assumptions underlying Alston and Wohlgenant's emp...

  18. An approach to improving the structure of error-handling code in the linux kernel

    DEFF Research Database (Denmark)

    Saha, Suman; Lawall, Julia; Muller, Gilles

    2011-01-01

    The C language does not provide any abstractions for exception handling or other forms of error handling, leaving programmers to devise their own conventions for detecting and handling errors. The Linux coding style guidelines suggest placing error handling code at the end of each function, where... We present an automatic program transformation that transforms error-handling code into this style. We have applied our transformation to the Linux 2.6.34 kernel source code, on which it reorganizes the error-handling code of over 1800 functions, in about 25 minutes.

  19. Models and error analyses of measuring instruments in accountability systems in safeguards control

    International Nuclear Information System (INIS)

    Dattatreya, E.S.

    1977-05-01

    Essentially three types of measuring instruments are used in plutonium accountability systems: (1) the bubblers, for measuring the total volume of liquid in the holding tanks, (2) coulometers, titration apparatus and calorimeters, for measuring the concentration of plutonium; and (3) spectrometers, for measuring isotopic composition. These three classes of instruments are modeled and analyzed. Finally, the uncertainty in the estimation of total plutonium in the holding tank is determined

  20. A Survey of Wireless Fair Queuing Algorithms with Location-Dependent Channel Errors

    Directory of Open Access Journals (Sweden)

    Anca VARGATU

    2011-01-01

    Full Text Available The rapid development of wireless networks has brought more and more attention to topics related to the fair allocation of resources, the design of suitable algorithms that take into account the special characteristics of the wireless environment, and the assurance of fair access to the transmission channel with bounded delay and guaranteed throughput. Fair allocation of resources in wireless networks poses significant challenges because of errors that occur only in these networks, such as location-dependent and bursty channel errors. In wireless networks it frequently happens, because of radio interference, that a user experiencing bad radio conditions during a period of time receives no resources in that period. This paper analyzes some resource allocation algorithms for wireless networks with location-dependent errors, specifying the basic idea of each algorithm and the way it works. The analyzed fair queuing algorithms differ in the way they treat the following aspects: how to select the flows which should receive additional services, how to allocate these resources, what proportion is received by error-free flows, and how the flows affected by errors are compensated.

  1. "First, know thyself": cognition and error in medicine.

    Science.gov (United States)

    Elia, Fabrizio; Aprà, Franco; Verhovez, Andrea; Crupi, Vincenzo

    2016-04-01

    Although error is an integral part of the world of medicine, physicians have always been little inclined to take into account their own mistakes, and the extraordinary technological progress observed in the last decades does not seem to have resulted in a significant reduction in the percentage of diagnostic errors. The failure to reduce diagnostic errors, notwithstanding the considerable investment in human and economic resources, has paved the way to new strategies which were made available by the development of cognitive psychology, the branch of psychology that aims at understanding the mechanisms of human reasoning. This new approach led us to realize that we are not fully rational agents able to take decisions on the basis of logical and probabilistically appropriate evaluations. In us, two different and mostly independent modes of reasoning coexist: a fast or non-analytical reasoning, which tends to be largely automatic and fast-reactive, and a slow or analytical reasoning, which permits rationally founded answers. One of the features of the fast mode of reasoning is the employment of standardized rules, termed "heuristics." Heuristics lead physicians to correct choices in a large percentage of cases. Unfortunately, cases exist wherein the heuristic triggered fails to fit the target problem, so that the fast mode of reasoning can lead us to unreflectively perform actions exposing us and others to variable degrees of risk. Cognitive errors arise as a result of these cases. Our review illustrates how cognitive errors can cause diagnostic problems in clinical practice.

  2. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
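
    A numerical caricature of the comparison, assuming a linear observable, Gaussian systematic parameters, and limited MC statistics per run (all toy choices, not the paper's exact setup):

```python
# Toy comparison of unisim vs multisim variance estimates.
import numpy as np

rng = np.random.default_rng(2)
n_sys, n_events = 10, 2000
sens = rng.normal(size=n_sys)            # d(observable)/d(parameter)
true_var = np.sum(sens ** 2)             # exact systematic variance

def mc_observable(params, n=n_events):
    """Observable from one MC run: linear effect + statistical noise."""
    stat_noise = rng.normal(scale=5.0) / np.sqrt(n)
    return params @ sens + stat_noise

# unisim: one run per parameter, shifted by +1 sigma, minus a nominal run
shifts = [mc_observable(np.eye(n_sys)[i]) - mc_observable(np.zeros(n_sys))
          for i in range(n_sys)]
unisim_var = np.sum(np.square(shifts))

# multisim: all parameters varied at once in each of many runs
draws = [mc_observable(rng.normal(size=n_sys)) for _ in range(200)]
multisim_var = np.var(draws)

print(f"true {true_var:.3f}  unisim {unisim_var:.3f}  "
      f"multisim {multisim_var:.3f}")
```

    Both estimates are contaminated by the per-run statistical noise; rerunning with a larger noise scale shows the unisim sum of squares picking up a noise term per parameter, which is the regime where the abstract says multisim wins.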

  3. Varying coefficients model with measurement error.

    Science.gov (United States)

    Li, Liang; Greene, Tom

    2008-06-01

    We propose a semiparametric partially varying coefficient model to study the relationship between serum creatinine concentration and the glomerular filtration rate (GFR) among kidney donors and patients with chronic kidney disease. A regression model is used to relate serum creatinine to GFR and demographic factors in which coefficient of GFR is expressed as a function of age to allow its effect to be age dependent. GFR measurements obtained from the clearance of a radioactively labeled isotope are assumed to be a surrogate for the true GFR, with the relationship between measured and true GFR expressed using an additive error model. We use locally corrected score equations to estimate parameters and coefficient functions, and propose an expected generalized cross-validation (EGCV) method to select the kernel bandwidth. The performance of the proposed methods, which avoid distributional assumptions on the true GFR and residuals, is investigated by simulation. Accounting for measurement error using the proposed model reduced apparent inconsistencies in the relationship between serum creatinine and GFR among different clinical data sets derived from kidney donor and chronic kidney disease source populations.

  4. Reducing the sensitivity of IMPT treatment plans to setup errors and range uncertainties via probabilistic treatment planning

    International Nuclear Information System (INIS)

    Unkelbach, Jan; Bortfeld, Thomas; Martin, Benjamin C.; Soukup, Martin

    2009-01-01

    Treatment plans optimized for intensity modulated proton therapy (IMPT) may be very sensitive to setup errors and range uncertainties. If these errors are not accounted for during treatment planning, the dose distribution realized in the patient may be strongly degraded compared to the planned dose distribution. The authors implemented the probabilistic approach to incorporate uncertainties directly into the optimization of an intensity modulated treatment plan. Following this approach, the dose distribution depends on a set of random variables which parameterize the uncertainty, as does the objective function used to optimize the treatment plan. The authors optimize the expected value of the objective function. They investigate IMPT treatment planning regarding range uncertainties and setup errors. They demonstrate that incorporating these uncertainties into the optimization yields qualitatively different treatment plans compared to conventional plans which do not account for uncertainty. The sensitivity of an IMPT plan depends on the dose contributions of individual beam directions. Roughly speaking, steep dose gradients in beam direction make treatment plans sensitive to range errors. Steep lateral dose gradients make plans sensitive to setup errors. More robust treatment plans are obtained by redistributing dose among different beam directions. This can be achieved by the probabilistic approach. In contrast, the safety margin approach as widely applied in photon therapy fails in IMPT and is neither suitable for handling range variations nor setup errors.
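
    The expected-value objective described above reduces, for a quadratic objective and discrete scenarios, to a scenario-weighted least-squares problem. The sketch below illustrates only that reduction; the dose-influence matrices are random stand-ins rather than a real treatment-planning geometry, and nonnegativity of the fluences is imposed only crudely.

```python
# Minimal sketch of the probabilistic idea: optimize the *expected* value of
# a quadratic dose objective over uncertainty scenarios, not the nominal
# scenario alone. All matrices and sizes are toy assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_vox, n_beamlets, n_scen = 40, 12, 5
d_presc = np.ones(n_vox)                         # prescribed dose
D = [rng.uniform(0, 1, (n_vox, n_beamlets)) for _ in range(n_scen)]
w = np.full(n_scen, 1.0 / n_scen)                # scenario probabilities

# Expected objective  sum_s w_s ||D_s x - d||^2  is a stacked least squares.
A = np.vstack([np.sqrt(ws) * Ds for ws, Ds in zip(w, D)])
b = np.concatenate([np.sqrt(ws) * d_presc for ws in w])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
x = np.clip(x, 0, None)                          # crude nonnegativity

worst = max(np.abs(Ds @ x - d_presc).max() for Ds in D)
print(f"worst-case voxel deviation across scenarios: {worst:.3f}")
```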

  5. Dye shift: a neglected source of genotyping error in molecular ecology.

    Science.gov (United States)

    Sutton, Jolene T; Robertson, Bruce C; Jamieson, Ian G

    2011-05-01

    Molecular ecologists must be vigilant in detecting and accounting for genotyping error, yet potential errors stemming from dye-induced mobility shift (dye shift) may be frequently neglected and largely unknown to researchers who employ 3-primer systems with automated genotyping. When left uncorrected, dye shift can lead to mis-scoring alleles and even to falsely calling new alleles if different dyes are used to genotype the same locus in subsequent reactions. When we used four different fluorophore labels from a standard dye set to genotype the same set of loci, differences in the resulting size estimates for a single allele ranged from 2.07 bp to 3.68 bp. The strongest effects were associated with the fluorophore PET, and relative degree of dye shift was inversely related to locus size. We found little evidence in the literature that dye shift is regularly accounted for in 3-primer studies, despite knowledge of this phenomenon existing for over a decade. However, we did find some references to erroneous standard correction factors for the same set of dyes that we tested. We thus reiterate the need for strict quality control when attempting to reduce possible sources of genotyping error, and in cases where different dyes are applied to a single locus, perhaps mistakenly, we strongly discourage researchers from assuming generic correction patterns. © 2011 Blackwell Publishing Ltd.

  6. Stereotype threat can reduce older adults' memory errors.

    Science.gov (United States)

    Barber, Sarah J; Mather, Mara

    2013-01-01

    Stereotype threat often incurs the cost of reducing the amount of information that older adults accurately recall. In the current research, we tested whether stereotype threat can also benefit memory. According to the regulatory focus account of stereotype threat, threat induces a prevention focus in which people become concerned with avoiding errors of commission and are sensitive to the presence or absence of losses within their environment. Because of this, we predicted that stereotype threat might reduce older adults' memory errors. Results were consistent with this prediction. Older adults under stereotype threat had lower intrusion rates during free-recall tests (Experiments 1 and 2). They also reduced their false alarms and adopted more conservative response criteria during a recognition test (Experiment 2). Thus, stereotype threat can decrease older adults' false memories, albeit at the cost of fewer veridical memories, as well.

  7. An analytical examination of distortions in power spectra due to sampling errors

    International Nuclear Information System (INIS)

    Njau, E.C.

    1982-06-01

    Distortions introduced into the spectral energy densities of sinusoid signals, as well as those of more complex signals, through different forms of errors in signal sampling are derived and demonstrated analytically. The approach we have adopted in doing this involves, firstly, developing for each type of signal and for the corresponding form of sampling errors an analytical expression that gives the faulty digitization process involved in terms of the features of the particular signal. Secondly, we take advantage of a method described elsewhere [IC/82/44] to relate, as much as possible, the true spectral energy density of the signal and the corresponding spectral energy density of the faulty digitization process. Thirdly, we then develop expressions which reveal the distortions that are formed in the directly computed spectral energy density of the digitized signal. It is evident from the formulations developed herein that the types of sampling errors taken into consideration may create false peaks and other distortions that are of non-negligible concern in computed power spectra. (author)
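
    A quick numerical companion to the analytical treatment (not the paper's derivation): sampling a pure sinusoid at jittered instead of ideal times spreads its spectral energy and raises a spurious floor in the directly computed power spectrum.

```python
# Numerical illustration: timing jitter in the sampling of a pure sinusoid
# spreads spectral energy and raises a false broadband floor in the PSD.
import numpy as np

rng = np.random.default_rng(4)
n, f0, fs = 1024, 125.0, 1000.0       # f0 chosen to sit exactly on a bin
t_ideal = np.arange(n) / fs
t_jitter = t_ideal + rng.normal(scale=5e-4, size=n)  # faulty sample times

for label, t in (("ideal", t_ideal), ("jittered", t_jitter)):
    x = np.sin(2 * np.pi * f0 * t)
    psd = np.abs(np.fft.rfft(x)) ** 2 / n
    print(f"{label}: peak/floor ratio = {psd.max() / np.median(psd):.1e}")
```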

  8. ACCOUNTING BETWEEN LAW, ETHICS AND MORALITY

    OpenAIRE

    Anca-Simona N. HROMEI; Maria-Madalina I. VOINEA

    2013-01-01

    This paper deals with the fact that nowadays society and business show high expectations regarding the accounting discipline, and therefore professionals in this area should expand their horizons to meet all requirements. First of all, accounting has assumed a certain responsibility to the public interest through its fundamental purpose, namely to provide financial-accounting information, information that forms the basis of decision making. Second of all, for the successful fulfilment of the pu...

  9. STRATEGIC MANAGEMENT ACCOUNTING: DEFINITION AND TOOLS

    OpenAIRE

    Pylypiv, Nadiia; Pіatnychuk, Iryna

    2017-01-01

    The article is dedicated to examining the essence of the definition of "strategic management accounting" in domestic and foreign literature. Strategic management accounting tools have been studied, and the constraints that affect their choice identified. As a result of the study, the authors formed their own understanding of strategic management accounting. The tools common to both traditional management accounting and strategic management accounting, and the specific tools necessary for efficient implementati...

  10. e-Gamma: Nuclear Material Accountancy and Control System in Brazil

    International Nuclear Information System (INIS)

    Negri Ferreira, S.; Souza Dunley, L.

    2015-01-01

    The Brazilian Nuclear Energy Commission (CNEN) is the government organization responsible for regulating all nuclear activities in Brazil and for ensuring that international safeguards are implemented according to the international agreements. In 2006 CNEN initiated a project aiming at the development and implementation of a web-based system (e-Gamma) for online nuclear material accountancy and control. In January 2014, after three years of beta testing, e-Gamma finally became the official nuclear material accountancy system in Brazil. e-Gamma is a web system hosted on a dedicated server in a secure environment maintained at CNEN headquarters. Secure access is provided by the use of digital client certificates and internal user pre-authorization for login, as well as multiple access profiles, each with specific function menus. The system's operation is based on source documents for each inventory change, prepared and updated by the MBA operators with the help of specific forms with strong validations. After a document is concluded, the system records the inventory change in a general ledger. Monthly, CNEN officers analyze the general ledgers of each MBA and generate the applicable reports through the system [Inventory Change Reports (ICR), Physical Inventory List (PIL), and Material Balance Report (MBR)]. The system allows the running of managerial queries and has brought to CNEN much more control and traceability of the inventory changes, and significant reductions in typing errors, costs and inspection efforts. Therefore, more efficient accountancy verification procedures at national and international levels are expected, as well as remote accountancy verification prior to an inspection. The proposed paper will describe the e-Gamma system and its main features, and the oral presentation will include a brief demonstration of some functionalities through the use of a local version installed on a notebook. (author)

  11. ACCOUNTING PARADIGMS WHICH FAVOR HISTORICAL COST

    Directory of Open Access Journals (Sweden)

    Valentin Gabriel CRISTEA

    2014-05-01

    Full Text Available Henning Kirkegaard shows that the evolution of accounting consists in shifting from one paradigm to another. The business continuity perspective should guide the company into the future, without confining it exclusively to the past. Accounting in its classical form, however, cannot be dissociated from valuation at historical cost.

  12. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review article explains the definition of medication errors, the scope of the medication error problem, the types of medication errors, their common causes, the monitoring and consequences of medication errors, and the prevention and management of medication errors, with clear tables that are easy to understand.

  13. Using Benford’s Law to detect data error and fraud: An examination of companies listed on the Johannesburg Stock Exchange

    Directory of Open Access Journals (Sweden)

    A Saville

    2014-06-01

    Full Text Available Accounting numbers generally obey a mathematical law called Benford’s Law, and this outcome is so unexpected that manipulators of information generally fail to observe the law. Armed with this knowledge, it becomes possible to detect the occurrence of accounting data that are presented fraudulently. However, the law also allows for the possibility of detecting instances where data are presented containing errors. Given this backdrop, this paper uses data drawn from companies listed on the Johannesburg Stock Exchange to test the hypothesis that Benford’s Law can be used to identify false or fraudulent reporting of accounting data. The results support the argument that Benford’s Law can be used effectively to detect accounting error and fraud. Accordingly, the findings are of particular relevance to auditors, shareholders, financial analysts, investment managers, private investors and other users of publicly reported accounting data, such as the revenue services
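
    The detection idea generalizes readily to code. The sketch below implements the standard first-digit Benford test, comparing observed leading-digit frequencies against log10(1 + 1/d) with a chi-square statistic; the figures in the example list are hypothetical, not drawn from the JSE data.

```python
# Standard first-digit Benford test (the general technique, not the paper's
# exact procedure).
import math
from collections import Counter

def leading_digit(x):
    s = f"{abs(x):.15g}".lstrip("0.")
    return int(s[0])

def benford_chi2(values):
    digits = [leading_digit(v) for v in values if v != 0]
    n = len(digits)
    counts = Counter(digits)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        chi2 += (counts.get(d, 0) - expected) ** 2 / expected
    return chi2  # compare against chi-square with 8 degrees of freedom

# Hypothetical line items from a set of financial statements:
figures = [8341, 1250, 1872, 391, 2210, 1044, 672, 9310, 1505, 3127,
           1189, 458, 2694, 1730, 815, 1402, 6620, 1096, 2238, 1951]
print(f"chi-square vs Benford: {benford_chi2(figures):.2f} (df = 8)")
```

    A large chi-square relative to the 8-degree-of-freedom reference distribution flags a data set whose leading digits deviate from Benford's Law, which is the signal the paper exploits for error and fraud detection.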

  14. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,

  15. PRESENTATION OF STATE SUPPORT (GRANTS IN ACCOUNTING POLICY OF POLAND

    Directory of Open Access Journals (Sweden)

    K. Zuk

    2014-01-01

    Full Text Available Since Poland's admission to the European Union, Polish enterprises have been able to make use of state support in various forms, including support for investment, research and development, consulting, staff qualification, financing of participation in exhibitions, wage subsidies for disabled workers, and repayment of portions of loans. The purpose of the given publication is to analyze the accounting methods for state support granted to an organization within the framework of its accounting policy, depending on the grants obtained. Enterprises must themselves select a form of grant accounting, both in the books of account and in the presentation of financial reporting, and these accounting and reporting forms must be reflected in the enterprise's accounting policy. The accounting policy indicates the principles for the creation of reserves and contingent liabilities related to grants. Enterprises can use certain simplifications: they can omit the creation of reserves and withhold contingent liabilities concerning the grants if these measures are considered insignificant. In accordance with the enterprise's accounting policy, the books of account must contain records of grant provision when a grant is transferred to the bank account or when the enterprise receives a written notice confirming a final decision about payment from a financing institution. The accounting policy must determine the principles for breaking down bank transactions on grant accounts and the system for securing data and files, including accounting documents, accounts and other documents related to the obtained grant, as well as the required archiving term.

  16. Nonlinear error dynamics for cycled data assimilation methods

    International Nuclear Information System (INIS)

    Moodey, Alexander J F; Lawless, Amos S; Potthast, Roland W E; Van Leeuwen, Peter Jan

    2013-01-01

    We investigate the error dynamics for cycled data assimilation systems, such that the inverse problem of state determination is solved at t_k, k = 1, 2, 3, …, with a first guess given by the state propagated via a dynamical system model M_k from time t_{k−1} to time t_k. In particular, for nonlinear dynamical systems M_k that are Lipschitz continuous with respect to their initial states, we provide deterministic estimates for the development of the error ‖e_k‖ := ‖x_k^(a) − x_k^(t)‖ between the estimated state x^(a) and the true state x^(t) over time. Clearly, observation error of size δ > 0 leads to an estimation error in every assimilation step. These errors can accumulate, if they are not (a) controlled in the reconstruction and (b) damped by the dynamical system M_k under consideration. A data assimilation method is called stable, if the error in the estimate is bounded in time by some constant C. The key task of this work is to provide estimates for the error ‖e_k‖, depending on the size δ of the observation error, the reconstruction operator R_α, the observation operator H and the Lipschitz constants K^(1) and K^(2) on the lower and higher modes of M_k controlling the damping behaviour of the dynamics. We show that systems can be stabilized by choosing α sufficiently small, but the bound C will then depend on the data error δ in the form c‖R_α‖δ with some constant c. Since ‖R_α‖ → ∞ for α → 0, the constant might be large. Numerical examples for this behaviour in the nonlinear case are provided using a (low-dimensional) Lorenz '63 system. (paper)
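
    The flavour of the stability estimate can be seen in a scalar caricature, assuming the error obeys e_k = K·e_{k−1} + c‖R_α‖δ exactly: with contraction (K < 1) the error settles at a floor proportional to δ, while K > 1 lets observation errors accumulate without bound. All numbers below are illustrative.

```python
# Scalar caricature of the cycled-assimilation error recursion above.
def cycled_error(K, amp, delta, n=50, e0=1.0):
    """Iterate e <- K*e + amp*delta; amp plays the role of c*||R_alpha||."""
    e = e0
    for _ in range(n):
        e = K * e + amp * delta
    return e

for K in (0.7, 1.05):
    print(f"K={K}: error after 50 cycles = "
          f"{cycled_error(K, amp=2.0, delta=0.01):.3f}")
```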

  17. Orbit-related sea level errors for TOPEX altimetry at seasonal to decadal timescales

    Science.gov (United States)

    Esselborn, Saskia; Rudenko, Sergei; Schöne, Tilo

    2018-03-01

    Interannual to decadal sea level trends are indicators of climate variability and change. A major source of global and regional sea level data is satellite radar altimetry, which relies on precise knowledge of the satellite's orbit. Here, we assess the error budget of the radial orbit component for the TOPEX/Poseidon mission for the period 1993 to 2004 from a set of different orbit solutions. The errors for seasonal, interannual (5-year), and decadal periods are estimated on global and regional scales based on radial orbit differences from three state-of-the-art orbit solutions provided by different research teams: the German Research Centre for Geosciences (GFZ), the Groupe de Recherche de Géodésie Spatiale (GRGS), and the Goddard Space Flight Center (GSFC). The global mean sea level error related to orbit uncertainties is of the order of 1 mm (8 % of the global mean sea level variability) with negligible contributions on the annual and decadal timescales. In contrast, the orbit-related error of the interannual trend is 0.1 mm yr-1 (27 % of the corresponding sea level variability) and might hamper the estimation of an acceleration of the global mean sea level rise. For regional scales, the gridded orbit-related error is up to 11 mm, and for about half the ocean the orbit error accounts for at least 10 % of the observed sea level variability. The seasonal orbit error amounts to 10 % of the observed seasonal sea level signal in the Southern Ocean. At interannual and decadal timescales, the orbit-related trend uncertainties reach regionally more than 1 mm yr-1. The interannual trend errors account for 10 % of the observed sea level signal in the tropical Atlantic and the south-eastern Pacific. For decadal scales, the orbit-related trend errors are prominent in several regions including the South Atlantic, western North Atlantic, central Pacific, South Australian Basin, and the Mediterranean Sea. Based on a set of test orbits calculated at GFZ, the sources of the

  18. Orbit-related sea level errors for TOPEX altimetry at seasonal to decadal timescales

    Directory of Open Access Journals (Sweden)

    S. Esselborn

    2018-03-01

    Full Text Available Interannual to decadal sea level trends are indicators of climate variability and change. A major source of global and regional sea level data is satellite radar altimetry, which relies on precise knowledge of the satellite's orbit. Here, we assess the error budget of the radial orbit component for the TOPEX/Poseidon mission for the period 1993 to 2004 from a set of different orbit solutions. The errors for seasonal, interannual (5-year), and decadal periods are estimated on global and regional scales based on radial orbit differences from three state-of-the-art orbit solutions provided by different research teams: the German Research Centre for Geosciences (GFZ), the Groupe de Recherche de Géodésie Spatiale (GRGS), and the Goddard Space Flight Center (GSFC). The global mean sea level error related to orbit uncertainties is of the order of 1 mm (8 % of the global mean sea level variability) with negligible contributions on the annual and decadal timescales. In contrast, the orbit-related error of the interannual trend is 0.1 mm yr−1 (27 % of the corresponding sea level variability) and might hamper the estimation of an acceleration of the global mean sea level rise. For regional scales, the gridded orbit-related error is up to 11 mm, and for about half the ocean the orbit error accounts for at least 10 % of the observed sea level variability. The seasonal orbit error amounts to 10 % of the observed seasonal sea level signal in the Southern Ocean. At interannual and decadal timescales, the orbit-related trend uncertainties reach regionally more than 1 mm yr−1. The interannual trend errors account for 10 % of the observed sea level signal in the tropical Atlantic and the south-eastern Pacific. For decadal scales, the orbit-related trend errors are prominent in several regions including the South Atlantic, western North Atlantic, central Pacific, South Australian Basin, and the Mediterranean Sea. Based on a set of test

  19. Impact of Representing Model Error in a Hybrid Ensemble-Variational Data Assimilation System for Track Forecast of Tropical Cyclones over the Bay of Bengal

    Science.gov (United States)

    Kutty, Govindan; Muraleedharan, Rohit; Kesarkar, Amit P.

    2018-03-01

    Uncertainties in the numerical weather prediction models are generally not well-represented in ensemble-based data assimilation (DA) systems. The performance of an ensemble-based DA system becomes suboptimal if the sources of error are undersampled in the forecast system. The present study examines the effect of accounting for model error treatments in the hybrid ensemble transform Kalman filter/three-dimensional variational (3DVAR) DA system (hybrid) in the track forecast of two tropical cyclones, viz. Hudhud and Thane, formed over the Bay of Bengal, using the Advanced Research Weather Research and Forecasting (ARW-WRF) model. We investigated the effect of two types of model error treatment schemes and their combination on the hybrid DA system: (i) the multiphysics approach, which uses different combinations of cumulus, microphysics, and planetary boundary layer schemes; (ii) the stochastic kinetic energy backscatter (SKEB) scheme, which perturbs the horizontal wind and potential temperature tendencies; and (iii) a combination of both the multiphysics and SKEB schemes. Substantial improvements are noticed in the track positions of both cyclones when flow-dependent ensemble covariance is used in the 3DVAR framework. Explicit model error representation is found to be beneficial in treating underdispersive ensembles. Among the model error schemes used in this study, the combination of the multiphysics and SKEB schemes outperformed the other two schemes, with improved track forecasts for both tropical cyclones.

  20. HB-Line Material Control and Accountability Measurements at SRS

    International Nuclear Information System (INIS)

    Casella, V.R.

    2003-01-01

    Presently, HB-Line work at the Savannah River Site consists primarily of the stabilization and packaging of nuclear materials for storage and the characterization of materials for disposition in H-Area. In order to ensure compliance with Material Control and Accountability (MC and A) regulations, accountability measurements are performed throughout the HB-Line processes. Accountability measurements are used to keep track of the nuclear material inventory by constantly updating the amount of material in the MBAs (Material Balance Areas) and sub-MBAs. This is done by subtracting the amount of accountable material that is added to a process and by adding the amount of accountable material that is put back in storage. A physical inventory is taken and compared to the ''Book Value'' listed in the Nuclear Material Accounting System. The difference between the Book Inventory and the Physical Inventory (BPID) of a sub-account for bulk material must be consistent with zero within the measurement errors combined in quadrature to provide assurance that nuclear material is accounted for. This work provides an overview of HB-Line processes and accountability measurements. The Scrap Recovery Line and Neptunium-237/Plutonium-239 Oxide Line are described, and sampling and analyses for Phase II are provided. Recommendations are provided to improve efficiency and cost effectiveness
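
    A sketch of the quadrature test described above, with hypothetical inventory figures and per-measurement uncertainties (the variable names and the 2-sigma acceptance limit are illustrative choices, not the HB-Line procedure):

```python
# Book-minus-physical inventory difference (BPID) compared with the combined
# (in quadrature) standard uncertainty of the contributing measurements.
import math

book_pu_g     = 12500.0          # book inventory, grams Pu (hypothetical)
physical_pu_g = 12491.6          # measured physical inventory (hypothetical)
sigmas_g = [3.0, 2.2, 4.1, 1.5]  # per-measurement standard errors, grams

bpid = book_pu_g - physical_pu_g
limit = 2.0 * math.sqrt(sum(s * s for s in sigmas_g))  # ~2-sigma combined

status = ("consistent with measurement error" if abs(bpid) <= limit
          else "investigate: exceeds combined error")
print(f"BPID = {bpid:+.1f} g, 2-sigma limit = {limit:.1f} g -> {status}")
```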

  1. Multihop Relaying over IM/DD FSO Systems with Pointing Errors

    KAUST Repository

    Zedini, Emna

    2015-10-19

    In this paper, the end-to-end performance of a multihop free-space optical system with amplify-and-forward channel-state-information-assisted or fixed-gain relays using the intensity modulation with direct detection technique over Gamma-Gamma turbulence fading with pointing error impairments is studied. More specifically, novel closed-form results for the probability density function and the cumulative distribution function of the end-to-end signal-to-noise ratio (SNR) are derived in terms of the Fox’s H function. Based on these formulas, closed-form bounds for the outage probability, the average bit-error rate (BER) of the on-off keying modulation scheme, the moments, and the ergodic capacity are presented. Furthermore, using the moments-based approach, tight asymptotic approximations at the high and low average SNR regimes are derived for the ergodic capacity in terms of simple elementary functions. The obtained results indicate that the overall system performance degrades with an increase in the number of hops. The effects of the atmospheric turbulence conditions and the pointing error are also quantified. All the analytical results are verified via computer-based Monte Carlo simulations.
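
    A Monte Carlo check in the spirit of the simulations mentioned above can be sketched directly from the standard channel model in the FSO literature: Gamma-Gamma turbulence as the product of two unit-mean gamma variates, and a Rayleigh-distributed radial pointing displacement. All parameter values are assumptions for illustration, and the snippet estimates an outage probability empirically rather than via the paper's Fox's-H expressions.

```python
# Monte Carlo sketch of single-link FSO outage under Gamma-Gamma turbulence
# with pointing errors; parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
alpha, beta = 4.2, 1.4               # turbulence parameters
w_zeq, sigma_s, A0 = 2.5, 0.3, 0.9   # beam width, jitter, max pointing gain

h_a = rng.gamma(alpha, 1 / alpha, n) * rng.gamma(beta, 1 / beta, n)
r = np.hypot(rng.normal(0, sigma_s, n), rng.normal(0, sigma_s, n))
h_p = A0 * np.exp(-2 * r ** 2 / w_zeq ** 2)
snr = 20.0 * (h_a * h_p) ** 2        # IM/DD: electrical SNR ~ intensity^2

threshold = 2.0
print(f"outage probability: {np.mean(snr < threshold):.4f}")
```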

  2. Multihop Relaying over IM/DD FSO Systems with Pointing Errors

    KAUST Repository

    Zedini, Emna; Alouini, Mohamed-Slim

    2015-01-01

    In this paper, the end-to-end performance of a multihop free-space optical system with amplify-and-forward channel-state-information-assisted or fixed-gain relays using the intensity modulation with direct detection technique over Gamma-Gamma turbulence fading with pointing error impairments is studied. More specifically, novel closed-form results for the probability density function and the cumulative distribution function of the end-to-end signal-to-noise ratio (SNR) are derived in terms of the Fox’s H function. Based on these formulas, closed-form bounds for the outage probability, the average bit-error rate (BER) of the on-off keying modulation scheme, the moments, and the ergodic capacity are presented. Furthermore, using the moments-based approach, tight asymptotic approximations at the high and low average SNR regimes are derived for the ergodic capacity in terms of simple elementary functions. The obtained results indicate that the overall system performance degrades with an increase in the number of hops. The effects of the atmospheric turbulence conditions and the pointing error are also quantified. All the analytical results are verified via computer-based Monte Carlo simulations.

  3. Vulnerabilities and Risks in Romanian Public Financial Accounting

    Directory of Open Access Journals (Sweden)

    Terinte Paula-Andreea

    2017-01-01

    Full Text Available Internal audit plays an important contributory role in the economic credibility of an economy. By improving the quality of internal audit and control in public institutions we can increase the efficiency of public spending and of the whole economy in general. The aim of our paper is to identify the main vulnerabilities and risks in the Romanian public sector based upon the analyses performed by the Romanian Court of Accounts. Drawing on the annual reports of the Romanian Court of Accounts from 2009 to 2016, we identify several issues and risks in the Romanian public sector, especially in local government financial accounting. In order to prevent further deviations and errors and to improve the quality of internal auditing and internal control systems in the Romanian public sector, this paper proposes a series of measures in this matter.

  4. Analysis of the Expediency of Switching to Accounting Outsourcing

    Directory of Open Access Journals (Sweden)

    Liakhovych Halyna І.

    2017-12-01

    Full Text Available The aim of the article is to justify a methodology for analyzing the expediency of switching to accounting outsourcing. Based on a review of scientific papers, approaches to analyzing the expediency of switching to accounting outsourcing are generalized; special attention is paid to analyzing the transaction costs of accounting outsourcing, which under current conditions are one of the defining quantitative criteria for choosing this form of organization of accounting. The choice of the type of accounting outsourcing (full or partial) for various groups of enterprises (micro-enterprises, small and medium-sized, and large ones) is justified. The sequence of analyzing the expediency of switching to accounting outsourcing is determined: identification of the areas of accounting to be transferred, selection of an outsourcer, and assessment of the risks associated with the reliability of the outsourcer. Prospects for further research concern the analysis of the effectiveness of outsourcing as a form of organization of accounting.

  5. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
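
    The contrast between the two update rules fits in a few lines. The sketch below trains both on a single reinforced compound of two cues; it is a schematic illustration of TER (Rescorla-Wagner style) versus LER updating, not a re-implementation of the specific models compared in the paper.

```python
# TER updates every cue by the *shared* prediction error on a compound
# trial, while LER updates each cue by its *own* error. Numbers illustrative.
import numpy as np

def train(rule, n_trials=100, lr=0.1):
    w = np.zeros(2)               # associative weights for cues A and B
    for _ in range(n_trials):
        x = np.array([1.0, 1.0])  # compound trial: A and B both present
        outcome = 1.0
        if rule == "TER":
            delta = outcome - w @ x              # one total error for all cues
            w += lr * delta * x
        else:                                    # LER
            for i in range(2):
                delta_i = outcome - w[i] * x[i]  # per-cue local error
                w[i] += lr * delta_i * x[i]
    return w

for rule in ("TER", "LER"):
    print(rule, np.round(train(rule), 3))
```

    Under TER the shared error vanishes once the summed prediction reaches the outcome, so the two cues settle at 0.5 each, whereas LER drives each cue's weight toward the outcome independently.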

  6. The refractive index in electron microscopy and the errors of its approximations

    Energy Technology Data Exchange (ETDEWEB)

    Lentzen, M.

    2017-05-15

    In numerical calculations for electron diffraction often a simplified form of the electron-optical refractive index, linear in the electric potential, is used. In recent years improved calculation schemes have been proposed, aiming at higher accuracy by including higher-order terms of the electric potential. These schemes start from the relativistically corrected Schrödinger equation, and use a second simplified form, now for the refractive index squared, being linear in the electric potential. The second and higher-order corrections thus determined have, however, a large error, compared to those derived from the relativistically correct refractive index. The impact of the two simplifications on electron diffraction calculations is assessed through numerical comparison of the refractive index at high-angle Coulomb scattering and of cross-sections for a wide range of scattering angles, kinetic energies, and atomic numbers. - Highlights: • The standard model for the refractive index in electron microscopy is investigated. • The error of the standard model is proportional to the electric potential squared. • Relativistically correct error terms are derived from the energy-momentum relation. • The errors are assessed for Coulomb scattering varying energy and atomic number. • Errors of scattering cross-sections are pronounced at large angles and attain 10%.

  7. The refractive index in electron microscopy and the errors of its approximations

    International Nuclear Information System (INIS)

    Lentzen, M.

    2017-01-01

    In numerical calculations for electron diffraction often a simplified form of the electron-optical refractive index, linear in the electric potential, is used. In recent years improved calculation schemes have been proposed, aiming at higher accuracy by including higher-order terms of the electric potential. These schemes start from the relativistically corrected Schrödinger equation, and use a second simplified form, now for the refractive index squared, being linear in the electric potential. The second and higher-order corrections thus determined have, however, a large error, compared to those derived from the relativistically correct refractive index. The impact of the two simplifications on electron diffraction calculations is assessed through numerical comparison of the refractive index at high-angle Coulomb scattering and of cross-sections for a wide range of scattering angles, kinetic energies, and atomic numbers. - Highlights: • The standard model for the refractive index in electron microscopy is investigated. • The error of the standard model is proportional to the electric potential squared. • Relativistically correct error terms are derived from the energy-momentum relation. • The errors are assessed for Coulomb scattering varying energy and atomic number. • Errors of scattering cross-sections are pronounced at large angles and attain 10%.

  8. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.
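
    The generic regression-calibration idea that underlies such corrections (not the authors' specific mean-variance or follow-up-time estimators) can be sketched as follows; the reliability ratio is taken from the simulated truth here, whereas in practice it would be estimated from replicate measurements.

```python
# Generic regression calibration: replace the error-prone covariate W by
# E[X | W] before fitting, undoing the attenuation from measurement error.
import numpy as np

rng = np.random.default_rng(5)
n, beta = 5000, 0.8
x = rng.normal(size=n)                    # true mediator/covariate
w = x + rng.normal(scale=0.7, size=n)     # observed with measurement error
y = beta * x + rng.normal(scale=0.5, size=n)

naive = np.polyfit(w, y, 1)[0]            # attenuated slope
lam = np.var(x) / np.var(w)               # reliability ratio (estimable
                                          # from replicates in practice)
x_hat = np.mean(w) + lam * (w - np.mean(w))  # E[X | W] under normality
calibrated = np.polyfit(x_hat, y, 1)[0]

print(f"true {beta:.2f}  naive {naive:.2f}  calibrated {calibrated:.2f}")
```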

  9. Performance Analysis of Free-Space Optical Links Over Malaga (M) Turbulence Channels with Pointing Errors

    KAUST Repository

    Ansari, Imran Shafique

    2015-08-12

    In this work, we present a unified performance analysis of a free-space optical (FSO) link that accounts for pointing errors and both types of detection techniques (i.e. intensity modulation/direct detection (IM/DD) as well as heterodyne detection). More specifically, we present unified exact closed-form expressions for the cumulative distribution function, the probability density function, the moment generating function, and the moments of the end-to-end signal-to-noise ratio (SNR) of a single link FSO transmission system, all in terms of the Meijer’s G function except for the moments that is in terms of simple elementary functions. We then capitalize on these unified results to offer unified exact closed-form expressions for various performance metrics of FSO link transmission systems, such as, the outage probability, the scintillation index (SI), the average error rate for binary and M-ary modulation schemes, and the ergodic capacity (except for IM/DD technique, where we present closed-form lower bound results), all in terms of Meijer’s G functions except for the SI that is in terms of simple elementary functions. Additionally, we derive the asymptotic results for all the expressions derived earlier in terms of Meijer’s G function in the high SNR regime in terms of simple elementary functions via an asymptotic expansion of the Meijer’s G function. We also derive new asymptotic expressions for the ergodic capacity in the low as well as high SNR regimes in terms of simple elementary functions via utilizing moments. All the presented results are verified via computer-based Monte-Carlo simulations.

  10. Performance Analysis of Free-Space Optical Links Over Malaga (M) Turbulence Channels with Pointing Errors

    KAUST Repository

    Ansari, Imran Shafique; Yilmaz, Ferkan; Alouini, Mohamed-Slim

    2015-01-01

    In this work, we present a unified performance analysis of a free-space optical (FSO) link that accounts for pointing errors and both types of detection techniques (i.e. intensity modulation/direct detection (IM/DD) as well as heterodyne detection). More specifically, we present unified exact closed-form expressions for the cumulative distribution function, the probability density function, the moment generating function, and the moments of the end-to-end signal-to-noise ratio (SNR) of a single link FSO transmission system, all in terms of the Meijer’s G function except for the moments that is in terms of simple elementary functions. We then capitalize on these unified results to offer unified exact closed-form expressions for various performance metrics of FSO link transmission systems, such as, the outage probability, the scintillation index (SI), the average error rate for binary and M-ary modulation schemes, and the ergodic capacity (except for IM/DD technique, where we present closed-form lower bound results), all in terms of Meijer’s G functions except for the SI that is in terms of simple elementary functions. Additionally, we derive the asymptotic results for all the expressions derived earlier in terms of Meijer’s G function in the high SNR regime in terms of simple elementary functions via an asymptotic expansion of the Meijer’s G function. We also derive new asymptotic expressions for the ergodic capacity in the low as well as high SNR regimes in terms of simple elementary functions via utilizing moments. All the presented results are verified via computer-based Monte-Carlo simulations.

  11. Calculation of the Nucleon Axial Form Factor Using Staggered Lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Aaron S. [Fermilab; Hill, Richard J. [Perimeter Inst. Theor. Phys.; Kronfeld, Andreas S. [Fermilab; Li, Ruizi [Indiana U.; Simone, James N. [Fermilab

    2016-10-14

    The nucleon axial form factor is a dominant contribution to errors in neutrino oscillation studies. Lattice QCD calculations can help control theory errors by providing first-principles information on nucleon form factors. In these proceedings, we present preliminary results on a blinded calculation of $g_A$ and the axial form factor using HISQ staggered baryons with 2+1+1 flavors of sea quarks. Calculations are done using physical light quark masses and are absolutely normalized. We discuss fitting form factor data with the model-independent $z$ expansion parametrization.
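
    For context, the $z$ expansion referred to above maps the momentum transfer into a small expansion variable and writes the form factor as a truncated power series. In the standard conventions (the particular choices of $t_0$ and $k_{\max}$ are illustrative assumptions):

        F_A(Q^2) = \sum_{k=0}^{k_{\max}} a_k \, z(Q^2)^k, \qquad
        z(Q^2) = \frac{\sqrt{t_{\mathrm{cut}} + Q^2} - \sqrt{t_{\mathrm{cut}} - t_0}}{\sqrt{t_{\mathrm{cut}} + Q^2} + \sqrt{t_{\mathrm{cut}} - t_0}},

    with $t_{\mathrm{cut}} = 9 m_\pi^2$ the three-pion production threshold in the axial channel; the axial charge is then recovered as $g_A = F_A(0)$.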

  12. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged <18 years. Of these error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  13. Identifying medication error chains from critical incident reports: a new analytic approach.

    Science.gov (United States)

    Huckels-Baumgart, Saskia; Manser, Tanja

    2014-10-01

    Research into the distribution of medication errors usually focuses on isolated stages within the medication use process. Our study aimed to provide a novel process-oriented approach to medication incident analysis focusing on medication error chains. Our study was conducted across a 900-bed teaching hospital in Switzerland. All 1,591 medication errors reported from 2009 to 2012 were categorized using the Medication Error Index NCC MERP and the WHO Classification for Patient Safety Methodology. In order to identify medication error chains, each reported medication incident was allocated to the relevant stage of the hospital medication use process. Only 25.8% of the reported medication errors were detected before they propagated through the medication use process. The majority of medication errors (74.2%) formed an error chain encompassing two or more stages. The most frequent error chain comprised preparation up to and including medication administration (45.2%). "Non-consideration of documentation/prescribing" during drug preparation was the most frequent contributor to "wrong dose" errors during the administration of medication. Medication error chains provide important insights for detecting and stopping medication errors before they reach the patient. Existing and new safety barriers need to be extended to interrupt error chains and to improve patient safety. © 2014, The American College of Clinical Pharmacology.

  14. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software.

    Science.gov (United States)

    Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O

    2016-11-01

    Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite its benefits, such software is not without limitations, and transcription errors have been widely reported. We evaluated the frequency and nature of non-clinical transcription errors using VR dictation software in a retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time were collected. 67 (17.72 %) reports contained ≥1 error, with 7 (1.85 %) containing 'significant' and 9 (2.38 %) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22 %) classified as 'insignificant', 7 (7.78 %) as 'significant', and 9 (10 %) as 'very significant'. 68 (75.56 %) errors were 'spelling and grammar', 20 (22.22 %) 'missense' and 2 (2.22 %) 'nonsense'. 'Punctuation' was the most common sub-type, accounting for 27 errors (30 %). Complex imaging modalities had higher error rates per report and per sentence: computed tomography contained 0.040 errors per sentence compared to 0.030 for plain film. Longer reports had a higher error rate, with reports of >25 sentences containing an average of 1.23 errors per report compared to 0.09 for reports of 0-5 sentences. These findings highlight the limitations of VR dictation software. While most errors were deemed insignificant, there were occurrences of errors with the potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates, and this should be taken into account by the reporting radiologist.

  15. Reducing Diagnostic Errors through Effective Communication: Harnessing the Power of Information Technology

    Science.gov (United States)

    Naik, Aanand Dinkar; Rao, Raghuram; Petersen, Laura Ann

    2008-01-01

    Diagnostic errors are poorly understood despite being a frequent cause of medical errors. Recent efforts have aimed to advance the "basic science" of diagnostic error prevention by tracing errors to their most basic origins. Although a refined theory of diagnostic error prevention will take years to formulate, we focus on communication breakdown, a major contributor to diagnostic errors and an increasingly recognized preventable factor in medical mishaps. We describe a comprehensive framework that integrates the potential sources of communication breakdowns within the diagnostic process and identifies vulnerable steps in the diagnostic process where various types of communication breakdowns can precipitate error. We then discuss potential information technology-based interventions that may have efficacy in preventing one or more forms of these breakdowns. These possible intervention strategies include using new technologies to enhance communication between health providers and health systems, improve patient involvement, and facilitate management of information in the medical record. PMID:18373151

  16. Reducing patient identification errors related to glucose point-of-care testing

    Directory of Open Access Journals (Sweden)

    Gaurav Alreja

    2011-01-01

    Full Text Available Background: Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Materials and Methods: Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning in wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server which then transmits data to any medical record matching the financial number of the test result. With a new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. Results: When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%) in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Conclusion: Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that the errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT.

  17. Mismeasurement and the resonance of strong confounders: uncorrelated errors.

    Science.gov (United States)

    Marshall, J R; Hastrup, J L

    1996-05-15

    Greenland first documented (Am J Epidemiol 1980; 112:564-9) that error in the measurement of a confounder could resonate--that it could bias estimates of other study variables, and that the bias could persist even with statistical adjustment for the confounder as measured. An important question is raised by this finding: can such bias be more than trivial within the bounds of realistic data configurations? The authors examine several situations involving dichotomous and continuous data in which a confounder and a null variable are measured with error, and they assess the extent of resultant bias in estimates of the effect of the null variable. They show that, with continuous variables, measurement error amounting to 40% of observed variance in the confounder could cause the observed impact of the null study variable to appear to alter risk by as much as 30%. Similarly, they show, with dichotomous independent variables, that 15% measurement error in the form of misclassification could lead the null study variable to appear to alter risk by as much as 50%. Such bias would result only from strong confounding. Measurement error would obscure the evidence that strong confounding is a likely problem. These results support the need for every epidemiologic inquiry to include evaluations of measurement error in each variable considered.
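
    The resonance effect is easy to reproduce numerically. Below is a minimal Python sketch (the data-generating values are invented to mirror the continuous-variable scenario, with measurement error set to 40% of the observed confounder variance): a null study variable appears to alter the outcome even after statistical adjustment for the confounder as measured.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200_000

        C = rng.normal(size=n)                    # true (strong) confounder
        X = 0.7 * C + rng.normal(size=n)          # null study variable
        Y = 1.0 * C + rng.normal(size=n)          # outcome driven only by C
        # measurement error amounting to 40% of the observed variance of C_obs
        C_obs = C + rng.normal(scale=np.sqrt(2 / 3), size=n)

        # OLS of Y on [1, X, C_obs]: adjustment for C_obs is incomplete, so the
        # coefficient on the null variable X is biased away from zero.
        Z = np.column_stack([np.ones(n), X, C_obs])
        beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
        print(f"apparent effect of the null variable: {beta[1]:.3f}")

    With these values the apparent effect comes out near 0.23 rather than 0, even though X has no causal role, broadly consistent with the magnitudes discussed above.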

  18. Propagation of angular errors in two-axis rotation systems

    Science.gov (United States)

    Torrington, Geoffrey K.

    2003-10-01

    Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
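
    The root-sum-of-squares budget takes the standard form (a generic statement of the technique, with symbols introduced here for illustration)

        \sigma_{\mathrm{sys}} = k \sqrt{ \sum_i w_i^2 \, \sigma_i^2 },

    where $\sigma_i$ is the tolerance-derived standard deviation of error source $i$, $w_i$ is its sensitivity weighting factor for the chosen goniometer configuration, and $k = 1$ or $k = 2$ selects the 67% or 95% confidence level tabulated in the paper.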

  19. Safety analysis methodology with assessment of the impact of the prediction errors of relevant parameters

    International Nuclear Information System (INIS)

    Galia, A.V.

    2011-01-01

    The best estimate plus uncertainty approach (BEAU) requires the use of extensive resources and therefore it is usually applied for cases in which the available safety margin obtained with a conservative methodology can be questioned. Outside the BEAU methodology, there is no clear approach on how to deal with the uncertainties resulting from prediction errors in the safety analyses performed for licensing submissions. However, the regulatory document RD-310 mentions that the analysis method shall account for uncertainties in the analysis data and models. A possible approach is presented, simple and reasonable and representing just the author's views, to take into account the impact of prediction errors and other uncertainties when performing safety analysis in line with regulatory requirements. The approach proposes taking into account the prediction error of relevant parameters. Relevant parameters would be those plant parameters that are surveyed and are used to initiate the action of a mitigating system, or those that are representative of the most challenging phenomena for the integrity of a fission barrier. Examples of the application of the methodology are presented, involving a comparison between results with the new approach and a best estimate calculation during the blowdown phase for two small breaks in a generic CANDU 6 station. The calculations are performed with the CATHENA computer code. (author)

  20. Narrative Accounting Practices in Indonesia Companies

    Directory of Open Access Journals (Sweden)

    Inten Meutia

    2017-05-01

    Full Text Available This research aimed to reveal creative accounting practices in the form of narrative accounting occurring in companies in Indonesia. Using content analysis, this research analyzed the management discussion and analysis section of the annual report for groups of companies whose performance had increased or declined among several companies listed on the Indonesian Stock Exchange. This research finds that narrative accounting practices are applied in these companies. The four methods of accounting narratives are found in both groups of companies: stressing the positive and downplaying the negative, baffling the readers, differential reporting, and attribution.

  1. How Rational Are Inflation Expectations? A Vector Autoregression Decomposition of Inflation Forecasts and Their Errors

    National Research Council Canada - National Science Library

    Ladvogt, Timothy

    2002-01-01

... the persistence of forecast errors. A reduced form VAR is used to identify potential inefficiencies and then calculate the impulse response functions and variance decompositions of forecast errors to analyze how shocks to the other endogenous...
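
    As a sketch of the workflow the abstract describes (the synthetic data, column names, and lag selection below are illustrative assumptions, not the paper's specification), a reduced-form VAR with impulse responses and variance decompositions can be produced with statsmodels:

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(1)
        data = pd.DataFrame(
            rng.normal(size=(200, 3)).cumsum(axis=0) * 0.01,
            columns=["inflation", "forecast", "forecast_error"],
        ).diff().dropna()                  # difference to a stationary series

        res = VAR(data).fit(maxlags=4, ic="aic")  # lag order chosen by AIC
        irf = res.irf(10)      # impulse response functions (arrays in irf.irfs)
        fevd = res.fevd(10)    # forecast-error variance decompositions
        fevd.summary()         # share of each shock in each variable's error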

  2. Banking Accounting Practices “Humanist”

    Directory of Open Access Journals (Sweden)

    Nanang Shonhadji

    2016-04-01

    Full Text Available Humanism is a universal value that should be attached to all forms and operational activities of a bank, because the bank's main business is to provide service to humanity. The purpose of this study was to examine humanist banking practice. Qualitative methods with a phenomenological approach were used, involving an account officer and marketing staff as key informants. In-depth interviews were conducted to obtain comprehensive information. The results of the study show that banking practice in lending and funding activities was still oriented toward material interests, i.e., achieving maximum profit from an unbalanced position. As a consequence, the values of humanism, in the form of truth and equity, were negated by bank stakeholders. The results of this study also found that two forms of awareness are needed to embed the value of accountability in the funding and lending activities offered to customers in a humanistic way: awareness of responsibility to oneself and to God.

  3. Calculating disadvantage factor for fuel taking into account the neutron energy distribution

    International Nuclear Information System (INIS)

    Pop-Jordanov, J.

    1964-01-01

    Errors in calculating the disadvantage factor are caused by applying the diffusion approximation and one-group method. This paper describes the method for calculating the fuel disadvantage factor by applying a non-diffusion method taking into account neutron thermalization

  4. Adaptive color halftoning for minimum perceived error using the blue noise mask

    Science.gov (United States)

    Yu, Qing; Parker, Kevin J.

    1997-04-01

    Color halftoning using a conventional screen requires careful selection of screen angles to avoid Moire patterns. An obvious advantage of halftoning using a blue noise mask (BNM) is that no conventional screen angles or Moire patterns are produced. However, a simple strategy of employing the same BNM on all color planes is unacceptable in cases where a small registration error can cause objectionable color shifts. In a previous paper by Yao and Parker, strategies were presented for shifting or inverting the BNM as well as using mutually exclusive BNMs for different color planes. In this paper, the above schemes are studied in CIE-LAB color space in terms of root mean square error and variance for the luminance and chrominance channels, respectively. We demonstrate that the dot-on-dot scheme results in minimum chrominance error but maximum luminance error, the 4-mask scheme results in minimum luminance error but maximum chrominance error, and the shift scheme falls in between. Based on this study, we propose a new adaptive color halftoning algorithm that takes colorimetric color reproduction into account by applying two mutually exclusive BNMs on two color planes and an adaptive scheme on the other planes to reduce color error. We show that by having one adaptive color channel, we obtain increased flexibility to manipulate the output so as to reduce colorimetric error while permitting customization to specific printing hardware.

  5. Development of a Computerized Multifunctional Form and Position Measurement Instrument

    International Nuclear Information System (INIS)

    Liu, P; Tian, W Y

    2006-01-01

    A prototype of a multifunctional form and position measurement instrument controlled by a personal computer has been successfully developed. The instrument is designed as a rotary-table type with a high-precision air bearing; the radial rotation error of the rotary table is 0.08 μm. Since a high-precision vertical sliding carriage supported by an air bearing is used, the straightness error of the carriage motion is 0.3 μm/200 mm and the parallelism error of the carriage motion relative to the rotation axis of the rotary table is 0.4 μm/200 mm. Mathematical models have been established for assessing planar and spatial straightness, flatness, roundness, cylindricity, and coaxiality errors. By radial deviation measurement, the instrument can accurately measure form and position errors of such workpieces as shafts, round plates and sleeves of medium or small dimensions with the tolerance grades most used in industry

  6. Standard Error Computations for Uncertainty Quantification in Inverse Problems: Asymptotic Theory vs. Bootstrapping.

    Science.gov (United States)

    Banks, H T; Holm, Kathleen; Robbins, Danielle

    2010-11-01

    We computationally investigate two approaches for uncertainty quantification in inverse problems for nonlinear parameter-dependent dynamical systems. We compare the bootstrapping and asymptotic theory approaches for problems involving data with several noise forms and levels. We consider both constant-variance absolute error data and relative error, which produces non-constant-variance data, in our parameter estimation formulations. We compare and contrast parameter estimates, standard errors, confidence intervals, and computational times for both bootstrapping and asymptotic theory methods.
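
    A minimal Python sketch of the comparison, assuming a simple exponential-decay model with constant-variance absolute error (the model and all constants are invented for illustration):

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(2)
        t = np.linspace(0, 5, 60)
        y = 2.0 * np.exp(-0.8 * t) + rng.normal(scale=0.05, size=t.size)

        model = lambda t, a, k: a * np.exp(-k * t)
        popt, pcov = curve_fit(model, t, y, p0=(1.0, 1.0))
        se_asymptotic = np.sqrt(np.diag(pcov))   # asymptotic-theory SEs

        # Residual bootstrap: refit on resampled residuals added to the fit.
        fit = model(t, *popt)
        resid = y - fit
        boot = np.array([
            curve_fit(model, t,
                      fit + rng.choice(resid, size=resid.size, replace=True),
                      p0=popt)[0]
            for _ in range(500)
        ])
        se_bootstrap = boot.std(axis=0)
        print("asymptotic:", se_asymptotic, "bootstrap:", se_bootstrap)

    For a well-behaved problem like this the two sets of standard errors agree closely; the 500 refits illustrate the computational-time trade-off the study quantifies.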

  7. Estimation of Branch Topology Errors in Power Networks by WLAV State Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hong Rae [Soonchunhyang University(Korea); Song, Kyung Bin [Keimyung University(Korea)]

    2000-06-01

    The purpose of this paper is to detect and identify topological errors in order to maintain a reliable database for the state estimator. In this paper, a two-stage estimation procedure is used to identify the topology errors. At the first stage, the WLAV state estimator, which has the characteristic of removing bad data during the estimation procedure, is run to find the suspected branches at which topology errors take place. The resulting residuals are normalized and the measurements with significant normalized residuals are selected. A set of suspected branches is formed based on these selected measurements; if the selected measurement is a line flow, the corresponding branch is suspected; if it is an injection, then all the branches connecting the injection bus to its immediate neighbors are suspected. A new WLAV state estimator adding the branch flow errors to the state vector is developed to identify the branch topology errors. Sample cases of a single topology error and a topology error with a measurement error are applied to the IEEE 14-bus test system. (author). 24 refs., 1 fig., 9 tabs.

  8. Theta coordinated error-driven learning in the hippocampus.

    Directory of Open Access Journals (Sweden)

    Nicholas Ketz

    Full Text Available The learning mechanism in the hippocampus has almost universally been assumed to be Hebbian in nature, where individual neurons in an engram join together with synaptic weight increases to support facilitated recall of memories later. However, it is also widely known that Hebbian learning mechanisms impose significant capacity constraints, and are generally less computationally powerful than learning mechanisms that take advantage of error signals. We show that the differential phase relationships of hippocampal subfields within the overall theta rhythm enable a powerful form of error-driven learning, which results in significantly greater capacity, as shown in computer simulations. In one phase of the theta cycle, the bidirectional connectivity between CA1 and entorhinal cortex can be trained in an error-driven fashion to learn to effectively encode the cortical inputs in a compact and sparse form over CA1. In a subsequent portion of the theta cycle, the system attempts to recall an existing memory, via the pathway from entorhinal cortex to CA3 and CA1. Finally the full theta cycle completes when a strong target encoding representation of the current input is imposed onto the CA1 via direct projections from entorhinal cortex. The difference between this target encoding and the attempted recall of the same representation on CA1 constitutes an error signal that can drive the learning of CA3 to CA1 synapses. This CA3 to CA1 pathway is critical for enabling full reinstatement of recalled hippocampal memories out in cortex. Taken together, these new learning dynamics enable a much more robust, high-capacity model of hippocampal learning than was available previously under the classical Hebbian model.
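
    To make the error-driven dynamic concrete, here is a toy Python sketch (sizes, sparsity, and learning rate are invented for illustration; this is a drastic simplification of the model in the record): the recall-phase CA1 state driven through CA3 is compared with the EC-imposed target encoding, and the difference trains the CA3-to-CA1 weights with a delta rule.

        import numpy as np

        rng = np.random.default_rng(3)
        n_ca3, n_ca1, lr = 40, 30, 0.1

        # Five stored pattern pairs: sparse CA3 codes and their CA1 targets.
        patterns = [((rng.random(n_ca3) < 0.1).astype(float),
                     (rng.random(n_ca1) < 0.1).astype(float)) for _ in range(5)]

        W = rng.normal(scale=0.1, size=(n_ca3, n_ca1))    # CA3 -> CA1 weights
        for _ in range(200):                              # theta cycles
            for ca3, target in patterns:
                recall = np.tanh(ca3 @ W)                 # recall-phase CA1 state
                W += lr * np.outer(ca3, target - recall)  # error-driven update

        err = np.mean([(t - np.tanh(c @ W)) ** 2 for c, t in patterns])
        print(f"mean squared recall error after training: {err:.4f}")

    Unlike a pure Hebbian rule, the update vanishes once recall matches the target, which is what gives error-driven learning its larger capacity.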

  9. Mutual Calculations in Creating Accounting Models: A Demonstration of the Power of Matrix Mathematics in Accounting Education

    Science.gov (United States)

    Vysotskaya, Anna; Kolvakh, Oleg; Stoner, Greg

    2016-01-01

    The aim of this paper is to describe the innovative teaching approach used in the Southern Federal University, Russia, to teach accounting via a form of matrix mathematics. It thereby contributes to disseminating the technique of teaching to solve accounting cases using mutual calculations to a worldwide audience. The approach taken in this course…

  10. PERPETUAL LEASE: FEATURES OF ACCOUNTING

    Directory of Open Access Journals (Sweden)

    Tetiana Yurchenko

    2017-03-01

    Full Text Available The article deals with the peculiarities of the legal regulation of the right to use someone else's land for agricultural purposes under a perpetual lease contract. The recognition of the perpetual lease right as an intangible asset and, therefore, as an object of accounting was justified. The features of primary accounting for perpetual lease rights were analyzed. It was found that no specialized forms of primary documents are established for documenting the receipt, commissioning, and de-recognition of perpetual lease rights. The main aspects of the accounting reflection of land held under perpetual lease contracts were identified and ways of improving them were developed. A period for which land is transferred for use under a perpetual lease contract was proposed. During the study, general scientific methods – induction, deduction, synthesis, analysis, dialectical, historical, generalizations – and specific methods of accounting – documentation, evaluation, accounting records – were used. Keywords: accounting, land, perpetual lease, intangible assets, the right to use.

  11. Simultaneous Treatment of Missing Data and Measurement Error in HIV Research Using Multiple Overimputation.

    Science.gov (United States)

    Schomaker, Michael; Hogger, Sara; Johnson, Leigh F; Hoffmann, Christopher J; Bärnighausen, Till; Heumann, Christian

    2015-09-01

    Both CD4 count and viral load in HIV-infected persons are measured with error. There is no clear guidance on how to deal with this measurement error in the presence of missing data. We used multiple overimputation, a method recently developed in the political sciences, to account for both measurement error and missing data in CD4 count and viral load measurements from four South African cohorts of a Southern African HIV cohort collaboration. Our knowledge about the measurement error of ln CD4 and log10 viral load is part of an imputation model that imputes both missing and mismeasured data. In an illustrative example, we estimate the association of CD4 count and viral load with the hazard of death among patients on highly active antiretroviral therapy by means of a Cox model. Simulation studies evaluate the extent to which multiple overimputation is able to reduce bias in survival analyses. Multiple overimputation emphasizes more strongly the influence of having high baseline CD4 counts compared to both a complete case analysis and multiple imputation (hazard ratio for >200 cells/mm³ vs. <25 cells/mm³: 0.21 [95% confidence interval: 0.18, 0.24] vs. 0.38 [0.29, 0.48], and 0.29 [0.25, 0.34], respectively). Similar results are obtained when varying assumptions about measurement error, when using p-splines, and when evaluating time-updated CD4 count in a longitudinal analysis. The estimates of the association with viral load are slightly more attenuated when using multiple imputation instead of multiple overimputation. Our simulation studies suggest that multiple overimputation is able to reduce bias and mean squared error in survival analyses. Multiple overimputation, which can be used with existing software, offers a convenient approach to account for both missing and mismeasured data in HIV research.
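
    A generic Python sketch of the pooling logic (not the authors' software: a linear model stands in for the Cox model, the ln CD4 error SD is assumed known, and the posterior draw deliberately ignores the outcome, which a full implementation would condition on):

        import numpy as np

        rng = np.random.default_rng(4)
        n, m, sigma_me = 1000, 20, 0.3      # subjects, imputations, error SD

        true_lncd4 = rng.normal(5.0, 0.8, size=n)
        observed = true_lncd4 + rng.normal(scale=sigma_me, size=n)
        outcome = -0.5 * true_lncd4 + rng.normal(scale=1.0, size=n)

        # Posterior of the true value given the observed one (normal prior
        # estimated from the data, known measurement-error variance).
        mu0 = observed.mean()
        tau2 = observed.var() - sigma_me**2
        shrink = tau2 / (tau2 + sigma_me**2)
        post_mean = mu0 + shrink * (observed - mu0)
        post_sd = np.sqrt(shrink) * sigma_me

        estimates, variances = [], []
        for _ in range(m):                  # overimpute the mismeasured values
            imputed = post_mean + rng.normal(scale=post_sd, size=n)
            X = np.column_stack([np.ones(n), imputed])
            beta, res, *_ = np.linalg.lstsq(X, outcome, rcond=None)
            var_b = res[0] / (n - 2) * np.linalg.inv(X.T @ X)[1, 1]
            estimates.append(beta[1]); variances.append(var_b)

        # Rubin's rules: pooled estimate, within- plus between-imputation variance
        qbar = np.mean(estimates)
        total = np.mean(variances) + (1 + 1 / m) * np.var(estimates, ddof=1)
        print(f"pooled effect: {qbar:.3f} +/- {np.sqrt(total):.3f}")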

  12. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll)

  13. Predicting Error Bars for QSAR Models

    International Nuclear Information System (INIS)

    Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Mueller, Klaus-Robert

    2007-01-01

    Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from the most recent months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble- and distance-based techniques for the other modelling approaches
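
    For the Gaussian Process part, per-compound error bars fall out of the predictive variance. A small scikit-learn sketch (a synthetic one-dimensional function stands in for molecular descriptors and measured log D7 values):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(5)
        X = rng.uniform(-3, 3, size=(60, 1))          # stand-in descriptors
        y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=60)

        gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                       normalize_y=True)
        gpr.fit(X, y)

        X_new = np.linspace(-3, 3, 5).reshape(-1, 1)
        mean, std = gpr.predict(X_new, return_std=True)  # per-point error bars
        for m_, s_ in zip(mean, std):
            print(f"prediction {m_:+.2f}, error bar +/- {1.96 * s_:.2f}")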

  14. Individual Differences and Rating Errors in First Impressions of Psychopathy

    Directory of Open Access Journals (Sweden)

    Christopher T. A. Gillen

    2016-10-01

    Full Text Available The current study is the first to investigate whether individual differences in personality are related to improved first impression accuracy when appraising psychopathy in female offenders from thin-slices of information. The study also investigated the types of errors laypeople make when forming these judgments. Sixty-seven undergraduates assessed 22 offenders on their level of psychopathy, violence, likability, and attractiveness. Psychopathy rating accuracy improved as rater extroversion-sociability and agreeableness increased and when neuroticism and lifestyle and antisocial characteristics decreased. These results suggest that traits associated with nonverbal rating accuracy or social functioning may be important in threat detection. Raters also made errors consistent with error management theory, suggesting that laypeople overappraise danger when rating psychopathy.

  15. Subject-Auxiliary Inversion Errors and Wh Question Acquisition: What Children Do Know.

    Science.gov (United States)

    Rowland, Caroline F.; Pine, Julian M.

    2000-01-01

    Analyzed correct wh-question production and subject-auxiliary inversion errors in one child's wh-question data. Argues that two current movement rule accounts cannot explain the patterning of early wh-questions. The data can be explained by the child's knowledge of particular lexically specific wh-word+auxiliary combinations, and inversion and uninversion…

  16. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements

  17. Overview of an automated, near realtime materials accounting system in use at the Savannah River Plant

    International Nuclear Information System (INIS)

    Clark, W.C. Jr.

    1987-01-01

    A reliable material accounting system is a requirement for the operation of any nuclear facility. At the Savannah River Plant, an automated, near realtime, accounting system has been developed to provide such reliability. The system's design provides timely detection of diversion or accounting problems by monitoring the activity in 18 unit process areas (UPAs). Material balance calculations are performed for each UPA after a batch of material has completed a processing step. In most cases, an inventory difference (ID) for a UPA is established at least every 24 hours. Detection of an accounting problem is further enhanced by an online measurement control program. This program evaluates the performance of most measurement equipment every 12 hours. Error estimates are propagated when a material balance is closed to provide a realtime limit of error for the inventory difference. To minimize false alarms, the data must be reliable and free of input errors. Solution volumes, container identifications, material weights, etc., are all collected via direct computer connections. Manual data input is used only as a backup to the automated system. Automatic data collection also provides a quick and easy method of entering accounting data. Data entry is therefore performed simultaneously with production operations, without reducing throughput. Finally, requests for analytical results required to determine nuclear material concentrations are made online. Concentrations are determined using one of ten assay devices or by analysis performed in a dedicated laboratory. When results are available, the information is posted on the accounting computer and any required adjustments are performed automatically. If necessary, material balances are reclosed to reflect the ID changes caused by posted results

  18. Effects of past and recent blood pressure and cholesterol level on coronary heart disease and stroke mortality, accounting for measurement error.

    Science.gov (United States)

    Boshuizen, Hendriek C; Lanti, Mariapaola; Menotti, Alessandro; Moschandreas, Joanna; Tolonen, Hanna; Nissinen, Aulikki; Nedeljkovic, Srecko; Kafatos, Anthony; Kromhout, Daan

    2007-02-15

    The authors aimed to quantify the effects of current systolic blood pressure (SBP) and serum total cholesterol on the risk of mortality in comparison with SBP or serum cholesterol 25 years previously, taking measurement error into account. The authors reanalyzed 35-year follow-up data on mortality due to coronary heart disease and stroke among subjects aged 65 years or more from nine cohorts of the Seven Countries Study. The two-step method of Tsiatis et al. (J Am Stat Assoc 1995;90:27-37) was used to adjust for regression dilution bias, and results were compared with those obtained using more commonly applied methods of adjustment for regression dilution bias. It was found that the commonly used univariate adjustment for regression dilution bias overestimates the effects of both SBP and cholesterol compared with multivariate methods. Also, the two-step method makes better use of the information available, resulting in smaller confidence intervals. Results comparing recent and past exposure indicated that past SBP is more important than recent SBP in terms of its effect on coronary heart disease mortality, while both recent and past values seem to be important for effects of cholesterol on coronary heart disease mortality and effects of SBP on stroke mortality. Associations between serum cholesterol concentration and risk of stroke mortality are weak.
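
    For reference, the univariate regression-dilution adjustment that the abstract contrasts with the multivariate two-step method takes the standard form (a textbook statement, not an equation quoted from the paper)

        E[\hat{\beta}_{\mathrm{naive}}] \approx \lambda \beta, \qquad
        \lambda = \frac{\sigma_X^2}{\sigma_X^2 + \sigma_e^2},

    so the corrected estimate divides the naive slope by the reliability ratio $\lambda$. With several correlated, error-prone exposures (here past and recent SBP and cholesterol), $\lambda$ generalizes to a matrix, which is why the univariate correction can overstate individual effects relative to the multivariate methods used above.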

  19. Endogeneity, Time-Varying Coefficients, and Incorrect vs. Correct Ways of Specifying the Error Terms of Econometric Models

    Directory of Open Access Journals (Sweden)

    P.A.V.B. Swamy

    2017-02-01

    Full Text Available Using the net effect of all relevant regressors omitted from a model to form its error term is incorrect because the coefficients and error term of such a model are non-unique. Non-unique coefficients cannot possess consistent estimators. Uniqueness can be achieved if, instead, one uses certain “sufficient sets” of (relevant) regressors omitted from each model to represent the error term. In this case, the unique coefficient on any non-constant regressor takes the form of the sum of a bias-free component and omitted-regressor biases. Measurement-error bias can also be incorporated into this sum. We show that if our procedures are followed, accurate estimation of bias-free components is possible.

  20. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    Science.gov (United States)

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

    A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main principles of Six Sigma are define, measure, analyze, improve and control. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors at the preanalytic, analytic and postanalytical phases was analysed. Improvement strategies were discussed in the monthly intradepartmental meetings, and units with high error rates were placed under closer control. Fifty-six (52.4%) of 107 recorded errors in total were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory provided a reduction of the error rates, mainly in the pre-analytic and analytic phases.

  1. Detecting Soft Errors in Stencil based Computations

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, V. [Univ. of Utah, Salt Lake City, UT (United States); Gopalkrishnan, G. [Univ. of Utah, Salt Lake City, UT (United States); Bronevetsky, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-05-06

    Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
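
    The flavor of the approach can be sketched as follows (a hypothetical stand-in for SORREL's internals, using a one-dimensional Jacobi stencil and plain least squares): train a linear model that predicts each updated point from its neighbors on clean runs, then flag points whose residual exceeds a threshold calibrated on the training residuals.

        import numpy as np

        rng = np.random.default_rng(6)

        def jacobi_step(u):
            v = u.copy()
            v[1:-1] = 0.5 * (u[:-2] + u[2:])     # 1-D heat-equation stencil
            return v

        # Training: gather (neighbors -> updated value) pairs from clean runs.
        u = rng.random(256)
        feats, targets = [], []
        for _ in range(50):
            v = jacobi_step(u)
            feats.append(np.column_stack([u[:-2], u[1:-1], u[2:]]))
            targets.append(v[1:-1])
            u = v
        A = np.vstack(feats); b = np.concatenate(targets)
        w, *_ = np.linalg.lstsq(A, b, rcond=None)      # learned stencil weights
        tau = max(1e-8, 10 * np.abs(A @ w - b).max())  # detection threshold

        # Detection: inject a bit-flip-like corruption and check residuals.
        v = jacobi_step(u)
        v[100] += 1e-2                                 # simulated soft error
        pred = np.column_stack([u[:-2], u[1:-1], u[2:]]) @ w
        print("flagged:", np.where(np.abs(pred - v[1:-1]) > tau)[0] + 1)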

  2. Confounding and exposure measurement error in air pollution epidemiology

    NARCIS (Netherlands)

    Sheppard, L.; Burnett, R.T.; Szpiro, A.A.; Kim, J.Y.; Jerrett, M.; Pope, C.; Brunekreef, B.|info:eu-repo/dai/nl/067548180

    2012-01-01

    Studies in air pollution epidemiology may suffer from some specific forms of confounding and exposure measurement error. This contribution discusses these, mostly in the framework of cohort studies. Evaluation of potential confounding is critical in studies of the health effects of air pollution.

  3. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  4. Simultaneous treatment of unspecified heteroskedastic model error distribution and mismeasured covariates for restricted moment models.

    Science.gov (United States)

    Garcia, Tanya P; Ma, Yanyuan

    2017-10-01

    We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.

  5. The Impact of Error-Management Climate, Error Type and Error Originator on Auditors’ Reporting Errors Discovered on Audit Work Papers

    NARCIS (Netherlands)

    A.H. Gold-Nöteberg (Anna); U. Gronewold (Ulfert); S. Salterio (Steve)

    2010-01-01

    We examine factors affecting the auditor’s willingness to report their own or their peers’ self-discovered errors in working papers subsequent to detailed working paper review. Prior research has shown that errors in working papers are detected in the review process; however, such

  6. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  7. Reverse Transcription Errors and RNA-DNA Differences at Short Tandem Repeats.

    Science.gov (United States)

    Fungtammasan, Arkarachai; Tomaszkiewicz, Marta; Campos-Sánchez, Rebeca; Eckert, Kristin A; DeGiorgio, Michael; Makova, Kateryna D

    2016-10-01

    Transcript variation has important implications for organismal function in health and disease. Most transcriptome studies focus on assessing variation in gene expression levels and isoform representation. Variation at the level of transcript sequence is caused by RNA editing and transcription errors, and leads to nongenetically encoded transcript variants, or RNA-DNA differences (RDDs). Such variation has been understudied, in part because its detection is obscured by reverse transcription (RT) and sequencing errors. It has only been evaluated for intertranscript base substitution differences. Here, we investigated transcript sequence variation for short tandem repeats (STRs). We developed the first maximum-likelihood estimator (MLE) to infer RT error and RDD rates, taking next generation sequencing error rates into account. Using the MLE, we empirically evaluated RT error and RDD rates for STRs in a large-scale DNA and RNA replicated sequencing experiment conducted in a primate species. The RT error rates increased exponentially with STR length and were biased toward expansions. The RDD rates were approximately 1 order of magnitude lower than the RT error rates. The RT error rates estimated with the MLE from a primate data set were concordant with those estimated with an independent method, barcoded RNA sequencing, from a Caenorhabditis elegans data set. Our results have important implications for medical genomics, as STR allelic variation is associated with >40 diseases. STR nonallelic transcript variation can also contribute to disease phenotype. The MLE and empirical rates presented here can be used to evaluate the probability of disease-associated transcripts arising due to RDD. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  8. The impact of measurement errors in the identification of regulatory networks

    Directory of Open Access Journals (Sweden)

    Sato João R

    2009-12-01

    Full Text Available Abstract Background There are several studies in the literature depicting measurement error in gene expression data and also several others about regulatory network models. However, only a small fraction describes a combination of measurement error in mathematical regulatory networks and shows how to identify these networks under different rates of noise. Results This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, the measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Square estimator in independent (regression) and dependent (autoregressive) models when the variables are subject to noise. Moreover, measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions Measurement error dangerously affects the identification of regulatory network models; thus, it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates identified in actual regulatory network models.
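
    The correction for the independent (regression) case can be sketched in a few lines of Python (illustrative values; it assumes the measurement-error variance is known, e.g. estimated from replicates as in the microarray procedures the authors describe):

        import numpy as np

        rng = np.random.default_rng(8)
        n, var_err = 5000, 0.5

        x_true = rng.normal(size=n)
        y = 0.8 * x_true + rng.normal(scale=0.3, size=n)  # true effect = 0.8
        x_obs = x_true + rng.normal(scale=np.sqrt(var_err), size=n)

        naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
        reliability = (np.var(x_obs, ddof=1) - var_err) / np.var(x_obs, ddof=1)
        corrected = naive / reliability
        print(f"naive: {naive:.3f}  corrected: {corrected:.3f}  (true 0.8)")

    The naive slope is attenuated toward zero by the reliability ratio; dividing by it removes the bias, which is the essence of the corrected estimator evaluated in the simulations above.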

  9. Multigroup transport calculations of critical and fuel assemblies with taking into account the scattering anisotropy

    International Nuclear Information System (INIS)

    Rubin, I.E.; Dneprovskaya, N.M.

    2005-01-01

    A technique for calculation of reactor lattices by means of transmission probabilities, taking into account the scattering anisotropy, is generalized to the multigroup case. The errors of the calculated multiplication coefficients and energy release distributions practically do not exceed the errors of these values obtained by the Monte Carlo method. The proposed method is most effective when determining small difference effects [ru

  10. Calculation of the Strip Foundation on Solid Elastic Base, Taking into Account the Karst Collapse

    Science.gov (United States)

    Sharapov, R.; Lodigina, N.

    2017-07-01

    Karst processes greatly complicate the construction and operation of buildings and structures. Karstic deformations have caused several major accidents at different times, and their analysis showed that in all cases fundamental errors were committed at different stages of building development: site selection, engineering survey, design, construction or operation of the facilities. The theory of beams on an elastic foundation is essential in building practice. Specialists in engineering facilities often have to resort to multiple design iterations to find efficient forms of construction for these facilities. This work calculates the stresses in cross-sections of a strip foundation under an evenly distributed load in the event of karst collapse. Extreme stresses in the event of karst are compared with those in its absence, treating the strip foundation as a beam on an elastic foundation.

  11. Policies on documentation and disciplinary action in hospital pharmacies after a medication error.

    Science.gov (United States)

    Bauman, A N; Pedersen, C A; Schommer, J C; Griffith, N L

    2001-06-15

    Hospital pharmacies were surveyed about policies on medication error documentation and actions taken against pharmacists involved in an error. The survey was mailed to 500 randomly selected hospital pharmacy directors in the United States. Data were collected on the existence of medication error reporting policies, what types of errors were documented and how, and hospital demographics. The response rate was 28%. Virtually all of the hospitals had policies and procedures for medication error reporting. Most commonly, documentation of oral and written reprimand was placed in the personnel file of a pharmacist involved in an error. One sixth of respondents had no policy on documentation or disciplinary action in the event of an error. Approximately one fourth of respondents reported that suspension or termination had been used as a form of disciplinary action; legal action was rarely used. Many respondents said errors that caused harm (42%) or death (40%) to the patient were documented in the personnel file, but 34% of hospitals did not document errors in the personnel file regardless of error type. Nearly three fourths of respondents differentiated between errors caught and not caught before a medication leaves the pharmacy and between errors caught and not caught before administration to the patient. More emphasis is needed on documentation of medication errors in hospital pharmacies.

  12. Account Deletion Prediction on RuNet: A Case Study of Suspicious Twitter Accounts Active During the Russian-Ukrainian Crisis

    Energy Technology Data Exchange (ETDEWEB)

    Volkova, Svitlana; Bell, Eric B.

    2016-06-17

    Social networks are dynamically changing over time, e.g., some accounts are being created and some are being deleted or become private. This ephemerality at both an account level and content level results from a combination of privacy concerns, spam, and deceptive behaviors. In this study we analyze a large dataset of 180,340 accounts active during the Russian-Ukrainian crisis to discover a series of predictive features for the removal or shutdown of a suspicious account. We find that unlike previously reported profile and network features, lexical features form the basis for highly accurate prediction of the deletion of an account.

  13. SIMULATION OF INERTIAL NAVIGATION SYSTEM ERRORS AT AERIAL PHOTOGRAPHY FROM UAV

    Directory of Open Access Journals (Sweden)

    R. Shults

    2017-05-01

    Full Text Available The problem of determining the accuracy of the UAV position using INS at aerial photography can be resolved in two different ways: modelling of measurement errors or in-field calibration of the INS. The paper presents the results of INS error research by mathematical modelling. The following steps were considered: developing an INS computer model; carrying out the INS simulation; and, using reference data without errors, estimating the errors and their influence on the accuracy of maps created from UAV data. It must be remembered that the values of the orientation angles and the coordinates of the projection centre may change abruptly due to the influence of the atmosphere (different air density, wind, etc.). Therefore, the mathematical model of the INS was constructed taking into account the use of different models of wind gusts. Typical characteristics of micro-electromechanical (MEMS) INS and parameters of the standard atmosphere were used for the simulation. According to the simulation, the systematic errors of the INS dominate; they accumulate during the execution of photographing and require a compensation mechanism, especially for the orientation angles. MEMS INS have a high level of noise at the system input. Thanks to the developed model, we are able to investigate separately the impact of noise in the absence of systematic errors. It was found that, over an observation interval of 5 seconds, the impact of the random and systematic components is almost the same. The developed model for the study of INS errors was implemented in the Matlab software environment and can readily be improved and enhanced with new blocks.
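
    A one-axis Python sketch of the behaviour reported above (the bias and angle-random-walk figures are generic MEMS-grade values, not the paper's): integrating a biased, noisy gyro signal shows the systematic drift and the random component contributing comparable attitude error over a 5-second interval.

        import numpy as np

        rng = np.random.default_rng(7)
        dt, t_end = 0.01, 5.0                 # 5 s photographing interval
        n = int(t_end / dt)

        bias = np.deg2rad(10) / 3600          # 10 deg/h systematic gyro bias
        arw = np.deg2rad(0.3) / 60            # 0.3 deg/sqrt(h) angle random walk
        rate_noise = arw / np.sqrt(dt)        # white-noise SD of the rate signal

        gyro = bias + rng.normal(scale=rate_noise, size=n)
        heading_err = np.cumsum(gyro) * dt    # integrated attitude error [rad]

        sys_err = bias * t_end                # deterministic drift component
        rnd_err = heading_err[-1] - sys_err   # noise-driven component
        to_arcsec = lambda x: np.rad2deg(x) * 3600
        print(f"systematic: {to_arcsec(sys_err):.0f} arcsec, "
              f"random: {to_arcsec(rnd_err):.0f} arcsec")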

  14. Evidence, exaggeration, and error in historical accounts of chaparral wildfires in California.

    Science.gov (United States)

    Goforth, Brett R; Minnich, Richard A

    2007-04-01

    For more than half a century, ecologists and historians have been integrating the contemporary study of ecosystems with data gathered from historical sources to evaluate change over broad temporal and spatial scales. This approach is especially useful where ecosystems were altered before formal study as a result of natural resources management, land development, environmental pollution, and climate change. Yet, in many places, historical documents do not provide precise information, and pre-historical evidence is unavailable or has ambiguous interpretation. There are similar challenges in evaluating how the fire regime of chaparral in California has changed as a result of fire suppression management initiated at the beginning of the 20th century. Although the firestorm of October 2003 was the largest officially recorded in California (approximately 300,000 ha), historical accounts of pre-suppression wildfires have been cited as evidence that such a scale of burning was not unprecedented, suggesting the fire regime and patch mosaic in chaparral have not substantially changed. We find that the data do not support pre-suppression megafires, and that the impression of large historical wildfires is a result of imprecision and inaccuracy in the original reports, as well as a parlance that is beset with hyperbole. We underscore themes of importance for critically analyzing historical documents to evaluate ecological change. A putative 100 mile long by 10 mile wide (160 x 16 km) wildfire reported in 1889 was reconstructed to an area of chaparral approximately 40 times smaller by linking local accounts to property tax records, voter registration rolls, claimed insurance, and place names mapped with a geographical information system (GIS) which includes data from historical vegetation surveys. We also show that historical sources cited as evidence of other large chaparral wildfires are either demonstrably inaccurate or provide anecdotal information that is immaterial in the

  15. APPLYING THE PRINCIPLES OF ACCOUNTING IN

    OpenAIRE

    NAGY CRISTINA MIHAELA; SABĂU CRĂCIUN; ”Tibiscus” University of Timişoara, Faculty of Economic Science

    2015-01-01

    The application of accounting principles (accounting principle on accrual basis; principle of business continuity; method consistency principle; prudence principle; independence principle; the principle of separate valuation of assets and liabilities; intangibility principle; non-compensation principle; the principle of substance over form; the principle of threshold significance) to companies that are in bankruptcy procedure has a number of particularities. Thus, some principl...

  16. Analysis technique for controlling system wavefront error with active/adaptive optics

    Science.gov (United States)

    Genberg, Victor L.; Michels, Gregory J.

    2017-08-01

    The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
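    SigFit itself is a commercial tool, so the sketch below is only a generic illustration of the underlying idea: with a linear optics model, the measured wavefront error vector w and the actuator influence functions (columns of a matrix A) give the corrective commands by least squares. All names and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples = 500     # WFE sampled at 500 pupil points (assumed)
n_actuators = 12    # actuator count (assumed)

# A: influence functions, i.e. WFE response per unit actuator command.
A = rng.standard_normal((n_samples, n_actuators))

# w: measured system wavefront error to be controlled.
w = rng.standard_normal(n_samples)

# Least-squares actuator commands that best cancel w: minimize ||w + A x||.
x, *_ = np.linalg.lstsq(A, -w, rcond=None)

# The residual is the uncorrectable part of the system WFE; its size is
# the kind of error estimate the analysis output would report.
residual = w + A @ x
print("RMS WFE before:", np.sqrt(np.mean(w**2)))
print("RMS WFE after: ", np.sqrt(np.mean(residual**2)))
```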

  17. Student Accountability in Team-Based Learning Classes

    Science.gov (United States)

    Stein, Rachel E.; Colyer, Corey J.; Manning, Jason

    2016-01-01

    Team-based learning (TBL) is a form of small-group learning that assumes stable teams promote accountability. Teamwork promotes communication among members; application exercises promote active learning. Students must prepare for each class; failure to do so harms their team's performance. Therefore, TBL promotes accountability. As part of the…

  18. Le parole che noi usiamo: l’errore in storia

    Directory of Open Access Journals (Sweden)

    Aurelio Musi

    2017-04-01

    Full Text Available Unlike the hard sciences, historiography lacks a specific nomenclature. The lexicon employed by historians is drawn from the plain language of everyday life. Therefore, the words of history are to be defined within the spatio-temporal framework, and to be construed through processes of contextualization and comparison. My work here stems from these considerations, and attempts to chart the occurrence of errors in historiography. In particular, I take into account the way in which historiographic mistakes arise from the intermingling of words, space, and historical time. Another significant aspect concerns the relationship between history, fiction, and arbitrariness. The latter concept is linked to historical interpretation, which constitutes the last stage of historiographical work, after the analysis and the reconstruction of events. The last part of this paper offers a typology of frequent errors in historiography.

  19. Drug administration errors in an institution for individuals with intellectual disability : an observational study

    NARCIS (Netherlands)

    van den Bemt, P M L A; Robertz, R; de Jong, A L; van Roon, E N; Leufkens, H G M

    BACKGROUND: Medication errors can result in harm, unless barriers to prevent them are present. Drug administration errors are less likely to be prevented, because they occur in the last stage of the drug distribution process. This is especially the case in non-alert patients, as patients often form

  20. Safe and effective error rate monitors for SS7 signaling links

    Science.gov (United States)

    Schmidt, Douglas C.

    1994-04-01

    This paper describes SS7 error monitor characteristics, discusses the existing SUERM (Signal Unit Error Rate Monitor), and develops the recently proposed EIM (Error Interval Monitor) for higher speed SS7 links. An SS7 error monitor is considered safe if it ensures acceptable link quality and is considered effective if it is tolerant to short-term phenomena. Formal criteria for safe and effective error monitors are formulated in this paper. The paper develops models of changeover transients, the unstable component of queue length resulting from errors. These models take the form of recursive digital filters. Time is divided into sequential intervals; the filter's input is the number of errors which have occurred in each interval, and the output is the corresponding change in transmit queue length. Engineered EIMs are constructed by comparing an estimated changeover transient with a threshold T, using a transient model modified to enforce SS7 standards. When this estimate exceeds T, a changeover is initiated and the link is removed from service. EIMs can be differentiated from the SUERM by the fact that EIMs monitor errors over an interval while SUERMs count errored messages. EIMs offer several advantages over SUERMs, including the fact that they are safe and effective, impose uniform standards on link quality, are easily implemented, and make minimal use of real-time resources.
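    A toy version of the monitor idea, assuming a first-order recursive filter whose input is the error count per interval and whose output stands in for the changeover-transient estimate; the gain, decay factor and threshold are invented, not values from the SS7 standards.

```python
# Illustrative error-interval monitor: recursive filter over per-interval
# error counts; trips a changeover when the estimated transient exceeds T.
def eim(error_counts, gain=1.0, decay=0.95, threshold=10.0):
    transient = 0.0
    for i, errors in enumerate(error_counts):
        # One filter step: decayed previous state plus new error input.
        transient = decay * transient + gain * errors
        if transient > threshold:
            return i  # interval at which changeover would be initiated
    return None       # link stays in service

# A burst of errors trips the monitor; sparse errors decay away and do not.
print(eim([0, 1, 0, 0, 2, 8, 9, 7]))   # -> 5 (changeover)
print(eim([1, 0, 1, 0, 1, 0, 1, 0]))   # -> None (in service)
```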

  1. Accountability in Times of Austerity

    DEFF Research Database (Denmark)

    Hansen, Hanne Foss; Kristiansen, Mads Bøge

    Like other countries, Denmark has been hit by the global financial, economic and fiscal crisis. The pressure on public finances has increased, and public sector reforms such as new and/or changed accountability systems for budgeting, spending controls and financial management have been launched... in the form of a Budget Law and new requirements for financial management. This makes it interesting to assess how these initiatives, introduced in times of austerity, affect accountability in central government, and to discuss their potential effects. Based on a democratic, a constitutional... and a learning perspective on public accountability, we assess the two initiatives through documentary material and interviews with civil servants who have designed or implemented the initiatives. The paper shows that the two initiatives strengthen and increase accountability from a democratic...

  2. JLab SRF Cavity Fabrication Errors, Consequences and Lessons Learned

    International Nuclear Information System (INIS)

    Marhauser, Frank

    2011-01-01

    Today, elliptical superconducting RF (SRF) cavities are preferably made from deep-drawn niobium sheets as pursued at Jefferson Laboratory (JLab). The fabrication of a cavity incorporates various cavity cell machining, trimming and electron beam welding (EBW) steps as well as surface chemistry that add to forming errors creating geometrical deviations of the cavity shape from its design. An analysis of in-house built cavities over the last years revealed significant errors in cavity production. Past fabrication flaws are described and lessons learned applied successfully to the most recent in-house series production of multi-cell cavities.

  3. Problems of intangible assets commercialization accounting

    Directory of Open Access Journals (Sweden)

    S.F. Legenchyk

    2016-03-01

    Full Text Available The growing role of intangible assets in a postindustrial global economy is substantiated. The problems of accounting for intangible assets are singled out. The basic tasks of accounting for the commercialization of intangible assets are determined. The difference between the commercialization of intellectual property and of intangible assets is considered. The basic approaches to understanding the essence of intangible assets commercialization are identified and substantiated. The forms and methods of intangible assets commercialization studied by the author are analyzed. The accounting treatment of licensee royalties is considered. The factors influencing the accounting for intangible assets commercialization are determined. The necessity of solving the problem of accounting for lease payments for computer programs accessed in a SaaS environment is substantiated. Prospects for further study of accounting for intangible assets commercialization are outlined.

  4. The role of certified reference materials in material control and accounting

    International Nuclear Information System (INIS)

    Turel, S.P.

    1979-01-01

    One way of providing an adequate material control and accounting system for the nuclear fuel cycle is to calculate material unaccounted for (MUF) after a physical inventory and to compare the limit of error of the MUF value (LEMUF) against prescribed criteria. To achieve a meaningful LEMUF, a programme for the continuing determination of systematic and random errors is necessary. Within this programme it is necessary to achieve traceability of all Special Nuclear Material (SNM) control and accounting measurements to an International/National Measurement System by means of Certified Reference Materials. SNM measurements for control and accounting are made internationally on a great variety of materials using many diverse measurement procedures by a large number of facilities. To achieve valid overall accountability over this great variety of measurements there must be some means of relating all these measurements and their uncertainties to each other. This is best achieved by an International/National Measurement System (IMS/NMS). To this end, all individual measurement systems must be compatible to the IMS/NMS and all measurement results must be traceable to appropriate international/national Primary Certified Reference Materials. To obtain this necessary compatibility for any given SNM measurement system, secondary certified reference materials or working reference materials are needed for every class of SNM and each type of measurement system. Ways to achieve ''traceability'' and the various types of certified reference material are defined and discussed in this paper. (author)

  5. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

    Data transfer in distributed environments is prone to frequent failures resulting from back-end, system-level problems, such as connectivity failures, which are technically untraceable by users. Error messages are not logged efficiently and are sometimes not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. Well-defined error detection and error reporting methods are necessary to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.

  6. Conceptual study of calibration software for large scale input accountancy tank

    International Nuclear Information System (INIS)

    Uchikoshi, Seiji; Yasu, Kan-ichi; Watanabe, Yuichi; Matsuda, Yuji; Kawai, Akio; Tamura, Toshiyuki; Shimizu, Hidehiko.

    1996-01-01

    Demonstration experiments for a large scale input accountancy tank are about to get under way at the Nuclear Material Control Center. Development of calibration software for an accountancy system with a dip-tube manometer is an important task in these experiments. A conceptual study of the software has been carried out in order to construct a high-precision accountancy system, based on ANSI N15.19-1989. The items of the study are the overall configuration, the correction method for the influence of bubble formation, the function model of calibration, and the fitting method for the calibration curve. The results of the study are as follows. 1) The overall configuration of the software was constructed. 2) It was shown by numerical solution that the influence of bubble formation can be corrected using the period of the pressure wave. 3) Two calibration function models, for well capacity and for inner structure volume, were prepared from the tank design, and the good fit of the model for net capacity (the balance of the two models) was confirmed by fitting it to the designed shape of the tank. 4) The necessity of further consideration of both the both-variables-in-error model and the cumulative-error model was recognized. We are going to develop practical software on the basis of these results and to verify it in the demonstration experiments. (author)
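    The "both-variables-in-error" fitting mentioned in item 4 is commonly handled with an errors-in-variables method such as Deming regression; whether the authors used exactly this method is not stated, so the following is only a sketch, assuming a known ratio of the two error variances.

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming regression: fit y = a + b*x when both x and y carry
    measurement error; delta is the ratio of the y to x error variances."""
    mx, my = np.mean(x), np.mean(y)
    sxx = np.mean((x - mx) ** 2)
    syy = np.mean((y - my) ** 2)
    sxy = np.mean((x - mx) * (y - my))
    b = (syy - delta * sxx
         + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    a = my - b * mx
    return a, b

# Example: a 'true' calibration line y = 2 + 0.5*x observed with noise
# in both variables (values invented).
rng = np.random.default_rng(1)
x_true = np.linspace(0, 100, 50)
x = x_true + rng.normal(0, 2, 50)
y = 2 + 0.5 * x_true + rng.normal(0, 2, 50)
print(deming_fit(x, y))   # ~ (2, 0.5)
```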

  7. The use of error and uncertainty methods in the medical laboratory.

    Science.gov (United States)

    Oosterhuis, Wytze P; Bayat, Hassan; Armbruster, David; Coskun, Abdurrahman; Freeman, Kathleen P; Kallner, Anders; Koch, David; Mackenzie, Finlay; Migliarino, Gabriel; Orth, Matthias; Sandberg, Sverre; Sylte, Marit S; Westgard, Sten; Theodorsson, Elvar

    2018-01-26

    Error methods - compared with uncertainty methods - offer simpler, more intuitive and practical procedures for calculating measurement uncertainty and conducting quality assurance in laboratory medicine. However, uncertainty methods are preferred in other fields of science as reflected by the guide to the expression of uncertainty in measurement. When laboratory results are used for supporting medical diagnoses, the total uncertainty consists only partially of analytical variation. Biological variation, pre- and postanalytical variation all need to be included. Furthermore, all components of the measuring procedure need to be taken into account. Performance specifications for diagnostic tests should include the diagnostic uncertainty of the entire testing process. Uncertainty methods may be particularly useful for this purpose but have yet to show their strength in laboratory medicine. The purpose of this paper is to elucidate the pros and cons of error and uncertainty methods as groundwork for future consensus on their use in practical performance specifications. Error and uncertainty methods are complementary when evaluating measurement data.
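    The uncertainty-method bookkeeping alluded to above is, at its core, a GUM-style combination of independent components in quadrature; a toy sketch with invented component values (expressed as coefficients of variation) follows.

```python
import math

# Combined standard uncertainty from independent components (quadrature).
# The component CVs (%) are illustrative, not values from the paper.
components = {
    "analytical": 3.0,
    "biological": 5.5,
    "preanalytical": 2.0,
    "postanalytical": 1.0,
}
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
print(f"combined uncertainty: {u_combined:.2f} %")   # ~6.7 %
```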

  8. EMERGING COMMON LAW DECISIONS IN GOODWILL ACCOUNTING REGULATION

    OpenAIRE

    Radu-Daniel LOGHIN

    2014-01-01

    With respect to financial reporting, statutory accounting standards and regulations form only a part of the normative landscape. In common law countries, besides these classic sources of norms and practices, there is an alternative basis for exercising the professional judgement of the accountant: the case law precedents which drive and, in some cases, supplement accounting regulations. For the purpose of this paper, goodwill accounting is explored from a normative perspective which...

  9. SNP discovery in nonmodel organisms: strand bias and base-substitution errors reduce conversion rates.

    Science.gov (United States)

    Gonçalves da Silva, Anders; Barendse, William; Kijas, James W; Barris, Wes C; McWilliam, Sean; Bunch, Rowan J; McCullough, Russell; Harrison, Blair; Hoelzel, A Rus; England, Phillip R

    2015-07-01

    Single nucleotide polymorphisms (SNPs) have become the marker of choice for genetic studies in organisms of conservation, commercial or biological interest. Most SNP discovery projects in nonmodel organisms apply a strategy for identifying putative SNPs based on filtering rules that account for random sequencing errors. Here, we analyse data used to develop 4723 novel SNPs for the commercially important deep-sea fish, orange roughy (Hoplostethus atlanticus), to assess the impact of not accounting for systematic sequencing errors when filtering identified polymorphisms during SNP discovery. We used SAMtools to identify polymorphisms in a velvet assembly of genomic DNA sequence data from seven individuals. The resulting set of polymorphisms was filtered to minimize 'bycatch' (polymorphisms caused by sequencing or assembly error). An Illumina Infinium SNP chip was used to genotype a final set of 7714 polymorphisms across 1734 individuals. Five predictors were examined for their effect on the probability of obtaining an assayable SNP: depth of coverage, number of reads that support a variant, polymorphism type (e.g. A/C), strand bias and Illumina SNP probe design score. Our results indicate that filtering out systematic sequencing errors could substantially improve the efficiency of SNP discovery. We show that BLASTX can be used as an efficient tool to identify single-copy genomic regions in the absence of a reference genome. The results have implications for research aiming to identify assayable SNPs and build SNP genotyping assays for nonmodel organisms. © 2014 John Wiley & Sons Ltd.

  10. Reduction of errors during practice facilitates fundamental movement skill learning in children with intellectual disabilities.

    Science.gov (United States)

    Capio, C M; Poolton, J M; Sit, C H P; Eguia, K F; Masters, R S W

    2013-04-01

    Children with intellectual disabilities (ID) have been found to have inferior motor proficiencies in fundamental movement skills (FMS). This study examined the effects of training the FMS of overhand throwing by manipulating the amount of practice errors. Participants included 39 children with ID aged 4-11 years who were allocated into either an error-reduced (ER) training programme or a more typical programme in which errors were frequent (error-strewn, ES). Throwing movement form, throwing accuracy, and throwing frequency during free play were evaluated. The ER programme improved movement form, and increased throwing activity during free play to a greater extent than the ES programme. Furthermore, ER learners were found to be capable of engaging in a secondary cognitive task while manifesting robust throwing accuracy performance. The findings support the use of movement skills training programmes that constrain practice errors in children with ID, suggesting that such approach results in improved performance and heightened movement engagement in free play. © 2012 The Authors. Journal of Intellectual Disability Research © 2012 Blackwell Publishing Ltd.

  11. When accounting was economics

    Directory of Open Access Journals (Sweden)

    Mieczysław Dobija

    2015-04-01

    Full Text Available The presented considerations and reflections aim to search for the beginning of accounting in terms of both ideas and procedures that make up a system that is operative in practice. The thesis that the economic calculus and procedures forming an accounting system have existed since the beginnings of civilization seems to be sufficiently justified. It should, however, be recognized that there was an initial activation period of civilization processes. Research has led to the conclusion that it was accounting for labor, not capital, that served communities from their beginnings. However, on the basis of theory, labor and capital are two related categories and both lead to double-entry, which is a characteristic feature of accounting. In the days before the creation of writing, tokens were used for thousands of years for recording and accounting purposes, being a useful tool in maintaining balance in the socio-economic system. The development of city-states and the emergence of writing techniques have improved the system by replacing token records on clay tablets. The dominance of labor accounting continued until the eleventh century BC, to the dark ages. Contemporary accounting, although geared more to the measurement of capital and its changes in the economic processes, still continues to operate according to the old paradigm and is focused on inputs in their historical cost perspective.

  12. The role of comprehensive check at the blood bank reception on blood requisitions in detecting potential transfusion errors.

    Science.gov (United States)

    Jain, Ashish; Kumari, Sonam; Marwaha, Neelam; Sharma, Ratti Ram

    2015-06-01

    Pre-transfusion testing includes proper requisitions, compatibility testing and pre-release checks. Proper labelling of samples and blood units and an accurate check of patient details help to minimize the risk of transfusion errors. This study aimed to identify requisition errors before compatibility testing. The study was conducted in the blood bank of a tertiary care hospital in north India over a period of 3 months. The requisitions were screened for errors at the reception counter and inside the pre-transfusion testing laboratory. This included checking the Central Registration number (C.R. No.) and name of the patient on the requisition form and the sample label; the appropriateness of the sample container and sample label; incomplete requisitions; and blood group discrepancies. Out of the 17,148 blood requisitions, 474 (2.76%) requisition errors were detected before compatibility testing. There were 192 (1.11%) requisitions in which the C.R. No. on the form and on the sample did not tally, and 70 (0.40%) in which the patient's name on the requisition form and on the sample differed. The highest number of requisition errors was observed in requisitions received from the Emergency and Trauma services (27.38%), followed by the Medical wards (15.82%), and the lowest number (3.16%) from the Hematology and Oncology wards. The C.R. No. error was the most common error observed in our study. Thus, a careful check of the blood requisitions at the blood bank reception counter helps in identifying potential transfusion errors.

  13. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures, which scrutinize the consequences of random errors alone, have turned out to be obsolete. As a matter of course, the error calculus to be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  14. 48 CFR 49.602-3 - Schedule of accounting information.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Schedule of accounting information. 49.602-3 Section 49.602-3 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION... accounting information. Standard Form 1439, Schedule of Accounting Information, shall be filed in support of...

  15. NDE errors and their propagation in sizing and growth estimates

    International Nuclear Information System (INIS)

    Horn, D.; Obrutsky, L.; Lakhan, R.

    2009-01-01

    this work, additional calculations can be performed as needed. Changes in the identification of correlated effects, the magnitude of errors, and the analytical form of voltage response can be made easily. The calculated errors on growth may be used to reduce conservative margins on plugging limits and the sensitivity analysis can be used to identify the technique improvements that would provide the greatest benefits. (author)

  16. Effects of a direct refill program for automated dispensing cabinets on medication-refill errors.

    Science.gov (United States)

    Helmons, Pieter J; Dalton, Ashley J; Daniels, Charles E

    2012-10-01

    The effects of a direct refill program for automated dispensing cabinets (ADCs) on medication-refill errors were studied. This study was conducted in designated acute care areas of a 386-bed academic medical center. A wholesaler-to-ADC direct refill program, consisting of prepackaged delivery of medications and bar-code-assisted ADC refilling, was implemented in the inpatient pharmacy of the medical center in September 2009. Medication-refill errors in 26 ADCs from the general medicine units, the infant special care unit, the surgical and burn intensive care units, and intermediate units were assessed before and after the implementation of this program. Medication-refill errors were defined as an ADC pocket containing the wrong drug, wrong strength, or wrong dosage form. ADC refill errors decreased by 77%, from 62 errors per 6829 refilled pockets (0.91%) to 8 errors per 3855 refilled pockets (0.21%), a statistically significant reduction. The most common error type detected before the intervention was an incorrect medication (wrong drug, wrong strength, or wrong dosage form) in the ADC pocket. Of the 54 incorrect medications found before the intervention, 38 (70%) were loaded in a multiple-drug drawer. After the implementation of the new refill process, 3 of the 5 incorrect medications were loaded in a multiple-drug drawer. There were 3 instances of expired medications before and only 1 expired medication after implementation of the program. A redesign of the ADC refill process using a wholesaler-to-ADC direct refill program that included delivery of prepackaged medication and bar-code-assisted refill significantly decreased the occurrence of ADC refill errors.

  17. An analysis of error patterns in children's backward digit recall in noise

    Science.gov (United States)

    Osman, Homira; Sullivan, Jessica R.

    2015-01-01

    The purpose of the study was to determine whether perceptual masking or cognitive processing accounts for the decline in working memory performance in the presence of competing speech. The types and patterns of errors made on the backward digit span in quiet and in multitalker babble at a -5 dB signal-to-noise ratio (SNR) were analyzed. The errors were classified into two categories: item errors (digits that were not presented in a list were repeated) and order errors (correct digits were repeated but in an incorrect order). Fifty-five children with normal hearing, all aged between 7 and 10 years, were included. Repeated-measures analysis of variance (RM-ANOVA) revealed main effects of error type and digit span length. Regarding the interaction with listening condition, order errors occurred more frequently than item errors in the degraded listening condition compared with quiet. In addition, children had more difficulty recalling the correct order of intermediate items, supporting strong primacy and recency effects. The decline in children's working memory performance was not primarily related to perceptual difficulties alone. The majority of errors were related to the maintenance of sequential order information, which suggests that reduced performance in competing speech may result from increased cognitive processing demands in noise. PMID:26168949

  18. Towards continuous improvement of endoscopy standards: Validation of a colonoscopy assessment form.

    LENUS (Irish Health Repository)

    2012-02-01

    Aim: Assessment of procedural colonoscopy skills is important and topical. The aim of this study was to develop and validate a competency-based colonoscopy assessment form that would be easy to use, suitable for the assessment of junior and senior endoscopists, and potentially a useful instrument to detect differences in performance standards following different training interventions. Method: A standardised assessment form was developed, incorporating a checklist with dichotomous yes/no responses and a global assessment section incorporating several different elements. This form was used prospectively to evaluate colonoscopy cases during the period of the study in several university teaching hospitals. Results were analysed using ANOVA with Bonferroni corrections for post hoc analysis. Results: 81 procedures were assessed, performed by eight consultant and 19 trainee endoscopists. There were no serious errors. When endoscopists were divided into three groups based on previous experience (novice, intermediate and expert), the assessment form demonstrated statistically significant differences between all three groups (p<0.05). When the separate elements were considered, the global assessment section was a better discriminator of skill level than the checklist. Conclusion: This form is a valid, easy-to-use assessment method. We intend to use it to assess the value of simulator training in trainee endoscopists. It also has the potential to be a useful training tool when feedback is given to the trainee.

  19. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  20. Determination of charged particle beam parameters with taking into account of space charge

    International Nuclear Information System (INIS)

    Ishkhanov, B.S.; Poseryaev, A.V.; Shvedunov, V.I.

    2005-01-01

    A procedure is described for determining the basic parameters of a paraxial, axially symmetric charged-particle beam, taking the space-charge contribution into account. The procedure is based on the general beam-envelope equation. Data are presented on its convergence and its robustness to measurement errors. The error in determining the crossover position (stretch) and the beam radius at the crossover is at most 15%, while the emittance determination error depends on the correlation between emittance and space charge. The procedure was used to determine the parameters of the 20 keV, 0.64 A electron beam of an available electron gun. The results agree closely with the design parameters [ru]

  1. Digitization errors using digital charge division positionsensitive detectors

    International Nuclear Information System (INIS)

    Berliner, R.; Mildner, D.F.R.; Pringle, O.A.

    1981-01-01

    The data acquisition speed and electronic stability of a charge-division position-sensitive detector may be improved by using digital signal processing with a table look-up high-speed multiply to form the charge-division quotient. This digitization process introduces a positional quantization difficulty which reduces the detector's position sensitivity. The degree of the digitization error depends on the pulse-height spectrum of the detector and on the resolution, or dynamic range, of the system's analog-to-digital converters. The effects have been investigated analytically and by computer simulation. The optimum position-sensing algorithm using 8-bit digitization and arithmetic has a digitization error of less than 1%. (orig.)
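    A rough illustration of the quantization effect, assuming the position is formed as the charge-division quotient Q_A/(Q_A+Q_B) from two charges digitized to 8 bits; the pulse-height spectrum and full-scale value are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(7)

# True event positions along the detector (0..1) and total charge pulses.
pos_true = rng.uniform(0.05, 0.95, 100_000)
q_total = rng.normal(200.0, 30.0, 100_000)     # assumed pulse-height spectrum

# Charge division: the fraction q_a/(q_a+q_b) encodes the position.
q_a = pos_true * q_total
q_b = q_total - q_a

# 8-bit digitization of each charge before forming the quotient.
full_scale = 400.0
q_a8 = np.round(np.clip(q_a, 0, full_scale) / full_scale * 255)
q_b8 = np.round(np.clip(q_b, 0, full_scale) / full_scale * 255)

pos_digital = q_a8 / np.maximum(q_a8 + q_b8, 1)
rms_err = np.sqrt(np.mean((pos_digital - pos_true) ** 2))
print(f"RMS position error from digitization: {100 * rms_err:.3f}% of detector length")
```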

  2. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    Science.gov (United States)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time-domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet it is only an approximate solution for power-law indices >1.0, since it requires the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
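    A sketch of the filter idea, assuming power-law noise is generated by fractionally integrating white noise (Hosking/Kasdin recursion for the impulse response) and the white-noise process is then simply added in the time domain, i.e. the "filter that adds noise processes". The amplitudes and spectral index are invented, and this only loosely follows the cited literature.

```python
import numpy as np

def powerlaw_filter(alpha, n):
    """Impulse response that fractionally integrates white noise into
    1/f^alpha power-law noise (Hosking recursion)."""
    h = np.zeros(n)
    h[0] = 1.0
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + alpha / 2.0) / k
    return h

n = 1024
alpha = 1.5                     # between flicker and random-walk (assumed)
rng = np.random.default_rng(3)

# One realization: power-law noise plus white noise, combined by addition
# in the time domain rather than in quadrature.
plaw = np.convolve(powerlaw_filter(alpha, n), rng.standard_normal(n))[:n]
series = 2.0 * plaw + 1.0 * rng.standard_normal(n)   # amplitudes assumed
print(series[:5])
```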

  3. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
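    For reference, the model-free value update driven by a reward prediction error takes the familiar delta-rule form; a toy sketch follows, with an assumed learning rate and reward sequence.

```python
# Model-free reward prediction error (RPE) and value update (delta rule).
# The learning rate and reward sequence are illustrative.
alpha = 0.2
value = 0.0
for reward in [1, 1, 0, 1, 0, 0, 1]:
    rpe = reward - value          # expectancy violation
    value += alpha * rpe          # value updated by the prediction error
    print(f"reward={reward}  RPE={rpe:+.3f}  value={value:.3f}")
```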

  4. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  5. Performance of muon reconstruction including Alignment Position Errors for 2016 Collision Data

    CERN Document Server

    CMS Collaboration

    2016-01-01

    Since the 2016 run, muon reconstruction has used non-zero Alignment Position Errors to account for the residual uncertainties of the muon chambers' positions. Significant improvements are obtained, in particular for the startup phase after opening/closing the muon detector. Performance results are presented for real data and MC simulations, for both the offline reconstruction and the High-Level Trigger.

  6. How to minimize perceptual error and maximize expertise in medical imaging

    Science.gov (United States)

    Kundel, Harold L.

    2007-03-01

    Visual perception is such an intimate part of human experience that we assume that it is entirely accurate. Yet, perception accounts for about half of the errors made by radiologists using adequate imaging technology. The true incidence of errors that directly affect patient well being is not known but it is probably at the lower end of the reported values of 3 to 25%. Errors in screening for lung and breast cancer are somewhat better characterized than errors in routine diagnosis. About 25% of cancers actually recorded on the images are missed and cancer is falsely reported in about 5% of normal people. Radiologists must strive to decrease error not only because of the potential impact on patient care but also because substantial variation among observers undermines confidence in the reliability of imaging diagnosis. Observer variation also has a major impact on technology evaluation because the variation between observers is frequently greater than the difference in the technologies being evaluated. This has become particularly important in the evaluation of computer aided diagnosis (CAD). Understanding the basic principles that govern the perception of medical images can provide a rational basis for making recommendations for minimizing perceptual error. It is convenient to organize thinking about perceptual error into five steps. 1) The initial acquisition of the image by the eye-brain (contrast and detail perception). 2) The organization of the retinal image into logical components to produce a literal perception (bottom-up, global, holistic). 3) Conversion of the literal perception into a preferred perception by resolving ambiguities in the literal perception (top-down, simulation, synthesis). 4) Selective visual scanning to acquire details that update the preferred perception. 5) Apply decision criteria to the preferred perception. The five steps are illustrated with examples from radiology with suggestions for minimizing error. The role of perceptual

  7. Influence of calculation error of total field anomaly in strongly magnetic environments

    Science.gov (United States)

    Yuan, Xiaoyu; Yao, Changli; Zheng, Yuanman; Li, Zelin

    2016-04-01

    An assumption made in many magnetic interpretation techniques is that ΔTact (the total field anomaly, i.e. the measurement given by total field magnetometers after the main geomagnetic field, T0, is removed) can be approximated mathematically by ΔTpro (the projection of the anomalous field vector onto the direction of the earth's normal field). To meet the demand for high-precision processing in magnetic prospecting, the approximation error E between ΔTact and ΔTpro is studied in this research. Generally speaking, the error E is extremely small for anomalies not greater than about 0.2 T0. However, the error E may be large in strongly magnetic environments, with significant effects on subsequent quantitative inference. Therefore, we investigate the error E through numerical experiments on highly magnetic bodies. A systematic error analysis was made using a 2-D elliptic cylinder model. The error analysis shows that the magnitude of ΔTact is usually larger than that of ΔTpro. This implies that a theoretical anomaly computed without accounting for the error E overestimates the anomaly associated with the body. It is demonstrated through numerical experiments that the error E is appreciable and should not be ignored. It is also shown that the curves of ΔTpro and of the error E have a certain symmetry as the directions of magnetization and of the geomagnetic field change. More specifically, Emax (the maximum of the error E) appears above the center of the magnetic body when the magnetic parameters are fixed. Some other characteristics of the error E were discovered; for instance, the curve of Emax with respect to latitude is symmetrical on both sides of the magnetic equator, and the extremum of Emax is always found at mid-latitudes. It is also demonstrated that the error E has a great influence on magnetic processing transformations and inversion results. It is concluded that when bodies have high magnetic susceptibilities, the error E can
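    The two quantities can be made concrete with vectors: ΔTact = |T0 + Ta| − |T0|, while ΔTpro = Ta · t̂0, where t̂0 is the unit vector along the main field. A small sketch with invented field values computes the error E = ΔTact − ΔTpro for a strong anomaly.

```python
import numpy as np

# Main geomagnetic field T0 and an anomalous field Ta, in nT (values invented).
T0 = np.array([0.0, 20000.0, 45000.0])
Ta = np.array([5000.0, 8000.0, 15000.0])    # strong anomaly, ~0.36*|T0|

T0_mag = np.linalg.norm(T0)
t0_hat = T0 / T0_mag

dT_act = np.linalg.norm(T0 + Ta) - T0_mag   # what a total-field magnetometer measures
dT_pro = Ta @ t0_hat                        # projection approximation

print(f"dT_act = {dT_act:.0f} nT, dT_pro = {dT_pro:.0f} nT, "
      f"E = {dT_act - dT_pro:.0f} nT")      # E is no longer negligible here
```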

  8. A continuous quality improvement project to reduce medication error in the emergency department.

    Science.gov (United States)

    Lee, Sara Bc; Lee, Larry Ly; Yeung, Richard Sd; Chan, Jimmy Ts

    2013-01-01

    Medication errors are a common source of adverse healthcare incidents, particularly in the emergency department (ED), which has a number of factors that make it prone to medication errors. This project aimed to reduce medication errors and improve the health and economic outcomes of clinical care in a Hong Kong ED. In 2009, a task group was formed to identify problems that potentially endanger medication safety and to develop strategies to eliminate them. Responsible officers were assigned to look after seven error-prone areas. Strategies were proposed, discussed, endorsed and promulgated to eliminate the problems identified. The number of medication incidents (MI) fell from 16 before the improvement work to 6 afterwards. This project successfully established a concrete organizational structure to safeguard error-prone areas of medication safety in a sustainable manner.

  9. Human decision error (HUMDEE) trees

    International Nuclear Information System (INIS)

    Ostrom, L.T.

    1993-01-01

    Graphical presentations of human actions in incident and accident sequences have been used for many years. However, for the most part, human decision making has been underrepresented in these trees. This paper presents a method for incorporating the human decision process into graphical presentations of incident/accident sequences, in the form of logic trees called Human Decision Error Trees, or HUMDEE for short. The primary benefit of HUMDEE trees is that they graphically illustrate what else the individuals involved in the event could have done to prevent either the initiation or continuation of the event. HUMDEE trees also present the alternate paths available at the operator decision points in the incident/accident sequence. This differs from the event trees of the Technique for Human Error Rate Prediction (THERP). These trees have many uses: they can be used in incident/accident investigations to show what other courses of action were available, and for training operators. The trees also have a consequence component, so that not only the decision but also its consequence can be explored.

  10. Nuclear material statistical accountancy system

    International Nuclear Information System (INIS)

    Argentest, F.; Casilli, T.; Franklin, M.

    1979-01-01

    The statistical accountancy system developed at JRC Ispra is referred to as 'NUMSAS', i.e. Nuclear Material Statistical Accountancy System. The principal feature of NUMSAS is that, in addition to an ordinary material balance calculation, it can calculate an estimate of the standard deviation of the measurement error accumulated in the material balance calculation. The purpose of the report is to describe in detail the statistical model on which the standard deviation calculation is based, the computational formula used by NUMSAS in calculating the standard deviation, and the information about nuclear material measurements and the plant measurement system that is required as input data for NUMSAS. The material balance records require processing and interpretation before the material balance calculation is begun. The material balance calculation is the last of four phases of data processing undertaken by NUMSAS, each implemented by a different computer program. The activities carried out in each phase can be summarised as follows: the pre-processing phase; the selection and update phase; the transformation phase; and the computation phase.
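    The core computation such a system performs can be sketched as follows, assuming independent measurement errors so that variances add in quadrature: MUF = beginning inventory + receipts − shipments − ending inventory, and the limit of error is conventionally taken as twice the propagated standard deviation. All numbers are illustrative, and a real system (NUMSAS included) would also model correlated and systematic error components.

```python
import math

# Each term: (measured value in kg, absolute 1-sigma measurement error in kg).
beginning_inventory = (1250.0, 2.0)
receipts            = (400.0, 1.5)
shipments           = (380.0, 1.5)
ending_inventory    = (1268.0, 2.5)

muf = (beginning_inventory[0] + receipts[0]
       - shipments[0] - ending_inventory[0])

# Independent errors propagate in quadrature.
sigma_muf = math.sqrt(sum(s ** 2 for _, s in
                          [beginning_inventory, receipts,
                           shipments, ending_inventory]))

print(f"MUF = {muf:.1f} kg, sigma(MUF) = {sigma_muf:.2f} kg, "
      f"LEMUF (2 sigma) = {2 * sigma_muf:.2f} kg")
```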

  11. An Agent-Based Approach for Evaluating Basic Design Options of Management Accounting Systems

    Directory of Open Access Journals (Sweden)

    Friederike Wall

    2013-12-01

    Full Text Available This paper investigates the effectiveness of reducing errors in management accounting systems with respect to organizational performance. In particular, different basic design options for management accounting systems, i.e. for how to improve the information base by measurements of actual values, are analyzed in different organizational contexts. The paper applies an agent-based simulation based on the idea of NK fitness landscapes. The results provide broad, but not universal, support for the conventional wisdom that lower inaccuracies of accounting information lead to more effective adaptation processes. Furthermore, the results indicate that the effectiveness of improving the management accounting system subtly interferes with the complexity of the interactions within the organization and the coordination mode applied.
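    A minimal sketch of the simulation idea, assuming an NK fitness landscape whose fitness readings are perturbed by accounting error before an agent hill-climbs on them; N, K, the noise levels and the step count are invented and much simpler than the paper's design.

```python
import numpy as np

rng = np.random.default_rng(5)
N, K = 10, 3                      # landscape size and interaction degree (assumed)

# Each bit's fitness contribution depends on itself and K random neighbors.
neighbors = [rng.choice([j for j in range(N) if j != i], K, replace=False)
             for i in range(N)]
tables = [dict() for _ in range(N)]

def fitness(bits):
    total = 0.0
    for i in range(N):
        key = (bits[i], tuple(bits[j] for j in neighbors[i]))
        if key not in tables[i]:
            tables[i][key] = rng.uniform()   # lazily drawn contribution
        total += tables[i][key]
    return total / N

def observed(bits, error_sd):
    # Accounting error: the agent sees only a noisy measurement of fitness.
    return fitness(bits) + rng.normal(0.0, error_sd)

def hill_climb(error_sd, steps=200):
    bits = list(rng.integers(0, 2, N))
    for _ in range(steps):
        cand = bits.copy()
        cand[rng.integers(N)] ^= 1           # flip one bit
        if observed(cand, error_sd) > observed(bits, error_sd):
            bits = cand                      # accept based on noisy readings
    return fitness(bits)                     # judge by true fitness

for sd in (0.0, 0.05, 0.2):
    print(f"error sd={sd:4.2f} -> final true fitness {hill_climb(sd):.3f}")
```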

  12. Application of a repeat-measure biomarker measurement error model to 2 validation studies: examination of the effect of within-person variation in biomarker measurements.

    Science.gov (United States)

    Preis, Sarah Rosner; Spiegelman, Donna; Zhao, Barbara Bojuan; Moshfegh, Alanna; Baer, David J; Willett, Walter C

    2011-03-15

    Repeat-biomarker measurement error models accounting for systematic correlated within-person error can be used to estimate the correlation coefficient (ρ) and deattenuation factor (λ), used in measurement error correction. These models account for correlated errors in the food frequency questionnaire (FFQ) and the 24-hour diet recall and random within-person variation in the biomarkers. Failure to account for within-person variation in biomarkers can exaggerate correlated errors between FFQs and 24-hour diet recalls. For 2 validation studies, ρ and λ were calculated for total energy and protein density. In the Automated Multiple-Pass Method Validation Study (n=471), doubly labeled water (DLW) and urinary nitrogen (UN) were measured twice in 52 adults approximately 16 months apart (2002-2003), yielding intraclass correlation coefficients of 0.43 for energy (DLW) and 0.54 for protein density (UN/DLW). The deattenuated correlation coefficient for protein density was 0.51 for correlation between the FFQ and the 24-hour diet recall and 0.49 for correlation between the FFQ and the biomarker. Use of repeat-biomarker measurement error models resulted in a ρ of 0.42. These models were similarly applied to the Observing Protein and Energy Nutrition Study (1999-2000). In conclusion, within-person variation in biomarkers can be substantial, and to adequately assess the impact of correlated subject-specific error, this variation should be assessed in validation studies of FFQs. © The Author 2011. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved.
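    The deattenuation step itself is simple; a sketch follows, using the classical result that random within-person variation in the biomarker attenuates an observed correlation by the square root of the biomarker's reliability (here its intraclass correlation). Note this one-liner omits the correlated systematic errors that the paper's full repeat-biomarker model handles, and the observed correlation below is an invented value.

```python
import math

# Illustrative deattenuation of an observed FFQ-biomarker correlation.
r_observed = 0.35        # assumed observed correlation (not from the paper)
icc_biomarker = 0.54     # intraclass correlation of repeated biomarker (protein)

# Correct for random within-person variation in the biomarker only:
r_deattenuated = r_observed / math.sqrt(icc_biomarker)
print(f"deattenuated correlation: {r_deattenuated:.2f}")
```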

  13. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains.

    Science.gov (United States)

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-05-01

    Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ

  14. Covariance approximation for large multivariate spatial data sets with an application to multiple climate model errors

    KAUST Repository

    Sang, Huiyan; Jun, Mikyoung; Huang, Jianhua Z.

    2011-01-01

    This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models

  15. The ACUMEN Portfolio: Accounting for Alternative Forms of Scholarly Output

    NARCIS (Netherlands)

    Wouters, P.; Tatum, C.

    2013-01-01

    New tools for measuring the impact of research (altmetrics) bring much needed attention to changing scholarly communication practices. However, alternative forms of output are still widely excluded from the evaluation of individual researchers. The ACUMEN project addresses this problem in two ways.

  16. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

    Full Text Available Probes for CNC machine tools, like every measurement device, have accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are used rarely. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but this is not a widely used procedure. In this paper, the shares of systematic errors and random errors in the total error of exemplary probes are determined. In the case of simple kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, it is shown that in the case of kinematic probes the commonly specified unidirectional repeatability is significantly better than the 2D performance. In the case of a more precise strain-gauge probe, however, systematic errors are of the same order as random errors, which means that error correction or compensation would not yield any significant benefits.
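    A toy error-budget sketch along the lines discussed above, assuming the systematic part is a direction-dependent pre-travel (correctable by calibration per direction) and the random part is the unidirectional repeatability (not correctable); the figures are invented.

```python
import numpy as np

# Pre-travel (systematic, direction-dependent) sampled at probing angles,
# plus unidirectional repeatability (random, 1 sigma). Values invented.
angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
pretravel_um = 5.0 + 3.0 * np.cos(2 * angles)   # varies with probing direction
repeatability_um = 0.5

systematic_span = pretravel_um.max() - pretravel_um.min()
print(f"pre-travel variation (systematic): {systematic_span:.1f} um")
print(f"repeatability (random, 2 sigma):   {2 * repeatability_um:.1f} um")

# Compensating the mean pre-travel per direction leaves only the random
# part; worthwhile here because 6 um >> 1 um, mirroring the kinematic-probe
# case in the paper, whereas a probe with equal shares would gain little.
```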

  17. Comment on "Infants' perseverative search errors are induced by pragmatic misinterpretation".

    Science.gov (United States)

    Spencer, John P; Dineva, Evelina; Smith, Linda B

    2009-09-25

    Topál et al. (Reports, 26 September 2008, p. 1831) proposed that infants' perseverative search errors can be explained by ostensive cues from the experimenter. We use the dynamic field theory to test the proposal that infants encode locations more weakly when social cues are present. Quantitative simulations show that this account explains infants' performance without recourse to the theory of natural pedagogy.

  18. Exploring human error in military aviation flight safety events using post-incident classification systems.

    Science.gov (United States)

    Hooper, Brionny J; O'Hare, David P A

    2013-08-01

    Human error classification systems theoretically allow researchers to analyze post-accident data in an objective and consistent manner. The Human Factors Analysis and Classification System (HFACS) framework is one such practical analysis tool that has been widely used to classify human error in aviation. The Cognitive Error Taxonomy (CET) is another. It has been postulated that the focus on interrelationships within HFACS can facilitate the identification of the underlying causes of pilot error, while the CET provides increased granularity at the level of unsafe acts. The aim was to analyze the influence of factors at higher organizational levels on the unsafe acts of front-line operators and to compare the errors of fixed-wing and rotary-wing operations. This study analyzed 288 aircraft incidents involving human error from an Australasian military organization occurring between 2001 and 2008. Action errors accounted for almost twice the proportion of rotary-wing incidents (44%) as of fixed-wing incidents (23%). Both classificatory systems showed significant relationships between precursor factors, such as the physical environment, mental and physiological states, crew resource management, training and personal readiness, and skill-based, but not decision-based, acts. The CET analysis showed different predisposing factors for different aspects of skill-based behaviors. Skill-based errors in military operations are more prevalent in rotary-wing incidents and are related to higher-level supervisory processes in the organization. The Cognitive Error Taxonomy provides increased granularity to HFACS analyses of unsafe acts.

  19. State Technical Committee for Accounting. Official Report.

    Science.gov (United States)

    Jensen, Claudia

    This report contains validated task inventory listings for accounting occupations. An introductory report in brief outline form gives background of the work of the technical committee that identified the duties and tasks. This is followed by four attachments which make up most of the document. Attachment A has two parts: (1) an accounting skills…

  20. FORMS OF YOUTH TRAVEL

    OpenAIRE

    Moisã Claudia Olimpia

    2011-01-01

    Taking into account the range of motivations that young people have when practicing tourism, it can be said that youth travel takes highly diverse forms. These forms include educational tourism, volunteer programs and "work and travel", cultural exchanges, and sports tourism and adventure travel. In this article, we identified and analyzed in detail the main forms of youth travel, both internationally and in Romania. We also illustrated, for each form of tourism, the specific tourism products targeting you...