WorldWideScience

Sample records for geometrical errors studied

  1. Compensation of kinematic geometric parameters error and comparative study of accuracy testing for robot

    Science.gov (United States)

    Du, Liang; Shi, Guangming; Guan, Weibin; Zhong, Yuansheng; Li, Jin

    2014-12-01

    Geometric error is the dominant error source in industrial robots, playing a more significant role than other error factors. A compensation model for kinematic errors is proposed in this article. Many methods can be used to test robot accuracy, which raises the question of how to decide which method is better. In this article, two methods for robot accuracy testing are compared: a Laser Tracker System (LTS) and a Three Coordinate Measuring instrument (TCM) were both used to test the robot accuracy according to the relevant standard. From the compensation results, the method that improves the robot accuracy more effectively is identified.
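
    A minimal sketch of the parameter-identification idea behind such kinematic compensation models, using a toy planar two-link arm and synthetic tracker measurements (the paper's actual robot, error model, and test standard are not reproduced here):

        # Identify kinematic (geometric) parameter errors from externally
        # measured TCP positions, then compensate. Illustrative only.
        import numpy as np

        def fk(q1, q2, L1, L2):
            """Forward kinematics of a planar 2-link arm."""
            return np.array([L1*np.cos(q1) + L2*np.cos(q1 + q2),
                             L1*np.sin(q1) + L2*np.sin(q1 + q2)])

        L_nom = np.array([0.50, 0.40])             # nominal link lengths [m]
        L_true = L_nom + np.array([1e-3, -5e-4])   # "unknown" geometric errors

        rng = np.random.default_rng(0)
        qs = rng.uniform(-np.pi/2, np.pi/2, size=(20, 2))  # 20 test poses
        meas = np.array([fk(q1, q2, *L_true) for q1, q2 in qs])

        # Position is linear in the link lengths, so a linear least-squares
        # fit of the residuals recovers the parameter errors exactly.
        A, b = [], []
        for (q1, q2), p in zip(qs, meas):
            A.append(np.array([[np.cos(q1), np.cos(q1 + q2)],
                               [np.sin(q1), np.sin(q1 + q2)]]))
            b.append(p - fk(q1, q2, *L_nom))
        dL, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
        print("identified link-length errors [m]:", dL)   # ~ [1e-3, -5e-4]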

  2. Geometrical error calibration in reflective surface testing based on reverse Hartmann test

    Science.gov (United States)

    Gong, Zhidong; Wang, Daodang; Xu, Ping; Wang, Chao; Liang, Rongguang; Kong, Ming; Zhao, Jun; Mo, Linhai; Mo, Shuhui

    2017-08-01

    In fringe-illumination deflectometry based on the reverse-Hartmann-test configuration, ray tracing of the modeled testing system is performed to reconstruct the test surface error. Careful calibration of the system geometry is required to achieve high testing accuracy. To realize high-precision surface testing with the reverse Hartmann test, a computer-aided geometrical error calibration method is proposed. The aberrations corresponding to various geometrical errors are studied. Using the aberration weights for the various geometrical errors, a computer-aided optimization of the system geometry with iterative ray tracing is carried out to calibrate the geometrical errors, and accuracy on the order of subnanometers is achieved.

  3. Forward error correction based on algebraic-geometric theory

    CERN Document Server

    A Alzubi, Jafar; M Chen, Thomas

    2014-01-01

    This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rates using different modulation schemes over an additive white Gaussian noise channel model. Simulation results for the bit error rate performance of algebraic-geometric codes using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity due to using algebraic-geometric codes and Chase-Pyndiah’s algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.
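
    For context, a minimal Monte Carlo of the kind of AWGN bit-error-rate experiment described above, here for uncoded, binary-labeled (not Gray-mapped) 16QAM in Python; the algebraic-geometric and Reed-Solomon encoders themselves are not reproduced:

        import numpy as np

        rng = np.random.default_rng(1)
        M, k = 16, 4                       # 16QAM, 4 bits per symbol
        EbN0_dB, n_sym = 10.0, 100_000

        # 4x4 constellation with unit average symbol energy
        levels = np.array([-3, -1, 1, 3])
        const = (levels[:, None] + 1j*levels[None, :]).ravel() / np.sqrt(10)

        tx_idx = rng.integers(0, M, n_sym)
        EsN0 = k * 10**(EbN0_dB / 10)
        noise = np.sqrt(1/(2*EsN0)) * (rng.standard_normal(n_sym)
                                       + 1j*rng.standard_normal(n_sym))
        rx = const[tx_idx] + noise

        # Minimum-distance detection, then count differing label bits
        rx_idx = np.abs(rx[:, None] - const[None, :]).argmin(axis=1)
        n_bit_err = np.unpackbits((tx_idx ^ rx_idx).astype(np.uint8)).sum()
        print("BER ~", n_bit_err / (k * n_sym))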

  4. Studies in geometric quantization

    International Nuclear Information System (INIS)

    Tuynman, G.M.

    1988-01-01

    This thesis contains five chapters, of which the first, entitled 'What is prequantization, and what is geometric quantization?', is meant as an introduction to geometric quantization for the non-specialist. The second chapter, entitled 'Central extensions and physics', deals with the notion of central extensions of manifolds and elaborates and proves the statements made in the first chapter. Central extensions of manifolds occur in physics as the freedom of a phase factor in the quantum mechanical state vector and as the phase factor in the prequantization process of classical mechanics, and they appear in mathematics in the study of central extensions of Lie groups. In this chapter the connection between these central extensions is investigated and a remarkable similarity between classical and quantum mechanics is shown. In chapter three a classical model is given for the hydrogen atom including spin-orbit and spin-spin interaction. The method of geometric quantization is applied to this model and the results are discussed. In the final chapters (4 and 5) an explicit method to calculate the operators corresponding to classical observables is given when the phase space is a Kaehler manifold. The formulas obtained are then used to quantize symplectic manifolds which are irreducible hermitian symmetric spaces, and the results are compared with other quantization procedures applied to these manifolds (in particular Berezin's quantization). 91 refs.; 3 tabs

  5. Analytical sensitivity analysis of geometric errors in a three axis machine tool

    International Nuclear Information System (INIS)

    Park, Sung Ryung; Yang, Seung Han

    2012-01-01

    In this paper, an analytical method is used to perform a sensitivity analysis of geometric errors in a three axis machine tool. First, an error synthesis model is constructed for evaluating the position volumetric error due to the geometric errors, and an output variable is defined as the magnitude of the position volumetric error. Next, a global sensitivity analysis is executed using an analytical method. Finally, the sensitivity indices are calculated using the quantitative values of the geometric errors.
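
    A minimal numerical counterpart to the analysis described above: first-order variance-based sensitivity indices for a toy linear error-synthesis model (the paper derives such indices analytically for the full three-axis model; the weights and standard deviations below are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(2)
        w = np.array([1.0, 0.6, 2.5])          # weights of 3 error sources
        sig = np.array([5e-6, 3e-6, 2e-6])     # their standard deviations

        def vol_err(x):
            # toy error synthesis model: linear combination of the sources
            return x @ w

        varY = vol_err(rng.normal(0, sig, size=(20_000, 3))).var()

        # First-order index S_i = Var(E[Y | X_i]) / Var(Y), double-loop estimate
        for i in range(3):
            cond_means = []
            for xi in rng.normal(0, sig[i], 200):      # fix X_i ...
                X = rng.normal(0, sig, size=(500, 3))  # ... average over the rest
                X[:, i] = xi
                cond_means.append(vol_err(X).mean())
            print(f"S_{i} ~ {np.var(cond_means) / varY:.2f}")
        # For a linear model these approach w_i^2 sig_i^2 / sum_j w_j^2 sig_j^2.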

  6. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the RTTTR configuration (tilting head B-axis and rotary table A-axis on the workpiece side) was set up using rigid body kinematics and homogeneous transformation matrices, in which 43 error components are included. Each of these 43 components can separately degrade the geometrical and dimensional accuracy of workpieces. The machining accuracy of a workpiece is determined by the position of the cutting tool center point (TCP) relative to the workpiece; when the cutting tool deviates from its ideal position relative to the workpiece, a machining error results. The compensation process consists of detecting the present tool path, analysing the geometrical errors of the RTTTR five-axis CNC machine tool, translating current component positions to compensated positions using the kinematic error model, converting the newly created positions to new tool paths using the compensation algorithms, and finally editing the old G-codes with a G-code generator algorithm.
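
    A minimal sketch of the rigid-body-kinematics idea: composing homogeneous transformation matrices with small error terms to obtain the TCP deviation, reduced here to a single linear axis with invented error values (the paper chains such matrices through all five axes):

        import numpy as np

        def htm_error(dx, dy, dz, ex, ey, ez):
            """Small-angle error HTM: 3 translational + 3 angular components."""
            return np.array([[1.0, -ez,  ey, dx],
                             [ ez, 1.0, -ex, dy],
                             [-ey,  ex, 1.0, dz],
                             [0.0, 0.0, 0.0, 1.0]])

        def htm_trans(x, y, z):
            T = np.eye(4)
            T[:3, 3] = [x, y, z]
            return T

        # Commanded X-axis move with (invented) position-dependent axis errors
        x_cmd = 0.3                                           # [m]
        E = htm_error(2e-6, 5e-6, -3e-6, 1e-6, 4e-6, 2e-6)
        T_nom = htm_trans(x_cmd, 0, 0)
        T_act = T_nom @ E

        tcp = np.array([0.0, 0.0, 0.0, 1.0])    # TCP at the carriage origin
        dev = (T_act - T_nom) @ tcp
        print("TCP deviation [m]:", dev[:3])    # the volumetric error at x_cmd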

  7. Geometrical modelling of scanning probe microscopes and characterization of errors

    International Nuclear Information System (INIS)

    Marinello, F; Savio, E; Bariani, P; Carmignato, S

    2009-01-01

    Scanning probe microscopes (SPMs) allow quantitative evaluation of surface topography with ultra-high resolution, as a result of accurate actuation combined with the sharpness of tips. SPMs measure sequentially, by scanning surfaces in a raster fashion: topography maps commonly consist of data sets ideally reported in an orthonormal rectilinear Cartesian coordinate system. However, due to scanning errors and measurement distortions, the measurement process is far from the ideal Cartesian condition. The paper addresses geometrical modelling of the scanning system dynamics, presenting a mathematical model which describes the surface metric x-, y- and z-coordinates as functions of the measured x'-, y'- and z'-coordinates, respectively. The complete mathematical model provides a relevant contribution to the characterization and calibration, and ultimately to the traceability, of SPMs when applied for quantitative characterization.

  8. The problem of assessing landmark error in geometric morphometrics: theory, methods, and modifications.

    Science.gov (United States)

    von Cramon-Taubadel, Noreen; Frazier, Brenda C; Lahr, Marta Mirazón

    2007-09-01

    Geometric morphometric methods rely on the accurate identification and quantification of landmarks on biological specimens. As in any empirical analysis, the assessment of inter- and intra-observer error is desirable. A review of methods currently being employed to assess measurement error in geometric morphometrics was conducted and three general approaches to the problem were identified. One such approach employs Generalized Procrustes Analysis to superimpose repeatedly digitized landmark configurations, thereby establishing whether repeat measures fall within an acceptable range of variation. The potential problem of this error assessment method (the "Pinocchio effect") is demonstrated and its effect on error studies discussed. An alternative approach involves employing Euclidean distances between the configuration centroid and repeat measures of a landmark to assess the relative repeatability of individual landmarks. This method is also potentially problematic as the inherent geometric properties of the specimen can result in misleading estimates of measurement error. A third approach involved the repeated digitization of landmarks with the specimen held in a constant orientation to assess individual landmark precision. This latter approach is an ideal method for assessing individual landmark precision, but is restrictive in that it does not allow for the incorporation of instrumentally defined or Type III landmarks. Hence, a revised method for assessing landmark error is proposed and described with the aid of worked empirical examples. (c) 2007 Wiley-Liss, Inc.
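
    A minimal sketch of the first approach reviewed above: Generalized Procrustes superimposition of repeated digitizations with a per-landmark scatter summary (toy 2D data; real studies would use full GPA with reflection handling and tests against an acceptable range of variation):

        import numpy as np

        rng = np.random.default_rng(3)
        true = rng.normal(size=(8, 2))                      # 8 "true" landmarks
        reps = true + rng.normal(0, 0.02, size=(10, 8, 2))  # 10 repeat digitizations

        def align(A, B):
            """Partial Procrustes: translate, scale to unit centroid size,
            and rotate A onto B (reflections not handled in this sketch)."""
            A = A - A.mean(0); B = B - B.mean(0)
            A = A / np.linalg.norm(A); B = B / np.linalg.norm(B)
            U, _, Vt = np.linalg.svd(A.T @ B)
            return A @ U @ Vt

        mean = reps[0].copy()
        for _ in range(5):                 # iterate to the GPA consensus
            aligned = np.array([align(r, mean) for r in reps])
            mean = aligned.mean(0)

        # Per-landmark scatter around the consensus = repeatability measure
        scatter = np.linalg.norm(aligned - mean, axis=2).mean(0)
        print("per-landmark repeatability:", np.round(scatter, 4))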

  9. Geometrical Design Errors in Duhok Intersections by Driver Behavior

    Directory of Open Access Journals (Sweden)

    Dilshad Ali Mohammed

    2018-03-01

    In many situations, drivers who are certain of the absence of a traffic monitoring system tend to shorten their driving paths and travel time across intersections. This behavior is encouraged if the geometrical design suffers from mistakes, or if the geometrical design and road conditions make it harder for drivers to follow the correct routes. Sometimes the intersection arrangement makes it confusing for the driver to distinguish the right track from the wrong one. In this study, two sites with a large number of driving mistakes were observed. One site is a roundabout within the University of Duhok campus. The other is the intersection just outside the University of Duhok eastern main gate. At both sites, the geometry is very confusing and encourages driving mistakes. The university roundabout, the first site investigated, was not properly designed, encouraging wrong-side driving. Many traffic accidents took place at this roundabout. Wrong-side driving reaches 32% at peak hour on one approach; this was reduced to 6% when a temporary divisional island was installed. The other approach has 15% wrong-side driving, and no remedy could be applied to it. At the intersection near the university gate, wrong-side driving reaches 56% of the traffic emerging from the main gate at peak hour. This was reduced to 14% when drivers were guided by a direction sign, and further to 9% with a policeman standing by.

  10. Modeling of Geometric Error in Linear Guide Way to Improve the Vertical Three-Axis CNC Milling Machine’s Accuracy

    Science.gov (United States)

    Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna

    2018-03-01

    The purpose of this study was to improve the accuracy of vertical three-axis CNC milling machines through a general approach based on mathematical modeling of machine tool geometric errors. The inaccuracy of CNC machines can be caused by geometric errors, which are an important factor during the manufacturing process and the assembly phase, and a key factor in building high-accuracy machines. The accuracy of the three-axis vertical milling machine is improved by identifying the geometric errors and the error position parameters in the machine tool and arranging them in a mathematical model. The geometric error of the machine tool consists of twenty-one error parameters: nine linear error parameters, nine angular error parameters and three perpendicularity (squareness) error parameters. The mathematical model captures the alignment and angular errors of the components supporting the machine motion, namely the linear guide way and linear motion elements. The purpose of this mathematical modeling approach is the identification of geometric errors, which can serve as a reference during the design, assembly and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling geometric errors in CNC machine tools can illustrate the relationship between alignment error, position and angle on a linear guide way of three-axis vertical milling machines.
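
    In the usual notation, the twenty-one-parameter budget the abstract describes can be written out as follows (standard bookkeeping, shown in LaTeX; δ denotes translational errors, ε angular errors, S squareness):

        % for each axis i in {X, Y, Z}:
        \delta_x(i),\ \delta_y(i),\ \delta_z(i)                  % 9 linear errors
        \varepsilon_x(i),\ \varepsilon_y(i),\ \varepsilon_z(i)   % 9 angular errors
        % between axes:
        S_{XY},\ S_{YZ},\ S_{XZ}                                 % 3 squareness errors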

  11. Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors

    Science.gov (United States)

    Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.

    2018-04-01

    The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs etc.). Kalman's concept of "Control and Observation" is used. A versatile multi-function laser interferometer is used as the Observer in order to measure the machine's error functions. A systematic error map of the machine's workspace is produced based on the error function measurements. The error map results in an error correction strategy. The article proposes a new method of forming the error correction strategy. The method is based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
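
    A minimal sketch of the map-based correction step: interpolate a measured volumetric error map at a commanded position and pre-compensate it, here with a toy one-component error field on a coarse grid (a real postprocessor would correct all components along the whole tool path):

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        ax = np.linspace(0.0, 0.5, 6)           # 6x6x6 grid over the workspace [m]
        gx, gy, gz = np.meshgrid(ax, ax, ax, indexing="ij")
        err_x = 1e-5*gx + 5e-6*gy               # toy systematic X-error field
        interp = RegularGridInterpolator((ax, ax, ax), err_x)

        p_cmd = np.array([0.21, 0.13, 0.07])    # commanded position
        p_corr = p_cmd.copy()
        p_corr[0] -= interp(p_cmd[None])[0]     # pre-compensate the X component
        print("commanded:", p_cmd, "-> corrected:", p_corr)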

  12. Virtual machining considering dimensional, geometrical and tool deflection errors in three-axis CNC milling machines

    OpenAIRE

    Soori, Mohsen; Arezoo, Behrooz; Habibi, Mohsen

    2014-01-01

    Virtual manufacturing systems can provide useful means for products to be manufactured without the need for physical testing on the shop floor. As a result, the time and cost of part production can be decreased. There are different error sources in machine tools, such as tool deflection, geometrical deviations of moving axes and thermal distortions of machine tool structures. Some of these errors can be decreased by controlling the machining process and environmental parameters. However other e...

  13. Virtual machining considering dimensional, geometrical and tool deflection errors in three-axis CNC milling machines

    OpenAIRE

    Soori, Mohsen; Arezoo, Behrooz; Habibi, Mohsen

    2016-01-01

    Virtual manufacturing systems can provide useful means for products to be manufactured without the need for physical testing on the shop floor. As a result, the time and cost of part production can be decreased. There are different error sources in machine tools, such as tool deflection, geometrical deviations of moving axes and thermal distortions of machine tool structures. Some of these errors can be decreased by controlling the machining process and environmental parameters. However other e...

  14. Estimation of Dynamic Errors in Laser Optoelectronic Dimension Gauges for Geometric Measurement of Details

    Directory of Open Access Journals (Sweden)

    Khasanov Zimfir

    2018-01-01

    The article reviews the capabilities and particularities of an approach to improving the metrological characteristics of fiber-optic pressure sensors (FOPS), based on the estimation of dynamic errors in laser optoelectronic dimension gauges for geometric measurement of details. It is shown that the proposed criteria yield new methods for coupling the optoelectronic converters in the dimension gauge for geometric measurements, in order to reduce the speed and volume requirements on the Random Access Memory (RAM) of the video controller which processes the signal. It is found that the lower the relative error, the higher the interrogation speed of the CCD array. It is shown that the maximum achievable dynamic accuracy characteristics of the optoelectronic gauge are thus determined by the following conditions: the parameter stability of the electronic circuits in the CCD array and the microprocessor calculator; linearity of characteristics; and error dynamics and noise in all electronic circuits of the CCD array and microprocessor calculator.

  15. Error Bounds for Augmented Truncations of Discrete-Time Block-Monotone Markov Chains under Geometric Drift Conditions

    OpenAIRE

    Masuyama, Hiroyuki

    2014-01-01

    In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally,...
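
    A small numerical illustration of the objects the paper bounds: the last-column-augmented truncation of a Markov chain and its total variation error against the stationary distribution, using a geometrically ergodic random walk on the nonnegative integers (invented rates; the paper's bounds are analytic and far more general):

        import numpy as np

        def stationary(P):
            """Stationary distribution: left eigenvector for eigenvalue 1."""
            w, V = np.linalg.eig(P.T)
            v = np.real(V[:, np.argmin(np.abs(w - 1))])
            return v / v.sum()

        p, q = 0.3, 0.5                    # up/down rates, p < q => ergodic
        def walk(n):
            """Reflected random walk on {0, ..., n-1} (reference chain)."""
            P = np.zeros((n, n))
            for i in range(n):
                P[i, min(i + 1, n - 1)] += p
                P[i, max(i - 1, 0)] += q
                P[i, i] += 1 - p - q
            return P

        N = 400
        pi_ref = stationary(walk(N))       # near-exact reference distribution

        for n in (10, 20, 40):
            Pn = walk(N)[:n, :n]               # northwest-corner truncation
            Pn[:, -1] += 1 - Pn.sum(axis=1)    # augment lost mass, last column
            pi_n = stationary(Pn)
            tv = 0.5 * (np.abs(pi_n - pi_ref[:n]).sum() + pi_ref[n:].sum())
            print(f"n = {n:3d}:  total variation error ~ {tv:.2e}")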

  16. Measurement system and model for simultaneously measuring 6DOF geometric errors.

    Science.gov (United States)

    Zhao, Yuqiong; Zhang, Bin; Feng, Qibo

    2017-09-04

    A measurement system to simultaneously measure six degree-of-freedom (6DOF) geometric errors is proposed. The measurement method is based on a combination of mono-frequency laser interferometry and laser fiber collimation. A simpler and more integrated optical configuration is designed. To compensate for the measurement errors introduced by error crosstalk, element fabrication error, laser beam drift, and nonparallelism of the two measurement beams, a unified measurement model, which can improve the measurement accuracy, is deduced and established using the ray-tracing method. A numerical simulation using the optical design software Zemax is conducted, and the results verify the correctness of the model. Several experiments are performed to demonstrate the feasibility and effectiveness of the proposed system and measurement model.

  17. Modeling of random geometric errors in superconducting magnets with applications to the CERN Large Hadron Collider

    Directory of Open Access Journals (Sweden)

    P. Ferracin

    2000-12-01

    Estimates of random field-shape errors induced by cable mispositioning in superconducting magnets are presented, and specific applications to the Large Hadron Collider (LHC) main dipoles and quadrupoles are extensively discussed. Numerical simulations obtained with Monte Carlo methods are compared to analytic estimates and are used to interpret the experimental data for the LHC dipole and quadrupole prototypes. The proposed approach can predict the effect of magnet tolerances on the geometric components of random field-shape errors, and it is a useful tool to monitor the obtained tolerances during magnet production.
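
    A toy version of such a Monte Carlo in Python: four line currents stand in for a dipole coil cross section, their positions are randomly perturbed, and the spread of the resulting normalized field harmonics is estimated (the geometry, currents and tolerances below are invented; real simulations use the detailed conductor placement):

        import numpy as np

        rng = np.random.default_rng(4)
        a, Rref = 0.028, 0.017            # conductor radius, reference radius [m]
        phi0 = np.array([np.pi/6, 5*np.pi/6, 7*np.pi/6, 11*np.pi/6])
        sgn = np.array([1.0, -1.0, -1.0, 1.0])   # crude cos-theta dipole

        def harmonics(dr, dphi, nmax=6):
            """2D line-current field harmonics C_n ~ sum_k I_k Rref^(n-1)/z_k^n."""
            z = (a + dr) * np.exp(1j * (phi0 + dphi))
            n = np.arange(1, nmax + 1)[:, None]
            return (sgn * Rref**(n - 1) / z**n).sum(axis=1)

        c_nom = harmonics(np.zeros(4), np.zeros(4))         # nominal harmonics
        trials = np.array([harmonics(rng.normal(0, 5e-5, 4),       # radial [m]
                                     rng.normal(0, 5e-5, 4) / a)   # azimuthal
                           for _ in range(2000)])
        spread = trials.std(axis=0) / abs(c_nom[0])  # sigma relative to dipole
        print("sigma of harmonics n=1..6 [1e-4 units]:", np.round(spread*1e4, 2))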

  18. Systematic error in the precision measurement of the mean wavelength of a nearly monochromatic neutron beam due to geometric errors

    Energy Technology Data Exchange (ETDEWEB)

    Coakley, K.J., E-mail: kevin.coakley@nist.gov [National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305 (United States); Dewey, M.S. [National Institute of Standards and Technology, Gaithersburg, MD (United States); Yue, A.T. [University of Tennessee, Knoxville, TN (United States); Laptev, A.B. [Tulane University, New Orleans, LA (United States)

    2009-12-11

    Many experiments at neutron scattering facilities require nearly monochromatic neutron beams. In such experiments, one must accurately measure the mean wavelength of the beam. We seek to reduce the systematic uncertainty of this measurement to approximately 0.1%. This work is motivated mainly by an effort to improve the measurement of the neutron lifetime determined from data collected in a 2003 in-beam experiment performed at NIST. More specifically, we seek to reduce systematic uncertainty by calibrating the neutron detector used in this lifetime experiment. This calibration requires simultaneous measurement of the responses of both the neutron detector used in the lifetime experiment and an absolute black neutron detector to a highly collimated nearly monochromatic beam of cold neutrons, as well as a separate measurement of the mean wavelength of the neutron beam. The calibration uncertainty will depend on the uncertainty of the measured efficiency of the black neutron detector and the uncertainty of the measured mean wavelength. The mean wavelength of the beam is measured by Bragg diffracting the beam from a nearly perfect silicon analyzer crystal. Given the rocking curve data and knowledge of the directions of the rocking axis and the normal to the scattering planes in the silicon crystal, one determines the mean wavelength of the beam. In practice, the direction of the rocking axis and the normal to the silicon scattering planes are not known exactly. Based on Monte Carlo simulation studies, we quantify systematic uncertainties in the mean wavelength measurement due to these geometric errors. Both theoretical and empirical results are presented and compared.
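
    For orientation, the baseline relation behind the measurement and its first-order sensitivity are the standard Bragg condition and its propagation (the paper's Monte Carlo treats the full rocking-axis and plane-normal misalignment geometry, which contributes beyond this first-order term):

        \lambda = 2 d_{hkl} \sin\theta_B ,
        \qquad
        \frac{\delta\lambda}{\lambda} = \cot\theta_B \,\delta\theta_B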

  19. Three-point method for measuring the geometric error components of linear and rotary axes based on sequential multilateration

    International Nuclear Information System (INIS)

    Zhang, Zhenjiu; Hu, Hong

    2013-01-01

    The linear and rotary axes are fundamental parts of multi-axis machine tools. The geometric error components of the axes must be measured for motion error compensation to improve the accuracy of the machine tools. In this paper, a simple method named the three-point method is proposed to measure the geometric errors of the linear and rotary axes of machine tools using a laser tracker. A sequential multilateration method, whose uncertainty is verified through simulation, is applied to measure the 3D coordinates. Three noncollinear points fixed on the stage of each axis are selected. The coordinates of these points are simultaneously measured using a laser tracker to obtain their volumetric errors by comparing these coordinates with ideal values. Numerous equations can be established using the geometric error models of each axis, and the geometric error components can be obtained by solving these equations. The validity of the proposed method is verified through a series of experiments. The results indicate that the proposed method can measure the geometric errors of the axes to compensate for the errors in multi-axis machine tools.
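
    A minimal sketch of the multilateration step on which the method rests: recovering one 3D point from measured distances to several known station positions by Gauss-Newton least squares (invented station layout and noise level):

        import numpy as np

        rng = np.random.default_rng(5)
        stations = np.array([[0, 0, 0], [2, 0, 0], [0, 2, 0],
                             [0, 0, 2], [2, 2, 1.0]])      # tracker positions
        p_true = np.array([0.7, 1.1, 0.4])
        d = np.linalg.norm(stations - p_true, axis=1) + rng.normal(0, 1e-6, 5)

        p = np.array([1.0, 1.0, 1.0])          # initial guess
        for _ in range(10):                    # Gauss-Newton iterations
            r = np.linalg.norm(stations - p, axis=1)
            J = (p - stations) / r[:, None]    # Jacobian d(r)/d(p)
            step, *_ = np.linalg.lstsq(J, d - r, rcond=None)
            p += step
        print("recovered point:", p, " residual:", np.linalg.norm(p - p_true))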

  20. Study into Point Cloud Geometric Rigidity and Accuracy of TLS-Based Identification of Geometric Bodies

    Science.gov (United States)

    Klapa, Przemyslaw; Mitka, Bartosz; Zygmunt, Mariusz

    2017-12-01

    The capability of obtaining a multimillion point cloud in a very short time has made Terrestrial Laser Scanning (TLS) a widely used tool in many fields of science and technology. TLS accuracy matches traditional devices used in land surveying (tacheometry, GNSS-RTK), but like any measurement it is burdened with error, which affects the precise identification of objects based on their image in the form of a point cloud. The point coordinates are determined indirectly by measuring the angles and calculating the time of travel of the electromagnetic wave. Each such component has a measurement error which is translated into the final result. The XYZ coordinates of a measured point are determined with some uncertainty, and the accuracy of determining these coordinates decreases as the distance to the instrument increases. The paper presents the results of an examination of the geometrical stability of a point cloud obtained by means of a terrestrial laser scanner, and an accuracy evaluation of solids determined using the cloud. A Leica P40 scanner and two different settings of measuring points were used in the tests. The first concept involved placing a few balls in the field and then scanning them from various sides at similar distances. The second part of the measurement involved placing balls and scanning them a few times from one side but at varying distances from the instrument to the object. Each measurement encompassed a scan of the object with automatic determination of its position and geometry. The desk studies involved a semiautomatic fitting of solids, measurement of their geometrical elements, and comparison of the parameters that determine their geometry and location in space. The differences in the measures of the geometrical elements of the balls and the translation vectors of the solids' centres indicate geometrical changes of the point cloud depending on the scanning distance and parameters. The results indicate changes in the geometry of the scanned objects.

  1. The Most Common Geometric and Semantic Errors in CityGML Datasets

    Science.gov (United States)

    Biljecki, F.; Ledoux, H.; Du, X.; Stoter, J.; Soon, K. H.; Khoo, V. H. S.

    2016-10-01

    To be used as input in most simulation and modelling software, 3D city models should be geometrically and topologically valid, and semantically rich. We investigate in this paper the quality of currently available CityGML datasets, i.e. we validate the geometry/topology of the 3D primitives (Solid and MultiSurface), and we validate whether the semantics of the boundary surfaces of buildings are correct or not. We have analysed all the CityGML datasets we could find, both from portals of cities and on different websites, plus a few that were made available to us. We have thus validated 40M surfaces in 16M 3D primitives and 3.6M buildings found in 37 CityGML datasets originating from 9 countries, and produced by several companies with diverse software and acquisition techniques. The results indicate that CityGML datasets without errors are rare, and those that are nearly valid are mostly simple LOD1 models. We report on the most common errors we have found, and analyse them. One main observation is that many of these errors could be automatically fixed or prevented with simple modifications to the modelling software. Our principal aim is to highlight the most common errors so that these are not repeated in the future. We hope that our paper and the open-source software we have developed will help raise awareness for data quality among data providers and 3D GIS software producers.

  2. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    International Nuclear Information System (INIS)

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven; Kavanaugh, James; Harold Li, H.; Altman, Michael; Gay, Hiram; Thorstad, Wade L.; Mutic, Sasa; Li, Hua; Anastasio, Mark A.; Low, Daniel A.

    2015-01-01

    Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets

  3. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven; Kavanaugh, James; Harold Li, H.; Altman, Michael; Gay, Hiram; Thorstad, Wade L.; Mutic, Sasa; Li, Hua, E-mail: huli@radonc.wustl.edu [Department of Radiation Oncology, Washington University, St. Louis, Missouri 63110 (United States); Anastasio, Mark A. [Department of Biomedical Engineering, Washington University, St. Louis, Missouri 63110 (United States); Low, Daniel A. [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California 90095 (United States)

    2015-02-15

    Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets

  4. A Method to Optimize Geometric Errors of Machine Tool based on SNR Quality Loss Function and Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Cai Ligang

    2017-01-01

    Instead of blindly improving the accuracy of a machine tool by increasing the precision of its key components during the production process, a method combining the SNR quality loss function with machine tool geometric error correlation analysis is adopted to optimize the geometric errors of a five-axis machine tool. Firstly, the homogeneous transformation matrix method is used to build the five-axis machine tool geometric error model. Secondly, the SNR quality loss function is used for cost modeling. Then, the machine tool accuracy optimization objective function is established based on the correlation analysis. Finally, ISIGHT combined with MATLAB is applied to optimize each error. The results show that this method makes it reasonable and appropriate to relax the range of tolerance values, so as to reduce the manufacturing cost of machine tools.

  5. Error performance analysis in K-tier uplink cellular networks using a stochastic geometric approach

    KAUST Repository

    Afify, Laila H.

    2015-09-14

    In this work, we develop an analytical paradigm to analyze the average symbol error probability (ASEP) performance of uplink traffic in a multi-tier cellular network. The analysis is based on the recently developed Equivalent-in-Distribution approach that utilizes stochastic geometric tools to account for the network geometry in the performance characterization. Different from the other stochastic geometry models adopted in the literature, the developed analysis accounts for important communication system parameters and goes beyond signal-to-interference-plus-noise ratio characterization. That is, the presented model accounts for the modulation scheme, constellation type, and signal recovery techniques to model the ASEP. To this end, we derive single integral expressions for the ASEP for different modulation schemes due to aggregate network interference. Finally, all theoretical findings of the paper are verified via Monte Carlo simulations.

  6. Development of a simple system for simultaneously measuring 6DOF geometric motion errors of a linear guide.

    Science.gov (United States)

    Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You

    2013-11-04

    A simple method for simultaneously measuring the 6DOF geometric motion errors of a linear guide is proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring the laser beam drift is proposed, and it is used to compensate for the errors produced by the laser beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments against standard measurement instruments showed that our system has a standard deviation of 0.5 µm in a range of ± 100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" in the range of ± 100" for pitch, yaw, and roll measurements, respectively.

  7. System for simultaneously measuring 6DOF geometric motion errors using a polarization maintaining fiber-coupled dual-frequency laser.

    Science.gov (United States)

    Cui, Cunxing; Feng, Qibo; Zhang, Bin; Zhao, Yuqiong

    2016-03-21

    A novel method for simultaneously measuring six degree-of-freedom (6DOF) geometric motion errors is proposed in this paper, and the corresponding measurement instrument is developed. Simultaneous measurement of 6DOF geometric motion errors using a polarization maintaining fiber-coupled dual-frequency laser is accomplished for the first time to the best of the authors' knowledge. Dual-frequency laser beams that are orthogonally linear polarized were adopted as the measuring datum. Positioning error measurement was achieved by heterodyne interferometry, and other 5DOF geometric motion errors were obtained by fiber collimation measurement. A series of experiments was performed to verify the effectiveness of the developed instrument. The experimental results showed that the stability and accuracy of the positioning error measurement are 31.1 nm and 0.5 μm, respectively. For the straightness error measurements, the stability and resolution are 60 and 40 nm, respectively, and the maximum deviation of repeatability is ± 0.15 μm in the x direction and ± 0.1 μm in the y direction. For pitch and yaw measurements, the stabilities are 0.03″ and 0.04″, the maximum deviations of repeatability are ± 0.18″ and ± 0.24″, and the accuracies are 0.4″ and 0.35″, respectively. The stability and resolution of roll measurement are 0.29″ and 0.2″, respectively, and the accuracy is 0.6″.

  8. Error performance analysis in K-tier uplink cellular networks using a stochastic geometric approach

    KAUST Repository

    Afify, Laila H.; Elsawy, Hesham; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2015-01-01

    -in-Distribution approach that utilizes stochastic geometric tools to account for the network geometry in the performance characterization. Different from the other stochastic geometry models adopted in the literature, the developed analysis accounts for important

  9. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).

  10. Error studies of Halbach Magnets

    Energy Technology Data Exchange (ETDEWEB)

    Brooks, S. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2017-03-02

    These error studies were done on the Halbach magnets for the CBETA “First Girder” as described in note [CBETA001]. The CBETA magnets have since changed slightly to the lattice in [CBETA009]; however, this is not a large enough change to significantly affect the results here. The QF and BD arc FFAG magnets are considered. For each assumed set of error distributions and each ideal magnet, 100 random magnets with errors are generated. These are then run through an automated version of the iron wire multipole cancellation algorithm. The maximum wire diameter allowed is 0.063” as in the proof-of-principle magnets. Initially, 32 wires (2 per Halbach wedge) are tried; if this does not achieve 1e-4 level accuracy in the simulation, 48 and then 64 wires are used. By “1e-4 accuracy”, it is meant that the FOM defined by √(Σ_{n≥sextupole} (a_n² + b_n²)) is less than 1 unit, where the multipoles are taken at the maximum nominal beam radius, R = 23 mm for these magnets. The algorithm initially uses 20 convergence iterations. If 64 wires do not achieve 1e-4 accuracy, this is increased to 50 iterations to check for slowly converging cases. There are also classifications for magnets that do not achieve 1e-4 but do achieve 1e-3 (FOM ≤ 10 units). This is technically within the spec discussed in the Jan 30, 2017 review; however, there will be errors in practical shimming not dealt with in the simulation, so it is preferable to do much better than the spec in the simulation.
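
    The figure of merit just defined is simple enough to state directly in code (a sketch; the multipole values below are invented and assumed already normalized to 1e-4 units at R = 23 mm):

        import numpy as np

        def fom(a_n, b_n):
            """Quadrature sum of skew (a_n) and normal (b_n) multipoles from
            the sextupole term up, in 1e-4 units referenced to R = 23 mm."""
            return np.sqrt(np.sum(np.square(a_n) + np.square(b_n)))

        # e.g. residual sextupole/octupole/decapole content after shimming:
        print(fom([0.30, -0.20, 0.05], [0.10, 0.15, -0.02]))  # < 1 meets 1e-4 goal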

  11. B-spline goal-oriented error estimators for geometrically nonlinear rods

    Science.gov (United States)

    2011-04-01

    [Abstract available only as extraction fragments: the report develops goal-oriented a posteriori error estimators for B-spline discretizations of geometrically nonlinear rods, reporting errors for linear, quadratic and nonlinear (trigonometric sine and cosine) output functionals q2–q4 and for polynomial degrees p = 1, 2 in all the tests considered.]

  12. Study of Errors among Nursing Students

    Directory of Open Access Journals (Sweden)

    Ella Koren

    2007-09-01

    The study of errors in the health system today is a topic of considerable interest, aimed at reducing errors through analysis of the phenomenon and the conclusions reached. Errors that occur frequently among health professionals have also been observed among nursing students. True, in most cases they are actually “near errors,” but these could be a future indicator of therapeutic reality and of the effect of nurses' work environment on their personal performance. There are two different approaches to such errors: (a) The EPP (error-prone person) approach lays full responsibility at the door of the individual involved in the error, whether a student, nurse, doctor, or pharmacist. According to this approach, handling consists purely in identifying and penalizing the guilty party. (b) The EPE (error-prone environment) approach emphasizes the environment as a primary contributory factor to errors. The environment as an abstract concept includes components and processes of interpersonal communications, work relations, human engineering, workload, pressures, technical apparatus, and new technologies. The objective of the present study was to examine the role played by factors in and components of personal performance as compared to elements and features of the environment. The study was based on both of the aforementioned approaches, which, when combined, enable a comprehensive understanding of the phenomenon of errors among the student population as well as a comparison of factors contributing to human error and to error deriving from the environment. The theoretical basis of the study was a model that combined both approaches: one focusing on the individual and his or her personal performance and the other focusing on the work environment. The findings emphasize the work environment of health professionals as an EPE. However, errors could have been avoided by means of strict adherence to practical procedures. The authors examined error events in the

  13. A Comparative Study on Error Analysis

    DEFF Research Database (Denmark)

    Wu, Xiaoli; Zhang, Chun

    2015-01-01

    Title: A Comparative Study on Error Analysis. Subtitle: Belgian (L1) and Danish (L1) learners’ use of Chinese (L2) comparative sentences in written production. Xiaoli Wu, Chun Zhang. Abstract: Making errors is an inevitable and necessary part of learning. The collection, classification and analysis of errors in the written and spoken production of L2 learners has a long tradition in L2 pedagogy. Yet, in teaching and learning Chinese as a foreign language (CFL), only a handful of studies have been made either to define the ‘error’ in a pedagogically insightful way or to empirically investigate the occurrence of errors in either linguistic or pedagogical terms. The purpose of the current study is to demonstrate the theoretical and practical relevance of the error analysis approach in CFL by investigating two cases: (1) Belgian (L1) learners’ use of Chinese (L2) comparative sentences in written production...

  14. The Zoom Lens: A Case Study in Geometrical Optics.

    Science.gov (United States)

    Cheville, Alan; Scepanovic, Misa

    2002-01-01

    Introduces a case study on a motion picture company considering the purchase of a newly developed zoom lens in which students act as the engineers designing the zoom lens based on the criteria of company's specifications. Focuses on geometrical optics. Includes teaching notes and classroom management strategies. (YDS)

  15. Extension of instance search technique by geometric coding and quantization error compensation

    OpenAIRE

    García Del Molino, Ana

    2013-01-01

    This final-year project (PFC) analyzes two ways of improving video retrieval techniques for the instance search problem. On the one hand, "Pairing Interest Points for a better Signature using Sparse Detector's Spatial Information" allows the Bag-of-Words model to keep some spatial information. On the other, "Study of the Hamming Embedding Signature Symmetry in Video Retrieval" provides binary signatures that refine the matching based on visual words, and aims to find the best way of matching taking into acc...

  16. Compensation for geometric modeling errors by positioning of electrodes in electrical impedance tomography

    International Nuclear Information System (INIS)

    Hyvönen, N; Majander, H; Staboulis, S

    2017-01-01

    Electrical impedance tomography aims at reconstructing the conductivity inside a physical body from boundary measurements of current and voltage at a finite number of contact electrodes. In many practical applications, the shape of the imaged object is subject to considerable uncertainties that render reconstructing the internal conductivity impossible if they are not taken into account. This work numerically demonstrates that one can compensate for inaccurate modeling of the object boundary in two spatial dimensions by finding compatible locations and sizes for the electrodes as a part of a reconstruction algorithm. The numerical studies, which are based on both simulated and experimental data, are complemented by proving that the employed complete electrode model is approximately conformally invariant, which suggests that the obtained reconstructions in mismodeled domains reflect conformal images of the true targets. The numerical experiments also confirm that a similar approach does not, in general, lead to a functional algorithm in three dimensions. (paper)

  17. Sex determination from the frontal bone: a geometric morphometric study.

    Science.gov (United States)

    Perlaza, Néstor A

    2014-09-01

    Sex estimation from human skeletal remains using the cranium through traditional methods is a fundamental pillar of human identification; however, a margin of error may be incurred because of the state of preservation of incomplete or fragmented remains. The aim of this investigation was sex estimation through geometric morphometric analysis of the frontal bone. The sample comprised 60 lateral radiographs of adult subjects of both sexes (30 males and 30 females), aged between 18 and 40 years, with mean ages of 28 ± 4 years for males and 30 ± 6 years for females. Thin-plate splines evidenced strong expansion of the glabellar region in males and contraction in females. No significant differences were found between sexes with respect to size. The findings suggest differences in shape and size in the glabellar region, besides reaffirming the use of geometric morphometrics as a quantitative method in sex estimation. © 2014 American Academy of Forensic Sciences.

  18. Errors in the universal and sufficient heuristic criteria of estimating validity limits of geometric optics and of the geometric theory of diffraction

    International Nuclear Information System (INIS)

    Borovikov, V.A.; Kinber, B.E.

    1988-01-01

    The heuristic criteria (HC) of validity of geometric optics (GO) and of the geometric theory of diffraction (GTD), suggested in [2-7, 13, 14] and based on identifying the physical volume occupied by the ray with the Fresnel volume (FV) introduced in these papers (i.e., the envelope of the first Fresnel zone), are analyzed. Numerous examples of HC invalidity are given, as well as the reasons. In particular, HC provide an incorrect answer for all GO problems with caustics, since in these problems there always exists a ray, whose FV is nonlocal and covers the FV of other rays. The HC are shown to be unsuitable for multiple ray GTD problems, as well as for the simplest problems of diffraction of a cylindrical wave by a half-plane and of a plane wave by a curved half-plane

  19. Geometrical study of phyllotactic patterns by Bernoulli spiral lattices.

    Science.gov (United States)

    Sushida, Takamichi; Yamagishi, Yoshikazu

    2017-06-01

    Geometrical studies of phyllotactic patterns deal with the centric or cylindrical models produced by ideal lattices. van Iterson (Mathematische und mikroskopisch-anatomische Studien über Blattstellungen nebst Betrachtungen über den Schalenbau der Miliolinen, Verlag von Gustav Fischer, Jena, 1907) suggested a centric model representing ideal phyllotactic patterns as disk packings of Bernoulli spiral lattices and presented a phase diagram, now called Van Iterson's diagram, explaining the bifurcation processes of their combinatorial structures. Geometrical properties of such disk packings were shown by Rothen & Koch (J. Phys. France, 50(13), 1603-1621, 1989). In contrast, as another centric model, we organized a mathematical framework of Voronoi tilings of Bernoulli spiral lattices and showed mathematically that the phase diagram of a Voronoi tiling is graph-theoretically dual to Van Iterson's diagram. This paper gives a review of the two centric models: disk packings and Voronoi tilings of Bernoulli spiral lattices. © 2017 Japanese Society of Developmental Biologists.

  20. Geometrical correction for the inter- and intramolecular basis set superposition error in periodic density functional theory calculations.

    Science.gov (United States)

    Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan

    2013-09-26

    We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface.

  1. Study of WATCH GRB error boxes

    DEFF Research Database (Denmark)

    Gorosabel, J.; Castro-Tirado, A. J.; Lund, Niels

    1995-01-01

    We have studied the first WATCH GRB Catalogue ofγ-ray Bursts in order to find correlations between WATCH GRB error boxes and a great variety of celestial objects present in 33 different catalogues. No particular class of objects has been found to be significantly correlated with the WATCH GRBs....

  2. Entropy Measures as Geometrical Tools in the Study of Cosmology

    Directory of Open Access Journals (Sweden)

    Gilbert Weinstein

    2017-12-01

    Classical chaos is often characterized as exponential divergence of nearby trajectories. In many interesting cases these trajectories can be identified with geodesic curves. We define here the entropy by S = ln χ(x), with χ(x) being the distance between two nearby geodesics. We derive an equation for the entropy, which by transformation to a Riccati-type equation becomes similar to the Jacobi equation. We further show that the geodesic equation for a null geodesic in a double-warped spacetime leads to the same entropy equation. By applying a Robertson–Walker metric for a flat three-dimensional Euclidean space expanding as a function of time, we again reach the entropy equation, stressing the connection between the chosen entropy measure and time. We finally turn to the Raychaudhuri equation for expansion, which is also a Riccati equation similar to the transformed entropy equation. Those Riccati-type equations have solutions of the same form as the Jacobi equation. The Raychaudhuri equation can be transformed to a harmonic oscillator equation, and it has been shown that the geodesic deviation equation of Jacobi is essentially equivalent to that of a harmonic oscillator. The Raychaudhuri equations are strong geometrical tools in the study of general relativity and cosmology. We suggest a refined entropy measure applicable in cosmology and defined by the average deviation of the geodesics in a congruence.
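
    The transformation the abstract summarizes is a one-line substitution (shown here for the Jacobi equation with curvature term K; this step is standard and not specific to the paper's refined measure):

        \ddot{\chi} + K\chi = 0 ,
        \qquad S = \ln\chi
        \;\Rightarrow\;
        \dot{S} = \frac{\dot{\chi}}{\chi} ,
        \qquad
        \ddot{S} + \dot{S}^{2} + K = 0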

  3. A simple and efficient dispersion correction to the Hartree-Fock theory (2): Incorporation of a geometrical correction for the basis set superposition error.

    Science.gov (United States)

    Yoshida, Tatsusada; Hayashi, Takahisa; Mashima, Akira; Chuman, Hiroshi

    2015-10-01

    One of the most challenging problems in computer-aided drug discovery is the accurate prediction of the binding energy between a ligand and a protein. For accurate estimation of the net binding energy ΔEbind in the framework of the Hartree-Fock (HF) theory, it is necessary to estimate two additional energy terms: the dispersion interaction energy (Edisp) and the basis set superposition error (BSSE). We previously reported a simple and efficient dispersion correction, Edisp, to the Hartree-Fock theory (HF-Dtq). In the present study, an approximation procedure for estimating BSSE proposed by Kruse and Grimme, the geometrical counterpoise correction (gCP), was incorporated into HF-Dtq (HF-Dtq-gCP). The relative weights of the Edisp (Dtq) and BSSE (gCP) terms were determined to reproduce ΔEbind calculated with CCSD(T)/CBS or /aug-cc-pVTZ (HF-Dtq-gCP (scaled)). The performance of HF-Dtq-gCP (scaled) was compared with that of B3LYP-D3(BJ)-bCP (dispersion-corrected B3LYP with the Boys and Bernardi counterpoise correction (bCP)), taking ΔEbind (CCSD(T)-bCP) of small non-covalent complexes as a 'gold standard'. As a critical test, HF-Dtq-gCP (scaled)/6-31G(d) and B3LYP-D3(BJ)-bCP/6-31G(d) were applied to a complex model of HIV-1 protease and its potent inhibitor KNI-10033. The present results demonstrate that HF-Dtq-gCP (scaled) is a useful and powerful remedy for accurately and promptly predicting ΔEbind between a ligand and a protein, albeit a simple correction procedure. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Nursing Errors in Intensive Care Unit by Human Error Identification in Systems Tool: A Case Study

    Directory of Open Access Journals (Sweden)

    Nezamodini

    2016-03-01

    Background: Although health services are designed and implemented to improve human health, errors in health services are a very common phenomenon and are sometimes even fatal. Medical errors and their costs are global issues with serious consequences for the patient community; they are preventable and require serious attention. Objectives: The current study aimed to identify possible nursing errors by applying the human error identification in systems tool (HEIST) in the intensive care units (ICUs) of hospitals. Patients and Methods: This descriptive research was conducted in the intensive care unit of a hospital in Khuzestan province in 2013. Data were collected through observation and interviews with nine nurses in this unit over a period of four months. Human error classification was based on the Rose and Rose and the Swain and Guttmann models. Following the HEIST worksheets, the guide questions were answered and error causes were identified after determining the types of errors. Results: In total, 527 errors were detected. Performing an operation on the wrong path had the highest frequency (150), followed by performing tasks later than the deadline (136). Management causes, with a frequency of 451, ranked first among the identified error causes. Errors mostly occurred in the system observation stage, and among the performance shaping factors (PSFs), time was the most influential factor in the occurrence of human errors. Conclusions: In order to prevent the occurrence and reduce the consequences of the identified errors, the following measures are suggested: appropriate training courses, applying work guidelines and monitoring their implementation, increasing the number of work shifts, hiring professional workforce, and equipping the workspace with appropriate facilities and equipment.

  5. Optical, geometric and thermal study for solar parabolic concentrator efficiency improvement under Tunisia environment: A case study

    International Nuclear Information System (INIS)

    Skouri, Safa; Ben Salah, Mohieddine; Bouadila, Salwa; Balghouthi, Moncef; Ben Nasrallah, Sassi

    2013-01-01

    Highlights: • Design and construction of a solar parabolic concentrator. • Photogrammetry study of the SPC. • Slope error and optical efficiency of the SPC. • Reflector materials of the SPC. • Programmed solar tracking system. - Abstract: Renewable energy generation is becoming more prevalent today. It is relevant to consider that solar concentration technologies contribute to providing a real alternative to the consumption of fossil fuels. The purpose of this work is the characterization of a solar parabolic concentrator (SPC) designed, constructed and tested at the Research and Technologies Centre of Energy in Tunisia (CRTEn) in order to improve the performance of the system. Photogrammetry measurements were used to analyze the slope errors and hence determine the geometric deformation of the SPC system, which presents an average slope error of around 0.0002 and 0.0073 mrad in the center and at the extremities, respectively. The best-performing reflector material was selected through an experimental study of three types of reflectors. A two-axis programmed tracking system was realized, used and tested in this study. An experimental study was carried out to evaluate the solar parabolic concentrator thermal efficiency after the mechanical and optical SPC optimization. The thermal energy efficiency varies from 40% to 77%, and the concentrating system reaches an average concentration factor of around 178.

  6. Geometrical optics in general relativity: A study of the higher order corrections

    International Nuclear Information System (INIS)

    Anile, A.M.

    1976-01-01

    The higher order corrections to geometrical optics are studied in general relativity for an electromagnetic test wave. An explicit expression is found for the average energy-momentum tensor which takes into account the first-order corrections. Finally, the first-order corrections to the well-known area-intensity law of geometrical optics are derived.
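
    For orientation, the corrections referred to here are conventionally organized as an expansion of the test field in a small parameter ε (the ratio of wavelength to the curvature scale); a generic sketch of this standard ansatz, not the paper's specific notation, reads:

        F_{\mu\nu} = \mathrm{Re}\!\left[ \left( A_{\mu\nu} + \epsilon\,B_{\mu\nu} + \epsilon^{2} C_{\mu\nu} + \cdots \right) e^{i\theta/\epsilon} \right], \qquad k_{\mu} \equiv \partial_{\mu}\theta,

    with k_\mu k^\mu = 0 holding only at leading order; the first-order terms modify both the ray equations and the transported energy-momentum.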

  7. The Study of Birefringent Homogenous Medium with Geometric Phase

    International Nuclear Information System (INIS)

    Banerjee, Dipti

    2010-12-01

    The property of linear and circular birefringence at each point of the optical medium has been evaluated here from the differential matrix N using the Jones calculus. This matrix lies on the OAM sphere for l = 1 orbital angular momentum. The geometric phase is developed by twisting the medium uniformly about the direction of propagation of the light ray. The circular birefringence of the medium is visualized through the solid angle and the angular twist per unit thickness of the medium, k, which is equivalent to the topological charge of the optical element. (author)
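
    The twisted-medium construction can be made concrete numerically by composing Jones matrices of thin slices whose fast axes rotate progressively, the standard slice picture behind such a uniformly twisted birefringent element. A minimal sketch follows; the parameter values are illustrative, and the paper itself works with the differential matrix N analytically:

        import numpy as np

        def linear_retarder(delta):
            # Jones matrix of a linear retarder, fast axis horizontal
            return np.array([[np.exp(-1j * delta / 2), 0],
                             [0, np.exp(1j * delta / 2)]])

        def rotation(theta):
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s], [s, c]])

        n_slices = 2000
        total_twist = np.pi / 2      # total twist about the propagation axis (rad)
        delta_total = 2 * np.pi      # total linear retardance (rad)

        # Build the medium slice by slice, each slice's fast axis rotated a bit more
        J = np.eye(2, dtype=complex)
        for n in range(n_slices):
            theta = total_twist * n / n_slices
            J = rotation(theta) @ linear_retarder(delta_total / n_slices) @ rotation(-theta) @ J

        E_out = J @ np.array([1.0, 0.0])   # horizontally polarized input
        print(np.abs(E_out))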

  8. Experimental Study of Vibration Isolation Characteristics of a Geometric Anti-Spring Isolator

    Directory of Open Access Journals (Sweden)

    Lixun Yan

    2017-07-01

    Full Text Available In order to realize low-frequency vibration isolation, a novel geometric anti-spring isolator consisting of several cantilever blade springs is developed in this paper. The optimal design parameters of the geometric anti-spring isolator for different nonlinear geometric parameters are theoretically obtained. The transmissibility characteristic of the geometric anti-spring isolator is investigated through mathematical simulation. A geometric anti-spring isolator with a nonlinear geometric parameter of 0.92 is designed, and its vibration isolation performance and nonlinearity characteristics are experimentally studied. The experiment results show that the designed isolator has good low-frequency vibration isolation performance, with an initial isolation frequency of less than 3.6 Hz when the load weight is 21 kg. Jump phenomena in the response of the isolator under linear frequency sweep excitation are observed, and this result demonstrates that the geometric anti-spring isolator exhibits complex nonlinear characteristics as the excitation amplitude increases. This research work provides a theoretical and experimental basis for the application of nonlinear geometric anti-spring low-frequency passive vibration isolation technology in engineering practice.

  9. Geometric analysis

    CERN Document Server

    Bray, Hubert L; Mazzeo, Rafe; Sesum, Natasa

    2015-01-01

    This volume includes expanded versions of the lectures delivered in the Graduate Minicourse portion of the 2013 Park City Mathematics Institute session on Geometric Analysis. The papers give excellent high-level introductions, suitable for graduate students wishing to enter the field and experienced researchers alike, to a range of the most important areas of geometric analysis. These include: the general issue of geometric evolution, with more detailed lectures on Ricci flow and Kähler-Ricci flow, new progress on the analytic aspects of the Willmore equation as well as an introduction to the recent proof of the Willmore conjecture and new directions in min-max theory for geometric variational problems, the current state of the art regarding minimal surfaces in R^3, the role of critical metrics in Riemannian geometry, and the modern perspective on the study of eigenfunctions and eigenvalues for Laplace-Beltrami operators.

  10. APPLICATION OPENFOAM TO STUDY THE EFFECT OF GEOMETRICAL PARAMETERS ON THE AERODYNAMIC CHARACTERISTICS OF BLUFF BODIES

    Directory of Open Access Journals (Sweden)

    V. V. Efimov

    2014-01-01

    Full Text Available This study justifies the possibility of applying the OpenFOAM package to obtain the aerodynamic characteristics of bluff bodies and to study their dependence on geometrical parameters.

  11. Dose variations caused by setup errors in intracranial stereotactic radiotherapy: A PRESAGE study

    International Nuclear Information System (INIS)

    Teng, Kieyin; Gagliardi, Frank; Alqathami, Mamdooh; Ackerly, Trevor; Geso, Moshi

    2014-01-01

    Stereotactic radiotherapy (SRT) requires tight margins around the tumor, thus producing a steep dose gradient between the tumor and the surrounding healthy tissue. Any setup errors might become clinically significant. To date, no study has been performed to evaluate the dosimetric variations caused by setup errors with a 3-dimensional dosimeter, the PRESAGE. This research aimed to evaluate the potential effect that setup errors have on the dose distribution of intracranial SRT. Computed tomography (CT) simulation of a CIRS radiosurgery head phantom was performed with 1.25-mm slice thickness. An ideal treatment plan was generated using Brainlab iPlan. A PRESAGE was made for every treatment with and without errors. A prescan using the optical CT scanner was carried out. Before treatment, the phantom was imaged using Brainlab ExacTrac. Actual radiotherapy treatments with and without errors were carried out with the Novalis treatment machine. A postscan was performed with the optical CT scanner to analyze the delivered dose. The dose variation between treatments with and without errors was determined using a 3-dimensional gamma analysis. Errors were considered clinically insignificant when the passing ratio of the gamma analysis was 95% or above, and clinically significant when the setup errors exceeded a 0.7-mm translation and a 0.5° rotation. The results showed that a 3-mm translation shift in the superior-inferior (SI), right-left (RL), and anterior-posterior (AP) directions and 2° couch rotation produced a passing ratio of 53.1%. Translational and rotational errors of 1.5 mm and 1°, respectively, generated a passing ratio of 62.2%. A translation shift of 0.7 mm in the directions of SI, RL, and AP and a 0.5° couch rotation produced a passing ratio of 96.2%. Preventing the occurrence of setup errors in intracranial SRT treatment is extremely important, as errors greater than 0.7 mm and 0.5° alter the dose distribution. The geometrical displacements affect dose delivery
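
    The pass/fail criterion quoted above comes from a gamma analysis, which combines a dose-difference tolerance with a distance-to-agreement (DTA) tolerance. A minimal 1-D sketch of the computation is given below; the 3%/3 mm criteria and the 1-D simplification are illustrative only, since the study used a full 3-D gamma analysis of the PRESAGE readout:

        import numpy as np

        def gamma_pass_rate(ref_dose, eval_dose, coords, dose_tol=0.03, dta_tol=3.0):
            # dose_tol: dose-difference criterion, fraction of max reference dose
            # dta_tol:  distance-to-agreement criterion in units of coords (mm)
            d_crit = dose_tol * ref_dose.max()
            gamma = np.empty_like(ref_dose)
            for i, (x, d) in enumerate(zip(coords, eval_dose)):
                dist2 = ((coords - x) / dta_tol) ** 2
                dose2 = ((ref_dose - d) / d_crit) ** 2
                gamma[i] = np.sqrt(np.min(dist2 + dose2))
            return 100.0 * np.mean(gamma <= 1.0)   # passing ratio in percent

        # Toy example: a Gaussian dose profile shifted by a 1.5 mm setup error
        x = np.linspace(-50, 50, 501)                    # mm
        ref = np.exp(-x**2 / (2 * 10.0**2))
        shifted = np.exp(-(x - 1.5)**2 / (2 * 10.0**2))
        print(gamma_pass_rate(ref, shifted, x))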

  12. A Circumzenithal Arc to Study Optics Concepts with Geometrical Optics

    Science.gov (United States)

    Isik, Hakan

    2017-01-01

    This paper describes the formation of a circumzenithal arc for the purpose of teaching light and optics. A circumzenithal arc, an optic formation rarely witnessed by people, is formed in this study using a water-filled cylindrical glass illuminated by sunlight. Sunlight refracted at the top and side surfaces of the glass of water is dispersed into…

  13. Dissociable genetic contributions to error processing: a multimodal neuroimaging study.

    Directory of Open Access Journals (Sweden)

    Yigal Agam

    Full Text Available Neuroimaging studies reliably identify two markers of error commission: the error-related negativity (ERN), an event-related potential, and functional MRI activation of the dorsal anterior cingulate cortex (dACC). While theorized to reflect the same neural process, recent evidence suggests that the ERN arises from the posterior cingulate cortex, not the dACC. Here, we tested the hypothesis that these two error markers also have different genetic mediation. We measured both error markers in a sample of 92 participants comprising healthy individuals and those with diagnoses of schizophrenia, obsessive-compulsive disorder or autism spectrum disorder. Participants performed the same task during functional MRI and simultaneously acquired magnetoencephalography and electroencephalography. We examined the mediation of the error markers by two single nucleotide polymorphisms: dopamine D4 receptor (DRD4) C-521T (rs1800955), which has been associated with the ERN, and methylenetetrahydrofolate reductase (MTHFR) C677T (rs1801133), which has been associated with error-related dACC activation. We then compared the effects of each polymorphism on the two error markers modeled as a bivariate response. We replicated our previous report of a posterior cingulate source of the ERN in healthy participants in the schizophrenia and obsessive-compulsive disorder groups. The effect of genotype on error markers did not differ significantly by diagnostic group. DRD4 C-521T allele load had a significant linear effect on ERN amplitude, but not on dACC activation, and this difference was significant. MTHFR C677T allele load had a significant linear effect on dACC activation but not ERN amplitude, but the difference in effects on the two error markers was not significant. DRD4 C-521T, but not MTHFR C677T, had a significant differential effect on two canonical error markers. Together with the anatomical dissociation between the ERN and error-related dACC activation, these findings suggest that

  14. A Simulation Study on Patient Setup Errors in External Beam Radiotherapy Using an Anthropomorphic 4D Phantom

    Directory of Open Access Journals (Sweden)

    Payam Samadi Miandoab

    2016-12-01

    Full Text Available Introduction Patient set-up optimization is required in radiotherapy to fill the accuracy gap between personalized treatment planning and uncertainties in the irradiation set-up. In this study, we aimed to develop a new method based on a neural network to estimate patient geometrical setup using the 4-dimensional (4D) XCAT anthropomorphic phantom. Materials and Methods To access 4D modeling of the motion of dynamic organs, the phantom employs the non-uniform rational B-splines (NURBS)-based Cardiac-Torso method with a spline-based model to generate 4D computed tomography (CT) images. First, to generate all possible roto-translation positions, the 4D CT images were imported into the Medical Image Data Examiner (AMIDE). Then, for automatic, real-time verification of geometrical setup, an artificial neural network (ANN) was proposed to estimate patient displacement using training sets. Moreover, three external motion markers were synchronized with the patient couch position as reference points. In addition, the technique was validated through simulated activities using reference 4D CT data acquired from five patients. Results The results indicated that patient geometrical set-up is highly dependent on the comprehensiveness of the training set. Using the ANN model, the average patient setup error in the XCAT phantom was reduced from 17.26 mm to 0.50 mm. In addition, in the five real patients, these average errors decreased from 18.26 mm to 1.48 mm when various breathing phases ranging from inhalation to exhalation were taken into account for patient setup. Uncertainty error assessment and different setup errors were obtained for each respiration phase. Conclusion This study proposed a new method for alignment of patient setup error using an ANN model. Additionally, our correlation model (ANN) could estimate the true patient position with less error.
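
    The correlation-model idea, regressing the internal (couch) displacement from external marker positions with a small neural network, can be sketched as follows. All data here are synthetic and the network size is arbitrary; in the study the training pairs come from the 4D XCAT phantom and patient 4D CT data:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Synthetic training set: three noisy external-marker readings (9 features)
        # versus the known patient displacement (3 translations, mm).
        shift = rng.uniform(-20, 20, size=(500, 3))
        markers = np.hstack([shift + rng.normal(0, 0.3, size=(500, 3)) for _ in range(3)])

        model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
        model.fit(markers, shift)

        test_shift = rng.uniform(-20, 20, size=(100, 3))
        test_markers = np.hstack([test_shift + rng.normal(0, 0.3, size=(100, 3)) for _ in range(3)])
        residual = np.abs(model.predict(test_markers) - test_shift).mean()
        print("mean residual setup error (mm):", residual)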

  15. A geometrical correction for the inter- and intra-molecular basis set superposition error in Hartree-Fock and density functional theory calculations for large systems.

    Science.gov (United States)

    Kruse, Holger; Grimme, Stefan

    2012-04-21

    A semi-empirical counterpoise-type correction for basis set superposition error (BSSE) in molecular systems is presented. An atom pair-wise potential corrects for the inter- and intra-molecular BSSE in supermolecular Hartree-Fock (HF) or density functional theory (DFT) calculations. This scheme, denoted geometrical counterpoise (gCP), depends only on the molecular geometry, i.e., no input from the electronic wave-function is required, and hence it is applicable to molecules with tens of thousands of atoms. The four necessary parameters have been determined by a fit to standard Boys and Bernardi counterpoise corrections for Hobza's S66×8 set of non-covalently bound complexes (528 data points). The method's targets are small basis sets (e.g., minimal, split-valence, 6-31G*), but reliable results are also obtained for larger triple-ζ sets. The intermolecular BSSE is calculated by gCP within a typical error of 10%-30% that proves sufficient in many practical applications. The approach is suggested as a quantitative correction in production work and can also be routinely applied to estimate the magnitude of the BSSE beforehand. The applicability for biomolecules as the primary target is tested for the crambin protein, where gCP removes intramolecular BSSE effectively and yields conformational energies comparable to def2-TZVP basis results. Good mutual agreement is also found with Jensen's ACP(4) scheme, estimating the intramolecular BSSE in the phenylalanine-glycine-phenylalanine tripeptide, for which also a relaxed rotational energy profile is presented. A variety of minimal and double-ζ basis sets combined with gCP and the dispersion corrections DFT-D3 and DFT-NL are successfully benchmarked on the S22 and S66 sets of non-covalent interactions. Outstanding performance with a mean absolute deviation (MAD) of 0.51 kcal/mol (0.38 kcal/mol after D3-refit) is obtained at the gCP-corrected HF-D3/(minimal basis) level for the S66 benchmark. The gCP-corrected B3LYP-D3/6-31G* model
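
    The structure of such an atom pair-wise correction can be sketched as follows. This is an illustration of the functional form only: the parameter names sigma, alpha and beta are placeholders, and the published gCP expression additionally involves per-atom 'missing energy' terms, virtual-orbital counts and a Slater-type overlap, with the four parameters fitted as described above:

        import numpy as np

        def pairwise_bsse_correction(coords, e_miss, n_virt, sigma=1.0, alpha=1.0, beta=1.5):
            # coords: (N, 3) atomic positions; e_miss: per-atom 'missing energy'
            # weights; n_virt: per-atom virtual-orbital counts.
            e = 0.0
            for a in range(len(coords)):
                for b in range(len(coords)):
                    if a == b:
                        continue
                    r = np.linalg.norm(coords[a] - coords[b])
                    e += e_miss[a] * np.exp(-alpha * r**beta) / np.sqrt(n_virt[b])
            return sigma * e

    Because the correction depends only on geometry, it scales to very large systems and can be evaluated essentially for free compared with the electronic-structure step.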

  16. Students’ Written Production Error Analysis in the EFL Classroom Teaching: A Study of Adult English Learners Errors

    Directory of Open Access Journals (Sweden)

    Ranauli Sihombing

    2016-12-01

    Full Text Available Error analysis has become one of the most interesting issues in the study of second language acquisition. It cannot be denied that some teachers do not know much about error analysis and the related theories of how an L1, L2 or foreign language is acquired. In addition, students often feel upset when they find a gap between themselves and their teachers regarding the errors the students make and the teachers' understanding of error correction. The present research aims to investigate what errors adult English learners make in the written production of English. The significance of the study is to identify the errors students make in writing so that teachers can find solutions to them, for better English language teaching and learning, especially in teaching English to adults. The study employed a qualitative method. The research was undertaken at an airline education center in Bandung. The results showed that syntax errors are found more frequently than morphology errors, especially verb phrase errors. It is recommended that teachers know the theory of second language acquisition in order to understand how students learn and produce their language. In addition, it will be advantageous for teachers to know which errors students frequently make in their learning, so that they can offer solutions to the students for better English language learning achievement. DOI: https://doi.org/10.24071/llt.2015.180205

  17. Berkson error adjustment and other exposure surrogates in occupational case-control studies, with application to the Canadian INTEROCC study.

    Science.gov (United States)

    Oraby, Tamer; Sivaganesan, Siva; Bowman, Joseph D; Kincl, Laurel; Richardson, Lesley; McBride, Mary; Siemiatycki, Jack; Cardis, Elisabeth; Krewski, Daniel

    2018-05-01

    Many epidemiological studies assessing the relationship between exposure and disease are carried out without data on individual exposures. When this barrier is encountered in occupational studies, the subject exposures are often evaluated with a job-exposure matrix (JEM), which consists of mean exposure for occupational categories measured on a comparable group of workers. One of the objectives of the seven-country case-control study of occupational exposure and brain cancer risk, INTEROCC, was to investigate the relationship of occupational exposure to electromagnetic fields (EMF) in different frequency ranges and brain cancer risk. In this paper, we use the Canadian data from INTEROCC to estimate the odds of developing brain tumours due to occupational exposure to EMF. The first step was to find the best EMF exposure surrogate among the arithmetic mean, the geometric mean, and the mean of log-normal exposure distribution for each occupation in the JEM, in comparison to Berkson error adjustments via numerical approximation of the likelihood function. Contrary to previous studies of Berkson errors in JEMs, we found that the geometric mean was the best exposure surrogate. This analysis provided no evidence that cumulative lifetime exposure to extremely low frequency magnetic fields increases brain cancer risk, a finding consistent with other recent epidemiological studies.
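
    The difference between the candidate surrogates is easy to see on simulated log-normal exposure data, the typical shape of occupational EMF measurements; the numbers below are purely illustrative:

        import numpy as np

        rng = np.random.default_rng(1)
        measurements = rng.lognormal(mean=0.5, sigma=1.0, size=200)

        arithmetic_mean = measurements.mean()
        geometric_mean = np.exp(np.log(measurements).mean())
        mu, s = np.log(measurements).mean(), np.log(measurements).std()
        lognormal_mean = np.exp(mu + s**2 / 2)   # mean of the fitted log-normal

        print(arithmetic_mean, geometric_mean, lognormal_mean)
        # For skewed data the geometric mean is markedly smaller than the
        # arithmetic mean, which is why the choice of surrogate matters.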

  18. Coherent error study in a retarding field energy analyzer

    International Nuclear Information System (INIS)

    Cui, Y.; Zou, Y.; Reiser, M.; Kishek, R.A.; Haber, I.; Bernal, S.; O'Shea, P.G.

    2005-01-01

    A novel cylindrical retarding electrostatic field energy analyzer for low-energy beams has been designed, simulated, and tested with electron beams of several keV, in which space charge effects play an important role. A cylindrical focusing electrode is used to overcome the beam expansion inside the device due to space-charge forces, beam emittance, etc. In this paper, we present the coherent error analysis for this energy analyzer with beam envelope equation including space charge and emittance effects. The study shows that this energy analyzer can achieve very high resolution (with relative error of around 10 -5 ) if taking away the coherent errors by using proper focusing voltages. The theoretical analysis is compared with experimental results

  19. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    Science.gov (United States)

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling
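
    The qualitative contrast between the two error types can be reproduced with a simple linear-model toy simulation; the study itself used Poisson time-series models of emergency department visits, so the sketch below only illustrates the attenuation-versus-unbiasedness behaviour:

        import numpy as np

        rng = np.random.default_rng(2)
        n, beta, sigma_err = 5000, 0.4, 0.5

        x_true = rng.normal(0.0, 1.0, n)                 # true (log) exposure
        # Classical error: the measurement scatters around the truth.
        x_measured = x_true + rng.normal(0.0, sigma_err, n)
        # Berkson error: the truth scatters around the assigned value.
        x_assigned = rng.normal(0.0, 1.0, n)
        x_true_b = x_assigned + rng.normal(0.0, sigma_err, n)

        slope = lambda x, y: np.polyfit(x, y, 1)[0]
        y_c = beta * x_true + rng.normal(0.0, 1.0, n)
        y_b = beta * x_true_b + rng.normal(0.0, 1.0, n)

        print("classical:", slope(x_measured, y_c))   # attenuated toward zero
        print("berkson:  ", slope(x_assigned, y_b))   # approximately unbiased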

  1. Calculation of track and vertex errors for detector design studies

    International Nuclear Information System (INIS)

    Harr, R.

    1995-01-01

    The Kalman Filter technique has come into wide use for charged track reconstruction in high-energy physics experiments. It is also well suited for detector design studies, allowing for the efficient estimation of optimal track covariance matrices without the need of a hit level Monte Carlo simulation. Although much has been published about the Kalman filter equations, there is a lack of previous literature explaining how to implement the equations. In this paper, the operators necessary to implement the Kalman filter equations for two common detector configurations are worked out: a central detector in a uniform solenoidal magnetic field, and a fixed-target detector with no magnetic field in the region of the interactions. With the track covariance matrices in hand, vertex and invariant mass errors are readily calculable. These quantities are particularly interesting for evaluating experiments designed to study weakly decaying particles which give rise to displaced vertices. The optimal vertex errors are obtained via a constrained vertex fit. Solutions are presented to the constrained vertex problem with and without kinematic constraints. Invariant mass errors are obtained via propagation of errors; the use of vertex constrained track parameters is discussed. Many of the derivations are new or previously unpublished
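
    The core of the technique is the standard Kalman measurement update, iterated over the expected measurement planes with their nominal resolutions, so that the optimal track covariance accumulates without simulating individual hits. A generic sketch of one update step (the matrix names are the usual textbook ones, not notation from the paper):

        import numpy as np

        def kalman_update(x, C, H, m, V):
            # x, C : current state estimate and its covariance
            # H    : measurement matrix; m : measurement; V : measurement covariance
            S = H @ C @ H.T + V                # innovation covariance
            K = C @ H.T @ np.linalg.inv(S)     # Kalman gain
            x_new = x + K @ (m - H @ x)
            C_new = (np.eye(len(x)) - K @ H) @ C
            return x_new, C_new

    For a design study, x would hold the five track parameters; the final covariance C feeds directly into the constrained vertex fit and the propagated invariant-mass errors.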

  2. Studies on a Double Poisson-Geometric Insurance Risk Model with Interference

    Directory of Open Access Journals (Sweden)

    Yujuan Huang

    2013-01-01

    Full Text Available This paper mainly studies a generalized double Poisson-Geometric insurance risk model. By a martingale and stopping-time approach, we obtain the adjustment coefficient equation, the Lundberg inequality, and the formula for the ruin probability. The Laplace transform of the time when the surplus reaches a given level for the first time is also discussed, and its expectation and variance are obtained. Finally, we give numerical examples.
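
    For reference, in the classical compound Poisson risk model the objects named here take the following familiar forms; the paper derives their analogues for the double Poisson-Geometric model with interference:

        \lambda\,\bigl(M_X(R) - 1\bigr) = cR \quad\text{(adjustment coefficient equation)}, \qquad \psi(u) \le e^{-Ru} \quad\text{(Lundberg inequality)},

    where M_X is the moment generating function of the claim sizes, c the premium rate, λ the claim intensity, R the positive root of the first equation, and ψ(u) the ruin probability for initial surplus u.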

  3. Sensitivity of dose-finding studies to observation errors.

    Science.gov (United States)

    Zohar, Sarah; O'Quigley, John

    2009-11-01

    The purpose of Phase I designs is to estimate the MTD (maximum tolerated dose, in practice a dose with some given acceptable rate of toxicity) while, at the same time, minimizing the number of patients treated at doses too far removed from the MTD. Our purpose here is to investigate the sensitivity of conclusions from dose-finding designs to recording or observation errors. Certain toxicities may go undetected and, conversely, certain non-toxicities may be incorrectly recorded as dose-limiting toxicities. Recording inaccuracies would be expected to have an influence on final and within-trial recommendations and, in this paper, we study this question in greater depth. We focus, in particular, on three designs in current use: the standard '3+3' design, the grouped up-and-down design [M. Gezmu, N. Flournoy, Group up-and-down designs for dose finding. Journal of Statistical Planning and Inference 2006; 136 (6): 1749-1764] and the continual reassessment method (CRM, [J. O'Quigley, M. Pepe, L. Fisher, Continual reassessment method: a practical design for phase 1 clinical trials in cancer. Biometrics 1990; 46 (1): 33-48]). A non-toxicity incorrectly recorded as a toxicity (error of the first kind) has a greater influence in general than the converse (error of the second kind). These results are illustrated via figures which suggest that the standard '3+3' design in particular is sensitive to errors of the second kind. Such errors can have a very important impact on drug development in that, if carried through to the Phase 2 and Phase 3 studies, they can significantly increase the probability of failure to detect efficacy as a result of an inadequate dose having been delivered.

  4. Study of identification of geometrically shaped solids using colour and range information

    International Nuclear Information System (INIS)

    Ebihara, Kenichi

    1997-05-01

    This report is a revision of Technical Report MECSE 1996-7 of Monash University in Melbourne, Australia, which has been distributed to the Department Library of that university. The main work described in this report was carried out at the Intelligent Robotics Research Center (IRRC) in the Department of Electrical and Computer Systems Engineering of Monash University from March 1995 to March 1996 and was supported by a grant from the Research Development Corporation of Japan (JRDC). This report describes the study of identification of geometrically shaped solids with unique colour using colour and range information. The study aims at the recognition of equipment in nuclear plants. For this purpose, it is hypothesized that equipment in nuclear plants can be represented by a combination of geometrically shaped solids with unique colours, such as a sphere, an ellipsoid, a cone, a cylinder, a rectangular solid and a pyramid. In this report, the colour image of geometrically shaped solids could be segmented comparatively easily and effectively into regions of each solid using colour and range information. The range data of each solid were extracted using the segmented colour image, and the extracted range data could then be classified as a plane surface or a curved surface by checking their spatial distribution. (author)

  5. A Corpus-based Study of EFL Learners’ Errors in IELTS Essay Writing

    OpenAIRE

    Hoda Divsar; Robab Heydari

    2017-01-01

    The present study analyzed different types of errors in EFL learners' IELTS essays. In order to determine the major types of errors, a corpus of 70 IELTS examinees' writings was collected, and their errors were extracted and categorized qualitatively. Errors were categorized based on a researcher-developed error-coding scheme into 13 aspects. Based on the descriptive statistical analyses, the frequency of each error type was calculated and the commonest errors committed by the EFL learne...

  6. Study of systematic errors in the luminosity measurement

    International Nuclear Information System (INIS)

    Arima, Tatsumi

    1993-01-01

    The experimental systematic error in the barrel region was estimated to be 0.44 %. This value is derived considering the systematic uncertainties from the dominant sources but does not include uncertainties which are being studied. In the end cap region, the study of shower behavior and clustering effect is under way in order to determine the angular resolution at the low angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1 %. The technical precision of theoretical uncertainty is better than 0.1 % comparing the Tobimatsu-Shimizu program and BABAMC modified by ALEPH. To estimate the physical uncertainty we will use the ALIBABA [9] which includes O(α²) QED correction in leading-log approximation. (J.P.N.)

  7. Geometrically Constructed Markov Chain Monte Carlo Study of Quantum Spin-phonon Complex Systems

    Science.gov (United States)

    Suwa, Hidemaro

    2013-03-01

    We have developed novel Monte Carlo methods for precisely calculating quantum spin-boson models and have investigated the critical phenomena of spin-Peierls systems. Three significant methods are presented. The first is a new optimization algorithm for the Markov chain transition kernel based on geometric weight allocation. This algorithm, for the first time, satisfies total balance generally without imposing detailed balance, and always minimizes the average rejection rate, performing better than the Metropolis algorithm. The second is the extension of the worm (directed-loop) algorithm to non-conserved particles, which cannot be treated efficiently by conventional methods. The third is the combination with level spectroscopy. Proposing a new gap estimator, we succeed in eliminating the systematic error of the conventional moment method. We have then elucidated the phase diagram and the universality class of the one-dimensional XXZ spin-Peierls system. The criticality is totally consistent with that of the J1-J2 model, an effective model in the antiadiabatic limit. Through this research, we have succeeded in investigating the critical phenomena of an effectively frustrated quantum spin system by the quantum Monte Carlo method without the negative sign problem. JSPS Postdoctoral Fellow for Research Abroad

  8. Study of geometrical and operational parameters controlling the low frequency microjet atmospheric pressure plasma characteristics

    International Nuclear Information System (INIS)

    Kim, Dan Bee; Rhee, J. K.; Moon, S. Y.; Choe, W.

    2006-01-01

    Controllability of a small-size atmospheric pressure plasma generated at low frequency in a pin-to-dielectric-plane electrode configuration was studied. It was shown that the plasma characteristics could be controlled by the geometrical and operational parameters of the experiment. Under most circumstances, continuous glow discharges were observed, but corona and/or dielectric barrier discharge characteristics were also observed depending on the position of the pin electrode. The plasma size and the rotational temperature were also varied by the parameters. The rotational temperature was between 300 and 490 K, low enough to treat thermally sensitive materials.

  9. Model Study of Wave Overtopping of Marine Structure for a Wide Range of Geometric Parameters

    DEFF Research Database (Denmark)

    Kofoed, Jens Peter

    2000-01-01

    The objective of the study described in this paper is to enable estimation of wave overtopping rates for slopes/ramps given by a wide range of geometric parameters when subjected to varying wave conditions. To achieve this a great number of model tests are carried out in a wave tank using irregular 2-D waves. On the basis of the first part of these tests an exponential overtopping expression for a linear slope, including the effect of limited draught and varying slope angle, is presented. The plans for further tests with other slope geometries are described.
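
    Exponential overtopping expressions of the type referred to here generically take the dimensionless form below; the coefficients A and B are fitted to the model-test data, and in the study they additionally absorb the effects of limited draught and slope angle (the exact fitted expression is given in the paper):

        \frac{q}{\sqrt{g H_s^{3}}} = A \exp\!\left(-B\,\frac{R_c}{H_s}\right),

    where q is the average overtopping discharge per metre of crest width, H_s the significant wave height, R_c the crest freeboard, and g the acceleration of gravity.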

  10. Human error in maintenance: An investigative study for the factories of the future

    International Nuclear Information System (INIS)

    Dhillon, B S

    2014-01-01

    This paper presents a study of human error in maintenance. Many different aspects of human error in maintenance considered useful for the factories of the future are studied, including: facts, figures, and examples; the occurrence of maintenance error in the equipment life cycle; the elements of a maintenance person's time; the maintenance environment and the causes of maintenance error; types of typical maintenance errors; common maintainability design errors and useful design guidelines for reducing equipment maintenance errors; maintenance work instructions; and maintenance error analysis methods.

  11. Awareness of Diagnostic Error among Japanese Residents: a Nationwide Study.

    Science.gov (United States)

    Nishizaki, Yuji; Shinozaki, Tomohiro; Kinoshita, Kensuke; Shimizu, Taro; Tokuda, Yasuharu

    2018-04-01

    Residents' understanding of diagnostic error may differ between countries. We sought to explore the relationship between diagnostic error knowledge and self-study, clinical knowledge, and experience. Our nationwide study involved postgraduate year 1 and 2 (PGY-1 and -2) Japanese residents. The Diagnostic Error Knowledge Assessment Test (D-KAT) and General Medicine In-Training Examination (GM-ITE) were administered at the end of the 2014 academic year. D-KAT scores were compared with the benchmark scores of US residents. Associations between D-KAT score and gender, PGY, emergency department (ED) rotations per month, mean number of inpatients handled at any given time, and mean daily minutes of self-study were also analyzed, both with and without adjusting for GM-ITE scores. Student's t test was used for comparisons, with linear mixed models and structural equation models (SEM) used to explore associations with D-KAT or GM-ITE scores. The mean D-KAT score among Japanese PGY-2 residents was significantly lower than that of their US PGY-2 counterparts (6.2 vs. 8.3). GM-ITE scores correlated with ED rotations (≥6 rotations: 2.14; 0.16-4.13; p = 0.03) and inpatient caseloads (5-9 patients: 1.79; 0.82-2.76). In the SEM, D-KAT scores were directly associated with GM-ITE scores (ß = 0.37, 95% CI: 0.34-0.41) and indirectly associated with ED rotations (ß = 0.06, 95% CI: 0.02-0.10), inpatient caseload (ß = 0.04, 95% CI: 0.003-0.08), and average daily minutes of study (ß = 0.13, 95% CI: 0.09-0.17). Knowledge regarding diagnostic error among Japanese residents was poor compared with that among US residents. D-KAT scores correlated strongly with GM-ITE scores, and the latter scores were positively associated with a greater number of ED rotations, larger caseload (though only up to 15 patients), and more time spent studying.

  12. Learning from Errors: An Exploratory Study Among Dutch Auditors

    NARCIS (Netherlands)

    Gold, A.H.; van Mourik, O.; Van Dyck, Cathy; Wallage, P.

    2017-01-01

    Despite the presence of substantial quality control measures present at audit firms, results from regulator inspections suggest that auditors make errors during their work. According to the error management literature, even though errors often lead to negative immediate consequences, they also offer

  13. Electronic error-reporting systems: a case study into the impact on nurse reporting of medical errors.

    Science.gov (United States)

    Lederman, Reeva; Dreyfus, Suelette; Matchan, Jessica; Knott, Jonathan C; Milton, Simon K

    2013-01-01

    Underreporting of errors in hospitals persists despite the claims of technology companies that electronic systems will facilitate reporting. This study builds on previous analyses to examine error reporting by nurses in hospitals using electronic media. This research asks whether the electronic media create additional barriers to error reporting and, if so, what practical steps all hospitals can take to reduce these barriers. This is a mixed-method case study of nurses' use of an error reporting system, RiskMan, in two hospitals. The case study involved one large private hospital and one large public hospital in Victoria, Australia, both of which use the RiskMan medical error reporting system. Information technology-based error reporting systems have unique access problems and time demands and can encourage nurses to develop alternative reporting mechanisms. This research focuses on nurses and raises important findings for hospitals using such systems or considering installation. This article suggests organizational and technical responses that could reduce some of the identified barriers. Crown Copyright © 2013. Published by Mosby, Inc. All rights reserved.

  14. Simplified versus geometrically accurate models of forefoot anatomy to predict plantar pressures: A finite element study.

    Science.gov (United States)

    Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R

    2016-01-25

    Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances; however, at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod-with-insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan, was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for the barefoot, shod, and insole conditions respectively. The simplified model could be produced in 3 h, considerably less time than required for the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility; however, further validity testing around a range of therapeutic footwear types is required. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Geometric morphometrics as a tool for improving the comparative study of behavioural postures.

    Science.gov (United States)

    Fureix, Carole; Hausberger, Martine; Seneque, Emilie; Morisset, Stéphane; Baylac, Michel; Cornette, Raphaël; Biquand, Véronique; Deleporte, Pierre

    2011-07-01

    Describing postures has always been a central concern when studying behaviour. However, attempts to compare postures objectively at phylogenetical, populational, inter- or intra-individual levels generally either rely upon a few key elements or remain highly subjective. Here, we propose a novel approach, based on well-established geometric morphometrics, to describe and to analyse postures globally (i.e. considering the animal's body posture in its entirety rather than focusing only on a few salient elements, such as head or tail position). Geometric morphometrics is concerned with describing and comparing variation and changes in the form (size and shape) of organisms using the coordinates of a series of homologous landmarks (i.e. positioned in relation to skeletal or muscular cues that are the same for different species for every variety of form and function and that have derived from a common ancestor, i.e. they have a common evolutionary ancestry, e.g. neck, wings, flipper/hand). We applied this approach to horses, using global postures (1) to characterise behaviours that correspond to different arousal levels, (2) to test potential impact of environmental changes on postures. Our application of geometric morphometrics to horse postures showed that this method can be used to characterise behavioural categories, to evaluate the impact of environmental factors (here human actions) and to compare individuals and groups. Beyond its application to horses, this promising approach could be applied to all questions involving the analysis of postures (evolution of displays, expression of emotions, stress and welfare, behavioural repertoires…) and could lead to a whole new line of research.
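
    The landmark-based comparison rests on Procrustes superimposition, which removes location, scale and rotation so that only shape differences remain. A minimal sketch with synthetic landmark data (21 landmarks, matching the number used in the horse application in the abstract above):

        import numpy as np
        from scipy.spatial import procrustes

        rng = np.random.default_rng(3)
        posture_a = rng.normal(size=(21, 2))            # 21 (x, y) landmarks

        # Same shape, but translated, scaled and rotated, plus small shape noise
        theta = 0.3
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        posture_b = 1.7 * posture_a @ rot.T + np.array([5.0, -2.0])
        posture_b += rng.normal(scale=0.01, size=posture_b.shape)

        mtx1, mtx2, disparity = procrustes(posture_a, posture_b)
        print("shape disparity:", disparity)            # near zero: same posture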

  16. Ventricular dyssynchrony assessed by gated myocardial perfusion SPECT using a geometrical approach: a feasibility study

    International Nuclear Information System (INIS)

    Veen, Berlinda J. van der; Younis, Imad Al; Ajmone-Marsan, Nina; Bax, Jeroen J.; Westenberg, Jos J.M.; Roos, Albert de; Stokkel, Marcel P.M.

    2012-01-01

    Left ventricular dyssynchrony may predict response to cardiac resynchronization therapy and may well predict adverse cardiac events. Recently, a geometrical approach for dyssynchrony analysis of myocardial perfusion scintigraphy (MPS) was introduced. In this study the feasibility of this geometrical method to detect dyssynchrony was assessed in a population with a normal MPS and in patients with documented ventricular dyssynchrony. For the normal population 80 patients (40 men and 40 women) with normal perfusion (summed stress score ≤2 and summed rest score ≤2) and function (left ventricular ejection fraction 55-80%) on MPS were selected; 24 heart failure patients with proven dyssynchrony on MRI were selected for comparison. All patients underwent a 2-day stress/rest MPS protocol. Perfusion, function and dyssynchrony parameters were obtained by the Corridor4DM software package (Version 6.1). For the normal population time to peak motion was 42.8 ± 5.1% RR cycle, SD of time to peak motion was 3.5 ± 1.4% RR cycle and bandwidth was 18.2 ± 6.0% RR cycle. No significant gender-related differences or differences between rest and post-stress acquisition were found for the dyssynchrony parameters. Discrepancies between the normal and abnormal populations were most profound for the mean wall motion (p value <0.001), SD of time to peak motion (p value <0.001) and bandwidth (p value <0.001). It is feasible to quantify ventricular dyssynchrony in MPS using the geometrical approach as implemented by Corridor4DM. (orig.)

  17. Correction for tissue attenuation in radionuclide gastric emptying studies: a comparison of a lateral image method and a geometric mean method

    Energy Technology Data Exchange (ETDEWEB)

    Collins, P.J.; Chatterton, B.E. (Royal Adelaide Hospital (Australia)); Horowitz, M.; Shearman, D.J.C. (Adelaide Univ. (Australia). Dept. of Medicine)

    1984-08-01

    Variation in depth of radionuclide within the stomach may result in significant errors in the measurement of gastric emptying if no attempt is made to correct for gamma-ray attenuation by the patient's tissues. A method of attenuation correction, which uses a single posteriorly located scintillation camera and correction factors derived from a lateral image of the stomach, was compared with a two-camera geometric mean method, in phantom studies and in five volunteer subjects. A meal of 100 g of ground beef containing 99mTc-chicken liver, and 150 ml of water was used in the in vivo studies. In all subjects the geometric mean data showed that solid food emptied in two phases: an initial lag period, followed by a linear emptying phase. Using the geometric mean data as a standard, the anterior camera overestimated the 50% emptying time (T50) by an average of 15% (range 5-18) and the posterior camera underestimated this parameter by 15% (4-22). The posterior data, corrected for attenuation using the lateral image method, underestimated the T50 by 2% (-7 to +7). The difference in the distances of the proximal and distal stomach from the posterior detector was large in all subjects (mean 5.7 cm, range 3.9-7.4).
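
    The geometric mean method works because the depth dependence cancels: with source depth a and body thickness d, the anterior and posterior count rates scale as exp(-μ(d-a)) and exp(-μa), so their geometric mean depends only on d. A toy illustration (the attenuation coefficient and geometry are made-up numbers):

        import numpy as np

        def geometric_mean_counts(anterior, posterior):
            return np.sqrt(np.asarray(anterior) * np.asarray(posterior))

        mu, d, C = 0.15, 20.0, 1000.0          # 1/cm, cm, unattenuated counts
        for a in (5.0, 15.0):                  # two different source depths
            A = C * np.exp(-mu * (d - a))      # anterior view
            P = C * np.exp(-mu * a)            # posterior view
            print(a, geometric_mean_counts(A, P))   # identical for both depths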

  1. A geometric parameter study of piezoelectric coverage on a rectangular cantilever energy harvester

    International Nuclear Information System (INIS)

    Patel, R; McWilliam, S; Popov, A A

    2011-01-01

    This paper proposes a versatile model for optimizing the performance of a rectangular cantilever beam piezoelectric energy harvester used to convert ambient vibrations into electrical energy. The developed model accounts for geometric changes to the natural frequencies, mode shapes and damping in the structure. This is achieved through the combination of finite element modelling and a distributed parameter electromechanical model, including load resistor and charging capacitor models. The model has the potential for use in investigating the influence of numerous geometric changes on harvester performance, and incorporates a model for accounting for changes in damping as the geometry changes. The model is used to investigate the effects of substrate and piezoelectric layer length, and piezoelectric layer thickness on the performance of a microscale device. Findings from a parameter study indicate the existence of an optimum sample length due to increased mechanical damping for longer beams and improved power output using thicker piezoelectric layers. In practice, harvester design is normally based around a fixed operating frequency for a particular application, and improved performance is often achieved by operating at or near resonance. To achieve unbiased comparisons between different harvester designs, parameter studies are performed by changing multiple parameters simultaneously with the natural frequency held fixed. Performance enhancements were observed using shorter piezoelectric layers as compared to the conventional design, in which the piezoelectric layer and substrate are of equal length

  2. Study on principle and method of measuring system for external dimensions, geometric density and appearance quality of uranium dioxide pellet

    International Nuclear Information System (INIS)

    Cao Wei; Deng Hua; Wang Tao

    2010-01-01

    To meet the needs of nuclear power development and keep pace with growing nuclear fuel element production, a special measuring system integrating the measurement, calculation and data processing of the external dimensions, form and position tolerances, geometric density and appearance quality of uranium dioxide pellets is studied and discussed. The system provides important guidance for improving process and tooling levels. It is primarily applied to sampling tests during production and is applicable to several types of products. The successful application of this measuring method ensures the accuracy and reliability of the measured data, reduces human error, and makes measurement more convenient and fast, thus achieving high precision and high efficiency in the measuring process. The method approaches the advanced world level within the industry. Based on product inspection requirements, the use of special measuring instruments together with a computer data processing system is therefore an important approach for now and the future. (authors)

  3. Geometrical parton

    Energy Technology Data Exchange (ETDEWEB)

    Ebata, T [Tohoku Univ., Sendai (Japan). Coll. of General Education]

    1976-06-01

    The geometrical distribution inferred from the inelastic cross section is assumed to be proportional to the partial waves. The precocious scaling and the Q²-dependence of various quantities are treated from the geometrical point of view. It is shown that the approximate conservation of the orbital angular momentum may be a very practical rule to understand the helicity structure of various hadronic and electromagnetic reactions. The rule can be applied to inclusive reactions as well. The model is also applied to large angle processes. Through the discussion, it is suggested that many peculiar properties of the quark-parton can be ascribed to the geometrical effects.

  4. A geometric morphometric study into the sexual dimorphism of the human scapula.

    Science.gov (United States)

    Scholtz, Y; Steyn, M; Pretorius, E

    2010-08-01

    Sex determination is vital when attempting to establish identity from skeletal remains. Two approaches to sex determination exist: morphological and metrical. The aim of this paper was to use geometric morphometrics to study the shape of the scapula and its sexual dimorphism. The sample comprised 45 adult black male and 45 adult black female scapulae of known sex. The scapulae were photographed and 21 homologous landmarks were plotted for geometric morphometric analysis with the 'tps' series of programs, as well as the IMP package. Consensus thin-plate splines and vector plots for males and females were compared. The CVA and TwoGroup analyses indicated that significant differences exist between males and females. The lateral and medial borders of females are straighter, while the supraspinous fossa is more convexly curved than that of males. More than 91% of the females and 95% of the males were correctly assigned. Hotelling's T²-test yielded a significant p-value of 0.00039. In addition, 100 equidistant landmarks representing the curve only were also assigned; these, however, yielded considerably poorer results. It is concluded that it is better to use homologous landmarks rather than curve data only, as it is most probable that the shape of the outline relative to the fixed homologous points on the scapula is sexually dimorphic.

  5. Multi-parameter geometrical scaledown study for energy optimization of MTJ and related spintronics nanodevices

    Science.gov (United States)

    Farhat, I. A. H.; Alpha, C.; Gale, E.; Atia, D. Y.; Stein, A.; Isakovic, A. F.

    The scaledown of magnetic tunnel junctions (MTJ) and related nanoscale spintronics devices poses unique challenges for energy optimization of their performance. We demonstrate the dependence of the switching current on the scaledown variable, while considering the influence of geometric parameters of the MTJ, such as the free layer thickness, tfree, the lateral size of the MTJ, w, and the anisotropy parameter of the MTJ. At the same time, we point out which values of the saturation magnetization, Ms, and anisotropy field, Hk, can lead to lowering the switching current and an overall decrease of the energy needed to operate an MTJ. It is demonstrated that scaledown via decreasing the lateral size of the MTJ, while allowing some other parameters to be unconstrained, can improve energy performance by a measurable factor, shown to be a function of both the geometric and physical parameters above. Given the complex interdependencies among both families of parameters, we developed a particle swarm optimization (PSO) algorithm that can simultaneously lower the energy of operation and the switching current density. Results obtained in the scaledown study and via PSO optimization are compared to experimental results. Support by Mubadala-SRC 2012-VJ-2335 is acknowledged, as are staff at Cornell-CNF and BNL-CFN.
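
    A particle swarm optimizer for such a coupled parameter search can be sketched as follows. The objective below is a placeholder quadratic bowl: in the study it would be replaced by the switching-current/energy model in (w, tfree, Ms, Hk), and all bounds and PSO constants here are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(4)
        lo = np.array([10.0, 0.5, 300.0, 50.0])      # w [nm], tfree [nm], Ms, Hk
        hi = np.array([200.0, 3.0, 1500.0, 500.0])

        def energy(p):
            # Placeholder standing in for the MTJ switching-energy model
            target = np.array([50.0, 1.2, 800.0, 200.0])
            scale = np.array([50.0, 1.0, 500.0, 100.0])
            return np.sum(((p - target) / scale) ** 2)

        n_particles, n_iters = 30, 200
        pos = rng.uniform(lo, hi, size=(n_particles, 4))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([energy(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()

        w_in, c1, c2 = 0.7, 1.5, 1.5                 # inertia and pull strengths
        for _ in range(n_iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            vals = np.array([energy(p) for p in pos])
            better = vals < pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]
            gbest = pbest[pbest_val.argmin()].copy()

        print("optimized (w, tfree, Ms, Hk):", np.round(gbest, 2))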

  6. Retinal dysfunction and refractive errors: an electrophysiological study of children

    Science.gov (United States)

    Flitcroft, D I; Adams, G G W; Robson, A G; Holder, G E

    2005-01-01

    Aims: To evaluate the relation between refractive error and electrophysiological retinal abnormalities in children referred for investigation of reduced vision. Methods: The study group comprised 123 consecutive patients referred over a 14 month period from the paediatric service of Moorfields Eye Hospital for electrophysiological investigation of reduced vision. Subjects were divided into five refractive categories according to their spectacle correction: high myopia (⩽−6D), low myopia (>−6D and ⩽−0.75D), emmetropia (>−0.75D and <+1.5D), low hyperopia, and high hyperopia. Results: A significant association was found between high astigmatism and ERG abnormalities (18/35 with high astigmatism v 20/88 without, χ2 test, p = 0.002). There was no significant variation in frequency of abnormalities between low myopes, emmetropes, and low hyperopes. The rate of abnormalities was very similar in both high myopes (8/15) and high hyperopes (5/10). Conclusions: High ametropia and astigmatism in children being investigated for poor vision are associated with a higher rate of retinal electrophysiological abnormalities. An increased rate of refractive errors in the presence of retinal pathology is consistent with the hypothesis that the retina is involved in the process of emmetropisation. Electrophysiological testing should be considered in cases of high ametropia in childhood to rule out associated retinal pathology. PMID:15774929

  7. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    Science.gov (United States)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.
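
    For reference, error vector magnitude is conventionally reported as the RMS error vector normalized to the reference constellation power; a minimal sketch follows (the QPSK symbols, noise level and sample count are illustrative assumptions, not the measured link):

```python
import numpy as np

def evm_rms_percent(received, ideal):
    """RMS EVM (%) = sqrt(mean |error|^2 / mean |ideal|^2) * 100."""
    err = received - ideal
    return 100.0 * np.sqrt(np.mean(np.abs(err)**2) / np.mean(np.abs(ideal)**2))

# Toy QPSK link: ideal symbols plus additive noise standing in for the channel
rng = np.random.default_rng(7)
ideal = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=10_000) / np.sqrt(2)
received = ideal + 0.05 * (rng.normal(size=10_000) + 1j * rng.normal(size=10_000))
print(f"EVM: {evm_rms_percent(received, ideal):.2f} %")
```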

  9. The study of geometric structure of Na films on Cu(110)

    International Nuclear Information System (INIS)

    Zeybek, O.

    2004-01-01

    In order to understand the geometric structure of Na films on a Cu(110) substrate, a couple of surface science techniques were applied in this study. The thickness of the Na films was calculated using X-ray Photoelectron Spectroscopy data and the electron mean free path. The coverages were compared with the work function changes in this study and other investigations. Sub-monolayer and up to 2 ML thick Na films on Cu(110) were investigated using Low Energy Electron Diffraction (LEED) and Ultra Violet Inverse Photoelectron Spectroscopy. It is found that the (1 x 1) LEED pattern of Cu(110) changes to (1 x 2) with increasing Na coverage up to 1 ML. After 1 ML of Na, LEED again shows a (1 x 1) structure
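
    The standard attenuation relation behind such XPS thickness estimates (a generic textbook expression, not necessarily the exact form used in this study): a substrate photoemission line of intensity I_0, attenuated by an overlayer of thickness t, gives

    $$ I = I_0 \exp\!\left(-\frac{t}{\lambda\cos\theta}\right) \quad\Longrightarrow\quad t = \lambda\cos\theta\,\ln\frac{I_0}{I}, $$

    where λ is the inelastic mean free path of the photoelectrons in the film and θ is the emission angle measured from the surface normal.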

  10. On numerical heat transfer characteristic study of flat surface subjected to variation in geometric thickness

    Science.gov (United States)

    Umair, Siddique Mohammed; Kolawale, Abhijeet Rangnath; Bhise, Ganesh Anurath; Gulhane, Nitin Parashram

    Thermal management in the looming world of electronic packaging systems is a conspicuous and high-priority issue as far as the working efficiency of the system is concerned. Cooling in such systems can be achieved by impinging an air jet over the heat sink; jet impingement cooling is one of the most widely studied cooling technologies. Modulation of the impinging and geometric parameters establishes a characteristic cooling rate over the target surface. The characteristic cooling curve reflects non-uniformity in the cooling rate, and this non-uniformity favors the area-averaged heat dissipation rate. In order to study this non-uniformity in the cooling characteristic, the present study plots the local Nusselt number magnitude against the non-dimensional radial distance for target surfaces of different thicknesses. For this, the steady temperature distribution over the target surface under the impingement of an air jet is determined numerically. The work aims at determining the critical value of geometric thickness below which the non-uniformity in the Nusselt profile starts; this is done by numerically examining different target surfaces under constant Reynolds number and nozzle-target spacing. The occurrence of non-uniformity in the Nusselt profile contributes over a 42% enhancement in the area-averaged Nusselt magnitude. The critical value of characteristic thickness (t/d) reported in the present investigation is approximately 0.05. Below this value, the impinging air jet generates discrete pressure zones over the target surface in the form of pressure spots. As a result, the air flowing in contact with the target surface experiences a damping potential, owing to which it gets more time and contact with the surface to dissipate heat.
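
    For context, the local Nusselt number plotted in such studies is commonly defined from the local convective coefficient (a standard definition; the symbols here are generic, not taken from the paper):

    $$ \mathrm{Nu}(r) = \frac{h(r)\,d}{k} = \frac{q''\,d}{k\,\bigl(T_s(r) - T_{\mathrm{jet}}\bigr)}, $$

    with q'' the surface heat flux, d the nozzle diameter, k the fluid thermal conductivity, T_s(r) the local surface temperature, and T_jet the jet exit temperature.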

  11. Errors and discrepancies in the administration of intravenous infusions: a mixed methods multihospital observational study

    OpenAIRE

    Lyons, I.; Furniss, D.; Blandford, A.; Chumbley, G.; Iacovides, I.; Wei, L.; Cox, A.; Mayer, A.; Vos, J.; Galal-Edeen, G. H.; Schnock, K. O.; Dykes, P. C.; Bates, D. W.; Franklin, B. D.

    2018-01-01

    INTRODUCTION: Intravenous medication administration has traditionally been regarded as error prone, with high potential for harm. A recent US multisite study revealed few potentially harmful errors despite a high overall error rate. However, there is limited evidence about infusion practices in England and how they relate to prevalence and types of error. OBJECTIVES: To determine the prevalence, types and severity of errors and discrepancies in infusion administration in English hospitals, an...

  12. Study of the 3D Euler equations using Clebsch potentials: dual mechanisms for geometric depletion

    Science.gov (United States)

    Ohkitani, Koji

    2018-02-01

    After surveying analyses of the 3D Euler equations using the Clebsch potentials scattered over the literature, we report some preliminary new results. 1. Assuming that flow fields are free from nulls of the impulse and the vorticity fields, we study how constraints imposed by the Clebsch potentials lead to a degenerate geometrical structure, typically in the form of depletion of nonlinearity. We consider a vorticity surface spanned by ω and another material vector W such that γ = ω × W, where γ is the impulse variable in geometric gauge. We identify dual mechanisms for geometric depletion and show that at least one of them is acting if W does not develop a null. This suggests that formation of singularity in flows endowed with Clebsch potentials is less likely to happen than in more general flows. Some arguments are given towards exclusion of 'type I' blowup. A mathematical challenge remains to rule out singularity formation for flows which have Clebsch potentials everywhere. 2. We exploit classical differential geometry kinematically to write down the Gauss-Weingarten equations for the vorticity surface of the Clebsch potential in terms of fluid dynamical variables, as are the first, second and third fundamental forms. In particular, we derive a constraint on the size of the Gaussian curvature near the point of a possible singularity. On the other hand, an application of the Gauss-Bonnet theorem reveals that the tangential curvature of the surface becomes large in the neighborhood of a near-singularity. 3. Using spatially-periodic flows with high symmetry, i.e. initial conditions of the Taylor-Green vortex and the Kida-Pelz flow, we present explicit formulas for the Clebsch potentials with exceptional singular surfaces where the Clebsch potentials are undefined. This is done by connecting the known expressions with the solenoidal impulse variable (i.e. the
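
    As background (standard definitions, not specific to this paper), a Clebsch representation writes the velocity field in terms of two scalar potentials, which forces the vorticity onto the intersection of their level surfaces:

    $$ \mathbf{u} = \nabla\phi + \lambda\,\nabla\mu, \qquad \boldsymbol{\omega} = \nabla\times\mathbf{u} = \nabla\lambda \times \nabla\mu, $$

    so that ω·∇λ = ω·∇μ = 0: vortex lines lie on the surfaces λ = const, μ = const, which is the geometric constraint underlying the depletion mechanisms discussed above.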

  13. A dose error evaluation study for 4D dose calculations

    Science.gov (United States)

    Milz, Stefan; Wilkens, Jan J.; Ullrich, Wolfgang

    2014-10-01

    Previous studies have shown that respiration-induced motion is not negligible for Stereotactic Body Radiation Therapy. The intrafractional breathing-induced motion influences the delivered dose distribution on the underlying patient geometry such as the lung or the abdomen. If a static geometry is used, the planning process for these indications does not represent the entire dynamic process. The quality of a full 4D dose calculation approach depends on the dose coordinate transformation process between deformable geometries. This article provides an evaluation study that introduces an advanced method to verify the quality of numerical dose transformation generated by four different algorithms. The transformation metric used is based on the deviation of the dose mass histogram (DMH) and the mean dose throughout dose transformation. The study compares the results of four algorithms. In general, two elementary approaches are used: dose mapping and energy transformation. Dose interpolation (DIM) and an advanced concept, the so-called divergent dose mapping model (dDMM), are used for dose mapping. These algorithms are compared to the basic energy transformation model (bETM) and the energy mass congruent mapping (EMCM). For evaluation, 900 small sample regions of interest (ROI) are generated inside an exemplary lung geometry (4DCT). A homogeneous fluence distribution is assumed for dose calculation inside the ROIs. The dose transformations are performed with the four different algorithms. The study investigates the DMH metric and the mean dose metric for different scenarios (voxel sizes: 8 mm, 4 mm, 2 mm, 1 mm; 9 different breathing phases). dDMM achieves the best transformation accuracy in all measured test cases, with 3-5% lower errors than the other models. The results of dDMM are reasonable and most efficient in this study, although the model is simple and easy to implement. The EMCM model also achieved suitable results, but the approach requires a more complex
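
    A hedged sketch of the energy/mass-transfer idea that bETM- and EMCM-style algorithms build on (the abstract gives no implementation details, so the voxel arrays, the one-to-one toy deformation, and the conservation check are all invented for illustration): transferring energy and mass between grids conserves integral dose, whereas naive dose interpolation need not.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000                       # voxels in the source breathing phase
dose = rng.random(n)           # Gy per voxel (toy values)
mass = rng.random(n) + 0.5     # g per voxel (toy values)

# Toy deformation: each source voxel maps to one target voxel
target = rng.integers(0, n, size=n)

# Energy-mass mapping: move energy (dose*mass) and mass, then re-divide
energy_t = np.bincount(target, weights=dose * mass, minlength=n)
mass_t = np.bincount(target, weights=mass, minlength=n)
dose_t = np.divide(energy_t, mass_t, out=np.zeros(n), where=mass_t > 0)

# Conservation check, the kind of invariant a DMH-based metric builds on
print(np.isclose((dose * mass).sum(), (dose_t * mass_t).sum()))  # True
```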

  14. Medication errors in home care: a qualitative focus group study.

    Science.gov (United States)

    Berland, Astrid; Bentsen, Signe Berit

    2017-11-01

    To explore registered nurses' experiences of medication errors and patient safety in home care. The focus of care for older patients has shifted from institutional care towards a model of home care. Medication errors are common in this situation and can result in patient morbidity and mortality. An exploratory qualitative design with focus group interviews was used. Four focus group interviews were conducted with 20 registered nurses in home care. The data were analysed using content analysis. Five categories were identified as follows: lack of information, lack of competence, reporting medication errors, trade name products vs. generic name products, and improving routines. Medication errors occur frequently in home care and can threaten the safety of patients. Insufficient exchange of information and poor communication between the specialist and home-care health services, and between general practitioners and healthcare workers can lead to medication errors. A lack of competence in healthcare workers can also lead to medication errors. To prevent these, it is important that there should be up-to-date information and communication between healthcare workers during the transfer of patients from specialist to home care. Ensuring competence among healthcare workers with regard to medication is also important. In addition, there should be openness and accurate reporting of medication errors, as well as in setting routines for the preparation, alteration and administration of medicines. To prevent medication errors in home care, up-to-date information and communication between healthcare workers is important when patients are transferred from specialist to home care. It is also important to ensure adequate competence with regard to medication, and that there should be openness when medication errors occur, as well as in setting routines for the preparation, alteration and administration of medications. © 2017 John Wiley & Sons Ltd.

  15. Geometric metamorphosis.

    Science.gov (United States)

    Niethammer, Marc; Hart, Gabriel L; Pace, Danielle F; Vespa, Paul M; Irimia, Andrei; Van Horn, John D; Aylward, Stephen R

    2011-01-01

    Standard image registration methods do not account for changes in image appearance. Hence, metamorphosis approaches have been developed which jointly estimate a space deformation and a change in image appearance to construct a spatio-temporal trajectory smoothly transforming a source to a target image. For standard metamorphosis, geometric changes are not explicitly modeled. We propose a geometric metamorphosis formulation, which explains changes in image appearance by a global deformation, a deformation of a geometric model, and an image composition model. This work is motivated by the clinical challenge of predicting the long-term effects of traumatic brain injuries based on time-series images. This work is also applicable to the quantification of tumor progression (e.g., estimating its infiltrating and displacing components) and predicting chronic blood perfusion changes after stroke. We demonstrate the utility of the method using simulated data as well as scans from a clinical traumatic brain injury patient.

  16. Empirical study of the GARCH model with rational errors

    International Nuclear Information System (INIS)

    Chen, Ting Ting; Takaishi, Tetsuya

    2013-01-01

    We use the GARCH model with a fat-tailed error distribution described by a rational function and apply it to stock price data from the Tokyo Stock Exchange. To determine the model parameters we perform Bayesian inference on the model. Bayesian inference is implemented by the Metropolis-Hastings algorithm with an adaptive multi-dimensional Student's t-proposal density. In order to compare our model with the GARCH model with standard normal errors, we calculate the information criteria AIC and DIC, and find that both criteria favor the GARCH model with a rational error distribution. We also calculate the accuracy of the volatility by using the realized volatility and find that good accuracy is obtained for the GARCH model with a rational error distribution. Thus we conclude that the GARCH model with a rational error distribution is superior to the GARCH model with normal errors and can be used as an alternative GARCH model to those with other fat-tailed distributions
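
    For orientation, the volatility recursion shared by such GARCH(1,1) variants is sketched below; the choice of error distribution (rational here, normal in the baseline) enters only through the likelihood, which is omitted. The parameters and return series are invented:

```python
import numpy as np

def garch_volatility(returns, omega, alpha, beta):
    """GARCH(1,1): sigma2[t] = omega + alpha*r[t-1]**2 + beta*sigma2[t-1]."""
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()          # a common initialization choice
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t-1]**2 + beta * sigma2[t-1]
    return sigma2

rng = np.random.default_rng(5)
r = rng.normal(scale=0.01, size=500)   # toy return series
sig2 = garch_volatility(r, omega=1e-6, alpha=0.1, beta=0.85)
print(sig2[-5:])                       # conditional variances
```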

  17. Exploring students’ adaptive reasoning skills and van Hiele levels of geometric thinking: a case study in geometry

    Science.gov (United States)

    Rizki, H. T. N.; Frentika, D.; Wijaya, A.

    2018-03-01

    This study aims to explore junior high school students' adaptive reasoning and their Van Hiele level of geometric thinking. The present study was a quasi-experiment with a non-equivalent control group design. The participants of the study were 34 seventh graders and 35 eighth graders in the experiment classes and 34 seventh graders and 34 eighth graders in the control classes. The students in the experiment classes learned geometry using the Knisley mathematical learning model. The data were analyzed quantitatively by using inferential statistics. The results of the data analysis show an improvement of adaptive reasoning skills in both grade seven and grade eight. An improvement was also found for the Van Hiele level of geometric thinking. These results indicate the positive impact of the Knisley learning model on students' adaptive reasoning skills and Van Hiele level of geometric thinking.

  18. Geometrical model of multiple production

    International Nuclear Information System (INIS)

    Chikovani, Z.E.; Jenkovszky, L.L.; Kvaratshelia, T.M.; Struminskij, B.V.

    1988-01-01

    The relation between geometrical and KNO-scaling and their violation is studied in a geometrical model of multiple production of hadrons. Predictions concerning the behaviour of correlation coefficients at future accelerators are given
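
    For reference, KNO scaling (the benchmark against which the model's violation is measured) states that the multiplicity distribution collapses onto a universal function of z = n/⟨n⟩:

    $$ \langle n \rangle\, P_n \;=\; \psi\!\left(\frac{n}{\langle n \rangle}\right), $$

    with P_n the probability of producing n hadrons and ψ energy-independent when the scaling holds.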

  19. A Corpus-based Study of EFL Learners’ Errors in IELTS Essay Writing

    Directory of Open Access Journals (Sweden)

    Hoda Divsar

    2017-03-01

    The present study analyzed different types of errors in EFL learners' IELTS essays. In order to determine the major types of errors, a corpus of 70 IELTS examinees' writings was collected, and their errors were extracted and categorized qualitatively. Errors were categorized based on a researcher-developed error-coding scheme into 13 aspects. Based on the descriptive statistical analyses, the frequency of each error type was calculated and the commonest errors committed by the EFL learners in IELTS essays were identified. The results indicated that the two most frequent errors that IELTS candidates committed were related to word choice and verb forms. Based on the research results, pedagogical implications highlight analyzing EFL learners' writing errors as a useful basis for instructional purposes, including creating pedagogical teaching materials that are in line with learners' linguistic strengths and weaknesses.

  20. Intraoperative application of geometric three-dimensional mitral valve assessment package: a feasibility study.

    Science.gov (United States)

    Mahmood, Feroze; Karthik, Swaminathan; Subramaniam, Balachundhar; Panzica, Peter J; Mitchell, John; Lerner, Adam B; Jervis, Karinne; Maslow, Andrew D

    2008-04-01

    To study the feasibility of using 3-dimensional (3D) echocardiography in the operating room for mitral valve repair or replacement surgery, and to perform geometric analysis of the mitral valve before and after repair. Prospective observational study. Academic, tertiary care hospital. Consecutive patients scheduled for mitral valve surgery. Intraoperative reconstruction of 3D images of the mitral valve. One hundred and two patients had 3D analysis of their mitral valve. Successful image reconstruction was performed in 93 patients; 8 patients had arrhythmias or a dilated mitral valve annulus resulting in significant artifacts. Time from acquisition to reconstruction and analysis was less than 5 minutes. Surgeon identification of mitral valve anatomy was 100% accurate. The study confirms the feasibility of performing intraoperative 3D reconstruction of the mitral valve. These data can be used for confirmation and communication of 2-dimensional data to the surgeons by obtaining a surgical view of the mitral valve. The incorporation of color-flow Doppler into these 3D images helps in identification of the commissural or perivalvular location of the regurgitant orifice. With improvements in the processing power of the current generation of echocardiography equipment, it is possible to quickly acquire, reconstruct, and manipulate images to help with timely diagnosis and surgical planning.

  1. Geometrical accuracy of metallic objects produced with additive or subtractive manufacturing: A comparative in vitro study.

    Science.gov (United States)

    Braian, Michael; Jönsson, David; Kevci, Mir; Wennerberg, Ann

    2018-04-06

    To evaluate the accuracy and precision of objects produced by additive manufacturing systems (AM) for use in dentistry and to compare them with subtractive manufacturing systems (SM). Ten specimens of two geometrical objects were produced by five different AM machines and one SM machine. Object A mimics an inlay-shaped object, while object B imitates a four-unit bridge model. All the objects were sorted into different measurement dimensions (x, y, z), linear distances, angles and corner radius. None of the additive manufacturing or subtractive manufacturing groups presented a perfect match to the CAD file with regard to all parameters included in the present study. Considering linear measurements, the precision of the subtractive manufacturing group was consistent in all axes for object A, while the additive manufacturing groups had consistent precision in the x-axis and y-axis but not in the z-axis. With regard to corner radius measurements, the SM group had the best overall accuracy and precision for both objects A and B when compared to the AM groups. Within the limitations of this in vitro study, the conclusion can be made that subtractive manufacturing presented overall precision on all measurements below 0.050 mm. The AM machines also presented fairly good precision. As additive techniques are now being implemented, all these production techniques need to be tested, compared and validated. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Inc. All rights reserved.

  2. Geometric study of transparent superhydrophobic surfaces of molded and grid patterned polydimethylsiloxane (PDMS)

    Science.gov (United States)

    Davaasuren, Gaasuren; Ngo, Chi-Vinh; Oh, Hyun-Seok; Chun, Doo-Man

    2014-09-01

    Herein we describe an economical method to fabricate a transparent superhydrophobic surface that uses grid patterning, and we report on the effects of grid geometry in determining the wettability and transparency of the fabricated surfaces. A polymer casting method was utilized because of its applicability to economical manufacturing and mass production; the material polydimethylsiloxane (PDMS) was selected because of its moldability and transparency. PDMS was replicated from a laser textured mold fabricated by a UV nanosecond pulsed laser. Sapphire wafer was used for the mold because it has very low surface roughness (Ra ≤0.3 nm) and adequate mechanical properties. To study geometric effects, grid patterns of a series of step sizes were fabricated. The maximum water droplet contact angle (WDCA) observed was 171°. WDCAs depended on the wetting area and the wetting state. The experimental results of WDCA were analyzed with Wenzel and Cassie-Baxter equations. The designed grid pattern was suitably transparent and structurally stable. Transmittance of the optimal transparent superhydrophobic surface was measured by using a spectrophotometer. Transmittance loss due to the presence of the grid was around 2-4% over the wavelength region measured (300-1000 nm); the minimum transmittance observed was 83.1% at 300 nm. This study also demonstrates the possibility of using a nanosecond pulsed laser for the surface texturing of a superhydrophobic surface.
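
    The two wetting models invoked in the analysis, in their standard forms: Wenzel for a fully wetted rough surface and Cassie-Baxter for a composite solid-air interface,

    $$ \cos\theta_W = r\,\cos\theta_Y, \qquad \cos\theta_{CB} = f_s\,(\cos\theta_Y + 1) - 1, $$

    where r ≥ 1 is the roughness ratio, f_s the solid fraction in contact with the droplet, and θ_Y the Young contact angle on the flat material. Superhydrophobicity with high contact angles such as the 171° reported here corresponds to a small f_s in the Cassie-Baxter state.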

  3. Geometric Evaluation of the Effect of Prosthetic Rehabilitation on the Facial Appearance of Mandibulectomy Patients: A Preliminary Study.

    Science.gov (United States)

    Aswehlee, Amel M; Elbashti, Mahmoud E; Hattori, Mariko; Sumita, Yuka I; Taniguchi, Hisashi

    The purpose of this study was to geometrically evaluate the effect of prosthetic rehabilitation on the facial appearance of mandibulectomy patients. Facial scans (with and without prostheses) were performed for 16 mandibulectomy patients using a noncontact three-dimensional (3D) digitizer, and 3D images were reconstructed with the corresponding software. The 3D datasets were geometrically evaluated and compared using 3D evaluation software. The mean difference in absolute 3D deviations for full face scans was 382.2 μm. This method may be useful in evaluating the effect of conventional prostheses on the facial appearance of individuals with mandibulectomy defects.

  4. A comparative study of voluntarily reported medication errors among ...

    African Journals Online (AJOL)

    errors among adult patients in intensive care (IC) and non-IC settings in Riyadh, ... safety "To err is human: Building a safer health care system" ... regression analysis was used to identify factors affecting the ... that work in non-ICU areas are less likely to report such ...

  5. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    Science.gov (United States)

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  6. Study of Periodic Fabrication Error of Optical Splitter Device Performance

    OpenAIRE

    Ab-Rahman, Mohammad Syuhaimi; Ater, Foze Saleh; Jumari, Kasmiran; Mohammad, Rahmah

    2012-01-01

    In this paper, the effect of fabrication errors (FEs) on the performance of a 1×4 optical power splitter is investigated in detail. The FE, which is assumed to take a regular shape, is considered in each section of the device. Simulation results show that the FE has a significant effect on the output power, especially when it occurs in the coupling regions.

  7. Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant

    Directory of Open Access Journals (Sweden)

    Mehdi Jahangiri

    2016-03-01

    Conclusion: The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the measures required for reducing the error probabilities in the PTW system. Some suggestions to reduce the likelihood of errors, especially in the field of modifying the performance shaping factors and dependencies among tasks, are provided.

  8. A Study of the Anechoic Performance of Rice Husk-Based, Geometrically Tapered, Hollow Absorbers

    Directory of Open Access Journals (Sweden)

    Muhammad Nadeem Iqbal

    2014-01-01

    Although solid, geometrically tapered microwave absorbers are preferred due to their better performance, they are bulky and must have a thickness on the order of λ or more. The goal of this study was to design lightweight absorbers that can reduce the electromagnetic reflections to less than −10 dB. We used a very simple approach; two waste materials, that is, rice husks and tire dust in powder form, were used to fabricate two independent samples. We measured and used their dielectric properties to determine and compare the propagation constants and quarter-wave thickness. The quarter-wave thickness for the tire dust was 3 mm less than that of the rice husk material, but we preferred the rice-husk material. This preference was based on the fact that our goal was to achieve minimum backward reflections, and the rice-husk material, with its low dielectric constant, high loss factor, large attenuation per unit length, and ease of fabrication, provided a better opportunity to achieve that goal. The performance of the absorbers was found to be better (lower than −20 dB), and comparison of the results proved that the hollow design, with 58% less weight, was a good alternative to the use of solid absorbers.
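
    The quarter-wave thickness compared in the study follows from the wavelength inside the lossy material (a standard relation; symbols are generic):

    $$ d_{\lambda/4} \;=\; \frac{\lambda_0}{4\,\mathrm{Re}\!\left(\sqrt{\mu_r\,\varepsilon_r}\right)}, $$

    so a higher dielectric constant (as for the tire dust) shortens the quarter-wave thickness, while a high loss factor raises the attenuation per unit length; this is the trade-off that led the authors to favor the rice-husk material.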

  9. Numerical study of the systematic error in Monte Carlo schemes for semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Muscato, Orazio [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica; Di Stefano, Vincenza [Univ. degli Studi di Messina (Italy). Dipt. di Matematica; Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS) im Forschungsverbund Berlin e.V. (Germany)

    2008-07-01

    The paper studies the convergence behavior of Monte Carlo schemes for semiconductors. A detailed analysis of the systematic error with respect to numerical parameters is performed. Different sources of systematic error are pointed out and illustrated in a spatially one-dimensional test case. The error with respect to the number of simulation particles occurs during the calculation of the internal electric field. The time step error, which is related to the splitting of transport and electric field calculations, vanishes sufficiently fast. The error due to the approximation of the trajectories of particles depends on the ODE solver used in the algorithm. It is negligible compared to the other sources of time step error, when a second order Runge-Kutta solver is used. The error related to the approximate scattering mechanism is the most significant source of error with respect to the time step. (orig.)

  10. The study on the import of the geometric body by GDML in GEANT4

    International Nuclear Information System (INIS)

    Sun Baodong; Liu Huilan; Sun Dawang; Xie Zhaoyang; Song Yushou

    2014-01-01

    Geometry Description Markup Language (GDML) can be used as an application interface to import geometric bodies into GEANT4. It greatly simplifies detector construction work with high reliability. With this mechanism, the geometric data of a detector are described in an XML file and read by the XML parser embedded in GEANT4. The geometric structure of a detector is designed in the CAD toolkit SolidWorks and saved as a standard STEP file. Then, using FastRad, the STEP file is transformed into an XML script readable by GEANT4. In comparison with detectors constructed by the Constructed Solid Geometry (CSG) approach provided by GEANT4, those imported by GDML also satisfy the requests of general simulation applications. At the same time, some solutions and tips for several common problems arising in the process of constructing detectors by GDML are given. (authors)

  11. Study of Measurement Strategies of Geometric Deviation of the Position of the Threaded Holes

    Science.gov (United States)

    Drbul, Mário; Martikan, Pavol; Sajgalik, Michal; Czan, Andrej; Broncek, Jozef; Babik, Ondrej

    2017-12-01

    Verification of product and quality control is an integral part of the current production process. In terms of functional requirements and product interoperability, it is necessary to analyze dimensional and also geometric specifications. Threaded holes are verified elements too; they are a substantial part of detachable screw connections and have a broad presence in engineering products. This paper deals with the analysis of measurement strategies for verifying the geometric deviation of the position of threaded holes, using the indirect method of measuring threaded pins; applying different measurement strategies can affect the result of the verification of the product.

  12. Computational Fluid Dynamics Study of Channel Geometric Effect for Fischer-Tropsch Microchannel Reactor

    International Nuclear Information System (INIS)

    Na, Jonggeol; Jung, Ikhwan; Kshetrimayum, Krishnadash S.; Park, Seongho; Park, Chansaem; Han, Chonghun

    2014-01-01

    Driven by both environmental and economic reasons, the development of small to medium scale GTL (gas-to-liquid) processes for offshore applications and for utilizing other stranded or associated gas has recently been studied increasingly. Microchannel GTL reactors have been preferred over conventional GTL reactors for such applications due to their compactness and the additional advantage of the small heat and mass transfer distances desired for high heat transfer performance and reactor conversion. In this work, a multi-microchannel reactor was simulated using the commercial CFD code ANSYS FLUENT to study the geometric effect of the microchannels on the heat transfer phenomena. A heat generation curve was first calculated by modeling a Fischer-Tropsch reaction in a single-microchannel reactor model using a Matlab-ASPEN integration platform. The calculated heat generation curve was implemented in the CFD model. Four design variables based on the microchannel geometry, namely coolant channel width, coolant channel height, coolant channel to process channel distance, and coolant channel to coolant channel distance, were selected for calculating three dependent variables, namely heat flux, maximum temperature of the coolant channel, and maximum temperature of the process channel. The simulation results were visualized to understand the effects of the design variables on the dependent variables. Heat flux and the maximum temperatures of the coolant and process channels were found to increase when coolant channel width and height were decreased. Coolant channel to process channel distance was found to have no effect on the heat transfer phenomena. Finally, total heat flux was found to increase and maximum coolant channel temperature to decrease when coolant channel to coolant channel distance was decreased. Using the qualitative trends revealed by the present study, an appropriate process channel and coolant channel geometry along with the distance between the adjacent

  13. The geometric factor of electrostatic plasma analyzers: A case study from the Fast Plasma Investigation for the Magnetospheric Multiscale mission

    International Nuclear Information System (INIS)

    Collinson, Glyn A.; Dorelli, John C.; Moore, Thomas E.; Pollock, Craig; Mariano, Al; Shappirio, Mark D.; Adrian, Mark L.; Avanov, Levon A.; Lewis, Gethyn R.; Kataria, Dhiren O.; Bedington, Robert; Owen, Christopher J.; Walsh, Andrew P.; Arridge, Chris S.; Chornay, Dennis J.; Gliese, Ulrik; Barrie, Alexander C.; Tucker, Corey

    2012-01-01

    We report our findings comparing the geometric factor (GF) as determined from simulations and laboratory measurements of the new Dual Electron Spectrometer (DES) being developed at NASA Goddard Space Flight Center as part of the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission. Particle simulations are increasingly playing an essential role in the design and calibration of electrostatic analyzers, facilitating the identification and mitigation of the many sources of systematic error present in laboratory calibration. While equations for laboratory measurement of the GF have been described in the literature, these are not directly applicable to simulation since the two are carried out under substantially different assumptions and conditions, making direct comparison very challenging. Starting from first principles, we derive generalized expressions for the determination of the GF in simulation and laboratory, and discuss how we have estimated errors in both cases. Finally, we apply these equations to the new DES instrument and show that the results agree within errors. Thus we show that the techniques presented here will produce consistent results between laboratory and simulation, and present the first description of the performance of the new DES instrument in the literature.
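
    As background, one common convention (a generic textbook form, not the generalized expressions derived in the paper) defines the geometric factor G of an electrostatic analyzer so that the count rate C follows from the differential particle flux j(E):

    $$ C \;\approx\; \varepsilon\, j(E_0)\, E_0\, G, \qquad G = A_{\mathrm{eff}}\,\Delta\Omega\,\frac{\Delta E}{E_0}, $$

    with ε the detection efficiency, A_eff the effective aperture area, ΔΩ the accepted solid angle, and ΔE/E_0 the relative energy passband centered on E_0.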

  14. AN ERROR ANALYSIS OF ARGUMENTATIVE ESSAY (CASE STUDY AT UNIVERSITY MUHAMMADIYAH OF METRO

    Directory of Open Access Journals (Sweden)

    Fenny - Thresia

    2015-10-01

    The purpose of this study was to analyze the students' errors in writing argumentative essays. The researcher focuses on errors of verb, concord and learner language. This study took 20 students from the third semester as the subject of research. The data were taken from observation and documentation. Based on the results of the data analysis, some errors are still found in the students' argumentative essays in English writing. The common errors which repeatedly appear are verb errors. The second is concord, and learner language errors are the smallest group. From the 20 samples taken, the frequencies of the errors are: verb, 12 items (60%); concord, 8 items (40%); and learner language, 7 items (35%). As a result, verb has the biggest number of common errors.

  15. Factors associated with reporting nursing errors in Iran: a qualitative study

    Directory of Open Access Journals (Sweden)

    Hashemi Fatemeh

    2012-10-01

    Abstract Background Reporting professional errors for improving patient safety is considered essential not only in hospitals, but also in ambulatory care centers. Unfortunately, a great number of nurses, similar to most clinicians, do not report their errors. Therefore, the present study aimed to clarify the factors associated with reporting nursing errors through the experiences of clinical nurses and nursing managers. Methods A total of 115 nurses working in the hospitals and specialized clinics affiliated to Tehran and Shiraz Universities of Medical Sciences, Iran participated in this qualitative study. The study data were collected through semi-structured group discussions conducted in 17 sessions and analyzed by an inductive content analysis approach. Results The main categories that emerged in this study were: (a) general approaches of the nurses towards errors, (b) barriers to reporting nursing errors, and (c) motivators for error reporting. Conclusion Error reporting provides extremely valuable information for preventing future errors and improving patient safety. Overall, regarding motivators and barriers in reporting nursing errors, it is necessary to enact regulations in which the ways of reporting an error and its constituent elements, such as the notion of the error, are clearly identified.

  16. Geometric group theory

    CERN Document Server

    Druţu, Cornelia

    2018-01-01

    The key idea in geometric group theory is to study infinite groups by endowing them with a metric and treating them as geometric spaces. This applies to many groups naturally appearing in topology, geometry, and algebra, such as fundamental groups of manifolds, groups of matrices with integer coefficients, etc. The primary focus of this book is to cover the foundations of geometric group theory, including coarse topology, ultralimits and asymptotic cones, hyperbolic groups, isoperimetric inequalities, growth of groups, amenability, Kazhdan's Property (T) and the Haagerup property, as well as their characterizations in terms of group actions on median spaces and spaces with walls. The book contains proofs of several fundamental results of geometric group theory, such as Gromov's theorem on groups of polynomial growth, Tits's alternative, Stallings's theorem on ends of groups, Dunwoody's accessibility theorem, the Mostow Rigidity Theorem, and quasiisometric rigidity theorems of Tukia and Schwartz. This is the f...

  17. Medication prescribing errors in a public teaching hospital in India: A prospective study.

    Directory of Open Access Journals (Sweden)

    Pote S

    2007-03-01

    Background: To prevent medication errors in prescribing, one needs to know their types and relative occurrence. Such errors are a great cause of concern as they have the potential to cause patient harm. The aim of this study was to determine the nature and types of medication prescribing errors in an Indian setting. Methods: The medication errors were analyzed in a prospective observational study conducted in 3 medical wards of a public teaching hospital in India. The medication errors were analyzed by means of the Micromedex Drug-Reax database. Results: Out of 312 patients, only 304 were included in the study. Of the 304 cases, 103 (34%) had at least one error. The total number of errors found was 157. Drug-drug interactions were the most frequently occurring type of error (68.2%), followed by incorrect dosing interval (12%) and dosing errors (9.5%). The medication classes involved most were antimicrobial agents (29.4%), cardiovascular agents (15.4%), GI agents (8.6%) and CNS agents (8.2%). Moderate errors contributed the most (61.8%) to the total errors when compared to major (25.5%) and minor (12.7%) errors. The results showed that the number of errors increases with age and the number of medicines prescribed. Conclusion: The results point to the establishment of medication error reporting at each hospital and to sharing the data with other hospitals. The role of the clinical pharmacist in this situation appears to be a strong intervention; the clinical pharmacist could initially focus on identification of the medication errors.

  18. A geometric morphometric study of a Middle Pleistocene cranium from Hexian, China.

    Science.gov (United States)

    Cui, Yaming; Wu, Xinzhi

    2015-11-01

    The Hexian calvarium is one of the most complete and well-preserved Homo erectus fossils ever found in east Asia, apart from the Zhoukoudian specimens. Various methods bracket the age of the Hexian fossil to between 150 and 412 ka (thousands of years ago). The Hexian calvarium has been considered to be H. erectus given its morphological similarities to Zhoukoudian and Javan H. erectus. However, discussion continues regarding the affinities of the Hexian specimen with other H. erectus fossils. The arguments mainly focus on its relationships to other Asian H. erectus fossils, including those from both China and Java. To better determine the affinities of the Hexian cranium, our study used 3D landmark and semilandmark geometric morphometric techniques and multivariate statistical analyses to quantify the shape of the neurocranium and to compare the Hexian cranium to other H. erectus specimens. The results of this study confirmed the morphological similarities between Hexian and Chinese H. erectus in overall morphology, and particularly in the structure of the frontal bone and the posterior part of the neurocranium. Although the Hexian specimen shows the strongest connection to Chinese H. erectus, the morphology of the lateral neurocranium resembles early Indonesian H. erectus specimens, possibly suggesting shared common ancestry or gene flow from early Indonesian populations. Overall cranial and frontal bone morphology are strongly influenced by geography. Although geographically intermediate between Zhoukoudian and Indonesian H. erectus, the Hexian specimen does not form part of an obvious morphological gradient with regard to overall cranial shape. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Appropriating Geometric Series as a Cultural Tool: A Study of Student Collaborative Learning

    Science.gov (United States)

    Carlsen, Martin

    2010-01-01

    The aim of this article is to illustrate how students, through collaborative small-group problem solving, appropriate the concept of geometric series. Student appropriation of cultural tools is dependent on five sociocultural aspects: involvement in joint activity, shared focus of attention, shared meanings for utterances, transforming actions and…
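
    The mathematical object at the center of the study, for reference: a geometric series with first term a and ratio r has partial sums

    $$ S_n = a\,\frac{1-r^{\,n}}{1-r} \;\; (r \neq 1), \qquad \lim_{n\to\infty} S_n = \frac{a}{1-r} \;\text{ for } |r| < 1. $$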

  20. Errors in translation made by English major students: A study on types and causes

    Directory of Open Access Journals (Sweden)

    Pattanapong Wongranu

    2017-05-01

    Many Thai English major students have problems when they translate Thai texts into English, as numerous errors can be found. Therefore, a study of translation errors is needed to find solutions to these problems. The objectives of this research were: (1) to examine the types of translation errors in translation from Thai into English, (2) to determine the types of translation errors that are most common, and (3) to find possible explanations for the causes of errors. The results of this study will be used to improve translation teaching and the course "Translation from Thai into English". The participants were 26 third-year English major students at Kasetsart University. The data were collected from the students' exercises and examinations. Interviews and stimulated recall were also used to determine translation problems and causes of errors. The data were analyzed by considering frequency and percentage, and by content analysis. The results show that the most frequent translation errors were syntactic errors (65%), followed by semantic errors (26.5%) and miscellaneous errors (8.5%). The causes of errors found in this study included translation procedures, carelessness, low self-confidence, and anxiety. It is recommended that more class time be spent addressing the problematic points. In addition, more authentic translation and group work should be implemented to increase self-confidence and decrease anxiety.

  1. 3D Echo pilot study of geometric left ventricular changes after acute myocardial infarction.

    Science.gov (United States)

    Vieira, Marcelo Luiz Campos; Oliveira, Wercules Antonio; Cordovil, Adriana; Rodrigues, Ana Clara Tude; Mônaco, Cláudia Gianini; Afonso, Tânia; Lira Filho, Edgar Bezerra; Perin, Marco; Fischer, Cláudio Henrique; Morhy, Samira Saady

    2013-07-01

    Left ventricular remodeling (LVR) after AMI is a factor of poor prognosis. There is little information in the literature on LVR analyzed with three-dimensional echocardiography (3D ECHO). To analyze, with 3D ECHO, the geometric and volumetric modifications of the left ventricle (LV) six months after AMI in patients subjected to percutaneous primary treatment. Prospective study with 3D ECHO of 21 subjects (16 men, 56 ± 12 years old), affected by AMI with ST segment elevation. The morphological and functional analysis of the LV with 3D ECHO (volumes, LVEF, 3D sphericity index) was carried out up to seven days and six months after the AMI. LVR was considered for an increase > 15% of the end-diastolic volume of the LV (LVEDV) six months after the AMI, compared to the LVEDV up to seven days from the event. Eight (38%) patients presented LVR. Echocardiographic measurements (n = 21 patients): I- up to seven days after the AMI: 1- LVEDV: 92.3 ± 22.3 mL; 2- LVEF: 0.51 ± 0.01; 3- sphericity index: 0.38 ± 0.05; II- after six months: 1- LVEDV: 107.3 ± 26.8 mL; 2- LVEF: 0.59 ± 0.01; 3- sphericity index: 0.31 ± 0.05. Correlation coefficient (r) between the sphericity index up to seven days after the AMI and the LVEDV at six months (n = 8) after the AMI: r: 0.74, p = 0.0007; (r) between the sphericity index six months after the AMI and the LVEDV at six months after the AMI: r: 0.85. In this series, LVR was observed in 38% of the patients six months after the AMI. The three-dimensional sphericity index was associated with the occurrence of LVR.
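
    The remodeling criterion is simple arithmetic; a minimal check (the values below are the cohort means from the abstract, used only to illustrate the threshold):

```python
def remodeled(edv_baseline_ml, edv_6mo_ml, threshold=0.15):
    """LVR is defined here as a >15% increase in LV end-diastolic volume."""
    return (edv_6mo_ml - edv_baseline_ml) / edv_baseline_ml > threshold

print(remodeled(92.3, 107.3))  # cohort means: ~16% increase -> True
```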

  2. Heritability of face shape in twins: a preliminary study using 3D stereophotogrammetry and geometric morphometrics

    Directory of Open Access Journals (Sweden)

    Seth M. Weinberg

    2013-11-01

    Introduction: Previous research suggests that aspects of facial surface morphology are heritable. Traditionally, heritability studies have used a limited set of linear distances to quantify facial morphology and often employ statistical methods poorly designed to deal with biological shape. In this preliminary report, we use a combination of 3D photogrammetry and landmark-based morphometrics to explore which aspects of face shape show the strongest evidence of heritability in a sample of twins. Methods: 3D surface images were obtained from 21 twin pairs (10 monozygotic, 11 same-sex dizygotic). Thirteen 3D landmarks were collected from each facial surface and their coordinates subjected to geometric morphometric analysis. This involved superimposing the individual landmark configurations and then subjecting the resulting shape coordinates to a principal components analysis. The resulting PC scores were then used to calculate rough narrow-sense heritability estimates. Results: Three principal components displayed evidence of moderate to high heritability and were associated with variation in the breadth of orbital and nasal structures, upper lip height and projection, and the vertical and forward projection of the root of the nose due to variation in the position of nasion. Conclusions: Aspects of facial shape, primarily related to variation in the length and breadth of central midfacial structures, were shown to demonstrate evidence of strong heritability. An improved understanding of which facial features are under strong genetic control is an important step in the identification of specific genes that underlie normal facial variation.
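
    A common way to form such rough narrow-sense heritability estimates from twin data is Falconer's formula (illustrative here; the abstract does not state which estimator the authors used):

    $$ h^2 \;\approx\; 2\,\bigl(r_{MZ} - r_{DZ}\bigr), $$

    where r_MZ and r_DZ are the intraclass correlations of a shape score (here, a principal component) within monozygotic and dizygotic pairs.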

  3. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    Science.gov (United States)

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
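
    A hedged sketch of the linear TSRI estimator with a bootstrap standard error, using NumPy least squares on simulated data; this illustrates the generic two-stage estimator, not the authors' specific Newey/Terza corrections:

```python
import numpy as np

def tsri_linear(g, x, y):
    """Stage 1: regress exposure x on instrument g; stage 2: regress
    outcome y on x plus the stage-1 residuals. Returns the causal estimate."""
    X1 = np.column_stack([np.ones_like(g), g])
    b1, *_ = np.linalg.lstsq(X1, x, rcond=None)
    resid = x - X1 @ b1
    X2 = np.column_stack([np.ones_like(x), x, resid])
    b2, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return b2[1]

rng = np.random.default_rng(11)
n = 5000
g = rng.integers(0, 3, n).astype(float)   # genotype coded 0/1/2
u = rng.normal(size=n)                    # unmeasured confounder
x = 0.5 * g + u + rng.normal(size=n)      # exposure
y = 0.3 * x + u + rng.normal(size=n)      # outcome; true effect is 0.3

est = tsri_linear(g, x, y)
boots = np.array([tsri_linear(*(a[idx] for a in (g, x, y)))
                  for idx in (rng.integers(0, n, n) for _ in range(500))])
print(f"estimate {est:.3f}, bootstrap SE {boots.std(ddof=1):.3f}")
```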

  4. Geometrical nonlinear deformation model and its experimental study on bimorph giant magnetostrictive thin film

    Institute of Scientific and Technical Information of China (English)

    Wei LIU; Zhenyuan JIA; Fuji WANG; Yongshun ZHANG; Dongming GUO

    2008-01-01

    The geometrical nonlinearity of a giant magnetostrictive thin film (GMF) can be clearly detected under the magnetostriction effect. Thus, using geometrical linear elastic theory to describe the strain, stress, and constitutive relationship of a GMF is inaccurate. According to nonlinear elastic theory, a nonlinear deformation model of the bimorph GMF is established, based on the assumption that the magnetostriction effect is equivalent to the effect of a body force loaded on the GMF. With the Taylor series method, the numerical solution is deduced. Experiments on TbDyFe/Polyimide (PI)/SmFe and TbDyFe/Cu/SmFe are then conducted to verify the proposed model. Results indicate that the nonlinear deflection curve model is in good conformity with the experimental data.

  5. Proportion of medication error reporting and associated factors among nurses: a cross sectional study.

    Science.gov (United States)

    Jember, Abebaw; Hailu, Mignote; Messele, Anteneh; Demeke, Tesfaye; Hassen, Mohammed

    2018-01-01

    A medication error (ME) is any preventable event that may cause or lead to inappropriate medication use or patient harm. Voluntary reporting has a principal role in appreciating the extent and impact of medication errors. Thus, exploring the proportion of medication error reporting and associated factors among nurses is important to inform service providers and program implementers so as to improve the quality of healthcare services. An institution-based quantitative cross-sectional study was conducted among 397 nurses from March 6 to May 10, 2015. Stratified sampling followed by a simple random sampling technique was used to select the study participants. The data were collected using a structured self-administered questionnaire adapted from studies conducted in Australia and Jordan. A pilot study was carried out to validate the questionnaire before data collection for this study. Bivariate and multivariate logistic regression models were fitted to identify factors associated with the proportion of medication error reporting among nurses. An adjusted odds ratio with a 95% confidence interval was computed to determine the level of significance. The proportion of medication error reporting among nurses was found to be 57.4%. Regression analysis showed that sex, marital status, having made a medication error and medication error experience were significantly associated with medication error reporting. The proportion of medication error reporting among nurses in this study was found to be higher than in other studies.

  6. Study on analysis from sources of error for Airborne LIDAR

    Science.gov (United States)

    Ren, H. C.; Yan, Q.; Liu, Z. J.; Zuo, Z. Q.; Xu, Q. Q.; Li, F. F.; Song, C.

    2016-11-01

    With the advancement of aerial photogrammetry, airborne LIDAR provides a new technical means of obtaining geo-spatial information of high spatial and temporal resolution, with unique advantages and broad application prospects. Airborne LIDAR is increasingly becoming a new kind of Earth observation technology: mounted on an aviation platform, it emits and receives laser pulses to obtain high-precision, high-density three-dimensional point cloud coordinates and intensity information. In this paper, we briefly describe airborne laser radar systems, analyze in detail several error sources in airborne LIDAR data, and put forward corresponding methods to avoid or eliminate them. Taking into account practical engineering applications, recommendations are developed for these designs, which has crucial theoretical and practical significance for the field of airborne LIDAR data processing.

  7. Implementing an error disclosure coaching model: A multicenter case study.

    Science.gov (United States)

    White, Andrew A; Brock, Douglas M; McCotter, Patricia I; Shannon, Sarah E; Gallagher, Thomas H

    2017-01-01

    National guidelines call for health care organizations to provide around-the-clock coaching for medical error disclosure. However, frontline clinicians may not always seek risk managers for coaching. As part of a demonstration project designed to improve patient safety and reduce malpractice liability, we trained multidisciplinary disclosure coaches at 8 health care organizations in Washington State. The training was highly rated by participants, although not all emerged confident in their coaching skill. This multisite intervention can serve as a model for other organizations looking to enhance existing disclosure capabilities. Success likely requires cultural change and repeated practice opportunities for coaches. © 2017 American Society for Healthcare Risk Management of the American Hospital Association.

  8. Geometric recursion

    DEFF Research Database (Denmark)

    Andersen, Jørgen Ellegaard; Borot, Gaëtan; Orantin, Nicolas

    We propose a general theory whose main components are functorial assignments Σ → Ω_Σ ∈ E(Σ), for a large class of functors E from a certain category of bordered surfaces (Σ's) to a suitable target category of topological vector spaces. The construction is done by summing appropriate compositions... as Poisson structures on the moduli space of flat connections. The theory has a wider scope than that, and one expects that many functorial objects in low-dimensional geometry and topology should have a GR construction. The geometric recursion has various projections to topological recursion (TR) and we in particular show it retrieves all previous variants and applications of TR. We also show that, for any initial data for topological recursion, one can construct initial data for GR with values in Frobenius algebra-valued continuous functions on Teichmueller space, such that the ω_{g,n} of TR are obtained...

  9. Error or "act of God"? A study of patients' and operating room team members' perceptions of error definition, reporting, and disclosure.

    Science.gov (United States)

    Espin, Sherry; Levinson, Wendy; Regehr, Glenn; Baker, G Ross; Lingard, Lorelei

    2006-01-01

    Calls abound for a culture change in health care to improve patient safety. However, effective change cannot proceed without a clear understanding of perceptions and beliefs about error. In this study, we describe and compare operative team members' and patients' perceptions of error, reporting of error, and disclosure of error. Thirty-nine interviews of team members (9 surgeons, 9 nurses, 10 anesthesiologists) and patients (11) were conducted at 2 teaching hospitals using 4 scenarios as prompts. Transcribed responses to open questions were analyzed by 2 researchers for recurrent themes using the grounded-theory method. Yes/no answers were compared across groups using chi-square analyses. Team members and patients agreed on what constitutes an error. Deviation from standards and negative outcome were emphasized as definitive features. Patients and nurse professionals differed significantly in their perception of whether errors should be reported. Nurses were willing to report only events within their disciplinary scope of practice. Although most patients strongly advocated full disclosure of errors (what happened and how), team members preferred to disclose only what happened. When patients did support partial disclosure, their rationales varied from that of team members. Both operative teams and patients define error in terms of breaking the rules and the concept of "no harm no foul." These concepts pose challenges for treating errors as system failures. A strong culture of individualism pervades nurses' perception of error reporting, suggesting that interventions are needed to foster collective responsibility and a constructive approach to error identification.

  10. ERROR ANALYSIS IN THE TRAVEL WRITING MADE BY THE STUDENTS OF ENGLISH STUDY PROGRAM

    Directory of Open Access Journals (Sweden)

    Vika Agustina

    2015-05-01

    Full Text Available This study was conducted to identify the kinds of errors in the surface strategy taxonomy and to determine the dominant types of errors made by fifth-semester students of the English Department of one state university in Malang, Indonesia, in producing their travel writing. The type of research is document analysis, since it analyses written materials, in this case travel writing texts. The analysis finds that the grammatical errors made by the students, based on surface strategy taxonomy theory, consist of four types: (1) omission, (2) addition, (3) misformation and (4) misordering. The most frequent errors, occurring in misformation, are in the use of tense forms. Second, there are errors of omission of noun/verb inflection. Finally, many clauses contain unnecessary added phrases.

  11. The study of error for analysis in dynamic image from the error of count rates in Nal (Tl) scintillation camera

    International Nuclear Information System (INIS)

    Oh, Joo Young; Kang, Chun Goo; Kim, Jung Yul; Oh, Ki Baek; Kim, Jae Sam; Park, Hoon Hee

    2013-01-01

    This study aimed to evaluate the effect of T1/2 on count rates in the analysis of dynamic scans using a NaI(Tl) scintillation camera, and to suggest a new quality control method based on this effect. We produced point sources of 99mTcO4− with 18.5 to 185 MBq in 2 mL syringes, and acquired 30 frames of dynamic images of 10 to 60 seconds each using an Infinia gamma camera (GE, USA). In the second experiment, 90 frames of dynamic images were acquired from a 74 MBq point source by 5 gamma cameras (Infinia 2, Forte 2, Argus 1). In the first experiment, there were no significant differences in the average count rates of the sources with 18.5 to 92.5 MBq in the analysis of 10 to 60 seconds/frame with 10-second intervals (p>0.05), but the average count rates were significantly low for sources over 111 MBq at 60 seconds/frame (p<0.01). According to the linear regression analysis of the count rates of the 5 gamma cameras acquired over 90 minutes, the counting efficiency of the fourth gamma camera was the lowest at 0.0064%, and its gradient and coefficient of variation were the highest, at 0.0042 and 0.229 respectively. We found no abnormal fluctuation in the χ² test of count rates (p>0.02), and Levene's F-test showed homogeneity of variance among the gamma cameras (p>0.05). In the correlation analysis, the only significant correlation was a negative one between counting efficiency and gradient (r=-0.90, p<0.05). Lastly, according to the calculation of the T1/2 error caused by a change of gradient from -0.25% to +0.25%, the error increases as T1/2 becomes longer or the gradient becomes higher. When estimating the value of the fourth camera, which has the highest gradient, from the above results, we could not see a T1/2 error within 60 minutes at that value. In conclusion, scintillation gamma cameras in the medical field require strict quality management of radiation measurement. Especially, we found a
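    As a hedged illustration of the half-life error calculation described above (a minimal sketch under a simple exponential decay model, not the authors' code; the nominal 99mTc half-life is the only fixed input), a ±0.25% gradient change propagates into an apparent T1/2 error as follows:

        # Sketch: a small change in the fitted gradient of ln(count rate) vs. time
        # propagates into an apparent half-life error for a 99mTc point source.
        import math

        T_HALF_MIN = 6.02 * 60                 # nominal 99mTc half-life (minutes)
        lam = math.log(2) / T_HALF_MIN         # decay constant (1/min)

        for rel_change in (-0.0025, 0.0025):   # gradient perturbed by -0.25% / +0.25%
            lam_measured = lam * (1 + rel_change)
            t_half_measured = math.log(2) / lam_measured
            err = t_half_measured - T_HALF_MIN
            print(f"gradient change {rel_change:+.2%}: apparent T1/2 error = {err:+.2f} min")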

  12. A prospective, multicenter study of pharmacist activities resulting in medication error interception in the emergency department.

    Science.gov (United States)

    Patanwala, Asad E; Sanders, Arthur B; Thomas, Michael C; Acquisto, Nicole M; Weant, Kyle A; Baker, Stephanie N; Merritt, Erica M; Erstad, Brian L

    2012-05-01

    The primary objective of this study is to determine the activities of pharmacists that lead to medication error interception in the emergency department (ED). This was a prospective, multicenter cohort study conducted in 4 geographically diverse academic and community EDs in the United States. Each site had clinical pharmacy services. Pharmacists at each site recorded their medication error interceptions for 250 hours of cumulative time when present in the ED (1,000 hours total for all 4 sites). Items recorded included the activities of the pharmacist that led to medication error interception, type of orders, phase of medication use process, and type of error. Independent evaluators reviewed all medication errors. Descriptive analyses were performed for all variables. A total of 16,446 patients presented to the EDs during the study, resulting in 364 confirmed medication error interceptions by pharmacists. The pharmacists' activities that led to medication error interception were as follows: involvement in consultative activities (n=187; 51.4%), review of medication orders (n=127; 34.9%), and other (n=50; 13.7%). The types of orders resulting in medication error interceptions were written or computerized orders (n=198; 54.4%), verbal orders (n=119; 32.7%), and other (n=47; 12.9%). Most medication error interceptions occurred during the prescribing phase of the medication use process (n=300; 82.4%) and the most common type of error was wrong dose (n=161; 44.2%). Pharmacists' review of written or computerized medication orders accounts for only a third of medication error interceptions. Most medication error interceptions occur during consultative activities. Copyright © 2011. Published by Mosby, Inc.

  13. A second study of the prediction of cognitive errors using the 'CREAM' technique

    International Nuclear Information System (INIS)

    Collier, Steve; Andresen, Gisle

    2000-03-01

    Some human errors, such as errors of commission and knowledge-based errors, are not adequately modelled in probabilistic safety assessments. Even qualitative methods for handling these sorts of errors are comparatively underdeveloped. The 'Cognitive Reliability and Error Analysis Method' (CREAM) was recently developed for prediction of cognitive error modes. It has not yet been comprehensively established how reliable, valid and generally useful it could be to researchers and practitioners. A previous study of CREAM at Halden was promising, showing a relationship between errors predicted in advance and those that actually occurred in simulated fault scenarios. The present study continues this work. CREAM was used to make predictions of cognitive error modes throughout two rather difficult fault scenarios. Predictions were made of the most likely cognitive error mode, were one to occur at all, at several points throughout the expected scenarios, based upon the scenario design and description. Each scenario was then run 15 times with different operators. Error modes occurring during simulations were later scored using the task description for the scenario, videotapes of operator actions, eye-track recording, operators' verbal protocols and an expert's concurrent commentary. The scoring team had no previous substantive knowledge of the experiment or the techniques used, so as to provide a more stringent test of the data and knowledge needed for scoring. The scored error modes were then compared with the CREAM predictions to assess the degree of agreement. Some cognitive error modes were predicted successfully, but the results were generally not so encouraging as the previous study. Several problems were found with both the CREAM technique and the data needed to complete the analysis. It was felt that further development was needed before this kind of analysis can be reliable and valid, either in a research setting or as a practitioner's tool in a safety assessment

  14. Error floor behavior study of LDPC codes for concatenated codes design

    Science.gov (United States)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small using the quantized sum-product (SP) algorithm. Therefore, an LDPC code may serve as the inner code in a concatenated coding system with a high-code-rate outer code, and thus an ultra-low error floor can be achieved. This conclusion is also verified by the experimental results.
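    The concatenation argument can be made concrete with a small binomial-tail calculation: if decoding failures leave only a handful of residual errors, an outer code correcting t symbols suppresses the floor sharply. The sketch below uses assumed numbers (residual symbol error rate, RS-like outer code parameters), not the paper's measurements:

        # Sketch: probability that more than t of n outer-code symbols are in
        # error, under an i.i.d. approximation of the residual errors left by
        # a failed inner LDPC decoding.
        from math import comb

        def outer_failure_prob(p_sym, n, t):
            return sum(comb(n, i) * p_sym**i * (1 - p_sym)**(n - i)
                       for i in range(t + 1, n + 1))

        p_sym = 1e-4    # assumed residual symbol error rate after LDPC decoding
        n, t = 255, 8   # assumed RS(255,239)-like outer code correcting 8 symbols
        print(f"error floor after outer code ~ {outer_failure_prob(p_sym, n, t):.3e}")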

  15. Sensitivity of risk parameters to human errors in reactor safety study for a PWR

    International Nuclear Information System (INIS)

    Samanta, P.K.; Hall, R.E.; Swoboda, A.L.

    1981-01-01

    Sensitivities of the risk parameters (emergency safety system unavailabilities, accident sequence probabilities, release category probabilities and core melt probability) were investigated for changes in the human error rates within the general methodological framework of the Reactor Safety Study (RSS) for a Pressurized Water Reactor (PWR). The impact of individual human errors was assessed both in terms of their structural importance to core melt and their reliability importance on core melt probability. The Human Error Sensitivity Assessment of a PWR (HESAP) computer code was written for the purpose of this study. The code employed a point estimate approach and ignored the smoothing technique applied in the RSS. It computed the point estimates for the system unavailabilities from the median values of the component failure rates and proceeded in terms of point values to obtain the point estimates for the accident sequence probabilities, core melt probability, and release category probabilities. The sensitivity measure used was the ratio of the top event probability before and after the perturbation of the constituent events. Core melt probability per reactor year increased significantly with increasing human error rates, but did not show a similar decrease with decreasing human error rates, owing to the dominance of the hardware failures. When the minimum human error rate (M.H.E.R.) used is increased to 10⁻³, the base case results start to show sensitivity to human errors. This effort now allows the evaluation of new error rate data along with proposed changes in the man-machine interface.
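    The sensitivity measure (the ratio of the top event probability after/before perturbing a constituent event) can be sketched on a toy fault tree. The model below is illustrative only and not the HESAP code, but it reproduces the reported asymmetry, since hardware failures bound how far the top event probability can fall:

        # Toy point-estimate sensitivity: top event = hardware failure OR two
        # independent human errors; perturb the human error rate and take ratios.
        def top_event_prob(p_hw, p_he):
            return 1 - (1 - p_hw) * (1 - p_he * p_he)

        base = top_event_prob(p_hw=1e-4, p_he=1e-2)
        for factor in (0.1, 10.0):            # decrease / increase human error rate
            perturbed = top_event_prob(p_hw=1e-4, p_he=1e-2 * factor)
            print(f"HE rate x{factor:>4}: sensitivity ratio = {perturbed / base:.3f}")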

  16. Prevalence of Refractive Error in Singaporean Chinese Children: The Strabismus, Amblyopia, and Refractive Error in Young Singaporean Children (STARS) Study

    OpenAIRE

    Dirani, Mohamed; Chan, Yiong-Huak; Gazzard, Gus; Hornbeak, Dana Marie; Leo, Seo-Wei; Selvaraj, Prabakaran; Zhou, Brendan; Young, Terri L.; Mitchell, Paul; Varma, Rohit; Wong, Tien Yin; Saw, Seang-Mei

    2010-01-01

    Using population-based data, the authors report, for the first time, the prevalence of refractive error in Singaporean Chinese children aged 6 to 72 months. In selected regions of Singapore, myopia has been shown to affect more than 80% of adults; therefore, this paper provides insights into the development of refractive error at a very young age.

  17. Size effect studies on geometrically scaled three point bend type specimens with U-notches

    Energy Technology Data Exchange (ETDEWEB)

    Krompholz, K.; Kalkhof, D.; Groth, E

    2001-02-01

    One of the objectives of the REVISA project (REactor Vessel Integrity in Severe Accidents) is to assess size and scale effects in plastic flow and failure. This includes an experimental programme devoted to characterising the influence of specimen size, strain rate, and strain gradients at various temperatures. One of the materials selected was the forged reactor pressure vessel material 20 MnMoNi 55, material number 1.6310 (heat number 69906). Among others, a size effect study of the creep response of this material was performed, using geometrically similar smooth specimens with 5 mm and 20 mm diameter. The tests were done under constant load in an inert atmosphere at 700 °C, 800 °C, and 900 °C, close to and within the phase transformation regime. The mechanical stresses varied from 10 MPa to 30 MPa, depending on temperature. Prior to creep testing, the temperature and time dependence of scale oxidation as well as the temperature regime of the phase transformation were determined. The creep tests were supplemented by metallographical investigations. The test results are presented in the form of creep curves (strain versus time), from which characteristic creep data were determined as a function of the stress level at given temperatures. The characteristic data are the times to 5% and 15% strain and to rupture, the secondary (minimum) creep rate, the elongation at fracture within the gauge length, the type of fracture and the area reduction after fracture. From metallographical investigations, the austenite phase contents at different temperatures could be estimated. From these data, the parameters of the regression calculation (e.g. Norton's creep law) were also obtained. The evaluation revealed that the creep curves and characteristic data are size dependent to varying degrees, depending on the stress and temperature level, but the size influence cannot be related to corrosion or orientation effects or to macroscopic heterogeneity (position effect

  18. Female residents experiencing medical errors in general internal medicine: a qualitative study.

    Science.gov (United States)

    Mankaka, Cindy Ottiger; Waeber, Gérard; Gachoud, David

    2014-07-10

    Doctors, especially doctors-in-training such as residents, make errors. They have to face the consequences even though today's approach to errors emphasizes systemic factors. Doctors' individual characteristics play a role in how medical errors are experienced and dealt with. The role of gender has previously been examined in a few quantitative studies that have yielded conflicting results. In the present study, we sought to qualitatively explore the experience of female residents with respect to medical errors. In particular, we explored the coping mechanisms displayed after an error. This study took place in the internal medicine department of a Swiss university hospital. Within a phenomenological framework, semi-structured interviews were conducted with eight female residents in general internal medicine. All interviews were audiotaped, fully transcribed, and thereafter analyzed. Seven main themes emerged from the interviews: (1) A perception that there is an insufficient culture of safety and error; (2) The perceived main causes of errors, which included fatigue, work overload, inadequate level of competences in relation to assigned tasks, and dysfunctional communication; (3) Negative feelings in response to errors, which included different forms of psychological distress; (4) Variable attitudes of the hierarchy toward residents involved in an error; (5) Talking about the error, as the core coping mechanism; (6) Defensive and constructive attitudes toward one's own errors; and (7) Gender-specific experiences in relation to errors. Such experiences consisted in (a) perceptions that male residents were more confident and therefore less affected by errors than their female counterparts and (b) perceptions that sexist attitudes among male supervisors can occur and worsen an already painful experience. This study offers an in-depth account of how female residents specifically experience and cope with medical errors. Our interviews with female residents convey the

  19. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Moreover, at spectacular events a combination of component failure and human error is often found. The Rasmussen Report and the German Risk Assessment Study, in particular, show for pressurised water reactors that human error must not be underestimated. Although operator errors as a form of human error can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  20. [Responsibility due to medication errors in France: a study based on SHAM insurance data].

    Science.gov (United States)

    Theissen, A; Orban, J-C; Fuz, F; Guerin, J-P; Flavin, P; Albertini, S; Maricic, S; Saquet, D; Niccolai, P

    2015-03-01

    Safe medication practices in hospitals are a major public health issue. The drug supply chain is a complex process and a potential source of errors and harm for the patient. SHAM is the largest French provider of medical liability insurance and a relevant source of data on health care complications. The main objective of the study was to analyze the type and cause of medication errors declared to SHAM that led to a conviction by a court. We performed a retrospective study of insurance claims provided by SHAM involving a medication error and leading to a conviction over a 6-year period (between 2005 and 2010). Thirty-one cases were analysed, 21 for scheduled activity and 10 for emergency activity. The consequences of the claims were mostly serious (12 deaths, 14 serious complications, 5 simple complications). The types of medication errors were drug monitoring errors (11 cases), administration errors (5 cases), overdoses (6 cases), allergies (4 cases), contraindications (3 cases) and omissions (2 cases). The intravenous route of administration was involved in 19 of 31 cases (61%). The causes identified by the court expert were errors related to service organization (11), to medical practice (11) or to nursing practice (13). Only one claim was due to the hospital pharmacy. Claims related to the drug supply chain are infrequent but potentially serious. These data should help strengthen the quality approach in risk management. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  1. Quantification and handling of sampling errors in instrumental measurements: a case study

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.

    2004-01-01

    in certain situations, the effect of systematic errors is also considerable. The relevant errors contributing to the prediction error are: error in instrumental measurements (x-error), error in reference measurements (y-error), error in the estimated calibration model (regression coefficient error) and model...
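    For a simple linear calibration y = b·x, the listed error sources combine (to first order) as Var(ŷ) ≈ b²·Var(x) + Var(y) + x²·Var(b); the sketch below, with made-up numbers, is only meant to make that decomposition concrete:

        # Sketch: first-order propagation of x-error, y-error and regression
        # coefficient error into the prediction error of a linear calibration.
        import math

        b, x = 2.0, 10.0      # calibration slope and a new instrumental reading
        var_x = 0.05**2       # x-error: instrumental measurement variance
        var_y = 0.10**2       # y-error: reference measurement variance
        var_b = 0.01**2       # regression coefficient error: slope variance

        var_pred = b**2 * var_x + var_y + x**2 * var_b
        print(f"prediction standard error ~ {math.sqrt(var_pred):.3f}")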

  2. Neural Bases of Unconscious Error Detection in a Chinese Anagram Solution Task: Evidence from ERP Study.

    Directory of Open Access Journals (Sweden)

    Hua-Zhan Yin

    Full Text Available In everyday life, error monitoring and processing are important for improving ongoing performance in response to a changing environment. However, detecting an error is not always a conscious process. The temporal activation patterns of brain areas related to cognitive control in the absence of conscious awareness of an error remain unknown. In the present study, event-related potentials (ERPs) were used to explore the neural bases of unconscious error detection when subjects solved a Chinese anagram task. Our ERP data showed that the unconscious error detection (UED) response elicited a more negative ERP component (N2) than did the no error (NE) and detect error (DE) responses in the 300-400-ms time window, and the DE elicited a greater late positive component (LPC) than did the UED and NE in the 900-1200-ms time window after the onset of the anagram stimuli. Taken together with the results of dipole source analysis, the N2 (anterior cingulate cortex) might reflect unconscious/automatic conflict monitoring, and the LPC (superior/medial frontal gyrus) might reflect conscious error recognition.

  3. Decreasing scoring errors on Wechsler Scale Vocabulary, Comprehension, and Similarities subtests: a preliminary study.

    Science.gov (United States)

    Linger, Michele L; Ray, Glen E; Zachar, Peter; Underhill, Andrea T; LoBello, Steven G

    2007-10-01

    Studies of graduate students learning to administer the Wechsler scales have generally shown that training is not associated with the development of scoring proficiency. Many studies report on the reduction of aggregated administration and scoring errors, a strategy that does not highlight the reduction of errors on subtests identified as most prone to error. This study evaluated the development of scoring proficiency specifically on the Wechsler (WISC-IV and WAIS-III) Vocabulary, Comprehension, and Similarities subtests during training by comparing a set of 'early test administrations' to 'later test administrations.' Twelve graduate students enrolled in an intelligence-testing course participated in the study. Scoring errors (e.g., incorrect point assignment) were evaluated on the students' actual practice administration test protocols. Errors on all three subtests declined significantly when scoring errors on 'early' sets of Wechsler scales were compared to those made on 'later' sets. However, correcting these subtest scoring errors did not cause significant changes in subtest scaled scores. Implications for clinical instruction and future research are discussed.

  4. Assessment of the uncertainty associated with systematic errors in digital instruments: an experimental study on offset errors

    International Nuclear Information System (INIS)

    Attivissimo, F; Giaquinto, N; Savino, M; Cataldo, A

    2012-01-01

    This paper deals with the assessment of the uncertainty due to systematic errors, particularly in A/D conversion-based instruments. The problem of defining and assessing systematic errors is briefly discussed, and the conceptual scheme of gauge repeatability and reproducibility is adopted. A practical example regarding the evaluation of the uncertainty caused by the systematic offset error is presented. The experimental results, obtained under various ambient conditions, show that modelling the variability of systematic errors is more problematic than suggested by the ISO 5725 norm. Additionally, the paper demonstrates the substantial difference between the type B uncertainty evaluation, obtained via the maximum entropy principle applied to manufacturer's specifications, and the type A (experimental) uncertainty evaluation, which reflects actually observable reality. Although it is reasonable to assume a uniform distribution of the offset error, experiments demonstrate that the distribution is not centred and that a correction must be applied. In such a context, this work motivates a more pragmatic and experimental approach to uncertainty, with respect to the directions of supplement 1 of GUM. (paper)
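    The contrast drawn above between the type B and type A evaluations can be sketched as follows (fabricated readings and a hypothetical datasheet bound; the point is that the experimental distribution may be off-centre, so a correction is needed on top of the uncertainty):

        # Sketch: type B uncertainty from a spec limit (uniform distribution by
        # maximum entropy) vs. type A from repeated offset observations.
        import math, statistics

        spec_half_width = 0.5                       # assumed +/- offset bound (datasheet)
        u_type_b = spec_half_width / math.sqrt(3)   # std. uncertainty, uniform model

        readings = [0.31, 0.28, 0.35, 0.30, 0.33, 0.29]   # fabricated offsets
        mean = statistics.mean(readings)
        u_type_a = statistics.stdev(readings) / math.sqrt(len(readings))

        print(f"type B: u = {u_type_b:.3f}, centred on 0 by assumption")
        print(f"type A: offset = {mean:.3f} +/- {u_type_a:.3f} -> correction needed")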

  5. Study of on-machine error identification and compensation methods for micro machine tools

    International Nuclear Information System (INIS)

    Wang, Shih-Ming; Yu, Han-Jen; Lee, Chun-Yi; Chiu, Hung-Sheng

    2016-01-01

    Micro machining plays an important role in the manufacturing of miniature products which are made of various materials with complex 3D shapes and tight machining tolerances. To further improve the accuracy of a micro machining process without increasing the manufacturing cost of a micro machine tool, an effective machining error measurement method and a software-based compensation method are essential. To avoid introducing additional errors caused by re-installment of the workpiece, the measurement and compensation should be conducted on-machine. In addition, because the contour of a miniature workpiece machined with a micro machining process is very tiny, the measurement method should be non-contact. By integrating an image reconstruction method, camera pixel correction, coordinate transformation, an error identification algorithm, and a trajectory auto-correction method, a vision-based error measurement and compensation method was developed in this study that can inspect micro machining errors on-machine and automatically generate an error-corrected numerical control (NC) program for error compensation. With the use of the Canny edge detection algorithm and camera pixel calibration, the edges of the contour of a machined workpiece were identified and used to reconstruct the actual contour of the workpiece. The actual contour was then mapped to the theoretical contour to identify the actual cutting points and compute the machining errors. With the use of a moving matching window and calculation of the similarity between the actual and theoretical contours, the errors between the actual cutting points and theoretical cutting points were calculated and used to correct the NC program. With the use of the error-corrected NC program, the accuracy of a micro machining process can be effectively improved. To prove the feasibility and effectiveness of the proposed methods, micro-milling experiments on a micro machine tool were conducted, and the results
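    A rough sketch of the edge-detection step is given below. It uses OpenCV's Canny detector and an assumed pixel calibration; the file name, calibration factor and the straight-line theoretical contour are all placeholders, and the paper's moving matching window is not reproduced:

        # Sketch: detect contour edges, convert pixel coordinates to machine
        # units, and measure deviation from a (here: straight) theoretical contour.
        import cv2
        import numpy as np

        img = cv2.imread("machined_contour.png", cv2.IMREAD_GRAYSCALE)  # placeholder
        edges = cv2.Canny(img, 50, 150)          # edge map of the machined workpiece
        ys, xs = np.nonzero(edges)               # pixel coordinates of edge points

        UM_PER_PIXEL = 2.5                       # assumed camera pixel calibration
        theoretical_y_um = 500.0                 # assumed theoretical contour height

        errors_um = ys * UM_PER_PIXEL - theoretical_y_um
        print(f"mean error {errors_um.mean():+.1f} um, "
              f"max |error| {np.abs(errors_um).max():.1f} um")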

  6. Effect of temperature and geometric parameters on elastic properties of tungsten nanowire: A molecular dynamics study

    Energy Technology Data Exchange (ETDEWEB)

    Saha, Sourav, E-mail: ssaha09@me.buet.ac.bd; Mojumder, Satyajit; Mahboob, Monon [Department of Mechanical Engineering, Bangladesh University of Engineering and Technology, Dhaka-1000 (Bangladesh); Islam, M. Zahabul [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, Pennsylvania 16802 (United States)

    2016-07-12

    Tungsten is a promising material and has potential use as a battery anode. Tungsten nanowires are gaining attention from researchers all over the world for this wide field of application. In this paper, we investigated the effect of temperature and geometric parameters (diameter and aspect ratio) on the elastic properties of tungsten nanowires. The aspect ratios (length to diameter ratio) considered are 8:1, 10:1, and 12:1, while the diameter of the nanowire is varied from 1-4 nm. For the 2 nm diameter sample (aspect ratio 10:1), the temperature is varied (10 K ~ 1500 K) to observe the elastic behavior of the tungsten nanowire under uniaxial tensile loading. An EAM potential is used for the molecular dynamics simulation. We applied a constant strain rate of 10⁹ s⁻¹ to deform the nanowire. Elastic behavior is expressed through stress vs. strain plots. We also investigated the fracture mechanism of the tungsten nanowire and the radial distribution function. The investigation reveals peculiar behavior of tungsten nanowires at the nanoscale, with double peaks in the stress vs. strain diagram. Necking before final fracture suggests that the actual elastic behavior of the material is successfully captured through atomistic modeling.

  7. The estimation of total body fat by inelastic neutron scattering - a geometrical feasibility study

    International Nuclear Information System (INIS)

    Lizos, F.; Kotzasarlidoou, M.; Makridou, A.; Giannopoulou, K.

    2012-01-01

    A rough quantitative representation of the basic elements in a human body is shown, dealing with a hypothetical normal adult weighing 70 kg. It is possible to measure two basic quantities: the FFM, standing for Fat Free Mass, and the FM, standing for Fat Mass. The present simulation deals with the most important aspect, the estimation of storage fat in the human body; to accomplish this task, the human body is represented as a cylindrical phantom containing a uniform distribution of triacylglycerols. The whole process is analyzed and simulated by a geometrical model and with the aid of a computer program which takes into consideration the different attenuation for neutrons and photons; the amount of gamma radiation reaching the detector is also calculated. The net result is the determination of the sensitivity for a particular set-up; by relating the resulting data to the amount of carbon, the quantity of fat is estimated. In addition, the non-uniformity, expressing the consistency of the system, is calculated from the computer programs. In order to determine the storage fat, a simulation model was built that represents the detection of the carbon atoms in triacylglycerols.
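    The attenuation part of the calculation reduces, in its simplest form, to averaging Beer-Lambert transmission over uniformly distributed source depths; the closed form for a path of length D is (1 − e^(−μD))/(μD). The coefficient and path length below are assumptions for illustration, not the study's values:

        # Sketch: average transmission of gammas produced uniformly along a
        # path of length D through attenuating material (Beer-Lambert law).
        import math

        MU = 0.07   # assumed gamma linear attenuation coefficient (1/cm)
        D = 20.0    # assumed path length through the cylindrical phantom (cm)

        avg_transmission = (1 - math.exp(-MU * D)) / (MU * D)
        print(f"average gamma transmission ~ {avg_transmission:.3f}")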

  8. Writing Skill and Categorical Error Analysis: A Study of First Year Undergraduate University Students

    Directory of Open Access Journals (Sweden)

    Adnan Satariyan

    2014-09-01

    Full Text Available Abstract This study identifies and analyses the common errors in the writing skill of first-year students of Azad University, South Tehran Branch, in relation to their first language (L1), the type of high school they graduated from, and their exposure to media and technology for learning English. It also determines the categories in which the errors are committed (content, organisation/discourse, vocabulary, mechanics, or syntax) and whether or not there is a significant difference in the percentage of errors committed across these categories. The participants of this study are 190 first-year students who were asked to write an essay. An error analysis model adapted from Brown (2001) and Gayeta (2002) is then used to evaluate the essays in terms of content, organisation, vocabulary, mechanics, and syntax or language use. The results of the study show that the students have greater difficulties in organisation, content, and vocabulary, and fewer difficulties in mechanics and syntax.

  9. HTTR criticality calculations with SCALE6: Studies of various geometric and unit-cell options in modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wang, J. Y.; Chiang, M. H.; Sheu, R. J.; Liu, Y. W. H. [Inst. of Nuclear Engineering and Science, National Tsing Hua Univ., Hsinchu 30013, Taiwan (China)

    2012-07-01

    The fuel element of the High Temperature Engineering Test Reactor (HTTR) presents a doubly heterogeneous geometry, where tiny TRISO fuel particles dispersed in a graphite matrix form the fuel region of a cylindrical fuel rod, and a number of fuel rods together with moderator or reflector then constitute the lattice design of the core. In this study, a series of full-core HTTR criticality calculations were performed with the SCALE6 code system using various geometric and unit-cell options in order to systematically investigate their effects on neutronic analysis. Two geometric descriptions (ARRAY or HOLE) in SCALE6 can be used to construct a complicated and repeated model. The result shows that eliminating the use of HOLE in the HTTR geometric model can save the computation time by a factor of 4. Four unit-cell treatments for resonance self-shielding corrections in SCALE6 were tested to create problem-specific multigroup cross sections for the HTTR core model. Based on the same ENDF/B-VII cross-section library, their results were evaluated by comparing with continuous-energy calculations. The comparison indicates that the INFHOMMEDIUM result overestimates the system multiplication factor (k_eff) by 55 mk, whereas the LATTICECELL and MULTIREGION treatments predict the k_eff values with similar biases of approximately 10 mk overestimation. The DOUBLEHET result shows a more satisfactory agreement, about 4.2 mk underestimation in the k_eff value. In addition, using cell-weighted cross sections instead of an explicit modeling of TRISO particles in the fuel region can further reduce the computation time by a factor of 5 without sacrificing accuracy. (authors)

  10. Numerical Study of the Effect of Presence of Geometric Singularities on the Mechanical Behavior of Laminated Plates

    Science.gov (United States)

    Khechai, Abdelhak; Tati, Abdelouahab; Guettala, Abdelhamid

    2017-05-01

    In this paper, an effort is made to understand the effects of geometric singularities on the load bearing capacity and stress distribution in thin laminated plates. Composite plates with variously shaped cutouts are frequently used in both modern and classical aerospace, mechanical and civil engineering structures. A finite element investigation is undertaken to show the effect of geometric singularities on stress distribution. In this study, the stress concentration factors (SCFs) in cross- and angle-ply laminates as well as in isotropic plates subjected to uniaxial loading are studied using a quadrilateral finite element of four nodes with thirty-two degrees of freedom per element. Varying parameters such as the cutout shape and hole size (a/b) are considered. The numerical results obtained by the present element compare favorably with those obtained using the finite element software Freefem++ and the analytic findings published in the literature, which demonstrates the accuracy of the present element. Freefem++ is open source software based on the finite element method, which could be helpful for studying and improving the analysis of stress distribution in composite plates with cutouts. The Freefem++ and quadrilateral finite element formulations are given at the beginning of this paper. Finally, to show the effect of the fiber orientation angle and anisotropic modulus ratio on the SCF, a number of figures are given for various ratios (a/b).
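    For orientation, the classical analytic baseline that such finite element SCF results are commonly checked against is the Inglis solution for an elliptical hole in an infinite isotropic plate under uniaxial tension, Kt = 1 + 2a/b (with a perpendicular to the load); this is not the paper's element, just the textbook reference case:

        # Sketch: Inglis stress concentration factor for an elliptical cutout
        # in an infinite isotropic plate; a circular hole (a/b = 1) gives Kt = 3.
        def inglis_scf(a, b):
            return 1.0 + 2.0 * a / b

        for a_over_b in (0.5, 1.0, 2.0):
            print(f"a/b = {a_over_b}: Kt = {inglis_scf(a_over_b, 1.0):.1f}")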

  11. Pragmatic geometric model evaluation

    Science.gov (United States)

    Pamer, Robert

    2015-04-01

    Quantification of subsurface model reliability is mathematically and technically demanding, as there are many different sources of uncertainty and some of the factors can be assessed only in a subjective way. For many practical applications in industry or risk assessment (e.g. geothermal drilling), a quantitative estimation of possible geometric variations in depth units is preferred over relative numbers because of cost calculations for different scenarios. The talk gives an overview of several factors that affect the geometry of structural subsurface models that are based upon typical geological survey organization (GSO) data like geological maps, borehole data and conceptually driven construction of subsurface elements (e.g. fault networks). Within the context of the trans-European project "GeoMol", uncertainty analysis has to be very pragmatic, also because of different data rights, data policies and modelling software between the project partners. In a case study, a two-step evaluation methodology for geometric subsurface model uncertainty is being developed. In a first step, several models of the same volume of interest have been calculated by omitting successively more and more input data types (seismic constraints, fault network, outcrop data). The positions of the various horizon surfaces are then compared. The procedure is equivalent to comparing data of various levels of detail and therefore structural complexity. This gives a measure of the structural significance of each data set in space, and as a consequence areas of geometric complexity are identified. These areas are usually very data sensitive, hence geometric variability between individual data points in these areas is higher than in areas of low structural complexity. Instead of calculating a multitude of different models by varying some input data or parameters, as is done in Monte Carlo simulations, the aim of the second step of the evaluation procedure (which is part of the ongoing work) is to

  12. Racial differences in retinal vessel geometric characteristics: a multiethnic study in healthy Asians.

    Science.gov (United States)

    Li, Xiang; Wong, Wan Ling; Cheung, Carol Yim-Lui; Cheng, Ching-Yu; Ikram, Mohammad Kamran; Li, Jialiang; Chia, Kee Seng; Wong, Tien Yin

    2013-05-01

    To investigate potential racial/ethnic differences in retinal vascular geometric parameters in a multiethnic Asian population (Chinese, Malay, and Indian) free of clinical diseases. A series of retinal vascular parameters were measured from retinal photographs using a computer-assisted program following a standardized protocol. Healthy participants were defined as nonsmokers with an absence of diabetes mellitus, uncontrolled hypertension, obesity, stroke, heart disease, glaucoma, and retinopathy. There were significant differences in measurements of retinal vascular caliber, tortuosity, and fractal dimension among the three ethnic groups. In a multiple linear regression model controlling for age, sex, body mass index, systolic blood pressure, cholesterol, and glucose levels, Indians had the largest arteriolar and venular calibers (arterioles [SE]: 158.94 μm [1.00]; venules: 228.26 μm [1.53]), followed by Malays (arterioles: 138.31 μm [0.74]; venules: 204.26 μm [1.13]), and then Chinese (arterioles: 131.20 μm [0.84]; venules: 195.09 μm [1.28]). Chinese had the largest arteriolar and venular tortuosity (arterioles [×10⁵]: 7.20 [0.08]; venules [×10⁵]: 9.09 [0.10]) and venular fractal dimension (1.244 [0.003]). There were no statistically significant differences in other retinal vascular parameters after correcting for multiple comparisons by the method of modified false discovery rate. We found statistically significant differences in several retinal vascular parameters among healthy Chinese, Malays, and Indians. Racial influences exist in retinal vascular parameters; yet unknown or unmeasured environmental factors, lifestyle habits, and genetic variations not related to race may also contribute to these differences.

  13. Three-dimensional vertical Si nanowire MOS capacitor model structure for the study of electrical versus geometrical Si nanowire characteristics

    Science.gov (United States)

    Hourdakis, E.; Casanova, A.; Larrieu, G.; Nassiopoulou, A. G.

    2018-05-01

    Three-dimensional (3D) Si surface nanostructuring is attractive for increasing the capacitance density of a metal-oxide-semiconductor (MOS) capacitor while keeping a reduced footprint for miniaturization. Si nanowires (SiNWs) can be used in this respect. With the aim of understanding the electrical versus geometrical characteristics of such capacitors, we fabricated and studied a MOS capacitor with highly ordered arrays of vertical Si nanowires of different lengths and a thermal silicon oxide dielectric, in comparison to similar flat MOS capacitors. The high homogeneity and ordering of the SiNWs allowed the determination of the single-SiNW capacitance and intrinsic series resistance, as well as other electrical characteristics (density of interface states, flat-band voltage and leakage current) in relation to the geometrical characteristics of the SiNWs. The SiNW capacitors demonstrated increased capacitance density compared to the flat case, while maintaining a cutoff frequency above 1 MHz, much higher than in other reports in the literature. Finally, our model system has been shown to constitute an excellent platform for the study of SiNW capacitors with either grown or deposited dielectrics, for example high-k dielectrics for further increasing the capacitance density. This will be the subject of future work.
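    The geometric origin of the capacitance gain can be sketched with a back-of-envelope area calculation (all dimensions below are assumptions, not the paper's measured geometry): each nanowire adds its sidewall area to the flat footprint of its unit cell:

        # Sketch: capacitance density enhancement of a vertical-nanowire MOS
        # capacitor from the added sidewall area per unit cell of the array.
        import math

        pitch = 1.0e-6           # assumed nanowire pitch (m)
        d, L = 100e-9, 1.0e-6    # assumed nanowire diameter and length (m)

        flat_area = pitch**2                       # unit-cell footprint
        wire_area = flat_area + math.pi * d * L    # footprint + one wire's sidewall
        print(f"capacitance density enhancement ~ x{wire_area / flat_area:.2f}")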

  14. SIMulation of Medication Error induced by Clinical Trial drug labeling: the SIMME-CT study.

    Science.gov (United States)

    Dollinger, Cecile; Schwiertz, Vérane; Sarfati, Laura; Gourc-Berthod, Chloé; Guédat, Marie-Gabrielle; Alloux, Céline; Vantard, Nicolas; Gauthier, Noémie; He, Sophie; Kiouris, Elena; Caffin, Anne-Gaelle; Bernard, Delphine; Ranchon, Florence; Rioufol, Catherine

    2016-06-01

    To assess the impact of investigational drug labels on the risk of medication error in drug dispensing. A simulation-based learning program focusing on investigational drug dispensing was conducted. The study was undertaken in the Investigational Drugs Dispensing Unit of a University Hospital of Lyon, France. Sixty-three pharmacy workers (pharmacists, residents, technicians or students) were enrolled. Ten risk factors were selected concerning label information or the risk of confusion with another clinical trial. Each risk factor was scored independently out of 5: the higher the score, the greater the risk of error. From 400 labels analyzed, two groups were selected for the dispensing simulation: 27 labels with high risk (score ≥3) and 27 with low risk (score ≤2). Each question in the learning program was displayed as a simulated clinical trial prescription. Medication error was defined as at least one erroneous answer (i.e. an error in drug dispensing). For each question, response times were collected. High-risk investigational drug labels correlated with medication error and slower response times. Error rates were significantly higher, by a factor of 5.5, for the high-risk series. Error frequency was not significantly affected by occupational category or experience in clinical trials. SIMME-CT is the first simulation-based learning tool to focus on investigational drug labels as a risk factor for medication error. SIMME-CT was also used as a training tool for staff involved in clinical research, to develop medication error risk awareness and to validate competence in continuing medical education. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.

  15. Measuring the relationship between interruptions, multitasking and prescribing errors in an emergency department: a study protocol.

    Science.gov (United States)

    Raban, Magdalena Z; Walter, Scott R; Douglas, Heather E; Strumpman, Dana; Mackenzie, John; Westbrook, Johanna I

    2015-10-13

    Interruptions and multitasking are frequent in clinical settings, and have been shown in the cognitive psychology literature to affect performance, increasing the risk of error. However, comparatively less is known about their impact on errors in clinical work. This study will assess the relationship between prescribing errors, interruptions and multitasking in an emergency department (ED) using direct observations and chart review. The study will be conducted in the ED of a 440-bed teaching hospital in Sydney, Australia. Doctors will be shadowed at close proximity by observers for 2 h intervals while they are working on day shift (between 08:00 and 18:00). Time-stamped data on tasks, interruptions and multitasking will be recorded on a handheld computer using the validated Work Observation Method by Activity Timing (WOMBAT) tool. The prompts leading to interruptions and multitasking will also be recorded. When doctors prescribe medication, the type of chart and the chart sections written on, along with the patient's medical record number (MRN), will be recorded. A clinical pharmacist will access patient records and assess the medication orders for prescribing errors. The prescribing error rate will be calculated per prescribing task and is defined as the number of errors divided by the number of medication orders written during the prescribing task. The association between prescribing error rates and rates of prompts, interruptions and multitasking will be assessed using statistical modelling. Ethics approval has been obtained from the hospital research ethics committee. Eligible doctors will be provided with written information sheets, and written consent will be obtained if they agree to participate. Doctor details and MRNs will be kept separate from the data on prescribing errors, and will not appear in the final data set for analysis. Study results will be disseminated in publications and as feedback to the ED. Published by the BMJ Publishing Group Limited.

  16. DNA-based and geometric morphometric analysis to validate species designation: a case study of the subterranean rodent Ctenomys bicolor.

    Science.gov (United States)

    Stolz, J F B; Gonçalves, G L; Leipnitz, L; Freitas, T R O

    2013-10-25

    The genus Ctenomys (Rodentia: Ctenomyidae) shows several taxonomic inconsistencies. In this study, we used an integrative approach including DNA sequences, karyotypes, and geometric morphometrics to evaluate the taxonomic validity of a nominal species, Ctenomys bicolor, which was described based on only one specimen in 1912 by Miranda Ribeiro, and since then neglected. We sampled near the type locality assigned to this species and collected 10 specimens. A total of 820 base pairs of the cytochrome b gene were sequenced and analyzed together with nine other species and four morphotypes obtained from GenBank. Bayesian analyses showed that C. bicolor is monophyletic and related to the Bolivian-Matogrossense group, a clade that originated about 3 mya. We compared the cranial shape through morphometric geometrics of C. bicolor, including the specimen originally sampled in 1912, with other species representative of the same phylogenetic group (C. boliviensis and C. steinbachi). C. bicolor shows unique skull traits that distinguish it from all other currently known taxa. Our findings confirm that the specimen collected by Miranda Ribeiro is a valid species, and improve the knowledge about Ctenomys in the Amazon region.

  17. Systematic Error Study for ALICE charged-jet v2 Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Heinz, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Soltz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-07-18

    We study the treatment of systematic errors in the determination of v2 for charged jets in √s_NN = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data, we evaluate the χ² according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ² and p-values relative to a null (zero) result. We then re-cast the systematic errors into an equivalent covariance matrix and obtain identical results, demonstrating that the two methods are equivalent.
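    The covariance recast can be sketched in a few lines of numpy; the v2 points and error components below are placeholders rather than ALICE data, and the shape term is treated as fully correlated for simplicity:

        # Sketch: fold statistical, correlated and shape systematic errors into
        # one covariance matrix and evaluate chi^2 = r^T C^{-1} r vs. a null result.
        import numpy as np

        v2 = np.array([0.050, 0.060, 0.045])      # measured points (placeholder)
        stat = np.array([0.010, 0.012, 0.011])    # statistical errors
        corr = np.array([0.005, 0.006, 0.005])    # fully correlated systematics
        shape = np.array([0.004, 0.003, 0.004])   # shape systematics (simplified)

        C = np.diag(stat**2) + np.outer(corr, corr) + np.outer(shape, shape)
        r = v2 - 0.0                              # residuals w.r.t. the null (zero)
        chi2 = float(r @ np.linalg.solve(C, r))
        print(f"chi2 = {chi2:.2f} for {len(v2)} points")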

  18. Study on Network Error Analysis and Locating based on Integrated Information Decision System

    Science.gov (United States)

    Yang, F.; Dong, Z. H.

    2017-10-01

    Integrated information decision systems (IIDS) integrate multiple sub-systems developed by many facilities, including almost a hundred kinds of software, and provide various services such as email, short messages, and drawing and sharing. Because the underlying protocols are different and user standards are not unified, many errors occur during the setup, configuration, and operation stages, which seriously affects usage. Because the errors are various and may occur in different operation phases and stages, TCP/IP communication protocol layers, and sub-system software, it is necessary to design a network error analysis and locating tool for IIDS to solve the above problems. This paper studies network error analysis and locating based on IIDS, which provides strong theoretical and technical support for the running and communication of IIDS.

  19. On bivariate geometric distribution

    Directory of Open Access Journals (Sweden)

    K. Jayakumar

    2013-05-01

    Full Text Available Characterizations of bivariate geometric distribution using univariate and bivariate geometric compounding are obtained. Autoregressive models with marginals as bivariate geometric distribution are developed. Various bivariate geometric distributions analogous to important bivariate exponential distributions like, Marshall-Olkin’s bivariate exponential, Downton’s bivariate exponential and Hawkes’ bivariate exponential are presented.

  20. Visualizing the Geometric Series.

    Science.gov (United States)

    Bennett, Albert B., Jr.

    1989-01-01

    Mathematical proofs often leave students unconvinced or without understanding of what has been proved, because they provide no visual-geometric representation. Presented are geometric models for the finite geometric series when r is a whole number, and the infinite geometric series when r is the reciprocal of a whole number. (MNS)
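    For reference, the closed forms behind these visual models are the standard identities below (a plain statement of the finite and infinite geometric series, not material from the article itself):

        % finite geometric series (r != 1) and infinite series (|r| < 1)
        \[
        \sum_{k=0}^{n-1} a r^{k} \;=\; a\,\frac{r^{n}-1}{r-1},
        \qquad
        \sum_{k=0}^{\infty} a r^{k} \;=\; \frac{a}{1-r}.
        \]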

  1. Transmuted Complementary Weibull Geometric Distribution

    Directory of Open Access Journals (Sweden)

    Ahmed Z. Afify

    2014-12-01

    Full Text Available This paper provides a new generalization of the complementary Weibull geometric distribution introduced by Tojeiro et al. (2014), using the quadratic rank transmutation map studied by Shaw and Buckley (2007). The new distribution is referred to as the transmuted complementary Weibull geometric distribution (TCWGD). The TCWG distribution includes as special cases the complementary Weibull geometric distribution (CWGD), the complementary exponential geometric distribution (CEGD), the Weibull distribution (WD) and the exponential distribution (ED). Various structural properties of the new distribution, including moments, quantiles, the moment generating function and the Rényi entropy of the subject distribution, are derived. We propose the method of maximum likelihood for estimating the model parameters and obtain the observed information matrix. A real data set is used to compare the flexibility of the transmuted version versus the complementary Weibull geometric distribution.
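    The quadratic rank transmutation map itself is simple to state and code: given a baseline CDF F, the transmuted CDF is G(x) = (1 + λ)F(x) − λF(x)² for |λ| ≤ 1. The sketch below applies it to a plain Weibull stand-in rather than the complementary Weibull geometric CDF:

        # Sketch: quadratic rank transmutation map (Shaw & Buckley) applied to
        # a baseline Weibull CDF; lambda = 0 recovers the baseline distribution.
        import math

        def weibull_cdf(x, shape, scale):
            return 1.0 - math.exp(-((x / scale) ** shape)) if x > 0 else 0.0

        def transmuted_cdf(x, lam, shape, scale):
            f = weibull_cdf(x, shape, scale)
            return (1.0 + lam) * f - lam * f * f

        for lam in (-0.5, 0.0, 0.5):
            print(f"lambda = {lam:+.1f}: G(1.0) = {transmuted_cdf(1.0, lam, 1.5, 1.0):.4f}")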

  2. Study on structural failure of RPV with geometric discontinuity under severe accident

    Energy Technology Data Exchange (ETDEWEB)

    Mao, J.F., E-mail: jianfeng-mao@163.com [Institute of Process Equipment and Control Engineering, Zhejiang University of Technology, Hangzhou, Zhejiang 310032 (China); Engineering Research Center of Process Equipment and Re-manufacturing, Ministry of Education (China); Zhu, J.W. [Institute of Process Equipment and Control Engineering, Zhejiang University of Technology, Hangzhou, Zhejiang 310032 (China); Department of Mechanical and Electrical engineering, Huzhou Vocational & Technical College Huzhou, Zhejiang 313000 (China); Bao, S.Y., E-mail: bsy@zjut.edu.cn [Institute of Process Equipment and Control Engineering, Zhejiang University of Technology, Hangzhou, Zhejiang 310032 (China); Engineering Research Center of Process Equipment and Re-manufacturing, Ministry of Education (China); Luo, L.J. [Institute of Process Equipment and Control Engineering, Zhejiang University of Technology, Hangzhou, Zhejiang 310032 (China); Gao, Z.L. [Institute of Process Equipment and Control Engineering, Zhejiang University of Technology, Hangzhou, Zhejiang 310032 (China); Engineering Research Center of Process Equipment and Re-manufacturing, Ministry of Education (China)

    2016-10-15

    Highlights: • The RPV failure is investigated in depth under severe accident conditions. • Creep and plastic damage are the major contributors to RPV failure. • An elastic core is found at the midpoint of the highly-eroded region. • The weakest location has some 'accommodating' quality that helps prevent ductile tearing. • The internal pressure is critical for the determination of structural failure. - Abstract: A severe accident management strategy known as 'in-vessel retention (IVR)' is widely adopted in most advanced nuclear power plants. The IVR mitigation is assumed to be able to arrest the degraded melting core and maintain the structural integrity of the reactor pressure vessel (RPV) within a prescribed period of time. This traditional concept of IVR without consideration of the internal pressure effect was not challenged until the Fukushima accident in 2011, which showed that the structural behavior had not been appropriately assessed and that a certain pressure (up to 8.0 MPa) could still exist inside the RPV. Accordingly, this paper addresses whether lower head (LH) integrity can be maintained when the LH is subjected to the thermal-mechanical loads created during such a severe accident. Because of the presence of the high temperature melt (∼1300 °C) on the inside of the RPV, some local material is melted down, creating a unique RPV geometry with discontinuity, while the outside of the RPV, submerged in cavity water, remains in nucleate boiling (at ∼150 °C). Therefore, the failure mechanisms of the RPV can span a wide range of structural behaviors, such as melt-through, creep damage, plastic yielding and thermal expansion. Through meticulous investigation, it is found that RPV failure is mainly caused by creep and plasticity, especially on the inside of the highly-eroded region. An elastic core (or layer) is found to exist in the proximity of the mid-section of the highly-eroded wall. However, the elastic core is squeezed into

  3. Effects of Lexico-syntactic Errors on Teaching Materials: A Study of Textbooks Written by Nigerians

    Directory of Open Access Journals (Sweden)

    Peace Chinwendu Israel

    2014-01-01

    Full Text Available This study examined lexico-syntactic errors in selected textbooks written by Nigerians. Our focus was on educated bilinguals (acrolect) who acquired their primary, secondary and tertiary education in Nigeria, and the selected textbooks were published by vanity publishers/presses. The participants (authors) cut across the three major ethnic groups in Nigeria (Hausa, Igbo and Yoruba), and the selection of textbooks covered the major disciplines of study. We adopted a descriptive research design and specifically employed the survey method to accomplish the purpose of our exploratory research. The lexico-syntactic errors in the selected textbooks were identified and classified into various categories. These errors were not different from those identified over the years in students' essays and exam scripts. This buttresses our argument that students are merely the conveyor belt of errors contained in the teaching material, and that we can analyse students' lexico-syntactic errors in tandem with the errors contained in the material used to teach them.

  4. Predictive error detection in pianists: A combined ERP and motion capture study

    Directory of Open Access Journals (Sweden)

    Clemens eMaidhof

    2013-09-01

    Full Text Available Performing a piece of music involves the interplay of several cognitive and motor processes and requires extensive training to achieve a high skill level. However, even professional musicians commit errors occasionally. Previous event-related potential (ERP studies have investigated the neurophysiological correlates of pitch errors during piano performance, and reported pre-error negativity already occurring approximately 70-100 ms before the error had been committed and audible. It was assumed that this pre-error negativity reflects predictive control processes that compare predicted consequences with actual consequences of one’s own actions. However, in previous investigations, correct and incorrect pitch events were confounded by their different tempi. In addition, no data about the underlying movements were available. In the present study, we exploratively recorded the ERPs and 3D movement data of pianists’ fingers simultaneously while they performed fingering exercises from memory. Results showed a pre-error negativity for incorrect keystrokes when both correct and incorrect keystrokes were performed with comparable tempi. Interestingly, even correct notes immediately preceding erroneous keystrokes elicited a very similar negativity. In addition, we explored the possibility of computing ERPs time-locked to a kinematic landmark in the finger motion trajectories defined by when a finger makes initial contact with the key surface, that is, at the onset of tactile feedback. Results suggest that incorrect notes elicited a small difference after the onset of tactile feedback, whereas correct notes preceding incorrect ones elicited negativity before the onset of tactile feedback. The results tentatively suggest that tactile feedback plays an important role in error-monitoring during piano performance, because the comparison between predicted and actual sensory (tactile feedback may provide the information necessary for the detection of an

  5. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is the Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. It is therefore critical for robustness in the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' theorem at the model class level. In this paper, the effect of different strategies for dealing with the prediction error variances on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model
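    A toy version of strategy 3), updating the prediction-error variance jointly with a model parameter, is sketched below with a basic Metropolis sampler standing in for TMCMC and synthetic data standing in for measurements:

        # Sketch: joint Bayesian updating of a model parameter theta and the
        # prediction-error variance via Metropolis sampling (flat priors assumed).
        import math, random

        random.seed(1)
        data = [(x, 2.0 * x + random.gauss(0, 0.3)) for x in range(1, 11)]

        def log_post(theta, log_s2):
            s2 = math.exp(log_s2)   # prediction-error variance
            return sum(-0.5 * math.log(2 * math.pi * s2)
                       - (y - theta * x) ** 2 / (2 * s2) for x, y in data)

        theta, log_s2, samples = 1.0, 0.0, []
        for _ in range(20000):
            t_new = theta + random.gauss(0, 0.05)
            s_new = log_s2 + random.gauss(0, 0.2)
            if math.log(random.random()) < log_post(t_new, s_new) - log_post(theta, log_s2):
                theta, log_s2 = t_new, s_new
            samples.append((theta, math.exp(log_s2)))

        burn = samples[5000:]       # discard burn-in, then report posterior means
        print("theta  ~", sum(t for t, _ in burn) / len(burn))
        print("sigma2 ~", sum(s for _, s in burn) / len(burn))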

  6. Review of human error analysis methodologies and case study for accident management

    International Nuclear Information System (INIS)

    Jung, Won Dae; Kim, Jae Whan; Lee, Yong Hee; Ha, Jae Joo

    1998-03-01

    In this research, we tried to establish the requirements for the development of a new human error analysis (HEA) method. To achieve this goal, we performed a case study in the following steps: 1. review of the existing HEA methods; 2. selection of those methods considered appropriate for analysing operators' tasks in NPPs; 3. choice of tasks for the application. The methods selected for the case study were HRMS (Human Reliability Management System), PHECA (Potential Human Error Cause Analysis), and CREAM (Cognitive Reliability and Error Analysis Method), and the tasks chosen for the application were the 'bleed and feed operation' and 'decision-making for the reactor cavity flooding' tasks. We assessed the applicability of the selected methods to the NPP tasks, and evaluated the advantages and disadvantages of each method. All three methods turned out to be applicable for the prediction of human error. We concluded that both CREAM and HRMS are sufficiently applicable to NPP tasks; however, comparing the two, CREAM is considered more appropriate than HRMS from the viewpoint of the overall requirements. The requirements for the new HEA method obtained from the study can be summarized as follows: firstly, it should support cognitive error analysis; secondly, it should have an adequate classification system for NPP tasks; thirdly, the description of error causes and error mechanisms should be explicit; fourthly, it should maintain consistency of results by minimizing ambiguity in each step of the analysis procedure; fifthly, it should be feasible with acceptable human resources. (author). 25 refs., 30 tabs., 4 figs

  7. Geometric group theory

    CERN Document Server

    Bestvina, Mladen; Vogtmann, Karen

    2014-01-01

    Geometric group theory refers to the study of discrete groups using tools from topology, geometry, dynamics and analysis. The field is evolving very rapidly and the present volume provides an introduction to and overview of various topics which have played critical roles in this evolution. The book contains lecture notes from courses given at the Park City Math Institute on Geometric Group Theory. The institute consists of a set of intensive short courses offered by leaders in the field, designed to introduce students to exciting, current research in mathematics. These lectures do not duplicate standard courses available elsewhere. The courses begin at an introductory level suitable for graduate students and lead up to currently active topics of research. The articles in this volume include introductions to CAT(0) cube complexes and groups, to modern small cancellation theory, to isometry groups of general CAT(0) spaces, and a discussion of nilpotent genus in the context of mapping class groups and CAT(0) gro...

  8. Geometric leaf placement strategies

    International Nuclear Information System (INIS)

    Fenwick, J D; Temple, S W P; Clements, R W; Lawrence, G P; Mayles, H M O; Mayles, W P M

    2004-01-01

    Geometric leaf placement strategies for multileaf collimators (MLCs) typically involve the expansion of the beam's-eye-view contour of a target by a uniform MLC margin, followed by movement of the leaves until some point on each leaf end touches the expanded contour. Film-based dose-distribution measurements have been made to determine appropriate MLC margins, characterized through an index d90, for multileaves set using one particular strategy to straight lines lying at various angles to the direction of leaf travel. Simple trigonometric relationships exist between different geometric leaf placement strategies and are used to generalize the results of the film work into d90 values for several different strategies. Measured d90 values vary both with angle and leaf placement strategy. A model has been derived that explains and describes quite well the observed variations of d90 with angle. The d90 angular variations of the strategies studied differ substantially, and geometric and dosimetric reasoning suggests that the best strategy is the one with the least angular variation. Using this criterion, the best straightforwardly implementable strategy studied is a 'touch circle' approach, for which semicircles are imagined to be inscribed within leaf ends, the leaves being moved until the semicircles just touch the expanded target outline

  9. The use of adaptive radiation therapy to reduce setup error: a prospective clinical study

    International Nuclear Information System (INIS)

    Yan Di; Wong, John; Vicini, Frank; Robertson, John; Horwitz, Eric; Brabbins, Donald; Cook, Carla; Gustafson, Gary; Stromberg, Jannifer; Martinez, Alvaro

    1996-01-01

    Purpose: Adaptive Radiation Therapy (ART) is a closed-loop feedback process in which each patient's treatment is adaptively optimized according to the individual variation information measured during the course of treatment. The process aims to maximize the benefits of treatment for the individual patient. A prospective study is currently being conducted to test the feasibility and effectiveness of ART for clinical use. The present study is limited to compensating for the effects of systematic setup error. Methods and Materials: The study includes 20 patients treated on a linear accelerator equipped with a computer-controlled multileaf collimator (MLC) and an electronic portal imaging device (EPID). Alpha cradles are used to immobilize those patients treated for disease in the thoracic and abdominal regions, and thermal plastic masks for the head and neck. Portal images are acquired daily. The setup error of each treatment field is quantified off-line every day. As determined from an earlier retrospective study of different clinical sites, the setup variations measured during the first 4 to 9 days are used to estimate the systematic setup error and the standard deviation of the random setup error for each field. A setup adjustment is made if the estimated systematic setup error of the treatment field is larger than or equal to 2 mm. Instead of the conventional approach of repositioning the patient, the setup correction is implemented by reshaping the MLC to compensate for the estimated systematic error. The entire process, from analysis of portal images to implementation of the modified MLC field, is performed via computer network. Systematic and random setup errors of the treatment after adjustment are compared with those prior to adjustment. Finally, the frequency distributions of block overlap accumulated throughout the treatment course are evaluated. Results: Sixty-seven percent of all treatment fields were reshaped to compensate for the estimated systematic errors. At the time of this writing

  10. Geometric Magnetic Frustration in Li3Mg2OsO6 Studied with Muon Spin Relaxation

    Science.gov (United States)

    Carlo, J. P.; Derakhshan, S.; Greedan, J. E.

    Geometric frustration manifests when the spatial arrangement of ions inhibits magnetic order. Typically associated with antiferromagnetically (AF) correlated moments on triangular or tetrahedral lattices, frustration occurs in a variety of structures and systems, resulting in rich phase diagrams and exotic ground states. As a window to exotic physics revealed by the cancellation of normally dominant interactions, the research community has taken great interest in frustrated systems. One family of recent interest is the rock-salt ordered oxides A5BO6, in which the B sites are occupied by magnetic ions comprising a network of interlocked tetrahedra, and nonmagnetic ions on the A sites control the B oxidation state through charge neutrality. Here we will discuss studies of Li3Mg2OsO6 using muon spin relaxation (μSR), a highly sensitive local probe of magnetism. Previous studies of this family included Li5OsO6, which exhibits AF order below 50 K with minimal evidence for frustration, and Li4MgReO6, which exhibits glassy magnetism. Li3Mg2RuO6, meanwhile, exhibits long-range AF order, with the ordering temperature suppressed by frustration. But its isoelectronic twin, Li3Mg2OsO6 (5d3 vs. 4d3), exhibits very different behavior, revealed by μSR to be a glassy ground state below 12 K. Understanding why such similar systems exhibit diverse ground-state behavior is key to understanding the nature of geometric magnetic frustration. Financial support was provided by the Research Corporation for Science Advancement.

  11. On the importance of Task 1 and error performance measures in PRP dual-task studies

    Science.gov (United States)

    Strobach, Tilo; Schütz, Anja; Schubert, Torsten

    2015-01-01

    The psychological refractory period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and second component task (i.e., Task 1 and Task 2) are presented with variable stimulus onset asynchronies (SOAs) and priority to perform Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2-RT with decreasing SOAs. This impairment is typically explained with some task components being processed strictly sequentially in the context of the prominent central bottleneck theory. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e., decreasing SOAs do not increase reaction times (RTs) and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error data presentations and statistical analyses and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data is underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that is not primarily consistent with the strong assumption that processes of Task 1 are unaffected by Task 2 and bottleneck processing in the context of PRP dual-task situations; this calls for a more careful report and analysis of Task 1 performance in PRP studies and for a more careful consideration of theories proposing additions to the bottleneck assumption, which are sufficiently general to explain Task 1 and Task 2 effects. PMID:25904890

  13. On the importance of Task 1 and error performance measures in PRP dual-task studies

    Directory of Open Access Journals (Sweden)

    Tilo eStrobach

    2015-04-01

    Full Text Available The Psychological Refractory Period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and second component task (i.e., Task 1 and Task 2) are presented with variable stimulus onset asynchronies (SOAs) and priority to perform Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2-RT with decreasing SOAs. This impairment is typically explained with some task components being processed strictly sequentially in the context of the prominent central bottleneck theory. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e., decreasing SOAs do not increase RTs and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error data presentations and statistical analyses and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data is underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that is not primarily consistent with the strong assumption that processes of Task 1 are unaffected by Task 2 and bottleneck processing in the context of PRP dual-task situations; this calls for a more careful report and analysis of Task 1 performance in PRP studies and for a more careful consideration of theories proposing additions to the bottleneck assumption, which are sufficiently general to explain Task 1 and Task 2 effects.
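
    For intuition, the strict central bottleneck account discussed above can be simulated in a few lines: Task 2's central stage waits for Task 1's, so RT2 grows as SOA shrinks while RT1 stays flat. The stage durations below are hypothetical illustrative values, not estimates from the reviewed studies:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_prp(soa, n=10_000):
    """Simulate RT1 and RT2 under a strict central bottleneck.

    Each task has perceptual (A), central (B) and motor (C) stages;
    Task 2's central stage cannot start before Task 1's has finished.
    """
    A1, B1, C1 = (rng.gamma(20, 5, n) for _ in range(3))  # mean 100 ms each
    A2, B2, C2 = (rng.gamma(20, 5, n) for _ in range(3))
    rt1 = A1 + B1 + C1
    b2_start = np.maximum(soa + A2, A1 + B1)   # bottleneck constraint
    rt2 = b2_start + B2 + C2 - soa             # measured from stimulus 2 onset
    return rt1.mean(), rt2.mean()

for soa in (50, 150, 300, 600, 1000):
    rt1, rt2 = simulate_prp(soa)
    print(f"SOA {soa:4d} ms: mean RT1 {rt1:5.0f} ms, mean RT2 {rt2:5.0f} ms")
```

    Note that in this idealized model RT1 is flat across SOAs; the review's point is precisely that empirical Task 1 data often violate this prediction.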

  14. Perceptions and Attitudes towards Medication Error Reporting in Primary Care Clinics: A Qualitative Study in Malaysia.

    Science.gov (United States)

    Samsiah, A; Othman, Noordin; Jamshed, Shazia; Hassali, Mohamed Azmi

    2016-01-01

    To explore and understand participants' perceptions of and attitudes towards the reporting of medication errors (MEs), a qualitative study using in-depth interviews of 31 healthcare practitioners from nine publicly funded primary care clinics in three states in peninsular Malaysia was conducted. The participants included family medicine specialists, doctors, pharmacists, pharmacist assistants, nurses and assistant medical officers. The interviews were audiotaped and transcribed verbatim. Analysis of the data was guided by the framework approach. Six themes and 28 codes were identified. Despite the availability of a reporting system, most of the participants agreed that MEs were underreported. The nature of the error plays an important role in determining whether it is reported. The reporting system, organisational factors, provider factors, the reporter's burden and the benefit of reporting were also identified as influencing factors. Healthcare practitioners in primary care clinics understood the importance of reporting MEs to improve patient safety. Their perceptions and attitudes towards reporting of MEs were influenced by many factors which affect the decision-making process of whether or not to report. Although the process is complex, it is primarily determined by the severity of the outcome of the errors. The participants voluntarily report errors if they are familiar with the reporting system and know what error to report, when to report and what form to use.

  15. Random and correlated errors in gold standards used in nutritional epidemiology: implications for validation studies

    Science.gov (United States)

    The measurement error correction de-attenuation factor was estimated from two studies using recovery biomarkers. One study, the Observing Protein and Energy Nutrition (OPEN), was unable to adequately account for within-person variation in protein and energy intake estimated by recovery biomarkers, ...

  16. A Study of Trial and Error Learning in Technology, Engineering, and Design Education

    Science.gov (United States)

    Franzen, Marissa Marie Sloan

    2016-01-01

    The purpose of this research study was to determine if trial and error learning was an effective, practical, and efficient learning method for Technology, Engineering, and Design Education students at the post-secondary level. A mixed methods explanatory research design was used to measure the viability of the learning source. The study sample was…

  17. Optical 'dampening' of the refractive error to axial length ratio: implications for outcome measures in myopia control studies.

    Science.gov (United States)

    Cruickshank, Fiona E; Logan, Nicola S

    2018-05-01

    To gauge the extent to which differences in the refractive error to axial length relationship predicted by geometrical optics are observed in actual refractive/biometric data. This study is a retrospective analysis of existing data. Right eye refractive error [RX] and axial length [AXL] data were collected on 343 6-to-7-year-old children [mean 7.18 years (S.D. 0.35)], 294 12-to-13-year-old children [mean 13.12 years (S.D. 0.32)] and 123 young adults aged 18-to-25-years [mean 20.56 years (S.D. 1.91)]. Distance RX was measured with the Shin-Nippon NVision-K 5001 infrared open-field autorefractor. Child participants were cyclopleged prior to data collection (1% Cyclopentolate Hydrochloride). Myopia was defined as a mean spherical equivalent [MSE] ≤-0.50 D. Axial length was measured using the Zeiss IOLMaster 500. Optical modelling was based on ray tracing and manipulation of parameters of a Gullstrand reduced model eye. There was a myopic shift in mean MSE with age (6-7 years +0.87 D, 12-13 years -0.06 D and 18-25 years -1.41 D), associated with an increase in mean AXL (6-7 years 22.70 mm, 12-13 years 23.49 mm and 18-25 years 23.98 mm). There was a significant negative correlation between MSE and AXL for all age groups (all p < 0.05). Optical theory predicts that there will be a reduction in the RX:AXL ratio with longer eyes. The participant data, although adhering to this theory, show a reduced effect, with eyes with longer axial lengths having a lower refractive error to axial length ratio than predicted by model eye calculations. We propose that in myopia control intervention studies, when comparing efficacy, consideration should be given to the dampening effect seen with a longer eye. © 2018 The Authors. Ophthalmic and Physiological Optics published by John Wiley & Sons Ltd on behalf of College of Optometrists.
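
    The ray-tracing argument can be illustrated with a reduced-eye back-of-envelope calculation; the index and power below are generic textbook values for a reduced eye, not the study's model parameters:

```python
# Refractive error of a Gullstrand-style reduced eye as a function of
# axial length. With ocular power F and index n', the ocular refraction
# is K = n'/L - F, so |dK/dL| = n'/L^2 shrinks as the eye elongates --
# i.e., the RX:AXL ratio falls with longer eyes.
N_PRIME = 4 / 3   # refractive index of the reduced eye
F_EYE = 60.0      # ocular power in dioptres

def refractive_error(axial_length_mm):
    """Ocular refraction K = n'/L - F for a reduced eye (L in metres)."""
    return N_PRIME / (axial_length_mm / 1000.0) - F_EYE

for axl in (22.2, 23.0, 24.0, 25.0):
    print(f"AXL {axl:.1f} mm -> RX {refractive_error(axl):+.2f} D")
```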

  18. A vignette study to examine health care professionals' attitudes towards patient involvement in error prevention.

    Science.gov (United States)

    Schwappach, David L B; Frank, Olga; Davis, Rachel E

    2013-10-01

    Various authorities recommend the participation of patients in promoting patient safety, but little is known about health care professionals' (HCPs') attitudes towards patients' involvement in safety-related behaviours. To investigate how HCPs evaluate patients' behaviours and HCP responses to patient involvement in the behaviour, relative to different aspects of the patient, the involved HCP and the potential error. Cross-sectional fractional factorial survey with seven factors embedded in two error scenarios (missed hand hygiene, medication error). Each survey included two randomized vignettes that described the potential error, a patient's reaction to that error and the HCP response to the patient. Twelve hospitals in Switzerland. A total of 1141 HCPs (response rate 45%). Approval of patients' behaviour, HCP response to the patient, anticipated effects on the patient-HCP relationship, HCPs' support for being asked the question, affective response to the vignettes. Outcomes were measured on 7-point scales. Approval of patients' safety-related interventions was generally high and largely affected by patients' behaviour and correct identification of error. Anticipated effects on the patient-HCP relationship were much less positive, little correlated with approval of patients' behaviour and were mainly determined by the HCP response to intervening patients. HCPs expressed more favourable attitudes towards patients intervening about a medication error than about hand sanitation. This study provides the first insights into predictors of HCPs' attitudes towards patient engagement in safety. Future research is however required to assess the generalizability of the findings into practice before training can be designed to address critical issues. © 2012 John Wiley & Sons Ltd.

  19. Optical traps with geometric aberrations

    International Nuclear Information System (INIS)

    Roichman, Yael; Waldron, Alex; Gardel, Emily; Grier, David G.

    2006-01-01

    We assess the influence of geometric aberrations on the in-plane performance of optical traps by studying the dynamics of trapped colloidal spheres in deliberately distorted holographic optical tweezers. The lateral stiffness of the traps turns out to be insensitive to moderate amounts of coma, astigmatism, and spherical aberration. Moreover, holographic aberration correction enables us to compensate for inherent shortcomings in the optical train, thereby adaptively improving its performance. We also demonstrate the effects of geometric aberrations on the intensity profiles of optical vortices, whose readily measured deformations suggest a method for rapidly estimating and correcting geometric aberrations in holographic trapping systems

  20. A study on the influence of track discontinuities on the degradation of the geometric quality supported by GPR

    Science.gov (United States)

    Paixao, Andre; Fontul, Simona; Salcedas, Tânia; Marques, Margarida

    2017-04-01

    It is known that locations in the track with sudden structural changes induce dynamic amplifications in the train-track interaction, leading to higher impact loads from trains, which in turn promote faster development of track defects and increased degradation of components. Consequently, a reduction in the quality of service can be expected at such discontinuities in the track, inducing higher maintenance costs and decreasing the life-cycle of components. To find actual evidence of how track discontinuities influence the degradation of the geometric quality, a 50-km long railway section is used as a case study. The track geometry data obtained with a recording car are first characterized according to the European standard series EN 13848. Then, the results of successive surveys are analysed using various tools, such as the standard deviation with moving windows of different sizes, and degradation rates are calculated. The GPR data were also analysed at the locations corresponding to track discontinuities, aiming to better identify situations where sudden changes occur in either the structural characteristics or the track behaviour over the years. The results indicate that the geometric quality degrades faster at locations with discontinuities in the track, such as changes in track components, approaches to bridges, tunnels, etc. This behaviour suggests that these sites should be monitored more carefully in the scope of asset management activities in order to maximize the life-cycle of the track and its components. This work is a contribution to COST (European COoperation on Science and Technology) Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar".

  1. Prevalence of refractive errors in the European adult population: the Gutenberg Health Study (GHS).

    Science.gov (United States)

    Wolfram, Christian; Höhn, René; Kottler, Ulrike; Wild, Philipp; Blettner, Maria; Bühren, Jens; Pfeiffer, Norbert; Mirshahi, Alireza

    2014-07-01

    To study the distribution of refractive errors among adults of European descent. Population-based eye study in Germany with 15010 participants aged 35-74 years. The study participants underwent a detailed ophthalmic examination according to a standardised protocol. Refractive error was determined by an automatic refraction device (Humphrey HARK 599) without cycloplegia. Definitions for the analysis were myopia < -0.5 D, hyperopia > +0.5 D, astigmatism > 0.5 D of cylinder and anisometropia > 1.0 D difference in the spherical equivalent between the eyes. The exclusion criterion was previous cataract or refractive surgery. 13959 subjects were eligible. Refractive errors ranged from -21.5 to +13.88 D. Myopia was present in 35.1% of this study sample, hyperopia in 31.8%, astigmatism in 32.3% and anisometropia in 13.5%. The prevalence of myopia decreased, while the prevalence of hyperopia, astigmatism and anisometropia increased with age. 3.5% of the study sample had no refractive correction for their ametropia. Refractive errors affect the majority of the population. The Gutenberg Health Study sample contains more myopes than other study cohorts in adult populations. Our findings do not support the hypothesis of a generally lower prevalence of myopia among adults in Europe as compared with East Asia.

  2. Geometric Hypergraph Learning for Visual Tracking

    OpenAIRE

    Du, Dawei; Qi, Honggang; Wen, Longyin; Tian, Qi; Huang, Qingming; Lyu, Siwei

    2016-01-01

    Graph based representation is widely used in visual tracking field by finding correct correspondences between target parts in consecutive frames. However, most graph based trackers consider pairwise geometric relations between local parts. They do not make full use of the target's intrinsic structure, thereby making the representation easily disturbed by errors in pairwise affinities when large deformation and occlusion occur. In this paper, we propose a geometric hypergraph learning based tr...

  3. Medication Errors in an Internal Intensive Care Unit of a Large Teaching Hospital: A Direct Observation Study

    Directory of Open Access Journals (Sweden)

    Saadat Delfani

    2012-06-01

    Full Text Available Medication errors account for about 78% of serious medical errors in the intensive care unit (ICU). So far no study has been performed in Iran to evaluate all types of possible medication errors in the ICU. Therefore, the objective of this study was to reveal the frequency, type and consequences of all types of errors in the ICU of a large teaching hospital. The prospective observational study was conducted in an 11-bed internal ICU of a university hospital in Shiraz. In each shift, all processes performed on one selected patient were observed and recorded by a trained pharmacist. The observer would intervene only if a medication error would cause substantial harm. The data were evaluated and then entered in a form designed for this purpose. The study continued for 38 shifts. During this period, a total of 442 errors per 5785 opportunities for error (7.6%) occurred. Of those, there were 9.8% administration errors, 6.8% prescribing errors, 3.3% transcription errors and 2.3% dispensing errors. In total, 45 interventions were made, and 40% of the interventions resulted in the correction of errors. The most common causes of errors were observed to be rule violations, slips and memory lapses, and lack of drug knowledge. According to our results, the rate of errors is alarming and requires the implementation of a serious solution. Since our system lacks a well-organized detection and reporting mechanism, there is no means of preventing errors in the first place. Hence, as a first step we must implement a system in which errors are routinely detected and reported.

  4. Abstract probabilistic CNOT gate model based on double encoding: study of the errors and physical realizability

    Science.gov (United States)

    Gueddana, Amor; Attia, Moez; Chatta, Rihab

    2015-03-01

    In this work, we study the error sources behind the imperfect linear optical quantum components composing a non-deterministic quantum CNOT gate model, which performs the CNOT function with a success probability of 4/27 and uses a double encoding technique to represent photonic qubits at the control and the target. We generalize this model to an abstract probabilistic CNOT version and determine the realizability limits depending on a realistic range of the errors. Finally, we discuss the physical constraints allowing the implementation of the Asymmetric Partially Polarizing Beam Splitter (APPBS), which is at the heart of correctly realizing the CNOT function.

  5. Geometric Design Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — Purpose: The mission of the Geometric Design Laboratory (GDL) is to support the Office of Safety Research and Development in research related to the geometric design...

  6. The causes of prescribing errors in English general practices: a qualitative study.

    Science.gov (United States)

    Slight, Sarah P; Howard, Rachel; Ghaleb, Maisoon; Barber, Nick; Franklin, Bryony Dean; Avery, Anthony J

    2013-10-01

    Few detailed studies exist of the underlying causes of prescribing errors in the UK. To examine the causes of prescribing and monitoring errors in general practice and provide recommendations for how they may be overcome. Qualitative interview and focus group study with purposive sampling of English general practices. General practice staff from 15 general practices across three PCTs in England participated in a combination of semi-structured interviews (n = 34) and six focus groups (n = 46). Thematic analysis informed by Reason's Accident Causation Model was used. Seven categories of high-level error-producing conditions were identified: the prescriber, the patient, the team, the working environment, the task, the computer system, and the primary-secondary care interface. These were broken down to reveal various error-producing conditions: the prescriber's therapeutic training, drug knowledge and experience, knowledge of the patient, perception of risk, and their physical and emotional health; the patient's characteristics and the complexity of the individual clinical case; the importance of feeling comfortable within the practice team was highlighted, as well as the safety implications of GPs signing prescriptions generated by nurses when they had not seen the patient for themselves; the working environment with its extensive workload, time pressures, and interruptions; and computer-related issues associated with mis-selecting drugs from electronic pick-lists and overriding alerts were all highlighted as possible causes of prescribing errors and were often interconnected. Complex underlying causes of prescribing and monitoring errors in general practices were highlighted, several of which are amenable to intervention.

  7. [Medication errors in a hospital emergency department: study of the current situation and critical points for improving patient safety].

    Science.gov (United States)

    Pérez-Díez, Cristina; Real-Campaña, José Manuel; Noya-Castro, María Carmen; Andrés-Paricio, Felicidad; Reyes Abad-Sazatornil, María; Bienvenido Povar-Marco, Javier

    2017-01-01

    To determine the frequency of medication errors and incident types in a tertiary-care hospital emergency department. To quantify and classify medication errors and identify critical points where measures should be implemented to improve patient safety. Prospective direct-observation study to detect errors made in June and July 2016. The overall error rate was 23.7%. The most common errors were made while medications were administered (10.9%). We detected 1532 incidents: 53.6% on workdays (P=.001), 43.1% during the afternoon/evening shift (P=.004), and 43.1% in observation areas (P=.004). The medication error rate was significant. Most errors and incidents occurred during the afternoon/evening shift and in the observation area. Most errors were related to administration of medications.

  8. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies.

    NARCIS (Netherlands)

    Kromhout, D.

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the

  9. The northern European geoid: a case study on long-wavelength geoid errors

    DEFF Research Database (Denmark)

    Omang, O.C.D.; Forsberg, René

    2002-01-01

    This method of removing lower-order terms in the Stokes kernel appears to improve the geoid. The best fit to the global positioning system (GPS) leveling points is obtained with a degree of modification of approximately 30. In addition to the study of modification errors, the results of different methods...

  10. Water displacement leg volumetry in clinical studies - A discussion of error sources

    Science.gov (United States)

    2010-01-01

    Background Water displacement leg volumetry is a highly reproducible method, allowing the confirmation of efficacy of vasoactive substances. Nevertheless, errors in its execution and the selection of unsuitable patients are likely to negatively affect the outcome of clinical studies in chronic venous insufficiency (CVI). Discussion Placebo-controlled, double-blind drug studies in CVI were searched (Cochrane Review 2005, MedLine Search until December 2007) and assessed with regard to efficacy (volume reduction of the leg), patient characteristics, and potential methodological error sources. Almost every second study reported only small drug effects (≤ 30 mL volume reduction). As the most relevant error source, the conduct of volumetry was identified. Because the practical use of available equipment varies, volume differences of more than 300 mL - a multiple of a potential treatment effect - have been reported between consecutive measurements. Other potential error sources were insufficient patient guidance or difficulties with the transition from the Widmer CVI classification to the CEAP (Clinical Etiological Anatomical Pathophysiological) grading. Summary Patients should be properly diagnosed with CVI and selected for stable oedema and further clinical symptoms relevant for the specific study. Centres require thorough training on the use of the volumeter and on patient guidance. Volumetry should be performed under constant conditions. The reproducibility of short-term repeat measurements has to be ensured. PMID:20070899

  11. Primary fit of the Lord cementless total hip: a geometric study in cadavers

    NARCIS (Netherlands)

    Schimmel, J.W.; Huiskes, H.W.J.

    1988-01-01

    Two Lord prostheses, bilaterally implanted in cadavers, were sectioned. The contact areas between bone and prosthesis were studied and measured using a specially developed reproducible method. Primary fixation of the femoral components appeared to be based principally on wedging of the prosthetic

  12. Experimental study of geometric t-spanners: a running time comparison

    NARCIS (Netherlands)

    Farshi, M.; Gudmundsson, J.; Demetrescu, C.

    2007-01-01

    The construction of t-spanners of a given point set has received a lot of attention, especially from a theoretical perspective. We experimentally study the performance of the most common construction algorithms for points in the Euclidean plane. In a previous paper [10] we considered the properties

  13. Geometrical analysis of stemless shoulder arthroplasty: a radiological study of seventy TESS total shoulder prostheses.

    Science.gov (United States)

    Kadum, Bakir; Hassany, Hamid; Wadsten, Mats; Sayed-Noor, Arkan; Sjödén, Göran

    2016-04-01

    The aim of this study was to investigate the ability of a stemless shoulder prosthesis to restore shoulder anatomy in relation to premorbid anatomy. This prospective study was performed between May 2007 and December 2013. The inclusion criteria were patients with primary osteoarthritis (OA) who had undergone stemless total anatomic shoulder arthroplasty. Radiographic measurements were made on anteroposterior X-ray views of the glenohumeral joint. Sixty-nine patients (70 shoulders) were included in the study. The mean difference between premorbid centre of rotation (COR) and post-operative COR was 1 ± 2 mm (range -3 to 5.8 mm). The mean difference between premorbid humeral head height (HH) and post-operative HH was -1 ± 3 mm (range -9.7 to 8.5 mm). The mean difference between premorbid neck-shaft angle (NSA) and post-operative NSA was -3 ± 12° (range -26 to 20°). Stemless implants can help reconstruct the shoulder anatomy. This study shows that there are challenges to be addressed when attempting to ensure optimal implant positioning. The critical step is to determine the correct level of bone cut to avoid varus or valgus humeral head inclination and ensure correct head size.

  14. A prospective three-step intervention study to prevent medication errors in drug handling in paediatric care.

    Science.gov (United States)

    Niemann, Dorothee; Bertsche, Astrid; Meyrath, David; Koepf, Ellen D; Traiser, Carolin; Seebald, Katja; Schmitt, Claus P; Hoffmann, Georg F; Haefeli, Walter E; Bertsche, Thilo

    2015-01-01

    To prevent medication errors in drug handling in a paediatric ward. One in five preventable adverse drug events in hospitalised children is caused by medication errors. Errors in drug prescription have been studied frequently, but data regarding drug handling, including drug preparation and administration, are scarce. A three-step intervention study including a monitoring procedure was used to detect and prevent medication errors in drug handling. After approval by the ethics committee, pharmacists monitored drug handling by nurses on an 18-bed paediatric ward in a university hospital prior to and following each intervention step. They also conducted a questionnaire survey aimed at identifying knowledge deficits. Each intervention step targeted different causes of errors. The handout mainly addressed knowledge deficits, the training course addressed errors caused by rule violations and slips, and the reference book addressed knowledge-, memory- and rule-based errors. The number of patients who were subjected to at least one medication error in drug handling decreased from 38/43 (88%) to 25/51 (49%) following the third intervention, and the overall frequency of errors decreased from 527 errors in 581 processes (91%) to 116/441 (26%). The distribution of the handout reduced medication errors caused by knowledge deficits regarding, for instance, the correct 'volume of solvent for IV drugs' from 49% to 25%. Paediatric drug handling is prone to errors. A three-step intervention effectively decreased the high frequency of medication errors by addressing the diversity of their causes. Worldwide, nurses are in charge of drug handling, which constitutes an error-prone but often-neglected step in drug therapy. Detection and prevention of errors in daily routine is necessary for a safe and effective drug therapy. Our three-step intervention reduced errors and is suitable to be tested in other wards and settings. © 2014 John Wiley & Sons Ltd.

  15. Modeling misidentification errors in capture-recapture studies using photographic identification of evolving marks

    Science.gov (United States)

    Yoshizaki, J.; Pollock, K.H.; Brownie, C.; Webster, R.A.

    2009-01-01

    Misidentification of animals is potentially important when naturally existing features (natural tags) are used to identify individual animals in a capture-recapture study. Photographic identification (photoID) typically uses photographic images of animals' naturally existing features as tags (photographic tags) and is subject to two main causes of identification errors: those related to quality of photographs (non-evolving natural tags) and those related to changes in natural marks (evolving natural tags). The conventional methods for analysis of capture-recapture data do not account for identification errors, and to do so requires a detailed understanding of the misidentification mechanism. Focusing on the situation where errors are due to evolving natural tags, we propose a misidentification mechanism and outline a framework for modeling the effect of misidentification in closed population studies. We introduce methods for estimating population size based on this model. Using a simulation study, we show that conventional estimators can seriously overestimate population size when errors due to misidentification are ignored, and that, in comparison, our new estimators have better properties except in cases with low capture probabilities (<0.2) or low misidentification rates (<2.5%). © 2009 by the Ecological Society of America.
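
    To see how ignoring misidentification inflates abundance estimates, consider a deliberately simplified two-sample (Lincoln-Petersen) sketch in which a fraction of recaptures is not re-identified. The mechanism and parameter values are hypothetical and far simpler than the models proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def lincoln_petersen(N=500, p=0.4, mis_id=0.05, n_rep=2000):
    """Two-sample abundance estimate when recaptures in the second sample
    may be misread as 'new' individuals (evolving marks)."""
    est = []
    for _ in range(n_rep):
        n1 = rng.binomial(N, p)                    # marked in sample 1
        caught2 = rng.binomial(N, p)               # caught in sample 2
        recaptured = rng.binomial(caught2, n1 / N) # true recaptures
        m2 = rng.binomial(recaptured, 1 - mis_id)  # correctly re-identified
        if m2 > 0:
            est.append(n1 * caught2 / m2)          # N_hat = n1 * n2 / m2
    return np.mean(est)

print("true N = 500")
print("estimate, no misID :", round(lincoln_petersen(mis_id=0.0)))
print("estimate, 5% misID :", round(lincoln_petersen(mis_id=0.05)))
```

    Because misidentification shrinks the recapture count in the denominator, the conventional estimator is biased upward, matching the overestimation reported above.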

  16. On chromatic and geometrical calibration

    DEFF Research Database (Denmark)

    Folm-Hansen, Jørgen

    1999-01-01

    The main subject of the present thesis is different methods for the geometrical and chromatic calibration of cameras in various environments. For the monochromatic issues of the calibration we present the acquisition of monochrome images, the classic monochrome aberrations and the various sources...... the correct interpolation method is described. For the chromatic issues of calibration we present the acquisition of colour and multi-spectral images, the chromatic aberrations and the various lens/camera based non-uniformities of the illumination of the image plane. It is described how the monochromatic...... to design calibration targets for both geometrical and chromatic calibration are described. We present some possible systematical errors on the detection of the objects in the calibration targets, if viewed in a non orthogonal angle, if the intensities are uneven or if the image blurring is uneven. Finally...

  17. Geometric and morphometric analysis of fish scales to identify genera, species and populations; case study: the Cyprinid family

    Directory of Open Access Journals (Sweden)

    Seyedeh Narjes Tabatabei

    2014-01-01

    Full Text Available Using fish scales to identify species and populations is a rapid, safe and low-cost method. Hence, this study was carried out to investigate the possibility of using geometric morphometric methods on fish scales for rapid identification of species and populations, and to compare the efficiency of applying a small versus a large number of landmark points. For this purpose, scales of one population of Luciobarbus capito, four populations of Alburnoides eichwaldii and two populations of Rutilus frisii kutum, all belonging to the cyprinid family, were examined. On two-dimensional images of the scales, 7 and 23 landmark points were digitized in two separate passes using TpsDig2. Landmark data, after generalized Procrustes analysis, were analyzed using Principal Component Analysis (PCA), Canonical Variate Analysis (CVA) and cluster analysis. The results of both methods (using 7 and 23 landmark points) showed significant differences in the shape of scales among the three species studied (P < 0.05), whereas differences among populations within species were not significant (P > 0.05). The results also showed that a small number of landmarks could display the differences between scale shapes. According to the results of this study, it can be stated that the scales of each species have unique shape patterns which could be utilized as a species identification key.
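
    The landmark pipeline described above (generalized Procrustes alignment followed by ordination) can be sketched as follows; this is a generic illustration on simulated landmark arrays, not the study's TpsDig2 data:

```python
import numpy as np
from sklearn.decomposition import PCA

def procrustes_align(shapes, n_iter=10):
    """Generalized Procrustes analysis for (n_shapes, k, 2) landmark arrays:
    remove translation and scale, then iteratively rotate to the mean shape."""
    X = shapes - shapes.mean(axis=1, keepdims=True)        # centre each shape
    X /= np.linalg.norm(X, axis=(1, 2), keepdims=True)     # unit centroid size
    mean = X[0]
    for _ in range(n_iter):
        for i, s in enumerate(X):
            u, _, vt = np.linalg.svd(s.T @ mean)           # optimal rotation
            X[i] = s @ (u @ vt)
        mean = X.mean(axis=0)
        mean /= np.linalg.norm(mean)
    return X

# hypothetical data: 30 scales, 7 landmarks each
shapes = np.random.default_rng(2).normal(size=(30, 7, 2))
aligned = procrustes_align(shapes)
scores = PCA(n_components=2).fit_transform(aligned.reshape(30, -1))
```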

  18. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies

    DEFF Research Database (Denmark)

    Tybjærg-Hansen, Anne

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the risk factors are observed on a subsample. We extend the multivariate RC techniques to a meta-analysis framework where multiple studies provide independent repeat measurements and information on disease outcome. We consider the cases where some or all studies have repeat measurements, and compare study-specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies...
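
    In its simplest univariate form, the regression calibration idea above divides the naive slope by an attenuation factor estimated from the repeat measurements. A minimal sketch with hypothetical names, assuming a repeat measurement is available for every subject (the paper handles the far more general multivariate, multi-study case):

```python
import numpy as np

def regression_calibration(x_obs, x_repeat, y):
    """Univariate regression calibration.

    x_obs    -- error-prone baseline measurements
    x_repeat -- independent repeat measurements on the same subjects
    y        -- outcome
    """
    naive = np.polyfit(x_obs, y, 1)[0]
    # within-person (error) variance from half the variance of the
    # difference between the two error-prone measurements
    err_var = 0.5 * np.var(x_obs - x_repeat, ddof=1)
    # attenuation factor lambda = var(true) / var(observed)
    lam = 1 - err_var / np.var(x_obs, ddof=1)
    return naive / lam  # de-attenuated slope
```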

  19. Geometric interpretation of optimal iteration strategies

    International Nuclear Information System (INIS)

    Jones, R.B.

    1977-01-01

    The relationship between inner and outer iteration errors is extremely complex, and even a formal description of the total error behavior is difficult. Inner and outer iteration error propagation is analyzed in a variational formalism for a reactor model described by multidimensional, one-group theory. In a generalization of the work of Akimov and Sabek, the number of inner iterations performed during each outer iteration that minimizes the total computation time is determined. The generalized analysis admits a geometric interpretation of total error behavior. The results can be applied to both transport and diffusion theory computer methods. 1 figure

  20. Apology in cases of medical error disclosure: Thoughts based on a preliminary study.

    Science.gov (United States)

    Dahan, Sonia; Ducard, Dominique; Caeymaex, Laurence

    2017-01-01

    Disclosing medical errors is considered necessary by patients, ethicists, and health care professionals. The literature insists on the framing of this disclosure and describes the apology as appropriate and necessary. However, this policy seems difficult to put into practice. Few works have explored the function and meaning of the apology. The aim of this study was to explore the role ascribed to apology in communication between healthcare professionals and patients when disclosing a medical error, and to discuss the findings from a linguistic and philosophical perspective. Qualitative exploratory study, based on face-to-face semi-structured interviews with seven physicians in a neonatal unit in France. Discourse analysis. Four themes emerged: the difference between apology in everyday life and in the medical encounter; the place of the apology in the process of disclosure, together with explanations, regrets, empathy and ways to avoid repeating the error; the effects of the apology, which were to allow the patient-physician relationship undermined by the error to be maintained, responsibility to be accepted, the first steps towards forgiveness to be taken, and a less hierarchical doctor-patient relationship to be created; and ways of expressing apology ("I am sorry"), which reflected regrets and empathy more than an explicit apology. This study highlights how the act of apology can be seen as a "speech act" as described by the philosophers Austin and Searle, and how it functions as a technique for making amends following a wrongdoing and as an action undertaken in order that neither party should lose face, thus echoing the sociologist Goffman's interaction theory. This interpretation also accords with the views of Lazare, for whom the function of apology is a restoration of dignity after the humiliation of the error. This approach to the apology illustrates how the meaning and impact of real-life language acts can be clarified by philosophical and sociological ideas.

  1. Comparison of Geometrical Layouts for a Multi-Box Aerosol Model from a Single-Chamber Dispersion Study

    Directory of Open Access Journals (Sweden)

    Alexander C. Ø. Jensen

    2018-04-01

    Full Text Available Models are increasingly used to estimate and pre-emptively calculate the occupational exposure to airborne released particulate matter. Typical two-box models assume instantaneously and fully mixed air volumes, which can potentially cause issues in cases with fast processes, slow air mixing, and/or large volumes. In this study, we present an aerosol dispersion model and validate it by comparing the modelled concentrations with concentrations measured during chamber experiments. We investigated whether a better estimation of concentrations was possible by using different geometrical layouts rather than a typical two-box layout. A one-box layout, a two-box layout, and two three-box layouts were used. The one-box model was found to underestimate the concentrations close to the source, while overestimating the concentrations in the far field. The two-box model layout performed well based on comparisons from the chamber study in systems with a steady source concentration for both slow and fast mixing. The three-box layout was found to better estimate the concentrations and the timing of the peaks for fluctuating concentrations than the one-box or two-box layouts under relatively slow mixing conditions. This finding suggests that industry-relevant scaled volumes should be tested in practice to gain more knowledge about when to use the two-box or the three-box layout schemes for multi-box models.
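
    As a concrete reference for the box layouts discussed above, a minimal two-box (near-field/far-field) mass balance can be integrated directly. All parameter values below are hypothetical, not those of the chamber experiments:

```python
# Two-box mass balance: a near-field box around the source exchanges air
# with a far-field box, which exchanges air with the outside at rate Q.
S    = 1.0e6   # source emission rate, particles/min
V_NF = 1.0     # near-field volume, m^3
V_FF = 20.0    # far-field volume, m^3
beta = 3.0     # inter-box air exchange flow, m^3/min
Q    = 2.0     # room supply/exhaust flow, m^3/min

dt, T = 0.01, 60.0
c_nf = c_ff = 0.0
for _ in range(int(T / dt)):  # explicit Euler integration
    d_nf = (S + beta * (c_ff - c_nf)) / V_NF
    d_ff = (beta * (c_nf - c_ff) - Q * c_ff) / V_FF
    c_nf += d_nf * dt
    c_ff += d_ff * dt

print(f"near-field: {c_nf:.3e}  far-field: {c_ff:.3e}  particles/m^3")
```

    At steady state c_FF = S/Q and c_NF = c_FF + S/beta, which makes explicit why a one-box model must underestimate near the source and overestimate far from it.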

  2. A COMPARATIVE STUDY USING GEOMETRIC AND VERTICAL PROFILE FEATURES DERIVED FROM AIRBORNE LIDAR FOR CLASSIFYING TREE GENERA

    Directory of Open Access Journals (Sweden)

    C. Ko

    2012-07-01

    Full Text Available We present a comparative study of two different approaches for tree genera classification using descriptors derived from tree geometry and those derived from vertical profile analysis of LiDAR point data. The two methods provide different perspectives for processing LiDAR point clouds for tree genera identification. The geometric perspective analyzes individual tree crowns using characteristics of clusters and line segments derived within crowns, together with overall tree shapes, to highlight the spatial distribution of LiDAR points within the crown. Conversely, analyzing vertical profiles retrieves information about the point distributions with respect to height percentiles; this perspective emphasizes the importance of point distributions at specific heights, accommodating the decrease in point density with depth of canopy penetration by LiDAR pulses. The targeted species include white birch, maple, oak, poplar, white pine and jack pine at a study site northeast of Sault Ste. Marie, Ontario, Canada.
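
    A minimal sketch of the vertical-profile perspective: summarising each segmented crown by height percentiles of its returns. The exact descriptor set used in the paper differs; the names and the extra shape ratio here are illustrative:

```python
import numpy as np

def vertical_profile_features(z, percentiles=(10, 25, 50, 75, 90)):
    """Summarise the height distribution of LiDAR returns in one crown.

    z -- 1-D array of normalised return heights for a segmented tree.
    """
    z = np.asarray(z, dtype=float)
    feats = {f"p{p}": np.percentile(z, p) for p in percentiles}
    # simple crown-shape descriptor: fraction of height above the median
    feats["crown_ratio"] = (z.max() - np.percentile(z, 50)) / max(z.max(), 1e-9)
    return feats
```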

  3. Numerical study of geometric parameters affecting temperature and thermal efficiency in a premix multi-hole flat flame burner

    International Nuclear Information System (INIS)

    Saberi Moghaddam, Mohammad Hossein; Saei Moghaddam, Mojtaba; Khorramdel, Mohammad

    2017-01-01

    This paper investigates the geometric parameters related to the thermal efficiency and pollutant emission of a multi-hole flat flame burner. Recent experimental studies indicate that such burners are significantly influenced by both the use of a distribution mesh and the diameters of the main and retention holes. The present study numerically simulated methane-air premixed combustion using a two-step mechanism and constant mass diffusivity for all species. The results indicate that the addition of a distribution mesh leads to uniform flow and maximum temperature, which reduces NOx emissions. An increase in the diameter of the main holes increased the mass flow, which increased the temperature, thermal efficiency and NOx emissions. The retention holes should be sized to decrease the total flow velocity and bring the flame closer to the burner surface, although changing their diameter did not considerably improve temperature or thermal efficiency. Ultimately, under temperature and pollutant emission constraints, the optimum diameters of the main and retention holes were determined to be 5 and 1.25 mm, respectively. - Highlights: • Using a distribution mesh led to uniform flow and reduced the NOx pollutant by 53%. • 93% of the total heat transfer occurred by radiation in the multi-hole burner. • Employing retention holes brought the flame closer to the burner surface.

  4. Negative control exposure studies in the presence of measurement error: implications for attempted effect estimate calibration.

    Science.gov (United States)

    Sanderson, Eleanor; Macdonald-Wallis, Corrie; Davey Smith, George

    2018-04-01

    Negative control exposure studies are increasingly being used in epidemiological studies to strengthen causal inference regarding an exposure-outcome association when unobserved confounding is thought to be present. Negative control exposure studies contrast the magnitude of association of the negative control, which has no causal effect on the outcome but is associated with the unmeasured confounders in the same way as the exposure, with the magnitude of the association of the exposure with the outcome. A markedly larger effect of the exposure on the outcome than the negative control on the outcome strengthens inference that the exposure has a causal effect on the outcome. We investigate the effect of measurement error in the exposure and negative control variables on the results obtained from a negative control exposure study. We do this in models with continuous and binary exposure and negative control variables using analysis of the bias of the estimated coefficients and Monte Carlo simulations. Our results show that measurement error in either the exposure or negative control variables can bias the estimated results from the negative control exposure study. Measurement error is common in the variables used in epidemiological studies; these results show that negative control exposure studies cannot be used to precisely determine the size of the effect of the exposure variable, or adequately adjust for unobserved confounding; however, they can be used as part of a body of evidence to aid inference as to whether a causal effect of the exposure on the outcome is present.
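
    The sensitivity to measurement error described above is easy to reproduce in a small Monte Carlo experiment. Everything below (the data-generating model, effect sizes, error magnitudes) is an illustrative assumption, not the paper's simulation design:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

u  = rng.normal(size=n)                 # unmeasured confounder
x  = u + rng.normal(size=n)             # exposure: true causal effect 0.5
nc = u + rng.normal(size=n)             # negative control: no causal effect
y  = 0.5 * x + u + rng.normal(size=n)   # outcome

def slope(a, b):
    """Simple regression slope of b on a."""
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

for sd in (0.0, 1.0, 2.0):              # measurement error on the NC only
    nc_obs = nc + rng.normal(scale=sd, size=n)
    print(f"NC error sd={sd}: exposure slope {slope(x, y):.2f}, "
          f"NC slope {slope(nc_obs, y):.2f}")
```

    As the error grows, the negative-control association is attenuated towards zero, so the exposure-versus-control contrast can suggest a causal effect even when both associations arise purely from confounding.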

  5. Medication knowledge, certainty, and risk of errors in health care: a cross-sectional study

    Directory of Open Access Journals (Sweden)

    Johansson Inger

    2011-07-01

    Full Text Available Abstract Background Medication errors are often involved in reported adverse events. Drug therapy, prescribed by physicians, is mostly carried out by nurses, who are expected to master all aspects of medication. Research has revealed the need for improved knowledge in drug dose calculation, and medication knowledge as a whole is poorly investigated. The purpose of this survey was to study registered nurses' medication knowledge, certainty and estimated risk of errors, and to explore factors associated with good results. Methods Nurses from hospitals and primary health care establishments were invited to carry out a multiple-choice test in pharmacology, drug management and drug dose calculations (score range 0-14). Self-estimated certainty in each answer was recorded, graded from 0 = very uncertain to 3 = very certain. Background characteristics and sense of coping were recorded. Risk of error was estimated by combining knowledge and certainty scores. The results are presented as mean (±SD). Results Two hundred and three registered nurses participated (including 16 men), aged 42.0 (9.3) years, with a working experience of 12.4 (9.2) years. Knowledge scores in pharmacology, drug management and drug dose calculations were 10.3 (1.6), 7.5 (1.6) and 11.2 (2.0), respectively, and certainty scores were 1.8 (0.4), 1.9 (0.5) and 2.0 (0.6), respectively. Fifteen percent of the total answers showed a high risk of error, with 25% in drug management. Independent factors associated with high medication knowledge included working in hospitals (p < 0.05). Conclusions Medication knowledge was found to be unsatisfactory among practicing nurses, with a significant risk of medication errors. The study revealed a need to improve the nurses' basic knowledge, especially with regard to drug management.

  6. Organizational Climate, Stress, and Error in Primary Care: The MEMO Study

    Science.gov (United States)

    2005-05-01

    … quality, and errors. This model was derived from our earlier work, the Physician Worklife Study [14,15], as well as the pioneering work of Lazarus and … Worklife Study instrument [14,15], and included our five-item global job satisfaction measure and a newly implemented four-item job stress measure [21] … measures of practice emphasis with respect to issues such as work–home balance, professionalism, and diversity in office staff, as well as single …

  7. Anterior, posterior, left anterior oblique, and geometric mean views in gastric emptying studies using a glucose solution

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, W.T. [Dept. of Radiology, Univ. of Texas Health Science Center, San Antonio, TX (United States); McMahan, C.A. [Dept. of Pathology, Univ. of Texas Health Science Center, San Antonio, TX (United States); Lasher, J.C. [Dept. of Radiology, Univ. of Texas Health Science Center, San Antonio, TX (United States); Blumhardt, M.R. [Dept. of Pathology, Univ. of Texas Health Science Center, San Antonio, TX (United States); Schwartz, J.G. [Dept. of Pathology, Univ. of Texas Health Science Center, San Antonio, TX (United States)

    1995-02-01

    Previous research has shown that the single anterior view of the stomach overestimates the gastric half-emptying time of a solid meal compared to the geometric mean of the anterior and posterior views. Little research has been performed comparing the various views of gastric emptying of a glucose solution. After an overnight fast, 49 nondiabetic subjects were given a 450 ml solution containing 50 g of glucose and 200 µCi of technetium-99m sulfur colloid. Sequential 1-min anterior, posterior, and left anterior oblique views were obtained every 15 min. The mean percent solution remaining in the stomach for all three views differed from the geometric mean by 1.9% or less at all time points. Average gastric half-emptying times were: geometric mean, 62.7±3.3 min; anterior, 61.9±3.2 min; posterior, 63.5±3.5 min; and left anterior oblique, 61.6±3.3 min. These half-emptying times were not statistically different. For individual patients, differences between all three views and the geometric mean were not clinically important. Approximately 95% of all patients are expected to have gastric half-emptying times measured by any of the three single views within 17 min of the gastric half-emptying time obtained using the geometric mean. The imaging of gastric emptying using glucose solutions can be performed using a convenient single view which allows continuous dynamic imaging. (orig.)
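
    For readers unfamiliar with the geometric-mean correction, the sketch below shows the usual computation: paired anterior/posterior counts are combined as sqrt(anterior x posterior) to cancel depth-dependent attenuation, and the half-emptying time is read off the retention curve. The count values are invented stand-ins, not taken from the study:

        import numpy as np

        t = np.arange(0, 121, 15)                                 # minutes post-ingestion
        ant = np.array([100, 88, 74, 61, 50, 41, 33, 27, 22.0])   # anterior counts
        post = np.array([100, 84, 70, 58, 47, 38, 31, 25, 20.0])  # posterior counts

        gm = np.sqrt(ant * post)                # geometric-mean "view"
        gm_pct = 100 * gm / gm[0]               # percent remaining in the stomach

        # half-emptying time: first crossing of 50%, linearly interpolated
        i = np.argmax(gm_pct < 50)
        t_half = np.interp(50, gm_pct[[i, i - 1]], t[[i, i - 1]])
        print(f"geometric-mean half-emptying time ~ {t_half:.1f} min")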

  8. Geometric phase effects in low-energy dynamics near conical intersections: A study of the multidimensional linear vibronic coupling model

    International Nuclear Information System (INIS)

    Joubert-Doriol, Loïc; Ryabinkin, Ilya G.; Izmaylov, Artur F.

    2013-01-01

    In molecular systems containing conical intersections (CIs), a nontrivial geometric phase (GP) appears in the nuclear and electronic wave functions in the adiabatic representation. We study GP effects in nuclear dynamics of an N-dimensional linear vibronic coupling (LVC) model. The main impact of GP on low-energy nuclear dynamics is reduction of population transfer between the local minima of the LVC lower energy surface. For the LVC model, we proposed an isometric coordinate transformation that confines non-adiabatic effects within a two-dimensional subsystem interacting with an N − 2 dimensional environment. Since environmental modes do not couple electronic states, all GP effects originate from nuclear dynamics within the subsystem. We explored when the GP affects nuclear dynamics of the isolated subsystem, and how the subsystem-environment interaction can interfere with GP effects. Comparing quantum dynamics with and without GP allowed us to devise simple rules to determine the significance of the GP for nuclear dynamics in this model.

  9. DFT studies for three Cu(II) coordination polymers: Geometrical and electronic structures, g factors and UV-visible spectra

    Science.gov (United States)

    Ding, Chang-Chun; Wu, Shao-Yi; Xu, Yong-Qiang; Wu, Li-Na; Zhang, Li-Juan

    2018-05-01

    This work presents a systematic density functional theory (DFT) study of the geometrical and electronic structures, g factors and UV-vis spectra of three Cu(II) coordination polymers (CPs) [Cu(XL)(NO3)2]n (1), {[Cu(XL)(4,4‧-bpy)(NO3)2]•CH3CN}n (2) and {[Cu(XL)3](NO3)2·3.5H2O}n (3) based on the ligand N,N‧-bicyclo[2.2.2]oct-7-ene-2,3,5,6-tetracarboxdiimide bi(1,2,4-triazole) (XL), whose triazole linker coordinates with copper to construct the CPs. For the three CPs with distinct ligands, the molecular structures optimized with the PBE0 hybrid functional and the 6-311g basis set agree well with the corresponding XRD data. The electronic properties are also analyzed for all the systems. The calculated g factors are found to be sensitive to the Hartree-Fock (HF) character due to the significant hybridization between copper and ligand orbitals. The calculated UV-visible spectra reveal that the main electronic transitions for CP 1 contain d-d and CT transitions, while those for CPs 2 and 3 largely belong to CT ones. The present CPs appear unlikely to adsorb small molecules; for example, CP 1 with H2O and NO2 exhibits unfavorable adsorption and deformation structures near the Cu2+ site.

  10. A Measurement Study of BLE iBeacon and Geometric Adjustment Scheme for Indoor Location-Based Mobile Applications

    Directory of Open Access Journals (Sweden)

    Jeongyeup Paek

    2016-01-01

    Bluetooth Low Energy (BLE) and iBeacons have recently gained large interest for enabling various proximity-based application services. Given the ubiquitously deployed nature of Bluetooth devices, including mobile smartphones, BLE and iBeacon technologies seemed a promising way forward. This work started off with the belief that this was true: iBeacons could provide the accuracy in proximity and distance estimation needed to enable and simplify the development of many previously difficult applications. However, our empirical studies with three different iBeacon devices from various vendors and two types of smartphone platforms prove that this is not the case. Signal strength readings vary significantly over different iBeacon vendors, mobile platforms, environmental or deployment factors, and usage scenarios. This variability naturally complicates the process of extracting an accurate location/proximity estimate in real environments. Our lessons on the limitations of the iBeacon technique led us to design a class attendance checking application that performs a simple form of geometric adjustment to compensate for the natural variations in beacon signal strength readings. We believe that the negative observations made in this work can provide future researchers with a reference on what performance to expect from iBeacon devices as they enter their system design phases.
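
    Distance estimates of the kind the study evaluates are commonly derived from the log-distance path-loss model. The sketch below illustrates that model together with a median filter over readings; the txPower (RSSI at 1 m) and the path-loss exponent are assumed values that in practice vary by vendor, platform, and environment, which is exactly the variability the study reports:

        import math

        def estimate_distance_m(rssi_dbm, tx_power_dbm=-59.0, n=2.0):
            """Distance d such that rssi = txPower - 10*n*log10(d)."""
            return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

        # Single readings are noisy; a median over several readings is a
        # common, simple mitigation before converting to distance.
        readings = [-63, -71, -66, -80, -68]
        rssi = sorted(readings)[len(readings) // 2]     # median reading
        print(f"median RSSI {rssi} dBm -> ~{estimate_distance_m(rssi):.1f} m")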

  11. Local electronic and geometrical structures of hydrogen-bonded complexes studied by soft X-ray spectroscopy

    International Nuclear Information System (INIS)

    Luo, Y.

    2004-01-01

    The hydrogen bond is one of the most important forms of intermolecular interaction. It occurs in all important components of life. However, the electronic structures of hydrogen-bonded complexes in liquid phases have long been difficult to determine due to the lack of proper experimental techniques. In this talk, a recent joint theoretical and experimental effort to understand hydrogen bonding in liquid water and alcohol/water mixtures using synchrotron-radiation-based soft X-ray spectroscopy will be presented. The complexity of the liquid systems has made it impossible to interpret the spectra with physical intuition alone. Theoretical simulations have thus played an essential role in understanding the spectra and providing valuable insights into the local geometrical and electronic structures of these liquids. Our study sheds light on a 40-year controversy over what kinds of molecular structures are formed in pure liquid methanol. It also suggests an explanation for the well-known puzzle of why alcohol and water do not mix completely: the system must balance nature's tendency toward greater disorder (entropy) with the molecules' tendency to form hydrogen bonds. The observation of electron sharing and broken-hydrogen-bond local structures in liquid water will be presented. The possible use of X-ray spectroscopy to determine the local arrangements of hydrogen-bonded nanostructures will also be discussed.

  12. Visual acuity measures do not reliably detect childhood refractive error--an epidemiological study.

    Directory of Open Access Journals (Sweden)

    Lisa O'Donoghue

    PURPOSE: To investigate the utility of uncorrected visual acuity measures in screening for refractive error in white school children aged 6-7 years and 12-13 years. METHODS: The Northern Ireland Childhood Errors of Refraction (NICER) study used a stratified random cluster design to recruit children from schools in Northern Ireland. Detailed eye examinations included assessment of logMAR visual acuity and cycloplegic autorefraction. Spherical equivalent refractive data from the right eye were used to classify significant refractive error as myopia of at least 1DS, hyperopia greater than +3.50DS, or astigmatism greater than 1.50DC, whether occurring in isolation or in association with myopia or hyperopia. RESULTS: Results are presented for 661 white 12-13-year-old and 392 white 6-7-year-old school children. Using a cut-off of uncorrected visual acuity poorer than 0.20 logMAR to detect significant refractive error gave a sensitivity of 50% and a specificity of 92% in 6-7-year-olds, and 73% and 93% respectively in 12-13-year-olds. In 12-13-year-old children, a cut-off of poorer than 0.20 logMAR had a sensitivity of 92% and a specificity of 91% in detecting myopia, and a sensitivity of 41% and a specificity of 84% in detecting hyperopia. CONCLUSIONS: Vision screening using logMAR acuity can reliably detect myopia, but not hyperopia or astigmatism, in school-age children. Providers of vision screening programs should be cognisant that where detection of uncorrected hyperopic and/or astigmatic refractive error is an aspiration, current UK protocols will not effectively deliver it.
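
    The screening metrics quoted above follow directly from a 2x2 comparison of the acuity cut-off against the cycloplegic-refraction reference standard. A small sketch, with invented stand-in data rather than the NICER measurements:

        import numpy as np

        # uncorrected logMAR acuity and refraction "truth" (stand-in data)
        acuity = np.array([0.30, 0.10, 0.25, 0.05, 0.40, 0.15])
        has_refractive_error = np.array([True, False, True, True, False, False])

        screen_positive = acuity > 0.20     # "poorer than 0.20 logMAR" cut-off
        sens = (screen_positive & has_refractive_error).sum() / has_refractive_error.sum()
        spec = (~screen_positive & ~has_refractive_error).sum() / (~has_refractive_error).sum()
        print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")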

  13. A simulation study on the variation of virtual NMR signals by winding, bobbin, spacer error of HTS magnet

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jun Seong; Lee, Woo Seung; Kim, Jin Sub; Song, Seung Hyun; Nam, Seok Ho; Jeon, Hae Ryong; Beak, Geon Woo; Ko, Tae Kuk [Yonsei University, Seoul (Korea, Republic of)

    2016-09-15

    Recently, the production techniques and properties of high-temperature superconductor (HTS) tape have improved, and studies on applying HTS magnets to high magnetic field applications have increased rapidly. A Nuclear Magnetic Resonance (NMR) spectrometer requires a central magnetic field of high magnitude and homogeneity. However, an HTS magnet has fabrication errors because the conductor is a tape that is wound onto a bobbin. The fabrication errors include winding error, bobbin diameter error and spacer thickness error. Winding error occurs when the HTS tape departs from its intended position on the bobbin; bobbin diameter and spacer thickness errors occur when the diameters of the bobbin and spacer are inaccurate. These errors cause the magnitude and homogeneity of the central magnetic field to differ from the ideal design. The purpose of this paper is to investigate the effect of winding error, bobbin diameter error and spacer thickness error on the central field and field homogeneity of an HTS magnet, using virtual NMR signals in a MATLAB simulation.
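
    The field computation behind such a simulation can be sketched with the on-axis formula for a circular current loop, Bz = mu0*I*R^2 / (2*(R^2 + z^2)^(3/2)), summed over all turns. The dimensions, current, and error magnitudes below are assumptions for illustration, not the authors' magnet design:

        import numpy as np

        MU0 = 4e-7 * np.pi                     # vacuum permeability, T*m/A

        def central_field(radii, z_positions, current):
            """Sum of on-axis single-loop fields evaluated at z = 0 (magnet centre)."""
            r = np.asarray(radii)
            z = np.asarray(z_positions)
            return np.sum(MU0 * current * r**2 / (2 * (r**2 + z**2) ** 1.5))

        n_turns, pitch, r0, current = 400, 0.25e-3, 40e-3, 100.0   # assumed design
        z_ideal = (np.arange(n_turns) - (n_turns - 1) / 2) * pitch

        rng = np.random.default_rng(1)
        r_real = r0 + rng.normal(0, 20e-6, n_turns)       # bobbin-diameter/winding error
        z_real = z_ideal + rng.normal(0, 30e-6, n_turns)  # turn-position error

        b_ideal = central_field(np.full(n_turns, r0), z_ideal, current)
        b_real = central_field(r_real, z_real, current)
        print(f"ideal {b_ideal:.4f} T, perturbed {b_real:.4f} T, "
              f"relative shift {1e6 * (b_real - b_ideal) / b_ideal:+.1f} ppm")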

  14. Study of run time errors of the ATLAS Pixel Detector in the 2012 data taking period

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00339072

    2013-05-16

    The high-resolution silicon Pixel detector is critical to event vertex reconstruction and particle track reconstruction in the ATLAS detector. During pixel data-taking operation, some modules (silicon pixel sensor + front-end chip + module control chip (MCC)) go into an auto-disabled state, in which the module stops sending data for storage. Modules become operational again after reconfiguration. The source of the problem is not fully understood; one possible source is the occurrence of single event upsets (SEUs) in the MCC, after which a module goes into either a Timeout or a Busy state. This report presents a study of the different types and rates of errors occurring during Pixel data taking, including the dependence of the error rate on the Pixel detector geometry.

  15. The impact of a closed-loop electronic prescribing and administration system on prescribing errors, administration errors and staff time: a before-and-after study.

    Science.gov (United States)

    Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick

    2007-08-01

    To assess the impact of a closed-loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Before-and-after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Closed-loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Prescribing errors were identified in 3.8% of 2450 medication orders pre-intervention and 2.0% of 2353 orders afterwards. Medical staff required 15 s to prescribe a regular inpatient drug pre-intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre-intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; χ2 test). A closed-loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication-related tasks increased.

  16. The impact of a closed‐loop electronic prescribing and administration system on prescribing errors, administration errors and staff time: a before‐and‐after study

    Science.gov (United States)

    Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick

    2007-01-01

    Objectives: To assess the impact of a closed-loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Design, setting and participants: Before-and-after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Intervention: Closed-loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Main outcome measures: Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Results: Prescribing errors were identified in 3.8% of 2450 medication orders pre-intervention and 2.0% of 2353 orders afterwards. Medical staff required 15 s to prescribe a regular inpatient drug pre-intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre-intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; χ2 test). Conclusions: A closed-loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication-related tasks increased. PMID:17693676
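
    The pre/post comparison of error proportions reported in this record can be checked with a standard 2x2 chi-squared test; the counts below are reconstructed from the quoted percentages and denominators, so treat them as approximate:

        from scipy.stats import chi2_contingency

        pre_errors = round(0.038 * 2450)     # ~93 of 2450 orders pre-intervention
        post_errors = round(0.020 * 2353)    # ~47 of 2353 orders post-intervention
        table = [[pre_errors, 2450 - pre_errors],
                 [post_errors, 2353 - post_errors]]

        chi2, p, dof, _ = chi2_contingency(table)
        print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4f}")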

  17. Theoretical study of stability, geometrical and electronic structure of (BeH2)n oligomers

    Energy Technology Data Exchange (ETDEWEB)

    Sukhanov, L P; Boldyrev, A I; Charkin, O P [AN SSSR, Moscow. Inst. Novykh Khimicheskikh Problem

    1983-01-01

    The Hartree-Fock-Roothaan method with the Roos-Siegbahn two-exponent basis is used to calculate the stability and the geometrical and electronic structures of (BeH2)n oligomers, where n = 1, 2, 3, 4 and 6. It is shown that as the degree of oligomerization n grows, the stability of the linear band structure increases compared with other configurations, including high-coordination volumetric ones. Trends with growing n in the geometrical and energetic characteristics and the electronic structure of band-type (BeH2)n oligomers are analysed.

  18. In-plant reliability data base for nuclear plant components: a feasibility study on human error information

    International Nuclear Information System (INIS)

    Borkowski, R.J.; Fragola, J.R.; Schurman, D.L.; Johnson, J.W.

    1984-03-01

    This report documents the procedure and final results of a feasibility study which examined the usefulness of nuclear plant maintenance work requests in the IPRDS as tools for understanding human error and its influence on component failure and repair. Developed in this study were (1) a set of criteria for judging the quality of a plant maintenance record set for studying human error; (2) a scheme for identifying human errors in the maintenance records; and (3) two taxonomies (engineering-based and psychology-based) for categorizing and coding human-error-related events.

  19. Reliability and error analysis on xenon/CT CBF

    International Nuclear Information System (INIS)

    Zhang, Z.

    2000-01-01

    This article provides a quantitative error analysis of a simulation model of xenon/CT CBF in order to investigate the behavior and effect of different types of errors, such as CT noise, motion artifacts, a lower percentage of xenon supply, and lower tissue enhancement. A mathematical model is built to simulate these errors. By adjusting the initial parameters of the simulation model, we can scale the Gaussian noise, control the percentage of xenon supply, and change the tissue enhancement with different kVp settings. Motion artifacts are treated separately by geometrically shifting the sequential CT images. The input function is chosen from an end-tidal xenon curve of a practical study. Four levels of cerebral blood flow, 10, 20, 50, and 80 cc/100 g/min, are examined under different error environments, and the corresponding CT images are generated following the currently popular timing protocol. The simulated studies are fed to a regular xenon/CT CBF system for calculation and evaluation. A quantitative comparison is given to reveal the behavior and effect of the individual error sources, and mixed-error testing is provided to inspect their combined effect. The experiments show that CT noise is still a major error source. Motion artifacts affect the CBF results more geometrically than quantitatively. A lower xenon supply has a lesser effect on the results but reduces the signal-to-noise ratio. Lower xenon enhancement lowers the flow values in all areas of the brain. (author)
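
    The simulation idea generalizes well: generate a tissue enhancement curve from a flow model, inject scaled Gaussian noise, and refit the flow. The sketch below uses a simple Kety-type uptake model with assumed parameters; it illustrates the approach, not the authors' simulation model:

        import numpy as np
        from scipy.optimize import curve_fit

        lam = 1.0                                  # brain/blood partition coefficient
        t = np.linspace(0, 6, 25)                  # minutes
        ca = 1.0 - np.exp(-t / 1.5)                # assumed end-tidal xenon input

        def tissue_curve(t_grid, f):
            """Euler-integrated Kety uptake dC/dt = f*(Ca - C/lam) for flow rate f."""
            c, out = 0.0, []
            dt = t_grid[1] - t_grid[0]
            for ca_i in ca:
                out.append(c)
                c += dt * f * (ca_i - c / lam)
            return np.array(out)

        true_f = 0.5                               # roughly 50 cc/100 g/min
        for noise_sd in (0.0, 0.02, 0.05):         # scaled Gaussian "CT noise"
            rng = np.random.default_rng(2)
            y = tissue_curve(t, true_f) + rng.normal(0, noise_sd, t.size)
            fit_f, _ = curve_fit(tissue_curve, t, y, p0=[0.3])
            print(f"noise sd {noise_sd:.2f}: fitted flow {fit_f[0]:.3f} (true {true_f})")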

  20. Study of wavefront error and polarization of a side mounted infrared window

    Science.gov (United States)

    Liu, Jiaguo; Li, Lin; Hu, Xinqi; Yu, Xin

    2008-03-01

    The wavefront error and polarization of a side-mounted infrared window made of ZnS are studied. Infrared windows suffer from temperature gradients and stress during the launch process. Generally, the temperature gradient changes the refractive index of the material, whereas stress produces deformation and birefringence. In this paper, a thermal finite element analysis (FEA) of an IR window is presented. For this purpose, we employed the FEA program Ansys to obtain the time-varying temperature field. The deformation and stress of the window are derived from a structural FEA with the aerodynamic force and the previously obtained temperature field as the loads. The deformation, temperature field and stress field, together with ray tracing and Jones calculus, are used to calculate the wavefront error and the change in polarization state.
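
    The thermo-optic part of such a wavefront-error calculation reduces, to first order, to integrating the index change through the thickness: OPD ~ (dn/dT + (n-1)*alpha)*dT*L. The sketch below uses rough ZnS property values and an assumed Gaussian temperature profile purely for illustration:

        import numpy as np

        n0, dn_dT, alpha = 2.2, 4e-5, 7e-6     # ZnS: index, thermo-optic 1/K, CTE 1/K (approx.)
        thickness = 8e-3                        # window thickness, m (assumed)
        x = np.linspace(-0.05, 0.05, 101)       # aperture coordinate, m
        dT = 30.0 * np.exp(-(x / 0.03) ** 2)    # assumed temperature rise profile, K

        # thermo-optic + thermal-expansion optical path difference per aperture point
        opd = (dn_dT + (n0 - 1) * alpha) * dT * thickness
        wfe = opd - opd.mean()                  # remove piston term
        print(f"peak-to-valley wavefront error ~ {1e6 * (wfe.max() - wfe.min()):.2f} um")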

  1. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems, and the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study, different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results, we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic …

  2. Wind Power Forecasting Error Frequency Analyses for Operational Power System Studies: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Florita, A.; Hodge, B. M.; Milligan, M.

    2012-08-01

    The examination of wind power forecasting errors is crucial for optimal unit commitment and economic dispatch of power systems with significant wind power penetrations. This scheduling process includes both renewable and nonrenewable generators, and the incorporation of wind power forecasts will become increasingly important as wind fleets constitute a larger portion of generation portfolios. This research considers the Western Wind and Solar Integration Study database of wind power forecasts and numerical actualizations. This database comprises more than 30,000 locations spread over the western United States, with a total wind power capacity of 960 GW. Error analyses for individual sites and for specific balancing areas are performed using the database, quantifying the fit to theoretical distributions through goodness-of-fit metrics. Insights into wind power forecasting error distributions are established for various levels of temporal and spatial resolution, contrasts are drawn among the candidate frequency distributions, and recommendations are put forth for harnessing the results. Empirical data are used to produce more realistic site-level forecasts than previously employed, such that higher-resolution operational studies are possible. This research feeds into a larger body of work on renewable integration through the links wind power forecasting has with various operational issues, such as stochastic unit commitment and flexible reserve level determination.
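
    The goodness-of-fit step described above can be sketched as follows: fit several candidate distributions to a sample of forecast errors and compare the Kolmogorov-Smirnov statistics. Synthetic Laplace-distributed errors stand in for the database:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        errors = rng.laplace(0, 0.05, 5000)    # stand-in forecast errors (per unit)

        for name, dist in [("normal", stats.norm), ("laplace", stats.laplace),
                           ("cauchy", stats.cauchy)]:
            params = dist.fit(errors)          # maximum-likelihood fit
            ks = stats.kstest(errors, dist.cdf, args=params)
            print(f"{name:8s} KS statistic {ks.statistic:.4f}")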

  3. Financial impact of errors in business forecasting: a comparative study of linear models and neural networks

    Directory of Open Access Journals (Sweden)

    Claudimar Pereira da Veiga

    2012-08-01

    The importance of demand forecasting as a management tool is a well-documented issue. However, it is difficult to measure the costs generated by forecasting errors and to find a model that adequately captures the detailed operation of each company. In general, when linear models fail in the forecasting process, more complex nonlinear models are considered. Although some studies comparing traditional models and neural networks have been conducted in the literature, the conclusions are usually contradictory. In this sense, the objective was to compare the accuracy of linear methods and neural networks with the current method used by the company. The results of this analysis also served as input to evaluate the influence of demand forecasting errors on the financial performance of the company. The study was based on historical data from five groups of food products, from 2004 to 2008. In general, one can affirm that all models tested presented good results (much better than the company's current forecasting method), with a mean absolute percent error (MAPE) around 10%. The total financial impact for the company was 6.05% of annual sales.
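
    The accuracy metric used in the study, mean absolute percent error (MAPE), is the mean of |actual - forecast| / |actual|, expressed as a percentage. A minimal sketch (illustrative numbers only; zero actuals must be excluded or handled separately):

        import numpy as np

        def mape(actual, forecast):
            """Mean absolute percent error, in percent."""
            actual = np.asarray(actual, dtype=float)
            forecast = np.asarray(forecast, dtype=float)
            return 100.0 * np.mean(np.abs((actual - forecast) / actual))

        print(f"MAPE = {mape([120, 95, 130], [110, 101, 118]):.1f}%")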

  4. Medication Errors in a Swiss Cardiovascular Surgery Department: A Cross-Sectional Study Based on a Novel Medication Error Report Method

    Directory of Open Access Journals (Sweden)

    Kaspar Küng

    2013-01-01

    The purpose of this study was (1) to determine the frequency and types of medication errors (MEs), (2) to assess the number of MEs prevented by registered nurses, (3) to assess the consequences of MEs for patients, and (4) to compare the number of MEs reported by a newly developed medication error self-reporting tool to the number reported by the traditional incident reporting system. We conducted a cross-sectional study on MEs in the Cardiovascular Surgery Department of Bern University Hospital in Switzerland. Eligible registered nurses involved in the medication process were included. Data on MEs were collected using an investigator-developed medication error self-reporting tool (MESRT) that asked about the occurrence and characteristics of MEs. Registered nurses were instructed to complete a MESRT at the end of each shift even if there was no ME. All MESRTs were completed anonymously. During the one-month study period, a total of 987 MESRTs were returned. Of the 987 completed MESRTs, 288 (29%) indicated that there had been an ME. Registered nurses reported preventing 49 (5%) MEs. Overall, eight (2.8%) MEs had patient consequences. The high response rate suggests that this new method may be a very effective approach to detect, report, and describe MEs in hospitals.

  5. Geometric methods for discrete dynamical systems

    CERN Document Server

    Easton, Robert W

    1998-01-01

    This book looks at dynamics as an iteration process where the output of a function is fed back as an input to determine the evolution of an initial state over time. The theory examines errors which arise from round-off in numerical simulations, from the inexactness of mathematical models used to describe physical processes, and from the effects of external controls. The author provides an introduction accessible to beginning graduate students and emphasizing geometric aspects of the theory. Conley's ideas about rough orbits and chain-recurrence play a central role in the treatment. The book will be a useful reference for mathematicians, scientists, and engineers studying this field, and an ideal text for graduate courses in dynamical systems.

  6. Investigation study of geometric dimensions of the magnetic system of the switched-reluctance machine influence on magnetic moment

    Science.gov (United States)

    Petrushin, A.; Shevkunova, A.

    2018-02-01

    The article deals with an investigation into optimizing the active part of a switched-reluctance motor with the aim of increasing the average electromagnetic torque. The sensitivity of the average electromagnetic torque to changes in the geometric dimensions of the magnetic system was established during the optimization process.

  7. Study on the methodology for predicting and preventing errors to improve reliability of maintenance task in nuclear power plant

    International Nuclear Information System (INIS)

    Hanafusa, Hidemitsu; Iwaki, Toshio; Embrey, D.

    2000-01-01

    The objective of this study was to develop an effective methodology for predicting and preventing errors in nuclear power plant maintenance tasks. A method was established by which chief maintenance personnel can predict and reduce errors when reviewing maintenance procedures, referring to maintenance support systems and methods in other industries, including the aviation and chemical plant industries. The method involves the following seven steps: (1) identification of maintenance tasks; (2) specification of important tasks affecting safety; (3) assessment of human errors occurring during important tasks; (4) identification of performance degrading factors; (5) division of important tasks into sub-tasks; (6) extraction of errors using Predictive Human Error Analysis (PHEA); and (7) development of strategies for reducing errors and for recovering from errors. By way of a trial, this method was applied to the pump maintenance procedure in nuclear power plants. The method is believed to be capable of identifying the expected errors in important tasks and supporting the development of error reduction measures. By applying it, the number of accidents resulting from human errors during maintenance can be reduced. Moreover, a computer-based maintenance support system was developed. (author)

  8. Systematic Analysis of Video Data from Different Human-Robot Interaction Studies: A Categorisation of Social Signals During Error Situations

    OpenAIRE

    Manuel Giuliani; Nicole Mirnig; Gerald Stollnberger; Susanne Stadler; Roland Buchner; Manfred Tscheligi

    2015-01-01

    Human–robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human–robot interaction experiments. For that, we analyzed 201 videos of five human–robot interaction user studies with varying tasks from four independent projects. The analysis shows that …

  9. Shape change in the atlas with congenital midline non-union of its posterior arch: a morphometric geometric study.

    Science.gov (United States)

    Ríos, Luis; Palancar, Carlos; Pastor, Francisco; Llidó, Susana; Sanchís-Gimeno, Juan Alberto; Bastir, Markus

    2017-10-01

    The congenital midline non-union of the posterior arch of the atlas is a developmental variant present at a frequency ranging from 0.7% to 3.9%. Most of the reported cases correspond to incidental findings during routine medical examination. In cases of posterior non-union, hypertrophy of the anterior arch and cortical bone thickening of the posterior arches have been observed and interpreted as adaptive responses of the atlas to increased mechanical stress. We sought to determine whether congenital non-union of the posterior arch results in a change in the shape of the atlas. This study is an analysis of first cervical vertebrae from osteological collections using geometric morphometric techniques. A total of 21 vertebrae were scanned with a high-resolution three-dimensional scanner (Artec Space Spider, Artec Group, Luxembourg). To capture vertebral shape, 19 landmarks and 100 semilandmarks were placed on the vertebrae. Procrustes superimposition was applied to obtain size and shape data (MorphoJ 1.02; Klingenberg, 2011), which were analyzed through principal component analysis (PCA) and mean shape comparisons. The PCA resulted in two components explaining 22.32% and 18.8% of the total shape variance. The graphic plotting of both components indicates a clear shape difference between control atlases and atlases with posterior non-union. This observation was supported by statistically significant differences in mean shape comparisons between the two types of vertebrae; congenital non-union of the posterior arch is thus associated with significant changes in the shape of the atlas.
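
    The core of the pipeline, generalized Procrustes alignment followed by PCA of the aligned landmark coordinates, can be sketched in a few lines. The study used MorphoJ, so the code below only illustrates the method with synthetic landmarks; it is not a reproduction of the analysis:

        import numpy as np

        rng = np.random.default_rng(4)
        # 21 specimens x 119 (semi)landmarks x 3 coordinates, synthetic stand-ins
        shapes = rng.normal(size=(21, 119, 3))

        def align(shape, ref):
            """Translate, scale and rotate one landmark configuration onto a reference."""
            a = shape - shape.mean(axis=0)
            b = ref - ref.mean(axis=0)
            a = a / np.linalg.norm(a)
            b = b / np.linalg.norm(b)
            u, _, vt = np.linalg.svd(a.T @ b)      # Kabsch: optimal rotation
            return a @ (u @ vt)                    # (reflections not handled)

        ref = shapes[0]
        for _ in range(5):                         # iterate toward the mean shape
            aligned = np.array([align(s, ref) for s in shapes])
            ref = aligned.mean(axis=0)

        flat = aligned.reshape(len(shapes), -1)
        flat = flat - flat.mean(axis=0)
        _, s, _ = np.linalg.svd(flat, full_matrices=False)
        explained = s**2 / np.sum(s**2)
        print(f"PC1 {explained[0]:.1%}, PC2 {explained[1]:.1%} of shape variance")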

  10. Accounting for response misclassification and covariate measurement error improves power and reduces bias in epidemiologic studies.

    Science.gov (United States)

    Cheng, Dunlei; Branscum, Adam J; Stamey, James D

    2010-07-01

    To quantify the impact of ignoring misclassification of a response variable and measurement error in a covariate on statistical power, and to develop software for sample size and power analysis that accounts for these flaws in epidemiologic data. A Monte Carlo simulation-based procedure is developed to illustrate the differences in design requirements and inferences between analytic methods that properly account for misclassification and measurement error and those that do not, in regression models for cross-sectional and cohort data. We found that failure to account for these flaws in epidemiologic data can lead to a substantial reduction in statistical power, over 25% in some cases. The proposed method substantially reduced bias, by up to a ten-fold margin, compared to naive estimates obtained by ignoring misclassification and mismeasurement. We recommend as routine practice that researchers account for errors in measurement of both response and covariate data when determining sample size, performing power calculations, or analyzing data from epidemiological studies.
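
    A stripped-down version of such a Monte Carlo procedure: simulate a logistic outcome, misclassify the response with assumed sensitivity and specificity, add classical error to the covariate, and observe the bias in the naive estimate. All parameter values are assumptions for illustration:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n, beta, tau = 2000, 0.7, 0.8
        sens, spec = 0.9, 0.95                  # response misclassification rates

        slopes = []
        for _ in range(200):
            x = rng.normal(size=n)
            p = 1 / (1 + np.exp(-(-1 + beta * x)))
            y = rng.binomial(1, p)              # true response
            y_obs = np.where(y == 1, rng.binomial(1, sens, n),      # imperfect sens.
                             rng.binomial(1, 1 - spec, n))          # imperfect spec.
            x_obs = x + tau * rng.normal(size=n)                    # classical error
            fit = sm.Logit(y_obs, sm.add_constant(x_obs)).fit(disp=0)
            slopes.append(fit.params[1])

        print(f"naive mean slope {np.mean(slopes):.3f} vs true {beta}")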

  11. Study of Error Propagation in the Transformations of Dynamic Thermal Models of Buildings

    Directory of Open Access Journals (Sweden)

    Loïc Raillon

    2017-01-01

    The dynamic behaviour of a system may be described by models of different forms: thermal (RC) networks, state-space representations, transfer functions, and ARX models. These models, which describe the same process, are used in design, simulation, optimal predictive control, parameter identification, fault detection and diagnosis, and so on. Since several forms are available, it is of interest to know which is the most suitable, by estimating the sensitivity of each model to transformation into a physical model, represented by a thermal network. A procedure for the study of error propagation by Monte Carlo simulation and of factor prioritization is exemplified on a simple, but representative, thermal model of a building. The analysis of the propagation of errors and of their influence on parameter estimation shows that the transformation from a state-space representation to a transfer function is more robust than the other way around. Therefore, if only one model is chosen, the state-space representation is preferable.
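
    One of the transformations under study, state space to transfer function, can be illustrated directly with scipy; the 2R2C parameter values below are arbitrary illustrative numbers, not taken from the paper:

        import numpy as np
        from scipy import signal

        # 2R2C thermal network in state-space form (R in K/W, C in J/K; assumed)
        R1, R2, C1, C2 = 0.05, 0.02, 1e6, 5e5
        A = np.array([[-1 / (R1 * C1) - 1 / (R2 * C1), 1 / (R2 * C1)],
                      [1 / (R2 * C2), -1 / (R2 * C2)]])
        B = np.array([[1 / (R1 * C1)], [0.0]])     # outdoor temperature as input
        C = np.array([[0.0, 1.0]])                 # indoor temperature as output
        D = np.array([[0.0]])

        num, den = signal.ss2tf(A, B, C, D)
        print("numerator:  ", num)
        print("denominator:", den)
        # The Monte Carlo experiment consists of perturbing (A, B, C, D), repeating
        # the conversion, and measuring the spread of the recovered parameters;
        # signal.tf2ss performs the reverse transformation.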

  12. A study on mechanical errors in Cone Beam Computed Tomography (CBCT) System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yi Seong; Yoo, Eun Jeong; Choi, Kyoung Sik [Dept. of Radiation Oncology, Anyang SAM Hospital, Anyang (Korea, Republic of); Lee, Jong Woo [Dept. of Radiation Oncology, Konkuk University Medical Center, Seoul (Korea, Republic of); Suh, Tae Suk [Dept. of Biomedical Engineering and Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Kim, Jeong Koo [Dept. of Radiological Science, Hanseo University, Seosan (Korea, Republic of)

    2013-06-15

    This study investigated the setup variance caused by rotating unbalance of the gantry in image-guided radiation therapy. The equipment comprised a linear accelerator (Elekta Synergy™, UK) and the three-dimensional volume imaging mode (3D Volume View) of its cone beam computed tomography (CBCT) system. 2D images obtained over 360° and 180° rotations were reconstructed into 3D images. A Catphan503 phantom and a homogeneous phantom were used to measure the setup errors, and a ball-bearing phantom was used to check the rotation axis of the CBCT. The volume images from CBCT of the Catphan503 and homogeneous phantoms were analyzed and compared to images from conventional CT in six dimensions (X, Y, Z, roll, pitch, and yaw). In orthogonal coordinates, the setup error differed by 0.6 mm in X, 0.5 mm in Y and 0.5 mm in Z when the gantry rotated 360°, whereas for a 180° rotation the errors measured 0.9 mm, 0.2 mm and 0.3 mm in X, Y and Z, respectively. In rotating coordinates, the greater the rotating unbalance, the larger the average setup error. The resolution of the CBCT images showed a difference of two levels against the recommended table. The CBCT showed good agreement with the recommended values for mechanical safety, geometric accuracy and image quality. The rotating unbalance of the gantry varied little in orthogonal coordinates; however, in rotating coordinates it exceeded the recommended value of ±1°. Therefore, six-dimensional correction is needed for sophisticated radiation therapy.

  13. Calculation of set-up errors in patients with prostate tumors. Exploratory study

    Energy Technology Data Exchange (ETDEWEB)

    Donis Gil, S.; Robayna Duque, B. E.; Jimenez Sosa, A.; Hernandez Armas, O.; Gonzalez Martin, A. E.; Hernandez Armas, J.

    2013-07-01

    The calculation of SM is based on positioning (set-up) errors, which are calculated from the 3D movements of the patient. This paper presents an exploratory study of 20 patients with prostate tumors, in which set-up errors are evaluated for two working protocols. (Author)
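
    Although the record does not state the protocol details, set-up error analyses of this kind typically separate systematic (Sigma) and random (sigma) population components and feed them into a margin recipe such as the widely used van Herk formula M = 2.5*Sigma + 0.7*sigma (an assumption here, not necessarily the authors' method). A sketch with synthetic displacement data:

        import numpy as np

        rng = np.random.default_rng(6)
        # 20 patients x 10 fractions of set-up displacements along one axis, mm
        shifts = [rng.normal(rng.normal(0, 2), 1.5, size=10) for _ in range(20)]

        patient_means = np.array([s.mean() for s in shifts])
        Sigma = patient_means.std(ddof=1)                             # systematic
        sigma = np.sqrt(np.mean([s.std(ddof=1) ** 2 for s in shifts]))  # random

        print(f"Sigma {Sigma:.2f} mm, sigma {sigma:.2f} mm, "
              f"van Herk margin {2.5 * Sigma + 0.7 * sigma:.2f} mm")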

  14. Pediatric Nurses' Perceptions of Medication Safety and Medication Error: A Mixed Methods Study.

    Science.gov (United States)

    Alomari, Albara; Wilson, Val; Solman, Annette; Bajorek, Beata; Tinsley, Patricia

    2017-05-30

    This study aims to outline the current workplace culture of medication practice in a pediatric medical ward, and to explore the perceptions of nurses in a pediatric clinical setting as to why medication administration errors occur. As nurses have a central role in the medication process, it is essential to explore their perceptions of the factors influencing that process; without this understanding, it is difficult to develop effective prevention strategies aimed at reducing medication administration errors. Previous studies were limited to exploring single, specific aspects of medication safety, and their methods were limited to survey designs, which may lead to incomplete or inadequate information. This study is phase 1 of an action research project. Data collection included direct observation of nurses during medication preparation and administration, an audit based on the medication policy and guidelines, and focus groups with nursing staff. A thematic analysis was undertaken by each author independently to analyze the observation notes and focus group transcripts, and simple descriptive statistics were used to analyze the audit data. The study was conducted in a specialized pediatric medical ward. Four key themes were identified from the combined quantitative and qualitative data: (1) understanding medication errors, (2) the busy-ness of nurses, (3) the physical environment, and (4) compliance with medication policy and practice guidelines. Workload, frequent interruptions, poor physical environment design, lack of preparation space, and impractical medication policies were identified as barriers to safe medication practice. Overcoming these barriers requires organizations to review medication process policies and to engage nurses more in medication safety research and in designing clinical guidelines for their own practice.

  15. Human error analysis project (HEAP) - The fourth pilot study: verbal data for analysis of operator performance

    International Nuclear Information System (INIS)

    Braarud, Per Oeyvind; Droeyvoldsmo, Asgeir; Hollnagel, Erik

    1997-06-01

    This report is the second report from Pilot Study No. 4 within the Human Error Analysis Project (HEAP). The overall objective of HEAP is to provide a better understanding and explicit modelling of how and why 'cognitive errors' occur. This study investigated the contribution of different verbal data sources to the analysis of control room operators' performance. Operators' concurrent verbal reports, retrospective verbal reports, and process experts' comments were compared for their contribution to an operator performance measure. The study examined verbal protocols both for single operators and for teams. The main findings were that all three verbal data sources could be used to study performance. There was relatively high overlap between the data sources, but also a unique contribution from each source, and a common pattern in the types of operator activities the data sources gave information about. The operators' concurrent protocols overall contained slightly more information on operator activities than the other two verbal sources. The study also showed that concurrent verbal protocol is feasible and useful for the analysis of a team's activities during a scenario. (author)

  16. Proof in geometry with "mistakes in geometric proofs"

    CERN Document Server

    Fetisov, A I

    2006-01-01

    This single-volume compilation of 2 books explores the construction of geometric proofs. It offers useful criteria for determining correctness and presents examples of faulty proofs that illustrate common errors. 1963 editions.

  17. Linking Errors between Two Populations and Tests: A Case Study in International Surveys in Education

    Directory of Open Access Journals (Sweden)

    Dirk Hastedt

    2015-06-01

    This simulation study was prompted by the current increased interest in linking national studies to international large-scale assessments (ILSAs) such as IEA's TIMSS, IEA's PIRLS, and OECD's PISA. Linkage in this scenario is achieved by including items from the international assessments in the national assessments, on the premise that the average achievement scores from the latter can be linked to the international metric. In addition to raising issues associated with different testing conditions, administrative procedures, and the like, this approach also poses psychometric challenges. This paper endeavors to shed some light on the effects that can be expected, the linkage errors in particular, for countries using this practice. The ILSA selected for this simulation study was IEA TIMSS 2011, and the three countries used as the national assessment cases were Botswana, Honduras, and Tunisia, all of which participated in TIMSS 2011. The items selected as common to the simulated national tests and the international test came from the Grade 4 TIMSS 2011 mathematics items that IEA released into the public domain after completion of this assessment. The findings show that linkage errors reached acceptable levels if 30 or more items were used for the linkage, although the errors were still significantly higher than the TIMSS cutoffs. Comparison of the estimated country averages based on the simulated national surveys and the averages based on the international TIMSS assessment revealed only one instance across the three countries of the estimates approaching parity. Also, the percentages of students in these countries who actually reached the defined benchmarks on the TIMSS achievement scale differed significantly between the results based on TIMSS and the results for the simulated national assessments. As a conclusion, we advise against using groups of released items from international assessments in national assessments.
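
    A simplified mean/mean common-item linking step, with the linking error estimated by a jackknife over the anchor items, conveys the mechanics (the operational TIMSS methodology is more elaborate; the item values here are invented):

        import numpy as np

        rng = np.random.default_rng(7)
        k = 30                                         # number of common (anchor) items
        b_int = rng.normal(0, 1, k)                    # item difficulties, international metric
        b_nat = b_int + 0.15 + rng.normal(0, 0.1, k)   # same items, national calibration

        delta = b_nat - b_int
        shift = delta.mean()                           # linking constant
        jack = np.array([np.delete(delta, i).mean() for i in range(k)])
        link_err = np.sqrt((k - 1) / k * np.sum((jack - jack.mean()) ** 2))
        print(f"linking constant {shift:.3f}, jackknife linking error {link_err:.3f}")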

  18. Measurement error in epidemiologic studies of air pollution based on land-use regression models.

    Science.gov (United States)

    Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino

    2013-10-15

    Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
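
    The setup can be reproduced in miniature: generate a true exposure from geographic covariates, fit a LUR model at a limited number of monitoring sites, and regress a simulated health outcome on the LUR predictions. All sizes and coefficients below are assumptions; the point is the bias in the recovered health-effect slope:

        import numpy as np

        rng = np.random.default_rng(8)
        n_subjects, n_sites, beta = 5000, 40, 0.3      # assumed sizes and true effect

        G = rng.normal(size=(n_subjects, 6))           # geographic covariates
        w = np.array([1.0, 0.6, 0.3, 0.0, 0.0, 0.0])   # only some covariates matter
        exposure = G @ w + rng.normal(0, 1.0, n_subjects)       # "true" exposure

        sites = rng.choice(n_subjects, n_sites, replace=False)  # monitoring locations
        coef, *_ = np.linalg.lstsq(G[sites], exposure[sites], rcond=None)  # LUR fit
        exposure_hat = G @ coef                        # LUR prediction for everyone

        health = beta * exposure + rng.normal(0, 1.0, n_subjects)
        slope = np.polyfit(exposure_hat, health, 1)[0]
        print(f"health-effect slope using LUR predictions {slope:.3f} (true {beta})")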

  19. Geometric group theory an introduction

    CERN Document Server

    Löh, Clara

    2017-01-01

    Inspired by classical geometry, geometric group theory has in turn provided a variety of applications to geometry, topology, group theory, number theory and graph theory. This carefully written textbook provides a rigorous introduction to this rapidly evolving field whose methods have proven to be powerful tools in neighbouring fields such as geometric topology. Geometric group theory is the study of finitely generated groups via the geometry of their associated Cayley graphs. It turns out that the essence of the geometry of such groups is captured in the key notion of quasi-isometry, a large-scale version of isometry whose invariants include growth types, curvature conditions, boundary constructions, and amenability. This book covers the foundations of quasi-geometry of groups at an advanced undergraduate level. The subject is illustrated by many elementary examples, outlooks on applications, as well as an extensive collection of exercises.

  20. Organizational Climate, Stress, and Error in Primary Care: The MEMO Study

    National Research Council Canada - National Science Library

    Linzer, Mark; Manwell, Linda B; Mundt, Marlon; Williams, Eric; Maguire, Ann; McMurray, Julia; Plane, Mary B

    2005-01-01

    … Physician surveys assessed office environment and organizational climate (OC). Stress was measured using a 4-item scale, past errors were self-reported, and the likelihood of future errors was self-assessed using the OSPRE …

  1. Barriers to the medication error reporting process within the Irish National Ambulance Service, a focus group study.

    Science.gov (United States)

    Byrne, Eamonn; Bury, Gerard

    2018-02-08

    Incident reporting is vital to identifying pre-hospital medication safety issues, because the literature suggests that the majority of pre-hospital errors are self-identified. In 2016, the National Ambulance Service (NAS) reported 11 medication errors to the national body with responsibility for risk management and insurance cover. The Health Information and Quality Authority stated in 2014 that reporting of clinical incidents, of which medication errors are a subset, was not felt to be representative of the actual events occurring. Even though reporting systems are in place, reporting levels appear to be well below what might be expected. Little data are available to explain this apparent discrepancy. The aim was to identify, investigate and document the barriers to medication error reporting within the NAS. An independent moderator led four focus groups in March of 2016. A convenience sample of 18 frontline paramedics and advanced paramedics from Cork City and County discussed medication errors and the medication error reporting process. The sessions were recorded and anonymised, and the data were analysed using a process of thematic analysis. Practitioners understood the value of reporting errors. Barriers to reporting included fear of consequences and ridicule, procedural ambiguity, lack of feedback, and a perceived lack of both consistency and confidentiality. The perceived consequences of making an error included professional, financial, legal and psychological consequences. Staff appeared willing to admit errors in a psychologically safe environment. The barriers to reporting are in line with international evidence. Time constraints prevented achievement of thematic saturation. Further study is warranted.

  2. Conflict monitoring in speech processing : An fMRI study of error detection in speech production and perception

    NARCIS (Netherlands)

    Gauvin, Hanna; De Baene, W.; Brass, Marcel; Hartsuiker, Robert

    2016-01-01

    To minimize the number of errors in speech, and thereby facilitate communication, speech is monitored before articulation. It is, however, unclear at which level during speech production monitoring takes place, and what mechanisms are used to detect and correct errors. The present study investigated …

  3. Error Analysis for Arithmetic Word Problems--A Case Study of Primary Three Students in One Singapore School

    Science.gov (United States)

    Cheng, Lu Pien

    2015-01-01

    In this study, ways in which 9-year-old students from one Singapore school solved 1-step and 2-step word problems based on the three semantic structures were examined. The students' work and diagrams provided insights into the range of errors in word problem solving for 1-step and 2-step word problems. In particular, the errors provided some…

  4. Evaluating Method Engineer Performance: an error classification and preliminary empirical study

    Directory of Open Access Journals (Sweden)

    Steven Kelly

    1998-11-01

    We describe an approach to empirically test the use of metaCASE environments to model methods. Both diagrams and matrices have been proposed as means for presenting methods, and these different paradigms may have their own effects on how easily and how well users can model methods. We extend Batra's classification of errors in data modelling to cover metamodelling, and use it to measure the performance of a group of metamodellers using either diagrams or matrices. The tentative results from this pilot study confirm the usefulness of the classification, and show some interesting differences between the paradigms.

  5. Medication errors with the use of allopurinol and colchicine: a retrospective study of a national, anonymous Internet-accessible error reporting system.

    Science.gov (United States)

    Mikuls, Ted R; Curtis, Jeffrey R; Allison, Jeroan J; Hicks, Rodney W; Saag, Kenneth G

    2006-03-01

    To more closely assess medication errors in gout care, we examined data from a national, Internet-accessible error reporting program over a 5-year reporting period. We examined data from the MEDMARX database, covering the period from January 1, 1999 through December 31, 2003. For allopurinol and colchicine, we examined error severity, source, type, contributing factors, and healthcare personnel involved in errors, and we detailed errors resulting in patient harm. Causes of error and the frequency of other error characteristics were compared for gout medications versus other musculoskeletal treatments using the chi-square statistic. Gout medication errors occurred in 39% (n = 273) of facilities participating in the MEDMARX program. Reported errors were predominantly from the inpatient hospital setting and related to the use of allopurinol (n = 524), followed by colchicine (n = 315), probenecid (n = 50), and sulfinpyrazone (n = 2). Compared to errors involving other musculoskeletal treatments, allopurinol and colchicine errors were more often ascribed to problems with physician prescribing (7% for other therapies versus 23-39% for allopurinol and colchicine, p < 0.0001) and less often due to problems with drug administration or nursing error (50% vs 23-27%, p < 0.0001). Our results suggest that inappropriate prescribing practices are characteristic of errors occurring with the use of allopurinol and colchicine. Physician prescribing practices are a potential target for quality improvement interventions in gout care.

  6. Quantification of human errors in level-1 PSA studies in NUPEC/JINS

    International Nuclear Information System (INIS)

    Hirano, M.; Hirose, M.; Sugawara, M.; Hashiba, T.

    1991-01-01

    The THERP (Technique for Human Error Rate Prediction) method is mainly adopted to evaluate pre-accident and post-accident human error rates. Performance shaping factors are derived by taking Japanese operational practice into account. Several examples of human error rates, with their calculation procedures, are presented. The important human interventions of typical Japanese NPPs are also presented. (orig./HP)
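
    The arithmetic behind such quantification can be sketched as follows: a nominal human error probability (HEP) is multiplied by performance shaping factors, and THERP's dependence model adjusts the HEP of a subsequent task given failure of the preceding one. The numeric values are illustrative, not taken from the study:

        def adjusted_hep(nominal, psfs):
            """Nominal human error probability scaled by performance shaping factors."""
            p = nominal
            for factor in psfs:
                p *= factor
            return min(p, 1.0)

        def dependent_hep(n, level):
            """THERP conditional HEP of a task given failure of the preceding task."""
            return {"zero": n,
                    "low": (1 + 19 * n) / 20,
                    "moderate": (1 + 6 * n) / 7,
                    "high": (1 + n) / 2,
                    "complete": 1.0}[level]

        hep = adjusted_hep(0.003, [2.0, 1.5])    # e.g. stress and a poor interface
        print(f"adjusted HEP {hep:.4f}, "
              f"next task under high dependence {dependent_hep(hep, 'high'):.3f}")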

  7. Learning from Errors at Work: A Replication Study in Elder Care Nursing

    Science.gov (United States)

    Leicher, Veronika; Mulder, Regina H.; Bauer, Johannes

    2013-01-01

    Learning from errors is an important way of learning at work. In this article, we analyse conditions under which elder care nurses use errors as a starting point for the engagement in social learning activities (ESLA) in the form of joint reflection with colleagues on potential causes of errors and ways to prevent them in future. The goal of our…

  8. Prevalence, Nature, Severity and Risk Factors for Prescribing Errors in Hospital Inpatients: Prospective Study in 20 UK Hospitals.

    Science.gov (United States)

    Ashcroft, Darren M; Lewis, Penny J; Tully, Mary P; Farragher, Tracey M; Taylor, David; Wass, Valerie; Williams, Steven D; Dornan, Tim

    2015-09-01

    It has been suggested that doctors in their first year of post-graduate training make a disproportionate number of prescribing errors. This study aimed to compare the prevalence of prescribing errors made by first-year post-graduate doctors with that of errors by senior doctors and non-medical prescribers, and to investigate the predictors of potentially serious prescribing errors. Pharmacists in 20 hospitals over 7 prospectively selected days collected data on the number of medication orders checked, the grade of prescriber and details of any prescribing errors. Logistic regression models (adjusted for clustering by hospital) identified factors predicting the likelihood of prescribing erroneously and the severity of prescribing errors. Pharmacists reviewed 26,019 patients and 124,260 medication orders; 11,235 prescribing errors were detected in 10,986 orders. The mean error rate was 8.8% (95% confidence interval [CI] 8.6-9.1) errors per 100 medication orders. Rates of errors for all doctors in training were significantly higher than rates for medical consultants. Doctors who were 1 year (odds ratio [OR] 2.13; 95% CI 1.80-2.52) or 2 years in training (OR 2.23; 95% CI 1.89-2.65) were more than twice as likely to prescribe erroneously. Prescribing errors were 70% (OR 1.70; 95% CI 1.61-1.80) more likely to occur at the time of hospital admission than when medication orders were issued during the hospital stay. No significant differences in severity of error were observed between grades of prescriber. Potentially serious errors were more likely to be associated with prescriptions for parenteral administration, especially for cardiovascular or endocrine disorders. The problem of prescribing errors in hospitals is substantial and not solely a problem of the most junior medical prescribers, particularly for those errors most likely to cause significant patient harm. Interventions are needed to target these high-risk errors by all grades of staff and hence …
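
    The modelling step described above ("logistic regression models adjusted for clustering by hospital") can be approximated with a GEE using an exchangeable working correlation; the data layout and effect sizes below are assumptions for illustration, not the study data:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(9)
        n = 4000
        df = pd.DataFrame({
            "hospital": rng.integers(0, 20, n),          # clustering variable
            "trainee_year1": rng.integers(0, 2, n),      # first-year prescriber flag
            "admission_order": rng.integers(0, 2, n),    # order written at admission
        })
        logit = -2.5 + 0.75 * df["trainee_year1"] + 0.55 * df["admission_order"]
        df["error"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        model = smf.gee("error ~ trainee_year1 + admission_order", groups="hospital",
                        data=df, family=sm.families.Binomial(),
                        cov_struct=sm.cov_struct.Exchangeable())
        print(np.exp(model.fit().params))                # odds ratios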

  9. Impact of Stewardship Interventions on Antiretroviral Medication Errors in an Urban Medical Center: A 3-Year, Multiphase Study.

    Science.gov (United States)

    Zucker, Jason; Mittal, Jaimie; Jen, Shin-Pung; Cheng, Lucy; Cennimo, David

    2016-03-01

    There is a high prevalence of HIV infection in Newark, New Jersey, with University Hospital admitting approximately 600 HIV-infected patients per year. Medication errors involving antiretroviral therapy (ART) could significantly affect treatment outcomes. The goal of this study was to evaluate the effectiveness of various stewardship interventions in reducing the prevalence of prescribing errors involving ART. This was a retrospective review of all inpatients receiving ART for HIV treatment during three distinct 6-month intervals over a 3-year period. During the first year, the baseline prevalence of medication errors was determined. During the second year, physician and pharmacist education was provided, and a computerized order entry system with drug information resources and prescribing recommendations was implemented. Prospective audit of ART orders with feedback was conducted in the third year. Analyses and comparisons were made across the three phases of this study. Of the 334 patients with HIV admitted in the first year, 45% had at least one antiretroviral medication error and 38% had uncorrected errors at the time of discharge. After education and computerized order entry, significant reductions in medication error rates were observed compared to baseline rates; 36% of 315 admissions had at least one error and 31% had uncorrected errors at discharge. While the prevalence of antiretroviral errors in year 3 was similar to that of year 2 (37% of 276 admissions), there was a significant decrease in the prevalence of uncorrected errors at discharge (12%) with the use of prospective review and intervention. Interventions, such as education and guideline development, can aid in reducing ART medication errors, but a committed stewardship program is necessary to elicit the greatest impact. © 2016 Pharmacotherapy Publications, Inc.
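
    A minimal way to reproduce the kind of phase-to-phase comparison reported above is a chi-square test on the admission counts implied by the quoted percentages; this is a sketch, not the authors' analysis.

    # Sketch: chi-square test comparing the proportion of admissions with
    # at least one ART error at baseline vs. after education/order entry.
    # Counts are back-calculated from the percentages in the abstract.
    from scipy.stats import chi2_contingency

    baseline_errors, baseline_total = round(0.45 * 334), 334
    phase2_errors, phase2_total = round(0.36 * 315), 315

    table = [
        [baseline_errors, baseline_total - baseline_errors],
        [phase2_errors, phase2_total - phase2_errors],
    ]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")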

  10. Enhancing Intervention for Residual Rhotic Errors Via App-Delivered Biofeedback: A Case Study.

    Science.gov (United States)

    Byun, Tara McAllister; Campbell, Heather; Carey, Helen; Liang, Wendy; Park, Tae Hong; Svirsky, Mario

    2017-06-22

    Recent research suggests that visual-acoustic biofeedback can be an effective treatment for residual speech errors, but adoption remains limited due to barriers including high cost and lack of familiarity with the technology. This case study reports results from the first participant to complete a course of visual-acoustic biofeedback using a not-for-profit iOS app, Speech Therapist's App for /r/ Treatment. App-based biofeedback treatment for rhotic misarticulation was provided in weekly 30-min sessions for 20 weeks. Within-treatment progress was documented using clinician perceptual ratings and acoustic measures. Generalization gains were assessed using acoustic measures of word probes elicited during baseline, treatment, and maintenance sessions. Both clinician ratings and acoustic measures indicated that the participant significantly improved her rhotic production accuracy in trials elicited during treatment sessions. However, these gains did not transfer to generalization probes. This study provides a proof-of-concept demonstration that app-based biofeedback is a viable alternative to costlier dedicated systems. Generalization of gains to contexts without biofeedback remains a challenge that requires further study. App-delivered biofeedback could enable clinician-research partnerships that would strengthen the evidence base while providing enhanced treatment for children with residual rhotic errors. https://doi.org/10.23641/asha.5116318.

  11. Evaluating physician performance at individualizing care: a pilot study tracking contextual errors in medical decision making.

    Science.gov (United States)

    Weiner, Saul J; Schwartz, Alan; Yudkowsky, Rachel; Schiff, Gordon D; Weaver, Frances M; Goldberg, Julie; Weiss, Kevin B

    2007-01-01

    Clinical decision making requires 2 distinct cognitive skills: the ability to classify patients' conditions into diagnostic and management categories that permit the application of research evidence, and the ability to individualize or, more specifically, to contextualize care for patients whose circumstances and needs require variation from the standard approach to care. The purpose of this study was to develop and test a methodology for measuring physicians' performance at contextualizing care and compare it to their performance at planning biomedically appropriate care. First, the authors drafted 3 cases, each with 4 variations, 3 of which are embedded with biomedical and/or contextual information that is essential to planning care. Once the cases were validated as instruments for assessing physician performance, 54 internal medicine residents were presented with opportunities to make these preidentified biomedical or contextual errors, and data were collected on information elicitation and error making. The case validation process was successful in that, in the final iteration, the physicians who received the contextual variant of cases proposed an alternate plan of care to those who received the baseline variant 100% of the time. The subsequent piloting of these validated cases unmasked previously unmeasured differences in physician performance at contextualizing care. The findings, which reflect the performance characteristics of the study population, are presented. This pilot study demonstrates a methodology for measuring physician performance at contextualizing care and illustrates the contribution of such information to an overall assessment of physician practice.

  12. Comparative Study of Communication Error between Conventional and Digital MCR Operators in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Geun; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2015-05-15

    In this regard, appropriate communication is directly related to efficient and safe system operation, and inappropriate communication is one of the main causes of accidents in various industries, since inappropriate communication can cause a lack of necessary information exchange between operators and lead to serious consequences in large process systems such as nuclear power plants. According to a study conducted by Y. Hirotsu in 2001, about 25 percent of human-error-caused incidents in NPPs were related to communication issues. Other studies reported that 85 percent of human-error-caused incidents in the aviation industry and 92 percent in the railway industry were related to communication problems. Accordingly, the importance of efforts to reduce inappropriate communication has been emphasized in order to enhance the safety of such systems. As a result, the average ratio of inappropriate communication in digital MCRs was slightly higher than that in conventional MCRs, whereas the average ratio of no communication in digital MCRs was much smaller than that in conventional MCRs. Regarding the average ratio of inappropriate communication, it can be inferred that operators are still more familiar with conventional MCRs than with digital MCRs. More case studies are required for a more precise comparison, since there were only three examined cases for digital MCRs. However, a similar result is expected because there are no differences in communication method, although there are many differences in the way procedures are carried out.

  13. Reducing Individual Variation for fMRI Studies in Children by Minimizing Template Related Errors.

    Directory of Open Access Journals (Sweden)

    Jian Weng

    Spatial normalization is an essential process for group comparisons in functional MRI studies. In practice, there is a risk of normalization errors, particularly in studies involving children, seniors, or diseased populations and in regions with high individual variation. One way to minimize normalization errors is to create a study-specific template based on a large sample size. However, studies with a large sample size are not always feasible, particularly for studies in children. The performance of templates with a small sample size has not been evaluated in fMRI studies in children. In the current study, this issue was encountered in a working memory task with 29 children in two groups. We compared the performance of different templates: a study-specific template created by the experimental population, a Chinese children template and the widely used adult MNI template. We observed distinct differences in the right orbitofrontal region among the three templates in between-group comparisons. The study-specific template and the Chinese children template were more sensitive for the detection of between-group differences in the orbitofrontal cortex than the MNI template. Proper templates could effectively reduce individual variation. Further analysis revealed a correlation between the BOLD contrast size and the norm index of the affine transformation matrix, i.e., the SFN, which characterizes the difference between a template and a native image and differs significantly across subjects. We therefore proposed and tested another method to reduce individual variation, which included the SFN as a covariate in group-wise statistics. This correction exhibits outstanding performance in enhancing detection power in group-level tests. A training effect of abacus-based mental calculation was also demonstrated, with significantly elevated activation in the right orbitofrontal region that correlated with behavioral response time across subjects in the trained group.
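
    One plausible reading of the "norm index" of the affine transformation matrix (the SFN) is a Frobenius-norm measure of how far the native-to-template affine departs from the identity; the study's exact definition may differ, so the sketch below is illustrative only.

    # Hedged sketch of an affine "norm index" per subject (assumed
    # definition: Frobenius norm of the deviation from identity).
    import numpy as np

    def sfn(affine: np.ndarray) -> float:
        """Frobenius norm of the deviation of a 4x4 affine from identity."""
        return float(np.linalg.norm(affine - np.eye(4), ord="fro"))

    # Example: a native-to-template affine with mild scaling and translation.
    A = np.eye(4)
    A[0, 0], A[1, 1], A[2, 2] = 1.05, 0.97, 1.02   # scaling terms
    A[:3, 3] = [2.0, -1.5, 0.5]                    # translation (mm)
    print(sfn(A))   # larger values = poorer template fit; usable as covariate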

  14. Comprehensive Anti-error Study on Power Grid Dispatching Based on Regional Regulation and Integration

    Science.gov (United States)

    Zhang, Yunju; Chen, Zhongyi; Guo, Ming; Lin, Shunsheng; Yan, Yinyang

    2018-01-01

    With the large capacity of modern power systems and the trend towards large units and high voltages, dispatching operations are becoming more frequent and complicated, and the probability of operation errors increases. This paper addresses the lack of anti-error functions, the single scheduling function, and the low working efficiency of the technical support systems used in regional regulation and integration: an integrated architecture for power-grid dispatching anti-error checking based on cloud computing is proposed. An integrated anti-error system spanning the Energy Management System (EMS) and the Operation Management System (OMS) has been constructed as well. The system architecture has good scalability and adaptability, which can improve computational efficiency, reduce the cost of system operation and maintenance, and enhance the ability of regional regulation and anti-error checking, with broad development prospects.

  15. Management-changing errors in the recall of radiologic results — A pilot study

    International Nuclear Information System (INIS)

    Brus-Ramer, M.; Yerubandi, V.; Newhouse, J.H.

    2012-01-01

    Aim: To evaluate whether diagnostic information from radiological studies is altered by person-to-person communication and/or faulty recall, and whether such alterations affect patient management. Materials and methods: A structured telephone survey was conducted at a large tertiary care medical centre of house staff managing inpatients who had undergone chest, abdominal, or pelvic computed tomography (CT) or magnetic resonance imaging (MRI) and remained in the hospital at least 2 days later. Fifty-six physicians were surveyed regarding 98 patient cases. Each physician was asked how he or she first became aware of the results of the study. Each was then asked to recall the substance of the radiological interpretation and to compare it with the radiology report. Each was then asked to assess the level of difference between the interpretations and whether management was affected. Results were correlated with the route by which interviewees became aware of the report, the report length, and whether the managing service was medical or surgical. Results: In nearly 15% (14/98) of cases, differences between the recalled and official results were such that patient management could have been (11.2%) or had already been (3.1%) affected. There was no significant correlation between errors and either the route of report communication or the report length. Conclusion: There was a substantial rate of error in the recall and/or transmission of diagnostic radiological information, which was sufficiently severe to affect patient management.

  16. Study of principle error sources in gamma spectrometry. Application to cross sections measurement

    International Nuclear Information System (INIS)

    Majah, M. Ibn.

    1985-01-01

    The principal error sources in gamma spectrometry have been studied with the aim of measuring cross sections with high precision. Three error sources have been studied: dead time and pile-up, which depend on the counting rate, and the coincidence effect, which depends on the disintegration scheme of the radionuclide in question. A constant-frequency pulse generator has been used to correct the counting loss due to dead time and pile-up in cases of long and short disintegration periods. The loss due to the coincidence effect can reach 25% or more, depending on the disintegration scheme and on the source-detector distance. After establishing the correction formula and verifying its validity for four examples (iron 56, scandium 48, antimony 120 and gold 196m), an application was carried out by measuring cross sections of nuclear reactions that lead to long disintegration periods, which require counting at short source-detector distance and thus correcting the loss due to dead time, pile-up and the coincidence effect. 16 refs., 45 figs., 25 tabs. (author)
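
    The constant-frequency pulser correction mentioned above can be sketched as follows: pulses injected at a known rate suffer the same dead-time and pile-up losses as the photopeak counts, so the observed pulser deficit yields the correction factor. All numbers below are illustrative.

    # Sketch of the pulser method for combined dead-time and pile-up
    # correction in gamma spectrometry; values are hypothetical.
    def pulser_correction(counts_peak: float,
                          pulser_injected: float,
                          pulser_recorded: float) -> float:
        """Scale a photopeak count by the fraction of pulser pulses lost
        during the same acquisition."""
        return counts_peak * pulser_injected / pulser_recorded

    # A 10 Hz pulser over a 1000 s acquisition injects 10000 pulses; if
    # 9300 appear in the pulser peak, about 7% of counts were lost.
    print(pulser_correction(52340, 10_000, 9_300))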

  17. An empirical study on the human error recovery failure probability when using soft controls in NPP advanced MCRs

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Jung, Wondea; Seong, Poong Hyun

    2014-01-01

    Highlights: • Many researchers have tried to understand the human recovery process or its steps. • Modeling the human recovery process is not sufficient to be applied to HRA. • The operation environment of MCRs in NPPs has changed by adopting new HSIs. • Recovery failure probability in a soft control operation environment is investigated. • Recovery failure probability here would be important evidence for expert judgment. - Abstract: It is well known that probabilistic safety assessments (PSAs) today consider not just hardware failures and environmental events that can impact upon risk, but also human error contributions. Consequently, the focus of reliability and performance management has been on the prevention of human errors and failures rather than on the recovery of human errors. However, the recovery of human errors is as important as the prevention of human errors and failures for the safe operation of nuclear power plants (NPPs). For this reason, many researchers have tried to find a human recovery process or step. However, modeling the human recovery process is not sufficient to be applied to human reliability analysis (HRA), which requires human error and recovery probabilities. In this study, therefore, human error recovery failure probabilities based on predefined human error modes were investigated by conducting experiments in the operation mockup of advanced/digital main control rooms (MCRs) in NPPs. To this end, 48 subjects majoring in nuclear engineering participated in the experiments. In the experiments, using the developed accident scenario based on tasks from the standard post trip action (SPTA) and the steam generator tube rupture (SGTR), and predominant soft control tasks derived from the loss of coolant accident (LOCA) and the excess steam demand event (ESDE), all error detection and recovery data based on human error modes were checked with the performance sheet, and the statistical analysis of error recovery/detection was then…

  18. Sudan-decoding generalized geometric Goppa codes

    DEFF Research Database (Denmark)

    Heydtmann, Agnes Eileen

    2003-01-01

    Generalized geometric Goppa codes are vector spaces of n-tuples with entries from different extension fields of a ground field. They are derived from evaluating functions similar to conventional geometric Goppa codes, but allowing evaluation in places of arbitrary degree. A decoding scheme...... for these codes based on Sudan's improved algorithm is presented and its error-correcting capacity is analyzed. For the implementation of the algorithm it is necessary that the so-called increasing zero bases of certain spaces of functions are available. A method to obtain such bases is developed....

  19. Automated evaluation of setup errors in carbon ion therapy using PET: Feasibility study

    International Nuclear Information System (INIS)

    Kuess, Peter; Hopfgartner, Johannes; Georg, Dietmar; Helmbrecht, Stephan; Fiedler, Fine; Birkfellner, Wolfgang; Enghardt, Wolfgang

    2013-01-01

    Purpose: To investigate the possibility of detecting patient mispositioning in carbon-ion therapy with particle therapy positron emission tomography (PET) in an automated, image registration based manner. Methods: Tumors in the head and neck (H and N), pelvic, lung, and brain regions were investigated. Biologically optimized carbon ion treatment plans were created with TRiP98. From these treatment plans, the reference β+-activity distributions were calculated using a Monte Carlo simulation. Setup errors were simulated by shifting or rotating the computed tomography (CT). The expected β+ activity was calculated for each plan with shifts. Finally, the reference particle therapy PET images were compared to the "shifted" β+-activity distribution simulations using Pearson's correlation coefficient (PCC). To account for different PET monitoring options, the in-beam PET was compared to three different in-room scenarios. Additionally, the dosimetric effects of the CT misalignments were investigated. Results: The automated PCC detection of patient mispositioning was possible in the investigated indications for cranio-caudal shifts of 4 mm and more, except for prostate tumors. In the rather homogeneous pelvic region, the generated β+-activity distributions of the reference and compared PET images were too much alike; thus, setup errors in this region could not be detected. Regarding lung lesions, the detection strongly depended on the exact tumor location: in the center of the lung, tumor misalignments could be detected down to 2 mm shifts, while resolving shifts of tumors close to the thoracic wall was more challenging. Rotational shifts in the H and N and lung regions of +6° and more could be detected using in-room PET and partly using in-beam PET. Comparing in-room PET to in-beam PET, no obvious trend was found. However, among the in-room scenarios a longer measurement time was found to be advantageous. Conclusions: This study scopes the use of various particle therapy…
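
    The automated comparison step can be illustrated in a few lines of Python: compute Pearson's correlation coefficient between the reference and "shifted" activity volumes and flag a setup error when it falls below a tuned threshold. The arrays below are synthetic stand-ins for the simulated PET distributions.

    # Sketch: PCC between a reference beta+ activity volume and one
    # simulated for a shifted CT; synthetic arrays stand in for the
    # Monte Carlo simulations.
    import numpy as np

    def pcc(reference: np.ndarray, shifted: np.ndarray) -> float:
        """Pearson correlation between two activity volumes of equal shape."""
        return float(np.corrcoef(reference.ravel(), shifted.ravel())[0, 1])

    rng = np.random.default_rng(1)
    ref = rng.random((32, 32, 32))
    # Emulate a one-voxel cranio-caudal shift of the activity distribution.
    shifted = np.roll(ref, shift=1, axis=2)

    score = pcc(ref, shifted)
    print(f"PCC = {score:.3f}")  # a drop below a tuned threshold flags misalignment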

  20. Practical Insights from Initial Studies Related to Human Error Analysis Project (HEAP)

    International Nuclear Information System (INIS)

    Follesoe, Knut; Kaarstad, Magnhild; Droeivoldsmo, Asgeir; Hollnagel, Erik; Kirwan, Barry

    1996-01-01

    This report presents practical insights made from an analysis of the three initial studies in the Human Error Analysis Project (HEAP), and the first study in the US NRC Staffing Project. These practical insights relate to our understanding of diagnosis in Nuclear Power Plant (NPP) emergency scenarios and, in particular, the factors that influence whether a diagnosis will succeed or fail. The insights reported here focus on three inter-related areas: (1) the diagnostic strategies and styles that have been observed in single operator and team-based studies; (2) the qualitative aspects of the key operator support systems, namely VDU interfaces, alarms, training and procedures, that have affected the outcome of diagnosis; and (3) the overall success rates of diagnosis and the error types that have been observed in the various studies. With respect to diagnosis, certain patterns have emerged from the various studies, depending on whether operators were alone or in teams, and on their familiarity with the process. Some aspects of the interface and alarm systems were found to contribute to diagnostic failures while others supported performance and recovery. Similar results were found for training and experience. Furthermore, the availability of procedures did not preclude the need for some diagnosis. With respect to HRA and PSA, it was possible to record the failure types seen in the studies, and in some cases to give crude estimates of the failure likelihood for certain scenarios. Although these insights are interim in nature, they do show the type of information that can be derived from these studies. More importantly, they clarify aspects of our understanding of diagnosis in NPP emergencies, including implications for risk assessment, operator support systems development, and for research into diagnosis in a broader range of fields than the nuclear power industry. (author)

  1. Relationship between refractive error and ocular biometrics in twin children: the Guangzhou Twin Eye Study.

    Science.gov (United States)

    Wang, Decai; Liu, Bin; Huang, Shengsong; Huang, Wenyong; He, Mingguang

    2014-09-01

    A cross-sectional study was conducted to explore the relationship between refractive error and ocular biometrics in children from the Guangzhou Twin Eye Study. Twin participants aged 7-15 years were selected from the Guangzhou Twin Eye Study. Ocular examinations included visual acuity measurement, ocular motility evaluation, autorefraction under cycloplegia, and anterior segment, media, and fundus examination. Axial length (AL), anterior chamber depth (ACD), and corneal curvature radius were measured using partial coherence laser interferometry. A multivariate linear regression model was used for statistical analysis. Twin children from Guangzhou city showed a decreased spherical equivalent with age, whereas both AL and ACD increased and corneal curvature radius remained unchanged. When adjusted for age and gender, 77% of the variance in spherical equivalent was explained by the measured variables (R2 = 0.77), indicating a close relationship between refractive error and ocular biometrics. Refractive status is largely determined by axial length as the major factor.

  2. Geometric and engineering drawing

    CERN Document Server

    Morling, K

    2010-01-01

    The new edition of this successful text describes all the geometric instructions and engineering drawing information likely to be needed by anyone preparing or interpreting drawings or designs, with plenty of exercises to practice these principles.

  3. Differential geometric structures

    CERN Document Server

    Poor, Walter A

    2007-01-01

    This introductory text defines geometric structure by specifying parallel transport in an appropriate fiber bundle, focusing on the simplest cases of linear parallel transport in a vector bundle. 1981 edition.

  4. Geometric ghosts and unitarity

    International Nuclear Information System (INIS)

    Ne'eman, Y.

    1980-09-01

    A review is given of the geometrical identification of the renormalization ghosts and the resulting derivation of Unitarity equations (BRST) for various gauges: Yang-Mills, Kalb-Ramond, and Soft-Group-Manifold

  5. An Experimental Study of Medical Error Explanations: Do Apology, Empathy, Corrective Action, and Compensation Alter Intentions and Attitudes?

    Science.gov (United States)

    Nazione, Samantha; Pace, Kristin

    2015-01-01

    Medical malpractice lawsuits are a growing problem in the United States, and there is much controversy regarding how best to address this problem. The medical error disclosure framework suggests that apologizing, expressing empathy, engaging in corrective action, and offering compensation after a medical error may improve the provider-patient relationship and ultimately help reduce the number of medical malpractice lawsuits patients bring against medical providers. This study provides an experimental examination of the medical error disclosure framework and its effect on the amount of money requested in a lawsuit, negative intentions, attitudes, and anger toward the provider after a medical error. Results suggest empathy may play a large role in producing positive outcomes after a medical error.

  6. Radiographer and radiologist perception error in reporting double contrast barium enemas: A pilot study

    International Nuclear Information System (INIS)

    Booth, Alison M.; Mannion, Richard A.J.

    2005-01-01

    Purpose: The practice of radiographers performing double contrast barium enemas (DCBE) is now widespread, and in many centres the radiographer's opinion is, at least, contributing to a dual reporting system [Bewell J, Chapman AH. Radiographer performed barium enemas - results of a survey to assess progress. Radiography 1996;2:199-205; Leslie A, Virjee JP. Detection of colorectal carcinoma on double contrast barium enema when double reporting is routinely performed: an audit of current practice. Clin Radiol 2001;57:184-7; Culpan DG, Mitchell AJ, Hughes S, Nutman M, Chapman AH. Double contrast barium enema sensitivity: a comparison of studies by radiographers and radiologists. Clin Radiol 2002;57:604-7]. To ensure this change in practice does not lead to an increase in reporting errors, this study aimed to compare the perception abilities of radiographers with those of radiologists. Methods: Three gastro-intestinal (GI) radiographers and three consultant radiologists independently reported on a selection of 50 DCBE examinations, including the level of certainty in their comments for each examination. A blinded comparison of the results with an independent 'standard report' was recorded. Results: The results demonstrate there was no significant difference in perception error for any of the levels of certainty, for single reporting, for double reading by a radiographer/radiologist or by two radiologists. Conclusions: The study shows that radiographers can perceive abnormalities on DCBE with sensitivities and specificities similar to those of radiologists. While the participants in the study may be typical of a district general hospital, the nature of the study gives it limited external validity. As a pilot, the results demonstrate that, with slight modification, the methodology could be used for a larger study.

  7. Fluid dynamic analysis and experimental study of a low radiation error temperature sensor

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jie, E-mail: yangjie396768@163.com [Key Laboratory for Aerosol-Cloud-Precipitation of China Meteorological Administration, Nanjing 210044 (China); School of Atmospheric Physics, Nanjing University of Information Science and Technology, Nanjing 210044 (China); Liu, Qingquan, E-mail: andyucd@163.com [Jiangsu Key Laboratory of Meteorological Observation and Information Processing, Nanjing 210044 (China); Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology, Nanjing 210044 (China); Dai, Wei, E-mail: daiweiilove@163.com [Key Laboratory for Aerosol-Cloud-Precipitation of China Meteorological Administration, Nanjing 210044 (China); School of Atmospheric Physics, Nanjing University of Information Science and Technology, Nanjing 210044 (China); Ding, Renhui, E-mail: drhabcd@sina.com [Jiangsu Meteorological Observation Center, Nanjing 210008 (China)

    2017-01-30

    To improve the air temperature observation accuracy, a low radiation error temperature sensor is proposed. A Computational Fluid Dynamics (CFD) method is implemented to obtain radiation errors under various environmental conditions. The low radiation error temperature sensor, a naturally ventilated radiation shield, a thermometer screen and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean radiation errors of the naturally ventilated radiation shield and the thermometer screen are 0.57 °C and 0.32 °C, respectively. In contrast, the mean radiation error of the low radiation error temperature sensor is 0.05 °C. The low radiation error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature measurement result. - Highlights: • A CFD method is applied to obtain a quantitative solution of radiation error. • A temperature sensor is proposed to minimize radiation error. • The radiation error of the temperature sensor is on the order of 0.05 °C.

  8. Asymptotic and geometrical quantization

    International Nuclear Information System (INIS)

    Karasev, M.V.; Maslov, V.P.

    1984-01-01

    The main ideas of geometric, deformation and asymptotic quantization are compared. It is shown that, on the one hand, the asymptotic approach is a direct generalization of exact geometric quantization and, on the other hand, it generates a deformation of the multiplication of symbols and of the Poisson brackets. Besides investigating the general quantization diagram, its applications to the calculation of the asymptotics of series of eigenvalues of operators possessing symmetry groups are considered.

  9. On geometrized gravitation theories

    International Nuclear Information System (INIS)

    Logunov, A.A.; Folomeshkin, V.N.

    1977-01-01

    General properties of geometrized gravitation theories are considered. Geometrization of the theory is realized only to the extent that necessarily follows from experiment (geometrization of the density of the matter Lagrangian only). For the general case, the gravitation field equations and the equations of motion for matter are formulated in different Riemann spaces. A covariant formulation of the energy-momentum conservation laws is given in an arbitrary geometrized theory. The noncovariant notion of ''pseudotensor'' is not required in formulating the conservation laws. It is shown that in the general case (i.e., when there is an explicit dependence of the matter Lagrangian density on the covariant derivatives) a symmetric energy-momentum tensor of the matter depends explicitly on the curvature tensor. Different geometrized theories that describe the known set of experimental facts are listed. The properties of one version of the quasilinear geometrized theory that describes the experimental facts are considered. In such a theory the fundamental static spherically symmetric solution has a singularity only at the coordinate origin. The theory permits the construction of a satisfactory model of the homogeneous nonstationary Universe.

  10. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
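
    A toy illustration of the comparison described above, assuming a small binary block code and minimum-distance decoding (the chapter's own decision rule may differ):

    # Comparing a received word with every codeword yields one candidate
    # error vector per codeword; minimum-distance decoding keeps the one
    # of lowest Hamming weight. Toy (5, 2) binary code for illustration.
    import numpy as np

    codewords = np.array([[0, 0, 0, 0, 0],
                          [1, 1, 1, 0, 0],
                          [0, 0, 1, 1, 1],
                          [1, 1, 0, 1, 1]])

    received = np.array([1, 1, 1, 1, 0])

    error_vectors = codewords ^ received      # candidate error vectors
    weights = error_vectors.sum(axis=1)       # Hamming weights
    best = int(np.argmin(weights))
    print("decoded codeword:  ", codewords[best])
    print("assumed error vector:", error_vectors[best])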

  11. Modeling and Experimental Study of Soft Error Propagation Based on Cellular Automaton

    OpenAIRE

    He, Wei; Wang, Yueke; Xing, Kefei; Yang, Jianwei

    2016-01-01

    Aiming to estimate SEE soft error performance of complex electronic systems, a soft error propagation model based on cellular automaton is proposed and an estimation methodology based on circuit partitioning and error propagation is presented. Simulations indicate that different fault grade jamming and different coupling factors between cells are the main parameters influencing the vulnerability of the system. Accelerated radiation experiments have been developed to determine the main paramet...

  12. A phantom-based study for assessing the error and uncertainty of a neuronavigation system

    OpenAIRE

    Natalia Izquierdo-Cifuentes; Genaro Daza-Santacoloma; Walter Serna-Serna

    2017-01-01

    This document describes a calibration protocol with the intention to introduce a guide to standardize the metrological vocabulary among manufacturers of image-guided surgery systems. Two stages were developed to measure the errors and estimate the uncertainty of a neuronavigator in different situations, on the first one it was determined a mechanical error on a virtual model of an acrylic phantom, on the second it was determined a coordinate error on the computerized axial tomography scan of ...

  13. Study of errors in absolute flux density measurements of Cassiopeia A

    International Nuclear Information System (INIS)

    Kanda, M.

    1975-10-01

    An error analysis for absolute flux density measurements of Cassiopeia A is discussed. The lower-bound quadrature-accumulation error for state-of-the-art measurements of the absolute flux density of Cas A around 7 GHz is estimated to be 1.71% for 3 sigma limits. The corresponding practicable error for the careful but not state-of-the-art measurement is estimated to be 4.46% for 3 sigma limits
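
    Quadrature accumulation combines independent error components as a root sum of squares; a minimal sketch with hypothetical component values, not the paper's actual error budget:

    # Root-sum-square (quadrature) accumulation of independent 3-sigma
    # error components, in percent; values are placeholders.
    import math

    components_percent = [1.0, 0.9, 0.8]
    total = math.sqrt(sum(c ** 2 for c in components_percent))
    print(f"quadrature-accumulated error: {total:.2f}%")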

  14. Geometrically frustrated magnetic structures of the heavy-fermion compound CePdAl studied by powder neutron diffraction

    Energy Technology Data Exchange (ETDEWEB)

    Doenni, A.; Fischer, P.; Zolliker, M. [Laboratory for Neutron Scattering, ETH Zuerich and Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Ehlers, G.; Maletta, H. [Hahn Meitner Institute Berlin, Glienicker Strasse 100, D-14092 Berlin (Germany); Kitazawa, H. [National Research Institute for Metals, Tsukuba, Ibaraki 305 (Japan)

    1996-12-09

    The heavy-fermion compound CePdAl with ZrNiAl-type crystal structure (hexagonal space group P6̄2m) was investigated by powder neutron diffraction. The triangular coordination symmetry of magnetic Ce atoms on site 3f gives rise to geometrical frustration. CePdAl orders below T_N = 2.7 K with an incommensurate antiferromagnetic propagation vector k = [1/2, 0, τ], τ ≈ 0.35, and a longitudinal sine-wave (LSW) modulated spin arrangement. Magnetically ordered moments at Ce(1) and Ce(3) coexist with frustrated disordered moments at Ce(2). The experimentally determined magnetic structure is in agreement with group-theoretical symmetry analysis considerations, calculated by the program MODY, which confirm that for Ce(2) an ordered magnetic moment parallel to the magnetically easy c-axis is forbidden by symmetry. Further low-temperature experiments give evidence for a second magnetic phase transition in CePdAl between 0.6 and 1.3 K. The magnetic structures of CePdAl are compared with those of the isostructural compound TbNiAl, where a non-zero ordered magnetic moment for the geometrically frustrated Tb(2) atoms is allowed by symmetry. (author)

  15. Prevalence and risk factors of undercorrected refractive errors among Singaporean Malay adults: the Singapore Malay Eye Study.

    Science.gov (United States)

    Rosman, Mohamad; Wong, Tien Y; Tay, Wan-Ting; Tong, Louis; Saw, Seang-Mei

    2009-08-01

    To describe the prevalence and the risk factors of undercorrected refractive error in an adult urban Malay population. This population-based, cross-sectional study was conducted in Singapore in 3280 Malay adults, aged 40 to 80 years. All individuals were examined at a centralized clinic and underwent standardized interviews and assessment of refractive errors and presenting and best corrected visual acuities. Distance presenting visual acuity was measured monocularly using a logarithm of the minimum angle of resolution (logMAR) number chart at a distance of 4 m, with the participants wearing their "walk-in" optical corrections (spectacles or contact lenses), if any. Refraction was determined by subjective refraction by trained, certified study optometrists. Best corrected visual acuity was monocularly assessed and recorded in logMAR scores using the same test protocol as was used for presenting visual acuity. Undercorrected refractive error was defined as an improvement of at least 0.2 logMAR (2 lines equivalent) in the best corrected visual acuity compared with the presenting visual acuity in the better eye. The mean age of the subjects included in our study was 58 ± 11 years, and 52% of the subjects were women. The prevalence rate of undercorrected refractive error among Singaporean Malay adults in our study (n = 3115) was 20.4% (age-standardized prevalence rate, 18.3%). More of the women had undercorrected refractive error than the men (21.8% vs. 18.8%, P = 0.04). Undercorrected refractive error was also more common in subjects older than 50 years than in subjects aged 40 to 49 years (22.6% vs. 14.3%). The prevalence of undercorrected refractive error among Singaporean Malay adults with refractive errors was higher than that among Singaporean Chinese adults with refractive errors. Undercorrected refractive error is a significant cause of correctable visual impairment among Singaporean Malay adults, affecting one in five persons.

  16. Comparison of community and hospital pharmacists' attitudes and behaviors on medication error disclosure to the patient: A pilot study.

    Science.gov (United States)

    Kim, ChungYun; Mazan, Jennifer L; Quiñones-Boex, Ana C

    To determine pharmacists' attitudes and behaviors on medication errors and their disclosure and to compare community and hospital pharmacists on such views. An online questionnaire was developed from previous studies on physicians' disclosure of errors. Questionnaire items included demographics, environment, personal experiences, and attitudes on medication errors and the disclosure process. An invitation to participate along with the link to the questionnaire was electronically distributed to members of two Illinois pharmacy associations. A follow-up reminder was sent 4 weeks after the original message. Data were collected for 3 months, and statistical analyses were performed with the use of IBM SPSS version 22.0. The overall response rate was 23.3% (n = 422). The average employed respondent was a 51-year-old white woman with a BS Pharmacy degree working in a hospital pharmacy as a clinical staff member. Regardless of practice settings, pharmacist respondents agreed that medication errors were inevitable and that a disclosure process is necessary. Respondents from community and hospital settings were further analyzed to assess any differences. Community pharmacist respondents were more likely to agree that medication errors were inevitable and that pharmacists should address the patient's emotions when disclosing an error. Community pharmacist respondents were also more likely to agree that the health care professional most closely involved with the error should disclose the error to the patient and thought that it was the pharmacists' responsibility to disclose the error. Hospital pharmacist respondents were more likely to agree that it was important to include all details in a disclosure process and more likely to disagree on putting a "positive spin" on the event. Regardless of practice setting, responding pharmacists generally agreed that errors should be disclosed to patients. There were, however, significant differences in their attitudes and behaviors.

  17. Modeling and Experimental Study of Soft Error Propagation Based on Cellular Automaton

    Directory of Open Access Journals (Sweden)

    Wei He

    2016-01-01

    Aiming to estimate SEE soft error performance of complex electronic systems, a soft error propagation model based on cellular automaton is proposed and an estimation methodology based on circuit partitioning and error propagation is presented. Simulations indicate that different fault grade jamming and different coupling factors between cells are the main parameters influencing the vulnerability of the system. Accelerated radiation experiments have been developed to determine the main parameters for raw soft error vulnerability of the module and coupling factors. Results indicate that the proposed method is feasible.
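
    A minimal cellular-automaton-style propagation sketch, assuming a probabilistic 4-neighbour coupling rule; the paper's calibrated cell model and coupling factors are not reproduced here.

    # Each cell holds a probability of being corrupted; at every step a
    # cell may be corrupted by its neighbours via a coupling factor.
    import numpy as np

    def propagate(p: np.ndarray, coupling: float, steps: int) -> np.ndarray:
        """Spread error probability to the 4-neighbours of each cell."""
        for _ in range(steps):
            neighbours = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
                          np.roll(p, 1, 1) + np.roll(p, -1, 1))
            # a cell stays corrupted, or is newly corrupted by neighbours
            p = 1 - (1 - p) * (1 - coupling * neighbours / 4).clip(0, 1)
        return p

    grid = np.zeros((16, 16))
    grid[8, 8] = 1.0                  # a single SEE-induced fault
    print(propagate(grid, coupling=0.3, steps=5).round(2))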

  18. Geometric quantization and general relativity

    International Nuclear Information System (INIS)

    Souriau, J.-M.

    1977-01-01

    The purpose of geometric quantization is to give a rigorous mathematical content to the 'correspondence principle' between classical and quantum mechanics. The main tools are borrowed on one hand from differential geometry and topology (differential manifolds, differential forms, fiber bundles, homology and cohomology, homotopy), on the other hand from analysis (functions of positive type, infinite dimensional group representations, pseudo-differential operators). Some satisfactory results have been obtained in the study of dynamical systems, but some fundamental questions are still waiting for an answer. The 'geometric quantization of fields', where some further well known difficulties arise, is still in a preliminary stage. In particular, the geometric quantization on the gravitational field is still a mere project. The situation is even more uncertain due to the fact that there is no experimental evidence of any quantum gravitational effect which could give us a hint towards what we are supposed to look for. The first level of both Quantum Theory, and General Relativity describes passive matter: influence by the field without being a source of it (first quantization and equivalence principle respectively). In both cases this is only an approximation (matter is always a source). But this approximation turns out to be the least uncertain part of the description, because on one hand the first quantization avoids the problems of renormalization and on the other hand the equivalence principle does not imply any choice of field equations (it is known that one can modify Einstein equations at short distances without changing their geometrical properties). (Auth.)

  19. Geometric Correction of PHI Hyperspectral Image without Ground Control Points

    International Nuclear Information System (INIS)

    Luan, Kuifeng; Tong, Xiaohua; Liu, Xiangfeng; Ma, Yanhua; Shu, Rong; Xu, Weiming

    2014-01-01

    Geometric correction without ground control points (GCPs) is a very important topic. Conventional airborne photogrammetry is difficult to implement in areas where the installation of GCPs is not feasible. Integrated GPS/INS systems, which provide the position and attitude of the airborne platform, are a potential solution in such areas. This paper first states the principle of geometric correction based on a combination of GPS and INS; the error of the geometric correction of the Pushbroom Hyperspectral Imager (PHI) without GCPs is then analysed, and a flight test carried out in an area of Damxung, Tibet, is reported. The experimental results showed that the error in the along-track direction was small, generally less than 1 pixel, while the maximum error in the cross-track direction was close to 2 pixels. The results show that geometric correction of PHI data without GCPs enables a variety of mapping products to be generated from airborne navigation and imagery data.
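
    The principle can be sketched as direct georeferencing: a ground point is the GPS sensor position plus the INS-rotated, range-scaled image ray. The snippet below omits boresight and lever-arm terms and uses illustrative values throughout.

    # Hedged sketch of direct georeferencing with GPS/INS.
    import numpy as np

    def rotation_matrix(roll: float, pitch: float, yaw: float) -> np.ndarray:
        """Body-to-ground rotation from INS attitude angles (radians)."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    gps_position = np.array([400_000.0, 3_300_000.0, 4500.0])  # map frame, m
    ray_image = np.array([0.01, 0.0, -1.0])   # pixel line of sight (body frame)
    scale = 4000.0                            # slant range to ground, m

    ground = gps_position + scale * rotation_matrix(0.01, -0.02, 1.5) @ ray_image
    print(ground)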

  20. Multivariate Error Covariance Estimates by Monte-Carlo Simulation for Assimilation Studies in the Pacific Ocean

    Science.gov (United States)

    Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.

    2004-01-01

    One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed. Rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case only temperature is updated using a Gaussian covariance function and in the MvOI salinity, zonal and meridional velocities as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model. Apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI an estimation of the model error statistics is made by Monte-Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of different physical variables constituting the model state vector, at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. While the performance of the…
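
    The ensemble estimate of the forecast-error covariance can be sketched in a few lines: form anomalies about the ensemble mean and take their sample covariance, whose off-diagonal blocks carry the temperature-salinity and temperature-velocity cross-covariances. The state layout and numbers below are illustrative.

    # Sketch: multivariate covariance from an ensemble of model states,
    # so a temperature innovation can also update salinity and velocity.
    import numpy as np

    rng = np.random.default_rng(2)
    n_members, n_state = 50, 8        # e.g. [T1 T2 S1 S2 U1 U2 V1 V2]
    ensemble = rng.standard_normal((n_members, n_state))

    anomalies = ensemble - ensemble.mean(axis=0)
    P = anomalies.T @ anomalies / (n_members - 1)   # sample covariance

    # Cross-covariances between the first temperature point and salinity:
    print(P[0, 2:4])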

  1. A Conceptual Design Study for the Error Field Correction Coil Power Supply in JT-60SA

    International Nuclear Information System (INIS)

    Matsukawa, M.; Shimada, K.; Yamauchi, K.; Gaio, E.; Ferro, A.; Novello, L.

    2013-01-01

    This paper describes a conceptual design study for the circuit configuration of the Error Field Correction Coil (EFCC) power supply (PS) in JT-60SA, aimed at maximizing the expected performance at reasonable cost. The EFCC consists of eighteen sector coils installed inside the vacuum vessel, six in the toroidal direction and three in the poloidal direction, each rated for 30 kA-turn. As a result, a star-point connection is proposed for each group of six EFCC coils installed cyclically in the toroidal direction, for decoupling from the poloidal field coils. In addition, a six-phase inverter capable of controlling each phase current was chosen as the PS topology to ensure higher operational flexibility at reasonable cost.

  2. Pressurized water reactor monitoring. Study of detection, diagnostic and estimation methods (least error squares and filtering)

    International Nuclear Information System (INIS)

    Gillet, M.

    1986-07-01

    This thesis presents a study of the surveillance of the ''primary coolant circuit inventory monitoring'' of a pressurized water reactor. A reference model is developed with a view to an automatic system ensuring detection and diagnosis in real time. The methods used for the present application are statistical tests and a method related to pattern recognition. The estimation of detected failures, difficult owing to the non-linearity of the problem, is treated by the least error squares method of the predictor or corrector type, and by filtering. It is in this framework that a new optimized method with superlinear convergence is developed, and that a segmented linearization of the model is introduced, with a view to multiple filtering [fr

  3. On the a priori estimation of collocation error covariance functions: a feasibility study

    DEFF Research Database (Denmark)

    Arabelos, D.N.; Forsberg, René; Tscherning, C.C.

    2007-01-01

    and the associated error covariance functions were conducted in the Arctic region north of 64 degrees latitude. The correlation between the known features of the data and the parameters (variance and correlation length) of the computed error covariance functions was estimated using multiple regression analysis...

  4. Drug administration errors in an institution for individuals with intellectual disability : an observational study

    NARCIS (Netherlands)

    van den Bemt, P M L A; Robertz, R; de Jong, A L; van Roon, E N; Leufkens, H G M

    BACKGROUND: Medication errors can result in harm, unless barriers to prevent them are present. Drug administration errors are less likely to be prevented, because they occur in the last stage of the drug distribution process. This is especially the case in non-alert patients, as patients often form

  5. The study of CD side to side error in line/space pattern caused by post-exposure bake effect

    Science.gov (United States)

    Huang, Jin; Guo, Eric; Ge, Haiming; Lu, Max; Wu, Yijun; Tian, Mingjing; Yan, Shichuan; Wang, Ran

    2016-10-01

    In semiconductor manufacturing, as the design rule has decreased, the ITRS roadmap requires crucially tighter critical dimension (CD) control. CD uniformity is one of the necessary parameters to assure good performance and reliable functionality of any integrated circuit (IC) [1] [2], and towards the advanced technology nodes it is a challenge to control CD uniformity well. The study of CD uniformity obtained by tuning the Post-Exposure Bake (PEB) and develop processes has made significant progress [3], but CD side-to-side errors in some line/space patterns are still found in practical applications, and the error has approached or exceeded the uniformity tolerance. Detailed analysis showed that, even when several developer types were used, the CD side-to-side error had no significant relationship to the develop process. In addition, it is impossible to correct the CD side-to-side error by electron-beam correction, as the error does not appear in all line/space pattern masks. In this paper the root cause of the CD side-to-side error is analyzed and the PEB module process is optimized as the main factor for improving the CD side-to-side error.

  6. Geometric phases in discrete dynamical systems

    Energy Technology Data Exchange (ETDEWEB)

    Cartwright, Julyan H.E., E-mail: julyan.cartwright@csic.es [Instituto Andaluz de Ciencias de la Tierra, CSIC–Universidad de Granada, E-18100 Armilla, Granada (Spain); Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, E-18071 Granada (Spain); Piro, Nicolas, E-mail: nicolas.piro@epfl.ch [École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne (Switzerland); Piro, Oreste, E-mail: piro@imedea.uib-csic.es [Departamento de Física, Universitat de les Illes Balears, E-07122 Palma de Mallorca (Spain); Tuval, Idan, E-mail: ituval@imedea.uib-csic.es [Mediterranean Institute for Advanced Studies, CSIC–Universitat de les Illes Balears, E-07190 Mallorca (Spain)

    2016-10-14

    In order to study the behaviour of discrete dynamical systems under adiabatic cyclic variations of their parameters, we consider discrete versions of adiabatically-rotated rotators. Parallelling the studies in continuous systems, we generalize the concept of geometric phase to discrete dynamics and investigate its presence in these rotators. For the rotated sine circle map, we demonstrate an analytical relationship between the geometric phase and the rotation number of the system. For the discrete version of the rotated rotator considered by Berry, the rotated standard map, we further explore this connection as well as the role of the geometric phase at the onset of chaos. Further into the chaotic regime, we show that the geometric phase is also related to the diffusive behaviour of the dynamical variables and the Lyapunov exponent. - Highlights: • We extend the concept of geometric phase to maps. • For the rotated sine circle map, we demonstrate an analytical relationship between the geometric phase and the rotation number. • For the rotated standard map, we explore the role of the geometric phase at the onset of chaos. • We show that the geometric phase is related to the diffusive behaviour of the dynamical variables and the Lyapunov exponent.
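
    As a small numerical companion, the rotation number of the (unrotated) sine circle map can be estimated from the average winding of its lift; the parameters below are illustrative.

    # Estimate the rotation number of the sine circle map
    #   theta -> theta + omega - (k / 2*pi) * sin(2*pi * theta)
    import math

    def rotation_number(omega: float, k: float, n: int = 100_000) -> float:
        """Average winding of the lifted sine circle map."""
        theta = 0.0
        for _ in range(n):
            theta += omega - (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)
        return theta / n

    print(rotation_number(omega=0.4, k=0.9))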

  7. Study of maintenance skill-work based on PSFs and error category

    International Nuclear Information System (INIS)

    Nagata, Manabu; Yukimachi, Takeo; Hasegawa, Toshio

    2001-01-01

    In this investigation, the skill-types of skill-work are clarified based on human error data from maintenance work at nuclear power plants. First, the causal PSFs of the errors are extracted from the data, and some of the skill-types are characterized through factor analysis. Moreover, the skill-work model is reexamined on the basis of the contents of the human error data and the error category corresponding to the data. Furthermore, integrating the tendency of the causal PSFs and the actual error categories for each skill-type, an extended skill-work model was developed with a flow-chart representation as a tentative stage of the investigation. (author)

  8. Free edge effects study in laminated composites using Digital Image Correlation: effect of material and geometrical singularities

    Directory of Open Access Journals (Sweden)

    Brieu M.

    2010-06-01

    Composite materials are widely used today for various industrial applications. However, delamination at free edges, where stress gradients are strong, still remains a problem. With the aim of better understanding such phenomena, Digital Image Correlation (DIC) measurements have been carried out on [(15n/-15n)2]s laminates under uniaxial tensile strain. Three different composites with different mechanical properties and microstructures have been tested, as well as two sample geometries: flat and with ply drop. Experimental results show high shear strain concentrations near the 15°/-15° interlaminar interfaces at free edges, which depend on the material's mechanical properties and microstructure and which increase in the vicinity of a geometrical singularity.

  9. Using a site-specific technical error to establish training responsiveness: a preliminary explorative study.

    Science.gov (United States)

    Weatherwax, Ryan M; Harris, Nigel K; Kilding, Andrew E; Dalleck, Lance C

    2018-01-01

    Even though cardiorespiratory fitness (CRF) training elicits numerous health benefits, not all individuals have positive training responses following a structured CRF intervention. It has been suggested that the technical error (TE), a combination of biological variability and measurement error, should be used to establish specific training responsiveness criteria to gain further insight on the effectiveness of the training program. To date, most training interventions use an absolute change or a TE from previous findings, which do not take into consideration the training site and equipment used to establish training outcomes or the specific cohort being evaluated. The purpose of this investigation was to retrospectively analyze training responsiveness of two CRF training interventions using two common criteria and a site-specific TE. Sixteen men and women completed two maximal graded exercise tests and verification bouts to identify maximal oxygen consumption (VO2max) and establish a site-specific TE. The TE was then used to retrospectively analyze training responsiveness in comparison to commonly used criteria: percent change of >0% and >+5.6% in VO2max. The TE was found to be 7.7% for relative VO2max. χ2 testing showed significant differences in all training criteria for each intervention and pooled data from both interventions, except between %Δ >0 and %Δ >+7.7% in one of the investigations. Training nonresponsiveness ranged from 11.5% to 34.6%. Findings from the present study support the utility of a site-specific TE criterion to quantify training responsiveness. A similar methodology of establishing a site-specific and even cohort-specific TE should be considered to establish when true cardiorespiratory training adaptations occur.
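
    A site-specific TE is commonly derived from test-retest data as the standard deviation of the difference scores divided by the square root of 2, expressed as a percentage of the grand mean. Assuming that convention, a sketch of the computation and the responder rule follows; the data are synthetic, whereas the study's TE was 7.7% for relative VO2max.

    # Sketch: site-specific technical error from repeated VO2max tests,
    # then classification of a training response against it.
    import numpy as np

    test1 = np.array([41.2, 38.5, 45.0, 50.3, 36.8])   # ml/kg/min
    test2 = np.array([42.0, 37.9, 46.1, 49.5, 37.5])

    diff = test2 - test1
    grand_mean = np.concatenate([test1, test2]).mean()
    te_percent = diff.std(ddof=1) / np.sqrt(2) / grand_mean * 100
    print(f"TE = {te_percent:.1f}%")

    # A change counts as a true response only if it exceeds the TE.
    pre, post = 40.0, 43.5
    print("responder:", (post - pre) / pre * 100 > te_percent)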

  10. Modern Geometric Methods of Distance Determination

    Science.gov (United States)

    Thévenin, Frédéric; Falanga, Maurizio; Kuo, Cheng Yu; Pietrzyński, Grzegorz; Yamaguchi, Masaki

    2017-11-01

    Building a 3D picture of the Universe at any distance is one of the major challenges in astronomy, from the nearby Solar System to distant quasars and galaxies. This goal has forced astronomers to develop techniques to estimate or to measure the distance of point sources on the sky. While most distance estimates used since the beginning of the 20th century are based on our understanding of the physics of objects of the Universe (stars, galaxies, QSOs), direct measures of distance are based on the geometric methods developed in ancient Greece: the parallax, first applied to stars in the mid-19th century. In this review, different techniques of geometrical astrometry applied to various stellar and cosmological (megamaser) objects are presented. They consist of parallax measurements from ground-based equipment or from space missions, but also of the study of binary stars or, as we shall see, of binary systems in distant extragalactic sources using radio telescopes. The Gaia mission will be presented in the context of stellar physics and galactic structure, because this key space mission in astronomy will bring a breakthrough in our understanding of stars, galaxies and the Universe, in their nature and evolution with time. Measuring the distance to a star is the starting point for an unbiased description of its physics and for the estimate of its fundamental parameters, such as its age. Applying these studies to candles such as the Cepheids will impact our large-distance studies and the calibration of other candles. The text is constructed as follows: after introducing the parallax concept and its measurement, we briefly present the Gaia satellite, which will provide the base catalogue of stellar astronomy in the near future. Cepheids are discussed just after, to demonstrate the state of the art in distance measurements in the Universe with these variable stars, with the objective of 1% error in distances that could be applied to our closest

  11. Immagini e Concetti in Geometria=The Figural and the Conceptual Components of Geometrical Concepts.

    Science.gov (United States)

    Mariotti, Maria Alessandra

    1992-01-01

    Discusses geometrical reasoning in the framework of the theory of Figural Concepts to highlight the interaction between the figural and conceptual components of geometrical concepts. Examples of students' difficulties and errors in geometrical reasoning are interpreted according to the internal tension that appears in figural concepts resulting…

  12. Medication errors with the use of allopurinol and colchicine: A retrospective study of a national, anonymous Internet-accessible error reporting system

    NARCIS (Netherlands)

    Mikuls, TR; Curtis; Allison, JJ; Hicks, RW; Saag, KG

    Objectives. To more closely assess medication errors in gout care, we examined data from a national, Internet-accessible error reporting program over a 5-year reporting period. Methods. We examined data from the MEDMARX (TM) database, covering the period from January 1, 1999 through December 31,

  13. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  14. Geometrical optical illusionists.

    Science.gov (United States)

    Wade, Nicholas J

    2014-01-01

    Geometrical optical illusions were given this title by Oppel in 1855. Variants on such small distortions of visual space were illustrated thereafter, many of which bear the names of those who first described them. Some original forms of the geometrical optical illusions are shown together with 'perceptual portraits' of those who described them. These include: Roget, Chevreul, Fick, Zöllner, Poggendorff, Hering, Kundt, Delboeuf, Mach, Helmholtz, Hermann, von Bezold, Müller-Lyer, Lipps, Thiéry, Wundt, Münsterberg, Ebbinghaus, Titchener, Ponzo, Luckiesh, Sander, Ehrenstein, Gregory, Heard, White, Shepard, and Lingelbach. The illusions are grouped under the headings of orientation, size, the combination of size and orientation, and contrast. Early theories of illusions, before geometrical optical illusions were so named, are mentioned briefly.

  15. (How) do we learn from errors? A prospective study of the link between the ward's learning practices and medication administration errors.

    Science.gov (United States)

    Drach-Zahavy, A; Somech, A; Admi, H; Peterfreund, I; Peker, H; Priente, O

    2014-03-01

    Attention in the ward should shift from preventing medication administration errors to managing them. Nevertheless, little is known about the practices nursing wards apply to learn from medication administration errors as a means of limiting them. The aim was to test the effectiveness of four types of learning practices, namely non-integrated, integrated, supervisory and patchy learning practices, in limiting medication administration errors. Data were collected from a convenience sample of 4 hospitals in Israel by multiple methods (observations and self-report questionnaires) at two time points. The sample included 76 wards (360 nurses). Medication administration error was defined as any deviation from prescribed medication processes and was measured by a validated structured observation sheet. Wards' use of medication administration technologies, location of the medication station, and workload were observed; learning practices and demographics were measured by validated questionnaires. Results of the mixed linear model analysis indicated that the use of technology and a quiet location of the medication cabinet were significantly associated with reduced medication administration errors (estimate=.03, p<.05), [...] errors (estimate=.04, p<.05). Of the learning practices, supervisory learning was the only practice significantly linked to reduced medication administration errors (estimate=-.04, p<.05); [...] learning practices were significantly linked to higher levels of medication administration errors (estimate=-.03, p<.05), while [...] learning was not associated with them (p>.05). How wards manage errors might have implications for medication administration errors beyond the effects of typical individual, organizational and technology risk factors. Head nurses can facilitate learning from errors by "management by walking around" and by monitoring nurses' medication administration behaviors. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Study of the Switching Errors in an RSFQ Switch by Using a Computerized Test Setup

    International Nuclear Information System (INIS)

    Kim, Se Hoon; Baek, Seung Hun; Yang, Jung Kuk; Kim, Jun Ho; Kang, Joon Hee

    2005-01-01

    The problem of fluctuation-induced digital errors in rapid single flux quantum (RSFQ) circuits has been a very important issue. In this work, we calculated the bit error rate of an RSFQ switch used in a superconductive arithmetic logic unit (ALU). An RSFQ switch should have a very low error rate at the optimal bias. Theoretical estimates of the RSFQ error rate are on the order of 10^-50 per bit operation. In this experiment, we prepared two identical circuits placed in parallel. Each circuit was composed of 10 Josephson transmission lines (JTLs) connected in series, with an RSFQ switch placed in the middle of the 10 JTLs. We used a splitter to feed the same input signal to both circuits. The outputs of the two circuits were compared with an RSFQ exclusive OR (XOR) to measure the bit error rate of the RSFQ switch. Using a computerized bit-error-rate test setup, we measured a bit error rate of 2.18 x 10^-12 when the bias to the RSFQ switch was 0.398 mA, which was quite far from the optimal bias of 0.6 mA.
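    A hedged sketch of the arithmetic behind such a computerized bit-error-rate test: the point estimate is simply errors divided by bits compared, and an error-free run bounds the BER from above by the familiar "rule of 3". The comparison rate and duration below are assumptions for illustration, not the paper's test parameters.

        import math

        def ber_point(errors, bits):
            # Point estimate from an XOR comparator: observed errors per bit compared
            return errors / bits

        def ber_upper(bits, confidence=0.95):
            # Upper bound when zero errors are seen: BER < -ln(1 - C) / N ~ 3 / N
            return -math.log(1.0 - confidence) / bits

        clock_hz = 10e9      # assumed comparison rate (not from the paper)
        seconds = 140.0      # assumed test duration
        bits = clock_hz * seconds
        print(f"point estimate for 3 observed errors: {ber_point(3, bits):.2e}")
        print(f"95% upper bound after an error-free run: {ber_upper(bits):.2e}")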

  17. Consanguinity, endogamy and inborn errors of metabolism in Oman: a cross-sectional study.

    Science.gov (United States)

    Al-Thihli, Khalid; Al-Murshedi, Fathiya; Al-Hashmi, Nadia; Al-Mamari, Watfa; Islam, M Mazharul; Al-Yahyaee, Said A

    2014-01-01

    The Sultanate of Oman, like many other Arab countries, has relatively high rates of consanguinity. Reports suggest that the incidence of inborn errors of metabolism (IEM) is also high in Oman. This retrospective cross-sectional study was designed to evaluate the number of patients with IEM being followed at the only two tertiary centers in Oman treating such patients, and to calculate the consanguinity rates among these families. The electronic medical records of all patients were reviewed for demographic and clinical characteristics. A total of 285 patients with IEM were being followed at the 2 centers involved; 162 (56.8%) were male and 123 (43.2%) were female. The history of consanguinity was documented or available for 241 patients: 229 patients (95%) were born to consanguineous parents related as second cousins or closer. First-cousin marriages were reported in 191 families (79.3%), while 31 patients (12.9%) were born to second cousins. The parents of 5 patients (2%) were related as double first cousins, and 2 patients (1%) were born to first cousins once removed. The average coefficient of inbreeding (F) in our study was 0.081. Seventeen patients (6%) had associated comorbid conditions other than IEM. Our study highlights the clinical burden of IEM in Oman and emphasizes the high consanguinity rates among the parents of affected patients. © 2014 S. Karger AG, Basel
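    The reported average coefficient of inbreeding can be cross-checked against standard single-loop kinship values (first cousins F = 1/16, second cousins 1/64, double first cousins 1/8, first cousins once removed 1/32) weighted by the family counts given in the abstract. This is only an illustrative lower bound, since additional pedigree loops in an endogamous population raise F above the single-loop value.

        # Standard single-loop inbreeding coefficients for the offspring of:
        F_VALUES = {
            "first cousins": 1 / 16,
            "second cousins": 1 / 64,
            "double first cousins": 1 / 8,
            "first cousins once removed": 1 / 32,
        }

        # Family counts as reported in the abstract (229 consanguineous unions)
        counts = {
            "first cousins": 191,
            "second cousins": 31,
            "double first cousins": 5,
            "first cousins once removed": 2,
        }

        n = sum(counts.values())
        mean_F = sum(F_VALUES[k] * c for k, c in counts.items()) / n
        print(f"n = {n}, single-loop mean F = {mean_F:.3f}")  # ~0.057

    The single-loop average comes to roughly 0.057, below the reported 0.081, which is consistent with multiple loops of consanguinity in these families.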

  18. Initial singularity and pure geometric field theories

    Science.gov (United States)

    Wanas, M. I.; Kamal, Mona M.; Dabash, Tahia F.

    2018-01-01

    In the present article we use a modified version of the geodesic equation, together with a modified version of the Raychaudhuri equation, to study initial singularities. These modified equations are used to account for the effect of the spin-torsion interaction on the existence of initial singularities in cosmological models. Such models are the results of solutions of the field equations of a class of field theories termed pure geometric. The geometric structure used in this study is an absolute parallelism structure satisfying the cosmological principle. It is shown that the existence of initial singularities is subject to some mathematical (geometric) conditions. The scheme suggested for this study can be easily generalized.

  19. Modeling misidentification errors that result from use of genetic tags in capture-recapture studies

    Science.gov (United States)

    Yoshizaki, J.; Brownie, C.; Pollock, K.H.; Link, W.A.

    2011-01-01

    Misidentification of animals is potentially important when naturally existing features (natural tags) such as DNA fingerprints (genetic tags) are used to identify individual animals. For example, when misidentification leads to multiple identities being assigned to an animal, traditional estimators tend to overestimate population size. Accounting for misidentification in capture-recapture models requires detailed understanding of the mechanism. Using genetic tags as an example, we outline a framework for modeling the effect of misidentification in closed population studies when individual identification is based on natural tags that are consistent over time (non-evolving natural tags). We first assume a single sample is obtained per animal for each capture event, and then generalize to the case where multiple samples (such as hair or scat samples) are collected per animal per capture occasion. We introduce methods for estimating population size and, using a simulation study, we show that our new estimators perform well for cases with moderately high capture probabilities or high misidentification rates. In contrast, conventional estimators can seriously overestimate population size when errors due to misidentification are ignored. © 2009 Springer Science+Business Media, LLC.
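    A small simulation, under assumed parameters, illustrates the overestimation mechanism described here: "ghost" identities created by misreading genotypes deflate the recapture count and inflate the conventional two-occasion Lincoln-Petersen estimate. This is the textbook baseline estimator, not the authors' corrected one.

        import random

        def simulate(N=200, p=0.4, alpha=0.1, reps=1000, seed=1):
            """Two-occasion closed-population study. With probability alpha a
            captured animal's genotype is misread, creating a ghost identity.
            Returns the mean Lincoln-Petersen estimate of N."""
            rng = random.Random(seed)
            estimates = []
            for _ in range(reps):
                ids = [[], []]          # identities recorded on each occasion
                ghost = N               # next unused ghost id
                for occ in range(2):
                    for animal in range(N):
                        if rng.random() < p:
                            if rng.random() < alpha:
                                ids[occ].append(ghost)   # misread -> ghost id
                                ghost += 1
                            else:
                                ids[occ].append(animal)  # correct identity
                n1, n2 = len(ids[0]), len(ids[1])
                m = len(set(ids[0]) & set(ids[1]))       # matched recaptures
                if m:
                    estimates.append(n1 * n2 / m)
            return sum(estimates) / len(estimates)

        print(simulate(alpha=0.0))  # ~200: roughly unbiased without misreads
        print(simulate(alpha=0.1))  # well above 200: ghosts inflate N-hat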

  20. Study of dosimetry errors in the framework of a concerted international study about the risk of cancer in nuclear industry workers. Study of the errors made on dose estimations of 100 to 3000 keV photons

    International Nuclear Information System (INIS)

    Thierry Chef, I.

    2000-01-01

    Ionizing radiation is an established cancer risk factor, and radioprotection standards are defined on the basis of epidemiological studies of persons exposed to high doses of radiation (atomic bombs and therapeutic medical exposures). An epidemiological study of cancer risk has been carried out on nuclear industry workers from 17 countries in order to check these standards and to evaluate directly the risk linked with long-duration exposures to low doses. The techniques used to measure the workers' doses have changed over time, and these changes have differed among the countries considered. The study of dosimetry errors aims at assessing the comparability of the doses across periods of time and across countries, at quantifying the errors that could have distorted the dose measurements during the first years, and at taking these errors into account in the risk estimation. A compilation of the information available about dosimetry in the participating countries has been performed and the main sources of errors have been identified. Experiments have been carried out to test the response of the dosimeters used and to evaluate the conditions of exposure inside the companies. The biases and uncertainties have been estimated per company and per period of time; the most important correspond to the oldest measurements performed. This study also contributes to improving the knowledge of the working conditions and of the precision of dose estimates. (J.S.)

  1. Geometric Liouville gravity

    International Nuclear Information System (INIS)

    La, H.

    1992-01-01

    A new geometric formulation of Liouville gravity based on the area-preserving diffeomorphism is given, and a possible alternative to reinterpret Liouville gravity is suggested, namely, a scalar field coupled to two-dimensional gravity with a curvature constraint.

  2. A Geometric Dissection Problem

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 7, Issue 7, July 2002, pp. 91-91. Section: Think It Over. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/07/0091-0091

  3. Geometric statistical inference

    International Nuclear Information System (INIS)

    Periwal, Vipul

    1999-01-01

    A reparametrization-covariant formulation of the inverse problem of probability is explicitly solved for finite sample sizes. The inferred distribution is explicitly continuous for finite sample size. A geometric solution of the statistical inference problem in higher dimensions is outlined.

  4. Geometric Series via Probability

    Science.gov (United States)

    Tesman, Barry

    2012-01-01

    Infinite series is a challenging topic in the undergraduate mathematics curriculum for many students. In fact, there is a vast literature in mathematics education research on convergence issues. One of the most important types of infinite series is the geometric series. Their beauty lies in the fact that they can be evaluated explicitly and that…

  5. Structure-preserving geometric algorithms for plasma physics and beam physics

    Science.gov (United States)

    Qin, Hong

    2017-10-01

    Standard algorithms in plasma physics and beam physics do not possess the long-term accuracy and fidelity required in the study of multi-scale dynamics, because they do not preserve the geometric structures of the physical systems, such as local energy-momentum conservation, the symplectic structure, and gauge symmetry. As a result, numerical errors accumulate coherently with time and long-term simulation results are not reliable. To overcome this difficulty, structure-preserving geometric algorithms have been developed since 2008. This new generation of algorithms utilizes advanced techniques, such as interpolating differential forms, canonical and non-canonical symplectic integrators, and finite element exterior calculus, to guarantee gauge symmetry and charge conservation, and the conservation of energy-momentum and symplectic structure. It is our vision that future numerical capabilities in plasma physics and beam physics will be based on structure-preserving geometric algorithms.
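    A toy contrast shows why structure preservation matters for long-term fidelity: on a harmonic oscillator, the non-symplectic forward Euler method lets the energy error accumulate coherently, while the symplectic (semi-implicit) Euler method keeps it bounded over arbitrarily long runs. This is only a one-degree-of-freedom caricature of the field-theoretic algorithms described above.

        def energy(q, p):
            # Harmonic oscillator with m = k = 1: H = (p^2 + q^2) / 2
            return 0.5 * (p * p + q * q)

        def integrate(symplectic, steps=100000, dt=0.05):
            q, p = 1.0, 0.0
            for _ in range(steps):
                if symplectic:            # semi-implicit (symplectic) Euler
                    p -= dt * q
                    q += dt * p
                else:                     # forward Euler, not structure-preserving
                    q_new = q + dt * p
                    p_new = p - dt * q
                    q, p = q_new, p_new
            return energy(q, p)

        print(f"initial energy: {energy(1.0, 0.0):.3f}")        # 0.500
        print(f"forward Euler, 1e5 steps:   {integrate(False):.3e}")  # grows huge
        print(f"symplectic Euler, 1e5 steps: {integrate(True):.3f}")  # stays ~0.5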

  6. Sleep quality, posttraumatic stress, depression, and human errors in train drivers: a population-based nationwide study in South Korea.

    Science.gov (United States)

    Jeon, Hong Jin; Kim, Ji-Hae; Kim, Bin-Na; Park, Seung Jin; Fava, Maurizio; Mischoulon, David; Kang, Eun-Ho; Roh, Sungwon; Lee, Dongsoo

    2014-12-01

    Human error is defined as an unintended error that is attributable to humans rather than machines, and that is important to avoid in order to prevent accidents. We aimed to investigate the association between sleep quality and human errors among train drivers. Design: cross-sectional. Setting: population-based. Participants: a sample of 5,480 subjects who were actively working as train drivers were recruited in South Korea; 4,634 drivers completed all questionnaires (response rate 84.6%). Interventions: none. Measurements: the Pittsburgh Sleep Quality Index (PSQI), the Center for Epidemiologic Studies Depression Scale (CES-D), the Impact of Event Scale-Revised (IES-R), the State-Trait Anxiety Inventory (STAI), and the Korean Occupational Stress Scale (KOSS). Of 4,634 train drivers, 349 (7.5%) reported more than one human error per 5 years. Human errors were associated with poor sleep quality, higher PSQI total scores, short sleep duration at night, and longer sleep latency. Among train drivers with poor sleep quality, those who had experienced severe posttraumatic stress showed a significantly higher number of human errors than those without. Multiple logistic regression analysis showed that human errors were significantly associated with poor sleep quality and posttraumatic stress, whereas there were no significant associations with depression, trait and state anxiety, or work stress after adjusting for age, sex, education years, marital status, and career duration. Poor sleep quality was found to be associated with more human errors in train drivers, especially in those who had experienced severe posttraumatic stress. © 2014 Associated Professional Sleep Societies, LLC.

  7. A study on the flow field and local heat transfer performance due to geometric scaling of centrifugal fans

    International Nuclear Information System (INIS)

    Stafford, Jason; Walsh, Ed; Egan, Vanessa

    2011-01-01

    Highlights: ► Velocity field and local heat transfer trends of centrifugal fans. ► Time-averaged vortices are generated by flow separation. ► Local vortex and impingement regions are evident on surface heat transfer maps. ► Miniature centrifugal fans should be designed with an aspect ratio below 0.3. ► Theory underpredicts heat transfer due to complex, unsteady outlet flow. - Abstract: Scaled versions of fan designs are often chosen to address thermal management issues in space-constrained applications. Using velocity field and local heat transfer measurement techniques, the thermal performance characteristics of a range of geometrically scaled centrifugal fan designs have been investigated. Complex fluid flow structures and surface heat transfer trends due to centrifugal fans were found to be common over a wide range of fan aspect ratios (blade height to fan diameter). The limiting aspect ratio for heat transfer enhancement was 0.3, as larger aspect ratios were shown to result in a reduction in overall thermal performance. Over the range of fans examined, the low profile centrifugal designs produced significant enhancement in thermal performance when compared to that predicted using classical laminar flow theory. The limiting non-dimensional distance from the fan, where this enhancement is no longer apparent, has also been determined. Using the fundamental information inferred from local velocity field and heat transfer measurements, selection criteria can be determined for both low and high power practical applications where space restrictions exist.

  8. Molecular dynamics study on the thermal conductivity and thermal rectification in graphene with geometric variations of doped boron

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Qi, E-mail: alfred_02030210@163.com; Wei, Yuan

    2014-03-15

    Thermal conductivity and thermal rectification in graphene with geometric variations of doped boron have been investigated by using classical non-equilibrium molecular dynamics simulation, and the causes of the changes in thermal conductivity and thermal rectification were analyzed theoretically. Two different structural models, triangular single-boron-doped graphene (SBDG) and parallel various-boron-doped graphene (VBDG), were considered. The results indicated that the thermal conductivities of the two models are about 54–63% lower than that of pristine graphene. It was also found that the parallel various-boron-doped structure inhibits heat transfer more strongly than the triangular single-boron-doped structure. The reduction in the thermal conductivities of the two models gradually decreases as the temperature rises. The thermal conductivities of triangular boron-doped graphene differ considerably between the two directions, and the thermal rectification of this structure shows a downward trend with increasing temperature. In contrast, the thermal conductivities of parallel various-boron-doped graphene are similar in both directions, and the thermal rectification effect is not obvious in this structure. The phenomenon of thermal rectification exists in SBDG. This implies that SBDG might be a promising structure for thermal rectifiers, achieved by controlling the boron-doping pattern.

  9. Molecular dynamics study on the thermal conductivity and thermal rectification in graphene with geometric variations of doped boron

    International Nuclear Information System (INIS)

    Liang, Qi; Wei, Yuan

    2014-01-01

    Thermal conductivity and thermal rectification in graphene with geometric variations of doped boron have been investigated by using classical non-equilibrium molecular dynamics simulation, and the causes of the changes in thermal conductivity and thermal rectification were analyzed theoretically. Two different structural models, triangular single-boron-doped graphene (SBDG) and parallel various-boron-doped graphene (VBDG), were considered. The results indicated that the thermal conductivities of the two models are about 54–63% lower than that of pristine graphene. It was also found that the parallel various-boron-doped structure inhibits heat transfer more strongly than the triangular single-boron-doped structure. The reduction in the thermal conductivities of the two models gradually decreases as the temperature rises. The thermal conductivities of triangular boron-doped graphene differ considerably between the two directions, and the thermal rectification of this structure shows a downward trend with increasing temperature. In contrast, the thermal conductivities of parallel various-boron-doped graphene are similar in both directions, and the thermal rectification effect is not obvious in this structure. The phenomenon of thermal rectification exists in SBDG. This implies that SBDG might be a promising structure for thermal rectifiers, achieved by controlling the boron-doping pattern.

  10. FIASCO II failure to achieve a satisfactory cardiac outcome study: the elimination of system errors.

    Science.gov (United States)

    Farid, Shakil; Page, Aravinda; Jenkins, David; Jones, Mark T; Freed, Darren; Nashef, Samer A M

    2013-07-01

    Death in low-risk cardiac surgical patients provides a simple and accessible method by which modifiable causes of death can be identified. In the first FIASCO study published in 2009, local potentially modifiable causes of preventable death in low-risk patients with a logistic EuroSCORE of 0-2 undergoing cardiac surgery were inadequate myocardial protection and lack of clarity in the chain of responsibility. As a result, myocardial protection was improved, and a formalized system introduced to ensure clarity of the chain of responsibility in the care of all cardiac surgical patients. The purpose of the current study was to re-audit outcomes in low-risk patients to see if improvements have been achieved. Patients with a logistic EuroSCORE of 0-2 who had cardiac surgery from January 2006 to August 2012 were included. Data were prospectively collected and retrospectively analysed. The case notes of patients who died in hospital were subject to internal and external review and classified according to preventability. Two thousand five hundred and forty-nine patients with a logistic EuroSCORE of 0-2 underwent cardiac surgery during the study period. Seven deaths occurred in truly low-risk patients, giving a mortality of 0.27%. Of the seven, three were considered preventable and four non-preventable. Mortality was marginally lower than in our previous study (0.37%), and no death occurred as a result of inadequate myocardial protection or communication failures. We postulate that the regular study of such events in all institutions may unmask systemic errors that can be remedied to prevent or reduce future occurrences. We encourage all units to use this methodology to detect any similarly modifiable factors in their practice.

  11. Recent study, but not retrieval, of knowledge protects against learning errors.

    Science.gov (United States)

    Mullet, Hillary G; Umanath, Sharda; Marsh, Elizabeth J

    2014-11-01

    Surprisingly, people incorporate errors into their knowledge bases even when they have the correct knowledge stored in memory (e.g., Fazio, Barber, Rajaram, Ornstein, & Marsh, 2013). We examined whether heightening the accessibility of correct knowledge would protect people from later reproducing misleading information that they encountered in fictional stories. In Experiment 1, participants studied a series of target general knowledge questions and their correct answers either a few minutes (high accessibility of knowledge) or 1 week (low accessibility of knowledge) before exposure to misleading story references. In Experiments 2a and 2b, participants instead retrieved the answers to the target general knowledge questions either a few minutes or 1 week before the rest of the experiment. Reading the relevant knowledge directly before the story-reading phase protected against reproduction of the misleading story answers on a later general knowledge test, but retrieving that same correct information did not. Retrieving stored knowledge from memory might actually enhance the encoding of relevant misinformation.

  12. Task errors by emergency physicians are associated with interruptions, multitasking, fatigue and working memory capacity: a prospective, direct observation study.

    Science.gov (United States)

    Westbrook, Johanna I; Raban, Magdalena Z; Walter, Scott R; Douglas, Heather

    2018-01-09

    Interruptions and multitasking have been demonstrated in experimental studies to reduce individuals' task performance. These behaviours are frequently used by clinicians in high-workload, dynamic clinical environments, yet their effects have rarely been studied. To assess the relative contributions of interruptions and multitasking by emergency physicians to prescribing errors. 36 emergency physicians were shadowed over 120 hours. All tasks, interruptions and instances of multitasking were recorded. Physicians' working memory capacity (WMC) and preference for multitasking were assessed using the Operation Span Task (OSPAN) and Inventory of Polychronic Values. Following observation, physicians were asked about their sleep in the previous 24 hours. Prescribing errors were used as a measure of task performance. We performed multivariate analysis of prescribing error rates to determine associations with interruptions and multitasking, also considering physician seniority, age, psychometric measures, workload and sleep. Physicians experienced 7.9 interruptions/hour. 28 clinicians were observed prescribing 239 medication orders which contained 208 prescribing errors. While prescribing, clinicians were interrupted 9.4 times/hour. Error rates increased significantly if physicians were interrupted (rate ratio (RR) 2.82; 95% CI 1.23 to 6.49) or multitasked (RR 1.86; 95% CI 1.35 to 2.56) while prescribing. Having below-average sleep showed a >15-fold increase in clinical error rate (RR 16.44; 95% CI 4.84 to 55.81). WMC was protective against errors; for every 10-point increase on the 75-point OSPAN, a 19% decrease in prescribing errors was observed. There was no effect of polychronicity, workload, physician gender or above-average sleep on error rates. Interruptions, multitasking and poor sleep were associated with significantly increased rates of prescribing errors among emergency physicians. WMC mitigated the negative influence of these factors to an extent. These

  13. Ordinary least square regression, orthogonal regression, geometric mean regression and their applications in aerosol science

    International Nuclear Information System (INIS)

    Leng Ling; Zhang Tianyi; Kleinman, Lawrence; Zhu Wei

    2007-01-01

    Regression analysis, especially the ordinary least squares method which assumes that errors are confined to the dependent variable, has seen a fair share of its applications in aerosol science. The ordinary least squares approach, however, could be problematic due to the fact that atmospheric data often does not lend itself to calling one variable independent and the other dependent. Errors often exist for both measurements. In this work, we examine two regression approaches available to accommodate this situation. They are orthogonal regression and geometric mean regression. Comparisons are made theoretically as well as numerically through an aerosol study examining whether the ratio of organic aerosol to CO would change with age.
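    A sketch comparing the three slope estimators on simulated data with errors in both variables (an assumed setup, not the paper's aerosol data): OLS attenuates the slope, while orthogonal regression (exact here because the error variances are equal) and geometric mean regression stay closer to the truth.

        import numpy as np

        def slopes(x, y):
            x, y = np.asarray(x, float), np.asarray(y, float)
            sxx = np.var(x, ddof=1)
            syy = np.var(y, ddof=1)
            sxy = np.cov(x, y, ddof=1)[0, 1]
            ols = sxy / sxx                              # errors in y only
            gmr = np.sign(sxy) * np.sqrt(syy / sxx)      # geometric mean regression
            orth = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
            return ols, orth, gmr

        rng = np.random.default_rng(0)
        truth = rng.normal(0, 1, 500)
        x = truth + rng.normal(0, 0.5, 500)    # predictor measured with error
        y = 2 * truth + rng.normal(0, 0.5, 500)
        print(slopes(x, y))  # OLS attenuated below 2; orthogonal and GMR closer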

  14. A study of the effect of measurement error in predictor variables in nondestructive assay

    International Nuclear Information System (INIS)

    Burr, Tom L.; Knepper, Paula L.

    2000-01-01

    It is not widely known that ordinary least squares estimates exhibit bias if there are errors in the predictor variables. For example, enrichment measurements are often fit to two predictors: Poisson-distributed count rates in the region of interest and in the background. Both count rates have at least random variation due to counting statistics. Therefore, the parameter estimates will be biased. In this case, the effect of bias is a minor issue because there is almost no interest in the parameters themselves. Instead, the parameters will be used to convert count rates into estimated enrichment. In other cases, this bias source is potentially more important. For example, in tomographic gamma scanning, there is an emission stage which depends on predictors (the 'system matrix') that are estimated with error during the transmission stage. In this paper, we provide background information for the impact and treatment of errors in predictors, present results of candidate methods of compensating for the effect, review some of the nondestructive assay situations where errors in predictors occur, and provide guidance for when errors in predictors should be considered in nondestructive assay
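    A minimal simulation of the effect described here, with assumed numbers: when a Poisson count is observed in place of the true rate, the ordinary least squares slope is attenuated toward zero by the ratio of signal variance to total predictor variance.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 20000
        true_rate = rng.uniform(5, 50, n)           # true count rate in the ROI
        observed = rng.poisson(true_rate)           # predictor carries Poisson error
        response = 0.02 * true_rate + rng.normal(0, 0.1, n)  # linear response

        # OLS of the response on the error-laden predictor
        b = np.polyfit(observed, response, 1)[0]
        print(f"true slope 0.0200, OLS estimate {b:.4f}")  # attenuated toward zero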

  15. Shared and unshared exposure measurement error in occupational cohort studies and their effects on statistical inference in proportional hazards models

    Science.gov (United States)

    Laurier, Dominique; Rage, Estelle

    2018-01-01

    Exposure measurement error represents one of the most important sources of uncertainty in epidemiology. When exposure uncertainty is not or only poorly accounted for, it can lead to biased risk estimates and a distortion of the shape of the exposure-response relationship. In occupational cohort studies, the time-dependent nature of exposure and changes in the method of exposure assessment may create complex error structures. When a method of group-level exposure assessment is used, individual worker practices and the imprecision of the instrument used to measure the average exposure for a group of workers may give rise to errors that are shared between workers, within workers or both. In contrast to unshared measurement error, the effects of shared errors remain largely unknown. Moreover, exposure uncertainty and magnitude of exposure are typically highest for the earliest years of exposure. We conduct a simulation study based on exposure data of the French cohort of uranium miners to compare the effects of shared and unshared exposure uncertainty on risk estimation and on the shape of the exposure-response curve in proportional hazards models. Our results indicate that uncertainty components shared within workers cause more bias in risk estimation and a more severe attenuation of the exposure-response relationship than unshared exposure uncertainty or exposure uncertainty shared between individuals. These findings underline the importance of careful characterisation and modeling of exposure uncertainty in observational studies. PMID:29408862
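    The contrast between shared and unshared errors can be illustrated with a simplified linear stand-in for the paper's proportional hazards setting (all parameters below are assumptions): one multiplicative lognormal error per worker, shared across all years, attenuates the fitted slope more than independent errors per worker-year, which partially average out in the cumulative dose.

        import numpy as np

        rng = np.random.default_rng(7)
        workers, years, sigma = 5000, 10, 0.5
        true_annual = rng.gamma(2.0, 1.0, (workers, years))  # true annual doses
        X = true_annual.sum(axis=1)                          # true cumulative dose
        y = 0.1 * X + rng.normal(0, 1, workers)              # linear response

        def ols_slope(x, y):
            return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

        # Unshared: an independent multiplicative error for every worker-year
        unshared = (true_annual * rng.lognormal(0, sigma, (workers, years))).sum(axis=1)
        # Shared within worker: one multiplicative error per worker, all years
        shared = (true_annual * rng.lognormal(0, sigma, (workers, 1))).sum(axis=1)

        print("true slope 0.100")
        print(f"unshared-error slope       {ols_slope(unshared, y):.3f}")
        print(f"shared-within-worker slope {ols_slope(shared, y):.3f}")  # smaller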

  16. A study on the flow field and local heat transfer performance due to geometric scaling of centrifugal fans

    Energy Technology Data Exchange (ETDEWEB)

    Stafford, Jason, E-mail: jason.stafford@ul.ie [Stokes Institute, Mechanical, Aeronautical and Biomedical Engineering Department, University of Limerick, Limerick (Ireland); Walsh, Ed; Egan, Vanessa [Stokes Institute, Mechanical, Aeronautical and Biomedical Engineering Department, University of Limerick, Limerick (Ireland)

    2011-12-15

    Highlights: ► Velocity field and local heat transfer trends of centrifugal fans. ► Time-averaged vortices are generated by flow separation. ► Local vortex and impingement regions are evident on surface heat transfer maps. ► Miniature centrifugal fans should be designed with an aspect ratio below 0.3. ► Theory underpredicts heat transfer due to complex, unsteady outlet flow. - Abstract: Scaled versions of fan designs are often chosen to address thermal management issues in space-constrained applications. Using velocity field and local heat transfer measurement techniques, the thermal performance characteristics of a range of geometrically scaled centrifugal fan designs have been investigated. Complex fluid flow structures and surface heat transfer trends due to centrifugal fans were found to be common over a wide range of fan aspect ratios (blade height to fan diameter). The limiting aspect ratio for heat transfer enhancement was 0.3, as larger aspect ratios were shown to result in a reduction in overall thermal performance. Over the range of fans examined, the low profile centrifugal designs produced significant enhancement in thermal performance when compared to that predicted using classical laminar flow theory. The limiting non-dimensional distance from the fan, where this enhancement is no longer apparent, has also been determined. Using the fundamental information inferred from local velocity field and heat transfer measurements, selection criteria can be determined for both low and high power practical applications where space restrictions exist.

  17. A combinatorial and probabilistic study of initial and end heights of descents in samples of geometrically distributed random variables and in permutations

    Directory of Open Access Journals (Sweden)

    Helmut Prodinger

    2007-01-01

    In words generated by independent geometrically distributed random variables, we study the l-th descent, which is, roughly speaking, the l-th occurrence of a neighbouring pair ab with a>b. The value a is called the initial height, and b the end height. We study these two random variables (and some similar ones) by combinatorial and probabilistic tools. We find in all instances a generating function Ψ(v,u), where the coefficient of v^j u^i refers to the j-th descent (ascent), and i to the initial (end) height. From this, various conclusions can be drawn, in particular expected values. In the probabilistic part, a Markov chain model is used, which allows us to get explicit expressions for the heights of the second descent. In principle, one could go further, but the complexity of the results forbids it. This is extended to permutations of a large number of elements. Methods from q-analysis are used to simplify the expressions. This is the reason that we confine ourselves to the geometric distribution only. For general discrete distributions, no such tools are available.
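    A Monte Carlo check of the quantities studied here, under assumed parameters: sample words of geometric(p) letters and average the initial and end heights of the first descent. The generating-function machinery of the paper gives these expectations exactly; the simulation only approximates them.

        import math
        import random

        def geom(rng, p):
            # Geometric(p) on {1, 2, ...}: P(k) = p * (1 - p)**(k - 1)
            return int(math.log(1.0 - rng.random()) / math.log(1.0 - p)) + 1

        def first_descent_heights(p=0.5, n=50, reps=200000, seed=3):
            rng = random.Random(seed)
            sum_a = sum_b = hits = 0
            for _ in range(reps):
                prev = geom(rng, p)
                for _ in range(n - 1):
                    cur = geom(rng, p)
                    if prev > cur:          # a descent: pair (a, b) with a > b
                        sum_a += prev       # initial height a
                        sum_b += cur        # end height b
                        hits += 1
                        break
                    prev = cur
            return sum_a / hits, sum_b / hits

        print(first_descent_heights())  # Monte Carlo E[a], E[b] for the 1st descent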

  18. Justifications of policy-error correction: a case study of error correction in the Three Mile Island Nuclear Power Plant Accident

    International Nuclear Information System (INIS)

    Kim, Y.P.

    1982-01-01

    The sensational Three Mile Island Nuclear Power Plant Accident of 1979 raised many policy problems. Since the TMI accident, many authorities in the nation, including the President's Commission on TMI, Congress, the GAO, as well as the NRC, have researched the lessons and recommended various corrective measures for the improvement of nuclear regulatory policy. In an effort to translate the recommendations into effective actions, the NRC developed the TMI Action Plan. How sound are these corrective actions? The NRC approach to the TMI Action Plan is justifiable to the extent that decisions were reached by procedures designed to reduce the effects of judgmental bias. Major findings from the NRC's effort to justify the corrective actions include: (A) The deficiencies and errors in the operations at the Three Mile Island plant were not defined through a process of comprehensive analysis. (B) Instead, problems were identified pragmatically and segmentally, through empirical investigations. These problems tended to take one of two forms: determinate problems subject to regulatory correction on the basis of available causal knowledge, and indeterminate problems solved by interim rules plus continuing study. The information used to justify the solution was adjusted to the problem characteristics. (C) Finally, uncertainty in the determinate problems was resolved by seeking more causal information, while efforts to resolve indeterminate problems relied upon collective judgment and a consensus rule governing decisions about interim resolutions

  19. Geometric mean for subspace selection.

    Science.gov (United States)

    Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J

    2009-02-01

    Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in the Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection to a subspace tends to merge those classes, which are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, the UCI Machine Learning Repository, and handwriting digits show that the third criterion is a potential discriminative subspace selection method, which significantly reduces the class separation problem compared with the linear dimensionality reduction step in FLDA and its several representative extensions.
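    A sketch of criterion (1) for the Gaussian, shared-covariance case, where the pairwise KL divergence has the closed form 0.5 (mu_i - mu_j)' Sigma^{-1} (mu_i - mu_j). The projection directions and class parameters below are toy assumptions, not the paper's experiments.

        import numpy as np
        from itertools import combinations

        def kl_gauss_shared(mu_i, mu_j, cov):
            # KL divergence between two Gaussians sharing covariance `cov`
            d = mu_i - mu_j
            return 0.5 * d @ np.linalg.solve(cov, d)

        def geometric_mean_kl(W, means, cov):
            # Geometric mean of pairwise KL divergences after projecting with W
            pm = [W.T @ m for m in means]
            pc = W.T @ cov @ W
            kls = [kl_gauss_shared(a, b, pc) for a, b in combinations(pm, 2)]
            return np.exp(np.mean(np.log(kls)))

        # Toy example: three classes in R^3, projected onto 1D subspaces
        means = [np.array([0.0, 0.0, 0.0]),
                 np.array([3.0, 0.0, 0.0]),
                 np.array([3.2, 0.2, 0.0])]
        cov = np.eye(3)
        for w in [np.array([[1.0], [0.0], [0.0]]), np.array([[0.6], [0.8], [0.0]])]:
            print(geometric_mean_kl(w, means, cov))  # penalizes merging classes 2, 3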

  20. A phantom-based study for assessing the error and uncertainty of a neuronavigation system

    Directory of Open Access Journals (Sweden)

    Natalia Izquierdo-Cifuentes

    2017-01-01

    This document describes a calibration protocol intended to introduce a guide to standardize the metrological vocabulary among manufacturers of image-guided surgery systems. Two stages were developed to measure the errors and estimate the uncertainty of a neuronavigator in different situations: in the first, a mechanical error was determined on a virtual model of an acrylic phantom; in the second, a coordinate error was determined on the computerized axial tomography scan of the same phantom. Ten standard coordinates of the phantom were compared with the coordinates generated by the NeuroCPS. After the measurement model was established, the sources of uncertainty were identified and the data were processed according to the Guide to the Expression of Uncertainty in Measurement (GUM).
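    A minimal sketch of the error and Type A uncertainty computation implied by this protocol, with hypothetical coordinates standing in for the ten phantom landmarks and the NeuroCPS readings.

        import numpy as np

        def registration_errors(reference, measured):
            # Euclidean distance between each phantom reference coordinate and
            # the coordinate reported by the navigation system
            return np.linalg.norm(np.asarray(measured) - np.asarray(reference), axis=1)

        # Hypothetical coordinates in mm (the study used 10 phantom landmarks)
        rng = np.random.default_rng(5)
        reference = rng.uniform(0, 100, (10, 3))
        measured = reference + rng.normal(0, 1.2, (10, 3))  # assumed system noise

        e = registration_errors(reference, measured)
        u_type_a = e.std(ddof=1) / np.sqrt(e.size)  # GUM Type A standard uncertainty
        print(f"mean error {e.mean():.2f} mm, standard uncertainty {u_type_a:.2f} mm")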

  1. Monte Carlo simulation of expert judgments on human errors in chemical analysis--a case study of ICP-MS.

    Science.gov (United States)

    Kuselman, Ilya; Pennecchi, Francesca; Epstein, Malka; Fajgelj, Ales; Ellison, Stephen L R

    2014-12-01

    Monte Carlo simulation of expert judgments on human errors in a chemical analysis was used for determining the distributions of the error quantification scores (scores of likelihood and severity, and scores of the effectiveness of a laboratory quality system in preventing the errors). The simulation was based on modeling of expert behavior: confident, reasonably doubting and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human errors which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for three pmfs of expert behavior were compared. Variability of the scores, expressed as the standard deviation of the simulated score values about the distribution mean, was used for assessment of the score robustness. A range of the score values, calculated directly from elicited data and simulated by the Monte Carlo method for different pmfs, was also discussed from the robustness point of view. It was shown that the robustness of the scores obtained in the case study can be assessed as satisfactory for quality risk management and for improvement of a laboratory quality system against human errors. Copyright © 2014 Elsevier B.V. All rights reserved.
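    The simulation idea can be sketched in a few lines: each expert behavior is a probability mass function over the elicited score scale, scores are sampled, and the robustness read-out is the spread of the simulated distribution. The pmf shapes and the 1-5 scale below are assumptions for illustration, not the paper's elicited values.

        import numpy as np

        rng = np.random.default_rng(0)
        scale = np.arange(1, 6)  # 1-5 likelihood score elicited from an expert

        # Probability mass functions modeling expert behavior (assumed shapes)
        pmfs = {
            "confident":  np.array([0.02, 0.06, 0.12, 0.60, 0.20]),
            "doubting":   np.array([0.05, 0.15, 0.30, 0.35, 0.15]),
            "irresolute": np.array([0.20, 0.20, 0.20, 0.20, 0.20]),
        }

        for name, pmf in pmfs.items():
            draws = rng.choice(scale, size=100000, p=pmf)
            # Robustness read-out: variability of simulated scores around the mean
            print(f"{name:>10}: mean={draws.mean():.2f}, sd={draws.std(ddof=1):.2f}")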

  2. Refractive error, ocular biometry, and lens opalescence in an adult population: the Los Angeles Latino Eye Study.

    Science.gov (United States)

    Shufelt, Chrisandra; Fraser-Bell, Samantha; Ying-Lai, Mei; Torres, Mina; Varma, Rohit

    2005-12-01

    To characterize age- and gender-related differences in refractive error, ocular biometry, and lens opalescence (NOP) in a population-based sample of adult Latinos. Also assessed were the determinants of age-related refractive differences. Participants in the Los Angeles Latino Eye Study (LALES), a population-based study of Latinos aged 40 years and more, underwent an ophthalmic examination, including ultrasonic measurements of axial length (AL), vitreous chamber depth (VCD), anterior chamber depth (ACD), lens thickness (LT), and noncycloplegic automated and subjective refraction. Corneal curvature/power (CP) was measured using an autorefractor. NOP was graded at the slit lamp by an ophthalmologist using the Lens Opacity Classification System II. Age- and gender-related differences were calculated. Multiple regression models were used to identify the determinants of age-related refractive differences. Of the 6357 LALES participants, 5588 phakic individuals with biometric data were included in this analysis. Older individuals had shallower ACDs, thicker lenses, more NOP, and more hyperopia compared to younger individuals (P < or = 0.05). Women had significantly shorter AL, shallower ACD and VCD, than did men (P < or = 0.01). The strongest determinants of refractive error were AL (primarily VCD) and CP. NOP was a small but significant determinant of refractive error in older individuals. Age- and gender-related differences in ocular biometric, refractive error, and NOP measurements are present in adult Latinos. While the relative contribution of NOP in determining refractive error is small, it is greater in older persons compared to younger individuals.

  3. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  4. Dynamics in geometrical confinement

    CERN Document Server

    Kremer, Friedrich

    2014-01-01

    This book describes the dynamics of low molecular weight and polymeric molecules when they are constrained under conditions of geometrical confinement. It covers geometrical confinement in different dimensionalities: (i) in nanometer-thin layers or self-supporting films (1-dimensional confinement); (ii) in pores or tubes with nanometric diameters (2-dimensional confinement); (iii) as micelles embedded in matrices (3-dimensional confinement) or as nanodroplets. The dynamics under such conditions have been a central and much-discussed topic of intense worldwide research activity over the last two decades. The present book discusses how the resulting molecular mobility is influenced by the subtle counterbalance between surface effects (typically slowing down molecular dynamics through attractive guest/host interactions) and confinement effects (typically increasing the mobility). It also explains how these influences can be modified and tuned, e.g. through appropriate surface coatings, film thicknesses or pore...

  5. Lectures in geometric combinatorics

    CERN Document Server

    Thomas, Rekha R

    2006-01-01

    This book presents a course in the geometry of convex polytopes in arbitrary dimension, suitable for an advanced undergraduate or beginning graduate student. The book starts with the basics of polytope theory. Schlegel and Gale diagrams are introduced as geometric tools to visualize polytopes in high dimension and to unearth bizarre phenomena in polytopes. The heart of the book is a treatment of the secondary polytope of a point configuration and its connections to the state polytope of the toric ideal defined by the configuration. These polytopes are relatively recent constructs with numerous connections to discrete geometry, classical algebraic geometry, symplectic geometry, and combinatorics. The connections rely on Gröbner bases of toric ideals and other methods from commutative algebra. The book is self-contained and does not require any background beyond basic linear algebra. With numerous figures and exercises, it can be used as a textbook for courses on geometric, combinatorial, and computational as...

  6. Geometric information provider platform

    Directory of Open Access Journals (Sweden)

    Meisam Yousefzadeh

    2015-07-01

    Renovation of existing buildings is known to be an essential step in reducing energy loss. A considerable part of the renovation process depends on geometric reconstruction of the building based on semantic parameters. Following many research projects focused on parameterizing energy usage, various energy modelling methods were developed during the last decade. On the other hand, with the development of accurate measuring tools such as laser scanners, interest in accurate 3D building models is rapidly growing. But the automation of 3D building generation from laser point clouds, or the detection of specific objects within them, is still a challenge. The goal is to design a platform through which the required geometric information can be efficiently produced to support energy simulation software. Developing a reliable procedure which extracts the required information from measured data and delivers it to a standard energy modelling system is the main purpose of the project.

  7. Gravity, a geometrical course

    CERN Document Server

    Frè, Pietro Giuseppe

    2013-01-01

    ‘Gravity, a Geometrical Course’ presents general relativity (GR) in a systematic and exhaustive way, covering three aspects that are homogenized into a single texture: i) the mathematical, geometrical foundations, exposed in a self-consistent contemporary formalism, ii) the main physical, astrophysical and cosmological applications, updated to the issues of contemporary research and observations, with glimpses on supergravity and superstring theory, iii) the historical development of scientific ideas underlying both the birth of general relativity and its subsequent evolution. The book is divided into two volumes. Volume One is dedicated to the development of the theory and basic physical applications. It guides the reader from the foundation of special relativity to Einstein field equations, illustrating some basic applications in astrophysics. A detailed account of the historical and conceptual development of the theory is combined with the presentation of its mathematical foundations. Differe...

  8. Nonadiabatic geometrical quantum gates in semiconductor quantum dots

    International Nuclear Information System (INIS)

    Solinas, Paolo; Zanghi, Nino; Zanardi, Paolo; Rossi, Fausto

    2003-01-01

    In this paper, we study the implementation of nonadiabatic geometrical quantum gates within semiconductor quantum dots. Different quantum information encoding (manipulation) schemes exploiting excitonic degrees of freedom are discussed. By means of the Aharonov-Anandan geometrical phase, one can avoid the limitations of adiabatic schemes relying on the adiabatic Berry phase; fast geometrical quantum gates can, in principle, be implemented

  9. The Iatroref study: medical errors are associated with symptoms of depression in ICU staff but not burnout or safety culture.

    Science.gov (United States)

    Garrouste-Orgeas, Maité; Perrin, Marion; Soufir, Lilia; Vesin, Aurélien; Blot, François; Maxime, Virginie; Beuret, Pascal; Troché, Gilles; Klouche, Kada; Argaud, Laurent; Azoulay, Elie; Timsit, Jean-François

    2015-02-01

    Staff behaviours to optimise patient safety may be influenced by burnout, depression and strength of the safety culture. We evaluated whether burnout, symptoms of depression and safety culture affected the frequency of medical errors and adverse events (selected using Delphi techniques) in ICUs. Prospective, observational, multicentre (31 ICUs) study from August 2009 to December 2011. Burnout, depression symptoms and safety culture were evaluated using the Maslach Burnout Inventory (MBI), CES-Depression scale and Safety Attitudes Questionnaire, respectively. Of 1,988 staff members, 1,534 (77.2 %) participated. Frequencies of medical errors and adverse events were 804.5/1,000 and 167.4/1,000 patient-days, respectively. Burnout prevalence was 3 or 40 % depending on the definition (severe emotional exhaustion, depersonalisation and low personal accomplishment; or MBI score greater than -9). Depression symptoms were identified in 62/330 (18.8 %) physicians and 188/1,204 (15.6 %) nurses/nursing assistants. Median safety culture score was 60.7/100 [56.8-64.7] in physicians and 57.5/100 [52.4-61.9] in nurses/nursing assistants. Depression symptoms were an independent risk factor for medical errors. Burnout was not associated with medical errors. The safety culture score had a limited influence on medical errors. Other independent risk factors for medical errors or adverse events were related to ICU organisation (40 % of ICU staff off work on the previous day), staff (specific safety training) and patients (workload). One-on-one training of junior physicians during duties and existence of a hospital risk-management unit were associated with lower risks. The frequency of selected medical errors in ICUs was high and was increased when staff members had symptoms of depression.

  10. Geometric homology revisited

    OpenAIRE

    Ruffino, Fabio Ferrari

    2013-01-01

    Given a cohomology theory, there is a well-known abstract way to define the dual homology theory using the theory of spectra. In [4] the author provides a more geometric construction of the homology theory, using a generalization of the bordism groups. Such a generalization involves in its definition the vector bundle modification, which is a particular case of the Gysin map. In this paper we provide a more natural variant of that construction, which replaces the vector bundle modification wi...

  11. Geometric measure theory

    CERN Document Server

    Waerden, B

    1996-01-01

    From the reviews: "... Federer's timely and beautiful book indeed fills the need for a comprehensive treatise on geometric measure theory, and his detailed exposition leads from the foundations of the theory to the most recent discoveries. ... The author writes with a distinctive style which is both natural and powerfully economical in treating a complicated subject. This book is a major treatise in mathematics and is essential in the working library of the modern analyst." Bulletin of the London Mathematical Society.

  12. Developing geometrical reasoning

    OpenAIRE

    Brown, Margaret; Jones, Keith; Taylor, Ron; Hirst, Ann

    2004-01-01

    This paper summarises a report (Brown, Jones & Taylor, 2003) to the UK Qualifications and Curriculum Authority of the work of one geometry group. The group was charged with developing and reporting on teaching ideas that focus on the development of geometrical reasoning at the secondary school level. The group was encouraged to explore what is possible both within and beyond the current requirements of the UK National Curriculum and the Key Stage 3 strategy, and to consider the whole atta...

  13. Geometrically Consistent Mesh Modification

    KAUST Repository

    Bonito, A.

    2010-01-01

    A new paradigm of adaptivity is to execute refinement, coarsening, and smoothing of meshes on manifolds with incomplete information about their geometry and yet preserve position and curvature accuracy. We refer to this collectively as geometrically consistent (GC) mesh modification. We discuss the concept of discrete GC, show the failure of naive approaches, and propose and analyze a simple algorithm that is GC and accuracy preserving. © 2010 Society for Industrial and Applied Mathematics.

  14. Geometric theory of information

    CERN Document Server

    2014-01-01

    This book brings together geometric tools and their applications for information analysis. It collects current and emerging uses of Information Geometry Manifolds in the interdisciplinary fields of Advanced Signal, Image & Video Processing, Complex Data Modeling and Analysis, Information Ranking and Retrieval, Coding, Cognitive Systems, Optimal Control, Statistics on Manifolds, Machine Learning, Speech/sound recognition, and natural language treatment, which are also substantially relevant for the industry.

  15. Approximated empirical correlations to the characterization of physical and geometrical properties of solid particulate biomass: case studies of the elephant grass and sugar cane trash

    Energy Technology Data Exchange (ETDEWEB)

    Olivares Gomez, Edgardo; Cortez, Luis A. Barbosa [Universidade Estadual de Campinas (FEAGRI/UNICAMP), SP (Brazil). Fac. de Engenharia Agricola. Lab. de Termodinamica e Energia; Alarcon, Guillermo A. Rocca; Perez, Luis E. Brossard [Universidad de Oriente, Santiago de Cuba (Cuba)

    2008-07-01

    Two types of biomass solid particles, elephant grass (Pennisetum purpureum Schum. variety) and sugar cane trash, were studied in the laboratory in order to obtain information about several physical and geometrical properties. In both cases, the length, breadth, and thickness of fifty particles, selected randomly from each fraction of the size class obtained by mechanical fractionation through sieves, were measured manually, given their size. A rectangular-base prism geometric model was adopted because observations showed that most of the measured particles exhibited a length significantly greater than their width (l >> a). From these measurements, average values for other properties were estimated, for example, the characteristic particle dimension, the projected area of the rectangular prism, the area of the prism's rectangular section, the volume of the rectangular prism, shape factors, sphericity, the particles' specific surface area, and the equivalent diameter. A statistical analysis was performed, and empirical and semi-empirical mathematical correlation models obtained by linear regression were proposed, which show the goodness of fit of these equations to the reported experimental data. (author)
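    A sketch of the prism-based property calculations named here, using the standard definitions (equal-volume-sphere equivalent diameter; sphericity as the ratio of that sphere's surface area to the particle's). The paper's fitted correlations are not reproduced, and the sample dimensions are hypothetical.

        import math

        def prism_properties(l, a, t):
            """Geometric properties for a rectangular-prism particle model with
            length l, breadth a, thickness t (standard definitions assumed)."""
            volume = l * a * t
            surface = 2 * (l * a + l * t + a * t)
            projected_area = l * a                      # largest face of the prism
            d_eq = (6 * volume / math.pi) ** (1 / 3)    # equal-volume sphere diameter
            sphericity = math.pi * d_eq ** 2 / surface  # sphere area / particle area
            specific_surface = surface / volume         # surface area per unit volume
            return dict(volume=volume, projected_area=projected_area, d_eq=d_eq,
                        sphericity=sphericity, specific_surface=specific_surface)

        # Hypothetical elephant-grass particle, dimensions in mm (l >> a, as observed)
        print(prism_properties(l=12.0, a=1.5, t=0.4))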

  16. Pilotaje en la detección de errores de prescripción de citostáticos Pilot study in the detection of errors in cytostatics prescription

    Directory of Open Access Journals (Sweden)

    María Antonieta Arbesú Michelena

    2004-12-01

    Given the high toxicity of cytostatics, it is important to know the prevalence of medication errors, since they can have serious consequences for each patient's response to treatment. With the objective of laying out a strategy to reduce possible prescription errors, a pilot study was conducted during the week of December 15-21, 2003, at the Oncological Chemotherapy Service of the Institute of Oncology and Radiobiology, covering 43 medical orders. For the present work, errors were classified as errors of omission (which hinder verification by the pharmacist) and errors of incorrectness (which can be potentially serious for the patient). The total number of errors was 299. Among the errors of omission, the absence of the physician's signature in all 43 prescriptions stands out, as well as the use of abbreviations, acronyms or trade names in 88.4% of them. Regarding serious errors, weight and height were not included in any medical order, the body surface area was erroneously overestimated in 15 cases (34.8%), underdosing occurred on 41 occasions (47.7%), and the protocol did not correspond to the institutional standards in 17 cases. The occurrence of prescription errors in the Service was found to be high, which shows the importance of standardizing medical orders through protocols; this will allow a reduction in the percentage of errors detected in this pilot study and enable further in-depth work on the subject.

  17. A study on fatigue measurement of operators for human error prevention in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Oh Yeon; Il, Jang Tong; Meiling, Luo; Hee, Lee Young [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    The identification and analysis of individual factors of operators, which are among the various causes of adverse effects on human performance, is not easy in NPPs. Individual factors for operators include work type (including shift work), environment, personality, qualification, training, education, cognition, fatigue, job stress, workload, etc. Research at the Finnish Institute of Occupational Health (FIOH) reported that 'burn out' (extreme fatigue) is related to alcohol-dependent habits and must be dealt with using a stress management program. The USNRC (U.S. Nuclear Regulatory Commission) developed FFD (Fitness for Duty) for improving task efficiency and preventing human errors. 'Managing Fatigue' in 10CFR26 was presented as a requirement to control operator fatigue in NPPs. The committee explained that excessive fatigue is due to stressful work environments, working hours, shifts, sleep disorders, and unstable circadian rhythms. In addition, the International Labor Organization (ILO) developed and suggested a checklist to manage fatigue and job stress. Domestically, a systematic evaluation approach is presented in the Final Safety Analysis Report (FSAR) chapter 18, Human Factors, in the licensing process. However, it focuses almost entirely on interface design, such as the HMI (Human Machine Interface), not on individual factors. In particular, because our country is in the process of exporting an NPP to the UAE, developing and establishing a fatigue management technique is important and urgent in order to present technical standards and FFD criteria to the UAE. It is also anticipated that the domestic regulatory body will apply the FFD program as a regulatory requirement, so preparation for that situation is required. In this paper, previous research is investigated to find fatigue measurement and evaluation methods for operators in a high-reliability industry. This study also tries to review the NRC report and discuss the causal factors and

  18. A study on fatigue measurement of operators for human error prevention in NPPs

    International Nuclear Information System (INIS)

    Ju, Oh Yeon; Il, Jang Tong; Meiling, Luo; Hee, Lee Young

    2012-01-01

    The identification and analysis of individual factors of operators, which are among the various causes of adverse effects on human performance, is not easy in NPPs. Individual factors for operators include work type (including shift work), environment, personality, qualification, training, education, cognition, fatigue, job stress, workload, etc. Research at the Finnish Institute of Occupational Health (FIOH) reported that 'burn out' (extreme fatigue) is related to alcohol-dependent habits and must be dealt with using a stress management program. The USNRC (U.S. Nuclear Regulatory Commission) developed FFD (Fitness for Duty) for improving task efficiency and preventing human errors. 'Managing Fatigue' in 10CFR26 was presented as a requirement to control operator fatigue in NPPs. The committee explained that excessive fatigue is due to stressful work environments, working hours, shifts, sleep disorders, and unstable circadian rhythms. In addition, the International Labor Organization (ILO) developed and suggested a checklist to manage fatigue and job stress. Domestically, a systematic evaluation approach is presented in the Final Safety Analysis Report (FSAR) chapter 18, Human Factors, in the licensing process. However, it focuses almost entirely on interface design, such as the HMI (Human Machine Interface), not on individual factors. In particular, because our country is in the process of exporting an NPP to the UAE, developing and establishing a fatigue management technique is important and urgent in order to present technical standards and FFD criteria to the UAE. It is also anticipated that the domestic regulatory body will apply the FFD program as a regulatory requirement, so preparation for that situation is required. In this paper, previous research is investigated to find fatigue measurement and evaluation methods for operators in a high-reliability industry. This study also tries to review the NRC report and discuss the causal factors and management

  19. Are Divorce Studies Trustworthy? The Effects of Survey Nonresponse and Response Errors

    Science.gov (United States)

    Mitchell, Colter

    2010-01-01

    Researchers rely on relationship data to measure the multifaceted nature of families. This article speaks to relationship data quality by examining the ramifications of different types of error on divorce estimates, models predicting divorce behavior, and models employing divorce as a predictor. Comparing matched survey and divorce certificate…

  20. Electrophysiological Endophenotypes and the Error-Related Negativity (ERN) in Autism Spectrum Disorder: A Family Study

    Science.gov (United States)

    Clawson, Ann; South, Mikle; Baldwin, Scott A.; Larson, Michael J.

    2017-01-01

    We examined the error-related negativity (ERN) as an endophenotype of ASD by comparing the ERN in families of ASD probands to control families. We hypothesized that ASD probands and families would display reduced-amplitude ERN relative to controls. Participants included 148 individuals within 39 families consisting of a mother, father, sibling,…

  1. Source Memory Errors Associated with Reports of Posttraumatic Flashbacks: A Proof of Concept Study

    Science.gov (United States)

    Brewin, Chris R.; Huntley, Zoe; Whalley, Matthew G.

    2012-01-01

    Flashbacks are involuntary, emotion-laden images experienced by individuals with posttraumatic stress disorder (PTSD). The qualities of flashbacks could under certain circumstances lead to source memory errors. Participants with PTSD wrote a trauma narrative and reported the experience of flashbacks. They were later presented with stimuli from…

  2. Error-enhanced augmented proprioceptive feedback in stroke rehabilitation training : a pilot study

    NARCIS (Netherlands)

    Molier, Birgit I.; de Boer, Jacintha; Prange, Gerdienke B.; Jannink, Michiel J.A.

    2009-01-01

    Augmented feedback plays an essential role in stroke rehabilitation therapy. When a force is applied to the arm, an augmented sensory (proprioceptive) cue is provided. The question was to find out if stroke patients can learn reach-and-retrieval movements with error-enhanced augmented sensory

  3. A study and simulation of the impact of high-order aberrations to overlay error distribution

    Science.gov (United States)

    Sun, G.; Wang, F.; Zhou, C.

    2011-03-01

    With the reduction of design rules, a number of corresponding new technologies, such as i-HOPC, HOWA and DBO, have been proposed and applied to eliminate overlay error. When these technologies are in use, any high-order error distribution needs to be clearly distinguished in order to remove the underlying causes. Lens aberrations are normally thought to mainly impact the Matching Machine Overlay (MMO). However, when using Image-Based Overlay (IBO) measurement tools, aberrations become the dominant influence on Single Machine Overlay (SMO) and even on stage repeatability performance. In this paper, several measurements of the error distributions of the lens of the SMEE SSB600/10 prototype exposure tool are presented. Models that characterize the primary influence from lens magnification, high-order distortion, coma aberration and telecentricity are shown. The contribution to stage repeatability (as measured with IBO tools) from the above errors was predicted with a simulator and compared to experiments. Finally, the drift of each lens distortion term that impacts SMO was monitored over several days and matched with the measurement results.

  4. Estimation of geometrically undistorted B0 inhomogeneity maps

    International Nuclear Information System (INIS)

    Matakos, A; Balter, J; Cao, Y

    2014-01-01

    Geometric accuracy of MRI is one of the main concerns for its use as a sole image modality in precision radiation therapy (RT) planning. In a state-of-the-art scanner, system level geometric distortions are within acceptable levels for precision RT. However, subject-induced B0 inhomogeneity may vary substantially, especially in air-tissue interfaces. Recent studies have shown distortion levels of more than 2 mm near the sinus and ear canal are possible due to subject-induced field inhomogeneity. These distortions can be corrected with the use of accurate B0 inhomogeneity field maps. Most existing methods estimate these field maps from dual gradient-echo (GRE) images acquired at two different echo-times under the assumption that the GRE images are practically undistorted. However distortion that may exist in the GRE images can result in estimated field maps that are distorted in both geometry and intensity, leading to inaccurate correction of clinical images. This work proposes a method for estimating undistorted field maps from GRE acquisitions using an iterative joint estimation technique. The proposed method yields geometrically corrected GRE images and undistorted field maps that can also be used for the correction of images acquired by other sequences. The proposed method is validated through simulation, phantom experiments and applied to patient data. Our simulation results show that our method reduces the root-mean-squared error of the estimated field map from the ground truth by ten-fold compared to the distorted field map. Both the geometric distortion and the intensity corruption (artifact) in the images caused by the B0 field inhomogeneity are corrected almost completely. Our phantom experiment showed improvement in the geometric correction of approximately 1 mm at an air-water interface using the undistorted field map compared to using a distorted field map. The proposed method for undistorted field map estimation can lead to improved geometric

  5. Estimation of geometrically undistorted B0 inhomogeneity maps

    Science.gov (United States)

    Matakos, A.; Balter, J.; Cao, Y.

    2014-09-01

    Geometric accuracy of MRI is one of the main concerns for its use as a sole image modality in precision radiation therapy (RT) planning. In a state-of-the-art scanner, system level geometric distortions are within acceptable levels for precision RT. However, subject-induced B0 inhomogeneity may vary substantially, especially in air-tissue interfaces. Recent studies have shown distortion levels of more than 2 mm near the sinus and ear canal are possible due to subject-induced field inhomogeneity. These distortions can be corrected with the use of accurate B0 inhomogeneity field maps. Most existing methods estimate these field maps from dual gradient-echo (GRE) images acquired at two different echo-times under the assumption that the GRE images are practically undistorted. However distortion that may exist in the GRE images can result in estimated field maps that are distorted in both geometry and intensity, leading to inaccurate correction of clinical images. This work proposes a method for estimating undistorted field maps from GRE acquisitions using an iterative joint estimation technique. The proposed method yields geometrically corrected GRE images and undistorted field maps that can also be used for the correction of images acquired by other sequences. The proposed method is validated through simulation, phantom experiments and applied to patient data. Our simulation results show that our method reduces the root-mean-squared error of the estimated field map from the ground truth by ten-fold compared to the distorted field map. Both the geometric distortion and the intensity corruption (artifact) in the images caused by the B0 field inhomogeneity are corrected almost completely. Our phantom experiment showed improvement in the geometric correction of approximately 1 mm at an air-water interface using the undistorted field map compared to using a distorted field map. The proposed method for undistorted field map estimation can lead to improved geometric
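
    As background to the dual-echo estimation step both records describe, the following is a minimal sketch of the conventional field-map computation that the authors improve upon (not their iterative joint estimator). It assumes two complex GRE images, neglects phase wrapping, and uses invented echo times:

        import numpy as np

        def dual_echo_fieldmap(img_te1, img_te2, te1, te2):
            """Conventional B0 field map from two complex GRE echoes. This is
            the baseline method the abstract says assumes undistorted GRE
            images; the paper's contribution is an iterative joint estimator
            that removes that assumption. Phase unwrapping is ignored here,
            so the phase accrued between echoes must stay below pi."""
            delta_te = te2 - te1                              # seconds
            # Inter-echo phase; the conj trick avoids explicit 2*pi wrapping
            delta_phi = np.angle(img_te2 * np.conj(img_te1))
            # Off-resonance in Hz: delta_phi = 2*pi * f * delta_TE
            return delta_phi / (2 * np.pi * delta_te)

        # Toy example: 50 Hz uniform off-resonance, echoes 2 ms apart
        f_true, te1, te2 = 50.0, 0.004, 0.006
        img1 = np.exp(1j * 2 * np.pi * f_true * te1) * np.ones((4, 4))
        img2 = np.exp(1j * 2 * np.pi * f_true * te2) * np.ones((4, 4))
        print(dual_echo_fieldmap(img1, img2, te1, te2)[0, 0])  # ~50.0 Hz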

  6. Systematic analysis of video data from different human–robot interaction studies: a categorization of social signals during error situations

    Science.gov (United States)

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human–robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human–robot interaction experiments. For that, we analyzed 201 videos of five human–robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human–robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies. PMID:26217266

  7. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    Science.gov (United States)

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

  8. Geometric Computing for Freeform Architecture

    KAUST Repository

    Wallner, J.; Pottmann, Helmut

    2011-01-01

    Geometric computing has recently found a new field of applications, namely the various geometric problems which lie at the heart of rationalization and construction-aware design processes of freeform architecture. We report on our work in this area

  9. Open quantum systems and error correction

    Science.gov (United States)

    Shabani Barzegar, Alireza

    Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interactions with their surrounding environment and from inaccuracies in the control forces. Engineering different methods to combat errors in quantum devices is in high demand. In this thesis, I focus on realistic formulations of quantum error correction methods; a realistic formulation is one that incorporates experimental challenges. This thesis is presented in two sections: open quantum systems and quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory. It is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely-positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on the geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The section on quantum error correction is presented in chapters 4, 5, 6 and 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC

  10. An Empirical Study on Human Performance according to the Physical Environment (Potential Human Error Hazard) in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Kim, Ar Ryum; Jang, In Seok; Seong, Proong Hyun

    2014-01-01

    The management of the physical environment for safety is regarded as more effective in general industry than in the nuclear industry. Even when physical environment factors such as lighting and noise satisfy management standards, they can act as background factors that cause human error and affect human performance. Because the consequences of human error driven by the physical environment can be severe, the requirement standards should be supplemented with specific criteria. In particular, in order to avoid human errors caused by extremely low or rapidly changing illumination intensity and by masking effects such as during a power disconnection, plans for a better visual environment and better task performance should be made, while a careful study on efficient ways to manage and continue the better conditions is conducted

  11. Improving image quality in Electrical Impedance Tomography (EIT) using the Projection Error Propagation-based Regularization (PEPR) technique: A simulation study

    Directory of Open Access Journals (Sweden)

    Tushar Kanti Bera

    2011-03-01

    Full Text Available A Projection Error Propagation-based Regularization (PEPR) method is proposed and the reconstructed image quality is improved in Electrical Impedance Tomography (EIT). A projection error is produced due to the misfit of the calculated and measured data in the reconstruction process. The variation of the projection error is integrated with the response matrix in each iteration and the reconstruction is carried out in EIDORS. The PEPR method is studied with simulated boundary data for different inhomogeneity geometries. Simulation results demonstrate that the PEPR technique improves image reconstruction precision in EIDORS and hence can be successfully implemented to increase reconstruction accuracy in EIT. doi:10.5617/jeb.158 J Electr Bioimp, vol. 2, pp. 2-12, 2011
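
    For orientation, the sketch below shows the generic regularized Gauss-Newton update on which iterative EIT reconstruction (as in EIDORS) is built. The PEPR weighting itself is the paper's contribution and is not reproduced here; the matrix sizes and the damping parameter are illustrative assumptions:

        import numpy as np

        def gauss_newton_step(sigma, J, v_meas, v_calc, lam):
            """One regularized Gauss-Newton update for the EIT conductivity
            vector sigma. J is the Jacobian (response/sensitivity) matrix,
            v_meas the measured and v_calc the forward-solved boundary
            voltages. PEPR, per the abstract, adapts the regularization using
            the projection error v_meas - v_calc; a fixed Tikhonov parameter
            lam is used here instead."""
            residual = v_meas - v_calc                  # projection error
            H = J.T @ J + lam * np.eye(J.shape[1])      # regularized normal matrix
            return sigma + np.linalg.solve(H, J.T @ residual)

        # Tiny synthetic demo: invented 16-measurement, 8-element problem
        rng = np.random.default_rng(0)
        J = rng.normal(size=(16, 8))
        sigma0 = np.ones(8)
        v_meas = J @ (sigma0 + 0.1)     # pretend the true conductivity is higher
        print(gauss_newton_step(sigma0, J, v_meas, J @ sigma0, lam=1e-2))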

  12. Assessing type I error and power of multistate Markov models for panel data-A simulation study.

    Science.gov (United States)

    Cassarly, Christy; Martin, Renee' H; Chimowitz, Marc; Peña, Edsel A; Ramakrishnan, Viswanathan; Palesch, Yuko Y

    2017-01-01

    Ordinal outcomes collected at multiple follow-up visits are common in clinical trials. Sometimes, one visit is chosen for the primary analysis and the scale is dichotomized, amounting to a loss of information. Multistate Markov models describe how a process moves between states over time. Here, simulation studies are performed to investigate the type I error and power characteristics of multistate Markov models for panel data with limited non-adjacent state transitions. The results suggest that the multistate Markov models preserve the type I error and adequate power is achieved with modest sample sizes for panel data with limited non-adjacent state transitions.
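
    To make the panel-data setup concrete, the following sketch simulates a latent Markov chain that is observed only at scheduled visits. The four-state transition matrix and visit schedule are invented for illustration and are not the paper's simulation design:

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative 4-state transition matrix for one time unit: mostly
        # adjacent-state moves, rare non-adjacent jumps (rows sum to 1).
        P = np.array([[0.80, 0.15, 0.04, 0.01],
                      [0.10, 0.75, 0.12, 0.03],
                      [0.02, 0.10, 0.78, 0.10],
                      [0.00, 0.02, 0.08, 0.90]])

        def simulate_panel(n_subjects, visits):
            """Latent Markov chain recorded only at scheduled panel visits."""
            data = np.empty((n_subjects, len(visits)), dtype=int)
            for i in range(n_subjects):
                state, t = 0, 0                    # everyone starts in state 0
                for j, v in enumerate(visits):
                    for _ in range(v - t):         # advance the chain to visit v
                        state = rng.choice(4, p=P[state])
                    t = v
                    data[i, j] = state             # state observed at the visit
            return data

        panel = simulate_panel(n_subjects=200, visits=[1, 3, 6, 12])
        print(panel[:5])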

  13. Cepheids Geometrical Distances Using Space Interferometry

    Science.gov (United States)

    Marengo, M.; Karovska, M.; Sasselov, D. D.; Sanchez, M.

    2004-05-01

    A space-based interferometer with sub-milliarcsecond resolution in the UV-optical will provide a new avenue for the calibration of primary distance indicators with unprecedented accuracy, by allowing very accurate and stable measurements of Cepheid pulsation amplitudes at wavelengths not accessible from the ground. Sasselov & Karovska (1994) have shown that interferometers allow very accurate measurements of Cepheid distances by using a ``geometric'' variant of the Baade-Wesselink method. This method has been successfully applied to derive distances and radii of nearby Cepheids using ground-based near-IR and optical interferometers, within a 15% accuracy level. Our study shows that the main source of error in these measurements is the perturbing effect of the Earth's atmosphere, which is the limiting factor in interferometer stability. A space interferometer will not suffer from these intrinsic limitations, and can potentially improve astronomical distance measurements by an order of magnitude in precision. We discuss here the technical requirements that a space-based facility will need to carry out this project, allowing distance measurements within a few percent accuracy level. We will finally discuss how sub-milliarcsecond resolution will allow direct distance determination for hundreds of galactic sources, and provide a substantial improvement in the zero-point of the Cepheid distance scale.
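
    The ``geometric'' Baade-Wesselink idea reduces to matching the linear radius change, integrated from radial velocities, against the interferometric angular-diameter change. A toy illustration with invented numbers (the relation is theta = 2R/d, so d = 2*dR/dtheta):

        import numpy as np

        # Baade-Wesselink in its simplest geometric form: the angular diameter
        # is theta = 2 R / d, so the distance is d = 2 * dR / dtheta.
        KM_PER_AU = 1.495978707e8
        AU_PER_PC = 206264.806
        RAD_PER_MAS = 1e-3 / 3600 * np.pi / 180   # one milliarcsecond in radians

        delta_R_km = 2.0e6        # radius change from integrated radial velocities (invented)
        delta_theta_mas = 0.030   # matching angular-diameter change (invented)

        d_km = 2 * delta_R_km / (delta_theta_mas * RAD_PER_MAS)
        print(f"distance ~ {d_km / KM_PER_AU / AU_PER_PC:.0f} pc")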

  14. Geometric modeling for computer aided design

    Science.gov (United States)

    Schwing, James L.; Olariu, Stephen

    1995-01-01

    The primary goal of this grant has been the design and implementation of software to be used in the conceptual design of aerospace vehicles, particularly focused on the elements of geometric design, graphical user interfaces, and the interaction of the multitude of software typically used in this engineering environment. This has resulted in the development of several analysis packages and design studies. These include two major software systems currently used in the conceptual level design of aerospace vehicles. These tools are SMART, the Solid Modeling Aerospace Research Tool, and EASIE, the Environment for Software Integration and Execution. Additional software tools were designed and implemented to address the needs of the engineer working in the conceptual design environment. SMART provides conceptual designers with a rapid prototyping capability and several engineering analysis capabilities. In addition, SMART has a carefully engineered user interface that makes it easy to learn and use. Finally, a number of specialty characteristics have been built into SMART which allow it to be used efficiently as a front-end geometry processor for other analysis packages. EASIE provides a set of interactive utilities that simplify the task of building and executing computer-aided design systems consisting of diverse, stand-alone analysis codes, resulting in a streamlined exchange of data between programs, reducing errors and improving efficiency. EASIE provides both a methodology and a collection of software tools to ease the task of coordinating engineering design and analysis codes.

  15. The geometric semantics of algebraic quantum mechanics.

    Science.gov (United States)

    Cruz Morales, John Alexander; Zilber, Boris

    2015-08-06

    In this paper, we will present an ongoing project that aims to use model theory as a suitable mathematical setting for studying the formalism of quantum mechanics. We argue that this approach provides a geometric semantics for such a formalism by means of establishing a (non-commutative) duality between certain algebraic and geometric objects. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  16. Geometric reconstruction methods for electron tomography

    DEFF Research Database (Denmark)

    Alpers, Andreas; Gardner, Richard J.; König, Stefan

    2013-01-01

    Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts...... and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed...

  17. Geometric Constructions with the Computer.

    Science.gov (United States)

    Chuan, Jen-chung

    The computer can be used as a tool to represent and communicate geometric knowledge. With the appropriate software, a geometric diagram can be manipulated through a series of animations that offer more than the one particular snapshot shown in a traditional mathematical text. Geometric constructions with the computer enable the learner to see and…

  18. Study of Frequency of Errors and Areas of Weaknesses in Business Communications Classes at Kapiolani Community College.

    Science.gov (United States)

    Uehara, Soichi

    This study was made to determine the most prevalent errors, areas of weakness, and their frequency in the writing of letters so that a course in business communications classes at Kapiolani Community College (Hawaii) could be prepared that would help students learn to write effectively. The 55 participating students were divided into two groups…

  19. Nine Loci for Ocular Axial Length Identified through Genome-wide Association Studies, Including Shared Loci with Refractive Error

    NARCIS (Netherlands)

    Cheng, Ching-Yu; Schache, Maria; Ikram, M. Kamran; Young, Terri L.; Guggenheim, Jeremy A.; Vitart, Veronique; Macgregor, Stuart; Verhoeven, Virginie J. M.; Barathi, Veluchamy A.; Liao, Jiemin; Hysi, Pirro G.; Bailey-Wilson, Joan E.; St Pourcain, Beate; Kemp, John P.; McMahon, George; Timpson, Nicholas J.; Evans, David M.; Montgomery, Grant W.; Mishra, Aniket; Wang, Ya Xing; Wang, Jie Jin; Rochtchina, Elena; Polasek, Ozren; Wright, Alan F.; Amin, Najaf; van Leeuwen, Elisabeth M.; Wilson, James F.; Pennell, Craig E.; van Duijn, Cornelia M.; de Jong, Paulus T. V. M.; Vingerling, Johannes R.; Zhou, Xin; Chen, Peng; Li, Ruoying; Tay, Wan-Ting; Zheng, Yingfeng; Chew, Merwyn; Burdon, Kathryn P.; Craig, Jamie E.; Iyengar, Sudha K.; Igo, Robert P.; Lass, Jonathan H.; Chew, Emily Y.; Haller, Toomas; Mihailov, Evelin; Metspalu, Andres; Wedenoja, Juho; Simpson, Claire L.; Wojciechowski, Robert; Chen, Wei

    2013-01-01

    Refractive errors are common eye disorders of public health importance worldwide. Ocular axial length (AL) is the major determinant of refraction and thus of myopia and hyperopia. We conducted a meta-analysis of genome-wide association studies for AL, combining 12,531 Europeans and 8,216 Asians. We

  20. N-acetylated metabolites in urine: proton nuclear magnetic resonance spectroscopic study on patients with inborn errors of metabolism.

    NARCIS (Netherlands)

    Engelke, U.F.H.; Liebrand-van Sambeek, M.L.F.; Jong, J.G.N. de; Leroy, J.G.; Morava, E.; Smeitink, J.A.M.; Wevers, R.A.

    2004-01-01

    BACKGROUND: There is no comprehensive analytical technique to analyze N-acetylated metabolites in urine. Many of these compounds are involved in inborn errors of metabolism. In the present study, we examined the potential of proton nuclear magnetic resonance ((1)H-NMR) spectroscopy as a tool to

  1. Study of the magnetic spectrograph BIG KARL on image errors and their causes

    International Nuclear Information System (INIS)

    Paul, D.

    1987-12-01

    The ionoptical aberrations of the QQDDQ spectrograph BIG KARL are measured and analyzed in order to improve resolution and transmission at large acceptance. The entrance phase space is scanned in a cartesian grid by means of a narrowly collimated beam of scattered deuterons. The distortions due to the nonlinear transformation by the system are measured in the detector plane. A model is developed which describes the measured distortions. The model allows nonlinearities in the system responsible for the observed distortions to be located. It gives a good understanding of geometric nonlinearities up to the fifth order and chromatic nonlinearities up to the third order. To confirm the model, the magnetic field in the quadrupoles is measured, including the fringe field region. Furthermore, nonlinearities appearing in ideal magnets are discussed and compared to experimental data. (orig.) [de]

  2. Study on a new framework of Human Reliability Analysis to evaluate soft control execution error in advanced MCRs of NPPs

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Jung, Wondea; Seong, Poong Hyun

    2016-01-01

    Highlights: • The operating environment of MCRs in NPPs has changed with the adoption of new HSIs. • Operating actions in NPP advanced MCRs are performed by soft control. • A new HRA framework should be considered in the HRA for advanced MCRs. • An HRA framework for evaluating soft control execution human error is suggested. • The suggested method will be helpful for analyzing human reliability in advanced MCRs. - Abstract: Since the Three Mile Island (TMI)-2 accident, human error has been recognized as one of the main causes of Nuclear Power Plant (NPP) accidents, and numerous studies related to Human Reliability Analysis (HRA) have been carried out. Most of these methods were developed considering the conventional type of Main Control Rooms (MCRs). However, the operating environment of MCRs in NPPs has changed with the adoption of new Human-System Interfaces (HSIs) that are based on computer-based technologies. The MCRs that include these digital technologies, such as large display panels, computerized procedures, and soft controls, are called advanced MCRs. Among the many features of advanced MCRs, soft controls are a particularly important feature because operating actions in NPP advanced MCRs are performed by soft control. Due to the differences in interfaces between soft control and hardwired conventional control, different Human Error Probabilities (HEPs) and a new HRA framework should be considered in the HRA for advanced MCRs. To this end, a new framework of an HRA method for evaluating soft control execution human error is suggested by performing a soft control task analysis and reviewing the literature on widely accepted human error taxonomies. Moreover, since most current HRA databases deal with operation in conventional MCRs and are not explicitly designed to deal with digital HSIs, an empirical analysis of human error and error recovery considering soft controls under an advanced MCR mockup was carried out to collect human error data, which is

  3. Three-dimensional ray-tracing model for the study of advanced refractive errors in keratoconus.

    Science.gov (United States)

    Schedin, Staffan; Hallberg, Per; Behndig, Anders

    2016-01-20

    We propose a numerical three-dimensional (3D) ray-tracing model for the analysis of advanced corneal refractive errors. The 3D modeling was based on measured corneal elevation data by means of Scheimpflug photography. A mathematical description of the measured corneal surfaces from a keratoconus (KC) patient was used for the 3D ray tracing, based on Snell's law of refraction. A model of a commercial intraocular lens (IOL) was included in the analysis. By modifying the posterior IOL surface, it was shown that the imaging quality could be significantly improved. The RMS values were reduced by approximately 50% close to the retina, both for on- and off-axis geometries. The 3D ray-tracing model can constitute a basis for simulation of customized IOLs that are able to correct the advanced, irregular refractive errors in KC.
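
    The core operation in such a ray tracer is vector refraction at each measured surface. Below is a minimal sketch of Snell's law in 3D vector form; the surface normal, ray direction and refractive indices are placeholders rather than the paper's Scheimpflug-derived corneal data:

        import numpy as np

        def refract(d, n, n1, n2):
            """Refract unit ray direction d at a surface with unit normal n
            (pointing toward the incident side), from index n1 into n2.
            Returns the refracted unit direction, or None at total internal
            reflection. Vector form of Snell's law."""
            eta = n1 / n2
            cos_i = -np.dot(n, d)                  # cosine of the incidence angle
            sin2_t = eta**2 * (1.0 - cos_i**2)     # Snell: sin_t = eta * sin_i
            if sin2_t > 1.0:
                return None                        # total internal reflection
            cos_t = np.sqrt(1.0 - sin2_t)
            return eta * d + (eta * cos_i - cos_t) * n

        # Ray hitting an air-to-cornea interface (n ~ 1.376) at 30 degrees
        d = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
        n = np.array([0.0, 0.0, -1.0])             # normal toward the incident side
        t = refract(d, n, 1.0, 1.376)
        print(t, np.degrees(np.arcsin(np.linalg.norm(np.cross(n, t)))))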

  4. Studying and comparing spectrum efficiency and error probability in GMSK and DBPSK modulation schemes

    Directory of Open Access Journals (Sweden)

    Juan Mario Torres Nova

    2008-09-01

    Full Text Available Gaussian minimum shift keying (GMSK) and differential binary phase shift keying (DBPSK) are two digital modulation schemes which are frequently used in radio communication systems; however, there is interdependence in the use of their benefits (spectral efficiency, low bit error rate, low inter-symbol interference, etc.). Optimising one parameter creates problems for another; for example, the GMSK scheme succeeds in reducing bandwidth when introducing a Gaussian filter into an MSK (minimum shift keying) modulator, in exchange for increasing inter-symbol interference in the system. The DBPSK scheme leads to a lower error probability while occupying more bandwidth; it likewise facilitates synchronous data transmission due to the receiver's bit delay when recovering a signal.
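
    The error-probability side of this trade-off is easy to check numerically. The following Monte Carlo sketch compares the simulated DBPSK bit error rate over AWGN with the closed-form result Pb = 0.5*exp(-Eb/N0); the block length and seed are arbitrary:

        import numpy as np

        rng = np.random.default_rng(1)

        def dbpsk_ber(ebn0_db, n_bits=200_000):
            """Monte Carlo BER of DBPSK with differential detection over AWGN."""
            ebn0 = 10 ** (ebn0_db / 10)
            bits = rng.integers(0, 2, n_bits)
            enc = np.cumsum(bits) % 2               # phase flips when the bit is 1
            sym = 1 - 2.0 * enc                     # BPSK symbols +/-1
            sigma = np.sqrt(1 / (2 * ebn0))         # unit symbol energy assumed
            r = sym + sigma * (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits))
            # Differential detection: compare the phase of consecutive symbols
            dec = (np.real(r[1:] * np.conj(r[:-1])) < 0).astype(int)
            return np.mean(dec != bits[1:])

        for snr in (4, 6, 8, 10):
            sim = dbpsk_ber(snr)
            theory = 0.5 * np.exp(-10 ** (snr / 10))
            print(f"Eb/N0 = {snr:2d} dB: simulated {sim:.2e}, theory {theory:.2e}")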

  5. Generating a normalized geometric liver model with warping

    International Nuclear Information System (INIS)

    Boes, J.L.; Weymouth, T.E.; Meyer, C.R.; Quint, L.E.; Bland, P.H.; Bookstein, F.L.

    1990-01-01

    This paper reports on the automated determination of the liver surface in abdominal CT scans for radiation treatment, surgery planning, and anatomic visualization. The normalized geometric model of the liver is generated by averaging registered outlines from a set of 15 studies of normal liver. The outlines have been registered with the use of thin-plate spline warping based on a set of five homologous landmarks. Thus, the model consists of an average of the surface and a set of five anatomic landmarks. The accuracy of the model is measured against both the set of studies used in model generation and an alternate set of 15 normal studies, using as an error measure the ratio of non-overlapping model and study volume to total model volume

  6. Refractive error study in young subjects: results from a rural area in Paraguay

    Directory of Open Access Journals (Sweden)

    Isabel Signes-Soler

    2017-03-01

    Full Text Available AIM: To evaluate the distribution of refractive error in young subjects in a rural area of Paraguay in the context of an international cooperation campaign for the prevention of blindness. METHODS: A sample of 1466 young subjects (ranging from 3 to 22 years old, with a mean age of 11.21±3.63 years) was examined to assess their distance visual acuity (VA) and refractive error. The first screening examination, performed by trained volunteers, included visual acuity testing, autokeratometry and non-cycloplegic autorefraction. Inclusion criteria for a second complete cycloplegic eye examination by an optometrist were VA <20/25 (0.10 logMAR or 0.8 decimal) and/or corneal astigmatism ≥1.50 D. RESULTS: An uncorrected distance VA of 0 logMAR (1.0 decimal) was found in 89.2% of children. VA <20/25 and/or corneal astigmatism ≥1.50 D was found in 3.9% of children (n=57), with a prevalence of hyperopia of 5.2% (0.2% of the total) in this specific group. Furthermore, myopia (spherical equivalent ≤-0.5 D) was found in 37.7% of the refracted children (0.5% of the total). The prevalence of refractive astigmatism (cylinder ≤-1.50 D) was 15.8% (0.6% of the total). Visual impairment (VI, 0.05≤VA≤0.3) was found in 12 of the 114 refracted eyes (0.4% of the total). The main causes of VI were refractive error (58%), retinal problems (17%, 2/12), albinism (17%, 2/12) and unknown (8%, 1/12). CONCLUSION: A low prevalence of refractive error has been found in this rural area of Paraguay, with a higher prevalence of myopia than of hyperopia.

  7. A Case–Control Study Investigating Simulated Driving Errors in Ischemic Stroke and Subarachnoid Hemorrhage

    Directory of Open Access Journals (Sweden)

    Megan A. Hird

    2018-02-01

    Full Text Available Background: Stroke can affect a variety of cognitive, perceptual, and motor abilities that are important for safe driving. Results of studies assessing post-stroke driving ability are quite variable in the areas and degree of driving impairment among patients. This highlights the need to consider clinical characteristics, including stroke subtype, when assessing driving performance. Methods: We compared the simulated driving performance of 30 chronic stroke patients (>3 months), including 15 patients with ischemic stroke (IS) and 15 patients with subarachnoid hemorrhage (SAH), and 20 age-matched controls. A preliminary analysis was performed, subdividing IS patients into right (n = 8) and left (n = 6) hemispheric lesions and SAH patients into middle cerebral artery (MCA, n = 5) and anterior communicating artery (n = 6) territory. A secondary analysis was conducted to investigate the cognitive correlates of driving. Results: Nine patients (30%) exhibited impaired simulated driving performance, including four patients with IS (26.7%) and five patients with SAH (33.3%). Both patients with IS (2.3 vs. 0.3, U = 76, p < 0.05) and SAH (1.5 vs. 0.3, U = 45, p < 0.001) exhibited difficulty with lane maintenance (% distance out of lane) compared to controls. In addition, patients with IS exhibited difficulty with speed maintenance (% distance over speed limit; 8.9 vs. 4.1, U = 81, p < 0.05), whereas SAH patients exhibited difficulty with turning performance (total turning errors; 5.4 vs. 1.6, U = 39.5, p < 0.001). The Trail Making Test (TMT) and Useful Field of View test were significantly associated with lane maintenance among patients with IS (rs > 0.6, p < 0.05). No cognitive tests showed utility among patients with SAH. Conclusion: Both IS and SAH patients exhibited difficulty with lane maintenance. Patients with IS additionally exhibited difficulty with speed maintenance, whereas SAH patients exhibited difficulty with turning

  8. Comparison of different spatial transformations applied to EEG data: A case study of error processing.

    Science.gov (United States)

    Cohen, Michael X

    2015-09-01

    The purpose of this paper is to compare the effects of different spatial transformations applied to the same scalp-recorded EEG data. The spatial transformations applied are two referencing schemes (average and linked earlobes), the surface Laplacian, and beamforming (a distributed source localization procedure). EEG data were collected during a speeded reaction time task that provided a comparison of activity between error vs. correct responses. Analyses focused on time-frequency power, frequency band-specific inter-electrode connectivity, and within-subject cross-trial correlations between EEG activity and reaction time. Time-frequency power analyses showed similar patterns of midfrontal delta-theta power for errors compared to correct responses across all spatial transformations. Beamforming additionally revealed error-related anterior and lateral prefrontal beta-band activity. Within-subject brain-behavior correlations showed similar patterns of results across the spatial transformations, with the correlations being the weakest after beamforming. The most striking difference among the spatial transformations was seen in connectivity analyses: linked earlobe reference produced weak inter-site connectivity that was attributable to volume conduction (zero phase lag), while the average reference and Laplacian produced more interpretable connectivity results. Beamforming did not reveal any significant condition modulations of connectivity. Overall, these analyses show that some findings are robust to spatial transformations, while other findings, particularly those involving cross-trial analyses or connectivity, are more sensitive and may depend on the use of appropriate spatial transformations. Copyright © 2014 Elsevier B.V. All rights reserved.
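
    The two referencing schemes compared in this study are one-line linear transformations of the channels-by-time data matrix, as sketched below with invented data (the surface Laplacian and beamforming additionally require electrode geometry and a head model, so they are not shown):

        import numpy as np

        rng = np.random.default_rng(2)
        n_chan, n_time = 64, 1000
        eeg = rng.normal(size=(n_chan, n_time))   # invented raw data, channels x time

        # Average reference: subtract the instantaneous mean over all channels
        eeg_avg = eeg - eeg.mean(axis=0, keepdims=True)

        # Linked-earlobes reference: subtract the mean of the two earlobe
        # channels (indices 62 and 63 stand in for A1/A2 here)
        earlobes = eeg[[62, 63]].mean(axis=0, keepdims=True)
        eeg_ears = eeg - earlobes

        # The zero-phase-lag common signal left in every channel by the earlobe
        # reference is one reason connectivity results differed between schemes.
        print(eeg_avg.mean(axis=0)[:3])   # ~0 by construction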

  9. Geometric Algebra Computing

    CERN Document Server

    Corrochano, Eduardo Bayro

    2010-01-01

    This book presents contributions from a global selection of experts in the field. This useful text offers new insights and solutions for the development of theorems, algorithms and advanced methods for real-time applications across a range of disciplines. Written in an accessible style, the discussion of all applications is enhanced by the inclusion of numerous examples, figures and experimental analysis. Features: provides a thorough discussion of several tasks for image processing, pattern recognition, computer vision, robotics and computer graphics using the geometric algebra framework; int

  10. Geometric multipartite entanglement measures

    International Nuclear Information System (INIS)

    Paz-Silva, Gerardo A.; Reina, John H.

    2007-01-01

    Within the framework of constructions for quantifying entanglement, we build a natural scenario for the assembly of multipartite entanglement measures based on Hopf bundle-like mappings obtained through Clifford algebra representations. Then, given the non-factorizability of an arbitrary two-qubit density matrix, we give an alternate quantity that allows the construction of two types of entanglement measures based on their arithmetical and geometrical averages over all pairs of qubits in a register of size N, thus fully characterizing its degree and type of entanglement. We find that such an arithmetical average is both additive and strongly superadditive

  11. Geometric correlations and multifractals

    International Nuclear Information System (INIS)

    Amritkar, R.E.

    1991-07-01

    There are many situations where the usual statistical methods are not adequate to characterize correlations in the system. To characterize such situations we introduce mutual correlation dimensions which describe geometric correlations in the system. These dimensions allow us to distinguish between variables which are perfectly correlated with or without a phase lag, variables which are uncorrelated and variables which are partially correlated. We demonstrate the utility of our formalism by considering two examples from dynamical systems. The first example is about the loss of memory in chaotic signals and describes auto-correlations while the second example is about synchronization of chaotic signals and describes cross-correlations. (author). 19 refs, 6 figs

  12. Geometric phase effects in excited state dynamics through a conical intersection in large molecules: N-dimensional linear vibronic coupling model study

    Science.gov (United States)

    Li, Jiaru; Joubert-Doriol, Loïc; Izmaylov, Artur F.

    2017-08-01

    We investigate geometric phase (GP) effects in nonadiabatic transitions through a conical intersection (CI) in an N-dimensional linear vibronic coupling (ND-LVC) model. This model allows for the coordinate transformation encompassing all nonadiabatic effects within a two-dimensional (2D) subsystem, while the other N - 2 dimensions form a system of uncoupled harmonic oscillators identical for both electronic states and coupled bi-linearly with the subsystem coordinates. The 2D subsystem governs ultra-fast nonadiabatic dynamics through the CI and provides a convenient model for studying GP effects. Parameters of the original ND-LVC model define the Hamiltonian of the transformed 2D subsystem and thus influence GP effects directly. Our analysis reveals what values of ND-LVC parameters can introduce symmetry breaking in the 2D subsystem that diminishes GP effects.

  13. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system.

    Science.gov (United States)

    Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O

    2015-02-01

    To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified; those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. © The Author 2015. Published by Oxford University Press in association

  14. Exploring behavioural determinants relating to health professional reporting of medication errors: a qualitative study using the Theoretical Domains Framework.

    Science.gov (United States)

    Alqubaisi, Mai; Tonna, Antonella; Strath, Alison; Stewart, Derek

    2016-07-01

    Effective and efficient medication reporting processes are essential in promoting patient safety. Few qualitative studies have explored reporting of medication errors by health professionals, and none have made reference to behavioural theories. The objective was to describe and understand the behavioural determinants of health professional reporting of medication errors in the United Arab Emirates (UAE). This was a qualitative study comprising face-to-face, semi-structured interviews within three major medical/surgical hospitals of Abu Dhabi, the UAE. Health professionals were sampled purposively in strata of profession and years of experience. The semi-structured interview schedule focused on behavioural determinants around medication error reporting, facilitators, barriers and experiences. The Theoretical Domains Framework (TDF; a framework of theories of behaviour change) was used as a coding framework. Ethical approval was obtained from a UK university and all participating hospital ethics committees. Data saturation was achieved after interviewing ten nurses, ten pharmacists and nine physicians. Whilst it appeared that patient safety and organisational improvement goals and intentions were behavioural determinants which facilitated reporting, there were key determinants which deterred reporting. These included the beliefs of the consequences of reporting (lack of any feedback following reporting and impacting professional reputation, relationships and career progression), emotions (fear and worry) and issues related to the environmental context (time taken to report). These key behavioural determinants which negatively impact error reporting can facilitate the development of an intervention, centring on organisational safety and reporting culture, to enhance reporting effectiveness and efficiency.

  15. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    Science.gov (United States)

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-03-13

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimations. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared to the usage of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
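
    The classical-versus-Berkson distinction in this abstract can be demonstrated in a few lines: classical error in the exposure attenuates the regression slope by the reliability ratio lambda = var(x)/(var(x)+var(u)), while Berkson error leaves the slope unbiased. All parameters below are invented:

        import numpy as np

        rng = np.random.default_rng(3)
        n, beta = 100_000, 2.0
        sig_x, sig_u = 1.0, 0.8                     # true exposure SD, error SD

        def slope(x, y):
            c = np.cov(x, y)
            return c[0, 1] / c[0, 0]

        # Classical error: we observe w = x + u, but the outcome depends on x
        x = rng.normal(0, sig_x, n)
        y = beta * x + rng.normal(0, 1, n)
        w = x + rng.normal(0, sig_u, n)
        lam = sig_x**2 / (sig_x**2 + sig_u**2)      # expected attenuation factor
        print(slope(w, y), beta * lam)              # both ~1.22: attenuated

        # Berkson error: the true exposure scatters around the assigned value
        w_b = rng.normal(0, sig_x, n)               # e.g. fixed-site measurement
        x_b = w_b + rng.normal(0, sig_u, n)
        y_b = beta * x_b + rng.normal(0, 1, n)
        print(slope(w_b, y_b))                      # ~2.0: unbiased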

  16. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  17. Femoral intertrochanteric nail (fitn): a new short version design with an anterior curvature and a geometric match study using post-operative radiographs.

    Science.gov (United States)

    Chang, Shi-Min; Hu, Sun-Jun; Ma, Zhuo; Du, Shou-Chao; Zhang, Ying-Qi

    2018-02-01

    Femoral intertrochanteric fractures are usually fixed with short, straight cephalomedullary nails. However, mismatches between the nail and the femur frequently occur, such as tip impingement and tail protrusion. The authors designed a new type of short femoral intertrochanteric nail (fitn) with an anterior curvature (length=19.5cm, r=120cm) and herein report the geometric match study for the first 50 cases. A prospective case series of 50 geriatric patients suffering from unstable intertrochanteric fractures (AO/OTA 31 A2/3) was treated. There were 15 males and 35 females, with an average age of 82.3 years. Post-operatively, the nail entry point position in the sagittal greater trochanter (in three categories: anterior, central and posterior), the nail-tip position in the medullary canal (on a 5-grade scale) and the nail-tail level relative to the greater trochanter (on a 3-grade scale) were measured using X-ray films. For the nail entry point measurement, 5 cases were anterior (10%), 38 cases were central (76%), and 7 cases were posterior (14%). For the distal nail-tip position, 32 cases (64%) were located along the central canal axis, 13 cases (26%) were located anteriorly but did not contact the anterior inner cortex, 2 cases (4%) showed less than one-third anterior cortex thickness contact, and 3 cases (6%) were located posteriorly with no contact. For the proximal nail-tail level, there was no protrusion over the greater trochanter in 15 cases (30%), protrusion of less than 5mm in 29 cases (58%), and protrusion of more than 5mm in 6 cases (12%). The fit was very good, as 96% of cases showed no tip-cortex contact, and 88% of cases showed less than 5mm of proximal tail protrusion. The newly designed femoral intertrochanteric nail has a good geometric match with the femoral medullary canal and the proximal length in the Chinese population. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. The Applicability of Standard Error of Measurement and Minimal Detectable Change to Motor Learning Research-A Behavioral Study.

    Science.gov (United States)

    Furlan, Leonardo; Sterr, Annette

    2018-01-01

    Motor learning studies face the challenge of differentiating between real changes in performance and random measurement error. While the traditional p-value-based analyses of difference (e.g., t-tests, ANOVAs) provide information on the statistical significance of a reported change in performance scores, they do not inform as to the likely cause or origin of that change, that is, the contribution of both real modifications in performance and random measurement error to the reported change. One way of differentiating between real change and random measurement error is through the utilization of the statistics of standard error of measurement (SEM) and minimal detectable change (MDC). SEM is estimated from the standard deviation of a sample of scores at baseline and a test-retest reliability index of the measurement instrument or test employed. MDC, in turn, is estimated from SEM and a degree of confidence, usually 95%. The MDC value might be regarded as the minimum amount of change that needs to be observed for it to be considered a real change, or a change to which the contribution of real modifications in performance is likely to be greater than that of random measurement error. A computer-based motor task was designed to illustrate the applicability of SEM and MDC to motor learning research. Two studies were conducted with healthy participants. Study 1 assessed the test-retest reliability of the task and Study 2 consisted of a typical motor learning study, where participants practiced the task for five consecutive days. In Study 2, the data were analyzed with a traditional p-value-based analysis of difference (ANOVA) and also with SEM and MDC. The findings showed good test-retest reliability for the task and that the p-value-based analysis alone identified statistically significant improvements in performance over time even when the observed changes could in fact have been smaller than the MDC and thereby caused mostly by random measurement error, as opposed
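
    The two statistics are simple to compute. The sketch below follows the standard formulas the abstract describes, with SEM derived from the baseline standard deviation and a test-retest reliability index (ICC), and MDC at 95% confidence; the scores and ICC are invented:

        import numpy as np

        def sem_mdc(baseline_scores, icc, z=1.96):
            """Standard error of measurement and minimal detectable change.

            SEM   = SD * sqrt(1 - ICC)   (ICC = test-retest reliability)
            MDC95 = z * sqrt(2) * SEM    (sqrt(2): difference of two scores)
            """
            sd = np.std(baseline_scores, ddof=1)
            sem = sd * np.sqrt(1 - icc)
            return sem, z * np.sqrt(2) * sem

        # Invented example: baseline movement times (ms) and an ICC of 0.85
        scores = np.array([512, 498, 530, 475, 505, 520, 489, 540, 470, 515])
        sem, mdc95 = sem_mdc(scores, icc=0.85)
        print(f"SEM = {sem:.1f} ms, MDC95 = {mdc95:.1f} ms")
        # A practice-induced improvement smaller than MDC95 may be mostly
        # measurement error, even if an ANOVA flags it as significant.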

  19. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

    Directory of Open Access Journals (Sweden)

    Finch Stephen J

    2005-04-01

    Full Text Available Abstract Background Phenotype error causes reduction in power to detect genetic association. We present a quantification of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected as a case becomes infinitely large while the cost of misclassifying an affected as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
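
    The non-centrality-parameter machinery the abstract describes can be sketched as follows; the genotype frequencies, equal group sizes, and symmetric misclassification rate below are hypothetical placeholders, and the paper's full derivation additionally involves disease prevalence:

    ```python
    import numpy as np
    from scipy.stats import chi2, ncx2

    def power_chi2(p_null, p_alt, n_total, df, alpha=0.05):
        """Asymptotic power of the Pearson chi-square test.

        ncp = N * sum((p_alt - p_null)^2 / p_null) is the non-centrality
        parameter implied by the cell probabilities under H1.
        """
        ncp = n_total * np.sum((p_alt - p_null) ** 2 / p_null)
        crit = chi2.ppf(1 - alpha, df)
        return ncx2.sf(crit, df, ncp)

    # Toy 2x3 (case/control x genotype) table: a fraction eps of true cases
    # is recorded as controls and vice versa, pulling the two observed
    # genotype distributions together and shrinking power.
    geno_case = np.array([0.30, 0.50, 0.20])
    geno_ctrl = np.array([0.40, 0.45, 0.15])
    for eps in (0.0, 0.05, 0.10):
        obs_case = (1 - eps) * geno_case + eps * geno_ctrl
        obs_ctrl = (1 - eps) * geno_ctrl + eps * geno_case
        p_alt = 0.5 * np.vstack([obs_case, obs_ctrl]).ravel()  # equal groups
        marg = 0.5 * (obs_case + obs_ctrl)
        p_null = 0.5 * np.vstack([marg, marg]).ravel()
        print(eps, power_chi2(p_null, p_alt, n_total=1000, df=2))
    ```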

  20. Ergonomic study of biorhythm effect on the occurrence of human errors and accidents in automobile manufacturing industry

    OpenAIRE

    2012-01-01

    Background and Aim: According to biorhythm theory, when a cycle phase shifts from positive to negative or vice versa, people experience a critical, unstable day that makes them prone to error and accident. The purpose of this study is to determine this relationship in one of the automobile manufacturing industries. Materials and Methods: First, 1280 personnel incidents entered into the study were reviewed, and then the critical days of each biological cycle were determined using the software Easy Biorh...

  1. A study of redundancy management strategy for tetrad strap-down inertial systems. [error detection codes

    Science.gov (United States)

    Hruby, R. J.; Bjorkman, W. S.; Schmidt, S. F.; Carestia, R. A.

    1979-01-01

    Algorithms were developed that attempt to identify which sensor in a tetrad configuration has experienced a step failure. An algorithm is also described that provides a measure of the confidence with which the correct identification was made. Experimental results are presented from real-time tests conducted on a three-axis motion facility utilizing an ortho-skew tetrad strapdown inertial sensor package. The effects of prediction errors and of quantization on correct failure identification are discussed as well as an algorithm for detecting second failures through prediction.
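
    As a rough illustration of the redundancy-management idea (not the authors' algorithms), the sketch below forms a parity residual for a hypothetical tetrad geometry to detect a step failure, and uses an external rate prediction, as the abstract mentions, to point at the suspect sensor:

    ```python
    import numpy as np
    from scipy.linalg import null_space

    # Hypothetical tetrad geometry: four single-axis gyros, three orthogonal
    # axes plus one skewed axis (not the study's ortho-skew package).
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 1.0, 1.0]])
    H[3] /= np.linalg.norm(H[3])

    v = null_space(H.T).ravel()                      # parity direction: v @ H == 0

    rng = np.random.default_rng(1)
    omega = np.array([0.02, -0.01, 0.03])            # true body rate (rad/s)
    fault = np.zeros(4)
    fault[2] = 5e-3                                  # step failure on sensor 2
    m = H @ omega + rng.normal(0.0, 1e-4, 4) + fault # tetrad measurements

    # Detection: the parity scalar cancels omega, so a large magnitude flags
    # an inconsistency among the four sensors.
    print("parity residual:", v @ m)

    # Isolation (toy): compare each output with the value predicted from an
    # external rate estimate; prediction error degrades this test, as studied.
    omega_pred = omega + rng.normal(0.0, 2e-4, 3)
    print("suspect sensor:", int(np.argmax(np.abs(m - H @ omega_pred))))
    ```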

  2. Development of a simulation program to study error propagation in the reprocessing input accountancy measurements

    International Nuclear Information System (INIS)

    Sanfilippo, L.

    1987-01-01

    A physical model and a computer program have been developed to simulate all the measurement operations involved in the Isotopic Dilution Analysis technique currently applied in the Volume - Concentration method for the Reprocessing Input Accountancy, together with their errors or uncertainties. The simulator makes it easy to solve a number of problems related to the measurement activities of the plant operator and the inspector. The program, written in Fortran 77, is based on a particular Monte Carlo technique named "Random Sampling"; a full description of the code is reported.
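
    The "Random Sampling" idea, i.e. propagating random and systematic measurement uncertainties by Monte Carlo, can be sketched as below; the volume-concentration figures and error sizes are invented for illustration and do not reproduce the Fortran 77 simulator:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000  # Monte Carlo trials ("random sampling")

    # Hypothetical true values for a volume-concentration measurement
    V_true, C_true = 4000.0, 2.5        # tank volume (L), Pu conc. (g/L)

    # Relative error sizes: random (per reading) and systematic (per campaign)
    vol_rand, vol_sys = 0.003, 0.002
    conc_rand, conc_sys = 0.005, 0.003

    sys_v = rng.normal(1.0, vol_sys, N)   # systematic factors, one draw per
    sys_c = rng.normal(1.0, conc_sys, N)  # simulated measurement campaign
    V = V_true * sys_v * rng.normal(1.0, vol_rand, N)
    C = C_true * sys_c * rng.normal(1.0, conc_rand, N)
    mass = V * C                          # accounted Pu mass (g)

    print(f"mean = {mass.mean():.1f} g, "
          f"rel. std = {mass.std() / mass.mean():.4%}")
    ```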

  3. Geometric Rationalization for Freeform Architecture

    KAUST Repository

    Jiang, Caigui

    2016-06-20

    The emergence of freeform architecture provides interesting geometric challenges with regards to the design and manufacturing of large-scale structures. To design these architectural structures, we have to consider two types of constraints. First, aesthetic constraints are important because the buildings have to be visually impressive. Second, functional constraints are important for the performance of a building and its efficient construction. This thesis contributes to the area of architectural geometry. Specifically, we are interested in the geometric rationalization of freeform architecture with the goal of combining aesthetic and functional constraints and construction requirements. Aesthetic requirements typically come from designers and architects. To obtain visually pleasing structures, they favor smoothness of the building shape, but also smoothness of the visible patterns on the surface. Functional requirements typically come from the engineers involved in the construction process. For example, covering freeform structures using planar panels is much cheaper than using non-planar ones. Further, constructed buildings have to be stable and should not collapse. In this thesis, we explore the geometric rationalization of freeform architecture using four specific example problems inspired by real life applications. We achieve our results by developing optimization algorithms and a theoretical study of the underlying geometrical structure of the problems. The four example problems are the following: (1) The design of shading and lighting systems which are torsion-free structures with planar beams based on quad meshes. They satisfy the functionality requirements of preventing light from going inside a building as shading systems or reflecting light into a building as lighting systems. (2) The design of freeform honeycomb structures that are constructed based on hex-dominant meshes with a planar beam mounted along each edge. The beams intersect without

  4. A feasibility study of mutual information based setup error estimation for radiotherapy

    International Nuclear Information System (INIS)

    Kim, Jeongtae; Fessler, Jeffrey A.; Lam, Kwok L.; Balter, James M.; Haken, Randall K. ten

    2001-01-01

    We have investigated a fully automatic setup error estimation method that aligns DRRs (digitally reconstructed radiographs) from a three-dimensional planning computed tomography image onto two-dimensional radiographs that are acquired in a treatment room. We chose an MI (mutual information)-based image registration method, hoping for robustness to intensity differences between the DRRs and the radiographs. The MI-based estimator is fully automatic since it is based on the image intensity values without segmentation. Using 10 repeated scans of an anthropomorphic chest phantom in one position and two single scans in two different positions, we evaluated the performance of the proposed method and a correlation-based method against the setup error determined by a fiducial marker-based method. The mean differences between the proposed method and the fiducial marker-based method were smaller than 1 mm for translational parameters and 0.8 degree for rotational parameters. The standard deviations of estimates from the proposed method due to detector noise were smaller than 0.3 mm and 0.07 degree for the translational and rotational parameters, respectively
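
    A minimal sketch of the intensity-based MI criterion such a registration maximizes, assuming two equally sized 2D images and a simple joint-histogram estimate (the study's DRR generation and transform search are omitted; the function name and bin count are our choices):

    ```python
    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        """MI (in nats) of two images from their joint intensity histogram."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0  # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    # Toy check: MI of an image with itself exceeds MI with a shifted copy,
    # which is the property a registration search exploits.
    rng = np.random.default_rng(0)
    drr = rng.random((128, 128))
    shifted = np.roll(drr, 5, axis=0)
    print(mutual_information(drr, drr), mutual_information(drr, shifted))
    ```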

  5. ERRORS MEASUREMENT OF INTERPOLATION METHODS FOR GEOID MODELS: STUDY CASE IN THE BRAZILIAN REGION

    Directory of Open Access Journals (Sweden)

    Daniel Arana

    Full Text Available Abstract: The geoid is an equipotential surface regarded as the altimetric reference for geodetic surveys and it, therefore, has several practical applications for engineers. In recent decades the geodetic community has concentrated efforts on the development of highly accurate geoid models through modern techniques. These models are supplied as regular grids, from which users need to interpolate. Yet, little information is available regarding the most appropriate interpolation method for extracting information from the regular grid of a geoid model. The use of an interpolator that does not represent the geoid surface appropriately can impair the quality of geoid undulations and consequently the height transformation. This work aims to quantify the magnitude of the error that comes from interpolating a regular mesh of geoid models. The analysis consisted of performing a comparison between the interpolation of the MAPGEO2015 program and three interpolation methods: bilinear, cubic spline and Radial Basis Function neural networks. As a result of the experiments, it was concluded that 2.5 cm of the 18 cm error of the MAPGEO2015 validation is caused by the use of interpolations in the 5'x5' grid.
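
    Of the compared interpolators, the bilinear case is easy to make concrete. The sketch below assumes a regular latitude-longitude grid of geoid undulations; the grid values and query point are invented, and real use would read the model grid (e.g. MAPGEO2015 output) instead:

    ```python
    import numpy as np

    def bilinear(grid, lat, lon, lat0, lon0, step):
        """Bilinear interpolation of geoid undulation N on a regular grid.

        grid[i, j] holds N at latitude lat0 + i*step, longitude lon0 + j*step
        (a 5'x5' grid would use step = 5/60 degrees).
        """
        fi = (lat - lat0) / step
        fj = (lon - lon0) / step
        i, j = int(np.floor(fi)), int(np.floor(fj))
        di, dj = fi - i, fj - j
        return ((1 - di) * (1 - dj) * grid[i, j]
                + (1 - di) * dj * grid[i, j + 1]
                + di * (1 - dj) * grid[i + 1, j]
                + di * dj * grid[i + 1, j + 1])

    # Toy 2x2 patch of geoid undulations (metres)
    grid = np.array([[-3.10, -3.05],
                     [-3.00, -2.90]])
    print(bilinear(grid, lat=-22.51, lon=-47.48,
                   lat0=-22.55, lon0=-47.50, step=5 / 60))
    ```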

  6. Limit of detection in the presence of instrumental and non-instrumental errors: study of the possible sources of error and application to the analysis of 41 elements at trace levels by inductively coupled plasma-mass spectrometry technique

    International Nuclear Information System (INIS)

    Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Tapparo, Andrea; Pastore, Paolo

    2015-01-01

    In this paper the detection limit was estimated when signals were affected by two error contributions, namely instrumental errors and operational (non-instrumental) errors. The detection limit was theoretically obtained following the hypothesis-testing schema implemented with the calibration curve methodology. The experimental calibration design was based on J standards measured I times, with non-instrumental errors affecting each standard systematically but randomly among the J levels. A two-component variance regression was performed to determine the calibration curve and to define the detection limit in these conditions. The detection limit values obtained from the calibration of 41 elements at trace levels by ICP-MS were larger than those obtainable from a one-component variance regression. The role of reagent impurities in the instrumental errors was ascertained and taken into account. Environmental pollution was studied as a source of non-instrumental errors. The environmental pollution role was evaluated by the Principal Component Analysis technique (PCA) applied to a series of nine calibrations performed over fourteen months. The influence of the seasonality of the environmental pollution on the detection limit was evident for many elements usually present in urban air particulate. The obtained results clearly indicated the need to use the two-component variance regression approach for the calibration of all the elements usually present in the environment at significant concentration levels. - Highlights: • Limit of detection was obtained considering a two-variance-component regression. • Calibration data may be affected by errors from instrumental and operational conditions. • Calibration model was applied to determine 41 elements at trace level by ICP-MS. • Non-instrumental errors were evidenced by PCA analysis
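
    A rough sketch of the two-variance-component idea, assuming J standards each read I times, with a level-wise (non-instrumental) error drawn once per standard; the 3.3·s0/slope detection-limit approximation stands in for the paper's full hypothesis-testing formulation, and all numbers are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    J, I = 6, 5                         # J standards, I replicate readings
    conc = np.linspace(0.0, 10.0, J)    # standard concentrations (arbitrary)
    slope_true, intercept_true = 100.0, 20.0
    sd_instr, sd_noninstr = 15.0, 10.0  # instrumental vs level-wise errors

    # Non-instrumental error hits each standard systematically (one draw per
    # level); instrumental error hits every reading independently.
    level_shift = rng.normal(0.0, sd_noninstr, J)
    y = (intercept_true + slope_true * conc)[:, None] \
        + level_shift[:, None] + rng.normal(0.0, sd_instr, (J, I))

    # One-way ANOVA split of the variance into the two components
    means = y.mean(axis=1)
    coef = np.polyfit(conc, means, 1)              # calibration line fit
    ms_within = y.var(axis=1, ddof=1).mean()       # instrumental component
    resid = means - np.polyval(coef, conc)
    ms_between = I * resid.var(ddof=1)             # includes level component
    var_noninstr = max((ms_between - ms_within) / I, 0.0)

    # Detection limit from the *total* blank variance; using only the
    # within component would be optimistically small, as the paper notes.
    s0 = np.sqrt(ms_within / I + var_noninstr)
    print("LOD ~", 3.3 * s0 / coef[0], "concentration units")
    ```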

  7. Methods for Estimation of Radiation Risk in Epidemiological Studies Accounting for Classical and Berkson Errors in Doses

    KAUST Repository

    Kukush, Alexander

    2011-01-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor with pr(Y=1 | D) = R/(1+R), R = λ(0) + EAR·D, where λ(0) is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D(i,mes) = f(i)·Q(i,mes)/M(i,mes). Here, Q(i,mes) is the measured content of radioiodine in the thyroid gland of person i at time t(mes), M(i,mes) is the estimate of the thyroid mass, and f(i) is the normalizing multiplier. The Q(i) and M(i) are measured with multiplicative errors V(i,Q) and V(i,M), so that Q(i,mes) = Q(i,tr)·V(i,Q) (this is a classical measurement error model) and M(i,tr) = M(i,mes)·V(i,M) (this is a Berkson measurement error model). Here, Q(i,tr) is the true content of radioactivity in the thyroid gland, and M(i,tr) is the true value of the thyroid mass. The error in f(i) is much smaller than the errors in (Q(i,mes), M(i,mes)) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ(0) and EAR. The simulation study is based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were set to values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.

  8. Methods for estimation of radiation risk in epidemiological studies accounting for classical and Berkson errors in doses.

    Science.gov (United States)

    Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre

    2011-02-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor with pr(Y=1 | D) = R/(1+R), R = λ(0) + EAR·D, where λ(0) is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D(i,mes) = f(i)·Q(i,mes)/M(i,mes). Here, Q(i,mes) is the measured content of radioiodine in the thyroid gland of person i at time t(mes), M(i,mes) is the estimate of the thyroid mass, and f(i) is the normalizing multiplier. The Q(i) and M(i) are measured with multiplicative errors V(i,Q) and V(i,M), so that Q(i,mes) = Q(i,tr)·V(i,Q) (this is a classical measurement error model) and M(i,tr) = M(i,mes)·V(i,M) (this is a Berkson measurement error model). Here, Q(i,tr) is the true content of radioactivity in the thyroid gland, and M(i,tr) is the true value of the thyroid mass. The error in f(i) is much smaller than the errors in (Q(i,mes), M(i,mes)) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ(0) and EAR. The simulation study is based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were set to values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
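
    The qualitative difference between the two error structures is easy to demonstrate by simulation. The sketch below uses a linear risk model as a stand-in for the logistic EAR model above, with invented parameters: naive regression on a classically mismeasured dose attenuates the risk coefficient, while Berkson error (with unit-mean multiplicative error) leaves it nearly unbiased:

    ```python
    import numpy as np

    def slope(x, y):
        xc, yc = x - x.mean(), y - y.mean()
        return (xc * yc).mean() / (xc * xc).mean()

    rng = np.random.default_rng(3)
    n = 200_000
    lam0, EAR = 0.5, 2.0                # illustrative, not fitted values

    # Classical error: D_obs = D_true * V, V independent of the true dose
    D_true = rng.lognormal(-1.0, 0.8, n)
    y = lam0 + EAR * D_true + rng.normal(0.0, 0.2, n)
    D_obs = D_true * rng.lognormal(0.0, 0.5, n)
    print("classical, naive slope:", slope(D_obs, y))      # attenuated

    # Berkson error: D_true = D_assigned * V, V independent of the assigned
    # dose; mu = -sigma^2/2 makes E[V] = 1 for the lognormal error
    D_assigned = rng.lognormal(-1.0, 0.8, n)
    D_true_b = D_assigned * rng.lognormal(-0.125, 0.5, n)
    y_b = lam0 + EAR * D_true_b + rng.normal(0.0, 0.2, n)
    print("Berkson, naive slope:", slope(D_assigned, y_b))  # close to EAR
    ```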

  9. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in the analysis of algorithms with those of classical geometry

  10. Geometrization of quantum physics

    International Nuclear Information System (INIS)

    Ol'khov, O.A.

    2009-01-01

    It is shown that the Dirac equation for a free particle can be considered as a description of a specific distortion of the Euclidean geometry of space (a space topological defect). This approach is based on the possibility of interpreting the wave function as a vector realizing a representation of the fundamental group of the closed topological space-time 4-manifold. Mass and spin appear to be topological invariants. Such a concept explains all the so-called 'strange' properties of the quantum formalism: probabilities, wave-particle duality, nonlocal instantaneous correlation between noninteracting particles (the EPR paradox) and so on. Acceptance of the suggested geometrical concept means rejection of the atomistic concept where all matter is considered as consisting of ever smaller elementary particles. There are no particles a priori, before measurement: the notion of particles appears as a result of the classical interpretation of the contact of the region of curved space with a device

  11. Geometrization of quantum physics

    Science.gov (United States)

    Ol'Khov, O. A.

    2009-12-01

    It is shown that the Dirac equation for a free particle can be considered as a description of a specific distortion of the Euclidean geometry of space (a space topological defect). This approach is based on the possibility of interpreting the wave function as a vector realizing a representation of the fundamental group of the closed topological space-time 4-manifold. Mass and spin appear to be topological invariants. Such a concept explains all the so-called “strange” properties of the quantum formalism: probabilities, wave-particle duality, nonlocal instantaneous correlation between noninteracting particles (the EPR paradox) and so on. Acceptance of the suggested geometrical concept means rejection of the atomistic concept where all matter is considered as consisting of ever smaller elementary particles. There are no particles a priori, before measurement: the notion of particles appears as a result of the classical interpretation of the contact of the region of curved space with a device.

  12. Geometrical Image Transforms

    OpenAIRE

    Havelka, Jan

    2008-01-01

    This master's thesis deals with the acceleration of geometrical image transformations using the GPU and the NVIDIA (R) CUDA (TM) architecture. Time-critical parts of the code are moved to the GPU and executed in parallel. One of the results is a demonstration application for comparing the performance of both architectures: the CPU, and the GPU in combination with the CPU. Highly optimized algorithms from Intel's OpenCV library are used for the reference implementation.

  13. Optical design and studies of a tiled single grating pulse compressor for enhanced parametric space and compensation of tiling errors

    Science.gov (United States)

    Daiya, D.; Patidar, R. K.; Sharma, J.; Joshi, A. S.; Naik, P. A.; Gupta, P. D.

    2017-04-01

    A new optical design of a tiled single grating pulse compressor has been proposed, set up and studied. The parametric space, i.e. the laser beam diameters that can be accommodated in the pulse compressor for the given range of compression lengths, has been calculated and shown to offer up to a two-fold enhancement in comparison to our earlier proposed optical designs. The new optical design of the tiled single grating pulse compressor has the additional advantage of self-compensation of various tiling errors, like longitudinal and lateral piston, tip, and groove density mismatch, compared to the earlier designs. Experiments have been carried out for temporal compression of 650 ps positively chirped laser pulses, at a central wavelength of 1054 nm, down to 235 fs in the tiled grating pulse compressor set up with the proposed design. Further, far-field studies have been performed to show that the desired compensation of the tiling errors takes place in the new compressor.

  14. Assessing type I error and power of multistate Markov models for panel data-A simulation study

    OpenAIRE

    Cassarly, Christy; Martin, Renee’ H.; Chimowitz, Marc; Peña, Edsel A.; Ramakrishnan, Viswanathan; Palesch, Yuko Y.

    2016-01-01

    Ordinal outcomes collected at multiple follow-up visits are common in clinical trials. Sometimes, one visit is chosen for the primary analysis and the scale is dichotomized, amounting to a loss of information. Multistate Markov models describe how a process moves between states over time. Here, simulation studies are performed to investigate the type I error and power characteristics of multistate Markov models for panel data with limited non-adjacent state transitions. The results suggest that ...

  15. Estimation of glucose kinetics in fetal-maternal studies: Potential errors, solutions, and limitations

    International Nuclear Information System (INIS)

    Menon, R.K.; Bloch, C.A.; Sperling, M.A.

    1990-01-01

    We investigated whether errors occur in the estimation of ovine maternal-fetal glucose (Glc) kinetics using the isotope dilution technique when the Glc pool is rapidly expanded by exogenous (protocol A) or endogenous (protocol C) Glc entry, and sought possible solutions (protocol B). In protocol A (n = 8), after attaining steady-state Glc specific activity (SA) by [U-14C]glucose (period 1), infusion of Glc (period 2) predictably decreased Glc SA, whereas [U-14C]glucose concentration unexpectedly rose from 7,208 +/- 367 (mean +/- SE) in period 1 to 8,558 +/- 308 disintegrations/min (dpm) per ml in period 2 (P less than 0.01). Fetal endogenous Glc production (EGP) was negligible during period 1 (0.44 +/- 1.0), but yielded a physiologically impossible negative value of -2.1 +/- 0.72 mg.kg-1.min-1 during period 2. When the fall in Glc SA during Glc infusion was prevented by addition of [U-14C]glucose admixed with the exogenous Glc (protocol B; n = 7), EGP was no longer negative. In protocol C (n = 6), sequential infusions of four increasing doses of epinephrine serially decreased SA, whereas tracer Glc increased from 7,483 +/- 608 to 11,525 +/- 992 dpm/ml plasma (P less than 0.05), imposing an obligatory underestimation of EGP. Thus a tracer mixing problem leads to erroneous estimations of fetal Glc utilization and Glc production via the three-compartment model in sheep when the Glc pool is expanded exogenously or endogenously. These errors can be minimized by maintaining the Glc SA relatively constant

  16. Report from LHC MDs 1391 and 1483: Tests of new methods for study of nonlinear errors in the LHC experimental insertions

    CERN Document Server

    Maclean, Ewen Hamish; Fuchsberger, Kajetan; Giovannozzi, Massimo; Persson, Tobias Hakan Bjorn; Tomas Garcia, Rogelio; CERN. Geneva. ATS Department

    2017-01-01

    Nonlinear errors in experimental insertions can pose a significant challenge to the operability of low-β∗ colliders. Previously such errors in the LHC have been studied via their feed-down to tune and coupling under the influence of the nominal crossing angle bumps. This method has proved useful in validating various components of the magnetic model. Understanding and correcting those errors for which significant discrepancies exist with the magnetic model, however, will require further development of this technique, in addition to the application of novel methods. In 2016 studies were performed to test new methods for the study of the IR nonlinear errors.

  17. Comparison of Geometric Design of a Brand of Stainless Steel K-Files: An In Vitro Study.

    Science.gov (United States)

    Saeedullah, Maryam; Husain, Syed Wilayat

    2018-04-01

    The purpose of this experimental study was to determine the diametric variations of a brand of handheld stainless-steel K-files, acquired from different countries, in accordance with the available standards. 20 Mani stainless-steel K-files of identical size (ISO #25) were acquired from Pakistan and designated as Group A, while 20 Mani K-files were purchased from London, UK, and designated as Group B. Files were assessed using a Nikon B 24V profile projector. Data were statistically compared with ISO 3630:1 and ADA 101 by a one-sample t-test. A significant difference was found between Groups A and B. The average discrepancy of Group A fell within the tolerance limit, while that of Group B exceeded it. Findings in this study call attention to the need for adherence to the dimensional standards of stainless-steel endodontic files.
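
    The statistical comparison is a plain one-sample t-test of measured diameters against a nominal value. A sketch with hypothetical readings (the abstract gives no raw data; ISO size 25 corresponds to a 0.25 mm nominal tip diameter):

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical tip-diameter readings (mm) for a sample of ISO #25 K-files
    tip_diam = np.array([0.252, 0.248, 0.255, 0.251, 0.249,
                         0.253, 0.256, 0.250, 0.247, 0.254])
    t, p = stats.ttest_1samp(tip_diam, popmean=0.25)
    print(f"t = {t:.3f}, p = {p:.4f}")
    ```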

  18. Errors in dual x-ray beam differential absorptiometry

    International Nuclear Information System (INIS)

    Bolin, F.; Preuss, L.; Gilbert, K.; Bugenis, C.

    1977-01-01

    Errors pertinent to the dual beam absorptiometry system have been studied and five areas are covered in detail: (1) scattering, in which a computer analysis of multiple scattering shows little error due to this effect; (2) geometrical configuration effects, in which the slope of the sample is shown to influence the accuracy of the measurement; (3) Poisson variations, wherein it is shown that a simultaneous reduction can be obtained in both dosage and statistical error; (4) absorption coefficients, in which variation among absorption coefficient compilations is shown to have a critical effect on the interpretation of experimental data; and (5) filtering, wherein the need for filters on dual beam systems using a characteristic x-ray output is shown. A zero-filter system is outlined

  19. Strain at a semiconductor nanowire-substrate interface studied using geometric phase analysis, convergent beam electron diffraction and nanobeam diffraction

    DEFF Research Database (Denmark)

    Persson, Johan Mikael; Wagner, Jakob Birkedal; Dunin-Borkowski, Rafal E.

    2011-01-01

    Semiconductor nanowires have been studied using electron microscopy since the early days of nanowire growth, e.g. [1]. A common approach for analysing nanowires using transmission electron microscopy (TEM) involves removing them from their substrate and subsequently transferring them onto carbon...... with CBED and NBED [4,5] have shown a high degree of consistency. Strain has previously only been measured in nanowires removed from their substrate [6], or only using GPA [7]. The sample used for the present investigation was an InP nanowire grown on a Si substrate using metal organic vapor phase...

  20. Education influences the association between genetic variants and refractive error: a meta-analysis of five Singapore studies

    Science.gov (United States)

    Fan, Qiao; Wojciechowski, Robert; Kamran Ikram, M.; Cheng, Ching-Yu; Chen, Peng; Zhou, Xin; Pan, Chen-Wei; Khor, Chiea-Chuen; Tai, E-Shyong; Aung, Tin; Wong, Tien-Yin; Teo, Yik-Ying; Saw, Seang-Mei

    2014-01-01

    Refractive error is a complex ocular trait governed by both genetic and environmental factors and possibly their interplay. Thus far, data on the interaction between genetic variants and environmental risk factors for refractive errors are largely lacking. By using findings from recent genome-wide association studies, we investigated whether the main environmental factor, education, modifies the effect of 40 single nucleotide polymorphisms on refractive error among 8461 adults from five studies including ethnic Chinese, Malay and Indian residents of Singapore. Three genetic loci SHISA6-DNAH9, GJD2 and ZMAT4-SFRP1 exhibited a strong association with myopic refractive error in individuals with higher secondary or university education (SHISA6-DNAH9: rs2969180 A allele, β = −0.33 D, P = 3.6 × 10−6; GJD2: rs524952 A allele, β = −0.31 D, P = 1.68 × 10−5; ZMAT4-SFRP1: rs2137277 A allele, β = −0.47 D, P = 1.68 × 10−4), whereas the association at these loci was non-significant or of borderline significance in those with lower secondary education or below (P for interaction: 3.82 × 10−3–4.78 × 10−4). The evidence for interaction was strengthened when combining the genetic effects of these three loci (P for interaction = 4.40 × 10−8), and significant interactions with education were also observed for axial length and myopia. Our study shows that low level of education may attenuate the effect of risk alleles on myopia. These findings further underline the role of gene–environment interactions in the pathophysiology of myopia. PMID:24014484
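
    The interaction analysis amounts to testing a SNP-by-education product term in a regression of refractive error. A simulated sketch of that test; the allele frequency, effect size, and noise level are invented to mimic the reported pattern, not taken from the study:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 5000
    allele = rng.binomial(2, 0.4, n)        # risk-allele count at one SNP
    higher_edu = rng.binomial(1, 0.5, n)    # 1 = higher secondary/university

    # Simulated refractive error (dioptres): the allele effect expresses
    # itself only in the higher-education group, as reported for SHISA6-DNAH9
    sph_eq = -0.5 - 0.33 * allele * higher_edu + rng.normal(0.0, 1.5, n)

    X = sm.add_constant(np.column_stack([allele, higher_edu,
                                         allele * higher_edu]))
    fit = sm.OLS(sph_eq, X).fit()
    print(fit.params)       # const, allele, education, interaction
    print(fit.pvalues[3])   # P for the gene-education interaction term
    ```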

  1. Inclusion of geometric uncertainties in treatment plan evaluation

    NARCIS (Netherlands)

    van Herk, Marcel; Remeijer, Peter; Lebesque, Joos V.

    2002-01-01

    PURPOSE: To correctly evaluate realistic treatment plans in terms of absorbed dose to the clinical target volume (CTV), equivalent uniform dose (EUD), and tumor control probability (TCP) in the presence of execution (random) and preparation (systematic) geometric errors. MATERIALS AND METHODS: The

  2. Computational studies on the effect of geometric parameters on the performance of a solar chimney power plant

    International Nuclear Information System (INIS)

    Patel, Sandeep K.; Prasad, Deepak; Ahmed, M. Rafiuddin

    2014-01-01

    Graphical abstract: This work is aimed at optimizing the geometry of the major components of a solar chimney power plant using ANSYS-CFX. The collector inlet opening, collector height, collector outlet diameter, the chimney throat diameter and the chimney divergence angle were varied for the same chimney height and collector diameter and the performance of the plant was studied in terms of the available power and an optimum configuration was obtained. The temperature and velocity variations in the collector and along the chimney height were also studied. - Highlights: • Geometry of the major components of a solar chimney power plant optimized using CFX. • Collector inlet opening, height, outlet diameter, chimney throat diameter, and the chimney divergence angle were varied. • Temperature and velocity variations and available power were obtained for different configurations. • Optimum values of collector outlet height and diameter and the divergence angle were obtained. - Abstract: A solar chimney power plant (SCPP) is a renewable-energy power plant that transforms solar energy into electricity. The SCPP consists of three essential elements – solar air collector, chimney tower, and wind turbine(s). The present work is aimed at optimizing the geometry of the major components of the SCPP using a computational fluid dynamics (CFD) software ANSYS-CFX to study and improve the flow characteristics inside the SCPP. The overall chimney height and the collector diameter of the SCPP were kept constant at 10 m and 8 m respectively. The collector inlet opening was varied from 0.05 m to 0.2 m. The collector outlet diameter was also varied from 0.6 m to 1 m. These modified collectors were tested with chimneys of different divergence angles (0°–3°) and also different chimney inlet openings of 0.6 m to 1 m. The diameter of the chimney was also varied from 0.25 m to 0.3 m. Based on the CFX computational results, the best configuration was achieved using the chimney

  3. The preliminary study on the inductory signal triggering the error-prone DNA repair function in mammalian cells

    International Nuclear Information System (INIS)

    Su Zaozhong; Luo Zuyu

    1989-01-01

    The nature of the signal triggering the error-prone DNA repair function in mammalian cells was studied by addressing two questions: (1) Does the inducing signal result from the direct hitting of cellular targets by DNA-damaging agents? (2) Is inhibition of DNA replication a prerequisite condition for the triggering effect? Thus, ultraviolet (UV)-irradiated exogenous DNAs were introduced into human and rat cells by transfection. The results showed that this transfection was able to induce the error-prone repair as efficiently as direct UV-irradiation of cells. Moreover, the two inductory treatments expressed similar kinetics and dose-responses. Whether or not the introduced DNAs initiated replication, they exhibited the inductory activity. Therefore, it can be concluded that DNA lesions themselves, not the direct interaction of DNA-damaging agents with specific cellular targets, serve as the triggering signal for the inductory process. Inhibition of DNA replication is not a prerequisite for the inductory signal

  4. The analysis of human error as causes in the maintenance of machines: a case study in mining companies

    Directory of Open Access Journals (Sweden)

    Kovacevic, Srdja

    2016-12-01

    Full Text Available This paper describes a two-step method used to analyse the factors and aspects influencing human error during the maintenance of mining machines. The first step is a cause-effect analysis, supported by brainstorming, in which five factors and 21 aspects are identified. In the second step, the group fuzzy analytic hierarchy process is used to rank the identified factors and aspects. A case study is done on mining companies in Serbia. The key aspects are ranked according to an analysis that included experts who assess risks in mining companies (a maintenance engineer, a technologist, an ergonomist, a psychologist, and an organisational scientist). Failure to follow technical maintenance instructions, poor organisation of the training process, inadequate diagnostic equipment, and a lack of understanding of the work process are identified as the most important causes of human error.

  5. A Benchmark Study on Error Assessment and Quality Control of CCS Reads Derived from the PacBio RS.

    Science.gov (United States)

    Jiao, Xiaoli; Zheng, Xin; Ma, Liang; Kutty, Geetha; Gogineni, Emile; Sun, Qiang; Sherman, Brad T; Hu, Xiaojun; Jones, Kristine; Raley, Castle; Tran, Bao; Munroe, David J; Stephens, Robert; Liang, Dun; Imamichi, Tomozumi; Kovacs, Joseph A; Lempicki, Richard A; Huang, Da Wei

    2013-07-31

    PacBio RS, a newly emerging third-generation DNA sequencing platform, is based on a real-time, single-molecule, nano-nitch sequencing technology that can generate very long reads (up to 20 kb), in contrast to the shorter reads produced by the first- and second-generation sequencing technologies. As a new platform, it is important to assess the sequencing error rate, as well as the quality control (QC) parameters associated with the PacBio sequence data. In this study, a mixture of 10 previously known, closely related DNA amplicons was sequenced using the PacBio RS sequencing platform. After aligning Circular Consensus Sequence (CCS) reads derived from the above sequencing experiment to the known reference sequences, we found that the median error rate was 2.5% without read QC, which improved to 1.3% with an SVM-based multi-parameter QC method. In addition, a de novo assembly was used as a downstream application to evaluate the effects of different QC approaches. This benchmark study indicates that even though CCS reads are post-error-corrected, it is still necessary to perform appropriate QC on CCS reads in order to produce successful downstream bioinformatics analytical results.

  6. Establishing error management process for power plants. A study on entire picture of the process and introduction stages

    International Nuclear Information System (INIS)

    Hirotsu, Yuko; Fujimoto, Junzo; Sugihara, Yoshikuni; Takeda, Daisuke

    2009-01-01

    The purpose of this study is to establish a management process for a power plant to proactively find actual and/or potential problems that may possibly cause a serious human factor event, and to take effective measures. Firstly, detailed steps for an error management process utilizing human factor event data were examined through an application at a plant. Secondly, basic steps for evaluating the degree of execution, enhancement and usefulness of each human performance activity and for identifying unsafe acts and uneasy human performance states were established based on a literature search and our experience with plant evaluation. Finally, an entire picture of the error management process was proposed by unifying the steps studied above. In addition, as stages for introducing and establishing the proposed error management process at a power plant, a basic idea was introduced of supplementing insufficient parts of the process with a phased approach, after comparing the proposed management process with the existing human performance activities at the plant. (author)

  7. Numerical study of the influence of geometrical characteristics of a vertical helical coil on a bubbly flow

    Science.gov (United States)

    Saffari, H.; Moosavi, R.

    2014-11-01

    In this article, turbulent single-phase and two-phase (air-water) bubbly fluid flows in a vertical helical coil are analyzed by using computational fluid dynamics (CFD). The effects of the pipe diameter, coil diameter, coil pitch, Reynolds number, and void fraction on the pressure loss, friction coefficient, and flow characteristics are investigated. The Eulerian-Eulerian model is used in this work to simulate the two-phase fluid flow. Three-dimensional governing equations of continuity, momentum, and energy are solved by using the finite volume method. The k-ε turbulence model is used to calculate turbulence fluctuations. The SIMPLE algorithm is employed to solve the velocity and pressure fields. Due to the effect of a secondary force in helical pipes, the friction coefficient is found to be higher in helical pipes than in straight pipes. The friction coefficient increases with an increase in the curvature, pipe diameter, and coil pitch and decreases with an increase in the coil diameter and void fraction. The close correlation between the numerical results obtained in this study and the numerical and empirical results of other researchers confirms the accuracy of the applied method. For void fractions up to 0.1, the numerical results indicate that the friction coefficient increases with increasing pipe diameter (keeping the coil pitch and diameter constant) and decreases with increasing coil diameter. Finally, with an increase in the Reynolds number, the friction coefficient decreases, while the void fraction increases.

  8. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    Science.gov (United States)

    Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K

    2016-11-25

    Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
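
    The attenuation mechanism can be reproduced in a few lines: simulate a Poisson time series driven by a true exposure, then fit the same model with an error-prone surrogate. The exposure model below (additive error on a standardized exposure, single pollutant) is a simplification of the study's empirically derived error structure:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(11)
    days = 1460                                  # four years of daily counts

    # "True" exposure and an error-prone surrogate (purely illustrative)
    x_true = rng.normal(0.0, 1.0, days)
    x_obs = x_true + rng.normal(0.0, 0.8, days)  # classical measurement error

    beta = np.log(1.05)                          # RR = 1.05 per unit exposure
    mu = np.exp(np.log(50.0) + beta * x_true)    # ~50 ED visits/day baseline
    y = rng.poisson(mu)

    for name, x in (("true", x_true), ("observed", x_obs)):
        fit = sm.GLM(y, sm.add_constant(x),
                     family=sm.families.Poisson()).fit()
        print(f"{name}: RR = {np.exp(fit.params[1]):.3f}")  # attenuated RR
    ```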

  9. Optimal classifier selection and negative bias in error rate estimation: an empirical study on high-dimensional prediction

    Directory of Open Access Journals (Sweden)

    Boulesteix Anne-Laure

    2009-12-01

    Full Text Available Abstract Background In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g. microarray data), since such analyses are particularly exposed to this kind of bias. Methods In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure. Results We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from the parameter tuning (including gene selection parameters as a special case) and the bias resulting from the choice of the classification method are examined both separately and jointly. Conclusions The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively. We conclude that the strategy to present only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and suggest alternative approaches for properly reporting classification accuracy.
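
    The optimistic bias is simple to demonstrate: run several classifiers on pure-noise, high-dimensional data with random labels, then report only the best cross-validated error. A sketch using a handful of scikit-learn classifiers (far fewer than the 124 variants studied); since the labels carry no signal, each classifier's expected error is 50%, yet the minimum over classifiers falls below that:

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 500))      # high-dimensional, pure noise
    y = rng.integers(0, 2, size=60)     # random labels: no signal at all

    classifiers = [LogisticRegression(max_iter=1000),
                   KNeighborsClassifier(n_neighbors=3),
                   SVC(),
                   DecisionTreeClassifier(random_state=0)]

    errors = [1.0 - cross_val_score(clf, X, y, cv=5).mean()
              for clf in classifiers]
    print("per-classifier CV error:", np.round(errors, 2))
    print("reported 'best' error:", min(errors))  # optimistically biased
    ```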

  10. Towards more reliable automated multi-dose dispensing: retrospective follow-up study on medication dose errors and product defects.

    Science.gov (United States)

    Palttala, Iida; Heinämäki, Jyrki; Honkanen, Outi; Suominen, Risto; Antikainen, Osmo; Hirvonen, Jouni; Yliruusi, Jouko

    2013-03-01

    To date, little is known about the applicability of different types of pharmaceutical dosage forms in an automated high-speed multi-dose dispensing process. The purpose of the present study was to identify and further investigate various process-induced and/or product-related limitations associated with the multi-dose dispensing process. The rates of product defects and dose dispensing errors in automated multi-dose dispensing were retrospectively investigated during a 6-month follow-up period. The study was based on the analysis of process data from a total of nine automated high-speed multi-dose dispensing systems. Special attention was paid to the dependence of multi-dose dispensing errors/product defects on pharmaceutical tablet properties (such as shape, dimensions, weight, scored lines, coatings, etc.) to profile the forms of tablets most suitable for automated dose dispensing systems. The relationship between the risk of errors in dose dispensing and tablet characteristics was visualized by creating a principal component analysis (PCA) model for the outcome of dispensed tablets. The two most common process-induced failures identified in multi-dose dispensing are predisposal of tablet defects and unexpected product transitions in the medication cassette (dose dispensing errors). The tablet defects are product-dependent failures, while the tablet transitions depend on the automated multi-dose dispensing systems used. The occurrence of tablet defects is approximately twice as common as tablet transitions. The optimal tablet for high-speed multi-dose dispensing would be a round-shaped, relatively small or middle-sized, film-coated tablet without any scored line. Commercial tablet products can be profiled and classified based on their suitability for a high-speed multi-dose dispensing process.

  11. Comparative study of anatomical normalization errors in SPM and 3D-SSP using digital brain phantom.

    Science.gov (United States)

    Onishi, Hideo; Matsutake, Yuki; Kawashima, Hiroki; Matsutomo, Norikazu; Amijima, Hizuru

    2011-01-01

    In single photon emission computed tomography (SPECT) cerebral blood flow studies, two major algorithms are widely used: statistical parametric mapping (SPM) and three-dimensional stereotactic surface projections (3D-SSP). The aim of this study is to compare an SPM algorithm-based easy Z-score imaging system (eZIS) and a 3D-SSP system with respect to the errors of anatomical standardization, using 3D digital brain phantom images. We developed a 3D digital brain phantom based on MR images to simulate the effects of head tilt, perfusion defective region size, and count value reduction rate on the SPECT images. This digital phantom was used to compare the errors of anatomical standardization by the eZIS and the 3D-SSP algorithms. While the eZIS allowed accurate standardization of the images of the phantom simulating a head in rotation, lateroflexion, anteflexion, or retroflexion without angle dependency, the standardization by 3D-SSP was not accurate enough at approximately 25° or more head tilt. When the simulated head contained perfusion defective regions, one of the 3D-SSP images showed an error of 6.9% from the true value. Meanwhile, one of the eZIS images showed an error as large as 63.4%, revealing a significant underestimation. When required to evaluate regions with decreased perfusion due to such causes as hemodynamic cerebral ischemia, the 3D-SSP is desirable. In a statistical image analysis, we must reconfirm the image after anatomical standardization by all means.

  12. Comparative study of anatomical normalization errors in SPM and 3D-SSP using digital brain phantom

    International Nuclear Information System (INIS)

    Onishi, Hideo; Matsutomo, Norikazu; Matsutake, Yuki; Kawashima, Hiroki; Amijima, Hizuru

    2011-01-01

    In single photon emission computed tomography (SPECT) cerebral blood flow studies, two major algorithms are widely used: statistical parametric mapping (SPM) and three-dimensional stereotactic surface projections (3D-SSP). The aim of this study is to compare an SPM algorithm-based easy Z-score imaging system (eZIS) and a 3D-SSP system with respect to the errors of anatomical standardization, using 3D digital brain phantom images. We developed a 3D digital brain phantom based on MR images to simulate the effects of head tilt, perfusion defective region size, and count value reduction rate on the SPECT images. This digital phantom was used to compare the errors of anatomical standardization by the eZIS and the 3D-SSP algorithms. While the eZIS allowed accurate standardization of the images of the phantom simulating a head in rotation, lateroflexion, anteflexion, or retroflexion without angle dependency, the standardization by 3D-SSP was not accurate enough at approximately 25 deg or more head tilt. When the simulated head contained perfusion defective regions, one of the 3D-SSP images showed an error of 6.9% from the true value. Meanwhile, one of the eZIS images showed an error as large as 63.4%, revealing a significant underestimation. When required to evaluate regions with decreased perfusion due to such causes as hemodynamic cerebral ischemia, the 3D-SSP is desirable. In a statistical image analysis, we must reconfirm the image after anatomical standardization by all means. (author)

  13. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  14. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  15. Procrustes-based geometric morphometrics on MRI images: An example of inter-operator bias in 3D landmarks and its impact on big datasets.

    Science.gov (United States)

    Daboul, Amro; Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea

    2018-01-01

    Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in-depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error, and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, such as those that are becoming increasingly common in the 'era of big data'.
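
    The Procrustes superimposition underlying such analyses removes position, scale, and orientation before comparing landmark configurations. A minimal sketch for two hypothetical digitizations of the same landmarks (a full generalized Procrustes analysis iterates this over all specimens; the function and data below are illustrative):

    ```python
    import numpy as np

    def procrustes_align(ref, target):
        """Ordinary Procrustes: remove position, size, and rotation.

        `ref` and `target` are (k, 3) arrays of corresponding 3D landmarks;
        returns both configurations centred, unit-scaled, and superimposed.
        """
        a = ref - ref.mean(axis=0)
        b = target - target.mean(axis=0)
        a = a / np.linalg.norm(a)             # centroid-size scaling
        b = b / np.linalg.norm(b)
        u, _, vt = np.linalg.svd(b.T @ a)
        d = np.sign(np.linalg.det(u @ vt))    # guard against reflections
        rot = u @ np.diag([1.0, 1.0, d]) @ vt
        return b @ rot, a

    # Two hypothetical operators digitizing the same four landmarks
    op1 = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 1.]])
    op2 = op1 + np.random.default_rng(1).normal(0.0, 0.02, op1.shape)
    aligned, ref = procrustes_align(op1, op2)
    print("Procrustes distance:", np.linalg.norm(aligned - ref))
    ```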

  16. Influence of measurement errors and estimated parameters on combustion diagnosis

    International Nuclear Information System (INIS)

    Payri, F.; Molina, S.; Martin, J.; Armas, O.

    2006-01-01

    Thermodynamic diagnosis models are valuable tools for the study of Diesel combustion. Inputs required by such models comprise measured mean and instantaneous variables, together with suitable values for adjustable parameters used in different submodels. In the case of measured variables, one may estimate the uncertainty associated with measurement errors; however, the influence of errors in model parameter estimation may not be so easily established on an experimental basis. In this paper, a simulated pressure cycle has been used along with known input parameters, so that any uncertainty in the inputs is avoided. Then, the influence of errors in measured variables and in geometric and heat transmission parameters on the results of a diagnosis combustion model for direct injection diesel engines has been studied. This procedure made it possible to establish the relative importance of these parameters and to set limits on the maximal errors of the model, accounting for both the maximal expected errors in the input parameters and the sensitivity of the model to those errors

  17. The persistence of error: a study of retracted articles on the Internet and in personal libraries.

    Science.gov (United States)

    Davis, Philip M

    2012-07-01

    To determine the accessibility of retracted articles residing on non-publisher websites and in personal libraries. Searches were performed to locate Internet copies of 1,779 retracted articles identified in MEDLINE, published between 1973 and 2010, excluding the publishers' websites. Found copies were classified by article version and location. Mendeley (bibliographic software) was searched for copies residing in personal libraries. Non-publisher websites provided 321 publicly accessible copies of 289 retracted articles: 304 (95%) copies were the publishers' versions, and 13 (4%) were final manuscripts. PubMed Central had 138 (43%) copies; educational websites 94 (29%); commercial websites 24 (7%); advocacy websites 16 (5%); and institutional repositories 10 (3%). Just 16 [corrected] (5%) full-article views included a retraction statement. Personal Mendeley libraries contained records for 1,340 (75%) retracted articles, shared by 3.4 users on average. The benefits of decentralized access to scientific articles may come with the cost of promoting incorrect, invalid, or untrustworthy science. Automated methods to deliver status updates to readers may reduce the persistence of error in the scientific literature.

  18. A Recent Revisit Study on the Human Error Events of Nuclear Facilities in Korea

    International Nuclear Information System (INIS)

    Lee, Y.-H.

    2016-01-01

    After the Fukushima accident, we launched two new projects in Korea: one for the development of countermeasures against human errors in nuclear facilities, and the other for the safety culture of nuclear power plants themselves. Several subsequent events turned out to be typical flags of the human and organizational factor issues affecting the safety of socio-technical systems in Korea, nuclear power plants included. The second, safety culture project was an ambitious development to establish an infrastructure utilising system dynamics, business process modeling and big-data techniques to provide an effective and efficient information basis to the various interested parties related to nuclear power plants. However, the project was abruptly cancelled last year without any further discussion of the issues originally raised in Korea. This may stem not only from the conflicting perspectives among the different approaches to nuclear safety culture but also from misunderstandings of the human factors underlying nuclear safety.

  19. Harmonic and geometric analysis

    CERN Document Server

    Citti, Giovanna; Pérez, Carlos; Sarti, Alessandro; Zhong, Xiao

    2015-01-01

    This book presents an expanded version of four series of lectures delivered by the authors at the CRM. Harmonic analysis, understood in a broad sense, has a very wide interplay with partial differential equations and in particular with the theory of quasiconformal mappings and its applications. Some areas in which real analysis has been extremely influential are PDEs and geometric analysis. Their foundations and subsequent developments made extensive use of the Calderón–Zygmund theory, especially the Lp inequalities for Calderón–Zygmund operators (the Beurling transform and Riesz transform, among others) and the theory of Muckenhoupt weights. The first chapter is an application of harmonic analysis and the Heisenberg group to understanding human vision, while the second and third chapters cover some of the main topics on linear and multilinear harmonic analysis. The last chapter serves as a comprehensive introduction to a deep result from De Giorgi, Moser and Nash on the regularity of elliptic partial differential equations...

  20. A study on the operator's errors of commission (EOC) in accident scenarios of nuclear power plants: methodology development and application

    International Nuclear Information System (INIS)

    Kim, Jae Whan; Jung, Won Dea; Park, Jin Kyun; Kang, Da Il

    2003-04-01

    As concern has been raised about the operator's inappropriate interventions, the so-called errors of commission (EOCs), which can exacerbate plant safety, interest in the identification and analysis of EOC events from the risk assessment perspective has increased. Also, one item in need of improvement in conventional PSA and HRA, which consider only system-demanded human actions, is the inclusion of the operator's EOC events in the PSA model. In this study, we propose a methodology for identifying and analysing human errors of commission that might occur from failures in situation assessment and decision making during accident progressions given an initiating event. To achieve this goal, the following research items were performed. Firstly, we analysed the error causes or situations that contributed to the occurrence of EOCs in several incidents/accidents at nuclear power plants. Secondly, limitations of the advanced HRAs in treating EOCs were reviewed, and the requirements for a new methodology for analysing EOCs were established. Thirdly, based on these accomplishments, a methodology for identifying and analysing EOC events inducible from failures in situation assessment and decision making was proposed and applied to all the accident sequences of the YGN 3 and 4 NPP, which resulted in the identification of about 10 EOC situations.

  1. Dosimetric impact of systematic MLC positional errors on step and shoot IMRT for prostate cancer: a planning study

    International Nuclear Information System (INIS)

    Ung, N.M.; Wee, L.; Harper, C.S.

    2010-01-01

    Full text: The positional accuracy of multileaf collimators (MLC) is crucial in ensuring precise delivery of intensity-modulated radiotherapy (IMRT). The aim of this planning study was to investigate the dosimetric impact of systematic MLC errors on step and shoot IMRT of prostate cancer. Twelve MLC leaf bank perturbations were introduced to six prostate IMRT treatment plans to simulate MLC systematic errors. Dose volume histograms (DVHs) were generated for the extraction of dose endpoint parameters. Plans were evaluated in terms of changes to the defined endpoint dose parameters, conformity index (CI) and healthy tissue avoidance (HTA) for the planning target volume (PTV), rectum and bladder. Negative perturbations of MLC were found to produce greater changes to endpoint dose parameters than positive perturbations of MLC (p < 0.05). Negative and positive synchronized MLC perturbations of 1 mm resulted in median changes of -2.32 and 1.78%, respectively, to D95% of PTV, whereas asynchronized MLC perturbations of the same direction and magnitude resulted in median changes of 1.18 and 0.90%, respectively. Doses to the rectum were generally more sensitive to systematic MLC errors than doses to the bladder. Synchronized MLC perturbations of 1 mm resulted in median changes of endpoint dose parameters for both rectum and bladder of about 1 to 3%. Maximum reductions of -4.44 and -7.29% were recorded for CI and HTA, respectively, due to synchronized MLC perturbation of 1 mm. In summary, MLC errors resulted in a measurable amount of dose change to the PTV and surrounding critical structures in prostate IMRT. (author)
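
    One simplified way to see why the perturbation classes differ (a toy 1D aperture only; the study's synchronized/asynchronized bank definitions are more detailed than this) is that moving both banks inward or outward changes the field area, while displacing both banks the same way mostly shifts it:

        import numpy as np

        x = np.linspace(-60.0, 60.0, 1201)                 # off-axis position [mm]

        def aperture(left, right):                         # idealized open field
            return ((x >= left) & (x <= right)).astype(float)

        ref = aperture(-30.0, 30.0)                        # nominal 60 mm field
        cases = {
            "both banks closed 1 mm": aperture(-29.0, 29.0),
            "both banks opened 1 mm": aperture(-31.0, 31.0),
            "both banks shifted 1 mm": aperture(-29.0, 31.0),
        }
        for name, ap in cases.items():
            d_area = 100.0 * (ap.sum() / ref.sum() - 1.0)
            print(f"{name}: field-area change {d_area:+.1f}%")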

  2. Diagnostic errors in pediatric radiology

    International Nuclear Information System (INIS)

    Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.

    2011-01-01

    Little is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean: 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean: 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean: 1.2 errors/case), all of which were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  3. Impact of automated dispensing cabinets on medication selection and preparation error rates in an emergency department: a prospective and direct observational before-and-after study.

    Science.gov (United States)

    Fanning, Laura; Jones, Nick; Manias, Elizabeth

    2016-04-01

    The implementation of automated dispensing cabinets (ADCs) in healthcare facilities appears to be increasing, in particular within Australian hospital emergency departments (EDs). While the investment in ADCs is on the increase, no studies have specifically investigated the impact of ADCs on medication selection and preparation error rates in EDs. Our aim was to assess the impact of ADCs on medication selection and preparation error rates in the ED of a tertiary teaching hospital. This was a pre- and post-intervention study involving direct observation of nurses completing medication selection and preparation activities before and after the implementation of ADCs in the original and new emergency departments within a 377-bed tertiary teaching hospital in Australia. Medication selection and preparation error rates were calculated and compared between these two periods. Secondary end points included the impact on medication error type and severity. A total of 2087 medication selections and preparations were observed among 808 patients pre- and post-intervention. Implementation of ADCs in the new ED resulted in a 64.7% reduction in medication selection and preparation errors (1.96% versus 0.69%, P = 0.017). All medication error types were reduced in the post-intervention period. There was no significant impact on medication error severity, as all errors detected were categorised as minor. The implementation of ADCs could reduce medication selection and preparation errors and improve medication safety in an ED setting. © 2015 John Wiley & Sons, Ltd.

  4. An Analysis of Errors Committed by Saudi Non-English Major Students in the English Paragraph Writing: A Study of Comparisons

    Directory of Open Access Journals (Sweden)

    Mohammed Nuruzzaman

    2018-01-01

    Full Text Available The present study investigates the writing errors of ninety Saudi non-English major undergraduate students of different proficiency levels from three faculties, who studied English as a foundation course at the English Language Center in the College of Languages & Translation at King Khalid University, Saudi Arabia, in the academic year 2016-17. The findings reveal that the common errors Saudi EFL students make in writing English paragraphs fall under four categories, namely grammar, lexis, semantics and mechanics. The study then compares the categories, types and frequency of errors committed by these three groups of students. Among these categories, grammar is the most error-prone area. The study also finds that, among the three groups, the students of the College of Medicine make the fewest errors of all types, while the highest number of errors is committed by the students of the College of Engineering; the College of Computer Science ranks second in error frequency.

  5. Regular Polygons and Geometric Series.

    Science.gov (United States)

    Jarrett, Joscelyn A.

    1982-01-01

    Examples of some geometric illustrations of limits are presented. It is believed the limit concept is among the most important topics in mathematics, yet many students do not have good intuitive feelings for the concept, since it is often taught very abstractly. Geometric examples are suggested as meaningful tools. (MP)

  6. Geometric Invariants and Object Recognition.

    Science.gov (United States)

    1992-08-01

    University of Chicago Press. Maybank, S.J. [1992], "The Projection of Two Non-coplanar Conics", in Geometric Invariance in Machine Vision, eds. J.L. Mundy and A. Zisserman, MIT Press, Cambridge, MA. Mundy, J.L., Kapur, .., Maybank, S.J., and Quan, L. [1992a] "Geometric Interpretation of

  7. Study protocol: the empirical investigation of methods to correct for measurement error in biobanks with dietary assessment

    Directory of Open Access Journals (Sweden)

    Masson Lindsey F

    2011-10-01

    Full Text Available Abstract Background The Public Population Project in Genomics (P3G) is an organisation that aims to promote collaboration between researchers in the field of population-based genomics. The main objectives of P3G are to encourage collaboration between researchers and biobankers, optimize study design, promote the harmonization of information use in biobanks, and facilitate transfer of knowledge between interested parties. The importance of calibration and harmonisation of methods for environmental exposure assessment to allow pooling of data across studies in the evaluation of gene-environment interactions has been recognised by P3G, which has set up a methodological group on calibration with the aims of: (1) reviewing the published methodological literature on measurement error correction methods, with their assumptions and methods of implementation; (2) reviewing the evidence available from published nutritional epidemiological studies that have used a calibration approach; (3) disseminating information in the form of a comparison chart on approaches to perform calibration studies and how to obtain correction factors, in order to support research groups collaborating within the P3G network that are unfamiliar with the methods employed; and (4) with application to the field of nutritional epidemiology, including gene-diet interactions, ultimately developing an inventory of the typical correction factors for various nutrients. Methods/Design Systematic review of (a) the methodological literature on methods to correct for measurement error in epidemiological studies; and (b) studies that have been designed primarily to investigate the association between diet and disease and have also corrected for measurement error in dietary intake. Discussion The conduct of a systematic review of the methodological literature on calibration will facilitate the evaluation of methods to correct for measurement error and the design of calibration studies for the prospective pooling of
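
    The core calibration idea under review is regression calibration: an error-prone exposure attenuates the diet-disease slope by a factor that a calibration substudy with a reference measure can estimate. A hedged toy sketch (simulated data; the classical additive measurement-error model is assumed, not any particular study's design):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 5000
        x = rng.normal(0.0, 1.0, n)               # true long-term intake
        w = x + rng.normal(0.0, 1.0, n)           # error-prone dietary measurement
        y = 0.5 * x + rng.normal(0.0, 1.0, n)     # outcome with true slope 0.5

        beta_naive = np.polyfit(w, y, 1)[0]       # attenuated slope (~0.25 here)
        sub = slice(0, 500)                       # calibration substudy with reference measure
        lam = np.cov(w[sub], x[sub])[0, 1] / np.var(w[sub])   # attenuation (correction) factor
        print(beta_naive, beta_naive / lam)       # corrected slope is ~0.5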

  8. Morphing of geometric composites via residual swelling.

    Science.gov (United States)

    Pezzulla, Matteo; Shillig, Steven A; Nardinocchi, Paola; Holmes, Douglas P

    2015-08-07

    Understanding and controlling the shape of thin, soft objects has been the focus of significant research efforts among physicists, biologists, and engineers in the last decade. These studies aim to utilize advanced materials in novel, adaptive ways such as fabricating smart actuators or mimicking living tissues. Here, we present the controlled growth-like morphing of 2D sheets into 3D shapes by preparing geometric composite structures that deform by residual swelling. The morphing of these geometric composites is dictated by both swelling and geometry, with diffusion controlling the swelling-induced actuation, and geometric confinement dictating the structure's deformed shape. Building on a simple mechanical analog, we present an analytical model that quantitatively describes how the Gaussian and mean curvatures of a thin disk are affected by the interplay among geometry, mechanics, and swelling. This model is in excellent agreement with our experiments and numerics. We show that the dynamics of residual swelling is dictated by a competition between two characteristic diffusive length scales governed by geometry. Our results provide the first 2D analog of Timoshenko's classical formula for the thermal bending of bimetallic beams - our generalization explains how the Gaussian curvature of a 2D geometric composite is affected by geometry and elasticity. The understanding conferred by these results suggests that the controlled shaping of geometric composites may provide a simple complement to traditional manufacturing techniques.

  9. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...

  10. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun

    2015-01-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. (paper)
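
    The flavour of such a robustness analysis is easy to reproduce numerically. The sketch below (a plain Grover iteration with the same systematic error delta injected into both phase inversions; this mirrors the style of the paper's analysis, not its exact optimized-walk model) shows the success probability degrading as the phase error grows:

        import numpy as np

        def grover_success(n_qubits, delta):
            N = 2 ** n_qubits
            s = np.full(N, 1 / np.sqrt(N), dtype=complex)   # uniform superposition
            state = s.copy()
            phase = np.exp(1j * (np.pi + delta))            # imperfect pi phase inversion
            for _ in range(int(np.pi / 4 * np.sqrt(N))):
                state[0] *= phase                           # oracle marks |0> with phase error
                proj = s * (s.conj() @ state)
                state = state - (1 - phase) * proj          # diffusion with the same phase error
            return abs(state[0]) ** 2

        for delta in (0.0, 0.1, 0.3):
            print(f"delta = {delta:.1f}: success probability {grover_success(10, delta):.3f}")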

  11. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review article explains the definition of medication errors, the scale of the medication error problem, the types of medication errors, their common causes, the monitoring of medication errors, their consequences, and the prevention and management of medication errors, clearly and legibly, with tables that make the material easy to understand.

  12. Al Hirschfeld's NINA as a prototype search task for studying perceptual error in radiology

    Science.gov (United States)

    Nodine, Calvin F.; Kundel, Harold L.

    1997-04-01

    Artist Al Hirschfeld has been hiding the word NINA (his daughter's name) in line drawings of theatrical scenes that have appeared in the New York Times for over 50 years. This paper shows how Hirschfeld's search task of finding the name NINA in his drawings illustrates basic perceptual principles of detection, discrimination and decision-making commonly encountered in radiology search tasks. Hirschfeld's hiding of NINA is typically accomplished by camouflaging the letters of the name and blending them into scenic background details such as wisps of hair and folds of clothing. In a similar way, pulmonary nodules and breast lesions are camouflaged by anatomic features of the chest or breast image. Hirschfeld's hidden NINAs are sometimes missed because they are integrated into a Gestalt overview rather than differentiated from background features during focal scanning. This may be similar to overlooking an obvious nodule behind the heart in a chest x-ray image. Because it is a search game, Hirschfeld assigns a number to each drawing to indicate how many NINAs he has hidden, so as not to frustrate his viewers. In the radiologist's task, the number of targets detected in a medical image is determined by combining perceptual input with probabilities generated from clinical history and viewing experience. Thus, in the absence of truth, searching for abnormalities in x-ray images creates opportunities for recognition and decision errors (e.g. false positives and false negatives). We illustrate how camouflage decreases the conspicuity of both artistic and radiographic targets, compare the detection performance of radiologists with that of lay persons searching for NINAs, and show similarities and differences between the scanning strategies of the two groups based on eye-position data.

  13. Comparative Study on Various Geometrical Core Design of 300 MWth Gas Cooled Fast Reactor with UN-PuN Fuel Longlife without Refuelling

    Science.gov (United States)

    Dewi Syarifah, Ratna; Su'ud, Zaki; Basar, Khairul; Irwanto, Dwi

    2017-07-01

    Nuclear power has seen progressive improvement in the operating performance of existing reactors, ensuring the economic competitiveness of nuclear electricity around the world. The GFR uses a gas coolant and a fast neutron spectrum. This research uses a helium coolant, which has low neutron moderation, is chemically inert and remains in a single phase. A comparative study on various geometrical core designs for a modular GFR with UN-PuN fuel, long life without refuelling, has been done. The calculations use the SRAC2006 code, both the PIJ and CITATION modules, with the JENDL 4.0 data libraries. The fuel fraction is varied from 40% to 65%. In this research, we varied the geometry of the reactor core to find the optimum design: first, a balanced cylinder, meaning the active core diameter (D) equals the active core height (H); second, a pancake cylinder (D > H); and third, a tall cylinder (D < H). The power is 300 MWth. In the first calculation, we performed a parameter survey for UN-PuN fuel with fissile content from LWR plutonium waste for each geometry. The minimum power density is around 72 Watt/cc, and the maximum power density is 114 Watt/cc. After calculating the various core geometries, we find that the balanced geometry gives the flattest and most stable k-eff value.

  14. Quantum chemical study of the geometrical and electronic structures of ScSi{sub 3}{sup −/0} clusters and assignment of the anion photoelectron spectra

    Energy Technology Data Exchange (ETDEWEB)

    Tran, Quoc Tri; Tran, Van Tan, E-mail: tvtan@dthu.edu.vn [Theoretical and Physical Chemistry Division, Dong Thap University, 783-Pham Huu Lau, Cao Lanh City, Ward 6, Dong Thap (Viet Nam)

    2016-06-07

    The geometrical and electronic structures of ScSi{sub 3}{sup −/0} clusters have been studied with the B3LYP, CCSD(T), and CASPT2 methods. The ground state of the anionic cluster was evaluated to be the {sup 1}A{sub 1} of rhombic η{sup 2}-(Si{sub 3})Sc{sup −} isomer, whereas that of the neutral cluster was computed to be the {sup 2}A{sub 1} of the same isomer. All features in the 266 and 193 nm photoelectron spectra of ScSi{sub 3}{sup −} cluster were interpreted by the one- and two-electron detachments from the {sup 1}A{sub 1} of rhombic η{sup 2}-(Si{sub 3})Sc{sup −} isomer. The Franck-Condon factor simulation results show that the first broad band starting at 1.78 eV in the spectra comprises several vibrational progression peaks of two totally symmetric modes with the corresponding frequencies of 296 and 354 cm{sup −1}.

  15. Geometric inequalities for axially symmetric black holes

    International Nuclear Information System (INIS)

    Dain, Sergio

    2012-01-01

    A geometric inequality in general relativity relates quantities that have both a physical interpretation and a geometrical definition. It is well known that the parameters that characterize the Kerr-Newman black hole satisfy several important geometric inequalities. Remarkably enough, some of these inequalities also hold for dynamical black holes. Inequalities of this kind play an important role in the characterization of gravitational collapse; they are closely related to the cosmic censorship conjecture. Axially symmetric black holes are the natural candidates for studying these inequalities because the quasi-local angular momentum is well defined for them. We review recent results in this subject and also describe the main ideas behind the proofs. Finally, a list of relevant open problems is presented. (topical review)

  16. Exponentiated Lomax Geometric Distribution: Properties and Applications

    Directory of Open Access Journals (Sweden)

    Amal Soliman Hassan

    2017-09-01

    Full Text Available In this paper, a new four-parameter lifetime distribution, called the exponentiated Lomax geometric (ELG) distribution, is introduced. The new lifetime distribution contains the Lomax geometric and exponentiated Pareto geometric distributions as new sub-models. Explicit algebraic formulas for the probability density function and the survival and hazard functions are derived. Various structural properties of the new model are derived, including the quantile function, Rényi entropy, moments, probability weighted moments, order statistics, and Lorenz and Bonferroni curves. The estimation of the model parameters is performed by the maximum likelihood method, and inference for large samples is discussed. The flexibility and potential of the new model in comparison with some other distributions are shown via an application to a real data set. We hope that the new model will be an adequate model for applications in various studies.

  17. Series-NonUniform Rational B-Spline (S-NURBS) model: a geometrical interpolation framework for chaotic data.

    Science.gov (United States)

    Shao, Chenxi; Liu, Qingqing; Wang, Tingting; Yin, Peifeng; Wang, Binghong

    2013-09-01

    Time series are widely exploited to study the innate character of complex chaotic systems. Existing chaotic models are weak in modeling accuracy because they adopt either an error-minimization strategy or an acceptable error threshold to end the modeling process. Interpolation, by contrast, can be very useful for solving differential equations with a small modeling error, but it is also very difficult to apply to arbitrary-dimensional series. In this paper, geometric theory is employed to reduce the modeling error, and a high-precision framework called the Series-NonUniform Rational B-Spline (S-NURBS) model is developed to deal with arbitrary-dimensional series. The capability of the interpolation framework is proved in the validation part. Besides, we verify its reliability by interpolating the Musa dataset. The main improvement of the proposed framework is that the interpolation error can be reduced by properly adjusting the weight series step by step as more information becomes available. These experiments also demonstrate that studying physical systems from a geometric perspective is feasible.

  18. Investigating potential physicochemical errors in polymer gel dosimeters

    International Nuclear Information System (INIS)

    Sedaghat, Mahbod; Lepage, Martin; Bujold, Rachel

    2011-01-01

    Measurement errors in polymer gel dosimetry can originate either during irradiation or during scanning. One concern related to the exothermic nature of the polymerization reaction was that the heat released in polymer gel dosimeters during irradiation modifies their dose response. In this paper, the effect of the heat released by the exothermal polymerization reaction on the dose response of a number of dosimeters was studied. In addition, we investigated whether heat-generated geometric distortion existed in newly proposed gel dosimeters that contain highly thermoresponsive polymers. Our results suggest that despite a significant internal temperature increase in some gel compositions, their dose responses are not affected when oxygen is well expelled mechanically from the gel mixture. We also report significant pre-irradiation instability in some recently developed polymer gel dosimeters, but geometric distortions were not observed. Data obtained from a set of small calibration vials are compared to those obtained from larger phantoms, and potential physicochemical causes of deviations between them are identified.

  19. Investigating potential physicochemical errors in polymer gel dosimeters

    Energy Technology Data Exchange (ETDEWEB)

    Sedaghat, Mahbod; Lepage, Martin [Centre d' imagerie moleculaire de Sherbrooke, Departement de medecine nucleaire et radiobiologie, Universite de Sherbrooke, Sherbrooke, QC (Canada); Bujold, Rachel, E-mail: martin.lepage@usherbrooke.ca [Service de radio-oncologie, Centre hospitalier universitaire de Sherbrooke, Sherbrooke, QC (Canada)

    2011-09-21

    Measurement errors in polymer gel dosimetry can originate either during irradiation or during scanning. One concern related to the exothermic nature of the polymerization reaction was that the heat released in polymer gel dosimeters during irradiation modifies their dose response. In this paper, the effect of the heat released by the exothermal polymerization reaction on the dose response of a number of dosimeters was studied. In addition, we investigated whether heat-generated geometric distortion existed in newly proposed gel dosimeters that contain highly thermoresponsive polymers. Our results suggest that despite a significant internal temperature increase in some gel compositions, their dose responses are not affected when oxygen is well expelled mechanically from the gel mixture. We also report significant pre-irradiation instability in some recently developed polymer gel dosimeters, but geometric distortions were not observed. Data obtained from a set of small calibration vials are compared to those obtained from larger phantoms, and potential physicochemical causes of deviations between them are identified.

  20. Geometric modular action and transformation groups

    International Nuclear Information System (INIS)

    Summers, S.J.

    1996-01-01

    We study a weak form of geometric modular action, which is naturally associated with transformation groups of partially ordered sets and which provides these groups with projective representations. Under suitable conditions it is shown that these groups are implemented by point transformations of topological spaces serving as models for space-times, leading to groups which may be interpreted as symmetry groups of the space-times. As concrete examples, it is shown that the Poincare group and the de Sitter group can be derived from this condition of geometric modular action. Further consequences and examples are discussed. (orig.)

  1. EEG Theta Dynamics within Frontal and Parietal Cortices for Error Processing during Reaching Movements in a Prism Adaptation Study Altering Visuo-Motor Predictive Planning.

    Science.gov (United States)

    Arrighi, Pieranna; Bonfiglio, Luca; Minichilli, Fabrizio; Cantore, Nicoletta; Carboncini, Maria Chiara; Piccotti, Emily; Rossi, Bruno; Andre, Paolo

    2016-01-01

    Modulation of frontal midline theta (fmθ) is observed during error commission, but little is known about the role of theta oscillations in correcting motor behaviours. We investigated the EEG activity of healthy participants executing a reaching task under variable degrees of prism-induced visuo-motor distortion and visual occlusion of the initial arm trajectory. This task introduces directional errors of different magnitudes. The discrepancy between predicted and actual movement directions (i.e. the error), at the time when visual feedback (hand appearance) became available, elicits a signal that triggers on-line movement correction. Analyses were performed on 25 EEG channels. For each participant, the median value of the angular error across all reaching trials was used to partition the EEG epochs into high- and low-error conditions. We computed event-related spectral perturbations (ERSP) time-locked either to visual feedback or to the onset of movement correction. ERSP time-locked to the onset of visual feedback showed that fmθ increased in the high- but not in the low-error condition, with an approximate time lag of 200 ms. Moreover, when single epochs were sorted by the degree of motor error, fmθ started to increase once a certain level of error was exceeded and then scaled with error magnitude. When ERSP were time-locked to the onset of movement correction, the fmθ increase anticipated this event with an approximate time lead of 50 ms. During successive trials, an error reduction was observed, associated with indices of adaptation (i.e., aftereffects), suggesting the need to explore whether theta oscillations may facilitate learning. To our knowledge this is the first study where the EEG signal recorded during reaching movements was time-locked to the onset of the error visual feedback. This allowed us to conclude that theta oscillations, putatively generated by anterior cingulate cortex activation, are implicated in error processing in semi-naturalistic motor
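
    An ERSP of the kind used here reduces to wavelet power, averaged over epochs and normalized to a pre-event baseline in dB. A minimal numpy-only sketch (synthetic epochs containing a theta burst; the sampling rate, wavelet parameters and data are assumptions, not the study's recordings):

        import numpy as np

        fs = 250
        t = np.arange(-0.5, 1.0, 1.0 / fs)                 # epoch time [s], event at t = 0
        rng = np.random.default_rng(4)
        burst = np.exp(-((t - 0.2) / 0.1) ** 2) * np.sin(2 * np.pi * 6 * t)
        epochs = rng.normal(0.0, 1.0, (40, t.size)) + 2.0 * burst   # fm-theta-like burst

        def morlet_power(sig, f, fs, n_cycles=5):
            sigma = n_cycles / (2 * np.pi * f)
            tw = np.arange(-3 * sigma, 3 * sigma, 1.0 / fs)
            wav = np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * sigma**2))
            wav /= np.abs(wav).sum()
            return np.abs(np.convolve(sig, wav, mode="same")) ** 2

        freqs = np.arange(4, 31)
        power = np.array([np.mean([morlet_power(ep, f, fs) for ep in epochs], axis=0)
                          for f in freqs])
        baseline = power[:, t < 0].mean(axis=1, keepdims=True)
        ersp = 10 * np.log10(power / baseline)             # dB relative to pre-event baseline
        in_theta = (freqs >= 4) & (freqs <= 7)
        print(f"peak theta-band ERSP: {ersp[in_theta].max():.1f} dB")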

  2. Aspects of random geometric graphs : Pursuit-evasion and treewidth

    NARCIS (Netherlands)

    Li, A.

    2015-01-01

    In this thesis, we studied two aspects of random geometric graphs: pursuit-evasion and treewidth. We first studied one pursuit-evasion game: Cops and Robbers. This game, which dates back to the 1970s, has been studied extensively in recent years. We investigate this game on random geometric graphs, and get

  3. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    Science.gov (United States)

    BackgroundExposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of a...

  4. On designing geometric motion planners to solve regulating and trajectory tracking problems for robotic locomotion systems

    International Nuclear Information System (INIS)

    Asnafi, Alireza; Mahzoon, Mojtaba

    2011-01-01

    Based on a geometric fiber bundle structure, a generalized method to solve both regulation and trajectory tracking problems for locomotion systems is presented. The method is especially applied to two case studies of robotic locomotion systems; a three link articulated fish-like robot as a prototype of locomotion systems with symmetry, and the snakeboard as a prototype of mixed locomotion systems. Our results show that although these motion planners have an open loop structure, due to their generalities, they can steer case studies with negligible errors for almost any complicated path.

  5. On designing geometric motion planners to solve regulating and trajectory tracking problems for robotic locomotion systems

    Energy Technology Data Exchange (ETDEWEB)

    Asnafi, Alireza [Hydro-Aeronautical Research Center, Shiraz University, Shiraz, 71348-13668 (Iran, Islamic Republic of); Mahzoon, Mojtaba [Department of Mechanical Engineering, School of Engineering, Shiraz University, Shiraz, 71348-13668 (Iran, Islamic Republic of)

    2011-09-15

    Based on a geometric fiber bundle structure, a generalized method to solve both regulation and trajectory tracking problems for locomotion systems is presented. The method is especially applied to two case studies of robotic locomotion systems; a three link articulated fish-like robot as a prototype of locomotion systems with symmetry, and the snakeboard as a prototype of mixed locomotion systems. Our results show that although these motion planners have an open loop structure, due to their generalities, they can steer case studies with negligible errors for almost any complicated path.

  6. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation.

    Science.gov (United States)

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.
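
    The correction-equation step can be sketched compactly. Below, an assumed error model err = a*S/(1 + b*v) (S: solar radiation, v: wind speed; the functional form and all numbers are my illustration, not the paper's fitted equation) is fitted to synthetic "CFD" error samples with SciPy's differential evolution, a genetic-style optimizer:

        import numpy as np
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(5)
        S = rng.uniform(0.0, 1000.0, 200)          # solar radiation [W/m^2]
        v = rng.uniform(0.5, 10.0, 200)            # wind speed [m/s]
        obs = 0.002 * S / (1 + 0.8 * v) + rng.normal(0.0, 0.005, 200)  # synthetic errors [deg C]

        def loss(params):
            a, b = params
            return np.mean((a * S / (1 + b * v) - obs) ** 2)

        res = differential_evolution(loss, bounds=[(0.0, 0.01), (0.0, 5.0)], seed=0)
        a, b = res.x
        resid = obs - a * S / (1 + b * v)          # residual error after correction
        print(f"a = {a:.4f}, b = {b:.2f}, residual RMS = {resid.std():.4f} deg C")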

  7. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

    We calculate opacity from k(hν) = −ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U−E)/(V−E) = B/B0, where B is the transmitted backlighter (BL) signal and B0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB0/B0, and consequently Δk/k = (1/ln T)(ΔB/B + ΔB0/B0) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
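
    The budget is a two-line computation once the formula is in hand. A sketch with illustrative numbers (none of these values are from the report):

        import numpy as np

        T = 0.5                     # measured transmission B/B0
        dB_over_B = 0.02            # fractional error on the attenuated BL signal
        dB0_over_B0 = 0.02          # fractional error on the unattenuated BL signal
        dRhoL_over_RhoL = 0.03      # fractional error on areal density rho*L

        dk_over_k = (dB_over_B + dB0_over_B0) / abs(np.log(T)) + dRhoL_over_RhoL
        print(f"fractional opacity error: {dk_over_k:.3f}")   # note the blow-up as T -> 1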

  8. Impact of geometric uncertainties on dose calculations for intensity modulated radiation therapy of prostate cancer

    Science.gov (United States)

    Jiang, Runqing

    Intensity-modulated radiation therapy (IMRT) uses non-uniform beam intensities within a radiation field to provide patient-specific dose shaping, resulting in a dose distribution that conforms tightly to the planning target volume (PTV). Unavoidable geometric uncertainty arising from patient repositioning and internal organ motion can lead to a lower conformity index (CI) during treatment delivery, a decrease in tumor control probability (TCP) and an increase in normal tissue complication probability (NTCP). The CI of the IMRT plan depends heavily on steep dose gradients between the PTV and the organs at risk (OAR). Geometric uncertainties reduce the planned dose gradients and result in a less steep or "blurred" dose gradient. The blurred dose gradients can be kept as steep as possible by constraining the dose objective function in the static IMRT plan or by reducing geometric uncertainty during treatment with corrective verification imaging. Internal organ motion and setup error were evaluated simultaneously for 118 individual patients with implanted fiducials and MV electronic portal imaging (EPI). A Gaussian probability density function (PDF) is reasonable for modeling geometric uncertainties, as indicated by this group of 118 patients. The Gaussian PDF is patient specific, and the group standard deviation (SD) should not be used for accurate treatment planning for individual patients. In addition, individual SDs should not be determined or predicted from small imaging samples because of the random nature of the fluctuations. Frequent verification imaging should be employed in situations where geometric uncertainties are expected. Cumulative PDF data can be used for re-planning to assess the accuracy of the delivered dose. Group data are useful for determining the worst-case discrepancy between planned and delivered dose. The margins for the PTV should ideally represent true geometric uncertainties. The measured geometric uncertainties were used in this thesis to assess PTV coverage, dose to OAR, equivalent
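
    The gradient "blurring" central to this argument is just a convolution of the planned dose with the geometric-uncertainty PDF. A toy 1D sketch (illustrative profile and sigma values only):

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        x = np.linspace(-50.0, 50.0, 1001)                 # position [mm], 0.1 mm grid
        planned = np.where(np.abs(x) < 25, 100.0, 20.0)    # steep PTV-to-OAR falloff

        for sigma_mm in (2.0, 4.0):                        # combined setup/motion SD
            delivered = gaussian_filter1d(planned, sigma_mm / 0.1)   # sigma in grid samples
            grad = np.abs(np.gradient(delivered, x)).max()
            print(f"sigma = {sigma_mm} mm -> max dose gradient {grad:.1f} %/mm")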

  9. Geometrical method of decoupling

    Directory of Open Access Journals (Sweden)

    C. Baumgarten

    2012-12-01

    Full Text Available The computation of tunes and matched beam distributions are essential steps in the analysis of circular accelerators. If certain symmetries—like midplane symmetry—are present, then it is possible to treat the betatron motion in the horizontal plane, the vertical plane, and (under certain circumstances) the longitudinal motion separately using the well-known Courant-Snyder theory, or to apply transformations that have been described previously as, for instance, the method of Teng and Edwards. In a preceding paper, it has been shown that this method requires a modification for the treatment of isochronous cyclotrons with non-negligible space charge forces. Unfortunately, the modification was numerically not as stable as desired, and it was still unclear whether the extension would work for all conceivable cases. Hence, a systematic derivation of a more general treatment seemed advisable. In a second paper, the author suggested the use of real Dirac matrices as basic tools for coupled linear optics and gave a straightforward recipe to decouple positive definite Hamiltonians with imaginary eigenvalues. In this article this method is generalized and simplified in order to formulate a straightforward method to decouple Hamiltonian matrices with eigenvalues on the real and the imaginary axis. The decoupling of symplectic matrices which are exponentials of such Hamiltonian matrices can be deduced from this in a few steps. It is shown that this algebraic decoupling is closely related to a geometric "decoupling" by the orthogonalization of the vectors E→, B→, and P→, which were introduced with the so-called "electromechanical equivalence." A mathematical analysis of the problem can be traced down to the task of finding a structure-preserving block diagonalization of symplectic or Hamiltonian matrices. Structure preservation means in this context that the (sequence of) transformations must be symplectic and hence canonical. When
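
    The structure-preservation condition invoked at the end is easy to state concretely: a transformation M is canonical (symplectic) iff M^T J M = J for the standard symplectic form J. A tiny check (the example M, a phase-space rotation, is illustrative):

        import numpy as np

        J = np.array([[0.0, 1.0], [-1.0, 0.0]])            # symplectic form for one (q, p) pair
        phi = 0.3
        M = np.array([[np.cos(phi), np.sin(phi)],
                      [-np.sin(phi), np.cos(phi)]])        # rotation in phase space
        print(np.allclose(M.T @ J @ M, J))                 # True: the map is canonical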

  10. Angular truncation errors in integrating nephelometry

    International Nuclear Information System (INIS)

    Moosmueller, Hans; Arnott, W. Patrick

    2003-01-01

    Ideal integrating nephelometers integrate light scattered by particles over all directions. However, real nephelometers truncate light scattered in near-forward and near-backward directions below a certain truncation angle (typically 7 deg.). This results in truncation errors, with the forward truncation error becoming important for large particles. Truncation errors are commonly calculated using Mie theory, which offers little physical insight and no generalization to nonspherical particles. We show that large-particle forward truncation errors can be calculated and understood using geometric optics and diffraction theory. For small truncation angles (i.e., <10 deg.) typical of modern nephelometers, diffraction theory by itself is sufficient. Forward truncation errors are larger, by nearly a factor of 2, for absorbing particles than for nonabsorbing particles, because for large absorbing particles most of the scattered light is due to diffraction, as transmission is suppressed. Nephelometer calibration procedures are also discussed, as they influence the effective truncation error.
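
    For the diffraction-theory shortcut, a standard Fraunhofer result gives the encircled energy of the forward lobe: the diffracted fraction scattered below the truncation angle theta_t, and therefore missed, is 1 - J0(u)^2 - J1(u)^2 with u = x*sin(theta_t) and size parameter x = pi*D/lambda. A sketch (standard optics formula, not code from the paper; note it applies to the diffracted component only, which is why the error matters mainly for large particles):

        import numpy as np
        from scipy.special import j0, j1

        def forward_truncation_loss(diameter_um, wavelength_um=0.55, theta_t_deg=7.0):
            x = np.pi * diameter_um / wavelength_um        # size parameter
            u = x * np.sin(np.radians(theta_t_deg))
            return 1.0 - j0(u) ** 2 - j1(u) ** 2           # diffracted fraction below theta_t

        for d_um in (0.5, 2.0, 10.0):
            loss = forward_truncation_loss(d_um)
            print(f"D = {d_um:4.1f} um: {loss:.3f} of diffracted light truncated")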

  11. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  12. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  13. Geometric inequalities for black holes

    International Nuclear Information System (INIS)

    Dain, Sergio

    2013-01-01

    Full text: A geometric inequality in General Relativity relates quantities that have both a physical interpretation and a geometrical definition. It is well known that the parameters that characterize the Kerr-Newman black hole satisfy several important geometric inequalities. Remarkably enough, some of these inequalities also hold for dynamical black holes. Inequalities of this kind, which are valid in the dynamical and strong-field regime, play an important role in the characterization of gravitational collapse. They are closely related to the cosmic censorship conjecture. In this talk I will review recent results in this subject. (author)

  14. Geometric Computing for Freeform Architecture

    KAUST Repository

    Wallner, J.

    2011-06-03

    Geometric computing has recently found a new field of applications, namely the various geometric problems which lie at the heart of rationalization and construction-aware design processes of freeform architecture. We report on our work in this area, dealing with meshes with planar faces and meshes which allow multilayer constructions (which is related to discrete surfaces and their curvatures), triangle meshes with circle-packing properties (which is related to conformal uniformization), and with the paneling problem. We emphasize the combination of numerical optimization and geometric knowledge.

  15. Geometric inequalities for black holes

    Energy Technology Data Exchange (ETDEWEB)

    Dain, Sergio [Universidad Nacional de Cordoba (Argentina)

    2013-07-01

    Full text: A geometric inequality in General Relativity relates quantities that have both a physical interpretation and a geometrical definition. It is well known that the parameters that characterize the Kerr-Newman black hole satisfy several important geometric inequalities. Remarkably enough, some of these inequalities also hold for dynamical black holes. Inequalities of this kind, which are valid in the dynamical and strong-field regime, play an important role in the characterization of gravitational collapse. They are closely related to the cosmic censorship conjecture. In this talk I will review recent results in this subject. (author)

  16. Assessment of errors associated with plot size and lateral movement of nitrogen-15 when studying fertilizer recovery under field conditions

    International Nuclear Information System (INIS)

    Sanchez, C.A.; Blackmer, A.M.; Horton, R.; Timmons, D.R.

    1987-01-01

    The high cost of 15 N-labeled fertilizers encourages the use of field plots having minimum size. If plot size is reduced too much, however, lateral movement of N near the plots by mass flow or diffusion within the soil or by translocation through plant roots can become a significant source of error in determinations of fertilizer N recovery. This study was initiated to assess the importance of lateral movement of labeled fertilizer when unconfined plots are used to determine recovery of fertilizer. Corn grain samples were collected at various positions inside and outside 15 N plots, and the 15 N contents of these samples were determined. The data were fit to mathematical models to estimate the extent to which lateral movement of fertilizer N caused errors in determined values of fertilizer recovery for the first, second, and third crops following fertilization. These models also were used to predict the plot size needed for similar 15 N-tracer studies in the future. The results of these studies indicate that 15 N plots having a size of 2 by 2 m are sufficiently large for determining recovery of fertilizer N for corn crops under most conditions. Where lateral movement of fertilizer N in soils is suspected to be a problem, we recommend collection of a few plant samples outside the 15 N plots as insurance against misleading conclusions concerning fertilizer N recovery

  17. An Image Analysis-Based Methodology for Chromite Exploration through Opto-Geometric Parameters; a Case Study in Faryab Area, SE of Iran

    Directory of Open Access Journals (Sweden)

    Mansur Ziaii

    2017-06-01

    Full Text Available Traditional methods of chromite exploration are mostly based on geophysical techniques and drilling operations. They are expensive and time-consuming, and they suffer from several shortcomings, such as a lack of sufficient geophysical density contrast. To overcome these drawbacks, the current research work introduces a novel, automatic, opto-geometric image analysis (OGIA) technique for extracting the structural properties of chromite minerals using polished thin sections prepared from outcrops. Several images are taken from polished thick sections through a reflected-light microscope equipped with a digital camera. The images are processed in filtering and segmentation steps to extract the worthwhile information on chromite minerals. The directional density of chromite minerals, as a textural property, is studied at different inclinations, and the main trend of chromite growth is identified. The microscopic inclination of chromite veins can be generalized for exploring the macroscopic layers of chromite buried under either surface quaternary alluvium or overburden rocks. The performance of the OGIA methodology is tested in a real case study in which several exploratory boreholes were drilled. The results show that the microscopic trends outlined through image analysis are in good agreement with the results obtained from the interpretation of boreholes. The OGIA method produces a reliable map of the absence or existence of chromite ore deposits at different horizontal surfaces. Directing exploration investigations toward more susceptible zones (potentials) and avoiding wasted time and money are the major contributions of the OGIA methodology; it supports optimal managerial and economic decisions.

  18. Impossible Geometric Constructions: A Calculus Writing Project

    Science.gov (United States)

    Awtrey, Chad

    2013-01-01

    This article discusses a writing project that offers students the opportunity to solve one of the most famous geometric problems of Greek antiquity; namely, the impossibility of trisecting the angle [pi]/3. Along the way, students study the history of Greek geometry problems as well as the life and achievements of Carl Friedrich Gauss. Included is…

  19. Geometric Transformations in Middle School Mathematics Textbooks

    Science.gov (United States)

    Zorin, Barbara

    2011-01-01

    This study analyzed treatment of geometric transformations in presently available middle grades (6, 7, 8) student mathematics textbooks. Fourteen textbooks from four widely used textbook series were evaluated: two mainline publisher series, Pearson (Prentice Hall) and Glencoe (Math Connects); one National Science Foundation (NSF) funded curriculum…

  20. APPLICATION OF SIX SIGMA METHODOLOGY TO REDUCE MEDICATION ERRORS IN THE OUTPATIENT PHARMACY UNIT: A CASE STUDY FROM THE KING FAHD UNIVERSITY HOSPITAL, SAUDI ARABIA

    Directory of Open Access Journals (Sweden)

    Ahmed Al Kuwaiti

    2016-06-01

    Full Text Available Medication errors affect patient safety and the quality of healthcare. The aim of this study is to analyze the effect of the Six Sigma (DMAIC) methodology in reducing medication errors in the outpatient pharmacy of King Fahd Hospital of the University, Saudi Arabia. It was conducted through the five phases of the Define, Measure, Analyze, Improve, Control (DMAIC) model using various quality tools. The goal was to reduce medication errors in the outpatient pharmacy by 20%. After implementation of the improvement strategies, there was a marked reduction in defects and an improvement in their sigma ratings. In particular, prescription/data-entry errors fell from 56,000 to 5,000 parts per million (PPM), and their sigma rating improved from 3.09 to 4.08. This study concluded that the Six Sigma (DMAIC) methodology is effective in reducing medication errors and ensuring patient safety.
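
    The sigma ratings quoted above follow the usual short-term convention (normal quantile of the yield plus the conventional 1.5-sigma shift), which can be checked directly:

        from statistics import NormalDist

        def sigma_rating(ppm):
            """Short-term sigma level for a defect rate given in parts per million."""
            return NormalDist().inv_cdf(1 - ppm / 1e6) + 1.5

        print(round(sigma_rating(56_000), 2))   # 3.09, the pre-improvement rating
        print(round(sigma_rating(5_000), 2))    # 4.08, the post-improvement rating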

  1. A STUDY OF ERRORS IN THE THIRD SINGULAR PRONOUNS OF SIMPLE PRESENT TENSE BY USING INTERLANGUAGE ANALYSIS AS AN APPROACH. A CASE STUDY

    Directory of Open Access Journals (Sweden)

    Salmon Pandarangga

    2015-03-01

    Full Text Available The purpose of this study is to analyze factors contributing to errors made in learning English as a target language (TL). Employing a case study approach, the participant was interviewed for approximately 30 minutes about daily activities and experiences in learning English. This research focuses on analysing the participant's use of the third singular pronoun in the simple present tense. The findings revealed that errors made by TL learners are mainly influenced by factors related to their TL and native language (NL) knowledge, systems and rules. These factors coexist and are interconnected in TL learners' minds. This contradicts Robert Lado's argument that learners make errors in TL learning because of interference from the NL. The study provides pedagogical implications: TL teachers should perceive errors made by learners as a sign of language learning and development, so learners should not be discouraged; TL teachers should also be aware of their very important roles in helping, guiding and leading learners' progress in the TL. Subsequent studies should consider a larger sample size over a longer period of time so as to obtain more generalizable findings.

  2. Dosimetric impact of systematic MLC positional errors on step and shoot IMRT for prostate cancer: a planning study

    International Nuclear Information System (INIS)

    Ung, N.M.; Harper, C.S.; Wee, L.

    2011-01-01

    Full text: The positional accuracy of multileaf collimators (MLC) is crucial in ensuring precise delivery of intensity-modulated radiotherapy (IMRT). The aim of this planning study was to investigate the dosimetric impact of systematic MLC positional errors on step and shoot IMRT of prostate cancer. A total of 12 perturbations of MLC leaf banks were introduced to six prostate IMRT treatment plans to simulate MLC systematic positional errors. Dose volume histograms (DVHs) were generated for the extraction of dose endpoint parameters. Plans were evaluated in terms of changes to the defined endpoint dose parameters, conformity index (CI) and healthy tissue avoidance (HTA) for the planning target volume (PTV), rectum and bladder. Negative perturbations of MLC were found to produce greater changes to endpoint dose parameters than positive perturbations of MLC (p < 0.05). Negative and positive asynchronised MLC perturbations of 1 mm resulted in median changes in D95 of -1.2 and 0.9% respectively. Negative and positive synchronised MLC perturbations of 1 mm in one direction resulted in median changes in D95 of -2.3 and 1.8% respectively. Doses to the rectum were generally more sensitive to systematic MLC errors than doses to the bladder (p < 0.01). Negative and positive synchronised MLC perturbations of 1 mm in one direction resulted in median changes in endpoint dose parameters of rectum and bladder from 1.0 to 2.5%. Maximum reductions of -4.4 and -7.3% were recorded for conformity index (CI) and healthy tissue avoidance (HTA) respectively, due to synchronised MLC perturbation of 1 mm. MLC errors resulted in dosimetric changes in IMRT plans for the prostate. (author)

  3. A novel rotational matrix and translation vector algorithm: geometric accuracy for augmented reality in oral and maxillofacial surgeries.

    Science.gov (United States)

    Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C

    2018-06-01

    Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
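
    The record does not give the algorithm's internals; as an illustration only, a rigid-body registration step of the kind it names (rotation matrix R plus translation vector t) can be sketched as follows, with all point sets and parameter values hypothetical:

    ```python
    import numpy as np

    def overlay_error(points, R, t):
        """Mean Euclidean distance (the 'overlay error') between landmarks
        and the same landmarks mapped by the rigid transform p' = R @ p + t."""
        mapped = points @ R.T + t
        return np.linalg.norm(mapped - points, axis=1).mean()

    # Hypothetical misregistration: a 1-degree rotation about z plus a 0.1 mm shift.
    theta = np.deg2rad(1.0)
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    t = np.array([0.1, 0.0, 0.0])  # mm
    pts = np.random.default_rng(0).uniform(-20, 20, size=(10, 3))  # mm
    print(overlay_error(pts, R, t))  # residual overlay error before correction
    ```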

  4. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China).

    Science.gov (United States)

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu

    2017-05-25

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of the errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mGal. A major obstacle in using airborne gravimetry is the errors caused by the downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error-minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The

  5. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China)

    Science.gov (United States)

    Zhao, Q.

    2017-12-01

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of the errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mGal. A major obstacle in using airborne gravimetry is the errors caused by the downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error-minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The
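
    The regularization both copies of this record describe can be illustrated generically: Tikhonov damping is the textbook regularizer for the ill-posed downward-continuation step. A minimal sketch (a stand-in, not the authors' semi-parametric scheme; operator and data below are hypothetical):

    ```python
    import numpy as np

    def regularized_continuation(A, b, lam):
        """Solve min ||A x - b||^2 + lam ||x||^2 via the normal equations
        (A^T A + lam I) x = A^T b. The penalty lam damps the high-frequency
        noise that an unregularized downward continuation would amplify."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    # Hypothetical toy operator and noisy observations at flight altitude.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 30))
    b = A @ rng.normal(size=30) + rng.normal(scale=0.1, size=50)
    x_hat = regularized_continuation(A, b, lam=1e-2)
    print(x_hat.shape)
    ```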

  6. Linking Errors between Two Populations and Tests: A Case Study in International Surveys in Education

    Science.gov (United States)

    Hastedt, Dirk; Desa, Deana

    2015-01-01

    This simulation study was prompted by the current increased interest in linking national studies to international large-scale assessments (ILSAs) such as IEA's TIMSS, IEA's PIRLS, and OECD's PISA. Linkage in this scenario is achieved by including items from the international assessments in the national assessments on the premise that the average…

  7. Geometric morphometric analysis of intratrackway variability: a case study on theropod and ornithopod dinosaur trackways from Münchehagen (Lower Cretaceous, Germany)

    Directory of Open Access Journals (Sweden)

    Jens N. Lallensack

    2016-06-01

    Full Text Available A profound understanding of the influence of trackmaker anatomy, foot movements and substrate properties is crucial for any interpretation of fossil tracks. In this case study we analyze variability of footprint shape within one large theropod (T3), one medium-sized theropod (T2) and one ornithopod (I1) trackway from the Lower Cretaceous of Münchehagen (Lower Saxony, Germany) in order to determine the informativeness of individual features and measurements for ichnotaxonomy, trackmaker identification, and the discrimination between left and right footprints. Landmark analysis is employed based on interpretative outline drawings derived from photogrammetric data, allowing for the location of variability within the footprint and the assessment of covariation of separate footprint parts. Objective methods to define the margins of a footprint are tested and shown to be sufficiently accurate to reproduce the most important results. The lateral hypex and the heel are the most variable regions in the two theropod trackways. As indicated by principal component analysis, a posterior shift of the lateral hypex is correlated with an anterior shift of the margin of the heel. This pattern is less pronounced in the ornithopod trackway, indicating that variation patterns can differ in separate trackways. In all trackways, hypices vary independently from each other, suggesting that their relative position is a questionable feature for ichnotaxonomic purposes. Most criteria commonly employed to differentiate between left and right footprints assigned to theropods are found to be reasonably reliable. The described ornithopod footprints are asymmetrical, again allowing for a left–right differentiation. Strikingly, 12 out of 19 measured footprints of the T2 trackway are stepped over the trackway midline, rendering the trackway pattern a misleading left–right criterion for this trackway. Traditional measurements were unable to differentiate between the theropod and

  8. Geometric morphometric analysis of intratrackway variability: a case study on theropod and ornithopod dinosaur trackways from Münchehagen (Lower Cretaceous, Germany).

    Science.gov (United States)

    Lallensack, Jens N; van Heteren, Anneke H; Wings, Oliver

    2016-01-01

    A profound understanding of the influence of trackmaker anatomy, foot movements and substrate properties is crucial for any interpretation of fossil tracks. In this case study we analyze variability of footprint shape within one large theropod (T3), one medium-sized theropod (T2) and one ornithopod (I1) trackway from the Lower Cretaceous of Münchehagen (Lower Saxony, Germany) in order to determine the informativeness of individual features and measurements for ichnotaxonomy, trackmaker identification, and the discrimination between left and right footprints. Landmark analysis is employed based on interpretative outline drawings derived from photogrammetric data, allowing for the location of variability within the footprint and the assessment of covariation of separate footprint parts. Objective methods to define the margins of a footprint are tested and shown to be sufficiently accurate to reproduce the most important results. The lateral hypex and the heel are the most variable regions in the two theropod trackways. As indicated by principal component analysis, a posterior shift of the lateral hypex is correlated with an anterior shift of the margin of the heel. This pattern is less pronounced in the ornithopod trackway, indicating that variation patterns can differ in separate trackways. In all trackways, hypices vary independently from each other, suggesting that their relative position is a questionable feature for ichnotaxonomic purposes. Most criteria commonly employed to differentiate between left and right footprints assigned to theropods are found to be reasonably reliable. The described ornithopod footprints are asymmetrical, again allowing for a left-right differentiation. Strikingly, 12 out of 19 measured footprints of the T2 trackway are stepped over the trackway midline, rendering the trackway pattern a misleading left-right criterion for this trackway. Traditional measurements were unable to differentiate between the theropod and the ornithopod
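
    For readers unfamiliar with the landmark workflow both versions of this record describe, the core step is a principal component analysis of flattened, superimposed landmark coordinates; the leading component's loadings show which footprint regions co-vary (e.g., hypex versus heel). A minimal sketch with hypothetical array sizes, assuming the outlines are already Procrustes-aligned:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    landmarks = rng.normal(size=(19, 12, 2))  # hypothetical: 19 footprints, 12 2-D landmarks

    X = landmarks.reshape(len(landmarks), -1)         # flatten each footprint to a shape vector
    Xc = X - X.mean(axis=0)                           # center on the mean shape
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S**2 / (S**2).sum()                   # variance explained per component
    pc1_loadings = Vt[0].reshape(-1, 2)               # per-landmark contribution to PC1
    print(explained[:3], pc1_loadings.shape)
    ```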

  9. Discrete geometric structures for architecture

    KAUST Repository

    Pottmann, Helmut

    2010-01-01

    The talk will provide an overview of recent progress in this field, with a particular focus on discrete geometric structures. Most of these result from practical requirements on segmenting a freeform shape into planar panels and on the physical realization

  10. Geometric Rationalization for Freeform Architecture

    KAUST Repository

    Jiang, Caigui

    2016-01-01

    The emergence of freeform architecture provides interesting geometric challenges with regards to the design and manufacturing of large-scale structures. To design these architectural structures, we have to consider two types of constraints. First

  11. Geometrical optics in general relativity

    OpenAIRE

    Loinger, A.

    2006-01-01

    General relativity includes geometrical optics. This basic fact has relevant consequences that concern the physical meaning of the discontinuity surfaces propagated in the gravitational field - as it was first emphasized by Levi-Civita.

  12. Mobile Watermarking against Geometrical Distortions

    Directory of Open Access Journals (Sweden)

    Jing Zhang

    2015-08-01

    Full Text Available Mobile watermarking robust to geometrical distortions is still a great challenge. In mobile watermarking, efficient computation is necessary because mobile devices have very limited resources due to power consumption. In this paper, we propose a low-complexity geometrically resilient watermarking approach based on the optimal tradeoff circular harmonic function (OTCHF) correlation filter and the minimum average correlation energy Mellin radial harmonic (MACE-MRH) correlation filter. Owing to the rotation, translation and scale tolerance properties of these two kinds of filter, the proposed watermark detector can be robust to geometrical attacks. The embedded watermark is weighted by a perceptual mask which matches very well with the properties of the human visual system. Before correlation, a whitening process is utilized to improve watermark detection reliability. Experimental results demonstrate that the proposed watermarking approach is computationally efficient and robust to geometrical distortions.
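
    The specific OTCHF and MACE-MRH filters are beyond a short sketch, but the whitening-before-correlation idea the abstract mentions can be illustrated with plain phase correlation, which keeps only spectral phase and thereby sharpens the detection peak (an illustration, not the paper's detector):

    ```python
    import numpy as np

    def whitened_correlation(image, template):
        """Phase-only correlation: dividing out the spectrum magnitude
        (whitening) sharpens the correlation peak used for detection."""
        F = np.fft.fft2(image)
        G = np.fft.fft2(template, s=image.shape)
        cross = F * np.conj(G)
        cross /= np.abs(cross) + 1e-12   # keep phase, discard magnitude
        return np.real(np.fft.ifft2(cross))

    rng = np.random.default_rng(2)
    img = rng.normal(size=(64, 64))
    corr = whitened_correlation(img, img[:16, :16])     # template cut from the image
    print(np.unravel_index(corr.argmax(), corr.shape))  # peak at the template location (0, 0)
    ```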

  13. Reliability of footprint geometric and plantar loading measurements in children using the Emed(®) M system.

    Science.gov (United States)

    Tong, Jasper W K; Kong, Pui W

    2013-06-01

    This study investigated the between-day reliability of footprint geometric and plantar loading measurements in children using the Emed(®) M pressure measurement device. Bilateral footprints (static and dynamic) and foot loading measurements using the two-step gait method were collected from 21 children two days apart (age = 9.9 ± 1.8 years; mass = 34.6 ± 8.9 kg; height = 1.38 ± 0.12 m). Static and dynamic footprint geometric (lengths, widths and angles) and dynamic loading (pressures, forces, contact areas and contact time) parameters were compared. Intraclass correlation coefficients of static geometric parameters varied widely (0.19-0.96), while superior results were achieved with dynamic geometric (0.66-0.98) and loading variables (0.52-0.94), with the exception of left contact time (0.37). The standard error of measurement showed small absolute disparity for all geometric (length = 0.1-0.3 cm; arch index = 0.00-0.01; subarch angle = 0.6-6.2°; left/right foot progression angle = 0.5°/0.7°) and loading (peak pressure = 2.3-16.2 kPa; maximum force = 0.3-3.0%; total contact area = 0.28-0.49 cm(2); % contact area = 0.1-0.6%; contact time = 32-79 ms) variables. The coefficient of variation displayed the widest spread for static geometry (1.1-27.6%), followed by dynamic geometry (0.8-22.5%), and the smallest spread for loading (1.3-16.8%) parameters. Limits of agreement (95%) were narrower for dynamic than static geometric parameters. Overall, the reliability of most dynamic geometric and loading parameters was good to excellent. Static electronic footprint measurements in children are not recommended because their light body mass results in incomplete footprints. Copyright © 2012 Elsevier B.V. All rights reserved.
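
    The intraclass correlation coefficients and standard errors of measurement reported above are computed from a (subjects x sessions) matrix of repeated measurements. A minimal sketch of the ICC(2,1) form commonly used for between-day reliability, with hypothetical data (the record does not state the study's exact ICC model):

    ```python
    import numpy as np

    def icc_2_1(x):
        """Two-way random-effects, absolute-agreement ICC(2,1) for an
        (n subjects x k sessions) matrix (Shrout-Fleiss convention)."""
        n, k = x.shape
        grand = x.mean()
        ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
        ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
        ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
        msr, msc = ss_rows / (n - 1), ss_cols / (k - 1)
        mse = ss_err / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    rng = np.random.default_rng(3)
    true = rng.normal(25.0, 3.0, size=21)   # hypothetical subarch angles, 21 children
    x = np.column_stack([true + rng.normal(0, 1.0, 21) for _ in range(2)])  # two test days
    icc = icc_2_1(x)
    sem = x.std(ddof=1) * np.sqrt(1 - icc)  # standard error of measurement
    print(round(icc, 2), round(sem, 2))
    ```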

  14. Geometrical verification system using Adobe Photoshop in radiotherapy.

    Science.gov (United States)

    Ishiyama, Hiromichi; Suzuki, Koji; Niino, Keiji; Hosoya, Takaaki; Hayakawa, Kazushige

    2005-02-01

    Adobe Photoshop is used worldwide and is useful for comparing portal films with simulation films. It is possible to scan images and then view them simultaneously with this software. The purpose of this study was to assess the accuracy of a geometrical verification system using Adobe Photoshop. We prepared the following two conditions for verification. Under one condition, films were hung on light boxes, and examiners measured distances between the isocenter on simulation films and that on portal films by adjusting the bony structures. Under the other condition, films were scanned into a computer and displayed using Adobe Photoshop, and examiners measured distances between the isocenter on simulation films and those on portal films by adjusting the bony structures. To obtain control data, lead balls were used as fiducial points for matching the films accurately. The errors, defined as the differences between the control data and the measurement data, were assessed. Errors of the data obtained using Adobe Photoshop were significantly smaller than those of the data obtained from films on light boxes. This verification method is available on any PC with Adobe Photoshop and is useful for improving the accuracy of verification.

  15. Accounting for optical errors in microtensiometry.

    Science.gov (United States)

    Hinton, Zachary R; Alvarez, Nicolas J

    2018-09-15

    Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate using microtensiometry the contributions of these sources to measured errors in radius, and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications on all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane on measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveal a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equivalent to the capillary radius, i.e. a hemispherical interface. The understanding of these errors allow for correct measure of interfacial curvature and interfacial tension regardless of experimental setup. For the case of microtensiometry, this greatly decreases the time for experimental setup
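
    As a much-simplified illustration of the focal-plane effect described above (not the authors' full model, which also couples in the misalignment angle): a focal plane that misses a sphere's equatorial plane by a distance d images a chord circle rather than a great circle, so the measured radius underestimates the true one.

    ```python
    import numpy as np

    def apparent_radius(R, d):
        """Radius of the chord circle seen when the focal plane is offset by d
        from the equatorial plane of a spherical interface of true radius R."""
        return np.sqrt(R**2 - d**2)

    R = 0.5  # true interface radius, arbitrary units
    for d in (0.0, 0.1, 0.2):
        print(f"offset {d}: apparent radius {apparent_radius(R, d):.3f}")
    ```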

  16. Geometric inequalities methods of proving

    CERN Document Server

    Sedrakyan, Hayk

    2017-01-01

    This unique collection of new and classical problems provides full coverage of geometric inequalities. Many of the 1,000 exercises are presented with detailed author-prepared solutions, developing creativity and an arsenal of new approaches for solving mathematical problems. This book can serve teachers, high-school students, and mathematical competitors. It may also be used as supplemental reading, providing readers with new and classical methods for proving geometric inequalities.

  17. Predictors of Errors of Novice Java Programmers

    Science.gov (United States)

    Bringula, Rex P.; Manabat, Geecee Maybelline A.; Tolentino, Miguel Angelo A.; Torres, Edmon L.

    2012-01-01

    This descriptive study determined which of the sources of errors would predict the errors committed by novice Java programmers. Descriptive statistics revealed that the respondents perceived that they committed the identified eighteen errors infrequently. Thought error was perceived to be the main source of error during the laboratory programming…

  18. Canonical symplectic structure and structure-preserving geometric algorithms for Schrödinger-Maxwell systems

    Science.gov (United States)

    Chen, Qiang; Qin, Hong; Liu, Jian; Xiao, Jianyuan; Zhang, Ruili; He, Yang; Wang, Yulei

    2017-11-01

    An infinite dimensional canonical symplectic structure and structure-preserving geometric algorithms are developed for the photon-matter interactions described by the Schrödinger-Maxwell equations. The algorithms preserve the symplectic structure of the system and the unitary nature of the wavefunctions, and bound the energy error of the simulation for all time-steps. This new numerical capability enables us to carry out first-principle based simulation study of important photon-matter interactions, such as the high harmonic generation and stabilization of ionization, with long-term accuracy and fidelity.
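
    The bounded-energy-error property claimed for the structure-preserving scheme is a generic feature of symplectic integrators. A minimal one-dimensional illustration (a harmonic oscillator, not the Schrödinger-Maxwell system itself) contrasting explicit Euler, whose energy drifts, with symplectic Euler, whose energy error stays bounded:

    ```python
    # Harmonic oscillator H = (p^2 + q^2) / 2 with unit mass and frequency.
    def explicit_euler(q, p, h):
        return q + h * p, p - h * q          # both updates use old values

    def symplectic_euler(q, p, h):
        p = p - h * q                        # kick with the old position
        return q + h * p, p                  # drift with the updated momentum

    q1 = q2 = 1.0
    p1 = p2 = 0.0
    h = 0.01
    for _ in range(100_000):
        q1, p1 = explicit_euler(q1, p1, h)
        q2, p2 = symplectic_euler(q2, p2, h)

    print(0.5 * (p1**2 + q1**2))  # grows without bound (energy drift)
    print(0.5 * (p2**2 + q2**2))  # oscillates near the true value 0.5
    ```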

  19. Potential errors and misuse of statistics in studies on leakage in endodontics.

    Science.gov (United States)

    Lucena, C; Lopez, J M; Pulgar, R; Abalos, C; Valderrama, M J

    2013-04-01

    To assess the quality of the statistical methodology used in studies of leakage in Endodontics, and to compare the results found using appropriate versus inappropriate inferential statistical methods. The search strategy used the descriptors 'root filling', 'microleakage', 'dye penetration', 'dye leakage', 'polymicrobial leakage' and 'fluid filtration' for the time interval 2001-2010 in journals within the categories 'Dentistry, Oral Surgery and Medicine' and 'Materials Science, Biomaterials' of the Journal Citation Report. All retrieved articles were reviewed to find potential pitfalls in statistical methodology that may be encountered during study design, data management or data analysis. The database included 209 papers. In all the studies reviewed, the statistical methods used were appropriate for the category attributed to the outcome variable, but in 41% of the cases the chi-square test or parametric methods were subsequently selected inappropriately. In 2% of the papers, no statistical test was used. In 99% of cases, a statistically 'significant' or 'not significant' effect was reported as a main finding, whilst only 1% also presented an estimation of the magnitude of the effect. When the appropriate statistical methods were applied in the studies with originally inappropriate data analysis, the conclusions changed in 19% of the cases. Statistical deficiencies in leakage studies may affect their results and interpretation and might be one of the reasons for the poor agreement amongst the reported findings. Therefore, more effort should be made to standardize statistical methodology. © 2012 International Endodontic Journal.
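
    To make the misuse pattern concrete: ordinal dye-penetration scores violate the interval-scale assumption of a t-test, and a rank-based test is the appropriate alternative. A hedged sketch with fabricated scores, purely to show the choice of test (not data from the reviewed papers):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    a = rng.integers(0, 4, 30)  # hypothetical ordinal leakage scores (0-3), sealer A
    b = rng.integers(0, 4, 30)  # hypothetical ordinal leakage scores (0-3), sealer B

    # A t-test assumes roughly normal, interval-scale data; for ordinal scores
    # the rank-based Mann-Whitney U test is the appropriate choice.
    print(stats.ttest_ind(a, b).pvalue)      # the frequently misapplied parametric test
    print(stats.mannwhitneyu(a, b).pvalue)   # the appropriate nonparametric test
    ```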

  20. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb, states that to err is human, that people make mistakes all the time. What counts, however, is that people must learn from their mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover why they were made, improve and move on. The significance of studying errors is described by Corder as follows: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the aim of this paper is to analyze errors in the process of second language acquisition and the ways we teachers can benefit from mistakes to help students improve themselves while giving proper feedback.

  1. A Study of the Spelling Errors committed by Students of English in Saudi Arabia: Exploration and Remedial Measures

    Directory of Open Access Journals (Sweden)

    Paikar Fatima Mazhar Hameed

    2016-02-01

    Full Text Available The craziness of English spelling has undeniably perplexed learners, especially in an EFL context such as the Kingdom of Saudi Arabia. In these situations, among other obstacles, learners also have to tackle the perpetual and unavoidable problem of MT interference. Sadly, this perplexity takes the shape of a real problem in the language classroom, where the English teacher has a tough time rationalizing with the learners why ‘cough’ is not spelt as /kuf/ or why ‘knee’ is spelt with a silent /k/. It is observed that students of English as a second/foreign language in Saudi Arabia commit spelling errors that not only cause a lot of confusion to the teachers but also lower the self-esteem of the students concerned. The current study aims to identify the key problem areas as far as the English spelling ability of Saudi EFL learners is concerned. It also aims to suggest remedial and pedagogical measures to improve the learners’ competence in this crucial, though hitherto nascent, skill area in the Saudi education system. Keywords: EFL; error-pattern; spelling instructions; orthography; phonology; vocabulary; language skills; language users

  2. Nurses' systems thinking competency, medical error reporting, and the occurrence of adverse events: a cross-sectional study.

    Science.gov (United States)

    Hwang, Jee-In; Park, Hyeoun-Ae

    2017-12-01

    Healthcare professionals' systems thinking is emphasized for patient safety. To report nurses' systems thinking competency, and its relationship with medical error reporting and the occurrence of adverse events. A cross-sectional survey using a previously validated Systems Thinking Scale (STS), was conducted. Nurses from two teaching hospitals were invited to participate in the survey. There were 407 (60.3%) completed surveys. The mean STS score was 54.5 (SD 7.3) out of 80. Nurses with higher STS scores were more likely to report medical errors (odds ratio (OR) = 1.05; 95% confidence interval (CI) = 1.02-1.08) and were less likely to be involved in the occurrence of adverse events (OR = 0.96; 95% CI = 0.93-0.98). Nurses showed moderate systems thinking competency. Systems thinking was a significant factor associated with patient safety. Impact Statement: The findings of this study highlight the importance of enhancing nurses' systems thinking capacity to promote patient safety.
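
    To unpack the reported odds ratios: a logistic-regression coefficient b converts to an odds ratio per one-point increase in the STS score via OR = exp(b), so effects compound multiplicatively over larger score differences. Illustrative arithmetic only, using the reported OR of 1.05:

    ```python
    import numpy as np

    b = np.log(1.05)       # coefficient implied by the reported OR per STS point
    print(np.exp(b))       # 1.05: odds of error reporting per one-point increase
    print(np.exp(10 * b))  # ~1.63: implied cumulative effect of 10 extra STS points
    ```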

  3. Preanalytical errors in primary healthcare: a questionnaire study of information search procedures, test request management and test tube labelling.

    Science.gov (United States)

    Söderberg, Johan; Brulin, Christine; Grankvist, Kjell; Wallin, Olof

    2009-01-01

    Most errors in laboratory medicine occur in the preanalytical phase and are the result of human mistakes. This study investigated information search procedures, test request management and test tube labelling in primary healthcare compared to the same procedures amongst clinical laboratory staff. A questionnaire was completed by 317 venous blood sampling staff in 70 primary healthcare centres and in two clinical laboratories (response rate = 94%). Correct procedures were not always followed. Only 60% of the primary healthcare staff reported that they always sought information in the updated, online laboratory manual. Only 12% reported that they always labelled the test tubes prior to drawing blood samples. No major differences between primary healthcare centres and clinical laboratories were found, except for test tube labelling, whereby the laboratory staff reported better practices. Re-education and access to documented routines were not clearly associated with better practices. The preanalytical procedure in the surveyed primary healthcare centres was associated with a risk of errors which could affect patient safety. To improve patient safety in laboratory testing, all healthcare providers should survey their preanalytical procedures and improve the total testing process with a systems perspective.

  4. Studying the effect of perceptual errors on the decisions made by the ...

    African Journals Online (AJOL)

    Investors make investment decisions under the influence of factors whose effects they are not aware of. The main purpose of the present research is to study the perceptual factors affecting the decision-making process of investors and the effect of information on these factors. For this aim, 385 investors of Tehran Stock ...

  5. Measurement Error Correction Formula for Cluster-Level Group Differences in Cluster Randomized and Observational Studies

    Science.gov (United States)

    Cho, Sun-Joo; Preacher, Kristopher J.

    2016-01-01

    Multilevel modeling (MLM) is frequently used to detect cluster-level group differences in cluster randomized trial and observational studies. Group differences on the outcomes (posttest scores) are detected by controlling for the covariate (pretest scores) as a proxy variable for unobserved factors that predict future attributes. The pretest and…

  6. Error Patterns in Word Reading among Primary School Children: A Cross-Orthographic Study

    Science.gov (United States)

    Guron, Louise Miller; Lundberg, Ingvar

    2004-01-01

    A comparative investigation of word reading efficiency indicates that different strategies may be used by English and Swedish early readers. In a first study, 328 native English speakers from UK Years 3 and 6 completed a pen-and-paper word recognition task (the "Wordchains" test). Results were analysed for frequency and type of errors…

  7. A Study on Large Display Panel Design for the Countermeasures against Team Errors within the Main Control Room of APR-1400

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sa Kil; Lee, Yong Hee [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]

    2015-10-15

    The personal aspect of human error has mainly been addressed through education and training. In the system aspect, however, the education and training system needs to be reconsidered to reduce more effectively the human errors arising from various system hazards. Traditionally, education and training systems have focused not on team skills, such as communication, situational awareness, and coordination, but on individual knowledge, skill, and attitude. The team factor, however, is one of the crucial issues for reducing human errors in most industries. In this study, we identify the emerging types of team errors, especially in digitalized control rooms of nuclear power plants such as the APR-1400 main control room. Most work in the nuclear industry is performed by a team of two or more persons. Even though individual errors can be detected and recovered by qualified colleagues and/or a well-trained team, it is rather seldom that errors made by the team are easily detected and properly recovered by the team itself. Note that a team is defined as two or more people who interact appropriately with each other, forming a dependent aggregate that accomplishes a valuable goal. Team error is one of the typical organizational errors that may occur during operations in nuclear power plants. The large display panel is a representative feature of the digitalized control room. As a group-view display, the large display panel provides a plant overview to the operators. In terms of team performance and team errors, however, the large display panel is still under discussion, because it was designed merely as a passive display concept. In this study, we propose a revised large display panel integrated with several alternative interfaces as countermeasures against feasible team errors.

  8. A Study on Large Display Panel Design for the Countermeasures against Team Errors within the Main Control Room of APR-1400

    International Nuclear Information System (INIS)

    Kim, Sa Kil; Lee, Yong Hee

    2015-01-01

    The personal aspect of human error has mainly been addressed through education and training. In the system aspect, however, the education and training system needs to be reconsidered to reduce more effectively the human errors arising from various system hazards. Traditionally, education and training systems have focused not on team skills, such as communication, situational awareness, and coordination, but on individual knowledge, skill, and attitude. The team factor, however, is one of the crucial issues for reducing human errors in most industries. In this study, we identify the emerging types of team errors, especially in digitalized control rooms of nuclear power plants such as the APR-1400 main control room. Most work in the nuclear industry is performed by a team of two or more persons. Even though individual errors can be detected and recovered by qualified colleagues and/or a well-trained team, it is rather seldom that errors made by the team are easily detected and properly recovered by the team itself. Note that a team is defined as two or more people who interact appropriately with each other, forming a dependent aggregate that accomplishes a valuable goal. Team error is one of the typical organizational errors that may occur during operations in nuclear power plants. The large display panel is a representative feature of the digitalized control room. As a group-view display, the large display panel provides a plant overview to the operators. In terms of team performance and team errors, however, the large display panel is still under discussion, because it was designed merely as a passive display concept. In this study, we propose a revised large display panel integrated with several alternative interfaces as countermeasures against feasible team errors.

  9. Human Error Analysis Project (HEAP) - The Fourth Pilot Study: Scoring and Analysis of Raw Data Types

    International Nuclear Information System (INIS)

    Hollnagel, Erik; Braarud, Per Oeyvind; Droeivoldsmo, Asgeir; Follesoe, Knut; Helgar, Stein; Kaarstad, Magnhild

    1996-01-01

    Pilot study No. 4 rounded off the series of pilot studies by looking at the important issue of the quality of the various data sources. The preceding experiments had clearly shown that it was necessary to use both concurrent and interrupted verbal protocols, and also that information about eye movements was very valuable. The effort and resources needed to analyse a combination of the different data sources are, however, significant, and it was therefore important to find out whether one or more of the data sources could replace another. In order to settle this issue, pilot study No. 4 looked specifically at the quality of information provided by different data sources. The main hypotheses were that information about operators' diagnosis and decision making would be provided by verbal protocols, expert commentators, and auto-confrontation protocols, that the data sources would be valid, and that they would complement each other. The study used three main data sources: (1) concurrent verbal protocols, which were the operators' verbalisations during the experiment; (2) expert commentator reports, which were descriptions by process experts of the operators' performance; and (3) auto-confrontation, which were the operators' comments on their performance based on a replay of the performance recording minus the concurrent verbal protocol. Additional data sources were eye movement recordings, process data, alarms, etc. The three main data sources were treated as independent variables and applied according to an experimental design that facilitated the test of the main hypotheses. The pilot study produced altogether 59 verbal protocols, some of which were in Finnish. After translation into English, each protocol was analysed and scored according to a specific scheme. The scoring was designed to facilitate the evaluation of the experimental hypotheses. Due to the considerable work involved, the analysis process has only been partly completed, and no firm results

  10. Sensitivity to Measurement Errors in Studies on Prosocial Choice using a Two-Choice Paradigm

    Directory of Open Access Journals (Sweden)

    Sikorska Julia

    2016-12-01

    Full Text Available Research on prosocial behaviors in primates often relies on the two-choice paradigm. Motoric lateralization is a surprisingly big problem in this field of research, as it may influence which lever will ultimately be chosen by the actor. The results of lateralization studies on primates do not form a clear picture of that phenomenon, which makes it difficult to address the problem during research. The authors discuss possible ways of managing this confounding variable.

  11. Extent and Determinants of Error in Doctors' Prognoses in Terminally Ill Patients: Prospective Cohort Study

    OpenAIRE

    Lamont, Elizabeth; Christakis, Nicholas

    2000-01-01

    Objective: To describe doctors' prognostic accuracy in terminally ill patients and to evaluate the determinants of that accuracy. Design: Prospective cohort study. Setting: Five outpatient hospice programmes in Chicago. Participants: 343 doctors provided survival estimates for 468 terminally ill patients at the time of hospice referral. Main outcome measures: Patients' estimated and actual survival. Results: Median survival was 24 days. Only 20% (92/468) of predictions were acc...

  12. The geometric structures, vibrational frequencies and redox properties of the actinyl coordination complexes ([AnO2(L)n](m); An = U, Pu, Np; L = H2O, Cl-, CO3(2-), CH3CO2(-), OH-) in aqueous solution, studied by density functional theory methods.

    Science.gov (United States)

    Austin, Jonathan P; Sundararajan, Mahesh; Vincent, Mark A; Hillier, Ian H

    2009-08-14

    The geometric and electronic structures of the aqua, chloro, acetato, hydroxo and carbonato complexes of U, Np and Pu in both their (VI) and (V) oxidation states, and in an aqueous environment, have been studied using density functional theory methods. We have obtained micro-solvated structures derived from molecular dynamics simulations and included the bulk solvent using a continuum model. We find that two different hydrogen bonding patterns involving the axial actinyl oxygen atoms are sometimes possible, and may give rise to different An-O bond lengths and vibrational frequencies. These alternative structures are reflected in the experimental An-O bond lengths of the aqua and carbonato complexes. The variation of the redox potential of the uranyl complexes with the different ligands has been studied using both BP86 and B3LYP functionals. The relative values for the four uranium complexes having anionic ligands are in surprisingly good agreement with experiment, although the absolute values are in error by approximately 1 eV. The absolute error for the aqua species is much less, leading to an incorrect order of the redox potentials of the aqua and chloro species.
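
    For context, the standard electrochemical relation typically used to convert such computed free-energy changes into potentials (the record does not spell out the authors' exact protocol) is

    ```latex
    E^{\circ} = -\frac{\Delta G^{\circ}}{nF}
    ```

    where \Delta G^{\circ} is the free-energy change of the An(VI) + e^- -> An(V) reduction, n is the number of electrons transferred (here 1), and F is the Faraday constant.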

  13. Geometric Aspects of Quantum Mechanics and Quantum Entanglement

    International Nuclear Information System (INIS)

    Chruscinski, Dariusz

    2006-01-01

    It is shown that standard non-relativistic Quantum Mechanics gives rise to elegant and rich geometrical structures. The space of quantum states is endowed with a nontrivial Fubini-Study metric which is responsible for the 'peculiarities' of the quantum world. We show that there is also an intricate connection between geometrical structures and quantum entanglement

  14. A Framework for Assessing Reading Comprehension of Geometric Construction Texts

    Science.gov (United States)

    Yang, Kai-Lin; Li, Jian-Lin

    2018-01-01

    This study investigates one issue related to reading mathematical texts by presenting a two-dimensional framework for assessing reading comprehension of geometric construction texts. The two dimensions of the framework were formulated by modifying categories of reading literacy and drawing on key elements of geometric construction texts. Three…

  15. Active Learning Environment with Lenses in Geometric Optics

    Science.gov (United States)

    Tural, Güner

    2015-01-01

    Geometric optics is one of the difficult topics for students within physics discipline. Students learn better via student-centered active learning environments than the teacher-centered learning environments. So this study aimed to present a guide for middle school teachers to teach lenses in geometric optics via active learning environment…

  16. Discrete geometric structures for architecture

    KAUST Repository

    Pottmann, Helmut

    2010-06-13

    The emergence of freeform structures in contemporary architecture raises numerous challenging research problems, most of which are related to the actual fabrication and are a rich source of research topics in geometry and geometric computing. The talk will provide an overview of recent progress in this field, with a particular focus on discrete geometric structures. Most of these result from practical requirements on segmenting a freeform shape into planar panels and on the physical realization of supporting beams and nodes. A study of quadrilateral meshes with planar faces reveals beautiful relations to discrete differential geometry. In particular, we discuss meshes which discretize the network of principal curvature lines. Conical meshes are among these meshes; they possess conical offset meshes at a constant face/face distance, which in turn leads to a supporting beam layout with so-called torsion free nodes. This work can be generalized to a variety of multilayer structures and laid the ground for an adapted curvature theory for these meshes. There are also efforts on segmenting surfaces into planar hexagonal panels. Though these are less constrained than planar quadrilateral panels, this problem is still waiting for an elegant solution. Inspired by freeform designs in architecture which involve circles and spheres, we present a new kind of triangle mesh whose faces' in-circles form a packing, i.e., the in-circles of two triangles with a common edge have the same contact point on that edge. These "circle packing (CP) meshes" exhibit an aesthetic balance of shape and size of their faces. They are closely tied to sphere packings on surfaces and to various remarkable structures and patterns which are of interest in art, architecture, and design. CP meshes constitute a new link between architectural freeform design and computational conformal geometry. Recently, certain timber structures motivated us to study discrete patterns of geodesics on surfaces. This

  17. Geometrical tile design for complex neighborhoods.

    Science.gov (United States)

    Czeizler, Eugen; Kari, Lila

    2009-01-01

    Recent research has showed that tile systems are one of the most suitable theoretical frameworks for the spatial study and modeling of self-assembly processes, such as the formation of DNA and protein oligomeric structures. A Wang tile is a unit square, with glues on its edges, attaching to other tiles and forming larger and larger structures. Although quite intuitive, the idea of glues placed on the edges of a tile is not always natural for simulating the interactions occurring in some real systems. For example, when considering protein self-assembly, the shape of a protein is the main determinant of its functions and its interactions with other proteins. Our goal is to use geometric tiles, i.e., square tiles with geometrical protrusions on their edges, for simulating tiled paths (zippers) with complex neighborhoods, by ribbons of geometric tiles with simple, local neighborhoods. This paper is a step toward solving the general case of an arbitrary neighborhood, by proposing geometric tile designs that solve the case of a "tall" von Neumann neighborhood, the case of the f-shaped neighborhood, and the case of a 3 x 5 "filled" rectangular neighborhood. The techniques can be combined and generalized to solve the problem in the case of any neighborhood, centered at the tile of reference, and included in a 3 x (2k + 1) rectangle.

  18. Prevalence and risk factors for refractive errors in the South Indian adult population: The Andhra Pradesh Eye disease study

    Directory of Open Access Journals (Sweden)

    Sannapaneni Krishnaiah

    2008-12-01

    Full Text Available Aim: To report the prevalence, risk factors and associated population attributable risk percentage (PAR) for refractive errors in the South Indian adult population. Methods: A population-based cross-sectional epidemiologic study was conducted in the Indian state of Andhra Pradesh. A multistage cluster, systematic, stratified random sampling method was used to obtain participants (n = 10293) for this study. Results: The age-gender-area-adjusted prevalence rates in those ≥40 years of age were determined for myopia (spherical equivalent [SE] < −0.5 D) 34.6% (95% confidence interval [CI]: 33.1–36.1), high myopia (SE < −5.0 D) 4.5% (95% CI: 3.8–5.2), hyperopia (SE > +0.5 D) 18.4% (95% CI: 17.1–19.7), astigmatism (cylinder < −0.5 D) 37.6% (95% CI: 36–39.2), and anisometropia (SE difference between right and left eyes >0.5 D) 13.0% (95% CI: 11.9–14.1). The prevalence of myopia, astigmatism, high myopia, and anisometropia significantly increased with increasing age (all p < 0.0001). There was no gender difference in the prevalence rates of any type of refractive error, though women had a significantly higher rate of hyperopia than men (p < 0.0001). Hyperopia was significantly higher among those with a higher educational level (odds ratio [OR] 2.49; 95% CI: 1.51–3.95) and among the hypertensive group (OR 1.24; 95% CI: 1.03–1.49). The severity of lens nuclear opacity was positively associated with myopia and negatively associated with hyperopia. Conclusions: The prevalence of myopia in this adult Indian population is much higher than in similarly aged white populations. These results confirm the previously

  19. Time Series Analysis Using Geometric Template Matching.

    Science.gov (United States)

    Frank, Jordan; Mannor, Shie; Pineau, Joelle; Precup, Doina

    2013-03-01

    We present a novel framework for analyzing univariate time series data. At the heart of the approach is a versatile algorithm for measuring the similarity of two segments of time series called geometric template matching (GeTeM). First, we use GeTeM to compute a similarity measure for clustering and nearest-neighbor classification. Next, we present a semi-supervised learning algorithm that uses the similarity measure with hierarchical clustering in order to improve classification performance when unlabeled training data are available. Finally, we present a boosting framework called TDEBOOST, which uses an ensemble of GeTeM classifiers. TDEBOOST augments the traditional boosting approach with an additional step in which the features used as inputs to the classifier are adapted at each step to improve the training error. We empirically evaluate the proposed approaches on several datasets, such as accelerometer data collected from wearable sensors and ECG data.
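
    GeTeM itself is more elaborate, but the surrounding pipeline (a similarity measure feeding a nearest-neighbor classifier) can be sketched with plain normalized cross-correlation standing in for the GeTeM measure; the templates and labels below are hypothetical:

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation of two equal-length segments,
        standing in here for the paper's GeTeM similarity measure."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(a @ b) / len(a)

    def classify(segment, templates):
        """1-nearest-neighbor: return the label of the most similar template."""
        return max(templates, key=lambda label: ncc(segment, templates[label]))

    t = np.linspace(0, 2 * np.pi, 100)
    templates = {"walk": np.sin(t), "run": np.sin(3 * t)}  # hypothetical gait templates
    print(classify(np.sin(3 * t) + 0.1, templates))        # -> "run"
    ```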

  20. Refractive error, visual acuity and causes of vision loss in children in Shandong, China. The Shandong Children Eye Study.

    Directory of Open Access Journals (Sweden)

    Jian Feng Wu

    Full Text Available PURPOSE: To examine the prevalence of refractive errors and the prevalence and causes of vision loss among preschool and school children in East China. METHODS: Using random cluster sampling in a cross-sectional school-based study design, children aged 4-18 years were selected from kindergartens, primary schools, and junior and senior high schools in the rural Guanxian County and the city of Weihai. All children underwent a complete ocular examination including measurement of uncorrected (UCVA) and best corrected visual acuity (BCVA) and auto-refractometry under cycloplegia. Myopia was defined as a refractive error of ≤ −0.5 diopters (D), high myopia as ≤ −6.0 D, and amblyopia as BCVA ≤ 20/32 without any obvious reason for vision reduction and with strabismus or refractive errors as potential reasons. RESULTS: Out of 6364 eligible children, 6026 (94.7%) children participated. The prevalence of myopia (overall: 36.9 ± 0.6%; 95% confidence interval (CI): 36.0, 38.0) increased (P<0.001) from 1.7 ± 1.2% (95% CI: 0.0, 4.0) in the 4-year-olds to 84.6 ± 3.2% (95% CI: 78.0, 91.0) in the 17-year-olds. Myopia was associated with older age (OR: 1.56; 95% CI: 1.52, 1.60; P<0.001), female gender (OR: 1.22; 95% CI: 1.08, 1.39; P = 0.002) and urban region (OR: 2.88; 95% CI: 2.53, 3.29; P<0.001). The prevalence of high myopia (2.0 ± 0.2%) increased from 0.7 ± 0.3% (95% CI: 0.1, 1.3) in the 10-year-olds to 13.9 ± 3.0% (95% CI: 7.8, 19.9) in the 17-year-olds. It was associated with older age (OR: 1.50; 95% CI: 1.41, 1.60; P<0.001) and urban region (OR: 3.11; 95% CI: 2.08, 4.66; P<0.001). Astigmatism (≥ 0.75 D; 36.3 ± 0.6%; 95% CI: 35.0, 38.0) was associated with older age (P<0.001; OR: 1.06; 95% CI: 1.04, 1.09), more myopic refractive error (P<0.001; OR: 0.94; 95% CI: 0.91, 0.97) and urban region (P<0.001; OR: 1.47; 95% CI: 1.31, 1.64). BCVA was ≤ 20/40 in the better eye in 19 (0.32%) children. UCVA ≤ 20/40 in at least one eye was found in 2046 (34.05%) children, with undercorrected refractive error as the cause in 1975 (32.9%) children. Amblyopia