WorldWideScience

Sample records for reduce geometry errors

  1. Students’ Errors in Geometry Viewed from Spatial Intelligence

    Science.gov (United States)

    Riastuti, N.; Mardiyana, M.; Pramudya, I.

    2017-09-01

Geometry is one of the difficult materials because students must have the ability to visualize, describe images, draw shapes, and recognize kinds of shapes. The aim of this study is to describe student errors based on Newman's Error Analysis in solving geometry problems, viewed from spatial intelligence. This research uses a descriptive qualitative method with a purposive sampling technique. The data in this research are the results of a geometry material test and interviews with 8th graders of a Junior High School in Indonesia. The results of this study show that each category of spatial intelligence is associated with a different type of error in solving problems on geometry material. Errors are mostly made by students with low spatial intelligence because they have deficiencies in visual abilities. Analysis of student errors viewed from spatial intelligence is expected to help students reflect when solving geometry problems.

  2. Errors Analysis of Students in Mathematics Department to Learn Plane Geometry

    Science.gov (United States)

    Mirna, M.

    2018-04-01

This article describes the results of qualitative descriptive research that reveals the locations, types and causes of student errors in answering plane geometry problems at the problem-solving level. Answers from 59 students on three test items showed errors ranging from understanding the concepts and principles of geometry itself to applying them in problem solving. The types of error consist of concept errors, principle errors and operational errors. Reflection with four subjects revealed the causes of the errors: 1) student learning motivation is very low, 2) in their high school learning experience, geometry was seen as unimportant, 3) students have very little experience using their reasoning in solving problems, and 4) students' reasoning ability is still very low.

  3. Euclidean Geometry Codes, minimum weight words and decodable error-patterns using bit-flipping

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn; Jonsson, Bergtor

    2005-01-01

We determine the number of minimum weight words in a class of Euclidean Geometry codes and link the performance of the bit-flipping decoding algorithm to the geometry of the error patterns.
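The bit-flipping idea the record refers to can be sketched in a few lines. The snippet below is a minimal, illustrative serial bit-flipping decoder for an arbitrary binary parity-check matrix; the (7,4) Hamming code stands in for the Euclidean Geometry codes analyzed in the paper, and all variable names are illustrative.

```python
import numpy as np

def bit_flip_decode(H, received, max_iters=50):
    """Serial bit-flipping: while any parity check fails, flip the single
    bit involved in the largest number of unsatisfied checks."""
    word = received.copy()
    for _ in range(max_iters):
        syndrome = (H @ word) % 2
        if not syndrome.any():
            break                      # all checks satisfied
        unsatisfied = H.T @ syndrome   # unsatisfied-check count per bit
        word[np.argmax(unsatisfied)] ^= 1
    return word

# (7,4) Hamming parity-check matrix as a small stand-in example
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
sent = np.zeros(7, dtype=int)          # the all-zero codeword
received = sent.copy()
received[2] ^= 1                       # inject a single bit error
decoded = bit_flip_decode(H, received)
```

Which error patterns such a decoder corrects depends on the geometry of the check matrix, which is exactly the link the paper investigates.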

  4. Entropy Error Model of Planar Geometry Features in GIS

    Institute of Scientific and Technical Information of China (English)

    LI Dajun; GUAN Yunlan; GONG Jianya; DU Daosheng

    2003-01-01

Positional error of line segments is usually described using the "g-band"; however, its band width depends on the choice of confidence level. In fact, given different confidence levels, a series of concentric bands can be obtained. To overcome the effect of the confidence level on the error indicator, we introduce union entropy theory to propose an entropy error ellipse index for points, then extend it to line segments and polygons, establishing an entropy error band for line segments and an entropy error donut for polygons. The research shows that the entropy error indices can be determined uniquely and are not influenced by the confidence level, and that they are suitable for describing the positional uncertainty of planar geometry features.
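The motivation can be illustrated numerically: the semi-axes of a conventional error ellipse change with the chosen confidence level, whereas the differential entropy of the bivariate Gaussian positional error is a single, confidence-free number. The sketch below uses a hypothetical point-position covariance; it illustrates the motivation, not the paper's exact index definitions.

```python
import numpy as np

def confidence_ellipse_axes(cov, confidence):
    """Semi-axes of the error ellipse at a given confidence level; note
    the explicit dependence on the chosen confidence."""
    k2 = -2.0 * np.log(1.0 - confidence)   # chi-square quantile, 2 dof
    eigvals = np.linalg.eigvalsh(cov)
    return np.sqrt(k2 * eigvals)

def positional_entropy(cov):
    """Differential entropy of a bivariate Gaussian positional error:
    H = ln(2*pi*e) + 0.5*ln(det(cov)).  No confidence level appears."""
    return np.log(2 * np.pi * np.e) + 0.5 * np.log(np.linalg.det(cov))

cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])   # hypothetical point-position covariance (m^2)
axes_95 = confidence_ellipse_axes(cov, 0.95)
axes_99 = confidence_ellipse_axes(cov, 0.99)
ent = positional_entropy(cov)    # unchanged whatever confidence is chosen
```

Raising the confidence from 95% to 99% grows the ellipse, while the entropy stays fixed, which is the uniqueness property the index is built on.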

  5. Minimizing pulling geometry errors in atomic force microscope single molecule force spectroscopy.

    Science.gov (United States)

    Rivera, Monica; Lee, Whasil; Ke, Changhong; Marszalek, Piotr E; Cole, Daniel G; Clark, Robert L

    2008-10-01

    In atomic force microscopy-based single molecule force spectroscopy (AFM-SMFS), it is assumed that the pulling angle is negligible and that the force applied to the molecule is equivalent to the force measured by the instrument. Recent studies, however, have indicated that the pulling geometry errors can drastically alter the measured force-extension relationship of molecules. Here we describe a software-based alignment method that repositions the cantilever such that it is located directly above the molecule's substrate attachment site. By aligning the applied force with the measurement axis, the molecule is no longer undergoing combined loading, and the full force can be measured by the cantilever. Simulations and experimental results verify the ability of the alignment program to minimize pulling geometry errors in AFM-SMFS studies.
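The combined-loading effect described above follows from simple trigonometry: when the cantilever tip sits laterally offset from the molecule's attachment site, only the component of the molecular force along the vertical measurement axis is registered. A minimal sketch, with hypothetical coordinates (the actual alignment software is not reproduced here):

```python
import numpy as np

def pulling_angle(tip_xy, anchor_xy, extension_z):
    """Angle between the stretched molecule and the vertical measurement
    axis, given the lateral offset between the cantilever tip and the
    molecule's substrate attachment site."""
    dx, dy = np.asarray(tip_xy, float) - np.asarray(anchor_xy, float)
    return np.arctan2(np.hypot(dx, dy), extension_z)

def measured_fraction(theta):
    """Fraction of the molecular force projected onto the cantilever's
    vertical axis; 1.0 only when tip and anchor are aligned (theta = 0)."""
    return np.cos(theta)

# a 50 nm lateral offset at 100 nm extension (hypothetical numbers)
theta = pulling_angle(tip_xy=(50.0, 0.0), anchor_xy=(0.0, 0.0),
                      extension_z=100.0)
frac = measured_fraction(theta)   # < 1: part of the force is unmeasured
```

Repositioning the cantilever directly above the anchor drives theta to zero, so the full force is measured, which is what the alignment procedure achieves.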

  6. Fast Erasure and Error decoding of Algebraic Geometry Codes up to the Feng-Rao Bound

    DEFF Research Database (Denmark)

    Jensen, Helge Elbrønd; Sakata, S.; Leonard, D.

    1996-01-01

This paper gives an errata (that is, erasure-and-error) decoding algorithm for one-point algebraic geometry codes up to the Feng-Rao designed minimum distance, using Sakata's multidimensional generalization of the Berlekamp-Massey algorithm and the voting procedure of Feng and Rao.

  7. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    Science.gov (United States)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.

  8. Perceptual learning eases crowding by reducing recognition errors but not position errors.

    Science.gov (United States)

    Xiong, Ying-Zi; Yu, Cong; Zhang, Jun-Yun

    2015-08-01

    When an observer reports a letter flanked by additional letters in the visual periphery, the response errors (the crowding effect) may result from failure to recognize the target letter (recognition errors), from mislocating a correctly recognized target letter at a flanker location (target misplacement errors), or from reporting a flanker as the target letter (flanker substitution errors). Crowding can be reduced through perceptual learning. However, it is not known how perceptual learning operates to reduce crowding. In this study we trained observers with a partial-report task (Experiment 1), in which they reported the central target letter of a three-letter string presented in the visual periphery, or a whole-report task (Experiment 2), in which they reported all three letters in order. We then assessed the impact of training on recognition of both unflanked and flanked targets, with particular attention to how perceptual learning affected the types of errors. Our results show that training improved target recognition but not single-letter recognition, indicating that training indeed affected crowding. However, training did not reduce target misplacement errors or flanker substitution errors. This dissociation between target recognition and flanker substitution errors supports the view that flanker substitution may be more likely a by-product (due to response bias), rather than a cause, of crowding. Moreover, the dissociation is not consistent with hypothesized mechanisms of crowding that would predict reduced positional errors.

  9. Fast Erasure-and error decoding of algebraic geometry codes up to the Feng-Rao bound

    DEFF Research Database (Denmark)

    Høholdt, Tom; Jensen, Helge Elbrønd; Sakata, Shojiro

    1998-01-01

This correspondence gives an errata (that is, erasure-and-error) decoding algorithm for one-point algebraic-geometry codes up to the Feng-Rao designed minimum distance, using Sakata's multidimensional generalization of the Berlekamp-Massey algorithm and the voting procedure of Feng and Rao.

  10. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor that increases with the code distance.
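The estimation step can be sketched with a bare-bones Gaussian-process regression: fit past noisy error-rate estimates with an RBF kernel and extrapolate one step ahead. Everything below (drift model, hyperparameters, numbers) is a hypothetical stand-in for the paper's actual syndrome-derived estimates.

```python
import numpy as np

def gp_predict(t_train, y_train, t_query, length=5.0, sigma_f=0.02, noise=1e-3):
    """Posterior mean of Gaussian-process regression (RBF kernel) over
    past error-rate estimates, evaluated at the query times."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sigma_f ** 2 * np.exp(-0.5 * (d / length) ** 2)
    K = k(t_train, t_train) + noise ** 2 * np.eye(t_train.size)
    mu = y_train.mean()                         # centre on the sample mean
    return mu + k(t_query, t_train) @ np.linalg.solve(K, y_train - mu)

# hypothetical slowly drifting physical error rate, observed with noise
t = np.arange(20.0)
true_rate = 0.01 + 0.002 * np.sin(0.3 * t)
rng = np.random.default_rng(0)
observed = true_rate + rng.normal(0.0, 5e-4, t.size)
predicted = gp_predict(t, observed, np.array([20.0]))   # one step ahead
```

Feeding such predicted rates to the decoder, rather than a stale average, is what reduces the failure probability in the protocol.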

  11. Reducing diagnostic errors in medicine: what's the goal?

    Science.gov (United States)

    Graber, Mark; Gordon, Ruthanna; Franklin, Nancy

    2002-10-01

    This review considers the feasibility of reducing or eliminating the three major categories of diagnostic errors in medicine: "No-fault errors" occur when the disease is silent, presents atypically, or mimics something more common. These errors will inevitably decline as medical science advances, new syndromes are identified, and diseases can be detected more accurately or at earlier stages. These errors can never be eradicated, unfortunately, because new diseases emerge, tests are never perfect, patients are sometimes noncompliant, and physicians will inevitably, at times, choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of choosing a diagnosis. "System errors" play a role when diagnosis is delayed or missed because of latent imperfections in the health care system. These errors can be reduced by system improvements, but can never be eliminated because these improvements lag behind and degrade over time, and each new fix creates the opportunity for novel errors. Tradeoffs also guarantee system errors will persist, when resources are just shifted. "Cognitive errors" reflect misdiagnosis from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human processing and the inherent biases in using heuristics guarantee that these errors will persist. Opportunities exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes (e.g., second opinions, decision-support systems, enhanced access to specialists) and by training designed to improve cognition or cognitive awareness. Diagnostic error can be substantially reduced, but never eradicated.

  12. Errors as a Means of Reducing Impulsive Food Choice.

    Science.gov (United States)

    Sellitto, Manuela; di Pellegrino, Giuseppe

    2016-06-05

    Nowadays, the increasing incidence of eating disorders due to poor self-control has given rise to increased obesity and other chronic weight problems, and ultimately, to reduced life expectancy. The capacity to refrain from automatic responses is usually high in situations in which making errors is highly likely. The protocol described here aims at reducing imprudent preference in women during hypothetical intertemporal choices about appetitive food by associating it with errors. First, participants undergo an error task where two different edible stimuli are associated with two different error likelihoods (high and low). Second, they make intertemporal choices about the two edible stimuli, separately. As a result, this method decreases the discount rate for future amounts of the edible reward that cued higher error likelihood, selectively. This effect is under the influence of the self-reported hunger level. The present protocol demonstrates that errors, well known as motivationally salient events, can induce the recruitment of cognitive control, thus being ultimately useful in reducing impatient choices for edible commodities.

  13. A numerical method for multigroup slab-geometry discrete ordinates problems with no spatial truncation error

    International Nuclear Information System (INIS)

    Barros, R.C. de; Larsen, E.W.

    1991-01-01

A generalization of the one-group Spectral Green's Function (SGF) method is developed for multigroup, slab-geometry discrete ordinates (SN) problems. The multigroup SGF method is free from spatial truncation errors; it generates numerical values for the cell-edge and cell-average angular fluxes that agree with the analytic solution of the multigroup SN equations. Numerical results are given to illustrate the method's accuracy.

  14. Automation of Commanding at NASA: Reducing Human Error in Space Flight

    Science.gov (United States)

    Dorn, Sarah J.

    2010-01-01

    Automation has been implemented in many different industries to improve efficiency and reduce human error. Reducing or eliminating the human interaction in tasks has been proven to increase productivity in manufacturing and lessen the risk of mistakes by humans in the airline industry. Human space flight requires the flight controllers to monitor multiple systems and react quickly when failures occur so NASA is interested in implementing techniques that can assist in these tasks. Using automation to control some of these responsibilities could reduce the number of errors the flight controllers encounter due to standard human error characteristics. This paper will investigate the possibility of reducing human error in the critical area of manned space flight at NASA.

  15. Electronic prescribing reduces prescribing error in public hospitals.

    Science.gov (United States)

    Shawahna, Ramzi; Rahman, Nisar-Ur; Ahmad, Mahmood; Debray, Marcel; Yliperttula, Marjo; Declèves, Xavier

    2011-11-01

To examine the incidence of prescribing errors in a main public hospital in Pakistan and to assess the impact of introducing an electronic prescribing system on the reduction of their incidence. Medication errors are persistent in today's healthcare system. The impact of electronic prescribing on reducing errors has not been tested in the developing world. Prospective review of medication and discharge medication charts before and after the introduction of an electronic inpatient record and prescribing system. Inpatient records (n = 3300) and 1100 discharge medication sheets were reviewed for prescribing errors before and after the installation of the electronic prescribing system in 11 wards. Medications (13,328 and 14,064) were prescribed for inpatients, among which 3008 and 1147 prescribing errors were identified, giving overall error rates of 22.6% and 8.2% during paper-based and electronic prescribing, respectively. Medications (2480 and 2790) were prescribed for discharge patients, among which 418 and 123 errors were detected, giving overall error rates of 16.9% and 4.4% during paper-based and electronic prescribing, respectively. Electronic prescribing has a significant effect on the reduction of prescribing errors. Prescribing errors are commonplace in Pakistan public hospitals. The study evaluated the impact of introducing electronic inpatient records and electronic prescribing on the reduction of prescribing errors in a public hospital in Pakistan. © 2011 Blackwell Publishing Ltd.

  16. Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.

    Science.gov (United States)

    Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae

    2016-01-01

Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported, but none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when an open cutting guide was used, and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduces the cutting errors in UKA. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces and the second-cut cutting surfaces using a navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and the cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice significantly reduced the cutting errors in the coronal plane. In conclusion, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Stereotype threat can reduce older adults' memory errors.

    Science.gov (United States)

    Barber, Sarah J; Mather, Mara

    2013-01-01

    Stereotype threat often incurs the cost of reducing the amount of information that older adults accurately recall. In the current research, we tested whether stereotype threat can also benefit memory. According to the regulatory focus account of stereotype threat, threat induces a prevention focus in which people become concerned with avoiding errors of commission and are sensitive to the presence or absence of losses within their environment. Because of this, we predicted that stereotype threat might reduce older adults' memory errors. Results were consistent with this prediction. Older adults under stereotype threat had lower intrusion rates during free-recall tests (Experiments 1 and 2). They also reduced their false alarms and adopted more conservative response criteria during a recognition test (Experiment 2). Thus, stereotype threat can decrease older adults' false memories, albeit at the cost of fewer veridical memories, as well.

  18. Reduced error signalling in medication-naive children with ADHD

    DEFF Research Database (Denmark)

    Plessen, Kerstin J; Allen, Elena A; Eichele, Heike

    2016-01-01

BACKGROUND: We examined the blood-oxygen level-dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). METHODS: We acquired... RESULTS: ...reduced in children with ADHD. This adaptation was inversely related to activation of the right-lateralized ventral attention network (VAN) on error trials and to task-driven connectivity between the cingulo-opercular system and the VAN. LIMITATIONS: Our study was limited by the modest sample size...

  19. Automated drug dispensing system reduces medication errors in an intensive care setting.

    Science.gov (United States)

    Chapuis, Claire; Roustit, Matthieu; Bal, Gaëlle; Schwebel, Carole; Pansu, Pascal; David-Tchouda, Sandra; Foroni, Luc; Calop, Jean; Timsit, Jean-François; Allenet, Benoît; Bosson, Jean-Luc; Bedouch, Pierrick

    2010-12-01

We aimed to assess the impact of an automated dispensing system on the incidence of medication errors related to picking, preparation, and administration of drugs in a medical intensive care unit. We also evaluated the clinical significance of such errors and user satisfaction. Preintervention and postintervention study involving a control and an intervention medical intensive care unit. Two medical intensive care units in the same department of a 2,000-bed university hospital. Adult medical intensive care patients. After a 2-month observation period, we implemented an automated dispensing system in one of the units (study unit), chosen randomly, with the other unit serving as the control. The overall error rate was expressed as a percentage of total opportunities for error. The severity of errors was classified according to National Coordinating Council for Medication Error Reporting and Prevention categories by an expert committee. User satisfaction was assessed through self-administered questionnaires completed by nurses. A total of 1,476 medications for 115 patients were observed. After automated dispensing system implementation, we observed a significantly lower percentage of total opportunities for error in the study unit than in the control unit (13.5% and 18.6%, respectively); within the study unit, the rate fell from 20.4% before implementation to 13.5% after. Analysis by error type showed a significant impact of the automated dispensing system in reducing preparation errors. Most errors caused no harm (National Coordinating Council for Medication Error Reporting and Prevention category C). The automated dispensing system did not reduce errors causing harm. Finally, the mean rating for working conditions improved from 1.0±0.8 to 2.5±0.8 on the four-point Likert scale. The implementation of an automated dispensing system reduced overall medication errors related to picking, preparation, and administration of drugs in the intensive care unit. Furthermore, most nurses favored the new drug dispensation organization.

  20. Reducing patient identification errors related to glucose point-of-care testing

    Directory of Open Access Journals (Sweden)

    Gaurav Alreja

    2011-01-01

Background: Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Materials and Methods: Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server, which then transmits data to any medical record matching the financial number of the test result. With a new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds, and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. Results: When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%), in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Conclusion: Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT.
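The gating logic is simple to state: a result is posted only if its patient ID matches an active registration from the ADT feed; otherwise it is held for resolution. The sketch below uses an illustrative data model, not the vendor's actual interface.

```python
def filter_poct_results(results, adt_registry):
    """Transmit only results whose nine-digit patient ID matches an
    active registration in the ADT feed; everything else is held for
    resolution instead of being posted to the wrong chart."""
    matched, held = [], []
    for result in results:
        if result["patient_id"] in adt_registry:
            matched.append(result)
        else:
            held.append(result)
    return matched, held

adt_registry = {"123456789", "987654321"}   # hypothetical ADT feed snapshot
results = [
    {"patient_id": "123456789", "glucose_mg_dl": 104},
    {"patient_id": "111111111", "glucose_mg_dl": 98},   # mistyped ID
]
matched, held = filter_poct_results(results, adt_registry)
```

Moving the same check ahead of testing, as the new meters do, converts a charting error into a prompt the operator must resolve at the bedside.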

  1. Reduced error signalling in medication-naive children with ADHD

    DEFF Research Database (Denmark)

    Plessen, Kerstin J; Allen, Elena A; Eichele, Heike

    2016-01-01

    BACKGROUND: We examined the blood-oxygen level-dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). METHODS: We acquired...... functional MRI data during a Flanker task in medication-naive children with ADHD and healthy controls aged 8-12 years and analyzed the data using independent component analysis. For components corresponding to performance monitoring networks, we compared activations across groups and conditions...... and correlated them with reaction times (RT). Additionally, we analyzed post-error adaptations in behaviour and motor component activations. RESULTS: We included 25 children with ADHD and 29 controls in our analysis. Children with ADHD displayed reduced activation to errors in cingulo-opercular regions...

  2. Reduced phase error through optimized control of a superconducting qubit

    International Nuclear Information System (INIS)

    Lucero, Erik; Kelly, Julian; Bialczak, Radoslaw C.; Lenander, Mike; Mariantoni, Matteo; Neeley, Matthew; O'Connell, A. D.; Sank, Daniel; Wang, H.; Weides, Martin; Wenner, James; Cleland, A. N.; Martinis, John M.; Yamamoto, Tsuyoshi

    2010-01-01

Minimizing phase and other errors in experimental quantum gates allows higher fidelity quantum processing. To quantify and correct for phase errors in particular, we have developed an experimental metrology - amplified phase error (APE) pulses - that amplifies and helps identify phase errors in general multilevel qubit architectures. In order to correct for both phase and amplitude errors specific to virtual transitions and leakage outside of the qubit manifold, we implement 'half derivative', an experimental simplification of derivative reduction by adiabatic gate (DRAG) control theory. The phase errors are lowered by about a factor of five using this method to ∼1.6 deg. per gate, and can be tuned to zero. Leakage outside the qubit manifold, to the qubit |2> state, is also reduced to ∼10^-4 for 20% faster gates.

  3. Near field communications technology and the potential to reduce medication errors through multidisciplinary application.

    Science.gov (United States)

    O'Connell, Emer; Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J; Tabirca, Sabin; O'Driscoll, Aoife; Corrigan, Mark

    2016-01-01

Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems. An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward. Medication errors were compared against errors recorded using a paper-based system. A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round, while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with the paper system to a mean of 0.80 errors per round using NFC. An NFC-based medication system may be used to effectively reduce medication errors in a simulated ward environment.

  4. Reducing errors benefits the field-based learning of a fundamental movement skill in children.

    Science.gov (United States)

    Capio, C M; Poolton, J M; Sit, C H P; Holmstrom, M; Masters, R S W

    2013-03-01

Proficient fundamental movement skills (FMS) are believed to form the basis of more complex movement patterns in sports. This study examined the development of the FMS of overhand throwing in children through either an error-reduced (ER) or error-strewn (ES) training program. Students (n = 216), aged 8-12 years (M = 9.16, SD = 0.96), practiced overhand throwing in either a program that reduced errors during practice (ER) or one that was ES. The ER program reduced errors by incrementally raising task difficulty, while the ES program incrementally lowered task difficulty. Process-oriented assessment of throwing movement form (Test of Gross Motor Development-2) and product-oriented assessment of throwing accuracy (absolute error) were performed. Changes in performance were examined among children in the upper and lower quartiles of the pretest throwing accuracy scores. ER training participants showed greater gains in movement form and accuracy, and performed throwing more effectively with a concurrent secondary cognitive task. Movement form improved among girls, while throwing accuracy improved among children with low ability. Reducing performance errors in FMS training resulted in greater learning than a program that did not restrict errors. The reduced cognitive processing costs (effective dual-task performance) associated with such an approach suggest its potential benefits for children with developmental conditions. © 2011 John Wiley & Sons A/S.

  5. Numerically robust geometry engine for compound solid geometries

    International Nuclear Information System (INIS)

    Vlachoudis, V.; Sinuela-Pastor, D.

    2013-01-01

Monte Carlo programs rely heavily on fast and numerically robust solid geometry engines. However, the success of solid modeling depends on facilities for specifying and editing parameterized models through a user-friendly graphical front-end. Such a user interface has to be fast enough to be interactive for 2D and/or 3D displays, but at the same time numerically robust, in order to display possible modeling errors in real time that could be critical for the simulation. The graphical user interface Flair for FLUKA currently employs such an engine, where special emphasis has been given to being fast and numerically robust. The numerical robustness is achieved by a novel method of estimating the floating-point precision of the operations, which dynamically adapts all decision operations accordingly. Moreover, a predictive caching mechanism ensures that logical errors in the geometry description are found online, without compromising processing time by checking all regions. (authors)

  6. Probabilistic error bounds for reduced order modeling

    Energy Technology Data Exchange (ETDEWEB)

    Abdo, M.G.; Wang, C.; Abdel-Khalik, H.S., E-mail: abdo@purdue.edu, E-mail: wang1730@purdue.edu, E-mail: abdelkhalik@purdue.edu [Purdue Univ., School of Nuclear Engineering, West Lafayette, IN (United States)

    2015-07-01

Reduced order modeling (ROM) has proven to be an effective tool when repeated execution of reactor analysis codes is required. ROM operates on the assumption that the intrinsic dimensionality of the associated reactor physics models is sufficiently small when compared to the nominal dimensionality of the input and output data streams. By employing a truncation technique with roots in linear algebra matrix decomposition theory, ROM effectively discards all components of the input and output data that have negligible impact on the reactor attributes of interest. This manuscript introduces a mathematical approach to quantify the errors resulting from the discarded ROM components. As supported by numerical experiments, the introduced analysis proves that the contribution of the discarded components can be upper-bounded with overwhelmingly high probability. Conversely, this implies that the ROM algorithm can self-adapt to determine the level of reduction needed so that the maximum resulting reduction error is below a tolerance limit set by the user. (author)
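The self-adapting rank selection described above can be sketched with a plain SVD truncation: keep just enough singular vectors that the energy of the discarded components stays below a user tolerance, which then bounds the reconstruction error. The snapshot matrix below is synthetic and purely illustrative.

```python
import numpy as np

def truncated_basis(snapshots, tol):
    """Smallest-rank left-singular basis such that the discarded singular
    values contribute less than `tol` in the Frobenius sense."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    sq = s ** 2
    # tail[r] = sqrt(sum of squared singular values discarded at rank r)
    tail = np.sqrt(np.concatenate([np.cumsum(sq[::-1])[::-1], [0.0]]))
    rank = int(np.argmax(tail <= tol))   # first rank meeting the bound
    return U[:, :rank], rank

rng = np.random.default_rng(1)
# synthetic snapshot matrix: intrinsic dimension 8 plus tiny noise
A = (rng.normal(size=(40, 8)) @ rng.normal(size=(8, 30))
     + 1e-6 * rng.normal(size=(40, 30)))
U_r, rank = truncated_basis(A, tol=1e-3)
err = np.linalg.norm(A - U_r @ (U_r.T @ A))   # Frobenius projection error
```

By the Eckart-Young theorem the projection error equals the tail energy exactly, so the bound holds by construction; the paper's contribution is a probabilistic analogue of this bound for ROM surrogates.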

  7. Reducing systematic errors in measurements made by a SQUID magnetometer

    International Nuclear Information System (INIS)

    Kiss, L.F.; Kaptás, D.; Balogh, J.

    2014-01-01

    A simple method is described which reduces those systematic errors of a superconducting quantum interference device (SQUID) magnetometer that arise from possible radial displacements of the sample in the second-order gradiometer superconducting pickup coil. By rotating the sample rod (and hence the sample) around its axis into a position where the best fit is obtained to the output voltage of the SQUID as the sample is moved through the pickup coil, the accuracy of measuring magnetic moments can be increased significantly. In the cases of an examined Co1.9Fe1.1Si Heusler alloy, pure iron and nickel samples, the accuracy could be increased over the value given in the specification of the device. The suggested method is only meaningful if the measurement uncertainty is dominated by systematic errors – radial displacement in particular – and not by instrumental or environmental noise. - Highlights: • A simple method is described which reduces systematic errors of a SQUID. • The errors arise from a radial displacement of the sample in the gradiometer coil. • The procedure is to rotate the sample rod (with the sample) around its axis. • The best fit to the SQUID voltage has to be attained moving the sample through the coil. • The accuracy of measuring magnetic moment can be increased significantly.

  8. Reducing errors in the GRACE gravity solutions using regularization

    Science.gov (United States)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid- and high-degree-and-order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. To reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems with Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformations in a parallel computing environment and projects the large estimation problem onto a problem about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary with the ground-track coverage of the solutions: they increase with increasing degree and order, and certain resonant and near-resonant harmonic coefficients have higher errors than the other coefficients. Using this knowledge, the study designs a regularization matrix that constrains the geopotential coefficients as a function of their degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4.
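
    The projection step can be sketched as follows (a simplified serial stand-in for the paper's parallel implementation; the test operator and the corner-selection heuristic are assumptions): k steps of Lanczos (Golub-Kahan) bidiagonalization reduce the large least-squares problem to a (k+1) x k bidiagonal one, on which the Tikhonov L-curve is cheap to evaluate.

```python
import numpy as np

def golub_kahan(A, b, k):
    # k steps of Golub-Kahan bidiagonalization: A is projected onto a
    # (k+1) x k lower-bidiagonal matrix B, with b captured as beta0 * e1.
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k))
    alpha = np.zeros(k); beta = np.zeros(k + 1)
    beta[0] = np.linalg.norm(b)
    U[:, 0] = b / beta[0]
    v = np.zeros(n)
    for j in range(k):
        r = A.T @ U[:, j] - beta[j] * v
        alpha[j] = np.linalg.norm(r)
        v = r / alpha[j]
        V[:, j] = v
        p = A @ v - alpha[j] * U[:, j]
        beta[j + 1] = np.linalg.norm(p)
        U[:, j + 1] = p / beta[j + 1]
    B = np.zeros((k + 1, k))
    B[np.arange(k), np.arange(k)] = alpha
    B[np.arange(1, k + 1), np.arange(k)] = beta[1:]
    return B, beta[0]

def lcurve_points(B, beta0, lambdas):
    # Residual and solution norms of the small projected Tikhonov problems.
    k = B.shape[1]
    e1 = np.zeros(B.shape[0]); e1[0] = beta0
    pts = []
    for lam in lambdas:
        M = np.vstack([B, lam * np.eye(k)])
        rhs = np.concatenate([e1, np.zeros(k)])
        y = np.linalg.lstsq(M, rhs, rcond=None)[0]
        pts.append((np.linalg.norm(B @ y - e1), np.linalg.norm(y)))
    return np.array(pts)

# Mildly ill-posed test problem: a Gaussian blur operator (illustrative only).
n = 120
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 3.0) ** 2)
x_true = np.sin(2 * np.pi * i / n)
rng = np.random.default_rng(1)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

B, beta0 = golub_kahan(A, b, k=15)
lambdas = np.logspace(-8, 2, 40)
pts = lcurve_points(B, beta0, lambdas)

# Crude corner pick: point closest to the lower-left in normalized log-log.
logs = np.log10(pts)
norm = (logs - logs.min(0)) / (logs.max(0) - logs.min(0))
lam_star = lambdas[np.argmin((norm ** 2).sum(1))]
```

    The L-curve trade-off survives the projection: the residual norm grows and the solution norm shrinks monotonically with the regularization parameter, so the corner can be located on the small problem instead of the full one.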

  9. Reducing Approximation Error in the Fourier Flexible Functional Form

    Directory of Open Access Journals (Sweden)

    Tristan D. Skolrud

    2017-12-01

    Full Text Available The Fourier Flexible form provides a global approximation to an unknown data generating process. In terms of limiting function specification error, this form is preferable to functional forms based on second-order Taylor series expansions. The Fourier Flexible form is a truncated Fourier series expansion appended to a second-order expansion in logarithms. By replacing the logarithmic expansion with a Box-Cox transformation, we show that the Fourier Flexible form can reduce approximation error by 25% on average in the tails of the data distribution. The new functional form allows for nested testing of a larger set of commonly implemented functional forms.
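
    The key nesting property — the Box-Cox transform reduces to the logarithm as its parameter goes to zero — can be checked directly (a generic sketch of the transform, not the article's estimation code):

```python
import numpy as np

def box_cox(y, lam):
    # Box-Cox transform: (y**lam - 1) / lam for lam != 0, log(y) at lam = 0.
    y = np.asarray(y, dtype=float)
    if lam == 0.0:
        return np.log(y)
    return (y ** lam - 1.0) / lam

y = np.array([0.5, 1.0, 2.0, 5.0])
# As lam -> 0 the transform approaches the logarithmic expansion it replaces,
# which is why the log-based form is nested within the Box-Cox-based form.
close_to_log = np.allclose(box_cox(y, 1e-8), np.log(y), atol=1e-6)
```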

  10. Sub-Doppler cooling in reduced-period optical lattice geometries

    International Nuclear Information System (INIS)

    Berman, P.R.; Raithel, G.; Zhang, R.; Malinovsky, V.S.

    2005-01-01

    It is shown that sub-Doppler cooling occurs in an atom-field geometry that can lead to reduced-period optical lattices. Four optical fields are combined to produce a 'standing wave' Raman field that drives transitions between two ground state sublevels. In contrast to conventional Sisyphus cooling, sub-Doppler cooling to zero velocity occurs when all fields are polarized in the same direction. Solutions are obtained using both semiclassical and quantum Monte Carlo methods in the case of exact two-photon resonance. The connection of the results with conventional Sisyphus cooling is established using a dressed state basis.

  11. Current pulse: can a production system reduce medical errors in health care?

    Science.gov (United States)

    Printezis, Antonios; Gopalakrishnan, Mohan

    2007-01-01

    One of the reasons for rising health care costs is medical errors, a majority of which result from faulty systems and processes. Health care has in the past used process-based initiatives such as Total Quality Management, Continuous Quality Improvement, and Six Sigma to reduce errors. These initiatives to redesign health care, reduce errors, and improve overall efficiency and customer satisfaction have had moderate success. The current trend is to apply the successful Toyota Production System (TPS) to health care, since its organizing principles have led to tremendous improvement in productivity and quality for Toyota and other businesses that have adapted them. This article presents insights on the effectiveness of TPS principles in health care and the challenges that lie ahead in successfully integrating this approach with other quality initiatives.

  12. Approaches to reducing photon dose calculation errors near metal implants

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Jessie Y.; Followill, David S.; Howell, Rebecca M.; Mirkovic, Dragan; Kry, Stephen F., E-mail: sfkry@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States); Liu, Xinming [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States); Stingo, Francesco C. [Department of Biostatistics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States)

    2016-09-15

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact

  14. Accounting for uncertain fault geometry in earthquake source inversions - I: theory and simplified application

    Science.gov (United States)

    Ragon, Théa; Sladen, Anthony; Simons, Mark

    2018-05-01

    The ill-posed nature of earthquake source estimation derives from several factors including the quality and quantity of available observations and the fidelity of our forward theory. Observational errors are usually accounted for in the inversion process. Epistemic errors, which stem from our simplified description of the forward problem, are rarely dealt with despite their potential to bias the estimate of a source model. In this study, we explore the impact of uncertainties related to the choice of a fault geometry in source inversion problems. The geometry of a fault structure is generally reduced to a set of parameters, such as position, strike and dip, for one or a few planar fault segments. While some of these parameters can be solved for, more often they are fixed to an uncertain value. We propose a practical framework to address this limitation by following a previously implemented method exploring the impact of uncertainties on the elastic properties of our models. We develop a sensitivity analysis to small perturbations of fault dip and position. The uncertainties in fault geometry are included in the inverse problem under the formulation of the misfit covariance matrix that combines both prediction and observation uncertainties. We validate this approach with the simplified case of a fault that extends infinitely along strike, using both Bayesian and optimization formulations of a static inversion. If epistemic errors are ignored, predictions are overconfident in the data and source parameters are not reliably estimated. In contrast, inclusion of uncertainties in fault geometry allows us to infer a robust posterior source model. Epistemic uncertainties can be many orders of magnitude larger than observational errors for great earthquakes (Mw > 8). Not accounting for uncertainties in fault geometry may partly explain observed shallow slip deficits for continental earthquakes. Similarly, ignoring the impact of epistemic errors can also bias estimates of

  15. Recent results in the decoding of Algebraic geometry codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund

    1998-01-01

    We analyse the known decoding algorithms for algebraic geometry codes in the case where the number of errors is [(dFR-1)/2]+1, where dFR is the Feng-Rao distance.

  16. Sources of Error in Satellite Navigation Positioning

    Directory of Open Access Journals (Sweden)

    Jacek Januszewski

    2017-09-01

    Full Text Available Uninterrupted information about the user’s position can generally be obtained from a satellite navigation system (SNS). At the time of this writing (January 2017), two global SNSs, GPS and GLONASS, are fully operational; the next two, also global, Galileo and BeiDou, are under construction. In each SNS the accuracy of the user’s position is affected by three main factors: the accuracy of each satellite’s position, the accuracy of the pseudorange measurement, and the satellite geometry. The user’s position error is a function of both the pseudorange error, called UERE (User Equivalent Range Error), and the user/satellite geometry, expressed by the appropriate Dilution Of Precision (DOP) coefficient. The UERE is decomposed into two types of errors: the signal-in-space ranging error, called URE (User Range Error), and the user equipment error (UEE). Detailed analyses of URE, UEE, UERE and the DOP coefficients, and the changes of the DOP coefficients on different days, are presented in this paper.
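
    The error budget described above combines as sigma_position ≈ UERE × DOP. A minimal sketch of the standard geometry computation follows (the satellite constellation and the UERE value are made up for illustration):

```python
import numpy as np

def dops(unit_los):
    # DOP factors from receiver-to-satellite unit line-of-sight vectors (rows).
    los = np.asarray(unit_los, dtype=float)
    G = np.hstack([los, np.ones((los.shape[0], 1))])  # clock-bias column
    Q = np.linalg.inv(G.T @ G)                        # cofactor matrix
    gdop = np.sqrt(np.trace(Q))
    pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])
    return gdop, pdop

# Four satellites: one near zenith, three at 30 deg elevation, 120 deg apart.
el, azs = np.radians(30.0), np.radians([0.0, 120.0, 240.0])
sats = [(0.0, 0.0, 1.0)] + [
    (np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)) for az in azs
]
gdop, pdop = dops(sats)
uere = 6.0                     # metres, illustrative pseudorange error budget
sigma_pos = uere * pdop        # 1-sigma position error estimate
```

    A well-spread constellation keeps PDOP small; satellites bunched together inflate it, which is why the same UERE can yield very different position errors on different days.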

  17. Compensation strategy to reduce geometry and mechanics mismatches in porous biomaterials built with Selective Laser Melting.

    Science.gov (United States)

    Bagheri, Zahra S; Melancon, David; Liu, Lu; Johnston, R Burnett; Pasini, Damiano

    2017-06-01

    The accuracy of Additive Manufacturing processes in fabricating porous biomaterials is currently limited by their capacity to render pore morphology that precisely matches its design. In a porous biomaterial, a geometric mismatch can result in pore occlusion and strut thinning, drawbacks that can inherently compromise bone ingrowth and severely impact mechanical performance. This paper focuses on Selective Laser Melting of porous microarchitecture and proposes a compensation scheme that reduces the morphology mismatch between as-designed and as-manufactured geometry, in particular that of the pore. A spider web analog is introduced, built out of Ti-6Al-4V powder via SLM, and morphologically characterized. Results from an error analysis of strut thickness are used to generate thickness compensation relations expressed as a function of the angle each strut forms with the build plane. The scheme is applied to fabricate a set of three-dimensional porous biomaterials, which are morphologically characterized via micro Computed Tomography, mechanically tested and numerically analyzed. For strut thickness, the results show that the largest mismatch (60% from the design), occurring for horizontal members, is reduced to 3.1% upon application of the compensation. A similar improvement is observed for the mechanical properties, which further corroborates the merit of the design-oriented scheme introduced here. Copyright © 2016 Elsevier Ltd. All rights reserved.
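
    The idea of an angle-dependent compensation relation can be sketched as follows. The calibration numbers below are hypothetical (chosen only to echo the reported 60% horizontal-strut mismatch), and simple interpolation stands in for whatever fitted relation the authors actually derived:

```python
import numpy as np

# Hypothetical calibration data in the spirit of the spider-web analog:
# build angle (degrees, 0 = horizontal strut) vs relative thickness error.
angles = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])
rel_error = np.array([0.60, 0.42, 0.28, 0.17, 0.09, 0.05, 0.03])

def compensated_thickness(design_t, angle_deg):
    # Pre-shrink the CAD thickness so the as-built strut matches the design:
    # t_cad = t_design / (1 + predicted relative error at this build angle).
    err = np.interp(angle_deg, angles, rel_error)
    return design_t / (1.0 + err)
```

    Horizontal struts, which overbuild the most, receive the strongest correction; near-vertical struts are left almost unchanged.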

  18. Cognitive emotion regulation enhances aversive prediction error activity while reducing emotional responses.

    Science.gov (United States)

    Mulej Bratec, Satja; Xie, Xiyao; Schmid, Gabriele; Doll, Anselm; Schilbach, Leonhard; Zimmer, Claus; Wohlschläger, Afra; Riedl, Valentin; Sorg, Christian

    2015-12-01

    Cognitive emotion regulation is a powerful way of modulating emotional responses. However, despite the vital role of emotions in learning, it is unknown whether the effect of cognitive emotion regulation also extends to the modulation of learning. Computational models indicate prediction error activity, typically observed in the striatum and ventral tegmental area, as a critical neural mechanism involved in associative learning. We used model-based fMRI during aversive conditioning with and without cognitive emotion regulation to test the hypothesis that emotion regulation would affect prediction error-related neural activity in the striatum and ventral tegmental area, reflecting an emotion regulation-related modulation of learning. Our results show that cognitive emotion regulation reduced emotion-related brain activity, but increased prediction error-related activity in a network involving ventral tegmental area, hippocampus, insula and ventral striatum. While the reduction of response activity was related to behavioral measures of emotion regulation success, the enhancement of prediction error-related neural activity was related to learning performance. Furthermore, functional connectivity between the ventral tegmental area and ventrolateral prefrontal cortex, an area involved in regulation, was specifically increased during emotion regulation and likewise related to learning performance. Our data, therefore, provide first-time evidence that beyond reducing emotional responses, cognitive emotion regulation affects learning by enhancing prediction error-related activity, potentially via tegmental dopaminergic pathways. Copyright © 2015 Elsevier Inc. All rights reserved.
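
    The prediction-error signal central to this analysis can be illustrated with a Rescorla-Wagner update, a simplification of the temporal-difference models typically used in model-based fMRI (this sketch is generic, not the study's actual computational model):

```python
def rescorla_wagner(outcomes, alpha=0.2):
    # Expectation v is updated by a prediction error delta = outcome - v;
    # learning performance tracks how quickly delta shrinks toward zero.
    v, deltas = 0.0, []
    for r in outcomes:
        delta = r - v
        v += alpha * delta
        deltas.append(delta)
    return v, deltas

# Repeated aversive outcome: prediction errors are large early and decay as
# the association is learned, mirroring the error-related activity the study
# localizes to the ventral tegmental area and striatum.
v, deltas = rescorla_wagner([1.0] * 20)
```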

  19. Experimental Evaluation of a Mixed Controller That Amplifies Spatial Errors and Reduces Timing Errors

    Directory of Open Access Journals (Sweden)

    Laura Marchal-Crespo

    2017-06-01

    Full Text Available Research on motor learning suggests that training with haptic guidance enhances learning of the timing components of motor tasks, whereas error amplification is better for learning the spatial components. We present a novel mixed guidance controller that combines haptic guidance and error amplification to simultaneously promote learning of the timing and spatial components of complex motor tasks. The controller is realized using a force field around the desired position. This force field has a stable manifold tangential to the trajectory that guides subjects in velocity-related aspects. The force field has an unstable manifold perpendicular to the trajectory, which amplifies the perpendicular (spatial) error. We also designed a controller that applies randomly varying, unpredictable disturbing forces to enhance the subjects’ active participation by pushing them away from their “comfort zone.” We conducted an experiment with thirty-two healthy subjects to evaluate the impact of four different training strategies on motor skill learning and self-reported motivation: (i) no haptics, (ii) mixed guidance, (iii) perpendicular error amplification and tangential haptic guidance provided in sequential order, and (iv) randomly varying disturbing forces. Subjects trained two motor tasks using ARMin IV, a robotic exoskeleton for upper limb rehabilitation: follow circles with an ellipsoidal speed profile, and move along a 3D line following a complex speed profile. Mixed guidance showed no detectable learning advantages over the other groups. Results suggest that the effectiveness of the training strategies depends on the subjects’ initial skill level. Mixed guidance seemed to benefit subjects who performed the circle task with smaller errors during baseline (i.e., initially more skilled subjects), while training with no haptics was more beneficial for subjects who created larger errors (i.e., less skilled subjects). Therefore, perhaps the high functional

  20. Examination of the program to avoid round-off error

    International Nuclear Information System (INIS)

    Shiota, Y.; Kusunoki, T.; Tabushi, K.; Shimomura, K.; Kitou, S.

    2005-01-01

    The MACRO programs which express simple shapes such as PLANE, SPHERE, CYLINDER and CONE are used to form the geometry in EGS4. Each MACRO calculates the values the main code needs to recognize the configured geometry. This calculation process may be affected by round-off error. The SPHERE, CYLINDER and CONE MACROs include a function to avoid this effect, but the PLANE MACRO does not. The effect of round-off error is usually small in the case of the PLANE MACRO; however, a slanted plane may amplify it. Therefore, we have written the DELPLANE program, which includes a function to avoid the effect of round-off error. In this study, we examine the DELPLANE program using a simple geometry with a slanted plane. As a result, the normal PLANE MACRO generates a round-off error, whereas the DELPLANE program does not. (author)
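
    The failure mode can be illustrated with a point-versus-plane test (a generic sketch of a round-off safeguard, assuming a tolerance scaled to the accumulated terms; it is not the actual DELPLANE code): a naive sign test on n·x + d misclassifies points that lie on a slanted plane, because the exact sum is swamped by cancellation error.

```python
import math

def plane_side(point, normal, d, rel_tol=1e-12):
    # Classify a point against the plane n.x + d = 0, treating results
    # within round-off of the accumulated terms as "on the plane" --
    # the kind of safeguard DELPLANE adds for slanted planes.
    terms = [n * x for n, x in zip(normal, point)] + [d]
    value = math.fsum(terms)
    noise = rel_tol * max(abs(t) for t in terms)
    if abs(value) <= noise:
        return 0               # on the plane within round-off
    return 1 if value > 0 else -1
```

    For example, the point (1, 1, -1) lies exactly on the plane 0.1x + 0.2y + 0.3z = 0, yet in floating point the naive sum is a small positive residue; the scaled tolerance classifies it correctly as "on the plane."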

  1. A continuous quality improvement project to reduce medication error in the emergency department.

    Science.gov (United States)

    Lee, Sara Bc; Lee, Larry Ly; Yeung, Richard Sd; Chan, Jimmy Ts

    2013-01-01

    Medication errors are a common source of adverse healthcare incidents, particularly in the emergency department (ED), which has a number of factors that make it prone to medication errors. This project aims to reduce medication errors and improve the health and economic outcomes of clinical care in a Hong Kong ED. In 2009, a task group was formed to identify problems that potentially endanger medication safety and to develop strategies to eliminate them. Responsible officers were assigned to look after seven error-prone areas. Strategies were proposed, discussed, endorsed and promulgated to eliminate the problems identified. Medication incidents (MIs) fell from 16 before the improvement work to 6 after. This project successfully established a concrete organizational structure to safeguard error-prone areas of medication safety in a sustainable manner.

  2. The effect on dose accumulation accuracy of inverse-consistency and transitivity error reduced deformation maps

    International Nuclear Information System (INIS)

    Hardcastle, Nicholas; Bender, Edward T.; Tomé, Wolfgang A.

    2014-01-01

    It has previously been shown that deformable image registrations (DIRs) often result in deformation maps that are neither inverse-consistent nor transitive, and that the dose accumulation based on these deformation maps can be inconsistent if different image pathways are used for dose accumulation. A method presented to reduce inverse consistency and transitivity errors has been shown to result in more consistent dose accumulation, regardless of the image pathway selected for dose accumulation. The present study investigates the effect on the dose accumulation accuracy of deformation maps processed to reduce inverse consistency and transitivity errors. A set of lung 4DCT phases were analysed, consisting of four images on which a dose grid was created. Dose to 75 corresponding anatomical locations was manually tracked. Dose accumulation was performed between all image sets with Demons derived deformation maps as well as deformation maps processed to reduce inverse consistency and transitivity errors. The ground truth accumulated dose was then compared with the accumulated dose derived from DIR. Two dose accumulation image pathways were considered. The post-processing method to reduce inverse consistency and transitivity errors had minimal effect on the dose accumulation accuracy. There was a statistically significant improvement in dose accumulation accuracy for one pathway, but for the other pathway there was no statistically significant difference. A post-processing technique to reduce inverse consistency and transitivity errors has a positive, yet minimal effect on the dose accumulation accuracy. Thus the post-processing technique improves consistency of dose accumulation with minimal effect on dose accumulation accuracy.
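
    What inverse-consistency error means, and how a post-processing pass can reduce it, can be shown on a 1D toy (the deformations and the single fixed-point correction below are illustrative assumptions, not the paper's actual algorithm):

```python
import numpy as np

# Toy 1D deformations (hypothetical): the backward map is only an
# approximate inverse of the forward one, as with independent DIR runs.
fwd = lambda p: p + 0.05 * np.sin(2 * np.pi * p)
bwd = lambda p: p - 0.05 * np.sin(2 * np.pi * p)

y = np.linspace(0.0, 1.0, 201)

# Inverse-consistency error: forward-of-backward should be the identity.
ice_before = np.abs(fwd(bwd(y)) - y).max()

# One fixed-point correction of the backward map shrinks the residual.
bwd2 = lambda p: bwd(p) - (fwd(bwd(p)) - p)
ice_after = np.abs(fwd(bwd2(y)) - y).max()
```

    Dose accumulated along different image pathways agrees only to within this residual, which is why reducing it makes accumulation pathway-independent.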

  3. Sensitivity of subject-specific models to errors in musculo-skeletal geometry.

    Science.gov (United States)

    Carbone, V; van der Krogt, M M; Koopman, H F J M; Verdonschot, N

    2012-09-21

    Subject-specific musculo-skeletal models of the lower extremity are an important tool for investigating various biomechanical problems, for instance the results of surgery such as joint replacements and tendon transfers. The aim of this study was to assess the potential effects of errors in musculo-skeletal geometry on subject-specific model results. We performed an extensive sensitivity analysis to quantify the effect of the perturbation of origin, insertion and via points of each of the 56 musculo-tendon parts contained in the model. We used two metrics, namely a Local Sensitivity Index (LSI) and an Overall Sensitivity Index (OSI), to distinguish the effect of the perturbation on the predicted force produced by only the perturbed musculo-tendon parts and by all the remaining musculo-tendon parts, respectively, during a simulated gait cycle. Results indicated that, for each musculo-tendon part, only two points show a significant sensitivity: its origin, or pseudo-origin, point and its insertion, or pseudo-insertion, point. The most sensitive points belong to those musculo-tendon parts that act as prime movers in the walking movement (insertion point of the Achilles Tendon: LSI=15.56%, OSI=7.17%; origin points of the Rectus Femoris: LSI=13.89%, OSI=2.44%) and as hip stabilizers (insertion points of the Gluteus Medius Anterior: LSI=17.92%, OSI=2.79%; insertion point of the Gluteus Minimus: LSI=21.71%, OSI=2.41%). The proposed priority list provides quantitative information to improve the predictive accuracy of subject-specific musculo-skeletal models. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. An improved approach to reduce partial volume errors in brain SPET

    International Nuclear Information System (INIS)

    Hatton, R.L.; Hatton, B.F.; Michael, G.; Barnden, L.; QUT, Brisbane, QLD; The Queen Elizabeth Hospital, Adelaide, SA

    1999-01-01

    Full text: Limitations in SPET resolution give rise to significant partial volume error (PVE) in small brain structures. We have investigated a previously published method (Muller-Gartner et al., J Cereb Blood Flow Metab 1992;16: 650-658) to correct PVE in grey matter using MRI. An MRI is registered and segmented to obtain a grey matter tissue volume, which is then smoothed to match the resolution of the corresponding SPET. By dividing the original SPET by this correction map, structures can be corrected for PVE on a pixel-by-pixel basis. Since this approach is limited by space-invariant filtering, a modification was made by estimating projections for the segmented MRI and reconstructing these using parameters identical to those of the SPET. The methods were tested on simulated brain scans reconstructed with the ordered-subsets EM algorithm (8, 16, 32, 64 equivalent EM iterations). The new method provided better recovery visually. For 32 EM iterations, recovery coefficients were calculated for grey matter regions. The effects of potential errors in the method were examined. Mean recovery was unchanged with a one-pixel registration error, the maximum error found in most registration programs. Segmentation errors > 2 pixels result in loss of accuracy for small structures. The method promises to be useful for reducing PVE in brain SPET
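
    The pixel-by-pixel correction can be sketched on a toy 2D example (a generic illustration of the Muller-Gartner idea with assumed sizes and smoothing width, not the projection-based variant this abstract proposes):

```python
import numpy as np

def smooth(img, sigma):
    # Separable Gaussian smoothing: stand-in for degrading the segmented
    # MRI grey-matter map to the resolution of the SPET scan.
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    out = img.astype(float)
    for axis in range(img.ndim):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, out)
    return out

# Toy example: small grey-matter structure with true uptake 1, imaged at
# low resolution, then corrected pixel-by-pixel by the smoothed GM map.
gm = np.zeros((64, 64)); gm[28:36, 28:36] = 1.0   # segmented "MRI" mask
spet = smooth(gm * 1.0, sigma=3.0)                # blurred "SPET" image
correction = smooth(gm, sigma=3.0)                # resolution-matched map
recovered = np.where(correction > 0.1,
                     spet / np.maximum(correction, 0.1), 0.0)
```

    The uncorrected image underestimates uptake in the small structure (partial volume loss), while the divided image recovers the true value inside the mask.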

  5. Reducing Error, Fraud and Corruption (EFC) in Social Protection Programs

    OpenAIRE

    Tesliuc, Emil Daniel; Milazzo, Annamaria

    2007-01-01

    Since Social Protection (SP) and Social Safety Net (SSN) programs channel a large amount of public resources, it is important to make sure that these resources reach the intended beneficiaries. Error, fraud, or corruption (EFC) reduces the economic efficiency of these interventions by decreasing the amount of money that goes to the intended beneficiaries, and erodes the political support for the program. ...

  6. Performance Analysis of a Decoding Algorithm for Algebraic Geometry Codes

    DEFF Research Database (Denmark)

    Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund; Høholdt, Tom

    1998-01-01

    We analyse the known decoding algorithms for algebraic geometry codes in the case where the number of errors is greater than or equal to [(dFR-1)/2]+1, where dFR is the Feng-Rao distance.

  7. The possible benefits of reduced errors in the motor skills acquisition of children

    Directory of Open Access Journals (Sweden)

    Capio Catherine M

    2012-01-01

    Full Text Available An implicit approach to motor learning suggests that relatively complex movement skills may be better acquired in environments that constrain errors during the initial stages of practice. This concept paper proposes that reducing the number of errors committed during motor learning leads to stable performance when attention demands are increased by concurrent cognitive tasks. While it appears that this approach to practice may be beneficial for motor learning, further studies are needed both to confirm this advantage and to better understand the underlying mechanisms. An approach involving error minimization during early learning may have important applications in paediatric rehabilitation.

  8. Novel error propagation approach for reducing H2S/O2 reaction mechanism

    International Nuclear Information System (INIS)

    Selim, H.; Gupta, A.K.; Sassi, M.

    2012-01-01

    A reduction strategy for the hydrogen sulfide/oxygen reaction mechanism is conducted to simplify the detailed mechanism. The directed relation graph with error propagation (DRGEP) methodology has been used, and a novel direct elementary reaction error (DERE) approach has been developed in this study, allowing further reduction of the reaction mechanism. The reduced mechanism has been compared with the detailed mechanism under different conditions to establish its validity. The results obtained from the reduced mechanism showed good agreement with those from the detailed mechanism, although some discrepancies were found for some species: hydrogen and oxygen mole fractions showed the largest discrepancy of all combustion products. The reduced mechanism was also found to be capable of tracking the changes in chemical kinetics that occur as reaction conditions change. A comparison of the ignition delay times obtained from the reduced mechanism with previous experimental data showed good agreement. The reduced mechanism was used to track changes in the mechanistic pathways of Claus reactions as the reaction progresses.
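
    The error-propagation step of DRGEP can be sketched as a maximum-path-product search over direct interaction coefficients (the species graph and coefficient values below are a hypothetical toy, not the actual H2S/O2 mechanism):

```python
import heapq

# Toy species graph: edge weights are direct interaction coefficients
# (how strongly removing the target species perturbs the source species).
graph = {
    "H2S": {"SH": 0.9, "SO2": 0.3},
    "SH":  {"S2": 0.5, "H2": 0.4},
    "SO2": {"SO3": 0.1},
    "S2":  {},
    "H2":  {},
    "SO3": {},
}

def overall_coefficients(target):
    # Dijkstra-like search maximizing the product of coefficients along
    # any path from the target species (valid since all weights <= 1).
    best = {target: 1.0}
    heap = [(-1.0, target)]
    while heap:
        negr, s = heapq.heappop(heap)
        r = -negr
        if r < best.get(s, 0.0):
            continue
        for t, w in graph[s].items():
            cand = r * w
            if cand > best.get(t, 0.0):
                best[t] = cand
                heapq.heappush(heap, (-cand, t))
    return best

R = overall_coefficients("H2S")
keep = {s for s, r in R.items() if r >= 0.2}   # prune weakly coupled species
```

    Species whose propagated coupling to the target falls below the threshold (here SO3, reachable only through the weak SO2 path) are removed from the mechanism.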

  9. Reducing WCET Overestimations by Correcting Errors in Loop Bound Constraints

    Directory of Open Access Journals (Sweden)

    Fanqi Meng

    2017-12-01

    Full Text Available In order to reduce overestimations of worst-case execution time (WCET), in this article we first report a kind of specific WCET overestimation caused by non-orthogonal nested loops. Then, we propose a novel correction approach which has three basic steps. The first step is to locate the worst-case execution path (WCEP) in the control flow graph and then map it onto source code. The second step is to identify non-orthogonal nested loops from the WCEP by means of an abstract syntax tree. The last step is to recursively calculate the WCET errors caused by the loose loop bound constraints, and then subtract the total errors from the overestimations. The novelty lies in the fact that the WCET correction is only conducted on the non-branching part of the WCEP, thus avoiding potential safety risks caused by possible WCEP switches. Experimental results show that our approach reduces the specific WCET overestimation by an average of more than 82%, and the corrected WCET is never less than the actual WCET. Thus, our approach is not only effective but also safe. It will help developers to design energy-efficient and safe real-time systems.
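    The kind of overestimation targeted here can be illustrated with a hypothetical triangular ("non-orthogonal") nested loop, where a single constant loop-bound constraint forces the analyzer to assume the worst inner bound on every outer iteration:

```python
def wcet_inner_iterations_loose(outer_bound, inner_bound_of):
    """Loose bound: a constant loop-bound constraint must assume the
    worst-case inner bound on every outer iteration."""
    worst_inner = max(inner_bound_of(i) for i in range(outer_bound))
    return outer_bound * worst_inner

def wcet_inner_iterations_exact(outer_bound, inner_bound_of):
    """Exact count for a non-orthogonal nested loop, where the inner
    bound depends on the outer index."""
    return sum(inner_bound_of(i) for i in range(outer_bound))

# Hypothetical triangular loop: `for i in range(n): for j in range(i + 1): ...`
n = 100
loose = wcet_inner_iterations_loose(n, lambda i: i + 1)   # 100 * 100
exact = wcet_inner_iterations_exact(n, lambda i: i + 1)   # 1 + 2 + ... + 100
error = loose - exact  # the correctable part of the overestimation
```

    Counting iterations instead of cycles keeps the example minimal; the paper's correction subtracts the analogous error in execution time.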

  10. Comparative study of the gamma spectrometry method performance in different measurement geometries

    International Nuclear Information System (INIS)

    Diaconescu, C.; Ichim, C.; Bujoreanu, L.; Florea, I.

    2013-01-01

    This paper presents the results obtained by gamma spectrometry of an aqueous liquid waste sample using different measurement geometries. A liquid waste sample with known gamma emitter content was measured in three different geometries in order to assess the influence of the geometry on the final results. To obtain low measurement errors, the gamma spectrometer was calibrated using a calibration standard with the same physical and chemical characteristics as the sample to be measured. Since the calibration was performed with the source in contact with the HPGe detector, the waste sample was also measured, for all three geometries, in contact with the detector. The influence of the measurement geometry on the results was evaluated by computing the relative errors. The measurements performed using three different geometries (250 ml plastic vial, Sarpagan box and 24 ml Tricarb vial) showed that all these geometries may be used to quantify the activity of gamma emitters in different types of radioactive waste. (authors)

  11. Indoor localization using unsupervised manifold alignment with geometry perturbation

    KAUST Repository

    Majeed, Khaqan

    2014-04-01

    The main limitation of deploying/updating Received Signal Strength (RSS) based indoor localization is the construction of the fingerprinted radio map, which is quite a hectic and time-consuming process, especially when the indoor area is enormous and/or dynamic. Different approaches have been undertaken to reduce such deployment/update efforts, but the performance degrades when the fingerprinting load is reduced below a certain level. In this paper, we propose an indoor localization scheme that requires as low as 1% fingerprinting load. This scheme employs unsupervised manifold alignment that takes crowd sourced RSS readings and localization requests as the source data set and the environment's plan coordinates as the destination data set. The 1% fingerprinting load is only used to perturb the local geometries in the destination data set. Our proposed algorithm was shown to achieve less than 5 m mean localization error with 1% fingerprinting load and a limited number of crowd sourced readings, when other learning based localization schemes pass the 10 m mean error with the same information.

  12. Indoor localization using unsupervised manifold alignment with geometry perturbation

    KAUST Repository

    Majeed, Khaqan; Sorour, Sameh; Al-Naffouri, Tareq Y.; Valaee, Shahrokh

    2014-01-01

    The main limitation of deploying/updating Received Signal Strength (RSS) based indoor localization is the construction of fingerprinted radio map, which is quite a hectic and time-consuming process especially when the indoor area is enormous and/or dynamic. Different approaches have been undertaken to reduce such deployment/update efforts, but the performance degrades when the fingerprinting load is reduced below a certain level. In this paper, we propose an indoor localization scheme that requires as low as 1% fingerprinting load. This scheme employs unsupervised manifold alignment that takes crowd sourced RSS readings and localization requests as source data set and the environment's plan coordinates as destination data set. The 1% fingerprinting load is only used to perturb the local geometries in the destination data set. Our proposed algorithm was shown to achieve less than 5 m mean localization error with 1% fingerprinting load and a limited number of crowd sourced readings, when other learning based localization schemes pass the 10 m mean error with the same information.

  13. An Error Analysis of Structured Light Scanning of Biological Tissue

    DEFF Research Database (Denmark)

    Jensen, Sebastian Hoppe Nesgaard; Wilm, Jakob; Aanæs, Henrik

    2017-01-01

    This paper presents an error analysis and correction model for four structured light methods applied to three common types of biological tissue: skin, fat and muscle. Despite its many advantages, structured light is based on the assumption of direct reflection at the object surface only. This assumption is violated by most biological material, e.g. human skin, which exhibits subsurface scattering. In this study, we find that in general, structured light scans of biological tissue deviate significantly from the ground truth. We show that a large portion of this error can be predicted with a simple, statistical linear model based on the scan geometry. As such, scans can be corrected without introducing any specially designed pattern strategy or hardware. We can effectively reduce the error in a structured light scanner applied to biological tissue by as much as a factor of two or three.
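    The abstract does not give the model's exact form, but the general idea — fit a simple linear model of scan error against a geometry feature on calibration data, then subtract the prediction — might be sketched as follows (the feature choice and all data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: a per-point scan-geometry feature and the
# measured depth error against ground truth (synthetic, for illustration).
angle = rng.uniform(0.0, 1.0, 200)            # e.g. cosine of incidence angle
true_coef, true_intercept = 0.8, 0.1
error = true_intercept + true_coef * angle + rng.normal(0.0, 0.01, 200)

# Fit the linear model error ~ a * angle + b by least squares.
A = np.column_stack([angle, np.ones_like(angle)])
(a, b), *_ = np.linalg.lstsq(A, error, rcond=None)

# Correct a new scan by subtracting the predicted error at each point.
new_angle = np.array([0.2, 0.5, 0.9])
predicted_error = a * new_angle + b
```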

  14. Effects of Geometry Design Parameters on the Static Strength and Dynamics for Spiral Bevel Gear

    Directory of Open Access Journals (Sweden)

    Zhiheng Feng

    2017-01-01

    Full Text Available Considering the geometry design parameters, a quasi-static mesh model of spiral bevel gears was established and the mesh characteristics were computed. Considering the time-varying effects of mesh points, mesh force, line-of-action vector, mesh stiffness, transmission error, friction force direction, and friction coefficient, a nonlinear lumped parameter dynamic model was developed for the spiral bevel gear pair. Based on the mesh model and the nonlinear dynamic model, the effects of the main geometry parameters on contact and bending strength were analyzed, as were the effects on dynamic mesh force and dynamic transmission error. Results show that higher values of the pressure angle, root fillet radius, and ratio of tooth thickness tend to improve the contact and bending strength and to reduce the risk of tooth fracture. Improved gears have better vibration performance in the targeted frequency range. Finally, bench tests for both types of spiral bevel gears were performed. Results show that the main failure mode is tooth fracture and that life increased substantially for the spiral bevel gears with improved geometry parameters compared to the original design.
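    The abstract does not give the model equations; a heavily simplified single-degree-of-freedom sketch of the kind of lumped-parameter mesh dynamics involved (time-varying mesh stiffness plus transmission-error excitation; every parameter value here is invented, not from the paper) could look like:

```python
import math

# Minimal 1-DOF gear-mesh sketch:
#   m_e * x'' + c * x' + k(t) * (x - e(t)) = F_static
m_e, c, F = 0.5, 40.0, 500.0           # equivalent mass, damping, static load
k0, dk, omega = 2.0e7, 2.0e6, 2.0e3    # mean stiffness, ripple, mesh frequency
e0 = 5e-6                              # transmission-error amplitude

def k(t):  # time-varying mesh stiffness
    return k0 + dk * math.cos(omega * t)

def e(t):  # static transmission-error excitation
    return e0 * math.cos(omega * t)

# Semi-implicit Euler integration, starting near the static deflection.
x, v, dt = F / k0, 0.0, 1e-6
peak_force = 0.0
for i in range(20000):
    t = i * dt
    a = (F - c * v - k(t) * (x - e(t))) / m_e
    v += a * dt
    x += v * dt
    peak_force = max(peak_force, k(t) * (x - e(t)))  # dynamic mesh force
```

    The paper's model tracks many more time-varying quantities (line-of-action vector, friction direction and coefficient), but the dynamic mesh force and dynamic transmission error it reports are outputs of this same kind of equation.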

  15. Electronic laboratory system reduces errors in National Tuberculosis Program: a cluster randomized controlled trial.

    Science.gov (United States)

    Blaya, J A; Shin, S S; Yale, G; Suarez, C; Asencios, L; Contreras, C; Rodriguez, P; Kim, J; Cegielski, P; Fraser, H S F

    2010-08-01

    To evaluate the impact of the e-Chasqui laboratory information system in reducing reporting errors compared to the current paper system. Cluster randomized controlled trial in 76 health centers (HCs) between 2004 and 2008. Baseline data were collected every 4 months for 12 months. HCs were then randomly assigned to intervention (e-Chasqui) or control (paper). Further data were collected for the same months the following year. Comparisons were made between intervention and control HCs, and before and after the intervention. Intervention HCs had respectively 82% and 87% fewer errors in reporting results for drug susceptibility tests (2.1% vs. 11.9%, P = 0.001, OR 0.17, 95%CI 0.09-0.31) and cultures (2.0% vs. 15.1%). e-Chasqui users sent on average three electronic error reports per week to the laboratories. e-Chasqui reduced the number of missing laboratory results at point-of-care health centers. Clinical users confirmed viewing electronic results not available on paper. Reporting errors to the laboratory using e-Chasqui promoted continuous quality improvement. The e-Chasqui laboratory information system is an important part of laboratory infrastructure improvements to support multidrug-resistant tuberculosis care in Peru.

  16. Multilevel geometry optimization

    Science.gov (United States)

    Rodgers, Jocelyn M.; Fast, Patton L.; Truhlar, Donald G.

    2000-02-01

    Geometry optimization has been carried out for three test molecules using six multilevel electronic structure methods, in particular Gaussian-2, Gaussian-3, multicoefficient G2, multicoefficient G3, and two multicoefficient correlation methods based on correlation-consistent basis sets. In the Gaussian-2 and Gaussian-3 methods, various levels are added and subtracted with unit coefficients, whereas the multicoefficient Gaussian-x methods involve noninteger parameters as coefficients. The multilevel optimizations drop the average error in the geometry (averaged over the 18 cases) by a factor of about two when compared to the single most expensive component of a given multilevel calculation, and in all 18 cases the accuracy of the atomization energy for the three test molecules improves, with an average improvement of 16.7 kcal/mol.
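    The distinction between the unit-coefficient and multicoefficient schemes comes down to how component energies are combined; a sketch with invented component names, energies, and coefficients (not values from the paper):

```python
# Illustrative multilevel energy combination (all numbers hypothetical, hartree).
components = {"MP2/small": -76.20, "MP2/large": -76.31, "CCSD(T)/small": -76.24}

def multilevel_energy(components, coeffs):
    """Linear combination of component energies; unit coefficients recover a
    G2-style additive scheme, fitted noninteger ones a multicoefficient scheme."""
    return sum(coeffs[name] * e for name, e in components.items())

# G2-style additivity: E ~ E_high/small + (E_low/large - E_low/small),
# i.e. coefficients restricted to +1 and -1.
g2_style = multilevel_energy(components,
    {"MP2/small": -1.0, "MP2/large": 1.0, "CCSD(T)/small": 1.0})

# Multicoefficient scheme: coefficients fitted to reference data
# (these particular values are invented for illustration).
mc_style = multilevel_energy(components,
    {"MP2/small": -1.05, "MP2/large": 1.02, "CCSD(T)/small": 1.03})
```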

  17. Effectiveness of stress release geometries on reducing residual stress in electroforming metal microstructure

    Science.gov (United States)

    Song, Chang; Du, Liqun; Zhao, Wenjun; Zhu, Heqing; Zhao, Wen; Wang, Weitai

    2018-04-01

    Micro electroforming, as a mature micromachining technology, is widely used to fabricate metal microdevices in micro electro mechanical systems (MEMS). However, large residual stress at local positions of the electroformed layer often leads to non-uniform residual stress distributions, dimensional accuracy defects and reliability issues during fabrication of the metal microdevice. To solve this problem, a novel design method of presetting stress release geometries in the topological structure of the metal microstructure is proposed in this paper. First, the effect of stress release geometries (circular shape, annular groove shape and rivet shape) on the residual stress in the metal microstructure was investigated by finite element modeling (FEM) analysis. Two evaluation parameters, the stress concentration factor K T and the stress non-uniformity factor δ, were calculated. The simulation results show that presetting stress release geometries can effectively reduce and homogenize the residual stress in the metal microstructure. By combined use of annular groove shape and rivet shape stress release geometries, the stress concentration factor K T and the stress non-uniformity factor δ decreased at a maximum of 49% and 53%, respectively. Meanwhile, the average residual stress σ avg decreased at a maximum of 20%, from  -292.4 MPa to  -232.6 MPa. Then, micro electroforming experiments were carried out corresponding to the simulation models, and the residual stresses in the metal microstructures were measured by the micro Raman spectroscopy (MRS) method. The experimental results proved that the stress non-uniformity factor δ and the average residual stress σ avg also decreased at a maximum with the combined use of annular groove shape and rivet shape stress release geometries, in agreement with the FEM analysis, with the stress non-uniformity factor δ showing a maximum decrease of 49%.

  18. Spur gears: Optimal geometry, methods for generation and Tooth Contact Analysis (TCA) program

    Science.gov (United States)

    Litvin, Faydor L.; Zhang, Jiao

    1988-01-01

    The contents of this report include the following: (1) development of optimal geometry for crowned spur gears; (2) methods for their generation; and (3) tooth contact analysis (TCA) computer programs for the analysis of meshing and bearing contact on the crowned spur gears. The method developed for synthesis is used to determine the optimal geometry of the crowned pinion surface and is directed at reducing the sensitivity of the gears to misalignment, localizing the bearing contact, and guaranteeing a favorable shape and low level of transmission errors. A new method for the generation of the crowned pinion surface has been proposed. This method is based on application of a tool with a surface of revolution that deviates slightly from a regular cone surface. The tool can be used as a grinding wheel or as a shaver. The crowned pinion surface can also be generated by a generating plane whose motion is provided by an automatic grinding machine controlled by a computer. The TCA program simulates the meshing and bearing contact of the misaligned gears. The transmission errors are also determined.

  19. Geometrical error calibration in reflective surface testing based on reverse Hartmann test

    Science.gov (United States)

    Gong, Zhidong; Wang, Daodang; Xu, Ping; Wang, Chao; Liang, Rongguang; Kong, Ming; Zhao, Jun; Mo, Linhai; Mo, Shuhui

    2017-08-01

    In fringe-illumination deflectometry based on a reverse-Hartmann-test configuration, ray tracing of the modeled testing system is performed to reconstruct the test surface error. Careful calibration of the system geometry is required to achieve high testing accuracy. To realize high-precision surface testing with the reverse Hartmann test, a computer-aided geometrical error calibration method is proposed. The aberrations corresponding to various geometrical errors are studied. With the aberration weights for the various geometrical errors, computer-aided optimization of the system geometry with iterative ray tracing is carried out to calibrate the geometrical errors, and accuracy on the order of subnanometers is achieved.

  20. Reducing wrong patient selection errors: exploring the design space of user interface techniques.

    Science.gov (United States)

    Sopan, Awalin; Plaisant, Catherine; Powsner, Seth; Shneiderman, Ben

    2014-01-01

    Wrong patient selection errors are a major issue for patient safety; from ordering medication to performing surgery, the stakes are high. Widespread adoption of Electronic Health Record (EHR) and Computerized Provider Order Entry (CPOE) systems makes patient selection using a computer screen a frequent task for clinicians. Careful design of the user interface can help mitigate the problem by helping providers recall their patients' identities, accurately select their names, and spot errors before orders are submitted. We propose a catalog of twenty-seven distinct user interface techniques, organized according to a task analysis. An associated video demonstrates eighteen of those techniques. EHR designers who consider a wider range of human-computer interaction techniques could reduce selection errors, but verification of efficacy is still needed.

  1. Strategies to reduce the systematic error due to tumor and rectum motion in radiotherapy of prostate cancer

    International Nuclear Information System (INIS)

    Hoogeman, Mischa S.; Herk, Marcel van; Bois, Josien de; Lebesque, Joos V.

    2005-01-01

    Background and purpose: The goal of this work is to develop and evaluate strategies to reduce the uncertainty in the prostate position and rectum shape that arises in the preparation stage of the radiation treatment of prostate cancer. Patients and methods: Nineteen prostate cancer patients, who were treated with 3-dimensional conformal radiotherapy, each received a planning CT scan and 8-13 repeat CT scans during the treatment period. We quantified prostate motion relative to the pelvic bone by first matching the repeat CT scans on the planning CT scan using the bony anatomy. Subsequently, each contoured prostate, including seminal vesicles, was matched on the prostate in the planning CT scan to obtain the translations and rotations. The variation in prostate position was determined in terms of the systematic, random and group mean error. We tested the performance of two correction strategies to reduce the systematic error due to prostate motion. The first, pre-treatment, strategy used only the initial rectum volume in the planning CT scan to adjust the angle of the prostate with respect to the left-right (LR) axis and the shape and position of the rectum. The second, adaptive, strategy used the data of repeat CT scans to improve the estimate of the prostate position and rectum shape during the treatment. Results: The largest component of prostate motion was a rotation around the LR axis. The systematic error (1 SD) was 5.1 deg and the random error was 3.6 deg (1 SD). The average LR-axis rotation between the planning and the repeat CT scans correlated significantly with the rectum volume in the planning CT scan (r=0.86, P<0.0001). Correction of the rotational position on the basis of the planning rectum volume alone reduced the systematic error by 28%. A correction based on the data of the planning CT scan and 4 repeat CT scans reduced the systematic error over the complete treatment period by a factor of 2.
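    The decomposition into group mean, systematic, and random errors follows the convention common in setup-error analysis: the group mean is the mean of per-patient means, the systematic error is the spread (SD) of those per-patient means, and the random error is the typical day-to-day spread within a patient. With synthetic repeat-scan measurements:

```python
import statistics as st

# Per-patient lists of a setup/rotation measurement over repeat scans
# (synthetic values in degrees, for illustration only).
patients = [
    [4.0, 6.0, 5.0],
    [-2.0, -1.0, -3.0],
    [1.0, 0.0, 2.0],
]

means = [st.mean(p) for p in patients]              # per-patient systematic component
group_mean = st.mean(means)                         # overall group mean error
systematic_sd = st.stdev(means)                     # SD of per-patient means (1 SD)
random_sd = st.mean(st.stdev(p) for p in patients)  # typical day-to-day spread
```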

  2. Multilevel geometry optimization

    Energy Technology Data Exchange (ETDEWEB)

    Rodgers, Jocelyn M. [Department of Chemistry and Supercomputer Institute, University of Minnesota, Minneapolis, Minnesota 55455-0431 (United States); Fast, Patton L. [Department of Chemistry and Supercomputer Institute, University of Minnesota, Minneapolis, Minnesota 55455-0431 (United States); Truhlar, Donald G. [Department of Chemistry and Supercomputer Institute, University of Minnesota, Minneapolis, Minnesota 55455-0431 (United States)

    2000-02-15

    Geometry optimization has been carried out for three test molecules using six multilevel electronic structure methods, in particular Gaussian-2, Gaussian-3, multicoefficient G2, multicoefficient G3, and two multicoefficient correlation methods based on correlation-consistent basis sets. In the Gaussian-2 and Gaussian-3 methods, various levels are added and subtracted with unit coefficients, whereas the multicoefficient Gaussian-x methods involve noninteger parameters as coefficients. The multilevel optimizations drop the average error in the geometry (averaged over the 18 cases) by a factor of about two when compared to the single most expensive component of a given multilevel calculation, and in all 18 cases the accuracy of the atomization energy for the three test molecules improves, with an average improvement of 16.7 kcal/mol. (c) 2000 American Institute of Physics.

  3. Customization of user interfaces to reduce errors and enhance user acceptance.

    Science.gov (United States)

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  4. The use of adaptive radiation therapy to reduce setup error: a prospective clinical study

    International Nuclear Information System (INIS)

    Yan Di; Wong, John; Vicini, Frank; Robertson, John; Horwitz, Eric; Brabbins, Donald; Cook, Carla; Gustafson, Gary; Stromberg, Jannifer; Martinez, Alvaro

    1996-01-01

    Eight patients had completed the study. Their mean systematic setup error was 4 mm with a range of 2 mm to 6 mm before adjustment, and was reduced to 0.8 mm with a range of 0.2 mm to 1.8 mm after adjustment. There was no significant difference in their random setup errors before and after adjustment. Analysis of the block overlap distributions shows that the fractions of the prescribed field areas covered by the daily treatment increased after setup adjustment. The block overlap distributions also show that the magnitudes of the random setup errors at different field edges were different; 50% were small enough to allow the treatment margin to be reduced to 4 mm or less. Results from the ongoing treatments of the remaining 12 patients show similar trends and magnitudes, and are not expected to differ. Conclusion: Our prospective study demonstrates that the ART process provides an effective and reliable approach to compensating for the systematic setup error of the individual patient. Adjusting the MLC field allows setup adjustment as small as 2 mm, minimizes the possibility of 'unsettling' the patient and reduces the workload of the therapists. The ART process can be extended to correct for random setup errors by further modification of the MLC field shape and prescribed dose. Most importantly, ART integrates the use of advanced technologies to maximize treatment benefits, and can be important in the implementation of dose-escalated conformal therapy

  5. Reducing number entry errors: solving a widespread, serious problem.

    Science.gov (United States)

    Thimbleby, Harold; Cairns, Paul

    2010-10-06

    Number entry is ubiquitous: it is required in many fields including science, healthcare, education, government, mathematics and finance. People entering numbers can be expected to make errors, but shockingly few systems make any effort to detect, block or otherwise manage errors. Worse, errors may be ignored but processed in arbitrary ways, with unintended results. A standard class of error (defined in the paper) is the 'out by 10 error', which is easily made by miskeying a decimal point or a zero. In safety-critical domains, such as drug delivery, out by 10 errors generally have adverse consequences. Here, we expose the extent of the problem of numeric errors in a very wide range of systems. An analysis of better error management is presented: under reasonable assumptions, we show that the probability of out by 10 errors can be halved by better user interface design. We provide a demonstration user interface to show that the approach is practical. To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact. (Charles Darwin 1879 [2008], p. 229).
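    One simple defense consistent with the paper's argument is to block malformed entries outright before they are parsed, since doubled decimal points and stray zeros are exactly the miskeys that produce out by 10 errors. A minimal validator sketch (the specific rules here are illustrative, not the paper's design):

```python
import re

# Strict numeric-entry pattern: digits with at most one decimal point,
# no bare '.', and no leading zeros like '00.5' that often signal a miskey.
VALID = re.compile(r"^(0|[1-9]\d*)(\.\d+)?$")

def check_entry(text):
    """Return the parsed value, or raise ValueError so the UI can block
    the entry instead of silently processing it in an arbitrary way."""
    if not VALID.fullmatch(text):
        raise ValueError(f"rejected entry: {text!r}")
    return float(text)
```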

  6. A scalable and accurate method for classifying protein-ligand binding geometries using a MapReduce approach.

    Science.gov (United States)

    Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M

    2012-07-01

    We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the most dense octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability in MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for Trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross docking trials for 24 ligands, each docking into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docking in a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking. Copyright © 2012 Elsevier Ltd. All rights reserved.
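    The octant-identifier step of the method described can be sketched as a fixed-depth octree encoding of each 3D point, followed by a count of points per leaf octant to find the densest one (the coordinates below are toy values, and the per-octant counting shown serially here is what the MapReduce stage distributes):

```python
from collections import Counter

def octant_id(point, lo=0.0, hi=1.0, depth=4):
    """Octree octant identifier of a 3D point in [lo, hi)^3: at each level,
    append one octant digit (0-7) built from the three halving decisions."""
    digits = []
    mins = [lo] * 3
    maxs = [hi] * 3
    for _ in range(depth):
        digit = 0
        for axis in range(3):
            mid = 0.5 * (mins[axis] + maxs[axis])
            if point[axis] >= mid:
                digit |= 1 << axis
                mins[axis] = mid
            else:
                maxs[axis] = mid
        digits.append(digit)
    return tuple(digits)

# Cluster: count conformation points per octant and take the densest octant.
points = [(0.1, 0.1, 0.1), (0.12, 0.11, 0.09), (0.115, 0.1, 0.12), (0.9, 0.9, 0.9)]
counts = Counter(octant_id(p) for p in points)
densest, size = counts.most_common(1)[0]
```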

  7. Introduction to combinatorial geometry

    International Nuclear Information System (INIS)

    Gabriel, T.A.; Emmett, M.B.

    1985-01-01

    The combinatorial geometry package as used in many three-dimensional multimedia Monte Carlo radiation transport codes, such as HETC, MORSE, and EGS, is becoming the preferred way to describe simple and complicated systems. Just about any system can be modeled using the package with relatively few input statements. This can be contrasted with the older style geometry packages, in which the required input could be large even for relatively simple systems. However, with advancements come some difficulties. The users of combinatorial geometry must be able to visualize more, and, in some instances, all of the system at a time. Errors can be introduced into the modeling which, though slight and at times hard to detect, can have devastating effects on the calculated results. As with all modeling packages, the best way to learn combinatorial geometry is to use it, first on a simple system and then on more complicated ones. The basic technique for describing the geometry consists of defining the location and shape of the various zones in terms of the intersections and unions of geometric bodies. The geometric bodies generally included in most combinatorial geometry packages are: (1) box, (2) right parallelepiped, (3) sphere, (4) right circular cylinder, (5) right elliptic cylinder, (6) ellipsoid, (7) truncated right cone, (8) right angle wedge, and (9) arbitrary polyhedron. The data necessary to describe each of these bodies are given. As can easily be noted, some subsets are included for simplicity
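    The zone-definition technique described — building zones from intersections, unions, and complements of primitive bodies — can be sketched as point-membership predicates in a few lines (the sphere-in-a-box example is illustrative, not taken from any of the named codes):

```python
# Primitive bodies as point-membership predicates.
def sphere(center, radius):
    cx, cy, cz = center
    return lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2 <= radius ** 2

def box(lo, hi):
    return lambda p: all(l <= c <= h for c, l, h in zip(p, lo, hi))

# Combinatorial operators: a zone is built from body predicates.
def intersection(*bodies): return lambda p: all(b(p) for b in bodies)
def union(*bodies):        return lambda p: any(b(p) for b in bodies)
def complement(body):      return lambda p: not body(p)

# Example zone: inside the box but outside the sphere
# (e.g. moderator surrounding a spherical fuel region).
fuel = sphere((0.0, 0.0, 0.0), 1.0)
tank = box((-2.0, -2.0, -2.0), (2.0, 2.0, 2.0))
moderator = intersection(tank, complement(fuel))
```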

  8. Performance analysis of a decoding algorithm for algebraic-geometry codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund

    1999-01-01

    The fast decoding algorithm for one-point algebraic-geometry codes of Sakata, Elbrønd Jensen, and Høholdt corrects all error patterns of weight less than half the Feng-Rao minimum distance. In this correspondence we analyze the performance of the algorithm for heavier error patterns.

  9. Near field communications technology and the potential to reduce medication errors through multidisciplinary application

    LENUS (Irish Health Repository)

    O’Connell, Emer

    2016-07-01

    Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems.

  10. Human factors interventions to reduce human errors and improve productivity in maintenance tasks

    International Nuclear Information System (INIS)

    Isoda, Hachiro; Yasutake, J.Y.

    1992-01-01

    This paper describes work in progress to develop interventions to reduce human errors and increase maintenance productivity in nuclear power plants. The effort is part of a two-phased Human Factors research program being conducted jointly by the Central Research Institute of Electric Power Industry (CRIEPI) in Japan and the Electric Power Research Institute (EPRI) in the United States. The overall objective of this joint research program is to identify critical maintenance tasks and to develop, implement and evaluate interventions which have high potential for reducing human errors or increasing maintenance productivity. As a result of the Phase 1 effort, ten critical maintenance tasks were identified. For these tasks, over 25 candidate interventions were identified for potential development. After careful analysis, seven interventions were selected for development during Phase 2. This paper describes the methodology used to analyze and identify the most critical tasks, the process of identifying and developing selected interventions and some of the initial results. (author)

  11. Numerical determination of transmission probabilities in cylindrical geometry

    International Nuclear Information System (INIS)

    Queiroz Bogado Leite, S. de.

    1989-11-01

    Efficient methods for numerical calculation of transmission probabilities in cylindrical geometry are presented. Relative errors of the order of 10^-5 or smaller are obtained using analytical solutions and low-order quadrature integration schemes. (author) [pt

  12. Mathematical model of geometry and fibrous structure of the heart.

    Science.gov (United States)

    Nielsen, P M; Le Grice, I J; Smaill, B H; Hunter, P J

    1991-04-01

    We developed a mathematical representation of ventricular geometry and muscle fiber organization using three-dimensional finite elements referred to a prolate spheroid coordinate system. Within elements, fields are approximated using basis functions with associated parameters defined at the element nodes. Four parameters per node are used to describe ventricular geometry. The radial coordinate is interpolated using cubic Hermite basis functions that preserve slope continuity, while the angular coordinates are interpolated linearly. Two further nodal parameters describe the orientation of myocardial fibers. The orientation of fibers within coordinate planes bounded by epicardial and endocardial surfaces is interpolated linearly, with transmural variation given by cubic Hermite basis functions. Left and right ventricular geometry and myocardial fiber orientations were characterized for a canine heart arrested in diastole and fixed at zero transmural pressure. The geometry was represented by a 24-element ensemble with 41 nodes. Nodal parameters fitted using least squares provided a realistic description of ventricular epicardial [root mean square (RMS) error less than 0.9 mm] and endocardial (RMS error less than 2.6 mm) surfaces. Measured fiber fields were also fitted (RMS error less than 17 degrees) with a 60-element, 99-node mesh obtained by subdividing the 24-element mesh. These methods provide a compact and accurate anatomic description of the ventricles suitable for use in finite element stress analysis, simulation of cardiac electrical activation, and other cardiac field modeling problems.
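    The cubic Hermite interpolation used for the radial coordinate has a standard basis on a local element coordinate xi in [0, 1], matching nodal values and derivatives at both ends so that slope continuity across element boundaries is preserved:

```python
def hermite_interp(xi, f0, f1, d0, d1):
    """Cubic Hermite interpolation on xi in [0, 1] from nodal values f0, f1
    and nodal derivatives d0, d1; the derivative parameters give the C1
    (slope) continuity across element boundaries mentioned in the abstract."""
    h00 = 2 * xi**3 - 3 * xi**2 + 1   # weight of f0
    h10 = xi**3 - 2 * xi**2 + xi      # weight of d0
    h01 = -2 * xi**3 + 3 * xi**2      # weight of f1
    h11 = xi**3 - xi**2               # weight of d1
    return h00 * f0 + h10 * d0 + h01 * f1 + h11 * d1
```

    The angular coordinates in the model are interpolated linearly, i.e. with weights (1 - xi) and xi instead of the four cubics above.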

  13. Reducing image interpretation errors – Do communication strategies undermine this?

    International Nuclear Information System (INIS)

    Snaith, B.; Hardy, M.; Lewis, E.F.

    2014-01-01

    Introduction: Errors in the interpretation of diagnostic images in the emergency department are a persistent problem internationally. To address this issue, a number of risk reduction strategies have been suggested, but only radiographer abnormality detection schemes (RADS) have been widely implemented in the UK. This study considers the variation in RADS operation and communication in light of technological advances and changes in service operation. Methods: A postal survey of all NHS hospitals operating either an Emergency Department or Minor Injury Unit and a diagnostic imaging (radiology) department (n = 510) was undertaken between July and August 2011. The questionnaire was designed to elicit information on emergency service provision and details of RADS. Results: 325 questionnaires were returned (n = 325/510; 63.7%). The majority of sites (n = 288/325; 88.6%) operated a RADS, with the majority (n = 227/288; 78.8%) employing a visual ‘flagging’ system as the only method of communication, although the symbols used were inconsistent and contradictory across sites. 61 sites communicated radiographer findings through a written proforma (paper or electronic), but this was run in conjunction with a flagging system at 50 sites. The majority of sites did not have guidance on the scope or operation of the ‘flagging’ or written communication system in use. Conclusions: RADS is an established clinical intervention to reduce errors in diagnostic image interpretation within the emergency setting. The lack of standardisation in communication processes and practices, alongside the rapid adoption of technology, has increased the potential for error and miscommunication.

  14. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, W.R.

    1998-09-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has developed a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper describes previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.

  15. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  16. A Lorentzian quantum geometry

    Energy Technology Data Exchange (ETDEWEB)

    Grotz, Andreas

    2011-10-07

    In this thesis, a formulation of a Lorentzian quantum geometry based on the framework of causal fermion systems is proposed. After giving the general definition of causal fermion systems, we deduce space-time as a topological space with an underlying causal structure. Restricting attention to systems of spin dimension two, we derive the objects of our quantum geometry: the spin space, the tangent space endowed with a Lorentzian metric, connection and curvature. In order to get the correspondence to classical differential geometry, we construct examples of causal fermion systems by regularizing Dirac sea configurations in Minkowski space and on a globally hyperbolic Lorentzian manifold. When removing the regularization, the objects of our quantum geometry reduce to the common objects of spin geometry on Lorentzian manifolds, up to higher order curvature corrections.

  17. A Lorentzian quantum geometry

    International Nuclear Information System (INIS)

    Grotz, Andreas

    2011-01-01

    In this thesis, a formulation of a Lorentzian quantum geometry based on the framework of causal fermion systems is proposed. After giving the general definition of causal fermion systems, we deduce space-time as a topological space with an underlying causal structure. Restricting attention to systems of spin dimension two, we derive the objects of our quantum geometry: the spin space, the tangent space endowed with a Lorentzian metric, connection and curvature. In order to get the correspondence to classical differential geometry, we construct examples of causal fermion systems by regularizing Dirac sea configurations in Minkowski space and on a globally hyperbolic Lorentzian manifold. When removing the regularization, the objects of our quantum geometry reduce to the common objects of spin geometry on Lorentzian manifolds, up to higher order curvature corrections.

  18. Dependence of Dynamic Modeling Accuracy on Sensor Measurements, Mass Properties, and Aircraft Geometry

    Science.gov (United States)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    The NASA Generic Transport Model (GTM) nonlinear simulation was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of identified parameters in mathematical models describing the flight dynamics and determined from flight data. Measurements from a typical flight condition and system identification maneuver were systematically and progressively deteriorated by introducing noise, resolution errors, and bias errors. The data were then used to estimate nondimensional stability and control derivatives within a Monte Carlo simulation. Based on these results, recommendations are provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using additional flight conditions and parameter estimation methods, as well as a nonlinear flight simulation of the General Dynamics F-16 aircraft, were compared with these recommendations.
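
The deterioration-and-estimation loop described above can be sketched in miniature: corrupt synthetic measurements with noise and a per-run bias, fit model parameters by least squares, and examine the Monte Carlo scatter of the estimates. The linear model and error magnitudes below are made up for illustration; the study itself used the full GTM nonlinear simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.5, -2.0            # hypothetical stand-ins for model parameters
x = np.linspace(0.0, 1.0, 50)         # stand-in for the measured input signal

estimates = []
for _ in range(500):                  # Monte Carlo over sensor-error draws
    noise = rng.normal(0.0, 0.02, x.size)    # random measurement noise
    bias = rng.normal(0.0, 0.01)             # per-run sensor bias error
    y = a_true + b_true * x + noise + bias   # deteriorated measurement
    A = np.column_stack([np.ones_like(x), x])
    fit, *_ = np.linalg.lstsq(A, y, rcond=None)
    estimates.append(fit)

est = np.array(estimates)
# Mean shows any systematic bias in the estimates; std shows their scatter
print(est.mean(axis=0), est.std(axis=0))
```

Note how the zero-mean bias error inflates the scatter of the intercept estimate while leaving the slope largely untouched; the study's recommendations come from exactly this kind of sensitivity bookkeeping, done over many parameters at once.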

  19. Spinning geometry = Twisted geometry

    International Nuclear Information System (INIS)

    Freidel, Laurent; Ziprick, Jonathan

    2014-01-01

    It is well known that the SU(2)-gauge invariant phase space of loop gravity can be represented in terms of twisted geometries. These are piecewise-linear-flat geometries obtained by gluing together polyhedra, but the resulting geometries are not continuous across the faces. Here we show that this phase space can also be represented by continuous, piecewise-flat three-geometries called spinning geometries. These are composed of metric-flat three-cells glued together consistently. The geometry of each cell and the manner in which they are glued is compatible with the choice of fluxes and holonomies. We first remark that the fluxes provide each edge with an angular momentum. By studying the piecewise-flat geometries which minimize edge lengths, we show that these angular momenta can be literally interpreted as the spin of the edges: the geometries of all edges are necessarily helices. We also show that the compatibility of the gluing maps with the holonomy data results in the same conclusion. This shows that a spinning geometry represents a way to glue together the three-cells of a twisted geometry to form a continuous geometry which represents a point in the loop gravity phase space. (paper)

  20. Dual Numbers Approach in Multiaxis Machines Error Modeling

    Directory of Open Access Journals (Sweden)

    Jaroslav Hrdina

    2014-01-01

    Multiaxis machines error modeling is set in the context of modern differential geometry and linear algebra. We apply special classes of matrices over dual numbers and propose a generalization of such concept by means of general Weil algebras. We show that the classification of the geometric errors follows directly from the algebraic properties of the matrices over dual numbers and thus the calculus over the dual numbers is the proper tool for the methodology of multiaxis machines error modeling.
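
A minimal sketch of the dual-number arithmetic underlying this approach, assuming only the defining relation eps^2 = 0; the record's actual machinery works with matrices over these numbers and their Weil-algebra generalizations.

```python
class Dual:
    """Dual number a + b*eps with eps**2 == 0.

    Products automatically carry first-order terms, which is why matrices
    over dual numbers capture small (first-order) geometric errors exactly.
    """
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + b1*a2)*eps
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

# Evaluating f(x) = x**2 at x + eps yields f(x) + f'(x)*eps automatically
x = Dual(3.0, 1.0)
y = x * x
print(y.a, y.b)   # 9.0 6.0
```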

  1. Using Healthcare Failure Mode and Effect Analysis to reduce medication errors in the process of drug prescription, validation and dispensing in hospitalised patients.

    Science.gov (United States)

    Vélez-Díaz-Pallarés, Manuel; Delgado-Silveira, Eva; Carretero-Accame, María Emilia; Bermejo-Vicedo, Teresa

    2013-01-01

    To identify actions to reduce medication errors in the process of drug prescription, validation and dispensing, and to evaluate the impact of their implementation. A Health Care Failure Mode and Effect Analysis (HFMEA) was supported by a before-and-after medication error study to measure the actual impact on error rate after the implementation of corrective actions in the process of drug prescription, validation and dispensing in wards equipped with computerised physician order entry (CPOE) and unit-dose distribution system (788 beds out of 1080) in a Spanish university hospital. The error study was carried out by two observers who reviewed medication orders on a daily basis to register prescription errors by physicians and validation errors by pharmacists. Drugs dispensed in the unit-dose trolleys were reviewed for dispensing errors. Error rates were expressed as the number of errors for each process divided by the total opportunities for error in that process times 100. A reduction in prescription errors was achieved by providing training for prescribers on CPOE, updating prescription procedures, improving clinical decision support and automating the software connection to the hospital census (relative risk reduction (RRR), 22.0%; 95% CI 12.1% to 31.8%). Validation errors were reduced after optimising time spent in educating pharmacy residents on patient safety, developing standardised validation procedures and improving aspects of the software's database (RRR, 19.4%; 95% CI 2.3% to 36.5%). Two actions reduced dispensing errors: reorganising the process of filling trolleys and drawing up a protocol for drug pharmacy checking before delivery (RRR, 38.5%; 95% CI 14.1% to 62.9%). HFMEA facilitated the identification of actions aimed at reducing medication errors in a healthcare setting, as the implementation of several of these led to a reduction in errors in the process of drug prescription, validation and dispensing.
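
The error-rate and relative-risk-reduction definitions used in the study can be sketched as follows; the counts below are hypothetical, not the study's data.

```python
def error_rate(errors, opportunities):
    """Errors per 100 opportunities for error, as defined in the study."""
    return 100.0 * errors / opportunities

def relative_risk_reduction(rate_before, rate_after):
    """Percentage drop in the error rate relative to the baseline rate."""
    return 100.0 * (rate_before - rate_after) / rate_before

# Hypothetical before/after counts, chosen to mirror the ~22% prescribing RRR
before = error_rate(250, 10000)   # 2.50 errors per 100 opportunities
after = error_rate(195, 10000)    # 1.95 errors per 100 opportunities
print(relative_risk_reduction(before, after))   # ≈ 22.0
```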

  2. Geometric Monte Carlo and black Janus geometries

    Energy Technology Data Exchange (ETDEWEB)

    Bak, Dongsu, E-mail: dsbak@uos.ac.kr [Physics Department, University of Seoul, Seoul 02504 (Korea, Republic of); B.W. Lee Center for Fields, Gravity & Strings, Institute for Basic Sciences, Daejeon 34047 (Korea, Republic of); Kim, Chanju, E-mail: cjkim@ewha.ac.kr [Department of Physics, Ewha Womans University, Seoul 03760 (Korea, Republic of); Kim, Kyung Kiu, E-mail: kimkyungkiu@gmail.com [Department of Physics, Sejong University, Seoul 05006 (Korea, Republic of); Department of Physics, College of Science, Yonsei University, Seoul 03722 (Korea, Republic of); Min, Hyunsoo, E-mail: hsmin@uos.ac.kr [Physics Department, University of Seoul, Seoul 02504 (Korea, Republic of); Song, Jeong-Pil, E-mail: jeong_pil_song@brown.edu [Department of Chemistry, Brown University, Providence, RI 02912 (United States)

    2017-04-10

    We describe an application of the Monte Carlo method to the Janus deformation of the black brane background. We present numerical results for three and five dimensional black Janus geometries with planar and spherical interfaces. In particular, we argue that the 5D geometry with a spherical interface has an application in understanding the finite temperature bag-like QCD model via the AdS/CFT correspondence. The accuracy and convergence of the algorithm are evaluated with respect to the grid spacing. The systematic errors of the method are determined using an exact solution of 3D black Janus. This numerical approach for solving linear problems is unaffected by the initial guess of a trial solution and can handle an arbitrary geometry under various boundary conditions in the presence of source fields.
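
The claim that this class of Monte Carlo methods needs no trial solution can be illustrated with a toy random-walk solver for a discrete linear boundary-value problem (the Laplace equation on a square): the walk simply runs until it hits the boundary, with no initial guess anywhere. The grid and boundary data are made up; the record's actual application is the black Janus background.

```python
import random

random.seed(1)
N = 20                                    # interior nodes satisfy 0 < x, y < N

def boundary_value(x, y):
    """Dirichlet data: top edge held at 1, the other edges at 0."""
    return 1.0 if y == N else 0.0

def estimate(x0, y0, walks=20000):
    """Estimate the discrete harmonic function at (x0, y0) by random walks."""
    total = 0.0
    for _ in range(walks):
        x, y = x0, y0
        while 0 < x < N and 0 < y < N:    # walk until the boundary is hit
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
        total += boundary_value(x, y)     # average of the boundary data hit
    return total / walks

center = estimate(N // 2, N // 2)
print(center)                             # ≈ 0.25 at the center, by symmetry
```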

  3. A Proposal on the Geometry Splitting Strategy to Enhance the Calculation Efficiency in Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Han, Gi Yeong; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of)

    2014-05-15

    In this study, how the geometry splitting strategy affects the calculation efficiency was analyzed, and a geometry splitting method was proposed to increase the calculation efficiency in Monte Carlo simulation. First, an analysis of the neutron distribution characteristics in a deep penetration problem was performed. Then, considering the neutron population distribution, a geometry splitting method was devised. Using the proposed method, the FOMs for benchmark problems were estimated and compared with those of the conventional geometry splitting strategy. The results show that the proposed method can considerably increase the calculation efficiency when using the geometry splitting method. It is expected that the proposed method will contribute to optimizing the computational cost as well as reducing human errors in Monte Carlo simulation. Geometry splitting in Monte Carlo (MC) calculation is one of the most popular variance reduction techniques due to its simplicity, reliability and efficiency. To use geometry splitting, the user must determine the locations of the splitting surfaces and assign the relative importance of each region. Generally, the splitting parameters are decided by the user's experience, so they can be selected ineffectively or erroneously. To prevent this, a common recommendation that eliminates guesswork is to split the geometry evenly; the importances are then estimated over a few iterations so as to preserve the population of particles penetrating each region. However, splitting the geometry evenly can make the calculation inefficient because of changes in the mean free path (MFP) of the particles.
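
Comparisons like the one above are conventionally made through the figure of merit FOM = 1/(R^2 T), where R is the relative error of the tally and T the computing time. A minimal sketch with hypothetical numbers:

```python
def figure_of_merit(rel_error, cpu_time):
    """FOM = 1 / (R^2 * T): it doubles if the variance halves at equal cost."""
    return 1.0 / (rel_error**2 * cpu_time)

# Hypothetical runs: splitting costs 50% more time but halves the error
base = figure_of_merit(0.010, 100.0)    # reference run without splitting
split = figure_of_merit(0.005, 150.0)   # run with geometry splitting
print(split / base)                      # ≈ 2.67: splitting pays off here
```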

  4. A Proposal on the Geometry Splitting Strategy to Enhance the Calculation Efficiency in Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Han, Gi Yeong; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho; Kim, Jong Kyung

    2014-01-01

    In this study, how the geometry splitting strategy affects the calculation efficiency was analyzed, and a geometry splitting method was proposed to increase the calculation efficiency in Monte Carlo simulation. First, an analysis of the neutron distribution characteristics in a deep penetration problem was performed. Then, considering the neutron population distribution, a geometry splitting method was devised. Using the proposed method, the FOMs for benchmark problems were estimated and compared with those of the conventional geometry splitting strategy. The results show that the proposed method can considerably increase the calculation efficiency when using the geometry splitting method. It is expected that the proposed method will contribute to optimizing the computational cost as well as reducing human errors in Monte Carlo simulation. Geometry splitting in Monte Carlo (MC) calculation is one of the most popular variance reduction techniques due to its simplicity, reliability and efficiency. To use geometry splitting, the user must determine the locations of the splitting surfaces and assign the relative importance of each region. Generally, the splitting parameters are decided by the user's experience, so they can be selected ineffectively or erroneously. To prevent this, a common recommendation that eliminates guesswork is to split the geometry evenly; the importances are then estimated over a few iterations so as to preserve the population of particles penetrating each region. However, splitting the geometry evenly can make the calculation inefficient because of changes in the mean free path (MFP) of the particles.

  5. Thermal error analysis and compensation for digital image/volume correlation

    Science.gov (United States)

    Pan, Bing

    2018-02-01

    Digital image/volume correlation (DIC/DVC) rely on the digital images acquired by digital cameras and x-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems, whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating effect or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermal-induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, thus its originality is not claimed. Instead, this paper aims to give a comprehensive overview and more insights of our work on thermal error analysis and compensation for DIC/DVC measurements.
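
The reference sample compensation approach recommended above can be sketched in one line of arithmetic: displacements measured on a stationary reference object can only come from thermal drift of the imaging geometry, so subtracting them corrects the sample field in situ. The numbers below are invented for illustration.

```python
import numpy as np

drift = 0.8                                # px, apparent shift from self-heating
true_disp = np.array([1.9, 2.1, 2.0])      # what the sample actually did
u_sample = true_disp + drift               # DIC measurement on the deforming sample
u_reference = np.zeros(3) + drift          # DIC measurement on a stationary reference
u_corrected = u_sample - u_reference       # thermal error cancels out
print(u_corrected)                         # recovers [1.9, 2.1, 2.0]
```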

  6. An Integrated Signaling-Encryption Mechanism to Reduce Error Propagation in Wireless Communications: Performance Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Olama, Mohammed M [ORNL; Matalgah, Mustafa M [ORNL; Bobrek, Miljko [ORNL

    2015-01-01

    Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
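
As a concrete instance of error correction coding of the kind applied to the encrypted portion, the sketch below implements the classic Hamming(7,4) code, which corrects any single flipped bit in a 7-bit codeword; the record evaluated Hamming and convolutional codes inside a full end-to-end system simulation.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a Hamming(7,4) codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Locate and flip a single erroneous bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3      # syndrome gives the 1-based error position
    if pos:                          # 0 means no error detected
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                         # flip one bit in the channel
print(hamming74_correct(word))       # [1, 0, 1, 1]
```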

  7. Error analysis of the crystal orientations obtained by the dictionary approach to EBSD indexing.

    Science.gov (United States)

    Ram, Farangis; Wright, Stuart; Singh, Saransh; De Graef, Marc

    2017-10-01

    The efficacy of the dictionary approach to Electron Back-Scatter Diffraction (EBSD) indexing was evaluated through the analysis of the error in the retrieved crystal orientations. EBSPs simulated by the Callahan-De Graef forward model were used for this purpose. Patterns were noised, distorted, and binned prior to dictionary indexing. Patterns with a high level of noise, with optical distortions, and with a 25 × 25 pixel size, when the error in projection center was 0.7% of the pattern width and the error in specimen tilt was 0.8°, were indexed with a 0.8° mean error in orientation. The same patterns, but 60 × 60 pixel in size, were indexed by the standard 2D Hough transform based approach with almost the same orientation accuracy. Optimal detection parameters in the Hough space were obtained by minimizing the orientation error. It was shown that if the error in detector geometry can be reduced to 0.1% in projection center and 0.1° in specimen tilt, the dictionary approach can retrieve a crystal orientation with a 0.2° accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. A New Human-Machine Interfaces of Computer-based Procedure System to Reduce the Team Errors in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Kim, Sa Kil; Sim, Joo Hyun; Lee, Hyun Chul

    2016-01-01

    In this study, we identify the emerging types of team errors, especially in the digitalized control rooms of nuclear power plants such as the APR-1400 main control room in Korea. Most work in the nuclear industry is performed by a team of two or more persons. Even though individual errors can be detected and recovered by qualified others and/or a well trained team, it is rather seldom that errors made by the team itself can be easily detected and properly recovered by that team. Note that a team is defined as two or more people who interact appropriately with each other; the team is a dependent aggregate that accomplishes a valuable goal. Organizational errors sometimes increase the likelihood of operator errors through the active failure pathway and, at the same time, enhance the possibility of adverse outcomes through defensive weaknesses. We incorporate crew resource management as a representative approach to deal with the team factors of human errors, and we suggest a set of crew resource management training procedures for unsafe environments where human errors can have devastating effects. We are developing alternative interfaces to guard against team errors when a computer-based procedure system, a representative feature of a digitalized control room, is in use. In this study, we propose new interfaces for the computer-based procedure system to reduce feasible team errors, and an effectiveness test is under way to validate whether the new interfaces can reduce team errors during operation with a computer-based procedure system in a digitalized control room.

  9. A New Human-Machine Interfaces of Computer-based Procedure System to Reduce the Team Errors in Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sa Kil; Sim, Joo Hyun; Lee, Hyun Chul [Korea Atomic Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    In this study, we identify the emerging types of team errors, especially in the digitalized control rooms of nuclear power plants such as the APR-1400 main control room in Korea. Most work in the nuclear industry is performed by a team of two or more persons. Even though individual errors can be detected and recovered by qualified others and/or a well trained team, it is rather seldom that errors made by the team itself can be easily detected and properly recovered by that team. Note that a team is defined as two or more people who interact appropriately with each other; the team is a dependent aggregate that accomplishes a valuable goal. Organizational errors sometimes increase the likelihood of operator errors through the active failure pathway and, at the same time, enhance the possibility of adverse outcomes through defensive weaknesses. We incorporate crew resource management as a representative approach to deal with the team factors of human errors, and we suggest a set of crew resource management training procedures for unsafe environments where human errors can have devastating effects. We are developing alternative interfaces to guard against team errors when a computer-based procedure system, a representative feature of a digitalized control room, is in use. In this study, we propose new interfaces for the computer-based procedure system to reduce feasible team errors, and an effectiveness test is under way to validate whether the new interfaces can reduce team errors during operation with a computer-based procedure system in a digitalized control room.

  10. 75 FR 18514 - Developing Guidance on Naming, Labeling, and Packaging Practices to Reduce Medication Errors...

    Science.gov (United States)

    2010-04-12

    ... packaging designs. Among these measures, FDA agreed that by the end of FY 2010, after public consultation... product names and designing product labels and packaging to reduce medication errors. Four panel... of product packaging design, and costs associated with designing product packaging. Panel 3 will...

  11. Thresholds of surface codes on the general lattice structures suffering biased error and loss

    International Nuclear Information System (INIS)

    Tokunaga, Yuuki; Fujii, Keisuke

    2014-01-01

    A family of surface codes with general lattice structures is proposed. We can control the error tolerances against bit and phase errors asymmetrically by changing the underlying lattice geometries. The surface codes on various lattices are found to be efficient in the sense that their threshold values universally approach the quantum Gilbert-Varshamov bound. We find that the error tolerance of the surface codes depends on the connectivity of the underlying lattices; the error chains on a lattice of lower connectivity are easier to correct. On the other hand, the loss tolerance of the surface codes exhibits an opposite behavior; the logical information on a lattice of higher connectivity has more robustness against qubit loss. As a result, we come upon a fundamental trade-off between error and loss tolerances in the family of surface codes with different lattice geometries

  12. Thresholds of surface codes on the general lattice structures suffering biased error and loss

    Energy Technology Data Exchange (ETDEWEB)

    Tokunaga, Yuuki [NTT Secure Platform Laboratories, NTT Corporation, 3-9-11 Midori-cho, Musashino, Tokyo 180-8585, Japan and Japan Science and Technology Agency, CREST, 5 Sanban-cho, Chiyoda-ku, Tokyo 102-0075 (Japan); Fujii, Keisuke [Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-8531 (Japan)

    2014-12-04

    A family of surface codes with general lattice structures is proposed. We can control the error tolerances against bit and phase errors asymmetrically by changing the underlying lattice geometries. The surface codes on various lattices are found to be efficient in the sense that their threshold values universally approach the quantum Gilbert-Varshamov bound. We find that the error tolerance of the surface codes depends on the connectivity of the underlying lattices; the error chains on a lattice of lower connectivity are easier to correct. On the other hand, the loss tolerance of the surface codes exhibits an opposite behavior; the logical information on a lattice of higher connectivity has more robustness against qubit loss. As a result, we come upon a fundamental trade-off between error and loss tolerances in the family of surface codes with different lattice geometries.

  13. Interventions for reducing medication errors in children in hospital

    NARCIS (Netherlands)

    Maaskant, Jolanda M; Vermeulen, Hester; Apampa, Bugewa; Fernando, Bernard; Ghaleb, Maisoon A; Neubert, Antje; Thayyil, Sudhin; Soe, Aung

    2015-01-01

    BACKGROUND: Many hospitalised patients are affected by medication errors (MEs) that may cause discomfort, harm and even death. Children are at especially high risk of harm as the result of MEs because such errors are potentially more hazardous to them than to adults. Until now, interventions to

  14. Interventions for reducing medication errors in children in hospital

    NARCIS (Netherlands)

    Maaskant, Jolanda M.; Vermeulen, Hester; Apampa, Bugewa; Fernando, Bernard; Ghaleb, Maisoon A.; Neubert, Antje; Thayyil, Sudhin; Soe, Aung

    2015-01-01

    Background Many hospitalised patients are affected by medication errors (MEs) that may cause discomfort, harm and even death. Children are at especially high risk of harm as the result of MEs because such errors are potentially more hazardous to them than to adults. Until now, interventions to

  15. A response matrix method for one-speed discrete ordinates fixed source problems in slab geometry with no spatial truncation error

    International Nuclear Information System (INIS)

    Lydia, Emilio J.; Barros, Ricardo C.

    2011-01-01

    In this paper we describe a response matrix method for one-speed slab-geometry discrete ordinates (SN) neutral particle transport problems that is completely free from spatial truncation errors. The unknowns in the method are the cell-edge angular fluxes of particles. The numerical results generated for these quantities are exactly those obtained from the analytic solution of the SN problem apart from finite arithmetic considerations. Our method is based on a spectral analysis that we perform in the SN equations with scattering inside a discretization cell of the spatial grid set up on the slab. As a result of this spectral analysis, we are able to obtain an expression for the local general solution of the SN equations. With this local general solution, we determine the response matrix and use the prescribed boundary conditions and continuity conditions to sweep across the discretization cells from left to right and from right to left across the slab, until a prescribed convergence criterion is satisfied. (author)
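
The idea of cell-edge fluxes that are exact regardless of mesh size can be illustrated in the simplest possible setting: a purely absorbing slab with a constant source, where the exact single-cell propagation rule is available in closed form, so a sweep reproduces the analytic solution at every cell edge for any number of cells. This sketch omits scattering, which is precisely where the record's spectral analysis and response matrix come in; the slab data below are invented.

```python
import math

sigma, q, mu, L = 2.0, 1.0, 0.7, 5.0   # total cross section, source, direction cosine, slab width

def sweep(n_cells):
    """Sweep left to right using the exact single-cell propagation rule."""
    dx = L / n_cells
    psi = 0.0                                   # vacuum boundary at x = 0
    for _ in range(n_cells):
        att = math.exp(-sigma * dx / mu)        # attenuation across one cell
        psi = psi * att + (q / sigma) * (1.0 - att)   # exact for constant q
    return psi                                  # exiting angular flux at x = L

exact = (q / sigma) * (1.0 - math.exp(-sigma * L / mu))
# One cell and one hundred cells agree with the analytic answer to rounding
print(abs(sweep(1) - exact), abs(sweep(100) - exact))
```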

  16. Error-correcting pairs for a public-key cryptosystem

    International Nuclear Information System (INIS)

    Pellikaan, Ruud; Márquez-Corbella, Irene

    2017-01-01

    Code-based Cryptography (CBC) is a powerful and promising alternative for quantum-resistant cryptography. Indeed, together with lattice-based, multivariate and hash-based cryptography, it is one of the principal available techniques for post-quantum cryptography. CBC was first introduced by McEliece, who designed one of the most efficient public-key encryption schemes, with exceptionally strong security guarantees and other desirable properties, and it still resists attacks based on the Quantum Fourier Transform and Amplitude Amplification. The original proposal, which remains unbroken, was based on binary Goppa codes. Later, several families of codes were proposed in order to reduce the key size; some of these alternatives have already been broken. One of the main requirements of a code-based cryptosystem is a high-performance t-bounded decoding algorithm, which is achieved when the code has a t-error-correcting pair (ECP). Indeed, those McEliece schemes that use GRS, BCH, Goppa and algebraic geometry codes are in fact using an error-correcting pair as a secret key. That is, the security of these public-key cryptosystems is based not only on the inherent intractability of bounded distance decoding but also on the assumption that it is difficult to retrieve an error-correcting pair efficiently. In this paper, the class of codes with a t-ECP is proposed for the McEliece cryptosystem. Moreover, we study the hardness of distinguishing arbitrary codes from those having a t-error-correcting pair. (paper)

  17. Attitude Determination Error Analysis System (ADEAS) mathematical specifications document

    Science.gov (United States)

    Nicholson, Mark; Markley, F.; Seidewitz, E.

    1988-01-01

    The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is described, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.

  18. General Geometry and Geometry of Electromagnetism

    OpenAIRE

    Shahverdiyev, Shervgi S.

    2002-01-01

    It is shown that Electromagnetism creates a geometry different from Riemannian geometry. A General Geometry, including Riemannian geometry as a special case, is constructed. It is proven that the simplest special case of General Geometry is the geometry underlying Electromagnetism. The action for the electromagnetic field and the Maxwell equations are derived from the curvature function of the geometry underlying Electromagnetism. It is also shown that the equation of motion for a particle interacting with electromagnetic...

  19. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case the unisim method was better. Exact formulas, and formulas for the simple toy models, are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
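
The unisim/multisim distinction described above can be illustrated with a toy simulation. The sketch below is illustrative only: the linear observable and the parameter count are invented, and for this linear case both estimators target the same total systematic variance.

```python
import random

random.seed(1)

N_PARAMS = 3

def observable(params):
    # Toy observable: a weighted sum of the systematic parameters
    # (invented for this sketch; any smooth function would do).
    return sum((i + 1) * p for i, p in enumerate(params))

# Unisim: one MC run per parameter, each shifted by one standard
# deviation; the variance contributions are summed in quadrature.
nominal = observable([0.0] * N_PARAMS)
unisim_var = 0.0
for i in range(N_PARAMS):
    shifted = [0.0] * N_PARAMS
    shifted[i] = 1.0  # +1 sigma shift of parameter i
    unisim_var += (observable(shifted) - nominal) ** 2

# Multisim: every MC run varies all parameters at once, drawn from
# their (assumed normal) distributions; the spread of the results
# estimates the overall systematic variance directly.
runs = [observable([random.gauss(0.0, 1.0) for _ in range(N_PARAMS)])
        for _ in range(20000)]
mean = sum(runs) / len(runs)
multisim_var = sum((r - mean) ** 2 for r in runs) / (len(runs) - 1)

print(unisim_var)    # exactly 1² + 2² + 3² = 14 for this linear toy
print(multisim_var)  # ~14, up to MC statistical fluctuation
```

For a linear observable the two agree in expectation; the paper's point concerns how their statistical uncertainties differ when the MC samples themselves are of finite size.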

  20. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case the unisim method was better. Exact formulas, and formulas for the simple toy models, are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  1. Sensitivity of subject-specific models to errors in musculo-skeletal geometry

    NARCIS (Netherlands)

    Carbone, V.; van der Krogt, M.M.; Koopman, H.F.J.M.; Verdonschot, N.

    2012-01-01

    Subject-specific musculo-skeletal models of the lower extremity are an important tool for investigating various biomechanical problems, for instance the results of surgery such as joint replacements and tendon transfers. The aim of this study was to assess the potential effects of errors in

  2. Strategies for reducing basis set superposition error (BSSE) in O/AU and O/Ni

    KAUST Repository

    Shuttleworth, I.G.

    2015-01-01

    © 2015 Elsevier Ltd. All rights reserved. The effect of basis set superposition error (BSSE) and effective strategies for its minimisation have been investigated using the SIESTA-LCAO DFT package. Variation of the energy shift parameter ΔEPAO has been shown to reduce BSSE for bulk Au and Ni and across their oxygenated surfaces. Alternative strategies based on either the expansion or contraction of the basis set have been shown to be ineffective in reducing BSSE. The binding energies for the surface systems obtained using LCAO were compared with BSSE-free plane-wave energies.

  3. Strategies for reducing basis set superposition error (BSSE) in O/AU and O/Ni

    KAUST Repository

    Shuttleworth, I.G.

    2015-11-01

    © 2015 Elsevier Ltd. All rights reserved. The effect of basis set superposition error (BSSE) and effective strategies for its minimisation have been investigated using the SIESTA-LCAO DFT package. Variation of the energy shift parameter ΔEPAO has been shown to reduce BSSE for bulk Au and Ni and across their oxygenated surfaces. Alternative strategies based on either the expansion or contraction of the basis set have been shown to be ineffective in reducing BSSE. The binding energies for the surface systems obtained using LCAO were compared with BSSE-free plane-wave energies.

  4. APPLICATION OF SIX SIGMA METHODOLOGY TO REDUCE MEDICATION ERRORS IN THE OUTPATIENT PHARMACY UNIT: A CASE STUDY FROM THE KING FAHD UNIVERSITY HOSPITAL, SAUDI ARABIA

    Directory of Open Access Journals (Sweden)

    Ahmed Al Kuwaiti

    2016-06-01

    Medication errors affect patient safety and the quality of healthcare. The aim of this study was to analyze the effect of the Six Sigma (DMAIC) methodology in reducing medication errors in the outpatient pharmacy of King Fahd Hospital of the University, Saudi Arabia. It was conducted through the five phases of the Define, Measure, Analyze, Improve, Control (DMAIC) model using various quality tools. The goal was to reduce medication errors in the outpatient pharmacy by 20%. After implementation of the improvement strategies, there was a marked reduction of defects and an improvement in their sigma ratings. In particular, the parts-per-million (PPM) rate of prescription/data-entry errors was reduced from 56,000 to 5,000, and its sigma rating improved from 3.09 to 4.08. This study concluded that the Six Sigma (DMAIC) methodology is effective in reducing medication errors and ensuring patient safety.
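
The sigma ratings quoted can be reproduced from the PPM figures under the conventional 1.5-sigma long-term shift. A minimal check using only the Python standard library (the shift convention is an assumption; the abstract does not state it):

```python
from statistics import NormalDist

def sigma_rating(ppm, shift=1.5):
    """Convert a defect rate in parts per million to a sigma level,
    using the conventional 1.5-sigma long-term shift."""
    defect_rate = ppm / 1_000_000
    return NormalDist().inv_cdf(1 - defect_rate) + shift

print(round(sigma_rating(56_000), 2))  # 3.09, as reported before the project
print(round(sigma_rating(5_000), 2))   # 4.08, as reported after
```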

  5. SimpleGeO - new developments in the interactive creation and debugging of geometries for Monte Carlo simulations

    International Nuclear Information System (INIS)

    Theis, Christian; Feldbaumer, Eduard; Forkel-Wirth, Doris; Jaegerhofer, Lukas; Roesler, Stefan; Vincke, Helmut; Buchegger, Karl Heinz

    2010-01-01

    Nowadays, radiation transport Monte Carlo simulations have become an indispensable tool in various fields of physics. The applications are diversified and range from physics simulations, like detector studies or shielding design, to medical applications. Usually a significant amount of time is spent on the quite cumbersome and often error-prone task of implementing geometries before the actual physics studies can be performed. SimpleGeo is an interactive solid modeler which allows for the interactive creation and visualization of geometries for various Monte Carlo particle transport codes in 3D. Even though visual validation of the geometry is important, it might not reveal subtle errors such as overlapping or undefined regions. These might eventually corrupt the execution of the simulation or even lead to incorrect results, the latter being sometimes hard to identify. In many cases a debugger is provided by the Monte Carlo package, but most often it lacks interactive visual feedback, thus making it hard for the user to localize and correct the error. In this paper we describe the latest developments in SimpleGeo, which include debugging facilities that provide immediate visual feedback and apply various algorithms based on deterministic, Monte Carlo, or Quasi-Monte Carlo methods. These approaches allow for a fast and robust identification of subtle geometry errors that are also marked visually. (author)
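
The overlap and undefined-region errors described can be hunted stochastically: sample random points in the bounding box, classify each point against every region, and flag points owned by zero regions or by more than one. A minimal sketch of that idea (the sphere regions and the API here are invented, not SimpleGeo's):

```python
import random

random.seed(0)

# Each region is a sphere (center, radius). These two overlap, and
# together they do not cover the whole bounding box -- both are
# geometry errors a debugger should flag.
regions = {
    "sphere_a": ((0.0, 0.0, 0.0), 1.0),
    "sphere_b": ((0.5, 0.0, 0.0), 1.0),
}

def owners(point):
    """Names of all regions claiming this point."""
    return [name for name, (c, r) in regions.items()
            if sum((p - q) ** 2 for p, q in zip(point, c)) <= r * r]

overlapping = undefined = 0
for _ in range(10_000):
    p = tuple(random.uniform(-2.0, 2.0) for _ in range(3))
    hits = owners(p)
    if len(hits) > 1:
        overlapping += 1   # claimed by several regions: overlap error
    elif not hits:
        undefined += 1     # claimed by no region: undefined region

print(overlapping > 0, undefined > 0)  # True True: both error types found
```

Quasi-Monte Carlo variants replace `random.uniform` with a low-discrepancy sequence so that small error pockets are found with fewer samples.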

  6. Simultaneous calibration phantom commission and geometry calibration in cone beam CT

    Science.gov (United States)

    Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong

    2017-09-01

    Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method considers the BB centers in the phantom as an optimized parameter in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After utilizing particle swarm optimization, the CBCT geometry and the BB coordinates in the geometry phantom are calibrated accurately and are then directly used for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of reconstructed images using dental CBCT can reach up to 15 line pairs cm⁻¹. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.
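
The particle swarm step used to optimise the BB coordinates can be sketched generically. The objective below is a stand-in quadratic (the paper optimises an evaluation contrast index over BB coordinates instead), and the swarm hyperparameters are typical textbook values, not the paper's:

```python
import random

random.seed(42)

def objective(x, y):
    # Stand-in for the evaluation contrast index; minimum at (1, -2).
    return (x - 1) ** 2 + (y + 2) ** 2

# Minimal global-best particle swarm optimiser in 2-D.
n_particles, n_steps = 20, 200
pos = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(n_particles)]
vel = [[0.0, 0.0] for _ in range(n_particles)]
pbest = [p[:] for p in pos]                         # per-particle best
gbest = min(pbest, key=lambda p: objective(*p))[:]  # swarm-wide best

for _ in range(n_steps):
    for i in range(n_particles):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]                         # inertia
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
            pos[i][d] += vel[i][d]
        if objective(*pos[i]) < objective(*pbest[i]):
            pbest[i] = pos[i][:]
            if objective(*pos[i]) < objective(*gbest):
                gbest = pos[i][:]

print([round(c, 3) for c in gbest])  # converges near the optimum (1, -2)
```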

  7. FMEA: a model for reducing medical errors.

    Science.gov (United States)

    Chiozza, Maria Laura; Ponzetti, Clemente

    2009-06-01

    Patient safety is a management issue, in view of the fact that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, firstly introduced within the aerospace industry in the 1960s. Early applications in the health care industry dating back to the 1990s included critical systems in the development and manufacture of drugs and in the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO), licensed a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review data available on the application of this technique to laboratory medicine. A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, as well as to point-of-care testing (POCT).
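
FMEA scores each failure mode for severity, occurrence, and detectability (conventionally 1-10 each) and multiplies them into the risk priority number. A minimal sketch with invented laboratory failure modes (the scores are illustrative, not from the paper):

```python
# (name, severity, occurrence, detectability), each scored 1-10;
# RPN = S * O * D. All values are invented for illustration.
failure_modes = [
    ("sample mislabelled at phlebotomy", 8, 4, 6),
    ("wrong unit selected in cross-matching", 10, 2, 3),
    ("POCT device out of calibration", 6, 3, 5),
]

def rpn(mode):
    _, s, o, d = mode
    return s * o * d

# Rank failure modes so mitigation effort targets the highest RPN first.
for mode in sorted(failure_modes, key=rpn, reverse=True):
    print(f"RPN {rpn(mode):4d}  {mode[0]}")
```

Re-scoring each mode after a corrective action and recomputing the RPN is what the cited applications report as a significant reduction of the risk priority number.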

  8. Hepatic glucose output in humans measured with labeled glucose to reduce negative errors

    International Nuclear Information System (INIS)

    Levy, J.C.; Brown, G.; Matthews, D.R.; Turner, R.C.

    1989-01-01

    Steele and others have suggested that minimizing changes in glucose specific activity when estimating hepatic glucose output (HGO) during glucose infusions could reduce non-steady-state errors. This approach was assessed in nondiabetic and type II diabetic subjects during constant low dose [27 mumol.kg ideal body wt (IBW)-1.min-1] glucose infusion followed by a 12 mmol/l hyperglycemic clamp. Eight subjects had paired tests with and without labeled infusions. Labeled infusion was used to compare HGO in 11 nondiabetic and 15 diabetic subjects. Whereas unlabeled infusions produced negative values for endogenous glucose output, labeled infusions largely eliminated this error and reduced the dependence of the Steele model on the pool fraction in the paired tests. By use of labeled infusions, 11 nondiabetic subjects suppressed HGO from 10.2 +/- 0.6 (SE) fasting to 0.8 +/- 0.9 mumol.kg IBW-1.min-1 after 90 min of glucose infusion and to -1.9 +/- 0.5 mumol.kg IBW-1.min-1 after 90 min of a 12 mmol/l glucose clamp, but 15 diabetic subjects suppressed only partially from 13.0 +/- 0.9 fasting to 5.7 +/- 1.2 at the end of the glucose infusion and 5.6 +/- 1.0 mumol.kg IBW-1.min-1 in the clamp (P = 0.02, 0.002, and less than 0.001, respectively)

  9. MOCUM: A two-dimensional method of characteristics code based on constructive solid geometry and unstructured meshing for general geometries

    International Nuclear Information System (INIS)

    Yang Xue; Satvat, Nader

    2012-01-01

    Highlight: ► A two-dimensional numerical code based on the method of characteristics is developed. ► The complex arbitrary geometries are represented by constructive solid geometry and decomposed by unstructured meshing. ► Excellent agreement between Monte Carlo and the developed code is observed. ► High efficiency is achieved by parallel computing. - Abstract: A transport theory code MOCUM based on the method of characteristics as the flux solver with an advanced general geometry processor has been developed for two-dimensional rectangular and hexagonal lattice and full core neutronics modeling. In the code, the core structure is represented by the constructive solid geometry that uses regularized Boolean operations to build complex geometries from simple polygons. Arbitrary-precision arithmetic is also used in the process of building geometry objects to eliminate the round-off error from the commonly used double precision numbers. Then, the constructed core frame will be decomposed and refined into a Conforming Delaunay Triangulation to ensure the quality of the meshes. The code is fully parallelized using OpenMP and is verified and validated by various benchmarks representing rectangular, hexagonal, plate type and CANDU reactor geometries. Compared with Monte Carlo and deterministic reference solution, MOCUM results are highly accurate. The mentioned characteristics of the MOCUM make it a perfect tool for high fidelity full core calculation for current and GenIV reactor core designs. The detailed representation of reactor physics parameters can enhance the safety margins with acceptable confidence levels, which lead to more economically optimized designs.
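
Constructive solid geometry of the kind described reduces, at its core, to point-membership predicates composed with Boolean operations. A 2-D toy sketch of that idea (not MOCUM's implementation, which additionally regularizes the operations and uses arbitrary-precision arithmetic):

```python
# Primitives are predicates point -> bool; Boolean operations compose them.

def circle(cx, cy, r):
    return lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= r * r

def half_plane(a, b, c):  # the set a*x + b*y <= c
    return lambda p: a * p[0] + b * p[1] <= c

def union(f, g):     return lambda p: f(p) or g(p)
def intersect(f, g): return lambda p: f(p) and g(p)
def subtract(f, g):  return lambda p: f(p) and not g(p)

# An annulus (outer circle minus inner circle) clipped by a half-plane,
# loosely resembling a truncated fuel-pin cell.
shape = intersect(subtract(circle(0, 0, 2), circle(0, 0, 1)),
                  half_plane(1, 0, 1.5))

print(shape((1.2, 0.0)))  # True: in the annulus, inside the half-plane
print(shape((0.0, 0.0)))  # False: inside the subtracted inner circle
print(shape((1.9, 0.0)))  # False: clipped off by the half-plane
```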

  10. Improved compliance with the World Health Organization Surgical Safety Checklist is associated with reduced surgical specimen labelling errors.

    Science.gov (United States)

    Martis, Walston R; Hannam, Jacqueline A; Lee, Tracey; Merry, Alan F; Mitchell, Simon J

    2016-09-09

    A new approach to administering the surgical safety checklist (SSC) at our institution using wall-mounted charts for each SSC domain coupled with migrated leadership among operating room (OR) sub-teams, led to improved compliance with the Sign Out domain. Since surgical specimens are reviewed at Sign Out, we aimed to quantify any related change in surgical specimen labelling errors. Prospectively maintained error logs for surgical specimens sent to pathology were examined for the six months before and after introduction of the new SSC administration paradigm. We recorded errors made in the labelling or completion of the specimen pot and on the specimen laboratory request form. Total error rates were calculated from the number of errors divided by total number of specimens. Rates from the two periods were compared using a chi square test. There were 19 errors in 4,760 specimens (rate 3.99/1,000) and eight errors in 5,065 specimens (rate 1.58/1,000) before and after the change in SSC administration paradigm (P=0.0225). Improved compliance with administering the Sign Out domain of the SSC can reduce surgical specimen errors. This finding provides further evidence that OR teams should optimise compliance with the SSC.
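
The reported comparison (19 errors in 4,760 specimens vs eight in 5,065, P=0.0225) can be reproduced with a standard 2×2 chi-squared test. A stdlib-only sketch (the Pearson statistic without continuity correction, which is an assumption about the authors' exact procedure):

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic and p-value (1 d.f., no
    continuity correction) for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    cells = [
        (a, (a + b) * (a + c) / n), (b, (a + b) * (b + d) / n),
        (c, (c + d) * (a + c) / n), (d, (c + d) * (b + d) / n),
    ]
    stat = sum((obs - exp) ** 2 / exp for obs, exp in cells)
    p = math.erfc(math.sqrt(stat / 2))  # survival function of chi2(1 d.f.)
    return stat, p

# Rows: before/after the SSC change; columns: error / error-free.
stat, p = chi2_2x2(19, 4760 - 19, 8, 5065 - 8)
print(round(stat, 2), round(p, 3))  # p agrees with the reported 0.0225
```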

  11. Evaluating a medical error taxonomy.

    OpenAIRE

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a stand...

  12. Benefits and risks of using smart pumps to reduce medication error rates: a systematic review.

    Science.gov (United States)

    Ohashi, Kumiko; Dalleur, Olivia; Dykes, Patricia C; Bates, David W

    2014-12-01

    Smart infusion pumps have been introduced to prevent medication errors and have been widely adopted nationally in the USA, though they are not always used in Europe or other regions. Despite widespread usage of smart pumps, intravenous medication errors have not been fully eliminated. Through a systematic review of recent studies and reports regarding smart pump implementation and use, we aimed to identify the impact of smart pumps on error reduction and on the complex process of medication administration, and strategies to maximize the benefits of smart pumps. The medical literature related to the effects of smart pumps on patient safety was searched in PUBMED, EMBASE, and the Cochrane Central Register of Controlled Trials (CENTRAL) (2000-2014), and relevant papers were selected by two researchers. After the literature search, 231 papers were identified and the full texts of 138 articles were assessed for eligibility. Of these, 22 were included after removal of papers that did not meet the inclusion criteria. We assessed both the benefits and negative effects of smart pumps from these studies. One of the benefits of using smart pumps was intercepting errors such as the wrong rate, wrong dose, and pump setting errors. Other benefits include reduction of adverse drug event rates, practice improvements, and cost effectiveness. Meanwhile, the current issues or negative effects related to smart pumps were lower compliance rates in using them, the overriding of soft alerts, non-intercepted errors, and the possibility of selecting the wrong drug library. The literature suggests that smart pumps reduce but do not eliminate programming errors. Although the hard limits of a drug library play a main role in intercepting medication errors, soft limits were still not as effective as hard limits because of high override rates. Compliance in using smart pumps is key to effectively preventing errors. Opportunities for improvement include upgrading drug

  13. Automation in the Teaching of Descriptive Geometry and CAD. High-Level CAD Templates Using Script Languages

    Science.gov (United States)

    Moreno, R.; Bazán, A. M.

    2017-10-01

    The main purpose of this work is to study improvements to the learning method of technical drawing and descriptive geometry through exercises with traditional techniques that are usually solved manually by applying automated processes assisted by high-level CAD templates (HLCts). Given that an exercise with traditional procedures can be solved, detailed step by step in technical drawing and descriptive geometry manuals, CAD applications allow us to do the same and generalize it later, incorporating references. Traditional teachings have become obsolete and current curricula have been relegated. However, they can be applied in certain automation processes. The use of geometric references (using variables in script languages) and their incorporation into HLCts allows the automation of drawing processes. Instead of repeatedly creating similar exercises or modifying data in the same exercises, users should be able to use HLCts to generate future modifications of these exercises. This paper introduces the automation process when generating exercises based on CAD script files, aided by parametric geometry calculation tools. The proposed method allows us to design new exercises without user intervention. The integration of CAD, mathematics, and descriptive geometry facilitates their joint learning. Automation in the generation of exercises not only saves time but also increases the quality of the statements and reduces the possibility of human error.
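
The generation of exercise variants from a parameterised script can be sketched generically; the template syntax and the exercise below are invented for illustration (real HLCts are CAD-package script files):

```python
import random

# A hypothetical CAD-script template; {seed} and the coordinates are
# the geometric references that vary between exercise variants.
TEMPLATE = """\
; descriptive-geometry exercise, variant {seed}
; task: find the true length of segment AB from its projections
POINT A {ax:.1f} {ay:.1f} {az:.1f}
POINT B {bx:.1f} {by:.1f} {bz:.1f}
LINE A B
"""

def make_exercise(seed):
    rng = random.Random(seed)  # reproducible geometry per variant
    coords = {k: rng.uniform(0.0, 50.0)
              for k in ("ax", "ay", "az", "bx", "by", "bz")}
    return TEMPLATE.format(seed=seed, **coords)

# Three distinct, automatically generated variants of the same exercise.
for s in range(3):
    print(make_exercise(s))
```

Because each variant derives from a seed, any student's exercise (and its worked solution) can be regenerated deterministically without user intervention.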

  14. A method for optical ground station reduce alignment error in satellite-ground quantum experiments

    Science.gov (United States)

    He, Dong; Wang, Qiang; Zhou, Jian-Wei; Song, Zhi-Jun; Zhong, Dai-Jun; Jiang, Yu; Liu, Wan-Sheng; Huang, Yong-Mei

    2018-03-01

    A satellite dedicated to quantum science experiments was developed and successfully launched from Jiuquan, China, on August 16, 2016. Two new optical ground stations (OGSs) were built to cooperate with the satellite in satellite-ground quantum experiments. An OGS corrects its pointing direction for satellite trajectory error using its coarse tracking system and the uplink beacon sight; the alignment accuracy between the fine-tracking CCD and the uplink beacon optical axis therefore determines whether the beacon can cover the quantum satellite at all times as it passes over the OGSs. Unfortunately, when we tested the specifications of the OGSs, because the coarse tracking optical systems were commercial telescopes, the position of the target in the coarse CCD shifted by up to 600 μrad as the elevation angle changed. In this paper, a method for reducing the alignment error between the beacon beam and the fine-tracking CCD is proposed. Firstly, the OGS fits a curve of target position in the coarse CCD against elevation angle. Secondly, the OGS fits a curve of hexapod secondary mirror position against elevation angle. Thirdly, when tracking the satellite, the fine-tracking error is unloaded onto the real-time zero-point position of the coarse CCD computed from the first calibration curve, while the positions of the hexapod secondary mirror are adjusted according to the second calibration curve. Finally, experimental results are presented, showing that the alignment error is less than 50 μrad.
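
The correction scheme described (calibrate target position against elevation angle, then apply the calibrated zero point while tracking) can be sketched as a calibration lookup with interpolation. The calibration numbers below are invented; only the 600 μrad span echoes the abstract:

```python
from bisect import bisect_left

# Calibration pass: measured target position in the coarse CCD (µrad)
# at several elevation angles (degrees). Values are invented.
elevations = [10, 30, 50, 70, 90]
offsets = [0.0, 150.0, 320.0, 480.0, 600.0]

def zero_point(elev):
    """Calibrated CCD zero-point offset at an arbitrary elevation,
    linearly interpolated between calibration samples."""
    if elev <= elevations[0]:
        return offsets[0]
    if elev >= elevations[-1]:
        return offsets[-1]
    i = bisect_left(elevations, elev)
    e0, e1 = elevations[i - 1], elevations[i]
    o0, o1 = offsets[i - 1], offsets[i]
    return o0 + (o1 - o0) * (elev - e0) / (e1 - e0)

# While tracking, the fine-tracking error is unloaded onto this
# elevation-dependent zero point rather than a fixed CCD centre.
print(zero_point(40.0))  # 235.0, midway between the 30 and 50 samples
```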

  15. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. 
In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  16. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  17. Reducing waste and errors: piloting lean principles at Intermountain Healthcare.

    Science.gov (United States)

    Jimmerson, Cindy; Weber, Dorothy; Sobek, Durward K

    2005-05-01

    The Toyota Production System (TPS), based on industrial engineering principles and operational innovations, is used to achieve waste reduction and efficiency while increasing product quality. Several key tools and principles, adapted to health care, have proved effective in improving hospital operations. Value Stream Maps (VSMs), which represent the key people, material, and information flows required to deliver a product or service, distinguish between value-adding and non-value-adding steps. The one-page Problem-Solving A3 Report guides staff through a rigorous and systematic problem-solving process. PILOT PROJECT at INTERMOUNTAIN HEALTHCARE: In a pilot project, participants made many improvements, ranging from simple changes implemented immediately (for example, heart monitor paper not available when a patient presented with a dysrhythmia) to larger projects involving patient or information flow issues across multiple departments. Most of the improvements required little or no investment and reduced significant amounts of wasted time for front-line workers. In one unit, turnaround time for pathologist reports from an anatomical pathology lab was reduced from five to two days. TPS principles and tools are applicable to an endless variety of processes and work settings in health care and can be used to address critical challenges such as medical errors, escalating costs, and staffing shortages.

  18. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    Science.gov (United States)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, which have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, which have about 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error, which can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to assess this error. We find we can decrease the global temperature trend by about 0.07 K/decade. In addition, there are systematic time-dependent errors present in the data that are introduced by the drift in the satellite orbital geometry. This drift gives rise to errors through the diurnal cycle in temperature and through drift-related changes in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend.
In one path the

  19. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one; this reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
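    The trade-off between the two methods can be illustrated with a toy linear model (the sensitivity values below are invented for illustration): in a linear model the unisim estimate of the total systematic variance is exact, while the multisim estimate converges to it as the number of MC runs grows.

```python
import random

random.seed(0)

# Toy linear model: observable = base + sum(c_i * s_i), where each
# systematic parameter s_i has unit standard deviation.
SENSITIVITIES = [0.5, 0.3, 0.2]  # assumed values, for illustration only
BASE = 10.0

def observable(params):
    return BASE + sum(c * s for c, s in zip(SENSITIVITIES, params))

def unisim_variance():
    # Vary one parameter at a time by +1 sigma; the shift in the observable
    # estimates that parameter's contribution. The total systematic
    # variance is the sum of squared shifts.
    nominal = observable([0.0] * len(SENSITIVITIES))
    var = 0.0
    for i in range(len(SENSITIVITIES)):
        shifted = [0.0] * len(SENSITIVITIES)
        shifted[i] = 1.0
        var += (observable(shifted) - nominal) ** 2
    return var

def multisim_variance(n_runs=20000):
    # Vary all parameters at once, each drawn from N(0, 1); the sample
    # variance of the observable estimates the total systematic variance.
    values = []
    for _ in range(n_runs):
        params = [random.gauss(0.0, 1.0) for _ in SENSITIVITIES]
        values.append(observable(params))
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

exact = sum(c * c for c in SENSITIVITIES)  # 0.38 for the values above
```

    In this linear setting the unisim result equals the exact sum of squared sensitivities, and the multisim result approaches it with MC statistical scatter, mirroring the regimes compared in the abstract.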

  20. Three-dimensional ray-tracing model for the study of advanced refractive errors in keratoconus.

    Science.gov (United States)

    Schedin, Staffan; Hallberg, Per; Behndig, Anders

    2016-01-20

    We propose a numerical three-dimensional (3D) ray-tracing model for the analysis of advanced corneal refractive errors. The 3D modeling was based on measured corneal elevation data by means of Scheimpflug photography. A mathematical description of the measured corneal surfaces from a keratoconus (KC) patient was used for the 3D ray tracing, based on Snell's law of refraction. A model of a commercial intraocular lens (IOL) was included in the analysis. By modifying the posterior IOL surface, it was shown that the imaging quality could be significantly improved. The RMS values were reduced by approximately 50% close to the retina, both for on- and off-axis geometries. The 3D ray-tracing model can constitute a basis for simulation of customized IOLs that are able to correct the advanced, irregular refractive errors in KC.
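    The core geometric operation of such a ray tracer, refraction at a surface via the vector form of Snell's law, can be sketched as follows; the refractive indices and ray direction below are illustrative assumptions, not values from the study.

```python
import math

def refract(incident, normal, n1, n2):
    """Refract a unit direction vector at a surface with unit normal
    pointing against the incident ray, using the vector form of
    Snell's law; returns None on total internal reflection."""
    cos_i = -sum(i * n for i, n in zip(incident, normal))
    ratio = n1 / n2
    sin2_t = ratio * ratio * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(ratio * i + (ratio * cos_i - cos_t) * n
                 for i, n in zip(incident, normal))

# Illustrative values only: a ray incident at 30 degrees from air
# (n = 1.0) onto a cornea-like medium (n = 1.376).
ray = (0.0, math.sin(math.radians(30)), -math.cos(math.radians(30)))
surface_normal = (0.0, 0.0, 1.0)
bent = refract(ray, surface_normal, 1.0, 1.376)
# Snell's law holds componentwise: n1*sin(theta_i) == n2*sin(theta_t).
```

    A full tracer of the kind described would apply this operation at each measured corneal surface and at both IOL surfaces, accumulating ray intersections near the retina.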

  1. Probabilistic linkage to enhance deterministic algorithms and reduce data linkage errors in hospital administrative data.

    Science.gov (United States)

    Hagger-Johnson, Gareth; Harron, Katie; Goldstein, Harvey; Aldridge, Robert; Gilbert, Ruth

    2017-06-30

    BACKGROUND: The pseudonymisation algorithm used to link together episodes of care belonging to the same patients in England (HESID) has never undergone any formal evaluation to determine the extent of data linkage error. The objective was to quantify improvements in linkage accuracy from adding probabilistic linkage to existing deterministic HESID algorithms. We studied inpatient admissions to NHS hospitals in England (Hospital Episode Statistics, HES) over 17 years (1998 to 2015) for a sample of patients (born on the 13th/28th of months in 1992/1998/2005/2012). We compared the existing deterministic algorithm with one that included an additional probabilistic step, in relation to a reference standard created using enhanced probabilistic matching with additional clinical and demographic information. Missed and false matches were quantified and the impact on estimates of hospital readmission within one year was determined. HESID produced a high missed match rate, improving over time (8.6% in 1998 to 0.4% in 2015). Missed matches were more common for ethnic minorities, those living in areas of high socio-economic deprivation, foreign patients and those with 'no fixed abode'. Estimates of the readmission rate were biased for several patient groups owing to missed matches; the probabilistic step reduced this bias for nearly all groups. CONCLUSION: Probabilistic linkage of HES reduced missed matches and bias in estimated readmission rates, with clear implications for commissioning, service evaluation and performance monitoring of hospitals. The existing algorithm should be modified to address data linkage error, and a retrospective update of the existing data would address existing linkage errors and their implications.
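    A probabilistic step of the kind added here is often built on Fellegi-Sunter-style match weights; the following is a minimal sketch, where the identifier fields and the m/u agreement probabilities are illustrative assumptions, not values from the study.

```python
import math

# For each identifier, compare the probability of agreement among true
# matches (m) with that among random non-matched pairs (u). Agreement
# adds log2(m/u) to the score; disagreement adds log2((1-m)/(1-u)).
FIELDS = {
    # field: (m probability, u probability) -- invented for illustration
    "date_of_birth": (0.97, 0.01),
    "sex":           (0.99, 0.50),
    "postcode":      (0.90, 0.001),
}

def match_weight(agreements):
    score = 0.0
    for field, (m, u) in FIELDS.items():
        if agreements[field]:
            score += math.log2(m / u)
        else:
            score += math.log2((1 - m) / (1 - u))
    return score

strong = match_weight({"date_of_birth": True, "sex": True, "postcode": True})
weak = match_weight({"date_of_birth": False, "sex": True, "postcode": False})
# A deterministic step accepts only exact agreement; a probabilistic step
# accepts pairs whose total weight exceeds a chosen threshold, recovering
# matches that a single recording error would otherwise break.
```

    Thresholding the total weight, rather than requiring exact agreement on every field, is what reduces the missed-match rate for records with minor identifier discrepancies.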

  2. Reducing Technology-Induced Errors: Organizational and Health Systems Approaches.

    Science.gov (United States)

    Borycki, Elizabeth M; Senthriajah, Yalini; Kushniruk, Andre W; Palojoki, Sari; Saranto, Kaija; Takeda, Hiroshi

    2016-01-01

    Technology-induced errors are a growing concern for health care organizations. Such errors arise from the interaction between health care and information technology deployed in complex settings and contexts. As the number of health information technologies used to provide patient care rises, so will the need to develop ways to improve the quality and safety of the technology that we use. The objective of the panel is to describe varying approaches to improving software safety from an organizational and health systems perspective. We define what a technology-induced error is. Then, we discuss how software design and testing can be used to improve health information technologies. This discussion is followed by work in the area of monitoring and reporting at a health district and national level. Lastly, we draw on the quality, safety and resilience literature. The target audience for this work is nursing and health informatics researchers, practitioners, administrators, policy makers and students.

  3. Geometries

    CERN Document Server

    Sossinsky, A B

    2012-01-01

    The book is an innovative modern exposition of geometry, or rather, of geometries; it is the first textbook in which Felix Klein's Erlangen Program (the action of transformation groups) is systematically used as the basis for defining various geometries. The course of study presented is dedicated to the proposition that all geometries are created equal--although some, of course, remain more equal than others. The author concentrates on several of the more distinguished and beautiful ones, which include what he terms "toy geometries", the geometries of Platonic bodies, discrete geometries, and classical continuous geometries. The text is based on first-year semester course lectures delivered at the Independent University of Moscow in 2003 and 2006. It is by no means a formal algebraic or analytic treatment of geometric topics, but rather, a highly visual exposition containing upwards of 200 illustrations. The reader is expected to possess a familiarity with elementary Euclidean geometry, albeit those lacking t...

  4. Accounting for response misclassification and covariate measurement error improves power and reduces bias in epidemiologic studies.

    Science.gov (United States)

    Cheng, Dunlei; Branscum, Adam J; Stamey, James D

    2010-07-01

    To quantify the impact of ignoring misclassification of a response variable and measurement error in a covariate on statistical power, and to develop software for sample size and power analysis that accounts for these flaws in epidemiologic data. A Monte Carlo simulation-based procedure is developed to illustrate the differences in design requirements and inferences between analytic methods that properly account for misclassification and measurement error and those that do not, in regression models for cross-sectional and cohort data. We found that failure to account for these flaws in epidemiologic data can lead to a substantial reduction in statistical power, over 25% in some cases. The proposed method substantially reduced bias, by up to a ten-fold margin, compared to naive estimates obtained by ignoring misclassification and mismeasurement. We recommend as routine practice that researchers account for errors in measurement of both response and covariate data when determining sample size, performing power calculations, or analyzing data from epidemiological studies. 2010 Elsevier Inc. All rights reserved.
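    The power loss from response misclassification can be illustrated with a small Monte Carlo sketch in the spirit of (but not reproducing) the authors' procedure; the sensitivity/specificity values, prevalences and group sizes below are invented for illustration.

```python
import random

random.seed(1)

def observed_count(p, n, sensitivity, specificity):
    """Number of positive test results when the true binary response has
    prevalence p but is measured with imperfect sensitivity/specificity."""
    count = 0
    for _ in range(n):
        truth = random.random() < p
        if truth:
            count += random.random() < sensitivity
        else:
            count += random.random() > specificity
    return count

def simulate_power(n_per_group, p0, p1, sensitivity, specificity, reps=2000):
    """Monte Carlo power of a two-sample z-test for a difference in
    proportions under response misclassification (1.0/1.0 = none)."""
    hits = 0
    for _ in range(reps):
        x0 = observed_count(p0, n_per_group, sensitivity, specificity)
        x1 = observed_count(p1, n_per_group, sensitivity, specificity)
        ph0, ph1 = x0 / n_per_group, x1 / n_per_group
        pbar = (x0 + x1) / (2 * n_per_group)
        se = (2 * pbar * (1 - pbar) / n_per_group) ** 0.5
        if se > 0 and abs(ph1 - ph0) / se > 1.96:
            hits += 1
    return hits / reps

ideal = simulate_power(200, 0.10, 0.20, 1.0, 1.0)
noisy = simulate_power(200, 0.10, 0.20, 0.85, 0.90)
# Misclassification attenuates the observed group difference and adds
# variance, so the naive design loses power relative to the ideal case.
```

    Running the same design through both settings makes the abstract's point concrete: a study sized under the assumption of perfect measurement can be substantially underpowered in practice.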

  5. Impact of Educational Activities in Reducing Pre-Analytical Laboratory Errors: A quality initiative.

    Science.gov (United States)

    Al-Ghaithi, Hamed; Pathare, Anil; Al-Mamari, Sahimah; Villacrucis, Rodrigo; Fawaz, Naglaa; Alkindi, Salam

    2017-08-01

    Pre-analytic errors during diagnostic laboratory investigations can lead to increased patient morbidity and mortality. This study aimed to ascertain the effect of educational nursing activities on the incidence of pre-analytical errors resulting in non-conforming blood samples. This study was conducted between January 2008 and December 2015. All specimens received at the Haematology Laboratory of the Sultan Qaboos University Hospital, Muscat, Oman, during this period were prospectively collected and analysed. Similar data from 2007 were collected retrospectively and used as a baseline for comparison. Non-conforming samples were defined as clotted samples, haemolysed samples, use of the wrong anticoagulant, insufficient quantities of blood collected, incorrect/lack of labelling on a sample or lack of delivery of a sample in spite of a sample request. From 2008 onwards, multiple educational training activities directed at the hospital nursing staff and nursing students primarily responsible for blood collection were implemented on a regular basis. After initiating corrective measures in 2008, a progressive reduction in the percentage of non-conforming samples was observed from 2009 onwards. Despite a 127.84% increase in the total number of specimens received, there was a significant reduction in non-conforming samples from 0.29% in 2007 to 0.07% in 2015, an improvement of 75.86%. These educational activities, directed primarily towards hospital nursing staff, had a positive impact on the quality of laboratory specimens by significantly reducing pre-analytical errors.

  6. Field error reduction experiment on the REPUTE-1 RFP device

    International Nuclear Information System (INIS)

    Toyama, H.; Shinohara, S.; Yamagishi, K.

    1989-01-01

    The vacuum chamber of the RFP device REPUTE-1 is a welded structure using 18 sets of 1 mm thick Inconel bellows (inner minor radius 22 cm) and 2.4 mm thick port segments arranged in toroidal geometry as shown in Fig. 1. The vacuum chamber is surrounded by 5 mm thick stainless steel shells. The time constant of the shell is 1 ms for vertical field penetration. The pulse length in REPUTE-1 is so far 3.2 ms (about 3 times longer than shell skin time). The port bypass plates have been attached as shown in Fig. 2 to reduce field errors so that the pulse length becomes longer and the loop voltage becomes lower. (author) 5 refs., 4 figs

  7. Reducing Diagnostic Errors through Effective Communication: Harnessing the Power of Information Technology

    Science.gov (United States)

    Naik, Aanand Dinkar; Rao, Raghuram; Petersen, Laura Ann

    2008-01-01

    Diagnostic errors are poorly understood despite being a frequent cause of medical errors. Recent efforts have aimed to advance the "basic science" of diagnostic error prevention by tracing errors to their most basic origins. Although a refined theory of diagnostic error prevention will take years to formulate, we focus on communication breakdown, a major contributor to diagnostic errors and an increasingly recognized preventable factor in medical mishaps. We describe a comprehensive framework that integrates the potential sources of communication breakdowns within the diagnostic process and identifies vulnerable steps in the diagnostic process where various types of communication breakdowns can precipitate error. We then discuss potential information technology-based interventions that may have efficacy in preventing one or more forms of these breakdowns. These possible intervention strategies include using new technologies to enhance communication between health providers and health systems, improve patient involvement, and facilitate management of information in the medical record. PMID:18373151

  8. Preventing treatment errors in radiotherapy by identifying and evaluating near misses and actual incidents

    LENUS (Irish Health Repository)

    Holmberg, Ola

    2002-06-01

    When preparing radiation treatment, the prescribed dose and irradiation geometry must be translated into physical machine parameters. An error in the calculations or machine settings can negatively affect the intended treatment outcome. Analysing incidents originating in the treatment preparation chain makes it possible to find weak links and prevent treatment errors. The aim of this work is to study the effectiveness of a multilayered error prevention system by analysing both near misses and actual treatment errors.

  9. Thermoelectric Cooling-Aided Bead Geometry Regulation in Wire and Arc-Based Additive Manufacturing of Thin-Walled Structures

    Directory of Open Access Journals (Sweden)

    Fang Li

    2018-01-01

    Wire and arc-based additive manufacturing (WAAM) is a rapidly developing technology which employs a welding arc to melt metal wire for additive manufacturing purposes. During WAAM of thin-walled structures, as the wall height increases, the heat dissipation to the substrate slows down gradually, and so does the solidification of the molten pool, leading to variation of the bead geometry. Though gradually reducing the heat input by adjusting the process parameters can alleviate this issue, as suggested by previous studies, it relies on experience to a large extent and inevitably sacrifices the deposition rate because the wire feed rate is directly coupled with the heat input. This study introduces for the first time an in-process active cooling system based on thermoelectric cooling technology into WAAM, which aims to eliminate the difference in heat dissipation between upper and lower layers. The case study shows that, with the aid of thermoelectric cooling, the bead width error is reduced by 56.8%, the total fabrication time is reduced by 60.9%, and the average grain size is refined by 25%. The proposed technique provides new insight into bead geometry regulation during WAAM with various benefits in terms of geometric accuracy, productivity, and microstructure.

  10. Checklist Usage as a Guidance on Read-Back Reducing the Potential Risk of Medication Error

    Directory of Open Access Journals (Sweden)

    Ida Bagus N. Maharjana

    2014-06-01

    Hospitals, as the last line of health services, must provide quality care oriented towards patient safety, including the responsibility to prevent medication errors. Effective collaboration and communication between the professions are needed to achieve patient safety, and read-back is one way of achieving effective communication. This was a before-after study using the PDCA TQM approach. The samples were the medication charts in patient medical records in the 3rd week of May (before) and the 3rd week of July (after) 2013. The treatment used the checklist, with doctors and nurses asked to take 2 minutes to read back after their joint visit. We obtained 57 samples (before) and 64 samples (after). Before the intervention, 45.54% of medication charts in patient medical records were incomplete, carrying a potential risk of medication error; after treatment with the read-back checklist for 10 weeks this fell to 10.17%, with 77.78% achievement based on the PDCA TQM approach. Use of the checklist as guidance for read-back, as a form of effective communication, can reduce incomplete drug records in medical records that carry a potential risk of medication error, from 45.54% to 10.17%.

  11. Parameterized combinatorial geometry modeling in Moritz

    International Nuclear Information System (INIS)

    Van Riper, K.A.

    2005-01-01

    We describe the use of named variables as surface and solid body coefficients in the Moritz geometry editing program. Variables can also be used as material numbers, cell densities, and transformation values. A variable is defined as a constant or an arithmetic combination of constants and other variables. A variable reference, such as in a surface coefficient, can be a single variable or an expression containing variables and constants. Moritz can read and write geometry models in MCNP and ITS ACCEPT format; support for other codes will be added. The geometry can be saved with either the variables in place, for modifying the models in Moritz, or with the variables evaluated for use in the transport codes. A program window shows a list of variables and provides fields for editing them. Surface coefficients and other values that use a variable reference are shown in a distinctive style on object property dialogs; associated buttons show fields for editing the reference. We discuss our use of variables in defining geometry models for shielding studies in PET clinics. When a model is parameterized through the use of variables, changes such as room dimensions, shielding layer widths, and cell compositions can be quickly achieved by changing a few numbers without requiring knowledge of the input syntax for the transport code or the tedious and error prone work of recalculating many surface or solid body coefficients. (author)
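    The variable mechanism described above can be sketched as a small expression evaluator; this is a hypothetical illustration, not Moritz code, and the variable names and values are invented.

```python
import re

# Named geometry variables: each is a constant or an arithmetic expression
# over other variables, resolved recursively before being written out as
# plain numbers for a transport code input deck.
VARIABLES = {
    "room_w": "600",
    "wall_t": "30",
    "shield_t": "wall_t + 5",
    "outer_w": "room_w + 2 * shield_t",
}

def evaluate(name, defs, seen=None):
    """Evaluate one variable, substituting referenced variables by their
    evaluated values; raises on circular definitions."""
    seen = set() if seen is None else seen
    if name in seen:
        raise ValueError("circular definition: " + name)
    seen.add(name)

    def substitute(match):
        token = match.group(0)
        if token in defs:
            return repr(evaluate(token, defs, set(seen)))
        return token

    resolved = re.sub(r"[A-Za-z_]\w*", substitute, defs[name])
    # eval over arithmetic only; a real implementation would parse properly.
    return eval(resolved)

print(evaluate("outer_w", VARIABLES))  # 600 + 2 * (30 + 5) = 670
```

    Changing one constant (say, `wall_t`) automatically propagates to every surface coefficient that references it, which is the workflow benefit the abstract describes for shielding studies.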

  12. Geometry

    CERN Document Server

    Prasolov, V V

    2015-01-01

    This book provides a systematic introduction to various geometries, including Euclidean, affine, projective, spherical, and hyperbolic geometries. Also included is a chapter on infinite-dimensional generalizations of Euclidean and affine geometries. A uniform approach to different geometries, based on Klein's Erlangen Program is suggested, and similarities of various phenomena in all geometries are traced. An important notion of duality of geometric objects is highlighted throughout the book. The authors also include a detailed presentation of the theory of conics and quadrics, including the theory of conics for non-Euclidean geometries. The book contains many beautiful geometric facts and has plenty of problems, most of them with solutions, which nicely supplement the main text. With more than 150 figures illustrating the arguments, the book can be recommended as a textbook for undergraduate and graduate-level courses in geometry.

  13. Presentation of geometries and transient results of TRAC-calculations

    International Nuclear Information System (INIS)

    Lutz, A.; Lang, U.; Ruehle, R.

    1985-02-01

    The computer code TRAC is used to analyze the transient behaviour of nuclear reactors. The input of a TRAC calculation, as well as the produced result files, serves as the basis for graphical presentation of the geometries and transient results. This supports the search for errors during input generation and aids the understanding of complex processes through dynamic, colour presentation of calculational results. (orig.) [de]

  14. The use of ionospheric tomography and elevation masks to reduce the overall error in single-frequency GPS timing applications

    Science.gov (United States)

    Rose, Julian A. R.; Tong, Jenna R.; Allain, Damien J.; Mitchell, Cathryn N.

    2011-01-01

    Signals from Global Positioning System (GPS) satellites at the horizon or at low elevations are often excluded from a GPS solution because they experience considerable ionospheric delays and multipath effects. Their exclusion can degrade the overall satellite geometry for the calculations, resulting in greater errors; an effect known as the Dilution of Precision (DOP). In contrast, signals from high elevation satellites experience less ionospheric delays and multipath effects. The aim is to find a balance in the choice of elevation mask, to reduce the propagation delays and multipath whilst maintaining good satellite geometry, and to use tomography to correct for the ionosphere and thus improve single-frequency GPS timing accuracy. GPS data, collected from a global network of dual-frequency GPS receivers, have been used to produce four GPS timing solutions, each with a different ionospheric compensation technique. One solution uses a 4D tomographic algorithm, Multi-Instrument Data Analysis System (MIDAS), to compensate for the ionospheric delay. Maps of ionospheric electron density are produced and used to correct the single-frequency pseudorange observations. This method is compared to a dual-frequency solution and two other single-frequency solutions: one does not include any ionospheric compensation and the other uses the broadcast Klobuchar model. Data from the solar maximum year 2002 and October 2003 have been investigated to display results when the ionospheric delays are large and variable. The study focuses on Europe and results are produced for the chosen test site, VILL (Villafranca, Spain). The effects of excluding all of the GPS satellites below various elevation masks, ranging from 5° to 40°, on timing solutions for fixed (static) and mobile (moving) situations are presented. The greatest timing accuracies when using the fixed GPS receiver technique are obtained by using a 40° mask, rather than a 5° mask. The mobile GPS timing solutions are most
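    The trade-off between masking out error-prone low-elevation signals and preserving satellite geometry can be sketched by computing the geometric dilution of precision, GDOP = sqrt(trace((G^T G)^-1)), for a hypothetical constellation before and after masking; the azimuth/elevation values below are invented for illustration.

```python
import math

def gdop(sats):
    """Geometric dilution of precision from (azimuth, elevation) pairs in
    degrees, using the standard 4-column geometry matrix
    [east, north, up, clock]."""
    G = []
    for az, el in sats:
        a, e = math.radians(az), math.radians(el)
        G.append([math.cos(e) * math.sin(a), math.cos(e) * math.cos(a),
                  math.sin(e), 1.0])
    # Normal matrix A = G^T G (4x4), inverted by Gauss-Jordan elimination.
    A = [[sum(G[k][i] * G[k][j] for k in range(len(G))) for j in range(4)]
         for i in range(4)]
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(4)]
           for i, row in enumerate(A)]
    for col in range(4):
        pivot = max(range(col, 4), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        div = aug[col][col]
        aug[col] = [x / div for x in aug[col]]
        for r in range(4):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return math.sqrt(sum(aug[i][4 + i] for i in range(4)))

# Hypothetical sky view: four low satellites spread in azimuth plus four
# high satellites near zenith.
sky = [(0, 10), (90, 12), (180, 8), (270, 15),
       (30, 70), (120, 65), (210, 75), (300, 80)]
low_mask = [s for s in sky if s[1] >= 5]    # keeps all eight satellites
high_mask = [s for s in sky if s[1] >= 40]  # keeps only the four high ones
# The high mask removes ionosphere- and multipath-prone signals but
# clusters the remaining satellites near zenith, inflating the GDOP.
```

    This is exactly the balance the study probes: the mask that minimizes propagation errors is not the mask that minimizes the geometric error amplification.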

  15. Solving the neutron diffusion equation on combinatorial geometry computational cells for reactor physics calculations

    International Nuclear Information System (INIS)

    Azmy, Y. Y.

    2004-01-01

    An approach is developed for solving the neutron diffusion equation on combinatorial geometry computational cells, that is, computational cells composed via combinatorial operations involving simple-shaped component cells. The only constraint on the component cells from which the combinatorial cells are assembled is that they possess a legitimate discretization of the underlying diffusion equation. We use the Finite Difference (FD) approximation of the x,y-geometry diffusion equation in this work. Performing the same combinatorial operations involved in composing the combinatorial cell on these discrete-variable equations yields equations that employ new discrete variables defined only on the combinatorial cell's volume and faces. The only approximation involved in this process, beyond the truncation error committed in discretizing the diffusion equation over each component cell, is a consistent-order Legendre series expansion. Preliminary results for simple configurations establish the accuracy of the combinatorial geometry solution compared to straight FD as the system dimensions decrease. Furthermore, numerical results validate the consistent Legendre-series expansion order by illustrating the second-order accuracy of the combinatorial geometry solution, the same as standard FD. Nevertheless, the magnitude of the error for the new approach is larger than FD's, since it incorporates the additional truncated series approximation. (authors)

  16. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: when the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications ABS error is the more natural, but SQ error is mathematically more tractable, so it is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error at each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only do the two metrics measure the characteristics of the probability distributions of modeling errors differently, but the effects of these characteristics on the overall expected error also differ. Most notably, under SQ error all of bias, variance, and noise increase the expected error, while under ABS error certain parts of the error components reduce the expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially the variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
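    The contrast between the two metrics can be seen in a small numerical sketch (the residuals and distribution parameters below are invented): a single outlier inflates squared-error summaries far more than absolute-error ones, and the additive bias/variance/noise identity holds for SQ error but has no direct ABS analogue.

```python
import random

random.seed(2)

# Residuals from a hypothetical model: mostly small errors plus one outlier.
residuals = [0.5, -0.4, 0.3, -0.2, 0.1, 6.0]

mae = sum(abs(r) for r in residuals) / len(residuals)
mse = sum(r * r for r in residuals) / len(residuals)
rmse = mse ** 0.5
# The single outlier dominates the squared metric far more than the
# absolute one, so model rankings under the two metrics can disagree.

# Empirical check of the SQ-error decomposition for a constant predictor c
# of a noisy target y = mu + noise (a zero-variance model, so the variance
# term vanishes): E[(y - c)^2] = bias^2 + noise variance.
mu, sigma, c, n = 3.0, 1.0, 2.5, 200000
ys = [random.gauss(mu, sigma) for _ in range(n)]
expected_sq = sum((y - c) ** 2 for y in ys) / n
bias_sq = (mu - c) ** 2  # 0.25
noise_var = sigma ** 2   # 1.0
# expected_sq is close to bias_sq + noise_var = 1.25; every component adds
# to the SQ error, whereas the ABS-error decomposition derived in the
# study contains subtractive terms.
```

    The decomposition check is the Monte Carlo counterpart of the analytical point in the abstract: under SQ error all components add, so nothing in the metric can offset an error source.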

  17. Unified tractable model for downlink MIMO cellular networks using stochastic geometry

    KAUST Repository

    Afify, Laila H.

    2016-07-26

    Several research efforts have been invested in developing stochastic geometry models for cellular networks with multiple antenna transmission and reception (MIMO). On one hand, there are models that target abstract outage probability and ergodic rate for simplicity. On the other hand, there are models that sacrifice simplicity to target more tangible performance metrics such as the error probability. The two types of models are completely disjoint in terms of the analytic steps used to obtain the performance measures, which makes it challenging to conduct studies that account for different performance metrics. This paper unifies both techniques and proposes a unified stochastic-geometry-based mathematical paradigm to account for error probability, outage probability, and ergodic rates in MIMO cellular networks. The proposed model is also unified in terms of the antenna configurations and leads to simpler error probability analysis compared to existing state-of-the-art models. The core part of the analysis is based on abstracting unnecessary information conveyed within the interfering signals by assuming Gaussian signaling. To this end, the accuracy of the proposed framework is verified against state-of-the-art models as well as system-level simulations. Through this unified study we provide insights on network design by reflecting the effect of system parameters on different performance metrics. © 2016 IEEE.

  18. Making Residents Part of the Safety Culture: Improving Error Reporting and Reducing Harms.

    Science.gov (United States)

    Fox, Michael D; Bump, Gregory M; Butler, Gabriella A; Chen, Ling-Wan; Buchert, Andrew R

    2017-01-30

    Reporting medical errors is a focus of the patient safety movement. As frontline physicians, residents are optimally positioned to recognize errors and flaws in systems of care. Previous work highlights the difficulty of engaging residents in the identification and/or reduction of medical errors and in integrating these trainees into their institutions' cultures of safety. The authors describe the implementation of a longitudinal, discipline-based, multifaceted curriculum to enhance the reporting of errors by pediatric residents at Children's Hospital of Pittsburgh of University of Pittsburgh Medical Center. The key elements of this curriculum included providing the necessary education to identify medical errors with an emphasis on systems-based causes, modeling of error reporting by faculty, and integrating error reporting and discussion into the residents' daily activities. The authors tracked monthly error reporting rates by residents and other health care professionals, in addition to serious harm event rates at the institution. The interventions resulted in significant increases in error reports filed by residents, from 3.6 to 37.8 per month over 4 years. The increase in error reporting correlated with a decline in serious harm events, from 15.0 to 8.1 per month over 4 years (P = 0.01). Integrating patient safety into everyday resident responsibilities encourages frequent reporting and discussion of medical errors and leads to improvements in patient care. Multiple simultaneous interventions are essential to making residents part of the safety culture of their training hospitals.

  19. Inverse estimation for temperatures of outer surface and geometry of inner surface of furnace with two layer walls

    International Nuclear Information System (INIS)

    Chen, C.-K.; Su, C.-R.

    2008-01-01

    This study provides an inverse analysis to estimate the boundary thermal behavior of a furnace with two-layer walls. The unknown temperature distribution of the outer surface and the geometry of the inner surface were estimated from the temperatures of a small number of measured points within the furnace wall. The present approach rearranges the matrix forms of the governing differential equations and then combines the reversed matrix method, the linear least squares error method and the concept of virtual area to determine the unknown boundary conditions of the furnace system. The dimensionless temperature data obtained from the direct problem were used to simulate the temperature measurements. The influence of temperature measurement errors upon the precision of the estimated results was also investigated. The advantage of this approach is that the unknown condition can be solved directly in a single calculation process without initially guessed temperatures, avoiding the iteration process of the traditional method in the analysis of the heat transfer. The calculation in this work is therefore faster and more exact than the traditional method. The results showed that the estimation error of the geometry increased with increasing distance between the measured points and the inner surface, with increasing preset measurement error, and with a decreasing number of measured points. However, the geometry of the furnace inner surface could be successfully estimated from the temperatures of only a small number of measured points within and near the outer surface, under a reasonable preset error.
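    The linear least squares step at the heart of such an inverse estimate can be sketched for a drastically simplified one-layer, one-dimensional wall; the temperatures, sensor positions and noise level below are invented, and the actual study uses a two-layer wall with a reversed matrix formulation.

```python
import random

random.seed(3)

# 1D steady conduction in a wall of unit thickness: temperature varies
# linearly between the unknown inner-surface value T_in (x = 0) and the
# outer value T_out (x = 1), so T(x) = T_in * (1 - x) + T_out * x.
T_in_true, T_out_true = 900.0, 60.0            # assumed "true" boundaries
xs = [0.6, 0.7, 0.8, 0.9]                      # interior sensor positions
meas = [T_in_true * (1 - x) + T_out_true * x + random.gauss(0, 1.0)
        for x in xs]

# Linear least squares for (T_in, T_out): each measurement row is
# [(1 - x), x]. The 2x2 normal equations are solved in closed form.
a11 = sum((1 - x) ** 2 for x in xs)
a12 = sum((1 - x) * x for x in xs)
a22 = sum(x * x for x in xs)
b1 = sum((1 - x) * m for x, m in zip(xs, meas))
b2 = sum(x * m for x, m in zip(xs, meas))
det = a11 * a22 - a12 * a12
T_in_est = (b1 * a22 - b2 * a12) / det
T_out_est = (a11 * b2 - a12 * b1) / det
# Sensors clustered near the outer surface still constrain T_in, but the
# estimate degrades as they move away from the inner surface, mirroring
# the abstract's finding about sensor distance and estimation error.
```

    The noise amplification factor for the extrapolated boundary grows as the sensors retreat from it, which is the mechanism behind the distance dependence reported in the study.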

  20. Professional, structural and organisational interventions in primary care for reducing medication errors.

    Science.gov (United States)

    Khalil, Hanan; Bell, Brian; Chambers, Helen; Sheikh, Aziz; Avery, Anthony J

    2017-10-04

    Medication-related adverse events in primary care represent an important cause of hospital admissions and mortality. Adverse events could result from people experiencing adverse drug reactions (not usually preventable) or could be due to medication errors (usually preventable). The objective was to determine the effectiveness of professional, organisational and structural interventions, compared to standard care, in reducing preventable medication errors by primary healthcare professionals that lead to hospital admissions, emergency department visits, and mortality in adults. We searched CENTRAL, MEDLINE, Embase, three other databases, and two trial registries on 4 October 2016, together with reference checking, citation searching and contact with study authors to identify additional studies. We also searched several sources of grey literature. We included randomised trials in which healthcare professionals provided community-based medical services. We also included interventions in outpatient clinics attached to a hospital, where people are seen by healthcare professionals but are not admitted to hospital. We only included interventions that aimed to reduce medication errors leading to hospital admissions, emergency department visits, or mortality. We included all participants, irrespective of age, who were prescribed medication by a primary healthcare professional. Three review authors independently extracted data. Each of the outcomes (hospital admissions, emergency department visits, and mortality) is reported in natural units (i.e. number of participants with an event per total number of participants at follow-up). We presented all outcomes as risk ratios (RRs) with 95% confidence intervals (CIs). We used the GRADE tool to assess the certainty of evidence. We included 30 studies (169,969 participants) in the review addressing various interventions to prevent medication errors; four studies addressed professional interventions (8266 participants) and 26 studies described

  1. Nurses' Behaviors and Visual Scanning Patterns May Reduce Patient Identification Errors

    Science.gov (United States)

    Marquard, Jenna L.; Henneman, Philip L.; He, Ze; Jo, Junghee; Fisher, Donald L.; Henneman, Elizabeth A.

    2011-01-01

    Patient identification (ID) errors occurring during the medication administration process can be fatal. The aim of this study is to determine whether differences in nurses' behaviors and visual scanning patterns during the medication administration process influence their capacities to identify patient ID errors. Nurse participants (n = 20)…

  2. An Examination of the Spatial Distribution of Carbon Dioxide and Systematic Errors

    Science.gov (United States)

    Coffey, Brennan; Gunson, Mike; Frankenberg, Christian; Osterman, Greg

    2011-01-01

    The industrial period and modern age is characterized by combustion of coal, oil, and natural gas for primary energy and transportation, leading to rising levels of atmospheric CO2. This increase, which is being carefully measured, has ramifications throughout the biological world. Through remote sensing, it is possible to measure how many molecules of CO2 lie in a defined column of air. However, other gases and particles are present in the atmosphere, such as aerosols and water, which make such measurements more complicated. Understanding the detailed geometry and path length of the observation is vital to computing the concentration of CO2. By comparing these satellite readings with ground-truth data (TCCON), the systematic errors arising from these sources can be assessed. Once the error is understood, it can be corrected for in the retrieval algorithms to create a set of data that is closer to the TCCON measurements. Using this process, the algorithms are being developed to reduce bias to within 0.1% of the true value worldwide. At this stage, the accuracy is within 1%, but by correcting small errors in the algorithms, such as accounting for the scattering of sunlight, the desired accuracy can be achieved.

  3. Cell homogenization methods for pin-by-pin core calculations tested in slab geometry

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Kitamura, Yasunori; Yamane, Yoshihiro

    2004-01-01

    In this paper, the performance of spatial homogenization methods for fuel and non-fuel cells is compared in slab geometry in order to facilitate pin-by-pin core calculations. Since the spatial homogenization methods were mainly developed for fuel assemblies, a systematic study of their performance for cell-level homogenization has not been carried out. The importance of cell-level homogenization is increasing, since pin-by-pin mesh core calculation in actual three-dimensional geometry, a less approximate approach than the current advanced nodal methods, is becoming feasible. Four homogenization methods were investigated in this paper: flux-volume weighting, the generalized equivalence theory, the superhomogenization (SPH) method and the nonlinear iteration method. The last of these, the nonlinear iteration method, was tested as a homogenization method for the first time. The calculations were carried out in simplified colorset assembly configurations of a PWR, simulated by slab geometries, and homogenization performance was evaluated through comparison with reference cell-heterogeneous calculations. The calculation results revealed that the generalized equivalence theory showed the best performance. Though the nonlinear iteration method can significantly reduce the homogenization error, its performance was not as good as that of the generalized equivalence theory. Through comparison of the results obtained by the generalized equivalence theory and the superhomogenization method, an important byproduct was obtained: a deficiency of the current superhomogenization method, which could be improved by incorporating a 'cell-level discontinuity factor between assemblies', was clarified.
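
    The simplest of the four methods, flux-volume weighting, can be sketched directly; the region values below are invented for illustration and the function is a generic textbook form, not code from the paper.

```python
import numpy as np

# Sketch of flux-volume weighting, the simplest homogenization option the
# paper compares (GET, SPH and the nonlinear iteration method additionally
# preserve reaction rates or interface behavior). Numbers are invented.
def flux_volume_homogenize(sigma, flux, volume):
    """Flux-volume-weighted homogenized cross section for one cell."""
    sigma, flux, volume = map(np.asarray, (sigma, flux, volume))
    weights = flux * volume
    return float(np.sum(sigma * weights) / np.sum(weights))

# Two-region slab cell (e.g. fuel + moderator), illustrative values:
sigma_hom = flux_volume_homogenize(
    sigma=[0.10, 0.02],   # region macroscopic cross sections [1/cm]
    flux=[1.0, 1.4],      # region-averaged scalar fluxes
    volume=[0.5, 0.5],    # region volumes
)
```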

  4. Analysis and Compensation for Gear Accuracy with Setting Error in Form Grinding

    Directory of Open Access Journals (Sweden)

    Chenggang Fang

    2015-01-01

    Full Text Available In the process of form grinding, gear setting error is the main factor influencing form grinding accuracy; we propose an effective method to improve form grinding accuracy that corrects the error by controlling the machine operations. Based on a geometric model of form grinding, with the gear setting errors represented in homogeneous coordinates, a tooth mathematical model was obtained and simplified under the gear setting error. Then, according to the gear standards ISO 1328-1:1997 and ANSI/AGMA 2015-1-A01:2002, the relationships between the gear setting errors and tooth profile deviation, helix deviation, and cumulative pitch deviation were investigated under the conditions of gear eccentricity error, gear inclination error, and gear resultant error, respectively. An error compensation method is proposed based on solving the sensitivity coefficient matrix of the setting error in a five-axis CNC form grinding machine; simulation and experimental results demonstrate that the method can effectively correct the gear setting error, further improving the form grinding accuracy.
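
    The compensation idea, solving a sensitivity coefficient matrix for the setting errors, can be sketched generically. The matrix S and the simulated errors below are invented numbers; the paper's actual sensitivity matrix comes from its five-axis machine model.

```python
import numpy as np

# Hedged sketch of sensitivity-based compensation: if measured deviations e
# depend (to first order) on setting errors x through a sensitivity matrix S,
# i.e. e ~ S @ x, then the least-squares estimate of x gives the correction
# -x_hat to apply. S and x_true are invented, not values from the paper.
def compensation_from_sensitivity(S, e):
    """Estimate setting errors from deviations; return (estimate, correction)."""
    x_hat, *_ = np.linalg.lstsq(S, e, rcond=None)
    return x_hat, -x_hat

S = np.array([[1.0, 0.2],
              [0.1, 0.9],
              [0.3, 0.4]])         # deviation produced per unit setting error
x_true = np.array([0.05, -0.02])   # simulated eccentricity/inclination errors
e = S @ x_true                     # profile/helix/pitch deviations (noiseless)

x_hat, correction = compensation_from_sensitivity(S, e)
residual = e + S @ correction      # deviation remaining after compensation
```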

  5. Management and minimisation of uncertainties and errors in numerical aerodynamics results of the German collaborative project MUNA

    CERN Document Server

    Barnewitz, Holger; Fritz, Willy; Thiele, Frank

    2013-01-01

    This volume reports results from the German research initiative MUNA (Management and Minimization of Errors and Uncertainties in Numerical Aerodynamics), which combined development activities of the German Aerospace Center (DLR), German universities and German aircraft industry. The main objective of this five year project was the development of methods and procedures aiming at reducing various types of uncertainties that are typical of numerical flow simulations. The activities were focused on methods for grid manipulation, techniques for increasing the simulation accuracy, sensors for turbulence modelling, methods for handling uncertainties of the geometry and grid deformation as well as stochastic methods for quantifying aleatoric uncertainties.

  6. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and their taxonomy. These notions are also applied to events that have occurred in the nuclear power, aviation and shipping industries. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication and resource/task management, an excessive authority gradient, and excessive professional courtesy can cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors.

  7. Spectral nodal method for one-speed X,Y-geometry Eigenvalue diffusion problems

    International Nuclear Information System (INIS)

    Dominguez, Dany S.; Lorenzo, Daniel M.; Hernandez, Carlos G.; Barros, Ricardo C.; Silva, Fernando C. da

    2001-01-01

    Presented here is a new numerical nodal method for the steady-state multidimensional neutron diffusion equation in rectangular geometry. Our method is based on a spectral analysis of the transverse-integrated nodal diffusion equations. These equations are obtained by integrating the diffusion equation in the X and Y directions and then considering flat approximations for the transverse leakage terms. These flat approximations are the only approximations that we consider in this method; as a result, the numerical solutions are completely free from truncation errors in slab geometry. We show numerical results to illustrate the method's accuracy for coarse-mesh calculations in a heterogeneous medium. (author)

  8. W-geometry

    International Nuclear Information System (INIS)

    Hull, C.M.

    1993-01-01

    The geometric structure of theories with gauge fields of spins two and higher should involve a higher spin generalisation of Riemannian geometry. Such geometries are discussed and the case of W∞-gravity is analysed in detail. While the gauge group for gravity in d dimensions is the diffeomorphism group of the space-time, the gauge group for a certain W-gravity theory (which is W∞-gravity in the case d=2) is the group of symplectic diffeomorphisms of the cotangent bundle of the space-time. Gauge transformations for W-gravity gauge fields are given by requiring the invariance of a generalised line element. Densities exist and can be constructed from the line element (generalising √(det g_μν)) only if d=1 or d=2, so that only for d=1,2 can actions be constructed. These two cases and the corresponding W-gravity actions are considered in detail. In d=2, the gauge group is effectively only a subgroup of the symplectic diffeomorphism group. Some of the constraints that arise for d=2 are similar to equations arising in the study of self-dual four-dimensional geometries and can be analysed using twistor methods, allowing contact to be made with other formulations of W-gravity. While the twistor transform for self-dual spaces with one Killing vector reduces to a Legendre transform, that for two Killing vectors gives a generalisation of the Legendre transform. (orig.)

  9. Simplified discrete ordinates method in spherical geometry

    International Nuclear Information System (INIS)

    Elsawi, M.A.; Abdurrahman, N.M.; Yavuz, M.

    1999-01-01

    The authors extend the method of simplified discrete ordinates (SSN) to spherical geometry. The motivation for such an extension is that the appearance of the angular derivative (redistribution) term in the spherical geometry transport equation makes it difficult to decide which differencing scheme best approximates this term. In the present method, the angular derivative term is treated implicitly, which avoids the need to approximate it. The method can be considered analytic in nature, with the advantage of being free from the spatial truncation errors from which most existing transport codes suffer. In addition, it handles scattering in a very general manner, with the advantage of spending almost the same computational effort for all scattering modes. Moreover, the method can easily be applied to higher-order SN calculations.

  10. Optical geometry

    International Nuclear Information System (INIS)

    Robinson, I.; Trautman, A.

    1988-01-01

    The geometry of classical physics is Lorentzian; but weaker geometries are often more appropriate: null geodesics and electromagnetic fields, for example, are well known to be objects of conformal geometry. To deal with a single null congruence, or with the radiative electromagnetic fields associated with it, even less is needed: flag geometry for the first, optical geometry, with which this paper is chiefly concerned, for the second. The authors establish a natural one-to-one correspondence between optical geometries, considered locally, and three-dimensional Cauchy-Riemann structures. A number of Lorentzian geometries are shown to be equivalent from the optical point of view. For example the Goedel universe, the Taub-NUT metric and Hauser's twisting null solution have an optical geometry isomorphic to the one underlying the Robinson congruence in Minkowski space. The authors present general results on the problem of lifting a CR structure to a Lorentz manifold and, in particular, to Minkowski space; and exhibit the relevance of the deviation form to this problem

  11. Intrinsic Losses Based on Information Geometry and Their Applications

    Directory of Open Access Journals (Sweden)

    Yao Rong

    2017-08-01

    Full Text Available One main interest of information geometry is to study the properties of statistical models that do not depend on the coordinate systems or model parametrization; thus, it may serve as an analytic tool for intrinsic inference in statistics. In this paper, under the framework of Riemannian geometry and dual geometry, we revisit two commonly-used intrinsic losses, given respectively by the squared Rao distance and the symmetrized Kullback–Leibler divergence (or Jeffreys divergence). For an exponential family endowed with the Fisher metric and α-connections, the two loss functions are uniformly described as the energy difference along an α-geodesic path, for some α ∈ {−1, 0, 1}. Subsequently, the two intrinsic losses are utilized to develop Bayesian analyses of covariance matrix estimation and range-spread target detection. We provide an intrinsically unbiased covariance estimator, which is verified to be asymptotically efficient in terms of the intrinsic mean square error. The decision rules deduced by the intrinsic Bayesian criterion provide a geometrical justification for the constant false alarm rate detector based on the generalized likelihood ratio principle.
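
    The symmetrized Kullback–Leibler (Jeffreys) divergence mentioned above has a simple closed form for univariate Gaussians, which makes a compact sketch possible; the parameter values are illustrative only.

```python
import numpy as np

# Sketch: Jeffreys (symmetrized KL) divergence between two univariate
# Gaussians, using the closed form of KL for N(mu, s^2). Values illustrative.
def kl_gauss(mu_p, s_p, mu_q, s_q):
    """KL(p || q) for p = N(mu_p, s_p^2), q = N(mu_q, s_q^2)."""
    return np.log(s_q / s_p) + (s_p**2 + (mu_p - mu_q)**2) / (2 * s_q**2) - 0.5

def jeffreys(mu_p, s_p, mu_q, s_q):
    """Symmetrized Kullback-Leibler (Jeffreys) divergence."""
    return kl_gauss(mu_p, s_p, mu_q, s_q) + kl_gauss(mu_q, s_q, mu_p, s_p)

d = jeffreys(0.0, 1.0, 1.0, 2.0)   # symmetric, and zero iff the two agree
```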

  12. A three-dimensional reconstruction algorithm for an inverse-geometry volumetric CT system

    International Nuclear Information System (INIS)

    Schmidt, Taly Gilat; Fahrig, Rebecca; Pelc, Norbert J.

    2005-01-01

    An inverse-geometry volumetric computed tomography (IGCT) system has been proposed that is capable of rapidly acquiring sufficient data to reconstruct a thick volume in one circular scan. The system uses a large-area scanned source opposite a smaller detector. The source and detector have the same extent in the axial, or slice, direction, thus providing sufficient volumetric sampling and avoiding cone-beam artifacts. This paper describes a reconstruction algorithm for the IGCT system. The algorithm first rebins the acquired data into two-dimensional (2D) parallel-ray projections at multiple tilt and azimuthal angles, followed by a 3D filtered backprojection. The rebinning step is performed by gridding the data onto a Cartesian grid in a 4D projection space. We present a new method for correcting the gridding error caused by the finite and asymmetric sampling in the neighborhood of each output grid point in the projection space. The reconstruction algorithm was implemented and tested on simulated IGCT data. Results show that the gridding correction reduces the gridding errors to below one Hounsfield unit. With this correction, the reconstruction algorithm does not introduce significant artifacts or blurring when compared to images reconstructed from simulated 2D parallel-ray projections. We also present an investigation of the noise behavior of the method, which verifies that the proposed reconstruction algorithm utilizes cross-plane rays as efficiently as in-plane rays and can provide noise comparable to an in-plane parallel-ray geometry for the same number of photons. Simulations of a resolution test pattern and the modulation transfer function demonstrate that the IGCT system, using the proposed algorithm, is capable of 0.4 mm isotropic resolution. The successful implementation of the reconstruction algorithm is an important step in establishing the feasibility of the IGCT system.
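
    The density-correction idea behind gridding can be illustrated in one dimension: accumulating nonuniform samples onto a regular grid over-weights densely sampled neighborhoods unless each grid point is normalized by its local sample count. This toy sketch is only an analogue of the 4D correction described in the abstract.

```python
import numpy as np

# Toy 1D analogue of gridding with density correction: average the samples
# landing near each grid point instead of just summing them, so unevenly
# sampled regions are not over-weighted. Signal and grid are invented.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 2000)        # nonuniform sample locations
f = np.sin(2 * np.pi * x)              # sampled signal values

edges = np.linspace(0.0, 1.0, 21)      # 20 uniform grid cells
idx = np.clip(np.digitize(x, edges) - 1, 0, 19)

acc = np.bincount(idx, weights=f, minlength=20)   # raw accumulation
count = np.bincount(idx, minlength=20)            # local sampling density
gridded = acc / np.maximum(count, 1)              # density-corrected values
```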

  13. Interactive three-dimensional visualization and creation of geometries for Monte Carlo calculations

    International Nuclear Information System (INIS)

    Theis, C.; Buchegger, K.H.; Brugger, M.; Forkel-Wirth, D.; Roesler, S.; Vincke, H.

    2006-01-01

    The implementation of three-dimensional geometries for the simulation of radiation transport problems is a very time-consuming task. Each particle transport code supplies its own scripting language and syntax for creating the geometries. All of them are based on the Constructive Solid Geometry scheme requiring textual description. This makes the creation a tedious and error-prone task, which is especially hard to master for novice users. The Monte Carlo code FLUKA comes with built-in support for creating two-dimensional cross-sections through the geometry and FLUKACAD, a custom-built converter to the commercial Computer Aided Design package AutoCAD, exists for 3D visualization. For other codes, like MCNPX, a couple of different tools are available, but they are often specifically tailored to the particle transport code and its approach used for implementing geometries. Complex constructive solid modeling usually requires very fast and expensive special purpose hardware, which is not widely available. In this paper SimpleGeo is presented, which is an implementation of a generic versatile interactive geometry modeler using off-the-shelf hardware. It is running on Windows, with a Linux version currently under preparation. This paper describes its functionality, which allows for rapid interactive visualization as well as generation of three-dimensional geometries, and also discusses critical issues regarding common CAD systems

  14. Introducing geometry concept based on history of Islamic geometry

    Science.gov (United States)

    Maarif, S.; Wahyudin; Raditya, A.; Perbowo, K. S.

    2018-01-01

    Geometry is one of the areas of mathematics that is interesting to discuss, and it has a long history in mathematical development. It is therefore important to integrate the historical development of geometry into the classroom, to increase students' knowledge of how earlier mathematicians found and constructed geometric concepts. Introducing a geometrical concept can start with introducing the Muslim mathematicians who invented it, so that students can understand in detail how the concept was found. However, the history of mathematical development, and especially the history of Islamic geometry, is today less well known in Indonesian education. There are several concepts discovered by Muslim mathematicians that should be appreciated by students learning geometry. The great ideas of these Muslim mathematicians can be used as study materials that also convey religious character values. Additionally, integrating the history of geometry into geometry teaching is expected to improve students' motivation and their understanding of geometrical concepts.

  15. Hand biometric recognition based on fused hand geometry and vascular patterns.

    Science.gov (United States)

    Park, GiTae; Kim, Soowon

    2013-02-28

    A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used; for the vascular pattern, a direction-based vascular-pattern extraction method was used. Together these form a new multimodal biometric approach. The proposed multimodal biometric system uses only one image to extract the feature points, so it can be configured for low-cost devices. Our approach fuses hand geometry (the side view of the hand and the back of the hand) with vascular-pattern recognition at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%.
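
    Of the listed features, the K-curvature is easy to sketch: at each contour point it is the turning angle between the vector from the point k steps behind and the vector to the point k steps ahead. The square contour below is an invented example, not data from the paper.

```python
import numpy as np

# Sketch of K-curvature on a closed contour: the turning angle at each
# point between the incoming vector (from the point k steps behind) and
# the outgoing vector (to the point k steps ahead). Contour is invented.
def k_curvature(contour, k):
    """Turning angle (radians) at every point of a closed contour."""
    prev_v = contour - np.roll(contour, k, axis=0)     # incoming direction
    next_v = np.roll(contour, -k, axis=0) - contour    # outgoing direction
    dot = np.einsum('ij,ij->i', prev_v, next_v)
    norm = np.linalg.norm(prev_v, axis=1) * np.linalg.norm(next_v, axis=1)
    return np.arccos(np.clip(dot / norm, -1.0, 1.0))

# A small square traced counter-clockwise: corners turn by pi/2, edges by 0.
square = np.array([[0, 0], [1, 0], [2, 0], [2, 1],
                   [2, 2], [1, 2], [0, 2], [0, 1]], dtype=float)
curv = k_curvature(square, 1)
```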

  16. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Full Text Available Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005, General requirements for the competence of testing and calibration laboratories, during operation are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, Rule 702 of the Federal Rules of Evidence mandates that judges consider factors such as peer review to ensure the reliability of expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher possibility of unfair decision-making, they should receive more attention than false-negative errors.

  17. The Influence of Mounting Errors in Rod-Toothed Transmissions

    Directory of Open Access Journals (Sweden)

    M. Yu. Sachkov

    2015-01-01

    Full Text Available In the paper we consider an approximate transmission. The work is aimed at the development of a gear-powered transmission on parallel axes, which is RF patent-protected. The paper justifies the relevance of synthesizing new kinds of engagement with simplified geometry of the contacting condition. A typical solution for powered mechanisms obtained by F. L. Livinin and his disciples is characterized. The paper describes the arrangement of the coordinate systems used to obtain the position function of the gear-powered transmission consisting of two wheels with fifteen leads. For these, the coordinates of the contact points are also obtained, and errors of the position function at tooth changeover are calculated. To obtain the position function, matrix transformations and the equality of radius vectors and unit normal vectors at the contact point were used. This transmission can be used in mechanical and instrumentation engineering, and in other sectors of the economy. Both reducers and multipliers can be made on its basis. It has high manufacturability (no special equipment is required for its production), and its displacement function is close to linear. This article describes the influence of the axle spacing error on the quality of the transmission characteristics. The paper presents graphical relationships and tabular estimates for the nominal axle spacing and for offsets within 0.2 mm, an axle spacing error that is significant for gearing. From the results of this work we can say that the transmission is almost insensitive to axle spacing errors: engagement occurs without the contact point exiting onto the lead edge. To solve the resulting system of equations, the numerical methods of the MathCAD software package were applied. In the future, the authors expect to consider other possible manufacturing and mounting errors of the gear-powered transmission (such as pitch error, misalignment, etc.) to assess their impact on the quality

  18. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

    Full Text Available Probes for CNC machine tools, like every measurement device, have their accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are rarely used. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but this is not a widely used procedure. In this paper, the shares of systematic errors and random errors in the total error of exemplary probes are determined. In the case of simple kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, it is shown that in the case of kinematic probes the commonly specified unidirectional repeatability is significantly better than the 2D performance. However, in the case of a more precise strain-gauge probe, systematic errors are of the same order as random errors, which means that error correction or compensation in this case would not yield any significant benefits.
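
    The separation of a probe's total error into a systematic (compensable) share and a random share can be sketched on synthetic data: repeated triggering per direction yields a per-direction mean (systematic pre-travel variation) and a scatter (random repeatability). All numbers below are invented.

```python
import numpy as np

# Synthetic sketch: repeated probe triggering in several directions.
# Per-direction mean deviation = systematic pre-travel variation (can be
# compensated); per-direction scatter = random repeatability (cannot).
rng = np.random.default_rng(1)
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
systematic_true = 2.0 * np.cos(2 * angles)            # lobing, micrometres
data = systematic_true[:, None] + rng.normal(0.0, 0.2, (8, 50))

systematic = data.mean(axis=1)     # estimated compensation map
random_spread = data.std(axis=1)   # unidirectional repeatability estimate

compensated = data - systematic[:, None]   # apply the compensation map
```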

  19. Vectorising the detector geometry to optimize particle transport

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    Among the components contributing to particle transport, geometry navigation is an important consumer of CPU cycles. The tasks performed to answer "basic" queries, such as locating a point within a geometry hierarchy or computing accurately the distance to the next boundary, can become very computationally intensive for complex detector setups. So far, the existing geometry algorithms employ mainly scalar optimisation strategies (voxelization, caching) to reduce their CPU consumption. In this paper, we take a different approach and investigate how geometry navigation can benefit from the vector instruction set extensions that are one of the primary sources of performance enhancements on current and future hardware. While on paper this form of microparallelism promises increased performance opportunities, applying this technology to the highly hierarchical and multiply branched geometry code is a difficult challenge. We refer to the current work done to vectorise an important part of the critica...
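
    The kind of batch query the abstract refers to can be illustrated (this is not the actual detector-simulation code) with a distance-to-boundary computation for a sphere, written once per ray and once for a whole basket of rays; the batched form is what maps naturally onto vector instructions.

```python
import numpy as np

# Illustrative only (not code from the paper): the same distance-to-boundary
# query for a sphere |x| = R, scalar vs batched. Rays start outside.
def dist_to_sphere_scalar(p, d, R):
    """Distance along unit direction d from point p to the sphere, or inf."""
    b = np.dot(p, d)
    c = np.dot(p, p) - R * R
    disc = b * b - c
    if disc < 0.0:
        return np.inf
    t = -b - np.sqrt(disc)
    return t if t > 0.0 else np.inf

def dist_to_sphere_batch(P, D, R):
    """Same query for N rays at once; P and D are (N, 3) arrays."""
    b = np.einsum('ij,ij->i', P, D)
    c = np.einsum('ij,ij->i', P, P) - R * R
    disc = b * b - c
    t = -b - np.sqrt(np.maximum(disc, 0.0))   # avoid sqrt of negatives
    return np.where((disc >= 0.0) & (t > 0.0), t, np.inf)

P = np.array([[0.0, 0.0, -2.0], [0.0, 3.0, 0.0]])
D = np.array([[0.0, 0.0, 1.0], [0.0, -1.0, 0.0]])
t = dist_to_sphere_batch(P, D, 1.0)   # -> [1.0, 2.0]
```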

  20. Error performance analysis in downlink cellular networks with interference management

    KAUST Repository

    Afify, Laila H.

    2015-05-01

    Modeling aggregate network interference in cellular networks has recently gained immense attention both in academia and industry. While stochastic geometry based models have succeeded in accounting for the cellular network geometry, they mostly abstract away many important wireless communication system aspects (e.g., modulation techniques, signal recovery techniques). Recently, a novel stochastic geometry model based on the Equivalent-in-Distribution (EiD) approach succeeded in capturing the aforementioned communication system aspects and extending the analysis to averaged error performance, however at the expense of increased modeling complexity. Inspired by the EiD approach, the analysis developed in [1] takes into consideration the key system parameters while providing a simple, tractable analysis. In this paper, we extend this framework to study the effect of different interference management techniques in downlink cellular networks. The accuracy of the proposed analysis is verified via Monte Carlo simulations.
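
    The link-level detail that such models capture can be checked by Monte Carlo. As a hedged, single-link illustration (far simpler than the paper's network-level analysis), the sketch below compares the simulated bit error rate of BPSK over Rayleigh fading with the classical closed form Pb = (1 − √(g/(1+g)))/2 at mean SNR g.

```python
import numpy as np

# Monte Carlo check of a classical link-level result: BPSK over a Rayleigh
# fading channel with coherent detection. Single link only; illustrative.
rng = np.random.default_rng(2)
g = 10.0                           # mean SNR (linear scale)
n = 200_000

bits = rng.integers(0, 2, n)
s = 2 * bits - 1                   # BPSK symbols in {-1, +1}
h = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)  # E|h|^2 = 1
w = (rng.normal(size=n) + 1j * rng.normal(size=n)) * np.sqrt(1 / (2 * g))
y = h * s + w

detected = (np.real(np.conj(h) * y) > 0).astype(int)   # coherent detection
ber_sim = np.mean(detected != bits)
ber_theory = 0.5 * (1 - np.sqrt(g / (1 + g)))          # averaged error rate
```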

  1. Reducing workpieces to their base geometry for multi-step incremental forming using manifold harmonics

    Science.gov (United States)

    Carette, Yannick; Vanhove, Hans; Duflou, Joost

    2018-05-01

    Single Point Incremental Forming is a flexible process that is well-suited for small batch production and rapid prototyping of complex sheet metal parts. The distributed nature of the deformation process and the unsupported sheet imply that controlling the final accuracy of the workpiece is challenging. To improve the process limits and the accuracy of SPIF, the use of multiple forming passes has been proposed and discussed by a number of authors. Most methods use multiple intermediate models, where the previous one is strictly smaller than the next one, while gradually increasing the workpieces' wall angles. Another method that can be used is the manufacture of a smoothed-out "base geometry" in the first pass, after which more detailed features can be added in subsequent passes. In both methods, the selection of these intermediate shapes is freely decided by the user. However, their practical implementation in the production of complex freeform parts is not straightforward. The original CAD model can be manually adjusted or completely new CAD models can be created. This paper discusses an automatic method that is able to extract the base geometry from a full STL-based CAD model in an analytical way. Harmonic decomposition is used to express the final geometry as the sum of individual surface harmonics. It is then possible to filter these harmonic contributions to obtain a new CAD model with a desired level of geometric detail. This paper explains the technique and its implementation, as well as its use in the automatic generation of multi-step geometries.
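
    A toy analogue of the harmonic filtering can be shown on a closed 2D contour standing in for a mesh: project the vertex coordinates onto eigenvectors of the cycle-graph Laplacian and keep only the low-frequency harmonics. The contour and cutoff below are invented for illustration.

```python
import numpy as np

# Toy "manifold harmonics" on a closed 2D contour: decompose the vertex
# coordinates on eigenvectors of the cycle-graph Laplacian and keep only
# the k lowest-frequency harmonics, leaving a smoothed base geometry. The
# discarded high-frequency terms are the detail a later pass would add.
n = 200
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
# Base circle plus a high-frequency "detail" ripple (invented shape).
r = 1 + 0.05 * np.sin(20 * theta)
contour = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

# Graph Laplacian of the cycle connecting consecutive vertices.
eye = np.eye(n)
L = 2 * eye - np.roll(eye, 1, axis=0) - np.roll(eye, -1, axis=0)
eigval, eigvec = np.linalg.eigh(L)      # harmonics, sorted by frequency

k = 10                                  # harmonics kept for the base geometry
coeff = eigvec.T @ contour              # spectral coefficients, shape (n, 2)
base = eigvec[:, :k] @ coeff[:k]        # low-pass reconstruction
```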

  2. Electronic error-reporting systems: a case study into the impact on nurse reporting of medical errors.

    Science.gov (United States)

    Lederman, Reeva; Dreyfus, Suelette; Matchan, Jessica; Knott, Jonathan C; Milton, Simon K

    2013-01-01

    Underreporting of errors in hospitals persists despite the claims of technology companies that electronic systems will facilitate reporting. This study builds on previous analyses to examine error reporting by nurses in hospitals using electronic media. This research asks whether the electronic medium creates additional barriers to error reporting and, if so, what practical steps all hospitals can take to reduce these barriers. This is a mixed-method case study of nurses' use of an error reporting system, RiskMan, in two hospitals. The case study involved one large private hospital and one large public hospital in Victoria, Australia, both of which use the RiskMan medical error reporting system. Information technology-based error reporting systems have unique access problems and time demands and can encourage nurses to develop alternative reporting mechanisms. This research focuses on nurses and raises important findings for hospitals using such systems or considering installation. This article suggests organizational and technical responses that could reduce some of the identified barriers. Crown Copyright © 2013. Published by Mosby, Inc. All rights reserved.

  3. Geometry through history Euclidean, hyperbolic, and projective geometries

    CERN Document Server

    Dillon, Meighan I

    2018-01-01

    Presented as an engaging discourse, this textbook invites readers to delve into the historical origins and uses of geometry. The narrative traces the influence of Euclid’s system of geometry, as developed in his classic text The Elements, through the Arabic period, the modern era in the West, and up to twentieth century mathematics. Axioms and proof methods used by mathematicians from those periods are explored alongside the problems in Euclidean geometry that lead to their work. Students cultivate skills applicable to much of modern mathematics through sections that integrate concepts like projective and hyperbolic geometry with representative proof-based exercises. For its sophisticated account of ancient to modern geometries, this text assumes only a year of college mathematics as it builds towards its conclusion with algebraic curves and quaternions. Euclid’s work has affected geometry for thousands of years, so this text has something to offer to anyone who wants to broaden their appreciation for the...

  4. Trauma Quality Improvement: Reducing Triage Errors by Automating the Level Assignment Process.

    Science.gov (United States)

    Stonko, David P; O Neill, Dillon C; Dennis, Bradley M; Smith, Melissa; Gray, Jeffrey; Guillamondegui, Oscar D

    2018-04-12

    Trauma patients are triaged by the severity of their injury or need for intervention while en route to the trauma center according to trauma activation protocols that are institution specific. Significant research has been aimed at improving these protocols in order to optimize patient outcomes while striving for efficiency in care. However, it is known that patients are often undertriaged or overtriaged because protocol adherence remains imperfect. The goal of this quality improvement (QI) project was to improve this adherence, and thereby reduce the triage error. It was conducted as part of the formal undergraduate medical education curriculum at this institution. A QI team was assembled and baseline data were collected, then 2 Plan-Do-Study-Act (PDSA) cycles were implemented sequentially. During the first cycle, a novel web tool was developed and implemented in order to automate the level assignment process (it takes EMS-provided data and automatically determines the level); the tool was based on the existing trauma activation protocol. The second PDSA cycle focused on improving triage accuracy in isolated, less than 10% total body surface area burns, which we identified to be a point of common error. Traumas were reviewed and tabulated at the end of each PDSA cycle, and triage accuracy was followed with a run chart. This study was performed at Vanderbilt University Medical Center and Medical School, which has a large level 1 trauma center covering over 75,000 square miles, and which sees urban, suburban, and rural trauma. The baseline assessment period and each PDSA cycle lasted 2 weeks. During this time, all activated, adult, direct traumas were reviewed. There were 180 patients during the baseline period, 189 after the first test of change, and 150 after the second test of change. All were included in analysis. 
Of 180 patients, 30 were inappropriately triaged during baseline analysis (3 undertriaged and 27 overtriaged) versus 16 of 189 (3 undertriaged and 13
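
    The heart of the first PDSA cycle is a deterministic mapping from EMS-provided data to an activation level. A minimal sketch of such a rule-based assigner is below; the field names, thresholds, and level criteria are illustrative assumptions, not the institutional protocol described in the study.

    ```python
    def assign_trauma_level(sbp, gcs, penetrating_torso, tbsa_burn_pct=0.0):
        """Map EMS-provided data to a trauma activation level.

        Thresholds and criteria here are illustrative assumptions, not
        the institutional protocol used in the study.
        """
        # Physiologic instability or penetrating torso injury: full activation.
        if sbp < 90 or gcs <= 8 or penetrating_torso:
            return 1
        # Isolated burns under 10% total body surface area were a common
        # source of overtriage; route them to a lower-tier response.
        if 0 < tbsa_burn_pct < 10:
            return 3
        # Everything else: intermediate activation.
        return 2
    ```

    Encoding the protocol as code removes the per-case judgment step where adherence errors crept in; changing a criterion then means editing one rule rather than retraining every provider.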

  5. TOPIC: a debugging code for torus geometry input data of Monte Carlo transport code

    International Nuclear Information System (INIS)

    Iida, Hiromasa; Kawasaki, Hiromitsu.

    1979-06-01

    TOPIC has been developed for debugging geometry input data of the Monte Carlo transport code. The code has the following features: (1) It debugs the geometry input data of not only MORSE-GG but also MORSE-I, which is capable of treating torus geometry. (2) Its calculation results are shown in figures drawn by plotter or COM, so that regions left undefined or doubly defined are easily detected. (3) It finds a multitude of input data errors in a single run. (4) The input data required by this code are few, so that it is readily usable in a time-sharing system on a FACOM 230-60/75 computer. Example TOPIC calculations in design studies of tokamak fusion reactors (JXFR, INTOR-J) are presented. (author)

  6. Simulating Irregular Source Geometries for Ionian Plumes

    Science.gov (United States)

    McDoniel, W. J.; Goldstein, D. B.; Varghese, P. L.; Trafton, L. M.; Buchta, D. A.; Freund, J.; Kieffer, S. W.

    2011-05-01

    Volcanic plumes on Io represent a complex rarefied flow into a near-vacuum in the presence of gravity. A 3D Direct Simulation Monte Carlo (DSMC) method is used to investigate the gas dynamics of such plumes, with a focus on the effects of source geometry on far-field deposition patterns. A rectangular slit and a semicircular half annulus are simulated to illustrate general principles, especially the effects of vent curvature on deposition ring structure. Then two possible models for the giant plume Pele are presented. One is a curved line source corresponding to an IR image of a particularly hot region in the volcano's caldera and the other is a large area source corresponding to the entire caldera. The former is seen to produce the features seen in observations of Pele's ring, but with an error in orientation. The latter corrects the error in orientation, but loses some structure. A hybrid simulation of 3D slit flow is also discussed.

  7. Simulating Irregular Source Geometries for Ionian Plumes

    International Nuclear Information System (INIS)

    McDoniel, W. J.; Goldstein, D. B.; Varghese, P. L.; Trafton, L. M.; Buchta, D. A.; Freund, J.; Kieffer, S. W.

    2011-01-01

    Volcanic plumes on Io represent a complex rarefied flow into a near-vacuum in the presence of gravity. A 3D Direct Simulation Monte Carlo (DSMC) method is used to investigate the gas dynamics of such plumes, with a focus on the effects of source geometry on far-field deposition patterns. A rectangular slit and a semicircular half annulus are simulated to illustrate general principles, especially the effects of vent curvature on deposition ring structure. Then two possible models for the giant plume Pele are presented. One is a curved line source corresponding to an IR image of a particularly hot region in the volcano's caldera and the other is a large area source corresponding to the entire caldera. The former is seen to produce the features seen in observations of Pele's ring, but with an error in orientation. The latter corrects the error in orientation, but loses some structure. A hybrid simulation of 3D slit flow is also discussed.

  8. The effect of TWD estimation error on the geometry of machined surfaces in micro-EDM milling

    DEFF Research Database (Denmark)

    Puthumana, Govindan; Bissacco, Giuliano; Hansen, Hans Nørgaard

    In micro EDM (electrical discharge machining) milling, tool electrode wear must be effectively compensated in order to achieve high accuracy of machined features [1]. Tool wear compensation in micro-EDM milling can be based on off-line techniques with limited accuracy such as estimation...... and statistical characterization of the discharge population [3]. The TWD based approach permits the direct control of the position of the tool electrode front surface. However, TWD estimation errors will generate a self-amplifying error on the tool electrode axial depth during micro-EDM milling. Therefore....... The error propagation effect is demonstrated through a software simulation tool developed by the authors for determination of the correct TWD for subsequent use in compensation of electrode wear in EDM milling. The implemented model uses an initial arbitrary estimation of TWD and a single experiment...

  9. Reducing Individual Variation for fMRI Studies in Children by Minimizing Template Related Errors.

    Directory of Open Access Journals (Sweden)

    Jian Weng

    Full Text Available Spatial normalization is an essential process for group comparisons in functional MRI studies. In practice, there is a risk of normalization errors, particularly in studies involving children, seniors, or diseased populations and in regions with high individual variation. One way to minimize normalization errors is to create a study-specific template based on a large sample size. However, studies with a large sample size are not always feasible, particularly studies in children. The performance of templates with a small sample size has not been evaluated in fMRI studies in children. In the current study, this issue was encountered in a working memory task with 29 children in two groups. We compared the performance of different templates: a study-specific template created from the experimental population, a Chinese children template, and the widely used adult MNI template. We observed distinct differences in the right orbitofrontal region among the three templates in between-group comparisons. The study-specific template and the Chinese children template were more sensitive for the detection of between-group differences in the orbitofrontal cortex than the MNI template. Proper templates could effectively reduce individual variation. Further analysis revealed a correlation between the BOLD contrast size and the norm index of the affine transformation matrix, i.e., the SFN, which characterizes the difference between a template and a native image and differs significantly across subjects. We therefore proposed and tested another method to reduce individual variation that includes the SFN as a covariate in group-wise statistics. This correction exhibits outstanding performance in enhancing detection power in group-level tests.
A training effect of abacus-based mental calculation was also demonstrated, with significantly elevated activation in the right orbitofrontal region that correlated with behavioral response time across subjects in the trained group.
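
    The SFN covariate can be computed directly from each subject's normalization result. The sketch below assumes one plausible definition, which the abstract does not spell out: the Frobenius norm of the deviation of the linear part of the affine matrix from the identity, so a native image that already matches the template scores zero.

    ```python
    import numpy as np

    def sfn(affine):
        """Norm index (SFN) of a subject's affine normalization matrix.

        Assumed definition (the abstract gives no formula): the Frobenius
        norm of the deviation of the 3x3 linear part from the identity,
        so a native image already matching the template scores 0.
        """
        A = np.asarray(affine, dtype=float)[:3, :3]
        return float(np.linalg.norm(A - np.eye(3), ord="fro"))

    # The per-subject SFN would then enter the group-level design matrix
    # as a nuisance covariate alongside the group indicator.
    ```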

  10. Thermal geometry from CFT at finite temperature

    Directory of Open Access Journals (Sweden)

    Wen-Cong Gan

    2016-09-01

    Full Text Available We present how the thermal geometry emerges from CFT at finite temperature by using the truncated entanglement renormalization network, the cMERA. For the case of 2d CFT, the reduced geometry is the BTZ black hole or the thermal AdS, as expected. In order to determine which spacetime is preferred, we propose a cMERA description of the Hawking–Page phase transition. Our proposal is in agreement with the picture of the recently proposed surface/state correspondence.

  11. Thermal geometry from CFT at finite temperature

    Energy Technology Data Exchange (ETDEWEB)

    Gan, Wen-Cong, E-mail: ganwencong@gmail.com [Department of Physics, Nanchang University, Nanchang 330031 (China); Center for Relativistic Astrophysics and High Energy Physics, Nanchang University, Nanchang 330031 (China); Shu, Fu-Wen, E-mail: shufuwen@ncu.edu.cn [Department of Physics, Nanchang University, Nanchang 330031 (China); Center for Relativistic Astrophysics and High Energy Physics, Nanchang University, Nanchang 330031 (China); Wu, Meng-He, E-mail: menghewu.physik@gmail.com [Department of Physics, Nanchang University, Nanchang 330031 (China); Center for Relativistic Astrophysics and High Energy Physics, Nanchang University, Nanchang 330031 (China)

    2016-09-10

    We present how the thermal geometry emerges from CFT at finite temperature by using the truncated entanglement renormalization network, the cMERA. For the case of 2d CFT, the reduced geometry is the BTZ black hole or the thermal AdS, as expected. In order to determine which spacetime is preferred, we propose a cMERA description of the Hawking–Page phase transition. Our proposal is in agreement with the picture of the recently proposed surface/state correspondence.

  12. Hand Biometric Recognition Based on Fused Hand Geometry and Vascular Patterns

    Science.gov (United States)

    Park, GiTae; Kim, Soowon

    2013-01-01

    A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used, and for the vascular pattern, the direction-based vascular-pattern extraction method was used, and thus, a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points. This system can be configured for low-cost devices. Our multimodal biometric-approach hand-geometry (the side view of the hand and the back of hand) and vascular-pattern recognition method performs at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%. PMID:23449119

  13. A Wear Geometry Model of Plain Woven Fabric Composites

    Directory of Open Access Journals (Sweden)

    Gu Dapeng

    2014-09-01

    Full Text Available The paper describes a model for analyzing the wear geometry of plain woven fabric composites. The model consists of a mathematical description of plain woven fabric based on Peirce's model, coupled with a stratified method for solving the wear geometry. The evolution of the wear area ratios of weft yarn, warp yarn, and matrix resin on the worn surface is simulated in MATLAB as a function of warp and weft yarn diameters, warp and weft yarn-to-yarn distances, and fabric structure phases (SPs). Comparison of theoretical and experimental results from the PTFE/Kevlar fabric wear experiment shows that the model captures the trend of the component area ratio variations through the thickness of the fabric, but, being an idealized model, has an inherently large error in quantitative analysis.

  14. Architectural geometry

    KAUST Repository

    Pottmann, Helmut

    2014-11-26

    Around 2005 it became apparent in the geometry processing community that freeform architecture contains many problems of a geometric nature to be solved, and many opportunities for optimization which however require geometric understanding. This area of research, which has been called architectural geometry, meanwhile contains a great wealth of individual contributions which are relevant in various fields. For mathematicians, the relation to discrete differential geometry is significant, in particular the integrable system viewpoint. Besides, new application contexts have become available for quite some old-established concepts. Regarding graphics and geometry processing, architectural geometry yields interesting new questions but also new objects, e.g. replacing meshes by other combinatorial arrangements. Numerical optimization plays a major role but in itself would be powerless without geometric understanding. Summing up, architectural geometry has become a rewarding field of study. We here survey the main directions which have been pursued, we show real projects where geometric considerations have played a role, and we outline open problems which we think are significant for the future development of both theory and practice of architectural geometry.

  15. Architectural geometry

    KAUST Repository

    Pottmann, Helmut; Eigensatz, Michael; Vaxman, Amir; Wallner, Johannes

    2014-01-01

    Around 2005 it became apparent in the geometry processing community that freeform architecture contains many problems of a geometric nature to be solved, and many opportunities for optimization which however require geometric understanding. This area of research, which has been called architectural geometry, meanwhile contains a great wealth of individual contributions which are relevant in various fields. For mathematicians, the relation to discrete differential geometry is significant, in particular the integrable system viewpoint. Besides, new application contexts have become available for quite some old-established concepts. Regarding graphics and geometry processing, architectural geometry yields interesting new questions but also new objects, e.g. replacing meshes by other combinatorial arrangements. Numerical optimization plays a major role but in itself would be powerless without geometric understanding. Summing up, architectural geometry has become a rewarding field of study. We here survey the main directions which have been pursued, we show real projects where geometric considerations have played a role, and we outline open problems which we think are significant for the future development of both theory and practice of architectural geometry.

  16. Effectiveness of Toyota process redesign in reducing thyroid gland fine-needle aspiration error.

    Science.gov (United States)

    Raab, Stephen S; Grzybicki, Dana Marie; Sudilovsky, Daniel; Balassanian, Ronald; Janosky, Janine E; Vrbin, Colleen M

    2006-10-01

    Our objective was to determine whether the Toyota Production System process redesign resulted in diagnostic error reduction for patients who underwent cytologic evaluation of thyroid nodules. In this longitudinal, nonconcurrent cohort study, we compared the diagnostic error frequency of a thyroid aspiration service before and after implementation of error reduction initiatives consisting of adoption of a standardized diagnostic terminology scheme and an immediate interpretation service. A total of 2,424 patients underwent aspiration. Following terminology standardization, the false-negative rate decreased from 41.8% to 19.1% (P = .006) and the specimen nondiagnostic rate increased from 5.8% to 19.8%. The Toyota process change led to significantly fewer diagnostic errors for patients who underwent thyroid fine-needle aspiration.
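
    Before/after error rates of this kind can be compared with a standard two-proportion z-test. A self-contained sketch using the normal approximation is below; the example counts are hypothetical, not the study's raw data.

    ```python
    import math

    def two_proportion_p(x1, n1, x2, n2):
        """Two-sided z-test for a change in an error rate (normal approximation).

        x1/n1 and x2/n2 are the before and after error counts and totals.
        """
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)          # pooled proportion under H0
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        # Two-sided p-value from the standard normal CDF.
        return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    ```

    With rates as far apart as 41.8% versus 19.1%, even moderate sample sizes yield small P values, consistent with the reported P = .006.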

  17. Two lectures on D-geometry and noncommutative geometry

    International Nuclear Information System (INIS)

    Douglas, M.R.

    1999-01-01

    This is a write-up of lectures given at the 1998 Spring School at the Abdus Salam ICTP. We give a conceptual introduction to D-geometry, the study of geometry as seen by D-branes in string theory, and to noncommutative geometry as it has appeared in D-brane and Matrix theory physics. (author)

  18. Hessian matrix approach for determining error field sensitivity to coil deviations

    Science.gov (United States)

    Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; Song, Yuntao; Wan, Yuanxi

    2018-05-01

    The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code (Zhu et al 2018 Nucl. Fusion 58 016008) is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
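
    The eigenvalue step described above can be sketched in a few lines of linear algebra; the toy Hessian below is an illustration of the idea only, not output from FOCUS.

    ```python
    import numpy as np

    def sensitivity_directions(hessian):
        """Rank coil-deviation directions by their effect on the error field.

        The Hessian of the normal-field cost function is symmetric, so its
        eigenvalues measure how fast the cost grows along each eigenvector;
        the largest eigenvalues flag the most damaging misalignments.
        """
        vals, vecs = np.linalg.eigh(np.asarray(hessian, dtype=float))
        order = np.argsort(vals)[::-1]  # most sensitive direction first
        return vals[order], vecs[:, order]

    # Toy 2-parameter Hessian: deviations along the first coordinate are
    # 100x more damaging than along the second.
    vals, vecs = sensitivity_directions(np.diag([100.0, 1.0]))
    ```

    Coil tolerances can then be tightened only along the few high-eigenvalue directions rather than uniformly.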

  19. Image defects from surface and alignment errors in grazing incidence telescopes

    Science.gov (United States)

    Saha, Timo T.

    1989-01-01

    The rigid body motions and low frequency surface errors of grazing incidence Wolter telescopes are studied. The analysis is based on surface error descriptors proposed by Paul Glenn. In his analysis, the alignment and surface errors are expressed in terms of Legendre-Fourier polynomials. Individual terms in the expression correspond to rigid body motions (decenter and tilt) and low spatial frequency surface errors of mirrors. With the help of the Legendre-Fourier polynomials and the geometry of grazing incidence telescopes, exact and approximated first order equations are derived in this paper for the components of the ray intercepts at the image plane. These equations are then used to calculate the sensitivities of Wolter type I and II telescopes for the rigid body motions and surface deformations. The rms spot diameters calculated from this theory and OSAC ray tracing code agree very well. This theory also provides a tool to predict how rigid body motions and surface errors of the mirrors compensate each other.
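
    A Legendre expansion of an axial surface error profile of the kind used above can be evaluated directly with NumPy. The coefficients below are illustrative assumptions, with the low-order terms playing the role of the decenter- and tilt-like contributions.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    # Axial surface error z(x) on a normalized mirror coordinate x in [-1, 1],
    # expressed in Legendre terms: the P0 and P1 amplitudes act like offset
    # and tilt, higher orders are low-frequency figure error.
    coeffs = [0.0, 0.5, 0.2]            # amplitudes of P0, P1, P2 (arbitrary units)
    x = np.linspace(-1.0, 1.0, 5)       # axial stations along the mirror
    err = legendre.legval(x, coeffs)    # surface error at each station
    ```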

  20. A channel-by-channel method of reducing the errors associated with peak area integration

    International Nuclear Information System (INIS)

    Luedeke, T.P.; Tripard, G.E.

    1996-01-01

    A new method of reducing the errors associated with peak area integration has been developed. This method utilizes the signal content of each channel as an estimate of the overall peak area. These individual estimates can then be weighted according to the precision with which each estimate is known, producing an overall area estimate. Experimental measurements were performed on a small peak sitting on a large background, and the results compared to those obtained from a commercial software program. Results showed a marked decrease in the spread of results around the true value (obtained by counting for a long period of time), and a reduction in the statistical uncertainty associated with the peak area. (orig.)
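
    The channel-by-channel scheme described above amounts to an inverse-variance weighted mean of per-channel area estimates. A minimal sketch, assuming a known normalized peak shape and Poisson counting statistics (details the abstract does not specify):

    ```python
    import numpy as np

    def peak_area_channelwise(counts, background, shape):
        """Inverse-variance weighted combination of per-channel area estimates.

        `shape` is the normalized peak profile (sums to 1), assumed known
        from a fit. Channel i gives an independent area estimate
        (counts_i - background_i) / shape_i with Poisson variance
        (counts_i + background_i) / shape_i**2; the weighted mean
        down-weights channels dominated by background noise.
        """
        counts = np.asarray(counts, dtype=float)
        background = np.asarray(background, dtype=float)
        shape = np.asarray(shape, dtype=float)
        est = (counts - background) / shape
        var = (counts + background) / shape ** 2
        w = 1.0 / var
        area = np.sum(w * est) / np.sum(w)
        sigma = np.sqrt(1.0 / np.sum(w))   # uncertainty of the combined estimate
        return area, sigma
    ```

    A small peak on a large background benefits most: wing channels, where the shape value is tiny and the variance blows up, barely influence the result.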

  1. Effect of duct geometry on Wells turbine performance

    International Nuclear Information System (INIS)

    Shaaban, S.; Abdel Hafiz, A.

    2012-01-01

    Highlights: ► A Wells turbine duct design in the form of a venturi duct is proposed and investigated. ► The optimum duct geometry is identified. ► Up to 14% increase in turbine power can be achieved using the optimized duct geometry. ► Up to 9% improvement in turbine efficiency is attained by optimizing the turbine duct geometry. ► The optimized duct geometry results in a tangible delay of the turbine stalling point. - Abstract: Wells turbines can represent an important source of renewable energy for many countries. An essential disadvantage of Wells turbines is their low aerodynamic efficiency and consequently low power output. In order to enhance Wells turbine performance, the present research work proposes the use of a symmetrical duct in the form of a venturi tube with the turbine rotor located at the throat. The effects of duct area ratio and duct angle are investigated in order to optimize Wells turbine performance. The turbine performance is numerically investigated by solving the steady 3D incompressible Reynolds-Averaged Navier–Stokes (RANS) equations. A substantial improvement in turbine performance is achieved by optimizing the duct geometry. Increasing both the duct area ratio and the duct angle increases the acceleration and deceleration upstream and downstream of the rotor, respectively. The accelerating flow with thinner boundary layer upstream of the rotor reduces flow separation on the rotor suction side. The downstream diffuser reduces the interaction between the tip leakage flow and the blade suction side. Up to 14% increase in turbine power and 9% increase in turbine efficiency are achieved by optimizing the duct geometry. On the other hand, a tangible delay of the turbine stall point is also detected.

  2. Heuristic errors in clinical reasoning.

    Science.gov (United States)

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed among third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  3. Twistor geometry

    NARCIS (Netherlands)

    van den Broek, P.M.

    1984-01-01

    The aim of this paper is to give a detailed exposition of the relation between the geometry of twistor space and the geometry of Minkowski space. The paper has a didactical purpose; no use has been made of differential geometry and cohomology.

  4. Physical predictions from lattice QCD. Reducing systematic errors

    International Nuclear Information System (INIS)

    Pittori, C.

    1994-01-01

    Some recent developments in the theoretical understanding of lattice quantum chromodynamics and of its possible sources of systematic errors are reported, and a review of some of the latest Monte Carlo results for light-quark phenomenology is presented. A very general introduction to quantum field theory on a discrete spacetime lattice is given, and the Monte Carlo methods that allow one to compute many interesting physical quantities in the non-perturbative domain of strong interactions are illustrated. (author). 17 refs., 3 figs., 3 tabs

  5. Geometry

    Indian Academy of Sciences (India)

    In the previous article we looked at the origins of synthetic and analytic geometry. More practically minded people, the builders and navigators, were studying two other aspects of geometry: trigonometry and integral calculus. These are actually ...

  6. Learning a locomotor task: with or without errors?

    Science.gov (United States)

    Marchal-Crespo, Laura; Schneider, Jasmin; Jaeger, Lukas; Riener, Robert

    2014-03-04

    Robotic haptic guidance is the most commonly used robotic training strategy to reduce performance errors while training. However, research on motor learning has emphasized that errors are a fundamental neural signal that drives motor adaptation. Thus, researchers have proposed robotic therapy algorithms that amplify movement errors rather than decrease them. However, to date, no study has analyzed with precision which training strategy is the most appropriate for learning an especially simple task. In this study, the impact of robotic training strategies that amplify or reduce errors on muscle activation and motor learning of a simple locomotor task was investigated in twenty-two healthy subjects. The experiment was conducted with the MAgnetic Resonance COmpatible Stepper (MARCOS), a special robotic device developed for investigations in the MR scanner. The robot moved the dominant leg passively and the subject was requested to actively synchronize the non-dominant leg to achieve an alternating stepping-like movement. Learning with four different training strategies that reduce or amplify errors was evaluated: (i) Haptic guidance: errors were eliminated by passively moving the limbs; (ii) No guidance: no robot disturbances were presented; (iii) Error amplification: existing errors were amplified with repulsive forces; (iv) Noise disturbance: errors were evoked intentionally with a randomly varying force disturbance on top of the no-guidance strategy. Additionally, the activation of four lower limb muscles was measured by means of surface electromyography (EMG). Strategies that reduce or do not amplify errors limit muscle activation during training and result in poor learning gains. Adding random disturbing forces during training seems to increase attention, and therefore improve motor learning. Error amplification seems to be the most suitable strategy for initially less skilled subjects, perhaps because subjects could better detect their errors and correct them.

  7. Development of 'SKYSHINE-CG' code. A line-beam method code equipped with combinatorial geometry routine

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, Takahiro; Ochiai, Katsuharu [Plant and System Planning Department, Toshiba Corporation, Yokohama, Kanagawa (Japan); Uematsu, Mikio; Hayashida, Yoshihisa [Department of Nuclear Engineering, Toshiba Engineering Corporation, Yokohama, Kanagawa (Japan)

    2000-03-01

    A boiling water reactor (BWR) plant has a single loop coolant system, in which main steam generated in the reactor core proceeds directly into turbines. Consequently, radioactive {sup 16}N (6.2 MeV photon emitter) contained in the steam contributes to gamma-ray skyshine dose in the vicinity of the BWR plant. The skyshine dose analysis is generally performed with the line-beam method code SKYSHINE, in which calculational geometry consists of a rectangular turbine building and a set of isotropic point sources corresponding to an actual distribution of {sup 16}N sources. For the purpose of upgrading calculational accuracy, the SKYSHINE-CG code has been developed by incorporating the combinatorial geometry (CG) routine into the SKYSHINE code, so that shielding effect of in-building equipment can be properly considered using a three-dimensional model composed of boxes, cylinders, spheres, etc. Skyshine dose rate around a 500 MWe BWR plant was calculated with both SKYSHINE and SKYSHINE-CG codes, and the calculated results were compared with measured data obtained with a NaI(Tl) scintillation detector. The C/E values for SKYSHINE-CG calculation were scattered around 4.0, whereas the ones for SKYSHINE calculation were as large as 6.0. Calculational error was found to be reduced by adopting three-dimensional model based on the combinatorial geometry method. (author)
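
    The combinatorial geometry idea, regions built as boolean combinations of simple bodies, can be illustrated with point-membership tests. This sketch mirrors the concept only; it is not the actual CG routine or its input format, and the building model is hypothetical.

    ```python
    import numpy as np

    # Bodies are simple closed primitives; a region is a boolean
    # combination of bodies (here, a union).

    def in_sphere(p, center, r):
        return bool(np.linalg.norm(np.asarray(p, dtype=float) - center) <= r)

    def in_box(p, lo, hi):
        p = np.asarray(p, dtype=float)
        return bool(np.all(p >= lo) and np.all(p <= hi))

    def in_region(p, bodies):
        """True if point p lies in the union of the given (test, args) bodies."""
        return any(test(p, *args) for test, args in bodies)

    # Hypothetical building model: a box with a spherical dome on its roof.
    building = [
        (in_box, (np.zeros(3), np.array([40.0, 30.0, 20.0]))),
        (in_sphere, (np.array([20.0, 15.0, 20.0]), 5.0)),
    ]
    ```

    A line-beam skyshine code can then test each ray segment against such regions to decide which shielding material it traverses.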

  8. Computation of Surface Laplacian for tri-polar ring electrodes on high-density realistic geometry head model.

    Science.gov (United States)

    Junwei Ma; Han Yuan; Sunderam, Sridhar; Besio, Walter; Lei Ding

    2017-07-01

    Neural activity inside the human brain generates electrical signals that can be detected on the scalp. Electroencephalography (EEG) is one of the most widely utilized techniques helping physicians and researchers diagnose and understand various brain diseases. By its nature, EEG has very high temporal resolution but poor spatial resolution. To achieve higher spatial resolution, a novel tri-polar concentric ring electrode (TCRE) has been developed to directly measure the Surface Laplacian (SL). The objective of the present study is to accurately calculate the SL for TCREs based on a realistic-geometry head model. A locally dense mesh was proposed to represent the head surface, where the locally dense parts match the small structural components of the TCRE. Areas without dense mesh were used to reduce the computational load. We conducted computer simulations to evaluate the performance of the proposed mesh and assessed possible numerical errors relative to a low-density model. Finally, with the achieved accuracy, we present the computed forward lead field of the SL for TCREs for the first time in a realistic-geometry head model and demonstrate that it has better spatial resolution than SL computed from classic EEG recordings.

  9. The effect of biomechanical variables on force sensitive resistor error: Implications for calibration and improved accuracy.

    Science.gov (United States)

    Schofield, Jonathon S; Evans, Katherine R; Hebert, Jacqueline S; Marasco, Paul D; Carey, Jason P

    2016-03-21

    Force Sensitive Resistors (FSRs) are commercially available thin film polymer sensors commonly employed in a multitude of biomechanical measurement environments. Reasons for such widespread usage lie in the versatility, small profile, and low cost of these sensors. Yet FSRs have limitations. It is commonly accepted that temperature, curvature, and biological tissue compliance may impact sensor conductance and the resulting force readings. The effect of these variables, and the degree to which they interact, had yet to be comprehensively investigated and quantified. This work systematically assesses varying levels of temperature, sensor curvature, and surface compliance using a full factorial design-of-experiments approach. Three models of Interlink FSRs were evaluated. Calibration equations under 12 unique combinations of temperature, curvature, and compliance were determined for each sensor. Root mean squared error, mean absolute error, and maximum error were quantified as measures of the impact these thermo/mechanical factors have on sensor performance. It was found that all three variables have the potential to affect FSR calibration curves. The FSR model and corresponding sensor geometry are sensitive to these three mechanical factors at varying levels. Experimental results suggest that reducing sensor error requires calibration of each sensor in an environment as close to its intended use as possible, and if multiple FSRs are used in a system, they must be calibrated independently. Copyright © 2016 Elsevier Ltd. All rights reserved.
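
    The three error measures quantified in the study are straightforward to compute once a calibration curve has been applied to readings collected under a given temperature/curvature/compliance condition. A minimal sketch:

    ```python
    import numpy as np

    def calibration_errors(true_force, predicted_force):
        """RMSE, mean absolute error, and maximum absolute error of an FSR
        calibration evaluated under one temperature/curvature/compliance
        condition."""
        e = np.asarray(predicted_force, dtype=float) - np.asarray(true_force, dtype=float)
        rmse = float(np.sqrt(np.mean(e ** 2)))
        mae = float(np.mean(np.abs(e)))
        max_err = float(np.max(np.abs(e)))
        return rmse, mae, max_err
    ```

    Comparing these metrics across the 12 factor combinations is what reveals which thermo/mechanical conditions degrade a given sensor model most.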

  10. Molecular geometry

    CERN Document Server

    Rodger, Alison

    1995-01-01

    Molecular Geometry discusses topics relevant to the arrangement of atoms. The book is comprised of seven chapters that tackle several areas of molecular geometry. Chapter 1 reviews the definition and determination of molecular geometry, while Chapter 2 discusses the unified view of stereochemistry and stereochemical changes. Chapter 3 covers the geometry of molecules of second row atoms, and Chapter 4 deals with the main group elements beyond the second row. The book also talks about the complexes of transition metals and f-block elements, and then covers the organometallic compounds and trans

  11. Social aspects of clinical errors.

    Science.gov (United States)

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording, and policy development to enhance quality of service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed, whilst the major errors resulting in damage and death to patients alarm both professionals and the public, with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review healthcare professionals' strategies for managing such errors.

  12. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Science.gov (United States)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2018-03-01

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with a parameter space of up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
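Global sensitivity analysis of the kind described above typically ranks inputs by their Sobol indices. A minimal pick-freeze Monte Carlo estimator is sketched below on a toy additive model rather than the scramjet simulations; the estimator form, sample sizes, and test function are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sobol_first_order(f, d, n=100_000, seed=0):
    """Monte Carlo first-order Sobol indices (Saltelli-style pick-freeze).
    f maps an (n, d) array of U(0,1) inputs to n scalar outputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]          # vary only input i between the two samples
        S[i] = np.mean(fB * (f(AB) - fA)) / var
    return S

# Additive test model y = x1 + 2*x2: exact first-order indices are 0.2 and 0.8.
f = lambda X: X[:, 0] + 2.0 * X[:, 1]
print(sobol_first_order(f, d=2))  # ≈ [0.2, 0.8]
```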

  13. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Energy Technology Data Exchange (ETDEWEB)

    Huan, Xun [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Geraci, Gianluca [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vane, Zachary P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Lacaze, Guilhem [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Oefelein, Joseph C. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2018-02-09

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  14. MODELING OF MANUFACTURING ERRORS FOR PIN-GEAR ELEMENTS OF PLANETARY GEARBOX

    Directory of Open Access Journals (Sweden)

    Ivan M. Egorov

    2014-11-01

    Full Text Available Theoretical background for the calculation of k-h-v type cycloid reducers was developed relatively long ago. Recently, however, cycloid reducer design has again attracted heightened attention, because such devices are used in many complex engineering systems, particularly in mechatronic and robotic systems. Advanced technological capabilities for manufacturing such reducers now make it possible to realize their essential features: high efficiency, high gear ratio, kinematic accuracy and smooth motion. An adequate mathematical model makes it possible to adjust the kinematic accuracy of the reducer by rational selection of manufacturing tolerances for its parts, and thus to automate the design process for cycloid reducers with account of various factors, including technological ones. A mathematical model and technique have been developed for modeling the kinematic error of the reducer with account of multiple factors, including manufacturing errors. The errors are treated in a way convenient for predicting kinematic accuracy early at the manufacturing stage, from the results of measuring the reducer parts on coordinate measuring machines. In the model, the wheel manufacturing errors are characterized by the eccentricity and radius deviation of the pin-tooth centers circle, and by the deviation between the pin-tooth axis positions and the centers circle. The satellite manufacturing errors are characterized by the satellite eccentricity deviation and the satellite rim eccentricity. Due to collinearity, the pin-tooth and pin-tooth hole diameter errors and the satellite tooth profile errors for a designated contact point are integrated into one deviation. Software implementation of the model makes it possible to estimate the influence of these errors on satellite rotation angle error and

  15. Symplectic and Poisson Geometry in Interaction with Analysis, Algebra and Topology & Symplectic Geometry, Noncommutative Geometry and Physics

    CERN Document Server

    Eliashberg, Yakov; Maeda, Yoshiaki; Symplectic, Poisson, and Noncommutative geometry

    2014-01-01

    Symplectic geometry originated in physics, but it has flourished as an independent subject in mathematics, together with its offspring, symplectic topology. Symplectic methods have even been applied back to mathematical physics. Noncommutative geometry has developed an alternative mathematical quantization scheme based on a geometric approach to operator algebras. Deformation quantization, a blend of symplectic methods and noncommutative geometry, approaches quantum mechanics from a more algebraic viewpoint, as it addresses quantization as a deformation of Poisson structures. This volume contains seven chapters based on lectures given by invited speakers at two May 2010 workshops held at the Mathematical Sciences Research Institute: Symplectic and Poisson Geometry in Interaction with Analysis, Algebra and Topology (honoring Alan Weinstein, one of the key figures in the field) and Symplectic Geometry, Noncommutative Geometry and Physics. The chapters include presentations of previously unpublished results and ...

  16. A novel ULA-based geometry for improving AOA estimation

    Science.gov (United States)

    Shirvani-Moghaddam, Shahriar; Akbari, Farida

    2011-12-01

    Due to its relatively simple implementation, the Uniform Linear Array (ULA) is a popular geometry for array signal processing. Despite this advantage, it does not have uniform performance in all directions, and Angle of Arrival (AOA) estimation performance degrades considerably for angles close to endfire. In this article, a new configuration is proposed which can solve this problem. The Proposed Array (PA) configuration adds two elements to the ULA, at the top and bottom of the array axis. By extending the signal model of the ULA to the new ULA-based array, AOA estimation performance is compared in terms of angular accuracy and resolution threshold using two well-known AOA estimation algorithms, MUSIC and MVDR. In both algorithms, the Root Mean Square Error (RMSE) of the detected angles decreases as the input Signal to Noise Ratio (SNR) increases. Simulation results show that the proposed array geometry provides uniformly accurate performance and higher resolution at middle angles as well as border ones. The PA also exhibits lower RMSE than the ULA in endfire directions. Therefore, the proposed array offers better performance for the border angles with almost the same array size and simplicity in both MUSIC and MVDR algorithms with respect to the conventional ULA. In addition, the AOA estimation performance of the PA geometry is compared with two well-known 2D-array geometries, L-shape and V-shape, and acceptable results are obtained with equivalent or lower complexity.
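The MUSIC algorithm used in the comparison can be sketched for a plain ULA: form the sample covariance, split off the noise subspace, and scan a steering-vector grid for pseudospectrum peaks. The simulation parameters below (8 elements, half-wavelength spacing, one source at 20°) are illustrative assumptions, not those of the article.

```python
import numpy as np

def music_aoa(snapshots, n_sources, d_over_lambda=0.5):
    """MUSIC angle-of-arrival estimate for an M-element ULA.
    snapshots: (M, T) complex array of sensor outputs."""
    M, T = snapshots.shape
    R = snapshots @ snapshots.conj().T / T          # sample covariance matrix
    _, V = np.linalg.eigh(R)                        # eigenvalues in ascending order
    En = V[:, : M - n_sources]                      # noise-subspace eigenvectors
    grid = np.linspace(-90.0, 90.0, 1801)           # 0.1-degree scan grid
    m = np.arange(M)[:, None]
    A = np.exp(-2j * np.pi * d_over_lambda * m * np.sin(np.deg2rad(grid)))
    spectrum = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return grid[np.argmax(spectrum)]

# One narrowband source at 20 degrees on an 8-element half-wavelength ULA.
rng = np.random.default_rng(1)
M, T, theta = 8, 200, 20.0
steer = np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(np.deg2rad(theta)))
signal = rng.standard_normal(T) + 1j * rng.standard_normal(T)
noise = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
X = np.outer(steer, signal) + noise
print(music_aoa(X, n_sources=1))  # close to 20.0
```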

  17. Replication infidelity via a mismatch with Watson-Crick geometry.

    Science.gov (United States)

    Bebenek, Katarzyna; Pedersen, Lars C; Kunkel, Thomas A

    2011-02-01

    In describing the DNA double helix, Watson and Crick suggested that "spontaneous mutation may be due to a base occasionally occurring in one of its less likely tautomeric forms." Indeed, among many mispairing possibilities, either tautomerization or ionization of bases might allow a DNA polymerase to insert a mismatch with correct Watson-Crick geometry. However, despite substantial progress in understanding the structural basis of error prevention during polymerization, no DNA polymerase has yet been shown to form a natural base-base mismatch with Watson-Crick-like geometry. Here we provide such evidence, in the form of a crystal structure of a human DNA polymerase λ variant poised to misinsert dGTP opposite a template T. All atoms needed for catalysis are present at the active site and in positions that overlay with those for a correct base pair. The mismatch has Watson-Crick geometry consistent with a tautomeric or ionized base pair, with the pH dependence of misinsertion consistent with the latter. The results support the original idea that a base substitution can originate from a mismatch having Watson-Crick geometry, and they suggest a common catalytic mechanism for inserting a correct and an incorrect nucleotide. A second structure indicates that after misinsertion, the now primer-terminal G•T mismatch is also poised for catalysis but in the wobble conformation seen in other studies, indicating the dynamic nature of the pathway required to create a mismatch in fully duplex DNA.

  18. Replication infidelity via a mismatch with Watson–Crick geometry

    Science.gov (United States)

    Bebenek, Katarzyna; Pedersen, Lars C.; Kunkel, Thomas A.

    2011-01-01

    In describing the DNA double helix, Watson and Crick suggested that “spontaneous mutation may be due to a base occasionally occurring in one of its less likely tautomeric forms.” Indeed, among many mispairing possibilities, either tautomerization or ionization of bases might allow a DNA polymerase to insert a mismatch with correct Watson–Crick geometry. However, despite substantial progress in understanding the structural basis of error prevention during polymerization, no DNA polymerase has yet been shown to form a natural base–base mismatch with Watson–Crick-like geometry. Here we provide such evidence, in the form of a crystal structure of a human DNA polymerase λ variant poised to misinsert dGTP opposite a template T. All atoms needed for catalysis are present at the active site and in positions that overlay with those for a correct base pair. The mismatch has Watson–Crick geometry consistent with a tautomeric or ionized base pair, with the pH dependence of misinsertion consistent with the latter. The results support the original idea that a base substitution can originate from a mismatch having Watson–Crick geometry, and they suggest a common catalytic mechanism for inserting a correct and an incorrect nucleotide. A second structure indicates that after misinsertion, the now primer-terminal G•T mismatch is also poised for catalysis but in the wobble conformation seen in other studies, indicating the dynamic nature of the pathway required to create a mismatch in fully duplex DNA. PMID:21233421

  19. A novel ULA-based geometry for improving AOA estimation

    Directory of Open Access Journals (Sweden)

    Akbari Farida

    2011-01-01

    Full Text Available Due to relatively simple implementation, Uniform Linear Array (ULA) is a popular geometry for array signal processing. Despite this advantage, it does not have a uniform performance in all directions and Angle of Arrival (AOA) estimation performance degrades considerably in the angles close to endfire. In this article, a new configuration is proposed which can solve this problem. Proposed Array (PA) configuration adds two elements to the ULA in top and bottom of the array axis. By extending signal model of the ULA to the new proposed ULA-based array, AOA estimation performance has been compared in terms of angular accuracy and resolution threshold through two well-known AOA estimation algorithms, MUSIC and MVDR. In both algorithms, Root Mean Square Error (RMSE) of the detected angles descends as the input Signal to Noise Ratio (SNR) increases. Simulation results show that the proposed array geometry introduces uniform accurate performance and higher resolution in middle angles as well as border ones. The PA also presents less RMSE than the ULA in endfire directions. Therefore, the proposed array offers better performance for the border angles with almost the same array size and simplicity in both MUSIC and MVDR algorithms with respect to the conventional ULA. In addition, AOA estimation performance of the PA geometry is compared with two well-known 2D-array geometries: L-shape and V-shape, and acceptable results are obtained with equivalent or lower complexity.

  20. Evaluation of a cone beam computed tomography geometry for image guided small animal irradiation

    International Nuclear Information System (INIS)

    Yang, Yidong; Armour, Michael; Wang, Ken Kang-Hsin; Gandhi, Nishant; Wong, John; Iordachita, Iulian; Siewerdsen, Jeffrey

    2015-01-01

    The conventional imaging geometry for small animal cone beam computed tomography (CBCT) is one in which a detector panel rotates around the head-to-tail axis of the imaged animal ('tubular' geometry). Another unusual but possible imaging geometry is one in which the detector panel rotates around the anterior-to-posterior axis of the animal ('pancake' geometry). The small animal radiation research platform developed at Johns Hopkins University employs the pancake geometry, where a prone-positioned animal is rotated horizontally between an x-ray source and detector panel. This study assesses the CBCT image quality in the pancake geometry and investigates potential methods for improvement. We compared CBCT images acquired in the pancake geometry with those acquired in the tubular geometry when the phantom/animal was placed upright, simulating the conventional CBCT geometry. Results showed that signal-to-noise and contrast-to-noise ratios in the pancake geometry were reduced in comparison to the tubular geometry at the same dose level, but the overall spatial resolution within the transverse plane of the imaged cylinder/animal was better in the pancake geometry. A modest exposure increase of up to two-fold in the pancake geometry can improve image quality to a level close to the tubular geometry. Image quality can also be improved by inclining the animal, which reduces streak artifacts caused by bony structures. The major factor behind the inferior image quality in the pancake geometry is the elevated beam attenuation along the long axis of the phantom/animal and the consequently increased scatter-to-primary ratio in that orientation. Notwithstanding, the image quality in the pancake-geometry CBCT is adequate to support image-guided animal positioning, while providing the unique advantages of non-coplanar and multiple-mouse irradiation. This study also provides useful knowledge about the image quality in the two very different imaging geometries, i.e. pancake and tubular geometry.
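The signal-to-noise and contrast-to-noise comparisons above follow standard region-of-interest definitions; a minimal sketch with hypothetical pixel statistics (not the study's data):

```python
import numpy as np

def snr_cnr(roi, background):
    """Region-of-interest image-quality metrics: SNR of the ROI and its
    contrast-to-noise ratio against a background region."""
    roi = np.asarray(roi, dtype=float)
    background = np.asarray(background, dtype=float)
    sigma = background.std()                       # noise estimate
    return roi.mean() / sigma, abs(roi.mean() - background.mean()) / sigma

# Hypothetical pixel samples: a uniform insert vs. background in one slice.
rng = np.random.default_rng(0)
roi = rng.normal(120.0, 5.0, 1000)   # insert: mean value 120, noise sigma 5
bg = rng.normal(100.0, 5.0, 1000)    # background: mean 100, same noise
snr, cnr = snr_cnr(roi, bg)
print(snr, cnr)
```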

  1. Evaluation of a cone beam computed tomography geometry for image guided small animal irradiation.

    Science.gov (United States)

    Yang, Yidong; Armour, Michael; Wang, Ken Kang-Hsin; Gandhi, Nishant; Iordachita, Iulian; Siewerdsen, Jeffrey; Wong, John

    2015-07-07

    The conventional imaging geometry for small animal cone beam computed tomography (CBCT) is one in which a detector panel rotates around the head-to-tail axis of the imaged animal ('tubular' geometry). Another unusual but possible imaging geometry is one in which the detector panel rotates around the anterior-to-posterior axis of the animal ('pancake' geometry). The small animal radiation research platform developed at Johns Hopkins University employs the pancake geometry, where a prone-positioned animal is rotated horizontally between an x-ray source and detector panel. This study assesses the CBCT image quality in the pancake geometry and investigates potential methods for improvement. We compared CBCT images acquired in the pancake geometry with those acquired in the tubular geometry when the phantom/animal was placed upright, simulating the conventional CBCT geometry. Results showed that signal-to-noise and contrast-to-noise ratios in the pancake geometry were reduced in comparison to the tubular geometry at the same dose level, but the overall spatial resolution within the transverse plane of the imaged cylinder/animal was better in the pancake geometry. A modest exposure increase of up to two-fold in the pancake geometry can improve image quality to a level close to the tubular geometry. Image quality can also be improved by inclining the animal, which reduces streak artifacts caused by bony structures. The major factor behind the inferior image quality in the pancake geometry is the elevated beam attenuation along the long axis of the phantom/animal and the consequently increased scatter-to-primary ratio in that orientation. Notwithstanding, the image quality in the pancake-geometry CBCT is adequate to support image-guided animal positioning, while providing the unique advantages of non-coplanar and multiple-mouse irradiation. This study also provides useful knowledge about the image quality in the two very different imaging geometries, i.e. pancake and tubular geometry, respectively.

  2. SILENE and TDT: A code for collision probability calculations in XY geometries

    International Nuclear Information System (INIS)

    Sanchez, R.; Stankovski, Z.

    1993-01-01

    Collision probability methods are routinely used for cell and assembly multigroup transport calculations in core design tasks. Collision probability methods use a specialized tracking routine to compute neutron trajectories within a given geometric object. These trajectories are then used to generate the appropriate collision matrices in as many groups as required. Traditional tracking routines are based on "global" geometric descriptions (such as regular meshes) and are not able to cope with the geometric detail required in actual core calculations. Therefore, users have to modify their geometry in order to match the geometric model accepted by the tracking routine, thus introducing a modeling error whose evaluation requires the use of a "reference" method. Recently, an effort has been made to develop more flexible tracking routines, either by directly adopting Monte Carlo tracking techniques or by coding complicated geometries directly. Among these, the SILENE and TDT package is being developed at the Commissariat à l'Énergie Atomique to provide routine as well as reference calculations in arbitrarily shaped XY geometries. This package combines a direct graphical acquisition system (SILENE) with a node-based collision probability code for XY geometries (TDT).

  3. Error Analysis of Variations on Larsen's Benchmark Problem

    International Nuclear Information System (INIS)

    Azmy, YY

    2001-01-01

    Error norms for three variants of Larsen's benchmark problem are evaluated using three numerical methods for solving the discrete ordinates approximation of the neutron transport equation in multidimensional Cartesian geometry. The three variants of Larsen's test problem differ in the incoming flux boundary conditions: unit incoming flux on the left and bottom edges (Larsen's configuration); unit incoming flux only on the left edge; and unit incoming flux only on the bottom edge. The three methods considered are the Diamond Difference (DD) method and the constant-approximation versions of the Arbitrarily High Order Transport method of the Nodal type (AHOT-N) and of the Characteristic type (AHOT-C). The cell-wise error is computed as the difference between the cell-averaged flux computed by each method and the exact value, and then the L1, L2, and L∞ error norms are calculated. The results of this study demonstrate that while the integral error norms, i.e. L1 and L2, converge to zero with mesh refinement, the pointwise L∞ norm does not, owing to the solution discontinuity across the singular characteristic. Little difference is observed between the error norm behavior of the three methods, in spite of the fact that AHOT-C is locally exact, suggesting that numerical diffusion across the singular characteristic is the major source of error on the global scale. However, AHOT-C attains a given accuracy in a larger fraction of computational cells than DD.
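The norm behavior described above can be illustrated with a toy discontinuous "exact" solution whose numerical approximation is smeared over roughly one cell, a stand-in for numerical diffusion across a singular characteristic; the sigmoid smearing model and mesh sizes are assumptions, not the AHOT/DD codes.

```python
import numpy as np

def error_norms(numerical, exact):
    """Discrete L1, L2 and L-infinity norms of the cell-wise error."""
    e = np.abs(np.asarray(numerical) - np.asarray(exact))
    return e.mean(), np.sqrt(np.mean(e**2)), e.max()

# Step-function exact solution vs. an approximation smeared over ~one cell.
norms = {}
for n in (16, 64, 256):
    h = 1.0 / n
    x = (np.arange(n) + 0.5) * h                       # cell-center coordinates
    exact = np.where(x < 0.5, 0.0, 1.0)                # jump at x = 0.5
    numerical = 1.0 / (1.0 + np.exp(-(x - 0.5) / h))   # smearing width ~ h
    norms[n] = error_norms(numerical, exact)
    print(n, norms[n])
# L1 and L2 shrink with refinement; L-infinity stalls at the cell on the jump.
```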

  4. Errors of isotope conveyor weigher caused by profile variations and shift of material

    International Nuclear Information System (INIS)

    Machaj, B.

    1977-01-01

    Results of investigations of an isotope conveyor weigher operating in transmission geometry, with a long plastic scintillator as the detector, are presented in the paper. The results indicate that errors caused by material shift across the conveyor belt can be decreased by shaping the probe's sensitivity to incident radiation along its axis by means of additional radiation absorbers. The errors caused by material profile variations can be effectively diminished by increasing the photon energy: application of ⁶⁰Co instead of ¹³⁷Cs ensured more than three times lower errors caused by profile variation. Errors caused by vertical movements of the belt with material decrease considerably when a single point source situated in the center of the measuring head is replaced by at least two point sources situated off center, above the edges of the belt. (author)

  5. Morphing the feature-based multi-blocks of normative/healthy vertebral geometries to scoliosis vertebral geometries: development of personalized finite element models.

    Science.gov (United States)

    Hadagali, Prasannaah; Peters, James R; Balasubramanian, Sriram

    2018-03-12

    Personalized Finite Element (FE) models and hexahedral elements are preferred for biomechanical investigations. Feature-based multi-block methods are used to develop anatomically accurate personalized FE models with hexahedral meshes. However, it is tedious to manually construct multi-blocks for a large number of geometries on an individual basis. Mesh-morphing methods mitigate this tedium of meshing each personalized geometry, but they lead to element warping and loss of geometrical data; such issues grow in magnitude when a normative spine FE model is morphed to a scoliosis-affected spinal geometry. The only way to bypass hex-mesh distortion or loss of geometry as a result of morphing is to construct the multi-blocks manually for each individual's scoliosis-affected spine geometry, which is time intensive. A method to semi-automate the construction of multi-blocks on the geometry of scoliosis vertebrae from the existing multi-blocks of normative vertebrae is demonstrated in this paper. High-quality hexahedral elements were generated on the scoliosis vertebrae from the morphed multi-blocks of normative vertebrae. Constructing the multi-blocks took 3 months for the normative spine and less than a day for the scoliosis spine. The effort required to construct multi-blocks on personalized scoliosis spinal geometries is thus significantly reduced by morphing existing multi-blocks.

  6. Arithmetic noncommutative geometry

    CERN Document Server

    Marcolli, Matilde

    2005-01-01

    Arithmetic noncommutative geometry denotes the use of ideas and tools from the field of noncommutative geometry, to address questions and reinterpret in a new perspective results and constructions from number theory and arithmetic algebraic geometry. This general philosophy is applied to the geometry and arithmetic of modular curves and to the fibers at archimedean places of arithmetic surfaces and varieties. The main reason why noncommutative geometry can be expected to say something about topics of arithmetic interest lies in the fact that it provides the right framework in which the tools of geometry continue to make sense on spaces that are very singular and apparently very far from the world of algebraic varieties. This provides a way of refining the boundary structure of certain classes of spaces that arise in the context of arithmetic geometry, such as moduli spaces (of which modular curves are the simplest case) or arithmetic varieties (completed by suitable "fibers at infinity"), by adding boundaries...

  7. An error bound estimate and convergence of the Nodal-LTS_N solution in a rectangle

    International Nuclear Information System (INIS)

    Hauser, Eliete Biasotto; Pazos, Ruben Panta; Tullio de Vilhena, Marco

    2005-01-01

    In this work, we report the mathematical analysis concerning the error bound estimate and convergence of the Nodal-LTS_N solution in a rectangle. To that end, we present an efficient algorithm, called the LTS_N 2D-Diag solution, for Cartesian geometry.

  8. Improved Landau gauge fixing and discretisation errors

    International Nuclear Information System (INIS)

    Bonnet, F.D.R.; Bowman, P.O.; Leinweber, D.B.; Richards, D.G.; Williams, A.G.

    2000-01-01

    Lattice discretisation errors in the Landau gauge condition are examined. An improved gauge fixing algorithm in which O(a²) errors are removed is presented. O(a²) improvement of the gauge fixing condition displays the secondary benefit of reducing the size of higher-order errors. These results emphasise the importance of implementing an improved gauge fixing condition.

  9. Reducing Monte Carlo error in the Bayesian estimation of risk ratios using log-binomial regression models.

    Science.gov (United States)

    Salmerón, Diego; Cano, Juan A; Chirlaque, María D

    2015-08-30

    In cohort studies, binary outcomes are very often analyzed by logistic regression. However, it is well known that when the goal is to estimate a risk ratio, logistic regression is inappropriate if the outcome is common. In these cases, a log-binomial regression model is preferable. On the other hand, estimation of the regression coefficients of the log-binomial model is difficult owing to the constraints that must be imposed on these coefficients. Bayesian methods allow a straightforward approach for log-binomial regression models and produce smaller mean squared errors in the estimation of risk ratios than the frequentist methods, and the posterior inferences can be obtained using the software WinBUGS. However, the Markov chain Monte Carlo methods implemented in WinBUGS can lead to large Monte Carlo errors in the approximations to the posterior inferences because they produce correlated simulations, and the accuracy of the approximations is inversely related to this correlation. To reduce correlation and to improve accuracy, we propose a reparameterization based on a Poisson model and a sampling algorithm coded in R. Copyright © 2015 John Wiley & Sons, Ltd.
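A frequentist cousin of the Poisson-based idea above is the "modified Poisson" trick for risk ratios: fit a log-link Poisson model to the binary outcome, so that exp(β) estimates the risk ratio directly without the log-binomial constraints. The pure-NumPy IRLS sketch below uses a simulated cohort and is not the authors' WinBUGS/R code; the sample size and risks are assumptions.

```python
import numpy as np

def poisson_irls(X, y, n_iter=50):
    """Log-link Poisson regression fitted by iteratively reweighted least
    squares. Applied to a binary outcome ("modified Poisson"), exp(beta_j)
    estimates the risk ratio associated with covariate j."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        # Newton step: (X' W X)^-1 X' (y - mu), with W = diag(mu)
        beta += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    return beta

# Simulated cohort with a common outcome, where the true risk ratio is 2.0.
rng = np.random.default_rng(0)
n = 50_000
exposed = rng.integers(0, 2, n)
risk = np.where(exposed == 1, 0.30, 0.15)       # common outcome: OR != RR here
y = (rng.random(n) < risk).astype(float)
X = np.column_stack([np.ones(n), exposed.astype(float)])
print(np.exp(poisson_irls(X, y)[1]))  # estimated risk ratio, close to 2.0
```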

  10. Physician Preferences to Communicate Neuropsychological Results: Comparison of Qualitative Descriptors and a Proposal to Reduce Communication Errors.

    Science.gov (United States)

    Schoenberg, Mike R; Osborn, Katie E; Mahone, E Mark; Feigon, Maia; Roth, Robert M; Pliskin, Neil H

    2017-11-08

    Errors in communication are a leading cause of medical errors. A potential source of error in communicating neuropsychological results is confusion over the qualitative descriptors used to describe standardized neuropsychological data. This study sought to evaluate the extent to which medical consumers of neuropsychological assessments believed that results/findings were not clearly communicated. In addition, preference data for a variety of qualitative descriptors commonly used to communicate normative neuropsychological test scores were obtained. Preference data were obtained for five qualitative descriptor systems as part of a larger 36-item internet-based survey of physician satisfaction with neuropsychological services. A new qualitative descriptor system, termed the Simplified Qualitative Classification System (Q-Simple), was proposed to reduce the potential for communication errors using seven terms: very superior, superior, high average, average, low average, borderline, and abnormal/impaired. A non-random convenience sample of 605 clinicians identified from four United States academic medical centers from January 1, 2015 through January 7, 2016 was invited to participate. A total of 182 surveys were completed. A minority of clinicians (12.5%) indicated that neuropsychological study results were not clearly communicated. When communicating neuropsychological standardized scores, the two most preferred qualitative descriptor systems were that of Heaton and colleagues (Comprehensive norms for an extended Halstead-Reitan battery: Demographic corrections, research findings, and clinical applications. Odessa, TX: Psychological Assessment Resources) (26%) and the newly proposed Q-Simple system (22%). Initial findings highlight the need to improve and standardize the communication of neuropsychological results. These data offer initial guidance for preferred terms to communicate test results and form a foundation for more

  11. SU-E-T-558: Monte Carlo Photon Transport Simulations On GPU with Quadric Geometry

    International Nuclear Information System (INIS)

    Chi, Y; Tian, Z; Jiang, S; Jia, X

    2015-01-01

    Purpose: Monte Carlo simulation on GPU has experienced rapid advancements over the past few years, and tremendous accelerations have been achieved. Yet existing packages were developed only in voxelized geometry. In some applications, e.g. radioactive seed modeling, simulations in more complicated geometry are needed. This abstract reports our initial efforts towards developing a quadric geometry module aimed at expanding the application scope of GPU-based MC simulations. Methods: We defined the simulation geometry as consisting of a number of homogeneous bodies, each specified by its material composition and limiting surfaces characterized by quadric functions. A tree data structure was utilized to define the geometric relationships between different bodies. We modified our GPU-based photon MC transport package to incorporate this geometry. Specifically, geometry parameters were loaded into the GPU's shared memory for fast access. Geometry functions were rewritten to enable the identification of the body that contains the current particle location via a fast searching algorithm based on the tree data structure. Results: We tested our package in an example problem of HDR-brachytherapy dose calculation for a shielded cylinder. The dose under the quadric geometry and that under the voxelized geometry agreed in 94.2% of total voxels within the 20% isodose line based on a statistical t-test (95% confidence level), where the reference dose was defined to be the one at 0.5 cm away from the cylinder surface. It took 243 sec to transport 100 million source photons under this quadric geometry on an NVidia Titan GPU card. Compared with the simulation time of 99.6 sec in the voxelized geometry, the quadric geometry reduced efficiency due to the complicated geometry-related computations. Conclusion: Our GPU-based MC package has been extended to support photon transport simulation in quadric geometry. Satisfactory accuracy was observed with a reduced efficiency. Developments for charged

  12. SU-E-T-558: Monte Carlo Photon Transport Simulations On GPU with Quadric Geometry

    Energy Technology Data Exchange (ETDEWEB)

    Chi, Y; Tian, Z; Jiang, S; Jia, X [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)

    2015-06-15

    Purpose: Monte Carlo simulation on GPU has experienced rapid advancements over the past few years, and tremendous accelerations have been achieved. Yet existing packages were developed only in voxelized geometry. In some applications, e.g. radioactive seed modeling, simulations in more complicated geometry are needed. This abstract reports our initial efforts towards developing a quadric geometry module aimed at expanding the application scope of GPU-based MC simulations. Methods: We defined the simulation geometry as consisting of a number of homogeneous bodies, each specified by its material composition and limiting surfaces characterized by quadric functions. A tree data structure was utilized to define the geometric relationships between different bodies. We modified our GPU-based photon MC transport package to incorporate this geometry. Specifically, geometry parameters were loaded into the GPU's shared memory for fast access. Geometry functions were rewritten to enable the identification of the body that contains the current particle location via a fast searching algorithm based on the tree data structure. Results: We tested our package in an example problem of HDR-brachytherapy dose calculation for a shielded cylinder. The dose under the quadric geometry and that under the voxelized geometry agreed in 94.2% of total voxels within the 20% isodose line based on a statistical t-test (95% confidence level), where the reference dose was defined to be the one at 0.5 cm away from the cylinder surface. It took 243 sec to transport 100 million source photons under this quadric geometry on an NVidia Titan GPU card. Compared with the simulation time of 99.6 sec in the voxelized geometry, the quadric geometry reduced efficiency due to the complicated geometry-related computations. Conclusion: Our GPU-based MC package has been extended to support photon transport simulation in quadric geometry. Satisfactory accuracy was observed with a reduced efficiency. Developments for charged
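
The body-search step described in this record — testing which side of each quadric limiting surface a point lies on, then walking the body tree to find the innermost containing body — can be sketched in plain Python. This is a hypothetical, simplified stand-in for the GPU kernel; the class names and the two-body example are invented for illustration:

```python
class Quadric:
    """Quadric surface f(x,y,z) = Axx*x^2 + Ayy*y^2 + Azz*z^2 + Bx*x + By*y + Bz*z + C."""
    def __init__(self, Axx=0, Ayy=0, Azz=0, Bx=0, By=0, Bz=0, C=0):
        self.c = (Axx, Ayy, Azz, Bx, By, Bz, C)

    def side(self, p):
        """Return -1 if the point is on the negative side of the surface, else +1."""
        x, y, z = p
        Axx, Ayy, Azz, Bx, By, Bz, C = self.c
        f = Axx*x*x + Ayy*y*y + Azz*z*z + Bx*x + By*y + Bz*z + C
        return -1 if f < 0 else 1

class Body:
    """Homogeneous body bounded by (quadric, required_side) pairs; children are nested inside."""
    def __init__(self, name, surfaces, children=()):
        self.name, self.surfaces, self.children = name, surfaces, list(children)

    def contains(self, p):
        return all(q.side(p) == s for q, s in self.surfaces)

def locate(body, p):
    """Depth-first search of the body tree for the innermost body containing p."""
    if not body.contains(p):
        return None
    for child in body.children:
        hit = locate(child, p)
        if hit is not None:
            return hit
    return body

# Example: a unit-radius cylinder (x^2 + y^2 - 1 < 0) nested inside a sphere of radius 3.
cylinder = Body("cylinder", [(Quadric(Axx=1, Ayy=1, C=-1), -1)])
sphere = Body("sphere", [(Quadric(Axx=1, Ayy=1, Azz=1, C=-9), -1)], [cylinder])

print(locate(sphere, (0.5, 0.0, 0.0)).name)   # point inside the cylinder
print(locate(sphere, (2.0, 0.0, 0.0)).name)   # point in the sphere, outside the cylinder
```

On a GPU the same sign tests would run per particle with the coefficients held in shared memory, as the abstract describes.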

  13. A Physically-Based Geometry Model for Transport Distance Estimation of Rainfall-Eroded Soil Sediment

    Directory of Open Access Journals (Sweden)

    Qian-Gui Zhang

    2016-01-01

    Full Text Available Estimations of rainfall-induced soil erosion are mostly derived from the weight of sediment measured in natural runoff. The transport distance of eroded soil is important for evaluating landscape evolution but is difficult to estimate, mainly because it cannot be linked directly to the eroded sediment weight. The volume of eroded soil is easier to calculate visually using popular imaging tools, which can aid in estimating the transport distance of eroded soil through geometry relationships. In this study, we present a straightforward geometry model to predict the maximum sediment transport distance incurred by rainfall events of various intensities and durations. In order to verify our geometry prediction model, a series of experiments measuring sediment volume is reported. The results show that cumulative rainfall has a linear relationship with the total volume of eroded soil. The geometry model can accurately estimate the maximum transport distance of eroded soil by cumulative rainfall, with a low root-mean-square error (4.7–4.8) and a strong linear correlation (0.74–0.86).
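
The two verification metrics quoted in this record — root-mean-square error and linear correlation between predicted and observed values — can be reproduced with a few lines of Python; the distance values below are invented placeholders, not the paper's data:

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def pearson_r(xs, ys):
    """Pearson linear correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical predicted vs. observed maximum transport distances (cm).
predicted = [12.0, 25.5, 41.0, 58.0, 77.5]
observed  = [10.8, 27.0, 38.5, 61.2, 74.0]
print(round(rmse(predicted, observed), 2), round(pearson_r(predicted, observed), 3))
```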

  14. Higher geometry an introduction to advanced methods in analytic geometry

    CERN Document Server

    Woods, Frederick S

    2005-01-01

    For students of mathematics with a sound background in analytic geometry and some knowledge of determinants, this volume has long been among the best available expositions of advanced work on projective and algebraic geometry. Developed from Professor Woods' lectures at the Massachusetts Institute of Technology, it bridges the gap between intermediate studies in the field and highly specialized works. With exceptional thoroughness, it presents the most important general concepts and methods of advanced algebraic geometry (as distinguished from differential geometry). It offers a thorough study

  15. Non-Riemannian geometry

    CERN Document Server

    Eisenhart, Luther Pfahler

    2005-01-01

    This concise text by a prominent mathematician deals chiefly with manifolds dominated by the geometry of paths. Topics include asymmetric and symmetric connections, the projective geometry of paths, and the geometry of sub-spaces. 1927 edition.

  16. The calculation and experiment verification of geometry factors of disk sources and detectors

    International Nuclear Information System (INIS)

    Shi Zhixia; Minowa, Y.

    1993-01-01

    In alpha counting, the efficiency of a counting system is most frequently determined from the counter response to a calibrated source. Whenever this procedure is used, however, questions invariably arise as to the integrity of the standard source, or indeed the validity of the primary calibration. As a check, therefore, it is often helpful to be able to calculate the disintegration rate from counting rate data. The conclusions are: 1. If the source is thin enough, the error E is generally less than 5%, which is acceptable in routine measurement. When no standard source is available for the experiment, the calculated geometry factor can be used instead of the measured efficiency. 2. The calculated geometry factor can be used to correct the counting system, study the effect of each parameter, and identify those parameters needing careful control. 3. The method based on the overlapping area of the source and the projection of the detector is reliable, simple, and convenient for calculating the geometry factor. (5 tabs.)
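
A geometry factor of the kind discussed here — the fraction of isotropically emitted particles from a disk source that reach a coaxial disk detector — can be estimated with a short Monte Carlo sketch. This is an illustrative stand-in, not the overlapping-area method of the paper, and the parameter values are arbitrary:

```python
import math, random

def geometry_factor(Rs, Rd, d, n=200_000, seed=1):
    """Monte Carlo estimate of the geometry factor: the fraction of particles
    emitted isotropically (over 4*pi) from a uniform disk source of radius Rs
    that strike a coaxial disk detector of radius Rd at distance d."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # Uniform point on the source disk (area-weighted radius).
        r = Rs * math.sqrt(rng.random())
        phi = 2 * math.pi * rng.random()
        x0, y0 = r * math.cos(phi), r * math.sin(phi)
        # Isotropic direction on the unit sphere.
        cz = 2 * rng.random() - 1
        if cz <= 0:
            continue  # emitted away from the detector plane
        psi = 2 * math.pi * rng.random()
        s = math.sqrt(1 - cz * cz)
        cx, cy = s * math.cos(psi), s * math.sin(psi)
        # Project the ray onto the detector plane z = d.
        t = d / cz
        if (x0 + t * cx) ** 2 + (y0 + t * cy) ** 2 <= Rd * Rd:
            hits += 1
    return hits / n

# Sanity check against the analytic point-source value 0.5*(1 - d/sqrt(d^2 + Rd^2)).
print(geometry_factor(Rs=0.001, Rd=1.0, d=10.0))
```

For a near-point source the estimate should approach the closed-form disk solid-angle result, which gives a quick validation of the sampling.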

  17. Effect of Professional Ethics on Reducing Medical Errors from the Viewpoint of Faculty Members in Medical School of Tabriz University of Medical Sciences

    Directory of Open Access Journals (Sweden)

    Fatemeh Donboli Miandoab

    2017-12-01

    Full Text Available Background: Professionalism and adherence to ethics and professional standards are among the most important topics in medical ethics and can play a role in reducing medical errors. This paper examines and evaluates the effect of professional ethics on reducing medical errors from the viewpoint of faculty members in the medical school of the Tabriz University of Medical Sciences. Methods: In this cross-sectional descriptive study, faculty members of the Tabriz University of Medical Sciences were the statistical population, from whom 105 participants were selected through simple random sampling. A questionnaire was used to examine and compare the self-assessed opinions of faculty members in the internal, surgical, pediatric, gynecological, and psychiatric departments. The questionnaires were completed by a self-assessment method and the collected data were analyzed using SPSS 21. Results: Based on physicians' opinions, professional ethical considerations and their three domains have a significant role in reducing medical errors and crimes. The mean scores (standard deviations) of the managerial, knowledge and communication skills, and environmental variables were, respectively, 46.7 (5.64), 64.6 (8.14), and 16.2 (2.97) from the physicians' viewpoints. The significant factors with the highest scores on the reduction of medical errors and crimes in all three domains were as follows: in the managerial skills variable, trust, the physician's sense of responsibility towards the patient, and his/her respect for patients' rights; in the knowledge and communication skills domain, general competence and eligibility as a physician and examination and diagnosis skills; and, last, in the environmental domain, the sufficiency of training in ethical issues during education and satisfaction of basic needs. Conclusion: Based on the findings of this research, attention to the improvement of communication, management and environment skills should

  18. The Geometry Conference

    CERN Document Server

    Bárány, Imre; Vilcu, Costin

    2016-01-01

    This volume presents easy-to-understand yet surprising properties obtained using topological, geometric and graph theoretic tools in the areas covered by the Geometry Conference that took place in Mulhouse, France from September 7–11, 2014 in honour of Tudor Zamfirescu on the occasion of his 70th anniversary. The contributions address subjects in convexity and discrete geometry, in distance geometry or with geometrical flavor in combinatorics, graph theory or non-linear analysis. Written by top experts, these papers highlight the close connections between these fields, as well as ties to other domains of geometry and their reciprocal influence. They offer an overview on recent developments in geometry and its border with discrete mathematics, and provide answers to several open questions. The volume addresses a large audience in mathematics, including researchers and graduate students interested in geometry and geometrical problems.

  19. Hyperbolic geometry

    CERN Document Server

    Iversen, Birger

    1992-01-01

    Although it arose from purely theoretical considerations of the underlying axioms of geometry, the work of Einstein and Dirac has demonstrated that hyperbolic geometry is a fundamental aspect of modern physics.

  20. Research trend on human error reduction

    International Nuclear Information System (INIS)

    Miyaoka, Sadaoki

    1990-01-01

    Human error has been a problem in all industries. In 1988, the Bureau of Mines, Department of the Interior, USA, carried out a worldwide survey on human error in all industries in relation to fatal accidents in mines. There were differences in the results depending on the methods of collecting data, but the proportion of total accidents attributable to human error ranged widely, from 20% to 85%, and was 35% on average. The rate of occurrence of accidents and troubles in Japanese nuclear power stations is shown, and the rate of occurrence of human error is 0∼0.5 cases/reactor-year, which did not vary much. Therefore, the proportion of the total attributable to human error tended to increase, and reducing human error has become important for lowering the rate of occurrence of accidents and troubles hereafter. After the TMI accident in 1979 in the USA, research on the man-machine interface became active, and after the Chernobyl accident in 1986 in the USSR, the problem of organization and management has been studied. In Japan, 'Safety 21' was drawn up by the Advisory Committee for Energy, and the annual reports on nuclear safety also pointed out the importance of human factors. The state of research on human factors in Japan and abroad and three targets to reduce human error are reported. (K.I.)

  1. Geometry of the Universe

    International Nuclear Information System (INIS)

    Gurevich, L.Eh.; Gliner, Eh.B.

    1978-01-01

    Problems of investigating the space-time geometry of the Universe are described on a popular level. The space-time geometries corresponding to three cosmological models are considered. The space-time geometry of the closed model is the spherical Riemann geometry; of the open model, the Lobachevskij geometry; and of the flat model, the Euclidean geometry. The real geometry of the Universe in the contemporary epoch of its development is based on data testifying to the fact that the Universe is infinitely expanding

  2. Progress of conversion system from CAD data to MCNP geometry data in Japan

    International Nuclear Information System (INIS)

    Sato, S.; Nashif, H.; Masuda, F.; Morota, H.; Iida, H.; Konno, C.

    2010-01-01

    Automatic conversion systems from CAD data to MCNP geometry input data have been developed to convert the CAD data of fusion reactors, which have very complicated structures. So far, two conversion systems (GEOMIT-1 and ARCMCP) have been developed, and a third system (GEOMIT-2) is under development. Void data can be created in these systems. GEOMIT-1 was developed in 2007, but a lot of manual shape-splitting work on the CAD data was required to convert the complicated geometry. ARCMCP was developed in 2008. Its algorithm drastically improved the automatic creation of ambiguous surfaces, but it still required a little manual shape-splitting work. The latest system, GEOMIT-2, does not require additional commercial software packages, though the previous systems require them. It also has functions for CAD data healing and automatic shape splitting. Geometrical errors in CAD data can be automatically revised by the healing function, and complicated geometries can be automatically split into simple geometries by the shape-splitting function. No manual work on the CAD data is required in GEOMIT-2. GEOMIT-2 is very useful for nuclear analyses of fusion reactors.
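
As a toy illustration of the kind of output such converters produce, the sketch below writes an axis-aligned box, given as CAD bounding data, as an MCNP RPP macrobody surface card plus a matching cell card. The numbering scheme and the helper function are invented for this example; real converters such as GEOMIT-2 handle far more general solids:

```python
def box_to_mcnp(surf_id, cell_id, mat_id, density, bounds):
    """Emit an MCNP RPP macrobody surface card and a cell card for an
    axis-aligned box given as (xmin, xmax, ymin, ymax, zmin, zmax) in cm."""
    surf = "{} rpp {} {} {} {} {} {}".format(surf_id, *bounds)
    # Negative sense (-surf_id) selects the interior of the macrobody.
    cell = f"{cell_id} {mat_id} {density} -{surf_id} imp:n=1"
    return surf, cell

# A 5 cm aluminium cube (negative density = mass density in g/cm^3).
surf, cell = box_to_mcnp(10, 1, 1, -2.70, (0.0, 5.0, 0.0, 5.0, 0.0, 5.0))
print(surf)  # 10 rpp 0.0 5.0 0.0 5.0 0.0 5.0
print(cell)  # 1 1 -2.7 -10 imp:n=1
```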

  3. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    Science.gov (United States)

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA® terminology that allows for coding all stages of the medication use process where the error occurred, in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article

  4. The effect of experimental sleep fragmentation on error monitoring.

    Science.gov (United States)

    Ko, Cheng-Hung; Fang, Ya-Wen; Tsai, Ling-Ling; Hsieh, Shulan

    2015-01-01

    Experimental sleep fragmentation (SF) is characterized by frequent brief arousals without reduced total sleep time and causes daytime sleepiness and impaired neurocognitive processes. This study explored the impact of SF on error monitoring. Thirteen adults underwent auditory stimuli-induced high-level (H) and low-level (L) SF nights. Flanker task performance and electroencephalogram data were collected in the morning following SF nights. Compared to LSF, HSF induced more arousals and stage N1 sleep, decreased slow wave sleep and rapid-eye-movement sleep (REMS), decreased subjective sleep quality, increased daytime sleepiness, and decreased amplitudes of P300 and error-related positivity (Pe). SF effects on N1 sleep were negatively correlated with SF effects on the Pe amplitude. Furthermore, as REMS was reduced by SF, post-error accuracy compensations were greatly reduced. In conclusion, attentional processes and error monitoring were impaired following one night of frequent sleep disruptions, even when total sleep time was not reduced. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. A dose error evaluation study for 4D dose calculations

    Science.gov (United States)

    Milz, Stefan; Wilkens, Jan J.; Ullrich, Wolfgang

    2014-10-01

    Previous studies have shown that respiration-induced motion is not negligible for Stereotactic Body Radiation Therapy. The intrafractional breathing-induced motion influences the delivered dose distribution on the underlying patient geometry, such as the lung or the abdomen. If a static geometry is used, the planning process for these indications does not represent the entire dynamic process. The quality of a full 4D dose calculation approach depends on the dose coordinate transformation process between deformable geometries. This article provides an evaluation study that introduces an advanced method to verify the quality of numerical dose transformations generated by four different algorithms. The transformation metric used is based on the deviation of the dose mass histogram (DMH) and the mean dose throughout dose transformation. The study compares the results of four algorithms. In general, two elementary approaches are used: dose mapping and energy transformation. Dose interpolation (DIM) and an advanced concept, the so-called divergent dose mapping model (dDMM), are used for dose mapping. These algorithms are compared to the basic energy transformation model (bETM) and the energy mass congruent mapping (EMCM). For evaluation, 900 small sample regions of interest (ROIs) are generated inside an exemplary lung geometry (4DCT). A homogeneous fluence distribution is assumed for dose calculation inside the ROIs. The dose transformations are performed with the four different algorithms. The study investigates the DMH metric and the mean dose metric for different scenarios (voxel sizes of 8 mm, 4 mm, 2 mm and 1 mm; 9 different breathing phases). dDMM achieves the best transformation accuracy in all measured test cases, with 3-5% lower errors than the other models. The results of dDMM are reasonable and most efficient in this study, although the model is simple and easy to implement. The EMCM model also achieved suitable results, but the approach requires a more complex
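
The difference between the two elementary approaches named in this record can be illustrated with a schematic one-dimensional sketch (not the authors' implementation): energy-transformation models push energy E = D·m and mass through the deformation and recover dose as E/m, which conserves total deposited energy by construction. The voxel values and overlap fractions below are invented:

```python
def energy_mass_map(dose, mass, overlap):
    """Energy/mass-based dose transformation (schematic 1-D sketch).
    overlap[i][j] = fraction of source voxel i that deforms into target voxel j
    (rows sum to 1). Energy E = D*m and mass are pushed through the mapping,
    and the target dose is recovered as transferred energy / transferred mass."""
    n_tgt = len(overlap[0])
    e_tgt = [0.0] * n_tgt
    m_tgt = [0.0] * n_tgt
    for i, (d, m) in enumerate(zip(dose, mass)):
        for j, f in enumerate(overlap[i]):
            e_tgt[j] += f * d * m   # energy deposited into target voxel j
            m_tgt[j] += f * m       # mass arriving in target voxel j
    return [e / m if m > 0 else 0.0 for e, m in zip(e_tgt, m_tgt)]

dose = [2.0, 1.0, 0.5]          # Gy, source-phase voxels
mass = [1.0, 1.2, 0.8]          # g
overlap = [[0.7, 0.3, 0.0],     # breathing deformation; each row sums to 1
           [0.0, 0.6, 0.4],
           [0.0, 0.0, 1.0]]
mapped = energy_mass_map(dose, mass, overlap)
```

A plain dose-interpolation scheme would average dose values directly and, in general, not conserve total energy over the deformation.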

  6. Exponential characteristics spatial quadrature for discrete ordinates radiation transport in slab geometry

    International Nuclear Information System (INIS)

    Mathews, K.; Sjoden, G.; Minor, B.

    1994-01-01

    The exponential characteristic spatial quadrature for discrete ordinates neutral particle transport in slab geometry is derived and compared with current methods. It is similar to the linear characteristic (or, in slab geometry, the linear nodal) quadrature but differs by assuming an exponential distribution of the scattering source within each cell, S(x) = a exp(bx), whose parameters are root-solved to match the known (from the previous iteration) average and first moment of the source over the cell. Like the linear adaptive method, the exponential characteristic method is positive and nonlinear but more accurate and more readily extended to other cell shapes. The nonlinearity has not interfered with convergence. The authors introduce the 'exponential moment functions', a generalization of the functions used by Walters in the linear nodal method, and use them to avoid numerical ill-conditioning. The method exhibits O(Δx⁴) truncation error on fine enough meshes; the error is insensitive to mesh size for coarse meshes. In a shielding problem, it is accurate to 10% using 16-mfp-thick cells; conventional methods err by 8 to 15 orders of magnitude. The exponential characteristic method is computationally more costly per cell than current methods but can be accurate with very thick cells, leading to increased computational efficiency on appropriate problems
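
The root-solving step — choosing a and b so that S(x) = a·exp(bx) reproduces a cell's known average and first moment (here taken as the source centroid x̄) — can be sketched with simple bisection. This is an illustrative solver, not the authors' implementation, and it uses the closed form x̄ = (h/2)·coth(bh/2) − 1/b for a cell [−h/2, h/2]:

```python
import math

def fit_exponential_source(avg, centroid_target, h, tol=1e-12):
    """Find a, b such that S(x) = a*exp(b*x) on the cell [-h/2, h/2] has the
    given cell average and centroid x̄ = ∫x S dx / ∫S dx.  For this family,
    x̄ = (h/2)*coth(b*h/2) - 1/b, which increases monotonically in b."""
    def centroid(b):
        t = b * h / 2
        if abs(t) < 1e-8:
            return b * h * h / 12          # series limit near b = 0
        return (h / 2) / math.tanh(t) - 1 / b
    lo, hi = -200 / h, 200 / h             # bracket for b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if centroid(mid) < centroid_target:
            lo = mid
        else:
            hi = mid
    b = (lo + hi) / 2
    t = b * h / 2
    # Cell average is a*sinh(t)/t, so invert for a.
    a = avg * t / math.sinh(t) if abs(t) > 1e-8 else avg
    return a, b

# Round-trip check: generate the moments from a known (a, b), then recover them.
a0, b0, h = 2.0, 1.5, 1.0
t0 = b0 * h / 2
avg = a0 * math.sinh(t0) / t0
xbar = (h / 2) / math.tanh(t0) - 1 / b0
a, b = fit_exponential_source(avg, xbar, h)
```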

  7. Geometry of the local equivalence of states

    Energy Technology Data Exchange (ETDEWEB)

    Sawicki, A; Kus, M, E-mail: assawi@cft.edu.pl, E-mail: marek.kus@cft.edu.pl [Center for Theoretical Physics, Polish Academy of Sciences, Al Lotnikow 32/46, 02-668 Warszawa (Poland)

    2011-12-09

    We present a description of locally equivalent states in terms of symplectic geometry. Using the moment map between local orbits in the space of states and coadjoint orbits of the local unitary group, we reduce the problem of local unitary equivalence to an easy part consisting of identifying the proper coadjoint orbit and a harder problem of the geometry of fibers of the moment map. We give a detailed analysis of the properties of orbits of 'equally entangled states'. In particular, we show connections between certain symplectic properties of orbits such as their isotropy and coisotropy with effective criteria of local unitary equivalence. (paper)

  8. On organizing principles of discrete differential geometry. Geometry of spheres

    International Nuclear Information System (INIS)

    Bobenko, Alexander I; Suris, Yury B

    2007-01-01

    Discrete differential geometry aims to develop discrete equivalents of the geometric notions and methods of classical differential geometry. This survey contains a discussion of the following two fundamental discretization principles: the transformation group principle (smooth geometric objects and their discretizations are invariant with respect to the same transformation group) and the consistency principle (discretizations of smooth parametrized geometries can be extended to multidimensional consistent nets). The main concrete geometric problem treated here is discretization of curvature-line parametrized surfaces in Lie geometry. Systematic use of the discretization principles leads to a discretization of curvature-line parametrization which unifies circular and conical nets.

  9. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Full Text Available The accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the configuration RTTTR (tilting-head B-axis and rotary table on the workpiece side, A′) was set up taking into consideration rigid body kinematics and homogeneous transformation matrices, in which 43 error components are included. The volumetric error comprises 43 error components, each of which can reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of the workpiece is governed by the position of the cutting tool center point (TCP) relative to the workpiece. When the cutting tool deviates from its ideal position relative to the workpiece, a machining error results. The compensation process comprises detecting the present tool path, analyzing the geometric error of the RTTTR five-axis CNC machine tool, translating the current component positions to compensated positions using the kinematic error model, converting the newly created components to new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.
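
The kinematic error modeling described here — chaining homogeneous transformation matrices that each carry small linear and angular error terms, then comparing the actual tool center point with the ideal one — can be sketched as follows. The model is deliberately reduced to three translational axes with invented error values, not the full 43-component five-axis formulation:

```python
def matmul(A, B):
    """4x4 homogeneous matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def trans(dx, dy, dz):
    """Homogeneous translation (used for both commanded moves and linear errors)."""
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

def rot_small(ex, ey, ez):
    """Small-angle rotation error (rad), first-order approximation."""
    return [[1, -ez, ey, 0], [ez, 1, -ex, 0], [-ey, ex, 1, 0], [0, 0, 0, 1]]

def tcp_position(axis_moves, axis_errors):
    """Chain ideal axis translations with per-axis error transforms and
    return the tool center point position in the workpiece frame."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for (dx, dy, dz), (tx, ty, tz, ex, ey, ez) in zip(axis_moves, axis_errors):
        T = matmul(T, trans(dx, dy, dz))        # commanded motion
        T = matmul(T, trans(tx, ty, tz))        # linear error components
        T = matmul(T, rot_small(ex, ey, ez))    # angular error components
    return (T[0][3], T[1][3], T[2][3])

moves = [(100.0, 0, 0), (0, 50.0, 0), (0, 0, -20.0)]   # X, Y, Z moves (mm)
no_err = [(0, 0, 0, 0, 0, 0)] * 3
errs = [(0.01, 0.002, 0, 0, 0, 1e-4),                  # hypothetical error values
        (0, 0.005, 0.001, 5e-5, 0, 0),
        (0.002, 0, 0.004, 0, 0, 0)]
ideal = tcp_position(moves, no_err)
actual = tcp_position(moves, errs)
deviation = [a - i for a, i in zip(actual, ideal)]     # volumetric error at the TCP
```

Compensation then amounts to commanding the inverse of this deviation, so that the erroneous chain lands the TCP on the ideal position.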

  10. Geometry and its applications

    CERN Document Server

    Meyer, Walter J

    2006-01-01

    Meyer's Geometry and Its Applications, Second Edition, combines traditional geometry with current ideas to present a modern approach that is grounded in real-world applications. It balances the deductive approach with discovery learning, and introduces axiomatic, Euclidean geometry, non-Euclidean geometry, and transformational geometry. The text integrates applications and examples throughout and includes historical notes in many chapters. The Second Edition of Geometry and Its Applications is a significant text for any college or university that focuses on geometry's usefulness in other disciplines. It is especially appropriate for engineering and science majors, as well as future mathematics teachers.* Realistic applications integrated throughout the text, including (but not limited to): - Symmetries of artistic patterns- Physics- Robotics- Computer vision- Computer graphics- Stability of architectural structures- Molecular biology- Medicine- Pattern recognition* Historical notes included in many chapters...

  11. Cognitive aspect of diagnostic errors.

    Science.gov (United States)

    Phua, Dong Haur; Tan, Nigel C K

    2013-01-01

    Diagnostic errors can result in tangible harm to patients. Despite our advances in medicine, the mental processes required to make a diagnosis exhibit shortcomings, causing diagnostic errors. Cognitive factors are found to be an important cause of diagnostic errors. With new understanding from psychology and the social sciences, clinical medicine is now beginning to appreciate that our clinical reasoning can take the form of analytical reasoning or heuristics. Different factors like cognitive biases and affective influences can also impel unwary clinicians to make diagnostic errors. Various strategies have been proposed to reduce the effect of cognitive biases and affective influences when clinicians make diagnoses; however, evidence for the efficacy of these methods is still sparse. This paper aims to introduce the reader to the cognitive aspect of diagnostic errors, in the hope that clinicians can use this knowledge to improve diagnostic accuracy and patient outcomes.

  12. A spectral nodal method for discrete ordinates problems in x,y geometry

    International Nuclear Information System (INIS)

    Barros, R.C. de; Larsen, E.W.

    1991-06-01

    A new nodal method is proposed for the solution of S_N problems in x,y-geometry. This method uses the Spectral Green's Function (SGF) scheme for solving the one-dimensional transverse-integrated nodal transport equations with no spatial truncation error. Thus, the only approximations in the x,y-geometry nodal method occur in the transverse leakage terms, as in diffusion theory. We approximate these leakage terms using a flat or constant approximation, and we refer to the resulting method as the SGF-Constant Nodal (SGF-CN) method. We show in numerical calculations that the SGF-CN method is much more accurate than other well-known transport nodal methods for coarse-mesh deep-penetration S_N problems, even though the transverse leakage terms are approximated rather simply. (author)

  13. Radiologic errors, past, present and future.

    Science.gov (United States)

    Berlin, Leonard

    2014-01-01

    During the 10-year period beginning in 1949 with the publication of five articles in two radiology journals and the UK's The Lancet, a California radiologist named L.H. Garland almost single-handedly shocked the entire medical and especially the radiologic community. He focused their attention on the fact, now known and accepted by all but at that time not previously recognized and acknowledged only with great reluctance, that a substantial degree of observer error was prevalent in radiologic interpretation. In the more than half-century that followed, Garland's pioneering work has been affirmed and reaffirmed by numerous researchers. Retrospective studies disclosed then, and still disclose today, that diagnostic errors in radiologic interpretations of plain radiographic (as well as CT, MR, ultrasound, and radionuclide) images hover in the 30% range, not too dissimilar to the error rates in clinical medicine. Seventy percent of these errors are perceptual in nature, i.e., the radiologist does not "see" the abnormality on the imaging exam, perhaps due to poor conspicuity, satisfaction of search, or simply the "inexplicable psycho-visual phenomena of human perception." The remainder are cognitive errors: the radiologist sees an abnormality but fails to render a correct diagnosis by attaching the wrong significance to what is seen, perhaps due to inadequate knowledge, or an alliterative or judgmental error. Computer-assisted detection (CAD), a technology that for the past two decades has been utilized primarily in mammographic interpretation, increases sensitivity but at the same time decreases specificity; whether it reduces errors is debatable. Efforts to reduce diagnostic radiological errors continue, but the degree to which they will be successful remains to be determined.

  14. Errors and mistakes in the traditional optimum design of experiments on exponential absorption

    International Nuclear Information System (INIS)

    Burge, E.J.

    1977-01-01

    The treatment of statistical errors in absorption experiments using particle counters, given by Rose and Shapiro (1948), is shown to be incorrect for non-zero background counts. For the simplest case of only one absorber thickness, revised conditions are computed for the optimum geometry and the best apportionment of counting times for the incident and transmitted beams for a wide range of relative backgrounds (0, 10⁻⁵–10²). The two geometries of Rose and Shapiro are treated: (I) beam area fixed, absorber thickness varied, and (II) beam area and absorber thickness both varied, but with the effective volume of absorber constant. For case (I) the newly calculated errors in the absorption coefficients are shown to be about 0.7 of the Rose and Shapiro values for the largest background, and for case (II) about 0.4. The corresponding fractional times for background counts are (I) 0.7 and (II) 0.07 of those given by Rose and Shapiro. For small backgrounds the differences are negligible. Revised values are also computed for the sensitivity of the accuracy to deviations from optimum transmission. (Auth.)
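
The time-apportionment problem at the heart of this analysis can be illustrated with the standard counting-statistics result (a generic sketch, not Burge's corrected formulas): for a gross rate R_g measured alongside a background rate R_b within a fixed total time, the variance of the net rate, R_g/t_g + R_b/t_b, is minimized when t_g/t_b = √(R_g/R_b). The rates below are arbitrary example values:

```python
import math

def optimal_split(R_g, R_b, T):
    """Split a total counting time T between the gross (source + background)
    and background-only measurements so that the net-rate variance
    R_g/t_g + R_b/t_b is minimal, i.e. t_g/t_b = sqrt(R_g/R_b)."""
    ratio = math.sqrt(R_g / R_b)
    t_b = T / (1 + ratio)
    return T - t_b, t_b

def net_rate_sigma(R_g, R_b, t_g, t_b):
    """Standard deviation of the net rate (gross minus background)."""
    return math.sqrt(R_g / t_g + R_b / t_b)

# Example: gross 100 cps against a 4 cps background, one hour of total counting.
t_g, t_b = optimal_split(100.0, 4.0, 3600.0)
best = net_rate_sigma(100.0, 4.0, t_g, t_b)
naive = net_rate_sigma(100.0, 4.0, 1800.0, 1800.0)  # 50/50 split for comparison
```

The point of the record above is that this textbook optimum changes once non-zero background is treated correctly in the transmission-measurement setting; the sketch only shows the uncorrected baseline.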

  15. Beautiful geometry

    CERN Document Server

    Maor, Eli

    2014-01-01

    If you've ever thought that mathematics and art don't mix, this stunning visual history of geometry will change your mind. As much a work of art as a book about mathematics, Beautiful Geometry presents more than sixty exquisite color plates illustrating a wide range of geometric patterns and theorems, accompanied by brief accounts of the fascinating history and people behind each. With artwork by Swiss artist Eugen Jost and text by acclaimed math historian Eli Maor, this unique celebration of geometry covers numerous subjects, from straightedge-and-compass constructions to intriguing configur

  16. A Mobile Device App to Reduce Time to Drug Delivery and Medication Errors During Simulated Pediatric Cardiopulmonary Resuscitation: A Randomized Controlled Trial.

    Science.gov (United States)

    Siebert, Johan N; Ehrler, Frederic; Combescure, Christophe; Lacroix, Laurence; Haddad, Kevin; Sanchez, Oliver; Gervaix, Alain; Lovis, Christian; Manzano, Sergio

    2017-02-01

    During pediatric cardiopulmonary resuscitation (CPR), vasoactive drug preparation for continuous infusion is both complex and time-consuming, placing children at higher risk than adults for medication errors. Following an evidence-based ergonomic-driven approach, we developed a mobile device app called Pediatric Accurate Medication in Emergency Situations (PedAMINES), intended to guide caregivers step-by-step from preparation to delivery of drugs requiring continuous infusion. The aim of our study was to determine whether the use of PedAMINES reduces drug preparation time (TDP) and time to delivery (TDD; primary outcome), as well as medication errors (secondary outcomes) when compared with conventional preparation methods. The study was a randomized controlled crossover trial with 2 parallel groups comparing PedAMINES with a conventional, internationally used drug infusion rate table in the preparation of continuous drug infusion. We used a simulation-based pediatric CPR cardiac arrest scenario with a high-fidelity manikin in the shock room of a tertiary care pediatric emergency department. After epinephrine-induced return of spontaneous circulation, pediatric emergency nurses were first asked to prepare a continuous infusion of dopamine, using either PedAMINES (intervention group) or the infusion table (control group), and second, a continuous infusion of norepinephrine, crossing over to the other method. The primary outcome was the elapsed time in seconds, in each allocation group, from the oral prescription by the physician to TDD by the nurse. TDD included TDP. The secondary outcome was the medication dosage error rate during the sequence from drug preparation to drug injection. A total of 20 nurses were randomized into 2 groups. During the first study period, mean TDP while using PedAMINES and conventional preparation methods was 128.1 s (95% CI 102-154) and 308.1 s (95% CI 216-400), respectively (180 s reduction, P=.002). Mean TDD was 214 s (95% CI 171-256) and

  17. [Efficacy of motivational interviewing for reducing medication errors in chronic patients over 65 years with polypharmacy: Results of a cluster randomized trial].

    Science.gov (United States)

    Pérula de Torres, Luis Angel; Pulido Ortega, Laura; Pérula de Torres, Carlos; González Lama, Jesús; Olaya Caro, Inmaculada; Ruiz Moral, Roger

    2014-10-21

    To evaluate the effectiveness of an intervention based on motivational interviewing to reduce medication errors in chronic patients over 65 years with polypharmacy. Cluster randomized trial that included doctors and nurses of 16 Primary Care centers and chronic patients over 65 years with polypharmacy. The professionals were assigned to the experimental or the control group using stratified randomization. Interventions consisted of training of professionals and revision of patient treatments, with application of motivational interviewing in the experimental group versus the usual approach in the control group. The primary endpoint (medication error) was analyzed at the individual level, and was estimated with the absolute risk reduction (ARR), relative risk reduction (RRR), number needed to treat (NNT) and by multiple logistic regression analysis. Thirty-two professionals were randomized (19 doctors and 13 nurses); 27 of them consecutively recruited 154 patients (13 professionals in the experimental group recruited 70 patients and 14 professionals in the control group recruited 84 patients) who completed 6 months of follow-up. The mean age of patients was 76 years (68.8% women). A decrease in the average number of medication errors was observed over the period. The reduction was greater in the experimental than in the control group (F=5.109, P=.035): ARR 29% (95% confidence interval [95% CI] 15.0-43.0%), RRR 0.59 (95% CI 0.31-0.76), and NNT 3.5 (95% CI 2.3-6.8). Motivational interviewing is more effective than the usual approach in reducing medication errors in patients over 65 years with polypharmacy. Copyright © 2013 Elsevier España, S.L.U. All rights reserved.
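    As a sanity check, the reported effect sizes hang together arithmetically: the number needed to treat is the reciprocal of the absolute risk reduction.

```python
# NNT = 1 / ARR. The trial reports ARR = 29% and NNT = 3.5; the small
# discrepancy with 1/0.29 comes from rounding the ARR to two digits.
arr = 0.29
nnt = 1.0 / arr
print(f"NNT = 1 / {arr:.2f} = {nnt:.2f}")
```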

  18. Revolutions of Geometry

    CERN Document Server

    O'Leary, Michael

    2010-01-01

    Guides readers through the development of geometry and basic proof writing using a historical approach to the topic. In an effort to fully appreciate the logic and structure of geometric proofs, Revolutions of Geometry places proofs into the context of geometry's history, helping readers to understand that proof writing is crucial to the job of a mathematician. Written for students and educators of mathematics alike, the book guides readers through the rich history and influential works, from ancient times to the present, behind the development of geometry. As a result, readers are successfull

  19. Analysis of the effect of cone-beam geometry and test object configuration on the measurement accuracy of a computed tomography scanner used for dimensional measurement

    International Nuclear Information System (INIS)

    Kumar, Jagadeesha; Attridge, Alex; Williams, Mark A; Wood, P K C

    2011-01-01

    Industrial x-ray computed tomography (CT) scanners are used for non-contact dimensional measurement of small, fragile components and difficult-to-access internal features of castings and mouldings. However, the accuracy and repeatability of measurements are influenced by factors such as cone-beam system geometry, test object configuration, x-ray power, material and size of test object, detector characteristics and data analysis methods. An attempt is made in this work to understand the measurement errors of a CT scanner over the complete scan volume, taking into account only the errors in system geometry and the object configuration within the scanner. A cone-beam simulation model is developed with the radiographic image projection and reconstruction steps. Known amounts of error in the geometrical parameters were introduced in the model to understand the effect of the geometry of the cone-beam CT system on measurement accuracy for different positions, orientations and sizes of the test object. Simulation analysis shows that the geometrical parameters have a significant influence on the dimensional measurement at specific configurations of the test object. Finally, the importance of system alignment and estimation of correct parameters for accurate CT measurements is outlined based on the analysis.
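    As a first-order illustration of why the system geometry matters (a toy calculation, not the paper's projection-and-reconstruction model): cone-beam magnification is M = SDD/SOD, so an error in the assumed source-to-object distance maps directly into the demagnified dimension. All distances below are invented.

```python
def measured_length(true_len, sod_true, sdd, sod_assumed):
    """Length recovered when demagnifying with a possibly wrong SOD."""
    detector_len = true_len * (sdd / sod_true)    # projection by true geometry
    return detector_len * (sod_assumed / sdd)     # demagnified with assumed SOD

sdd, sod, true_len = 1000.0, 200.0, 10.0          # mm, illustrative values
for dsod in (-1.0, 0.0, 1.0):                     # mm error in assumed SOD
    m = measured_length(true_len, sod, sdd, sod + dsod)
    print(f"SOD error {dsod:+.1f} mm -> measured {m:.3f} mm "
          f"({(m - true_len) / true_len:+.2%} relative error)")
```

    In this toy model a 1 mm error on a 200 mm SOD gives a 0.5% dimensional error regardless of SDD, which hints at why parameter estimation and alignment dominate the error budget at specific object configurations.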

  20. Competition increases binding errors in visual working memory.

    Science.gov (United States)

    Emrich, Stephen M; Ferber, Susanne

    2012-04-20

    When faced with maintaining multiple objects in visual working memory (VWM), item information must be bound to the correct object in order to be correctly recalled. Sometimes, however, binding errors occur, and participants report the feature (e.g., color) of an unprobed, non-target item. In the present study, we examine whether the configuration of sample stimuli affects the proportion of these binding errors. The results demonstrate that participants mistakenly report the identity of the unprobed item (i.e., they make a non-target response) when sample items are presented close together in space, suggesting that binding errors can increase independently of memory load. Moreover, the proportion of these non-target responses is linearly related to the distance between sample items, suggesting that these errors are spatially specific. Finally, presenting sample items sequentially decreases non-target responses, suggesting that reducing competition between sample stimuli reduces the number of binding errors. Importantly, these effects all occurred without increases in the amount of error in the memory representation. These results suggest that competition during encoding can account for some of the binding errors made during VWM recall.

  1. An error bound estimate and convergence of the Nodal-LTS_N solution in a rectangle

    Energy Technology Data Exchange (ETDEWEB)

    Hauser, Eliete Biasotto [Faculty of Mathematics, PUCRS Av Ipiranga 6681, Building 15, Porto Alegre - RS 90619-900 (Brazil)]. E-mail: eliete@pucrs.br; Pazos, Ruben Panta [Department of Mathematics, UNISC Av Independencia, 2293, room 1301, Santa Cruz do Sul - RS 96815-900 (Brazil)]. E-mail: rpp@impa.br; Tullio de Vilhena, Marco [Graduate Program in Applied Mathematics, UFRGS Av Bento Goncalves 9500, Building 43-111, Porto Alegre - RS 91509-900 (Brazil)]. E-mail: vilhena@mat.ufrgs.br

    2005-07-15

    In this work, we report the mathematical analysis concerning the error bound estimate and convergence of the Nodal-LTS_N solution in a rectangle. To this end, we present an efficient algorithm, called the LTS_N 2D-Diag solution, for Cartesian geometry.

  2. Error field considerations for BPX

    International Nuclear Information System (INIS)

    LaHaye, R.J.

    1992-01-01

    Irregularities in the position of poloidal and/or toroidal field coils in tokamaks produce resonant toroidal asymmetries in the vacuum magnetic fields. Otherwise stable tokamak discharges become non-linearly unstable to disruptive locked modes when subjected to low-level error fields. Because of the field errors, magnetic islands are produced which would not otherwise occur in tearing-mode-stable configurations; a concomitant reduction of the total confinement can result. Poloidal and toroidal asymmetries arise in the heat flux to the divertor target. In this paper, the field errors from perturbed BPX coils are used in a field line tracing code of the BPX equilibrium to study these deleterious effects. Limits on coil irregularities for device design and fabrication are computed, along with possible correcting coils for reducing such field errors.

  3. Evaluation of a breath-motion-correction technique in reducing measurement error in hepatic CT perfusion imaging

    International Nuclear Information System (INIS)

    He Wei; Liu Jianyu; Li Xuan; Li Jianying; Liao Jingmin

    2009-01-01

    Objective: To evaluate the effect of a breath-motion-correction (BMC) technique in reducing measurement error of the time-density curve (TDC) in hepatic CT perfusion imaging. Methods: Twenty-five patients with suspected liver diseases underwent hepatic CT perfusion scans. The right branch of the portal vein was selected as the anatomy of interest, and BMC was performed to realign image slices for the TDC according to the rule of minimizing the temporal changes of overall structures. Ten ROIs were selected on the right branch of the portal vein to generate 10 TDCs each, with and without BMC. The values of peak enhancement and the time-to-peak enhancement for each TDC were measured. The coefficients of variation (CV) of peak enhancement and the time-to-peak enhancement were calculated for each patient with and without BMC. The Wilcoxon signed ranks test was used to evaluate the difference between the CVs of the two parameters obtained with and without BMC. The independent-samples t test was used to evaluate the difference between the values of peak enhancement obtained with and without BMC. Results: The median (quartiles) of the CV of peak enhancement with BMC [2.84% (2.10%, 4.57%)] was significantly lower than that without BMC [5.19% (3.90%, 7.27%)] (Z=-3.108, P<0.01). The median (quartiles) of the CV of time-to-peak enhancement with BMC [2.64% (0.76%, 4.41%)] was significantly lower than that without BMC [5.23% (3.81%, 7.43%)] (Z=-3.924, P<0.01). In 8 cases, the TDC demonstrated statistically significantly higher peak enhancement with BMC (P<0.05). Conclusion: By applying the BMC technique we can effectively reduce measurement error for parameters of the TDC in hepatic CT perfusion imaging. (authors)

  4. Analogy and Dynamic Geometry System Used to Introduce Three-Dimensional Geometry

    Science.gov (United States)

    Mammana, M. F.; Micale, B.; Pennisi, M.

    2012-01-01

    We present a sequence of classroom activities on Euclidean geometry, both plane and space geometry, used to make three dimensional geometry more catchy and simple. The activity consists of a guided research activity that leads the students to discover unexpected properties of two apparently distant geometrical entities, quadrilaterals and…

  5. Reducing errors in aircraft atmospheric inversion estimates of point-source emissions: the Aliso Canyon natural gas leak as a natural tracer experiment

    Science.gov (United States)

    Gourdji, S. M.; Yadav, V.; Karion, A.; Mueller, K. L.; Conley, S.; Ryerson, T.; Nehrkorn, T.; Kort, E. A.

    2018-04-01

    Urban greenhouse gas (GHG) flux estimation with atmospheric measurements and modeling, i.e. the ‘top-down’ approach, can potentially support GHG emission reduction policies by assessing trends in surface fluxes and detecting anomalies from bottom-up inventories. Aircraft-collected GHG observations also have the potential to help quantify point-source emissions that may not be adequately sampled by fixed surface tower-based atmospheric observing systems. Here, we estimate CH4 emissions from a known point source, the Aliso Canyon natural gas leak in Los Angeles, CA from October 2015–February 2016, using atmospheric inverse models with airborne CH4 observations from twelve flights ≈4 km downwind of the leak and surface sensitivities from a mesoscale atmospheric transport model. This leak event has been well-quantified previously using various methods by the California Air Resources Board, thereby providing high confidence in the mass-balance leak rate estimates of Conley et al (2016), used here for comparison to inversion results. Inversions with an optimal setup are shown to provide estimates of the leak magnitude, on average, within a third of the mass balance values, with remaining errors in estimated leak rates predominantly explained by modeled wind speed errors of up to 10 m s⁻¹, quantified by comparing airborne meteorological observations with modeled values along the flight track. An inversion setup using scaled observational wind speed errors in the model-data mismatch covariance matrix is shown to significantly reduce the influence of transport model errors on spatial patterns and estimated leak rates from the inversions. In sum, this study takes advantage of a natural tracer release experiment (i.e. the Aliso Canyon natural gas leak) to identify effective approaches for reducing the influence of transport model error on atmospheric inversions of point-source emissions, while suggesting future potential for integrating surface tower and
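    The role of the model-data mismatch covariance described above can be sketched generically. The numpy example below is a hedged toy, not the authors' inversion system: a generalized least-squares estimate of a scalar leak rate, in which inflating the mismatch covariance for flights flagged with large wind errors down-weights them. All numbers, including the flagged flights, are synthetic.

```python
import numpy as np

# Toy Bayesian/GLS point-source inversion: observations y = H s + noise,
# where H holds modeled sensitivities (footprints) and R is the model-data
# mismatch covariance. Scaling R up for selected flights down-weights them.

rng = np.random.default_rng(0)
n = 12                                  # e.g. one value per flight
s_true = 50.0                           # "true" leak rate, arbitrary units
H = rng.uniform(0.5, 1.5, size=(n, 1))  # modeled sensitivities
sigma = rng.uniform(5.0, 20.0, size=n)  # per-flight mismatch std devs
y = H[:, 0] * s_true + rng.normal(0.0, sigma)

def estimate(scale):
    R_inv = np.diag(1.0 / (scale * sigma) ** 2)
    # generalized least squares: s = (H^T R^-1 H)^-1 H^T R^-1 y
    sol = np.linalg.solve(H.T @ R_inv @ H, H.T @ R_inv @ y)
    return sol.item()

print("uniform errors:", estimate(np.ones(n)))
scale = np.ones(n)
scale[:4] = 3.0                         # hypothetical flights with bad winds
print("scaled errors: ", estimate(scale))
```

    In the study itself, the scaling came from observed-minus-modeled wind speeds along the flight track rather than arbitrary flags.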

  6. Image pre-filtering for measurement error reduction in digital image correlation

    Science.gov (United States)

    Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing

    2015-02-01

    In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward the high-frequency components of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error, which increases with the noise power. In order to reduce the systematic error and the random error of the measurements, we apply a pre-filtering to the images prior to the correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error, and the Butterworth filter produces the lowest random error among them. By using the Wiener filter with over-estimated noise power, the random error can be reduced, but the resultant systematic error is higher than that of the low-pass filters. In general, the Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. The binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. While used together with pre-filtering, the B-spline interpolator produces lower systematic error than the bicubic interpolator and a similar level of the random
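    As a hedged, numpy-only sketch of the pre-filtering step (not the authors' implementation): the binomial filter tested above is a separable [1, 2, 1]/4 convolution, and its effect can be checked by comparing high-frequency energy before and after filtering a noisy synthetic speckle image.

```python
import numpy as np

# Separable binomial ([1, 2, 1]/4) low-pass pre-filter applied to a noisy
# synthetic speckle image. Edges are replicated; the image and noise level
# are invented for illustration.

def binomial_filter(img):
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    pad = np.pad(img, 1, mode="edge")
    # filter along columns, then along rows (separable 2-D convolution)
    rows = k[0] * pad[:, :-2] + k[1] * pad[:, 1:-1] + k[2] * pad[:, 2:]
    return k[0] * rows[:-2, :] + k[1] * rows[1:-1, :] + k[2] * rows[2:, :]

rng = np.random.default_rng(1)
speckle = rng.random((64, 64))
noisy = speckle + rng.normal(0.0, 0.05, speckle.shape)
filtered = binomial_filter(noisy)

def hf_energy(a):
    # spectral energy outside a small low-frequency neighborhood of DC
    f = np.fft.fftshift(np.abs(np.fft.fft2(a)) ** 2)
    c = a.shape[0] // 2
    f[c - 8:c + 8, c - 8:c + 8] = 0.0
    return f.sum()

print("HF energy noisy   :", hf_energy(noisy))
print("HF energy filtered:", hf_energy(filtered))
```

    In an actual DIC pipeline, both the reference and deformed images would be filtered identically before correlation and sub-pixel interpolation.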

  7. Information geometry

    CERN Document Server

    Ay, Nihat; Lê, Hông Vân; Schwachhöfer, Lorenz

    2017-01-01

    The book provides a comprehensive introduction and a novel mathematical foundation of the field of information geometry with complete proofs and detailed background material on measure theory, Riemannian geometry and Banach space theory. Parametrised measure models are defined as fundamental geometric objects, which can be both finite or infinite dimensional. Based on these models, canonical tensor fields are introduced and further studied, including the Fisher metric and the Amari-Chentsov tensor, and embeddings of statistical manifolds are investigated. This novel foundation then leads to application highlights, such as generalizations and extensions of the classical uniqueness result of Chentsov or the Cramér-Rao inequality. Additionally, several new application fields of information geometry are highlighted, for instance hierarchical and graphical models, complexity theory, population genetics, or Markov Chain Monte Carlo. The book will be of interest to mathematicians who are interested in geometry, inf...

  8. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Training situational awareness to reduce surgical errors in the operating room

    NARCIS (Netherlands)

    Graafland, M.; Schraagen, J.M.C.; Boermeester, M.A.; Bemelman, W.A.; Schijven, M.P.

    2015-01-01

    Background: Surgical errors result from faulty decision-making, misperceptions and the application of suboptimal problem-solving strategies, just as often as they result from technical failure. To date, surgical training curricula have focused mainly on the acquisition of technical skills. The aim

  10. Representing Misalignments of the STAR Geometry Model using AgML

    Science.gov (United States)

    Webb, Jason C.; Lauret, Jérôme; Perevotchikov, Victor; Smirnov, Dmitri; Van Buren, Gene

    2017-10-01

    The STAR Heavy Flavor Tracker (HFT) was designed to provide high-precision tracking for the identification of charmed hadron decays in heavy-ion collisions at RHIC. It consists of three independently mounted subsystems, providing four precision measurements along the track trajectory, with the goal of pointing decay daughters back to vertices displaced by less than 100 microns from the primary event vertex. The ultimate efficiency and resolution of the physics analysis will be driven by the quality of the simulation and reconstruction of events in heavy-ion collisions. In particular, it is important that the geometry model properly accounts for the relative misalignments of the HFT subsystems, along with the alignment of the HFT relative to STAR's primary tracking detector, the Time Projection Chamber (TPC). The Abstract Geometry Modeling Language (AgML) provides a single description of the STAR geometry, generating both our simulation (GEANT 3) and reconstruction geometries (ROOT). AgML implements an ideal detector model, while misalignments are stored separately in database tables. These have historically been applied at the hit level. Simulated detector hits are projected from their ideal position along the track’s trajectory, until they intersect the misaligned detector volume, where the struck detector element is calculated for hit digitization. This scheme has worked well as hit errors have been negligible compared with the size of sensitive volumes. The precision and complexity of the HFT detector require us to apply misalignments to the detector volumes themselves. In this paper we summarize the extension of the AgML language and support libraries to enable the static misalignment of our reconstruction and simulation geometries, discussing the design goals, limitations and path to full misalignment support in ROOT/VMC-based simulation.
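    The hit-level scheme described above amounts to a ray-plane intersection: the ideal hit is carried along the track direction onto the misaligned sensor plane. A minimal sketch, with a purely illustrative 20-micron shift of the plane along its normal:

```python
import numpy as np

# Toy version of hit-level misalignment: project an ideal hit along the
# track direction until it intersects the shifted detector plane, then
# digitize there. Geometry and shift are invented for illustration.

def project_to_plane(hit, direction, plane_point, normal):
    t = np.dot(plane_point - hit, normal) / np.dot(direction, normal)
    return hit + t * direction

ideal_hit = np.array([0.0, 0.0, 10.0])            # cm
track_dir = np.array([0.1, 0.0, 1.0])
track_dir /= np.linalg.norm(track_dir)
normal = np.array([0.0, 0.0, 1.0])
plane_point = np.array([0.0, 0.0, 10.002])        # plane shifted 20 microns
new_hit = project_to_plane(ideal_hit, track_dir, plane_point, normal)
print("hit moves by", new_hit - ideal_hit)
```

    Applying the transform to the volume itself instead, as the paper proposes, keeps this displacement consistent for both simulation and reconstruction when the shift is no longer negligible relative to the sensor size.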

  11. Communication: Calculation of interatomic forces and optimization of molecular geometry with auxiliary-field quantum Monte Carlo

    Science.gov (United States)

    Motta, Mario; Zhang, Shiwei

    2018-05-01

    We propose an algorithm for accurate, systematic, and scalable computation of interatomic forces within the auxiliary-field quantum Monte Carlo (AFQMC) method. The algorithm relies on the Hellmann-Feynman theorem and incorporates Pulay corrections in the presence of atomic orbital basis sets. We benchmark the method for small molecules by comparing the computed forces with the derivatives of the AFQMC potential energy surface and by direct comparison with other quantum chemistry methods. We then perform geometry optimizations using the steepest descent algorithm in larger molecules. With realistic basis sets, we obtain equilibrium geometries in agreement, within statistical error bars, with experimental values. The increase in computational cost for computing forces in this approach is only a small prefactor over that of calculating the total energy. This paves the way for a general and efficient approach for geometry optimization and molecular dynamics within AFQMC.
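    The optimization step can be illustrated with a toy model (this is not AFQMC): steepest descent on a harmonic bond when the force, like a Monte Carlo force, carries statistical noise, so the geometry converges to the minimum only within an error bar.

```python
import numpy as np

# Toy steepest-descent geometry optimization of a diatomic bond length
# with a noisy force, mimicking statistical noise in Monte Carlo forces.
# Harmonic potential with minimum at r0; all parameters are invented.

rng = np.random.default_rng(2)
r0, k = 1.10, 40.0                      # equilibrium length, force constant

def noisy_force(r, sigma=0.5):
    return -k * (r - r0) + rng.normal(0.0, sigma)

r, step = 1.50, 0.01
history = []
for _ in range(200):
    r += step * noisy_force(r)          # steepest descent along the force
    history.append(r)

# average the tail: the geometry fluctuates around r0 within an error bar
tail = np.array(history[100:])
sem = tail.std(ddof=1) / np.sqrt(len(tail))  # naive SEM, ignores correlation
print(f"estimated bond length {tail.mean():.3f} +/- {sem:.3f}")
```

    In practice, correlated-sample error analysis (e.g. blocking) would replace the naive standard error used here.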

  12. Laboratory errors and patient safety.

    Science.gov (United States)

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the commonly encountered laboratory errors throughout our practice in laboratory work, their hazards to patient health care, and some measures and recommendations to minimize or eliminate these errors. The laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of one of the private hospitals in Egypt. Errors were classified according to the laboratory phases and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed that the total number of encountered errors was 14 tests (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent, respectively, of total errors), while the number of test errors encountered in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors were of non-significant implication for patient health, being detected before test reports had been submitted to the patients. On the other hand, the number of test errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have an impact on patient diagnosis. The findings of this study are consistent with those published from the USA and other countries. This proves that laboratory problems are universal and need general standardization and benchmarking measures. Original being the first data published from Arabic countries that

  13. Discretisation errors in Landau gauge on the lattice

    International Nuclear Information System (INIS)

    Bonnet, Frederic D.R.; Bowman, Patrick O.; Leinweber, Derek B.; Williams, Anthony G.; Richards, David G.

    1999-01-01

    Lattice discretization errors in the Landau gauge condition are examined. An improved gauge fixing algorithm in which O(a²) errors are removed is presented. O(a²) improvement of the gauge fixing condition improves comparison with continuum Landau gauge in two ways: (1) through the elimination of O(a²) errors and (2) through a secondary effect of reducing the size of higher-order errors. These results emphasize the importance of implementing an improved gauge fixing condition.

  14. Influence of Daily Set-Up Errors on Dose Distribution During Pelvis Radiotherapy

    International Nuclear Information System (INIS)

    Kasabasic, M.; Ivkovic, A.; Faj, D.; Rajevac, V.; Sobat, H.; Jurkovic, S.

    2011-01-01

    An external beam radiotherapy (EBRT) using a megavoltage beam of a linear accelerator is usually the treatment of choice for cancer patients. The goal of EBRT is to deliver the prescribed dose to the target volume, with as low as possible dose to the surrounding healthy tissue. The large number of procedures and different professions involved in the radiotherapy process, uncertainty of equipment and daily patient set-up errors can cause a difference between the planned and delivered dose. We investigated the part of this difference caused by daily patient set-up errors. Daily set-up errors for 35 patients were measured. These set-up errors were simulated on 5 patients, using 3D treatment planning software XiO (CMS Inc., St. Louis, MO). The differences in dose distributions between the planned and shifted "geometry" were investigated. Additionally, the influence of the error on treatment plan selection was checked by analyzing the change in dose volume histograms, planning target volume conformity index (CI PTV) and homogeneity index (HI). Simulations showed that patient daily set-up errors can cause significant differences between the planned and actual dose distributions. Moreover, for some patients those errors could influence the choice of treatment plan, since CI PTV fell under 97%. Surprisingly, HI was not as sensitive as CI PTV to set-up errors. The results showed the need for minimizing daily set-up errors by a quality assurance programme. (author)

  15. Multimodal system designed to reduce errors in recording and administration of drugs in anaesthesia: prospective randomised clinical evaluation.

    Science.gov (United States)

    Merry, Alan F; Webster, Craig S; Hannam, Jacqueline; Mitchell, Simon J; Henderson, Robert; Reid, Papaarangi; Edwards, Kylie-Ellen; Jardim, Anisoara; Pak, Nick; Cooper, Jeremy; Hopley, Lara; Frampton, Chris; Short, Timothy G

    2011-09-22

    To clinically evaluate a new patented multimodal system (SAFERSleep) designed to reduce errors in the recording and administration of drugs in anaesthesia. Prospective randomised open label clinical trial. Five designated operating theatres in a major tertiary referral hospital. Eighty-nine consenting anaesthetists managing 1075 cases in which there were 10,764 drug administrations. Use of the new system (which includes customised drug trays and purpose designed drug trolley drawers to promote a well organised anaesthetic workspace and aseptic technique; pre-filled syringes for commonly used anaesthetic drugs; large legible colour coded drug labels; a barcode reader linked to a computer, speakers, and touch screen to provide automatic auditory and visual verification of selected drugs immediately before each administration; automatic compilation of an anaesthetic record; an on-screen and audible warning if an antibiotic has not been administered within 15 minutes of the start of anaesthesia; and certain procedural rules, notably scanning the label before each drug administration) versus conventional practice in drug administration with a manually compiled anaesthetic record. Primary: composite of errors in the recording and administration of intravenous drugs detected by direct observation and by detailed reconciliation of the contents of used drug vials against recorded administrations; and lapses in responding to an intermittent visual stimulus (vigilance latency task). Secondary: outcomes in patients; analyses of anaesthetists' tasks and assessments of workload; evaluation of the legibility of anaesthetic records; evaluation of compliance with the procedural rules of the new system; and questionnaire based ratings of the respective systems by participants. 
The overall mean rate of drug errors per 100 administrations was 9.1 (95% confidence interval 6.9 to 11.4) with the new system (one in 11 administrations) and 11.6 (9.3 to 13.9) with conventional methods (one

  16. Indoor Localization and Radio Map Estimation using Unsupervised Manifold Alignment with Geometry Perturbation

    KAUST Repository

    Majeed, Khaqan

    2015-12-22

    The Received Signal Strength (RSS) based fingerprinting approaches for indoor localization pose a need for updating the fingerprint databases due to the dynamic nature of the indoor environment. This process is tedious and time-consuming when the size of the indoor area is large. The semi-supervised approaches reduce this workload and achieve good accuracy with around 15% of the fingerprinting load, but the performance is severely degraded if it is reduced below this level. We propose an indoor localization framework that uses unsupervised manifold alignment. It requires only 1% of the fingerprinting load, some crowd-sourced readings and the plan coordinates of the indoor area. The 1% fingerprinting load is used only in perturbing the local geometries of the plan coordinates. The proposed framework achieves less than 5 m mean localization error, which is considerably better than semi-supervised approaches at a very small amount of fingerprinting load. In addition, the few location estimations together with few fingerprints help to estimate the complete radio map of the indoor environment. The estimation of the radio map does not demand extra workload; rather, it employs the already available information from the proposed indoor localization framework. The testing results for radio map estimation show almost 50% performance improvement by using this information as compared to using only fingerprints.

  17. Indoor Localization and Radio Map Estimation using Unsupervised Manifold Alignment with Geometry Perturbation

    KAUST Repository

    Majeed, Khaqan; Sorour, Sameh; Al-Naffouri, Tareq Y.; Valaee, Shahrokh

    2015-01-01

    The Received Signal Strength (RSS) based fingerprinting approaches for indoor localization require periodic updating of the fingerprint databases due to the dynamic nature of the indoor environment. This process is laborious and time-consuming when the indoor area is large. Semi-supervised approaches reduce this workload and achieve good accuracy with around 15% of the fingerprinting load, but performance degrades severely if the load is reduced below this level. We propose an indoor localization framework that uses unsupervised manifold alignment. It requires only 1% of the fingerprinting load, some crowd-sourced readings, and plan coordinates of the indoor area. The 1% fingerprinting load is used only to perturb the local geometries of the plan coordinates. The proposed framework achieves less than 5 m mean localization error, which is considerably better than semi-supervised approaches at such a small fingerprinting load. In addition, the few location estimates together with the few fingerprints help to estimate the complete radio map of the indoor environment. The radio map estimation demands no extra workload; rather, it employs information already available from the proposed indoor localization framework. The testing results for radio map estimation show almost 50% performance improvement from using this information compared to using only fingerprints.
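    For context, the baseline that manifold alignment improves upon is plain fingerprint matching; a minimal k-nearest-neighbour RSS fingerprinting sketch (the access-point readings and positions are invented, and this is not the paper's manifold-alignment method):

```python
import math

def knn_localize(fingerprints, rss_query, k=3):
    """Baseline RSS fingerprinting: average the coordinates of the k
    fingerprints whose RSS vectors are closest to the query (Euclidean)."""
    ranked = sorted(fingerprints,
                    key=lambda fp: math.dist(fp["rss"], rss_query))[:k]
    x = sum(fp["xy"][0] for fp in ranked) / len(ranked)
    y = sum(fp["xy"][1] for fp in ranked) / len(ranked)
    return (x, y)

# Toy database: RSS readings (dBm) from 3 access points at known positions.
db = [
    {"xy": (0.0, 0.0),  "rss": [-40, -70, -80]},
    {"xy": (5.0, 0.0),  "rss": [-55, -55, -75]},
    {"xy": (10.0, 0.0), "rss": [-70, -40, -70]},
    {"xy": (5.0, 5.0),  "rss": [-60, -60, -60]},
]
est = knn_localize(db, [-52, -58, -72], k=2)
```

Every entry in `db` must be surveyed by hand, which is exactly the fingerprinting load the paper's method reduces to 1%.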

  18. Geometry essentials for dummies

    CERN Document Server

    Ryan, Mark

    2011-01-01

    Just the critical concepts you need to score high in geometry This practical, friendly guide focuses on critical concepts taught in a typical geometry course, from the properties of triangles, parallelograms, circles, and cylinders, to the skills and strategies you need to write geometry proofs. Geometry Essentials For Dummies is perfect for cramming or doing homework, or as a reference for parents helping kids study for exams. Get down to the basics - get a handle on the basics of geometry, from lines, segments, and angles, to vertices, altitudes, and diagonals Conque

  19. Systematics of IIB spinorial geometry

    OpenAIRE

    Gran, U.; Gutowski, J.; Papadopoulos, G.; Roest, D.

    2005-01-01

    We reduce the classification of all supersymmetric backgrounds of IIB supergravity to the evaluation of the Killing spinor equations and their integrability conditions, which contain the field equations, on five types of spinors. This extends the work of [hep-th/0503046] to IIB supergravity. We give the expressions of the Killing spinor equations on all five types of spinors. In this way, the Killing spinor equations become a linear system for the fluxes, geometry and spacetime derivatives of...

  20. Use of FMEA analysis to reduce risk of errors in prescribing and administering drugs in paediatric wards: a quality improvement report.

    Science.gov (United States)

    Lago, Paola; Bizzarri, Giancarlo; Scalzotto, Francesca; Parpaiola, Antonella; Amigoni, Angela; Putoto, Giovanni; Perilongo, Giorgio

    2012-01-01

    Administering medication to hospitalised infants and children is a complex process at high risk of error. Failure mode and effect analysis (FMEA) is a proactive tool used to analyse risks, identify failures before they happen and prioritise remedial measures. To examine the hazards associated with the process of drug delivery to children, we performed a proactive risk-assessment analysis. Five multidisciplinary teams, representing different divisions of the paediatric department at Padua University Hospital, were trained to analyse the drug-delivery process, to identify possible causes of failures and their potential effects, to calculate a risk priority number (RPN) for each failure and to plan changes in practices. The aims were to identify higher-priority potential failure modes as defined by RPNs and to plan changes in clinical practice to reduce the risk of patient harm and improve safety in the process of medication use in children. In all, 37 higher-priority potential failure modes and 71 associated causes and effects were identified. The highest RPNs (>48) related mainly to errors in calculating drug doses and concentrations. Many of these failure modes were found in all five units, suggesting the presence of common targets for improvement, particularly in enhancing the safety of prescription and preparation of intravenous drugs. The introduction of new activities into the revised process of administering drugs reduced the high-risk failure modes by 60%. FMEA is an effective proactive risk-assessment tool, useful for helping multidisciplinary groups understand a care process, identify errors that may occur, prioritise remedial interventions and possibly enhance the safety of drug delivery in children.
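    The RPN prioritisation at the core of FMEA is simple to mechanise; a toy sketch with invented severity/occurrence/detectability scores (the >48 action threshold is the one mentioned above):

```python
# Each failure mode is scored 1-10 on severity (S), occurrence (O),
# and detectability (D); RPN = S * O * D. Scores are invented here.
failure_modes = [
    ("dose miscalculation",       8, 4, 5),
    ("wrong dilution of IV drug", 7, 3, 4),
    ("illegible prescription",    5, 5, 2),
    ("look-alike drug picked",    6, 2, 3),
]

def rpn(mode):
    _name, s, o, d = mode
    return s * o * d

# Rank failure modes and flag those above the action threshold (RPN > 48).
ranked = sorted(failure_modes, key=rpn, reverse=True)
high_priority = [m[0] for m in ranked if rpn(m) > 48]
```

In practice the scores come from structured team consensus, and the threshold is a policy choice, not a statistical one.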

  1. VoxelMages: a general-purpose graphical interface for designing geometries and processing DICOM images for PENELOPE.

    Science.gov (United States)

    Giménez-Alventosa, V; Ballester, F; Vijande, J

    2016-12-01

    The design and construction of geometries for Monte Carlo calculations is an error-prone, time-consuming, and complex step in simulations describing particle interactions and transport in the field of medical physics. The software VoxelMages has been developed to help the user in this task. It allows the user to design complex geometries and to process DICOM image files for simulations with the general-purpose Monte Carlo code PENELOPE in an easy and straightforward way. VoxelMages also allows the user to import DICOM-RT structure contour information as delivered by a treatment planning system. Its main characteristics, usage and performance benchmarking are described in detail. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Effects of confinement, geometry, inlet velocity profile, and Reynolds number on the asymmetry of opposed-jet flows

    Science.gov (United States)

    Ansari, Abtin; Chen, Kevin K.; Burrell, Robert R.; Egolfopoulos, Fokion N.

    2018-04-01

    The opposed-jet counterflow configuration is widely used to measure fundamental flame properties that are essential targets for validating chemical kinetic models. The main and key assumption of the counterflow configuration in laminar flame experiments is that the flow field is steady and quasi-one-dimensional. In this study, experiments and numerical simulations were carried out to investigate the behavior and controlling parameters of counterflowing isothermal air jets for various nozzle designs, Reynolds numbers, and surrounding geometries. The flow field in the jets' impingement region was analyzed in search of instabilities, asymmetries, and two-dimensional effects that can introduce errors when the data are compared with results of quasi-one-dimensional simulations. The modeling involved transient axisymmetric numerical simulations along with bifurcation analysis, which revealed that when the flow field is confined between walls, local bifurcation occurs, which in turn results in asymmetry, deviation from the one-dimensional assumption, and sensitivity of the flow field structure to boundary conditions and surrounding geometry. Particle image velocimetry was utilized and results revealed that for jets of equal momenta at low Reynolds numbers of the order of 300, the flow field is asymmetric with respect to the middle plane between the nozzles even in the absence of confining walls. The asymmetry was traced to the asymmetric nozzle exit velocity profiles caused by unavoidable imperfections in the nozzle assembly. The asymmetry was not detectable at high Reynolds numbers of the order of 1000 due to the reduced sensitivity of the flow field to boundary conditions. 
The cases investigated computationally covered a wide range of Reynolds numbers to identify designs that are minimally affected by errors in the experimental procedures or manufacturing imperfections, and the simulations results were used to identify conditions that best conform to the assumptions of

  3. Medication error detection in two major teaching hospitals: What are the types of errors?

    Directory of Open Access Journals (Sweden)

    Fatemeh Saghafi

    2014-01-01

    Full Text Available Background: The increasing number of reports on medication errors and the relevant subsequent damages, especially in medical centers, has become a growing concern for patient safety in recent decades. Patient safety, and in particular medication safety, is a major concern and challenge for health care professionals around the world. Our prospective study was designed to detect prescribing, transcribing, dispensing, and administering medication errors in two major university hospitals. Materials and Methods: After choosing 20 similar hospital wards in two large teaching hospitals in the city of Isfahan, Iran, the sequence was randomly selected. Diagrams for drug distribution were drawn with the help of the pharmacy directors. Direct observation was chosen as the method for detecting the errors. A total of 50 doses were studied in each ward to detect prescribing, transcribing and administering errors. Dispensing errors were studied on 1000 doses dispensed in each hospital pharmacy. Results: A total of 8162 doses of medications were studied during the four stages, of which 8000 yielded complete data for analysis. 73% of prescribing orders were incomplete and did not have all six parameters (name, dosage form, dose, measuring unit, administration route, and intervals of administration). We found 15% transcribing errors. On average, one third of medication administrations were erroneous in both hospitals. Dispensing errors ranged between 1.4% and 2.2%. Conclusion: Although prescribing and administering comprise most of the medication errors, improvements are needed in all four stages. Clear guidelines must be written and executed in both hospitals to reduce the incidence of medication errors.

  4. Hyperbolic geometry of Kuramoto oscillator networks

    Science.gov (United States)

    Chen, Bolun; Engelbrecht, Jan R.; Mirollo, Renato

    2017-09-01

    Kuramoto oscillator networks have the special property that their trajectories are constrained to lie on the (at most) 3D orbits of the Möbius group acting on the state space T^N (the N-fold torus). This result has been used to explain the existence of the N-3 constants of motion discovered by Watanabe and Strogatz for Kuramoto oscillator networks. In this work we investigate geometric consequences of this Möbius group action. The dynamics of Kuramoto phase models can be further reduced to 2D reduced group orbits, which have a natural geometry equivalent to the unit disk
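    A minimal simulation of the underlying Kuramoto model illustrates the dynamics that the geometric reduction describes; this sketch uses plain Euler integration and the standard order parameter, not the Möbius-group reduction itself, and all parameters are illustrative:

```python
import math, random

def kuramoto_step(theta, omega, K, dt):
    """One explicit Euler step of the Kuramoto model
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(theta)
    return [
        (th + dt * (om + K / n * sum(math.sin(tj - th) for tj in theta)))
        % (2 * math.pi)
        for th, om in zip(theta, omega)
    ]

def order_parameter(theta):
    """|r| where r e^{i psi} = (1/N) sum_j e^{i theta_j}; 1 = full sync."""
    n = len(theta)
    return math.hypot(sum(math.cos(t) for t in theta) / n,
                      sum(math.sin(t) for t in theta) / n)

random.seed(0)
n = 20
theta = [random.uniform(0, 2 * math.pi) for _ in range(n)]
omega = [random.gauss(0.0, 0.1) for _ in range(n)]
for _ in range(2000):
    theta = kuramoto_step(theta, omega, K=2.0, dt=0.01)
r = order_parameter(theta)  # strong coupling drives r toward 1
```

The Watanabe-Strogatz result says trajectories like this one are confined to low-dimensional group orbits regardless of N.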

  5. Robust simulation of buckled structures using reduced order modeling

    International Nuclear Information System (INIS)

    Wiebe, R.; Perez, R.A.; Spottswood, S.M.

    2016-01-01

    Lightweight metallic structures are a mainstay in aerospace engineering. For these structures, stability, rather than strength, is often the critical limit state in design. For example, buckling of panels and stiffeners may occur during emergency high-g maneuvers, while in supersonic and hypersonic aircraft, it may be induced by thermal stresses. The longstanding solution to such challenges was to increase the sizing of the structural members, which is counter to the ever present need to minimize weight for reasons of efficiency and performance. In this work we present some recent results in the area of reduced order modeling of post-buckled thin beams. A thorough parametric study of the response of a beam to changing harmonic loading parameters, which is useful in exposing complex phenomena and exercising numerical models, is presented. Two error metrics that use, but require no time stepping of, a (computationally expensive) truth model are also introduced. The error metrics are applied to several interesting forcing parameter cases identified from the parametric study and are shown to yield useful information about the quality of a candidate reduced order model. Parametric studies, especially when considering forcing and structural geometry parameters, coupled environments, and uncertainties would be computationally intractable with finite element models. The goal is to make rapid simulation of complex nonlinear dynamic behavior possible for distributed systems via fast and accurate reduced order models. This ability is crucial in allowing designers to rigorously probe the robustness of their designs to account for variations in loading, structural imperfections, and other uncertainties. (paper)

  6. Robust simulation of buckled structures using reduced order modeling

    Science.gov (United States)

    Wiebe, R.; Perez, R. A.; Spottswood, S. M.

    2016-09-01

    Lightweight metallic structures are a mainstay in aerospace engineering. For these structures, stability, rather than strength, is often the critical limit state in design. For example, buckling of panels and stiffeners may occur during emergency high-g maneuvers, while in supersonic and hypersonic aircraft, it may be induced by thermal stresses. The longstanding solution to such challenges was to increase the sizing of the structural members, which is counter to the ever present need to minimize weight for reasons of efficiency and performance. In this work we present some recent results in the area of reduced order modeling of post-buckled thin beams. A thorough parametric study of the response of a beam to changing harmonic loading parameters, which is useful in exposing complex phenomena and exercising numerical models, is presented. Two error metrics that use, but require no time stepping of, a (computationally expensive) truth model are also introduced. The error metrics are applied to several interesting forcing parameter cases identified from the parametric study and are shown to yield useful information about the quality of a candidate reduced order model. Parametric studies, especially when considering forcing and structural geometry parameters, coupled environments, and uncertainties would be computationally intractable with finite element models. The goal is to make rapid simulation of complex nonlinear dynamic behavior possible for distributed systems via fast and accurate reduced order models. This ability is crucial in allowing designers to rigorously probe the robustness of their designs to account for variations in loading, structural imperfections, and other uncertainties.
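    The core idea of a reduced order model, projecting the full state onto a few dominant modes and checking a reconstruction error metric, can be sketched in a few lines; the snapshot data here is a toy rank-1 family, not the authors' post-buckled beam model:

```python
import math

def dominant_mode(snapshots, iters=200):
    """Dominant POD mode of a list of snapshot vectors, via power
    iteration on the snapshot correlation (method of snapshots)."""
    n = len(snapshots[0])
    v = [1.0] * n
    for _ in range(iters):
        # Apply C = sum_k s_k s_k^T to v without forming C explicitly.
        w = [0.0] * n
        for s in snapshots:
            c = sum(si * vi for si, vi in zip(s, v))
            for i in range(n):
                w[i] += c * s[i]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

def rom_error(snapshot, mode):
    """Relative L2 error of the rank-1 reduced reconstruction."""
    a = sum(si * mi for si, mi in zip(snapshot, mode))
    resid = [si - a * mi for si, mi in zip(snapshot, mode)]
    return (math.sqrt(sum(r * r for r in resid))
            / math.sqrt(sum(s * s for s in snapshot)))

# Toy snapshots: scaled copies of one spatial shape (a rank-1 family),
# so a single mode should reconstruct them almost exactly.
shape = [math.sin(math.pi * i / 10) for i in range(11)]
snaps = [[c * x for x in shape] for c in (0.5, 1.0, 2.0)]
mode = dominant_mode(snaps)
err = rom_error(snaps[1], mode)
```

Real post-buckled responses are not rank-1, so several modes and a nonlinear projection are needed; the sketch only shows the mode-extraction and error-metric pattern.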

  7. An improved injector bunching geometry for ATLAS

    Indian Academy of Sciences (India)

    This geometry improves the handling of space charge for high-current beams, significantly increases the capture fraction into the primary rf bucket and reduces the capture fraction of the unwanted parasitic rf bucket. Total capture and transport through the PII has been demonstrated as high as 80% of the injected dc beam ...

  8. Properties of the center of gravity as an algorithm for position measurements: Two-dimensional geometry

    CERN Document Server

    Landi, Gregorio

    2003-01-01

    The center of gravity as an algorithm for position measurements is analyzed for a two-dimensional geometry. Several mathematical consequences of discretization for various types of detector arrays are extracted. Arrays with rectangular, hexagonal, and triangular detectors are analytically studied, and tools are given to simulate their discretization properties. Special signal distributions free of discretization error are isolated. It is proved that some crosstalk spreads are able to eliminate the center of gravity discretization error for any signal distribution. Simulations, adapted to the CMS em-calorimeter and to a triangular detector array, are provided for energy and position reconstruction algorithms with a finite number of detectors.
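    A one-dimensional version of the centre-of-gravity estimator makes the discretization issue concrete; the strip pitch, Gaussian width, and true position below are arbitrary illustrative choices:

```python
import math

def center_of_gravity(signals, pitch=1.0):
    """Centroid position from signals sampled on a uniform strip array,
    with strip centres at pitch * index."""
    total = sum(signals)
    return sum(pitch * i * s for i, s in enumerate(signals)) / total

# Gaussian-like signal centred at the true position 2.3 (strip units):
true_x = 2.3
signals = [math.exp(-((i - true_x) ** 2) / (2 * 0.8 ** 2)) for i in range(6)]
est = center_of_gravity(signals)
bias = est - true_x  # systematic discretization error of the estimator
```

For narrow signal distributions the bias grows and oscillates with the sub-strip position of `true_x`; the paper's point is that particular signal shapes (or crosstalk) can cancel it exactly.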

  9. Discretisation errors in Landau gauge on the lattice

    International Nuclear Information System (INIS)

    Bonnet, F.D.R.; Bowmen, P.O.; Leinweber, D.B.

    1999-01-01

    Lattice discretisation errors in the Landau gauge condition are examined. An improved gauge fixing algorithm in which O(a^2) errors are removed is presented. O(a^2) improvement of the gauge fixing condition improves comparison with the continuum Landau gauge in two ways: (1) through the elimination of O(a^2) errors and (2) through a secondary effect of reducing the size of higher-order errors. These results emphasise the importance of implementing an improved gauge fixing condition. Copyright (1999) CSIRO Australia
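    The same improvement idea, removing the leading O(a^2) discretisation error with a higher-order stencil, can be seen in a scalar toy problem (a finite-difference derivative, not lattice gauge fixing):

```python
import math

def d1_standard(f, x, a):
    """Central difference; leading error O(a^2)."""
    return (f(x + a) - f(x - a)) / (2 * a)

def d1_improved(f, x, a):
    """Five-point stencil that cancels the O(a^2) term, leaving O(a^4) --
    the same improvement principle as the gauge fixing condition."""
    return (8 * (f(x + a) - f(x - a))
            - (f(x + 2 * a) - f(x - 2 * a))) / (12 * a)

x, exact = 1.0, math.cos(1.0)   # d/dx sin(x) = cos(x)
err_std = {a: abs(d1_standard(math.sin, x, a) - exact) for a in (0.1, 0.05)}
err_imp = {a: abs(d1_improved(math.sin, x, a) - exact) for a in (0.1, 0.05)}
# Halving a should cut the standard error ~4x and the improved error ~16x.
ratio_std = err_std[0.1] / err_std[0.05]
ratio_imp = err_imp[0.1] / err_imp[0.05]
```

The observed convergence ratios (about 4 and 16) confirm the O(a^2) and O(a^4) scaling, which is how improvement is usually verified on the lattice as well.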

  10. [Medical errors: inevitable but preventable].

    Science.gov (United States)

    Giard, R W

    2001-10-27

    Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention for and improvement of the organisational aspects of error are far more important than litigating the person. To err is and will remain human and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.

  11. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  12. About errors, inaccuracies and sterotypes: Mistakes in media coverage - and how to reduce them

    Science.gov (United States)

    Scherzler, D.

    2010-12-01

    The main complaint made by scientists about the work of journalists is that there are mistakes and inaccuracies in TV programmes, radio or the print media. This seems to be an important reason why too few researchers want to deal with journalists. Such scientists regularly discover omissions, errors, exaggerations, distortions, stereotypes and sensationalism in the media. Surveys carried out in so-called accuracy research seem to concede this point as well. Errors frequently occur in journalism, and it is the task of the editorial offices to work very hard to keep the number of errors as low as possible. On closer inspection some errors, however, turn out to be simplifications and omissions. Both are obligatory in journalism and do not automatically cause factual errors. This paper examines the different kinds of mistakes and misleading information that scientists observe in the mass media. By giving a view from inside the mass media it tries to explain how errors come to exist in journalists' working routines. It outlines that the criteria of journalistic quality which scientists and science journalists apply differ substantially. The expectation of many scientists is that good science journalism passes on their results to the public in as “unadulterated” a form as possible. The author suggests, however, that quality criteria for journalism cannot be derived from how true to detail and how comprehensively it reports on science, nor to what extent the journalistic presentation is “correct” in the eyes of the researcher. The paper suggests in its main part that scientists who are contacted or interviewed by the mass media should not accept that errors just happen. On the contrary, they can do a lot to help prevent mistakes that might occur in the journalistic product. The author proposes several strategies for how scientists and press information officers could identify possible errors, stereotypes and exaggeration by journalists in advance and

  13. Complex analysis and geometry

    CERN Document Server

    Silva, Alessandro

    1993-01-01

    The papers in this wide-ranging collection report on the results of investigations from a number of linked disciplines, including complex algebraic geometry, complex analytic geometry of manifolds and spaces, and complex differential geometry.

  14. (How) do we learn from errors? A prospective study of the link between the ward's learning practices and medication administration errors.

    Science.gov (United States)

    Drach-Zahavy, A; Somech, A; Admi, H; Peterfreund, I; Peker, H; Priente, O

    2014-03-01

    Attention in the ward should shift from preventing medication administration errors to managing them. Nevertheless, little is known about the practices nursing wards apply to learn from medication administration errors as a means of limiting them. To test the effectiveness of four types of learning practices, namely non-integrated, integrated, supervisory and patchy learning practices, in limiting medication administration errors. Data were collected from a convenience sample of 4 hospitals in Israel by multiple methods (observations and self-report questionnaires) at two time points. The sample included 76 wards (360 nurses). Medication administration error was defined as any deviation from prescribed medication processes and measured by a validated structured observation sheet. Wards' use of medication administration technologies, location of the medication station, and workload were observed; learning practices and demographics were measured by validated questionnaires. Results of the mixed linear model analysis indicated that the use of technology and quiet location of the medication cabinet were significantly associated with reduced medication administration errors (estimate=.03, p<.05), while workload was linked to increased errors (estimate=.04, p<.05). Of the learning practices, supervisory learning was the only practice significantly linked to reduced medication administration errors (estimate=-.04, p<.05); non-integrated and patchy learning were significantly linked to higher levels of medication administration errors (estimate=-.03, p<.05), while integrated learning was not associated with them (p>.05). How wards manage errors might have implications for medication administration errors beyond the effects of typical individual, organizational and technology risk factors. Head nurses can facilitate learning from errors by "management by walking around" and monitoring nurses' medication administration behaviors. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. The Influence of Gaussian Signaling Approximation on Error Performance in Cellular Networks

    KAUST Repository

    Afify, Laila H.; Elsawy, Hesham; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2015-01-01

    Stochastic geometry analysis for cellular networks is mostly limited to outage probability and ergodic rate, which abstracts many important wireless communication aspects. Recently, a novel technique based on the Equivalent-in-Distribution (EiD) approach is proposed to extend the analysis to capture these metrics and analyze bit error probability (BEP) and symbol error probability (SEP). However, the EiD approach considerably increases the complexity of the analysis. In this paper, we propose an approximate yet accurate framework, that is also able to capture fine wireless communication details similar to the EiD approach, but with simpler analysis. The proposed methodology is verified against the exact EiD analysis in both downlink and uplink cellular networks scenarios.

  16. The Influence of Gaussian Signaling Approximation on Error Performance in Cellular Networks

    KAUST Repository

    Afify, Laila H.

    2015-08-18

    Stochastic geometry analysis for cellular networks is mostly limited to outage probability and ergodic rate, which abstracts many important wireless communication aspects. Recently, a novel technique based on the Equivalent-in-Distribution (EiD) approach is proposed to extend the analysis to capture these metrics and analyze bit error probability (BEP) and symbol error probability (SEP). However, the EiD approach considerably increases the complexity of the analysis. In this paper, we propose an approximate yet accurate framework, that is also able to capture fine wireless communication details similar to the EiD approach, but with simpler analysis. The proposed methodology is verified against the exact EiD analysis in both downlink and uplink cellular networks scenarios.
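    As a point of reference for what BEP analysis computes, the textbook AWGN BPSK case can be checked by Monte Carlo; this is a standard baseline, not the EiD or Gaussian-signaling framework of the paper:

```python
import math, random

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_bep(snr_db):
    """Analytic bit error probability of BPSK over AWGN: Q(sqrt(2*SNR))."""
    snr = 10 ** (snr_db / 10)
    return q_function(math.sqrt(2 * snr))

def simulate_bpsk(snr_db, nbits=200_000, seed=1):
    """Empirical BEP from a simple symbol-by-symbol simulation."""
    random.seed(seed)
    snr = 10 ** (snr_db / 10)
    sigma = math.sqrt(1 / (2 * snr))   # noise std for unit-energy symbols
    errs = 0
    for _ in range(nbits):
        bit = random.choice((-1.0, 1.0))
        rx = bit + random.gauss(0.0, sigma)
        errs += (rx > 0) != (bit > 0)
    return errs / nbits

analytic = bpsk_bep(4.0)
empirical = simulate_bpsk(4.0)
```

Stochastic-geometry analysis replaces the fixed SNR here with a distribution over random interferer locations, which is where the EiD machinery (or the proposed approximation) comes in.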

  17. Hermeticity of three cryogenic calorimeter geometries

    International Nuclear Information System (INIS)

    Strovink, M.; Wormersley, W.J.; Forden, G.E.

    1989-04-01

    We calculate the effect of cracks and dead material on resolution in three simplified cryogenic calorimeter geometries, using a crude approximation that neglects transverse shower spreading and considers only a small set of incident angles. For each dead region, we estimate the average unseen energy using a shower parametrization, and relate it to resolution broadening using a simple approximation that agrees with experimental data. Making reasonable and consistent assumptions on cryostat wall thicknesses, we find that the effects of cracks and dead material dominate the expected resolution in the region where separate "barrel" and "end" cryostats meet. This is particularly true for one geometry in which the end calorimeter caps the barrel and also protrudes into the hole within it. We also find that carefully designed auxiliary "crack filler" detectors can substantially reduce the loss of resolution in these areas. 6 figs

  18. Geometry and Optics of the Electrostatic ELENA Transfer Lines

    CERN Document Server

    Vanbavinckhove, G; Barna, D; Bartmann, W; Butin, F; Choisnet, O; Yamada, H

    2013-01-01

    The future ELENA ring at CERN will decelerate the AD anti-proton beam further from 5.3 MeV to 100 keV kinetic energy, to increase the efficiency of anti-proton trapping. At present there are four experiments in the AD hall which will be complemented with the installation of ELENA by additional three experiments and an additional source for commissioning. This paper describes the optimization of the transfer line geometry, ring rotation and source position. The optics of the transfer lines and error studies to define field and alignment tolerances are shown, and the optics particularities of electrostatic elements and their optimization highlighted.

  19. Non-intercepted dose errors in prescribing anti-neoplastic treatment

    DEFF Research Database (Denmark)

    Mattsson, T O; Holm, B; Michelsen, H

    2015-01-01

    BACKGROUND: The incidence of non-intercepted prescription errors and the risk factors involved, including the impact of computerised order entry (CPOE) systems on such errors, are unknown. Our objective was to determine the incidence, type, severity, and related risk factors of non-intercepted pr....... Strategies to prevent future prescription errors could usefully focus on integrated computerised systems that can aid dose calculations and reduce transcription errors between databases....

  20. Errors in fracture diagnoses in the emergency department--characteristics of patients and diurnal variation

    DEFF Research Database (Denmark)

    Hallas, Peter; Ellingsen, Trond

    2006-01-01

    Evaluation of the circumstances related to errors in diagnosis of fractures at an Emergency Department may suggest ways to reduce the incidence of such errors.

  1. Resolution, coverage, and geometry beyond traditional limits

    Energy Technology Data Exchange (ETDEWEB)

    Ronen, Shuki; Ferber, Ralf

    1998-12-31

    The presentation relates to the optimization of the image of seismic data and improved resolution and coverage of acquired data. Non-traditional processing methods such as inversion to zero offset (IZO) are used. To realize the potential of saving acquisition cost by reducing in-fill and to plan resolution improvement by processing, geometry QC methods such as DMO Dip Coverage Spectrum (DDCS) and Bull's Eyes Analysis are used. The DDCS is a 2-D spectrum whose entries consist of the DMO (Dip Move Out) coverage for a particular reflector specified by its true time dip and reflector normal strike. The Bull's Eyes Analysis relies on real time processing of synthetic data generated with the real geometry. 4 refs., 6 figs.

  2. Association of medication errors with drug classifications, clinical units, and consequence of errors: Are they related?

    Science.gov (United States)

    Muroi, Maki; Shen, Jay J; Angosta, Alona

    2017-02-01

    Registered nurses (RNs) play an important role in safe medication administration and patient safety. This study examined a total of 1276 medication error (ME) incident reports made by RNs in hospital inpatient settings in the southwestern region of the United States. The most common drug class associated with MEs was cardiovascular drugs (24.7%). Among this class, anticoagulants had the most errors (11.3%). Antimicrobials were the second most common drug class associated with errors (19.1%), and vancomycin was the most common antimicrobial that caused errors in this category (6.1%). MEs occurred more frequently in the medical-surgical and intensive care units than in any other hospital units. Ten percent of MEs reached the patients with harm and 11% reached the patients with increased monitoring. Understanding the contributing factors related to MEs, addressing and eliminating the risk of errors across hospital units, and providing education and resources for nurses may help reduce MEs. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Andrew B.; Franc, Benjamin L.; Gullberg, Grant T.; Hasegawa, Bruce H.

    2008-02-15

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy.
We also found that an improvement of the spatial resolution through the ...
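The attenuation magnitudes quoted above can be sanity-checked against the simple exponential attenuation law I = I0·exp(-μx). A minimal sketch, assuming approximate textbook narrow-beam attenuation coefficients for water and a hypothetical 2 cm depth (neither value is taken from the paper):

```python
import numpy as np

# Approximate narrow-beam linear attenuation coefficients of water
# (textbook values, not from the paper): ~0.15 /cm at 140 keV (Tc-99m)
# and ~0.38 /cm at ~30 keV (I-125).
mu_per_cm = {"Tc-99m": 0.15, "I-125": 0.38}
depth_cm = 2.0  # hypothetical depth to the centre of a rat-sized cylinder

# Fraction of photons lost to attenuation: 1 - exp(-mu * depth)
loss = {iso: 1.0 - np.exp(-mu * depth_cm) for iso, mu in mu_per_cm.items()}
```

For these illustrative numbers the loss is roughly 26% for Tc-99m and 53% for I-125, the same order as the up-to-25% and up-to-50% reductions reported above.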

  4. Review of pump suction reducer selection: Eccentric or concentric reducers

    OpenAIRE

    Mahaffey, R M; van Vuuren, S J

    2014-01-01

    Eccentric reducers are traditionally recommended for the pump suction reducer fitting to allow for transportation of air through the fitting to the pump. The ability of a concentric reducer to provide an improved approach flow to the pump while still allowing air to be transported through the fitting is investigated. Computational fluid dynamics (CFD) was utilised to analyse six concentric and six eccentric reducer geometries at four different inlet velocities to determine the flow velocity ...

  5. Development and application of α-hull and Voronoi diagrams in the assessment of roundness error

    International Nuclear Information System (INIS)

    Li, Xiuming; Liu, Hongqi; Li, Wei

    2011-01-01

    Computational geometry techniques have been used to select the effective data points from the measured points when evaluating the roundness error, in order to reduce the computational complexity. However, for precision parts most of the measured points lie on the vertices of the convex hull, so a Voronoi-diagram approach based on the convex hull does little to reduce the computational complexity. In this paper the roundness error is evaluated with the α-hull and the Voronoi diagram instead of the convex hull. An approach for constructing the α-hull with the minimum radius separation is presented to determine the vertices of the Voronoi diagram. The experimental results showed that the minimum zone circle roundness error can be solved efficiently with the α-hull and the Voronoi diagram.
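For reference, the minimum-zone roundness error evaluated here is the width of the thinnest annulus containing all measured points. It can be approximated by direct numerical minimization over the annulus centre; a sketch with SciPy (this is a brute-force stand-in, not the α-hull/Voronoi construction of the paper, and the five-lobed test profile is hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

def roundness_error(points):
    """Approximate minimum-zone roundness: the width (max radius - min
    radius) of the thinnest annulus containing all points, minimized
    numerically over the annulus centre."""
    def width(c):
        r = np.hypot(points[:, 0] - c[0], points[:, 1] - c[1])
        return r.max() - r.min()
    c0 = points.mean(axis=0)                     # centroid as starting guess
    res = minimize(width, c0, method="Nelder-Mead")
    return res.fun, res.x

# Hypothetical part: radius-10 circle with a +/-0.05 five-lobed form error
t = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
r = 10.0 + 0.05 * np.sin(5.0 * t)
pts = np.column_stack([r * np.cos(t), r * np.sin(t)])
err, centre = roundness_error(pts)               # err should be close to 0.1
```

The α-hull/Voronoi approach reaches the minimum-zone value far more efficiently by restricting attention to a small set of candidate annulus centres.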

  6. Numerical optimization of laboratory combustor geometry for NO suppression

    International Nuclear Information System (INIS)

    Mazaheri, Karim; Shakeri, Alireza

    2016-01-01

    Highlights: • A five-step kinetics for NO and CO prediction is extracted from the GRI-3.0 mechanism. • The accuracy and applicability of this kinetics for numerical optimization are shown. • The optimized geometry for a combustor is determined using the combined process. • NO emission from the optimized geometry is found to be 10.3% lower than for the base geometry. - Abstract: In this article, geometry optimization of a jet stirred reactor (JSR) combustor has been carried out for minimum NO emissions in methane oxidation using a combined numerical algorithm based on computational fluid dynamics (CFD) and differential evolution (DE) optimization. The optimization algorithm is also used to find a fairly accurate reduced mechanism. The combustion kinetics is based on a five-step mechanism with 17 unknowns which is obtained using a DE optimization algorithm for a PSR–PFR reactor based on the GRI-3.0 full mechanism. The optimization design variables are the unknowns of the five-step mechanism and the cost function is the concentration difference of pollutants obtained from the five-step mechanism and the full mechanism. To validate the flow solver and the chemical kinetics, the computed NO at the outlet of the JSR is compared with experiments. To optimize the geometry of a combustor, the JSR combustor geometry is modeled using three parameters (i.e., design variables). An integrated approach using a flow solver and the DE optimization algorithm produces the lowest NO concentrations. Results show that the exhaust NO emission for the optimized geometry is 10.3% lower than for the original geometry, while the inlet temperature of the working fluid and the concentration of O₂ are operating constraints. In addition, the concentration of CO pollutant is also much less than in the original chamber.
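The coupling of a costly simulator to a DE optimizer can be sketched with SciPy's `differential_evolution`. Here a smooth toy function with a known minimum stands in for the CFD evaluation of NO at the JSR outlet; the three design variables, their bounds, and the optimum are all made up for illustration:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for the CFD evaluation: a smooth "NO emission" surface with
# a known minimum.  In the paper, each evaluation would run a JSR combustor
# simulation with the reduced five-step kinetics.
def no_emission(x):
    # x = (inlet diameter, chamber length, nozzle angle), arbitrary units
    return (x[0] - 1.2) ** 2 + (x[1] - 3.0) ** 2 + 0.5 * (x[2] - 0.4) ** 2

bounds = [(0.5, 2.0), (1.0, 5.0), (0.1, 1.0)]   # design-variable ranges
result = differential_evolution(no_emission, bounds, seed=1, tol=1e-8)
```

DE is attractive here because it needs no gradients of the cost function, which for a CFD-in-the-loop objective are unavailable or noisy.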

  7. Base data for looking-up tables of calculation errors in JACS code system

    International Nuclear Information System (INIS)

    Murazaki, Minoru; Okuno, Hiroshi

    1999-03-01

    The report intends to clarify the base data for the looking-up tables of calculation errors cited in the 'Nuclear Criticality Safety Handbook'. The tables were obtained by classifying the benchmarks made with the JACS code system, and there are two kinds: one for fuel systems in general geometry with a reflector, and another for fuel systems in simple geometry with a reflector. Benchmark systems were further categorized into eight groups according to the fuel configuration: homogeneous or heterogeneous; and fuel kind: uranium, plutonium and their mixtures, etc. The base data for fuel systems in general geometry with a reflector are summarized in this report for the first time. The base data for fuel systems in simple geometry with a reflector were summarized in a technical report published in 1987. However, the data in a group named homogeneous low-enriched uranium were further selected out later by the working group for making the Nuclear Criticality Safety Handbook. This report includes that selection. As a project has been organized by OECD/NEA for evaluation of criticality safety benchmark experiments, those results are also described. (author)

  8. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    International Nuclear Information System (INIS)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-01-01

    A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed "error indicators" (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a "local" regression model to predict the time-instantaneous error within each identified region of feature space. We consider two uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well...
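The core regression step, mapping cheap error indicators to a prediction of the surrogate error, can be sketched in a few lines. Ordinary least squares on synthetic data stands in here for the paper's random-forest/LASSO regressors and locality clustering; the features, coefficients, and noise level are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: each row holds cheap "error indicators" (features)
# produced by the surrogate; the target is the true surrogate-model error,
# computed offline by running both the high-fidelity and surrogate models.
n_train, n_feat = 200, 5
X = rng.normal(size=(n_train, n_feat))
w_true = np.array([0.8, -0.3, 0.0, 0.5, 0.1])      # made-up ground truth
y = X @ w_true + 0.01 * rng.normal(size=n_train)   # surrogate-model error

# Ordinary least squares as a stand-in for random forests / LASSO:
# map indicators -> predicted time-instantaneous error.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Use 1 of the error model: correct the surrogate's QoI prediction online.
x_new = rng.normal(size=n_feat)
predicted_error = x_new @ w
```

In the paper the regressor is nonlinear and fitted per region of feature space; the pattern of train-offline, predict-online is the same.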

  9. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  10. Impact of error fields on plasma identification in ITER

    Energy Technology Data Exchange (ETDEWEB)

    Martone, R., E-mail: Raffaele.Martone@unina2.it [Ass. EURATOM/ENEA/CREATE, Seconda Università di Napoli, Via Roma 29, Aversa (CE) (Italy); Appel, L. [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon (United Kingdom); Chiariello, A.G.; Formisano, A.; Mattei, M. [Ass. EURATOM/ENEA/CREATE, Seconda Università di Napoli, Via Roma 29, Aversa (CE) (Italy); Pironti, A. [Ass. EURATOM/ENEA/CREATE, Università degli Studi di Napoli “Federico II”, Via Claudio 25, Napoli (Italy)

    2013-10-15

    Highlights: ► The paper deals with the effect on plasma identification of error fields generated by field coils manufacturing and assembly errors. ► EFIT++ is used to identify plasma gaps when poloidal field coils and central solenoid coils are deformed, and the gaps sensitivity with respect to such errors is analyzed. ► Some examples of reconstruction errors in the presence of deformations are reported. -- Abstract: The active control of plasma discharges in present Tokamak devices must be prompt and accurate to guarantee expected performance. As a consequence, the identification step, calculating plasma parameters from diagnostics, should provide in a very short time reliable estimates of the relevant quantities, such as plasma centroid position, plasma-wall distances at given points called gaps, and other geometrical parameters such as elongation and triangularity. To achieve the desired response promptness, a number of simplifying assumptions are usually made in the identification algorithms. Among the assumptions clearly affecting the quality of the plasma parameter reconstruction, one of the most relevant is the assumed precise knowledge of the magnetic field produced by the active coils. Since uncertainties in their manufacturing and assembly process may cause misalignments between the actual and expected geometry and position of magnets, an analysis of the effect of possible wrong information about magnets on the plasma shape identification is documented in this paper.

  11. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
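The qualitative gap between coherent and Pauli-twirled noise shows up already for a single qubit under repeated over-rotation: rotation amplitudes add, so the flip probability grows quadratically in the cycle count, while a stochastic Pauli model accumulates probabilities linearly. A sketch (the rotation angle and cycle count are arbitrary; this is not the paper's repetition-code analysis):

```python
import numpy as np

eps = 0.01   # small coherent over-rotation per cycle (rad)
n = 50       # number of cycles

# Coherent accumulation: amplitudes add, so after n cycles the qubit
# flip probability is sin^2(n*eps/2) ~ (n*eps)^2 / 4.
p_coherent = np.sin(n * eps / 2.0) ** 2

# Pauli-twirled (stochastic) model: each cycle flips independently with
# p = sin^2(eps/2); probabilities, not amplitudes, accumulate.
p_flip = np.sin(eps / 2.0) ** 2
# probability of an odd number of flips after n independent cycles
p_pauli = (1.0 - (1.0 - 2.0 * p_flip) ** n) / 2.0
```

For these numbers the coherent flip probability exceeds the Pauli prediction by more than an order of magnitude, which is the kind of underestimate the Pauli approximation makes once coherent errors persist over many cycles.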

  12. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality

    Science.gov (United States)

    Bishara, Anthony J.; Hittner, James B.

    2015-01-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared…
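The distortion of Pearson's r under nonnormality can be demonstrated with a small Monte Carlo in the spirit of (but not reproducing) the paper's simulations; the sample size, population correlation, and choice of a lognormal transform are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n, reps = 0.5, 20, 5000   # illustrative settings, not the paper's

def sample_r(transform):
    """Mean and SD of the sample Pearson r over many small samples."""
    rs = np.empty(reps)
    for i in range(reps):
        # bivariate normal with population correlation rho
        z1 = rng.normal(size=n)
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.normal(size=n)
        rs[i] = np.corrcoef(transform(z1), transform(z2))[0, 1]
    return rs.mean(), rs.std()

mean_normal, sd_normal = sample_r(lambda z: z)   # normal data
mean_lognorm, sd_lognorm = sample_r(np.exp)      # lognormal (skewed) data
```

With skewed data the point estimates are attenuated and noticeably more variable than with normal data, the bias-and-error pattern the article examines.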

  13. Geometry

    CERN Document Server

    Pedoe, Dan

    1988-01-01

    ""A lucid and masterly survey."" - Mathematics Gazette Professor Pedoe is widely known as a fine teacher and a fine geometer. His abilities in both areas are clearly evident in this self-contained, well-written, and lucid introduction to the scope and methods of elementary geometry. It covers the geometry usually included in undergraduate courses in mathematics, except for the theory of convex sets. Based on a course given by the author for several years at the University of Minnesota, the main purpose of the book is to increase geometrical, and therefore mathematical, understanding and to he

  14. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Analysis of strain error sources in micro-beam Laue diffraction

    International Nuclear Information System (INIS)

    Hofmann, Felix; Eve, Sophie; Belnoue, Jonathan; Micha, Jean-Sébastien; Korsunsky, Alexander M.

    2011-01-01

    Micro-beam Laue diffraction is an experimental method that allows the measurement of local lattice orientation and elastic strain within individual grains of engineering alloys, ceramics, and other polycrystalline materials. Unlike other analytical techniques, e.g. based on electron microscopy, it is not limited to surface characterisation or thin sections, but rather allows non-destructive measurements in the material bulk. This is of particular importance for in situ loading experiments where the mechanical response of a material volume (rather than just surface) is studied and it is vital that no perturbation/disturbance is introduced by the measurement technique. Whilst the technique allows lattice orientation to be determined to a high level of precision, accurate measurement of elastic strains and estimating the errors involved is a significant challenge. We propose a simulation-based approach to assess the elastic strain errors that arise from geometrical perturbations of the experimental setup. Using an empirical combination rule, the contributions of different geometrical uncertainties to the overall experimental strain error are estimated. This approach was applied to the micro-beam Laue diffraction setup at beamline BM32 at the European Synchrotron Radiation Facility (ESRF). Using a highly perfect germanium single crystal, the mechanical stability of the instrument was determined and hence the expected strain errors predicted. Comparison with the actual strain errors found in a silicon four-point beam bending test showed good agreement. The simulation-based error analysis approach makes it possible to understand the origins of the experimental strain errors and thus allows a directed improvement of the experimental geometry to maximise the benefit in terms of strain accuracy.
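The abstract does not spell out the empirical combination rule; for independent geometrical perturbations the standard choice is to add the individual strain-error contributions in quadrature. A sketch with purely hypothetical contribution values (not measurements from the BM32 instrument):

```python
import math

# Hypothetical 1-sigma strain-error contributions (in units of 1e-4 strain)
# from independent geometrical perturbations of the Laue setup; the numbers
# are illustrative only.
contributions = {
    "detector distance": 0.6,
    "detector tilt": 0.9,
    "beam position": 0.4,
}

# Combination rule for independent error sources: sum in quadrature.
total = math.sqrt(sum(v ** 2 for v in contributions.values()))
```

A breakdown like this is what makes the directed improvement mentioned above possible: the largest single term (here the hypothetical detector tilt) identifies where stiffening the setup buys the most strain accuracy.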

  16. Visual correlation analytics of event-based error reports for advanced manufacturing

    OpenAIRE

    Nazir, Iqbal

    2017-01-01

    With the growing digitalization and automation in the manufacturing domain, an increasing amount of process data and error reports become available. To minimize the number of errors and maximize the efficiency of the production line, it is important to analyze the generated error reports and find solutions that can reduce future errors. However, not all errors are of equal importance, as some errors may be the result of previously occurring errors. Therefore, it is important for domain exper...

  17. Complex algebraic geometry

    CERN Document Server

    Kollár, János

    1997-01-01

    This volume contains the lectures presented at the third Regional Geometry Institute at Park City in 1993. The lectures provide an introduction to the subject, complex algebraic geometry, making the book suitable as a text for second- and third-year graduate students. The book deals with topics in algebraic geometry where one can reach the level of current research while starting with the basics. Topics covered include the theory of surfaces from the viewpoint of recent higher-dimensional developments, providing an excellent introduction to more advanced topics such as the minimal model program. Also included is an introduction to Hodge theory and intersection homology based on the simple topological ideas of Lefschetz and an overview of the recent interactions between algebraic geometry and theoretical physics, which involve mirror symmetry and string theory.

  18. CMS geometry through 2020

    International Nuclear Information System (INIS)

    Osborne, I; Brownson, E; Eulisse, G; Jones, C D; Sexton-Kennedy, E; Lange, D J

    2014-01-01

    CMS faces real challenges with the upgrade of the CMS detector through 2020 and beyond. One of the challenges, from the software point of view, is managing upgrade simulations with the same software release as the 2013 scenario. We present the CMS geometry description software model and its integration with the CMS event setup and core software. The CMS geometry configuration and selection is implemented in Python. The tools collect the Python configuration fragments into a script used in the CMS workflow. This flexible and automated geometry configuration allows choosing either a transient or a persistent version of a scenario, as well as a specific version of that scenario. We describe how the geometries are integrated and validated, and how we define and handle different geometry scenarios in simulation and reconstruction. We discuss how to transparently manage multiple incompatible geometries in the same software release. Several examples are shown based on the current implementation assuring a consistent choice of scenario conditions. The consequences and implications for multiple/different code algorithms are discussed.

  19. Analyzing Software Requirements Errors in Safety-Critical, Embedded Systems

    Science.gov (United States)

    Lutz, Robyn R.

    1993-01-01

    This paper analyzes the root causes of safety-related software errors in safety-critical, embedded systems. The results show that software errors identified as potentially hazardous to the system tend to be produced by different error mechanisms than non- safety-related software errors. Safety-related software errors are shown to arise most commonly from (1) discrepancies between the documented requirements specifications and the requirements needed for correct functioning of the system and (2) misunderstandings of the software's interface with the rest of the system. The paper uses these results to identify methods by which requirements errors can be prevented. The goal is to reduce safety-related software errors and to enhance the safety of complex, embedded systems.

  20. Classical A_n W-geometry

    International Nuclear Information System (INIS)

    Gervais, J.L.

    1993-01-01

    By analyzing the extrinsic geometry of two dimensional surfaces chirally embedded in CP^n (the CP^n W-surfaces), we give exact treatments of various aspects of the classical W-geometry in the conformal gauge: First, the bases of tangent and normal vectors are defined at regular points of the surface, such that their infinitesimal displacements are given by connections which coincide with the vector potentials of the (conformal) A_n-Toda Lax pair. Since the latter is known to be intrinsically related with the W symmetries, this gives the geometrical meaning of the A_n W-algebra. Second, W-surfaces are put in one-to-one correspondence with solutions of the conformally-reduced WZNW model, which is such that the Toda fields give the Cartan part in the Gauss decomposition of its solutions. Third, the additional variables of the Toda hierarchy are used as coordinates of CP^n. This allows us to show that W-transformations may be extended as particular diffeomorphisms of this target-space. Higher-dimensional generalizations of the WZNW equations are derived and related with the Zakharov-Shabat equations of the Toda hierarchy. Fourth, singular points are studied from a global viewpoint, using our earlier observation that W-surfaces may be regarded as instantons. The global indices of the W-geometry, which are written in terms of the Toda fields, are shown to be the instanton numbers for associated mappings of W-surfaces into the Grassmannians. The relation with the singularities of W-surfaces is derived by combining the Toda equations with the Gauss-Bonnet theorem. (orig.)

  1. Servo control booster system for minimizing following error

    Science.gov (United States)

    Wise, W.L.

    1979-07-26

    A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis, for all operational times of consequence and for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error is greater than or equal to ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
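The by-exception hand-off between the two loops can be sketched as a simple update rule. The gains, the resolution increment, and the setpoint below are all hypothetical; the point is only the switch on |error| ≥ ΔS_R:

```python
def servo_step(command, position, delta_sr, primary, secondary):
    """One control update.  Conventional control runs normally; the second
    position-feedback loop engages only 'by exception', when the
    command-to-response error reaches the resolution least increment."""
    error = command - position
    if abs(error) >= delta_sr:
        return secondary(error)   # precise position-correction signal
    return primary(error)         # conventional servo control

# Toy demonstration: proportional primary loop, aggressive secondary loop.
delta_sr = 1e-3                   # hypothetical resolution least increment
position, command = 0.0, 1.0
for _ in range(20):
    position += servo_step(command, position, delta_sr,
                           primary=lambda e: 0.5 * e,
                           secondary=lambda e: 0.9 * e)
```

After the loop the residual command-to-response error sits below ΔS_R, at which point the secondary loop has already disengaged and only the conventional controller is acting.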

  2. Analysis of the interface tracking errors

    International Nuclear Information System (INIS)

    Cerne, G.; Tiselj, I.; Petelin, S.

    2001-01-01

    An important limitation of the interface-tracking algorithm is the grid density, which determines the space scale of the surface tracking. In this paper the analysis of the interface tracking errors, which occur in a dispersed flow, is performed for the VOF interface tracking method. A few simple two-fluid tests are proposed for the investigation of the interface tracking errors and their grid dependence. When the grid density becomes too coarse to follow the interface changes, the errors can be reduced either by using a denser nodalization or by switching to the two-fluid model during the simulation. Both solutions are analyzed and compared on a simple vortex-flow test. (author)

  3. KMRR thermal power measurement error estimation

    International Nuclear Information System (INIS)

    Rhee, B.W.; Sim, B.S.; Lim, I.C.; Oh, S.K.

    1990-01-01

    The thermal power measurement error of the Korea Multi-purpose Research Reactor has been estimated by a statistical Monte Carlo method, and compared with those obtained by the other methods including deterministic and statistical approaches. The results show that the specified thermal power measurement error of 5% cannot be achieved if the commercial RTDs are used to measure the coolant temperatures of the secondary cooling system and the error can be reduced below the requirement if the commercial RTDs are replaced by the precision RTDs. The possible range of the thermal power control operation has been identified to be from 100% to 20% of full power
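The statistical Monte Carlo approach can be sketched by propagating RTD temperature uncertainties through the energy balance Q = ṁ·cp·(T_out − T_in). The coolant temperature rise and the two RTD uncertainty levels below are illustrative assumptions, not values from the report:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values (not from the report): a 5 K coolant temperature rise,
# with RTD standard uncertainties of 0.3 K (commercial) vs 0.05 K (precision).
m_dot_cp = 1.0      # normalised mass flow * heat capacity
dT = 5.0            # true secondary-loop temperature rise (K)
n = 100_000         # Monte Carlo samples

def power_error(sigma_t):
    """Relative 1-sigma thermal power error from RTD noise alone."""
    t_in = rng.normal(0.0, sigma_t, n)
    t_out = rng.normal(dT, sigma_t, n)
    q = m_dot_cp * (t_out - t_in)
    return q.std() / dT

err_commercial = power_error(0.3)    # ~ sqrt(2)*0.3/5  = 8.5%
err_precision = power_error(0.05)    # ~ sqrt(2)*0.05/5 = 1.4%
```

Under these assumptions the commercial-RTD error exceeds a 5% target while the precision-RTD error sits comfortably below it, mirroring the conclusion above.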

  4. Indefinite theta series and generalized error functions

    CERN Document Server

    Alexandrov, Sergei; Manschot, Jan; Pioline, Boris

    2016-01-01

    Theta series for lattices with indefinite signature $(n_+,n_-)$ arise in many areas of mathematics including representation theory and enumerative algebraic geometry. Their modular properties are well understood in the Lorentzian case ($n_+=1$), but have remained obscure when $n_+\geq 2$. Using a higher-dimensional generalization of the usual (complementary) error function, discovered in an independent physics project, we construct the modular completion of a class of `conformal' holomorphic theta series ($n_+=2$). As an application, we determine the modular properties of a generalized Appell-Lerch sum attached to the lattice $A_2$, which arose in the study of rank 3 vector bundles on $\mathbb{P}^2$. The extension of our method to $n_+>2$ is outlined.

  5. Analysis of Students' Geometry Skills in Solving Geometry Problems Based on Van Hiele Levels of Thinking

    OpenAIRE

    Muhassanah, Nuraini; Sujadi, Imam; Riyadi, Riyadi

    2014-01-01

    The objective of this research was to describe the geometry skills of VIII grade students at SMP N 16 Surakarta at level 0 (visualization), level 1 (analysis), and level 2 (informal deduction) of the van Hiele levels of thinking in solving geometry problems. This was a qualitative research in the form of a case study, analyzing in depth the students' geometry skills in solving geometry problems based on the van Hiele levels of thinking. The subjects of this research were nine students of VIII grade at ...

  6. Spectral BRDF-based determination of proper measurement geometries to characterize color shift of special effect coatings.

    Science.gov (United States)

    Ferrero, Alejandro; Rabal, Ana; Campos, Joaquín; Martínez-Verdú, Francisco; Chorro, Elísabet; Perales, Esther; Pons, Alicia; Hernanz, María Luisa

    2013-02-01

    A reduced set of measurement geometries allows the spectral reflectance of special effect coatings to be predicted for any other geometry. A physical model based on flake-related parameters has been used to determine nonredundant measurement geometries for the complete description of the spectral bidirectional reflectance distribution function (BRDF). The analysis of experimental spectral BRDF was carried out by means of principal component analysis. From this analysis, a set of nine measurement geometries was proposed to characterize special effect coatings. It was shown that, for two different special effect coatings, these geometries provide a good prediction of their complete color shift.
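The dimensionality argument behind selecting a small set of geometries can be sketched with PCA via SVD. The synthetic "BRDF" below is built from three latent components so that its effective rank is known in advance; the real data, component count, and flake model are of course the paper's, not this sketch's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for measured spectral BRDF: rows are measurement
# geometries, columns are wavelengths.  Built from 3 latent components,
# so PCA should find that ~3 components explain nearly all the variance.
n_geom, n_wav, n_comp = 60, 31, 3
basis = rng.normal(size=(n_comp, n_wav))          # latent spectral shapes
weights = rng.normal(size=(n_geom, n_comp))       # per-geometry loadings
brdf = weights @ basis + 1e-3 * rng.normal(size=(n_geom, n_wav))

# PCA via SVD on the mean-centred data
X = brdf - brdf.mean(axis=0)
s = np.linalg.svd(X, compute_uv=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)
n_needed = int(np.searchsorted(explained, 0.999)) + 1
```

When a few components explain nearly all the variance, a correspondingly small set of well-chosen measurement geometries suffices to predict the reflectance at all the others, which is the basis of the nine-geometry recommendation above.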

  7. ERROR VS REJECTION CURVE FOR THE PERCEPTRON

    OpenAIRE

    PARRONDO, JMR; VAN DEN BROECK, Christian

    1993-01-01

    We calculate the generalization error ε for a perceptron J, trained by a teacher perceptron T, on input patterns S that form a fixed angle arccos(J·S) with the student. We show that the error is reduced from a power law to an exponentially fast decay by rejecting input patterns that lie within a given neighbourhood of the decision boundary J·S = 0. On the other hand, the error vs. rejection curve ε(ρ), where ρ is the fraction of rejected patterns, is shown to be independent ...
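The effect of rejecting patterns near the student's decision boundary is easy to see in a Monte Carlo sketch. The dimension, student-teacher angle, and margin below are arbitrary choices, and this estimates the error empirically rather than reproducing the paper's analytic curve:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50           # input dimension (arbitrary)
theta = 0.3      # angle between student J and teacher T (arbitrary)

# unit teacher and student vectors with angle theta between them
T = np.zeros(d); T[0] = 1.0
J = np.zeros(d); J[0] = np.cos(theta); J[1] = np.sin(theta)

S = rng.normal(size=(100_000, d))
S /= np.linalg.norm(S, axis=1, keepdims=True)     # patterns on the sphere

disagree = np.sign(S @ J) != np.sign(S @ T)
err_all = disagree.mean()                          # ~ theta / pi

# reject patterns within a margin of the student's boundary J.S = 0
keep = np.abs(S @ J) > 0.5 / np.sqrt(d)            # arbitrary margin
err_rej = disagree[keep].mean()
```

Without rejection the error matches the classic θ/π result; rejecting the near-boundary patterns, where student and teacher most often disagree, cuts the error on the accepted patterns substantially.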

  8. In-hospital fellow coverage reduces communication errors in the surgical intensive care unit.

    Science.gov (United States)

    Williams, Mallory; Alban, Rodrigo F; Hardy, James P; Oxman, David A; Garcia, Edward R; Hevelone, Nathanael; Frendl, Gyorgy; Rogers, Selwyn O

    2014-06-01

    Staff coverage strategies of intensive care units (ICUs) impact clinical outcomes. High-intensity staff coverage strategies are associated with lower morbidity and mortality. Accessible clinical expertise, teamwork, and effective communication have all been attributed to the success of this coverage strategy. We evaluate the impact of in-hospital fellow coverage (IHFC) on improving communication of cardiorespiratory events. A prospective observational study was performed in an academic tertiary care center with high-intensity staff coverage. The main outcome measure was resident-to-fellow communication of cardiorespiratory events during IHFC vs home coverage (HC) periods. Three hundred twelve cardiorespiratory events were collected in 114 surgical ICU patients over 134 study days. Complete data were available for 306 events. One hundred three communication errors occurred. IHFC was associated with significantly better communication of events compared to HC (P ...). Residents communicated 89% of events during IHFC vs 51% of events during HC (P ...). Communication patterns of junior and midlevel residents were similar. Midlevel residents communicated 68% of all on-call events (87% IHFC vs 50% HC, P ...). Junior residents communicated 66% of events (94% IHFC vs 52% HC, P ...). Communication errors were lower in all ICUs during IHFC (P ...). IHFC reduced communication errors. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Optimization of tensile method and specimen geometry in modified ring tensile test

    International Nuclear Information System (INIS)

    Kitano, Koji; Fuketa, Toyoshi; Sasajima, Hideo; Uetsuka, Hiroshi

    2001-03-01

    Several modified ring tensile test techniques have been proposed in order to evaluate the mechanical properties of cladding under the hoop loading condition caused by pellet/cladding mechanical interaction (PCMI). In these modified techniques, a variety of tensile methods and specimen geometries have been proposed in order to limit deformation to the gauge section. However, the optimum tensile method and specimen geometry had not been determined for the modified techniques. In the present study, we have investigated the tensile method and the specimen geometry through finite element method (FEM) analysis of specimen deformation and tensile tests on specimens with various gauge section geometries. When two-piece tensile tooling is used, the mechanical properties under the hoop loading condition can be correctly evaluated when the deformation part (gauge section) is placed on the top of a half-mandrel and friction between the specimen and the half-mandrel is reduced with Teflon tape. In addition, we have shown the optimum specimen geometry for PWR 17 by 17 type cladding. (author)

  10. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications, and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its Applications.

  11. Nursing Errors in Intensive Care Unit by Human Error Identification in Systems Tool: A Case Study

    Directory of Open Access Journals (Sweden)

    Nezamodini

    2016-03-01

    Full Text Available Background Although health services are designed and implemented to improve human health, errors in health services are a very common phenomenon and sometimes even fatal in this field. Medical errors and their costs are global issues with serious consequences for the patient community; they are preventable and require serious attention. Objectives The current study aimed to identify possible nursing errors by applying the human error identification in systems tool (HEIST) in the intensive care units (ICUs) of hospitals. Patients and Methods This descriptive research was conducted in the intensive care unit of a hospital in Khuzestan province in 2013. Data were collected over a period of four months through observation of and interviews with nine nurses in this unit. Human error classification was based on the Rose and Rose and Swain and Guttmann models. Following the HEIST worksheets, the guide questions were answered and error causes were identified after the type of each error had been determined. Results In total, 527 errors were detected. Performing an operation on the wrong path had the highest frequency (150), followed by completing tasks later than the deadline (136). Management causes, with a frequency of 451, ranked first among the identified error causes. Errors mostly occurred in the system observation stage, and among the performance shaping factors (PSFs), time was the factor most influential in the occurrence of human errors. Conclusions Finally, in order to prevent the occurrence and reduce the consequences of the identified errors, the following measures were proposed: appropriate training courses, applying work guidelines and monitoring their implementation, increasing the number of work shifts, hiring a professional workforce, and equipping the workspace with appropriate facilities and equipment.

  12. Non-Euclidean geometry

    CERN Document Server

    Kulczycki, Stefan

    2008-01-01

    This accessible approach features two varieties of proofs: stereometric and planimetric, as well as elementary proofs that employ only the simplest properties of the plane. A short history of geometry precedes a systematic exposition of the principles of non-Euclidean geometry.Starting with fundamental assumptions, the author examines the theorems of Hjelmslev, mapping a plane into a circle, the angle of parallelism and area of a polygon, regular polygons, straight lines and planes in space, and the horosphere. Further development of the theory covers hyperbolic functions, the geometry of suff

  13. A NEW METHOD TO QUANTIFY AND REDUCE THE NET PROJECTION ERROR IN WHOLE-SOLAR-ACTIVE-REGION PARAMETERS MEASURED FROM VECTOR MAGNETOGRAMS

    Energy Technology Data Exchange (ETDEWEB)

    Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L. [NASA Marshall Space Flight Center, Huntsville, AL 35812 (United States); Khazanov, Igor, E-mail: David.a.Falconer@nasa.gov [Center for Space Plasma and Aeronomic Research, University of Alabama in Huntsville, Huntsville, AL 35899 (United States)

    2016-12-20

    Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter’s absolute values, measured from the disk passage of a large number of ARs and normalized to each AR’s absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10^22 Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important both for the study of the evolution of ARs and for improving the accuracy of forecasts of an AR’s major flare/coronal mass ejection productivity.
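
The correction procedure can be sketched numerically: fit a Chebyshev polynomial to the normalized center-to-limb curve, then divide each measured flux by the fitted fractional value at its radial distance (equivalently, multiply by the correction factor). The data below are synthetic and the function names (`fit_projection_error`, `correct_flux`) and polynomial degree are illustrative, not the authors' code:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def fit_projection_error(radial_dist, normalized_flux, degree=4):
    """Fit a Chebyshev polynomial to the center-to-limb run of
    normalized whole-AR flux (measured flux / flux at central meridian)."""
    return C.chebfit(radial_dist, normalized_flux, degree)

def correct_flux(measured_flux, radial_dist, coeffs):
    """Remove the average projection error: dividing by the fitted
    fractional value is the same as multiplying by the correction factor."""
    fractional = C.chebval(radial_dist, coeffs)
    return measured_flux / fractional

# Synthetic center-to-limb curve: flux appears smaller toward the limb.
r = np.linspace(0.0, 0.95, 200)
frac = 1.0 - 0.3 * r**2
coeffs = fit_projection_error(r, frac)
corrected = correct_flux(2.0e22 * frac, r, coeffs)  # recovers ~2.0e22 everywhere
```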

  14. Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMS

    International Nuclear Information System (INIS)

    Diehl, S.E.; Ochoa, A. Jr.; Dressendorfer, P.V.; Koga, R.; Kolasinski, W.A.

    1982-06-01

    Cosmic ray interactions with memory cells are known to cause temporary, random bit errors in some designs. The sensitivity of polysilicon-gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic-ray-induced bit errors.

  15. PS-022 Complex automated medication systems reduce medication administration error rates in an acute medical ward

    DEFF Research Database (Denmark)

    Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan

    2017-01-01

    Background Medication errors have received extensive attention in recent decades and are of significant concern to healthcare organisations globally. Medication errors occur frequently, and adverse events associated with medications are one of the largest causes of harm to hospitalised patients...... cabinet, automated dispensing and barcode medication administration; (2) non-patient specific automated dispensing and barcode medication administration. The occurrence of administration errors was observed in three 3 week periods. The error rates were calculated by dividing the number of doses with one...

  16. Geometry on the space of geometries

    International Nuclear Information System (INIS)

    Christodoulakis, T.; Zanelli, J.

    1988-06-01

    We discuss the geometric structure of the configuration space of pure gravity. This is an infinite-dimensional manifold, M, where each point represents one spatial geometry g_ij(x). The metric on M is dictated by geometrodynamics, and from it, the Christoffel symbols and Riemann tensor can be found. A ''free geometry'' tracing a geodesic on the manifold describes the time evolution of space in the strong gravity limit. In a regularization previously introduced by the authors, it is found that M does not have the same dimensionality, D, everywhere, and that D is not a scalar, although it is covariantly constant. In this regularization, it is seen that the path integral measure can be absorbed in a renormalization of the cosmological constant. (author). 19 refs

  17. Error Control for Network-on-Chip Links

    CERN Document Server

    Fu, Bo

    2012-01-01

    As technology scales into nanoscale regime, it is impossible to guarantee the perfect hardware design. Moreover, if the requirement of 100% correctness in hardware can be relaxed, the cost of manufacturing, verification, and testing will be significantly reduced. Many approaches have been proposed to address the reliability problem of on-chip communications. This book focuses on the use of error control codes (ECCs) to improve on-chip interconnect reliability. Coverage includes detailed description of key issues in NOC error control faced by circuit and system designers, as well as practical error control techniques to minimize the impact of these errors on system performance. Provides a detailed background on the state of error control methods for on-chip interconnects; Describes the use of more complex concatenated codes such as Hamming Product Codes with Type-II HARQ, while emphasizing integration techniques for on-chip interconnect links; Examines energy-efficient techniques for integrating multiple error...

  18. Good ergonomics and team diversity reduce absenteeism and errors in car manufacturing.

    Science.gov (United States)

    Fritzsche, Lars; Wegge, Jürgen; Schmauder, Martin; Kliegel, Matthias; Schmidt, Klaus-Helmut

    2014-01-01

    Prior research suggests that ergonomic work design and mixed teams (in age and gender) may compensate for declines in certain abilities of ageing employees. This study investigates the simultaneous effects of both team-level factors on absenteeism and performance (error rates) over one year in a sample of 56 car assembly teams (N = 623). Results show that age was related to prolonged absenteeism and more mistakes in work planning, but not to overall performance. In comparison, high physical workload was strongly associated with longer absenteeism and increased error rates. Furthermore, controlling for physical workload, age diversity was related to shorter absenteeism, and the presence of women in the team was associated with shorter absenteeism and better performance. In summary, this study suggests that both ergonomic work design and mixed team composition may compensate for age-related productivity risks in manufacturing by maintaining the work ability of older employees and improving job quality.

  19. Seamless Mobile Multimedia Broadcasting Using Adaptive Error Recovery

    Directory of Open Access Journals (Sweden)

    Carlos M. Lentisco

    2017-01-01

    Full Text Available Multimedia services over mobile networks present several challenges, such as ensuring a reliable delivery of multimedia content, avoiding undesired service disruptions, or reducing service latency. HTTP adaptive streaming addresses these problems for multimedia unicast services, but it is not efficient from the point of view of radio resource consumption. In Long-Term Evolution (LTE) networks, multimedia broadcast services are provided over a common radio channel using a combination of forward error correction and unicast error recovery techniques at the application level. This paper discusses how to avoid service disruptions and reduce service latency for LTE multimedia broadcast services by adding dynamic adaptation capabilities to the unicast error recovery process. The proposed solution provides seamless mobile multimedia broadcasting without compromising the quality of service perceived by the users.

  20. Evaluation of Data with Systematic Errors

    International Nuclear Information System (INIS)

    Froehner, F. H.

    2003-01-01

    Application-oriented evaluated nuclear data libraries such as ENDF and JEFF contain not only recommended values but also uncertainty information in the form of 'covariance' or 'error files'. These can neither be constructed nor utilized properly without a thorough understanding of uncertainties and correlations. It is shown how incomplete information about errors is described by multivariate probability distributions or, more summarily, by covariance matrices, and how correlations are caused by incompletely known common errors. Parameter estimation for the practically most important case of the Gaussian distribution with common errors is developed in close analogy to the more familiar case without. The formalism shows that, contrary to widespread belief, common ('systematic') and uncorrelated ('random' or 'statistical') errors are to be added in quadrature. It also shows explicitly that repetition of a measurement reduces mainly the statistical uncertainties but not the systematic ones. While statistical uncertainties are readily estimated from the scatter of repeatedly measured data, systematic uncertainties can only be inferred from prior information about common errors and their propagation. The optimal way to handle error-affected auxiliary quantities ('nuisance parameters') in data fitting and parameter estimation is to adjust them on the same footing as the parameters of interest and to integrate (marginalize) them out of the joint posterior distribution afterward.
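
The two quantitative points made here, that common and statistical errors add in quadrature and that repetition shrinks only the statistical part, can be shown with a minimal sketch (function name and the numbers are illustrative):

```python
import math

def combined_uncertainty(stat_sigma, syst_sigma, n_repeats=1):
    """Total 1-sigma uncertainty of the mean of n repeated measurements
    sharing a common (systematic) error: the statistical part shrinks as
    1/sqrt(n), the systematic part does not, and the two add in quadrature."""
    stat = stat_sigma / math.sqrt(n_repeats)
    return math.sqrt(stat**2 + syst_sigma**2)

# Single measurement: sqrt(3^2 + 4^2) = 5.0
single = combined_uncertainty(3.0, 4.0)
# 100 repeats: statistical part drops to 0.3, but the total stays above 4.0
repeated = combined_uncertainty(3.0, 4.0, n_repeats=100)
```

However many times the measurement is repeated, the total uncertainty never falls below the systematic floor of 4.0.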

  1. Geometry and Combinatorics

    DEFF Research Database (Denmark)

    Kokkendorff, Simon Lyngby

    2002-01-01

    The subject of this Ph.D.-thesis is somewhere in between continuous and discrete geometry. Chapter 2 treats the geometry of finite point sets in semi-Riemannian hyperquadrics,using a matrix whose entries are a trigonometric function of relative distances in a given point set. The distance...... to the geometry of a simplex in a semi-Riemannian hyperquadric. In chapter 3 we study which finite metric spaces that are realizable in a hyperbolic space in the limit where curvature goes to -∞. We show that such spaces are the so called leaf spaces, the set of degree 1 vertices of weighted trees. We also...... establish results on the limiting geometry of such an isometrically realized leaf space simplex in hyperbolic space, when curvature goes to -∞. Chapter 4 discusses negative type of metric spaces. We give a measure theoretic treatment of this concept and related invariants. The theory developed...

  2. Geometry and billiards

    CERN Document Server

    Tabachnikov, Serge

    2005-01-01

    Mathematical billiards describe the motion of a mass point in a domain with elastic reflections off the boundary or, equivalently, the behavior of rays of light in a domain with ideally reflecting boundary. From the point of view of differential geometry, the billiard flow is the geodesic flow on a manifold with boundary. This book is devoted to billiards in their relation with differential geometry, classical mechanics, and geometrical optics. The topics covered include variational principles of billiard motion, symplectic geometry of rays of light and integral geometry, existence and nonexistence of caustics, optical properties of conics and quadrics and completely integrable billiards, periodic billiard trajectories, polygonal billiards, mechanisms of chaos in billiard dynamics, and the lesser-known subject of dual (or outer) billiards. The book is based on an advanced undergraduate topics course (but contains more material than can be realistically taught in one semester). Although the minimum prerequisit...

  3. The concept of error and malpractice in radiology.

    Science.gov (United States)

    Pinto, Antonio; Brunese, Luca; Pinto, Fabio; Reali, Riccardo; Daniele, Stefania; Romano, Luigia

    2012-08-01

    Since the early 1970s, physicians have been subjected to an increasing number of medical malpractice claims. Radiology is one of the specialties most liable to claims of medical negligence. The etiology of radiological error is multifactorial. Errors fall into recurrent patterns. Errors arise from poor technique, failures of perception, lack of knowledge, and misjudgments. Every radiologist should understand the sources of error in diagnostic radiology as well as the elements of negligence that form the basis of malpractice litigation. Errors are an inevitable part of human life, and every health professional has made mistakes. To improve patient safety and reduce the risk of harm, we must accept that some errors are inevitable during the delivery of health care. We must pursue a cultural change in medicine, wherein errors are actively sought, openly discussed, and aggressively addressed. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Accelerating navigation in the VecGeom geometry modeller

    Science.gov (United States)

    Wenzel, Sandro; Zhang, Yang; for the VecGeom Developers

    2017-10-01

    The VecGeom geometry library is a relatively recent effort aiming to provide a modern and high performance geometry service for particle detector simulation in hierarchical detector geometries common to HEP experiments. One of its principal targets is the efficient use of vector SIMD hardware instructions to accelerate geometry calculations for single track as well as multi-track queries. Previously, excellent performance improvements compared to Geant4/ROOT could be reported for elementary geometry algorithms at the level of single shape queries. In this contribution, we will focus on the higher level navigation algorithms in VecGeom, which are the most important components as seen from the simulation engines. We will first report on our R&D effort and developments to implement SIMD enhanced data structures to speed up the well-known “voxelised” navigation algorithms, ubiquitously used for particle tracing in complex detector modules consisting of many daughter parts. Second, we will discuss complementary new approaches to improve navigation algorithms in HEP. These ideas are based on a systematic exploitation of static properties of the detector layout as well as automatic code generation and specialisation of the C++ navigator classes. Such specialisations reduce the overhead of generic- or virtual function based algorithms and enhance the effectiveness of the SIMD vector units. These novel approaches go well beyond the existing solutions available in Geant4 or TGeo/ROOT, achieve a significantly superior performance, and might be of interest for a wide range of simulation backends (GeantV, Geant4). We exemplify this with concrete benchmarks for the CMS and ALICE detectors.
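
The gain from multi-track queries can be illustrated with a toy, vectorized shape query. Plain NumPy stands in for SIMD vector instructions here; this is a hypothetical sketch, not VecGeom's actual API:

```python
import numpy as np

def distance_to_sphere(points, dirs, radius):
    """Batched geometry query: for a basket of tracks (one row per track,
    unit direction vectors), compute the distance along each direction to a
    sphere of the given radius centered at the origin (inf if the ray misses).
    One vectorized pass over all tracks, as a SIMD navigator would process them."""
    b = np.einsum('ij,ij->i', points, dirs)            # p . d per track
    c = np.einsum('ij,ij->i', points, points) - radius**2
    disc = b * b - c                                   # quadratic discriminant
    hit = disc >= 0
    sq = np.sqrt(np.where(hit, disc, 0.0))
    t = np.where(hit, -b - sq, np.inf)                 # nearest intersection
    t = np.where(hit & (t < 0), -b + sq, t)            # track inside: exit point
    return np.where(t >= 0, t, np.inf)                 # sphere entirely behind

# Three tracks queried at once: outside heading in, inside, and a miss.
p = np.array([[-2.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
d = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
t = distance_to_sphere(p, d, 1.0)
```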

  5. NLO error propagation exercise: statistical results

    International Nuclear Information System (INIS)

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or 235 U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235 U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio, from April 1 to July 1, 1983, in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation over uncorrelated primary error sources as suggested by Jaech; random-effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235 U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
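
The first element of the methodology, variance approximation by Taylor series expansion, can be sketched for a typical accountancy product. The model (235U mass = weight × uranium concentration × enrichment) and all numbers below are illustrative, not data from the exercise:

```python
import math

def propagate_product(values, sigmas):
    """First-order Taylor (delta-method) propagation for a pure product
    M = x1 * x2 * ... * xn with uncorrelated measurement errors:
    the relative variances of the factors add."""
    m = math.prod(values)
    rel_var = sum((s / v) ** 2 for v, s in zip(values, sigmas))
    return m, abs(m) * math.sqrt(rel_var)

# 235U mass = weight (kg) * U concentration * 235U enrichment,
# each factor measured with 1% relative uncertainty.
mass, sigma = propagate_product([100.0, 0.5, 0.04], [1.0, 0.005, 0.0004])
# mass = 2.0 kg, relative sigma = sqrt(3) * 1% ~ 1.73%
```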

  6. Drawing Dynamic Geometry Figures Online with Natural Language for Junior High School Geometry

    Science.gov (United States)

    Wong, Wing-Kwong; Yin, Sheng-Kai; Yang, Chang-Zhe

    2012-01-01

    This paper presents a tool for drawing dynamic geometric figures by understanding the texts of geometry problems. With the tool, teachers and students can construct dynamic geometric figures on a web page by inputting a geometry problem in natural language. First we need to build the knowledge base for understanding geometry problems. With the…

  7. Extension of the comet method to 2-D hexagonal geometry

    International Nuclear Information System (INIS)

    Connolly, Kevin John; Rahnema, Farzad; Zhang, Dingkang

    2011-01-01

    The capability of the heterogeneous coarse mesh radiation transport (COMET) method developed at Georgia Tech has been expanded. COMET is now able to treat hexagonal geometry in two dimensions, allowing reactor problems to be solved for those next-generation reactors which utilize prismatic block structure and hexagonal lattice geometry in their designs. The COMET method is used to solve whole core reactor analysis problems without resorting to homogenization or low-order transport approximations. The eigenvalue and fission density distribution of the reactor are determined iteratively using response functions. The method has previously proven accurate in solving PWR, BWR, and CANDU eigenvalue problems. In this paper, three simple test cases inspired by high temperature test reactor material cross sections and fuel block geometry are presented. These cases are given not in an attempt to model realistic nuclear power systems, but in order to test the ability of the improved method. Solutions determined by the new hexagonal version of COMET, COMET-Hex, are compared with solutions determined by MCNP5, and the results show the accuracy and efficiency of the improved COMET-Hex method in calculating the eigenvalue and fuel pin fission density in sample full-core problems. COMET-Hex determines the eigenvalues of these simple problems to within 50 pcm of the reference solutions and all pin fission densities to an average error of 0.2%, and it requires fewer than three minutes to produce these results. (author)

  8. Solution structure of apamin determined by nuclear magnetic resonance and distance geometry

    Energy Technology Data Exchange (ETDEWEB)

    Pease, J.H.B.; Wemmer, D.E.

    1988-11-01

    The solution structure of the bee venom neurotoxin apamin has been determined with a distance geometry program using distance constraints derived from NMR. Twenty embedded structures were generated and refined by using the program DSPACE. After error minimization using both conjugate gradient and dynamics algorithms, six structures had very low residual error. Comparisons of these show that the backbone of the peptide is quite well-defined with the largest rms difference between backbone atoms in these structures of 1.34 Å. The side chains have far fewer constraints and show greater variability in their positions. The structure derived here is generally consistent with the qualitative model previously described, with most differences occurring in the loop between the β-turn (residues 2-5) and the C-terminal α-helix (residues 9-17). Comparisons are made with previously derived models from NMR data and other methods.

  9. Solution structure of apamin determined by nuclear magnetic resonance and distance geometry

    International Nuclear Information System (INIS)

    Pease, J.H.B.; Wemmer, D.E.

    1988-01-01

    The solution structure of the bee venom neurotoxin apamin has been determined with a distance geometry program using distance constraints derived from NMR. Twenty embedded structures were generated and refined by using the program DSPACE. After error minimization using both conjugate gradient and dynamics algorithms, six structures had very low residual error. Comparisons of these show that the backbone of the peptide is quite well-defined with the largest rms difference between backbone atoms in these structures of 1.34 Å. The side chains have far fewer constraints and show greater variability in their positions. The structure derived here is generally consistent with the qualitative model previously described, with most differences occurring in the loop between the β-turn (residues 2-5) and the C-terminal α-helix (residues 9-17). Comparisons are made with previously derived models from NMR data and other methods.
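
The embedding step at the heart of distance geometry can be illustrated with a classical multidimensional scaling sketch. This recovers coordinates exactly from a noise-free Euclidean distance matrix; real NMR refinement, as with DSPACE, instead minimizes a residual error over bounded (upper/lower) distance constraints:

```python
import numpy as np

def embed_from_distances(D, dim=3):
    """Classical MDS: recover point coordinates (up to rigid motion and
    reflection) from an n x n matrix of pairwise Euclidean distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n     # centering projector
    B = -0.5 * J @ (D**2) @ J               # Gram matrix of centered coords
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]         # keep the largest eigenvalues
    L = np.sqrt(np.clip(w[idx], 0.0, None))
    return V[:, idx] * L

# Round-trip check: random 3-D points -> distances -> embedding -> distances.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = embed_from_distances(D)
D2 = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
```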

  10. Online measurement of bead geometry in GMAW-based additive manufacturing using passive vision

    International Nuclear Information System (INIS)

    Xiong, Jun; Zhang, Guangjun

    2013-01-01

    Additive manufacturing based on gas metal arc welding is an advanced technique for depositing fully dense components with low cost. Despite this fact, techniques to achieve accurate control and automation of the process have not yet been perfectly developed. The online measurement of the deposited bead geometry is a key problem for reliable control. In this work a passive vision-sensing system, comprising two cameras and composite filtering techniques, was proposed for real-time detection of the bead height and width through deposition of thin walls. The nozzle to the top surface distance was monitored for eliminating accumulated height errors during the multi-layer deposition process. Various image processing algorithms were applied and discussed for extracting feature parameters. A calibration procedure was presented for the monitoring system. Validation experiments confirmed the effectiveness of the online measurement system for bead geometry in layered additive manufacturing. (paper)
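
The bead height/width extraction step can be sketched for an already thresholded image. The array layout, calibration factor, and function name below are illustrative assumptions; the paper's actual pipeline uses two cameras, composite filtering, and a calibration procedure:

```python
import numpy as np

def bead_geometry(binary, px_per_mm):
    """Estimate deposited bead height and width from a thresholded
    side-view image. `binary` is a 2D bool array where True marks
    deposited metal; `px_per_mm` comes from camera calibration."""
    rows = np.flatnonzero(binary.any(axis=1))   # image rows containing metal
    cols = np.flatnonzero(binary.any(axis=0))   # image columns containing metal
    height_mm = (rows.max() - rows.min() + 1) / px_per_mm
    width_mm = (cols.max() - cols.min() + 1) / px_per_mm
    return height_mm, width_mm

# Synthetic segmented bead: 40 px tall, 10 px wide, at 10 px/mm.
img = np.zeros((100, 100), dtype=bool)
img[10:50, 20:30] = True
h, w = bead_geometry(img, px_per_mm=10.0)       # 4.0 mm tall, 1.0 mm wide
```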

  11. Accuracy of crystal structure error estimates

    International Nuclear Information System (INIS)

    Taylor, R.; Kennard, O.

    1986-01-01

    A statistical analysis of 100 crystal structures retrieved from the Cambridge Structural Database is reported. Each structure has been determined independently by two different research groups. Comparison of the independent results leads to the following conclusions: (a) The e.s.d.'s of non-hydrogen-atom positional parameters are almost invariably too small. Typically, they are underestimated by a factor of 1.4-1.45. (b) The extent to which e.s.d.'s are underestimated varies significantly from structure to structure and from atom to atom within a structure. (c) Errors in the positional parameters of atoms belonging to the same chemical residue tend to be positively correlated. (d) The e.s.d.'s of heavy-atom positions are less reliable than those of light-atom positions. (e) Experimental errors in atomic positional parameters are normally, or approximately normally, distributed. (f) The e.s.d.'s of cell parameters are grossly underestimated, by an average factor of about 5 for cell lengths and 2.5 for cell angles. There is marginal evidence that the accuracy of atomic-coordinate e.s.d.'s also depends on diffractometer geometry, refinement procedure, whether or not the structure has a centre of symmetry, and the degree of precision attained in the structure determination. (orig.)

  12. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is proposed and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in combination to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  13. Effect of target-fixture geometry on shock-wave compacted copper powders

    Science.gov (United States)

    Kim, Wooyeol; Ahn, Dong-Hyun; Yoon, Jae Ik; Park, Lee Ju; Kim, Hyoung Seop

    2018-01-01

    In shock compaction with a single gas gun system, a target fixture is used to safely recover a powder compact processed by shock-wave dynamic impact. However, no standard fixture geometry exists, and its effect on the processed compact is not well studied. In this study, two types of fixture are used for the dynamic compaction of hydrogen-reduced copper powders, and the mechanical properties and microstructures are investigated using the Vickers microhardness test and electron backscatter diffraction, respectively. With the assistance of finite element method simulations, we analyze several shock parameters that are experimentally hard to control. The results of the simulations indicate that the target geometry clearly affects the characteristics of incident and reflected shock waves. The hardness distribution and the microstructure of the compacts also show their dependence on the geometry. With the results of the simulations and the experiment, it is concluded that the target geometry affects the shock wave propagation and wave interaction in the specimen.

  14. KEMAJUAN BELAJAR SISWA PADA GEOMETRI TRANSFORMASI MENGGUNAKAN AKTIVITAS REFLEKSI GEOMETRI

    Directory of Open Access Journals (Sweden)

    Irkham Ulil Albab

    2014-10-01

    Full Text Available Abstract (translated from Indonesian): This study aims to describe students’ learning progress on transformation geometry material, supported by a series of learning activities based on Indonesian Realistic Mathematics Education. The research was designed in three stages: preliminary design, design testing through pilot and experimental teaching, and retrospective analysis. In this study, the Hypothetical Learning Trajectory (HLT) played an important role both as an instructional design and as a research instrument. The HLT was tested on 26 seventh-grade students. Data were collected through interviews, observation, and field notes. The results show that this instructional design stimulated students to characterize reflection and other geometric transformations informally, to classify them among the isometric transformations at the second level, and to discover the auxiliary line of reflection at a more formal level. In addition, at the highest level, students used the auxiliary line of reflection to draw reflected images and mirror patterns and to understand rotation and translation as combinations of reflections. Keywords: transformation geometry, combined reflections, rotation, translation, design research, HLT. STUDENTS’ LEARNING PROGRESS ON TRANSFORMATION GEOMETRY USING THE GEOMETRY REFLECTION ACTIVITIES Abstract: This study was aimed at describing the students’ learning progress on transformation geometry supported by a set of learning activities based on Indonesian Realistic Mathematics Education. The study was designed into three stages, that is, the preliminary design stage, the design testing through initial instruction and experiment, and the retrospective analysis stage. In this study, the Hypothetical Learning Trajectory (HLT) played an important role as an instructional design and a research instrument. The HLT was tested on 26 seventh grade students. The data were collected through interviews

  15. Software Geometry in Simulations

    Science.gov (United States)

    Alion, Tyler; Viren, Brett; Junk, Tom

    2015-04-01

    The Long Baseline Neutrino Experiment (LBNE) involves many detectors. The experiment's near detector (ND) facility may ultimately involve several detectors. The far detector (FD) will be significantly larger than any other liquid argon (LAr) detector yet constructed; many prototype detectors are being constructed and studied to motivate a plethora of proposed FD designs. Whether it is a constructed prototype or a proposed ND/FD design, every design must be simulated and analyzed. This presents a considerable challenge to LBNE software experts; each detector geometry must be described to the simulation software in an efficient way that allows multiple authors to collaborate easily. Furthermore, different geometry versions must be tracked throughout their use. We present a framework called General Geometry Description (GGD), written and developed by LBNE software collaborators for managing software to generate geometries. Though GGD is flexible enough to be used by any experiment working with detectors, we present its first use in generating Geometry Description Markup Language (GDML) files to interface with LArSoft, a framework of detector simulations, event reconstruction, and data analyses written for all LAr technology users at Fermilab. Brett Viren is the author of the framework discussed here, the General Geometry Description (GGD).

  16. Methods of information geometry

    CERN Document Server

    Amari, Shun-Ichi

    2000-01-01

    Information geometry provides the mathematical sciences with a new framework of analysis. It has emerged from the investigation of the natural differential geometric structure on manifolds of probability distributions, which consists of a Riemannian metric defined by the Fisher information and a one-parameter family of affine connections called the \\alpha-connections. The duality between the \\alpha-connection and the (-\\alpha)-connection together with the metric play an essential role in this geometry. This kind of duality, having emerged from manifolds of probability distributions, is ubiquitous, appearing in a variety of problems which might have no explicit relation to probability theory. Through the duality, it is possible to analyze various fundamental problems in a unified perspective. The first half of this book is devoted to a comprehensive introduction to the mathematical foundation of information geometry, including preliminaries from differential geometry, the geometry of manifolds or probability d...
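
    The metric this record refers to has a standard closed form. As a reminder (generic notation, not necessarily the book's conventions), the Fisher information metric on a parametric family of distributions is:

    ```latex
    % Fisher information metric on a family p(x;\theta),
    % \theta = (\theta^1, \dots, \theta^n):
    g_{ij}(\theta) = \mathbb{E}_{\theta}\!\left[
      \frac{\partial \log p(x;\theta)}{\partial \theta^{i}}
      \,\frac{\partial \log p(x;\theta)}{\partial \theta^{j}}
    \right], \qquad i, j = 1, \dots, n,
    ```

    and the \alpha-connections form the one-parameter family of affine connections whose \alpha = 0 member is the Levi-Civita connection of this metric.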

  17. Developments in special geometry

    International Nuclear Information System (INIS)

    Mohaupt, Thomas; Vaughan, Owen

    2012-01-01

    We review the special geometry of N = 2 supersymmetric vector and hypermultiplets with emphasis on recent developments and applications. A new formulation of the local c-map based on the Hesse potential and special real coordinates is presented. Other recent developments include the Euclidean version of special geometry, and generalizations of special geometry to non-supersymmetric theories. As applications we discuss the proof that the local r-map and c-map preserve geodesic completeness, and the construction of four- and five-dimensional static solutions through dimensional reduction over time. The shared features of the real, complex and quaternionic version of special geometry are stressed throughout.

  18. SNP discovery in nonmodel organisms: strand bias and base-substitution errors reduce conversion rates.

    Science.gov (United States)

    Gonçalves da Silva, Anders; Barendse, William; Kijas, James W; Barris, Wes C; McWilliam, Sean; Bunch, Rowan J; McCullough, Russell; Harrison, Blair; Hoelzel, A Rus; England, Phillip R

    2015-07-01

    Single nucleotide polymorphisms (SNPs) have become the marker of choice for genetic studies in organisms of conservation, commercial or biological interest. Most SNP discovery projects in nonmodel organisms apply a strategy for identifying putative SNPs based on filtering rules that account for random sequencing errors. Here, we analyse data used to develop 4723 novel SNPs for the commercially important deep-sea fish, orange roughy (Hoplostethus atlanticus), to assess the impact of not accounting for systematic sequencing errors when filtering identified polymorphisms during SNP discovery. We used SAMtools to identify polymorphisms in a Velvet assembly of genomic DNA sequence data from seven individuals. The resulting set of polymorphisms was filtered to minimize 'bycatch': polymorphisms caused by sequencing or assembly error. An Illumina Infinium SNP chip was used to genotype a final set of 7714 polymorphisms across 1734 individuals. Five predictors were examined for their effect on the probability of obtaining an assayable SNP: depth of coverage, number of reads that support a variant, polymorphism type (e.g. A/C), strand bias and Illumina SNP probe design score. Our results indicate that filtering out systematic sequencing errors could substantially improve the efficiency of SNP discovery. We show that BLASTX can be used as an efficient tool to identify single-copy genomic regions in the absence of a reference genome. The results have implications for research aiming to identify assayable SNPs and build SNP genotyping assays for nonmodel organisms. © 2014 John Wiley & Sons Ltd.
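
    The five predictors listed above suggest the shape of a typical filtering pass. The sketch below is illustrative only; the thresholds and field names are hypothetical, not the study's actual filtering rules:

    ```python
    # Illustrative SNP-candidate filter. Thresholds and dict keys are
    # hypothetical, chosen to mirror the predictors examined in the study
    # (coverage depth, variant-supporting reads, strand bias, design score).
    def passes_filters(snp, min_depth=10, min_alt_reads=4,
                       max_strand_bias=0.9, min_design_score=0.6):
        """snp: dict with keys 'depth', 'alt_reads', 'fwd_alt', 'rev_alt',
        'design_score'. Returns True if the candidate looks assayable."""
        if snp["depth"] < min_depth or snp["alt_reads"] < min_alt_reads:
            return False
        # strand bias: fraction of variant-supporting reads on the majority strand
        fwd, rev = snp["fwd_alt"], snp["rev_alt"]
        if fwd + rev == 0 or max(fwd, rev) / (fwd + rev) > max_strand_bias:
            return False
        return snp["design_score"] >= min_design_score

    candidates = [
        {"depth": 40, "alt_reads": 15, "fwd_alt": 8, "rev_alt": 7, "design_score": 0.9},
        {"depth": 40, "alt_reads": 15, "fwd_alt": 15, "rev_alt": 0, "design_score": 0.9},
    ]
    kept = [s for s in candidates if passes_filters(s)]
    ```

    The second candidate is dropped because all of its variant reads sit on one strand, the systematic-error signature the abstract highlights.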

  19. Analysis on Dynamic Transmission Accuracy for RV Reducer

    Directory of Open Access Journals (Sweden)

    Zhang Fengshou

    2017-01-01

    Full Text Available By taking the rotate vector (RV) reducer as the research object, the factors affecting transmission accuracy are studied, including the machining errors of the main parts, assembly errors, clearance, micro-displacement, gear mesh stiffness and damping, and bearing stiffness. Based on Newton's second law, a mathematical model of the transmission error of the RV reducer is set up. The transmission error curve of the RV reducer is then obtained by solving the mathematical model with the Runge-Kutta method under the combined action of the various error factors. Analysis of an RV reducer transmission test shows similar variation trends and frequency components in the theoretical and experimental results. The presented method is useful for research on the dynamic transmission accuracy of RV reducers, and also applies to the transmission accuracy of other cycloid drive systems.
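
    The solution procedure in this record (a differential model integrated by Runge-Kutta) can be sketched in a few lines. The following is a deliberately minimal single-degree-of-freedom stand-in, not the paper's multi-error model: a torsional oscillator (inertia J, damping c, mesh stiffness k; all values assumed) is driven by a periodic geometric-error excitation e(t), and the transmission error theta(t) - e(t) is accumulated with a classic fourth-order Runge-Kutta integrator:

    ```python
    import math

    # Hypothetical 1-DOF sketch (not the paper's full model): the output
    # shaft is a torsional oscillator driven by a periodic geometric error
    # e(t); the transmission error is theta(t) - e(t).
    J, c, k = 0.05, 0.8, 4.0e3         # inertia, damping, stiffness (assumed)

    def e(t):                          # combined machining/assembly error excitation
        return 1e-4 * math.sin(35.0 * t)

    def deriv(t, y):
        theta, omega = y
        alpha = (-c * omega - k * (theta - e(t))) / J
        return (omega, alpha)

    def rk4_step(t, y, h):
        # classic fourth-order Runge-Kutta step
        k1 = deriv(t, y)
        k2 = deriv(t + h/2, tuple(yi + h/2 * ki for yi, ki in zip(y, k1)))
        k3 = deriv(t + h/2, tuple(yi + h/2 * ki for yi, ki in zip(y, k2)))
        k4 = deriv(t + h,   tuple(yi + h   * ki for yi, ki in zip(y, k3)))
        return tuple(yi + h/6 * (a + 2*b + 2*cc + d)
                     for yi, a, b, cc, d in zip(y, k1, k2, k3, k4))

    t, y, h = 0.0, (0.0, 0.0), 1e-4
    trace = []
    for _ in range(20000):             # integrate 2 s of motion
        y = rk4_step(t, y, h)
        t += h
        trace.append(y[0] - e(t))      # instantaneous transmission error
    ```

    With the assumed stiffness the forcing sits well below resonance, so the shaft tracks the error excitation closely and the residual transmission error stays small.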

  20. Optimal control strategy to reduce the temporal wavefront error in AO systems

    NARCIS (Netherlands)

    Doelman, N.J.; Hinnen, K.J.G.; Stoffelen, F.J.G.; Verhaegen, M.H.

    2004-01-01

    An Adaptive Optics (AO) system for astronomy is analysed from a control point of view. The focus is put on the temporal error. The AO controller is identified as a feedback regulator system, operating in closed-loop with the aim of rejecting wavefront disturbances. Limitations on the performance of ...

  1. Soft error mechanisms, modeling and mitigation

    CERN Document Server

    Sayil, Selahattin

    2016-01-01

    This book introduces readers to various radiation soft-error mechanisms such as soft delays, radiation induced clock jitter and pulses, and single event (SE) coupling induced effects. In addition to discussing various radiation hardening techniques for combinational logic, the author also describes new mitigation strategies targeting commercial designs. Coverage includes novel soft error mitigation techniques such as the Dynamic Threshold Technique and Soft Error Filtering based on Transmission gate with varied gate and body bias. The discussion also includes modeling of SE crosstalk noise, delay and speed-up effects. Various mitigation strategies to eliminate SE coupling effects are also introduced. Coverage also includes the reliability of low power energy-efficient designs and the impact of leakage power consumption optimizations on soft error robustness. The author presents an analysis of various power optimization techniques, enabling readers to make design choices that reduce static power consumption an...

  2. Reduction of weighing errors caused by tritium decay heating

    International Nuclear Information System (INIS)

    Shaw, J.F.

    1978-01-01

    The deuterium-tritium source gas mixture for laser targets is formulated by weight. Experiments show that the maximum weighing error caused by tritium decay heating is 0.2% for a 104-cm³ mix vessel. Air cooling the vessel reduces the weighing error by 90%

  3. Advances in Spectral Nodal Methods applied to SN Nuclear Reactor Global calculations in Cartesian Geometry

    International Nuclear Information System (INIS)

    Barros, R.C.; Filho, H.A.; Oliveira, F.B.S.; Silva, F.C. da

    2004-01-01

    Presented here are the advances in spectral nodal methods for discrete ordinates (SN) eigenvalue problems in Cartesian geometry. These coarse-mesh methods are based on three ingredients: (i) the use of the standard discretized spatial balance SN equations; (ii) the use of the non-standard spectral diamond (SD) auxiliary equations in the multiplying regions of the domain, e.g. fuel assemblies; and (iii) the use of the non-standard spectral Green's function (SGF) auxiliary equations in the non-multiplying regions of the domain, e.g. the reflector. In slab geometry, the hybrid SD-SGF method generates numerical results that are completely free of spatial truncation errors. In X,Y-geometry, we obtain a system of two 'slab-geometry' SN equations for the node-edge average angular fluxes by transverse-integrating the X,Y-geometry SN equations separately in the y- and then in the x-directions within an arbitrary node of the spatial grid set up on the domain. In this paper, we approximate the transverse leakage terms by constants. These are the only approximations considered in the SD-SGF-constant nodal method, since the source terms, which include scattering and possibly fission events, are treated exactly. Moreover, we describe the progress of the approximate SN albedo boundary conditions for replacing the non-multiplying regions around the nuclear reactor core. We show numerical results for typical model problems to illustrate the accuracy of spectral nodal methods for coarse-mesh SN criticality calculations. (Author)
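
    The transverse-integration step described above can be written compactly. In generic notation (not necessarily the authors' symbols), integrating the X,Y-geometry SN equations over the y-extent h_y of a node yields, for each discrete direction m, a slab-like equation in x whose bracketed transverse-leakage term is what the SD-SGF-constant method approximates by a constant within each node:

    ```latex
    % Transverse integration over node j of height h_y; \bar{\psi}_m is the
    % y-averaged angular flux in direction m:
    \mu_m \frac{d}{dx}\,\bar{\psi}_m(x)
      + \sigma_t\,\bar{\psi}_m(x)
      = \bar{Q}_m(x)
      - \frac{\eta_m}{h_y}
        \left[\psi_m\!\left(x, y_{j+1/2}\right) - \psi_m\!\left(x, y_{j-1/2}\right)\right]
    ```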

  4. Heuristics and Cognitive Error in Medical Imaging.

    Science.gov (United States)

    Itri, Jason N; Patel, Sohil H

    2018-05-01

    The field of cognitive science has provided important insights into mental processes underlying the interpretation of imaging examinations. Despite these insights, diagnostic error remains a major obstacle in the goal to improve quality in radiology. In this article, we describe several types of cognitive bias that lead to diagnostic errors in imaging and discuss approaches to mitigate cognitive biases and diagnostic error. Radiologists rely on heuristic principles to reduce complex tasks of assessing probabilities and predicting values into simpler judgmental operations. These mental shortcuts allow rapid problem solving based on assumptions and past experiences. Heuristics used in the interpretation of imaging studies are generally helpful but can sometimes result in cognitive biases that lead to significant errors. An understanding of the causes of cognitive biases can lead to the development of educational content and systematic improvements that mitigate errors and improve the quality of care provided by radiologists.

  5. Error Mitigation in Computational Design of Sustainable Energy Materials

    DEFF Research Database (Denmark)

    Christensen, Rune

    by individual C=O bonds. Energy corrections applied to C=O bonds significantly reduce systematic errors and can be extended to adsorbates. A similar study is performed for intermediates in the oxygen evolution and oxygen reduction reactions. An identified systematic error on peroxide bonds is found to also...... be present in the OOH* adsorbate. However, the systematic error will almost be canceled by inclusion of van der Waals energy. The energy difference between key adsorbates is thus similar to that previously found. Finally, a method is developed for error estimation in computationally inexpensive neural...

  6. The design of geometry teaching: learning from the geometry textbooks of Godfrey and Siddons

    OpenAIRE

    Fujita, Taro; Jones, Keith

    2002-01-01

    Deciding how to teach geometry remains a demanding task with one of major arguments being about how to combine the intuitive and deductive aspects of geometry into an effective teaching design. In order to try to obtain an insight into tackling this issue, this paper reports an analysis of innovative geometry textbooks which were published in the early part of the 20th Century, a time when significant efforts were being made to improve the teaching and learning of geometry. The analysis sugge...

  7. CAD-based automatic modeling method for Geant4 geometry model through MCAM

    International Nuclear Information System (INIS)

    Wang, D.; Nie, F.; Wang, G.; Long, P.; LV, Z.

    2013-01-01

    The full text of publication follows. Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, the calculation model needs to be established; it can be described using the Geometry Description Markup Language (GDML) or C++. However, it is time-consuming and error-prone to describe the models manually in GDML. Automatic modeling methods have been developed recently, but problems exist in most present modeling programs; in particular, some of them are not accurate or are adapted only to a specific CAD format. To convert complex CAD geometry models into GDML geometry models accurately, a CAD-based modeling method for Geant4 was developed. The essence of this method is dealing with CAD models represented by boundary representation (B-REP) and GDML models represented by constructive solid geometry (CSG). First, the CAD model was decomposed into several simple solids, each having only one closed shell. Each simple solid was then decomposed into a set of convex shells. Corresponding GDML convex basic solids were generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model was completed with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport (MCAM), and tested with several models including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately, and can be used for Geant4 automatic modeling. (authors)
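
    The end product of such a conversion is a GDML file in which CSG solids are combined by Boolean operations. A minimal illustration using Python's standard ElementTree is sketched below; the solid names and dimensions are hypothetical, and only a solids section with a single union is generated:

    ```python
    import xml.etree.ElementTree as ET

    # Illustrative sketch only: emit a minimal GDML fragment in which a CSG
    # union combines two primitive solids, the kind of structure a
    # CAD-to-GDML converter produces. Names ("blockA", "seg1") are hypothetical.
    def make_union_gdml():
        gdml = ET.Element("gdml")
        solids = ET.SubElement(gdml, "solids")
        ET.SubElement(solids, "box", name="blockA",
                      x="10", y="10", z="10", lunit="mm")
        ET.SubElement(solids, "tube", name="seg1",
                      rmax="3", z="12", deltaphi="360", aunit="deg", lunit="mm")
        union = ET.SubElement(solids, "union", name="blockA_plus_seg1")
        ET.SubElement(union, "first", ref="blockA")    # left operand of the union
        ET.SubElement(union, "second", ref="seg1")     # right operand
        return ET.tostring(gdml, encoding="unicode")
    ```

    A real converter would also emit the define, materials, structure, and setup sections that Geant4 requires before the file can be loaded.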

  8. Errors and limits in the determination of plasma electron density by measuring the absolute values of the emitted continuum radiation intensity

    International Nuclear Information System (INIS)

    Bilbao, L.; Bruzzone, H.; Grondona, D.

    1994-01-01

    The reliable determination of a plasma electron density structure requires a good knowledge of the errors affecting the employed technique. A technique based on measurements of the absolute light intensity emitted by travelling plasma structures in plasma focus devices has been used, but it can easily be adapted to other geometries and even to stationary plasma structures with time-varying plasma densities. The purpose of this work is to discuss in some detail the errors and limits of this technique. Three separate errors are shown: the minimum size of the density structure that can be resolved, an overall error in the measurements themselves, and an uncertainty in the shape of the density profile. (author)

  9. Sources of hyperbolic geometry

    CERN Document Server

    Stillwell, John

    1996-01-01

    This book presents, for the first time in English, the papers of Beltrami, Klein, and Poincaré that brought hyperbolic geometry into the mainstream of mathematics. A recognition of Beltrami comparable to that given the pioneering works of Bolyai and Lobachevsky seems long overdue-not only because Beltrami rescued hyperbolic geometry from oblivion by proving it to be logically consistent, but because he gave it a concrete meaning (a model) that made hyperbolic geometry part of ordinary mathematics. The models subsequently discovered by Klein and Poincaré brought hyperbolic geometry even further down to earth and paved the way for the current explosion of activity in low-dimensional geometry and topology. By placing the works of these three mathematicians side by side and providing commentaries, this book gives the student, historian, or professional geometer a bird's-eye view of one of the great episodes in mathematics. The unified setting and historical context reveal the insights of Beltrami, Klein, and Po...

  10. The COMET method in 3-D hexagonal geometry

    International Nuclear Information System (INIS)

    Connolly, K. J.; Rahnema, F.

    2012-01-01

    The hybrid stochastic-deterministic coarse mesh radiation transport (COMET) method developed at Georgia Tech now solves reactor core problems in 3-D hexagonal geometry. In this paper, the method is used to solve three preliminary test problems designed to challenge the method with steep flux gradients, high leakage, and strong asymmetry and heterogeneity in the core. The test problems are composed of blocks taken from a high temperature test reactor benchmark problem. As the method is still in development, these problems and their results are strictly preliminary. Results are compared to whole core Monte Carlo reference solutions in order to verify the method. Relative errors are on the order of 50 pcm in core eigenvalue, and mean relative error in pin fission density calculations is less than 1% in these difficult test cores. The method requires the one-time pre-computation of a response expansion coefficient library, which may be compiled in an amount of time comparable to a single whole core Monte Carlo calculation. After the library has been computed, COMET may solve any number of core configurations, each on the order of an hour, representing a significant gain in efficiency over other methods for whole core transport calculations. (authors)

  11. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    Science.gov (United States)

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling ...
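
    The error-injection procedure lends itself to a compact simulation. The sketch below is a toy analogue of the study's approach, not its Atlanta analysis: classical multiplicative error is added on the log scale to a synthetic exposure series, a Poisson log-linear model is fit by iteratively reweighted least squares (pure NumPy; all parameter values are assumptions), and the slope attenuates toward the null as the abstract describes:

    ```python
    import numpy as np

    # Toy illustration (not the paper's analysis): classical error added on
    # the log scale attenuates the slope of a Poisson log-linear model.
    rng = np.random.default_rng(0)
    n = 20000
    x = rng.normal(0.0, 1.0, n)              # "true" log exposure
    z = x + rng.normal(0.0, 1.0, n)          # classical error, equal variance
    y = rng.poisson(np.exp(0.5 + 0.4 * x))   # simulated daily counts

    def poisson_slope(covariate, counts):
        """Fit log E[y] = b0 + b1*covariate by IRLS; return b1."""
        X = np.column_stack([np.ones_like(covariate), covariate])
        beta = np.zeros(2)
        for _ in range(25):                  # Newton/IRLS iterations
            mu = np.exp(X @ beta)
            beta = beta + np.linalg.solve(X.T @ (mu[:, None] * X),
                                          X.T @ (counts - mu))
        return beta[1]

    b_true = poisson_slope(x, y)             # recovers roughly 0.4
    b_err = poisson_slope(z, y)              # attenuated toward the null
    ```

    With equal error and exposure variances the fitted slope roughly halves, mirroring the attenuation the study reports for classical-type error.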

  12. The geometry description markup language

    International Nuclear Information System (INIS)

    Chytracek, R.

    2001-01-01

    Currently, a lot of effort is being put on designing complex detectors. A number of simulation and reconstruction frameworks and applications have been developed with the aim to make this job easier. A very important role in this activity is played by the geometry description of the detector apparatus layout and its working environment. However, no real common approach to represent geometry data is available and such data can be found in various forms starting from custom semi-structured text files, source code (C/C++/FORTRAN), to XML and database solutions. The XML (Extensible Markup Language) has proven to provide an interesting approach for describing detector geometries, with several different but incompatible XML-based solutions existing. Therefore, interoperability and geometry data exchange among different frameworks is not possible at present. The author introduces a markup language for geometry descriptions. Its aim is to define a common approach for sharing and exchanging of geometry description data. Its requirements and design have been driven by experience and user feedback from existing projects which have their geometry description in XML

  13. Complex analysis and CR geometry

    CERN Document Server

    Zampieri, Giuseppe

    2008-01-01

    Cauchy-Riemann (CR) geometry is the study of manifolds equipped with a system of CR-type equations. Compared to the early days when the purpose of CR geometry was to supply tools for the analysis of the existence and regularity of solutions to the \\bar\\partial-Neumann problem, it has rapidly acquired a life of its own and has became an important topic in differential geometry and the study of non-linear partial differential equations. A full understanding of modern CR geometry requires knowledge of various topics such as real/complex differential and symplectic geometry, foliation theory, the geometric theory of PDE's, and microlocal analysis. Nowadays, the subject of CR geometry is very rich in results, and the amount of material required to reach competence is daunting to graduate students who wish to learn it. However, the present book does not aim at introducing all the topics of current interest in CR geometry. Instead, an attempt is made to be friendly to the novice by moving, in a fairly relaxed way, f...

  14. Study of Errors among Nursing Students

    Directory of Open Access Journals (Sweden)

    Ella Koren

    2007-09-01

    Full Text Available The study of errors in the health system today is a topic of considerable interest aimed at reducing errors through analysis of the phenomenon and the conclusions reached. Errors that occur frequently among health professionals have also been observed among nursing students. True, in most cases they are actually “near errors,” but these could be a future indicator of therapeutic reality and the effect of nurses' work environment on their personal performance. There are two different approaches to such errors: (a) The EPP (error-prone person) approach lays full responsibility at the door of the individual involved in the error, whether a student, nurse, doctor, or pharmacist. According to this approach, handling consists purely in identifying and penalizing the guilty party. (b) The EPE (error-prone environment) approach emphasizes the environment as a primary contributory factor to errors. The environment as an abstract concept includes components and processes of interpersonal communications, work relations, human engineering, workload, pressures, technical apparatus, and new technologies. The objective of the present study was to examine the role played by factors in and components of personal performance as compared to elements and features of the environment. The study was based on both of the aforementioned approaches, which, when combined, enable a comprehensive understanding of the phenomenon of errors among the student population as well as a comparison of factors contributing to human error and to error deriving from the environment. The theoretical basis of the study was a model that combined both approaches: one focusing on the individual and his or her personal performance and the other focusing on the work environment. The findings emphasize the work environment of health professionals as an EPE. However, errors could have been avoided by means of strict adherence to practical procedures. The authors examined error events in the ...

  15. Reducing Check-in Errors at Brigham Young University through Statistical Process Control

    Science.gov (United States)

    Spackman, N. Andrew

    2005-01-01

    The relationship between the library and its patrons is damaged and the library's reputation suffers when returned items are not checked in. An informal survey reveals librarians' concern for this problem and their efforts to combat it, although few libraries collect objective measurements of errors or the effects of improvement efforts. Brigham…

  16. Global aspects of complex geometry

    CERN Document Server

    Catanese, Fabrizio; Huckleberry, Alan T

    2006-01-01

    Presents an overview of developments in complex geometry. This book covers topics that range from curve and surface theory through special varieties in higher dimensions, moduli theory, Kähler geometry, and group actions to Hodge theory and characteristic-p geometry.

  17. Errors in imaging patients in the emergency setting.

    Science.gov (United States)

    Pinto, Antonio; Reginelli, Alfonso; Pinto, Fabio; Lo Re, Giuseppe; Midiri, Federico; Muzj, Carlo; Romano, Luigia; Brunese, Luca

    2016-01-01

    Emergency and trauma care produces a "perfect storm" for radiological errors: uncooperative patients, inadequate histories, time-critical decisions, concurrent tasks and often junior personnel working after hours in busy emergency departments. The main cause of diagnostic errors in the emergency department is the failure to correctly interpret radiographs, and the majority of diagnoses missed on radiographs are fractures. Missed diagnoses potentially have important consequences for patients, clinicians and radiologists. Radiologists play a pivotal role in the diagnostic assessment of polytrauma patients and of patients with non-traumatic craniothoracoabdominal emergencies, and key elements to reduce errors in the emergency setting are knowledge, experience and the correct application of imaging protocols. This article aims to highlight the definition and classification of errors in radiology, the causes of errors in emergency radiology and the spectrum of diagnostic errors in radiography, ultrasonography and CT in the emergency setting.

  18. Control rod interaction models for use in 2D and 3D reactor geometries

    International Nuclear Information System (INIS)

    Bannerman, R.C.

    1985-11-01

    Control rod interaction models are developed for use in two-dimensional and three-dimensional reactor geometries. These models allow the total worth of any combination of inserted control rods to be determined from the individual worths in conjunction with an algorithm representing interaction effects between them. The validity of the assumptions is demonstrated for fast and thermal systems, showing modelling errors of 1% or less in inserted control rod worths. The models are ideally suited for most reactor systems. (UK)
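
    The abstract does not give the interaction algorithm itself. As a purely hypothetical sketch of the idea, the combined worth of an inserted set of rods can be taken as the sum of individual worths scaled by pairwise interaction factors (shadowing < 1, anti-shadowing > 1):

    ```python
    # Hypothetical sketch only (NOT the report's algorithm): combined worth
    # of an inserted rod set = sum of individual worths, scaled by a
    # multiplicative factor for each inserted pair.
    def combined_worth(worths, factors, inserted):
        """worths: dict rod -> individual worth (e.g. in pcm);
        factors: dict frozenset({i, j}) -> pairwise interaction factor;
        inserted: iterable of rod names."""
        rods = list(inserted)
        total = sum(worths[r] for r in rods)
        for a in range(len(rods)):
            for b in range(a + 1, len(rods)):
                pair = frozenset({rods[a], rods[b]})
                # shadowing (<1) or anti-shadowing (>1) between the pair
                total *= factors.get(pair, 1.0)
        return total
    ```

    A single inserted rod reduces to its individual worth, since no pair factor applies.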

  19. Analytic geometry

    CERN Document Server

    Burdette, A C

    1971-01-01

    Analytic Geometry covers several fundamental aspects of analytic geometry needed for advanced subjects, including calculus.This book is composed of 12 chapters that review the principles, concepts, and analytic proofs of geometric theorems, families of lines, the normal equation of the line, and related matters. Other chapters highlight the application of graphing, foci, directrices, eccentricity, and conic-related topics. The remaining chapters deal with the concept polar and rectangular coordinates, surfaces and curves, and planes.This book will prove useful to undergraduate trigonometric st...

  20. Vector geometry

    CERN Document Server

    Robinson, Gilbert de B

    2011-01-01

    This brief undergraduate-level text by a prominent Cambridge-educated mathematician explores the relationship between algebra and geometry. An elementary course in plane geometry is the sole requirement for Gilbert de B. Robinson's text, which is the result of several years of teaching and learning the most effective methods from discussions with students. Topics include lines and planes, determinants and linear equations, matrices, groups and linear transformations, and vectors and vector spaces. Additional subjects range from conics and quadrics to homogeneous coordinates and projective geom...

  1. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Micromagnetic recording model of writer geometry effects at skew

    Science.gov (United States)

    Plumer, M. L.; Bozeman, S.; van Ek, J.; Michel, R. P.

    2006-04-01

    The effects of the pole-tip geometry at the air-bearing surface on perpendicular recording at a skew angle are examined through modeling and spin-stand test data. Head fields generated by the finite element method were used to record transitions within our previously described micromagnetic recording model. Write-field contours for a variety of square, rectangular, and trapezoidal pole shapes were evaluated to determine the impact of geometry on field contours. Comparing results for recorded track width, transition width, and media signal-to-noise ratio at 0° and 15° skew demonstrates the benefits of trapezoidal and reduced aspect-ratio pole shapes. Consistency between these modeled results and test data is demonstrated.

  3. Quantum entanglement as an aspect of pure spinor geometry

    International Nuclear Information System (INIS)

    Kiosses, V

    2014-01-01

    Relying on the mathematical analogy of the pure states of a two-qubit system with four-component Dirac spinors, we provide an alternative consideration of quantum entanglement using the mathematical formulation of Cartan's pure spinors. A result of our analysis is that the Cartan equation of a two-qubit state is entanglement sensitive in the same way that the Dirac equation for fermions is mass sensitive. The Cartan equation for unentangled qubits is reduced to a pair of Cartan equations for single qubits as the Dirac equation for massless fermions separates into two Weyl equations. Finally, we establish a correspondence between the separability condition in qubit geometry and the separability condition in spinor geometry. (paper)
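
    The correspondence mentioned at the end of the abstract rests on a standard fact, restated here in generic notation (not necessarily the paper's): a pure two-qubit state is separable exactly when its concurrence vanishes.

    ```latex
    % Pure two-qubit state in the computational basis:
    |\psi\rangle = a|00\rangle + b|01\rangle + c|10\rangle + d|11\rangle,
    \qquad
    C(\psi) = 2\,\lvert ad - bc \rvert,
    \qquad
    |\psi\rangle \ \text{separable} \iff ad - bc = 0 .
    ```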

  4. Eliminating US hospital medical errors.

    Science.gov (United States)

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

    Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and to present a closed-loop, mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams, and devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate significantly, toward the six-sigma level. Additionally, designing as many redundancies as possible into the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and in improving their education related to the quality of service delivery, to minimize clinical errors. This will lead to an increase in fixed costs, especially in the shorter time frame. This paper focuses the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run and also ensure patient safety.

  5. Physics- and engineering knowledge-based geometry repair system for robust parametric CAD geometries

    OpenAIRE

    Li, Dong

    2012-01-01

    In modern multi-objective design optimisation, an effective geometry engine is becoming an essential tool and its performance has a significant impact on the entire process. Building a parametric geometry requires difficult compromises between the conflicting goals of robustness and flexibility. The work presents a solution for improving the robustness of parametric geometry models by capturing and modelling relative engineering knowledge into a surrogate model, and deploying it automatically...

  6. Improving Type Error Messages in OCaml

    Directory of Open Access Journals (Sweden)

    Arthur Charguéraud

    2015-12-01

    Cryptic type error messages are a major obstacle to learning OCaml or other ML-based languages. In many cases, error messages cannot be interpreted without a sufficiently precise model of the type inference algorithm. The problem of improving type error messages in ML has received quite a bit of attention over the past two decades, and many different strategies have been considered. The challenge is not only to produce error messages that are both sufficiently concise and systematically useful to the programmer, but also to handle a full-blown programming language and to cope with large-sized programs efficiently. In this work, we present a modification to the traditional ML type inference algorithm implemented in OCaml that, by significantly reducing the left-to-right bias, allows us to report error messages that are more helpful to the programmer. Our algorithm remains fully predictable and continues to produce fairly concise error messages that always help make some progress towards fixing the code. We implemented our approach as a patch to the OCaml compiler in just a few hundred lines of code. We believe that this patch should benefit not just beginners, but also experienced programmers developing large-scale OCaml programs.

  7. Operator error and emotions - a major cause of human failure

    International Nuclear Information System (INIS)

    Patterson, B.K.; Bradley, M.; Artiss, W.G.

    2000-01-01

    This paper proposes the idea that a large proportion of the incidents attributed to operator and maintenance error in a nuclear or industrial plant are actually founded in our human emotions. Basic psychological theory of emotions is briefly presented and then the authors present situations and instances that can cause emotions to swell and lead to operator and maintenance error. Since emotional information is not recorded in industrial incident reports, the challenge is extended to industry, to review incident source documents for cases of emotional involvement and to develop means to collect emotion related information in future root cause analysis investigations. Training must then be provided to operators and maintainers to enable them to know one's emotions, manage emotions, motivate one's self, recognize emotions in others and handle relationships. Effective training will reduce the instances of human error based in emotions and enable a cooperative, productive environment in which to work. (author)

  8. Operator error and emotions - a major cause of human failure

    Energy Technology Data Exchange (ETDEWEB)

    Patterson, B.K. [Human Factors Practical Incorporated (Canada); Bradley, M. [Univ. of New Brunswick, Saint John, New Brunswick (Canada); Artiss, W.G. [Human Factors Practical (Canada)

    2000-07-01

    This paper proposes the idea that a large proportion of the incidents attributed to operator and maintenance error in a nuclear or industrial plant are actually founded in our human emotions. Basic psychological theory of emotions is briefly presented and then the authors present situations and instances that can cause emotions to swell and lead to operator and maintenance error. Since emotional information is not recorded in industrial incident reports, the challenge is extended to industry, to review incident source documents for cases of emotional involvement and to develop means to collect emotion related information in future root cause analysis investigations. Training must then be provided to operators and maintainers to enable them to know one's emotions, manage emotions, motivate one's self, recognize emotions in others and handle relationships. Effective training will reduce the instances of human error based in emotions and enable a cooperative, productive environment in which to work. (author)

  9. The District Nursing Clinical Error Reduction Programme.

    Science.gov (United States)

    McGraw, Caroline; Topping, Claire

    2011-01-01

    The District Nursing Clinical Error Reduction (DANCER) Programme was initiated in NHS Islington following an increase in the number of reported medication errors. The objectives were to reduce the actual degree of harm and the potential risk of harm associated with medication errors and to maintain the existing positive reporting culture, while robustly addressing performance issues. One hundred medication errors reported in 2007/08 were analysed using a framework that specifies the factors that predispose to adverse medication events in domiciliary care. Various contributory factors were identified and interventions were subsequently developed to address poor drug calculation and medication problem-solving skills and incorrectly transcribed medication administration record charts. Follow up data were obtained at 12 months and two years. The evaluation has shown that although medication errors do still occur, the programme has resulted in a marked shift towards a reduction in the associated actual degree of harm and the potential risk of harm.

  10. Comment on “Magnetic geometry and physics of advanced divertors: The X-divertor and the snowflake” [Phys. Plasmas 20, 102507 (2013)

    International Nuclear Information System (INIS)

    Ryutov, D. D.; Cohen, R. H.; Rognlien, T. D.; Soukhanovskii, V. A.; Umansky, M. V.

    2014-01-01

    In the recently published paper “Magnetic geometry and physics of advanced divertors: The X-divertor and the snowflake” [Phys. Plasmas 20, 102507 (2013)], the authors raise interesting and important issues concerning divertor physics and design. However, the paper contains significant errors: (a) The conceptual framework used in it for the evaluation of divertor “quality” is reduced to the assessment of the magnetic field structure in the outer Scrape-Off Layer. This framework is incorrect because processes affecting the pedestal, the private flux region and all of the divertor legs (four, in the case of a snowflake) are an inseparable part of divertor operation. (b) The concept of the divertor index focuses on only one feature of the magnetic field structure and can be quite misleading when applied to divertor design. (c) The suggestion to rename the divertor configurations experimentally realized on NSTX (National Spherical Torus Experiment) and DIII-D (Doublet III-D) from snowflakes to X-divertors is not justified: it is not based on comparison of these configurations with the prototypical X-divertor, and it ignores the fact that the NSTX and DIII-D poloidal magnetic field geometries fit very well into the snowflake “two-null” prescription

  11. Spacecraft and propulsion technician error

    Science.gov (United States)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  12. SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry

    Energy Technology Data Exchange (ETDEWEB)

    Kotoku, J; Nakabayashi, S; Kumagai, S; Ishibashi, T; Kobayashi, T [Teikyo University, Itabashi-ku, Tokyo (Japan); Haga, A; Saotome, N [University of Tokyo Hospital, Bunkyo-ku, Tokyo (Japan); Arai, N [Teikyo University Hospital, Itabashi-ku, Tokyo (Japan)

    2014-06-01

    Purpose: Obtaining 3D surface data of a patient in a non-invasive way can substantially reduce the effort required for patient registration in radiation therapy. To achieve this goal, we introduced the multiple-view stereo technique, which is known from 'photo tourism' applications on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D reconstruction model was built by locating SIFT features, which are robust to rotation and shift, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including the camera positions, to minimize the reprojection error using the bundle adjustment technique (non-linear optimization). As a final step, we performed dense reconstruction and associated a color with each point using the PMVS library. Results: Surface data were reconstructed well by visual inspection. The human skin was reconstructed well, although the reconstruction was too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multi-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI (25861128)
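    The eight-point step described in the Methods can be sketched in a few lines of NumPy. The following normalized eight-point estimate of the fundamental matrix is our own illustrative code, not the authors'; the RANSAC wrapper and camera calibration around it are omitted:

```python
import numpy as np

def eight_point(x1, x2):
    """Normalized eight-point estimate of the fundamental matrix F from
    N >= 8 correspondences; x1, x2 are (N, 2) pixel coordinates.
    (In the paper's pipeline this linear estimate is wrapped in RANSAC.)"""
    def normalize(pts):
        # Translate to the centroid and scale so the mean distance is sqrt(2).
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0.0, -s * c[0]],
                      [0.0, s, -s * c[1]],
                      [0.0, 0.0, 1.0]])
        homog = np.column_stack([pts, np.ones(len(pts))])
        return homog @ T.T, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence x2^T F x1 = 0 contributes one row of A f = 0.
    A = np.column_stack([p2[:, :1] * p1, p2[:, 1:2] * p1, p1])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # Enforce the rank-2 constraint of a fundamental matrix.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the normalization and fix the (arbitrary) overall scale.
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)
```

    With noise-free synthetic correspondences, the recovered F satisfies the epipolar constraint to numerical precision; on real matches, RANSAC selects the inlier set over which this estimate is computed.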

  13. SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry

    International Nuclear Information System (INIS)

    Kotoku, J; Nakabayashi, S; Kumagai, S; Ishibashi, T; Kobayashi, T; Haga, A; Saotome, N; Arai, N

    2014-01-01

    Purpose: Obtaining 3D surface data of a patient in a non-invasive way can substantially reduce the effort required for patient registration in radiation therapy. To achieve this goal, we introduced the multiple-view stereo technique, which is known from 'photo tourism' applications on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D reconstruction model was built by locating SIFT features, which are robust to rotation and shift, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including the camera positions, to minimize the reprojection error using the bundle adjustment technique (non-linear optimization). As a final step, we performed dense reconstruction and associated a color with each point using the PMVS library. Results: Surface data were reconstructed well by visual inspection. The human skin was reconstructed well, although the reconstruction was too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multi-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI (25861128)

  14. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

    BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: In numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R², and introduced sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity was decreased for some, but not all, rare haplotypes. The overall error rate generally increased with increasing number of loci, increasing minor allele frequency of SNPs, decreasing correlation between the alleles, and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method indicates whether a specific risk haplotype can be expected to be reconstructed with essentially no or with high misclassification, and thus the magnitude of expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
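    As a sketch of how the sensitivity and specificity introduced here apply to reconstructed haplotypes (the function and variable names are ours, not from the KORA analysis): treat "carries the target haplotype" as the positive class and compare inferred against true assignments.

```python
def haplotype_sens_spec(true_haps, inferred_haps, target):
    """Per-haplotype sensitivity and specificity of statistical
    reconstruction, computed from the 2x2 misclassification table
    for one target haplotype."""
    tp = fn = fp = tn = 0
    for truth, guess in zip(true_haps, inferred_haps):
        if truth == target:
            tp += guess == target   # true carrier, correctly reconstructed
            fn += guess != target   # true carrier, missed
        else:
            fp += guess == target   # non-carrier, falsely called a carrier
            tn += guess != target   # non-carrier, correctly excluded
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity
```

    The two numbers separate the two dimensions of reconstruction error the abstract describes: sensitivity answers "is a true risk-haplotype carrier recovered?", specificity "is a non-carrier kept out of the carrier group?".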

  15. Drug dispensing errors in a ward stock system

    DEFF Research Database (Denmark)

    Andersen, Stig Ejdrup

    2010-01-01

    The aim of this study was to determine the frequency of drug dispensing errors in a traditional ward stock system operated by nurses and to investigate the effect of potential contributing factors. This was a descriptive study conducted in a teaching hospital from January 2005 to June 2007. In five.... Multivariable analysis showed that surgical and psychiatric settings were more susceptible to involvement in dispensing errors and that polypharmacy was a risk factor. In this ward stock system, dispensing errors are relatively common, they depend on speciality and are associated with polypharmacy. These results indicate that strategies to reduce dispensing errors should address polypharmacy and focus on high-risk units. This should, however, be substantiated by a future trial.

  16. Noncommutative geometry

    CERN Document Server

    Connes, Alain

    1994-01-01

    This English version of the path-breaking French book on this subject gives the definitive treatment of the revolutionary approach to measure theory, geometry, and mathematical physics developed by Alain Connes. Profusely illustrated and invitingly written, this book is ideal for anyone who wants to know what noncommutative geometry is, what it can do, or how it can be used in various areas of mathematics, quantization, and elementary particles and fields. Key features:
    - First full treatment of the subject and its applications
    - Written by the pioneer of this field
    - Broad applications in mathematics

  17. Geometry Revealed

    CERN Document Server

    Berger, Marcel

    2010-01-01

    Both classical geometry and modern differential geometry have been active subjects of research throughout the 20th century and lie at the heart of many recent advances in mathematics and physics. The underlying motivating concept for the present book is that it offers readers the elements of a modern geometric culture by means of a whole series of visually appealing unsolved (or recently solved) problems that require the creation of concepts and tools of varying abstraction. Starting with such natural, classical objects as lines, planes, circles, spheres, polygons, polyhedra, curves, surfaces,

  18. Plutonium Finishing Plant (PFP) Generalized Geometry Holdup Calculations and Total Measurement Uncertainty

    International Nuclear Information System (INIS)

    Keele, B.D.

    2005-01-01

    A collimated portable gamma-ray detector will be used to quantify the plutonium content of items that can be approximated as a point, line, or area geometry with respect to the detector. These items can include ducts, piping, glove boxes, isolated equipment inside of gloveboxes, and HEPA filters. The Generalized Geometry Holdup (GGH) model is used for the reduction of counting data. This document specifies the calculations to reduce counting data into contained plutonium and the associated total measurement uncertainty.
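    The actual GGH equations are specified in the document itself; purely as an illustration of how a point-geometry count rate, a calibration factor, and a measurement distance might be reduced to a contained mass with a propagated total measurement uncertainty, consider the following sketch. All symbols and the inverse-square form are our simplifying assumptions, not the GGH model:

```python
import math

def point_assay(net_rate, d_net_rate, cal, d_cal, dist_m, d_dist_m):
    """Illustrative point-geometry assay: mass scales with the net count
    rate and, by the inverse-square law, with distance squared. The total
    relative uncertainty combines the component relative uncertainties in
    quadrature; distance enters squared, so its relative term is doubled."""
    mass = net_rate * dist_m ** 2 / cal
    rel = math.sqrt((d_net_rate / net_rate) ** 2
                    + (d_cal / cal) ** 2
                    + (2 * d_dist_m / dist_m) ** 2)
    return mass, mass * rel
```

    The point of the sketch is the structure of a "total measurement uncertainty": every input (counting statistics, calibration, geometry) contributes a relative term, and the dominant term identifies where measurement effort is best spent.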

  19. Discrete differential geometry. Consistency as integrability

    OpenAIRE

    Bobenko, Alexander I.; Suris, Yuri B.

    2005-01-01

    A new field of discrete differential geometry is presently emerging on the border between differential and discrete geometry. Whereas classical differential geometry investigates smooth geometric shapes (such as surfaces), and discrete geometry studies geometric shapes with finite number of elements (such as polyhedra), the discrete differential geometry aims at the development of discrete equivalents of notions and methods of smooth surface theory. Current interest in this field derives not ...

  20. Assessing errors related to characteristics of the items measured

    International Nuclear Information System (INIS)

    Liggett, W.

    1980-01-01

    Errors that are related to some intrinsic property of the items measured are often encountered in nuclear material accounting. An example is the error in nondestructive assay measurements caused by uncorrected matrix effects. Nuclear material accounting requires for each materials type one measurement method for which bounds on these errors can be determined. If such a method is available, a second method might be used to reduce costs or to improve precision. If the measurement error for the first method is longer-tailed than Gaussian, then precision might be improved by measuring all items by both methods. 8 refs

  1. Effects of the variation of samples geometry on radionuclide calibrator response for radiopharmaceuticals used in nuclear medicine

    Energy Technology Data Exchange (ETDEWEB)

    Albuquerque, Antonio Morais de Sa; Fragoso, Maria Conceicao de Farias; Oliveira, Mercia L. [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2011-07-01

    In nuclear medicine practice, accurate knowledge of the activity of the radiopharmaceuticals to be administered to subjects is an important factor in ensuring the success of diagnosis or therapy. The instrument used for this purpose is the radionuclide calibrator. Radiopharmaceuticals are usually contained in glass vials or syringes. However, the radionuclide calibrator response is sensitive to the measurement geometry, and the calibration factors supplied by manufacturers are valid only for a single sample geometry. To minimize the uncertainty associated with activity measurements, it is important to use the appropriate correction factors for each radionuclide in the specific geometry in which the measurement is to be made. The aims of this work were to evaluate the behavior of radionuclide calibrators when the geometry of the radioactive sources is varied and to determine experimentally the correction factors for different volumes and container types commonly used in nuclear medicine practice. The measurements were made in two ionization chambers from different manufacturers (Capintec and Biodex), using four radionuclides with different photon energies: {sup 18}F, {sup 99m}Tc, {sup 131}I and {sup 201}Tl. The results confirm the significant dependence of the radionuclide calibrator reading on the sample geometry, showing the need to use correction factors in order to minimize the errors that affect the activity measurements. (author)
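    Applying such experimentally determined correction factors at measurement time amounts to a per-geometry lookup. A minimal sketch follows; the factor values in the table are invented placeholders, not the authors' results:

```python
# Hypothetical geometry correction factors, one per (radionuclide,
# container, volume in mL) combination, as determined experimentally
# for a given calibrator. Values below are placeholders.
CORRECTION_FACTORS = {
    ("Tc-99m", "syringe", 5.0): 1.02,
    ("Tc-99m", "glass vial", 10.0): 0.97,
    ("F-18", "syringe", 5.0): 1.05,
}

def corrected_activity(reading_mbq, nuclide, container, volume_ml):
    """Apply the geometry correction factor for the measured sample;
    fail loudly if this geometry was never calibrated rather than
    silently returning an uncorrected reading."""
    key = (nuclide, container, volume_ml)
    if key not in CORRECTION_FACTORS:
        raise KeyError(f"no correction factor calibrated for {key}")
    return reading_mbq * CORRECTION_FACTORS[key]
```

    Raising on an uncalibrated geometry, rather than defaulting to a factor of 1.0, mirrors the paper's point: an uncorrected reading in the wrong geometry is a hidden systematic error.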

  2. Spinorial Geometry and Branes

    International Nuclear Information System (INIS)

    Sloane, Peter

    2007-01-01

    We adapt the spinorial geometry method introduced in [J. Gillard, U. Gran and G. Papadopoulos, 'The spinorial geometry of supersymmetric backgrounds,' Class. Quant. Grav. 22 (2005) 1033 (arXiv:hep-th/0410155)].

  3. Safety coaches in radiology: decreasing human error and minimizing patient harm

    Energy Technology Data Exchange (ETDEWEB)

    Dickerson, Julie M.; Adams, Janet M. [Cincinnati Children' s Hospital Medical Center, Department of Radiology, MLC 5031, Cincinnati, OH (United States); Koch, Bernadette L.; Donnelly, Lane F. [Cincinnati Children' s Hospital Medical Center, Department of Radiology, MLC 5031, Cincinnati, OH (United States); Cincinnati Children' s Hospital Medical Center, Department of Pediatrics, Cincinnati, OH (United States); Goodfriend, Martha A. [Cincinnati Children' s Hospital Medical Center, Department of Quality Improvement, Cincinnati, OH (United States)

    2010-09-15

    Successful programs to improve patient safety require a component aimed at improving safety culture and environment, resulting in a reduced number of human errors that could lead to patient harm. Safety coaching provides peer accountability. It involves observing for safety behaviors and use of error prevention techniques and provides immediate feedback. For more than a decade, behavior-based safety coaching has been a successful strategy for reducing error within the context of occupational safety in industry. We describe the use of safety coaches in radiology. Safety coaches are an important component of our comprehensive patient safety program. (orig.)

  4. Safety coaches in radiology: decreasing human error and minimizing patient harm

    International Nuclear Information System (INIS)

    Dickerson, Julie M.; Adams, Janet M.; Koch, Bernadette L.; Donnelly, Lane F.; Goodfriend, Martha A.

    2010-01-01

    Successful programs to improve patient safety require a component aimed at improving safety culture and environment, resulting in a reduced number of human errors that could lead to patient harm. Safety coaching provides peer accountability. It involves observing for safety behaviors and use of error prevention techniques and provides immediate feedback. For more than a decade, behavior-based safety coaching has been a successful strategy for reducing error within the context of occupational safety in industry. We describe the use of safety coaches in radiology. Safety coaches are an important component of our comprehensive patient safety program. (orig.)

  5. Safety coaches in radiology: decreasing human error and minimizing patient harm.

    Science.gov (United States)

    Dickerson, Julie M; Koch, Bernadette L; Adams, Janet M; Goodfriend, Martha A; Donnelly, Lane F

    2010-09-01

    Successful programs to improve patient safety require a component aimed at improving safety culture and environment, resulting in a reduced number of human errors that could lead to patient harm. Safety coaching provides peer accountability. It involves observing for safety behaviors and use of error prevention techniques and provides immediate feedback. For more than a decade, behavior-based safety coaching has been a successful strategy for reducing error within the context of occupational safety in industry. We describe the use of safety coaches in radiology. Safety coaches are an important component of our comprehensive patient safety program.

  6. Error analysis of motion correction method for laser scanning of moving objects

    Science.gov (United States)

    Goel, S.; Lohani, B.

    2014-05-01

    The limitation of conventional laser scanning methods is that the objects being scanned must be static. The need to scan moving objects has resulted in the development of new methods capable of generating the correct 3D geometry of moving objects. Limited literature is available showing the development of the very few methods capable of catering to the problem of object motion during scanning, and all the existing methods utilize their own models or sensors. Studies on error modelling or analysis of any of these motion correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such 'motion correction' method. This method assumes the availability of position and orientation information for the moving object, which in general can be obtained by installing a POS system on board or by use of tracking devices. It then uses this information along with the laser scanner data to apply a correction to the laser data, thus resulting in correct geometry despite the object being mobile during scanning. The major applications of this method lie in the shipping industry, to scan ships either moving or parked at sea, and in scanning objects such as hot air balloons or aerostats. It is to be noted that the other 'motion correction' methods explained in the literature cannot be applied to scan the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the 'motion correction' method as well as a detailed account of the behavior and variation of the error due to different sensor components, alone and in combination with each other. The analysis can be used to obtain insights into the optimal utilization of available components for achieving the best results.
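    The correction step analyzed in the paper can be sketched as follows, assuming (as the method does) that the moving object's pose is available from a POS system or tracker at each laser return's timestamp; the names and frame conventions below are ours. Each time-tagged point is moved from the world frame into the object's body frame, so the accumulated cloud is rigid even though the object moved during the scan:

```python
import numpy as np

def motion_correct(points_world, times, pose_at):
    """Re-express time-tagged scan points of a moving object in the
    object's body frame. pose_at(t) returns (R, t): the rotation and
    translation taking body coordinates to world coordinates at time t."""
    corrected = []
    for p, t in zip(points_world, times):
        R, trans = pose_at(t)
        corrected.append(R.T @ (p - trans))  # invert the body->world pose
    return np.array(corrected)
```

    The error budget the paper develops then follows from this chain: uncertainty in `pose_at` (POS/tracker errors) and in the raw laser ranges both propagate through the inverse-pose transform into the corrected coordinates.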

  7. Teleparallel Lagrange geometry and a unified field theory

    Energy Technology Data Exchange (ETDEWEB)

    Wanas, M I [Department of Astronomy, Faculty of Science, Cairo University, CTP of the British University in Egypt (BUE) (Egypt); Youssef, Nabil L; Sid-Ahmed, A M, E-mail: wanas@frcu.eun.eg, E-mail: nyoussef@frcu.eun.e, E-mail: nlyoussef2003@yahoo.f, E-mail: amrs@mailer.eun.e, E-mail: amrsidahmed@gmail.co [Department of Mathematics, Faculty of Science, Cairo University (Egypt)

    2010-02-21

    In this paper, we construct a field theory unifying gravity and electromagnetism in the context of extended absolute parallelism (EAP) geometry. This geometry combines, within its structure, the geometric richness of the tangent bundle and the mathematical simplicity of absolute parallelism (AP) geometry. The constructed field theory is a generalization of the generalized field theory (GFT) formulated by Mikhail and Wanas. The theory obtained is purely geometric. The horizontal (resp. vertical) field equations are derived by applying the Euler-Lagrange equations to an appropriate horizontal (resp. vertical) scalar Lagrangian. The symmetric part of the resulting horizontal (resp. vertical) field equations gives rise to a generalized form of Einstein's field equations in which the horizontal (resp. vertical) energy-momentum tensor is purely geometric. The skew-symmetric part of the resulting horizontal (resp. vertical) field equations gives rise to a generalized form of Maxwell equations in which the electromagnetic field is purely geometric. Some interesting special cases, which reveal the role of the nonlinear connection in the obtained field equations, are examined. Finally, the condition under which our constructed field equations reduce to the GFT is explicitly established.

  8. An introduction to incidence geometry

    CERN Document Server

    De Bruyn, Bart

    2016-01-01

    This book gives an introduction to the field of Incidence Geometry by discussing the basic families of point-line geometries and introducing some of the mathematical techniques that are essential for their study. The families of geometries covered in this book include among others the generalized polygons, near polygons, polar spaces, dual polar spaces and designs. Also the various relationships between these geometries are investigated. Ovals and ovoids of projective spaces are studied and some applications to particular geometries will be given. A separate chapter introduces the necessary mathematical tools and techniques from graph theory. This chapter itself can be regarded as a self-contained introduction to strongly regular and distance-regular graphs. This book is essentially self-contained, only assuming the knowledge of basic notions from (linear) algebra and projective and affine geometry. Almost all theorems are accompanied with proofs and a list of exercises with full solutions is given at the end...

  9. Reduction of digital errors of digital charge division type position-sensitive detectors

    International Nuclear Information System (INIS)

    Uritani, A.; Yoshimura, K.; Takenaka, Y.; Mori, C.

    1994-01-01

    It is well known that ''digital errors'', i.e. differential non-linearity, appear in a position profile of radiation interactions when the profile is obtained with a digital charge-division-type position-sensitive detector. Two methods are presented to reduce the digital errors. They are the methods using logarithmic amplifiers and a weighting function. The validities of these two methods have been evaluated mainly by computer simulation. These methods can considerably reduce the digital errors. The best results are obtained when both methods are applied. ((orig.))
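    For context on where such 'digital errors' originate, here is a minimal sketch (ours, not the authors' correction methods) of plain digital charge division: the position estimate is a ratio of two integer ADC values, and binning that ratio into position channels yields unequal channel occupancies, i.e. differential non-linearity:

```python
import numpy as np

def position_channels(qa_counts, qb_counts, n_channels=64):
    """Digital charge division: position ~ Qa / (Qa + Qb) from the two
    digitized end charges of a resistive-wire detector, binned into
    position channels. Because the ratio of integers takes unevenly
    spaced values, channel occupancies are non-uniform even for
    perfectly uniform illumination."""
    qa = np.asarray(qa_counts, dtype=float)
    qb = np.asarray(qb_counts, dtype=float)
    pos = qa / (qa + qb)                               # in [0, 1]
    ch = np.minimum((pos * n_channels).astype(int), n_channels - 1)
    return ch
```

    The two remedies evaluated in the record act on exactly this mapping: logarithmic amplification changes how the charge range is quantized, and a weighting function spreads each event over neighbouring channels instead of assigning it to a single truncated bin.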

  10. Development of a new error field correction coil (C-coil) for DIII-D

    International Nuclear Information System (INIS)

    Robinson, J.I.; Scoville, J.T.

    1995-12-01

    The C-coil recently installed on the DIII-D tokamak was developed to reduce the error fields created by imperfections in the location and geometry of the existing coils used to confine, heat, and shape the plasma. First results from C-coil experiments include stable operation in a 1.6 MA plasma with a density less than 1.0 x 10^13 cm^-3, nearly a factor of three lower density than that achievable without the C-coil. The C-coil has also been used in magnetic braking of the plasma rotation and in high energy particle confinement experiments. The C-coil system consists of six individual saddle coils, each 60 degrees wide toroidally, spanning the midplane of the vessel with a vertical height of 1.6 m. The coils are located at a major radius of 3.2 m, just outside of the toroidal field coils. The actual shape and geometry of each coil section varied somewhat from the nominal dimensions due to the large number of obstructions to the desired coil path around the already crowded tokamak. Each coil section consists of four turns of 750 MCM insulated copper cable banded with stainless steel straps within the web of a 3 in. x 3 in. stainless steel angle frame. The C-coil structure was designed to resist peak transient radial forces (up to 1,800 Nm) exerted on the coil by the toroidal and poloidal fields. The coil frames were supported from existing poloidal field coil case brackets, coil studs, and various other structures on the tokamak.

  11. Spinorial Geometry and Branes

    Energy Technology Data Exchange (ETDEWEB)

    Sloane, Peter [Department of Mathematics, King's College, University of London, Strand, London WC2R 2LS (United Kingdom)

    2007-09-15

    We adapt the spinorial geometry method introduced in [J. Gillard, U. Gran and G. Papadopoulos, 'The spinorial geometry of supersymmetric backgrounds,' Class. Quant. Grav. 22 (2005) 1033 (arXiv:hep-th/0410155)]

  12. Introduction to non-Euclidean geometry

    CERN Document Server

    Wolfe, Harold E

    2012-01-01

    One of the first college-level texts for elementary courses in non-Euclidean geometry, this concise, readable volume is geared toward students familiar with calculus. A full treatment of the historical background explores the centuries-long efforts to prove Euclid's parallel postulate and their triumphant conclusion. Numerous original exercises form an integral part of the book. Topics include hyperbolic plane geometry and hyperbolic plane trigonometry, applications of calculus to the solutions of some problems in hyperbolic geometry, elliptic plane geometry and trigonometry, and the consistency

  13. Optical geometry across the horizon

    International Nuclear Information System (INIS)

    Jonsson, Rickard

    2006-01-01

    In a recent paper (Jonsson and Westman 2006 Class. Quantum Grav. 23 61), a generalization of optical geometry, assuming a non-shearing reference congruence, is discussed. Here we illustrate that this formalism can be applied to (a finite four-volume of) any spherically symmetric spacetime. In particular, we apply the formalism, using a non-static reference congruence, to do optical geometry across the horizon of a static black hole. While the resulting geometry is in principle time dependent, we can choose the reference congruence in such a manner that an embedding of the geometry always looks the same. Relative to the embedded geometry the reference points are then moving. We discuss the motion of photons, inertial forces and gyroscope precession in this framework.

  14. Quantification of Airfoil Geometry-Induced Aerodynamic Uncertainties---Comparison of Approaches

    KAUST Repository

    Liu, Dishi

    2015-04-14

    Uncertainty quantification in aerodynamic simulations calls for efficient numerical methods to reduce computational cost, especially for uncertainties caused by random geometry variations which involve a large number of variables. This paper compares five methods, including quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature and by point collocation, radial basis function and a gradient-enhanced version of kriging, and examines their efficiency in estimating statistics of aerodynamic performance upon random perturbation to the airfoil geometry which is parameterized by independent Gaussian variables. The results show that gradient-enhanced surrogate methods achieve better accuracy than direct integration methods with the same computational cost.
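The kind of statistic being estimated can be illustrated with a toy surrogate. In the sketch below, the quadratic "lift" response, its coefficients and the number of geometry modes are invented for illustration (the paper's airfoil model is far richer); independent Gaussian geometry perturbations are Monte Carlo-sampled and the estimated mean is checked against the closed-form expectation of a quadratic form:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8                                   # number of Gaussian geometry modes

# Toy quadratic response surrogate: f(xi) = c0 + g.xi + 0.5 xi' H xi
c0 = 0.8
g = rng.normal(size=d) * 0.01
A = rng.normal(size=(d, d)) * 0.02
H = A + A.T                             # symmetric Hessian

def lift(xi):
    return c0 + g @ xi + 0.5 * xi @ H @ xi

# For xi ~ N(0, I) the mean is known in closed form: E[f] = c0 + 0.5 tr(H).
exact_mean = c0 + 0.5 * np.trace(H)

samples = rng.standard_normal((20_000, d))
vals = np.array([lift(x) for x in samples])
print(f"MC mean {vals.mean():.4f} vs exact {exact_mean:.4f}")
```

The methods compared in the paper (quasi-Monte Carlo, polynomial chaos, kriging) all aim to estimate such statistics with far fewer evaluations than this brute-force sampling needs.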

  15. Quantification of Airfoil Geometry-Induced Aerodynamic Uncertainties---Comparison of Approaches

    KAUST Repository

    Liu, Dishi; Litvinenko, Alexander; Schillings, Claudia; Schulz, Volker

    2015-01-01

    Uncertainty quantification in aerodynamic simulations calls for efficient numerical methods to reduce computational cost, especially for uncertainties caused by random geometry variations which involve a large number of variables. This paper compares five methods, including quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature and by point collocation, radial basis function and a gradient-enhanced version of kriging, and examines their efficiency in estimating statistics of aerodynamic performance upon random perturbation to the airfoil geometry which is parameterized by independent Gaussian variables. The results show that gradient-enhanced surrogate methods achieve better accuracy than direct integration methods with the same computational cost.

  16. Advanced geometries for ballistic neutron guides

    International Nuclear Information System (INIS)

    Schanzer, Christian; Boeni, Peter; Filges, Uwe; Hils, Thomas

    2004-01-01

    Sophisticated neutron guide systems take advantage of supermirrors being used to increase the neutron flux. However, the finite reflectivity of supermirrors becomes a major loss mechanism when many reflections occur, e.g. in long neutron guides and for long wavelengths. In order to reduce the number of reflections, ballistic neutron guides have been proposed. Usually linear tapered sections are used to enlarge the cross-section and finally, focus the beam to the sample. The disadvantages of linear tapering are (i) an inhomogeneous phase space at the sample position and (ii) a decreasing flux with increasing distance from the exit of the guide. We investigate the properties of parabolic and elliptic tapering for ballistic neutron guides, using the Monte Carlo program McStas with a new guide component dedicated to such geometries. We show that the maximum flux can indeed be shifted away from the exit of the guide. In addition, we explore the possibilities of parabolic and elliptic geometries to create point-like sources for dedicated experimental demands.

  17. Complex geometry and quantum string theory

    International Nuclear Information System (INIS)

    Belavin, A.A.; Knizhnik, V.G.

    1986-01-01

    Summation over closed oriented surfaces of genus p ≥ 2 (p-loop vacuum amplitudes in boson string theory) in the critical dimension D=26 is reduced to integration over the space M_p of complex structures of Riemann surfaces of genus p. The analytic properties of the integration measure as a function of the complex coordinates on M_p are studied. It is shown that the measure multiplied by (det Im τ̂)^13 (τ̂ is the surface period matrix) is the square of the modulus of a function which is holomorphic on M_p and does not vanish anywhere. The function has a second-order pole at infinity of the compactified moduli space M̄_p. These properties define the measure uniquely up to a constant multiple, and this permits one to set up explicit formulae for p=2,3 in terms of the theta-constants. Power and logarithmic divergences, connected with renormalization of the tachyon wave function and of the slope respectively, are involved in the theory. The quantum geometry of critical strings turns out to be a complex geometry.

  18. Error Sonification of a Complex Motor Task

    Directory of Open Access Journals (Sweden)

    Riener Robert

    2011-12-01

    Visual information is the dominant channel for mastering complex motor tasks, so additional information providing augmented feedback should be displayed in modalities other than vision, e.g. hearing. The present work evaluated the potential of error sonification to enhance learning of a rowing-type motor task. In contrast to a control group receiving self-controlled terminal feedback, the experimental group could not significantly reduce spatial errors. Thus, motor learning was not enhanced by error sonification, although participants did benefit from it during training. It seems that the motor task was too slow, resulting in immediate corrections of the movement rather than in an internal representation of the general characteristics of the motor task. Therefore, further studies should examine the impact of error sonification when the general characteristics of the motor task are already known.

  19. Double checking medicines: defence against error or contributory factor?

    Science.gov (United States)

    Armitage, Gerry

    2008-08-01

    The double checking of medicines in health care is a contestable procedure. It occupies an obvious position in health care practice and is understood to be an effective defence against medication error, but the process is variable and the outcomes have not been exposed to testing. This paper presents an appraisal of the process using data from part of a larger study on the contributory factors in medication errors and their reporting. Previous research studies are reviewed; data are analysed from a review of 991 drug error reports and a subsequent series of 40 in-depth interviews with health professionals in an acute hospital in northern England. The incident reports showed that errors occurred despite double checking but that action taken did not appear to investigate the checking process. Most interview participants (34) talked extensively about double checking but believed the process to be inconsistent. Four key categories were apparent: deference to authority, reduction of responsibility, automatic processing and lack of time. Solutions to the problems were also offered, which are discussed with several recommendations. Double checking medicines should be a selective and systematic procedure informed by key principles and encompassing certain behaviours. Psychological research may be instructive in reducing checking errors, but the aviation industry may also have a part to play in increasing error wisdom and reducing risk.

  20. Reducing the sensitivity of IMPT treatment plans to setup errors and range uncertainties via probabilistic treatment planning

    International Nuclear Information System (INIS)

    Unkelbach, Jan; Bortfeld, Thomas; Martin, Benjamin C.; Soukup, Martin

    2009-01-01

    Treatment plans optimized for intensity modulated proton therapy (IMPT) may be very sensitive to setup errors and range uncertainties. If these errors are not accounted for during treatment planning, the dose distribution realized in the patient may be strongly degraded compared to the planned dose distribution. The authors implemented the probabilistic approach to incorporate uncertainties directly into the optimization of an intensity modulated treatment plan. Following this approach, the dose distribution depends on a set of random variables which parameterize the uncertainty, as does the objective function used to optimize the treatment plan. The authors optimize the expected value of the objective function. They investigate IMPT treatment planning regarding range uncertainties and setup errors. They demonstrate that incorporating these uncertainties into the optimization yields qualitatively different treatment plans compared to conventional plans which do not account for uncertainty. The sensitivity of an IMPT plan depends on the dose contributions of individual beam directions. Roughly speaking, steep dose gradients in beam direction make treatment plans sensitive to range errors, while steep lateral dose gradients make plans sensitive to setup errors. More robust treatment plans are obtained by redistributing dose among different beam directions. This can be achieved by the probabilistic approach. In contrast, the safety margin approach as widely applied in photon therapy fails in IMPT and is suitable for handling neither range variations nor setup errors.
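The probabilistic idea, optimizing the expected objective over sampled errors rather than the nominal one, can be sketched in one dimension. Everything below is a hypothetical toy (Gaussian beamlet dose profiles, rigid setup shifts, least-squares objective), not the authors' clinical model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox, n_beam = 40, 12
x = np.linspace(0.0, 1.0, n_vox)
centers = np.linspace(0.1, 0.9, n_beam)
target = ((x > 0.3) & (x < 0.7)).astype(float)    # prescribed dose profile

def dose_matrix(shift):
    """Beamlet dose influence matrix under a rigid setup shift."""
    return np.exp(-((x[:, None] - (centers[None, :] + shift)) / 0.06) ** 2)

# Nominal plan: least-squares beamlet weights assuming zero setup error.
w_nom, *_ = np.linalg.lstsq(dose_matrix(0.0), target, rcond=None)

# Probabilistic plan: minimise the EXPECTED objective over sampled shifts,
# i.e. stack the scenario systems into one least-squares problem.
shifts = rng.normal(0.0, 0.03, 50)
A_stack = np.vstack([dose_matrix(s) for s in shifts])
b_stack = np.tile(target, len(shifts))
w_rob, *_ = np.linalg.lstsq(A_stack, b_stack, rcond=None)

def expected_cost(w):
    return np.mean([np.sum((dose_matrix(s) @ w - target) ** 2) for s in shifts])

print(f"expected cost: nominal {expected_cost(w_nom):.3f}, "
      f"robust {expected_cost(w_rob):.3f}")
```

By construction the robust weights minimise the scenario-averaged objective, so their expected cost can never exceed that of the nominal plan; the interesting clinical question, mirrored in the abstract, is how differently the dose is redistributed.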

  1. DC-Link Voltage Coordinated-Proportional Control for Cascaded Converter With Zero Steady-State Error and Reduced System Type

    DEFF Research Database (Denmark)

    Tian, Yanjun; Loh, Poh Chiang; Deng, Fujin

    2016-01-01

    Cascaded converter is formed by connecting two subconverters together, sharing a common intermediate dc-link voltage. Regulation of this dc-link voltage is frequently realized with a proportional-integral (PI) controller, whose high gain at dc helps to force a zero steady-state tracking error. Such precise tracking is, however, at the expense of increasing the system type, caused by the extra pole at the origin introduced by the PI controller. The overall system may, hence, be tougher to control. To reduce the system type while preserving precise dc-link voltage tracking, this paper proposes a coordinated-proportional control scheme. The proposed scheme can be used with either unidirectional or bidirectional power flow, and has been verified by simulation and experimental results presented in this paper.
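The trade-off described here, a PI integrator forcing zero steady-state error at the price of an extra pole at the origin, shows up even in a minimal discrete-time sketch. The model below is a hypothetical dc-link capacitor fed by the net converter current against a constant load draw; all numerical values are illustrative:

```python
# Illustrative parameters: time step, dc-link capacitance, load current, target voltage.
dt, C, load, v_ref = 1e-4, 1e-3, 5.0, 1.0

def simulate(kp, ki, steps=20_000):
    """Forward-Euler simulation of the dc-link voltage v' = (u - load) / C."""
    v, integ = 0.0, 0.0
    for _ in range(steps):
        err = v_ref - v
        integ += err * dt
        u = kp * err + ki * integ          # ki = 0 -> pure proportional control
        v += dt * (u - load) / C
    return v_ref - v                        # remaining tracking error

e_p = simulate(kp=2.0, ki=0.0)              # P only: residual offset load/kp
e_pi = simulate(kp=2.0, ki=50.0)            # PI: integrator removes the offset
print(f"steady-state error: P {e_p:.4f}, PI {e_pi:.2e}")
```

The proportional loop settles with the classic offset load/kp, while the PI loop drives the error to zero; the paper's contribution is obtaining the latter behaviour without the integrator's added system type.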

  2. Convection in Slab and Spheroidal Geometries

    Science.gov (United States)

    Porter, David H.; Woodward, Paul R.; Jacobs, Michael L.

    2000-01-01

    Three-dimensional numerical simulations of compressible turbulent thermally driven convection, in both slab and spheroidal geometries, are reviewed and analyzed in terms of velocity spectra and mixing-length theory. The same ideal gas model is used in both geometries, and resulting flows are compared. The piecewise-parabolic method (PPM), with either thermal conductivity or photospheric boundary conditions, is used to solve the fluid equations of motion. Fluid motions in both geometries exhibit a Kolmogorov-like k(sup -5/3) range in their velocity spectra. The longest wavelength modes are energetically dominant in both geometries, typically leading to one convection cell dominating the flow. In spheroidal geometry, a dipolar flow dominates the largest scale convective motions. Downflows are intensely turbulent and updrafts are relatively laminar in both geometries. In slab geometry, correlations between temperature and velocity fluctuations, which lead to the enthalpy flux, are fairly independent of depth. In spheroidal geometry this same correlation increases linearly with radius over the inner 70 percent by radius, in which the local pressure scale heights are a sizable fraction of the radius. The effects from the impenetrable boundary conditions in the slab geometry models are confused with the effects from non-local convection. In spheroidal geometry nonlocal effects, due to coherent plumes, are seen as far as several pressure scale heights from the lower boundary and are clearly distinguishable from boundary effects.

  3. Complex and symplectic geometry

    CERN Document Server

    Medori, Costantino; Tomassini, Adriano

    2017-01-01

    This book arises from the INdAM Meeting "Complex and Symplectic Geometry", which was held in Cortona in June 2016. Several leading specialists, including young researchers, in the field of complex and symplectic geometry, present the state of the art of their research on topics such as the cohomology of complex manifolds; analytic techniques in Kähler and non-Kähler geometry; almost-complex and symplectic structures; special structures on complex manifolds; and deformations of complex objects. The work is intended for researchers in these areas.

  4. Initiation to global Finslerian geometry

    CERN Document Server

    Akbar-Zadeh, Hassan

    2006-01-01

    After a brief description of the evolution of thinking on Finslerian geometry starting from Riemann, Finsler, Berwald and Elie Cartan, the book gives a clear and precise treatment of this geometry. The first three chapters develop the basic notions and methods, introduced by the author, to reach the global problems in Finslerian Geometry. The next five chapters are independent of each other, and deal with among others the geometry of generalized Einstein manifolds, the classification of Finslerian manifolds of constant sectional curvatures. They also give a treatment of isometric, affine, p

  5. Human error in maintenance: An investigative study for the factories of the future

    International Nuclear Information System (INIS)

    Dhillon, B S

    2014-01-01

    This paper presents a study of human error in maintenance. Many different aspects of human error in maintenance considered useful for the factories of the future are studied, including facts, figures, and examples; occurrence of maintenance error in equipment life cycle, elements of a maintenance person's time, maintenance environment and the causes for the occurrence of maintenance error, types and typical maintenance errors, common maintainability design errors and useful design guidelines to reduce equipment maintenance errors, maintenance work instructions, and maintenance error analysis methods

  6. Families, nurses and organisations contributing factors to medication administration error in paediatrics: a literature review

    Directory of Open Access Journals (Sweden)

    Albara Alomari

    2015-05-01

    Background: Medication error is the most common adverse event for hospitalised children and can lead to significant harm. Despite decades of research and the implementation of a number of initiatives, error rates continue to rise, particularly those associated with administration. Objectives: The objective of this literature review is to explore the factors involving nurses, families and healthcare systems that impact on medication administration errors in paediatric patients. Design: A review was undertaken of studies that reported on factors that contribute to a rise or fall in medication administration errors, from family, nurse and organisational perspectives. The following databases were searched: Medline, Embase, CINAHL and the Cochrane library. The title, abstract and full article were reviewed for relevance. Articles were excluded if they were not research studies, if they related to medications rather than medication administration errors, or if they referred to medical errors rather than medication errors. Results: A total of 15 studies met the inclusion criteria. The factors contributing to medication administration errors are communication failure between parents and healthcare professionals, nurse workload, failure to adhere to policy and guidelines, interruptions, inexperience and insufficient nurse education from organisations. Strategies reported to reduce errors were double-checking by two nurses, implementing educational sessions, and use of computerised prescribing and barcode administration systems. Yet despite such interventions, errors persist. The review highlighted that families, who have a central role in caring for the child and are therefore key to the administration process, have largely been ignored in research studies relating to medication administration. Conclusions: While there is a consensus about the factors that contribute to errors, sustainable and effective solutions remain elusive. 
To date, families have not

  7. Algebraic geometry in India

    Indian Academy of Sciences (India)

    algebraic geometry but also in related fields like number theory. ... every vector bundle on the affine space is trivial (equivalently, ...) ... bundles on a compact Riemann surface to unitary representations ... differential geometry and topology, and was generalised in ...

  8. Grinding Method and Error Analysis of Eccentric Shaft Parts

    Science.gov (United States)

    Wang, Zhiming; Han, Qiushi; Li, Qiguang; Peng, Baoying; Li, Weihua

    2017-12-01

    Eccentric shaft parts are widely used in RV reducers and various mechanical transmissions, and precision grinding technology for such parts is now in demand. In this paper, the model of the X-C linkage relation for eccentric shaft grinding is studied. By an inversion method, the contour curve of the wheel envelope is deduced, keeping the distance from the wheel to the center of the eccentric circle constant. Simulation software for eccentric shaft grinding was developed and the correctness of the model proved. The influence of the X-axis feed error, the C-axis feed error and the wheel radius error on the grinding process is analyzed, and a corresponding error calculation model is proposed. The simulation analysis provides the basis for contour error compensation.
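The constant-centre-distance condition can be sketched numerically. In the toy model below (all dimensions hypothetical), the wheel-centre position X is solved from keeping the wheel-to-journal centre distance at R_wheel + r_journal, and small X- and C-axis feed errors are propagated to a ground-radius error:

```python
import numpy as np

# Hypothetical geometry: eccentricity, journal radius, wheel radius (mm).
e, r_j, R_w = 3.0, 10.0, 150.0
D = R_w + r_j                            # required wheel-to-journal centre distance

def x_of_c(c):
    """Wheel-centre X position as a function of spindle angle C (rad)."""
    return e * np.cos(c) + np.sqrt(D**2 - (e * np.sin(c))**2)

def ground_radius(c, dx=0.0, dc=0.0):
    """Journal radius actually ground when X and C carry feed errors dx, dc."""
    cx, cy = e * np.cos(c + dc), e * np.sin(c + dc)
    wheel_x = x_of_c(c) + dx
    return np.hypot(wheel_x - cx, cy) - R_w

c = np.linspace(0.0, 2 * np.pi, 361)
err = ground_radius(c, dx=0.005, dc=np.deg2rad(0.01)) - r_j
print(f"radius error over one revolution: "
      f"{err.min()*1e3:.2f} .. {err.max()*1e3:.2f} um")
```

With zero feed errors the ground radius is exactly r_j at every spindle angle, which is the constant-distance property the abstract states; the perturbed run shows how micron-level axis errors map directly into contour error.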

  9. Non-holonomic dynamics and Poisson geometry

    International Nuclear Information System (INIS)

    Borisov, A V; Mamaev, I S; Tsiganov, A V

    2014-01-01

    This is a survey of basic facts presently known about non-linear Poisson structures in the analysis of integrable systems in non-holonomic mechanics. It is shown that by using the theory of Poisson deformations it is possible to reduce various non-holonomic systems to dynamical systems on well-understood phase spaces equipped with linear Lie-Poisson brackets. As a result, not only can different non-holonomic systems be compared, but also fairly advanced methods of Poisson geometry and topology can be used for investigating them. Bibliography: 95 titles

  10. Generalizing optical geometry

    International Nuclear Information System (INIS)

    Jonsson, Rickard; Westman, Hans

    2006-01-01

    We show that by employing the standard projected curvature as a measure of spatial curvature, we can make a certain generalization of optical geometry (Abramowicz M A and Lasota J-P 1997 Class. Quantum Grav. A 14 23-30). This generalization applies to any spacetime that admits a hypersurface orthogonal shearfree congruence of worldlines. This is a somewhat larger class of spacetimes than the conformally static spacetimes assumed in standard optical geometry. In the generalized optical geometry, which in the generic case is time dependent, photons move with unit speed along spatial geodesics and the sideways force experienced by a particle following a spatially straight line is independent of the velocity. Also gyroscopes moving along spatial geodesics do not precess (relative to the forward direction). Gyroscopes that follow a curved spatial trajectory precess according to a very simple law of three-rotation. We also present an inertial force formalism in coordinate representation for this generalization. Furthermore, we show that by employing a new sense of spatial curvature (Jonsson R 2006 Class. Quantum Grav. 23 1) closely connected to Fermat's principle, we can make a more extensive generalization of optical geometry that applies to arbitrary spacetimes. In general this optical geometry will be time dependent, but still geodesic photons move with unit speed and follow lines that are spatially straight in the new sense. Also, the sideways experienced (comoving) force on a test particle following a line that is straight in the new sense will be independent of the velocity.

  11. Finite element method solution of simplified P3 equation for flexible geometry handling

    International Nuclear Information System (INIS)

    Ryu, Eun Hyun; Joo, Han Gyu

    2011-01-01

    In order to efficiently obtain core flux solutions that are much closer to the transport solution than the diffusion solution is, without being limited by the geometry of the core, the simplified P3 (SP3) equation is solved with the finite element method (FEM). A generic mesh generator, GMSH, is used to generate linear and quadratic mesh data. The linear system resulting from the SP3 FEM discretization is solved by Krylov subspace methods (KSM). A symmetric form of the SP3 equation is derived to apply the conjugate gradient method rather than the KSMs for nonsymmetric linear systems. An optional iso-parametric quadratic mapping scheme, which selectively models nonlinear shapes with a quadratic mapping to prevent significant mismatch in local domain volume, is also implemented for efficient handling of arbitrary geometries. The gain in accuracy attainable by the SP3 solution over the diffusion solution is assessed by solving numerous benchmark problems having various core geometries, including the IAEA PWR problems involving rectangular fuels and the Takeda fast reactor problems involving hexagonal fuels. The reference transport solution is produced by the McCARD Monte Carlo code, and the multiplication factor and power distribution errors are assessed. In addition, the effect of quadratic mapping is examined for circular cell problems. It is shown that significant accuracy gain is possible with the SP3 solution for the fast reactor problems, whereas only marginal improvement is noted for thermal reactor problems. The quadratic mapping is also quite effective in handling geometries with curvature. (author)
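The symmetric-form-plus-conjugate-gradient idea can be illustrated on a much simpler SPD system: a 1-D diffusion-removal problem discretized with linear finite elements. This is a sketch of the solver strategy only, not the SP3 discretization itself, and all parameters are arbitrary:

```python
import numpy as np

# 1-D problem -D u'' + s_a u = q on (0,1), u(0)=u(1)=0, linear FEM.
n, D, s_a, q = 100, 1.0, 0.5, 1.0
h = 1.0 / n

# Assemble stiffness + mass (tridiagonal, interior nodes only): SPD system.
main = np.full(n - 1, 2 * D / h + s_a * 2 * h / 3)
off = np.full(n - 2, -D / h + s_a * h / 6)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
b = np.full(n - 1, q * h)                # lumped source vector

def cg(A, b, tol=1e-10):
    """Plain conjugate gradients; valid because A is symmetric positive definite."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(10 * len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

u = cg(A, b)
print(f"peak flux {u.max():.5f}, residual {np.linalg.norm(A @ u - b):.2e}")
```

A nonsymmetric discretization would force a Krylov method such as GMRES or BiCGSTAB instead; symmetrizing the equations, as the abstract describes for SP3, is what makes the cheaper CG iteration admissible.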

  12. Ambient Occlusion Effects for Combined Volumes and Tubular Geometry

    KAUST Repository

    Schott, M.; Martin, T.; Grosset, A. V. P.; Smith, S. T.; Hansen, C. D.

    2013-01-01

    This paper details a method for interactive direct volume rendering that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube-shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The algorithm extends the recently presented directional occlusion shading model to allow the rendering of those geometric shapes in combination with a context-providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. Stream tube geometries are computed using an effective spline-based interpolation and approximation scheme that avoids self-intersection and maintains coherent orientation of the stream tube segments to avoid surface deforming twists. Furthermore, strategies to reduce the geometric and specular aliasing of the stream tubes are discussed.

  13. Ambient Occlusion Effects for Combined Volumes and Tubular Geometry

    KAUST Repository

    Schott, M.

    2013-06-01

    This paper details a method for interactive direct volume rendering that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube-shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The algorithm extends the recently presented directional occlusion shading model to allow the rendering of those geometric shapes in combination with a context-providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. Stream tube geometries are computed using an effective spline-based interpolation and approximation scheme that avoids self-intersection and maintains coherent orientation of the stream tube segments to avoid surface deforming twists. Furthermore, strategies to reduce the geometric and specular aliasing of the stream tubes are discussed.

  14. Development of CAD-Based Geometry Processing Module for a Monte Carlo Particle Transport Analysis Code

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Kwark, Min Su; Shim, Hyung Jin

    2012-01-01

    The Monte Carlo (MC) particle transport analysis for a complex system such as a research reactor, accelerator, or fusion facility may require accurate modeling of the complicated geometry. Its manual modeling by using the text interface of an MC code to define the geometrical objects is tedious, lengthy and error-prone. This problem can be overcome by taking advantage of the modeling capability of the computer aided design (CAD) system. There have been two kinds of approaches to develop MC code systems utilizing the CAD data: the external format conversion and the CAD kernel imbedded MC simulation. The first approach includes several interfacing programs such as McCAD, MCAM, GEOMIT etc. which were developed to automatically convert the CAD data into the MCNP geometry input data. This approach makes the most of the existing MC codes without any modifications, but implies latent data inconsistency due to the difference of the geometry modeling system. In the second approach, a MC code utilizes the CAD data for the direct particle tracking or the conversion to an internal data structure of the constructive solid geometry (CSG) and/or boundary representation (B-rep) modeling with the help of a CAD kernel. MCNP-BRL and OiNC have demonstrated their capabilities of the CAD-based MC simulations. Recently we have developed a CAD-based geometry processing module for the MC particle simulation by using the OpenCASCADE (OCC) library. In the developed module, CAD data can be used for the particle tracking through primitive CAD surfaces (hereafter the CAD-based tracking) or the internal conversion to the CSG data structure. In this paper, the performances of the text-based model, the CAD-based tracking, and the internal CSG conversion are compared by using an in-house MC code, McSIM, equipped with the developed CAD-based geometry processing module

  15. Calculation and simulation on mid-spatial frequency error in continuous polishing

    International Nuclear Information System (INIS)

    Xie Lei; Zhang Yunfan; You Yunfeng; Ma Ping; Liu Yibin; Yan Dingyao

    2013-01-01

    Based on a theoretical model of continuous polishing, the influence of processing parameters on the polishing result was discussed. Possible causes of mid-spatial frequency error in the process were analyzed. The simulation results demonstrated that the low spatial frequency error was mainly caused by a large rotating ratio. The mid-spatial frequency error would decrease as the low spatial frequency error became lower. The regular groove shape was the primary reason for the mid-spatial frequency error. When irregular and fitful grooves were adopted, the mid-spatial frequency error could be lessened. Moreover, the workpiece swing could make the polishing process more uniform and reduce the mid-spatial frequency error caused by the fix-eccentric plane polishing. (authors)

  16. Syntactic and semantic errors in radiology reports associated with speech recognition software.

    Science.gov (United States)

    Ringler, Michael D; Goss, Brian C; Bartholmai, Brian J

    2017-03-01

    Speech recognition software can increase the frequency of errors in radiology reports, which may affect patient care. We retrieved 213,977 speech recognition software-generated reports from 147 different radiologists and proofread them for errors. Errors were classified as "material" if they were believed to alter interpretation of the report. "Immaterial" errors were subclassified as intrusion/omission or spelling errors. The proportion of errors and error type were compared among individual radiologists, imaging subspecialty, and time periods. In all, 20,759 reports (9.7%) contained errors, of which 3992 (1.9%) were material errors. Among immaterial errors, spelling errors were more common than intrusion/omission errors (p < .001). Errors were more common in longer reports, reports reinterpreting results of outside examinations, and procedural studies (all p < .001). Error rate decreased over time (p < .001), which suggests that a quality control program with regular feedback may reduce errors.
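The headline rates can be recomputed directly from the abstract's totals, and the "error rate decreased over time" comparison corresponds to a standard two-proportion z-test. Note the period split used below is hypothetical, introduced only to make the test runnable; only the overall totals come from the abstract:

```python
import math

# Totals reported in the abstract.
reports, with_errors, material = 213_977, 20_759, 3_992
print(f"any-error rate {with_errors / reports:.1%}, "
      f"material-error rate {material / reports:.1%}")

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z-statistic with a pooled standard error."""
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (x1 / n1 - x2 / n2) / se

# HYPOTHETICAL early/late split, for illustration only.
z = two_prop_z(11_000, 100_000, 9_759, 113_977)
print(f"z = {z:.1f}")
```

At sample sizes this large, even a modest drop in the error proportion yields an enormous z-statistic, which is consistent with the strong p-values the study reports.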

  17. Graded geometry and Poisson reduction

    OpenAIRE

    Cattaneo, A S; Zambon, M

    2009-01-01

    The main result of [2] extends the Marsden-Ratiu reduction theorem [4] in Poisson geometry, and is proven by means of graded geometry. In this note we provide the background material about graded geometry necessary for the proof in [2]. Further, we provide an alternative algebraic proof for the main result. ©2009 American Institute of Physics

  18. Geometry of multihadron production

    Energy Technology Data Exchange (ETDEWEB)

    Bjorken, J.D.

    1994-10-01

    This summary talk only reviews a small sample of topics featured at this symposium: Introduction; The Geometry and Geography of Phase space; Space-Time Geometry and HBT; Multiplicities, Intermittency, Correlations; Disoriented Chiral Condensate; Deep Inelastic Scattering at HERA; and Other Contributions.

  19. Geometry of multihadron production

    International Nuclear Information System (INIS)

    Bjorken, J.D.

    1994-10-01

    This summary talk only reviews a small sample of topics featured at this symposium: Introduction; The Geometry and Geography of Phase space; Space-Time Geometry and HBT; Multiplicities, Intermittency, Correlations; Disoriented Chiral Condensate; Deep Inelastic Scattering at HERA; and Other Contributions

  20. Geometry of higher-dimensional black hole thermodynamics

    International Nuclear Information System (INIS)

    Aaman, Jan E.; Pidokrajt, Narit

    2006-01-01

    We investigate thermodynamic curvatures of the Kerr and Reissner-Nordstroem (RN) black holes in spacetime dimensions higher than four. These black holes possess thermodynamic geometries similar to those in four-dimensional spacetime. The thermodynamic geometries are the Ruppeiner geometry and the conformally related Weinhold geometry. The Ruppeiner geometry for a d=5 Kerr black hole is curved and divergent in the extremal limit. For a d≥6 Kerr black hole there is no extremality but the Ruppeiner curvature diverges where one suspects that the black hole becomes unstable. The Weinhold geometry of the Kerr black hole in arbitrary dimension is a flat geometry. For the RN black hole the Ruppeiner geometry is flat in all spacetime dimensions, whereas its Weinhold geometry is curved. In d≥5 the Kerr black hole can possess more than one angular momentum. Finally we discuss the Ruppeiner geometry for the Kerr black hole in d=5 with double angular momenta

  1. On failure of the pruning technique in "error repair in shift-reduce parsers"

    NARCIS (Netherlands)

    Bertsch, E; Nederhof, MJ

    A previous article presented a technique to compute the least-cost error repair by incrementally generating configurations that result from inserting and deleting tokens in a syntactically incorrect input. An additional mechanism to improve the run-time efficiency of this algorithm by pruning some

  2. Reduction in pediatric identification band errors: a quality collaborative.

    Science.gov (United States)

    Phillips, Shannon Connor; Saysana, Michele; Worley, Sarah; Hain, Paul D

    2012-06-01

Accurate and consistent placement of a patient identification (ID) band is used in health care to reduce errors associated with patient misidentification. Multiple safety organizations have devoted time and energy to improving patient ID, but no multicenter improvement collaboratives have shown scalability of previously successful interventions. We hoped to reduce by half the pediatric patient ID band error rate, defined as an absent, illegible, or inaccurate ID band, across a quality improvement learning collaborative of hospitals in 1 year. On the basis of a previously successful single-site intervention, we conducted a self-selected 6-site collaborative to reduce ID band errors in heterogeneous pediatric hospital settings. The collaborative had 3 phases: preparatory work and an employee survey of current practice and barriers, data collection (ID band failure rate), and intervention driven by data and collaborative learning to accelerate change. The collaborative audited 11,377 patients for ID band errors between September 2009 and September 2010. The ID band failure rate decreased from 17% to 4.1% (77% relative reduction). Interventions, including education of frontline staff regarding correct ID bands as a safety strategy; a change to softer ID bands, including "luggage tag" type ID bands for some patients; and partnering with families and patients through education, were applied at all institutions. Over 13 months, a collaborative of pediatric institutions significantly reduced the ID band failure rate. This quality improvement learning collaborative demonstrates that safety improvements tested in a single institution can be disseminated to improve quality of care across large populations of children.

  3. Multi-GNSS signal-in-space range error assessment - Methodology and results

    Science.gov (United States)

    Montenbruck, Oliver; Steigenberger, Peter; Hauschild, André

    2018-06-01

    The positioning accuracy of global and regional navigation satellite systems (GNSS/RNSS) depends on a variety of influence factors. For constellation-specific performance analyses it has become common practice to separate a geometry-related quality factor (the dilution of precision, DOP) from the measurement and modeling errors of the individual ranging measurements (known as user equivalent range error, UERE). The latter is further divided into user equipment errors and contributions related to the space and control segment. The present study reviews the fundamental concepts and underlying assumptions of signal-in-space range error (SISRE) analyses and presents a harmonized framework for multi-GNSS performance monitoring based on the comparison of broadcast and precise ephemerides. The implications of inconsistent geometric reference points, non-common time systems, and signal-specific range biases are analyzed, and strategies for coping with these issues in the definition and computation of SIS range errors are developed. The presented concepts are, furthermore, applied to current navigation satellite systems, and representative results are presented along with a discussion of constellation-specific problems in their determination. Based on data for the January to December 2017 time frame, representative global average root-mean-square (RMS) SISRE values of 0.2 m, 0.6 m, 1 m, and 2 m are obtained for Galileo, GPS, BeiDou-2, and GLONASS, respectively. Roughly two times larger values apply for the corresponding 95th-percentile values. Overall, the study contributes to a better understanding and harmonization of multi-GNSS SISRE analyses and their use as key performance indicators for the various constellations.
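The global-average SIS range error described above is commonly computed by combining the clock error with a weighted radial orbit error plus down-weighted along-track and cross-track components. A minimal sketch in Python, using the commonly quoted GPS (MEO) weight factors; the exact weights are constellation-dependent and the values here are illustrative, not taken from the paper:

```python
import math

def sisre(dr, da, dc, dt_m, w_r=0.98, w_ac=0.141):
    """Global-average signal-in-space range error (metres).

    dr, da, dc : radial, along-track, cross-track orbit errors (m)
    dt_m       : clock error expressed in metres
    w_r, w_ac  : commonly quoted GPS (MEO) weight factors; other
                 constellations/altitudes use different values.
    """
    return math.sqrt((w_r * dr - dt_m) ** 2 + w_ac ** 2 * (da ** 2 + dc ** 2))

# Example: a pure 1 m clock error maps to a 1 m SISRE,
# while along/cross-track orbit errors are strongly down-weighted.
e_clock = sisre(0.0, 0.0, 0.0, 1.0)
e_cross = sisre(0.0, 1.0, 1.0, 0.0)
```

The down-weighting reflects that only the projection of the orbit error onto the user's line of sight matters, averaged over the visible part of the Earth.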

  4. Non-linear instability of DIII-D to error fields

    International Nuclear Information System (INIS)

    La Haye, R.J.; Scoville, J.T.

    1991-10-01

Otherwise stable DIII-D discharges can become nonlinearly unstable to locked modes and disrupt when subjected to a resonant m = 2, n = 1 error field caused by irregular poloidal field coils, i.e. intrinsic field errors. Instability is observed in DIII-D when the magnitude of the radial component of the m = 2, n = 1 error field with respect to the toroidal field is B_r21/B_T ≈ 1.7 × 10⁻⁴. The locked modes triggered by an external error field are aligned with the static error field, and the plasma fluid rotation ceases as a result of the growth of the mode. The triggered locked modes are the precursors of the subsequent plasma disruption. The use of an "n = 1 coil" to partially cancel intrinsic errors, or to increase them, results in a significantly expanded, or reduced, stable operating parameter space. Precise error field measurements have allowed the design of an improved correction coil for DIII-D, the "C-coil", which could further cancel error fields and help to avoid disruptive locked modes. 6 refs., 4 figs.

  5. Re-Normalization Method of Doppler Lidar Signal for Error Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Park, Nakgyu; Baik, Sunghoon; Park, Seungkyu; Kim, Donglyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Dukhyeon [Hanbat National Univ., Daejeon (Korea, Republic of)

    2014-05-15

In this paper, we present a re-normalization method for the fluctuations of Doppler signals caused by various noises, mainly the frequency locking error, in a Doppler lidar system. For the Doppler lidar system, we used an injection-seeded pulsed Nd:YAG laser as the transmitter and an iodine filter as the Doppler frequency discriminator. For the Doppler frequency shift measurement, the transmission ratio using the injection-seeded laser is locked to stabilize the frequency. If the frequency locking system is not perfect, the Doppler signal has some error due to the frequency locking error. The re-normalization of the Doppler signals was performed to reduce this error using an additional laser beam to an iodine cell. We confirmed that the re-normalized Doppler signal is much more stable than the averaged Doppler signal obtained with our calibration method; the standard deviation was reduced to 4.838 × 10⁻³.

  6. Geometry in a dynamical system without space: Hyperbolic Geometry in Kuramoto Oscillator Systems

    Science.gov (United States)

    Engelbrecht, Jan; Chen, Bolun; Mirollo, Renato

Kuramoto oscillator networks have the special property that their time evolution is constrained to lie on 3D orbits of the Möbius group acting on the N-fold torus T^N, which explains the N − 3 constants of motion discovered by Watanabe and Strogatz. The dynamics for phase models can be further reduced to 2D invariant sets in T^(N−1) which have a natural geometry equivalent to the unit disk Δ with the hyperbolic metric. We show that the classic Kuramoto model with order parameter Z1 (the first moment of the oscillator configuration) is a gradient flow in this metric with a unique fixed point on each generic 2D invariant set, corresponding to the hyperbolic barycenter of an oscillator configuration. This gradient property makes the dynamics especially easy to analyze. We exhibit several new families of Kuramoto oscillator models which reduce to gradient flows in this metric; some of these have a richer fixed point structure, including non-hyperbolic fixed points associated with fixed point bifurcations. Work supported by NSF DMS 1413020.
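The role of the order parameter Z1 in the classic Kuramoto model can be illustrated with a small simulation: for identical-frequency oscillators, the flow drives |Z1| toward 1, consistent with the gradient-flow picture above. A sketch with Euler integration and illustrative parameters (not the authors' code):

```python
import cmath
import math
import random

def order_parameter(thetas):
    """First moment Z1 of the phase configuration."""
    return sum(cmath.exp(1j * t) for t in thetas) / len(thetas)

def kuramoto_step(thetas, coupling, dt):
    """One Euler step of the identical-frequency Kuramoto model,
    written via Z1: dtheta_i/dt = K * Im(Z1 * exp(-i theta_i))."""
    z = order_parameter(thetas)
    return [t + dt * coupling * (z * cmath.exp(-1j * t)).imag for t in thetas]

random.seed(0)
thetas = [random.uniform(-math.pi, math.pi) for _ in range(50)]
r0 = abs(order_parameter(thetas))          # incoherent start: |Z1| small
for _ in range(2000):
    thetas = kuramoto_step(thetas, coupling=1.0, dt=0.05)
r1 = abs(order_parameter(thetas))          # near full synchrony: |Z1| close to 1
```

Because the dynamics is a gradient flow on each generic invariant set, a generic initial configuration relaxes monotonically toward the synchronized fixed point.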

  7. Effects of generation time on spray aerosol transport and deposition in models of the mouth-throat geometry.

    Science.gov (United States)

    Worth Longest, P; Hindle, Michael; Das Choudhuri, Suparna

    2009-06-01

    For most newly developed spray aerosol inhalers, the generation time is a potentially important variable that can be fully controlled. The objective of this study was to determine the effects of spray aerosol generation time on transport and deposition in a standard induction port (IP) and more realistic mouth-throat (MT) geometry. Capillary aerosol generation (CAG) was selected as a representative system in which spray momentum was expected to significantly impact deposition. Sectional and total depositions in the IP and MT geometries were assessed at a constant CAG flow rate of 25 mg/sec for aerosol generation times of 1, 2, and 4 sec using both in vitro experiments and a previously developed computational fluid dynamics (CFD) model. Both the in vitro and numerical results indicated that extending the generation time of the spray aerosol, delivered at a constant mass flow rate, significantly reduced deposition in the IP and more realistic MT geometry. Specifically, increasing the generation time of the CAG system from 1 to 4 sec reduced the deposition fraction in the IP and MT geometries by approximately 60 and 33%, respectively. Furthermore, the CFD predictions of deposition fraction were found to be in good agreement with the in vitro results for all times considered in both the IP and MT geometries. The numerical results indicated that the reduction in deposition fraction over time was associated with temporal dissipation of what was termed the spray aerosol "burst effect." Based on these results, increasing the spray aerosol generation time, at a constant mass flow rate, may be an effective strategy for reducing deposition in the standard IP and in more realistic MT geometries.

  8. Lectures on Symplectic Geometry

    CERN Document Server

    Silva, Ana Cannas

    2001-01-01

    The goal of these notes is to provide a fast introduction to symplectic geometry for graduate students with some knowledge of differential geometry, de Rham theory and classical Lie groups. This text addresses symplectomorphisms, local forms, contact manifolds, compatible almost complex structures, Kaehler manifolds, hamiltonian mechanics, moment maps, symplectic reduction and symplectic toric manifolds. It contains guided problems, called homework, designed to complement the exposition or extend the reader's understanding. There are by now excellent references on symplectic geometry, a subset of which is in the bibliography of this book. However, the most efficient introduction to a subject is often a short elementary treatment, and these notes attempt to serve that purpose. This text provides a taste of areas of current research and will prepare the reader to explore recent papers and extensive books on symplectic geometry where the pace is much faster. For this reprint numerous corrections and cl...

  9. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    Science.gov (United States)

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

Medical errors are the manifestation of defects occurring in medical processes. Extracting and identifying these defects as medical error factors is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task that requires an analyst with a professional medical background; a method is needed to extract medical error factors while reducing the extraction difficulty. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, extraction of the error factors, and identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted and then related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Compared to BPNN, partial least squares regression, and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy and could promptly identify the error factors from the error-related items. The combination of error-related items, their different levels, and the GA-BPNN model was proposed as an error-factor identification technology that can automatically identify medical error factors.

  10. A 3D transport-based core analysis code for research reactors with unstructured geometry

    International Nuclear Information System (INIS)

    Zhang, Tengfei; Wu, Hongchun; Zheng, Youqi; Cao, Liangzhi; Li, Yunzhao

    2013-01-01

Highlights: • A core analysis code package based on 3D neutron transport calculation in complex geometry is developed. • Fine treatments of flux mapping, control rod effects and isotope depletion are modeled. • The code is shown to be highly accurate and capable of handling flexible operational cases for research reactors. - Abstract: As an effort to enhance the accuracy in simulating the operations of research reactors, a 3D transport core analysis code system named REFT was developed. HELIOS is employed due to its flexibility in describing complex geometry. A 3D triangular nodal S_N transport solver, DNTR, endows the package with the capability of modeling cores with unstructured-geometry assemblies. A series of dedicated methods were introduced to meet the requirements of research reactor simulations. Afterwards, to make it more user friendly, a graphical user interface was also developed for REFT. In order to validate the developed code system, the calculated results were compared with experimental results. The numerical and experimental results are in close agreement, with relative errors in k_eff below 0.5%. Results of depletion calculations were also verified by comparing them with experimental data, and acceptable consistency was observed.

  11. Spectral Green’s function nodal method for multigroup S_N problems with anisotropic scattering in slab-geometry non-multiplying media

    International Nuclear Information System (INIS)

    Menezes, Welton A.; Filho, Hermes Alves; Barros, Ricardo C.

    2014-01-01

Highlights: • Fixed-source S_N transport problems. • Energy multigroup model. • Anisotropic scattering. • Slab-geometry spectral nodal method. - Abstract: A generalization of the spectral Green’s function (SGF) method is developed for multigroup, fixed-source, slab-geometry discrete ordinates (S_N) problems with anisotropic scattering. The offered SGF method with the one-node block inversion (NBI) iterative scheme converges numerical solutions that are completely free from spatial truncation errors for multigroup, slab-geometry S_N problems with scattering anisotropy of order L, provided L < N. As a coarse-mesh numerical method, the SGF method generates numerical solutions that generally do not give detailed information on the problem solution profile, as the grid points can be located considerably away from each other. Therefore, we describe in this paper a technique for the spatial reconstruction of the coarse-mesh solution generated by the multigroup SGF method. Numerical results are given to illustrate the method’s accuracy.

  12. Error of image saturation in the structured-light method.

    Science.gov (United States)

    Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin

    2018-01-01

    In the phase-measuring structured-light method, image saturation will induce large phase errors. Usually, by selecting proper system parameters (such as the phase-shift number, exposure time, projection intensity, etc.), the phase error can be reduced. However, due to lack of a complete theory of phase error, there is no rational principle or basis for the selection of the optimal system parameters. For this reason, the phase error due to image saturation is analyzed completely, and the effects of the two main factors, including the phase-shift number and saturation degree, on the phase error are studied in depth. In addition, the selection of optimal system parameters is discussed, including the proper range and the selection principle of the system parameters. The error analysis and the conclusion are verified by simulation and experiment results, and the conclusion can be used for optimal parameter selection in practice.
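The interplay between phase-shift number, saturation, and phase error can be reproduced with a toy model of the standard N-step phase-shifting algorithm: clipping the fringe intensities at a saturation level perturbs the recovered phase. A sketch with illustrative values (not the paper's parameters):

```python
import math

def recover_phase(intensities, n):
    """Standard N-step phase-shifting retrieval from n equally
    spaced shifts: phi = atan2(-sum I_k sin(d_k), sum I_k cos(d_k))."""
    s = sum(I * math.sin(2 * math.pi * k / n) for k, I in enumerate(intensities))
    c = sum(I * math.cos(2 * math.pi * k / n) for k, I in enumerate(intensities))
    return math.atan2(-s, c)

def phase_error(true_phase, n, a=0.6, b=0.6, sat=1.0):
    """Absolute phase error when fringe images I_k = a + b*cos(phi + d_k)
    are clipped at the camera saturation level `sat`."""
    raw = [a + b * math.cos(true_phase + 2 * math.pi * k / n) for k in range(n)]
    clipped = [min(I, sat) for I in raw]
    rec = recover_phase(clipped, n)
    return abs((rec - true_phase + math.pi) % (2 * math.pi) - math.pi)

err_unsaturated = phase_error(0.7, 4, sat=10.0)  # no clipping: exact recovery
err_saturated = phase_error(0.7, 4, sat=1.0)     # clipped peak: nonzero error
```

With a + b above the saturation level, the fringe peaks are flattened and the recovered phase deviates, which is the error mechanism the paper analyzes; raising the phase-shift number or lowering projection intensity mitigates it.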

  13. Complex differential geometry

    CERN Document Server

    Zheng, Fangyang

    2002-01-01

    The theory of complex manifolds overlaps with several branches of mathematics, including differential geometry, algebraic geometry, several complex variables, global analysis, topology, algebraic number theory, and mathematical physics. Complex manifolds provide a rich class of geometric objects, for example the (common) zero locus of any generic set of complex polynomials is always a complex manifold. Yet complex manifolds behave differently than generic smooth manifolds; they are more coherent and fragile. The rich yet restrictive character of complex manifolds makes them a special and interesting object of study. This book is a self-contained graduate textbook that discusses the differential geometric aspects of complex manifolds. The first part contains standard materials from general topology, differentiable manifolds, and basic Riemannian geometry. The second part discusses complex manifolds and analytic varieties, sheaves and holomorphic vector bundles, and gives a brief account of the surface classifi...

  14. Computational synthetic geometry

    CERN Document Server

    Bokowski, Jürgen

    1989-01-01

    Computational synthetic geometry deals with methods for realizing abstract geometric objects in concrete vector spaces. This research monograph considers a large class of problems from convexity and discrete geometry including constructing convex polytopes from simplicial complexes, vector geometries from incidence structures and hyperplane arrangements from oriented matroids. It turns out that algorithms for these constructions exist if and only if arbitrary polynomial equations are decidable with respect to the underlying field. Besides such complexity theorems a variety of symbolic algorithms are discussed, and the methods are applied to obtain new mathematical results on convex polytopes, projective configurations and the combinatorics of Grassmann varieties. Finally algebraic varieties characterizing matroids and oriented matroids are introduced providing a new basis for applying computer algebra methods in this field. The necessary background knowledge is reviewed briefly. The text is accessible to stud...

  15. Learning mechanisms to limit medication administration errors.

    Science.gov (United States)

    Drach-Zahavy, Anat; Pud, Dorit

    2010-04-01

This paper is a report of a study conducted to identify and test the effectiveness of learning mechanisms applied by the nursing staff of hospital wards as a means of limiting medication administration errors. Since the influential report 'To Err Is Human', research has emphasized the role of team learning in reducing medication administration errors. Nevertheless, little is known about the mechanisms underlying team learning. Thirty-two hospital wards were randomly recruited. Data were collected during 2006 in Israel by a multi-method (observations, interviews and administrative data), multi-source (head nurses, bedside nurses) approach. Medication administration error was defined as any deviation from procedures, policies and/or best practices for medication administration, and was identified using semi-structured observations of nurses administering medication. Organizational learning was measured using semi-structured interviews with head nurses, and the previous year's reported medication administration errors were assessed using administrative data. The interview data revealed four learning mechanism patterns employed in an attempt to learn from medication administration errors: integrated, non-integrated, supervisory and patchy learning. Regression analysis results demonstrated that whereas the integrated pattern of learning mechanisms was associated with decreased errors, the non-integrated pattern was associated with increased errors. Supervisory and patchy learning mechanisms were not associated with errors. Superior learning mechanisms are those that represent the whole cycle of team learning, are enacted by nurses who administer medications to patients, and emphasize a system approach to data analysis instead of analysis of individual cases.

  16. Designs and finite geometries

    CERN Document Server

    1996-01-01

    Designs and Finite Geometries brings together in one place important contributions and up-to-date research results in this important area of mathematics. Designs and Finite Geometries serves as an excellent reference, providing insight into some of the most important research issues in the field.

  17. Error modeling for surrogates of dynamical systems using machine learning

    Science.gov (United States)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-12-01

A machine-learning-based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests, LASSO) to map a large set of inexpensively computed 'error indicators' (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering), and subsequently constructs a 'local' regression model to predict the time-instantaneous error within each identified region of feature space. We consider two uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance, and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations. The reduced-order models used in this work entail application of trajectory piecewise linearization with proper orthogonal decomposition. When the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
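The core idea, mapping cheap error indicators to a prediction of the surrogate-model error, can be sketched with synthetic data. Here ordinary least squares stands in for the random-forest/LASSO regressors used in the paper, and all names and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training set: each row holds 'error indicators' (features)
# produced by a surrogate model at some time instance; the target is the
# true surrogate error, computed offline from paired high-fidelity and
# surrogate simulations.
n_train, n_feat = 200, 5
X = rng.normal(size=(n_train, n_feat))
w_true = np.array([0.8, -0.3, 0.5, 0.0, 0.1])   # hypothetical ground truth
err = X @ w_true + 0.01 * rng.normal(size=n_train)

# Fit the regression error model (plain least squares in place of the
# paper's random forests / LASSO).
w, *_ = np.linalg.lstsq(X, err, rcond=None)

# Use (1): correct the surrogate QoI prediction at a new time instance
# by subtracting the predicted error.
x_new = rng.normal(size=n_feat)
predicted_error = x_new @ w
```

The paper's "local" refinement would first cluster the feature space and fit one such model per region; the sketch above is the single-region case.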

  18. Improving image quality in Electrical Impedance Tomography (EIT) using Projection Error Propagation-based Regularization (PEPR) technique: A simulation study

    Directory of Open Access Journals (Sweden)

    Tushar Kanti Bera

    2011-03-01

A Projection Error Propagation-based Regularization (PEPR) method is proposed and the reconstructed image quality is improved in Electrical Impedance Tomography (EIT). A projection error is produced due to the misfit of the calculated and measured data in the reconstruction process. The variation of the projection error is integrated with the response matrix in each iteration and the reconstruction is carried out in EIDORS. The PEPR method is studied with simulated boundary data for different inhomogeneity geometries. Simulated results demonstrate that the PEPR technique improves image reconstruction precision in EIDORS and hence it can be successfully implemented to increase the reconstruction accuracy in EIT. doi:10.5617/jeb.158. J Electr Bioimp, vol. 2, pp. 2-12, 2011

  19. d-geometries revisited

    CERN Document Server

    Ceresole, Anna; Gnecchi, Alessandra; Marrani, Alessio

    2013-01-01

We analyze some properties of the four-dimensional supergravity theories which originate from five dimensions upon reduction. They generalize to N>2 extended supersymmetries the d-geometries with cubic prepotentials, familiar from N=2 special Kähler geometry. We emphasize the role of a suitable parametrization of the scalar fields and the corresponding triangular symplectic basis. We also consider applications to the first order flow equations for non-BPS extremal black holes.

  20. Digital Particle Image Velocimetry: Partial Image Error (PIE)

    International Nuclear Information System (INIS)

    Anandarajah, K; Hargrave, G K; Halliwell, N A

    2006-01-01

This paper quantifies the errors due to partial imaging of seeding particles which occur at the edges of interrogation regions in Digital Particle Image Velocimetry (DPIV). Hitherto, in the scientific literature the effect of these partial images has been assumed to be negligible. The results show that the error is significant even at a commonly used interrogation region size of 32 x 32 pixels. If correlation of interrogation region sizes of 16 x 16 pixels and smaller is attempted, the error which occurs can preclude meaningful results being obtained. In order to reduce the error, normalisation of the correlation peak values is necessary. The paper introduces Normalisation by Signal Strength (NSS) as the preferred means of normalisation for optimum accuracy. In addition, it is shown that NSS increases the dynamic range of DPIV

  1. Analysis of error in Monte Carlo transport calculations

    International Nuclear Information System (INIS)

    Booth, T.E.

    1979-01-01

The Monte Carlo method for neutron transport calculations suffers, in part, because of the inherent statistical errors associated with the method. Without an estimate of these errors in advance of the calculation, it is difficult to decide what estimator and biasing scheme to use. Recently, integral equations have been derived that, when solved, predict errors in Monte Carlo calculations in nonmultiplying media. The present work allows error prediction in nonanalog Monte Carlo calculations of multiplying systems, even when supercritical. Nonanalog techniques such as biased kernels, particle splitting, and Russian roulette are incorporated. The equations derived here allow prediction of how much a specific variance reduction technique reduces the number of histories required, to be weighed against the change in time required for calculation of each history. 1 figure, 1 table
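One of the nonanalog techniques mentioned, Russian roulette, can be sketched in a few lines: a low-weight history is killed with some probability, and survivors have their weight raised so the expected weight is preserved, which is what keeps the estimator unbiased. A minimal illustration (threshold and survival weight are arbitrary, not from the paper):

```python
import random

def russian_roulette(weight, threshold=0.1, survive_weight=0.5, rng=random.random):
    """Weight-preserving Russian roulette.

    Histories above `threshold` continue unchanged. Below it, the
    history survives with probability weight/survive_weight and
    continues at survive_weight, otherwise it is killed (weight 0).
    Expected weight is survive_weight * (weight/survive_weight) = weight,
    so the mean is unchanged while low-weight histories are culled.
    """
    if weight >= threshold:
        return weight
    if rng() < weight / survive_weight:
        return survive_weight
    return 0.0
```

Playing roulette on many low-weight histories trades a small variance increase per history for far fewer histories tracked, exactly the cost/benefit balance the error-prediction equations in the abstract are meant to quantify.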

  2. Quality of IT service delivery — Analysis and framework for human error prevention

    KAUST Repository

    Shwartz, L.

    2010-12-01

In this paper, we address the problem of reducing the occurrence of human errors that cause service interruptions in IT Service Support and Delivery operations. Analysis of a large volume of service interruption records revealed that more than 21% of interruptions were caused by human error. We focus on Change Management, the process with the largest risk of human error, and identify the main instances of human errors as the 4 Wrongs: request, time, configuration item, and command. Analysis of change records revealed that human-error prevention by partial automation is highly relevant. We propose the HEP Framework, a framework for execution of IT Service Delivery operations that reduces human error by addressing the 4 Wrongs using content integration, contextualization of operation patterns, partial automation of command execution, and controlled access to resources.

  3. Geometry success in 20 minutes a day

    CERN Document Server

    LLC, LearningExpress

    2014-01-01

    Whether you're new to geometry or just looking for a refresher, Geometry Success in 20 Minutes a Day offers a 20-step lesson plan that provides quick and thorough instruction in practical, critical skills. Stripped of unnecessary math jargon but bursting with geometry essentials, Geometry Success in 20 Minutes a Day: Covers all vital geometry skills, from the basic building blocks of geometry to ratio, proportion, and similarity to trigonometry and beyond Provides hundreds of practice exercises in test format Applies geometr

  4. A general three-dimensional parametric geometry of the native aortic valve and root for biomechanical modeling.

    Science.gov (United States)

    Haj-Ali, Rami; Marom, Gil; Ben Zekry, Sagit; Rosenfeld, Moshe; Raanani, Ehud

    2012-09-21

The complex three-dimensional (3D) geometry of the native tricuspid aortic valve (AV) is represented by select parametric curves allowing for a general construction and representation of the 3D-AV structure including the cusps, commissures and sinuses. The proposed general mathematical description is performed by using three independent parametric curves, two for the cusp and one for the sinuses. These curves are used to generate different surfaces that form the structure of the AV. Additional dependent curves are also generated and utilized in this process, such as the joint curve between the cusps and the sinuses. The model's feasibility to generate patient-specific parametric geometry is examined against 3D-transesophageal echocardiogram (3D-TEE) measurements from a non-pathological AV. A computational finite-element (FE) mesh can then be easily constructed from these surfaces. Examples are given for constructing several 3D-AV geometries by estimating the needed parameters from echocardiographic measurements. The average distance (error) between the calculated geometry and the 3D-TEE measurements was only 0.78 ± 0.63 mm. The proposed general 3D parametric method is very effective in quantitatively representing a wide range of native AV structures, with and without pathology. It can also facilitate a methodical quantitative investigation over the effect of pathology and mechanical loading on these major AV parameters. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Prevention of prescription errors by computerized, on-line, individual patient related surveillance of drug order entry.

    Science.gov (United States)

    Oliven, A; Zalman, D; Shilankov, Y; Yeshurun, D; Odeh, M

    2002-01-01

Computerized prescription of drugs is expected to reduce the number of many preventable drug ordering errors. In the present study we evaluated the usefulness of a computerized drug order entry (CDOE) system in reducing prescription errors. A department of internal medicine using a comprehensive CDOE, which also included patient-related drug-laboratory, drug-disease and drug-allergy on-line surveillance, was compared to a similar department in which drug orders were handwritten. CDOE reduced prescription errors to 25-35%. The causes of errors remained similar, and most errors, in both departments, were associated with abnormal renal function and electrolyte balance. Residual errors remaining in the CDOE-using department were due to handwriting on the typed order, failure to enter patients' diseases, and system failures. The use of CDOE was associated with a significant reduction in mean hospital stay and in the number of changes performed in the prescription. The findings of this study both quantify the impact of comprehensive CDOE on prescription errors and delineate the causes for remaining errors.

  6. Lectures on coarse geometry

    CERN Document Server

    Roe, John

    2003-01-01

    Coarse geometry is the study of spaces (particularly metric spaces) from a 'large scale' point of view, so that two spaces that look the same from a great distance are actually equivalent. This point of view is effective because it is often true that the relevant geometric properties of metric spaces are determined by their coarse geometry. Two examples of important uses of coarse geometry are Gromov's beautiful notion of a hyperbolic group and Mostow's proof of his famous rigidity theorem. The first few chapters of the book provide a general perspective on coarse structures. Even when only metric coarse structures are in view, the abstract framework brings the same simplification as does the passage from epsilons and deltas to open sets when speaking of continuity. The middle section reviews notions of negative curvature and rigidity. Modern interest in large scale geometry derives in large part from Mostow's rigidity theorem and from Gromov's subsequent 'large scale' rendition of the crucial properties of n...

  7. Introduction to tropical geometry

    CERN Document Server

    Maclagan, Diane

    2015-01-01

    Tropical geometry is a combinatorial shadow of algebraic geometry, offering new polyhedral tools to compute invariants of algebraic varieties. It is based on tropical algebra, where the sum of two numbers is their minimum and the product is their sum. This turns polynomials into piecewise-linear functions, and their zero sets into polyhedral complexes. These tropical varieties retain a surprising amount of information about their classical counterparts. Tropical geometry is a young subject that has undergone a rapid development since the beginning of the 21st century. While establishing itself as an area in its own right, deep connections have been made to many branches of pure and applied mathematics. This book offers a self-contained introduction to tropical geometry, suitable as a course text for beginning graduate students. Proofs are provided for the main results, such as the Fundamental Theorem and the Structure Theorem. Numerous examples and explicit computations illustrate the main concepts. Each of t...
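
    The min-plus arithmetic described above is easy to state concretely. A minimal sketch (illustrative only): the tropical sum of two numbers is their minimum, the tropical product is ordinary addition, and a tropical polynomial therefore evaluates to a piecewise-linear function:

```python
def trop_add(a, b):
    return min(a, b)   # tropical sum: minimum

def trop_mul(a, b):
    return a + b       # tropical product: ordinary addition

def trop_poly(coeffs, x):
    """Evaluate min_i (c_i + i*x), the tropicalization of the classical
    polynomial sum_i c_i * x^i (coeffs[i] is the coefficient of x^i)."""
    return min(c + i * x for i, c in enumerate(coeffs))

# "x^2 + 3x + 1" tropicalizes to min(2x, x + 3, 1), piecewise linear in x.
```

The points where the minimum is attained by two terms at once trace out the tropical variety of the polynomial.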

  8. Geometry Euclid and beyond

    CERN Document Server

    Hartshorne, Robin

    2000-01-01

    In recent years, I have been teaching a junior-senior-level course on the classical geometries. This book has grown out of that teaching experience. I assume only high-school geometry and some abstract algebra. The course begins in Chapter 1 with a critical examination of Euclid's Elements. Students are expected to read concurrently Books I-IV of Euclid's text, which must be obtained separately. The remainder of the book is an exploration of questions that arise naturally from this reading, together with their modern answers. To shore up the foundations we use Hilbert's axioms. The Cartesian plane over a field provides an analytic model of the theory, and conversely, we see that one can introduce coordinates into an abstract geometry. The theory of area is analyzed by cutting figures into triangles. The algebra of field extensions provides a method for deciding which geometrical constructions are possible. The investigation of the parallel postulate leads to the various non-Euclidean geometries. And ...

  9. Systematic errors in digital volume correlation due to the self-heating effect of a laboratory x-ray CT scanner

    International Nuclear Information System (INIS)

    Wang, B; Pan, B; Tao, R; Lubineau, G

    2017-01-01

    The use of digital volume correlation (DVC) in combination with laboratory x-ray computed tomography (CT) for full-field internal 3D deformation measurement of opaque materials has flourished in recent years. During x-ray tomographic imaging, the heat generated by the x-ray tube changes the imaging geometry of the x-ray scanner and introduces noticeable errors into DVC measurements. In this work, to provide practical guidance for high-accuracy DVC measurement, the errors in displacements and strains measured by DVC due to the self-heating effect of a commercially available x-ray scanner were experimentally investigated. The errors were characterized by performing simple rescan tests with different scan durations. The results indicate that the maximum strain errors associated with the self-heating of the x-ray scanner exceed 400 µε. Possible approaches for minimizing or correcting these displacement and strain errors are discussed. Finally, a series of translation and uniaxial compression tests were performed, in which strain errors were detected and then removed using a pre-established artificial dilatational strain-time curve. Experimental results demonstrate the efficacy and accuracy of the proposed strain error correction approach. (paper)

  10. Systematic errors in digital volume correlation due to the self-heating effect of a laboratory x-ray CT scanner

    KAUST Repository

    Wang, B

    2017-02-15

    The use of digital volume correlation (DVC) in combination with laboratory x-ray computed tomography (CT) for full-field internal 3D deformation measurement of opaque materials has flourished in recent years. During x-ray tomographic imaging, the heat generated by the x-ray tube changes the imaging geometry of the x-ray scanner and introduces noticeable errors into DVC measurements. In this work, to provide practical guidance for high-accuracy DVC measurement, the errors in displacements and strains measured by DVC due to the self-heating effect of a commercially available x-ray scanner were experimentally investigated. The errors were characterized by performing simple rescan tests with different scan durations. The results indicate that the maximum strain errors associated with the self-heating of the x-ray scanner exceed 400 µε. Possible approaches for minimizing or correcting these displacement and strain errors are discussed. Finally, a series of translation and uniaxial compression tests were performed, in which strain errors were detected and then removed using a pre-established artificial dilatational strain-time curve. Experimental results demonstrate the efficacy and accuracy of the proposed strain error correction approach.

  11. Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant

    Directory of Open Access Journals (Sweden)

    Mehdi Jahangiri

    2016-03-01

    Conclusion: The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in PTW system. Some suggestions to reduce the likelihood of errors, especially in the field of modifying the performance shaping factors and dependencies among tasks are provided.

  12. Error performance analysis in K-tier uplink cellular networks using a stochastic geometric approach

    KAUST Repository

    Afify, Laila H.

    2015-09-14

    In this work, we develop an analytical paradigm to analyze the average symbol error probability (ASEP) performance of uplink traffic in a multi-tier cellular network. The analysis is based on the recently developed Equivalent-in-Distribution approach that utilizes stochastic geometric tools to account for the network geometry in the performance characterization. Different from the other stochastic geometry models adopted in the literature, the developed analysis accounts for important communication system parameters and goes beyond signal-to-interference-plus-noise ratio characterization. That is, the presented model accounts for the modulation scheme, constellation type, and signal recovery techniques to model the ASEP. To this end, we derive single integral expressions for the ASEP for different modulation schemes due to aggregate network interference. Finally, all theoretical findings of the paper are verified via Monte Carlo simulations.
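
    The abstract's final step, verifying analytical symbol-error expressions by Monte Carlo simulation, can be illustrated on a much simpler toy case (not the paper's multi-tier stochastic-geometry model): BPSK over an AWGN channel, where the exact symbol error probability is Q(sqrt(2·SNR)).

```python
import math, random

def q_function(x):
    # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_sep_monte_carlo(snr_linear, trials=200_000, seed=1):
    """Empirical symbol-error probability of BPSK in AWGN
    (toy stand-in for the paper's multi-tier uplink analysis)."""
    rng = random.Random(seed)
    sigma = math.sqrt(1.0 / (2.0 * snr_linear))  # noise std for unit symbol energy
    errors = 0
    for _ in range(trials):
        symbol = rng.choice((-1.0, 1.0))
        received = symbol + rng.gauss(0.0, sigma)
        if (received >= 0) != (symbol > 0):
            errors += 1
    return errors / trials

snr = 10 ** (6 / 10)                       # 6 dB, chosen for illustration
theory = q_function(math.sqrt(2 * snr))    # exact BPSK SEP
simulated = bpsk_sep_monte_carlo(snr)      # should agree to within MC noise
```

The same agreement check, simulation tracking a closed-form expression, is what validates the single-integral ASEP results in the paper, just with a far richer interference model.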

  13. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse; Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  14. Geometry characteristics modeling and process optimization in coaxial laser inside wire cladding

    Science.gov (United States)

    Shi, Jianjun; Zhu, Ping; Fu, Geyan; Shi, Shihong

    2018-05-01

    Coaxial laser inside wire cladding is very promising, as it has a very high efficiency and a consistent interaction between the laser and the wire. In this paper, the energy and mass conservation laws and a regression algorithm are used together to establish mathematical models of the relationship between the layer geometry characteristics (width, height and cross-section area) and the process parameters (laser power, scanning velocity and wire feeding speed). Over the selected parameter ranges, the predicted values from the models are compared with the experimentally measured results; minor errors exist, but both reflect the same regularity. From the models it can be seen that the width of the cladding layer is proportional to both the laser power and the wire feeding speed, while it first increases and then decreases with increasing scanning velocity. The height of the cladding layer is proportional to the scanning velocity and feeding speed and inversely proportional to the laser power. The cross-section area increases with increasing feeding speed and decreasing scanning velocity. Using the mathematical models, the geometry characteristics of the cladding layer can be predicted from known process parameters; conversely, the process parameters can be calculated from targeted geometry characteristics. The models are also suitable for the multi-layer forming process. Using the optimized process parameters calculated from the models, a 45 mm-high thin-wall part was formed with smooth side surfaces.
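
    The regression step above maps process parameters to layer geometry. A hedged sketch (the linear form, coefficients, and parameter values below are invented for illustration; the paper's actual model terms are not given in the abstract): fit a least-squares model of layer width against laser power, scanning velocity, and wire feed speed.

```python
def fit_linear(X, y):
    """Least-squares fit y ≈ b0 + b1*x1 + ... via the normal equations,
    solved by Gaussian elimination (adequate for a few predictors)."""
    rows = [[1.0] + list(x) for x in X]
    n = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    for i in range(n):                      # forward elimination with pivoting
        p = max(range(i, n), key=lambda k: abs(A[k][i]))
        A[i], A[p] = A[p], A[i]
        c[i], c[p] = c[p], c[i]
        for k in range(i + 1, n):
            f = A[k][i] / A[i][i]
            for j in range(i, n):
                A[k][j] -= f * A[i][j]
            c[k] -= f * c[i]
    b = [0.0] * n
    for i in reversed(range(n)):            # back substitution
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, n))) / A[i][i]
    return b

# Synthetic data following an assumed trend: width grows with power and
# feed speed, falls with scanning velocity (illustrative numbers only).
data = [(p, v, f) for p in (800, 1000, 1200) for v in (4, 6, 8) for f in (10, 14, 18)]
width = [0.5 + 0.002 * p - 0.1 * v + 0.05 * f for p, v, f in data]
b0, b_power, b_speed, b_feed = fit_linear(data, width)
```

With a fitted model in hand, the forward use (predict geometry from parameters) is direct evaluation, and the inverse use (parameters for a target geometry) is solving the same equations for the unknowns.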

  15. On systematic and statistic errors in radionuclide mass activity estimation procedure

    International Nuclear Information System (INIS)

    Smelcerovic, M.; Djuric, G.; Popovic, D.

    1989-01-01

    One of the most important requirements during nuclear accidents is the fast estimation of the mass activity of the radionuclides that suddenly, and without control, reach the environment. The paper points to systematic errors in the procedures of sampling, sample preparation and the measurement itself that contribute to a high degree to the total mass activity evaluation error. Statistical errors in gamma spectrometry, as well as in total mass alpha and beta activity evaluation, are also discussed. Besides, some of the possible sources of errors in the partial mass activity evaluation for some of the radionuclides are presented. The contribution of these errors to the total mass activity evaluation error is estimated, and procedures that could possibly reduce it are discussed (author)

  16. Basic algebraic geometry, v.2

    CERN Document Server

    Shafarevich, Igor Rostislavovich

    1994-01-01

    Shafarevich Basic Algebraic Geometry 2 The second edition of Shafarevich's introduction to algebraic geometry is in two volumes. The second volume covers schemes and complex manifolds, generalisations in two different directions of the affine and projective varieties that form the material of the first volume. Two notable additions in this second edition are the section on moduli spaces and representable functors, motivated by a discussion of the Hilbert scheme, and the section on Kähler geometry. The book ends with a historical sketch discussing the origins of algebraic geometry. From the Zentralblatt review of this volume: "... one can only respectfully repeat what has been said about the first part of the book (...): a great textbook, written by one of the leading algebraic geometers and teachers himself, has been reworked and updated. As a result the author's standard textbook on algebraic geometry has become even more important and valuable. Students, teachers, and active researchers using methods of al...

  17. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty proposed a simple technique called the packet combining scheme, in which errors are corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails (i) when the bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both of these have recently been addressed by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)
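
    The packet combining idea, locating candidate error bits by comparing erroneous copies, can be sketched as follows (a simplified illustration, not Chakraborty's exact scheme): XOR-ing two received copies marks every bit position where they disagree, and when the copies were corrupted in different positions those marks cover all the single-bit errors.

```python
def diff_positions(copy_a: int, copy_b: int) -> list:
    """Bit positions where two received copies of the same packet disagree."""
    x = copy_a ^ copy_b
    return [i for i in range(x.bit_length()) if (x >> i) & 1]

original = 0b10110100
copy_a = original ^ (1 << 2)   # bit error at position 2
copy_b = original ^ (1 << 5)   # bit error at position 5
suspects = diff_positions(copy_a, copy_b)   # → [2, 5]
# The receiver can try flipping suspect bits and re-checking an error
# detection code, recovering the packet without retransmission. The scheme
# fails when both copies are hit in the SAME position: the XOR is zero
# there, which is exactly failure mode (i) described above.
```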

  18. Canonical differential geometry of string backgrounds

    International Nuclear Information System (INIS)

    Schuller, Frederic P.; Wohlfarth, Mattias N.R.

    2006-01-01

    String backgrounds and D-branes do not possess the structure of Lorentzian manifolds, but that of manifolds with area metric. Area metric geometry is a true generalization of metric geometry, which in particular may accommodate a B-field. While an area metric does not determine a connection, we identify the appropriate differential geometric structure which is of relevance for the minimal surface equation in such a generalized geometry. In particular the notion of a derivative action of areas on areas emerges naturally. Area metric geometry provides new tools in differential geometry, which promise to play a role in the description of gravitational dynamics on D-branes

  19. Transforming BIM to BEM: Generation of Building Geometry for the NASA Ames Sustainability Base BIM

    Energy Technology Data Exchange (ETDEWEB)

    O'Donnell, James T. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Maile, Tobias [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rose, Cody [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mrazovic, Natasa [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Morrissey, Elmer [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Regnier, Cynthia [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Parrish, Kristen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bazjanac, Vladimir [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-01-01

    Typical processes of whole Building Energy simulation Model (BEM) generation are subjective, labor intensive, time intensive and error prone. Essentially, these typical processes reproduce already existing data, i.e. building models already created by the architect. Accordingly, Lawrence Berkeley National Laboratory (LBNL) developed a semi-automated process that enables reproducible conversions of Building Information Model (BIM) representations of building geometry into a format required by building energy modeling (BEM) tools. This is a generic process that may be applied to all building energy modeling tools but to date has only been used for EnergyPlus. This report describes and demonstrates each stage in the semi-automated process for building geometry using the recently constructed NASA Ames Sustainability Base throughout. This example uses ArchiCAD (Graphisoft, 2012) as the originating CAD tool and EnergyPlus as the concluding whole building energy simulation tool. It is important to note that the process is also applicable for professionals that use other CAD tools such as Revit (“Revit Architecture,” 2012) and DProfiler (Beck Technology, 2012) and can be extended to provide geometry definitions for BEM tools other than EnergyPlus. Geometry Simplification Tool (GST) was used during the NASA Ames project and was the enabling software that facilitated semi-automated data transformations. GST has now been superseded by Space Boundary Tool (SBT-1) and will be referred to as SBT-1 throughout this report. The benefits of this semi-automated process are fourfold: 1) reduce the amount of time and cost required to develop a whole building energy simulation model, 2) enable rapid generation of design alternatives, 3) improve the accuracy of BEMs and 4) result in significantly better performing buildings with significantly lower energy consumption than those created using the traditional design process, especially if the simulation model was used as a predictive

  20. Reducing visual deficits caused by refractive errors in school and preschool children: results of a pilot school program in the Andean region of Apurimac, Peru

    Science.gov (United States)

    Latorre-Arteaga, Sergio; Gil-González, Diana; Enciso, Olga; Phelan, Aoife; García-Muñoz, Ángel; Kohler, Johannes

    2014-01-01

    Background Refractive error is defined as the inability of the eye to bring parallel rays of light into focus on the retina, resulting in nearsightedness (myopia), farsightedness (hyperopia) or astigmatism. Uncorrected refractive error in children is associated with increased morbidity and reduced educational opportunities. Vision screening (VS) is a method for identifying children with visual impairment or eye conditions likely to lead to visual impairment. Objective To analyze the utility of vision screening conducted by teachers and to contribute to a better estimation of the prevalence of childhood refractive errors in Apurimac, Peru. Design A pilot vision screening program in preschool (Group I) and elementary school children (Group II) was conducted with the participation of 26 trained teachers. Children whose visual acuity was <6/9 [20/30] (Group I) and ≤6/9 (Group II) in one or both eyes, measured with the Snellen Tumbling E chart at 6 m, were referred for a comprehensive eye exam. Specificity and positive predictive value to detect refractive error were calculated against clinical examination. Program assessment with participants was conducted to evaluate outcomes and procedures. Results A total sample of 364 children aged 3–11 were screened; 45 children were examined at Centro Oftalmológico Monseñor Enrique Pelach (COMEP) Eye Hospital. Prevalence of refractive error was 6.2% (Group I) and 6.9% (Group II); specificity of teacher vision screening was 95.8% and 93.0%, while positive predictive value was 59.1% and 47.8% for each group, respectively. Aspects highlighted to improve the program included extending training, increasing parental involvement, and helping referred children to attend the hospital. Conclusion Prevalence of refractive error in children is significant in the region. Vision screening performed by trained teachers is a valid intervention for early detection of refractive error, including screening of preschool children. Program
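
    The screening-performance figures reported above are standard confusion-matrix quantities computed against the clinical reference exam. A small sketch with hypothetical counts (the study's raw counts are not given in the abstract):

```python
def screening_metrics(tp, fp, tn, fn):
    """Specificity, positive predictive value, and sensitivity of a
    screening test measured against a clinical reference exam."""
    specificity = tn / (tn + fp)   # true negatives among all non-cases
    ppv = tp / (tp + fp)           # true positives among screen-positives
    sensitivity = tp / (tp + fn)   # true positives among all cases
    return specificity, ppv, sensitivity

# Hypothetical counts for illustration only.
spec, ppv, sens = screening_metrics(tp=12, fp=6, tn=150, fn=4)
```

A high specificity with a much lower PPV, as reported in the study, is typical when the condition is uncommon: even a small false-positive rate generates referrals that rival the true cases in number.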

  1. ITS version 5.0: the integrated TIGER series of coupled electron/photon Monte Carlo transport codes with CAD geometry.

    Energy Technology Data Exchange (ETDEWEB)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2005-09-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  2. The Beauty of Geometry

    Science.gov (United States)

    Morris, Barbara H.

    2004-01-01

    This article describes a geometry project that used the beauty of stained-glass-window designs to teach middle school students about geometric figures and concepts. Three honors prealgebra teachers and a middle school mathematics gifted intervention specialist created a geometry project that covered the curriculum and also assessed students'…

  3. Aliasing errors in measurements of beam position and ellipticity

    International Nuclear Information System (INIS)

    Ekdahl, Carl

    2005-01-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.

  4. Aliasing errors in measurements of beam position and ellipticity

    Science.gov (United States)

    Ekdahl, Carl

    2005-09-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
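
    The aliasing mechanism can be illustrated with a toy model (an idealized filament beam in a circular pipe; this is not the paper's simulation): the wall signal of an offset filament contains all azimuthal harmonics, and a first-harmonic centroid estimate from N equally spaced detectors also picks up every harmonic n ≡ ±1 (mod N), so adding detectors pushes the leading alias to higher order.

```python
import math

def wall_signal(theta, d, phi0, nmax=30):
    """Normalized image-current density on the pipe wall for a filament
    offset by fraction d of the pipe radius at azimuth phi0."""
    return 1.0 + 2.0 * sum(d ** n * math.cos(n * (theta - phi0))
                           for n in range(1, nmax + 1))

def estimated_offset(n_det, d, phi0=0.0):
    """First-harmonic (difference-over-sum style) centroid estimate
    from n_det equally spaced detectors."""
    thetas = [2.0 * math.pi * k / n_det for k in range(n_det)]
    sig = [wall_signal(t, d, phi0) for t in thetas]
    total = sum(sig)
    cx = sum(s * math.cos(t) for s, t in zip(sig, thetas)) / total
    cy = sum(s * math.sin(t) for s, t in zip(sig, thetas)) / total
    return math.hypot(cx, cy)

d_true = 0.3   # large offset, 30% of the pipe radius, to make aliasing visible
err4 = abs(estimated_offset(4, d_true) - d_true)   # leading alias ~ d^3
err8 = abs(estimated_offset(8, d_true) - d_true)   # leading alias ~ d^7
```

With four detectors the n = 3 harmonic folds directly into the position estimate; with eight, the first contaminating harmonic is n = 7, so the systematic error drops by orders of magnitude for the same offset.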

  5. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Moreover, at spectacular events a combination of component failure and human error is often found. In particular, the Rasmussen Report and the German Risk Assessment Study show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  6. Analysis of Employee's Survey for Preventing Human-Errors

    International Nuclear Information System (INIS)

    Sung, Chanho; Kim, Younggab; Joung, Sanghoun

    2013-01-01

    Human errors in nuclear power plants can cause large and small events or incidents. These events or incidents are among the main contributors to reactor trips and may threaten the safety of nuclear plants. To prevent human errors, KHNP (nuclear power plants) introduced 'human-error prevention techniques' and has applied them to main areas such as plant operation, operation support, and maintenance and engineering. This paper proposes methods to prevent and reduce human errors in nuclear power plants by analyzing survey results covering the utilization of the human-error prevention techniques and the employees' awareness of preventing human errors. With regard to human-error prevention, this survey analysis presented the status of the human-error prevention techniques and the employees' awareness of preventing human errors. Employees' understanding and utilization of the techniques were generally high, and the training level of employees and the training effect on actual work were in good condition. Employees also answered that the root causes of human error lay in the working environment, including tight processes, manpower shortages, and excessive tasks, rather than personal negligence or lack of personal knowledge. Consideration of the working environment is certainly needed. At present, based on this survey, the best methods of preventing human error are personal equipment, substantial training and education, private mental health checks before starting work, prohibition of performing multiple tasks, compliance with procedures, and enhancement of job site review. However, the most important and basic things for preventing human error are the interest of workers and an organizational atmosphere with communication between managers and workers, and between employees and bosses

  7. Evaluation and Error Analysis for a Solar thermal Receiver

    Energy Technology Data Exchange (ETDEWEB)

    Pfander, M.

    2001-07-01

    In the following study a complete balance over the REFOS receiver module, mounted on the tower power plant CESA-1 at the Plataforma Solar de Almeria (PSA), is carried out. Additionally, an error inspection of the various measurement techniques used in the REFOS project is made. In particular, the flux measurement system Prohermes, which is used to determine the total entry power of the receiver module and is known as a major error source, is analysed in detail. Simulations and experiments on the particular instruments are used to determine and quantify possible error sources. After discovering the origin of the errors, they are reduced and included in the error calculation. The ultimate result is presented as an overall efficiency of the receiver module in dependence on the flux density at the receiver module's entry plane and the receiver operating temperature. (Author) 26 refs.

  8. Evaluation and Error Analysis for a Solar Thermal Receiver

    International Nuclear Information System (INIS)

    Pfander, M.

    2001-01-01

    In the following study a complete balance over the REFOS receiver module, mounted on the tower power plant CESA-1 at the Plataforma Solar de Almeria (PSA), is carried out. Additionally, an error inspection of the various measurement techniques used in the REFOS project is made. In particular, the flux measurement system Prohermes, which is used to determine the total entry power of the receiver module and is known as a major error source, is analysed in detail. Simulations and experiments on the particular instruments are used to determine and quantify possible error sources. After discovering the origin of the errors, they are reduced and included in the error calculation. The ultimate result is presented as an overall efficiency of the receiver module in dependence on the flux density at the receiver module's entry plane and the receiver operating temperature. (Author) 26 refs

  9. A simple method for in vivo measurement of implant rod three-dimensional geometry during scoliosis surgery.

    Science.gov (United States)

    Salmingo, Remel A; Tadano, Shigeru; Fujisaki, Kazuhiro; Abe, Yuichiro; Ito, Manabu

    2012-05-01

    Scoliosis is defined as a spinal pathology characterized as a three-dimensional deformity of the spine combined with vertebral rotation. Treatment for severe scoliosis is achieved when the scoliotic spine is surgically corrected and fixed using implanted rods and screws. Several studies have performed biomechanical modeling and corrective force measurements of scoliosis correction. These studies were able to predict the clinical outcome and measure the corrective forces acting on the screws; however, they were not able to measure the intraoperative three-dimensional geometry of the spinal rod. In effect, the results of biomechanical modeling might not be realistic, and the corrective forces during the surgical correction procedure were difficult to measure intra-operatively. Projective geometry has been shown to be successful in the reconstruction of a three-dimensional structure using a series of images obtained from different views. In this study, we propose a new method to measure the three-dimensional geometry of an implant rod using two cameras. The reconstruction method requires only a few parameters: the included angle θ between the two cameras, the actual length of the rod in mm, and the locations of points for curve fitting. An implant rod utilized in spine surgery was used to evaluate the accuracy of the method. The three-dimensional geometry of the rod was measured from an image obtained by a scanner and compared to the proposed two-camera method. The mean error in the reconstruction measurements ranged from 0.32 to 0.45 mm. The method presented here demonstrates the possibility of intra-operatively measuring the three-dimensional geometry of a spinal rod. The proposed method could be used in surgical procedures to better understand the biomechanics of scoliosis correction through real-time measurement of three-dimensional implant rod geometry in vivo.
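
    The two-view reconstruction can be illustrated in its simplest special case (an included angle θ of 90°, i.e., orthogonal views; the paper's general-θ projective formulation is not reproduced here): one camera observes the rod's (x, z) projection, the other its (y, z) projection, and matching points by their shared z coordinate recovers the 3D curve.

```python
import math

def reconstruct_orthogonal(proj_xz, proj_yz):
    """Fuse two orthogonal projections of a space curve, matched by their
    shared z coordinate, into 3D points (90-degree special case only)."""
    by_z = {round(z, 6): y for y, z in proj_yz}
    return [(x, by_z[round(z, 6)], z) for x, z in proj_xz]

# Synthetic "rod": a gentle helix sampled along its length (invented data).
ts = [i / 20 for i in range(21)]
rod = [(math.cos(t), math.sin(t), 100.0 * t) for t in ts]
xz = [(x, z) for x, _, z in rod]      # view from camera A
yz = [(y, z) for _, y, z in rod]      # view from camera B
rebuilt = reconstruct_orthogonal(xz, yz)
```

For an arbitrary included angle the second view contributes a coordinate in a rotated frame, so the fusion step becomes solving a small linear system per point instead of a dictionary lookup; the known rod length then fixes the overall scale.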

  10. Load alleviation on wind turbine blades using variable airfoil geometry

    Energy Technology Data Exchange (ETDEWEB)

    Basualdo, S.

    2005-03-01

    A two-dimensional theoretical study of the aeroelastic behaviour of an airfoil has been performed, whose geometry can be altered using a rear-mounted flap. This device is governed by a controller whose objective is to reduce the airfoil displacements and, therefore, the stresses present in a real blade. The aerodynamic problem was solved numerically by a panel method using potential theory, suitable for modelling attached flows. It is therefore mostly applicable to Pitch Regulated Variable Speed (PRVS) wind turbines, which mainly operate under this flow condition. The results show evident reductions in the airfoil displacements using simple control strategies that take the airfoil position and its first and second derivatives as input, especially at the system's eigenfrequency. The use of variable airfoil geometry is an effective means of reducing the vibration magnitudes of an airfoil that represents a section of a wind turbine blade when subject to stochastic wind signals. The results of this investigation encourage further work with 3D aeroelastic models to predict the reduction of loads in real wind turbines. (author)

  11. The Most Common Geometric and Semantic Errors in CityGML Datasets

    Science.gov (United States)

    Biljecki, F.; Ledoux, H.; Du, X.; Stoter, J.; Soon, K. H.; Khoo, V. H. S.

    2016-10-01

    To be used as input in most simulation and modelling software, 3D city models should be geometrically and topologically valid, and semantically rich. In this paper we investigate the quality of currently available CityGML datasets: we validate the geometry/topology of the 3D primitives (Solid and MultiSurface), and we validate whether the semantics of the boundary surfaces of buildings are correct or not. We have analysed all the CityGML datasets we could find, both from portals of cities and on different websites, plus a few that were made available to us. We have thus validated 40M surfaces in 16M 3D primitives and 3.6M buildings found in 37 CityGML datasets originating from 9 countries, and produced by several companies with diverse software and acquisition techniques. The results indicate that CityGML datasets without errors are rare, and those that are nearly valid are mostly simple LOD1 models. We report on the most common errors we have found and analyse them. One main observation is that many of these errors could be automatically fixed or prevented with simple modifications to the modelling software. Our principal aim is to highlight the most common errors so that these are not repeated in the future. We hope that our paper and the open-source software we have developed will help raise awareness of data quality among data providers and 3D GIS software producers.
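Several of the geometric errors this kind of validation catches (holes in shells, wrongly oriented faces) reduce to simple combinatorial checks. A minimal sketch, not the authors' validation software, which is far more complete: a boundary surface forms a closed, consistently oriented shell exactly when every directed edge appears once and its reverse also appears once.

```python
from collections import Counter

def is_closed_manifold(faces):
    """Check that a polygonal shell is closed and consistently oriented:
    every directed edge occurs exactly once, and so does its reverse."""
    directed = Counter()
    for face in faces:
        n = len(face)
        for i in range(n):
            directed[(face[i], face[(i + 1) % n])] += 1
    return all(count == 1 and directed.get((b, a), 0) == 1
               for (a, b), count in directed.items())

# A tetrahedron with outward-oriented faces passes the check;
# removing one face leaves an open (invalid) shell.
tet = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (0, 2, 3)]
open_shell = tet[:3]
```

Real CityGML solids also require planarity, non-self-intersection, and correct semantic surface labels, which need substantially more machinery than this edge count.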

  12. Teaching Spatial Geometry in a Virtual World

    DEFF Research Database (Denmark)

    Förster, Klaus-Tycho

    2017-01-01

    Spatial geometry is one of the fundamental mathematical building blocks of any engineering education. However, it is overshadowed by planar geometry in the curriculum between playful early primary education and later analytical geometry, leaving a multi-year gap where spatial geometry is absent...

  13. Trends and developments in computational geometry

    NARCIS (Netherlands)

    Berg, de M.

    1997-01-01

    This paper discusses some trends and achievements in computational geometry during the past five years, with emphasis on problems related to computer graphics. Furthermore, a direction of research in computational geometry is discussed that could help in bringing the fields of computational geometry

  14. Effect of Pore Geometry on Gas Adsorption: Grand Canonical Monte Carlo Simulation Studies

    International Nuclear Information System (INIS)

    Lee, Eon Ji; Chang, Rak Woo; Han, Ji Hyung; Chung, Taek Dong

    2012-01-01

    In this study, we investigated the purely geometrical effect of porous materials on gas adsorption using grand canonical Monte Carlo simulations of primitive gas-pore models with various pore geometries: planar, cylindrical, and random. Although the model does not possess atomistic-level details of porous materials, our simulation results provide considerable insight into the effect of pore geometry on the adsorption behavior of gas molecules. First, the surface curvature of porous materials plays a significant role in the amount of adsorbed gas: a concave surface, as in cylindrical pores, induces more attraction between gas molecules and the pore, which enhances gas adsorption, whereas the convex surface of random pores has the opposite effect. Second, this geometrical effect shows a nonmonotonic dependence on the gas-pore interaction strength and length. Third, as the external gas pressure is increased, the change in gas adsorption due to pore geometry is reduced. Finally, the pore geometry also affects the collision dynamics of gas molecules. Since our model is based on a primitive description of fluid molecules, our conclusions can be applied to any fluidic system, including reactant-electrode systems.
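The grand canonical Monte Carlo machinery behind such results alternates particle insertions and deletions with Metropolis acceptance rules. A toy sketch for an ideal (non-interacting) gas, where the acceptance ratios simplify and the mean particle number should converge to activity × volume; this is a pedagogical stand-in, not the paper's gas-pore model:

```python
import random

def gcmc_ideal_gas(activity, volume, steps, seed=1):
    """Minimal grand canonical MC for an ideal gas: only insertion and
    deletion moves.  For activity z = exp(beta*mu)/Lambda^3, the
    stationary distribution of N is Poisson with mean z * volume."""
    rng = random.Random(seed)
    N = 0
    total = 0
    for _ in range(steps):
        if rng.random() < 0.5:                     # attempt insertion
            if rng.random() < min(1.0, activity * volume / (N + 1)):
                N += 1
        elif N > 0:                                # attempt deletion
            if rng.random() < min(1.0, N / (activity * volume)):
                N -= 1
        total += N
    return total / steps

avg_N = gcmc_ideal_gas(activity=0.5, volume=100.0, steps=200_000)
```

In an interacting gas-pore model the acceptance ratios gain a Boltzmann factor exp(-beta * dU) from the gas-gas and gas-pore interaction energies; the move structure stays the same.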

  15. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  16. User interface for MAWST limit of error program

    International Nuclear Information System (INIS)

    Crain, B. Jr.

    1991-01-01

    This paper reports on a user-friendly interface which is being developed to aid in preparation of input data for the Los Alamos National Laboratory software module MAWST (Materials Accounting With Sequential Testing) used at Savannah River Site to propagate limits of error for facility material balances. The forms-based interface is being designed using traditional software project management tools and using the Ingres family of database management and application development products (products of Relational Technology, Inc.). The software will run on VAX computers (products of Digital Equipment Corporation) on which the VMS operating system and Ingres database management software are installed. Use of the interface software will reduce time required to prepare input data for calculations and also reduce errors associated with data preparation

  17. An approach for management of geometry data

    Science.gov (United States)

    Dube, R. P.; Herron, G. J.; Schweitzer, J. E.; Warkentine, E. R.

    1980-01-01

    The strategies for managing Integrated Programs for Aerospace Design (IPAD) computer-based geometry are described. The computer model of geometry is the basis for communication, manipulation, and analysis of shape information. IPAD's data base system makes this information available to all authorized departments in a company. A discussion of the data structures and algorithms required to support geometry in IPIP (IPAD's data base management system) is presented. Through the use of IPIP's data definition language, the structure of the geometry components is defined. The data manipulation language is the vehicle by which a user defines an instance of the geometry. The manipulation language also allows a user to edit, query, and manage the geometry. The selection of canonical forms is a very important part of the IPAD geometry. IPAD has a canonical form for each entity and provides transformations to alternate forms; in particular, IPAD will provide a transformation to the ANSI standard. The DBMS schemas required to support IPAD geometry are explained.

  18. "WGL," a Web Laboratory for Geometry

    Science.gov (United States)

    Quaresma, Pedro; Santos, Vanda; Maric, Milena

    2018-01-01

    The role of information and communication technologies (ICT) in education is nowadays well recognised. The "Web Geometry Laboratory," is an e-learning, collaborative and adaptive, Web environment for geometry, integrating a well known dynamic geometry system. In a collaborative session, teachers and students, engaged in solving…

  19. Error analysis for reducing noisy wide-gap concentric cylinder rheometric data for nonlinear fluids - Theory and applications

    Science.gov (United States)

    Borgia, Andrea; Spera, Frank J.

    1990-01-01

    This work discusses the propagation of errors in the recovery of the shear rate from wide-gap concentric cylinder viscometric measurements of non-Newtonian fluids. A least-squares regression of stress on angular velocity data against a system of arbitrary functions is used to propagate the errors for the series solution to the viscometric flow developed by Krieger and Elrod (1953) and Pawlowski (1953) (the 'power-law' approximation) and for the first term of the series developed by Krieger (1968). A numerical experiment shows that, for measurements affected by significant errors, the first term of the Krieger-Elrod-Pawlowski series (the 'infinite radius' approximation) and the power-law approximation may recover the shear rate as accurately as the full Krieger-Elrod-Pawlowski solution. An experiment on a clay slurry indicates that the clay has a larger yield stress at rest than during shearing, and that, for the range of shear rates investigated, a four-parameter constitutive equation approximates its rheology reasonably well. The error analysis presented is useful for studying the rheology of fluids such as particle suspensions, slurries, foams, and magma.
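The 'power-law' approximation mentioned above recovers the shear rate at the inner cylinder from the measured flow index n = d ln τ / d ln Ω. A sketch under the usual power-law assumption (Ω is angular velocity, τ the bob stress, κ the radius ratio; the synthetic fluid parameters are invented for the example):

```python
import numpy as np

def powerlaw_shear_rate(omega, tau, kappa):
    """Krieger-Elrod-Pawlowski 'power-law' approximation for the shear
    rate at the inner cylinder (bob) of a wide-gap concentric cylinder
    viscometer.  omega: angular velocities (rad/s), tau: bob shear
    stresses (Pa), kappa: radius ratio R_bob / R_cup (< 1)."""
    # flow index n = d ln(tau) / d ln(omega) from a least-squares fit
    n, _ = np.polyfit(np.log(omega), np.log(tau), 1)
    return (2.0 * omega / n) / (1.0 - kappa ** (2.0 / n))

# Synthetic power-law fluid (invented parameters): for a true power-law
# fluid the approximation is exact, so the recovered shear rate should
# match the one used to generate the stresses.
n_true, K, kappa = 0.6, 3.0, 0.5
omega = np.linspace(0.1, 10.0, 20)
gdot_true = (2.0 * omega / n_true) / (1.0 - kappa ** (2.0 / n_true))
tau = K * gdot_true ** n_true
gdot_rec = powerlaw_shear_rate(omega, tau, kappa)
```

For real (non-power-law) data the regression slope varies with Ω, and the error propagation analyzed in the paper quantifies how noise in τ(Ω) feeds through this log-log fit into the recovered shear rate.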

  20. Sub-nanometer periodic nonlinearity error in absolute distance interferometers

    Science.gov (United States)

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can produce errors at the nanometer scale, has become a main factor limiting the accuracy of absolute distance measurement. To eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed, so that frequency and/or polarization mixing cannot occur and the strict requirement on the polarization of the laser source is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams are spatially separated so that their optical paths do not overlap. The main causes of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, are thus eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.

  1. Analytische Geometrie

    Science.gov (United States)

    Kemnitz, Arnfried

    The basic idea of analytic geometry is to carry out geometric investigations by computational means: geometric objects are described by equations and studied using algebraic methods.

  2. Multiple Δt strategy for particle image velocimetry (PIV) error correction, applied to a hot propulsive jet

    Science.gov (United States)

    Nogueira, J.; Lecuona, A.; Nauri, S.; Legrand, M.; Rodríguez, P. A.

    2009-07-01

    PIV (particle image velocimetry) is a measurement technique with growing application to the study of complex flows of relevance to industry. This work focuses on the assessment of some significant PIV measurement errors. In particular, procedures are proposed for estimating, and sometimes correcting, errors arising from the sensor geometry and performance, namely peak-locking and contemporary CCD camera read-out errors. Although the procedures are of general application to PIV, they are applied here to a particular real case, illustrating the methodology steps and the improvement in results that can be obtained. This real case corresponds to an ensemble of hot high-speed coaxial jets, representative of the civil transport aircraft propulsion system using turbofan engines. Displacement errors of ~0.1 pixels have been assessed, amounting to 10% of the measured magnitude at many points. These results allow the uncertainty interval associated with the measurement to be provided and, under some circumstances, some of the bias components of the errors to be corrected. The procedures have also made it possible to detect conditions in which the peak-locking error has a period of 2 pixels instead of the classical 1 pixel. In addition to increasing the worth of the measurement, the uncertainty assessment is of interest for the validation of CFD codes.
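Peak-locking biases measured displacements toward integer pixel values, so it shows up as non-uniformity in the histogram of the fractional parts of the displacements. A hedged sketch of such a diagnostic (the index definition below is an illustrative choice, not the authors' procedure):

```python
import numpy as np

def peak_locking_index(displacements, bins=20):
    """Quantify peak-locking from the histogram of the fractional part
    of measured displacements (in pixels).  A uniform histogram gives an
    index near 0; clustering near integer values pushes it toward 1."""
    frac = np.mod(np.asarray(displacements), 1.0)
    hist, _ = np.histogram(frac, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    # total-variation distance from the uniform distribution, in [0, 1]
    return 0.5 * np.abs(p - 1.0 / bins).sum()

rng = np.random.default_rng(0)
unlocked = rng.uniform(0.0, 8.0, 50_000)     # fractional parts uniform
# displacements pulled toward the nearest integer (synthetic peak-locking)
locked = np.round(unlocked) + 0.05 * rng.standard_normal(50_000)
```

A histogram folded modulo 2 pixels instead of 1 would expose the 2-pixel-period locking condition mentioned in the abstract.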

  3. Connections between algebra, combinatorics, and geometry

    CERN Document Server

    Sather-Wagstaff, Sean

    2014-01-01

    Commutative algebra, combinatorics, and algebraic geometry are thriving areas of mathematical research with a rich history of interaction. Connections Between Algebra, Combinatorics, and Geometry contains lecture notes, along with exercises and solutions, from the Workshop on Connections Between Algebra and Geometry held at the University of Regina from May 29-June 1, 2012. It also contains research and survey papers from academics invited to participate in the companion Special Session on Interactions Between Algebraic Geometry and Commutative Algebra, which was part of the CMS Summer Meeting at the University of Regina held June 2–3, 2012, and the meeting Further Connections Between Algebra and Geometry, which was held at the North Dakota State University, February 23, 2013. This volume highlights three mini-courses in the areas of commutative algebra and algebraic geometry: differential graded commutative algebra, secant varieties, and fat points and symbolic powers. It will serve as a useful resou...

  4. Influence of Ephemeris Error on GPS Single Point Positioning Accuracy

    Science.gov (United States)

    Lihua, Ma; Wang, Meng

    2013-09-01

    A Global Positioning System (GPS) user makes use of the navigation message transmitted from GPS satellites to compute its location. Because the receiver uses the satellite's location in position calculations, an ephemeris error, i.e. a difference between the expected and actual orbital position of a GPS satellite, reduces user accuracy. The extent of the influence is determined by the precision of the broadcast ephemeris uploaded from the control station. Simulation analysis with the Yuma almanac shows that the maximum positioning error occurs when the ephemeris error lies along the line-of-sight (LOS) direction. Moreover, the error depends on the relationship between the observer and the spatial constellation over the time period considered.
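To first order, an ephemeris error perturbs the pseudorange by its component along the line of sight, which is why the maximum positioning error occurs for errors along the LOS. A small sketch (the satellite and user coordinates are illustrative, not almanac data):

```python
import numpy as np

def pseudorange_error(sat_pos, user_pos, ephem_error):
    """First-order pseudorange error caused by a satellite orbit error:
    the projection of the error vector onto the line of sight (LOS)."""
    los = sat_pos - user_pos
    los = los / np.linalg.norm(los)
    return float(np.dot(ephem_error, los))

sat = np.array([15600e3, 7540e3, 20140e3])    # illustrative ECEF position (m)
user = np.array([6378e3, 0.0, 0.0])           # user near the Earth's surface (m)
los = (sat - user) / np.linalg.norm(sat - user)

err_along = 5.0 * los                          # 5 m orbit error along the LOS
err_cross = np.cross(los, [0.0, 0.0, 1.0])     # 5 m orbit error across the LOS
err_cross = 5.0 * err_cross / np.linalg.norm(err_cross)
```

Here `pseudorange_error(sat, user, err_along)` returns the full 5 m, while the cross-LOS error contributes nothing to the pseudorange, matching the simulation finding in the abstract.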

  5. Issues with data and analyses: Errors, underlying themes, and potential solutions.

    Science.gov (United States)

    Brown, Andrew W; Kaiser, Kathryn A; Allison, David B

    2018-03-13

    Some aspects of science, taken at the broadest level, are universal in empirical research. These include collecting, analyzing, and reporting data. In each of these aspects, errors can and do occur. In this work, we first discuss the importance of focusing on statistical and data errors to continually improve the practice of science. We then describe underlying themes of the types of errors and postulate contributing factors. To do so, we describe a case series of relatively severe data and statistical errors coupled with surveys of some types of errors to better characterize the magnitude, frequency, and trends. Having examined these errors, we then discuss the consequences of specific errors or classes of errors. Finally, given the extracted themes, we discuss methodological, cultural, and system-level approaches to reducing the frequency of commonly observed errors. These approaches will plausibly contribute to the self-critical, self-correcting, ever-evolving practice of science, and ultimately to furthering knowledge.

  6. PENGEOM-A general-purpose geometry package for Monte Carlo simulation of radiation transport in material systems defined by quadric surfaces

    Science.gov (United States)

    Almansa, Julio; Salvat-Pujol, Francesc; Díaz-Londoño, Gloria; Carnicer, Artur; Lallena, Antonio M.; Salvat, Francesc

    2016-02-01

    The Fortran subroutine package PENGEOM provides a complete set of tools to handle quadric geometries in Monte Carlo simulations of radiation transport. The material structure where radiation propagates is assumed to consist of homogeneous bodies limited by quadric surfaces. The PENGEOM subroutines (a subset of the PENELOPE code) track particles through the material structure, independently of the details of the physics models adopted to describe the interactions. Although these subroutines are designed for detailed simulations of photon and electron transport, where all individual interactions are simulated sequentially, they can also be used in mixed (class II) schemes for simulating the transport of high-energy charged particles, where the effect of soft interactions is described by the random-hinge method. The definition of the geometry and the details of the tracking algorithm are tailored to optimize simulation speed. The use of fuzzy quadric surfaces minimizes the impact of round-off errors. The provided software includes a Java graphical user interface for editing and debugging the geometry definition file and for visualizing the material structure. Images of the structure are generated by using the tracking subroutines and, hence, they describe the geometry actually passed to the simulation code.
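Tracking a particle through bodies bounded by quadric surfaces reduces to finding the nearest positive intersection of a ray with each quadric, i.e., solving a quadratic in the path length. A simplified sketch (PENGEOM's actual Fortran implementation, with fuzzy surfaces and body bookkeeping, is considerably more involved):

```python
import numpy as np

def ray_quadric(o, d, A, b, c):
    """Smallest positive t where the ray o + t*d crosses the quadric
    x^T A x + b.x + c = 0 (A symmetric); None if there is no crossing."""
    qa = d @ A @ d
    qb = 2.0 * (o @ A @ d) + b @ d
    qc = o @ A @ o + b @ o + c
    if abs(qa) < 1e-12:                  # degenerate: linear in t
        if abs(qb) < 1e-12:
            return None
        t = -qc / qb
        return t if t > 1e-12 else None
    disc = qb * qb - 4.0 * qa * qc
    if disc < 0.0:
        return None
    r = np.sqrt(disc)
    ts = [t for t in ((-qb - r) / (2 * qa), (-qb + r) / (2 * qa)) if t > 1e-12]
    return min(ts) if ts else None

# Unit sphere: A = I, b = 0, c = -1.  A ray from the origin along +x
# exits the sphere at t = 1.
A = np.eye(3)
b = np.zeros(3)
t_hit = ray_quadric(np.zeros(3), np.array([1.0, 0.0, 0.0]), A, b, -1.0)
```

A tracking routine would evaluate this for every surface bounding the current body and advance the particle to the nearest crossing, then update which body it is in.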

  7. Algebraic Geometry and Number Theory Summer School

    CERN Document Server

    Sarıoğlu, Celal; Soulé, Christophe; Zeytin, Ayberk

    2017-01-01

    This lecture notes volume presents significant contributions from the “Algebraic Geometry and Number Theory” Summer School, held at Galatasaray University, Istanbul, June 2-13, 2014. It addresses subjects ranging from Arakelov geometry and Iwasawa theory to classical projective geometry, birational geometry and equivariant cohomology. Its main aim is to introduce these contemporary research topics to graduate students who plan to specialize in the area of algebraic geometry and/or number theory. All contributions combine main concepts and techniques with motivating examples and illustrative problems for the covered subjects. Naturally, the book will also be of interest to researchers working in algebraic geometry, number theory and related fields.

  8. Applications of Affine and Weyl geometry

    CERN Document Server

    García-Río, Eduardo; Nikcevic, Stana

    2013-01-01

    Pseudo-Riemannian geometry is, to a large extent, the study of the Levi-Civita connection, which is the unique torsion-free connection compatible with the metric structure. There are, however, other affine connections which arise in different contexts, such as conformal geometry, contact structures, Weyl structures, and almost Hermitian geometry. In this book, we reverse this point of view and instead associate an auxiliary pseudo-Riemannian structure of neutral signature to certain affine connections and use this correspondence to study both geometries. We examine Walker structures, Riemannia

  9. Magnitude of pseudopotential localization errors in fixed node diffusion quantum Monte Carlo.

    Science.gov (United States)

    Krogel, Jaron T; Kent, P R C

    2017-06-28

    Growth in computational resources has led to the application of real-space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed-node energy and estimate the localization error in both the locality approximation and the T-moves scheme for the Ce atom in charge states 3+ and 4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves, although the locality approximation energy approaches the localization-free limit from above/below for the 3+/4+ charge state. We find that energy-minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the Ce atom. Less complex or variance-minimized Jastrows are generally less effective. Our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium-core pseudopotentials are applied to heavy elements such as Ce.

  10. The Idea of Order at Geometry Class.

    Science.gov (United States)

    Rishel, Thomas

    The idea of order in geometry is explored using the experience of assignments given to undergraduates in a college geometry course "From Space to Geometry." Discussed are the definition of geometry, and earth measurement using architecture, art, and common experience. This discussion concludes with a consideration of the question of whether…

  11. Special geometry

    International Nuclear Information System (INIS)

    Strominger, A.

    1990-01-01

    A special manifold is an allowed target manifold for the vector multiplets of D=4, N=2 supergravity. These manifolds are of interest for string theory because the moduli spaces of Calabi-Yau threefolds and c=9, (2,2) conformal field theories are special. Previous work has given a local, coordinate-dependent characterization of special geometry. A global description of special geometries is given herein, and their properties are studied. A special manifold M of complex dimension n is characterized by the existence of a holomorphic Sp(2n+2,R)xGL(1,C) vector bundle over M with a nowhere-vanishing holomorphic section Ω. The Kaehler potential on M is the logarithm of the Sp(2n+2,R) invariant norm of Ω. (orig.)

  12. Adding-point strategy for reduced-order hypersonic aerothermodynamics modeling based on fuzzy clustering

    Science.gov (United States)

    Chen, Xin; Liu, Li; Zhou, Sida; Yue, Zhenjiang

    2016-09-01

    Reduced-order models (ROMs) based on snapshots from high-fidelity CFD simulations have received great attention recently due to their capability of capturing the features of complex geometries and flow configurations. To improve the efficiency and precision of a ROM, it is indispensable to add extra sampling points to the initial snapshots, since the number of sampling points needed to achieve an adequately accurate ROM is generally unknown a priori, while a large number of initial sampling points reduces the parsimony of the ROM. A fuzzy-clustering-based adding-point strategy is proposed, in which the fuzzy clustering acts as an indicator of the regions where the precision of the ROM is relatively low. The proposed method is applied to construct ROMs for benchmark mathematical examples and a numerical example of hypersonic aerothermodynamics prediction for a typical control surface. The proposed method achieves a 34.5% improvement in efficiency over the estimated mean-squared-error prediction algorithm while showing the same level of prediction accuracy.
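The clustering step can be illustrated with a minimal fuzzy c-means. The error values and the rule "add the next sample at the center of the worst cluster" below are an illustrative reading of the strategy, not the authors' exact algorithm:

```python
import numpy as np

def fuzzy_cmeans(x, n_clusters, m=2.0, iters=100):
    """Minimal fuzzy c-means on points x (n_samples, n_features);
    returns cluster centers and the membership matrix."""
    # deterministic initialization: spread initial centers over the data
    centers = x[np.linspace(0, len(x) - 1, n_clusters).astype(int)]
    u = None
    for _ in range(iters):
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)      # memberships sum to 1 per point
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0)[:, None]
    return centers, u

# Two well-separated groups of sampling points in parameter space, with
# hypothetical ROM prediction errors attached to each sample
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0.0, 0.1, (30, 2)), rng.normal(5.0, 0.1, (30, 2))])
errors = np.concatenate([np.full(30, 0.01), np.full(30, 0.2)])

centers, u = fuzzy_cmeans(x, n_clusters=2)
# membership-weighted mean error per cluster; refine where it is worst
cluster_err = (u * errors[:, None]).sum(axis=0) / u.sum(axis=0)
new_sample = centers[np.argmax(cluster_err)]
```

The new snapshot would then be computed by the high-fidelity CFD solver at `new_sample` and appended to the snapshot set before rebuilding the ROM.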

  13. Using Dynamic Geometry Software to Improve Eight Grade Students' Understanding of Transformation Geometry

    Science.gov (United States)

    Guven, Bulent

    2012-01-01

    This study examines the effect of dynamic geometry software (DGS) on students' learning of transformation geometry. A pre- and post-test quasi-experimental design was used. Participants in the study were 68 eighth grade students (36 in the experimental group and 32 in the control group). While the experimental group students were studying the…

  14. Error estimation for variational nodal calculations

    International Nuclear Information System (INIS)

    Zhang, H.; Lewis, E.E.

    1998-01-01

    Adaptive grid methods are widely employed in finite element solutions to both solid and fluid mechanics problems. Either the size of the element is reduced (h refinement) or the order of the trial function is increased (p refinement) locally to improve the accuracy of the solution without a commensurate increase in computational effort. Success of these methods requires effective local error estimates to determine those parts of the problem domain where the solution should be refined. Adaptive methods have recently been applied to the spatial variables of the discrete ordinates equations. As a first step in the development of adaptive methods that are compatible with the variational nodal method, the authors examine error estimates for use in conjunction with spatial variables. The variational nodal method lends itself well to p refinement because the space-angle trial functions are hierarchical. Here they examine an error estimator for use with spatial p refinement for the diffusion approximation. Eventually, angular refinement will also be considered using spherical harmonics approximations
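The appeal of hierarchical trial functions for p refinement is that the highest-order expansion coefficient itself serves as a cheap local error estimate. A toy 1D sketch of this idea with Legendre polynomials (not the variational nodal method itself; the tolerance and test function are arbitrary):

```python
import numpy as np

def p_refine(f, max_degree, tol):
    """Toy p-refinement on [-1, 1]: raise the degree of a Legendre
    expansion until the magnitude of the last (highest-order)
    coefficient -- the hierarchical error estimate -- drops below tol."""
    x = np.linspace(-1.0, 1.0, 2001)
    y = f(x)
    coef = None
    for deg in range(1, max_degree + 1):
        series = np.polynomial.legendre.Legendre.fit(x, y, deg).convert()
        coef = series.coef
        if abs(coef[-1]) < tol:
            return deg, coef
    return max_degree, coef

# Smooth function: coefficients decay rapidly, so a modest degree suffices
deg, _ = p_refine(np.exp, max_degree=20, tol=1e-8)
```

Because each refinement only appends higher modes, the lower-order coefficients are reused, which is exactly the property that makes hierarchical bases attractive for adaptive p refinement.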

  15. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  16. Information theory, spectral geometry, and quantum gravity.

    Science.gov (United States)

    Kempf, Achim; Martin, Robert

    2008-01-18

    We show that there exists a deep link between the two disciplines of information theory and spectral geometry. This allows us to obtain new results on a well-known quantum gravity motivated natural ultraviolet cutoff which describes an upper bound on the spatial density of information. Concretely, we show that, together with an infrared cutoff, this natural ultraviolet cutoff beautifully reduces the path integral of quantum field theory on curved space to a finite number of ordinary integrations. We then show, in particular, that the subsequent removal of the infrared cutoff is safe.

  17. Experimental approach for the uncertainty assessment of 3D complex geometry dimensional measurements using computed tomography at the mm and sub-mm scales

    DEFF Research Database (Denmark)

    Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A.

    2017-01-01

    The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems' traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the micro-CT system Maximum Permissible Error (MPE) estimation, determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such CMS would still hold all the typical limitations of optical and tactile...

  18. A Geometry Deformation Model for Braided Continuum Manipulators

    Directory of Open Access Journals (Sweden)

    S. M. Hadi Sadati

    2017-06-01

    Full Text Available Continuum manipulators have gained significant attention in the robotics community due to their high dexterity, deformability, and reachability. Modeling such manipulators has been shown to be very complex and challenging. Despite many research attempts, a general and comprehensive modeling method is yet to be established. In this paper, for the first time, we introduce the bending effect into the model of a braided extensile pneumatic actuator with both stiff and bendable threads. Then, the effect of the manipulator cross-section deformation on the constant-curvature and variable-curvature models is investigated using simple analytical results from a novel geometry deformation method and is compared to experimental results. We achieve 38% mean reference-error simulation accuracy using our constant-curvature model for a braided continuum manipulator in the presence of body load, and 10% using our variable-curvature model in the presence of extensive external loads. With proper model assumptions and by taking into account the cross-section deformation, a 7-13% increase in simulation accuracy is achieved compared to a fixed cross-section model. The presented models can be used for the exact modeling and design optimization of compound continuum manipulators by providing an analytical tool for the sensitivity analysis of manipulator performance. Our main aim is application to minimally invasive manipulation with limited workspaces and to manipulators with regionally tunable stiffness in their cross section.
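The constant-curvature model against which the cross-section deformation is compared maps arc parameters to a tip position in closed form. A minimal sketch of single-section constant-curvature kinematics (the parameter values below are illustrative):

```python
import numpy as np

def cc_tip_position(s, kappa, phi):
    """Tip position of one constant-curvature section.
    s: arc length, kappa: curvature (1 / bend radius),
    phi: angle of the bending plane about the base z-axis."""
    if abs(kappa) < 1e-9:                      # straight-limit special case
        return np.array([0.0, 0.0, s])
    r = (1.0 - np.cos(kappa * s)) / kappa      # in-plane offset from the base
    return np.array([r * np.cos(phi), r * np.sin(phi),
                     np.sin(kappa * s) / kappa])

# Quarter-circle bend of radius 0.2 m: tip at (0.2, 0, 0.2)
tip = cc_tip_position(s=0.1 * np.pi, kappa=1.0 / 0.2, phi=0.0)
```

Variable-curvature and cross-section-deformation models refine this map; the closed-form constant-curvature expression is the usual baseline they are measured against.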

  19. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    Science.gov (United States)

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitudes smaller than the number of random effect levels, the computational requirements for our method should be reduced.

  20. Disformal transformation in Newton-Cartan geometry

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Peng [Zhejiang Chinese Medical University, Department of Information, Hangzhou (China); Sun Yat-Sen University, School of Physics and Astronomy, Guangzhou (China); Yuan, Fang-Fang [Nankai University, School of Physics, Tianjin (China)

    2016-08-15

    Newton-Cartan geometry has played a central role in recent discussions of the non-relativistic holography and condensed matter systems. Although the conformal transformation in non-relativistic holography can easily be rephrased in terms of Newton-Cartan geometry, we show that it requires a nontrivial procedure to arrive at the consistent form of anisotropic disformal transformation in this geometry. Furthermore, as an application of the newly obtained transformation, we use it to induce a geometric structure which may be seen as a particular non-relativistic version of the Weyl integrable geometry. (orig.)