WorldWideScience

Sample records for error concealment method

  1. Video error concealment using block matching and frequency selective extrapolation algorithms

    Science.gov (United States)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

Error concealment (EC) is a decoder-side technique for hiding transmission errors by exploiting the spatial or temporal information available in received video frames. Recovering distorted video is important because video is used in many applications such as video telephony, video conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and error-resilience-based methods are also used to deal with errors, but they add delay and redundant data, so error concealment is the preferred option for error hiding. In this paper, a block-matching error concealment algorithm is compared with a frequency selective extrapolation algorithm. Both methods are evaluated on video frames with manually introduced errors. The objective quality measures used were PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity Index), computed by comparing the original frames with the frames concealed by each algorithm. According to the simulation results, frequency selective extrapolation gives better quality, with about 48% higher PSNR and 94% higher SSIM than the block-matching algorithm.
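
As a concrete illustration of the block-matching side of this comparison, the sketch below conceals a lost block by searching the previous frame for the candidate whose boundary rows best match the pixels received around the hole, and reports PSNR against the original. This is a minimal, hedged reconstruction of the general technique, not the authors' exact algorithm; the block size, search range, and boundary-matching cost are illustrative assumptions.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two frames."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def conceal_block_matching(cur, prev, y, x, b=16, search=8):
    """Conceal a lost b x b block at (y, x) in `cur` using the previous frame:
    pick the candidate block in `prev` whose top and bottom rows best match
    the received rows just above and below the hole, then copy it in.
    Assumes the hole and the search window stay inside the frame."""
    cur = cur.astype(np.float64)
    prev = prev.astype(np.float64)
    top = cur[y - 1, x:x + b]          # received row above the hole
    bottom = cur[y + b, x:x + b]       # received row below the hole
    best, best_cost = None, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = prev[y + dy:y + dy + b, x + dx:x + dx + b]
            cost = np.abs(cand[0] - top).sum() + np.abs(cand[-1] - bottom).sum()
            if cost < best_cost:
                best, best_cost = cand, cost
    out = cur.copy()
    out[y:y + b, x:x + b] = best
    return out
```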

  2. High-Performance Region-of-Interest Image Error Concealment with Hiding Technique

    Directory of Open Access Journals (Sweden)

    Shih-Chang Hsia

    2010-01-01

Region-of-interest (ROI) based image coding has recently become a popular topic. Since the ROI area contains the most important information in an image, it must be protected from erroneous decoding when the channel suffers losses or unexpected attacks. This paper presents an efficient error concealment method that recovers ROI information with a hiding technique. Based on a progressive transformation, the low-frequency components of the ROI are encoded and their information is dispersed into the high-frequency bands of the original image. Protection is achieved by extracting the ROI coefficients from the damaged image without adding extra information. Simulation results show that the proposed method can efficiently reconstruct the ROI when errors occur in the ROI bit-stream, and its PSNR outperforms conventional error concealment techniques by 2 to 5 dB.

  3. Error Concealment Method Based on Motion Vector Prediction Using Particle Filters

    Directory of Open Access Journals (Sweden)

    B. Hrusovsky

    2011-09-01

Video transmitted over an unreliable environment, such as a wireless channel or, in general, any network with an unreliable transport protocol, suffers from the loss of video packets due to network congestion and various kinds of noise. The problem becomes more important with highly efficient video codecs, because the redundancy elimination used to obtain high compression ratios lets visual quality degradation propagate into subsequent frames. Since real-time video transmission is limited by channel delay, it is not possible to retransmit all faulty or lost packets, so these defects must be concealed. To reduce the undesirable effects of information loss, the lost data is usually estimated from the received data, which is generally known as the error concealment problem. This paper discusses packet loss modeling in order to simulate losses during video transmission, the analysis of packet losses, and their impact on lost motion vectors.
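
Packet-loss traces for this kind of simulation are commonly generated with a two-state Gilbert-Elliott model; the sketch below is a minimal example of that general approach (the transition and loss probabilities are illustrative assumptions, not values from the paper).

```python
import random

def gilbert_elliott_losses(n_packets, p_gb=0.05, p_bg=0.4,
                           loss_good=0.001, loss_bad=0.5, seed=0):
    """Simulate a packet-loss trace with a two-state Gilbert-Elliott model.
    p_gb / p_bg are the Good->Bad and Bad->Good transition probabilities;
    loss_good / loss_bad are the per-packet loss probabilities in each state."""
    rng = random.Random(seed)
    state_bad = False
    trace = []
    for _ in range(n_packets):
        p_loss = loss_bad if state_bad else loss_good
        trace.append(rng.random() < p_loss)   # True = packet lost
        # state transition for the next packet
        if state_bad:
            state_bad = rng.random() >= p_bg  # stay in Bad unless we recover
        else:
            state_bad = rng.random() < p_gb
    return trace

lost = gilbert_elliott_losses(10000)
print(f"simulated loss rate: {sum(lost) / len(lost):.3%}")
```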

  4. Error Concealment using Neural Networks for Block-Based Image Coding

    Directory of Open Access Journals (Sweden)

    M. Mokos

    2006-06-01

In this paper, a novel adaptive error concealment (EC) algorithm, which lowers the requirements on channel coding, is proposed. It conceals errors in block-based image coding systems using a neural network. Only intra-frame information is used to reconstruct an image with isolated damaged blocks: the pixels surrounding a damaged block are fed to neural network models that recover the missing pixels. Computer simulation results show that both the visual quality and the MSE of the reconstructed image are significantly improved by the proposed EC algorithm. A simple non-neural approach is also proposed for comparison.
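
The non-neural baseline idea, predicting a lost block directly from the ring of pixels around it with a map learned from intact blocks, can be sketched as below; the block size, the one-pixel ring, and the linear least-squares predictor (standing in for the neural network) are all illustrative assumptions.

```python
import numpy as np

def extract_pairs(frames, b=8):
    """Build (border, block) training pairs from undamaged frames.
    The border is the 1-pixel ring around each b x b block."""
    X, Y = [], []
    for f in frames:
        f = f.astype(np.float64)
        for y in range(1, f.shape[0] - b - 1, b):
            for x in range(1, f.shape[1] - b - 1, b):
                ring = np.concatenate([f[y - 1, x - 1:x + b + 1],
                                       f[y + b, x - 1:x + b + 1],
                                       f[y:y + b, x - 1],
                                       f[y:y + b, x + b]])
                X.append(ring)
                Y.append(f[y:y + b, x:x + b].ravel())
    return np.array(X), np.array(Y)

def fit_predictor(X, Y):
    """Least-squares map from border pixels to block pixels
    (the simple baseline; a trained neural net would replace this step)."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])      # add bias column
    W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
    return W

def conceal(frame, y, x, W, b=8):
    """Fill the damaged b x b block at (y, x) from its surrounding ring."""
    f = frame.astype(np.float64)
    ring = np.concatenate([f[y - 1, x - 1:x + b + 1], f[y + b, x - 1:x + b + 1],
                           f[y:y + b, x - 1], f[y:y + b, x + b]])
    pred = np.append(ring, 1.0) @ W
    out = f.copy()
    out[y:y + b, x:x + b] = pred.reshape(b, b)
    return out
```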

  5. Error Concealment for 3-D DWT Based Video Codec Using Iterative Thresholding

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Forchhammer, Søren; Codreanu, Marian

    2017-01-01

    Error concealment for video coding based on a 3-D discrete wavelet transform (DWT) is considered. We assume that the video sequence has a sparse representation in a known basis different from the DWT, e.g., in a 2-D discrete cosine transform basis. Then, we formulate the concealment problem as l1...
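
The abstract is truncated here, but the iterative-thresholding idea it refers to, alternating between enforcing the received samples and thresholding coefficients in a sparsifying transform, can be sketched as follows. To stay in plain NumPy this sketch uses a 2-D Fourier basis and soft thresholding as a stand-in for the paper's DCT-basis l1 formulation over a 3-D DWT codec; that substitution is an assumption for illustration only.

```python
import numpy as np

def conceal_iterative_thresholding(frame, known_mask, n_iter=100, lam_frac=0.02):
    """Fill in pixels where known_mask is False by iterative soft thresholding,
    assuming the frame is approximately sparse in a 2-D Fourier basis."""
    frame = frame.astype(np.float64)
    x = np.where(known_mask, frame, frame[known_mask].mean())  # crude initial fill
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        lam = lam_frac * np.abs(X).max()
        X *= np.maximum(1.0 - lam / (np.abs(X) + 1e-12), 0.0)  # soft-threshold magnitudes
        x = np.real(np.fft.ifft2(X))
        x[known_mask] = frame[known_mask]                       # re-impose received pixels
    return x
```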

  6. JPEG2000-coded image error concealment exploiting convex sets projections.

    Science.gov (United States)

    Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio

    2005-04-01

Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: the LL band, the high frequencies at the lower decomposition levels, and the high frequencies at the higher decomposition levels. Errors of the first type are the most annoying but can be concealed by exploiting the spatial correlation of the signal, as in a number of techniques proposed in the past; the second are less annoying but more difficult to address; the third are often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged, proposing a new approach based on the theory of projections onto convex sets. The error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. We observed that uniform LP filtering introduced undesired side effects that offset its advantages. This problem was overcome with an adaptive solution that uses an edge map to choose the optimal filter mask size. Simulation results demonstrate the efficiency of the proposed approach.
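
A hedged sketch of the alternating-projection loop described above is given below, using a one-level Haar transform as the wavelet and a fixed 3x3 box blur in place of the paper's edge-adaptive low-pass filter; both simplifications, and the assumption that the LL band is intact, are ours for illustration.

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH)."""
    a, d = (x[0::2] + x[1::2]) / 2.0, (x[0::2] - x[1::2]) / 2.0
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0,
            (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0)

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.zeros((ll.shape[0], 2 * ll.shape[1])); d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.zeros((2 * a.shape[0], a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def box_blur(x):
    """Fixed 3x3 box blur (stand-in for the paper's adaptive LP filter)."""
    p = np.pad(x, 1, mode="edge")
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def pocs_conceal(received, damaged_mask, n_iter=25):
    """POCS-style concealment: alternately (1) low-pass filter in the spatial
    domain and (2) restore the uncorrupted wavelet coefficients.
    `damaged_mask` (half the image size) marks corrupted detail coefficients."""
    ll0, lh0, hl0, hh0 = haar2(received.astype(np.float64))
    x = received.astype(np.float64)
    for _ in range(n_iter):
        x = box_blur(x)                                   # projection 1: smoothness
        _, lh, hl, hh = haar2(x)
        for band, band0 in ((lh, lh0), (hl, hl0), (hh, hh0)):
            band[~damaged_mask] = band0[~damaged_mask]    # projection 2: data consistency
        x = ihaar2(ll0, lh, hl, hh)                       # LL assumed intact here
    return x
```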

  7. Error Concealment using Data Hiding in Wireless Image Transmission

    Directory of Open Access Journals (Sweden)

    A. Akbari

    2016-11-01

The transmission of images and video over an unreliable medium such as a wireless network generally results in receiving damaged content. In this paper, a novel image error concealment scheme based on data hiding and Set Partitioning In Hierarchical Trees (SPIHT) coding is investigated. On the encoder side, the coefficients of the wavelet-decomposed image are partitioned into "perfect trees". The SPIHT coder is applied to encode each perfect tree independently and generate an efficiently compressed reference code. This code is then embedded into the coefficients of another perfect tree located in a different place, using a robust data hiding scheme based on Quantization Index Modulation (QIM). On the decoder side, if a part of the image is lost, the algorithm extracts the embedded code for the reference trees related to that part and reconstructs the lost information. Performance results show that, for error-prone transmission, the proposed technique can efficiently conceal the lost areas of the transmitted image.
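
The QIM embedding step can be made concrete with the standard two-lattice quantizer sketch below; the step size delta and the use of one bit per coefficient are illustrative assumptions, and the paper's actual embedding domain is the wavelet coefficients of the host tree.

```python
import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    """Quantization Index Modulation: embed one bit per coefficient by
    quantizing to one of two interleaved lattices (multiples of delta for
    bit 0, multiples of delta shifted by delta/2 for bit 1)."""
    bits = np.asarray(bits, dtype=float)
    return delta * np.round((coeffs - bits * delta / 2.0) / delta) + bits * delta / 2.0

def qim_extract(coeffs, delta=8.0):
    """Recover the embedded bit as the nearer of the two lattices."""
    d0 = np.abs(coeffs - delta * np.round(coeffs / delta))
    shifted = coeffs - delta / 2.0
    d1 = np.abs(shifted - delta * np.round(shifted / delta))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(0)
c = rng.normal(0, 20, size=64)          # stand-in host coefficients
b = rng.integers(0, 2, size=64)
assert np.array_equal(qim_extract(qim_embed(c, b)), b)
```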

  8. Sequential error concealment for video/images by weighted template matching

    DEFF Research Database (Denmark)

    Koloda, Jan; Østergaard, Jan; Jensen, Søren Holdt

    2012-01-01

    In this paper we propose a novel spatial error concealment algorithm for video and images based on convex optimization. Block-based coding schemes in packet loss environment are considered. Missing macro blocks are sequentially reconstructed by filling them with a weighted set of templates...

  9. Damaged Watermarks Detection in Frequency Domain as a Primary Method for Video Concealment

    Directory of Open Access Journals (Sweden)

    Robert Hudec

    2011-01-01

This paper deals with video transmission over lossy communication networks. The main idea is to develop a video concealment method for information loss and error correction. At the beginning, three main groups of video concealment methods, divided by encoder/decoder collaboration, are briefly described. A modified algorithm based on the detection and filtering of damaged watermark blocks embedded in the transmitted video was developed. Finally, the efficiency of the developed algorithm is presented in the experimental part of the paper.

  10. An Error Concealment Method Based on Facial Symmetry

    Institute of Scientific and Technical Information of China (English)

    赖俊; 张江鑫

    2013-01-01

This paper presents an error concealment method based on facial symmetry. We first perform skin-color segmentation to determine the skin-color region; we then judge the symmetry of this region, conceal symmetric face regions with a symmetry-based algorithm, and conceal the remaining regions with an adaptive interpolation algorithm. The algorithm was verified with the JM86 reference model of the H.264 standard; the experimental results show that, by exploiting facial symmetry, our method achieves better concealment of symmetric face regions than the traditional interpolation algorithm.

  11. A hybrid frame concealment algorithm for H.264/AVC.

    Science.gov (United States)

    Yan, Bo; Gharavi, Hamid

    2010-01-01

In packet-based video transmission, packet loss due to channel errors may result in the loss of a whole video frame. Many error concealment algorithms have been proposed to combat channel errors; however, most existing algorithms can only deal with the loss of macroblocks and are not able to conceal a whole missing frame. To resolve this problem, we propose a new hybrid motion vector extrapolation (HMVE) algorithm to recover the whole missing frame; it provides more accurate estimation of the motion vectors of the missing frame than other conventional methods. Simulation results show that it is highly effective and significantly outperforms other existing frame recovery methods.
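
Plain motion-vector extrapolation, the baseline that hybrid schemes build on, can be sketched as below: project every macroblock of the previous frame forward along its own motion vector and give each lost macroblock the vector of the projected block that overlaps it most. The macroblock size, frame dimensions, and zero-vector fallback are illustrative assumptions; this is not the authors' full HMVE algorithm.

```python
import numpy as np

def extrapolate_frame_mvs(prev_mvs, mb=16, frame_h=288, frame_w=352):
    """prev_mvs[by, bx] = (dy, dx) motion vector of macroblock (by, bx) in the
    previous frame, in pixels. Returns an estimated MV field for a lost frame."""
    H, W = frame_h // mb, frame_w // mb
    best_overlap = np.zeros((H, W))
    est = np.zeros((H, W, 2))              # zero MV where nothing projects
    for by in range(H):
        for bx in range(W):
            dy, dx = prev_mvs[by, bx]
            y0, x0 = by * mb + dy, bx * mb + dx   # block projected into lost frame
            for ty in range(H):
                for tx in range(W):
                    oy = max(0.0, min(y0 + mb, (ty + 1) * mb) - max(y0, ty * mb))
                    ox = max(0.0, min(x0 + mb, (tx + 1) * mb) - max(x0, tx * mb))
                    if oy * ox > best_overlap[ty, tx]:
                        best_overlap[ty, tx] = oy * ox
                        est[ty, tx] = (dy, dx)
    return est
```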

  12. Objective Methods for Reliable Detection of Concealed Depression

    Directory of Open Access Journals (Sweden)

    Cynthia eSolomon

    2015-04-01

Recent research has shown that it is possible to automatically detect clinical depression from audio-visual recordings. Before considering integration in a clinical pathway, a key question is whether such systems can be easily fooled. This work explores the potential of acoustic features to detect clinical depression in adults both when acting normally and when asked to conceal their depression. Nine adults diagnosed with mild to moderate depression per the Beck Depression Inventory (BDI-II) and Patient Health Questionnaire (PHQ-9) were asked a series of questions and to read an excerpt from a novel aloud under two different experimental conditions. In one, participants were asked to act naturally; in the other, to suppress anything that they felt would be indicative of their depression. Acoustic features were then extracted from these data and analysed using paired t-tests to determine any statistically significant differences between healthy and depressed participants. Most features that were found to be significantly different during normal behaviour remained so during concealed behaviour. In leave-one-subject-out automatic classification studies of the 9 depressed subjects and 8 matched healthy controls, an 88% classification accuracy and 89% sensitivity were achieved. Results remained relatively robust during concealed behaviour, with classifiers trained on only non-concealed data achieving 81% detection accuracy and 75% sensitivity when tested on concealed data. These results indicate there is good potential to build deception-proof automatic depression monitoring systems.

  13. A Concealed Car Extraction Method Based on Full-Waveform LiDAR Data

    Directory of Open Access Journals (Sweden)

    Chuanrong Li

    2016-01-01

Concealed car extraction from point cloud data acquired by airborne laser scanning has gained popularity in recent years. However, due to the occlusion effect, the number of laser points returned from cars concealed under trees is not sufficient, which makes concealed car extraction difficult and unreliable. In this paper, a 3D point cloud segmentation and classification approach based on full-waveform LiDAR is presented. The approach first employs the autocorrelation G coefficient and the echo ratio to determine concealed car areas. The points in these areas are then segmented according to the elevation distribution of concealed cars. Based on the previous steps, a strategy integrating backscattered waveform features and the view histogram descriptor is developed to train sample data of concealed cars and generate the feature pattern. Finally, concealed cars are classified by pattern matching. The approach was validated with full-waveform LiDAR data, and experimental results demonstrate that it can extract concealed cars with an accuracy of more than 78.6% in the experimental areas.

  14. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. Errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information, together with several error concealment techniques in the decoder. The decoder resynchronizes more quickly and with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  15. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. Errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information, together with several error concealment techniques in the decoder. The decoder resynchronizes more quickly and with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  16. Self-Concealment and Suicidal Behaviors

    Science.gov (United States)

    Friedlander, Adam; Nazem, Sarra; Fiske, Amy; Nadorff, Michael R.; Smith, Merideth D.

    2012-01-01

    Understanding self-concealment, the tendency to actively conceal distressing personal information from others, may be important in developing effective ways to help individuals with suicidal ideation. No published study has yet assessed the relation between self-concealment and suicidal behaviors. Additionally, most self-concealment research has…

  17. Development and validation of the Body Concealment Scale for Scleroderma

    NARCIS (Netherlands)

    Jewett, L.R.; Malcarne, V.L.; Kwakkenbos, C.M.C.; Harcourt, D.; Rumsey, N.; Körner, A.; Steele, R.J.; Hudson, M.; Baron, M.; Haythornthwaite, J.A.; Heinberg, L.; Wigley, F.M.; Thombs, B.D.

    2016-01-01

    Objective: Body concealment is a component of social avoidance among people with visible differences from disfiguring conditions, including systemic sclerosis (SSc). The study objective was to develop a measure of body concealment related to avoidance behaviors in SSc. Methods: Initial items for the

  18. Psychopathy and Physiological Detection of Concealed Information: A review

    Directory of Open Access Journals (Sweden)

    Bruno Verschuere

    2006-03-01

The Concealed Information Test has been advocated as the preferred method for deception detection using the polygraph ("lie detector"). The Concealed Information Test is argued to be a standardised, highly accurate psychophysiological test founded on the orienting reflex. The validity of polygraph tests for the assessment of psychopathic individuals has, however, been questioned. Two dimensions are said to underlie psychopathy: emotional detachment and antisocial behaviour. Distinct psychophysiological correlates are hypothesised for these facets of psychopathy: emotional detachment is associated with deficient fear-potentiated startle, and antisocial behaviour with reduced orienting. Few studies have examined the effect of psychopathy on the validity of the Concealed Information Test. This review suggests that reduced orienting in highly antisocial individuals is also found in the Concealed Information Test, thereby threatening its validity. Implications for criminal investigations, possible solutions and directions for future research will be discussed.

  19. Concealed object segmentation and three-dimensional localization with passive millimeter-wave imaging

    Science.gov (United States)

    Yeom, Seokwon

    2013-05-01

Millimeter-wave imaging draws increasing attention in security applications for the detection of weapons concealed under clothing. In this paper, concealed object segmentation and three-dimensional localization schemes are reviewed. A concealed object is segmented by the k-means algorithm. A feature-based stereo-matching method estimates the longitudinal distance of the concealed object, using the discrepancy between the corresponding centers of the segmented objects. Experimental results are provided along with an analysis of the depth resolution.
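
A minimal NumPy sketch of the intensity-based k-means segmentation step is shown below; the number of clusters and the iteration count are illustrative assumptions, and the stereo-matching depth estimation step of the paper is omitted.

```python
import numpy as np

def kmeans_segment(img, k=3, n_iter=50, seed=0):
    """Segment a single-channel image into k intensity clusters with plain k-means."""
    rng = np.random.default_rng(seed)
    x = img.astype(np.float64).ravel()
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()   # update cluster means
    return labels.reshape(img.shape), centers
```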

  20. Concealing with structured light.

    Science.gov (United States)

    Sun, Jingbo; Zeng, Jinwei; Wang, Xi; Cartwright, Alexander N; Litchinitser, Natalia M

    2014-02-13

While making objects less visible (or invisible) to the human eye or to radar has captured people's imagination for centuries, current attempts towards realizing this long-awaited functionality range from various stealth technologies to recently proposed cloaking devices. A majority of proposed approaches share a number of common deficiencies such as design complexity, polarization effects, bandwidth, losses, and physical size or shape requirements that complicate their implementation, especially at optical frequencies. Here we demonstrate an alternative way to conceal macroscopic objects by structuring light itself. In our approach, the incident light is transformed into an optical vortex with a dark core that can be used to conceal macroscopic objects. Once such a beam has passed around the object, it is transformed back into its initial Gaussian shape with minimal amplitude and phase distortions. We therefore propose to use the dark core of the vortex beam to conceal an object that is macroscopic yet small enough to fit within the dark (negligibly low intensity) region of the beam. The proposed concealing approach is polarization independent, easy to fabricate, lossless, operates at wavelengths ranging from 560 to 700 nm, and can be used to hide macroscopic objects provided they are smaller than the vortex core.

  1. Concealable Stigmatized Identities and Psychological Well-Being

    OpenAIRE

    Quinn, Diane M.; Earnshaw, Valerie A.

    2013-01-01

    Many people have concealable stigmatized identities: Identities that can be hidden from others and that are socially devalued and negatively stereotyped. Understanding how these concealable stigmatized identities affect psychological well-being is critical. We present our model of the components of concealable stigmatized identities including valenced content – internalized stigma, experienced discrimination, anticipated stigma, disclosure reactions, and counter-stereotypic/positive informati...

  2. Theory of the Concealed Information Test

    NARCIS (Netherlands)

    Verschuere, B.; Ben-Shakhar, G.; Verschuere, B.; Ben-Shakhar, G.; Meijer, E.

    2011-01-01

    It is now well established that physiological measures can be validly used to detect concealed information. An important challenge is to elucidate the underlying mechanisms of concealed information detection. We review theoretical approaches that can be broadly classified in two major categories:

  3. Induction detection of concealed bulk banknotes

    Science.gov (United States)

    Fuller, Christopher; Chen, Antao

    2012-06-01

The smuggling of bulk cash across borders is a serious issue that has increased in recent years. In an effort to curb the illegal transport of large numbers of paper bills, a detection scheme has been developed based on the magnetic characteristics of bank notes. The results show that volumes of paper currency can be detected through common concealing materials such as plastics, cardboard, and fabrics, making the scheme a potential addition to border security methods. By observing the stark difference between the signals produced by metal and by currency, the scheme can also reduce or eliminate false positives caused by nearby metallic materials, while detecting both the presence and the number of concealed bulk notes.

  4. Medical makeup for concealing facial scars.

    Science.gov (United States)

    Mee, Donna; Wong, Brian J F

    2012-10-01

Surgical, laser, and pharmacological therapies are all used to correct scars and surgical incisions, though they have limits with respect to how well facial skin can be restored or enhanced. The use of cosmetics has long been a relevant adjunct to all scar treatment modalities. In recent years, technical advancements in the chemistry and composition of cosmetic products have given patients a broader range of products for concealing scars. This review provides an overview of contemporary methods for concealing facial scars, birthmarks, and pigmentary changes without the use of traditional, heavy-appearing camouflage products. Additionally, general guidelines and information are provided with respect to identifying competent makeup artists for the care of medical patients. The article is by no means meant to be a tutorial, but rather serves as a starting point in this allied field of medicine.

  5. CT diagnosis of concealed rupture of intestine following abdominal trauma

    International Nuclear Information System (INIS)

    Ji Jiansong; Wei Tiemin; Wang Zufei; Zhao Zhongwei; Tu Jianfei; Fan Xiaoxi; Xu Min

    2009-01-01

Objective: To investigate the CT findings of concealed rupture of the intestine following abdominal trauma. Methods: The CT findings of 11 cases of concealed intestinal rupture following abdominal trauma, proved by surgery, were reviewed retrospectively. Results: The main specific signs included: (1) free air in 4 cases, mainly around the injured small bowel or under the diaphragm, in the retroperitoneal space, or in a lump; (2) high-density hematoma between the intestines or in the bowel wall (4 cases); (3) the bowel wall injury sign, shown as low density of the injured intestinal wall, locally attenuated but relatively enhanced in the neighboring wall on contrast-enhanced CT; (4) a lump around the injured bowel wall with obvious ring-shaped enhancement (4 cases). Other signs included: (1) free fluid in the abdominal cavity or between the intestines with blurred borders; (2) bowel obstruction. Conclusion: CT is valuable in diagnosing concealed rupture of the intestine following abdominal trauma.

  6. Color image fusion for concealed weapon detection

    NARCIS (Netherlands)

    Toet, A.

    2003-01-01

    Recent advances in passive and active imaging sensor technology offer the potential to detect weapons that are concealed underneath a person's clothing or carried along in bags. Although the concealed weapons can sometimes easily be detected, it can be difficult to perceive their context, due to the

  7. Intra- and interpersonal consequences of experimentally induced concealment

    NARCIS (Netherlands)

    Bouman, T.K.

    2003-01-01

Secrecy, concealment, and thought suppression are assumed to be important aspects of psychopathology. However, most studies address these from an intrapersonal perspective. This study investigates both the intra- and the interpersonal consequences of experimentally induced concealment. Two

  8. [Half-gloving cordectomy: a modified procedure for concealed penis].

    Science.gov (United States)

    Sun, Wei-Gui; Zheng, Qi-Chuan; Jiang, Kun

    2012-06-01

To search for a simple surgical procedure for the treatment of concealed penis that may have a better effect and fewer complications. We used a modified surgical method in the treatment of 58 patients with concealed penis aged from 3 to 15 (mean 6.8) years. The operation was simplified and involved the following steps: wholly unveiling the glans penis, half-degloving the foreskin, cutting off all the adhesive fibers up to the penile suspensory ligaments, and liberating the external penis. The operation was successful in all the patients, with an operative time of 15-45 (mean 33) minutes and a hospital stay of 2-5 (mean 3.5) days, and no complications except mild foreskin edema in 5 cases. The external penis was lengthened from 0.5-2.8 (mean 1.4) cm preoperatively to 3.2-8.5 (mean 3.9) cm postoperatively. The patients were followed up for 1-3 years; all were satisfied with the length and appearance of the penis, and their sexual and reproductive functions were normal. The modified surgical procedure for concealed penis is simple and effective, with desirable outcomes, few postoperative complications and no damage to sexual and reproductive functions.

  9. Quantum Image Steganography and Steganalysis Based On LSQu-Blocks Image Information Concealing Algorithm

    Science.gov (United States)

    A. AL-Salhi, Yahya E.; Lu, Songfeng

    2016-08-01

Quantum steganography can solve some problems that are considered inefficient in image information concealing, and research on quantum image information concealing has been pursued widely in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity, and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR) and uniform clustering of image blocks, a least significant Qu-block (LSQB) information concealing algorithm for quantum image steganography is presented. Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Finally, the Con-Steg algorithm is used to conceal the clustered image blocks. Since information concealing in the Fourier domain of an image can improve the security of the image information, we further discuss a Fourier-domain LSQu-block information concealing algorithm for quantum images based on the Quantum Fourier Transform. In our algorithms, the corresponding unitary transformations are designed to conceal the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.

  10. Segmentation of Concealed Objects in Passive Millimeter-Wave Images Based on the Gaussian Mixture Model

    Science.gov (United States)

    Yu, Wangyang; Chen, Xiangguang; Wu, Lei

    2015-04-01

Passive millimeter wave (PMMW) imaging has become one of the most effective means to detect objects concealed under clothing. Due to the limitations of the available hardware and the inherent physical properties of PMMW imaging systems, images often exhibit poor contrast and low signal-to-noise ratios, so it is difficult to achieve ideal results with a general segmentation algorithm. In this paper, an advanced Gaussian Mixture Model (GMM) algorithm for the segmentation of concealed objects in PMMW images is presented. Our work builds on the fact that the GMM is a parametric statistical model often used to characterize the statistical behavior of images. Our approach is three-fold: first, we remove the noise from the image using both a notch reject filter and a total variation filter. Next, we use an adaptive parameter initialization GMM algorithm (APIGMM) to model the histogram of the image; the APIGMM provides an initial number of Gaussian components and starts with more appropriate parameters. Bayesian decision is employed to separate the pixels of concealed objects from other areas. Finally, the confidence interval (CI) method, alongside local gradient information, is used to extract the concealed objects. The proposed hybrid segmentation approach detects concealed objects more accurately than two other state-of-the-art segmentation methods.
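
A minimal sketch of the underlying step, fitting a Gaussian mixture to the pixel intensities with EM and then labeling pixels by the most responsible component, is given below; the fixed component count and plain random initialization are illustrative simplifications of the paper's adaptive initialization (APIGMM).

```python
import numpy as np

def gmm_segment(img, k=3, n_iter=100, seed=0):
    """Fit a 1-D Gaussian mixture to pixel intensities with EM and return
    per-pixel component labels plus the fitted (weights, means, variances)."""
    rng = np.random.default_rng(seed)
    x = img.astype(np.float64).ravel()
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each pixel
        p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / (p.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: update weights, means, variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    labels = np.argmax(r, axis=1)
    return labels.reshape(img.shape), w, mu, var
```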

  11. Management of concealed penis with modified penoplasty.

    Science.gov (United States)

    Xu, Jian-Guo; Lv, Chuan; Wang, Yu-Chong; Zhu, Ji; Xue, Chun-Yu

    2015-03-01

To investigate the effect of penile degloving in combination with penoscrotal angle reconstruction for the correction of concealed penis. A foreskin circumcision incision was made along the coronal sulcus. After sharp dissection under the superficial layer of the tunica albuginea, the penile shaft was degloved to release the fibrous bands of the tunica dartos. Through a longitudinal incision or Z-plasty at the penoscrotal junction, the tunica albuginea was secured to the proximal tunica dartos and the penoscrotal angle was reconstructed. This procedure effectively corrected the concealed penis while also correcting other problems such as phimosis. From August 2008 to August 2013, we performed 41 procedures for concealed penis. Correction was successful in all patients, with an improved median length of 2.1 cm in the flaccid state. Follow-up ranged from 6 months to 2 years, and satisfactory cosmetic outcomes were obtained without scars or erectile discomfort. Our technique includes degloving and penoscrotal angle reconstruction, which provides proper visualization for fixation of the penile base. The longitudinal or Z-plasty incision also opens the degloving dead cavity, which is good for drainage. The procedure is straightforward, with good functional and cosmetic outcomes, and is thus ideal for correction of the concealed penis.

  12. A new surgical technique for concealed penis using an advanced musculocutaneous scrotal flap.

    Science.gov (United States)

    Han, Dong-Seok; Jang, Hoon; Youn, Chang-Shik; Yuk, Seung-Mo

    2015-06-19

Until recently, no single, universally accepted surgical method has existed for all types of concealed penis repairs. We describe a new surgical technique for repairing concealed penis by using an advanced musculocutaneous scrotal flap. From January 2010 to June 2014, we evaluated 12 patients (12-40 years old) with concealed penises who were surgically treated with an advanced musculocutaneous scrotal flap technique after degloving through a ventral approach. All the patients were scheduled for regular follow-up at 6, 12, and 24 weeks postoperatively. The satisfaction grade for penile size, morphology, and voiding status was evaluated using a questionnaire preoperatively and at all of the follow-ups. Information regarding complications was obtained during the postoperative hospital stay and at all follow-ups. The patients' satisfaction grades, which included the penile size, morphology, and voiding status, improved postoperatively compared to those preoperatively. All patients had penile lymphedema postoperatively; however, this disappeared within 6 weeks. There were no complications such as skin necrosis and contracture, voiding difficulty, or erectile dysfunction. Our advanced musculocutaneous scrotal flap technique for concealed penis repair is technically easy and safe. In addition, it provides a good cosmetic appearance, functional outcomes and excellent postoperative satisfaction grades. Lastly, it seems applicable in any type of concealed penis, including cases in which the ventral skin defect is difficult to cover.

  13. Concealing Emotions at Work Is Associated with Allergic Rhinitis in Korea.

    Science.gov (United States)

    Seok, Hongdeok; Yoon, Jin-Ha; Won, Jong-Uk; Lee, Wanhyung; Lee, June-Hee; Jung, Pil Kyun; Roh, Jaehoon

    2016-01-01

Concealing emotions at work can cause considerable psychological stress. While there is extensive research on the adverse health effects of concealing emotions and on the association between allergic diseases and stress, research has not yet investigated whether concealing emotions at work is associated with allergic rhinitis, a common disease in many industrialized countries whose prevalence is increasing. Our aim was therefore to determine the strength of this association using data from three years (2007-2009) of the 4th Korean National Health and Nutrition Examination Survey. Participants (aged 20-64) were 8,345 individuals who were economically active and who had completed the questionnaire items on concealing emotions at work. Odds ratios (ORs) and 95% confidence intervals (95% CIs) for allergic rhinitis were calculated using logistic regression models. Among all participants, 3,140 subjects (37.6%) reported concealing their emotions at work: 1,661 men and 1,479 women. The OR (95% CI) for allergic rhinitis among those who concealed emotions at work versus those who did not was 1.318 (1.148-1.512). Stratified by sex, the OR (95% CI) was 1.307 (1.078-1.585) among men and 1.346 (1.105-1.639) among women. Thus, individuals who concealed their emotions at work were significantly more likely to have a diagnosis of allergic rhinitis than those who did not. Because concealing emotions at work has adverse health effects, labor policies that aim to reduce this practice are needed.

  14. Modified penoplasty for concealed penis in children.

    Science.gov (United States)

    Yang, Tianyou; Zhang, Liyu; Su, Cheng; Li, Zhongmin; Wen, Yingquan

    2013-09-01

To report a modified penoplasty technique for concealed penis in children. Between January 2006 and June 2012, 201 cases of concealed penis were surgically repaired with modified penoplasty. The modified technique consists of 3 major steps: (1) degloving the penile skin and excising the inner prepuce, (2) advancing penoscrotal skin to cover the penile shaft, and (3) fixing the penis base and reconstructing the penoscrotal angle. The 201 cases were enrolled in this study over a period of 6 years. Mean age at the time of surgery was 5.3 years (range 1-13 years) and mean operative time was 40 minutes (range 30-65 minutes). All patients were routinely followed up at 1, 3, and 6 months after surgery. Most patients developed postoperative edema that resolved within 1 month, whereas 20 cases developed prolonged postoperative edema, especially at the site of the frenulum, which took 3 months to resolve. Ten cases had retraction after surgery. No erection difficulties were recorded. Patients and parents reported better hygiene and improved visualization and accessibility of the penis after surgery and were satisfied with the cosmetic outcome. The results of this study show that the modified penoplasty technique is a simple, safe, and effective procedure for concealed penis with a satisfactory cosmetic outcome.

  15. Management of a Concealable Stigmatized Identity: A Qualitative Study of Concealment, Disclosure, and Role Flexing Among Young, Resilient Sexual and Gender Minority Individuals.

    Science.gov (United States)

    Bry, Laura Jane; Mustanski, Brian; Garofalo, Robert; Burns, Michelle Nicole

    2017-01-01

    Disclosure of a sexual or gender minority status has been associated with both positive and negative effects on wellbeing. Few studies have explored the disclosure and concealment process in young people. Interviews were conducted with 10 sexual and/or gender minority individuals, aged 18-22 years, of male birth sex. Data were analyzed qualitatively, yielding determinants and effects of disclosure and concealment. Determinants of disclosure included holding positive attitudes about one's identity and an implicit devaluation of acceptance by society. Coming out was shown to have both positive and negative effects on communication and social support and was associated with both increases and decreases in experiences of stigma. Determinants of concealment included lack of comfort with one's identity and various motivations to avoid discrimination. Concealment was also related to hypervigilance and unique strategies of accessing social support. Results are discussed in light of their clinical implications.

  16. An Exploratory Investigation of Social Stigma and Concealment in Patients with Multiple Sclerosis.

    Science.gov (United States)

    Cook, Jonathan E; Germano, Adriana L; Stadler, Gertraud

    2016-01-01

We conducted a preliminary investigation into dimensions of stigma and their relation to disease concealment in a sample of American adults living with multiple sclerosis (MS). Fifty-three adults with MS in the United States completed an online survey assessing anticipated, internalized, and isolation stigma, as well as concealment. Responses to all the scales were relatively low, on average, but above scale minimums; anticipated stigma and concealment were highest. Anticipated stigma strongly predicted concealment. Many adults living with MS may be concerned that they will be the target of social stigma because of their illness, and these concerns are associated with disease concealment. More research is needed to investigate how MS stigma and concealment may be independent contributors to health in patients with MS.

  17. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
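
A hedged sketch of the idea, conjugate-gradient-style minimization of a ray-based least-squares objective in which only a random subset of rays is used for the error evaluation and line minimization, is given below; the Fletcher-Reeves update, the nonnegativity constraint, and all parameter values are illustrative assumptions, not details from the patent.

```python
import numpy as np

def cg_subset_error(A, b, n_iter=50, subset_frac=0.2, seed=0):
    """Minimize 0.5*||Ax - b||^2 with conjugate-gradient directions, but compute
    the minimum along each direction from the error on a random subset of rays
    (rows of A); nonnegativity projection stands in for the constraint."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    g_prev, d = None, None
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)                                  # full gradient
        d = -g if d is None else -g + (g @ g) / (g_prev @ g_prev) * d
        rows = rng.choice(m, size=max(1, int(subset_frac * m)), replace=False)
        Ad = A[rows] @ d                                       # direction seen by the subset
        denom = float(Ad @ Ad)
        alpha = 0.0 if denom == 0.0 else -float(Ad @ (A[rows] @ x - b[rows])) / denom
        x = np.maximum(x + alpha * d, 0.0)                     # step to the approximate minimum
        g_prev = g
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 50))
x_true = np.abs(rng.normal(size=50))
x_hat = cg_subset_error(A, A @ x_true)
print("relative reconstruction error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```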

  18. Use of Near-Infrared Spectroscopy and Chemometrics for the Nondestructive Identification of Concealed Damage in Raw Almonds (Prunus dulcis).

    Science.gov (United States)

    Rogel-Castillo, Cristian; Boulton, Roger; Opastpongkarn, Arunwong; Huang, Guangwei; Mitchell, Alyson E

    2016-07-27

Concealed damage (CD) is defined as a brown discoloration of the kernel interior (nutmeat) that appears only after moderate to high heat treatment (e.g., blanching, drying, roasting, etc.). Raw almonds with CD have no visible defects before heat treatment. Currently, there are no screening methods available for detecting CD in raw almonds. Herein, the feasibility of using near-infrared (NIR) spectroscopy between 1125 and 2153 nm for the detection of CD in almonds is demonstrated. Almond kernels with CD have less NIR absorbance in the regions related to oil, protein, and carbohydrates. With the use of partial least squares discriminant analysis (PLS-DA) and selection of specific wavelengths, three classification models were developed. The calibration models have false-positive and false-negative error rates ranging between 12.4 and 16.1% and between 10.6 and 17.2%, respectively. The overall error rates ranged between 8.2 and 9.2%. Second-derivative preprocessing of the selected wavelengths resulted in the most robust predictive model.

  19. Internal Error Propagation in Explicit Runge--Kutta Methods

    KAUST Repository

    Ketcheson, David I.

    2014-09-11

    In practical computation with Runge--Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
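
One common way to make "internal error propagation" concrete is to apply the method to the linear test equation y' = λy and add a perturbation to every stage; the hedged sketch below (notation is ours, not necessarily the paper's) shows how stage errors enter the step output through internal stability polynomials.

```latex
% Explicit RK step applied to y' = \lambda y with z = h\lambda; each stage
% picks up an internal error \tilde r_i (roundoff, algebraic solver error, ...):
\begin{aligned}
  Y_i     &= y_n + z \sum_{j<i} a_{ij} Y_j + \tilde r_i, \qquad i = 1,\dots,s, \\
  y_{n+1} &= R(z)\, y_n + \sum_{j=1}^{s} Q_j(z)\, \tilde r_j,
\end{aligned}
% where R(z) is the usual stability polynomial and the Q_j(z) are the internal
% stability polynomials; large |Q_j(z)| means stage errors are amplified in the
% step output.
```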

  20. Concealed identification symbols and nondestructive determination of the identification symbols

    Science.gov (United States)

    Nance, Thomas A.; Gibbs, Kenneth M.

    2014-09-16

    The concealing of one or more identification symbols into a target object and the subsequent determination or reading of such symbols through non-destructive testing is described. The symbols can be concealed in a manner so that they are not visible to the human eye and/or cannot be readily revealed to the human eye without damage or destruction of the target object. The identification symbols can be determined after concealment by e.g., the compilation of multiple X-ray images. As such, the present invention can also provide e.g., a deterrent to theft and the recovery of lost or stolen objects.

  1. The impact of transmission errors on progressive 720 lines HDTV coded with H.264

    Science.gov (United States)

    Brunnström, Kjell; Stålenbring, Daniel; Pettersson, Martin; Gustafsson, Jörgen

    2010-02-01

TV delivered over networks based on the Internet Protocol, i.e., IPTV, is moving towards high definition (HDTV). There has been quite a lot of work on how HDTV is affected by different codecs and bitrates, but the impact of transmission errors over IP networks has been studied less. This study focused on the H.264-encoded 1280x720 progressive HDTV format and compared three concealment methods at different packet loss rates: one included in a proprietary decoder, one that is part of FFMPEG, and freezing of different durations. The aim is to simulate what IPTV set-top boxes typically do when encountering packet loss. Another aim is to study whether presenting the video upscaled on the full HDTV screen, or pixel-mapped in a smaller area in the center of the screen, has an effect on quality. The results show that there were differences between the packet loss concealment methods in FFMPEG and in the proprietary decoder. Freezing seemed to have a similar effect to that reported before. At low rates of transmission errors the coding impairments have an impact on quality, but at higher error rates they do not, since they become overshadowed by the transmission errors. An interesting effect was discovered in which the higher-bitrate videos go from having higher quality at low packet loss rates to having lower quality than the lower-bitrate videos at higher packet loss rates. The difference between ways of presenting the video, i.e., upscaled or not upscaled, was significant at the 95% level, but only marginally.

  2. Assessing Visibility of Individual Transmission Errors in Networked Video

    DEFF Research Database (Denmark)

    Korhonen, Jari; Mantel, Claire

    2016-01-01

    could benefit from information about subjective visibility of individual packet losses; for example, computational resources could be directed more efficiently to unequal error protection and concealment by focusing in the visually most disturbing artifacts. In this paper, we present a novel subjective...... methodology for packet loss artifact detection by tapping a touchscreen where a defect is observed. To validate the proposed methodology, the results of a pilot study are presented and analyzed. According to the results, the proposed method can be used to derive qualitatively and statistically meaningful data...... on the subjective visibility of individual packet loss artifacts....

  3. Predictive value of ADAMTS-13 on concealed chronic renal failure in COPD patients

    Science.gov (United States)

    Zeng, Mian; Chen, Qingui; Liang, Wenjie; He, Wanmei; Zheng, Haichong; Huang, Chunrong

    2017-01-01

Background: Impaired renal function is often neglected in COPD patients. Considering that COPD patients usually have an ongoing prothrombotic state and a systemic inflammation status, we investigated the association among them and explored the predictive value of a disintegrin and metalloproteinase with a thrombospondin type 1 motif, member 13 (ADAMTS-13), for concealed chronic renal failure (CRF) in COPD patients. Methods: COPD patients were recruited from the First Affiliated Hospital of Sun Yat-Sen University between January 2015 and December 2016. Controls were selected from contemporaneous hospitalized patients without COPD and matched by age and gender at a ratio of 1:1. Estimated glomerular filtration rate (eGFR) was calculated using the Chronic Kidney Disease Epidemiology Collaboration formula, and all subjects were categorized as having normal renal function (eGFR ≥60 mL/min/1.73 m²) or concealed CRF (normal serum creatinine but eGFR <60 mL/min/1.73 m²). Independent correlates of concealed CRF were investigated by logistic regression analysis, and receiver operating characteristic (ROC) curves were used to determine the predictive value of ADAMTS-13. Results: In total, 106 COPD and 106 non-COPD patients were recruited, and the incidences of concealed CRF were 19.81% and 7.55%, respectively. ADAMTS-13 (odds ratio [OR] =0.858, 95% CI =0.795-0.926), D-dimer (OR =1.095, 95% CI =1.027-1.169), and C-reactive protein (OR =1.252, 95% CI =1.058-1.480) were significantly associated with concealed CRF. Sensitivity and specificity at an ADAMTS-13 cutoff of 318.72 ng/mL were 100% and 81.2%, respectively. The area under the ROC curve was 0.959. Conclusion: The prothrombotic state and systemic inflammation status might help explain the high incidence of concealed CRF in COPD, and plasma ADAMTS-13 levels may serve as a strong predictor. PMID:29255356

  4. Concealing their communication: exploring psychosocial predictors of young drivers' intentions and engagement in concealed texting.

    Science.gov (United States)

    Gauld, Cassandra S; Lewis, Ioni; White, Katherine M

    2014-01-01

Making a conscious effort to hide the fact that you are texting while driving (i.e., concealed texting) is a deliberate and risky behaviour involving attention diverted away from the road. As the most frequent users of text messaging services and mobile phones while driving, young people appear at heightened risk of crashing from engaging in this behaviour. This study investigated the phenomenon of concealed texting while driving, and utilised an extended Theory of Planned Behaviour (TPB) including the additional predictors of moral norm, mobile phone involvement, and anticipated regret to predict young drivers' intentions and subsequent behaviour. Participants (n=171) were aged 17-25 years, owned a mobile phone, and had a current driver's licence. Participants completed a questionnaire measuring their intention to conceal texting while driving, and a follow-up questionnaire a week later to report their behavioural engagement. The results of hierarchical multiple regression analyses showed overall support for the predictive utility of the TPB with the standard constructs accounting for 69% of variance in drivers' intentions, and the extended predictors contributing an additional 6% of variance in intentions over and above the standard constructs. Attitude, subjective norm, PBC, moral norm, and mobile phone involvement emerged as significant predictors of intentions; and intention was the only significant predictor of drivers' self-reported behaviour. These constructs can provide insight into key focal points for countermeasures including advertising and other public education strategies aimed at influencing young drivers to reconsider their engagement in this risky behaviour.

  5. Organizational Concealment: An Incentive of Reducing the Responsibility

    OpenAIRE

    Tajika, Tomoya

    2017-01-01

We study workers' incentives for reporting problems within an OLG organization consisting of a subordinate and a manager. The subordinate is responsible for reporting a problem, and the manager is responsible for solving the reported problem. The subordinate has an incentive to conceal a detected problem: if he reports it but the manager is too lazy to solve it, the responsibility is transferred to the subordinate, since he becomes a manager in the next period. We show that conceal...

  6. Open-area concealed-weapon detection system

    Science.gov (United States)

    Pati, P.; Mather, P.

    2011-06-01

Concealed Weapon Detection (CWD) has become a significant challenge for present-day security needs; individuals carrying weapons into airplanes, schools, and secured establishments are a threat to public security. Although controlled screening of people for concealed weapons is employed in many establishments, procedures and equipment are designed to work in restricted environments such as airport passport control, military checkpoints, and hospital, school and university entrances. Furthermore, screening systems do not effectively distinguish between threat and non-threat metal objects, leading to a high rate of false alarms that can become a liability to the daily operations of establishments. Therefore, the design and development of a new CWD system that operates in a large open-area environment with large numbers of people, with fewer false alarms and improved location accuracy, is essential.

  7. Regulatory focus moderates the social performance of individuals who conceal a stigmatized identity.

    Science.gov (United States)

    Newheiser, Anna-Kaisa; Barreto, Manuela; Ellemers, Naomi; Derks, Belle; Scheepers, Daan

    2015-12-01

    People often choose to hide a stigmatized identity to avoid bias. However, hiding stigma can disrupt social interactions. We considered whether regulatory focus qualifies the social effects of hiding stigma by examining interactions in which stigmatized participants concealed a devalued identity from non-stigmatized partners. In the Prevention Focus condition, stigmatized participants were instructed to prevent a negative impression by concealing the identity; in the Promotion Focus condition, they were instructed to promote a positive impression by concealing the identity; in the Control condition, they were simply asked to conceal the identity. Both non-stigmatized partners and independent raters rated the interactions more positively in the Promotion Focus condition. Thus, promotion focus is interpersonally beneficial for individuals who conceal a devalued identity. © 2015 The British Psychological Society.

  8. New decoding methods of interleaved burst error-correcting codes

    Science.gov (United States)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes that constitute the interleaved code, is presented. This method makes it possible to realize a high burst-error-correcting capability with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of the subcodes, which are interleaved m-fold burst-error-detecting codes, this second method corrects erasure errors in each subcode as well as m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
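
The interleaving idea that both methods rely on, spreading a channel burst across several subcodes so that each subcode sees only isolated, correctable errors, can be sketched as below; this shows only a plain block interleaver, not the syndrome-correlation decoding of the paper.

```python
def interleave(symbols, depth):
    """Block interleaver: write the symbols row-wise into `depth` rows,
    then read them out column-wise."""
    pad = (-len(symbols)) % depth
    s = list(symbols) + [0] * pad
    width = len(s) // depth
    rows = [s[r * width:(r + 1) * width] for r in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

def deinterleave(symbols, depth):
    """Inverse of interleave: a channel burst is spread across the rows
    (subcodes) after de-interleaving."""
    width = len(symbols) // depth
    rows = [[0] * width for _ in range(depth)]
    for i, v in enumerate(symbols):
        rows[i % depth][i // depth] = v
    return [v for row in rows for v in row]

data = list(range(20))
tx = interleave(data, depth=4)
tx[5:9] = [-1, -1, -1, -1]          # a length-4 burst on the channel
rx = deinterleave(tx, depth=4)
print(rx)                           # the -1 errors land in four different subcode rows
```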

  9. Correction of concealed penis with preservation of the prepuce.

    Science.gov (United States)

    Valioulis, I A; Kallergis, I C; Ioannidou, D C

    2015-10-01

    By definition, congenital concealed penis presents at birth. Children are usually referred to physicians because of parental anxiety caused by their child's penile size. Several surgical procedures have been described to treat this condition, but its correction is still technically challenging. The present study reports a simple surgical approach, which allows preservation of the prepuce. During the last 6 years, 18 children with concealed penis (according to the classification by Maizels et al.) have been treated in the present department (mean age 4.5 years, range 3-12 years). Patients with other conditions that caused buried penis were excluded from the study. The operation was performed through a longitudinal midline ventral incision, which was extended hemi-circumferentially at the penile base. The dysgenetic dartos was identified and its distal part was resected. Dissection of the corpora cavernosa was carried down to the suspensory ligament, which was sectioned. Buck's fascia was fixed to Scarpa's fascia and shaft skin was approximated in the midline. Penoscrotal angle was fashioned by Z-plasty or V-Y plasty. The median follow-up was 24 months (range 8-36). The postoperative edema was mild and resolved within a week. All children had good to excellent outcomes. The median pre-operative to postoperative difference in penile length in the flaccid state was 2.6 cm (range 2.0-3.5). No serious complications or recurrent penile retraction were noted. Recent literature mostly suggests that concealed penis is due to deficient proximal attachments of dysgenetic dartos. Consequences of this include: difficulties in maintaining proper hygiene, balanitis, voiding difficulties with prepuce ballooning and urine spraying, and embarrassment among peers. Surgical treatment for congenital concealed penis is warranted in children aged 3 years or older. The basis of the technique is the perception that in boys with congenital concealed penis, the penile integuments are normal

  10. Compressed Domain Packet Loss Concealment of Sinusoidally Coded Speech

    DEFF Research Database (Denmark)

    Rødbro, Christoffer A.; Christensen, Mads Græsbøll; Andersen, Søren Vang

    2003-01-01

    We consider the problem of packet loss concealment for voice over IP (VoIP). The speech signal is compressed at the transmitter using a sinusoidal coding scheme working at 8 kbit/s. At the receiver, packet loss concealment is carried out working directly on the quantized sinusoidal parameters, based on time-scaling of the packets surrounding the missing ones. Subjective listening tests show promising results indicating the potential of sinusoidal speech coding for VoIP.
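
    As a rough illustration of the receiver-side idea, a lost frame can be synthesized from sinusoids whose parameters are taken from the packets surrounding the gap. The sketch below uses plain linear interpolation of amplitudes and frequencies rather than the time-scaling scheme of the paper, and the frame length, sample rate and parameter format are assumptions.

    ```python
    import numpy as np

    # Illustrative stand-in for sinusoidal packet-loss concealment: the lost frame is
    # synthesized from sinusoids whose amplitudes and frequencies are interpolated
    # between the last good frame and the next good frame. (The cited paper instead
    # time-scales the surrounding packets; frame length and sample rate are assumed.)

    FS = 8000          # sample rate [Hz]
    FRAME = 160        # 20 ms frame

    def synth_lost_frame(prev_params, next_params):
        """prev_params/next_params: lists of (amplitude, freq_hz, phase) triples."""
        frame = np.zeros(FRAME)
        for (a0, f0, ph0), (a1, f1, _) in zip(prev_params, next_params):
            a = np.linspace(a0, a1, FRAME)          # linear amplitude track
            f = np.linspace(f0, f1, FRAME)          # linear frequency track
            phase = ph0 + 2 * np.pi * np.cumsum(f) / FS
            frame += a * np.cos(phase)
        return frame

    # Example: one harmonic drifting from 200 Hz to 210 Hz across the gap.
    prev = [(0.5, 200.0, 0.0)]
    nxt = [(0.4, 210.0, 0.0)]
    concealed = synth_lost_frame(prev, nxt)
    print(concealed.shape)   # (160,)
    ```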

  11. Discretization vs. Rounding Error in Euler's Method

    Science.gov (United States)

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
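
    The trade-off is easy to reproduce numerically. A minimal sketch, assuming single precision to make rounding visible, applies Euler's method to y' = y, y(0) = 1 and reports the error at t = 1: the error first shrinks with the stepsize (discretization dominates) and then grows again as the accumulated rounding error from the many steps takes over.

    ```python
    import numpy as np

    def euler_error(h, dtype):
        """Total error of Euler's method for y' = y, y(0) = 1, evaluated at t = 1."""
        n = int(round(1.0 / h))
        y = dtype(1.0)
        one_plus_h = dtype(1.0) + dtype(h)
        for _ in range(n):
            y = y * one_plus_h          # Euler step for y' = y
        return abs(float(y) - np.e)

    # In float32 the error drops with h (discretization dominates), then grows
    # again for very small h as rounding errors from the many steps accumulate.
    for h in [1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6]:
        print(f"h = {h:.0e}   error = {euler_error(h, np.float32):.2e}")
    ```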

  12. Measurement errors in voice-key naming latency for Hiragana.

    Science.gov (United States)

    Yamada, Jun; Tamaoka, Katsuo

    2003-12-01

    This study makes explicit the limitations and possibilities of voice-key naming latency research on single hiragana symbols (a Japanese syllabic script) by examining three sets of voice-key naming data against Sakuma, Fushimi, and Tatsumi's 1997 speech-analyzer voice-waveform data. Analysis showed that voice-key measurement errors can be substantial in standard procedures as they may conceal the true effects of significant variables involved in hiragana-naming behavior. While one can avoid voice-key measurement errors to some extent by applying Sakuma, et al.'s deltas and by excluding initial phonemes which induce measurement errors, such errors may be ignored when test items are words and other higher-level linguistic materials.

  13. The association between concealing emotions at work and medical utilization in Korea.

    Science.gov (United States)

    Seok, Hongdeok; Yoon, Jin-Ha; Lee, Wanhyung; Lee, June-Hee; Jung, Pil Kyun; Kim, Inah; Won, Jong-Uk; Roh, Jaehoon

    2014-01-01

    We aimed to investigate the association between concealing emotions at work and medical utilization. Data from the 2007-2009 4th Korea National Health and Nutrition Examination Survey (KNHANES IV) were used; 7,094 participants (3,837 males, 3,257 females) aged between 20 and 54 who were economically active and completed all necessary questionnaire items were included. Odds ratios (ORs) and 95% confidence intervals (95% CIs) for differences in hospitalization, outpatient visits, and pharmaceutical drug use between those who concealed their emotions and those who did not were investigated using logistic regression models with and without gender stratification. Among those who concealed their emotions (n = 2,763), 47.4% were females and 50.1% had chronic disease. In addition, 9.7% of the concealing emotions group had been hospitalized within the last year, 24.8% had been outpatients in the last two weeks, and 28.3% had used pharmaceutical drugs in the last two weeks. All ORs represent the odds of belonging to the concealing emotions group over the non-concealing emotions group. After adjustment for individual, occupational, socioeconomic, and disease factors, the adjusted ORs (95% CIs) for hospitalization were 1.29 (1.08-1.53) in the total population, 1.25 (0.98-1.60) in males, and 1.30 (1.02-1.66) in females; for outpatient visits, 1.15 (1.02-1.29), 1.05 (0.88-1.24), and 1.25 (1.06-1.47), respectively; and for pharmaceutical drug use, 1.12 (1.01-1.25), 1.08 (0.92-1.27), and 1.14 (0.98-1.33). Those who concealed their emotions at work were more likely to use medical services. Moreover, the health effects of concealing emotions at work might be more detrimental in women than in men.

  14. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we will limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications will be described in this paper.

  15. Detecting concealed information in less than a second: response latency-based measures

    NARCIS (Netherlands)

    Verschuere, B.; de Houwer, J.; Verschuere, B.; Ben-Shakhar, G.; Meijer, E.

    2011-01-01

    Concealed information can be accurately assessed with physiological measures. To overcome the practical limitations of physiological measures, an assessment using response latencies has been proposed. At first sight, research findings on response latency based concealed information tests seem

  16. Concealing emotions: nurses' experiences with induced abortion care.

    Science.gov (United States)

    Yang, Cheng-Fang; Che, Hui-Lian; Hsieh, Hsin-Wan; Wu, Shu-Mei

    2016-05-01

    To explore the experiences of nurses involved with induced abortion care in the delivery room in Taiwan. Induced abortion has emotional, ethical and legal facets. In Taiwan, several studies have addressed the ethical issues, abortion methods and women's experiences with abortion care. Although abortion rates have increased, there has been insufficient attention on the views and experiences of nurses working in the delivery room who are involved with induced abortion care. Qualitative, semistructured interviews. This study used a purposive sampling method. In total, 22 nurses involved with induced abortion care were selected. Semistructured interviews with guidelines were conducted, and the content analysis method was used to analyse the data. Our study identified one main theme and five associated subthemes: concealing emotions, which included the inability to refuse, contradictory emotions, mental unease, respect for life and self-protection. This is the first specific qualitative study performed in Taiwan to explore nurses' experiences, and this study also sought to address the concealing of emotions by nurses when they perform induced abortion care, which causes moral distress and creates ethical dilemmas. The findings of this study showed that social-cultural beliefs profoundly influence nurses' values and that the rights of nurses are neglected. The profession should promote small-group and case-study discussions, the clarification of values and reflective thinking among nurses. Continued professional education that provides stress relief will allow nurses to develop self-healing and self-care behaviours, which will enable them to overcome the fear of death while strengthening pregnancy termination counselling, leading to better quality professional care. © 2016 John Wiley & Sons Ltd.

  17. Bayesian interpolation in a dynamic sinusoidal model with application to packet-loss concealment

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Cemgil, Ali Taylan

    2010-01-01

    a Bayesian inference scheme for the missing observations, hidden states and model parameters of the dynamic model. The inference scheme is based on a Markov chain Monte Carlo method known as the Gibbs sampler. We illustrate the performance of the inference scheme in the application of packet-loss concealment...

  18. When concealed handgun licensees break bad: criminal convictions of concealed handgun licensees in Texas, 2001-2009.

    Science.gov (United States)

    Phillips, Charles D; Nwaiwu, Obioma; McMaughan Moudouni, Darcy K; Edwards, Rachel; Lin, Szu-hsuan

    2013-01-01

    We explored differences in criminal convictions between holders and nonholders of a concealed handgun license (CHL) in Texas. The Texas Department of Public Safety (DPS) provides annual data on criminal convictions of holders and nonholders of CHLs. We used 2001 to 2009 DPS data to investigate the differences in the distribution of convictions for these 2 groups across 9 types of criminal offenses. We calculated z scores for the differences in the types of crimes for which CHL holders and nonholders were convicted. CHL holders were much less likely than nonlicensees to be convicted of crimes. Most nonholder convictions involved higher-prevalence crimes (burglary, robbery, or simple assault). CHL holders' convictions were more likely to involve lower-prevalence crimes, such as sexual offenses, gun offenses, or offenses involving a death. Our results imply that expanding the settings in which concealed carry is permitted may increase the risk of specific types of crimes, some quite serious in those settings. These increased risks may be relatively small. Nonetheless, policymakers should consider these risks when contemplating reducing the scope of gun-free zones.

  19. Induction detection of concealed bulk banknotes

    International Nuclear Information System (INIS)

    Fuller, Christopher; Chen, Antao

    2011-01-01

    Bulk cash smuggling is a serious issue that has grown in volume in recent years. By building on the magnetic characteristics of paper currency, induction sensing is found to be capable of quickly detecting large masses of banknotes. The results show that this method is effective in detecting bulk cash through concealing materials such as plastics, cardboards, fabrics and aluminum foil. The significant difference in the observed phase between the received signals caused by conducting materials and ferrite compounds, found in banknotes, provides a good indication that this process can overcome the interference by metal objects in a real sensing application. This identification strategy has the potential to not only detect the presence of banknotes, but also the number, while still eliminating false positives caused by metal objects

  20. Psychopathy and the detection of concealed information

    NARCIS (Netherlands)

    Verschuere, B.; Verschuere, B.; Ben-Shakhar, G.; Meijer, E.

    2011-01-01

    The most common application of concealed information detection is crime knowledge assessment in crime suspects. The validity of this application has mainly been investigated in healthy subjects. Criminals may differ in important aspects from healthy subjects. Psychopathy, for example, is quite

  1. Relationship between concealment of emotions at work and musculoskeletal symptoms: results from the third Korean Working Conditions Survey.

    Science.gov (United States)

    Jung, Kyungyong; Kim, Dae Hwan; Ryu, Ji Young

    2018-05-11

    In this study, we explored the relationship between concealing emotions at work and musculoskeletal symptoms in Korean workers using data from a national, population-based survey. Data were obtained from the third Korean Working Conditions Survey in 2011. We investigated the prevalence of three musculoskeletal symptoms ("back pain", "pain in the upper extremities", and "pain in the lower extremities"). Multiple logistic regression analysis was also performed to determine odds ratios (ORs) for musculoskeletal symptoms according to concealing emotions at work, adjusting for socioeconomic factors. In both sexes, the emotion-concealing group showed a significantly higher prevalence of "pain in the upper extremities" and "pain in the lower extremities" than the non-emotion-concealing group. For back pain, male - but not female - workers who concealed their emotions showed a higher prevalence than their non-emotion-concealing counterparts; the difference was statistically significant. Adjusted ORs for musculoskeletal symptoms (excluding "back pain" for female workers) in the emotion-concealing group were significantly higher. Our study suggests that concealment of emotions is closely associated with musculoskeletal symptoms, and that the work environment should take into consideration not only workers' physical working conditions but also their emotional effort, including the concealment of emotions at work.

  2. Statistical error estimation of the Feynman-α method using the bootstrap method

    International Nuclear Information System (INIS)

    Endo, Tomohiro; Yamamoto, Akio; Yagi, Takahiro; Pyeon, Cheol Ho

    2016-01-01

    Applicability of the bootstrap method is investigated to estimate the statistical error of the Feynman-α method, which is one of the subcritical measurement techniques based on reactor noise analysis. In the Feynman-α method, the statistical error can be simply estimated from multiple measurements of reactor noise; however, this requires additional measurement time to repeat the measurements. Using a resampling technique called the 'bootstrap method', the standard deviation and confidence interval of measurement results obtained by the Feynman-α method can be estimated as the statistical error using only a single measurement of reactor noise. In order to validate our proposed technique, we carried out a passive measurement of reactor noise without any external source, i.e. with only the inherent neutron source from spontaneous fission and (α,n) reactions in nuclear fuels, at the Kyoto University Criticality Assembly. Through the actual measurement, it is confirmed that the bootstrap method is applicable for approximately estimating the statistical error of measurement results obtained by the Feynman-α method. (author)
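
    The resampling step itself is generic and can be sketched as below. The gated counts are synthetic placeholders and the variance-to-mean statistic stands in for the full Feynman-α fit; moreover, a plain i.i.d. bootstrap is shown, whereas real reactor-noise gates are correlated, so the authors' scheme is necessarily more careful.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder data: counts per gate from a single noise measurement.
    # (Real Feynman-alpha data show excess variance; Poisson is just a stand-in.)
    counts = rng.poisson(lam=12.0, size=2000)

    def feynman_Y(c):
        """Feynman Y = variance-to-mean ratio minus one for gated counts."""
        return np.var(c, ddof=1) / np.mean(c) - 1.0

    # Bootstrap: resample the gates with replacement and recompute the statistic.
    B = 2000
    replicates = np.array([
        feynman_Y(rng.choice(counts, size=counts.size, replace=True))
        for _ in range(B)
    ])

    estimate = feynman_Y(counts)
    std_error = replicates.std(ddof=1)
    ci_low, ci_high = np.percentile(replicates, [2.5, 97.5])
    print(f"Y = {estimate:.4f} +/- {std_error:.4f}  (95% CI: {ci_low:.4f}, {ci_high:.4f})")
    ```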

  3. A Concealed Information Test with multimodal measurement.

    Science.gov (United States)

    Ambach, Wolfgang; Bursch, Stephanie; Stark, Rudolf; Vaitl, Dieter

    2010-03-01

    A Concealed Information Test (CIT) investigates differential physiological responses to deed-related (probe) vs. irrelevant items. The present study focused on the detection of concealed information using simultaneous recordings of autonomic and brain electrical measures. As a secondary issue, verbal and pictorial presentations were compared with respect to their influence on the recorded measures. Thirty-one participants underwent a mock-crime scenario with a combined verbal and pictorial presentation of nine items. The subsequent CIT, designed with respect to event-related potential (ERP) measurement, used a 3-3.5s interstimulus interval. The item presentation modality, i.e. pictures or written words, was varied between subjects; no response was required from the participants. In addition to electroencephalogram (EEG), electrodermal activity (EDA), electrocardiogram (ECG), respiratory activity, and finger plethysmogram were recorded. A significant probe-vs.-irrelevant effect was found for each of the measures. Compared to sole ERP measurement, the combination of ERP and EDA yielded incremental information for detecting concealed information, although EDA per se did not reach the predictive value known from studies primarily designed for peripheral physiological measurement. Presentation modality influenced the detection accuracy of neither the autonomic nor the EEG measures; this underpins the equivalence of verbal and pictorial item presentation in a CIT, regardless of the physiological measures recorded. Future studies should further clarify whether the incremental validity observed in the present study reflects a differential sensitivity of ERP and EDA to different sub-processes in a CIT. Copyright 2009 Elsevier B.V. All rights reserved.

  4. Effects of modified penoplasty for concealed penis in children.

    Science.gov (United States)

    Chen, Chao; Li, Ning; Luo, Yi-Ge; Wang, Hong; Tang, Xian-Ming; Chen, Jia-Bo; Dong, Chun-Qiang; Liu, Qiang; Dong, Kun; Su, Cheng; Yang, Ti-Quan

    2016-10-01

    To evaluate the effect of modified penoplasty in the management of concealed penis. We retrospectively reviewed 96 consecutive patients with concealed penis that had been surgically corrected between July 2013 and July 2015. All patients underwent modified Shiraki phalloplasty. All patients were scheduled for regular follow-up at 1, 3, and 6 months after the surgery. Data on the patients' age, operative time, postoperative complications, and parents' satisfaction grade were collected and analyzed. The mean follow-up period was 17.4 months (range 7-31 months). The mean operative time was 63.2 ± 8.7 min. The mean perpendicular penile length was 1.89 ± 0.77 cm preoperatively and 4.42 ± 0.87 cm postoperatively, with an improved mean length of 2.5 ± 0.68 cm in the flaccid state postoperatively (p penis can achieve maximum utilization of the prepuce to assure coverage of the exposed penile shaft. It has fewer complications, achieving marked aesthetic and functional improvement. It is a relatively ideal means for treating concealed penis.

  5. Error Parsing: An alternative method of implementing social judgment theory

    OpenAIRE

    Crystal C. Hall; Daniel M. Oppenheimer

    2015-01-01

    We present a novel method of judgment analysis called Error Parsing, based upon an alternative method of implementing Social Judgment Theory (SJT). SJT and Error Parsing both posit the same three components of error in human judgment: error due to noise, error due to cue weighting, and error due to inconsistency. In that sense, the broad theory and framework are the same. However, SJT and Error Parsing were developed to answer different questions, and thus use different m...

  6. Detection and identification of concealed weapons using matrix pencil

    Science.gov (United States)

    Adve, Raviraj S.; Thayaparan, Thayananthan

    2011-06-01

    The detection and identification of concealed weapons is an extremely hard problem due to the weak signature of the target buried within the much stronger signal from the human body. This paper furthers the automatic detection and identification of concealed weapons by proposing the use of an effective approach to obtain the resonant frequencies in a measurement. The technique, based on Matrix Pencil, a scheme for model-based parameter estimation, also provides amplitude information, thereby providing a level of confidence in the results. Of specific interest is the fact that Matrix Pencil is based on a singular value decomposition, making the scheme robust against noise.
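
    For orientation, the core of a Matrix Pencil estimator can be sketched on a noiseless toy signal: build two shifted Hankel-structured matrices from the samples and take the eigenvalues of the pseudo-inverse product, which are the signal poles and hence the resonant frequencies. The signal model, sampling rate and pencil parameter below are assumptions, and the SVD-based noise filtering mentioned in the abstract is reduced to a simple rank truncation.

    ```python
    import numpy as np

    # Matrix Pencil on a noiseless toy "resonance" signal (two damped cosines);
    # frequencies, damping and pencil parameter are illustrative assumptions.
    N, dt = 200, 1e-3
    t = np.arange(N) * dt
    signal = (1.0 * np.exp(-30 * t) * np.cos(2 * np.pi * 90 * t)
              + 0.6 * np.exp(-50 * t) * np.cos(2 * np.pi * 160 * t))

    L = N // 3                                   # pencil parameter
    Y = np.array([signal[m:m + L + 1] for m in range(N - L)])
    Y1, Y2 = Y[:, :-1], Y[:, 1:]                 # shifted Hankel-structured matrices

    # Nonzero eigenvalues of pinv(Y1) @ Y2 are the poles z_i = exp(s_i * dt);
    # the rcond cutoff truncates to the signal subspace.
    eig = np.linalg.eigvals(np.linalg.pinv(Y1, rcond=1e-8) @ Y2)
    poles = eig[np.abs(eig) > 0.5]               # discard the numerically zero rest
    freqs = np.abs(np.angle(poles)) / (2 * np.pi * dt)
    print(np.unique(np.round(freqs)))            # ~[ 90. 160.] Hz
    ```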

  7. Error evaluation method for material accountancy measurement. Evaluation of random and systematic errors based on material accountancy data

    International Nuclear Information System (INIS)

    Nidaira, Kazuo

    2008-01-01

    The International Target Values (ITV) give random and systematic measurement uncertainty components as a reference for routinely achievable measurement quality in accountancy measurement. The measurement uncertainty, called error henceforth, needs to be periodically evaluated and checked against the ITV for consistency, as the error varies according to measurement methods, instruments, operators, certified reference samples, frequency of calibration, and so on. In this paper an error evaluation method was developed with focus on (1) clearly specifying the error calculation model, (2) always obtaining positive random and systematic error variances, (3) obtaining the probability density distribution of an error variance, and (4) confirming the evaluation method by simulation. In addition, the method was demonstrated by applying it to real data. (author)

  8. Self-Concealment Mediates the Relationship Between Perfectionism and Attitudes Toward Seeking Psychological Help Among Adolescents.

    Science.gov (United States)

    Abdollahi, Abbas; Hosseinian, Simin; Beh-Pajooh, Ahmad; Carlbring, Per

    2017-01-01

    One of the biggest barriers in treating adolescents with mental health problems is their refusing to seek psychological help. This study was designed to examine the relationships between two forms of perfectionism, self-concealment and attitudes toward seeking psychological help and to test the mediating role of self-concealment in the relationship between perfectionism and attitudes toward seeking psychological help among Malaysian high school students. The participants were 475 Malaysian high school students from four high schools in Kuala Lumpur, Malaysia. Structural equation modelling results indicated that high school students with high levels of socially prescribed perfectionism, high levels of self-concealment, and low levels of self-oriented perfectionism reported negative attitudes toward seeking psychological help. Bootstrapping analysis showed that self-concealment emerged as a significant, full mediator in the link between socially prescribed perfectionism and attitudes toward seeking psychological help. Moderated mediation analysis also examined whether the results generalized across men and women. The results revealed that male students with socially prescribed perfectionism are more likely to engage in self-concealment, which in turn, leads to negative attitudes toward seeking psychological help more than their female counterparts. The results suggested that students high in socially prescribed perfectionism were more likely to engage in self-concealment and be less inclined to seek psychological help.

  9. Towards automatic global error control: Computable weak error expansion for the tau-leap method

    KAUST Repository

    Karlsson, Peer Jesper; Tempone, Raul

    2011-01-01

    This work develops novel error expansions with computable leading order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms, a fundamental tool for numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method, or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie Algorithm or the Stochastic Simulation Algorithm. Two types of estimates are presented: an a priori estimate for the relative error that gives a comparison between the work for the two methods depending on the propensity regime, and an a posteriori estimate with a computable leading order term. © de Gruyter 2011.
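
    For context, the discretization under study can be illustrated with a bare-bones tau-leap simulation of a single-species birth-death process; the rates, leap size and comparison below are arbitrary choices, and the paper's weak-error expansion itself is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Tau-leap simulation of a birth-death process:
    #   birth  X -> X+1 with propensity b*X
    #   death  X -> X-1 with propensity d*X
    # Over a leap of length tau, each reaction fires Poisson(propensity * tau) times.

    def tau_leap(x0, b, d, tau, t_end):
        x, t = x0, 0.0
        while t < t_end:
            births = rng.poisson(b * x * tau)
            deaths = rng.poisson(d * x * tau)
            x = max(x + births - deaths, 0)
            t += tau
        return x

    samples = [tau_leap(x0=100, b=0.9, d=1.0, tau=0.01, t_end=5.0) for _ in range(1000)]
    print(np.mean(samples))   # compare with the exact mean 100*exp((b-d)*5) ~ 60.65
    ```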

  10. College Students' Reasons for Concealing Suicidal Ideation

    Science.gov (United States)

    Burton Denmark, Adryon; Hess, Elaine; Becker, Martin Swanbrow

    2012-01-01

    Self-reported reasons for concealing suicidal ideation were explored using data from a national survey of undergraduate and graduate students: 558 students indicated that they seriously considered attempting suicide during the previous year and did not tell anyone about their suicidal thoughts. Content analysis of students' qualitative responses…

  11. Anxiety and Related Disorders and Concealment in Sexual Minority Young Adults.

    Science.gov (United States)

    Cohen, Jeffrey M; Blasey, Christine; Barr Taylor, C; Weiss, Brandon J; Newman, Michelle G

    2016-01-01

    Sexual minorities face greater exposure to discrimination and rejection than heterosexuals. Given these threats, sexual minorities may engage in sexual orientation concealment in order to avoid danger. This social stigma and minority stress places sexual minorities at risk for anxiety and related disorders. Given that three fourths of anxiety disorder onset occurs before the age of 24, the current study investigated the symptoms of generalized anxiety disorder, social phobia, panic disorder, posttraumatic stress disorder, and depression in sexual minority young adults relative to their heterosexual peers. Secondarily, the study investigated sexual orientation concealment as a predictor of anxiety and related disorders. A sample of 157 sexual minority and 157 heterosexual young adults matched on age and gender completed self-report measures of the aforementioned disorders, and indicated their level of sexual orientation concealment. Results revealed that sexual minority young adults reported greater symptoms relative to heterosexuals across all outcome measures. There were no interactions between sexual minority status and gender, however, women had higher symptoms across all disorders. Sexual minority young women appeared to be at the most risk for clinical levels of anxiety and related disorders. In addition, concealment of sexual orientation significantly predicted symptoms of social phobia. Implications are offered for the cognitive and behavioral treatment of anxiety and related disorders in this population. Copyright © 2015. Published by Elsevier Ltd.

  12. Rating Emotion Communication: Display and Concealment as Effects of Culture, Gender, Emotion Type, and Relationship

    Directory of Open Access Journals (Sweden)

    Arne Vikan

    2009-01-01

    Students from a collectivistic (Brazilian, n = 401) and an individualistic (Norwegian, n = 418) culture rated their ability to display and conceal anger, sadness, and anxiety in relation to immediate family, partner, friends, and "other persons." Norwegians showed higher display ratings for anger and sadness, and higher concealment ratings for anger and anxiety. Display ratings were much higher, and concealment ratings much lower, in relation to close persons than in relation to "other persons." A culture x relationship interaction was that Brazilians' ratings suggested more emotional openness to friends than to family and partner, whereas Norwegians showed the inverse pattern. Gender differences supported previous research by showing higher display and lower concealment ratings, and less differentiation between relationships, by females.

  13. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
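
    The structure of such a simulation is straightforward to reproduce: scatter events of known duration across an observation period, score each interval under the three rules, and compare the estimates with the true proportion of time the behavior occurred. The session length, interval size and event parameters below are arbitrary; the typical outcome is that partial-interval recording overestimates, whole-interval recording underestimates, and momentary time sampling is roughly unbiased.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    SESSION = 600.0       # observation period [s]
    INTERVAL = 10.0       # interval duration [s]
    N_EVENTS = 20
    EVENT_DUR = 8.0       # duration of each target event [s]

    # Randomly placed, possibly overlapping events on a fine-grained timeline.
    starts = rng.uniform(0, SESSION - EVENT_DUR, N_EVENTS)
    grid = np.linspace(0, SESSION, int(SESSION * 100), endpoint=False)
    occurring = np.zeros_like(grid, dtype=bool)
    for s in starts:
        occurring |= (grid >= s) & (grid < s + EVENT_DUR)

    true_proportion = occurring.mean()

    momentary, partial, whole = [], [], []
    for e in np.arange(0, SESSION, INTERVAL):
        in_interval = occurring[(grid >= e) & (grid < e + INTERVAL)]
        momentary.append(in_interval[-1])        # sample at the end of the interval
        partial.append(in_interval.any())        # scored if behavior occurs at all
        whole.append(in_interval.all())          # scored only if it fills the interval

    for name, scores in [("momentary", momentary), ("partial", partial), ("whole", whole)]:
        estimate = np.mean(scores)
        print(f"{name:9s}  estimate = {estimate:.3f}  error = {estimate - true_proportion:+.3f}")
    ```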

  14. A straightness error measurement method matched new generation GPS

    International Nuclear Information System (INIS)

    Zhang, X B; Lu, H; Jiang, X Q; Li, Z

    2005-01-01

    The axis of the non-diffracting beam produced by an axicon is very stable and can be adopted as the datum line to measure spatial straightness error over a continuous working distance, which may be short, medium or long. Through combining the non-diffracting beam datum line with an LVDT displacement detector, a new straightness error measurement method is developed. Because the non-diffracting beam datum line amends the straightness error gauged by the LVDT, the straightness error is reliable and this method matches the new generation GPS.

  15. Remote laser drilling and sampling system for the detection of concealed explosives

    Science.gov (United States)

    Wild, D.; Pschyklenk, L.; Theiß, C.; Holl, G.

    2017-05-01

    The detection of hazardous materials like explosives is a central issue for national security in the field of counterterrorism. One major task is the development of new methods and sensor systems for their detection. Many existing remote or standoff methods like infrared or Raman spectroscopy reach their limits if the hazardous material is concealed inside an object. Imaging technologies using x-ray or terahertz radiation usually yield no information about the chemical content itself. However, exact knowledge of the real threat potential of a suspicious object is crucial for disarming the device. A new approach deals with a laser drilling and sampling system for use as a verification detector for suspicious objects. The central part of the system is a miniaturised, diode-pumped Nd:YAG laser oscillator-amplifier. The system allows drilling into most materials, such as metals, synthetics or textiles, with bore-hole diameters on the micron scale. During the drilling process, the hazardous material can be sampled for further investigation with suitable detection methods. In the reported work, laser-induced breakdown spectroscopy (LIBS) is used to monitor the drilling process and to classify the drilled material. Experiments were also carried out to show that the system does not ignite even sensitive explosives like triacetone triperoxide (TATP). The detection of concealed hazardous material is shown for different explosives using liquid chromatography and ion mobility spectrometry.

  16. Method for decoupling error correction from privacy amplification

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Hoi-Kwong [Department of Electrical and Computer Engineering and Department of Physics, University of Toronto, 10 King' s College Road, Toronto, Ontario, Canada, M5S 3G4 (Canada)

    2003-04-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof.

  17. Method for decoupling error correction from privacy amplification

    International Nuclear Information System (INIS)

    Lo, Hoi-Kwong

    2003-01-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof
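
    The mechanical ingredient of the construction, encrypting the error-correction messages with pre-shared secret bits so that the exchanged syndrome reveals nothing, is simple to sketch. The block-parity exchange below is a toy stand-in for one Cascade pass, and the key and block sizes are illustrative.

    ```python
    import secrets

    def xor_bits(a, b):
        return [x ^ y for x, y in zip(a, b)]

    def parities(bits, block_size):
        """Toy stand-in for one Cascade pass: parity of each block of the raw key."""
        return [sum(bits[i:i + block_size]) % 2
                for i in range(0, len(bits), block_size)]

    n, block = 64, 8
    alice_key = [secrets.randbelow(2) for _ in range(n)]
    bob_key = alice_key.copy()
    bob_key[13] ^= 1                       # one transmission error

    # Pre-shared secret bits, consumed as a one-time pad for the syndrome exchange.
    pad = [secrets.randbelow(2) for _ in range(n // block)]

    syndrome_alice = parities(alice_key, block)
    ciphertext = xor_bits(syndrome_alice, pad)     # what actually crosses the channel

    # Bob decrypts with the same pad and locates the block that disagrees.
    recovered = xor_bits(ciphertext, pad)
    disagreeing = [i for i, (pa, pb) in
                   enumerate(zip(recovered, parities(bob_key, block))) if pa != pb]
    print(disagreeing)    # [1] -> a binary search inside block 1 would find bit 13
    ```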

  18. Geophysical techniques for exploration of concealed uranium deposits in the Gwalior basin

    International Nuclear Information System (INIS)

    Choudhary, Kalpan; Singh, R.B.

    2004-01-01

    There is no direct geophysical method for the exploration of concealed uranium ore. The scope of geophysics for this in the Gwalior basin comprises delineating the basement topography, demarcating zones of intense fracturing intersecting the unconformities, and identifying the presence of carbonaceous rocks, especially in the graben-like structures. These geophysical problems have been successfully solved in other places by employing IP, resistivity, SP and gravity techniques for basement mapping, identification of fracture/shear zones, and delineation of electrical conductors like carbonaceous rocks and sulphides. Three such case histories are presented here: a) basement and shear/fracture zone mapping in the Vindhyan basin north of the Son-Narmada lineament; b) delineation of a conductive zone (proved to be carbon phyllite) in the Mahakoshal Group of the Kanhara area of Sonbhadra district, UP; and c) identification of a conductive zone, proved to be a sulphide body, within the Mahakoshal Group in the Gurharpahar area of the Sidhi and Sonbhadra districts of MP and UP respectively. In the context of exploration for concealed uranium in the Gwalior basin, it is suggested to employ IP, resistivity, SP, gravity and magnetic methods for delineation of conductive zones like carbonaceous rocks, basement topography including graben-like structures, fracture zones, geological boundaries, and demarcation of the basin boundary. (author)

  19. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design of error detection methods includes a high-level software specification, which has the purpose of illustrating that the design can be used in practice.
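
    The three error classes suggest what the monitoring checks might look like in practice. The sketch below is purely illustrative and not taken from the thesis: the valid value range, event period and jitter bound are invented constraints.

    ```python
    # Illustrative run-time monitor for messages exchanged between application tasks.
    # The constraint values below (valid range, period, jitter) are invented examples.

    RANGE = (0.0, 150.0)        # semantic constraint on a communicated value
    PERIOD = 0.100              # expected emission period of a distributed event [s]
    JITTER = 0.020              # allowed timing deviation [s]

    def check_message(value, t_emit, t_prev_emit):
        errors = []
        if not (RANGE[0] <= value <= RANGE[1]):
            errors.append("semantic error: value out of range")
        if t_prev_emit is not None and abs((t_emit - t_prev_emit) - PERIOD) > JITTER:
            errors.append("timing error: event period violated")
        return errors

    print(check_message(42.0, t_emit=0.100, t_prev_emit=0.000))    # []
    print(check_message(512.0, t_emit=0.350, t_prev_emit=0.100))   # both checks fire
    ```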

  20. CONCEAL TO SURVIVE: RESISTANCE STRATEGIES

    Directory of Open Access Journals (Sweden)

    Francisca Zuleide Duarte de Souza

    2013-04-01

    This paper analyzes the strategy of concealment, framed theoretically by Accetto (2001) and used by Delfina, a character in the novel O Alegre Canto da Perdiz by Paulina Chiziane, a Mozambican writer. It focuses, among other things, on the colonizer-versus-colonized relationship, discussing the condition of female inferiority that forces an apparently submissive reaction, which includes the sale of the body and the rejection of ancestral traditions. The attitudes of Delfina are interpreted as a strategy that masks resentment against abusive power.

  1. Concealment of Child Sexual Abuse in Sports

    Science.gov (United States)

    Hartill, Mike

    2013-01-01

    When the sexual abuse of children is revealed, it is often found that other nonabusing adults were aware of the abuse but failed to act. During the past twenty years or so, the concealment of child sexual abuse (CSA) within organizations has emerged as a key challenge for child protection work. Recent events at Pennsylvania State University (PSU)…

  2. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2014-01-01

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV...

  3. Propagation of internal errors in explicit Runge–Kutta methods and internal stability of SSP and extrapolation methods

    KAUST Repository

    Ketcheson, David I.

    2014-04-11

    In practical computation with Runge--Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
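
    A small experiment conveys what an internal error is: perturb every stage derivative of the classical fourth-order Runge-Kutta method by a fixed epsilon and observe how much the final solution shifts. The test equation, step size and perturbation below are arbitrary; the paper's internal stability polynomials and amplification bounds are not reproduced.

    ```python
    import numpy as np

    # Classic RK4 for y' = f(t, y), with an optional perturbation added to every
    # stage derivative to mimic roundoff or solver error inside a single step.

    def rk4_step(f, t, y, h, eps=0.0):
        k1 = f(t, y) + eps
        k2 = f(t + h / 2, y + h / 2 * k1) + eps
        k3 = f(t + h / 2, y + h / 2 * k2) + eps
        k4 = f(t + h, y + h * k3) + eps
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    def integrate(eps, h=0.01, t_end=1.0):
        f = lambda t, y: -50.0 * y          # simple linear test problem
        y, t = 1.0, 0.0
        while t < t_end - 1e-12:
            y = rk4_step(f, t, y, h, eps)
            t += h
        return y

    clean = integrate(eps=0.0)
    perturbed = integrate(eps=1e-10)
    print(abs(perturbed - clean))   # net effect of the injected stage errors
    ```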

  4. Direct and indirect links between parenting styles, self-concealment (secrets), impaired control over drinking and alcohol-related outcomes.

    Science.gov (United States)

    Hartman, Jessica D; Patock-Peckham, Julie A; Corbin, William R; Gates, Jonathan R; Leeman, Robert F; Luk, Jeremy W; King, Kevin M

    2015-01-01

    Self-concealment reflects uncomfortable feelings, thoughts, and information people have about themselves that they avoid telling others (Larson & Chastain, 1990). According to Larson and Chastain (1990) these secrets range from the slightly embarrassing to the very distressing with an individual's most traumatic experiences often concealed. Parental attitudes including those involving self-disclosure are thought to be expressed in their choice of parenting style (Brand, Hatzinger, Beck, & Holsboer-Trachsler, 2009). The specific aim of this investigation was to examine the direct and indirect influences of parenting styles on self-concealment, impaired control over drinking (i.e. the inability to stop drinking when intended), alcohol use (quantity/frequency), and alcohol-related problems. A structural equation model with 419 (223 men, 196 women) university students was examined. Two and three path mediated effects were examined with the bias corrected bootstrap technique in Mplus. Having an authoritarian mother was directly linked to more self-concealment, while having an authoritative father was directly linked to less self-concealment. Higher levels of mother authoritarianism were indirectly linked to both increased alcohol use and alcohol-related problems through more self-concealment and more impaired control over drinking. Moreover, higher levels of father authoritativeness were indirectly linked to less alcohol use and alcohol-related problems through less self-concealment and less impaired control over drinking. These findings suggest that parenting styles influence vulnerabilities such as self-concealment in the impaired control over the drinking pathway to alcohol use and alcohol-related problems. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    Directory of Open Access Journals (Sweden)

    Gyungho Khim

    2015-01-01

    Full Text Available We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement.

  6. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    Science.gov (United States)

    Khim, Gyungho; Park, Chun Hong; Oh, Jeong Seok

    2015-01-01

    We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement. PMID:25705715

  7. ID-check: Online concealed information test reveals true identity

    NARCIS (Netherlands)

    Verschuere, B.; Kleinberg, B.

    2016-01-01

    The Internet has already changed people's lives considerably and is likely to drastically change forensic research. We developed a web-based test to reveal concealed autobiographical information. Initial studies identified a number of conditions that affect diagnostic efficiency. By combining these

  8. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design of error detection methods includes a high-level software specification, which has the purpose of illustrating that the design can be used in practice.

  9. A Qualitative Study Examining Experiences and Dilemmas in Concealment and Disclosure of People Living With Serious Mental Illness.

    Science.gov (United States)

    Bril-Barniv, Shani; Moran, Galia S; Naaman, Adi; Roe, David; Karnieli-Miller, Orit

    2017-03-01

    People with mental illnesses face the dilemma of whether to disclose or conceal their diagnosis, but this dilemma has scarcely been researched. To gain an in-depth understanding of this dilemma, we interviewed 29 individuals with mental illnesses: 16 with major depression/bipolar disorders and 13 with schizophrenia. Using a phenomenological design, we analyzed individuals' experiences, decision-making processes, and views of gains and costs regarding concealment and disclosure of mental illness. We found that participants employed both positive and negative disclosure/concealment practices. Positive practices included enhancing personal recovery, community integration, and/or supporting others. Negative practices occurred in forced, uncontrolled situations. We also identified various influencing factors, including familial norms of sharing, accumulated experiences with disclosure, and the meaning ascribed to the diagnosis. Based on these findings, we deepen the understanding of decision-making processes and the consequences of disclosing or concealing mental illness. We discuss how these findings can help consumers explore potential benefits and disadvantages of mental illness disclosure/concealment.

  10. New method of classifying human errors at nuclear power plants and the analysis results of applying this method to maintenance errors at domestic plants

    International Nuclear Information System (INIS)

    Takagawa, Kenichi; Miyazaki, Takamasa; Gofuku, Akio; Iida, Hiroyasu

    2007-01-01

    Since many of the adverse events that have occurred in nuclear power plants in Japan and abroad have been related to maintenance or operation, it is necessary to plan preventive measures based on detailed analyses of human errors made by maintenance workers or operators. Therefore, before planning preventive measures, we developed a new method of analyzing human errors. Since each human error is an unsafe action caused by some misjudgement made by a person, we decided to classify them into six categories according to the stage in the judgment process in which the error was made. By further classifying each error as either an omission type or a commission type, we produced 12 categories of errors. Then, we divided them into the two categories of basic error tendencies and individual error tendencies, and categorized background factors into four categories: imperfect planning; imperfect facilities or tools; imperfect environment; and imperfect instructions or communication. We thus defined the factors in each category to make it easy to identify the factors that caused the error. Then, using this method, we studied the characteristics of human errors that involved maintenance workers and planners, since many maintenance errors have occurred. Among the human errors made by workers (worker errors) during the implementation stage, the following three types accounted for approximately 80%: commission-type 'projection errors', omission-type 'comprehension errors' and commission-type 'action errors'. The most common among the individual factors of worker errors was 'repetition or habit' (schema), based on the assumption of a typical situation, and about half of the 'repetition or habit' (schema) cases were not influenced by any background factors. The most common background factor that contributed to the individual factor was 'imperfect work environment', followed by 'insufficient knowledge'. Approximately 80% of the individual factors were 'repetition or habit' or

  11. THE PRACTICAL ANALYSIS OF FINITE ELEMENTS METHOD ERRORS

    Directory of Open Access Journals (Sweden)

    Natalia Bakhova

    2011-03-01

    The most important practical questions of reliable estimation of finite element method errors are considered. Rules for defining the necessary calculation accuracy are developed. Methods and ways of calculation that allow obtaining the best final results at an economical expenditure of computing work are offered. Keywords: error, given accuracy, finite element method, Lagrangian and Hermitian elements.

  12. HUMAN RELIABILITY ANALYSIS DENGAN PENDEKATAN COGNITIVE RELIABILITY AND ERROR ANALYSIS METHOD (CREAM

    Directory of Open Access Journals (Sweden)

    Zahirah Alifia Maulida

    2015-01-01

    Workplace accidents in grinding and welding have ranked highest over the last five years at PT. X. These accidents are caused by human error, which arises from the influence of the physical and non-physical work environment. This study uses scenarios to predict and reduce the likelihood of human error with the CREAM (Cognitive Reliability and Error Analysis Method) approach. CREAM is a human reliability analysis method for obtaining the Cognitive Failure Probability (CFP), which can be determined in two ways: the basic method and the extended method. The basic method yields only a general failure probability, whereas the extended method yields a CFP for each task. The results show that the factors influencing the occurrence of errors in grinding and welding work are adequacy of organization, adequacy of the Man-Machine Interface (MMI) and operational support, availability of procedures/plans, and adequacy of training and experience. The cognitive aspect with the highest error value in grinding work is planning, with a CFP of 0.3, and in welding work it is execution, with a CFP of 0.18. To reduce cognitive error in grinding and welding work, the recommendations given are routine training, more detailed work instructions, and tool familiarization. Keywords: CREAM (Cognitive Reliability and Error Analysis Method), HRA (human reliability analysis), cognitive error. Abstract: The accidents in the grinding and welding sectors were the highest cases over the last five years at PT. X, and they were caused by human error. Human error occurs due to the influence of the working environment, both physical and non-physical. This study implements a scenario-based approach called CREAM (Cognitive Reliability and Error Analysis Method). CREAM is one of human

  13. Unsupervised image segmentation for passive THz broadband images for concealed weapon detection

    Science.gov (United States)

    Ramírez, Mabel D.; Dietlein, Charles R.; Grossman, Erich; Popović, Zoya

    2007-04-01

    This work presents the application of a basic unsupervised classification algorithm for the segmentation of indoor passive Terahertz images. The 30,000 pixel broadband images of a person with concealed weapons under clothing are taken at a range of 0.8-2m over a frequency range of 0.1-1.2THz using single-pixel row-based raster scanning. The spiral-antenna coupled 36x1x0.02μm Nb bridge cryogenic micro-bolometers are developed at the NIST Optoelectronics Division. The antenna is evaporated on a 250μm thick Si substrate with a 4mm diameter hyper-hemispherical Si lens. The NETD of the microbolometer is 125mK at an integration time of 30 ms. The background temperature calibration is performed with a known 25 pixel source above 330 K, and a measured background fluctuation of 200-500mK. Several weapons were concealed under different fabrics: cotton, polyester, windblocker jacket and thermal sweater. Measured temperature contrasts ranged from 0.5-1K for wrinkles in clothing to 5K for a zipper and 8K for the concealed weapon. In order to automate feature detection in the images, some image processing and pattern recognition techniques have been applied and the results are presented here. We show that even simple algorithms, which can potentially run in real time, are capable of differentiating between a metal and a dielectric object concealed under clothing. Additionally, we show that pre-processing can reveal low temperature contrast features, such as folds in clothing.
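
    A plain k-means clustering of the per-pixel brightness temperatures is one example of the kind of basic unsupervised segmentation the abstract refers to. The synthetic image, temperatures and choice of k = 3 below are assumptions made for the sketch, not details of the authors' algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic stand-in for a passive THz brightness-temperature image [K]:
    # cool background, warm body, and a cooler "object" concealed on the torso.
    img = np.full((120, 250), 296.0) + rng.normal(0, 0.3, (120, 250))   # background
    img[30:100, 80:170] = 304.0 + rng.normal(0, 0.3, (70, 90))          # body
    img[55:75, 110:135] = 299.0 + rng.normal(0, 0.3, (20, 25))          # concealed object

    def kmeans_1d(values, k=3, iters=50):
        """Plain k-means on scalar pixel values (no spatial information used)."""
        centers = np.linspace(values.min(), values.max(), k)            # simple init
        for _ in range(iters):
            labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
            centers = np.array([values[labels == j].mean() for j in range(k)])
        return labels, centers

    labels, centers = kmeans_1d(img.ravel(), k=3)
    segmented = labels.reshape(img.shape)       # class map: background / object / body
    print(np.round(np.sort(centers), 1))        # roughly [296., 299., 304.]
    ```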

  14. A Posteriori Error Estimation for Finite Element Methods and Iterative Linear Solvers

    Energy Technology Data Exchange (ETDEWEB)

    Melboe, Hallgeir

    2001-10-01

    This thesis addresses a posteriori error estimation for finite element methods and iterative linear solvers. Adaptive finite element methods have gained a lot of popularity over the last decades due to their ability to produce accurate results with limited computer power. In these methods a posteriori error estimates play an essential role. Not only do they give information about how large the total error is, they also indicate which parts of the computational domain should be given a more sophisticated treatment in order to reduce the error. A posteriori error estimates are traditionally aimed at estimating the global error, but more recently so called goal oriented error estimators have been shown a lot of interest. The name reflects the fact that they estimate the error in user-defined local quantities. In this thesis the main focus is on global error estimators for highly stretched grids and goal oriented error estimators for flow problems on regular grids. Numerical methods for partial differential equations, such as finite element methods and other similar techniques, typically result in a linear system of equations that needs to be solved. Usually such systems are solved using some iterative procedure which due to a finite number of iterations introduces an additional error. Most such algorithms apply the residual in the stopping criterion, whereas the control of the actual error may be rather poor. A secondary focus in this thesis is on estimating the errors that are introduced during this last part of the solution procedure. The thesis contains new theoretical results regarding the behaviour of some well known, and a few new, a posteriori error estimators for finite element methods on anisotropic grids. Further, a goal oriented strategy for the computation of forces in flow problems is devised and investigated. Finally, an approach for estimating the actual errors associated with the iterative solution of linear systems of equations is suggested. (author)
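
    The last point, that the residual used in a stopping criterion can misrepresent the actual error, is easy to demonstrate for an ill-conditioned system with a known solution: the relative residual of a conjugate-gradient iterate can be orders of magnitude smaller than the relative error. The synthetic matrix below is an assumption for the sketch, not an example from the thesis.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Build an ill-conditioned SPD system with a known solution.
    n = 200
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    A = Q @ np.diag(np.logspace(-6, 0, n)) @ Q.T        # condition number ~1e6
    x_true = rng.normal(size=n)
    b = A @ x_true

    # Plain conjugate gradient, monitoring the residual against the actual error.
    x = np.zeros(n)
    r = b - A @ x
    p = r.copy()
    for it in range(1, 201):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        if it % 50 == 0:
            rel_res = np.linalg.norm(r) / np.linalg.norm(b)
            rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
            print(f"iter {it:3d}  relative residual {rel_res:.1e}  relative error {rel_err:.1e}")
    ```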

  15. Guilt, censure, and concealment of active smoking status among cancer patients and family members after diagnosis: a nationwide study.

    Science.gov (United States)

    Shin, Dong Wook; Park, Jong Hyock; Kim, So Young; Park, Eal Whan; Yang, Hyung Kook; Ahn, Eunmi; Park, Seon Mee; Lee, Young Joon; Lim, Myong Cheol; Seo, Hong Gwan

    2014-05-01

    We aimed to identify the prevalence of feelings of guilt, censure, and concealment of smoking status among cancer patients and their family members who continued to smoke after the patient's diagnosis. Among 990 patient-family member dyads, 45 patients and 173 family members who continued to smoke for at least 1 month after the patients' diagnoses were administered questions examining feelings of guilt, censure, and smoking concealment. Most patients who continued to smoke reported experiencing feelings of guilt toward their families (75.6%) and censure from their family members (77.8%), and many concealed their smoking from their family members (44.4%) or healthcare professionals (46.7%). Family members who continued to smoke also reported feelings of guilt with respect to the patient (63.6%) and that the patient was critical of them (68.9%), and many concealed their smoking from the patient (28.5%) or healthcare professionals (9.3%). Patients' feeling of guilt was associated with concealment of smoking from family members (55.9% vs. 10.0%) or health care professionals (55.9% vs. 20.0%). Family members who reported feeling guilty (36.5% vs. 16.3%) or censured (34.5% vs. 16.7%) were more likely to conceal smoking from patients. Many patients and family members continue to smoke following cancer diagnosis, and the majority of them experience feelings of guilt and censure, which can lead to the concealment of smoking status from families or health care professionals. Feelings of guilt, censure, and concealment of smoking should be considered in the development and implementation of smoking cessation programs for cancer patients and family members. Copyright © 2013 John Wiley & Sons, Ltd.

  16. The assessment of cognitive errors using an observer-rated method.

    Science.gov (United States)

    Drapeau, Martin

    2014-01-01

    Cognitive Errors (CEs) are a key construct in cognitive behavioral therapy (CBT). Integral to CBT is that individuals with depression process information in an overly negative or biased way, and that this bias is reflected in specific depressotypic CEs which are distinct from normal information processing. Despite the importance of this construct in CBT theory, practice, and research, few methods are available to researchers and clinicians to reliably identify CEs as they occur. In this paper, the author presents a rating system, the Cognitive Error Rating Scale, which can be used by trained observers to identify and assess the cognitive errors of patients or research participants in vivo, i.e., as they are used or reported by the patients or participants. The method is described, including some of the more important rating conventions to be considered when using the method. This paper also describes the 15 cognitive errors assessed, and the different summary scores, including valence of the CEs, that can be derived from the method.

  17. Internal quality control of RIA with Tonks error calculation method

    International Nuclear Information System (INIS)

    Chen Xiaodong

    1996-01-01

    According to the methodological features of RIA, an internal quality control chart based on the Tonks error calculation method, which is suitable for RIA, is designed. The quality control chart defines the allowable error from the normal reference range. The method is simple to perform and its results are easy to interpret visually. Taking the determination of T3 and T4 as an example, the calculation of the allowable error, the drawing of the quality control chart and the analysis of the results are introduced.
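
    The abstract does not spell out the formula; a common statement of Tonks' rule is that the allowable error is one quarter of the normal reference interval, expressed as a percentage of the interval mean and usually capped at 10%. The sketch below assumes that formulation, and the T4 reference interval used is purely illustrative.

        def tonks_allowable_error(ref_low, ref_high, cap_percent=10.0):
            """Tonks' rule (one common formulation, assumed here): allowable error (%) =
            0.25 * (reference interval width / reference interval mean) * 100, capped at 10%."""
            width = ref_high - ref_low
            mean = (ref_high + ref_low) / 2.0
            return min(0.25 * width / mean * 100.0, cap_percent)

        # Example with an assumed serum T4 reference interval of 60-160 nmol/L.
        limit = tonks_allowable_error(60.0, 160.0)
        print(f"allowable error: +/-{limit:.1f}%")          # capped at 10.0% here

        # A control result is flagged when its deviation from the target exceeds the limit.
        target, measured = 110.0, 121.0
        deviation = abs(measured - target) / target * 100.0
        print("in control" if deviation <= limit else "out of control")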

  18. Error response test system and method using test mask variable

    Science.gov (United States)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.
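
    As a rough illustration of the mechanism described (not the patented implementation), the sketch below injects a fault through a module-level test mask variable; all names and flag values are hypothetical.

        # Minimal sketch of error injection through a test mask variable.
        INJECT_SENSOR_TIMEOUT = 0x01          # illustrative fault flags
        INJECT_BAD_CHECKSUM = 0x02

        TEST_MASK = 0x00                      # zero during normal operation


        class SensorTimeout(Exception):
            pass


        def read_sensor():
            """Application code under test; it consults the test mask before returning data."""
            if TEST_MASK & INJECT_SENSOR_TIMEOUT:
                raise SensorTimeout("injected timeout")   # injected fault path
            return 42.0                                   # normal path


        def application_step():
            """The error response being verified: fall back to a safe value on a timeout."""
            try:
                return read_sensor()
            except SensorTimeout:
                return 0.0                                # documented safe fallback


        assert application_step() == 42.0     # normal operation: mask leaves behaviour unchanged
        TEST_MASK = INJECT_SENSOR_TIMEOUT     # test harness flips a bit to inject the error
        assert application_step() == 0.0      # monitored response to the injected error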

  19. ID-Check: Online Concealed Information Test Reveals True Identity.

    Science.gov (United States)

    Verschuere, Bruno; Kleinberg, Bennett

    2016-01-01

    The Internet has already changed people's lives considerably and is likely to drastically change forensic research. We developed a web-based test to reveal concealed autobiographical information. Initial studies identified a number of conditions that affect diagnostic efficiency. By combining these moderators, this study investigated the full potential of the online ID-check. Participants (n = 101) tried to hide their identity and claimed a false identity in a reaction time-based Concealed Information Test. Half of the participants were presented with personal details (e.g., first name, last name, birthday), whereas the others only saw irrelevant details. Results showed that participants' true identity could be detected with high accuracy (AUC = 0.98; overall accuracy: 86-94%). Online memory detection can reliably and validly detect whether someone is hiding their true identity. This suggests that online memory detection might become a valuable tool for forensic applications. © 2015 American Academy of Forensic Sciences.

  20. Internal Error Propagation in Explicit Runge--Kutta Methods

    KAUST Repository

    Ketcheson, David I.; Loczi, Lajos; Parsani, Matteo

    2014-01-01

    of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods

  1. Narratives around concealment and agency for stigma-reduction: a study of women affected by leprosy in Cirebon District, Indonesia.

    NARCIS (Netherlands)

    Peters, R.M.H.; Hofker, M.E.; Zweekhorst, M.B.M.; van Brakel, W.H.; Bunders-Aelen, J.G.F.

    2014-01-01

    Purpose: This study analyses the experiences of women affected by leprosy, taking into consideration whether they concealed or disclosed their status, and looks specifically at their ‘agency’. The aim is to provide recommendations for stigma-reduction interventions. Methods: The study population

  2. An in-situ measuring method for planar straightness error

    Science.gov (United States)

    Chen, Xi; Fu, Luhua; Yang, Tongyu; Sun, Changku; Wang, Zhong; Zhao, Yan; Liu, Changjie

    2018-01-01

    To address some current problems in measuring the planar shape error of a workpiece, an in-situ measuring method based on laser triangulation is presented in this paper. The method avoids the inefficiency of traditional methods such as the knife straightedge, as well as the time and cost requirements of a coordinate measuring machine (CMM). A laser-based measuring head is designed and installed on the spindle of a numerical control (NC) machine. The measuring head moves along a planned path to sample the measuring points. The spatial coordinates of the measuring points are obtained by combining the laser triangulation displacement sensor with the coordinate system of the NC machine, which makes in-situ measurement feasible. The planar straightness error is evaluated using particle swarm optimization (PSO). To verify the feasibility and accuracy of the measuring method, simulation experiments were implemented with a CMM. Comparing the measurement results of the measuring head with the corresponding values obtained by a composite measuring machine, it is verified that the method can realize high-precision, automatic measurement of the planar straightness error of the workpiece.
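
    The paper evaluates straightness with particle swarm optimization; the sketch below only illustrates the underlying minimum-zone objective on simulated measuring points, using SciPy's differential evolution as a stand-in global optimizer. The data and bounds are invented for the example.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Simulated measuring points: heights z at positions x along a nominally straight line.
        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 100.0, 50)                                         # mm
        z = 0.002 * x + 0.004 * np.sin(x / 7.0) + rng.normal(0, 0.001, x.size)  # mm

        def zone_width(slope):
            """Minimum-zone straightness: width of the narrowest band z = slope*x + c
            containing all points, i.e. max deviation minus min deviation."""
            d = z - slope * x
            return d.max() - d.min()

        res = differential_evolution(lambda p: zone_width(p[0]), bounds=[(-0.1, 0.1)], seed=0)
        print(f"minimum-zone straightness error: {res.fun * 1000:.2f} um")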

  3. Grinding Method and Error Analysis of Eccentric Shaft Parts

    Science.gov (United States)

    Wang, Zhiming; Han, Qiushi; Li, Qiguang; Peng, Baoying; Li, Weihua

    2017-12-01

    RV reducer and various mechanical transmission parts are widely used in eccentric shaft parts, The demand of precision grinding technology for eccentric shaft parts now, In this paper, the model of X-C linkage relation of eccentric shaft grinding is studied; By inversion method, the contour curve of the wheel envelope is deduced, and the distance from the center of eccentric circle is constant. The simulation software of eccentric shaft grinding is developed, the correctness of the model is proved, the influence of the X-axis feed error, the C-axis feed error and the wheel radius error on the grinding process is analyzed, and the corresponding error calculation model is proposed. The simulation analysis is carried out to provide the basis for the contour error compensation.

  4. Sharp and blunt force trauma concealment by thermal alteration in homicides: an in-vitro experiment for methodology and protocol development in forensic anthropological analysis of burnt bones

    OpenAIRE

    Macoveciuc, I; Marquez-Grant, N; Horsfall, I; Zioupos, P

    2017-01-01

    Burning of human remains is one method used by perpetrators to conceal fatal trauma and expert opinions regarding the degree of skeletal evidence concealment are often disparate. This experiment aimed to reduce this incongruence in forensic anthropological interpretation of burned human remains and implicitly contribute to the development of research methodologies sufficiently robust to withstand forensic scrutiny in the courtroom. We have tested the influence of thermal alteration on pre-exi...

  5. Error Analysis for Fourier Methods for Option Pricing

    KAUST Repository

    Häppölä, Juho

    2016-01-06

    We provide a bound for the error committed when using a Fourier method to price European options when the underlying follows an exponential Levy dynamic. The price of the option is described by a partial integro-differential equation (PIDE). Applying a Fourier transformation to the PIDE yields an ordinary differential equation that can be solved analytically in terms of the characteristic exponent of the Levy process. Then, a numerical inverse Fourier transform allows us to obtain the option price. We present a novel bound for the error and use this bound to set the parameters for the numerical method. We analyze the properties of the bound for a dissipative and pure-jump example. The bound presented is independent of the asymptotic behaviour of option prices at extreme asset prices. The error bound can be decomposed into a product of terms resulting from the dynamics and the option payoff, respectively. The analysis is supplemented by numerical examples that demonstrate results comparable to and superior to the existing literature.
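
    As a rough, self-contained illustration of the pricing route described (characteristic exponent, then a numerical inverse Fourier transform), the sketch below handles the simplest exponential Levy case, Brownian motion with drift (Black-Scholes): it recovers the log-return density from its characteristic function and integrates the discounted payoff. It is not the paper's method or error bound, and the grids are chosen ad hoc.

        import numpy as np

        S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

        def char_fn(u):
            """Characteristic function of the log-return X_T = ln(S_T/S_0) under the
            risk-neutral measure for geometric Brownian motion."""
            return np.exp((1j * u * (r - 0.5 * sigma ** 2) - 0.5 * sigma ** 2 * u ** 2) * T)

        # Numerical inversion: f(x) = (1/pi) * integral_0^inf Re[exp(-i*u*x) * phi(u)] du
        u = np.linspace(1e-8, 60.0, 1200)
        x = np.linspace(-1.5, 1.5, 601)
        du, dx = u[1] - u[0], x[1] - x[0]
        integrand = np.real(np.exp(-1j * np.outer(x, u)) * char_fn(u))
        density = integrand.sum(axis=1) * du / np.pi

        # European call price as the discounted expectation of the payoff over the density.
        payoff = np.maximum(S0 * np.exp(x) - K, 0.0)
        price = np.exp(-r * T) * np.sum(payoff * density) * dx
        print(f"Fourier-inversion call price: {price:.4f}")   # close to the Black-Scholes value ~10.45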

  6. Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi

    1996-01-01

    To estimate the subcriticality of neutron multiplication factor in a fissile system, 'Indirect Estimation Method for Calculation Error' is proposed. This method obtains the calculational error of neutron multiplication factor by correlating measured values with the corresponding calculated ones. This method was applied to the source multiplication and to the pulse neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of measured neutron count rate distributions from the calculated ones estimates the accuracy of calculated k eff . In the pulse neutron method, the calculation errors of prompt neutron decay constants give the accuracy of the calculated k eff . (author)

  7. Estimating misclassification error: a closer look at cross-validation based methods

    Directory of Open Access Journals (Sweden)

    Ounpraseuth Songthip

    2012-11-01

    Full Text Available Abstract Background: To estimate a classifier’s error in predicting future observations, bootstrap methods have been proposed as reduced-variation alternatives to traditional cross-validation (CV) methods based on sampling without replacement. Monte Carlo (MC) simulation studies aimed at estimating the true misclassification error conditional on the training set are commonly used to compare CV methods. We conducted an MC simulation study to compare a new method of bootstrap CV (BCV) to k-fold CV for estimating classification error. Findings: For the low-dimensional conditions simulated, the modest positive bias of k-fold CV contrasted sharply with the substantial negative bias of the new BCV method. This behavior was corroborated using a real-world dataset of prognostic gene-expression profiles in breast cancer patients. Our simulation results demonstrate some extreme characteristics of variance and bias that can occur due to a fault in the design of CV exercises aimed at estimating the true conditional error of a classifier, and that appear not to have been fully appreciated in previous studies. Although CV is a sound practice for estimating a classifier’s generalization error, using CV to estimate the fixed misclassification error of a trained classifier conditional on the training set is problematic. While MC simulation of this estimation exercise can correctly represent the average bias of a classifier, it will overstate the between-run variance of the bias. Conclusions: We recommend k-fold CV over the new BCV method for estimating a classifier’s generalization error. The extreme negative bias of BCV is too high a price to pay for its reduced variance.
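
    For readers who want the baseline the authors recommend, a minimal k-fold CV estimate of misclassification error looks like the sketch below (a generic scikit-learn illustration on synthetic data; it does not reproduce the paper's BCV method or simulation design).

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Synthetic low-dimensional classification problem (illustrative only).
        X, y = make_classification(n_samples=200, n_features=5, n_informative=3, random_state=0)

        clf = LogisticRegression(max_iter=1000)
        acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy")   # 10-fold cross-validation
        print(f"estimated misclassification error: {1.0 - acc.mean():.3f} "
              f"(fold-to-fold s.d. {acc.std():.3f})")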

  8. SCHEME (Soft Control Human error Evaluation MEthod) for advanced MCR HRA

    International Nuclear Information System (INIS)

    Jang, Inseok; Jung, Wondea; Seong, Poong Hyun

    2015-01-01

    The Technique for Human Error Rate Prediction (THERP), Korean Human Reliability Analysis (K-HRA), Human Error Assessment and Reduction Technique (HEART), A Technique for Human Event Analysis (ATHEANA), Cognitive Reliability and Error Analysis Method (CREAM), and Simplified Plant Analysis Risk Human Reliability Assessment (SPAR-H) are HRA methods that have been used in relation to NPP maintenance and operation. Most of these methods were developed considering the conventional type of Main Control Rooms (MCRs). They are still used for HRA in advanced MCRs even though the operating environment of advanced MCRs in NPPs has been considerably changed by the adoption of new human-system interfaces such as computer-based soft controls. Among the many features of advanced MCRs, soft controls are an important one because operation actions in NPP advanced MCRs are performed through soft controls. Consequently, those conventional methods may not sufficiently consider the features of soft control execution human errors. To this end, a new framework of an HRA method for evaluating soft control execution human error is suggested by performing a soft control task analysis and literature reviews regarding widely accepted human error taxonomies. In this study, the framework of an HRA method for evaluating soft control execution human error in advanced MCRs is developed. First, the factors which an HRA method in advanced MCRs should encompass are derived based on the literature review and the soft control task analysis. Based on the derived factors, an execution HRA framework for advanced MCRs is developed, mainly focusing on the features of soft controls. Moreover, since most current HRA databases deal with operation in the conventional type of MCR and are not explicitly designed to deal with digital HSI, an HRA database is developed under lab-scale simulation

  9. Dynamic Error Analysis Method for Vibration Shape Reconstruction of Smart FBG Plate Structure

    Directory of Open Access Journals (Sweden)

    Hesheng Zhang

    2016-01-01

    Full Text Available Shape reconstruction of aerospace plate structures is an important issue for the safe operation of aerospace vehicles. One way to achieve such reconstruction is by constructing a smart fiber Bragg grating (FBG) plate structure with discrete distributed FBG sensor arrays and using reconstruction algorithms, in which error analysis of the reconstruction algorithm is a key step. Considering that traditional error analysis methods can only deal with static data, a new dynamic data error analysis method is proposed based on the LMS algorithm for shape reconstruction of smart FBG plate structures. Firstly, the smart FBG structure and the orthogonal curved network based reconstruction method are introduced. Then, a dynamic error analysis model is proposed for dynamic reconstruction error analysis. Thirdly, parameter identification is done for the proposed dynamic error analysis model based on the least mean square (LMS) algorithm. Finally, an experimental verification platform is constructed and an experimental dynamic reconstruction analysis is done. Experimental results show that the dynamic characteristics of the reconstruction performance for the plate structure can be obtained accurately based on the proposed dynamic error analysis method. The proposed method can also be used for other data acquisition systems and data processing systems as a general error analysis method.
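
    The abstract identifies the error-model parameters with an LMS algorithm; the sketch below shows the generic LMS update on a toy linear model with invented data, not the paper's FBG error model.

        import numpy as np

        rng = np.random.default_rng(0)
        w_true = np.array([0.8, -0.3, 0.5])          # unknown model parameters to identify
        w = np.zeros_like(w_true)                    # LMS estimate
        mu = 0.05                                    # step size (small enough for stability)

        for _ in range(5000):
            x = rng.normal(size=w_true.size)         # regressor (e.g. recent measurement samples)
            d = w_true @ x + rng.normal(scale=0.01)  # noisy measured output
            e = d - w @ x                            # instantaneous output error
            w += mu * e * x                          # LMS parameter update

        print("identified parameters:", np.round(w, 3))   # converges close to w_true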

  10. Output Error Method for Tiltrotor Unstable in Hover

    Directory of Open Access Journals (Sweden)

    Lichota Piotr

    2017-03-01

    Full Text Available This article investigates system identification of an unstable tiltrotor in hover from flight test data. The aircraft dynamics were described by a linear model defined in the Body-Fixed Coordinate System. The Output Error Method was selected in order to obtain stability and control derivatives in lateral motion. For estimating the model parameters, both time and frequency domain formulations were applied. To improve the system identification performed in the time domain, a stabilization matrix was included for evaluating the states. In the end, estimates obtained from the various Output Error Method formulations were compared in terms of parameter accuracy and time histories. Evaluations were performed in the MATLAB R2009b environment.
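
    The essence of the Output Error Method is to simulate the model for candidate parameters and minimize the squared difference between simulated and measured outputs. The sketch below does this for a toy stable first-order model with a generic optimizer; the tiltrotor model, the stabilization matrix and the frequency-domain formulation from the paper are not reproduced.

        import numpy as np
        from scipy.optimize import minimize

        # Toy model x[k+1] = a*x[k] + b*u[k], output y = x, with "flight test" data simulated below.
        rng = np.random.default_rng(0)
        a_true, b_true, N = 0.9, 0.5, 300
        u = rng.normal(size=N)                            # recorded control input
        x = np.zeros(N + 1)
        for k in range(N):
            x[k + 1] = a_true * x[k] + b_true * u[k]
        y_meas = x[1:] + rng.normal(scale=0.05, size=N)   # noisy measured output

        def output_error_cost(theta):
            """Simulate with candidate parameters and sum the squared output residuals."""
            a, b = theta
            xs, cost = 0.0, 0.0
            for k in range(N):
                xs = a * xs + b * u[k]
                cost += (y_meas[k] - xs) ** 2
            return cost

        res = minimize(output_error_cost, x0=[0.5, 0.1], method="Nelder-Mead")
        print("estimated (a, b):", np.round(res.x, 3))    # close to (0.9, 0.5)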

  11. Error of image saturation in the structured-light method.

    Science.gov (United States)

    Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin

    2018-01-01

    In the phase-measuring structured-light method, image saturation will induce large phase errors. Usually, by selecting proper system parameters (such as the phase-shift number, exposure time, projection intensity, etc.), the phase error can be reduced. However, due to lack of a complete theory of phase error, there is no rational principle or basis for the selection of the optimal system parameters. For this reason, the phase error due to image saturation is analyzed completely, and the effects of the two main factors, including the phase-shift number and saturation degree, on the phase error are studied in depth. In addition, the selection of optimal system parameters is discussed, including the proper range and the selection principle of the system parameters. The error analysis and the conclusion are verified by simulation and experiment results, and the conclusion can be used for optimal parameter selection in practice.
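
    A small numerical sketch of the effect being analyzed: with the standard N-step phase-shifting estimate, clipping the fringe intensities at the camera's saturation level biases the recovered phase. The fringe parameters and 8-bit saturation level are assumptions for illustration; the paper's full error model is not reproduced.

        import numpy as np

        def recovered_phase(I, N):
            """Standard N-step phase-shifting estimate from fringe images I_n, n = 0..N-1."""
            delta = 2.0 * np.pi * np.arange(N) / N
            s = np.sum(I * np.sin(delta), axis=-1)
            c = np.sum(I * np.cos(delta), axis=-1)
            return np.arctan2(-s, c)

        N = 4                                      # phase-shift number
        phi = np.linspace(-np.pi, np.pi, 1000)     # true phase across the field
        A, B = 140.0, 120.0                        # background and modulation (overexposed: A+B > 255)
        delta = 2.0 * np.pi * np.arange(N) / N
        I = A + B * np.cos(phi[:, None] + delta[None, :])

        I_sat = np.clip(I, 0.0, 255.0)             # saturation of an 8-bit camera
        err = np.angle(np.exp(1j * (recovered_phase(I_sat, N) - phi)))
        print(f"peak phase error due to saturation: {np.abs(err).max():.4f} rad")
        # Raising the phase-shift number N or lowering the exposure reduces this error.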

  12. Asouzu's phenomenon of concealment and Bacon's idols of the ...

    African Journals Online (AJOL)

    The study emanates from the contentions of leaders of states who, instead of promoting the ideals and values which promote social and political coexistence, limit and conceal their views of leadership to some tribalistic, ethnocentric and self-serving idols, and by so doing cause monumental harm to the polity. This is ...

  13. Analysis of Formal Methods for Specification of E-Commerce Applications

    Directory of Open Access Journals (Sweden)

    Sadiq Ali Khan

    2016-01-01

    Full Text Available E-commerce applications are highly dynamic and decentralized in nature, which places a premium on their structural design and implementation. A significant body of research shows that applying formal methods to the challenges inherent in e-commerce applications contributes to the reliability and robustness of the resulting systems. Anticipating and designing sturdy e-processes and their concurrent implementation gives application behaviour extra strength against errors, fraud and hacking, and minimizes program faults during application operation. Programmers find it extremely difficult, though not impossible, to guarantee correct processing under all circumstances. Concealed flaws and errors, triggered only under unexpected and unanticipated scenarios, lead to subtle mistakes and appalling failures. Code authors use various formal methods to reduce these flaws. Prominent methods include ASM (Abstract State Machines), the B-Method, the Z language, and UML (Unified Modelling Language). This paper primarily focuses on the different formal methods applied to the specification and verification of e-commerce applications in a cost-effective way.

  14. Is anterior N2 enhancement a reliable electrophysiological index of concealed information?

    Science.gov (United States)

    Ganis, Giorgio; Bridges, David; Hsu, Chun-Wei; Schendan, Haline E

    2016-12-01

    Concealed information tests (CITs) are used to determine whether an individual possesses information about an item of interest. Event-related potential (ERP) measures in CITs have focused almost exclusively on the P3b component, showing that this component is larger when lying about the item of interest (probe) than telling the truth about control items (irrelevants). Recent studies have begun to examine other ERP components, such as the anterior N2, with mixed results. A seminal CIT study found that visual probes elicit a larger anterior N2 than irrelevants (Gamer and Berti, 2010) and suggested that this component indexes cognitive control processes engaged when lying about probes. However, this study did not control for potential intrinsic differences among the stimuli: the same probe and irrelevants were used for all participants, and there was no control condition composed of uninformed participants. Here, first we show that the N2 effect found in the study by Gamer and Berti (2010) was in large part due to stimulus differences, as the effect observed in a concealed information condition was comparable to that found in two matched control conditions without any concealed information (Experiments 1 and 2). Next, we addressed the issue of the generality of the N2 findings by counterbalancing a new set of stimuli across participants and by using a control condition with uninformed participants (Experiment 3). Results show that the probe did not elicit a larger anterior N2 than the irrelevants under these controlled conditions. These findings suggest that caution should be taken in using the N2 as an index of concealed information in CITs. Furthermore, they are a reminder that results of CIT studies (not only with ERPs) performed without stimulus counterbalancing and suitable control conditions may be confounded by differential intrinsic properties of the stimuli employed. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Self-stigma among concealable minorities in Hong Kong: conceptualization and unified measurement.

    Science.gov (United States)

    Mak, Winnie W S; Cheung, Rebecca Y M

    2010-04-01

    Self-stigma refers to the internalized stigma that individuals may have toward themselves as a result of their minority status. Not only can self-stigma dampen the mental health of individuals, it can deter them from seeking professional help lest disclosing their minority status lead to being shunned by service providers. No unified instrument has been developed to measure self-stigma consistently across different concealable minority groups. The present study presented findings based on 4 studies on the development and validation of the Self-Stigma Scale, conducted in Hong Kong with community samples of mental health consumers, recent immigrants from Mainland China, and sexual minorities. Upon a series of validation procedures, a 9-item Self-Stigma Scale-Short Form was developed. Initial support for its reliability and construct validity (convergent and criterion validities) was found among 3 stigmatized groups. The utility of this unified measure is that it establishes an empirical basis upon which self-stigma of different concealable minority groups can be assessed under the same dimensions. Health-care professionals could make use of this short scale to assess potential self-stigmatization among concealable minorities, which may hamper their treatment process as well as their overall well-being.

  16. A platform-independent method for detecting errors in metagenomic sequencing data: DRISEE.

    Directory of Open Access Journals (Sweden)

    Kevin P Keegan

    Full Text Available We provide a novel method, DRISEE (duplicate read inferred sequencing error estimation), to assess sequencing quality (alternatively referred to as "noise" or "error") within and/or between sequencing samples. DRISEE provides positional error estimates that can be used to inform read trimming within a sample. It also provides global (whole sample) error estimates that can be used to identify samples with high or varying levels of sequencing error that may confound downstream analyses, particularly in the case of studies that utilize data from multiple sequencing samples. For shotgun metagenomic data, we believe that DRISEE provides estimates of sequencing error that are more accurate and less constrained by technical limitations than existing methods that rely on reference genomes or the use of scores (e.g. Phred). Here, DRISEE is applied to (non-amplicon) data sets from both the 454 and Illumina platforms. The DRISEE error estimate is obtained by analyzing sets of artifactual duplicate reads (ADRs), a known by-product of both sequencing platforms. We present DRISEE as an open-source, platform-independent method to assess sequencing error in shotgun metagenomic data, and utilize it to discover previously uncharacterized error in de novo sequence data from the 454 and Illumina sequencing platforms.

  17. Specific NIST projects in support of the NIJ Concealed Weapon Detection and Imaging Program

    Science.gov (United States)

    Paulter, Nicholas G.

    1998-12-01

    The Electricity Division of the National Institute of Standards and Technology is developing revised performance standards for hand-held (HH) and walk-through (WT) metal weapon detectors, test procedures and systems for these detectors, and a detection/imaging system for finding concealed weapons. The revised standards will replace the existing National Institute of Justice (NIJ) standards for HH and WT devices and will include detection performance specifications as well as system specifications (environmental conditions, mechanical strength and safety, response reproducibility and repeatability, quality assurance, test reporting, etc.). These system requirements were obtained from the Law Enforcement and Corrections Technology Advisory Council, an advisory council for the NIJ. Reproducible and repeatable test procedures and appropriate measurement systems will be developed for evaluating HH and WT detection performance. A guide to the technology and application of non-eddy-current-based detection/imaging methods (such as acoustic, passive millimeter-wave and microwave, active millimeter-wave and terahertz-wave, x-ray, etc.) will be developed. The Electricity Division is also researching the development of a high-frequency/high-speed (300 GHz to 1 THz) pulse-illuminated, stand-off, video-rate, concealed weapons/contraband imaging system.

  18. [Cardioversion for paroxysmal supraventricular tachycardia during lung surgery in a patient with concealed Wolff-Parkinson-White syndrome].

    Science.gov (United States)

    Sato, Yoshiharu; Nagata, Hirofumi; Inoda, Ayako; Miura, Hiroko; Watanabe, Yoko; Suzuki, Kenji

    2014-10-01

    We report a case of paroxysmal supraventricular tachycardia (PSVT) that occurred during video-assisted thoracoscopic (VATS) lobectomy in a patient with concealed Wolff-Parkinson-White (WPW) syndrome. A 59-year-old man with lung cancer was scheduled for VATS lobectomy under general anesthesia. After inserting a thoracic epidural catheter, general anesthesia was induced with intravenous administration of propofol. Anesthesia was maintained with inhalation of desflurane in an air/oxygen mixture and intravenous infusion of remifentanil. Recurrent PSVT occurred three times, and the last episode of PSVT continued for 50 minutes despite the administration of antiarrhythmic drugs. Synchronized electric shock via adhesive electrode pads on the patient's chest successfully converted the PSVT back to normal sinus rhythm. The remaining course and postoperative period were uneventful. An electrophysiological study performed after hospital discharge detected concealed WPW syndrome, which had contributed to the development of atrioventricular reciprocating tachycardia. Concealed WPW syndrome is a rare but critical complication that could possibly cause lethal atrial tachyarrhythmias during the perioperative period. In the present case, cardioversion using adhesive electrode pads briefly terminated PSVT in a patient with concealed WPW syndrome.

  19. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    Science.gov (United States)

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  20. Nonlinear error dynamics for cycled data assimilation methods

    International Nuclear Information System (INIS)

    Moodey, Alexander J F; Lawless, Amos S; Potthast, Roland W E; Van Leeuwen, Peter Jan

    2013-01-01

    We investigate the error dynamics for cycled data assimilation systems, such that the inverse problem of state determination is solved at t_k, k = 1, 2, 3, …, with a first guess given by the state propagated via a dynamical system model M_k from time t_{k-1} to time t_k. In particular, for nonlinear dynamical systems M_k that are Lipschitz continuous with respect to their initial states, we provide deterministic estimates for the development of the error ‖e_k‖ := ‖x_k^(a) − x_k^(t)‖ between the estimated state x^(a) and the true state x^(t) over time. Clearly, observation error of size δ > 0 leads to an estimation error in every assimilation step. These errors can accumulate, if they are not (a) controlled in the reconstruction and (b) damped by the dynamical system M_k under consideration. A data assimilation method is called stable, if the error in the estimate is bounded in time by some constant C. The key task of this work is to provide estimates for the error ‖e_k‖, depending on the size δ of the observation error, the reconstruction operator R_α, the observation operator H and the Lipschitz constants K^(1) and K^(2) on the lower and higher modes of M_k controlling the damping behaviour of the dynamics. We show that systems can be stabilized by choosing α sufficiently small, but the bound C will then depend on the data error δ in the form c‖R_α‖δ with some constant c. Since ‖R_α‖ → ∞ for α → 0, the constant might be large. Numerical examples for this behaviour in the nonlinear case are provided using a (low-dimensional) Lorenz ‘63 system. (paper)

  1. Decision makers use norms, not cost-benefit analysis, when choosing to conceal or reveal unfair rewards.

    Directory of Open Access Journals (Sweden)

    Marco Heimann

    Full Text Available We introduce the Conceal or Reveal Dilemma, in which individuals receive unfair benefits, and must decide whether to conceal or to reveal this unfair advantage. This dilemma has two important characteristics: it does not lend itself easily to cost-benefit analysis, nor to the application of any strong universal norm. As a consequence, it is ideally suited to the study of interindividual and intercultural variations in moral-economic norms. In this paper we focus on interindividual variations, and we report four studies showing that individuals cannot be swayed by financial incentives to conceal or to reveal, and follow instead fixed, idiosyncratic strategies. We discuss how this result can be extended to individual and cultural variations in the tendency to display or to hide unfair rewards.

  2. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV...... application despite the large sample. Unit root tests based on the IV estimator have better finite sample properties in this context....

  3. Discontinuous Galerkin methods and a posteriori error analysis for heterogeneous diffusion problems

    International Nuclear Information System (INIS)

    Stephansen, A.F.

    2007-12-01

    In this thesis we analyse a discontinuous Galerkin (DG) method and two computable a posteriori error estimators for the linear and stationary advection-diffusion-reaction equation with heterogeneous diffusion. The DG method considered, the SWIP method, is a variation of the Symmetric Interior Penalty Galerkin method. The difference is that the SWIP method uses weighted averages with weights that depend on the diffusion. The a priori analysis shows optimal convergence with respect to mesh-size and robustness with respect to heterogeneous diffusion, which is confirmed by numerical tests. Both a posteriori error estimators are of the residual type and control the energy (semi-)norm of the error. Local lower bounds are obtained showing that almost all indicators are independent of heterogeneities. The exception is for the non-conforming part of the error, which has been evaluated using the Oswald interpolator. The second error estimator is sharper in its estimate with respect to the first one, but it is slightly more costly. This estimator is based on the construction of an H(div)-conforming Raviart-Thomas-Nedelec flux using the conservativeness of DG methods. Numerical results show that both estimators can be used for mesh-adaptation. (author)

  4. Equation-Method for correcting clipping errors in OFDM signals.

    Science.gov (United States)

    Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry

    2016-01-01

    Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak to average power ratios which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA but this causes in-band distortion and introduces bit errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided and new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.
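
    A small numpy sketch of the problem being corrected (not of the Equation-Method itself): clipping the time-domain OFDM signal to limit its peak-to-average power ratio distorts the received constellation. The subcarrier count, modulation and clipping level are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        n_sub = 256
        symbols = (rng.choice([-1, 1], n_sub) + 1j * rng.choice([-1, 1], n_sub)) / np.sqrt(2)  # QPSK

        tx = np.fft.ifft(symbols) * np.sqrt(n_sub)            # time-domain OFDM symbol (unit power)
        papr = np.max(np.abs(tx) ** 2) / np.mean(np.abs(tx) ** 2)

        threshold = 1.6 * np.sqrt(np.mean(np.abs(tx) ** 2))   # clipping level (illustrative)
        mag = np.abs(tx)
        tx_clipped = np.where(mag > threshold, tx / mag * threshold, tx)

        rx = np.fft.fft(tx_clipped) / np.sqrt(n_sub)          # receiver FFT over an ideal channel
        evm = np.sqrt(np.mean(np.abs(rx - symbols) ** 2) / np.mean(np.abs(symbols) ** 2))
        print(f"PAPR before clipping: {10 * np.log10(papr):.1f} dB")
        print(f"in-band distortion (EVM) caused by clipping: {100 * evm:.1f}%")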

  5. Validation of the Body Concealment Scale for Scleroderma (BCSS): Replication in the Scleroderma Patient-centered Intervention Network (SPIN) Cohort

    NARCIS (Netherlands)

    Jewett, L.R.; Kwakkenbos, C.M.C.; Carrier, M.E.; Malcarne, V.L.; Harcourt, D.; Rumsey, N.; Mayes, M.D.; Assassi, S.; Körner, A.; Fox, R.S.; Gholizadeh, S.; Mills, S.D.; Fortune, C.; Thombs, B.D.

    2017-01-01

    Body concealment is an important component of appearance distress for individuals with disfiguring conditions, including scleroderma. The objective was to replicate the validation study of the Body Concealment Scale for Scleroderma (BCSS) among 897 scleroderma patients. The factor structure of the

  6. Error analysis and system improvements in phase-stepping methods for photoelasticity

    International Nuclear Information System (INIS)

    Wenyan Ji

    1997-11-01

    In the past, automated photoelasticity has been demonstrated to be one of the most efficient techniques for determining the complete state of stress in a 3-D component. However, the measurement accuracy, which depends on many aspects of both the theoretical foundations and experimental procedures, has not been studied properly. The objective of this thesis is to reveal the intrinsic properties of the errors, provide methods for reducing them and finally improve the system accuracy. A general formulation for a polariscope with all the optical elements in an arbitrary orientation was deduced using the method of Mueller Matrices. The deduction of this formulation indicates an inherent connectivity among the optical elements and gives knowledge of the errors. In addition, this formulation also shows a common foundation among the photoelastic techniques; consequently, these techniques share many common error sources. The phase-stepping system proposed by Patterson and Wang was used as an exemplar to analyse the errors and provide the proposed improvements. This system can be divided into four parts according to their function, namely the optical system, light source, image acquisition equipment and image analysis software. All the possible error sources were investigated separately and the methods for reducing the influence of the errors and improving the system accuracy are presented. To identify the contribution of each possible error to the final system output, a model was used to simulate the errors and analyse their consequences. Therefore the contribution to the results from different error sources can be estimated quantitatively and finally the accuracy of the systems can be improved. For a conventional polariscope, the system accuracy can be as high as 99.23% for the fringe order and the error is less than 5 degrees for the isoclinic angle. The PSIOS system is limited to the low fringe orders. For a fringe order of less than 1.5, the accuracy is 94.60% for fringe

  7. Study of on-machine error identification and compensation methods for micro machine tools

    International Nuclear Information System (INIS)

    Wang, Shih-Ming; Yu, Han-Jen; Lee, Chun-Yi; Chiu, Hung-Sheng

    2016-01-01

    Micro machining plays an important role in the manufacturing of miniature products which are made of various materials with complex 3D shapes and tight machining tolerance. To further improve the accuracy of a micro machining process without increasing the manufacturing cost of a micro machine tool, an effective machining error measurement method and a software-based compensation method are essential. To avoid introducing additional errors caused by the re-installation of the workpiece, the measurement and compensation methods should be conducted on-machine. In addition, because the contour of a miniature workpiece machined with a micro machining process is very tiny, the measurement method should be non-contact. By integrating the image re-constructive method, camera pixel correction, coordinate transformation, the error identification algorithm, and the trajectory auto-correction method, a vision-based error measurement and compensation method that can inspect the micro machining errors on-machine and automatically generate an error-corrected numerical control (NC) program for error compensation was developed in this study. With the use of the Canny edge detection algorithm and camera pixel calibration, the edges of the contour of a machined workpiece were identified and used to re-construct the actual contour of the workpiece. The actual contour was then mapped to the theoretical contour to identify the actual cutting points and compute the machining errors. With the use of a moving matching window and calculation of the similarity between the actual and theoretical contour, the errors between the actual cutting points and theoretical cutting points were calculated and used to correct the NC program. With the use of the error-corrected NC program, the accuracy of a micro machining process can be effectively improved. To prove the feasibility and effectiveness of the proposed methods, micro-milling experiments on a micro machine tool were conducted, and the results

  8. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until present. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, makes it possible to quantify the accuracy of the solutions. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by the decomposition of the mentioned estimators. This makes the proposed methodology suitable to perform adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid

  9. Re-Normalization Method of Doppler Lidar Signal for Error Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Park, Nakgyu; Baik, Sunghoon; Park, Seungkyu; Kim, Donglyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Dukhyeon [Hanbat National Univ., Daejeon (Korea, Republic of)

    2014-05-15

    In this paper, we present a re-normalization method for reducing the fluctuations of Doppler signals caused by various noise sources, mainly the frequency locking error, in a Doppler lidar system. For the Doppler lidar system, we used an injection-seeded pulsed Nd:YAG laser as the transmitter and an iodine filter as the Doppler frequency discriminator. For the Doppler frequency shift measurement, the transmission ratio using the injection-seeded laser is locked to stabilize the frequency. If the frequency locking system is not perfect, the Doppler signal has some error due to the frequency locking error. The re-normalization of the Doppler signals was performed to reduce this error using an additional laser beam to an iodine cell. We confirmed that the re-normalized Doppler signal is much more stable than the averaged Doppler signal obtained with our calibration method; the standard deviation was reduced to 4.838 × 10⁻³.

  10. [Management of moderate to severe pediatric concealed penis in children by Devine's technique via incision between the penis and scrotum].

    Science.gov (United States)

    Zhang, Xin-Sheng; Liu, Shi-Xiong; Xiang, Xue-Yan; Zhang, Wen-Gang; Tang, Da-Xing

    2014-04-01

    To search for a simple and effective surgical approach to the management of moderate to severe pediatric concealed penis in children. We used Devine's technique via incision between the penis and scrotum in the treatment of 68 cases of moderate to severe pediatric concealed penis. The patients were aged 3 -13 (mean 6.5) years, 30 with moderate and 38 with severe pediatric concealed penis. This strategy achieved good near- and long-term effects and satisfactory appearance of the penis, which was similar to that of circumcision. At 3 months after surgery, the penile length was 3 - 5.2 cm, averaging (2.35 +/- 0.35) cm. Devine's technique via incision between the penis and scrotum is a simple and effective surgical option for moderate to severe pediatric concealed penis in children.

  11. Impulse radar imaging system for concealed object detection

    Science.gov (United States)

    Podd, F. J. W.; David, M.; Iqbal, G.; Hussain, F.; Morris, D.; Osakue, E.; Yeow, Y.; Zahir, S.; Armitage, D. W.; Peyton, A. J.

    2013-10-01

    Electromagnetic systems for imaging concealed objects at checkpoints typically employ radiation at millimetre and terahertz frequencies. These systems have been shown to be effective and provide a sufficiently high resolution image. However, there are difficulties, and current electromagnetic systems have limitations, particularly in accurately differentiating between threat and innocuous objects based on shape, surface emissivity or reflectivity, which are indicative parameters. In addition, water has a high absorption coefficient at millimetre wavelengths and terahertz frequencies, which makes it more difficult for these frequencies to image through thick damp clothing. This paper considers the potential of using ultra wideband (UWB) in the low gigahertz range. The application of this frequency band to security screening appears to be a relatively new field. The business case for implementing the UWB system has been made financially viable by the recent availability of low-cost integrated circuits operating at these frequencies. Although designed for the communication sector, these devices can perform the required UWB radar measurements as well. This paper reports the implementation of a 2 to 5 GHz bandwidth linear array scanner. The paper describes the design and fabrication of transmitter and receiver antenna arrays whose individual elements are a type of antipodal Vivaldi antenna. The antenna's frequency and angular response were simulated in CST Microwave Studio and compared with laboratory measurements. The data pre-processing methods of background subtraction and deconvolution are implemented to improve the image quality. The background subtraction method uses a reference dataset to remove antenna crosstalk and room reflections from the dataset. The deconvolution method uses a Wiener filter to "sharpen" the returned echoes, which improves the resolution of the reconstructed image. The filter uses an impulse response reference dataset and a signal
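
    The two pre-processing steps named here can be illustrated on a synthetic one-dimensional trace; the pulse shape, target positions and regularization constant below are invented for the sketch and are not the system's actual parameters.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 512
        t = np.arange(n)
        impulse = np.exp(-((t - 20) / 6.0) ** 2) * np.cos(0.8 * (t - 20))   # reference pulse
        targets = np.zeros(n)
        targets[200], targets[230] = 1.0, 0.6                               # two point targets
        background = 0.3 * np.exp(-((t - 60) / 30.0) ** 2)                  # crosstalk / room reflection

        trace = np.convolve(targets, impulse, mode="full")[:n] + background + rng.normal(0, 0.01, n)
        reference = background + rng.normal(0, 0.01, n)                     # empty-scene measurement

        # Step 1: background subtraction using the reference dataset.
        clean = trace - reference

        # Step 2: Wiener deconvolution with the impulse-response reference to sharpen the echoes.
        H = np.fft.fft(impulse)
        wiener = np.conj(H) / (np.abs(H) ** 2 + 0.01)       # 0.01 regularizes noisy frequencies
        sharpened = np.real(np.fft.ifft(np.fft.fft(clean) * wiener))

        print("strongest echo recovered near sample", int(np.argmax(np.abs(sharpened))))  # ~200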

  12. Do Children Understand That People Selectively Conceal or Express Emotion?

    Science.gov (United States)

    Hayashi, Hajimu; Shiomi, Yuki

    2015-01-01

    This study examined whether children understand that people selectively conceal or express emotion depending upon the context. We prepared two contexts for a verbal display task for 70 first-graders, 80 third-graders, 64 fifth-graders, and 71 adults. In both contexts, protagonists had negative feelings because of the behavior of the other…

  13. The use of error and uncertainty methods in the medical laboratory.

    Science.gov (United States)

    Oosterhuis, Wytze P; Bayat, Hassan; Armbruster, David; Coskun, Abdurrahman; Freeman, Kathleen P; Kallner, Anders; Koch, David; Mackenzie, Finlay; Migliarino, Gabriel; Orth, Matthias; Sandberg, Sverre; Sylte, Marit S; Westgard, Sten; Theodorsson, Elvar

    2018-01-26

    Error methods - compared with uncertainty methods - offer simpler, more intuitive and practical procedures for calculating measurement uncertainty and conducting quality assurance in laboratory medicine. However, uncertainty methods are preferred in other fields of science as reflected by the guide to the expression of uncertainty in measurement. When laboratory results are used for supporting medical diagnoses, the total uncertainty consists only partially of analytical variation. Biological variation, pre- and postanalytical variation all need to be included. Furthermore, all components of the measuring procedure need to be taken into account. Performance specifications for diagnostic tests should include the diagnostic uncertainty of the entire testing process. Uncertainty methods may be particularly useful for this purpose but have yet to show their strength in laboratory medicine. The purpose of this paper is to elucidate the pros and cons of error and uncertainty methods as groundwork for future consensus on their use in practical performance specifications. Error and uncertainty methods are complementary when evaluating measurement data.

  14. Quantifying geocode location error using GIS methods

    Directory of Open Access Journals (Sweden)

    Gardner Bennett R

    2007-04-01

    Full Text Available Abstract Background: The Metropolitan Atlanta Congenital Defects Program (MACDP) collects maternal address information at the time of delivery for infants and fetuses with birth defects. These addresses have been geocoded by two independent agencies: (1) the Georgia Division of Public Health Office of Health Information and Policy (OHIP) and (2) a commercial vendor. Geographic information system (GIS) methods were used to quantify uncertainty in the two sets of geocodes using orthoimagery and tax parcel datasets. Methods: We sampled 599 infants and fetuses with birth defects delivered during 1994–2002 with maternal residence in either Fulton or Gwinnett County. Tax parcel datasets were obtained from the tax assessor's offices of Fulton and Gwinnett County. High-resolution orthoimagery for these counties was acquired from the U.S. Geological Survey. For each of the 599 addresses we attempted to locate the tax parcel corresponding to the maternal address. If the tax parcel was identified the distance and the angle between the geocode and the residence were calculated. We used simulated data to characterize the impact of geocode location error. In each county 5,000 geocodes were generated and assigned their corresponding Census 2000 tract. Each geocode was then displaced at a random angle by a random distance drawn from the distribution of observed geocode location errors. The census tract of the displaced geocode was determined. We repeated this process 5,000 times and report the percentage of geocodes that resolved into incorrect census tracts. Results: Median location error was less than 100 meters for both OHIP and commercial vendor geocodes; the distribution of angles appeared uniform. Median location error was approximately 35% larger in Gwinnett (a suburban county) relative to Fulton (a county with urban and suburban areas). Location error occasionally caused the simulated geocodes to be displaced into incorrect census tracts; the median percentage

  15. CREME96 and Related Error Rate Prediction Methods

    Science.gov (United States)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular Parallelepiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the Cosmic Ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time, Shapiro wrote the CRUP (Cosmic Ray Upset Program) code based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  16. Hidden Markov Model-based Packet Loss Concealment for Voice over IP

    DEFF Research Database (Denmark)

    Rødbro, Christoffer A.; Murthi, Manohar N.; Andersen, Søren Vang

    2006-01-01

    As voice over IP proliferates, packet loss concealment (PLC) at the receiver has emerged as an important factor in determining voice quality of service. Through the use of heuristic variations of signal and parameter repetition and overlap-add interpolation to handle packet loss, conventional PLC...

  17. Portable concealed weapon detection using millimeter-wave FMCW radar imaging

    Science.gov (United States)

    Johnson, Michael A.; Chang, Yu-Wen

    2001-02-01

    Unobtrusive detection of concealed weapons on persons or in abandoned bags would provide law enforcement a powerful tool to focus resources and increase traffic throughput in high-risk situations. We have developed a fast image scanning 94 GHz radar system that is suitable for portable operation and remote viewing of radar data. This system includes a novel fast image-scanning antenna that allows for the acquisition of medium resolution 3D millimeter wave images of stationary targets with frame times on the order of one second. The 3D radar data allows for potential isolation of concealed weapons from body and environmental clutter such as nearby furniture or other people. The radar is an active system, so image quality is not affected indoors; the emitted power is, however, very low, so there are no health concerns for the operator or targets. The low power operation is still sufficient to penetrate heavy clothing or material. Small system size allows for easy transport and rapid deployment of the system as well as an easy migration path to future hand-held systems.

  18. An Analysis and Quantification Method of Human Errors of Soft Controls in Advanced MCRs

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jae Whan; Jang, Seung Cheol

    2011-01-01

    In this work, a method was proposed for quantifying human errors that may occur during operation executions using soft control. Soft controls of advanced main control rooms (MCRs) have totally different features from conventional controls, and thus they may have different human error modes and occurrence probabilities. It is important to define the human error modes and to quantify the error probability for evaluating the reliability of the system and preventing errors. This work suggests a modified K-HRA method for quantifying error probability

  19. Error Analysis for Fourier Methods for Option Pricing

    KAUST Repository

    Häppölä, Juho

    2016-01-01

    We provide a bound for the error committed when using a Fourier method to price European options when the underlying follows an exponential Levy dynamic. The price of the option is described by a partial integro-differential equation (PIDE

  20. Digital halftoning methods for selectively partitioning error into achromatic and chromatic channels

    Science.gov (United States)

    Mulligan, Jeffrey B.

    1990-01-01

    A method is described for reducing the visibility of artifacts arising in the display of quantized color images on CRT displays. The method is based on the differential spatial sensitivity of the human visual system to chromatic and achromatic modulations. Because the visual system has the highest spatial and temporal acuity for the luminance component of an image, a technique which will reduce luminance artifacts at the expense of introducing high-frequency chromatic errors is sought. A method based on controlling the correlations between the quantization errors in the individual phosphor images is explored. The luminance component is greatest when the phosphor errors are positively correlated, and is minimized when the phosphor errors are negatively correlated. The greatest effect of the correlation is obtained when the intensity quantization step sizes of the individual phosphors have equal luminances. For the ordered dither algorithm, a version of the method can be implemented by simply inverting the matrix of thresholds for one of the color components.
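
    A small sketch of the idea as described here: with ordered dither, using the same threshold matrix for every phosphor makes the quantization errors positively correlated (they add up in luminance), while inverting the matrix for one channel makes them tend to cancel. The Bayer matrix, test image and luminance weights below are illustrative assumptions, not the paper's values.

        import numpy as np

        bayer = np.array([[ 0,  8,  2, 10],
                          [12,  4, 14,  6],
                          [ 3, 11,  1,  9],
                          [15,  7, 13,  5]]) / 16.0 + 1.0 / 32.0   # normalized 4x4 thresholds

        def dither(channel, thresholds):
            h, w = channel.shape
            tiled = np.tile(thresholds, (h // 4 + 1, w // 4 + 1))[:h, :w]
            return (channel > tiled).astype(float)

        rng = np.random.default_rng(0)
        img = np.clip(rng.normal(0.5, 0.1, size=(64, 64, 3)), 0, 1)   # mid-gray RGB test image
        lum_w = np.array([0.3, 0.6, 0.1])                             # illustrative phosphor luminances

        def luminance_error_std(threshold_sets):
            out = np.stack([dither(img[..., c], threshold_sets[c]) for c in range(3)], axis=-1)
            return ((out - img) @ lum_w).std()      # std of the per-pixel luminance error

        print("same thresholds (positively correlated errors):", luminance_error_std([bayer] * 3))
        print("green inverted (negatively correlated errors):  ",
              luminance_error_std([bayer, 1.0 - bayer, bayer]))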

  1. Research on Electronic Transformer Data Synchronization Based on Interpolation Methods and Their Error Analysis

    Directory of Open Access Journals (Sweden)

    Pang Fubin

    2015-09-01

    In this paper the origin of the data synchronization problem is analyzed first, and then three common interpolation methods are introduced to solve the problem. Allowing for the most general situation, the paper divides the interpolation error into harmonic and transient components, and the error expression of each method is derived and analyzed. In addition, the interpolation errors of the linear, quadratic and cubic methods are computed at different sampling rates, harmonic orders and transient components. Further, the interpolation accuracy and computational cost of each method are compared. The research results provide theoretical guidance for selecting the interpolation method in the data synchronization application of electronic transformers.
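
    A minimal sketch of the kind of comparison described, namely the interpolation error of the linear, quadratic and cubic methods when resampling a harmonic signal onto a shifted time grid, is given below. The sampling rate, harmonic order and time shift are illustrative assumptions, not the paper's test conditions.

```python
import numpy as np
from scipy.interpolate import interp1d

fs = 4000.0                      # assumed original sampling rate, Hz
f0 = 50.0                        # fundamental frequency, Hz
harmonic = 13                    # assumed harmonic order under study
t = np.arange(0, 0.04, 1.0 / fs) # two fundamental cycles
x = np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * harmonic * f0 * t)

# Resample onto the time grid of another (asynchronous) channel, shifted by a fraction of a sample.
t_sync = t[:-4] + 0.37 / fs
x_true = np.sin(2 * np.pi * f0 * t_sync) + 0.1 * np.sin(2 * np.pi * harmonic * f0 * t_sync)

for kind in ("linear", "quadratic", "cubic"):
    x_hat = interp1d(t, x, kind=kind)(t_sync)
    rms = np.sqrt(np.mean((x_hat - x_true) ** 2))
    print(f"{kind:9s} interpolation, RMS error = {rms:.2e}")
```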

  2. Out and healthy: Being more "out" about a concealable stigmatized identity may boost the health benefits of social support.

    Science.gov (United States)

    Weisz, Bradley M; Quinn, Diane M; Williams, Michelle K

    2016-12-01

    This research examined whether the relationship between perceived social support and health would be moderated by level of outness for people living with different concealable stigmatized identities (mental illness, substance abuse, domestic violence, rape, or childhood abuse). A total of 394 people living with a concealable stigmatized identity completed a survey. Consistent with hypotheses, at high levels of outness, social support predicted better health; at low levels of outness, social support was less predictive of health. People concealing a stigmatized identity may only be able to reap the health benefits of social support if they are "out" about the stigmatized identity. © The Author(s) 2015.

  3. The problem of assessing landmark error in geometric morphometrics: theory, methods, and modifications.

    Science.gov (United States)

    von Cramon-Taubadel, Noreen; Frazier, Brenda C; Lahr, Marta Mirazón

    2007-09-01

    Geometric morphometric methods rely on the accurate identification and quantification of landmarks on biological specimens. As in any empirical analysis, the assessment of inter- and intra-observer error is desirable. A review of methods currently being employed to assess measurement error in geometric morphometrics was conducted and three general approaches to the problem were identified. One such approach employs Generalized Procrustes Analysis to superimpose repeatedly digitized landmark configurations, thereby establishing whether repeat measures fall within an acceptable range of variation. The potential problem of this error assessment method (the "Pinocchio effect") is demonstrated and its effect on error studies discussed. An alternative approach involves employing Euclidean distances between the configuration centroid and repeat measures of a landmark to assess the relative repeatability of individual landmarks. This method is also potentially problematic as the inherent geometric properties of the specimen can result in misleading estimates of measurement error. A third approach involved the repeated digitization of landmarks with the specimen held in a constant orientation to assess individual landmark precision. This latter approach is an ideal method for assessing individual landmark precision, but is restrictive in that it does not allow for the incorporation of instrumentally defined or Type III landmarks. Hence, a revised method for assessing landmark error is proposed and described with the aid of worked empirical examples. (c) 2007 Wiley-Liss, Inc.

  4. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Directory of Open Access Journals (Sweden)

    Huiliang Cao

    2016-01-01

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  5. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-07

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  6. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-01

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455

  7. Concealment tactics among HIV-positive nurses in Uganda

    OpenAIRE

    Kyakuwa, M.; Hardon, A.

    2012-01-01

    This paper is based on two-and-a-half years of ethnographic fieldwork in two rural Ugandan health centres during a period of ART scale-up. Around one-third of the nurses in these two sites were themselves HIV-positive but most concealed their status. We describe how a group of HIV-positive nurses set up a secret circle to talk about their predicament as HIV-positive healthcare professionals and how they developed innovative care technologies to overcome the skin rashes caused by ART that thre...

  8. HUMAN ERROR QUANTIFICATION USING PERFORMANCE SHAPING FACTORS IN THE SPAR-H METHOD

    Energy Technology Data Exchange (ETDEWEB)

    Harold S. Blackman; David I. Gertman; Ronald L. Boring

    2008-09-01

    This paper describes a cognitively based human reliability analysis (HRA) quantification technique for estimating the human error probabilities (HEPs) associated with operator and crew actions at nuclear power plants. The method described here, the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method, was developed to aid in characterizing and quantifying human performance at nuclear power plants. The intent was to develop a defensible method that would consider all factors that may influence performance. In the SPAR-H approach, calculation of HEP rates is especially straightforward, starting with pre-defined nominal error rates for cognitive vs. action-oriented tasks and applying performance shaping factor multipliers to those nominal error rates.
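
    The arithmetic described in the last sentence can be sketched as follows. The nominal HEPs (1E-2 for diagnosis, 1E-3 for action) and the adjustment applied when three or more negative PSFs are present follow our reading of the SPAR-H documentation and should be checked against it; the PSF multipliers in the example are purely illustrative.

```python
def spar_h_hep(nominal_hep, psf_multipliers):
    """Basic SPAR-H arithmetic: nominal HEP scaled by the composite PSF.

    When three or more PSFs are negative (multiplier > 1), an adjustment factor
    is applied so the resulting probability stays below 1.0 (hedged: this follows
    our reading of the SPAR-H report and should be verified against it).
    """
    composite = 1.0
    negative = 0
    for m in psf_multipliers:
        composite *= m
        if m > 1.0:
            negative += 1
    if negative >= 3:
        return nominal_hep * composite / (nominal_hep * (composite - 1.0) + 1.0)
    return min(nominal_hep * composite, 1.0)

# Illustrative values only: nominal diagnosis HEP 1e-2 with three degraded PSFs,
# and a nominal action HEP 1e-3 with one beneficial PSF.
print(spar_h_hep(1e-2, [10.0, 2.0, 5.0]))   # assumed multipliers for time, stress, complexity
print(spar_h_hep(1e-3, [1.0, 1.0, 0.5]))
```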

  9. Statistical analysis with measurement error or misclassification strategy, method and application

    CERN Document Server

    Yi, Grace Y

    2017-01-01

    This monograph on measurement error and misclassification covers a broad range of problems and emphasizes unique features in modeling and analyzing problems arising from medical research and epidemiological studies. Many measurement error and misclassification problems have been addressed in various fields over the years as well as with a wide spectrum of data, including event history data (such as survival data and recurrent event data), correlated data (such as longitudinal data and clustered data), multi-state event data, and data arising from case-control studies. Statistical Analysis with Measurement Error or Misclassification: Strategy, Method and Application brings together assorted methods in a single text and provides an update of recent developments for a variety of settings. Measurement error effects and strategies of handling mismeasurement for different models are closely examined in combination with applications to specific problems. Readers with diverse backgrounds and objectives can utilize th...

  10. Detection method of nonlinearity errors by statistical signal analysis in heterodyne Michelson interferometer.

    Science.gov (United States)

    Hu, Juju; Hu, Haijiang; Ji, Yinghua

    2010-03-15

    Periodic nonlinearity, ranging from a few nanometers to tens of nanometers, limits the use of the heterodyne interferometer in high-accuracy measurement. A novel method is studied to detect the nonlinearity errors, based on electrical subdivision and statistical signal analysis, in a heterodyne Michelson interferometer. With the micropositioning platform moving at uniform velocity, the method detects the nonlinearity errors using regression analysis and jackknife estimation. Simulation analysis shows that the method can estimate the influence of nonlinearity errors and other noise on dimensional measurement in the heterodyne Michelson interferometer.
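
    As an illustration of the estimation machinery mentioned above, the sketch below applies a generic delete-one jackknife to the slope of a linear fit of stage displacement versus time. The simulated nonlinearity amplitude and the choice of statistic are assumptions and do not reproduce the paper's model.

```python
import numpy as np

def jackknife(data, statistic):
    """Delete-one jackknife: bias-corrected estimate and standard error of a statistic."""
    data = np.asarray(data)
    n = len(data)
    theta_full = statistic(data)
    theta_i = np.array([statistic(np.delete(data, i, axis=0)) for i in range(n)])
    theta_dot = theta_i.mean()
    bias = (n - 1) * (theta_dot - theta_full)
    se = np.sqrt((n - 1) / n * np.sum((theta_i - theta_dot) ** 2))
    return theta_full - bias, se

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
# Displacement of a uniformly moving stage plus a small periodic nonlinearity and noise.
x = 5.0 * t + 2e-3 * np.sin(2 * np.pi * 8 * t) + 1e-3 * rng.standard_normal(t.size)

slope = lambda d: np.polyfit(d[:, 0], d[:, 1], 1)[0]
est, se = jackknife(np.column_stack([t, x]), slope)
print(f"jackknife velocity estimate = {est:.4f}, standard error = {se:.1e}")
```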

  11. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  12. Concealed Threat Detection at Multiple Frames-per-second

    Energy Technology Data Exchange (ETDEWEB)

    Chang, J T

    2005-11-08

    In this LDRD project, our research purpose is to investigate the science and technology necessary to enable real-time array imaging as a rapid way to detect hidden threats through obscurants such as smoke, fog, walls, doors, and clothing. The goal of this research is to augment the capabilities of protective forces in concealed threat detection. In the current context, threats include people as well as weapons. In most cases, security personnel must make very fast assessments of a threat based upon a limited amount of data. Among other attributes, UWB has been shown and quantified to penetrate and propagate through many materials (wood, some concretes, non-metallic building materials, some soils, etc.) while maintaining high range resolution. We have built collaborations with university partners and government agencies. We have considered the impact of psychometrics on target recognition and identification. Specifically, we have formulated images in real time that will engage the user's vision system in a more active way to enhance image interpretation capabilities. In this project, we are researching the use of real-time processing (field-programmable gate arrays) integrated with high-resolution (cm-scale), ultra-wideband (UWB) electromagnetic signals for imaging personnel through smoke and walls. We evaluated the ability of real-time UWB imaging to detect smaller objects, such as concealed weapons carried by the obscured personnel. We also examined the cognitive interpretation process of real-time UWB electromagnetic images.

  13. The Invisible Work of Closeting: A Qualitative Study About Strategies Used by Lesbian and Gay Persons to Conceal Their Sexual Orientation.

    Science.gov (United States)

    Malterud, Kirsti; Bjorkman, Mari

    2016-10-01

    The last decades have offered substantial improvement regarding human rights for lesbian and gay (LG) persons. Yet LG persons are often in the closet, concealing their sexual orientation. We present a qualitative study based on 182 histories submitted from 161 LG individuals to a Web site. The aim was to explore experiences of closeting among LG persons in Norway. A broad range of strategies was used for closeting, even among individuals who generally considered themselves to be out of the closet. Concealment was enacted by blunt denial, clever avoidance, or subtle vagueness. Other strategies included changing or eliminating the pronoun or name of the partner in ongoing conversations. Context-dependent concealment, differentiating between persons, situations, or arenas, was repeatedly applied for security or convenience. We propose a shift from "being in the closet" to "situated concealment of sexual orientation."

  14. Improvement of spatial discretization error on the semi-analytic nodal method using the scattered source subtraction method

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Tatsumi, Masahiro

    2006-01-01

    In this paper, the scattered source subtraction (SSS) method is newly proposed to improve the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. In the SSS method, the scattered source is subtracted from both sides of the diffusion or transport equation to make the spatial variation of the source term small. The same neutron balance equation is still used in the SSS method. Since the SSS method just modifies the coefficients of the node coupling equations (those used to evaluate the response of partial currents), its implementation is easy. The validity of the present method is verified through test calculations carried out in PWR multi-assembly configurations. The calculation results show that the SSS method can significantly improve the spatial discretization error. Since the SSS method does not have any negative impact on execution time, convergence behavior or memory requirements, it will be useful for reducing the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. (author)

  15. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in the social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.

  16. The regression-calibration method for fitting generalized linear models with additive measurement error

    OpenAIRE

    James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll

    2003-01-01

    This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
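
    A minimal sketch of regression calibration with replicate error-prone proxies is shown below (in Python with statsmodels rather than the Stata setting of the cited work): the unobserved covariate is replaced by its best linear predictor given the replicate mean, and a GLM is then fitted to the calibrated values. The simulated logistic outcome and the variance components are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(0.0, 1.0, n)                     # true covariate (unobserved)
w = x[:, None] + rng.normal(0.0, 0.7, (n, 2))   # two error-prone replicates
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x))))

# Regression calibration: replace the unobserved x by E[x | mean of replicates].
w_bar = w.mean(axis=1)
k = w.shape[1]
sigma_u2 = np.mean(np.var(w, axis=1, ddof=1))    # measurement-error variance
sigma_x2 = np.var(w_bar, ddof=1) - sigma_u2 / k  # variance of the true covariate
lam = sigma_x2 / (sigma_x2 + sigma_u2 / k)       # attenuation (reliability) factor
x_cal = w_bar.mean() + lam * (w_bar - w_bar.mean())

naive = sm.GLM(y, sm.add_constant(w_bar), family=sm.families.Binomial()).fit()
calib = sm.GLM(y, sm.add_constant(x_cal), family=sm.families.Binomial()).fit()
print("naive slope     ", naive.params[1])
print("calibrated slope", calib.params[1])
```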

  17. Knowledge-Based Trajectory Error Pattern Method Applied to an Active Force Control Scheme

    Directory of Open Access Journals (Sweden)

    Endra Pitowarno, Musa Mailah, Hishamuddin Jamaluddin

    2012-08-01

    The active force control (AFC) method is known as a robust control scheme that dramatically enhances the performance of a robot arm, particularly in compensating disturbance effects. The main task of the AFC method is to estimate the inertia matrix in the feedback loop to provide the correct (motor) torque required to cancel out these disturbances. Several intelligent control schemes have already been introduced to enhance the estimation of the inertia matrix, such as those using neural networks, iterative learning and fuzzy logic. In this paper, we propose an alternative scheme called the Knowledge-Based Trajectory Error Pattern Method (KBTEPM) to suppress the trajectory track error of the AFC scheme. The knowledge is developed from the trajectory track error characteristic based on the previous experimental results of the crude approximation method. It produces a unique, new and desirable error pattern when a trajectory command is forced. An experimental study was performed using simulation work on the AFC scheme with KBTEPM applied to a two-planar manipulator, in which a set of rule-based algorithms is derived. A number of previous AFC schemes are also reviewed as benchmarks. The simulation results show that the AFC-KBTEPM scheme successfully reduces the trajectory track error significantly, even in the presence of the introduced disturbances. Key words: active force control, estimated inertia matrix, robot arm, trajectory error pattern, knowledge-based.

  18. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Fritsch, Daniel S.; Raghavan, Suraj; Boxwala, Aziz; Earnhart, Jon; Tracton, Gregg; Cullip, Timothy; Chaney, Edward L.

    1997-01-01

    Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. High-quality planning images, such as

  19. Concealed semantic and episodic autobiographical memory electrified.

    Science.gov (United States)

    Ganis, Giorgio; Schendan, Haline E

    2012-01-01

    Electrophysiology-based concealed information tests (CIT) try to determine whether somebody possesses concealed information about a crime-related item (probe) by comparing event-related potentials (ERPs) between this item and comparison items (irrelevants). Although the broader field is sometimes referred to as "memory detection," little attention has been paid to the precise type of underlying memory involved. This study begins addressing this issue by examining the key distinction between semantic and episodic memory in the autobiographical domain within a CIT paradigm. This study also addresses the issue of whether multiple repetitions of the items over the course of the session habituate the brain responses. Participants were tested in a 3-stimulus CIT with semantic autobiographical probes (their own date of birth) and episodic autobiographical probes (a secret date learned just before the study). Results dissociated these two memory conditions on several ERP components. Semantic probes elicited a smaller frontal N2 than episodic probes, consistent with the idea that the frontal N2 decreases with greater pre-existing knowledge about the item. Likewise, semantic probes elicited a smaller central N400 than episodic probes. Semantic probes also elicited a larger P3b than episodic probes because of their richer meaning. In contrast, episodic probes elicited a larger late positive complex (LPC) than semantic probes, because of the recent episodic memory associated with them. All these ERPs showed a difference between probes and irrelevants in both memory conditions, except for the N400, which showed a difference only in the semantic condition. Finally, although repetition affected the ERPs, it did not reduce the difference between probes and irrelevants. These findings show that the type of memory associated with a probe has both theoretical and practical importance for CIT research.

  20. Concealed semantic and episodic autobiographical memory electrified

    Directory of Open Access Journals (Sweden)

    Giorgio Ganis

    2013-01-01

    Electrophysiology-based concealed information tests (CIT) try to determine whether somebody possesses concealed information about a probe item by comparing event-related potentials (ERPs) between this item and comparison items (irrelevants). Although the broader field is sometimes referred to as "memory detection," little attention has been paid to the precise type of underlying memory involved. This study begins addressing this issue by examining the key distinction between semantic and episodic memory in the autobiographical domain within a CIT paradigm. This study also addressed the issue of whether multiple repetitions of the items over the course of the session habituate the brain responses. Participants were tested in a 3-stimulus CIT with semantic autobiographical probes (their own date of birth) and episodic autobiographical probes (a secret date learned just before the study). Results dissociated these two memory conditions on several ERP components. Semantic probes elicited a smaller frontal N2 than episodic probes, consistent with the idea that the frontal N2 decreases with greater pre-existing semantic knowledge about the item. Likewise, semantic probes elicited a smaller central N400 than episodic probes. Semantic probes also elicited a larger P3b than episodic probes because of their richer meaning. In contrast, episodic probes elicited a larger late positive component (LPC) than semantic probes, because of the recent episodic memory associated with them. All these ERPs showed a difference between probes and irrelevants in both memory conditions, except for the N400, which showed a difference only in the semantic condition. Finally, although repetition affected the ERPs, it did not reduce the difference between probes and irrelevants. Thus, the type of memory associated with a probe has both theoretical and practical importance for CIT research.

  1. Concealed semantic and episodic autobiographical memory electrified

    Science.gov (United States)

    Ganis, Giorgio; Schendan, Haline E.

    2013-01-01

    Electrophysiology-based concealed information tests (CIT) try to determine whether somebody possesses concealed information about a crime-related item (probe) by comparing event-related potentials (ERPs) between this item and comparison items (irrelevants). Although the broader field is sometimes referred to as “memory detection,” little attention has been paid to the precise type of underlying memory involved. This study begins addressing this issue by examining the key distinction between semantic and episodic memory in the autobiographical domain within a CIT paradigm. This study also addresses the issue of whether multiple repetitions of the items over the course of the session habituate the brain responses. Participants were tested in a 3-stimulus CIT with semantic autobiographical probes (their own date of birth) and episodic autobiographical probes (a secret date learned just before the study). Results dissociated these two memory conditions on several ERP components. Semantic probes elicited a smaller frontal N2 than episodic probes, consistent with the idea that the frontal N2 decreases with greater pre-existing knowledge about the item. Likewise, semantic probes elicited a smaller central N400 than episodic probes. Semantic probes also elicited a larger P3b than episodic probes because of their richer meaning. In contrast, episodic probes elicited a larger late positive complex (LPC) than semantic probes, because of the recent episodic memory associated with them. All these ERPs showed a difference between probes and irrelevants in both memory conditions, except for the N400, which showed a difference only in the semantic condition. Finally, although repetition affected the ERPs, it did not reduce the difference between probes and irrelevants. These findings show that the type of memory associated with a probe has both theoretical and practical importance for CIT research. PMID:23355816

  2. Minority Stress and Same-Sex Relationship Satisfaction: The Role of Concealment Motivation.

    Science.gov (United States)

    Pepping, Christopher A; Cronin, Timothy J; Halford, W Kim; Lyons, Anthony

    2018-04-30

    Most lesbian, gay, and bisexual (LGB) people want a stable, satisfying romantic relationship. Although many of the predictors of relationship outcomes are similar to those of heterosexual couples, same-sex couples face some additional challenges associated with minority stress that also impact upon relationship quality. Here, we investigate the association between minority stressors and relationship quality in a sample of 363 adults (M age = 30.37, SD = 10.78) currently in a same-sex romantic relationship. Internalized homophobia and difficulties accepting one's LGB identity were each negatively associated with relationship satisfaction via heightened concealment motivation. We also examined the protective role of identity affirmation on relationship quality, finding a direct positive relationship between the two variables. Minority stressors were negatively associated with couple relationship satisfaction via heightened concealment motivation. The finding that identity affirmation directly predicted increased couple satisfaction also highlights the important role of protective factors in same-sex couple relationships. © 2018 Family Process Institute.

  3. Indian program for development of technologies relevant to reliable, non-intrusive, concealed-contraband detection

    International Nuclear Information System (INIS)

    Auluck, S.K.H.

    2007-01-01

    Generating the capability for reliable, non-intrusive detection of concealed contraband, particularly organic contraband like explosives and narcotics, has become a national priority. This capability spans a spectrum of technologies. If a technology mission addressing the needs of a highly sophisticated technology like PFNA is set up, the capabilities acquired would be adequate to meet the requirements of many other sets of technologies. This forms the background of the Indian program for development of technologies relevant to reliable, non-intrusive, concealed-contraband detection. One of the central themes of the technology development programs would be modularization of the neutron source and detector technologies, so that common elements can be combined in different ways to meet a variety of application requirements. (author)

  4. The Psychological Implications of Concealing a Stigma: A Cognitive-Affective-Behavioral Model

    Science.gov (United States)

    Pachankis, John E.

    2007-01-01

    Many assume that individuals with a hidden stigma escape the difficulties faced by individuals with a visible stigma. However, recent research has shown that individuals with a concealable stigma also face considerable stressors and psychological challenges. The ambiguity of social situations combined with the threat of potential discovery makes…

  5. Incremental Volumetric Remapping Method: Analysis and Error Evaluation

    International Nuclear Information System (INIS)

    Baptista, A. J.; Oliveira, M. C.; Rodrigues, D. M.; Menezes, L. F.; Alves, J. L.

    2007-01-01

    In this paper the error associated with the remapping problem is analyzed. A range of numerical results that assess the performance of three different remapping strategies, applied to FE meshes that are typically used in sheet metal forming simulation, is evaluated. One of the selected strategies is the previously presented Incremental Volumetric Remapping (IVR) method, which was implemented in the in-house code DD3TRIM. The IVR method is founded on the premise that state variables in all points associated to a Gauss volume of a given element are equal to the state variable quantities placed in the corresponding Gauss point. Hence, given a typical remapping procedure between a donor and a target mesh, the variables to be associated to a target Gauss volume (and point) are determined by a weighted average. The weight function is the percentage of the Gauss volume of each donor element that is located inside the target Gauss volume. The calculation of the intersecting volumes between the donor and target Gauss volumes is attained incrementally, for each target Gauss volume, by means of a discrete approach. The other two remapping strategies selected are based on the interpolation/extrapolation of variables by using the finite element shape functions or moving least squares interpolants. The performance of the three different remapping strategies is assessed with two tests. The first remapping test was taken from a literature work. The test consists in remapping successively a rotating symmetrical mesh, throughout N increments, over an angular span of 90 deg. The second remapping error evaluation test consists of remapping an irregular element shape target mesh from a given regular element shape donor mesh and proceeding with the inverse operation. In this second test the computational effort is also measured. The results showed that the error level associated with IVR can be very low and with a stable evolution along the number of remapping procedures when compared with the

  6. [Ladder step strategy for surgical repair of congenital concealed penis in children].

    Science.gov (United States)

    Wang, Fu-Ran; Zhong, Hong-Ji; Chen, Yi; Zhao, Jun-Feng; Li, Yan

    2016-11-01

    To assess the feasibility of the ladder step strategy in surgical repair of congenital concealed penis in children. This study included 52 children with congenital concealed penis treated in the past two years by surgical repair using the ladder step strategy, which consists of five main steps: cutting the narrow ring of the foreskin, degloving the penile skin, fixing the penile skin at the base, covering the penile shaft, and reshaping the prepuce. The perioperative data of the patients were prospectively collected and statistically described. Of the 52 patients, 20 needed remodeling of the frenulum and 27 received longitudinal incision in the penoscrotal junction to expose and deglove the penile shaft. The advanced scrotal flap technique was applied in 8 children to cover the penile shaft without tension, the pedicled foreskin flap technique employed in 11 to repair the penile skin defect, and excision of the webbed skin of the ventral penis performed in another 44 to remodel the penoscrotal angle. The operation time, blood loss, and postoperative hospital stay were 40-100 minutes, 5-30 ml, and 3-6 days, respectively. Wound bleeding and infection occurred in 1 and 5 cases, respectively. Follow-up examinations at 3 and 6 months after surgery showed that all the children had a satisfactory penile appearance except for some minor complications (2 cases of penile retraction, 2 cases of redundant ventral skin, and 1 case of iatrogenic penile curvature). The ladder step strategy for surgical repair of congenital concealed penis in children is a simple procedure with minor injury and satisfactory appearance of the penis.

  7. Regulatory focus moderates the social performance of individuals who conceal a stigmatized identity

    NARCIS (Netherlands)

    Newheiser, Anna-Kaisa; Barreto, Manuela; Ellemers, Naomi; Derks, Belle; Scheepers, Daan

    2015-01-01

    People often choose to hide a stigmatized identity to avoid bias. However, hiding stigma can disrupt social interactions. We considered whether regulatory focus qualifies the social effects of hiding stigma by examining interactions in which stigmatized participants concealed a devalued identity

  8. A Novel Error Model of Optical Systems and an On-Orbit Calibration Method for Star Sensors

    Directory of Open Access Journals (Sweden)

    Shuang Wang

    2015-12-01

    In order to improve the on-orbit measurement accuracy of star sensors, the effects of image-plane rotary error, image-plane tilt error and distortions of optical systems resulting from the on-orbit thermal environment were studied in this paper. Since these issues affect the precision of star image point positions, a novel measurement error model based on the traditional error model is explored. Due to the orthonormal characteristics of the image-plane rotary-tilt errors and the strong nonlinearity among these error parameters, it is difficult to calibrate all the parameters simultaneously. To solve this difficulty, for the new error model, a modified two-step calibration method based on the Extended Kalman Filter (EKF) and Least Square Methods (LSM) is presented. The former is used to calibrate the main point drift, focal length error and distortions of optical systems, while the latter estimates the image-plane rotary-tilt errors. With this calibration method, the precision of the star image point position influenced by the above errors is greatly improved from 15.42% to 1.389%. Finally, the simulation results demonstrate that the presented measurement error model for star sensors has higher precision. Moreover, the proposed two-step method can effectively calibrate the model error parameters, and the calibration precision of on-orbit star sensors is also improved obviously.

  9. Quantitative analysis of scaling error compensation methods in dimensional X-ray computed tomography

    DEFF Research Database (Denmark)

    Müller, P.; Hiller, Jochen; Dai, Y.

    2015-01-01

    X-ray Computed Tomography (CT) has become an important technology for quality control of industrial components. As with other technologies, e.g., tactile coordinate measurements or optical measurements, CT is influenced by numerous quantities which may have a negative impact on the accuracy … errors of the manipulator system (magnification axis). This article also introduces a new compensation method for scaling errors using a database of reference scaling factors and discusses its advantages and disadvantages. In total, three methods for the correction of scaling errors – using the CT ball...
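
    The generic idea of a scaling-factor correction, not the article's database method, can be sketched as follows: a scale factor is determined from a calibrated reference object measured in the CT volume and then applied to subsequent dimensional measurements. All numbers are illustrative.

```python
def scale_factor(reference_length_mm, measured_length_mm):
    """Scaling factor determined from a calibrated reference object (e.g. a ball bar)."""
    return reference_length_mm / measured_length_mm

def correct(measurement_mm, s):
    """Apply the voxel-scaling correction to a dimensional measurement."""
    return s * measurement_mm

# Illustrative values: calibrated ball-bar length vs. its length measured in the CT volume.
s = scale_factor(49.9980, 50.0305)
print(f"scaling factor = {s:.6f}")
print(f"corrected workpiece diameter = {correct(12.5078, s):.4f} mm")
```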

  10. SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER

    International Nuclear Information System (INIS)

    QIAN, S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.

    2007-01-01

    Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) an independent slide pitch test by use of a non-tilted reference beam, (3) a non-tilted reference test combined with a tilted sample, (4) penta-prism scanning mode without a reference beam correction, (5) a non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.

  11. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    Science.gov (United States)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
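
    The two advocated statistics are straightforward to compute from the empirical CDF of absolute errors, as in the sketch below; the simulated, skewed error sample and the 1 kcal/mol threshold are illustrative assumptions.

```python
import numpy as np

def ecdf_statistics(errors, threshold, confidence=0.95):
    """Probability of an absolute error below `threshold` and the absolute-error
    quantile at the given confidence level, from the empirical CDF."""
    abs_err = np.abs(np.asarray(errors, dtype=float))
    p_below = np.mean(abs_err <= threshold)
    q_conf = np.quantile(abs_err, confidence)
    return p_below, q_conf

# Illustrative benchmark errors (kcal/mol), skewed and not zero-centered on purpose.
rng = np.random.default_rng(3)
errors = rng.lognormal(mean=0.0, sigma=0.8, size=500) - 0.5

p, q = ecdf_statistics(errors, threshold=1.0)
print(f"P(|error| <= 1.0 kcal/mol) = {p:.2f}")
print(f"95% of absolute errors are below {q:.2f} kcal/mol")
```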

  12. Reduction of very large reaction mechanisms using methods based on simulation error minimization

    Energy Technology Data Exchange (ETDEWEB)

    Nagy, Tibor; Turanyi, Tamas [Institute of Chemistry, Eoetvoes University (ELTE), P.O. Box 32, H-1518 Budapest (Hungary)

    2009-02-15

    A new species reduction method called the Simulation Error Minimization Connectivity Method (SEM-CM) was developed. According to the SEM-CM algorithm, a mechanism building procedure is started from the important species. Strongly connected sets of species, identified on the basis of the normalized Jacobian, are added and several consistent mechanisms are produced. The combustion model is simulated with each of these mechanisms and the mechanism causing the smallest error (i.e. deviation from the model that uses the full mechanism), considering the important species only, is selected. Then, in several steps other strongly connected sets of species are added, the size of the mechanism is gradually increased and the procedure is terminated when the error becomes smaller than the required threshold. A new method for the elimination of redundant reactions is also presented, which is called the Principal Component Analysis of Matrix F with Simulation Error Minimization (SEM-PCAF). According to this method, several reduced mechanisms are produced by using various PCAF thresholds. The reduced mechanism having the least CPU time requirement among the ones having almost the smallest error is selected. Application of SEM-CM and SEM-PCAF together provides a very efficient way to eliminate redundant species and reactions from large mechanisms. The suggested approach was tested on a mechanism containing 6874 irreversible reactions of 345 species that describes methane partial oxidation to high conversion. The aim is to accurately reproduce the concentration-time profiles of 12 major species with less than 5% error at the conditions of an industrial application. The reduced mechanism consists of 246 reactions of 47 species and its simulation is 116 times faster than using the full mechanism. The SEM-CM was found to be more effective than the classic Connectivity Method, and also than the DRG, two-stage DRG, DRGASA, basic DRGEP and extended DRGEP methods. (author)

  13. M/T method based incremental encoder velocity measurement error analysis and self-adaptive error elimination algorithm

    DEFF Research Database (Denmark)

    Chen, Yangyang; Yang, Ming; Long, Jiang

    2017-01-01

    For motor control applications, the speed loop performance largely depends on the accuracy of the speed feedback signal. The M/T method, due to its high theoretical accuracy, is the most widely used method for incremental-encoder-based speed measurement. However, the inherent encoder optical grating error...
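
    For context, the basic M/T arithmetic can be sketched as follows: M1 encoder pulses are counted over a gate synchronized to encoder edges, the true gate duration is measured as M2 periods of a high-frequency clock, and the speed follows from the two counts. The encoder line count, clock frequency and pulse counts below are illustrative assumptions; the paper's error analysis and self-adaptive elimination algorithm are not reproduced.

```python
def mt_speed_rpm(m1_encoder_pulses, m2_clock_pulses, clock_freq_hz, pulses_per_rev):
    """Basic M/T speed estimate: M1 encoder pulses observed over a gate whose true
    duration is measured as M2 periods of a high-frequency clock."""
    gate_time_s = m2_clock_pulses / clock_freq_hz
    revs = m1_encoder_pulses / pulses_per_rev
    return 60.0 * revs / gate_time_s

# Illustrative numbers: 2500-line encoder, 10 MHz clock, ~10 ms nominal gate.
rpm = mt_speed_rpm(m1_encoder_pulses=625, m2_clock_pulses=100_040,
                   clock_freq_hz=10e6, pulses_per_rev=2500)
print(f"estimated speed = {rpm:.1f} rpm")
```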

  14. Fiber Optic Coupled Raman Based Detection of Hazardous Liquids Concealed in Commercial Products

    Directory of Open Access Journals (Sweden)

    Michael L. Ramírez-Cedeño

    2012-01-01

    Raman spectroscopy has been widely proposed as a technique to nondestructively and noninvasively interrogate the contents of glass and plastic bottles. In this work, Raman spectroscopy is used in a concealed threat scenario where hazardous liquids have been intentionally mixed with common consumer products to mask their appearance or spectra. The hazardous liquids under consideration included the chemical warfare agent (CWA) simulant triethyl phosphate (TEP), hydrogen peroxide, and acetone as representative of toxic industrial compounds (TICs). Fiber optic coupled Raman spectroscopy (FOCRS) and partial least squares (PLS) algorithm analysis were used to quantify hydrogen peroxide in whiskey, acetone in perfume, and TEP in colored beverages. Spectral data were used to evaluate whether the hazardous liquids can be successfully concealed in consumer products. Results demonstrated that FOCRS systems were able to discriminate between nonhazardous consumer products and mixtures with hazardous materials at concentrations lower than 5%.

  15. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    Directory of Open Access Journals (Sweden)

    Jeffrey R Kugelman

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods.

  16. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    Science.gov (United States)

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods. PMID:28182717

  17. Tight Error Bounds for Fourier Methods for Option Pricing for Exponential Levy Processes

    KAUST Repository

    Crocce, Fabian

    2016-01-06

    Prices of European options whose underlying asset is driven by a Lévy process are solutions to partial integro-differential equations (PIDEs) that generalise the Black-Scholes equation by incorporating a non-local integral term to account for the discontinuities in the asset price. The Lévy–Khintchine formula provides an explicit representation of the characteristic function of a Lévy process (cf. [6]): one can derive an exact expression for the Fourier transform of the solution of the relevant PIDE. The rapid rate of convergence of the trapezoid quadrature and the speedup provide efficient methods for evaluating option prices, possibly for a range of parameter configurations simultaneously. A couple of works have been devoted to the error analysis and parameter selection for these transform-based methods. In [5] several payoff functions are considered for a rather general set of models, whose characteristic function is assumed to be known. [4] presents the framework and theoretical approach for the error analysis, and establishes polynomial convergence rates for approximations of the option prices. [1] presents FT-related methods with a curved integration contour. The classical flat FT-methods have, on the other hand, been extended to option pricing problems beyond the European framework [3]. We present a methodology for studying and bounding the error committed when using FT methods to compute option prices. We also provide a systematic way of choosing the parameters of the numerical method, minimising the error bound and guaranteeing adherence to a pre-described error tolerance. We focus on exponential Lévy processes that may be either diffusive or pure-jump in type. Our contribution is to derive a tight error bound for a Fourier transform method when pricing options under risk-neutral Lévy dynamics. We present a simplified bound that separates the contributions of the payoff and of the process in an easily processed and extensible product form that
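
    A minimal sketch of a flat-contour Fourier method with trapezoid quadrature is given below, using the Black-Scholes characteristic function as the simplest exponential Lévy example and a Carr-Madan-type damped representation of the call price. The damping parameter, truncation point and quadrature grid are assumptions, and the error-bound machinery of the abstract is not reproduced; the quadrature price is checked against the closed-form Black-Scholes value.

```python
import numpy as np
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 110.0, 0.05, 0.2, 1.0

def cf_bs(u):
    """Characteristic function of log S_T under Black-Scholes (a simple exponential Levy model)."""
    return np.exp(1j * u * (np.log(S0) + (r - 0.5 * sigma**2) * T) - 0.5 * sigma**2 * u**2 * T)

def call_fourier(alpha=1.5, v_max=200.0, n=2000):
    """European call via the damped (Carr-Madan-type) Fourier representation, trapezoid quadrature."""
    k = np.log(K)
    v = np.linspace(1e-8, v_max, n)
    psi = np.exp(-r * T) * cf_bs(v - 1j * (alpha + 1.0)) / (
        alpha**2 + alpha - v**2 + 1j * (2.0 * alpha + 1.0) * v)
    integrand = np.real(np.exp(-1j * v * k) * psi)
    integral = 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(v))  # trapezoid rule
    return np.exp(-alpha * k) / np.pi * integral

def call_black_scholes():
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

p_ft = call_fourier()
p_bs = call_black_scholes()
print(f"Fourier quadrature price: {p_ft:.6f}")
print(f"Black-Scholes reference : {p_bs:.6f}")
print(f"absolute error          : {abs(p_ft - p_bs):.2e}")
```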

  18. Investigation of the structure and lithology of bedrock concealed by basin fill, using ground-based magnetic-field-profile data acquired in the San Rafael Basin, southeastern Arizona

    Science.gov (United States)

    Bultman, Mark W.

    2013-01-01

    Data on the Earth’s total-intensity magnetic field acquired near ground level and at measurement intervals as small as 1 m include information on the spatial distribution of nearsurface magnetic dipoles that in many cases are unique to a specific lithology. Such spatial information is expressed in the texture (physical appearance or characteristics) of the data at scales of hundreds of meters to kilometers. These magnetic textures are characterized by several descriptive statistics, their power spectrum, and their multifractal spectrum. On the basis of a graphical comparison and textural characterization, ground-based magnetic-field profile data can be used to estimate bedrock lithology concealed by as much as 100 m of basin fill in some cases, information that is especially important in assessing and exploring for concealed mineral deposits. I demonstrate that multifractal spectra of ground-based magnetic-field-profile data can be used to differentiate exposed lithologies and that the shape and position of the multifractal spectrum of the ground-based magnetic-field-profile of concealed lithologies can be matched to the upward-continued multifractal spectrum of an exposed lithology to help distinguish the concealed lithology. In addition, ground-based magnetic-field-profile data also detect minute differences in the magnetic susceptibility of rocks over small horizontal and vertical distances and so can be used for precise modeling of bedrock geometry and structure, even when that bedrock is concealed by 100 m or more of nonmagnetic basin fill. Such data contain valuable geologic information on the bedrock concealed by basin fill that may not be so visible in aeromagnetic data, including areas of hydrothermal alteration, faults, and other bedrock structures. Interpretation of these data in the San Rafael Basin, southeastern Arizona, has yielded results for estimating concealed lithologies, concealed structural geology, and a concealed potential mineral

  19. A heteroscedastic measurement error model for method comparison data with replicate measurements.

    Science.gov (United States)

    Nawarathna, Lakshika S; Choudhary, Pankaj K

    2015-03-30

    Measurement error models offer a flexible framework for modeling data collected in studies comparing methods of quantitative measurement. These models generally make two simplifying assumptions: (i) the measurements are homoscedastic, and (ii) the unobservable true values of the methods are linearly related. One or both of these assumptions may be violated in practice. In particular, error variabilities of the methods may depend on the magnitude of measurement, or the true values may be nonlinearly related. Data with these features call for a heteroscedastic measurement error model that allows nonlinear relationships in the true values. We present such a model for the case when the measurements are replicated, discuss its fitting, and explain how to evaluate similarity of measurement methods and agreement between them, which are two common goals of data analysis, under this model. Model fitting involves dealing with lack of a closed form for the likelihood function. We consider estimation methods that approximate either the likelihood or the model to yield approximate maximum likelihood estimates. The fitting methods are evaluated in a simulation study. The proposed methodology is used to analyze a cholesterol dataset. Copyright © 2015 John Wiley & Sons, Ltd.

  20. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    Science.gov (United States)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.

  1. Error Analysis of Galerkin's Method for Semilinear Equations

    Directory of Open Access Journals (Sweden)

    Tadashi Kawanago

    2012-01-01

    Full Text Available We establish a general existence result for Galerkin's approximate solutions of abstract semilinear equations and conduct an error analysis. Our results may be regarded as an extension of earlier work (Schultz 1969). The derivation of our results is, however, different from the discussion in his paper and is essentially based on the convergence theorem of Newton's method and some techniques for deriving it. Some of our results may be applicable for investigating the quality of numerical verification methods for solutions of ordinary and partial differential equations.

  2. A posteriori error analysis of multiscale operator decomposition methods for multiphysics models

    International Nuclear Information System (INIS)

    Estep, D; Carey, V; Tavener, S; Ginting, V; Wildey, T

    2008-01-01

    Multiphysics, multiscale models present significant challenges in computing accurate solutions and for estimating the error in information computed from numerical solutions. In this paper, we describe recent advances in extending the techniques of a posteriori error analysis to multiscale operator decomposition solution methods. While the particulars of the analysis vary considerably with the problem, several key ideas underlie a general approach being developed to treat operator decomposition multiscale methods. We explain these ideas in the context of three specific examples

  3. Periodic boundary conditions and the error-controlled fast multipole method

    Energy Technology Data Exchange (ETDEWEB)

    Kabadshow, Ivo

    2012-08-22

    The simulation of pairwise interactions in huge particle ensembles is a vital issue in scientific research. Especially the calculation of long-range interactions poses limitations to the system size, since these interactions scale quadratically with the number of particles. Fast summation techniques like the Fast Multipole Method (FMM) can help to reduce the complexity to O(N). This work extends the possible range of applications of the FMM to periodic systems in one, two and three dimensions with one unique approach. Together with a tight error control, this contribution enables the simulation of periodic particle systems for different applications without the need to know and tune the FMM specific parameters. The implemented error control scheme automatically optimizes the parameters to obtain an approximation for the minimal runtime for a given energy error bound.

  4. Research on the Method of Noise Error Estimation of Atomic Clocks

    Science.gov (United States)

    Song, H. J.; Dong, S. W.; Li, W.; Zhang, J. H.; Jing, Y. J.

    2017-05-01

    The simulation methods of different noises of atomic clocks are given. The frequency flicker noise of atomic clock is studied by using the Markov process theory. The method for estimating the maximum interval error of the frequency white noise is studied by using the Wiener process theory. Based on the operation of 9 cesium atomic clocks in the time frequency reference laboratory of NTSC (National Time Service Center), the noise coefficients of the power-law spectrum model are estimated, and the simulations are carried out according to the noise models. Finally, the maximum interval error estimates of the frequency white noises generated by the 9 cesium atomic clocks have been acquired.
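    As a rough illustration of the frequency-white-noise case described above, the sketch below simulates the clock's time error as a Wiener process and reads off a maximum-interval-error statistic by Monte Carlo. It is not the authors' code; the sampling interval and noise level are assumed placeholders, not values for the NTSC cesium clocks.

    ```python
    # Frequency white noise integrates into a random-walk (Wiener) time error,
    # so a maximum interval error can be estimated by simulating many runs.
    # The noise level q and the sampling interval are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    tau = 60.0          # sampling interval in seconds (assumed)
    n_steps = 1440      # one day of 60 s steps
    q = 1e-13           # white-FM noise level in fractional frequency (assumed)
    n_runs = 2000       # Monte Carlo realizations

    y = rng.normal(0.0, q, size=(n_runs, n_steps))   # fractional frequency noise
    x = np.cumsum(y * tau, axis=1)                   # accumulated time error (s)

    max_interval_error = np.abs(x).max(axis=1)
    print("95th-percentile maximum interval error: %.3e s"
          % np.quantile(max_interval_error, 0.95))
    ```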

  5. A New Error Analysis and Accuracy Synthesis Method for Shoe Last Machine

    Directory of Open Access Journals (Sweden)

    Bian Xiangjuan

    2014-05-01

    Full Text Available In order to improve the manufacturing precision of the shoe last machine, a new error-computing model has been put forward. First, based on the special topological structure of the shoe last machine and multi-rigid-body system theory, a spatial error-calculating model of the system was built. Then, the law of error distribution in the whole workspace was discussed, and the position of maximum error of the system was found. Finally, the sensitivities of the error parameters were analyzed at the maximum-error position and the accuracy synthesis was conducted using the Monte Carlo method. Based on the error sensitivity analysis, the accuracy of the main parts was allocated. Results show that the probability of the maximal volume error being less than 0.05 mm was improved from 0.6592 for the old scheme to 0.7021 for the new scheme, so the precision of the system was improved markedly. The model can be used for the error analysis and accuracy synthesis of complex multi-branch motion chain systems and to improve the manufacturing precision of such systems.
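    The Monte Carlo accuracy-synthesis step can be sketched as follows; the sensitivity coefficients and component tolerances are invented placeholders, not the shoe last machine's actual error model.

    ```python
    # Minimal Monte Carlo accuracy-synthesis sketch: sample component errors
    # from assumed tolerances, propagate them through a linearized error model,
    # and estimate the probability that the volumetric error stays below 0.05 mm.
    import numpy as np

    rng = np.random.default_rng(1)
    n_samples = 100_000

    # Assumed sensitivities (mm of volumetric error per mm of component error)
    sensitivities = np.array([0.8, 0.5, 0.3, 1.1])
    # Assumed component tolerances, interpreted as 3-sigma bounds (mm)
    tolerances = np.array([0.02, 0.03, 0.05, 0.01])

    component_errors = rng.normal(0.0, tolerances / 3.0,
                                  size=(n_samples, len(tolerances)))
    volume_error = np.abs(component_errors @ sensitivities)

    print("P(volumetric error < 0.05 mm) =", np.mean(volume_error < 0.05))
    ```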

  6. A method to deal with installation errors of wearable accelerometers for human activity recognition

    International Nuclear Information System (INIS)

    Jiang, Ming; Wang, Zhelong; Shang, Hong; Li, Hongyi; Wang, Yuechao

    2011-01-01

    Human activity recognition (HAR) by using wearable accelerometers has gained significant interest in recent years in a range of healthcare areas, including inferring metabolic energy expenditure, predicting falls, measuring gait parameters and monitoring daily activities. The implementation of HAR relies heavily on the correctness of sensor fixation. The installation errors of wearable accelerometers may dramatically decrease the accuracy of HAR. In this paper, a method is proposed to improve the robustness of HAR to the installation errors of accelerometers. The method first calculates a transformation matrix by using Gram–Schmidt orthonormalization in order to eliminate the sensor's orientation error and then employs a low-pass filter with a cut-off frequency of 10 Hz to eliminate the main effect of the sensor's misplacement. The experimental results showed that the proposed method obtained a satisfactory performance for HAR. The average accuracy rate from ten subjects was 95.1% when there were no installation errors, and was 91.9% when installation errors were involved in wearable accelerometers
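    A minimal sketch of the two steps described above is given below, assuming hypothetical reference directions, a 100 Hz sampling rate, and placeholder data; it is not the authors' implementation.

    ```python
    # (1) Gram-Schmidt orthonormalization of two reference acceleration
    #     directions builds a rotation that removes the sensor's orientation error,
    # (2) a 10 Hz low-pass filter reduces the effect of sensor misplacement.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def gram_schmidt(v1, v2):
        """Return an orthonormal basis (3x3) built from two measured directions."""
        e1 = v1 / np.linalg.norm(v1)
        u2 = v2 - np.dot(v2, e1) * e1
        e2 = u2 / np.linalg.norm(u2)
        e3 = np.cross(e1, e2)
        return np.vstack([e1, e2, e3])

    # Hypothetical reference directions measured in the misaligned sensor frame,
    # e.g. gravity during standing and the mean acceleration during walking.
    gravity_meas = np.array([0.1, 0.98, 0.15])
    forward_meas = np.array([0.9, 0.05, 0.40])
    R = gram_schmidt(gravity_meas, forward_meas)       # sensor frame -> body frame

    fs = 100.0                                          # sampling rate in Hz (assumed)
    b, a = butter(4, 10.0 / (fs / 2.0), btype="low")    # 10 Hz low-pass

    acc = np.random.default_rng(2).normal(size=(1000, 3))  # placeholder raw data
    acc_aligned = acc @ R.T                             # orientation correction
    acc_filtered = filtfilt(b, a, acc_aligned, axis=0)  # misplacement mitigation
    ```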

  7. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong; Sun, Shuyu; Xie, Xiaoping

    2015-01-01

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  8. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong

    2015-10-26

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  9. Estimation of Mechanical Signals in Induction Motors using the Recursive Prediction Error Method

    DEFF Research Database (Denmark)

    Børsting, H.; Knudsen, Morten; Rasmussen, Henrik

    1993-01-01

    Sensor feedback of mechanical quantities for control applications in induction motors is troublesome and relatively expensive. In this paper a recursive prediction error (RPE) method has successfully been used to estimate the angular rotor speed.

  10. A TOA-AOA-Based NLOS Error Mitigation Method for Location Estimation

    Directory of Open Access Journals (Sweden)

    Tianshuang Qiu

    2007-12-01

    Full Text Available This paper proposes a geometric method to locate a mobile station (MS) in a mobile cellular network when both the range and angle measurements are corrupted by non-line-of-sight (NLOS) errors. The MS location is restricted to an enclosed region by geometric constraints from the temporal-spatial characteristics of the radio propagation channel. A closed-form equation of the MS position, time of arrival (TOA), angle of arrival (AOA), and angle spread is provided. The solution space of the equation is very large because the angle spreads are random variables in nature. A constrained objective function is constructed to further limit the MS position. A Lagrange multiplier-based solution and a numerical solution are proposed to resolve the MS position. The estimation quality of the estimator in terms of “biased” or “unbiased” is discussed. The scale factors, which may be used to evaluate the NLOS propagation level, can be estimated by the proposed method. The AOA seen at base stations may be corrected to some degree. The performance comparisons between the proposed method and other hybrid location methods are investigated on different NLOS error models and with two scenarios of cell layout. It is found that the proposed method can deal with NLOS error effectively, and it is attractive for location estimation in cellular networks.

  11. A method for analysing incidents due to human errors on nuclear installations

    International Nuclear Information System (INIS)

    Griffon, M.

    1980-01-01

    This paper deals with the development of a methodology adapted to a detailed analysis of incidents considered to be due to human errors. An identification of human errors and a search for their eventual multiple causes is then needed. They are categorized in eight classes: education and training of personnel, installation design, work organization, time and work duration, physical environment, social environment, history of the plant and performance of the operator. The method is illustrated by the analysis of a handling incident generated by multiple human errors. (author)

  12. An error compensation method for a linear array sun sensor with a V-shaped slit

    International Nuclear Information System (INIS)

    Fan, Qiao-yun; Tan, Xiao-feng

    2015-01-01

    Existing methods of improving measurement accuracy, such as polynomial fitting and increasing pixel numbers, cannot guarantee both high precision and good miniaturization of a micro sun sensor at the same time. Therefore, a novel integrated and accurate error compensation method is proposed. A mathematical error model is established according to the analysis results of all the contributing factors, and the model parameters are calculated through multi-set simultaneous calibration. The numerical simulation results prove that the calibration method is unaffected by installation errors introduced by the calibration process, is capable of separating the sensor's intrinsic and extrinsic parameters precisely, and obtains accurate and robust intrinsic parameters. In the laboratory calibration, the calibration data are generated by using a two-axis rotation table and a sun simulator. The experimental results show that owing to the proposed error compensation method, the sun sensor's measurement accuracy is improved by 30 times throughout its field of view (±60° × ±60°), with an RMS error of 0.1°. (paper)

  13. [Effects of false memories on the Concealed Information Test].

    Science.gov (United States)

    Zaitsu, Wataru

    2012-10-01

    The effects of false memories on polygraph examinations with the Concealed Information Test (CIT) were investigated by using the Deese-Roediger-McDermott (DRM) paradigm, which induces false memories in participants. Physiological responses to questions consisting of learned, lure, and unlearned items were measured and recorded. The results indicated that lure questions elicited the same critical responses as questions about learned items, including suppression of respiration, an increase in electrodermal activity, and a drop in heart rate. These results suggest that critical response patterns are generated in the peripheral nervous system by both true and false memories.

  14. Correction method for the error of diamond tool's radius in ultra-precision cutting

    Science.gov (United States)

    Wang, Yi; Yu, Jing-chi

    2010-10-01

    Compensation for the error of the diamond tool's cutting edge is a bottleneck technology that hinders the direct formation of high-accuracy aspheric surfaces after single-point diamond turning. Traditional compensation was performed according to measurement results from a profile meter, which required long measurement times and led to low processing efficiency. A new compensation method is put forward in this article, in which the error of the diamond tool's cutting edge is corrected according to measurement results from a digital interferometer. First, the detailed theoretical calculation related to the compensation method was derived. Then, the effect after compensation was simulated by computer. Finally, a φ50 mm workpiece underwent diamond turning and a corrective turning pass on a Nanotech 250. The tested surface achieved a high shape accuracy of PV 0.137λ and RMS 0.011λ, which confirmed that the new compensation method agreed with the predictive analysis and offered high accuracy and fast error convergence.

  15. The current and future status of the Concealed Information Test for field use

    Directory of Open Access Journals (Sweden)

    Izumi Matsuda

    2012-11-01

    Full Text Available The Concealed Information Test (CIT) is a psychophysiological technique for examining whether a person has knowledge of crime-relevant information. Many laboratory studies have shown that the CIT has good scientific validity. However, the CIT has seldom been used for actual criminal investigations. One successful exception is its use by the Japanese police. In Japan, the CIT has been widely used for criminal investigations, although its probative force in court is not strong. In this paper, we first review the current use of the field CIT in Japan. Then, we discuss two possible approaches to increase its probative force: sophisticated statistical judgment methods and combining new psychophysiological measures with classic autonomic measures. On the basis of these considerations, we propose several suggestions for future practice and research involving the field CIT.

  16. Round-off error in long-term orbital integrations using multistep methods

    Science.gov (United States)

    Quinlan, Gerald D.

    1994-01-01

    Techniques for reducing roundoff error are compared by testing them on high-order Stormer and symmetric multistep methods. The best technique for most applications is to write the equation in summed, function-evaluation form and to store the coefficients as rational numbers. A larger error reduction can be achieved by writing the equation in backward-difference form and performing some of the additions in extended precision, but this entails a larger central processing unit (cpu) cost.
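    The example below is not the paper's summed-form, rational-coefficient scheme; it illustrates the related and simpler idea of compensated (Kahan) summation, which limits how round-off accumulates when many small increments are added, as happens in long orbital integrations.

    ```python
    # Kahan compensated summation: keep a running correction term that recovers
    # the low-order bits lost when a small increment is added to a large total.
    def kahan_sum(values):
        total = 0.0
        c = 0.0                    # running compensation for lost low-order bits
        for v in values:
            y = v - c
            t = total + y
            c = (t - total) - y    # the part of y that was rounded away
            total = t
        return total

    increments = [0.1] * 1_000_000          # exact float total is close to 100000
    print("naive:", sum(increments))         # plain left-to-right accumulation drifts
    print("kahan:", kahan_sum(increments))   # compensated sum stays much closer
    ```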

  17. The commission errors search and assessment (CESA) method

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B.; Dang, V. N

    2007-05-15

    Errors of Commission (EOCs) refer to the performance of inappropriate actions that aggravate a situation. In Probabilistic Safety Assessment (PSA) terms, they are human failure events that result from the performance of an action. This report presents the Commission Errors Search and Assessment (CESA) method and describes the method in the form of user guidance. The purpose of the method is to identify risk-significant situations with a potential for EOCs in a predictive analysis. The main idea underlying the CESA method is to catalog the key actions that are required in the procedural response to plant events and to identify specific scenarios in which these candidate actions could erroneously appear to be required. The catalog of required actions provides a basis for a systematic search of context-action combinations. To focus the search towards risk-significant scenarios, the actions that are examined in the CESA search are prioritized according to the importance of the systems and functions that are affected by these actions. The existing PSA provides this importance information; the Risk Achievement Worth or Risk Increase Factor values indicate the systems/functions for which an EOC contribution would be more significant. In addition, the contexts, i.e. PSA scenarios, for which the EOC opportunities are reviewed are also prioritized according to their importance (top sequences or cut sets). The search through these context-action combinations results in a set of EOC situations to be examined in detail. CESA has been applied in a plant-specific pilot study, which showed the method to be feasible and effective in identifying plausible EOC opportunities. This experience, as well as the experience with other EOC analyses, showed that the quantification of EOCs remains an issue. The quantification difficulties and the outlook for their resolution conclude the report. (author)

  18. The commission errors search and assessment (CESA) method

    International Nuclear Information System (INIS)

    Reer, B.; Dang, V. N.

    2007-05-01

    Errors of Commission (EOCs) refer to the performance of inappropriate actions that aggravate a situation. In Probabilistic Safety Assessment (PSA) terms, they are human failure events that result from the performance of an action. This report presents the Commission Errors Search and Assessment (CESA) method and describes the method in the form of user guidance. The purpose of the method is to identify risk-significant situations with a potential for EOCs in a predictive analysis. The main idea underlying the CESA method is to catalog the key actions that are required in the procedural response to plant events and to identify specific scenarios in which these candidate actions could erroneously appear to be required. The catalog of required actions provides a basis for a systematic search of context-action combinations. To focus the search towards risk-significant scenarios, the actions that are examined in the CESA search are prioritized according to the importance of the systems and functions that are affected by these actions. The existing PSA provides this importance information; the Risk Achievement Worth or Risk Increase Factor values indicate the systems/functions for which an EOC contribution would be more significant. In addition, the contexts, i.e. PSA scenarios, for which the EOC opportunities are reviewed are also prioritized according to their importance (top sequences or cut sets). The search through these context-action combinations results in a set of EOC situations to be examined in detail. CESA has been applied in a plant-specific pilot study, which showed the method to be feasible and effective in identifying plausible EOC opportunities. This experience, as well as the experience with other EOC analyses, showed that the quantification of EOCs remains an issue. The quantification difficulties and the outlook for their resolution conclude the report. (author)

  19. Propagation of internal errors in explicit Runge–Kutta methods and internal stability of SSP and extrapolation methods

    KAUST Repository

    Ketcheson, David I.; Loczi, Lajos; Parsani, Matteo

    2014-01-01

    of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods

  20. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
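    A toy sketch of the two schemes is given below; the linear "detector response", the parameter uncertainties, and the event count are invented for illustration only.

    ```python
    # Unisim: one systematic parameter shifted by +1 sigma per MC run.
    # Multisim: all parameters drawn from their distributions in every run.
    import numpy as np

    rng = np.random.default_rng(3)
    sigmas = np.array([0.5, 1.0, 2.0])          # assumed systematic uncertainties

    def observable(params, n_events=10_000):
        """Hypothetical linear response plus statistical (MC) noise."""
        true_shift = params @ np.array([1.0, -0.5, 0.3])
        return true_shift + rng.normal(0.0, 1.0) / np.sqrt(n_events)

    nominal = observable(np.zeros(3))

    # Unisim estimate of the total systematic variance
    unisim_shifts = np.array(
        [observable(np.eye(3)[i] * sigmas[i]) - nominal for i in range(3)])
    var_unisim = np.sum(unisim_shifts ** 2)

    # Multisim estimate of the total systematic variance
    n_multisims = 500
    multisim_obs = np.array(
        [observable(rng.normal(0.0, sigmas)) for _ in range(n_multisims)])
    var_multisim = np.var(multisim_obs)

    print("unisim variance estimate:  ", var_unisim)
    print("multisim variance estimate:", var_multisim)
    ```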

  1. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  2. Data Analysis & Statistical Methods for Command File Errors

    Science.gov (United States)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained with these. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.

  3. Three-point method for measuring the geometric error components of linear and rotary axes based on sequential multilateration

    International Nuclear Information System (INIS)

    Zhang, Zhenjiu; Hu, Hong

    2013-01-01

    The linear and rotary axes are fundamental parts of multi-axis machine tools. The geometric error components of the axes must be measured for motion error compensation to improve the accuracy of the machine tools. In this paper, a simple method named the three point method is proposed to measure the geometric error of the linear and rotary axes of the machine tools using a laser tracker. A sequential multilateration method, where uncertainty is verified through simulation, is applied to measure the 3D coordinates. Three noncollinear points fixed on the stage of each axis are selected. The coordinates of these points are simultaneously measured using a laser tracker to obtain their volumetric errors by comparing these coordinates with ideal values. Numerous equations can be established using the geometric error models of each axis. The geometric error components can be obtained by solving these equations. The validity of the proposed method is verified through a series of experiments. The results indicate that the proposed method can measure the geometric error of the axes to compensate for the errors in multi-axis machine tools.

  4. Band extension in digital methods of transfer function determination – signal conditioners asymmetry error corrections

    Directory of Open Access Journals (Sweden)

    Zbigniew Staroszczyk

    2014-12-01

    Full Text Available In this paper, a calibrating method for error correction in transfer function determination with the use of DSP is proposed. The correction limits or eliminates the influence of the input/output signal conditioners on the transfer functions estimated for the investigated object. The method exploits a frequency-domain descriptor of the conditioning paths found during a training observation made on a known reference object. Keywords: transfer function, band extension, error correction, phase errors

  5. a Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree

    Science.gov (United States)

    Kang, Q.; Huang, G.; Yang, S.

    2018-04-01

    Point cloud data have become one of the most widely used data sources in the field of remote sensing. Key steps in point cloud pre-processing focus on gross error elimination and quality control. Owing to the sheer volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper employs a new method that uses a Kd-tree for construction, a k-nearest-neighbor algorithm for searching, and an appropriately chosen threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm helps to delete gross errors in point cloud data, decreases memory consumption, and improves efficiency.
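    A minimal sketch of such a Kd-tree/k-nearest-neighbour filter, using SciPy's cKDTree, is given below; the mean-plus-three-standard-deviations threshold is one common choice, not necessarily the authors' exact criterion.

    ```python
    # Build a kd-tree, query the k nearest neighbours of every point, and flag
    # points whose mean neighbour distance exceeds a global threshold.
    import numpy as np
    from scipy.spatial import cKDTree

    def remove_gross_errors(points, k=8, n_std=3.0):
        tree = cKDTree(points)                    # kd-tree construction
        dists, _ = tree.query(points, k=k + 1)    # k+1: the first hit is the point itself
        mean_dist = dists[:, 1:].mean(axis=1)     # mean distance to the k neighbours
        threshold = mean_dist.mean() + n_std * mean_dist.std()
        keep = mean_dist <= threshold             # points above the threshold are outliers
        return points[keep], np.where(~keep)[0]

    cloud = np.random.default_rng(4).uniform(0, 10, size=(5000, 3))
    cloud[:5] += 100.0                            # inject a few gross errors
    cleaned, outlier_idx = remove_gross_errors(cloud)
    print("removed", len(outlier_idx), "points")
    ```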

  6. A GROSS ERROR ELIMINATION METHOD FOR POINT CLOUD DATA BASED ON KD-TREE

    Directory of Open Access Journals (Sweden)

    Q. Kang

    2018-04-01

    Full Text Available Point cloud data have become one of the most widely used data sources in the field of remote sensing. Key steps in point cloud pre-processing focus on gross error elimination and quality control. Owing to the sheer volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper employs a new method that uses a Kd-tree for construction, a k-nearest-neighbor algorithm for searching, and an appropriately chosen threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm helps to delete gross errors in point cloud data, decreases memory consumption, and improves efficiency.

  7. Living with a concealable stigmatized identity: the impact of anticipated stigma, centrality, salience, and cultural stigma on psychological distress and health.

    Science.gov (United States)

    Quinn, Diane M; Chaudoir, Stephenie R

    2009-10-01

    The current research provides a framework for understanding how concealable stigmatized identities impact people's psychological well-being and health. The authors hypothesize that increased anticipated stigma, greater centrality of the stigmatized identity to the self, increased salience of the identity, and possession of a stigma that is more strongly culturally devalued all predict heightened psychological distress. In Study 1, the hypotheses were supported with a sample of 300 participants who possessed 13 different concealable stigmatized identities. Analyses comparing people with an associative stigma to those with a personal stigma showed that people with an associative stigma report less distress and that this difference is fully mediated by decreased anticipated stigma, centrality, and salience. Study 2 sought to replicate the findings of Study 1 with a sample of 235 participants possessing concealable stigmatized identities and to extend the model to predicting health outcomes. Structural equation modeling showed that anticipated stigma and cultural stigma were directly related to self-reported health outcomes. Discussion centers on understanding the implications of intraindividual processes (anticipated stigma, identity centrality, and identity salience) and an external process (cultural devaluation of stigmatized identities) for mental and physical health among people living with a concealable stigmatized identity. 2009 APA, all rights reserved.

  8. A posteriori error estimator and AMR for discrete ordinates nodal transport methods

    International Nuclear Information System (INIS)

    Duo, Jose I.; Azmy, Yousry Y.; Zikatanov, Ludmil T.

    2009-01-01

    In the development of high fidelity transport solvers, optimization of the use of available computational resources and access to a tool for assessing quality of the solution are key to the success of large-scale nuclear systems' simulation. In this regard, error control provides the analyst with a confidence level in the numerical solution and enables optimization of resources through Adaptive Mesh Refinement (AMR). In this paper, we derive an a posteriori error estimator based on the nodal solution of the Arbitrarily High Order Transport Method of the Nodal type (AHOT-N). Furthermore, by making assumptions on the regularity of the solution, we represent the error estimator as a function of computable volume and element-edges residuals. The global L2 error norm is proved to be bounded by the estimator. To lighten the computational load, we present a numerical approximation to the aforementioned residuals and split the global norm error estimator into local error indicators. These indicators are used to drive an AMR strategy for the spatial discretization. However, the indicators based on forward solution residuals alone do not bound the cell-wise error. The estimator and AMR strategy are tested in two problems featuring strong heterogeneity and a highly streaming transport regime with strong flux gradients. The results show that the error estimator indeed bounds the global error norms and that the error indicator follows the cell-error's spatial distribution pattern closely. The AMR strategy proves beneficial to optimize resources, primarily by reducing the number of unknowns solved for to achieve prescribed solution accuracy in the global L2 error norm. Likewise, AMR achieves higher accuracy compared to uniform refinement when resolving sharp flux gradients, for the same number of unknowns.

  9. Errors of the backextrapolation method in determination of the blood volume

    Science.gov (United States)

    Schröder, T.; Rösler, U.; Frerichs, I.; Hahn, G.; Ennker, J.; Hellige, G.

    1999-01-01

    Backextrapolation is an empirical method to calculate the central volume of distribution (for example the blood volume). It is based on the compartment model, which says that after an injection the substance is distributed instantaneously in the central volume with no time delay. The occurrence of recirculation is not taken into account. The change of concentration with time of indocyanine green (ICG) was observed in an in vitro model, in which the volume was recirculating in 60 s and the clearance of the ICG could be varied. It was found that the higher the elimination of ICG, the higher was the error of the backextrapolation method. The theoretical consideration of Schröder et al (Biomed. Tech. 42 (1997) 7-11) was confirmed. If the injected substance is eliminated somewhere in the body (i.e. not by radioactive decay), the backextrapolation method produces large errors.
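    The back-extrapolation calculation itself can be sketched with invented sample values as below; the point is only the mono-exponential fit of the decay, its extrapolation to the injection time, and the V = dose / C(0) step.

    ```python
    # Fit ln(C) vs t to the measured decay, extrapolate back to t = 0, and
    # compute the central volume as dose / C(0). Dose and samples are invented.
    import numpy as np

    dose_mg = 25.0
    t = np.array([60., 90., 120., 150., 180.])       # sampling times in s, after mixing
    c = np.array([5.10, 4.85, 4.62, 4.40, 4.19])     # ICG concentration in mg/L

    slope, intercept = np.polyfit(t, np.log(c), 1)   # linear fit of ln(C) vs t
    c0 = np.exp(intercept)                           # back-extrapolated C at t = 0
    central_volume_l = dose_mg / c0

    print("extrapolated C(0): %.2f mg/L, central volume: %.2f L"
          % (c0, central_volume_l))
    ```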

  10. Concealment tactics among HIV-positive nurses in Uganda.

    Science.gov (United States)

    Kyakuwa, Margaret; Hardon, Anita

    2012-01-01

    This paper is based on two-and-a-half years of ethnographic fieldwork in two rural Ugandan health centres during a period of ART scale-up. Around one-third of the nurses in these two sites were themselves HIV-positive but most concealed their status. We describe how a group of HIV-positive nurses set up a secret circle to talk about their predicament as HIV-positive healthcare professionals and how they developed innovative care technologies to overcome the skin rashes caused by ART that threatened to give them away. Together with patients and a traditional healer, the nurses resisted hegemonic biomedical norms denouncing herbal medicines and then devised and advocated for a herbal skin cream treatment to be included in the ART programme.

  11. Error of the slanted edge method for measuring the modulation transfer function of imaging systems.

    Science.gov (United States)

    Xie, Xufen; Fan, Hongda; Wang, Hongyuan; Wang, Zebin; Zou, Nianyu

    2018-03-01

    The slanted edge method is a basic approach for measuring the modulation transfer function (MTF) of imaging systems; however, its measurement accuracy is limited in practice. Theoretical analysis of the slanted edge MTF measurement method performed in this paper reveals that inappropriate edge angles and random noise reduce this accuracy. The error caused by edge angles is analyzed using sampling and reconstruction theory. Furthermore, an error model combining noise and edge angles is proposed. We verify the analyses and model with respect to (i) the edge angle, (ii) a statistical analysis of the measurement error, (iii) the full width at half-maximum of a point spread function, and (iv) the error model. The experimental results verify the theoretical findings. This research can serve as a reference for applications of the slanted edge MTF measurement method.
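    A deliberately simplified, one-dimensional illustration of the edge-based MTF estimate is sketched below. The real slanted-edge method first projects the two-dimensional slanted edge onto a supersampled edge spread function, and the paper's edge-angle and noise error analysis is not reproduced here.

    ```python
    # Differentiate the edge spread function (ESF) to get the line spread
    # function (LSF), then take the normalized FFT magnitude as the MTF.
    import numpy as np
    from scipy.special import erf

    x = np.arange(-32, 32)                        # pixel positions across the edge
    sigma_blur = 1.2                              # assumed Gaussian blur of the system
    esf = 0.5 * (1.0 + erf(x / (np.sqrt(2) * sigma_blur)))  # synthetic noiseless ESF

    lsf = np.gradient(esf)                        # LSF = derivative of the ESF
    lsf = lsf * np.hanning(lsf.size)              # window to limit spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                            # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size, d=1.0)      # cycles per pixel

    print("MTF at 0.25 cyc/px: %.3f" % np.interp(0.25, freqs, mtf))
    ```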

  12. The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications

    International Nuclear Information System (INIS)

    Foo, Jasmine; Wan Xiaoliang; Karniadakis, George Em

    2008-01-01

    Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L2 error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods.

  13. An Estimation of Human Error Probability of Filtered Containment Venting System Using Dynamic HRA Method

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Seunghyun; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of)

    2016-10-15

    Human failure events (HFEs) are considered in the development of system fault trees as well as accident sequence event trees as part of Probabilistic Safety Assessment (PSA). Several methods for analyzing human error, such as the Technique for Human Error Rate Prediction (THERP), Human Cognitive Reliability (HCR), and Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H), are in use, and new methods for human reliability analysis (HRA) are under development. This paper presents a dynamic HRA method for assessing human failure events and applies it to estimate the human error probability for the filtered containment venting system (FCVS). The action associated with implementation of containment venting during a station blackout sequence is used as an example. In this report, the dynamic HRA method was used to analyze the FCVS-related operator action. The distributions of the required time and the available time were developed with the MAAP code and LHS sampling. Though the numerical calculations given here are only for illustrative purposes, the dynamic HRA method can be a useful tool for human error estimation, and it can be applied to any kind of operator action, including severe accident management strategies.
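    The time-reliability part of such an analysis can be sketched as below: the human error probability is approximated by the probability that the required time exceeds the available time. The lognormal parameters are invented placeholders standing in for the MAAP/LHS-derived distributions.

    ```python
    # HEP ~ P(time required for the venting action > time available),
    # estimated by sampling two assumed lognormal time distributions.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 1_000_000

    # Assumed distributions (minutes)
    t_required  = rng.lognormal(mean=np.log(30.0), sigma=0.40, size=n)
    t_available = rng.lognormal(mean=np.log(60.0), sigma=0.25, size=n)

    hep = np.mean(t_required > t_available)
    print("estimated human error probability: %.2e" % hep)
    ```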

  14. Analysis of possible systematic errors in the Oslo method

    International Nuclear Information System (INIS)

    Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.

    2011-01-01

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  15. Testing a potential alternative to traditional identification procedures: Reaction time-based concealed information test does not work for lineups with cooperative witnesses.

    Science.gov (United States)

    Sauerland, Melanie; Wolfs, Andrea C F; Crans, Samantha; Verschuere, Bruno

    2017-11-27

    Direct eyewitness identification is widely used, but prone to error. We tested the validity of indirect eyewitness identification decisions using the reaction time-based concealed information test (CIT) for assessing cooperative eyewitnesses' face memory as an alternative to traditional lineup procedures. In a series of five experiments, a total of 401 mock eyewitnesses watched one of 11 different stimulus events that depicted a breach of law. Eyewitness identifications in the CIT were derived from longer reaction times as compared to well-matched foil faces not encountered before. Across the five experiments, the weighted mean effect size d was 0.14 (95% CI 0.08-0.19). The reaction time-based CIT seems unsuited for testing cooperative eyewitnesses' memory for faces. The careful matching of the faces required for a fair lineup or the lack of intent to deceive may have hampered the diagnosticity of the reaction time-based CIT.

  16. Factors influencing superimposition error of 3D cephalometric landmarks by plane orientation method using 4 reference points: 4 point superimposition error regression model.

    Science.gov (United States)

    Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul

    2014-01-01

    Superimposition has been used as a method to evaluate the changes produced by orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating three-dimensional changes after treatment by superimposition became possible. 4 point plane orientation is one of the simplest ways to achieve superimposition of 3 dimensional images. To find the factors influencing the superimposition error of cephalometric landmarks by the 4 point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients who had a normal skeletal and occlusal relationship and underwent CBCT for diagnosis of temporomandibular disorder were analyzed. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wing of the sphenoidal bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark relative to the reference axes and the locating error. The 4 point plane orientation system may produce an amount of reorientation error that varies according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.

  17. Calculating method on human error probabilities considering influence of management and organization

    International Nuclear Information System (INIS)

    Gao Jia; Huang Xiangrui; Shen Zupei

    1996-01-01

    This paper is concerned with how management and organizational influences can be factored into the quantification of human error probabilities in risk assessments, using a three-level Influence Diagram (ID), originally a tool for the construction and representation of decision trees or event trees. An analytical model of human error causation has been set up with three influence levels, introducing a quantification method for the ID that can be applied to quantifying the probabilities of human errors in risk assessments, especially to the quantification of complex event trees (systems) in engineering decision-making analysis. A numerical case study is provided to illustrate the approach.

  18. Investigation of error estimation method of observational data and comparison method between numerical and observational results toward V and V of seismic simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio; Kawakami, Yoshiaki; Nakajima, Norihiro

    2017-01-01

    Methods to estimate the errors included in observational data and methods to compare numerical results with observational results are investigated toward the verification and validation (V and V) of a seismic simulation. For error estimation, 144 publications from the past 5 years (2010 to 2014) in the structural engineering and earthquake engineering fields, where acceleration data are frequently described, are surveyed. It is found that processes to remove components regarded as errors from observational data are used in about 30% of those publications. Errors are caused by the resolution, the linearity, the temperature coefficient for sensitivity, the temperature coefficient for zero shift, the transverse sensitivity, the seismometer properties, aliasing, and so on. Those processes can be exploited to estimate the errors individually. For the comparison of numerical results with observational results, public materials of the ASME V and V Symposium 2012-2015, their references, and the above 144 publications are surveyed. It is found that six methods have mainly been proposed in existing research. Evaluating those methods against nine criteria, their advantages and disadvantages are summarized. No method is yet well established, so it is necessary either to employ the existing methods while compensating for their disadvantages or to search for a novel method. (author)

  19. A new method for weakening the combined effect of residual errors on multibeam bathymetric data

    Science.gov (United States)

    Zhao, Jianhu; Yan, Jun; Zhang, Hongmei; Zhang, Yuqing; Wang, Aixue

    2014-12-01

    Multibeam bathymetric system (MBS) has been widely applied in the marine surveying for providing high-resolution seabed topography. However, some factors degrade the precision of bathymetry, including the sound velocity, the vessel attitude, the misalignment angle of the transducer and so on. Although these factors have been corrected strictly in bathymetric data processing, the final bathymetric result is still affected by their residual errors. In deep water, the result usually cannot meet the requirements of high-precision seabed topography. The combined effect of these residual errors is systematic, and it's difficult to separate and weaken the effect using traditional single-error correction methods. Therefore, the paper puts forward a new method for weakening the effect of residual errors based on the frequency-spectrum characteristics of seabed topography and multibeam bathymetric data. Four steps, namely the separation of the low-frequency and the high-frequency part of bathymetric data, the reconstruction of the trend of actual seabed topography, the merging of the actual trend and the extracted microtopography, and the accuracy evaluation, are involved in the method. Experiment results prove that the proposed method could weaken the combined effect of residual errors on multibeam bathymetric data and efficiently improve the accuracy of the final post-processing results. We suggest that the method should be widely applied to MBS data processing in deep water.
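    The frequency-split idea behind the method can be sketched as follows; the synthetic profile is invented, and the wider Gaussian smoothing is only a stand-in for the paper's reconstruction of the actual seabed trend.

    ```python
    # Separate a bathymetric profile into a long-wavelength trend and
    # short-wavelength microtopography, rebuild the trend, and merge them again.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    rng = np.random.default_rng(6)
    x = np.linspace(0, 5000, 2000)                    # along-track distance (m)
    depth = (-800 + 30 * np.sin(x / 800)              # synthetic large-scale relief
             + 2 * np.sin(x / 40)                     # synthetic microtopography
             + rng.normal(0, 0.5, x.size))            # residual-error-like wobble

    trend = gaussian_filter1d(depth, sigma=50)        # low-frequency part
    micro = depth - trend                             # high-frequency part

    trend_reconstructed = gaussian_filter1d(depth, sigma=120)  # stand-in for the trend model
    depth_corrected = trend_reconstructed + micro     # merged result
    ```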

  20. The behaviour of the local error in splitting methods applied to stiff problems

    International Nuclear Information System (INIS)

    Kozlov, Roman; Kvaernoe, Anne; Owren, Brynjulf

    2004-01-01

    Splitting methods are frequently used in solving stiff differential equations and it is common to split the system of equations into a stiff and a nonstiff part. The classical theory for the local order of consistency is valid only for stepsizes which are smaller than what one would typically prefer to use in the integration. Error control and stepsize selection devices based on classical local order theory may lead to unstable error behaviour and inefficient stepsize sequences. Here, the behaviour of the local error in the Strang and Godunov splitting methods is explained by using two different tools, Lie series and singular perturbation theory. The two approaches provide an understanding of the phenomena from different points of view, but both are consistent with what is observed in numerical experiments
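    A small numerical check of the local splitting error on a linear test problem is sketched below; the stiff and nonstiff matrices are arbitrary choices, and all sub-flows are exact matrix exponentials so that only the splitting error remains. Halving h and watching how fast the two errors shrink gives a feel for whether the classical local orders still hold when one part is stiff.

    ```python
    # Local error of Godunov (Lie) and Strang splitting for y' = (A + B) y.
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[-100.0, 0.0], [0.0, -1.0]])   # stiff part (assumed)
    B = np.array([[0.0, 1.0], [-1.0, 0.0]])      # nonstiff part (assumed)
    y0 = np.array([1.0, 1.0])

    def local_errors(h):
        exact = expm((A + B) * h) @ y0
        godunov = expm(B * h) @ (expm(A * h) @ y0)
        strang = expm(A * h / 2) @ (expm(B * h) @ (expm(A * h / 2) @ y0))
        return np.linalg.norm(godunov - exact), np.linalg.norm(strang - exact)

    for h in (0.1, 0.05, 0.025):
        g, s = local_errors(h)
        print(f"h={h:<6} Godunov error={g:.2e}  Strang error={s:.2e}")
    ```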

  1. Error analysis of motion correction method for laser scanning of moving objects

    Science.gov (United States)

    Goel, S.; Lohani, B.

    2014-05-01

    The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need to scan moving objects has resulted in the development of new methods capable of generating correct 3D geometry of moving objects. Only limited literature is available, showing that very few methods cater to the problem of object motion during scanning, and each of the existing methods utilizes its own models or sensors. Studies on error modelling or analysis of any of these motion correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such 'motion correction' method. This method assumes the availability of position and orientation information of the moving object, which in general can be obtained by installing a POS system on board or by using tracking devices. It then uses this information along with the laser scanner data to correct the laser data, thus resulting in correct geometry despite the object being mobile during scanning. The major applications of this method lie in the shipping industry, to scan ships either moving or parked at sea, and in scanning other objects such as hot air balloons or aerostats. It should be noted that the other 'motion correction' methods described in the literature cannot be applied to scan the objects mentioned here, which makes the chosen method quite unique. This paper presents some interesting insights into the functioning of the 'motion correction' method as well as a detailed account of the behavior and variation of the error due to different sensor components, alone and in combination with each other. The analysis can be used to gain insights into the optimal utilization of available components for achieving the best results.

  2. Analysis of Statistical Methods and Errors in the Articles Published in the Korean Journal of Pain

    Science.gov (United States)

    Yim, Kyoung Hoon; Han, Kyoung Ah; Park, Soo Young

    2010-01-01

    Background: Statistical analysis is essential for obtaining objective reliability in medical research. However, medical researchers do not have enough statistical knowledge to properly analyze their study data. To help understand and potentially alleviate this problem, we have analyzed the statistical methods and errors of articles published in the Korean Journal of Pain (KJP), with the intention to improve the statistical quality of the journal. Methods: All the articles, except case reports and editorials, published from 2004 to 2008 in the KJP were reviewed. The types of applied statistical methods and errors in the articles were evaluated. Results: One hundred and thirty-nine original articles were reviewed. Inferential statistics and descriptive statistics were used in 119 papers and 20 papers, respectively. Only 20.9% of the papers were free from statistical errors. The most commonly adopted statistical method was the t-test (21.0%) followed by the chi-square test (15.9%). Errors of omission were encountered 101 times in 70 papers. Among the errors of omission, "no statistics used even though statistical methods were required" was the most common (40.6%). The errors of commission were encountered 165 times in 86 papers, among which "parametric inference for nonparametric data" was the most common (33.9%). Conclusions: We found various types of statistical errors in the articles published in the KJP. This suggests that meticulous attention should be given not only to applying statistical procedures but also to the reviewing process in order to improve the value of the articles. PMID:20552071

  3. Suppressing carrier removal error in the Fourier transform method for interferogram analysis

    International Nuclear Information System (INIS)

    Fan, Qi; Yang, Hongru; Li, Gaoping; Zhao, Jianlin

    2010-01-01

    A new carrier removal method for interferogram analysis using the Fourier transform is presented. The proposed method can be used to suppress the carrier removal error as well as the spectral leakage error. First, the carrier frequencies are estimated with the spectral centroid of the upper sidelobe of the apodized interferogram, and then the upper sidelobe can be shifted to the origin in the frequency domain by multiplying the original interferogram by a constructed plane reference wave. The influence of carrier frequencies that are not an integer multiple of the frequency interval, and of the window function used for apodization of the interferogram, can be avoided in our work. The simulation and experimental results show that this method is effective for phase measurement with a high accuracy from a single interferogram.
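    A one-dimensional sketch of the described carrier-removal step is given below; a real interferogram is two-dimensional and the apodization step is omitted here. The carrier is estimated from the spectral centroid of the upper sidelobe and removed by multiplying the interferogram with a plane reference wave.

    ```python
    # Estimate the carrier from the spectral centroid of the positive-frequency
    # sidelobe and shift that sidelobe toward the origin with a reference wave.
    import numpy as np

    n = 512
    x = np.arange(n)
    phase = 2.0 * np.sin(2 * np.pi * x / n)              # phase to be measured
    carrier = 0.1237                                      # carrier in cycles/pixel (non-integer bin)
    interferogram = 1.0 + np.cos(2 * np.pi * carrier * x + phase)

    spec = np.fft.fft(interferogram)
    freqs = np.fft.fftfreq(n)
    pos = freqs > 0.02                                    # upper-sidelobe region only
    weights = np.abs(spec[pos])
    f_est = np.sum(freqs[pos] * weights) / np.sum(weights)  # spectral centroid

    reference_wave = np.exp(-1j * 2 * np.pi * f_est * x)  # constructed plane reference wave
    baseband = interferogram * reference_wave             # sidelobe moved toward the origin
    print("estimated carrier: %.4f cycles/pixel" % f_est)
    ```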

  4. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    Directory of Open Access Journals (Sweden)

    Zheng You

    2013-04-01

    Full Text Available The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without the complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can be used to build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  5. Optical system error analysis and calibration method of high-accuracy star trackers.

    Science.gov (United States)

    Sun, Ting; Xing, Fei; You, Zheng

    2013-04-08

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without the complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can be used to build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  6. Face and voice as social stimuli enhance differential physiological responding in a Concealed Information Test

    Directory of Open Access Journals (Sweden)

    Wolfgang eAmbach

    2012-11-01

    Full Text Available Attentional, intentional, and motivational factors are known to influence the physiological responses in a Concealed Information Test (CIT). Although concealing information is essentially a social action closely related to motivation, CIT studies typically rely on testing participants in an environment lacking social stimuli: subjects interact with a computer while sitting alone in an experimental room. To address this gap, we examined the influence of social stimuli on the physiological responses in a CIT. Seventy-one participants underwent a mock-crime experiment with a modified CIT. In a between-subjects design, subjects were either questioned acoustically by a pre-recorded male voice presented together with a virtual male experimenter’s uniform face, or by a text field on the screen, which displayed the question devoid of face and voice. Electrodermal activity (EDA), respiration line length (RLL), phasic heart rate (pHR), and finger pulse waveform length (FPWL) were registered. The Psychopathic Personality Inventory - Revised (PPI-R) was administered in addition. The differential responses of RLL, pHR, and FPWL to probe vs. irrelevant items were greater in the condition with social stimuli than in the text condition; interestingly, the differential responses of EDA did not differ between conditions. No modulatory influence of the PPI-R sum or subscale scores was found. The results emphasize the relevance of social aspects in the process of concealing information and in its detection. Attentional demands as well as the participants’ motivation to avoid detection might be the important links between social stimuli and physiological responses in the CIT.

  7. Combining Blink, Pupil, and Response Time Measures in a Concealed Knowledge Test

    Directory of Open Access Journals (Sweden)

    Travis eSeymour

    2013-02-01

    Full Text Available The response time (RT) based Concealed Knowledge Test (CKT) has been shown to accurately detect participants’ knowledge of mock-crime-related information. Tests based on ocular measures such as pupil size and blink rate have sometimes resulted in poor classification, or lacked detailed classification analyses. The present study examines the fitness of multiple pupil- and blink-related responses in the CKT paradigm. To maximize classification efficiency, participants’ concealed knowledge was assessed using both individual test measures and combinations of test measures. Results show that individual pupil-size, pupil-slope, and pre-response blink-rate measures produce efficient classifications. Combining pupil and blink measures yielded more accurate classifications than individual ocular measures. Although RT-based tests proved efficient, combining RT with ocular measures had little incremental benefit. It is argued that covertly assessing ocular measures during RT-based tests may guard against effective countermeasure use in applied settings. A compound classification procedure was used to categorize individual participants and yielded high hit rates and low false-alarm rates without the need for adjustments between test paradigms or subject populations. We conclude that with appropriate test paradigms and classification analyses, ocular measures may prove as effective as other indices, though additional research is needed.

  8. What's on your mind? Recent advances in memory detection using the Concealed Information Test

    NARCIS (Netherlands)

    Verschuere, B.; Meijer, E.H.

    2014-01-01

    Lie detectors can be applied in a wide variety of settings. But this advantage comes with a considerable cost: false positives. The applicability of the Concealed Information Test (CIT) is more limited, yet when it can be applied, the risk of false accusations can be set a priori at a very low level.

  9. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
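
    A toy numerical illustration of the two bookkeeping schemes, assuming a hypothetical observable that depends linearly on the systematic parameters; it sketches the idea only and is not the MiniBooNE implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def observable(params, n_events, rng):
        """Toy MC: a per-sample mean whose expectation shifts linearly with
        the systematic parameters (hypothetical response model)."""
        events = rng.normal(loc=params.sum(), scale=1.0, size=n_events)
        return events.mean()

    def unisim(sigmas, n_events, rng):
        """One MC run per systematic, varied by +1 sigma; shifts from the
        nominal run are added in quadrature."""
        nominal = observable(np.zeros_like(sigmas), n_events, rng)
        shifts = []
        for i, s in enumerate(sigmas):
            p = np.zeros_like(sigmas)
            p[i] = s
            shifts.append(observable(p, n_events, rng) - nominal)
        return np.sqrt(np.sum(np.square(shifts)))

    def multisim(sigmas, n_events, n_runs, rng):
        """Every run draws all systematics from their assumed normal
        distributions; the spread of the observable estimates the total
        systematic error."""
        results = [observable(rng.normal(0.0, sigmas), n_events, rng)
                   for _ in range(n_runs)]
        return np.std(results, ddof=1)

    # example: sig = np.array([0.05, 0.10, 0.02])
    # print(unisim(sig, 10000, rng), multisim(sig, 10000, 200, rng))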

  10. Error Estimation and Accuracy Improvements in Nodal Transport Methods; Estimacion de Errores y Aumento de la Precision en Metodos Nodales de Transporte

    Energy Technology Data Exchange (ETDEWEB)

    Zamonsky, O M [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and provide a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and with spatial reconstructions of the angular fluxes that are more accurate than those used previously. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, for certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  11. Error assessment in recombinant baculovirus titration: evaluation of different methods.

    Science.gov (United States)

    Roldão, António; Oliveira, Rui; Carrondo, Manuel J T; Alves, Paula M

    2009-07-01

    The success of the baculovirus/insect cell system in heterologous protein expression depends on the robustness and efficiency of the production workflow. It is essential that process parameters are controlled and include as little variability as possible. The multiplicity of infection (MOI) is the most critical factor since irreproducible MOIs caused by inaccurate estimation of viral titers hinder batch consistency and process optimization. This lack of accuracy is related to intrinsic characteristics of the method such as the inability to distinguish between infectious and non-infectious baculovirus. In this study, several methods for baculovirus titration were compared. The most critical issues identified were the incubation time and cell concentration at the time of infection. These variables strongly influence the accuracy of titers and must be defined for optimal performance of the titration method. Although the standard errors of the methods varied significantly (7-36%), titers were within the same order of magnitude; thus, viral titers can be considered independent of the method of titration. A cost analysis of the baculovirus titration methods used in this study showed that the alamarBlue, real-time Q-PCR and plaque assays were the most expensive techniques. The remaining methods cost on average 75% less than the former methods. Based on the cost, time and error analysis undertaken in this study, the end-point dilution assay, microculture tetrazolium assay and flow cytometric assay were found to be the techniques that best combine these three main factors. Nevertheless, it is always recommended to confirm the accuracy of the titration either by comparison with a well-characterized baculovirus reference stock or by titration using two different methods and verification of the variability of results.

  12. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^-(d^n-1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  13. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Fibonacci collocation method with a residual error function to solve linear Volterra integro-differential equations

    Directory of Open Access Journals (Sweden)

    Salih Yalcinbas

    2016-01-01

    Full Text Available In this paper, a new collocation method based on the Fibonacci polynomials is introduced to solve high-order linear Volterra integro-differential equations under the given conditions. Numerical examples are included to demonstrate the applicability and validity of the proposed method, and comparisons are made with existing results. In addition, an error estimation based on the residual functions is presented for this method. The approximate solutions are improved by using this error estimation.

  15. Risk factors responsible for atrial fibrillation development between symptomatic patients with concealed or manifest atrioventricular accessory pathways

    Directory of Open Access Journals (Sweden)

    Mu Chen

    2015-06-01

    Conclusions: Results from this study demonstrate that the risk factors of AF are not homogeneous between concealed and manifest APs, which might suggest heterogeneous pathogenesis of AF in these two types of APs.

  16. Total error components - isolation of laboratory variation from method performance

    International Nuclear Information System (INIS)

    Bottrell, D.; Bleyler, R.; Fisk, J.; Hiatt, M.

    1992-01-01

    The consideration of total error across sampling and analytical components of environmental measurements is relatively recent. The U.S. Environmental Protection Agency (EPA), through the Contract Laboratory Program (CLP), provides complete analyses and documented reports on approximately 70,000 samples per year. The quality assurance (QA) functions of the CLP procedures provide an ideal database, the CLP Automated Results Database (CARD), to evaluate program performance relative to quality control (QC) criteria and to evaluate the analysis of blind samples. Repetitive analyses of blind samples within each participating laboratory provide a mechanism to separate laboratory and method performance. Isolation of error sources is necessary to identify effective options to establish performance expectations, and to improve procedures. In addition, optimized method performance is necessary to identify significant effects that result from the selection among alternative procedures in the data collection process (e.g., sampling device, storage container, mode of sample transit, etc.). This information is necessary to evaluate data quality; to understand overall quality; and to provide appropriate, cost-effective information required to support a specific decision.

  17. Addressing Phase Errors in Fat-Water Imaging Using a Mixed Magnitude/Complex Fitting Method

    Science.gov (United States)

    Hernando, D.; Hines, C. D. G.; Yu, H.; Reeder, S.B.

    2012-01-01

    Accurate, noninvasive measurements of liver fat content are needed for the early diagnosis and quantitative staging of nonalcoholic fatty liver disease. Chemical shift-based fat quantification methods acquire images at multiple echo times using a multiecho spoiled gradient echo sequence, and provide fat fraction measurements through postprocessing. However, phase errors, such as those caused by eddy currents, can adversely affect fat quantification. These phase errors are typically most significant at the first echo of the echo train, and introduce bias in complex-based fat quantification techniques. These errors can be overcome using a magnitude-based technique (where the phase of all echoes is discarded), but at the cost of significantly degraded signal-to-noise ratio, particularly for certain choices of echo time combinations. In this work, we develop a reconstruction method that overcomes these phase errors without the signal-to-noise ratio penalty incurred by magnitude fitting. This method discards the phase of the first echo (which is often corrupted) while maintaining the phase of the remaining echoes (where phase is unaltered). We test the proposed method on 104 patient liver datasets (from 52 patients, each scanned twice), where the fat fraction measurements are compared to coregistered spectroscopy measurements. We demonstrate that mixed fitting is able to provide accurate fat fraction measurements with high signal-to-noise ratio and low bias over a wide choice of echo combinations. PMID:21713978
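
    The mixed fitting idea can be sketched as a residual function for a nonlinear least-squares solver: the first echo enters through its magnitude only, while later echoes contribute complex mismatches. The simplified single-R2* signal model and every name below are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from scipy.optimize import least_squares

    def mixed_fit_residual(params, te, signal, fat_modulation):
        """params = (water, fat, field map psi [Hz], R2* [1/s], initial phase).
        fat_modulation is the known complex fat spectral modulation c(t).
        Simplified model: s(t) = (W + F*c(t)) * exp(i(2*pi*psi*t + phi0)) * exp(-R2*t)."""
        w, f, psi, r2star, phi0 = params
        model = ((w + f * fat_modulation)
                 * np.exp(1j * (2 * np.pi * psi * te + phi0))
                 * np.exp(-r2star * te))
        res_first = np.abs(model[0]) - np.abs(signal[0])   # first echo: magnitude only
        res_rest = model[1:] - signal[1:]                   # later echoes: complex
        return np.concatenate([[res_first], res_rest.real, res_rest.imag])

    # usage sketch: least_squares(mixed_fit_residual, x0=[60, 20, 0, 40, 0],
    #                             args=(te, measured_signal, c_of_te))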

  18. Errors in accident data, its types, causes and methods of rectification-analysis of the literature.

    Science.gov (United States)

    Ahmed, Ashar; Sadullah, Ahmad Farhan Mohd; Yahya, Ahmad Shukri

    2017-07-29

    Most of the decisions taken to improve road safety are based on accident data, which makes it the backbone of any country's road safety system. Errors in this data will lead to misidentification of black spots and hazardous road segments, projection of false estimates pertinent to accidents and fatality rates, and detection of wrong parameters responsible for accident occurrence, thereby making the entire road safety exercise ineffective. The extent of error varies from country to country depending upon various factors. Knowing the type of error in the accident data and the factors causing it enables the application of the correct method for its rectification. Therefore, there is a need for a systematic literature review that addresses the topic at a global level. This paper fulfils the above research gap by providing a synthesis of literature for the different types of errors found in the accident data of 46 countries across the six regions of the world. The errors are classified and discussed by type and analysed with respect to income level; an assessment of the magnitude of each type is provided, followed by the different causes that result in their occurrence and the various methods used to address each type of error. Among high-income countries the extent of error in reporting slight, severe, non-fatal and fatal injury accidents varied between 39-82%, 16-52%, 12-84%, and 0-31% respectively. For middle-income countries the error for the same categories varied between 93-98%, 32.5-96%, 34-99% and 0.5-89.5% respectively. The only four studies available for low-income countries showed that the error in reporting non-fatal and fatal accidents varied between 69-80% and 0-61% respectively. The logistic relation of error in accident data reporting, dichotomised at 50%, indicated that as the income level of a country increases, the probability of having less error in accident data also increases. Average error in recording information related to the

  19. Error analysis of semidiscrete finite element methods for inhomogeneous time-fractional diffusion

    KAUST Repository

    Jin, B.; Lazarov, R.; Pasciak, J.; Zhou, Z.

    2014-01-01

    © 2014 Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. We consider the initial-boundary value problem for an inhomogeneous time-fractional diffusion equation with a homogeneous Dirichlet boundary condition, vanishing initial data and a nonsmooth right-hand side in a bounded convex polyhedral domain. We analyse two semidiscrete schemes based on the standard Galerkin and lumped mass finite element methods. Almost optimal error estimates are obtained for right-hand side data f(x, t) ∈ L∞(0, T; H^q(Ω)), -1 ≤ q ≤ 1, for both semidiscrete schemes. For the lumped mass method, the optimal L2(Ω)-norm error estimate requires symmetric meshes. Finally, two-dimensional numerical experiments are presented to verify our theoretical results.

  20. Error analysis of semidiscrete finite element methods for inhomogeneous time-fractional diffusion

    KAUST Repository

    Jin, B.

    2014-05-30

    © 2014 Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. We consider the initial-boundary value problem for an inhomogeneous time-fractional diffusion equation with a homogeneous Dirichlet boundary condition, vanishing initial data and a nonsmooth right-hand side in a bounded convex polyhedral domain. We analyse two semidiscrete schemes based on the standard Galerkin and lumped mass finite element methods. Almost optimal error estimates are obtained for right-hand side data f(x, t) ∈ L∞(0, T; H^q(Ω)), -1 ≤ q ≤ 1, for both semidiscrete schemes. For the lumped mass method, the optimal L2(Ω)-norm error estimate requires symmetric meshes. Finally, two-dimensional numerical experiments are presented to verify our theoretical results.

  1. Prospective effects of social support on internalized homonegativity and sexual identity concealment among middle-aged and older gay men: a longitudinal cohort study.

    Science.gov (United States)

    Lyons, Anthony; Pepping, Christopher A

    2017-09-01

    Middle-aged and older gay men experience higher rates of depression and anxiety compared to their heterosexual counterparts, with internalized homonegativity and sexual identity concealment known to be major stress-related contributors. This study examined the prospective effect of different types and sources of social support on internalized homonegativity and sexual identity concealment experienced among middle-aged and older gay men. A longitudinal survey involving two waves of data collection separated by 12 months was conducted among a cohort of 186 gay-identified men aged 40 years and older. Two types of social support were found to be important. Greater baseline tangible or practical support independently predicted lower internalized homonegativity at 12-month follow-up, while greater baseline emotional or psychological support independently predicted a lower tendency toward sexual identity concealment at 12-month follow-up. Greater baseline support from community or government agencies, such as health services and support organizations, predicted higher internalized homonegativity at 12-month follow-up. These findings suggest that tangible and emotional support may be beneficial in reducing internalized homonegativity and sexual identity concealment among middle-aged and older gay men. Ensuring that services provide environments that do not compound the stressful impact of stigma also appears to be important.

  2. The manifest but concealed background of our communication

    Directory of Open Access Journals (Sweden)

    Erkut SEZGIN

    2012-01-01

    Full Text Available That manifest background needs to be elucidated as against intentional memory and imagination habits structured by our learning and operating with rules and pictures (representations) of language. That is the background which is concealed by our very demonstrative forms of expression, meaning and speaking habits expressed by intentional gestures and gesticulations of meaning the surrounding differences and identities: as if they were self-essential representatives of their own truth and certainty, which is supposed to be meant by the demonstrative, intentional form of the expression. On the other hand, such intentional demonstrative gestures and gesticulations of meaning operate as conditioned forms of expression of truth beliefs of imagination and memory habits expressed in reaction to the differences and identities pictured (represented) by names and descriptions, in deep oblivion of the internal signifying connections of the use of pictures.

  3. The relative size of measurement error and attrition error in a panel survey. Comparing them with a new multi-trait multi-method model

    NARCIS (Netherlands)

    Lugtig, Peter

    2017-01-01

    This paper proposes a method to simultaneously estimate both measurement and nonresponse errors for attitudinal and behavioural questions in a longitudinal survey. The method uses a Multi-Trait Multi-Method (MTMM) approach, which is commonly used to estimate the reliability and validity of survey questions.

  4. Error analysis of some Galerkin - least squares methods for the elasticity equations

    International Nuclear Information System (INIS)

    Franca, L.P.; Stenberg, R.

    1989-05-01

    We consider the recent technique of stabilizing mixed finite element methods by augmenting the Galerkin formulation with least squares terms calculated separately on each element. The error analysis is performed in a unified manner yielding improved results for some methods introduced earlier. In addition, a new formulation is introduced and analyzed.

  5. Mixed Methods Analysis of Medical Error Event Reports: A Report from the ASIPS Collaborative

    National Research Council Canada - National Science Library

    Harris, Daniel M; Westfall, John M; Fernald, Douglas H; Duclos, Christine W; West, David R; Niebauer, Linda; Marr, Linda; Quintela, Javan; Main, Deborah S

    2005-01-01

    .... This paper presents a mixed methods approach to analyzing narrative error event reports. Mixed methods studies integrate one or more qualitative and quantitative techniques for data collection and analysis...

  6. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    Science.gov (United States)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. The overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting processes are performed to obtain a lower residual result. For the quantitative experiments of Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectrum of five different concentrations of CuSO4·5H2O solution were decomposed and corrected using curve fitting and error compensation methods. Compared with the curve fitting method, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, which can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
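
    One plausible reading of the residual-feedback step, sketched for two overlapping Gaussian lines; the peak shapes, the feedback rule, and all names are assumptions for illustration rather than the authors' exact procedure.

    import numpy as np
    from scipy.optimize import curve_fit

    def two_gaussians(x, a1, c1, w1, a2, c2, w2):
        """Model for two overlapping emission lines (e.g. a Cu and an Fe line)."""
        return (a1 * np.exp(-0.5 * ((x - c1) / w1) ** 2)
                + a2 * np.exp(-0.5 * ((x - c2) / w2) ** 2))

    def fit_with_error_compensation(x, y, p0, n_iter=5):
        """After each fit, the residual against the measured spectrum is fed
        back onto the data used for the next fit; the lowest-residual fit is kept."""
        data, best_p, best_rss = y.copy(), None, np.inf
        for _ in range(n_iter):
            p, _ = curve_fit(two_gaussians, x, data, p0=p0, maxfev=10000)
            residual = y - two_gaussians(x, *p)
            rss = float(np.sum(residual ** 2))
            if rss >= best_rss:
                break
            best_p, best_rss = p, rss
            data = y + residual          # feed the residual back
            p0 = p
        return best_p, best_rss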

  7. Development of an analysis rule of diagnosis error for standard method of human reliability analysis

    International Nuclear Information System (INIS)

    Jeong, W. D.; Kang, D. I.; Jeong, K. S.

    2003-01-01

    This paper presents the status of development of the Korean standard method for Human Reliability Analysis (HRA) and proposes a standard procedure and rules for the evaluation of diagnosis error probability. The quality of the KSNP HRA was evaluated using the requirements of the ASME PRA standard guideline, and the design requirements for the standard HRA method were defined. The analysis procedure and rules developed so far for analyzing diagnosis error probability are suggested as a part of the standard method. A comprehensive application study was also performed to evaluate the suitability of the proposed rules.

  8. The systematic error of temperature noise correlation measurement method and self-calibration

    International Nuclear Information System (INIS)

    Tian Hong; Tong Yunxian

    1993-04-01

    The turbulent transport behavior of fluid noise and the effect of noise on the velocity measurement system have been studied. The systematic error of the velocity measurement system is analyzed. A theoretical calibration method is proposed, which makes the time-correlation velocity measurement an absolute measurement method. The theoretical results are in good agreement with experiments.

  9. The Concealed Information Test in the Laboratory Versus Japanese Field Practice: Bridging the Scientist-Practitioner Gap

    NARCIS (Netherlands)

    Ogawa, T.; Matsuda, I.; Tsuneoka, M.; Verschuere, B.

    2015-01-01

    Whereas the Concealed Information Test (CIT) is heavily researched in laboratories, Japan is the only country that applies it on a large scale to real criminal investigations. Here we note that important differences exist in CIT design, data-analysis, and test conclusions between these two settings.

  10. Filtering Methods for Error Reduction in Spacecraft Attitude Estimation Using Quaternion Star Trackers

    Science.gov (United States)

    Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil

    2011-01-01

    Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors, 1) extended Kalman filter (EKF) augmented with Markov states, 2) Unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.
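
    A one-dimensional toy version of the first idea (a filter augmented with Markov states), assuming a scalar attitude-like state observed through a sensor with a slowly varying first-order Gauss-Markov bias; dimensions, noise levels and names are illustrative only.

    import numpy as np

    def kf_with_markov_bias(z, dt, tau, q_bias, r_meas, q_x):
        """Augment the state with a Gauss-Markov bias so slowly varying
        systematic sensor errors are estimated instead of being absorbed
        into the attitude estimate."""
        phi_b = np.exp(-dt / tau)                      # bias propagation factor
        F = np.array([[1.0, 0.0], [0.0, phi_b]])
        Q = np.diag([q_x, q_bias * (1.0 - phi_b ** 2)])
        H = np.array([[1.0, 1.0]])                     # measurement = state + bias
        R = np.array([[r_meas]])
        x, P = np.zeros((2, 1)), np.eye(2)
        out = []
        for zk in z:
            x, P = F @ x, F @ P @ F.T + Q              # predict
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
            x = x + K @ (np.array([[zk]]) - H @ x)     # update
            P = (np.eye(2) - K @ H) @ P
            out.append(x.ravel().copy())
        return np.array(out)                           # columns: state, bias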

  11. Between Concealing and Revealing Intersexed Bodies: Parental Strategies.

    Science.gov (United States)

    Danon, Limor Meoded; Krämer, Anike

    2017-08-01

    Parents of intersex children are perceived in many studies as hopeless, highly dependent on the medical system, and as gatekeepers of normative gendered bodies. In this article, we challenge these perceptions and argue that parents of intersex children are problematically positioned between their children's needs for care and well-being and the social-medical forces that aim to "normalize" them. Their in-between position leads them to establish different parental strategies within and outside of traditional sex/gender norms. We focus on three intertwined parental strategy frameworks: bodily dialogue, sex/gender framing, and concealing/revealing practices, and describe how, in each of these strategic frameworks, the parents maneuver, act in accordance with or against, react to, and challenge the medical system, social interactions, and the sex/gender paradigm. This is a comparative study based on narrative interviews with 22 parents of intersex children in Germany and Israel.

  12. Reliable methods for computer simulation error control and a posteriori estimates

    CERN Document Server

    Neittaanmäki, P

    2004-01-01

    Recent decades have seen very rapid success in developing numerical methods based on explicit control over approximation errors. It may be said that nowadays a new direction is forming in numerical analysis, the main goal of which is to develop methods of reliable computations. In general, a reliable numerical method must solve two basic problems: (a) generate a sequence of approximations that converges to a solution and (b) verify the accuracy of these approximations. A computer code for such a method must consist of two respective blocks: solver and checker. In this book, we are chie…

  13. Errors of absolute methods of reactor neutron activation analysis caused by non-1/E epithermal neutron spectra

    International Nuclear Information System (INIS)

    Erdtmann, G.

    1993-08-01

    A sufficiently accurate characterization of the neutron flux and spectrum, i.e. the determination of the thermal flux, the flux ratio and the epithermal flux spectrum shape factor, α, is a prerequisite for all types of absolute and monostandard methods of reactor neutron activation analysis. A convenient method for these measurements is the bare triple monitor method. However, the results of this method are very imprecise, because there are high error propagation factors from the counting errors of the monitor activities. Procedures are described to calculate the errors of the flux parameters, the α-dependent cross-section ratios, and of the analytical results from the errors of the activities of the monitor isotopes. They are included in FORTRAN programs which also allow a graphical representation of the results. A great number of examples were calculated for ten different irradiation facilities in four reactors and for 28 elements. Plots of the results are presented and discussed. (orig./HP)

  14. On Round-off Error for Adaptive Finite Element Methods

    KAUST Repository

    Alvarez-Aramberri, J.

    2012-06-02

    Round-off error analysis has historically been studied by analyzing the condition number of the associated matrix. By controlling the size of the condition number, it is possible to guarantee a prescribed round-off error tolerance. However, the opposite is not true, since it is possible to have a system of linear equations with an arbitrarily large condition number that still delivers a small round-off error. In this paper, we perform a round-off error analysis in the context of 1D and 2D hp-adaptive Finite Element simulations for the case of the Poisson equation. We conclude that boundary conditions play a fundamental role in the round-off error analysis, especially for the so-called ‘radical meshes’. Moreover, we illustrate the importance of the right-hand side when analyzing the round-off error, which is independent of the condition number of the matrix.

  15. On Round-off Error for Adaptive Finite Element Methods

    KAUST Repository

    Alvarez-Aramberri, J.; Pardo, David; Paszynski, Maciej; Collier, Nathan; Dalcin, Lisandro; Calo, Victor M.

    2012-01-01

    Round-off error analysis has historically been studied by analyzing the condition number of the associated matrix. By controlling the size of the condition number, it is possible to guarantee a prescribed round-off error tolerance. However, the opposite is not true, since it is possible to have a system of linear equations with an arbitrarily large condition number that still delivers a small round-off error. In this paper, we perform a round-off error analysis in the context of 1D and 2D hp-adaptive Finite Element simulations for the case of the Poisson equation. We conclude that boundary conditions play a fundamental role in the round-off error analysis, especially for the so-called ‘radical meshes’. Moreover, we illustrate the importance of the right-hand side when analyzing the round-off error, which is independent of the condition number of the matrix.

  16. A novel method to correct for pitch and yaw patient setup errors in helical tomotherapy

    International Nuclear Information System (INIS)

    Boswell, Sarah A.; Jeraj, Robert; Ruchala, Kenneth J.; Olivera, Gustavo H.; Jaradat, Hazim A.; James, Joshua A.; Gutierrez, Alonso; Pearson, Dave; Frank, Gary; Mackie, T. Rock

    2005-01-01

    An accurate means of determining and correcting for daily patient setup errors is important to the cancer outcome in radiotherapy. While many tools have been developed to detect setup errors, difficulty may arise in accurately adjusting the patient to account for the rotational error components. A novel, automated method to correct for rotational patient setup errors in helical tomotherapy is proposed for a treatment couch that is restricted to motion along translational axes. In tomotherapy, only a narrow superior/inferior section of the target receives a dose at any instant, thus rotations in the sagittal and coronal planes may be approximately corrected for by very slow continuous couch motion in a direction perpendicular to the scanning direction. Results from proof-of-principle tests indicate that the method improves the accuracy of treatment delivery, especially for long and narrow targets. Rotational corrections about an axis perpendicular to the transverse plane continue to be implemented easily in tomotherapy by adjustment of the initial gantry angle

  17. A review of some a posteriori error estimates for adaptive finite element methods

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2010-01-01

    Roč. 80, č. 8 (2010), s. 1589-1600 ISSN 0378-4754. [European Seminar on Coupled Problems. Jetřichovice, 08.06.2008-13.06.2008] R&D Projects: GA AV ČR(CZ) IAA100190803 Institutional research plan: CEZ:AV0Z10190503 Keywords : hp-adaptive finite element method * a posteriori error estimators * computational error estimates Subject RIV: BA - General Mathematics Impact factor: 0.812, year: 2010 http://www.sciencedirect.com/science/article/pii/S0378475408004230

  18. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Full Text Available Abstract. Background: The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error prone data such as expressed sequence tags, low coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results: This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion: In light of these results, we recommend the use of these three new methods for the determination of θ from error prone sequences. In particular, we advocate the new maximum likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
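
    A minimal sketch of the singleton-excluding Watterson-type estimator described above, assuming an unfolded site-frequency spectrum (derived-allele counts known) so that E[S - xi_1] = theta*(a_n - 1) under the infinite-sites model; the folded case needs a slightly different denominator, and the names are illustrative.

    import numpy as np

    def theta_w_excluding_singletons(derived_counts, n):
        """derived_counts: derived-allele count at each segregating site in a
        sample of n sequences.  Singletons (count == 1), where random
        sequencing errors are expected to land, are ignored."""
        counts = np.asarray(derived_counts)
        s_shared = int(np.sum(counts >= 2))
        a_n = np.sum(1.0 / np.arange(1, n))     # a_n = sum_{i=1}^{n-1} 1/i
        return s_shared / (a_n - 1.0)

    # toy usage: derived counts at six segregating sites in a sample of five
    # print(theta_w_excluding_singletons([1, 2, 1, 3, 2, 4], n=5))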

  19. The error analysis of the determination of the activity coefficients via the isopiestic method

    International Nuclear Information System (INIS)

    Zhou Jun; Chen Qiyuan; Fang Zheng; Liang Yizeng; Liu Shijun; Zhou Yong

    2005-01-01

    Error analysis is very important to experimental design. The error analysis of the determination of activity coefficients for a binary system via the isopiestic method shows that the error sources include not only the experimental errors of the analyzed molalities and the measured osmotic coefficients, but also the deviation of the regressed values from the experimental data when a regression function is used. It also shows that accurate chemical analysis of the molality of the test solution is important, and it is preferable to keep the error of the measured osmotic coefficients constant in all isopiestic experiments, including those on very dilute solutions. The isopiestic experiments on dilute solutions are very important, and the lowest molality should be low enough that a theoretical method can be used below it. It is necessary that the isopiestic experiments be done on test solutions below 0.1 mol·kg⁻¹. For most electrolyte solutions, it is usually preferable to require the lowest molality to be less than 0.05 mol·kg⁻¹. Moreover, the experimental molalities of the test solutions should first be arranged by keeping the interval of the logarithms of the molalities nearly constant; second, a larger number of high molalities should be arranged, and we propose to arrange the experimental molalities greater than 1 mol·kg⁻¹ according to an arithmetic progression of the molality intervals. After the experiments, the error of the calculated activity coefficients of the solutes can be calculated from the actual values of the errors of the measured isopiestic molalities and the deviations of the regressed values from the experimental values, using the equations we obtained.

  20. A method for the quantification of model form error associated with physical systems.

    Energy Technology Data Exchange (ETDEWEB)

    Wallen, Samuel P.; Brake, Matthew Robert

    2014-03-01

    In the process of model validation, models are often declared valid when the differences between model predictions and experimental data sets are satisfactorily small. However, little consideration is given to the effectiveness of a model using parameters that deviate slightly from those that were fitted to data, such as a higher load level. Furthermore, few means exist to compare and choose between two or more models that reproduce data equally well. These issues can be addressed by analyzing model form error, which is the error associated with the differences between the physical phenomena captured by models and that of the real system. This report presents a new quantitative method for model form error analysis and applies it to data taken from experiments on tape joint bending vibrations. Two models for the tape joint system are compared, and suggestions for future improvements to the method are given. As the available data set is too small to draw any statistical conclusions, the focus of this paper is the development of a methodology that can be applied to general problems.

  1. Prevention of firearm-related injuries with restrictive licensing and concealed carry laws: An Eastern Association for the Surgery of Trauma systematic review.

    Science.gov (United States)

    Crandall, Marie; Eastman, Alexander; Violano, Pina; Greene, Wendy; Allen, Steven; Block, Ernest; Christmas, Ashley Britton; Dennis, Andrew; Duncan, Thomas; Foster, Shannon; Goldberg, Stephanie; Hirsh, Michael; Joseph, D'Andrea; Lommel, Karen; Pappas, Peter; Shillinglaw, William

    2016-11-01

    In the past decade, more than 300,000 people in the United States have died from firearm injuries. Our goal was to assess the effectiveness of two particular prevention strategies, restrictive licensing of firearms and concealed carry laws, on firearm-related injuries in the US. Restrictive licensing was defined to include denials of ownership for various offenses, such as performing background checks for domestic violence and felony convictions. Concealed carry laws allow licensed individuals to carry concealed weapons. A comprehensive review of the literature was performed. We used Grading of Recommendations Assessment, Development, and Evaluation methodology to assess the breadth and quality of the data specific to our Population, Intervention, Comparator, Outcomes (PICO) questions. A total of 4673 studies were initially identified, then seven more were added after two subsequent, additional literature reviews. Of these, 3,623 remained after removing duplicates; 225 case reports, case series, and reviews were excluded, and 3,379 studies were removed because they did not focus on prevention or did not address our comparators of interest. This left a total of 14 studies which merited inclusion for PICO 1 and 13 studies which merited inclusion for PICO 2. PICO 1: We recommend the use of restrictive licensing to reduce firearm-related injuries. PICO 2: We recommend against the use of concealed carry laws to reduce firearm-related injuries. This committee found an association between more restrictive licensing and lower firearm injury rates. All 14 studies were population-based, longitudinal, used modeling to control for covariates, and 11 of the 14 were multi-state. Twelve of the studies reported reductions in firearm injuries, from 7% to 40%. We found no consistent effect of concealed carry laws. Of note, the varied quality of the available data demonstrates a significant information gap, and this committee recommends that we as a society foster a nurturing and encouraging

  2. The nuclear physical method for high pressure steam manifold water level gauging and its error

    International Nuclear Information System (INIS)

    Li Nianzu; Li Beicheng; Jia Shengming

    1993-10-01

    A new non-contact method for measuring the water level of a high-pressure steam manifold with a nuclear detection technique is introduced. This method overcomes the inherent drawback of previous water level gauges based on other principles. It can realize full-range, real-time monitoring of the continuous water level of the high-pressure steam manifold from boiler start-up to full load, and the actual value of the water level can be obtained. The measuring errors were analysed on site. Errors from practical operation in the Tianjin Junliangcheng Power Plant and in the laboratory are also presented.

  3. Radon measurements-discussion of error estimates for selected methods

    International Nuclear Information System (INIS)

    Zhukovsky, Michael; Onischenko, Alexandra; Bastrikov, Vladislav

    2010-01-01

    The main sources of uncertainty for grab sampling, short-term (charcoal canisters) and long-term (track detectors) measurements are: systematic bias of the reference equipment; random Poisson and non-Poisson errors during calibration; and random Poisson and non-Poisson errors during measurements. The origins of non-Poisson random errors during calibration are different for different kinds of instrumental measurements. The main sources of uncertainty for retrospective measurements conducted by surface trap techniques can be divided into two groups: errors of surface 210Pb (210Po) activity measurements and uncertainties in the transfer from 210Pb surface activity in glass objects to the average radon concentration during the object's exposure. It is shown that the total measurement error of the surface trap retrospective technique can be decreased to 35%.

  4. Residual-based Methods for Controlling Discretization Error in CFD

    Science.gov (United States)

    2015-08-24

  5. A Fast Soft Bit Error Rate Estimation Method

    Directory of Open Access Journals (Sweden)

    Ait-Idir Tarik

    2010-01-01

    Full Text Available We have suggested in a previous publication a method to estimate the Bit Error Rate (BER) of a digital communications system instead of using the famous Monte Carlo (MC) simulation. This method was based on the estimation of the probability density function (pdf) of soft observed samples. The kernel method was used for the pdf estimation. In this paper, we suggest using a Gaussian Mixture (GM) model. The Expectation Maximisation algorithm is used to estimate the parameters of this mixture. The optimal number of Gaussians is computed by using mutual information theory. The analytical expression of the BER is therefore simply given by using the different estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three mentioned methods: Monte Carlo, Kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or kernel-aided techniques. The results show that the GM method can drastically reduce the number of samples needed to estimate the BER, and hence the required simulation run-time, even at very low BER.
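
    A small sketch of the Gaussian-mixture estimator, assuming BPSK-like soft outputs conditioned on a transmitted +1 (an error corresponds to a negative sample) and a fixed number of mixture components; scikit-learn's GaussianMixture stands in for the EM step, and the mutual-information model-order selection is omitted.

    import numpy as np
    from scipy.stats import norm
    from sklearn.mixture import GaussianMixture

    def gm_ber_estimate(soft_samples, n_components=3):
        """Fit a Gaussian mixture to the soft decision variables and compute
        P(error) = P(sample < 0) analytically from the mixture parameters,
        instead of counting Monte Carlo errors."""
        x = np.asarray(soft_samples, dtype=float).reshape(-1, 1)
        gm = GaussianMixture(n_components=n_components, random_state=0).fit(x)
        w = gm.weights_
        mu = gm.means_.ravel()
        sigma = np.sqrt(gm.covariances_.reshape(-1))
        return float(np.sum(w * norm.cdf(-mu / sigma)))   # sum_k w_k*Phi(-mu_k/sigma_k)

    # toy usage: samples = np.random.default_rng(0).normal(1.0, 0.5, 20000)
    # gm_ber_estimate(samples) is close to norm.cdf(-1/0.5) ~ 2.3e-2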

  6. Adaptive EWMA Method Based on Abnormal Network Traffic for LDoS Attacks

    Directory of Open Access Journals (Sweden)

    Dan Tang

    2014-01-01

    Full Text Available Low-rate denial of service (LDoS) attacks reduce network service capabilities by periodically sending high-intensity pulse data flows. Because of their concealed nature, it is more difficult for traditional DoS detection methods to detect LDoS attacks; at the same time, the accuracy of current detection methods for LDoS attacks is relatively low. Since LDoS attacks lead to an abnormal distribution of ACK traffic, they can be detected by analyzing the distribution characteristics of that traffic. The traditional EWMA algorithm, which smooths the accidental error and the exceptional mutation alike, may cause some misjudgment; therefore a new LDoS detection method based on an adaptive EWMA (AEWMA) algorithm is proposed. The AEWMA algorithm, which uses an adaptive weighting function instead of the constant weighting of the EWMA algorithm, can smooth the accidental error while retaining the exceptional mutation. The AEWMA method is therefore better suited than the EWMA method for analyzing and measuring the abnormal distribution of ACK traffic. NS2 simulations show that the AEWMA method can detect LDoS attacks effectively with low false negative and false positive rates. Based on the DARPA99 datasets, experimental results show that the AEWMA method is more efficient than the EWMA method.
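
    One common way to realize an adaptive EWMA is with a Huber-style score: the update behaves like an ordinary EWMA for small deviations but follows large jumps almost directly. The weighting rule, the threshold k and the running scale estimate below are illustrative assumptions, not necessarily the paper's weighting function.

    import numpy as np

    def aewma(x, lam=0.2, k=3.0):
        """Adaptive EWMA of a traffic statistic (e.g. per-interval ACK counts):
        small deviations are smoothed with weight lam, while deviations larger
        than k*scale are tracked almost one-for-one."""
        x = np.asarray(x, dtype=float)
        y = np.empty_like(x)
        y[0] = x[0]
        for t in range(1, len(x)):
            e = x[t] - y[t - 1]
            scale = np.std(x[:t]) + 1e-12          # rough running scale estimate
            if abs(e) <= k * scale:
                phi = lam * e                      # smooth accidental fluctuations
            else:
                phi = e - np.sign(e) * (1.0 - lam) * k * scale   # retain large jumps
            y[t] = y[t - 1] + phi
        return y

    # detection idea: flag an LDoS attack when the observed ACK statistic departs
    # from its AEWMA baseline by more than a chosen threshold.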

  7. A New Method to Detect and Correct the Critical Errors and Determine the Software-Reliability in Critical Software-System

    International Nuclear Information System (INIS)

    Krini, Ossmane; Börcsök, Josef

    2012-01-01

    In order to use electronic systems comprising software and hardware components in safety-related and highly safety-related applications, it is necessary to meet the marginal risk numbers required by standards and legislative provisions. Existing processes and mathematical models are used to verify the risk numbers. On the hardware side, various accepted mathematical models, processes, and methods exist to provide the required proof. To this day, however, there are no closed models or mathematical procedures known that allow for a dependable prediction of software reliability. This work presents a method that predicts the residual number of critical errors in software. Conventional models lack this ability, and right now there are no methods that forecast critical errors. The new method shows that an estimate of the residual number of critical errors in software systems is possible by using a combination of prediction models, a ratio of critical errors, and the total error number. Subsequently, the critical expected value function at any point in time can be derived from the new solution method, provided the detection rate has been calculated using an appropriate estimation method. The presented method also makes it possible to estimate the critical failure rate. The approach is modelled on a real process and therefore describes two essential processes: detection and correction.

  8. The concealed finds from the Mühlberg-Ensemble in Kempten (southern Germany)

    DEFF Research Database (Denmark)

    Atzbach, Rainer

    2012-01-01

    Concealed finds in buildings are a worldwide phenomenon. Since the 14th century, the angles of vaults, the dead space between ceilings and floors, walled niches and other voids in buildings have been used to dump waste, mostly on the occasion of rebuilding activities. In a few cases, careful … history. This paper examines an exceptional collection of assemblages recovered from dead spaces within three adjacent buildings in the town of Kempten, southern Germany. It summarizes the major research project based on the wide variety of finds recovered, including numerous objects of wood, leather, fur …

  9. Human reliability analysis of errors of commission: a review of methods and applications

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2007-06-15

    Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this paper furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) The CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because this scheme provides a formalized way for identifying relatively important scenarios with EOC opportunities; (2) an EOC identification guidance like CESA, which is strongly based on the procedural guidance and important measures of systems or components affected by inappropriate actions, however should pay some attention to EOCs associated with familiar but non-procedural actions and EOCs leading to failures of manually initiated safety functions. (3) Orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for

  10. Human reliability analysis of errors of commission: a review of methods and applications

    International Nuclear Information System (INIS)

    Reer, B.

    2007-06-01

    Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this paper furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) The CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because this scheme provides a formalized way for identifying relatively important scenarios with EOC opportunities; (2) an EOC identification guidance like CESA, which is strongly based on the procedural guidance and important measures of systems or components affected by inappropriate actions, however should pay some attention to EOCs associated with familiar but non-procedural actions and EOCs leading to failures of manually initiated safety functions. (3) Orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for

  11. Passive quantum error correction of linear optics networks through error averaging

    Science.gov (United States)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.

  12. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme of Fourier transform magnitudes is presented in this brief. In our method, Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. In the second approach, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
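
    As a rough illustration of the error-reduction idea (a minimal sketch only: the patch size, the known-pixel mask, the target magnitude and the iteration count below are assumptions, and the paper's similarity-based magnitude estimation from known patches is not reproduced), an ER-style loop alternates a Fourier-magnitude constraint with a known-pixel constraint:

        import numpy as np

        def error_reduction(patch, known_mask, target_magnitude, n_iter=200):
            """Minimal ER loop: impose a target Fourier magnitude, retrieve the phase,
            and keep the known pixels fixed so the missing ones are filled in."""
            estimate = patch.copy()
            estimate[~known_mask] = patch[known_mask].mean()  # neutral start for missing pixels
            for _ in range(n_iter):
                spectrum = np.fft.fft2(estimate)
                phase = np.angle(spectrum)
                # Fourier-domain constraint: replace the magnitude, keep the retrieved phase
                estimate = np.real(np.fft.ifft2(target_magnitude * np.exp(1j * phase)))
                # Object-domain constraint: known intensities stay fixed
                estimate[known_mask] = patch[known_mask]
            return estimate

        # Toy usage: a 16x16 patch with a missing 4x4 block and the true magnitude as target.
        rng = np.random.default_rng(0)
        patch = rng.random((16, 16))
        mask = np.ones_like(patch, dtype=bool)
        mask[6:10, 6:10] = False
        recon = error_reduction(patch, mask, np.abs(np.fft.fft2(patch)))
        print(float(np.abs(recon - patch)[~mask].mean()))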

  13. A GPS-Based Pitot-Static Calibration Method Using Global Output-Error Optimization

    Science.gov (United States)

    Foster, John V.; Cunningham, Kevin

    2010-01-01

    Pressure-based airspeed and altitude measurements for aircraft typically require calibration of the installed system to account for pressure sensing errors such as those due to local flow field effects. In some cases, calibration is used to meet requirements such as those specified in Federal Aviation Regulation Part 25. Several methods are used for in-flight pitot-static calibration including tower fly-by, pacer aircraft, and trailing cone methods. In the 1990s, the introduction of satellite-based positioning systems to the civilian market enabled new in-flight calibration methods based on accurate ground speed measurements provided by Global Positioning Systems (GPS). Use of GPS for airspeed calibration has many advantages such as accuracy, ease of portability (e.g. hand-held) and the flexibility of operating in airspace without the limitations of test range boundaries or ground telemetry support. The current research was motivated by the need for a rapid and statistically accurate method for in-flight calibration of pitot-static systems for remotely piloted, dynamically-scaled research aircraft. Current calibration methods were deemed not practical for this application because of confined test range size and limited flight time available for each sortie. A method was developed that uses high data rate measurements of static and total pressure, and GPS-based ground speed measurements to compute the pressure errors over a range of airspeed. The novel application of this approach is the use of system identification methods that rapidly compute optimal pressure error models with defined confidence intervals in near-real time. This method has been demonstrated in flight tests and has shown 2-σ bounds of approximately 0.2 kts with an order of magnitude reduction in test time over other methods. As part of this experiment, a unique database of wind measurements was acquired concurrently with the flight experiments, for the purpose of experimental validation of the
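
    As a much simplified illustration of how GPS ground-speed data can expose an airspeed error (this is not the output-error method of the paper; the constant-wind assumption, the single scale-factor error model and the leg data below are all hypothetical), a small least-squares fit separates a steady wind from an airspeed scale error:

        import numpy as np

        # Each leg: indicated airspeed [kt], aircraft heading [deg], GPS ground velocity east/north [kt].
        # Values are made up; real data would come from steady legs flown on several headings.
        legs = np.array([
            [100.0,   0.0,    5.0,  108.0],
            [100.0,  90.0,  110.0,    3.0],
            [100.0, 180.0,    5.0, -102.0],
            [100.0, 270.0, -100.0,    3.0],
        ])

        v_ind = legs[:, 0]
        psi = np.radians(legs[:, 1])
        vg = legs[:, 2:]                                     # measured ground-velocity components
        unit = np.column_stack([np.sin(psi), np.cos(psi)])   # unit vector along heading (east, north)

        # Model: vg = (1 + k) * v_ind * unit + [We, Wn]
        # Rearranged per component: vg - v_ind*unit = k * v_ind*unit + [We, Wn]
        rows, rhs = [], []
        for i in range(len(legs)):
            for comp in range(2):  # 0 = east, 1 = north
                rows.append([v_ind[i] * unit[i, comp],
                             1.0 if comp == 0 else 0.0,
                             1.0 if comp == 1 else 0.0])
                rhs.append(vg[i, comp] - v_ind[i] * unit[i, comp])

        k, we, wn = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
        print(f"airspeed scale error k = {k:+.3f}, wind = ({we:.1f} E, {wn:.1f} N) kt")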

  14. Knowledge-base for the new human reliability analysis method, A Technique for Human Error Analysis (ATHEANA)

    International Nuclear Information System (INIS)

    Cooper, S.E.; Wreathall, J.; Thompson, C.M.; Drouin, M.; Bley, D.C.

    1996-01-01

    This paper describes the knowledge base for the application of the new human reliability analysis (HRA) method, ''A Technique for Human Error Analysis'' (ATHEANA). Since application of ATHEANA requires the identification of previously unmodeled human failure events, especially errors of commission, and associated error-forcing contexts (i.e., combinations of plant conditions and performance shaping factors), this knowledge base is an essential aid for the HRA analyst.

  15. Evaluating Method Engineer Performance: an error classification and preliminary empirical study

    Directory of Open Access Journals (Sweden)

    Steven Kelly

    1998-11-01

    Full Text Available We describe an approach to empirically test the use of metaCASE environments to model methods. Both diagrams and matrices have been proposed as a means for presenting the methods. These different paradigms may have their own effects on how easily and well users can model methods. We extend Batra's classification of errors in data modelling to cover metamodelling, and use it to measure the performance of a group of metamodellers using either diagrams or matrices. The tentative results from this pilot study confirm the usefulness of the classification, and show some interesting differences between the paradigms.

  16. Geochemical Data for Samples Collected in 2007 Near the Concealed Pebble Porphyry Cu-Au-Mo Deposit, Southwest Alaska

    Science.gov (United States)

    Fey, David L.; Granitto, Matthew; Giles, Stuart A.; Smith, Steven M.; Eppinger, Robert G.; Kelley, Karen D.

    2008-01-01

    In the summer of 2007, the U.S. Geological Survey (USGS) began an exploration geochemical research study over the Pebble porphyry copper-gold-molybdenum (Cu-Au-Mo) deposit in southwest Alaska. The Pebble deposit is extremely large and is almost entirely concealed by tundra, glacial deposits, and post-Cretaceous volcanic and volcaniclastic rocks. The deposit is presently being explored by Northern Dynasty Minerals, Ltd., and Anglo-American LLC. The USGS undertakes unbiased, broad-scale mineral resource assessments of government lands to provide Congress and citizens with information on national mineral endowment. Research on known deposits is also done to refine and better constrain methods and deposit models for the mineral resource assessments. The Pebble deposit was chosen for this study because it is concealed by surficial cover rocks, it is relatively undisturbed (except for exploration company drill holes), it is a large mineral system, and it is fairly well constrained at depth by the drill hole geology and geochemistry. The goals of the USGS study are (1) to determine whether the concealed deposit can be detected with surface samples, (2) to better understand the processes of metal migration from the deposit to the surface, and (3) to test and develop methods for assessing mineral resources in similar concealed terrains. This report presents analytical results for geochemical samples collected in 2007 from the Pebble deposit and surrounding environs. The analytical data are presented digitally both as an integrated Microsoft 2003 Access database and as Microsoft 2003 Excel files. The Pebble deposit is located in southwestern Alaska on state lands about 30 km (18 mi) northwest of the village of Iliamna and 320 km (200 mi) southwest of Anchorage (fig. 1). Elevations in the Pebble area range from 287 m (940 ft) at Frying Pan Lake just south of the deposit to 1146 m (3760 ft) on Kaskanak Mountain about 5 km (5 mi) to the west. The deposit is in an area of

  17. A Multipoint Method for Detecting Genotyping Errors and Mutations in Sibling-Pair Linkage Data

    OpenAIRE

    Douglas, Julie A.; Boehnke, Michael; Lange, Kenneth

    2000-01-01

    The identification of genes contributing to complex diseases and quantitative traits requires genetic data of high fidelity, because undetected errors and mutations can profoundly affect linkage information. The recent emphasis on the use of the sibling-pair design eliminates or decreases the likelihood of detection of genotyping errors and marker mutations through apparent Mendelian incompatibilities or close double recombinants. In this article, we describe a hidden Markov method for detect...

  18. A Method to Optimize Geometric Errors of Machine Tool based on SNR Quality Loss Function and Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Cai Ligang

    2017-01-01

    Full Text Available Instead of blindly improving the accuracy of a machine tool by increasing the precision of its key components in the production process, a method combining the SNR quality loss function with correlation analysis of machine tool geometric errors will be adopted to optimize the geometric errors of a five-axis machine tool. Firstly, the homogeneous transformation matrix method will be used to build the five-axis machine tool geometric error model. Secondly, the SNR quality loss function will be used for cost modeling. Then, the machine tool accuracy optimization objective function will be established based on the correlation analysis. Finally, ISIGHT combined with MATLAB will be applied to optimize each error. The results show that this method is reasonable and appropriate for relaxing the range of tolerance values, so as to reduce the manufacturing cost of machine tools.
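
    To make the homogeneous-transformation step concrete (an illustrative sketch only: the two-axis chain, the error magnitudes and the first-order small-angle form are assumptions, and the SNR cost model and ISIGHT/MATLAB optimization are not shown), small link errors can be chained as 4x4 matrices and compared with the ideal chain:

        import numpy as np

        def small_error_htm(dx, dy, dz, ea, eb, ec):
            """4x4 homogeneous transform for small linear (dx, dy, dz) and angular (ea, eb, ec)
            errors, using the first-order small-angle approximation."""
            return np.array([
                [1.0, -ec,  eb, dx],
                [ ec, 1.0, -ea, dy],
                [-eb,  ea, 1.0, dz],
                [0.0, 0.0, 0.0, 1.0],
            ])

        def translation(x, y, z):
            t = np.eye(4)
            t[:3, 3] = [x, y, z]
            return t

        # Hypothetical two-axis chain: a 200 mm X move and a 100 mm Z move, each with small errors.
        chain = (translation(200.0, 0.0, 0.0) @ small_error_htm(0.01, 0.002, -0.003, 5e-5, -2e-5, 1e-5)
                 @ translation(0.0, 0.0, 100.0) @ small_error_htm(-0.004, 0.001, 0.008, 1e-5, 3e-5, -1e-5))
        ideal = translation(200.0, 0.0, 0.0) @ translation(0.0, 0.0, 100.0)
        tool_tip = np.array([0.0, 0.0, 0.0, 1.0])
        print("tool-tip deviation [mm]:", (chain @ tool_tip - ideal @ tool_tip)[:3])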

  19. A Method and Support Tool for the Analysis of Human Error Hazards in Digital Devices

    International Nuclear Information System (INIS)

    Lee, Yong Hee; Kim, Seon Soo; Lee, Yong Hee

    2012-01-01

    In recent years, many nuclear power plants have adopted modern digital I and C technologies since they are expected to significantly improve their performance and safety. Modern digital technologies were expected to significantly improve both the economic efficiency and safety of nuclear power plants. However, the introduction of an advanced main control room (MCR) is accompanied by many changes in form and features introduced by the new digital devices. Many user-friendly displays and new features in digital devices are not enough to prevent human errors in nuclear power plants (NPPs). It is therefore an urgent matter to find the human error potentials due to digital devices, and their detailed mechanisms, so that they can be considered during the design of digital devices and their interfaces. The characteristics of digital technologies and devices may give many opportunities for interface management, and can be integrated into a compact single workstation in an advanced MCR, such that workers can operate the plant with minimum burden under any operating condition. However, these devices may introduce new types of human errors, and thus we need a means to evaluate and prevent such errors, especially within digital devices for NPPs. This research suggests a new method named HEA-BIS (Human Error Analysis based on Interaction Segment) to confirm and detect human errors associated with digital devices. This method can be facilitated by support tools and used to ensure safety when applying digital devices in NPPs.

  20. Randomized clinical trials in dentistry: Risks of bias, risks of random errors, reporting quality, and methodologic quality over the years 1955-2013.

    Directory of Open Access Journals (Sweden)

    Humam Saltaji

    Full Text Available To examine the risks of bias, risks of random errors, reporting quality, and methodological quality of randomized clinical trials of oral health interventions and the development of these aspects over time. We included 540 randomized clinical trials from 64 selected systematic reviews. We extracted, in duplicate, details from each of the selected randomized clinical trials with respect to publication and trial characteristics, reporting and methodologic characteristics, and Cochrane risk of bias domains. We analyzed data using logistic regression and Chi-square statistics. Sequence generation was assessed to be inadequate (at unclear or high risk of bias) in 68% (n = 367) of the trials, while allocation concealment was inadequate in the majority of trials (n = 464; 85.9%). Blinding of participants and blinding of the outcome assessment were judged to be inadequate in 28.5% (n = 154) and 40.5% (n = 219) of the trials, respectively. A sample size calculation before the initiation of the study was not performed/reported in 79.1% (n = 427) of the trials, while the sample size was assessed as adequate in only 17.6% (n = 95) of the trials. Two thirds of the trials were not described as double blinded (n = 358; 66.3%), while the method of blinding was appropriate in 53% (n = 286) of the trials. We identified a significant decrease over time (1955-2013) in the proportion of trials assessed as having inadequately addressed methodological quality items (P < 0.05) in 30 out of the 40 quality criteria, or as being inadequate (at high or unclear risk of bias) in five domains of the Cochrane risk of bias tool: sequence generation, allocation concealment, incomplete outcome data, other sources of bias, and overall risk of bias. The risks of bias, risks of random errors, reporting quality, and methodological quality of randomized clinical trials of oral health interventions have improved over time; however, further efforts that contribute to the development of more stringent

  1. Detecting concealed information from groups using a dynamic questioning approach: simultaneous skin conductance measurement and immediate feedback

    NARCIS (Netherlands)

    Meijer, E.H.; Bente, G.; Ben-Shakhar, G.; Schumacher, A.

    2013-01-01

    Lie detection procedures typically aim at determining the guilt or innocence of a single suspect. The Concealed Information Test (CIT), for example, has been shown to be highly successful in detecting the presence or absence of crime-related information in a suspect's memory. Many of today's

  2. A highly accurate finite-difference method with minimum dispersion error for solving the Helmholtz equation

    KAUST Repository

    Wu, Zedong

    2018-04-05

    Numerical simulation of the acoustic wave equation in either isotropic or anisotropic media is crucial to seismic modeling, imaging and inversion. Actually, it represents the core computation cost of these highly advanced seismic processing methods. However, the conventional finite-difference method suffers from severe numerical dispersion errors and S-wave artifacts when solving the acoustic wave equation for anisotropic media. We propose a method to obtain the finite-difference coefficients by comparing its numerical dispersion with the exact form. We find the optimal finite difference coefficients that share the dispersion characteristics of the exact equation with minimal dispersion error. The method is extended to solve the acoustic wave equation in transversely isotropic (TI) media without S-wave artifacts. Numerical examples show that the method is highly accurate and efficient.
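
    The underlying idea can be written compactly; the expressions below are a generic statement of dispersion-matched finite-difference coefficients for a one-dimensional second derivative, not the paper's anisotropic formulation:

        \frac{\partial^2 u}{\partial x^2}\Big|_{x_i} \approx \frac{1}{h^2}\Big( c_0 u_i + \sum_{n=1}^{N} c_n (u_{i+n} + u_{i-n}) \Big),
        \qquad
        -k_{\mathrm{num}}^2 h^2 = c_0 + 2 \sum_{n=1}^{N} c_n \cos(n k h),

        \{c_n\} = \arg\min_{c}\; \max_{0 \le kh \le (kh)_{\max}} \Big| c_0 + 2 \sum_{n=1}^{N} c_n \cos(n k h) + (kh)^2 \Big|,

    so that the numerical wavenumber k_num tracks the exact k over the band of propagating wavenumbers instead of only near kh = 0.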

  3. Nonlinear effect of the structured light profilometry in the phase-shifting method and error correction

    International Nuclear Information System (INIS)

    Zhang Wan-Zhen; Chen Zhe-Bo; Xia Bin-Feng; Lin Bin; Cao Xiang-Qun

    2014-01-01

    Digital structured light (SL) profilometry is increasingly used in three-dimensional (3D) measurement technology. However, the nonlinearity of the off-the-shelf projectors and cameras seriously reduces the measurement accuracy. In this paper, first, we review the nonlinear effects of the projector–camera system in the phase-shifting structured light depth measurement method. We show that high order harmonic wave components lead to phase error in the phase-shifting method. Then a practical method based on frequency domain filtering is proposed for nonlinear error reduction. By using this method, the nonlinear calibration of the SL system is not required. Moreover, both the nonlinear effects of the projector and the camera can be effectively reduced. The simulations and experiments have verified our nonlinear correction method. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)

  4. Using the CAIR-method to derive cognitive error mechanisms

    International Nuclear Information System (INIS)

    Straeter, Oliver

    2000-01-01

    This paper describes an application of the second-generation method CAHR (Connectionism Assessment of Human Reliability; Straeter, 1997) that was developed at the Technical University of Munich and the GRS in the years from 1992 to 1998. The method makes it possible to combine event analysis and assessment and therefore to base human reliability assessment on past experience. The term 'connectionism' was coined by modeling human cognition on the basis of artificial intelligence models; connectionism describes methods that represent complex interrelations of various parameters (known from pattern recognition, expert systems, and modeling of cognition). The paper will demonstrate the application of the method to communication aspects in NPPs (Nuclear Power Plants) and will give some outlooks for further developments. The application of the method to the problem of communication failures is explained with examples: initial work on communication within the low-power and shutdown study for Boiling Water Reactors (BWRs), investigation of communication failures, the importance of procedural and verbal communication for different error types, and causes of failures in procedural and verbal communication. (S.Y.)

  5. Investigating the error sources of the online state of charge estimation methods for lithium-ion batteries in electric vehicles

    Science.gov (United States)

    Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu

    2018-02-01

    State of charge (SOC) estimation is generally acknowledged as one of the most important functions in battery management systems for lithium-ion batteries in new energy vehicles. Though every effort is made for various online SOC estimation methods to reliably increase the estimation accuracy as much as possible within the limited on-chip resources, little literature discusses the error sources for those SOC estimation methods. This paper firstly reviews the commonly studied SOC estimation methods from a conventional classification. A novel perspective focusing on the error analysis of the SOC estimation methods is proposed. SOC estimation methods are analyzed from the views of the measured values, models, algorithms and state parameters. Subsequently, the error flow charts are proposed to analyze the error sources from the signal measurement to the models and algorithms for the widely used online SOC estimation methods in new energy vehicles. Finally, with the consideration of the working conditions, choosing more reliable and applicable SOC estimation methods is discussed, and the future development of the promising online SOC estimation methods is suggested.

  6. A human error taxonomy and its application to an automatic method accident analysis

    International Nuclear Information System (INIS)

    Matthews, R.H.; Winter, P.W.

    1983-01-01

    Commentary is provided on the quantification aspects of human factors analysis in risk assessment. Methods for quantifying human error in a plant environment are discussed and their application to system quantification explored. Such a programme entails consideration of the data base and a taxonomy of factors contributing to human error. A multi-levelled approach to system quantification is proposed, each level being treated differently drawing on the advantages of different techniques within the fault/event tree framework. Management, as controller of organization, planning and procedure, is assigned a dominant role. (author)

  7. Are There Limits to Collectivism? Culture and Children's Reasoning About Lying to Conceal a Group Transgression.

    Science.gov (United States)

    Sweet, Monica A; Heyman, Gail D; Fu, Genyue; Lee, Kang

    2010-07-01

    This study explored the effects of collectivism on lying to conceal a group transgression. Seven-, 9-, and 11-year-old US and Chinese children (N = 374) were asked to evaluate stories in which protagonists either lied or told the truth about their group's transgression and were then asked about either the protagonist's motivations or justification for their own evaluations. Previous research suggests that children in collectivist societies such as China find lying for one's group to be more acceptable than do children from individualistic societies such as the United States. The current study provides evidence that this is not always the case: Chinese children in this study viewed lies told to conceal a group's transgressions less favourably than did US children. An examination of children's reasoning about protagonists' motivations for lying indicated that children in both countries focused on an impact to self when discussing motivations for protagonists to lie for their group. Overall, results suggest that children living in collectivist societies do not always focus on the needs of the group.

  8. Using snowball sampling method with nurses to understand medication administration errors.

    Science.gov (United States)

    Sheu, Shuh-Jen; Wei, Ien-Lan; Chen, Ching-Huey; Yu, Shu; Tang, Fu-In

    2009-02-01

    We aimed to encourage nurses to release information about drug administration errors to increase understanding of error-related circumstances and to identify high-alert situations. Drug administration errors represent the majority of medication errors, but errors are underreported. Effective ways are lacking to encourage nurses to actively report errors. Snowball sampling was conducted to recruit participants. A semi-structured questionnaire was used to record types of error, hospital and nurse backgrounds, patient consequences, error discovery mechanisms and reporting rates. Eighty-five nurses participated, reporting 328 administration errors (259 actual, 69 near misses). Most errors occurred in medical surgical wards of teaching hospitals, during day shifts, committed by nurses working fewer than two years. Leading errors were wrong drugs and doses, each accounting for about one-third of total errors. Among 259 actual errors, 83.8% resulted in no adverse effects; among remaining 16.2%, 6.6% had mild consequences and 9.6% had serious consequences (severe reaction, coma, death). Actual errors and near misses were discovered mainly through double-check procedures by colleagues and nurses responsible for errors; reporting rates were 62.5% (162/259) vs. 50.7% (35/69) and only 3.5% (9/259) vs. 0% (0/69) were disclosed to patients and families. High-alert situations included administration of 15% KCl, insulin and Pitocin; using intravenous pumps; and implementation of cardiopulmonary resuscitation (CPR). Snowball sampling proved to be an effective way to encourage nurses to release details concerning medication errors. Using empirical data, we identified high-alert situations. Strategies for reducing drug administration errors by nurses are suggested. Survey results suggest that nurses should double check medication administration in known high-alert situations. Nursing management can use snowball sampling to gather error details from nurses in a non

  9. Error analysis in Fourier methods for option pricing for exponential Lévy processes

    KAUST Repository

    Crocce, Fabian; Häppölä, Juho; Keissling, Jonas; Tempone, Raul

    2015-01-01

    We derive an error bound for utilising the discrete Fourier transform method for solving Partial Integro-Differential Equations (PIDE) that describe european option prices for exponential Lévy driven asset prices. We give sufficient conditions

  10. Control of Human Error and comparison Level risk after correction action With the SHERPA Method in a control Room of petrochemical industry

    Directory of Open Access Journals (Sweden)

    A. Zakerian

    2011-12-01

    Full Text Available Background and aims: Today, in many jobs such as the nuclear, military and chemical industries, human errors may result in a disaster. Accidents in different places of the world emphasize this subject; examples include the Chernobyl disaster (1986), the Three Mile Island accident (1979) and the Flixborough explosion (1974). Human error identification, especially in important and intricate systems, is therefore necessary and unavoidable for predicting control methods. Methods: This research is a case study performed in the Zagross Methanol Company in Asalouye (South Pars). The walking-talking through method with process experts and control room operators, together with inspection of technical documents, was used for collecting the required information and completing the Systematic Human Error Reduction and Prediction Approach (SHERPA) worksheets. Results: Analysis of the SHERPA worksheets indicated that 71.25% of errors were unacceptable, 26.75% undesirable, 2% acceptable (with change) and 0% acceptable; after corrective actions, the forecast risk levels were 0% unacceptable, 4.35% undesirable, 58.55% acceptable (with change) and 37.1% acceptable. Conclusion: These results show that this method is applicable and useful in different industries, especially chemical industries, for identifying human errors that may lead to accidents.

  11. Errors in the estimation method for the rejection of vibrations in adaptive optics systems

    Science.gov (United States)

    Kania, Dariusz

    2017-06-01

    In recent years the problem of the impact of mechanical vibrations in adaptive optics (AO) systems has received renewed attention. These signals are damped sinusoidal signals and have a deleterious effect on the system. One of the software solutions to reject the vibrations is an adaptive method called AVC (Adaptive Vibration Cancellation), where the procedure has three steps: estimation of the perturbation parameters, estimation of the frequency response of the plant, and updating of the reference signal to reject/minimize the vibration. In the first step a very important problem is the estimation method. A very accurate and fast (below 10 ms) estimation method of these three parameters has been presented in several publications in recent years. The method is based on spectrum interpolation and MSD time windows and can be used to estimate multifrequency signals. In this paper the estimation method is used in the AVC method to increase the system performance. There are several parameters that affect the accuracy of the obtained results, e.g. CiR - number of signal periods in a measurement window, N - number of samples in the FFT procedure, H - time window order, SNR, b - number of ADC bits, γ - damping ratio of the tested signal. Systematic errors increase when N, CiR, H decrease and when γ increases. The value of the systematic error is approximately 10^-10 Hz/Hz for N = 2048 and CiR = 0.1. This paper presents equations that can be used to estimate the maximum systematic errors for given values of H, CiR and N before the start of the estimation process.
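
    A toy version of the first estimation step may help fix ideas; it is not the MSD-window interpolated estimator of the paper (the Hann window and the parabolic interpolation on the log-magnitude spectrum below are stand-ins, and the signal parameters are made up):

        import numpy as np

        def estimate_frequency(signal, fs):
            """Estimate the dominant frequency from a windowed FFT with parabolic
            interpolation of the log-magnitude spectrum around its peak."""
            n = len(signal)
            windowed = signal * np.hanning(n)
            mag = np.abs(np.fft.rfft(windowed))
            k = int(np.argmax(mag[1:-1])) + 1          # peak bin (avoid DC and Nyquist edges)
            a, b, c = np.log(mag[k - 1]), np.log(mag[k]), np.log(mag[k + 1])
            delta = 0.5 * (a - c) / (a - 2 * b + c)    # fractional-bin correction
            return (k + delta) * fs / n

        # Damped sinusoid at 47.3 Hz sampled at 2 kHz (illustrative values)
        fs, f0, gamma = 2000.0, 47.3, 0.01
        t = np.arange(4096) / fs
        x = np.exp(-gamma * 2 * np.pi * f0 * t) * np.sin(2 * np.pi * f0 * t)
        print(f"estimated frequency: {estimate_frequency(x, fs):.3f} Hz")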

  12. Blood pressure shifts resulting from a concealed arteriovenous fistula associated with an iliac aneurysm: a case report.

    Science.gov (United States)

    Doi, Shintaro; Motoyama, Yoshiaki; Ito, Hiromi

    2016-01-01

    A solitary iliac aneurysm (SIA) is less common than an abdominal aortic aneurysm. The aneurysm is located in the deep pelvis and is diagnosed when it reaches a large size with symptoms of compression around adjacent structures and organs or when it ruptures. A definite diagnosis of an arteriovenous fistula (AVF) associated with a SIA is difficult preoperatively because there might not be enough symptoms and time for diagnosis. Here, we present a patient with asymptomatic rupture of SIA into the common iliac vein with characteristic blood pressure shifts. A 41-year-old man with a huge SIA underwent aortobifemoral graft replacement. Preoperatively, his blood pressure showed characteristic shifts for one or two heartbeats out of five beats, indicating that an AVF was present and that the shunt was about to have a high flow. During surgery, an AVF associated with the SIA was found to be concealed owing to compression from the huge iliac artery aneurysm, and the shunt showed a high flow, resulting in shock during the surgery. No complications were noted after aortobifemoral graft replacement. Postoperatively, we noted an enhanced paravertebral vein on computed tomography (CT), which indicated the presence of an AVF. Definite diagnosis of an AVF offers advantages in surgical and anesthetic management. We emphasize that a large SIA can push the iliac vein and occlude an AVF laceration, concealing the enhancement of the veins in the arterial phase on CT. Blood pressure shifts might predict the existence of a concealed AVF that has a large shunt. Even if the vena cava and the iliac veins are not enhanced on CT, anesthesiologists should carefully determine whether their distal branches are enhanced.

  13. The effects of sweep numbers per average and protocol type on the accuracy of the p300-based concealed information test.

    Science.gov (United States)

    Dietrich, Ariana B; Hu, Xiaoqing; Rosenfeld, J Peter

    2014-03-01

    In the first of two experiments, we compared the accuracy of the P300 concealed information test protocol as a function of numbers of trials experienced by subjects and ERP averages analyzed by investigators. Contrary to Farwell et al. (Cogn Neurodyn 6(2):115-154, 2012), we found no evidence that 100 trial based averages are more accurate than 66 or 33 trial based averages (all numbers led to accuracies of 84-94 %). There was actually a trend favoring the lowest trial numbers. The second study compared numbers of irrelevant stimuli recalled and recognized in the 3-stimulus protocol versus the complex trial protocol (Rosenfeld in Memory detection: theory and application of the concealed information test, Cambridge University Press, New York, pp 63-89, 2011). Again, in contrast to expectations from Farwell et al. (Cogn Neurodyn 6(2):115-154, 2012), there were no differences between protocols, although there were more irrelevant stimuli recognized than recalled, and irrelevant 4-digit number group stimuli were neither recalled nor recognized as well as irrelevant city name stimuli. We therefore conclude that stimulus processing in the P300-based complex trial protocol-with no more than 33 sweep averages-is adequate to allow accurate detection of concealed information.

  14. [EFFECTIVENESS OF ADVANCED SKIN FLAP AND V-SHAPED VENTRAL INCISION ALONG THE ROOT OF PENILE SHAFT FOR CONCEALED PENIS].

    Science.gov (United States)

    Lin, Junshan; Li, Dumiao; Zhang, Jianxing; Wu, Qiang; Xu, Yali; Lin, Li

    2015-09-01

    To investigate effectiveness of advanced skin flap and V-shaped ventral incision along the root of penile shaft for concealed penis in children. Between July 2007 and January 2015, 121 boys with concealed penis were treated with advanced skin flap and V-shaped ventral incision along the root of penile shaft. The age varied from 18 months to 13 years (mean, 7.2 years). Repair was based on a vertical incision in median raphe, complete degloving of penis and tacking its base to the dermis of the skin. Advanced skin flap and a V-shaped ventral incision along the root of penile shaft were used to cover the penile shaft. The operation time ranged from 60 to 100 minutes (mean, 75 minutes). Disruption of wound occurred in 1 case, and was cured after dressing change; and primary healing of incision was obtained in the others. The follow-up period ranged from 3 months to 7 years (median, 24 months). All patients achieved good to excellent cosmetic results with a low incidence of complications. The results were satisfactory in exposure of penis and prepuce appearance. No obvious scar was observed. The penis had similar appearance to that after prepuce circumcision. A combination of advanced skin flap and V-shaped ventral incision along the root of penile shaft is a simple, safe, and effective procedure for concealed penis with a similar appearance result to the prepuce circumcision.

  15. Crustal concealing of small-scale core-field secular variation

    DEFF Research Database (Denmark)

    Hulot, G.; Olsen, Nils; Thebault, E.

    2009-01-01

    ... of internal origin happen to be detectable now in spherical harmonic degrees up to, perhaps, 16. All of these changes are usually attributed to changes in the core field itself, the secular variation, on the ground that the lithospheric magnetization cannot produce such signals. It has, however, been pointed out, on empirical grounds, that temporal changes in the field of internal origin produced by the induced part of the lithospheric magnetization could dominate the core field signal beyond degree 22. This short note revisits this issue by taking advantage of our improved knowledge of the small ... cause of the observed changes in the field of internal origin up to some critical degree, N-C, is indeed likely to be the secular variation of the core field, but that the signal produced by the time-varying lithospheric field is bound to dominate and conceal the time-varying core signal beyond...

  16. Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error

    Science.gov (United States)

    Miller, Austin

    In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.

  17. Performance of bias-correction methods for exposure measurement error using repeated measurements with and without missing data.

    Science.gov (United States)

    Batistatou, Evridiki; McNamee, Roseanne

    2012-12-10

    It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. For expensive replicates, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study, because in the second stage, measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large, the regression calibration and the simulation extrapolation methods. For the 2S design, either the problem of 'missing' data was ignored or the 'missing' data were imputed using multiple imputations. Both in 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended in contrast to our implementation of this method; the 'problematic' implementation of regression calibration was, however, substantially improved with the use of multiple imputations. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. Both in 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.
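
    As a reminder of what regression calibration does in the simplest single-stage setting with k replicates per subject (a schematic under a classical-error, linear-outcome assumption; it is neither the Stata implementation nor the authors' implementation compared in the paper, and the simulated numbers are illustrative):

        import numpy as np

        rng = np.random.default_rng(1)
        n, k, beta = 2000, 2, 0.5                       # subjects, replicates, true effect
        x = rng.normal(0.0, 1.0, n)                     # true exposure (unobserved)
        w = x[:, None] + rng.normal(0.0, 1.0, (n, k))   # replicates with classical error
        y = beta * x + rng.normal(0.0, 1.0, n)          # continuous outcome

        w_bar = w.mean(axis=1)
        sigma_u2 = np.mean(np.var(w, axis=1, ddof=1))           # within-subject (error) variance
        sigma_x2 = np.var(w_bar, ddof=1) - sigma_u2 / k         # between-subject (true) variance
        reliability = sigma_x2 / (sigma_x2 + sigma_u2 / k)      # attenuation factor for w_bar

        beta_naive = np.cov(w_bar, y)[0, 1] / np.var(w_bar, ddof=1)
        beta_rc = beta_naive / reliability                      # regression-calibration correction
        print(f"naive: {beta_naive:.3f}  corrected: {beta_rc:.3f}  true: {beta}")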

  18. Local and accumulated truncation errors in a class of perturbative numerical methods

    International Nuclear Information System (INIS)

    Adam, G.; Adam, S.; Corciovei, A.

    1980-01-01

    The approach to the solution of the radial Schroedinger equation using piecewise perturbative theory with a step function reference potential leads to a class of powerful numerical methods, conveniently abridged as SF-PNM(K), where K denotes the order at which the perturbation series was truncated. In the present paper rigorous results are given for the local truncation errors and bounds are derived for the accumulated truncation errors associated with SF-PNM(K), K = 0, 1, 2. They allow us to establish the smoothness conditions which have to be fulfilled by the potential in order to ensure a safe use of SF-PNM(K), and to understand the experimentally observed behaviour of the numerical results with the step size h. (author)

  19. The Adaptive-Clustering and Error-Correction Method for Forecasting Cyanobacteria Blooms in Lakes and Reservoirs

    Directory of Open Access Journals (Sweden)

    Xiao-zhe Bai

    2017-01-01

    Full Text Available Globally, cyanobacteria blooms frequently occur, and effective prediction of cyanobacteria blooms in lakes and reservoirs could constitute an essential proactive strategy for water-resource protection. However, cyanobacteria blooms are very complicated because of the internal stochastic nature of the system evolution and the external uncertainty of the observation data. In this study, an adaptive-clustering algorithm is introduced to obtain some typical operating intervals. In addition, the number of nearest neighbors used for modeling was optimized by particle swarm optimization. Finally, a fuzzy linear regression method based on error-correction was used to revise the model dynamically near the operating point. We found that the combined method can characterize the evolutionary track of cyanobacteria blooms in lakes and reservoirs. The model constructed in this paper is compared to other cyanobacteria-bloom forecasting methods (e.g., phase space reconstruction and traditional-clustering linear regression), and then the average relative error and average absolute error are used to compare the accuracies of these models. The results suggest that the proposed model is superior. As such, the newly developed approach achieves more precise predictions, which can be used to prevent the further deterioration of the water environment.

  20. Self-Concealment, Social Network Sites Usage, Social Appearance Anxiety, Loneliness of High School Students: A Model Testing

    Science.gov (United States)

    Dogan, Ugur; Çolak, Tugba Seda

    2016-01-01

    This study tested a model to explain social network sites (SNS) usage with structural equation modeling (SEM). Using SEM on a sample of 475 high school students (35% male, 65% female), the model investigated the relationship between self-concealment, social appearance anxiety, and loneliness and the usage of SNS such as Twitter and Facebook.…

  1. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
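
    For intuition only, under the classical binormal, equal-variance model with nondifferential measurement error of variance \sigma_u^2 (i.e. exactly the normality assumption that the proposed method is designed to avoid), the attenuation of the AUC and its inversion can be written as:

        \mathrm{AUC}_{\mathrm{obs}} = \Phi\!\left( \frac{\mu_1 - \mu_0}{\sqrt{2(\sigma_x^2 + \sigma_u^2)}} \right),
        \qquad
        \mathrm{AUC}_{\mathrm{true}} = \Phi\!\left( \frac{\Phi^{-1}(\mathrm{AUC}_{\mathrm{obs}})}{\sqrt{\lambda}} \right),
        \qquad
        \lambda = \frac{\sigma_x^2}{\sigma_x^2 + \sigma_u^2},

    where \mu_1 - \mu_0 is the case-control difference in the mean of the true biomarker, \sigma_x^2 its common variance, and \lambda the usual reliability (attenuation) coefficient.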

  2. Estimation methods with ordered exposure subject to measurement error and missingness in semi-ecological design

    Directory of Open Access Journals (Sweden)

    Kim Hyang-Mi

    2012-09-01

    Full Text Available Abstract Background In epidemiological studies, it is often not possible to measure accurately exposures of participants even if their response variable can be measured without error. When there are several groups of subjects, occupational epidemiologists employ a group-based strategy (GBS) for exposure assessment to reduce bias due to measurement errors: individuals of a group/job within the study sample are assigned commonly to the sample mean of exposure measurements from their group in evaluating the effect of exposure on the response. Therefore, exposure is estimated on an ecological level while health outcomes are ascertained for each subject. Such a study design leads to negligible bias in risk estimates when group means are estimated from ‘large’ samples. However, in many cases, only a small number of observations are available to estimate the group means, and this causes bias in the observed exposure-disease association. Also, the analysis in a semi-ecological design may involve exposure data with the majority missing and the rest observed with measurement errors and complete response data collected with ascertainment. Methods In workplaces groups/jobs are naturally ordered and this could be incorporated in the estimation procedure by constrained estimation methods together with the expectation and maximization (EM) algorithms for regression models having measurement error and missing values. Four methods were compared by a simulation study: naive complete-case analysis, GBS, the constrained GBS (CGBS), and the constrained expectation and maximization (CEM). We illustrated the methods in the analysis of decline in lung function due to exposures to carbon black. Results Naive and GBS approaches were shown to be inadequate when the number of exposure measurements is too small to accurately estimate group means. The CEM method appears to be best among them when within each exposure group at least a ’moderate’ number of individuals have their
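
    The group-based strategy itself is simple to state (a schematic of GBS only; the constrained and EM extensions of the paper are not shown, and the data frame below is made up):

        import numpy as np
        import pandas as pd

        # Hypothetical semi-ecological data: noisy and partly missing personal exposure
        # measurements by job group, with an individual health outcome y for every worker.
        df = pd.DataFrame({
            "job":      ["packer"] * 4 + ["mixer"] * 4 + ["loader"] * 4,
            "exposure": [1.2, 0.8, np.nan, 1.1,  2.4, 2.9, 2.6, np.nan,  4.1, 3.8, np.nan, 4.4],
            "y":        [0.3, 0.2, 0.4, 0.1,     0.9, 1.1, 0.8, 1.0,     1.6, 1.5, 1.8, 1.7],
        })

        # GBS: every worker is assigned the mean of the available measurements in his/her group.
        df["exposure_gbs"] = df.groupby("job")["exposure"].transform("mean")

        # Individual-level regression of the outcome on the group-assigned exposure.
        slope, intercept = np.polyfit(df["exposure_gbs"], df["y"], 1)
        print(f"estimated exposure effect (GBS): {slope:.3f}")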

  3. Analysis of a HP-refinement method for solving the neutron transport equation using two error estimators

    International Nuclear Information System (INIS)

    Fournier, D.; Le Tellier, R.; Suteau, C.; Herbin, R.

    2011-01-01

    The solution of the time-independent neutron transport equation in a deterministic way invariably consists in the successive discretization of the three variables: energy, angle and space. In the SNATCH solver used in this study, the energy and the angle are respectively discretized with a multigroup approach and the discrete ordinate method. A set of spatial coupled transport equations is obtained and solved using the Discontinuous Galerkin Finite Element Method (DGFEM). Within this method, the spatial domain is decomposed into elements and the solution is approximated by a hierarchical polynomial basis in each one. This approach is time and memory consuming when the mesh becomes fine or the basis order high. To improve the computational time and the memory footprint, adaptive algorithms are proposed. These algorithms are based on an error estimation in each cell. If the error is important in a given region, the mesh has to be refined (h−refinement) or the polynomial basis order increased (p−refinement). This paper is related to the choice between the two types of refinement. Two ways to estimate the error are compared on different benchmarks. Analyzing the differences, a hp−refinement method is proposed and tested. (author)
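
    The decision logic at the heart of such an adaptive loop can be caricatured as follows (a generic hp-marking sketch; the two error estimators, the smoothness indicator and the thresholds below are illustrative assumptions, not the SNATCH implementation):

        from dataclasses import dataclass

        @dataclass
        class Cell:
            name: str
            error: float        # local a posteriori error estimate
            smoothness: float   # e.g. decay rate of the highest-order hierarchical coefficients

        def mark_cells(cells, refine_fraction=0.3, smooth_threshold=1.0):
            """Mark the worst cells and pick h- or p-refinement per cell: smooth local
            solutions benefit from raising the polynomial order (p-refinement), rough
            ones from splitting the cell (h-refinement)."""
            worst = sorted(cells, key=lambda c: c.error, reverse=True)
            n_marked = max(1, int(refine_fraction * len(cells)))
            decisions = {}
            for cell in worst[:n_marked]:
                decisions[cell.name] = "p-refine" if cell.smoothness > smooth_threshold else "h-refine"
            return decisions

        cells = [Cell("K1", 3e-2, 2.4), Cell("K2", 9e-3, 0.4), Cell("K3", 4e-2, 0.6), Cell("K4", 1e-3, 1.9)]
        print(mark_cells(cells))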

  4. Comparative evaluation of three cognitive error analysis methods through an application to accident management tasks in NPPs

    International Nuclear Information System (INIS)

    Jung, Won Dea; Kim, Jae Whan; Ha, Jae Joo; Yoon, Wan C.

    1999-01-01

    This study was performed to comparatively evaluate selected Human Reliability Analysis (HRA) methods which mainly focus on cognitive error analysis, and to derive the requirement of a new human error analysis (HEA) framework for Accident Management (AM) in nuclear power plants(NPPs). In order to achieve this goal, we carried out a case study of human error analysis on an AM task in NPPs. In the study we evaluated three cognitive HEA methods, HRMS, CREAM and PHECA, which were selected through the review of the currently available seven cognitive HEA methods. The task of reactor cavity flooding was chosen for the application study as one of typical tasks of AM in NPPs. From the study, we derived seven requirement items for a new HEA method of AM in NPPs. We could also evaluate the applicability of three cognitive HEA methods to AM tasks. CREAM is considered to be more appropriate than others for the analysis of AM tasks. But, PHECA is regarded less appropriate for the predictive HEA technique as well as for the analysis of AM tasks. In addition to these, the advantages and disadvantages of each method are described. (author)

  5. Model-observer similarity, error modeling and social learning in rhesus macaques.

    Directory of Open Access Journals (Sweden)

    Elisabetta Monfardini

    Full Text Available Monkeys readily learn to discriminate between rewarded and unrewarded items or actions by observing their conspecifics. However, they do not systematically learn from humans. Understanding what makes human-to-monkey transmission of knowledge work or fail could help identify mediators and moderators of social learning that operate regardless of language or culture, and transcend inter-species differences. Do monkeys fail to learn when human models show a behavior too dissimilar from the animals' own, or when they show a faultless performance devoid of error? To address this question, six rhesus macaques trained to find which object within a pair concealed a food reward were successively tested with three models: a familiar conspecific, a 'stimulus-enhancing' human actively drawing the animal's attention to one object of the pair without actually performing the task, and a 'monkey-like' human performing the task in the same way as the monkey model did. Reward was manipulated to ensure that all models showed equal proportions of errors and successes. The 'monkey-like' human model improved the animals' subsequent object discrimination learning as much as a conspecific did, whereas the 'stimulus-enhancing' human model tended on the contrary to retard learning. Modeling errors rather than successes optimized learning from the monkey and 'monkey-like' models, while exacerbating the adverse effect of the 'stimulus-enhancing' model. These findings identify error modeling as a moderator of social learning in monkeys that amplifies the models' influence, whether beneficial or detrimental. By contrast, model-observer similarity in behavior emerged as a mediator of social learning, that is, a prerequisite for a model to work in the first place. The latter finding suggests that, as preverbal infants, macaques need to perceive the model as 'like-me' and that, once this condition is fulfilled, any agent can become an effective model.

  6. Medication Errors in a Swiss Cardiovascular Surgery Department: A Cross-Sectional Study Based on a Novel Medication Error Report Method

    Directory of Open Access Journals (Sweden)

    Kaspar Küng

    2013-01-01

    Full Text Available The purpose of this study was (1) to determine the frequency and type of medication errors (MEs), (2) to assess the number of MEs prevented by registered nurses, (3) to assess the consequences of ME for patients, and (4) to compare the number of MEs reported by a newly developed medication error self-reporting tool to the number reported by the traditional incident reporting system. We conducted a cross-sectional study on ME in the Cardiovascular Surgery Department of Bern University Hospital in Switzerland. Eligible registered nurses involved in the medication process were included. Data on ME were collected using an investigator-developed medication error self-reporting tool (MESRT) that asked about the occurrence and characteristics of ME. Registered nurses were instructed to complete a MESRT at the end of each shift even if there was no ME. All MESRTs were completed anonymously. During the one-month study period, a total of 987 MESRTs were returned. Of the 987 completed MESRTs, 288 (29%) indicated that there had been an ME. Registered nurses reported preventing 49 (5%) MEs. Overall, eight (2.8%) MEs had patient consequences. The high response rate suggests that this new method may be a very effective approach to detect, report, and describe ME in hospitals.

  7. Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.

    Science.gov (United States)

    Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae

    2016-01-01

    Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported. However, none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when an open cutting guide was used and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduced the cutting errors in UKA. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces and the second-cut cutting surfaces using the navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and the cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice reduced the cutting errors in the coronal plane significantly. In conclusion, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. HA03 as an Iranian Candidate Concealed Antigen for Vaccination against Hyalomma anatolicum anatolicum: Comparative Structural and In silico Studies

    Directory of Open Access Journals (Sweden)

    Mohammadi, A.

    2013-12-01

    Full Text Available In the last decades researchers have focused on developing a vaccine against ticks based on protective antigens. Recombinant vaccines based on a concealed antigen from Boophilus microplus have been developed in Australia and Cuba under the names TICKGARD and GAVAC (De La Fuente and Kocan, 2006). Further studies on this antigen have shown some extent of protection against other species (De Vos et al., 2001). In Iran the most important species is Hyalomma anatolicum, and limited information about its control is available. This paper reports a structural and polymorphism analysis of HA03, an Iranian candidate concealed antigen of H. a. anatolicum deposited in GenBank (Aghaeipour et al., GQ228820). The comparison between this antigen and other midgut concealed antigens whose characteristics are available in GenBank showed a high rate of similarity between them. The HA03 amino acid sequence had a homology of around 89%, 64% and 56% with HA98, BM86 and BM95, respectively. The potential MHC class I and II binding regions indicated a considerable variation relative to the BM86 antigen, calling into question its efficiency against Iranian H. a. anatolicum. In addition, the predicted hydrophobicity profile, the similarity in N-glycosylation, the large amount of cysteine and the seven EGF-like regions present in the protein structure revealed the value of HA03 as a new protective antigen and the necessity of developing an HA03-based recombinant vaccine, the BM86 homolog of H. a. anatolicum.

  9. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

    1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, depending on the magnitude of x (the largest-argument form applying for x .GE. 4.0). In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function by using the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x.
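
    A well-known rational approximation of this kind (Abramowitz and Stegun 7.1.26, shown here only as an illustration; it is not the specific three-region fit used by the ERF/ERFC routines) can be written in a few lines:

        import math

        def erf_approx(x):
            """Abramowitz & Stegun 7.1.26: absolute error below 1.5e-7 for all real x."""
            sign = 1.0 if x >= 0.0 else -1.0
            x = abs(x)
            t = 1.0 / (1.0 + 0.3275911 * x)
            poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741
                       + t * (-1.453152027 + t * 1.061405429))))
            return sign * (1.0 - poly * math.exp(-x * x))

        def erfc_approx(x):
            # As the record warns, forming erfc as 1 - erf loses significance for large x;
            # a production routine computes erfc directly in that region.
            return 1.0 - erf_approx(x)

        for v in (-1.0, 0.0, 0.5, 2.0):
            print(v, erf_approx(v), math.erf(v))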

  10. Report from LHC MDs 1391 and 1483: Tests of new methods for study of nonlinear errors in the LHC experimental insertions

    CERN Document Server

    Maclean, Ewen Hamish; Fuchsberger, Kajetan; Giovannozzi, Massimo; Persson, Tobias Hakan Bjorn; Tomas Garcia, Rogelio; CERN. Geneva. ATS Department

    2017-01-01

    Nonlinear errors in experimental insertions can pose a significant challenge to the operability of low-β∗ colliders. Previously such errors in the LHC have been studied via their feed-down to tune and coupling under the influence of the nominal crossing angle bumps. This method has proved useful in validating various components of the magnetic model. To understand and correct those errors where significant discrepancies exist with the magnetic model however, will require further development of this technique, in addition to the application of novel methods. In 2016 studies were performed to test new methods for the study of the IR-nonlinear errors.

  11. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    Science.gov (United States)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important things for an organization. The selection of an appropriate forecasting method is also important, but the percentage error of a method is even more important if decision makers are to adopt the right one. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least squares method resulted in a percentage of 9.77%, and it was decided that the least squares method works for time series and trend data.
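
    The two error measures are straightforward to compute next to a least-squares trend forecast; a minimal sketch (with made-up demand figures) is:

        import numpy as np

        actual = np.array([112.0, 118.0, 125.0, 121.0, 130.0, 136.0])   # hypothetical demand series
        t = np.arange(len(actual))

        # Least squares (linear trend) fit used as the forecasting method
        slope, intercept = np.polyfit(t, actual, 1)
        forecast = intercept + slope * t

        mad = np.mean(np.abs(actual - forecast))                        # Mean Absolute Deviation
        mape = np.mean(np.abs((actual - forecast) / actual)) * 100      # Mean Absolute Percentage Error, %
        print(f"MAD = {mad:.2f}, MAPE = {mape:.2f}%")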

  12. Discontinuous Galerkin methods and a posteriori error analysis for heterogenous diffusion problems; Methodes de Galerkine discontinues et analyse d'erreur a posteriori pour les problemes de diffusion heterogene

    Energy Technology Data Exchange (ETDEWEB)

    Stephansen, A.F

    2007-12-15

    In this thesis we analyse a discontinuous Galerkin (DG) method and two computable a posteriori error estimators for the linear and stationary advection-diffusion-reaction equation with heterogeneous diffusion. The DG method considered, the SWIP method, is a variation of the Symmetric Interior Penalty Galerkin method. The difference is that the SWIP method uses weighted averages with weights that depend on the diffusion. The a priori analysis shows optimal convergence with respect to mesh-size and robustness with respect to heterogeneous diffusion, which is confirmed by numerical tests. Both a posteriori error estimators are of the residual type and control the energy (semi-)norm of the error. Local lower bounds are obtained showing that almost all indicators are independent of heterogeneities. The exception is for the non-conforming part of the error, which has been evaluated using the Oswald interpolator. The second error estimator is sharper in its estimate with respect to the first one, but it is slightly more costly. This estimator is based on the construction of an H(div)-conforming Raviart-Thomas-Nedelec flux using the conservativeness of DG methods. Numerical results show that both estimators can be used for mesh-adaptation. (author)

  13. Error Estimates for a Semidiscrete Finite Element Method for Fractional Order Parabolic Equations

    KAUST Repository

    Jin, Bangti

    2013-01-01

    We consider the initial boundary value problem for a homogeneous time-fractional diffusion equation with an initial condition ν(x) and a homogeneous Dirichlet boundary condition in a bounded convex polygonal domain Ω. We study two semidiscrete approximation schemes, i.e., the Galerkin finite element method (FEM) and lumped mass Galerkin FEM, using piecewise linear functions. We establish almost optimal with respect to the data regularity error estimates, including the cases of smooth and nonsmooth initial data, i.e., ν ∈ H2(Ω) ∩ H0 1(Ω) and ν ∈ L2(Ω). For the lumped mass method, the optimal L2-norm error estimate is valid only under an additional assumption on the mesh, which in two dimensions is known to be satisfied for symmetric meshes. Finally, we present some numerical results that give insight into the reliability of the theoretical study. © 2013 Society for Industrial and Applied Mathematics.

  14. MO-F-BRA-04: Voxel-Based Statistical Analysis of Deformable Image Registration Error via a Finite Element Method.

    Science.gov (United States)

    Li, S; Lu, M; Kim, J; Glide-Hurst, C; Chetty, I; Zhong, H

    2012-06-01

    Purpose: Clinical implementation of adaptive treatment planning is limited by the lack of quantitative tools to assess deformable image registration errors (R-ERR). The purpose of this study was to develop a method, using finite element modeling (FEM), to estimate registration errors based on mechanical changes resulting from them. Methods: An experimental platform to quantify the correlation between registration errors and their mechanical consequences was developed as follows: diaphragm deformation was simulated on the CT images in patients with lung cancer using FEM. The simulated displacement vector fields (F-DVF) were used to warp each CT image to generate a FEM image. B-Spline based (Elastix) registrations were performed from reference to FEM images to generate a registration DVF (R-DVF). The F-DVF was subtracted from the R-DVF. The magnitude of the difference vector was defined as the registration error, which is a consequence of mechanically unbalanced energy (UE), computed using in-house-developed FEM software. A nonlinear regression model was used based on imaging voxel data and the analysis considered clustered voxel data within images. Results: A regression model analysis showed that UE was significantly correlated with registration error, DVF and the product of registration error and DVF, respectively, with R̂2=0.73 (R=0.854). The association was verified independently using 40 tracked landmarks. A linear function between the means of UE values and R-DVF*R-ERR has been established. The mean registration error (N=8) was 0.9 mm. 85.4% of voxels fit this model within one standard deviation. Conclusions: An encouraging relationship between UE and registration error has been found. These experimental results suggest the feasibility of UE as a valuable tool for evaluating registration errors, thus supporting 4D and adaptive radiotherapy. The research was supported by NIH/NCI R01CA140341. © 2012 American Association of Physicists in Medicine.
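
    A small sketch of one step described above: subtracting the simulated field (F-DVF) from the recovered field (R-DVF) and taking the vector magnitude as the voxel-wise registration error. The array shapes and values are hypothetical placeholders, not the study's data, and the unbalanced-energy computation itself is not reproduced here.

        import numpy as np

        def registration_error(r_dvf, f_dvf):
            """Voxel-wise registration error: magnitude of R-DVF minus F-DVF.

            Both fields are arrays of shape (nx, ny, nz, 3) holding displacement
            vectors in millimetres.
            """
            diff = np.asarray(r_dvf) - np.asarray(f_dvf)
            return np.linalg.norm(diff, axis=-1)

        # Hypothetical 4x4x4 displacement fields standing in for the real data.
        rng = np.random.default_rng(0)
        f_dvf = rng.normal(0.0, 1.0, size=(4, 4, 4, 3))         # simulated "ground truth" field
        r_dvf = f_dvf + rng.normal(0.0, 0.3, size=f_dvf.shape)  # registration result with error

        err = registration_error(r_dvf, f_dvf)
        print("mean registration error (mm):", err.mean())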

  15. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables

  16. Some error estimates for the lumped mass finite element method for a parabolic problem

    KAUST Repository

    Chatzipantelidis, P.

    2012-01-01

    We study the spatially semidiscrete lumped mass method for the model homogeneous heat equation with homogeneous Dirichlet boundary conditions. Improving earlier results we show that known optimal order smooth initial data error estimates for the standard Galerkin method carry over to the lumped mass method whereas nonsmooth initial data estimates require special assumptions on the triangulation. We also discuss the application to time discretization by the backward Euler and Crank-Nicolson methods. © 2011 American Mathematical Society.

  17. Prediction-error of Prediction Error (PPE)-based Reversible Data Hiding

    OpenAIRE

    Wu, Han-Zhou; Wang, Hong-Xia; Shi, Yun-Qing

    2016-01-01

    This paper presents a novel reversible data hiding (RDH) algorithm for gray-scaled images, in which the prediction-error of prediction error (PPE) of a pixel is used to carry the secret data. In the proposed method, the pixels to be embedded are firstly predicted with their neighboring pixels to obtain the corresponding prediction errors (PEs). Then, by exploiting the PEs of the neighboring pixels, the prediction of the PEs of the pixels can be determined. And, a sorting technique based on th...
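
    A hedged sketch of the PPE idea as described above: compute first-level prediction errors from neighboring pixels, then predict those errors from neighboring errors and take the residual. The neighbor predictor (mean of the left and upper pixels) is an assumption made for illustration, not necessarily the authors' exact predictor.

        import numpy as np

        def prediction_errors(img):
            """First-level prediction errors (PE): pixel minus the mean of its
            left and upper neighbours (a simple stand-in predictor)."""
            img = img.astype(int)
            pred = (np.roll(img, 1, axis=1) + np.roll(img, 1, axis=0)) // 2
            pe = img - pred
            pe[0, :] = 0   # borders lack a full neighbourhood; skip them
            pe[:, 0] = 0
            return pe

        def ppe(img):
            """Prediction-error of prediction error: predict each PE from the PEs
            of its left and upper neighbours and take the residual."""
            pe = prediction_errors(img)
            pe_pred = (np.roll(pe, 1, axis=1) + np.roll(pe, 1, axis=0)) // 2
            out = pe - pe_pred
            out[0, :] = 0
            out[:, 0] = 0
            return out

        # Tiny synthetic gray-scale block for illustration.
        img = np.array([[100, 102, 104, 103],
                        [101, 103, 105, 104],
                        [ 99, 101, 103, 102],
                        [100, 102, 104, 103]], dtype=np.uint8)
        print(ppe(img))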

  18. Three-Dimensional Microwave Imaging for Concealed Weapon Detection Using Range Stacking Technique

    Directory of Open Access Journals (Sweden)

    Weixian Tan

    2017-01-01

    Full Text Available Three-dimensional (3D) microwave imaging has been proven to be well suited for concealed weapon detection applications. For 3D image reconstruction under a two-dimensional (2D) planar aperture condition, most current imaging algorithms focus on decomposing the 3D free-space Green function by exploiting the stationary phase and, consequently, the accuracy of the final imagery is obtained at the cost of computational complexity due to the need for interpolation. In this paper, from an alternative viewpoint, we propose a novel interpolation-free imaging algorithm based on wavefront reconstruction theory. The algorithm is an extension of the 2D range stacking algorithm (RSA) with the advantages of low computational cost and high precision. The algorithm uses different reference signal spectra at different range bins and then forms the target function at the desired range bin by a concise coherent summation. Several practical issues such as propagation loss compensation, wavefront reconstruction, and aliasing mitigation are also considered. The sampling criterion and the achievable resolutions for the proposed algorithm are also derived. Finally, the proposed method is validated through extensive computer simulations and real-field experiments. The results show that accurate 3D images can be generated at very high speed by utilizing the proposed algorithm.
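
    A deliberately simplified, one-dimensional sketch of the coherent-summation step described above: each range bin is formed by correlating the measured spectrum with that bin's reference spectrum and summing over frequency. The frequency band, targets and geometry are invented, and the full algorithm operates in 3D under a 2D planar aperture, so this only illustrates the range-stacking principle.

        import numpy as np

        c = 3e8                                   # propagation speed (m/s)
        freqs = np.linspace(28e9, 33e9, 128)      # stepped-frequency samples (hypothetical band)
        targets = [(0.40, 1.0), (0.55, 0.6)]      # (range in m, reflectivity) - invented

        # Simulated received spectrum: superposition of target echoes.
        spectrum = sum(a * np.exp(-1j * 4 * np.pi * freqs * r / c) for r, a in targets)

        # "Range stacking": for every desired range bin, correlate the measured
        # spectrum with that bin's reference spectrum and sum coherently.
        ranges = np.linspace(0.2, 0.8, 400)
        image = np.array([np.sum(spectrum * np.exp(1j * 4 * np.pi * freqs * z / c))
                          for z in ranges])

        peak = ranges[np.argmax(np.abs(image))]
        print(f"strongest response at {peak:.3f} m (true target at 0.400 m)")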

  19. Beam-Based Error Identification and Correction Methods for Particle Accelerators

    CERN Document Server

    AUTHOR|(SzGeCERN)692826; Tomas, Rogelio; Nilsson, Thomas

    2014-06-10

    Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of the parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedented low β-beat for a hadron collider is described. The transverse coupling is another parameter which is of importance to control. Improvement in the reconstruction of the coupling from turn-by-turn data has resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, which is based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling that was applied to the LHC, is described. It resulted in a decrease of the chromatic coupli...

  20. A Comparison of Kernel Equating and Traditional Equipercentile Equating Methods and the Parametric Bootstrap Methods for Estimating Standard Errors in Equipercentile Equating

    Science.gov (United States)

    Choi, Sae Il

    2009-01-01

    This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…
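
    A hedged sketch of the basic equipercentile step (map a form-X score to the form-Y score with the same percentile rank) together with a bootstrap loop for the standard error of the equated score. The score data are simulated, and a nonparametric bootstrap is shown; the parametric bootstrap discussed above would instead resample from fitted score distributions.

        import numpy as np

        def equipercentile_equate(score, x_scores, y_scores):
            """Map a score on form X to the form-Y score with the same percentile rank."""
            p = np.mean(np.asarray(x_scores) <= score)     # percentile rank on form X
            return np.quantile(np.asarray(y_scores), p)    # corresponding quantile on form Y

        rng = np.random.default_rng(1)
        form_x = rng.normal(50, 10, size=500).round()      # invented examinee scores
        form_y = rng.normal(53, 11, size=500).round()

        point = equipercentile_equate(40, form_x, form_y)

        # Bootstrap standard error of the equated score (nonparametric version;
        # a parametric bootstrap would draw from fitted score distributions instead).
        boot = []
        for _ in range(1000):
            bx = rng.choice(form_x, size=form_x.size, replace=True)
            by = rng.choice(form_y, size=form_y.size, replace=True)
            boot.append(equipercentile_equate(40, bx, by))
        print(f"equated score: {point:.2f}, bootstrap SE: {np.std(boot):.3f}")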

  1. Optimal Error Estimates of Two Mixed Finite Element Methods for Parabolic Integro-Differential Equations with Nonsmooth Initial Data

    KAUST Repository

    Goswami, Deepjyoti

    2013-05-01

    In the first part of this article, a new mixed method is proposed and analyzed for parabolic integro-differential equations (PIDE) with nonsmooth initial data. Compared to the standard mixed method for PIDE, the present method does not bank on a reformulation using a resolvent operator. Based on energy arguments combined with a repeated use of an integral operator and without using parabolic type duality technique, optimal L2-error estimates are derived for semidiscrete approximations, when the initial condition is in L2. Due to the presence of the integral term, it is, further, observed that a negative norm estimate plays a crucial role in our error analysis. Moreover, the proposed analysis follows the spirit of the proof techniques used in deriving optimal error estimates for finite element approximations to PIDE with smooth data and therefore, it unifies both the theories, i.e., one for smooth data and other for nonsmooth data. Finally, we extend the proposed analysis to the standard mixed method for PIDE with rough initial data and provide an optimal error estimate in L2, which improves upon the results available in the literature. © 2013 Springer Science+Business Media New York.

  2. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

    Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  3. Do natural methods for fertility regulation increase the risks of genetic errors?

    Science.gov (United States)

    Serra, A

    1981-09-01

    Genetic errors of many kinds are connected with the reproductive processes and are favored by a number of largely uncontrollable, endogenous, and/or exogenous factors. For a long time human beings have taken into their own hands the control of this process. The regulation of fertility is clearly a forceful request to any family, to any community, were it only to lower the level of the consequences of genetic errors. In connection with this request, and in the context of the Congress for the Family of Africa and Europe (Catholic University, January 1981), one question must still be raised and possibly answered. The question is: do or can the so-called "natural methods" for the regulation of fertility increase the risks of genetic errors, with their generally dramatic effects on families and on communities? It is important to try to give as far as possible a scientifically based answer to this question. Fr. Haring, a moral theologian, citing scientific evidence, finds it shocking that the rhythm method, so strongly and recently endorsed again by Church authorities, should be classified among the means of "birth control" by way of spontaneous abortion, or at least by spontaneous loss of a large number of zygotes which, due to the concrete application of the rhythm method, lack the necessary vitality for survival. He goes on to state that the scientific research provides overwhelming evidence that the rhythm method in its traditional form is responsible for a disproportionate waste of zygotes and a disproportionate frequency of spontaneous abortions and of defective children. Professor Hilgers, a reproductive physiologist, takes an opposite view, maintaining that the hypotheses are arbitrary and the alarm false. The strongest evidence upon which Fr. Haring bases his moral principles about the use of the natural methods of fertility regulation is a paper by Guerrero and Rojos (1975). These authors examined, retrospectively, the success of 965 pregnancies which occurred in

  4. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton's meth...
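
    A toy sketch of the book's central theme under stated assumptions: a gradient projection method whose gradient evaluations are corrupted by an error of bounded norm still reaches a neighborhood of the solution whose size scales with the error bound. The quadratic objective, the unit-ball constraint and the error level are all invented for illustration.

        import numpy as np

        rng = np.random.default_rng(5)

        # Minimize f(x) = 0.5*||x - c||^2 over the unit ball, but every gradient
        # evaluation is corrupted by an error of norm at most delta.
        c = np.array([2.0, 1.0])
        delta = 0.05

        def project(x):
            n = np.linalg.norm(x)
            return x if n <= 1 else x / n

        x = np.zeros(2)
        for k in range(200):
            grad = x - c                                      # exact gradient
            noise = rng.normal(size=2)
            grad += delta * noise / np.linalg.norm(noise)     # bounded computational error
            x = project(x - 0.1 * grad)                       # gradient projection step

        x_star = c / np.linalg.norm(c)                        # true minimizer on the unit ball
        print(f"distance to true solution: {np.linalg.norm(x - x_star):.3f} (error bound {delta})")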

  5. Identification and Assessment of Human Errors in Postgraduate Endodontic Students of Kerman University of Medical Sciences by Using the SHERPA Method

    Directory of Open Access Journals (Sweden)

    Saman Dastaran

    2016-03-01

    Full Text Available Introduction: Human errors are the cause of many accidents, both industrial and medical, so finding an approach for identifying and reducing them is very important. Since no study has been done on human errors in the dental field, this study aimed to identify and assess human errors in postgraduate endodontic students of Kerman University of Medical Sciences by using the SHERPA method. Methods: This cross-sectional study was performed during the year 2014. Data were collected through task observation and interviews with postgraduate endodontic students. Overall, 10 critical tasks, which were most likely to cause harm to patients, were determined. Next, Hierarchical Task Analysis (HTA) was conducted and human errors in each task were identified using the Systematic Human Error Reduction and Prediction Approach (SHERPA) technique worksheets. Results: After analyzing the SHERPA worksheets, 90 human errors were identified, including action errors (67.7%), checking errors (13.3%), selection errors (8.8%), retrieval errors (5.5%) and communication errors (4.4%). Thus, most were action errors and the fewest were communication errors. Conclusions: The results of the study showed that the highest percentage of errors and the highest level of risk were associated with action errors; therefore, to reduce the occurrence of such errors and limit their consequences, control measures including periodic training on work procedures, provision of work checklists, development of guidelines and establishment of a systematic and standardized reporting system should be put in place. Regarding the results of this study, the control of recovery errors, with the highest percentage of undesirable risk, and action errors, with the highest frequency, should be given priority for control.

  6. When interference helps: Increasing executive load to facilitate deception detection in the Concealed Information Test

    Directory of Open Access Journals (Sweden)

    George eVisu-Petra

    2013-03-01

    Full Text Available The possibility of enhancing the detection efficiency of the Concealed Information Test (CIT) by increasing executive load was investigated, using an interference design. After learning and executing a mock crime scenario, subjects underwent three deception detection tests: an RT-based CIT, an RT-based CIT plus a concurrent memory task (CITMem), and an RT-based CIT plus a concurrent set-shifting task (CITShift). The concealed information effect, consisting of increased RTs and lower response accuracy for probe items compared to irrelevant items, was evidenced across all three conditions. The group analyses indicated a larger difference between RTs to probe and irrelevant items in the dual-task conditions, but this difference did not translate into significantly increased detection efficiency at the individual level. Signal detection parameters based on the comparison with a simulated innocent group showed accurate discrimination for all conditions. Overall response accuracy on the CITMem was highest and the difference between response accuracy to probes and irrelevants was smallest in this condition. Accuracy on the concurrent tasks (Mem and Shift) was high, and responses on these tasks were significantly influenced by CIT stimulus type (probes vs. irrelevants). The findings are interpreted in relation to the cognitive load/dual-task interference literature, generating important insights for research on the involvement of executive functions in deceptive behavior.

  7. Impact of Channel Estimation Errors on Multiuser Detection via the Replica Method

    Directory of Open Access Journals (Sweden)

    Li Husheng

    2005-01-01

    Full Text Available For practical wireless DS-CDMA systems, channel estimation is imperfect due to noise and interference. In this paper, the impact of channel estimation errors on multiuser detection (MUD is analyzed under the framework of the replica method. System performance is obtained in the large system limit for optimal MUD, linear MUD, and turbo MUD, and is validated by numerical results for finite systems.

  8. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

    Data transfer in a distributed environment is prone to frequent failures resulting from back-end system level problems, like connectivity failures which are technically untraceable by users. Error messages are not logged efficiently, and sometimes are not relevant/useful from the users' point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher level planners to make better and more accurate decisions. It is necessary to have well defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve arrangement of data transfer operations and to enhance decision making of data transfer schedulers.

  9. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.

  10. Beam-pointing error compensation method of phased array radar seeker with phantom-bit technology

    Directory of Open Access Journals (Sweden)

    Qiuqiu WEN

    2017-06-01

    Full Text Available A phased array radar seeker (PARS) must be able to effectively decouple body motion and accurately extract the line-of-sight (LOS) rate for target missile tracking. In this study, a real-time two-channel beam pointing error (BPE) compensation method of PARS for LOS rate extraction is designed. The PARS discrete beam motion principle is analyzed, and the mathematical model of beam scanning control is established. According to the principle of the antenna element shift phase, both the antenna element shift phase law and the causes of beam-pointing error under phantom-bit conditions are analyzed, and the effect of BPE caused by phantom-bit technology (PBT) on the extraction accuracy of the LOS rate is examined. A compensation method is given, which includes coordinate transforms, beam angle margin compensation, and detector dislocation angle calculation. When the method is used, the beam angle margin in the pitch and yaw directions is calculated to reduce the effect of missile body disturbance and to improve LOS rate extraction precision by compensating for the detector dislocation angle. The simulation results validate the proposed method.

  11. Error characterization methods for surface soil moisture products from remote sensing

    International Nuclear Information System (INIS)

    Doubková, M.

    2012-01-01

    To support the operational use of Synthetic Aperture Radar (SAR) earth observation systems, the European Space Agency (ESA) is developing Sentinel-1 radar satellites operating in C-band. Much like its SAR predecessors (Earth Resource Satellite, ENVISAT, and RADARSAT), the Sentinel-1 will operate at a medium spatial resolution (ranging from 5 to 40 m), but with a greatly improved revisit period, especially over Europe (∼2 days). Given the planned high temporal sampling and the operational configuration, Sentinel-1 is expected to be beneficial for operational monitoring of dynamic processes in hydrology and phenology. The benefit of a C-band SAR monitoring service in hydrology has already been demonstrated within the scope of the Soil Moisture for Hydrometeorologic Applications (SHARE) project using data from the Global Mode (GM) of the Advanced Synthetic Aperture Radar (ASAR). To fully exploit the potential of the SAR soil moisture products, well characterized errors need to be provided with the products. Understanding the errors of remotely sensed surface soil moisture (SSM) datasets is indispensable for their application in models, for the extraction of blended SSM products, as well as for their use in the evaluation of other soil moisture datasets. This thesis has several objectives. First, it provides the basics and state-of-the-art methods for evaluating measures of SSM, including both the standard (e.g. Root Mean Square Error, Correlation coefficient) and the advanced (e.g. Error propagation, Triple collocation) evaluation measures. A summary of applications of soil moisture datasets is presented and evaluation measures are suggested for each application according to its requirements on dataset quality. The evaluation of the Advanced Synthetic Aperture Radar (ASAR) Global Mode (GM) SSM using the standard and advanced evaluation measures comprises the second objective of the work. To achieve the second objective, the data from the Australian Water Assessment System
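
    A hedged sketch of one standard and one advanced evaluation measure named above: RMSE, Pearson correlation, and a covariance-based triple collocation estimate of a dataset's error variance. The usual triple collocation assumptions (three collocated datasets with linearly related signals and mutually independent errors) are taken for granted, and the three datasets below are synthetic.

        import numpy as np

        rng = np.random.default_rng(2)
        truth = rng.uniform(0.1, 0.4, size=2000)          # synthetic "true" soil moisture

        # Three independent estimates of the same signal (e.g. SAR, model, in situ).
        x = truth + rng.normal(0, 0.03, truth.size)
        y = truth + rng.normal(0, 0.05, truth.size)
        z = truth + rng.normal(0, 0.02, truth.size)

        rmse = np.sqrt(np.mean((x - z) ** 2))             # standard measure vs. a reference
        corr = np.corrcoef(x, z)[0, 1]

        # Triple collocation: with independent errors, the error variance of x is
        # var(x) - cov(x,y)*cov(x,z)/cov(y,z)  (and cyclically for y and z).
        cov = np.cov(np.vstack([x, y, z]))
        err_var_x = cov[0, 0] - cov[0, 1] * cov[0, 2] / cov[1, 2]
        print(f"RMSE(x,z)={rmse:.3f}  r={corr:.3f}  TC error std of x={np.sqrt(err_var_x):.3f}")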

  12. Evaluation of coping strategies in established rheumatoid arthritis patients: emergence of concealment in an Asian cohort.

    Science.gov (United States)

    Chew, Elizabeth; Griva, Konstadina; Cheung, Peter P

    2016-11-01

    To evaluate coping strategies of Asian RA patients and their associations with health-related quality of life (HRQoL). A cross-sectional sample of patients with established RA was evaluated using measures of coping (Coping in Rheumatoid Arthritis Questionnaire [C-RAQ]; appraisal of coping effectiveness and helplessness), HRQoL (Mental and Physical Components [MCS/PCS] of the Short Form 12v2; Rheumatoid Arthritis Impact of Disease score [RAID]) and clinical/laboratory assessments. Principal component analysis was conducted to identify coping strategies. Multiple linear regression analyses were performed to evaluate the associations between coping strategies and HRQoL outcomes. The study sample comprised 101 patients, 81% female, 72.3% Chinese, mean age 54.2 ± 12.6 years. Five coping strategies were identified: Active problem solving (E = 5.36), Distancing (E = 2.30), Concealment (E = 1.89), Cognitive reframing (E = 1.55) and Emotional expression (E = 1.26). Concealment was consistently associated with PCS (r s = -0.23, P = 0.049), MCS (r s = -0.24, P = 0.04) and RAID (r s = 0.39, P culture-specific. Interventions should tailor psychosocial support needs to address not only coping strategies, but patients' perception of their coping. © 2016 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.

  13. Effect of Temperature and Moisture on the Development of Concealed Damage in Raw Almonds (Prunus dulcis).

    Science.gov (United States)

    Rogel-Castillo, Cristian; Zuskov, David; Chan, Bronte Lee; Lee, Jihyun; Huang, Guangwei; Mitchell, Alyson E

    2015-09-23

    Concealed damage (CD) is a brown discoloration of nutmeat that appears only after kernels are treated with moderate heat (e.g., roasting). Identifying factors that promote CD in almonds is of significant interest to the nut industry. Herein, the effect of temperature (35 and 45 °C) and moisture exposure on raw almonds (Prunus dulcis var. Nonpareil) was studied using HS-SPME-GC/MS. A CIE LCh colorimetric method was developed to identify raw almonds with CD. A significant increase in CD was demonstrated in almonds exposed to moisture (8% kernel moisture content) at 45 °C as compared to 35 °C. Elevated levels of volatiles related to lipid peroxidation and amino acid degradation were observed in almonds with CD. These results suggest that postharvest moisture exposure resulting in an internal kernel moisture ≥ 8% is a key factor in the development of CD in raw almonds and that CD is accelerated by temperature.

  14. A remarkable systemic error in calibration methods of γ spectrometer used for determining activity of 238U

    International Nuclear Information System (INIS)

    Su Qiong; Cheng Jianping; Diao Lijun; Li Guiqun

    2006-01-01

    A remarkable systemic error, unrecognized for a long time, has been identified. The error appears when calibration methods for determining the activity of 238 U are used with a high-resolution γ-spectrometer. When the 92.6 keV γ-ray, as the characteristic radiation of 238 U, is used to determine the activity of 238 U in natural environment samples, the main problem is the disturbing radiation produced by external excitation (so-called external-source X-ray radiation). Because the X-ray intensity changes with many indeterminate factors, it is advised that these calibration methods be abandoned. As the influence of this systemic error remains in some past research papers, the authors suggest that the data from those papers be cited carefully and, if possible, be re-determined. (authors)

  15. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  16. Vitamin D and Musculoskeletal Status in Nova Scotian Women Who Wear Concealing Clothing

    Directory of Open Access Journals (Sweden)

    Jo M. Welch

    2012-05-01

    Full Text Available Bone and muscle weakness due to vitamin D deficiency is common among Muslim women who reside in sunny, equatorial countries. The purpose of this study was to determine if living in a northern maritime location additionally disadvantages women who wear concealing clothes. A cross-sectional matched pair design was used to compare women who habitually wore concealing clothing with women who dressed according to western norms. Each premenopausal hijab-wearing woman (n = 11) was matched by age, height, weight and skin tone with a western-dressed woman. Subjects were tested by hand grip dynamometry to assess muscular strength and by quantitative ultrasound at the calcaneus to assess bone status. Nutritional intake was obtained by 24 h recall. Serum 25-hydroxyvitamin D (s-25(OH)D) status was determined in seven matched pairs. The hijab group had lower s-25(OH)D than women who wore western clothes (40 ± 28 vs. 81 ± 32 nmol/L, p = 0.01). Grip strength in the right hand was lower in the hijab-wearing women (p = 0.05), but this appeared to be due to less participation in intense exercise. Bone status did not differ between groups (p = 0.9). Dietary intake of vitamin D was lower in the hijab-wearers (316 ± 353 vs. 601 ± 341 IU/day, p = 0.001). This pilot study suggests that women living in a northern maritime location appear to be at risk for vitamin D insufficiency and therefore should consider taking vitamin D supplements.

  17. Wind power error estimation in resource assessments.

    Directory of Open Access Journals (Sweden)

    Osvaldo Rodríguez

    Full Text Available Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
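
    A hedged sketch of the propagation step: perturb the measured wind speed by ±10% and push both values through a turbine power curve to see the resulting spread in estimated power. The power curve points and wind speeds below are invented, interpolation is linear rather than the Lagrange fit used in the study, and the numbers are not meant to reproduce the 5% figure reported above.

        import numpy as np

        # Simplified power curve (wind speed in m/s -> power in kW); values are
        # illustrative, not from any of the 28 turbines used in the study.
        v_curve = np.array([3, 5, 7, 9, 11, 13, 15], dtype=float)
        p_curve = np.array([0, 120, 400, 900, 1500, 1900, 2000], dtype=float)

        def power(v):
            return np.interp(v, v_curve, p_curve)

        v_measured = np.array([4.8, 6.3, 7.9, 9.4, 11.2])   # hypothetical wind speed series
        rel_speed_err = 0.10                                 # 10% wind speed measurement error

        p_nominal = power(v_measured)
        p_high = power(v_measured * (1 + rel_speed_err))
        p_low = power(v_measured * (1 - rel_speed_err))

        # Propagated power error, expressed relative to the nominal estimate.
        rel_power_err = (p_high - p_low) / (2 * p_nominal)
        print("mean propagated power error: {:.1%}".format(rel_power_err.mean()))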

  18. Wind power error estimation in resource assessments.

    Science.gov (United States)

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.

  19. Error Estimates for a Semidiscrete Finite Element Method for Fractional Order Parabolic Equations

    KAUST Repository

    Jin, Bangti; Lazarov, Raytcho; Zhou, Zhi

    2013-01-01

    initial data, i.e., ν ∈ H2(Ω) ∩ H0 1(Ω) and ν ∈ L2(Ω). For the lumped mass method, the optimal L2-norm error estimate is valid only under an additional assumption on the mesh, which in two dimensions is known to be satisfied for symmetric meshes. Finally

  20. Recursive prediction error methods for online estimation in nonlinear state-space models

    Directory of Open Access Journals (Sweden)

    Dag Ljungquist

    1994-04-01

    Full Text Available Several recursive algorithms for online, combined state and parameter estimation in nonlinear state-space models are discussed in this paper. Well-known algorithms such as the extended Kalman filter and alternative formulations of the recursive prediction error method are included, as well as a new method based on a line-search strategy. A comparison of the algorithms illustrates that they are very similar although the differences can be important for the online tracking capabilities and robustness. Simulation experiments on a simple nonlinear process show that the performance under certain conditions can be improved by including a line-search strategy.
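
    A hedged minimal sketch of a recursive prediction-error update for a single unknown parameter of a model that is nonlinear in the input but linear in the parameter; in this special case the update reduces to a recursive least-squares form, rather than the full Gauss-Newton or extended Kalman machinery compared in the paper. The model, noise level and data are invented.

        import numpy as np

        rng = np.random.default_rng(3)

        # Output nonlinear in the input but linear in the unknown gain a:
        #   y_k = a * sin(u_k) + noise.  The prediction error drives the update.
        a_true = 0.8
        u = rng.uniform(-2, 2, size=200)
        y = a_true * np.sin(u) + rng.normal(0, 0.05, size=u.size)

        a_hat, P = 0.0, 100.0              # initial estimate and "covariance"
        for u_k, y_k in zip(u, y):
            psi = np.sin(u_k)              # gradient of the prediction w.r.t. the parameter
            eps = y_k - a_hat * psi        # prediction error
            K = P * psi / (1.0 + psi * P * psi)
            a_hat += K * eps               # recursive prediction-error (RLS-type) update
            P -= K * psi * P

        print(f"estimated a = {a_hat:.3f} (true value {a_true})")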

  1. A numerical method for multigroup slab-geometry discrete ordinates problems with no spatial truncation error

    International Nuclear Information System (INIS)

    Barros, R.C. de; Larsen, E.W.

    1991-01-01

    A generalization of the one-group Spectral Green's Function (SGF) method is developed for multigroup, slab-geometry discrete ordinates (SN) problems. The multigroup SGF method is free from spatial truncation errors; it generates numerical values for the cell-edge and cell-average angular fluxes that agree with the analytic solution of the multigroup SN equations. Numerical results are given to illustrate the method's accuracy.

  2. On Error Estimation in the Conjugate Gradient Method and why it Works in Finite Precision Computations

    Czech Academy of Sciences Publication Activity Database

    Strakoš, Zdeněk; Tichý, Petr

    2002-01-01

    Vol. 13 (2002), pp. 56-80 ISSN 1068-9613 R&D Projects: GA ČR GA201/02/0595 Institutional research plan: AV0Z1030915 Keywords: conjugate gradient method * Gauss quadrature * evaluation of convergence * error bounds * finite precision arithmetic * rounding errors * loss of orthogonality Subject RIV: BA - General Mathematics Impact factor: 0.565, year: 2002 http://etna.mcs.kent.edu/volumes/2001-2010/vol13/abstract.php?vol=13&pages=56-80
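
    A hedged sketch of the error estimate studied in the cited work: in conjugate gradients, the Gauss quadrature connection gives the lower bound sum_{j=k}^{k+d-1} alpha_j*||r_j||^2 for the squared A-norm error at step k, available with a delay of d steps. The test matrix, delay and sizes below are invented, and the finite-precision subtleties analyzed in the paper are ignored.

        import numpy as np

        def cg_with_error_estimate(A, b, iters=40, delay=4):
            """Plain CG that also returns the Gauss-quadrature-based lower bound
            sum_{j=k}^{k+d-1} alpha_j * ||r_j||^2 for the squared A-norm error at step k."""
            x = np.zeros(b.size)
            r = b.copy()
            p = r.copy()
            terms = []                       # alpha_j * ||r_j||^2 for each step j
            xs = [x.copy()]
            for _ in range(iters):
                Ap = A @ p
                rr = r @ r
                alpha = rr / (p @ Ap)
                terms.append(alpha * rr)
                x = x + alpha * p
                r = r - alpha * Ap
                beta = (r @ r) / rr
                p = r + beta * p
                xs.append(x.copy())
            # Lower-bound estimates of ||x - x_k||_A^2, available with delay d.
            estimates = [sum(terms[k:k + delay]) for k in range(iters - delay)]
            return xs, estimates

        # Small SPD test problem (hypothetical).
        rng = np.random.default_rng(4)
        M = rng.normal(size=(80, 80))
        A = M @ M.T + 5 * np.eye(80)
        b = rng.normal(size=80)
        x_exact = np.linalg.solve(A, b)

        xs, est = cg_with_error_estimate(A, b)
        k = 10
        true_sq = (x_exact - xs[k]) @ A @ (x_exact - xs[k])
        print(f"step {k}: estimated ||e||_A^2 >= {est[k]:.3e}, true {true_sq:.3e}")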

  3. The "good cop, bad cop" effect in the RT-based concealed information test: exploring the effect of emotional expressions displayed by a virtual investigator.

    Directory of Open Access Journals (Sweden)

    Mihai Varga

    Full Text Available Concealing the possession of relevant information represents a complex cognitive process, shaped by contextual demands and individual differences in cognitive and socio-emotional functioning. The Reaction Time-based Concealed Information Test (RT-CIT) is used to detect concealed knowledge based on the difference in RTs between denying recognition of critical (probe) and newly encountered (irrelevant) information. Several research questions were addressed in this scenario implemented after a mock crime. First, we were interested in whether the introduction of a social stimulus (facial identity) simulating a virtual investigator would facilitate the process of deception detection. Next, we explored whether his emotional displays (friendly, hostile or neutral) would have a differential impact on the speed of responses to probe versus irrelevant items. We also compared the impact of introducing similar stimuli in a working memory (WM) updating context without requirements to conceal information. Finally, we explored the association between deceptive behavior and individual differences in WM updating proficiency or in internalizing problems (state/trait anxiety and depression). Results indicated that the mere presence of a neutral virtual investigator slowed down participants' responses, but not the appended lie-specific time (difference between probes and irrelevants). Emotional expression was shown to differentially affect the speed of responses to critical items, with positive displays from the virtual examiner enhancing lie-specific time, compared to negative facial expressions, which had an opposite impact. This valence-specific effect was not visible in the WM updating context. Higher levels of trait/state anxiety were related to faster responses to probes in the negative condition (hostile facial expression) of the RT-CIT. These preliminary findings further emphasize the need to take into account motivational and emotional factors when considering the

  4. The treatment of commission errors in first generation human reliability analysis methods

    Energy Technology Data Exchange (ETDEWEB)

    Alvarengga, Marco Antonio Bayout; Fonseca, Renato Alves da, E-mail: bayout@cnen.gov.b, E-mail: rfonseca@cnen.gov.b [Comissao Nacional de Energia Nuclear (CNEN) Rio de Janeiro, RJ (Brazil); Melo, Paulo Fernando Frutuoso e, E-mail: frutuoso@nuclear.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear

    2011-07-01

    Human errors in human reliability analysis can be classified generically as errors of omission and errors of commission. Omission errors are related to the omission of any human action that should have been performed but does not occur. Errors of commission are those related to human actions that should not be performed but which in fact are performed. Both involve specific types of cognitive error mechanisms; however, errors of commission are more difficult to model because they are characterized by non-anticipated actions that are performed instead of others that are omitted (omission errors) or are introduced into an operational task without being part of the normal sequence of this task. The identification of actions that are not supposed to occur depends on the operational context, which will influence or facilitate certain unsafe actions of the operator depending on the operational performance of its parameters and variables. The survey of operational contexts and associated unsafe actions is a characteristic of second-generation models, unlike the first-generation models. This paper discusses how first-generation models can treat errors of commission in the steps of detection, diagnosis, decision-making and implementation in human information processing, particularly with the use of THERP tables for error quantification. (author)

  5. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  6. Using a Delphi Method to Identify Human Factors Contributing to Nursing Errors.

    Science.gov (United States)

    Roth, Cheryl; Brewer, Melanie; Wieck, K Lynn

    2017-07-01

    The purpose of this study was to identify human factors associated with nursing errors. Using a Delphi technique, this study used feedback from a panel of nurse experts (n = 25) on an initial qualitative survey questionnaire followed by summarizing the results with feedback and confirmation. Synthesized factors regarding causes of errors were incorporated into a quantitative Likert-type scale, and the original expert panel participants were queried a second time to validate responses. The list identified 24 items as most common causes of nursing errors, including swamping and errors made by others that nurses are expected to recognize and fix. The responses provided a consensus top 10 errors list based on means with heavy workload and fatigue at the top of the list. The use of the Delphi survey established consensus and developed a platform upon which future study of nursing errors can evolve as a link to future solutions. This list of human factors in nursing errors should serve to stimulate dialogue among nurses about how to prevent errors and improve outcomes. Human and system failures have been the subject of an abundance of research, yet nursing errors continue to occur. © 2016 Wiley Periodicals, Inc.

  7. Courtesy stigma: A concealed consternation among caregivers of people affected by leprosy.

    Science.gov (United States)

    Dako-Gyeke, Mavis

    2018-01-01

    This study explored experiences of courtesy stigma among caregivers of people affected by leprosy. Using a qualitative research approach, twenty participants were purposively selected and in-depth interviews conducted. The interviews were audio-recorded, transcribed, and analyzed to identify emerging themes that addressed objectives of the study. The findings indicated that caregivers of people affected by leprosy experienced courtesy stigma. Evidence showed that fear of contagion underpinned caregivers' experiences, especially in employment and romantic relationships. In addition, participants adopted different strategies (disregarding, concealment, education, faith-based trust) to handle courtesy stigma. The findings demonstrate that psychosocial support and financial assistance to caregivers are necessary considerations for attainment of effective care for people affected by leprosy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. The error model and experiment of measuring angular position error based on laser collimation

    Science.gov (United States)

    Cai, Yangyang; Yang, Jing; Li, Jiakun; Feng, Qibo

    2018-01-01

    The rotary axis is the reference component of rotational motion. Angular position error is the most critical factor impairing machining precision among the six degree-of-freedom (DOF) geometric errors of a rotary axis. In this paper, the method of measuring the angular position error of a rotary axis based on laser collimation is thoroughly researched, the error model is established, and 360° full-range measurement is realized by using a high-precision servo turntable. The change of spatial attitude of each moving part is described accurately by 3×3 transformation matrices, and the influence of various factors on the measurement results is analyzed in detail. Experimental results show that the measurement method can achieve high measurement accuracy and a large measurement range.
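
    A minimal sketch of describing attitude changes with 3×3 rotation matrices, as mentioned above: the residual rotation between the actual and the commanded attitude exposes the angular position error about the rotary axis. The nominal and actual angles are invented, and the real error model composes many more transformation matrices than this.

        import numpy as np

        def rot_z(theta):
            """3x3 rotation matrix about the rotary (z) axis, theta in radians."""
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s, 0.0],
                             [s,  c, 0.0],
                             [0.0, 0.0, 1.0]])

        nominal = np.deg2rad(30.0)                   # commanded angular position
        actual = np.deg2rad(30.0 + 8e-4)             # actual position with a small error

        # The attitude change of the moving part is a product of rotation matrices;
        # the residual rotation exposes the angular position error.
        R_err = rot_z(actual) @ rot_z(nominal).T
        angle_err = np.arctan2(R_err[1, 0], R_err[0, 0])
        print(f"angular position error: {np.rad2deg(angle_err) * 3600:.2f} arcsec")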

  9. A comment on Farwell : brain fingerprinting: a comprehensive tutorial review of detection of concealed information with event-related brain potentials

    NARCIS (Netherlands)

    Meijer, E.H.; Ben-Shakhar, G.; Verschuere, B.; Donchin, E.

    2013-01-01

    In a recent issue of Cognitive Neurodynamics Farwell (Cogn Neurodyn 6:115-154, 2012) published a comprehensive tutorial review of the use of Event Related Brain Potentials (ERP) in the detection of concealed information. Farwell’s review covered much of his own work employing his ‘‘brain

  10. Method for evaluation of risk due to seismic related design and construction errors based on past reactor experience

    International Nuclear Information System (INIS)

    Gonzalez Cuesta, M.; Okrent, D.

    1985-01-01

    This paper proposes a methodology for quantification of risk due to seismic related design and construction errors in nuclear power plants, based on information available on errors discovered in the past. For the purposes of this paper, an error is defined as any event that causes the seismic safety margins of a nuclear power plant to be smaller than implied by current regulatory requirements and industry common practice. Also, the actual reduction in the safety margins caused by the error will be called a deficiency. The method is based on a theoretical model of errors, called a deficiency logic diagram. First, an ultimate cause is present. This ultimate cause is consummated as a specific instance, called an originating error. As originating errors may occur in actions to be applied a number of times, a deficiency generation system may be involved. Quality assurance activities will hopefully identify most of these deficiencies, requesting their disposition. However, the quality assurance program is not perfect and some operating plant deficiencies may persist, causing different levels of impact on the plant logic. The paper provides a way of extrapolating information about errors discovered in plants under construction in order to assess the risk due to errors that have not been discovered.

  11. Dynamic detection-rate-based bit allocation with genuine interval concealment for binary biometric representation.

    Science.gov (United States)

    Lim, Meng-Hui; Teoh, Andrew Beng Jin; Toh, Kar-Ann

    2013-06-01

    Biometric discretization is a key component in biometric cryptographic key generation. It converts an extracted biometric feature vector into a binary string via typical steps such as segmentation of each feature element into a number of labeled intervals, mapping of each interval-captured feature element onto a binary space, and concatenation of the resulted binary output of all feature elements into a binary string. Currently, the detection rate optimized bit allocation (DROBA) scheme is one of the most effective biometric discretization schemes in terms of its capability to assign binary bits dynamically to user-specific features with respect to their discriminability. However, we learn that DROBA suffers from potential discriminative feature misdetection and underdiscretization in its bit allocation process. This paper highlights such drawbacks and improves upon DROBA based on a novel two-stage algorithm: 1) a dynamic search method to efficiently recapture such misdetected features and to optimize the bit allocation of underdiscretized features and 2) a genuine interval concealment technique to alleviate crucial information leakage resulted from the dynamic search. Improvements in classification accuracy on two popular face data sets vindicate the feasibility of our approach compared with DROBA.
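
    A hedged sketch of the "typical steps" named above: segment each feature's range into labeled intervals, map each interval index to a binary label, and concatenate the labels. Equal-width intervals and plain binary labels are simplifying assumptions for illustration; DROBA and the proposed scheme instead allocate a user-specific number of bits per feature according to its discriminability.

        import numpy as np

        def discretize(features, bits_per_feature, lo=-1.0, hi=1.0):
            """Turn a real feature vector into a binary string.

            Each feature range [lo, hi] is cut into 2**b equal-width labeled
            intervals; the interval index is emitted as a b-bit binary label and
            all labels are concatenated.
            """
            out = []
            for value, b in zip(features, bits_per_feature):
                n_intervals = 2 ** b
                idx = int((value - lo) / (hi - lo) * n_intervals)
                idx = min(max(idx, 0), n_intervals - 1)          # clamp to a valid interval
                out.append(format(idx, f"0{b}b"))
            return "".join(out)

        # Hypothetical feature vector and per-feature bit allocation
        # (DROBA would choose these bit counts from the features' discriminability).
        features = np.array([0.12, -0.73, 0.48, 0.05])
        bits = [3, 2, 4, 1]
        print(discretize(features, bits))    # -> '1000010111'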

  12. On-Error Training (Book Excerpt).

    Science.gov (United States)

    Fukuda, Ryuji

    1985-01-01

    This excerpt from "Managerial Engineering: Techniques for Improving Quality and Productivity in the Workplace" describes the development, objectives, and use of On-Error Training (OET), a method which trains workers to learn from their errors. Also described is New Joharry's Window, a performance-error data analysis technique used in…

  13. Probability of error in information-hiding protocols

    NARCIS (Netherlands)

    Chatzikokolakis, K.; Palamidessi, C.; Panangaden, P.

    2007-01-01

    Randomized protocols for hiding private information can fruitfully be regarded as noisy channels in the information-theoretic sense, and the inference of the concealed information can be regarded as a hypothesis-testing problem. We consider the Bayesian approach to the problem, and investigate the

  14. Human errors, countermeasures for their prevention and evaluation

    International Nuclear Information System (INIS)

    Kohda, Takehisa; Inoue, Koichi

    1992-01-01

    Accidents originating in human errors have continued to occur, as in recent large accidents such as the TMI accident and the Chernobyl accident. The proportion of accidents originating in human errors is unexpectedly high; even as the reliability and safety of hardware improve, a corresponding improvement in human reliability cannot be expected. Human errors arise from the difference between the function required of people and the function they actually accomplish, and the results exert some adverse effect on systems. Human errors are classified into design errors, manufacturing errors, operation errors, maintenance errors, checkup errors and general handling errors. In terms of behavior, human errors are classified into forgetting to do, failing to do, doing what must not be done, mistaking the order, and doing at an improper time. The factors in human error occurrence are circumstantial factors, personal factors and stress factors. As methods of analyzing and evaluating human errors, systems engineering methods such as probabilistic risk assessment are used. The technique for human error rate prediction, the method for human cognitive reliability, the confusion matrix and SLIM-MAUD are also used. (K.I.)

  15. Counting OCR errors in typeset text

    Science.gov (United States)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and they are not strictly comparable due to larger variances in the counts than would be expected from sampling variance alone. Naturally, since OCR accuracy is based on a ratio of the number of OCR errors over the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods are significantly different. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
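
    A hedged sketch of the dynamic-programming error count referred to above: unit-weight Levenshtein distance between the ground truth and the OCR output. The unit weights and the absence of suspect-marker handling are exactly the kind of implementation choices the paper warns will change the resulting count, and the sample strings are invented.

        def ocr_error_count(ground_truth, ocr_output):
            """Minimum number of character substitutions, insertions and deletions
            (unit-weight Levenshtein distance) turning the OCR output into the truth."""
            m, n = len(ground_truth), len(ocr_output)
            d = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(m + 1):
                d[i][0] = i
            for j in range(n + 1):
                d[0][j] = j
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    cost = 0 if ground_truth[i - 1] == ocr_output[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1,        # deletion
                                  d[i][j - 1] + 1,        # insertion
                                  d[i - 1][j - 1] + cost) # substitution / match
            return d[m][n]

        truth = "Error concealment hides transmission errors."
        ocr   = "Errar concealrnent hides transmission errors"
        errors = ocr_error_count(truth, ocr)
        accuracy = 1 - errors / len(truth)
        print(f"{errors} errors, character accuracy {accuracy:.1%}")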

  16. Shifted Legendre method with residual error estimation for delay linear Fredholm integro-differential equations

    Directory of Open Access Journals (Sweden)

    Şuayip Yüzbaşı

    2017-03-01

    Full Text Available In this paper, we suggest a matrix method for obtaining the approximate solutions of the delay linear Fredholm integro-differential equations with constant coefficients using the shifted Legendre polynomials. The problem is considered with mixed conditions. Using the required matrix operations, the delay linear Fredholm integro-differential equation is transformed into a matrix equation. Additionally, error analysis for the method is presented using the residual function. Illustrative examples are given to demonstrate the efficiency of the method. The results obtained in this study are compared with the known results.

  17. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    Science.gov (United States)

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a gurantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current

  18. Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams

    Science.gov (United States)

    Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng

    2006-12-01

    This paper deals with the optimal packet loss protection issue for streaming the fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, completely cancels the dependency within the bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error-resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).

  19. Understanding Where America's Public Discussion Takes Place in Today's Society: Case Studies of Concealed Weapons Carry Reform

    Science.gov (United States)

    2016-06-01

    arguing that concealed carry permit holders are a danger to public safety and that mass shootings are taking place by citizens who are legally armed.2...who worked at an abortion clinic that had recently been bombed and whose life had been threatened was denied a license to carry because he was not...populace. The new law laid out new prohibitions and penalties enforceable statewide. Additionally, the Preemption Act was necessary to set the legal

  20. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    Science.gov (United States)

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
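
    A minimal single-grid version of the five-point scheme, checked against an analytic solution, is sketched below. The grid sizes, dense assembly, and test problem are illustrative assumptions; the three-grid error-control algorithm of the paper is not reproduced.

```python
# Minimal five-point finite-difference solve of -(u_xx + u_yy) = f on the unit
# square with u = 0 on the boundary, checked against an analytic solution.
# This is a single-grid illustration only; the error-control algorithm above
# uses three rectilinear grids of different resolution.
import numpy as np

def solve_poisson(n):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
    f = 2.0 * np.pi**2 * exact                    # -(u_xx + u_yy) for this u

    # Assemble the 2D five-point Laplacian (dense here, for brevity only).
    N = n * n
    A = np.zeros((N, N))
    idx = lambda i, j: i * n + j
    for i in range(n):
        for j in range(n):
            k = idx(i, j)
            A[k, k] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[k, idx(ii, jj)] = -1.0
    u = np.linalg.solve(A / h**2, f.ravel()).reshape(n, n)
    return np.max(np.abs(u - exact))

if __name__ == "__main__":
    for n in (8, 16, 32):
        print(n, solve_poisson(n))   # error should drop roughly 4x per refinement
```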

  1. Analysis of focusing error signals by differential astigmatic method under off-center tracking in the land-groove-type optical disk

    Science.gov (United States)

    Shinoda, Masahisa; Nakatani, Hidehiko

    2015-04-01

    We theoretically calculate the behavior of the focusing error signal in the land-groove-type optical disk when the objective lens traverses out of the radius of the optical disk. The differential astigmatic method is employed instead of the conventional astigmatic method for generating the focusing error signals. The signal behaviors are compared and analyzed in terms of the gain difference of the slope sensitivity of the focusing error signals from the land and the groove. In our calculation, the format of digital versatile disc-random access memory (DVD-RAM) is adopted as the land-groove-type optical disk model, and advantageous conditions for suppressing the gain difference are investigated. The calculation method and results described in this paper will be reflected in next-generation land-groove-type optical disks.

  2. Error-diffusion binarization for joint transform correlators

    Science.gov (United States)

    Inbar, Hanni; Mendlovic, David; Marom, Emanuel

    1993-02-01

    A normalized nonlinearly scaled binary joint transform image correlator (JTC) based on a 1D error-diffusion binarization method has been studied. The behavior of the error-diffusion method is compared with hard-clipping, the most widely used method of binarized JTC approaches, using a single spatial light modulator. Computer simulations indicate that the error-diffusion method is advantageous for the production of a binarized power spectrum interference pattern in JTC configurations, leading to better definition of the correlation location. The error-diffusion binary JTC exhibits autocorrelation characteristics which are superior to those of the hard-clipping binary JTC over the whole nonlinear scaling range of the Fourier-transform interference intensity for all noise levels considered.
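
    The core of a 1D error-diffusion binarizer can be sketched in a few lines. Pushing the whole quantization error to the next sample, and the 0.5 threshold, are illustrative assumptions; the exact diffusion kernel and normalization of the cited study are not reproduced.

```python
# Minimal 1D error-diffusion binarization sketch (Python/NumPy).  The entire
# quantization error is diffused to the next sample; real schemes may spread
# it over several neighbours.  Threshold and weights are illustrative choices.
import numpy as np

def error_diffusion_1d(signal, threshold=0.5):
    s = np.asarray(signal, dtype=float).copy()
    out = np.zeros_like(s)
    for i in range(s.size):
        out[i] = 1.0 if s[i] >= threshold else 0.0
        err = s[i] - out[i]
        if i + 1 < s.size:
            s[i + 1] += err          # diffuse the quantization error forward
    return out

if __name__ == "__main__":
    x = np.linspace(0, 1, 32)            # a smooth ramp, e.g. a fringe profile
    print(error_diffusion_1d(x).astype(int))
```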

  3. Analysis of potential dynamic concealed factors in the difficulty of lower third molar extraction.

    Science.gov (United States)

    Singh, P; Ajmera, D-H; Xiao, S-S; Yang, X-Z; Liu, X; Peng, B

    2016-11-01

    The purpose of this study was to identify potential concealed variables associated with the difficulty of lower third molar (M3) extractions. To address the research purpose, we implemented a prospective study and enrolled a sample of subjects presenting for M3 removal. Predictor variables were categorized into Group-I and Group-II, based on predetermined criteria. The primary outcome variable was the difficulty of extraction, measured as extraction time. Appropriate univariate and multivariate statistics were computed using ordinal logistic regression. The sample comprised 1235 subjects with a mean age of 29.49 ± 8.92 years in Group-I and 26.20 ± 11.55 years in Group-II. The mean operating time per M3 extraction was 21.24 ± 12.80 and 20.24 ± 12.50 minutes for Group-I and Group-II subjects, respectively. Three linear parameters including B-M2 height (distance between imaginary point B on the inferior border of the mandibular body, and M2), lingual cortical thickness, and bone density, and one angular parameter, the Rc-Cs angle (angle between ramus curvature and curve of Spee), in addition to patient's age, profile type, facial type, cant of occlusal plane, and decreased overbite, were found to be statistically associated (p ≤ 0.05) with extraction difficulty under regression models. In conclusion, our study indicates that the difficulty of lower M3 extractions is possibly governed by morphological and biomechanical factors with substantial influence of myofunctional factors. Preoperative evaluation of dynamic concealed factors may not only help in envisaging the difficulty and planning of the surgical approach but might also help in better time management in clinical practice.

  4. Medication error detection in two major teaching hospitals: What are the types of errors?

    Directory of Open Access Journals (Sweden)

    Fatemeh Saghafi

    2014-01-01

    Full Text Available Background: The increasing number of reports on medication errors and the subsequent damages, especially in medical centers, has become a growing concern for patient safety in recent decades. Patient safety, and in particular medication safety, is a major concern and challenge for health care professionals around the world. Our prospective study was designed to detect prescribing, transcribing, dispensing, and administering medication errors in two major university hospitals. Materials and Methods: After choosing 20 similar hospital wards in two large teaching hospitals in the city of Isfahan, Iran, the sequence was randomly selected. Diagrams for drug distribution were drawn with the help of pharmacy directors. The direct observation technique was chosen as the method for detecting the errors. A total of 50 doses were studied in each ward to detect prescribing, transcribing, and administering errors. The dispensing error was studied on 1000 doses dispensed in each hospital pharmacy. Results: A total of 8162 doses of medications were studied during the four stages, of which 8000 provided complete data for analysis. 73% of prescribing orders were incomplete and did not have all six parameters (name, dosage form, dose and measuring unit, administration route, and intervals of administration). We found 15% transcribing errors. On average, one-third of medication administrations were erroneous in both hospitals. Dispensing errors ranged between 1.4% and 2.2%. Conclusion: Although prescribing and administering account for most of the medication errors, improvements are needed in all four stages. Clear guidelines must be written and executed in both hospitals to reduce the incidence of medication errors.

  5. A strategy to the development of a human error analysis method for accident management in nuclear power plants using industrial accident dynamics

    International Nuclear Information System (INIS)

    Lee, Yong Hee; Kim, Jae Whan; Jung, Won Dae; Ha, Jae Ju

    1998-06-01

    This technical report describes the early progress of the establishment of a human error analysis method as a part of a human reliability analysis (HRA) method for the assessment of the human error potential in a given accident management strategy. At first, we review the shortcomings and limitations of the existing HRA methods through an example application. To redress the bias toward the quantitative aspect of the HRA method, we focused on the qualitative aspect, i.e., human error analysis (HEA), when proposing a strategy for the new method. For the establishment of a new HEA method, we discuss the basic theories and approaches to human error in industry, and propose three basic requirements that should be maintained as pre-requisites for an HEA method in practice. Finally, we test IAD (Industrial Accident Dynamics), which has been widely utilized in industrial fields, in order to determine whether IAD can be easily modified and extended to nuclear power plant applications. We apply IAD to the same example case and develop a new taxonomy of the performance shaping factors in accident management and their influence matrix, which could enhance the IAD method as an HEA method. (author). 33 refs., 17 tabs., 20 figs

  6. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  7. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  8. Sharp and blunt force trauma concealment by thermal alteration in homicides: An in-vitro experiment for methodology and protocol development in forensic anthropological analysis of burnt bones.

    Science.gov (United States)

    Macoveciuc, Ioana; Márquez-Grant, Nicholas; Horsfall, Ian; Zioupos, Peter

    2017-06-01

    Burning of human remains is one method used by perpetrators to conceal fatal trauma, and expert opinions regarding the degree of skeletal evidence concealment are often disparate. This experiment aimed to reduce this incongruence in forensic anthropological interpretation of burned human remains and implicitly contribute to the development of research methodologies sufficiently robust to withstand forensic scrutiny in the courtroom. We have tested the influence of thermal alteration on pre-existing sharp and blunt trauma on twenty juvenile sheep radii in the laboratory using an automated impact testing system and an electric furnace. The testing conditions simulated a worst-case scenario where remains with pre-existing sharp or blunt trauma were exposed to burning, with an intentional vehicular fire scenario in mind. All impact parameters as well as the burning conditions were based on those most commonly encountered in forensic cases and maintained constant throughout the experiment. The results have shown that signatures associated with sharp and blunt force trauma were not masked by heat exposure and highlight the potential for future standardization of fracture analysis in burned bone. Our results further emphasize the recommendations given by other experts on handling, processing and recording burned remains at the crime scene and mortuary. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Practical, Reliable Error Bars in Quantum Tomography

    OpenAIRE

    Faist, Philippe; Renner, Renato

    2015-01-01

    Precise characterization of quantum devices is usually achieved with quantum tomography. However, most methods which are currently widely used in experiments, such as maximum likelihood estimation, lack a well-justified error analysis. Promising recent methods based on confidence regions are difficult to apply in practice or yield error bars which are unnecessarily large. Here, we propose a practical yet robust method for obtaining error bars. We do so by introducing a novel representation of...

  10. Error analysis in Fourier methods for option pricing for exponential Lévy processes

    KAUST Repository

    Crocce, Fabian

    2015-01-07

    We derive an error bound for utilising the discrete Fourier transform method for solving Partial Integro-Differential Equations (PIDE) that describe European option prices for exponential Lévy driven asset prices. We give sufficient conditions for the existence of an L∞ bound that separates the dynamical contribution from that arising from the type of option in question. The bound achieved does not rely on information of the asymptotic behaviour of option prices at extreme asset values. In addition, we demonstrate improved numerical performance for select examples of practical relevance when compared to established bounding methods.

  11. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Full Text Available Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005, general requirements for the competence of testing and calibration laboratories, during operation are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, Rule 702 of the Federal Rules of Evidence mandates that judges consider factors such as peer review to ensure the reliability of the expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher possibility of unfair decision-making, they should receive more attention than false-negative errors.

  12. Water flux in animals: analysis of potential errors in the tritiated water method

    International Nuclear Information System (INIS)

    Nagy, K.A.; Costa, D.

    1979-03-01

    Laboratory studies indicate that tritiated water measurements of water flux are accurate to within -7 to +4% in mammals, but errors are larger in some reptiles. However, under conditions that can occur in field studies, errors may be much greater. Influx of environmental water vapor via lungs and skin can cause errors exceeding ±50% in some circumstances. If water flux rates in an animal vary through time, errors approach ±15% in extreme situations, but are near ±3% in more typical circumstances. Errors due to fractional evaporation of tritiated water may approach -9%. This error probably varies between species. Use of an inappropriate equation for calculating water flux from isotope data can cause errors exceeding ±100%. The following sources of error are either negligible or avoidable: use of isotope dilution space as a measure of body water volume, loss of nonaqueous tritium bound to excreta, binding of tritium with nonaqueous substances in the body, radiation toxicity effects, and small analytical errors in isotope measurements. Water flux rates measured with tritiated water should be within ±10% of actual flux rates in most situations

  13. Water flux in animals: analysis of potential errors in the tritiated water method

    Energy Technology Data Exchange (ETDEWEB)

    Nagy, K.A.; Costa, D.

    1979-03-01

    Laboratory studies indicate that tritiated water measurements of water flux are accurate to within -7 to +4% in mammals, but errors are larger in some reptiles. However, under conditions that can occur in field studies, errors may be much greater. Influx of environmental water vapor via lungs and skin can cause errors exceeding ±50% in some circumstances. If water flux rates in an animal vary through time, errors approach ±15% in extreme situations, but are near ±3% in more typical circumstances. Errors due to fractional evaporation of tritiated water may approach -9%. This error probably varies between species. Use of an inappropriate equation for calculating water flux from isotope data can cause errors exceeding ±100%. The following sources of error are either negligible or avoidable: use of isotope dilution space as a measure of body water volume, loss of nonaqueous tritium bound to excreta, binding of tritium with nonaqueous substances in the body, radiation toxicity effects, and small analytical errors in isotope measurements. Water flux rates measured with tritiated water should be within ±10% of actual flux rates in most situations.

  14. Study protocol: the empirical investigation of methods to correct for measurement error in biobanks with dietary assessment

    Directory of Open Access Journals (Sweden)

    Masson Lindsey F

    2011-10-01

    Full Text Available Abstract Background The Public Population Project in Genomics (P3G) is an organisation that aims to promote collaboration between researchers in the field of population-based genomics. The main objectives of P3G are to encourage collaboration between researchers and biobankers, optimize study design, promote the harmonization of information use in biobanks, and facilitate transfer of knowledge between interested parties. The importance of calibration and harmonisation of methods for environmental exposure assessment to allow pooling of data across studies in the evaluation of gene-environment interactions has been recognised by P3G, which has set up a methodological group on calibration with the aim of: (1) reviewing the published methodological literature on measurement error correction methods with assumptions and methods of implementation; (2) reviewing the evidence available from published nutritional epidemiological studies that have used a calibration approach; (3) disseminating information in the form of a comparison chart on approaches to perform calibration studies and how to obtain correction factors in order to support research groups collaborating within the P3G network that are unfamiliar with the methods employed; (4) with application to the field of nutritional epidemiology, including gene-diet interactions, ultimately developing an inventory of the typical correction factors for various nutrients. Methods/Design Systematic review of (a) the methodological literature on methods to correct for measurement error in epidemiological studies; and (b) studies that have been designed primarily to investigate the association between diet and disease and have also corrected for measurement error in dietary intake. Discussion The conduct of a systematic review of the methodological literature on calibration will facilitate the evaluation of methods to correct for measurement error and the design of calibration studies for the prospective pooling of
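
    One of the correction methods such a review would cover is regression calibration for classical additive measurement error. The sketch below uses replicate measurements to estimate the attenuation (reliability) factor and rescale a naive regression slope; the data, error structure, and variable names are illustrative assumptions rather than the P3G protocol itself.

```python
# Sketch of regression calibration for classical measurement error (Python).
# Assumes X = T + U with independent error U, and replicate measurements of X
# so the error variance can be estimated.  Names and data are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n = 2000
true_intake = rng.normal(50, 10, n)                 # unobserved "true" exposure
outcome = 0.05 * true_intake + rng.normal(0, 1, n)  # true slope = 0.05
x1 = true_intake + rng.normal(0, 8, n)              # two error-prone replicates
x2 = true_intake + rng.normal(0, 8, n)

naive_beta = np.polyfit(x1, outcome, 1)[0]

# Attenuation (reliability) factor lambda = Var(T) / Var(X), with the error
# variance estimated from the difference between replicates.
error_var = 0.5 * np.var(x1 - x2, ddof=1)           # estimates Var(U)
lam = (np.var(x1, ddof=1) - error_var) / np.var(x1, ddof=1)
corrected_beta = naive_beta / lam

print(f"naive {naive_beta:.4f}, corrected {corrected_beta:.4f} (true 0.05)")
```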

  15. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Metering error quantification under voltage and current waveform distortion

    Science.gov (United States)

    Wang, Tao; Wang, Jia; Xie, Zhi; Zhang, Ran

    2017-09-01

    With the integration of more and more renewable energy sources and distorting loads into the power grid, voltage and current waveform distortion results in metering errors in smart meters. Because of the negative effects on metering accuracy and fairness, the combined energy metering error is an important subject of study. In this paper, after comparing the theoretical metering value with the actually recorded value under different meter modes for linear and nonlinear loads, a quantification method for the metering mode error under waveform distortion is proposed. Based on the metering and time-division multiplier principles, a quantification method for the metering accuracy error is also proposed. By analyzing the mode error and the accuracy error, a comprehensive error analysis method suitable for new energy sources and nonlinear loads is presented. The proposed method has been verified by simulation.
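
    The gap between time-division (instantaneous) power registration and fundamental-only registration under waveform distortion can be illustrated numerically. The waveforms, harmonic content, and sampling rate below are arbitrary assumptions, not the metering modes of any particular smart meter.

```python
# Illustration of metering mode error under waveform distortion (Python/NumPy).
# "True" power is the mean of instantaneous v(t)*i(t) (time-division principle);
# "fundamental-only" metering registers only the 50 Hz components.  The harmonic
# content chosen here is an arbitrary example of a nonlinear load.
import numpy as np

f0, fs, T = 50.0, 20000.0, 0.2
t = np.arange(0, T, 1.0 / fs)

v = 230 * np.sqrt(2) * np.sin(2 * np.pi * f0 * t) \
    + 10 * np.sqrt(2) * np.sin(2 * np.pi * 3 * f0 * t)           # slightly distorted voltage
i = 10 * np.sqrt(2) * np.sin(2 * np.pi * f0 * t - 0.2) \
    + 3 * np.sqrt(2) * np.sin(2 * np.pi * 3 * f0 * t - 0.5)       # nonlinear-load current

p_true = np.mean(v * i)                                           # total active power

def fundamental(x):
    """Project a waveform onto the 50 Hz cosine/sine pair."""
    c = 2 * np.mean(x * np.cos(2 * np.pi * f0 * t))
    s = 2 * np.mean(x * np.sin(2 * np.pi * f0 * t))
    return c, s

vc, vs = fundamental(v)
ic, is_ = fundamental(i)
p_fund = 0.5 * (vc * ic + vs * is_)                               # fundamental-only power

print(f"true P = {p_true:.1f} W, fundamental-only P = {p_fund:.1f} W, "
      f"mode error = {100 * (p_fund - p_true) / p_true:.2f} %")
```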

  17. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J Title: Prioritising interventions against medication errors – the importance of a definition Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors, including an index of error types for each stage in the medication process, was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary...

  18. A chord error conforming tool path B-spline fitting method for NC machining based on energy minimization and LSPIA

    Directory of Open Access Journals (Sweden)

    Shanshan He

    2015-10-01

    Full Text Available Piecewise linear (G01) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortages such as numerical instability, lack of chord error constraint, and lack of assurance of a usable result. Progressive and Iterative Approximation for Least Squares (LSPIA) is an efficient method for data fitting that solves the numerical instability problem. However, it does not consider chord errors and needs more work to ensure ironclad results for commercial applications. In this paper, we use the LSPIA method incorporating an energy term (ELSPIA) to avoid the numerical instability, and lower chord errors by using a stretching energy term. We implement several algorithm improvements, including (1) an improved technique for initial control point determination over the Dominant Point Method, (2) an algorithm that updates foot point parameters as needed, (3) analysis of the degrees of freedom of control points to insert new control points only when needed, (4) chord error refinement using a similar ELSPIA method with the above enhancements. The proposed approach can generate a shape-preserving B-spline curve. Experiments with data analysis and machining tests are presented for verification of quality and efficiency. Comparisons with other known solutions are included to evaluate the worthiness of the proposed solution.
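
    The core LSPIA update can be sketched as the iteration P ← P + μ·Bᵀ(Q − B·P) on the control points, as below. The knot layout, degree, step size, and initial guess are illustrative choices, and the energy term, adaptive control-point insertion, and chord-error refinement of the ELSPIA approach described above are omitted.

```python
# LSPIA-style iterative B-spline fitting of a polyline (Python/SciPy).  Only the
# basic control-point update is shown; ELSPIA's energy term and chord-error
# refinement are not implemented here.  Parameters are illustrative choices.
import numpy as np
from scipy.interpolate import BSpline

def basis_matrix(t_eval, knots, k):
    """B[i, j] = N_j(t_eval[i]) for the clamped knot vector `knots`."""
    n_ctrl = len(knots) - k - 1
    t_eval = np.clip(t_eval, knots[k], knots[-k - 1] - 1e-12)  # avoid right-edge quirk
    B = np.empty((len(t_eval), n_ctrl))
    for j in range(n_ctrl):
        c = np.zeros(n_ctrl)
        c[j] = 1.0
        B[:, j] = BSpline(knots, c, k)(t_eval)
    return B

def lspia_fit(Q, n_ctrl=8, k=3, iters=200):
    # Chord-length parameterization of the G01 points.
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(Q, axis=0), axis=1))]
    t = d / d[-1]
    knots = np.r_[np.zeros(k), np.linspace(0, 1, n_ctrl - k + 1), np.ones(k)]
    B = basis_matrix(t, knots, k)
    mu = 1.0 / np.linalg.eigvalsh(B.T @ B)[-1]      # conservative step, ensures convergence
    P = Q[np.linspace(0, len(Q) - 1, n_ctrl).astype(int)].astype(float)  # initial guess
    for _ in range(iters):
        P += mu * B.T @ (Q - B @ P)                 # LSPIA-style control point update
    return knots, P, np.max(np.linalg.norm(Q - B @ P, axis=1))

if __name__ == "__main__":
    s = np.linspace(0, np.pi, 50)
    Q = np.c_[s, np.sin(s)]                         # a toy G01 tool path
    knots, P, max_dev = lspia_fit(Q)
    print("max deviation after fitting:", max_dev)
```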

  19. Heuristic errors in clinical reasoning.

    Science.gov (United States)

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  20. Error evaluation of inelastic response spectrum method for earthquake design

    International Nuclear Information System (INIS)

    Paz, M.; Wong, J.

    1981-01-01

    Two-story, four-story and ten-story shear building-type frames subjected to earthquake excitation were analyzed at several levels of their yield resistance. These frames were subjected at their base to the motion recorded for the north-south component of the 1940 El Centro earthquake, and to an artificial earthquake which would produce the response spectral charts recommended for design. The frames were first subjected to 25% or 50% of the intensity level of these earthquakes. The resulting maximum relative displacement for each story of the frames was assumed to be the yield resistance for the subsequent analyses at 100% of the excitation intensity. The frames analyzed were uniform along their height, with the stiffness adjusted so as to result in a fundamental period of 0.20 seconds for the two-story frame, 0.40 seconds for the four-story frame and 1.0 second for the ten-story frame. Results of the study provided the following conclusions: (1) The percentage error in floor displacement for linear behavior was less than 10%; (2) The percentage error in floor displacement for inelastic behavior (elastoplastic) could be as high as 100%; (3) In most of the cases analyzed, the error increased with damping in the system; (4) As a general rule, the error increased as the modal yield resistance decreased; (5) The error was lower for the structures subjected to the 1940 El Centro earthquake than for the same structures subjected to an artificial earthquake which was generated from the response spectra for design. (orig./HP)

  1. A low error reconstruction method for confocal holography to determine 3-dimensional properties

    Energy Technology Data Exchange (ETDEWEB)

    Jacquemin, P.B., E-mail: pbjacque@nps.edu [Mechanical Engineering, University of Victoria, EOW 548,800 Finnerty Road, Victoria, BC (Canada); Herring, R.A. [Mechanical Engineering, University of Victoria, EOW 548,800 Finnerty Road, Victoria, BC (Canada)

    2012-06-15

    A confocal holography microscope developed at the University of Victoria uniquely combines holography with a scanning confocal microscope to non-intrusively measure fluid temperatures in three-dimensions (Herring, 1997), (Abe and Iwasaki, 1999), (Jacquemin et al., 2005). The Confocal Scanning Laser Holography (CSLH) microscope was built and tested to verify the concept of 3D temperature reconstruction from scanned holograms. The CSLH microscope used a focused laser to non-intrusively probe a heated fluid specimen. The focused beam probed the specimen instead of a collimated beam in order to obtain different phase-shift data for each scan position. A collimated beam produced the same information for scanning along the optical propagation z-axis. No rotational scanning mechanisms were used in the CSLH microscope which restricted the scan angle to the cone angle of the probe beam. Limited viewing angle scanning from a single view point window produced a challenge for tomographic 3D reconstruction. The reconstruction matrices were either singular or ill-conditioned making reconstruction with significant error or impossible. Establishing boundary conditions with a particular scanning geometry resulted in a method of reconstruction with low error referred to as 'wily'. The wily reconstruction method can be applied to microscopy situations requiring 3D imaging where there is a single viewpoint window, a probe beam with high numerical aperture, and specified boundary conditions for the specimen. The issues and progress of the wily algorithm for the CSLH microscope are reported herein. -- Highlights: ► Evaluation of an optical confocal holography device to measure 3D temperature of a heated fluid. ► Processing of multiple holograms containing the cumulative refractive index through the fluid. ► Reconstruction issues due to restricting angular scanning to the numerical aperture of the

  2. A low error reconstruction method for confocal holography to determine 3-dimensional properties

    International Nuclear Information System (INIS)

    Jacquemin, P.B.; Herring, R.A.

    2012-01-01

    A confocal holography microscope developed at the University of Victoria uniquely combines holography with a scanning confocal microscope to non-intrusively measure fluid temperatures in three-dimensions (Herring, 1997), (Abe and Iwasaki, 1999), (Jacquemin et al., 2005). The Confocal Scanning Laser Holography (CSLH) microscope was built and tested to verify the concept of 3D temperature reconstruction from scanned holograms. The CSLH microscope used a focused laser to non-intrusively probe a heated fluid specimen. The focused beam probed the specimen instead of a collimated beam in order to obtain different phase-shift data for each scan position. A collimated beam produced the same information for scanning along the optical propagation z-axis. No rotational scanning mechanisms were used in the CSLH microscope which restricted the scan angle to the cone angle of the probe beam. Limited viewing angle scanning from a single view point window produced a challenge for tomographic 3D reconstruction. The reconstruction matrices were either singular or ill-conditioned making reconstruction with significant error or impossible. Establishing boundary conditions with a particular scanning geometry resulted in a method of reconstruction with low error referred to as “wily”. The wily reconstruction method can be applied to microscopy situations requiring 3D imaging where there is a single viewpoint window, a probe beam with high numerical aperture, and specified boundary conditions for the specimen. The issues and progress of the wily algorithm for the CSLH microscope are reported herein. -- Highlights: ► Evaluation of an optical confocal holography device to measure 3D temperature of a heated fluid. ► Processing of multiple holograms containing the cumulative refractive index through the fluid. ► Reconstruction issues due to restricting angular scanning to the numerical aperture of the beam. ► Minimizing tomographic reconstruction error by defining boundary

  3. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
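
    A scalar example of the single-step prediction-error method is sketched below: a Kalman filter produces the one-step prediction errors (innovations), which are scored with the Gaussian maximum-likelihood criterion and minimized over a candidate parameter grid. The model, noise variances, and grid are illustrative assumptions.

```python
# Sketch of a single-step prediction-error criterion evaluated with a Kalman
# filter for a scalar linear stochastic state-space model (Python/NumPy).
# Model, noise levels and the candidate parameter grid are illustrative.
import numpy as np

def neg_log_likelihood(a, y, c=1.0, q=0.1, r=0.5):
    """Gaussian ML prediction-error criterion for x_{k+1} = a x_k + w, y_k = c x_k + v."""
    x_pred, p_pred = 0.0, 1.0
    nll = 0.0
    for yk in y:
        # innovation (one-step prediction error) and its variance
        e = yk - c * x_pred
        s = c * p_pred * c + r
        nll += 0.5 * (np.log(2 * np.pi * s) + e * e / s)
        # measurement update
        k_gain = p_pred * c / s
        x_filt = x_pred + k_gain * e
        p_filt = (1.0 - k_gain * c) * p_pred
        # time update (Kalman predictor)
        x_pred = a * x_filt
        p_pred = a * p_filt * a + q
    return nll

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a_true, n = 0.8, 500
    x, y = 0.0, []
    for _ in range(n):
        x = a_true * x + rng.normal(0, np.sqrt(0.1))
        y.append(x + rng.normal(0, np.sqrt(0.5)))
    grid = np.linspace(0.5, 0.95, 10)
    best = min(grid, key=lambda a: neg_log_likelihood(a, y))
    print("ML prediction-error estimate of a:", best)
```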

  4. Mental Health Stigma and Self-Concealment as Predictors of Help-Seeking Attitudes among Latina/o College Students in the United States

    Science.gov (United States)

    Mendoza, Hadrian; Masuda, Akihiko; Swartout, Kevin M.

    2015-01-01

    The study examined whether mental health stigma and self-concealment are uniquely related to various dimensions of attitudes toward seeking professional psychological services (i.e., help-seeking attitudes) in Latina/o college students. Data from 129 Latina/o undergraduates (76% female) were used in the analysis. Results revealed that mental…

  5. A method for optical ground station reduce alignment error in satellite-ground quantum experiments

    Science.gov (United States)

    He, Dong; Wang, Qiang; Zhou, Jian-Wei; Song, Zhi-Jun; Zhong, Dai-Jun; Jiang, Yu; Liu, Wan-Sheng; Huang, Yong-Mei

    2018-03-01

    A satellite dedicated to quantum science experiments has been developed and successfully launched from Jiuquan, China, on August 16, 2016. Two new optical ground stations (OGSs) were built to cooperate with the satellite to complete satellite-ground quantum experiments. The OGS corrected its pointing direction using the satellite trajectory error fed to the coarse tracking system and the uplink beacon sight; therefore, the alignment accuracy between the fine tracking CCD and the uplink beacon optical axis had to ensure that the beacon covered the quantum satellite at all times while it passed over the OGSs. Unfortunately, when we tested the specifications of the OGSs, because the coarse tracking optical system consisted of commercial telescopes, the change of the target position in the coarse CCD was up to 600 μrad over the range of elevation angles. In this paper, a method to reduce the alignment error between the beacon beam and the fine tracking CCD is proposed. Firstly, the OGS fitted the curve of target positions in the coarse CCD as a function of elevation angle. Secondly, the OGS fitted the curve of hexapod secondary mirror positions as a function of elevation angle. Thirdly, when tracking the satellite, the fine tracking error was unloaded onto the real-time zero-point position of the coarse CCD computed from the first calibration curve; simultaneously, the positions of the hexapod secondary mirror were adjusted according to the second calibration curve. Finally, the experimental results are presented. Results show that the alignment error is less than 50 μrad.
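
    The calibration-curve step can be illustrated with a simple polynomial fit of the measured target drift against elevation angle, which is then evaluated in real time as a zero-point correction. The polynomial order and the synthetic data below are assumptions for illustration only.

```python
# Sketch of the curve-fitting step: model the target position drift in the
# coarse CCD as a polynomial of elevation angle, then predict the zero-point
# correction while tracking.  Polynomial order and data are illustrative.
import numpy as np

elevation = np.linspace(10, 80, 15)                          # deg, calibration passes
offset = 600e-6 * np.sin(np.radians(elevation)) \
         + np.random.default_rng(3).normal(0, 5e-6, 15)      # rad, measured drift

coeffs = np.polyfit(elevation, offset, deg=3)                # calibration curve

def zero_point(el_deg):
    """Real-time zero-point correction (rad) applied to the coarse CCD."""
    return np.polyval(coeffs, el_deg)

print(f"predicted correction at 45 deg: {zero_point(45.0) * 1e6:.1f} urad")
```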

  6. Advancing the research agenda for diagnostic error reduction.

    Science.gov (United States)

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  7. Solution of Large Systems of Linear Equations in the Presence of Errors. A Constructive Criticism of the Least Squares Method

    Energy Technology Data Exchange (ETDEWEB)

    Nygaard, K

    1968-09-15

    From the point of view that no mathematical method can ever minimise or alter errors already made in a physical measurement, the classical least squares method has severe limitations which make it unsuitable for the statistical analysis of many physical measurements. Based on the assumptions that the experimental errors are characteristic for each single experiment and that the errors must be properly estimated rather than minimised, a new method for solving large systems of linear equations is developed. The new method exposes the entire range of possible solutions before the decision is taken which of the possible solutions should be chosen as a representative one. The choice is based on physical considerations which (in two examples, curve fitting and unfolding of a spectrum) are presented in such a form that a computer is able to make the decision. A description of the computation is given. The method described is a tool for removing uncertainties due to conventional mathematical formulations (zero determinant, linear dependence) which are not inherent in the physical problem as such. The method is therefore especially well fitted for unfolding of spectra.

  8. Solution of Large Systems of Linear Equations in the Presence of Errors. A Constructive Criticism of the Least Squares Method

    International Nuclear Information System (INIS)

    Nygaard, K.

    1968-09-01

    From the point of view that no mathematical method can ever minimise or alter errors already made in a physical measurement, the classical least squares method has severe limitations which make it unsuitable for the statistical analysis of many physical measurements. Based on the assumptions that the experimental errors are characteristic for each single experiment and that the errors must be properly estimated rather than minimised, a new method for solving large systems of linear equations is developed. The new method exposes the entire range of possible solutions before the decision is taken which of the possible solutions should be chosen as a representative one. The choice is based on physical considerations which (in two examples, curve fitting and unfolding of a spectrum) are presented in such a form that a computer is able to make the decision. A description of the computation is given. The method described is a tool for removing uncertainties due to conventional mathematical formulations (zero determinant, linear dependence) which are not inherent in the physical problem as such. The method is therefore especially well fitted for unfolding of spectra.
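
    The idea of exposing the entire range of possible solutions before choosing a representative one can be illustrated with the singular value decomposition of a rank-deficient system: a particular solution plus the null-space directions describes every solution consistent with the data. The matrix and right-hand side below are arbitrary toy values, not a reconstruction of the report's procedure.

```python
# Illustration of exposing the full solution family of a rank-deficient linear
# system A x = b via the SVD, before any single "representative" solution is
# chosen.  The matrix and right-hand side are arbitrary toy values.
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],      # linearly dependent row -> rank deficiency
              [1.0, 0.0, 1.0]])
b = np.array([6.0, 12.0, 2.0])

U, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s[0]
rank = int(np.sum(s > tol))

# Minimum-norm particular solution plus the null-space directions: every
# x = x_p + Vt[rank:].T @ t  (for any t) satisfies A x = b equally well.
x_p = Vt[:rank].T @ ((U[:, :rank].T @ b) / s[:rank])
null_space = Vt[rank:]

print("rank:", rank)
print("particular (minimum-norm) solution:", x_p)
print("free directions spanning the remaining solutions:\n", null_space)
print("residual of particular solution:", np.linalg.norm(A @ x_p - b))
```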

  9. Errors in abdominal computed tomography

    International Nuclear Information System (INIS)

    Stephens, S.; Marting, I.; Dixon, A.K.

    1989-01-01

    Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex; in 7 patients an abnormal structure was noted but interpreted as normal, whereas in four a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion had been ascribed (24 patients), and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small and large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab

  10. Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes

    Science.gov (United States)

    Zavorsky, Gerald S.

    2010-01-01

    Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
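
    A minimal computation of the within-subject standard deviation and the repeatability (2.77 times the within-subject standard deviation) mentioned above is sketched below, assuming an equal number of repeated measurements per subject; the data are made up.

```python
# Within-subject standard deviation and repeatability (2.77 * Sw), as described
# above, from repeated measurements on each subject.  The data are made up and
# assume the same number of repeats per subject.
import numpy as np

# rows = subjects, columns = repeated measurements of the same quantity
data = np.array([[12.1, 12.4, 11.9],
                 [ 9.8, 10.1, 10.0],
                 [14.2, 13.8, 14.0],
                 [11.0, 11.3, 11.1]])

within_subject_var = np.mean(np.var(data, axis=1, ddof=1))  # pooled over subjects
sw = np.sqrt(within_subject_var)
repeatability = 2.77 * sw    # ~95% of repeat differences expected below this

print(f"Sw = {sw:.3f}, repeatability = {repeatability:.3f}")
```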

  11. On nonstationarity-related errors in modal combination rules of the response spectrum method

    Science.gov (United States)

    Pathak, Shashank; Gupta, Vinay K.

    2017-10-01

    Characterization of seismic hazard via (elastic) design spectra and the estimation of linear peak response of a given structure from this characterization continue to form the basis of earthquake-resistant design philosophy in various codes of practice all over the world. Since the direct use of design spectrum ordinates is a preferred option for the practicing engineers, modal combination rules play central role in the peak response estimation. Most of the available modal combination rules are however based on the assumption that nonstationarity affects the structural response alike at the modal and overall response levels. This study considers those situations where this assumption may cause significant errors in the peak response estimation, and preliminary models are proposed for the estimation of the extents to which nonstationarity affects the modal and total system responses, when the ground acceleration process is assumed to be a stationary process. It is shown through numerical examples in the context of complete-quadratic-combination (CQC) method that the nonstationarity-related errors in the estimation of peak base shear may be significant, when strong-motion duration of the excitation is too small compared to the period of the system and/or the response is distributed comparably in several modes. It is also shown that these errors are reduced marginally with the use of the proposed nonstationarity factor models.
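
    For reference, the sketch below implements the plain CQC combination with the widely used Der Kiureghian modal cross-correlation coefficient. The modal peaks, frequencies, and damping ratios are made-up values, and the nonstationarity factors proposed in the paper are not applied.

```python
# Sketch of the complete quadratic combination (CQC) rule using the commonly
# cited Der Kiureghian modal cross-correlation coefficient (Python/NumPy).
# Modal peak responses, frequencies and damping ratios are made-up values;
# the nonstationarity correction discussed in the paper is not applied here.
import numpy as np

def cqc_peak(r, omega, zeta):
    """CQC estimate of the peak response from modal peak responses r_i."""
    n = len(r)
    total = 0.0
    for i in range(n):
        for j in range(n):
            b = omega[j] / omega[i]
            num = 8.0 * np.sqrt(zeta[i] * zeta[j]) * (zeta[i] + b * zeta[j]) * b**1.5
            den = ((1 - b**2) ** 2
                   + 4.0 * zeta[i] * zeta[j] * b * (1 + b**2)
                   + 4.0 * (zeta[i] ** 2 + zeta[j] ** 2) * b**2)
            total += (num / den) * r[i] * r[j]
    return np.sqrt(total)

if __name__ == "__main__":
    r = np.array([1.00, 0.45, 0.20])               # modal peak base shears (normalized)
    omega = 2 * np.pi * np.array([1.2, 3.4, 6.1])  # rad/s
    zeta = np.array([0.05, 0.05, 0.05])
    print("SRSS :", np.sqrt(np.sum(r**2)))
    print("CQC  :", cqc_peak(r, omega, zeta))
```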

  12. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

    Full Text Available A scheme is presented for the calculation of errors in dry matter values which occur during the approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Further formulae are shown, which describe the absolute errors of growth characteristics: Growth rate (GR), Relative growth rate (RGR), Unit leaf rate (ULR) and Leaf area ratio (LAR). Calculation examples concerning the growth course of oats and maize plants are given. A critical analysis of the estimation of the obtained results has been carried out. The purposefulness of the joint application of statistical methods and error calculus in plant growth analysis has been ascertained.

  13. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

    Full Text Available BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: By numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R(2), and introduced the sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity is decreased for some, but not all rare haplotypes. The overall error rate was generally increasing with increasing number of loci, increasing minor allele frequency of SNPs, decreasing correlation between the alleles and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides the information, if a specific risk haplotype can be expected to be reconstructed with rather no or high misclassification and thus on the magnitude of expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
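
    Per-haplotype sensitivity and specificity can be obtained from a simple cross-tabulation of true versus reconstructed assignments, treating each haplotype in turn as the positive class. The labels below are hypothetical.

```python
# Per-haplotype sensitivity and specificity from true vs. reconstructed labels,
# treating each haplotype as the "positive" class in turn.  Labels are made up.

true_hap  = ["AB", "AB", "Ab", "aB", "AB", "ab", "Ab", "aB"]
recon_hap = ["AB", "Ab", "Ab", "aB", "AB", "ab", "AB", "aB"]

for hap in sorted(set(true_hap)):
    tp = sum(t == hap and r == hap for t, r in zip(true_hap, recon_hap))
    fn = sum(t == hap and r != hap for t, r in zip(true_hap, recon_hap))
    fp = sum(t != hap and r == hap for t, r in zip(true_hap, recon_hap))
    tn = len(true_hap) - tp - fn - fp
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    print(f"{hap}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```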

  14. Influence of allocation concealment and intention-to-treat analysis on treatment effects of physical therapy interventions in low back pain randomised controlled trials: a protocol of a meta-epidemiological study.

    Science.gov (United States)

    Almeida, Matheus Oliveira; Saragiotto, Bruno T; Maher, Chris G; Pena Costa, Leonardo Oliveira

    2017-09-27

    Meta-epidemiological studies examining the influence of methodological characteristics, such as allocation concealment and intention-to-treat analysis have been performed in a large number of healthcare areas. However, there are no studies investigating these characteristics in physical therapy interventions for patients with low back pain. The aim of this study is to investigate the influence of allocation concealment and the use of intention-to-treat analysis on estimates of treatment effects of physical therapy interventions in low back pain clinical trials. Searches on PubMed, Embase, Cochrane Database of Systematic Reviews, Physiotherapy Evidence Database (PEDro) and CINAHL databases will be performed. We will search for systematic reviews that include a meta-analysis of randomised controlled trials that compared physical therapy interventions in patients with low back pain with placebo or no intervention, and have pain intensity or disability as the primary outcomes. Information about selection (allocation concealment) and attrition bias (intention-to-treat analysis) will be extracted from the PEDro database for each included trial. Information about bibliographic data, study characteristics, participants' characteristics and study results will be extracted. A random-effects model will be used to provide separate estimates of treatment effects for trials with and without allocation concealment and with and without intention-to-treat analysis (eg, four estimates). A meta-regression will be performed to measure the association between methodological features and treatment effects from each trial. The dependent variable will be the treatment effect (the mean between-group differences) for the primary outcomes (pain or disability), while the independent variables will be the methodological features of interest (allocation concealment and intention-to-treat analysis). Other covariates will include sample size and sequence generation. No ethical approval will be

  15. Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Young, Kevin C

    2013-01-01

    While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC. (paper)

  16. Taylor-series and Monte-Carlo-method uncertainty estimation of the width of a probability distribution based on varying bias and random error

    International Nuclear Information System (INIS)

    Wilson, Brandon M; Smith, Barton L

    2013-01-01

    Uncertainties are typically assumed to be constant or a linear function of the measured value; however, this is generally not true. Particle image velocimetry (PIV) is one example of a measurement technique that has highly nonlinear, time varying local uncertainties. Traditional uncertainty methods are not adequate for the estimation of the uncertainty of measurement statistics (mean and variance) in the presence of nonlinear, time varying errors. Propagation of instantaneous uncertainty estimates into measured statistics is performed, allowing accurate uncertainty quantification of the time-mean and higher-order statistics of measurements such as PIV. It is shown that random errors will always elevate the measured variance, and thus turbulence statistics such as the time-averaged product of velocity fluctuations, u'u'. Within this paper, nonlinear, time varying errors are propagated from instantaneous measurements into the measured mean and variance using the Taylor-series method. With these results and knowledge of the systematic and random uncertainty of each measurement, the uncertainty of the time-mean, the variance and covariance can be found. Applicability of the Taylor-series uncertainty equations to time varying systematic and random errors and asymmetric error distributions are demonstrated with Monte-Carlo simulations. The Taylor-series uncertainty estimates are always accurate for uncertainties on the mean quantity. The Taylor-series variance uncertainty is similar to the Monte-Carlo results for cases in which asymmetric random errors exist or the magnitude of the instantaneous variations in the random and systematic errors is near the ‘true’ variance. However, the Taylor-series method overpredicts the uncertainty in the variance as the instantaneous variations of systematic errors are large or are on the same order of magnitude as the ‘true’ variance. (paper)
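
    The statement that random errors always elevate the measured variance can be checked numerically by comparing a Monte Carlo experiment against the simple propagated expectation var(measured) ≈ var(true) + mean(σ²) for additive, independent, time-varying random errors. The signal and error models below are illustrative assumptions, not a PIV uncertainty model.

```python
# Numerical check of the statement above that random errors elevate the
# measured variance: compare a Monte Carlo experiment with the simple
# expectation  var(measured) ~ var(true) + mean(sigma_random^2)  for additive,
# independent, time-varying random errors.  Signal and error models are
# illustrative assumptions, not a PIV uncertainty model.
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
u_true = 5.0 + 1.5 * np.sin(np.linspace(0, 40 * np.pi, n))          # "true" velocity signal
sigma = 0.2 + 0.3 * np.abs(np.sin(np.linspace(0, 7 * np.pi, n)))    # time-varying random error
u_meas = u_true + rng.normal(0.0, sigma)

var_true = np.var(u_true)
var_meas_mc = np.var(u_meas)
var_meas_propagated = var_true + np.mean(sigma**2)

print(f"true variance          {var_true:.4f}")
print(f"measured (Monte Carlo) {var_meas_mc:.4f}")
print(f"predicted (propagated) {var_meas_propagated:.4f}")
```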

  17. ERRORS MEASUREMENT OF INTERPOLATION METHODS FOR GEOID MODELS: STUDY CASE IN THE BRAZILIAN REGION

    Directory of Open Access Journals (Sweden)

    Daniel Arana

    Full Text Available Abstract: The geoid is an equipotential surface regarded as the altimetric reference for geodetic surveys and therefore has several practical applications for engineers. In recent decades the geodetic community has concentrated efforts on the development of highly accurate geoid models through modern techniques. These models are supplied as regular grids from which users need to interpolate. Yet, little information is available regarding the most appropriate interpolation method for extracting information from the regular grid of a geoid model. The use of an interpolator that does not represent the geoid surface appropriately can impair the quality of geoid undulations and consequently of the height transformation. This work aims to quantify the magnitude of the error that comes from a regular mesh of geoid models. The analysis consisted of comparing the interpolation of the MAPGEO2015 program with three interpolation methods: bilinear, cubic spline and radial basis function neural networks. As a result of the experiments, it was concluded that 2.5 cm of the 18 cm error of the MAPGEO2015 validation is caused by the use of interpolations in the 5'x5' grid.
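
    As an illustration of the simplest interpolator compared above, the sketch below performs bilinear interpolation on a regular grid of geoid undulations; the grid values, coordinates and 5'x5' spacing are hypothetical stand-ins, not MAPGEO2015 data.

        import numpy as np

        def bilinear(lat, lon, lat0, lon0, step, grid):
            """Bilinear interpolation on a regular grid; grid[0, 0] sits at (lat0, lon0)."""
            i = (lat - lat0) / step
            j = (lon - lon0) / step
            i0, j0 = int(np.floor(i)), int(np.floor(j))
            di, dj = i - i0, j - j0
            return ((1 - di) * (1 - dj) * grid[i0, j0]
                    + (1 - di) * dj * grid[i0, j0 + 1]
                    + di * (1 - dj) * grid[i0 + 1, j0]
                    + di * dj * grid[i0 + 1, j0 + 1])

        # Hypothetical 2x2 neighbourhood of geoid undulations (metres), 5' grid spacing
        grid = np.array([[-3.12, -3.05],
                         [-3.20, -3.11]])
        print(bilinear(-21.98, -46.97, -22.0, -47.0, 5.0 / 60.0, grid))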

  18. A comparative evaluation of emerging methods for errors of commission based on applications to the Davis-Besse (1985) event

    International Nuclear Information System (INIS)

    Reer, B.; Dang, V.N.; Hirschberg, S.; Straeter, O.

    1999-12-01

    In considering the human role in accidents, the classical PSA methodology applied today focuses primarily on the omissions of actions required of the operators at specific points in the scenario models. A practical, proven methodology is not available for systematically identifying and analyzing the scenario contexts in which the operators might perform inappropriate actions that aggravate the scenario. As a result, typical PSAs do not comprehensively treat these actions, referred to as errors of commission (EOCs). This report presents the results of a joint project of the Paul Scherrer Institut (PSI, Villigen, Switzerland) and the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, Garching, Germany) that examined some methods recently proposed for addressing the EOC issue. Five methods were investigated: 1) ATHEANA, 2) the Borssele screening methodology, 3) CREAM, 4) CAHR, and 5) CODA. In addition to a comparison of their scope, basic assumptions, and analytical approach, the methods were each applied in the analysis of PWR Loss of Feedwater scenarios based on the 1985 Davis-Besse event, in which the operator response included actions that can be categorized as EOCs. The aim was to compare how the methods consider a concrete scenario in which EOCs have in fact been observed. These case applications show how the methods are used in practical terms and constitute a common basis for comparing the methods and the insights that they provide. The identification of the potentially significant EOCs to be analysed in the PSA is currently the central problem for their treatment. The identification or search scheme has to consider an extensive set of potential actions that the operators may take. These actions may take place instead of required actions, for example, because the operators fail to assess the plant state correctly, or they may occur even when no action is required. As a result of this broad search space, most methodologies apply multiple schemes to

  19. Assessing errors related to characteristics of the items measured

    International Nuclear Information System (INIS)

    Liggett, W.

    1980-01-01

    Errors that are related to some intrinsic property of the items measured are often encountered in nuclear material accounting. An example is the error in nondestructive assay measurements caused by uncorrected matrix effects. Nuclear material accounting requires for each materials type one measurement method for which bounds on these errors can be determined. If such a method is available, a second method might be used to reduce costs or to improve precision. If the measurement error for the first method is longer-tailed than Gaussian, then precision might be improved by measuring all items by both methods. 8 refs

  20. Instance Analysis for the Error of Three-pivot Pressure Transducer Static Balancing Method for Hydraulic Turbine Runner

    Science.gov (United States)

    Weng, Hanli; Li, Youping

    2017-04-01

    The working principle, process device and test procedure of runner static balancing test method by weighting with three-pivot pressure transducers are introduced in this paper. Based on an actual instance of a V hydraulic turbine runner, the error and sensitivity of the three-pivot pressure transducer static balancing method are analysed. Suggestions about improving the accuracy and the application of the method are also proposed.

  1. Different grades MEMS accelerometers error characteristics

    Science.gov (United States)

    Pachwicewicz, M.; Weremczuk, J.

    2017-08-01

    The paper presents calibration results for two MEMS accelerometers of different price and quality grades and discusses the different types of accelerometer errors. Calibration for error determination is performed against reference centrifugal measurements. The design and measurement errors of the centrifuge are discussed as well. It is shown that the error characteristics of the two sensors are very different, and that it is not possible to use the simple calibration methods presented in the literature in both cases.
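
    A minimal sketch of how a calibration against reference centrifugal measurements might look (arm length, angular rates and sensor errors below are hypothetical): the reference acceleration a = omega**2 * r is compared with the sensor reading and a scale-factor/bias error model is fitted by least squares.

        import numpy as np

        rng = np.random.default_rng(1)

        r = 0.5                                   # centrifuge arm length, m (hypothetical)
        omega = np.linspace(5.0, 50.0, 20)        # spin rates, rad/s
        a_ref = omega**2 * r                      # reference centripetal acceleration

        # Hypothetical sensor with scale-factor error, bias and noise
        a_meas = 1.02 * a_ref + 0.3 + rng.normal(0.0, 0.05, a_ref.size)

        # Fit a_meas = scale * a_ref + bias by least squares
        A = np.column_stack([a_ref, np.ones_like(a_ref)])
        (scale, bias), *_ = np.linalg.lstsq(A, a_meas, rcond=None)
        residual = a_meas - (scale * a_ref + bias)
        print(f"scale = {scale:.4f}, bias = {bias:.3f} m/s^2, residual rms = {residual.std():.3f} m/s^2")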

  2. The human operational sex ratio: effects of marriage, concealed ovulation, and menopause on mate competition.

    Science.gov (United States)

    Marlowe, Frank W; Berbesque, J Colette

    2012-12-01

    Among mammals, male-male competition for sexual access to females frequently involves fighting. Larger body size gives males an advantage in fighting, which explains why males tend to be larger than females in many species, including anthropoid primates. Mitani et al. derived a formula to measure the operational sex ratio (OSR) to reflect the degree of male-male competition using the number of reproductively available males to females who are cycling and capable of conceiving. The OSR should predict the degree of sexual dimorphism in body mass, at least if male-male competition involves much fighting or threatening. Here, we use hunter-gatherer demographic data and the Mitani et al. formula to calculate the human OSR. We show that humans have a much lower degree of body mass sexual dimorphism than is predicted by our OSR. We suggest this is because human competition rarely involves fighting. In human hunter-gatherer societies, differences in the ages of marriage have an impact on competition in that the age of males at first marriage is younger when there is a lower percentage of married men with two or more wives, and older when there is a higher percentage of married men with two or more wives. We discuss the implications of this for females, along with the effects of two key life history traits that influence the OSR, concealed ovulation and menopause. While menopause decreases the number of reproductively available females to males and thus increases male-male competition, concealed ovulation decreases male-male competition. Finally, we discuss the importance of mostly monogamous mate bonds in human evolution. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Error Analysis of a Finite Element Method for the Space-Fractional Parabolic Equation

    KAUST Repository

    Jin, Bangti; Lazarov, Raytcho; Pasciak, Joseph; Zhou, Zhi

    2014-01-01

    © 2014 Society for Industrial and Applied Mathematics We consider an initial boundary value problem for a one-dimensional fractional-order parabolic equation with a space fractional derivative of Riemann-Liouville type and order α ∈ (1, 2). We study a spatial semidiscrete scheme using the standard Galerkin finite element method with piecewise linear finite elements, as well as fully discrete schemes based on the backward Euler method and the Crank-Nicolson method. Error estimates in the L2(D)- and Hα/2 (D)-norm are derived for the semidiscrete scheme and in the L2(D)-norm for the fully discrete schemes. These estimates cover both smooth and nonsmooth initial data and are expressed directly in terms of the smoothness of the initial data. Extensive numerical results are presented to illustrate the theoretical results.

  4. Evaluation of roundness error using a new method based on a small displacement screw

    International Nuclear Information System (INIS)

    Nouira, Hichem; Bourdet, Pierre

    2014-01-01

    In relation to industrial need and the progress of technology, LNE would like to improve the measurement of its primary pressure, spherical and flick standards. The spherical and flick standards are respectively used to calibrate the spindle motion error and the probe which equips commercial conventional cylindricity measuring machines. The primary pressure standards are obtained using pressure balances equipped with rotary pistons with an uncertainty of 5 nm for a piston diameter of 10 mm. Conventional machines are not able to reach such an uncertainty level. That is why the development of a new machine is necessary. To ensure such a level of uncertainty, both stability and performance of the machine are not sufficient, and the data processing should also be done with an accuracy better than a nanometre. In this paper, a new method based on the small displacement screw (SDS) model is proposed. A first validation of this method is proposed on a theoretical dataset published by the European Community Bureau of Reference (BCR) in report no 3327. Then, an experiment is prepared in order to validate the new method on real datasets. Specific environment conditions are taken into account and many precautions are considered. The new method is applied to analyse the least-squares circle, minimum zone circle, maximum inscribed circle and minimum circumscribed circle. The results are compared to those obtained by the reference Chebyshev best-fit method and reveal perfect agreement. The sensitivities of the SDS and Chebyshev methodologies are investigated, and it is revealed that results remain unchanged when the value of the diameter exceeds 700 times the form error. (paper)
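
    Of the reference circles listed above, the least-squares circle is the simplest to reproduce. The sketch below uses an algebraic (Kasa) least-squares fit on a hypothetical roundness profile; it is not the SDS formulation developed in the paper, only the kind of baseline it is compared against.

        import numpy as np

        def least_squares_circle(x, y):
            """Algebraic (Kasa) least-squares circle fit: returns centre (cx, cy) and radius r."""
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            b = x**2 + y**2
            (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
            return cx, cy, np.sqrt(c + cx**2 + cy**2)

        # Hypothetical profile: nominal 5 mm radius with a 2 um three-lobe form error
        theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
        radius = 5.0 + 0.002 * np.cos(3 * theta)
        x, y = radius * np.cos(theta), radius * np.sin(theta)

        cx, cy, r = least_squares_circle(x, y)
        deviation = np.hypot(x - cx, y - cy) - r
        print("roundness error, peak-to-valley (mm):", deviation.max() - deviation.min())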

  5. Improved Side Information Generation for Distributed Video Coding by Exploiting Spatial and Temporal Correlations

    Directory of Open Access Journals (Sweden)

    Ye Shuiming

    2009-01-01

    Full Text Available Distributed video coding (DVC) is a video coding paradigm allowing low complexity encoding for emerging applications such as wireless video surveillance. Side information (SI) generation is a key function in the DVC decoder, and plays a key role in determining the performance of the codec. This paper proposes an improved SI generation for DVC, which exploits both spatial and temporal correlations in the sequences. Partially decoded Wyner-Ziv (WZ) frames, based on initial SI obtained by motion-compensated temporal interpolation, are exploited to improve the performance of the whole SI generation. More specifically, an enhanced temporal frame interpolation is proposed, including motion vector refinement and smoothing, optimal compensation mode selection, and a new matching criterion for motion estimation. The improved SI technique is also applied to a new hybrid spatial and temporal error concealment scheme to conceal errors in WZ frames. Simulation results show that the proposed scheme can achieve up to 1.0 dB improvement in rate distortion performance in WZ frames for video with high motion, when compared to state-of-the-art DVC. In addition, both the objective and perceptual qualities of the corrupted sequences are significantly improved by the proposed hybrid error concealment scheme, outperforming both spatial and temporal concealments alone.
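
    The building block behind motion-compensated temporal interpolation (and temporal error concealment generally) is block matching. The sketch below is a plain full-search matcher with a sum-of-absolute-differences criterion on synthetic frames; it is not the refined matching criterion or motion vector smoothing proposed in the paper.

        import numpy as np

        def full_search(ref, cur, top, left, block=8, search=7):
            """Best SAD match in ref for the block of cur at (top, left)."""
            target = cur[top:top + block, left:left + block].astype(int)
            best_sad, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = top + dy, left + dx
                    if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                        continue
                    sad = np.abs(ref[y:y + block, x:x + block].astype(int) - target).sum()
                    if sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            return best_mv, best_sad

        rng = np.random.default_rng(2)
        ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
        cur = np.roll(ref, (2, -3), axis=(0, 1))     # simulated motion between frames
        print(full_search(ref, cur, 24, 24))         # matching block in ref is offset by (-2, +3)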

  6. Error estimation for variational nodal calculations

    International Nuclear Information System (INIS)

    Zhang, H.; Lewis, E.E.

    1998-01-01

    Adaptive grid methods are widely employed in finite element solutions to both solid and fluid mechanics problems. Either the size of the element is reduced (h refinement) or the order of the trial function is increased (p refinement) locally to improve the accuracy of the solution without a commensurate increase in computational effort. Success of these methods requires effective local error estimates to determine those parts of the problem domain where the solution should be refined. Adaptive methods have recently been applied to the spatial variables of the discrete ordinates equations. As a first step in the development of adaptive methods that are compatible with the variational nodal method, the authors examine error estimates for use in conjunction with spatial variables. The variational nodal method lends itself well to p refinement because the space-angle trial functions are hierarchical. Here they examine an error estimator for use with spatial p refinement for the diffusion approximation. Eventually, angular refinement will also be considered using spherical harmonics approximations

  7. Forecast Combination under Heavy-Tailed Errors

    Directory of Open Access Journals (Sweden)

    Gang Cheng

    2015-11-01

    Full Text Available Forecast combination has been proven to be a very important technique to obtain accurate predictions for various applications in economics, finance, marketing and many other areas. In many applications, forecast errors exhibit heavy-tailed behaviors for various reasons. Unfortunately, to our knowledge, little has been done to obtain reliable forecast combinations for such situations. The familiar forecast combination methods, such as simple average, least squares regression or those based on the variance-covariance of the forecasts, may perform very poorly due to the fact that outliers tend to occur, and they make these methods have unstable weights, leading to un-robust forecasts. To address this problem, in this paper, we propose two nonparametric forecast combination methods. One is specially proposed for the situations in which the forecast errors are strongly believed to have heavy tails that can be modeled by a scaled Student’s t-distribution; the other is designed for relatively more general situations when there is a lack of strong or consistent evidence on the tail behaviors of the forecast errors due to a shortage of data and/or an evolving data-generating process. Adaptive risk bounds of both methods are developed. They show that the resulting combined forecasts yield near optimal mean forecast errors relative to the candidate forecasts. Simulations and a real example demonstrate their superior performance in that they indeed tend to have significantly smaller prediction errors than the previous combination methods in the presence of forecast outliers.
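
    A rough sketch of the inverse-error weighting idea under heavy-tailed forecast errors, using a robust (median absolute error) accuracy measure on synthetic data. This is only a simplified stand-in for the two nonparametric methods proposed in the paper, not their implementation.

        import numpy as np

        rng = np.random.default_rng(3)
        T, K = 200, 3
        truth = np.cumsum(rng.normal(size=T))

        # Three candidate forecasts; the third occasionally produces heavy-tailed outliers
        forecasts = truth[:, None] + rng.normal(0.0, [0.5, 0.8, 0.6], size=(T, K))
        forecasts[:, 2] += rng.standard_t(df=2, size=T)

        errors = forecasts - truth[:, None]
        mad = np.median(np.abs(errors), axis=0)          # robust accuracy measure per forecaster
        weights = (1.0 / mad) / (1.0 / mad).sum()        # inverse-error combination weights

        combined = forecasts @ weights
        print("weights:", np.round(weights, 3))
        print("combined MAE:", np.mean(np.abs(combined - truth)))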

  8. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  9. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    Directory of Open Access Journals (Sweden)

    Zhongzhou Du

    2015-04-01

    Full Text Available The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources when the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and AC bias) was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB, and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method.

  10. Findings from analysing and quantifying human error using current methods

    International Nuclear Information System (INIS)

    Dang, V.N.; Reer, B.

    1999-01-01

    In human reliability analysis (HRA), the scarcity of data means that, at best, judgement must be applied to transfer to the domain of the analysis what data are available for similar tasks. In particular for the quantification of tasks involving decisions, the analyst has to choose among quantification approaches that all depend to a significant degree on expert judgement. The use of expert judgement can be made more reliable by eliciting relative judgements rather than absolute judgements. These approaches, which are based on multiple criterion decision theory, focus on ranking the tasks to be analysed by difficulty. While these approaches remedy at least partially the poor performance of experts in the estimation of probabilities, they nevertheless require the calibration of the relative scale on which the actions are ranked in order to obtain the probabilities of interest. This paper presents some results from a comparison of some current HRA methods performed in the frame of a study of SLIM calibration options. The HRA quantification methods THERP, HEART, and INTENT were applied to derive calibration human error probabilities for two groups of operator actions. (author)

  11. Errors and mistakes in breast ultrasound diagnostics

    Directory of Open Access Journals (Sweden)

    Wiesław Jakubowski

    2012-09-01

    Full Text Available Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high frequency transducers, matrix transducers, harmonic imaging and finally, elastography, influenced the improvement of breast disease diagnostics. Nevertheless, as in each imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those impossible to avoid and those potentially possible to reduce. In this article the most frequently made errors in ultrasound have been presented, including the ones caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), improper setting of general enhancement or time gain curve or range. Errors dependent on the examiner, resulting in the wrong BIRADS-usg classification, are divided into negative and positive errors. The sources of these errors have been listed. The methods of minimization of the number of errors made have been discussed, including the ones related to the appropriate examination technique, taking into account data from case history and the use of the greatest possible number of additional options such as: harmonic imaging, color and power Doppler and elastography. In the article examples of errors resulting from the technical conditions of the method have been presented, and those dependent on the examiner which are related to the great diversity and variation of ultrasound images of pathological breast lesions.

  12. Volterra Filtering for ADC Error Correction

    Directory of Open Access Journals (Sweden)

    J. Saliga

    2001-09-01

    Full Text Available Dynamic non-linearity of analog-to-digital converters (ADC) contributes significantly to the distortion of digitized signals. This paper introduces a new, effective method for compensation of such distortion based on the application of Volterra filtering. Considering an a-priori error model of the ADC allows finding an efficient inverse Volterra model for error correction. The efficiency of the proposed method is demonstrated on experimental results.
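
    A minimal sketch of the post-correction idea in its simplest, memoryless form: a polynomial special case of an inverse Volterra model fitted against a known reference signal. The ADC non-linearity below is hypothetical, and a full Volterra corrector would additionally include memory (delayed-sample) terms.

        import numpy as np

        rng = np.random.default_rng(4)

        # Reference input and a hypothetical ADC with mild static non-linearity plus noise
        x = np.sin(2 * np.pi * 0.013 * np.arange(4000))
        y = x + 0.05 * x**2 - 0.02 * x**3 + rng.normal(0.0, 1e-3, x.size)

        # Fit an inverse correction polynomial x_hat = sum_k c_k * y**k
        order = 3
        A = np.column_stack([y**k for k in range(1, order + 1)])
        coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
        x_hat = A @ coeffs

        print("rms error before correction:", np.sqrt(np.mean((y - x)**2)))
        print("rms error after correction: ", np.sqrt(np.mean((x_hat - x)**2)))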

  13. Finding beam focus errors automatically

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.

    1987-01-01

    An automated method for finding beam focus errors using an optimization program called COMFORT-PLUS is described. The steps involved in finding the correction factors are outlined, and the procedure has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is to be used as an off-line program to analyze actual measured data for any SLC system. A limitation on the application of this procedure is found to be that it depends on the magnitude of the machine errors. Another is that the program is not totally automated, since the user must decide a priori where to look for errors.

  14. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts both in the pharmacy and in the wards. Results: Unrevised unidosis carts show 0.9% of medication errors (264) versus 0.6% (154) in unidosis carts previously revised. In carts not revised, 70.83% of the errors are mainly caused when setting up the unidosis carts. The rest are due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%) or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: There is a need to revise unidosis carts and to introduce a computerized prescription system to avoid transcription errors. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are revised before being sent to hospitalization units, the error diminishes to 0.3%.

  15. Effect of Drying Moisture Exposed Almonds on the Development of the Quality Defect Concealed Damage.

    Science.gov (United States)

    Rogel-Castillo, Cristian; Luo, Kathleen; Huang, Guangwei; Mitchell, Alyson E

    2017-10-11

    Concealed damage (CD) is a term used by the nut industry to describe a brown discoloration of kernel nutmeat that becomes visible after moderate heat treatments (e.g., roasting). CD can result in consumer rejection and product loss. Postharvest exposure of almonds to moisture (e.g., rain) is a key factor in the development of CD as it promotes hydrolysis of proteins, carbohydrates, and lipids. The effect of drying moisture-exposed almonds between 45 and 95 °C prior to roasting was evaluated as a method for controlling CD in roasted almonds. Additionally, moisture-exposed almonds dried at 55 and 75 °C were stored under accelerated shelf life conditions (45 °C/80% RH) and evaluated for headspace volatiles. Results indicate that drying temperatures below 65 °C decrease brown discoloration of nutmeat by up to 40%, while drying temperatures above 75 °C produce significant increases in brown discoloration and volatiles related to lipid oxidation, and nonsignificant increases in Amadori compounds. Results also demonstrate that raw almonds exposed to moisture and dried at 55 °C prior to roasting show reduced visual signs of CD and maintain headspace volatile profiles similar to almonds without moisture damage during accelerated storage.

  16. Methods, analysis, and the treatment of systematic errors for the electron electric dipole moment search in thorium monoxide

    Science.gov (United States)

    Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration

    2017-07-01

    We recently set a new limit on the electric dipole moment of the electron (eEDM) (J Baron et al and ACME collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.

  17. Positional error in automated geocoding of residential addresses

    Directory of Open Access Journals (Sweden)

    Talbot Thomas O

    2003-12-01

    Full Text Available Abstract Background Public health applications using geographic information system (GIS) technology are steadily increasing. Many of these rely on the ability to locate where people live with respect to areas of exposure from environmental contaminants. Automated geocoding is a method used to assign geographic coordinates to an individual based on their street address. This method often relies on street centerline files as a geographic reference. Such a process introduces positional error in the geocoded point. Our study evaluated the positional error caused during automated geocoding of residential addresses and how this error varies between population densities. We also evaluated an alternative method of geocoding using residential property parcel data. Results Positional error was determined for 3,000 residential addresses using the distance between each geocoded point and its true location as determined with aerial imagery. Error was found to increase as population density decreased. In rural areas of an upstate New York study area, 95 percent of the addresses geocoded to within 2,872 m of their true location. Suburban areas revealed less error where 95 percent of the addresses geocoded to within 421 m. Urban areas demonstrated the least error where 95 percent of the addresses geocoded to within 152 m of their true location. As an alternative to using street centerline files for geocoding, we used residential property parcel points to locate the addresses. In the rural areas, 95 percent of the parcel points were within 195 m of the true location. In suburban areas, this distance was 39 m while in urban areas 95 percent of the parcel points were within 21 m of the true location. Conclusion Researchers need to determine if the level of error caused by a chosen method of geocoding may affect the results of their project. As an alternative method, property data can be used for geocoding addresses if the error caused by traditional methods is
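
    A minimal sketch of the error statistic quoted above: distances between geocoded and true positions, summarized by their 95th percentile. The coordinates are synthetic points in a projected (planar) coordinate system, not the study's address data.

        import numpy as np

        rng = np.random.default_rng(5)

        true_xy = rng.uniform(0, 10_000, size=(3000, 2))                 # metres, hypothetical
        geocoded_xy = true_xy + rng.normal(0, 80, size=true_xy.shape)    # simulated positional error

        error_m = np.hypot(*(geocoded_xy - true_xy).T)
        print("median positional error (m):", np.median(error_m))
        print("95th-percentile positional error (m):", np.percentile(error_m, 95))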

  18. A method for predicting errors when interacting with finite state systems. How implicit learning shapes the user's knowledge of a system

    International Nuclear Information System (INIS)

    Javaux, Denis

    2002-01-01

    This paper describes a method for predicting the errors that may appear when human operators or users interact with systems behaving as finite state systems. The method is a generalization of a method used for predicting errors when interacting with autopilot modes on modern, highly computerized airliners [Proc 17th Digital Avionics Sys Conf (DASC) (1998); Proc 10th Int Symp Aviat Psychol (1999)]. A cognitive model based on spreading activation networks is used for predicting the user's model of the system and its impact on the production of errors. The model strongly posits the importance of implicit learning in user-system interaction and its possible detrimental influence on users' knowledge of the system. An experiment conducted with Airbus Industrie and a major European airline on pilots' knowledge of autopilot behavior on the A340-200/300 confirms the model predictions, and in particular the impact of the frequencies with which specific state transitions and contexts are experienced

  19. Investigation into the limitations of straightness interferometers using a multisensor-based error separation method

    Science.gov (United States)

    Weichert, Christoph; Köchert, Paul; Schötka, Eugen; Flügge, Jens; Manske, Eberhard

    2018-06-01

    The uncertainty of a straightness interferometer is independent of the component used to introduce the divergence angle between the two probing beams, and is limited by three main error sources, which are linked to each other: their resolution, the influence of refractive index gradients and the topography of the straightness reflector. To identify the configuration with minimal uncertainties under laboratory conditions, a fully fibre-coupled heterodyne interferometer was successively equipped with three different wedge prisms, resulting in three different divergence angles (4°, 8° and 20°). To separate the error sources an independent reference with a smaller reproducibility is needed. Therefore, the straightness measurement capability of the Nanometer Comparator, based on a multisensor error separation method, was improved to provide measurements with a reproducibility of 0.2 nm. The comparison results revealed that the influence of the refractive index gradients of air did not increase with interspaces between the probing beams of more than 11.3 mm. Therefore, over a movement range of 220 mm, the lowest uncertainty was achieved with the largest divergence angle. The dominant uncertainty contribution arose from the mirror topography, which was additionally determined with a Fizeau interferometer. The measured topography agreed within  ±1.3 nm with the systematic deviations revealed in the straightness comparison, resulting in an uncertainty contribution of 2.6 nm for the straightness interferometer.

  20. KMRR thermal power measurement error estimation

    International Nuclear Information System (INIS)

    Rhee, B.W.; Sim, B.S.; Lim, I.C.; Oh, S.K.

    1990-01-01

    The thermal power measurement error of the Korea Multi-purpose Research Reactor has been estimated by a statistical Monte Carlo method, and compared with those obtained by the other methods including deterministic and statistical approaches. The results show that the specified thermal power measurement error of 5% cannot be achieved if the commercial RTDs are used to measure the coolant temperatures of the secondary cooling system and the error can be reduced below the requirement if the commercial RTDs are replaced by the precision RTDs. The possible range of the thermal power control operation has been identified to be from 100% to 20% of full power
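
    A minimal sketch of a statistical Monte-Carlo estimate of thermal power measurement error for a heat balance Q = m_dot * cp * (T_out - T_in). The nominal conditions and the RTD and flow-rate uncertainties below are hypothetical placeholders, not KMRR design values.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 100_000

        m_dot, cp = 300.0, 4.18e3                     # kg/s, J/(kg K), hypothetical
        t_in, t_out = 35.0, 45.0                      # deg C, hypothetical

        sigma_rtd, sigma_flow = 0.3, 0.01 * m_dot     # assumed measurement uncertainties
        t_in_m = t_in + rng.normal(0, sigma_rtd, n)
        t_out_m = t_out + rng.normal(0, sigma_rtd, n)
        m_dot_m = m_dot + rng.normal(0, sigma_flow, n)

        q = m_dot_m * cp * (t_out_m - t_in_m)
        q_true = m_dot * cp * (t_out - t_in)
        rel_err = (q - q_true) / q_true
        print("2-sigma relative thermal power error: {:.2%}".format(2 * rel_err.std()))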

  1. Improving Papanicolaou test quality and reducing medical errors by using Toyota production system methods.

    Science.gov (United States)

    Raab, Stephen S; Andrew-Jaja, Carey; Condel, Jennifer L; Dabbs, David J

    2006-01-01

    The objective of the study was to determine whether the Toyota production system process improves Papanicolaou test quality and patient safety. An 8-month nonconcurrent cohort study that included 464 case and 639 control women who had a Papanicolaou test was performed. Office workflow was redesigned using Toyota production system methods by introducing a 1-by-1 continuous flow process. We measured the frequency of Papanicolaou tests without a transformation zone component, follow-up and Bethesda System diagnostic frequency of atypical squamous cells of undetermined significance, and diagnostic error frequency. After the intervention, the percentage of Papanicolaou tests lacking a transformation zone component decreased from 9.9% to 4.7% (P = .001). The percentage of Papanicolaou tests with a diagnosis of atypical squamous cells of undetermined significance decreased from 7.8% to 3.9% (P = .007). The frequency of error per correlating cytologic-histologic specimen pair decreased from 9.52% to 7.84%. The introduction of the Toyota production system process resulted in improved Papanicolaou test quality.

  2. Boundary integral method to calculate the sensitivity temperature error of microstructured fibre plasmonic sensors

    International Nuclear Information System (INIS)

    Esmaeilzadeh, Hamid; Arzi, Ezatollah; Légaré, François; Hassani, Alireza

    2013-01-01

    In this paper, using the boundary integral method (BIM), we simulate the effect of temperature fluctuation on the sensitivity of microstructured optical fibre (MOF) surface plasmon resonance (SPR) sensors. The final results indicate that, as the temperature increases, the refractometry sensitivity of our sensor decreases from 1300 nm/RIU at 0 °C to 1200 nm/RIU at 50 °C, leading to ∼7.7% sensitivity reduction and a sensitivity temperature error of 0.15% °C^-1 for this case. These results can be used for biosensing temperature-error adjustment in MOF SPR sensors, since biomaterials detection usually happens in this temperature range. Moreover, the signal-to-noise ratio (SNR) of our sensor decreases from 0.265 at 0 °C to 0.154 at 100 °C with an average reduction rate of ∼0.42% °C^-1. The results suggest that at lower temperatures the sensor has a higher SNR. (paper)

  3. A Trial-and-Error Method with Autonomous Vehicle-to-Infrastructure Traffic Counts for Cordon-Based Congestion Pricing

    Directory of Open Access Journals (Sweden)

    Zhiyuan Liu

    2017-01-01

    Full Text Available This study proposes a practical trial-and-error method to solve the optimal toll design problem of cordon-based pricing, where only the traffic counts autonomously collected on the entry links of the pricing cordon are needed. With the fast development and adoption of vehicle-to-infrastructure (V2I) facilities, it is very convenient to autonomously collect these data. Two practical properties of the cordon-based pricing are further considered in this article: the toll charge on each entry of one pricing cordon is identical; the total inbound flow to one cordon should be restricted in order to maintain the traffic conditions within the cordon area. Then, the stochastic user equilibrium (SUE) with asymmetric link travel time functions is used to assess each feasible toll pattern. Based on a variational inequality (VI) model for the optimal toll pattern, this study proposes a theoretically convergent trial-and-error method for the addressed problem, where only traffic counts data are needed. Finally, the proposed method is verified based on a numerical network example.
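
    A toy sketch of the trial-and-error loop: only the observed inbound counts at the cordon entries are used, and the single (identical) entry toll is adjusted until the total inflow meets the restriction. The demand-response function below is a hypothetical stand-in for the real network and its SUE assignment, and a fixed gain replaces the paper's convergence-proof step sizes.

        def observed_inflow(toll, base_demand=12_000.0, elasticity=350.0):
            """Hypothetical stand-in for autonomously collected V2I entry counts."""
            return max(base_demand - elasticity * toll, 0.0)

        def trial_and_error_toll(target_inflow, toll=0.0, gain=3.0, n_iter=100):
            for _ in range(n_iter):
                inflow = observed_inflow(toll)     # traffic counts are all that is needed
                toll = max(toll + gain * (inflow - target_inflow) / target_inflow, 0.0)
            return toll, observed_inflow(toll)

        print(trial_and_error_toll(target_inflow=8_000.0))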

  4. A comparative evaluation of emerging methods for errors of commission based on applications to the Davis-Besse (1985) event

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B.; Dang, V.N.; Hirschberg, S. [Paul Scherrer Inst., Nuclear Energy and Safety Research Dept., CH-5232 Villigen PSI (Switzerland); Straeter, O. [Gesellschaft fur Anlagen- und Reaktorsicherheit (Germany)

    1999-12-01

    In considering the human role in accidents, the classical PSA methodology applied today focuses primarily on the omissions of actions required of the operators at specific points in the scenario models. A practical, proven methodology is not available for systematically identifying and analyzing the scenario contexts in which the operators might perform inappropriate actions that aggravate the scenario. As a result, typical PSAs do not comprehensively treat these actions, referred to as errors of commission (EOCs). This report presents the results of a joint project of the Paul Scherrer Institut (PSI, Villigen, Switzerland) and the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, Garching, Germany) that examined some methods recently proposed for addressing the EOC issue. Five methods were investigated: 1) ATHEANA, 2) the Borssele screening methodology, 3) CREAM, 4) CAHR, and 5) CODA. In addition to a comparison of their scope, basic assumptions, and analytical approach, the methods were each applied in the analysis of PWR Loss of Feedwater scenarios based on the 1985 Davis-Besse event, in which the operator response included actions that can be categorized as EOCs. The aim was to compare how the methods consider a concrete scenario in which EOCs have in fact been observed. These case applications show how the methods are used in practical terms and constitute a common basis for comparing the methods and the insights that they provide. The identification of the potentially significant EOCs to be analysed in the PSA is currently the central problem for their treatment. The identification or search scheme has to consider an extensive set of potential actions that the operators may take. These actions may take place instead of required actions, for example, because the operators fail to assess the plant state correctly, or they may occur even when no action is required. As a result of this broad search space, most methodologies apply multiple schemes to

  5. Medication errors detected in non-traditional databases

    DEFF Research Database (Denmark)

    Perregaard, Helene; Aronson, Jeffrey K; Dalhoff, Kim

    2015-01-01

    AIMS: We have looked for medication errors involving the use of low-dose methotrexate, by extracting information from Danish sources other than traditional pharmacovigilance databases. We used the data to establish the relative frequencies of different types of errors. METHODS: We searched four...... errors, whereas knowledge-based errors more often resulted in near misses. CONCLUSIONS: The medication errors in this survey were most often action-based (50%) and knowledge-based (34%), suggesting that greater attention should be paid to education and surveillance of medical personnel who prescribe...

  6. Output Error Analysis of Planar 2-DOF Five-bar Mechanism

    Science.gov (United States)

    Niu, Kejia; Wang, Jun; Ting, Kwun-Lon; Tao, Fen; Cheng, Qunchao; Wang, Quan; Zhang, Kaiyang

    2018-03-01

    To address the mechanism error caused by joint clearance in the planar 2-DOF five-bar mechanism, the method of treating the clearance of a kinematic pair as an equivalent virtual link is applied. The structural error model of revolute joint clearance is established based on the N-bar rotation laws and the concept of joint rotation space. The influence of the clearance of the moving pair on the output error of the mechanism is studied, and the calculation method and basis of the maximum error are given. The error rotation space of the mechanism under the influence of joint clearance is obtained. The results show that this method can accurately calculate the joint-space error rotation space, which provides a new way to analyze planar parallel mechanism error caused by joint clearance.

  7. Error Analysis of Variations on Larsen's Benchmark Problem

    International Nuclear Information System (INIS)

    Azmy, YY

    2001-01-01

    Error norms for three variants of Larsen's benchmark problem are evaluated using three numerical methods for solving the discrete ordinates approximation of the neutron transport equation in multidimensional Cartesian geometry. The three variants of Larsen's test problem are concerned with the incoming flux boundary conditions: unit incoming flux on the left and bottom edges (Larsen's configuration); unit incoming flux only on the left edge; unit incoming flux only on the bottom edge. The three methods considered are the Diamond Difference (DD) method, and the constant-approximation versions of the Arbitrarily High Order Transport method of the Nodal type (AHOT-N), and of the Characteristic (AHOT-C) type. The cell-wise error is computed as the difference between the cell-averaged flux computed by each method and the exact value, then the L1, L2, and L∞ error norms are calculated. The results of this study demonstrate that while the integral error norms, i.e. L1 and L2, converge to zero with mesh refinement, the pointwise L∞ norm does not, due to the solution discontinuity across the singular characteristic. Little difference is observed between the error norm behavior of the three methods considered in spite of the fact that AHOT-C is locally exact, suggesting numerical diffusion across the singular characteristic as the major source of error on the global scale. However, AHOT-C possesses a given accuracy in a larger fraction of computational cells than DD
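
    A minimal sketch of the cell-wise error norms used above, computed from cell-averaged fluxes on a uniform mesh; the arrays are hypothetical placeholders for the computed and exact solutions.

        import numpy as np

        def error_norms(computed, exact, cell_area=1.0):
            err = np.abs(computed - exact)
            l1 = (err * cell_area).sum()
            l2 = np.sqrt((err**2 * cell_area).sum())
            linf = err.max()
            return l1, l2, linf

        rng = np.random.default_rng(7)
        exact = rng.uniform(0.5, 1.0, (10, 10))                 # hypothetical cell-averaged fluxes
        computed = exact + rng.normal(0.0, 1e-3, exact.shape)   # hypothetical numerical solution
        print(error_norms(computed, exact, cell_area=0.01))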

  8. Evaluation of Two Methods for Modeling Measurement Errors When Testing Interaction Effects with Observed Composite Scores

    Science.gov (United States)

    Hsiao, Yu-Yu; Kwok, Oi-Man; Lai, Mark H. C.

    2018-01-01

    Path models with observed composites based on multiple items (e.g., mean or sum score of the items) are commonly used to test interaction effects. Under this practice, researchers generally assume that the observed composites are measured without errors. In this study, we reviewed and evaluated two alternative methods within the structural…

  9. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  10. A method for local transport analysis in tokamaks with error calculation

    International Nuclear Information System (INIS)

    Hogeweij, G.M.D.; Hordosy, G.; Lopes Cardozo, N.J.

    1989-01-01

    Global transport studies have revealed that heat transport in a tokamak is anomalous, but cannot provide information about the nature of the anomaly. Therefore, local transport analysis is essential for the study of anomalous transport. However, the determination of local transport coefficients is not a trivial affair. Generally speaking one can either directly measure the heat diffusivity, χ, by means of heat pulse propagation analysis, or deduce the profile of χ from measurements of the profiles of the temperature, T, and the power deposition. Here we are concerned only with the latter method, the local power balance analysis. For the sake of clarity heat diffusion only is considered: ρ = -grad T / q (1), where ρ = κ^(-1) = (nχ)^(-1) is the heat resistivity and q is the heat flux per unit area. It is assumed that the profiles T(r) and q(r) are given with some experimental error. In practice T(r) is measured directly, e.g. from ECE spectroscopy, while q(r) is deduced from the power deposition and loss profiles. The latter cannot be measured directly and is partly determined on the basis of models. This complication will not be considered here. Since in eq. (1) the gradient of T appears, noise on T can severely affect the solution ρ. This means that in general some form of smoothing must be applied. A criterion is needed to select the optimal smoothing. Too much smoothing will wipe out the details, whereas with too little smoothing the noise will distort the reconstructed profile of ρ. Here a new method to solve eq. (1) is presented which expresses ρ(r) as a cosine-series. The coefficients of this series are given as linear combinations of the Fourier coefficients of the measured T- and q-profiles. This formulation allows 1) the stable and accurate calculation of the ρ-profile, and 2) the analytical calculation of the error in this profile. (author) 5 refs., 3 figs
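
    A minimal sketch of the power-balance step in eq. (1), computing rho(r) = -grad T / q from smoothed temperature data. The profiles are hypothetical, and a simple polynomial fit stands in for the cosine-series representation (and its analytical error propagation) developed in the paper.

        import numpy as np
        from numpy.polynomial import polynomial as P

        rng = np.random.default_rng(8)

        r = np.linspace(0.05, 0.95, 40)                   # normalized minor radius
        t_true = 2.0 * (1.0 - r**2)**1.5                  # hypothetical temperature profile (keV)
        t_meas = t_true + rng.normal(0.0, 0.02, r.size)   # noisy measurements
        q = 0.05 * r                                      # hypothetical heat flux per unit area (a.u.)

        # Smooth T(r) before differentiating: noise on T strongly distorts grad T
        coeffs = P.polyfit(r, t_meas, deg=6)
        grad_t = P.polyval(r, P.polyder(coeffs))

        rho = -grad_t / q          # heat resistivity, rho = kappa^-1 = (n*chi)^-1
        print(rho[:5])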

  11. Ultra Wide X-Band Microwave Imaging of Concealed Weapons and Explosives Using 3D-SAR Technique

    Directory of Open Access Journals (Sweden)

    P. Millot

    2015-01-01

    Full Text Available In order to detect and image concealed weapons and explosives, an electromagnetic imaging tool with its related signal processing is presented. The aim is to penetrate clothes and to find person-borne weapons and explosives under clothes. The chosen UWB frequency range covers the whole X-band. The frequency range is justified after transmission measurements of numerous clothes that are dry or slightly wet. The apparatus and the 3D near-field SAR processor are described. A strategy for contour identification is presented with results for some weapon and explosive simulants. A conclusion is drawn on the possible future of this technique.

  12. Errors and Correction of Precipitation Measurements in China

    Institute of Scientific and Technical Information of China (English)

    REN Zhihua; LI Mingqin

    2007-01-01

    In order to discover the range of various errors in Chinese precipitation measurements and seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the "horizontal precipitation gauge" was devised beforehand. Field intercomparison observations regarding 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of intercomparison measurement results. The distributions of random errors and systematic errors in precipitation measurements are studied in this paper. A correction method, especially for wind-induced errors, is developed. The results prove that a power-function correlation exists between the precipitation amount caught by the horizontal gauge and the absolute difference of observations implemented by the operational gauge and pit gauge. The correlation coefficient is 0.99. For operational observations, precipitation correction can be carried out only by parallel observation with a horizontal precipitation gauge. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.

  13. A NEW METHOD TO QUANTIFY AND REDUCE THE NET PROJECTION ERROR IN WHOLE-SOLAR-ACTIVE-REGION PARAMETERS MEASURED FROM VECTOR MAGNETOGRAMS

    Energy Technology Data Exchange (ETDEWEB)

    Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L. [NASA Marshall Space Flight Center, Huntsville, AL 35812 (United States); Khazanov, Igor, E-mail: David.a.Falconer@nasa.gov [Center for Space Plasma and Aeronomic Research, University of Alabama in Huntsville, Huntsville, AL 35899 (United States)

    2016-12-20

    Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter’s absolute values, measured from the disk passage of a large number of ARs and normalized to each AR’s absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10^22 Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important for both the study of the evolution of ARs and for improving the accuracy of forecasts of an AR’s major flare/coronal mass ejection productivity.
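
    A minimal sketch of the correction step described above: the normalized whole-AR flux is fitted against radial distance with a Chebyshev polynomial, and a measured flux is then divided by the fitted curve at its radial distance. All values are synthetic placeholders, not HMI measurements.

        import numpy as np
        from numpy.polynomial import chebyshev as C

        rng = np.random.default_rng(9)

        # Synthetic normalized whole-AR flux vs. radial distance (0 to ~0.87, i.e. up to ~60 deg)
        rdist = rng.uniform(0.0, 0.87, 5000)
        norm_flux = 1.0 - 0.35 * rdist**2 + rng.normal(0.0, 0.05, rdist.size)

        fit = C.chebfit(rdist, norm_flux, deg=4)          # center-to-limb projection-error curve

        def corrected_flux(measured_flux, r):
            """Remove the average projection error at radial distance r."""
            return measured_flux / C.chebval(r, fit)

        print(corrected_flux(8.0e21, 0.6), "Mx (example)")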

  14. Flight Test Results of a GPS-Based Pitot-Static Calibration Method Using Output-Error Optimization for a Light Twin-Engine Airplane

    Science.gov (United States)

    Martos, Borja; Kiszely, Paul; Foster, John V.

    2011-01-01

    As part of the NASA Aviation Safety Program (AvSP), a novel pitot-static calibration method was developed to allow rapid in-flight calibration for subscale aircraft while flying within confined test areas. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeed with defined confidence bounds. This method has been demonstrated in subscale flight tests and has shown small 2-sigma error bounds with significant reduction in test time compared to other methods. The current research was motivated by the desire to further evaluate and develop this method for full-scale aircraft. A goal of this research was to develop an accurate calibration method that enables reductions in test equipment and flight time, thus reducing costs. The approach involved analysis of data acquisition requirements, development of efficient flight patterns, and analysis of pressure error models based on system identification methods. Flight tests were conducted at The University of Tennessee Space Institute (UTSI) utilizing an instrumented Piper Navajo research aircraft. In addition, the UTSI engineering flight simulator was used to investigate test maneuver requirements and handling qualities issues associated with this technique. This paper provides a summary of piloted simulation and flight test results that illustrates the performance and capabilities of the NASA calibration method. Discussion of maneuver requirements and data analysis methods is included as well as recommendations for piloting technique.

  15. Error Cost Escalation Through the Project Life Cycle

    Science.gov (United States)

    Stecklein, Jonette M.; Dabney, Jim; Dick, Brandon; Haskins, Bill; Lovell, Randy; Moroney, Gregory

    2004-01-01

    It is well known that the costs to fix errors increase as the project matures, but how fast do those costs build? A study was performed to determine the relative cost of fixing errors discovered during various phases of a project life cycle. This study used three approaches to determine the relative costs: the bottom-up cost method, the total cost breakdown method, and the top-down hypothetical project method. The approaches and results described in this paper presume development of a hardware/software system having project characteristics similar to those used in the development of a large, complex spacecraft, a military aircraft, or a small communications satellite. The results show the degree to which costs escalate, as errors are discovered and fixed at later and later phases in the project life cycle. If the cost of fixing a requirements error discovered during the requirements phase is defined to be 1 unit, the cost to fix that error if found during the design phase increases to 3 - 8 units; at the manufacturing/build phase, the cost to fix the error is 7 - 16 units; at the integration and test phase, the cost to fix the error becomes 21 - 78 units; and at the operations phase, the cost to fix the requirements error ranged from 29 units to more than 1500 units

  16. A Theoretically Consistent Method for Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features

    DEFF Research Database (Denmark)

    Jensen, Jesper; Tan, Zheng-Hua

    2014-01-01

    We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features for noise robust automatic speech recognition (ASR). The method is based on a minimum number of well-established statistical assumptions; no assumptions are made which are inconsistent with others....... The strength of the proposed method is that it allows MMSE estimation of mel-frequency cepstral coefficients (MFCC's), cepstral mean-subtracted MFCC's (CMS-MFCC's), velocity, and acceleration coefficients. Furthermore, the method is easily modified to take into account other compressive non-linearities than...... the logarithmic which is usually used for MFCC computation. The proposed method shows estimation performance which is identical to or better than state-of-the-art methods. It further shows comparable ASR performance, where the advantage of being able to use mel-frequency speech features based on a power non...

  17. Error Correction for Non-Abelian Topological Quantum Computation

    Directory of Open Access Journals (Sweden)

    James R. Wootton

    2014-03-01

    Full Text Available The possibility of quantum computation using non-Abelian anyons has been considered for over a decade. However, the question of how to obtain and process information about what errors have occurred in order to negate their effects has not yet been considered. This is in stark contrast with quantum computation proposals for Abelian anyons, for which decoding algorithms have been tailor-made for many topological error-correcting codes and error models. Here, we address this issue by considering the properties of non-Abelian error correction, in general. We also choose a specific anyon model and error model to probe the problem in more detail. The anyon model is the charge submodel of D(S_3). This shares many properties with important models such as the Fibonacci anyons, making our method more generally applicable. The error model is a straightforward generalization of those used in the case of Abelian anyons for initial benchmarking of error correction methods. It is found that error correction is possible under a threshold value of 7% for the total probability of an error on each physical spin. This is remarkably comparable with the thresholds for Abelian models.

  18. The Effect of Error in Item Parameter Estimates on the Test Response Function Method of Linking.

    Science.gov (United States)

    Kaskowitz, Gary S.; De Ayala, R. J.

    2001-01-01

    Studied the effect of item parameter estimation for computation of linking coefficients for the test response function (TRF) linking/equating method. Simulation results showed that linking was more accurate when there was less error in the parameter estimates, and that 15 or 25 common items provided better results than 5 common items under both…

  19. Error Analysis of Determining Airplane Location by Global Positioning System

    OpenAIRE

    Hajiyev, Chingiz; Burat, Alper

    1999-01-01

    This paper studies the error analysis of determining airplane location by the global positioning system (GPS) using a statistical testing method. The Newton-Raphson method positions the airplane at the intersection point of four spheres. Absolute errors, relative errors and standard deviation have been calculated. The results show that the positioning error of the airplane varies with the coordinates of the GPS satellites and the airplane.
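
    The record gives no algorithmic details, but the basic idea can be sketched as a Newton-Raphson iteration over the receiver coordinates and clock bias, with one pseudorange equation per satellite. All coordinates, parameter names and the initial guess below are illustrative, not taken from the paper.

        import numpy as np

        def newton_raphson_gps(sat_pos, pseudoranges, x0, iters=10):
            """Estimate receiver position (x, y, z) and clock bias b from four
            satellite positions and pseudoranges via Newton-Raphson iterations."""
            state = np.array(x0, dtype=float)
            for _ in range(iters):
                pos, bias = state[:3], state[3]
                diffs = pos - sat_pos                  # shape (4, 3)
                dists = np.linalg.norm(diffs, axis=1)  # geometric ranges
                residuals = dists + bias - pseudoranges
                # Jacobian rows: unit line-of-sight vector plus 1 for the clock bias
                J = np.hstack([diffs / dists[:, None], np.ones((len(dists), 1))])
                state -= np.linalg.solve(J, residuals)
            return state

        # Illustrative satellite coordinates (km) and pseudoranges consistent with a known truth
        sats = np.array([[15600.0, 7540.0, 20140.0],
                         [18760.0, 2750.0, 18610.0],
                         [17610.0, 14630.0, 13480.0],
                         [19170.0, 610.0, 18390.0]])
        truth = np.array([-41.77, -16.79, 6370.06, 1.0])
        rho = np.linalg.norm(truth[:3] - sats, axis=1) + truth[3]
        print(newton_raphson_gps(sats, rho, x0=[0.0, 0.0, 6300.0, 0.0]))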

  20. Identifying systematic DFT errors in catalytic reactions

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

    Using CO2 reduction reactions as examples, we present a widely applicable method for identifying the main source of errors in density functional theory (DFT) calculations. The method has broad applications for error correction in DFT calculations in general, as it relies on the dependence... of the applied exchange–correlation functional on the reaction energies rather than on errors versus the experimental data. As a result, improved energy corrections can now be determined for both gas phase and adsorbed reaction species, particularly interesting within heterogeneous catalysis. We show... that for the CO2 reduction reactions, the main source of error is associated with the C=O bonds and not the typically energy corrected OCO backbone...

  1. Development of a new cause classification method considering plant ageing and human errors for adverse events which occurred in nuclear power plants and some results of its application

    International Nuclear Information System (INIS)

    Miyazaki, Takamasa

    2007-01-01

    The adverse events which occurred in nuclear power plants are analyzed to prevent similar events, and in the analysis of each event, the cause of the event is classified by a cause classification method. This paper shows a new cause classification method which is improved in several points as follows: (1) all causes are systematically classified into three major categories, namely machine system, operation system and plant outside causes, (2) the causes of the operation system are classified into several management errors normally performed in a nuclear power plant, (3) the content of ageing is defined in detail for further analysis, (4) human errors are divided and defined by the error stage, (5) human errors can be related to background factors, and so on. This new method is applied to the adverse events which occurred in domestic and overseas nuclear power plants in 2005. From these results, it is clarified that operation system errors account for about 60% of the whole causes, of which approximately 60% are maintenance errors and about 40% are worker's human errors, and that the prevention of maintenance errors, especially worker's human errors, is crucial. (author)

  2. Numerical method for multigroup one-dimensional SN eigenvalue problems with no spatial truncation error

    International Nuclear Information System (INIS)

    Abreu, M.P.; Filho, H.A.; Barros, R.C.

    1993-01-01

    The authors describe a new nodal method for multigroup slab-geometry discrete ordinates SN eigenvalue problems that is completely free from all spatial truncation errors. The unknowns in the method are the node-edge angular fluxes, the node-average angular fluxes, and the effective multiplication factor k_eff. The numerical values obtained for these quantities are exactly those of the dominant analytic solution of the SN eigenvalue problem apart from finite arithmetic considerations. This method is based on the use of the standard balance equation and two nonstandard auxiliary equations. In the nonmultiplying regions, e.g., the reflector, we use the multigroup spectral Green's function (SGF) auxiliary equations. In the fuel regions, we use the multigroup spectral diamond (SD) auxiliary equations. The SD auxiliary equation is an extension of the conventional auxiliary equation used in the diamond difference (DD) method. This hybrid characteristic of the SD-SGF method improves both the numerical stability and the convergence rate

  3. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences in Hamadan, Iran, were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), in the 40-50 year age group (67.6%), among less-experienced personnel (58.7%), at the educational level of MSc (87.5%), and among staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component for patient safety enhancement.

  4. Computational method for the astral servey and the effect of measurement errors on the closed orbit distortion

    International Nuclear Information System (INIS)

    Kamiya, Yukihide.

    1980-05-01

    A computational method has been developed for the astral survey procedure of the primary monuments, which consists of measurements of short chords and perpendicular distances. This method can be applied to any astral polygon with lengths of chords and vertical angles different from each other. We will study the propagation of measurement errors for the KEK-PF storage ring, and also examine its effect on the closed orbit distortion. (author)

  5. Error Covariance Estimation of Mesoscale Data Assimilation

    National Research Council Canada - National Science Library

    Xu, Qin

    2005-01-01

    The goal of this project is to explore and develop new methods of error covariance estimation that will provide necessary statistical descriptions of prediction and observation errors for mesoscale data assimilation...

  6. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China).

    Science.gov (United States)

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu

    2017-05-25

    Gravity data gaps in mountainous areas are nowadays often filled in with the data from airborne gravity surveys. Because of the errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by the airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the errors caused by the downward continuation. In order to improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving the external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau; i.e., without the use of the external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using the synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with the synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The
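
    The semi-parametric scheme itself is not reproduced here, but the role of regularization in an ill-posed downward continuation can be illustrated with a generic Tikhonov-regularized least-squares sketch; the operator, noise level and regularization parameters below are purely illustrative.

        import numpy as np

        def tikhonov_solve(A, b, lam):
            """Solve min ||A x - b||^2 + lam * ||x||^2; the penalty damps the noise
            amplification typical of ill-posed downward-continuation problems."""
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

        rng = np.random.default_rng(0)
        A = np.vander(np.linspace(0.0, 1.0, 60), 12, increasing=True)  # ill-conditioned toy operator
        x_true = rng.standard_normal(12)
        b = A @ x_true + 0.01 * rng.standard_normal(60)               # noisy "flight-level" data

        for lam in (1e-8, 1e-5, 1e-2):
            x_hat = tikhonov_solve(A, b, lam)
            print(f"lambda={lam:g}  parameter error={np.linalg.norm(x_hat - x_true):.3f}")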

  7. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China)

    Science.gov (United States)

    Zhao, Q.

    2017-12-01

    Gravity data gaps in mountainous areas are nowadays often filled in with the data from airborne gravity surveys. Because of the errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by the airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the errors caused by the downward continuation. In order to improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving the external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau; i.e., without the use of the external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using the synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with the synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The

  8. Random error in cardiovascular meta-analyses

    DEFF Research Database (Denmark)

    Albalawi, Zaina; McAlister, Finlay A; Thorlund, Kristian

    2013-01-01

    BACKGROUND: Cochrane reviews are viewed as the gold standard in meta-analyses given their efforts to identify and limit systematic error which could cause spurious conclusions. The potential for random error to cause spurious conclusions in meta-analyses is less well appreciated. METHODS: We exam...

  9. Cognitive aspect of diagnostic errors.

    Science.gov (United States)

    Phua, Dong Haur; Tan, Nigel C K

    2013-01-01

    Diagnostic errors can result in tangible harm to patients. Despite our advances in medicine, the mental processes required to make a diagnosis exhibit shortcomings, causing diagnostic errors. Cognitive factors are found to be an important cause of diagnostic errors. With new understanding from psychology and the social sciences, clinical medicine is now beginning to appreciate that our clinical reasoning can take the form of analytical reasoning or heuristics. Different factors like cognitive biases and affective influences can also impel unwary clinicians to make diagnostic errors. Various strategies have been proposed to reduce the effect of cognitive biases and affective influences when clinicians make diagnoses; however, evidence for the efficacy of these methods is still sparse. This paper aims to introduce the reader to the cognitive aspect of diagnostic errors, in the hope that clinicians can use this knowledge to improve diagnostic accuracy and patient outcomes.

  10. Error Analysis in Mathematics. Technical Report #1012

    Science.gov (United States)

    Lai, Cheng-Fei

    2012-01-01

    Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…

  11. The image of the butcher (13th-20th): In search of respectability between corporate pride and blood concealment.

    OpenAIRE

    Leteux , Sylvain

    2015-01-01

    In most images that represent butchers in France since the Middle Ages, the animal’s blood and death are often eclipsed or softened, except for the realistic photographs of slaughterhouses in the 20th century. The will to conceal blood shows the butchers’ will to build an honourable image of themselves. This quest for respectability is obvious if you look at the ceremony clothes worn by butchers during civil and religious celebrations. In the 19th century, as the trade...

  12. Manufacturing error sensitivity analysis and optimal design method of cable-network antenna structures

    Science.gov (United States)

    Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye

    2016-03-01

    Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and even result in failure of the structure in service. In this paper, an analytical sensitivity analysis method of the shape precision and cable tensions with respect to the parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is examined by comparison with those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that especially slender front net cables, thick tension ties, relatively slender boundary cables and a high tension level can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on the antenna performance.
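
    The abstract mentions that the analytical sensitivities are checked against finite-difference values; a minimal central-difference sensitivity sketch, with a toy stand-in for the shape-precision measure (not the authors' structural model), might look like this.

        import numpy as np

        def fd_sensitivity(f, p, eps=1e-6):
            """Central finite-difference sensitivities of a scalar performance measure
            f (e.g., a shape-error norm) with respect to uncertain parameters p."""
            p = np.asarray(p, dtype=float)
            grad = np.zeros_like(p)
            for i in range(p.size):
                dp = np.zeros_like(p)
                dp[i] = eps
                grad[i] = (f(p + dp) - f(p - dp)) / (2 * eps)
            return grad

        # Toy stand-in for a shape-precision measure of a cable net (illustrative only)
        def shape_error(p):
            cable_lengths, pretension = p[:3], p[3]
            return np.sum((cable_lengths - 1.0) ** 2) / pretension

        p0 = np.array([1.01, 0.99, 1.02, 2.0])
        print(fd_sensitivity(shape_error, p0))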

  13. Analysis of S-box in Image Encryption Using Root Mean Square Error Method

    Science.gov (United States)

    Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan

    2012-07-01

    The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component in creating confusion and randomness. The S-box is evolving and many variants appear in literature, which include advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties which distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results which do not provide clear evidence in distinguishing an S-box for an application to a particular set of data. In image encryption applications, the use of S-boxes needs special care because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to existing algebraic and statistical analysis already used for image encryption applications, we propose an application of root mean square error technique, which further elaborates the results and enables the analyst to vividly distinguish between the performances of various S-boxes. While the use of the root mean square error analysis in statistics has proven to be effective in determining the difference in original data and the processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of the root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of S-boxes
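
    As a rough illustration of the metric itself, the RMSE between an original and a processed (e.g., S-box encrypted) grayscale image can be computed as below; the random arrays merely stand in for real image data.

        import numpy as np

        def rmse(original, processed):
            """Root mean square error between two equally sized grayscale images."""
            a = np.asarray(original, dtype=float)
            b = np.asarray(processed, dtype=float)
            return np.sqrt(np.mean((a - b) ** 2))

        rng = np.random.default_rng(1)
        plain = rng.integers(0, 256, size=(64, 64))
        cipher = rng.integers(0, 256, size=(64, 64))   # stand-in for an S-box encrypted image
        print(f"RMSE plain vs. cipher: {rmse(plain, cipher):.2f}")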

  14. Pediatric Nurses' Perceptions of Medication Safety and Medication Error: A Mixed Methods Study.

    Science.gov (United States)

    Alomari, Albara; Wilson, Val; Solman, Annette; Bajorek, Beata; Tinsley, Patricia

    2017-05-30

    This study aims to outline the current workplace culture of medication practice in a pediatric medical ward. The objective is to explore the perceptions of nurses in a pediatric clinical setting as to why medication administration errors occur. As nurses have a central role in the medication process, it is essential to explore nurses' perceptions of the factors influencing the medication process. Without this understanding, it is difficult to develop effective prevention strategies aimed at reducing medication administration errors. Previous studies were limited to exploring a single and specific aspect of medication safety. The methods used in these studies were limited to survey designs which may lead to incomplete or inadequate information being provided. This study is phase 1 on an action research project. Data collection included a direct observation of nurses during medication preparation and administration, audit based on the medication policy, and guidelines and focus groups with nursing staff. A thematic analysis was undertaken by each author independently to analyze the observation notes and focus group transcripts. Simple descriptive statistics were used to analyze the audit data. The study was conducted in a specialized pediatric medical ward. Four key themes were identified from the combined quantitative and qualitative data: (1) understanding medication errors, (2) the busy-ness of nurses, (3) the physical environment, and (4) compliance with medication policy and practice guidelines. Workload, frequent interruptions to process, poor physical environment design, lack of preparation space, and impractical medication policies are identified as barriers to safe medication practice. Overcoming these barriers requires organizations to review medication process policies and engage nurses more in medication safety research and in designing clinical guidelines for their own practice.

  15. A channel-by-channel method of reducing the errors associated with peak area integration

    International Nuclear Information System (INIS)

    Luedeke, T.P.; Tripard, G.E.

    1996-01-01

    A new method of reducing the errors associated with peak area integration has been developed. This method utilizes the signal content of each channel as an estimate of the overall peak area. These individual estimates can then be weighted according to the precision with which each estimate is known, producing an overall area estimate. Experimental measurements were performed on a small peak sitting on a large background, and the results compared to those obtained from a commercial software program. Results showed a marked decrease in the spread of results around the true value (obtained by counting for a long period of time), and a reduction in the statistical uncertainty associated with the peak area. (orig.)
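
    The abstract does not spell out the weighting scheme; a standard inverse-variance combination of per-channel area estimates, which matches the description of weighting each estimate by the precision with which it is known, could be sketched as follows (all numbers illustrative).

        import numpy as np

        def weighted_peak_area(channel_estimates, channel_variances):
            """Combine per-channel estimates of the total peak area, weighting each by
            the inverse of its variance; return the pooled area and its uncertainty."""
            est = np.asarray(channel_estimates, dtype=float)
            var = np.asarray(channel_variances, dtype=float)
            w = 1.0 / var
            area = np.sum(w * est) / np.sum(w)
            sigma = np.sqrt(1.0 / np.sum(w))
            return area, sigma

        # Illustrative per-channel extrapolations of the same peak area
        areas = [1020.0, 980.0, 1005.0, 950.0]
        variances = [400.0, 900.0, 250.0, 2500.0]
        print(weighted_peak_area(areas, variances))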

  16. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    International Nuclear Information System (INIS)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-01-01

    A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well
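
    A minimal sketch of the regression step, using a random forest as one of the techniques named in the abstract; the "error indicator" features and training data below are invented for illustration and do not reproduce the authors' feature construction.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(2)

        # Synthetic training set: cheap "error indicators" produced by a surrogate model
        # and the corresponding surrogate error in a quantity of interest (QoI), obtained
        # offline by also running the high-fidelity model on the training parameter instances.
        n_samples, n_features = 500, 6
        indicators = rng.standard_normal((n_samples, n_features))
        qoi_error = (0.5 * indicators[:, 0] ** 2
                     + 0.1 * indicators[:, 1]
                     + 0.05 * rng.standard_normal(n_samples))

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(indicators, qoi_error)

        # At prediction time only the cheap indicators are available; the learned model
        # supplies an error estimate that can correct the surrogate QoI prediction.
        new_indicators = rng.standard_normal((3, n_features))
        print(model.predict(new_indicators))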

  17. Quantitative developments in the cognitive reliability and error analysis method (CREAM) for the assessment of human performance

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Librizzi, Massimo

    2006-01-01

    The current 'second generation' approaches in human reliability analysis focus their attention on the contextual conditions under which a given action is performed rather than on the notion of inherent human error probabilities, as was done in the earlier 'first generation' techniques. Among the 'second generation' methods, this paper considers the Cognitive Reliability and Error Analysis Method (CREAM) and proposes some developments with respect to a systematic procedure for computing probabilities of action failure. The starting point for the quantification is a previously introduced fuzzy version of the CREAM paradigm which is here further extended to include uncertainty on the qualification of the conditions under which the action is performed and to account for the fact that the effects of the common performance conditions (CPCs) on performance reliability may not all be equal. By the proposed approach, the probability of action failure is estimated by rating the performance conditions in terms of their effect on the action

  18. An IMU-Aided Body-Shadowing Error Compensation Method for Indoor Bluetooth Positioning.

    Science.gov (United States)

    Deng, Zhongliang; Fu, Xiao; Wang, Hanhua

    2018-01-20

    Research on indoor positioning technologies has recently become a hotspot because of the huge social and economic potential of indoor location-based services (ILBS). Wireless positioning signals have a considerable attenuation in received signal strength (RSS) when transmitting through human bodies, which would cause significant ranging and positioning errors in RSS-based systems. This paper mainly focuses on the body-shadowing impairment of RSS-based ranging and positioning, and derives a mathematical expression of the relation between the body-shadowing effect and the positioning error. In addition, an inertial measurement unit-aided (IMU-aided) body-shadowing detection strategy is designed, and an error compensation model is established to mitigate the effect of body-shadowing. A Bluetooth positioning algorithm with body-shadowing error compensation (BP-BEC) is then proposed to improve both the positioning accuracy and the robustness in indoor body-shadowing environments. Experiments are conducted in two indoor test beds, and the performance of both the BP-BEC algorithm and the algorithms without body-shadowing error compensation (named no-BEC) is evaluated. The results show that the BP-BEC outperforms the no-BEC by about 60.1% and 73.6% in terms of positioning accuracy and robustness, respectively. Moreover, the execution time of the BP-BEC algorithm is also evaluated, and results show that the convergence speed of the proposed algorithm has an insignificant effect on real-time localization.

  19. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    Science.gov (United States)

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutants setting.
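
    A minimal sketch of regression calibration, one of the reviewed correction methods, assuming a validation subsample in which both the error-prone modeled exposure and a reference measurement are available; all variable names and numbers here are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)

        # Validation subsample: reference exposure (e.g., personal monitor) and the
        # error-prone model-derived exposure for the same subjects.
        n_val = 200
        true_val = rng.normal(20.0, 5.0, n_val)                 # e.g., NO2 in ug/m3
        modeled_val = true_val + rng.normal(0.0, 3.0, n_val)    # error-prone model output

        # Stage 1: regress the reference exposure on the error-prone exposure.
        slope, intercept = np.polyfit(modeled_val, true_val, 1)

        # Stage 2: replace the main-study exposures with their calibrated expectations
        # before fitting the health model, reducing attenuation of the effect estimate.
        modeled_main = rng.normal(20.0, 6.0, 1000)
        calibrated_main = intercept + slope * modeled_main
        print(slope, intercept, calibrated_main[:5])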

  20. Errors and discrepancies in the administration of intravenous infusions: a mixed methods multihospital observational study

    OpenAIRE

    Lyons, I.; Furniss, D.; Blandford, A.; Chumbley, G.; Iacovides, I.; Wei, L.; Cox, A.; Mayer, A.; Vos, J.; Galal-Edeen, G. H.; Schnock, K. O.; Dykes, P. C.; Bates, D. W.; Franklin, B. D.

    2018-01-01

    INTRODUCTION: Intravenous medication administration has traditionally been regarded as error prone, with high potential for harm. A recent US multisite study revealed few potentially harmful errors despite a high overall error rate. However, there is limited evidence about infusion practices in England and how they relate to prevalence and types of error. OBJECTIVES: To determine the prevalence, types and severity of errors and discrepancies in infusion administration in English hospitals, an...

  1. MEDICAL ERROR: CIVIL AND LEGAL ASPECT.

    Science.gov (United States)

    Buletsa, S; Drozd, O; Yunin, O; Mohilevskyi, L

    2018-03-01

    The article focuses on the notion of medical error and considers its medical and legal aspects. The need for legislative consolidation of the notion of «medical error», and for criteria for its legal assessment, is substantiated. In writing the article, we used the empirical method together with general scientific and comparative legal methods. The concept of medical error in its civil and legal aspects was compared from the perspectives of Ukrainian, European and American scholars. It is noted that the problem of medical errors has been known since ancient times and exists worldwide; regardless of the level of development of medicine, there is no country where doctors never make errors. According to statistics, medical errors are among the top five causes of death worldwide. At the same time, the provision of medical services concerns practically everyone. Since human life and health are recognized in Ukraine as the highest social values, medical services must be of high quality and effective. The provision of poor-quality medical services harms people's health and sometimes their lives; it may result in injury or even death. The right to health protection is one of the fundamental human rights guaranteed by the Constitution of Ukraine; therefore, the issue of medical errors and liability for them is extremely relevant. The authors conclude that the notion of «medical error» must be given legal consolidation and that the legal assessment of medical errors must be based on uniform principles enshrined in legislation and confirmed by judicial practice.

  2. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Science.gov (United States)

    Spüler, Martin; Niethammer, Christian

    2015-01-01

    When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG. PMID:25859204

  3. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Directory of Open Access Journals (Sweden)

    Martin eSpüler

    2015-03-01

    Full Text Available When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG.

  4. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Measuring worst-case errors in a robot workcell

    International Nuclear Information System (INIS)

    Simon, R.W.; Brost, R.C.; Kholwadwala, D.K.

    1997-10-01

    Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors

  6. GILBERT'S SYNDROME - A CONCEALED ADVERSITY FOR PHYSICIANS AND SURGEONS.

    Science.gov (United States)

    Rasool, Ahsan; Sabir, Sabir; Ashlaq, Muhammad; Farooq, Umer; Khan, Muhammad Zatmar; Khan, Faisal Yousaf

    2015-01-01

    Gilbert's syndrome (often abbreviated as GS) is the most common hereditary cause of mild unconjugated (indirect) hyperbilirubinemia. Various studies have been published depicting the clinical and pharmacological effects of Gilbert's syndrome (GS). However, GS as a sign of precaution for physicians and surgeons has not been clearly established. A systematic study of the available literature was done. The key words Gilbert's syndrome, hyperbilirubinemia, and clinical and pharmacological aspects of GS were searched using PubMed as the search engine. Considering studies done in the last 40 years, 375 articles were obtained and their abstracts were studied. The criterion for selecting articles for thorough study was their close relevance to the topic. Thus 40 articles and 2 case reports were thoroughly studied. It was concluded that Gilbert's syndrome has immense clinical importance because the mild hyperbilirubinemia can be mistaken for a sign of occult, chronic, or progressive liver disease. GS is associated with a lack of detoxification of a few drugs. It is related to spherocytosis, cholelithiasis, haemolytic anaemia, intra-operative toxicity, irinotecan toxicity, schizophrenia and problems in morphine metabolism. It also has a profound phenotypic effect. The bilirubin level can rise abnormally high under various conditions in a person having Gilbert's syndrome. This can mislead physicians and surgeons towards a false diagnosis. Therefore, a proper diagnosis of GS should be ascertained in order to avoid the concealed adversities of this syndrome.

  7. Pressurized water reactor monitoring. Study of detection, diagnostic and estimation methods (least error squares and filtering)

    International Nuclear Information System (INIS)

    Gillet, M.

    1986-07-01

    This thesis presents a study for the surveillance of the "primary coolant circuit inventory monitoring" of a pressurized water reactor. A reference model is developed in view of an automatic system ensuring detection and diagnostic in real time. The methods used for the present application are statistical tests and a method related to pattern recognition. The estimation of failures detected, difficult owing to the non-linearity of the problem, is treated by the least error squares method of the predictor or corrector type, and by filtering. It is in this frame that a new optimized method with superlinear convergence is developed, and that a segmented linearization of the model is introduced, in view of multiple filtering. [fr]

  8. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients’ size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that promote fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  9. Error analysis of isotope dilution mass spectrometry method with internal standard

    International Nuclear Information System (INIS)

    Rizhinskii, M.W.; Vitinskii, M.Y.

    1989-02-01

    The computation algorithms of the normalized isotopic ratios and element concentration by isotope dilution mass spectrometry with internal standard are presented. A procedure based on the Monte-Carlo calculation is proposed for predicting the magnitude of the errors to be expected. The estimation of systematic and random errors is carried out in the case of the certification of uranium and plutonium reference materials as well as for the use of those reference materials in the analysis of irradiated nuclear fuels. 4 refs, 11 figs, 2 tabs
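
    The exact isotope dilution equations are not given in the record; the Monte-Carlo idea can nonetheless be illustrated with a generic stand-in concentration formula whose inputs carry stated relative uncertainties. The formula, parameter names and values below are all illustrative assumptions, not the authors' algorithm.

        import numpy as np

        rng = np.random.default_rng(4)

        def concentration(ratio, spike_conc, rel_sens):
            """Illustrative stand-in: concentration built from a measured isotope ratio,
            a spike concentration and a relative sensitivity factor."""
            return spike_conc * rel_sens * ratio

        # Nominal values and relative standard uncertainties (all illustrative)
        nominal = dict(ratio=0.85, spike_conc=10.0, rel_sens=1.02)
        rel_unc = dict(ratio=0.005, spike_conc=0.002, rel_sens=0.003)

        n = 100_000
        samples = {k: rng.normal(v, v * rel_unc[k], n) for k, v in nominal.items()}
        c = concentration(samples["ratio"], samples["spike_conc"], samples["rel_sens"])
        print(f"mean = {c.mean():.4f}, relative std = {c.std() / c.mean():.4%}")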

  10. A concealed observational study of infection control and safe injection practices in Jordanian governmental hospitals.

    Science.gov (United States)

    Al-Rawajfah, Omar M; Tubaishat, Ahmad

    2017-10-01

    The recognized international organizations on infection prevention recommend using an observational method as the gold standard procedure for assessing health care professional's compliance with standard infection control practices. However, observational studies are rarely used in Jordanian infection control studies. This study aimed to evaluate injection practices among nurses working in Jordanian governmental hospitals. A cross-sectional concealed observational design is used for this study. A convenience sampling technique was used to recruit a sample of nurses working in governmental hospitals in Jordan. Participants were unaware of the time and observer during the observation episode. A total of 384 nurses from 9 different hospitals participated in the study. A total of 835 injections events were observed, of which 73.9% were performed without handwashing, 64.5% without gloving, and 27.5% were followed by needle recapping. Handwashing rate was the lowest (18.9%) when injections were performed by beginner nurses. Subcutaneous injections were associated with the lowest rate (26.7%) of postinjection handwashing compared with other routes. This study demonstrates the need for focused and effective infection control educational programs in Jordanian hospitals. Future studies should consider exploring the whole infection control practices related to waste disposal and the roles of the infection control nurse in this process in Jordanian hospitals. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  11. Strategies to exclude subjects who conceal and fabricate information when enrolling in clinical trials.

    Science.gov (United States)

    Devine, Eric G; Peebles, Kristina R; Martini, Valeria

    2017-03-01

    Clinical trials within the US face an increasing challenge with the recruitment of quality candidates. One readily available group of subjects that have high rates of participation in clinical research are subjects who enroll in multiple trials for the purpose of generating income through study payments. Aside from issues of safety and generalizability, evidence suggests that these subjects employ methods of deception to qualify for the strict entrance criteria of some studies, including concealing information and fabricating information. Including these subjects in research poses a significant risk to the integrity of data quality and study designs. Strategies to limit enrollment of subjects whose motivation is generating income have not been systematically addressed in the literature. The present paper is intended to provide investigators with a range of strategies for developing and implementing a study protocol with protections to minimize the enrollment of subjects whose primary motivation for enrolling is to generate income. This multifaceted approach includes recommendations for advertising strategies, payment strategies, telephone screening strategies, and baseline screening strategies. The approach also includes recommendations for attending to inconsistent study data and subject motivation. Implementing these strategies may be more or less important depending upon the vulnerability of the study design to subject deception. Although these strategies may help researchers exclude subjects with a higher rate of deceptive practices, widespread adoption of subject registries would go a long way to decrease the chances of subjects enrolling in multiple studies or more than once in the same study.

  12. Diagnosis of Cognitive Errors by Statistical Pattern Recognition Methods.

    Science.gov (United States)

    Tatsuoka, Kikumi K.; Tatsuoka, Maurice M.

    The rule space model permits measurement of cognitive skill acquisition, diagnosis of cognitive errors, and detection of the strengths and weaknesses of knowledge possessed by individuals. Two ways to classify an individual into his or her most plausible latent state of knowledge include: (1) hypothesis testing--Bayes' decision rules for minimum…

  13. An IMU-Aided Body-Shadowing Error Compensation Method for Indoor Bluetooth Positioning

    Directory of Open Access Journals (Sweden)

    Zhongliang Deng

    2018-01-01

    Full Text Available Research on indoor positioning technologies has recently become a hotspot because of the huge social and economic potential of indoor location-based services (ILBS). Wireless positioning signals have a considerable attenuation in received signal strength (RSS) when transmitting through human bodies, which would cause significant ranging and positioning errors in RSS-based systems. This paper mainly focuses on the body-shadowing impairment of RSS-based ranging and positioning, and derives a mathematical expression of the relation between the body-shadowing effect and the positioning error. In addition, an inertial measurement unit-aided (IMU-aided) body-shadowing detection strategy is designed, and an error compensation model is established to mitigate the effect of body-shadowing. A Bluetooth positioning algorithm with body-shadowing error compensation (BP-BEC) is then proposed to improve both the positioning accuracy and the robustness in indoor body-shadowing environments. Experiments are conducted in two indoor test beds, and the performance of both the BP-BEC algorithm and the algorithms without body-shadowing error compensation (named no-BEC) is evaluated. The results show that the BP-BEC outperforms the no-BEC by about 60.1% and 73.6% in terms of positioning accuracy and robustness, respectively. Moreover, the execution time of the BP-BEC algorithm is also evaluated, and results show that the convergence speed of the proposed algorithm has an insignificant effect on real-time localization.

  14. Open quantum systems and error correction

    Science.gov (United States)

    Shabani Barzegar, Alireza

    Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems which are designed for this purpose suffer from harmful interactions with their surrounding environment and from inaccuracy in control forces. Engineering different methods to combat errors in quantum devices is highly demanding. In this thesis, I focus on realistic formulations of quantum error correction methods. A realistic formulation is one that incorporates experimental challenges. This thesis is presented in two sections: open quantum systems and quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory. It is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The section on quantum error correction is presented in chapters 4, 5, 6 and 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In Chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC

  15. SU-D-BRD-07: Evaluation of the Effectiveness of Statistical Process Control Methods to Detect Systematic Errors For Routine Electron Energy Verification

    International Nuclear Information System (INIS)

    Parker, S

    2015-01-01

    Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors
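
    A minimal sketch of one common way to derive process control limits, an individuals chart based on the average moving range, applied to baseline energy-constancy measurements; the data and the ±2% specification limit below are illustrative assumptions, not the study's values.

        import numpy as np

        def individuals_control_limits(baseline):
            """Control limits for an individuals (X) chart using the average moving range,
            a common SPC construction (not necessarily the one used in the abstract)."""
            x = np.asarray(baseline, dtype=float)
            mr_bar = np.mean(np.abs(np.diff(x)))
            center = x.mean()
            half_width = 2.66 * mr_bar      # 3-sigma equivalent for individuals charts
            return center - half_width, center, center + half_width

        # Baseline energy-constancy measurements (percent deviation, illustrative)
        baseline = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 0.0, -0.3, 0.1, 0.0]
        lcl, center, ucl = individuals_control_limits(baseline)
        spec_limit = 2.0                    # e.g., an assumed +/-2% specification limit
        new_point = 0.8
        print(f"LCL={lcl:.2f} UCL={ucl:.2f}; "
              f"out of control: {not (lcl <= new_point <= ucl)}; "
              f"out of spec: {abs(new_point) > spec_limit}")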

  16. Parts of the Whole: Error Estimation for Science Students

    Directory of Open Access Journals (Sweden)

    Dorothy Wallace

    2017-01-01

    Full Text Available It is important for science students to understand not only how to estimate error sizes in measurement data, but also to see how these errors contribute to errors in conclusions they may make about the data. Relatively small errors in measurement, errors in assumptions, and roundoff errors in computation may result in large error bounds on computed quantities of interest. In this column, we look closely at a standard method for measuring the volume of cancer tumor xenografts to see how small errors in each of these three factors may contribute to relatively large observed errors in recorded tumor volumes.
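
    A short worked sketch of the idea, assuming the common xenograft approximation V = length × width² / 2 and first-order error propagation; both the formula and the numbers are assumptions for illustration, not taken from the column.

        import numpy as np

        def tumor_volume(length, width):
            """Commonly used xenograft approximation V = length * width**2 / 2 (assumed here)."""
            return length * width ** 2 / 2.0

        def volume_relative_error(length, width, dl, dw):
            """First-order propagation: dV/V ~= dL/L + 2*dW/W for V = L*W^2/2."""
            return dl / length + 2.0 * dw / width

        L, W = 12.0, 8.0          # caliper readings in mm (illustrative)
        dL = dW = 0.5             # measurement uncertainty in mm (illustrative)
        V = tumor_volume(L, W)
        rel = volume_relative_error(L, W, dL, dW)
        print(f"V = {V:.1f} mm^3, relative error ~= {rel:.1%}, absolute ~= {V * rel:.1f} mm^3")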

  17. Repeat-aware modeling and correction of short read errors.

    Science.gov (United States)

    Yang, Xiao; Aluru, Srinivas; Dorman, Karin S

    2011-02-15

    High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In the case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content, and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under the GNU GPL3 license and the Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id=redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors.
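
    A minimal sketch of the baseline k-mer counting and thresholding step described above (not the repeat-aware statistical model itself); the reads and threshold are illustrative.

        from collections import Counter

        def kmer_counts(reads, k):
            """Count observed k-mer frequencies across a collection of reads."""
            counts = Counter()
            for read in reads:
                for i in range(len(read) - k + 1):
                    counts[read[i:i + k]] += 1
            return counts

        def flag_suspect_kmers(counts, threshold):
            """Simple thresholding: k-mers seen fewer than `threshold` times are treated
            as likely sequencing errors (the paper refines this for repeat-rich genomes)."""
            return {kmer for kmer, c in counts.items() if c < threshold}

        reads = ["ACGTACGTGA", "ACGTACGTGA", "ACGTTCGTGA"]  # third read carries one error
        counts = kmer_counts(reads, k=5)
        print(flag_suspect_kmers(counts, threshold=2))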

  18. Medication Error, What Is the Reason?

    Directory of Open Access Journals (Sweden)

    Ali Banaozar Mohammadi

    2015-09-01

    Full Text Available Background: Medication errors due to different reasons may alter the outcome of all patients, especially patients with drug poisoning. We introduce one of the most common types of medication error in the present article. Case: A 48-year-old woman with suspected organophosphate poisoning died due to a lethal medication error. Unfortunately, these types of errors are not rare and have preventable causes, including a lack of suitable and sufficient training and practice for medical students and some failures in the medical students' educational curriculum. Conclusion: Some important causes are discussed here because their consequences can sometimes be tremendous. We found that most of them are easily preventable. If clinicians are aware of the method of use, complications, dosage and contraindications of drugs, most of these fatal errors can be minimized.

  19. Galilean-invariant preconditioned central-moment lattice Boltzmann method without cubic velocity errors for efficient steady flow simulations

    Science.gov (United States)

    Hajabdollahi, Farzaneh; Premnath, Kannan N.

    2018-05-01

    Lattice Boltzmann (LB) models used for the computation of fluid flows represented by the Navier-Stokes (NS) equations on standard lattices can lead to non-Galilean-invariant (GI) viscous stress involving cubic velocity errors. This arises from the dependence of their third-order diagonal moments on the first-order moments for standard lattices, and strategies have recently been introduced to restore Galilean invariance without such errors using a modified collision operator involving corrections to either the relaxation times or the moment equilibria. Convergence acceleration in the simulation of steady flows can be achieved by solving the preconditioned NS equations, which contain a preconditioning parameter that can be used to tune the effective sound speed, thereby alleviating the numerical stiffness. In the present paper, we present a GI formulation of the preconditioned cascaded central-moment LB method used to solve the preconditioned NS equations, which is free of cubic velocity errors on a standard lattice, for steady flows. A Chapman-Enskog analysis reveals the structure of the spurious non-GI defect terms and it is demonstrated that the anisotropy of the resulting viscous stress is dependent on the preconditioning parameter, in addition to the fluid velocity. It is shown that partial correction to eliminate the cubic velocity defects is achieved by scaling the cubic velocity terms in the off-diagonal third-order moment equilibria with the square of the preconditioning parameter. Furthermore, we develop additional corrections based on the extended moment equilibria involving gradient terms with coefficients dependent locally on the fluid velocity and the preconditioning parameter. Such parameter-dependent corrections eliminate the remaining truncation errors arising from the degeneracy of the diagonal third-order moments and fully restore Galilean invariance without cubic defects for the preconditioned LB scheme on a standard lattice. Several
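
    As a purely schematic reading of the partial correction described above (the D2Q9 setting, the symbols, and the exact placement of the preconditioning parameter γ are assumptions, not taken from the paper): with lattice sound speed c_s and density ρ, one possible reading is that the cubic velocity terms carried by the off-diagonal third-order moment equilibria appear divided by γ²,

        \hat{\kappa}^{eq}_{xxy} \approx \rho \left( c_s^{2} u_y + \frac{u_x^{2} u_y}{\gamma^{2}} \right), \qquad
        \hat{\kappa}^{eq}_{xyy} \approx \rho \left( c_s^{2} u_x + \frac{u_x u_y^{2}}{\gamma^{2}} \right),

    while the remaining defects from the diagonal third-order moments would be removed by the gradient-based extended equilibria whose coefficients depend on the local velocity and on γ.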

  20. Exact error estimation for solutions of nuclide chain equations

    International Nuclear Information System (INIS)

    Tachihara, Hidekazu; Sekimoto, Hiroshi

    1999-01-01

    The exact solution of nuclide chain equations to an arbitrary number of significant figures is obtained for a linear chain by employing the Bateman method in multiple-precision arithmetic. The exact error estimation of the major calculation methods for a nuclide chain equation is carried out by using this exact solution as a standard. The Bateman, finite difference, Runge-Kutta and matrix exponential methods are investigated. The present study confirms the following. The original Bateman method has very low accuracy in some cases because of large-scale cancellations. The revised Bateman method by Siewers reduces the occurrence of cancellations and thereby shows high accuracy. In the time-stepping methods, such as the finite difference and Runge-Kutta methods, the solutions are mainly affected by truncation errors in the early decay time and afterward by round-off errors. Even though a variable time mesh is employed to suppress the accumulation of round-off errors, it appears to be impractical. Judging from these estimations, the matrix exponential method is the best among all the methods except the Bateman method, whose calculation process for a linear chain is not identical with that for a general one. (author)
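
    For orientation, a minimal double-precision sketch of the Bateman solution for a linear chain that the record uses as its reference (the decay constants and initial amount are illustrative; the exact standard in the paper is computed in multiple-precision arithmetic precisely because of the cancellations this naive form can suffer from):

        # Bateman solution for a linear decay chain N1 -> N2 -> ... -> Nn,
        # starting from N1(0) = n0 and Ni(0) = 0 for i > 1 (double precision only).
        import math

        def bateman(n0, lambdas, t):
            """Amount of the last nuclide in the chain at time t."""
            n = len(lambdas)
            coeff = n0
            for lam in lambdas[:-1]:
                coeff *= lam
            total = 0.0
            for i in range(n):
                denom = 1.0
                for j in range(n):
                    if j != i:
                        denom *= (lambdas[j] - lambdas[i])
                total += math.exp(-lambdas[i] * t) / denom
            return coeff * total

        print(bateman(1.0, [0.5, 0.1, 0.01], t=10.0))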

  1. Ptychographic overlap constraint errors and the limits of their numerical recovery using conjugate gradient descent methods.

    Science.gov (United States)

    Tripathi, Ashish; McNulty, Ian; Shpyrko, Oleg G

    2014-01-27

    Ptychographic coherent x-ray diffractive imaging is a form of scanning microscopy that does not require optics to image a sample. A series of scanned coherent diffraction patterns recorded from multiple overlapping illuminated regions on the sample is inverted numerically to retrieve its image. The technique recovers the phase information lost in detecting the diffraction patterns by using experimentally known constraints, in this case the measured diffraction intensities and the assumed scan positions on the sample. The spatial resolution of the recovered image of the sample is limited by the angular extent over which the diffraction patterns are recorded and how well these constraints are known. Here, we explore how reconstruction quality degrades with uncertainties in the scan positions. We show experimentally that large errors in the assumed scan positions on the sample can be numerically determined and corrected using conjugate gradient descent methods. We also explore in simulations the limits, based on the signal-to-noise ratio of the diffraction patterns and the amount of overlap between adjacent scan positions, of just how large these errors can be and still be rendered tractable by this method.

  2. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data in the past. We find that, using these estimated error rates, the probability of error correction failure can be reduced significantly, by a factor that increases with the code distance.
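
    A minimal sketch of the Gaussian-process idea described above, using scikit-learn (the synthetic drifting error rate, kernel choice, and observation noise are assumptions; the paper's protocol extracts the rates from error-correction data rather than from direct measurements):

        # Gaussian process regression of a time-dependent error rate (toy data).
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)
        t = np.linspace(0, 10, 40).reshape(-1, 1)            # time stamps
        true_rate = 0.01 + 0.005 * np.sin(0.6 * t).ravel()   # slowly drifting error rate
        observed = true_rate + rng.normal(0, 0.001, t.shape[0])  # noisy estimates

        kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=1e-6)
        gp = GaussianProcessRegressor(kernel=kernel).fit(t, observed)

        t_future = np.array([[11.0], [12.0]])
        mean, std = gp.predict(t_future, return_std=True)
        print("predicted rates:", mean, "+/-", std)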

  3. Optimized universal color palette design for error diffusion

    Science.gov (United States)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

    Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
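
    A minimal sketch of error diffusion against a fixed palette (Floyd–Steinberg weights on a grayscale ramp; the four-level palette is an illustrative stand-in, and the SSQ palette design itself is not reproduced):

        # Floyd-Steinberg error diffusion to a fixed palette (grayscale toy example).
        import numpy as np

        def error_diffuse(img, palette):
            img = img.astype(float).copy()
            out = np.zeros_like(img)
            h, w = img.shape
            for y in range(h):
                for x in range(w):
                    old = img[y, x]
                    new = palette[np.argmin(np.abs(palette - old))]  # nearest palette entry
                    out[y, x] = new
                    err = old - new
                    if x + 1 < w:
                        img[y, x + 1] += err * 7 / 16
                    if y + 1 < h:
                        if x > 0:
                            img[y + 1, x - 1] += err * 3 / 16
                        img[y + 1, x] += err * 5 / 16
                        if x + 1 < w:
                            img[y + 1, x + 1] += err * 1 / 16
            return out

        palette = np.array([0.0, 85.0, 170.0, 255.0])
        ramp = np.tile(np.linspace(0.0, 255.0, 64), (16, 1))  # horizontal gray ramp
        print(error_diffuse(ramp, palette)[:2, :8])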

  4. Stand-alone error characterisation of microwave satellite soil moisture using a Fourier method

    Science.gov (United States)

    Error characterisation of satellite-retrieved soil moisture (SM) is crucial for maximizing their utility in research and applications in hydro-meteorology and climatology. Error characteristics can provide insights for retrieval development and validation, and inform suitable strategies for data fus...

  5. [Monitoring medication errors in personalised dispensing using the Sentinel Surveillance System method].

    Science.gov (United States)

    Pérez-Cebrián, M; Font-Noguera, I; Doménech-Moral, L; Bosó-Ribelles, V; Romero-Boyero, P; Poveda-Andrés, J L

    2011-01-01

    To assess the efficacy of a new quality control strategy based on daily randomised sampling and monitoring a Sentinel Surveillance System (SSS) medication cart, in order to identify medication errors and their origin at different levels of the process. Prospective quality control study with one year follow-up. A SSS medication cart was randomly selected once a week and double-checked before dispensing medication. Medication errors were recorded before it was taken to the relevant hospital ward. Information concerning complaints after receiving medication and 24-hour monitoring were also noted. Type and origin error data were assessed by a Unit Dose Quality Control Group, which proposed relevant improvement measures. Thirty-four SSS carts were assessed, including 5130 medication lines and 9952 dispensed doses, corresponding to 753 patients. Ninety erroneous lines (1.8%) and 142 mistaken doses (1.4%) were identified at the Pharmacy Department. The most frequent error was dose duplication (38%) and its main cause inappropriate management and forgetfulness (69%). Fifty medication complaints (6.6% of patients) were mainly due to new treatment at admission (52%), and 41 (0.8% of all medication lines), did not completely match the prescription (0.6% lines) as recorded by the Pharmacy Department. Thirty-seven (4.9% of patients) medication complaints due to changes at admission and 32 matching errors (0.6% medication lines) were recorded. The main cause also was inappropriate management and forgetfulness (24%). The simultaneous recording of incidences due to complaints and new medication coincided in 33.3%. In addition, 433 (4.3%) of dispensed doses were returned to the Pharmacy Department. After the Unit Dose Quality Control Group conducted their feedback analysis, 64 improvement measures for Pharmacy Department nurses, 37 for pharmacists, and 24 for the hospital ward were introduced. The SSS programme has proven to be useful as a quality control strategy to identify Unit

  6. Medication errors: an overview for clinicians.

    Science.gov (United States)

    Wittich, Christopher M; Burkle, Christopher M; Lanier, William L

    2014-08-01

    Medication error is an important cause of patient morbidity and mortality, yet it can be a confusing and underappreciated concept. This article provides a review for practicing physicians that focuses on medication error (1) terminology and definitions, (2) incidence, (3) risk factors, (4) avoidance strategies, and (5) disclosure and legal consequences. A medication error is any error that occurs at any point in the medication use process. It has been estimated by the Institute of Medicine that medication errors cause 1 of 131 outpatient and 1 of 854 inpatient deaths. Medication factors (eg, similar sounding names, low therapeutic index), patient factors (eg, poor renal or hepatic function, impaired cognition, polypharmacy), and health care professional factors (eg, use of abbreviations in prescriptions and other communications, cognitive biases) can precipitate medication errors. Consequences faced by physicians after medication errors can include loss of patient trust, civil actions, criminal charges, and medical board discipline. Methods to prevent medication errors from occurring (eg, use of information technology, better drug labeling, and medication reconciliation) have been used with varying success. When an error is discovered, patients expect disclosure that is timely, given in person, and accompanied with an apology and communication of efforts to prevent future errors. Learning more about medication errors may enhance health care professionals' ability to provide safe care to their patients. Copyright © 2014 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  7. Airborne LIDAR boresight error calibration based on surface coincidence

    International Nuclear Information System (INIS)

    Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Li, Dong; Qi, Zengying; Qiu, Wen; Tan, Junxiang

    2014-01-01

    Light Detection and Ranging (LIDAR) is a system which can directly collect three-dimensional coordinates of ground points and laser reflection strength information. With the wide application of LIDAR systems, users expect increasingly accurate results. Boresight error has an important effect on data accuracy, and eliminating this error is therefore considered very important. In recent years, many methods have been proposed to eliminate the error. Generally, they can be categorized into tie-point methods and surface-matching methods. In this paper, we propose another method, called the try-value method, based on surface coincidence, which is used in actual production by many companies. The method is simple and operable. Further, the efficacy of the method was demonstrated by analyzing data from Zhangye city.

  8. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  9. Electronic error-reporting systems: a case study into the impact on nurse reporting of medical errors.

    Science.gov (United States)

    Lederman, Reeva; Dreyfus, Suelette; Matchan, Jessica; Knott, Jonathan C; Milton, Simon K

    2013-01-01

    Underreporting of errors in hospitals persists despite the claims of technology companies that electronic systems will facilitate reporting. This study builds on previous analyses to examine error reporting by nurses in hospitals using electronic media. This research asks whether electronic media create additional barriers to error reporting and, if so, what practical steps hospitals can take to reduce these barriers. This is a mixed-method case study of nurses' use of an error reporting system, RiskMan, in two hospitals. The case study involved one large private hospital and one large public hospital in Victoria, Australia, both of which use the RiskMan medical error reporting system. Information technology-based error reporting systems have unique access problems and time demands and can encourage nurses to develop alternative reporting mechanisms. This research focuses on nurses and raises important findings for hospitals using such systems or considering installation. This article suggests organizational and technical responses that could reduce some of the identified barriers. Crown Copyright © 2013. Published by Mosby, Inc. All rights reserved.

  10. Analysis of Employee's Survey for Preventing Human-Errors

    International Nuclear Information System (INIS)

    Sung, Chanho; Kim, Younggab; Joung, Sanghoun

    2013-01-01

    Human errors in nuclear power plants can cause events or incidents, large and small. These events or incidents are among the main contributors to reactor trips and may threaten the safety of nuclear plants. To prevent human errors, KHNP (nuclear power plants) introduced 'human-error prevention techniques' and has applied the techniques to main areas such as plant operation, operation support, and maintenance and engineering. This paper proposes methods to prevent and reduce human errors in nuclear power plants by analyzing survey results covering the utilization of the human-error prevention techniques and the employees' awareness of preventing human errors. With regard to human-error prevention, this survey analysis presents the status of the human-error prevention techniques and the employees' awareness of preventing human errors. Employees' understanding and utilization of the techniques were generally high, and the level of employee training and the training effect on actual work were in good condition. Also, employees answered that the root causes of human error lie in the working environment, including tight processes, manpower shortage, and excessive tasks, rather than in personal negligence or lack of personal knowledge. Consideration of the working environment is certainly needed. At the present time, based on this survey analysis, the best methods of preventing human error are personal equipment, substantial training and education, a private mental health check before starting work, prohibition of performing multiple tasks, compliance with procedures, and enhancement of job-site review. However, the most important and basic factors for preventing human error are the interest of workers and an organizational atmosphere that supports communication between managers and workers, and between employees and their supervisors.

  11. Error Mitigation for Short-Depth Quantum Circuits

    Science.gov (United States)

    Temme, Kristan; Bravyi, Sergey; Gambetta, Jay M.

    2017-11-01

    Two schemes are presented that mitigate the effect of errors and decoherence in short-depth quantum circuits. The size of the circuits to which these techniques can be applied is limited by the rate at which the errors in the computation are introduced. Near-term applications of early quantum devices, such as quantum simulations, rely on accurate estimates of expectation values to become relevant. Decoherence and gate errors lead to wrong estimates of the expectation values of observables used to evaluate the noisy circuit. The two schemes we discuss are deliberately simple and do not require additional qubit resources, so as to be as practically relevant in current experiments as possible. The first method, extrapolation to the zero noise limit, subsequently cancels powers of the noise perturbations by an application of Richardson's deferred approach to the limit. The second method cancels errors by resampling randomized circuits according to a quasiprobability distribution.
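
    A minimal sketch of the first scheme on made-up numbers: the same observable is measured with the noise artificially amplified by known factors, and the results are extrapolated back to zero noise (the scale factors, the polynomial noise model, and the values are illustrative assumptions):

        # Zero-noise (Richardson-style) extrapolation on toy data.
        import numpy as np

        scales = np.array([1.0, 2.0, 3.0])        # noise amplification factors
        measured = np.array([0.82, 0.67, 0.55])   # noisy expectation values (made up)

        coeffs = np.polyfit(scales, measured, deg=2)   # assume E(c) ~ E0 + a*c + b*c**2
        print("mitigated estimate:", np.polyval(coeffs, 0.0))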

  12. Students’ Written Production Error Analysis in the EFL Classroom Teaching: A Study of Adult English Learners’ Errors

    Directory of Open Access Journals (Sweden)

    Ranauli Sihombing

    2016-12-01

    Full Text Available Error analysis has become one of the most interesting issues in the study of Second Language Acquisition. It cannot be denied that some teachers do not know much about error analysis and the related theories of how an L1, L2 or foreign language is acquired. In addition, students often feel upset when they find a gap between themselves and their teachers regarding the errors the students make and the teachers’ understanding of error correction. The present research aims to investigate what errors adult English learners make in the written production of English. The significance of the study is to identify the errors students make in writing so that teachers can find solutions to them, leading to better English language teaching and learning, especially in teaching English to adults. The study employed a qualitative method. The research was undertaken at an airline education center in Bandung. The results showed that syntax errors are found more frequently than morphology errors, especially verb phrase errors. It is recommended that teachers know the theory of second language acquisition in order to understand how students learn and produce their language. In addition, it is advantageous for teachers to know what errors students frequently make in their learning, so that they can offer solutions for better English language learning achievement.   DOI: https://doi.org/10.24071/llt.2015.180205

  13. Analytical modeling for thermal errors of motorized spindle unit

    OpenAIRE

    Liu, Teng; Gao, Weiguo; Zhang, Dawei; Zhang, Yifan; Chang, Wenfen; Liang, Cunman; Tian, Yanling

    2017-01-01

    Investigation of modeling methods for spindle thermal errors is significant for spindle thermal optimization in the design phase. To accurately analyze the thermal errors of a motorized spindle unit, this paper assumes approximately that 1) the spindle linear thermal error in the axial direction is ascribed to shaft thermal elongation caused by heat transfer from the bearings, and 2) the spindle linear thermal errors in the radial directions and the angular thermal errors are attributed to thermal variations of bearing relati...

  14. Analysis and Compensation for Gear Accuracy with Setting Error in Form Grinding

    Directory of Open Access Journals (Sweden)

    Chenggang Fang

    2015-01-01

    Full Text Available In the process of form grinding, the gear setting error is the main factor influencing form grinding accuracy; we propose an effective method to improve form grinding accuracy that corrects the error by controlling the machine operations. Based on establishing the geometric model of form grinding and representing the gear setting errors in homogeneous coordinates, a tooth mathematical model was obtained and simplified under the gear setting error. Then, according to the gear standards ISO 1328-1:1997 and ANSI/AGMA 2015-1-A01:2002, the relationship between the gear setting errors and the tooth profile deviation, helix deviation, and cumulative pitch deviation, respectively, was investigated under the conditions of gear eccentricity error, gear inclination error, and gear resultant error. An error compensation method is proposed based on solving the sensitivity coefficient matrix of the setting error in a five-axis CNC form grinding machine; simulation and experimental results demonstrate that the method can effectively correct the gear setting error and further improve the form grinding accuracy.
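
    A minimal sketch of the compensation step on invented numbers (the 3x3 sensitivity matrix, the measured deviations, and the axis layout are assumptions; the paper derives the actual matrix for a specific five-axis machine):

        # Setting-error compensation from a sensitivity coefficient matrix (toy numbers).
        # deviations ~ S @ setting_errors; solve for the errors and command the negative
        # of them as axis corrections.
        import numpy as np

        S = np.array([[1.2, 0.3, 0.0],    # d(profile dev.) / d(eccentricity, inclination, offset)
                      [0.1, 1.5, 0.2],    # d(helix dev.)   / d(...)
                      [0.0, 0.4, 1.1]])   # d(pitch dev.)   / d(...)
        measured_dev = np.array([8.0, 5.0, 3.0])   # micrometres, made up

        setting_err, *_ = np.linalg.lstsq(S, measured_dev, rcond=None)
        print("estimated setting errors:", setting_err)
        print("axis corrections to apply:", -setting_err)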

  15. Methods for Estimation of Radiation Risk in Epidemiological Studies Accounting for Classical and Berkson Errors in Doses

    KAUST Repository

    Kukush, Alexander

    2011-01-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor with pr(Y=1 | D) = R/(1+R), R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t^mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr · V_i^Q (this is a classical measurement error model) and M_i^tr = M_i^mes · V_i^M (this is a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. The simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of Post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and then the binary response was simulated according to the dose-response model.

  16. Methods for estimation of radiation risk in epidemiological studies accounting for classical and Berkson errors in doses.

    Science.gov (United States)

    Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre

    2011-02-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor with pr(Y=1 | D) = R/(1+R), R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t^mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr · V_i^Q (this is a classical measurement error model) and M_i^tr = M_i^mes · V_i^M (this is a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. The simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of Post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and then the binary response was simulated according to the dose-response model.
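
    A minimal simulation sketch of the two error structures in the dose model above (all numerical values, distributions, and error sizes are illustrative assumptions, not those of the study):

        # Classical vs Berkson multiplicative errors in the thyroid dose D = f*Q/M.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 1000
        f = 1.0                                             # normalizing multiplier (error ignored)
        Q_tr = rng.lognormal(mean=3.0, sigma=0.5, size=n)   # true radioiodine content
        M_mes = rng.lognormal(mean=2.5, sigma=0.3, size=n)  # measured thyroid mass

        V_Q = rng.lognormal(0.0, 0.3, n)   # classical: measurement scatters around the truth
        V_M = rng.lognormal(0.0, 0.2, n)   # Berkson: truth scatters around the assigned value

        Q_mes = Q_tr * V_Q                 # Q_mes = Q_tr * V_Q  (classical)
        M_tr = M_mes * V_M                 # M_tr  = M_mes * V_M (Berkson)

        D_mes = f * Q_mes / M_mes          # dose actually used in the risk analysis
        D_tr = f * Q_tr / M_tr             # unobserved true dose
        print("correlation(D_mes, D_tr):", np.corrcoef(D_mes, D_tr)[0, 1])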

  17. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, Miguel; Alessie, Rob; Teulings, Coen

    2010-01-01

    The use of the perpetual inventory method for the construction of education data per country leads to systematic measurement error. This paper analyzes its effect on growth regressions. We suggest a methodology for correcting this error. The standard attenuation bias suggests that using these
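
    A minimal simulated illustration of the attenuation bias the record refers to (the true slope, variances, and sample size are made-up assumptions):

        # Classical measurement error in a regressor attenuates the OLS slope
        # towards zero by the reliability ratio var(x_true) / var(x_observed).
        import numpy as np

        rng = np.random.default_rng(2)
        n = 10_000
        edu_true = rng.normal(10.0, 2.0, n)                 # "true" years of schooling
        growth = 0.5 * edu_true + rng.normal(0.0, 1.0, n)   # outcome with true slope 0.5
        edu_obs = edu_true + rng.normal(0.0, 1.5, n)        # mismeasured schooling

        slope_true = np.polyfit(edu_true, growth, 1)[0]
        slope_obs = np.polyfit(edu_obs, growth, 1)[0]
        reliability = 2.0**2 / (2.0**2 + 1.5**2)
        print(slope_true, slope_obs, 0.5 * reliability)     # observed slope is attenuated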

  18. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, M.; Teulings, C.N.; Alessie, R.

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  19. Measurement error in education and growth regressions

    NARCIS (Netherlands)

    Portela, Miguel; Teulings, Coen; Alessie, R.

    2004-01-01

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  20. Reprogramming to pluripotency can conceal somatic cell chromosomal instability.

    Directory of Open Access Journals (Sweden)

    Masakazu Hamada

    Full Text Available The discovery that somatic cells are reprogrammable to pluripotency by ectopic expression of a small subset of transcription factors has created great potential for the development of broadly applicable stem-cell-based therapies. One of the concerns regarding the safe use of induced pluripotent stem cells (iPSCs) in therapeutic applications is loss of genomic integrity, a hallmark of various human conditions and diseases, including cancer. Structural chromosome defects such as short telomeres and double-strand breaks are known to limit reprogramming of somatic cells into iPSCs, but whether defects that cause whole-chromosome instability (W-CIN) preclude reprogramming is unknown. Here we demonstrate, using aneuploidy-prone mouse embryonic fibroblasts (MEFs) in which chromosome missegregation is driven by BubR1 or RanBP2 insufficiency, that W-CIN is not a barrier to reprogramming. Unexpectedly, the two W-CIN defects had contrasting effects on iPSC genomic integrity, with BubR1 hypomorphic MEFs almost exclusively yielding aneuploid iPSC clones and RanBP2 hypomorphic MEFs karyotypically normal iPSC clones. Moreover, BubR1-insufficient iPSC clones were karyotypically unstable, whereas RanBP2-insufficient iPSC clones were rather stable. These findings suggest that aneuploid cells can be selected for or against during reprogramming depending on the W-CIN gene defect and present the novel concept that somatic cell W-CIN can be concealed in the pluripotent state. Thus, karyotypic analysis of somatic cells of origin in addition to iPSC lines is necessary for safe application of reprogramming technology.